
Least squares regression

Degrees of freedom. The number of degrees of freedom equals the number of rows of data analyzed (Prism skips any rows with missing or excluded values) minus the number of parameters in the model.
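
For example, here is a minimal sketch of that bookkeeping in Python (the data and variable names are invented for illustration):

    import numpy as np

    # Invented data: rows with any missing value are skipped, as Prism does.
    X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, np.nan],
                  [4.0, 0.5], [5.0, 2.0], [6.0, 1.0]])
    y = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])

    complete = ~(np.isnan(X).any(axis=1) | np.isnan(y))
    n_rows = int(complete.sum())   # 5 rows are actually analyzed
    n_params = X.shape[1] + 1      # two slopes plus one intercept = 3

    print(n_rows - n_params)       # degrees of freedom: 5 - 3 = 2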

Multiple R. The coefficient of multiple correlation is the correlation between the actual Y values and the Y values predicted by the model. It is the square root of R2, so its value is always between 0 and 1.

R2. The fraction of all variance in Y that is explained by the multiple regression model. If R2 equals 1.0, then each Y value is predicted perfectly by the model, with no random variability. If R2 equals 0.0, then the regression model does a terrible job of predicting Y values - you'll get equally accurate predictions by simply predicting that each Y value equals the mean of the Y values you measured. With real data, of course, you won't see those extreme R2 values, but instead will see R2 values between 0.0 and 1.0. If you computed the r2 from linear regression on the graph of actual vs. predicted Y values, that r2 (from linear regression) would be the same as R2 from multiple regression.
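
To see that last point concretely, here is a small NumPy sketch (our own illustration, not Prism output) that fits a multiple regression by least squares and confirms that the squared correlation between actual and predicted Y equals R2:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    X = rng.normal(size=(n, 2))
    y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

    # Least squares fit with an intercept column
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ beta

    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot

    multiple_r = np.corrcoef(y, y_hat)[0, 1]       # actual vs. predicted
    print(np.isclose(multiple_r ** 2, r_squared))  # True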

Adjusted R2. Even if the data are all random, you expect R2 to get larger as you add more variables to the equation. Just by chance, the model will predict the data better if it has more components. The adjusted R2 corrects for this by accounting for the number of X variables in the model. Notes:

If you collect random data, you'd expect the adjusted R2 value to be zero on average. If you collected many sets of random data, the adjusted R2 would be negative about half the time and positive about half the time. How is it possible for the adjusted R2 to be negative? If the adjusted R2 were really the square of anything, it would always be positive. But the adjusted R2 is not the square of anything - it is just R2 minus a correction.

The adjusted R2 is mostly useful for comparing the fits of models with different numbers of independent variables. You can't simply compare R2 values, because you expect R2 to be larger in the fit with more variables just by chance, as the simulation below illustrates.
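
The following simulation (our own sketch, using the standard adjustment formula 1 - (1 - R2)(n - 1)/(n - k - 1), which we believe matches Prism's) fits pure noise repeatedly and shows both behaviors described in the notes above:

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 30, 3
    adj = []
    for _ in range(2000):
        X = rng.normal(size=(n, k))
        y = rng.normal(size=n)            # pure noise: no real relationship
        A = np.column_stack([np.ones(n), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        adj.append(1 - (1 - r2) * (n - 1) / (n - k - 1))

    adj = np.array(adj)
    print(adj.mean())          # close to zero on average
    print((adj < 0).mean())    # roughly half of the values are negative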

Sum-of-squares. Multiple regression finds values for coefficients in the model that minimize the sum-of-squares of the differences between the predicted Y values and the actual Y values.
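
As a quick sanity check (our own sketch, not Prism's algorithm), perturbing the least-squares coefficients in any direction can only raise the sum-of-squares:

    import numpy as np

    rng = np.random.default_rng(2)
    A = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])
    y = A @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=20)

    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    def sse(b):
        """Sum of squared differences between actual and predicted Y."""
        return np.sum((y - A @ b) ** 2)

    # Random perturbations of the fitted coefficients never do better.
    worse = [sse(beta + rng.normal(scale=0.1, size=3)) for _ in range(1000)]
    print(sse(beta) <= min(worse))   # True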

Sy.x and RMSE. These are alternative ways to quantify the standard deviation of the residuals. We recommend Sy.x, which is also called Se. Learn how these are calculated.
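
A common convention (which we believe matches the linked calculation page, but check there for Prism's exact definitions) divides the residual sum-of-squares by N for the RMSE and by the residual degrees of freedom for Sy.x:

    import numpy as np

    def rmse_and_syx(residuals, n_params):
        """RMSE divides the sum-of-squares by N; Sy.x (a.k.a. Se) by N - K."""
        r = np.asarray(residuals, float)
        ss, n = np.sum(r ** 2), len(r)
        return np.sqrt(ss / n), np.sqrt(ss / (n - n_params))

    print(rmse_and_syx([0.5, -1.0, 0.25, 0.75, -0.5], n_params=2))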

AICc. This value accounts for both goodness of fit and the number of parameters in the model. If you fit the same data with the same weighting to two models, the one with the lower AICc is more likely to be the correct model. That may not be the model that fits the data better: it is easy to fit the data better by adding lots of independent variables (or interactions), thereby increasing the number of parameters fit by the model. The AICc gets smaller when the model fits the data better, but larger when you add parameters to the model. The value of the AICc depends on the units of the dependent variable, so a single AICc value cannot be interpreted in any useful way. Only the difference between two AICc values can be interpreted. Learn more about AICc (and how it is computed) in our explanation of how it is used in nonlinear regression.
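
For least-squares fits, a common form of the computation is sketched below. Note that conventions differ on whether K counts only the fitted parameters or adds one for the error variance; see the linked page for Prism's exact bookkeeping:

    import numpy as np

    def aicc_least_squares(ss, n, k):
        """AICc from the residual sum-of-squares of a least-squares fit."""
        aic = n * np.log(ss / n) + 2 * k
        return aic + 2 * k * (k + 1) / (n - k - 1)

    # A slightly better fit does not justify two extra parameters here:
    print(aicc_least_squares(ss=12.0, n=30, k=3))   # about -20.6
    print(aicc_least_squares(ss=11.5, n=30, k=5))   # about -16.3 (worse)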

Poisson regression

Prism can compute the goodness-of-fit of Poisson regression in four ways, selectable on the Diagnostics tab.

Pseudo R-squared

It is not possible to compute R2 with Poisson regression models. Instead, Prism reports the pseudo R2. You can interpret it as you do a regular R2. This is the simplest goodness-of-fit measure to understand, so we recommend it.

The pseudo R2 is computed from the log-likelihoods of three models: LLo, the log-likelihood of the horizontal-line model; LLfit, the log-likelihood of the model you chose; and LLmax, the maximum log-likelihood possible, which would occur if the model predicted every point exactly so all the residuals equal 0.0. The equation that computes the pseudo R2 is:

 R2 = (LLfit - LLo) / (LLmax - LLo)
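
Here is a self-contained sketch of that computation (our own illustration using SciPy; Prism's internals may differ), fitting a one-variable Poisson model with a log link to invented counts:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln, xlogy

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 2, size=40)
    y = rng.poisson(np.exp(0.5 + 0.8 * x))      # made-up Poisson counts

    def loglik(mu):
        """Poisson log-likelihood of counts y given predicted means mu."""
        return np.sum(xlogy(y, mu) - mu - gammaln(y + 1.0))

    def nll(beta):                              # Poisson model with a log link
        return -loglik(np.exp(beta[0] + beta[1] * x))

    ll_fit = -minimize(nll, x0=[0.0, 0.0]).fun  # LLfit: the model you chose
    ll_0 = loglik(np.full(len(y), y.mean()))    # LLo: horizontal-line model
    ll_max = loglik(y.astype(float))            # LLmax: predictions equal responses

    print((ll_fit - ll_0) / (ll_max - ll_0))    # pseudo R2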

Negative log likelihood

Least squares regression minimizes the sum-of-squares, which Prism reports. Poisson regression maximizes the likelihood, which is equivalent to minimizing the negative log of the likelihood; Prism can report that negative log-likelihood.
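
Concretely, the quantity being minimized for a Poisson model with predicted means mu is the standard formula below (poisson_nll is a hypothetical helper, not a Prism function):

    import numpy as np
    from scipy.special import gammaln, xlogy

    def poisson_nll(y, mu):
        """Negative log-likelihood of observed counts y given predicted means mu."""
        y, mu = np.asarray(y, float), np.asarray(mu, float)
        return -np.sum(xlogy(y, mu) - mu - gammaln(y + 1.0))

    print(poisson_nll([2, 0, 3], [1.8, 0.4, 2.9]))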

Deviance or G2

The deviance is twice the difference between the maximum possible log-likelihood (see above) and the log-likelihood of the fitted model. The formula for the deviance is D=2(LLmax - LLfit). This is also called G2.
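
Expressed as a tiny helper (the log-likelihood values below are made up for illustration):

    def deviance(ll_max, ll_fit):
        """G2: twice the gap between the best possible and achieved log-likelihood."""
        return 2.0 * (ll_max - ll_fit)

    print(deviance(ll_max=-52.1, ll_fit=-55.8))   # 7.4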

Dispersion ratio

When data are sampled from the Poisson distribution, the variance equals the mean. Prism can report the variance-to-mean ratio (VMR), called the dispersion ratio, as a value phi. If phi is much greater than 1.0, the actual variance of the points around the curve is greater than the mean, and the Poisson model may not be appropriate. This is called overdispersion. Some programs offer extensions to Poisson regression that deal with overdispersion, but Prism does not (let us know if you need this).
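
One standard estimate of phi divides the Pearson chi-square statistic by the residual degrees of freedom. We can't confirm this is exactly Prism's formula, but it illustrates the idea:

    import numpy as np

    def dispersion_ratio(y, mu, n_params):
        """Pearson chi-square / residual df; near 1.0 when Poisson holds."""
        y, mu = np.asarray(y, float), np.asarray(mu, float)
        return np.sum((y - mu) ** 2 / mu) / (len(y) - n_params)

    # Counts far more variable than their predicted means give phi >> 1.
    print(dispersion_ratio([0, 9, 1, 12, 0, 10], [5, 5, 5, 6, 6, 6], n_params=2))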

AICc

The AICc is useful only if you separately fit the same data to two or more models. You can then use the AICc to choose between them. But note that it only makes sense to compare AICc between fits when the only difference is the model you chose. If the data are not identical between fits, then any comparison of AICc values would be meaningless. It is also essential that the weighting, or regression method, be the same for all the fits. If you use Poisson regression for one fit, you need to use it for all.

 
