# Pseudo R squared

Prism offers four pseudo R squared values. When evaluating these values, it's important to remember that they cannot be interpreted in the same way as R squared in linear regression. This can be challenging at first because, although they do not provide the same information about the model, they were developed to have some analogies to that popular metric, such as being constrained between 0 and 1.

## Tjur’s R squared

Tjur's R squared has an appealing intuitive definition. For all of the observed 0s in the data table, calculate the mean predicted value. Similarly, for all of the observed 1s in the data table, calculate the mean predicted value. Tjur's R squared is the distance (absolute value of the difference) between the two means. Thus, a Tjur's R squared value approaching 1 indicates that there is clear separation between the predicted values for the 0s and 1s. Additionally, Tjur's R squared (like R squared in linear regression) is actually bound between 0 and 1.

Tjur’s R squared = |Average Predicted value for 0s – Average Predicted value for 1s|
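This definition translates directly into code. The sketch below (not Prism's implementation) computes Tjur's R squared from observed 0/1 outcomes and a model's predicted probabilities; the arrays are made-up illustrative data:

```python
import numpy as np

def tjur_r2(y, p):
    """Tjur's R squared: the absolute difference between the mean
    predicted probability for the observed 1s and the observed 0s."""
    y, p = np.asarray(y), np.asarray(p)
    return abs(p[y == 1].mean() - p[y == 0].mean())

# Hypothetical data: predictions that separate the two classes fairly well
y = [0, 0, 0, 1, 1, 1]
p = [0.1, 0.2, 0.1, 0.9, 0.8, 0.9]
print(tjur_r2(y, p))  # ≈ 0.73: close to 1, indicating clear separation
```

A model that assigned every row the same predicted probability would score 0, since the two group means would coincide.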

## McFadden’s R squared

McFadden's (and Cox-Snell's and Nagelkerke's) R squared are calculated using likelihoods. The concepts of likelihood and log likelihood are briefly discussed in the model diagnostics section of this guide. However, it's not critical to understand how log likelihood is calculated to get an idea of what this pseudo R squared metric is telling you. Briefly, the likelihood (and log likelihood) give you an idea of how well a model fits the data. McFadden's R squared calculates the ratio of the log likelihood for the specified model to that of an intercept-only model, and subtracts this ratio from 1. In other words:

McFadden’s R squared = 1 – (LogLikelihood(Specified Model)/LogLikelihood(Intercept-only Model))

If the specified model fits the data well, the ratio of log likelihoods will be small, and McFadden’s R squared will be close to 1. If the intercept-only model fits the data almost as well as the specified model, the ratio will be closer to 1, and McFadden’s R squared will be closer to zero.
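A minimal sketch of this calculation, assuming you already have predicted probabilities from a fitted model. The Bernoulli log-likelihood formula used here is standard; the intercept-only model simply predicts the overall mean of y for every row, and the data are made up:

```python
import numpy as np

def log_likelihood(y, p):
    """Bernoulli log likelihood of observed 0/1 outcomes y
    given predicted probabilities p."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def mcfadden_r2(y, p_model):
    """1 minus the ratio of log likelihoods:
    specified model over intercept-only model."""
    p_null = np.full(len(y), np.mean(y))  # intercept-only prediction
    return 1 - log_likelihood(y, p_model) / log_likelihood(y, p_null)

y = [0, 0, 0, 1, 1, 1]
p = [0.1, 0.2, 0.1, 0.9, 0.8, 0.9]
print(mcfadden_r2(y, p))  # ≈ 0.79: fits much better than the intercept alone
```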

## Cox-Snell’s R squared

Similar to McFadden’s R squared, Cox-Snell’s R squared uses the likelihood of the specified model and of an intercept-only model fit to the same data (McFadden’s R squared uses the log likelihood). In this case,

Cox-Snell’s R Squared = 1 – (Likelihood(Intercept-only Model)/Likelihood(Specified Model))^(2/n), where n is the number of observations.

It’s worth noting that while Cox-Snell’s R squared takes a similar approach to McFadden’s R squared, the upper limit of Cox-Snell’s R squared isn’t 1; in fact, the upper limit in many cases can be much less than 1. That means that even if the specified model fits the data perfectly, Cox-Snell’s R squared might be less than 1!
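Continuing the same sketch with the same made-up data, Cox-Snell's R squared compares raw likelihoods; in practice it is safer to compute the likelihood ratio on the log scale and exponentiate, since raw likelihoods underflow for large n:

```python
import numpy as np

def log_likelihood(y, p):
    """Bernoulli log likelihood of observed 0/1 outcomes y
    given predicted probabilities p."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def cox_snell_r2(y, p_model):
    """1 - (L_null / L_model)^(2/n), computed on the log scale
    for numerical stability."""
    n = len(y)
    ll_model = log_likelihood(y, p_model)
    ll_null = log_likelihood(y, np.full(n, np.mean(y)))
    return 1 - np.exp((2 / n) * (ll_null - ll_model))

y = [0, 0, 0, 1, 1, 1]
p = [0.1, 0.2, 0.1, 0.9, 0.8, 0.9]
print(cox_snell_r2(y, p))  # smaller than McFadden's value on the same data
```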

## Nagelkerke’s R squared

Nagelkerke’s R squared can be thought of as an “adjusted Cox-Snell’s R squared” meant to address the problem described above in which the upper limit of Cox-Snell’s R squared isn’t 1. This is done by dividing Cox-Snell’s R squared by its largest possible value. In other words:

Nagelkerke’s R squared = (Cox-Snell’s R squared)/(1 – Likelihood(Intercept-only Model)^(2/n)), where n is the number of observations
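Under the same assumptions as the earlier sketches (Bernoulli log likelihood, intercept-only model predicting the mean of y, made-up data), the rescaling looks like this:

```python
import numpy as np

def log_likelihood(y, p):
    """Bernoulli log likelihood of observed 0/1 outcomes y
    given predicted probabilities p."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def nagelkerke_r2(y, p_model):
    """Cox-Snell's R squared divided by its maximum possible value,
    1 - L(intercept-only)^(2/n), so that the upper bound is 1."""
    n = len(y)
    ll_model = log_likelihood(y, p_model)
    ll_null = log_likelihood(y, np.full(n, np.mean(y)))
    cox_snell = 1 - np.exp((2 / n) * (ll_null - ll_model))
    max_cox_snell = 1 - np.exp((2 / n) * ll_null)
    return cox_snell / max_cox_snell

y = [0, 0, 0, 1, 1, 1]
p = [0.1, 0.2, 0.1, 0.9, 0.8, 0.9]
print(nagelkerke_r2(y, p))  # larger than the Cox-Snell value for the same fit
```

Because the denominator is always less than 1, Nagelkerke's R squared is always at least as large as Cox-Snell's R squared for the same model.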

This website contains more information on these and other pseudo R squared values, while this paper provides a good assessment of these and other goodness of fit metrics.