
The likelihood ratio test compares nested models

The likelihood ratio test compares the goodness-of-fit of two alternative nested models. "Nested" means that one model is a simpler case of the other. Let's consider what this means in different contexts:

If you asked Prism to test whether parameters are different between treatments, then the models are nested. You are comparing a model where Prism finds separate best-fit values for some parameters vs. a model where those parameters are shared among data sets. The second case (sharing) is a simpler version of the first case (individual parameters), because it has fewer parameters to fit.

If you asked Prism to test whether a parameter value is different from a hypothetical value, then the models are nested. You are comparing the fit of a model where a parameter is fixed to a hypothetical value to the fit of a model where Prism finds the best-fit value of that parameter. The first case (fixed value) is a simpler version (fewer parameters to fit) than the second.
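
To make the nesting concrete, here is a minimal sketch in Python (using scipy, not Prism's own code) that fits a one-phase decay twice: once with the rate constant free, and once with it fixed to a hypothetical value. The data and parameter values are invented for illustration. Sharing a parameter across data sets reduces the parameter count in exactly the same way.

```python
# A minimal sketch (not Prism's code): fit a one-phase decay twice,
# once with the rate constant K free (full model) and once with K
# fixed to a hypothetical value (nested model with one fewer parameter).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 100 * np.exp(-0.35 * x) + rng.normal(0, 3, x.size)  # invented data

def decay_free(x, plateau, k):       # full model: K is a fitted parameter
    return plateau * np.exp(-k * x)

K_HYPOTHETICAL = 0.5                 # the hypothetical value being tested

def decay_fixed(x, plateau):         # nested model: K is fixed, not fitted
    return plateau * np.exp(-K_HYPOTHETICAL * x)

p_full, _ = curve_fit(decay_free, x, y, p0=[100.0, 0.3])
p_nested, _ = curve_fit(decay_fixed, x, y, p0=[100.0])

ss_full = np.sum((y - decay_free(x, *p_full)) ** 2)
ss_nested = np.sum((y - decay_fixed(x, *p_nested)) ** 2)
# Because the models are nested, the full model can never fit worse:
print(ss_nested >= ss_full)  # True (up to optimizer tolerance)
```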

If you are comparing the fits of two equations you chose, and both models have the same number of parameters, then the two models cannot be nested. With nested models, one model has fewer parameters than the other. When the two models have the same number of parameters, they also have the same number of degrees of freedom, so Prism cannot compute the test and does not report a P value. Instead, it plots the model whose fit has the lower sum-of-squares and reports the message, "Models have the same DF."

If you are comparing the fits of two equations you chose with different numbers of parameters, the models may or may not be nested. Prism does not attempt to do the algebra necessary to make this determination. If you chose two models that are not nested, Prism will still report results for the extra-sum-of-squares F test, but those results will not be useful.
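
For illustration, here is a tiny sketch (with made-up sums-of-squares and degrees of freedom, not Prism's internals) of the fallback described above for two models with the same degrees of freedom:

```python
# Sketch of the fallback described above, with made-up numbers: when
# two models have the same df (df = data points - fitted parameters),
# no P value can be computed, so the lower sum-of-squares wins.
def pick_when_same_df(ss_a, ss_b, df_a, df_b):
    if df_a != df_b:
        raise ValueError("Models have different df; run the comparison test.")
    return "Model A" if ss_a < ss_b else "Model B"

print(pick_when_same_df(ss_a=412.7, ss_b=389.2, df_a=18, df_b=18))  # Model B
```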

Interpreting the P value

The likelihood ratio test is based on traditional statistical hypothesis testing. It balances the improvement in fit produced by the more complicated model (quantified by the likelihood ratio) against the loss of degrees of freedom (more parameters).

The null hypothesis is that the simpler model (the one with fewer parameters) is correct. The improvement of the more complicated model is quantified by the likelihood ratio. You expect some improvement just by chance, and the amount you expect by chance is determined by the number of degrees of freedom in each model. A P value is calculated from the likelihood ratio and the difference in df between the two models.

The P value answers this question:

If the null hypothesis is really correct, in what fraction of experiments (of the same size as yours) will the likelihood ratio be as large as you observed, or even larger?
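
By Wilks' theorem, twice the log of the likelihood ratio is approximately chi-square distributed, with degrees of freedom equal to the number of extra parameters. Here is a minimal sketch of that standard calculation (not necessarily Prism's exact computation, and with made-up log-likelihoods):

```python
# Standard likelihood ratio test via Wilks' theorem (not necessarily
# Prism's exact computation): 2 * ln(likelihood ratio) is approximately
# chi-square distributed, with df equal to the number of extra parameters.
from scipy.stats import chi2

def lr_test_pvalue(loglik_simple, loglik_complex, extra_params):
    lr_stat = 2.0 * (loglik_complex - loglik_simple)  # >= 0 for nested fits
    return chi2.sf(lr_stat, df=extra_params)

# Made-up log-likelihoods; the complex model has one extra parameter.
print(lr_test_pvalue(loglik_simple=-52.1, loglik_complex=-48.3, extra_params=1))
# -> about 0.006
```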

If the P value is small, conclude that the simple model (the null hypothesis) is wrong, and accept the more complicated model. Usually the threshold P value is set at its traditional value of 0.05.

If the P value is high, conclude that the data do not present a compelling reason to reject the simpler model.

Prism names the null and alternative hypotheses, and reports the P value. You set the threshold P value in the Compare tab of the nonlinear regression dialog. If the P value is less than that threshold, Prism chooses (and plots) the alternative (more complicated) model. It also reports the value of F and the numbers of degrees of freedom, but these will be useful only if you want to compare Prism's results with those of another program or hand calculations.
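
The decision rule amounts to a simple threshold comparison. A sketch, where ALPHA stands in for the threshold chosen in the Compare tab and all names are illustrative:

```python
# Sketch of the decision rule described above; ALPHA mirrors the
# threshold chosen in the Compare tab (traditionally 0.05).
ALPHA = 0.05

def choose_model(p_value):
    # A small P value means the improvement in fit is too large to
    # attribute to chance, so the more complicated model is chosen.
    return "alternative (more complicated)" if p_value < ALPHA else "null (simpler)"

print(choose_model(0.006))  # -> alternative (more complicated)
print(choose_model(0.31))   # -> null (simpler)
```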

When Prism won't report a P value when comparing models

Prism skips the likelihood ratio test and does not report a P value in these situations:

If the simpler model fits the data better than (or the same as) the more complicated model. The whole point of the test is to deal with a tradeoff. The model with more parameters fits the data better, but that may just be due to chance. The test asks if that improvement in fit (decrease in sum-of-squares) is large enough to be "worth" the loss in degrees of freedom (increase in number of parameters). In the rare cases where the simpler model fits better than (or the same as) the more complicated model (the one with more parameters), Prism will choose the simpler model without computing the test and report "Simpler model fits better."

If the fit of either model is ambiguous or flagged, then Prism chooses the other model without performing any statistical test. You can turn this criterion off in the Compare tab of the nonlinear regression dialog.

If the fit of one model did not converge, then Prism chooses the other model without performing the test. Since the fit of one model didn't converge, it makes no sense to compare the sum-of-squares of the two models.

If one model fits the data perfectly, Prism chooses it without performing the test.

If the two models have the same number of degrees of freedom. The idea of the likelihood ratio test is to balance the improvement in sum-of-squares (better fit) against the decrease in degrees of freedom (more parameters). The test makes no sense (and is mathematically impossible) if the two models have the same number of degrees of freedom. In this case, Prism picks the model that fits best.
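
Taken together, the rules above behave like a set of guards checked before the test itself. Here is a hypothetical sketch; the field names and messages are illustrative, not Prism's internals:

```python
# Hypothetical sketch of the pre-test guards listed above; the field
# names and messages are illustrative, not Prism's internals. Each
# model is a dict with: converged, flagged, perfect_fit, ss, df.
def compare_models(simple, complex_):
    if not simple["converged"]:
        return "choose complex model (other fit did not converge; no test)"
    if not complex_["converged"]:
        return "choose simple model (other fit did not converge; no test)"
    if simple["perfect_fit"]:
        return "choose simple model (perfect fit; no test)"
    if complex_["perfect_fit"]:
        return "choose complex model (perfect fit; no test)"
    if simple["flagged"]:
        return "choose complex model (other fit ambiguous or flagged; no test)"
    if complex_["flagged"]:
        return "choose simple model (other fit ambiguous or flagged; no test)"
    if simple["df"] == complex_["df"]:
        return "choose lower-SS model (same df; test impossible)"
    if simple["ss"] <= complex_["ss"]:
        return "choose simple model (simpler model fits better; no test)"
    return "run the likelihood ratio test"
```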

Relationship to the extra sum-of-squares F test

The extra sum-of-squares F test is equivalent to the likelihood ratio test when you choose least-squares regression. We present the results as an F test with least-squares regression because that is more familiar to most biologists. The P value would be the same if the results were expressed as a likelihood ratio.
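
To see the equivalence numerically: with Gaussian errors, the maximized log-likelihood depends on the data only through the sum-of-squares, so the likelihood ratio statistic and the F statistic are monotone transforms of one another. A sketch with made-up sums-of-squares:

```python
# Numerical sketch of the equivalence, with made-up sums-of-squares:
# under Gaussian least squares the maximized log-likelihood depends on
# the data only through the sum-of-squares, so the likelihood ratio and
# F statistics are monotone transforms of one another.
import numpy as np
from scipy.stats import f as f_dist

n = 20                               # number of data points
ss_simple, df_simple = 512.0, 18     # simpler model (fewer parameters)
ss_complex, df_complex = 390.0, 17   # more complicated model

# Extra sum-of-squares F statistic:
F = ((ss_simple - ss_complex) / (df_simple - df_complex)) / (ss_complex / df_complex)

# Likelihood ratio statistic under Gaussian least squares:
lr_stat = n * np.log(ss_simple / ss_complex)

# Recover F from the likelihood ratio statistic:
F_from_lr = (df_complex / (df_simple - df_complex)) * (np.exp(lr_stat / n) - 1)

print(np.isclose(F, F_from_lr))                          # True
print(f_dist.sf(F, df_simple - df_complex, df_complex))  # the shared P value
```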

© 1995-2019 GraphPad Software, LLC. All rights reserved.