

Reality check before statistics

Apply a common-sense reality check before looking at any statistical approach to comparing fits. If one of the fits has results that are scientifically invalid, then accept the other model. Only when both fits make scientific sense should you use a statistical method to compare the two fits.

Prism partially automates this 'reality check' approach. If the fit of either model is ambiguous, then Prism chooses the other model without performing any statistical test.

Statistical approaches balance the change in sum-of-squares with the change in numbers of degrees of freedom

The more complicated model (the one with more parameters) almost always fits the data better than the simpler model. Statistical methods are needed to decide whether that improvement in fit is large enough to justify the extra parameters. Prism can do this via the extra sum-of-squares F test or using information theory and computation of AIC. Don't use R2 or the adjusted R2 to compare models (1).
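
To make these calculations concrete, here is a minimal Python sketch (none of this code comes from Prism; the function names, the SciPy dependency, and the example numbers are hypothetical) of the extra sum-of-squares F ratio and of AIC computed from the sum-of-squares of a least-squares fit:

# Sketch only: compare a simpler, nested model (1) with a more
# complicated model (2) fit by least squares to the same data.
import math
from scipy.stats import f as f_dist

def extra_ss_f_test(ss1, df1, ss2, df2):
    """Extra sum-of-squares F test. Model 1 is the simpler model, so
    ss1 >= ss2 and df1 > df2 (df = data points minus fitted parameters)."""
    f_ratio = ((ss1 - ss2) / (df1 - df2)) / (ss2 / df2)
    p_value = f_dist.sf(f_ratio, df1 - df2, df2)
    return f_ratio, p_value

def aic(ss, n_points, n_params, corrected=True):
    """AIC for a least-squares fit. K counts the fitted parameters plus
    one for the estimated variance of the residuals; the corrected form
    (AICc) adds a term that matters when n_points is small."""
    k = n_params + 1
    value = n_points * math.log(ss / n_points) + 2 * k
    if corrected:
        value += 2 * k * (k + 1) / (n_points - k - 1)
    return value

# Hypothetical numbers: 20 points, simpler model with 3 parameters
# (SS = 145) vs. more complicated model with 4 parameters (SS = 110).
f_ratio, p = extra_ss_f_test(ss1=145.0, df1=17, ss2=110.0, df2=16)
delta_aic = aic(110.0, 20, 4) - aic(145.0, 20, 3)   # negative favors model 2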

Both of these methods only make sense when the models being compared have different numbers of parameters, and so different numbers of degrees of freedom. If the two models have the same number of parameters, there is no need for either the F test or AIC; simply choose the model with the smaller sum-of-squares.

How do these methods work to compare data sets?

The Compare tab of Prism lets you ask "Do the best-fit values of selected unshared parameters differ between data sets?" or "Does one curve adequately fit all data sets?". Applying the F test or Akaike's method to answer these questions is straightforward: Prism compares the sum-of-squares of two fits.

In one fit, the model is fit separately to each data set, and the goodness of fit is quantified with a sum-of-squares. The total of these sum-of-squares values quantifies the goodness of fit of the family of curves fit to all the data sets.

The other fit is a global fit to all the data sets at once, sharing specified parameters. If you ask Prism whether one curve adequately fits all data sets, then it shares all the parameters.

These two fits are nested (the second is a simpler case of the first, with fewer parameters to fit), so the sums-of-squares (for the first fit, the total of the individual sum-of-squares values) can be compared using either the F test or Akaike's method.
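
As a rough illustration of these two fits (a sketch under assumed data, not Prism's implementation; the one-phase-decay model, the numbers, and the SciPy calls are assumptions made only for this example), the following Python code fits a curve separately to two data sets, then fits one shared curve to all the points, and compares the two sums-of-squares with the extra sum-of-squares F test:

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def model(x, top, k):
    # Hypothetical one-phase decay used only for illustration.
    return top * np.exp(-k * x)

# Two hypothetical data sets measured at the same X values.
x = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
y_sets = [np.array([10.2, 7.9, 6.1, 3.8, 1.4, 0.3]),
          np.array([ 9.7, 8.1, 6.5, 4.1, 1.7, 0.4])]

# Fit 1: the model is fit separately to each data set, and the
# individual sums-of-squares are added up.
ss_separate, params_separate = 0.0, 0
for y in y_sets:
    popt, _ = curve_fit(model, x, y, p0=[10.0, 0.3])
    ss_separate += float(np.sum((y - model(x, *popt)) ** 2))
    params_separate += len(popt)

# Fit 2: a global fit with every parameter shared, which is the same as
# fitting one curve to all the points pooled together.
x_all, y_all = np.concatenate([x, x]), np.concatenate(y_sets)
popt_global, _ = curve_fit(model, x_all, y_all, p0=[10.0, 0.3])
ss_global = float(np.sum((y_all - model(x_all, *popt_global)) ** 2))

# The shared fit is the simpler (nested) model, so the F test applies.
n = y_all.size
df_separate = n - params_separate          # more parameters, fewer df
df_global = n - len(popt_global)           # simpler fit, more df
f_ratio = ((ss_global - ss_separate) / (df_global - df_separate)) / (ss_separate / df_separate)
p_value = f_dist.sf(f_ratio, df_global - df_separate, df_separate)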


1. Spiess, A.-N. & Neumeyer, N. An evaluation of R2 as an inadequate measure for nonlinear models in pharmacological and biochemical research: a Monte Carlo approach. BMC Pharmacol 10, 6 (2010).
