
When fitting a nonlinear model to data, your main objective is often to discriminate between different models, or to ask whether an experimental intervention changed a parameter.

Comparing models can answer four distinct questions:

For each data set, which of two equations (models) fits best?

Do the best-fit values of selected unshared parameters differ between data sets?

For each data set, does the best-fit value of a parameter differ from a theoretical value?

Does one curve adequately fit all the data?

Prism lets you choose from two approaches to comparing models: the extra sum-of-squares F test and the AICc approach based on information theory.

How the F test works to compare models

How the AICc computations work
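The linked pages explain the details, but as a rough illustration (a minimal sketch, not Prism's own code), the Python snippet below computes the extra sum-of-squares F ratio and the least-squares form of AICc for a simpler model nested within a more complicated one. It assumes the residual sums of squares, the number of fitted parameters, and the number of data points are already known from the two fits; the function names and example numbers are hypothetical, and scipy is assumed only to look up the F distribution.

import math
from scipy.stats import f as f_dist

def extra_ss_f_test(ss_simple, k_simple, ss_complex, k_complex, n_points):
    # Extra sum-of-squares F test for two nested least-squares fits.
    # ss_* are residual sums of squares; k_* are counts of fitted parameters.
    df_simple = n_points - k_simple
    df_complex = n_points - k_complex
    f_ratio = ((ss_simple - ss_complex) / (df_simple - df_complex)) / (ss_complex / df_complex)
    p_value = f_dist.sf(f_ratio, df_simple - df_complex, df_complex)
    return f_ratio, p_value

def aicc(ss, k, n_points):
    # Corrected AIC computed from least-squares results; K counts the
    # fitted parameters plus one for the estimated variance.
    k = k + 1
    return (n_points * math.log(ss / n_points) + 2 * k
            + (2 * k * (k + 1)) / (n_points - k - 1))

# Hypothetical numbers: a one-site model (2 parameters) versus a
# two-site model (4 parameters) fit to 20 data points.
f_ratio, p = extra_ss_f_test(120.0, 2, 80.0, 4, 20)
delta = aicc(120.0, 2, 20) - aicc(80.0, 4, 20)
print(f"F = {f_ratio:.2f}, P = {p:.4f}, AICc(simple) - AICc(complex) = {delta:.2f}")

In this made-up example, a small P value from the F test, or a lower AICc for the more complicated model (a positive difference as printed here), would favor the more complicated model.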

The idea of comparing models extends way beyond nonlinear regression. In fact, much of statistics can be viewed as comparing models; an unpaired t test, for example, compares a model in which two groups share one mean with a model in which each group has its own mean.
