

This alternative approach is based on information theory, and does not use the traditional “hypothesis testing” statistical paradigm. Therefore it does not generate a P value, does not reach conclusions about “statistical significance”, and does not “reject” any model.

The method determines how well the data supports each model, taking into account both the goodness-of-fit (sum-of-squares) and the number of parameters in the model. The results are expressed as the probability that each model is correct, with the probabilities summing to 100%. If one model is much more likely to be correct than the other (say, 1% vs. 99%), you will want to choose it. If the difference in likelihood is not very big (say, 40% vs. 60%), you will know that either model might be correct, so you will want to collect more data.
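To make the calculation concrete, here is a minimal Python sketch of this approach. It uses the standard least-squares form of AICc; the function names and the example numbers are hypothetical, and Prism's internal bookkeeping may count parameters slightly differently (for example, whether the variance counts as an extra parameter).

```python
import math

def aicc(ss, n, k):
    # Corrected Akaike Information Criterion for a least-squares fit:
    #   AICc = n*ln(SS/n) + 2k + 2k(k+1)/(n - k - 1)
    # ss: sum-of-squares of the fit, n: number of data points,
    # k: number of parameters in the model.
    return n * math.log(ss / n) + 2 * k + (2.0 * k * (k + 1)) / (n - k - 1)

def akaike_probabilities(aicc_a, aicc_b):
    # Each model's relative likelihood is exp(-0.5 * AICc); normalizing
    # the pair yields two probabilities that sum to 100%.
    best = min(aicc_a, aicc_b)
    wa = math.exp(-0.5 * (aicc_a - best))   # subtract the best score
    wb = math.exp(-0.5 * (aicc_b - best))   # to keep the exponents small
    return wa / (wa + wb), wb / (wa + wb)

# Hypothetical example: 20 data points, a simpler model with 1 parameter
# versus a more complicated model with 2 parameters.
p_simple, p_complicated = akaike_probabilities(
    aicc(ss=12.1, n=20, k=1),
    aicc(ss=8.7, n=20, k=2),
)
```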

Of course, these probabilities are meaningful only in the context of comparing those two models. It is possible that a third model you didn't test fits the data much better and so is much more likely to be correct.

Prism names the null and alternative hypotheses and reports the likelihood that each is correct. It also reports the difference between the AICc values (computed as the AICc of the simpler model minus the AICc of the more complicated model), but this is useful only if you want to compare Prism's results with those of another program or with hand calculations. Prism chooses and plots the model that is more likely to be correct, even if the difference in likelihoods is small.
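If all you have is the reported AICc difference, the two probabilities can be recovered from it directly. A minimal sketch (the function name is hypothetical):

```python
import math

def prob_complicated_is_correct(delta_aicc):
    # delta_aicc = AICc(simpler model) - AICc(more complicated model),
    # the difference as reported above. The simpler model's probability
    # is the complement of the value returned here.
    return 1.0 / (1.0 + math.exp(-0.5 * delta_aicc))
```

For example, a reported difference of 4.0 gives 1/(1 + e^-2), about an 88% probability that the more complicated model is correct.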
