
How profile likelihood asymmetrical confidence intervals are computed


The basic idea of profile likelihood asymmetrical confidence intervals

Before Prism 7, Prism reported only asymptotic confidence intervals, which are always symmetrical around the best-fit values.

For some parameters in some models, an asymmetrical interval does a much better job of expressing precision. Prism (starting with version 7) offers this with profile likelihood confidence intervals, as a choice in the Confidence tab of the nonlinear regression dialog. The disadvantages are that they are unfamiliar to many and that they take longer to compute (but with a fast computer, you may not even notice unless you have huge data sets and choose a user-defined equation).

The idea is pretty simple. The extra sum-of-squares F test compares two models.

The more complicated model is the model you chose. The entire page of results is for this model.

The simpler model fixes one parameter to a constant value. The idea is to fix that parameter to various values until you find the confidence limit (as explained below).

Here is a very simplified algorithm that explains the idea behind the method (a code sketch of this search appears after the list). Define the sum-of-squares to be SS and the degrees of freedom to be DF.

1. Set a variable delta to the SE of the parameter you are finding the CI for (the whole procedure is repeated for the other parameters in step 6).

2. Hold the parameter fixed to its best-fit value minus delta, and run the fit again letting all the other parameter values vary. Record the new SS and DF of this fit.

3. Compare the original best fit with this fit that forces the parameter to be decreased by delta, using the extra sum-of-squares F test. The second fit holds one parameter to a constant value, so it fits one fewer parameter and thus has one more degree of freedom. Compute the P value.

a. If the P value is less than 0.05, delta is too large. Make it smaller and go back to step 2.

b. If the P value is greater than 0.05, delta is too small. Make it larger and go back to step 2.

c. If the P value is very close to 0.05, then the lower confidence limit equals the original best-fit value minus the current value of delta.

4. Hold the parameter fixed to its best-fit value plus delta, and run the fit again letting all the other parameter values vary. Record the SS and DF of this fit.

5. Compare the original best fit with this fit that forces the parameter to be increased by delta, using the extra sum-of-squares F test. Compute the P value.

a. If the P value is less than 0.05, delta is too large. Make it smaller and go back to step 4.

b. If the P value is greater than 0.05, delta is too small. Make it larger and go back to step 4.

c. If the P value is very close to 0.05, then the upper confidence limit equals the original best-fit value plus the current value of delta.

6. Repeat for each parameter.
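
To make the search concrete, here is a minimal Python sketch of the simplified procedure above. It is not Prism's actual implementation (Prism uses the Venzon and Moolgavkar method described below), and the model, the bracketing range, and the helper names (fit_ss, profile_limit) are hypothetical choices for illustration.

```python
# A minimal sketch of the simplified delta search described above, assuming
# least-squares fitting with Gaussian scatter. Not Prism's actual algorithm.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import f as f_dist

def model(x, params):
    # Hypothetical one-phase exponential decay: params = [plateau, span, k]
    plateau, span, k = params
    return plateau + span * np.exp(-k * x)

def fit_ss(x, y, p0, fixed_index=None, fixed_value=None):
    """Fit the model, optionally holding one parameter constant.
    Returns the sum-of-squares (SS) and degrees of freedom (DF)."""
    free_idx = [i for i in range(len(p0)) if i != fixed_index]

    def residuals(free_params):
        full = np.asarray(p0, dtype=float).copy()
        full[free_idx] = free_params
        if fixed_index is not None:
            full[fixed_index] = fixed_value
        return y - model(x, full)

    res = least_squares(residuals, np.asarray(p0, dtype=float)[free_idx])
    return np.sum(res.fun ** 2), len(y) - len(free_idx)

def profile_limit(x, y, best_params, ss_full, df_full, index, direction, alpha=0.05):
    """Search for the delta at which the extra sum-of-squares F test gives
    P = alpha. direction is -1 for the lower limit, +1 for the upper limit."""
    lo, hi = 0.0, 10.0 * abs(best_params[index]) + 1.0   # crude bracket
    for _ in range(60):                                   # bisection on delta
        delta = 0.5 * (lo + hi)
        fixed_value = best_params[index] + direction * delta
        ss_fixed, df_fixed = fit_ss(x, y, best_params, index, fixed_value)
        f_ratio = ((ss_fixed - ss_full) / (df_fixed - df_full)) / (ss_full / df_full)
        p = f_dist.sf(max(f_ratio, 0.0), df_fixed - df_full, df_full)
        if p < alpha:
            hi = delta   # fit got significantly worse: delta is too large
        else:
            lo = delta   # delta is too small
    return best_params[index] + direction * delta
```

A call like profile_limit(x, y, best, ss, df, index=2, direction=-1) would then approximate the lower limit for the third parameter. A proper implementation would start delta at the parameter's SE and bracket more carefully, as the numbered steps describe.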

This creates a 100*(1-α)% confidence interval for a parameter (a 95% interval for the common situation where α is set to 0.05). If you were to test the null hypothesis that the true parameter value equals some hypothetical value, that null hypothesis would not be rejected for any hypothetical value within the confidence interval.

More formally: define θbf as the best-fit value of the parameter and θhyp as a hypothetical different value for the parameter. The null hypothesis that θbf = θhyp will not be rejected at the α level of significance for any value of θhyp within the confidence interval, but will be rejected for any value of θhyp outside the confidence interval.
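
Written as a formula (a restatement of the paragraph above, not notation taken from Prism's documentation), the interval is simply the set of hypothetical values the test does not reject:

```latex
% CI as the inversion of the extra sum-of-squares F test, where P(theta_hyp)
% is the P value from comparing the fit with the parameter fixed at theta_hyp
% against the full fit.
\mathrm{CI}_{1-\alpha} \;=\; \bigl\{\, \theta_{\mathrm{hyp}} \;:\; P(\theta_{\mathrm{hyp}}) \ge \alpha \,\bigr\}
```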

How Prism computes profile likelihood confidence intervals

Prism actually uses the steps detailed by Venzon and Moolgavkar(1) for each parameter. This method creates a likelihood profile for each parameter. For various possible values of the parameter, the algorithm fits the curve (optimizing the other parameters) and determines the likelihood that the data would have come from this model. The confidence interval is the range of parameter values where the likelihood is not too much lower than its maximum value. Of course, "too low" is defined rigorously.

The maximum likelihood is at the best-fit value of the parameter. When these profiles are graphed in texts, it is usually the negative logarithm of the likelihood that is plotted. The maximum likelihood is the same as the minimum -log(likelihood) so in these graphs the best fit value is the X value where Y is at its lowest value.
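
To show what such a profile looks like in code, here is a brief continuation of the earlier sketch (it reuses the hypothetical fit_ss helper defined above). It scans a grid of fixed values for one parameter, re-fits the others at each value, and records the resulting sum-of-squares, which plays the role of -log(likelihood) up to a constant under the Gaussian assumption discussed next.

```python
# Continuation of the earlier sketch: build a profile for one parameter by
# holding it at each grid value and re-fitting the remaining parameters.
import numpy as np

def ss_profile(x, y, best_params, index, half_width, n_points=41):
    """Return grid values and the profiled SS for parameter `index`.
    `half_width` sets the range scanned on each side of the best-fit value
    (an arbitrary choice for illustration)."""
    center = best_params[index]
    grid = np.linspace(center - half_width, center + half_width, n_points)
    profile = [fit_ss(x, y, best_params, index, value)[0]  # fit_ss from above
               for value in grid]
    return grid, np.array(profile)

# The profiled SS (or -log likelihood) is lowest at the best-fit value, giving
# the U-shaped curve described in the text; the confidence limits fall where
# the curve has risen by the amount corresponding to P = 0.05 in the F test.
```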

If you assume all residuals follow a Gaussian distribution, maximizing the likelihood is the same as minimizing the sum-of-squares.
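
A quick justification of that equivalence (a standard derivation, not taken from the Prism help): for n independent residuals drawn from a Gaussian distribution with standard deviation σ, the negative log-likelihood is

```latex
-\ln L \;=\; \frac{n}{2}\ln\!\bigl(2\pi\sigma^{2}\bigr)
        \;+\; \frac{1}{2\sigma^{2}}\sum_{i=1}^{n}\bigl(y_{i}-\hat{y}_{i}\bigr)^{2}
        \;=\; \frac{n}{2}\ln\!\bigl(2\pi\sigma^{2}\bigr) \;+\; \frac{SS}{2\sigma^{2}}
```

For a fixed σ the first term is a constant, so the parameter values that minimize SS are exactly the ones that maximize the likelihood.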

Notes

The final delta value for computing the upper confidence limit may not be equal to (or even be close to) the final value of delta for computing the lower limit. That's why the confidence interval may be asymmetrical around the best-fit value.

The P value goal of 0.05 above is used only when you want 95% confidence intervals. If you want 99% confidence intervals, use 0.01, etc.

The method in reference 1 (which Prism uses) is much cleverer than the simplified description above, so it requires fewer computations.

The confidence intervals computed this way are for just that one parameter. The idea is that each confidence interval has a 95% chance of including the true parameter value. The 95% does not apply to the set of intervals. It is not correct to say that we expect a 95% chance that all the confidence intervals include their respective true parameter values.

When computing the extra sum-of-squares F test above, note that the two models differ by one degree of freedom. This is because we are fixing one parameter and letting Prism fit the others. Some publications (2) assume that you are fixing all the parameters, not just one, so the two models entered into the F test differ by K degrees of freedom, where K is the number of parameters fit. Those intervals are wider, and I think the intent is that the 95% confidence level apply simultaneously to all the intervals, rather than to just one. Prism does not use this approach. With Prism, the two models being compared always differ by one degree of freedom.
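
To illustrate how much that choice of numerator degrees of freedom matters, here is a small sketch (the sample size and parameter count are made-up example values) that compares the two critical F values with scipy:

```python
# Compare the F-test thresholds when the two fits differ by 1 degree of
# freedom (Prism's convention) versus by K (fixing all parameters at once,
# as in reference 2). The example numbers are arbitrary.
from scipy.stats import f as f_dist

n = 30           # hypothetical number of data points
k = 3            # hypothetical number of fitted parameters
df_full = n - k  # degrees of freedom of the full fit

f_crit_1 = f_dist.isf(0.05, 1, df_full)  # fix one parameter (Prism)
f_crit_k = f_dist.isf(0.05, k, df_full)  # fix all K parameters (reference 2)

# The confidence limit falls where SS_fixed/SS_full reaches
# 1 + F_crit * (numerator df) / df_full, so a larger value here means a
# wider confidence interval.
print(1 + f_crit_1 * 1 / df_full)   # ~1.16 with these example numbers
print(1 + f_crit_k * k / df_full)   # ~1.33 with these example numbers
```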

The method we use is also described by Watts (3). Prism matches the results he presents in Table IV for the data in Table III.

In some cases, the method is unable to find one of the confidence limits and reports "???" instead of a value.

References

1. Venzon DJ, Moolgavkar SH. A method for computing profile-likelihood-based confidence intervals. Applied Statistics. 1988;37(1):87.

2. Kemmer G, Keller S. Nonlinear least-squares data fitting in Excel spreadsheets. Nature Protocols. 2010;5(2):267–281. http://doi.org/10.1038/nprot.2009.182

3. Watts DG. Parameter estimates from nonlinear models. Chapter 2 of Essential Numerical Computer Methods (M Johnson, ed.). Academic Press; 2010.
