Asymptotic confidence intervals are always centered on the best-fit value of the parameter, and extend the same distance above and below that value.

The 95% confidence intervals are computed by this equation:

From [BestFit - t*SE] to [BestFit + t*SE]

where BestFit is the best fit value for the parameter, SE is its standard error, and t is the value from the t distribution for the desired level of confidence (95% is standard) and the number of degrees of freedom (which equals the number of data points minus the number of parameters fit by regression). With 95% confidence and many degrees of freedom (more than a few dozen), this multiplier is very close to 1.96. Note that the value of t is not computed from your data, but is a constant that depends on the confidence level you choose, the number of data points, and the number of parameters.
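The calculation above can be sketched in Python using SciPy's t distribution (the best-fit value, SE, and counts below are made-up numbers for illustration; this is not Prism's code):

```python
from scipy.stats import t

def asymptotic_ci(best_fit, se, n_points, n_params, confidence=0.95):
    """Symmetrical asymptotic CI: from BestFit - t*SE to BestFit + t*SE."""
    df = n_points - n_params                             # degrees of freedom
    t_mult = t.ppf(1.0 - (1.0 - confidence) / 2.0, df)   # two-tailed critical value
    return best_fit - t_mult * se, best_fit + t_mult * se

# Hypothetical fit: best-fit value 10.0, SE 2.0, 100 points, 2 parameters
lower, upper = asymptotic_ci(10.0, 2.0, n_points=100, n_params=2)

# With many degrees of freedom the multiplier approaches 1.96
print(t.ppf(0.975, 1000))
```

Note that the interval is always centered on the best-fit value; only its width depends on the confidence level and the degrees of freedom.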

Before Prism 7, Prism only reported asymptotic confidence intervals that are always symmetrical around the best-fit values.

For some parameters in some models, an asymmetrical interval does a much better job of expressing precision. Prism offers this with profile likelihood confidence intervals, as a choice in the Confidence tab of the nonlinear regression dialog. The disadvantages are that they are unfamiliar to many and that they take longer to compute (but with a fast computer, you may not even notice unless you have huge data sets and choose a user-defined equation).

The idea is pretty simple. The extra sum-of-squares test compares two models.

•The more complicated model is the model you chose. The entire page of results is for this model.

•The simpler model fixes one parameter to a constant value. The idea is to fix the parameter to various values until you find the right value (as explained below).

Here is a very simplified algorithm that roughly explains the idea behind the method. Define the sum-of-squares to be SS and the degrees of freedom to be DF.

1.Set a variable Delta to the SE of the parameter.

2.Hold the parameter fixed to its best-fit value minus delta, and run the fit again letting all the other parameter values vary. Record the new SS and DF of this fit.

3.Compare the original best-fit with this fit that forces the parameter to be decreased by delta, using the extra sum-of-squares F test. The second fit holds one parameter to a constant value, so it fits one fewer parameter and thus has one more degree of freedom. Compute the P value.

a.If the P value is less than 0.05, delta is too large. Make it smaller and go back to step 2.

b.If the P value is greater than 0.05, delta is too small. Make it larger and go back to step 2.

c.If the P value is very close to 0.05, then the lower confidence limit equals the original best fit value minus delta.

4.Holding your parameter fixed to its best-fit value plus delta, run the fit again letting all the other parameter values vary. Record the SS and DF of this fit.

5.Compare the original best-fit with this fit that forces the parameter to be increased by delta using the extra sum-of-squares F test. Compute the P value.

a.If the P value is less than 0.05, delta is too large. Make it smaller and go back to step 4.

b.If the P value is greater than 0.05, delta is too small. Make it larger and go back to step 4.

c.If the P value is very close to 0.05, then the upper confidence limit equals the original best fit value plus delta.

6. Repeat for each parameter.
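The loop above can be sketched for a concrete case. This toy example profiles the slope of a straight-line fit, bisecting on delta until the extra sum-of-squares F test gives P = 0.05. It is only an illustration of the simplified algorithm; Prism's actual implementation follows Venzon and Moolgavkar, described next, and the data here are simulated:

```python
import numpy as np
from scipy.stats import f

# Simulated data: straight line y = a + b*x with Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)

def fit_full():
    """Best fit of both parameters; returns (a, b, SS)."""
    A = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b, float(np.sum((y - a - b * x) ** 2))

def ss_fixed_b(b):
    """Hold the slope b fixed and refit the intercept (its least-squares
    value is simply the mean residual), then return the new SS."""
    a = np.mean(y - b * x)
    return float(np.sum((y - a - b * x) ** 2))

def p_value(ss_fix, ss_best, df_best):
    """Extra sum-of-squares F test; the two models differ by one df."""
    F = (ss_fix - ss_best) / (ss_best / df_best)
    return f.sf(F, 1, df_best)

a0, b0, ss0 = fit_full()
df0 = x.size - 2          # data points minus parameters fit

def profile_limit(direction, tol=1e-6):
    """Search for the delta where P crosses 0.05 (steps 2-3 / 4-5 above)."""
    lo, hi = 0.0, 1.0
    while p_value(ss_fixed_b(b0 + direction * hi), ss0, df0) > 0.05:
        hi *= 2.0                        # grow until delta is too large
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if p_value(ss_fixed_b(b0 + direction * mid), ss0, df0) > 0.05:
            lo = mid                     # P too big: delta too small
        else:
            hi = mid                     # P too small: delta too large
    return b0 + direction * (lo + hi) / 2.0

lower, upper = profile_limit(-1.0), profile_limit(+1.0)
```

Note that the search runs twice, once in each direction, so the two final deltas (and hence the two limits) need not be symmetrical around the best-fit slope.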

Prism actually uses the steps detailed by Venzon and Moolgavkar (1) for each parameter. This method creates a likelihood profile for each parameter. For various possible values of the parameter, the algorithm fits the curve (optimizing the other parameters) and determines the likelihood that the data would have come from this model. The confidence interval is the range of parameter values where the likelihood is not too much lower than its maximum value. Of course, "too much lower" is defined rigorously.

The maximum likelihood is at the best-fit value of the parameter. When these profiles are graphed in texts, it is usually the negative logarithm of the likelihood that is plotted. The maximum likelihood is the same as the minimum -log(likelihood) so in these graphs the best fit value is the X value where Y is at its lowest value.

If you assume all residuals follow a Gaussian distribution, maximizing the likelihood is the same as minimizing the sum-of-squares.
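To see why, write the negative log-likelihood of n independent Gaussian residuals with standard deviation sigma in terms of the sum-of-squares:

```latex
-\log \mathcal{L}
  = \frac{n}{2}\log\!\left(2\pi\sigma^{2}\right)
  + \frac{1}{2\sigma^{2}} \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}
  = \frac{n}{2}\log\!\left(2\pi\sigma^{2}\right) + \frac{SS}{2\sigma^{2}}
```

The first term does not depend on the fitted curve, so -log(likelihood) is smallest, and the likelihood largest, exactly when SS is smallest.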

Notes:

•The final delta value for computing the upper confidence limit may not be equal to (or even be close to) the final value of delta for computing the lower limit. That's why the confidence interval may be asymmetrical around the best-fit value.

•The P value goal of 0.05 above is used only when you want 95% confidence intervals. If you want 99% confidence intervals, use 0.01, etc.

•The method in reference 1 is considerably more clever than the sketch above, so it requires fewer computations.

•The confidence intervals computed this way are for just that one parameter. We expect that each confidence interval has a 95% chance of including the true parameter value. The 95% does not apply to the set of intervals. It is not correct to say that we expect there to be a 95% chance that all the confidence intervals include the respective true parameter values.

•When computing the extra sum-of-squares F test above, note that the two models differ by one degree of freedom, because we fix one parameter and let Prism fit the others. Some publications (2) instead assume that you fix all the parameters, not just one, so the two models entered into the F test differ by K degrees of freedom, where K is the number of parameters fit. These intervals are wider, and I think the intent is that the 95% confidence level apply simultaneously to all the intervals, rather than to just one. Prism does not use this approach. With Prism, the two models being compared always differ by one degree of freedom.
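The difference in bookkeeping can be illustrated with a small helper (the sum-of-squares and degrees-of-freedom values below are invented for illustration):

```python
from scipy.stats import f

def extra_ss_p(ss_simple, df_simple, ss_complex, df_complex):
    """P value from the extra sum-of-squares F test. The simpler model
    fits fewer parameters, so it has more degrees of freedom."""
    df_diff = df_simple - df_complex
    F = ((ss_simple - ss_complex) / df_diff) / (ss_complex / df_complex)
    return f.sf(F, df_diff, df_complex)

# Prism's approach: one parameter fixed, so the models differ by 1 df
p_one = extra_ss_p(ss_simple=12.0, df_simple=19, ss_complex=10.0, df_complex=18)

# Reference (2): all K = 2 parameters fixed, so the models differ by K df
p_all = extra_ss_p(ss_simple=12.0, df_simple=20, ss_complex=10.0, df_complex=18)
```

For the same sums-of-squares, the K-degrees-of-freedom version gives a larger P value, which is why those intervals come out wider.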

•The method we use is also described by Watts(3). Prism matches the results he presents in Table IV for data in Table III.

•In some cases, the method is unable to find one of the confidence limits and reports "???" instead of a value.

When defining an equation, you can choose (in the Transforms to Report tab) to have Prism also report the difference or ratio (etc.) of two parameters, with its standard error and confidence interval. The calculations account for the covariance of the two parameters. See this document.
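For the difference of two parameters, the standard error follows the usual error-propagation rule. A minimal sketch (the function name is hypothetical, not a Prism API, and the numbers are made up):

```python
import math

def se_of_difference(se1, se2, covariance):
    """SE of (P1 - P2); the covariance term matters because two parameter
    estimates from the same fit are usually correlated."""
    return math.sqrt(se1 ** 2 + se2 ** 2 - 2.0 * covariance)

# Positively correlated estimates shrink the SE of the difference:
# se_of_difference(0.5, 0.4, 0.1) is smaller than se_of_difference(0.5, 0.4, 0.0)
```

Ignoring the covariance (treating the parameters as independent) would overstate the SE whenever the estimates are positively correlated.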

1.Venzon DJ, Moolgavkar SH. A method for computing profile-likelihood-based confidence intervals. Applied Statistics. 1988;37(1):87.

2.Kemmer, G., & Keller, S. (2010). Nonlinear least-squares data fitting in Excel spreadsheets. Nature Protocols, 5(2), 267–281. http://doi.org/10.1038/nprot.2009.182

3.Watts, D.G. (2010) Parameter estimates from nonlinear models, Chapter 2 of Essential Numerical Computer Methods by M Johnson, Academic Press 2010.