
 The goal of nonlinear regression

Nonlinear regression is used for two purposes.

Scientists use nonlinear regression with one of two distinct goals:

To fit a model to your data in order to obtain best-fit values of the parameters, or to compare the fits of alternative models. If this is your goal, you must pick a model (or two alternative models) carefully, and pay attention to all the results.

To simply fit a smooth curve in order to interpolate values from the curve, or perhaps to draw a graph with a smooth curve. If this is your goal, you can assess it purely by looking at the graph of data and curve. There is no need to learn much theory. Jump right to an explanation of interpolation with Prism.

The general idea of regression

Linear regression fits a straight-line model to your data. Nonlinear regression extends this idea to fit any model to your data. Distinguish nonlinear regression from linear regression, and from other types of regression.

The goal of linear and nonlinear regression is to adjust the values of the model's parameters to find the line or curve that comes closest to your data. So with linear regression, the goal is to find the best-fit values of the slope and intercept that make the line come close to the data. With nonlinear regression of a normalized dose-response curve, the goal is to adjust the values of the EC50 (the concentration that provokes a response halfway between the minimum and maximum responses) and the slope of the curve so that the curve comes as close as possible to the data.
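To make this concrete, here is a minimal sketch (not Prism's implementation) of fitting a normalized dose-response model by nonlinear regression with SciPy. The synthetic data, parameter names, and starting values are all assumptions chosen for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(log_x, log_ec50, hill_slope):
    """Normalized dose-response model: response runs from 0 to 100."""
    return 100.0 / (1.0 + 10.0 ** ((log_ec50 - log_x) * hill_slope))

# Synthetic data generated from known parameters (logEC50 = -6, slope = 1)
log_conc = np.linspace(-9.0, -3.0, 20)
response = dose_response(log_conc, -6.0, 1.0)

# Nonlinear regression adjusts logEC50 and the slope to minimize the
# sum of squared vertical distances between the points and the curve.
popt, pcov = curve_fit(dose_response, log_conc, response, p0=[-5.0, 1.5])
log_ec50_fit, hill_fit = popt
```

With noiseless data like this, the fit recovers the generating parameters; with real data, the best-fit values are those that minimize the sum of squared residuals.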

More precisely, the goal of regression is to find the values of the parameters that are most likely to be correct. To do this requires making an assumption about the scatter of data around the curve.

Least-squares regression

The most common assumption is that data points are randomly scattered around an ideal curve (or line) with the scatter following a Gaussian distribution. If you accept this assumption, then the goal of regression is to adjust the model's parameters to find the curve that minimizes the sum of the squares of the vertical distances of the points from the curve.
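For the straight-line case, this objective can be written down and minimized directly. The sketch below (plain Python, with made-up data) computes the slope and intercept that minimize the sum of squared vertical distances, using the closed-form normal equations:

```python
# Made-up (x, y) data for illustration
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

def sse(slope, intercept):
    """Sum of squared vertical distances from the points to the line."""
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

# Closed-form least-squares solution (normal equations)
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
```

Nudging either parameter away from these values can only increase the sum of squares, which is what "best-fit" means under this assumption.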

Why minimize the sum of the squares of the distances? Why not simply minimize the sum of the actual distances?

If the random scatter follows a Gaussian distribution, it is far more likely to have two medium size deviations (say 5 units each) than to have one small deviation (1 unit) and one large (9 units). A procedure that minimized the sum of the absolute value of the distances would have no preference between a curve that was 5 units away from two points and one that was 1 unit away from one point and 9 units from another. The sum of the distances (more precisely, the sum of the absolute value of the distances) is 10 units in each case. A procedure that minimizes the sum of the squares of the distances prefers to be 5 units away from two points (sum-of-squares = 50) rather than 1 unit away from one point and 9 units away from another (sum-of-squares = 82). If the scatter is Gaussian (or nearly so), the curve determined by minimizing the sum-of-squares is most likely to be correct.
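The arithmetic in this comparison is easy to verify with a few lines of Python (a toy check, not part of Prism):

```python
# Two scenarios with the same total absolute distance from the curve
medium_deviations = [5.0, 5.0]   # two medium-size deviations
mixed_deviations = [1.0, 9.0]    # one small and one large deviation

# Absolute distances cannot tell the scenarios apart: both sum to 10
sum_abs_medium = sum(abs(d) for d in medium_deviations)
sum_abs_mixed = sum(abs(d) for d in mixed_deviations)

# Squaring penalizes the large deviation much more heavily
ss_medium = sum(d ** 2 for d in medium_deviations)   # 25 + 25 = 50
ss_mixed = sum(d ** 2 for d in mixed_deviations)     # 1 + 81 = 82
```

Because 50 &lt; 82, least squares prefers the two-medium-deviations scenario, matching what a Gaussian scatter model says is more probable.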