
Nonlinear regression works iteratively. With each iteration, it alters the values of the parameters to lower the sum-of-squares.

With global fitting, Prism does not fit each data set individually and then average them together. Rather, it fits all the data at once to the global model. With each iteration, it minimizes the sum (over data sets) of sum (over data points) of squares. There is nothing special about any particular parameter in any particular equation. It doesn't know about EC50s or plateaus or rate constants. It just uses the nonlinear regression algorithm to step by step improve the sum of sum of squares.
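The quantity being minimized can be sketched in a few lines of Python. This is an illustrative sketch, not Prism's implementation; the names `global_sum_of_squares` and `predict` are made up for the example.

```python
import numpy as np

def global_sum_of_squares(data_sets, predict, params):
    # Sum (over data sets) of sum (over data points) of squared residuals.
    # data_sets: list of (x, y) arrays; predict(x, i, params) returns the
    # model's predicted Y values for data set i (hypothetical signature).
    total = 0.0
    for i, (x, y) in enumerate(data_sets):
        residuals = y - predict(x, i, params)
        total += np.sum(residuals ** 2)
    return total
```

The fitting engine treats this single number as the objective: each iteration adjusts all parameters (shared and individual alike) to make it smaller.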

Here are the details:

Let's assume that we have four data sets and an equation with three parameters. One way to represent this is with the equation y=f(X, A, B, C); in other words, the model computes Y as a function of X and three parameters: A, B, and C. In this case, let's assume that A is shared among your four data sets, while B and C are fit individually for each.

The above is the user-friendly description of the experimental design (the problem being solved by the fitting engine). From a mathematical point of view, Prism actually fits the following function to all the data (from all data sets):

F(X, A, B1, B2, B3, B4, C1, C2, C3, C4) =

    f(X, A, B1, C1)   if X is from data set 1
    f(X, A, B2, C2)   if X is from data set 2
    f(X, A, B3, C3)   if X is from data set 3
    f(X, A, B4, C4)   if X is from data set 4
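The piecewise form above can be sketched directly in Python. The exponential model used for f here is just an illustrative stand-in; any three-parameter model with the same signature would do.

```python
import numpy as np

def f(x, a, b, c):
    # Hypothetical three-parameter model y = f(X, A, B, C).
    return a + b * np.exp(-c * x)

def F(x, dataset_index, a, b_list, c_list):
    # The composite global model: A is shared across all data sets,
    # while dataset_index selects which (B_i, C_i) pair applies,
    # mirroring the IF/THEN branches of the piecewise equation.
    return f(x, a, b_list[dataset_index], c_list[dataset_index])
```

Every data point carries its data-set index, so a single call to F can predict Y for any point in any data set from one combined parameter vector.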

Note that parameter A (which is shared) remains a single "common" parameter, while parameters B and C (not shared) are split into "individual" parameters B1, B2, B3, B4 and C1, C2, C3, C4 respectively.

Here is an alternative way to write this function:

F(X, A, B1, B2, B3, B4, C1, C2, C3, C4) = f(X, A, B1*DummyB_1 + B2*DummyB_2 + B3*DummyB_3 + B4*DummyB_4, C1*DummyC_1 + C2*DummyC_2 + C3*DummyC_3 + C4*DummyC_4)

In this version of the equation:

DummyB_i = DummyC_i = 1, if X belongs to the i-th data set
DummyB_i = DummyC_i = 0, if X does not belong to the i-th data set
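The dummy-variable form can be sketched the same way; this is again an illustration with a made-up model f, not Prism's code. The dummy variables collapse each weighted sum to the single term belonging to the point's data set.

```python
import numpy as np

def f(x, a, b, c):
    # Hypothetical three-parameter model (same stand-in as above).
    return a + b * np.exp(-c * x)

def F(x, dataset_index, a, b, c):
    # b holds B1..B4 and c holds C1..C4. DummyB_i = DummyC_i = 1 only
    # for the point's own data set, so B and C reduce to B_i and C_i.
    n = len(b)
    dummy = [1.0 if i == dataset_index else 0.0 for i in range(n)]
    B = sum(bi * di for bi, di in zip(b, dummy))
    C = sum(ci * di for ci, di in zip(c, dummy))
    return f(x, a, B, C)
```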

Both forms of the equation (which are equivalent) compute a Y value for each curve at every X value in the data, using the current values of the parameters. Prism can then compute the difference between the actual Y value (for each point in each data set) and the predicted Y value, and sum the squares of those differences to get the sum-of-squares that nonlinear regression minimizes. So nonlinear regression works as usual. It just has a fancier model that accounts for which parameters have a separate value for each data set (individual) and which have a single value for all data sets (shared).
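Putting the pieces together, here is a runnable sketch of a global fit with one shared parameter, using SciPy's generic minimizer on the global sum-of-squares. This is not Prism's fitting engine; the straight-line model and the noise-free data are made up purely so the answer is easy to check (shared slope A = 2, intercepts B1 = 1 and B2 = -1).

```python
import numpy as np
from scipy.optimize import minimize

# Two illustrative data sets sharing slope A but with individual intercepts.
x = np.array([0.0, 1.0, 2.0, 3.0])
data_sets = [(x, 2.0 * x + 1.0), (x, 2.0 * x - 1.0)]

def global_ssq(params):
    # params = [A, B1, B2]: A is shared, B is fit per data set.
    a, b1, b2 = params
    total = 0.0
    for (xi, yi), b in zip(data_sets, [b1, b2]):
        total += np.sum((yi - (a * xi + b)) ** 2)
    return total

result = minimize(global_ssq, x0=[0.0, 0.0, 0.0])
a_fit, b1_fit, b2_fit = result.x
```

Because the objective is the sum over both data sets, the shared slope is informed by all eight points at once, which is the whole point of global fitting.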


© 1995-2019 GraphPad Software, LLC. All rights reserved.