The runs test asks whether the data points deviate systematically from the fitted curve. The runs test is useful only if you entered single Y values (no replicates) or chose to fit only the means rather than individual replicates (on the weighting tab). If you entered and analyzed replicate data, use the replicates test instead.
A run is a series of consecutive points that are either all above or all below the regression curve; in other words, a run is a consecutive series of points whose residuals are either all positive or all negative. After fitting a curve, Prism counts the actual number of runs and calculates the expected number of runs (based on the number of data points). The runs test compares these two values.
If the data points are randomly distributed above and below the regression curve, it is possible to calculate the expected number of runs. If there are Na points above the curve and Nb points below the curve, the number of runs you expect to see equals [(2NaNb)/(Na+Nb)]+1.
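The two quantities the test compares can be computed directly from the residuals. Here is a minimal sketch (not Prism's own code) that counts the actual runs and evaluates the expected-runs formula above; the example residuals are hypothetical:

```python
import numpy as np

def count_runs(residuals):
    """Count runs: maximal stretches of consecutive residuals
    that share the same sign (all positive or all negative)."""
    signs = np.sign(residuals)
    signs = signs[signs != 0]          # drop points exactly on the curve
    # a new run starts wherever the sign changes
    return 1 + int(np.sum(signs[1:] != signs[:-1]))

def expected_runs(residuals):
    """Expected number of runs under random scatter about the
    curve: [(2*Na*Nb)/(Na+Nb)] + 1."""
    na = int(np.sum(residuals > 0))    # Na: points above the curve
    nb = int(np.sum(residuals < 0))    # Nb: points below the curve
    return 2.0 * na * nb / (na + nb) + 1.0

# hypothetical residuals from a curve fit
res = np.array([0.5, 0.8, 0.3, -0.2, -0.6, -0.1, 0.4, -0.3])
print(count_runs(res))     # 4 runs: +++, ---, +, -
print(expected_runs(res))  # 2*4*4/(4+4) + 1 = 5.0
```

With 4 points above and 4 below, you would expect 5 runs on average; observing only 4 is a mild shortfall, which is what the P value described below quantifies.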
If the model fits the data poorly, you will tend to see clusters of points on the same side of the curve. This means you will have fewer runs than predicted from sample size, and the runs test will produce a low P value.
The P value answers this question:
If the data are randomly scattered above and below the curve, what is the probability of observing as few runs as (or fewer than) actually observed in this analysis?
If the runs test reports a low P value, conclude that the curve doesn't describe the data very well. The problem might be that some of the errors are not independent, that outliers are mucking up the fit, or that you picked the wrong model.
Note that the P value is one-tailed. If you observed more runs than expected, the P value will be higher than 0.50.
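The one-tailed P value can be computed exactly from the standard Wald-Wolfowitz runs distribution: count the arrangements of Na positive and Nb negative residuals that produce each possible number of runs, and sum the probabilities for the observed count or fewer. This is a sketch of that calculation (Prism's exact implementation is not specified here), continuing the example of 4 points above, 4 below, and 4 observed runs:

```python
from math import comb

def runs_test_pvalue(na, nb, r_obs):
    """One-tailed P value for the runs test: the probability of
    observing r_obs runs or fewer, given na residuals above and
    nb below the curve, if the points scatter randomly."""
    def n_arrangements(r):
        # number of +/- sequences of na pluses and nb minuses
        # containing exactly r runs (standard combinatorial result)
        if r % 2 == 0:
            k = r // 2
            return 2 * comb(na - 1, k - 1) * comb(nb - 1, k - 1)
        k = (r - 1) // 2
        return (comb(na - 1, k) * comb(nb - 1, k - 1)
                + comb(na - 1, k - 1) * comb(nb - 1, k))
    total = comb(na + nb, na)   # all equally likely arrangements
    return sum(n_arrangements(r) for r in range(2, r_obs + 1)) / total

print(runs_test_pvalue(4, 4, 4))   # 26/70, about 0.371
```

A P value this high gives no reason to doubt the fit; only when clustering pushes the run count well below the expected number does the P value become small.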
Prism reports the runs test for each data set, but does not report a global runs test. Prior versions of Prism reported a global runs test, but it is not clear if that result was actually meaningful.