GraphPad Curve Fitting Guide

Q&A: Deming Regression

Why doesn't Prism report R² with Deming regression results?

When Prism performs Deming regression, it reports the slope and intercept with confidence intervals, and reports a P value testing the null hypothesis that the slope is really zero. However, Prism does not report any measure of goodness-of-fit with Deming regression, so it does not report R². The reason is that we have been unable to find any paper or text explaining how to compute or interpret such a value. In ordinary linear or nonlinear regression, R² is the fraction of the variation that is accounted for by the model. With Deming regression, that definition doesn't really make sense, and it isn't obvious to us how to extend it.

Is Deming regression the same as orthogonal linear regression?

In Prism's Deming dialog, you specify whether X and Y are measured in the same units with equal uncertainties (variation). If you choose that option, Deming regression minimizes the sum of the squares of the perpendicular distances of the points from the line. This is also called orthogonal linear regression. If you specify different SD values for X and Y, then Deming regression is not the same as orthogonal linear regression.
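To make the relationship concrete, here is a minimal sketch of the standard method-of-moments Deming estimate (in Python with NumPy; the function name and parameters are illustrative, not Prism's). When the two SDs are equal it reduces to orthogonal regression:

```python
import numpy as np

def deming_slope_intercept(x, y, sdx=1.0, sdy=1.0):
    """Deming regression of Y on X, given the SDs of the measurement
    errors in X and Y. Illustrative sketch of the closed-form
    method-of-moments estimate; not Prism's implementation."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    delta = (sdy / sdx) ** 2                 # ratio of error variances
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)            # scatter of X about its mean
    syy = np.sum((y - ybar) ** 2)            # scatter of Y about its mean
    sxy = np.sum((x - xbar) * (y - ybar))    # cross-scatter
    # Closed-form slope; with delta == 1 this minimizes the sum of
    # squared perpendicular distances (orthogonal regression).
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ybar - slope * xbar
    return slope, intercept
```

With equal SDs the fit treats X and Y symmetrically, unlike ordinary least squares, which minimizes only the vertical distances.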

How was the equation to compute the SD from a set of duplicate measurements derived?

The previous page showed the equation for assessing the uncertainty (error) of a method from duplicate measurements of a number of samples using that method:

SD = sqrt( Σdᵢ² / (2N) )

where dᵢ is the difference between the two duplicate measurements of sample i, and N is the number of samples.
It is easy to derive that equation. To compute a variance, you average the squared deviations of the values from their mean. Within a pair, the mean lies midway between the two values, so each value is dᵢ/2 from the pair's mean. Squared, that is dᵢ²/4, and since each pair has two values, each pair contributes dᵢ²/2 to the sum of squared deviations. Add that up over all N pairs and divide by N, and you have the variance, Σdᵢ²/(2N). Take the square root to get the SD.
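As a check on the arithmetic, here is a small sketch (Python with NumPy; the helper name is illustrative, not part of Prism) that computes the SD from duplicate pairs:

```python
import numpy as np

def sd_from_duplicates(first, second):
    """Estimate a method's SD from N duplicate pairs:
    SD = sqrt( sum(d_i^2) / (2N) ), where d_i is the difference
    within pair i. Illustrative helper, not Prism's code."""
    d = np.asarray(first, dtype=float) - np.asarray(second, dtype=float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))
```

For example, for the pairs (10, 12), (8, 8) and (5, 7), the differences are -2, 0 and -2, so the SD is sqrt(8/6), about 1.155.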

Why N, rather than N-1?

I suspect it should be N-1, but that won't matter much. Using N makes the SD a bit smaller than it would be otherwise, but that is true for both the X and Y SDs, so the ratio of the two will hardly change.