Prism offers four normality tests (offered as part of the Column Statistics analysis):

- Anderson-Darling
- D'Agostino-Pearson omnibus
- Shapiro-Wilk
- Kolmogorov-Smirnov (with the Dallal-Wilkinson-Lilliefors corrected P value)

We recommend using the D'Agostino-Pearson omnibus test. The Shapiro-Wilk test also works very well if every value is unique, but it does not handle ties well, and the basis of the test is hard for nonmathematicians to understand. For these reasons, we prefer the D'Agostino-Pearson test, even though the Shapiro-Wilk test works well in most cases.

The Kolmogorov-Smirnov test, with the Dallal-Wilkinson-Lilliefors corrected P value, is included for compatibility with older versions of Prism, but is not recommended.

All of these tests ask how far a distribution deviates from the Gaussian ideal. Since they quantify the deviations in different ways, it isn't surprising that they can give different results. The fundamental problem is that these tests do not ask which of two defined distributions (say, Gaussian vs. exponential) better fits the data. Instead, they compare Gaussian vs. not Gaussian, which is a pretty vague comparison.

The Kolmogorov-Smirnov test requires 5 or more values. The Shapiro-Wilk test requires 3 or more values. The D'Agostino test requires 8 or more values, as does the Anderson-Darling test.
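The same tests are available in common statistics libraries, which can be handy for checking Prism's results. A minimal sketch using SciPy (the sample data and seed are illustrative, and SciPy's implementations may differ in detail from Prism's; note that SciPy's plain KS P value lacks the Lilliefors correction, and its Anderson-Darling routine reports critical values rather than a P value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=100, scale=15, size=50)

# D'Agostino-Pearson omnibus test (reports the K2 statistic)
k2, p_dagostino = stats.normaltest(data)

# Shapiro-Wilk test (reports the W statistic)
w, p_shapiro = stats.shapiro(data)

# Kolmogorov-Smirnov test against a normal distribution whose mean and SD
# are estimated from the data. Caution: without the Lilliefors correction,
# this P value is too large when the parameters are estimated this way.
z = (data - data.mean()) / data.std(ddof=1)
ks, p_ks = stats.kstest(z, "norm")

# Anderson-Darling test (SciPy gives the statistic plus critical values,
# not a P value)
ad = stats.anderson(data, dist="norm")

# The D'Agostino-Pearson test needs 8 or more values
try:
    stats.normaltest(data[:5])
except ValueError:
    print("normaltest needs 8 or more values")
```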

The normality tests all report a P value. To understand any P value, you need to know the null hypothesis. In this case, the null hypothesis is that all the values were sampled from a Gaussian distribution. The P value answers the question:

If that null hypothesis were true, what is the chance that a random sample of data would deviate from the Gaussian ideal as much as these data do?
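One way to see what this definition means in practice: if you repeatedly draw samples from a truly Gaussian population, about 5% of them will have P < 0.05 just by chance. A small simulation sketch (the choice of SciPy's Shapiro-Wilk test, the sample size, and the seed are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 2000, 30

# P values from samples that really were drawn from a Gaussian population
pvals = np.array([stats.shapiro(rng.normal(size=n)).pvalue
                  for _ in range(n_sims)])

# Under the null hypothesis, P < 0.05 occurs about 5% of the time
frac_below = (pvals < 0.05).mean()
print(f"fraction with P < 0.05: {frac_below:.3f}")
```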

You set the threshold in the analysis dialog. The default is the traditional 0.05 cut-off. If P < 0.05, the data do not pass the normality test. If P > 0.05, the data do pass the normality test. This cut-off, of course, is totally arbitrary.

Can a sample of data be Gaussian? No. A population has a distribution that may be Gaussian or not. A sample of data cannot be Gaussian or not Gaussian. That term applies only to the entire population of values from which the data were sampled.

Were the data sampled from a Gaussian distribution? Probably not. In almost all cases, we can be sure that the data were not sampled from an ideal Gaussian distribution, because an ideal Gaussian distribution extends to very low negative numbers and very high positive values. Those extreme values comprise a tiny fraction of the Gaussian population, but they are part of the distribution. When collecting data, there are constraints on the possible values. Pressures, concentrations, weights, enzyme activities, and many other variables cannot have negative values, so cannot be sampled from perfect Gaussian distributions. Other variables can be negative, but have physical or physiological limits that don't allow very large positive values or very low negative ones.

Each normality test reports an intermediate value that it uses to compute the P value. Unfortunately, there is no obvious way to interpret K2 (computed by the D'Agostino-Pearson test), KS (computed by the Kolmogorov-Smirnov test), or W (computed by the Shapiro-Wilk test). As far as I know, there is no straightforward way to use these values to decide whether the deviation from normality is severe enough to require switching away from parametric tests. Prism reports these values only so you can compare results with texts and other programs.

How useful are normality tests? Not very, in most situations. With small samples, the normality tests don't have much power to detect nongaussian distributions. With large samples, it doesn't matter so much if data are nongaussian, since the t tests and ANOVA are fairly robust to violations of this assumption.
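A quick simulation illustrates both points. Here we run SciPy's Shapiro-Wilk test on samples from an exponential distribution, which is clearly not Gaussian (the choice of test, the sample sizes, and the seed are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims = 1000

def frac_passing(n):
    """Fraction of exponential samples of size n that 'pass' (P > 0.05)."""
    return float(np.mean([stats.shapiro(rng.exponential(size=n)).pvalue > 0.05
                          for _ in range(n_sims)]))

# Small samples: the test has little power, so clearly nongaussian data
# often pass the normality test anyway.
small = frac_passing(10)

# Large samples: the test almost always detects the deviation, even though
# t tests and ANOVA might still perform acceptably on such data.
large = frac_passing(200)

print(f"n=10:  {small:.2f} of exponential samples pass")
print(f"n=200: {large:.2f} of exponential samples pass")
```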

What you would want is a test that tells you whether the deviations from the Gaussian ideal are severe enough to invalidate statistical methods that assume a Gaussian distribution. But normality tests don't do this.

References

1. RB D'Agostino, "Tests for Normal Distribution" in Goodness-of-Fit Techniques, edited by RB D'Agostino and MA Stephens, Marcel Dekker, 1986.

Parts of this page are excerpted from Chapter 24 of Motulsky, H.J. (2010). Intuitive Biostatistics, 2nd edition. Oxford University Press. ISBN=978-0-19-973006-3.