

These are called normality tests because Gaussian distributions are also called normal distributions.
Prism offers four normality tests (offered as part of the Column Statistics analysis): the D'Agostino-Pearson omnibus test, the Shapiro-Wilk test, the Anderson-Darling test, and the Kolmogorov-Smirnov test. We recommend using the D'Agostino-Pearson omnibus test. The Shapiro-Wilk test also works very well if every value is unique, but does not work well when there are ties, and the basis of the test is hard for nonmathematicians to understand. For these reasons, we prefer the D'Agostino-Pearson test, even though the Shapiro-Wilk test works well in most cases. The Kolmogorov-Smirnov test, with the Dallal-Wilkinson-Lilliefor corrected P value, is included for compatibility with older versions of Prism, but is not recommended.
These tests all ask how far a distribution deviates from the Gaussian ideal. Since they quantify those deviations in different ways, it isn't surprising that they give different results. The fundamental problem is that these tests do not ask which of two defined distributions (say, Gaussian vs. exponential) better fits the data. Instead, they compare Gaussian vs. not Gaussian, which is a pretty vague comparison.
The Kolmogorov-Smirnov test requires 5 or more values. The Shapiro-Wilk test requires 3 or more values. The D'Agostino-Pearson test requires 8 or more values, as does the Anderson-Darling test.
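As a rough illustration, the four tests can be run on the same sample using their SciPy equivalents. This is a sketch, not Prism's implementation: in particular, SciPy's Kolmogorov-Smirnov P value does not include the Dallal-Wilkinson-Lilliefor correction, so it is not comparable to Prism's, and SciPy's Anderson-Darling routine reports critical values rather than a P value.

```python
# Sketch: running the four normality tests with SciPy (assumed
# equivalents; Prism's exact implementations and corrections differ).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=100.0, scale=15.0, size=50)  # sampled from a Gaussian

# D'Agostino-Pearson omnibus test (requires n >= 8); reports K2 and P
k2, p_dp = stats.normaltest(data)

# Shapiro-Wilk test (requires n >= 3); reports W and P
w, p_sw = stats.shapiro(data)

# Anderson-Darling test; SciPy reports a statistic and critical values
ad = stats.anderson(data, dist='norm')

# Kolmogorov-Smirnov test (requires n >= 5) against a Gaussian with
# parameters estimated from the data. Estimating the parameters this way
# is exactly why a Lilliefors-type correction is needed; SciPy's P value
# here is uncorrected, so it is too conservative.
ks, p_ks = stats.kstest(data, 'norm', args=(data.mean(), data.std(ddof=1)))

for name, p in [("D'Agostino-Pearson", p_dp),
                ("Shapiro-Wilk", p_sw),
                ("Kolmogorov-Smirnov (uncorrected)", p_ks)]:
    print(f"{name}: P = {p:.3f} -> {'pass' if p > 0.05 else 'fail'}")
```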
The normality tests all report a P value. To understand any P value, you need to know the null hypothesis. In this case, the null hypothesis is that all the values were sampled from a Gaussian distribution. The P value answers the question: If that null hypothesis were true, what is the chance that a random sample of data would deviate from the Gaussian ideal as much as these data do? 
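That definition of the P value can be checked by simulation. If the null hypothesis is true (every sample really is drawn from a Gaussian), the P value is uniformly distributed, so about 5% of samples should "fail" a normality test at the 0.05 threshold just by chance. A minimal sketch (using the Shapiro-Wilk test as implemented in SciPy; the sample size and simulation count are arbitrary choices):

```python
# Sketch: when the null hypothesis is true, about 5% of samples
# should have P < 0.05 just by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 2000, 30
failures = 0
for _ in range(n_sims):
    sample = rng.normal(size=n)   # null hypothesis is true by construction
    _, p = stats.shapiro(sample)
    if p < 0.05:
        failures += 1

print(f"Fraction failing at alpha=0.05: {failures / n_sims:.3f}")  # near 0.05
```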
You set the threshold in the analysis dialog. The default is to use the traditional 0.05 cutoff. If P < 0.05, the data do not pass the normality test. If P > 0.05, the data do pass the normality test. This cutoff, of course, is totally arbitrary.
No. A population has a distribution that may be Gaussian or not. A sample of data cannot be Gaussian or not Gaussian. That term can only apply to the entire population of values from which the data were sampled. 
Probably not. In almost all cases, we can be sure that the data were not sampled from an ideal Gaussian distribution. That is because an ideal Gaussian distribution includes some very low negative numbers and some extremely high positive values. Those values comprise a tiny fraction of all the values in the Gaussian population, but they are part of the distribution. When collecting data, there are constraints on the possible values. Pressures, concentrations, weights, enzyme activities, and many other variables cannot have negative values, so cannot be sampled from perfect Gaussian distributions. Other variables can be negative, but have physical or physiological limits that rule out extremely large positive values or extremely low negative values.
Yes, but plenty of simulations have shown that these tests work well even when the population is only approximately Gaussian. 
Not really. It is hard to define what "close enough" means, and the normality tests were not designed with this in mind. 
No. Deciding whether to use a parametric or nonparametric test is a hard decision that should not be automated based on a normality test. 
Each normality test reports an intermediate value that it uses to compute the P value. Unfortunately, there is no obvious way to interpret K2 (computed by the D'Agostino-Pearson test), KS (computed by the Kolmogorov-Smirnov test), or W (computed by the Shapiro-Wilk test). As far as I know, there is no straightforward way to use these values to decide if the deviation from normality is severe enough to switch away from parametric tests. Prism only reports these values so you can compare results with texts and other programs.
Not very useful, in most situations. With small samples, the normality tests don't have much power to detect non-Gaussian distributions. With large samples, it doesn't matter so much if data are non-Gaussian, since the t tests and ANOVA are fairly robust to violations of this assumption. What you would want is a test that tells you whether the deviations from the Gaussian ideal are severe enough to invalidate statistical methods that assume a Gaussian distribution. But normality tests don't do this.
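A quick simulation illustrates both halves of that problem. The distributions and sample sizes below are illustrative choices, and the SciPy tests are assumed stand-ins for Prism's: a small sample from a clearly skewed (lognormal) population often passes a normality test, while a huge sample from a nearly Gaussian population (a t distribution with 10 degrees of freedom, which has only slightly heavier tails) fails it decisively.

```python
# Sketch: small samples lack power; large samples flag trivial deviations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Small sample from a clearly skewed (lognormal) population
small = rng.lognormal(mean=0.0, sigma=1.0, size=10)
_, p_small = stats.shapiro(small)

# Large sample from a nearly Gaussian population (t with 10 df has
# excess kurtosis of only 1, yet n = 100,000 makes that detectable)
large = rng.standard_t(df=10, size=100_000)
_, p_large = stats.normaltest(large)

print(f"n=10, lognormal: P = {p_small:.3f}")
print(f"n=100000, t(10): P = {p_large:.2e}")
```

The second P value is essentially zero even though t tests on such data would behave almost exactly as they do on Gaussian data, which is the sense in which a pass/fail normality test answers the wrong question.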
1. R.B. D'Agostino, "Tests for Normal Distribution" in Goodness-of-Fit Techniques, edited by R.B. D'Agostino and M.A. Stephens, Marcel Dekker, 1986.
Parts of this page are excerpted from Chapter 24 of Motulsky, H.J. (2010). Intuitive Biostatistics, 2nd edition. Oxford University Press. ISBN 9780199730063.