

Why not always use nonparametric tests? You avoid assuming that the data are sampled from a Gaussian distribution, an assumption that is hard to verify. The problem is that nonparametric tests have less statistical power than standard (parametric) tests. How much less power? The answer depends on sample size.
This is best understood by example. Here are some sample data, comparing a measurement in two groups, each with three subjects.
Control    Treated
  3.4      1234.5
  3.7      1335.7
  3.5      1334.8
When you see those values, it seems obvious that the treatment drastically increases the value being measured.
But let's analyze these data with the Mann-Whitney test (a nonparametric test that compares two unmatched groups). This test sees only ranks. So you enter the values above into Prism, but the Mann-Whitney calculations see only the ranks:
Control    Treated
  1          4
  3          6
  2          5
The Mann-Whitney test then asks: if the ranks were randomly shuffled between the control and treated groups, what is the chance that the three lowest ranks would all land in one group and the three highest in the other? There are 20 equally likely ways to assign three of the six ranks to the control group, and only 2 of them (all the low ranks in one group or the other) are this extreme, so the two-tailed P value is 2/20 = 0.10. The nonparametric test looks only at rank, ignoring the fact that the treated values aren't just higher, but a whole lot higher. Using the traditional significance level of 5%, these results are not significantly different. This example shows that with N=3 in each group, the Mann-Whitney test can never obtain a P value less than 0.05. In other words, with three subjects in each group and the conventional definition of 'significance', the Mann-Whitney test has zero power.
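The exact calculation can be checked by brute force. The sketch below (a minimal illustration, not how Prism computes it) ranks the pooled values, then enumerates all 20 ways the six ranks could be split between the two groups and counts how many are at least as extreme as the observed split:

```python
from itertools import combinations

# The example data: control vs. treated, three subjects each
control = [3.4, 3.7, 3.5]
treated = [1234.5, 1335.7, 1334.8]

# Rank the pooled values (1 = smallest); this is all the test sees
pooled = sorted(control + treated)
rank = {v: i + 1 for i, v in enumerate(pooled)}
obs_sum = sum(rank[v] for v in control)  # control rank sum = 1+2+3 = 6

n, k = len(pooled), len(control)
expected = k * (n + 1) / 2  # expected rank sum under the null (10.5 here)

# Enumerate every possible set of k ranks for the control group
extreme = 0
total = 0
for combo in combinations(range(1, n + 1), k):
    total += 1
    # Two-tailed: count splits as far or farther from the expected rank sum
    if abs(sum(combo) - expected) >= abs(obs_sum - expected):
        extreme += 1

p_two_tailed = extreme / total
print(p_two_tailed)  # 0.1 -- only 2 of the 20 shuffles are this extreme
```

Because only the ranks enter the calculation, replacing 1234.5 with 4.5 would give exactly the same P value.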
With large samples, in contrast, the Mann-Whitney test has almost as much power as the t test. For Gaussian data, its asymptotic relative efficiency is 3/pi, about 0.95, so it needs only about 5% more subjects to match the power of the t test. To learn more about the relative power of nonparametric and conventional tests with large sample sizes, look up the term "asymptotic relative efficiency" in an advanced statistics book.
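The sample-size cost implied by that efficiency figure is easy to work out. This small sketch converts the asymptotic relative efficiency of 3/pi into the extra subjects the Mann-Whitney test needs, per group, relative to the t test when the data really are Gaussian:

```python
import math

# Asymptotic relative efficiency of Mann-Whitney vs. the t test
# when the data are actually sampled from a Gaussian distribution
are = 3 / math.pi          # about 0.955

# Sample-size inflation factor: how many subjects Mann-Whitney needs
# for every subject the t test needs, at matched (large-sample) power
inflation = 1 / are        # about 1.047, i.e. roughly 5% more subjects

print(round(inflation, 3))
```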