The P value from a Fisher's or chi-square test answers this question:
If there really is no association between the variable defining the rows and the variable defining the columns in the overall population, what is the chance that random sampling would result in an association as strong (or stronger) as observed in this experiment?
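For the chi-square version of this test on a 2×2 table, the statistic and its P value can be sketched with nothing but the Python standard library. This is not Prism's actual code, just the textbook formula (no Yates correction); the function name `chi_square_2x2` is ours, and the P value uses the identity that the right tail of a 1-df chi-square equals `erfc(sqrt(x/2))`:

```python
from math import erfc, sqrt

def chi_square_2x2(a, b, c, d):
    """Chi-square test (1 df, no Yates correction) for the 2x2 table
    [[a, b], [c, d]]. Returns the statistic and its two-sided P value."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))   # right tail of chi-square with 1 df
    return chi2, p
```

A small P value answers the question above: under no association, an association this strong would rarely arise from random sampling alone.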
The chi-square test for trend is performed when there are two columns and more than two rows arranged in a natural order. It is also called the Cochran-Armitage method. The P value answers this question:
If there is no linear trend between row number and the fraction of subjects in the left column, what is the chance that you would happen to observe such a strong trend as a consequence of random sampling?
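The Cochran-Armitage computation can be sketched from its standard formula: score each row (1, 2, 3, … by default), compare the observed successes in each row with what the pooled proportion predicts, and refer the squared standardized trend statistic to a 1-df chi-square distribution. This is an illustrative stdlib-Python sketch, not Prism's implementation, and the function name is ours:

```python
from math import erfc, sqrt

def chi_square_for_trend(successes, totals, scores=None):
    """Cochran-Armitage test for linear trend across ordered rows.
    successes[i] = count in the left column for row i; totals[i] = row total."""
    k = len(successes)
    if scores is None:
        scores = list(range(1, k + 1))   # equally spaced scores by default
    N = sum(totals)
    pbar = sum(successes) / N            # pooled proportion under the null
    # Trend statistic: score-weighted deviations from the pooled expectation
    T = sum(s * (x - n * pbar) for s, x, n in zip(scores, successes, totals))
    var = pbar * (1 - pbar) * (
        sum(s * s * n for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / N)
    chi2 = T * T / var
    p = erfc(sqrt(chi2 / 2))             # chi-square with 1 df
    return chi2, p
```

With rising fractions such as 2/10, 5/10, 9/10 across three ordered rows, the test yields a small P value, reflecting a strong linear trend.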
For more information about the chi-square test for trend, see the excellent text, Practical Statistics for Medical Research by D. G. Altman, published in 1991 by Chapman and Hall.
Don't forget that “statistically significant” is not the same as “scientifically important”.
P values and confidence intervals are intertwined. If the P value is less than 0.05, then the 95% confidence interval cannot contain the value that defines the null hypothesis. (You can make a similar rule for P values < 0.01 and 99% confidence intervals, etc.)
This rule is not always upheld with Prism's results from contingency tables.
The P value computed from Fisher's test is exactly correct. However, the confidence intervals for the Odds ratio and Relative Risk are computed by methods that are only approximately correct. Therefore it is possible that the confidence interval does not quite agree with the P value.
For example, it is possible for results to show P<0.05 with a 95% CI of the relative risk that includes 1.0. (A relative risk of 1.0 means the risk is identical in the two groups, so it defines the null hypothesis.) Similarly, you can find P>0.05 with a 95% CI that does not include 1.0.
These apparent contradictions happen rarely, and most often when one of the values you enter equals zero.
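To see why the exact P value and the approximate CI can disagree, it helps to see how such a CI is typically built. The sketch below uses the common Katz log method for the relative risk; Prism's exact routine is not documented here, so treat this as one representative approximation (the function name is ours). Note that the standard error is undefined when a cell equals zero, which is precisely the situation where contradictions tend to appear:

```python
from math import exp, log, sqrt

def relative_risk_ci(a, b, c, d, z=1.96):
    """Relative risk for the 2x2 table [[a, b], [c, d]] with the common
    Katz log-method approximate 95% CI. Fails if a or c is zero."""
    rr = (a / (a + b)) / (c / (c + d))
    se = sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))   # SE of ln(RR)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)
```

Because the CI is symmetric only on the log scale and relies on a large-sample approximation, its endpoints need not line up exactly with the cutoff implied by Fisher's exact P value.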
Calculating a chi-square test is standard, and explained in all statistics books.
Fisher's test is called an "exact" test, so you would think there would be consensus on how to compute the P value. Not so!
While everyone agrees on how to compute the one-sided (one-tail) P value, there are actually three methods to compute an "exact" two-sided (two-tail) P value from Fisher's test. Prism computes the two-sided P value using the method of summing small P values. Most statisticians seem to recommend this approach, but some programs use a different one.
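The method of summing small P values can be stated compactly: compute the hypergeometric probability of every table that has the same margins as the observed one, then add up the probabilities of all tables no more likely than the observed table. A stdlib-Python sketch (function name ours, not Prism's code):

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher's exact P value for [[a, b], [c, d]] by the
    method of summing small P values."""
    r1, r2 = a + b, c + d        # row totals (fixed margins)
    c1 = a + c                   # first column total
    n = r1 + r2

    def p_table(x):
        # Hypergeometric probability of the table with x in the top-left cell
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)   # feasible top-left cells
    # Sum over all tables as unlikely as (or less likely than) the observed one;
    # the small fudge factor guards against floating-point ties.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

For the table [[3, 1], [1, 3]] this sums to 34/70 ≈ 0.486, the value most "sum of small P values" implementations report.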
If you want to learn more, SISA provides a detailed discussion with references. Also see the section on Fisher's test in Categorical Data Analysis by Alan Agresti. It is a very confusing topic, which explains why different statisticians (and so different software companies) use different methods.
Prism gives you the choice of reporting a one-sided or two-sided P value.
With the chi-square test, the one-sided P value is half the two-sided P value. Zar points out (p.503, 5th edition) that there is one extremely rare situation where the one-sided P value can be misleading: If your experimental design is such that you chose both the row totals and the column totals.
Why do we use the term "one-sided" and not "one-tailed"? To avoid confusion. The value of chi-square is always positive. To find the P value from chi-square, Prism calculates the probability (under the null hypothesis) of seeing that large a value of chi-square or even larger. So it only looks at the right tail of the chi-square distribution. But a chi-square value can be large when the deviation from the null hypothesis goes in either direction (a positive or negative difference between proportions, a relative risk greater than or less than 1). So the two-sided P value is actually computed from one tail of the chi-square distribution.
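This one-tail-covers-both-directions point follows from the fact that a 1-df chi-square variable is the square of a standard normal. A short stdlib demonstration (the variable names are ours): the right tail of chi-square at z² exactly equals the sum of both normal tails beyond ±z, and halving it recovers the one-sided P value.

```python
from math import erfc, sqrt

z = 2.0                      # standardized difference between proportions
chi2 = z * z                 # the corresponding chi-square statistic (1 df)

p_normal_two_tails = erfc(abs(z) / sqrt(2))   # both tails of the normal
p_chi2_right_tail = erfc(sqrt(chi2 / 2))      # right tail of chi-square only

# The two agree exactly, so the single chi-square tail is a two-sided P value;
# dividing by 2 gives the one-sided P value.
one_sided = p_chi2_right_tail / 2
```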
With Fisher's test, the definition of a one-sided P value is not ambiguous. But in most cases, the one-sided P value is not half the two-sided P value.