This example will compute the power of an unpaired t test. The goal of this example, however, is broader: to show how easy it is to perform Monte Carlo analyses with Prism, and how useful they can be.
The question here is this: Given a certain experimental design and assumptions about random scatter, what is the chance (power) that an unpaired t test will give a P value less than 0.05 and thus be declared statistically significant?
From anywhere, click New...Analysis and choose Simulate Column Data. Choose to simulate two groups, with five values per group, sampled from Gaussian populations with means of 25 and 35 and an SD of 10.
From the simulated data table, click Analyze and choose t test from the list of Column analyses. Accept all the default choices to perform an unpaired t test, reporting a two-tail P value.
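Outside Prism, a single simulated experiment like this one can be sketched in a few lines of Python. This is purely an illustration of what the simulation does, assuming NumPy and SciPy are installed; Prism does not use this code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng()

# Two groups of five values, sampled from Gaussian populations
# with means 25 and 35 and a common SD of 10
group1 = rng.normal(loc=25, scale=10, size=5)
group2 = rng.normal(loc=35, scale=10, size=5)

# Unpaired (independent-samples) t test, reporting a two-tail P value
result = stats.ttest_ind(group1, group2)
print(result.pvalue)
```

Running the script repeatedly plays the same role as clicking the red die icon: each run draws new random values and yields a different P value.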
Copy the P value from the results and paste it onto a graph of the data. It will paste with a live link, so the P value will update if the values change. To simulate new data with different random numbers, click the red die icon, or drop the Change menu and choose Simulate Again.
The layout below shows four such graphs placed on the layout as unlinked pictures that do not update when the graph changes. Even though there is only one graph in the project, this made it possible to put four different versions of it (with different random data) onto the layout. You can see that with random variation of the data, the P value varies a lot.
Start from the t test result, click Analyze and choose Monte Carlo simulation.
On the first (Simulations) tab, choose how many simulations you want Prism to perform. This example used 1000 simulations.
On the second (Parameters to tabulate) tab, choose which parameters you want to tabulate. The choice is the list of analysis constants that Prism creates when it analyzes the data. For this example, we only want to tabulate the P value (from the t test which compares means; don't mix it up with the P value from the F test which compares variances).
On the third (Hits) tab, define a criterion which makes a given simulated result a "hit". For this example, we'll define a hit to mean statistical significance with P<0.05.
Click OK and Prism will run the simulations. Depending on the speed of your computer, it will take a few seconds or a few dozen seconds.
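The entire Monte Carlo analysis can also be sketched directly in code. This is a minimal illustration in Python (assuming SciPy); the resulting fraction will not match Prism's exactly, because the random draws differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fixed seed so this sketch is repeatable
n_sim = 1000
hits = 0
for _ in range(n_sim):
    g1 = rng.normal(25, 10, 5)  # group 1: n=5, mean 25, SD 10
    g2 = rng.normal(35, 10, 5)  # group 2: n=5, mean 35, SD 10
    if stats.ttest_ind(g1, g2).pvalue < 0.05:
        hits += 1               # a "hit": statistically significant result

power = hits / n_sim
print(power)  # roughly a quarter of simulations are hits
```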
The results of the simulations are shown in two pages.
One shows the tabulated parameters for all simulations. In this example, we only asked to tabulate the P value, so this table is a list of 1000 (the number of simulations requested) P values. To create a frequency distribution from this table, click Analyze, and choose Frequency Distribution. Choose a cumulative frequency distribution. You can see that about a quarter of the P values are less than 0.05.
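The cumulative frequency distribution of the tabulated P values can be sketched the same way. In this hypothetical Python illustration (assuming SciPy), sorting the P values and pairing each with its rank fraction produces the cumulative distribution, and the mean of the comparison `pvals < 0.05` is the fraction of P values below 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pvals = np.array([
    stats.ttest_ind(rng.normal(25, 10, 5),
                    rng.normal(35, 10, 5)).pvalue
    for _ in range(1000)
])

pvals.sort()
cum_frac = np.arange(1, pvals.size + 1) / pvals.size  # cumulative frequency
print(np.mean(pvals < 0.05))  # fraction of P values below 0.05
```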
The other results table summarizes the fraction of hits. For this set of simulations, 27.5% of the simulations were hits (P value less than 0.05), with a 95% confidence interval ranging from 24.8% to 30.4%. Another way of stating these results is that the power of our experimental design is 27.5%.
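The confidence interval reported for the fraction of hits is a standard binomial interval on an observed proportion. One common choice, the Wilson score interval (Prism's exact method may differ), reproduces numbers very close to those above:

```python
from math import sqrt

def wilson_ci(hits, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(275, 1000)  # 275 hits out of 1000 simulations
print(lo, hi)  # close to the 24.8% to 30.4% interval reported above
```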
Note that the simulations depend on random number generation, which is initialized by the time of day you begin, so your results may not be identical to those shown above.
Had we run more simulations, that confidence interval would, of course, be narrower.
From this table, click New...Graph of existing data to create a pie or percentage plot.
Go back to step 1 and simulate a larger experiment, say with 10 values in each group. Or 20 or 100. How much will that increase the power?
Try reducing the definition of hit to be a P value less than 0.01 rather than 0.05. How does that affect power?
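Both experiments, simulating more values per group and using a stricter significance threshold, can be explored with the same kind of sketch. This hypothetical Python illustration (assuming SciPy) tabulates the fraction of hits for several sample sizes at two thresholds; exact fractions will vary from run to run.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulated_power(n_per_group, alpha, n_sim=2000):
    # Fraction of simulated experiments whose unpaired t test P value
    # falls below alpha (populations: means 25 vs 35, common SD 10)
    hits = sum(
        stats.ttest_ind(rng.normal(25, 10, n_per_group),
                        rng.normal(35, 10, n_per_group)).pvalue < alpha
        for _ in range(n_sim)
    )
    return hits / n_sim

for n in (5, 10, 20):
    print(n, simulated_power(n, 0.05), simulated_power(n, 0.01))
# Power rises with sample size and falls with a stricter threshold
```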