# Skewness

*Skewness* quantifies the asymmetry of a distribution of a set of values. GraphPad Prism can compute the skewness as part of the Column Statistics analysis.

**How skewness is computed**

Understanding how skewness is computed can help you understand what it means. These steps compute the skewness of a distribution of values:

1. We want to know about symmetry around the sample mean, so the first step is to subtract the sample mean from each value. The result will be positive for values greater than the mean, negative for values that are smaller than the mean, and zero for values that exactly equal the mean.
2. To compute a unitless measure of skewness, divide each of the differences computed in step 1 by the standard deviation of the values. These ratios (the difference between each value and the mean, divided by the standard deviation) are called z ratios. By definition, the average of these values is zero and their standard deviation is 1.
3. For each value, compute z³. Note that cubing preserves the sign: the cube of a positive value is still positive, and the cube of a negative value is still negative.
4. Average the list of z³ values by dividing their sum by n-1, where n is the number of values in the sample. If the distribution is symmetrical, the positive and negative values will balance each other, and the average will be close to zero. If the distribution is not symmetrical, the average will be positive if the distribution is skewed to the right, and negative if it is skewed to the left. Why n-1 rather than n? For the same reason that n-1 is used when computing the standard deviation.
5. Correct for bias. For reasons that I do not really understand, the average computed in step 4 is biased with small samples -- its absolute value is smaller than it should be. Correct for the bias by multiplying the mean of z³ by the ratio n/(n-2). This correction increases the value if the skewness is positive, and makes the value more negative if the skewness is negative. With large samples this correction is trivial, but with small samples it is substantial.
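The steps above can be sketched in Python using only the standard library (the function name `skewness` is mine, not Prism's):

```python
import math

def skewness(values):
    """Skewness computed by the five steps above (bias-corrected)."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n-1 denominator), used to form the z ratios.
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    # Steps 1-3: subtract the mean, divide by the SD, cube each z ratio.
    z_cubed = [((x - mean) / sd) ** 3 for x in values]
    # Step 4: average the cubes, dividing the sum by n-1.
    avg = sum(z_cubed) / (n - 1)
    # Step 5: correct for small-sample bias.
    return avg * n / (n - 2)
```

A perfectly symmetrical sample such as `[1, 2, 3, 4, 5]` gives a skewness of zero, while a sample with a long right tail such as `[1, 2, 10]` gives a positive value.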

**Interpreting skewness**

The basics:

- A symmetrical distribution has a skewness of zero.
- An asymmetrical distribution with a long tail to the right (higher values) has a positive skew.
- An asymmetrical distribution with a long tail to the left (lower values) has a negative skew.
- The skewness is unitless.
- Any threshold or rule of thumb is arbitrary, but here is one: If the skewness is greater than 1.0 (or less than -1.0), the skewness is substantial and the distribution is far from symmetrical.

How useful is it to assess skewness? Not very, I think. The numerical value of the skewness does not really answer any of these questions:

- Does the distribution deviate enough from a Gaussian distribution that parametric tests will give invalid results?
- Would the distribution be closer to Gaussian if the data were transformed by taking the logarithm (or reciprocal, or another transform) of all the values?
- Is the skewness due to one or a few outliers?

The skewness doesn't directly answer any of those questions. Note that the D'Agostino and Pearson omnibus normality test (a choice within Prism's column statistics analysis) is a normality test that combines the skewness with the kurtosis (a measure of how far the shape of the distribution deviates from the bell shape of a Gaussian distribution), and so tries to answer the first question.
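Outside of Prism, the same omnibus test is available in SciPy as `scipy.stats.normaltest` (assuming SciPy is installed; the example data below are made up for illustration):

```python
from scipy import stats

# An invented right-skewed sample (long tail toward higher values).
values = [1.2, 1.5, 1.7, 1.9, 2.0, 2.1, 2.3, 2.4, 2.6, 2.8,
          3.0, 3.3, 3.7, 4.2, 4.8, 5.5, 6.9, 8.4, 11.0, 15.0]

# D'Agostino & Pearson omnibus test: combines skewness and kurtosis
# into one statistic; a small p value suggests the data are not Gaussian.
statistic, p_value = stats.normaltest(values)
print(f"K2 = {statistic:.3f}, p = {p_value:.4f}")
```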

The definition of the skewness is part of a mathematical progression. The standard deviation is computed by first summing the squares of the differences between each value and the mean. The skewness is computed by first summing the cubes of those differences. And the kurtosis is computed by first summing the fourth powers of those differences.

While there are good reasons for computing the standard deviation by squaring the deviations, there doesn't appear to be a deeper meaning to summing the cube of the differences between each value and the mean. Since the skewness is computed based on cubes, a value that is twice as far from the mean as another value increases the skewness eight times as much as that other value (because 2^{3}=8). I don't see why alternative definitions of skewness where that factor is some other value (4, or 7 or 10 or any other value greater than 1) wouldn't be just as informative and useful.

**Multiple definitions of skewness**

Skewness has been defined in multiple ways. The method used by Prism (and described above) is the most common method. It is identical to the SKEW() function in Excel. This value of skewness is often abbreviated g1.

**The confidence interval of skewness**

Whenever a value is computed from a sample, it helps to compute a confidence interval. In most cases, the confidence interval is computed from a standard error. The standard error of skewness (SES) depends on sample size. Prism does not calculate it, but it can be computed easily by hand using this formula:

SES = sqrt[ 6n(n-1) / ((n-2)(n+1)(n+3)) ]

The margin of error equals 1.96 times that value, and the confidence interval for the skewness equals the computed skewness plus or minus the margin of error. This table gives the standard error and margin of error for various sample sizes.

| n     | SE of skewness | Margin of error |
|-------|----------------|-----------------|
| 3     | 1.225          | 2.400           |
| 4     | 1.014          | 1.988           |
| 5     | 0.913          | 1.789           |
| 6     | 0.845          | 1.657           |
| 7     | 0.794          | 1.556           |
| 8     | 0.752          | 1.474           |
| 9     | 0.717          | 1.406           |
| 10    | 0.687          | 1.347           |
| 15    | 0.580          | 1.137           |
| 20    | 0.512          | 1.004           |
| 25    | 0.464          | 0.909           |
| 50    | 0.337          | 0.660           |
| 100   | 0.241          | 0.473           |
| 200   | 0.172          | 0.337           |
| 300   | 0.141          | 0.276           |
| 400   | 0.122          | 0.239           |
| 500   | 0.109          | 0.214           |
| 1000  | 0.077          | 0.152           |
| 2500  | 0.049          | 0.096           |
| 5000  | 0.035          | 0.068           |
| 10000 | 0.024          | 0.048           |
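The table's values can be reproduced with a short sketch (the function names `ses` and `skewness_ci` are mine):

```python
import math

def ses(n):
    """Standard error of skewness for a sample of size n (requires n > 2)."""
    return math.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))

def skewness_ci(skew, n, z=1.96):
    """Confidence interval for a computed skewness: skew +/- z * SES."""
    margin = z * ses(n)
    return skew - margin, skew + margin
```

For example, `ses(10)` returns about 0.687, and multiplying by 1.96 gives the margin of error of about 1.347 shown in the table.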