I gave her the description I use in my classes:
Since statistics deals with making decisions under uncertainty, we, as statisticians, need to give ourselves a cushion to account for that uncertainty. One way to view it: the larger the sample size, the more confident we are in our conclusions. For example, we estimate the variance (and hence the standard deviation) by dividing the sum of the squared deviations by (n-1). Hence, if we have a sample size of five (5), we divide by four (4), which provides us with a cushion of 20%. If, however, our sample size is 100, we divide by 99, a padding of just 1%.
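To make the arithmetic concrete, here is a minimal Python sketch (using made-up sample values) that computes the sum of squared deviations and divides by both n and n-1, showing how the divisor shrinks by 20% for n = 5:

```python
import statistics

# Hypothetical sample of five measurements (illustrative values only).
sample = [2.0, 4.0, 4.0, 5.0, 7.0]
n = len(sample)
mean = sum(sample) / n
sq_devs = sum((x - mean) ** 2 for x in sample)  # sum of squared deviations

var_n = sq_devs / n              # divides by 5 (no cushion)
var_n_minus_1 = sq_devs / (n - 1)  # divides by 4 (the n - 1 "cushion")

# Python's statistics.variance also uses the n - 1 divisor.
assert abs(var_n_minus_1 - statistics.variance(sample)) < 1e-12

# The divisor is reduced by 1/n: 20% for n = 5, only 1% for n = 100.
cushion = 1 / n  # 0.2
```

With a sample of 100, the same calculation would divide by 99, and the cushion drops to 1/100 = 1%.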
This explanation seems to have satisfied many of my students, and it emphasizes a common statistical idea without getting into confidence intervals: the larger the sample size, the more confident we can be in our estimates. To put the same idea slightly differently, as long as our sampling technique is random and representative, larger sample sizes make it more likely that we have a good estimator of a parameter.
I have attempted to address the various approaches to degrees of freedom, and I hope my simple rationale for what we are trying to accomplish can shed some light on future explanations of this vital part of statistical analysis.
*Note: 2d refers to the 2nd moment about the mean, another way of describing the variance.
1). Gonick, L. and Smith, W. (1993), The Cartoon Guide to Statistics, HarperCollins Publishers, pg. 22
2). Breyfogle, Forrest W. III (1999), Implementing Six Sigma, John Wiley & Sons, pg. 1105
3). Upton, Graham and Cook, Ian (2002), Dictionary of Statistics, Oxford University Press, pg. 100
4). Deming, William Edwards (1950), Some Theory of Sampling, Dover Publications, Inc., pg. 352
5). Ibid. pg. 541
Thanks for reading A Simple Approach to Explaining the Degrees of Freedom