Okay... listen. I've been in your shoes. I've gone through stats. Sampling distributions used to confuse the crap outta me... until I understood the following:
Sampling distributions are just like any other distribution, except their "data points" are all possible sample parameters derived from samples of the same size!
And just to clarify...
A sample parameter (more formally called a sample statistic) is the mean or proportion from a given sample.
In the case of this article, we'll primarily be dealing with sample means (since IQ score is a numeric value). The same principles apply to sample proportions (which summarize TRUE / FALSE values).
The IQ scenario re-explained with a sample
In our previous article, we found the z-score of a single IQ score. We were given this question:
What percentage of the population has an IQ score less than 105?
We answered this question by asking ourselves "Out of the distribution of all individual IQ scores...
...what percentage of IQ scores are less than 105?"
(This distribution can be referred to as the "population distribution", because it's the distribution representing all data points in the population.)
What if instead, we were asked:
What's the probability of finding a sample of 30 people with a mean IQ score of less than 105?
We're basically going to do the same thing, except this time we need to ask ourselves "Out of the distribution of all possible sample means from a sample size of 30...
...what percentage of sample means are less than 105?"
This "distribution of all possible sample means from a sample size of 30" is what's referred to as a sampling distribution!
Since we're working with sample means here, you may see it referred to as a sampling distribution of means. If you're working with proportions, it may be called a sampling distribution of proportions.
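If you like seeing this in code, here's a minimal simulation sketch (Python with NumPy). It assumes the standard IQ parameters of mean 100 and standard deviation 15, which match the graphs in this article, and uses 100,000 random samples as a stand-in for "all possible samples":

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed population parameters for IQ scores (standard convention)
POP_MEAN, POP_SD = 100, 15
SAMPLE_SIZE = 30
NUM_SAMPLES = 100_000  # stand-in for "all possible samples"

# Draw many samples of 30 IQ scores, then take each sample's mean
samples = rng.normal(POP_MEAN, POP_SD, size=(NUM_SAMPLES, SAMPLE_SIZE))
sample_means = samples.mean(axis=1)  # these means ARE the sampling distribution

# The sampling distribution centers on the population mean...
print(round(sample_means.mean(), 1))  # ≈ 100.0
# ...but is much skinnier than the population distribution (SD of 15)
print(round(sample_means.std(), 2))   # ≈ 2.74
```

Each of those 100,000 means is one "data point" in the sampling distribution of means.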
Standard Error (SE) illuminated
Notice in the images above how the 105 IQ score is farther to the right in our sampling distribution vs. population distribution?
Our population distribution had a range of values on the x-axis spanning from 40 to 160.
But now, our sampling distribution of means from samples of size 30 spans from 90 to 110.
...why is this happening?
It's because our sampling distribution is skinnier and more congregated around the population mean than the population distribution is... and Standard Error (SE) is the measure of that skinniness!
Standard Error (SE) visualized
The below graphic does a great job of visualizing what's going on here:
As we increase our sample size (going from red to blue to green), our sampling distribution gets more and more congregated around the population parameter at the center (resulting in the range of values on our x-axis getting smaller). This is because our standard error (SE) gets smaller as our sample size grows!
Standard error (SE) is essentially standard deviation, except for sampling distributions. Standard error tells you how far a given sample parameter typically lands from the population parameter. The larger your sample size, the smaller your standard error, because your samples more closely represent the population!
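Here's a quick sketch of that relationship (again assuming the IQ parameters of 100 and 15): the spread of the sample means, which is the empirical standard error, shrinks as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)
POP_MEAN, POP_SD = 100, 15  # assumed IQ population parameters

empirical_se = {}
for n in (10, 30, 100):
    # 50,000 samples of size n; the SD of their means is the empirical SE
    means = rng.normal(POP_MEAN, POP_SD, size=(50_000, n)).mean(axis=1)
    empirical_se[n] = means.std()
    print(n, round(empirical_se[n], 2))
```

The printed SEs shrink as n grows — the "red to blue to green" effect in the graphic above.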
To bring this back to our IQ score situation: now that we're working with a sample of IQ scores instead of a single IQ score, we're no longer dealing with the population distribution of all individual IQ scores. We're dealing with the sampling distribution of all possible sample means from IQ score samples of the same size!
And as our sample size gets bigger and bigger, our sampling distribution of means grows skinnier and skinnier, because our sample mean IQ scores become more representative of the population mean IQ score!
Further understanding Standard Error (SE) with dice
I absolutely love the below visual from Statistics How To. It essentially shows that as you increase your sample size (the number of dice rolled), your sampling distribution gets closer to the population mean (and in turn, has less Standard Error (SE)).
This is because as our sample size increases, the means of those samples get closer to the population mean (see Law of Large Numbers). That causes the distribution of those sample means (a.k.a. the sampling distribution) to be more congregated around the population mean!
Yes, Central Limit Theorem is being displayed here. Scroll below to skip ahead to that!
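The dice version is just as easy to simulate. This sketch (assuming fair six-sided dice) shows the sampling distribution of dice-roll means congregating around the population mean of 3.5 as we roll more dice per sample:

```python
import numpy as np

rng = np.random.default_rng(1)
TRIALS = 100_000

spread = {}
for num_dice in (1, 5, 30):
    # Each trial: roll `num_dice` dice and average them (one sample mean)
    means = rng.integers(1, 7, size=(TRIALS, num_dice)).mean(axis=1)
    spread[num_dice] = means.std()
    print(num_dice, round(means.mean(), 2), round(spread[num_dice], 2))
```

More dice per sample means the sample means hug 3.5 and the spread (the Standard Error) shrinks, just like in the visual.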
The Standard Error (SE) formula
Since we're working with sample means (like we currently are with IQ scores), we can calculate standard error (SE) with the following formula: SE = σ / √n, where σ is the population standard deviation and n is the sample size.
With standard deviation of the population held constant...
...as our sample size increases...
...the denominator becomes larger, therefore resulting in our standard error (SE) becoming smaller and smaller, and in-turn causing our sampling distribution to become skinnier and skinnier!
We've already seen this in action with the population distribution for individual IQ scores having an SE of 15 (a "sample" of just one score, so SE = 15 / √1 = 15)...
...and the sampling distribution for sample means of sample size 30 having an SE of 2.74.
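As a sanity check, plugging our IQ numbers into the formula reproduces both of those SEs:

```python
import math

POP_SD = 15  # population standard deviation of IQ scores

# A "sample" of one individual score: SE equals the population SD
print(POP_SD / math.sqrt(1))  # 15.0

# A sample of 30 scores
se = POP_SD / math.sqrt(30)
print(round(se, 2))           # 2.74
```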
If you're still confused, do you see how the distances on the bottom are smaller for the sampling distribution than the population distribution? That's because our standard error decreased when we increased our sample size from one individual IQ score to 30 IQ scores!
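And with that SE in hand, we can actually answer the sample question from earlier. Here's a sketch assuming the usual IQ parameters (mean 100, SD 15), using the error function for the normal CDF so no extra libraries are needed:

```python
import math

POP_MEAN, POP_SD = 100, 15  # assumed IQ population parameters
SAMPLE_SIZE = 30

se = POP_SD / math.sqrt(SAMPLE_SIZE)  # ≈ 2.74
z = (105 - POP_MEAN) / se             # ≈ 1.83 standard errors above the mean

# Standard normal CDF: P(Z < z) via the error function
prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"{prob:.1%}")  # ≈ 96.6%
```

Compare that to a single individual: for one IQ score, the z-score of 105 is only (105 − 100) / 15 ≈ 0.33, a far less surprising result.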
Addressing the Central Limit Theorem
While closely related, it's important to understand the difference between the Central Limit Theorem and Standard Error (SE).
The Central Limit Theorem states that as you increase your sample size, your sampling distribution will become approximately normal, no matter if your population distribution is normal or not!
In summary, the Central Limit Theorem deals with a sampling distribution becoming normal. Standard Error (SE) deals with how closely the sample parameters in a given sampling distribution represent the population parameter.
To zero in on the Central Limit Theorem, let's return to this graphic of dice roll samples.
It's a fantastic illustration of how even though the population distribution of dice rolls with a single die is completely flat (a.k.a. "uniform")...
It's the population distribution because, for any given die, this is the probability of each of the potential outcomes occurring on any given roll.
It's flat because each of the potential outcomes of a die roll has an equal probability of occurring!
...as we increase our sample size (the number of dice that we're rolling), our sampling distribution becomes more and more normal.
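A small simulation sketch (fair six-sided dice assumed) makes the contrast concrete: single rolls come out flat, while means of 30 rolls pile up around 3.5 in a bell shape.

```python
import numpy as np

rng = np.random.default_rng(7)
TRIALS = 100_000

# Population distribution: single die rolls -- every face roughly equally likely
faces = rng.integers(1, 7, size=TRIALS)
print(np.bincount(faces)[1:])  # six roughly equal counts (flat / uniform)

# Sampling distribution: the mean of 30 dice per trial -- bell-shaped
means = rng.integers(1, 7, size=(TRIALS, 30)).mean(axis=1)
print(round((np.abs(means - 3.5) < 0.5).mean(), 2))  # most means hug the center
print((means < 2).mean())  # means far below 3.5 essentially never happen
```

The flat face counts and the tightly bunched means are the two ends of the graphic above.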
You might be wondering: doesn't the Central Limit Theorem apply to our IQ scores too? It does! But it doesn't really matter there, because the underlying population distribution of IQ scores is already normal!
The Central Limit Theorem will be a bigger player when we work with proportions and non-normal population distributions in the future.
Samples seem to complicate everything... why are they necessary?
Because knowing the population mean and population standard deviation (like we do for IQ scores) is rare. Oftentimes, we must work with populations for which we don't know the population mean or standard deviation.
In these types of situations, the best we can do is take samples from said population to estimate (via confidence intervals) or make supported claims about (via hypothesis tests) that population's true mean or standard deviation.