# What is a t-score? | Midterm Exam

t-scores are best used when you (1) are not given the population standard deviation or (2) have a sample size of 30 or fewer. Outside of those cases, they operate pretty much the same way as Z-scores!

The below graphic from Statology sums it up very well:

Let's dig into the distribution utilized for t-score vs. the distribution utilized for Z-score. This'll help us understand how a t-score is different from a Z-score!

## Understanding the difference from Z-score

t-scores and Z-scores are both used to identify points on their respective distributions (t-distribution and Z-distribution), illustrated in the image below provided by JMP.

It is incredibly easy to get confused about why t-distributions are even necessary, so here it is in simple terms:

The t-distribution is used to account for the absence of population standard deviation or a small sample size when conducting statistical calculations.

Notice in the image above how, as our degrees of freedom increase, we get closer and closer to the normal distribution. That's because degrees of freedom come directly from sample size. A larger sample is more representative of the entire population, which in turn brings the t-distribution closer to the standard normal distribution used with Z-scores.
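If you'd like to see that convergence numerically, here's a quick sketch using scipy (not part of the original article; the specific degrees-of-freedom values are just illustrative). It compares the peak of the t-distribution's density against the standard normal's:

```python
from scipy.stats import norm, t

# Peak height of the standard normal density at x = 0 (~0.3989)
normal_peak = norm.pdf(0)

# As degrees of freedom grow, the t-distribution's density
# gets closer and closer to the standard normal's
for df in [2, 5, 30, 1000]:
    gap = abs(t.pdf(0, df) - normal_peak)
    print(f"df={df:>4}: gap at x=0 is {gap:.4f}")
```

The gap shrinks every time the degrees of freedom go up, which is exactly the behavior the image illustrates.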

To be clear: the Z-distribution is the "gold standard". You should always strive to use a Z-distribution when you can. If you cannot (due to no population standard deviation or a sample size below 30), then you'll have to settle for a t-distribution.

For the sake of your sanity, please refer to the below graphic if you're ever confused about whether to use a t-score or a Z-score. It'll save you a lot of headache!

## But, proportions don't have standard deviation...

Yes, and thank goodness they don't. It makes things way easier for you!

If you're dealing with a proportion, you don't have to worry about t-scores! Just use the Z-score formula!

For reference, here's that formula:
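The formula graphic isn't reproduced here, but this is the standard Z-score for a sample proportion: z = (p̂ − p) / √(p(1 − p)/n). Here's a quick sketch in Python (the sample numbers are invented for illustration):

```python
import math

def proportion_z_score(p_hat, p, n):
    """Z-score for a sample proportion p_hat against a
    hypothesized population proportion p, with sample size n."""
    standard_error = math.sqrt(p * (1 - p) / n)
    return (p_hat - p) / standard_error

# Example: 56 successes out of 100 against a hypothesized p of 0.5
z = proportion_z_score(0.56, 0.5, 100)
print(round(z, 4))  # 1.2 (that's 0.06 / 0.05)
```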

Now remember: this does not mean that you can throw the assumptions out the window! You still need to check those for each sample that you work with.

## How to calculate t-score

Calculating t-scores works very similarly to calculating Z-scores. The only major difference is that we'll use the sample standard deviation instead of the population standard deviation.
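As a preview (the sample numbers below are made up for illustration), the t-score plugs the sample standard deviation s into the same shape of formula:

```python
import math

def t_score(sample_mean, pop_mean, sample_std, n):
    """t = (x_bar - mu) / (s / sqrt(n)), using the SAMPLE standard deviation."""
    return (sample_mean - pop_mean) / (sample_std / math.sqrt(n))

# Example: sample mean 52, population mean 50, sample std 8, n = 16
print(t_score(52, 50, 8, 16))  # 2 / (8 / 4) = 1.0
```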

The next article walks through how to calculate t-scores step by step.

## How to associate t-scores with p-values

The biggest difference here is that we won't use the Z-score table...

...we'll instead use the t-score table.

With Z-scores, we calculated our value and then found the corresponding p-value in the table. For example, if we had a Z-score of -1.97, we could identify its associated p-value of 0.0244 like so:
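You can double-check that table lookup with scipy (an assumption of this sketch; the article itself uses the printed table):

```python
from scipy.stats import norm

# P(Z < -1.97): the left-tail p-value the Z-table gives us
p_value = norm.cdf(-1.97)
print(round(p_value, 4))  # 0.0244
```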

With t-scores, it'll be a little different. We'll calculate our t-score and our degrees of freedom. Afterwards, we'll identify the range of values that our t-score falls between... and in turn, that our p-value falls between.

For example, if we had a t-score of 2.000 and 10 degrees of freedom, that'd mean we'd first identify the row corresponding to 10 degrees of freedom...

...then find the t-score values that ours falls between (in this case, 1.812 and 2.228)...

...then locate the corresponding range of p-values (in this case, 0.05 and 0.025).
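If you have scipy handy, you can confirm that the exact p-value really does land in that range (this is just a check; the table method never requires the exact value):

```python
from scipy.stats import t

df = 10
our_t = 2.000

# Exact one-tailed p-value: P(T > 2.000) with 10 degrees of freedom
p_value = t.sf(our_t, df)
print(round(p_value, 4))  # falls between 0.025 and 0.05

# The table's bracketing critical values for df = 10
print(round(t.ppf(1 - 0.05, df), 3))   # 1.812
print(round(t.ppf(1 - 0.025, df), 3))  # 2.228
```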

### Why don't we need to calculate an exact p-value?

You'll mainly be utilizing t-scores for confidence intervals and hypothesis tests.

#### It's not necessary for confidence intervals because...

For confidence intervals, you'll most often be calculating 95% confidence intervals for a given number of degrees of freedom.

In the case of one-tailed tests (> or <), a 95% confidence interval means you'll use the column associated with a p-value of 0.05.

In the case of two-tailed tests (≠), you'll use the column associated with a p-value of 0.025.

Don't get caught up in the numbers here. What you need to understand is...

In relation to confidence intervals, the t-score table already contains all the necessary t-scores for relevant p-values.
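To make that concrete, here's a sketch of building a 95% confidence interval straight from the table's critical value (the sample summary numbers are invented; scipy is assumed):

```python
import math
from scipy.stats import t

# Made-up sample summary for illustration
x_bar, s, n = 50.0, 8.0, 11  # so df = n - 1 = 10
df = n - 1

# A two-tailed 95% CI uses the table column for p = 0.025 per tail
t_star = t.ppf(1 - 0.025, df)  # ~2.228 for df = 10
margin = t_star * s / math.sqrt(n)

print(f"95% CI: {x_bar - margin:.2f} to {x_bar + margin:.2f}")
```

Notice that the only t-score we needed (2.228) sits right there in the table's 0.025 column.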

#### It's not necessary for hypothesis tests because...

For hypothesis tests, you'll often be determining whether your t-score results in a p-value below 0.05 or 0.01. If it falls below those p-values, that means that your sample holds statistical significance.

In these situations, the only thing that matters is whether or not our t-score's p-value falls above or below the declared threshold. We don't need an exact value!

Take, for example, a hypothesis test in relation to a sample with 25 degrees of freedom.

In this hypothesis test, if our p-value is less than or equal to 0.05, that means that our sample holds statistical significance. If it's above 0.05, that means that it does not hold statistical significance.

Let's say our calculated t-score value turned out to be 1.800. In the row corresponding to 25 degrees of freedom, we can identify that a t-score of 1.800 falls between 1.708 and 2.060.

This corresponds to a p-value range of 0.05 and 0.025.

Any values between 0.05 and 0.025 are less than or equal to our threshold of 0.05... therefore this hypothesis test would result in statistical significance!

Now let's change things up: say our calculated t-score value turned out to be 1.500. In the row corresponding to 25 degrees of freedom, we can identify that a t-score of 1.500 falls between 1.316 and 1.708.

This corresponds to a p-value range of 0.10 and 0.05.

Any values between 0.10 and 0.05 are greater than our threshold of 0.05... therefore this hypothesis test would not result in statistical significance!
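Both scenarios can be verified with scipy (the exact p-values are shown only to confirm the table's ranges; the table method itself never needs them):

```python
from scipy.stats import t

ALPHA = 0.05  # our significance threshold
df = 25

for our_t in (1.800, 1.500):
    p_value = t.sf(our_t, df)  # one-tailed P(T > our_t)
    verdict = "significant" if p_value <= ALPHA else "not significant"
    print(f"t = {our_t}: p = {p_value:.4f} -> {verdict}")
```

The first t-score's p-value lands between 0.025 and 0.05 (significant), and the second's lands between 0.05 and 0.10 (not significant), matching the ranges we read off the table.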

What you need to understand is...

In relation to hypothesis tests, we only need to determine whether our t-score results in a p-value above or below a declared threshold. This can be accomplished through the ranges of t-score values contained in the t-score table.