When to use z or t statistics in significance tests | AP Statistics | Khan Academy
What I want to do in this video is give a primer on thinking about when to use a z statistic versus a t statistic when we are doing significance tests.
So, there are two major scenarios that we will see in an introductory statistics class. One is when we are dealing with proportions, so I'll write that on the left side right over here, and the other is when we are dealing with means.
In the proportion case, when we're doing our significance test, we will set up some null hypothesis that usually deals with the population proportion. We might say it is equal to some value; let's call that p sub 0. Then maybe you have an alternative hypothesis that, well, no, the population proportion is greater than that, or less than that, or it's just not equal to that. So let me just go with that one: it's not equal to p sub 0.
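Written symbolically, that pair of hypotheses is:

$$H_0: p = p_0 \qquad H_a: p \neq p_0$$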
Then, to actually do the significance test, we take a sample from the population. It's going to have a sample size of n. We need to make sure that we feel good about making the inference; we've talked about the conditions for inference in previous videos multiple times. From this sample, we calculate the sample proportion, and then from that, we calculate the p-value.
Remember, the p-value is the probability of getting a sample proportion at least this extreme, assuming the null hypothesis is true. If that probability is below some preset threshold (our significance level), we reject the null hypothesis, which suggests the alternative.
Over here, the way we do that is, well, we find an associated z value for that sample proportion. The way that we calculate it is we say, okay, look, our z is going to be how many of the sampling distribution's standard deviations we are away from its mean. Remember, the mean of the sampling distribution is going to be the assumed population proportion.
So here we have this sample statistic, this sample proportion, and we look at the difference between that and the assumed proportion. Remember, when we do these significance tests, we try to figure out probabilities assuming the null hypothesis is true, so when we see this p sub 0, it is the assumed proportion from the null hypothesis.
That's the difference between these two: the sample proportion and the assumed proportion. Then you'd want to divide it by what's often known as the standard error of the statistic, which is just the standard deviation of the sampling distribution of the sample proportion.
This works out well for proportions because, in proportions, I can figure out what this is. This is going to be equal to the square root of the assumed population proportion times 1 minus the assumed population proportion, all of that over n.
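Putting those pieces together, with p-hat standing for the sample proportion and p sub 0 for the assumed proportion from the null hypothesis, the z statistic works out to:

$$z = \frac{\hat{p} - p_0}{\sqrt{\frac{p_0(1 - p_0)}{n}}}$$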
Then I would use this z statistic to figure out the p-value. In this case, I would look at both tails of the distribution because I care about how far I am either above or below the assumed population proportion.
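As a concrete sketch of that calculation, here is a small Python example. The specific numbers (an assumed proportion of 0.5, a sample of 100 with 58 successes, and a 0.05 significance level) are made up purely for illustration.

```python
from math import sqrt
from scipy.stats import norm  # standard normal distribution

# Hypothetical numbers, just for illustration
p0 = 0.5         # proportion assumed by the null hypothesis
n = 100          # sample size
p_hat = 58 / n   # sample proportion from the sample

# z = (sample proportion - assumed proportion) / standard error,
# where the standard error uses the assumed proportion p0
standard_error = sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / standard_error

# Two-tailed p-value: probability of a z at least this extreme in either direction
p_value = 2 * norm.sf(abs(z))

alpha = 0.05  # significance threshold, chosen for the example
print(f"z = {z:.3f}, p-value = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```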
Now, with means, there are definitely some similarities here. You will make a null hypothesis, maybe assuming the population mean is equal to some value mu sub 0, and then there's going to be an alternative hypothesis that maybe your population mean is not equal to mu sub 0.
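Symbolically, this mirrors the proportion case:

$$H_0: \mu = \mu_0 \qquad H_a: \mu \neq \mu_0$$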
You're going to do something very similar: you take your population and take a sample of size n. Instead of calculating a sample proportion, you calculate a sample mean, and you can also calculate other things, like a sample standard deviation (a small sketch of both follows below). But now you have an issue.
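Here is that small sketch; the data values are made up purely for illustration.

```python
from statistics import mean, stdev

# A hypothetical sample of size n = 8, just for illustration
sample = [4.1, 3.8, 5.2, 4.7, 4.0, 4.9, 5.1, 4.4]

n = len(sample)
x_bar = mean(sample)   # sample mean
s = stdev(sample)      # sample standard deviation (divides by n - 1)

print(f"n = {n}, sample mean = {x_bar:.3f}, sample standard deviation = {s:.3f}")
```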
You say, well, ideally, I would use a z statistic. You could if you were able to take the difference between your sample mean and mu sub 0, the assumed mean from the null hypothesis; that's what the subscript 0 indicates.
I would then divide by the standard error of the mean, which is another way of saying the standard deviation of the sampling distribution of the sample mean. But this is not so easy to figure out.
This standard error is the standard deviation of the underlying population divided by the square root of n. We know n once we've conducted the sample, but we don't know the population's standard deviation.
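That is, in symbols, with sigma standing for the unknown population standard deviation:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$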
So instead, what we do is estimate it. We take the sample mean, subtract from it the assumed population mean from the null hypothesis, and divide by an estimate of that standard error: our sample standard deviation divided by the square root of n. But because we're using an estimate of the population standard deviation, this statistic doesn't quite follow a standard normal distribution.
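Written out, with x-bar standing for the sample mean, mu sub 0 for the assumed mean from the null hypothesis, and s for the sample standard deviation, that estimated statistic is:

$$\frac{\bar{x} - \mu_0}{s/\sqrt{n}}$$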
Instead of saying, hey, this is an estimate of our z statistic, we will call this our t statistic. As we'll see, we'll then look this up in a t table (using n minus 1 degrees of freedom), and this will give us a better sense of the probability.
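Here is a sketch of that whole calculation in Python, reusing the made-up sample from above; the assumed mean of 4.0 is hypothetical, and scipy's t distribution plays the role of the t table.

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t as t_dist  # Student's t distribution

# Same made-up sample as in the earlier sketch
sample = [4.1, 3.8, 5.2, 4.7, 4.0, 4.9, 5.1, 4.4]
n = len(sample)
x_bar = mean(sample)
s = stdev(sample)            # sample standard deviation

mu0 = 4.0                    # mean assumed by the null hypothesis (hypothetical)

# t = (sample mean - assumed mean) / (sample standard deviation / sqrt(n))
t_stat = (x_bar - mu0) / (s / sqrt(n))

# Two-tailed p-value from the t distribution with n - 1 degrees of freedom
p_value = 2 * t_dist.sf(abs(t_stat), df=n - 1)

print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
```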