Chi-square statistic for hypothesis testing | AP Statistics | Khan Academy
Let's say there's some type of standardized exam where every question on the test has four choices: choice A, choice B, choice C, and choice D. The test makers assure folks that over many years, there's an equal probability that the correct answer for any one of the items is A, B, C, or D. Essentially, there's a 25 percent chance for each of them.
Now, let's say you have a hunch that maybe it is skewed towards one letter or another. How could you test this? Well, you could start with a null and alternative hypothesis, and then we can actually do a hypothesis test. So let's say that our null hypothesis is equal distribution of correct choices.
Another way of thinking about it is A would be correct 25 percent of the time, B would be correct 25 percent of the time, C would be correct 25 percent of the time, and D would be correct 25 percent of the time. Now, what would be our alternative hypothesis? Our alternative hypothesis would be not equal distribution.
Now, how are we going to actually test this? Well, we've seen this show before, at least the beginnings of the show. You have the population of all of your potential items here, and you could take a sample. Let's say we take a sample of 100 items, so N is equal to 100.
Let's write down the data that we get when we look at that sample. This is the correct choice, then this is the expected number, and this is the actual number. If this doesn't make sense yet, we'll see it in a second.
So there are four different choices: A, B, C, D, and a sample of 100. Remember, in any hypothesis test, we start by assuming that the null hypothesis is true. So the expected count where A is the correct choice would be 25 percent of this hundred: you'd expect A to be the correct choice 25 times, B to be the correct choice 25 times, C to be the correct choice 25 times, and D to be the correct choice 25 times.
But let's say our actual results, when we look at these hundred items, we get that A is the correct choice 20 times, B is the correct choice 20 times, C is the correct choice 25 times, and D is the correct choice 35 times. So if you just look at this, you might think, "Hey, maybe there's a higher frequency of D."
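The expected and actual counts above can be tabulated in a few lines. This is a quick sketch, not anything from the video itself; the variable names are my own:

```python
# Sample of n = 100 items. Under the null hypothesis, each choice is
# correct 25 percent of the time, so each expected count is n * 0.25.
n = 100
observed = {"A": 20, "B": 20, "C": 25, "D": 35}   # actual counts from the sample
expected = {choice: n * 0.25 for choice in observed}  # 25 for every choice

print(expected)
```

Note that the observed counts still sum to 100; they just distribute differently from the expected 25-25-25-25.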
But maybe you say, "Well, this is just a sample, and maybe it's just random chance; the sample might have just happened to get more Ds." There's some probability of getting this result even assuming that the null hypothesis is true, and that's the goal of these hypothesis tests: what's the probability of getting a result at least this extreme?
If that probability is below some threshold, then we tend to reject the null hypothesis and accept an alternative. Those thresholds you have seen before; we've seen these significance levels. Let's say we set a significance level of 5 percent, or 0.05.
So if the probability of getting this result, or something even more different than what's expected, is less than the significance level, then we'd reject the null hypothesis. But this all leads to one really interesting question: how do we calculate the probability of getting a result this extreme or more extreme?
How do we even measure that? This is where we're going to introduce a new statistic, and also for many of you, a new Greek letter. That is the Greek letter chi, which might look like an X to you, but it's a little bit curvier; you kind of curve that part of the X, but it's a chi, not an X. You can look up more on it.
The statistic is called chi squared, and it's a way of taking the difference between the actual and the expected and translating that into a number. The chi-squared distribution is well studied, and we can use that to figure out what is the probability of getting a result this extreme or more extreme. If that's lower than our significance level, we reject the null hypothesis, and it suggests the alternative.
But how do we calculate the chi-squared statistic here? Well, it's reasonably intuitive. What we do is for each of these categories—in this case, it’s for each of these choices—we look at the difference between the actual and the expected. So for choice A, we'd say 20 is the actual minus the expected.
Then we're going to square that, and then we're going to divide by what was expected. Then we're going to do that for choice B. So we're going to say the actual was 20, expected is 25, so (20 minus 25) squared over the expected, over 25. Plus, then we do that for choice C: (25 minus 25) squared over the expected, over 25.
Finally, for choice D, which is going to get us (35 minus 25) squared, all of that over 25. Now, let's see. If we calculate this, the first one is negative 5 squared, so that's going to be 25. This is going to be 25 as well, this is going to be zero, and 35 minus 25 is 10, which squared is 100.
So, dividing each of those by the expected count of 25, this is one plus one plus zero plus four, and our chi-squared statistic in this example came out nice and clean at 6, which won't always be the case. So what do we make of this? Well, what we can do is then look at a chi-squared distribution for the appropriate degrees of freedom.
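The calculation just described, summing (actual minus expected) squared over expected across the four choices, can be sketched like this (variable names are mine, not from the video):

```python
# Chi-squared statistic: sum over all categories of
# (observed - expected)^2 / expected.
observed = [20, 20, 25, 35]   # actual counts for A, B, C, D
expected = [25, 25, 25, 25]   # expected counts under the null hypothesis

chi_squared = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_squared)  # (25 + 25 + 0 + 100) / 25 = 6.0
```

Each term measures how far one category's count strayed from what the null hypothesis predicted, scaled by the expected count so that large and small categories contribute comparably.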
We'll talk about that in a second and say, "What is the probability of getting a chi-square statistic 6 or larger?" To understand what a chi-squared distribution even looks like, these are multiple chi-square distributions for different values for the degrees of freedom.
To calculate the degrees of freedom, you look at the number of categories, in this case four, and subtract one. That makes a lot of sense because if you knew the total and how many A's, B's, and C's there are, you could always calculate the number of D's; only three of the counts are free to vary. That's why it is four minus one degrees of freedom.
So in this case, our degrees of freedom are going to be equal to three. Over here, sometimes you'll see it described as K, so K is equal to three. If we look at that light blue curve, we're looking at the chi-squared distribution where the degrees of freedom is three, and we want to figure out the probability of getting a chi-squared statistic that is 6 or greater.
So we would be looking at this area right over here, and you could figure it out using a calculator. If you're taking some type of a test, like an AP statistics exam, for example, you could use the tables they give you. A table like this could be quite useful. Remember, we're dealing with a situation where we have three degrees of freedom.
We have four categories, so four minus one is three, and our chi-squared statistic was 6. This right over here tells us that the probability of getting a chi-squared value of 6.25 or greater is 10 percent. If we go back to this chart, we just learned that this probability from 6.25 and up, when we have three degrees of freedom, is 10 percent.
Since 6 is less than 6.25, the probability of getting a chi-squared value greater than or equal to 6 is going to be greater than 10 percent. We could also view this as our P-value. If the probability of getting this result, assuming the null hypothesis is true, is greater than 10 percent, it's definitely going to be greater than our significance level of 5 percent.
Because of that, we fail to reject the null hypothesis. So even though in your sample you just happened to get more Ds, the probability of getting a result at least as extreme as what you saw is a little bit over 10 percent.
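If you want to check that "a little bit over 10 percent" without a table, the chi-squared tail probability happens to have a closed form when the degrees of freedom equal three. This sketch uses that df = 3 formula only (it does not generalize to other degrees of freedom, where you would normally use a statistics library):

```python
import math

def chi2_sf_df3(x):
    """P(chi-squared with 3 degrees of freedom >= x).
    Closed form valid only for df = 3."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

p_value = chi2_sf_df3(6.0)
print(f"{p_value:.3f}")  # about 0.112, i.e. just over 10 percent

# The p-value exceeds the 0.05 significance level,
# so we fail to reject the null hypothesis.
```

As a sanity check, plugging in 6.25 gives almost exactly 0.10, matching the 10 percent entry in the table for three degrees of freedom.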