P-values and significance tests | AP Statistics | Khan Academy
Let's say that I run a website that currently has this off-white color for its background, and I know the mean amount of time that people spend on my website. Let's say it is 20 minutes, and I'm interested in making a change that will make people spend more time on my website. My idea is to make the background color of my website yellow.
But after making that change, how do I feel good about this actually having the intended consequence? Well, that's where significance tests come into play. What I would do is first set up some hypotheses: a null hypothesis and an alternative hypothesis. The null hypothesis tends to be a statement that, hey, your change actually had no effect; there's no news here.
So here the null hypothesis would be that the mean is still equal to 20 minutes after the change to the yellow background. We would also have an alternative hypothesis: that our mean is now greater because of the change, that people are spending more time on my site. So our mean is greater than 20 minutes after the change.
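Written in the usual notation (a small aside; the video states these in words), with μ standing for the true mean time spent on the site after the change:

```latex
H_0 : \mu = 20 \text{ minutes} \qquad\qquad H_a : \mu > 20 \text{ minutes}
```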
Now, the next thing we do is set up a threshold known as the significance level, and you will see how this comes into play in a second. The significance level is usually denoted by the Greek letter alpha, and you tend to see significance levels like 0.01, 0.05, or 0.10 (that is, one percent, five percent, or ten percent). You might see other ones, but for this particular case, let's say our significance level is going to be 0.05.
What we're going to now do is we're going to take a sample of people visiting this new yellow background website, and we're going to calculate statistics: the sample mean, the sample standard deviation. We're going to say, "Hey, if we assume that the null hypothesis is true, what is the probability of getting a sample with the statistics that we get?" If that probability is lower than our significance level—if that probability is less than 0.05, if it's less than 5 percent—then we reject the null hypothesis and say that we have evidence for the alternative.
However, if the probability of getting the statistics for that sample is at the significance level or higher, then we say, "Hey, we can't reject the null hypothesis, and we don't have evidence for the alternative." So what we would then do (I will call this step three) is take a sample. Let's say we take a sample of 100 folks who visit the new website, the yellow-background website, and we measure sample statistics.
We measure the sample mean here; let's say that for that sample, the mean is 25 minutes. Since we typically don't know the actual population standard deviation, we would also calculate the sample standard deviation. The next step, call it step four, is to calculate a P-value. The P-value, which stands for probability value, is the probability of getting a statistic at least this far away from the mean, if we assume that the null hypothesis is true.
So one way to think about it is as a conditional probability: it is the probability that our sample mean, when we take a sample of size n equals 100, is greater than or equal to 25 minutes, given that our null hypothesis is true. In other videos, we have talked about how to do this. If we assume that the sampling distribution of the sample mean is roughly normal, we can use the sample mean, our sample size, and our sample standard deviation.
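Written as a conditional probability, the quantity we want is:

```latex
\text{P-value} = P\!\left(\bar{x} \ge 25 \;\middle|\; H_0 \text{ is true}\right), \qquad n = 100
```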
Perhaps we use a t-statistic to figure out roughly what this probability is going to be.
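Here is a minimal sketch of that calculation in Python, assuming a roughly normal sampling distribution. The sample standard deviation of 15 minutes is hypothetical (the video never gives one), so the resulting numbers are only illustrative:

```python
from math import sqrt
from scipy import stats

n = 100      # sample size, from the example
x_bar = 25   # sample mean in minutes, from the example
mu_0 = 20    # mean under the null hypothesis, in minutes
s = 15       # sample standard deviation in minutes (assumed, not from the video)

# t-statistic: how many standard errors the sample mean sits above mu_0
t_stat = (x_bar - mu_0) / (s / sqrt(n))

# One-sided P-value: P(sample mean >= 25 | H0 is true),
# using a t-distribution with n - 1 degrees of freedom
p_value = stats.t.sf(t_stat, df=n - 1)
print(t_stat, p_value)  # with these assumed numbers, p comes out well below 0.05
```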
Then we decide whether we can reject the null hypothesis; let me call that step five. In step five, there are two situations. If my P-value is less than alpha, then I reject my null hypothesis and say that I have evidence for my alternative hypothesis. If, on the other hand, my P-value is greater than or equal to my significance level (0.05 in this case), then I cannot reject the null hypothesis. I wouldn't say that I accept the null hypothesis; I would just say that we do not reject the null hypothesis.
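As a minimal sketch of that decision rule (the decide name and the messages are just for illustration):

```python
ALPHA = 0.05  # significance level, chosen before looking at the data

def decide(p_value: float, alpha: float = ALPHA) -> str:
    """Apply the significance-test decision rule to a computed P-value."""
    if p_value < alpha:
        return "reject H0: we have evidence for the alternative"
    return "fail to reject H0: we do not reject the null hypothesis"
```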
Let's say that when I do all of these calculations, I get a P-value that puts me in this first scenario right over here: a P-value of 0.03. Since 0.03 is indeed less than 0.05, I would reject the null hypothesis and say that I have evidence for the alternative.
This should hopefully make logical sense, because what we're saying is, "Hey look, we took a sample, and if we assume the null hypothesis is true, the probability of getting that sample is three percent, three one-hundredths." Since that probability is less than our probability threshold here, we reject the null hypothesis and say we have evidence for the alternative.
On the other hand, there might have been a scenario where we do all of the calculations here and we figure out a P-value of 0.5, which you can interpret as saying, "Hey, if we assume the null hypothesis is true, that there's no change due to making the background yellow, I would have a 50 percent chance of getting this result."
In that situation, since the P-value is higher than my significance level, I wouldn't reject the null hypothesis. A world where the null hypothesis is true and I get this result, well, that seems reasonably likely. This is the basis for significance tests generally, and as you will see, it is applicable in almost every field you'll find yourself in.
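Plugging the two P-values from this example into the hypothetical decide helper from earlier:

```python
print(decide(0.03))  # 0.03 < 0.05 -> reject H0: we have evidence for the alternative
print(decide(0.50))  # 0.50 >= 0.05 -> fail to reject H0
```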
Now, there's one last point of clarification that I want to make very, very, very clear. Our P-value, the thing that we're using to decide whether or not we reject the null hypothesis, this is the probability of getting your sample statistics, given that the null hypothesis is true. Sometimes people confuse this and they say, "Hey, is this the probability that the null hypothesis is true given the sample statistics that we got?"
And I would say, clearly, no, that is not the case. We are not trying to gauge the probability that the null hypothesis is true or not. What we are trying to do is say, "Hey, if we assume the null hypothesis were true, what is the probability that we got the result that we did for our sample?" And if that probability is low, if it's below some threshold that we set ahead of time, then we decide to reject the null hypothesis and say that we have evidence for the alternative.
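In the notation from earlier, the distinction the video is drawing is:

```latex
\text{P-value} = P\!\left(\bar{x} \ge 25 \mid H_0 \text{ is true}\right)
\;\neq\;
P\!\left(H_0 \text{ is true} \mid \bar{x} \ge 25\right)
```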