Constructing t interval for difference of means | AP Statistics | Khan Academy
Let's say that we have two populations. So that's the first population, and this is the second population right over here. We are going to think about the means of these populations.
So let's say this first population is the population of golden retrievers, and this second population is the population of chihuahuas. The mean that we're going to think about is maybe the mean weight.
So mu 1 would be the mean, the true mean weight of the population of golden retrievers, and mu 2 would be the true mean weight of the population of chihuahuas. What we want to think about is: what is the difference between these two population means, between these two population parameters?
Well, if we don't know this, all we can do is try to estimate it and maybe construct some type of confidence interval. And that's what we're going to talk about in this video.
So how do we go about doing it? Well, we've seen this or similar things before. What you would do is you would take a sample from both populations. So from population 1 here, I would take a sample of size n sub 1, and from that I can calculate a sample mean.
So this is a statistic that is trying to estimate that, and I can also calculate a sample standard deviation. I can do the same thing with the population of chihuahuas, if that's what our population 2 is all about. So I could take a sample, and actually, this sample size does not have to be the same as n sub 1, so I'll call it n sub 2. It could be, but it doesn't have to be.
From that, I can calculate a sample mean, x bar sub 2, and a sample standard deviation. So now, assuming that our conditions for inference are met (the random condition, the normal condition, and the independence condition, which we talk about in other videos on inference for means), let's think about how we can construct a confidence interval. You might say, "All right, well, that would be the difference of my sample means, x bar sub 1 minus x bar sub 2, plus or minus some z value times the standard deviation of the sampling distribution of the difference of the sample means."
So x bar sub 1 minus x bar sub 2. And you might say, "Well, where do I get my z from?" Well, our confidence level would determine that. If our confidence level is 95%, that would determine our z.
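Written out in symbols, that first attempt at an interval would be

```latex
(\bar{x}_1 - \bar{x}_2) \;\pm\; z^{*} \cdot \sigma_{\bar{x}_1 - \bar{x}_2}
```

where sigma sub x bar 1 minus x bar 2 is the standard deviation of the sampling distribution of the difference of the sample means.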
Now, this would not be incorrect, but we face a problem because we are going to need to estimate what the standard deviation of the sampling distribution of the difference between our sample means actually is. To make that clear, let me write it this way: the variance of the sampling distribution of the difference of our sample means is going to be equal to the variance of the sampling distribution of sample mean one plus the variance of the sampling distribution of sample mean two.
Now, if we knew the true underlying standard deviations of this population and this population, then we could actually come up with these. In that case, this right over here would be equal to the variance of the underlying population 1 divided by our sample size n sub 1, plus the variance of the underlying population 2 divided by its sample size, n sub 2.
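In symbols, the relationship we just described is

```latex
\sigma_{\bar{x}_1 - \bar{x}_2}^{2} \;=\; \sigma_{\bar{x}_1}^{2} + \sigma_{\bar{x}_2}^{2} \;=\; \frac{\sigma_1^{2}}{n_1} + \frac{\sigma_2^{2}}{n_2}
```

where sigma sub 1 squared and sigma sub 2 squared are the true population variances, which in practice we don't know.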
But we don't know these variances, and so we try to estimate them. We estimate them with our sample standard deviations. So we say this is going to be approximately equal to our first sample standard deviation squared over n 1, plus our second sample standard deviation squared over n 2.
So we can say that an estimate of the standard deviation of the sampling distribution of the difference between our sample means is going to be the square root of this; it's going to be approximately equal to the square root of s sub 1 squared over n sub 1 plus s sub 2 squared over n sub 2.
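To make that concrete, here is a minimal sketch of computing that estimated standard deviation in Python. The sample sizes and sample standard deviations below are made-up values, just for illustration:

```python
from math import sqrt

# Hypothetical sample statistics (illustrative values, not from any real data)
n1, s1 = 30, 4.5   # sample size and sample standard deviation for sample 1
n2, s2 = 40, 1.2   # sample size and sample standard deviation for sample 2

# Estimate of the standard deviation of the sampling distribution
# of the difference of sample means: sqrt(s1^2/n1 + s2^2/n2)
se_diff = sqrt(s1**2 / n1 + s2**2 / n2)
print(se_diff)  # about 0.84
```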
But the problem is that once we use this estimate, a critical z value isn't going to be as good as a critical t value. So instead, you would say my confidence interval is going to be x bar sub 1 minus x bar sub 2, plus or minus a critical t value instead of a z value, because a t value works better when you are estimating the standard deviation of the sampling distribution of the difference between the sample means.
And so you have t star times our estimate of this, which is going to be equal to the square root of s sub 1 squared over n 1 plus s sub 2 squared over n 2.
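So, written out, the interval we're constructing is

```latex
(\bar{x}_1 - \bar{x}_2) \;\pm\; t^{*} \sqrt{\frac{s_1^{2}}{n_1} + \frac{s_2^{2}}{n_2}}
```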
Then you might say, "Well, what determines our t star?" Well, once again, you would look it up on a table using your confidence level. You might be saying, "Wait, hold on. When I look up a t value, I don't just care about a confidence level, I also care about degrees of freedom. What is going to be the degrees of freedom in this situation?"
Well, there's a simple answer and a complicated answer. When we think about the difference of means, there are fairly sophisticated formulas that computers can use to get a more precise number of degrees of freedom.
But what you will typically see in an introductory statistics class is a conservative view of degrees of freedom, where you take the smaller of n sub 1 and n sub 2 and subtract 1 from it. So the degrees of freedom here are going to be the smaller of n sub 1 minus 1 or n sub 2 minus 1.
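As a small sketch of what that looks like in code, here's one way you might get the conservative degrees of freedom and the corresponding critical value using SciPy's t distribution. The sample sizes and the 95% confidence level here are just example choices:

```python
from scipy import stats

n1, n2 = 30, 40        # hypothetical sample sizes
confidence = 0.95      # example confidence level

# Conservative degrees of freedom: the smaller sample size, minus one
df = min(n1, n2) - 1

# Two-tailed critical t value for this confidence level and df
t_star = stats.t.ppf((1 + confidence) / 2, df)
print(df, t_star)      # 29 and roughly 2.05
```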
In future videos, we will work through examples that do this.