Interpreting computer regression data | AP Statistics | Khan Academy
In other videos we've done linear regressions by hand, but we mentioned that most regressions are actually done using some type of computer or calculator. So what we're going to do in this video is look at an example of the output we might see from a computer, so that we're not intimidated by it, and see how it gives us the equation for the regression line along with some other useful data.
Here, it tells us Cheryl Dixon is interested to see if students who consume more caffeine tend to study more as well. She randomly selects 20 students at her school and records their caffeine intake in milligrams and the number of hours spent studying. A scatter plot of the data showed a linear relationship. This is a computer output from a least squares regression analysis on the data.
We have these things called the predictors and coefficients, and then we have these other things: the standard error of the coefficient, t, and p, plus all of these things down here. How do we make sense of this in order to come up with an equation for our linear regression?
Let's just get straight on our variables. Let's say that y is the thing that we're trying to predict, so this is the hours spent studying, hours studying. Then let's say x is what we think explains the hours studying, and in this case that is the amount of caffeine ingested. So this is caffeine consumed in milligrams.
Our regression line would have the form y hat equals mx plus b, where the hat tells us this is a linear regression trying to estimate the actual y values for given x's. Now, how do we figure out what m and b are based on this computer output?
When you look at this table here, this first column says 'predictor,' and it says 'constant' and has caffeine. All this is saying is, when you're trying to predict the number of hours studying, when you're trying to predict y, there are essentially two inputs: there is the constant value, and there is your variable, in this case caffeine, that you're using to predict the amount that you study.
This tells you the coefficients on each. The coefficient on a constant is the constant; you could view this as the coefficient on the x to the zeroth term. The coefficient on the constant is 2.544, and then the coefficient on the caffeine, well we just said that x is the caffeine consumed, so this is that coefficient: 0.164.
So just like that, we actually have the equation for the regression line. That is why these computer outputs are useful. We can just write it out: y hat is equal to 0.164x plus 2.544. So that's the regression line.
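If you want to sanity-check that equation, here is a minimal Python sketch that just plugs the two coefficients from the output into y hat = mx + b. The 30 mg input is only an illustrative value, not something from Cheryl Dixon's data.

```python
def predict_hours_studying(caffeine_mg: float) -> float:
    """Estimate hours spent studying for a given caffeine intake in mg."""
    intercept = 2.544  # coefficient on the constant (the b in y-hat = mx + b)
    slope = 0.164      # coefficient on caffeine (the m)
    return slope * caffeine_mg + intercept

# For example, a student who consumes 30 mg of caffeine:
print(predict_hours_studying(30))  # 0.164 * 30 + 2.544 = 7.464
```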
What about all this other information they give us? Well, I won't give you a very satisfying answer here, because all of it is actually useful for inferential statistics, for thinking about things like: what is the probability that we got a fit this good just by chance?
So this right over here is the r squared, and if you wanted to figure out r from this, you would just take the square root. Here we could say that r is going to be equal to the square root of 0.60032, or to however much precision the output gives you. But you might say, well, how do we know whether r is the positive square root or the negative square root of that?
r can take on values between negative one and positive one, and the answer is that you look at the slope. Here we have a positive slope, which tells us that r is going to be positive. If we had a negative slope, then we would take the negative square root.
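As a quick sketch of that same logic in Python: take the square root of the reported r squared and give it the sign of the slope. The 0.60032 and 0.164 below are the values read off this output.

```python
import math

r_squared = 0.60032  # R-Sq from the computer output
slope = 0.164        # coefficient on caffeine, which is positive

# r has the same sign as the slope
r = math.copysign(math.sqrt(r_squared), slope)
print(round(r, 3))  # roughly 0.775
```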
Now this right here is the adjusted r squared, and we really don't have to worry about it too much when we're thinking about bivariate data, where we're talking about caffeine and hours studying in this case. If we started to have more variables that tried to explain the hours studying, then we would care about adjusted r squared, but we're not going to do that just yet.
Last but not least, this s variable is the standard deviation of the residuals, which we study in other videos. Why is that useful? Well, it's a measure of how well the regression line fits the data; you could view it as a measure of the typical error.
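In case it helps to see what that s summarizes, here is a small sketch of the usual formula: the square root of the sum of squared residuals divided by n minus 2. The function is a generic helper for illustration, not something taken from the output itself.

```python
import math

def residual_std_dev(y_actual: list[float], y_predicted: list[float]) -> float:
    """Standard deviation of the residuals for a fitted line (the s in the output)."""
    n = len(y_actual)
    # Sum of squared residuals: (actual - predicted)^2 for each data point
    sse = sum((y - y_hat) ** 2 for y, y_hat in zip(y_actual, y_predicted))
    # Divide by n - 2 degrees of freedom, since the line uses two estimated parameters
    return math.sqrt(sse / (n - 2))
```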
So, big takeaway: computers are useful. They'll give you a lot of data. The key thing is how do you pick out the things that you actually need? Because if you know how to do it, it can be quite straightforward.