Expected value of a binomial variable | Random variables | AP Statistics | Khan Academy
So I've got a binomial variable ( x ), and I'm going to describe it in very general terms. It is the number of successes after ( n ) trials, where the probability of success for each trial is ( p ). This is a reasonable way to describe really any binomial variable: we're assuming that each of these trials is independent, the probability stays constant, we have a finite number of trials right over here, and each trial results in either a very clear success or failure.
What we're going to focus on in this video is, well, what would be the expected value of this binomial variable? What would the expected value of ( x ) be equal to? I will just cut to the chase and tell you the answer, and then later in this video, we'll prove it to ourselves a little bit more mathematically. The expected value of ( x ), it turns out, is just going to be equal to the number of trials times the probability of success for each of those trials.
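Using ( E(x) ) to denote the expected value of ( x ), the claim we're going to establish is:

( E(x) = n \cdot p )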
So if you wanted to make that a little bit more concrete, imagine that a trial is a free throw, taking a shot from the free throw line. Success is a made shot: you actually make the shot, and the ball goes in the basket. Your probability of success, ( p ), would be your free throw percentage. So let's say it's 30%, or 0.3, and let's say, for the sake of argument, that we're taking 10 free throws. So ( n ) is equal to 10.
This makes it all a lot more concrete. In this particular scenario, if ( x ) is the number of made free throws after taking 10 free throws with a 30% free throw percentage, then, based on what I just told you, your expected value would be ( n ) times ( p ): the number of trials, 10, times the probability of success on any one of those trials, 0.3, which is, of course, just equal to 3.
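To check that against a simulation, here is a minimal Python sketch (not part of the original video; the function name is illustrative) that plays out many 10-shot sessions and averages the number of makes:

```python
import random

def count_made_free_throws(n: int, p: float) -> int:
    """Simulate n independent free throws, each made with probability p,
    and return how many went in."""
    return sum(1 for _ in range(n) if random.random() < p)

# Average the number of makes over many simulated 10-shot sessions.
sessions = 100_000
average = sum(count_made_free_throws(10, 0.3) for _ in range(sessions)) / sessions
print(average)  # comes out close to 3.0, i.e. n * p
```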
Now, does that make intuitive sense? Well, if you're taking 10 shots with a 30% free throw percentage, it actually does feel natural that I would expect to make three shots. Now, with that out of the way, let's make ourselves feel good about this mathematically, and we're going to leverage some of our expected value properties. In particular, we're going to leverage the fact that if I take the expected value of the sum of two independent random variables, let's say ( x + y ), it's going to be equal to the expected value of ( x ) plus the expected value of ( y ). (This property actually holds even without independence, but our trials here are independent anyway.)
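As a quick empirical sanity check (again a sketch, not from the video), we can simulate two independent Bernoulli variables and confirm that the average of their sum matches the sum of their averages:

```python
import random

# Two independent random variables: a single Bernoulli(0.3) draw each.
trials = 100_000
y1 = [1 if random.random() < 0.3 else 0 for _ in range(trials)]
y2 = [1 if random.random() < 0.3 else 0 for _ in range(trials)]

def mean(values):
    return sum(values) / len(values)

# The average of the sums matches the sum of the averages.
print(mean([a + b for a, b in zip(y1, y2)]))  # roughly 0.6
print(mean(y1) + mean(y2))                    # roughly 0.6
```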
So, assuming this right over here, let's construct a new random variable. Let's call our random variable ( y ), and we know the following things about ( y ): the probability that ( y ) is equal to 1 is equal to ( p ), and the probability that ( y ) is equal to 0 is equal to ( 1 - p ). These are the only two outcomes for this random variable.
And so, you might be seeing where this is going. You could view this random variable as really representing one trial: it becomes 1 on a success and 0 when you don't have a success. You can then view our original random variable ( x ) as being equal to ( y + y + \cdots + y ), where, in our example, we have 10 of these ( y )'s.
In the concrete sense, you could view the random variable ( y ) as equaling 1 if you make a free throw and 0 if you don't. It really just represents one of those trials, and you can view ( x ) as the sum of ( n ) of those trials.
Well now, actually, let me be very clear here. I went immediately to the concrete, but I really should be saying ( n ) ( y )'s, because I want to stay general right over here. So there are ( n ) ( y )'s right over here; 10 was just a particular example. I am going to try to stay general for the rest of the video, because now we are really trying to prove this result right over here.
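Written out with subscripts (a small notational addition: each trial gets its own independent copy of ( y )), the claim is:

( x = y_1 + y_2 + \cdots + y_n )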
So let's just take the expected value of both sides. What is it going to be? We get that the expected value of ( x ) is equal to the expected value of all of this, and by that property right over here, that's the expected value of ( y ) plus the expected value of ( y ), and so on, plus the expected value of ( y ); we're going to have ( n ) of these terms.
So you could rewrite this sum as ( n ) times the expected value of ( y ). Now, what is the expected value of ( y )? Well, this is pretty straightforward; we can actually just compute it directly. The expected value of ( y ), let me just write it over here, is just the probability-weighted sum of the outcomes, and since there are only two discrete outcomes here, it's pretty easy to calculate.
We have a probability of ( p ) of getting a one, so that contributes ( p ) times 1, plus we have a probability of ( 1 - p ) of getting a zero, which contributes ( (1 - p) ) times 0. Well, what does this simplify to? Zero times anything is 0, and then you have 1 times ( p ), so this is just equal to ( p ).
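Written out:

( E(y) = p \cdot 1 + (1 - p) \cdot 0 = p )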
So the expected value of ( y ) is just equal to ( p ), and there you have it: the expected value of ( x ) is ( n ) times the expected value of ( y ), and the expected value of ( y ) is ( p ). Thus, the expected value of ( x ) is equal to ( n \cdot p ). Hope you feel good about that!
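Putting the whole chain together in symbols (using the subscripted ( y_i )'s from before):

( E(x) = E(y_1 + y_2 + \cdots + y_n) = n \cdot E(y) = n \cdot p )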