Inverse matrix introduction | Matrices | Precalculus | Khan Academy
We know that when we're just multiplying regular numbers, we have the notion of a reciprocal. For example, if I take 2 and multiply it by its reciprocal, 1/2, I get 1. More generally, if I take any a that is not equal to 0 and multiply it by its reciprocal, 1/a, I also get 1.
One is a special number: if I multiply it by anything, I just get that original number back. So that's interesting; put that in the back of your mind. You learned this many, many years ago. Now we also have something that comes out of our knowledge of functions. We know that if there's some function, let's call it f of x, that goes from some set, which we call our domain, to some other set, which we call our range, then in many cases (not all cases) there's another function that can take us back.
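The two number facts above can be sketched in a couple of lines (the value of a is just an arbitrary choice):

```python
# Sketch of the two number facts above: a nonzero number times its
# reciprocal is 1, and 1 times any number leaves that number unchanged.
a = 2
print(a * (1 / a))   # 1.0
print(1 * a)         # 2
```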
We call that other function the inverse of f, so that if you apply the inverse of f to f of x, you get back to where you were: you get back to x. It also works the other way around: if you take f of f inverse of x, that too gets you back to x. So the natural question is: is there an analog of an inverse function, or of a reciprocal in multiplication, when we think about matrices?
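The function-inverse relationship just described can be sketched with a small invented example (f here is my own choice, not one from the video):

```python
# A minimal sketch of the inverse-function idea.

def f(x):
    """Example function: double the input, then add three."""
    return 2 * x + 3

def f_inverse(y):
    """Undo f: subtract three, then halve."""
    return (y - 3) / 2

# Applying the inverse after f, or f after the inverse, returns the input.
print(f_inverse(f(5)))   # 5.0
print(f(f_inverse(5)))   # 5.0
```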
So let's play with a few ideas. Let's imagine a matrix as a transformation, which is what we have already talked about. When we think about matrices as transformations, they really are functions. They're functions that are taking one point in a certain dimensional space—let's say in the coordinate plane—to another point. It transforms a vector to another vector.
For example, let's imagine something that does a clockwise 90-degree rotation. We know how to construct that transformation matrix, which really is a function. What it does is, in our transformation matrix, we want to say what do we do with the (1, 0) unit vector and what also do we do with the (0, 1) unit vector when you do that transformation?
Well, if you're doing a 90-degree clockwise turn, then the (1, 0) unit vector is going to go right over here and so that's going to be turned into the (0, -1) vector. So we'll write that right there. Then the (0, 1) vector is going to be turned into the (1, 0) vector. So, let me write it down. This is 90 degrees clockwise.
Then we can think about what 90-degree counterclockwise would look like. If you're going counterclockwise, your original (1, 0) vector right over here is going to go over here. It's going to become the (0, 1) vector, so we would write that right over here. Then the (0, 1) vector will then become this vector.
If you're doing a 90-degree counterclockwise rotation, it's going to become the (-1, 0) vector. So in theory, these two transformations should undo each other. If I do a transformation that first goes 90 degrees clockwise and then apply a transformation that's 90 degrees counterclockwise, I should get back to where we began.
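The two rotation matrices built above can be written out and checked in numpy (the variable names are mine):

```python
import numpy as np

# Each column of a transformation matrix records where a unit vector
# lands: the first column is the image of (1, 0), the second is the
# image of (0, 1).

clockwise = np.array([[0, 1],
                      [-1, 0]])         # (1,0) -> (0,-1), (0,1) -> (1,0)

counterclockwise = np.array([[0, -1],
                             [1, 0]])   # (1,0) -> (0,1), (0,1) -> (-1,0)

# Applying each matrix to the (1, 0) unit vector confirms the stated images.
print(clockwise @ np.array([1, 0]))          # [ 0 -1]
print(counterclockwise @ np.array([1, 0]))   # [0 1]
```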
Now, let's see what happens when we compose these two transformations, and we know how to do that; we've already talked about it. We essentially multiply the two matrices. If we were to multiply the clockwise matrix
(0, 1)
(-1, 0)
times the counterclockwise matrix
(0, -1)
(1, 0),
what do we get?
Well, let's see. Composing two 2x2 transformations is equivalent to multiplying their matrices; we've seen that in other videos. For the top-left entry, we take the first row of the first matrix and the first column of the second: (0 times 0) + (1 times 1), so that is going to be 1.
Then for the top-right entry, we take that same row with the second column: (0 times -1) + (1 times 0), which is just going to be 0. Then we multiply the second row by each of those columns: (-1 times 0) + (0 times 1) is 0, and then (-1 times -1) + (0 times 0) is 1.
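The same entry-by-entry multiplication can be carried out with numpy, and the product comes out to the identity matrix:

```python
import numpy as np

# The clockwise matrix times the counterclockwise matrix is the identity.
clockwise = np.array([[0, 1],
                      [-1, 0]])
counterclockwise = np.array([[0, -1],
                             [1, 0]])

product = clockwise @ counterclockwise
print(product)
# [[1 0]
#  [0 1]]
```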
And look what happened. When we took the composition of these two transformations that should undo each other, we see that they do. The result is the identity matrix, and we know that this matrix, as a transformation, just maps every vector onto itself.
Now this is really interesting because if we view these two-by-two transformation matrices as functions, we've just shown that if we call one of them, say, our first function, then we could call the other one its inverse.
Actually, we use that same language when we talk about matrices. If we call this matrix A, we would call this other one A inverse. So if I take matrix A and multiply it by its inverse, I get the identity matrix, which is right over here.
And here I'm speaking in generalities; I'm not just talking about the 2x2 case. This could just as well be the 3x3 case, the 4x4 case, and so on. We also know that I could have called the bottom one A and the top one A inverse, so the product should work the other way as well.
A inverse times A should also be equal to the identity matrix. And that's completely analogous to what we saw in these function examples between a function and its inverse. Because, as we said, an n by n matrix can be viewed as a transformation, that is, as a function, and we also see that it has an analog in ordinary multiplication.
Because here we could do this multiplication as a composition of transformations, but we also can just view this as matrix multiplication. And so if we take a matrix and we multiply it by its inverse, that's analogous to taking a number and multiplying by its reciprocal, and we get the equivalent of what in the number world would just be one.
But in the matrix world, it's the identity matrix, because the identity matrix has the nice property that multiplying it by any matrix gives you that original matrix back, which is the analog of what we saw with the number 1 in the regular number world.
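These properties can be checked numerically. As a sketch, numpy's `linalg.inv` computes an inverse directly, and both products with the original matrix give the identity up to floating-point rounding (the matrix A below is just an invertible example I chose, not one from the video):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # an arbitrary invertible example matrix

A_inv = np.linalg.inv(A)     # [[3, -1], [-5, 2]] for this A

identity = np.eye(2)
print(np.allclose(A @ A_inv, identity))   # True
print(np.allclose(A_inv @ A, identity))   # True

# And the identity matrix plays the role of the number 1:
print(np.allclose(identity @ A, A))       # True
```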