Vector form of the multivariable chain rule
So, in the last couple of videos, I talked about the multivariable chain rule, which I have written up here. If you haven't seen those, go take a look. Here, I want to write it out in vector notation, which helps us generalize it when the intermediary space is higher dimensional.
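For reference, since the on-screen formula isn't captured in the transcript, the rule being discussed is the two-variable chain rule:

```latex
\frac{d}{dt} f\big(x(t), y(t)\big)
  = \frac{\partial f}{\partial x} \frac{dx}{dt}
  + \frac{\partial f}{\partial y} \frac{dy}{dt}
```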
So, instead of writing x of t and y of t as separate functions and just emphasizing, oh, they have the same input space, whatever number x takes in is the same number y takes in, it's better and a little cleaner to say there's a single vector-valued function v. It takes in a single number t and outputs some kind of vector. In this case, you could say the components of v are x of t and y of t. That's fine, but I want to talk about what this looks like once we start writing everything in vector notation.
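In symbols, that packaging of x and y into one function looks like this:

```latex
\vec{v}(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}
```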
Since we see dx dt here and dy dt here, you might start thinking, oh, well, we should take the derivative of that vector-valued function, the derivative of v with respect to t. When we compute this, it's nothing more than taking the derivative of each component. So, in this case, you'd write dx dt for the derivative of x, and dy dt for the derivative of y. This is the vector-valued derivative.
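Written out, that componentwise derivative is:

```latex
\frac{d\vec{v}}{dt} = \begin{bmatrix} dx/dt \\ dy/dt \end{bmatrix}
```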
Now, you might start to notice something here. We've got one of those components multiplied by a certain value and the other component multiplied by a certain value, and you might recognize this as a dot product: the dot product between the vector that contains the partial derivatives, partial of f with respect to x and partial of f with respect to y, and the vector that contains the ordinary derivatives dx dt and dy dt.
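In symbols, that dot product reproduces exactly the sum from the chain rule:

```latex
\begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \end{bmatrix}
\cdot
\begin{bmatrix} dx/dt \\ dy/dt \end{bmatrix}
= \frac{\partial f}{\partial x} \frac{dx}{dt}
+ \frac{\partial f}{\partial y} \frac{dy}{dt}
```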
Of course, both of these are special vectors; they're not just random. The one on the left is the gradient of f. The one on the right is what we just computed: the derivative of v with respect to t. Just to be quick, I'm going to write that as v prime of t, which means exactly the same thing as dv dt.
This right here is another way to write the multivariable chain rule. If you were being a little more exact, you would emphasize that when you take the gradient of f, the thing you input into it is the output of that vector-valued function. You're plugging in x of t and y of t, so you'd emphasize that the gradient is evaluated at v of t.
Then you multiply it by the derivative, the vector-valued derivative, of v of t. When I say multiply, I mean dot product; these are vectors, and you're taking the dot product. This should look very familiar, because it has the same shape as the single-variable chain rule.
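Putting those pieces together, the compact vector form is:

```latex
\frac{d}{dt} f\big(\vec{v}(t)\big) = \nabla f\big(\vec{v}(t)\big) \cdot \vec{v}\,'(t)
```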
And just to remind us, I'll throw it up here. If you take the derivative of a composition of two single-variable functions f and g, you take the derivative of the outside function, f prime, evaluate it at what was the interior function, g of t, and then multiply by the derivative of that interior function, g prime of t. This is super helpful in single-variable calculus for computing a lot of derivatives.
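For comparison, that single-variable rule reads:

```latex
\frac{d}{dt} f\big(g(t)\big) = f'\big(g(t)\big) \, g'(t)
```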
Over here, it has a very similar form. The gradient really serves as the true extension of the derivative for multivariable functions, at least for scalar-valued ones. You take that derivative, evaluate it at the inner function, which just happens to be a vector-valued function, and then you multiply it by the derivative of that inner function, where multiplying in this context means taking the dot product of the two vectors.
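If you want to sanity-check the vector form numerically, here's a minimal sketch in Python with NumPy. The example functions f and v are made up for illustration; they don't come from the video:

```python
import numpy as np

def f(x, y):
    return x**2 * y                            # a made-up scalar-valued function

def grad_f(x, y):
    # gradient of f: (df/dx, df/dy) = (2xy, x^2)
    return np.array([2 * x * y, x**2])

def v(t):
    return np.array([np.cos(t), np.sin(t)])    # vector-valued v(t)

def v_prime(t):
    return np.array([-np.sin(t), np.cos(t)])   # componentwise derivative of v

t = 0.7
# vector form of the chain rule: grad f(v(t)) . v'(t)
chain_rule = grad_f(*v(t)).dot(v_prime(t))

# centered finite-difference estimate of d/dt f(v(t)) for comparison
h = 1e-6
finite_diff = (f(*v(t + h)) - f(*v(t - h))) / (2 * h)

print(chain_rule, finite_diff)  # the two values should agree closely
```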
This form also works if you have a function with a whole bunch of different variables. So, let's say you have some f, not of x and y, but of x1, x2, and so on, taking in a whole bunch of different variables all the way up to x100. Then what you throw into it is a vector-valued function, something that takes in a single variable t and outputs a vector.
In order for the composition to make sense, that function is going to need a whole bunch of intermediary component functions, so you could write it as x1 of t, x2 of t, x3 of t, all the way up to x100 of t. These are all functions at this point; they're the component functions of your vector-valued v.
This expression still makes sense, right? You can still take the gradient of f. It's going to have a hundred components. You can plug in any vector, any set of a hundred different numbers, and in particular, the output of a vector-valued function with a hundred different components is going to work.
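Spelled out, with those hundred component functions, the same statement reads:

```latex
\frac{d}{dt} f\big(\vec{v}(t)\big)
  = \nabla f\big(\vec{v}(t)\big) \cdot \vec{v}\,'(t)
  = \sum_{i=1}^{100} \frac{\partial f}{\partial x_i} \frac{dx_i}{dt}
```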
Then you take the dot product with the derivative of this, and that's the more general version of the multivariable chain rule. Another cool thing about writing it like this is that you can interpret it in terms of the directional derivative. I think I'll do that in the next video, though, rather than here.
So, a certain way to interpret this is with the directional derivative.