Divergence notation
So I've said that if you have a two-dimensional vector field with component functions P and Q, the divergence of this guy, the divergence of V, which is a scalar-valued function of x and y, is by definition the partial derivative of P with respect to x plus the partial derivative of Q with respect to y.
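To make that formula concrete, here's a minimal sketch in Python using SymPy. The field V = (xy, x + y²) is an invented example, not one from the video:

```python
# Minimal sketch: check the 2D divergence formula with SymPy.
# The field V = (x*y, x + y**2) is a made-up example.
import sympy as sp

x, y = sp.symbols('x y')
P = x * y       # first component of V
Q = x + y**2    # second component of V

div_V = sp.diff(P, x) + sp.diff(Q, y)  # dP/dx + dQ/dy
print(div_V)  # prints 3*y
```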
And there's actually another notation for divergence that's kind of helpful for remembering the formula: you take this nabla symbol, that upside-down triangle we also use for the gradient, and imagine taking the dot product between it and your vector-valued function.
As we did with the gradient, the loose mnemonic for this upside-down triangle is to think of it as a vector full of partial differential operators. That sounds fancy, but all it means is that the first component is partial-partial-x, an operator that wants to take in a function and take its partial derivative with respect to x.
The second component is partial-partial-y, an operator that wants to take in a function and take its partial derivative with respect to y. Loosely speaking, this isn't really a vector; these components aren't numbers or functions. But it's something you can write down, and it's helpful symbolically.
You imagine taking the dot product between that and V, whose components are the scalar-valued functions P(x, y) and Q(x, y). When you do this dot product and line up terms, the first components get multiplied together, quote-unquote multiplied, because when I say the first component multiplied by P, I really mean you're taking that partial derivative operator, partial-partial-x, and evaluating it at P.
That's what multiplication looks like in this case. Then, as per the dot product, you add what happens when you quote-unquote multiply the second components, partial-partial-y and Q, which for an operator means you give it the function Q and it takes the partial derivative with respect to y.
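Written out symbolically, that mnemonic is just the following (a LaTeX restatement of the two-dimensional formula above; the pmatrix environment needs the amsmath package):

```latex
% 2D divergence as a symbolic dot product (requires amsmath for pmatrix)
\[
\nabla \cdot \vec{V}
  = \begin{pmatrix} \dfrac{\partial}{\partial x} \\[6pt] \dfrac{\partial}{\partial y} \end{pmatrix}
  \cdot
  \begin{pmatrix} P(x, y) \\ Q(x, y) \end{pmatrix}
  = \frac{\partial P}{\partial x} + \frac{\partial Q}{\partial y}
\]
```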
So we see we get the same thing over here; it's the same formula we had. You can think of it as a mnemonic device for remembering what the divergence is. But another nice thing is that this notation applies to higher-dimensional functions as well.
Right, if we have a vector-valued function that's a three-dimensional vector field, it's got x, y, and z as its inputs, and its output then also has to have three dimensions, so its components might be P, Q, and R, each a function of x, y, and z.
That's P(x, y, z), Q(x, y, z), and R(x, y, z). I haven't talked about three-dimensional divergence, but if you take this field and imagine doing your nabla dotted with the vector-valued function, it can still make sense.
In this case, you think of that nabla as having three different components: the first is partial-partial-x, the second is partial-partial-y, and the last is partial-partial-z.
The ordering of these variables, x, y, and z, just matches the order of the inputs here. Even if they didn't have the names x, y, and z, you'd put the operators in the same order the variables show up in your function.
When you imagine taking the dot product between this and the vector-valued output with components P, Q, and R, what you get, and I'll write it over here, is that you multiply the first components, taking that partial-partial-x and evaluating it at P.
Then you add the product of the second components, partial-partial-y evaluated at Q. And you add the product of the third components, partial-partial-z evaluated at R.
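The same pattern works in any number of dimensions, which a short sketch makes explicit. The helper function divergence and the sample 3D field below are invented for illustration:

```python
# Sketch: divergence in any dimension is the sum of each component's
# partial derivative with respect to its matching input variable.
import sympy as sp

def divergence(components, variables):
    # Pair component i with variable i, in matching order, and sum
    # the partial derivatives d(component_i)/d(variable_i).
    return sum(sp.diff(f, v) for f, v in zip(components, variables))

x, y, z = sp.symbols('x y z')
V = [x * z, y**2, sp.sin(z)]        # made-up field with components P, Q, R
print(divergence(V, [x, y, z]))     # prints 2*y + z + cos(z)
```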
You know, since I haven't talked about three-dimensional vector fields or three-dimensional divergence, it's maybe not a given that you'd have as strong an intuition for why this last term shows up in divergence as for the other two, but it's actually quite similar.
You're thinking about how the z-component of the output vector changes as the z-value of the input changes, as you move up and down in that direction. And this pattern goes to even higher dimensions that we can't visualize: four, five, a hundred, whatever you want.
That's what makes this notation quite nice: it encapsulates all of that, giving a really compact way of describing a formula that has a simple pattern but would otherwise get out of hand.
See you next video.