
Quadratic approximation formula, part 1


6m read · Nov 11, 2024

So our setup is that we have some kind of two-variable function (f(x, y)) that has a scalar output, and the goal is to approximate it near a specific input point. This is something I've already talked about in the context of a local linearization. I've written out the full local linearization in its most abstract and general form, and it looks like quite the beast.
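For reference, the transcript never reproduces the formula itself; what's on screen is presumably the standard local linearization of (f) about the point ((x_0, y_0)):

```latex
L(x, y) = f(x_0, y_0) + f_x(x_0, y_0)\,(x - x_0) + f_y(x_0, y_0)\,(y - y_0)
```

Here (f_x) and (f_y) denote the partial derivatives of (f), each evaluated at ((x_0, y_0)).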

But once you actually break it apart, which I'll do in a moment, it's not actually that bad. The goal of this video is going to be to extend this idea, and it'll literally be just adding terms onto this formula to get a quadratic approximation. What that means is we're starting to allow ourselves to use terms like (x^2), (xy), and (y^2). Quadratic basically just means any term in which two variables are multiplied together.

So here you have two x's multiplied together, here it's an x multiplied with a y, and here (y^2)—that kind of thing.

So let's take a look at this local linearization. It seems like a lot, but once you actually go through it term by term, you realize it's a relatively simple function. If you were to plug in numbers for the constant terms, it would come out as something quite concrete.

Because this right here, where you're evaluating the function at the specific input point, is just going to be some kind of constant; it outputs some number. Similarly for the partial derivative term: this little (f_x) means the partial derivative of (f) with respect to (x) evaluated at that point, so you're just getting another number, and over here this is also just another number. But we've written it in the abstract form so that you can see what you would need to plug in for any function and for any possible input point.
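As a concrete illustration (my own example, not from the video): for (f(x, y) = x^2 + y^2) near ((x_0, y_0) = (1, 2)), those three constants are

```latex
f(1, 2) = 5, \qquad f_x(1, 2) = 2x\big|_{(1,2)} = 2, \qquad f_y(1, 2) = 2y\big|_{(1,2)} = 4,
```

so the linearization comes out to the very ordinary-looking (L(x, y) = 5 + 2(x - 1) + 4(y - 2)).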

The reason for having it like this, the reason that it comes out to this form, comes down to a few important properties that this linearization has. Let me move this stuff out of the way; we'll get back to it in a moment. But I just want to emphasize a few of its properties, because they're properties that we want our quadratic approximation to have as well.

First of all, when you actually evaluate this function at the desired point, at ((x_0, y_0)), what do you get? Well, this constant term isn't influenced by the variables, so you'll just get (f) evaluated at that point, ((x_0, y_0)). Now for the rest of the terms: when we plug in (x) here, this is the only place where you actually see that variable.

Maybe that's worth pointing out, right? We've got two variables here and there's a lot going on, but the only places where you actually see those variables show up, where you have to plug in anything, are over here and over here. When you plug in (x_0) for our initial input, this entire term goes to zero. Right? And then similarly, when you plug in (y_0) over here, this entire term is going to go to zero, which multiplies out to zero for everything.

So what you end up with is just that first constant term; nothing else survives. And this is an important fact, because it tells you that your approximation, evaluated at the point about which you are approximating, actually equals the value of the function at that point. So that's very good.
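Written out, that evaluation is:

```latex
L(x_0, y_0) = f(x_0, y_0) + f_x(x_0, y_0)\underbrace{(x_0 - x_0)}_{0} + f_y(x_0, y_0)\underbrace{(y_0 - y_0)}_{0} = f(x_0, y_0)
```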

But we have a couple of other important facts also, because this isn't just a constant approximation; it's doing a little bit more for us. If you were to take the partial derivative of this linearization with respect to (x), what would you get?

Well, if you look up at the original function, this constant term contributes nothing, so that just corresponds to a zero. Over here, this entire thing looks like a constant multiplied by (x - x_0). If you differentiate this with respect to (x), what you're going to get is that constant, which is the partial derivative of (f) evaluated at our specific point.

Then the other term has no (x) in it; it only involves (y), which as far as (x) is concerned is a constant. So that whole term is zero, which means the partial derivative of the linearization with respect to (x) is equal to the value of the partial derivative of our original function with respect to (x) at that point.

Now notice, this is not saying that our linearization has the same partial derivative as (f) everywhere; it's just saying that its partial derivative happens to be a constant, and that constant is the value of the partial derivative of (f) at that specific input point.
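In symbols, differentiating the linearization term by term gives exactly that constant:

```latex
\frac{\partial L}{\partial x}(x, y) = 0 + f_x(x_0, y_0)\cdot 1 + 0 = f_x(x_0, y_0) \quad \text{for every } (x, y).
```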

You can do pretty much the same thing, and you'll see that the partial derivative of the linearization with respect to (y) is also a constant: the value of the partial derivative of (f) evaluated at that desired point. So these are three facts: the value of the linearization at the point, and the values of its two partial derivatives there. These essentially define the linearization itself.
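Here is a minimal numerical sanity check of those three facts. The function and the expansion point are arbitrary illustrative choices, not from the video; the partial derivatives are approximated by central finite differences:

```python
# Minimal numerical check of the three defining facts of the linearization,
# using an illustrative function and point (not from the video).
import math

def f(x, y):
    return math.exp(x) * math.sin(y)   # sample two-variable, scalar-output function

x0, y0 = 0.5, 1.0                      # sample expansion point
h = 1e-6                               # finite-difference step

# Partial derivatives of f at (x0, y0), via central differences.
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)

def L(x, y):
    # Local linearization of f about (x0, y0).
    return f(x0, y0) + fx * (x - x0) + fy * (y - y0)

# Fact 1: L matches f's value at the point.
assert abs(L(x0, y0) - f(x0, y0)) < 1e-9

# Facts 2 and 3: L's partial derivatives match f's at the point.
Lx = (L(x0 + h, y0) - L(x0 - h, y0)) / (2 * h)
Ly = (L(x0, y0 + h) - L(x0, y0 - h)) / (2 * h)
assert abs(Lx - fx) < 1e-6 and abs(Ly - fy) < 1e-6
print("Linearization matches f's value and first partials at (x0, y0).")
```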

Now what we're going to do for the quadratic approximation is take this entire formula and I'm just literally going to copy it here, and then we’re going to add to it so that the second partial derivative information of our approximation matches that of the original function.

Okay, that's kind of a mouthful, but it'll become clear as I actually work it out. Let me just kind of clean it up at least a little bit here. So what we're going to do is we're going to extend this, and I'm going to change its name because I don't want it to be a linear function anymore. What I want is for this to be a quadratic function, so instead I'm going to call it (Q(x, y)), and now we're going to add some terms.

What I could do is add, you know, some constant times (x^2), since that's a term we're now allowed to use, plus some kind of constant times (xy), and then another constant times (y^2). But the problem is that if I just add these terms as they are, it might mess things up when I plug in (x_0) and (y_0).

Right? It was very important that when you plug in those values, you get the original value of the function, and that the partial derivatives of the approximation also match those of the function. Raw quadratic terms could break that, because once you start plugging in (x_0) and (y_0) over here, they might actually change the value.

So we're basically going to do the same thing we did with the linearization: every time we have an (x), we instead write (x - x_0), just to make sure that we don't mess things up when we plug in (x_0), and likewise (y - y_0) for (y).

So instead of what I had written there, what we're going to add to our approximation is some kind of constant (we'll fill in what it should be in a moment) times ((x - x_0)^2), plus another constant multiplied by ((x - x_0)(y - y_0)), plus yet another constant, which I'll call (C), multiplied by ((y - y_0)^2).
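Putting it together, and writing the first two unnamed constants as (A) and (B) (only the third, (C), gets a name in the transcript), the quadratic approximation has the form:

```latex
Q(x, y) = f(x_0, y_0) + f_x(x_0, y_0)(x - x_0) + f_y(x_0, y_0)(y - y_0) + A(x - x_0)^2 + B(x - x_0)(y - y_0) + C(y - y_0)^2
```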

All right, there's quite a lot going on here; this is a heck of a function, and these are different constants that we're going to fill in, figuring out what they should be to most closely approximate the original function (f). Now, the important part of writing (x - x_0) and (y - y_0) is that when we plug in (x_0) for our variable (x) and (y_0) for our variable (y), all of this new stuff just goes to zero and cancels out. Moreover, when we take the partial derivatives and evaluate them at that point, all of it goes to zero as well, and we'll see that in a moment.
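To preview that claim, the partial derivative of the added terms with respect to (x) is

```latex
\frac{\partial}{\partial x}\Big[A(x - x_0)^2 + B(x - x_0)(y - y_0) + C(y - y_0)^2\Big] = 2A(x - x_0) + B(y - y_0),
```

which is zero at ((x_0, y_0)), and the same happens with respect to (y). So all three facts from the linearization carry over to (Q), no matter what (A), (B), and (C) turn out to be.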

Maybe I'll just actually show that right now, or rather I think I'll call the video done here and then start talking about how we fill in these constants in the next video. So I will see you then.
