
A.I. economics: How cheaper predictions will change the world | Ajay Agrawal | Big Think


5 min read · Nov 3, 2024

I think economics has something to contribute in terms of our understanding of artificial intelligence because it gives us a different view.

So, for example, if you ask a technologist to tell you about the rise of semiconductors, they will talk to you about the increasing number of transistors on a chip and all the science underlying the ability to keep doubling the number of transistors every 18 months or so.

But if you ask an economist to describe to you the rise of semiconductors, they won’t talk about transistors on a chip; instead, they’ll talk about a drop in the cost of arithmetic. They’ll say, what’s so powerful about semiconductors is they substantially reduced the cost of arithmetic.

It’s the same with A.I. Everybody is fascinated with all the magical things A.I. can do, and what economists bring to the conversation is that they are able to look at a fascinating technology like artificial intelligence and strip all the fun and wizardry out of it and reduce A.I. down to a single question, which is, “What does this technology reduce the cost of?”

And in the case of A.I., the reason economists think it's such a foundational technology, and why it stands in a different category from virtually every other domain of technology we see today, is that the thing for which it drops the cost is such a foundational input, something we use for so many things. In the case of A.I., that input is prediction.

And why that's useful is that as soon as we think of A.I. as a drop in the cost of prediction, first of all, it takes away all the confusion of, well, what is this current renaissance in A.I. actually doing? Is it Westworld? Is it C-3PO? Is it HAL? What is it? And really, it's simply a drop in the cost of prediction.

And we define prediction as taking information you have to generate information you don't have. So it's not just the traditional form of forecasting, like taking last month's sales and predicting next month's sales. For example, if we have a medical image of a tumor, the data we have is the image and what we don't have is the classification of the tumor as benign or malignant; the A.I. makes that classification, and that's a form of prediction.
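
As a concrete sketch of that framing (my illustration, not from the talk), the snippet below uses scikit-learn's bundled breast-cancer dataset, where tabular features derived from tumor images stand in for the information we have and the benign/malignant label is the information we don't have:

```python
# A minimal sketch of "prediction" in the sense used above: take data we have
# (measurements derived from tumor images) and generate the label we don't have
# (benign vs. malignant). The bundled dataset is only a stand-in for real imaging data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)            # features we have, labels we want
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)              # a simple predictor; any classifier works
model.fit(X_train, y_train)                            # learn the mapping from known examples
print(model.predict(X_test[:5]))                       # "predict" the information we don't have
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```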

And from economics 101, most people remember there's a downward-sloping demand curve: when something becomes cheaper, we use more of it.
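
To put that in symbols (my own illustration, not the speaker's), a linear demand curve is enough:

$$Q(p) = a - b\,p, \qquad b > 0,$$

so if the price of prediction falls from $p_1$ to $p_2 < p_1$, the quantity of prediction used rises by $b\,(p_1 - p_2)$.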

And so in the case of prediction, as it becomes cheaper, we’ll use more and more of it. And so that will take two forms: one is that we’ll use more of it for things we traditionally use prediction for, like demand forecasting and supply chain management.

But where I think it’s really interesting is that when it becomes cheap, we’ll start using it for things that weren’t traditionally prediction problems, but we’ll start converting problems into prediction problems to take advantage of the new, cheap prediction.

So one example is driving. We've had autonomous vehicles for a long time, but we've always used them inside a controlled environment like a factory or a warehouse. And we did that because we had to control the number of situations the vehicle could encounter; think of it as if/then statements.

So we have a robot; the engineer would program the robot to move around the factory or the warehouse, and then they would give it a bit of intelligence. They would put a camera on the front of the robot, and they would give it some logic, saying, okay, if something walks in front then stop. If the shelf is empty then move to the next shelf. If/then. If/then.
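
A minimal sketch of that rule-based style, with made-up sensor inputs and actions (my illustration, not the speaker's code):

```python
# Hypothetical if/then controller for a warehouse robot, as described above.
# The sensor flags and action names are placeholders, not a real robotics API.

def rule_based_controller(obstacle_ahead: bool, shelf_empty: bool) -> str:
    """Return an action chosen purely by hand-written if/then rules."""
    if obstacle_ahead:            # if something walks in front, then stop
        return "stop"
    if shelf_empty:               # if the shelf is empty, then move to the next shelf
        return "move_to_next_shelf"
    return "continue"             # default behavior inside the controlled environment

print(rule_based_controller(obstacle_ahead=False, shelf_empty=True))  # -> move_to_next_shelf
```

The limitation described next is exactly that this list of ifs has to be written by hand, one rule at a time.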

But you could never put that vehicle on a city street because there is an infinite number of ifs. There are so many things that could happen in an uncontrolled environment. That’s why, as recently as six years ago, experts in the field were saying we’ll never have a driverless car on a city street in our lifetime—until it was converted into a prediction problem.

And the people who were familiar with this new, cheap form of prediction said, why don't we solve this problem in a different way and instead treat it as a single prediction problem? And the prediction is: what would a good human driver do?

And so effectively, the way you can think about it is that we put humans in a car and we told them to drive. Humans have data coming in through the cameras on our faces and the microphones on the sides of our heads; the data comes in, we process it with our monkey brains, and then we take an action.

And our actions are very limited: we can turn left; we can turn right; we can brake; we can accelerate. Now think about an A.I. sitting in the car along with the driver. It doesn't have its own input sensors, eyes, and ears, so we have to give it some: we put cameras, radar, and LiDAR around the car. Then the A.I. has this incoming data, and every second, as the data comes in, it tries to predict what the human driver will do in the next second.

In the beginning, it's a terrible predictor; it makes lots of mistakes. From a statistical point of view, we can say it has big confidence intervals; it's not very confident. But it learns as it goes: every time it makes a mistake, say it thinks the driver is about to turn left but the driver doesn't turn left, it updates its model.

It thinks the driver was going to brake; the driver doesn’t brake, so it updates its model. And as it goes, the predictions get better and better and better, and the confidence intervals get smaller and smaller and smaller.
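
One way to picture that loop in code, as a rough sketch rather than how any real self-driving stack is built, is an online classifier that keeps predicting the driver's next action from sensor features and updates itself on every observation. All features, actions, and the toy "ground truth" below are made up:

```python
# Rough sketch of "predict what the human driver will do next, then learn from mistakes".
# Sensor features and the driver's behavior are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
actions = ["left", "right", "brake", "accelerate"]     # the small action space described above
model = SGDClassifier(loss="log_loss")                 # an online learner we can update each step
errors = []

for t in range(5_000):
    sensors = rng.normal(size=8)                       # stand-in for camera / radar / LiDAR input
    human_action = actions[int(np.argmax(sensors[:4]))]  # toy "what the good driver actually did"

    if t > 0:                                          # after the first observation, predict first...
        predicted = model.predict(sensors.reshape(1, -1))[0]
        errors.append(predicted != human_action)
    # ...then update on what the driver really did, so mistakes tend to shrink over time.
    model.partial_fit(sensors.reshape(1, -1), [human_action], classes=actions)

print("early error rate:", np.mean(errors[:500]))
print("late  error rate:", np.mean(errors[-500:]))
```

The falling error rate in this toy is the analogue of the confidence intervals getting smaller and smaller.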

So we turned driving into a prediction problem. We’ve turned translation into a prediction problem. That used to be a rules-based problem where we had linguists with many rules and many exceptions, and that’s how we did translation. Now we’ve turned it into a prediction problem.

I think probably the most common surprise people have comes from the HR people who come into our lab and say: "Hey, we're here to learn about A.I. because we need to know what kinds of people to hire for our company, you know, for our manufacturing or our sales or this or that division. Of course, it won't affect my division, because I'm in HR and we're a very people-oriented part of the business, so A.I. is not going to affect us."

But of course, people are breaking HR down into a series of prediction problems. So, for example, the first thing HR people do is recruit, and recruiting is essentially taking in a set of input data, like resumes and interview transcripts, and then trying to predict which of a set of applicants will be the best for the job.

And once they hire people, then the next part is promotion. Promotion has also been converted into a prediction problem. You have a set of people working in the company, and you have to predict who will be the best at the next-level-up job.

And then the next task is retention. They have 10,000 people working in the company, and they have to predict which of those people are most likely to leave, particularly their stars, and also predict what they can do that would most likely increase the chance of those people staying.
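
As one illustration of that framing, and only as a sketch on synthetic data (none of these features or numbers come from the talk), retention can be posed as predicting each employee's probability of leaving and then ranking the strong performers by that risk:

```python
# Hypothetical retention-as-prediction sketch on made-up employee data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000                                             # "10,000 people working in the company"
tenure = rng.uniform(0, 15, n)                         # years at the company (made-up feature)
performance = rng.normal(0, 1, n)                      # higher = stronger performer ("stars")
left_before = (rng.random(n) < 1 / (1 + np.exp(tenure - 3))).astype(int)  # toy past attrition

X = np.column_stack([tenure, performance])
model = LogisticRegression().fit(X, left_before)       # learn from who has left in the past

risk = model.predict_proba(X)[:, 1]                    # predicted probability each person leaves
stars = performance > 1.0                              # a crude "star" cutoff for illustration
stars_most_at_risk = np.argsort(-risk * stars)[:10]    # the stars most likely to leave
print(stars_most_at_risk)
```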

And so what I would call one of the black arts in A.I. right now is converting existing problems into prediction problems so that A.I.s can handle them.
