A.I. economics: How cheaper predictions will change the world | Ajay Agrawal | Big Think
I think economics has something to contribute to our understanding of artificial intelligence because it gives us a different view.
So, for example, if you ask a technologist to tell you about the rise of semiconductors, they will talk to you about the increasing number of transistors on a chip and all the science underlying the ability to keep doubling the number of transistors every 18 months or so.
But if you ask an economist to describe to you the rise of semiconductors, they won’t talk about transistors on a chip; instead, they’ll talk about a drop in the cost of arithmetic. They’ll say, what’s so powerful about semiconductors is they substantially reduced the cost of arithmetic.
It’s the same with A.I. Everybody is fascinated by all the magical things A.I. can do. What economists bring to the conversation is the ability to look at a fascinating technology like artificial intelligence, strip all the fun and wizardry out of it, and reduce A.I. down to a single question: “What does this technology reduce the cost of?”
And in the case of A.I., the reason economists think it’s such a foundational technology, and why it’s so important that it stands in a different category from virtually every other domain of technology we see today, is that the thing for which it drops the cost is such a foundational input; we use it for so many things. In the case of A.I., that input is prediction.
And so why that’s useful is that as soon as we think of A.I. as a drop in the cost of prediction, first of all, it takes away all the confusion of, well, what is this current renaissance in A.I. actually doing? Is it Westworld? Is it C-3PO? Is it HAL? What is it? Really, what it is, is simply a drop in the cost of prediction.
And we define prediction as taking information you have to generate information you don’t have. So it’s not just the traditional form of forecasting, like taking last month’s sales and predicting next month’s sales. Take a medical image, for example: we’re looking at a tumor, the data we have is the image, and what we don’t have is the classification of the tumor as benign or malignant. The A.I. makes that classification; that’s a form of prediction.
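To make that framing concrete, here is a minimal sketch (not from the talk; the features and numbers are invented, and it assumes scikit-learn is available): a model is trained on cases where we already have both the image-derived data and the classification, and it then predicts the classification we don’t yet have for a new case.

```python
# Minimal sketch of classification-as-prediction (invented data, assumes scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Information we have: image-derived features (say, size and texture score)
# plus the known classification for tumors that were already diagnosed.
X_known = np.array([[1.2, 0.8], [3.5, 2.1], [0.9, 0.5], [4.0, 2.7]])
y_known = np.array([0, 1, 0, 1])  # 0 = benign, 1 = malignant

# Information we don't have: the classification of a new image.
X_new = np.array([[3.1, 1.9]])

model = LogisticRegression().fit(X_known, y_known)
print(model.predict(X_new))        # the predicted class for the new tumor
print(model.predict_proba(X_new))  # and how confident that prediction is
```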
And from economics 101, most people remember that there’s a downward sloping demand curve: when something becomes cheaper, we use more of it.
And so in the case of prediction, as it becomes cheaper, we’ll use more and more of it. And so that will take two forms: one is that we’ll use more of it for things we traditionally use prediction for, like demand forecasting and supply chain management.
But where I think it’s really interesting is that when it becomes cheap, we’ll start using it for things that weren’t traditionally prediction problems; we’ll start converting problems into prediction problems to take advantage of the new, cheap prediction.
So one example is driving. We’ve had autonomous cars, or autonomous vehicles, for a long time, but we’ve always used them inside a controlled environment like a factory or a warehouse. And we did that because we had to limit the number of situations the vehicle could encounter; think of it in terms of if/then statements.
So we have a robot; the engineer would program the robot to move around the factory or the warehouse, and then they would give it a bit of intelligence. They would put a camera on the front of the robot and give it some logic: okay, if something walks in front, then stop. If the shelf is empty, then move to the next shelf. If/then. If/then.
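As an illustration (my sketch, not the talk’s; the sensor readings and actions are hypothetical placeholders), the rule-based approach is essentially a hand-written list of if/then statements:

```python
# Sketch of the rule-based (if/then) approach to a warehouse robot.
def decide(obstacle_ahead: bool, shelf_empty: bool) -> str:
    if obstacle_ahead:   # if something walks in front, then stop
        return "stop"
    if shelf_empty:      # if the shelf is empty, then move to the next shelf
        return "move_to_next_shelf"
    return "continue"    # otherwise, keep doing the current task

print(decide(obstacle_ahead=False, shelf_empty=True))  # -> move_to_next_shelf
```

That only works because the list of ifs stays short inside a controlled environment.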
But you could never put that vehicle on a city street because there is an infinite number of ifs. There are so many things that could happen in an uncontrolled environment. That’s why, as recently as six years ago, experts in the field were saying we’ll never have a driverless car on a city street in our lifetime—until it was converted into a prediction problem.
And the people who are familiar with this new, cheap form of prediction said, why don’t we solve this problem in a different way and instead treat it as a single prediction problem? And the prediction is: What would a good human driver do?
And so effectively, the way you can think about it is that we put humans in a car and told them to drive. Humans have data coming in through the cameras on our faces and the microphones on the sides of our heads; the data comes in, we process it with our monkey brains, and then we take action.
And our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate. Now think about an A.I. sitting in the car along with the driver. It doesn’t have its own input sensors, its own eyes and ears, so we have to give it some: we put radar, cameras, and LiDAR around the car. Then the A.I. has this data coming in every second, and it tries to predict what the human driver will do in the next second.
In the beginning, it’s a terrible predictor; it makes lots of mistakes. From a statistical point of view, we can say it has big confidence intervals; it’s not very confident. But it learns as it goes: every time it makes a mistake, say it thinks the driver is about to turn left but the driver doesn’t turn left, it updates its model.
It thinks the driver is going to brake; the driver doesn’t brake, so it updates its model. And as it goes, the predictions get better and better and better, and the confidence intervals get smaller and smaller and smaller.
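Here is a rough sketch of that watch-and-learn loop (my own illustration, not from the talk; the sensor values and the driver’s behaviour are made-up stand-ins, and it assumes scikit-learn is available):

```python
# Each step the A.I. predicts the driver's next action from sensor data,
# sees what the driver actually did, and updates its model.
import numpy as np
from sklearn.linear_model import SGDClassifier

ACTIONS = np.array(["left", "right", "brake", "accelerate"])  # the limited action set
model = SGDClassifier()
rng = np.random.default_rng(0)
mistakes = []

for t in range(2000):
    sensors = rng.normal(size=4).reshape(1, -1)      # stand-in for radar/camera/LiDAR input
    driver_action = ACTIONS[int(sensors[0, 0] > 0)]  # stand-in for what the driver actually does

    if t > 0:
        predicted = model.predict(sensors)[0]
        mistakes.append(predicted != driver_action)  # e.g. thought "left", driver didn't turn left

    # update the model with the observed (sensor data -> driver action) pair
    model.partial_fit(sensors, [driver_action], classes=ACTIONS)

print("early error rate:", np.mean(mistakes[:200]))   # a poor predictor at first...
print("late error rate: ", np.mean(mistakes[-200:]))  # ...better after many observed seconds
```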
So we turned driving into a prediction problem. We’ve turned translation into a prediction problem. That used to be a rules-based problem where we had linguists with many rules and many exceptions, and that’s how we did translation. Now we’ve turned it into a prediction problem.
I think probably the most common surprise people have is this: we have a lot of HR people who come into our lab, and they say, "Hey, we’re here to learn about A.I. because we need to know what kinds of people to hire for our company, you know, for our manufacturing or our sales or this or that division. Of course, it won’t affect my division, because I’m in HR and we’re a very people-oriented part of the business, so A.I. is not going to affect us."
But of course, people are breaking HR down into a series of prediction problems. So, for example, the first thing HR people do is recruit, and recruiting is essentially a prediction: they take in a set of input data, like resumes and interview transcripts, and then they try to predict which of the applicants will be the best for the job.
And once they hire people, then the next part is promotion. Promotion has also been converted into a prediction problem. You have a set of people working in the company, and you have to predict who will be the best at the next-level-up job.
And then the next thing they do is retention. They have 10,000 people working in the company, and they have to predict which of those people are most likely to leave, particularly their stars, and also predict what they could do that would most likely increase the chance of those people staying.
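As a concrete (invented) illustration of retention as a prediction problem, here is a minimal sketch assuming scikit-learn; the features, numbers, and labels are made up:

```python
# Retention framed as prediction: from what we know about each employee,
# predict the label we don't have yet, namely whether they will leave.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Information we have: tenure (years), performance score, months since last raise,
# plus what past employees with those characteristics actually did.
X_history = np.array([[1, 4.5, 18], [6, 3.2, 6], [2, 4.8, 24], [8, 3.9, 3], [3, 4.1, 20]])
y_history = np.array([1, 0, 1, 0, 1])  # 1 = left the company, 0 = stayed

# Information we don't have: will these current employees leave?
X_current = np.array([[2, 4.7, 22], [7, 3.5, 4]])

model = RandomForestClassifier(random_state=0).fit(X_history, y_history)
print(model.predict_proba(X_current)[:, 1])  # predicted probability of leaving
```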
And so one of the, what I would say, black arts right now in A.I. is converting existing problems into prediction problems so that A.I.s can handle them.