Would You Buy a Car That’s Programmed to Kill You? | Big Think
As machines become increasingly autonomous—by which I mean they can sense their environment and make decisions about what to do or what not to do, based on their programming and their experience—we no longer have the kind of direct control over what they do that we have over the technology we use today.
There are a couple of very interesting consequences of that. One of them is that these machines are going to be faced with having to make ethical decisions. Call it ethics junior: just making socially appropriate decisions. We're taking machines and putting them in situations where they're around people, and the normal social courtesies and conventions that we operate by in dealing with other people—things we take for granted and that seem so natural to us—are neither taken for granted nor natural to machines.
You don't want a robot making a delivery to run down the sidewalk so that everybody has to get out of the way. It has to be able to move through a crowd in a socially appropriate way. With your autonomous car, there are lots of very interesting ethical conundrums that come up, but a lot of them are just social.
For example, it pulls up to the crosswalk; should you cross? Should you wait? How is it going to signal you? Right now, the social convention is that you make eye contact with the driver, and they tell you whether to cross. But I can't make eye contact with an autonomous car. So there are lots of these sorts of rough edges around how machines ought to behave, and the situations are highly variable.
You can't just make a list of them and say do this and do that. We need to program into these devices some fairly general principles—you can call them ethical if you like—which will allow them to guide their own behavior in ways and in directions that are consistent with the expectations that we have in society.
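One way to picture the difference between an enumerated rule list and the general guiding principles the speaker has in mind is a simple scoring sketch. Everything below—the principle functions, the weights, the candidate actions—is invented purely for illustration; no real robot works from a table like this.

```python
# Illustrative sketch only: scoring candidate actions against general
# principles rather than matching them to an enumerated list of rules.
# The principles, weights, and actions below are invented for this example.

def keeps_people_safe(action: str) -> float:
    # Highest-priority principle: never endanger bystanders.
    return 0.0 if action == "push_through_crowd" else 1.0

def respects_personal_space(action: str) -> float:
    # Softer social principle: prefer less intrusive behavior.
    return {"wait": 1.0, "weave_slowly": 0.7, "push_through_crowd": 0.0}[action]

PRINCIPLES = [(keeps_people_safe, 10.0), (respects_personal_space, 1.0)]

def choose(actions: list[str]) -> str:
    """Pick the action with the best weighted principle score."""
    return max(actions, key=lambda a: sum(w * p(a) for p, w in PRINCIPLES))

print(choose(["wait", "weave_slowly", "push_through_crowd"]))  # wait
```

The point of the sketch is that the same small set of principles can rank behavior in situations nobody enumerated in advance, which is exactly what a fixed do-this/don't-do-that list cannot do.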
Now, I'm teaching at Stanford, and I can tell you I haven't seen anything about this in the engineering curriculum. There's how to be an ethical engineer, but there isn't how to build a device to be ethical. This is a completely new area that sometimes goes under the name of moral programming or computational ethics. There are some excellent books on the subject, but unfortunately, if you read those books—which I have to do because that's my job—they're mostly pointing out the problems. Nobody has a really good scheme for how to go about doing this.
So, we need to develop an engineering discipline of computational ethics, and we need to have course sequences in our engineering schools that teach how to get machines to behave appropriately in a wide variety of new circumstances. Let me point out some of the more serious kinds of conundrums just to give you a feel for it and then others that are just inconveniences.
On the very serious side, there's a classic philosophical debate that goes on over what's called the trolley problem. The trolley problem is basically you're in a trolley, and the track splits. If you take no action, the trolley is going to go to the right, and there are four people on the track, and it's going to kill those people. You can flip a switch, and it'll go down the left track, and there's only one person on that track. The ethical question is: Is it ethical to flip that switch?
It is true that the loss of life would be minimized, but it is also true that you have now taken an action to kill somebody. And if you're that person, you may not think that's the right thing to do. So, philosophers have been studying this and many variations, and there's a lot of very subtle and interesting work that goes on in this. But this is about to become very real because autonomous cars will face exactly these kinds of decisions.
I'm going to buy an autonomous car, and I'm in the car. I'm the one guy, and there may be circumstances in which there are four people in front of the car, and to save their lives, my car has to drive off the edge of the bridge. There's a philosophical theory called utilitarianism, which has been around for a couple of centuries at least, that would say that to maximize the good for society, my car should kill me. But I'm not buying that car, and so we have a conundrum here.
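The utilitarian calculus being described reduces to a few lines of code, which is part of what makes it so unsettling. This is a deliberately crude sketch—the function name and the casualty counts are assumptions for illustration, not anything a manufacturer would actually ship.

```python
# Hypothetical sketch of a purely utilitarian collision policy.
# All names and numbers are illustrative; no real vehicle uses this logic.

def utilitarian_choice(casualties_if_stay: int, casualties_if_swerve: int) -> str:
    """Pick the action that minimizes total expected loss of life."""
    if casualties_if_swerve < casualties_if_stay:
        return "swerve"  # e.g., drive off the bridge, sacrificing the occupant
    return "stay"

# The speaker's scenario: four pedestrians ahead, one occupant in the car.
print(utilitarian_choice(casualties_if_stay=4, casualties_if_swerve=1))  # swerve
```

The conundrum in the passage is precisely that this one-line comparison is defensible for society as a whole and unacceptable to the person buying the car.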
I don't want to see people buying a Ford instead of a Chevy because the Ford is more likely to save my life no matter what, while the Chevy is a little more forgiving and might kill me to save the lives of other people. I don't want that to be a selling point in cars, so we need to have a societal discussion about how this works.
To demonstrate why that is so interesting, I'll just give you a little twist on what I said. Right now, we're talking about me buying an autonomous car, but let's suppose I'm signed up for the great Uber network in the sky of the future, and cars are coming and whatever, and I don't own that car. Now, I feel a little bit differently about it because it's not my car; I'm just like I'm getting on a train.
We would never allow the people on a train to vote—you know, "save me and not that one." There are certain instances where it makes more sense for the average societal interest to govern. So when I think about this issue, even the fact of who owns the car changes my own moral judgment about this particular kind of issue.
Well, we need to be able to take these kinds of principles, talk about them, vet them, and put them into cars. Autonomous driving cars have got a number of different issues that are very, very important. Now, so far, I've just talked about life and death, but there's lots of shades of gray in between that are really quite different.
In fact, I’m going to make an argument to you today that we’re already down this path and we haven’t even recognized it yet for a very interesting reason. Because in order to avoid pointing out this problem, the car manufacturers do not talk about this as artificial intelligence. Let me give you an example.
A common function in cars is ABS—the anti-lock braking system. If it detects that you're about to skid—which it can—it will pulse the brakes and do various things to maintain control of the car and keep it going in a particular direction.
Now, what you might not know is that on certain surfaces, ABS can have a longer stopping distance than if you just jammed on the brakes, locked them up, and let the car spin around. So imagine you're driving your car, and—oh my God—there's a kid in the middle of the road, and you just want that car to stop as quickly as possible. You slam on the brakes.
Well, the car is going to prioritize staying stable and going straight, even if that means running over that kid. With today's technology, there are circumstances in which a decision an engineer made a while back in designing that system—"we want to keep the car stable"—governs, and you no longer have the freedom to make the decision yourself.
I don't mind if the car spins out of control, as long as I miss that kid. Now imagine that the ABS function had instead been described this way: "We're simulating the judgment of a professional driver and programming it into the machine using advanced artificial intelligence techniques, so that the car stays under control the same way it would with a professional driver at the wheel."
We might have felt a little differently about it if it had been presented that way, as an AI technology. But by saying it's simply a function of the car—just like the turn signals and everything else—this issue never really got raised and never really got vetted.
But as we look to the future of autonomous driving, it's going to be a problem. Let me move on though to less severe situations. You're in your autonomous car, and you pull up—you’re on a two-lane street—this happens all the time. There’s a UPS truck right in front of you. You see it’s come to a stop; the guy jumps out, he opens up the back, he grabs the package, and starts heading off.
Now you, as a driver, are permitted a certain amount of latitude in how you behave. What would you do? You look around, you go across the double yellow line, and you pass that UPS truck. It's perfectly acceptable behavior, may I point out—even though you're breaking a rule by crossing a double yellow line.
If we were to program our cars simply to say, "You're never supposed to cross a double yellow line," that car is going to sit there until the guy's done, which might be a very long time if he’s gone to his lunch. So, the kinds of latitude that we permit people in their behavior in a lot of these circumstances to be able to break rules or bend rules in a very appropriate way— we need to talk about whether or not it's okay for a car to engage in that kind of behavior.
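The kind of sanctioned rule-bending described here—a strict rule plus a bounded, justified exception—can be sketched in code. Every field and condition below is a hypothetical simplification invented for this example; a real driving policy would involve far more than three booleans.

```python
# Hypothetical sketch: a traffic rule with an explicit, bounded exception,
# mirroring the latitude a human driver exercises. Fields are invented.

from dataclasses import dataclass

@dataclass
class Scene:
    lane_blocked: bool           # e.g., a stopped UPS truck ahead
    blockage_is_temporary: bool  # driver actively unloading vs. gone to lunch
    oncoming_lane_clear: bool    # safe to use the opposing lane briefly

def may_cross_double_yellow(scene: Scene) -> bool:
    """Strict rule: never cross. Exception: a lasting blockage and a clear lane."""
    if not scene.lane_blocked:
        return False
    if scene.blockage_is_temporary:
        return False  # just wait a moment; no need to bend the rule
    return scene.oncoming_lane_clear

# Truck parked indefinitely, oncoming lane clear: the exception applies.
print(may_cross_double_yellow(Scene(True, False, True)))   # True
# No blockage at all: the strict rule holds.
print(may_cross_double_yellow(Scene(False, False, True)))  # False
```

The hard part, of course, is not the code but the societal question of which exceptions we are willing to write down at all.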
Let me give you another one. What would you feel if you went down to the movie theater, and there are scarce tickets available, and all of a sudden you find there are 16 robots in line in front of you, and you're at the back of the line? My reaction would be, "Wait a minute, that's not fair. Why do we have 16 robots that are going to pick up tickets for whoever owns the robots? I'm here; we should prioritize me over those robots."
I think when that begins to happen in practice, people will be up in arms, because they can see what is actually happening. But that same situation is already happening today. If you try to get a ticket to Billy Joel at Madison Square Garden, scalpers run programs that can snap up all of those tickets in a matter of seconds, leaving all the humans who are sitting there trying to press the return button or, God forbid, fill out the little CAPTCHA—they don't get the tickets.
So, it's exactly the same situation. The robots who are owned and working for somebody else are grabbing an asset before you have an opportunity or a fair chance to acquire that asset—to get that particular ticket. And if you could see that, people would be really mad today. But it's invisible because all the stuff is in the cloud.
So, we're already facing a lot of these same ethical and social issues, but they're not as visible as they need to be for us to have a meaningful public discussion about these particular topics.