Why creating AI that has free will would be a huge mistake | Joanna Bryson | Big Think
First of all, there’s the whole question of why we assume in the first place that we have obligations toward robots. We tend to think that if something is intelligent, then that’s the source of our obligation; that’s why we owe it moral consideration. And why do we think that? Because most of our moral obligations—the most important thing to us—concern each other. So basically, morality and ethics are the way that we maintain human society, including by doing things like keeping the environment okay, you know, making it so we can live.
So, one of the ways we characterize ourselves is as intelligent, and so when we see something else and say, “Oh, it’s even more intelligent,” we conclude, “well then maybe it needs even more protection.” In AI, we call that kind of reasoning heuristic reasoning: it’s a good guess that will probably get you pretty far, but it isn’t necessarily true. I mean, again, how you define the term “intelligent” will vary. If by “intelligent” you mean a moral agent—you know, something that’s responsible for its actions—well then, of course, intelligence implies moral agency.
When will we know for sure that we need to worry about robots? Well, there are a lot of questions there, but consciousness is another one of those words. The word I like to use is “moral patient.” It’s a technical term that the philosophers came up with, and it means, exactly, something that we are obliged to take care of. So now we can have this conversation. If you just mean “conscious means moral patient,” then it’s no great assumption to say “well then, if it’s conscious, then we need to take care of it.” But it’s way cooler if you can say, “Does consciousness necessitate moral patiency?” And then we can sit down and say, “Well, it depends what you mean by consciousness.” People use consciousness to mean a lot of different things.
So one of the things that we did last year, which was pretty cool and hit the headlines, was replicating some psychology findings about implicit bias—actually the best headline was something like “Scientists Show That A.I. Is Sexist and Racist, and It’s Our Fault,” which is pretty accurate, because it really is about picking things up from our society. Anyway, the point was: here is an AI system that is so human-like that it’s picked up our prejudices and so on… and it’s just vectors! It’s not an ape. It’s not going to take over the world. It’s not going to do anything; it’s just a representation; it’s like a photograph. We can’t trust our intuitions about these things.
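The “it’s just vectors” point can be made concrete with a toy sketch of the kind of word-embedding association test that work relied on. Note the vectors below are invented purely for illustration (real embeddings such as word2vec or GloVe have hundreds of dimensions learned from large text corpora); the “bias” here is nothing but geometry:

```python
import math

# Toy 3-dimensional "word vectors" -- entirely made up for illustration.
vectors = {
    "programmer": [0.9, 0.1, 0.3],
    "homemaker":  [0.1, 0.9, 0.2],
    "he":         [0.8, 0.2, 0.1],
    "she":        [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# The embedding "associates" words purely by geometric closeness --
# no understanding, no agency, just arithmetic over stored numbers.
bias = (cosine(vectors["programmer"], vectors["he"])
        - cosine(vectors["programmer"], vectors["she"]))
print(round(bias, 3))  # positive: "programmer" sits nearer "he" than "she"
```

A system like this reflects whatever statistical associations were in its training text, exactly like a photograph reflects whatever was in front of the lens.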
We give things rights because that’s the best way we can find to handle very complicated situations. And the things we give rights to are basically people. I mean, some people argue about animals, but technically—and again this depends on whose technical definition you use—rights are usually things that come with responsibilities and that you can defend in a court of law. So normally we talk about animal welfare and we talk about human rights, but with artificial intelligence, you can even imagine the AI itself knowing its rights and defending them in a court of law.
But the question is, why would we need to protect artificial intelligence with rights? Why is that the best way to protect it? With humans, it’s because we’re fragile; it’s because there’s only one of each of us. And I actually think—this is horribly reductionist, but I actually think—rights are just the best way we’ve found to be able to cooperate. It’s sort of an acknowledgment of the fact that we’re all basically the same thing, made of the same stuff, and we had to come up with—the technical term again is an equilibrium—some way to share the planet. We haven’t managed to do it completely fairly (like “everybody gets the same amount of space”), but then we all also want to be recognized for our achievements, so even completely fair isn’t completely fair, if that makes sense.
And I don’t mean to be facetious there; it really is true that you can’t make everything you would like fairness to mean be true at once. That’s a fact about the world; it’s a fact about the way we define fairness. So, given how hard it is to be fair, why should we build AI that needs us to be fair to it? What I’m trying to do is make the problem simpler and focus us on the thing we can’t help, which is the human condition. And I’m recommending that once we’ve specified what it is that would really need rights in this context—once we’ve established that—don’t build that, okay?
This rubs a lot of people the wrong way because they’ve watched Blade Runner or the movie A.I. or something like that. In a lot of these movies, we’re not really talking about AI; we’re not talking about something designed from the ground up; we’re talking basically about clones. And clones are a different situation. If you have something that’s exactly like a person, however it was made, then okay, it’s exactly like a person and it needs that kind of protection.
But even biological clones, even if you just want to clone yourself, at least in the European Union, that’s illegal. I’m not sure about in America. I think it’s illegal in America too. But people think it’s unethical to create human clones partly because they don’t want to burden someone with the knowledge that they’re supposed to be someone else, that there was some other person that chose them to be that person. I don’t know if we’ll be able to stick to that, but I would say that AI clones fall into the same category.
If you’re really going to make something and then say, “Hey, congratulations, you’re me and you have to do what I say”—I wouldn’t want myself telling me what to do, if that makes sense, if there were two of me! I think we’d both want to be equals. And so you don’t want that, because an artifact is something you’ve deliberately built and that you’re going to own. If you have a humanoid servant that you own, then the word for that is slave.
And so I was trying to establish that look, we are going to own anything we build, and so therefore it would be wrong to make it a person, because we’ve already established that slavery of people is wrong and bad and illegal. And so it never occurred to me that people would take that to mean that “the robots will be people that we just treat really badly.” It’s like no, that’s exactly the opposite!
So, I already mentioned the idea that somebody might manage to clone people somehow—“whole brain uploading,” which people do talk about, and people are spending tens of millions of dollars on it. I don’t believe it’s ever going to work; I don’t think it’s actually computationally tractable. But if it were to happen, then I would be there saying, “Yes, this is a person.” And we can try to stop that the same way we stop human cloning, which is just to say, “Don’t do that.”
And particularly with AI, my point is that it shouldn’t be a commercial product. So if somebody does this in their basement or something, well then we have a few exceptions, but I’m much more concerned about people mass-producing such things.