The Ethics of A.I. on the Battlefield Are Less Clear-Cut Than You Might Think | Big Think
Everybody’s concerned about killer robots: we should ban them, we shouldn’t do any research into them, it may even be unethical to do so.
There’s a wonderful paper, in fact, by a professor at the Naval Postgraduate School in Monterey, I believe, B.J. Strawser. I believe the title is "The Moral Requirement to Deploy Autonomous Drones." His basic point in that is really pretty straightforward. We have obligations to our military forces to protect them, and there are things we can do that may protect them. Failing to do those things is itself an ethical decision, and it may be the wrong one if you have the technologies available.
So let me scale that whole thing down with an interesting example, to show you this doesn’t have to be about Terminator-like robots coming in and shooting at people. Think about a landmine. A landmine has a sensor, a little switch. You step on it, and it blows up. There’s a sensor; there’s an action that’s taken as a result of a change in its environment.
Now it’s a fairly straightforward matter to take some artificial intelligence technologies right off the shelf today and just put a little camera on that. It’s not expensive; it’s the same kind you have in your cell phone. Add a little bit of processing power that could look at what’s actually happening around that landmine. And the mine might conclude, "Well, okay, I can see that the person nearby is carrying a gun. I can see that they’re wearing a military uniform, so I’m going to blow up."
But if it sees it’s just some peasant out in a field with a rake or a hoe, it can avoid blowing up under the circumstances. "Oh, that’s a child. I don’t want to blow up." "I’m being stepped on by an animal. Okay, I’m not going to blow up."
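The decision rule the mine would be following could be sketched in a few lines of code. To be clear, this is a purely illustrative sketch: the labels, the classifier interface, and the confidence threshold are my assumptions, not any real or proposed weapons system.

```python
# Purely illustrative sketch of the landmine's decision rule described above.
# The label names and the 0.95 threshold are hypothetical assumptions.

def should_detonate(label: str, confidence: float) -> bool:
    """Decide whether to trigger, given a classifier's output.

    label      -- the classifier's best guess about what is nearby
                  (e.g. "armed_soldier", "civilian", "child", "animal")
    confidence -- the classifier's confidence in that label, 0.0 to 1.0
    """
    # Only act on a confident identification of a lawful military target;
    # anything else, including an uncertain reading, defaults to not firing.
    return label == "armed_soldier" and confidence >= 0.95


# A peasant with a hoe, a child, an animal, or an uncertain reading
# all leave the mine inert; only a confidently identified soldier triggers it.
print(should_detonate("armed_soldier", 0.99))  # True
print(should_detonate("civilian", 0.99))       # False
print(should_detonate("armed_soldier", 0.60))  # False
```

The key design choice, which tracks the argument in the transcript, is that the default is restraint: the device fires only when a narrow positive condition is met, rather than firing unless told otherwise.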
Now that is an autonomous military technology of just the sort addressed by a recent letter signed by a great many scientists, which calls, in the emerging discourse, for devices like that to be banned. This falls into that class. But I give this as an example of a device for which there’s a good argument that if we can deploy that technology, it’s more humane, it’s more targeted, and it’s more ethical to do so.
Now, that isn’t always the case. My point is not that this is right and you should just go ahead, willy-nilly, and develop killer robots. My point is that this is a much more subtle area, one that requires considerably more thought and research. And we should let the people who are working on it think through these problems and make sure they understand the kinds of sensitivities and concerns that we have as a society about the use and deployment of these types of technologies.