How regulation today could avoid tomorrow’s A.I. disaster | Joanna Bryson | Big Think
If we're coding AI and we understand that there are moral consequences, does that mean the programmer has to understand it? It isn't only the programmer, although we do really think we need to train programmers to watch out for these kinds of situations, to know how to blow the whistle and when to blow the whistle.
There is a problem of people being over-reactive, and that costs companies, and I understand that. But we also have sort of a Nuremberg situation, where we need everybody to be responsible. Ultimately, though, it isn't just about the programmers; the programmers work within the context of a company, and the companies work in the context of regulation. So it is about the law, it's about society.
One of the papers that came out in 2017, by Professor Alan Winfield, made the point that while legislatures can't be expected to keep up with the pace of technological change, what they can keep up with is which professional societies they trust. They already do this in various disciplines; it's just new for AI.
You say you have to meet the ethical standards of at least one professional organization, whichever one sets the rules about what's okay. That allows a kind of loose coupling, because it's wrong for professional organizations to enforce the law, to go after people, to sue them, whatever. That's not what professional organizations are for.
But it is sensible, because what professional organizations are for is keeping up with their own field and setting things like codes of conduct. That's why you want to bring those two things together, the executive government and the professional organizations, and have the legislature join them.
This is what I'm working hard to keep in the regulations: that it's always people and organizations that are accountable, so that they're motivated to make sure they can demonstrate they followed due process, both the people who operate the AI and the people who developed it.
Because it's like a car; when there's a car accident, normally the driver is at fault. Sometimes the person they hit is at fault because they did something completely unpredictable. But sometimes the manufacturer did something wrong with the brakes, and that's a real problem.
So we need to be able to show that the manufacturer followed good practice and that it really is the fault of the driver. Or sometimes there really isn't a fact of the matter; it was something unforeseeable at the time. But of course, now that it's happened, we'll be more careful in the future.
That just happened recently in Europe. There was a case where somebody was driving... it wasn't a totally driverless car, but it had cruise control or something; it had some extra AI. And unfortunately, the driver had a stroke.
Now, what happens a lot, and what automobile manufacturers have to look out for, is falling asleep at the wheel. But this guy had a stroke, which is different from falling asleep: he was still holding on, semi in control, but he couldn't see anything. He hit a family and killed two of the three of them.
The survivor was the father, and he said he wasn't satisfied just to get money from insurance or a liability payment; he wanted to know that whoever had caused this accident was being held accountable. So there was a big lawsuit, and the company, the car manufacturer, was able to show they had followed due process, that they had been incredibly careful; this was just a really unlikely thing to have happened, that kind of stroke where you'd still be holding onto the steering wheel, and all these other things.
And so it was decided that nobody was actually at fault. But it could have been different. If Facebook is really "moving fast and breaking things," then they're going to have a lot of trouble proving they were doing due diligence when Cambridge Analytica got the data that it got. And so they are very likely to be held to account for anything that's found to have been a negative consequence.