Bill Nye: Worrying about the Robo-pocalypse Is a First-World Problem | Big Think
So when it comes to artificial intelligence, it is a fabulous science fiction premise to create a machine that will kill you. I very much enjoyed Ex Machina, where the guy builds these robots, and then there's trouble. And I can't help but think about Colossus: The Forbin Project, where they have these computers that control the world's nuclear arsenals, and then things go wrong. Things just go wrong, in a science fiction sense.
But these stories remind us that even if we build a computer smart enough to figure out that it needs to kill us, we can unplug it. There are two billion people on earth who do not have electricity. They are not concerned about an artificially intelligent computer deciding to crash subway cars and kill people. That's not their issue. They don't even have electricity or clean running water.
So while we're worried about artificial intelligence, I hope we also keep the bigger picture in mind: none of this happens without electricity. And we still don't have anything but really primitive means of generating electricity. I look forward to the day when everybody has clean water and a supply of quality electricity. Then we can take these meetings about the problems of artificial intelligence.
However, are there any viewers or listeners here who have not been to an airport where the train that takes you from terminal B to terminal A is automated? Everybody in the developed world, especially the United States, has been on an automated train. Okay, that's artificial intelligence.
Everybody has used a toilet that's connected to a sewer system whose valves are controlled by software that somebody wrote. That is artificial intelligence too. So keep in mind that if we unplug the trains or the sewer system valves, they will stop. We still control the electricity. This apocalyptic view of computers that people write software for, to do repetitive tasks or complicated tasks that no one person could sort out for him or herself, is not new.
I do not see artificial intelligence as inherently bad just because it's artificial. Artificial intelligence is not inherently bad. So just use your judgment, everybody. Let's do this. I worked on three-channel autopilots almost 40 years ago. The plane lands itself, and humans designed the system. It didn't come from the sky. It's artificially intelligent, and that's good. We can do this.