Has our ability to create intelligence outpaced our wisdom? | Max Tegmark on A.I. | Big Think
I’m optimistic that we can create an awesome future with technology as long as we win the race between the growing power of the tech and the growing wisdom with which we manage the tech. This is actually getting harder because of nerdy technical developments in the AI field.
It used to be, when we wrote state-of-the-art AI—like for example IBM’s Deep Blue computer, which defeated Garry Kasparov in chess a couple of decades ago—that all the intelligence was basically programmed in by humans who knew how to play chess, and the computer won the game just because it could think faster and remember more. But we understood the software well.
Understanding what your AI system does is one of those pieces of wisdom you have to have to be able to really trust it. The reason we have so many problems today with systems getting hacked or crashing because of bugs is exactly because we didn’t understand the systems as well as we should have.
Now what’s happening is fascinating; today’s biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in easy-to-understand code, you put in almost nothing except a little learning rule by which a network of simulated neurons can take a lot of data and figure out how to get stuff done.
This deep learning system suddenly becomes able to do things, often even better than the programmers were ever able to do. You can train a machine to play computer games with almost no hard-coded stuff at all. You don’t tell it what a game is, what the things are on the screen, or even that there is such a thing as a screen—you just feed in a bunch of data about the colors of the pixels and tell it, “Hey, go ahead and maximize that number in the upper left corner,” and gradually you come back and it’s playing some game much better than I could.
The challenge is that, even though this is very powerful, it’s very much a “black box” now: yes, it does all that great stuff, but we don’t understand how. So suppose I get sentenced to ten years in prison by a Robojudge in the future and I ask, “Why?” And I’m told, “I WAS TRAINED ON SEVEN TERABYTES OF DATA, AND THIS WAS THE DECISION.” It’s not that satisfying for me.
Or suppose the machine that’s in charge of our electric power grid suddenly malfunctions and someone says, “Well, we have no idea why. We trained it on a lot of data and it worked,” that doesn’t instill the kind of trust that we want to put into systems.
When you get the blue screen of death because your Windows machine crashes, or the spinning wheel of doom because your Mac crashes, “annoying” is probably the main emotion we have. But “annoying” isn’t the emotion we’d have if it were the software flying the airplane I’m on that crashed, or the software controlling the nuclear arsenal of the U.S., or something like that.
And as AI gets more and more out into the world, we absolutely need to transform today’s hackable and buggy AI systems into AI systems that we can really trust.