Inside Google's DeepMind Project: How AI Is Learning on Its Own | Max Tegmark | Big Think
I define intelligence simply as how good something is at accomplishing complex goals. Human intelligence today is very different from machine intelligence today in multiple ways.
First of all, machine intelligence used to be simply inferior to human intelligence across the board. Gradually, machines got better than humans in certain very, very narrow areas, such as multiplying numbers fast, like a pocket calculator, or storing and recalling large amounts of data.
What we’re seeing now is that machine intelligence is spreading out a little bit from those narrow peaks and getting a bit broader. We still have nothing that is as broad as human intelligence, where a human child can learn to get pretty good at almost any goal. But you have systems now, for example, that can learn to play a whole swath of different kinds of computer games or to learn to drive a car in pretty varied environments.
Where things are obviously going in AI is increased breadth, and the Holy Grail of AI research is to build a machine that is as broad as human intelligence: a machine that can get good at anything. And once that happens, it is very likely to be not only as broad as humans but also better than humans at all tasks, as opposed to just some, as now.
I have to confess that I’m quite the computer nerd myself. I wrote some computer games back in high school and college, and more recently I’ve been doing a lot of deep learning research with my lab at MIT.
So something that really blew me away, like "whoa," was when I first saw this Google DeepMind system that learned to play computer games from scratch. You had this artificial simulated neural network; it didn't know what a computer game was, it didn't know what a computer was, it didn't know what a screen was. You just fed it numbers representing the colors of the pixels on the screen and told it that it could output different numbers corresponding to different keystrokes, which, again, it knew nothing about, and then you just kept feeding it the score.
All the software knew was to try stuff, at first randomly, and do whatever maximized that score. I remember watching this on the screen once when Demis Hassabis, the CEO of Google DeepMind, showed it, and seeing first how this thing really played a total BS strategy and lost all the time. It gradually got better and better, then it got better than I was, and then after a while it figured out this crazy strategy in Breakout (where you're supposed to knock out a brick wall by bouncing a ball off it): it would keep aiming for the upper left corner until it punched a hole through there and got the ball bouncing around behind the wall, racking up a crazy number of points.
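The loop described above, take in the screen, output keystrokes, try things and reinforce whatever raises the score, can be sketched in miniature. What follows is only an illustration using tabular Q-learning on an invented five-state toy "game"; DeepMind's actual system (DQN) used a deep convolutional network over raw pixels rather than a lookup table, and the environment, function names, and hyperparameters here are all hypothetical.

```python
import random

# Toy stand-in for a game: states 0..4 on a line, action 1 moves right,
# action 0 moves left; reaching state 4 scores one point and ends the episode.
# This is an illustrative sketch of trial-and-error learning, NOT DeepMind's
# actual DQN, which learned from raw screen pixels with a deep neural network.
N_STATES = 5
ACTIONS = (0, 1)

def step(state, action):
    """Advance the toy game one move; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action] value table
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Early on this is mostly random flailing; as the Q-values grow,
            # the greedy choice takes over and the score improves.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Nudge Q toward the observed reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The learned greedy policy: the best action in each non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

After a few hundred episodes the greedy policy moves right from every state, mirroring in miniature how the Breakout agent drifted from random flailing toward a winning strategy it was never explicitly taught.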
And I was like, “Whoa, that’s intelligent!” The guys who programmed this didn’t even know about that strategy because they hadn’t played that game very much. This is a simple example of how machine intelligence can surpass the intelligence of its creator, much in the same way as a human child can end up becoming more intelligent than its parents if educated well.
And this was running on just a tiny little computer, the sort of hardware you can have on your desktop. If you now imagine scaling up to the biggest computer facilities we have in the world, and you give us a couple more decades of algorithm development, I think it is very plausible that we can make machines that can not just learn to play computer games better than us, but can view life as a game and do everything better than us...