Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel | Big Think
The mathematician I.J. Good, back in the mid-1960s, introduced what he called the intelligence explosion, which was in essence the same concept that Vernor Vinge later introduced and Ray Kurzweil adopted and called the technological singularity. What I.J. Good said was that the first ultraintelligent machine will be the last invention that humanity needs to make.
Now, in the 1960s, the difference between narrow AI and AGI wasn't that clear, and I.J. Good wasn't thinking about a system like AlphaGo that could beat humans at Go but couldn't walk down the street or add five plus five. In the modern vernacular, what we can say is that the first human-level AGI, the first human-level artificial general intelligence, will be the last invention that humanity needs to make.
And the reason for that is once you get a human-level AGI, you can teach this human-level AGI math and programming and AI theory and cognitive science and neuroscience. This human-level AGI can then reprogram itself, and it can modify its own mind, and it can make itself into a yet smarter machine. It can make 10,000 copies of itself, each with variations, some of which are much more intelligent than the original.
And once the first human-level AGI has created the second one, which is smarter than itself, well, that second one will be even better at AI programming and hardware design and cognitive science and so forth, and it will be able to create the third AGI, which by then will be well beyond human level. So it seems that it's going to be a laborious path to get to the first human-level AGI.
I don't think it will take centuries, but it may be decades rather than years. On the other hand, once you get to a human-level AGI, I think you may see what some futurists have called a hard takeoff, where you see the intelligence increase literally day by day as the AI system rewrites its own mind.
And this – it's a bit frightening, but it's also incredibly exciting. Does that mean humans will never make any more inventions? Of course it doesn't. But what it means is that if we do things right, we won't need to. If things come out the way that I hope they will, what will happen is we'll have these superhuman minds, and largely they'll be doing their own things.
They will also offer us the possibility of uploading or upgrading ourselves and joining them in realms of experience that we cannot now conceive of in our current human forms. Or these superhuman AGIs may help humans to maintain a traditional human-like existence. I mean, if you have a million times human IQ and you can reconfigure elementary particles into new forms of matter at will, then supplying a few billion humans with food and water and video games, virtual-reality headsets, national parks, flying cars and whatnot – this would be trivial for these superhuman minds.
So if they're well disposed toward us, people who choose to remain in human form could have simply a much better quality of life than we have now. You wouldn't have to work for a living. You could devote your time to social, emotional, spiritual, intellectual and creative pursuits rather than laboriously doing things you might rather not do just in order to get food and shelter and an internet connection.
So, I think there are tremendous positive possibilities here, and there's also a lot of uncertainty, and there's a lot of work to do before we get to the point where intelligence explodes in the sense of a hard takeoff. But I do think it's reasonably probable we can get there in my lifetime, which is rather exciting.