
Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel | Big Think


3 min read · Nov 3, 2024

The mathematician I.J. Good, back in the mid-1960s, introduced what he called the intelligence explosion, which in essence was the same as the concept that Vernor Vinge later introduced and Ray Kurzweil adopted and called the technological singularity. What I.J. Good said was the first intelligent machine will be the last invention that humanity needs to make.

Now, in the 1960s, the difference between narrow AI and AGI wasn’t that clear, and I.J. Good wasn’t thinking about a system like AlphaGo that could master Go but couldn’t walk down the street or add five plus five. In the modern vernacular, what we can say is the first human-level AGI, the first human-level artificial general intelligence, will be the last invention that humanity needs to make.

And the reason for that is once you get a human-level AGI, you can teach this human-level AGI math and programming and AI theory and cognitive science and neuroscience. This human-level AGI can then reprogram itself, and it can modify its own mind, and it can make itself into a yet smarter machine. It can make 10,000 copies of itself, some of which are much more intelligent than the original.

And once the first human-level AGI has created the second one, which is smarter than itself, well, that second one will be even better at AI programming and hardware design and cognitive science and so forth and will be able to create the third human-level AGI, which by now will be well beyond human level. So it seems that it’s going to be a laborious path to get to the first human-level AGI.

I don’t think it will take centuries from now, but it may be decades rather than years. On the other hand, once you get to a human-level AGI, I think you may see what some futurists have called a hard takeoff, where you see the intelligence increase literally day by day as the AI system rewrites its own mind.

And this is a bit frightening, but it’s also incredibly exciting. Does that mean humans will not ever make any more inventions? Of course it doesn’t. But what it means is if we do things right, we won’t need to. If things come out the way that I hope they will, what will happen is we’ll have these superhuman minds, and largely they’ll be doing their own things.

They will also offer us the possibility to upload or upgrade ourselves and join them in realms of experience that we cannot now conceive in our current human forms. Or these superhuman AGIs may help humans to maintain a traditional human-like existence. I mean, if you have a million times human IQ and you can reconfigure elementary particles into new forms of matter at will, then supplying a few billion humans with food and water and video games, virtual reality headsets and national parks and flying cars and whatnot – this would be trivial for these superhuman minds.

So if they’re well disposed toward us, people who chose to remain in human form could simply have a much better quality of life than we have now. You wouldn’t have to work for a living. You could devote your time to social, emotional, spiritual, intellectual and creative pursuits rather than laboriously doing things you might rather not do just in order to get food and shelter and an internet connection.

So, I think there are tremendous positive possibilities here, and there’s also a lot of uncertainty, and there’s a lot of work to get to the point where intelligence explodes in the sense of a hard takeoff. But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting.
