
Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel | Big Think


3m read
·Nov 3, 2024

The mathematician I.J. Good, back in the mid-1960s, introduced what he called the intelligence explosion, which in essence was the same as the concept that Vernor Vinge later introduced and Ray Kurzweil adopted and called the technological singularity. What I.J. Good said was the first intelligent machine will be the last invention that humanity needs to make.

Now, in the 1960s, the difference between narrow AI and AGI wasn't that clear, and I.J. Good wasn't thinking about a system like AlphaGo that could win at Go but couldn't walk down the street or add five plus five. In the modern vernacular, what we can say is the first human-level AGI, the first human-level artificial general intelligence, will be the last invention that humanity needs to make.

And the reason for that is once you get a human-level AGI, you can teach this human-level AGI math and programming and AI theory and cognitive science and neuroscience. This human-level AGI can then reprogram itself, and it can modify its own mind, and it can make itself into a yet smarter machine. It can make 10,000 copies of itself, some of which are much more intelligent than the original.

And once the first human-level AGI has created the second one, which is smarter than itself, well, that second one will be even better at AI programming and hardware design and cognitive science and so forth and will be able to create the third human-level AGI, which by now will be well beyond human level. So it seems that it’s going to be a laborious path to get to the first human-level AGI.

I don’t think it will take centuries from now, but it may be decades rather than years. On the other hand, once you get to a human-level AGI, I think you may see what some futurists have called a hard takeoff, where you see the intelligence increase literally day by day as the AI system rewrites its own mind.

And this – it’s a bit frightening, but it’s also incredibly exciting. Does that mean humans will never make any more inventions? Of course it doesn’t. But what it means is if we do things right, we won’t need to. If things come out the way that I hope they will, what will happen is we’ll have these superhuman minds, and largely they’ll be doing their own things.

They will also offer us the possibility of uploading or upgrading ourselves and joining them in realms of experience that we cannot now conceive in our current human forms. Or these superhuman AGIs may help humans to maintain a traditional human-like existence. I mean, if you have a million times human IQ and you can reconfigure elementary particles into new forms of matter at will, then supplying a few billion humans with food and water and video games, virtual reality headsets and national parks and flying cars and whatnot – this would be trivial for these superhuman minds.

So if they’re well disposed toward us, people who chose to remain in human form could have a much better quality of life than we have now. You wouldn’t have to work for a living. You could devote your time to social, emotional, spiritual, intellectual and creative pursuits rather than laboriously doing things you might rather not do just in order to get food and shelter and an internet connection.

So, I think there are tremendous positive possibilities here, and there’s also a lot of uncertainty, and there’s a lot of work to get to the point where intelligence explodes in the sense of a hard takeoff. But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting.
