
Will Superhuman Intelligence Be Our Friend or Foe? | Ben Goertzel | Big Think


4m read
·Nov 3, 2024

Some people are gravely worried about the uncertainty and the negative potential associated with transhuman, superhuman AGI. And indeed we are stepping into a great unknown realm. It’s almost like a Rorschach type of thing, really. I mean, we fundamentally don’t know what a superhuman AI is going to do, and that’s the truth of it, right?

And then if you tend to be an optimist, you will focus on the good possibilities. If you tend to be a worried person who’s pessimistic, you’ll focus on the bad possibilities. If you tend to be a Hollywood movie maker, you focus on scary possibilities, maybe with a happy ending because that’s what sells movies. We don’t know what’s going to happen.

I do think, however, this is the situation humanity has been in for a very long time. When the cavemen stepped out of their caves and began agriculture, we really had no idea that was going to lead to cities and space flight and so forth. And when the first early humans created language to carry out simple communication about the moose they had just killed over there, they did not envision Facebook, differential calculus, and MC Hammer, and all the rest, right?

I mean, there’s so much that has come about out of early inventions which humans couldn’t have ever foreseen. And I think we’re just in the same situation. I mean, the invention of language or civilization could have led to everyone’s death, right? And in a way, it still could. And the creation of superhuman AI, it could kill everyone, and I don’t want it to. Almost none of us do.

Of course, the way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand. And that’s what we’re going to keep on doing. Nick Bostrom’s book was influential, but I felt that in some ways it was a bit deceptive the way he phrased things. If you read his precise philosophical arguments, which are very logically drawn, what Bostrom says in his book, Superintelligence, is that we cannot rule out the possibility that a superintelligence will do some very bad things. And that’s true.

On the other hand, some of the associated rhetoric makes it sound like it’s very likely a superintelligence will do these bad things. And if you follow his philosophical arguments closely, he doesn’t show that. What he just shows is that you can’t rule it out, and we don’t know what’s going on. I don’t think Nick Bostrom or anyone else is going to stop the human race from developing advanced AI because it’s a source of tremendous intellectual curiosity, but also of tremendous economic advantage.

So if, let’s say, President Trump decided to ban artificial intelligence research – I don’t think he’s going to, but suppose he did. China will keep doing artificial intelligence research. If the U.S. and China ban it, you know, Africa will do it. Everywhere around the world has AI textbooks and computers. And everyone now knows you can make people’s lives better and make money from developing more advanced AI.

So there’s no possibility in practice to halt AI development. What we can do is try to direct it in the most beneficial direction according to our best judgment. And that’s part of what leads me to pursue AGI via an open-source project such as OpenCog. I respect very much what Google, Baidu, Facebook, Microsoft, and these other big companies are doing in AI. There’s many good people there doing good research and with good-hearted motivations.

But I guess I’m enough of an old leftist raised by socialists, and I sort of – I’m skeptical that a company whose main motive is to maximize shareholder value is really going to do the best thing for the human race if they create a human-level AI. I mean, they might. On the other hand, there are a lot of other motivations there, and a public company in the end has a fiduciary responsibility to their shareholders.

All in all, I think the odds are better if AI is developed in a way that is owned by the whole human race and can be developed by all of humanity for its own good. And open-source software is sort of the closest approximation that we have to that now. So our aspiration is to grow OpenCog into sort of the Linux of AGI and have people all around the world developing it to serve their own local needs and putting their own values and understanding into it as it becomes more and more intelligent.

Certainly this doesn’t give us any guarantee. We can observe that Linux has fewer bugs than Windows or OS X, and it’s open source. So more eyeballs on something can sometimes make it more reliable. But there’s no solid guarantee that making an AGI open source will make the singularity come out well.

But my gut feel is that there are enough hard problems in creating a superhuman AI, having it respect human values, and having it grow a relationship of empathy with people, without the young AGI also getting wrapped up in competition of country versus country, company versus company, and internal politics within companies or militaries. We don’t want to add the problems of human/primate social-status competition dynamics to the challenges already faced in AGI development.
