
Machines playing God: How A.I. will overcome humans | Max Tegmark | Big Think


4m read
·Nov 3, 2024

I define intelligence as how good something is at accomplishing complex goals.

So let’s unpack that a little bit. First of all, it’s a spectrum of abilities since there are many different goals you can have, so it makes no sense to quantify something’s intelligence by just one number like an IQ. To see how ridiculous that would be, just imagine if I told you that athletic ability could be quantified by a single number, the “Athletic Quotient,” and whatever athlete had the highest AQ would win all the gold medals in the Olympics. It’s the same with intelligence.

So if you have a machine that’s pretty good at some tasks, these days it’s usually pretty narrow intelligence. Maybe the machine is very good at multiplying numbers fast because it’s your pocket calculator; maybe it’s good at driving cars or playing Go. Humans, on the other hand, have a remarkably broad intelligence. A human child can learn almost anything given enough time. Even though we now have machines that can learn, sometimes learn to do certain narrow tasks better than humans, machine learning is still very unimpressive compared to human learning.

For example, it might take a machine tens of thousands of pictures of cats and dogs until it becomes able to tell a cat from a dog, whereas human children can sometimes learn what a cat is from seeing it once.

Another area where we have a long way to go in AI is generalizing. If a human learns to play one particular kind of game, they can very quickly take that knowledge and apply it to some other kind of game or some other life situation altogether. And this is a fascinating frontier of AI research now: how can we make machines as good at learning from very limited data as people are?

And I think part of the challenge is that we humans aren’t just learning to recognize some patterns; we also gradually learn to develop a whole model of the world.

So if you ask, “Are there machines that are more intelligent than people today?” there are machines that are better than us at accomplishing some goals, but absolutely not all goals. AGI, artificial general intelligence, that’s the dream of the field of AI: to build a machine that’s better than us at all goals. We’re not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades.

And if that happens, you have to ask yourself whether that might lead to machines getting not just a little better than us, but way better at all goals, having superintelligence. The argument for that is actually really interesting and goes back to the ‘60s, to the mathematician I. J. Good, who pointed out that building an intelligent machine is itself a task you can accomplish with intelligence.

So once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines, except they might do it thousands or even millions of times faster.

In my book, I explore a scenario involving a computer called Prometheus, which has vastly more hardware than a human brain but is still very limited because its software is fairly dumb. So at the point where it reaches human-level general intelligence, the first thing it does is realize, “Oh! I can reprogram my software to become much better,” and now it’s a lot smarter.

A few minutes later it does this again, and then again, and again, and in a matter of perhaps a few days or weeks, a machine like that might not just become a little bit smarter than us but leave us far, far behind.
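To make the arithmetic behind that “days or weeks” intuition concrete, here is a minimal toy sketch (not from the talk) assuming that each rewrite multiplies capability by a fixed factor and that a smarter system finishes its next rewrite proportionally faster. The function name and every number in it are illustrative assumptions, not anything Tegmark specifies.

```python
# Toy model of compounding self-improvement (illustrative assumptions only).
def self_improvement_timeline(initial_capability=1.0,
                              gain_per_rewrite=1.5,
                              first_rewrite_days=7.0,
                              rewrites=20):
    """Yield (rewrite number, elapsed days, capability) for a toy model where
    capability grows geometrically and each rewrite takes 1/capability as long
    as the first one did."""
    capability = initial_capability
    elapsed = 0.0
    for n in range(1, rewrites + 1):
        elapsed += first_rewrite_days / capability   # faster system, shorter cycle
        capability *= gain_per_rewrite               # each rewrite compounds
        yield n, elapsed, capability

for n, days, cap in self_improvement_timeline():
    print(f"rewrite {n:2d}: day {days:6.2f}, capability x{cap:10.1f}")
```

With these made-up numbers the cycle times shrink geometrically, so the total elapsed time stays bounded at roughly three weeks even as capability keeps multiplying; that bounded-time, unbounded-growth pattern is the intuition behind calling it an “explosion,” though nothing guarantees real systems would follow such a curve.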

I think a lot of people dismiss this kind of talk of superintelligence as science fiction because we’re stuck in this sort of carbon-chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms.

From my perspective as a physicist, intelligence is just a kind of information processing performed by elementary particles moving around according to the laws of physics. And there’s absolutely no law of physics that says you can’t do that in ways that are much more intelligent than humans.

We’re so limited by how much brain matter fits through our mother’s birth canal and things like that, and machines are not, so I think it’s very likely that once machines reach human level, they won’t stop there; they’ll just blow right by, and we might one day have machines that are as much smarter than us as we are smarter than snails.
