
A.I. ‐ Humanity's Final Invention?


11m read · Oct 29, 2024

Humans rule Earth without competition, but we're about to create something that may change that: our last invention, the most powerful tool, weapon, or maybe even entity: artificial superintelligence. This sounds like science fiction, so let's start at the beginning.

Intelligence is the ability to learn, reason, acquire knowledge and skills, and use them to solve problems. Intelligence is power, and we're the species that exploited it the most. So much so that humanity broke the game of nature and took control. But the journey there wasn't straightforward.

For most animals, intelligence costs too much energy to be worth it. Still, if we track intelligence in the tree of species over time, we can see lots of diverse forms of intelligence emerge. The earliest brains were in flatworms, 500 million years ago; just a tiny cluster of neurons to handle basic body functions. It took hundreds of millions of years for species to diversify and become more complex.

Life conquered new environments, gained new senses, and had to contend with fierce competition over resources. But in nature, all that matters is survival, and brains are expensive. So for almost all animals, a narrow intelligence fit for a narrow range of tasks was enough. In some environments, animals like birds, octopuses, and mammals evolved more complex neural structures. For them, it paid off to have more energy-consuming skills, like advanced navigation and communication.

Then, around 7 million years ago, hominins emerged. We don't know why, but their brains grew faster than those of their relatives. Something was different about their intelligence. Very slowly, it turned from narrow to general, from a screwdriver to a multi-tool able to think about diverse problems. Two million years ago, Homo erectus saw the world differently from anyone before, as something to be understood and transformed. They controlled fire, invented tools, and created the first culture.

We probably emerged from them around 250,000 years ago, with an even larger and more complex brain. It enabled us to work together in large groups and to communicate complex thoughts. We used our intelligence to improve our lives, to ask how things work and why things are the way they are. With each discovery, we asked more questions and pushed forward, preserving what we learned and outpacing what evolution could do with genes.

Knowledge builds on knowledge. Progress was slow at first and then sped up exponentially. Agriculture, writing, medicine, astronomy, and philosophy exploded into the world. 200 years ago, science took off and made us even better at learning about the world and speeding up progress. 35 years ago, the internet age began. Today, we live in a world made to suit our needs, created by us, for us.

This is incredibly new. We forget how hard it was to get here, how enormous the steps on the intelligence ladder were, and how long it took to climb them. But once we did, we became the most powerful animal in the world in a heartbeat. But we may be in the process of changing this. We're building machines that could be better at the very thing that gave us the power to conquer the planet: humanity's final invention: artificial intelligence.

Artificial intelligence, or AI, is software that performs mental tasks, using code running on silicon instead of neurons to solve problems. In the beginning, AI was very simple lines of code on paper—mere proofs of concept to demonstrate how machines could perform mental tasks. Only in the 1960s did we start seeing the first examples of what we would recognize as AI: a chatbot in 1964, a program to sort through molecules in 1965. These were slow, specialized systems requiring experts to use them. Their intelligence was extremely narrow, built for a single task inside a controlled environment—the equivalent of flatworms 500 million years ago, doing the minimum amount of mental work.
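
Those early chatbots worked by pattern matching, not understanding. A minimal Python sketch of the idea (the rules below are invented for illustration; they are not the actual script of any 1960s system):

```python
import re

# Toy pattern-matching chatbot in the spirit of 1960s systems:
# scan a list of (pattern, response template) rules and echo back
# pieces of the user's input. No understanding is involved.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r"(.*)", "Please, go on."),            # fallback: always has a reply
]

def reply(text):
    text = text.lower().strip(".!? ")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please, go on."

print(reply("I am worried about AI"))   # -> "Why do you say you are worried about ai?"
```

A handful of rules like this is enough to sustain a surprisingly convincing conversation, which is exactly why such systems looked more intelligent than they were.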

Progress in AI research paused several times when researchers lost hope in the technology. But just like changing environments create new niches for life, the world around AI changed. Between 1950 and 2000, computers got a billion times faster while programming became easier and widespread. In 1972, AI could navigate a room. In 1989, it could read handwritten numbers. But it remained a fancy tool, no match for humans until, in 1997, an AI shocked the world by beating the world champion in chess, proving that we could build machines that could surpass us.

But we calmed ourselves because a chess bot is quite stupid—not a flatworm, but maybe a bee—only able to perform a specialized narrow task. But within this narrow task, it's so good that no human will ever again beat AI at chess. As computers continued to improve, AI became a powerful tool for more and more tasks. In 2004, it drove a robot on Mars. In 2011, it began recommending YouTube videos to you.

But this was only possible because humans broke down problems into easy-to-digest chunks that computers could solve quickly, until we taught AIs to teach themselves: the rise of the self-learning machines. This is not a technical video, so we're massively oversimplifying here. In a nutshell, the sheer power of supercomputers was combined with the almost endless data collected in the information age to make a new generation of AI.

AI experts began drastically improving forms of AI software called neural networks—enormous networks of artificial neurons that start out being bad at their tasks. They then used machine learning, which is an umbrella term for many different training techniques and environments that allow algorithms to write their own code and improve themselves. The scary thing is that we don't exactly know how they do it and what happens inside them—just that it works and that what comes out the other end is a new type of AI, a capable black box of code.
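
The core trick can be shown with the simplest possible "model": parameters that start out knowing nothing and repeatedly nudge themselves to shrink their own error. A toy sketch, with an invented task (fit the line y = 2x + 1) and made-up numbers:

```python
# Minimal sketch of machine learning: a one-weight, one-bias model
# improves itself via gradient descent. No human tells it the answer;
# the update rule alone drives it toward w = 2, b = 1.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # ground truth to discover

w, b = 0.0, 0.0          # the "network" starts out bad at its task
lr = 0.01                # learning rate: size of each self-correction

for step in range(2000):
    # gradients of the mean-squared error, averaged over the data
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w     # each step reduces the model's own error
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0
```

Real neural networks do exactly this, just with billions of parameters instead of two, which is why what happens inside them is so hard to inspect.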

These new AIs could master complex skills extremely quickly, with much less human help. They were still narrow intelligences, but a huge step up. In 2014, Facebook AI could identify faces with 97% accuracy. In 2016, an AI beat the best humans in the incredibly complex game of Go. In 2018, a self-learning AI learned chess in four hours, just by playing against itself, and then defeated the best specialized chess bot.
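
"Learning just by playing against itself" can be sketched on a much smaller game. The toy below uses simple tabular reinforcement learning on Nim (take 1-3 sticks from a pile; whoever takes the last stick wins); all parameters are invented, and real systems like the 2018 chess AI use deep neural networks and far more sophisticated search:

```python
import random

# Self-play sketch: both players share one value table and improve
# together, with no human examples of good play involved.
random.seed(0)
Q, N = {}, {}            # running average value and visit count per (pile, move)

def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def best(pile, eps=0.0):
    if random.random() < eps:                    # occasionally explore
        return random.choice(moves(pile))
    return max(moves(pile), key=lambda m: Q.get((pile, m), 0.0))

def train(episodes=20000, eps=0.2, start=15):
    for _ in range(episodes):
        pile, history = start, []
        while pile > 0:                          # both sides use the same policy
            m = best(pile, eps)
            history.append((pile, m))
            pile -= m
        # The player who took the last stick won; credit every move they made.
        for i, (s, m) in enumerate(reversed(history)):
            r = 1.0 if i % 2 == 0 else -1.0      # winner's moves +1, loser's -1
            n = N.get((s, m), 0) + 1
            N[(s, m)] = n
            q = Q.get((s, m), 0.0)
            Q[(s, m)] = q + (r - q) / n

train()
# From 15 sticks, the known winning strategy is to take 3, leaving the
# opponent a multiple of 4; self-play rediscovers this without being told.
print(best(15))
```

The striking part is the same as in the chess result: no game knowledge goes in, only the rules and the win/loss signal, yet strong play comes out.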

Since then, machine learning has been applied to reading, image processing, solving tests, and much more. Many of these AIs are already better than humans at whatever narrow task they were trained for. But they still remained simple tools, and AI still didn't seem like that big of a deal to most people. And then came the chatbot ChatGPT. The work that went into it is massive; it trained on nearly everything written on the internet to learn how to handle language, which it now does better than most people.
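
At its core, "learning to handle language" from text means learning which words tend to follow which, then generating by repeatedly choosing a plausible next word. Real LLMs do this with huge neural networks over subword tokens; the deliberately tiny stand-in below uses a word-pair counter and an invented corpus:

```python
import random
from collections import defaultdict

# Toy next-word model: "training" just counts which word follows which,
# then generation samples from those counts one word at a time.
corpus = "the cat sat on the mat . the cat saw the dog . the dog sat on the rug ."

follows = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)                # frequent pairs appear more often

def generate(start="the", n=6, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        out.append(random.choice(follows[out[-1]]))  # pick a likely next word
    return " ".join(out)

print(generate())
```

Even this crude version produces locally plausible word sequences, which hints at why scaling the same next-word objective up by many orders of magnitude produces something that reads like understanding.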

It can summarize, translate, and help with some math problems. It's far broader than any other system: it doesn't just crush a single benchmark but a lot of them at once, something unthinkable just a few years ago. Many large tech companies are spending billions to build powerful competitors. AI is already transforming customer service, banking, healthcare, marketing, copywriting, creative spaces, and more.

AI-generated content has already taken hold of social media, YouTube, and news websites. Elections are expected to be inundated by propaganda and misinformation. No one is sure how much good or harm can come from adopting AI everywhere. Change is scary; there will be winners and losers.

One of the biggest questions governments and corporations have now is how to manage the transition to an AI-boosted economy. All these potential gains and risks are just the result of today's AI. ChatGPT's intelligence is a major step up, but it remains narrow: it can write a great essay in seconds, but it doesn't understand what it's writing. So what if AIs stopped being narrow? General AI: what makes humans different from current AI is our general intelligence.

Humans can technically absorb any piece of knowledge and start working on any problem. We're great at many very different skills and tasks, from playing chess to writing or solving science puzzles. Not equally, of course; some of us are experts in some fields and beginners in others. But we can technically do all of them.

In the past, AI was narrow, able to become good at one skill but rather bad at all the others. Simply building faster computers and pouring more money into AI training will get us new, more powerful generations of AI. But what is the next step for AI? It's to become a general intelligence like us: an AGI. If the AI improvement process continues as it has been, it's not unlikely that AGI could become better at most or even all of the skills humans have.

We don't know how to build AGI, how it will work, or what it will be able to do. Since narrow AIs today are capable of mastering one mental task quickly, AGI might be able to do the same with all mental tasks. So even if it starts out stupid, AGI might be able to become as smart and capable as a human. While this sounds like science fiction, most AI researchers think this will happen sometime this century, maybe even within a few years.

Humanity is not ready for what will happen next—not socially, not economically, not morally. Earlier, we defined intelligence as the ability to learn, reason, acquire knowledge and skills, and use them to solve problems—all things humans excel at. An AGI as intelligent as even an average human would already disrupt modern civilization because they're not bound by the same limitations as we are. Today's AIs, like ChatGPT, already think and solve the tasks they were made for at least 10 times faster than even very skilled humans.

Maybe AGI will be slower, but it may also be faster—maybe much faster. And since AGIs are software, you could copy them endlessly as long as you have enough storage and run them in parallel. There are 8 million scientists in the world now. Imagine an AI copied a million times and put to work. Imagine 1 million scientists working 24/7, thinking 10 times faster than humans without being distracted, only focused on the task they've been given.

What if suddenly AGIs could do all intelligence-based jobs in the world, from interpreting law to coding to creating animated YouTube videos, better, faster, and much cheaper than humans? Would whoever controls this AGI suddenly own the economy?

And thinking bigger, human progress is our intelligence applied to problems. So what could a million AGIs achieve? Solve fundamental questions of science like dark energy, invent new technology that gives us limitless energy, fix climate change, cure aging and cancer. But then again, sadly, humans don't apply their intelligence only for the benefit of all.

What if the AGIs are tasked to guide drones or pull the triggers in war, or to engineer a virus that only kills people with green eyes, or to create the most profitable social media so addictive that people starve in front of their screens? The creation of AGI could reasonably be as big of an event as taming fire or electricity and give whoever invents it equally as much power.

But now, let's go one step further. What if the potential of AGI doesn't stop here? Intelligence explosion: intelligence and knowledge build and accelerate each other. But humans are limited by biology and evolution. Once we evolved the right hardware, our software outpaced evolution by orders of magnitude, and within a heartbeat, we ruled this planet. But our software basically hasn't changed much since then, which is why we have obesity and destroy the climate for short-term gains.

Since AGI is software on a computer, once it's smart enough to do AI research, the rate of AI progress should speed up a lot. And that results in better AI that's better at AI research, without much human involvement. It may even be possible that AI could learn how to directly improve itself, in which case, some experts fear this feedback loop could be incredibly fast—maybe just months or years after the first self-improving AGI is switched on.
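
The shape of this feedback loop can be sketched with a back-of-envelope model: if capability grows at a rate proportional to itself (better AI does AI research faster), the time to any large gain shrinks inversely with the feedback strength. Every number here is made up purely to show the shape of the curve, not to predict anything:

```python
# Toy model of a recursive-improvement loop: dC/dt = feedback * C.
# Stronger feedback between AI and AI research collapses the timescale.

def years_to_multiply(feedback, target=1000.0, step=0.01):
    """Years until capability grows `target`-fold, integrated crudely
    in increments of `step` years (simple Euler steps)."""
    capability, t = 1.0, 0.0
    while capability < target:
        capability += capability * feedback * step
        t += step
    return round(t, 1)

for f in (0.5, 2.0, 8.0):   # weak, moderate, strong feedback (arbitrary units)
    print(f, years_to_multiply(f))
```

The point of the sketch is only that the outcome is extremely sensitive to the feedback strength, which is exactly the parameter nobody knows; that is why estimates range from months to decades.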

Maybe it would actually take decades. We simply don't know; this is all speculative. But such an intelligence explosion might lead to a true superintelligent entity. We don't know what such a being would look like, what its motives or goals would be, or what would go on in its inner world. We could be as laughably stupid to superintelligence as squirrels are to us, unable to even comprehend its way of thinking.

This hypothetical scenario keeps many people up at night. Humanity is the only example we have of an animal becoming smarter than all others, and we have not been kind to what we perceive as less intelligent beings. AGI might be the last invention of humanity. It's possible that it could become the most intelligent and therefore most powerful being on Earth—a God in a box that could exercise its power to bring unimaginable wealth and happiness to humans while securing our future.

Or it could subvert civilization and bring about our end, with humanity unable to come up with a way to stop it. We'll look at some of these potential futures in more videos. But for now, let's wrap up. The only thing we know for sure is that today, right now, many of the largest and richest companies in the world are racing to create ever more powerful AIs.

Whatever our future is, we are running towards it. Who knows how long we have until we must confront our AI future? Luckily, you still have plenty of time to prepare for it if you're learning on Brilliant. That is brilliant!

Brilliant will make you a better thinker and problem solver in just minutes a day, with thousands of bite-sized hands-on lessons on just about anything you may be curious about, including AI. Their latest course, "How LLMs Work," takes you under the hood of real language models. It demystifies technologies like ChatGPT with interactive lessons on everything from how models build vocabulary to how they choose their next word.

You'll learn how to tune LLMs to produce output with exactly your desired tonality, whether it's poetry or a cover letter, and you'll understand why training is really everything by comparing models trained on Taylor Swift lyrics with ones trained on the legalese of big tech terms and conditions. It's an immersive AI workshop, allowing you to experience and harness the mechanics of today's most advanced tool.

We've also partnered with Brilliant to create a series of lessons to take your scientific knowledge to the next level. These lessons let you further explore the topics in our most popular videos, from apes and metabolism to climate science and supernovae. Each lesson on Brilliant is interactive, like a one-on-one version of a Kurzgesagt video, and you can get started whenever, wherever, right from whatever device you'd like, to get hands-on with Kurzgesagt lessons and explore everything Brilliant has to offer, from AI and programming to math, science, and beyond.

Start your free 30-day trial by signing up at brilliant.org/nutshell. There's even an extra perk for Kurzgesagt viewers: anyone signing up through our link will get 20% off an annual membership once their trial ends.
