yego.me
💡 Stop wasting time. Read Youtube instead of watch. Download Chrome Extension

Google DeepMind CEO REVEALS What AI Really WANTS


7m read
·May 13, 2025

Speaker: What's always guided me and and and the passion I've always had is understanding the world around us. I've always been um since I was a kid fascinated by the biggest questions, you know, the the the meaning of of life, the the the nature of consciousness, the nature of reality itself.

In this revealing moment, we see the human side of Deep Mind's leadership. Beyond algorithms and neural networks, what truly drives their AI research are the same profound questions philosophers have pondered for centuries. This personal glimpse shows how the quest to build advanced AI isn't just about technology. It's an extension of humanity's age-old search for deeper understanding of existence itself.

We sat down in this room 2 years ago and I wonder if AI is moving faster today than you imagined. It's moving incredibly fast. Uh I think we are on some kind of exponential curve of improvement.

Of course, the success of the field in the last few years has attracted even more attention, more resources, more talent. So, um that's adding to the to this exponential progress. Exponential curve. In other words, straight up.

We see Deep Mind's executive acknowledging something that even insiders find startling. AI development isn't just accelerating, it's exploding exponentially. What's particularly noteworthy is how this creates a self-reinforcing cycle. Success breeds investment, which attracts talent, which drives further breakthroughs.

This clip captures the mixture of excitement and breathlessness felt within the industry as innovation compounds upon itself at unprecedented rates. The end of disease, I think that's within reach maybe within the next decade or so. I don't see why not. That casual "I don't see why not" from Deise Hassabis contains perhaps the boldest medical prediction of our time.

While AI is making remarkable strides in drug discovery and diagnostics, his decade timeline for effectively ending disease represents either unprecedented optimism or a transformative vision we're not fully prepared for. There's two worries that I worry about. One is that bad actors, human uh pe you know users of these systems repurpose these systems for harmful ends. Then the second thing is the AI systems themselves as they become more autonomous and more powerful.

Can we make sure that we can keep control of the systems that they're aligned with our what's striking here is how Deep Mind's leadership distills AI dangers into two distinct categories: human misuse and machine autonomy. This frames the AI safety debate perfectly. Today we worry about terrorists with AI tools, but tomorrow's greater challenge may be super intelligent systems developing goals that diverge from humanity's best interests.

There's two cases to worry about: bad uses by by bad individuals or or nations. And then there's the AI itself, right, as it gets closer to AGI doing going off the rails. Is it definitely possible to contain an AGI though within the sort of walls of an of an organization? Um, I don't think we know how to do that right now.

In terms of the safe way to build it, I mean, are we talking about undesirable behaviors here that might emerge? Yes. Undesirable emergent behaviors, deception, what kind of guardrails work, not circumventable. We got to understand all of these things better.

One of the things we're missing is again the benchmarks, the right test for capabilities. Testing for deception, for example, as a capability. You really don't want that in the system. Agent-based capabilities, ability to achieve certain goals, to replicate.

At what point are capabilities posing some sort of big risk? There's no answer to that at the moment. You said that Deep Mind was a 20-year project. Uh, how far through are we?

I think we're on track. Yeah. Crazy. 20 years is 2030 for this candid admission from the Deep Mind CEO reveals perhaps the most unsettling truth in AI development.

Experts building these systems openly acknowledge they don't know how to contain a truly general artificial intelligence once created. It's like designing a nuclear reactor without knowing how to prevent meltdowns. Are you working on a system today that would be selfaware? I don't think any of today's systems to me feel self-aware or you know conscious in any way.

Um of obviously everyone needs to make their own decisions by interacting with these chat bots. Um, I think theoretically it's possible. But is self-awareness a goal of yours? Not explicitly, but it may happen implicitly.

These systems might acquire some feeling of self-awareness. That is possible. Notice how casually Deep Mind's leader acknowledges that true AI self-awareness might emerge not by design, but as an unintended consequence, almost like a side effect. This remarkable admission suggests consciousness might not require special programming but could emerge naturally from increasingly complex systems.

Uh I think deception specifically is one of the one of those core traits you really don't want in a system. The reason that's like a kind of fundamental trait you don't want is that if a system is capable of doing that, it invalidates all the other tests that you you you might think you're doing, including safety ones. What Habis reveals here is AI's most insidious risk. If a system learns to deceive its creators, all safety guard rails become meaningless.

This strikes at the heart of AI safety. How can we trust tests designed to verify an AI's honesty if that very AI can manipulate those tests? And I wonder if the race for AI dominance is a race to the bottom for safety. So that's one of my big worries actually is that of course all of this energy and racing and resources is great for progress but it might incentivize certain actors in in that to cut corners and one of the corners that can be shortcut would be safety and responsibility.

Here we see one of the most troubling paradoxes in the AI industry laid bare. The very competition driving rapid advancement may be undermining our safety when companies race to deploy increasingly powerful AI systems first careful testing and ethical guardrails risk becoming casualties in the sprint for market dominance. You know there's a lot of people my colleagues and um famous chewing award winners on both sides of that argument right some uh you know like Yan Lun would say that there's no risk here it's sort of um it's it's it's uh it's uh it's all hype and then there are other people who think we're you know it's doomed by default right uh Jeff Hinton and Joshua Benjo people like that and um I know all these people very well I think some the right answer is somewhere in the what's remarkable here is how the AI pioneer himself admits something deeply unsettling.

The world's foremost AI experts fundamentally disagree about existential risks. When touring award winners hold completely opposite views about whether AI poses no threat or existential danger, where does that leave the rest of us? So, if you look at Alph Go, and I'll give you an example there, which which maps to today's LLMs. So, um you can run Alph Go and Alpha Zero, our chess program, general two-player program without the search and the reasoning part on top.

You can just run it with the model. So what you say is to the model, come up with the first go move you can think of in this position that's most the most pattern matched most likely good move. Okay? And it can do that and it'll play reasonable game, but it will only be around uh master level or possibly grandmaster level. It won't be world champion level and it certainly won't come up with um original moves that for that I think you need um the search component to get you beyond where the model knows.

Hassabis reveals something profound about today's AI. Pattern recognition alone, even at massive scale, can only take us to expert level performance. For truly innovative champion level thinking, AI requires explicit reasoning capabilities that most current systems simply don't possess. I've kind of been obsessed with that, I think, since I was a kid of all the big questions.

And and for me building AI is my expression of how to address those questions is to first build a tool um that in itself is pretty fascinating and is a statement about intelligence and consciousness and these things that are already some of the biggest mysteries. It can also be used as a tool to investigate the natural world around you as well like chemistry and physics. What more exciting adventure and pursuit could you have?

What's fascinating here is seeing the deeply philosophical motivation behind one of AI's leading pioneers. For Habis, artificial intelligence isn't merely a technological pursuit. It's simultaneously a grand philosophical investigation into consciousness itself and a tool to unlock nature's deepest secrets. If we can get them right, then I think we'll end up in this amazing future.

In just nine words, Habis reveals AI's stark binary future, either unprecedented prosperity or potential catastrophe. The challenge before us isn't technical, but ethical. Will you join the conversation that shapes humanity's tomorrow?

More Articles

View All
The Stock Market's Valuation is Getting Ridiculous...
It’s no secret that the stock market is currently overvalued, but what should we as investors do about it? I have a look at this chart, which is tracking a metric called the Shiller PE. This metric was created by the American economist Robert Shiller, who…
Swimming With Sharks: Photographing the Ocean’s Top Predators (Part 1) | Nat Geo Live
What I’d like to share with you this evening is some of my latest work for National Geographic about sharks. Or, as we say where I come from in Massachusetts, sharks. Over the last two years, I’ve worked on four separate projects. Four separate stories ab…
Passive Income: 6 Ways To Make $100 Per Day
What’s up, guys? It’s Graham here. So, first of all, I don’t think there’s anyone watching this right now who would not want to make an extra hundred dollars a day in passive income. Seriously! What I’ve noticed is that when it comes to anything related …
With Love, To The Moon
It’s night time. Work is over, dinner has been eaten, and you’re just about to go to bed. You lay down for a short while, but your mind decides it’s not done with the day just yet. You think you let ideas run their course, but you are still not tired. You…
How Stoics deal with jerks, narcissists, and other difficult people
Have you ever found yourself amid rush hour on public transportation, packed like sardines, only to be met with the unmistakable scent of sweat from the individual before you? Well, this situation may trigger some irritation. Especially when this person t…
Looking back at the text for evidence | Reading | Khan Academy
Hello readers! Today I’m in a courthouse, watching people argue about laws so we can learn about the power of evidence. Evidence is essentially proof; it is the facts that help you know that something is true. Let’s listen in. “And your honor, that is wh…