
Are conscious machines possible? | Oxford professor Michael Wooldridge


6m read · Nov 3, 2024

AI is not about trying to create life, right? That's not what it's about at all. But it very much feels like that. I mean, if we ever achieved the ultimate dream of AI, which I call the "Hollywood dream of AI," the kind of thing that we see in Hollywood movies, then we will have created machines that are conscious, potentially, in the same way that human beings are. So it's very like that dream of creating life, and that, in itself, is a very old dream. It goes back to the ancient Greeks: the Greeks had myths about the blacksmith to the gods, who could create living creatures from metal. In medieval Prague there was the myth of the 'Golem,' a creature fashioned from clay and brought to life.

You know, the dream of creating life from nothing. So, it's a fascinating idea. It's an idea that's been there throughout human history, but it's an idea that we now seem to have the tools to potentially make real. Hi, my name's Mike Wooldridge. I'm a professor of computer science at the University of Oxford and an AI researcher, and most recently, I'm the author of "A Brief History of AI," out now from Flatiron Books.

So John McCarthy was an American researcher, and he applied for funding from the Rockefeller Foundation for a summer school at Dartmouth. What he had to do for this funding bid was give a name to what they wanted to do, and so he picked the term Artificial Intelligence, and it's the name that stuck. What McCarthy was working in was a trend in artificial intelligence called 'Symbolic AI.' When we consider what we should do, we kind of have a conversation with ourselves: "I should do this because X and Y and Z; no, I shouldn't do it because A and B," and so on. Symbolic AI is about trying to recreate that kind of reasoning.

So, how do we approach artificial intelligence? How do we go about doing it? We want to build a machine that can do some task which requires intelligence in humans; let's say translating French into English. The Symbolic AI view is that you go and find somebody who's a real expert, you find out from them all the knowledge that they use when they translate from French to English, and you code it up in what are computer versions of sentences. And if you do that right, so the idea goes, then the machine will have that human expertise. That's the Symbolic AI approach: human intelligent behavior is a problem of knowledge. If you give the machine the right knowledge, it will be able to solve the problem.
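
To make the symbolic idea concrete, here is a minimal sketch, in Python, of what "coding up expert knowledge" looks like. The vocabulary and the single word-order rule are hypothetical toy examples, not anything a real translation system would use; the point is only that every piece of knowledge is hand-written by a person.

```python
# Toy Symbolic AI: expert knowledge hand-coded as a lexicon plus rules.
# Every entry here is an illustrative assumption, not real system data.

LEXICON = {
    "le": "the", "la": "the", "chat": "cat",
    "noir": "black", "mange": "eats", "souris": "mouse",
}

def translate(french_sentence: str) -> str:
    words = french_sentence.lower().split()
    # Hand-written expert rule: French adjectives follow the noun
    # ("chat noir"), English adjectives precede it ("black cat").
    for i in range(len(words) - 1):
        if words[i] == "chat" and words[i + 1] == "noir":
            words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(LEXICON.get(w, w) for w in words)

print(translate("le chat noir mange la souris"))
# -> "the black cat eats the mouse"
```

The brittleness is visible immediately: every noun-adjective pair, every idiom, every exception needs another hand-written rule, which is exactly why this approach struggled to scale.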

But there's a different trend. It says, "Look, forget about trying to tell the machine how to do it by giving it the knowledge. Just show the machine what you want it to do, and get the machine to learn." In the French-to-English translation example, you're not telling it how to do the translation. You're just saying, "Look, for this input, this is what I would want you to produce as the output. For this French input, I would want this English output." And you give it lots of examples like that, and the idea is that it will learn how to do it. That's what machine learning is all about.
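
As a contrast, here is an equally minimal sketch of the learning view. Full French-to-English translation needs far heavier machinery, so this sketch swaps in a deliberately simple stand-in task: the machine is never told the Celsius-to-Fahrenheit formula, only shown (input, output) pairs, and gradient descent recovers the rule from the examples alone.

```python
# Toy machine learning: fit predict(c) = w*c + b from examples only.
# The examples encode F = 1.8*C + 32, but the program never sees the formula.

examples = [(0.0, 32.0), (10.0, 50.0), (20.0, 68.0), (100.0, 212.0)]

w, b = 0.0, 0.0   # parameters start knowing nothing
lr = 0.0001       # learning rate: how far each error nudges the parameters

for _ in range(200_000):
    for c, f in examples:
        err = (w * c + b) - f   # prediction minus desired output
        w -= lr * err * c       # gradient step for the weight
        b -= lr * err           # gradient step for the bias

print(round(w, 3), round(b, 3))   # ~1.8 and ~32.0, learned from data
print(round(w * 37.0 + b, 1))     # ~98.6 on an input it never saw
```

The division of labor is inverted: in the symbolic sketch a person wrote the rule; here the person only supplies examples, and the rule falls out of the training loop.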

And the techniques themselves are not new. Two researchers called McCulloch and Pitts, back in the 1940s, came up with the idea for what are now called 'neural networks.' But throughout the '60s and early '70s, progress really stalled, and there was a backlash against AI in the mid-1970s that came to be called the 'AI Winter.' It turned out that to make neural networks work, you need lots and lots of data, but also, these things are computationally very expensive. You need lots of compute power to make neural networks work.
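
The McCulloch-Pitts unit itself is simple enough to state in a few lines. In the common textbook rendering, a "neuron" takes binary inputs, forms a weighted sum, and fires only if the sum reaches a threshold. The weights and thresholds below are hand-picked for illustration; modern networks learn them from data, which is where all that data and compute goes.

```python
# A McCulloch-Pitts-style threshold unit (textbook rendering).
def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0   # fire, or stay silent

# With suitable weights and thresholds, one unit computes basic logic:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b}  AND={AND(a, b)}  OR={OR(a, b)}")
```

Networks of such units can in principle compute any logical function, which is what made the 1940s result so striking; the missing ingredient, as it turned out, was the data and compute to set the weights automatically.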

And that's the area where we've seen lots of progress over the last 15 years. That's really the reason we're having this conversation today; that's the reason AI is such an important field at the moment. Most of contemporary AI is focused on getting AI systems to do very, very narrow tasks, very, very specific things. And on those specific tasks, a system might be better than any living human being, but it can't do anything else. You can drive a car; I can drive a car. I can then get out of the car and play a game of football, rather badly in my case, and then make a good meal and tell a joke. I can do the whole range of things.

Consider a driverless car: however good it is at driving, it's doing one tiny, narrow thing. So, the grand dream of AI isn't formalized anywhere, there's no very specific version of it, but nowadays it goes by the name of 'Artificial General Intelligence,' or AGI. And basically what it means is that if AGI succeeds, if we achieve that grand dream, then we'll have machines with the same intellectual capabilities that human beings have. But there's one other fascinating part of the puzzle.

So a colleague of mine here at the University of Oxford, Robin Dunbar, an evolutionary psychologist, was interested in the following question: Why do human beings have big brains? It's a very natural question. What Dunbar became convinced of was the idea that we have big brains because we are social animals, and we have big brains to be able to cope with many social relationships: keeping track of what Bob thinks about what Alice thinks about Bob, that kind of thing, and how all of these relationships stand in relation to one another.

And what I find so fascinating about that is that it means human intelligence is, in a fundamental way, social intelligence. Back in the 1950s, when John McCarthy and his contemporaries were thinking about AI, what they wanted to do was demonstrate that machines could do things like learn and solve problems. It's only much more recently that AI has become concerned with these social aspects. What happens when two AI systems start to interact with one another? How do we give them social skills: cooperation, the ability to work as a team, to coordinate with each other, to negotiate with each other?

So, how might we get there, to conscious machines? One of the steps along that path is the idea that we will be able to build machines which can put themselves in another's mind. I think that's a step in the right direction, but the truth is we don't know how to take even that step at the moment. Human beings are wonderful creations. I mean, they are the most incredible creations in the entire Universe, but there's nothing magic about them. We are a bunch of atoms bumping up against each other.

For that reason, I don't think there's any logical reason to say that conscious machines aren't possible. But saying that something is logically possible and saying that we know how to do it are completely different things. Do we know how to do it? Absolutely not. And actually, one of the fundamental problems is that consciousness itself, in human beings, is not remotely understood. It is one of the big mysteries in science. How does that large number of neurons, connected in all those weird ways, create consciousness and self-awareness, the human experience?

So the path ahead, I think, is going to be slow and tortuous. These are fearsomely complex things that are being created. But one of the fascinating things, not just about AI but about computing generally, is that the limits to computing are not the limits of concrete or steel or anything like that in the physical world. You're bounded only by what you can imagine.
