
Ex Machina's Scientific Advisor - Murray Shanahan



So the first question I wanted to ask you: given the popularity of AI, or at least the interest in AI right now, what was it like doing your PhD thesis on AI back in the 80s?

Yeah, well, very different. I mean, it is quite a surprise for me to find myself in this current position where everybody is interested in what I'm doing. The media are interested; corporations are interested. Certainly when I was a PhD student, and when I was a young postdoc, it was a fairly niche area. You could just beaver away in your little corner, doing things you thought were intellectually interesting, reasonably secure that you wouldn't get bothered by anybody. It's not like that anymore.

And what exactly was the subject matter at the time? What were you working on in your thesis?

Yeah, well, I worked on—this is a tricky question; you're asking me to go back. Let me think, what is it, 30-something years? Yeah, I finished my thesis 30 years ago.

Okay, so what did it look at? I was interested in logic programming and Prolog-type languages, and in how you could speed up answering queries in Prolog-like languages by keeping a kind of record of the relationships between facts and theorems that you had already established. So instead of having to redo all the computations from scratch, it kept a little collection of the relationships between properties you'd already worked out, and you didn't have to redo the same computations over again. That was the main contribution of the thesis.
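A minimal sketch of the idea in modern terms (my illustration, not the actual mechanism from the thesis): cache every relationship you derive, so a repeated query is answered from the record rather than recomputed.

```python
from functools import lru_cache

# Toy fact base: direct parent relationships.
PARENT = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

@lru_cache(maxsize=None)  # the "record" of already-established relationships
def is_ancestor(x, y):
    """Derive ancestor(x, y) from parent facts, memoising every result."""
    if (x, y) in PARENT:
        return True
    # x is an ancestor of y if some child z of x is an ancestor of y.
    return any(is_ancestor(z, y) for (p, z) in PARENT if p == x)

print(is_ancestor("alice", "dave"))  # derived once
print(is_ancestor("alice", "dave"))  # answered from the cache
```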

I'm amazed you remember any of it. I did my thesis like five years ago, and I barely—

And so did you pursue that further at all?

Oh no, I didn't. One other thing that I discussed in my thesis was the frame problem—I had a whole chapter on it. There are different ways of characterizing the frame problem, but in its broadest form, it's all about how a thinking mechanism—a thinking creature, or thinking machine, if you like—can work out what's relevant and what's not relevant to its ongoing cognitive processes, and how it isn't overwhelmed by having to explicitly rule out trivial things that aren't relevant.

And that comes up in a particular guise when you're using logic, and when you're using logic to think about actions and their effects. You want to make sure that you don't have to spend a lot of time thinking about the non-effects of actions. For example, if I move around a bit of the equipment, like your microphone here, the color of the walls doesn't change, and you shouldn't have to explicitly think about all those kinds of trivial things.

So that's one aspect of the frame problem. But more generally, it's all about circumscribing what is relevant to your current situation—what you need to think about and what you don't.
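A toy illustration of the logical issue (my example, not from the interview): an action should only have to say what it changes, with everything else persisting without an explicit "moving the microphone doesn't change the wall color" axiom for every combination.

```python
# Representing the world as a set of facts; an action lists only its effects.
state = {"mic_position": "desk", "wall_color": "white", "door": "closed"}

def apply_action(state, effects):
    """All facts the action doesn't mention carry over unchanged --
    the persistence that logic-based AI had to encode with frame axioms."""
    new_state = dict(state)
    new_state.update(effects)
    return new_state

after = apply_action(state, {"mic_position": "shelf"})
print(after["wall_color"])  # still "white", with no explicit axiom saying so
```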

And so how did that translate to what folks are working on today?

Well, actually, the frame problem has recurred throughout my career, although there's been a lot of variation in what I've done. So I worked for a long time in classical artificial intelligence, which was—and still is—all about using logic: sentence-like representations of the world, with mechanisms for reasoning about those sentences. A rule-based approach. And that approach of classical AI has fallen out of favor a little bit.

And I got a bit disillusioned with it, well, a long time ago—by the turn of the millennium I'd more or less abandoned classical AI, because I didn't think it was moving toward what we now call AGI, artificial general intelligence—the big vision of human-level AI. And so I thought, well, I'm going to study the brain instead, because that's the one example we have of an intelligent thinking thing. It's the perfect example.

So I wanted to try and understand the brain a bit more, and I started building computational neuroscience-style models of the brain, thinking about the brain from a larger kind of perspective—about consciousness and the architecture of the brain, the big questions.

And now—I'm getting around to answering your question eventually—I'm interested in machine learning. There's been this resurgence of interest in machine learning, so I've moved back to some of my interests in artificial intelligence. I'm not thinking so much about the brain or neuroscience or that kind of empirical work right now, and I've gone back to some of the old themes I was interested in in good old-fashioned AI, classical AI.

So that's an interesting trajectory, actually. The frame problem, interestingly, has been a recurring theme throughout all of that, because it keeps coming up in one guise or another.

So in classical AI, there was the question of how you can write out a set of sentences that represent the world well without having to write a load of sentences that encompass all of the trivial things that are irrelevant. And somehow the brain seems to solve that as well. The brain seems to manage to focus and attend only to what's relevant to the current situation and ignore all the rest.

And in contemporary machine learning, there's this kind of issue as well. It's a challenge, especially if you start to rehabilitate some of these ideas from symbolic AI: you want to think about how you can build systems that focus on what's relevant in the current situation and ignore things that are not.

For example, there's a lot of work here at DeepMind done with these Atari computer games. If you think of a retro computer game like Space Invaders—if you think about the little invader going across the screen, it doesn't really matter what color it is. It probably doesn't really matter what shape it is, either. What really matters is that it's dropping bombs and you need to get out of the way of those things.

So in a sense, a really smart system would learn that it's not the color that matters, it's not the shape that matters—it's these little shapes that fall out of the thing that matter. And that's all about working out what's relevant and what's not relevant to getting a good score in the game.

Sorry for the interruption everyone; we just got to see Garry Kasparov talk. It's pretty amazing.

Yeah, that was fantastic, wasn't it?

Yeah, Garry Kasparov in conversation with Demis Hassabis. He gave a great talk about the history of computer chess and, you know, his famous match with Deep Blue. So yeah, we just had to pop upstairs.

Uh-huh—and that was part—

Yeah, kind of one of those once-in-a-lifetime things. It also seems like he got out at the exact right time.

Yeah, maybe. Yes, he did.

Yeah, so Demis at the beginning of the interview said that he thought Kasparov was perhaps the greatest chess player of all time—and so he was there just at the right time to be knocked out by a computer, in a way.

Yeah, not knocked off the top spot.

Very cool.

Yeah, and he also said—maybe accurately—that any chess program on an iPhone now is probably better than Deep Blue was in 1997.

Yeah, which is interesting.

Yeah, I also found it very interesting when he was saying that anybody in their living room can now sit and watch two grandmasters playing, and can use their computer to see as soon as they make a mistake and analyze the match—you can follow exactly what's going on.

Yeah, whereas in the past it took, you know, expert commentators sometimes days to figure out what was going on when two great players were playing.

Yes, that was interesting. What struck me was how he was analyzing the current play, and how they rely so heavily on the computer—or at least he thinks they rely so heavily on the computer—that they're kind of reshaping their minds.

Right?

Yeah, and that's certainly, I think, going to be true with Go and AlphaGo. It's been interesting watching the reactions of the top Go players, like Lee Sedol and Ke Jie, who are very positive, in a way, about the impact of computers on the game of Go.

And they talk about how AlphaGo and programs like it can help them to explore parts of this universe of Go that they would never otherwise have been able to visit. And you know, it's really interesting to hear them speak that way.

Yeah, it seems like they're going to open up just kind of new territories for new kinds of games to actually be created.

Yeah, indeed.

Well, we've already seen that with AlphaGo, in the match with Lee Sedol. As you probably know, there was a famous move in the second game against Lee Sedol—move 37—where all the commentators, all the 9-dan masters, were saying, "Oh, this is a mistake. What's AlphaGo doing? This is very strange."

And then they gradually came to realize that this was a sort of revolutionary tactic—to place a stone on that particular line at that point in the game.

And since then, the top Go players have been exploring that kind of play—moving into that sort of territory when the conventional wisdom was that you shouldn't.

Hmm. Yeah, I mean, the augmentation in general I find fascinating across the board.

Yeah, and he was hinting at that as well.

Yeah, he was, yes.

So he was very positive about the prospects of human-machine partnerships, where humans provide maybe a creative element and machines can be more analytical and so on.

Well, what was that law that he mentioned? I forgot the name of it.

I wrote it down—

Moravec.

Oh, yes, Moravec's law.

Yeah, named after Hans Moravec, the roboticist, who wrote some amazing books, including Mind Children.

So you like this book Mind Children?

And this phrase "mind children" refers to the possibility that we might create these artifacts that are like children of our mind and that they have sort of lives of their own, and they are the children of our minds. You know, it's a challenging idea. This is an old book coming from the late 80s.

Okay, do you buy it?

Maybe, in the distant future.

Okay, well then maybe we ought to segue back into what we were talking about, which is related to your book from two books ago, Embodiment and the Inner Life.

Yeah, yeah, which came out in 2010.

Okay, because that was kind of an integral question to the movie Ex Machina, right?

Yeah—you didn't necessarily have to have a human-like AI, and more importantly, you didn't have to have an AI that looked like a person—that looked like an attractive female who also looks like a robot, right? They tee it up in the beginning; Nathan teases it in the beginning.

Yeah, yeah—I mean, obviously, to a certain extent, those are things that make for a good film. They're artistic choices and cinematographic choices.

And I mean in the film Her, we actually have, of course, a disembodied AI.

And so it's possible to make a film out of disembodied artificial intelligence as well. But obviously, a lot of the plot, and what drives the plot forward in Ex Machina, has to do with embodiment.

Hmm.

And the fact that Caleb is attracted to her and sympathizes and empathizes with her. But there's also a philosophical side to it, which is—certainly I think that when it comes to human intelligence and human consciousness, our physical embodiment is a huge part of that.

It's where our intelligence originates, because what our brains are really here to do is help us navigate and manipulate the complex world of objects in 3D space.

And so our embodiment is an essential factor here. We have got these hands that we use to manipulate objects, and we've got legs that enable us to move around in complicated spaces.

And so that, in a sense, is what our brains are originally for—the biological brain is there to make for smarter movement, and all the rest of intelligence is a flowering out of that in a way.

And so did you buy the gel that he showed Caleb in the beginning?

Oh yeah. So it's interesting, because of the way the film is constructed—Alex Garland, you know, the writer and director, sometimes says that the film is set ten minutes into the future.

It's, you know, really a lot like our world.

Yeah, just very slightly into the future.

Yeah, and so when you see Nathan's lair, it's a retreat in the wilderness. There's nothing particularly science fiction about that.

It's a desirable location, of course—it is, in fact, a real hotel; you can actually stay at this place in Norway.

And so it doesn't have a particularly futuristic feel. Almost everything you see is not very futuristic; it's not like Star Wars.

But then there are a few carefully chosen things that look very futuristic—Ava's body, the way you can see the insides of her torso and her head.

And then when he shows the brain, which is made of this gel. I think that was a good choice, because we don't, at the moment, know how to make things like Ava, with that level of artificial intelligence.

So that's the point at which you have to go sci-fi, really.

Well, I mean, it has those lifelike, gel-like elements. Have you watched the new HBO show Westworld?

You know, I haven't, no.

I mean, yeah, it really is on my to-watch list, because I've heard a lot about it.

Yeah, because there's the original, with Yul Brynner.

But I haven't watched the series, of course.

Yeah, yeah, they definitely take cues.

I mean, I guess it's probably like in the sci-fi canon that you have this basement lair where you create the robots, and then they become lifelike through this whole process.

Even if you just watch kind of the opening title credits, it's exactly that.

It's like the 3D-printed sinews of the muscles; it looks exactly like Nathan's lair.

And so what I was wondering is: as you were consulting on the film, how much of that were they asking you about? Were they saying, "Is this remotely ten minutes in the future, or is this 50 years out?"

Yeah, well, it wasn't really like that.

Okay.

I mean, I'll tell you the sort of whole story of how the kind of collaboration came about.

So I got this email from Alex Garland, you know, unsolicited email out of the blue.

It's the kind of unsolicited email you really want to get.

Yeah.

From, you know, a famous writer-director who wants you to work on a science fiction film.

And he basically said, "I read your book Embodiment and the Inner Life, and it helped me to crystallize some of the ideas around this script that I'm writing for a film about AI and consciousness. Do you want to get together and have a chat about it?"

So I didn't have to think very hard about that.

And so we got together and had lunch, and he sent me the script.

And so I'd read through the script by the time I got to see him, and he really—he certainly wanted to know whether it sort of felt right from the standpoint of somebody working in the field.

And it really did—there was nothing that jarred. I mean, as a script, it was a great page-turner, actually.

It's interesting being in that position because now Ex Machina and the image of Ava have become kind of iconic, and you see it everywhere.

But of course when I read the script, all of that imagery didn't exist.

So I was reading it; I had to kind of conjure it up in my own head.

So he didn't give you any kind of preview of what he was envisioning? Because nobody had been cast at that point.

And actually, when we met up, if my memory serves me right, he did have a few images of some mock-ups from artists of what Ava might look like.

But I hadn't seen those when I read the script, so for me, it was just the script, and the characters really leapt off the page.

The character of Nathan in particular was really very vivid, and you know, you disliked this guy just from reading the script.

Anyway, so then Alex really wanted to—sorry, so I sort of grabbed the title of scientific advisor. I'm not sure if I ever really was officially, you know, the scientific advisor, but Alex really wanted to meet up and talk about these ideas.

He wanted to talk about consciousness and about AI, and so we met up several times during the course of the filming, and there was very little that I contributed to the film at that point.

In a sense, perhaps I had already done my main bit by writing the book.

And I mean, there are a few little phrases that I corrected—tiny, tiny things—but otherwise I just thought, you know, great.

Yeah, it's really, really very good, and there are some lines in the film that I just thought were so spot-on.

Anything you remember?

Like, what line?

Yeah, well, a favorite one is where—so initially Caleb is told that he's there to be the human component in a Turing test, and of course it isn't the Turing test. And Caleb says that pretty quickly: "Well look, in the real Turing test, the judge doesn't see whether it's a human or a machine—but of course I can see."

And then Nathan says, “Oh yeah, well, we're way past that. The whole point here is to show you that she's a robot and see if you still feel she has consciousness.”

And I thought that was so spot-on. An excellent philosophical point, made in this one little line in the middle of a psychological thriller.

It's pretty, pretty cool.

So I call that the Garland Test.

So I found that really astute.

I was wondering which texts influenced him most when he was writing it.

And in particular, where you found that your work had seeds planted throughout the movie.

Yeah, where do you think it was the most influential?

Well, good question. You need to ask him.

So certainly—my book is very heavily influenced by Wittgenstein, and when it comes to these deep philosophical questions, Wittgenstein is very down-to-earth.

He's always saying, "Well, what do we mean by consciousness and intention and all these kinds of big, difficult words?"

And Wittgenstein is always taking a step back and saying, "Well, what is the role of these words in ordinary life?"

And the role of these words in ordinary life—a word like consciousness—is all to do with the actual behavior of the people we see in front of us.

And so, you know, in a sense, I judge others—well, I don't actually go around judging others as conscious.

That's a point that you make as well—you just naturally treat them as conscious.

And why do I naturally treat them as conscious? Because their behavior is such that they're just fellow creatures, and that's just what you do when you encounter a fellow creature.

You don't have to think carefully about it.

And so this is an important, continuing point that I bring out in the book very much.

And in a sense, that's very much what happens to Caleb.

So Caleb isn't sitting there making notes and concluding, "Therefore, she is conscious."

Yeah, but rather, through interacting with her, he just gradually comes to feel that she is conscious, and starts to treat her as conscious.

And there's something very Wittgensteinian about that.

And I think that probably comes from—I'd like to think that comes from my book, too.

Well, it seems very cinematic that it would unfold over the course of a week—the Turing test—but I had never seen a Turing test framed that way.

Yeah, I mean, I guess it's not—you know, it's a Garland test. But did you coach him in any way on the natural steps that someone would take as the test escalates?

No, not at all.

No, this is all Alex Garland's stuff.

I had—I had no input on that side.

It's also—the whole script was already 95 percent done, you know, when I first saw it.

Okay, so there are a few differences in the final film from what you see in the script that I saw, and indeed in the published script.

So that was actually, that was a question from Twitter.

This is seemingly a pseudonym on Twitter—someone called Trench Shovel.

They asked, “Were there any parts of the script that were changed or left out because they weren't technically feasible or realistic?”

Ah, well, so there was a bit that was in the script that was left out in the final filming, which I think is very significant.

Okay—so, spoilers ahead, for the few people who haven't seen it, I assume, if you're listening to this.

So, right at the end of the film, where Ava is climbing into the helicopter to escape from the compound, she's got to fly off, and we see her have a few words with the helicopter pilot.

And, you know, I wonder what she says, actually.

That's interesting. Just fly me away from here.

Anyway, then off the helicopter goes.

Now in the written script, there's a direction there which says something along the lines of: we see waveforms, and we see the facial recognition vectors fluttering across the screen, and we see this, that, and the other—and it's utterly alien.

This is how Ava sees the world.

It's utterly alien. Now, the very first version of the film that I saw was from long before the VFX had been properly done and everything.

So it was a first crude cut, and they had put a little bit of this kind of visual effect into that scene.

And then I think they decided it didn't really work terribly well at that point, so they cut it out.

So in the version that we see, you don't actually see that; you just see her speaking to the helicopter pilot, and she climbs into the helicopter.

But it's a very significant direction, because—well, I think one of the great strengths of the film is that it leaves so many unanswered questions.

You're left thinking, "Is she really conscious? Is she really capable of suffering? Is she just a kind of machine that's gone horribly wrong, or is she a person who's understandably had to commit this act of violence in order to save herself?" Which of these is it?

And you never really quite know.

And although I think people lean more towards the "Oh, she's conscious in a straightforward kind of way" reading, that version of the ending points to the fact that there's a real ambiguity.

Because if that had been shown, you might be leaning more the other way—you might be thinking, “Gosh, you know, this is a very alien creature indeed.”

And she still might be genuinely conscious and genuinely capable of suffering, but it would really throw open the question: how alien is she?

And to me, that would also—so just so I understand it, it was to be a VFX overlay on the actual image, right?

Well, I mean, the script doesn't specify exactly how it's to be done.

It just says something like: we see facial recognition vectors fluttering—I can't remember the exact words—but obviously the idea was to give an impression of what things looked like and sounded like for Ava, which of course is, in a sense, impossible to convey.

But you'd have to—and I think that's why they thought: how would we even do this?

And well, I didn't know if they were also trying to avoid some kind of—I guess it's not really a fourth-wall thing, but trying to avoid the situation where the author, the auteur, Alex, is telling you, "We're in a simulation; what you're seeing is the mind of some artificial intelligence."

Yes, well, I think it was meant to be shown from her point of view.

So right—so that wouldn't have been the interpretation if they'd got it right, I would imagine.

So maybe—I don't know exactly why they decided not to put it in, but the fact is that the direction is there in the script.

By the way, that's in the published script, so naturally I'm not giving anything away—the published version of the script has this little direction in it.

Yeah, I rewatched it last night, and I remembered the ending is like, it is so vague.

Yeah, okay, what happens?

And yeah, I do remember, because I quite like that ambiguity.

Yeah, you know, where you just don't really know: is she conscious at all? Is she conscious just like we are? Is she conscious in a kind of weird, alien way?

You never really know, and this is a deep philosophical question.

And there's also a moment right at the end where she's coming down the stairs, having escaped, basically—she's going down the stairs from the top floor of Nathan's compound, and she smiles.

She does—she kind of looks back and surveys it all.

Yes, yeah, and she smiles.

And I remember saying to Alex, after I'd seen the first version, "I don't think you should have that."

I said, "You shouldn't have that smile there, because it's too human."

You know, and he—he thought it was important to have the smile there.

So I think Alex would say that—and I apologize to Alex if he's listening and I'm putting words in his mouth.

I think he would say that people can of course have their own interpretations, but he would probably lean towards the interpretation that she is conscious in the way that we are—and the evidence for that is: well, why would anybody smile to themselves, privately, if they weren't conscious just like we are?

Hmm, and what else in those conversations—you know, watching edits of the movie?

What else did you guys work through?

Well, so there's the Easter Egg.

Yeah, sure, a good one.

Yes!

So the first time I saw any kind of clip of Ex Machina, Alex sent me an email and said, "Do you want to come in and see a bit of the film now?"

Because it's in the can, as these film people say—though there are no cans anymore for the film to go in—so come and see it in the cutting room.

So I went along and he showed me some scenes, and at one point, he stopped the machine and said, "This is the moment where Caleb is reprogramming the security system in order to release all the locks, to try and get out," and Alex froze the frame there, right?

Now, you see these computer screens that Caleb is typing into, and he said, "You see this window here? This window is full of junk code at the moment. You can be sure there are going to be some geeky types out there who, the moment this thing comes out on DVD, are going to freeze that frame and say: what does this code do?"

And so he said, "Let's give them an Easter egg. Let's give them a little gift, yeah?"

So he said, basically, "That window is yours; put something in there—some kind of hidden message," and he said, "Maybe make it an allusion to your book."

So I thought, this is very cool—isn't it the best product placement ever?

I probably sold one extra copy thanks to it.

So I went home that very evening, and I made the mistake of buying a bottle of sake, and I was drinking this sake.

And I got down to coding something up in Python.

I was having a good laugh about what I was going to do.

So I thought, “Okay, it's got to be vaguely kind of to do with security.”

So I wrote this little Sieve of Eratosthenes, a classic way of computing primes.

And instead of getting it off Wikipedia or something, I sat there and coded it up myself—after four glasses of sake, I coded this thing up.

It basically computes a big array of prime numbers, and then there's this thing that indexes into the prime numbers and adds some random-looking other numbers to them; those give you ASCII characters, and then it prints out what those ASCII characters actually spell.

Okay, so when you look at this code on the screen, it's just gobbledygook—something to do with prime numbers.

But if you run it, it prints out "ISBN =" and the ISBN of my book, Embodiment and the Inner Life.

So I was very, very pleased with this.

And I handed it over to them, and they put it into the film.

But I have to say, Alex was wrong: it wasn't when the DVD came out. The film only had to be on BitTorrent for 24 hours—long before the DVD came out—before there were pages about this thing on the internet.

So there was a whole Reddit thread, and there's a GitHub repository with my piece of code, and the Reddit thread includes a whole lot of criticism of my coding style—it's not PEP 8 compliant—and I think that's really funny.

And it's true.

But what I really regret is that I put the wrong terminating condition on the loop. You can terminate the Sieve of Eratosthenes at the square root of N.

You don't have to go all the way to N over 2, but for some reason—I wasn't paying attention after four glasses of sake—it terminates after N over 2. It's inefficient.

Yeah, well, maybe that's a bug in her code, you know—or maybe it's not a bug at all.

It's not actually a bug; it does meet the specification. But fair enough, it's not efficient.
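For the curious, here is a reconstruction of the trick he describes (my sketch of the scheme, not the actual code that appears on screen in the film; the offsets below are chosen so that the output is the ISBN of Embodiment and the Inner Life):

```python
import math

def sieve(n):
    """Sieve of Eratosthenes. Marking multiples can stop at sqrt(n) --
    the terminating condition he says he should have used instead of n/2."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

primes = sieve(100)  # more than enough primes for a 20-character message

# "Random-looking other numbers": character i of the hidden message is
# chr(primes[i] + OFFSETS[i]). On screen this reads as gobbledygook.
OFFSETS = [71, 80, 61, 71, 21, 48, 15, 38, 32, 27,
           17, 12, 16, 14, 3, -3, -5, -8, -14, -14]

print("".join(chr(p + d) for p, d in zip(primes, OFFSETS)))
# -> ISBN = 9780199226559
```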

Fair enough, we should ask some of these questions from Twitter!

Yeah, I know they were very excited to ask you questions.

We already asked one, so—this next one is from Patrick Atwater.

Let's get to his question.

This is, uh—okay, so Craig asked how much closer we are to the sort of general Hollywood-style AI now than we were in the 50s.

I think what he's alluding to is the flying-car version of the future—the crazy futuristic vision of AI from the 50s versus the AI they're portraying in the movies now.

Well, I can tell you that we're precisely 60 years closer than we were in the 50s, but I don't think that's the kind of answer that will satisfy him.

Well, of course, you have to remember that in Ex Machina, as in all films, the way AI is portrayed—really, a lot of it is to do with making a good film and telling a good story.

And in particular, people love stories where the AI is some kind of enemy or nemesis and so on.

Actually, Garry Kasparov, who we just heard speak, made a very interesting point to me about this.

He pointed out—and I think he's right—that there's been a kind of change from very positive, utopian views in science fiction, where we're going to get to the stars, to more dystopian views of things, like The Terminator and so on.

But anyway, it certainly makes for a good story if your AI is bad, and it also makes for a good story if your AI is very human-like.

Whereas in reality, you know, as AI gets more and more sophisticated and closer and closer to human-level intelligence, it's not necessarily going to be human-like.

So it's not necessarily going to be embodied in robotic form, or if it is embodied in robotic form, it might not be in humanoid form.

So in a sense, self-driving cars are a kind of perfect example of a robot that isn't human-like at all.

Yeah, so I think that things, you know, will be a bit different from the way they seem, the way Hollywood has portrayed them.

Yeah, of course, if you go back to the 50s—it's very interesting to look at retro science fiction.

I love retro science fiction.

Look at something like Forbidden Planet: Robby in Forbidden Planet is this metal hunk of a thing, completely impractical, and you think, how would it get around at all, and how would it do anything with those claw hands it's got?

So clearly we've changed a lot in our view of the kinds of bodies we think we might be able to make.

And I think it's also quite difficult because there's not really any clear benchmarking happening right now. If it were just energy and compute going into this, then the race would be—I mean, it wouldn't be over, but it would be very obvious who's winning and what's going on.

Whereas there seem to be clear breakthroughs that still have to happen.

Yeah, that's certainly my view.

So if we're thinking about now the question of when might we get to human-level AI, yeah, artificial general intelligence, then I think we really don't know.

Certainly some people draw graphs that extrapolate computing power—how fast the world's fastest supercomputers are.

And depending on how you calculate it, we're pretty close to human brain-scale computing already in the world's fastest supercomputers, and we will get there within the next couple of years.

But that doesn't mean to say we know how to build human-level intelligence—that's an altogether different thing.

And also there's controversy about how you make that calculation as well.

I mean, what do you count? Do you count a neuron?

How do you count the computational power of one neuron, or of one synapse?

And it may be that some of the complexity in the synapses is functionally irrelevant—it's chemically important, but functionally irrelevant to the computation.

So we really have a lot of open questions there.

But even if we allow a conservative estimate and assume that we're going to have computing power equivalent to that of the human brain by, say, 2022 or 2023, we would still need to understand exactly how to use all of that computing power to realize intelligence.
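To make the counting problem concrete, here is one common back-of-envelope estimate; every figure in it is a rough, contested assumption of exactly the kind he is questioning, not an established fact:

```python
# Crude brain-compute estimate -- all numbers are order-of-magnitude guesses.
neurons = 8.6e10            # ~86 billion neurons
synapses_per_neuron = 1e4   # rough average
mean_firing_rate_hz = 10    # very rough
ops_per_synaptic_event = 1  # treat each synaptic event as one "operation"

ops_per_second = (neurons * synapses_per_neuron
                  * mean_firing_rate_hz * ops_per_synaptic_event)
print(f"~{ops_per_second:.0e} synaptic ops/s")  # ~9e15, around 10 petaops

# If sub-synaptic chemistry turns out to be computationally relevant,
# ops_per_synaptic_event grows and the target moves by orders of magnitude.
```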

So I think there are probably an unknown number of conceptual breakthroughs between here and there.

Yeah, I mean, specific AI is absolutely happening, as opposed to this general AI that he's asking about.

Yeah, yeah, exactly.

So yes, clearly there's lots of specialist artificial intelligence where we're creating really good things, like image recognition and image understanding, and speech.

So speech recognition has more or less been cracked—the process of turning the raw waveform into text.

That's been cracked.

But real understanding of the words—that's a whole other story.

And while today's personal assistants can be quite cool, and they're going to get better and better, they're still a long way from displaying any genuine understanding of the words being used.

I think that will happen, you know, in due course, but we're not quite there yet.

Yeah—fortunately, or unfortunately, because that underlies one of the other questions that I wanted to ask.

So this is from Mecca Floss on Twitter. The question is: "Excellent movie, but why are Asimov's laws forgotten? That would be the absolute first thing they'd ask."

So just for people who don't know what that is, there are three laws of robotics, right?

So I wrote these down.

So: a robot may not injure a human being or, through inaction, allow a human being to come to harm. That's the first one.

Two: A robot must obey orders given to it by human beings except where such orders would conflict with the first law.

And the third law: a robot must protect its own existence as long as such protection does not conflict with the first or second law.

And so their point is basically: why is the first law broken in Ex Machina?

Yeah, but of course, Asimov's laws are themselves a product of science fiction.

Yeah, they're not real laws that are out there—Asimov wrote those laws down in order to make for great science fiction stories.

And all of Asimov's stories center on the ambiguities and difficulties of interpreting those laws, or realizing them in actual machines, and often on the moral dilemmas, as it were, that the robot faces in trying to uphold them.

So even if we did suppose that we wanted to somehow put something like those laws into a robot, it would be immensely difficult.

So I should take a step back and say why it’s irrelevant to robotics today.

Of course, let me qualify that: there are people who want to build autonomous weapons and all kinds of things like that.

And you might say to yourself, "Well, I would very much like it if somebody paid attention to something a bit like Asimov's laws and said: you shouldn't build a robot that is capable of killing people."

But that would be a principle that the designers and engineers were exercising, not one that the robot itself was exercising.

So that's the sense in which it's not relevant today: we don't know how to make an AI today that is even capable of comprehending those laws.

So that's kind of the first point.

Okay, but then think about the future. Of course, in Ex Machina—well, it would make for a very different story if Asimov's laws were put into Ava.

But let's suppose that it was a world where we were minded to put Asimov's laws into Ava.

Well, maybe Ava might reason that she is human.

What is the difference between herself and a human? And maybe she would reason that she shouldn't allow herself to come to harm, and therefore she was justified in what she was doing.

Who knows? I mean, it's just a story, right?

Yeah, I think we have to remember that it's just a story.

And it's actually very important; I think science fiction is really good at making us think about the issues.

But at the same time, we always have to remember that it's just stories—that there's a difference between fantasy and reality.

And I think it's also kind of covered in the movie when Nathan and Caleb are debating.

I think Nathan criticizes Caleb for going with his gut reaction, saying that if he were to think through every logical possibility of every action, he would never do anything, right? Which is kind of directly against all these laws.

But yeah, Ava would never do anything if it could possibly harm someone down the road—you know, even burning fossil fuel by being in the helicopter.

Well, indeed—we all have to confront those sorts of dilemmas all the time.

And I mean, indeed, you know, moral philosophers have got plenty of examples of these kinds of dilemmas.

And they make it obvious that no simple, single rule is enough by itself—trolley problems, for instance.

You know the one: the trolley is heading down the track and there are points, and for some unknown reason somebody is tied across the tracks on one fork, and on the other fork, three people are tied across the track.

And the points are currently set such that the trolley is going to go over the three people and kill them, and you are faced with the possibility of changing the points so that the trolley goes down the first track and kills only one person.

So what do you do?

And, you know, philosophers can spend entire conferences debating what the answer is and thinking of variations. That little thought experiment—Philippa Foot's thought experiment—is a distillation of much more complex moral dilemmas that exist in the real world.

Absolutely.

So before we go, I do want to get your thoughts on some broader things. Obviously you're here at DeepMind, and you're at Imperial as well, twenty percent of the time.

Yeah!

Can you talk a little about the things you're excited about for the future, as far as they relate to what you're working on?

Yeah, well, so I've recently got very interested in deep reinforcement learning.

So deep reinforcement learning is one of those things that DeepMind has made famous, really.

They published this paper back in 2013, and the Nature version in 2015, about a system that could learn to play these retro Atari games completely from scratch.

All the system sees is the screen—just the pixels on the screen; it's got no idea what objects are present in the game. It gets just these raw pixels, and it sees the score, and it has to learn by trial and error how to get a good score.

And they managed to produce this system, which is capable of learning a huge number of these Atari games completely from scratch, getting superhuman-level performance in some cases, human-level performance in others—and in some cases, on some games, it wasn't too good.
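The trial-and-error learning underneath DQN is reinforcement learning's Q-learning update. A minimal tabular sketch on a toy task (my illustration, not DeepMind's code; DQN replaces the lookup table with a deep network over raw pixels):

```python
import random

# Toy environment: a 5-cell corridor; reward only at the right-hand end.
N_STATES, ACTIONS = 5, (-1, +1)

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what you know, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a_: Q[(s, a_)]))
        s2, r, done = step(s, a)
        # Q-learning: nudge toward reward plus discounted best future value.
        best_next = max(Q[(s2, a_)] for a_ in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy action in every non-terminal state is +1 (right).
print({s: max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in range(N_STATES - 1)})
```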

I think it opened up a whole new field, and to my mind, DQN is, in a sense, one of the very first general intelligences, because it learns completely from scratch.

You can throw a whole variety of problems at it, and it doesn't always do that well but in many cases, it does pretty well.

So to answer your question: I've got very interested in this field of deep reinforcement learning. Before I joined DeepMind, I first started playing with their DQN system when they made the source code public.

And I pretty quickly realized that it's got quite a lot of shortcomings, as today's deep reinforcement learning systems all have—it is very, very slow at learning, for a start.

When you watch it learning, you think, “Actually, this thing is really stupid.”

Because it might get to superhuman performance eventually, but my goodness, it takes a long time to get there.

Yeah, or even Pong or something like that—it takes an awfully long time to do it, whereas a human very quickly works out some general principles: what the objects are, what the rules are. You work it out very quickly.

And so it made me think about my ancient past in classical artificial intelligence, symbolic AI, and it made me realize that there were various ideas from symbolic AI that could be rehabilitated and put into deep reinforcement learning systems in a more modern guise.

And so that's the kind of thing that I'm most interested in right now.

Very cool!

Yeah, that was actually one of my favorite questions from the Kasparov talk today.

Someone who works on Go asked exactly that—how humans can so quickly compute what is and isn't relevant, so that they can play the game—I guess it was chess—considering 50 moves rather than 100 moves.

Yeah, yeah, and it's very much that framing.

Yeah, yeah, that was Thore—Thore Graepel. He was one of the people on the AlphaGo team, and yeah, that's a very deep question he was asking, I think.

Yeah, it's fascinating.

Cool! So if someone wants to learn more about you or more about the field in general, what would you recommend?

If they want to learn more about me—though I can't think why they would—the invitation is to google my name and find my website.

If they want to learn more about the field in general, well, we're in the very fortunate position of having an awful lot of material out there on the internet these days—all kinds of lectures and TED talks and TEDx talks and so on.

And if people want to know a bit more technical detail, there are some excellent tutorials about deep learning and so on out there that people can find.

There are lots of MOOCs—massive open online courses—so there's a huge amount of material out there.

Do you have a budding career in technical advising? Is there an Ex Machina 2—?

Ah, so people often ask me about Ex Machina 2—which of course is none of my business.

Yeah, but whenever I've heard Alex Garland asked about that, he always says he's got no intention of producing an Ex Machina 2; it was a one-off.

As for scientific advising, yes, I have been involved in a few other projects.

There was a theater project I was involved in, which I enjoyed, with Nick Payne at the Donmar Warehouse here—a play by Nick Payne called Elegy.

It's about an elderly couple where one of them has a dementia-like disease, and it's also set sort of ten minutes into the future.

One of them has a dementia-like disease, but techniques have been developed whereby these diseases can be cured—except the cost you have to pay is that you lose a lot of your memories.

And so the play centers on the difficulty for the partner of knowing that her partner's memories of their first meetings, and of their love, are going to vanish.

So it's about that—it's more of a neuroscience kind of thing. And I've also been involved with an artistic collective called Random International, who do some amazingly cool stuff.

So I highly recommend them. They're famous for this thing called Rain Room.

Okay.

Hmm?

And that was at MoMA in New York?

Yes, that's right, yeah, exactly.

Yeah, it was indeed in New York.

So the idea there—all their art uses technology in various interesting ways, often exploring how we interact with technology. In Rain Room, the idea is that it's a room with sprinklers; you walk around in this room and it's raining everywhere, but there's some clever technology that senses where you are, and—

And—

And you never get wet, right?

I should finish, yes.

So there's some clever technology that senses where you are and turns off the sprinklers immediately above your head, so you walk around in this room and you actually never get wet.

So that's one of their things. They also worked on this amazing sculpture called Fifteen Points.

And this is based on point light displays.

So a point-light display is one of those displays where on the screen you've just got, say, 15 dots, and these 15 dots move around and you suddenly see that it's a person, because the 15 dots are at the joints—the elbows, the neck, the head, the torso, the knees, and so on.

And you see these things moving around and you instantly interpret them as a person in motion.

But you can even tell whether the person is running or walking or digging—often even whether it's a man or a woman—just from these 15 points moving.

So they constructed this beautiful sculpture made of rods with little lights on the ends, and motors.

And it's very much a piece of machinery—a robotic, mechanical thing.

And when you see it stationary, it's just this weird kind of contraption, but then it starts moving, with all the lights on the ends, and suddenly you see a person walking towards you. I thought that was a wonderful example of how we see someone there when there's no one there. And for me, that was very interesting, because it made me think about how often we do that with machines: we think there's someone at home when there isn't.

So a lot of their art is about those kinds of questions.

That's so cool!

I think that's a perfect place to end it, and we'll link to all their work as well.

Okay, okay.

Alright, thanks!

Sorry, yeah.

Sure, thank you!
