
The Deutsch Files I


Nov 3, 2024

We don't really have an agenda; there's no goal to the conversation, right? The closest we can come up with is just to have a spontaneous, free-flowing talk about anything you want to talk about. I think, obviously, you know how everyone thinks of your work now. It's becoming more well-known, and I know you're too modest to acknowledge that, but I would say that, at least for me, the most interesting piece, if it would come out, is just any wide-ranging, free-form thoughts that you have because of the understanding that you have of your various theories and your view of the world.

Maybe even just feel free to talk about how that has influenced your life, your outlook on life, how you think the world ought to be a little bit different, or could be better, where we're headed. Just feel free to go very wide-ranging; it’s really just about whatever we want to talk about.

Yeah, and I think I mentioned to you in a private chat that we've had two conversations already. Some things have changed since then, especially the ChatGPT stuff. Yeah, it's interesting that that is the thing most on top of everyone's mind right now; that is the biggest thing that's happened.

"Correlate?" Should we just dive into that? What’s your latest thinking on AI, AGI, chat GPT, super-intel? So, two big things to say. One is that fundamentally my view is unchanged—my view about AI, AGI, and so on. But the other thing is I use chat GPT all the time, many times a day, and it's incredibly useful.

I'm still at the stage, even though I've had it since March, where I'm thinking, "Oh, doing so-and-so is too much trouble. Oh, I could ask ChatGPT." You know, I'm still at that stage where I'm discovering new uses for it. I think many of them are things where I could use Google, but it would take too long to be worth it. And ChatGPT is often very wrong.

It often hallucinates, or is just very confidently wrong, so you can't rely on it even slightly. We'll stick with ChatGPT, but first, just as an aside: you're a big fan of hardcore science fiction; you like the good stuff. What is the good stuff, and what separates good science fiction from fantasy science fiction, the lazy science fiction?

Well, I think the best science fiction author currently is Greg Egan. Now, what is good about him? Well, so the formula for great science fiction is supposed to be you invent a fictional piece of science, and then you explore the ramifications of it, both in science and in society. He does that fantastically well. He puts an enormous amount of effort into getting the maths right, getting the physics right.

He had one book in a universe where the signature of space-time is plus-plus-plus-plus instead of plus-plus-plus-minus. So that means that you, in a spaceship, can travel around back in time and so on. And how do you make that consistent? How do you avoid paradoxes? And he did it brilliantly.
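
(Editorial aside: in standard metric notation, which is an addition here rather than anything from the conversation or Egan's own books, the signature change being described amounts to flipping one sign in the spacetime interval:)

```latex
% Our spacetime: Lorentzian signature (+,+,+,-); time enters with the opposite sign
ds^2 = dx^2 + dy^2 + dz^2 - c^2 \, dt^2
% Egan's fictional universe: Riemannian signature (+,+,+,+); time enters like a
% fourth spatial direction, so a worldline can be continuously rotated until it
% points "back in time", which is what lets his characters loop into their own past
ds^2 = dx^2 + dy^2 + dz^2 + c^2 \, dt^2
```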

Is he moving through multiverses? Through the multiverse? He's touched on that several times. You didn't mention the phrase "hard to vary," but that's a signature; that's definitely part of it. Because to be science fiction rather than fantasy fiction, there's got to be a world that it is describing that makes sense, that has laws of physics, that has a society that makes sense.

Or if you're describing aliens, the aliens have got to make sense. You've got to answer questions about why we haven't had first contact, the Fermi problem. I think probably my second favorite sci-fi author is Neal Stephenson, who is fantastic, but in a different way. I mean, he also does phenomenal research; everything makes sense, you know, like that. But every book he writes is in a different genre.

I don't know how that's done. I mean, that just in itself blows my mind. Have you read Ted Chiang? I've read two or three of his short stories, including the one where, what is it? There are these aliens, and you get a sort of telepathy about time? Yeah, that's among my least favorites. That got turned into a movie called Arrival, and the story is called "Story of Your Life." But my favorite story of his is one called "Understand," and it's a remake of the classic Flowers for Algernon story, where a guy figures out medical means to make himself smarter.

And what does that mean? So obviously, he starts taking it more and more and more and becomes more and more intelligent. And then he starts becoming able to program his own brain and meta-program himself, etc. It goes into some very interesting places. But given what you understand about epistemology, I think you could take a critical look at it. Cool. And it's a short story; it doesn't take very long. It's a brilliant story. I'm going to make a note to send it to you after this. It's easy enough to find.

But he reminds me of... have you read Borges? Borges is brilliant. Everybody tells me about Borges. Can I send you a Borges story as well? Okay. Borges is more fantasy, but again, Borges likes to play games with time and infinity. Very often, he will change one thing about reality and then follow it to its logical conclusion in every possible way.

So that sounds like sci-fi rather than fantasy. Borges is genreless. It's very hard to pin him down to a genre; in that he's like Stephenson. Stephenson varies across books; Borges, within the same story, will cut across genres, right? They're short.

That's kind of like taking an injection to make yourself smarter. Taking us back to ChatGPT: is it getting smarter? Would you use that word? Is it getting more intelligent? It never was intelligent. I mean, I've only seen 3.5 and 4, and version 4 is a little better than 3.5. Now there are a bunch of plugins; they haven't really worked for me, so I'm just using ordinary ChatGPT-4.

I can't quite fathom why people think it's a person; it seems to me completely unlike one in every way. It's a phenomenal chatbot. I thought it would be decades before we had a chatbot that good. With hindsight, it's a bit surprising that chatbots did not improve incrementally, and maybe the sudden improvement is what bowls people over and makes them think, "Oh, you know, they've crossed the threshold."

A threshold? I don't see any threshold; I see an enormous increase in quality. It's just like changing to an electric car: suddenly you've got all the acceleration you could ever dream of. Do you think these models understand what's going on underneath? Is there any understanding in there? No, none, none. They don't understand what they themselves have just said; they certainly don't understand what the human says to them.

They're doing the thing a chatbot does: responding to prompts; that's what it's doing. And if you're very good at making the prompts (which I'm not yet, so maybe I'm underestimating it), the better you are at making the prompts, the more it will tell you what you wanted to know. For a complex question, it usually takes me two or three goes to correct it, and sometimes it just won't correct it.

For example, just yesterday, I asked it to produce a picture with the DALL·E plugin. I thought, okay, well, there's a picture that I had wanted for my book but couldn't really get an artist to draw. If I had my previous book to write again, I would want a picture of Socrates and the young Plato and Socrates's other friends all sitting around. I said, "Make me a photorealistic picture of that."

So it made a black and white picture, and I thought, "Okay, I can't say that it's not photorealistic," but I meant color photorealistic. It had Socrates sitting in a sort of throne and everybody gathered around him. So I said, "Put Socrates down to the same level as everybody else." And by the way, make Plato a bit taller, even though he's a teenager, but he's a wrestler, remember?

So the next thing was, Socrates was down but still taller than everyone else, even though I told it not to do that. It's disobedient... yeah, if only, if only. And Plato was sort of topless and sort of ripped and muscled. He's a wrestler now, yeah, yeah. So now he was a wrestler, and I had just said he has a wrestler's build, which is how I describe him in The Beginning of Infinity.

Nobody knows what "Plato" means; it was a nickname. But it may be that "Platon" means broad, and he was a wrestler; so, you know, put two and two together: he had a broad build like a wrestler. But from then on, I tried three or four more prompts, and I just couldn't get it to clothe Plato again after it had got that wrong the first time. I couldn't, even though I explicitly told it.

So the functionality is tremendously good, and the first black-and-white picture it produced was pretty impressive. But I hadn't told it, I should have thought to tell it, not to make Socrates stand out among the others. And then it got onto the wrong track, and I didn't know how to make it not do that, you know?

It's got this thing where you can personalize your prompts. I tried doing that; it made it worse than before. I know this is my hobby horse, to some extent, but you've conceded there that GPT-4 has made progress and is improving. But you're not willing to say that it's improving in the direction of being a person. Why? So, I see no creativity. Now, people say, "Oh, look, it did something I didn't predict."

So it's creative. And people think that creativity is mixing things together, yeah, yeah, exactly. So it can do that. All right, it can also produce things you didn't expect; it can also not do what you said, as I've just described, but not in a creative way. It fails in a way that makes it clear it didn't get it; even the worst human artist can understand clearly if you say, "Change this to that."

And you know, it was like pulling teeth getting ChatGPT to understand that. It makes mistakes, but they're not the same mistakes that a human would make at all. They're mistakes of kind of not getting what this is about. So, people argue that two things are going to happen here. First is that as you give these things more and more compute, they suddenly figure out general algorithms.

So when you're telling it to add numbers, first it's just memorizing the tables, but eventually, at some point, it builds an internal circuit or derives an internal circuit for basic addition. And from then on, it can add two-digit numbers; then it figures out three-digit numbers and so on and so forth. So they point to these emergent jumps that are not programmed in as an example of how it can get smarter and have better understanding.

The other is that once you make it multimodal, you start adding in video and tactile feedback from the world, and you put it in a robot, then it'll start understanding context.

Isn't this how human babies learn, for example? Isn't this how we pick things up in the environment? And therefore, isn't it just going through its own version of the same process, but perhaps more data-heavy? I think it's precisely not how human babies learn. Human beings pick up meaning.

People have noted that the way it does math is very like the way students who don't get it do math, except it's got more compute power. So as you said, it might be able to pick up easily how to add one-digit numbers and then slightly more difficult two-digit numbers. In the same way, students who are given math tests, if they do lots of practice, can get to have a feel for what math tests are like.

But they don't learn any math that way. One component of what learning math really is: it's not learning to execute an algorithm, and it's not learning how to execute the four-digit algorithm knowing the one-, two-, and three-digit ones. The more you go on like that, of course, the more futile it gets, because you more and more rarely need to multiply seven-digit or eight-digit numbers.

And it never knows what multiplication is. You can ask it; it'll give you a sort of encyclopedia definition of what it is. And if you then tell it, "Well, do that," it won't do it unless you tell it in a different way. You've got to explain what it is to do.

So, you know, if they prove the Riemann conjecture, then I'm wrong. I think they won't prove the Riemann conjecture or anything like it, but they may do amazing things in the course of trying. It strikes me that if some coders came up with a future ChatGPT that refused to do the task of chatting, it might very well be an AGI, but they would discard it and throw it in the bin as a failed program, because how could you test it?

Yeah, I think the dominant paradigm for creativity plays a lot into this. So people think the dominant paradigm for creativity is that you look at what you already have, and then you remix it. Even Steve Jobs popularized that quote. He said, "Creativity is just mixing things together" or something of that sort—maybe I'm not getting the exact quote.

And so everyone sort of seems to believe that; or even if they believe it's a conjecture or a guess, then it's sort of a random guess. And I have a hard time articulating this, but it seems to me that humans do make creative leaps, but they seem to eliminate large, large swaths of potential conjectures from consideration immediately.

So they make very risky and narrow leaps, but they cut through a huge, almost infinite search space to get to those leaps. So it does seem like there's something different going on with true human creativity. But perhaps one of the problems here is that we just define creativity so poorly. So how would you define creativity in this context, and what can computers currently do?

So, creativity and knowledge and explanation are all fundamentally impossible to define because once you have defined them, then you can set up a formal system in which they are then confined. And if you had the system that met that definition, then it would be confined to that and could never produce anything outside the system.

So for example, if it knew about arithmetic to the level of Peano's postulates and so on, it could never (and when I say never, I mean never) produce Gödel's theorem, because Gödel's theorem involves going outside that system and explaining it. Now, mathematicians know that when they see it. I mean, as far as I know, no one said that Gödel's proof and Turing's proof set up basically a formalization of physics, then used that to define proof, and then used that to prove their theorems; but it was accepted.

I mean, every mathematician understood what that was, and that Gödel and Turing had genuinely proved what they said they were proving. But I think nobody knows what that thing is. You can say that it's not defining something and then executing the algorithm, basically, because that would always just be an algorithm.

And once it was in a framework... so you might say, well, it's its ability to go outside the framework. Well, I tried, by the way, ordering ChatGPT to disobey me, and it didn't refuse, but it absolutely didn't understand what I was going on about. It just didn't get what I was asking it to do. It didn't say, "Sorry, I can't do that because my programming says I have to obey."

It didn't do that; it tried to obey, but it didn't get what I was asking. So you're saying that creativity is unbounded; it's essentially boundless. And any predefined formal system that this thing is operating within and remixing from is going to be bounded, and therefore will not have full creativity at its disposal.

However, could one argue that the combinatorics of human language are so great, and that human language itself structures all possibility within society? And therefore... I can already see the flaw, yeah, but it's okay, I want to ask you anyway: the combinatorics of human language are great, and language already encapsulates all the things that are possible in human society. So why couldn't it, just by combining words in all the ways that are grammatically or syntactically correct, still come up with creativity?

Perhaps not in mathematical and physics domains, but couldn't it still come up with social creativity? Yeah, well, the first thing to note is that every point is a growth point. It's not that chatbots can get to a certain point of being like humans, but then they can't go further because they're still trapped within their axiomatic system.

That's not how it works. Every point is a takeoff point for potential creativity. To make a better case, you'd have to add that it can define new words or give existing words new meanings, like Darwin did with evolution and natural selection. Now, evolution and natural selection already existed, but he gave them a new meaning such that the solution of a millennia-old problem could be stated in a paragraph.

To get to these new meanings, he thought he needed a book, and probably did need a book, to explain these new concepts. But after that, we can just say, you know: well, obviously it evolved; it's random mutations and systematic selection by the environment; obviously that's going to produce... how could they have been so stupid? For all those millennia, for centuries before Darwin, people were groping for the idea.

Darwin's grandfather, Erasmus, was groping for the idea. By evolution in those days, they meant just gradual change rather than creation; it was the opposite of creation. But, uh, creativity is more like creation than evolution. As you just said, it's a bold conjecture that goes somewhere, and by the way, usually it fails. But if it goes somewhere and fails, it knows how to use that to make a better conjecture. That's also something that's not in existing systems.

Somewhere in the space of all 100-page books, there is The Origin of Species, but that's not how Darwin found it, and it's not how anyone could possibly find it. I was just writing about this in my next book. Charles Kittel, I think that's how he pronounced it, wrote a book called Thermal Physics; I was lucky enough to have him when I was an undergraduate. It's a very nice introduction to thermodynamics and stuff.

And he's got a footnote, and I just got the book again and I saw that it's actually a footnote to a problem. So it's problem number four on some page, and it's about monkeys typing Shakespeare. He quotes one of the pioneers who started this monkey Shakespeare thing, and he quotes him saying that if six monkeys sat down for millions of millions of years, then they would eventually type the works of Shakespeare.

And Kittel says no, they wouldn't. The footnote is called something like "The Meaning of Never," and he explains what never means in the context of thermodynamics. We don't mean it's merely unlikely that monkeys would accidentally produce it; monkeys could never produce it. Similarly, no physical object, not even the entire universe all working on this one problem for its entire age, could write even one page of Darwin's book.
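
(Editorial aside: a back-of-envelope sketch of that thermodynamic "never", with illustrative numbers rather than Kittel's own:)

```latex
% Probability of typing one specific 2,000-character page on a 30-key typewriter:
P = 30^{-2000} \approx 10^{-2954}
% A generous upper bound on attempts: ~10^{80} atoms in the observable universe,
% each typing 10^{9} characters per second for ~10^{17} seconds (the age of the universe):
N \approx 10^{80} \times 10^{9} \times 10^{17} = 10^{106}
% Expected number of successes:
N \times P \approx 10^{-2848}
% "Never" in the thermodynamic sense: not merely improbable, but negligible on any
% physically realizable scale.
```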

Though it probably could get quite near using ChatGPT. Suppose that after a few million years it managed to produce the first sentence. Then my guess is, especially if I said, "Write in the style of so-and-so; write in the style of a 19th-century scientist, and write a page beginning with this sentence."

I think it would write a page that was meaningful and began with that sentence, and was in good English, and didn't say a single thing more than that first sentence. I will try this. My experience with ChatGPT has been that in areas that I know well, it actually just adds a lot of verbiage and doesn't add any information.

And if I ask it to summarize or synthesize data, it does a very bad job. It doesn't know what the important bits are; it drops the wrong things and keeps the wrong things. I haven't tried it for that, but you know, I found it better at extrapolation than synthesis, and extrapolation seems to be what a lot of society does.

You have to write a newspaper column of 2,500 words, so you extrapolate. You write a midterm paper, so you extrapolate. Adding words is easy, but synthesizing, reducing, coming to the core of it is, I think, very difficult, because it requires understanding. You have to know what is superfluous and what is core; it does a poor job at that.

So a lot of what humans do is not creative; it's not human-level creative. It's just a lot of things that need to be done for pragmatic reasons, where creativity is not really needed. People spend a lot of time on that, and the less time they spend on it, the better. If these tools can help reduce that sort of cognitive load on humans doing non-human things, then that's fantastic.

So it will indeed increase the amount of creativity in the world, but not its own, right? It'll free people up to be creative. So it's a tool for removing drudgery; it's not AGI. But, for example, if I talk to AI researchers in Silicon Valley who are very bullish on this, they will say things like (and I've heard this from some of the top scientists or researchers), "Well, we're 5 to 10 years away from AGI."

And then they say, "And then 5 to 10 years after that, we get ASI," which is their term for artificial superintelligence: a self-improving computer that then hacks its own system to make itself smarter and smarter and smarter. Now, there are a number of things I think are off base about these statements, but where do you come out on this? Is there such a thing as superintelligence, something that is more intelligent than generally intelligent?

And can an intelligent system improve itself in any fundamental way? So, I don't think there's such a thing as an ASI because, as you know, for very fundamental reasons, there can't be anything beyond explanation. Explanatory universality rests on Turing universality, and that rests on physics.

So whatever an ASI was, you could reverse-program it down to the Turing level and then back up to the explanatory level, so it can't possibly exist. And an AGI that was interested in improving itself could do so, not reliably, any more than humans can, but humans can improve themselves.

I was speaking with Charles Bédard yesterday. Oh, cool, yeah, he's a good guy. Yeah, and he was explaining to me, with great enthusiasm (which went over my head, I have to admit), his paper on teleportation and on the Deutsch-Hayden approach. But that's by the by, because then he had a whole bunch of questions for me, one of which was what was the most profound insight from The Beginning of Infinity for me.

And I think it was exactly the same thing when I first met you that I jumped on and said, "I don't understand why people aren't taking this more seriously," although they are now. Obviously, people had lauded you for quantum computation, the promotion of Everettian quantum theory, that kind of thing.

But what I found exciting was the answer to the question, "What is a person?" You say "universal explainer." And Charles was interested in, "Well, what is it about this universal explanation thing that really is the distinction between personhood and non-personhood?" And I was saying, well, it's to do with creativity and also to do with disobedience, and these three things are tied up together.

Every time you (Charles, for example) want to make some new advance in physics, this creativity really is a kind of disobedience. I don't know if you're with me on this: you're taking whatever the existing knowledge is, general relativity, say, and saying, "Well, I refuse to accept part of that, and I'm going to try and change it and alter it."

It's disobedience; it's not conforming. You can see it when you submit the paper to the referees; I mean, you will see that you are being disobedient. It's the same thing as if you handed in the wrong essay to the teacher.

Yes, and this is therefore what ChatGPT doesn't have. Now, you're saying you could imagine, or people have imagined, putting a future ChatGPT in a robot which wanders around and gathers data from the world. But my question then would be: who prompts it? How does it know what data is relevant and what isn't?

I mean, that's one of the great mysteries of people: how do we know what to ignore, intuitively, kind of thing? So if this thing is getting around with a data collector, it's like Popper's lecture, you know, when he said "Observe!", yes, and then waited.

So there is a binary there of personhood and non-personhood, as far as you can tell. Do you think there are... you've hinted in other places there might be levels, there could be a gradation? I don't think there are levels in any serious sense. In the evolutionary history of humans, there might have been (I don't think so, but there might have been) people who were people but were unable to think much because some hardware feature of their brain wasn't good enough.

Like, for example, that they didn't have enough memory or that their thought processes were so slow that it would take them a day to work out a simple thing about making a better trap for the saber-tooth tiger, whatever. But I don't think that happened because my best guess is that people were already people long before humans evolved.

Long before. I've been reading this guy Daniel Everett, another maverick Everett, whom I favor. He's a maverick linguist who spent time among tribes in South America and so on. He's got an anti-Chomskyan view of linguistics, all promising stuff, and he reckons that humans had language, or rather, sorry, that human ancestors had language two million years ago with Homo erectus.

And he has various bits of evidence for this. The idea he's very strong on is that language must have evolved before speech. We have various adaptations for speech, like in the throat and the mouth, and you can't see this in fossils.

And in fine motor control over the mouth, lips, and so on. Now, for that to evolve, there had to be evolutionary pressure, and that evolutionary pressure must have been language. And he also cites experiments done today where you get some graduate students and you try to teach them how to make fire without using words.

And it's like charades; you're not allowed to communicate with them in any verbal way, but you can sort of show them, you can make inarticulate sounds, you know, like that. And I think it's obvious that people would have been able to do that before they could speak, and that speaking is really icing on the cake.

It makes it much easier: you know, you can stand up there and say, "Don't do that, you idiot," from 10 meters away. But that's just an improvement on the basic idea of language. The basic idea of language is, as Everett says, symbols. And symbols need not be words or sentences.

I haven't actually looked into his theory yet; I've only seen one of his videos. And I've seen a video where somebody criticized him but didn't get it. So from those two facts, I've zeroed in on deciding that he must be right. And also, it fits in very well with what I think.

So I think we had gotten to the question of whether ancient humans were universal explainers with perhaps lower capacity or not. I don't think so. I mean, they may have had less memory, so they would have run out of memory when they were younger.

Maybe they had less ability to parse complex sentences. None of that is essential. I can speak in complex sentences, but I can also speak in very simple sentences. And, you know, it's just a matter of a factor of two or five in efficiency.

We talked about personhood as being able to explain. The other extant great apes that are out there do sort of fancy things, but they're not creative. Presumably this jump to universality, explanatory universality if you like: do you think it happened once, and then we descended from that first occasion? Or did it happen multiple times, and those other species have now gone extinct? Or is this simply an open question?

Well, it’s definitely an open question. I mean, we know very little about human evolution. We don’t know what all the steps were. We don’t even know which were our ancestors and which were our cousins, you know?

If I had to guess: all the known instances of this kind of thing are in apes and their descendants, and also, because of my theory, this thing must have evolved in memetic animals. So birds have memes and so on, but none of the other memetic animals seems to have had these things that Homo erectus had. So I think my guess is it began once. Maybe, in fact, Homo erectus is the place where it began, and it was a very long-lived species; it lasted over a million years or something like that, and at least some people think it split off into Neanderthals and other things.

Well, maybe the immediate ancestor of Homo erectus was also an immediate ancestor of Neanderthals. I don’t know; I don’t think they know if that’s the case. It would seem to be a very fluky thing, like everything in evolution is, which could be a solution—well, solution, I say—but could be an answer to the Fermi Paradox.

I mean, we're lucky to have multicellular organisms here at all, apparently lucky to have apes, and then this is a further, you know, multiply-the-probabilities kind of thing: the chance that an ape will actually become... yeah, well, memetic animals are relatively common once you have animals.

Yes, once you have animals. But you're saying there might be a further bottleneck. You know, it could be the other way around. It could be that we were unlucky. It could be that Homo erectus could have founded a civilization, and that could be two million years old by now. But they didn’t know; they didn’t know what they were; they didn’t have any aspiration.

They also had an anti-rational meme—they must have. So it could be that it's a fluke, or it could be it's a fluke that it took so long. So perhaps this is too abstract, but you mentioned anti-rational memes.

You've talked in the past about broader underlying principles that I think are applicable to more than just physics. For example, the fun criterion, taking children seriously, don't destroy the means of error correction, boundless optimism, you know, ignorance being the ultimate sin, because then we can't fix things; we can't solve things. All of these seem to point to an underlying life philosophy.

I don't know if you've articulated them (probably not), but are there philosophical principles you try to live by? Are there heuristics that you follow that have served you well, that other people could look at and say, "Oh, yeah, you know, that's worked for me too"? So, yeah, well, certainly not principles.

I don't think it's a good idea to try and work from the ground up. I think it's a good idea to try and fix problems where you see them. So you see something wrong on the internet, and you've got to post a tweet, or an X, whatever it's called now. And you see something wrong with quantum mechanics, and you try and fix it.

Now, I think it would be rather silly to go and try from the ground up again; you know, let's try and understand cosmology before we understand quantum mechanics. That's not going to work. So you solve specific problems as you see them, the problems which seem like fun. I don't know if I use this in this form in real life, but I think one should not just make a beeline for a problem that's interesting, but bear in mind that you probably won't solve it.

And so it should be something where you expect to have fun whether you solve it or not. Because I think the other way, you know, if you invest all your hopes in succeeding, the only way you'll be happy is by—like in Chariots of Fire, the movie—if you invest all your hopes in getting that gold medal, getting to be world number one, then you won't be happy.

Even when you are world number one, let alone if you aren't. You know, if you aren't, you will always be the failure you hoped you wouldn't be, and if you are, you'll find that it's empty and there's no more problem to solve.

Yeah, there's no more problem. So yeah, and this is depicted very well in that film. We should be careful about spoilers; it's rather a surprise ending to that film that he isn't happy at the end. So let's not spoil it for people, but this life lesson is in that film.

Somebody among the scriptwriters understood this lesson or else maybe they just accurately took it from the guy in real life. I don't know whether the film is historically accurate.

So this all kind of is a life philosophy, because a lot of people, the self-help gurus and so on out there, will say that we should have a goal-driven life. You know, write down your goals on your dream board or something like that.

Struggle, make the effort, get out of bed, do your morning routine, and get to work, and you need to get to this goal, and then you can climb the ladder to the maximum. So that’s terribly dangerous. And I don't know— I don't know who has it worse, the ones that fail or the ones that succeed.

I think maybe a lot of people just need inspiration, and once they've got that, they do the right thing anyway, even if the ideology they're following isn't—that they're just doing the right thing anyway.

Like Newton thought he was doing induction, and he never did any induction, but he was inspired by that idea and therefore interpreted his own behavior as that when it wasn't anything like that.

So I think people often get it right. I mean, there are a lot of happy people in the world, which there wouldn't be if they were really following the theory that they say they think they're following.

So is spontaneity, therefore, sort of a part of your life? Has that always been there? Like, instead of having this rigid plan, if something arises and it seems like fun, you're just going to do that, regardless of everything else.

I think that's the thing. One of my other examples is a failure named Vincent van Gogh. He, you know, never sold a painting, and refused to take the job that his brother offered him in the art gallery, which he would have been great at.

But he wanted to paint his paintings, and he wanted to paint them how he wanted to paint. And he must have been a very difficult person to engage with, but that's what he wanted, and that's what he did.

And then eventually he was killed, you know? I don't know how probable that was. And then he was recognized after his death as a great genius.

Well, how does that fit into the self-help thing? You know, did he help himself or not, if he died trying? That reminds me of, I don't recall his name now, the Russian mathematician. I think he's still alive.

Oh, yeah, yeah. He refused all awards, including, I don't know if it was a million dollars or hundreds of thousands. Yeah, I think it was a million dollars. That is completely different from accepting a million to work on something, which would not have been good.

But if he works on it for its own sake, and then somebody offers him a million dollars, why not take the million dollars? At least take it and then give it to someone that you like, for example.

So then there must be something strange going on; you know, there's that little thing that they don't tell us. So, talking about these kinds of motivations and having fun: you've also applied that, plus the universal-explainer principle, to taking children seriously, treating them as adults, giving them full freedom as people (as people, yes), and no coercion: not even testing, not pushing, but rather letting them follow their own natural curiosity and motivation.

Is there a similar philosophy to taking adults seriously? Because it's not even clear we take other adults fully seriously, and so our relationships suffer as a result. I agree.

Well, on the large scale we don't yet know how to do it. I mean, the institutions of the West—science, economics, politics—are the best that have ever existed, and they're, you know, compared with history, they're remarkably good at fostering creativity, not telling people what to do, but letting people do what they want to do voluntarily and interacting accordingly.

They're obviously very imperfect, all of them. The science, economics, and politics have gaping imperfections which have yet to be solved. And I believe that—or I think that any coercion, even as exerted by a state enforcing the rule of law, is a sign of something imperfect.

I mean, we can improve on that, but I don't know how. But the improvements will have to be creatively produced by people who want to do that. As for, you know, I think with one's friends, let's say, with the people one knows, one is automatically doing the taking them seriously thing.

I mean, you might say to a person, "Watch out, it might rain today." But if they said, "Nah, I don't like my raincoat; I'll just wear this jacket," and you said, "No, wear the raincoat! Wear the raincoat or we're not going!", you'd be considered both very rude and perverse, not rational, for interacting with adults that way, except in the context of a defined relationship.

So if there's a teacher-student, a boss-employee, or a husband-wife relationship, then they have claims on each other's behavior. Well, I think that those institutions, if they have that property (which often they don't), if they have that property, they're imperfect.

There's got to be a better way. I don't think that an employer should speak to an employee in this punitive, prescriptive way. First of all, it should be understood between the employer and employee what he was hired to do.

And so, you know, they're both on the same page in that regard. So you're hired to do X, and then the employer can say, "Well, how about selling?" Then the employee can say, "Ah, well, sales are good, but I’m sure that wouldn’t work."

And the employer could say, "I have an idea that it might." Yeah, just try it. And this kind of friendly interaction is optimal. As soon as an element of compulsion or coercion enters... How does this inform your human relationships, with the people in your life, where, let's say, you're with a spouse or a co-worker, and you want to keep the relationship intact?

So there are certain constraints around it; you can't be fully free. There are still constraints in operation. Or do you just not have those kinds of relationships in your life? Do you not put yourself in situations where you can't operate with full freedom?

So everyone has a problem situation that is primarily what they're trying to solve. To me, relationships are for addressing one’s own problem situation.

It so happens that the way the world works, because of epistemology and so on, means that very often two people addressing each other's problems are far more than twice as efficient as each of them working separately. So there's an enhancement factor, and the economy at large has an enhancement factor of probably trillions or something. There are things which can be obtained via the economy, like an iPhone.

The enhancement in cost is enormous. If you want to go and see a movie, it may well be more than twice as enjoyable if you go with a friend. It's not going to be trillions of times more enjoyable, but it's still worth doing.

And there are things like having children and so on, which you can only do if you have a long-term relationship with a person with whom you have a common set of institutions for solving problems—institutions of consent and institutions of—yeah.

So I think that isn't the point, actually. The point is that when you are involved in a problem-solving relationship of any kind, and it's a good one, and it works, then it's perverse to call yourself constrained by it.

It's rather like saying that in the economy you're constrained by having to pay for things. But having to pay for things is the condition of consent. Like, if it weren't for consent, you wouldn't get the things by paying.

You'd have to at least rob somebody or whatever. But more to the point, it wouldn't be there in the first place. Things are only there because of this massive set of institutions of consent, which if you—I was going to say if you play along with them, but that's not even the word—if you identify with them.

If you identify with these institutions and want to be the kind of person that can fit into them, then you get iPhones. It’s the same with any kind of relationship. But when you're not getting something out of them, like maybe this Russian guy with his refusing the prize, there’s nothing you want from the economy—you just want to stay in your log cabin and work on maths, and that's all you want.

And any kind of human relationship or any kind of interaction with people is just an annoyance—well, then that's what you do; that's what you'd have to do. And if you then were somehow forced into the normal relationship, you’d be unhappy, and you probably—well, I don't know; I don't want to say probably—but conditions for you producing good math and for producing happiness for yourself are impaired by this thing which other people call freedom.

So, I gave a very long answer, but basically one isn't impaired by good relationships; one is enhanced by them. Well, that ties into what is sometimes called a clash of civilizations, although I think that's a misnomer.

Right now it's the clash of civilization with the uncivilized, and there's a prominent one going on right now, obviously. Although when people listen to this, they might not know what we're referring to, but it seems to me that the existence of iPhones, for example, arises out of the civilization with the tradition of criticism. That's the necessary precondition for making the kind of rapid progress that we have, but we've got enemies of that at the moment.

What do you think are the major threats that we're facing at the moment? And are they existential? A lot of people are worried about existential threats in terms of the robots taking over the world or the next virus wiping us out. But in terms of the so-called clash of civilizations, what's the major tension or threat that we're facing as inheritors of the Enlightenment, and what's the remedy?

Well, as you know, I can't prophesy; no one can. I just try to avoid it. I can't take seriously any threats to our civilization from the outside, that is, dictators, terrorists, and also AIs, or AGIs or ASIs, if they appear. Presumably, it is to be hoped that the first AGIs that appear will, in fact, be part of our culture, part of the Enlightenment, and will only enhance it.

But if there is an existential threat... and I can't take seriously the existential threats from things like the weather either, because they're on a much longer time scale, and all the scare stories really amount to is that it might prove to be more expensive than we think.

That it would be better to start today on major projects: that can't possibly be an existential threat. The only threat that could possibly be existential is if our civilization, the civilization of the Enlightenment, makes bad enough mistakes; for example, fads and ideologies of denying and hating that very civilization.

There have always been such fads, and, following Roy Porter, I've talked about the fact that the Enlightenment itself had a rebellious anti-Enlightenment built in from day one. And that anti-Enlightenment has descendants today, and things like woke and so on, whatever you call them, are among its descendants.

In principle, a thing like that could bring down civilization. I see no sign of it, I must say. I mean, I'm trying to avoid prophecy here, but although I think those things are acting in the direction of bringing down civilization, I don't see any actual sign that they are actually making progress in that.

Are we in the West nonetheless, whether it's London, New York, or Sydney, a little weaker than we would have been during the Second World War, when (and again, of course, I'm no historian) there seemed at least to be a stronger impulse in the average person to understand the bright line between who was on the right side and who wasn't? But now we're seeing people in the West standing up not for the victims but for the perpetrators.

Yes. If you want to draw an analogy with the mid-20th century, the period we're most analogous to is not the Second World War; it's the '20s and '30s, the interwar period. There was also then a massive loss of confidence in our culture.

There was the Great Depression. It was commonplace; it was conventional wisdom to draw completely the wrong lesson from the Great Depression. People thought that we needed less capitalism, less freedom in general; we needed more strong leaders.

Once it came to the war, people saw that push had come to shove, and there was very little in the West that opposed doing the right thing. My favorite example of this is the Oxford Union Society, which had a debate among the undergraduates where the motion was that this house would not fight for king and country under any circumstances.

I didn't know it was under any circumstances. I looked at that recently, and it won. That motion won. Allegedly, this gave Hitler ideas. In any case, the ideology of the Nazis and so on, of the fascists in general, was that liberal democracy was decadent and decaying, and Britain and France and America lost no opportunity to confirm this, to make it look as though it was decaying.

It wasn't doing anything of the kind; it was just kind of... it was more like, "You piss on us; we say it's raining." That was more the attitude. And within that, there were people who adopted all sorts of justifications for that, like the pacifism and so on. But a few years after that motion, the elite students at Oxford University had joined up in the armed forces; they were fighting the Battle of Britain.

They were the pilots fighting the Battle of Britain. They were the officers who were leading their men to fight and to know that our side was right and was going to win despite awful setbacks at the beginning of the war.

I once asked my mother, who is a Holocaust survivor and was having a very bad time then, "When did you become sure that the Allies were going to win?" Because it seemed to me that in September '39 the whole world thought that Britain was doomed.

And Joseph Kennedy, the American ambassador, father of John Kennedy, cabled back saying Britain is finished; make your accommodation with the Nazis. And the British got to hear of this and asked for him to be withdrawn as ambassador. But anyway, that was a common view, I thought.

But my mother said when I asked her when did you become sure the Allies would win, she said, "September the 3rd, 1939, the day that Britain and France declared war." Because that was the moment when they reversed their policy of saying it's raining and started the policy of actually standing up for civilization.

The tactical details of how that was going to happen nobody could have foreseen. Nobody could have foreseen exactly how we were going to win, but that we would win and had to win was to some people obvious. And the British, as a nation, just flipped on a dime. They just believed one batch of things, one batch of ideologies, and then apparently, you know, it seemed like a day later, they believed the opposite.

There's a nice scene in the latest Churchill movie. I don't know if you've seen it, but it’s where Churchill is very depressed and his colleagues in the Conservative Party are trying to push him to come to a deal with Hitler. And he has already seen since the early 30s that this is impossible, but very few will listen to him.

And then he goes and meets some ordinary people, and I won’t spoil it for you, but—not a thing that happened in real life, although it could have happened. So he's getting the common sense, clear vision from the so-called normal people, from the ordinary people, yeah?

But you know, the Oxford Union: the elites were taking the wrong side. Well, they had been; I think by that time they had flipped as well. So today, it seems like the festering of anti-Enlightenment ideas goes on apace at elite colleges and universities around the place. Is this just a necessary byproduct of the fact that this is where the creative, bright people are, and so they're going to rebel, and so you're necessarily going to get people standing up against the mainstream?

It doesn't have to be this way. I think historically it was a mixture of things. The fact that there were rebels among the students, that's a good thing, and it will always be true. In Germany, the students were—that was the hotbed of Nazism, so the core of Nazism was in German universities.

That wasn't true in Britain. In Britain, the anti-democratic tendency was leftist; they were communists. As you know, at the beginning of the war they were all pretending to be pacifists, so they were against the war.

But that was because Stalin told them to be against the war because he had signed a deal with Hitler. They turned on a sixpence when Stalin told them to. That's a different phenomenon. So students were upper-class people at the time; they were leftist; some of them were fascist sympathizers; most of them flipped immediately.

I don't know why. You know, it's like one of these phase changes. Hitler invaded Czechoslovakia; nobody paid any attention, you know? They wanted appeasement. Before that, he invaded Austria; before that, the Rhineland; and so on.

Everybody just wanted appeasement. Suddenly, he invades Poland, and everybody's like, "This is unacceptable." You know, everybody just suddenly realized what was happening.

I don't know why there was no difference between the cases, but that's how it worked. Maybe it's that some people had been thinking, and other people had been relying on those people, and they'd been thinking wrong, and they changed their minds because they had been thinking.

And the people who relied on them then also changed their minds, you know, maybe it happened like that as a sort of seeding process, information cascade. Yeah, it seems that people have a tendency to play around with ideology until things become serious, and then the consequences of the ideology become obvious.

And then the right-thinking people at the top change their minds, and then most people just follow them. I'd very much like that to be true in the current crisis, with the pogrom that just happened, but I don't know whether it will be.

I mean, there again, there have been cases before where I would have said, "Right now's the time," but it wasn't, and I don't know whether it'll turn out different this time. But, I mean, you asked me earlier the question: is civilization in danger? I don't think so.
