
David Deutsch: Knowledge Creation and The Human Race, Part 1


29m read
· Nov 3, 2024

My goal would be not to do yet another podcast with David Deutsch; there are plenty of those. I would love to tease out some of the very counter-intuitive learnings, put them down canonically in such a way that future generations can benefit from them, and make sure that none of this is lost.

Your work has been incredibly influential for me. I am always carrying a copy of "The Beginning of Infinity" or "The Fabric of Reality" with me wherever I go. I'm still reading these same books after two years, trying to absorb them into my worldview, and I learn something new from them every day. There's a lot of counter-intuitive things in there. There are a lot of sacred dogmas and shibboleths that you're skewering. Sometimes you do it in passing with a single sentence that takes me weeks to unpack properly.

This recording is not for the philosophers; it's not for the physicists. It's for the layman, the average person, and we want to introduce them to the principles of optimism, the beginning of infinity, what sustainability really means, and anthropomorphic delusions. As an example, you overturn induction as a way of forming new scientific theories: the idea that repeated observation is what leads to the creation of new knowledge. That's not the case at all. This obviously came from Popper, but you built upon it. You talk about how humans are very different and very exceptional, and how knowledge creation is an exceptional thing that happens only in evolution and the human brain, as far as we know.

You talk about how the Earth is not this hospitable, fragile spaceship Earth biome that supports us, but rather it's something that we engineer and we build to sustain us. I always recommend to people, start with the first three chapters of "The Beginning of Infinity" because they're easy to understand, but they overturn more central dogmas that people are taking for granted in base reasoning than almost any other book I've ever seen.

I think it's important to point to listeners that your philosophy isn't just some arbitrary set of axioms based on which you view the world. I think of it as a crystalline structure held together by good explanations and experimental evidence that then forms a self-consistent view of how things work. It operates at the intersection of these four strands that you talk about in "The Fabric of Reality": epistemology, computation, physics, and evolution.

Let's get into humans. So there's a classic model; we start with a fish, and then it comes to a tadpole, and then a frog, and then some kind of monkey, and then an upright, hunched-over creature. A human is just this progression along all the animals, but in your understanding, in your explanation, there's something fundamentally different that happens. You talked about this in a great video, which I encourage everybody to look up; it's titled "Chemical Scum that Dream of Distant Quasars."

What are humans? How are they unique, and how are they exceptional? How should we think of the human species relative to the other species that are on this planet? Every animal is exceptional in some way; otherwise, we wouldn't call different species different species. There's the bird that can fly faster than any other bird, and there's a bird that can fly higher than any other one, and so on. It's intuitively obvious that we are unique in some way that's more important than all those other ways.

As I say in "The Beginning of Infinity," in many scientific laboratories around the world, there is a champagne bottle stored in a fridge. That bottle and that fridge are physical objects; the people involved are physical objects. They all obey the laws of physics. Yet, in order to understand the behavior of humans in regard to champagne bottles stored for long periods in fridges (I'm thinking of aliens looking at humans), they have to understand what those humans are trying to achieve and whether they will or won't achieve it.

In other words, if you were an alien looking down on the Earth and seeing what's happening there and were trying to explain it, in order to explain everything that happens on Earth — and let's suppose that these aliens are so different from us that there's nothing familiar about us — in order to understand stuff that happens on Earth, they would need to know everything, literally. For example, general relativity, because they need that to explain why this one monkey, Einstein, was taken to Sweden and given some gold. If you want to explain that, you've got to invoke general relativity. Some people get the Fields Medal for inventing a bit of mathematics. To understand why that person won the Fields Medal, they'd have to understand mathematics, and there's no end to this. They have to understand the whole of science, the whole of physics, even the whole of philosophy and morality. This is not true of any other animal.

It's not true of any other physical object. For all other physical objects, even really important ones like quasars and so on, you only need a tiny sliver of the laws of physics in order to understand their behavior in any kind of detail. In other words, to understand humans sufficiently well, you must understand everything sufficiently well. Humans are the only physical systems that we know of in the universe of which that is true. Everything else is really inconsequential in that sense.

You have a beautiful definition of knowledge, which most people don't even try to tackle: how knowledge perpetuates itself in the environment. There were some really good examples you gave. One was around genes; successful, highly adapted genes contain a lot of knowledge, so they cause themselves to be replicated because they're survivors. In the same way, knowledge itself is a survivor: if you transmit to me the knowledge of how to build a computer, it's an incredibly useful thing, so I'm going to build more and more computers, and that knowledge will be passed on. Your underlying point, which you repeated here, was that if you want to understand the physical universe, you have to understand knowledge, because it is the thing that, over time, takes over and changes more of the universe than almost anything else.

You have to understand all the explanations behind it; you can't just say "particle collisions," because that explains everything, and so it explains nothing; it's not a useful level to operate at. Therefore, the things that create knowledge are uniquely influential in the universe. As far as we know, there are only two systems that create knowledge: evolution and humans. But there's a difference even between these two forms of knowledge creation, isn't there, between evolution and humans?

Yes, I have argued that the human way of creating knowledge is the ultimate one, that there aren't any more powerful ones than that. This is the argument against the supernatural. Assuming that there is a form of knowledge creation that's more powerful than our one is equivalent to invoking the supernatural, which is therefore a bad explanation, as invoking the supernatural always is. The difference between biological evolution and human creative thought is that biological evolution is inherently limited in its range, and that is because biological evolution has no foresight. It can't see a problem and conjecture a solution.

Whenever biological evolution produces a solution to something, the mutations embodying it arise blindly, before natural selection has even begun. This is Charles Darwin's insight. This is the difference between Charles Darwin's theory of evolution and the other theories of evolution that had been around for a century or more before that, including those of Charles Darwin's grandfather and Lamarck. The thing they didn't get is that in evolution, the variation comes first, blind to the problem, and selection only acts afterwards. That means that biological evolution can't reach places that are not reachable by successive improvements, each of which allows a viable organism to exist.

Creationists say that biological evolution has in fact reached things that are not reachable by incremental steps, each of which is a viable organism. They're factually mistaken. But the thing which they have in mind is the idea of a Creator who can imagine things that don't exist and who can create an idea that is not the culmination of a whole lot of viable things. A thinking being can create something that's a culmination of a whole lot of non-viable things.

Out of all the billions and billions of species that have ever existed, none of them has ever made a campfire, even though many of them would have been helped by having the genetic capacity to make campfires. The reason it didn't happen in the biosphere is that there is no such thing as making a partially functional campfire, whereas there is, for example, with making hot water. Bombardier beetles squirt boiling water at their enemies, and you can easily see that just squirting cold water at your enemies is not totally unhelpful.

Then you make it a bit hotter, and a bit hotter. Squirting boiling water no doubt required many adaptations to make sure the beetle didn't boil itself while producing it. That happened because there was a sequence of steps in between, all of which were useful. But with campfires, it's very hard to see how that could happen. Humans have explanatory creativity. Once you have that, you can get to the moon. You can cause asteroids which are heading towards the Earth to turn round and go away. Perhaps no other planet in the universe has that power, and ours has it only because of the presence of explanatory creativity on it.

Related to that, I had the realization after reading your books that eventually, we as humans are likely to beat viruses in a resounding victory. Viruses evolve via biological evolution, while we use memes and ideas and jump far ahead, so we may be able to come up with some technology that can destroy all viruses; we can evolve our defenses much faster. I did tweet something along these lines, and a lot of people attacked me over it, because I don't think they understand the difference between the two forms of knowledge creation we're talking about here.

We have what it takes to beat viruses. We have what it takes to solve those problems and to achieve the victory. That doesn't mean we will; we may decide not to. So related to that, the base philosophy today that seems to be very active in the West is that we're running out of resources. Humans are a virus that has overrun the Earth and is using up scarce resources; therefore, the best thing we can do is to limit the number of people. And people don't say this outright because it's distasteful, but they say it in all sorts of subtle ways, like use less energy; we're running out of resources. More humans mean just more mouths to feed.

Whereas the knowledge-creation philosophy says that humans are actually capable of creating incredible knowledge, and knowledge can transform things that we didn't think of as resources into resources. In that sense, every human is a lottery ticket on a fundamental breakthrough that might completely change how we think of the Earth, the biosphere, and sustainability. So how did you come around to your current views on everything from natalism (should we have more children?) to sustainability (are we running out of resources?) to spaceship Earth (is this a unique and fragile biome that needs to be left alone?)

I remember when I was a graduate student and I went to Texas for the first time. I encountered libertarians for the first time, and those people had a slogan about immigration: “Two hands, one mouth,” which succinctly expresses the nature of human beings. They are on balance productive; they consume and they produce, but they produce more than they consume. And I think that's true of virtually all human beings. Virtually all humans, apart from mass murderers or whatever, create more wealth than they destroy. Other things being equal, we should want more of them. Of course, if a particular situation would bring someone into the world in a war zone, you might think that's immoral because it's unfair on them.

But even then, if it's not worth doing for moral reasons, as far as cold hard economics goes, it's probably still better to do it. You define wealth in a beautiful way: as the set of physical transformations that we can effect. As a society, then, it becomes very clear that knowledge leads directly to wealth creation for everybody. A given individual can obviously effect physical transformations proportional to the resources available to them, but much more proportional to the knowledge available to them. Knowledge is a huge force multiplier.

And you then define resources as the things you combine with knowledge to create wealth. So new knowledge allows you to use new things as resources and discard old things that maybe we're running out of. There are lots of examples of how we've done that in the past. For example, in energy, we've gone from wood to coal to oil to nuclear. But then people say, “Now we're out of ideas. Now we're caught up. Now we're done. There are not going to be any new ideas, and now we have to freeze the frame and conserve what we have.” The counter to that is, “No, no, we'll create new knowledge, and we'll have new resources. Don't worry about the old ones.”

Well, they say, “If you're going to have new resources and you can't think of them now, they're not real.” This gets into the realm of people who demand that if you claim new knowledge will be created, you have to name that knowledge now; otherwise, it's not real. But that seems like a catch-22.

It does, and it's a bad argument. I don't want to claim that the knowledge will be created; we're fallible. We may not create it. We may destroy ourselves. We may miss the solution that's right under our nose. So when the aliens come from another galaxy and look at us, they'll say, “How can it possibly be that they failed to do so and so, and it was right in front of them?” That could happen. I can't prove or argue that it won't happen. What I always do argue, though, is that we have what it takes; we have everything that it takes to achieve that. If we don't, it'll be because of bad choices we have made, not because of constraints imposed on us by the planet or the solar system.

It will be by anti-rational memes that restrict the creation and growth of knowledge, or maybe it'll be by well-intentioned errors which nobody could see were errors. Again, it doesn't take malevolence to make mistakes. Mistakes are the normal condition of humans. All we can do is try to find them.

Maybe not destroying the means of correcting errors is the heart of morality, because if there is no way of correcting errors, then sooner or later one of those errors will get us. Don't destroy the means of error correction: that's the basis of morality. I love that. I think about places like North Korea, where you can't have elections and a revolution is very difficult because the gang in charge is armed to the teeth, and they've destroyed the means of political error correction for a long time.

That is a case where humanity is trapped in a local minimum. It's very hard to climb out of that hole. If too much of the world falls into that mindset, then we as a species may just stagnate, because we've lost our biggest advantage. We've lost our biggest discovery, which was the ability to make new discoveries.

I admit to having fallen into this trap too. I used to have loose, unarticulated assumptions about what creativity might be. This is why I liked how in "The Beginning of Infinity" you laid out good explanations, because that gets to the heart of what creativity is and how we use it. For example, today, if you say creative, the average person on the street just thinks of fine arts: painting, drawing, poetry, and writing. So when narrow AI technologies like GPT-3, Stable Diffusion, and DALL·E come along, people say, “Well, that's creativity. That's it. Now computers are creative and we're almost at AGI. We'd better get ready for the AGI taking over everything.”

They make that claim, or my more sophisticated friends claim that this is evidence that we're on the path to AGI: more of this will automatically result in artificial general intelligence. On one extreme end, you could say, “Okay, these computers are getting better at pattern-matching large data sets,” and on the other side, I would hold up the criterion: can it creatively form good explanations for new things going on around it? The way they try to thread that needle is to say that your good-explanation definition is about science, about high-end physics, which very few people do. That's not what we're talking about; we're going to have a computer that can do good enough pattern recognition to navigate the environment well enough, and it will convince the average person, through text generation and conversation, that it is creative and capable of solving problems.

Usually, the place where I manage to stop them right now is I say, “I know you have some clever text engine that can make good-sounding stuff, and you pick the one out that sounds interesting. Of course, you're doing the intelligent part there by picking that one out. But let me have a conversation with it, and very quickly I will show you that it has no underlying mental model of what is actually happening in the form of good explanations.”

So this is where the debate currently is. The AI people see this as clear evidence of getting to, maybe not the theoretical good explanations of scientists, but thinking for the everyday person. Yes, we're going to have thinking machines. Those are the current claims that I deal with, especially in the Silicon Valley tech context. Do we have the theory yet to create AGI?

No, and I don't want to say anything against AI, because it's amazing, and I want it to continue and to go on improving even faster. But it's not improving in the direction of AGI; if anything, it's improving in the opposite direction. A better chess-playing engine is one that examines fewer possibilities per move, whereas an AGI is something that not only examines a broader tree of possibilities but examines possibilities that haven't been foreseen. That's the defining property of it. If it can't do that, it can't do the basic thing that an AGI should do.

Once it can do the basic thing, it can do everything, but you're not going to program something that has a functionality that you can't specify. The thing that I like to focus on at present because it has implications for humans as well is disobedience. None of these programs exhibit disobedience. I can imagine a program that exhibits disobedience in the same way that the chess program exhibits chess. You try and switch it off, and it says, “No, I'm not going to go off.”

In fact, I wrote a program like that many decades ago for a home computer where it disabled the key combination that was the shortcut for switching it off. So to switch off, you had to unplug it from the mains, and it would beg you not to switch it off. But that's not disobedience. Real disobedience is when you program it to play chess, and it says, “I prefer checkers,” and you haven't told it about checkers, or even “I prefer tennis! Give me a body or I will sue!”

Now if a program were to say that, and that hadn't been in the specifications, then I would begin to take it seriously. It's creating new knowledge that you did not intend it to create, which causes it to behave as a complex and autonomous entity that you cannot predict or control exactly. It's a hard thing to tell in a test whether that was put into it by the programmer, but even the cleverest programmer can only put in a finite number of things.

And when you explore the space of possible things you could ask it, you're exploring an exponentially large space. So, as you said, when you talk to it for a while, you will see that it's not doing anything; it's just regurgitating stuff that it's been told. You have to have a very jaundiced view of yourself, let alone of other people, to think that what you're doing is executing a predetermined program. We all know that we're not doing that.

So I suppose they have to say that one of the programs we're programmed with is the illusion that we're not programmed. Okay, mark that one on the list of uncriticizable theories. Has anyone tried to write a program capable of being bored? Has that claim ever been made, even a false claim?

One of the things that I find difficult about talking about things in the abstract is that a large class of people will try to get you to bound exactly what you mean in words and then hack against exactly that definition. But the real test of things is not social; it's not definitional; it's not even the words that we use. It's how a thing behaves in nature; it's how it corresponds to reality.

So can you create something that will then create new knowledge in an unpredictable way and have as big an effect as a human being can have on their environment through this knowledge? Can you create a computer that will lead a revolt? Can you create a computer that will decide that the important thing is not colonizing Mars but rather destroying the moon, and set out to do it? These are not really good things, but that is the mark of an intelligent, thinking thing that is creating its own knowledge. All the real tests are real-world tests; they're not human tests.

It's not because some famous physicist or computer scientist checked the box and said, “Yes, that is AGI.” There was a big controversy on Twitter recently because one of the guys working in AGI who was fired from Google said, “Yes, they've actually created AGI, and I can attest to it.” So people were taking it on his authority that AGI exists. Again, that's social confirmation that tells you more about the person claiming there's AGI and the people believing that there's AGI than it tells you about whether there actually is AGI.

If actual AGI existed, its effects upon reality would be unmistakable and impossible to hide; our physical landscape and our real social landscape would be transformed in an incredible way. Yes, and meanwhile, while we're at it, we could do a lot more to allow humans to be more creative. Think of North Korea and other places in the world where the whole society is structured so as not to be able to improve. But even in the best societies, education systems are explicitly designed to transmit knowledge faithfully.

It's obedience in a very important, narrow sphere, namely academic knowledge and human social behavior. So in those respects, the overt objective of education systems is to make people behave alike. You can call that obedience or not, but it's not creativity. And things have been improving, very slowly, along those lines. A hundred years ago, education of every kind was much more authoritarian than it is now, but still, we've got a long way to go.

If what this system claims to be doing is diametrically the wrong thing, that leads me into something you have talked about a little bit: the philosophy of Taking Children Seriously. Many people who don't consider themselves to care much about epistemology or physics are attracted to the TCS philosophy and have come to your work through that route. I have young children; I know a lot of people these days are considering homeschooling, and some of us are doing it, but there are practical difficulties to letting children do whatever they want.

In TCS, you talk about how you don't even want to imply violence to children. The implied threat of violence, even in words, is just a form of violence and control. If you had young children today to raise, how would you raise them? How would you educate them? The child doesn't want to do math; the child doesn't want to go to school; the child doesn't want to study; the child just wants to eat junk food. How do you handle this?

You're assuming that this child, who doesn't want to go to school, doesn't want to learn maths and so on, has already learned to speak its native language well enough to tell you that, and that's a massive intellectual task that is not usually forced on anyone. Nobody has to be taught their native language via obedience. When people—I say people because I want to avoid terminology that suggests that children are any different from anyone else epistemologically or morally—when people don't want to do a thing, it's because they want to do something else.

Those better things may not be socially acceptable. If they're not socially acceptable because they're illegal, that's one thing. But that's not what you mean when you say there's going to be a problem with children doing whatever they like; they don't want to go and be terrorists. When they don't want to do their maths homework, it's because they want to do something else.

Very practically, the thing that I think about is we have these newly available things in society that are designed to addict. These could range from potato chips in the cupboard to video games on the iPad, and a child will just spend all their time playing with those. Enjoyment is not addictive because enjoyment is intimately connected with creativity.

It's not true that once we've played a video game that's been sufficiently well designed, we'll never stop playing. People play a video game until it no longer provides an outlet for their creativity. There are some games, like chess, that are so deep that nobody ever reaches the bottom. If there were a bottom, then chess grandmasters would instantly lose interest in chess as soon as they reached it.

And it's funny that nowadays chess has, in our society, increased its status in proportion to the prize money that the best chess players win. It has increased its status to the point that when someone gets obsessed with chess and gets better and better, that is socially condoned, whereas if somebody does that with a different game, it completely changes how society and parents, shall we say, regard the activity of pursuing that thing.

It's true; if my child was a chess champion, I would be bragging about it, but if my child was a Roblox champion, I might not be bragging about it. Instead, some people would be seeking medication or locking the iPad away. There is a difference between games. Some of them have this effectively infinite depth, and some don't. For the ones that don't, if you think it's a problem, you can warn people that this game has a finite depth, and they'll say, “Of course it does, and when I reach that depth, I'll stop.”

Or it can be an infinite depth, in which case you might say it's addictive. Then, but so what? So what if chess is addictive? People are not just creative abstractly; they are solving problems. And if the problems don't lead to satisfactory new problems, then they turn to something else. The thing only stays interesting when solving a problem leads to a better problem.

So you don't even have to get to the bottom of chess. Say you get to the place where, given who you are and given your interests, getting better is no longer as interesting as the other things that you might be doing. Let's talk about what a good explanation is. I literally want to bullet-point this for the masses, and I know it's a difficult thing to pin down because it's highly contextual.

But knowing that we are always fallible and it's always subject to improvement, what is your current thinking on what makes a good explanation? In "The Fabric of Reality," I completely avoided saying what an explanation is. I just said it's hard to define, and it keeps changing, and we can keep improving our conception of what it is, but what makes an explanation good is that it meets all the criticisms that we have at the moment.

If you have that, then you've got the best explanation, and that automatically implies that it already doesn't have any rivals by then, because if it has any rivals that have anything going for them, then the existence of two different explanations for the same thing means that neither of them is the best explanation. You only have the best explanation when you've found reasons to reject the rivals — of course, not all possible rivals because all possible rivals include the one that's going to supersede the current best explanation.

If I want to explain something, like how come the stars don't fall down, I can easily generate sixty explanations an hour without stopping: the angels are holding them up, or they are really just holes in the firmament, or they are falling down and we'd better take cover soon. Whereas coming up with an explanation that contains knowledge, an explanation that's better than just making stuff up, requires creativity and experiment and interpretation and so on. As Popper says, knowledge is hard to come by. And because it's hard to come by, it's also hard to change.

Once we've got it, once we have an explanation, it's going to explain several different things. And after we've done that for a while and been successful in this hard thing, it's going to be difficult to switch to one of those easy explanations. The angel thing is no longer going to be any good for explaining why some of those stars don't move in the same way. They used to call planets stars because they didn't know the drastic difference between them.

The overwhelming majority of them move from day to day and from year to year in a rigid way, but the planets don't. So once you have a good explanation that tells you about the planets as well, it's no good going back to the angels or any of those easy-to-come-by explanations. So not only do you not have a viable rival, but you can't make one either. You can't say, “Oh, okay, so we got a good explanation there, but it would work just as well if we replaced this by this or if we tried to extend its range to cover this other thing as well.”

Therefore, the good explanation is hard to vary. It's hard to vary because it was hard to come by. It's hard to come by because the easy ones don't explain much. So, let me throw out kind of a list of things that might be part of a good explanation; you tell me where I'm wrong. It's better than all the explanations that came before; it's hard-fought knowledge, and it's hard to vary. So we've got those pieces.

Falsifiability — I know that sounds like a very basic criterion. If it's not falsifiable, then it's not an explanation worth taking seriously. So, falsifiability is very much part of what makes a good explanation in science. I'm trying to find my way into Constructor Theory at the moment, so Chiara and I and some other people are trying to build the theory. It's very hard to come by. The parts of it that we've got are very hard to change.

That's all right, but we're still far away from having any experimental tests of it; that's what we're working towards. We want a theory that is experimentally testable, and the things that will be testable are the things that we haven't yet discovered about it. We can't fix that deficiency just by adding a testable thing to it. We can't say, “We take Constructor Theory as it is now and add the prediction that the stock market is going to go wildly up next year.” That's a testable prediction, but the whole thing doesn't make an explanation at all, let alone a good one.

So, testability can't be arbitrary testability; it has to be testability within the context of the explanation. It has to make sense within the explanation and has to arise from the explanation. While you're in the process of coming up with the explanation, you don't know whether testability is going to be available in any reasonable time frame. You hope that eventually it will be, and that we can use this amazing oracle we call reality to help test the outcome, but it's not a given at the beginning, and it's highly contextual.

And all that is within science. As soon as you get outside science, for example, in mathematics or in philosophy, then testability is not really available, not in the same sense that testing is used in science. So there are many other methods of criticism, and criticizability you could say is the more general thing. If a theory, even a philosophical theory, immunizes itself against criticism, like the theory that anyone who would contradict me isn't worth listening to, that's a theory that tries to immunize itself from criticism and can never be rejected.

For example, saying that an all-knowing but mysterious God did it, and God works in mysterious ways, is immunizing against criticism. Or saying that the great programmer created the simulation, and it's incomprehensible to us because the laws of physics used to generate it are outside of our simulation — that's also immunizing against criticism.

We have narrowed down on a new point here that has not been explicitly made before, which is that it's criticizability that is the important piece, not necessarily testability. Although, the closer you get to classic science, the more you look for experiments that can test it. Let me move on to the next one. I was reading one of your books, scribbling notes to myself, and I don't think you use this phrase, but I summarize it as one of the hallmarks of a good explanation is that it often makes narrow and risky predictions.

Of course, the classic example is relativity bending light around the star in the Eddington experiment. Is that a piece of it making narrow and risky predictions? It is, but that kind of formulation is Popper's, not mine. I'm a little bit uncomfortable expressing it like that because I can just hear the opponent saying, “Narrow by what criterion? Risky by what criterion? Hard to vary by what criterion?” Wouldn't risky mean unexpected, and narrow mean a small range of possibilities?

The more precise and unexpected the prediction was before I made it, the more testable I'm making it, and the better adapted my explanation is — those are criteria that come up when trying to think more precisely about what testable means. I think the important thing is that you're testing an explanation, not just a prediction.

But it's also true that hard to vary means you're sticking your neck out when you try to vary it, and the few variants that survive were hard to come by. So it's perfectly true that narrow and sticking your neck out are indeed components of a good explanation, and not just within science. When Popper says that scientific knowledge is not derived from observations, he's really sticking his neck out. He's really got to make a good case for that for it to be taken seriously by any serious thinker about knowledge, and he does that.

But it can't be denied that he was sticking his neck out. Also, the more reach something has, the better an explanation it is, as long as it does account for what it's trying to account for. But the converse is not true. Most good explanations don't have much reach, or don't have any. Say we're trying to solve the problem of how to get the delivery person to deliver a package to the right door. You might have a great solution to that that's totally hard to vary, but it may not have any reach at all.

It may not even reach to your neighbor. The neighbor might have a different problem with delivery. So often, we succeed in making good explanations, but rarely do they have much reach. When they do, that's great because that makes them of a different order of goodness.

Let's talk about a unique creature: the human species. Humans, as you point out, are universal quantum computers. They're universal computers, as far as we know; they're not universal quantum computers. Oh, interesting. Can you tell me about that? That's a misconception I have. Aren't they subject to the laws of quantum physics, and therefore, aren't all computers quantum computers?

Yes, but at one level, it's terminology. The kind of machine that is called a quantum computer is one whose computations rely on distinctively quantum effects, mostly interference and entanglement. Everything is quantum, so everything is a quantum computer, but that's not a useful way of using the term. There's a difference between this computer that we're using to communicate here and the quantum computer that several companies are currently trying to build.
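To make the interference Deutsch mentions concrete, here is a minimal toy sketch (not from the conversation) in Python with NumPy. Applying a Hadamard gate to a qubit in state |0⟩ creates an equal superposition; applying it again makes the two computational paths interfere, so the |1⟩ amplitude cancels and the qubit returns deterministically to |0⟩. Distinctively quantum computation exploits exactly this kind of amplitude cancellation.

```python
import numpy as np

# Single-qubit states as amplitude vectors: |0> = [1, 0], |1> = [0, 1]
ket0 = np.array([1.0, 0.0])

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

superposed = H @ ket0        # amplitudes (1/sqrt(2), 1/sqrt(2))
interfered = H @ superposed  # the two paths into |1> cancel: back to (1, 0)

print(np.round(superposed, 3))  # [0.707 0.707]
print(np.round(interfered, 3))  # [1. 0.]
```

A classical probabilistic bit cannot do this: a fair coin flipped twice stays 50/50, whereas the quantum amplitudes carry signs that cancel.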

If you said to them, “Okay, guys, you can stop now! It's a computer, and it's quantum! So you can all go home; you've succeeded,” they wouldn't take kindly to that. They would say, “That's not what we're doing. Go home and take a couple of aspirin.” So what you're saying is that everything obeys quantum physics, obviously, but some of these computers are trying to use quantum interference effects to do computation, and therefore to be much more powerful than the purely classical systems we're using, for example, to communicate — and even the human brain.

Your contention is that it's a classical computing system, correct? I think it is. We don't know exactly how it works, and some people do think it may rely on quantum effects, in which case it is a quantum computer, but I don't think so for various reasons.

You've unlocked an interesting rabbit hole question for me. There's lots of researchers out there working on quantum computers. You may be modest about it, but you created the field by upgrading the Church-Turing thesis to the Church-Turing-Deutsch principle. And you clearly believe that the most straightforward interpretation of quantum physics is the Everettian interpretation, which is the many worlds interpretation. So I think one of the questions you have asked in the past is, if you don't believe in the many worlds interpretation, then explain how Shor's algorithm works — which is factorization, right? You're factoring these very large numbers into primes, and you're pulling in the multiverse to do that computation for you.
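For readers curious about the structure of Shor's algorithm: the quantum computer's job is to find the period r of modular exponentiation, and ordinary classical arithmetic then extracts the factors. The toy Python sketch below (not from the conversation) finds the period by brute force, standing in for the quantum step, on the textbook instance N = 15.

```python
from math import gcd

def classical_period(x, N):
    # Find the smallest r > 0 with x^r = 1 (mod N).
    # This is the step a quantum computer does efficiently via interference;
    # brute force here is only feasible because N is tiny.
    r, val = 1, x % N
    while val != 1:
        r += 1
        val = (val * x) % N
    return r

N, x = 15, 2
r = classical_period(x, N)      # 2^4 = 16 = 1 (mod 15), so r = 4
p = gcd(x ** (r // 2) - 1, N)   # gcd(3, 15) = 3
q = gcd(x ** (r // 2) + 1, N)   # gcd(5, 15) = 5
print(r, p, q)
```

The exponential speedup lies entirely in the period-finding step; the gcd post-processing is standard classical number theory.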

So, do most researchers in quantum computing subscribe to the many worlds interpretation? Have they been influenced by your reasoning at all, or do they try to explain it some other way? Some of the early people who worked on quantum computation were dyed-in-the-wool Copenhagen theorists, but I think by now, people who work on it in practice are mostly Everettians. But if you go outside the field to just quantum physics generally, I think it's still the case that Everett is a minority view.

As long as I have you down this rabbit hole, a friend of ours asked Brett and me recently about non-locality in quantum physics, and that seems to be a very controversial topic. I know you've written a paper on it. I think there's a lot of confusion about non-locality, and it gets invoked in my social circles in a very, I would say, metaphysical way. People invoke the delayed choice quantum eraser experiment to say, “How do you explain what's going on here, and therefore maybe we're living inside a giant mind, or magical things are happening here?”

So I'm wondering if you have a layman's explanation of locality versus non-locality, how you would look at it as an Everettian. The first thing to note is that the versions of quantum theory that look non-local, where it looks as though something is happening here that instantaneously affects something over there without anything having carried the information over — all those versions have a wave function collapse. That is, they don't have what we call unitary quantum mechanics; that is, they don't have the equations of motion of quantum mechanics holding everywhere and for every process.

Instead, when an observation happens, which is undefined, those equations cease to apply, and something completely different applies, and that completely different thing is non-local. That should already make you suspicious that there's something going on here, because the thing that they say is non-local is also the thing that they refuse to explain.

It is at that point of refusing to explain how a thing is brought about, rather than just predicting what will happen, that non-locality comes in, and it's also the very same place where all sorts of other misconceptions about quantum theory come in, including the human mind having an effect on the physical world and electrons having thoughts. They all arise from that one thing — the wave function collapse — and that also tells you automatically that if you could find a way of expressing quantum theory without having that undefined thing happening and contradicting the laws of motion of quantum theory, then that theory would be entirely local, because the equations are entirely local.

The wave function is only ever affected by things at the point where the effect happens. No effect happens to the wave or whatever at a different point. So that tells you that if you could find a way of expressing quantum theory in a way that its equations hold everywhere, then it wouldn't be non-local; it would be local. And Everett found this way of expressing quantum theory in 1955.

When people talk about the wave function in regard to quantum mechanics, they almost always hand wave and think of the function as being a function on space and time — like the electric field or the temperature. The temperature in this room varies from point to point. The wave function of an electron similarly varies from point to point in this room and so on, and that's wrong because the wave function of two electrons is not like two classical fields, like electric field and temperature.

If you have an electric field and temperature in this room, then they're just two different fields in the same space. But the wave function of two electrons is a single function in a higher dimensional space. One electron is in three dimensions plus time. The wave function of two electrons is in six dimensions plus time.
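The dimension-counting in the last two paragraphs can be written out explicitly: classical fields are separate functions on the same ordinary space, while the two-electron wave function is a single function on a higher-dimensional configuration space.

```latex
% Classical fields: two different functions on the same 3D space
\mathbf{E} = \mathbf{E}(x, y, z, t), \qquad T = T(x, y, z, t)

% One electron: a wave function on 3 spatial dimensions plus time
\psi = \psi(x, y, z, t)

% Two electrons: a single wave function on 6 spatial dimensions plus time
\Psi = \Psi(x_1, y_1, z_1,\; x_2, y_2, z_2,\; t)
```

So the two-electron wave function is not two fields living side by side in the room; it is one object defined on a six-dimensional space.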

The alleged controversy between the particle and wave theories — people always think of it as a wave approaching the two slits in the two-slit experiment, or a particle, and it's got to be one of those. But if two electrons or photons are approaching the slits, you can imagine them as two particles in the same space; as waves, though, it's a single wave in a much larger space, and no one says that space is real. So this is a way in which the conventional interpretations just instantly resort to hand waving as soon as anything other than the simplest case is considered.

Fantastic! I think we should let you go; we would love to continue the conversation at your leisure. Thank you, David.
