Is Reality an Illusion? | Dr. Donald Hoffman | EP 387


53m read · Nov 7, 2024

Darwin and physics, high-energy theoretical physics, agree that spacetime is doomed. It's not fundamental reality, and the search is on, in the last ten years among physicists, to find structures entirely beyond spacetime, not curled up inside spacetime. We've mistaken a headset for the truth because it's easy: if all you've seen all your life is a headset, it's hard to imagine something outside of it. Now we can; we're free, using mathematics, to ask what kind of structures we could posit beyond spacetime.

Hello everyone watching and listening. Today I'm speaking with author and cognitive neuroscientist Dr. Donald Hoffman. We discuss Dr. Hoffman's research on what we know as reality, why spacetime itself is now considered by many a doomed framework of interpretation, and how consciousness might be best understood as a vast probability space within which we orient ourselves.

Hello Dr. Hoffman, it's very good to see you. I've been interested in your theory for a long time, partly because I'm quite attracted by the doctrine of pragmatism, which was really part of what I tried to discuss with Sam Harris many, many times. It seems that your work bears on that; it's of broad general interest, but it also bears on specific interests of mine, because I've always been curious about the relationship between Darwinian concepts of truth and, let's say, the concepts of truth put forward by the more Newtonian, objective materialists. They don't seem commensurate to me.

So, would you start by explaining your broad theory of perception? I know that'll take a while, but it's a tricky theory. So, do you want to lay it out for us to begin with?

Most Darwinian scholars would agree that evolution shapes sensory systems to guide adaptive behavior, that is, to keep organisms alive long enough to reproduce. But many also believe that, in addition, evolution shapes us to see reality as it is, at least those aspects of reality that we need for survival. That's a common view among my colleagues studying evolution by natural selection; they'll say, yes, seeing the truth will make you more fit in many cases.

So even though Darwin says it's—well, evolution shapes sensory systems just to keep you alive long enough to reproduce—many people think that seeing aspects of reality as it is will also make you more fit and make you more likely to reproduce. So I decided, with my graduate students a few years ago, to look into this. There are tools. Darwin's theory is now a mathematical theory. We have the tools of evolutionary game theory that John Maynard Smith and others invented in the 1970s, and so it's a wonderful theory.

So Darwin's ideas can now be tested with mathematical precision. I thought that maybe what we would find is that, you know, evolution tries to do things on the cheap; if you have to spend more calories, then you have to go out and kill something to get those calories, and so there are selection pressures to do things cheaply and quickly. I went into it thinking that maybe that would mean many sensory systems didn't see all of the truth, but I just wanted to check and see what would happen.

To my surprise, when we actually started studying this, principles emerged that made me realize that the chance that we see reality as it is, on Darwinian principles, is essentially zero. And that was a stunning result for me.

Why zero? Zero is a very low number.

That's right.

I can—it's a bit technical, but in evolutionary theory, in the evolutionary game presentation of it, you think of evolution as like a game, and in a game, you're competing with other players and trying to get points. Now, in the game of evolution, the way it's modeled is that there are these fitness payoff functions, and those are sort of the points that you can get for being in certain states and taking certain actions. And so these fitness payoffs are what guide selection; they guide the evolution.

We began to analyze those fitness payoffs. To be very concrete about a fitness payoff: suppose that you're a lion and you want to mate. Well, a steak won't be very useful for you for that process, right? You'll have very little fitness payoff for a steak if you're a lion looking to mate. If you're a lion that's looking to eat and you're hungry, then of course the steak will have high fitness payoffs for you. So a fitness payoff depends on the organism—a lion versus, say, a cow. A steak is of no fitness payoff for a cow for any purpose; it could be, yeah, quite the contrary. That's right.

So the fitness payoff depends on the organism, its state—I mean hungry versus sated, for example—and the action: feeding, fighting, fleeing, and mating, for example. So these fitness payoffs are functions of the world; they depend on the state of the world and its structure, and the organism, its state and its action. They're complicated functions.

In some sense, you could think that there's effectively one fitness payoff function—one big fitness payoff function which handles the world and all possible organisms in all possible states and actions. So there's one big fitness payoff function, but we can think about it as many fitness payoffs if we want to as well.

So the question is: suppose this fitness payoff function takes as its input the state of the world, right? That's the domain of the function, and the range of the function might be the fitness payoff values, say from zero to 100. Zero means you lose; 100 means you did as well as you could possibly do. So it's a function from the state of the world, across organisms and their states and actions, into this number—zero to 100, or zero to a thousand, whatever you want to use.

So the question then is: does this function preserve information about the structure of the world? This is the function that's guiding the evolution of our sensory systems. So, is the function what mathematicians call a homomorphism—a structure-preserving map? For example, the world might have an order relationship, like one is less than two is less than three, or a distance metric, or something like that.

Then to be a homomorphism would mean that if things were in a certain order in the world, the function would take them into that same order, or some homomorphism of that order, on the payoff values. So that's the technical question: what is the probability that a generically chosen payoff function will be a homomorphism of a metric, or a total order, or a partial order, or a topology, or a measurable structure?
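(As a sketch of the formal setup being described here, with the notation assumed rather than taken from Hoffman's papers, the claim can be written as follows.)

```latex
% A fitness payoff function maps world states, organisms, organism
% states, and actions to payoff values (the 0-to-100 range is the
% example used above):
%   f : W \times O \times S \times A \to \{0, 1, \dots, 100\}
% Fixing the organism, its state, and its action, f preserves a total
% order \le on the world states (is a homomorphism of that order) iff
%   w_1 \le w_2 \;\Longrightarrow\; f(w_1) \le f(w_2).
```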

Take any structure that you can imagine the world might have. You can ask: what is the probability that a generically chosen payoff function will preserve it? If it doesn't preserve it, there's no information in the payoff function to shape sensory systems to see that truth, to see that structure of the world.

So what's remarkable is that evolutionary theory is indifferent about the payoff functions; it doesn't say they have to be a certain shape. In other words, every fitness payoff function that you could imagine is on equal footing, in current evolutionary theory, with every other one. There's nothing in Darwin's theory that says these are the fitness payoff functions and this is their structure.

So what we had to do then is to say: okay, we have to look at all possible fitness payoff functions and ask what fraction of these payoff functions would preserve a total order, or a metric, or a measurable structure, or whatever it might be.

And here's the remarkable and, in retrospect, obvious thing: for a payoff function to preserve a structure like a metric or a total order, it must satisfy certain equations. So you write down the equations that the fitness payoff function must satisfy to be a homomorphism. Well, once you write down an equation, most payoff functions simply aren't going to satisfy it.

The equations are quite restrictive, and in fact, in the limit, as you look at a world that has an infinite number of states and payoff values that go from zero to infinity, the fraction of payoff functions that actually are homomorphisms goes to precisely zero.
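A quick way to see why that fraction collapses: sample random payoff functions from n ordered world states into payoff values 0 to 99 and check how many happen to be monotone, i.e. order-preserving. This is a minimal illustrative sketch under those assumptions, not the actual proof:

```python
import random

def is_order_preserving(payoff):
    # Monotone non-decreasing payoffs preserve the total order on world states.
    return all(payoff[i] <= payoff[i + 1] for i in range(len(payoff) - 1))

def fraction_homomorphic(n_states, n_values, samples=200_000):
    # Estimate the fraction of generically chosen payoff functions
    # (uniform over all maps from n_states ordered world states into
    # n_values payoff levels) that preserve the order of world states.
    hits = 0
    for _ in range(samples):
        payoff = [random.randrange(n_values) for _ in range(n_states)]
        hits += is_order_preserving(payoff)
    return hits / samples

# The fraction of order-preserving payoff functions plummets toward zero
# as the world gets richer (more distinguishable states):
for n in (2, 4, 8, 16):
    print(n, fraction_homomorphic(n, 100))
```

For two world states roughly half of all payoff functions preserve the order, but by sixteen states a sample of this size will typically find none at all, which is the pattern behind the "goes to zero" claim.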

Alright, so this is going to be a somewhat meandering question because it's a very complicated thing to get right. So people who think that the world is made out of self-evident facts underestimate the complexity of perception.

Here's how I'll make that case and you can tell me what you think. You could imagine you could ask an engineer a simple question: can you build a bridge? You might think the fact of the bridge will be a fact, and the answer to the question, which would be yes or no, will be a fact, and that's all self-evident—it’s sort of like the behaviorists assuming that the stimulus was self-evident; it's very much analogous to that.

But here's the problem: there's a whole set of assumptions built into that question that people don't even notice. So let me walk through some of the assumptions: it's like, well I can't build a bridge if you want it to last 50 million years. So I could build a bridge that would last a century or two centuries. I can't build a bridge for no money, with no labor, with materials that are just at hand.

So the thing you define as a bridge is already subject to all sorts of constraints. Now you and I mutually understand those constraints without even having to speak about them. So I'm also going to assume that if you say—if I ask you, can you build a bridge, and you say yes, you're also saying I'm willing to work with you, I'm willing to work honestly, I'm willing to hire the right number of people, I'm not going to screw you during the construction.

The bridge that we build—we both understand that human beings will be able to walk across it, as many as will fit on the bridge without the bridge falling down, and also cars—and that means it'll have to be about the same width as a car or a truck, or four lanes of cars or trucks, and it'll have to abide by all the building codes and so forth.

There are so many constraints in that question that it would take you an unlimited amount of time to list them all, and you don't because you're talking to an engineer, and he's a human being like you, enculturated like you, and so he understands the world like you do. So there's a hundred million things you don't have to talk about, but they're there, they're constraining the set of facts that's relevant to the issue, and they're constraining them seriously.

Okay, so now those constraints are nested in an even higher-order set of constraints, which are Darwinian, right? The axiomatic agreement that you and I come to as a consequence of our shared perceptions, our shared embodiment, and our shared enculturation is a consequence of a broader process which is essentially Darwinian.

Now that Darwinian set of constraints is instantiated in motivational systems in part, so we might say, well anything that you and I do together will have to be done while taking into account hunger and anger and fear and pain, the whole emotional potentiality of people, plus our fundamental motivational systems. The manner in which we lay out this particular task will have to satisfy all that; now that's also unspoken.

Now, when you talk about evolutionary game theory and pragmatic constraints, let's say, and you talk about the lion who wants to mate and not eat, you're referring to one motivational system or another, one governing sex per se and the other governing hunger.

Then the manner in which the lion is going to perceive the world or the manner in which we're going to perceive the world is going to be bounded by the operation of that motivational system, and the perception is going to be deemed sufficient if, when we enact it, the motivational system is satiated. Fair enough?

Okay, now but then there's a more interesting issue that pertains to the big fitness payoff. So if you look at how the nervous system is structured, you have these underlying motivational systems which are goal-setting machines and which define the parameters within which a perception is valid, but all those systems have to interact together, and they cause conflict.

Right, so if you're hungry and tired, you don't know whether you should get up and make a peanut butter sandwich or if you should just go to sleep and leave it till the morning. There's inbuilt conflict. And part of the reason that the cortex evolved was to mediate subcortical conflicts, and then even at the cortical level, the manner in which you integrate your fundamental motivations and the manner in which I integrate mine have to be integrated or we'll fight.

I would say, and I don't know if evolutionary theorists have dealt with this, and it's relevant to your theory, that perception doesn't map to the real world. Is there a higher order set of integrated constraints that serves reproduction over the long run, that all the lower order fitness payoffs are necessarily subordinate to? And I know this is a terribly complicated question.

Is that the reality that perception serves? You made the case that perceptions will not map one to one on reality. I suppose that's partly because reality is infinitely complex, right? I mean you can fragment it infinitely, and you can contextualize it infinitely. So it's very hard to calibrate.

Alright, so we've got to put that aside, but then I would say, well, maybe there's another transcendent fundamental reality that's Darwinian in nature, that integrates everything with regard to optimized long-term survival, and perceptions are optimized to suit that.

So I know that's a terribly complicated question, but this is a terribly complicated subject.

Well, so I think we have to think a little out of the box on this question because when we conclude that evolution shapes us not to see reality as it is, then the question is, well what is it shaping our sensory systems to give us as well as what is reality?

Right, that question also comes up.

Yeah, absolutely. And so, the way I like to think about it is that evolution shapes sensory systems to serve as a user interface—like the desktop on your computer, for example. When you're actually working on a computer, in this metaphor, what you're literally doing is toggling millions of voltages in circuits, and you have to toggle millions of them in exactly the right pattern.

Well, if you had to do that by hand, if you had to deal with that reality and interface with it one voltage at a time, it would take you forever. You probably wouldn't get it right, and you wouldn't be able to write your email or edit your picture—whatever you're doing on your computer.

So we spend good money and people spend a lot of time building interfaces that allow you to be ignorant—completely ignorant—most of us have no idea what's under the hood in our laptops. We have no idea; we know that there's circuits and software, but most of us have never studied it. And yet we're able to very swiftly and expertly edit our images and send texts and emails and so forth without having any clue—literally no clue—what's under the hood; what's the reality that we're actually toggling.

And so it seems that that's what evolution has done for us: it's given us an incredibly dumbed-down interface; we call it space and time and physical objects. So we think of space and time as the fundamental reality and physical objects as truly existing in that objective reality, but it's really just, in this metaphor, a virtual reality headset.

We've evolved a virtual reality headset that utterly hides the very nature of reality—"on purpose," so to speak—because otherwise we'd drown in the complexity. Right, you'd drown in the complexity.

Okay, so some evidence for that as far as I'm concerned is the following: I mean, first of all, if you look at a desktop, it consists, let's say, in part of folders. Now, folders are actually something in the real world that you can pick up, and we understand them. You can manipulate them; you can see how they operate by using your embodiment.

That embodiment gives you a deep understanding of the function of a folder, and then you can represent it abstractly and put it on a desktop, and everyone understands what it means. That understanding is something like being able to map a certain set of functions onto a certain set of purposes—and it's a constrained set of purposes.

This is what really struck me about reading the pragmatists. Peirce and James studied Darwin deeply, and they were the first philosophers to realize exactly what implications Darwinian theory had for both ontology and epistemology. Ontology—the study of what reality is, for everyone listening—that was the real surprise.

You could understand that Darwin's theory might have epistemological implications, implications for the theory of knowledge, but the fact that it had implications for what reality is per se is something that very few scientists have yet grappled with.

And the pragmatists always said: look, when you accept something as a fact, one of the things you don't notice is that you've set up conditions for it to be factual. And a fact is something like: this definition will do, during this time span, for this very constrained set of operations.

Okay, but the problem with that is that it's not a dead objective fact just lying on the ground; it's a fact by necessity nested inside a motivational system. So facts all of a sudden become motivated facts, and that wreaks havoc with the notion of a distant, objective materialism, because the facts are supposed to be separate from motivation. And the pragmatists, as far as I'm concerned, following Darwin, demonstrated incontrovertibly that—as you pointed out, it's analogous—that separation is actually impossible, because you have to constrain reality in order to perceive it. It's too complex; you drown in the details otherwise; you drown in the complexity.

Now you made the claim, and I want to interrogate this a bit, that there's really no direct relationship between the desktop icon that you think is an object when you look at the world and the actual world. But let me offer you an alternative and tell me what you think about this.

So there's this idea—this is a weird way of approaching this, but I'm going to do it anyways. There's a very strange stream of primarily Catholic thought, I believe, that tried to wrestle with the idea of how God could become man. Because God, of course, is infinite and everywhere, and man is finite and bounded. And so the question is, well how do you establish a relationship between the infinite and the bounded?

And that's analogous to the same problem that we're trying to solve. And they came up with this hypothesis of Kenosis, which means emptying, and their notion was, well, Christ was God, but in some ways like a low-resolution representation of God, an image of God, right?

So there was a correspondence, but not a totality—at least not in any one instance. Now, the reason I'm bringing that up is that it seems to me that when we perceive an object, it isn't completely without correspondence—homomorphism, you call it—with the underlying world. It's just extremely low resolution; it's a low-resolution functional tool. That's what an object is.

But—and I would evince in support of that, for example, obviously, the icons that we have on a computer screen: we can use them, and we treat them like they're real, and clearly they're low resolution. But also, when we watch an animated show, for example, like The Simpsons, we're looking at cartoon-like icons, right? They're emptied even further. If I saw a Simpsons cartoon of you, it would be a very low-resolution representation of the you I see, which is a very low-resolution representation of whatever the hell you are in actuality.

But I think there's an element of that perception that's an unbiased sampling of the underlying reality, although it's bent to pragmatic ends, pragmatic motivational ends. Now, I don't know what you think about that; I've thought about it for a long time; I can't find a hole in it, but I'm wondering what you think.

Well, I think here's an analogy that might help explain the way I see it. Suppose you're playing a VR version of Grand Theft Auto; you have a headset and bodysuit on, and you're playing a multiplayer Grand Theft Auto. You're playing with someone in China, England, and so forth, and I'm sitting there in my ride, and I've got a steering wheel and gas pedal and dashboard, and I'm looking out, and I see, to my right, I can see a red Ferrari, and to my left, I see a green Mustang.

Well, now, of course, what I'm really interacting with in this analogy is some supercomputer somewhere, right? And if I looked inside that supercomputer and looked for a red Ferrari, I would find no red Ferraris anywhere inside that supercomputer; I would find voltages.

So there, in that sense, the red Ferrari is a symbol in my headset, in the game, and there's nothing in the objective reality—in this metaphor—that it's a low-resolution version of. It's just literally a completely different kind of beast; there are no—

Okay, so let me ask you about that. I get your point, especially germane with regard to the online game. But is it not the case that in that supercomputer architecture, there's a pattern that is analogous to the red Ferrari pattern? That's the externalized representation of the pattern, let's say, on your retina, and then that propagates into your brain.

Like, there is a conservation of pattern. Now, that Ferrari pattern in the supercomputer would be a very tiny element of an infinite landscape of patterns in the computer, and it's definitely not a pattern of a car per se, right? It's a pattern of a representation of a car. But it's still got some correspondence with a pattern of voltages, let's say, that does have some existence within the supercomputer architecture.

Well, in that case, I would say that there's a causal connection—that what's going on inside the supercomputer has a causal connection with the sequence of pixels that are being illuminated in my headset so that I see a red Ferrari.

So there's a causal connection, but if I asked if there's some sense in which there's a homomorphism of structure between what's going on inside the computer and what I'm seeing on the screen as a red Ferrari, I would say there's probably no homomorphism at all. And in that sense, we can't think about it as like a low-resolution version of something.

So, to be specific, the electrons in the computer have no color. My Ferrari is red. The shape of the Ferrari and the shapes of the electrons—or even the pattern of motion of the electrons—are independent. And what's going on, in part, is that the electrons in the supercomputer are programmed to operate in a certain way to cause certain other things to happen in my headset—to trigger voltages that trigger pixels to have certain colors.

And so there's a whole sequence, a whole cascade of events that are going on there. To say that there's a homomorphism, I think is just barking up the wrong tree.

Okay, so I want to push on this a bit more because I want to understand it. Alright, so I'm going to do that from two angles. The first is that in the supercomputer architecture, let's say there are levels of potential patterning ranging from the quantum, subatomic, atomic, molecular, etc., all the way up to the apprehensible phenomenological world—multiple, multiple layers of potential patterning.

So I would say, in response to your objection that the electrons, for example, have no color: color is a pattern that can only be replicated analogously at certain levels of that multi-level patterning.

So you won't detect it at the quantum level; you won't detect it at the subatomic level; maybe not even at the atomic level. You'd detect it at the level of patternings of molecules at one level, and then not above that—it'd be a very specific level.

So, it could still be there even though it wasn't propagating through the entire system. And then I want to add another twist to that that I think is relevant.

So, I was talking to a biologist last week about how the immune system functions, and basically the way that it functions, you imagine there's a foreign molecule in your bloodstream, and it's got a shape. Well, it has a very complex—has an endless number of very complex shapes that make up its surface, and the complexity of that shape would be dependent on the resolution of analysis, right?

Because the subatomic contours would be different than the atomic contours, and different than the molecular contours. What the immune system wants to do is get a grip on that molecule, and it just has to get enough of a grip so that it can register the pattern, replicate the pattern, and get rid of the molecule. So that's its goal, you could say; that’s a motivational frame.

Now, the way it does that is sort of the way your arm works. Imagine you were trying to figure out how to pick up a basketball. Now, a baby will do that in the crib; the first thing a baby will do when it's trying to figure out how to use its arms is it uses them very nonspecifically. It’ll flail about and maybe it'll hit the ball. Now, hitting the ball isn't throwing the ball, but it's more like throwing the ball than not hitting the ball, right?

And then the baby does this, and then that works and then it gets a little bit more sophisticated and it does this and then it gets a little more sophisticated and does this, and then finally it can manipulate its fingers, so it's specifying the grip. At some point, the baby can grab the ball and throw it.

That's kind of what the immune system does; it makes some molecules that kind of stick to the surface and then those modify so they stick even better, and then the sticky molecules modify so it sticks even better. But the point I'm making is that the immune system appears to generate a sufficient homologue of the molecule to grab it and get it out.

Now, you could say that that homologue that it generates—there are many levels of reality that the foreign body participates in that aren't being modeled by the immune system homologue. But I would say, yeah, but there's enough of a homology so that the immune system can get a grip and get rid of the molecule.

Now, and we're running around the world—this is a very good analogy because we're running around the world trying to get a grip all the time, and we presume that the map that we've made of the world is sufficiently real if we get a good enough grip to perform the operation that we're intending to perform.

But that still, to me, that still implies that there's some level of representation that has at least the echo of a genuine homology. So I'm wondering, you know, if you have objections to that or what you think about that.

I think that we can't count on any kind of homology or homomorphism. The way I think about it now is that spacetime itself, and all the particles that we see at the subatomic level and the whole bit—that's all just a headset.

And physicists actually agree—they say spacetime is doomed. Nima Arkani-Hamed, David Gross, and many others are saying that we need a new framework for physics that's utterly outside of spacetime and quantum theory.

So they're finding structures like decorated permutations and so forth—structures not curled up inside of spacetime but utterly outside of it. And so I think science is telling us—and Darwin's theory, I think, is agreeing—that spacetime is not fundamental. It's just a headset.

Okay, okay. So if I said there's no ultimate homology, but there are proximal local homologies, would that do the trick?

I have a reason for torturing you about this, and I'll leave it soon, but the issue of grip really makes a difference as far as I'm concerned, because getting a grip is sort of the basis of understanding. All of our cognitive enterprises, you could think, in some real sense, are extensions of our ability to manipulate the world with our hands.

I mean, the fact that our left hemisphere is linguistically specialized looks like it's a consequence of its specialization for articulation at the level of the hand. And so getting a grip is crucial here, and the homology seems to me to be demonstrated in the fact that if you pick up a hammer, it actually—

You—it actually comes off the ground. Now, I think you could reasonably object that that homology is tremendously limited. But it's hard for me to accede to the notion that it's absent.

Now, having said that, I don't want to push that point so as to stop you, let's say, from questioning something as fundamental as the objective reality of space and time. I think you can have your cake and eat it too in that regard, and I want to turn to those more radical claims right away. But if I said, well, if I pick up a hammer and it does, in fact, rise off the floor, how is that not an indication of a homology? Would you reduce that again to mere function? Like, it's merely the case that it worked, and that's not a demonstration of anything beyond the fact that it worked?

That's the thing; that's why I can't shake the notion of some homology. Well, I'd again say that there's a causal connection. You could talk about, you know, a causal connection between the reality behind your headset and what you're seeing in the headset.

But I think it would be a stretch to talk about some kind of homology of structure. It's actually not necessary, right? To be successful, it's not necessary.

Well, and as you pointed out very early in this discussion, it also might be hyper-expensive, right? You actually don't want to know more about something than you need to know in order to perform the requisite action. That's part of efficiency, right?

So, okay, alright, let's leave that aside. Well, let me just say one little thing on that before we leave it: if you have, say, a desktop folder icon on your laptop, and the icon is blue and rectangular and in the middle of your screen, well, the file is not blue; it's not rectangular; and it's not in the middle of the computer.

There's literally no homology between anything you can see in the symbol on the screen and the file itself. It's just a useful symbol without homology, but there is a causal connection between the voltages.

But—but no homology. So then—okay, maybe we can go down that route. Sure. I guess I'm then unclear about what you mean. What exactly do you mean by causal, then?

So that's already sort of smuggling in a spacetime kind of analogy, right? Right, exactly. So I'll just say that there's a mathematical connection—maybe not causal, but some kind of mathematical connection. But the mathematics need not be a kind of mathematics that preserves structure, for example, right?

So there's a mathematical connection. Okay, and I have to grind away on that for a bit, because, you know, you are stating that there is a relationship at least of function, and I'm unable to, on the fly, thoroughly discriminate between some grip on structure and some function, because grip is a function. So I'll just put that aside for now.

Let's go on to consciousness itself. Now, you've said a variety of very radical things, including criticizing the entire notion of space and time. And so we'll delve into that, but I want to tell you something that I learned from reading mythology, and I want you to tell me how that relates, if at all, to the way that you're conceptualizing consciousness, which is obviously not the way that people generally conceptualize it.

Okay, so I've read a lot of different mythological accounts, and I've studied a lot of analyses of mythological accounts, and I think I've been able to extract commonalities and regularities across the methods of assessment.

And I think I've been able to triangulate them against findings from neuroscience, let's say the neuroscience of perception. Now, the mythological stories that represent the structure of reality proclaim, you could say, that there are three interacting causal agents or structures—three interacting fundamental causal agents is probably a better way of thinking about it.

There’s a realm of potential from which order can be extracted—that's often given feminine symbolism; the realm of potentiality, and I think that's because feminine creatures are the creatures out of which new creatures emerge, so there's a deep analogy there.

So there's a realm of potentiality, then there's a realm of a prior order that's often given patriarchal or paternal symbolism—that's the great father. And so if you read a book, let's say the book offers you a realm of potentiality, which is the multitude of potential interpretations that the book consists of, but then you impose an order on that that's a consequence of, well, every book you've ever read and every experience you've ever had.

And the book itself is a phenomenon that emerges as a consequence of the interplay between the interpreter and the realm of potentiality.

Okay, then there's one additional factor, which I think is identical to consciousness itself. It's associated in mythology with the sun—the sun that sets and then rises triumphant in the morning. It's associated with the conquering hero, and it's the thing that literally makes order out of chaos; that's the right way to think about it.

And we, as conscious beings, partake in that process; in fact, that process is our essence, and that's what makes us made in the image of God, let's say, but also instantiated with something like intrinsic value.

Now, you have a very strange concept of consciousness, partly because you're attempting to make the case that what we think of as objective reality—"just the facts, ma'am"—is actually an emergent property. Tell me if I've got this wrong: it's actually an emergent property of consciousness itself. And so, in your scheme of things, consciousness is more fundamental than objective reality.

Does objective reality, so to speak, even exist in your scheme…?

So tell me how you've grappled with the relationship between consciousness and the world as such. What have you concluded?

Darwin and physics, high-energy theoretical physics, agree that spacetime is doomed; it's not fundamental reality. And the search is on, in the last ten years among physicists, to find structures entirely beyond spacetime—not curled up inside spacetime. And they've found structures.

I mentioned the decorated permutations, amplituhedrons, and so forth. And I'm also thinking about consciousness utterly outside of spacetime. So it's a fundamental reality, and spacetime, which we have thought of for most of human history as the fundamental reality that we're embedded in, is a trivial headset—that's all it is.

We've mistaken a headset for the truth, because it's easy: if all you've seen all your life is a headset, it's hard to imagine something outside of it. But science is good enough to recognize that spacetime is just a headset.

So now we're free, using mathematics, to ask what kind of structures we could posit beyond spacetime. In my case, I'm trying to also deal with the mind-body problem: how is consciousness related to what we call the physical world?

So I've decided to try to get a mathematical model of consciousness. Now, of course, spiritual traditions and humanity for thousands of years have thought about consciousness, but as a scientist, what I want to do is listen to their insights, and then write down as minimal a mathematical structure as I can to boot up a completely rigorous theory.

And so what we've done in our theory, which we call the theory of conscious agents, is write down a very minimal structure. A conscious agent has a probability space that it's defined on. So it's a probability space.

So the probability space is equivalent to, let's say, a realm of potential. My students and I tried to model anxiety as a response to entropy. Okay, so imagine that what you have in front of you is a set of branching possibilities, some of which can be realized with comparatively less effort, so they're more probable, let's say, given your current state, and some of which are virtually impossibly distal but in principle could be managed if you were smart enough and could gather the resources.

But so you have a probability space in front of you; some of which is sort of at hand, like it's pretty easy for me to pick up this pen, right? So that's a high probability pathway laid out in front of me.

So I mean the mythological motifs that I referred to insist that what people face is something akin to the precosmogonic chaos that God himself faced when the cosmos first sprang into being, right? And so the way to construe the world isn't as a place of clockwork automaton machines or self-evident objects, but as a realm of possibility that differs in probability.

And then the issue becomes how do you best orient yourself so that you can contend properly with that probability landscape? Now, is that—am I walking on parallel ground here?

We're in broad agreement there, in the sense that our theory of conscious agents, by writing down a probability space, gives a space of potentiality.

For example, to be very, very concrete, suppose my experiment is just to flip a coin twice: heads and tails. Well, what's my probability space? Well, I could get heads-heads, heads-tails, tails-tails, or tails-heads, right?

So there are four possibilities. Could it land on the edge? Yeah, right, right. Well, then I'd have to increase my probability space if I wanted to include that.

But now, notice I write down the probability space first, but I haven't flipped my coin yet. So it's the space of potential outcomes of things that I can do, and that's what probability spaces are, and so, yeah.
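As a minimal concrete sketch of that example (hypothetical code, just to make "probability space" tangible): the space of potential outcomes is written down before any coin is flipped, and admitting a new outcome means enlarging the space itself.

```python
from itertools import product

# Sample space for flipping a coin twice: every potential outcome,
# written down before any coin is actually flipped.
outcomes = list(product("HT", repeat=2))        # HH, HT, TH, TT
prob = {o: 0.25 for o in outcomes}              # a fair-coin measure

print(outcomes)                                 # 4 potential outcomes
print(sum(prob.values()))                       # probabilities sum to 1.0

# To allow "lands on its edge," the probability space must be enlarged:
outcomes_with_edge = list(product("HTE", repeat=2))  # now 9 outcomes
```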

Okay, so when I write down a probability space for consciousness, it's a probability space in which I'm thinking, in the first instance, about the probability of this: will I experience green, or mint, or the sound of a trumpet—all these different conscious experiences.

So the probability space is a space of all possible kinds of conscious experiences that this particular agent might have. And you could imagine for some agents, maybe they're simple; they only have the experience of red, period— that's it— that's all this agent has is red.

Another one can experience red and green, and others can have 10 trillion experiences. And then they can be related, right? Maybe the red agent can be thought of as a subspace of the one that experiences red and 10 million other things.

So we can now write these down. And it depends on how articulated the organism is, right? Yeah, exactly—the simpler organisms, their probability space around them collapses.

That's right. And so the infinite number of potential possibilities that we see in front of us just collapses into maybe five choices, something like that.

And yeah—okay, so you know Karl Friston. So this is quite interesting—I talked to Karl Friston about emotion: about hope, positive emotion, let's say, incentive reward. So positive emotion, in that sense, is a reward that signals advancement toward a goal.

Now, I’d already been conceptualizing with my students, as had Friston, anxiety as a marker for the emergence of entropy. But Friston pointed out, and I want to make a connection between his thinking and yours here—Friston pointed out that you can map positive emotion with respect to entropy too.

Because if you're looking for a desired outcome, so imagine you're trying to get a grip on the world to bring about a certain reality, if you see yourself making a step towards that end such that the number of potential pathways to that end decreases somewhat, that produces a dopamine kick.

And that's a signal of reduced entropy. And it seems to me that entropy is always calculated in relation to a goal, right? You can't just say, well, how entropic is the current space? You have to say: how entropic is the current space in relation to the ordered state that I'm trying to bring about as a consequence of my actions?

And then you'll stumble across something that blows up in your face, let's say. Like I've always thought about this like—imagine you're driving your car to work, okay? And you might say, well, what is your car? And the objective materialist would say, well it's an enclosed shell with four tires. It would give you a materialist description.

But I would say, no, no, no, that's not how your nervous system is responding at all. Your nervous system, for your nervous system, the car is a conveyance from point A to point B. So it's a tool, and it's a tool that signifies zero entropy essentially as long as it performs its function.

And then, let's say your car breaks down, and now you're on the side of the road. Now what happens to you is the probability space around you, I would say, becomes more distal; any of your desired goals become more expensive and harder to compute.

Right? What's wrong with my car? Was I an idiot for buying that car? Am I generally an idiot? Am I going to get in trouble with my boss? What's going to happen to the rest of the day? You know, what's going to happen when I go see the mechanic, right? The landscape blows into a broader range of unconstrained potentiality, and that seems to be signaled by anxiety.

And anxiety then prepares your body for a multitude of potential actions, and the problem with that is that it's very physiologically costly, right? So that's stress, and that'll wear you to a frazzle.

So, okay, so is any of that not in accord with the manner in which you are modeling your theory of conscious agents?

Right. So in the theory of conscious agents, I should say that in addition to the probability space and the conscious experiences that it allows, there is the dynamics—a Markov chain, a Markovian dynamics, where you have these matrices that describe the probabilities: if I'm experiencing red now, what's the probability I'll experience green the next time I have an experience? So there's a dynamics. And when we do the analysis, it turns out that our Markovian dynamics need not have an entropic arrow of time.

It can be a stationary dynamics in which the entropy does not increase. So, entropy—right, that's one of the things that makes things constant, right? You assume that the entropic transformation is negligible; that's why you can ignore things. When you ignore almost everything, you're assuming that the entropic transformation is negligible.

Well, what I'm saying is that it's possible to model a reality in which entropy doesn't increase—period. It's not ignoring anything; that's the nature of this deeper reality outside of spacetime. But then it turns out to be a theorem that if you take a projection of that non-entropic dynamics—there's no arrow of time, in the sense of increasing entropy, in this Markov dynamics—

but if you take a projection of it by conditional probability—any projection of it—it's a theorem that you will, as an artifact of projection, have the illusion of an arrow of time. You will get an…
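Here is a toy numerical sketch of that kind of claim (my own illustrative construction under stated assumptions, not Hoffman's actual theorem): a permutation dynamics conserves Shannon entropy exactly, while the coarse-grained, projected description of the very same system relaxes toward maximum entropy, an apparent arrow of time.

```python
import numpy as np

def shannon_entropy(p):
    # Shannon entropy in bits, skipping zero-probability states.
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Fine-grained dynamics on 4 states: a pure cyclic permutation
# (0 -> 1 -> 2 -> 3 -> 0). Permutations only relabel states, so the
# entropy of the distribution is conserved exactly: no arrow of time.
P = np.zeros((4, 4))
for i in range(4):
    P[i, (i + 1) % 4] = 1.0

# Projection: lump states {0, 1} into A and {2, 3} into B, and model the
# lumped process as Markovian (assuming uniform weight within each lump).
# From {0, 1}, state 0 stays in A while state 1 leaves for B, and so on,
# which yields a doubly stochastic coarse transition matrix:
Q = np.array([[0.5, 0.5],
              [0.5, 0.5]])

p = np.array([0.7, 0.1, 0.1, 0.1])        # fine-grained distribution
q = np.array([p[0] + p[1], p[2] + p[3]])  # its coarse projection

for t in range(4):
    print(t, round(shannon_entropy(p), 3), round(shannon_entropy(q), 3))
    p = p @ P  # entropy stays constant forever
    q = q @ Q  # entropy climbs to its maximum and stays there
```

The fine-grained entropy column never changes, while the projected description loses a little information and its entropy ratchets up to the maximum, which is the flavor of "an arrow of time as an artifact of projection."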

Well, well, is that because—well, look, if you're pursuing a pragmatic goal, things can fall apart and go wrong, and that is an increase in entropy within the universe defined by that goal. That may say nothing about entropy per se as a characteristic of broader reality.

See, I've always had this issue with entropy because entropy always seemed to me to be by necessity subjectively defined. It has to be disorder in relationship to some posited state of order. And then you get back into the Darwinian problem at that point, like if it's—well, if it's bounded by motivation, then it's encapsulated within a Darwinian space.

So, okay, so in terms of your conception of objects, let me try this out. So I'm looking at this teleprompter here, and you're sitting in the middle of it. Now, I'm treating that like a set of conditional probabilities, right? I'm presuming that what this machine is doing right now is very much predictive of what it's going to do in a second, and I'm predicating my perception itself on that reality.

Now, you know, it could burst into flames now; I feel that the probability of that is very low, so I'm not going to perceive the machine that way. Now, there are disorders—obsessive-compulsive disorder is a good example—where people stop being able to reduce that probability landscape to predictable safety, and they start reacting to almost everything as if it's unpredictably dangerous.

And you know, things are so—I've had clients, for example, they would go into a building, and the first thing they would do is look for all the fire escapes, and what they asked me was, well why don't you do that? Because the building could burn down, and people do get trapped in buildings, and that's a horrible way to die.

So the mystery isn't why they did that, the mystery for them was why everyone didn't do that all the time. And I actually do believe that the great mystery is why people aren't scared out of their skulls all the time, not why they're sometimes calm.

But can you imagine an object—now the object is surrounded by a probability distribution, I would say, and that probability distribution is all the things that object might turn into in some period of time, let's say. And I would say to some degree when you look at the object, you actually also perceive that probability space because, although I see that this teleprompter is stable, it's unstable enough and dynamic enough to provide me with a representation of you.

And so I'm playing with the—by seeing the object and interacting with it, I'm playing with the probability space around it. So is it the case that you see the damn probability space when you look at the object?

Well, I don't know if we see the space itself; we're certainly estimating what we think are the probabilities for various good things and bad things to happen.

But I would say, about this whole business of entropy increasing and so forth: first, I should point out that Shannon entropy, which is what we're talking about here, turns out not to be the most general notion of entropy.

There are mathematicians and physicists looking at broader definitions of entropy; there's something called Tsallis entropy, and others. So there are technical reasons here. I mean, Shannon entropy is great and very, very useful, and when I was talking about our dynamical systems not having increasing entropy, I was talking about Shannon entropy.
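For reference, these are the standard definitions (not specific to Hoffman's papers): Shannon entropy, and the Tsallis generalization, which recovers Shannon entropy in the limit.

```latex
% Shannon entropy of a distribution p = (p_1, \dots, p_n):
%   H(p) = -\sum_{i} p_i \log p_i
% Tsallis entropy with parameter q, one broader family:
%   S_q(p) = \frac{1}{q - 1}\left(1 - \sum_{i} p_i^{\,q}\right)
% In the limit q \to 1, S_q(p) recovers the Shannon entropy H(p).
```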

But there are more general notions of entropy that are important. So I would say that the very structure of needing to estimate probabilities and worrying about outcomes and, you know, rewards and so forth—from the point of view of our dynamics of conscious agents, all of that, in fact all of Darwinian theory, is an artifact of projection.

So here's a dynamics of conscious agents outside of spacetime: there need not be any competition, no limited resources, no arrow of time. And yet when I take any projection of that dynamics to get a new Markovian dynamics that has lost just a little bit of information, I will have an arrow of time, and it can look like separate organisms competing for resources and so forth.

In other words—I mean, I love Darwin's theory of evolution by natural selection; it's very powerful. But I think the entire theory is not a deep insight into reality; I think it's an artifact of projection. Think about the arrow of time: it is the fundamental limited resource in evolutionary theory. Time is the fundamental limited resource. If I don't get food in time, I die. If I don't mate in time, I don't reproduce. If I don't breathe air in time, I die. So time is the fundamental limited resource, and the arrow of time itself need not be fundamental; it could be entirely an artifact of projection.

So what that means—and this gets back, again, to the most fundamental possible question we could be discussing: what's the nature of reality itself?

I mean when I was debating with Sam Harris, we got hung up on this consistently because I wasn't willing to use the same definition of truth that he was. He uses an objective materialist definition, and I think that, you know, truth flies like an arrow, let's say. It's got a functional element to it that you cannot eradicate.

There's no way out of that with an objective materialism, as far as I can tell. Now, you said the Darwinian race and the arrow of time are just artifacts, but if I said, well, hold on a second, I don't exactly know what you mean by artifact, because if I don't act like there's an arrow of time and restricted resources in that regard, then I'm going to die. And that's real enough for me.

You might even say, well my death has little to do with the fundamental structure of reality, but I would say, well it has enough to do with it so it happens to concern me. And so, you know, we start to get into a discussion about what constitutes reality itself.

If this is just a projection, what in principle would be real? Right.

So, on this theory, consciousness is the fundamental reality, and the conscious experiences that observers have are the fundamental reality. And the experience that we have of space and time is a projection of a much deeper reality.

And that projection, because it loses information, is necessarily going to have artifacts in it, and among the artifacts are things like separate objects in space and time. Space and time itself is an artifact.

So one reason I'm not a materialist is because our best materialist theories, namely evolution by natural selection and also quantum field theory and Einstein's theory of gravity, they tell us that space and time have no operational meaning at 10^-33 cm or 10^-43 seconds.
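For reference, the two scales mentioned are the Planck length and (approximately) the Planck time, standard quantities defined from the fundamental constants:

```latex
% Planck length and Planck time:
%   \ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-33}\,\text{cm}
%   t_P    = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\,\text{s}
% (the 10^{-43} s figure quoted above is this order of magnitude).
```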

In other words, the scientific theories that are the foundation of our materialist ideas tell us precisely the scope and the limits of materialism. That kind of materialism is fine down to the Planck scale, 10^-33 cm, and beyond that it completely falls apart; it's utterly irrelevant.

That's right: the spacetime, physicalist, matter kind of materialism falls apart. And it's not because of religious ideas—I'm just listening to the science. Science tells us: spacetime has no meaning beyond the Planck scale.

And that's why, you know, high-energy theoretical physicists are now looking for structures entirely outside of spacetime—not curled up inside spacetime—entirely beyond. So it's in that sense that, yeah, I reject materialism. And by the way, I should say this about all scientific theories: my view is that each scientific theory starts with certain assumptions—the premises of the theory—and it says, if you grant me those assumptions, then I can explain all this wonderful stuff.

Okay, okay, so how did you come to that conclusion? Because, see, this is something I've been trying to wrestle with, with regard to, say, the potential relationship between the integrity of the scientific process and an underlying transcendent ethic.

So I think, for example, I talked to Richard Dawkins about this a little bit, although we didn't get that far for a variety of reasons, but like, I think that to be a scientist, there are certain things that you have to accept on faith.

These would be equivalent to those axioms. And I'm not talking about necessarily a scientific theory here, as you were, but the practice of science itself.

So, for example, you have to act as if there is truth; you have to act as if the truth is discoverable; you have to act as if you can discover it. Then you have to act as if your discovering the truth and communicating it is good. And none of that is provable scientifically.

You have to start with those axioms before you can even make a move. And it could be wrong, you know? I mean we think that delving into the structure of the world with integrity is redemptive; we think that knowledge is useful pragmatically.

But you know, we've invented all sorts of things that could easily wipe us out, like the hydrogen bomb perhaps being foremost among those. And so the evidence that that set of claims is true is sorely lacking, or you could say it's 50/50—that's another way of thinking about it.

But I'm very curious about how you came to the conclusion that scientific theories themselves have to be axiomatically predicated. How did you walk down that road?

Well, if you just look at any scientific theory—say Einstein's theory of special relativity—he says, let’s start with two assumptions that, you know, the speed of light is universal for all observers and that the laws of physics are the same in all inertial frames.

He says, if you grant me those two miracles, then we can go. It's the same thing with Darwin: Darwin starts off and says, grant me that there are organisms in space and time, and resources, and that these organisms are competing for those resources.

Now, I can give you a theory. So when you look at any scientific theory, a good theory will make its assumptions explicit, but if it doesn't, you can find out what the assumptions are.

So there's no Theory of Everything. Do you think—is there any difference between—technically, I'm thinking philosophically here—I don't see any difference between the claim that a given theory has to have axioms that aren't provable from within the frame of that theory—that's Gödel's theorem, as far as I can tell—applied much more broadly.

I don't see any difference between that and the proposition that to get the game started there has to be something akin to a miracle. I mean because these axioms imagine that a miracle inside a system is defined as any occurrence that isn't governed by the rules that apply within that system.

That's a good working definition. Now your proposition is, well I don't care what theory you're coming up with, there's going to be a set of axiomatic presuppositions that are a launching point.

See, I also think those axiomatic presuppositions are where you put all the entropy. You say, grant me this; it's like, well, that takes care of 95% of the mystery. So we'll just shove that out of sight, right?

Because it's hidden inside the axioms, and then you can go about manipulating the small remnant of trouble that you have left over.

I also think this is why people don't like to have their axioms challenged, see? Because if you say, well, I'm not going to accept that, then you let loose all the demons that are encapsulated within those axioms, and they start roaming about again, and people don't like that at all.

Well, yeah, a good scientist will want to have their assumptions made absolutely mathematically precise and explicit, so they’re just laid out there, and they say these are the assumptions of the theory, and given these assumptions, I can now prove this.

And this is the glory of science where we put down precisely what our assumptions are, and then we look at it mathematically and we can get both the scope of those assumptions—how much can we do with those assumptions—and the limits.

Like in the case of space-time, the limit is 10^-33 cm: game over. By the way, it's not that deep, in my view; it's a billionth of a trillionth of a trillionth of a centimeter, just 10^-33 cm, and the game is over for space-time.
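For reference, that 10^-33 cm figure is the Planck length, which follows from combining the constants of quantum mechanics, gravity, and relativity:

$$
\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m} = 1.6 \times 10^{-33}\ \text{cm}.
$$

Below this scale, the operational notions of distance and duration that the theory presupposes can no longer be defined.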

So that's a good antidote for dogmatism because your own theory—a mathematically precise theory—will tell you the limits of your assumptions and then say, okay, now you need to look for a broader framework with deeper assumptions.

But they will be new assumptions. And so I view this as infinite job security for scientists, because we will never, ever get a Theory of Everything. We will always have a theory of everything except our current assumptions, and I agree with you that those assumptions will essentially be the whole bailiwick of what we're doing.

So reality, whatever it is: now this is, for me, something of an interesting mystery. Our theories in some sense don't even scratch the surface of the truth, because this process will go on forever and we'll still essentially have measure zero of the truth; and yet Einstein's theory and quantum theory gave us the technologies that allow you and me to talk across the country.

Well, you could say that partly what's happening there is that the more sophisticated the theory, the broader the range of probable states of any given object or system of objects that can be predicted. It's something like that.

Piaget pointed that out when he was talking about developmental improvement in children's cognitive theories. And, you know, someone like Thomas Kuhn presumed that we undergo multiple scientific revolutions, but that there was no necessary progress; there were just different sets of axioms.

And Piaget knew about Kuhn's theory, by the way, but Piaget's point was: no, you've got it slightly wrong, because there is a progression of theory, in that a better theory allows you to predict everything the previous theory allowed you to predict plus some additional things.

Now your point would be, well we can just continue that movement upward forever, right? Because the landscape of potentiality is inexhaustible and so, again, you can have your cake and eat it too.

We can learn more; Einstein got us farther than Newton, which doesn't mean that Einstein's axiomatic set is the final set.

Okay, so let me put a twist in this. I've been thinking about this recently. I'm writing a new book, and one of the things I'm doing in that book is an analysis of the story of Abraham. Abraham's a very interesting story, okay?

So Abraham is called out into the world even though he sort of hung around his father's tent till he's like 70. So he had utopia at hand; he didn't have to do any work to get everything he needed. But that wasn't good enough, so a voice comes to him—it's the voice of conscience, I would say—and says, look, you've got all this security, but that isn't what you're built for. Get the hell out there in the world.

And so he does that, and then all hell breaks loose. It's one bloody catastrophe after another: starvation and tyranny and warfare and the necessity of sacrificing his son—it's just like one bloody thing after another.

Okay, but during that process, Abraham continues to aim up, and he makes the proper sacrifices, and the consequence of that is that God promises him that his descendants will be more numerous than the stars.

So I was reading that from an evolutionary perspective, and I thought, okay, what's happening here is that the narrative is trying to map out a pathway that maximizes reproductive fitness all things considered.

Now, the problem I have with theories like Dawkins's, let's say, is that Dawkins (and you tell me if you think this is wrong) implicitly reduces sex to lust. Then he reduces reproduction to sex, and the problem with that is that reproduction is not exhausted by lust or sex; quite the contrary, especially in human beings.

Because not only do we have to chase women, let's say, but then, when we have children, we have to invest in them for some 18 years before they're capable of carrying reproduction forward, and we have to interact with them in a manner that's predicated on an ethos that improves the probability of their reproductive fitness.

And so, reproduction: see, this is something that the casual Darwinists do very incautiously, as far as I'm concerned, because they identify the drive to reproduction with sex, and that's a big mistake.

Because sex might ensure your reproduction proximately, for one generation, but the pattern of behavior that you establish and instantiate in your offspring, which would be an ethos, might ensure your reproduction across multiple generations, you see?

And that appears to be what's being played out in this story of Abraham: the unconscious mind, let's say, is trying to map the fitness landscape, attempting to determine what pattern of behavior is most appropriate if the goal is maximal reproductive fitness calculated across multiple generations, or maybe across infinitely iterating generations.

And so that points to something, again, like you said earlier, you called it a general fitness—what was it? I got to get it here—a big fitness payoff, right? And that could be the ethos to which all these subsidiary ethoses are integrated.

See, see, okay.

So, well, I'm wondering what you think about that. First of all, what do you think about the proposition that evolutionary biologists (Dawkins is a good case in point) have erred when they've too closely identified reproduction with sex?

It's like: that isn't a guarantee of reproduction; we wouldn't invest in our children if it were. We would just leave them; the sex is done, we've reproduced. You need an ethos to guarantee reproductive fitness across time.

Well, there's several levels here. First, Dawkins, of course, understands that most reproduction is asexual, right?

So sexual reproduction is a relatively recent thing; most reproduction has been asexual. Dawkins is very famous for talking about the selfish gene, and really, when he talks about reproduction, it's about genes reproducing themselves, not so much about sex.

Sex is one way of having that happen, but you know bacteria do it without sex. And so there are different strategies.

So, for example, some spiders will have hundreds of babies and eat some of them, and let the others go; having the babies is their only job, and after that, the babies are on their own.

And so there are different strategies. This is where Dawkins is quite famous, justifiably, for his work on the selfish gene idea. There are different strategies, but the only thing that matters in this framework is: what is the probability that particular genes spread through the population in later generations?
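That "probability that particular genes spread" has a standard formalization in the replicator dynamics of the evolutionary game theory Hoffman cites; as a textbook sketch (not an equation quoted in the conversation):

$$
\dot{x}_i = x_i\left(f_i(x) - \bar{f}(x)\right), \qquad \bar{f}(x) = \sum_j x_j f_j(x),
$$

where $x_i$ is the frequency of variant $i$ and $f_i$ its fitness: a variant spreads exactly when its fitness exceeds the population average. Spider-style mass spawning and heavy parental investment are just different fitness functions plugged into the same dynamics.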

Sex came along apparently to deal with—

Okay, as one of the pathways to that, right? That's right. But there's another framework for thinking about all this as well.

So again, I love evolutionary theory; in terms of models of the evolution of creatures and their behaviors, it's an incredibly powerful theory. I've used it a lot. My book, "The Case Against Reality," talks about it in great detail. It's a wonderful theory, but I think that, from this deeper framework that science is now moving into beyond space-time, all of evolutionary theory, all of it, is an artifact of projection.

In other words, if you're looking, from a spiritual point of view, for some deep principles, deep spiritual principles, I don't think evolution is deep enough. I think all of it is an artifact of space-time projection.

And if you're looking for deep principles of the kind the spiritual tradition is reaching for in talking about Abraham, really thinking big, then I think thinking inside space-time is not big enough.

You've got to step entirely outside of space-time. Space-time has all these artifacts, and we're so used to being stuck in the headset.

So there is an insistence upon that in the Judeo-Christian tradition, because God is conceptualized, traditionally, as being entirely outside of time and space. And so the human landscape and the divine landscape are not the same. There's a relationship between them, however, but they're not the same.

Okay, so now, okay, so let me—let me ask you about that. Now, you have made the case, not least in this interview, that Consciousness is primary. Now, Consciousness uses these projections. So how do you reconcile the notion that Consciousness is primary?

And I want to make sure I'm not misreading what you're saying: that Consciousness is primary, but Consciousness operates in the world with these projections. See, because this is the thing I grapple with.

It's that if survival itself is dependent on the utilization of a scheme of pragmatic projections, then in what sense can we say that reality is something other than that? Because, see, part of this is something that Peirce and William James wrestled with too.

It's like, well, why make the claim that there is a reality outside of the human concern with survival and reproduction? If Consciousness is the primary reality, and it's using projections to orient itself so that it can survive and reproduce in a biological sense, how can you even begin to put forward a claim that there is a reality that transcends that?

Like on what grounds does it transcend it in relationship to what? Right?

So, these are deep waters. The idea that I'm playing with now is that there's one ultimate infinite Consciousness, and what is it up to? Knowing itself.

But how do you know yourself? Well, there are certain theorems that say that no system can actually completely know itself, right?

Right, right, exactly, exactly. So if this one infinite Consciousness wants to know itself, all it can do is start looking at itself through different perspectives—putting on different headsets.

So space-time is one headset, a projection of the one infinite Consciousness. And from that perspective, it looks like evolution by natural selection; it looks like quantum field theory, and so forth.

It looks like I need to play the game this way, but this is a trivial headset. This is actually, I think, one of the cheaper headsets.

Okay, that's very interesting.

Okay, so, in writing the book I'm writing now, I've been walking through all these biblical narratives. And one of the things they do is this: every single narrative provides a different characterization of the infinite. There's no real replication.

It's like, well, here's a picture of the divine, and here's another one, and here's another one. And here's another one. Now, there's an insistence that runs through the text that unites the text—that those are all manifestations of the same underlying reality.

But it is definitely the case that what's happening is that these are movies, so to speak, shot from the perspective of different directors. And it does seem to me akin to something coming to know itself.

There's this ancient Jewish idea; it's like a Zen koan, a great little mystery. Here's the proposition: God is traditionally imbued with the following characteristics: omniscience, omnipresence, and omnipotence. What does that lack?

And you know, you think, well, that's a ridiculous question, because by definition that lacks nothing; but the answer is limitation. That lacks limitation. And that's actually the classical explanation for God's creation of man: the unlimited needs the limited as a viewpoint.

It has something to do, as you pointed out, I believe, with the possibility of coming to something like conscious awareness.

You see this in T.S. Eliot too; I don't remember which poem, where he talks about coming back to the point of origin, which is like the return to childhood; you know, that heavenly notion that to enter the kingdom of Heaven you have to become as a little child.

But there's a transformation there. That return to the point of origin is accompanied by an expansion of consciousness. It's not a collapse back into childish unconsciousness. It's the reattainment of a, what would you say?

It's the reattainment of the state of play (that's a good way of thinking about it) that obtained when you were a child, but with conscious, differentiated knowledge.

So there is this tremendous narrative drive in the western tradition towards differentiated, comprehensive understanding as a positive good, and that seems tied up with the continual drama between God and man.

So, and I do think the scientific enterprise is an offshoot of that; that's what it looks like to me historically.

So, okay, so how in the world do you survive in psychology departments given what you're thinking about?

Well, I've got the mathematics. If I were just talking this stuff without any mathematical underpinnings, it would be dismissed, of course.

But, you know, in the case of the evolutionary stuff, we've published papers in the Journal of Theoretical Biology, for example, and elsewhere, where we actually put the mathematics out there, so it's peer-reviewed.

And I think that it's a bit surprising, but you know, I'm a minority—a small minority. But you know, that's the way science progresses; it proceeds one funeral at a time, and it progresses by minorities of one—exactly right.

So scientists understand that, you know, you want to have independent ideas, think out of the box, make it mathematically precise. Most of our ideas will be nonsense, including mine.

But you've got to put them out there and push them and see what happens.

I've gotten some stiff pushback. For example, some philosophers have published papers recently where they give the following argument against my Darwinian theory. They’ll say, look, Hoffman uses evolutionary game theory to show that space and time and physical objects and organisms don't exist.

Well, they say he's got himself into an unenviable dialectical situation. Either evolutionary game theory faithfully represents Darwin's ideas or it doesn't.

They say, okay, so if it doesn't, then he can't use it to say that organisms and resources are not fundamental in space-time. And if it does faithfully represent Darwin's ideas, well, Darwin's ideas are that space-time is fundamental, and there are organisms and resources.

So it couldn't possibly contradict that. So either way, Hoffman is screwed, right? There's nothing he can do.

And that's been published, actually, in high-value philosophy journals, and my response is quite simple: that misunderstands science completely. Every scientific theory, when you write it down mathematically, has a scope and limits.

And the mathematics tells you both the scope and the limit. So for example, just to be very concrete, Einstein's theory of gravity, right? In, I think, 1907 or so, he had this big idea: if I was standing on a weighing machine in an elevator and all of a sudden the cord was cut and I was in free fall, all of a sudden I would be weightless; that was his big idea.

For his theory of gravity, it took him years, seven or eight years, to actually work out the mathematics, but he wrote down his field equations.

Those field equations are Einstein's mathematics to capture his idea that space-time is fundamental and has certain properties. Well, a year after he published it, Schwarzschild, a German scientist, discovered that they entail black holes.

And we eventually found out that this theory entails that space-time itself has no operational meaning beyond 10^-33 cm.
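For concreteness, the equations in question are Einstein's field equations, together with the horizon radius of the Schwarzschild solution that emerged a year later:

$$
G_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}, \qquad r_s = \frac{2GM}{c^2},
$$

where $G_{\mu\nu}$ encodes the curvature of space-time, $T_{\mu\nu}$ the distribution of matter and energy, and $r_s$ the radius below which a mass $M$ forms a black hole.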

So we could use the same argument that's been used against me against Einstein now.

Look, Einstein's field equations, either they're faithfully representing Einstein's ideas or they're not. So we can use the same argument against Einstein that has been used against me.

You see, if they don't, then we couldn't use them to show that space-time isn't fundamental. And if they do, they couldn't possibly show that space-time isn't fundamental. That last step is the wrong one; the equations are there to show you the limits of your concepts. They give you precise—

That's so—that's what these philosophers have missed: that the equations that we write down tell us not just the scope but the limits of our theories.

And that's why science is so valuable: because it tells us your theory, your assumptions, go this far and no further.

So that's all I've done with the theory of evolution: to say that also.

But that also sounds to me very much like a vindication of the fundamental claim of the pragmatists, which is that we accept something as true without noticing that what we mean is "true within a time frame, with certain implications for instantiation."

It's something like that. And so, true is a lot more like does the bridge stand up when a hundred cars go across it?

It's not some final comprehensive all-encompassing definition of the truth for all time. And you've already made the case that it can't be because that truth is an ever-receding goal; it's always bounded.

Okay, so when I came across that, I thought, okay, well, it's bounded by what? And it's, well, it's bounded by our aim, and then that's bounded by our motivation, and then that's nested inside a Darwinian world.

Okay, now let's go after the game theory. Let me just say one thing about the first thing, if I may. Sorry, go ahead, go ahead.

Yeah, I would just say that the very deepest spiritual traditions really say that up front. Like the Tao Te Ching, which starts off by saying the Tao that can be spoken of is not the true Tao. Once you understand that, then go ahead and read the rest of it.

That's a good example because that's a great book—yeah, a great book. And I think that's also the way we should think about our science. The science that can be spoken of is not the final reality. But given that, it's a wonderful thing to do science. And we should do science, and we should do it very, very rigorously.

But we should always understand that if we're talking about a theory of everything, it should be with a wink and a nod because there is no theory of everything that we can write down.

Right, it's the theory of everything that we've discovered so far, maybe, but it will never be the final theory of everything, right?

And it might have a broader and broader range of potential applications as well, but that doesn't mean that we've exhausted the landscape of comprehensive theories.

Right, okay. So now, the philosophers that you described as objecting to your theory said that if evolutionary game theory is correct and models Darwin's propositions appropriately, then... well, game theory is extremely interesting to me; although I wouldn't say I'm an expert in it, I understand its gist, I believe.

And it seems to me to be something like—if you iterate interactions, an ethos of one form or another emerges.

So, for example, if you run tit-for-tat simulations, you find out that the best trading strategy is: cooperate, but slap back when necessary, and then forgive; something like that.

And so what it points to, very interestingly, is something
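As a minimal sketch of the kind of iterated-game simulation being described here, assuming standard Axelrod-style prisoner's-dilemma payoffs and a couple of illustrative opponent strategies (none of these specifics come from the conversation):

```python
import random

# Standard prisoner's dilemma payoffs: (my_payoff, their_payoff).
# C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first; afterwards, copy the opponent's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def random_player(history):
    return random.choice(["C", "D"])

def play(strategy_a, strategy_b, rounds=200):
    """Iterate the game; each strategy sees only the opponent's past moves."""
    history_a, history_b = [], []  # opponent moves seen by a and b
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    for name, opponent in [("always_defect", always_defect),
                           ("random", random_player),
                           ("tit_for_tat", tit_for_tat)]:
        a, b = play(tit_for_tat, opponent)
        print(f"tit_for_tat vs {name}: {a} vs {b}")
```

Played head to head, tit-for-tat never beats a single opponent by much (it loses narrowly to a pure defector), but across many pairings it accumulates among the highest totals, which is the "cooperate, slap back when necessary, then forgive" result referred to above.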
