2015 Maps of Meaning Lecture 02a: Object and Meaning (Part 1)
Okay, I want to talk to you a little bit about theories of truth, and there's a reason for that. The reason is that I want to make a distinction between two kinds of knowledge. Before I do that, I want to talk about the multiple ways that you can determine whether or not something is true and/or whether something is factual. It's a much more difficult task than you might think because it's easy to think that there's only one way of defining truth, but that's not the case, as far as I can tell.
The first thing I'm going to do is talk to you about how things might be true from a Darwinian perspective. If you think you can knock any holes in this argument, go right ahead, because I've been hacking away at it for a while, and I haven't been able to. So, you know, Darwin had this theory back in the mid-1800s, which you're no doubt all aware of, and it was basically predicated on two fundamental axioms. The first axiom was that there's natural variability in a population; that was later discovered to be associated with genetic variability, and that genetic variability could itself transform, as mutations occur. That was in some sense a later addition to Darwin's theory. The second axiom was that the natural environment selects the organism most suited to the current environment, all things considered, so that it can propagate, right? Then whatever propagates moves to the next generation, and the same process occurs again.
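To make those two axioms concrete, here is a minimal sketch in Python of the mechanism as described: heritable variation plus selection by a shifting environment, iterated over generations. The population size, the single trait, the fitness rule, and the drifting target are all invented for illustration; this is not a biological model.

```python
import random

def evolve(generations=50, pop_size=100, target=0.7):
    # Each organism is reduced to a single trait value in [0, 1].
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: organisms whose trait is closer to the environment's
        # current "target" are the ones that get to propagate.
        survivors = sorted(population,
                           key=lambda trait: abs(trait - target))[:pop_size // 2]
        # Propagation with variation: offspring resemble a parent, plus noise.
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
            for _ in range(pop_size)
        ]
        # The environment itself keeps drifting unpredictably.
        target = min(1.0, max(0.0, target + random.gauss(0, 0.02)))
    return population, target

population, target = evolve()
print(round(sum(population) / len(population), 2), round(target, 2))
# The population mean tracks the moving target, imperfectly and with a lag.
```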
Now there are a variety of complications with that theory that I'm going to lay out a little bit. When the Victorians first got hold of this theory, they kind of thought that it meant that there was a natural hierarchy, right? That as evolution progressed, organisms improved, and that higher organisms followed in the trail of lower organisms. So in some sense, even though it wasn't a natural or necessary presumption for those who adopted Darwinian theory, the idea of a hierarchy of value was still part of Darwinian theory. So we could say, well, evolution has led over its billions of years to us, human beings, and we're far more complicated than simple creatures, so it seems like there's a process at least of increasing complexification going on. Maybe that has something to do with increased value. We're more fit now.
Modern evolutionary biologists take issue with this because they don't really think that there is any directionality in evolution. After all, there's still plenty of one-celled organisms around. In fact, there are more one-celled organisms in your body than there are your own cells by a huge margin. So the idea that there's a necessary direction in evolution, going from simple to more complex, is not formally true. Except that obviously, there are much more complex organisms than there were around a couple of billion years ago, and so there does seem to be some directionality in some cases, and it's not exactly obvious why that is.
So the reason I'm telling you that is because it's not self-evident that creatures, as they evolve, become more fit, so to speak. What evolutionary theory seems to be about, at a deeper level, is something like a dance between the environment and the organism. So the environment keeps transforming, and it does that unpredictably. I wouldn't say randomly, but it does it unpredictably to an unpredictable degree. Because some things that are true today are going to be true tomorrow, but other things aren't, and you don't know which ones are going to shift.
Now, I think if you want to get a practical idea of how the environment moves around, one of the best ways to do that is to try to predict the stock market. Because it's pretty easy to derive formulas that account for the stock market's past performance, but it's very, very difficult to use those formulas to predict its future performance. In fact, there's no evidence that you can do it. Most money managers, for example, do worse than chance at picking stocks. So, you know, I've thought for a long time about why anybody would invest their money with a money manager in the stock market, and the best answer I've come up with is that people don't want to take responsibility for making mistakes with their money. They have a lot of money, and it makes them nervous. They don't know anything about investing—and maybe you can't—but anyway, they don't know anything about investing, and so they parse the money off to a money manager who's an expert. Then if they lose their money, they can say to themselves, "Well, at least I had an expert looking after it."
But if you look into it deeply, you find that the bulk, the preponderance of the evidence suggests that you cannot get enough information to accurately predict what the stock market is going to do at all. You can kind of see that that's true because if you could get a consistent edge of say one-tenth of one percent, which sounds like nothing, all you'd have to do is repeat your bets over and over, and soon you'd have all the money in the world because of compounding.
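The compounding claim can be made concrete with a couple of lines of arithmetic. The starting stake and the bet counts below are arbitrary illustrations, not figures from the lecture; the point is just that even a 0.1% edge, reinvested, grows geometrically.

```python
stake = 1_000.0   # arbitrary starting amount
edge = 0.001      # a consistent one-tenth-of-one-percent edge per bet

for bets in (1_000, 10_000, 100_000):
    print(bets, round(stake * (1 + edge) ** bets, 2))
# 1,000 bets   -> roughly 2,717 (about e times the stake)
# 10,000 bets  -> roughly 22 million
# 100,000 bets -> an astronomically large number
```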
So, the stock market is a good model of the environment because it is, in fact, a model of the environment. It's not a model of the entire environment, obviously, but all sorts of things affect stock market prices, right? Psychological factors, biological factors, weather, political events, and so forth. So as far as models of the environment go, it's a pretty good one, and you can't keep up with it fundamentally. You can think of the environment that way; it's this thing that's dancing around, and some elements of it are more stable than others.
For example, the sun appears to be pretty stable, at least on our timeframe, so that's a good thing. But lots of things can't be relied on from one moment to another, and you don't know what direction they're going to turn. The way that life deals with that is by producing a plethora of variants and basically letting anything that doesn't work die. Evolution requires an awful lot of death, you know, and there are ideas that what we've done, to some degree, is internalize the evolutionary process, because we can invent fictional representatives of ourselves, right? Which would be our potential future selves, and we can run simulations of the environment and kill off all those potential future selves that don't look like they would actually survive, and that's really what we mean by thinking.
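Here is a cartoon of that "internalized evolution" idea in code: generate candidate plans (fictional future selves), run each one against a crude simulation of the environment, and discard the ones that fail, so that the plan dies instead of you. The plan representation, the simulator, and the survival threshold are all placeholders invented for illustration.

```python
import random

def simulate(plan, trials=100):
    """A stand-in world model: returns the fraction of simulated futures in
    which this plan 'survives'. Survival here is just a random draw biased by
    an assumed plan quality, since a real world model is beside the point."""
    return sum(random.random() < plan["quality"] for _ in range(trials)) / trials

def think(candidate_plans, threshold=0.6):
    # Kill off the potential future selves that don't look like they'd survive.
    survivors = [p for p in candidate_plans if simulate(p) >= threshold]
    # Act out the most promising survivor, if anything made it through the cull.
    return max(survivors, key=lambda p: p["quality"], default=None)

plans = [{"name": f"plan-{i}", "quality": random.random()} for i in range(20)]
print(think(plans))
```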
Okay, so, all right, so then the next thing you might ask yourself about given that is — and we're going to shift bases here a little bit — is how would you define reality? If you had to come up with a definition of what constituted real, how would you go about doing it? I would say the way the West does this, there's a schizophrenic element to it, a split element. There's an incoherence in it, and I haven't heard anyone else make this argument, so if you can figure out some reason why it's stupid, then let me know.
Think about it this way. Our basic presumptions about the world are materialistic right now, and I would say that those materialistic assumptions were basically laid down at the beginning of the scientific endeavor, and so that would be about the time of Bacon and Descartes. Because they sort of laid the foundations for modern science and there are some rules about what things are in reality, and the rules are something like, well, things are made out of little things. Then, if you go all the way down to the level where the littlest thing possible is, well, that's where you find the fundamental constituent elements of reality. Matter is detectable by the senses, and it's intersubjectively detectable. So you can detect something with your senses fundamentally, or with the extension of your senses. You can describe how you went about detecting it; other people can duplicate that, and then they can report what they detect. If the reports are similar, then we assume that that thing is real.
So, in some sense, that's what our senses have to do, right? Because our senses triangulate, in a sense. So how do you figure out if something's real? Well, if you can see it, you know, that's pretty good. But it's better if you can see it and hear it, and it's better yet if you can see it and hear it and touch it. Maybe it's better if you can see it, hear it, touch it, and taste it, and so forth. And you know, we've evolved these sensory systems that are quite different from a qualitative perspective, and if they all report the same thing, then we have a pretty reasonable basis for believing that that thing is there.
Then, with science, we've actually extended that capacity because we don't just use the senses of one person; we use our senses in a rigorous and collective manner, and that reports to us the nature of what we've come to call the objective world. Then we say, well, accurate descriptions of the objective world are true. It's like, you know, that's a good theory. You can also certainly see its objective utility—sorry, I shouldn't say that; it's pragmatic utility. Because when you play by the rules of science, it appears that your power expands, right? So that's another reason why we think it's true. You run through these routines, and then all of a sudden, you can do things you couldn't do before.
That's a different criterion of truth, right? That's more like a criterion of usefulness, practical usefulness. Because you would think too that—this is why Jung thought that science was actually embedded inside a narrative, in a deeper narrative—we wouldn't be doing science if we didn't think it was useful for something, right? Then you might think, well, useful for what? Well, what would you say? Maybe you suffer less because of science. Maybe you live longer because of science. Maybe you find life more interesting. You could do a more diverse range of things. Whatever: we have made the decision that the utility of science justifies the endeavor. But that's a different criterion, right? The utility of something. You could debate the utility of science from a Darwinian perspective, because you could easily say, well, here's an example—there was a book written a while ago by an ex-KGB officer, and he was talking about biological warfare labs in the Soviet Union.
He claimed to have worked in a biological warfare lab where the goal was to cross Ebola with smallpox. Now, Ebola, which you all know about, isn't very contagious; you have to really be in contact with someone who has Ebola to catch it. But smallpox, that's contagious. So if you could get the two things working together, you would have a disease that's basically 100% fatal, and you could distribute it in aerosol form, which is sort of the dream of any biological weapons maker. Now, they never really managed it, but you know, it's kind of an interesting idea, right? And if they had, and it turned out that there were no people left, then we'd have to say that our notion about the utility of science turned out to be colossally wrong from a Darwinian perspective.
You might say the same thing about hydrogen bombs. Or maybe you might say the same thing about computers. Because, you know, it seems to me quite unlikely that human beings in their present form are going to be around 200 years in the future. With the rate of growth of computer intelligence, it seems to me virtually certain that we'll become something biological and mechanical together. We're certainly headed in that direction.
So anyways, the question—no, no, that's a good question. The question was: do you think that people's interest in science can be anything other than pragmatically oriented? I actually don't think it can, because I don't think that you would be interested in science if it wasn't pragmatically useful, you see? And that's exactly the point that I want to pursue in this lecture today. It's like we are interested in things, generally, as a consequence of their perceived utility.
Now the ends—the framework within which you determine utility—is quite malleable. So, for example, if you're suicidal, you might regard how sharp a knife is as the primary object of interest. But generally speaking, you know, we would assume that, especially if you're a Darwinian thinker, your primary interests are something like survival and reproduction. I don't think that's an unreasonable assumption. The terms are quite elastic, so you can throw a lot of motivations into them without having to think that rigorously about it. But as a hierarchical framework, it's not bad.
So, part of the reason that Jung believed that science was embedded in a dream, which was the alchemical dream fundamentally, was because he believed that the people who developed the symbolic precursors to science—and those would be the alchemists, Newton was an alchemist, by the way—were looking for something approximating the philosopher's stone, which was a material object which would grant its bearers eternal life and good health and wealth. The alchemists were the first people, in some sense, to posit that if you systematically investigated the transformations of the material world, which was regarded more or less as damned by the Catholic Church, that you could extract out information that would be of substantial benefit to the things I just described: health, wealth, and longevity.
So you might say, well, why are we pursuing science? Like, why are we motivated to do it? Why would you spend 10,000 hours looking through a microscope? It was a very weird thing to do. The Jungian idea is, well, if you go right down to the base of the hierarchy of motivation, you're doing it to make the world a better place in some manner that's important to you. So you can't—science has to be nested inside a motivational structure, or no one would do it.
Now, you know, you can think about that what you want, but it's not an argument that you can easily dismiss because you have to account for why people are interested in science, you know? If you say, well, it's because they want to build a career, that's fine because it just nests inside the same argument.
Anyways, okay, so now we kind of have a definition of reality from a materialist perspective. Now there are a few problems with that, right? One of the problems is that when you get down to the fundamental elements of matter, they turn out not to be very much like matter at all. They turn out to be these weird quantum processes and entities that appear to be tangled up with consciousness in some way that no one can understand, and certainly display all sorts of properties that aren't evident at the macro level.
So, you know, the formal job of reductionism—one element of reductionism, because there are many elements of reductionism—is that you explain the complex by the simple. For a long time, as we went deeper into the microstructure of things, it did appear in some sense to be getting simpler. But then, when we went down to the quantum level, it all of a sudden got incredibly weird, and no one really knows what to do about that. Now I'm not going to introduce quantum thinking into this discussion, because every weirdo with a crackpot theory immediately does that, and it's a very dangerous thing to do.
I'm just pointing out that we didn't expect the nature of reality to qualitatively transform as we investigated its microstructure, but that's what happened. So, okay, all of that aside: the idea that there are facts about material reality, and that those facts are true, is a very, very powerful idea. However, we don't pursue it for its own sake: the rationale for pursuing that kind of truth is based on truths that are different from the truths that the process itself reveals.
You know, and I think you might well agree, hypothetically, that perhaps investigating how to combine Ebola with smallpox is not really a good idea. It's a perfectly good idea from a purely scientific perspective, but it seems not unreasonable to assume that there's a broader perspective from which that idea probably isn't for the best.
You know, a pure scientist—there isn't such a thing, but if there were—might say it doesn't matter, because all facts are equal; facts don't have value—they're not being. All facts are equal, but, of course, you can't live as though all facts are equal, because there are a trillion facts. If you don't have some mechanism to zero in on a subset of relevant facts, you're immobilized. You're just flooded by information. I think something like that probably happens to people in the initial stages of schizophrenia: they can't distinguish; everything becomes relevant. That's a bad situation.
So, you know, you're stuck with a set of facts anyways, and there are reasons that you choose the subset of facts that you choose, and those reasons aren't grounded in a materialist philosophy, and they can't be, at least not in any simple way. Okay, so maybe, you know, hopefully that's a coherent argument.
Now, the next part of it is kind of something interesting that happened in the late 1800s. So in the late 1800s, there was a group of philosophers on the east coast of the United States who called themselves pragmatists, and William James was one of those people. William James is, of course, regarded as one of the founders of modern psychology, even though he wouldn't be allowed on a faculty of psychology today, because he was like the first hippie in some sense—he liked to experiment with things like laughing gas, nitrous oxide, you know? It's actually quite an intense hallucinogen at the appropriate doses. He wrote what reads like 60s acid poetry back in the 1880s when he was on nitrous oxide, and he was very curious about phenomena like religious experience and so forth.
So, he was a very philosophically minded psychologist and had some very—well, he was brave, I would say, you know? Which sometimes being weird and being brave are the same thing. He was also unbelievably intelligent, and he had a group of people around him including a guy named Charles Peirce, who’s one of the West’s greatest relatively unknown philosophers.
Anyways, they set up a new field of philosophy which is classically regarded as the only American philosophy, and that philosophy was pragmatism, you know? When you call someone pragmatic, you sort of mean, well, they're willing to do what works, you know? They're concerned with what works, and that's kind of pragmatism in a nutshell in some sense, except it's a lot more sophisticated.
The pragmatist would say, look, you've got a problem, and the problem is you don't know everything. What that means in some sense is that all of your knowledge about anything is limited, even about the things that you think you know about. It's limited. You can never be sure; you can never be certain that what you're dealing with is what you think you're dealing with or that when you deal with it what you expect to happen will happen.
You know, you might think, well, I understand this can of Coke sufficiently. I can drink it, and so on, and so my knowledge is as close to absolute as it needs to be. But, you know, this has aspartame in it, and everybody thinks aspartame is dangerous—but it's probably not, you know? So who knows? If I drink a hundred of these things, that might be the end of me, and wouldn't that be stupid?
So the point is that because you're surrounded by a cloud of ignorance that you cannot get rid of, none of your judgments about what constitutes truth can be final. One of the things the pragmatists were trying to figure out is, well, given that, how can you even act? Because you can't compute!
Well, this came up again in the 1960s and has really bothered people since then in the guise of the frame problem. You cannot compute all the consequences of your actions, so how can you act? How can you feel that you have enough knowledge to act? So, well, it's a very difficult problem. It's an unbelievably difficult problem. It's actually one of the problems that has made developing intelligent machines much more difficult than anyone thought it would be because it turns out that not only are there an infinite number of things that could happen as a consequence of a given action, there's also a virtually infinite number of ways to look at a situation to perceive it.
We look at a situation, and there it is, and so we just think it's given. But it turns out that that's just seriously wrong, and the only reason the world looks given when we look at it is because our psyche—our consciousness, which we still think of as sort of a soul—is actually something that is grounded deeply in the biological processes of our body. All of those biological processes that were created over evolutionary time structure our perceptions before we even know it and just deliver this world to us.
So, it looks obvious, but you know, we've been evolving for like three billion years or something like that, so it took a lot of tinkering around to get this perceptual system to do what it does.
So, okay, so the pragmatists were very interested in how you come up with ideas of truth, and what they settled on, roughly speaking, is something like this: whenever you conduct an action, you set up the criteria for determining whether what you're doing is reasonable or factual based on the outcome that you want to attain. For the pragmatists, it was more like things were true enough.
So, if my goal in interacting with this coke can is to have a drink, and when I do what I am going to do and I get a drink, then that’s sufficiently true, and I can go on to the next thing. But it’s still tenuous because it leaves open other questions, like should I be drinking this or something else? Or, you know, whatever. It leaves open all sorts of questions. But it doesn’t matter as far as the pragmatists are concerned.
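That "true enough" criterion can be rendered as a toy rule: a belief or procedure counts as true enough when acting on it produces the outcome you were aiming at, within the tolerance the goal demands. The goal, action, and tolerance below are placeholders, not anything from the lecture.

```python
def true_enough(act, goal, tolerance):
    """Pragmatist-style check, loosely rendered: run the action, compare the
    outcome to the goal, and accept the guiding belief if it got close enough."""
    outcome = act()
    return abs(outcome - goal) <= tolerance

# e.g. "my model of this can is good enough to get me a drink"
print(true_enough(act=lambda: 0.95, goal=1.0, tolerance=0.1))  # True
```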
Now, when Darwin published his theory, the pragmatists got on it right away. They were really interested in Darwinian theory because they regarded it as a version of pragmatism, which is quite cool, and they did this, I think, within five years of its publication. It caused quite a stir; people were sort of ready for the theory, but the pragmatists were really all over it because they thought of the mechanism that Darwin described to account for evolution as a pragmatic mechanism.
Now that's an interesting idea, so think about it this way: how much do you need to know? From a Darwinian perspective, there's an answer to that. You need to know enough so that you can last long enough to pass your genes to the next generation, that's it. The Darwinian would also say, well, obviously, your knowledge is faulty, incomplete because, you know, your ability to transmit your genetic material to the next generation is somewhat limited. You have a limited number of partners, plus you die, and the fact that you die indicates that you, as a solution to the problem of maintaining your own life and reproducing, you're like a partial solution.
You're a good enough solution; you'll do for the 30 years or the 35 years or the 40 years that you're likely to be active in a, you know, in a functionally reproductive sense. So you're a good enough answer. But you know, the Darwinians and the pragmatists also said something else, or implied something else, which is: you're good enough, and not only is that as good as it gets, it's as good as it can get. So they put some really stringent restrictions on what you could regard as a sufficient solution to a given problem.
The reason for that was that, well, your knowledge is limited, and that things transform unexpectedly. So the best thing you could do is run along, try to keep up, and you'll never really do much better than that in some sense because things are unpredictable and because they change in an unpredictable way.
So now the final part of the argument is this: so then what are we to regard as truth? Now leave the materialist claims aside; I'm perfectly willing to say, and I think it would be ridiculous not to, that that's a form of truth that at least is very useful. So fine, it's a tool; maybe more than that, it is saying something about the absolute nature of reality. I don't care about that; at least it's a useful tool for us. It's made us more powerful.
But then we have this other problem, which is, well, what about the nature of the system that it's embedded in? How do we determine its utility, or how do we determine the utility of anything? Or how do we determine how to act? Or how do we determine what to value? And those questions can't be answered using the same methodology that science uses to answer its questions. Now the consequence of that is what Nietzsche described at the end of the 1800s as the death of God, right? There's a conflict, a serious conflict, between the claims of religion, which, as far as I'm concerned, have to do with value and morality, and the claims of science, which have to do with material reality.
All right, so now the tricky part comes in realizing that the selection mechanisms that determine what happens to you in relationship to the next generation, say, are selecting you on the basis of your action and your behavior. I would say they're selecting you on the basis of your values. Because, you know, if you're an idiot and you go charging after a lion and the lion eats you, then the nature of reality, so to speak, has rendered a judgment on the value structure that sent you down that particular path, right? And so you're being selected by natural forces as a consequence of your behavior.
Now we can extend that argument, especially in the human case, or in the case of any organism that's also subject to sexual selection. Human beings are tremendously subject to sexual selection. Now, Darwin was a great believer in sexual selection. He thought it was at least as important as the selection of the organism by the natural environment, whatever that is. But, you know, biologists really had a hard time with that. They kind of ignored sexual selection for like a hundred years, which is quite interesting. I think it was partly because sexual selection implies, especially when you're dealing with organisms that are sort of conscious like human beings, that choice itself is a determinant of evolutionary processes.
But, you know, the strict deterministic materialists didn't really like that, and they couldn't really come up with an argument against it, but they sort of shelved the whole topic for a long period of time. But one of the things we know, for example, about human beings is that females are very sexually selective. Female humans are, as opposed to, say, female chimps, which aren't. So part of the consequence of that is that you have twice as many female ancestors as you do male ancestors.
Now, there'll be half of you who are going to have a hell of a time figuring out how that's possible, but it is possible. What it really means is that something like the average male who's in your ancestry had two children if he had any, whereas the average woman had one, and that'll do the trick right there, right? Now, I'm not saying those are the exact proportions, but those are the average proportions across huge spans of time.
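The arithmetic behind that claim, as the lecture states it, can be worked through directly: if the typical reproducing woman among your ancestors had one child and the typical reproducing man had two (because many men were filtered out entirely), then each generation of children needs twice as many distinct mothers as distinct fathers. The numbers below are just the lecture's illustrative averages, not demographic data.

```python
children = 1_000          # an arbitrary generation size
kids_per_mother = 1       # the lecture's illustrative average for reproducing women
kids_per_father = 2       # the lecture's illustrative average for reproducing men

distinct_mothers = children // kids_per_mother   # 1000
distinct_fathers = children // kids_per_father   # 500

print(distinct_mothers, distinct_fathers, distinct_mothers / distinct_fathers)
# 1000 500 2.0 -- repeat that ratio generation after generation, and the pool
# of female ancestors ends up roughly twice the size of the pool of male ones.
```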
So anyway, part of what's occurred to select human beings for the way they are is the choices that each gender has made with regards to action and belief, but in many ways, more specifically female choices. So it's kind of interesting. The next time you're irritated at your boyfriend, let's say, then you can just remember that, you know, untold millions of women have chosen men so that they're exactly like he is, and so you know, you can't really blame him for that.
So, all right, but here's the thing that's tricky from a scientific perspective: if your value structures determine what you're interested in and how you behave, and selection pressure occurs as a consequence of your actions, then doesn't that mean that the structure of the value systems that drive your behavior has to be regarded as true or real in some scientific sense? Because the Darwinians basically say, look, the best solution is the solution that you embody. There is no higher solution than that, and there you are, embodying a solution.
So in what way—and I don't understand this—in what way can that not be regarded as real? You know? Because people think of things like value structures as epiphenomenal. They're not real like material is real, but I don't get that. I don't see how that works with Darwinian thinking, because you're interacting with each other all the time, and you're doing that on the basis of your capacity for action and the value systems that drive it, and obviously those have some effect on whether or not you're selected by women, say, or by men, for that matter, but also by the natural world. So it looks to me, at least, that you can make a case that the deep value structures that drive us—that are deep, deep parts of our psyche—I'll try to also demonstrate to you that those exist beyond question. I already think that question is answered; there are no blank-slate theorists left, right? Everybody knows that you come into the world with an a priori structure. And that a priori structure was selected.
So in what way is it not true? I could say, well, it's as true as it gets. Well, it's not true like a materialist idea is true, but that's okay; you don't have to have only one kind of truth. First of all, I don't think the materialist ideas of truth are the most fundamental ideas, because, as I said, in order for that truth to be relevant, it has to be grounded in a value system. And so there's a value system outside of science that science serves; otherwise we'd be completely—it would be insane for us to pursue it, you know?
If we knew that science was going to lead to the demise of everyone on Earth, which it could—you wouldn’t be motivated to pursue it. If you knew that science was just going to increase your suffering, if you're a scientist, it's like, well, you'd stop doing it if you had any sense. So then I would say, well, we pursue materialist truths for reasons that are non-materialist, and those reasons are just as real as the materialist truths, but they're a different kind of true. It’s a good question.
You know, I would say they change at least to some degree, you know? Because, you know, I don't know what values you share with the primate who was your ancestor and the ancestor of chimpanzees, but I would say there's no shortage of similarities. You know, I mean, first of all, we can understand chimps, you know, more or less. We can understand dogs; we can understand cats. My daughter had lizards, and there are more and less social lizards—there are social lizards, by the way—and you can kind of understand a social lizard, you know? So we certainly understand all mammals, um, and then there are other animals that we deviated from over far greater spans of time.
I like the example of crustaceans, because I happen to know a little bit about lobsters for reasons having to do with where my reading took me in a random direction. But lobsters organize themselves into dominance hierarchies, and we understand dominance hierarchies. Even more strangely, the biochemical systems that the lobsters use to track their dominance hierarchies, and to determine how confident or how submissive they should be, are serotonergic, and so are the systems we use to track dominance hierarchies.
The similarity is so great that you can actually use antidepressants on lobsters—you can inject a lobster with serotonin; I don't know the precise procedure. But generally, if a lobster has lost a fight, then he zooms into his little lobster place and pouts for at least 20 minutes. He won't fight with anyone else during that period of time, and when he comes out, he's like a smaller lobster, you know, because he sort of determines how big to be on the basis of the ratio of victories to defeats. The mechanism that adjusts his posture—the successful lobster takes up more and more space, as if to say, I'm as big as I think I am, and that's based on my success in battle—is serotonergically mediated as well, you know?
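Here is a toy model of the mechanism as described: a single confidence variable (standing in, very loosely, for the serotonergic state) that rises with victories, falls further with defeats, and sets how much space the animal claims. The update rule and all the constants are invented for illustration, not taken from the biology.

```python
class Lobster:
    def __init__(self):
        self.confidence = 0.5  # 0 = maximally submissive, 1 = maximally dominant

    def fight(self, won: bool):
        # Wins nudge confidence up; losses are assumed to cost more than wins gain.
        delta = 0.1 if won else -0.15
        self.confidence = min(1.0, max(0.0, self.confidence + delta))

    def posture(self) -> str:
        # The tracked confidence determines how much space the animal takes up.
        return "expanded" if self.confidence > 0.5 else "shrunken"

loser = Lobster()
for _ in range(3):
    loser.fight(won=False)
print(round(loser.confidence, 2), loser.posture())  # 0.05 shrunken -- a defeated lobster
```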
When you see a human being that’s like this, you know, taking up as little space as possible, you can think, well, there's a defeated lobster because it's exactly the same mechanisms, and they're really fundamental in human beings. You can't really change them, you know? The way your brain computes your dominance affects how much positive emotion you have and how much negative emotion. If you're way held down in the dominance hierarchy, or you feel that way, then you're going to experience way more negative emotion because your nervous system has calculated that the environment on average is very dangerous to you because you're barely clinging to the edge of reality, and you’re going to experience very little positive emotion because, well, things don’t look that good for you.
Those aren't really systems you can control; they’re like master operating systems that sit at the base of your brain, you know? So you say, well, do things change across time? It's like, yes, but they change within constraints that don't change much across time. For example, I think there's probably as much—it's reasonable to assume that there's as much conservation of value as there is retention of skeletal structure, you know? And I went to the Smithsonian at one point, I think it was the Smithsonian in Washington, and they have a display of mammalian skeletons in there that's extremely extensive.
So it's everything from little bats to whales, and you know, it's really interesting to walk through that because what you see is that everything’s basically a bat. You know, the bats look exactly like us skeletally except their fingers are way longer, and you know, all that's happened is that there are transformations of size in the basic skeletal structure. The structure itself is virtually identical across mammals, you know? And so evolution is a conservative process, right? It just doesn’t go zipping off in some completely random direction.
And so I would say also the way to think about it is that there are levels of stability in the evolutionary process, and some domains are extremely stable across time. One of them happens to be the existence of a dominance hierarchy; it's at least 400 million years old, and that's old. It's older than trees. So if you're trying to think about what's actually real, you know, you can't see a dominance hierarchy. You can't taste it or touch it or sense it in any real way, but dominance hierarchies are more real than almost anything that you ever come across.
They're probably not quite as real as rocks, but they're real enough so that both you and lobsters organize yourselves into them and understand them. And so, yes, there's change, but it's change within constraints. I think maybe one of the ways to understand that is it's something like the variation in languages. You know, are languages the same or different? Well, it depends on your level of analysis. There are levels of analysis at which the commonalities are what's most manifest, and there are levels of analysis where the differences are most manifest.
I like to think about it as a hierarchy, and there are fundamental elements at the base of the hierarchy, and they don't change. You know, life made a bet 400 million years ago, and it was one of those bets that’s just not going to change. The dominance hierarchy is something that’s going to feature a lot in our conversations because it's such a stable element of the environment that it's one of the fundamental presumptions of our being, and so it's extremely important.
So, for example, one of the things that women do with men—and men do it with women as well, but they use different criteria—is this: men are very competitive in a certain manner, and the women assume that the guys who win are the better men, and they just peel them from the top. So female selection, in part, is one of the things that determines whether male genes of a particular type go into the future, but so is the structure of the dominance hierarchy itself. Because if you're at the bottom of it, I mean, good luck finding a partner. It's very, very unlikely, unless the mating is random and impulsive, or involuntary.
So, yes, I don't know what to say about that, actually, you know? Because I think that they both ground out in truth, and that grounds out in Darwinian theory at the bottom. So let me put it this way—see what you think of this. Here's a claim: our theory about the atomic substructure of material entities is sufficiently accurate that we can make thermonuclear bombs. Right? Everybody agrees with that because there they are—there are nuclear bombs. Here's a counter-proposition: our theory about the substructure of reality is so flawed that we would make thermonuclear bombs.
Now, you see, you think, well, yeah, that's not a bad proposition either. So how can both of those be true? Well, here's how it looks to me: when you parse an entity out of its context, you lose something of the entity. Now this is something that really plagues psychologists, right? Because psychologists will—you know, maybe you'll go up to someone and say, I was angry today, and the person says, well, why were you angry? And then you have a little discussion about why you're angry, and you never really worry about whether the person understands what you mean by angry, because you assume that they've been angry before, and so you share that.
But you also assume that the context of the conversation constrains the ambiguity of the word sufficiently, in that time and place, so that it has an actual meaning. That's the context. You know the person, presumably, because they wouldn't be telling you that otherwise, you know? And you understand the language, so when the word angry emerges, someone doesn't just come up and go, "Angry!" You know, they put it in a story, and so the meaning of the word angry is confined by the story that's constructed around the word, and that's confined by your relationship, and the relationship is confined by the culture that you're in. There are lots of levels of analysis that are going on simultaneously for you to be able to understand what the person means when they talk about being angry.
Now, a psychologist will come along and say, oh, anger is a thing. They'll pull it out of its context and then treat it like it's a material entity, and actually that doesn’t work very well. I mean, there are lots of things that people discuss in the context of multi-level language that you can't just pull out as an entity. Now with regards to our understanding of the subatomic realm, well, the problem is that we're parsing out from a very complex reality a particular form of perception.
Now, it's a very powerful form of perception; it's the form of perception that allows us to mess about with things at the subatomic level. But while we’re doing that, there's a lot of things that those particles are associated with that we're ignoring. So, we narrow our knowledge in some sense by decontextualizing the phenomena, and then we can do things like make atom bombs.
But the question is, what makes you think that the process of decontextualization produces an outcome that's true? Now you could say, pragmatically, well it's true enough to make atom bombs—like, yeah, okay, but it's not true enough not to make them. So it’s tricky. It's tricky.
Yeah, well—that’s the next thing. That's the next thing I want to get to. That's actually why I have this little diagram up there, so I'm going to do that now. So now I'm going to offer you a proposition, and it's like a core—I’ll get to you right away—it’s a core proposition. We tend to think that our knowledge structures are made out of facts and that facts are about the material world, and I would say that's not very true. Partly because who the hell cares about facts most of the time? They just lie there and they’re not good for anything.
More importantly, it can't be right for a variety of reasons. First of all, the scientific endeavor is only about, let's say, 500 years old, formally. Now you can stretch it back—maybe you can stretch it back to the Greeks, you know, if you want to look at precursors, but it's still—who cares? It's 2,000 years or it's 3,000 years, whatever. It's like dust on a cliff; it's nothing. Now, we were doing perfectly well without the theory beforehand, and animals still do.
You can decide for yourself what that indicates, but it does indicate that you can exist in a manner that fulfills the Darwinian requirements perfectly fine without ever formalizing a theory about the nature of material reality. So fine. Now the next thing is, your nervous system isn't built to view the world through a cognitive framework that's composed of empirical facts, obviously, because we've only been doing that for a few hundred years. Chimps aren't doing that, and you're mostly chimp.
I don't know exactly what the similarity is; it's something like 98 or 99 percent at the genetic level. It's really, really high. So it's not like the dissimilarities are unimportant, but, you know, there's a lot of you that's chimp. There's a lot of you that's mammal. There's a good chunk of you that's like a crustacean. So, you know, obviously none of those entities predicate their action in the world, or their knowledge of the world, on material theories.
Okay, so then the question might be, well, what's the nature of your theories of the world? And that's where you get into the distinction between motivational significance and value, on the one hand, and material reality on the other. I would say that the cognitive structures that you use—they're not just cognitive; it's actually your personality, because a personality is far more than a cognitive structure. Your personality is how you act—how you act out ideas, how you actually move your body through time and space—and it's how you perceive, because you don't perceive everything.
You perceive a very limited subset of things, and there are processes that are associated with your personality that determine what that subset is. It determines how you value things; it determines how you respond to them emotionally, and then it sets the boundaries for the contents of your cognitive knowledge. So I would think of that as a personality. What's the personality up to? Well, it's trying to orient itself in the world. It's trying to figure out how to act in the world, and I would say that that's our fundamental problem. Our fundamental problem isn't what are things made of, although I'm not saying that's not a problem. It's a problem, but it's not the fundamental problem.
The fundamental problem is what to do about what things are made of. It's like where to go now. You know, that’s an existentialist claim in some sense—it’s like, well, what's the meaning of your life? What's the purpose of your life? You know, and we shouldn't be using materialist theories to address such questions, but we do, and it's cost us a lot.
So, yep, well, that’s exactly what I want to talk about next. Is it a problem? Okay, so, well, it could be true. I mean if it turns out that the scientific endeavor dooms us, which it might, then you certainly make that case. You'd have to make it post hoc, though, and you won't be able to because there won't be anybody around. But, you know what? Like the idea that science is a valuable enterprise is a—it’s a theory, you know? And it's, you know, it's 50/50.
Maybe not 50/50, but there's definitely room for dispute. I don't think we would see the mass emergence of phenomena like the environmental movement if there wasn't part of the human psyche that was very much worried about untrammeled technological progress. Now, you know, we should also be equally worried about the absence of technological progress; we're basically screwed either way.
So, but that's in some sense beside the point. Okay, so the reason I got into the ideas that I'm describing to you is quite straightforward. Now, I can't really use the story that I used to use to frame why I'm telling you what I'm telling you. When I developed these ideas—and I've been working on them for 30 years—my primary motivation was to understand what insanity was driving the Cold War. I started working on these things probably in about 1982, although I'd been thinking about things like that for far longer than that.
I don't know what it's like for you people as young people. I know you're worried about the environment, but when I was younger, everyone I knew was very, very obsessed with the idea that there was likely going to be a thermonuclear war. We didn't assume that it was certain, but we certainly assumed that it was highly probable. You know, and things came pretty damn close. So, I'll tell you a story.
So one time, about 10 years ago, I went to Arizona, and I went to see a Titan missile site. The Titan missiles were early intercontinental ballistic missiles. Now, a ballistic missile is an interesting entity, because, you know, think about something like a—what do they call those? Cruise missiles. A cruise missile is like a robot; you can tell that thing where to go. You know, it's got enough intelligence so that if you don't want it to land there and you've launched it, then you can have it land somewhere else. But an intercontinental ballistic missile—no, no, no, it's a rifle. You point and you pull the trigger, and 20 minutes later that thing has blown up a city, and there's no calling it back, and there's no diverting it.
You know, so I went to the missile site, and those things were huge. They were like major rockets. The launch tube, because they were underground, would have been much, much broader in diameter than this room. Like it was a major league thing because it would zip up into space and then land back on the other side of the world in 20 minutes. It’s like those things are cruising seven miles a second; they're moving. Some of them had multiple warheads, so they’d get over to the Soviet Union, let’s say, then they’d break apart into multiple nuclear warheads, and you know, they were nasty. The force in those things makes Hiroshima look like a pop gun.
I mean, you know, I don't know if you know this about a hydrogen bomb. A hydrogen bomb uses the same mechanism that the sun uses to produce heat, whereas your standard atomic bomb is low grade in terms of energy transformation by comparison. A hydrogen bomb is so powerful that it uses an atom bomb as its trigger, so it's a whole new scale of destructive magnitude.
At that point—we'll say 1982—there were tens of thousands of these things, 25,000 on the Russian side, maybe 50,000 on the American side. Everybody had a pretty itchy trigger finger, you know? Reagan was in power, and he was in many ways right about the Soviets. You know, they were an evil empire—and he scared the hell out of them—but it was touch and go.
Anyway, we're at this old missile base, and we're outside looking at it, and there's a nose cone for one of these things that's sort of lying there in the desert, and so that was kind of an eerie thing. It was about this high and probably about 12 feet in diameter, and it was made out of plastic about that thick. It was sort of variegated plastic, a little bit clear, and that was the thing that would melt off when the missile re-entered the atmosphere.
So it was like, you know, it was a weird artifact. You couldn't just stand beside that thing and think neutral thoughts. It was like, hm, that’s a wicked thing. So then we went underground, and it was very bizarre; it was surreal because the first thing we passed through was like this museum front, because it was a museum, and it was sort of like your grandmother’s nuclear missile site because it was staffed by volunteer seniors. And, you know, there was a little guest book that you could sign. It was sort of pine panel, and it was like—you know, Southern Americans are pretty hospitable, so they were sort of happy to see you, and it was like—I said, it was like your grandmother's nuclear missile site, so that was bloody weird.
Then all around the outside of the museum, there were pictures of Reagan, basically, interacting with various people, including people from the Soviet Union. So it was quite weird, because it was a time warp: there I was, back in 1982, in my grandmother's nuclear missile silo. Then we went into the silo itself, which was quite deep underground. So first of all, you go through this massive door that looks like a safe door, except the safe is on steroids. It's like 12 feet high and this thick, and it's made out of steel, and it's painted that kind of pastel green that everybody was really excited about in 1957.
The whole missile site is like that. It's like, oh, here I am in 1957, with world-destroying technology. So I move from the year, say, 2005 into the year 1982, and then I'm in 1957. It's like Star Trek down there, you know? So we go into the main control room after seeing the silo, which is now empty. We go into the main control room. I think it's 35 feet underground or something like that, which is basically far enough underground that it would be protected from a nuclear blast that wasn't a direct hit. You know, dirt is a really good shield against explosions, as it turns out.
Anyway, so we're down there, and one of the control tables is over there, and it looks like the Star Trek control module, except it's 1957. So everything is switches and dials and so forth, and there's another control desk here. They say, well, would you like to go through a simulated missile launch? You know, and of course, that was definitely my idea of a good time. So they showed us how the protections worked: there's a little station on that side and a little station on the other side, and there's a key for each. I believe that the people who had the keys wore them around their necks, although that might not be right. They may have been stored, but I think they wore them around their necks. Makes sense, right? Because that's really high security, wearing the key around your neck.
The deal was, if you wanted to launch a missile, then person A was over there with their key, and person B was over there with their key, because you needed two people to launch the missile. Then you both put your keys in at the same time, and then you turned them and you held them for 10 seconds, and then bang! God only knows what happens next. Then someone asked, how close did you get? And they said, well, the keys were in the locks at one point.
Now, they wouldn’t tell us when, but we knew when it was during the Cuban Missile Crisis. There were other times when that happened as well—that wasn't the only time, but you know, it was a good enough time. It was touch and go. Back when I was your age, roughly speaking, you know, this—the world was locked in this dispute and it was no joke; it wasn’t that long since the Soviets went into Czechoslovakia; it wasn’t that long since they went into Hungary. It was only 40 years after the Second World War, and like, it was tense. It was tense.
Lots of people I knew—maybe, you know, maybe it was just justification for their own psychological nihilism—had a hard time being concerned about their futures, because they really didn't think they were going to have any. So it was a heavy weight over everyone's spirits, and it really did peak again in about 1982 under Reagan. There were a whole bunch of movies made at that time about nuclear destruction, including one called "The Day After," which you may have watched or may not have, but more people watched that than any TV movie ever made up to that point. It was pretty dismal, man. It was like, there's the city, and poof, there's a nuclear bomb, and everyone's wandering around ruined, and that was sort of the movie.
Reagan actually said, weirdly enough, that that movie actually motivated him to enter into more negotiations with the Soviets. You know, you might have thought he would have thought that through before, but you know, he was an actor. I guess the movie was enough to convince him. But anyways, everyone was— you know, we were carrying this around on our backs in some sense, and I was obsessed by it for one reason or another. I had nuclear nightmares on a regular basis and God only knows why that was. You know, who knows why you get obsessed with something? But, you know, I was—I’d always been interested in belief systems and what they motivated people to do.
So, when I was really young, when I was 13, we had a dentist in our town, way the hell up in northern Alberta, and he had a tattoo on his arm. He was maybe one of two Jewish people in our town—it never really even occurred to me that he was Jewish; there aren't a lot of Jewish people in Alberta. But anyways, you know, I was kind of curious about that in a sort of small-town way. And I also did a project when I was in junior high about the Holocaust, and I got pretty interested in that, you know?
Part of it was that I was interested in why in the world people would do such things—protecting their belief systems, let's say, in the case of the Nazis. But then they were committing these atrocities that were just—they were the worst things that human beings could imagine. That's the right way to conceptualize them. So, you know, you sit in a dark basement with spiders on your head for two or three years dreaming up miserable things to do, and then come up with the most miserable thing you can possibly imagine, and then multiply that by a hundred and have people do that—that's sort of what was happening in the concentration camps.
You know, and there were places on Earth during the Second World War that were worse than the concentration camps. If you want to read about those, you can read about—I think it's Unit 731—that was a Japanese medical experimentation unit. I would recommend that you don't read about that, because you'll read things you'll never get out of your mind. So, I was very curious about that. It's like, what the hell? Here we are arming ourselves to blow each other up, and maybe the whole planet. That seems somewhat counterproductive, you know? Even if you're a communist or a capitalist, being communist king of a burned-out cinder seems to be a bad outcome, especially when you're suffering from radiation sickness and everyone you know is dead.
You could say the same thing about the capitalist end of things. Then I also was curious; it's like, well, why is this war of ideas occurring? It burst out into real wars a lot, right? Why is it occurring? Some of it's still going on. We're still—the Americans are still at war with North Korea, right? They never did sign a peace treaty, and the North Koreans take that rather seriously. So there was the issue of, well, what the hell? Why are we arming ourselves with these horrible weapons that are more than sufficient to blow up our enemies multiple times and would maybe take the whole planet with them? That seems rather insane.
Then the other issue was, well, we do have two belief systems operating here—they're really not the same. You know, like the presuppositions of communism and the presuppositions of the West were not the same. They were seriously different at the level of like constitutional assumption, way down deep.
So, you know, another question that emerged from that was why a system at all? And then, how do you determine the validity of one system compared to another? It's a big problem even now, right? We have culture wars going on in all sorts of ways, you know, radical Islam against the West, for example. Is there any way of getting a handle on what these systems are, what differences in them mean, why they motivate people to do atrocious things as well as defending their own territory, which, you know, is an understandable thing?
Generally, for me, it was like, what the hell is going on? What is happening here? So I don't know. I was going to university, and I wanted to continue, so that was a pretty good problem, and I studied political science and English literature for the first part of my university career. But the political scientists—they were basically Marxists, about as clueless as you can possibly get—basically proposed that the reason people engaged in social conflict was economic.
It's like, that's no answer. Okay, economic reasons? That means people fight about things they value. Well, yeah, self-evident, but what's not self-evident is why exactly they value the things they value. You know? Because obviously, people can live at very different levels of material plenty. I mean, there are people who live in the Amazon jungle, and from our perspective they basically have nothing, and you know, they're living away just perfectly fine.
Then there's people, you know, in New York penthouses, and it's like people are capable of adapting to situations where the variance in economic wealth is insanely large. The idea that we’re fighting over scarce resources or something like that just struck me as stupid, and I think it is stupid. It’s not even easy to define what constitutes a resource.
Then, you know, you might think about Japan, right? It's got no resources at all, and it's rich. You know, it's a bit stagnant at the moment because it's aging, but Japan has nothing. It has rice that they have to subsidize, and it's rich as can be, you know? So none of that made any sense to me at all. I thought it was just posturing, and I still think that. I think it’s a ridiculous idea; it’s shallow as can be.
So then I thought, well, I'll study psychology instead. You know? And I did that partly because I was interested in problems at a clinical level, but also because I started reading some real psychologists. I took a psychology course that was self-study, a self-guided reading course, and I read original works by Carl Rogers and by Abraham Maslow, and I read "The Archetypes and the Collective Unconscious" by Jung and "The Interpretation of Dreams" by Freud. I was trying to pick out core texts in the history of psychological ideas.
That was really helpful. There’s nothing like reading original works by geniuses to provide you with some useful information, you know? It was extraordinarily useful reading. And I was reading Nietzsche at the same time, which was also really remarkable. I think I read most of what was published at that point by Nietzsche, and I tried to read it in chronological order, and it really blew me away, man. That guy— I don’t know what was up with him, but he could sure think. I mean, Nietzsche said at one point: I write in a sentence what it takes other people a whole book to say.
It's like, that's a pretty hubristic and narcissistic comment, right? And then he tops it; he said, ah, they can't even say it in a whole book. So, you know, I love that, and he was right too; that's the thing, he was right. You know, Nietzsche was not in very good health, and so he'd think like mad and then write something down that summarized the thinking, because he didn't have the energy to run through the whole argument. And so he said he liked to philosophize with a hammer.
It's like, yeah. I had a client recently who was raised as a Christian fundamentalist, hey? And he was kind of cocky intellectually. He's not a stupid guy, and he thought that he would, you know, take on Nietzsche intellectually, because he knew about Nietzsche's statement about the death of God. So he read Nietzsche, and that was the end of his Christianity.
It's like, you know, you've got to watch who you pick on. Nietzsche is not a good guy to pick on, because he's a lot smarter than you, and, you know, a lot smarter than almost everyone. He was an amazing person. So anyways, I was reading Nietzsche, and I was reading Dostoevsky, and I was reading Alexander Solzhenitsyn at the same time.
Solzhenitsyn wrote this book called "The Gulag Archipelago"; it's the second volume in particular that I have in mind. He composed much of it from memory in the prison camps in Russia, since he didn't have much pen and paper in the gulag, and the book is, I believe, about 2,400 pages long, written in something like seven-point type. It's a major-league book. He spent a lot of time thinking about why he was in there and why Russia's whole economy was basically predicated on slave labor, and it was absolutely murderous.
So, the second volume of that series: Solzhenitsyn was expelled from Russia in 1974, and the book was brought over in a bad translation and circulated in the West long before it was available in Russia. I tell you, once that book was published in the West in the mid-1970s, there was no being a Marxist anymore. It just demolished the claims of the Marxist left to moral credibility, demolished them outright, and rightly so. It's an amazing book, "The Gulag Archipelago," but it's a walk through a very deep swamp, and it's remarkable because it's like reading someone who's screaming in outrage at the top of their lungs for 2,400 pages, you know?
So, I was reading all that, you know? And I was having these nightmares, and I was trying to figure out what the hell it was about belief systems and how things had gone so astray, especially in the 20th century. Now, I was reading Nietzsche at that point, and I read parts of "Thus Spake Zarathustra," which is not a book by Nietzsche that anyone should read until they've read everything else he wrote, because it's nothing like anything else he wrote.
It's this weird sort of Old Testament-like poetry, and it's revelatory in some sense. It's a very strange book, but it's in that book where Nietzsche announces the death of God. He has a prophet, Zarathustra, come down from a mountain into a marketplace and tell everyone that God is dead. You know? And he says, "God is dead, and we have killed him. Where will we ever get enough water to wash away the blood?" which is the part of Nietzsche’s quote that no one ever refers to.
So, I mean, people like to think of Nietzsche's statement as triumphal in some sense: "God is dead." It's like, yeah, but Nietzsche knew what was coming back in the 1880s, because he was a real prophet, this man, like Dostoevsky; I don't know how they did this. Nietzsche said in his notebooks, published as "The Will to Power," that there were two things that were going to happen to the human race because of the death of religion. One was that people would become nihilistic, and the other was that they would become totalitarian.
He said that what would happen in Europe in the next century—so this is like 20 years, 15 years before the 20th century started—that there would be hundreds of millions of people dying as a consequence of the conflict, basically between communist presuppositions and classic Western presuppositions. It’s like, you know, when the wall fell in 1989, no one predicted it. It was like everyone was left dumbfounded by that event, you know? So predicting what's going to happen tomorrow—that’s impossible. Predicting what’s going to happen in a whole century 15 years before it starts, it’s like you've got to think someone’s tapped into a very deep, deep underground vein to be able to do that.
But he thought about it as a necessity at the level of ideas. For Nietzsche, once the idea of God was killed, the entire moral structure that was predicated on that idea, which was its fundamental assumption, would be shaken to its core, because the foundation piece had been knocked out and everything would be up for grabs. If that's not true, then what? Well, that's a theme that Dostoevsky explored continually too.
Dostoevsky, who Nietzsche had read, and more of him than I thought, as I found out recently: Dostoevsky's take on it was that, well, if there's no God, then everything is permitted; you can do anything you want. There's no morality whatsoever. And Nietzsche pointed this out to some degree as well, because he said most people's morality is cowardice. He didn't say that morality was cowardice; he said that people cover up their cowardice by pretending that they're moral when they refuse to do things that they're terrified of. It's a nasty little comment, and it happens to be true.
So Dostoevsky, at the same time, was also exploring what might happen from a political perspective if his idea about the death of God were right, that everything would be permitted if God was dead. He wrote a book called "The Devils," also translated as "The Possessed," which I would recommend, especially if you read it with "The Gulag Archipelago," because Dostoevsky predicts what's going to happen, and then Solzhenitsyn describes what happened, and they're the same thing. It's rather staggering, actually.
Especially because, you know, there hasn't been anything more powerful written in the 20th century than Solzhenitsyn's books, and there hasn’t been anything written in the 19th century that's more powerful than Dostoevsky’s books, so like that's a one-two punch. So anyways, Nietzsche’s prediction was that once you pull the rug out from underneath the system, it might take a long time to fall, but it's going to fall because there’s no foundation left.
The system was actually logically coherent once you accepted the assumptions, and the assumptions were, well, basically Christian assumptions, or Judeo-Christian assumptions, in Europe, you know? They were predicated on a religious substructure, and those religious ideas, although it wasn't as obvious in the late 1800s as it is now, had an unbelievably ancient history.
Christian ideas are derived from Jewish ideas, and Jewish ideas are predicated on Egyptian ideas, and Egyptian ideas are based on Mesopotamian ideas. That's about as far back as we can go, but then we know that there are shamanistic traditions that are basically religious in structure that go back maybe 25,000 years. And before that, you know, we don't know; we don't know. But there wasn't nothing before that, because there were people fundamentally like us for at least 150,000 years, and there were people more or less like us for far longer; it depends on how you count, but we diverged from our common ancestor with chimpanzees about 7 million years ago.
So God only knows how deeply rooted those ideas are. Now, you know, we still have the rationalist critics, Dawkins, for example. They kind of think of a religious system as a superstitious scientific theory, which is exactly what people like Voltaire thought 300 years ago. You know, I get their point, but it's stupid. I'm telling you, forget about that. That's like 1750, you know? It's not 1750 anymore, and we've got different problems.
The idea that religion is a superstitious scientific theory is a really stupid idea. All it means is that you haven't read anything about the psychology of religion or about religious phenomenology in general. I mean, anybody who has any grounding in neuroscience and who actually reads knows, for example, that there are all sorts of neurological phenomena that result in religious experience. So, for example, if you have epilepsy, there's a high probability that your aura, which is the sense you have just before the seizure hits, is going to be associated with religious experience.
And it was for Dostoevsky, for example. So what happened to Dostoevsky was that he was kind of a student radical under the Tsar, and they arrested him along with the other student radicals and threw him in prison, the Peter and Paul Fortress in St. Petersburg, I believe. They kept him there for a while. He was kind of an aristocratic guy, so he wasn't used to hard living, and then one morning they woke him up at 6:00 and took him out to the firing-squad yard.
Then they lined up the firing squad, and they shot him with blanks, but he didn't know they were blanks, so that was a little hard on his nervous system. One of the consequences of his epilepsy was that just before he had the actual seizure, he said, it appeared to him that his knowledge of the structure of reality was deepening and deepening and deepening until it was absolutely overwhelming. He felt like he was on the verge of discovering the secret of the universe, and then apparently the secret of the universe was a little too big to fit inside his head, and he'd have an epileptic seizure.
But it's very interesting, because Dostoevsky was kind of a hack novelist before prison. Then he had the firing-squad experience, and then he was sent to this horrible prison full of murderers and rapists for a long time. When he came out, he wasn't the same guy. It's hard to say exactly how much the experiences changed him and how much the epilepsy changed him, but certainly there are depths of representation in his writing that I don't think he could have reached without the epilepsy.
And so, religious experiences, man, they're hardwired into your brain, and we can elicit them with some regularity now. You might think, well, that means they're just a biological side effect. It's like, yeah, maybe, and maybe not, too, you know? So, you know, I don't know what the rationalists do with that sort of thing. Maybe they say, well, that's an evolutionary spandrel; it doesn't mean anything of any importance.
It's like the whole spandrel thing. You know what a spandrel is? A spandrel is a feature of an organism that didn't really evolve for a function of its own; it just more or less came along as a byproduct. You can't say that it has a function; it's a side effect of something else. And one of the great tricks of rationalist evolutionary thinkers is that they call anything a biological organism manifests that they can't account for rationally a spandrel, and it's a real pathological trick.
You certainly can't do that with something like religious experience. I mean, you just have to be an adult to see that. I can give you an example. Obviously, part of what's happening in religious phenomenology is dominance-hierarchy perception gone mad, in a sense. So, you know, imagine that there's a hierarchy of people and that you're hardwired to admire those at the top. Well, obviously, right? I mean, what percentage of men's magazines sold do you suppose are about celebrity culture? It's like 50%.
Primates like us, we like to gaze at high-status primates. If you take vervets and you figure out the vervet hierarchy, and you show low-status vervets photographs of high-status vervets, they'll spend a lot of time looking at them. So part of what I'm trying to suggest here is that part of what religious phenomenology is, is the inbuilt sense of awe that we have when we gaze up a hierarchy and see what's at the top.
You know, it's not that long since we had monarchies, and in many societies you weren't even allowed to look at the king. You know, that was certainly the case in Japan where the emperor would be surrounded by shoguns, I think that’s right, and they were authorized to kill anyone who got out of line. They just chopped them up with their swords, and any bit of being out of line was enough.
You know, dominance hierarchies are a very serious thing, and so, you know, it's hard for modern people to understand the awe that must have accompanied the experience of being exposed to a monarch in a society that was fundamentally monarchical. But the Japanese believed, like right up to the end of the Second World War, that their emperor was a sun god. You think, well, that’s pretty damn weird? It’s like, no, it’s not. It’s the fact that we don’t have sun gods that’s weird.
It’s like that’s a really, really old idea, and the Japanese were just doing what virtually every culture before them had done that ever got anywhere with organization. One of the things about a god is that a god is something like what’s at the top of a dominance hierarchy. Now, human beings are very peculiar creatures, and we’re capable of high levels of abstraction.
So, I'm going to offer you a theory, okay? And this is a lot of thinking compressed into a very tiny sentence. Okay—so imagine a dominance hierarchy—okay? Now, imagine that there's a set of qualities that makes it more probable that you'll get to the top of a dominance hierarchy. Now, that’s not that hard to imagine, right? You might think, well, intelligence might have something to do with that. Attractiveness might have something to do with that.
We know something interesting from studies of chimps, mostly done by Frans de Waal. You might think of prototypical caveman society as if the caveman who looks most like a bodybuilder is the most dominant caveman, so to speak. But what de Waal found among the chimps was that the chimp at the top of the male dominance hierarchy, the one who remained at the top relatively stably, so that he wasn't brutally murdered, say, by two chimps below him in the hierarchy, was actually kind of a good leader in some sense.
You know, he got along well with the females in the troop and maybe even attended to their children to some degree, and he wasn't a completely unpredictable tyrant. If he was like that, then a couple of male chimps would form a coalition, and the moment his back was turned, they'd rip him to pieces. That's phenomenally interesting, because what it suggests is that there's a relatively stable set of characteristics operative within a dominance hierarchy that propels you to the top.
That's characteristic not only of human dominance hierarchies; it's also characteristic, in an analogous way, even of chimp dominance hierarchies. So you can read Frans de Waal if you're interested in that sort of thing. He's written a lot of books about the emergence of morality in chimpanzees in particular; very nice work. I mean, he was observing chimps that were basically in a zoo in Holland, so you can make the case that they're not exactly like wild chimps, but I don't think that invalidates his research at all.
And, you know, you can observe similar things in wild chimps. Okay, so that's kind of an interesting idea, right? There's a set of stable traits that propels you towards the top, and some of those would be indicators of biological health, like symmetry and beauty, and some of them would be, well, whatever other characteristics the rest of the bloody troop thinks are admirable. Because admiration is also a very interesting phenomenon, right? There are people you admire. You think, well, why are there people that you admire?
What does it mean to admire someone? What it means, in some sense, is that the effect they have on you is one of wanting to imitate them. You know, a tremendous amount of human knowledge isn't propagated linguistically; it's propagated through imitation. The probability that we were imitating each other before we developed language is extremely high, and we're also one of the few animals, maybe the only animal, really capable of true imitation, and that's a major-league advance in terms of the transmission of information.
What it means is you go out there and half-kill yourself learning how to