2017 Maps of Meaning 05: Story and Metastory (Part 1)
Now that you've had an opportunity to walk through a narrative, hopefully some of the things that I'm going to say that are more technical will make more sense. And so, what we're going to do today, at least in part, is to start to deal with conceptualizing a solution to the fact that the world is too complex to properly perceive. The problem, fundamentally, is that there's a lot more of everything else than there is of you, especially if you include in that "everything else" all the parts of you that you also don't understand. And so I want to walk you through how I think we solve that, at least in part.
We do that essentially by simplifying the world, but I think mostly that we simplify it as a place in which to act, rather than a place in which to perceive objects. I really believe that there's a critical distinction between those two things. I think that part of the reason that there's been continual tension, say, between the claims of science and, let's say, the claims of religion is because the idea that the world as a place of objects and as a place to act have to be considered separately isn't properly understood.
I don't know if I can, but I'm gonna try to straighten that up to the degree that that's possible. So, I'm gonna talk to you about stories and meta-stories. The story, I would say, is the simplest unit of useful information with regards to action and perception that you can be offered. Then a meta-story is a story about how a story like that transforms. We'll concentrate on the structure of the story, and then we'll get into the structure of the meta-story, and that'll constitute today's class.
So the first thing I wanna show you... I know many of you have seen this, but I'm gonna show it anyways. For the longest time, at least in the 20th century, it was presumed that we make a pretty complete model of the world, and then we act in the world, and we compare what happens to that model. As long as our model and the world are matching, then, roughly speaking, we believe that everything is okay, and our emotions stay under control. But if that model mismatches, then we know that something's up.
Now, a lot of this work was done by Russians, especially in the early 60s, by two Russian scientists, Vinogradova and Sokolov, who were students of Alexander Luria, who was arguably the greatest neuropsychologist of the 20th century. Luria spent a lot of time studying soldiers from WW2 that had received head injuries of various sorts, and because of that, he could draw inferences about how the brain worked. Much of what we're going to talk about over the upcoming weeks with regards to brain function, much of it is predicated on Luria's work.
Sokolov and Vinogradova were his students, and they were interested in psychophysiological measurement, right, as a way of inferring brain function. So psychophysiological measurement is the measurement of those physiological parameters, say, like pupil width, or skin conductance, or EEG, that are in some ways directly reflective of how the brain works.
Now, if you measure skin resistance, skin resistance changes with the amount that you sweat. It can change very, very rapidly, and it changes in response to physiological demands placed on your body. So, for example, if your body assumes that you're going to leap into action for some purpose, it's gonna open up your pores to prepare you to keep yourself cool, and you can measure those transformations quite accurately by measuring the electrical resistance of the skin.
What you see, if you put someone in a lab chair and expose them to different stimuli, is that... for example, you can expose them to something threatening, say, like a picture of a snake. Then their skin conductance will increase, because they sweat a little bit more, and it can be a very rapid response.
Now, one of the things that Sokolov noted was that if I sat you down, for example, and put some headphones on you and played a tone that repeated, exactly the same tone at predictable intervals, then the first time you heard the tone you'd produce quite a spike in skin conductance, and the next time a slightly smaller spike, and the next time a slightly smaller one, until after maybe three or four repetitions you would not respond to it at all. That was often regarded as habituation, and habituation is the same thing that you can see in snails, for example.
I'm using snails as an example because they have very, very simple nervous systems. So if you take a snail and poke it, like it comes out of its shell, and you poke it, it'll go back into its shell. Then it'll come out, and if you poke it again, it'll go back into its shell and it'll come out; but if you keep doing that, sooner or later the snail will just stop going in. You might think of that... it has been conceptualized as the simplest form of learning - habituation.
The behaviorists tend to presume that if a human being manifested a response that could be modeled by a simple organism, then the human being was using a response that was analogous to that of the simple organism. Sometimes that's true, and sometimes it's not. So, for example, you have simple reflexes. You know if you put your hand on a hot stove, you'll jerk back, and that's quite a simple circuit. You'll move your hand back before the message gets to your brain because the spinal cord is smart enough to mediate a reflex like that all by itself.
Your brain is actually quite distributed throughout your body; it's not just in your head like people tend to think. We have conserved fast-acting reflexes at various levels of our nervous system that aren't capable of sophisticated response; it's pretty much stimulus-response thinking from the behavioral perspective. But they have as an advantage incredible speed because there aren't that many neural connections between the stimulus and the response.
We have layers of response at different time frames that help us match the demands of the external environment. Charles Darwin, for example, used to go to the Zoological Gardens in London and put his face up against the glass in front of a puff adder, and when the snake struck, he'd jerk back. He tried many, many times to master that reflexive response to the snake, but there was no way; every time that thing struck at him, he'd jump backwards.
Well, you can imagine the survival utility in a reflex like that, but in reflexes in general. Okay, so back to Sokolov now. What he decided... if you took that tone, and you did anything to it that was perceptible - right, because there's certain gradations of tone that you're not capable of perceiving - but let's assume you took the tone and adjusted it enough so that it was perceptibly louder or it was perceptibly a different frequency or something like that, or even that the spaces between the tones – 'cause I said they were predictably spaced – even if the spaces between the tones were changed, then when the change occurred, the orienting reflex would be reinstated.
You'd respond to it again, and Sokolov tried to vary the tone on many, many parameters, but no matter what parameter he varied it on, as long as you could detect it perceptibly, you'd produce an orienting reflex. Sokolov's idea was that you must be producing a complex internal model of the world that's in concordance with the world across pretty much every perceptible dimension because if you weren't doing that, how in the world would you know that the tone had changed from what you had already learned about it?
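Sokolov's finding can be sketched as a toy program. This is purely illustrative, not a neurophysiological model; the parameter names, the decay factor, and the stimulus tuples are all hypothetical. The point it captures is that the response to an identical repeated stimulus shrinks, and any perceptible change on any parameter reinstates the full orienting response.

```python
# Toy sketch of Sokolov's habituation/dishabituation observation (not a
# model of the actual physiology): the "orienting response" decays with
# each repetition of a familiar stimulus and snaps back to full strength
# whenever any perceptible parameter of the stimulus changes.

def orienting_responses(stimuli, decay=0.5):
    """Return a response magnitude (0..1) for each stimulus in sequence.

    Each stimulus is a tuple of perceptible parameters, e.g.
    (frequency_hz, loudness_db, interval_s). A repeat of the previous
    stimulus habituates; any detectable change reinstates the response.
    """
    responses = []
    last = None
    strength = 1.0
    for s in stimuli:
        if s != last:          # any perceptible change -> full orienting reflex
            strength = 1.0
        responses.append(strength)
        strength *= decay      # habituate to the now-familiar stimulus
        last = s
    return responses

tone = (440, 60, 2.0)      # hypothetical: 440 Hz, 60 dB, every 2 seconds
louder = (440, 70, 2.0)    # same tone, perceptibly louder
seq = [tone, tone, tone, tone, louder, louder]
print(orienting_responses(seq))  # shrinking responses, then a spike at the change
```

Varying frequency, loudness, or the interval between tones all behave the same way here, which mirrors Sokolov's observation that the dimension of the change didn't matter, only its detectability.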
For the longest time, and this was also true for people who were investigating artificial intelligence, we had this idea that what people did was make a complex model of the world and hold it in their minds, so to speak, and then act in the world and compare what actually happened with what the model predicted. As long as there was a match, then there was no orienting reflex.
Now the orienting reflex turns out to be quite a complex reflex; it's not merely an alteration in skin conductance. What it is in essence is the manner in which you start to unfold your response to the unknown. The initial stages of that are very, very quick, but it's hard to tell when the orienting reflex stops and when more complex learning begins. They sort of shade into one another.
So the initial stages of the orienting reflex are quite reflexive, but the later stages can be extraordinarily complex. For example... well, I always think the example of betrayal is the best one because it's so complex. Imagine that, you know, you come home and you find evidence, lipstick or something like that, evidence that the person that you're with is betraying you.
The first thing that's going to happen is that you're going to orient; there's going to be a real shock, and that's reflexive. It's very much akin to the response that you would manifest if you saw a predator or a snake or something like that. So that's very instantaneous, you know. And then that'll prepare you for action; you'll get ready to do whatever it is that you need to do next—a very unpleasant thing.
But then it might take you even years to fully manifest the learning that would be necessary in a situation like that because there are so many things that you have to reconsider. First of all, the person might now appear to you as a threat. That's pretty immediate. So there's a biological, physiological response first. Your body reacts first; then you respond emotionally. That's gonna take a while, and you know, the emotional response might extend over days, or weeks, or months, or even years.
As you're doing that as well, you're going to try to start to re-sort out your interpretive schema so that it can adjust to the transformation that this... this error on your part, say, or this catastrophe or this betrayal—it has to adjust to whatever information that event contains. The orienting reflex can manifest itself over an extraordinarily long period of time. It's best to think about it as the initial part of what can be a very complex learning process.
Now, that was a standard idea in psychology for the longest period of time: that we created a detailed internal model of the world, we watched how the world was unfolding, we compared the two, and the physiology, the neurophysiology of this, was even understood to some degree, even by the Russians in the early 1960s because they basically localized... you could use complex EEG, electroencephalogram technology to localize where the orienting reflex was occurring in the brain, and basically, it appeared to occur, roughly speaking, in the hippocampus.
The theory arose that your brain, your cortex, let's say, produced a very complex model of the world, an internal model, and your senses were producing a model of the external world and the hippocampus was watching those things to see if they matched. If they didn't match, there was a mismatch signal, and that would be the orienting reflex. Your body would start to prepare itself for whatever that mismatch meant, and then you would engage in exploratory behavior to try to update your model.
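The comparator idea just described can be sketched in a few lines. Again, this is a caricature under stated assumptions: events are reduced to single labels, and the "model" is just the prediction of the next event. What it shows is the structure of the theory: prediction and observation are compared, a match passes silently, and a mismatch triggers orientation followed by a model update.

```python
# Minimal sketch of the classical "comparator" account of the orienting
# reflex: an internal model predicts the next sensory event, the senses
# deliver the actual event, and a mismatch triggers orientation and then
# a model update. All names here are illustrative.

def comparator_step(model, observation):
    """Compare predicted vs. observed event; orient and update on mismatch."""
    if observation == model.get("prediction"):
        return model, "match: carry on, no orienting reflex"
    # mismatch signal: prepare the body, explore, and update the model
    updated = {"prediction": observation}
    return updated, "mismatch: orienting reflex, update model"

model = {"prediction": "tone"}
for event in ["tone", "tone", "silence"]:
    model, outcome = comparator_step(model, event)
    print(event, "->", outcome)
```

In the theory's terms, the `if` line is the job that was assigned to the hippocampus: watching the cortical model and the sensory stream for a discrepancy.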
That was the standard theory; it was a very well-accepted theory. It has elements of cybernetic theory in it, but it was accepted enough so that when people first started to experiment with artificial intelligence, that's how they tried to make artificially intelligent systems. They tried to make ones that would model the world, act, and then compare the changes in the world to that model.
But that didn't go anywhere; it turned out that it's so difficult to see and model the world that people had no idea how complex that was. It was impossibly complex, as it turned out, and so that's part of the reason we don't have robots wandering around doing apparently simple things like walking—walking in an environment like this.
Now, when we look at the environment, we think: well, it's not that hard to look at. It's full of objects; they're just self-evident; there they are, and we can just wander through it. You know, we don't even do that consciously to any great degree because so much of that perception is presented to our consciousness without effort in some sense.
But the AI guys learned pretty quick that perceiving the world was way more difficult than anybody had guessed. And then this experiment really, in some sense, put a phenomenological punch behind that observation because one of the presuppositions of the orienting reflex theory that I just laid out was that: you were very good at detecting changes; that your nervous system would automatically detect change, anomaly, right?
Any mismatch between your model and what you expected, and then... well, the AI guys, I think, figured out, first of all, that was a big problem; that the problem of perception was much more complicated than that. You know, it's actually... it's out of that same set of observations, in some sense, that Postmodernism emerged in literature... in literary criticism, because, well, it turns out to be hard enough to see a normal object, like a chair.
Part of that is that if you just move the chair, it's really different than it was before. You could imagine how different it would be if you tried to paint the chair under both those conditions, right? If you really got good at looking at it, you'd find that even though, if I asked you what color this is, you'd say white, if you were actually painting it, you'd find that the colors of the chair in one location and the colors in the other, just because of the difference in lighting, are substantially different.
It was Monet, I think, who painted a very large series of haystacks in the French countryside in different seasons and under different conditions of illumination, just because he was exploring how radically different the same object could be as it moved through contexts. So it isn't even obvious why we think this is the same object when you move it; and the answer is something like: well, you can sit on it in both positions, which is not a description of an object, by the way, right? That's a description of something that's useful, something that's a tool, something that exists in relationship to your body; it's not an object.
And so... if you think that just looking at something like a chair is almost impossibly difficult and subject to interpretation, then imagine how difficult it is to perceive something like a text, you know, like a novel, because a novel obviously is subject to multiple interpretations and the interpretations are gonna depend on, well, at least in principle, on the intent, conscious and unconscious, of the author, of the time, of the place, of the culture, of the language.
Then that's just on the side of the production itself, but then there's the reader. It's like, I've read books when I was sixteen, and then reread them, say, when I was forty, and the book was almost completely different, as far as I was concerned, partly 'cause I knew what was in it the second time, and I didn't know what was in it the first time.
So the meaning that manifests itself out of a book is a consequence of all the complexity of the book plus all the complexity of the reader. So you know, if you're reading Russian literature, for example, and you've already read fifty Russian novels, you're going to be in a very different interpretive space than if, say, the Russian novel is the first novel you've ever read.
The Postmodernists were grappling with this as well as with many other ideas that I think contaminated their thinking, and their conclusion was: well, you can't extract out a canonical meaning from a text. It's so dependent on the situation that to say the text has an interpretable meaning is actually an error. Now, just because it's difficult to do something doesn't mean it's impossible, and there are massive holes in the postmodernist view. I think it's an unbelievably pathological view, personally.
But the thing is that there are reasons why it emerged, and the reasons were analogous to the reasons that the AI project initially failed and analogous to the reasons that this experiment turned out the way it did.
So I'm gonna show you this; many of you have seen this already, but as I said, it doesn't matter. Your job here is to count the times... there's a team of three people here, dressed in white, and there's a team of three people here dressed in black, and your job is to count the number of times the white team throws the basketball back and forth to the white team members.
Okay, we'll just run that.
Okay, well, so, obviously, or perhaps not so obviously, the number of times I believe that they threw it back and forth was sixteen, if I remember this correctly. But, of course, that's not really the issue because what happens in the middle of the scene is that a guy wearing a gorilla suit comes out into the middle of the screen and pounds his chest three or four times. He comes out quite slowly, as you saw; is there anybody in here who didn't see the gorilla?
No? Well, I presume all of you knew about this video anyways. So, Daniel Simons, who produced this video, has a couple of other ones where he shows that even if you're smart enough to see the gorilla, because you've seen the video before or you've heard about it, and he makes other changes in the background, you'll count properly and you'll catch the gorilla, but you'll miss the other changes in the background, and they're not trivial either. It's really quite remarkable.
He's produced other short videos, for example, where you'll be looking at a... like a field and a road will grow in it, occupying about a third of the photograph's space, and you'd think, well, yeah, you're gonna see that; it's like: you don't, you don't.
So, okay, so this threw a big spanner into the works. This sort of experiment, along with the AI failures — and we could even say, the postmodern dilemma — it's like, well, hmm... everyone, virtually... every psychologist would've predicted before this series of experiments that there's no damn way you'd miss that gorilla because your nervous system was actually attuned to change in the environment and that's a big change. And it's also a gorilla; it's something you would really think that you couldn't miss, you couldn't possibly miss, especially when it's occupying the center of the visual field.
So, well, this is part of a phenomenon called inattentional blindness, which is closely related to change blindness, and it helped psychologists who had been studying the visual system for a very long time to figure out, well, mostly figure out, exactly how blind human beings are, because we're way blinder than we think. And so we actually focus on much less of the world than we think, and we do that partly... it's not exactly obvious how we do it. It's kinda like we hold a still picture in our imagination and then fill in the details by using our central foveal vision, which is always dancing around like a pinpoint or a laser beam, moving back and forth.
We're assembling those little snapshots from the fovea into a relatively coherent picture. Maybe what happens is that I look at you, and then I look at you [points to a different student], and I've still got the information from looking at you, so my brain can sort of infer that that's remained stable. But like if I look at you, and I... I tried to learn how to do this 'cause you can look at something and then pay attention to the periphery; it's annoying.
But so, if I'm looking at you, I really can't make out your eyes [points to a different student]. I can more or less make out the fact that you have a head, especially if you move it. So your periphery is sort of like frog vision or dinosaur vision; it's much better at picking up movement than it is at picking up something that's staying still.
That makes sense because, well, if it's staying still and it hasn't already hurt you, then it's probably not going to hurt you. But if it's moving, then, you know, that's a good thing that you might pay attention to. And so if your periphery catches movement, then you'll focus your fovea on it; it’s like you go from really low resolution to really high resolution.
So the center of your vision is incredibly high resolution, but then it fades into low resolution as you move towards the periphery until it's out here, which would be about 170 degrees. If I concentrate on this hand, I can tell it's a hand, mostly when it's moving. I have no idea what color it is, but this one I can't see at all. And then, I can probably see my fingers - now, and then I can clearly see them if I look at them with my fovea.
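The resolution gradient described here, sharpest at the fovea and fading toward the periphery, can be roughed out numerically. This is an illustrative sketch only: the inverse rule below is a common textbook approximation for how acuity falls with eccentricity (the angle away from the center of gaze), and the constant `e2` is a hypothetical value, not a fitted one.

```python
# Illustrative sketch of the visual resolution gradient: acuity is
# highest at the fovea (0 degrees eccentricity) and falls off sharply
# toward the periphery. The inverse form is a standard approximation;
# the constant e2 here is hypothetical, chosen only for illustration.

def relative_acuity(eccentricity_deg, e2=2.0):
    """Acuity relative to the fovea: 1.0 at fixation, falling toward 0."""
    return e2 / (e2 + eccentricity_deg)

for ecc in [0, 2, 10, 45, 85]:
    print(f"{ecc:3d} deg from fixation: {relative_acuity(ecc):.2f} of foveal acuity")
```

Even under this crude rule, acuity a few degrees off fixation is already a fraction of what it is at the center, which is why the periphery is left to do what it does best: catch movement and cue the fovea where to land.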
Your vision is a very, very strange thing, and it's focusing on something very specific. So you're pointing your eyes at something very specific, and that's what you seem to see.
So then that opens up a whole new universe of questions. It's like... how do you decide what to point your eyes at? That turns out to be an insanely complicated problem. John Vervaeke talks about that all the time as the problem of relevance, and the issue is: well, there are many, many things in the world; there's an infinite number of things, let's say, and you're not gonna be able to see them, that's for sure, even if they happen to be changing, as it turns out.
So out of this mess, first of all, how do you pick what to look at? And second, even if you do pick it, how do you see it? 'Cause it's so crazily complicated. So that's the problem that we're going to try to unpack now.
Roughly speaking, what seems to have happened with the gorilla video is you have to take that first theory, that you make a complete model of the world, which is the objects in the world and how they're interacting, and you compare that to the objects in the actual world and how they're interacting.
You have to modify that model. You say: well, no, you're certainly not making a complete model; people should have known better anyways, because even subject to the limits of your perception, there are all sorts of things in the world that you can't directly perceive.
But what you're doing instead is something like: you're making a partial model of the world, but you're only making a partial model of the world that you're currently operating on with some goal in mind. You're also comparing that to a model of the world as it's currently unfolding.
'Cause the other thing that was implicit... this is really tricky. This is where you have to watch your implicit assumptions. The other thing that was implicit in the original cybernetic theory was that you have a model of the world that's complete, and then what you're watching is the actual world as it unfolds. That's not a model, that's just your perception of the object; but that also turns out to be wrong because your perception of the world as it unfolds is also a model.
What's happening is: you look at the world, the world you see is a model, and a very partial model at that, and then you compare it to the model that you expect, or desire, more accurately: desire. Although the initial models were framed in terms of expectation, because if you're in the lab listening to tones, it's not like you desire anything.
But mostly when you're acting in the world, you have desires. So, the experimental constraints skewed the data in some sense by making people assume that what people were doing when they walked through the world was expecting instead of desiring.
Anyways, you have a model of the world that's generated as you look at it; you have another model of the world that's something like the world that you desire. Then you compare both of them, and they can mismatch, and they can mismatch in a way that upsets your current pursuit; that's the critical issue.
You don’t see the anomaly unless it upsets your current pursuit. You kind of know that too because when you're... like, while I'm lecturing to you guys, you know, mostly you're sitting still, but people are moving their arms and they're moving their glasses and they're shifting their feet, and generally, I don't see any of that, because what difference does it make? You know, it's not relevant to the ongoing... to the ongoing what? Ongoing contract? The ongoing series of interactions? It's something like that.
So as long as you keep your movements bounded within a range that doesn't interfere with whatever it is we're doing, then it's going to be as invisible to me as the gorilla was when you were counting the balls. The cool thing about the gorilla experiment, or one of them, is that the reason you were blind to the gorilla was because you were counting the balls.
That's so fascinating because what it shows to a huge degree, to an unfathomable degree, is that the value structure that you inhabit determines what you perceive. It doesn't just determine what you expect or want; it bloody well determines what you see. That makes the world a completely different place; no one really expected that.
So if you watch the basketball, you see the basketball. If you stop watching the basketball, well then you see the gorilla. So the first question that arises from an experiment like that is: just exactly what is it that you don't see in the world? And the answer is: all of it. You see so little it's unbelievable; you see that tiny amount that's necessary for you to undertake the next sequence in your plotted movements...? It's something like that.
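The claim that your current goal gates what you perceive can be sketched as a toy filter. This is a deliberately crude caricature, with invented tags and events, but it shows the structure of the point: events that don't bear on the task at hand never make it into what you "see", which is how the gorilla gets filtered out while you're counting passes.

```python
# Toy sketch of goal-gated perception: out of everything happening in a
# scene, only the events tagged as relevant to the current goal survive
# into "perception". Events and tags are invented for illustration.

def perceive(events, goal_tags):
    """Return only the events relevant to the current goal (toy filter)."""
    return [e for e in events if e["tag"] in goal_tags]

events = [
    {"what": "white pass",            "tag": "white-ball"},
    {"what": "black pass",            "tag": "black-ball"},
    {"what": "gorilla walks through", "tag": "animal"},
    {"what": "white pass",            "tag": "white-ball"},
]

counting_white = {"white-ball"}      # the task: count the white team's passes
print(perceive(events, counting_white))
# only the white passes survive the filter; the gorilla never makes it in
```

Change the goal to watching for animals and the gorilla appears while the passes vanish, which is the "stop watching the basketball and you see the gorilla" observation in miniature.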
But then that becomes very complicated too because it isn't obvious how you can conceptualize or how you can determine what your next movement is. Because it's not like you just add up movements and make up your life. It's not that simple, and it's related to the novel problem, the problem of meaning in a literary work.
So imagine: you're trying to specify the meaning of a literary work. Well, there's meaning in the word, but the meaning of the word is dependent on the phrase within which it's embedded, and then the meaning of the phrase is dependent on the sentence that's embedded in, the sentence in the paragraph, and the paragraph in the chapter, and the chapter in the book, and the book in the corpus of books of that sort, and then within the culture, and then within whatever your peculiar personal experience is—all of those things, nested, are operative to some degree when you're extracting out the meaning at any level of analysis; they're all operating simultaneously.
So you might say, well, what are you doing in this classroom? Well, the answer is: sitting in a chair. But that's… obviously that's a very short-term and context-independent answer. But you're also attending to what I'm saying, hypothetically, and you're attending to some of it and not to other parts; you're thinking about some parts and not other parts.
You're also attending to a class, and a class is a sequence of lectures, and that's embedded within your desire to finish up the semester and then to finish up the year, and then to get your degree. Then you nest that inside whatever the reason is that you're getting your degree, and then maybe that's nested inside your career goals, and that's nested inside your life goals—and that's nested inside your ultimate values, which you may or may not even be aware of.
I could say, well, you're sitting here because it serves your ultimate values. Well, that's true; it seems a bit abstract to be useful, right? It's so vague out at the outermost levels that it doesn't really have much specificity, right? So it seems to lack information, but by the same token, if I said what you're doing is sitting there, it has the same problem of too restricted meaning because of overspecificity.
So, there's some level in there that you would interpret as meaningful, God only knows why, and that's the level... there's a natural level of perception for that sort of thing.
So, for example, when children learn to name an animal, for example, they'll name "cat"; they don't name the species of cat or the subspecies of cat. They don't confuse cats with dogs, even though they're both in the category of "four-legged furry mammal." So, why not call the cat and the dog "furry mammals"? Well, children don't do that; they go to "cat" and "dog."
People who've studied the acquisition of language have found that there are basic-level categories that children pick up first, and they're often represented with short words. The words are short because they've been around a long time because they seem to reflect the natural level at which people perceive the world. But none of that's obvious, you know? I mean, you could just lump all animals together, for that matter, and just call them "animals," which we do sometimes.
So... anyways, so it's very difficult to specify the meaning level, and it's not very easy at all to figure out how we do it. And so that's partly what I'm trying to unpack. So... here's part of the issue.
So let's say that you have a computer. Yeah, I have a story for this. So one time, when I was in Montreal, I was using my computer. I was in my apartment, and I was typing out an essay, and it crashed.
So what happens when your computer crashes? Well, you know, usually you'd utter some sort of curse, and it's interesting that you do that because the circuit that you use to curse with is the same circuit that monkeys use to detect eagles or leopards or snakes. And so, when there's a bunch of monkeys together, you know, they're not all preyed on by eagles and leopards and snakes, but you know, there's usually a predator in that category for every single monkey population.
When the monkeys are watching, they have an emotional utterance that the most nervous monkey might utter first that basically says, you know: hide from the eagle; get out on the thin branch so the leopard can't eat you; and look the hell out for the snake. But there's a circuit that's linked to emotions that produces that instinctive utterance that represents that category.
That's the same circuit that you use when you curse. It's not the same circuit that you use for normal language. We know that because that circuit is activated in people who have Tourette's syndrome and swear involuntarily; that's called coprolalia. You think, well, why in the world would you have a neurological condition that makes you preferentially curse? Well, that's the reason: you don't just have one linguistic circuit; you have one for "oh my God, there's a predator!"
That's the one that will get activated when something happens like your computer crashing because, you know, you're an evolved creature. Those old circuits that were there, say, 30 million years ago to deal with exceptions are the same circuits you're using now to deal with your computer; why else would you wanna hit it? Right? 'Cause that's what you want: give it a whack! It's like: it doesn't behave - whack! Aggression right away, well, that's some clue as to the category system that you're automatically using to encapsulate the event.
Okay, so fine, what do you do when your computer crashes? Well, first you curse, and then you do the stupid things that idiot primates do when they're trying to deal with something that's way too complex. Maybe you turn it on and off, right? That didn't work, and so then I thought, well, maybe the power bar went, so I checked the power bar, and I turned it on and off, and nothing happened.
So I brought a light behind the computer, and the light wouldn't go on, so I thought, aha! I must have blown a fuse! So I went to the fuse box and took a look, but the fuses were fine. And so I thought, well, the power's gone out. So then I went outside, and the power was out. None of the street lights were working; the power was out everywhere.
It was seriously out, because this was the time that almost the entire northeast power grid in Quebec collapsed. The reason it collapsed was a solar flare, which is something that happens reasonably often. A solar flare is basically, you know, like a million hydrogen bombs going off at the same time 93 million miles away. It produces a tremendous electromagnetic pulse that passes through the Earth's atmosphere, induces a spike in current in the main power lines, and blows the whole system.
Just so you know, an event like that, one of the big ones, happens about every 150 years; there was a big one in 1859, the Carrington Event. If we had one now, it would take out all of our electronics: satellites, computers, cars, everything, gone. That's a big problem. No one knows what to do about it. One missed us by about nine days, I think, back in 2012.
So that's something else to worry about if you're inclined to worry about those sorts of things. Um, okay, so what did I conclude from that? Well, the function of my computer was dependent on the stability of the sun.
It's not the first thing you check when your computer crashes, right? You don't run outside and go, hey, well, yeah, the sun's still there, no problem, I can cross that off the list. But to me, it's an extraordinarily interesting example of the invisible interdependence of things, and of our tendency to fragment the world: what we seem to do is look at things at the simplest level of analysis that actually functions.
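The escalating sequence of checks in the blackout story (computer, power bar, fuse box, grid) is a nice instance of that "simplest level first" heuristic. Here is a minimal sketch of it, with invented level names and stubbed-out check functions; nothing here is from the lecture itself:

```python
# Diagnose by escalating from the cheapest, most local explanation
# to progressively larger systems the device depends on.
# The levels and their check functions are illustrative stand-ins.

def diagnose(checks):
    """Run checks in order of increasing scope; return the first
    (i.e., simplest) level whose check reports a fault."""
    for level, has_fault in checks:
        if has_fault():
            return level
    return "no fault found"

# Reconstructing the lecture's sequence: the fault was at the grid level.
checks = [
    ("computer",          lambda: False),  # rebooting revealed no local fault
    ("power bar",         lambda: False),  # power bar was fine
    ("fuse box",          lambda: False),  # fuses were fine
    ("neighborhood grid", lambda: True),   # street lights out everywhere
]

print(diagnose(checks))  # prints "neighborhood grid"
```

The point of the ordering is exactly the lecturer's: you only pay for the broader, more expensive levels of analysis when the cheap local ones fail to account for the anomaly.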
So, for example, when you're interacting with your computer, you're not really interacting with your computer at all. You're interacting with the keyboard, sort of one key at a time, and with the symbols on the screen. As long as the computer is working, you don't care about it at all. You don't give it a second thought, and you certainly don't care that it's dependent on, well, the electrical power, for example, and that the electrical power is dependent on, you know, who knows how many people who are out there right now, or were out there last night when it was freezing rain, fixing power lines and freezing while they did it, so that your stupid computer doesn't malfunction while you're watching cat videos.
You know, I mean, there's this incredibly dynamic living system, social, and economic, and political, that has to remain dead-stable in order for us to have access to functional, non-fluctuating electricity 100% of the time. You also don't think, well, the stability of your computer is dependent on the stability of the political system, but of course it is, because if the political system mucks up and the economic system goes, then people don't go out and work to fix things; and things are breaking all the time. That's their normal state: broken, not working.
And so all of that is, in some sense, folded up not only inside your computer but inside your tiny conception of the computer while you're using it. You only get a glimpse of what the computer is really like when it doesn't work. That's when it becomes a complex object, right? As long as it's working, your stupid perceptions are perfectly fine to get the job done. And that's another indication of what you're using your perceptions for: to get the job done. It also raises the question of how you specify exactly the level of resolution that you should be operating at.
I haven't sorted that out completely, but it's something like: you default to the simplest level that moves you to the next step. And generally that is what you should do if you're having an argument with someone you have a long-term relationship with, for example. You can start by arguing about what the little argument is about, or you can immediately cascade into whether or not you should have a relationship with this person at all, or even into whether or not you should bother with relationships at all.
Which is, you know, every time there's an argument, that question is a reasonable question to have emerge or at least it's in the realm of potential reasonable questions. But it doesn't seem useful to jump to the most catastrophic possible explanation every time some minor thing goes wrong. That's what happens to people who have an anxiety disorder. That's what happens to people who are depressed, right? They can't bind the anomaly.
What happens is that the anomaly tends to propagate up the entire system until it takes out their highest-order conceptualizations. You know, so if you're seriously depressed, maybe you'll see a news story about something stupid, and you'll think: Jesus, why should I even be alive? And I'm dead serious about that. If you score around 60 on the Beck Depression Inventory, which puts you way the hell up in the depressed range, anything negative that happens to you will trigger suicidal thoughts, roughly speaking.
Sometimes even positive things will do it, because there are very few positive things that happen that don't carry with them some threat of change or transformation. So, you know, one mystery, and it's a big mystery, is why you don't fall into a catastrophic depression every time something little goes wrong; the level of analysis is not self-evident. You see this with people who are high in neuroticism too: trivial fluctuations at their workplace, or in their relationships, or in their health will produce a very disproportionate negative emotional response.
It's part of the range of normal emotional responses. Some people are very, very high in neuroticism, so everything upsets them; some people are very low. The reason that whole range exists is because sometimes you should get upset when some little thing happens to you ‘cause it's an indication that the whole damn environment has got dangerous on you, and sometimes you should just brush it off because its net consequence is low.
But how do you calculate that? It's a very, very difficult question. So, you know, when your computer goes wrong, you have to pick the proper level of analysis to fix it. You could say, well, there's something wrong with the circuit board; maybe there's a crack somewhere that it's soldered. Or, you know, the people building microchips now have run into a crazy problem: microchips keep getting smaller and smaller, right?
So the little wires are now down to nearly atomic width, the width of maybe 20 atoms or something like that; they're really getting thin. And that produces another problem, which you wouldn't expect: at the quantum level there's uncertainty about where electrons might be. Normally that doesn't matter, because the degree of uncertainty in where your electrons are is far smaller than the scale you operate at, so it's basically irrelevant.
But down at that near-atomic level, where these microchips are starting to be produced, sometimes the electron will be outside the wire, and that means the wires are getting so damn small that they'll short-circuit themselves, because the electrons aren't localized enough to stay where they're supposed to be in the wires.
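As an aside, a rough back-of-envelope check (not from the lecture; the 0.235 nm silicon bond length and the use of the thermal de Broglie wavelength as the measure of quantum "smearing" are my assumptions) shows why a 20-atom-wide wire is in trouble: the scale over which a room-temperature electron's position is quantum-mechanically spread out comes out comparable to the wire width itself.

```python
import math

# Physical constants (SI units)
h  = 6.626e-34   # Planck constant, J*s
m  = 9.109e-31   # electron mass, kg
kB = 1.381e-23   # Boltzmann constant, J/K
T  = 300.0       # room temperature, K

# Thermal de Broglie wavelength: a standard estimate of the length scale
# over which a thermal electron's position is quantum-mechanically spread.
wavelength = h / math.sqrt(2 * math.pi * m * kB * T)

# A wire about 20 silicon bond lengths across, as in the lecture's example.
wire_width = 20 * 0.235e-9

print(f"electron wavelength ~ {wavelength * 1e9:.1f} nm")  # ~ 4.3 nm
print(f"wire width          ~ {wire_width * 1e9:.1f} nm")  # ~ 4.7 nm
```

When the two numbers are this close, the classical picture of an electron sitting neatly inside the wire breaks down, which is the situation the lecture is describing.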
Well, the reason I'm pointing that out is that a problem in a system can exist at any of that system's multiple levels, and it isn't obvious where to start. A lot of political arguments are like that, you know... maybe a company goes bankrupt and its shareholders get... maybe a bank fails, so people can't withdraw their money. One response is: well, that just shows you how rotten the capitalist system is.
It's like, well, maybe that is what it shows, but that might not be the most appropriate level at which to start. And so again, it's like Occam's Razor in the scientific world, right? You want to use the simplest explanation. But it's not quite the simplest explanation that fits the facts, because you don't organize your perception by facts; it's more like you want to use the simplest tool you can possibly manage to fix the problem.
So when your car has a flat tire, you don't buy a new car; you fix the flat tire, if you can figure out how to do it. You go for whatever will put the thing back together with the minimum expenditure of time and effort; it's something like that. And you care about that because you have limited time and limited resources.
And so it makes sense for you to conserve them. I'm telling you this partly for practical reasons too, because it's a very useful thing to know if you're arguing with someone. You want to argue about the smallest possible thing that might fix the problem. You want to really specify what's going on at a micro level and ask: what's the minimum I would require to be satisfied with this outcome?
This is especially true in intimate relationships. If someone is bugging you and you want them to change, you think: how can I be minimally bothered by this, and what's the tiniest amount of change I could request that might satisfy me? 'Cause otherwise, the argument will come unglued, and every time you two try to discuss a problem, you'll end up talking about whether you should even be together.
Then you're done, 'cause you'll never solve a problem, and eventually you won't be together because you never solve a problem.
So, okay, here's the way to think about perception. So, let's say this is the thing you're trying to look at. I call that the thing in itself.
Now, that's a schematic of a thing in itself. The thing in itself is an old philosophical concept; I think it came from Kant, but I'm not sure about that; it might be older. The thing in itself is what you could see if you could see everything about something, but you can't, so it's a hypothetical entity. Maybe, who knows, if I were looking at you as the thing in itself, I could see every level of your being, from the sub-atomic level up to this level of perception.
Then beyond, I could see your family relationships; I could see how they were nested in the societal relationships, economic relationships, political relationships, the eco-system as a whole. Like, I would see all those levels at the same time. Of course, I don't, 'cause I can't. What I see instead is, first of all, you are radically simplified by my senses because they are just not acute enough to see you at a microscopic level, and they're not comprehensive enough to see your connections across time.
So, my senses filter a bunch of you from me right away, and then I'm also filtered from you by your willingness to act the way I want while we're together, 'cause you could be doing all sorts of strange things at the moment, but you're not.
So, you're helping me simplify my perception of you by agreeing to play the same game that I'm playing while we occupy the same space, and that's basically politeness. That's the mark of someone who's well socialized. You walk in somewhere, you get the game, play the game, and you don't scare the hell out of everybody.
That's partly how we keep our emotions stabilized because, you know, if you're like a Freudian, you think, well, as long as your ego is well constituted, you can keep your emotions under control. It's like, yes and no; mostly no. I like the Piaget idea better, which is if you're well socialized, you're awake enough to identify the game that's going on wherever you go, and then you play that game immediately, and so do all the other socialized primates.
Then you can just understand the game. You don't have to understand them, thank God; you can just understand the game, and as long as the game continues, you don't have to be nervous, because you at least know what's going to happen, and maybe you even know how to get what you want in that game.
So that, again, that's really worth thinking about because we talked about this before about why people want to maintain their culture. It isn't just because their culture is a belief system that helps them orient themselves in the world; it's because a belief system is a game that everyone who shares that belief system is playing.
The fact that everybody's playing means nobody needs to get upset. So it isn't that the belief system is directly inhibiting the emotions; that isn't how it works. And it's not like the culture is just a belief system; it's only secondarily a belief system, man. Mostly it's a game that people are actively engaging in, and that's way more important than the beliefs that go along with it.
You don't even need the damn beliefs. You know, that's why wolves can live with each other. I don't know that wolves have a belief system exactly; mostly they have a game, the wolf game, roughly speaking, and all the wolves know how to play it. So that's how they keep themselves organized in their packs.
A lot of it's externalized in action. So, anyway, the thing in itself is a very complicated thing. It has multiple dimensions and multiple levels. And it's worse than that, because it doesn't only have multiple levels; all those levels move across time, and every one of those levels shifts as it moves across time.
And so I like to think of the thing in itself like a symphony. I think that's a good model. I think that's why we like music, in fact. Because music shows you a multi-level reality that unfolds and shifts across time within some parameters, right? Because it is not just chaos.
The music has an element of predictability and an element of unpredictability, and it has these multiple levels, and that's sort of what everything in the world is like. It's what the world is like.
So even this is just a conceptual model of the thing in itself. First of all, it's only got two dimensions instead of three; it could be a cube. And even a cube has three dimensions instead of four, because if it were a cube moving through time, it would be a cube that transforms and shifts as it moves across time. That's what the thing in itself is, but that's too damn complicated.
So then the question is, when you look at it, what do you see? And the answer is, to some degree, it depends on what you want to use it for. And so I would say, well, here: look at the different ways you can look at this. You might say, what is this? Somebody could say, well, it's a rectangle. Would you say that's correct?
It's like, well, it's not correct because there's not a one-to-one correspondence, but it might be a useful conceptualization if you think about that as a box. It could contain that, and if you are carrying the box, you only have to be concerned about the box, and so that would be fine. It's a good functional simplification.
That one's a little higher resolution, because it says, well, yeah, it's actually four rectangles. And that one says, well, wait, think of it as an orchard that someone's looking at from the top. If you want to figure out how to walk from south to north, well, you've got a little map there, because you can think of those as bars instead of collections of dots.
Piaget showed that children will automatically do this. For example, if you take six dots and put them in a row, then take the same six dots and stretch them out so the row is this much longer, and then ask the child which row has more dots, the child will say there are more dots where the row is longer, because they're flipping, in some sense, between the perception of the individual dots and the perception of the shape that the array of dots makes.
The shape is longer, and you can see it as a rectangle, so they think: longer is bigger; bigger is more; there's gotta be more dots. Then there's this one, which is sort of an amalgam of this one and this one, and then that one, which is the highest-resolution model of it. That's still a simplification. And, you know, what I like about this diagram is that people say, well, the facts are the facts, and what we're disagreeing about is our opinion about the facts.
It's like, no... yes... you have an opinion about the facts, but the world is so horribly complex that you can actually disagree about the facts themselves. I think an ideology does that to people very commonly. So I saw this movie once that Naomi Klein made—if I tell you the same story, tell me, 'cause I don't wanna tell you the same story, but I might.
So she went down to Argentina after a bunch of money had gotten out of Argentina because of a financial collapse, and she went to a factory that had been padlocked. It was a heavy machinery factory. The workers had decided they were gonna undo the padlocks and go build machines, you know, to hell with the owner who shut it down!
She made this movie following these workers around, showing how catastrophic their lives had become because they'd lost their livelihood in this big financial crash, and that was really interesting. But then she went and interviewed the guy who owned the factory, and she treated him like a cipher, in some sense, instead of asking him how he got the factory, what he wanted to do with it, how it fit in with his life plans, or why he shut it down instead of continuing it. She didn't get the backstory on him; she just left him in the "evil capitalist" box and went on with the film.
It wasn't that what she did wasn't true, but it was only half true, and it was half true because she could perceive the complexity of the workers, for whom she had sympathy, while as far as she was concerned the enemy, the owner, had no complexity; he was just a "bad capitalist," and that's how it was left in the movie.
I found it profoundly unsatisfying, because I wanted to know: okay, these workers are suffering, but it's not self-evident that you'd want your damn factory closed. You'd think you'd want it open so you could be building things. It's like: who are you? What are you doing? Why do you think it's justifiable? Ask him a question about it.
Well, you can take this infinite set of facts and then you subject it to your filters, and you let some of the facts through, and they're facts, but what about all the facts that you don't let through? That's the thing. And that's what the gorilla video shows too. It's like, yeah, yeah, you've got the basketball count right, but you missed the big primate, and you might say, well, your priorities were a bit skewed in that circumstance because you were rearranging the deck chairs as the Titanic sank, as the old joke goes.
It's very useful to always keep in mind that it isn't just your damn opinion that's biased, although it is; it's your perceptions that are biased, and that's even more fundamental. So you might say: you can't see the thing in itself, 'cause it's too complex, so you perceive it as simpler than it is.
Some of that perceptual simplification is dependent on your aims. That's a vicious one, because it pulls the value structure that you're ensconced within into your perceptions; it pulls it into the realm of facts itself. Then you do another... I think of this as a compression. You can compress a photograph by getting rid of redundant information, and that's sort of what's happening here: each of these little black rectangles compresses all of those squares.
It's like we're going to treat those as if they're greyish-black. The same thing happens here, so we're blurring across them; we have a much lower-resolution image here. You take the thing in itself, you perceive it as a low-resolution representation, and then you take that low-resolution representation and replace it with a word.
The word is a twofold compression, and then when someone tosses you the word, you unpack it into a low-resolution perception and then maybe into the world itself, if you can do that—but probably not. So that's what we're doing: we're taking the complex world, we fold it into a simple perception, we fold that into a word, we throw the word to someone else, and they unpack it. The only way you can unpack it, of course, is if you'd had enough similar experience so that you have the reference for the word already in your experience, which is why you have to use simplified language with children, right?
Because there's no point tossing a child a concept that he or she can't unpack. So we compress a very complex reality through a very, very small keyhole that's basically our cognitive process.
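That two-stage compression, from a detailed thing to a low-resolution percept to a single word, can be sketched concretely. This is only an illustration of the idea; the grid, the block-averaging, and the labeling rule are all invented:

```python
# Two-stage compression, loosely mirroring the lecture's diagram:
# a detailed grid -> a low-resolution percept -> a single word.

def downsample(grid, block):
    """Average each block x block tile into one value (first compression)."""
    n = len(grid)
    out = []
    for i in range(0, n, block):
        row = []
        for j in range(0, n, block):
            total = sum(grid[r][c]
                        for r in range(i, i + block)
                        for c in range(j, j + block))
            row.append(total / block ** 2)
        out.append(row)
    return out

def label(percept):
    """Replace the whole percept with a word (second compression)."""
    mean = sum(sum(row) for row in percept) / (len(percept) * len(percept[0]))
    return "dark" if mean > 0.5 else "light"

# A 4x4 "thing in itself": mostly dark (1) pixels with a few light (0) ones.
thing = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
]

percept = downsample(thing, 2)  # low-resolution version of the thing
print(percept)                  # -> [[0.75, 0.75], [0.75, 0.75]]
print(label(percept))           # -> dark
```

Unpacking runs the other way: handed only the word "dark", a listener can reconstruct something like the percept only if their own experience already contains a referent for it, which is the point about simplified language for children.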
Okay, so then here's the next kind of argument; this goes along with the science-religion argument that I was making earlier, which I wanna unpack a little bit more. I think that fundamentalists and atheistic scientists have the same problem. The fundamentalists, so we can say the Christian fundamentalists in the U.S., make the proposition that biblical stories, we'll call them mythological stories, are literal representations of the truth.
But... and that might be true depending on what you mean by "literal." But what they mean by "literal," or what they attempt to make "literal" mean is that they're in the same category as scientific facts because they don't have the idea that there are different ways of approaching truth and that truths can serve different purposes.
They don't have the sense that your definition of "truth" is actually something like a tool, rather than an ontological statement about the reality of the world. The fundamentalists basically make the proposition that the idea that God created the world in six days, roughly six thousand years ago, is literally true, and they get that estimate, by the way, by going through the genealogies in the Old Testament, adding up the ages, and figuring out how long before Moses Adam lived.
Archbishop Ussher did that, back in the mid-1600s, and more or less it's been accepted as canonical ever since. Then the scientists say: well, yeah, those are empirical claims; they're just wrong, see? And that's the only difference there is between the fundamentalists and the atheist scientists.
The fundamentalists say: those are fundamental scientific truths, and they're correct. The scientists say: well, they're scientific truths; they just happen to be wrong. I think that's a stupid argument, personally, for a bunch of reasons. One is that the people who wrote the ancient stories we have access to were in no way, shape, or form scientists. You know, modern people tend to think that you think like a scientist, and that people have always thought that way.
First of all, you do not think like a scientist. Even scientists hardly think like scientists. But if you're not scientifically trained, you don't think like a scientist at all. So one of the things, for example, that characterizes your thinking is confirmation bias. If you have a theory, what you do is wander around in the world looking for reasons why it's true, and a scientist does exactly the opposite of that in the little tiny, narrow domain where he or she is actually capable of being a scientist.
What they do is take a theory and look for ways to prove it wrong. But, believe me, you don't run around doing that. You can train yourself so that now and then you can do it. You can learn to listen to people, for example, on the off chance that you might be wrong, but that is by no means a natural way of thinking.
Of course, the fundamental philosophical axioms of the scientific method weren't developed until Descartes and Bacon and who else...? There's one more... anyways, the name escapes me at the moment. You can argue about when science emerged, but it certainly emerged in its articulated form within the last thousand years, and I think you could say, even more specifically, within the last five hundred.
Now, you might argue with that and say: what about the Greeks and other peoples who were fairly technologically sophisticated, or who invented geometry, or that kind of thing? But those were bare precursors to the idea of empirical observation. Aristotle, for example, when he was writing down his knowledge of the world, never thought to actually go out into the world and look to see if what he assumed about it was true.
It certainly never occurred to Aristotle to get 20 people to go look at the same thing independently, write down exactly how they went about doing it, compare the records, and then extract out what was common. That seems self-evident to us to some degree, but you know... it was by no means self-evident to anyone five hundred years ago, and people still don't do it.
If you know anything about the history of ideas, it's not plausible to posit that stories about the nature of reality that existed more than 500 years ago were scientific in any but the most cursory of ways.
So why we keep having that argument is somewhat beyond me. Part of the reason, though, is that everyone, including fundamentalists, really believes in scientific facts, even if they hate it. They'll use computers; they'll fly. A computer wouldn't work unless quantum mechanics were correct. The fact that you use a high-tech device indicates, through your action, that you actually accept the theories upon which it's predicated, right? The same goes for flying, the same for anything you do in a complex technological society.
You're stuck with it; you're reading by the lights. Do they work? Yeah, they work. Well, so it's really hard for people who are trying to hold onto a way of looking at the world that appears to contradict the scientific claims when everything they do is predicated on their acceptance of the validity of the scientific claims. It's really problematic for people; it's problematic in a real way, I think, because one of the problems with the scientific viewpoint is it doesn't tell you anything about what you should do with your life.
It doesn't solve the problem of value at all; in fact, it might make it more difficult, because one of the fundamental scientific claims, roughly speaking, is that every fact is of equal utility, at least from a scientific perspective. There's no hierarchy of facts. That's not exactly true, because you can think of one theory as "more true" than another, but that boils down to saying it's more useful than another, so I don't think it's a really good exception.
Okay, so fine, you've got the scientific atheists on one end and you've got the religious fundamentalists on the other, and what they both agree on, whether they like it or not, is that there's so much power in the scientific method that it's difficult to dispute the validity of scientific facts, and they seem to exist in contradiction to the older, archaic stories if you also accept them as fact-based accounts.
So what do we do about that? Well, if you're on the scientific atheist end of things, you say: well, those old stories are just superstitious science, second-rate, barbaric, archaic forms of science; you just dispense with them, they're nothing but trouble. If you're on the fundamentalist side, you say: well, we'll try to shoehorn science into this framework, and really that doesn't work very well; it doesn't work very well with the claims of evolution, for example.
In fact, it works very badly, and that's a problem, because evolutionary theory is a killer theory. It's really, really hard to dispute. It's not a complete theory, and there are lots of things we don't know about evolution, but trying to handwave it away isn't going to work without dispensing with most of biology. So that's a big problem.
Here's another way of thinking about it: you don't just need one way of looking at the world; maybe you need two ways of looking at the world. And I'm not exactly sure how they should be related to one another, like which should take precedence under which circumstance. But one problem is: what's the world made of? You know, what's the world, conceptualized as an objective place, made of?
And the other is: how should you conduct yourself while you're alive? There's no reason to assume that those questions can be answered using the same approach. I mean, physics has its methods, chemistry has its methods, and biology has its methods. So a method for obtaining the truth can be bound to a domain.
Why would we necessarily assume that you could use the same set of tools to represent the world as a place of objects and to represent it as a place in which a biological creature would act? I mean, anyways, I'm suggesting that we don't view it that way—that we have two different viewpoints. Maybe they can be brought together, although it's not obvious how. But that it's not a tenable solution to get rid of one in favor of the other.
I think the reason for that is that you need to know how to conduct yourself in the world. You have to have a value system; you can't even look at the damn world without a value system. It's not possible; your emotional health is dependent on a value system. The way you interact with other people is dependent on a value system. There’s no getting away from it.
If you say, well, there's no justification for any value system from a scientific perspective, and you draw the conclusion that no value system is valid, where the hell does that leave you? There's no down, there's no up, there's no rationale for moving in any direction; there's not even really any rationale for living.
So, people say things like: well, why the hell should I care what happens in a million years? Who's gonna know the difference? It's like, yeah, yeah, true; stupid, but true. And the reason I think it's stupid is because it's just a game.
I can take anything of any sort and find a context in which it's irrelevant. It's just a rational game. It's like who cares if a hundred children freeze to death in a blizzard? What difference is it gonna make in a billion years? Well, what do you say to someone who says that? You say, well, seems like the wrong frame of reference, bucko. That's what it looks like to me; you know, because at some point you question the damn frame of reference, not what you derive from it.
It certainly seems to me that situations like that don't allow you to use that kind of frame of reference. There's something inhumane about it, and that trumps the logic, or at least it should. If it doesn't, then all hell breaks loose, and that doesn't seem to be a good thing.
Okay, so I have this quote from Shakespeare here. He says: "All the world's a stage, and all the men and women merely players; they have their exits and their entrances; and one man in his time plays many parts." Well, it's the sort of thing that you'd expect a dramatist to pen, but that's how he looked at the world.
We still watch Shakespeare's plays some hundreds of years later because there seems to be something essential captured in them; something about how people do act, but more importantly, I think, how people should and shouldn't act. Because what fun is it going to a play that doesn't outline how someone should and shouldn't act? You want a good guy or a couple of them; maybe they can be complex interminglings of good and bad, you know, that makes it more sophisticated.
You want a bad guy, or a bad... you always want to see that contrast, either within a character or between characters. It's because you want to know how to live properly, how to be a good person. And you want to know how to live improperly, how to be a bad person, so you can watch out for people like that, or so you can figure out what that means for yourself; it's compelling.
And that's another thing that's worth thinking about: why is it compelling? It's compelling to everyone. That's the thing that's so cool. There aren't that many phenomena that you can point to that are compelling to everyone. Music is close; it's a very rare person who doesn't like at least some genre of music, no matter how narrow.
But the other one is stories; you're hard-pressed to find someone, especially if they're younger, who doesn't like stories. Why? Is it a waste of time? Or is there something going on? Well, I think it's not only not a waste of time; it's actually the most fundamentally important thing you can possibly do because there's no difference between understanding stories and figuring out how to get along in the world.
There's a tight relationship between the story that you inhabit that structures your behavior and the games that Piaget talked about that organize people's behavior, to some degree. The reason we can all sit in this room together like this is that a huge chunk of the value system that guides our behavior is shared.
So I'm lecturing and you're sitting in the classroom, and that distinguishes us to some degree, but you know that that's partly merely a consequence of the difference in our age. It's the same trajectory; we just happen to occupy different positions in a value hierarchy that we both accept. As long as you feel that that's fair and just, then you're not gonna object to it.
But I'm here in the classroom for many of the same reasons that you're here in the classroom, if you look at the higher-order parts of the value structure. And I've tried to figure out what you find if you push the question of why you're doing what you're doing right now to its ultimate limit, to the point where you can't get a story that's superordinate to it.
It's something like, well, you believe that the investigation of the world to acquire knowledge is worthwhile; otherwise, what the hell are you here for? Even if 80% of your motivation is to get a good, stable job, fair enough, there's still something outside of that because the whole culture says, well, you're more likely to be able to function properly in a good, stable job if you're the sort of person who knows how to go out in the world and forage for information usefully.
And I think that's very much analogous to the hero story: you go out and you search the unknown to find something of value. Fundamentally, that's what we're doing in the classroom, and the reason we can all organize our behavior is that we accept that framework: consciously, which means we know how to articulate it, or unconsciously, in which case it doesn't matter; we know how to act out the patterns.
Whether we can say the rules or not doesn't matter, same as a wolf pack. We know the procedures, and you could describe them with an articulated value structure.
Let's take a break.
Okay, so let's go back to the complexity problem. You see, I actually think in some sense that's the fundamental problem. When you read about the terror management theorist types, they think that death is the fundamental problem, and that's a good argument because it's definitely a fundamental problem. But I think it's a subset of the complexity problem, and the reason I think that is because sometimes people's lives become so complex that they'd rather be dead.
The reason they seek death through suicide is to make the complexity go away, because complexity causes suffering if it's uncontrolled. You know, things just get beyond your control, and that can happen if you're hit by three or four catastrophes at the same time. Maybe the political system collapses, there's hyperinflation, you lose your job, one or two people that you love die, and maybe you get cancer, something like that.
Those things happen to people, and they just think, well, there's no getting out of this; it's just too much. You know, one of the interesting things about being a psychologist is that what you learn, if you're gonna be a psychologist, is that people supposedly come to you with mental illnesses, and that's almost never true. People come to you because their lives are so damn complicated that they cannot stay on top of them in any way that doesn't make it look like they're just gonna get more complicated.
That causes symptoms, you know? There's this old idea, a sort of metaphor for genetic susceptibility: take a balloon and blow it up until it's beyond its tolerance, and it's going to blow out at the weakest point. Well, that's sort of what a genetic susceptibility is. If I just keep adding complexity on top of you, at some point you'll blow out at your weakest point.
Maybe you'll get physiologically ill, maybe you'll start drinking, maybe you'll develop an anxiety disorder, maybe you'll get OCD, maybe you'll get depressed. Whatever, there'll be something about you that's the weakest point, and if I just push, that's where you'll blow out. So that's a mental illness, but those things almost never just happen.
Sometimes, but not very often, usually people have just been hammered like two or three different ways, and then they collapse in the direction of their biological weakness. Then maybe you put them back together, but it's almost always a complexity-related phenomenon rather than a mental illness-related phenomenon. Not always, but almost always.
Okay, so now you've got this complexity problem, and you think: well, you deal with it conceptually. That's sort of akin to the idea that it's belief systems that protect you from death anxiety. The ideas are roughly comparable, but again, that's wrong.
It's the sort of thing only a psychologist could think up because psychologists think that everything about you happens inside your head, so to speak, in your psyche, but that's not true. There's a huge chunk of you that's outside of you completely.
So this is a really good example: we know the oldest cities were walled. This is a medieval city in France, a beautiful old city. Cities were walled because they were places of wealth, and if you didn't put walls around them, other people would come in and steal everything and kill you. So having some walls was a good idea, the same as having walls in your house is a good idea.
Walls between your rooms are a good idea. Or borders between categories are a good idea. And so part of the way you simplify the world is by building walls around your space because then a whole bunch of things can't come in, and so you don't even have to think about them; it's not conceptual; it's practical.
You know, one of the things I think I've figured out recently is the fundamental political difference between people, and it looks to me like the fundamental political difference is: how many walls should there be around your stuff? The ultimate liberal answer is: zero. The ultimate conservative answer is: bring on those walls, man!
What's interesting about both those perspectives, first of all, is that there are temperamental contributions to them, and second that they're both valid.
One of the mysteries, I believe, that permeates psychometric psychology right now is why the temperamental factors that influence politics are those particular factors. There are five, right? The classic Big Five: extraversion, neuroticism, agreeableness, conscientiousness, and openness. Well, the biggest predictors of political allegiance, setting aside the politically correct types for a minute, on the liberal-to-conservative axis are these: liberals are low in conscientiousness and high in openness, and conservatives are high in conscientiousness and low in openness.
Then you think, well, why those two traits? That's the first question. The second question is: why those two traits together? Given that they're not very highly correlated, right, they're really quite independent. So why do they co-vary on the political axis? I think this is the reason.
I think it's exactly that open people like to live on the periphery of boundaries, and they like to break boundaries between things 'cause interesting things happen when you think a different way, when you think outside of the box, so to speak. That's what open people do—they always think outside of the box, no matter what box you put them in.
Sometimes you meet people that are so open that they're completely disorganized. Their thought process is almost completely associational, like a dreamer. They just jump from one thing to another. They're very interesting to talk to, but it's very hard for those people to get their lives together 'cause they're interested in absolutely everything, and their attention just flits all over the place.
They're open, and that actually does go along with higher intelligence, generally speaking. And then if they're low in conscientiousness, they don't see any utility in order or in orderly people. Orderliness is part of conscientiousness, and it's the biggest determiner of political belief within the conscientiousness domain.
The orderly people like to have everything in its separate place and properly structured. Their world is boxes inside a box, inside a shelf of boxes, and then that shelf of boxes is inside another box, and all those boxes are nice and neat and tight, and nothing inside them is touching, and everything in every box is the same thing.
You can see that... you can see the utility in that. That, as far as we've been able to tell, is also associated with disgust sensitivity. People are disgusted, generally speaking, when things that shouldn't be touching are touching, like something horrible stuck to you, for example. That produces a very visceral sense of disgust, and it's a boundary violation, because that's what disgust is; it indexes a boundary violation.
How separate people should be from one another as individuals or in groups is an entirely debatable issue because there's huge advantages when people mingle and mix and there's huge dangers when people mingle and mix. So at some point you say, well, the dangers are overwhelming the positives, and at another point you say, well, the positives are overwhelming the dangers, and you have a continual argument about that with yourself, but more importantly, with people who have different temperament than you.
The terrible temptation is to assume that only those people who have your temperament are correct, but if that were true, those other temperaments wouldn't exist, at least if you look at it from a strictly biological perspective.
So anyways, one of the things we do to simplify the world is to frame it physically. And so you look at this; you've got wall number 1, and then you have wall number 2. But then inside the walls, you have walls around everything. All these houses are walls, and inside the houses, there are walls as well.
Everything has walls around it, and what you do when you put walls around things is constantly make part of the world simpler. The reason you have a house is so that everybody and his dog isn't in your house. You just want those few people that you can barely tolerate in your house, and not all those other strangers; God only knows what they're gonna do.
You'll still invite people in now and then because maybe you're sick and tired and bored of the people that are in your house, and so you want a little bit of new information, but you want those barriers to be there so that you can voluntarily modulate the information flow.
Okay, so that's the first thing you do. Then you set up rules with everybody else that says, well: I'm gonna have some walls, so you can't come in, but what I'm gonna do is pay you for that privilege by letting you have some walls where people can't come in.
And so, I think that's analogous... I was thinking about the issue of discrimination in relationship to sex because I've been thinking a lot about discrimination lately because everybody thinks discrimination is a bad idea, which is a very stupid proposition because you're discriminating all the time.
The most fundamental form of discrimination is choice of sexual partner, and so you might say, well, why should that even be allowed? Because it is the most fundamental form of discrimination. For example, almost everyone is racially prejudiced when it comes to sexual partners. So you think, well, do you use age as an exclusionary criterion? You probably do.
Do you use physical attractiveness? Only insofar as you're able, right? You'd use it completely if you could get away with it, roughly speaking, but you can't because the most attractive people aren't gonna be anywhere near you. So you can't do it, but you'd like to. Health? Yes. Strength? Yes. Wealth? Yes. Education? Definitely.
So it's unbelievably discriminatory. So you might say, well, why is that justifiable? And it seems to me that it's something like... well, you get to say "no" to me if I get to say "no" to you. It's something like that. We've agreed that everybody gets to discriminate on that basis, and because everybody can do it, then it's fair. It's something like that.
It's very much worth thinking about. You know, I don't know if you know this, but in Huxley's book Brave New World, the family had been completely demolished, right? Children were conceived in bottles and produced in factories, and the whole idea of a relationship between sex and procreation had become taboo. One of the mantras, the slogans, of the society was: "Everyone belongs to everyone else."
So it was actually a social faux pas to refuse to sleep with someone, just as it was a social faux pas to have any exclusionary relationship because another thing you might notice is that there's nothing more discriminatory than falling in love with someone. It's like: you're special! And all the rest of you? Haha, no.
So it's the ultimate exclusionary act, right? Yet we presume that that's acceptable; not only acceptable, we demand it as a right. And that's worth thinking about a lot.
Anyways, okay, so what you're doing is by agreeing to this segregation and boxing, what you're doing is carving off little bits of the world that are simple enough so that