2015 Maps of Meaning Lecture 03a: Narrative, Neuropsychology & Mythology I (Part 1)
So the first thing I'm going to do is to tell you what to do when you start speaking to an audience, because some of you will do that in your life. So this is a hint: when you stand up in front of people, do not talk till they're all absolutely silent. Wait, because they'll quiet down; everybody will quiet completely down. Then you want to wait until just before it becomes awkward, and then you've got everyone's attention. If you speak before then, you're giving permission to people to speak while you're speaking, and you shouldn't give them that permission.
Because the hypothesis is that you have something reasonably important to say; otherwise the whole setup is a lie. Right? So just so you know, that's a good thing to know. And I can tell you one more thing about speaking in front of people that's extremely useful too, because public speaking frightens people, right? When I do it, I never talk to the group. You know, because you might think, "Well, I'm speaking in front of all these people." You might think that when you're afraid.
That's not really true; you're speaking to people in the group, and you can move your attention from one person to another. If you speak to the group, you're basically going to be terrified, because the group doesn't give you any feedback. But if you look at individuals and talk to them, then it's not that much different from having a conversation. It is different in that you have to be prepared and so on, but you can judge when you're looking at someone. You can tell whether they understand what's going on, whether they're hooked in, anything like that.
And as long as you're paying attention, then you're going to utilize your natural skills as a communicator, which you've been practicing since you were born. So anyway, that's two hints, and they're useful hints. And I guess the other thing to remember—that'll be three hints—is that it's really a dialogue, right? Even though, as I think I told you before, most of the response from the people in the audience is non-verbal, that doesn't mean it's not a dialogue. So if you're paying attention to people, you keep the dialogue going, and generally that'll keep everybody's interest reasonably high.
Um, well anyway, so I want to bring up some mysteries about words. Now, the first question might be: well, what exactly is a word? Now, I'm talking about words because I'm really interested in concepts. But a word is like a unit of conception. So, I'll focus on words, and we'll broaden out to concepts. The reason I want to talk to you about concepts is because I want to talk to you about ideological and group identification. Such identification is based in part on shared behavior, but also in part on shared conceptions.
And the reason that I'm interested in group identity is because I'm interested in what it is that motivates people to retain their group identity and also to defend their group identity or expand it, often at the cost of warfare. And then finally, because I'm interested in why people will do horrific things to one another in order hypothetically to facilitate that process. So that's the linkage. So let's think about words for a minute. And so the first question you might ask is: well, what is a word?
St. Augustine, from what I understand, formally posited that a word was a label for a thing. I would say that that's mostly what people think when they think about words, which they don't do very often; they use them, but they don't think about them. So, a word could be a label for a thing. All right, now that's a funny concept because the thing is often defined by words, and so there's a bit of circularity in there, right?
Because for that argument to be exactly true, the thing would have to be self-evident, sort of independent of any verbal formalization, and then you could just slap a label on it. And it's pretty clear that our conceptual abilities also influence the manner in which we perceive things. So there's a reciprocal interplay. So, okay, so anyway, that's one definition, and you can see that it has some utility.
So here's another definition. This one comes from Wittgenstein, who said a word is like a player in a chess game—one of the pieces. What he meant by that, in part, was that you're engaged in an activity when you're using language, and the activity has a vaguely game-like structure. And the reason that you're using words is to move ahead in the game; you're a player in the game. And so that's another analogy—that a word has a tool-like structure, and you use it to attain certain ends and to operate in the world.
And so that's kind of an interesting way of thinking about it. Then I was thinking about this as a more visual analogy. So imagine that you have a toy train, and there are freight cars on it, and you fill the freight cars with all sorts of different things. Maybe one is full of screws and Legos, and another one is full of erasers and coins, and so on.
And there's a little tunnel that the track runs through, so the train is continually passing underneath it while the tunnel stays in the same place. And that's like a word too. Now that's a trickier analogy, because a word encompasses things that are the same and things that are different, right? It has to, because nothing that you try to categorize as a single entity is homogeneous; its internal structure varies.
And that means that insofar as you're treating it as if it's only one entity, it might do things that will surprise you. That's especially true with complex things. So you might say, well, this is my wife, okay? And there's a certain set of assumptions that go along with that label. But as long as those assumptions are met—which basically means as long as the relationship you have with that person stays within that definitional box—then you can use the term and you can get somewhere with it.
But because people are full of snakes, you never know when they're going to jump out of the category that you've put them in and manifest something that's completely unlike the word that you've been using to describe them. And many things are full of snakes, or you could say to some degree everything is full of snakes. And that's why it's useful to think of a word as a place where multiple things move through.
So another example that you might think about is, um, say a stock, because the stock market's kind of a good model of the environment as such. You know, a stock is like a thing. So you can think of it like a thing, and people do think of it like a thing. But the problem is that the stock itself is an absolutely dynamic entity, and what it contains at any given moment is far beyond your capacity to comprehend.
Because think about how a company operates in the economic environment, and then in the biological environment beyond that. I mean, it's subject to any number of factors. So for example, maybe it's a coffee company stock, and then there's some weird blight in one of the coffee-growing countries—maybe because people have been using water improperly—and so the coffee dies and the crop's bad, and then the price goes up.
It's like that—the stock, as an entity, in some sense contains the whole world. And you can treat it like it's one thing for certain operations, but most of the time that's really going to trip you up, and in a serious way. So I thought I had one more analogy—well, that'll do for now.
Those are three different ways of thinking about what a concept is, and there is another one, but I can't bring it to mind at the moment. So okay, here's another way of looking at it. The world is extraordinarily complex; it's insanely, ridiculously complex. And part of the complexity is the fact that—and this is a hard thing to grasp, although it's sort of self-evident once you think about it—everything that you see exists at multiple levels of analysis simultaneously.
Right? And this is sort of like the train metaphor. So, you know, for you to be you, for a person to be a person, there's an incredible number of microstructures that make up that person, all of which have to be functioning in a predictable manner. You exist at a quantum level—and we're not going to talk about that, because there isn't anything intelligible to say about it. You exist at an atomic level and a molecular level, and then the molecules make organs, and then the organs interact to make you.
And then you're nested in a family, and then that family is nested in a city, and the city is nested in a state, and the state is nested in an economy, and the economy is nested in a biosphere. And every single one of those levels of analysis—it's not that it's relevant to you; it's actually that it is you. Now, you only figure that out when something goes wrong because as long as things aren't going wrong, they're invisible.
Now that's an important thing to realize. One of the determining factors with regard to your capacity to regulate your own emotions—including anger and fear and disgust and a variety of other things we'll talk about—is the fact that a million invisible things are doing exactly what they're supposed to be doing all the time.
Now, the problem with that is that at any time, any of those things can go wrong. And that's the problem of being finite and mortal, in a sense: you're the sort of being that can be plagued by unexpected events at multiple levels of analysis, and it's by no means obvious which of those levels of analysis is most real. I think they're all equally real.
It's also no small feat to pick the level of analysis at which you should be approaching a given problem. People argue about that all the time, you know? They might say something like, "What's the greatest problem facing humanity today?" Even conceptualizing where the problem is in that entire tree of being is something you can argue about forever and not get anywhere.
So part of what I want to point out to you by all these examples is that a lot of what human beings are doing in their perceptual and cognitive operations is constraining the complexity of life to limits that enable them to stay alive, but also within limits that enable them to bear staying alive—both of those at the same time. Because you want to keep existing, obviously, and hypothetically you want to propagate, but also hypothetically you want to do both of those things while not suffering yourself absolutely to death.
And so that's a set of constraints. And it's necessary to know how much of the world is invisible to you because unless you have some understanding of how much of the world is invisible, you can't understand how many things might be going wrong when something goes wrong. And so here's a hint in some sense: if something interferes with the integrity of your categories, all the snakes that are inside those categories come out, and then you have to deal with all those snakes. And that's a big problem.
And so you don't like having your categorical systems—the adequacy of your categorical systems—challenged in any way. And no wonder. It seems to me that this is particularly relevant with regard to social conflict between different ethnic groups, or between groups that have been historically isolated from one another to some degree, because the groups come at the world with different conceptual structures, and those structures are not necessarily isomorphic.
Some of the more fundamental assumptions might not be the same from one place to another, and that means the validity of the categories that both of those groups use to organize their existence in the world come under assault when they meet. So it's a big problem, and you know, you might say, "Well, why do people engage in conflict all the time?"
And part of the reason is that there is conflict. Like, if you believe one thing and I believe another, and we happen to occupy the same territory and those beliefs have implications for how we're going to behave, there's going to be conflict. And as far as I can tell, there's a very limited number of ways that that conflict can be resolved. One is that I become a tyrant, or you do—one of us is a tyrant and one of us is a slave. And the logical endpoint of that is that one of us is alive and one of us is dead.
And the other possibility is that we can negotiate and see if we can figure out how to cohabit in this territory behaviorally and conceptually simultaneously for a long time. But that's extraordinarily difficult because it means you have to take your concepts apart where they conflict, and then you have to figure out how to put them back together in a manner that actually works.
And you might say, "Well, you should be tolerant." It's like, yeah, you'd better be tolerant, because you'll never get along with anybody if you're not. But "you should be tolerant" is no answer to this question, because the question is, well, how tolerant should you be, and under what conditions? And believe me, you can be too tolerant; it's easy to come up with examples.
And sure, I gave you an example last week, one that I really like: you're in a relationship with someone and they betray you. Your category is no longer adequate. Which category? Oh wow, now there's a big problem. You know, the answer is you don't know which category, right? Because you don't know where the error is.
Now obviously there's an error of assumption, but at what level? And in which category? So the problem is that an anomaly of any seriousness calls the validity of the conceptual system as such into question, to some unspecifiable degree. You actually don't know, and that's a big problem, because what it means is that it's very difficult not to react catastrophically to anything that's unexpected.
Because the unexpected thing could have virtually infinite consequences. Now, usually it doesn't, but what good is usually? Usually is not that helpful. So okay, fine. The world's a very, very complex place, and it's doing unpredictable things at multiple levels of analysis simultaneously, and your problem is: how do you act in that environment, and how do you constrain it conceptually so that it doesn't kill you or overwhelm you, or cause you physiological or psychological damage?
So that's a big problem. And it's sort of a reflection of the idea that you don't have to learn to be afraid or on guard; you start that way, because the world's an incredibly complicated, unpredictable, and dangerous place. And then, with a tremendous amount of effort, both individually and socially, you can construct environments, physical and conceptual, within which you can have some degree of safety, security, and the hope of some progress during your life.
So we're going to dispense completely with the idea that secure, stable, and promising is the normal condition of a human environment. It's not. And to keep an environment like that requires a tremendous amount of collective effort and individual effort. So we're reversing the propositions to some degree: chaos is the underlying reality, and order emerges from that with difficulty, and then it's also maintained with difficulty.
I mean, I can give you one example. I think, if I remember correctly, to keep a military helicopter in the air so that it just doesn't plummet to the ground like the piece of metal it actually is, requires something like 35 hours of maintenance for one hour of flight. And the reason for that is that thing is barely a helicopter; like it's just holding together by nothing. And then you have to work like a mad dog to stop that thing from falling prey to instantaneous entropy and just not functioning.
And a lot of the elements of our civilization are like that, you know? They're in place, so to speak, but to keep them doing what they're doing—which basically means to keep them inside the category that we've placed them in—takes a tremendous, almost incalculable amount of continued input of energy and time. And, well, that's counter-entropic being, right? Things have to be stable at multiple levels of differentiation for anything complex to maintain itself for any length of time.
So entropy is the standard—the standard reality: complex things decay towards their simpler forms. That's the rule of existence, and we're very complex forms, so we're chronically trying to stop entropy from tearing us into our constituent elements. So okay, now some of you have seen this before, but I'm going to show it again. Maybe you haven't seen precisely this version of it. This line of research wasn't pioneered by Dan Simons, although he's done a tremendous amount to popularize it.
His demonstrations have become some of the most famous demonstrations that psychologists have ever put together. And this particular line of investigation, although it might be old hat to you guys, was an absolutely staggering shock to everyone when it was first revealed. Because the hypothesis was, first of all, that we see the world pretty well—that when we look at the world, we're seeing it in a relatively comprehensible manner.
And that's wrong. And then the second hypothesis, which is a deeper one, is that even if we don't see the world that well, if something conceptually unexpected happens, our attention will be automatically drawn to it—and that also turns out to be wrong. So we're going to look at this video, and—because you probably won't be able to hear the sound—here's the rule: you're going to see two teams, one dressed in white and one in black.
And the teams are going to move around; each team has a basketball, and they're going to throw it back and forth to the other members of their team—the white-dressed people and the black-dressed people. And your job is to count how many times the basketball is thrown back and forth successfully. So you're going to be looking at the white-clad players, and you just count the passes.
And so if you've done this before, well, don't spoil it for those who haven't, but you may not have seen this variation. In any case—okay, so there's the gorilla! One person leaves, and then the curtain changes color, right? So it's brilliant. It really is; it's amazing, in fact.
So all right, you look at that, and you think, well, what does that mean? If I'm concentrating on one thing, there are a lot of other things I'm missing, and some of those might be unbelievably massive. And then it's even worse, as Simons just demonstrated: if I'm concentrating on one thing and I'm watching for the possibility that something strange might happen, well, I'm watching for that particular strange thing.
Other strange things that I don't expect will happen, and I won't see them. All right? So I like that demonstration because it's such a concrete presentation of the fact that we're dealing with something complex with relatively limited resources. Now, it’s kind of obvious if you think about it because there's you, and then there’s the entire rest of reality, including the reality that's within you that you don't understand.
But it's still something completely different to have that pushed in your face like that. Now, as I mentioned before, one of the propositions, basically up until the time this line of research started, was that you operate in the world with a certain set of assumptions: you see the world, and you interact with it, and the things that you're interacting with and that you see are real, and it's a relatively straightforward matter to perceive them, because there they are—like things. And you can interact with them in a manner that produces a predictable outcome.
And if something radical happens, you will automatically attend to it. So that was the proposition of psychological theory—psychophysiological theory—for a very long period of time, and it had a huge influence on fields like artificial intelligence before they discovered that it was virtually impossible to perceive the world. It's so complex that we can't really figure out how we do it, although, you know, in the last five years, tremendous strides have been made in having machines that can actually operate in the real world, which is something new and absolutely remarkable and amazing.
So, but what the bottom line for this turned out to be is that, no, no, no, you don't see the world; you see a subset of the world. And which subset you see depends on your intent. Now, of course that's obvious from this video, right? Because your intent when you're watching the video is to count the number of balls that are being passed back and forth, and that intent occurred merely because I asked you to do that.
And so one of the things that indicates is how easy it is for you to shift what your goal is, you know? And human beings are unbelievably good at that. Now, for most animals, goals are driven in some sense by the operation of underlying fundamental biological systems—and they're not just goals. They're whole schemes of perception and action; they're like little sub-personalities, I would say, in that they're embodied perceptual, cognitive, and emotional solutions to classic biological problems.
So, it's like an animal's head is full of little robots, and it's not like they're driven by a motivational system. The motivational system is a subelement of the animal that participates in the entire animal's being. So if an animal is hungry, then it sees the things that hungry animals see, and it responds emotionally to the cues that are associated with food or its absence.
And it thinks, insofar as an animal thinks and strategizes—and some of them do, like the hunters in particular, the pack hunters—then it strategizes from within that framework. And so you can't think about it as a drive; you have to think about it as an entire way of looking at the world. And then you kind of have to think of the animal as a collection of different ways of looking at the world in accordance with the different problems that the animal has.
And then you're like that too, except as your cortex grew radically over the last several million years, you underwent this weird transformation where your time span got extremely broad and your capacity for abstraction increased. And one of the consequences of that was that you learned how to replace what you might regard as purely biologically determined relatively immediate goals—although they're really personalities—with unbelievably abstract goals that could be manifested in a social environment, and that might work over very long periods of time.
And that's a whole different thing. And so the upshot of that is that you have a system that allows you to formulate virtually any frame of reference imaginable, and then to look at the world and act on the world in accordance with that frame of reference. So I can just say to you: your task now—this is going to be something you value—is to turn yourself into a personality for whom counting the basketball throws for the next five minutes is your primary aim in life.
And you do that no problem; you reconfigure your entire perceptual system so that you feel inadequate to some degree if you can't perform the task. You're motivated to do it; you focus your perceptions—you’re pleased if you count the balls; you're pleased if you get the right answer. And you do that at the drop of a hat. So that's one of the consequences of human cognitive transformation.
So now what that also means is that from moment to moment, as a consequence of the varied goals that you're pursuing, you also make a determination of what you have to attend to and what you can ignore. And the answer to the question, "What should you attend to and what can you ignore?" is you should attend to the minimal number of entities necessary for you to produce the outcome that you desire.
Okay, now, if you read classic cybernetic accounts of motivation, the ones that are cognitive, and compare them to the behavioral theories of motivation—the behavioral theories are mostly drive theories, so there's a deterministic element to them. You're in a state of deprivation, and as a consequence, you're driven down a certain behavioral road.
Okay, so that's the behavioral idea. And there's some truth in that, because some of your physiological subsystems are fast and ancient enough that they do kind of operate as chained deterministic systems. And, in fact, before you act in any complex way, you actually set up a system of chained deterministic events so that you can implement the action.
So I can show you an example of that. All right—watch what my arm does. Okay, you see that? I accelerated it very rapidly, and then just before I hit the surface, I stopped it. Now, it turns out that I can move my hand so fast that a new signal from my brain can't get to my hand during the time I'm making that movement. And so what that means is that I have to set that entire system of operations up as a ballistic movement and then release it.
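To put rough numbers on that (these are ballpark figures I'm adding for illustration, not numbers from the lecture): a fast strike like that takes something on the order of

$$ t_{\text{movement}} \approx 150\ \text{ms}, $$

while a sensory-feedback correction—sensing the error, central processing, and conducting a new command back down to the hand—takes very roughly

$$ t_{\text{feedback}} \approx t_{\text{sense}} + t_{\text{process}} + t_{\text{conduct}} \approx 50 + 100 + 20\ \text{ms} \approx 170\ \text{ms}. $$

Since $t_{\text{feedback}} \gtrsim t_{\text{movement}}$, any corrective signal would arrive after the movement is already over, so the whole sequence has to be assembled in advance and released ballistically.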
And you're doing that all the time when you're dealing with the world. You have systems that are automated and that run in a deterministic manner, and what you're doing in part when you act on the world is you're disinhibiting them one after the other. So you do that when you're driving, for example. Where do you look when you're driving? You don't look right in front of the car. Why?
It's because you're going to run over whatever's there, so what good is looking there? If something's there, you're going to run over it; there isn't a thing you can do. So what you do is look far enough down the road that you can sequence automated actions as a consequence of your perception, and then they run, in some sense, ballistically. You can't control them once they've been implemented.
And you're doing that all the time. Like you're wandering around on a platform of semi-automated systems, and your choice in some sense is the choice of which of those semi-automated systems you're going to implement at any given time. Okay? So you throw up a frame of reference; we're going to talk more about them in a minute. You throw up a frame of reference, and then you attend to anything that gets in the way if it interferes with the desired outcome.
So that's not the same as merely responding to anything unexpected, as you can see with the gorilla. Now, if that gorilla had jumped in front of one of the players in white and grabbed the ball, there's a high probability that you would have seen it—especially if it had held on to the ball for some length of time—because then the anomalous event would have been a relevant factor in your pursuit of the goal that you deemed appropriate.
So you could say that you respond to the world as if it's a predictable place as long as it's doing what you want it to do. And then you respond to the world as an unpredictable place when anything happens that stops you from reaching the goal that you've determined to be relevant at this point in time. And so those are in some sense your two modes of operation.
Then you could also add that if you're pursuing a goal or embodying a little goal-directed personality, then things that facilitate movement towards that goal are going to be positive, and things that interfere with movement towards that goal are going to be negative. And when you understand that, then you've got some sense of how the fundamental emotional systems work.
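Put loosely in computational terms, and only as a toy sketch of the cybernetic picture being described here (the class, the numbers, and the emotion labels are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A frame of reference: where you are and where you want to be."""
    current: float
    goal: float

    def step(self, observation: float) -> str:
        # Only changes in the distance to the goal get evaluated; everything else is ignored.
        before = abs(self.goal - self.current)
        after = abs(self.goal - observation)
        self.current = observation
        if after < before:
            return "positive emotion: progress toward the goal"
        if after > before:
            return "negative emotion / anomaly: something is interfering; attend to it"
        return "irrelevant: ignored (like the gorilla)"

frame = Frame(current=0.0, goal=10.0)
print(frame.step(3.0))   # moved closer -> positive
print(frame.step(3.0))   # no change   -> ignored
print(frame.step(1.0))   # pushed back -> negative / anomaly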
So, all right, so this little diagram is designed to show you what the relationship between a conception and actuality might be, and even what the relationship between conception and a word might be. So the theory behind this little diagram is that there's a set of phenomena in front of you that's too complex to deal with, and so what you do is you generate a low-resolution representation of it.
And that low-resolution representation is actually what you see. It's not a concept; it's a percept, you could say—it's what you see when you look at something. And then what you do with your words is label the percept of the thing. So it's like a double compression—that's in some sense what you're doing.
So you have a complex reality, you perceive it in a low-resolution form, and then you label the perception, and that gives you a word. And so what you're doing basically is taking the world that you can't understand and you're putting it in smaller and smaller and tighter boxes until you have a box that's small enough and tight enough so that you can actually manipulate it.
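As a loose illustration of that double compression (a toy sketch of my own; the 432 "micro-features," the summary statistics, and the labels are all made up for the example):

```python
import random
import statistics

# "The thing in itself": far more micro-detail than we can use directly.
reality = [random.random() for _ in range(432)]   # 432 micro-features, say

# First compression: a low-resolution percept (a few coarse summaries).
percept = {
    "brightness": statistics.mean(reality),
    "variability": statistics.stdev(reality),
}

# Second compression: a single word labeling the percept, chosen only as
# finely as the action we have in mind requires.
word = "bright" if percept["brightness"] > 0.5 else "dark"

print(percept, word)
```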
And that's a fine plan, except when it doesn't work. And so then the question is, well, what do you do when it doesn't work? And that is the question. Okay? So you could say if you look at the top left corner, I called that the thing in itself, which is a translated phrase from German philosophy.
And the old idea was that, well, you never really have contact with the thing in itself, because you're not capable of perceiving it or comprehending it. So what you see are elements of it. So we're going to say, well, that's the thing in itself. It's a schematic representation of it; it's quite complex. Then hypothetically, you can see that thing in all the other ways that are illustrated in the diagram, right?
So if I asked you what that was, you could say, well, it's a rectangle; or you could say it's a rectangle with four rectangles as sub-elements; or you could say that it's—I think that's, what, 24 rows—which it also is; or you could say it's a bounded rectangle that's made out of two paths and four sub-rectangles; or you could say that it's—I don't remember how many elements are in there—but you know, that's the most differentiated representation of the object.
So that's five representations, and then you might say, well, which of those is the real object? And the first answer would be, well, none of them are. And the second answer would be, it depends on what you want to do with the representations. And actually, it's not really obvious that you can make things any more real than that. It's like, how real is what you're perceiving?
It's real enough for certain operations and not real enough for others, and so its validity—it's a pragmatic approach—its validity is determined by its applicability in the situation in which you want to apply it. Okay? And then at the bottom there are words, and the words are basically labels for those percepts.
So, you know, the first one you could sum up as 432, because I guess there are 432 little circles there. And the second one, on the bottom, you could summarize as a cross or a rectangle, and the one at the top you could summarize as a flag—you get the point. So the word represents a percept that's already simplified, and the percept represents something more complex underneath it.
Okay, so here's another way of looking at it. Say you're working on your computer and it crashes. Well, you might think: are you actually working on your computer? And the answer is, it depends on exactly what you mean by "computer," and exactly what this thing that's a computer is.
So actually, when you're working on a computer, what you generally see is only the screen, and you don't even see the screen; you see that tiny subelement of the screen that contains those representational entities that are directly relevant to the area of the world that you're focusing on right now because you want something to happen.
Okay? So really, when you're interacting with the computer, you're just interacting with this tiny little part of it, and then the problem is in order for that tiny little part to maintain its validity as a representational entity, all the things that you don't see about the computer have to work right. And then you might think, well, what don't you see about the computer? Well, that's what this diagram is supposed to be showing.
So the first thing is, you don't see the elemental properties of the computer—its existence at a quantum level. And that actually turns out to be increasingly relevant for computer designers, because you cannot simultaneously specify the location and the momentum of a particle at the quantum level with arbitrary accuracy.
And what that means is it actually has relevance. The little wires in computer chips are getting so small now that there's some reasonable probability that an electron you think will be in the wire is outside the wire, and that means they'll short out. So at some level of interaction, the quantum properties actually matter.
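The relation behind that claim is the Heisenberg uncertainty principle; the formula below is standard physics rather than something stated in the lecture:

$$ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} $$

So the more tightly a shrinking wire confines an electron's position, the larger the spread in its momentum, and at small enough scales there's an appreciable probability of the electron turning up outside the channel it was supposed to stay in—which is the leakage problem chip designers worry about.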
Okay? So that's part of the computer you don't see. And then the computer itself is made out of all these little parts that are as complex as little cities—it's just packed with them—and what you don't know about them could fill whole volumes. And any of those things can malfunction at any moment, so that's a big problem. And then those micro-parts are arranged into subcomponents, like in a desktop machine.
Maybe your sound card is gone, or your video card has gone—something that you could take out and replace, if you could only specify the proper level of analysis. Or, you know, maybe you bought a lousy brand, and that's the problem. You bought some cheap computer that was made in a dismal factory, and the probability that it's going to be reliable is zero.
And then you might ask, well, why is it that that brand is not reliable? And you might say, well, the economic system within which it was produced is corrupt, and so is the political system. And so as a consequence of that, people can get away with shoddy manufacturing, and that manifests itself as the fact that your computer crashes in the middle of an essay.
And then you might say, well, what's the right level of resolution to solve that problem? And the answer to that is that's not obvious. It depends on what you mean by solving it. Now, usually what you mean is do the minimum necessary to get the system back online so that you can return to your activities.
But it's by no means obvious that that's always the right solution, and sometimes it isn't, because at some point you're going to say, well, I should just throw the computer away. And if you own computers, you know that when to throw them away is actually a complicated problem. How many of you have, like, three or four old laptops lying around the house? Right?
So like, why? Well, they still work. Well, do they really? And then a deeper question is what makes you think that they're still laptops? And the answer to that is, well, they look like laptops. And that's also wrong. They look like laptops if you think that a box is a laptop and it's not.
And with computers, you can really see this, because you can have an entity that's fully functional—a laptop—and then if you just leave it on your desk and don't touch it for five years, or let's say ten years, what is it then? Well, it's become so disconnected from the ecosystem it's a part of that it might not be useful for anything anymore. You might not even be able to get it to work.
And you know, most things around us don't change that fast, so we can perceive them in relatively simple forms, and they don't transform so quickly that our perception is shown to have gone astray. But computers aren't like that; they transform very rapidly. You probably spend one-fifth of your time just trying to keep up with the transformations.
Do you think that even if a laptop ceases to be used, the anticipation in a person that it could be used as a laptop if they wanted to—does that make it stay elastic? Sure, well, it does to some degree. You know, the question is do you actually use it? That's the first question.
So, and that's actually the manner in which you test the validity of your perceptual theory because I would say that a laptop that you don't use is actually not a laptop; it's just a hypothetical laptop. Now, you see, I mean I'm being picky about these sorts of things, but I'm doing that because we’re trying to understand exactly how categories work, and it is not obvious—it's not obvious at all.
So the problem is that the laptop actually transforms into a brick relatively slowly, and deciding when it's a laptop and when it's a brick is not obvious, because it doesn't announce the change—you know, it doesn't just stop working all of a sudden. It just works in a manner that's completely counterproductive. And one of the things that's quite interesting is that there's not a lot of evidence that the electronic revolution has increased productivity.
And I think the reason for that is how much time during the week do you suppose you spend just keeping your computer updated and making sure that you understand how it works? Like, would that be 10% of the time that you use the computer? Is that an unreasonable estimate? Like, what do you think?
I mean, I think for me it's like 20% of my time. It's horrifying because I have quite a few computers, and you know they're always doing weird things of one form or another. Even when I just leave them be, they'll update and then they won't work. Or, you know, I have a new computer and it's got such a high-resolution screen that you can't use Photoshop on it because you can't see the menu items anymore, and Adobe has known about that for a whole year and hasn't fixed it.
And so like there's always these weird little snakes popping up all over the place with something as complex as your computer, and battling them off, I think, destroys all the potential productivity that the tool was actually designed to produce. So I don't know if that's true or not, but it certainly seems that way to me.
So okay, the laptop is a funny entity too, because you can certainly see that it's connected directly to a very large network, right? That's the internet, clearly. Partly that's an information portal, but partly the thing stays alive because of its connection to the internet, right? It's continually updating; it's continually being modified, because it's plugged into something that's essentially a living system.
And when you think about it as a laptop, you're just ignoring all of that. And you really can't, because all of that is there. It's even more real than the laptop itself—I kind of think of these things like leaves on a tree: the tree is the real thing, and these are just leaves. And you know, they fall off, and then they're dead.
So and that's a good indication of the way that our perceptions deceive us. And you know, one of the things you might notice is what kind of emotional reaction do you have when your computer ceases to be a laptop while you're working on something important? What's your response? Frustration and anger, mostly?
Yeah, and there'd be some anxiety in that too, and probably some contempt, you know, because you might say, "Well this stupid bloody thing," or something like that, which is an expression of contempt, which is associated with disgust. And then there's the anxiety, which is "Oh my God, what am I going to do with this now?" And then there's the anger, and that as a consequence of something interfering with your goal-directed pursuit.
And so one of the things that's really worthwhile is to note that panoply of emotions. Now, there might also be a little excitement, possibly, because you could say, "Oh, well, I wanted to get rid of this piece of junk anyway and get a new computer," right? Because when it collapses into its sub-elements—this mess of sub-elements—all sorts of possibilities immediately arise that your nervous system is attuned to, and the way your nervous system responds is that it gets frustrated and angry.
It gets afraid; it gets disgusted, and it gets curious about what's going to happen next. And all of that happens at the same time. And I would say that that response, which is an embodied response and an instinctive response—although there's cognitive elaborations on it—that's the response you have when you actually see the actual world, or as close as you're going to get to it.
Because I don't think that you see the world at all until the things you're doing don't work. And as soon as they don't work, bang, you see the world. Now, "see" is a weird way to put that. But then we can say, well, that's the closest you ever get to having contact with the thing in itself. And it's very, very stressful to have contact with the thing in itself, because God only knows what it's going to do.
So your conceptual systems protect you from contact with reality in the same manner. One of the things Jung said, which I think is extraordinarily funny, he said the purpose of religious systems is to stop people from having religious experiences. It's an extraordinarily remarkable and intelligent claim because, you know, do you really want to have a religious experience that’ll tear you into bits?
Yeah—so how do we encounter the real world when our model of it stops working? That must take a lot of effort. Well, think about how we do it. You know, say something collapses on you—say your car breaks down. I mean, do you actually solve that problem? Well, generally, you don't, right?
What you do is you parse off the problem to someone else whose knowledge about the multilayer reality of the automobile is much more detailed and differentiated than yours. And so partly the way we deal with the complexity of the world is that we set ourselves up as a distributed supercomputer, and we can call on elements of that supercomputing process at any given time to increase our level of resolution of the world and to solve the problem.
And so partly we solve it as individuals, but a huge chunk of it is that we solve it as a consequence of continual communication. And that's also become increasingly obvious since the rise of the internet because now I don't care what problem you have; there's a high probability that someone somewhere, merely as a consequence of the innate altruism of human beings, has put together an extraordinarily detailed video on exactly how to solve this little micro problem that's beset you, which is really a fascinating and unexamined phenomenon.
So, I'm sure you guys are all familiar with that. I wanted to change the stereo. I have an old car; I wanted to change the stereo. So the search was how to replace a stereo in a 2005 Hyundai Sonata. You know, and some guy down in the US, for some reason, videotaped himself doing it, and there it was.
And it's really quite cool that that happens, because, you know, psychologists and evolutionary psychologists and biologists always argue about whether there is such a thing as actual altruism. And it's hard not to see someone taking the time to post a video of themselves undertaking a complex task, for no financial reward whatsoever, as a form of spontaneous altruism. It's very, very cool, yes.
Well, we'll get to the relationship between your conceptual systems and how things fall apart right away. So, all right, I've run through a couple of examples of the complexity that you're faced with—the complexity that's invisible. That's the thing that's really most worth considering: your stability is dependent on the maintenance of a million conditions that could go wrong at any particular time, but generally don't, at least not in a functioning society.
All right, so now we'll look at this. So I mentioned frames of reference, and so you cannot look at the world without a frame of reference. And the reason you can't do that is because it's too complex to see. So that means that a frame of reference is actually a necessity for perception itself, which is quite interesting.
So you cannot perceive at all unless you perceive locally and pragmatically—that's what it looks like. And then you might ask yourself, "What do these frames of reference look like?" And this is where I think neuropsychology and cognitive science meet narrative psychology and psychoanalysis, because I think that just as the smallest units of your semi-robotic subsystems—the ones you share with animals—are personality-like when they manifest themselves, so the concepts that you use to build the frame of reference you orient yourself with are narrative-like.
Because you could say, well, what's a narrative? And the answer to that is it's a description of a personality in action. That's what a narrative is. And so if the actual problem-solving subcomponents of your psyche are sub-personalities, then the description of those is going to be basic narratives. And so the hypothesis here is that because we're tremendously imitative creatures, first of all, I can pull your personality out of you at any given time by just watching what you do and matching it with my body, and we're doing that all the time.
If you're a reasonably social person and you're talking to someone one-on-one, you're going to match your behavior to theirs in all sorts of ways. I mean, one of the experiments we did once was we brought undergraduates into the lab, and we just put like a, you know, a bowl of chips between them. And basically, what we found is that each of the undergraduates ate exactly the same amount of chips as the other.
So sometimes a dyad would settle into no chips, and neither of them would eat anything, and then another dyad would somehow settle into, well, we'll eat all the chips, and then each of them would eat half. And the correlation between partners was astonishingly high; it was unbelievable. And these were strangers, you know? So you're mirroring the other person's embodiment like a mad dog all the time, and it's a great thing to be able to do, because it means that you can unite your being within the same locale, and that means you don't have to have conflict.
Plus, it also means that you can learn very rapidly from someone else just by doing what they're doing. And I don't know if you guys have seen this, but there are computers now that take advantage of this idea. So let's say you have a computer—a robot—and you want it to do this task: pick something up here, move it up, pull it over here, and put it down.
All you do is take the robot's arm, move it down, close the gripper, lift it up, move it over and down, and then the robot will do it by itself. And so that's the beginning of robotic imitation. And imitation is mind-boggling, because once it kicks in, as soon as one thing learns something, then all the other things that are exactly like it can learn it too.
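A minimal sketch of what that record-and-replay style of teaching (sometimes called kinesthetic teaching or programming by demonstration) amounts to; the Robot class and its methods here are hypothetical stand-ins for illustration, not any particular robot's API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Pose = Tuple[float, float, float]  # x, y, z of the gripper, say

@dataclass
class Robot:
    """Hypothetical robot: records poses while being physically guided, then replays them."""
    trajectory: List[Tuple[Pose, bool]] = field(default_factory=list)  # (pose, gripper_closed)

    def record(self, pose: Pose, gripper_closed: bool) -> None:
        # While a person guides the arm, sample and store each waypoint.
        self.trajectory.append((pose, gripper_closed))

    def replay(self) -> None:
        # Later, the robot reproduces the demonstrated sequence on its own.
        for pose, gripper_closed in self.trajectory:
            print(f"move to {pose}, gripper {'closed' if gripper_closed else 'open'}")

# Demonstration: guide the arm down to the object, grip, lift, carry over, set down.
robot = Robot()
robot.record((0.0, 0.0, 0.0), gripper_closed=False)   # approach
robot.record((0.0, 0.0, -0.2), gripper_closed=True)   # grip the object
robot.record((0.0, 0.0, 0.3), gripper_closed=True)    # lift
robot.record((0.5, 0.0, 0.3), gripper_closed=True)    # carry over
robot.record((0.5, 0.0, -0.2), gripper_closed=False)  # set down and release

robot.replay()  # the robot now performs the task by itself
```

And once one robot has a recorded trajectory like that, copying it to every other identical robot is trivial, which is the point about all of them learning at once.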
And so what I think will happen in the next five years is we'll finally get robots that are fairly autonomous, and they’re really on their way. And then we'll get ones that are good at imitating each other, and then the thing will just explode because if there’s, say, there’s 10 million of those robots that are reasonably complex, then as soon as one of them learns anything, all the other ones will immediately know it.
And, well, that's exponential growth on a scale that we can't even imagine. And it's coming very, very rapidly. But anyway, you already do that, and that's part of the reason people are so damn smart. But then you're also capable of a different kind of imitation. And that imitation is when I watch myself do something, and then I formulate it into a concept, and then I tell you the concept, and you decompose it so that you can use it in your own body.
And that's basically what we're doing when we're telling stories. So now it's kind of low-resolution communication, but it doesn’t matter. It’s unbelievably powerful communication. And as far as I can tell, the basic unit of that narrative or that frame of reference is this: you're somewhere because you're localized in time and space, and that isn't as good as it could be for one reason or another, so you’re trying to improve it.
And then you have some notions about what you have to do in order to improve it, and you act them out; you implement them. And I think you're doing that all the time, and I can't see how you cannot do it because you have to have a technique for simplifying the world. And so as far as I can tell, this is the technique.
And so a basic story would be: well, I was here, and then I went there, and that was better, and here's how I did it. And that's a pretty good story. It's not great, but it's good enough. You might listen to that, or you might watch it on a video, like I did when I was trying to figure out how to replace my car stereo, right? So it's a story about how to fix a car stereo.
And you might say, well, that's not a very good story. It's like—I didn’t say it was a very good story; I said it was the basic element of perception and also of narrative communication. And to make those two things united is a tremendously exciting possibility because one of the things that you may have noted about psychology—this is something we talked a lot about in my personality class—is that there's a real disjunction between narrative psychology, which generally tends to be quasi-psychoanalytic, and neuroscience, for example, and psychometrics, which tend to be—well, they're not narrative anyways.
You know, we haven't been able to bridge the gap. And so part of the reason I'm telling you the things I'm telling you right now is so that you can bridge the gap. So, an upshot of this—and then we go to this next diagram here. Sorry, I have so many diagrams; it's hard to keep track of the monsters. Yeah, this one.
Okay, now you've seen this before, but I want to walk through it again because it's important, and it allows you to do a reconceptualization that unites things that can't otherwise be united. Okay? So one of the issues might be: well, what is an abstraction composed of? And I would say, well, it depends on the domain in which the abstraction is being implemented. So an abstraction about the world is slightly different than an abstraction about you.
I think what they have in common is that when you make abstract representations of the world, you're making abstract representations of the world as a place to act. And that's a critically important thing to understand because we're scientific in our explicit cognitive presuppositions. We think that world models are models of the world as a place of things. So we're kind of taking a page from St. Augustine in some sense—the world is made out of objectively existing independent material entities.
And when you develop understanding about the world, what you're doing is producing a map of those entities. And the thing is, that's true in a narrow sense in that that is what you're doing when you're doing science. But science itself is nested inside a narrative structure, as we talked about before. So then the question is: what's the nature of the more comprehensive narrative structure?
And I don't think you can understand profound narratives unless you understand this underlying set of assumptions. You're not naturally predisposed to view the world in objective terms. Now, we talked about that from a Darwinian and Newtonian point of view last time, and I told you what I thought about that, but here's the idea.
You know, so let's say we'll talk about the domain of morality. And the reason that I put this under the domain of morality is because it has to do with how you do act and how you should act. So that's a moral domain. And then I would say most of the time when we're communicating information that we find relevant to one another, what we're communicating is moral information. And the moral information is how to perceive and act in the world such that things work.
Now, obviously, you can have a debate about what those things are, but that's still what we're trying to do. And you can debate about what "work" means and so on and so forth, and we're doing that all the time, because it's complex. So if I say, well, you're a good person, then we could unpack that. I could say, okay, well, what are the snakes inside being a good person? And we could say, well, "good person" is actually a representation of a complex hierarchy.
And so I've outlined some of that hierarchy here. If you're a good person, one element of being a good person—although maybe not a necessary element—is that you're a good parent. And then a sub-element of that is that you can take care of your family, and a sub-element of that might be that you can play with the baby or that you can cook a meal.
Now, you don't have to hit a home run on every single one of the constituent sub-elements of the entire category in order to qualify as a good person, but you can see that there's a shared structure that people recognize as characteristic of a good person. And that structure is actually composed of action patterns and perceptions. It has nothing to do, in some sense, with the objective world; it's a completely different way of looking at it.
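One way to picture the kind of structure being described is a tree whose interior nodes are abstractions and whose leaves bottom out in concrete actions; this is an illustrative sketch of my own, with made-up node names:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValueNode:
    """A node in a value hierarchy: an abstraction that decomposes into sub-values,
    bottoming out in concrete, embodied actions at the leaves."""
    name: str
    children: List["ValueNode"] = field(default_factory=list)

    def leaves(self) -> List[str]:
        # The highest-resolution decomposition of the abstraction: actual actions.
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]

good_person = ValueNode("good person", [
    ValueNode("good parent", [
        ValueNode("take care of your family", [
            ValueNode("play with the baby", [
                ValueNode("make a goofy face"),
                ValueNode("tickle the baby's feet"),
                ValueNode("rock the baby"),
            ]),
            ValueNode("cook a meal"),
        ]),
    ]),
])

# "Good person" is not an objective category; at full resolution it decomposes
# into a sequence of embodied actions.
print(good_person.leaves())
```

On this picture, an argument about values is an argument about how the tree is arranged and which branches count as necessary.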
So if you're going to play with the baby, well, then you think, well, what do you have to do to play with the baby? And so, you know, you might make a goofy face at the baby because babies respond to that sort of thing, sometimes by crying but often by smiling. You can tickle their feet; you can kind of poke them; you can play with their arms; you know, you can rock them back and forth.
And you watch the baby, to find out if it's having fun too. Now, the thing that's interesting about actually playing with the baby is that the phenomenon you're manifesting is no longer abstract and conceptual; it's embodied. Play decomposes into patterns of movement; it's like a dance. In fact, if you take a mother who's getting along with her baby and you videotape them interacting, what you see is that mothers who get along with their babies are doing a dance with them.
And the baby is harmonized with the mother, so there's this sort of melodic and fluid dance; whereas a baby that's not getting along with its mother is more like someone having a conversation with a person who disagrees, in random ways, with absolutely everything they say. You know, you start off with an invitation to the conversation, and they nail you with a non sequitur.
So that's a non-starter. And then you try it again, and they nail you with something else that seems completely irrelevant. And so it's jerky and discontinuous, and you see that with people interacting with children all the time. I think that's part of the reason why people use dance as a mating strategy. It's like, you know, is this person capable of mutual, harmonious imitation?
Now, it's not a perfect test, but it's not too bad. It's got some utility in it. So the abstraction, the abstract narrative representation, you're a good person, that is not an objective category. The abstract representation you're a good person decomposes at the highest level of resolution into a sequence of actions. And Piaget would say that, well, you learn those basic actions as you're putting your nervous system together from birth until now.
So you're building yourself up from the bottom, because when you dance, you know, there are certain movements you have to make, and those are movements that you could make in other circumstances that would also be useful. And so for Piaget, you build yourself from the bottom of this hierarchy towards the top. Okay?
So then, you know, one of the things that you might note is that as you go up the chain of abstraction, arguments about what those categories contain become more and more likely, you know? And so you might say, for example, um, to your mate that being able to prepare a decent meal is such an important sub-element of being a good person that if you fail to do it or can't, that actually means you're a bad person.
And you can see right away why that would provoke all sorts of arguments. And so part of the reason that I'm bringing this up is because we're always—we're engaged in a constant process of mutual exploration to determine how these hierarchies are properly arranged and what are necessary subcategories of them and how those subcategories should be hierarchically instantiated with regards to one another.
And we have to have some agreement about that or we actually can't communicate. So like if my stance is you burn the vegetables, you're a bad person, and your stance is I burn the vegetables and who the hell cares about that—then the fact that our hierarchies of values are structured completely differently means we're going to have a very difficult time communicating, and there's going to be a lot of conflict about how that conceptual structure should be reorganized.
And that's actually what you're doing when you're arguing with someone about something important. It's like you're trying to say, okay, we need to get our hierarchies, our narrative hierarchies, synced so that what you want and what I want can happen at the same time, and so that we understand each other, and we can partly do that by talking.
We can also do it by suppressing the other person, which we tend to do an awful lot, or by suppressing ourselves in favor of the other person, and so forth and so on. But the thing is, those places of non-agreement are tunnels into the complex underlying reality that we have to deal with. They're places where the nature of that complexity, and the difficulty of dealing with it, become apparent.
And part of what happens as a consequence of that is that people just don't have those discussions. They don't know how to do it. They can't decompose the situation, you know? And I can give you an example of that, one you're going to face. Let's say you're in a family; you're married to someone, or whatever the equivalent happens to be, and you both have a career. Whose career is more important? You think, well, it doesn't matter.
It's like it matters because there will be times when you have to make a choice between the two career paths. It's like, okay, well, how do you do that? And the answer to that is you don't know, and neither does anyone else. And worse, there are no rules for having the discussion. You have to come up with all the rules and everything about the discussion because you can't even tell when you're being reasonable.
You know, because you might say, well, the person who makes the most money, their career is the most important. It's like, are you going to accept that? Do you think that's valid? And if you don't, okay, fine! What are you going to use as an alternative criterion? It's very, very difficult; it's very, very difficult.
You know, and most of the way we solve this as collectives is we have standard hierarchies of narratives that we just all adopt. That's our identity with our history. That's a role. If you adopt your role, and I adopt mine, and my hierarchy is structured to take your role into account, and the reverse is also true, we don't have to have a conflict.
Now, you might have a conflict as an individual with your role; that's a whole different story. But if you pop out of your role and I pop out of mine, we're in no man's land. And in no man's land, three things happen: tyranny, slavery, or negotiation. And most people can't negotiate. And the reason for that is it's too damn complicated.
What time is it? Ten after two? Yes, yes, let's take a break for—15 minutes.