
2015 Personality Lecture 18: Openness - Creativity & Intelligence


Nov 7, 2024

So I know you can't tell, but you're looking at a new and improved version of me today, because I just finished the Canadian government's online ethics course. So my capacity to manifest ethical behavior has been improved by an immeasurable amount. Okay, so today we're going to talk about something that's actually quite controversial, although I guess what we talked about last lecture was rather controversial too. People hate this topic, generally speaking, and it's because there's never any guarantee that if you investigate things from a scientific perspective you're going to learn things that you want to learn, or that you're happy about knowing, or that you wish were that way.

I mean, I would say one of the big advantages of the scientific method is that it presents you with data that you don't like. I mean most of the time when you learn something, it's because you're running into something that you don't like or don't want, right? Because otherwise, when the world is unfolding in front of you exactly as you predict and hope, there's nothing to learn. You almost always run into an impediment of one form or another before you ever learn anything. So we're going to talk about openness today.

Now, that sounds rather innocuous in and of itself, but the problem is that openness is the place in the big five where IQ hides, especially non-verbal IQ, which is most specifically associated with fluid IQ. Now, the thing about intelligence is that it's not like the big five; it's unidimensional in most regards. So, you know, if you do the proper factor analysis with the entire corpus of trait descriptors in language, you end up with five dimensions. There's no superordinate single dimension; you can make a case for plasticity and stability, but the actual utility of that higher-order factor structure has yet to be demonstrated in a multiplicity of situations.

We have some evidence regarding plasticity and stability: plasticity is predicted more by what people do, and stability much more by what people don't do. Now, I don't know how useful that is. I mean, it's kind of an interesting fact, but it isn't an incredibly powerful effect. Plasticity seems to be associated more with exploratory behavior and perhaps with dopaminergic function, and stability perhaps more with the control and constraint of behavior and serotonergic function, but it's still pretty vague.

So you can certainly make a case for the hierarchy of factor structure with regards to traits to sort of top out at the five-factor level, with some possibility of an informative level above that. IQ isn't like that; it's pretty damn unidimensional. It's complicated because there does seem to be some utility in distinguishing between verbal and non-verbal intelligence, or fluid and crystallized intelligence, depending on how you look at it. And I think the distinction is basically this: fluid intelligence roughly is a measure of how rapidly you can learn things, and then crystallized intelligence roughly is an indication of how much you've learned.

And then you can see why those two things would be very highly correlated, right? Because how much you know is obviously going to be a function of how fast you can learn, but you can also see that there would be some circumstances under which those two things could be dissociated. One of the things that happens, for example, is that as you get older your fluid IQ drops quite precipitously, and it starts doing that somewhere in your mid-20s. So some of you are already getting stupider, which is kind of annoying, at least from the perspective of the ability to learn.

But your crystallized IQ, or your verbal IQ, depending on how you look at it, can continue to rise for the rest of your life as you put more information into yourself, so to speak. Some of that might be a measurement artifact, because the way you generally test verbal or crystallized intelligence is by asking people about their knowledge of something: their vocabulary, for example, which is an excellent measure of verbal intelligence, as you might expect, probably because it's an index of how much people read (that would be my guess), or general-information tests, which sample your acquired factual knowledge, and so forth.

That sort of thing can increase as you age, even as your ability to learn decreases. By the way, if you want to keep your intelligence intact, the best way to do that is to exercise, just so you know. Both weight lifting and cardiovascular exercise can maintain your fluid intelligence to quite a remarkable degree across the course of your life. And the reason for that, likely, is that your brain is a heavily vascularized organ, and it uses a tremendous amount of your body's oxygen and energy.

And so, if your cardiovascular system isn't in that good of shape, then what seems to happen is that it's harder for your brain to get rid of waste products that are produced during its metabolic processes. And so, you can't work as efficiently. There's not a lot of evidence that brain games of one form or another can produce any real positive effect, although there's scattered reports in the literature that video games can improve spatial intelligence, which is a subset of intelligence, even though it's very highly correlated with fluid intelligence in domains other than the game.

Usually, what happens if you get people to develop expertise at a game that requires abstract ability, like a video game, is that they get tremendously better at the thing they're practicing, but there doesn't seem to be a lot of transfer of ability to different games. And what fluid intelligence is, in large part, is the ability that's common across multiple complex tasks, whatever that ability happens to be. Now, we don't know exactly what that ability is, partly because we don't really know very much about how the brain works, but we definitely know how to measure it, and that irritates people.

So, IQ fits under openness, and the terminology is still a little bit awkward, I would say. For a while, when people were trying to sort out what the fifth factor (the openness factor) was, some people thought about it as intellect and some people thought about it as openness, and it ended up being named openness. When we did our aspects study back in 2007, we broke it up into openness to experience proper and intellect, so those seem to be aspects of the higher-order trait. And the aspect-level differentiation actually turned out to be rather useful, at least so far.

One of the things we just published, for example, showed that if you were higher in openness to experience, that aspect, you were more likely to be in the humanities, and if you were higher in intellect, you were more likely to be in the sciences. So anyways, what we'll do first is talk a little bit about intelligence per se, and then we'll talk about the trait. The reason I want to talk about IQ (because really, that's what I'm talking about) is that the best measure of the intellect aspect isn't a Big Five personality questionnaire; it's actually an IQ test, and it's very, very important to understand IQ. There's a bunch of reasons for this.

And the first reason is that IQ is more powerfully predictive of long-term life success than any other phenomenon that can be measured among human beings. And not only that, the statistics that were used to derive IQ tests, and then used to predict future performance, are exactly the same statistics that every social scientist, particularly psychologists, uses to do all of their analyses. So you can't criticize the derivation of IQ tests on statistical grounds without criticizing every single finding in psychology that's ever been made by anyone, ever since we started to use statistics.

And I would say, not only that, the best statisticians that psychology has ever produced are the people who invented and promoted and developed and tested IQ tests. So there's no getting out of it by saying that there's something faulty about the statistics. So, for example, Stephen Jay Gould, the famous evolutionary biologist and pretty radical left-wing thinker, made a public case at one point for what he called the mismeasure of man, because he wasn't much of a fan of IQ tests.

And he claimed that IQ tests were faulty because they relied on factor analysis. Basically, what IQ is, is the first factor that you draw out of a huge battery of questions that require abstraction. With any kind of question, you'll find that if people do well on one question, they tend to do well on all the other questions. That's the general factor. And he said, "Well, that's just a factor, and that means it's a mathematical abstraction, and that means that it's not real." It's like, well, first of all, the first factor that you derive from the performance indices for a very large number of answers to abstract questions is almost perfectly correlated with the average, the mean.

And so that's tantamount to saying that the mean is a mathematical abstraction, and therefore doesn't really exist. And well, then you have a problem, which is, well, exactly what do you mean by "exist"? Because you can certainly make a case that mathematical abstractions are more real than the thing that they're abstracted from. If that wasn't the case, then mathematics wouldn't be so powerful. So basically, Gould's argument boiled down to the idea that there's no such thing as an average.

Well, this is part of what makes me a pragmatist philosophically. I mean, there's such a thing as an average insofar as calculating the average and then using it produces certain results that you desire in order to deal with the world, right? So the average of a group of numbers is a tool of some sort, which obviously doesn't contain all the information that the entire set of numbers contained. You need at least a standard deviation as well, because that tells you about the range, you know, the variability. But the idea that the average isn't a real thing is, you know, silly. It's a shallow criticism, and it's also based on a complete lack of knowledge about exactly what factor analysis does and doesn't do.

So now here's a list of some intercorrelations between the various aspects, just so you know how tightly associated they are. Industriousness and orderliness, the aspects of conscientiousness, are correlated at about 0.4; volatility and withdrawal at about 0.59; politeness and compassion at about 0.44; assertiveness and enthusiasm at about 0.47; and openness and intellect at about 0.35. So openness and intellect seem to be the most differentiated pair, as far as aspects go. The average correlation between the Big Five is r = 0.22, and the correlation between the Big Two is r = 0.24. So that just gives you some idea of how much segregation and how much overlap there is between the different aspects underneath the trait level.

Let's see, so what I should really tell you first, I guess, is how to conceptualize IQ. This is a good one, so you want to follow it carefully, so that you can explain it to yourself and so that you can explain it to other people. So let's say you want to make an IQ test. This is what an IQ test is: the first thing you might imagine is that you have a huge library of questions, questions that can be formulated in language or some other symbolic manner (they don't have to be, but often are) and that require skill or knowledge to answer.

So here are some typical questions, and you could generate multiple exemplars of questions like these: What's 2 times 68? What's the capital of Georgia? How do you find hypertrichiasis anemia? Complete the pattern: 2, 4, 8, 16. Or: remember these numbers, 2, 4, 6, 12, 15, 14, 18, 20, 22, and then tell them back to me. That's a working memory test, by the way, and working memory is basically indistinguishable from intelligence in many ways.

So then, you know, another thing I might have you do, for example, is I might show you a pattern made out of squares and then give you some blocks so that you could make the pattern out of the blocks, and time you to see how long it took. Or maybe I'd only give you a certain amount of time, and you'd either pass or fail. Or I might show you some patterns; this is like a Raven's Progressive Matrices test, which is one of the tests that is most accurate at assessing nonverbal or fluid intelligence. Imagine a three-by-three array of designs, with eight designs filled in and a question mark in the last cell, and each design changes as you move across and down. Then over here I'd show you a bunch of candidate patterns, and one of them would complete the sequence.

Usually, you can work out the sequence going down, across, and diagonally as well. And it looks somewhat like a working memory test, in that the Raven's Progressive Matrices patterns usually involve a couple of different variables, like color and shape or something like that. So in order to complete the item, you have to figure out how the color is transforming and figure out how the shape is transforming. Or maybe, in a more complex Raven's matrices question, it would be color, shape, and position on the page, or something like that. So you're increasing the number of dimensions that have to be attended to simultaneously in order to solve the problem.
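That multi-attribute structure is easy to sketch in code. This is a toy illustration, not an actual Raven's item; the shapes, shades, and rules here are invented for the example. Each attribute of a cell in a 3-by-3 grid follows its own simple row-and-column rule, and solving the item means recovering each rule independently to fill in the missing cell:

```python
# Toy Raven's-style matrix item. Each attribute follows a Latin-square rule,
# so every shape and every shade appears exactly once per row and per column.
SHAPES = ["circle", "square", "triangle"]
SHADES = ["black", "grey", "white"]

def cell(row, col):
    """Each attribute's index advances with row and column independently."""
    return (SHAPES[(row + col) % 3], SHADES[(row + 2 * col) % 3])

grid = [[cell(r, c) for c in range(3)] for r in range(3)]
grid[2][2] = None  # the missing cell the test-taker must infer

# Solving means recovering the shape rule and the shade rule separately:
answer = cell(2, 2)
print(answer)  # ('square', 'black')
```

Real matrix items get harder by adding more simultaneously varying attributes, which is exactly the dimensional loading described above.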

So one of the things that's quite interesting about the IQ test is that it really doesn't matter what the questions are. So, anyway, imagine you have a whole pile of these questions lying around, like a universal library of potential questions that require abstraction to solve. Then you take a random set of a hundred of those, give it to a thousand people, sum up their scores, and rank order them in terms of their performance, and that's IQ. It's as simple as that. Now, you could do some other things; you could correct it for the age of the participant.

Now, sometimes that's useful and sometimes it's not, but you could do that. And generally, if you standardize the score and you correct it for age, then what you get is an IQ score. So, actually, the total would just be an indicator of general cognitive ability; an IQ score would be that indicator of general cognitive ability corrected for age. And that's it, that's all. And here's one of the strange properties of that set of scores: if you factor analyze the dataset, you'll pull out one factor and a few little bitty factors after that, and that factor will be very, very highly correlated with the sum.

Correlated at 0.9, 0.92, 0.93, like really highly correlated; far more highly correlated than most tests are even reliable. Now, another thing that's very interesting is that you can take another set of 100 questions, give them to the same thousand people, and re-rank order the people. Then you correlate the rank orders, you get what I'm saying? The correlation between the rank orders will be something like 0.95. What that means is that no matter which set of 100 questions you take out of this universal library of questions, people are going to score in pretty much the same order that they scored when they did the first hundred questions.
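That whole procedure is simple enough to simulate. Here's a minimal sketch, assuming a single latent ability drives performance on every item; all names and parameters are illustrative. It draws two random 100-item "forms", sums each person's correct answers, rescales to the conventional IQ metric (mean 100, SD 15), and checks how strongly the two forms agree on the ordering of people:

```python
import random
from statistics import mean, stdev

random.seed(0)
N_PEOPLE, N_ITEMS = 1000, 100

# One latent ability per person; an item is answered correctly when
# ability plus item-specific noise clears the item's difficulty.
ability = [random.gauss(0, 1) for _ in range(N_PEOPLE)]

def give_form(abilities):
    """Score one random 100-item form: each person's number correct."""
    difficulties = [random.gauss(0, 1) for _ in range(N_ITEMS)]
    return [sum(a + random.gauss(0, 1) > d for d in difficulties)
            for a in abilities]

def to_iq(scores):
    """Standardize raw sums onto the conventional mean-100, SD-15 scale."""
    m, s = mean(scores), stdev(scores)
    return [100 + 15 * (x - m) / s for x in scores]

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / ((len(x) - 1) * stdev(x) * stdev(y))

form_a, form_b = give_form(ability), give_form(ability)
# Alternate-form agreement: two disjoint random 100-question sets
# order the same people almost identically.
print(round(pearson(to_iq(form_a), to_iq(form_b)), 2))  # typically well above 0.9
```

The same single latent ability is also why the first factor of the item-score matrix comes out nearly identical to the simple sum, which is the point about the general factor.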

And that means that IQ tests have extremely high reliability; in fact, this is an even more stringent form of reliability than test-retest reliability, namely alternate-form reliability. So that's all there is to IQ; it's very simple. Now, why it is that the ability to do well on one of those questions is highly correlated with the ability to do well on all the rest of them, that's something that nobody really understands. I think it has to do with the human capacity to abstract. Like, we can abstract, and we can manipulate abstractions; it's not obvious that there's any other creature that can do that, and it seems to be, in some sense, modular.

Anything that can be abstracted and then manipulated draws on exactly the same set of underlying skills, whatever those happen to be. Now you might say, "Well, so what?" which is a perfectly reasonable thing to say. And the "so what" is this: take that rank order, then take the same thousand people 20 years later and rank order them in terms of how well they're doing at their jobs. You could just use income, say, as a marker; there are other markers you could use, but for the sake of simplicity let's say income. The correlation between their original rank order and the rank order of their performance, if you had the full range of intelligence in the sample, would be something around 0.5, and 0.5 is a staggeringly high correlation.

So, you know, when you hear people talk about effect sizes, you'll hear them say that 0.2 is a small effect size, 0.3 is a medium effect size, and so forth. What you find out if you look at the literature empirically is that effect sizes of 0.5, correlations of 0.5, are vanishingly rare in the social sciences. If you ever do a study with a reasonable number of people with a novel measure and you're predicting something with it, and you get a correlation of 0.5, boy, you're jumping around the room, because that's only going to happen like once in your life, unless you're measuring something that someone has already measured and you just don't know it.

And so it's reasonable to point out that the correlation between IQ and academic performance, say, or between IQ and long-term career success, is bigger than the correlation between virtually any other two things that social scientists have ever discovered. And the thing about it, too, is that it's very straightforward; the procedure I just told you, anyone can replicate that. You know, you're not going to come up with a universal library of questions, obviously, but you can use the internet as a reasonable proxy.

So not only is IQ easy to understand from a measurement perspective, the tests are also very, very easy to replicate, and they're extremely powerful predictors. Now, it turns out that IQ is actually correlated more strongly with how rapidly you learn a task than with your performance per se. So the correlation between the rate of learning something novel and IQ can be as high as 0.6 or 0.7, and the remaining variance would probably be taken up by something like conscientiousness or openness, or negatively with neuroticism. So you're picking up a tremendous amount of the variation in people's performance with that single measure, and you can derive a pretty decent indicator of IQ in 20 minutes, which is also pretty frightening.

You know, so well, so that's IQ. Does anybody have any questions about that? Yes, well, you have to formulate them in language—that's all. Yeah, now, a lot of people have complained about the idea of intelligence because they don't like the idea. They don't really mind the idea so much that there are smart people, but people really don't like the idea that there are dumb people. And, you know, it's really not reasonable from a logical perspective to have one without the other. And it's easy to see why people object to that because whatever term you use to refer to the low end of the intelligence distribution rapidly takes on a pejorative nature.

So what happens is the culture cycles through various words, decade by decade, trying to describe the lower end of the intelligence distribution, but all the words end up being pejorative, and so they're changed decade after decade. But you can understand that because, first of all, there's great danger in labeling someone unintelligent, perhaps, period. But certainly, if there's measurement error, you know, so if you're labeled unintelligent, let's say, and you're not, that's going to be a real catastrophe.

So there's measurement error problems, but then, you know, you have to think about the problems of not doing the measurement. This is something I've thought about for a long time. So let's say that I assessed your verbal IQ before you came to the University of Toronto. Forget linguistic differences; assume we could control for that, which is very difficult at the University of Toronto because so many people have English as a second language. But forget about that for a minute. Let's say I got an entire population distribution, not only of the applicants to the University of Toronto but of the entire population, and I said categorically that the bottom thirty percent of the people in that distribution would not be able to graduate from the University of Toronto.

Am I doing them a favor by not letting them apply, or am I harming them? Well, we know perfectly well, I mean, the probability that there's anybody in this room who has an IQ of less than 115 is pretty damn low. And if there is someone in the room whose general intelligence is at that level, you can bloody well be sure they're working like mad dogs in order to keep up. And I'm not saying that those of you who have to work like mad dogs to keep up are necessarily lower in IQ. I'm just saying that because IQ, in part, seems to be a measure of something like processing speed or rate of understanding, if you're not in the upper echelons of the distribution and you're tasked with extraordinarily difficult tasks that require abstraction and quick learning, the only way you can compensate is by working to an incredible degree.

And you can do that. That's also why, at the University of Toronto, and we know this perfectly well, and it's true in other educational institutions, conscientiousness is almost as good a predictor of grades as intelligence. And that only makes sense to me: who the hell is going to get the good grades if not the smart people who work hard? It wouldn't be good if that weren't the case, because hopefully what you get when you get a grade is an indicator of how fast you are, how well you know the material, and how much effort you've put into learning it; something like that.

The relationship between creativity and grades at the University of Toronto is zero, by the way, once you control for IQ, which is, you know, perhaps rather appalling. But the problem is that it's very difficult to assess creative people, right? Because they're annoying. Creative people do things in a new way, and the problem with trying to assess whether someone has done something in a new way is that you have to come up with a new scheme of grading for it, and you can believe that's not going to happen, you know?

And then it's worse, because if you're a creative person and you're graded by someone who isn't creative, they're not going to think you're creative; they're just going to think you're wrong. And you might be, because lots of times if you're creative, you're also wrong; it's not that easy to come up with a novel way of doing something, or a novel hypothesis that actually is an improvement over the previous hypothesis, right? Most of the time, you're off on a tangent, and it's an incorrect tangent.

Now, various people have criticized the idea of a single intelligence, and the most famous critics in the last 15 or 20 years are probably Robert Sternberg and Howard Gardner. Gardner is not proceeding as a scientist here, by the way, at all. He claims directly that he doesn't really care whether his intelligences can be measured. Well, that's not very helpful, because if you're working as a scientist and what you're talking about can't be measured, then it doesn't exist.

And you can even make a philosophical case, I think, that if what you're talking about can't be measured in any possible way, there's no sense in assuming that it exists. Now, Gardner has posited intelligences like linguistic, musical, logical-mathematical, spatial, bodily-kinesthetic, intrapersonal, and interpersonal. One thing you might note is that we already have a perfectly good word for performance variability across all those domains, insofar as it actually exists, and that word is talent. So there's no reason to confuse the word talent with the word intelligence.

And the next thing you might note is that a lot of these so-called intelligences are likely captured not so much by intelligence as by variability in personality traits, which we already understand. So to throw in a bunch of new terms like linguistic, musical, logical-mathematical, spatial, bodily-kinesthetic, intrapersonal, and interpersonal, all that does is muddy the conceptual waters. Now, there's a reason for that. The reason is fundamentally political, and the political reason is that it's uncomfortable for people to admit that there might be something like actual differences in ability that are important and ineradicable.

Now, I can understand why people would have trouble with that but consider the alternatives. We know part of the reason that you're here and will perhaps stay here and will perhaps be successful, insofar as you can define success along the dimension of career attainment, say, in your life as it unfolds from this time onward, is because you work hard, okay? But we also know that part of the reason that you're here is because you're smart.

Now, the question is: what should you do with that from an ethical perspective, exactly? Should you say, "No, I'm no smarter than anyone else," which means that maybe you're much more hard-working than everyone else, so you're still, you know, denigrating the non-achieving end of the distribution? Or do you say, "I've been blessed by something that's actually beyond my control and am privileged as a consequence of that," and then decide what exactly that means in terms of your responsibility? Those are your choices, but to not take into account the fact that you've been blessed, at at least one level of analysis, with a favorable roll of the dice seems to me to completely misstate the nature of the causal sequences that propelled you into what's essentially a position of privilege.

But by denying the innate differences between people with regard to important abilities, all you do is attribute all your success to your own particular, individually controllable actions. And, I mean, I'm not saying that you don't deserve credit for your work; it's like, "More power to you" and all of that. But the idea that some people are smarter than others, that's not an idea; that's a fact. And it's an uncomfortable fact, and the fact that we won't deal with it means that people suffer unnecessarily.

So let's go back to those people in the distribution. I say the bottom 30 percent of the distribution isn't going to be able to get a degree at the University of Toronto, unless they're capable of working to an insane level, and even then likely not. So let's say that you have an ability-blind admission process; it's random, so everybody's name is thrown into a hat, and if their name gets pulled out, they get to go. Do you think that you're doing the people who are going to fail a service or not?

You know, now you can make a case that you are, because it's likely that there'll be a couple of individuals in the bottom part of the distribution who will make it through, partly because of measurement error, right? Because you're not going to measure this perfectly, and partly because, well, people are surprising and amazing creatures, and you never know exactly what any given one of them is going to do. But you're going to torture a lot of them to death at the same time, you know?

I've been struggling with this, because I've developed tests that help employers decide who might be more competent than whom when they're hiring, and that's partly dependent on general cognitive ability. And we think, based on the relevant statistics, that we can improve the probability that a given employer will hire an above-average worker for a cognitively complex job from 50-50 to 75-25 or 80-20. That's quite a lot, but it's not perfect; there's still a fair margin of error there. Still, you know, having half as many employees below average is definitely going to be of tremendous benefit to your company.
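That jump from 50-50 to roughly 75-25 is what a validity coefficient of about 0.5 works out to under Rosenthal and Rubin's binomial effect size display, which approximates the chance of an above-median outcome as 0.5 + r/2. A quick sketch (the r values are illustrative):

```python
def hit_rate(r):
    """Binomial effect size display: P(above-median outcome, given you
    selected on a predictor correlating r with the outcome) ~ 0.5 + r/2."""
    return 0.5 + r / 2

for r in (0.0, 0.3, 0.5, 0.6):
    print(f"validity r={r:.1f} -> {hit_rate(r):.0%} above-average hires")
# r=0.5 gives 75% versus 25%, the 75-25 split mentioned above
```

It's an approximation, but it makes the practical meaning of a 0.5 correlation concrete: half the misses of a coin-flip hiring process.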

And you might say, "Well, that's unfair because there's going to be measurement error too." And that's not the question. The question is: is it more unfair than to do it any other way? Well, you could use interviews, but tall, good-looking extroverted people tend to do much better on interviews than short, ugly, introverted people, especially if they're a little bit on the disagreeable side as well, because agreeable people also do better in interviews.

So not only are interviews tremendously biased on all of those dimensions, they actually don't predict performance in the long run very well at all. And that actually means, in the States, they're illegal. You know, although companies' lawyers haven't woken up to this fact yet, you're mandated by law in the United States to use the most accurate, valid, and reliable means of hiring currently available, and interviews are not that. Neither are letters of recommendation, which are pretty much as bad. They're not so much biased, the letters of recommendation; they're just useless, except insofar as, if you can't get anyone to write you a letter of recommendation at all, well, maybe that's an indication that you're not very socially skilled or that you don't have a very good social network or whatever.

So maybe as a really blunt indicator of isolation you could derive some information out of letters of recommendation, but as far as being valid indicators of future performance, they're just not valid at all. That's partly why I hate writing them. They're also illegal, at least in the U.S., even though people don't understand that yet. But I know the law and I know the validity statistics, so I know what you're required to do.

The other selection methods are also biased and unreliable. You could say, "Well, you could hire someone based on their academic history," but roughly speaking that's just an index of intelligence and conscientiousness anyway, and not a very good one, especially because there's tremendous variability between schools. So if one person has a GPA of 4.0 from school X and another person has a GPA of 4.0 from school Y, there's no reason at all to assume that those are comparable.

So you can use grades, but that's full of measurement error, so it's not very good either. You could guess that that'll give you 50-50 odds of hiring someone who's above average versus below average, which doesn't seem to be a very intelligent way of going about it either. Here's something to think about: let's assume that there are 10 people working, and an inappropriate selection method places an incompetent manager above them. Maybe the manager is less intelligent than the workers; that might be one possibility. Or maybe the manager is less creative than the workers, or less conscientious, or whatever. It doesn't matter; some dimension of competence is not well matched with the demands of the job.

Well then one of the questions is, what exactly are you doing to the manager? Well, you're basically setting them up to fail. And a tremendous number of managers fail in the first two years of their promotion; it's well above 50 percent, and so that's not so good for them. But then you're also subjecting the 10 people that they're supervising to a kind of horrible perdition for the time it takes for everyone to figure out that the manager is actually failing. And what that means generally is that the very highly qualified people in the worker pool will just leave, because why would they put up with that? So then you might think, well, is it unethical to select properly, or is it ethical to select randomly, or exactly how should you solve that problem?

And the answer is: well, we do try to solve it because we use selection methods, but we usually use ones that aren't very accurate. It's funny too because I tried selling accurate selection tests to corporations for a very long period of time, and one of the things I learned is they didn't want them. That was mind-boggling to me because the economic utility in hiring people who are competent compared to people who aren't is so high; it's absolutely mind-boggling because the difference in productivity between people isn't even— it isn't normally distributed; it's Pareto distributed. Some people are staggeringly more productive than other people.
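The claim about Pareto-distributed productivity can be sketched with a quick simulation. This is purely illustrative: the Pareto shape parameter and the normal parameters below are assumptions chosen for demonstration, not estimates from any real workforce data.

```python
import random

random.seed(0)
N = 10_000

# Hypothetical productivity scores under two distributions.
# Parameters are illustrative assumptions, not fitted to real data.
normal_scores = [max(0.0, random.gauss(100, 15)) for _ in range(N)]
pareto_scores = [random.paretovariate(1.16) for _ in range(N)]  # heavy tail

def top_share(scores, frac=0.2):
    """Fraction of total output produced by the top `frac` of people."""
    ranked = sorted(scores, reverse=True)
    k = int(len(ranked) * frac)
    return sum(ranked[:k]) / sum(ranked)

print(f"normal: top 20% produce {top_share(normal_scores):.0%} of the output")
print(f"pareto: top 20% produce {top_share(pareto_scores):.0%} of the output")
```

Under the normal assumption, the top fifth produces only modestly more than its proportional share; under a heavy-tailed Pareto, it produces most of the total, which is the "a few people do almost everything" pattern the lecture describes.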

So tilting your selection criteria so that you pick from the more productive end of the distribution means incredibly powerful economic benefits for your company. But at least when I was doing this to begin with, human resources people, who were usually the ones evaluating the tests, weren't necessarily the most competent people in the corporate environment, which is something that hasn't changed very much. They were very badly trained, and they believed that there were no differences between people that couldn't be eliminated with training, which is like, yeah, no, no, that's wrong.

If you have any sense at all and think about it for 15 or 20 seconds, you know that's the case. You know, I don't know what it was like in your school, but in the school I went to, which was way the hell up in northern Alberta in the middle of no man's land, there was at least 15 percent of the students, I would say, probably higher, who were still functionally illiterate by the time they hit grade 10—functional illiteracy meaning they had never read an entire book.

And that's way more common than you think. Like, I don't know how common you think that is, but the stats basically indicate there are as many people with IQs of 85 and lower as there are people with IQs of 115 and higher—and I already said that the minimum IQ level of anybody in this room is likely to be something around 115. So the person in this room who has the lowest general cognitive ability is still smarter than roughly 85 percent of the general population.

There are just as many people at 85 and lower as there are at 115 and higher. And at 85 and lower, you don't get literacy. People with IQs of 90 or less have a difficult time translating written words into action, so they can't really read instructions. And that's roughly 15 percent of the population. And so you might say, "Well, no way." It's like, well, you don't get to have an opinion on this, by the way, because the science has already been done.
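The symmetry claim (as many people at 85 and below as at 115 and above) follows directly from the normal model of IQ, conventionally scaled to a mean of 100 and a standard deviation of 15. A minimal check with the Python standard library:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # conventional IQ scaling

below_85 = iq.cdf(85)        # fraction of the population at 85 or lower
above_115 = 1 - iq.cdf(115)  # fraction at 115 or higher

print(f"IQ <= 85:  {below_85:.1%}")
print(f"IQ >= 115: {above_115:.1%}")
```

Both come out to about 15.9 percent, since 85 and 115 sit exactly one standard deviation on either side of the mean.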

But I can tell you one fact that's pretty unsettling. The United States Army has been doing IQ testing on its recruits since before World War I. They actually did a lot of the research that established IQ tests, validated them, and indicated that they were reliable and so forth, and they had a variety of reasons for doing that. Partly, I guess, they didn't want to put incompetent people in charge of deadly machinery, which, you know, does seem to be a reasonable proposition, I would say.

But what they found as a consequence of their hundred years of testing was that you couldn't teach anybody with an IQ of 83 or lower anything at all that would make them anything but an obstacle to the tasks that the armed forces had to complete on a regular basis. So that's 83. So now in the United States, it's actually illegal to induct anyone into the armed forces if they have an IQ of 83 or lower, and that's more than 10 percent of the population.

Now you've got to think about that because what it means is that approximately 10 percent of the population has sufficiently low general cognitive ability that one of the complex enterprises that's chronically most desperate for manpower has decided that there's no point in even trying. Now, that's a dismal outcome, but I don't know exactly what you're supposed to do with that fact. But ignoring it doesn't seem to me to be the right thing because it doesn't solve the problem.

And this is going to become an increasingly present and unavoidable, unignorable problem because, you know, already that's what's happening in developed countries—less so in underdeveloped countries—but in developed countries, the gap between the rich and the poor is increasing. So, for example, almost all of the increases in wealth that have accrued to people in the last 20 years have accrued to people who are in the top one percent of the distribution of wealth, and that's not going to stop, by the way.

And part of the reason is that cognitive power has become even more valuable than it was four years ago, and the reason for that is computers. Basically, because if you're really smart and you're good with a computer, you're way ahead of someone who isn't very smart and who doesn't know how to use computers at all. And you're not just a little bit ahead; you're leaps and bounds ahead. And worse than that, you're getting farther ahead all the time, and the reason for that is that computational power keeps increasing, and it increases a lot—it's doubling about every 18 months.

So part of the problem that the human race is going to have to face in the next 30 years, along with many other problems, is: what are you going to do with people who have neither the cognitive ability nor the conscientiousness to find a niche in society that other people will value enough to pay for? Because that's really the question: what do you do? Well, the conservative answer to that is, "Well, there's a job for everyone." It's like no, that's wrong; there isn't.

And the liberal answer is, "Well, everybody's the same, so it's just a matter of training," and that's also wrong. Both of those positions have been rendered permanently outdated by the scientific research. Well, you know, there is the possibility of providing a minimum income. The question is, what will that do? It's completely unanswerable. We have no idea what it will do, you know? And I mean, it might stop people from starving to death, although in North America that's generally not a problem people have anyway.

But we don't know. See, I think of human beings as pack animals fundamentally, you know? And I—this is just—it's a metaphor in some sense. I don't think that people can be happy unless they are burdened down with something, like a sled dog is burdened down with something, you know? You have to have responsibilities; they have to be important responsibilities. You have to be sequenced in your time, you know? Most of the people I have in my clinical practice, if they're not employed, they just fall apart.

And the conscientious ones fall apart because they eat themselves up with shame and guilt, and the unconscientious ones fall apart because their sleep schedule goes all over the place, they don't eat regularly anymore, they do all sorts of impulsive things that are counterproductive, and you know, they just sort of spiral into a pool of meaninglessness. So human beings are pretty social, and we're pretty altruistic, you know, in some weird manner. And it doesn't seem to me that people can live a life that's acceptable if all they have is enforced leisure.

So maybe I'll be wrong about that. If you provided people with a base salary, maybe people would figure out what to do with their spare time, but I doubt it. I think it's very, very difficult for people to regulate themselves in the absence of a certain minimum of social structuring and guidance. I've seen very, very few people who can conjure that up on their own and manage it for extensive periods of time. I think I've only met one person, I would say, in my whole life who's actually managed that.

And that particular person has a very large array of talents and is extremely intelligent. So, all right. So you might ask, "Well, what exactly is intelligence?" And maybe, "What exactly is the ability to abstract?" And I think it's something like this: The world is made out of—the world is a very, very high-resolution place. No matter how much you zoom into something, you can zoom in more. There's information at more and more levels of detail.

And then also, no matter how far you back away from something, there's more and more levels of detail. So, you know, you guys are composed of a whole variety of subsystems—complex functioning subsystems all the way down to the subatomic level; and then above your phenomenological level of perception, you’re members of families, you're members of cities, you're members of provinces, and nationalities and international organizations and ecological structures and so forth and so on.

And all of that’s characteristic of you all the time and in every situation, but you don't deal with all that information, and you can't. And so one of the things that people seem to have learned how to do is to abstract from that. And I'm not exactly sure what it means to abstract, but it seems to me that it means something that's similar to producing a low-resolution representation, like a thumbnail.

And so actually when I look at each of you, what I'm actually seeing is a thumbnail of what you are, you know? First of all, obviously, I don't see the other side of you, or either side for that matter; I only see you face-on. So in some sense, it's a two-dimensional thumbnail, and then I have no idea what's going on in the subsystems that constitute you underneath your mere surface.

If there were something wrong with you, for example, that would be a complex diagnostic problem. And I can't see your families, except in very, you know, low-resolution ways, or your nationality, or any of the systems that you're identified with and embedded in. So just looking at you is an act of abstraction, you know? And part of that, thankfully, is done by the fact that our sensory systems just aren't that good, so there's a whole bunch of things we can't perceive, and that simplifies the problem.

Now what we're hoping is that I can perceive enough of you at a given time so that if I interact with you, I get what I want, and you get what you want, roughly speaking; it's something like that. So what we hope is the model, the abstract model, even of perception itself, is sufficiently accurate so that it's a sufficiently unbiased sample of the reality that it reflects, so that if I interact with it, things work out the way that I would like them to work out. But there's lots of times when that doesn't happen at all, right? It doesn't happen when you're sick.

The fact is that the limited resolution of your sensory apparatus is a huge impediment to figuring out why you might be ill, you know? And it doesn't help much either when people are engaged in struggles that go beyond the merely personal. So if I happen to be a member of one ethnic group and you happen to be a member of another, and there's strife between those two ethnic groups, then the fact that I can perceive you at this level of resolution might have very little to do with my ability to solve that particular problem.

Now, obviously, people can abstract. You know, part of that's just built into us so that we abstract the phenomena that we see, but then human beings, I think, in some sense, are capable of meta-abstraction. So I think what happens is, you can imagine that there's the phenomena in and of itself, whatever that is, that complicated multi-layer thing that's always in front of you, and then there's your representation of it, which is what you perceive, which is already extracted to a tremendous degree and limited to a tremendous degree.

And then there’s abstractions of that. So what it seems to me to be is that language is a thumbnail of images that are a thumbnail of the reality of things; it's something like that. So it's a dual compression. So, if I say "cat" to you, you'll—what the word cat does for you is produce the image of a generic cat, which is already a kind of abstraction, and then that's attached to your understanding in some sense so that you can generate the understanding that would go along with at least in part with perceiving or interacting with a real cat.

So in some sense, what's happening is I'm compressing the information down to a tremendously low-resolution thumbnail, and I'm throwing that at you, and you decompress it into a low-resolution image, and then you decompress that into something that's roughly equivalent to reality. That's what you're doing when you're reading a book, for example, right? Because when you read the book, you can conjure up images of the places that the author is talking about; you conjure up images of the characters themselves.

To such a degree that if you go to see a movie that's made of your favorite book, you might be irritated because the person in the movie looks nothing like the person that you read about, at least as far as your imagination was concerned. Now, intelligence in general seems to be whatever underlies the ability to generate those low-resolution representations, to manipulate them in your own mind, but also to communicate them to others.

A big part of intelligence is working memory, and working memory is—well, while you're sitting there thinking, if you're thinking—how many of you think in words primarily, as far as you're concerned? So how many of you don't think in images? Anybody? Images? Well, that's another possibility. How many of you think in words and images? Okay, okay, so that's fine, you know? The images are already a representation, and in many ways, the words are a representation of the images.

And so your ability to abstract and then your ability to manipulate those abstractions seems to be at the core of whatever intelligence is, and that's what IQ purports to measure. Your working memory. So when you're—if you sit there right now—so let's do this: here is the sentence to think about. Okay, now think about that sentence. Okay, so the faculty that you're using to represent that sentence to yourself is working memory.

And it's not very powerful; you can't contain very much in your working memory. Seven digits is about the maximum. Now, people vary; some people think that four bits of information is actually the limit to working memory. Some people think it's closer to seven, but it's not very much. That's why telephone numbers tend to be seven digits long, so you can kind of easily remember seven things, which is not very many things, you know?

So part of what your intelligence is is the breadth of that working memory and then the speed with which you can run abstractions through it. So now, you know, I've put a little map up here. So the thing on the top left—well, that's sort of a multi-dimensional, it's a schema of a multi-dimensional reality. And you know, if you look at that, what it is, it's a collection of dots or circles arrayed in a variety of different arrays.

My proposition is that you can represent that array in a variety of different ways. I've called them Object 1—you can't see Object 2, Slide Error—Object 3, Object 4, and Object 5. Those are all low-resolution representations of the thing that's in the top left-hand corner, and you can see that in some sense they capture something important. They capture some important element of the thing in the top left-hand corner, but they don't capture all of it.

Now, it might be that not capturing all of it is a good idea because you don’t want to use any more information than you have to. And then at the bottom, well, those are words or semantic symbols of one form or another, and they represent the representations. I saw Temple Grandin speak once; I don't remember if I told you about this, but she's a very famous autistic researcher.

She's quite autistic, and she's very fun to listen to; she's quite a good public speaker, which is quite remarkable. She's trying to figure out—she thinks that autistic people think like animals, and she actually works with animals, and she seems to be very good at understanding them. She believes that what frightens her are the same things that frighten animals and for the same reasons.

And so her proposition, for example, is that if I say the word "church," all of you people are going to have what you might describe as a schematic abstraction, like a hieroglyph, that's more or less representative of the class of churches. And so— or maybe I can say "house." Kids are good at this. You see this house? It's like it's got a little rectangle on the top, and it's got a kind of a square on the bottom; it's got a door and two windows, and the windows have crosses in them. And there's always a chimney on the top with smoke coming out, which I think is quite remarkable because you actually don't see that many houses now that have chimneys with smoke coming out.

But still, that's the canonical image for a child. Temple Grandin's claim was that she cannot see "house"; she can only see a house. And so if you say to her something like "house," then what comes to mind is a particular house that she's actually experienced. She can't take the next level of abstraction past that, which seems to be something like a deficit in generating, like, a hieroglyphic image.

One way to think about children's drawings—you know how they draw people with sticks and circles? You think, "Oh, that's so primitive." It's like, no, it's not; it's unbelievably sophisticated because those aren't pictures; they're hieroglyphics, and the child automatically produces them, and that's a proto-linguistic development. So some autistic kids can draw like Leonardo da Vinci with no training whatsoever, and that's partly because they don't use hieroglyphics, and that means they don't really conceptualize the thing they're looking at as an abstraction; they see nothing but detail.

And if you're training yourself to be a visual artist, you have to stop looking at the abstraction, and you have to start looking at the thing, and that's very unsettling. So if you take your hand, for example, and look at it, and snap it out of the "hand" representation, then when you do that properly it all of a sudden looks like some kind of octopus claw. It's a very bizarre-looking thing, and as soon as you see it that way, you can draw it. But as long as you're seeing it as a hand, you're going to draw, you know, a balloon with four balloons on it or something like that, and that's going to be the hand.

So, all right. This guy's name is John Carroll. Carroll's a real scientist. He looked at the structure of IQ for a very long period of time and wrote a very thick book on it, which nobody likes to read because it's almost entirely technical. So it's really more of a reference book. But what Carroll hypothesized, and I think his hypothesis is as close to state of the art as any, is that there's a stratum-three level, which is general intelligence, g, and it's the main factor that unites all these other tests.

And then at stratum two, you get fluid intelligence, crystallized intelligence, verbal intelligence—I don't remember what the rest of them stand for. And then each of those can be subdivided into specific tests, and each of the tests can be subdivided into somewhat particularized individual cognitive abilities. Now, they're not that individual, because there's a single factor that accounts for most of the variability in the performance, so in a sense what this theory says is that intelligence sums up to one general intelligence, and that seems to be essentially correct.

This slide, although you can't see it very well—if you look at g there on the left, the lines leading to the next stratum show you the correlation between g and each of the broad intelligences, and what you see is that the correlation between g and fluid intelligence is 0.94, which is such a tight correlation that it might as well just be the thing itself, and between g and crystallized intelligence it's 0.85. All of the first-order correlations there are above 0.8. And then at the far right you get concepts that you might think about as particularized.

And so those include concept formation, analysis and synthesis, number series, number matrices, spatial relations, picture recognition, block rotation, visual matching, decision speed, block rotation, cross-out, visual and auditory learning, memory for name, sound blending, incomplete word—that would be incomplete word completion, sound patterns, auditory attention, etc., etc. And you can think of those each as individual tests in that you could make a test that theoretically only individually assessed that thing, but you couldn't help but be testing all sorts of other things at the same time because they're not actually individual.

One of the things you will learn is that there are neuropsychological tests, and the neuropsychologists like to think that IQ is something that's primitive and that they've already, what would you say, advanced past. And so they have specific prefrontal tests, say, that hypothetically assess specific prefrontal abilities, and then they'll note that if you take a single test of prefrontal ability and correlate it with IQ, it'll only correlate at about 0.3 or 0.4.

And then they'll assume, because it only correlates that low, that it's measuring something other than what IQ is measuring. But really what happens is that it's just not a very good IQ test, and the reason that the correlation is so low is because of measurement error. So if you gave the same test to the same person 20 times, and then you varied the test so there were 30 variants of it, you gave those all to the same person 20 times, then what would come out would be a score that would be much more highly correlated with IQ, and probably indistinguishable from it.
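The point about measurement error can be illustrated with a small simulation: a single noisy test of an ability correlates with the underlying ability at only about 0.3 to 0.4, while the average of many test variants tracks it closely. The noise level here is an arbitrary assumption chosen to reproduce the quoted range.

```python
import math
import random

random.seed(1)
N = 5_000  # simulated test-takers
K = 30     # test variants per person, as in the thought experiment

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# True ability plus independent noise on each administration.
# Noise SD of 2.5 vs. ability SD of 1.0 makes one test quite unreliable.
ability = [random.gauss(0, 1) for _ in range(N)]
scores = [[a + random.gauss(0, 2.5) for _ in range(K)] for a in ability]

single = [s[0] for s in scores]          # one noisy administration
averaged = [sum(s) / K for s in scores]  # mean of all K variants

print(f"single test vs. ability: r = {pearson(single, ability):.2f}")
print(f"mean of {K} tests vs. ability: r = {pearson(averaged, ability):.2f}")
```

Averaging cancels the independent noise, which is why a well-aggregated battery ends up nearly indistinguishable from IQ even when each component test correlates with it only modestly.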

And so you'll learn from neuropsychologists that there's such a thing as prefrontal ability, and that's associated with executive control, and that's associated with self-regulation, which would be control over your impulses. And as far as I can tell, that's all completely wrong, and it took me like ten years to figure that out because I was taught that it was right, and it wasn't until I plowed through it painfully and understood that there was no evidence for that.

So I can tell you how that works. For example, we gave a neuropsych battery that consisted of dorsolateral prefrontal tests to 3,000 people. It was a 90-minute battery, so it was exhaustive. And when we factor analyzed it with only about 200 participants, we got four factors. But then when we factor analyzed it after we had three thousand participants, we got two factors, and one was a factor on which almost all the tests loaded, and it was undoubtedly fluid intelligence.

But it's worse than that, you know, because you hear about how the prefrontal cortex regulates behavior. Well, here's a problem with that: what's the correlation between IQ and conscientiousness? Zero, roughly speaking. It's probably a little off zero, although it might not be, and you know that because the dimensions of the big five are roughly orthogonal. I showed you at the beginning of this course that each of the big five traits is correlated with the others at an average of about 0.2.

Now why that is, it's probably because of something like—what would you call it? It's probably a halo effect. If you tend to rate yourself as positive on one trait, more conscientious, more agreeable, more emotionally stable, you're going to tend to rate yourself as more positive on the other traits, so that would artificially inflate their correlation. That's one way of looking at it anyways. Weirdly enough, the correlation between conscientiousness and IQ is essentially zero.
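The halo-effect explanation can be made concrete: start with two genuinely orthogonal traits, add a shared self-presentation factor to both self-reports, and a spurious correlation of about 0.2 appears. The halo loading of 0.5 below is an illustrative assumption, not an estimate from real personality data.

```python
import math
import random

random.seed(2)
N = 5_000

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two independent (orthogonal) true traits.
trait_a = [random.gauss(0, 1) for _ in range(N)]
trait_b = [random.gauss(0, 1) for _ in range(N)]

# Self-reports share a "halo": people who rate themselves well on one
# trait tend to rate themselves well on the other. Loading is assumed.
halo = [random.gauss(0, 1) for _ in range(N)]
report_a = [t + 0.5 * h for t, h in zip(trait_a, halo)]
report_b = [t + 0.5 * h for t, h in zip(trait_b, halo)]

print(f"true traits:  r = {pearson(trait_a, trait_b):.2f}")
print(f"self-reports: r = {pearson(report_a, report_b):.2f}")
```

The true traits correlate at roughly zero, while the self-reports correlate at roughly 0.2, which is the kind of artificial inflation described above.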

Now, that's a big problem, isn't it? Because it's unconscientious people who seem to be behaviorally dysregulated, right? They can't stick to a task; they jump about; they're not on track; they appear impulsive, although we don't know how to define impulsive, by the way. But neuropsychologists use that term all the time. It's like, well, if prefrontal ability regulates behavior, how come intelligence and other measures of executive control correlate with conscientiousness at zero? Right? Zero is a very bad number when you're testing out a hypothesis.

So, and you know, if you think about it, it has to be that way. And the reason it has to be that way is this: like, so if you're dreaming, you're going to just lay there and dream, but your eyes are going to move back and forth. Now, you might say, "Why aren't you running around acting out your dreams while you're dreaming?" And the answer to that is there's a little switch in your head, roughly speaking, that shuts off your motor apparatus when you're dreaming.

And so what happens is you can't run around and act out all your dreams because you're paralyzed. And sometimes people wake up in that state; it's called sleep paralysis. They wake up, they're sort of half awake, and they can't move, and then they often hallucinate all sorts of weird things and think aliens have come and abducted them and various peculiarities. But if you take a cat and you take out that little switch, then the cat will run around while it's dreaming until it runs into something, which is the problem with running around acting out your dreams.

So what does that allow one to conclude? Well, if you couldn't abstract away from your behavior, you couldn't think, right? Because what thinking means is to represent an alternative world, or maybe just a tiny little fraction of the world, whatever, to represent it abstractly, to manipulate it around, but not to simultaneously act it out. Then it wouldn't be abstracting at all; it would be fiddling about with the world through trial and error.

So you have to be able to pull what you think about out of your body, so to speak, play with it, and then implement it back in. Now it looks maybe like the ability to implement a plan is associated with conscientiousness, but we don't understand that; that would be industriousness, and we don't know a bloody thing about industriousness—like, it's just a black hole. But the idea that it's your ability to think that allows you to regulate your behavior strikes—well, there's no evidence for it. There’s no evidence for it.

Smart people are not necessarily better at regulating their behavior. You can have a very unconscientious smart person, and I see them in my clinical practice now and then; they come and say, "Well, I'm really smart, and here's the evidence, but I'm doing very badly in everything." They're underachievers, roughly speaking, you know? They can't implement their thoughts—not in any stable manner. Now, on average across their lives, smart people are healthier mentally and physically than people who aren't as smart, but the reason for that isn't that their intelligence allows them to regulate their health.

The reason for that is because their intelligence allows them, roughly speaking, to succeed, so they're higher up in the dominance hierarchy, and if you're higher up in the dominance hierarchy, then your life isn't stressful and you don't get sick as often. But that doesn't indicate that there's a direct control between your intelligence and your behavioral output; there's very little evidence for that. Well, I think there's no evidence for it, so that's rather shocking.

So, you know, to me that just wiped out a whole substrate of psychological theorizing, and it's—I’m sure—how many of you have come up against the proposition that prefrontal ability was associated with behavioral control? Yeah? Yeah. Well, it isn't. So, you know, not any more than—first of all, it's not differentiable from IQ regardless of what the neuropsychologists say, because they don't know anything about psychometrics generally, and they don't like IQ, because if they studied IQ, then what they'd find is all the things that they're studying are basically variants of IQ, and that would suck because other people have already figured out IQ.

And then there's the big problem, which is, well, variation in prefrontal ability, which is equivalent to variation in IQ, isn't correlated with behavioral control. It's a really big problem. So I would like to say, well, what is correlated with behavioral control? And the problem with that is: I don't know what you mean by behavioral control, I don't know how you're going to measure it, and if you do measure it, the probability that you're going to come up with something like conscientiousness is pretty much 100 percent.

So, all right, let's see. Yes, effect sizes. This is from a very important paper by a guy named Hemphill called "Interpreting the Magnitudes of Correlation Coefficients," from American Psychologist, volume 58. Hemphill didn't guess at how big an effect size had to be to count as big. What he did was go into the literature, study a whole bunch of papers, and empirically rank the effect sizes—that would be the standard deviation difference, or the correlation, or the squared r; you can convert among those—so you could see what proportion of papers had which effect size, and then you'd know how big your effect size really was.

And what he found was that only about three percent of social science studies showed a correlation coefficient of 0.5 or above, and that's about the correlation between IQ and general life success in relatively complex conditions. Correlations of 0.35 to 0.5 showed up in only about one in ten papers; correlations of 0.15 to 0.35 in roughly 25 to 57 percent of papers; and an r of less than 0.15 in about a quarter of papers. So what that means is that the ability of IQ to predict whatever it is that you want to predict with it is way up in the stratosphere with regard to effect power.
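A few standard conversions help when reading these benchmarks: r squared gives the proportion of variance explained, and, under an equal-group-sizes assumption, a correlation maps onto Cohen's d. A minimal sketch:

```python
import math

def variance_explained(r):
    """Proportion of variance explained by a correlation r."""
    return r * r

def r_to_cohens_d(r):
    """Convert r to Cohen's d, assuming two equal-sized groups."""
    return 2 * r / math.sqrt(1 - r * r)

for r in (0.15, 0.35, 0.50):
    print(f"r = {r:.2f} -> r^2 = {variance_explained(r):.3f}, "
          f"d = {r_to_cohens_d(r):.2f}")
```

So the r = 0.5 figure the lecture mentions corresponds to a d of about 1.15, a very large effect by social-science standards even though it explains only a quarter of the variance.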

Okay, so what does it mean practically speaking? This is from a company called the Wonderlic Company. Wonderlic sells IQ tests to businesses. Now, it's actually illegal for businesses to use IQ tests in the United States—so actually it's illegal to do anything to hire employees in the United States, as it turns out, because you have to use the test that's the most valid and reliable available that doesn't produce any ethnic or gender differences. Well, there's no test like that, so it's all illegal.

It's also illegal to use interviews, illegal to use letters of reference, illegal to use ability tests. You might be able to use conscientiousness tests, because conscientiousness tests do seem to be free of group differences—well, not exactly, because older people are more conscientious than younger people, so that's also a problem. So, anyways, Wonderlic is a good company; its IQ tests are genuine tests, but they are IQ tests, and people do use them in business even though they're not supposed to.

But what they've done over a long period of using the test is come up with approximations for different forms of employment. So: how smart do you have to be in order to be placed in a given occupation with a reasonable probability of success? If you have an IQ of 116 to 130, that's pretty much you guys—although I would suspect a fair chunk of you have an IQ over 130. An IQ of 115 puts you at about the 85th percentile, 130 at about the 98th, 145 at the 99th, and 160 at the 99.9th. Something else to think about: percentiles are strange things, because you might say, "Well, there's no difference between someone who scores at the 95th percentile on an IQ test and someone at the 99th, because it's only four percentiles."

But that's not four percentage points of rarity. The person who scores at the 95th percentile is one in 20; the person at the 99th is one in 100; the person at the 99.9th is one in a thousand. You might think, "Well, it doesn't matter, because once you get to a certain degree of smart, that's smart enough." No, that's wrong—in fact, it's probably radically wrong, in that the differences between people actually increase as you go up the IQ scale. And that's because performance isn't distributed in a normal distribution; it's distributed in a Pareto distribution, such that almost no one does anything and a very small number of people do everything—which is also a very dismal fact.
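The percentile arithmetic here falls out of the conventional normal model of IQ (mean 100, standard deviation 15). A minimal sketch, just to make the "one in N" point concrete:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # conventional IQ scaling

def iq_percentile(score):
    """Percentile rank of an IQ score under the normal model."""
    return 100 * iq.cdf(score)

def rarity(score):
    """'One in N' rarity of scoring at or above this IQ."""
    return 1 / (1 - iq.cdf(score))

# 95th vs. 99th percentile is one-in-20 vs. one-in-100 -- not a
# "four percent" difference in rarity.
```

Under this model, iq_percentile(115) comes out near 84–85 and iq_percentile(130) near 98, and rarity climbs steeply from there—which is why the top of the distribution thins out so fast.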

Okay, so anyways, if you're in the upper echelons of the cognitive distribution—116 to 130—you could be an attorney, research analyst, editor, advertising manager, chemist, engineer, executive, manager trainee, systems analyst, or auditor. From 110 to 115—that would be roughly the upper quarter of a high school graduating class: copywriter, accountant, manager/supervisor, sales manager, salesman, programmer analyst, teacher, adjuster, general manager, purchasing agent, registered nurse, sales account executive. From 103 to 108: administrative assistant, store manager, bookkeeper, credit clerk, drafter, designer, lab tester, secretary, accounting clerk, medical debt collection, computer operator, customer service rep, automotive salesman, clerk, and typist. And then from 100 to 102.

So that's pretty much right at the mean: dispatcher, general office clerk, police patrol officer, receptionist, cashier, general clerical worker, inside sales clerk, meter reader, printer, teller, data entry, or electrical helper. From 95 to 98: machinist, quality control checker, claims clerk, driver, delivery man, security guard, unskilled laborer, maintenance machine operator, arc welder, mechanic, medical/dental assistant. And then from 87 to 93:

Messenger, factory production assembler, food service worker, nurses' aide, warehouseman, custodian, janitor, material handler, and packer. And then that's it. So, you see that things are getting pretty damn dismal for the people who have an IQ of 85 and below. So now what's the difference between intellect and openness proper? Well, one difference is that men seem to be higher in intellect, the trait, and women seem to be higher in openness.

And openness seems to be associated with verbal intelligence, with imagination, with fantasy, and with aesthetic experience. It seems to go along with the fact that women read more fiction than men—men read nonfiction, roughly speaking, and women read fiction. Obviously there's a lot of overlap, but fundamentally that seems to be how it distributes. Maybe that's also partly because women are more agreeable, and so they're more interested in human relations and characters—and, you know, that's the central theme of fiction.

People who are high in openness think divergently. Here, we can do a quick little divergent thinking test—so why don't we do that? Take out a piece of paper and a pen, or you can type on your computer; I don't care. You can scratch it onto the desk for all I care. But get something you can write on. I'll only give you a minute, since you're all so fast, and we only have six minutes left. Write down as many uses as you can think of for a brick. You have one minute.

So—so, okay, that's all the time you get. Okay, so the first question is, I'm just going to ask you—I'm going to point at you, and you can answer: how many—how many uses did you come up with? Thirteen? Five? Eight? You two, that's okay, eight. That's alright, seven, seven. Okay. Did anybody come up with more than thirteen? How many fourteen? Did anybody come up with more than fourteen?

Okay, so you two are the most ideationally fluent people in the class. Fluency is a matter of the rapidity with which you can generate exemplars from a category. So another thing we might ask would be: how many words can you write that begin with the letter 'S' in a minute? And you wouldn't believe the bloody range on that—there'll be people who get six, and there'll be people who get fifty, you know?

So ideational fluency is actually a pretty decent predictor of creative ability. Now, you know, this wasn't a great test, right? Because it was very short, and normally you'd do it multiple times, with multiple time durations and different exemplars, and sum across that. But the correlation between ideational fluency and long-term creative achievement is about 0.3—that's pretty major. Okay, so let's hear a use for a brick. How about you?

Oh, you could draw on the sidewalk with a brick. How about you? A paperweight? How many people got paperweight? Yes—that's a low-creative response. Now, that doesn't mean all your responses are, but it's a technical definition, right? Here's what makes a response low-creative: everyone came up with it. It's a perfectly reasonable response, but for a response to be creative, virtually by definition, it has to be a low-probability response that actually makes sense.

So "draw on sidewalk" is a more low-probability response. Somebody got a weird use for a brick? Ground into dust and used? Okay, okay. Alright, anybody else have that? They ground their brick into dust for something? No? Yes, you could kill someone with a brick. Anybody have that? Yes? Low agreeableness, low creativity. Yeah, yeah, okay.

So, anybody else got a peculiar one for a brick? Yeah? Yeah, so here's how you would score this, by the way—it's actually not entirely straightforward. First, you count the number of responses, and you get rid of ones that are repetitive; you also usually get a couple of raters to throw out the ones that are neither original nor practical.

The best responses are both original and practical. Now, there's a bit of a judgment call there, you know what I mean? But if you get a number of people to do the rating, you can get pretty decent inter-rater reliability. And then you score originality. The way you score originality is by listing all the responses and calculating how probable or frequent each one is; the more infrequent the response, the higher the score it gets.
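That frequency-based scoring can be sketched in a few lines. This is an illustration of the general idea—score each response by its rarity in the sample—not the exact scoring rule of any published test:

```python
from collections import Counter

def originality_scores(responses_by_person):
    """Score each distinct response by its infrequency across the sample.

    responses_by_person: list of response lists, one per participant.
    Returns {response: score}, where rarer responses score higher."""
    counts = Counter(r for person in responses_by_person for r in person)
    total = sum(counts.values())
    return {resp: 1 - n / total for resp, n in counts.items()}

scores = originality_scores([
    ["paperweight", "weapon"],
    ["paperweight", "doorstop"],
])
# "paperweight" -- everyone's answer -- scores lower than the one-off responses
```

A response everyone gives ("paperweight") ends up with a low originality score, while one-off responses score high, which is exactly the "low-probability response" criterion described above.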

So it's basically a population sample. And those tests actually work, just so you know. Now, I'll stop with this. This is a measure that I derived with one of my students when I worked in Boston—her name was Shelly Carson—and it has actually become one of the standard measures of creativity in the creativity literature, which is quite fun.

It's a creative achievement questionnaire. What I gave you just now was a creativity test—one of many—but it wasn't a creative achievement test, because you can generate uses for a brick without running out and becoming a brick entrepreneur, you know? So there's a difference between being able to think divergently, with loose associations, and being able to put that into practice so you actually achieve something.

And so here's some levels of achievement. So there's the scientific discovery domain: I do not have training or recognized ability in this field, I often think about ways that scientific problems could be solved, I have won a prize at a science fair or other local competition, I've received a scholarship based on my work in science or medicine, I've been author or co-author of a study published in a scientific journal, I've won a national prize, I've received a grant, my work has been cited by other scientists in national publications.

Okay, and then the same thing for, say, theater, film, and culinary arts. Zero is, I do not have training or recognized ability in theater and film; number seven is, my theatrical work has been recognized in a national publication. For culinary arts, I do not have training or experience, and the highest one is my recipes have been published nationally.

Okay, so there are 13 different domains, and that's the distribution—it's not normally distributed. Actually, what you see there isn't quite right, because the median score is zero: about 66 percent of respondents to the creative achievement questionnaire score zero, meaning they do not have training or talent in any of the 13 areas. And then there are some people way out on the right-hand side of the distribution, because you can imagine that once you have national exposure in one magazine as a writer, the probability that you're going to get national exposure again for your second novel goes up pretty damn high, right?

And so you get this terrible step function, where almost everybody aggregates at nothing, and a few people are way out in the distribution and they just get everything. There's actually a law—it's called the Matthew principle—that economists use. It comes from a statement of Christ's in the New Testament, in Matthew: "To those who have everything, more will be given; and from those who have nothing, everything will be taken away."

And that seems to be how creative resources are distributed in the population, and to a large degree that's dependent on intelligence, openness, and conscientiousness. So you can understand why none of that's very popular with people. Since you're all smart and conscientious, though, it's going to work out well for you. See you on Tuesday; I'm going to post your midterm marks. What we're going to do after that with regard to this strike, I don't know yet.
