JBP's Partners in Crime | Dr. Daniel M. Higgins & Dr. Robert O. Pihl | EP 328
If you have a Big Five personality assessment on somebody before you interview them, it would be very useful. If you're about to interview somebody who's high in extroversion, you need to know that during that interview you're going to have an inflated view of their competence, because you will automatically conflate confidence with competence. If somebody is high in neuroticism, it's good to know that in advance. They might not appear very confident; they might require a certain amount of coaxing to come out of their shell. They might be nervous. It would be useful to have a tool to know that in advance so you can essentially counteract the hardwiring that you have as an interviewer.
You talked about brutal selection methods and people might react to that, and I would say, well, here's something for people to think about: hire stupidly and put people in positions where not only do they fail painfully over a long period of time, but they compromise the performance of everyone around them while doing so. It isn't whether it's brutal or not; it's like which form of brutality do you prefer? I would prefer the preventive brutality approach rather than the consequential brutality approach.
Thank you. Hello, everyone. I recently had the wherewithal and the honor and the privilege to discuss the business, research, and personal arrangements that I've had with a couple of my closest compatriots: Dr. Robert O. Pihl, my former graduate advisor at McGill University, and my former student Dr. Daniel M. Higgins, who came to me from MIT and graduated under my supervision from Harvard. We walked through our business, professional, and personal relationships as they've unfolded through many ups and downs over the last 30 years.
I wanted to bring them into the picture because I have worked with them so closely, particularly on two projects that I also wanted to draw attention to, as they form the basis for much of the discussion that's to follow in this clip. One is self-authoring.com, as in S-E-L-F authoring, as in writing. Self-authoring.com contains a number of programs, past, present, and future authoring, that help people write out the narrative of their life: their biography, their virtues and faults in the present, and a vision for the future. That's a very useful program, because you do live out a story, and it's a good idea to know what story you've been living out and where you want to go in the future. That's self-authoring.com.
The other project that we developed on the commercial front, so that's publicly accessible, is understandmyself.com. Understandmyself.com provides a very thorough description of the fundamental elements of human personality: the five major traits (extroversion, neuroticism, agreeableness, conscientiousness, and openness), differentiated into ten aspects, two per trait. It gives you a relatively simple shorthand for your personality, but also a differentiated view of who you are. If you take that test, which doesn't take very long, about 20 minutes or so, you get a detailed description of your basic temperament.
If you have a partner and then your partner takes the test, you can join the tests together and get a separate printout, a separate report that details your comparative similarities and differences. This is extremely useful. When you establish an intimate relationship with someone, there is some utility in your differences and some utility in your similarities, and there's some additional utility in understanding what those are because you need to understand who your partner is. You need to understand when they're different from you that they're actually different, and there can be value in that.
So, understandmyself.com can help you understand who you are. It can help you understand who your partner is, and it can help you understand how you differ and what you might do about that. With that, we'll move onward to the discussion itself. So, that self-authoring.com: past, present, and future biographical writing and the development of a vision, and understandmyself.com, which helps you understand your personality and maybe understand your personality in relation to the person who's closest to you.
Thank you all for your time and attention, and onward to the discussion. Today, I have two people to talk to whom I've known for many, many years. First, Dr. Robert Pihl, who was my graduate supervisor at McGill from 1985 to 1992, and with whom I've had a friendship and business relationship ever since: a very intense and multi-dimensional relationship on the intellectual, personal, and business fronts. So, we're going to talk a little bit about that today.
Also, Dr. Daniel Higgins, who is a student of mine at Harvard after getting his engineering training at Trinity and at MIT. He also got involved with Bob on the business front and helped me and Bob develop some of the measurement devices that we've been attempting to use in the corporate world and more successfully using in the private sphere. I thought it would be fun for me, but also hopefully for my two guests and for everyone listening, to just walk through what we learned as we've worked together over the last 30 years on the scientific front and the business front.
I started working with Bob in 1985. I wrote him a weird letter when I was applying for my graduate training. I'd finished my bachelor's degree in psychology, really an update to my first bachelor's degree, concentrating on psychology this time, and decided I wanted to go into clinical work. I wrote him a letter that actually told people what I was like, which I'm sure scared off a hell of a lot more people than it attracted, but for some reason, it seemed to pique Bob's interest.
He called me one day (I really wanted to go to Montreal to study) and asked me if I wanted to come and study alcoholism. We actually had an acquaintance in common, a man I had met years ago in Fairview, the town I grew up in. He happened to be a student of Bob's, and when the letter showed up on Bob's desk, he asked this person, Dave Ross, if he knew who I was. Luckily, I got a positive review, which was quite the surprise.
So, I really enjoyed working with Bob. What do you remember from the beginning of those days, Bob?
“Well, first of all, I remember the letter, and I remember that colleagues advised me that I should not select you as a graduate student—but it was exactly the non-traditional nature of what you wrote and the deep thought that was implicit in your statements that drew me to it. That, along with a basic instinct to be a risk taker, was the reason why I accepted you as a student.”
Yeah, well, you know, I spent a lot of time thinking about that letter. It was a calculated risk. I think I wrote, if I remember, and I can actually remember some of this, I think I wrote that I like to drink copious amounts of red wine and could type like a mad dog. I knew that wasn't exactly standard graduate school application letter language, but you know, I thought, first of all, I also indicated in a letter that there were some deep things that I wanted to pursue, you know, that I was interested in assessing the nature of human malevolence and that I had broad philosophical and psychological interests.
So, there was a real serious part, and there was a real, I would say, comedic part in some sense, and it was provocative. I thought, look, I'm going to be working with someone for a long time, and I want to find someone who actually wants to work with me. So, it was a calculated risk, like the one you took, I guess, when you accepted me.
It's funny, you know, because people have had that kind of reaction to me, I would say, ever since, which is that some people, like you, are quite happy with the opportunity to work with me, and other people think, you know, that they should keep me at a distance with a stick. I think maybe it's not obvious that the loud people in the latter camp are wrong.
Anyways, yeah, so you called me up and asked me if I wanted to do some work on alcoholism, which wasn't really the specific field that I had tremendous amounts of interest in, although I was interested in motivation. But we started to work on motivation for drug and alcohol abuse and on antisocial behavior pretty much right away.
Why do you think our collaboration was so successful?
“Oh, well, it comes down to who you are and how many degrees of freedom you're allowed and the nature of the challenges in front of you, Jordan. Challenges produce great rewards, so I think it's as simple as that.”
Yeah, well for me, you know, I was really thrilled that I had the opportunity to come to McGill. I really wanted to come to Montreal, and then you were an ideal supervisor for me because you were very practically oriented, right? You had a great administrative hand; you had a thriving and unbelievably productive lab. How many papers have you published, Bob?
“I honestly don't know, Jordan. A hundred and fifty, 200, something like that?”
Yeah, I think it's more than that, Bob. So, Bob's lab was famous for its productivity, I would say, and also for the morale of its graduate students. I mean, one of the things that was really remarkable about you, often in distinction to other graduate supervisors, is that you were very generous with credit, you know, and you gave your students a tremendous amount of freedom. You really helped all of us through the various administrative hurdles, clearing the ethics hurdles for our research, and you encouraged us simultaneously on both the career development and the intellectual development fronts, which is a very thin ethical line to walk.
Right, because obviously, to be a successful academic, there is a kind of marketing element. You have to publish, you have to meet people, you have to communicate; but at the same time, you're supposed to be assiduously pursuing mathematically grounded truth, insofar as you're a good statistician, and those often come into conflict. You were very, very good at letting all of us know that. I mean, you've produced a lot of very successful graduate students: Sherry Stewart, Patricia Conrod, Jean Séguin, and Peter Finn, a lot of very successful academics.
You did an extremely good job of helping us know that it was our moral obligation to stick with the data, no matter what, but at the same time, to develop our careers. For me, also, the fact that you had an unbelievably encyclopedic knowledge of the relevant psychiatric and psychological research made our discussions extremely fruitful because I could talk about more philosophically oriented issues, and you could immediately bring that down—well, bring it down, move laterally into the scientific realm and help introduce me into the appropriate biological and psychiatric literature.
“Two rules of thumb: first, accept good students and get out of their way. That is, just provide them with what they need to do what they're interested in. Secondly, if they have really good ideas, don't ask the bureaucrats; just do it.”
Yeah, well, you know, as we worked together, that got more and more difficult, and we could tell even back then that the university was starting to close in on its researchers. When I first started working with you, the ethics committees, the so-called ethics committees, were a kind of encumbrance whose dictates you had to satisfy but could in some real sense dispense with quite rapidly while staying in the proper ethical domain. But as we continued to work together, the constraints placed on our research became more and more onerous, and that's a process that has just continued and accelerated since then. You could really see it coming even back in the late '80s.
“Yeah, no, it's true. I'm somewhat happier that I'm not there now, given the kinds of constraints that researchers have to go through.”
Well, it's so odd, because as we got more and more efficient at running studies—and this happened a lot when I was working with Daniel too—we got more and more efficient at running studies and designing them, partly as a consequence of being able to use computational power. So we could do studies much faster. The bureaucratic impediments to doing studies multiplied to such a degree that it became more and more difficult to do them because there were so many hurdles that had to be leapt over before you could even begin the process of an investigation.
It's really hard on people who are quick-minded and sharp, because their orientation is to do interesting things as rapidly as possible, and being confronted continually with a bureaucracy that works at cross purposes to that certainly ensures, at least to some degree, that anybody who's fast and sharp just wants to get the hell out of there.
Because, you know, you said something so cool, and it's a good thing to pursue on the front of the joint relationship between scientific and entrepreneurial endeavor: you said your management principle was to hire really good people, students, let's say, and then get the hell out of their way.
One of the things I really admired about you, and never stopped admiring you for (you were a great model in this regard), was that your students always had areas of expertise that you didn't share. Séguin, for example, was a near-professional musician. You tended to take in a lot of students who had non-traditional backgrounds, in some sense, in relation to psychology. I never once saw you engage in a turf war for intellectual preeminence with any of your students if they were operating in an area of expertise that wasn't yours. You were always able to maintain a calm authority and never be threatened by the fact that you were willing to, and capable of, surrounding yourself with people who knew some things you didn't know. That was early, and that is a great management principle.
“Well, less a principle and more a realization that they were all brighter than me.”
Yeah, well, that remains to be determined. So, we started working on alcoholism. That was really useful. Alcohol, unlike most other drugs, doesn't really target a specific brain area or set of neurological receptors; it flows through the brain like water. One of the consequences for me was that I really had to delve deeply into biological neuroscience, because alcohol essentially affects every physiological and neurophysiological system in the body.
Part of what I learned to do at McGill, apart from developing a certain degree of statistical expertise—not my forte, by the way—was to delve deeply into the biological literature. You were very interested in biological psychiatry, so that was extremely helpful.
Yeah, remember that paper we did on genetics for the U.S. Congress?
Yeah, I was amazed at how quickly you were able to grasp that literature.
That was a very nice piece of work.
Yeah, well, it was one of the things that was great for me at McGill. You know, there's this idea in psychology of construct validation. The idea is, how do you determine if something that's abstract, a psychological concept, let's say, like neuroticism or self-esteem, how do you know if that's real rather than just sort of a metaphor or a linguistic placeholder? One of the answers to that is, well, it's real if it's a pattern that makes itself manifest across a variety of different modes of measurement.
I had been reading a lot of psychoanalytic material and mythology when I came to McGill, which definitely put me in a minority. And then, as a consequence of having the biological frontier opened up, especially through people like Jeffrey Gray, I started to see parallels between the deep biological literature, the hard neuroscience work, and the animal experimental work on rat brain functioning and neurochemistry. I could start to see real deep parallels between that and the mythological material I had been reading, and that was when I started writing my first book, "Maps of Meaning," while I was at McGill. I pursued that a lot while I was working with you, sort of as a side project, but it was an attempt to integrate all the biology I was learning about with the mythology.
We were also studying antisocial behavior at that point, and this sort of segues into my relationship, or interrelationship, with Daniel; we'll get to that. So Bob and I were working with the sons of male alcoholics who also had an extensive family history of alcoholism, a very specific population that eventually became essentially impossible to study, as the dictates came down from above that half our research subjects had to be female, which was a real problem for our research enterprise because we were actually researching a particular kind of primarily male psychopathology.
Our subjects had to be young men who weren't alcoholic, who did drink, who had alcoholic fathers, alcoholic grandfathers—so they had to have an alcoholic father, another close alcoholic relative male, and not an alcoholic mother, because that would have exposed them, in principle, to fetal alcohol syndrome. We were looking at the biological basis of the proclivity to alcoholism and were interested in the role of disinhibition in that, right?
So, you might say, well, one of the reasons people might drink is because they're biochemically responsive to alcohol in some manner that's either directly rewarding, like cocaine, or anxiety-reducing, like Valium or barbiturates. But another possible hypothesis is they're just not very good at impulse control. An impulse would be a biological impetus that wants short-term gratification, like lust, hunger, thirst, or the desire to breathe for that matter, and obviously, you have to abide by those dictates or you die. But if you only fall prey to them, then you're impulsive, and that dysregulates your medium to long-term survival.
So we were interested in impulsivity. Bob and I started to investigate the neuropsychological literature. A lot of people at McGill were working on the assessment of so-called prefrontal cognitive ability and had developed a lot of practical tests for brain-damaged people, to see what focal cognitive deficits they had as a consequence of their neurological condition or their brain surgery. We started to apply that to the analysis of antisocial behavior, right? Jean Séguin was very much involved in that.
So what got you interested, Bob, in the realm of antisocial behavior?
“Well, alcohol and aggression, the relationship. If you want to really understand aggression, understand the relationship between alcohol and aggression, because alcohol is involved in half of murders, rapes, and general assaults, and in most situations of violence. So it was a question of what alcohol is doing to the brain that is increasing that propensity. Then, as we were turning to sons of alcoholics and the problem of alcoholism, it became the same kind of question: was there a difficulty in producing inhibitory behavior, given that these individuals also tended to have a series of cognitive deficits as measured by scholastic performance and psychological tests?”
That's where you started looking at the Montreal Neurological Institute and the measures they were using to assess, for example, frontal lobe functioning, given its importance in generally controlling social behavior, right?
Well, that was the first paper I published with Jennifer Rothfleisch and Phil Zelazo, right? We put together a neuropsychological battery, and then we had people who were drunk at two doses of alcohol. We used high doses of alcohol in our lab, which was one of the things that made it rather unique. We looked at the specific patterns of neuropsychological deficits that alcohol produced. Alcohol doesn't interfere with things like vocabulary understanding or color perception, but it has a walloping effect on the ability to move information from short-term storage into long-term storage, even at relatively moderate doses. It really interferes with complex motor coordination, although it doesn't lengthen reaction time per se; it had no effect on simple reaction time at all.
That's right. So we started building a neuropsychological battery, first of all to see what the nature of the overlap between criminal and aggressive behavior and the proclivity to alcoholism was, but also to investigate why alcohol made people aggressive, because it is one of the few drugs (and your research was part of what demonstrated this) that actually does make people more aggressive.
I remember a study we discussed that you devised where people were put into a Buss aggression task, I think, where they were asked to administer electrical shocks of a certain duration and intensity, at low levels, to their competitors in a game-like scenario. They weren't actually shocking a real person; that was a sham. But one of the hypotheses was that people who were drunk just didn't know what they were doing. So, if I remember correctly, you had the drunks, the people who were alcohol intoxicated, write down or otherwise record the level and duration of the shocks they were administering, to make that conscious. What happened was that this actually made the drunk people more aggressive rather than less.
So it wasn't merely a matter of alcohol-induced stupidity; there was real facilitation of aggression that seemed to be associated with something like disinhibition.
Indeed. We even tried to pay them not to be aggressive and found out that that works very well with people who generally have higher IQs, but not with individuals with lower IQs.
Yeah, yeah. Well, you know, the thing is, people in real life, in some sense, are paid not to be aggressive when they're drunk, and the way you get paid to not be aggressive while you're drunk is by not getting in trouble. And that certainly doesn't stop people.
It's worth just dwelling for a moment on the statistic that Bob cited. You know, we did a lot of reviews of the relationship between alcohol and aggression, and I think you could make a pretty strong case that almost all sexual assault and a tremendous amount of general interpersonal violence would just vanish if people weren't severely alcohol intoxicated. There's a massive relationship between drunkenness and violence: what, half the people who commit murders are severely intoxicated, and half of the victims of violent crimes are severely alcohol intoxicated.
Indeed. It's always been amazing to me that when we talk about sexual assault on campus, you know, we talk a lot about sexism and toxic masculinity and not very much about the fact that it's almost all alcohol-fueled, at least half of it is.
Right, right. At least half of it is. So we were also, you and I, talking at that point about entrepreneurial ideas, you know, because I guess we both had a bit of an entrepreneurial bent. One of the things that was always noodling away at us was whether there was anything we could do that might constitute the grounds for the construction of a business. I remember we talked about the possibility of investigating treatment for hangovers on the pharmaceutical front and went down that rabbit hole for a while, but we never really settled on an entrepreneurial idea, not at McGill.
I think one of the ideas that we discussed too is to start a consulting business to help people who had health problems do an objective review of the scientific literature bearing on their particular health problem. That's still a good idea, although we never did do it.
Anyways, after I was at McGill—and I think Bob and I wrote 15 papers together, which was something of a record at that point for a graduate student and advisor collaborator—I was thrilled to receive an appointment to Harvard, which was quite the event.
Yeah, that was quite the event.
I went down and started pursuing the same line of research, in some sense, that I was pursuing with you, but it had become increasingly difficult. The National Institute on Alcohol Abuse and Alcoholism, which in principle was designed to facilitate research, kept making it impossible to bring people into the lab and actually give them reasonable doses of alcohol.
I mean, I remember by the time you and I were done with our research, we were having to keep our damn subjects in the lab for like six or seven hours after we got them drunk. The NIAAA required that we bring their blood alcohol level down; I think their eventual recommendation was either 0.04 or 0.02, half of legal intoxication. They wouldn't allow us, for example, to send them home in a cab. Nobody wanted to sit in our damn lab and sober up miserably for six hours while staring at a wall, and it made it pretty much impossible to bring people in repeated times because the experience became too onerous.
There were all sorts of other restrictions emerging that made it impossible for us to do what we had been doing. We got a long way on analyzing the association between alcohol's effects and opiate reinforcement, and on starting to investigate potential biochemical treatments for alcoholism, like naltrexone.
It was, and we did it on a shoestring budget too, which was also an interesting thing to do.
So, I went to Harvard, and I wasn't making a lot of money. I had to teach a lot of extra classes. My wife couldn't work because she didn't have a green card. I was just kind of existing on the threshold of survivability, comfortably, you know, but I didn't even have magazine subscriptions when I was a junior professor at Harvard, and I drove this old rust bucket of a car that was barely holding together.
I remember I phoned up the dean one day and said, you know, I don't know what your policy here is at Harvard, but you guys hired me to do research, but I have to do a lot of overload teaching just to be able to survive here because it's relatively expensive.
Like, what the hell is the rationale for this? I was probably slightly more polite than that. And he said, well, most of our people consult. And I thought, well, that's not true, because junior professors don't have time to consult. But okay, if that's the damn game, then what the hell do I know that might have some economic value?
I thought we had talked at that point about starting to use our neuropsych battery, which we had started to computerize, on broader fronts. Right? We used it to investigate alcohol and then antisocial personality, and then I thought, you know, maybe we could see if this battery of neuropsychological tests would predict corporate and academic performance.
So I called you, and we talked about potentially putting together a company that would be designed to do exactly that. We did, in the late 1990s.
And that’s when Daniel showed up. And so, Daniel, you had done your engineering training at Trinity and then at MIT. So walk us through a little bit about your academic and intellectual background.
I did civil engineering at Trinity College in Dublin, and then I came over to the U.S., and I found it incredibly difficult to find a job, just to go through the mechanics of it. So I thought, well, maybe I'll just go to graduate school instead; sadly, it was as shallow as that.
No, it probably wasn't, but I went to MIT and I did a master's degree in civil engineering. But at the time, it would have been around 1991, 1992, 1993. The use of computers—computer technology was getting cheaper and was more pervasive, and so obviously while I'm there, I'm not going to be busting concrete beams or cubes. I'm going to be looking at what the current computer technology is: AI and so on and so forth.
People were much more optimistic about AI in the early '90s than they were in the late '90s. I had been doing computer programming, and then I went over to take some classes at Harvard with you. I think it was 1995. I may have been in your first personality psychology class or your second; I'm not quite sure.
We started programming the neuropsych stuff. The work that you guys had done and that Jean Segal had done in Montreal had used a paper-and-pencil version of the psych tests, and those psych tests were kind of modified from use with people with clinical issues and with experimental animals. It was an interesting kind of a transformation of using something to detect clinical differences to detect individual differences.
At the time, the idea of a prefrontal cognitive ability really didn't exist. People spoke about executive function, but they didn't speak about it in individual differences terms. They spoke about it more in a sort of a global—like what would happen if executive function did or didn't exist.
And so, we computerized the types of tests that you'd been using with the sons of male alcoholics and that Séguin had been using with, I believe, impulsive children, if I remember correctly.
And then we did, I think it was 1997—you were teaching the personality class, and we did the online computerized personality lab, right?
Right. And that was early web days. We did something like 100 experiments at once, pretty much right after Netscape, the first real browser, came out. It was fairly early in computer days, when Perl CGI programming was the way you'd do web stuff.
We essentially set up a bunch of experiments. My wife, Alice Lee, and I programmed them, and all the kids in the class took them. It was like, I think, probably about 140, and they divided it up into groups of five to do the analyses. So we did a soup-to-nuts set of, as you said, maybe 15, 20 different experimental questions.
Yeah, yeah, well, it was this battery of tests that I had developed with Séguin and Bob. It required gadgets and lights and boxes, and it was very mechanical. It also took about 9 to 11 hours for a trained neuropsychologist to administer the test battery, but when we computerized it, we got the whole damn thing down to 90 minutes.
Then we were interested in—we thought, well, we could use this test battery to assess psychopathology, impulsivity, but then could we use it to assess normative or even excellent performance? Daniel and I and Bob started to investigate the possibility of using prefrontal cognitive ability tests to assess academic prowess at Harvard and at the University of Toronto, as it eventually turned out.
We also started to move into the corporate world, and so that was part of an entrepreneurial vision at that point as well because we thought—I had started studying papers that were produced by, now I’m not going to be able to remember, unfortunately, the names of the people who produced the initial equations relating increased accuracy of selection to economic gain.
Hunter and Schmidt, yeah, absolutely crucial papers. So one of the things we learned, for everyone watching and listening, is that a tiny minority of extremely high performers produce almost all the economic output of a given endeavor. It's actually the square root of the number of people involved in a given endeavor who do half the work.
That's like the 80/20 rule: a very small minority of your customers produce almost all your profits; a small minority of your workers produce almost all your productive output; a small minority of creative people produce almost all the creative output; and a small number of criminals commit almost all the crimes. So what that means is that if you can tilt your selection methods, your hiring methods, toward the upper end even a small amount, and thereby increase the number of extremely top performers in your organization, the economic payoff is unbelievably dramatic.
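The square-root claim described here is often called Price's law, and it is easy to make concrete. A minimal sketch (the headcounts are purely illustrative) of how the share of people doing half the work shrinks as an organization grows:

```python
import math

# Price's law, as stated above: roughly sqrt(N) of the N people in an
# endeavor produce half of the output.
for n in [10, 100, 1_000, 10_000]:
    top = math.isqrt(n)  # integer square root of the headcount
    print(f"N={n:>6}: about {top:>3} people (~{100 * top / n:.1f}%) do half the work")
```

So a 10-person shop has roughly a third of its staff doing half the work, while a 10,000-person firm has about 1%, which is one reason selection matters more and more as organizations scale.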
So I think Schmidt and Hunter estimated at that point that if the U.S. bureaucracy, the government bureaucracy, switched to accurate assessment for hiring, which they're actually bound to do by law, they would save an amount each year equivalent to total American corporate profits. I read that, and I thought, oh my God, if we can develop a new way of assessing ability that's accurate, we should be able to go to corporations and say, look, you know, for a relatively small amount of money, we can radically increase the productivity of your employees.
I thought we could just make a statistical case that the payoff would be so great that, you know, people would literally beat a path to our door, and that was a very naive presumption. Now, you were put right in the center of this, because in order to pursue it you decided to come and do a PhD in psychology. You had to master the neuropsychological literature; you had to master the literature on IQ testing, because one question that came up was whether there was a difference between prefrontal cortical ability, let's say the cognitive abilities that depend on the forward part of the brain, the part that abstracts action before it's implemented, and IQ.
What was the relationship between that and IQ? Most neuropsychologists regarded IQ tests as these dusty, you know, ancient technologies that had been superseded, and just assumed that what they were measuring was something separate. But that had never really been rigorously tested. And then you also had to master the relevant literature on personality, because personality traits, like conscientiousness, are also useful for predicting performance.
So, why don't you talk a little bit about the development of your thesis and also the fact that you wrote your thesis at the same time that "The Bell Curve" came out, and that was quite the political nightmare, all things considered.
So, why don't you walk through what you did for your PhD research and your thesis?
We sort of hit a perfect storm, in a way, because Herrnstein and Murray's "The Bell Curve" had just been published, the most vilified book of the '90s, if you like. Herrnstein and Murray were both at Harvard, although Herrnstein had just passed away. The neuropsych guys had been speculating about the way the brain works under normal circumstances based on research they had done with clinical populations, and they essentially completely ignored the statistical methods that are required to talk about individual differences in cognitive ability and individual differences in outcomes across situations.
In some ways, we caught a certain amount of flak from the neuropsych guys because we weren't cutting up monkeys, which seems okay to me. And they weren't interested in the IQ stuff because intelligence research was tainted in psychology, which was very strange, because it was the most scientifically well-developed area of psychology, and the tools that people used throughout psychology, in personality psychology for example, were developed by the early intelligence researchers.
Charles Spearman developed factor analysis, and it was later enhanced by that gentleman from the UK—I’ve forgotten his name, but Thurstone, is it?
Thurstone, yeah.
And so it was very weird for somebody coming into psychology from engineering to look at the situation and see that the most rigorous research programs I could find (this is probably going to irritate experimental psychologists a bit, but I don't know how else to put it) were despised. That area was essentially despised, and a lot of academics in psychology were trying to figure out how to throw stones at intelligence research.
It was like that meme: nobody asked, nothing prompted it, and then Jerome Kagan: "IQ tests are biased." It was just that opportunistic; people would just come out and make a statement against it.
So suddenly I found myself—and you did also—being with the pariahs when we didn't even know that there was such a situation.
Yeah, it was really weird, eh? We were driven by two things: first of all, we had an entrepreneurial curiosity about this, but apart from that, Bob and you and I were driven, I would say, 100% by nothing but the desire to find accurate predictors of measurable performance, academically and industrially.
There was no—not only was there no political agenda, as you said, we didn't even know a political agenda existed in some real sense until you started writing your thesis and the bell curve issue blew up.
Here's something people might find interesting. The neuropsychologists had really claimed that what they were measuring with their specific tests was something completely independent of IQ, or at least importantly independent of IQ. And then as we delved into the IQ research (you took the forefront of this endeavor, Daniel), we realized that all cognitive measures converge on a single factor. And that really matters. It means you can't come up with the... go ahead, Daniel.
If you're speculating about the higher cognitive functions, like Alexander Luria in "Higher Cortical Functions in Man," and you're working in that tradition and you want to speculate about that in intact humans, you're, like, two questions away from G, essentially, right? You're like, well, how does this manifest itself in the real world? Well, it manifests itself in individual differences in performance. Okay, and what do those things have in common? A lot. They're all positively correlated with each other.
Right? Congratulations! It's 1904, and you've just rediscovered the G factor.
But no, we're not going to ask that second question; we're not even going to ask the first question. We're just going to speculate about how important things like executive function are, without asking: can it be formalized and measured? Is there a spread, are there individual differences? And once you ask those questions, it is incumbent upon you to explain why it is different from the G factor.
Right, right. Well, that was hard on us too, because we had already put years into working out the proposition that these prefrontal cognitive tests were in fact assessing something interestingly different, right? And when we did go ahead, I pushed prefrontal cognitive ability as a construct, measured by the neuropsych tests that we had essentially stolen from the neuropsych guys and the animal research guys and developed into an individual-difference construct, and I pushed it as far as I could as a construct independent of G.
And when I wrote up my dissertation, and the paper, the one we published in the Journal of Personality and Social Psychology, I didn't collapse it; I pushed it as hard as I could. But after coming away from that, I would not sit down with anyone and tell them that prefrontal cognitive ability is independent of G just because it has a different heritage.
Yeah, this is a good indication of the kind of price you can pay for scientific rigor. I mean, Bob and I had put six years into the development of this prefrontal cognitive battery, and there was some utility in it predicting disinhibition. Then you and I put in years developing the battery itself, and then our discovery was we didn't know enough about the psychometrics of intelligence to be doing what we were doing.
There was no escaping from the black hole of fluid intelligence in some real sense. I mean we did get evidence that the battery had incremental predictive validity in relation to predicting grades at Harvard and the University of Toronto and also on the corporate front, but what that meant at best, in all likelihood, was that perhaps we had expanded the domain of cognitive measurement to some degree into areas that hadn't been precisely evaluated, although it might have just been that it was just a secondary consequence of additional testing.
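The incremental predictive validity mentioned here amounts to asking whether the battery predicts grades beyond what G already explains, which is conventionally tested by comparing R² before and after adding the new score. A sketch with simulated data (all the effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
g = rng.normal(size=n)                          # general ability
battery = 0.7 * g + 0.71 * rng.normal(size=n)   # heavily g-loaded battery score
grades = 0.5 * g + 0.15 * battery + 0.8 * rng.normal(size=n)

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - np.var(y - X @ beta) / np.var(y)

base = r_squared(g[:, None], grades)                       # G alone
full = r_squared(np.column_stack([g, battery]), grades)    # G plus battery
print(f"R2 with G alone: {base:.3f}; with battery added: {full:.3f}")
```

In-sample R² can only rise when a predictor is added, so the real question, as the discussion notes, is whether the increment is substantial and survives replication, rather than being a secondary consequence of additional testing.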
Well, we constructed a measure of G, but we only used, you know, the Raven's Progressive Matrices, right? Exactly. You know, the fluid-intelligence components, like three of the subtests of the WAIS. So a real intelligence researcher, like Ian Deary, for example; that's his name, isn't it?
Yeah, Ian Deary.
He would say you haven't tapped G at all; you've only taken two or three subtests, and he'd basically want you to sample the whole domain. But anyway, one of the things you said at the time was that every psychological experiment should use IQ tests. If that maxim had been applied from the start, we probably wouldn't have been as naive as we were about getting away from it.
That's not to say that executive function and prefrontal cognitive ability are reducible to some magical construct in the sky, G. It's just to say that in psychological research, things just don’t fall along the nice little neat theoretical lines that you build your career around. It’s messy.
Right? Well, what I concluded at that time was that every psychological experiment that involves analysis of differences between people should, by what we would call universal fiat, be required to use both IQ tests and, likely, personality tests, because we have these basic constructs: general cognitive ability, which is a walloping measure, and also pretty decent measures of temperamental proclivity.
They're really basic in the same way the elements in the periodic table are basic physically, and we shouldn't be talking about any other constructs in psychology until we can be absolutely sure that we're not just re-measuring constructs that have already been discovered.
That is kind of disheartening on the psychological front because both IQ and the Big Five— I mean, you can argue exactly about the parameters of the Big Five still, but fundamentally, they’re such black holes that they tend to suck in every other bit of research and invalidate it in some sense. I mean I remember we looked at the self-esteem literature, for example, and concluded pretty rapidly that self-esteem was nothing but low neuroticism with a bit of extraversion thrown in, and there was no need for the self-esteem construct at all.
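The self-esteem claim above is a redundancy argument: if a "new" construct is almost fully predictable from neuroticism and extraversion, it carries little unique information. A toy regression, where the way the fake self-esteem score is built (mostly low neuroticism plus some extraversion) is my assumption for illustration, echoing the conclusion described above:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
neuroticism = rng.normal(size=n)
extraversion = rng.normal(size=n)
# assumed construction: "self-esteem" as low neuroticism plus a bit of extraversion
self_esteem = -0.7 * neuroticism + 0.3 * extraversion + 0.3 * rng.normal(size=n)

# regress the "new" construct on the two established traits
X = np.column_stack([np.ones(n), neuroticism, extraversion])
beta, *_ = np.linalg.lstsq(X, self_esteem, rcond=None)
leftover = np.var(self_esteem - X @ beta) / np.var(self_esteem)
print(f"Unique variance left in 'self-esteem': {leftover:.1%}")
```

When the leftover share is this small, the "new" scale is mostly re-measuring traits that already exist, which is exactly the black-hole effect described in the passage.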
That just obliterates the careers, in some real sense, of the people who are proposing these alternative constructs as real. The other thing that happened, Daniel, when we were working together was that the multiple intelligence and practical intelligence people—especially at the Harvard School of Education under Gardner—were beating their drum extremely loudly.
Gardner was proclaiming that what had always been regarded as talents—there’s multiple talents, let's say, although they might be united to some degree by G—that he proposed renaming talents intelligence and then pursuing this multiple intelligence idea, which was really just a political scheme right from the beginning because he never developed a single instrument of measurement, and that was scandalous from our perspective too.
It's like, what the hell are you doing? You're talking about multiple intelligences in the face of all this documentation that intelligence is actually a unitary construct, and you don't measure yours at all, and now you're putting this forward as the basis for some kind of educational doctrine?
That was also politically shocking.
Well, Gardner had written a book on the emergence of cognitive neuroscience, and in that book, I don't recall the title, he identified faculties in the mind that the neuropsychologists had—that cognitive neuroscience had uncovered.
His next book after that was "Frames of Mind," which is the multiple intelligences book. I think that what he essentially did was there’s a very weird thing amongst academic psychologists, I’ve always found this striking.
The people that they value the most are the people who are the most intelligent. You can see this in the way they interact with their students; they’re unaware of it many times, but it’s quite noticeable.
But then, as I said, they don't like the whole idea. There's a presumption that, I think it might be, if I may be a bit Freudian about it, an unconscious recognition that they are conflating intelligence with value as a human being. Not with moral virtue, just with value.
That leads them to see this conflation everywhere, even though other people aren't doing it. I personally don't consider people to be more valuable if they have a higher IQ score. But I think there may be some sort of guilt at the bottom of it.
I never really understood it. So what Gardner did was he plucked the construct intelligence out of the psychometric realm. In the psychometric realm, intelligence and IQ are very clearly understood. There are very clear procedures for having these phenomena appear as constructs in your statistical analysis tool set, to put it as precisely as I can.
But he just took that and said, if I remember correctly, something along the lines of: reducing the prospects of a young person down to one number, an IQ score, is unreasonably cruel.
Stephen Jay Gould was making the same kind of arguments right then, in the midst of "The Mismeasure of Man." What Gardner didn't do, and should have done, and I would criticize him ethically on this one, is explain that he was taking a well-defined psychometric individual-differences construct, tearing it from that domain, and essentially appropriating the term for his own rebranding of the faculties.
And he did it without carrying through the required research. If you want to call your frames of mind "intelligences," you've got to show me the tools that will produce the spread of variance, so that it can be used as an individual-differences construct in the statistical analysis of a research project.
He never did that. He just hand-waved it away: well, I'm not interested in measurement. It's like, well, then you should shut the hell up about intelligence, and you should not pollute the entire educational psychology literature with your preposterous propositions, multiplying a construct that's already extremely well understood technically and muddying the waters unforgivably, in some real sense.
And Stephen Jay Gould did the same thing, you know, in "The Mismeasure of Man." I had started to understand, with my limited statistical and mathematical ability, factor analysis and the sorts of things we were pursuing more and more deeply mathematically, and I thought, well, Stephen Jay Gould is criticizing the hypothetically abstract construct of intelligence.
It's like, well, what's the abstract construct here? It's a single factor. He's criticizing the idea that the average of a group of numbers is real. Because here is what it boils down to, just so everybody listening knows: take a hundred questions, any questions that require abstraction to answer, so they could be formulated verbally, say, or pictorially, any kind of abstraction.
If you ask a hundred people those hundred questions, sum up their answers, and rank-order the people by accuracy, you've already produced what's essentially a test of general cognitive ability. It's that robust.
And so IQ, in some sense, is just the rank ordering of people by their ability to answer abstract questions, corrected for age; that's all it is, and it's a unitary factor. The idea that this is some sort of statistical abstraction that doesn't really exist is the same as claiming that the average or the sum of a group of numbers isn't real, and that's scientifically preposterous.
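The sum-and-rank claim above can be checked by simulation. Here a Rasch-style item model (my assumption for the sketch, not something specified in the discussion) generates answers from a latent ability, and the naive sum score recovers that ability almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(3)
people, items = 100, 100
ability = rng.normal(size=people)       # latent ability per person
difficulty = rng.normal(size=items)     # difficulty per question

# probability of a correct answer rises with (ability - difficulty), logistic link
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
answers = rng.random((people, items)) < p_correct

total = answers.sum(axis=1)             # the naive sum score per person
r = np.corrcoef(total, ability)[0, 1]   # how well the sum tracks latent ability
print(f"correlation between sum score and latent ability: {r:.2f}")
```

With a hundred items the raw total correlates very strongly with the latent ability that generated the answers, which is why "just sum the answers and rank-order the people" already yields a serviceable measure of general cognitive ability.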
I thought Gould's book was pretty dishonest. He used the maxim that correlation does not imply causation, when, you know, of course correlation implies causation. What it doesn't do is elucidate the causal mechanism, right?
It doesn't prove causation. Essentially he jumped from there to the idea that because you couldn't point to a brain area for this statistical regularity, people had illegitimately reified it. So, on the entrepreneurial front, this is where things got interesting and more complicated for all of us.
While we were working together on the scientific front, Daniel, we started talking about the possibility of doing this commercially. So we built a battery that assessed people on the neuropsychological front and had a personality test built into it; you tested it at Harvard and the University of Toronto, and Bob and I went on the road to find companies that would let us test it in the real world. And that was very tricky.
Now Bob had a brother-in-law who ran a factory in Milwaukee, Hatco, and we went to Hatco and we talked to the people there and said, look, we’re interested in predicting industrial performance. Will you allow us to conduct a study in your institution? Bob, do you want to pick up the story there?
Sure. On numerous occasions we went and administered the battery of tests to all the managers, basically the entire staff; it was a mid-size corporation. Then, to validate it, we compared our test results against the corporation's internal performance ratings, which were done twice a year, I believe.
So over a two-year period, Jordan and I went and tested individuals, went through all the data, collected it, and passed it on to Daniel.
Right, right. And then Daniel published this. Well, I think it was a brilliant thesis, bringing together the neuropsychological literature, the IQ literature, and the personality literature, and laying out these three extremely difficult studies.
Right, it was hard to do the studies at Harvard; these were practical real-world studies; it was difficult to do them at the University of Toronto, and it was difficult to do them on the industrial front, and you published that paper, as you said, in the Journal of Personality and Social Psychology. It's a very highly ranked journal.
I remember the reviewers of your thesis, even though they weren't necessarily politically aligned with this research protocol, let's say, were uniformly extremely positive in their comments on the quality of the thesis. A couple of them told me afterward that it was one of the best theses they had ever read, and we had very much thought about making it into a book; we still talk about that from time to time.
One of the things that was so lovely about working with you, Daniel, is something I like about engineers is that you virtually never said anything you hadn't researched, right to the damn bottom. If you're a computer engineer, you have to build something from the code level up, and it has to work. One of the things I really liked about you as a graduate student was that I knew if you said—you didn't say a lot. I had a lot of students who were a lot noisier than you, but if you ever said something, I knew perfectly well that you had investigated it right down to its atoms, and you knew the entire logical sequence of ideation that produced that conclusion.
Remember, for example, you referred to Francis Galton a fair bit in your thesis, and you actually went back into the original writings and familiarized yourself with Galton in a manner that completely distinguished you from anybody I'd ever met who talked about Galton at all. Your thesis was really a masterpiece of depth and courage and clear thinking.
Anyways, we started—I had a side question for you. Why the hell did you switch from engineering to psychology? Like, you came over to Harvard, took my class, and I don't remember exactly. Did you start working with me when you were still an undergraduate as an undergraduate project? How did—why did you come over and start working with me? I mean, you were perfectly qualified engineering on the computational front; you had an immense potential career set out in front of you.
Well, I had been reading Carl Jung and Sigmund Freud and Alfred Adler and that sort of thing, and I had cross-registered from MIT. So I went over there to take your personality psychology class, and then I went up and talked to you afterward and said, this is interesting. What do you do? Shall we do something together? You said, yeah, sure. Why don't you construct this instrument to help us, I believe, code reactions of people who are in an alcohol experiment, with the computer?
Then it just went on from there. We did the first version of the neurocognitive battery. Then before we went to Hatco, etc., we redid it to make it more intuitive and easier to use. Remember the summer? I think it was 1998. You and Alice and I were in Porter Square working on that, stuffing ourselves with pastries in the mid-morning. Good times!
That was just at the dawn of the point where all this stuff became possible on the computational front, so that was exciting too to learn how to do all that.
Just to clarify something, it wasn't the people at JPSP that said that the paper submission was well written; it was the people at Harvard that gave the positive feedback.
I don't think you should besmirch the people at JPSP by suggesting that they thought there was anything particularly special about my paper.
Right, right. No, no, it was the reviewers of the thesis at Harvard, yes, that’s right.
All right, so we started then. Once your paper came out, Bob and I could calculate the economic return that would be available to companies if they used our test battery. Basically, what we showed was that on the managerial front, if you scored highly on the neuropsychological assessment, which is something like an elaborated or variant assessment of fluid intelligence, and you were high in conscientiousness, you were much more likely to be a successful manager.
What predicted on the line worker front was more purely just conscientiousness, and that was in accordance with the relevant industrial organization literature, which we also had to master at the same time, right? Because we were trying to find out what were valid predictors of industrial performance and to calculate the economic returns on that.
So, we armed ourselves with a set of scientific facts, and what Bob and you and I decided to do was go out into the corporate world and try to sell these tests. The pitch was: we can assess your new employees in about an hour and a half; we can identify a pool of people who are statistically more likely to be high performers as managers and as line workers, using slightly different measurement techniques; and the economic payoff will be something like 500 times what the testing costs you, per year, over the hypothetical five-year span of their tenure.
We used that five-year period because people tend to switch jobs. So, in our naivety, we presumed that if we could go out into the corporate world and say, look, here’s something that you could do that is really inexpensive that will generate a lot of money, that people would just fall all over themselves to do it.
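The return-on-investment arithmetic behind a pitch like this is conventionally written as the Brogden-Cronbach-Gleser utility equation from the selection literature: net gain equals hires, times tenure in years, times test validity, times the dollar value of one standard deviation of job performance, times the mean standardized score of those hired, minus testing costs. A sketch with invented figures (none of these numbers come from the discussion):

```python
def selection_utility(n_hired, tenure_years, validity, sd_dollars,
                      mean_z_hired, cost_per_test, n_tested):
    """Brogden-Cronbach-Gleser utility estimate for a selection program."""
    gain = n_hired * tenure_years * validity * sd_dollars * mean_z_hired
    return gain - cost_per_test * n_tested

# hypothetical figures, chosen only to illustrate the shape of the argument
u = selection_utility(n_hired=50, tenure_years=5, validity=0.5,
                      sd_dollars=20_000, mean_z_hired=1.0,
                      cost_per_test=30, n_tested=500)
print(f"estimated net utility: ${u:,.0f}")   # $2,485,000 on these inputs
```

The testing cost here is $15,000 against a multimillion-dollar gain, which is the kind of lopsided ratio that made the "why wouldn't you do it?" pitch seem so compelling on paper.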
And so Bob, you and I went on the corporate road for, God, five years, do you think, trying to sell these damn tests?
Zero sales and a lot of traveler points, pretty much.
Right. I recall our trips to Chicago, Phoenix. We even went to a National Football League team and tried to sell them the test, and they were very interested, but—
Right! They were using IQ tests, eh?
Yeah, but the big problem with them was they figured they would have difficulty getting the players to sit down and actually invest an hour and a half at any one time.
Right, right, yeah.
Well, it turns out that the better quarterbacks tend to have better general cognitive ability, tend to have higher IQs.
Yeah, so that was also our first encounter, Bob, with the realities of human resources. We used to joke that Bob and I met a tremendous number of women employed in HR named Debbie, and they always got in our way. We would attempt to lay out the case that we could evaluate people on the basis of their capacity to learn, and that that was an element of merit.
The Debbies weren't very happy with that idea, because they liked to presume that you could train anybody to do anything at all and that there was no such thing as individual differences.
Well, we probably talked to 300 companies, something like that, mostly at the middle management level, and what did we learn? We learned that talking to middle managers is absolutely pointless if you’re trying to do anything entrepreneurial.
Well, we learned that if you’re a middle manager—
Go ahead, Bob.
We were devoured.
Yes, exactly, exactly. And I think that’s been replaced by the epithet Karen now. Other people, obviously, encountered more Karens, but we encountered a lot of Debbies. And so, well, we learned that talking to middle managers about anything entrepreneurial wasn’t going to fly.
And the reason for that was that most people in middle management are not risk-takers at all. So if you go to them and say, look, we've got this new thing that could have a spectacular outcome, all they hear is: I could get in a lot of trouble if that goes wrong.
The only question they really want to have answered is who else is using this? That was so interesting sociologically because I didn’t realize at that point how much people relied on consensus to make their judgments instead of logic.
As naive scientists, we assumed that if we just went out into the business world armed with our statistical arguments, we could demonstrate incontrovertibly the economic utility of this approach. But then we assumed that people were motivated by economic utility, that they would be rewarded for taking a risk, that they would understand the technical arguments and that that would guide their decision-making.
Every single one of those presumptions was wrong.
Yeah, so we didn’t sell any of those tests. Not really—not until we started working with the Founder Institute, and that was like 10 years later.
Well, there was the experience I had trying to demonstrate this to pharmacists and pharmacies. I did studies with pharmacists who had made medication errors, serious errors, and we were able to demonstrate that on this battery of tests, the individuals who made the errors scored significantly worse than a control group of pharmacists, particularly on something like working memory, because they just weren't able to keep a lot of things in their head at the same time.
A pharmacist's job depends heavily on being able to keep in mind that they're filling a prescription at the same time that they're dealing with a customer, at the same time that a phone call is coming in; all of these situations occurring at once.
Anyway, our tests were really good at raising red flags about individuals who were going to have trouble. And we even talked to large pharmaceutical chains at the director-of-personnel level, and the issue was that they didn't like our tests: A, because of the time, and B, because there was no face validity.
Right, right, they didn’t look like—
Yeah, they didn’t look like what a pharmacist was doing.
Right, right, yeah. And so that, as Bob pointed out, that's known as face validity—the test looks like it's doing what it says it’s doing.
Lots of accurate psychological tests don’t have that quality. So we also started to understand more about the political landscape on the corporate front in America and also why innovative technologies are difficult to get adopted.
I mean, first of all, if you’re going to go out and sell something to a corporation, you have to actually talk to the people who can make decisions, and that is not an easy thing to do, and that’s almost never middle management.
Before people think that I’m down on middle managers, I should also point out that they have every reason to be leery of risk. I remember one company that we dealt with, they were growing by like 100 employees a month; they couldn’t keep up, and they really needed a technology to screen their applicants before interviews.
Our technology was perfectly situated to manage this, but we wanted to charge them, I think it was thirty dollars a test or something. I think they had a budget of five dollars, and we said, look, we just can’t do it for five dollars; there’s just no utility in this whatsoever for us.
We'd already demonstrated that you'll get something like a 500-to-1 return on investment at the price we're charging. Why the hell wouldn't you do it? And they said, well, look, we get evaluated on how much we spend per test, and some other part of the company gets rewarded if the people hired are more productive.
So we bear all the responsibility for the risk and get none of the reward for the outcome. And I thought, well, oh my God, that’s fatal.
So many companies are set up like that; you’re not going to get your HR people to be incentivized to hire better employees if they get punished for taking risks when they’re assessing them.
Well, we learned a lot about how the corporate world functioned, and I also learned, for example, why people, when they’re making business deals, go play golf, for example. The reason for that is a lot of the way that people in business evaluate one another for the possibility of working together is on the basis of personal trust and compatibility.
Like, they’re not doing a technical analysis of the utility of their processes. Hardly anyone thinks like that, right? Some scientists think like that some of the time, but other people use all sorts of interpersonal heuristics that we were, in some ways, unaware of when we were overestimating the degree to which pure logic could be used as a sales technique.
In any case, Bob and I went all over the U.S. and Canada for a long time—long, long time—learning about the culture of business, feeling like complete bumbling fools because, of course, we hadn’t—didn’t really have any business experience at that point.
Then, well, then two things happened. I made contact with Adeo Ressi at the Founder Institute in California. Adeo was trying to set up business schools for budding entrepreneurs all over the world, and he really wanted to predict entrepreneurial ability.
So, we ran a study with him that we never published; it was a private study for us, showing that we could actually predict entrepreneurial performance with quite a high degree of accuracy. Then we started testing people all over the world with a modified version of our neuropsychological battery, a much simplified version that concentrated more on fluid intelligence than on hypothetical prefrontal ability, and we tested tens of thousands of people for Adeo quite successfully.
That kept Exam Corp going. Now that was hard on the business front because we pulled Daniel into the business, and we tried to fund his research through grants.
We threw some private funding at him, although we didn’t have a lot of money and we weren’t generating any capital, and that, you know, at that time, the universities were really pushing on professors to commodify their research.
Why can’t the universities produce more businesses? We went out and tried to produce a business, and what we learned was that’s a hell of a lot harder than you think because the product is only about five percent of the problem.
Sales and marketing are more like 95% of the problem. And we also ran into a fair bit of friction. It was often very emotionally demanding because we had, in some sense, a conflict of interest, especially in relationship to you, Daniel, because we were trying to advance your scientific career and to build a business enterprise at the same time.
It wasn’t easy at all to keep the ethical lines straight, you know, how much to pay you, how much we should be concentrating on your scientific career.
Now, you had grown somewhat disenchanted with the idea of a scientific career, but you were very interested in pursuing the development of this battery. At the same time, just to make the story a bit more complex, a lot of the managers we were talking to would dismiss any interest in the predictive tests for the reasons we already described, but they kept asking us the same question, which was: well, you say we should hire better people, but we have a lot of troublesome people that we've already hired, and we need to know what to do with them.
Our answer always was, well, we don’t know what to do with your troublesome people, and the managerial literature says you should spend all the time with your best performers, not your worst, and so we don’t know what to do about that; maybe there’s nothing that can be done.
But we got asked that like 200 times, and so we went into the literature to see if we could find any evidence that there were broad-scale psychological interventions that might help poorer performers, and we settled on the development of what became the self-authoring battery.
And we learned that from two different literatures. James Pennebaker spearheaded one of them, and Edwin Locke and Gary Latham did the other, in the goal-setting domain of industrial-organizational psychology. We found out that if you had people write about the complexities of their life, autobiographically or in relationship to their future, they would perform better industrially, and their mental and physical health would improve.
So we started working with Adeo to predict entrepreneurial competence, and that gave us a bit of capital, and then we started to develop, I think that was the right order, the self-authoring tests. Our goal was to produce tests that were scientifically validatable, inexpensive, scalable, that would do no harm, and that had demonstrated validity in terms of improving performance and mental health.
So Daniel, you want to walk through the self-authoring suite a little bit?
The self-authoring suite is essentially a series of writing exercises: future authoring, past authoring, and present authoring. I'd like to start with the future authoring because it gives you the most bang for your buck. Essentially, it's a series of writing exercises where you're asked to think about what you want out of life in more concrete detail than you typically do, to work through some processes that elaborate on that vision of an ideal future, and then to actually break down the steps that would be required for you to start moving towards it.
Then to look at the impediments and things that can help you to execute those plans. And so the future authoring is a good way to get people motivated.
And I think maybe you might talk in some detail about the study that we did at the college because you’re a little bit more familiar with the details of that than I am.
We’ll be back in one moment. First, we wanted to give you a sneak peek at Jordan’s new documentary, "Logos and Literacy."
I was very much struck by how the translation of the biblical writings jump-started the development of literacy across the entire world. Illiteracy was the norm. The pastor’s home was the first school, and every morning it would begin with singing. The Christian faith is a singing religion.
Probably 80% of scripture memorization today exists only because of what is sung. Here we have a Gutenberg Bible printed on the press of Johann Gutenberg. Science and religion are often seen as opposing forces in the world, but historically, that has not been the case.
Now the book is available to everyone—from Shakespeare to modern education and medicine and science to civilization itself. It is the most influential book in all history, and hopefully, people can walk away with at least a sense of that.
Yeah, well, we did three studies; you and I and Bob did all these studies. Bob had a student working at McGill who was interested in the prediction of performance.
So we set out to see what happened if we allowed, or encouraged, students to write about their future, to make a plan, to develop a vision for six dimensions of their future: intimate relationships, job and career, education, care of mental and physical health, use of time outside of work, regulation of responses to temptation, and the development of friendship networks; to develop a coherent plan for those domains in relatively constrained circumstances, right?
I think we had people write about their future for 90 minutes, or write about what they had done the previous two weeks for 90 minutes, and then we evaluated, first at McGill, the impact of a goal-setting program on academic performance, and we found that we decreased the dropout rate.
These are fairly highly selected students at McGill because it's a selective university. We dropped their dropout rate substantially, and we increased their academic performance by 35%. You'd expect that universities would just jump all over that; it's a walloping improvement in performance. But they didn't, that's for sure.
Then we did some work with Michaela Schippers at the business school in the Netherlands and ran another series of studies over a couple of years, showing that we got exactly the same results for undergraduate business students, but the results were even more pronounced for men, who were underperforming in general, and even more specifically for minority men.
So it was this weird intervention—most psychological interventions help people who aren’t doing well do somewhat better, but at the same time, they help people who are doing well do even better. But this intervention had this paradoxical effect where it really raised the bottom part of the performance distribution.
The culminating study in that sequence we did at Mohawk College. We had kids come in on their orientation day and write for 90 minutes about their future, or write about what they had done for the past two weeks, and we dropped their dropout rate 50% the first year. And again, it worked best for the men who had the lowest grades in high school and for minority men.
We thought, well, this is just a no-brainer. People are going to eat this up like mad because, well, why wouldn't you want an intervention that's dirt cheap that reduces dropout by 50% and particularly targets minority men? Like what a deal for everyone, regardless of your politics.
We worked with Mohawk for, what, 10 months afterward, retooling our damn software to fit their bureaucratic idiocy. At the end of that, which was a lot of trouble for us, they just dropped it.
Yeah, that was hilarious. I really thought that was very funny.
Yeah, yeah. Well, and also very telling. Man, here was something cheap, that took no time, that had no negative side effects, that doubled the performance of your incoming students.
We offered it to you at almost no cost, retooled it for your bureaucracy, ran the study in your institution, and published it, and yet at the end of this whole process, you basically told us that you weren't interested.
It’s like, yeah, you’re bloody well not interested. That’s why you have such a high dropout rate among young men, because you don’t give a damn.
And that's pretty much the situation in universities writ large. So, what we decided to do instead, Bob, was just to sell this program to individuals. We stopped talking to corporations and large institutions altogether, and that was way better.
We have a pretty steady sales record now. We also produced a personality test called understandmyself.com that enables people to do a Big Five personality analysis and gives them a detailed report on their personality. They can also compare with a partner, an intimate partner, say: each of them does the test and gets their own report, and then they can get a joint report detailing their similarities and their differences.
How many self-authoring programs are we selling a day, Daniel?
And how many understandmyself programs are we selling a day on average now?
About... I'm not sure. I can tell you that in aggregate we've probably done about 600,000 Understand Myself users and maybe 400,000 self-authoring, but I could be wrong.
No, it's a couple hundred a day anyway, on a regular basis, and the feedback we've been getting from people with regard to Understand Myself is very positive. People find the provision of accurate personality information about themselves extremely useful.
Also, on the partnership front, Tammy and I did that and did a podcast about it. Even though we know each other well, and I designed at least part of the damn test, I still found doing it with her quite revealing, and it helped us understand each other better. Because, and it's a wonderful thing, you are different.
Go ahead, Daniel.
The wonderful thing about the Understand Myself process is that it gives our users what you could call immediately actionable insight into what they're like and how they respond to various situations. It doesn't require the same time commitment as the self-authoring, so it's actually quite a pleasant experience.
And I won’t say that for, let’s say, the past authoring.
Right, right. The people who get the most value from the past authoring don’t enjoy doing it. They sometimes even get angry at us for asking them to do it. But the understand myself is immediately helpful. I think it’s immediately rewarding as well.
Well, the past authoring, you know, there’s a good dictum from clinical psychology that it’s better to voluntarily face your demons, let’s say, or the dragons, right? You put yourself in the zone of proximal development by adopting a voluntary stance of confrontation with complexity and threat, and one of the things we asked people in the past authoring exercise to do is to divide their life into sections—epochs—and then to write about the more emotionally compelling experiences, both negative and positive.
Yeah, that's quite difficult, and it does upset people. The research literature, most of it generated by Pennebaker and the people who followed in his footsteps, basically showed that if you do an autobiographical exercise like that, which is akin in some sense to both confession and psychotherapy, the immediate consequences are negative, because reliving those upsetting experiences is upsetting and can put you into a bit of a spin.
But the medium to long-term consequences are very positive, both on the performance and on the mental and physical health front.
Those of you who are watching and listening can think of this self-authoring.com set of exercises, the self-authoring suite, as, what would you say, in some sense do-it-yourself therapy.
I mean, consider the response we've had from the individuals who've been using it. Well, you can speak to that more, because you've handled customer complaints and comments for years. I have lots of people in my lectures who come up to me and tell me how helpful the Understand Myself program was, but also the self-authoring program.
Fewer people, because it’s more difficult, but people often respond to the self-authoring program in a manner that indicates, you know, that it’s changed their life.
I used the programs. We developed them from my "Maps of Meaning" course at the University of Toronto, right, when we were first working through the paper-and-pencil versions. I had students write autobiographies and then also write out a plan for the future, and I could watch the impact that it had on them.
It’s really something to be able to develop a vision for the future.
Go ahead, Daniel.
If any users are watching this, here's what I would say about the self-authoring suite, because we all have a limited amount of time and a limited amount of energy.
If you're not sure that you can do the full thing, do the future authoring and do it badly, and push through it to the very end. If you’re doing the past authoring and you think you can only do a little bit at a time, do a little bit at a time, and then walk away from it, come back to it and re-read it.
Take your time over that one. They’re really quite different. I don’t think people should approach them the same way—push the future authoring through to the end, and then you can redo it again later if you want to, but with the past authoring, if it’s difficult, just do a little bit, set it aside, come back to it again later. It'll be waiting for you when you're ready.
Yeah, right. Don't bite off more than you can chew; there's no need to.
So, while we're pretty happy with the way the self-authoring and Understand Myself tests have rolled out, because we were able to fulfill our vision, we should talk a little bit, guys, about some of the problems that we've encountered trying to keep our relationship intact over this