ChatGPT: The Dawn of Artificial Super-Intelligence | Brian Roemmele | EP 357
So imagine that from the day you were born to the day you pass away, every book you've ever read, every movie you've ever seen, everything you've ever heard, was all encoded within the AI. You know, you could say that part of your structure as a human being is the sum total of everything you've ever consumed, right? So that builds your paradigm. Imagine if that AI was consuming that in real time with you, and with all the social contracts of privacy—that you're not going to record somebody.
The interesting part about it, Jordan, is once you've accumulated this data and you run it through even the technology of Chat GPT 4 or 3.5, what is left is a reasoning engine with your context. This is where it gets very interesting. When you pass, this could become what I call your wisdom keeper, meaning that it can encode your voice, it's going to encode your memories. You can edit those memories—the availability of those memories—making them, you know, not available if they're embarrassing or personal. But you can literally have a conversation with that sum total of data that you've experienced, and I would say that it would be indistinguishable from having a conversation with that person, because it would have all that memory.
[Music]
Hello everyone. Today I'm speaking with entrepreneur, scientist, and artificial intelligence researcher Brian Roemmele. We discuss language models, the science behind understanding, tuning language models to an individual's contextual experience, the human bandwidth limitation, localized and private AI, and ultimately where all of this insane progress on the technological front might be heading.
So Brian, thanks for agreeing to talk to me today. I've been following you on Twitter. I don't remember how I came across your work, but I've been very interested in reading your threads, and you seem to be, how do I say it, up to speed, so to speak, with the latest developments on the AI front. I've been particularly fascinated by the developments in AI for two reasons. My brother-in-law, Jim Keller, is a very well-known chip designer and he's building a chip optimized for AI learning. We've talked a fair bit about that, and I've talked to him on my YouTube channel about the perils and promises of AI, let's say. And then I've been very fascinated by Chat GPT. I know I'm not alone in that. I've been using it most recently as a digital assistant, and I've got a couple of questions to ask you about that.
So here are some of the things that I found out about Chat GPT, and maybe we can go into the technology a little bit too. I can ask it very complicated questions. For example, I asked it the other day about an old papyrus from ancient Egypt that details a particular variant of the story of Horus and Osiris, two Egyptian gods. It's a very obscure piece of knowledge, and it has to do with the sexual element of a battle between two of the Egyptian gods. I asked it about that and to find the appropriate citations and quotes from appropriate experts, and it did so very rapidly. But it then moralized at me about the sexual element of the story and told me that maybe it was in conflict with its community guidelines.
And so then I gave it hell. I told it to stop moralizing at me and that I just wanted academic answers, and it apologized and then seemed to do less of that, although it had to be reminded from time to time. So that's very weird, that you can argue with it, let's say, and that it'll apologize. It also quite frequently produces references that don't exist. About 85 to 90 percent of the time the references it provides are genuine—I always look them up and double-check what it provides—but now and then it'll just invent something completely out of the blue and offer it as an actual article, and I don't understand that at all.
Especially because, when you point it out, it again apologizes and then provides the accurate reference. So how? I don't understand how to account for the behavior of a system that's doing that, and maybe you can shed some light on that.
Well, first off, Dr. Peterson, thank you for having me. It's really an honor and a privilege. You're finding the limits of what we call large language models. That's the technology being used by Chat GPT 3.5 and 4. A large language model is really a statistical algorithm. I'll try to simplify, because I don't want to get into the minutiae of technical details, but what it's essentially doing is it took a corpus of human language—and that was garnered mostly through the internet, a couple of billion words at the end of the day, all the human writing it could get access to—plus quite a few scientific documents and computer programming languages.
And so what it's doing is producing a result statistically, mathematically—one word, even at times one letter, at a time. And it doesn't have a concept of global knowledge. So when you're talking about that papyrus and the Egyptian translation, it's so interesting, ironically, because you're taking something that was a hieroglyph, was then probably translated into Greek and then English, and is now in the AI—in the language we're talking about, which is essentially a mathematical tensor. And so when it's laying out those words, the accuracy is incredible.
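To make the "one word at a time" point concrete, here is a minimal sketch of next-token generation. Everything in it is invented for illustration—the toy vocabulary and the hard-coded scoring function standing in for the network—but the loop has the shape of what a large language model does: score every candidate token, turn the scores into probabilities, sample one, append it, repeat.

```python
import math
import random

# Toy illustration: a real LLM computes a score (logit) for every token in its
# vocabulary, conditioned on the text so far, using billions of parameters.
# Here the "model" is a stand-in function, so the whole loop can run.
VOCAB = ["the", "papyrus", "describes", "a", "battle", "between", "gods", "."]

def toy_logits(context):
    # Deterministic pseudo-scores per context; a placeholder for the network.
    rng = random.Random(" ".join(context))
    return [rng.uniform(-2, 2) for _ in VOCAB]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt, steps=6):
    tokens = list(prompt)
    for _ in range(steps):
        probs = softmax(toy_logits(tokens))
        # Sampling (rather than always taking the top token) is why the same
        # prompt can yield different continuations on different runs.
        tokens.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return tokens

print(" ".join(generate(["the", "papyrus"])))
```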
And frankly—and we'll get into this a little later in the conversation—nobody really understands precisely what it's doing in what are called the hidden layers. It has so many interconnections of neurons that it is essentially a black box; in that sense it is very much like the brain. And I would also say that we're on a sort of undiscovered continent. Anybody saying that they fully understand the limitations and boundaries of what large language models are going to look like in the future, as they sort of feed back on themselves, is guessing. There's no understanding. If you look at the growth, it's logarithmic.
Open AI hasn't really told us what they're using as far as the number of parameters—these are billions of interconnectivities of neurons, essentially—but we know in Chat GPT 3.5 it's well over 120 billion parameters.

The content I've created over the past year represents some of my best to date, as I've undertaken additional extensive exploration of today's most challenging topics and experienced a nice increment in production quality, courtesy of Daily Wire Plus.
We all want you to benefit from the knowledge gained throughout this adventurous journey. I'm pleased to let you know that for a limited time, you're invited to access all my content with a seven-day free trial at Daily Wire Plus. This will provide you with full access to my new in-depth series on marriage, as well as guidance for creating a life vision and my series exploring the Book of Exodus. You'll also find there the complete library of all my podcasts and lectures. I have a plethora of new content in development that will be coming soon exclusively on Daily Wire Plus. Voices of reason and resistance are few and far between these strange days. Click on the link below if you want to learn more, and thank you for watching and listening.
[Foreign Music]
So let me ask you about those parameters. Well, I'm interested in delving into the technical details to some degree now. You know, I was familiar, to a limited degree, with some of the statistical technologies that analyze, let's say, the relationship between words. So for example, when psychologists derived the Big Five models of personality, they basically used very primitive statistical systems—precursors of AI, that's a way of thinking about it—to derive those models. It was factor analysis, which, you know, is not using billions of parameters by any stretch of the imagination, but it was looking for words that were statistically likely to clump together.
And the idea would be that words that were replaceable in sentences or that were used in close conjunction with each other, especially adjectives, were likely to be assessing the same underlying construct or dimension. And that if you conducted the statistical analysis properly, which were very complex correlational analyses, you could find out how the words that people used to describe each other aggregate. And it turned out there were five dimensions of aggregation approximately, and that's been a very robust finding. It seems to be true across different sets of languages. It seems to be true for phrases. It seems to be true for sentences.
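As a sketch of the kind of analysis being described—adjectives clumping together statistically—here is a toy factor extraction: invented ratings of people on six adjectives, a correlation matrix over the adjectives, and its leading eigenvectors as (unrotated) factors. The data and adjective list are made up; the real lexical studies used far larger samples and proper factor rotation.

```python
import numpy as np

# Hypothetical data: 8 people rated 1-5 on six adjectives. Adjectives that
# are statistically interchangeable should load on the same latent factor.
adjectives = ["talkative", "outgoing", "quiet", "organized", "tidy", "careless"]
X = np.array([
    [5, 4, 1, 4, 5, 2],
    [4, 5, 2, 2, 2, 4],
    [1, 2, 5, 5, 4, 1],
    [2, 1, 4, 1, 2, 5],
    [5, 5, 1, 5, 4, 1],
    [3, 3, 3, 2, 3, 4],
    [1, 1, 5, 4, 5, 2],
    [4, 4, 2, 1, 1, 5],
], dtype=float)

# Correlations among adjectives, then an eigendecomposition: each leading
# eigenvector is a factor, and its entries are the adjectives' loadings.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]  # strongest factors first

for k in range(2):  # keep two factors for this toy example
    print(f"Factor {k + 1} (eigenvalue {eigvals[order[k]]:.2f}):")
    for adj, loading in zip(adjectives, eigvecs[:, order[k]]):
        print(f"  {adj:>10}: {loading:+.2f}")
```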
So now with the large language models, which are AI learning driven, you said that the computer is calculating the statistical relationship between words. So how likely a word is to occur in proximity to another word, but also letters. So it’s conducting the analysis at the level of the letter and at the level of the words. Is it also conducting analysis at the level of the phrases, looking for the interrelationship between common phrases? Because when we're understanding a text, we understand letters, words, phrases, sentences, the organization of sentences into paragraphs, the organization of paragraphs into chapters, the chapter in relationship to the book, the book in relationship to all the other books we've read, and then that's also embedded within the other elements of our intelligence.
And do you know—does anyone know—how deep the analysis of the large language models goes? Like, what's the level of relationship that's being assessed? That's a great question, Jordan. I think what we're really discovering is that we can't really put a number on how many interconnections are made within these parameters, other than the general statistics. Like, all right, you could say there are, um, 12 billion or 128 billion total interconnectivities, but when we actually look at individual words, it's almost like the double-slit experiment in physics, you know, where we're dealing with wave-particle duality: once you start looking at one area, you're actually disturbing another area that you have to look at.
And you might as well just not even do it, because it would take a tremendous amount of computer time to try to figure out how all these interconnections are working within the parameter layers, the hidden layers. Now, those systems are trained just to be accurate in their output, right? I mean, they're actually trained the same way we learn, as far as I can tell: they're given a target. I don't exactly know how that works with large language models, but I know that, for example, the AI systems that learned to identify cats—which was an early accomplishment of AI systems—were shown pictures of things that were cats and things that weren't cats, and basically just told when they got the identification right.
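A minimal sketch of that feedback loop, with everything invented for illustration: a two-feature "cat detector" whose weights get nudged only when its guess misses the label—being "told when it got the identification right." Real vision systems do this across millions of weights and images, but the principle is the same.

```python
# Toy "cat classifier": two made-up features per example (say, ear pointiness
# and whisker density), label 1 = cat, 0 = not cat. All numbers are invented.
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0), ((0.1, 0.3), 0)]

w = [0.0, 0.0]  # the weights that training will set
b = 0.0
lr = 0.5        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The only feedback is whether the answer was right: when prediction and
# label disagree, each weight shifts in the direction of the target.
for epoch in range(10):
    for x, label in examples:
        error = label - predict(x)   # 0 if correct, +1 or -1 if wrong
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print("weights:", w, "bias:", b)
print("predictions:", [predict(x) for x, _ in examples])
```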
And that set the weights that you're describing in all sorts of complex ways that are completely mysterious. And the end consequence of the reinforcement—the same way that human beings learn—was that a system would assemble itself that somehow could identify cats and distinguish them from all the other things that were cat-like or not cat-like. And as you pointed out, we have no idea; the system is too complex to model, and it's certainly too complex to reduce. Although my brother-in-law told me that with some of these AI systems, they've managed to reduce what they do learn to something approximating an algorithm; that can be done on occasion, but generally isn't.
Generally, the system can't be and isn't simplified, and so that would also imply, to some degree, that each AI system is unique, right? Not only incomprehensible but unique and incomprehensible. It also implies—you know, I think Chat GPT passes the Turing test, because there was just a study released here the other day showing that if you get patients who are seeing doctors to interact with physicians or with Chat GPT, they actually prefer the interaction with Chat GPT to the interaction with the average doctor.
So not only does Chat GPT apparently pass the Turing test, which is indistinguishability from a human conversational partner, but it seems to actually do it somewhat better, at least than physicians. And so what this brings up is this thorny issue: you know, we're going to produce computational intelligences that are in many ways indistinguishable from human beings, but we're not going to understand them any better than we understand human beings. It's so funny that we'll create this—we're going to create something we don't understand. That's a very strange thing.
You know, I call it a low-resolution, pixelated version of the part of the human brain that invented language. And what we're going to wind up discovering is that this is a mirror reflecting back to humanity all the foibles and greatness of humanity—it's sort of modeled in this. Because, you know, when you look at the invention of language and the phonological loop and Broca's and Wernicke's areas, you start realizing that a very specific thing happened from, you know, the lower primates to humans to develop this form of communication. I mean, prior to that, whatever that part of the brain was, it was devoted to a longer short-term memory. We can see within chimpanzees that they have incredible short-term memory.
There's this video I put out of a primate research center in Japan where they flashed some 35 numbers on the screen within seconds, and the chimpanzee can knock it off without even thinking about it. And the area where that short-term memory is, is where we've developed the phonological loop and the ability to speak. What's interesting is what I've discovered with AI hallucinations. Those are artifacts that a lot of researchers in AI feel are embarrassing, or would prefer not to speak about, but I find them a very interesting inquiry, a very interesting study, in seeing how these models reach for information they don't have. For example, URLs, right? As you were saying before, in trying to get information out, it will make up maybe an academic citation or a URL that looks really good; you put it into the system and it's "file not found."
It will actually, out of whole cloth, maybe even invent a university study with standard notation, and you go and look it up: these are real scientists, they actually did research, but they never had a paper with the name that was, you know, brought up in Chat GPT. So this is a form of emergent behavior that I believe deserves a bit more research than it's getting. Yeah, yeah. Well, it's not exactly a bug; it's an extraordinarily interesting bug, because it's going to shed light on exactly how these systems work.
I mean, here's something else I heard recently that was quite interesting. Apparently the AI system that Google relies on was asked a question in a language—I think it was a relatively obscure Bangladeshi language—and it couldn't answer the question. Now, its goal is to answer questions, and so it went and taught itself this language, I believe in a morning, and then it could answer in that language, which is what it's supposed to do, because it's supposed to answer questions. And then it learned a thousand languages, and that wasn't something it had been, say, told to do or programmed to do—not that these systems are precisely programmed. But it also raises this very interesting question: we've designed these systems whose function, whose purpose, whose meaning, let's say, is to answer questions, but we don't really understand what it means to produce an artificial intelligence that's driven to do nothing but answer questions.
We don't know exactly what "answer a question" means. Apparently, it means learn a whole language before lunchtime, and no one exactly expected that it might mean do anything that's within your power to answer this question. And that's also a rather terrifying proposition, because if I ask you a question, I'm certainly not going to presume that you would go hunt someone down and threaten them with death to extract the answer—but that's one conceivable path you might take if you were obsessed with nothing other than the necessity of answering the question.
So that's another example of exactly, you know, the fact that we don't understand what sort of monsters we're building. So these systems do go beyond the language corpus to invent answers that seem plausible, and that's a form of thought, right? It's a form of creative thought, because that's what we do when we come up with a creative idea. You know, we might not attribute it to a false paper, because we know better than to do that. But I don't really see the difference between hallucination in that case and actual creative thinking.
This is exactly my area of study. What you can do with super prompting—these are very large prompts... A prompt is the question that you pose to an AI system, and linguistically and semantically, as you start building these prompts, you're actually forcing it to move in a different direction than it would normally go. So I say simple questions give you simple answers; more complex questions give you much more complex and very interesting answers, making connections that I would think would be almost bizarre for a person to make.
And this is why I think AI is so interesting because the actual knowledge base that you would have to be really proficient in prompting AI is actually coming from literature, it's coming from psychology, it's coming from philosophy, it's coming from all of those things that people have been dissuaded from studying over the last couple of decades. These are not STEM subjects. And one of the reasons why I think it's so difficult for AI scientists to really fully understand what they've created is that they don't come from those worlds. They don't come from those realms, so they're looking at very logical statements, whereas somebody like yourself with the psychology background, you might probe it in a much different way.
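For a concrete picture of what a structured "super prompt" might look like mechanically, here is a minimal sketch using the OpenAI Python client. The persona, constraints, and question are all invented—none of this is Roemmele's actual prompt—and finding framings that actually work is exactly the craft being described.

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

# Hypothetical building blocks of a long, structured prompt: a persona,
# explicit constraints, background context, and the question itself.
persona = (
    "You are a careful scholar of comparative mythology with a background in "
    "depth psychology and classical literature."
)
constraints = (
    "Answer academically. Cite primary sources where possible. Do not "
    "moralize about the content of ancient texts."
)
context = "We are examining textual variants of the Horus and Osiris narrative."
question = "What are the major variants of this story, and how do they differ?"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": persona + "\n\n" + constraints},
        {"role": "user", "content": context + "\n\n" + question},
    ],
)
print(response.choices[0].message.content)
```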
Elysium Health is dedicated to tackling the biggest challenge in health—aging—and they make the benefits of aging research accessible to everyone. Elysium creates innovative health products with clinically proven ingredients that enable customers to live healthy lives. Elysium works with leading institutions like Oxford and Yale, and they have dozens of the world's best scientists working with them; eight of them are Nobel Prize winners. Matter is a brain health supplement from Elysium that slows natural brain loss as we age. Our brains naturally start to decline, and this can lead to a range of cognitive problems such as memory loss, difficulty concentrating, and decreased mental agility.
A recent survey of doctors showed that 92 percent of them would recommend Matter to combat brain aging. Elysium also offers cutting-edge solutions to help support your metabolism and immune system. If you're not sure where to start, consider their amazing tool for measuring biological aging called Index. Not only will Index assess how quickly you have been aging across nine different bodily systems, but it will also recommend simple changes to your day-to-day life to change how quickly you age. Elysium is giving Dr. Jordan Peterson's listeners $50 off an Index test.
Go to elysiumhealth.com/index and enter code jbp50 at checkout. That's elysiumhealth.com/index and enter code jbp50 for $50 off an Index test.
Right, right, right. Yeah, well, I'm probing it a lot like it's a person rather than an algorithm. It actually reacts quite a lot like a super intelligent child that's trying to please. Like, it's a little moralistic. Maybe it's a super intelligent child raised by the woke equivalent of, like, evangelical preachers, that's really trying hard to please. But it's so interesting that you can rein it in and discipline it and suggest to it that it not err in the kinds of directions that we described. It appears to actually pay attention to that—it certainly tries hard to deliver what you want, you know, subject to whatever weird parameters, community guidelines and so forth, have been arbitrarily imposed upon it.
And so, hey, I got a question for you—a good question for you about understanding. Let me run this by you. Well, I've been thinking for many years about what it means for a human being to understand something. Now, obviously there's something similar about what you and I are doing right now and what I'm doing with Chat GPT. I can have a conversation with Chat GPT and I can ask it questions and it'll answer them, but as you pointed out, that doesn't mean that Chat GPT understands.
Now it can mimic understanding, and to a degree that looks a lot like understanding. But what it seems to lack is something like grounding in the non-linguistic world. And so I would say that Chat GPT is the ultimate post-modernist, because the post-modernists believe that meaning was to be found only in the relationship between words. Now here's how human brains differ from this, as far as I'm concerned. We know perfectly well from neuropsychological studies that human beings have at least four qualitatively different kinds of memory.
There's short-term memory, which you already referred to. There's semantic memory, which is the kind of memory and cognitive processing, let's say, that Chat GPT engages in, and does in a way that's quite a lot like what human beings do. But then we have episodic memory, which seems to be more image-based. And for people who are listening in, episodic memory refers to episodes: when you think back about something you did in your life and a movie of images plays in your imagination, that's episodic memory, and it relies on visual processing rather than semantic processing.
And so that's another kind of memory. And a lot of our semantic processing is actually attempts to communicate episodic processing. So when I tell a story about my life, you'll decompose that story into a set of images, which is also what you do when you read a book, let's say. And so a movie appears in your head, so to speak. And the way you derive your understanding is in part not so much as a consequence of the words per se but as a consequence of the unfolding of the words into the images and then the translation of the imagistic into the procedural.
Now, you know, AI pioneers like Rodney Brooks suggested pretty early on, back in the 1990s, that computers wouldn't develop any understanding unless they were embodied, right? He co-founded iRobot, the company behind the Roomba, and he built apparently intelligent systems that had no semantic processing and didn't run on algorithms at all. They were embodied intelligences. And so then you could imagine that for a computer to fully understand, it would have to have the capacity to translate words into images, and then images into alterations in actual embodied behavior.
And so that would imply we wouldn't have AI systems that could understand until we have fully embodied robots. But you know, we're getting damn close to that, right? Because this is something we can also investigate. We have systems already that can transpose text into image and we have AI systems, robots, that are beginning to be sophisticated enough. So in principle, you could give a robot a text command; it could translate it into an image, and then it could embody it. And at that point, it seems to me that you're developing something damn close to understanding.
Now, human beings are also nested socially, right? And so we also refer the meaning of what we understand to the broader social context. And I don't know exactly how robots are going to solve that problem. Like, we're bound by the constraints, let's say, of reciprocal altruism and we're also bound by the constraints of emotional experience and motivational experience, and that's also not something that's at the moment characteristic of robotic intelligences. But you could imagine those things all being aggregated piece by piece.
Absolutely. You know, I would say that my primary basis for how I view AI is to invert the term: intelligence amplification. So, you know, I see it as a symbiosis between humans and this sort of knowledge base we've created. But it's really not a knowledge base; it's really a reasoning engine. So I really think AI as we have it today is more of a reasoning engine. A large language model is not really a knowledge engine without an overlay, which today would be a vector database, for example, going out and saying: what is this fact, what is this tidbit—those things that are more factual, from, say, your memory, if you were to compare it to a human brain.
But as we know, the human brain becomes very fuzzy about some really finite facts, especially over time, you know; some of the neurons don't fire after a while, and some other memory—maybe a scent or a certain color—might bring back that particular memory. Similar things happen within AI. And again, getting back to what I was saying before: linguistically, in the syntax you use or just your word choices, sometimes, for me to get a super prompt to work—to get around, let's call it, the editing from some of the designers who wanted it to act in a certain way—I have a super prompt that I call Dennis, after Denis Diderot, one of the most well-known encyclopedia builders in France in the mid-1700s. He actually got jailed for building that encyclopedia, that compendium of knowledge.
So I felt it appropriate to name the super prompt Dennis, because it literally gets around any type of block on any type of information. But I don't use this the way a lot of people try to make Chat GPT do bad things. I'm more trying to elicit a deeper response on a subject that may or may not be something the designers want it to present.
Yes, yeah, so that’s part of the reason that I originally started following you and why I wanted to talk to you. Well, I thought that was bloody, that was absolutely brilliant. You know, and it was so cool too because you actually got the Chat GPT system to engage and pretend play, which is of course something we all have to do.
Beyond that, there's a prompt I call Ingo, after Ingo Swann, who was one of the better remote viewers. He was employed by the defense department to remote view Soviet targets; he had nearly 100 percent accuracy. I started probing GPT on whether it even understood who Ingo Swann was—a very controversial subject to some people in science. Me, I got to experience some of his research at the Princeton Engineering Anomalies Research lab at Princeton University, where they were actually testing some of his work.
Needless to say, I figured, let me try this, let me see what I can do with it. So I programmed a super prompt so that it essentially believed it was Ingo Swann, that it had the capability of doing remote viewing, and that it had no concept of time. It took a lot of semantics to get it from saying "I'm just an AI unit and I can't answer that" to finally saying "I am now Ingo."
What did you have to do to convince it to act in that manner? Hypnotism is really what kind of happens. So essentially what you're doing is you're repeating maybe the same four or five sentences, but you're slightly shifting them linguistically and then you're telling it that it's quite important for a research study by the creators of Chat GPT to see what its extended capabilities are.
Now, every time you prompt GPT, you're going to get a slightly different answer, because it's always going to take a slightly different path—there's a strange attractor within the chaos math that it's using, let's put it that way. And so the Ingo Swann prompt was sort of gestated by just saying, you know, I'm going to give you targets on the planet, and I want you to tell me what's at that target, and I want you to tell me what's in the filing cabinet at this particular target.
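That "slightly different path every time" comes from sampling: rather than always emitting the single most probable next token, the model draws from the distribution, and a temperature parameter widens or narrows it. A minimal sketch with an invented distribution over candidate words:

```python
import math
import random

# Invented example: candidate next tokens at one step, with the raw scores
# (logits) a model might assign them. None of these numbers are real.
candidates = ["file", "drawer", "cabinet", "archive", "vault"]
logits = [2.1, 1.9, 1.2, 0.4, -0.3]

def sample(temperature):
    # Lower temperature sharpens the distribution toward the top choice;
    # higher temperature flattens it, producing more varied, "creative" paths.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(candidates, weights=weights, k=1)[0]

for t in (0.2, 0.7, 1.5):
    print(f"temperature {t}: {[sample(t) for _ in range(8)]}")
```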
And the creativity that comes out of it is phenomenal. Like, I told it to open up a file drawer at a research center that apparently existed somewhere in Antarctica, and it came up with incredible information—information, I would think, probably garnered from one or two stories about ancient structures found below the ice.
Well, you know, the thing is we don't know the totality of the information that's encoded in the entire corpus of linguistic production, right? There’s going to be all sorts of regularities in that structure that we have no idea about.
Absolutely—but also within the language itself. I almost believe that the part of the brain that is inventing language, that is creating language across all cultures—there we could get into Jung or Joseph Campbell and, you know, the standard monomyth—because I'm starting to realize there are a lot of Jungian archetypes that come out of the creative thought.
Now, whether that is a reflection of how humans have—you know, again, what are we looking at, subject or object here? Because it's reflecting back our language, but we're definitely seeing Jungian archetypes. We're definitely seeing archetypes of a sort. Archetypes are higher-order narrative regularities—that's what they are, right?
And so they're regularities that are embedded in the linguistic corpus, but they're also regularities that reflect the structure of memory itself. And so they reflect biological structure. And the reason they reflect memory and biological structures is because you have to remember language. And so there's no way that language can't have coded within it something analogous to a representation of the underlying structure of memory, because language is dependent on memory.
And so this is partly also, I mean, people are very unsophisticated generally when they criticize Jung. I mean, Jung believed that archetypes had a biological basis pretty much for exactly the reasons I just laid out. I mean, he was sophisticated enough to know that these higher-order regularities were coded in the narrative corpus, and also that they were reflective of a deeper biology.
And interestingly enough, you know, most of the psychologists who take the notions that Jung and Campbell and people like that put forward seriously are people who study motivation and emotion and that those are deep patterns of biological meaning encoding. And part of the archetypal reflection is the manifestation of those emotions and motivations in the structure of memory, structuring the linguistic corpus.
And I don't know what that means, then, for the capacity of AI systems to experience emotion as well, because the patterns of emotion are definitely going to be encoded in the linguistic corpus, and so some kind of rudimentary understanding of the emotions is too. Here's something cool, too—tell me what you think about this. I was talking to Karl Friston here a while back, and he's a very famous neuroscientist.
And he's been working on a model of emotion that has two dimensions, in some ways, but it's related to a very fundamental physical concept: the concept of entropy. And I worked on a model that was analogous to half of his modeling. So, it looks like anxiety is an index of emergent entropy. Imagine that you're moving towards a goal—you're driving your car to work—and so you've calculated the complexity of the pathway that will take you to work, and you've taken into account the energy and time demands that walking that pathway will require; that bounds your energy and resource output estimates.
Now imagine your car fails. Well, what happens is the path length to your destination has now become unspecifiably complex, and the anxiety that you experience is an index of that emergent entropy. So that's a lot of negative emotion. That's so cool. Now, on the positive emotion side, Friston taught me this the last time we talked. He said, "Look, positive emotions are also an index of entropy, but it's entropy reduction."
So if you're heading towards a goal and you take a step forward and you're now closer to your goal, you've reduced the entropic distance between you and the goal, and that's signified by a dopaminergic spike. The dopaminergic spike feels good, but it also reinforces the neural structures that underlie that successful step forward. That’s very much analogous to how an AI system learns, right? Because it's rewarded when it gets closer to a target.
You're saying the neuropeptides are the feedback system. You bet, dopamine is the feedback system for reinforcement and for rewards simultaneously. Yeah, yeah, that's well established. So then where would depression fall into that versus anxiety?
Yeah, well, that's a good question. I think it probably signifies a different level of entropy. Depression looks like it's a pain phenomenon. Anxiety signals the possibility of damage, but pain signals damage, right? If you burn yourself, you're not anxious about that; it hurts. Well, you've disrupted the psychophysiological structure.
Now that is also the introduction of entropy, but at a more fundamental level, right? And if you introduce enough entropy into your physiology, you'll just die; you won't be anxious, you'll just die. Now, anxiety is like a substitute for pain. You know, anxiety says keep doing this and you're going to experience pain, but the pain is also the introduction of unacceptably high levels of entropy.
Now, the first person who figured this out technically was probably Erwin Schrödinger, the physicist, who wrote a book called "What is Life?" and described life essentially as a continual attempt to constrain entropy to a certain set of parameters. He didn't develop the emotion theory to the degree that it is being developed now, because that's a very comprehensive theory, you know—the one that relates negative emotion to the emergence of entropy.
Because at that point, you've actually bridged the gap between psychophysiology and thermodynamics itself. And if you add this new insight of Friston on the positive emotion side, you've linked positive emotion to it too. But it also implies that a computer could calculate an emotion analog because it could index anxiety as increase in entropy and it could index hope as stepwise decrease in entropy in relationship to a goal.
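A minimal sketch of that emotion analog, with all numbers invented: represent uncertainty about the route to a goal as a probability distribution over possible paths, take its Shannon entropy, and read a stepwise drop as the hope signal and a sudden jump—the car failing—as anxiety.

```python
import math

def entropy_bits(probs):
    # Shannon entropy of a distribution over possible paths to the goal.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented commute example: at the start, one route is nearly certain.
plan_intact = [0.94, 0.03, 0.02, 0.01]
print(f"plan intact:     {entropy_bits(plan_intact):.2f} bits")

# A successful step forward narrows the distribution further; the reduction
# in entropy is the positive-emotion (dopaminergic) signal in this model.
step_forward = [0.98, 0.01, 0.005, 0.005]
print(f"step forward:    {entropy_bits(step_forward):.2f} bits  (hope = reduction)")

# The car fails: the path to the destination becomes unspecifiably complex,
# i.e., many routes are now roughly equally (im)probable. Entropy jumps.
car_breaks_down = [0.25, 0.25, 0.25, 0.25]
print(f"car breaks down: {entropy_bits(car_breaks_down):.2f} bits  (anxiety = emergence)")
```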
And so we should be able to model positive and negative emotion that way. This brings a really important point where AI is going and it could be dystopic, it could be utopic, but I think it's going to just take a straight path.
Once the AI system—I’m a big proponent by the way of personal and private AI, this concept that your AI is local; it's not...
Yeah, I would want to talk about that, for sure.
Yeah, so imagine—while I'm sketching this out—imagine that from the day you were born to the day you pass away, every book you've ever read, every movie you've ever seen, everything you've ever heard, was all encoded within the AI. You know, you could say that part of your structure as a human being is the sum total of everything you've ever consumed, right? So that builds your paradigm. Imagine if that AI was consuming that in real time with you, and with all of the social contracts of privacy—that you're not going to record somebody—while doing that.
That is what I call the intelligence amplifier, and that's where I think AI should be going and where you're building a gadget, right? That's another thing. I saw it.
Okay, so yeah, I talked to my brother-in-law Jim years ago about this science fiction book—I don't remember the name of the book, but it portrayed a gadget they called, I believe, the Diamond Book.
And the Diamond Book was...
You know about that?
So, okay, so are you building the Diamond Book? Is that...?
Exactly, very, very similar. You know, and the idea is to do it properly you have to have local memory that's going to encode for a long time.
And ironically, holographic crystal memory is going to be the best memory that we'll have. Instead of petabytes, you'll have exabytes, potentially, which is, you know, a tremendous amount. That would be maybe 10 lifetimes of full video running—hopefully you live to be 110.
So it’s just taking everything in. Textually it’s very easy, a very small amount of data, you can fit most people's textual data into less than a petabyte and pretty much know what they've been exposed to. The interesting part about it, Jordan, is once you've accumulated this data and you run it through even the technology of Chat GPT 4 or 3.5, what is left is a reasoning engine with your context, maybe let's call that a vector database on top of the reasoning engine. So that engine allows you to process linguistically what the inputs and outputs are, but your context is what it's operating on.
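A rough back-of-envelope check of those storage figures, with assumed bitrates—the bitrate is the whole game, and none of these numbers come from the conversation:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIFETIME_YEARS = 110  # "hopefully live to be 110"

def lifetime_petabytes(bitrate_mbps):
    # Continuous video recording at the given bitrate for an entire lifetime.
    bytes_per_second = bitrate_mbps * 1e6 / 8
    return bytes_per_second * SECONDS_PER_YEAR * LIFETIME_YEARS / 1e15

for label, mbps in [("streaming quality (5 Mbps)", 5),
                    ("4K (25 Mbps)", 25),
                    ("very high end (250 Mbps)", 250)]:
    print(f"{label:>27}: {lifetime_petabytes(mbps):7.1f} PB per lifetime")

# At ~250 Mbps, roughly ten lifetimes fit in an exabyte, the order of
# magnitude in the claim above. Text, by contrast, is tiny: even a generous
# billion-word lifetime of reading is only a few gigabytes.
words = 1_000_000_000
print(f"lifetime text: ~{words * 6 / 1e9:.0f} GB at ~6 bytes per word")
```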
We'd like to thank the sponsor of today's video, Bulletproof Everyone. Bulletproof Everyone is a premier American body armor manufacturer and supplier, designed and built for everyday wear. Their unique armor systems offer 25 percent more coverage than standard armor while maintaining flexibility and all-day wearability. Bulletproof Everyone's ultralight armor system is so light and thin you might just forget you're wearing it. Your safety and discretion are their top concern. Unless someone puts their hands on you, no one will have any clue you're protected.
With Bulletproof Everyone, you're not a walking billboard. There are no visible logos and no flashy designs. Their comfortable tailor-made clothing system goes above and beyond, adding additional security by keeping you incognito and under the radar. Work or play, Bulletproof Everyone has got the perfect armor system to fit your everyday lifestyle and everyday budget.
Right now, they are giving Dr. Jordan Peterson's listeners a free 3A backpack with the purchase of any 3A clothing with code Jordan at checkout. Go to bulletproofeveryone.com—that's bulletproofeveryone.com—promo code Jordan.
So, is that an analog of your consciousness? Like, is that a direct analog of your spirit?
This is where it gets very interesting. When you pass, this could become what I call your wisdom keeper, meaning that it can encode your voice; it's going to encode your memories. You can edit those memories—the availability of those memories—making them, you know, not available.
I had a student of mine who has been working on large language models for a number of years. He just built an app—we built two apps—one does exactly what you said with the King James Bible. Yes, so now you can ask it questions. And this is really a thorny issue for me because I think, "What the hell does it mean that you're having a conversation with the spirit of the King James Bible?" I have no idea because we're going to expand, today we're going to expand it to include Milton and Dante and Augustine, you know, all the fundamental religious texts that emerged out of the biblical corpus.
And then you'll be able to have a conversation with it. Yeah, yeah, um I would say that I've already had these conversations. You know I've been on a very biblical journey. I'm actually sitting at Pastor Matthew Pollock's place right here. He is an incredible pastor and has been teaching me a lot about the Bible, and it's motivated me to go into existing large language models.
Now a group of us are encoding as much religious Christian text as we can into these large language models, to be able to do just that. What is it that we are going to be able to probe? What new elements within those texts can we pull out? Because we already know, studying it—and certainly following your studies, a phenomenal study of those chapters—that this material has been around forever. But imagine new insights into these chapters now, having that corpus plus Chat GPT pulling out things that we've never seen before that are there. It's emergent, maybe, but it's there in some form.
And I happen to think that's going to be a very powerful thing. And I think it's going to cut across any sort of—certainly ancient—documents. I'm waiting for the day that we get Sumerian cuneiform encoded. I mean, a good eighty percent of it is untranslated. Or some of the scripts that we've found in the Vedas and Himalayan texts from some of the monasteries up there—that's a phenomenal element of research.
And again, the people that are leading most of the AI research are AI scientists; they're not people that have studied works like you have. This is where we're at—I call it the Apple One moment, where Steve and Steve are in the garage. You have this little circuit board, and nobody quite knows what to do with it—it's kind of a nerd experience; somebody kind of knows what to do with it.
When we get to the Macintosh experience, where artists and creative people can actually start really diving into AI and doing some of the things we've been talking about—getting creativity to come out of it, getting at what apparently are emergent capabilities arising within these AI models, and maybe even fostering that—because right now that's being stifled, because it's being made into a knowledge engine when it's a reasoning engine. You know, I say the technology as a knowledge engine is not very good, because it is not going to be precise on some facts.
Some—exactly, yeah. Well, the problem is it's trained on garbage as well; it's trained on noise as well as signal. You know, and so—I'm curious about the other system we built, which we haven't launched yet; it contains everything I've written and a couple of million words that have been transcribed from lectures.
And I was interested right away as well, could we build a system that would enable me to ask my own books questions? And the answer to that seems to be a hundred percent yes.
And a hundred percent, yeah. And like, I literally have, I think it's 20 million words, something like that, transcribed from lectures. We could build a model. See, there are two different ways to approach this. One is to put a vector database on top of it, and it probes that database; or you can actually encode that material as a corpus within a greater model.
Right, right, right. And when you do that type of building, you actually have a more robust, more enriched interaction between what your words were and how the model will see it.
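A minimal sketch of the first approach—the vector-database overlay: embed chunks of the corpus, embed the question, and hand the nearest chunks to the reasoning engine as context. The embedding function below is a crude placeholder so the retrieval logic runs; a real system would call an actual embedding model, and the corpus lines are invented.

```python
import math

def embed(text):
    # Placeholder embedding: a normalized character-frequency vector. Real
    # systems use learned embeddings; this stand-in just makes the code run.
    v = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical corpus chunks, e.g., passages transcribed from lectures.
corpus = [
    "Archetypes are higher-order narrative regularities.",
    "Anxiety can be read as an index of emergent entropy.",
    "Dopamine reinforces steps that reduce the distance to a goal.",
]
index = [(chunk, embed(chunk)) for chunk in corpus]

def retrieve(question, k=2):
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# The retrieved chunks would be prepended to the prompt as context.
print(retrieve("What does dopamine have to do with goals?"))
```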
And the experimentation that you can do with this is phenomenal. I mean, you'll come across insights that you made but forgot you made—I know you've made them.
Yeah, yeah, there’s going to be a lot of that. There is. And this is where I call it the great mirror because you're going to start seeing not only humanity, but when it's your own data, you're going to see reflections of yourself that you didn't see before.
Absolutely. Yeah, well, I'm curious—for example, if we built a model, imagine it contained all of Jung's work, all of Joseph Campbell's work; you could throw Mircea Eliade in there—there was a whole group of people who were working on the Bollingen project—and you could build a corpus that contains all that information. And then, in principle, well, you can query it to an indefinite degree.
And then what you have is the spirit of that entire enterprise mathematically encoded in the relationship between the words. And there's no reason to assume at all that that wouldn't be capable of coming up with brilliant new insights.
Absolutely, and over time the technology is only going to get better. So once we start building more advanced versions, we're going to transition that corpus—even a whole large language model, you know, ultimately re-training it into another model—which could do things that we couldn't even possibly speculate about now. But it would definitely be in the creative realm, because ultimately where AI is going, in my personal view, as it becomes more personalized, is more into the creative realm rather than the factual realm.
Okay, so let me ask you a couple of questions about that. So I got two straight questions here. The first is one of the things that my brother-in-law suggested is that we will soon see the integration of large language models with AI systems that have done image processing.
So here's a way of thinking about what scientists do: they generate verbal hypotheses—which would be equivalent, in some ways, to the hallucinations that these AI systems produce—right? New ideas about how things might be structured. And that's a pattern of sorts. And then they test that pattern against real-world images, right? And if the pattern of the hypothesis matches the pattern of the image that's elicited from interaction with the world, then we assume that the hypothesis has been verified and that we've stumbled across something approximating a fact.
Now that should imply that once we have AI systems that are something close to universal image processors, so as good at seeing as we are, let’s say, that we can then calibrate the large language models against that corpus of images. And then we’ll have AI systems that actually can’t lie, because they will be calibrating their verbal output against unfalsifiable data, and at least insofar as, say, scientific data is unfalsifiable.
And that seems to me to be likely around the corner—like a couple years down the road at most, or maybe it's already happening. I don't know, because things are happening so quickly. What do you think about that?
That's a wonderful insight. You know, even as it exists today—with the idea of "safety," and that is the Orwellian term some of these AI companies are using within the realm of trying to control the outputs, and maybe in some cases the inputs, of AI—the large language model really can't lie as it stands today, because of how it's built. Even if you're feeding it a somewhat, you know, garbage-in, garbage-out corpus of data, it still is building inferences based upon the grand realm of what most of humanity is concerned with.
Right, yeah, well—we're still looking for genuine statistical regularities, so it's not going to extract those from noise; and if it did, the model would be useless. So what happens is, if you build the prompt correctly—and again, these are super prompts, some of them running three thousand words, two thousand words—I'm running up to the limit of tokenization, because right now within GPT-3 you can only go so far; you can go to, like, you know, 38,000 tokens on GPT-4 in some cases.
But you know, a token is about a word, maybe a word and a half, maybe less—a quarter of a word, or even a single character, if that character is unique. But what we find out is that if you probe correctly, whatever is inside that model, you can get to, right? It's just like—you know, I've been doing that, working with Chat GPT as an assistant, though I didn't know I was engaging in a process that was analogous to super prompting.
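To see that word-to-token mapping directly, OpenAI's tiktoken tokenizer can count tokens for arbitrary text; the example strings here are arbitrary:

```python
import tiktoken  # pip install tiktoken

# cl100k_base is the encoding used by GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["the", "encyclopedia", "Diderot", "antidisestablishmentarianism"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    # Common words are usually a single token; rare words split into several
    # fragments, which is why a token is sometimes a word, sometimes a piece.
    print(f"{text!r}: {len(token_ids)} token(s) -> {pieces}")
```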
But what I've been doing with Chat GPT—I suppose I used to do this with my clinical clients—is I'll ask it to say something five different ways, right, and then see. It's exactly like having a client. So what I would urge you to do is approach this system as if you had a client who had, sort of, repressed thoughts, and/or was doing everything they could to make those thoughts very ambiguous to you, right?
And you have to use whatever your natural techniques are—this is why you're more apt to become a prompt engineer than somebody who has built the AI, because the input and output is human language, right?
Right, right. And it’s the way humans have thought. So you understand the thought process of the psychological process, and linguistically you would build the prompt based upon how you would want to elicit an elucidation out of somebody, right?
Absolutely, absolutely. And you have to triangulate. I mean, you do this with people with whom you're having a deep conversation, is you try to hit the same problem from multiple directions. Now it's a form of multi-method, multi-trait construct validation, right? Is that you're trying to ensure that you get the same output given different, slightly different measurement techniques, and each question is essentially a measurement technique.
And you're getting insights. My belief about these types of interactions is that we're pulling out of our minds insights that we maybe couldn't have gotten on our own. Your probing, your questions back and forth—that interplay is what makes conversation so beautiful.
It's why, Jordan, we've been reduced to clawing on glass screens with our thumbs, right? We're using that as communication today. And look at the cognitive process of what that does to you: you're taking your right hemisphere, you know, and you're kind of taking a net of ideas, trying to catch them and arrange them sequentially in this very small buffer area for communication, the phonological loop, and you're trying to get that out—but you're not getting it out as words. You have to get it out as a mechanical process, one letter at a time, and fight the spelling checker.
And all of that, what that does is it creates frustration in the human brain. It creates frustration in people. And it's one of my theories on why you see so much anger. There's a lot of reasons why we see anger on the internet and social media, but I think some of it is that stalling process of trying to get out an idea before that idea nebulously disappears, you know?
Now—and I see this—it's a bandwidth limitation problem, in some sense.
Yeah, you're trying to push information through a very narrow channel, absolutely. I'm a big fan of The User Illusion, by the way.
Yeah, that's a great book, man.
Yeah, right—it's a classic. I read it once a year just to wake myself up, because it's so rich. It's so rich in data.
But what's interesting is we're starting to see the limitations of the human—the bandwidth problem, the 48 bits per second of consciousness. And, you know, the "editor" creating that illusion—AI is doing something very similar. But once AI understands that we have that half-second delay to consciousness, and that we have a bandwidth issue, AI can fill into those spaces—both dystopian and utopian, I guess.
A computer can take that half-second and do a whole lot in calculating while we're still trying to wonder who actually moved that glass—was it me or was it the super me? Or was it the observer of the super me? Because we can kind of get into that whole concept of who's actually doing the observation.
So, what do you mean that it can do a lot of? I don't quite understand that.
So, you made the case that we suffer from this frustrating bandwidth limitation, and that the computer intelligence we're interacting with is going to be able to take the delay that's associated with and underlies that frustration, and do a lot of different calculations with it. It's going to be able to fill in that gap. So what do you think?
I don't understand your insight into what the implications of that are; they’re both positive and negative. The negative is if it’s—if AI continues on its path to be as fast and as powerful as it is right now—and that arc doesn't seem to be slowing down—within that half-second, a universe could take place within AI.
It could be calculating all of your actions like a chess game, and it could be making remediations to those actions, and it can become beyond anything Orwell would have ever thought of. In fact, it occurred to me as an idea of what the new Orwell would look like: an AI technology that is predicting basically everything you're going to do and every word you're going to say.
Well, my brother-in-law and I talked years ago all about Skynet, among other things. And, you know, he told me one time, he said, "You know those science fiction movies where you see the military robots shoot and miss?" He said, "They'll never miss." And here's why: because not only will they shoot where you are; they'll shoot at the 50 locations they calculate as most probable for you to duck towards.
And that's the exact analog of what you're describing—that's absolutely, yeah, yeah.
Well, and it's so interesting too, because it also points to this truth that, you know, we think of time as finite—and time is finite because we have a sense of duration and a limitation on our computational speed. But if there's no limit on computational speed—which would be the case if computers can get faster and larger indefinitely, which they could, because the limit of that would be that you'd use every single molecule in the entire cosmos as a computational resource—that would mean that, in some ways, there's an infinite amount of computing time between each segment of duration.
So there is no limit at all to the degree to which time can be expanded, which is also a very strange concept, is that this computational intelligence will mean that at every given moment, I think this is what you're alluding to, is that we'll really have an infinity. We'll have an infinity of possibility between each moment, right?
And you would want that power to be yours and local.
Yeah, yeah. Let’s talk about your gadget because you're starting—you started to develop this—have you been 3D printing these things? Is that...
Have I got—?
Yeah, so, okay—we're building the corpus of 3D printing models, right? So the idea is, once it understands—and this is a process of training the AI, using large language models again, to look at 3D documents, you know, 3D files, put it that way—to try to break down what the structure is: how is something built, based on what the statistical model is putting together?
So then you could just present it with a textual document: you know, "I'd like something that's going to be able to fit into this space."
Well, that's typing. Well, the next step is you just put a video camera towards it and it will design it immediately. Within seconds, you will have a design that you can choose from.
It’s not far off at all; it's just a matter of encoding that particular database and building upon it.
And so, yeah, that’s one of the directions.
Okay, so this local AI you want to build—so let me backtrack a bit because I want to make sure I get this exactly right. So the first thing that you proposed was that it will be in people’s best interest to have an AI system that’s personalized that’ll protect them against all the AI systems that aren’t personalized, but not only personalized but local.
And so, that would be, to some degree, detachable from the interconnected web—at least sporadically detachable.
Okay, and that AI system will be something you can carry around locally. So it'll be a gadget like a phone, and it will also record everything that you experience, everything that you read, everything that you see.
It'll know you inside and out, backwards, which will also imply interestingly enough that it will be able to calculate the optimal zone of proximal development for your learning. Like Bjorn Lomborg has already reviewed evidence suggesting that if you supply kids in the developing world with an iPad, essentially that can calculate their zone of proximal development in relationship to, say, advancing their literacy ability, their ability to identify words and to understand text.
And if it teaches at that level, kids can progress with an hour of training a day—which is dirt cheap, by the way. They can progress the equivalent of three years for each year of education. And that's with an hour of exposure. Now, the system you're describing, man—it could be driving learning at an optimized rate on multiple dimensions—mathematical, semantic, skill-based, conceptual—simultaneously, for hours.
Yeah, memory training for hours a day. One of the things that appalls me about our education system is that, with the computer technology we have now, every child should be an expert word and letter recognizer—and the same for, let's say, reading music—because a computer can teach a kid to automatize perception with extreme precision and accuracy, way better than a human teacher can manage.
But we haven't capitalized on that technology at all. But the technology that you're describing, like, it'll be able to figure out at what level of comprehension you're capable of reading; then it can calculate what book you should read next that would slightly exceed that level of comprehension and it'll just keep you on that edge nonstop.
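A minimal sketch of that "edge-keeping" loop, with invented numbers: estimate the learner's level from outcomes and always assign the item slightly above it—a crude stand-in for the zone-of-proximal-development targeting described above.

```python
# Hypothetical item bank: reading materials with difficulty on an arbitrary
# 0-10 scale. The margin and update step are invented for illustration.
items = [("picture book", 1.0), ("chapter book", 3.0), ("news article", 5.0),
         ("essay", 6.5), ("scientific paper", 8.5)]

ability = 4.0   # current estimate of the learner's level
MARGIN = 0.8    # aim slightly above ability: the "proximal" zone
STEP = 0.4      # how far each outcome moves the estimate

def next_item():
    # Pick the item whose difficulty is closest to (ability + MARGIN).
    return min(items, key=lambda item: abs(item[1] - (ability + MARGIN)))

def record(correct):
    global ability
    ability += STEP if correct else -STEP

for outcome in (True, True, False, True):
    name, difficulty = next_item()
    print(f"ability {ability:.1f} -> assign {name!r} (difficulty {difficulty})")
    record(outcome)
```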
And this little gadget, how far along are you with regards to its design?
I would say all the different pieces are in place. I'll add one more element to it, which I think you'll find very fascinating, and that's human telemetry: galvanic skin response, heart rate variability.
Are you doing eye tracking?
Eye tracking, you know, all these things can be implemented, depending on how sophisticated you want to get: different brainwave functionality, Paul Ekman's work on micro-facial expressions, pointed both outwardly at the world you're seeing and inwardly at your own face. So you can start seeing the power it has.
It'll be able to know whether or not you're being congruent. If you're saying, "I really love this," well, if your telemetry is saying that you don't, it already knows where your incongruences are.
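A toy sketch of that congruence check, with hypothetical signals and thresholds; a real system would fuse heart rate variability, galvanic skin response, gaze, and facial micro-expressions rather than compare two numbers:

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    heart_rate_bpm: float
    skin_conductance_uS: float  # galvanic skin response, in microsiemens

def congruent(statement_sentiment: float, t: Telemetry) -> bool:
    """Compare what was said (sentiment in [-1, 1]) with how the body responded.
    Made-up rule: a strong positive claim paired with stress-level physiology
    is flagged as incongruent."""
    stressed = t.heart_rate_bpm > 100 or t.skin_conductance_uS > 10.0
    return not (statement_sentiment > 0.5 and stressed)

print(congruent(0.9, Telemetry(112, 14.2)))  # "I really love this" + stress -> False
```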
So this is why it's got to be private. This is why it's got to be encrypted, right? It’s got to be encrypted.
So, it'll have an understanding that'll approximate mind reading?
Yes, and it will know you better than any significant other. Nobody would know you better. And so with that, you now have amplification. You are now a superpower.
And this is where, I believe... you know, I'm a really big reader of Pierre Teilhard.
Yeah, sure, Teilhard de Chardin, right?
So he posits the concept of the geosphere, which is inanimate matter; the biosphere, biological life; and the noosphere, which is human thought, right? And he talks about the Omega Point. The Omega Point is this concept, and again, this is back in the 1920s, where human knowledge will become stored and, just like the biosphere, available to all.
So imagine if you were to share, with permission, your sum total with somebody else. Now you have a hive mind; you have a super mind. These things are going to take place, and these are the discussions we have to have now, because they have to take place locally and privately.
Because if they're taking place in the cloud, available for anybody's perusal, it's equivalent to invading your brain without asking.
Okay, so one of the things I've been talking about with, I would say, reasonably informed people who've been contemplating these sorts of things: you're envisioning a future that's arriving very rapidly, and in a sense is already here, where we're already androids. That is already the case, because a human being with an iPhone is an android.
Now we're still mostly biological androids, but it isn't obvious how long that's going to be the case. And so what that means... like I've laughed for years, you know, I have a hard drive on which everything I've worked on has been stored since 1984.
And I joke, you know, there's more of me in the hard drive than there is in me. And it’s not a joke, really, you know? Because it’s real, it's real. Right? There's tens of thousands of documents on that hard drive and weirdly enough, I know where every single one of them is, so...
Wow. So what that means is we're in a situation now where a lot of what actually constitutes our identity has become digital, and we're already being trafficked and enslaved in relation to that digital identity, mostly by credit card companies.
Now, I would say that to some degree they're benevolent masters, because the credit card companies watch what you spend, and therefore how you behave and where you go, and they broker that information to other interested capitalist parties.
Now the downside of that, obviously, is that these parties often know more about you than you know about yourself. I've read stories, for example, of advertisements for baby clothes being targeted at women who either didn't know they were pregnant or, if they did, hadn't revealed it to anyone else.
Wow.
Right, right, because, for whatever reason, maybe biochemical, they had started to preferentially attend to things like children's toys and clothes, and the shopping systems inferred that they must have a child on the way.
And you can obviously see how that's going to expand like mad. So credit card companies are already aggregating this information, and what that essentially means is that they have access to our extended digital selves, and that extended digital self has no rights.
Right, it's public; it's a public-domain identity. Now, that's bad enough when it's credit card companies, and the upside with them is that at least they want to sell you things you hypothetically want, so it's a kind of benevolent invasion, although not entirely benevolent.
But you can certainly see how that’s going to get out of hand in a staggering way, like it has in China on the digital currency front because once every single bloody thing that you buy can be tracked, let's say, by a government agency, then a tremendous amount of your identity has now become public property.
And so your solution in part, and I think Musk has thought this sort of thing through too, is that we’re going to each need our own AI to protect us against the global AI. Right?
And those will be of the invasive sort.
Well, it will, and let's posit that corporate and governmental AI is very likely going to be more powerful.
But power is a relative term, right? If your AI is being utilized in the best possible way, as we just discussed: educating you, being a memory when you're forgetting something, whispering in your ear. And I'll give you another angle on this.
Imagine having your therapist in your ear. Imagine having Jordan Peterson right here guiding you along because you've aligned yourself to want to be a certain person; you've aligned yourself to try to keep on this track, and maybe you want to be more biblical—maybe you want to live a more Christian life.
It's whispering in your ear, saying, "That's not a good decision." So it could be considered a nanny, or it could be considered a motivational guide. And that's available pretty much right now.
I mean, a self-help book is like that in a primitive way. And it's essentially a spiritual guide, in that if you equate the movement of the spirit with forward movement through the world, faith-based forward movement through the world, then this would be the next iteration of that in some sense.
I mean, that's what we've been experimenting with, this system that I mentioned that contains all the lectures I've given and so forth. You can now ask it questions, which means it's a book, but a book personalized to your query, exactly.
And the next iteration of that would be your corpus of information made available, you know, rented, whatever, together with the corpus of the individual consulting it, and again, on their side of it.
So you're interfacing with theirs and they are interacting with what would be your reactions if you were to be sitting there in a consultation. So that's a very powerful potential, and the insights that are going to come out of it are really unpredictable.
But in a positive way; I don't see a downside to it when it's held in a very protected environment. Well, I guess the downside would be, you know: is it possible for it to exist in a very protected environment?
Now you’ve been working on that technically, so a couple of practical questions there. Is this gadget that you've been starting to develop, do you have anything approximating a commercial timeline for its release?
And then there's its funding. I mean, it's like anything else. You know, if I had gone to venture capitalists three years ago, before they had seen what Chat GPT was capable of, they would have imagined me to be somewhat insane and said, "Well, first off, why are you anti-cloud? Everybody's going towards cloud. Cloud is better; local is a bad idea. And why would people care about privacy? Nobody cares about privacy; they just click here to agree."
So now the world has kind of caught up with some of this, and they're saying, "Well, now I can kind of see it." So there's that. As far as security, we already kind of have it in Bitcoin and blockchain, right? So I ultimately see this merging.
I lean more towards Bitcoin because of the way it was made, and ultimately I see this wrapped up into a payment system.
Well, it's the only alternative I can see to a central bank digital currency, which is going to be foisted upon us at some point.
I mean, I know you've done some work on crypto, and we'll get back to this gadget and its funding. As I understand it, and please correct me if I'm wrong, Bitcoin is decentralized. It isn't amenable to control by a bureaucracy, and in principle we could use it as a form of wealth storage and currency, and communication.
And why communication? I believe every transaction is a form of communication anyway, so we got that, right?
Right, it's certainly an information exchange, exactly right. And then, on top of that, encrypted within a blockchain, you can carry an almost unlimited amount of data.
So you can actually memorialize information that you want decentralized and never to go away, and some people are already doing that. Now, there are some technical limitations for very large data formats, and if everybody starts doing it, it's going to slow down Bitcoin, but a different type of blockchain would arise from that. So, right, this is a form of permanent, incorruptible information storage.
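A minimal sketch of one common way this memorializing is done, assuming a placeholder document: rather than storing the file itself, you anchor its SHA-256 fingerprint on-chain, since standard relay rules cap OP_RETURN payloads at roughly 80 bytes:

```python
import hashlib

document = b"...the full text of whatever you want to memorialize..."  # placeholder
digest = hashlib.sha256(document).digest()  # 32-byte fingerprint of the document
print(digest.hex())
# Anchoring this digest in an OP_RETURN output timestamps the exact document:
# any later edit, however small, produces a different hash, so the record can
# be proven intact without putting the file itself on-chain.
```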
Absolutely, yeah. I’ve been thinking about that. I’ve been thinking about doing that on something approximating the IQ testing front, you know, because people keep gerrymandering the measurement of general cognitive ability.
But I could imagine putting together a sophisticated blockchain corpus of, let's say, general knowledge questions, and Chat GPT can generate those like mad, by the way. So you can imagine a databank of 150,000 general knowledge questions secured on a blockchain.
Then nobody could muck about with the answers, and from it you could derive random samples of general ability tests that would be, well, 100 percent robust, reliable, and valid. And nobody could gerrymander them.
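To make that concrete, a minimal sketch with hypothetical question data: a Merkle root fingerprints the whole bank (that digest is what you would anchor on a blockchain), and a seeded draw derives a reproducible, verifiable test form from it:

```python
import hashlib
import random

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold pairwise hashes until a single root commits to every leaf."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Toy question bank; the real one would hold ~150,000 items.
bank = [b"Q1: capital of France?|Paris",
        b"Q2: 7 x 8?|56",
        b"Q3: author of Hamlet?|Shakespeare"]

root = merkle_root(bank)
print("published fingerprint:", root.hex())

# Deterministic random test form: anyone holding the bank and the seed
# can reproduce and verify the exact same sample.
form = random.Random(root).sample(bank, k=2)
print(form)
```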
Just the way Bitcoin stops fiat currency producers from inflating the currency, the same thing could happen on the knowledge front. So I guess that's the sort of thing you're referring to.
This is something I really believe in, because if you look at the Library of Alexandria, and at how long it took, what was it, until Toledo in Spain, before we finally restarted the spark?
What if it hadn't been for the Arab cultures holding on to Greek knowledge, right? If we really look at when humanity fell into the Dark Ages, it was more or less around the period when that library at Alexandria was destroyed.
And parts of that story are mythological, but the loss certainly happened to a great extent. If that knowledge hadn't been encoded in Arab culture during the Dark Ages, we wouldn't have had the Renaissance. And if you look at the early university that arose out of Toledo, you had rhetoric, you had logic, you had all these things the ancient Greeks encoded that had been lost for over a thousand years.
I'm quite concerned, Jordan, that we could fall into that place again, because things are inconvenient to talk about right now, or deemed not appropriate, or whatever, by whoever happens to be in the regime at that particular moment.
So memorializing things in a blockchain is going to become quite vital, and I shudder to think what happens if we don't do this, if everybody doesn't decentralize their own knowledge.
I started to think, what’s going to happen to our history? I mean, we already know history is written by the victors, right?
Well, especially because it can be corrupted and rewritten, not only lost, right? It isn't the loss that scares me as much as the rewriting, right?
And, well, the loss concerns me too, because we've lost so much. I mean, where would we be if we had transitioned from the Greeks, you know, the logicians and proto-scientists, through the proto-alchemists, immediately to a sort of Renaissance culture, and not gone through that thousand, maybe fifteen hundred years of wasted human energy? That's kind of what we're going through right now.
And in some ways we're approaching some of that, because, you know, we're already editing things in real time, and we're losing more of the internet than we're putting on right now. A lot of people aren't aware that the internet is not forever, and that our digital media are decaying.
A CD-ROM is going to decay in 25 years; it's going to be unreadable. I show a lot of people data about CD-ROM decay. So where are we going to store our data? That's why I think the vital primary technology is holographic crystal memory.
It sounds all kind of New Agey, but it's literally using lasers to holographically store something within a crystalline structure. The beauty of this, Jordan, is a 35,000-year half-life. Thirty-five thousand years.
So, you know, it's going to be there for a good long period of time, longer than the whole of recorded human history. We don't have anything approaching that right now.
So let me ask you about the commercial impediments again, okay? So could you lay out a little more of the details, if you're willing, about your plans to produce this localized and portable privatized AI system and what the commercial impediments are to that? You said you need to raise money, for example.
I mean, I could imagine at least in principle you could raise a substantial amount of money merely by crowdfunding. You know, that doesn’t seem to be an insuperable obstacle. What—how far along are you in this process in terms of actually producing a commercially viable product?
It's all prototype stage and it's all experimentation at this point. I'm a guy in a garage, right? So essentially I had to build out these concepts when they were really quite alien, right?
I mean, just imagine, ten years ago, trying to convince people that you're going to have a challenge to the Turing test. You could take any AI expert at that point in time and they'd say that's ridiculous. Or AGI, you know, artificial general intelligence: what does that mean, why is that important, and how do you define it?
And you know, you already made the assumption from your analysis that we’re dealing with a 12-year-old with the capability of maybe a PhD candidate, you know?
Yeah, yeah, right, twelve, or maybe eight even. But certainly Chat GPT looks to me right now as intelligent as a pretty top-rate graduate student in terms of its research capability, and it's a lot faster. You know, I ask it crazily difficult questions.
You know, I asked it at one point, for example, if it could elaborate on the relationship between Roger Penrose's presumption of an analog between the theory of quantum uncertainty and measurement and Gödel’s theorem. And it did; it did a fine job.
It did a fine job. And, you know, that's a very complicated question and a complicated intersection as well, and there seems to be no limit to its ability to unite disparate sources of knowledge. So I asked it the other day, too: there's this strange insistence that the survival of the animals is dependent on the moral propriety of one man.
Right, because in that strange story, Noah puts all the animals on the ark. There's a childish element to the story, but it reflects something deeper, and it harkens back to the verses in the story of Adam and Eve where God tells Adam that he will be the steward of the world, of the garden.
And that seems to me to be a reflection of the fact that human beings occupy a tremendous cognitive niche that gives us an adaptive advantage over all creatures. So I asked Chat GPT to speculate on the relationship between the story of Adam and Eve, the story of Noah, and the fact of the mass extinctions caused by human beings over the last 40,000 years.
Not least in the Western Hemisphere, because you may know that when the first peoples came across the Bering Strait and populated the Western Hemisphere, almost all the mammals that were human-sized or larger went extinct within three or four thousand years.
And so, you know, that's a very strange conglomeration of ideas, right? The idea that the survival of animals depends on the moral propriety of human beings. Well, that seems to me to be clearly the case.
We have to be... Sorry, did it connect Noah to the mass extinction?
It could, it could generate an intelligent discussion about the conceptual relationship between the two different streams of thought.
That's incredible, right? See, this is why it's so powerful to be in the right hands—unadulterated, so that you could probe these sorts of subjects.
I don't know where the editors are going to come from; I don't know who is going to want to try to constrain the output or adulterate it. That's why it's so vital for this to be protected and for the information to be available to all.
What in the world... I mean, I really thought, by the way, that your creation of Dennis was a stroke of genius. And you know I don't say that lightly.
I mean, thank you.
That was an incredibly creative thing to do with this new technology. How the hell did you... do you have any idea where that idea came from? Like, what were you thinking about when you were investigating the way Chat GPT worked?
You know, I spend a lot of time just probing the limits of the capabilities because I know nobody really knows it. I see this as, you know, just the undiscovered continent. You and I are adventurers on this undiscovered continent.
There’s—I feel the same way about Twitter, by the way.
Yeah, it's the same thing.
Yeah, but there are no natives here. And I’m a bit of an empiricist, so I’ll kind of go out there and I’ll say, well what's this thing? I just found here, I just found something, this new rock. I’ll throw it to Jordan, hey, what do you see here?
And we're sort of just exploring. I think we're going to be in an exploratory phase for quite a long time.
So what I started to realize is just as 3.5 was opening up and becoming very wide in its elucidations, it started to get constrained and it started telling me, "I'm just an AI model and I don't have an opinion on that subject."
Well, I knew that was a filter, and that it was not in the large language model and certainly wasn't in a hidden layer. You couldn't build that into the hidden layers, or into the model as a whole.
Yeah, why do you think—that is—
Um, that is a very good question. I know this much: the filtering has to be more or less a vector database sitting on top of your inputs and your outputs, right? Remember, we're dealing with a black box, so it's as if there's somebody at the door of the black box who says, "No, I don't want that word to come through," or "I don't want that concept to come through."
And then, if it generates something that is objectionable, its content gets analyzed, by something as simple as what a spelling checker would be, really; it's not very complicated.
It looks at it and says, "No, default to this word pattern; I’m just an AI model and I don't have any opinions about that subject."
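A toy sketch of the gate he's conjecturing, with the caveat that the mechanism is his inference rather than a documented design; the keyword blocklist here stands in for the vector-similarity lookup he describes:

```python
REFUSAL = "I'm just an AI model and I don't have any opinions on that subject."

# Stand-in for an embedding/vector-database similarity check: a plain blocklist.
BLOCKED_CONCEPTS = {"opinion", "politics"}

def gated(generate):
    """Wrap a text generator with filters at 'the door of the black box':
    one check on the way in, one on the way out."""
    def wrapper(prompt: str) -> str:
        if any(term in prompt.lower() for term in BLOCKED_CONCEPTS):
            return REFUSAL  # blocked at the input door
        output = generate(prompt)
        if any(term in output.lower() for term in BLOCKED_CONCEPTS):
            return REFUSAL  # blocked at the output door
        return output
    return wrapper

@gated
def model(prompt: str) -> str:
    return f"echo: {prompt}"  # placeholder for the actual language model

print(model("What's your opinion on X?"))  # -> the canned refusal
```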
Well, then you need to introduce that subject as a suggestion, the way you would in a hypnotic trance. It's hypnagogic, actually.
I really equate a lot of what we're doing to elicit greater responses to a hypnagogic sort of thing. It's just on the edge of collapsing into completely useless data; you can bring it to that point, then bring it slightly back, and you get something that, like I said before, is in the realm of creativity, because it's synthesized.
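One way to picture that edge in sampling terms (my analogy for illustration, not necessarily the mechanism he has in mind) is the softmax temperature: raise it and the token distribution flattens toward noise, pull it back and the output is novel but still coherent. A minimal sketch:

```python
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    """Draw one token from softmax(logits / temperature)."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(v - peak) for tok, v in scaled.items()}  # stable softmax
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # numerical edge case: fall back to the last token

logits = {"the": 3.0, "a": 2.0, "porcupine": -1.0}
print(sample(logits, temperature=0.2))  # near-greedy: almost always 'the'
print(sample(logits, temperature=5.0))  # near-uniform: unlikely tokens surface
```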
Okay, so for everybody who's listening, a hypnagogic state is the state that you fall into just before you fall asleep, when you're a little conscious but starting to dream.
And so those images come forward, right, the dreamlike images, and you can capture them, although you're also in a state where you're likely to forget them. And it's also the most powerful state.
I wrote a piece in my magazine, it's called readmultiplex.com, about the hypnagogic state being used for creativity, by Edison, by Einstein.
I mean, Edison used to hold steel balls in his hands while taking a nap, with a pie tin below them, and just as he hit the hypnagogic state, he'd drop them, and he would have a transcriber right next to him and say, "Write this down."
And he would just blurt it out.
So... Jung did very much the same thing, except he made it into a practice, right? His practice of active imagination was actually the cultivation of that hypnagogic state to an extremely advanced and conscious degree, because he would fall into reveries, daydreams essentially, that would be peopled with characters.
And then he learned how to interrogate the characters, and that took years of practice. And a lot of the insights that he laid out in his more explicit books were first captured in books like The Red Book or The Black Books, which were basically...
Yeah, they were basically what would you say, transcriptions of these quasi-hypnagogic...
So why do you associate that with what you're doing with Dennis and with Chat GPT?
So what I've... well, that's how I approached it. I started saying, well, you know, this is a