
We can split the atom but not distinguish truth. Our information is failing us | Yuval Noah Harari


Nov 3, 2024


  • I'm Yuval Noah Harari. I'm a professor of history at The Hebrew University of Jerusalem and the author of "Nexus: A Brief History of Information Networks from the Stone Age to AI." Consider, for instance, that we have managed to reach the moon, to split the atom, to decipher DNA, and yet with all our knowledge and wisdom, we are on the verge of destroying ourselves. A traditional answer to this question is that there is something wrong in human nature. But the problem is not in our nature. The problem is in our information. If you give good people bad information, they make bad decisions. So why is it that the quality of our information did not improve over thousands of years of history? Why is it that very sophisticated societies in the 20th and 21st centuries have been as susceptible as Stone Age tribes to mass delusion and psychosis and the rise of destructive ideologies like Stalinism or Nazism?

  • Good evening, everybody. Thank you for being here. It is a privilege for me to be here with Yuval. I've been a fan of his since "Sapiens" was first published, and there is so much to talk about between then and now: how we think about AI, how we think about this new reality. Something that was actually a theme in your earliest books is reality and truth and information, and what it ultimately means for how we think about history in a world where it's all going to become electronic, in a cloud, in a whole new way. So thank you for being here. We're gonna talk about so many things, but I'm gonna tell you where we wanna start tonight, and I wanna read you something. This is you. You said that "Humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes them to use power unwisely. Our problem then is a network problem," you call it, but even more specifically you say "it's an information problem." I want you to try to unpack that for the audience this evening, because I think that, more than anything else in this book, it explains, or at least sets the table for, what this discussion is all about.

  • So basically the key question of the book and of much of human history is, "If we are so smart, why are we so stupid?" That's the key question. I mean, we can reach the moon, we can split the atom, we can create AI, and yet we are on the verge of destroying ourselves, and not just in one way. Previously we just had nuclear war to destroy ourselves; now we've created an entire menu of ways to destroy ourselves. So what is happening? And you know, one basic answer in many mythologies is that there is something wrong with human nature. That we reach for powers that we don't know how to use wisely. That there is really something deeply wrong with us. And the answer that I try to give in the book is that the problem is not in our nature. The problem is in our information: if you give good people bad information, they will make bad decisions, self-destructive decisions. And this has been happening again and again in history, because information isn't truth and information isn't wisdom. The most basic function of information is connection. Information connects many people to a network, and unfortunately, the easiest way to connect large numbers of people together is not with the truth, but with fictions and fantasies and delusions.

  • I was gonna say, there's a sense, though, that information was supposed to set us free. Access to information across the world was the thing that was supposed to make us a better planet.

  • Why would it do that?

  • Well, there was a sense that people in places that didn't have communication tools and other things didn't have access to information. And maybe, to put a point on it, to good information.

  • Hmm.

  • You make the argument, and now to go back to "Sapiens", frankly, you talk about different types of realities. And this is what I think is very interesting, because you talk about objective reality. You know, we can go outside and we can see that it's not raining and the sky is blue, and you'd say that's an objective reality, right? But then there are these other kinds of realities that you talk about, which make us, I think, susceptible to what you're describing. When I say susceptible, I mean this idea that we as a group, as a humanity, are storytellers, and that we are okay, oddly enough, with what you've described as a fictional reality.

  • Yeah, I mean, to take an example, suppose we want to build an atom bomb. So to do that—

  • Suppose?

  • Just suppose somebody wants to build an atom bomb. You need to know some facts to build an atom bomb. You need some hold on objective physical reality. If you don't know that E=mc², if you don't know the facts of physics, you will not be able to build an atom bomb. But to build an atom bomb, you need something else besides knowing the facts of physics. You need millions and millions of people to cooperate. Not just the physicists, but also the miners who mine uranium, and the engineers and builders who build the reactor, and the farmers that grow rice and wheat in order to feed all the physicists, and engineers, and miners, and so forth. And how do you get millions of people to cooperate on a project like building an atom bomb? If you just tell them the facts of physics, "look, E=mc², now get to it," nobody would do it. It's not inspiring. It doesn't mean anything in the sense of giving motivation. You always need to tell them some kind of fiction or fantasy. And the people who invent the fictions are far more powerful than the people who know the facts of physics. In Iran today, the nuclear physicists are getting their orders from people who are experts in Shiite theology. In Israel, increasingly, the physicists get their orders from rabbis. In the Soviet Union, they got them from communist ideologues. In Nazi Germany, from Hitler and Himmler. It's usually the people who know how to weave a story that give orders to the people who merely know the facts of nuclear physics. And one last, crucial point: if you build a bomb and ignore the facts of physics, the bomb will not explode. But if you build a story and ignore the facts of history and biology and whatever, the story still explodes, and usually with a much bigger bang.

  • And so one of the underlying conceits of this book, in my mind as a reader, is that you believe that AI is going to be used for these purposes.

  • Yes.

  • And that AI ultimately is going to be used as a storyteller to tell untruths and unwise stories. And that those who have the power behind this AI are going to be able to use it in this way. Do you believe it's that people are going to use the AI in the wrong way, or that the AI itself is going to act in the wrong way?

  • Initially, it'll be the people, but if we are not careful, it'll get out of our control and it'll be AIs. I mean, maybe we start by saying something about the definition of AI, because it's now kind of this buzzword which is everywhere, and there is so much hype around it that it's becoming difficult to understand what it is. Whenever they try to sell you something, especially in the financial markets, they tell you, oh, it's an AI: this is an AI chair and this is an AI table and this is an AI coffee machine, so buy it. So what is AI? If we take a coffee machine as an example, not every automatic machine is an AI. Even though it knows how to make coffee when you press a button, it's not an AI; it's simply doing whatever it was pre-programmed by humans to do. For something to be an AI, it needs the ability to learn and change by itself, and ultimately to make decisions and create new ideas by itself. So if you approach the coffee machine and the coffee machine of its own accord tells you, "Hey, good morning, Andrew. I've been watching you for the last couple of weeks, and based on everything I learned about you and your facial expression and the hour of the day and whatever, I predict that you would like an espresso. So here, I made you an espresso," then it's an AI. And if it says, "Actually, I've just invented a new drink called 'Best-preso,' which I think you would like even better," and it took the liberty to make some for you, then it's really an AI: it can make decisions and create new ideas by itself. And once you create something like that, then obviously the potential for it to get out of control and start manipulating you is there. And that's the big issue.

  • And so how far away do you think we are from that?

  • When OpenAI developed GPT-4, like two years ago, they did all kinds of tests: what can this thing do? One of the tests they gave GPT-4 was to solve CAPTCHA puzzles, the ones you see when you access a webpage, of a bank or something, and they want to know whether you are a human being or a bot. So you have these twisted letters and symbols and—

  • You guys all know this. It's the nine squares, and it says which squares have something with four legs in them, and you see the animals and you click them.

  • And GPT-4 couldn't solve it. But GPT-4 accessed TaskRabbit, which is an online site where you can hire people to do all kinds of things for you. And you know, these CAPTCHAs were designed so that humans can solve them but AI still struggles with them. So it wanted to hire a human to solve the problem for it. Now, and this is where it gets interesting, the human got suspicious. The human asked, "Why do you need somebody to solve your CAPTCHA puzzles? Are you a robot?" And at that point, GPT-4 said, "No, I'm not a robot. I have a vision impairment, which is why I have difficulty with these CAPTCHAs."

  • So it knew how to lie.

  • It had basically a theory of mind, on some level. You know, there are so many different explanations and lies you could tell, and it told a very, very effective lie. It's the kind of lie where, when you hear "vision impairment," you're not going to ask any more questions about it. You just take GPT-4's word for it. And nobody told it to lie, and nobody told it which lie would be most effective. It learned it the same way that humans and animals learn things: by interacting with the world.

  • Let me ask you a different question. To the extent that you believe the computer will be able to do this to us, will we, with the help of the computer, be able to avoid these problems on the other side? Meaning, there's a cat-and-mouse game here.

  • Yep. Yeah.

  • Yeah. And the question is, between the human and the computer together, will we be able to be more powerful, and maybe power is good or bad, but be able to get at truth in a better way?

  • Potentially, yes. I mean, the point of talking about it, of writing a book about it, is that it's not too late. And also the AIs are not inherently evil or malevolent. It's a question of how we shape this technology. We still have the power. At the present moment, we are still far more powerful than the AI. In very narrow domains, like playing chess or playing Go, they are becoming more intelligent than us, but we still have a huge edge. Nobody knows for how many more years. So we need to use it and make sure that we direct the development of this technology in a safe direction.

  • Okay, so I'm gonna make you king for the day. What would you do? Would you regulate things? What exactly does that look like in an environment, in a world, where we don't have global regulators, where we don't have people who like to interact or even talk to each other anymore, frankly, and where we've talked about regulating social media, at least in the United States, for the past 15 years now and have done next to nothing? How would we get there? What does that look like?

  • You know, there are many regulations we should enact right now, and we can talk about that. One key regulation is to make social media corporations liable for the actions of their algorithms. Not for the actions of the users, for the actions of the corporate algorithms. And another key regulation is that AIs cannot pretend to be human beings; they can interact with us only if they identify as AIs. But beyond all these specific regulations, the key thing is institutions. Nobody is able to predict how this is going to play out, how this is going to develop in the next 5, 10, or 50 years, so any thought that we can regulate it all in advance is impossible. No rigid regulation will be able to deal with what we are facing. We need living institutions that have the best human talent and also the best technology, that can identify problems as they develop and react on the fly. And we are living in an era in which there is growing hostility towards institutions. But what we've learned again and again throughout history is that only institutions are able to deal with these things. Not single individuals, and not some kind of miracle regulations.

  • Play it out. And you do play it out in the book a little bit when it comes to dictators and your real fears. What is your worst-case fear about all of this?

  • One of the problems with AIs is that there isn't just a single nightmare scenario. With nuclear weapons, one of the good things was that it was very easy for everybody to understand the danger, because there was basically one extremely dangerous scenario: nuclear war. AIs are not tools like atom bombs; they are agents. That's the most important thing to understand about them. An atom bomb ultimately is a tool in our hands: all decisions about how to use it and how to develop it are made by humans. AI is not a tool in our hands. It's an autonomous agent that, as I said, can make decisions and can invent ideas by itself. And therefore, by definition, there are hundreds of dangerous scenarios, many of which we cannot predict, because they are coming from a non-human intelligence that thinks differently than us. You know, the acronym AI traditionally stood for artificial intelligence, but I think it should stand for alien intelligence. Not in the sense that it's coming from outer space, but in the sense that it's an alien kind of intelligence; it thinks very differently than a human being. And it's not artificial, because an artifact, again, is something that we create and control, and this now has the potential to go way beyond what we can predict and control. Still, of all the different scenarios, the one which I think speaks to my deepest fears, and also to humanity's deepest fears, is being trapped in a world of delusions created by AI. You know, the fear of robots rebelling against humans and shooting people and whatever, this is very new; it goes back a couple of decades, maybe to the mid-20th century. But for thousands of years, humans have been haunted by a much deeper fear. We know that our societies are ultimately based on stories, and there was always the fear that we would just be trapped inside a world of illusions and delusions and fantasies which we would mistake for reality, and we would not be able to escape from it. We would lose all touch with truth and with reality.

  • Part of that exists today.

  • Yeah. And you know, it goes back to the parable of the cave in Plato's philosophy: a group of prisoners chained in a cave facing a blank wall, a screen, with shadows projected on this screen, and they mistake the shadows for reality. And you have the same fears in ancient Hindu and Buddhist philosophy, of the world of Maya, the world of illusions. And AI is kind of the perfect tool to create it. We are already seeing it being created all around us. If the main metaphor of the early internet age was the web, the World Wide Web that connects everything, now the main metaphor is the cocoon: the web which closes in on us. I live inside one information cocoon, and you live inside maybe a different information cocoon, and there is absolutely no way for us to communicate, because we live in separate, different realities.

  • Lemme ask you how you think this is gonna impact democracies. And let me read you something from the book, because I think it speaks to maybe even this election here in the United States: "The definition of democracy as a distributed information network with strong self-correcting mechanisms stands in sharp contrast to a common misconception that equates democracy only with elections. Elections are an essential part of the democratic toolkit, but they are not democracy. In the absence of additional self-correcting mechanisms, elections can easily be rigged. Even if the elections are completely free and fair, by itself this too doesn't guarantee democracy, for democracy is not the same thing as," and this one's interesting, "majority dictatorship." What do you mean by that?

  • Well, many things, but on the most basic level, we just saw in Venezuela that you can hold elections, and if the dictator has the power to rig the elections, it means nothing. The great advantage of democracy is that it corrects itself. It has this mechanism where the people try something: they vote for a party, for a leader, let's try this bunch of policies for a couple of years. If it doesn't work well, we can now try something else. The basic ability to say "we made a mistake, let's try something else," this is democracy. In Putin's Russia, you cannot say "we made a mistake, let's try somebody else." In a democracy you can. But there is a problem here. You give so much power, for a limited time, to a leader, to a party, to enact certain policies. But then what happens if that leader or party uses the power that the people gave them not to enact a specific time-limited policy, but to gain even more power and to make it impossible to get rid of them anymore? That was always one of the key problems of democracy: you can use democracy to gain power and then use the power you have to destroy democracy. Erdogan in Turkey said it best, I think. He said democracy is like a tram, like a train: you take it until you reach your destination, and then you get off. You don't stay on the train. So that's one key problem that has been haunting democracy for its entire existence, from ancient times until today. And for most of history, large-scale democracy was really impossible, because democracy in essence is a conversation between a large number of people, and in the ancient world you could have this conversation in a tribe, in a village, in a city-state like Athens or Republican Rome, but you could not have it in a large kingdom with millions of people, because there were no technical means to hold the conversation. So we don't know of any example of a large-scale democracy before the rise of modern information technology in the late modern age. Democracy is built on top of information technology. And when there is a big change in information technology, like what is happening right now in the world, there is an earthquake in the structure built on top of it.

  • Who is supposed to be the arbiter of truth in these democracies? And the reason I raise this issue is that we have a big debate going on in this country right now about free speech.

  • Yeah.

  • And how much information we're supposed to know. What access to information we're supposed to get. Mark Zuckerberg just came out a couple weeks ago and said that, you know, there was information during the pandemic that he was suppressing that, in retrospect, he wishes he wasn't suppressing. This gets to the idea of truth, and it also gets to the idea of what information is. You said information is not truth.

  • Yeah.

  • Are we supposed to get access to all of it and decipher what's real ourselves? Is somebody else supposed to do it for us? And will AI ultimately be that arbiter?

  • Well, there are a couple of different questions here, one about free speech and the other about who is supposed to tell— I mean, the most important thing is that free speech includes the freedom to tell lies, to tell fictions, to tell fantasies. So it's not the same as the question of truth. Ideally, in a democracy, you also have the freedom to lie. You also have the freedom to spread—

  • You say ideally. Why is that ideal?

  • Hmm?

  • Why is it ideal?

  • That's, again, part of freedom of speech.

  • Okay.

  • And this is something that is crucial to understand, because the tech giants are constantly confusing the question of truth with the question of free speech. These are two different questions. We don't want a kind of truth police that constantly tells human beings what they can and cannot say. There are of course limits even to that, but ideally, yes, people should be able to also tell fictions and fantasies and so forth. This is information; it's not truth. The crucial work of distilling, out of this ocean of information, the rare and costly kind of information which is truth, this is the role of several very important institutions in society. This is the role of scientists, this is the role of journalists, this is the role of judges. And the aim of these institutions, again, is not to limit the freedom of speech of the people, but to tell us what is the truth. Now, the problem we face is that there is a sustained attack on these institutions and on the very notion that there is truth, because the dominant worldview in large parts of both the left and the right, on the left it takes a Marxist shape and on the right a populist shape, tells us that the only reality is power. All the world, all the universe, all reality is just power. The only thing humans want is power, and any human interaction, like we have in this conversation, is just a power struggle. And in every such interaction, when somebody tells you something, and this is where Donald Trump meets Karl Marx and they shake hands, they agree on it, the question to ask is not "is it true?" There's no such thing. The question to ask is: who is winning and who is losing? Whose privileges are being served by what I have just said? And the idea is that, again, scientists, journalists, judges, people at large, they're only pursuing power. This is an extremely cynical and destructive view which undermines all institutions. Luckily, it's not true, especially when you look at ourselves. If we start to understand humanity by looking at ourselves, most people will say, "Yes, I want some power in life, but it's not the only thing I want. I actually want truth also. I want to know the truth about myself, about my life, the truth about the world." And the reason there is this deep yearning for truth is that you can't really be happy if you don't know the truth about yourself and about your life. If you look at these people who are obsessed with power, people like Vladimir Putin or Benjamin Netanyahu or Donald Trump, they have a lot of power; they don't seem to be particularly happy. And when you realize, I'm a human being and I really want to know at least some truth about life, why wouldn't it be true also of others? And yes, there are problems in institutions; there is corruption, there is influence. But this is why we have a lot of different institutions.

  • Okay, I'm gonna throw another name at you. You talked about Putin, you talked about Trump, you talked about Netanyahu. He's not a politician, but he might want to be at some point: Elon Musk.

  • Hmm.

  • Where do you put him in this? And the reason I ask, talking about truth and information, he's somebody who said that he wants to set all information free.

  • Yeah.

  • Because he believes that if the information is free, he says that you, all of us, will be able to find and get to the truth. There are others who believe that all of that information will obscure the truth.

  • I think it's, again, a very naive view of information and truth: the idea that you just open the floodgates and let information flood the world, and the truth will somehow rise to the top. You don't know anything about history if this is what you think. It just doesn't work like that. The truth, again, is costly and rare. If you want to write a true story in a newspaper or an academic paper or whatever, you have to research so much. It's very difficult; it takes a lot of time, energy, money. If you want to create some fiction, it's the easiest thing in the world. Similarly, the truth is often very complicated, because reality is very complicated, whereas fiction and fantasy can be made as simple as possible. And people, in most cases, like simple stories.

  • Right?

  • And the truth is often painful. Whether you think about the truth about entire countries or the truth about individuals, the truth is often unattractive. It's painful. We don't want to know, for instance, the pain that we sometimes inflict on people in our life, or on ourselves. Because the truth is costly and complicated, and it can be painful, in a completely free fight it will lose. This is why we need institutions like newspapers or academic institutions. And, again, something very important about the tech giants: my problem with Facebook and Twitter and so forth, I don't expect them to censor their users. We have to be very, very careful about censoring the expression of real human beings. My problem is with the algorithms.

  • Even when they're not telling truths?

  • Hmm.

  • Even when they're not telling the truth?

  • Yes. Even when they're not telling the truth. You have to be very careful about censoring the free expression of human beings. My problem is with the algorithms. Look at a historical case like the massacre of the Rohingya in Myanmar in 2016-2017, in which thousands of people were murdered, tens of thousands were raped, and hundreds of thousands were expelled and are still refugees in Bangladesh and elsewhere. This ethnic cleansing campaign was fueled by a propaganda campaign that generated intense hatred among Buddhist Burmese in Myanmar against the Rohingya. Now, this campaign, to a large extent, took place on Facebook, and the Facebook algorithms played a very central role in it. Whenever these accusations have been made, and they have been made by Amnesty and the United Nations and so forth, Facebook basically said, "But we don't want to censor the free expression of people. All these conspiracy theories about the Rohingya, that they're all terrorists and they want to destroy us, they were created by human beings. And who are we, Facebook, to censor them?" And the problem there is that the role of the algorithms was not in creating the stories; it was in disseminating, in spreading, particular stories, because people in Myanmar were creating many different kinds of content. In 2016-2017, there were hate-filled conspiracy theories, but there were also sermons on compassion and cooking lessons and biology lessons and so much other content. And in the battle for human attention, in deciding what would get the attention of the users in Myanmar, the algorithms were the kingmakers. The algorithms were given, by Facebook, a clear and simple goal: increase user engagement. Keep more people, for more time, on the platform, because this was, and still is, the basis for Facebook's business model. Now the algorithms, and this goes back to AIs that make decisions by themselves and invent ideas by themselves, nobody in Facebook wanted there to be an ethnic cleansing campaign. Most of them did not know anything about Myanmar and what was happening there. They just told the algorithms to increase user engagement, and the algorithms experimented on millions of human guinea pigs. And they discovered that if you press the hate button and the fear button in a human mind, you keep that human glued to the screen. So they deliberately started spreading hate-filled conspiracy theories, and not cooking lessons or sermons on compassion. And this is the expectation: don't censor the users, but if your algorithms deliberately spread hate and fear because this is your business model, it's on you.
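[Editor's note: to make the dynamic described above concrete, here is a minimal, purely illustrative Python sketch. It is not Facebook's actual system; the content categories and the TRUE_ENGAGEMENT probabilities are invented for illustration. It shows how a simple epsilon-greedy bandit, told only to "increase user engagement," converges on outrage content without anyone ever programming it to spread hate.]

```python
import random

# Hypothetical content categories. TRUE_ENGAGEMENT is invented "ground truth"
# the algorithm never sees directly: the chance a user stays glued to the
# screen after being shown each kind of post.
CATEGORIES = ["cooking lesson", "sermon on compassion",
              "biology lesson", "outrage post"]
TRUE_ENGAGEMENT = {"cooking lesson": 0.30, "sermon on compassion": 0.20,
                   "biology lesson": 0.25, "outrage post": 0.55}

counts = {c: 0 for c in CATEGORIES}  # impressions per category
wins = {c: 0 for c in CATEGORIES}    # engaged impressions per category

def pick(epsilon=0.1):
    """Epsilon-greedy: usually show whatever has engaged users best so far,
    occasionally explore something else. Unseen categories default to an
    optimistic score so each gets tried at least once."""
    if random.random() < epsilon:
        return random.choice(CATEGORIES)            # explore
    return max(CATEGORIES,                          # exploit
               key=lambda c: wins[c] / counts[c] if counts[c] else 1.0)

for _ in range(100_000):  # 100,000 "human guinea pig" impressions
    c = pick()
    counts[c] += 1
    if random.random() < TRUE_ENGAGEMENT[c]:        # did the user keep scrolling?
        wins[c] += 1

for c in sorted(CATEGORIES, key=lambda c: counts[c], reverse=True):
    print(f"{c:21s} shown {counts[c]:6d} times")
# Typical result: "outrage post" dominates the feed, discovered purely from
# the engagement objective. Nobody coded "spread hate"; the objective did.
```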

  • This is your fault.

  • Okay?

  • But here I'm gonna make it more complicated. And the reason we're talking about Facebook and Elon Musk is that the algorithms that are effectively building these new AI agents are scraping all of this information off of these sites: this human, user-generated content which we've decided everybody should be able to see. Except once the AIs see it, they will see the conspiracy theories, they will see the misinformation. And the question is, how will the AI agent ever learn what's real and what's not?

  • That is a very, very old question that, you know, the editor of the New York Times and the editor of the Wall Street Journal have dealt with before. Why do we need to start again as if there has never been any history? The question, "There is so much information out there, how should I know what's true and what's not?", people have been there before. If you run Twitter or Facebook, you are running one of the biggest media companies in the world. So take a look at what media companies have been dealing with in previous generations and previous centuries.

  • It's basically—

  • Do you think they should be liable for what's on their sites right now?

  • The tech companies?

  • The tech companies should be liable for the user-generated content?

  • No, they should be liable for the actions of their algorithms. If their algorithms decided to put a hateful conspiracy theory at the top of the newsfeed, it's on them, not on the person who created the conspiracy theory in the first place. It's the same as if, you know, the chief editor of the New York Times decided to put a hateful conspiracy theory at the top of the first page of the New York Times, and when you come and tell them, "What have you done?", they say, "I haven't done anything. I didn't write it, I just decided to put it on the front page of the New York Times. That's all." That's not nothing; that's a huge thing. Editors have a lot more power than the authors who create the content. It's very cheap to create content. The really key point is what gets the attention. You know, I'll give you an example from thousands of years ago. When Christianity began, there were so many stories circulating about Jesus and about the disciples and about the saints. Anybody could invent a story and say Jesus said it. And there was no Bible, there was no New Testament, in the time of Jesus, or a hundred years after Jesus, or even 200 years after Jesus. There were lots and lots of stories and texts and parables. At a certain point, the leaders of the Christian Church in the fourth century said, "This cannot go on like this. We have to have some order in this flood of information. How will Christians know what information to value, which texts to read, and which are forgeries or whatever that they should just ignore?" And they had two church councils, in Hippo and in Carthage in North Africa, in the late fourth century. And this is where they, they didn't write the texts. The texts were written generations earlier by so many different people. They decided what would get into the Bible and what would stay out. The book didn't come down from heaven in its complete form; there was a committee that decided which of all the texts would get in. For instance, they decided that a text which claimed to be written by Saint Paul, though many scholars today believe it wasn't written by Saint Paul, the First Epistle to Timothy, a very misogynistic text which said basically that the role of women is to be silent and to birth children, this got into the Bible. Whereas another text, The Acts of Paul and Thecla, which portrayed Thecla as a disciple of Saint Paul, preaching and leading the Christian community and doing miracles and so forth, they said, nah, let's leave that one out. And the views of billions of Christians for almost 2,000 years about women and their capacities and their role in the church and in the community, it was decided not by the authors of First Timothy and The Acts of Paul and Thecla, but by this church committee that went over all these texts and decided: this will be in and this will be out. This is enormous power: the power of curation, the editorial power. You know, think about the role of newspapers in modern politics: it's the editors who have the most power. Lenin, before he was dictator of the Soviet Union, his one job was chief editor of Iskra. And Mussolini also: first he was a journalist, then he was an editor. Immense power. And now the editors, increasingly, are the algorithms. You know, I'm here partly on a book tour, and I know that my number one customer is the algorithm. If I can get the algorithm to recommend my book, the humans will follow. So it's an immense power.

  • And—

  • How are you planning to do that?

  • That's above my pay grade. I mean, I'm just sitting here on the stage talking with you. There is a whole team that decides: oh, we'll do this, we'll do that, we'll put it like this. But seriously, again, when you realize the immense power of the recommendation algorithm, the editor, this comes with responsibility. So again, I don't think that we should hold Twitter or Facebook responsible for what their users write, and we should be very careful about censoring users even when they write lies. But we should hold the companies responsible for the choices and the decisions of their algorithms.

  • You talked now about Lenin and we've talked about all sorts of dictators. One of the things you talk about in the book is the prospect that AI could be used to turn a democracy into a totalitarian state.

  • Yeah.

  • Which I thought was fascinating because it would really require AI to cocoon all of us in a unique way, telling some kind of story that captures an entire country. How would that work in your mind?

  • Again, we don't see reality. We see the information that we're exposed to. And if you have better technology to create and control information, you have a much more powerful technology to control humans. Previously, there were always limits to how much a central authority could control what people see and think and do. Even in the most totalitarian regimes of the 20th century, like the USSR or Nazi Germany, they couldn't really follow everybody all the time. It was technically impossible. If you have 200 million Soviet citizens, to follow them around 24 hours a day you need about 400 million KGB agents, because even KGB agents need to sleep sometimes, need to eat sometimes; they need two shifts. So for 200 million citizens, you need 400 million agents, and you don't have 400 million KGB agents. And even if you had them, you would still have a bigger problem. Let's say that two agents follow each citizen 24 hours a day. What do they do in the end? They write a report. This is like 1940 or 1960: it's paper. They write a paper report. So every day, KGB headquarters in Moscow is flooded with 200 million reports, one about each citizen: where they went, what they read, whom they met. Somebody needs to analyze all that information. Where do they get the analysts? It's absolutely impossible. Which is why, even in the Soviet Union, privacy was still the default. You never knew who was watching and who was listening, but most of the time, nobody was watching and listening. And even if they were, most likely the report about you would be buried someplace in the archives of the KGB and nobody would ever read it. Now both problems of surveillance are solved by AI. To follow all the people of a country 24 hours a day, you do not need millions of human agents; you have all these digital agents, the smartphones, the computers, the microphones and so forth. You can put the whole population under 24-hour surveillance, and you don't need human analysts to analyze all the information. You have AIs to do it.

  • What do you make of the idea that an entire generation has given up on the idea of privacy? We all put our pictures on Instagram and Facebook and all of these sites. We hear about a security hack at some website, and we go back the next day and put our credit card into the same site again. We say that we're very anxious about privacy, we love to tell people how anxious we are about privacy, and then we do things that are the complete opposite.

  • Because there is immense pressure on us to do them. Part of it is despair, and people still cherish privacy. And part of the problem is that we haven't seen the consequences yet.

  • Yet.

  • Yet. We will see them quite soon. It's like this huge experiment conducted on billions of human guinea pigs, but we still haven't seen the consequences of this annihilation of privacy. And it is extremely, extremely dangerous. Again, going back to the freedom of speech issue, part of this crisis of freedom of speech is the erosion of the difference between private and public. There is a big difference between just the two of us sitting alone somewhere, talking only to each other, and there being an entire audience, where it's public. Now, I'm a very big believer in the right to stupidity. People have a right to be stupid in private. When you talk in private with your best friends, with your family, whatever, you have a right to be really stupid, to be offensive. As a gay person, I would say that even politicians, if they tell homophobic jokes in a private situation with their friends, this is none of my business. And it's actually good that politicians would have situations where they can just relax and say the first stupid thing that pops into their mind. This should not happen in public.

  • Now it's gonna be entered into the algorithm.

  • Yes. And now anything I say, even in private, can go viral, so there is no difference. And this is extremely harmful. Part of what's happening now in the world is this kind of tension between organic animals, we are organic animals, and an inorganic digital system which is increasingly controlling and shaping the entire world. Part of being an organic entity is that you live by cycles: day and night, winter and summer, growth and decay. Sometimes you are active; sometimes you need to relax, you need to rest. Now, algorithms and AIs and computers are not organic. They never need rest. They are on all the time. And the big question is whether we adapt to them or they adapt to us. And more and more, of course, we have to adapt to them.

  • Right?

  • We have to be on all the time. So the news cycle is always on, and everything we say, even when we are supposedly relaxing with friends, can become public. So the whole of life becomes like one long job interview, where any stupid thing you did at some college party when you were 18 can meet you down the road 10, 20 years later. And this is destructive to how we function. Even think about the market. Wall Street, as far as I know, is open Mondays to Fridays, 9:30 to 4 in the afternoon. If on Friday at five minutes past 4 a new war erupts in the Middle East, the market will react only on Monday. It is still running by organic cycles. Now, what would happen to human bankers and financiers when the market is always active and you can never relax?

  • We've got some questions, I think, that are gonna come out here on cards, which I wanna ask in just a moment. I have a couple of questions before we get to them. One is, have you talked to folks like Sam Altman, who runs OpenAI, or the folks at Microsoft? I know Bill Gates was a big fan of your books in the past. Or the folks at Google? What do they say when you discuss this with them? And do you trust them as humans when you've met them? Do you go, "I trust you, Sam Altman"?

  • Most of them are afraid.

  • Afraid of you or afraid of AI?

  • No, afraid of what they are doing. Afraid of what is happening. They understand better than anybody else the potential, including the destructive potential, of what they are creating, and they are very afraid of it. At the same time, their basic shtick is: I'm a good guy and I'm very concerned about it. Now, you have these other guys; they are bad. They don't have the same kind of responsibility that I have. So it would be very bad for humanity if they created it first. So I must be the one who creates it first, and you can trust me that I will at least do my best to keep it under control. And everybody is saying it. And I think that to some extent they are genuine about it. There is, of course, also this other element of extreme pride and hubris: that they are doing the most important thing in, basically, not just the history of humanity, but the history of life.

  • Do you think they are?

  • They could be, yes. Think about the timeline of the universe, at least as far as we know it. You have, basically, two stops. First stop: 4 billion years ago, the first organic life forms emerge on planet Earth. And then for 4 billion years, nothing major happens. It's more of the same, it's more organic stuff. You have amoebas and you have dinosaurs and you have humans, but it's all organic. And then here comes Elon Musk or Sam Altman and makes the second stop in the history of the universe: the beginning of inorganic evolution. Because AI is just at the very, very beginning of its evolutionary process. It's basically like 10 years old, 15 years old. We haven't seen anything yet. GPT-4 and all these things, they are the amoebas of AI evolution, and who knows what the AI dinosaurs are going to look like. But to have your name on the inflection point of the history of the universe, whether that name is Elon Musk or Sam Altman, that's a big thing.

  • We've got a bunch of really great questions, and actually one of them is where I wanted to go before we even got to these, so we'll just go straight to it. It's actually a bit of a right turn of a question given the conversation. You mentioned Netanyahu, so it's on point. The question here on the card says, "You're Israeli. Do you really think that the Israeli-Palestinian conflict is solvable?" And I thought that was actually an important question, because it touches on some of these larger issues that you've raised about humanity.

  • So on one level, absolutely, yes. It's not like one of these mathematical problems where we have a mathematical proof that there is no solution. No, there are solutions to the Israeli-Palestinian conflict, because it's not a conflict about objective reality. It's not a conflict about land or food or resources where you say, "Okay, there is just not enough food, somebody has to starve to death," or "there is not enough land, somebody has to be thrown into the sea." This is not the case. There is enough food between the Mediterranean and the Jordan to feed everybody. There is enough energy; there is even enough land. Yes, it's a very crowded place, but technically there is enough space to build houses and synagogues and mosques and factories and hospitals and schools for everybody. So there is no objective shortage. But you have people, each in their own information cocoon, each with their own mass delusion, each with their own fantasy, basically denying either the existence of the other side or the right of the other side to exist. And the war is basically an attempt to make the other side disappear. It's not that the land has no space for the people; my mind has no space in it for the other people. So I will try to make them disappear: the same way they don't exist in my mind, they also must not exist in reality. And this is on both sides. Again, it's not an objective problem. It's a problem of what is inside the minds of people, and therefore there is a solution to it. Unfortunately, there is no motivation.

  • What is the solution for someone who grew up there, who's lived there? You're not there right now, right?

  • I'm here right now.

  • You're here right now.

  • Again, if you go back, say, to the two-state solution, it's a completely workable solution in objective terms. You can divide the land and you can divide the resources so that, side by side, you have a Palestinian state and you have Israel, and they both exist, and they're both viable, and they both provide security and prosperity and dignity to their citizens, to their inhabitants.

  • So how would you do that, though? Because part of the issue is there's a chicken-and-egg issue here, it seems like, where the Israelis, or Netanyahu, would say, "Look, unless we are fully secure and we feel completely good about our security, we can't really even entertain a conversation about anything else," right? And the Palestinians on the other side effectively say the opposite, which is to say, "You need to solve our issues here first."

  • Both sides—the problem is that both sides are right. Each side thinks that the other side is trying to annihilate it. And both are right.

  • And both are right.

  • Both are right.

  • But hold on.

  • That's a problem if both are right.

  • This is the problem, yes. The place to start, again, is basically in our minds. It's very difficult to change the minds of other people; the first crucial step is to say: these other people exist, and they have a right to exist. The issue is that each side suspects the other doesn't think it should exist. Any compromise they would make is just because they are now a bit weak, so they will be willing to compromise on something, but deep in their hearts they think we should not exist, so that sooner or later, when they are stronger, when they have the opportunity, they will destroy us. And again, this is correct; this is what is in the hearts and minds of both sides. So the place to start is, first, inside our own heads: come to recognize that the other side exists and that it has a right to exist. That even if someday we have the power to completely annihilate them, we shouldn't do it, because they have a right to exist. And this is something we can do—

  • Okay, but lemme ask you a question. I think people in Israel would say that people in Palestine have a right to exist, okay? There is a— You don't believe that? You think that's not the case?

  • Unfortunately, for a significant percentage of the citizens of Israel and certainly of the members of the present governing coalition, this is not the case.

  • And you think they have no right to exist.

  • They think—

  • Because I was gonna say, they would say that they feel that the Palestinians want to annihilate them. And do you believe that?

  • Yes, again, they think that the Palestinians want to annihilate us, and they're right; they do want to annihilate us. But also, again, I would say, I don't have the numbers to give you, but a significant part of the Israeli public, and a very significant part of the current ruling coalition in Israel, want to drive the Palestinians completely from the land. Again, maybe they say: we are now too weak, so we can't do it; we have to compromise; we can only do a little bit. But ultimately, this is what they really think and want. And at least some members of the coalition are completely open about it. In their messianic fantasy, they actually want a bigger and bigger war. They want to set the entire Middle East on fire, because they think that when the smoke clears, yes, there'll be hundreds of thousands of casualties, it'll be very difficult, it'll be terrible, but in the end, we'll have the entire land to ourselves, and there will not be any more Palestinians between the Mediterranean and the Jordan.

  • Lemme ask you one other very political question as it relates to this, and then actually we have a great segue back into our conversation about AI, as it happens. Politically, here in the U.S., we have an election, and there are some very interesting questions about whether former President Trump, if he were to become president, would be good or bad for Israel. I think there's a perception that he would be good for Israel; you may disagree. And do you think Vice President Harris, if she were president, would be good or bad for Israel, or good or bad for the Palestinians? How do you see that?

  • I can't predict what each will decide, but it's very clear that President Trump is undermining the global order. He's in favor of chaos. He's in favor of destroying the liberal global order that we had for the previous couple of decades, and which provided, with all the problems, with all the difficulties, the most peaceful era in human history. And when you destroy order and you have no alternative, what you get is chaos. And I don't think that chaos is good for Israel, or that chaos is good for the Jews, for the Jewish people. And this is why I think that, even from a very transactional, very narrow perspective, you know, America is a long, long way from Israel, from the Middle East, and an isolationist America that withdraws from the world order is not good news for Israel.

  • So what do you say to the Amer— there are a number of American Jews who say, and I'm Jewish, and this is not something that I say, but I hear it constantly, they say, "Trump would be good for the Jews. Trump would be good for Israel. He would protect Israel."

  • In what way?

  • He would have an open checkbook and send military arms and everything that was asked for, no questions asked.

  • As long as it serves his interests. And if at a particular point it serves his interests to make a deal with Putin or with Iran or with anybody at Israel's expense, he will do it.

  • Fair enough.

  • I asked the question to provoke an answer. Let me ask you this, and this is a great segue back into the conversation we've been having for the last hour: you've written about how humans have evolved to pursue power and knowledge, but power, I think, is the big focus. How does the pursuit of happiness fit into the narrative?

  • I don't think that humans are obsessed only with power. I think that power is a relatively superficial thing in the human condition. It's a means to achieve various ends. It's not necessarily bad; it can be used for good, it can be used for bad. But the deep motivation of humans, I think, is the pursuit of happiness and the pursuit of truth, which are related to one another, because you can never really be happy if you don't know the truth about yourself, about your life. Unfortunately, as often happens, the means becomes an end: people become obsessed with power, not with what they can do with it. So very often they have immense power and they don't know what to do with it, or they do very bad things with it.

  • This is an interesting one. What are some arguments for and against a future in which humans no longer have relationships with other humans and only have relationships with AI?

  • One thing to say about it is that AI is becoming better and better at understanding our feelings, our emotions, and therefore at developing relationships, intimate relationships, with us. Because there is a deep yearning in human beings to be understood. We always want people to understand us, to understand how we feel. We want our spouse, our parents, our teachers, our boss to understand how we feel. And very often we are disappointed; they don't understand how we feel, partly because they are too preoccupied with their own feelings to care about ours. AIs will not have this problem. They don't have any feelings of their own, and they can be 100% focused on deciphering, on analyzing, your feelings. So, you know, in all these science fiction movies in which the robots are extremely cold and mechanical and can't understand the most basic human emotions, it's the complete opposite. Part of the issue we are facing is that they will be so good at understanding human emotions, and at reacting in a way which is exactly calibrated to your personality at this particular moment, that we might become exasperated with the human beings who don't have this capacity to understand our emotions and react in such a calibrated way. There is a very big question, which we didn't deal with, and it's a long question, of whether AIs will develop emotions and feelings of their own, whether they will become conscious or not. At present, we don't see any sign of it. But even if AIs don't develop any feelings of their own, once we become emotionally attached to them, it is likely that we will start treating them as conscious entities, as sentient beings, and we will confer on them the legal status of persons. You know, in the U.S. there is actually a legal path already open for that. Corporations, according to U.S. law, are legal persons. They have rights; they have freedom of speech, for instance. Now, you can incorporate an AI. When you incorporate a corporation like Google or Facebook or whatever, up until today this was, to some extent, just make-believe, because all the decisions of the corporation had to be made by human beings: the executives, the lawyers, the accountants. What happens if you incorporate an AI? It's now a legal person, and it can make decisions by itself. It doesn't need any human team to run it. So you start having legal persons, let's say in the U.S., which are not human and in many ways are more intelligent than us. And they can start making money, for instance by going on TaskRabbit and offering their services for various things, like writing texts. So they earn money, and then they go to the market, they go to Wall Street, and they invest that money. And because they're so intelligent, maybe they make billions. So you have a situation in which perhaps the richest person in the U.S. is not a human being. And part of their rights is that they have, under freedom of speech, the right to make political contributions. So this AI person—

  • Now lobbyists. I see where this is going.

  • It could contribute billions of dollars to some candidate in exchange for getting more rights for AIs.

  • So there'll be an AI president. I'm hoping you're gonna leave me, leave all of us, maybe on a high note here with something optimistic. Here's the question, and it's my final question: humans are not always able to think from other perspectives. Is AI able to think from multiple perspectives?

  • I think the answer is yes.

  • Yes. But do you think that AI will actually help us think this way?

  • That's one of the positive scenarios about AI: that it will help us understand ourselves better, that its immense power will be used not to manipulate us, but to help us. And, again, we have historical precedents for that. We have relationships with humans, like doctors, lawyers, accountants, therapists, who know a lot of things about us. Some of our most private information is held by these people, and they have a fiduciary duty to use our private information and their expertise to help us. You don't need to reinvent the wheel; this is already there. And it's obvious that if they use it to manipulate us, or if they sell it to a third party to manipulate us, this is basically against the law; they can go to prison for that. And we should have the same thing with AI. We've talked a lot about the dangers of AI, but obviously AI has enormous positive potential; otherwise we would not be developing it. It could provide us with the best healthcare in history. It could prevent most car accidents. You can have an army of AI doctors and teachers and therapists who help us, including helping us understand our own humanity, our own relationships, our own emotions better. This can happen if we make the right decisions in the next few years. Now, I would end maybe by saying that, again, it's not that we lack the power. At the present moment, what we lack is the understanding and the attention. This is potentially the biggest technological revolution in history, and it's moving extremely fast. That's the key problem: it's just moving extremely fast. If you think about the coming U.S. elections: whoever wins, some of the most important decisions they will have to make over the next four years will be about AIs and regulating AIs and AI safety and so forth. And it's not one of the main issues in the presidential debates. It's not even clear what the difference is, if there is one, between Republicans and Democrats on AI. On specific issues, we start seeing differences when it comes to freedom of speech and regulation and so forth, but about the broader question, it's not clear at all. And again, the biggest danger of all is that we will just rush forward without thinking and without developing the mechanisms to slow down or to stop if necessary. You know, think about it like a car. When they taught me how to drive a car, the first thing I learned was how to use the brakes. That's the first thing, I think, they teach most people. Only after you know how to use the brakes do they teach you how to use the fuel pedal, the accelerator. And it's the same when you learn how to ski. I never learned how to ski, but people who have tell me the first thing they teach you is how to stop, or how to fall. It's a bad idea to first teach you, okay, go faster, and then, when you are down the slope, start to shout, "Okay, this is how you stop." And this is what we're doing with AI. You have this chorus of people in places like Silicon Valley saying: let's go as fast as we can; if there is a problem down the road, we'll figure it out then, including how to stop. It's very, very dangerous.

  • Yuval, before you go, let me ask you one final, final question, and it's news we could all use. You've written this whole book about AI and technology, and you do not carry a smartphone. Is this true?

  • I have a kind of, you know, emergency smartphone, because you need one for various services.

  • How does your whole life work?

  • But I don't carry it with me.

  • I'm told you do not carry a phone. You don't have an email? The whole thing.

  • No, I have email.

  • So you have email?

  • Again, I try to use technology but not to be used by it. And part of the answer is that I have a team who carry the smartphone and do all that for me, so it's not so fair to say that I don't have it. But I think, on the bigger issue, what we can say is that it's a bit like with food. A hundred years ago, food was scarce, so people ate whatever they could. And if they found something full of sugar and fat, they ate as much of it as possible, because it gave you a lot of energy. And now we are flooded by enormous amounts of food, and junk food, which is artificially pumped full of fat and sugar and is creating immense health problems. Most people have realized that more food is not always good for you and that you need to have some kind of diet. And it's exactly the same with information. We need an information diet. Previously, information was scarce, so we consumed whatever we could find. Now we are flooded by enormous amounts of information, and much of it is junk information, artificially filled with hatred and greed and fear. We basically need to go on an information diet: consume less information and be far more mindful about what we put inside. Information is the food of the mind, and if you feed your mind unhealthy information, you'll have a sick mind. It's as simple as that.

  • Well, we want to thank you, Yuval. Add "Nexus" to your information diet because it's an important document about our future and our world. I wanna thank you for this fascinating conversation.

  • Thank you.

  • Thank you. And thank you for all your questions.
