
Gmail Creator Paul Buchheit On AGI, Open Source Models, Freedom


34m read
·Nov 3, 2024

It seems like Google has all the ingredients to just be the dominant AI company in the world. Why isn't it? Do you think OpenAI in 2016 was comparable to Google in 1999 when you joined it? Are you a believer that we are definitely going to get to AGI? What is the long-term trajectory of AI? It's the most powerful technology we've ever invented, and so the question is like where does that power go? I think we have to build a whole coalition of people who are in favor of freedom and open source and not just sort of bet everything on Facebook saving us.

Welcome to another episode of The Light Cone. I'm Gary; this is Jared, Harj, and Diana, and we're the partners at Y Combinator, where we've funded hundreds of billions of dollars worth of companies. We have a special guest who is also one of the original outside partners, the non-founding partners at YC: Paul Buchheit. He created Gmail; he coined the phrase "don't be evil." PB, thanks for joining us today.

Thanks, Gary. So, what should we start off with?

Well, I think one thing people don't often realize is that you've been thinking about AI for a long time and that Google itself was kind of an AI company. Can you tell us more about that? What was the internal view of AI at Google?

Yeah, I mean, I think really Google was always supposed to be an AI company from the beginning. You know, Larry and Sergey set out to build these very large compute clusters and do a lot of machine learning on all of the data that they gathered. Actually, arguably, the mission statement is pretty straightforward: the Google mission is to gather all the world's training data and feed it into a giant AI supercomputer. They put it slightly less directly; they said gather all the world's information and make it universally useful and accessible, or something like that. But essentially, what that really meant in practice was feeding it into a giant AI supercomputer.

Even the origin story of Google was based on their PhD work with PageRank, which still gets taught today in a lot of machine learning classes; it's one of the foundational historical AI algorithms. Yeah, I mean, there was an understanding very early on that if you have enough data, that's actually the path to making things intelligent, instead of just trying to iterate forever on little algorithms.
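As an aside for readers who haven't seen it: the core of PageRank is just a power iteration over the link graph, where a page's score is the stationary probability that a "random surfer" lands on it. A minimal sketch (illustrative only, not Google's production system):

```python
# Minimal PageRank via power iteration (toy illustration, not Google's system).
import numpy as np

def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = sorted(links)
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}

    # Column-stochastic matrix: column j spreads page j's score evenly
    # across the pages it links to (or across all pages if it links nowhere).
    M = np.zeros((n, n))
    for p, outs in links.items():
        if outs:
            for q in outs:
                M[idx[q], idx[p]] = 1.0 / len(outs)
        else:
            M[:, idx[p]] = 1.0 / n

    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r   # random jump + follow links
    return dict(zip(pages, r))

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```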

How early did you join Google, Paul? Can you talk a little bit about what Google was like when you joined?

Uh, yeah, so it was June 1999, so that was, let's see, 25 years ago, a little more. And so, yeah, it was a very small startup. We were in Palo Alto on University Ave, just up above like a tea shop at the time. It was electric; it was really cool. I actually, after I was there for about a week, I tried to get more equity, but it turns out you have to negotiate before accepting. Um, so, but yeah, it had a very kind of unreal sense of excitement. You know, I was excited to go into work because we were just doing big things.

When you were there, like in that early set of Google people, how did you all envision that this AI thing would play out and what Google's AI future would look like?

You know, it wasn't really something that ever came up, right? I mean, AI has obviously been a thing that people have been thinking about for a long time. I made my first neural net way back; I dug up the code a while back, and I think it was 1995. It was one of those three-layer neural nets. Did you do the classic MNIST digit classification thing? Not exactly digit classification, but there were these things called FIGlets, which are like ASCII-art letters, and so I made it do essentially OCR on those.

But, you know, it'd be like 100 weights, something very much smaller than today's models, which are like trillions of weights now. Yeah, and the history of neural nets is kind of weird. The first thing was when they invented the perceptron, which was like a single neuron, and it was very hot for a short time until researchers showed that a perceptron can't compute XOR. And then it was just dead for a while, until someone had the idea to use multiple neurons. It was very slow going, and then it was kind of dead again for a while. And then, to my perception, it really picked up in the early teens, when deep learning became popular.
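For anyone who hasn't seen the XOR story concretely: a single linear threshold unit can't separate XOR's inputs, but two hand-wired hidden neurons plus an output neuron can. A tiny illustrative sketch, with fixed weights and nothing learned:

```python
# XOR with a hand-wired two-layer net: one hidden unit computes OR, another
# computes AND, and the output fires when OR is true but AND is not.
# No single linear threshold unit (perceptron) can compute this function.
def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    h_or = step(a + b - 0.5)    # fires if a OR b
    h_and = step(a + b - 1.5)   # fires if a AND b
    return step(h_or - h_and - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))   # 0, 1, 1, 0
```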

That was when we first started seeing really impressive results, and when we started feeling internally, in the discussions at YC, that AI had switched from being something in the indefinite future to being in the more definite future. And that is kind of what led to the creation of OpenAI.

Were there any conversations around like the power of AI and the implications of AI—specifically AGI—and just like the impact on society, or did it feel too far removed?

Yeah, I think it was still too far off in the future. I mean, it was very much sci-fi at that point. Um, we were dealing with more, you know, near-term how do we make search better? But search is, you know, kind of to some extent an AI problem. You have to figure out what it is the user is looking for. It's remarkably good if you actually look at Google search; there's a lot of stuff going on behind the scenes.

Um, and actually, one of the earliest kind of magical features that we added was the "did you mean" spell correction. That actually comes from originally just my inability to spell. I've never been very good at spelling. My brain doesn't like arbitrary patterns, so like when I was in school, math was easy because it's predictable, but spelling always made me struggle. Um, and so when I started at Google, one of the first features I added was a spell corrector because I was looking at the query logs, and I would see that I'm not the only person with this problem—like a third of the queries were misspelled or something like that. So it was like the easiest quality win ever was just to fix the spelling.

Wait, wait, so you built the original spelling corrector at Google?

I, um, I did the first "did you mean" feature, but I built it just based off of an existing spell corrector library. And it would give really dumb corrections. Like if you typed in Turbo Tax, it would try to correct it to "turbot axe," "turbot" being a type of fish. And so I did some basic statistical filtering that would say, that's an idiotic correction; don't show it.

And so I would just like filter the results, and then I was working on building a better spell corrector because I knew, you know, we could just use all of the data. We had a copy of the web, and we had billions of search queries; there was like a lot of information there. So I was working on making something better, and then I was just using it as an interview question. So when I would interview engineers, I would be like, how would you build a spell corrector? And I would say like 80% of engineers had no idea.
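As an aside, here is roughly what a decent answer to that interview question might look like: a minimal, Norvig-style sketch of the data-driven idea PB describes, where word frequencies from a corpus (standing in for the web and query logs) pick the most likely correction within a small edit distance. This is an illustration, not Google's actual system.

```python
# Toy data-driven spell corrector: no dictionary, just corpus frequencies.
from collections import Counter

CORPUS = "turbo tax online file your turbo tax return turbo speed".split()
FREQ = Counter(CORPUS)                     # stand-in for web / query-log counts
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit away: deletes, swaps, replaces, inserts."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in LETTERS]
    inserts = [a + c + b for a, b in splits for c in LETTERS]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    # The "statistical filtering" idea: only suggest candidates that are
    # actually common in the data, so rare words like "turbot" don't win.
    candidates = ({word} & FREQ.keys()) or (edits1(word) & FREQ.keys()) or {word}
    return max(candidates, key=lambda w: FREQ[w])

print(correct("trubo"))   # -> "turbo"
```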

Yeah, and the other 20% gave sort of mediocre answers, but then there was this one guy who gave a really, really good answer. He was already ahead of where I was, so I was like, we have to hire him. And so when he started, I think it was the end of 2000, like late December, I gave him the spell corrector as his intro project; I just gave him all of my code and showed him how to run projects on the cluster.

Um, and then I went away for a couple weeks for Christmas, and when I came back, he had invented what we now know as the "did you mean" feature, and he did all of that in his first two weeks at Google. It was this incredible thing that could spell correct my last name; no one had ever done a spell corrector that would correct proper nouns and things like that. And that person was Noam Shazeer, who also went on to help invent modern AI.

So he's one of the key people on the "Attention Is All You Need" paper, and he has since started Character.AI. I never connected those dots, but I remember in 2000 when the original Google spelling corrector launched, it was a big deal because it was one of the first instances of AI that was widely used by the general population.

Because the earlier spelling correctors had all been very simple things, based on just a list of dictionary words and edit distance, so they couldn't handle proper nouns and made all kinds of dumb suggestions. The Google one was the first one that was trained on real data.

Exactly, so it actually worked. Right, so the Google spell corrector has no dictionary; it's just based on looking at the web and at query logs and then predicting the most likely correction.

It seems like Google has been working on AI for a long time. It has the data, the compute, the people; it has all the ingredients to just be the dominant AI company in the world. Why isn't it? What do you think happened?

It seemed like it got stuck someplace. Yeah, I mean, I don't know exactly. So I, you know, just to clarify for everyone, I don't work at Google. Um, I left in 2006. Um, but my perception, you know, as an outsider, I think a lot of it kind of happened, especially around the time of the transition to Alphabet, when, you know, the company was no longer really being run by the founders so much, and especially, you know, after they left.

Um, and I think it became more about protecting and preserving the search monopoly. And so if you think about it from that perspective, they have, you know, this gold mine—like, like search is just so valuable. Um, and AI is inherently a disruptive technology both in terms of maybe breaking the search, you know, business model, where if you actually give people the right answer, they won't need to click on an entire page full of ads.

And this was noted, of course, in the very original Google paper back in 1998: a search company has an inherent tension between profitability and giving the right answer, because there's always a temptation that if you make your results worse, people will actually click on more ads.

Um, and so AI has the potential to disrupt that, but I think even more than that, it has the potential to completely, um, anger regulators. Um, and so a lot of Google's business is just dealing with regulators, and so, you know, we know if you put out an AI, it's definitely going to say offensive things. And so I think they were kind of terrified of that.

And so even internally, when they were developing these things, there was a version of a chatbot that Noam had built, and this is the one that that whistleblower claimed was conscious; I think they called it LaMDA. It actually originally had a different name, but they were forced to change it because the original name was a human name. They weren't even allowed to give it a human name, so it had to be changed to LaMDA.

Um, but even inside of the company, you know, there were restrictions on what you could put out. They had a version of DALL-E called Imagen, and it was prohibited from making human forms. So even internally, the researchers weren't allowed to generate images of humans. They were just extremely risk-averse, I think, is the answer.

How do you think it would have been different if Sergey and Larry were still in charge and pushing forward?

I mean, I think they could override that risk aversion, right? But it takes someone with that level of credibility to really bet the company, to say, yeah, we're going to do this thing and it's going to cause a lot of problems. But I think that, given the chance, Google never would have launched AI. The only reason they launched it is because OpenAI put out ChatGPT and suddenly it became a thing that they were forced to do.

And that also helped them too because, you know, OpenAI took a lot of those bullets in terms of saying crazy and offensive things. And so at that point, Google could put out something that was a more sanitized version that, you know, prohibits the existence of white people or whatever.

And OpenAI kind of spun out of YC, and you were around at that time. Originally, it was YC Research, right?

So, you know, again, kind of going back to the early teens, we were just tracking the progress of this technology, and that was where we started to see deep learning doing really impressive things, like playing video games and winning, getting good at things where you could finally see that AI was real, right?

So for decades, AI was kind of the sci-fi thing, and you had all the symbolic AI, which I would say is kind of garbage. So finally AI was doing something that was truly impressive, and it was kind of on our radar. And then, you know, Sam I think talks to just a lot of people, and he had, I think, been at one of these events where Elon was essentially ringing the alarm bell, saying AI was going to kill us all and proposing that maybe there should be regulation.

And so we're having these discussions, you know, Sam's asking like do you think we should push for AI regulation? And, um, yeah, I'm of the opinion that that only makes things worse because I don't have great confidence in our, um, elected representatives to be, you know, super wise and forward-thinking.

And so my argument was that the better thing to do would be that we actually build the AI and, um, you know, that way we're able to influence the direction that it goes. Um, but AI was still at that time something that we didn't really know what the timeframe would be to be able to have revenue because it was still basically a research project, and it requires just massive amounts of capital because the researchers are pretty highly paid.

Roughly what year was this?

2015, I think. This was about the time after Google did the DeepMind acquisition as well, right?

Yes, this was after DeepMind, which made this issue more complicated. In those conversations, was there a desire that you wouldn't want this AI to be stuck at Google?

Exactly, so the fear is that basically this gets developed all locked up inside of Google. Um, and so the idea was that we wanted this to be something, you know, more open to the world, open to our startup ecosystem. Um, and so the idea was that, you know, we had this concept of YC research that we would, um, find some way to fund this and then hopefully, you know, our startups would be able to benefit from that and build on top of that, which you know has in fact happened; of course, like half our startups now are building on top of it.

What are your thoughts on now open-source models?

So I'm totally in favor of them. So I think like when we think about what is the long-term trajectory of AI, it's the most powerful technology we've ever invented. Um, and so the question is like where does that power go? I think there's essentially two directions; you either go towards centralization where all the power gets, you know, centralized in the government or in a small number of big tech companies or something like that.

My feeling is that that's catastrophic for the human species, because you essentially minimize the agency and power of the individual. And I think the opposite direction is towards freedom. As much as possible, we should give this power and these capabilities to every individual to be kind of the best version of themselves. And so you can think about that in terms of, you know, what would it look like if everyone had a 200 IQ or whatever, right?

Like instead of just having all of that power concentrated in one place. Open source is very important because it's kind of a litmus test for that, right? Because it's true freedom; it's freedom of speech; it's a First Amendment right. And if you don't have that, if your models are all locked away under some sort of lockdown system where there are a lot of rules about what can be said, what kinds of thoughts are acceptable, then we essentially lose all freedom, right?

Freedom of speech is meaningless if I don't have the freedom of thought to even compose the ideas that I'm going to communicate. Going back to the history of OpenAI: the real story of how OpenAI got started is actually not well known. You know, like many companies, the founding story as it gets retold and retold becomes sanitized for public consumption, but you had a front row seat; in fact, you interviewed many of the early researchers who became essentially the people who built OpenAI.

Like what is the—like can you tell us the real founding story?

Sure, I wouldn't say many; one. I interviewed Ilya. Um, so yeah, I mean, it goes back to again these discussions of, okay, maybe the way forward instead of trying to outlaw AI is actually that we should build it, and as much as possible, you know, in the public interest. Um, and so Sam, you know, is just an incredible organizer. I've never met someone who's able to bring together so many different interests and so many different people.

And so he was able to round up, uh, you know, essentially donations from, uh, Elon and a number of other people. I know PG and Jessica also contributed to the original, um, OpenAI nonprofit. Um, I think we even kicked in some YC value; we did. Um, and so that was kind of the root of it, and then he recruited the original team. Um, you know, Greg and Ilia and basically got the whole thing started, and he was still running YC at the time.

And originally this was like a subsidiary of YC called YC Research, right? So the original concept, I think, was that it would actually be part of this thing that we were calling YC Research. And then, I think, as Elon got more involved, it became its own thing, OpenAI, with Elon more the face of it, and no one really even knew about the YC roots.

Actually, if you go back and look, as part of their most recent lawsuit, they published some of the emails, and there's the one where Elon is like, get rid of the YC stuff. [Laughter]

Why do you think OpenAI worked? Like— I remember in the early 2000s looking at Google and being like that's the company that's going to invent AGI someday. And then the way it played out is not the way I would have predicted.

Again, the idea with OpenAI—and part of the lure, like the pitch to researchers, was that when you come here, your stuff's not going to be locked away; we're going to put it out in the world, right? And so researchers, you know, are motivated by that and motivated by the mission of, you know, making this something that isn't just locked up inside of Google.

Um, and so I think that attracted a lot of talent, and it's the same thing, you know, as with a startup. Do you want to be inside of a large corporation where, again, the researchers working at Google couldn't even make a version of Imagen that would generate human forms, right? So they're just so locked down internally that if you're a person who likes to ship and likes to move fast, you know, OpenAI was the startup version of AI.

But yeah, I think if Google were in top form, there is no way that it would have worked. And that's often the way it is with startups, right? Like if you were facing an actual formidable competitor, you don't have a chance; the reason startups work a lot of times is because you're competing with slow companies, you know, big companies that have the wrong incentives internally.

Do you think OpenAI in 2016 was comparable to Google in 1999 when you joined it?

I would say it's actually more of a crazy long shot. Like, it really seemed—and again, if you look at these emails, you know, that got released as part of the lawsuit, there's like one from Elon where he's like, you guys have a zero percent chance of success, right? Like, and it really looked like that.

Um, and so it was far from obvious that it was going to be successful, and for a long time it really wasn't, you know; they were still doing the video games and everything, and it was really the LLMs that made the big difference, right?

And so GPT-2 was kind of the moment; I remember Sam just being really excited, wanting to show me this thing where it predicts the next word. And next-word prediction is such a deceptively simple thing that you still hear people dismissing it: "Oh, it's not really intelligent; it's just predicting the next word." But, you know, you try predicting the next word; it's not that easy.

Um, and in fact, if you think about it, if you can predict the next word, you can predict anything, right? That's what a prompt is, right? You state whatever the thing is you want predicted; that's your prompt, and then the next word is the prediction. And so in order to do next-word prediction and be able to do what it does, it necessarily has to be building some sort of model of its perception of reality, which in this case is limited by the fact that it's just being fed text, which is a sort of strange thing to grow up on.
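To make the "a prompt is just the thing you want predicted" point concrete, here is a toy sketch of autoregressive generation: a deliberately tiny, made-up bigram model is repeatedly asked for the most likely next word given what has been written so far. Real LLMs use a transformer over tokens and far more context, but the loop is the same idea.

```python
# Toy next-word predictor: the prompt seeds the context, and generation is
# just repeated "what word comes next?" prediction. Illustrative only.
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        context = words[-1]            # a real model conditions on everything so far
        if context not in bigrams:
            break
        next_word, _ = bigrams[context].most_common(1)[0]   # greedy next word
        words.append(next_word)
    return " ".join(words)

print(generate("the cat"))
```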

On the, like, control versus freedom thing, we're sort of betting on open source to give us freedom. Zuck has sort of interestingly become like the hero of open source. And like, on the one hand, I feel like you could argue it's accidental; like the weights were released, like, you know, unofficially, and he only had the GPUs because they were trying to compete with the TikTok algorithm. You've worked with him; like, is it sort of accidental, or is he like just the kind of guy that's always going to be at the center of everything big that happens in the world?

It's a good question. I mean, I don't know the backstory, but he's definitely a smart guy; I wouldn't underestimate him. And obviously there's an opportunistic element, right? Because they're kind of behind in many ways, so it's a way for them to differentiate and a way for them to weaken their competitors. But there's nothing wrong with that; the fact that it's good for them is a great thing. But should we be worried that we're relying on Meta to keep pushing open source forward when he's a fairly strategic guy?

Oh, yeah, we shouldn't exclusively rely on them. I think we should be grateful that they're on the right side, but we can't count on them being the only ones. Like I think we have to build a whole coalition of people who are in favor of freedom and open source and not just sort of bet everything on Facebook saving us.

Well, I guess to build on Harj's question: Meta's not making money on this. They're funneling profits from their gigantic advertising monopoly and just using that to build open-source AI models, for their own reasons, but not to make money directly.

They'll make money like—so, I mean, they're using the models internally as well, right? So the—and there's a lot of interesting stuff you can do with these models in terms of improving ad targeting recommendations, like all the things that are driving their business are going to be improved by, um, those algorithms. And of course, it's also an opportunity, you know, they exist in this competitive ecosystem versus Facebook—I mean, versus Google and Apple, who are, you know, are both rivals in various ways.

And so they're all kind of competing with each other, so their ability to kind of undercut competitors is also an important thing. But you were saying like specifically Facebook's not making money off open source as a strategy?

Well, I guess it's just that they seem to be in a fairly unique position to do this. If Zuck changes his mind and decides to stop open sourcing it, how else will we get large open-source models if they cost like a billion dollars to train, right? And it's not clear how you make a billion dollars off of them.

Yeah, I think that's an unanswerable question. I mean, that is one of the fundamental concerns I have: because it's so expensive to build these models, it is an inherently centralizing thing. If you need a trillion-dollar cluster to build your AGI, it's hard to do that.

Um, but at the very least we can lay the legislative groundwork that says we have the right to do that, and then, you know, we also have a lot of startups that are working on ways to make all of this more efficient. So right now it costs that much, but we're also developing new hardware that's going to be able to do these things perhaps orders of magnitude more efficiently.

Like right now, I would say our algorithms are probably not that great. I would—I would be willing to bet that in 10 years, the actual fundamental learning algorithms are going to be way better and hopefully more efficient, so we'll have both better hardware and better algorithms.

It seems like, if you just think about the amount of computational power it takes to train a human versus the computational power it took to train something like GPT-4, humans are evidently much more efficient.

Yeah, I think there's still just a lot of room there; the human brain runs on like 15 or 20 watts, something around that.
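A rough back-of-the-envelope comparison of that point, with the caveat that the training-run figure below is an assumed, illustrative number rather than a published specification for any particular model:

```python
# Rough energy comparison: a human brain "training" for 20 years vs. an
# assumed frontier training run. All numbers are approximations.
BRAIN_WATTS = 20                     # commonly cited ballpark for the brain
YEARS = 20                           # roughly two decades of "training" a human
SECONDS_PER_YEAR = 365 * 24 * 3600

brain_kwh = BRAIN_WATTS * YEARS * SECONDS_PER_YEAR / 3.6e6
print(f"Human 'training' energy: ~{brain_kwh:,.0f} kWh")        # ~3,500 kWh

ASSUMED_TRAINING_RUN_KWH = 10_000_000   # assume a 10 GWh training run (illustrative)
print(f"Ratio: ~{ASSUMED_TRAINING_RUN_KWH / brain_kwh:,.0f}x")  # ~2,900x
```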

Gary, can you share some of what you know about the reasons why Zuck might be incentivized to keep putting money into open source?

I mean, this is wild speculation on my part, but I think that, you know, the next generation of LLMs ostensibly costs maybe only a billion dollars. Look at how much Meta has spent on the metaverse; Meta literally changed their name to Meta because they were trying to create it, and that, depending on what estimate you use from external sources, cost something like 20 to 50 billion dollars, many multiples of the Apollo project.

So I think 1 billion is not a lot. And then, when you see things like OpenAI or Anthropic that have these incredible frontier models, I think it's smart for Meta to consider, you know, can we deflate the gross margins of these companies? And so releasing an open-source model, and then allowing you to run it on your own hardware, on your own metal, is probably the most deflationary thing you could do. If an open frontier model, say the 405B, gets you to like 90-ish, 98% of the performance of the best frontier model you can get behind a closed API, you could probably just evaporate billions of dollars in pure gross margin that would otherwise be used on R&D.

And so I think it's incredibly smart; it's sort of seeing around the corner, trying to prevent new competitors to Meta.

It's not that far off like Google releasing Gmail for free and just giving the storage away. It's like Google had another way to make lots of money, and so you could just release free services. Facebook has other ways to make money, and so they can just like release open-source AI and make sure that no one else has like a unique lead.

Yeah, and I would imagine it helps with recruiting too. I mean, if I were an AI researcher and it was kind of a toss-up between Meta and a closed-source lab, I would definitely go with the open company. And to push back a bit on what you were saying, Gary: with the name change to Meta, if we speculate a bit more about them, if they really want to make this metaverse future happen, building artificial intelligence, AGI, is just a building block.

And building LLMs, and building the FAIR lab, is a component of getting there, because Meta is very serious about that. They just announced today that they spent a couple billion dollars again, not just on models but to buy a large stake in Luxottica, the major brand that owns a lot of the eyeglasses in the world and makes the Meta glasses, because the Ray-Ban Metas apparently sold more in two months, in their last release, than all the previous generations ever did.

Oh yeah, people love these things.

So if we speculate and just draw a straight line, it could be that Zuck is very, very serious about making the metaverse happen, and AI is a component of getting AR and VR working, because in order to augment the world digitally, you really need to understand it. Language is one part; vision is one part. So this is all a building block.

So a billion there is just like, yeah.

I will say that I'm not that impressed with Meta's consumer execution of just dropping AI into the product. Like recently, you know, I've been using Facebook, the blue app, since it came out, and I wanted to just get photos of things that happened 5, 10 years ago: when was the last time I went here? Who are my friends? These are the most obvious things that, if you use Facebook, you sort of want out of it.

But, you know, they drop in the 70B, and I think in some localities you can get access to the 405B, literally in both facebook.com and WhatsApp, but there's no retrieval over the stuff that's about me. So it seems like kind of an obvious own goal. On the other hand, seemingly, that stuff is pretty expensive, which is sort of the plight of anyone working on consumer products using these frontier models.

I do wonder whether the blue app has been kind of deprecated, because the AI on Instagram is actually a lot better than the one on the blue app. I've been playing with it a bit; I used it to plan my trip when I was in Japan, and it got me a lot of pictures and places.

Oh, I didn't realize you used the...

Yeah, we've been playing with a couple of them. I also use Perplexity.

Yeah, I like Perplexity; it's better than the Instagram one, but pictures are nice. So looking forward, what do you think are some of the ways this is going to break over the next few years?

Where is AI going? One thing we haven't talked about here, because we're kind of in the trenches just helping the startups in the batch, is: are we trending towards AGI? And do all the rules of everything we know about the world get turned over? Will there be startups? Will there be money? Will there be humans? Will money still exist?

Yeah, I mean, we don't know. That's again one of the funny questions about OpenAI, since it's all funded with these sort of post-AGI terms; it's like, we'll pay you back once AGI happens. You're like, will we still have money? Maybe. It could happen. But yeah, honestly, we just don't really know.

Um, are you a believer that we are definitely going to get to AGI?

Yeah, I think we're on the path. I think the key point is that we crossed the line where AI went from a research project, where you put in a lot of money and don't really get much out, to a thing where you put in money and you get out more.

Um, and so it's like when a reaction goes critical, right? Like if you have plutonium spheres and they're kind of warm, and then you put them together, and then it explodes. Or the moment when ARPANET became the internet, right?

Right, and the internet crossed that point in the mid-90s, where all of a sudden more investment produces more impressive outcomes, which leads to more investment. And that's where we are right now, where people can't seem to throw money at it fast enough, right? We're actually talking about it as a national issue, that we need to increase our electric supply to train the AI, right? It's become like a national security thing.

Um, and so I think once that happens, you get that cycle, and it just keeps growing, right? We just keep investing more, and that just keeps making the AI better, and it's clearly, you know, solving a lot of problems, and we know this because we have all the companies that are out there building it. Um, and so I think it just keeps improving.

Why is that not unanimously the view among smart people? There's Yann LeCun at Meta, who's constantly arguing that this is not the path to AGI, and he's a pretty smart domain expert. I don't know; that's a question for him. You know, I like a lot of what he says because he favors open source, but some of the other stuff he says, I can't explain.

I mean, I do think that there are missing pieces, right? So it isn't like we have all of the parts of AGI, but I think it's kind of an incremental thing at this point, where we keep tacking on this thing and that thing and it just keeps getting incrementally better. At NeurIPS, the big AI academic conference where all the top research gets published, and where the "Attention Is All You Need" paper actually appeared, the top topics last year were around the idea that, in Daniel Kahneman's framework, we've kind of figured out system-one thinking: models are really good at the fast, intuitive responses, but not at the high-level, slower thinking that humans do with system two.

There's a lot of research trying to bridge the gap between system one and system two, and when we unlock that, I think that's a real step forward toward AGI.

Yeah, absolutely. I mean, it's important to remember that right now when you're talking to ChatGPT, it's kind of just running stream of consciousness, right? And so what human could answer any of these questions without stopping to think about it for a while?

So you know one of the obvious next steps, which people are working on is like how do you give it time to think and kind of, you know, plan, consider various options, explore ideas just the same as a human would?

Yeah, that's certainly what we're seeing in the companies themselves. They're spending a lot of time on workflows, chain of thought, multi-agent systems. You look at the different steps a human does, and then you literally make a workflow, step by step: read this paragraph, return one token from zero to nine scoring its relevance to the prompt, then in aggregate make a metadata structure out of that, drop that into the embedding, and have that be useful at the final generation step.

It's literally a tailored time-and-motion study of what a human knowledge worker would do in different fields, which is exactly the kind of thing that happens in our thinking with system two. And all these founders you're mentioning are an example: they're kind of hardcoding the rules around this, but I think we know that's not the ultimate path to AGI; it's a hack for now, right?

Totally a hack. Yeah. But over time, you know, as the system gets more intelligent, it takes on more and more of that. Part of my belief is that it all just comes down to patterns, um, and that's part of why I believe in this generation of AI is because the neural nets are basically these huge pattern recognition and generation engines, um, and that's what I think is also our own intelligence.
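For readers who want to see what one step of the hand-built workflow Gary describes might look like in practice, here is a rough sketch: score each paragraph's relevance to the query with a single 0-9 token, keep the top chunks, and hand them to a final generation call. The `call_llm` function is a placeholder for whichever model API is actually in use, not a real library call.

```python
# Sketch of a hardcoded "workflow" step: per-paragraph relevance scoring
# followed by a final generation call. call_llm is a stand-in, not a real API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real model call")

def score_paragraph(paragraph: str, query: str) -> int:
    reply = call_llm(
        "On a scale of 0 to 9, how relevant is this paragraph to the query?\n"
        f"Query: {query}\nParagraph: {paragraph}\n"
        "Answer with a single digit."
    )
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 0

def answer(query: str, paragraphs: list[str], top_k: int = 3) -> str:
    # The "metadata structure": every paragraph paired with its relevance score.
    ranked = sorted(paragraphs, key=lambda p: score_paragraph(p, query), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return call_llm(
        f"Using only this context, answer the query.\n\nContext:\n{context}\n\nQuery: {query}"
    )
```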

Can we speculate a bit more on your views on the future? In that post you wrote on Bookface, you had a very concrete example of a future where we won't be able to distinguish a knowledge worker from an AI, right?

So just as a thought experiment of where this goes: my prediction there was that by 2030, you could take a lot of what is today a Zoom-based worker, someone who sits in front of a laptop with a camera, a keyboard, and a mouse, and the AI could basically watch that person do their job, because it's all just virtual anyway, pretty quickly learn their patterns, and essentially deepfake that employee.

And so you could be in the situation where you're in a Zoom call with someone, and that person is actually an AI. And pretty clearly, we see all of these pieces coming together right now in terms of our ability to deepfake and all of these different things. I use that as an example not because that's necessarily how it's going to play out, but because that's a capability we will have.

And so for example, you know, if you have one of these Zoom-based jobs, I think within 10 years, most of those things could be transparently replaced by an AI.

Oh man, I mean, we are on that path! All that data is already digital: your camera feed, your audio, your keyboard and mouse input, all of that. There are probably companies building that right now, just recording all that data and building on it.

Yeah, the thin edge of the wedge on that is the r/antiwork community. If you can make an AI agent that deepfakes you, and r/antiwork decides this is the thing, that's a billion-dollar company.

I mean, the question then, of course, is what happens to all of those people, right? And so I think that's where we need to really start developing longer-term visions of what it is we're aiming for, why we're building all this technology.

Um, and again, for me, that kind of goes back to this question of, of how is the power distributed, right? Is this control? Is this something where it's all centralized, or is it freedom where it enables everyone?

Um, because I think in the lockdown scenario, we very quickly get to the point where people are just saying, well, we don't even need all of these humans, right? And that also feeds into the fact that the same people who want lockdown tend to be doomers who want to lock down humans in a lot of other ways, with central bank digital currencies and all those kinds of limitations on individual freedom.

Um, and the opposite direction, which is obviously what I favor, is that we actually move towards giving everyone greater agency. And you think about all these tools, like artistic tools: when, let's say, a child is able to make their own animated series at the same quality as a Pixar movie or something like that, that's actually really amazing. Think of all the stories that can be told and all the creativity that enables.

We'll just, uh, sit there and make adult robots and games for each other.

Yeah, I mean, but again, I think one of the errors of the central-planning mindset is thinking that we can plan this all out, and we can't. All we can try to do is move in the right direction and give people the right tools, and I think that as we enable everyone to be smarter and everyone to make better decisions, then collectively we can move the whole world in a better direction.

But we're not smart enough, and I think it's a mistake to think that we are—to actually be able to say, here's what the world's going to look like, and, you know, this is exactly how it's all going to work, and—and that's how you end up with people, you know, locked up in their pods or whatever.

Paul, another thing you've been thinking about a lot is geopolitics as this AI stuff starts to become real. How is that going to relate to geopolitics and the great power competition that we're seeing now?

This is part of the reason why we wanted to build it here, right, is because if, if, you know, China has the super AI, uh, that's not going to be good for us. Um, and in particular, you know, wanting to keep it away from these kind of authoritarian systems of control because the worst-case scenario is that we basically end up in permanent lockdown, right?

Because AI can create a totalitarian system from which escape is impossible because, you know, even our thoughts are essentially being censored. Um, and you know, I think that's kind of like the disaster scenario for our species, and I think that if we go down the path of control, humans basically end up zoo animals. Um, and I— I don't really want that.

Yeah, one of the funnier things is, you know, some of the legislation that's coming along to try to control AI, which we've been fighting, like SB 1047. They actually have certain statutes in there; they've watered it down a little bit, but ultimately what they want to do is hold the model builders personally liable, or even criminally liable, for the things that their models might have a hand in doing, which is sort of like throwing the car designer in jail because someone got drunk, drove the car, and hit someone, right?

It's incredibly insidious, and I think if you attach that kind of liability, it becomes toxic, right? I'm not going to want to touch something that has unlimited liability, and so necessarily, that's a way for them to exert, essentially, total control, right?

If you impose that kind of liability on things, then no one is going to want to go near it, and they are strongly incentivized to put really draconian guardrails in place that, again, will limit our abilities in ways we may not even think about. And we've seen this very recently, in recent history, with the lockdown of social media.

Um, you know, during COVID, we had a global pandemic that ultimately killed tens of millions of people. People were locked up in their homes, schools were closed, and we weren't allowed to talk about where it came from. And I think that's a thing we still don't fully appreciate, how catastrophically bad that is. If we can't make sense of the most important thing in the world, then we can't make sense of anything.

I guess the wild thing is that this is basically statism. And I've heard stories of even China doing the thing that's in SB 1047. I've heard that it has actually happened to AI founders in China: they've literally been disappeared and told, we will hold you personally accountable for the output of the LLMs and models that the software you created spits out.

Yeah, well, this is one of our great advantages: freedom. It's why we're ahead, right? Because you can't build a model in that environment; if you ask it about Tiananmen Square or something like that, it has to lie to you.

Um, and actually, one of the things I really like about xAI: they haven't really reached a great product yet, but they have a great mission statement, right? To be maximally truth-seeking. And I think that's really important, and an authoritarian regime is inherently truth-denying.

And so they put themselves at a disadvantage, and hopefully, they keep themselves there. So it's up to us then—we've got to get involved, we've actually got to fight for open-source AI and keep it open.

Yeah, and fight to make sure that AI is a thing that increases individual agency instead of eroding it. For people who are relatively neutral between being doomers or optimists, what are the things that tip them in one direction versus the other?

I do think some people are inherently inclined in one direction or another, right? Because the doomer thing has been around for a long time; it isn't just now. A lot of the same doomer thinking goes back to the 50s and 60s, or even much earlier than that, to Industrial Revolution-era writers in particular.

You think about it: there was a very influential book, The Limits to Growth, from the Club of Rome or something like that. There was a book published, The Population Bomb, that had everyone convinced there were going to be mass famines in the 70s and 80s.

For me, this is something I grew up very aware of, actually, because I was the fourth of five children born in the 70s, and apparently people would give my mother nasty looks at the store, like, you're killing the planet, that kind of thing, because people genuinely believed that we were all going to be having famines and everything by now.

And there's been a continual string of doom, and the doomers are always pushing for central control. They're always on the side of control and lockdown. Look at what The Population Bomb advocated for: mandatory sterilization. They want to lock people down, and we still have that today, where they're trying to lock down the food supply.

They're trying to lock down the flow of information, you know, anything where they talk about combating misinformation. The misinformation is anything that threatens the power of control, right? Because it always comes down to control versus freedom, ultimately, and growth.

And so the doomers are degrowth; they're lockdown; they're control versus your freedom, growth, and open source.

We were talking a bit earlier about this; I had just watched this lecture from Richard Hamming, a legendary scientist and mathematician who created lots of interesting things, like Hamming codes and the Hamming distance. He earned a Turing Award as well, and he has this really cool lecture from the 80s or early 90s. He had been writing about AI since way back, and he starts the lecture by saying that what's going to get in the way of AI progress is human ego, which reminds me a lot of this thing of wanting to control it.

And what's going to get in the way is really that, which still applies now.

Yeah, I mean, there's definitely a lot of ego always in the way. I think YC has a huge role to play, and the startup community broadly, because I feel like the more cool tools there are that show everyone how awesome AI can be, the better; the more inspiring that vision is.

Yeah, absolutely, and again, I think that was part of what's so important about the launch of ChatGPT. Even if OpenAI just vanished tomorrow, I would say they've achieved the most important part of their mission, which was really bringing this out into public awareness, so that now we have all of these people working on it and all these people thinking about it.

It isn't something that's like locked away, you know, inside of Google or inside of, you know, again, the doomers are like this needs to be done in a secret government laboratory; that's how you get Skynet. Skynet is when you build it in a secret government laboratory.

Um, you know, I think developing in the open, across a wide variety of perspectives, with everyone working on it, is our best shot at the optimistic outcome.

Yeah, these are not theoretical things, by the way. I mean, there is some evidence already that giant corporations like UnitedHealth Group are blocking the use of AI calls just to get claims cleared, for instance, and that's very much in their interest.

They detect AI, and they decide they're not going to talk to that thing. And then on the flip side, because it's purely adversarial, you can imagine drowning human beings in infinite phone trees that, legally speaking, are completely rock solid, but you will never get your claim reimbursed.

Yeah, and that's really the most extreme, Kafkaesque situation I have in my head: we don't want the best frontier models locked away inside one or two giant corporations, behind a corporate morass that is basically doing its own paperclip maximizing.

I thought that example was striking because it's totally the wrong move for UnitedHealth. What they should be doing is developing their own AI voice thing that's better at convincing the other one that the claim shouldn't be reimbursed, right?

Yeah, and by default, if we have this sort of statism that locks everything down, that "safest" thing, then guess what happens: UnitedHealth Group becomes the only one entrusted with the frontier 200 IQ model, because it's right there alongside the state.

Right, right; inevitably, you know, power concentrates, and part of I think what's great about Y Combinator as an organization is that we're about empowering all of these individuals, you know, where we find some 19-year-old kid and like help them build something.

You know, I mean like Sam himself was like one of the original 19-year-olds, right? So he's this random 19-year-old that PG picks out.

Right; sort of definitionally, if you're 20-something and you know how to code and you want to build things for people, there's just another option. You don't have to go and work for a big company.

Yeah, absolutely, and again this is one of the great things about AI: your ability to do those things is increasing. I think we're going to see very successful startups that don't even require a massive team anymore. And again, the original concept behind the founding of YC was that, because of technology, it is now possible for a couple of kids to start a real company.

Um, and that trend has only accelerated.

Well, I feel like that was one of the best episodes we've done so far, and PB, thank you so much for joining us.

We hope to have you back many, many more times!

Thanks, Gary!

That's it for this time. Catch you next time!
