
How To Build The Future: Sam Altman


31m read
·Nov 9, 2024

We said from the very beginning we were going to go after AGI at a time when, in the field, you weren't allowed to say that because it just seemed impossibly crazy. I remember a rash of criticism for you guys at that moment. We really wanted to push on that, and we were far less resourced than DeepMind and others. So we said, "Okay, they're going to try a lot of things, and we've just got to pick one and really concentrate, and that's how we can win here."

Most of the world still does not understand the value of like a fairly extreme level of conviction on one bet. That's why I'm so excited for startups right now. It is because the world is still sleeping on all this to such an astonishing degree.

We have a real treat for you today. Sam Altman, thanks for joining us.

Thanks! This is actually a reboot of your series "How to Build the Future," and so welcome back to the series that you started years ago.

I was trying to think about that. Something like that, that's wild! I'm glad it's being rebooted.

That's right. Let's talk about your newest essay on the Age of Intelligence. You know, is this the best time ever to be starting a technology company?

Let's at least say it's the best time yet. Hopefully, there'll be even better times in the future. I sort of think with each successive major technological revolution you've been able to do more than you could before, and I would expect the companies to be more amazing and impactful in everything else. So yeah, I think it's the best time yet.

Big companies have the edge when things are like moving slowly and not that dynamic. Then when something like this or mobile or the Internet or semiconductor revolution happens, or probably like back in the days of the Industrial Revolution, that was when upstarts had their edge. So yeah, it's been a while since we've had one of these, so this is pretty exciting.

In the essay, you actually say a really big thing, which is ASI, superintelligence, is actually thousands of days away—maybe. I mean, that's our hope, our guess, whatever. But that's a very wild statement. Yeah. Tell us about it.

I mean, that's big. That is really big. I can see a path where the work we are doing just keeps compounding, and the rate of progress we've made over the last three years continues for the next three, six, or nine or whatever. You know, nine years would be like 3,500 days or whatever. If we can keep this rate of improvement or even increase it, that system will be quite capable of doing a lot of things.

I think already even a system like o1 is capable of doing quite a lot of things, just from raw cognitive IQ on a closed-end, well-defined task in a certain area. I'm like, "Oh, o1 is a very smart thing," and I think we're nowhere near the limit of progress. I mean, that was an architecture shift that sort of unlocked a lot.

What I'm sort of hearing is that these things are going to compound. We could hit some unexpected wall, or we could be missing something, but it looks to us like there's a lot of compounding still in front of us.

I mean, this essay is probably the most techno-optimistic of almost anything I've seen out there. Some of the things we get to look forward to—fixing the climate, establishing a space colony, the discovery of all of physics, near-limitless intelligence, and abundant energy. I do think all of those things, and probably a lot more we can't even imagine, are maybe not that far away.

One of the things that I always have loved the most about YC is it encourages slightly implausible degrees of techno-optimism and just a belief that like, "Ah, you can figure this out." In a world that I think is sort of consistently telling people this is not going to work, you can't do this thing, you can't do that, I think the kind of early PG spirit of just encouraging founders to like think a little bit bigger is like a special thing in the world.

The abundant energy thing seems like a pretty big deal. You know, there's sort of path A and path B. You know, if we do achieve abundant energy, it seems like this is a real unlock. Almost any work, not just knowledge work, but actually like real physical work could be unlocked with robotics and with language and intelligence on tap. Like, there's a real age of abundance. I think these are like the two key inputs to everything else that we want.

There's a lot of other stuff, of course, that matters, but the unlock that would happen if we could just get truly abundant intelligence, truly abundant energy. What we'd be able to make happen in the world—like both come up with better ideas more quickly and then also make them happen in the physical world. Like, to say nothing of it'd be nice to be able to run lots of AI, and that takes energy too.

I think that would be a huge unlock. I'm not sure whether to be surprised that it's all happening at the same time, or whether this is just the natural effect of an increasing rate of technological progress. But it's certainly a very exciting time to be alive and a great time to do a startup.

Well, we sort of walked through this age of abundance—you know, maybe robots can actually manufacture or do anything, and almost all physical labor can then result in material progress, not just for the most wealthy but for everyone. You know, what happens if we don't unleash unlimited energy—if there's some physical law that prevents exactly that?

Solar plus storage is on a good enough trajectory that even if we don't get a big nuclear breakthrough, we would be like okay-ish. But for sure it seems that driving the cost of energy down, the abundance of it up, has like a very direct impact on quality of life. Eventually, we'll solve every problem in physics, so we're going to figure this out. It's just a question of when, and we deserve it.

Someday, we'll be talking not about fusion or whatever, but about the Dyson sphere, and that'll be awesome too. Yeah, this is a point in time. Whatever feels like abundant energy to us will feel like not nearly enough to our great-grandchildren, and there's a big universe out there with a lot of matter.

Yeah. I wanted to switch gears a little bit. You were mentioning Paul Graham, who brought us all together, really created Y Combinator. He likes to tell the story of how you got into YC. Actually, you were a Stanford freshman.

He said, "You know, this is the very first YC batch in 2005. You're a freshman, and we'll still be here next time. You should just wait."

And you said, "I'm a sophomore, and I'm coming." You’re widely known in our community as one of the most formidable people. Where do you think that came from, that one story?

I think I would be happy if that had drifted off into history. Well, now it's purely immortalized. Here it is. My memory of that is that I needed to reschedule an interview one day or something. PG tried to say, "Just do it next year," or whatever. And then I think I said some nicer version of "I'm a sophomore and I'm coming."

But you know, these things get slightly apocryphal. It's funny. I don't—and I say this with no false modesty—I don't like identify as a formidable person at all. In fact, I think there's a lot of ways in which I'm really not.

I do have a little bit of a just like I don't see why things have to be the way they are, and so I'm just going to like do this thing that from first principles seems like fine. I always felt a little bit weird about that.

I remember one of the things I thought was so great about YC and still that I care so much about YC is it was like a collection of the weird people who were just like "I'm just going to do my thing."

The part of this that does resonate as a like accurate self-identity thing is I do think you can just do stuff or try stuff a surprising amount of the time, and I think more of that is a good thing.

Then I think one of the things that both of us found at YC was a bunch of people who all believed that you could just do stuff for a long time. When I was trying to figure out what made YC so special, I thought that it was like okay, you have this like very amazing person telling you, "I believe in you, and you can do stuff."

As a young founder, that felt so special and inspiring. And of course, it is! But the thing that I didn't understand until much later was it was the peer group of other people doing that.

One of the biggest pieces of advice I would give to young people now is finding that peer group as early as you can was so important to me. I didn’t realize it was something that mattered. I kind of thought, "Ah, like, I’ll figure it out on my own.”

But man, being around inspiring peers is so, so valuable.

What's funny is both of us did spend time at Stanford. I actually did graduate, which I probably shouldn't have done, but I did. Stanford is great; you pursued the path of, you know, far greater return by dropping out.

That was a community that purportedly had a lot of these characteristics, but I was still beyond surprised at how much more potent it was with a room full of founders.

I was just going to say the same thing, actually. I liked Stanford a lot. But I did not feel surrounded by people that made me want to be better and more ambitious and whatever else.

To some degree, the thing you were competing with your peers on was who was going to get the internship at which investment bank—which, I'm embarrassed to say, is a trap I fell into. That's how powerful peer groups are.

It was a very easy decision to not go back to school after like seeing what the YC vibe was like.

Yeah. There's a powerful quote by Carl Jung that I really love. It's, "The world will come and ask you who you are and if you don't know, it will tell you."

It sounds like being very intentional about who you want to be and who you want to be around as early as possible is very important.

Yeah, this was definitely one of my takeaways, at least for myself, is no one is immune to peer pressure. So all you can do is like pick good peers.

Yeah, obviously, you went on to create Loopt, sold that to Green Dot, and then we ended up getting to work together at YC. Talk to me about the early days of YC Research.

One of the really cool things that you brought to YC was this experimentation. I remember you coming back to partner rooms and talking about some of the rooms that you were getting to sit in with like the Larry and Sergeys of the world, and that AI was at the tip of everyone's tongue because it felt so close, and yet it was, you know, that was 10 years ago.

The thing I always thought would be the coolest retirement job was to get to run a research lab. It wasn't specific to AI at the time when we started talking about YC Research.

Well, not only was it going to fund a bunch of different efforts—it did. I wish I could tell the story of, "Oh, it was obvious that AI was going to work and be the thing." But we tried a lot of bad things too.

Around that time, I read a few books on like the history of Xerox PARC and Bell Labs and stuff. There were a lot of people like—it was in the air of Silicon Valley at the time that we need to like have good research labs again.

I just thought it would be so cool to do, and it was sort of similar to what YC does, in that you're going to allocate capital to smart people, and sometimes it's going to work and sometimes it's not going to.

I just wanted to try it. AI for sure was having a mini moment. This was kind of late 2014, 2015, early 2016; the superintelligence discussion was happening—the book Superintelligence had just come out.

Bostrom, yep. Yeah, DeepMind had a few impressive results, but in a little bit of a different direction. You know, I had been an AI nerd forever, so I was like, "Oh, it'd be so cool to try to do something."

But it was very hard to say how. Was ImageNet out yet?

ImageNet was out, yeah—had been for a while at that point, so you could tell if it was a hot dog or not. You could sometimes!

Yeah, that was getting there. You know, how did you identify the initial people you wanted involved in, you know, YC Research and OpenAI?

I mean, Greg Brockman was early. In retrospect, it feels like this movie montage, and there were like all of these like, you know, at the beginning of like the biopic movie when you're like driving around to find the people and whatever, and they're like, "You son of a—I'm in!"

Right, right! Like, Ilya—I heard he was really smart, and then I watched some video of his, and he's obviously extremely smart—a true, true, genuine genius and visionary.

But also, he has this incredible presence, and so I watched this video of his on YouTube or something, and I was like, "I got to meet that guy."

I emailed him, and he didn't respond, so I just like went to some conference he was speaking at, and we met up. After that, we started talking a bunch.

Then, like Greg, I had known a little bit from the early Stripe days.

What was that conversation like, though? It's like, "I really like your ideas about AI, and I want to start a lab."

Yes. And one of the things that worked really well in retrospect was we said from the very beginning we were going to go after AGI, at a time when in the field you weren't allowed to say that, because it just seemed impossibly crazy and, you know, borderline irresponsible to talk about.

So that got his attention immediately. It got all of the good young people's attention and the derision—whatever that word is—of the mediocre old people.

I felt like somehow that was like a really good sign and really powerful. We were like this ragtag group of people. I mean, I was the oldest by a decent amount. I was like, I guess I was 30 then.

So you had like these people who were like "Those are these irresponsible young kids who don't know anything about anything," and they're saying these ridiculous things.

And the people who that was really appealing to, I guess, are the same kind of people who would have said, "You know, I'm a sophomore, and I'm coming," or whatever.

They were like, "Let's just do this thing. Let's take a run at it." So we kind of went around and met people one by one, and then in different configurations of groups, and it came together in fits and starts over the course of like nine months.

And then it started happening.

One of my favorite memories of all of OpenAI was that Ilya had some reason—something with Google—that we couldn't start right away. We announced in December of 2015, but we couldn't start until January of 2016.

So like January 3rd, something like that of 2016, like very early in the month, people come back from the holidays, and we go to Greg's apartment—maybe there's 10 of us, something like that—and we sit around, and it felt like we had done this monumental thing to get it started.

Everyone's like, "So what do we do now?"

What a great moment! It reminded me of when startup founders work really hard to raise a round and they think, "Oh, I accomplished this great—we did it," and then you sit down and say, "Now we got to like figure out what we're going to do." It's not time for popping champagne. That was actually the starting gun, and now we got to run.

Yeah, and you have no idea how hard the race is going to be. It took us a long time to figure out what we were going to do.

One of the things I'm really amazingly impressed by—Ilya in particular, but really all of the early people—is that although it took a lot of twists and turns to get here, the big picture of the original ideas was just so incredibly right.

So they were like up on one of those flip charts or whiteboards—I don't remember which—in Greg's apartment.

Then we went off and, you know, did some other things that worked or didn't work or whatever—some of them did—and eventually now we have this like system.

It feels very crazy and very improbable looking backwards that we went from there to here with so many detours on the way, but got where we were pointing.

Was deep learning even on that flip chart initially?

Yeah, I mean more specifically than that, like do a big unsupervised model and then solve RL was on that flip chart—one of the flip charts from a very early offsite.

I think this is right; I believe there were three goals for the effort at the time. It was like figure out how to do unsupervised learning, solve RL, and never get more than 120 people. Missed on the third one.

But right, the predictive accuracy on the first two was pretty good. So deep learning, and then the second big one sounded like scaling—like the idea that you could scale.

That was another heretical idea that people actually found even offensive. You know, I remember a rash of criticism for you guys at that moment when we started.

Yeah, the core beliefs were deep learning works and it gets better with scale. I think those were both somewhat heretical beliefs at the time.

We didn't know how predictably better it got with scale; that didn't come until a few years later. It was a hunch first, and then you got the data to show how predictable it was.

But people already knew that if you made these neural networks bigger, they got better. Yeah, that was—we were sure of that before we started.

The thing that keeps coming to mind is the religious level of belief: that it wasn't going to stop.

Everybody had some reason of, "Oh, it's not really learning; it's not really reasoning; it can't really do this; it's—you know, it's like a parlor trick."

These were like the eminent leaders of the field, and more than just saying you're wrong, they were like, "You're wrong, and this is like a bad thing to believe," or a bad thing to say.

It was, "You're going to perpetuate an AI winter; you're going to do this; you're going to do that."

And we were just like looking at these results and saying they keep getting better. Then we got the scaling results.

It just kind of breaks my intuition even now. At some point, you have to just look at the scaling laws and say: we're going to keep doing this, and this is what we think it'll do.

Also, it was starting to feel at that time like something about learning was just this emergent phenomenon that was really important.

Even if we didn't understand all of the details in practice here, which obviously we didn't and still don't, that there was something really fundamental going on.

The PG-ism for this was that we had discovered a new square in the periodic table. Yeah, and so we really wanted to push on that, and we were far less resourced than DeepMind and others.

So we said, "Okay, they're going to try a lot of things, and we've just got to pick one and really concentrate, and that's how we can win here," which is totally the right startup takeaway.

So we said, "Well, we don't know what we don't know. We do know this one thing works, so we're going to really concentrate on that."

I think some of the other efforts were trying to outsmart themselves in too many ways, and we just said, "We'll just— we'll do the thing in front of us and keep pushing on it."

Scale is this thing that I've always been interested in, kind of just the emergent properties of scale for everything—for startups, turns out for deep learning models, for a lot of other things.

I think it's a very underappreciated property and thing to go after, and I think it's—you know, when in doubt, if you have something that seems like it's getting better with scale, I think you should scale it up.

I think people want things to be—you know, less is more, but actually, more is more. More is more. We believed in that. We wanted to push on it.

I think one thing that is not maybe that well understood about OpenAI is we had just this—even when we were like pretty unknown, we had a crazy talented team of researchers.

If you have like the smartest people in the world, you can push on something really hard, yeah, and they're motivated.

And you created one of the only places in the world where they could do that. One of the stories I heard was about just getting access to compute resources—even today, it's this crazy thing.

And embedded in some of the criticism from maybe the elders of the industry at the moment was sort of that, you know, "You're going to waste a lot of resources, and somehow that's going to result in an AI winter. Like people won't give resources anymore."

It's funny. People were never sure if we were going to waste resources or if we were doing something kind of vaguely immoral by putting in too many resources.

You were supposed to spread it across lots of bets rather than like conviction on one. Most of the world still does not understand the value of like a fairly extreme level of conviction on one bet.

So we said, "Okay, we have this evidence. We believe in this; we're going to concentrate on it"—at a time when the normal thing was to spread across this bet and that bet and that bet.

Definite optimist. You're a definite optimist, and I think across like many of the successful YC startups, you see a version of that again and again.

Yeah, that sounds right. When the world gives you sort of pushback and the pushback doesn't make sense to you, you should do it anyway. Totally!

One of the many things that I'm very grateful for getting exposure to from the world of startups is how many times you see that again and again and again.

Before I think—before YC, I really had this deep belief that somewhere in the world, there were adults in charge, adults in the room, and they knew what was going on, and someone had all the answers.

If someone was pushing back on you, they probably knew what was going on. To the degree to which I now understand that, you know, to pick up the earlier phrase, you can just do stuff. You can just try stuff. No one has all the answers; there are no adults in the room that are going to magically tell you exactly what to do.

You just kind of have to like iterate quickly and find your way. That was like a big unlock in life for me to understand.

There is a difference between being high conviction just for the sake of it, and if you're wrong and you don't adapt and you don't try to be like truth-seeking, it still is really not that effective.

The thing that we tried to do was really just believe whatever the results told us and really kind of try to go do the thing in front of us. There were a lot of things that we were high conviction and wrong on.

But as soon as we realized we were wrong, we tried to like fully embrace it. Conviction is great until the moment you have data one way or the other.

And there are a lot of people who hold on to it past the moment of data. So it's iterative. It's not just, "They're wrong, and I'm right."

You have to go show your work. But there is a long moment where you have to be willing to operate without data. At that point, you do have to just sort of run on conviction.

Yeah, it sounds like there's a focusing aspect there too. Like you had to make a choice, and that choice had better— you didn't have infinite choices.

The prioritization itself was an exercise that made it much more likely for you to succeed.

I wish I could go tell you like, "Oh, we knew exactly what was going to happen, and it was, you know, we had this idea for language models from the beginning."

You know, we kind of went right at it, but obviously the story of OpenAI is that we did a lot of things that helped us develop some scientific understanding but were not on the short path.

If we knew then what we know now, we could have speedrun this whole thing to like an incredible degree.

It doesn't work that way. You don't get to be right at every guess, and so we started off with a lot of assumptions both about the direction of technology, but also what kind of company we were going to be and how we were going to be structured and how AGI was going to go and all of these things.

We have been like humbled and badly wrong many, many, many times. One of our strengths is the ability to get punched in the face and get back up and keep going.

This happens for scientific bets, for, you know, being willing to be wrong about a bunch of other things we thought about how the world was going to work and what the sort of shape of the product was going to be.

Again, we had no idea—or I at least had no idea. Maybe Alec Radford did; I had no idea that language models were going to be the thing.

You know, we started working on robots and agents playing video games, and all these other things. Then a few years later, GPT-3 happened. That was not so obvious at the time.

Yeah, it sounded like there was a key insight around positive and negative sentiment, even before GPT-1.

Oh, before? I think the work was called the "unsupervised sentiment neuron," and I think Alec did it alone, by the way. Alec is this unbelievable outlier of a human.

So he did this incredible work, which was just looking at—he noticed there was one neuron that was flipping positive or negative sentiment as it was doing these generative Amazon reviews.

I think other researchers might have hyped it up more, made a bigger deal out of it or whatever. But you know, it was Alec.

So it took people a while to I think fully internalize what a big deal it was. Then he did GPT-1, and somebody else scaled it up into GPT-2.

It was off of this insight that there was something amazing happening where—at the time, unsupervised learning was just not really working.

So he noticed this one really interesting property, which is that there was a neuron that was flipping positive or negative with sentiment, and yeah, that led to the GPT series.

I guess one of the examples I think of is Jake Heller from Casetext—not surprisingly, a YC alum—who got access to GPT-3, then 3.5, and then GPT-4.

He described getting GPT-4 as the big revelation moment, because GPT-3.5 would still hallucinate more than he could tolerate in a legal setting. Then with GPT-4, it reached the point where, if he chopped the prompts down small enough into a workflow, he could get it to do exactly what he wanted.

He built huge test cases around it and then sold that company for $650 million. So it's, you know, I think of him as like one of the first to commercialize GPT-4 in a relatively grand fashion.

I remember that conversation with him. Yeah, with GPT-4, that was one of the few moments in that thing where I was like, "Okay, we have something really great on our hands."

When we first started trying to like sell GPT-3 to founders, they would be like, "It's cool; it's doing something amazing; it's an incredible demo."

But with the possible exception of copywriting, no great businesses were built on GPT-3. Then 3.5 came along, and people—startups, YC startups in particular—started to build interesting things.

It no longer felt like we were pushing a boulder uphill. So like people actually wanted to buy the thing we were selling. Totally!

Then with 4, we very quickly got the "just how many GPUs can you give me?" moment after giving people access. So we felt like, "Okay, we've got something really good on our hands."

So you knew actually from your users—though you were totally impressed then too. We had all of these tests that we did on it, and it looked great; it could just do these things that we were all super impressed by.

Also, like when we were all just playing around with it and getting samples back, I was like, "Wow, it can do this now."

And they were, "It can rhyme, and it can tell a funny joke—a slightly funny joke—and it can do this and that."

So it felt really great, but you know, you never really know if you have a hit product on your hands until you like put it in customer hands.

Yeah, you're always too impressed with your own work!

And so we were all excited about it. We were like, "Oh, this is really quite good." But until like the test happens, it's like the real test is users.

Yeah, so there's some anxiety until that moment happens.

I wanted to switch gears a little bit. Before you created obviously one of the craziest AI labs ever, you started at 19 at YC with a company called Loopt, which was basically find-your-friends geolocation—probably what, 15 years before Apple ended up making it? Too early, in any case.

What drew you to that particular idea?

I was interested in mobile phones, and I wanted to do something that got to use mobile phones. This was when mobile was just starting. It was, you know, still a few years before the iPhone, but it was clear that carrying around computers in our pockets was somehow a very big deal.

I mean, that's hard to believe now that there was a moment when phones were actually literally just a phone.

Yeah, I mean, I try not to use it as an actual phone ever really. I still remember the first phone I got that had internet on it, and it was this horrible—like text-based, mostly text-based browser.

It was really slow; you could like, you know, do like—you could so painfully and so slowly check your email. But I was like a—I don't know, in high school—sometime in high school.

I got a phone that could do that versus like just text and call, and I was like hooked right then. Yeah, I was like, "Ah, this is not a phone; this is like a computer we can carry."

We’re stuck with a dial pad for this accident of history, but this is going to be awesome!

I mean, now you have billions of people whose phone is their first computer—they don't have a computer! Like, to us growing up—that actually was my first computer. Not physically this one, but a replica, like another copy of my first computer, which is an LC II.

Yeah! So this is what a computer was to us growing up. And the idea that you would carry this little black mirror—like kind of—we've come a long way.

Unimaginable back then!

Yeah, so you know, even then you liked technology and what was going to come was sort of in your brain.

Yeah, I was like a real—I mean, I still am a real tech nerd, but I always—that was what I spent my Friday nights thinking about.

Then one of the harder parts of it was we didn't have the App Store. The iPhone didn't exist.

You ended up being a big part of that launch.

I think a small part, but yes, we did get to be a little part of it.

It was a great experience for me to have been through because I kind of like understood what it is like to go through a platform shift and how messy the beginning is and how much little things you do can shape the direction it all goes.

I was definitely on the other side of it then—like I was watching somebody else create the platform shift. But it was a super valuable experience to get to go through and sort of just see how it happens, how quickly things change, and how you adapt through it.

What was that experience like?

You ended up selling that company—it was probably the first time you were managing people and, you know, doing enterprise sales—all of these things were useful lessons from that first experience.

I mean, it obviously was not a successful company, and so it's a very painful thing to go through, but the rate of experience and education was incredible.

Another thing that PG said or quoted somebody else saying, but always stuck with me is, "Your 20s are always an apprenticeship, but you don't know for what, and then you do your real work later."

I did learn quite a lot, and I'm very grateful for it. It was like a difficult experience, and we never found product-market fit really, and we also never like really found a way to get to escape velocity, which is just always hard to do.

There is nothing that I have ever heard of that has a higher rate of generalized learning than doing a startup. So it was great in that sense.

You know, when you were 19 and 20, you were riding the wave of some other platform shift—the shift from, you know, dumb cell phones to smartphones—and here we are many years later, and your next act—well, I guess two acts later—was literally spawning one of the major platform shifts.

We all get old!

Yeah, but that's really what's happening, you know. 18 to 20-year-olds are deciding that they could get their degree, but they'd miss the wave—because all of the great stuff, everything, is happening right now!

Do you have an intuitive sense, like speaking to even a lot of the, you know, really great billion-dollar company founders? Some of them are just not that aware of what's happening.

It's wild! I think that's why I'm so excited for startups right now is because the world is still sleeping on all of this to such an astonishing degree.

Then you have like the YC founders being like, "No, no, I'm going to like do this amazing thing and do it very quickly."

Yeah, it reminds me of when Facebook almost missed mobile because they were making web software, and they were really good at it.

And they—I mean, they had to buy Instagram. They tried to buy Snapchat outright.

Yeah, and WhatsApp! So it’s interesting; the platform shift is always built by the people who are young with no prior knowledge.

It is!

I think it's great. So there's this other aspect that's interesting: you—you and Elon and Bezos and a bunch of people out there—sort of start their journey as founders, whether it's Loopt or Zip2, in maybe pure software—it's just a different thing that they start; and then later they, you know, sort of get to level up.

Is there a path that you recommend at this point if people are thinking, "You know, I want to work on the craziest hard tech thing first?" Should they just run towards that to the extent they can? Or is there value in, you know, sort of solving the money problem first, being able to invest your own money deeply into the next thing?

It's a really interesting question. It was definitely helpful that I could just like write the early checks for OpenAI, and I think it would have been hard to get somebody else to do that at the very beginning.

Elon did it a lot at much higher scale, which I'm very grateful for. Then other people did after that, and there are other things that I've invested in that I'm really happy to have been able to support.

I don't—and I think it would have been hard to get other people to do it.

So that's great for sure, and I did, like we were talking about earlier, learn these extremely valuable lessons, but I also feel like I kind of was wasting my time, for lack of a better phrase, working on Loopt.

I don't—I definitely don't regret it. It's like all part of the tapestry of life, and I learned a ton and whatever else.

What would you have done differently, or what would you tell yourself from like now in a time capsule that would show up on your desk at Stanford when you were 19?

Well, it's hard because AI was always the thing I most wanted to do—I went to school to study AI. But at the time, I was working in the AI lab, and the one thing they told you was definitely don't work on neural networks; we tried that a long time ago and it doesn't work.

I think I could have picked a much better thing to work on than Loopt. I don't know exactly what it would have been, but it all works out; it's fine.

Yeah, there's this long history of people building more technology to help improve other people's lives, and I actually think about this a lot.

Like I think about the people that made that computer, and I don't know them. You know, many of them probably long retired, but I am so grateful to them!

Some people worked super hard to make this thing at the limits of technology. I got a copy of that on my 8th birthday, and it totally changed my life!

Yeah, and the lives of a lot of other people too. They worked super hard; they never got a thank-you from me, but I feel it very deeply, and it's really nice to get to add our brick to that long road of progress.

Yeah, it's been a great year for OpenAI—not without some drama.

Always, yeah!

We're good at that. What did you learn from, you know, sort of the ouster last fall, and how do you feel about some of the, you know, departures?

I mean, teams do evolve, but how are you doing?

Man, tired but good!

Yeah, it's—we've kind of speedrun the arc of a medium-sized, or even a pretty big, tech company—an arc that would normally take a decade—in two years.

Like ChatGPT is less than two years old, yeah, and there's like a lot of painful stuff that comes with that.

Any company, as it scales, goes through management teams at some rate, and you have to sort of—the people who are really good at the zero-to-one phase are not necessarily people that are good at the one-to-ten or the ten-to-hundred phase.

We've also kind of like changed what we're going to be. We've made plenty of mistakes along the way. We've done a few things really right, and that comes with a lot of change.

I think the goal of the company—the emerging AGI or whatever, however you want to think about it—is like just keep making the best decisions we can at every stage.

But it does lead to a lot of change. I hope that we are heading towards a period now of more calm, but I'm sure there will be other periods in the future where things are very dynamic again.

So I guess how does OpenAI actually work right now? You know, I mean the quality and like the pace that you're pushing right now, I think is like beyond world-class compared to a lot of the other, you know, really established software players who came before.

This is the first time ever where I felt like we actually know what to do. Like, I think from here to building an AGI will still take a huge amount of work. There are some known unknowns, but I think we basically know what to go do.

It'll take a while; it'll be hard, but that's tremendously exciting. I also think on the product side, there's more to figure out, but roughly we know what to shoot at and what we want to optimize for.

That's a really exciting time, and when you have that clarity, I think you can go pretty fast.

Yeah! If you're willing to say we're going to do these few things, we're going to try to do them very well, and our research path is fairly clear, our infrastructure path is fairly clear, our product path is getting clearer, you can orient around that super well.

We for a long time did not have that; we were a true research lab. And even when you know that, it's hard to act with conviction on it because there's so many other good things you would like to do.

But the degree to which you can get everybody aligned and pointed at the same thing is a significant determinant in how fast you can move.

I mean, it sounds like we went from level one to level two very recently, and that was really powerful.

And then we actually just had our o1 hackathon at YC that was so impressive.

That was super fun!

And then weirdly, one of the people who won, I think they came in third, was Camper.

And so a CAD-CAM startup—they did YC recently, in the last year or two—and during the hackathon they were able to build something that would iteratively improve an airfoil from something that wouldn't fly to literally something that had—

Yeah, that was awesome!

A competitive amount of lift! I mean, that sort of sounds like level four, which is, you know, the innovator stage.

It's very funny you say that. I had been telling people for a while that I thought the level two to level three jump was going to happen quickly, and then the level three to level four jump was somehow going to be much harder and require some medium-sized or larger new ideas.

That demo and a few others have convinced me that you can get a huge amount of innovation just by using these current models in really creative ways.

Well, yeah! I mean, it's—what's interesting is basically Camper already built sort of the underlying software for CAD-CAM.

And then, you know, language is sort of the interface to the large language model, which can then use the software—like tool use.

And then, if you combine that with the idea of code gen, that's kind of a scary, crazy idea, right?

Like not only can the large language model code, but it can create tools for itself and then compose those tools, similar to, you know, chain of thought with o1.

Yeah, I think things are going to go a lot faster than people are appreciating right now.

Well, it's an exciting time to be alive, honestly! You know, we mentioned earlier that thing about discovering all of physics.

I wanted to be a physicist. I wasn't smart enough to be a good one, had to like contribute in this other way, but the fact that somebody else—I really believe—is now going to go solve all the physics with this stuff.

I'm so excited to be alive for that!

Let's get to level four! So happy for whoever that person is!

Yeah. Do you want to talk about level three, four, and five briefly?

Yeah, so we realized that AGI had become this like badly overloaded word, and people meant all kinds of different things.

We tried to just say, "Okay, here's our best guess roughly of the order of things." You have these level one systems, which are these chatbots.

Then there'd be level two that would come, which would be these reasoners. We think we got there earlier this year with the o1 release.

Three is agents—the ability to go off and do these longer-term tasks. You know, maybe like multiple interactions with an environment, asking people for help when they need it, working together—all of that.

And I think we're going to get there faster than people expect. Then four is innovators—like that's like a scientist.

And, you know, that's the ability to go explore a not-well-understood phenomenon over a long period of time and just kind of go figure it out.

And then level five, which is slightly amorphous—do that, but at the scale of a whole company or, you know, a whole organization or whatever.

That's going to be a pretty powerful thing.

Yeah, and it feels kind of fractal, right? Like even the things you had to do to get to two sort of rhyme with level five and that you have multiple agents that then self-correct that work together.

I mean, that kind of sounds like an organization to me, just at like a very micro level.

Do you think that we'll have—I mean, you famously talked about it—I think Jake talks about it. It's like you will have companies that make, you know, billions of dollars per year and have like less than 100 employees; maybe 50, maybe 20 employees, maybe one.

It does seem like that. I don't know what to make of that other than it's a great time to be a startup founder!

Yeah, but it does feel like that's happening to me.

You know, it's like one person plus 10,000 GPUs—pretty powerful!

Sam, what advice do you have for people watching who, you know, either about to start or just started their startup?

Bet on this tech trend—bet on this trend. This is—we are not near the saturation point. The models are going to get so much better so quickly.

What you can do as a startup founder with this versus what you could do without it is so wildly different.

And the big companies—even the medium-sized companies—even the startups that are a few years old, they're already on like quarterly planning cycles.

Google is on a yearly or even decade-long planning cycle! I don't know how they even do it anymore!

But your advantage with speed and focus and conviction and the ability to react to how fast the technology is moving—that is the number one edge of a startup kind of ever, but especially right now.

So I would definitely like build something with AI, and I would definitely like take advantage of the ability to see a new thing and build something that day rather than like put it into a quarterly planning cycle.

I guess the other thing I would say is it is easy when there's a new technology platform to say, "Well, because I'm doing something with AI, the rules—the laws of business don't apply to me." I have this magic technology, and so I don't have to build a moat or a competitive edge or a better product.

It's because, you know, I'm doing AI and you're not, so that's all I need. And that's obviously not true!

But what you can get are these short-term explosions of growth by embracing a new technology more quickly than somebody else. Remembering not to fall for that—that you still have to build something of value—is, I think, a good thing to keep in mind too.

Yeah, everyone can build an absolutely incredible demo right now.

But building a business, man—that's the brass ring! The rules still apply.

You can do it faster than ever before and better than ever before, but you still have to build a business.

What are you excited about in 2025? What's to come?

AGI!

Yeah, excited for that.

What am I excited for? We're having a kid! I'm more excited for that than anything.

Congratulations! That's incredible!

Yeah, probably that—that's going to be the thing I'm most excited for, ever, in life!

Yeah, it changes your life completely, so I cannot wait!

Well, here's to building that better world for, you know, our kids and really hopefully the whole world.

This is a lot of fun! Thanks for hanging out, Sam.

Thank you!

