Brave New Words - Greg Brockman & Sal Khan
Hi everyone! It's Sal here from Khan Academy, and as some of you all know, I have released my second book, Brave New Words, about the future of AI in education and work. It's available wherever you might buy your books. But as part of the research for that book, I did some interviews with some fascinating people, which you are about to watch. So today, we have Greg Brockman. I'm very excited to have you here, Greg. For those of you who don't know, Greg is the co-founder, chairman, and president of an organization that some folks are talking about these days called OpenAI. We at Khan Academy have done a little bit with OpenAI as well, and this is very exciting because this is the start of a new podcast, a new live stream, a new thing we're doing called Brave New Words, which is also a book that we're working on as well. So Greg, thanks so much for joining us.
Greg: Thank you for having me.
Sal: So let's just start and make sure everyone watching and listening has a shared understanding. Tell us a little bit about OpenAI and how you decided to start it, and maybe a little bit of where it's gone since then.
Greg: Yeah, so you know, for me, I first got excited about the idea of AI when I read Alan Turing's 1950 paper on the Turing test.
Have you read it?
Sal: You know, I haven't read the paper. I'm very familiar with it, but I’ve not actually read the paper.
Greg: Okay, so I recommend reading it because, you know, the first half is all about the Turing test, but the second half is about how you're going to solve it. He said, "Look, you're never going to program an answer to this thing; it's just too hard." But you could build a machine that could learn.
Sal: Just to pause you, for folks who don't know, what was the Turing test, just to make sure everyone's on the same page?
Greg: Yeah, the Turing test is the idea of: could you distinguish between a machine and a person by having a judge who talks to a person and talks to a machine? And if they're indistinguishable, you could say that machine is really intelligent.
And so again, you have to be able to not just sort of chat. You have to be able to, in his paper, say, "Okay, the judge asks questions about chess, and you have to answer chess questions." And so you realize that language captures so much of the human experience and so much of what it means to be intelligent. And, you know, one of my co-founders says that Turing was Turing for a reason; he's truly, truly brilliant. One of his great insights was: look, you're not going to program in the answer; you're going to have to build a machine that can learn to accomplish tasks in ways we cannot specify ourselves. And that, for me, was the moment it clicked.
But of course, this was like 2008, so nothing worked in AI, and it wasn't for another seven years that, looking from the outside at deep learning and the fact that computers had kind of gotten fast enough that they were now commercially useful, we started to realize that, look, maybe there's actually a shot at this dream. You know, I and others came together because we wanted to see if we could steer AI technology in a positive way, wanted to see if we could actually build the kind of machine Turing had talked about, a human-level intelligence, what we call an AGI, and to have that be something that benefits all of humanity. So that's the mission of OpenAI.
So we push forward the technology; we want to actually have it be beneficial, have it be safe, and distribute those benefits to everyone. We've been working on it for eight years now, I think, and in that time, you know, we keep doing the same activity: we build a bigger neural network, we make it more capable, we make it more aligned, we make it safer. Over the past couple of years, we've also started to deploy it and make it useful, and that's what I think is so interesting about this technology. It's not like fusion, where you've either got it or you don't. At each step along the way, you can actually have an impact, and you can actually start benefiting people. So I think that's good. You know, you get to see the benefits of what you've built and actually learn how to mitigate all the downsides, and so I think that's the stage that we're in.
Sal: Yeah, there's a bunch in that. You know, based on when you were working on AI stuff, it sounds like I'm a little more than ten years older than you. In the mid-to-late '90s, when I was in college, I was working with some of the early pioneers, and some of them were my professors. And it was the same thing: I was super excited about artificial intelligence, I'd read the science fiction, and I thought, "Oh, this is going to take forever, if it ever happens." And to your point, you experienced the same thing about ten or twelve years later. And then even when y'all were thinking about starting OpenAI, this thing you mentioned, AGI, artificial general intelligence: I think many thoughtful people, myself included if I'm honest, and I tend to run pretty optimistic about things, would have thought it was a little bit delusional to actually start working on it. Not even as a research lab, and I guess it is something of a research lab, but to actually start an organization with that as its key focus. Did you think that? Were people telling you that? How did you decide to do it anyway?
Greg: Oh yeah, I mean, we got plenty of very negative feedback from the community. And the thing that I actually found most interesting was that we were talking about AI safety before it was cool. In fact, I remember talking to a candidate who worked at a big AI lab who said, "Yeah, I think that AGI safety is like the most important problem. I think it really matters, but if you ever quote me on that, I will deny everything." And I think that that is what set us apart: we were really willing to think about where it goes and act on it.
We're not alone; there are other people in the field, other pioneers who had been pushing this kind of technology forward and who, I think, were very visionary in thinking many years ahead, even ahead of us, about getting started on this problem. But I don't know if it's just something in our DNA or what. For me personally, I felt like I'd spent five years building a technology company, and I was ready to really sign up for a problem that I could work on for the rest of my life. And it was so clear to me that if I could just make a little dent, even if we're talking 300 years later when AGI comes along, it would be worth it, right? And, you know, there's a chance that it would be even sooner. So I think that's the framing: the timeline, you can quibble with it, you can debate it, you can talk about these things, but fundamentally, signing up to have an impact on the most important technology that humans will ever create, that's something I can get behind.
Sal: What's notable, and I guess this is news to a lot of folks because OpenAI has been in the news a lot lately, is that you have these GPT models, these generative pre-trained transformers. This technology is, you know, a flavor of neural nets, which have been in the AI community for some time. And you all had GPT-1, then GPT-2, and people said, "Oh, this is interesting; it can write, but it doesn't really have a good handle on knowledge." GPT-3, even better! And then ChatGPT comes out; it's an interface, and it starts to blow people's minds a little bit.
And then obviously, we announced with y'all that we've been working together on GPT-4 and on using it in Khan Academy. But y'all have obviously been doing all of the work to develop it. This notion of AGI, artificial general intelligence, does not seem so outlandish anymore. You know, even some of what I think many folks have gotten GPT-4 to do starts to feel kind of like that. I guess my first question is: what do you think y'all are doing that led you to get to this place? There are many, many folks working in the field, many larger organizations with more resources. Do you think it's something you're doing differently or how you're approaching it? Or, yeah, what do you think is special?
Greg: Yeah, I think it's a classic question. I mean, I do think we are part of a much larger trend, right? This is like a much larger history. You look back at all the compute curves for 70 years, we had this exponential growth. You know, in like 2000, Ray Kurzweil was saying, “Hey, just look at the compute; that's going to kind of tell you what's going to be possible. That's the fuel for progress.” And everyone thought he was crazy, and now I think they basically think that he's right. And I think that, you know, when you think about the amount of engineering that goes into us being able to deliver something like GPT-4, from the actual compute infrastructure to all of the datasets and tools that we use, it's really this massive endeavor of humanity in a lot of ways.
But kind of specifically, you know, we've managed to execute because we brought together people from a research background and an engineering background. And I think that is something that's very unique, right? And safety is a core part of that; it kind of comes from all the different angles, both in practice and in theory, thinking about how these systems are going to behave, how they're going to go right and go wrong. But yeah, the thing that was so interesting for me when we were starting was looking at all the other labs, and you can really see that they typically come from a research-first background. And so you have these research engineers who are told what to do, and the research scientists get to do whatever they want, and you're like, "That doesn't seem like how you're going to actually build a working system. It seems like a great way to get a bunch of citations, but if you actually want to have an impact and develop something, you just need to structure the organization differently."
And it's hard; it sounds easy on paper, but there are these very conflicting ways of thinking about things if you come from a very practical background versus a more academic background. We've had to lean into those, and I think we've solved more and more sophisticated versions of this different-mindsets, different-backgrounds problem many, many times. And you just never fully solve it; you move on to a more sophisticated version of it. So I think it's this: lean into the discomfort, lean into the hard parts.
Sal: No, there are so many questions I have there, because there is something unique that y'all must be doing. Y'all aren't that large of an organization, and y'all are definitely, I guess, punching above your weight. But one of the things that you've talked a lot about, and this was even one of the reasons to start OpenAI as a not-for-profit, and we'll talk a little bit about how things have evolved since then, is safety; you keep mentioning safety. And you know, AI is both exciting and maybe even scary to some folks. We've all read both the exciting and the dystopian science fiction.
When you talk about safety, what are you talking about? What are the real fears that folks should worry about and put constraints around, and what are the ones that maybe aren't as justified?
Greg: Yeah, so I think there's been a long history of AI safety thinking, right? Some of it predates the field itself; it goes back to the '50s and '60s. You can find people like Arthur C. Clarke talking about the idea of having an intelligent machine. And you know, what sets humans apart is the fact that we are intelligent, so this whole idea is something new, right? And so I think we should approach it with equal parts excitement for what can be accomplished and caution for where we could go wrong.
So I think that is fundamentally, deeply correct: to hold these mixed feelings, to simultaneously be amazed by anything new but also ask where it is going and where the pitfalls of this particular thing could be. I think that's the only way we can possibly navigate through this space correctly. But another thing that's been very interesting is how surprising AI is in how it plays out. Like, in the '90s, everyone thought that if you just solved chess, that would get you to AGI. And actually, chess was kind of the first thing we solved in a lot of ways, and it didn't really go further.
And I've seen the same in safety thinking, right? One direction that I think was possible was that you'd build these agents that have to survive, replicate, and evolve in some sort of complicated multi-agent simulation. That sounds really terrifying, right? To even know what those agents are capable of and to be able to trust them, you have to solve some really hard problems. And so a lot of effort went into thinking about how you design a reward function that you write down very explicitly, and there's a real "be careful what you wish for" quality to it.
And so there's been a lot of thought that goes into those kinds of ways of solving problems. But the GPT paradigm, no one really saw it coming; it's just totally different from what you'd expect from a safety perspective. And so to me, the lesson is that you can't fully anticipate how to get a handle on the technology, what it will be used for, or how to steer it in the right way; I think we have a history of getting overconfident about the wrong things. But an example where you can see a little bit ahead is, for instance, reinforcement learning from human preferences.
And so this is what we use to actually tune the behaviors of these models. That's something very different between, say, GPT-3, where we kind of released the model after just training it on the base dataset, and GPT-4, where we're actually able to really tune it and choose the values that the model exhibits. And we actually started developing that technology in 2017, before any of these models existed. So I think you can see a little bit ahead into the future, and you should think about how these models are going to be used: both making sure they're aligned with what the operator or user wants, and making sure they're not misused if someone wants to do something that society considers illegal or harmful to someone else. There should be some limits.
And then also, I think there are ecosystem effects, where you could imagine that the AIs all do roughly what they're supposed to do locally, but it somehow adds up to a worse world. And so I think we're going to encounter an increasing series of stakes as time goes on. You know, we've sort of graduated in some ways in terms of the risks of these models. For GPT-3, I think we were really worried about misinformation. But when it really came down to it, what people actually wanted to generate was different; the most common abuse vector was generating medical advertisements for various drugs. I think that with GPT-4, you have a new class of risks, and in the future you'll have new classes of both benefits and risks that go hand in hand.
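For readers curious what the "reinforcement learning from human preferences" step Greg mentions looks like in practice, here is a minimal, hypothetical sketch of just the preference-learning piece: a reward model trained on pairwise human comparisons. The network, dimensions, and toy data below are illustrative assumptions, not OpenAI's implementation.

```python
# Minimal sketch of preference learning: train a reward model so that
# responses humans preferred score higher than responses they rejected.
# All data here is synthetic noise standing in for real text embeddings.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMBED_DIM = 64  # stand-in for a real model's hidden representation size

class RewardModel(nn.Module):
    """Maps a response embedding to a scalar score (higher = more preferred)."""
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(), nn.Linear(128, 1))

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

reward_model = RewardModel(EMBED_DIM)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Toy comparison data: for each prompt, a labeler preferred one response over another.
chosen = torch.randn(256, EMBED_DIM)
rejected = torch.randn(256, EMBED_DIM)

for step in range(100):
    # Bradley-Terry pairwise loss: push the preferred response's score
    # above the rejected response's score.
    loss = -torch.nn.functional.logsigmoid(
        reward_model(chosen) - reward_model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final pairwise preference loss: {loss.item():.3f}")
```

In a full pipeline, this learned reward would then be used to fine-tune the language model itself (for example with PPO), which is the step that actually shifts the model toward the values and behaviors Greg describes.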
Sal: So, obviously, one of the things that is interesting about the two of us talking is that our organizations are working together. Y'all reached out to us, you know, six months ago, when y'all were just starting to get the first version of GPT-4. And I guess one question I have is: why did you reach out to us back then?
Greg: Yeah, well, for me personally, I've always felt like one of the motivations for building AI systems, for trying to build AGIs, is to get everyone a personal tutor. I think many people have a story of that one teacher who really understood them, who helped them achieve and get excited about a subject. And you just imagine what would happen if everyone had access to such a tutor 24/7 who could really understand them and motivate them. And I feel like that is so aligned with what Khan Academy is building.
And, you know, with the potential that you want to unlock in every student. And so when we realized maybe we can actually make a dent in education, maybe this could be applied there, it was so clear that Khan Academy was the first port of call.
Sal: And since then, as we've worked together, and obviously now that GPT-4 is out there, what are you hoping this becomes? Like, how do you hope the education world leverages this?
Famously, when ChatGPT came out, it caused a lot of stress in the education world. People were like, "Oh, kids are going to use this to cheat on their essays or do their homework." How should educators be thinking about this right now?
Greg: Yeah, I would say that there's a sort of education-specific version of what I've been saying generally, right? There are opportunities; there are risks. And I think figuring out how to navigate that is really important, and you have to lean into that tension, right? So, I think it is important that people learn to think for themselves, but I think it's also really important that students can get the best out of the technology, and that we're making this technology very accessible and available to people who may not be able to get great educational tools otherwise.
And so, you know, my hope is that we serve as a platform that teachers and educators are able to shape to their liking, to work with their students, and to fill gaps that they can't fill themselves. And on the kinds of applications, I'd actually be kind of curious, Sal: what have you been seeing as the ones that you're most excited about?
Sal: Yeah, well, you know, obviously we've been putting a lot into this, and we're very excited. We've even demoed what we're calling Khanmigo, which is essentially the incarnation of the AI on Khan Academy, with some large school districts, some of whom have famously banned ChatGPT. And they're giving us the feedback, "This is what we wanted! We wanted to harness the powers of this technology but put some guardrails around it so that it's being used productively for students, so that teachers can kind of see what they're doing, and so that it's pedagogically sound."
Now, it is interesting that it has created a really big debate, where people are like, "Well, this is great if they're within the sandbox, but then what's to stop them from going someplace else, and someone else is going to create an application that uses the API to do something, you know, here or there?" So I think there are some real questions there. I guess maybe I'll turn that around as a question: how are y'all thinking about this? Is it going to be a little bit like your classic app stores, where there's a little bit of editorial review of how folks are using the API, or is it going to be more of a "let's see what happens" approach?
Greg: Well, I guess we could talk about that for education specifically or more generally. I think the truth is that this technology is very new, and there's a lot to learn, right? But we're very thoughtful about it. We spend a lot of time thinking about exactly how people should build on our platform, what the rules of engagement should be, and trying to get lots of input. So, we engage a lot with educators and with other people in various spaces, because I think ultimately the decision of how to integrate this kind of technology into the world should not just be up to us. We need to be part of that, for sure, since it's our technology.
But we think it's really important to get broad input from everyone. That's actually the thing I think is maybe the single most important factor. And so you should expect evolution, right? You should expect us to get data and realize that, hey, this particular thing played out great, this particular thing did not, and then learn how to adapt. So my hope is, number one, that we really show the upside, right? I think it's easy to just sort of only see the things that can go wrong, and I think it's important not to stick your head in the sand.
But the reason we built this in the first place, right, is to actually realize those benefits. And so what I'm really excited to see is Khan Academy, or anyone else who's going to build in this space, really engaging: going deep with districts, talking to educators, and really figuring out the exact shaping that they want. Once you have a positive example of something working, it's easy to build standards around it, right? If you don't have that at all, then you're just shooting in the dark, and we've seen this already. You know, last year we published a blog post about safety standards for deploying language models, and all of that came from two years' worth of deployments and, honestly, getting a lot wrong. And so I think that this iterative deployment, learning from practice, is the single most important thing that we can all be doing right now.
Sal: No, I completely agree. Clearly, you know, we're investing so much because we are generally very optimistic about where all of this is going. And, you know, I couldn't speak openly about it when there were all these ChatGPT debates out in the media, but once we were able to, I was saying, "Look, I know there are some fears, but if you put it in the right framework with the right guardrails, not only can you mitigate those risks, but you can have massive advances, a tutor for every student, and introduce completely new modalities that would have seemed like science fiction without AI: things like interviewing historical figures, practicing your debate skills, teachers getting help creating lesson plans, and so on."
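As one illustration of how an activity like "interview a historical figure" can be built on a chat model, here is a hypothetical sketch: the persona and the guardrails live in a system prompt, and the student's turns are ordinary user messages. The prompt wording and helper function below are invented for illustration; they are not Khanmigo's actual prompt.

```python
# Hypothetical sketch of an "interview a historical figure" activity.
# The system prompt carries the persona and the pedagogical guardrails;
# the student's question is passed as a normal user message.
HISTORICAL_FIGURE_PROMPT = """\
You are role-playing Marie Curie for a middle-school student.
Stay in character, keep answers age-appropriate and historically grounded,
and gently redirect the student if they drift off topic.
Never complete the student's homework for them; ask guiding questions instead.
"""

def build_messages(student_question: str) -> list[dict]:
    """Assemble the message list that a chat-completion API would receive."""
    return [
        {"role": "system", "content": HISTORICAL_FIGURE_PROMPT},
        {"role": "user", "content": student_question},
    ]

if __name__ == "__main__":
    for message in build_messages("What was it like to win two Nobel Prizes?"):
        print(f"{message['role']}: {message['content'][:60]}")
```

The same pattern, with a different system prompt, covers debate practice or lesson-plan drafting; the guardrails Sal mentions are largely a matter of what that system prompt allows and forbids.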
You know, one debate that I've been having with a lot of friends lately, knowledgeable friends who know about AI, is this classical question: is the tool going to diminish human capability or expand human capability? I'm on the side of expand. That's where my cards are: that it's going to make us more creative. In some weird way, it might actually make us write more, because we're going to be more like editors, and we're going to be crafting more. But how do you think about it when people say, "Oh wow, now people are just going to end up with one less thing that humans do. They're not going to develop their writing skills; they're not going to develop their creativity, because they're going to lean that much more on artificial intelligence for it"?
Greg: Yep. I mean, I'm definitely with you, Sal. I think the net effect is going to be extremely positive: we're all going to get these AI superpowers and be able to achieve things we couldn't otherwise, and the drudgery will drain away. All of those things, I think, are here, or at least on the horizon. But of course, it would be sticking your head in the sand to say that that's the only effect, right? That's the net effect, and I think it will be quite strong.
But I think there will be anecdotes of places where, you know, people loved a particular craft and now that craft is commoditized, right? There was a barrier to entry; you had to build up a skill, and now anyone can do it. On the one hand, it's a beautiful thing, because there are all these people whose creativity can now be unlocked. You think about how many people have a smartphone, and if you're able to give everyone who has a smartphone access to very powerful AI, they can start creating in a way that before would have required buying a bunch of professional software and going to school and getting a lot of training. You can see how that world is different, and in a lot of ways more positive. But this change, and being prepared for it, that's a scary thing, and I think that's something we should go into with eyes wide open.
Sal: Absolutely. And you know, just in the time we have left: every time I talk to you, you sometimes say, "Oh, and by the way, Sal, we're also working on this," and then you tell me, and I'm like, "Oh, that's a big deal!" And there's always more. Round out this conversation by painting a picture for folks of what is coming, as much as you can talk about it, what you think the implications are, and how y'all are trying to focus on one of those directions or another.
Greg: Well, look, I think at the highest level, we really are serious about this trajectory to AGI. We think that is the trajectory that society, that the world, is on. It's a path we've been on for a long time; you can see it if you look at all of these curves. I think we've picked up the torch in a lot of ways, and we feel it's our chief responsibility to not just build it but to build it right.
And in the shorter term, you see things like, you know, GPT-4 has vision inputs; that's something that we're still just piloting with one partner. But I think that will also be a new step function in terms of usability, right? You'll be able to present documents; you'll be able to, say, take a diagram that is part of the educational curriculum on Khan Academy, where understanding that diagram together with the student's question about it is important, and you'll be able to do that.
And so I think we're just going to open up the accessibility in a lot of ways: making this stuff run faster, cheaper, and more accessibly, that's always a big focus for us, and really trying to improve. I think the thing that we're missing right now in a lot of ways is being able to generate new ideas, being able to solve harder problems. All of that we're thinking about and exploring, and, you know, again doing so with our safety-first focus.
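To give a concrete sense of what the diagram-plus-question interaction Greg describes might look like for a developer, here is a hedged sketch using the image-input message format OpenAI later made generally available in its Python SDK; the model name, image URL, and question are placeholders.

```python
# Hypothetical sketch of a multimodal request: a student's question about a
# diagram, sent together with the image itself. Assumes the image-input
# message format from OpenAI's later, publicly released Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for whichever vision-capable model is available
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "In this circuit diagram, why does the bulb stay lit when switch B is opened?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/circuit-diagram.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```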
Sal: Yeah, well, I got to say Greg, you know, thanks so much for spending the time and bringing us on this journey because it really feels like we're living in a science fiction book. And it's kind of one of these choose-your-own-adventure science fiction books where it can go in different directions. But I think as long as there are enough people thinking about how we maximize the opportunity and the benefits and mitigate as many risks as possible, I am—and it sounds like you too—we're pretty excited about what the world might be like because of this.
Greg: I am too! Yeah, it was great chatting.
Sal: Great, thanks for joining.
Greg: Yep, thank you!