
Brave New Words - Ethan Mollick & Sal Khan


28m read
·Nov 10, 2024

Hi everyone, it's Sal here from Khan Academy, and as some of you know, I have released my second book, "Brave New Words," about the future of AI in education and work. It's available wherever you buy your books. As part of the research for that book, I did interviews with some fascinating people, which you are about to watch. I'm excited to introduce Professor Ethan Mollick, a professor at the Wharton School at the University of Pennsylvania, who has also done a lot of work on simulations in business school. But I think even more relevant to what we're doing, you've made a name for yourself over the last several months as someone who has been using AI for good in education. So, welcome, Ethan.

Ethan Mollick: Thank you for having me! I'm thrilled to be here.

Let's just start at the beginning. What was your first introduction or exposure to things like large language models? What were some of your initial reactions, and how did you start to realize that they could be valuable?

So, as you said, I've been thinking about how we democratize education for a really long time. I come from a business school perspective, and there's all this research showing that small amounts of business education transform lives, from controlled studies around the world. So, I've been thinking about this for a while, and I've been building tools to do it, like simulations. I was also aware of GPT-3, the pre-ChatGPT version of the software, which at that point could write a decent fifth-grade-level essay. I thought I should get my students aware of this stuff, so I started assigning them assignments to cheat with it: write the best essay you could using it, use DALL-E to generate an image, and then talk about where the state of the world was.

What was very funny is that halfway through the cheating assignment, when half my students had turned theirs in but the other half hadn't, ChatGPT came out. So, there was a sudden change in the quality level that was quite dramatic during that period.

What did you notice, actually? I'm curious. I mean, you said, “All right, in theory, you could cheat with this stuff.” So students go cheat with this stuff. By definition, it's not cheating if you're told to cheat. But did you see a difference relative to what you saw in previous years from student work?

Yeah, I mean, what's interesting is that we judge writing and intelligence as being very close to each other, in ways that are hard to separate; we consider essay writing to be how we learn how to think. Maybe that's true; maybe it's not. We really haven't tested it. It's like a lot of education stuff; we don't actually know the answers. And I've had students in my class who are brilliant people but not good writers. English is their third language, or they came from a background where they never learned to write really well. So, even just having that little bit of a hint, sort of like Grammarly, made a difference in making the writing better.

After I introduced some of my students to ChatGPT, they're like, "You know, I'm now getting job callbacks I didn't get before, and I'm getting the job when I have an interview, because I can write well now." So, it's part of why I've made AI use mandatory in all my classes, and I no longer accept anything that isn't perfectly written at this stage.

Why? Why bother? And what's your sense, thinking about it from an employer's point of view? They were using a signal: someone who writes a really eloquent cover letter versus someone who doesn't. Now it's all eloquent, but maybe that person comes to work, and they're like, "Oh, maybe their language skills aren't as strong as I thought they were." What's your point of view? Are they cheating, or is it okay because they can use ChatGPT or whatever to continue to write well?

Well, those are actually two really interesting questions. The first is this question of cheating overall, which is a big one. What does it mean to cheat nowadays? If I ask the AI for advice but don't use it, if I ask it to punch up a paragraph, is that cheating? It's a much different question than plagiarizing. And the second question is that a lot of stuff that's good is going to be hurt by AI, but so is a lot of stuff that's bad. It turns out judging people based on cover letters has never been a particularly great method, and college essays have never been a particularly great method of deciding admissions.

The best method we have, from large-scale meta-analyses, is actually doing a detailed interview that's based around what people have actually accomplished in their lives. If writing skills are important, you're going to have to give people separate writing tests; but for a lot of jobs, that isn't the main skill, or it's going to be done by AI now. So, in some ways it's cheating, but it's also kind of forcing us to reconsider what's actually valuable in terms of signaling.

A lot of stuff that we do, especially as instructors, is about setting your time on fire to show that you care. If someone asks me to write a letter of recommendation for them, it takes an hour to write a really good one. I could paste the person's resume and the job they're applying for into GPT-4 and get a better letter than the one it would take me an hour to write. And I'm always struggling: do I use the generated letter, because it reflects my thoughts, even though I'm no longer spending the time I was going to spend to make this work? I think that's the really interesting challenge.

I love your phrase "setting my time on fire to show that I care." My mom always wants me to pick her up from the airport; both my parents do.

Ethan Mollick: Same!

I do! I do! I set that part of my day on fire so that I can show her that I care. But I think you're right; we are full of traditions, including maybe the cover letter and all of this that are almost just rituals to light your time on fire to show that you care.

But do they actually add value? And some people have other people coaching them, helping them with things like college essays or resumes, and that's kind of a resource-intensive version of ChatGPT or GPT-4.

Ethan Mollick: Exactly!

And I mean, the exciting thing, and this is why I love what you guys are doing, is that we suddenly have billions of people who have access to the single best AI ever released to the public. If you're rich, you don't get access to a better AI; you get free access to it through Bing or through the kinds of things that you guys are doing. It's available everywhere. And that's the most profoundly interesting thing in education, because we used to have all of these barriers. All the previous attempts to do this were hard and difficult.

Right? So, I think part of that question is, you know, it democratizes stuff we didn't get to democratize before. It used to be like, you can hire a tutor; you'll do better. Now what happens when we have a tutor that everybody could use that's actually better than human tutors? That's a really exciting and a little bit scary prospect.

Yeah, I know! I mean, you're right; it's both exciting and scary. As you know, we're working on Khanmigo inside of Khan Academy and rolling it out increasingly widely now. And you and I are bumping into each other a lot, because we seem to be on the speaking docket together a lot: here's what Khan Academy is doing, and look at this professor who's not afraid to use AI.

So, let's continue on your journey. You talked about those early days, when you encouraged the kids to "cheat" using artificial intelligence, and it was kind of leading to better outcomes. But what have you changed? Have you changed what the assignments are? Have you changed your threshold of what makes a good assignment?

Yeah, so there are basically three things I've had to do as a result of AI, and I think everyone's going to have to face these problems. Level one is just expecting more from students. Now, I teach an entrepreneurship class, and I know there are people listening who teach, say, basic writing skills; you're going to have to adjust in different ways. This is going to be different for every person. For some people, it's going to be all about having writing assignments in class so people learn how to write, because they'll be cheating outside class. I get that.

So, this is not a universal thing, but it's certainly my case: I expect more. People used to have to turn in a business plan; now they have to turn in a business plan and working code, even if they could never code before. I can't code, but I've written 12 Python programs in the last couple of weeks.

You have to turn in a fully working web page. You have to interview both real and fake people with the AI, and there's data showing that you can survey the AI and it's actually fairly accurate. So, my expectations for work are now much higher than they were before. That's the first adjustment.

The second set of adjustments is that I now have AI integrated into assignments. AI is a teammate for my students. For example, every assignment they turn in has to be critiqued by at least three famous entrepreneurs, which they have to generate with the AI. So, they get feedback along the way.

And then the third thing, which is the really big thing, is changing how we do classrooms. Lectures don't make as much sense when I've got tools like Khanmigo that can do truly amazing teaching remotely. So think about how we can flip the classroom in that way.

Yeah, no, well, there's so much in there! I found your example fascinating: students will write an executive summary for a business plan, and then ask a simulated Steve Jobs, or Henry Ford, or some other famous entrepreneur to weigh in on their business plan, and that's part of the assignment now.

Yep! Just for turning in their outline, which used to be something I would simply comment on, they have to have three famous entrepreneurs comment on their plan. And that also teaches a little about writing prompts and about the limits of these things; they're not actually invoking the spirit of Steve Jobs, and they have to realize that. It helps them think about this new world we're in.

And then they also have to do a premortem: projects succeed more often when you actually imagine in advance how they could fail. So they have to generate a whole bunch of failure scenarios, work backwards, and figure out how to solve them. It lets me do things that we know from the research make teams work better and make people work better, but that we couldn't do before because it was just too much work. And this is so powerful.

And what you're also doing, it sounds like, either explicitly or implicitly, is making them intimately familiar with these tools and the power that they have. Because the worst thing you could do is keep these students in a bubble, and then they come out into the workforce in a year or two and they're like, "Wait! What just happened? Everyone's using GPT-5 now," or whatever. Instead, they're going to be able to lead in that.

And your point about a flipped classroom: as you know, I've been talking about this even pre-AI, in a world of on-demand video. Honestly, even pre-on-demand video, humanities classes, and even business school classes, would say: read the case, come to class, and we're going to discuss it. That's always been a best practice, I think, and even more so in an AI world.

Where is this going? You know, are you continuing to evolve your courses? What are—I’m sure all sorts of faculty members are coming to you and saying, “What do I do?” What are you telling them?

I mean, one thing we're doing is sharing prompts. To me, this is the most important thing to do. I've got a couple of papers, and I'm sure we can put links somewhere for people, but we're trying to encode pedagogical approaches into the prompts. Prompting is not magical, but it benefits from expertise.

So, we've written some prompts that do various things, like generate explanations and analogies, some of the stuff you do in Khanmigo, but that could be applicable to many classrooms. There are tools to help teachers, so I'm trying to figure out how we pursue the same democratization mission you guys have. How do we give teachers tools? How do we give students tools? The promise of AI is the thing we thought the internet would do: if we give everyone access to information, everyone will learn everything. It turns out only a very small set of people are really constructivist learners who don't need support and can just learn on their own. But now we have a tool that actually can provide the support, the tutoring, some of the other material that we know works, and do it at scale.

And I've heard you talk about Bloom's work. I mean, we don't even know how well this works at this point, but we couldn't do it before. And just like you, I've done massive courses; not anywhere near your scale, but I've had a few hundred thousand people take my online video courses. There's a big difference between that and actually having an exercise and getting feedback.

So, we're trying to think about all of those kind of pieces.

And what are you telling—because I have no doubt, I mean, you already are not only navigating this successfully; you're thriving in a generative AI world. You're making it work for education.

I feel similar optimism to what you do, but I would guess that 90-something percent of teachers right now are not in the same frame of mind. They're kind of freaking out. Let's say you were teaching freshman composition, and that's all you had to do. Your whole job is to make sure that your students are learning to write and that you're evaluating them. What would you tell teachers like that, who are a little bit more afraid that things like ChatGPT just set off a bomb in their pedagogical style?

I mean, I think we did start with the bad news; it absolutely did. A lot of stuff just got blown up, and some of that was good stuff. There are ways we've been teaching for 2,000 years that made a lot of sense; maybe they weren't really research-backed, but we've gotten very good at them. We've gotten good at lectures; we've gotten good at assigning essays at home and evaluating them. We were good at that stuff, and that's blown up.

And by the way, students have been cheating forever; the cheating just got much easier and cheaper. But now even your best students, the ones who wouldn't cheat, are at a disadvantage because they're not cheating, because they're not writing the same way.

So, it does blow up a lot. If you are teaching a freshman composition class, you have to change how you teach. This AI is undetectable; anyone who's telling you otherwise is not telling you the truth. One of the things I learned from my students who cheat is that after a couple of rounds of back and forth with the AI, which is the best way to interact with it (you don't just put a prompt in; you engage with it), the essays are undetectable by both humans and any detection tool.

It blew up what you did, so you have to think of this as an opportunity, but it's also scary, and we can admit that. As educators, what we should realize is that education is going to be okay; we will adapt to this change. It's bigger than calculators, but not that different in kind from the narrow disruption calculators caused in math classes.

As an instructor, you're probably going to want more instruction to happen outside of class, with videos and other material, and more active writing assignments in class, with students critiquing each other and writing short essays. You can make this work. It absolutely is a big change, but it's an opportunity to do new things and to figure out what works for you.

And the thing I like to stress to instructors is that this also makes your life easier in a lot of ways. There's a lot of stuff we wish we could do that we didn't have time to do. And again, we've got prompts for all this; it writes great quizzes for you. I know you guys have guides for doing this stuff too. It helps you as an instructor. So it's not just increasing your workload and making your life difficult; it's also lowering your workload and giving you a little more time to think about education and how to transform students' lives, which is why we're all in this in the first place.

Now, I can imagine; you threw out some very good tips in your answer. If I'm teaching freshman English comp at a university, or an English comp class at a high school, the world we're going into could very reasonably be: okay, I have you for an hour or an hour and a half at a time; write, and maybe you're writing with the AI. It's not writing for you, but it's acting as an assistant, so it can help answer some questions, etc. And then it can evaluate; it can give me a first pass of grading. So your students are actually going to get more writing time and more feedback in this world.

And yeah, maybe they have less to do at home. That's good; everyone has less to do at home. You have less to do at home; you're not grading, you know, your 80th essay on something that's probably a little mind-numbing at that point.

And the students don't either; everything can happen right then and there. It's just more efficient for everyone. So yeah, that makes a ton of sense.

Yeah, and I think that embracing this is going to be important. And it's scary; it is okay to be scared as you listen to this, to be like, "Oh no! I don't want to change what I've been doing."

But we also should recognize that a lot of the ways we were having people write essays didn't make sense. The people who weren't very good wrote bad essays outside class, and how much time did we have to tutor them and make them better? In a flipped-classroom, active-learning environment (and the active learning is more important than the flipped classroom), the idea is that we're doing things in class, so maybe there's more time for that tutoring. Maybe the AI can help them catch up or give them an explanation where they are, as opposed to the instructor grading essays. Like you said, maybe you have the AI help you flag the couple that need the most help, or a couple that are great examples, and give feedback that way.

We don't have all the answers yet, so we are in an exploratory environment, but we can't pretend the world didn't change. This isn't like debating whether one day there will be video classes and what we're going to do about massive online courses; it's too late for that. This is here now, and whether we like it or not, we have to adjust as instructors.

And where do you think this is going? Because, look, we've been talking about large language models. You mentioned DALL-E, which is about generative images. There are technologies around generative video, speech-to-text, text-to-speech. Things like speech-to-text and text-to-speech have been around for a while, but I've seen some demos lately of text-to-speech that really is not discernible from a human being, which is going to create issues with deepfakes and all of that in the broader world.

But where do you think this is going? I mean, it seems like, even for folks like us, every week there are new things going on. What do you think this is all going to look like in two or three years, and how are you keeping tabs on everything?

So, a couple of things there. One is that everyone has to recognize that generative AI is what's called a general-purpose technology. I teach entrepreneurship and innovation studies, and general-purpose technologies come along very rarely; they come around once every generation or two. Think steam power, maybe the internet, maybe computers. They affect every aspect of life, and this is the fastest one we've ever seen, the fastest adoption. It's the most personal. It's going to affect every industry differently, every person differently, every job differently, but it's going to affect everything.

The job that's least affected by AI, according to the early studies we have, is roofing. And I've talked to a couple roofers who are like, “Oh, now actually, roofing is going to change too because we can now do all of our proposals and stuff with AI help.” So it's going to change roofing also. So you should recognize this—the world's changing in ways we don't always know.

So what you should think about as an instructor is preparing people to live in a world of fast adaptation. The more we start to use AI and get familiar with it, the easier it'll be for people to adapt to whatever is coming next. Because I think you and I both had this moment with GPT-4 where it's like, "Oh my gosh! It really does a lot of what people do; it thinks like people."

Like this isn't really software; it's worth thinking about like as a person. It's not a person, but you can think of it like a person that could be helpful. And then the question is what's next? We don't know. Does this keep getting better? Does it stay roughly where it is? I don't have answers to that, but we have to be ready.

And the best way to do that is to start using it well. You know, as a business school professor, and you just touched on it, the world is moving faster and faster and faster. What advice? I'm sure students are asking you, or parents are asking: what should my kids be working on? What skills should they be developing, and which ones are maybe not as important as they were before?

So I think thinking still matters. A lot of why people said writing was important is that writing essays was thinking, and I think that's still very true. We need to teach people how to think. I think that for right now, and hopefully for the foreseeable future, AI is great if you were in the bottom 50% of a lot of different categories: you're now at the 50th or 60th or 70th percentile. That's exciting; it expands opportunities for people who didn't have them before.

You couldn't write before; you can do that now. You weren't good at programming; you're now an okay programmer. Like, that's exciting! I think the question is what are you good at? What do you love that you might be in the top 10% of people in? And I think that's something where you can add a lot of value to an AI world.

So I think part of this is about thinking about what you want to do that you know you're talented in and how do you develop that talent and that expertise. And that requires you to still learn the basic knowledge of school. You build expertise by building basic knowledge, right? By, you know, seeing lots of examples, working with the AI to get there, and I think that can get you to a really exciting place.

But let me make some tangible examples. If someone says, "My dream is to be a roofer," you'd be like, "Great, that's going to be a great job in the future, because you're going to be able to do more roofing, and AI is going to write your proposals for you."

But if someone came to you and said, "I want to be a copywriter," would you give the same answer? Just get really good at it, even though GPT-4 is already pretty good at it?

So that's the bet, right? We don't know how much better these systems are going to get, and I do think there's disruption. We are seeing, in early controlled studies, 30 to 80% performance improvements on many high-powered, white-collar analytical tasks: writing-based tasks, analysis tasks, consulting tasks, programming tasks.

So, if you want to be in these fields, AI is going to be part of your life, and you need to figure out how to use it. If you can use AI to be 10 times more productive, there'll be huge value in you; there's still a need for a human in the loop. If you can't figure out how to use AI to get more productive, I'd be more nervous in those spaces. As for which industries to go into: to be fair, people have been predicting that radiologists would be made obsolete by AI for the last 15 years. They've been saying, "Don't become a radiologist."

There are still lots of jobs for radiologists, so it's hard to make a bet. I wouldn't be making huge career bets, but I would say the more writing-based your job is, and the more routine it is, the more exposed it is. If you're a fiction writer or a comedy writer, AI still can't write a good joke; those are tough jobs anyway. But if you're trying to do copywriting, then unless you figure out a way to use the AI, I would be nervous.

So the more exposed a field is to AI, the better AI is at it, and the more you need to be a centaur, in Garry Kasparov's model: half human, half horse. You need to be able to work with the AI as your horse and be the human integrated with it, and then you can write better.

I've found my own writing has improved tremendously by using the AI on a paragraph I'm stuck on and not interested in working on anymore: give me 20 versions of this. And it does.

So I think there's hope in these fields, but we don't know how good this is going to get, and that's also a little bit scary.

And you mentioned scary. You seem like someone who's not very scared right now, but when you say scary, what are the scary thoughts in your head?

I mean, there are a couple of levels of scary. There's the stuff that's definitely going to happen: disruption that's definitely going to occur. As you said, it is now trivial to create fake videos; fake videos of me talking can be made from a minute of my speech. I've done that before. Fake photos are easy. Content en masse is easy. There are a bunch of social issues we're going to have to deal with as a result of that.

Then there's also the fact that this is going to disrupt jobs. We don't know which jobs yet, or in what way. We don't know how much disruption it's going to involve. Usually, in economic terms, when jobs get disrupted, it's not about jobs; it's about tasks. It's not your job that changes; it's the tasks you do.

And hopefully, usually, that ends up with a better bundle of tasks. You offload your boring stuff. But it can be a disruptive period: people get fired, people who were going to get hired don't get hired. We don't know how this is going to turn out. So there is some scariness coming with job disruption, even if it does grow the economy.

And then the question is, does it keep getting better? If you're in the top 10 or 20% of a field, I think AI is a tool for you. But then, how do we train up interns when AI works as a better intern than a lot of the interns we would have had? How are we training people in the future? And if AI keeps getting better, what if it's better than everyone but the top 1%? What do we all do with our jobs then?

And then, of course, there's the really scary thought that a lot of people spend time worrying about, more than all these other concerns, which I think are much bigger: what happens if this thing actually becomes superintelligent, the AGI idea? What happens then? It's not that you don't need to worry about that piece; people are worrying about it, and you should. But I think this other stuff, about what happens to jobs and how we survive in this world of disruption, is more important.

And, I mean, both of our bread is buttered by this, but I do think education is part of the key here: being able to think and adapt is going to be important. Not being trained for one thing, but being trained for many things.

Yeah, I mean, you mentioned that you're already making your business school students write code along with the business plan, and once again, they're able to be almost a small-scale manager of an AI team, so to speak.

Ethan Mollick: That's exactly right!

I mean, I think you have to realize that everyone on Earth just got a free intern, or ten of them, and they do different things. What are you going to do with them? That's the question I'm asking everybody. And it's not magical. People bounce off it and get scared of the AI, and I get all the reasons to be nervous; we've been talking about them here.

But the fact is, it's here, and my feeling about teaching students how to use this is that you need 5 to 10 hours of using it in your job to see what it does for you. If you don't spend those five or ten hours, you're not going to get there. So, that's my advice: you have to use this! If you're a teacher or a student, use it for stuff and see what happens!

And that's the only way to go forward. It's not the magic of copying a prompt off the internet; it really is just experimenting and being in dialogue with this thing like you would with a person.

I 100% agree with that! As someone who has personally spent about 300 hours with it at this point, I would say that if I were to add one skill, it's creativity, because I've seen so many people go to ChatGPT or GPT-4 and just ask, "What do I do?" The sky's the limit! Try interesting things, and you're going to realize it's able to do things that would have been science fiction.

I am curious: what advice do you have for us? We're obviously working on Khanmigo; we're trying to make it a tutor for every student, a teaching assistant, maybe more than that. We are trying to add things like memory and different types of activities, to expand our horizons. What are the types of things that you would like to see it do to make students', teachers', parents', or anyone's lives easier?

So, I think part of the interesting thing is that the interface with the human becomes an interesting question. How do we build these tools so that they fit into the human systems that we have? What's the advice for how students use them? How do they give feedback to teachers in the right kind of way? You guys are thinking a lot about that.

But I think how we fit these into human systems matters, and then the other thing is trying to be even more creative. What you're doing is replicating, at a very high level, the human idea: what if I had a one-on-one tutor? But that was the best we could think of.

So the question is, what can we do beyond that? What does education look like now? Can we give different kinds of puzzles and tasks? Can we do more game-based instruction and teaching than we did before? That's a very powerful set of tools.

Another big question for me is how we start doing multiplayer versions of this. How do we include many people in a classroom interacting with the AI? How do we create persistent personalities for the AI, so that you select a persona and keep working with it?

I mean, I think we get very constrained by this idea of, now we can finally build Bloom's tutor, right? And that's wonderful, but that was also human-constrained. We have a new system now; it's going to change how we work and how we organize work. I would be thinking internally: do we still want to do sprints? Do we still want to do agile? Those were built around human systems; what does the new model look like?

So that's the view: how do you take a companion and, you know, how does it force you to go beyond the classroom and actually solve problems? Maybe it can push you to actually create something, right? So rather than just doing a single class project, how do we integrate these together?

And then the other thing I would say is I think STEM just became much less interesting relative to the humanities than it has been in the past. The humanities and social sciences really help here, because this thing kind of works like a person, and the more human history you can dive into, the more you can force the AI to think in interesting ways. So I also think trying to weave some of that in matters. With AI, a lot of stuff that wasn't practical before just became practical, and a lot of stuff that was very practically oriented just became a lot more theoretical.

So taking that lead of rethinking agendas and syllabi is also really interesting.

You're absolutely right. As someone who's kind of straddled both worlds, you know, with a more technical background in computer science, I saw back in school that a lot of the students who struggled with things like engineering and coding had trouble reducing problems to an algorithm: how do you linearize the problem, think about all the edge cases, create variables, and really reduce it down to that?

But now with AI, even though it still is software, you're seeing the other direction, where I've actively told members of the team, "You've written this prompt like it's an algorithm. Do not do that." People write, first ask the user this, then do this. And I say, "No, don't do that! Tell the AI what your goal is!" Because you want the AI to be more robust and flexible; you don't want it to always do one thing first and another thing second. You want to give it your goal.

Don't micromanage how it gets to the goal, and it's going to do it in a more dynamic way. Or give it a sense of tone; tell it who it is! Again, it's bad software but good people, right? It doesn't work well as software because it doesn't do the same thing every time; it gets stuck in loops.
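To make the goal-first prompting advice concrete, here is a toy sketch. The two prompt strings and the crude heuristic checker below are purely illustrative assumptions, not taken from Khanmigo or any real product:

```python
# Illustrative sketch only: contrasts an "algorithmic" prompt that scripts
# the model step by step with a "goal-directed" prompt that states a role,
# a goal, and a tone, and lets the model decide how to get there.

# Algorithmic prompt (discouraged): micromanages the exact sequence.
algorithmic_prompt = (
    "First ask the user their grade level. "
    "Then ask what subject they need help with. "
    "Then give them exactly three practice problems."
)

# Goal-directed prompt (encouraged): role, goal, and tone, not steps.
goal_directed_prompt = (
    "You are a patient math tutor. Your goal is to help the student reach "
    "understanding on their own: ask guiding questions, adapt to their "
    "level, and never just hand over the answer. Keep an encouraging tone."
)

def describes_goal_not_steps(prompt: str) -> bool:
    """Crude heuristic: a goal-directed prompt avoids rigid
    step-sequencing words like 'first' and 'then'."""
    lowered = prompt.lower()
    return not any(word in lowered for word in ("first ", "then "))

print(describes_goal_not_steps(algorithmic_prompt))    # False
print(describes_goal_not_steps(goal_directed_prompt))  # True
```

The point of the sketch is the shape of the second prompt: it constrains who the model is and what outcome to pursue, while leaving the path flexible.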

You have to learn to work with it. It lies to you; you have to learn to work with that no matter what we do. You do your absolute best, and hallucinations are dropping, but they're real; it's a real problem. So think of this like a person, right? It does better that way. I've given it tests: it does better on education neuromyths than teachers do. It doesn't believe in learning styles anymore, for example, which is great!

But it still makes mistakes, and I think you're right; you have to give it a persona, but you also have to teach the people who are using it to treat it like a person. If it starts going off on a tangent, you have to redirect it. It's not a superintelligent entity that knows what the future is.

And people use it that way, as an oracle, all the time: "What does the future of AI look like?" It doesn't know! Even if it were as intelligent as a person, it wouldn't have those answers for you. Treat it like a person, and prompt it like a person, and it's a very different kind of beast to work with.

And on another dimension, you do a lot of work on simulations, business simulations, things like that. I could imagine you're going to town with this, and we should think about collaborating!

I would love to get even some of your earlier simulations out to kids in K-12 or in other contexts, but I'm curious how you're thinking about it in the AI world.

Yeah! We'd love that! I mean, I wrote a book on games in education like 12 years ago, and I've been building games ever since with Wharton Interactive; Wharton, you know, invested in this thing. I've had a team of 14 people; we've had people who won the Hugo Award for best science fiction novel. I see some science fiction behind you, so, you know, one of our writers helped us out with some of the storylines, and they're amazing!

Right, I've spent years building these games where you run a fake startup company in real time or you're on a doomed space mission to Saturn, and it teaches you about leadership skills. And then, of course, the disturbing thing was I typed one well-directed prompt into the AI, and it started role-playing all the characters in my game very well! Right? 70% of a year's worth of effort could be accomplished with a paragraph!

So, I think this idea of playing with simulations is another thing. We're used to very didactic learning, and even the tutor model is sort of interactive didactic learning. We can start doing other things, right? A lot of educators will be familiar with the Diamond Age model, right? This universal tutor that turns something like teaching into a game.

What I've learned is you can't make games that teach compelling enough to take the place of real games. At their very best, they're 80% as much fun as a real game, but they can be a really interesting way to teach! And combining simulations and games and actual didactic learning and quizzing and project-based work, suddenly all of this is cheap to do. It's easy to do at scale. It is so exciting!

Because I can tell you the simulation-based stuff makes such a difference in people's lives, and normally we would have to spend millions of dollars building a sim; now it's something we can do much more easily. And I think that's really, really exciting too!

I mean, I'm serious about this. We'll connect afterward, because, you know, there are two schools that I helped start: Khan Lab School, which includes a high school, and we have this other online high school. I would love to have middle and high school students, maybe even elementary school students, doing all of your business school simulations! I think it'd be transformational!

Actually, it's pretty funny: we built this for college students, but our best players are often actually high school students, because they don't get a chance to actually do business stuff, and they take it very seriously! As a result, they get a lot out of it, right? Someone in a business school class gets into it too, but it's a different kind of experience. The goal before this was actually to create a class in a box, in the same sort of way I've done massive online courses, like you have. And just like you've recognized the limits of those things, we've recognized the limits of these.

So how we get people to experience the game and learn on demand became our big goal here, and I think we've accomplished it! So I'd love to talk about those things and get you to try it out. You know, we've talked to various billionaire founders and recreated their experiences in the game. There are hundreds of branching paths, and we've actually filled Google and Wikipedia with fake information about the world of the game, so you can just Google stuff and get useful information out of it. We had a lot of fun building that too.

You've made reality part of your simulation! You know all about alternate reality games!

Ethan Mollick: Yeah, exactly!

So, very much! Awesome!

Well, Ethan, I could talk to you for hours about this. You're one of the few people who are really leaning into this, especially in education. So, I expect this will not be our last conversation!

It's actually not our first either! I love it! And I love what you're doing with your mission; it's just exciting to see. You know, I was involved with everything from One Laptop per Child on through all these educational efforts, and we're now actually close to that dream of making technology genuinely make education better. What you guys are doing is so important, and it's so exciting!

But it's also really scary to a lot of people. I think, though, that by embracing it, you start to see the real value of what's happening here. So, thank you for having me!

Ethan Mollick: Thank you!
