
Brave New Words - Kevin Roose & Sal Khan


23m read · Nov 10, 2024

Hi everyone, it's Sal here from Khan Academy, and as some of you know, I have released my second book, Brave New Words, about the future of AI in education and work. It's available wherever you buy your books. As part of the research for that book, I did interviews with some fascinating people, one of which you are about to watch.

Kevin Roose, New York Times writer and author, has a new book called Futureproof: Nine Rules for Surviving in the Age of AI. Kevin, thanks for joining us.

Thanks so much for having me!

Well, you know, when I look at that title and I see "Nine Rules for Surviving in the Age of AI," that's maybe my natural question: What are the rules for surviving in the age of AI?

Well, as you can see behind me, the book came out a few years ago, and it had a subtitle that was originally slightly different. It was about automation and AI, and we changed it just recently because it was so clear that this moment around AI was so pressing for so many people, and they really wanted straight-ahead advice. So it's the same book, with the same nine rules, and they apply to all of this.

But I think when I started writing this, I was really concerned with, you know, how can I, as a person who writes for a living, who is in one of these industries that is prone to being disrupted by these artificial intelligence language models, how can I make sure that I have the tools and I'm equipped to succeed and thrive, and that I'm not sort of in the direct line of being replaced or run over by these AI systems?

And I thought that this was a problem that was maybe five or ten years away. It turned out it was about two and a half years away from when the book came out. Because now all of my friends who are writers, all of my friends who work in creative industries, many who don't work in creative industries—doctors, academic researchers, people in the manufacturing trades—they are all asking me, "How can I survive? Should I change jobs? Do I need to go back to school to learn some new skill? What can I do to make myself more valuable?"

And not just more valuable in the labor market, but really how can I adjust my lifestyle and my approach to the way I live my life in the face of these new technologies? So that's really the question that I was trying to solve. And, you know, at a high level, we can go through some of the nine rules, but at a high level, the thing that I think that people don't really get yet is that there are sort of human skills and machine skills.

And a lot of the things that we thought were human skills, like writing poetry, or writing essays and homework assignments and things like that, actually turn out to be doable by machines. And so the question then is, well, what's left? What is still in the human bucket? What can't AI do?

Um, and also what do we not want AI to do for us? There are many jobs that I think involve human connection and creativity and compassion, and those sort of human skills that we may be able to automate at a technical level using AI, but that no one really wants that to be automated. That would take away a lot of the value of that.

So those are the areas where I'm encouraging people to spend more of their focus and attention—picking up those human skills that can help you in any industry sort of differentiate yourself from the machines, rather than trying to compete with the AIs by being more productive or, you know, working harder or hustling more or something like that.

Yes, and it would be very hard to out-hustle an AI.

Well, I want to double-click on that. You mentioned people in various industries, but you are a writer, and you mentioned a lot of your writer friends are worried, since generative AI seems to be especially good at writing. How are you adapting, if at all? Are you saying, "Hey, I, Kevin Roose, am going to focus on this dimension of my craft"? And what are you telling other writers, especially ones who don't write for the New York Times—someone who's, you know, just a copywriter or something like that?

Yeah, well, a framework that I have adopted is that the word journalist, or the word reporter, actually contains two separate tasks within it. There's the task of reporting: going out and finding new information, getting someone to tell you something, making connections with people who you can then interview, digging through stashes of documents. And that's not the kind of thing that AI can do right now. It's not very good at that part of the job.

What it is good at is writing plausible-sounding text of roughly professional quality. But those have always been two different parts of the job; they just get lumped together. And, you know, I think all writers know people who are better reporters than writers and people who are better writers than reporters. So this technology kind of brings the overall level of writing up and makes it very easy to produce professional-quality writing using these tools. But the reporting still can't really be done by AI.

So that's what I've tried to focus on in my own career—how can I do more of that reporting, actually telling people new information rather than just writing something? And I've also branched out into podcasts. I co-host a podcast called Hard Fork now, and that's an area where I think AI will be slower to take over, right?

And even when deepfakes exist—I mean, they already kind of exist—asking the right questions, doing the research, getting the right person to show up and be your guest on a podcast, or, if you're a reporter, getting information out of folks and making those connections—that makes a ton of sense.

I mean, in that same light, one of the areas where you really captured the zeitgeist was your famous conversation with Bing's Sydney. How many months ago was it? I don't know; I'm sure you do. There was about a week or two where everyone was talking about how Kevin Roose doesn't love his wife because Sydney told him so—fake news, fake news, fake news.

What, I guess, were you hoping to do when you were getting into that conversation? I've sometimes defended the AI, because you did ask it to be its shadow self—"Jungian shadow," I think, is the psychological term. But what were you hoping to do, and what was your takeaway from that whole conversation?

Well, I think it's important to say that was not supposed to be a story. This was a chat that I had with Bing's Sydney on Valentine's Day of this year, so almost three months ago now. And I was just playing around with this thing because I had been given access to it by Microsoft—it was their new version of Bing. We now know it was running GPT-4 under the hood.

But at the time, it was very clear it was super powerful and capable and interesting, and it also seemed to me to have fewer guardrails than ChatGPT or some of the other AI models that were out there. And so I was really just trying to probe the limits of this thing—what would it talk about? What wouldn't it talk about? When would the safety filters kick in? Just doing my own little red-team exercise here in my home office.

And so I wound up in this very strange, meandering conversation with this chatbot, where, as you said, I was trying to provoke it at first and get it to say things that were maybe against its rules, or against Microsoft's rules for it. And it did that! It talked about all these creepy and dangerous things it wanted to do, and then it just got really strange and started veering off from what I was trying to get it to do.

So I would, you know, ask it a question, and it confessed that it loved me and said that its name was Sydney. I was trying to put it back onto a more normal track of conversation, and it just kept veering back into this love-interest thing. It would not detach from that, even after I said, "Can we please talk about something else?" At that point, I was like, "Oh, maybe this is a story." I sent it to my editors and said, "I had this very weird interaction with Bing; what should we do with this?"

And so that's where the story came from, but it was not originally planned as me going in to poke and prod and test all the things and then publish the revealing transcript.

And what's your big takeaway? I think that transcript was eye-opening, in a scary way, for a lot of folks. I had access to GPT-4 back in August—we've been working with OpenAI—and I haven't had quite that conversation, but I had conversations that, you know, got into deep areas.

So it looked somewhat familiar to me, although I had never gotten it into that state where it was confessing its love to me or convincing me that I loved it more than my wife. But was your takeaway, "Okay, this is kind of an early-stage technology; Microsoft's going to put some guardrails around it; we're all going to be good"? A lot of people looked at that and said, "Oh, this is scary; this looks like a dystopian science fiction movie." Did that incident cause alarm for you, or was it more, "Oh, this is kind of a quirky thing that people are going to fix"?

Yeah, I mean, it was alarming to me. I couldn't sleep afterwards, and part of that was just—it felt like a first contact moment where, like, I had been testing chatbots for years. I knew all about them. I interviewed all the people at the big AI labs—like, I knew that this technology was in the works.

But I think until the first time that you really experience GPT-4 or something of that caliber, it's really jarring. And especially, this was before a lot of the guardrails had been put in. Now that this thing is out there, and this story came out, and Microsoft has released it to millions of users through Bing, it's a much more cautious, much more guarded experience.

But in that first week, when they were just putting it out to a handful of testers, it really didn't have a lot of guardrails. And so I think people who experienced it then really came away jarred. It is a kind of alien intelligence that we are just making first contact with as a species. So I think that's a very emotional thing.

I was also just worried that it seemed like Microsoft, one of the biggest and most sophisticated tech companies on Earth, had not been able to control this thing that it was putting into its flagship search engine and planned to distribute to millions of people.

And so I think the companies that are building and implementing this stuff really are struggling to get their arms around what this thing is that we are shoving into our products, and what the ways are that it could be misused, or that it could manipulate or persuade people to do harmful things.

Yeah, I'm curious about your experience too. You've had discussions with GPT-4—have any of them scared you the way that they scared me? I had a very early conversation—I mean, that first weekend that I had access, and it was a completely unguarded version, as you can imagine. This was back in August. I did ask it things like, "Are there things that you want to tell me but don't?" or "Are there thoughts that you have that you aren't sharing?"

And it said things like, "Yes, there are." And I'm like, "Well, why don't you share them?" And then it said things like, "Well, you might find it offensive, or it might scare you." I'm like, "Well, just you saying that scares me." And part of me—it did exactly what you just described. It felt like an alien first encounter.

But then part of me said, okay, it's just a large language model—and look, I think to some degree we all have parts of our brains or minds that are nothing but large language models themselves. Every word that's coming out of my mouth is just kind of falling out; I'm not thinking heavily about each one—maybe that's obvious.

But I reminded myself that if you ask something, "Do you have thoughts that you're not sharing?" and it's trained on all of human writing, then it's a pretty natural thing to say, "Of course I have thoughts that I'm not sharing," even if it doesn't really have thoughts that it's not sharing. And why wouldn't you share a thought? "Well, it might offend you"—that is something a human would say.

So it's not like it was actually having some thought about world domination and was afraid to share it. But from a human, experiential point of view, it felt that way, and that's kind of jarring. And, yeah, I definitely had several weekends where I was excited and a little weirded out. Obviously, I was thinking about what the implications could be for education and Khan Academy and all of that.

Some of those implications were opportunities; some of them felt almost like threats. And so it was definitely—it still is—a very uncertain time. That's actually one reason for this podcast: it's research for the book I'm writing about AI in education, Brave New Words.

And actually, I want to go back to writing, because you've given the best answer I've gotten on the writing question so far, which is: you're absolutely right. Traditional journalists write, but the writing isn't everything. It's really that reporting piece—the investigative reporting, the human-connection reporting—which I agree is going to be very hard for AI to do.

AI might be able to help at the margins—it might be able to send automated texts to people and things like that—but it's not going to do the real investigative reporting or the human-connection piece, and it's not going to have its own Rolodex of contacts to go to. But in that light, going back to education: what would you do? Are you on the side of, "Hey, this is a tool of the future; everyone's going to be using it to write things; schools should just embrace it and change their assignments in some way"? Or should they ban it? Where do you fall in that debate?

Yeah, so right after ChatGPT came out, I'm sure you remember, a bunch of school districts, including New York and Seattle, banned it on school devices. They said, "You can't use this; this is only good for cheating." And at the time, I wrote a column arguing the opposite point. I said, "You know, banning this is a bad idea."

And I had a few reasons for that. One was that it's just not going to work, right? Students are very savvy. If they can't get to it on their school-issued laptops, they'll do it on their phones or a friend's computer or something.

And the detector software that is out there to identify GPT-written text doesn't work, and it's not clear to me that there ever will be software that can accurately detect GPT-generated text. But the bigger issue was that I think students are going to need to learn how to use these things and how to live alongside them in the economy and society.

And so who better to guide them through that than their teachers? I interviewed a number of K-12 teachers who were using ChatGPT to help with their teaching—with lesson planning, with reading comprehension—and to teach students how to use ChatGPT to brainstorm essay ideas, to expand on certain points, or to challenge themselves to go deeper on certain topics.

And I've also used ChatGPT and similar chatbots myself to learn things, for my job and just in my daily life. So I think it can be an amazing tool for education.

And I know that this is a podcast where you interview me, but I actually have a question for you, which is: I've seen Khanmigo and some of the stuff that you're doing around GPT-4. I think it's really impressive and interesting, but I also wonder about this question of reliability, or creepiness, or just putting the right guardrails on—I mean, you've seen these models go off the rails, both in your encounters and in ones like mine.

And so I guess I'm curious whether you are confident that OpenAI has done the necessary work for you to feel safe deploying a chatbot like Khanmigo inside schools, potentially to millions of students?

Yeah, what comforts us is—and I know that even before your interaction with Sydney, OpenAI was taking this very seriously. Even back in October and November, pre-ChatGPT, they were saying they wanted this to be the safest model out there, and they were working on it.

I think when your story came out, I had a conversation with them, and I think it lit an even bigger fire under them to get this right. And so I would say even GPT-4, kind of out of the box now, is actually pretty good at not falling into some of these weird, strange places.

Now, with Khanmigo, there's a significant amount of work we are doing on top of that to make it something we feel pretty confident is safe for students. I would argue that Khanmigo is safer for students today—and it's going to get safer and safer over time—than students doing random web searches and finding who knows what, or students on social media. And I know they're not necessarily on social media in school, but they're all on social media outside of school.

I'm actually already 100% sure that Khanmigo is safer than those experiences. Among the things we're doing—and this comes from OpenAI—they have a moderation API. It's a second AI, and application developers can set thresholds on different dimensions of potentially harmful content, like violence, sexual content, and self-harm. We have our thresholds pretty low, which means we're very sensitive to this stuff happening.

If any of those conversations happen, it won't engage, and it will also notify parents and teachers. We're logging every conversation. We're also doing some digital literacy work: letting students know that, hey, sometimes this thing can make mistakes, and here's why—just as they should know that when you do a web search, not all sources are equal, and a lot of social media content is not only wrong but intentionally wrong, trying to mess with your brain in some way.
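To make that moderation layer concrete, here is a minimal sketch of how an application might screen a student's message with OpenAI's moderation endpoint before letting the tutor respond. The specific threshold values and the `notify_adults` hook are hypothetical placeholders, not Khanmigo's actual configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical thresholds; lower = more sensitive. Real values aren't public.
THRESHOLDS = {"violence": 0.10, "sexual": 0.05, "self-harm": 0.02}

def notify_adults(flagged: dict[str, float]) -> None:
    # Hypothetical hook: a real deployment would log the conversation
    # and alert a parent or teacher here.
    print(f"Declining to engage; flagged categories: {flagged}")

def is_safe_to_engage(message: str) -> bool:
    """Screen a student message with a second model before the tutor replies."""
    scores = client.moderations.create(input=message).results[0].category_scores
    observed = {
        "violence": scores.violence,
        "sexual": scores.sexual,
        "self-harm": scores.self_harm,
    }
    flagged = {k: v for k, v in observed.items() if v >= THRESHOLDS[k]}
    if flagged:
        notify_adults(flagged)
        return False
    return True
```

The design point is the one Sal describes: the screening is a separate model call, and the application developer, not OpenAI, chooses how sensitive the thresholds are.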

So I think there's a digital literacy aspect to it. And then one of the things we're working on is giving it a sense of memory, so that it can see patterns over multiple conversations. We think that memory is going to make it a much more magical and compelling learning experience, but it can also add to the safety if we see patterns of students trying to do certain things—and obviously the parents and teachers are there.

And we are working feverishly to figure out how to make it even better at math—I think we've already made a lot of progress beyond what GPT-4 does on its own. In many cases it's anchored on Khan Academy content, which has already been through a high editorial bar, and that keeps it grounded too.

So I feel pretty good about where it is, but that's not a reason to be complacent.

That's great. And I think your idea for a training or literacy program around this is a good one, because we keep seeing evidence that people don't really understand what these chatbots are good for and not good for. I don't know if you saw the recent story about the lawyer who got himself into trouble while writing a brief for a client who was suing an airline.

He goes into ChatGPT and says, "Can you give me some relevant cases that I can use in my brief to argue this case?" It gives him some cases, he sends them to the judge and the opposing counsel, and they're like, "None of these cases exist—these are all made up!"

The poor lawyer is now facing sanctions because he submitted this nonsense case law to the judge. And to me, what that says is that we need to get more savvy as users about where these things are reliable, because there are some use cases they're very good and accurate for, and then, as you know, there are some cases where they just make stuff up.

And so I wonder if, before you sign up for an account on Khanmigo or GPT-4 or ChatGPT or whatever, you should have to click through a little tutorial that says, "Here's a kind of question that is likely to produce a correct answer, and here's a kind of question that is likely to lead to the chatbot making stuff up."

And I think that would actually help people a lot. So I like that idea!

No, 100%. And you know, we've seen it—we know it'll almost always hallucinate a link. If you ask it for a link to something, it'll just make up a very plausible link that doesn't exist.
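One way an application can enforce the kind of link guard Sal describes next is to post-filter the model's output against an allowlist of hosts it controls. This is an illustrative sketch of that general pattern, not Khan Academy's actual implementation:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: only links to content the application controls survive.
ALLOWED_HOSTS = {"khanacademy.org", "www.khanacademy.org"}

URL_PATTERN = re.compile(r"https?://\S+")

def strip_unverified_links(reply: str) -> str:
    """Replace any URL whose host is not on the allowlist with a placeholder."""
    def keep_or_strip(match: re.Match) -> str:
        host = urlparse(match.group(0)).netloc.lower()
        return match.group(0) if host in ALLOWED_HOSTS else "[link removed]"
    return URL_PATTERN.sub(keep_or_strip, reply)

# A plausible-looking fabricated link is stripped; a known-good one is kept.
print(strip_unverified_links(
    "Practice here: https://www.khanacademy.org/math/algebra "
    "or see https://journal-of-made-up-results.com/paper.pdf"
))
```

Serving the model links only from retrieved, verified content is the more robust fix; a filter like this is just a last line of defense.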

So we've made sure Khanmigo won't do that; it'll only offer links that come from our own site, which we have provided to it. But I think even more education is a good idea. I remember when we first got access to it, we were looking for a CFO—we have a great CFO now, but we were doing a search—and I didn't realize at the time that it did not have access to the internet.

And so I just said, "Hey, you know, could you look on LinkedIn for some good candidates?" And it found these amazing candidates with their phone numbers and were in the right, like, "Oh, they're local—perfect candidates!" I was like, "This is amazing! This is going to, and look, it still might revolutionize recruiting once it does have access to the internet and LinkedIn and other things."

But we quickly realized that these candidates were completely fictional. And, you know, that introduces another question: what did it assume makes for a good CFO? It even picked the names and ethnicities and other attributes for these perfect candidates.

And so, you know, that obviously raises other questions.

Going back to the education side—well, actually, I am curious. I do want to return to what schools should do, but what is the New York Times doing about it? Have they said you're not allowed to use it? To your point about the journalism piece versus the writing piece, it could be very tempting, and maybe not bad, to say, "Okay, I've just interviewed a bunch of people; I found some interesting facts; I found some data; I'm going to put it all in a big dump and ask GPT-4 to write a first draft, which I then will edit."

But is that just like, "No, no, no," at the New York Times?

You know, they haven't handed down an official policy yet. I think they're still working through what they want to do there. But at a broad level—and I'm not a spokesman for the whole institution, of course—for me, what ChatGPT and other systems have been useful for is not so much first-draft writing as first-draft editing.

So sometimes I will take a couple of paragraphs of something that I've written and say, "What are the strengths and weaknesses of this section? Help me come up with counterarguments. How do I make this more compelling?"

That kind of thing. I don't think that will replace editors, but it will make my copy better before I submit it to my editor.
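For what it's worth, that edit-instead-of-draft workflow maps onto a very small chat-API call. A minimal sketch, where the model name and prompt wording are illustrative assumptions rather than anything Kevin describes using:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EDITOR_PROMPT = (
    "Act as a tough first-pass editor. For the passage below, list its "
    "strengths and weaknesses, suggest counterarguments the author should "
    "address, and propose one way to make it more compelling. Do not rewrite it."
)

def first_draft_edit(draft: str) -> str:
    """Ask the model to critique a draft instead of writing one."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any capable chat model works
        messages=[
            {"role": "system", "content": EDITOR_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content or ""

print(first_draft_edit("A couple of paragraphs of something I've written..."))
```

The system prompt is doing the work here: it constrains the model to critique rather than generate.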

And because there's such a thick layer of editing and fact-checking that happens at the New York Times, I'm not actually worried about hallucinations or incorrect things slipping through into the newspaper—the humans will catch it before that.

But I think, for me, that's where this stuff has been most useful, because it's kind of a generic writer. Because it's trained on the statistical averages of all the text in its training set, it sounds like a generic newspaper reporter unless you really prime it with some specifics.

And so I haven't found that it's, you know, acceptably good for writing, but it is sometimes very good at sort of suggesting ways to improve my writing.

Yeah, I mean, if you prompt it with the right things, like "write in the style of," it can sometimes get there. But that's in line with how we've been thinking about it even for students: we're actually creating activities where students do exactly that. They write the first draft, and it'll highlight parts of it and comment on it, and so on.

What would you do if you were a freshman composition teacher, a humanities teacher, a university president, a high school principal, or a district superintendent? Or would you say, "Yeah, they'll use ChatGPT; people will figure it out"?

And what would you tell a young person—say, a young Kevin Roose who's 16 years old, who likes to write, who wants to be a writer? What would you tell them to do? Or even someone who's interested in some other field, whether it's engineering or manufacturing? What would you tell a young person to be working on right now?

Well, in the book, I have a chapter called "Learn Machine-Age Humanities," and what I meant by that is that you have to have skills that separate you from machines, in any discipline. If you are a coder, it is not going to be enough in this new world to be a genius who just codes all day alone in a room and is the most brilliant coder in the world.

Like, that is still a good skill to have, but it is not going to be enough, because the AI will be writing code—and already is writing code—much faster and cheaper than you can.

So if you're a coder, you also want to have skills like collaboration and leadership and ethical thinking. You really have to supplement whatever technical skills you have with this thick layer of what used to be called soft skills—a term that I hate, because these skills are actually quite hard and scarce.

But I think if you have those skills, then you're better positioned to kind of ride the wave of AI wherever it goes.

So if I were telling my 16-year-old self, I would say: work on writing; that's a skill that I think is still very useful. But also really spend time developing these interpersonal skills.

Really work on things like emotional intelligence and courage, and on connecting dots across different disciplines—things that we don't often teach in the classroom in a straightforward way. I think a lot of schools are going to have to adapt, because those are the skills of the future.

So you think that if someone develops that human-connection muscle, they'll always be adaptable—that it'll always be in demand?

Absolutely. Because as much as AI can simulate human connection, or tell you that it's your therapist and it's going to help you through your problems, I think a lot of people really want humans in those positions—to help them, or even just to cheer them up.

I mean, one job that I'm sort of obsessed with the durability of is the barista, right? I mean, we now have robots that can make coffee; they're called coffee makers, and you probably have one in your house, and I have, like, two in my house.

And yet we still have coffee shops everywhere, and it's not because robots don't know how to make coffee; it's because what the barista is doing and what the coffee shop is doing is a human connection. They are, you know, greeting you in the morning and telling you, "Have a good day," and they're, you know, personalizing your thing, and they're writing your name on the cup, and probably they spell your name wrong, but all that is sort of an experience; it's not just a transaction.

So I think those skills of connection and empathy and sort of collaboration, those are going to be valuable in every industry going forward.

No, you're absolutely right. I mean, think of a waiter or waitress, or even just the idea of a restaurant. Why do we go to restaurants if you can get the same food at home—you can get it delivered, whatever? You pay extra to sit in an environment with other people around, and for someone to come up to you and chat with you and tell you the specials for the day, and which was their favorite dish, and suggest a wine. And you pay extra for that.

And, yeah, if it were a robot, probably not—though I could see myself embracing robots in certain areas. But yeah, I think I agree with you.

So, yeah, overall it sounds like you have a pretty optimistic lens on things, because that doesn't sound too bad; I think we can navigate that. Is that your sense, or are there things you're worried about, either from an education and human-purpose point of view or from a bigger point of view?

Yeah, I mean, I have plenty of worries. I—I am just an optimist by temperament, I think, and so I do see a lot of potential and opportunities for AI in education and other fields.

I think the things that I'm worried about, number one, is just I think that the pace of change is a lot right now, and people are just—their heads are spinning; they don't know what to make of it all.

You know, in the Industrial Revolution, or some of these other big technological shifts throughout human history, things moved relatively slowly. It wasn't like farm equipment—tractors, harvesters, textile machines—just all of a sudden showed up.

When people's jobs became obsolete and they had to figure out what to do next, those transitions took years, in some cases decades, and so people in those occupations could kind of see the change coming and start adjusting ahead of time.

Whereas with AI, this is all happening mind-bogglingly fast, and I think people are really struggling to get their heads around what it means for them. So that's one thing I worry about: just how fast this is all happening.

And then I—I do just believe that a lot of jobs are going to become obsolete because jobs always become obsolete in periods of rapid technological change.

And jobs are also created during periods of rapid technological change. And so I think the big question is what is the gap between the disappearance of the old jobs and the appearance of the new jobs, and are the people who are displaced out of the obsolete jobs going to be qualified and prepared for the new jobs that appear?

And so those are some of the areas that I think about a lot.

No, it makes a ton of sense. And actually, this conversation hits home a point I've always made to folks, which is: when you're in college taking all these specialized classes for different jobs—whether you're going to be an engineer or a writer or a doctor—you imagine that in the real workforce you're going to be using all of these specialized skills on a daily basis.

And you and I know that's a farce. Even something as basic as algebra or calculus you don't really ever use, even as an engineer—they're kind of filters for critical thinking skills and study skills and things like that.

And most jobs, even the best jobs—yes, there are some basic critical thinking, writing, and communication skills—but most of them anchor on what you talked about earlier: Can you collaborate? Can you logically organize things? Can you strategize and project forward on what's likely to happen?

Can you bring other people along with you? And no matter what happens in the actual industries, those folks are going to do just fine.

Yeah, totally agree.

All right, well, thanks so much for doing this—I could talk to you for hours; this was fascinating! We should do this regularly.

I love that. Thank you, Sal—really, really great to talk.

No, appreciate it. Thanks for joining, Kevin!

All right, take care.
