
Elon Musk : How to Build the Future


11m read
·Nov 3, 2024

Today we have Elon Musk.

Elon, thank you for joining us.

Thanks for having me.

Right, so we want to spend the time today talking about your view of the future and what people should work on. To start off: you famously said when you were younger there were five problems that you thought were most important for you to work on. If you were 22 today, what would the five problems that you'd think about working on be?

Well, first of all, I think if somebody is doing something that is useful to the rest of society, I think that's a good thing. Like, it doesn't have to change the world. Like, you know, if you make something that has high value to people, and frankly, even if it's something, if it's like just a little game or, you know, some improvement in photo-sharing or something, if it has a small amount of good for a large number of people, that's fine. I mean, I think that's fine. Stuff doesn't need to change the world just to be good.

But in terms of things that I think are most likely to affect the future of humanity, I think AI is probably the single biggest item in the near term that's likely to affect humanity. So it's very important that the advent of AI happens in a good way; it's something that, if you could look into a crystal ball and see the future, you would like that outcome, because it is something that could go wrong. And as we've talked about many times, we really need to make sure it goes right. Working on AI and making sure it's a great future—that's the most important thing, I think, right now, the most pressing item.

Second, obviously, I think working with genetics. If you can actually solve genetic diseases—if you can solve dementia or Alzheimer's or something like that with genetic reprogramming—that would be wonderful. So I think genetics might be the second-most important item.

I think having a high-bandwidth interface to the brain is important. We're currently bandwidth-limited. We have a digital tertiary self in the form of our email capabilities—our computers, phones, applications. We're effectively superhuman, but we're extremely bandwidth-constrained in that interface between the cortex and our sort of tertiary digital form of ourselves. Helping solve that bandwidth constraint would be very important for the future as well.

One of the most common questions I hear young people ask is, "I want to be the next Elon Musk. How do I do that?" Obviously, the next Elon Musk will work on very different things than you did, but what have you done, or what did you do when you were younger that you think sort of set you up to have a big impact?

Well, I should say that I did not expect to be involved in all of those things. The five things that I thought about at the time in college—quite a long time ago now, 25 years ago—were making life multiplanetary, accelerating the transition to sustainable energy, the Internet broadly speaking, and then genetics and AI. I didn't expect to be involved in all of them.

I actually thought, at the time in college, that I'd be helping with the electrification of cars—that's how I started out. That's what I worked on as an intern: advanced ultracapacitors, with the hope that there would be a breakthrough relative to batteries for energy storage in cars. When I came out to go to Stanford, that's what I was going to do my grad studies on: advanced energy storage technologies for electric cars.

I put that on hold to start an Internet company in '95, because that did seem to be a time when particular technologies were at a steep point in the inflection curve. I didn't want to be doing a PhD at Stanford while that was happening, and I wasn't entirely certain that the technology I'd be working on would actually succeed. You can get a doctorate on many things that ultimately have no practical bearing on the world. I wanted to just be useful. That was the optimization: what can I do that would actually be useful?

Do you think people that want to be useful today should get PhDs?

Um, mostly not.

Some yes, but mostly not. So how should someone figure out how they can be most useful?

Whatever this thing is that you're trying to create, what would be the utility delta compared to the current state of the art, times how many people it would affect? That's why I think something that makes a big difference but affects a small to moderate number of people is great, as is something that makes even a small difference but affects a vast number of people. The area under the curve is roughly similar for those two things.
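The "area under the curve" heuristic can be sketched as a one-line calculation. Everything here is illustrative: the `total_impact` helper and all the numbers are made up for the sake of the example, not taken from the interview.

```python
# A sketch of the "utility delta x people affected" heuristic described above.
def total_impact(utility_delta, people_affected):
    """Rough 'area under the curve': improvement over the state of the art
    multiplied by how many people it reaches."""
    return utility_delta * people_affected

# A big improvement for a moderate number of people...
niche_breakthrough = total_impact(utility_delta=100.0, people_affected=10_000)
# ...versus a small improvement for a vast number of people.
mass_market_tweak = total_impact(utility_delta=0.5, people_affected=2_000_000)

print(niche_breakthrough)  # 1000000.0
print(mass_market_tweak)   # 1000000.0 -- roughly the same area under the curve
```

Both projects come out roughly equal under this heuristic, which is the point being made: neither axis dominates on its own.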

So it's really about just trying to be useful. And then, when you're trying to estimate the probability of success—you say something will be really useful, good area under the curve—I guess to use the example of SpaceX: when you made the go decision that you were actually going to do that, this was kind of a very crazy thing at the time.

Very crazy, yes.

People were not shy about saying that, and I kind of agreed with them that it was quite crazy. If the objective was to achieve the best risk-adjusted return, starting a rocket company is insane. But that was not my objective.

I just came to the conclusion that if something didn't happen to improve rocket technology, we'd be stuck on Earth forever. The big aerospace companies had no interest in radical innovation. All they wanted to do was make their old technology slightly better every year, and in fact sometimes it would actually get worse.

Particularly in rockets, it's pretty bad. In '69 we were able to go to the Moon with the Saturn V; then the Space Shuttle could only take people to low Earth orbit; then the Space Shuttle retired. That trend has basically been trending to zero. It feels like technology just automatically gets better every year, but it actually doesn't. It only gets better if smart people work like crazy to make it better.

That's how any technology actually gets better. By itself, if people don't work on it, it will actually decline. You can look at the history of civilizations. Take ancient Egypt: they were able to pull off these incredible pyramids, and then they basically forgot how to build them.

Even hieroglyphics—they forgot how to read hieroglyphics. Look at Rome, and how they were able to build these incredible roadways, aqueducts, and indoor plumbing—and then they forgot how to do all of those things. There are many such examples in history. So I think you should bear in mind that entropy is not on your side.

One thing I really like about you is you are unusually fearless and willing to go in the face of other people telling you something that's crazy. I know a lot of pretty crazy people, but you still stand out. Where does that come from, or how do you think about making a decision when everyone tells you this is a crazy idea? Where do you get the internal strength to do that?

Well, first of all, I'd say I actually feel fear quite strongly, so it's not as though I just have an absence of fear. I feel it quite strongly. But there are times when something is important enough, when you believe in it enough, that you do it in spite of the fear. People shouldn't think, "I feel fear about this, and therefore I shouldn't do it." It's normal to feel fear—there'd have to be something mentally wrong with you if you didn't feel fear.

So, you just feel it and let the importance of it drive you to do it anyway. You know, actually, something that can be helpful is a degree of fatalism. If you just accept the probabilities, then that diminishes fear.

So when starting SpaceX, I thought the odds of success were less than 10%. I just accepted that I would probably lose everything, but that maybe we would make some progress. If we could just move the ball forward—even if we died—maybe some other company could pick up the baton and keep moving it forward, and that would still do some good.

Yeah, same with Tesla. I thought the odds of a car company succeeding were extremely low.

What do you think the odds of the Mars colony are at this point today?

Well, oddly enough, I actually think they're pretty good.

So, like, when can I go?

Okay, at this point, I am certain there is a way. I'm certain that success is one of the possible outcomes for establishing a self-sustaining Mars colony. In fact, growing a Mars colony—I'm certain that that is possible. Whereas until maybe a few years ago, I was not sure that success was even one of the possible outcomes for a meaningful number of people going to Mars.

I think this is potentially something that can be accomplished in about 10 years, maybe sooner. I mean, maybe 9 years. I need to make sure that SpaceX doesn't die between now and then, and that I don't die, or if I do die, that someone takes over who will continue that.

You shouldn't go on the first launch.

Yeah, exactly. The first launch will be a robotic one anyway.

So, I do want to go.

Except for that internet latency.

Yeah, the latency will be pretty significant. Mars is roughly 12 light-minutes from the Sun, and Earth is 8 light-minutes. So at closest approach, Mars is 4 light-minutes away; at furthest approach it's a little more than 20, because you can't talk directly through the Sun.
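The latency figures follow directly from the distances quoted. As a quick back-of-the-envelope check (using the interview's rounded numbers, not precise ephemeris values):

```python
# Light-delay arithmetic using the approximate distances quoted above.
SUN_TO_MARS_LM = 12.0   # light-minutes from the Sun to Mars (approx.)
SUN_TO_EARTH_LM = 8.0   # light-minutes from the Sun to Earth (approx.)

closest = SUN_TO_MARS_LM - SUN_TO_EARTH_LM    # planets on the same side of the Sun
farthest = SUN_TO_MARS_LM + SUN_TO_EARTH_LM   # planets on opposite sides of the Sun

print(f"One-way delay, closest approach: {closest:.0f} min")     # 4 min
print(f"One-way delay, opposite sides:   {farthest:.0f} min")    # 20 min
print(f"Round-trip ping at closest:      {2 * closest:.0f} min") # 8 min
```

Even at closest approach, a round-trip "ping" to Mars takes about 8 minutes, which is why interactive use of Earth's internet from Mars wouldn't work.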

Speaking of really important problems—AI. So, you have been outspoken about AI. Could you talk about what you think the positive future for AI looks like and how we get there?

Okay, I do want to emphasize that this is not really something that I advocate. This is not prescriptive; this is simply, hopefully, predictive. I'm not saying, "Well, this is something that I want to occur"—rather, I think it is probably the best of the available alternatives. The best of the available alternatives that I can come up with—maybe somebody else can come up with a better approach or better outcome—is that we achieve democratization of AI technology.

Meaning that no one company or small set of individuals has control over advanced AI technology. Like that, that's very dangerous. It could also get stolen by somebody bad. You know, like some evil dictator or country could send their intelligence agency to go steal it and gain control of it. It just becomes a very unstable situation.

I think if you've got any incredibly powerful AI, you just don't know who's going to control it. It's not that I think the risk is the AI developing a will of its own right off the bat. The risk is more that someone may use it in a way that is bad—or even if they weren't going to use it in a way that's bad, somebody could take it from them and use it in a way that's bad. That, I think, is quite a big danger.

So I think we must have democratization of AI technology—make it widely available. That's the reason that, obviously, you and me and the rest of the team created OpenAI: to help with the democratization of AI technology, so it doesn't get concentrated in the hands of a few.

But then, of course, that needs to be combined with solving the high bandwidth interface to the cortex. Humans are so slow.

Yes, exactly. But, you know, we already have a situation in our brains where we've got the cortex and limbic system. The limbic system is kind of a mess—it's the primitive brain. It's kind of like your instincts and whatnot. Then, the cortex is the thinking part of the brain. Those two seem to work together quite well. Occasionally, your cortex and limbic system may disagree, but generally, it works quite well.

And it's rare to find someone—in fact, I have not found someone—who wishes to either get rid of the cortex or get rid of the limbic system.

Very true.

Yeah, that's unusual. So I think if we can effectively merge with AI by improving that neural link between your cortex and your digital extension of yourself—which already exists, it just has a bandwidth issue—then effectively you become an AI-human symbiote. And if that is widespread—anyone who wants it can have it—then we solve the control problem as well. We don't have to worry about some sort of evil dictator AI, because collectively, we are the AI. That seems like the best outcome I can think of.

So, you've seen other companies in the early days that start small and get really successful. Um, hope I don't regret asking this on camera, but how do you think OpenAI is going as a six-month-old company?

I think it's going pretty well. We've got a really talented group at OpenAI—a really, really talented team, and they're working hard. OpenAI is structured as a 501(c)(3) nonprofit, but, you know, many nonprofits do not have a sense of urgency. That's fine—they don't have to have a sense of urgency—but OpenAI does, because I think people really believe in the mission. It's important, and it's about minimizing the risk of existential harm in the future. So I think it's going well. I'm pretty impressed with what people are doing and the talent level, and obviously we're always looking for great people to join.

We call it a mission list of 40 people.

Noted.

Yes, well, all right. Just a few more questions before we wrap up. How do you spend your days now? Like, what do you allocate most of your time to?

My time is mostly split between SpaceX and Tesla. Of course, I try to spend part of every week at OpenAI—I spend basically half a day at OpenAI most weeks, and then there's some OpenAI stuff that happens during the week. But other than that, it's really Tesla and SpaceX, interlaced.

Like, Tesla—what does your time look like there?

Yeah, so it's a good question. I think a lot of people think I must spend a lot of time with media or on business-y things, but actually almost all my time—like 80% of it—is spent on engineering and design, developing next-generation products. That's 80% of it.

You probably don't remember this—a very long time ago, many years ago, you took me on a tour of SpaceX, and the most impressive thing was that you knew every detail of the rocket and every piece of engineering that went into it. I don't think many people get that about you.

Yeah, I think a lot of people think I'm kind of a business person or something. That's fine; business is fine. But, um—really, it's you know, at SpaceX, Gwynne Shotwell is chief operating officer. She kind of manages legal, finance, sales, and kind of general business activity. My time is almost entirely with the engineering team working on improving the Falcon 9 and the Dragon spacecraft and developing the Mars colonial architecture.

And at Tesla, it's working on the Model 3. Yes, I'm in the design studio—that takes up maybe half a day a week—dealing with aesthetics and look-and-feel things. But most of the week is just going through the engineering of the car itself, as well as the engineering of the factory. Because the biggest epiphany I've had this year is that what really matters is the machine that builds the machine—the factory.

And that is at least two orders of magnitude harder than the vehicle itself.

It's amazing to watch the robots go, and these cars just happen.

Yeah, now this actually has a relatively low level of automation compared to what the Gigafactory will have and what Model 3 will have.

What's the speed on the line of these cars?

Actually, the line is incredibly slow. For both the X and the S, it's maybe about 5 centimeters per second.

And what can you get to? This seems very slow—what would you like to get to?

I'm confident we can get to at least 1 meter per second—so, a 20-fold increase. That would be very fast.

Yeah, at least—I mean, I think quite a bit more than 1 meter per second is possible. Just to put 1 meter per second in perspective: it's a slow walk, or a good medium-speed walk. A fast walk would be 1.5 meters per second, and the fastest humans can run over 10 meters per second. So if we're only doing 0.05 meters per second, that's very slow—the current line speed.

And at 1 meter per second, you can still walk faster than the production line.
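The figures in this exchange are easy to sanity-check. As a quick sketch using only the numbers quoted above:

```python
# Sanity-checking the production-line numbers from the conversation.
current_speed = 0.05  # m/s, about 5 cm/s on the current S/X line
target_speed = 1.0    # m/s, the stated goal

speedup = target_speed / current_speed
print(f"{speedup:.0f}x faster")   # 20x, matching the "20-fold increase"

brisk_walk = 1.5                  # m/s, a fast walk as quoted
print(brisk_walk > target_speed)  # True: you could still outwalk the line
```

The 20-fold figure and the walking comparison both check out against the quoted speeds.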
