10 People + AI = Billion Dollar Company?
What is the state of these AI programmers? Like, is it reliable yet, and where are we at? Will we just see software companies with way fewer employees, converging on a point where you could have unicorns—billion-dollar companies—with like 10 people at them? If we imagine a world where there could be companies with fewer than 10 employees, maybe you could still be a family, but is that still a good idea? I have a controversial argument against what Jensen said; this one will probably piss some people off.
[Music]
Nice! Welcome to another episode of The Light Cone. I'm Gary, this is Jared, Harj, and Diana, and collectively we've funded companies worth hundreds of billions of dollars. Today, we're talking about one very controversial clip from Jensen that lit up the internet.
I'm going to say something, and it's going to sound completely opposite of what people feel. You probably recall over the course of the last 10 years, 15 years, almost everybody who sits on a stage like this would tell you it is vital that your children learn computer science. Everybody should learn how to program. In fact, it's almost exactly the opposite. It is our job to create computing technology such that nobody has to program and that the programming language is human; everybody in the world is now a programmer. So what do you guys think? Is this true?
We're at the dawn of LLMs. We infused the rocks with electricity, and recently they've learned how to talk, and now they can code. What does it mean? I guess the question is, for the next generation of founders, or anyone who's young and figuring out what they want to do with their career: should they still study computer science? Is that still a good bet in the long run?
A lot of us spent a long time telling people over all of these generations, "Yeah, you should learn to code. If you're a non-technical founder, you should learn to code." It's like the most important thing to do during college—definitely no matter what else you do, learn how to code, right?
So the question is whether LLMs and AI are just going to automate all of these jobs. I think we have different views on it, right? We've funded a number of companies that are building coding assistants that take tasks off developers' plates. What does the future look like for that?
I mean, I guess the analogy that you could say—I don't really agree with this, but you could say that given photography, you didn't have to learn how to use a paintbrush in order to create representations of real life. Today, you can prompt using a diffusion model; you can actually just write out what you want and an image will be developed for you. Will this transition to code? Some of the questions that Diana has done a little bit of research on, and I think Jared, you too, is what is the state of these AI programmers? Like, is it reliable yet, and where are we at?
Related to Jensen's clip is the launch of Devin, which also took the internet by storm and has inspired many founders to go into this area, including a lot of the companies we've funded in the past two batches. It could be interesting to talk about that history and what the state of the art is with AI programmers.
Yeah, so right now, these are companies I've funded, companies like Sweep. We also work with Fume. A lot of them are solving tasks for more junior developers: fixing an HTML tag here, a bug there. That's fairly small, but it's much more difficult when you want it to build complex systems, like, "Build me the distributed system for the backend." At that scale, we cannot do it today.
I think it's important to put context around Jensen's clip: until recently, AI basically could not program usefully at all; it was hitting almost zero. What really changed? I actually think it goes back to before Devin. I think the real unlock for the current surge of interest in AI programmers goes back eight months, to when the Princeton NLP group released a benchmarking dataset called SWE-bench. SWE-bench is a dataset of GitHub issues taken from real programming problems, so it's a good representative dataset of real-world programming tasks—the kind of things that programmers actually do.
This dataset finally made it possible for people to really tackle this problem of building an AI programmer, to try an algorithm, and benchmark it to see how good it is and to compete with other people on the internet. Diana and I were actually just talking about how if you look back in the history of machine learning, a lot of the big unlocks came from somebody publishing a benchmarking dataset.
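To make that concrete, a SWE-bench instance pairs a real GitHub issue with the state of the repository at the time and the tests a fix must make pass. The sketch below is an illustrative toy harness, not the official SWE-bench schema or evaluation code; the field names and helpers are invented for illustration:

```python
# Toy sketch of a SWE-bench-style evaluation: apply a model-proposed patch
# to a snapshot of a repo and check whether the failing tests now pass.
# The "repo" here is just a dict of file contents; field names are invented.

def evaluate(instance, candidate_patch, run_tests):
    """Return True if the patch makes the instance's fail-to-pass tests pass."""
    repo = dict(instance["repo_snapshot"])  # repo state when the issue was filed
    repo.update(candidate_patch)            # overlay the model's proposed fix
    return run_tests(repo, instance["fail_to_pass"])

# A toy "issue": add() subtracts instead of adding.
instance = {
    "repo_snapshot": {"utils.py": "def add(a, b):\n    return a - b"},
    "fail_to_pass": ["test_add"],
}

def run_tests(repo, test_ids):
    # Stand-in test runner: exec the file and check the behavior under test.
    namespace = {}
    exec(repo["utils.py"], namespace)
    return all(namespace["add"](2, 3) == 5 for _ in test_ids)

good_patch = {"utils.py": "def add(a, b):\n    return a + b"}
print(evaluate(instance, good_patch, run_tests))  # True
print(evaluate(instance, {}, run_tests))          # False: the bug remains
```

The real benchmark applies unified diffs to actual repositories and runs each project's own test suite; the reported score is simply the fraction of issues resolved.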
Going back to the very beginning of deep learning, do you want to talk about how deep learning actually got started?
Really? Yeah, so SWE-bench is very reminiscent of ImageNet, the groundbreaking dataset from Fei-Fei Li's lab at Stanford. It was a very challenging dataset with a lot of images and lots of classes, where the task for the algorithm was to classify what was in each image. Because, at the time—this is hard to believe—the biggest unsolved problem in machine learning was to look at a picture of a cat and be able to tell you, "This is a picture of a cat." That was totally intractable in 2006.
Because a cat can have lots of variations, it's actually a very hard problem. You have cats that are yellow, cats that are black; they could be in different positions, sleeping or lying down, and they all look very different. How do you encode that with a limited dataset? Before 2006, the traditional methods in machine learning were more statistical. You would use discriminative models like support vector machines, with hand-coded feature extractors from signal processing: putting things into the frequency domain, wavelets, all these sorts of things people tried.
People tried all of that, and the dataset was still really hard. The error rate was really high—over 30% to 40%. For a bit of context, human performance on this dataset is about 5%. That's error rate, to be clear, not accuracy: a 5% error rate. And all these standard methods were at 30% error or worse, which is really, really bad.
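Since accuracy and error rate get mixed up in conversation, here is a tiny illustration (the labels and predictions are made up, not ImageNet data):

```python
# Error rate = fraction of predictions that are wrong; accuracy = 1 - error rate.
# So "humans are at about 5%" means a 5% error rate, i.e. 95% accuracy, while
# classical methods at 30%+ error were wrong on roughly a third of the images.

def error_rate(predictions, labels):
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

labels      = ["cat", "dog", "cat", "bird", "cat"]
predictions = ["cat", "dog", "dog", "bird", "cat"]  # one mistake out of five

print(error_rate(predictions, labels))  # 0.2, i.e. 20% error / 80% accuracy
```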
So then came AlexNet, right, Jared? Yep. A group from the University of Toronto had trained a deep neural network using GPUs—one of the first cases of anyone training deep learning networks on GPUs—and AlexNet blew everybody else's performance out of the water. It was way better than all the other techniques.
I remember the day that that news article dropped; it took the programming internet by storm. I would argue that the AI race that we're in right now—we're literally still riding the wave that was kicked off by AlexNet in 2012. It just kicked off this incredible race.
Yeah, it was the first time we were getting close to that human-level perception. Then people discovered this phenomenon of stacking neural nets with lots of layers. People didn't exactly know what was happening in the middle and treated it like a black box, but it was actually starting to work.
So the interesting lesson is that SWE-bench is that same kind of moment: once we can measure something, we can get better at it. Before ImageNet, there wasn't a big enough dataset to do that. Now we'll make the same kind of progress in programming.
But now the question is, are we going to get AI algorithms that are just as good at programming as humans? Is coding like an image recognition task? One reason to think so: if you zoom out, programming has been one of the most promising use cases for LLMs essentially since they launched, right? The term "copilot" really comes from GitHub Copilot specifically, a copilot for programmers. Data, compute: everything is scaling. The models keep getting better.
We now have, like you said, like a benchmark and like human attention focused on trying to make this better. What are the reasons this isn't just a straight scaling law?
Oh, I think we will. We're now at about 14% on SWE-bench; that's the state-of-the-art performance, and it's still well below human performance. I'm not sure exactly what human performance would be, but certainly a skilled programmer could probably solve most of SWE-bench given enough time.
I think the SWE-bench number is going to climb; we're going to see rapid improvements for the reasons that Diana mentioned. But SWE-bench is a collection of small bugs in existing repositories, which is quite different from building a new thing from scratch. Even when we get to something that can solve half of SWE-bench, that's still pretty far from something you could just give instructions for an app, and it would go build the whole app.
Yep. I mean, the way I think about it—my question, really—is whether the kinds of tasks in SWE-bench are analogous to image recognition. I think programming falls into a different category of problems. It's a bigger set; SWE-bench is a subset, still in an idealized world.
To put a bit of context, I think in terms of engineering, there are two categories of problems and how we model the world. There’s sort of the design world that is all like perfect, where you have all the perfect engineering tolerances, all the simulation data, and all the laws of physics work perfect in that simulated world. Then you have the reality, which is messy.
I think AI and LLMs do a good job with this design world, but when you encounter the real world, a lot of stuff breaks. When I was building these engineering systems, hot fixes would come in, and it's random magic numbers to make the system work. Or imagine self-driving cars: I'm pretty sure there are a lot of magic numbers in there, because of the placement of the sensors and the physics. You have all these coefficients of friction, and they're not pretty.
The laws of physics, like Newton's equations, are beautiful in an ideal world. But in the real world, when you need to get systems to work—engineering systems for startups that solve real problems—you encounter friction, and there are all sorts of coefficients of friction depending on the materials. And that world is infinite.
So my argument is that I don't think LLMs are going to be able to really encompass and manage the whole real world; the real world is infinite. Going back to Jensen's original video: he basically said, "The dream situation is you type in, 'I want an app that helps me share blah, blah, blah, photos,' and the software just magically figures out how to build it."
And I guess one way to build on that analogy: I think the world Jensen was envisioning is one in which programmers are like product managers. Today, a product manager basically builds an application by writing English, right? They write a spec, and then programmers translate that into working code.
So maybe in the future that's how apps will be built; you'll just like write English, and the AI will take care of the translation. I think that gets into the heart of this debate that has always happened amongst engineers and non-engineers in Silicon Valley, which is how much of programming is an implementation thing? It’s just, "Hey, like you have the idea, and the implementation are separate," versus actually like, "You only get the ideas in the process of implementing."
Paul Graham is a huge proponent of the latter, right? Like in multiple ways, like in programming—it’s like the whole reason he's such a proponent of Lisp from the early days. You want a very flexible language because you only get the good ideas once you start building.
His philosophy actually translates over to writing—where writing is literally thinking. The process of actually writing is thinking. I remember when I was learning how to do YC interviews, watching him, being in the room with him, and asking him, "Well, how are you—what are you exactly looking for?"
One thing that he disabused me of was that sometimes people would come in, and I’d look at what they did in the past, and I generally felt like, "Well, this looks like someone who's smart and with it, and they did some impressive things in the past—surely they thought through this, and they just didn't say it in the meeting."
One of the things Paul would always say is, "Oh, no, no, no! If they don't say it, then they themselves do not know! Like the writing is actually thinking!"
I guess, to sort of torture this analogy, but I kind of like it, we are in this moment where if we take the analogy of the camera—it made it so that you don’t have to paint anymore. The subtlety there is that aesthetics in the world still exist. I think the artistry of creating software or technology products is actually in that interface between the human and the technology itself.
So my argument would be: if you're doing backend software, writing APIs and models, you might get a lot of help from these types of AI programmers, right? You can actually strongly type this stuff, and then you can use language to translate that into what the product should actually do. But there is still an artistry in that interface: what the product should even do, and how.
I think that's a very good point, Gary. Maybe another way to think about this advent of LLMs in programming is through the history of computer science and programming languages: as we progressed, we moved to higher and higher levels of language abstraction.
In the early days, it was very much coding in assembly. Yes, and it took so many lines of code just to do addition. Then you went up a bit with things like Fortran and then C++, where you still had to really know the metal and manage your own memory. Then you went to dynamically typed languages where you didn't have to think about types—like JavaScript and Python, right?
Or duck typing, right? And now there's this new thing: programming in English. But you still need the artistry and craftsmanship to come up with the design and the architecture. Interestingly, the best programmers today—even if they are programming in Python—have learned C, and they actually know a lot about how computers work, how the layers below them in the stack work, even while using higher abstractions.
I was curious to ask everyone here—another potential counterexample is the natural-language-to-SQL idea, which has been around for years and years and has never really taken off. I always wondered how much of that is because it's hard to build and implement, and how much is because it's actually not as simple as "I need someone to translate my thoughts into a SQL query."
It’s knowing the right questions to ask about the data and having some representation of how the pieces fit together. You have to have some sense of like the relational database in your head, at least the concepts to ask the right questions. If it’s true that there’s some step before of thinking involved, then you can’t just extrapolate from like, "Hey, it’s just like we started with binary code and we just abstracted all the way eventually to natural language."
There’s going to be some gap between like the highest level of abstraction you can get in actual natural language.
I think so. I mean, we've looked into a lot of these ideas and funded some companies pursuing them. I think AI will get to the point where it can do the translation from English to SQL, but that's not the hardest part. The reason data engineering teams are so big—and when I had to manage these teams, they were very messy—is that the hardest part is the data modeling, because that's trying to encapsulate the real world, and the real world is messy.
We have all these annoying coefficients and frictions that we have to model. It's like, "Okay, this person talks to whom, and this workflow feeds into what," and it's all very, very messy. AI can't really encapsulate that perfect model; you kind of need the human to think it through.
Yeah, and that layer—how do you get an LLM to parse through all that and translate the business requirements into the data model? Because if the data model is wrong, it causes all sorts of issues, and that's where things get hard.
What do you think, Jared?
I have a controversial argument against what Jensen said; this one will probably piss some people off. Nice! My argument is that even if everything that Jensen predicts comes true and in the future you will be able to build a great app just by writing English, you should still learn how to code because learning how to code will literally make you smarter.
We have an interesting piece of evidence for this, which is there are a lot of studies now that show that the way LLMs learn to think logically is by reading all the code in GitHub and basically learning how to code. I think programmers have long suspected that learning how to code made them smarter, but it was kind of hard to prove with humans.
Now we have some actual evidence that this is really true. There's definitely evidence that for a certain class of problems, you're way better off having the LLM write code to solve the problem than having it try to solve the problem directly.
Exactly, yeah. So tool use is actually a very weird emergent property of these systems. Summing up: let's say one thing is probably uncontroversial—there is going to be some subset of programming work that will just be subsumed by LLMs. Maybe it's junior engineering work, like glue code: a whole bunch of programming work that we can all admit does not involve high creativity or deep human reasoning.
I'd worry more about the dev shops—the type of stuff that gets outsourced to dev shops, or even, frankly, FAANG companies that have armies of junior employees. One potential consequence is that, if we're not that far away from the junior AI software engineer, will we just see software companies with far fewer employees, converging on a point where you could have unicorns—billion-dollar companies—with like 10 people at them?
Sam Altman had a recent comment about this that also went kind of viral on the internet—the idea that in the future, unicorns could have 10 employees or fewer. It's never quite happened; I think WhatsApp and Instagram are probably the closest it's ever come.
Yeah, it feels like we’ve always had this thought for the last decade-plus at Silicon Valley, and we’ve always had flashes: “Oh, like Instagram gets bought for a billion dollars with like 20 employees.” WhatsApp gets bought for $13 billion with 15 employees or whatever the numbers are. But we’ve never seen like a sustained trend that we can point to. It’s always like these flashes, but maybe now we’re at the point where we will just see a trend.
It’s interesting; I feel like people who were new to Silicon Valley and new to being founders, they want to have more employees because employees are like correlated with status, essentially. And we know the more experienced founders who’ve been doing this for a while, they are obsessed with this idea of having fewer employees—having as few as possible.
Because once you manage a large company with lots of employees, you realize how much it sucks, and that’s why this meme has been around in Silicon Valley for a long time.
Yeah, it feels like there are often two types of people who really push for and are motivated for this smaller employee idea or smaller teams idea. It’s that profile, and then it’s also just engineers who are naturally more inclined towards like computers versus people who are not excited about the idea of managing lots of people.
Which is a totally Paul Graham thing—he was into this in 2005, long before it was a trend in Silicon Valley. Yep, and it had to be a combination of foresight and personal preference, right? Just not wanting to be in an office with hundreds of people.
I met up with Mark Pincus from Zynga here at YC recently, and the most interesting thing he told me was, “I think at some point a company gets to about a thousand people, and even the most forceful, the most sort of with it CEO—you sort of lose the capability to really impose your will on the company right around when a thousand people.”
If I reflect on some of the founders that we interact with regularly, who have thousands of employees, like that’s actually sort of what their daily lived experience is like. There are these things that you know are extremely true—the company must go in this direction—and then even then you’re like a little bit boxed in, and you’re like unable to enforce that.
I have to say, of the founders I work with—especially the younger, hardcore technical engineers—I think they actually grow into leading bigger teams, coming to view people as a resource that should be used well.
An example is Patrick Collison of Stripe. I worked with him on our first startup together when he was like 19, and he was definitely the archetype of an incredibly intense engineer who wanted to be working on hard engineering problems all the time. He viewed too many people around him as a distraction from the core work, did not want to hire people, and didn't want to do any of this stuff.
At some point, I think once he started Stripe, something changed, where he realized that the way to achieve his ambitions was to just take an engineering mindset—view the company as another product that needs to be engineered and built.
People are a core component of that, and I think he just embraced it: "I need to be a very effective leader and manager of people." I'm not saying that in this new AI world, if he were starting today, he wouldn't have fewer employees. But I don't think he'd still have that internal motivation to just not hire anyone. It would be more of an expected-value calculation: is it better for me to automate this, or is it better to rally people and use them as a resource?
What do you all think? I mean, these are hard things for a young founder to approach; they're some of the reasons my own startup didn't go as far as I wanted it to. Maybe the most toxic or difficult thing I struggled with was this idea that somehow your startup is your family.
There’s actually a clip online of—I think Brian Chesky of Airbnb in a prior era actually like, you know, saying that relatively emphatically. And then today if you ask him, he would say, “Oh no, no, no, this is definitely not a family.” A family has all these old weird traumas. Like, imagine bringing home a boyfriend or girlfriend, and they’re sitting with your family, and you know, they go back, and they’re like, “Well, what happened there? Why is that like that?” And it’s like, “Oh, you don’t want to ask about that! You know, let's not ask about that,” right?
You don’t want to have a family be your model of a company is actually kind of a bad thing. The much more functional version of it is actually a sports team—like, “Here’s actually what we’re trying to do, and you know, basically we need to win.” I think wanting to win is sort of the ideal analogy. Whereas, you know, for a family, there's these weird things like, “Oh we just want love.”
I was like, “Oh no, no! That’s not what a company is for; that’s not what a startup is for. We’re here to solve problems and win.”
I guess I really wish someone had told me that when I was 27, going through my first stint at YC. I think that's a hard transition. I personally went through it, because we went from a very small engineering team to a very large one at Niantic with Pokémon Go. All that hyper-success is very jarring when you go from a small, intimate team into an engineering org of like 500 people.
That concept of going from, "This is your tribe and people and family where you really know each other and everyone," to getting the best performance out of everyone is very different, and that’s hard.
What could be interesting in this era: if we imagine a world where there could be companies with fewer than 10 employees, maybe you could still be a family—but is that still a good idea? I don't actually believe it is.
The flip side, Jared, to your point about programming making you smarter: there's certainly some kind of learning founders go through when they hire people, build teams, deal with conflict, fire people, and learn how to get the most out of everyone. That probably just makes them more effective overall.
Like maybe smart is not the word, but it certainly makes you more effective figuring out how to work well with people and get the best out of them.
Yes, you learn a lot about people in the process of having to build a company and a team.
Yeah, and I was thinking about what you said, Harj, about Patrick Collison and how he went from being a programmer to learning how to run a company. I realized that's not just Patrick Collison; actually, all of our best founders are exactly like that.
Sometimes people wonder how we can fund like, you know, 18-year-olds with no prior management experience and expect them to build a big company someday, and it’s exactly that. It’s because they treat it like an engineering problem.
This is something I took away from reading the Larry Ellison Oracle biography—there are a bunch of nuggets in there. One really interesting one: there was a period when he completely ignored the finance function at the company because he thought it was the most boring thing in the world.
Then Oracle went through a near-death experience where they weren’t on top of their budgets and expenses, and just almost ran out of money. He, like, forced himself to have to get on top of it so they would not die from running out of money again.
And like the only way he could do it was to be like, “Okay, this is just like—I’m going to treat this like a programming problem. It’s just numbers; it’s process. I’m just going to optimize this as though I would like coding.”
He got really into it and just actually started really enjoying the whole process of process optimization, which then fed back into Oracle in a weird way. Because Oracle’s business was a lot of like going to companies, figuring out which of their processes were messy, and trying to sell them software to solve it.
He experienced the problem himself, and then he built the solution that he wanted, and then he was able to sell that solution to everybody else because everybody else had the same problem basically.
But again, it all came from an engineer who wanted to avoid a messy people process problem—just taking it on and treating it like a programming problem and actually becoming more effective at it than the team that was built to work on it.
I see this a lot with our technical founders who are doing B2B companies, where they treat their sales org this way. They definitely treat sales like a programming optimization problem.
Yep, it’s like stereotypical actually.
So what do we think the net effect of this is going to be overall? If AI, you know, makes us all more productive, if AI can start taking away some of the junior programming work, do we see a lot more unicorns? Does it make it possible for one company to become worth like a trillion dollars, or do we see a long tail of lots of unicorns started by much smaller teams, and do we think the teams will even shrink?
Because if we go back to predictions in the early 2000s, a lot of people were predicting that as programming got more efficient, companies would get smaller. In the 90s, to build an internet startup, you had to build everything yourself: you had to have people who knew how to rack servers, hire people who knew how to optimize databases, hire people to run payroll. Then all of that got turned into SaaS services, infrastructure, and open source, so you could focus on just your core competency.
There were a lot of people who were predicting that this meant that companies would have fewer employees because they wouldn’t need all those people that you needed in the past.
I remember racking servers, but I bet a lot of people watching this have never even set foot in a data center—they don't even know what that phrase means. What's a rack? How does that even work? You just click a button on a website and, boom, I have a server, right? That's how it works, right?
Yeah. We were looking at some data earlier, and what we discovered is that it didn't happen, actually. Companies didn't get smaller, and Harj discovered the reason why: there's a concept in economics called the Jevons paradox, which is essentially that once you make a service more efficient—you make it cheaper to deliver—you increase demand for it.
So you actually just get more consumption. Like examples would be Excel spreadsheets making it easier to do financial analysis did not decrease the number of financial analysts; it actually just increased them.
I think typewriters being replaced by word processors is kind of another example where, yes, the strict role of being a typist and a typewriter went away, but the demand for people with word processing skills went way up.
So software became cheaper to make and programmers became more efficient, but that did not reduce the demand for programmers; it actually increased it—which I think we see in the number of companies applying to YC.
There was an essay from PG just 15 years ago in which he couldn't imagine a world where we'd get more than 10,000 applications per year, and at this point, we're getting over 50,000 applications per year. More than that!
It's becoming easier than ever to start a company because so much infrastructure has been built, but at the same time, the bar to be a good founder is higher. I think it requires even better taste and more craftsmanship to be the best founder now, right?
Yeah, sometimes we joke: if our younger selves went through YC now, would we have gotten in? It's actually very competitive now because the baseline is just so much higher.
Yep, so in the end, you still need a computer science or engineering degree to really build that taste and craftsmanship—to know what to build and how to build it well. You need to whisper to the AI, to the LLM, but how do you even whisper to it if you don't know how any of this stuff works?
There’s this amazing Rick and Morty meme where there’s a little robot on the table passing butter, and he goes up to Rick, the master, and he’s like, “What is my purpose?” And it says, “You pass butter.”
Then he goes, “Oh my God!” The funniest thing about that is there are so many people in the world who basically have that job, and they’re not like robots, they’re human beings. Their 9 to 5 is something that is incredibly rote and not that invigorating or exciting to them, and yet that’s like sort of their entire lives.
How could we not celebrate the fact that now we have more software, more tooling, and potentially robotics coming around the way? Like, that might free that person from having to pass butter, and they can go off and do something else—something more creative.
Ideally, maybe they learn to code, maybe they learn to actually create things off in areas that OpenAI or Microsoft or whichever tech giants can't reach. Those companies can't do everything; they probably shouldn't do everything.
Not only that, it’s not clear to me that Lina Khan will allow that. So, you know, given that—maybe that’s the opportunity. Rather than just a few companies worth a trillion dollars—my genuine hope, and I think that we’re trying to manifest this world—is actually thousands of companies worth a billion dollars or more.
And, you know, some of those might have a thousand employees, some of them might only have 10, some of them might even be just one founder sitting there doing that thing. But at the end of the day, ultimately making it better for a real customer, a real problem, a real thing in society that frees someone from being a butter-passing robot.
That's a human! I think this is such a good point, Gary, and I 100% agree. Part of it is that we're in a world of abundance of sorts, where it's easier to build things, easier to get the infrastructure up and running if you get the right opportunity, and there's a lot of capital too if you know where to tap.
But the bottleneck is: can you enable human capital to flourish and match that opportunity? Can you get the smart, ambitious people who can do it in front of this capital? This is why our job right now is one of the coolest: we get to enable a lot of people who might have been passed over in other situations, and give them a chance to build companies that will go up against the trillion-dollar ones, right?
Just a thousand billion-dollar companies! We have all lived through and hugely benefited from this trend: the more powerful technology becomes, the easier it is to get a company off the ground. Open-source software is the clearest example.
I just think back to when Jared and I first moved here, when Rails was first taking off, and that was a huge innovation. It made me feel so powerful! Before, I had to use Java, and it was so disempowering. Then you had Rails, and you had Heroku come in and make it easy to deploy; you could essentially be your own sysadmin.
All of that clearly made it easier for anybody to get their company off the ground. It didn't necessarily mean these companies got much smaller; we didn't get lots of 10-person unicorns, but we certainly cast a wider net of people who could prove they had an idea people wanted, with early signs of traction, which is what you need to attract the human capital and the actual capital to go out and scale these things.
I think even if we end up in a world where AI can't build your perfect complex distributed system and scale it to 100 million active users, even if it just means slightly more people can take their idea, turn it into something, get it off the ground, and get their first thousand users or their first bit of revenue, the human capital will come, the financial capital will come, and we'll just get more of these things, which is great for everyone!
I love that, Harge! That's one prediction I think we can definitely agree is going to come true, and how cool that is! Because there must be so many great ideas that never get off the ground, simply because the person who has the idea can't go zero to one, can't get that flywheel going, can't get it in front of the right people.
I felt very lucky. I grew up in Chile, in the middle of the desert, where hardly anybody worked on computers, and I learned on the internet. Going through YC was one of those moments that changed my life and its trajectory. It really uplifted me, and I hope that happens for a lot more of the people we get to work with.
Well, so it sounds like the verdict is in: Learn to code! Yes, you should learn to code. Sorry, Jensen; you are brilliant, but you are not right every single time.
I think one thing that is uncontroversial is that over the last 10 years, more unicorns have been started each year, right? And that's because technology has made it easier for people to get their ideas off the ground. I think AI only accelerates that trend, right?
I think we should just expect to see more unicorns started per year than ever, because it's easier than ever to go from an idea to a prototype to your first users. At the same time, being able to program is still table stakes, because you need that foundational knowledge, that good taste, to build something great.
You only get that good taste by going and studying engineering and computer science. The most important thing to me, which I really want to manifest in the world and which I think we get to do all the time at YC, is this: there are people out there who are craftspeople, or who could be craftspeople, and those are the people who are going to go on to build the future.
So with that, we’ll see you next time.
[Music]