
AI, Startups, & Competition: Shaping California’s Tech Future


54m read
·Nov 5, 2024

Hey guys, please find your seats. We're going to get started. It's great to see you all! We have a very exciting topic today with some very exciting speakers. I'm so excited to be here with you to talk about AI competition and startups.

Before I recognize our speakers and special guests from Washington, I want to tell you a little bit about why we're here and YC's public policy program. Last October, we hired Luther Lowe to be YC's first head of public policy. Where is he? Alright, you'll hear more from him later.

Why does YC have to have... like, why should YC have a DC presence? It's pretty simple. For the longest time, only the largest players in the tech ecosystem have had a seat at the table. But to paraphrase Pericles, you may not be paying attention to politics, but politics is paying attention to you. YC wants to fight for little tech in Washington and make sure all of you have a seat at the table.

One of the most interesting spaces where we're seeing a lot of fierce competition is at the model layer of AI. Today, we're going to be speaking with the top antitrust enforcers whose job it is to keep the markets free and fair. Joining us from Washington are FTC Chair Lina Khan and DOJ Antitrust head Jonathan Kanter. These two are probably best known as Joe Biden's trustbusters and lead antitrust enforcement for the FTC and DOJ. Combined, they oversee literally thousands of attorneys and nearly a billion dollars per year of budget.

Of course, we will end things with a fireside chat with our very own State Senator Scott Wiener. We also have some important partners I want to recognize for this event, including the Omidyar Network, Electronic Frontier Foundation, and Mozilla, who will be on a panel later.

Finally, it wouldn't be a YC event if we weren't ultimately bringing it back to our founders. So, in between these presentations, we'll actually have amazing demos exhibiting some of the incredible stuff that you all are building. As we stand at the forefront of the AI revolution, understanding the regulatory landscape is crucial for every founder in this room, and that's why I'm thrilled to introduce our next speaker, Jonathan Kanter.

As Assistant Attorney General for the Antitrust Division at the US Department of Justice, he is at the center of shaping how AI will be regulated and how competition in this space will unfold. He's currently overseeing major antitrust cases that could significantly impact the AI ecosystem. His insights on antitrust concerns in AI, data monopolies, and fostering fair competition in this rapidly evolving field will be invaluable as you build and scale your AI startups.

So, without further ado, please join me in welcoming Jonathan Kanter to discuss the future of AI regulation and competition.

Thank you, Gary, and thank you to YC and Arzil, our hosts for today. I'm thrilled to be here. I have some prepared remarks and I'll deliver them, but I think it's important for me to kind of start really from the heart, which is I'm an antitrust enforcer.

What does that mean? What does that mean about the work that you do? I want to be very clear about this. Our goal is to make sure that you can build businesses that succeed, that thrive, and that take on new ideas, new problems, and solve them. Antitrust enforcement is about making sure that there is opportunity for people to have great ideas, to build funding around those ideas, and to succeed. That's our goal.

Our goal is to make sure that the marketplace has room for all the people who are building businesses, for all the people who are starting up companies, for all the people who are daring to take on incumbents and build new products and services that are disruptive and successful. In a nutshell, that's what we're doing and that's why we're here.

It's no secret that AI is creating a transformative moment. Technological revolutions are inflection points that lead to major change and major opportunity. It is an invitation for new technologies and technological innovations. Platforms are an invitation for people to invest, to build, to create. That is wonderful.

It is an exciting time, and AI is one of those really important transformations. But it's important to make sure we get it right. It's important to make sure that these technologies are not only safe and sound, but that there is enough opportunity so that businesses of all sizes in all places throughout the country have a fair opportunity to compete on the merits of their innovations and on their skill.

It got me thinking about regulation versus enforcement, and I wanted to clarify that. I'm a law enforcer, not a regulator. My job isn't to regulate a market; it's to enforce the law when companies violate it, so that competition can thrive.

I went back and was thinking about Justice Louis Brandeis, someone that is near and dear to my heart. There is something he said in 1912 that I think is very important for today. In an address, he asked a question—the same question I think that I want to present to you all to think about today. Shall we regulate competition or shall we regulate monopoly? It was an incisive and astute question then.

At the time, there were a range of political leaders that believed monopolies were inevitable and so we should let them emerge and then we should just regulate the heck out of them and have very strong oversight. But Justice Brandeis rejected that idea. He said—and these are his words—that private monopoly in industry is never desirable and is not inevitable. Monopolies were not inevitable then, and monopolies are not inevitable now.

The premise of the question presented by Justice Brandeis is that it is more desirable to rely on competition and market forces than the inevitability of concentrated markets where the only remaining solution is to oversee them with invasive regulation. In all markets, but especially markets involving technology and key inflection points, the first line of defense is competition.

As Justice Brandeis said, no system of regulation can safely be substituted for the operation of individual liberty as expressed in competition. To do so, he said, would be an attempt to substitute regulated monopoly for competition.

So today, there's no substitute for the competitive process. Competition is what ensures continued innovation and that improvements are not hampered or impeded by monopoly choke points. Competition is what ensures that consumers, entrepreneurs, and workers benefit in the long run. It ensures the freedom of opportunity that is available to all, including—and especially—companies that invest in new business and new business ideas.

Strong competition and preservation of competition, especially early on, can go a very long way. Promoting competition early on enables as many innovations as possible to survive and flourish.

Promoting competition early on can ensure that it's not only the big companies that succeed but also the small businesses and the startups.

Promoting competition early on can avoid the more drastic regulatory measures down the line that would necessarily be required once monopolies emerge. The importance of competition in AI at the early stages cannot be overemphasized.

We must examine market realities and promote sound policies that sustain opportunities for free and fair competition in both AI infrastructure and AI applications. For foundation models, promoting competition now will go a long way, because it will shape how they develop and create oxygen for everybody to breathe.

How companies develop AI applications and how those applications are ultimately brought to market will shape the field.

The state of competition in AI hardware and chips, for example, will have enormous consequences for competition throughout the AI ecosystem. How we strike the balance between AI's use of creators' content and providing just compensation for those creators will shape the future of all industries.

If we can protect competition, AI holds the promise of generating new businesses and new markets that can truly form the new Main Street, both figuratively and literally.

Little tech companies, small and medium-sized businesses, have the potential throughout the country to become the new version of The Corner Store being built on Main Street. That Main Street doesn't have to just be here in San Francisco; it could be anywhere throughout the country. That is the world, that is the promise that we are trying to preserve.

Startups that build on new and emerging AI can create small businesses and tools that support small business ecosystems that don't just work for the biggest companies with massive global scale, but work for other corner stores.

Because it supports businesses anywhere, it has the potential to lead to the dispersion of opportunity so that we can see communities thrive—not just urban communities, but rural communities. That will create new job opportunities and lead to a rising tide that lifts all boats.

So we must do everything we can to promote competitive markets, especially as these technologies develop. We have to ensure that the markets are deconcentrated and that opportunities are dispersed. We should strive for an even playing field that allows as many companies and as many ideas as possible to be tested, created, and improved, and that paves the way for new businesses and job opportunities.

To achieve all of this, the first step is a deep understanding of market realities. How does the stuff actually work?

In May, for example, we held a really exciting workshop on promoting competition in AI at Stanford, along with Stanford University, where we sought to hear from diverse perspectives, and we did—from scholars but also content creators, VCs, and startups in all different areas of the economy, everything from tech for tech to tech for healthcare.

Those conversations, at least for me, were invaluable, because it's how we learn, it's how we grow, it's how we deepen our understanding of these issues so that we can have sound policy and make sound policy choices.

We've also worked closely with other agencies to develop our thinking and understanding. Just this past Tuesday, we released a joint statement on competition in generative AI, foundation models, and AI products along with our counterparts at the FTC, which you'll hear from later today, the EC, and the UK.

Through this joint statement, we pledged our use of available powers to promote effective competition in AI so that all companies—from the little ones to the big ones and everyone in between—will have the opportunity to succeed.

And we're going to continue to make every effort, and that means healthy markets. That means room for all business models, whether they are closed source or open source. It means making sure that business models, especially open source business models, have the opportunity to build platforms, stay open, and stay extensible, so that they can allow the rising tide to lift all boats and we can see the promise of innovation be realized across the entire continuum and help form the new Main Street.

So with that, I'm really excited to sit down and have a conversation about some of these issues with Gary. And we'll go from there. Thank you. [Applause]

Jonathan, thanks so much. Thank you. It's great to be here. I just have a few questions, and I think we're both hoping to open it up to you all. But just to kick it off, the FTC has received a lot of attention and praise for their statement on sort of the importance of protecting competitiveness in open source, especially at the foundation model level.

You know, what does protecting competitiveness mean in practice? Does it sometimes mean basically doing nothing or, you know, just waiting to see how the market plays out? How do you think about that?

So I think we have to think about, first of all, we need to follow the facts and the law wherever it takes us. And so, if there's no problem to address, we don't need to step in and try to address it for its own sake, right? Our goal is simply to say, okay, what's happening in the marketplace, and does it necessitate action?

In the vast majority of instances, we don't do anything, whether it's mergers or conduct. There's a lot of business that happens without us having anything to do with it. We focus on the instances where there are bottlenecks to competition, where there are impediments to the kind of development and growth that we were talking about earlier, that I was just talking about a few moments ago.

In your opinion, is the AI Market at the foundation layer competitive, concentrated, or something else?

So we're seeing it play out. And so from my perspective, we have the promise for competition, and that's really exciting. And so, you know, certainly we've seen the development of proprietary foundation models, and then we've seen, as I alluded to in my remarks, the development of open source, which can be a force multiplier for competition, and that's really exciting.

One of the things that we talked about in our AI statement—one of the things I've talked about—is making sure that open source stays open, right? So that people continue to build on it, and people can continue to invest in the value that can be created and realized by small and medium-sized tech companies who are innovating on top of it.

To me, that is competition in all its forms, everything from proprietary to open. And so a little bit of work early on by us and our communities to make sure that those models can thrive, to make sure that they can succeed can go a very long way to ensuring that there is not just competition at the model level, but competition for those who develop on top of models.

Can you comment on how AI connects to, you know, the two US v. Google lawsuits and your US v. Apple lawsuit that was filed in the spring?

So I can't talk about our active cases. But what I can say is that we want to make sure that the next technological inflection points, particularly surrounding AI, have the opportunity to usher in a new generation of competitors.

Our cases—if you read them carefully—are designed to protect the ability of others, especially new innovative startups, to compete and realize the fruits of their investments. Those are the underlying concepts that animate our cases.

To the extent that many of the new technological inflection points emerging now revolve around AI, it's important to me that we not only correct the wrongs of the past, but that we create opportunity, so that those new and emerging technologies, many of which are being created by people in this room, have the chance not just to see the light of day, but to realize their full competitive and economic potential.

With that, let's open it up for questions.

Thanks! My name is Eric; I really appreciate the work that you've been doing on the Apple front. I started two companies that are relevant in that space, a smartwatch company and a messaging company. I know that you can't comment directly on the case, but for people who have, let's just say, much less of a background, what can we expect? Like, what are the major milestones that you see over the coming months and years in that type of case, for example?

And then the second question is, can you think of any examples of remedies or enforcement actions that you might ask the courts for in those or similar cases?

Sure! Let me start with an animating principle that cuts through a lot of our cases, and you can see it in some of the cases you mentioned. You can see it in some of our recent cases involving concert tickets and others. A lot of our cases focus on moats that entrench monopoly power.

Rather than just thinking about how a monopoly in one market leads to another, a lot of the fact patterns that we're confronting involve how some of these technologies that you're talking about—some that you have experience with—have the potential not just to create new markets of their own, but to relieve and reduce dependence on the massive tech companies or monopolies in the middle.

A lot of our cases are about those flywheels, those moats, those feedback effects that insulate those core monopolies from competition and allow them to continue to extract and stifle innovation. So what you can expect throughout our enforcement of these current cases, but also in the future, are the Antitrust Division, the DOJ, my team fighting for competition to open up, with respect to those practices that deepen the moat to allow those technological disruptors to create new opportunities and reduce dependence.

I think the cases that you'll see in the future and the kinds of remedies that we're going to demand, shall we say—if we're successful—will focus on that core vision.

Awesome! Thank you. Hi, I'm Angela and I'm working on Andi; it's an AI search engine. I especially loved the comments you made about regulating monopolies and promoting competition. My question for you is, one thing we noticed with Google is, we think that they have a browser monopoly, not a search monopoly anymore. And so for AI, I think there's this huge potential to unlock more opportunities and competition. How do you think we can prevent a monopoly from rising up again when it comes to AI specifically?

I think we have to study the past and learn from it, right? And so I think part of it goes to, okay, what are the core elements of a monopoly in the tech space? And again, this kind of goes to Eric's question and my answer, which is it's more often than not the feedback effect.

Where is the inflection point coming from, and what are the assets that a company might own in order to prevent those new competitors from emerging, right? And so you'll see, for example, in some of our cases involving things like search, that we talk about browsers. We talk about the kinds of advertising technology that might create interoperability and mobility for folks who want to participate in multiple ecosystems.

We talk about messaging and other kinds of technologies that, again, allow people to move cross-platforms and create the kind of technological innovation that we believe is important. So that's kind of how we're thinking about it, but our starting point in every case now is what is the core monopoly and does the conduct insulate that core monopoly from legitimate, innovative competition?

Hi, my name is Jeremy Nixon; I'm the founder of AGI House, which is this very charismatic community of hackers, and we all have some complex beliefs about monopolies. Specifically, Google Brain, where I worked for three and a half years, is responsible for the creation of the transformer, of sequence to sequence, and of BERT. And certainly that was, you know, sort of paid for with monopoly money. I would say similar story for Bell Labs.

It seems like a lot of these monopolies do sometimes use their resources for enabling, you know, basic research and technological progress. And so do you have a complex take on how to make sure that that kind of thing is possible?

Similar story, I guess, if monopoly profits—or at least competitive profits—aren't available for a company like OpenAI to take billions of dollars and create large compute clusters, just to have it be whittled away by Meta releasing Llama 3.1 405B. How can we have VCs who will be interested in investing in the creation of the next supercluster, which creates the next great model?

Very complex questions, but hopefully, you have some sense of it.

Yeah, I mean, no, that's great. They're great questions. So first, let me be very clear; we are not against companies succeeding and growing. We just believe that competition drives them to do better. The largest companies can innovate and bring great things to market, and they do, and I don't see that stopping anytime in the near future.

But what we do care about is making sure that those companies feel competitive pressure to keep delivering and to keep delivering in ways that perhaps might stretch them even further than they think they can go. So that's part of what we're looking at.

Second, return on investment, again, is also something that we think is totally achievable and desirable, and there are plenty of companies making a lot of money. But business model competition is important too, right? There are different ways to monetize, different ways to deliver and capture value, and I think we want to let those innovations compete with each other.

But we also want to see those different business models compete with each other as well. These are great questions. A lot of fun.

Thanks! [Applause]

Hi, I'm Izzie Lerner at JLL. I really appreciate the work you guys are doing around, you know, kind of breaking down some of these moats, especially when they involve business practices. But the other side of it is we see a lot of, I'd say, certain firms using litigation as a way to stifle competition—they have capital that a startup does not have and can really stifle competition that way, which isn't really a technological moat, but it is a scale moat in the other direction.

Yeah, listen, the bigger companies get, the more aggressive they can get, and the more they can try to intimidate. A lot of the conduct that we see throughout the economy relating to monopolies often resembles bullies, and bullies can often throw lawsuits and other things your way.

I think it's important for us as enforcers, obviously there's a legal system; people have the right to use it, but we focus on making sure that companies are investing to the greatest extent possible and delivering results. To us, the more competition they face and the more that competitors can thrive free from impediments, the more that value will be given back to the public.

Hey, my name is Joe Dubet; I'm a founder of a company called Eden. My question for you is around the potential conflict between protecting competition and protecting the consumer. I see a lot of instances where protecting competition is totally in the consumer's best interest. You want as many pizza places as you can have in your neighborhood, stuff like that.

But in the situation where, like, let's say a Craigslist or an Instagram or something where there's a benefit to having everyone in one place, how do you choose when you have to choose between protecting competition and maybe protecting consumer outcomes?

Sure! I don't think that we have to choose between the two; as an antitrust enforcer, I care about consumers. The animating principle for me is that competition yields great outcomes, and I didn't come up with that concept on my own. That's what Congress said when it wrote the antitrust laws. It said, hey, we want competition. Why? Because we think that yields better outcomes; it promotes opportunity; it creates a more free and open and democratic society.

So that's the concept that we've been asked to protect. I think when you think about antitrust and the work that we do, it's not saying you're too big or you're too small or you should win or you should lose—it's simply saying, hey, you shouldn't engage in anti-competitive conduct that makes it impossible for someone else to compete if they have a better idea or if they have a better business model. It says, hey, if the dominant firm is going to gobble up rivals in order to deepen its moat rather than allowing some of those firms to continue in the market and succeed and potentially go public or create new opportunities, then those could be antitrust violations.

We're not focused on engineering outcomes. We're simply focused on addressing conduct that might get in the way. I think that's kind of how we think about it. But I do want to be very clear; I don't think there's that trade-off. I think good, healthy competition allows everybody to benefit.

Hi, my name is Hansen and I'm the founder of Light Sky. You're talking to a group of founders who've come through YC, and you know, obviously we hope that we're the founder that gets to IPO, but a lot of our companies will end in acquisition.

And so there's been a kind of chilling effect on acquisitions due to the kind of investigations that the DOJ and the FTC have started recently. And so can you comment a little bit on what your perspective is on the chilling effect of investigations on acquisitions?

So I appreciate the question; it's an important one. I start with the data, right? We're all data-driven. The fact of the matter is the overwhelming majority of mergers never get looked at—single-digit percentages do—and then even a small sliver of that actually gets challenged.

So the deal economy is moving; deals are happening. We only focus on the deals that have problems, and the deals that have problems tend to be the deals in concentrated industries, markets, or deals where you have a disruptive rival who might be taking on an embedded entrenched incumbent. Those are the kinds of transactions that we tend to focus on.

I think that doesn't impede exit; it doesn't impede the deal economy. In fact, we see again plenty of deals happening. It's a narrow set of deals that satisfy a narrow set of criteria that are the problem.

The other thing, and I think this is important, is we want companies to have the opportunity to go public. We want companies to build. Exit is a perfectly normal way for companies to realize value; great! But the world we want is one where companies can build, find a path to becoming public, do so in a way that is cost-effective, manage their resources as a public company, and then grow into thriving, durable businesses. That not only increases the number of firms out there building new products and services, but frankly, for startups, it increases the number of viable acquirers who can bid for assets.

What we want to avoid is this cycle where there are only two or three firms—sometimes one—in each individual space that is a potential acquirer. Instead, we want to encourage companies to build, and to do that there need to be pathways: one, to sell to someone who isn't an antitrust problem. I think that happens all the time, but we wish it would happen more, because there are more companies out there that wouldn't create antitrust problems.

Second, we want a world where companies feel comfortable that they can pursue long-term investments, including going public, so that they can generate value and grow and thrive in a way that is durable.

Jonathan, it seems like the operative word that comes to mind in a lot of the things you're talking about is the word "open."

So, you know, on the foundation side, it's open source in terms of models; in terms of public companies, a public company is like an open cap table. I know you probably can't comment on some of the active cases, but openness in terms of platforms, and being able to have competition on platforms, certainly seems from the outside like a goal.

It's like ultimately markets are about openness, whether it's a cap table, the source code, you know, or literally the platform that allows other people to thrive.

Opportunity—openness and opportunity—are the foundations for competition. And so we are really excited about making sure that we can help preserve that. One of the other themes across a lot of our cases is a phenomenon you might call open, dominate, close, right? Start open, get dominant, and then start closing aspects of your open ecosystem so that others can't compete effectively.

And I think it's about making sure that folks who are creating, who are dependent on platforms and services, who are induced to join and build on ecosystems, can realize the fruits of those investments.

And I think that's what a healthy ecosystem looks like—where we have that openness. But I think you're exactly right, Gary. It's true in terms of having openness and diversity of models and business models, but also openness to be able to succeed without having to exit through acquisition, and the openness of a public company that you mentioned, like cap tables, where the public can essentially participate and benefit in the building of these great US businesses.

One more quick one. Lauren Goode from WIRED, thank you so much for doing this. I think what someone was alluding to earlier was basically the recent manifesto from Andreessen Horowitz about little tech versus big tech and the idea that the regulatory agencies are using brute force to squash little tech through M&A.

I think you addressed that well, but one of the other things that was in the manifesto was this idea that the government is proposing a tax on unrealized gains, and I was wondering if you're able to comment on that and generally what your response is to this idea, since there are a lot of little tech people in the room.

I'm just a small country antitrust enforcer, so I'll focus on my little area of the world, which is competition policy, and let tax experts focus on those other things. But when it comes to making sure that businesses, including startups, have the ability to thrive, compete, and grow, that's our focus when it comes to thinking about the tech ecosystem.

And I'll kind of finish where I started, which is, you know, I think competition is healthy. Competition is a force multiplier, and strong, competitive markets can often minimize, if not entirely eliminate, the need for invasive regulation. So when we have a technological inflection point like we have now, it is the time to make sure that we're doing a little bit of work on the front end to keep our markets competitive so that we can see as many of our great business ideas succeed and thrive and stay in the market and even become big open public businesses.

Thank you so much! I really appreciate it. Thank you! [Applause]

Hi, everybody! I'm Luther Lowe, head of public policy for YC. So at YC, we're all about our founders. And so between each of these policy conversations, we're going to have demos from our community.

So I'm going to hand it over to Mike Knoop and Shawn! We're going to sort of feature how some of our community is using AI.

Alright! Hi everyone! First of all, thank you Gary and Luther for organizing this event and inviting me to give a demo. I'm Mike; I'm one of the co-founders and the head of AI at Zapier. My personal mission is to pull forward the future so that more people can experience the extraordinary benefits of future technology today.

Zapier is workflow automation software. We are used by millions of businesses across the United States. The vast majority of our customers are SMBs, small and medium-sized businesses—think one-to-five-person companies.

And I find automation can often be a little abstract to explain. So what I wanted to do is show you all a quick demo of how Zapier automation and AI are working together to serve many of the businesses across the US.

Let's imagine you're an SMB—say a coffee shop—and you get hundreds of customer emails every week. Your team's responding to these emails, but you, as the manager, the owner, want to get a summary of all the things that people are writing about.

So that you can orient your attention and spend time improving your business. I know we have a lot of policymakers and representatives in the room too. So maybe in your world, you can imagine you're getting hundreds of emails from constituents, and you want a summary of the top things on everyone's mind so you can be better informed and make better decisions about how you spend your time.

So that's the problem, and this is a solution. This is a Zap; it's what we call our automation. What I'm going to do is kick this off by sending in an email to this fake email Dropbox that I set up for this fake coffee business.

"Hey, are you open during Memorial Day?" Okay! And I'll hit send on this email. And let me talk through what Zapier does here.

So this is a Zap that I set up ahead of time. Zapier is going to receive any emails sent to that address and add them to a spreadsheet. Once we collect 100 emails in that spreadsheet, Zapier is going to release those and ask ChatGPT to create a summary and extract insights from them.

Then it’s going to email me back a good summary of all of the things that I've collected so far. This is the spreadsheet that I've got. I've got a bunch of emails already in here that are sort of fake and representative. And there we can see my email that I just sent got automatically added to the bottom.
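To make that flow concrete, here is a minimal sketch of the same collect-then-summarize loop in plain Python. This is not Zapier's implementation—just an illustration. The OpenAI chat-completions client is assumed, and handle_incoming and send_email are hypothetical placeholders standing in for the inbox and spreadsheet integrations.

```python
# Rough sketch of the batch-and-summarize flow described above (illustrative only).
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
BATCH_SIZE = 100           # summarize once 100 emails have accumulated
collected: list[str] = []  # stands in for the spreadsheet rows


def handle_incoming(email_body: str) -> None:
    """Hypothetical trigger: append each new email; summarize when the batch is full."""
    collected.append(email_body)
    if len(collected) >= BATCH_SIZE:
        summary = summarize(collected)
        send_email("owner@coffeeshop.example", "Customer email summary", summary)
        collected.clear()


def summarize(emails: list[str]) -> str:
    """Ask a chat model for the themes and insights across the whole batch."""
    joined = "\n---\n".join(emails)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize the main themes, complaints, and requests in these customer emails."},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content


def send_email(to: str, subject: str, body: str) -> None:
    """Hypothetical placeholder for the outbound email step (SMTP, an email API, etc.)."""
    print(f"To: {to}\nSubject: {subject}\n\n{body}")


# Simulate one incoming message; the summary only fires once the batch fills up.
handle_incoming("Hey, are you open during Memorial Day?")
```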

And if I flip back to my inbox—hopefully, there we go! Just got the summary back and this is a live summary that was automatically generated using Zapier and ChatGPT—it looks like some order shipping issues, product quality concerns, refund requests. Alright! I got a lot to work on for this business!

So this is a good example; this is just one of the millions of things that folks use Zapier for. Zapier supports 7,000 integrations on our platform, and our users plug and play those integrations to build things that they care about.

One of the things that I've learned over the last couple of years is that AI and automation, I think, are synonymous in a lot of our users' minds. The promise of both is software that just does more work for you. And I think this insight was what led us to go all-in very early on AI.

In fact, Zapier went all-in on AI—I personally did—in the summer of 2022, almost six months before ChatGPT was released. Zapier is running about 10 million fully automated AI tasks per month at this point, and I think that makes Zapier the largest fully automated AI platform in the world.

And that usage has given us a front row seat into what's the promise of the technology, what are people trying to do with this stuff, and also what are the fundamental limitations that they are running into.

What I'm seeing is this: the number one problem right now is low user trust, and that's due to the low accuracy and low reliability of language models. Because we've been tracking this for a few years now, it seems those problems are not going away with scale. And this was my first hint that the underlying language model technology might be inherently limited.

Now, of course, AI is getting faster and cheaper, and we have more model choice than ever thanks to open source. But as I dug in, I found something really surprising—the AGI innovation environment in 2024 is incredibly weak, and I think very few people realize this.

We're stalling out on our progress towards AGI, and I think this is highly surprising. It certainly was to me, because big tech, the AI labs, and a lot of safety labs loudly promote this narrative that scale is all we need to reach AGI: just plug in more training data, make the models bigger, and we'll get there. But this isn't true.

Now, I grew up in St. Louis, Missouri, famously the Show-Me State, so let me show you what I found that led me to believe this. This is a chart we put together of a bunch of the most popular AI benchmarks out there—all the blue lines—and you can see that over the last couple of years, progress on these benchmarks has been accelerating toward human-level performance.

What I found is that there's one eval benchmark called ARC, and it's the world's only AGI eval; in contrast to all of the other evals out there, it's the only one that actually measures progress toward AGI rather than a more narrow form of AI.

This benchmark was created in 2019—notably, before large language models and before the era of scale that we have today—and it remains unbeaten. Now, you might be curious, like, "Okay, what's so special about this benchmark?"

So here's an example. This is one of the 100 tasks or a representative example of one of the 100 tasks that are in the benchmark, and your goal is to try and, as a human, figure out what's the pattern between the task input and the output and then map it to the test.

In this case, you might see, okay, it looks like we're trying to fill in these blocks with dark blue squares, and we can check our answer. And there we go—confetti, we got it right! So this is an example task. These tasks are designed to be incredibly straightforward and simple for humans, and yet empirically no AI system can solve these 100 tasks today.
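For readers who have not seen ARC, the published tasks are JSON files with "train" and "test" lists of input/output grids, where each cell is an integer from 0 to 9 naming a color. The toy task below is a made-up illustration of that format with a deliberately trivial rule, not an actual benchmark task.

```python
# Toy illustration of the ARC task format (not a real benchmark task).
toy_task = {
    "train": [
        {"input":  [[0, 8, 0],
                    [8, 0, 8]],
         "output": [[8, 8, 8],
                    [8, 8, 8]]},  # toy rule: paint every empty (0) cell dark blue (8)
    ],
    "test": [
        {"input":  [[8, 0],
                    [0, 8]],
         "output": [[8, 8],
                    [8, 8]]},
    ],
}


def solve(grid):
    """Hand-written rule for this toy task only: fill empty cells (0) with color 8."""
    return [[8 if cell == 0 else cell for cell in row] for row in grid]


# A solver only counts if the rule inferred from "train" also holds on "test".
for pair in toy_task["train"] + toy_task["test"]:
    assert solve(pair["input"]) == pair["output"]
print("toy task solved")
```

The difficulty in the real benchmark is that the rule is different for every task and must be inferred from a handful of examples, which is exactly what current systems struggle to do.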

This is incredibly shocking to me now. ARC remains unbeaten despite the fact that two months ago I launched ARC Prize, a million-dollar competition to anybody who could get AI to beat this benchmark, and so far, no one has been able to. That's on top of the last five years of evidence as well.

In the past, AI innovation was driven by curiosity, by sharing, by exploring new ideas. Instead, today—fueled, I think, in part by big tech and big labs' interests—we have scaling dogma. We have closed frontier research; we have LLMs as the only paradigm.

ARC shows we need new ideas to discover AGI. I think the industry's misrepresentation of reality is distorting decisions by not only policymakers but venture capitalists, founders, and even students. There was $20 billion invested in LLM startups in 2023 by my count, and only $100 million into AGI startups working on new ideas. Half the students that I meet don't want to work on AGI because they think it's a solved problem, and policymakers are even now considering innovation rate limiters like SB 1047 because the scaling dogma is so loud.

I want to repeat, I think the AGI innovation environment we find ourselves in in 2024 is super weak. The world is basically betting that a single commercial lab in isolation is going to be the one to figure out and discover AGI. That is in direct contrast to how we got here and why I'm even standing in front of you. We got here through open progress, open science, and open sharing. That's not just true of AI; that's true of all science ever. In fact, I think I'd go as far as to say if you're someone in the audience who maybe thinks that we should pause AGI development or stop AGI development entirely, you should be pretty happy with the world that we find ourselves in in 2024.

But if you care about reaping the benefits of AGI in our lifetime like I do, we need to work to undistort this market. I think policy and incentives should push towards open AGI research and not rate-limit the crappy version of AI that we have today. Now, I'm putting my money and time where my mouth is. I created ARC Prize with the express goal to provide a very public measure of progress—or lack thereof—towards AGI, and I hope ARC Prize can play a small role in motivating researchers to work once again on new ideas and openly share them and help steer AI policy away from regulatory capture and rate limits and back towards open innovation.

Thank you! [Applause]

Hi, I'm Shawn Modi, CEO of Capital AI, from the YC summer batch. It's an honor to be here among such esteemed guests. Capital is a product to create persuasive content from data. In fact, it's so powerful that some of Washington DC's best lobbyists are already using it to win multi-million dollar appropriations and create some pretty important policies.

We're refining our product here at Y Combinator, and we already have 231,000 users. We are in the process of finalizing a pretty large engagement with Politico, which I'll give you a sneak peek of today.

So this is the nerve center of Capital. Under the hood, we're automating research. So we're looking at the entire open web and pulling in the most trustworthy links for the query at hand. We're also bringing open-source and frontier models and a native document editing experience. When you click into this prompt box, you'll see the different types of formats that you can select from.

The article is our most used format; you can describe what audience you're addressing and say how long you want it. We're optimizing right now for quality of content, not speed, so you'll notice a little bit of lag as we put in a prompt. You can say what sources you want to pull from here: Google search, peer-reviewed, or have it fully hallucinate and say none.

Then you can select what content blocks you want: images, web charts, generated charts, metrics, tables, and direct quotes. So we'll put one in here. Let's say Gary asked me to do one actually. So nothing like putting myself on the spot.

So, SB 1047 pros and cons; let's let it rip! So now we're going to push this out. It's creating a headline for me—"Understanding SB 1047 Pros and Cons"—and it's going to pull this information in line.

While that's doing its thing, I'll show you some that I teed up already. These give you a kind of cross-section of my brain, of what I'm curious about. So I learned about this company ASML, which makes these very powerful machines that play a critical role in the global supply chain of semiconductors.

So my prompt here was the geopolitical risks of ASML controlling key infrastructure in the procurement of semiconductors, and you can see... well, let's just jump back to SB 1047. Here we go, so it's loading.

You can see the sources we're pulling from—this is from Allen, this is from Gradient Flow, Dee Piper. It's dynamically generating a table. So let's just see—SB 1047, also known as the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

It's a proposed state bill aimed at regulating the development and deployment of advanced AI models in California. This bill specifically targets AI systems with significant computing power, particularly those capable of performing over 10 to the 26th power operations.
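For a sense of scale on that 10^26 figure, here is a rough back-of-the-envelope sketch using the common approximation that training a dense transformer costs roughly 6 × parameters × training-tokens floating-point operations. The model sizes and token counts below are illustrative assumptions, not figures from the demo or the bill.

```python
# Back-of-the-envelope check against the 1e26-operation threshold mentioned above.
THRESHOLD = 1e26


def training_flops(params: float, tokens: float) -> float:
    """Common rule of thumb: dense transformer training costs ~6 * params * tokens FLOPs."""
    return 6.0 * params * tokens


# Illustrative (parameters, training tokens) pairs -- not any specific company's numbers.
examples = {
    "7B params, 2T tokens":    training_flops(7e9, 2e12),     # ~8.4e22
    "70B params, 15T tokens":  training_flops(70e9, 15e12),   # ~6.3e24
    "405B params, 15T tokens": training_flops(405e9, 15e12),  # ~3.6e25
}

for name, flops in examples.items():
    status = "over" if flops > THRESHOLD else "under"
    print(f"{name}: {flops:.2e} FLOPs -> {status} the 1e26 threshold")
```

On this rough accounting, even very large recent training runs land under 10^26 operations, which is one reason the threshold is usually described as aimed at future frontier-scale models.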

Interesting! So if I click on this paragraph, I can go deeper. So unlike other chatbots, this is a document creation experience, so it's making suggestions for me and it's understanding the context of this article: explain the potential impact on businesses and consumers. That sounds interesting.

So I'll let that prompt rip, and it's going to pull from the open web and reprompt that specific paragraph, and it's asking me for a clarification. I'll do that as Federal Government.

Scrolling down, we have dynamic tables generated here; looks like these are the different provisions of the bill. It's pulling out key metrics—so 10 to the 26 operations threshold for models.

We don't plagiarize; when there's a quote, we're going to give attribution to where it's from and you'll be able to see precisely the source—in this case, this is for Milan, it looks like it's from Forbes.

And arguments against: stifling innovation, talent exodus, impact on open source, economic impact—and it pulled in a chart. This isn't the sexiest chart, but we scan the entire page and understand its context, and it's like having an analyst tell you that the chart illustrates the increasing share of AI-related research and development expenditures at US colleges and universities, as well as the rise in the proportion of newly founded AI startups among all tech startups, and so on.

And I can go deeper at the article level here too: convert this into a dialogue between a policymaker and AI developer discussing the pros and cons long-term economic innovation impacts. I'll do that.

So I'm quickly understanding this topic and creating a piece of content that could be useful for my job. And then let's jump over here; I'll show you a few other things I've been looking at.

I was curious about the Republicans' approach to FTC antitrust, so I generated a pretty in-depth report here of how the Republicans and Democrats approach FTC antitrust enforcement. And if I want to go and take that, let's say I'm a journalist at Roy and I want to break some news or have an opinion piece, I can go take this and put it into a Google Doc and work with it.

And here it is with all those tables embedded and citations at the end showing where the data came from. A few other things before I show you the Politico preview. These are the economic pitfalls of taxing unrealized gains. My prompt was: write me a speech to an audience of reporters, startup technology founders, and US federal regulators on why a tax on unrealized gains is a bad idea for the United States economy, including lots of data.

So here you can see, as of paragraph one, there are seven sources that went into this, all linkable. If I wanted to add a paragraph in between, it understands the context of the two paragraphs, and it'll make suggestions: analyze the implications for economic growth and investment.

Cool! So I'll add that in line. If I need an image for my article, I'll convert this paragraph into a synthetic image. So I pass this paragraph to a diffusion model, and what you're going to see is a synthetic image in line.

Looks like this: how I feel! So if I don't like the styling of it—in this case, it's not my favorite look. I'm going to change it to something maybe more cyberpunk. Where's my cyberpunk aesthetic? Let's let that go.

There's my new paragraph above with citations as we go down. You can see more tables, so it's giving you the information you need and in a fast form factor from very complex data sources.

We're looking at 12 different pages here that were fully scanned and synthesized into a cohesive narrative and fact-checked. And yeah, we'll keep going here.

I'll show you this—there was a big incident yesterday in Alaska of a joint Chinese and Russian military exercise. So I was curious about that—what are the national security implications? So I asked for an in-depth analysis of why the Chinese and Russian militaries flew a joint exercise near Alaska.

And I got a pretty good assessment of the history of their joint exercises and what it means for the Arctic. I went deeper into the natural resource context of China and Russia's interest in the Arctic. So you get the sense.

So now zooming out, you can all use Capital today for the affordable price of $8 a month—that's if you use the discount code "Design Capital." But now I'm going to show you something that's up and coming and pretty exciting. We are friends of publishers. You know, I live in Washington DC. Some of my closest friends are journalists, and I think right now journalists have a kind of, you know, keep-it-at-arm's-length relationship with AI.

We want journalists to thrive. We cannot live in a world without journalists. Journalists play a critical part of democracy. So that's why Politico reached out to us, because they want to embed AI into their political product.

Pretty much every major lobbyist, every major government official, every Congressional office, every Senate office uses Politico Pro to understand the inner workings of Washington DC.

So this is their product, and what you'll notice is a little widget called AI Reports powered by Capital AI. So if you are curious about how Congressman R's Con versus Blake Moore voted on Ukraine aid or on Israel aid or on other key matters, you can put in that prompt, and it will look at all the Politico Pro proprietary data and give you an interactive report.

So I'll give you a preview of what that actually looks like. If you open the left nav, you'll see "Create an AI Report." We're going to introduce this concept of projects, so you can add multiple URLs, articles, and data sources to what is basically a bucket, a folder.

In this case, this is 2024 funding bills. I'm going to add those sources to my prompt box, and then it's going to suggest different prompts of how I can interrogate that data and write a report.

So I'm going to go with the suggested version: write an article explaining Bill S205 and its impact on the Interior and EPA.

And just like you saw in our own product, you're seeing an in-depth analysis of this bill, and this was created in Capital. Obviously, this prototype for Politico Pro will be a full document editing experience, where you can edit text, highlight text, and add specific data to sections.

You can add YouTube videos, PDFs, Word docs, legislative data, regulatory data, and prompt it. So here you can ask Capital to provide examples of specific provisions of the bill or you can take a testimony from court, add it, and say highlight the key parts where X happened.

But it doesn't end there. If I X out, at the document level, this is where you'll manage all your sources. You can remove sources, add them, and regenerate things. If I want to turn this into a tweet storm or a piece of content to influence policy—not only understand policy but influence it—I'll turn this into a tweet storm and then prompt it: write a series of tweets in opposition to this bill, targeting spending.

And what I have here is a tweet storm opposing this bill, citing articles from Politico Pro and other relevant sources. And if I click that, I can reprompt it using AI, so the cycle continues.

So that's a little preview of shapeshifting, as we call it, with our product. You can see that original source here, and how AI will be infused throughout the entire analysis process. This goes into the regulatory aspects as well. For your average citizen, it's hard to understand the legalese of many of these things, and we want to democratize that.

So, yeah, that's what we're building with Capital. I'm open to your questions afterwards, and thank you for your time!

Thank you, Shawn! Up next we have FTC Chair Lina Khan, who is going to make some remarks, and then she's going to participate in a fireside chat similar to the one Gary did with Jonathan.

We’ve flown out Timothy B. Lee. Does anybody read "Understanding AI?" Outstanding Substack. I highly recommend it. It's really helped a non-technical person like me kind of understand what's going on under the hood.

And so I'm just going to quickly introduce Timothy, and then Chair Khan will come up and speak, and then Timothy will come up and interview her, so we'll have some time for Q&A from the audience.

Timothy is a technology journalist and analyst who writes the popular Substack, "Understanding AI." With over a decade of experience covering tech economics and public policy, he brings deep expertise to his analysis of artificial intelligence and its impacts.

He previously wrote for Ars Technica, Vox, and The Washington Post. He has a master's degree in computer science from Princeton. Through his writing, he really helps readers grasp how AI works and how it's reshaping our world.

His newsletter explores topics like large language models, self-driving cars, AI's effects on labor markets, and maybe SB 1047. I don't know, maybe we'll see an article about that soon.

The philosophical questions raised by artificial intelligence, too. And then Lina M. Khan is the Chair of the Federal Trade Commission, appointed by President Biden in 2021 at the age of 32. You know, this is Lina Khan's third YC event. The first was—I was like two weeks into the job—we had her come out last November, and then she came out again when we did a policy event in Washington called Remedy Fest.

Not only was Lina Khan there, but we also had Elizabeth Warren and JD Vance. So, it was this interesting assembly. And then at the event, JD Vance said, "My favorite member of the Biden Administration is Lina Khan."

And so now that he's been named the vice presidential pick on the Trump-Vance ticket, everybody is scouring the transcripts of our first real policy event in Washington, Remedy Fest. So that was kind of a cool thing that happened last week.

Anyway, sorry, I got distracted. Legal scholar and antitrust expert Khan rose to prominence with her 2017 Yale Law Journal article, "Amazon's Antitrust Paradox," which challenged traditional antitrust thinking.

Prior to her FTC role, she was an associate professor at Columbia Law School and served as counsel to the US House Judiciary Committee's Subcommittee on Antitrust. She is known for her innovative approach to competition policy, advocating for more aggressive enforcement against tech giants and other large corporations.

Her work has reshaped the conversation around monopoly power in the digital age, earning her recognition from publications like Time Magazine and Foreign Policy. She has a JD from Yale Law School and a BA from Williams College.

Please welcome to the stage Chair Lina Khan! [Applause]

Hey everybody! It's so great to be here. As Luther mentioned, this is my third YC event. Before each one, Luther really tries to convince me to ditch the blazer and just wear a t-shirt so I can fit in, but I haven't been able to get there, so I hope you'll forgive the blazer.

Well, it's so great to be here and, as you all know, there's been a lot of talk recently in Washington about how to address AI. Gary has been just at the forefront of making sure that little tech has a seat at the table, and so I really appreciate that advocacy and championing the role of little tech.

So, against this backdrop, it's really exciting to be here to get to talk about how we can make sure that America can harness the full opportunity for innovation and growth that AI could present.

And I believe that a vital ingredient to making sure that these markets stay open, honest, and competitive is going to be vigorous antitrust enforcement and competition policy. Fair competition is what ensures that the best ideas win, that markets are actually rewarding businesses with the best skill and talent, rather than rewarding just the businesses that can exploit special privileges or advantages.

These are really the markets that allow tinkerers and dreamers to come up with a big idea, take a risk, and thrive. Historically, these open, fair, and competitive markets have been the engines of American growth and innovation, especially at moments of major technological disruption and transformation like the one that we're seeing now with AI.

These are the moments when we in particular see breakthrough ideas—ideas that can really shift the paradigm—but history shows that these are also the moments when incumbents can try to tighten their grasp even if it means abusing their power because they have the most to lose.

So these companies can potentially exploit bottlenecks and stop the flow of innovation. They can leverage their market power to pick winners and losers rather than allowing the best ideas to win.

To make sure we fully harness the benefits and opportunities of AI, we have to make sure that we're staying vigilant and that we're making sure that upstarts can really compete on a level playing field here.

I believe that this is going to require, alongside other things, a real commitment to a philosophy of openness across the industry, which means open markets—but also open architecture, open ecosystems, and open-source software.

And so I'm going to share a little bit about how we at the FTC have been thinking about that for decades. Open-source software has driven competition, innovation, and opportunity in the tech space. Under certain conditions, it has allowed researchers and developers to build on each other's discoveries more easily and more efficiently.

Take Linux, for example, which was developed more than 30 years ago with the help of the community that sprouted up around it. It has allowed countless technologies to flourish, from cloud services to supercomputers, and now it powers some of the world's most important systems—from the New York Stock Exchange to the particle accelerator.

It would not be an exaggeration to say that nearly all of Y Combinator's most successful companies would probably not even exist without open-source software and its community. So many of the technologies that we use every day would still be just ideas because the barriers to entry posed by proprietary software can sometimes be too high.

Openness is more than just a feel-good philosophy; it's been a proven catalyst of innovation, which is why it has attracted hundreds of billions of dollars in venture capital funding to help startup founders bring their ideas to life.

So it’s worth thinking about what open source could mean in the context of AI, both for you all as innovators, but also for us as law enforcers. Of course, the definition of open-source in the context of software does not neatly translate into the context of AI.

I know there's a lot of discussion still figuring this out, and I've been really grateful to chat with some of the leaders that are really thinking through what a shared understanding of openness could mean in the context of AI. As a starting point, instead of saying open-source models, the FTC has been using the phrase "open weights models," specifically referring to AI models for which the weights are publicly available.

It's still early days, but we can already see that open weights models have the potential to drive innovation, promote competition and consumer choice, and lower costs and barriers to entry for startups like the ones that incubate here.

The FTC is clear-eyed about the conditions that need to exist to make this vision come true. As you know better than anybody, it isn't easy to build an AI foundation model; it’s resource- and capital-intensive from retaining engineering talent to accessing expensive computing infrastructure to acquiring the necessary data.

These conditions have allowed in some instances the biggest technology companies to get a leg up in the AI race. If you control the raw materials, you can control the market and shut out smaller companies who don't have the infrastructure to compete.

This can lead to fewer exciting products made by even fewer companies and can come at the expense of both innovation and consumer choice. But with open weights models, more smaller players can bring their ideas to market.

There is tremendous potential for open weights models to promote competition across the AI stack and, by extension, spur innovation across the stack too. Open weights models can reduce costs for developers so that they can focus their capital on products and services rather than expensive model training, and they can free up venture capitalists to pursue promising new applications of models rather than starting at square one with model development.
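As a rough illustration of that cost point, here is a hedged sketch, again not from the remarks, of how a developer might adapt an open-weights model with lightweight LoRA adapters instead of pretraining a model from scratch. It assumes the Hugging Face transformers and peft libraries; the model ID and target module names are illustrative placeholders.

```python
# Sketch only: adapt an existing open-weights model with LoRA adapters rather
# than pretraining from scratch. Uses the Hugging Face "peft" library; the
# model ID and target_modules are illustrative assumptions, not a real recipe.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("example-org/open-weights-7b")  # hypothetical

config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically a small fraction of the base model
```

The design point is that only the small adapter matrices are trained, so the capital goes into the product and the data rather than into compute for full-scale model training.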

At a basic level, open weights models can liberate startups from the arbitrary whims of closed developers and cloud gatekeepers. Developers that use open models, we've heard, can feel like they're less likely to have the ground shift under them because one day a model owner decides to significantly increase the cost of API access.

This is the type of free and fair competition that the FTC is committed to promoting. You all deserve the opportunity to build in that type of environment—free from the undue sway of monopoly power.

Of course, open weights models can also come with some challenges and risks. First, it matters who develops and owns the open model. We've seen firms deploy opportunistic open-first, closed-later strategies, where they use openness to draw in developers, scale up quickly, and enjoy the network effects and data feedback loops that this scale provides.

Then, once they've ridden openness to dominance and locked in a user base, they can flip the switch and become closed instead. Policymakers across government need to be vigilant of this playbook, and antitrust enforcers already are.

For example, in one of the FTC's lawsuits against Meta, we alleged that Meta, in the Web 2.0 era, had initially allowed third-party developers to build apps that integrated with Facebook, only to reverse course later and restrict access for those that challenged it.

The Justice Department's lawsuits against Google allege a version of this story as well. Second, the licensing terms attached to the model are crucial. A developer could make a model's weights available under licensing terms that ultimately restrict developers from using it to meaningfully compete in the marketplace.

Third, there's a serious risk of bad actors using open models for deeply concerning activities. AI can be used to clone voices, to defraud consumers, and to create sexually explicit imagery of people without their consent, including children. This isn't speculation—we are already seeing it.

And so even as we embrace our commitment to openness, we need to be clear, right, about these risks. Openness is just one way to promote a level playing field for startups. There are other ways that we can shape policy to structure markets so that they really promote free and fair competition that allows the best ideas to rise to the top.

Founders have told us that they struggle to compete because dominant players may be monopolizing access to great talent, to critical inputs, and to valuable data. The FTC is doing our part to be vigilant and to open up the market and ensure that founders have what they need to start and scale their businesses.

First, talent. Some of the best engineers in America have been bound by restrictive non-compete clauses. We've heard from startups that secured funding and entered the market, only to find that they can't grow because the talent pool has been locked up by the dominant players. Earlier this year, the FTC banned non-compete clauses. This will allow 30 million Americans, including developers, designers, and researchers across the country, to move freely from company to company with their innovative ideas and unique expertise.

Second, we're making sure that you all have access to the critical inputs to build AI tools and models. One of the first merger lawsuits that we filed after I joined the agency was to block Nvidia's attempted acquisition of Arm, which would have given one of the largest chip companies control over the technology and designs that its competitors rely on to develop their own chips.

Our team determined that the merger would undermine competition and hamstring innovation of next-generation technologies. We also launched an inquiry into the partnerships between dominant AI players and cloud service providers to better understand the impact of these relationships on competition and to make sure that no company is exerting undue influence or gaining special access because of the other firms that they happen to work with.

Third, we're making clear that major companies cannot collect data by surreptitiously changing their terms of service. This is not only an invasion of consumer privacy but can also distort the playing field and give these firms a leg up over smaller competitors that have fewer avenues for data collection.

At the helm of all of this work is the FTC's new Office of Technology, which we launched last year to deepen our in-house expertise. Our team includes folks who have built open-source software and who have deployed this technology to millions of Americans. We have active members of the open-source community and alumni of startups that incubated right here at YC.

I want to close by addressing what I see as a common misconception about the tech sector in policy circles, which is that tech is a monolith, that the interests and incentives of all companies are the same—be it the scrappy startup or a giant firm.

The many conversations that I've had with founders in Silicon Valley have really underscored just what a misconception this is. We've heard a lot of excitement from founders about the potential of this moment but also some wariness about the existing control and influence that some of the incumbents have.

So it's really just underscored the importance of those of us in DC hearing from a broad swath of market participants—little tech as well as the big guys. So I'm really grateful to you all for hosting me and really excited for the conversation. Thank you so much!

Hi everybody! I'm Timothy; I write a newsletter called “Understanding AI,” as Luther mentioned earlier. Thank you, Chair Khan, for coming. So, I feel like the classic antitrust case is like a company that's been around for decades and is behaving anti-competitively, and it's clearly a monopoly.

With large language models, we have kind of the opposite, where you have this new market that basically didn't exist at all two years ago, and we went from kind of nothing to OpenAI having billions of dollars of revenue. Can you walk me through how the FTC thinks about a market like that? Because it seems like it's pretty dynamic right now, but you could certainly imagine a company, you know, ending up in a monopolistic position.

What do you kind of watch for to see whether there's anything the FTC or any regulators more generally should be doing or whether it's just kind of to stay back and see how the market develops?

Yeah, it's a really good question and one that's really informed by the experience of the last few decades. In the early 2000s, we similarly had a moment where you saw a lot of dynamism among a set of companies, and there was a sense in DC that the best thing for the government to do, including in the context of antitrust, was to just step back and entirely get out of the way, with the thesis being that these markets are so fast-moving that even if you have dominant players, even if you have monopoly power, that will just naturally be eroded in the marketplace.

So that was kind of the governing thesis. You saw a whole set of acquisitions be allowed to go through—hundreds of acquisitions by, you know, the four or five big tech companies. Fast forwarding to today, I think we've realized that aspects of that thesis were really misguided and really under-indexed the significant barriers to entry that you see in digital markets.

I think there had been an assumption that, you know, people don't have to build massive factories to enter these markets, and so we're just going to see a lot of entry that will discipline any monopoly power. Instead, what we saw was that, you know, network externalities, the feedback loops of data really tended to lead some of these markets to tip such that early players, players that were able to get a leg up and really deepen their moats could end up actually solidifying dominance and monopoly power in a more long-term way.

So that even once they started acting in ways that weren't in the best interest of their customers, be it on the business side or the user side, it became much more difficult for entrants to come in and discipline that. So over the last few years—including under the last administration—we’ve seen major lawsuits filed.

A subset of those lawsuits are trying to correct for some of the inaction of the past, especially with regards to some of these acquisitions, but also with regards to some of what we now realize was anti-competitive conduct.

That experience has been a cautionary tale in terms of the downsides of just assuming that these markets are so dynamic, they're so fast-moving, so we don't have to worry about monopoly power at all.

Of course, the specific ways and tactics that dominant firms might use today in the AI context and in the LLM context will look different, but I think we've learned that there can be real downsides to not addressing monopoly power early, when it's being created or used illegally, and instead allowing it to deepen and become much more difficult to fix on the back end.

The other thing that we really lose is a lot of innovation, right? I mean, when monopolists are unlawfully able to squash some of these innovators or create a chilling effect so that some of these ideas don't even come to market, a remedy that you might get five or six years down the line can really never fully make up for all of that innovation that was lost.

And so that's why we think it's especially important to try to prevent this stuff on the front end rather than playing catch-up and cleanup on the back end when you’ve already lost so much innovation and opportunity.

So I know that in addition to antitrust enforcement, the FTC is also a major agency in terms of privacy regulation. I know one of the concerns a lot of companies and consumers have about large language models is sending their data to a big company like Google or OpenAI. Are there any privacy benefits to open-weight models in terms of being able to run these models on-device or, you know, on a company's own premises, as opposed to sending that data out to some third party?

It's possible. I mean, I know we've seen some of those claims be made. I think we’d want to take a closer look before clearly saying that there are definitive privacy benefits here. But those are absolutely concerns we've heard from consumers but also from enterprise customers, who will say, you know, sometimes we are using some of these models, and we don't have entire confidence that our proprietary information is not being fed back in.

We think the terms of service are written in a way that would protect us from that, but I'm not sure, and those terms of service actually changed last week, so maybe it used to be safe but now isn't. There's just a general uncertainty about what could be a pretty core issue for a lot of firms: whether their competitively sensitive and proprietary information is or is not being fed back in.

So we've been putting out kind of notices and blog posts for the market to just put firms on notice about what types of practices could be illegal under our consumer protection authorities.

And so the consumer protection laws prohibit conduct that is deceptive, and so if companies are representing one set of data practices but actually engaging in a different set of data practices, that could be deceptive. You could also just deceive people by omitting certain key information, and so we think how this information that's being fed into these LLMs is being used is a material term.

So we've made clear that companies need to be upfront about that.
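The on-device point raised in the question above can be made concrete with a short sketch, not something the FTC has endorsed: once an open-weights model's files are cached locally, inference can run with hub access disabled, so prompts and internal documents never leave the machine. The environment variable below is a real Hugging Face offline switch; the model ID is a hypothetical placeholder.

```python
# Illustrative only: run a locally cached open-weights model with network access
# to the model hub disabled, so no prompt or document leaves the box. Assumes
# the weights were downloaded earlier; the model ID is a hypothetical placeholder.
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # tell huggingface_hub to refuse network calls

from transformers import pipeline

generate = pipeline("text-generation", model="example-org/open-weights-7b")
result = generate("Summarize this internal memo:", max_new_tokens=50)
print(result[0]["generated_text"])
```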

So we've had several people this morning mention SB 1047, the California legislation on safety. One of the big criticisms of that is that it might discourage companies from releasing open-weight models because it places a lot of requirements on companies that release those open-weight models. Is that something that you're concerned about?

I’m not going to weigh in on this specific bill. But as a general matter, I think it's tricky, right? Knowing how to strike the right balance, and I think candidly, policymakers do have a lot of legitimate concerns about how some of these tools could be misused, and they're already seeing some of that play out in real-time.

And so they feel an obligation to the public and their constituents to protect them from some of these risks. That said, I think it would be a real missed opportunity if we didn't position openness to really get a fair shot in this marketplace, especially from a competition perspective, where we've seen that in this type of market, where the entry costs can be quite prohibitive, openness can be especially important because it just lowers those entry costs.

It can focus people on innovating in more efficient places, and so I do think we need to craft policy in a way that preserves the opportunity for that openness.

In your remarks, you mentioned the risk of fraud from deepfakes and AI. I was wondering if you could tell me a little bit more about what you're seeing. I mean, this is definitely something people always talk about, but you're probably in a unique position to actually see the magnitude of it. Is this still a mostly theoretical thing that's happening in a few places, or something that's really happening on a large scale?

So the FTC does police fraud, and, you know, for years now, one of the most common sources of fraud that we see in our complaint database has been impostor fraud. So people pretending to be the government or pretending to be a legitimate business, calling up people pretending to be the IRS saying, "Hey, you owe thousands of dollars, and if you don't, you know, wire over money in the next 24 hours, we're going to come and arrest you." These types of scams are actually still quite prominent.

What we've seen with fraud with some of these AI tools is that they risk turbocharging these existing types of fraud because it allows scammers to disseminate fraud much more quickly, much more cheaply on a much wider scale. With voice cloning tools in particular, we're also seeing an uptick in complaints of, you know, people pretending to be somebody's grandkid and, you know, faking their voice quite effectively.

We've been thinking through, you know, how we can be most effective. One thing we recently did was launch a voice cloning challenge, where we actually invited the public to submit ideas to us about how either we as a law enforcer or people in the public could potentially detect voice cloning fraud in real time.

So, are there technologies or ideas already out there that would allow you, when you're getting a call, to figure out in real time if this is a real voice or is this a fake voice? We got some really interesting ideas and just announced a couple of months ago the three winners, and so we spotlighted those.

We hope that will help them, you know, get more uptake in the market and really be able to scale. We're using all of our law enforcement tools, but we're also thinking more creatively about how else we can encourage innovation, not just in fraud but in fighting and detecting fraud.

Questions? Yes, in the back?

Thank you! I'm Greg Miller, building 80, an AI legislative affairs platform. Curious to know, we see a lot of arguments that some of these firms are so big that even the big firms need to engage in anti-competitive behavior or acquisitions to compete against the other big firms.

We see this a lot in the airline industry. We also saw, you know, Google needs to pay Apple $20 billion, and then Apple can compete on other fronts, right? I'm wondering how you think about that, how you think about these companies being so large that they need to do certain things to compete against the other really large companies?

As a general matter, we never think illegal practices are okay even if other people are doing them. You know, it raises an interesting point and one we definitely hear in all sorts of contexts. You know, famously in healthcare, I think we've seen a lot of consolidation, and some of these markets can get into a mode where you start seeing an arms race of companies that are having to bulk up, and they say, "We're having to bulk up because the other side bulked up," and it kind of ends up being this race to monopolization.

And, you know, we're law enforcers, so we look at what the laws say. The laws say you actually need to be able to stop consolidation in its incipiency. Congress identified fair competition as the way that we need to structure our markets to make sure that the public can benefit, be it consumers, small businesses, or workers, and so we have a mandate to actually crack down on some of that.

I can certainly see that if you have years of inaction on the law enforcement front, companies can feel, "Well, I'm not being protected from this monopoly abuse, and so my only way to survive is to become a monopoly myself," and you know, I think that really underscores the importance of enforcing the law so that firms are not put in that position.

But I think the risks of consolidation and unlawful monopolization can be so high that it's really not a good idea to double down on that. I mean, in addition to the harms that we've been talking about, I think we're also seeing the resiliency cost that can come with centralization.

Right? If you're centralizing production, you can also be centralizing risk so that when you have a single disruption, be it a natural disaster or a contamination in a factory, that can end up having cascading effects, and so the whole system breaks down.

And so I think, you know, the resiliency risks, the innovation risks of some of the centralization are so high that I don't think we want to go there.

Hi, my name is Emily, founder of Clearly AI. We're doing security and privacy from first principles, so I had a question around your role to protect against deceptive practices in consumer privacy, specifically around how you're thinking about the fact that, today, it is a default opt-in world for training on your data.

Specifically thinking around Slack's announcement that it would opt everyone in by default to training on your Slack conversations. You have to send an email to an archaic address in order to opt out. How is the FTC thinking about that versus switching over to a default opt-out world?

Yeah, it's a really, really important question and one that we've been grappling with. A few months ago, we actually did a roundtable with a bunch of creators—people across the creative professions from authors to graphic artists to fashion models—people who have seen that, you know, they wake up one day and overnight their life's work has been by default fed into some of these models, and in some cases, is now being used to compete with them without their permission.

One of the big issues that they flagged is that all of this has been an opt-out model rather than an opt-in model. In addition to, you know, the lack of compensation, which they think fundamentally needs to change, we've put out some notice for firms about instances in which changing your privacy policies overnight, so that people are opted into that scraping by default, could potentially be illegal in some cases.

You really can't mislead people; you shouldn't make it extraordinarily difficult to exercise the choice to actually opt out. And so that's something we're tracking closely.

Hi there, my name is Nikelle. I'm founder of a company called Rescript in the current batch. I'm wondering if you could talk a little bit about the overturning of the Chevron case and what that means for companies who have to now, I don't know, comply with the different sort of regulatory environment.

Good question! I wasn't expecting a Supreme Court jurisprudence question here, but I love it! This is definitely a moment where we are seeing courts revisit some key principles that had been in place for many decades.

And so it's been a moment of uncertainty and some destabilization, even for government agencies, as we wait to see how the court rules on some pretty foundational questions. You know, in the Chevron context, in particular, the court has basically said that even when a statute is ambiguous, the courts should not necessarily defer to the agency's interpretation of that ambiguity.

And the courts really are the ones that should be in the driver's seat in figuring out what the law says and means. So, you know, generally, there's much more skepticism of administrative agencies and of just deferring to that expertise.

In practice, you know, importantly though, the court also said that although we are overturning Chevron, that doesn't mean it's open season to revisit all of these previous decisions that relied on Chevron, so they're kind of limiting the potential destabilization of that.

But it's something, you know, we're following closely. We think everything we're doing is already in compliance with these new rulings, but it's definitely an evolving landscape. There are going to be some big decisions teed up for the court next term as well that we're going to be following closely.

And so you have to kind of stay nimble and see where things land.

Good afternoon! Thank you very much for sharing your insights. I'm not a founder; I'm representing the European Union here on the US West Coast. My name is K. So we're innovating a lot in regulation but unfortunately not so much in AI and these other things.

But I mean my question, and thank you for your leadership and also for the very close cooperation between the FTC and particularly, I mean DG competition and increasingly also DG
