Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity | Lex Fridman Podcast #452
If you extrapolate the curves that we've had so far, right? If you say, well, I don't know, we're starting to get to like PhD level and last year we were at undergraduate level, and the year before we were at like the level of a high school student. Again, you can quibble with at what tasks and for what— we're still missing modalities, but those are being added, like computer use was added, like image generation has been added. If you just kind of like eyeball the rate at which these capabilities are increasing, it does make you think that we'll get there by 2026 or 2027. I think there are still worlds where it doesn't happen in 100 years. The number of those worlds is rapidly decreasing. We are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years.
The scale-up is very quick. Like we do this today; we make a model and then we deploy thousands, maybe tens of thousands of instances of it. I think by the time you know—certainly within two to three years, whether we have these super powerful AIs or not, clusters are going to get to the size where you'll be able to deploy millions of these. I am optimistic about meaning. I worry about economics and the concentration of power. That's actually what I worry about more: the abuse of power. AI increases the amount of power in the world, and if you concentrate that power and abuse that power, it can do immeasurable damage. Yes, it's very frightening.
This is a conversation with Dario Amodei, CEO of Anthropic, the company that created Claude, which is currently and often at the top of most LLM benchmark leaderboards. On top of that, Dario and the Anthropic team have been outspoken advocates for taking the topic of AI safety very seriously, and they have continued to publish a lot of fascinating AI research on this and other topics.
I'm also joined afterwards by two other brilliant people from Anthropic: first, Amanda Askell, who is a researcher working on alignment and fine-tuning of Claude, including the design of Claude's character and personality. A few folks told me she has probably talked with Claude more than any human at Anthropic, so she was definitely a fascinating person to talk to about prompt engineering and practical advice on how to get the best out of Claude. After that, Chris Olah stopped by for a chat. He's one of the pioneers of the field of mechanistic interpretability, which is an exciting set of efforts that aims to reverse engineer neural networks to figure out what's going on inside, inferring behaviors from neural activation patterns inside the network.
This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Dario Amodei.
Let's start with a big idea of scaling laws and the scaling hypothesis. What is it? What is its history, and where do we stand today?
So I can only describe it as it relates to kind of my own experience, but I've been in the AI field for about ten years, and it was something I noticed very early on. So I first joined the AI world when I was working at Baidu with Andrew Ng in late 2014, which is almost exactly ten years ago now.
The first thing we worked on was speech recognition systems. In those days, I think deep learning was a new thing. It had made lots of progress, but everyone was always saying we don't have the algorithms we need to succeed. We were only matching a tiny, tiny fraction; there's so much we need to discover algorithmically. We haven't found the picture of how to match the human brain.
And when— you know, in some ways, I was fortunate. I was kind of, you know, you can have almost beginner's luck, right? I was a newcomer to the field and, you know, I looked at the neural net that we were using for speech, the recurrent neural networks, and I said, "I don't know, what if you make them bigger and give them more layers? And what if you scale up the data along with this?" I just saw these as like independent dials that you could turn.
I noticed that the model started to do better and better as you gave them more data, as you made the models larger, as you trained them for longer. And I didn't measure things precisely in those days, but along with colleagues, we very much got the informal sense that the more data and the more compute and the more training you put into these models, the better they perform.
Initially, my thinking was, "Hey, maybe that is just true for speech recognition systems." Maybe that's just one particular quirk in one particular area. I think it wasn't until 2017 when I first saw the results from GPT-1 that it clicked for me that language is probably the area in which we can do this. We can get trillions of words of language data; we can train on them. The models we were training in those days were tiny; you could train them on one to eight GPUs, whereas now we train jobs on tens of thousands, soon going to hundreds of thousands of GPUs.
When I saw those two things together, there were a few people, like Ilya Sutskever, who you’ve interviewed, who had somewhat similar views. He might have been the first one, although I think a few people came to similar views around the same time. There was Rich Sutton's bitter lesson, there was Gwern who wrote about the scaling hypothesis, but I think somewhere between 2014 and 2017 was when it really clicked for me, when I really got the conviction that, "Hey, we're going to be able to do these incredibly wide cognitive tasks if we just scale up the models."
And at every stage of scaling, there are always arguments. When I first heard them, honestly, I thought, "Probably I'm the one who's wrong, and all these experts in the field are right; they know the situation better than I do." I think you know—there's the Chomsky argument about you can get syntax but you can't get semantics. There's this idea, oh, you can make a sentence make sense, but you can't make a paragraph make sense. The latest one we have today is, you know, we're going to run out of data or the data isn't high quality enough or models can't reason. And each time, every time, we manage to find a way around or scaling just is the way around. Sometimes it's one; sometimes it's the other.
I'm now at this point where I still think, you know, it's always quite uncertain. We have nothing but inductive inference to tell us that the next few years are going to be like the last ten years, but I've seen the movie enough times, I've seen the story happen for enough times to really believe that probably the scaling is going to continue and that there's some magic to it that we haven't really explained on a theoretical basis yet.
And of course, the scaling here is bigger networks, bigger data, bigger compute—yes, in particular, linear scaling up of bigger networks, bigger training times, and more and more data. So all of these things are almost like a chemical reaction: you have three ingredients in the chemical reaction, and you need to linearly scale up the three ingredients. If you scale up one, not the others, you run out of the other reagents, and the reaction stops. But if you scale up everything in series, then the reaction can proceed.
Of course, now that you have this kind of empirical science art, you can apply it to other more nuanced things like scaling laws applied to interpretability or scaling laws applied to post-training or just seeing how does this thing scale. But the big scaling law, I guess the underlying scaling hypothesis, has to do with big networks: big data leads to intelligence.
Yeah, we've documented scaling laws in lots of domains other than language, right? So initially, the paper we did that first showed it was in early 2020, where we first showed it for language. There was then some work late in 2020 where we showed the same thing for other modalities like images, video, text to image, image to text, math. They all had the same pattern. And you're right, now there are other stages like post-training, or there are new types of reasoning models. In all of those cases that we've measured, we see similar types of scaling laws.
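As a rough illustration of the kind of power-law form these scaling-law fits typically take, here is a minimal sketch with assumed constants; the function and numbers are purely illustrative, not Anthropic's internal fits:

```python
# A minimal sketch (assumed constants, not Anthropic's internal numbers) of the power-law
# form scaling-law papers fit: loss falls smoothly as parameters N and tokens D are scaled
# up together, and stalls if you scale only one "reagent".
def loss(N: float, D: float, E=1.7, A=400.0, alpha=0.34, B=410.0, beta=0.28) -> float:
    return E + A / N**alpha + B / D**beta

for scale in [1, 10, 100, 1000]:
    N, D = 1e8 * scale, 2e9 * scale          # scale parameters and data "in series"
    print(f"N={N:.0e}, D={D:.0e}, predicted loss={loss(N, D):.3f}")

# Scaling only the model while holding data fixed: improvement flattens out,
# because the B / D**beta term becomes the bottleneck.
print(f"N=1e11, D=2e9,  predicted loss={loss(1e11, 2e9):.3f}")
```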
A bit of a philosophical question, but what's your intuition about why bigger is better in terms of network size and data size? Why does it lead to more intelligent models?
So in my previous career as a biophysicist—so I did physics undergrad and then biophysics in grad school—I think back to what I know as a physicist, which is actually much less than what some of my colleagues at Anthropic have in terms of expertise in physics. There's this concept called one-over-f noise and one-over-x distributions, where if you add up a bunch of natural processes, you get a Gaussian, but if you add up a bunch of differently distributed natural processes—like if you take a probe and hook it up to a resistor—the distribution of the thermal noise in the resistor goes as one over the frequency. It's some kind of natural convergent distribution.
And I think what it amounts to is that if you look at a lot of things that are produced by some natural process that has a lot of different scales—not a Gaussian, which is kind of narrowly distributed—but if I look at kind of like large and small fluctuations that lead to electrical noise, they have this decaying one-over-x distribution.
And so now I think of patterns in the physical world. If I think about the patterns in language, there are some really simple patterns; some words are much more common than others, like "the." Then there's basic noun-verb structure. Then there's the fact that nouns and verbs have to agree— they have to coordinate. Then there's the higher-level sentence structure, and there's the thematic structure of paragraphs.
The fact that there's this hierarchical structure—you can imagine that as you make the networks larger, first they capture the really simple correlations, the really simple patterns, and there's this long tail of other patterns. If that long tail of other patterns is really smooth, like it is with the one-over-f noise in physical processes like resistors, then you could imagine as you make the network larger it's kind of capturing more and more of that distribution.
That smoothness gets reflected in how well the models are able to predict and how well they perform. Language is an evolved process, right? We've developed language; we have common words and less common words, we have common expressions and less common expressions, we have clichés that are expressed frequently, and we have novel ideas. That process has developed and has evolved with humans over millions of years.
So the guess— and this is pure speculation—would be that there's some kind of long-tail distribution of the distribution of these ideas. So there's the long tail, but also there's the height of the hierarchy of concepts that you're building up. So the bigger the network, presumably you have a higher capacity. If you have a small network, you only get the common stuff, right? If I take a tiny neural network, it's very good at understanding that, you know, a sentence has to have, you know, a verb, adjective, noun, right? But it’s terrible at deciding what those verb, adjective, and noun should be, and whether they should make sense. If I make it just a little bigger, it gets good at that. Suddenly, it's good at the sentences— but it's not good at the paragraphs.
These rare and more complex patterns get picked up as I add more capacity to the network.
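A small, self-contained illustration of that long-tail idea (my sketch, not anything from the conversation): word frequencies in language are roughly Zipfian, so a small model that only captures the head of the distribution covers the common patterns, while the rare, more complex patterns sit in the tail.

```python
# A self-contained illustration of the long-tail / one-over-x structure described above:
# token frequencies drawn as ~1/rank (Zipf's law, roughly), and how much of the total
# probability mass the "head" of the distribution carries.
import numpy as np

vocab = 50_000
ranks = np.arange(1, vocab + 1)
probs = 1.0 / ranks          # frequency roughly proportional to 1 / rank
probs /= probs.sum()

# A small model that only captures the head covers the common patterns;
# the rare, more complex patterns sit in the long tail.
for head in [10, 100, 1_000, 10_000]:
    print(f"top {head:>6} token types carry {probs[:head].sum():.1%} of all tokens")
```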
Well the natural question then is: What's the ceiling of this? Like how complicated and complex is the real world? How much of stuff is there to learn?
I don’t think any of us knows the answer to that question. My strong instinct would be that there's no ceiling below the level of humans. Right? We humans are able to understand these various patterns, and so that makes me think that if we continue to scale up these models to kind of develop new methods for training them and scaling them up, that will at least get to the level that we've gotten to with humans.
There's then a question of, you know, how much more is it possible to understand than humans do? How much is it possible to be smarter and more perceptive than humans? I would guess the answer has got to be domain dependent.
If I look at an area like biology, and you know I wrote this essay "Machines of Loving Grace," it seems to me that humans are struggling to understand the complexity of biology. Right? If you go to Stanford, or to Harvard, or to Berkeley, you have whole departments of folks trying to study, you know, the immune system or metabolic pathways, and each person understands only a tiny part of it, specializes, and they're struggling to combine their knowledge with that of other humans.
I have an instinct that there's a lot of room at the top for AIs to get smarter. If I think of something like materials in the physical world, or, you know, like addressing conflicts between humans or something like that, I mean, you know, it may be there's only some of these problems are not intractable but much harder, and it may be that there’s only so well you can do with some of these things.
Right, just like with speech recognition, there's only so clear I can hear your speech. So I think in some areas there may be ceilings that are very close to what humans have done, and in other areas, those ceilings may be very far away. And I think we'll only find out when we build these systems.
It's very hard to know in advance; we can speculate, but we can't be sure.
And in some domains, the ceiling might have to do with human bureaucracies and things like this.
You write about this, yes: humans fundamentally have to be part of the loop. That's the cause of the ceiling, maybe not the limits of the intelligence.
Yeah, I think in many cases. You know, in theory, technology could change very fast. For example, all the things that we might invent with respect to biology.
But remember, there's a clinical trial system that we have to go through to actually administer these things to humans. I think that's a mixture of things that are unnecessary, bureaucratic, and things that kind of protect the integrity of society, and the whole challenge is that it’s hard to tell what's going on—it’s hard to tell which is which, right?
My view is definitely that in terms of drug development, we're too slow and we're too conservative. But certainly if you get these things wrong, you know, it's possible to risk people's lives by being too reckless. And so at least some of these human institutions are in fact protecting people.
So it's all about finding the balance.
I strongly suspect that balance is kind of more on the side of pushing to make things happen faster. But there is a balance. If we do hit a limit, if we do hit a slowdown in the scaling laws, what do you think would be the reason? Is it compute limited, data limited, or is it something else? Idea limited?
So a few things. Now we’re talking about hitting the limit before we get to the level of humans and the skill of humans.
So I think one that’s popular today and I think could be a limit that we run into, like most of the limits, I would bet against it, but it’s definitely possible, is we simply run out of data. There’s only so much data on the internet, and there’s issues with the quality of the data, right? You can get hundreds of trillions of words on the internet, but a lot of it is repetitive or it’s search engine optimization drivel, or maybe in the future it’ll even be text generated by AI itself.
And so I think there are limits to what can be produced in this way. That said, we— and I would guess other companies— are working on ways to make data synthetic, where you can, you know, use the model to generate more data of the type that you already have, or even generate data from scratch. If you think about what was done with DeepMind's AlphaGo Zero, they managed to get a bot all the way from, you know, no ability to play Go whatsoever to above human level just by playing against itself. There was no example data from humans required.
In the other direction, of course, is these reasoning models that do chain of thought and stop to think and reflect on their own thinking in a way that’s another kind of synthetic data coupled with reinforcement learning.
So my guess is with one of those methods, we’ll get around the data limitation, or there may be other sources of data that are available.
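For reference, the AlphaGo Zero-style self-play idea mentioned above can be sketched roughly as follows; the interfaces here are hypothetical, and the real system also uses tree search and a value network:

```python
# Minimal sketch of an AlphaGo Zero-style self-play loop (hypothetical `model` and `game`
# interfaces; the real system adds Monte Carlo tree search, a value head, and much more).
def self_play_training(model, game, num_iterations: int, games_per_iter: int):
    for _ in range(num_iterations):
        examples = []
        for _ in range(games_per_iter):
            states, moves = [], []
            state = game.initial_state()
            while not game.is_over(state):
                move = model.choose_move(state)      # the model plays both sides
                states.append(state)
                moves.append(move)
                state = game.apply(state, move)
            outcome = game.winner(state)
            # Label every position with the eventual outcome: synthetic training data,
            # no human example games required.
            examples.extend(zip(states, moves, [outcome] * len(states)))
        model.train_on(examples)                     # improve, then generate better data
    return model
```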
We could just observe that even if there's no problem with data, as we start to scale models up, they could just stop getting better. What has seemed to be a reliable observation, that they keep getting better, could just stop at some point for a reason we don't understand. The answer could be that we need to, you know, invent some new architecture.
There have been problems in the past with, say, numerical stability of models where it looked like things were leveling off, but actually, you know, when we found the right unblocker, they didn’t end up doing so. So perhaps some new optimization method or some new technique we need to unblock things.
I've seen no evidence of that so far, but if things were to slow down, that perhaps could be one reason. What about the limits of compute, meaning the expensive nature of building bigger and bigger data centers?
So right now, I think, you know, most of the frontier model companies, I would guess, are operating, you know, roughly, you know, a billion-dollar scale plus or minus a factor of three, right? Those are the models that exist now or are being trained now.
I think next year we're going to go to a few billion, and then by 2026, we may go to, you know, above 10 billion and probably by 2027 there are ambitions to build hundred billion-dollar clusters. I think all of that actually will happen.
There’s a lot of determination to build the compute to do it within this country, and I would guess that it actually does happen. Now, if we get to 100 billion, that’s still not enough compute; that’s still not enough scale. Then, either we need even more scale, or we need to develop some way of doing it more efficiently, of shifting the curve.
I think between all of these, one of the reasons I'm bullish about powerful AI happening so fast is just that if you extrapolate the next few points on the curve, we're very quickly getting towards human-level ability, right? Some of the new models that we developed, some reasoning models that have come from other companies, they're starting to get to what I would call the PhD or professional level, right? If you look at their coding ability, the latest model we released, Sonnet 3.5, the new or updated version, it gets something like 50% on SWE-bench, and SWE-bench is an example of a bunch of professional real-world software engineering tasks. At the beginning of the year, I think the state of the art was 3 or 4%. So in ten months, we've gone from 3% to 50% on this task—and I think in another year we'll probably be at 90%.
I mean, I don't know, but it might even be less than that. We've seen similar things in graduate-level math, physics, and biology from models like OpenAI's o1.
So if we just continue to extrapolate this, right, in terms of skill that we have, I think if we extrapolate the straight curve, within a few years, we will get to these models being above the highest professional level in terms of humans.
Now, will that curve continue? You've pointed to and I've pointed to a lot of reasons why, you know, possible reasons why that might not happen. But if the extrapolation curve continues, that is the trajectory we're on.
So Anthropic has several competitors. It'd be interesting to get your sort of view of it all—OpenAI, Google, xAI, Meta. What does it take to win in the broad sense of "win" in the space?
Yeah, so I want to separate out a couple of things, right? So, you know, Anthropic's mission is to kind of try to make this all go well, right? And, you know, we have a theory of change called race to the top. Right? Race to the top is about trying to push the other players to do the right thing by setting an example. It's not about being the good guy; it's about setting things up so that all of us can be the good guy.
I'll give a few examples of this. Early in the history of Anthropic, one of our co-founders, Chris Olah, who I believe you're interviewing soon, you know he's the co-founder of the field of mechanistic interpretability, which is an attempt to understand what's going on inside AI models. So we had him and one of our early teams focus on this area of interpretability for three or four years, which we think is good for making models safe and transparent.
That had no commercial application whatsoever; it still doesn't today. We're doing some early betas with it, and probably it will eventually, but, you know, this is a very long research bet, and one in which we've built in public and shared our results publicly. We did this because we think it's a way to make models safer.
An interesting thing is that as we've done this, other companies have started doing it as well, in some cases because they've been inspired by it, and in some cases because they're worried that if other companies are doing this and looking more responsible, they want to look more responsible too. No one wants to look like the irresponsible actor, and so they adopt this. They adopt this as well.
When folks come to Anthropic, interpretability is often a draw, and I tell them, "The other places you didn't go, tell them why you came here." And then you see soon that there are interpretability teams elsewhere as well, and in a way that takes away our competitive advantage because it's like, oh, now others are doing it as well. But it's good. It's good for the broader system, and so we have to invent some new thing that we're doing that others aren't doing as well.
The hope is to basically bid up the importance of doing the right thing. It's not about us in particular, right? It's not about having one particular good guy. Other companies can do this as well if they join the race to do this—that’s the best news ever, right?
It's about kind of shaping the incentives to point upward instead of shaping the incentives to point downward.
We should say this example, the field of mechanistic interpretability, is just a rigorous, non-hand-wavy way of doing AI safety.
Yes, or it's trending that way, trying to—I think we're still early in terms of our ability to see things, but I've been surprised at how much we've been able to look inside these systems and understand what we see, right? Unlike with the scaling laws, where it feels like there's some law that's driving these models to perform better, on the inside the models aren't—there's no reason why they should be designed for us to understand them, right? They're designed to operate; they're designed to work just like the human brain or human biochemistry; they're not designed for a human to open up the hatch, look inside, and understand them.
But we have found, and you can talk in much more detail about this to Chris, that when we open them up, when we do look inside them, we find things that are surprisingly interesting. As a side effect, you also get to see the beauty of these models. You get to explore the sort of beautiful nature of large neural networks through the lens of mechanistic interpretability.
I'm amazed at how clean it's been. I’m amazed at things like induction heads. I'm amazed at things like we can use sparse autoencoders to find these directions within the networks, and that the directions correspond to these very clear concepts.
We demonstrated this a bit with the Golden Gate Bridge Claude. So this was an experiment where we found a direction inside one of the neural network layers that corresponded to the Golden Gate Bridge. We just turned that way up, and so we released this model as a demo. It was kind of half a joke for a couple of days, but it was illustrative of the method we developed.
You could take the Golden Gate—you could take the model; you could ask it about anything. You know, it would be like, how was your day? And anything you asked, because this feature was activated, would connect to the Golden Gate Bridge. So it would say, you know, I'm feeling relaxed and expansive, much like the arches of the Golden Gate Bridge, or, you know, it would masterfully change the topic to the Golden Gate Bridge. There was also a sadness to it, to the focus it had on the Golden Gate Bridge.
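For readers curious what "turning a feature way up" looks like mechanically, here is a hedged sketch of activation steering in PyTorch-style code; the model, layer, and feature tensor are hypothetical, and Anthropic's actual setup uses sparse-autoencoder features inside Claude:

```python
# Hedged sketch of feature steering in the spirit of "Golden Gate Claude" (hypothetical
# names and layer index; Anthropic's version uses sparse-autoencoder features inside Claude).
import torch

def steer_with_feature(layer, feature_direction: torch.Tensor, strength: float = 10.0):
    """Register a forward hook that adds a scaled feature direction to one layer's output."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + strength * feature_direction      # "turn the feature way up"
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return layer.register_forward_hook(hook)

# Usage (assumed objects): `bridge_direction` would come from a sparse autoencoder trained
# on the model's activations, with one latent that fires on Golden Gate Bridge text.
# handle = steer_with_feature(model.layers[20], bridge_direction, strength=8.0)
# ...generate text: answers now drift toward the Golden Gate Bridge...
# handle.remove()
```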
I think people quickly fell in love with it. I think so; people already miss it because it was taken down, I think, after a day. Somehow, these interventions on the model, where you kind of adjust its behavior, somehow made it seem more human than any other version of the model.
Strong personality, strong identity, strong personality; it has these obsessive interests. We can all think of someone who's obsessed with something, so it does make it feel somehow a bit more human.
Let's talk about the present. Let's talk about Claude. So this year, a lot has happened. In March, Claude 3 Opus, Sonnet, and Haiku were released; then Claude 3.5 Sonnet in July, with an updated version just now released. And then also Claude 3.5 Haiku was released.
Okay, can you explain the difference between Opus, Sonnet, and Haiku and how we should think about the different versions?
Yeah, so let's go back to March when we first released these three models. So, you know, our thinking was you have different companies produce kind of large and small models, better and worse models.
We felt that there was demand both for a really powerful model, you know, one that might be a little bit slower that you'd have to pay more for, and also for fast, cheap models that are as smart as they can be for how fast and cheap. Right? Whenever you want to do some kind of like difficult analysis—like if I want to write code, for instance, or I want to brainstorm ideas, or I want to do creative writing, I want the really powerful model. But then there are a lot of practical applications in a business sense where it's like I'm interacting with a website. I, you know, like I'm doing my taxes, or I'm, you know, talking to, you know, to like a legal adviser, and I want to analyze a contract, or, you know, we have plenty of companies that are just like, you know, I want to do autocomplete on my IDE or something, and for all of those things, you want a model that acts fast and you want to use the model very broadly.
So we wanted to serve that whole spectrum of needs. So we ended up with this, you know, this kind of poetry theme.
So what's a really short poem? It's a Haiku. And so Haiku is the small, fast, cheap model that, you know, was at the time surprisingly intelligent for how fast and cheap it was.
Sonnet is a medium-sized poem, right? A couple paragraphs long. Sonnet was the middle model; it is smarter but also a little slower, a little more expensive. And Opus, like a magnum opus, is a large work. Opus was the largest, smartest model at the time.
So that was the original kind of thinking behind it. And our thinking then was, well, each new generation of models should shift that trade-off curve. So when we release Sonnet 3.5, it has the same, roughly the same, you know, cost and speed as the Sonnet 3 model, but it increased its intelligence to the point where it was smarter than the original Opus 3 model, especially for code.
But also just in general, and so now, you know, we’ve shown results for Haiku 3.5, and I believe Haiku 3.5, the smallest new model, is about as good as Opus 3, the largest old model.
So basically the aim here is to shift the curve, and then at some point, there’s going to be an Opus 3.5. Now every new generation of models has its own thing; they use new data; their personality changes in ways that we kind of try to steer but are not fully able to steer, and so there's never quite that exact equivalence where the only thing you're changing is intelligence.
We always try and improve other things, and some things change without us knowing or measuring, so it’s very much an inexact science. In many ways, the manner and personality of these models is more an art than it is a science.
So what is sort of the reason for the span of time between, say, Claude Opus 3 and 3.5? What takes that time, if you can speak to it?
Yeah, so there's different processes. There's pre-training, which is, you know, just kind of the normal language model training, and that takes a very long time. That uses, you know, these days, you know, tens of thousands, sometimes many tens of thousands of GPUs or TPUs or Trainium.
You know, we use different platforms, but, you know, accelerator chips, often training for months. There's then a kind of post-training phase where we do reinforcement learning from human feedback as well as other kinds of reinforcement learning. That phase is getting larger and larger now, and, you know, often that’s less of an exact science. It often takes effort to get it right.
Models are then tested with some of our early partners to see how good they are, and they're then tested both internally and externally for their safety, particularly for catastrophic and autonomy risks. So we do internal testing according to our responsible scaling policy, which I could talk more about that in detail.
Then we have an agreement with the U.S. and the UK AI Safety Institute, as well as other third-party testers in specific domains, to test the models for what are called CBRN risks: chemical, biological, radiological, and nuclear, which are, you know, we don’t think that models pose these risks seriously yet, but every new model we want to evaluate to see if we’re starting to get close to some of these more dangerous capabilities.
So, those are the phases, and then it just takes some time to get the model working in terms of inference and launching it in the API. So there's just a lot of steps to actually making a model work, and of course, you know, we're always trying to make the processes as streamlined as possible, right?
We want our safety testing to be rigorous, but we also want it to be, you know, automatic, to happen as fast as it can without compromising on rigor— the same with our pre-training process and our post-training process. So, you know, just like building anything else, it’s just like building airplanes—you want to make them safe, but you want to make the process streamlined, and I think the creative tension between those is an important thing in making the models work.
Yeah, a rumor on the street, I forget who was saying it, is that Anthropic has really good tooling. So probably a lot of the challenge here is on the software engineering side: to build the tooling to have an efficient, low-friction interaction with the infrastructure.
You would be surprised how much of the challenge of building these models comes down to software engineering, performance engineering. You know, from the outside, you might think, "Oh man, we had this Eureka breakthrough, right? This new science, we discovered it; we figured it out." But I think all things, even incredible discoveries, almost always come down to the details, and often super, super boring details.
I can't speak to whether we have better tooling than other companies. You know, I haven't been at those other companies at least not recently. But it's certainly something we give a lot of attention to.
I don't know if you can say, but from Claude 3 to Claude 3.5, is there any extra pre-training going on or are they mostly focused on the post-training?
I think at any given stage, we’re focused on improving everything at once, just naturally. There are different teams—each team makes progress in a particular area, in making their particular segment of the relay race better.
It’s just natural that when we make a new model we put all of these things in at once. So the data you have—like the preference data you get from RLHF—is that applicable? Are there ways to apply it to newer models as they get trained up?
Yeah, preference data from old models sometimes gets used for new models, although, of course, it performs somewhat better when it’s trained on the new models. Note that we have this constitutional AI method such that we don’t only use preference data.
We kind of—there's also a post-training process where we train the model against itself, and there are, you know, new types of post-training the model against itself that are used every day. So it’s not just RLHF; it’s a bunch of other methods as well. Post-training, I think, is becoming more and more sophisticated.
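A rough sketch of the constitutional-AI-style "train the model against itself" loop being alluded to (my simplification, with a hypothetical generate helper; the published method also adds an RL-from-AI-feedback stage on top):

```python
# Rough sketch of a constitutional-AI-style self-critique loop (hypothetical `generate`
# helper; the published method also trains a preference model from AI feedback on top).
PRINCIPLES = [
    "Choose the response that is most helpful while avoiding harmful content.",
    "Choose the response that is honest and does not mislead the user.",
]

def constitutional_revision(generate, prompt: str) -> str:
    response = generate(prompt)
    for principle in PRINCIPLES:
        critique = generate(
            "Critique the following response according to this principle.\n"
            f"Principle: {principle}\nPrompt: {prompt}\nResponse: {response}\nCritique:"
        )
        response = generate(
            "Rewrite the response to address the critique.\n"
            f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}\nRevision:"
        )
    # The (prompt, revised response) pairs become supervised training data:
    # the model is trained against its own critiques, not only human preference labels.
    return response
```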
Well, what explains the big leap in performance for the new Sonnet 3.5? I mean, at least on the programming side, and maybe this is a good place to talk about benchmarks. What does it mean to get better? Just the number went up, but, you know, I program, but I also love programming, and Claude 3.5 through Cursor is what I use to assist me in programming, and at least experientially, anecdotally, it's gotten smarter at programming.
So what does it take to get it smarter?
We observe that as well, by the way. There were a couple very strong engineers here at Anthropic who—all previous code models, both produced by us and produced by all the other companies—hadn't really been useful to them. You know, they said, “Maybe this is useful to beginners; it’s not useful to me.” But Sonnet 3.5—the original one—for the first time, they said, “Oh my God, this helped me with something that, you know, it would have taken me hours to do. This is the first model that has actually saved me time.”
So again, the waterline is rising. And then I think, you know, the new Sonnet has been even better in terms of what it takes. I mean, I’ll just say it’s been across the board, it’s in the pre-training, it’s in the post-training, it’s in various evaluations that we do. We’ve observed this as well; and if we go into the details of the benchmark—SWE-bench is basically, you know, since you’re a programmer, you know you’ll be familiar with like Pull Requests, and, you know, just Pull Requests are like, you know, the sort of atomic unit of work. You know, you could say, “I’m implementing one thing.”
SWE-bench actually gives you kind of a real-world situation where the codebase is in a current state, and I’m trying to implement something that’s, you know, described in language. We have internal benchmarks where we measure the same thing, and you say, just give the model free rein to, like, you know, run anything, edit anything— how well is it able to complete these tasks?
And it’s that benchmark that’s gone from it can do it 3% of the time to it can do it about 50% of the time. So I actually do believe that if we can get to 100% on that benchmark in a way that isn’t kind of like overtrained or gamed for that particular benchmark, it probably represents a real and serious increase in programming ability.
And I would suspect that if we can get to, you know, 90%, 95%, that, you know, it will represent the ability to autonomously do a significant fraction of software engineering tasks.
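In outline, a SWE-bench-style evaluation loop looks something like this (hypothetical helpers, not the official harness): each task is a repository state plus an issue described in natural language, and the model passes only if the project's tests pass after its edits are applied.

```python
# Outline of a SWE-bench-style evaluation (hypothetical `agent` and `task` objects, not the
# official harness): each task is a real repository state plus an issue written in natural
# language, and the model passes only if the project's own tests pass after its edits.
import subprocess

def evaluate_task(agent, task) -> bool:
    repo = task.checkout_repo()                              # repo at the state the issue was filed
    patch = agent.solve(issue=task.issue_text, repo=repo)    # model gets free rein to edit and run
    repo.apply_patch(patch)
    result = subprocess.run(task.test_command, cwd=repo.path, shell=True)
    return result.returncode == 0                            # did the held-out tests pass?

def resolve_rate(agent, tasks) -> float:
    solved = sum(evaluate_task(agent, t) for t in tasks)
    return solved / len(tasks)          # the number that went from ~3-4% to ~50% in 2024
```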
Well, ridiculous timeline question: when is Claude Opus 3.5 coming out?
Not giving you an exact date, but, you know, as far as we know, the plan is still to have a Claude 3.5 Opus.
Are we gonna get it before GTA 6 or no?
Like Duke Nukem Forever? There was some game that was delayed 15 years. Was that Duke Nukem Forever?
Yeah, I think GTA is now just releasing trailers. You know, it’s only been three months since we released the first Sonnet.
Yeah, it’s the incredible pace of release. It just tells you about the pace, the expectations for when things are going to come out.
So what about 4.0?
So how do you think about sort of as these models get bigger and bigger about versioning and also just versioning in general? Why Sonnet 3.5 updated with the date? Why not Sonnet 3.6?
Actually, naming is actually an interesting challenge here, right? Because I think a year ago, most of the model was pre-training, and so you could start from the beginning and just say, “Okay, we’re going to have models of different sizes. We’re going to train them all together.”
And, you know, we’ll have a family of naming schemes and then we’ll put some new magic into them, and then you know we'll have the next generation. Um, the trouble starts already when some of them take a lot longer than others to train, right? That already messes up your time a little bit.
But as you make big improvements in pre-training, then you suddenly notice, oh, I can make a better pre-train model and that doesn’t take very long to do. But you know, clearly it has the same size and shape of previous models.
Uh, so I think those two things as well as the timing issues—any kind of scheme you come up with—you know, the reality tends to frustrate that scheme, right? So it tends to break out of the scheme. It’s not like software where you can say, “Oh, this is like, you know, 3.7. This is 3.8.”
No, you have models with different trade-offs. You can change some things in your models. You can train, you can change other things. Some are faster and slower at inference; some have to be more expensive; some have to be less expensive.
And so I think all the companies have struggled with this. I think we were in a good position in terms of naming when we had Haiku, Sonnet, and Opus, and we're trying to maintain it, but it’s not perfect. We’ll try and get back to the simplicity, but the nature of the field...
I feel like no one’s figured out naming; it’s somehow a different paradigm from normal software and so none of the companies have been perfect at it.
I think it’s something we struggle with surprisingly much, relative to how trivial it is, you know, compared to the grand science of training the models.
So from the user side, the user experience of the updated Sonnet 3.5 is just different than the previous June 2024 Sonnet 3.5. It would be nice to come up with some kind of labeling that embodies that, because people talk about Sonnet 3.5, but now there’s a different one, and so how do you refer to the previous one and the new one?
It just makes conversation about it challenging.
Yeah, I definitely think this question of, there are lots of properties of the models that are not reflected in the benchmarks. I think that’s definitely the case, and everyone agrees.
Not all of them are capabilities; some of them are, you know, models can be polite or brusque. They can be, you know, very reactive or they can ask you questions. They can have what feels like a warm personality or a cold personality. They can be boring, or they can be very distinctive, like Golden Gate Claude was.
We have a whole, you know, we have a whole team kind of focused on—I think we call it Claude character—Amanda leads that team, and you'll talk to her about that, but it’s still a very inexact science.
And often we find that models have properties that we're not aware of. The fact of the matter is that you can, you know, talk to a model 10,000 times and there are some behaviors you might not see, just like with a human, right? I can know someone for a few months and not know that they have a certain skill or not know there's a certain side to them, and so I think we just have to get used to this idea.
We’re always looking for better ways of testing our models to demonstrate these capabilities and also to decide which are the personality properties we want models to have and which we don't want to have.
That itself, the normative question, is also super interesting.
I got to ask you a question from Reddit—oh boy. You know, there's just this fascinating— to me at least— it's a psychological and social phenomenon where people report that Claude has gotten dumber for them over time.
And so, the question is, does the user complaint about the dumbing down of Claude 3.5 Sonnet hold any water?
So, are these anecdotal reports a kind of social phenomenon, or did Claude— is there any case where Claude would get dumber?
So this actually doesn't just apply; this isn't just about Claude. I believe I've seen these complaints for every foundation model produced by a major company. People said this about GPT-4. They said it about GPT-4 Turbo.
So a couple things—one, the actual weights of the model, right, the actual brain of the model, that does not change unless we introduce a new model. There are a number of reasons why it would not make sense practically to be randomly substituting in new versions of the model.
It's difficult from an inference perspective, and it’s actually hard to control all the consequences of changing the weights of the model. Let’s say you wanted to fine-tune the model to, I don't know, say "certainly" less, which, you know, an old version of Sonnet used to do. You actually end up changing a hundred things as well. So we have a whole process for modifying the model. We do a bunch of testing on it, a bunch of user testing and early customers.
So we have basically never changed the weights of the model without telling anyone, and certainly, in the current setup, it would not make sense to do that.
Now, there are a couple things that we do occasionally do. One is, sometimes we run AB tests, but those are typically very close to when a model is being released and for a very small fraction of time.
So, you know, like the day before the new Sonnet 3.5, I agree we should have had a better name—it's clunky to refer to it. There were some comments from people that like it got a lot better, and that's because, you know, a fraction were exposed to an AB test for those one or two days.
The other is that occasionally the system prompt will change, and the system prompt can have some effects, although it's unlikely to dumb down models. We've seen that while these two things, which I'm listing to be very complete, happen quite infrequently, the complaints about Claude, for us and for other model companies, about the model changing, the model isn't good at this, the model got more censored, the model was dumbed down—those complaints are constant.
I don't want to say people are imagining it or anything, but for the most part, the models are not changing. If I were to offer a theory, I think it actually relates to one of the things I said before, which is that models are very complex and have many aspects to them, and so often, you know, if I ask a model a question, like if I say "do task X" versus "can you do task X," the model might respond in different ways. And so there are all kinds of subtle things that you can change about the way you interact with the model that can give you very different results.
To be clear, this itself is like a failing by us and by the other model providers that the models are just often sensitive to like small changes in wording. It's yet another way in which the science of how these models work is very poorly developed.
And so, you know, if I go to sleep one night and I was talking to the model in a certain way and I slightly changed the phrasing of how I talk to the model, you know, I could get different results.
So that's one possible way. The other thing is, man, it’s just hard to quantify this stuff. I think people are very excited by new models when they come out, and then as time goes on, they become very aware of the limitations. So that may be another effect, but that’s all a very long-winded way of saying, for the most part, with some fairly narrow exceptions, the models are not changing.
I think there is a psychological effect; you just start getting used to it, the baseline shifts. Like when people first got Wi-Fi on airplanes, it was like amazing magic, and now it's like, I can't get this thing to work, this is such a piece of crap!
Exactly. So it's easy to have the conspiracy theory of they're making Wi-Fi slower and slower. This is probably something I'll talk to Amanda much more about, but another Reddit question: when will Claude stop trying to be my puritanical grandmother, imposing its moral worldview on me as a paying customer, and also what is the psychology behind making Claude overly apologetic?
So this kind of reports about the experience is a different angle on the frustration; it has to do with the character.
Yeah, so a couple of points on this. First one is, like, things that people say on Reddit and Twitter or X or whatever it is, there's actually a huge distribution shift between the stuff that people complain loudly about on social media and what actually kind of statistically users care about and that drives people to use the models.
Like, people are frustrated with, you know, things like, you know, the model not writing out all the code or the model, you know, just not being as good at code as it could be, even though it's the best model in the world on code.
I think the majority of things are about that, but certainly a kind of vocal minority are kind of raising these concerns, right? They're frustrated by the model refusing things that it shouldn’t refuse, or like apologizing too much, or just having these kind of like annoying verbal tics.
The second caveat—and I just want to say this like super clearly because I think it’s like some people don’t know it or others like kind of know it but forget it—is it very difficult to control across the board how the models behave. You cannot just reach in there and say, “Oh, I want the model to like apologize less.”
Like you can do that; you can include training data that says like, “Oh, the models should like apologize less.” But then in some other situation, they end up being like super rude or like overconfident in a way that's misleading people, so there are all these trade-offs.
For example, another thing is there was a period during which models—ours and I think others as well—were too verbose, right? They would like repeat themselves; they would say too much. You can cut down on the verbosity by penalizing the models for just talking for too long.
What happens when you do that—if you do it in a crude way—is when the models are coding, sometimes they'll say, "Oh, the code goes here," right? Because they've learned that that's a way to economize, and that's what they've seen. And then, so that leads the model to be so-called lazy in coding, where they’re just like, "Ah, you can finish the rest of it."
It’s not because we want to, you know, save on compute or because, you know, the models are lazy, and they’re, you know, during winter break or any of the other kind of conspiracy theories that have come up; it’s actually just very hard to control the behavior of the model to steer the behavior of the model in all circumstances at once.
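As an illustration of why a crude length penalty can produce exactly this "lazy coding" behavior (assumed numbers, not Anthropic's reward model): once the penalty outweighs the reward for writing out the full solution, truncating with "the code goes here" becomes the higher-reward action.

```python
# Illustrative numbers (assumed, not Anthropic's reward model) showing how a crude
# verbosity penalty can make truncation the higher-reward behavior during RL.
def shaped_reward(quality: float, num_tokens: int, length_penalty: float = 0.01) -> float:
    return quality - length_penalty * num_tokens

full_solution = shaped_reward(quality=10.0, num_tokens=800)   # writes out all the code
lazy_stub     = shaped_reward(quality=6.0,  num_tokens=150)   # "the rest of the code goes here"

print(f"full solution reward: {full_solution}")   # 2.0
print(f"lazy stub reward:     {lazy_stub}")       # 4.5 -> the model learns to truncate
```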
You can kind of—there's this whack-a-mole aspect where you push on one thing, and like these—these other things start to move as well that you may not even notice or measure. And so, one of the reasons that I care so much about, you know, kind of grand alignment of these AI systems in the future is that actually these systems are actually quite unpredictable. They're actually quite hard to steer and control.
This version we’re seeing today of, you know, you make one thing better, it makes another thing worse; I think that's like a present-day analog of future control problems in AI systems that we can start to study today.
Right? I think that difficulty in steering the behavior and in making sure that if we push an AI system in one direction, it doesn’t push it in another direction in some other ways that we didn’t want. I think that’s an early sign of things to come.
And if we can do a good job of solving this problem—right, of like you ask the model to like, you know, to make and distribute smallpox, and it says no, but it’s willing to help you in your graduate-level virology class—like how do we get both of those things at once? It’s hard.
It’s easy to go to one side or the other, and it’s a multi-dimensional problem. And so, you know, I think these questions of like shaping the models’ personality, I think they’re very hard. I think we haven’t done perfectly on them. I think we’ve actually done the best of all the AI companies, but still so far from perfect.
And I think if we can get this right, if we can control the false positives and false negatives in this very kind of controlled present-day environment, we’ll be much better at doing it for the future when our worry is, you know, will the models be super autonomous? Will they be able to, you know, make very dangerous things? Will they be able to autonomously, you know, build whole companies?
And are those companies aligned? So I think of this present task as both vaccine and also good practice for the future.
What’s the current best way of gathering sort of user feedback, like not anecdotal data, but just large-scale data about pain points or the opposite of pain points, positive things? Is it internal testing? Is it a specific group testing?
What works?
So typically, we’ll have internal model bashings where all of Anthropic—Anthropic is almost a thousand people; you know, people just try and break the model. They try and interact with it in various ways. We have a suite of evals for, you know, oh, is the model refusing in ways that it shouldn't?
I think we even had a "certainly" eval, because our model had this problem where it had this annoying tic where it would like respond to a wide range of questions by saying, "Certainly, I can help you with that. Certainly, I would be happy to do that. Certainly, this is correct."
And we had a, like, "certainly" eval, which is: how often does the model say "certainly"? But look, this is just whack-a-mole. What if it switches from "certainly" to "definitely"?
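A toy version of that kind of eval might look like this (my sketch, with a hypothetical ask_model helper); the whack-a-mole problem shows up immediately, because every new tic has to be added and all the old checks kept around.

```python
# Toy "certainly" eval (hypothetical `ask_model` helper): measure how often the model's
# replies lean on a verbal tic across a fixed prompt set. The whack-a-mole problem is that
# each new tic ("definitely", ...) has to be added, while all the old checks are kept.
import re

TIC_PATTERNS = [r"\bcertainly\b", r"\bdefinitely\b"]

def tic_rates(ask_model, prompts) -> dict:
    counts = {pattern: 0 for pattern in TIC_PATTERNS}
    for prompt in prompts:
        reply = ask_model(prompt).lower()
        for pattern in TIC_PATTERNS:
            if re.search(pattern, reply):
                counts[pattern] += 1
    return {pattern: count / len(prompts) for pattern, count in counts.items()}
```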
So every time this comes up, we add a new eval, and we’re always evaluating for all the old things, so we have hundreds of these evaluations. But we find that there's no substitute for a human interacting with the model.
It’s very much like the ordinary product development process. We have hundreds of people within Anthropic bash the model, then we do external AB tests. Sometimes we’ll run tests with contractors, we pay contractors to interact with the model.
So you put all of these things together, and it’s still not perfect; you still see behaviors that you don’t quite want to see, right? You still see the model like refusing things that it just doesn’t make sense to refuse.
But I think trying to solve this challenge—right? Trying to stop the model from doing genuinely bad things that no one—everyone agrees it shouldn’t do, you know, everyone agrees that the model shouldn’t talk about, you know, I don’t know, child abuse material, right? Like everyone agrees the model shouldn’t do that.
But at the same time, that it doesn’t refuse in these dumb and stupid ways—I think drawing that line as finely as possible, approaching perfectly, is still a challenge, and we're getting better at it every day. But there's a lot to be solved, and again I would point to that as an indicator of a challenge ahead in terms of steering much more powerful models.
Do you think Claude 4.0 is ever coming out?
I don’t want to commit to any naming scheme, because if I say, "Here, we're going to have Claude 4 next year," and then, you know, we decide that, like, you know, we should start over because there's a new type of model, like, I don’t want to commit to it.
I would expect in a normal course of business that Claude 4 would come after Claude 3.5, but you never know in this wacky field, right? But the sort of this idea of scaling is continuing. Scaling is continuing. There will definitely be more powerful models coming from us than the models that exist today; that is certain. Or if there aren't, we've deeply failed as a company.
Okay, can you explain the responsible scaling policy and the AI safety level standards (ASL) levels?
As much as I'm excited about the benefits of these models—and you know, we'll talk about that if we talk about Machines of Loving Grace—I’m worried about the risk and I continue to be worried about the risks. No one should think that Machines of Loving Grace was me saying I’m no longer worried about the risks of these models.
I think there are two sides of the same coin—the power of the models and their ability to solve all these problems in, you know, biology, neuroscience, economic development, governance, and peace, large parts of the economy. Those come with risks as well, right? With great power comes great responsibility.
The two are paired. Things that are powerful can do good things and they can do bad things. I think of those risks as being in, you know, several different categories. Perhaps the two biggest risks that I think about—and that's not to say that there aren't risks today that are important—but when I think of the things that would happen on the grandest scale, one is what I call catastrophic misuse.
These are misuse of the models in domains like cyber, bio, radiological, nuclear—things that could harm or even kill thousands, even millions of people if they really, really go wrong. These are the number one priority to prevent. And here I would just make a simple observation, which is that, you know, if I look today at people who have done really bad things in the world, I think actually humanity has been protected by the fact that the overlap between really smart, well-educated people and people who want to do really horrific things has generally been small.
Like, you know, let’s say I’m someone who, you know, I have a PhD in this field, I have a well-paying job. There’s so much to lose. Why do I want to, like, even assuming I’m completely evil—which most people are not—why would such a person risk their life, risk their legacy, their reputation to do something like truly, truly evil?
If we had a lot more people like that, the world would be a much more dangerous place, and so my worry is that by being a much more intelligent agent, AI could break that correlation. And so I do have serious worries about that. I believe we can prevent those worries, but I think, as a counterpoint to Machines of Loving Grace, I want to say that there's still serious risks.
The second range of risks would be the autonomy risks, which is the idea that models might on their own—particularly as we give them more agency than they've had in the past, particularly as we give them supervision over wider tasks like, you know, writing whole code bases or someday even, you know, effectively operating entire companies—they're on a long enough leash. Are they doing what we really want them to do? It’s very difficult to even understand in detail what they’re doing, let alone control it.
And like I said, these early signs that it's hard to perfectly draw the boundary between things the model should do and things the model shouldn’t do. If you go to one side, you get things that are annoying and useless, and you go to the other side, you get other behaviors. If you fix one thing, it creates other problems. We're getting better and better at solving this. I don’t think this is an unsolvable problem.
I think this is a science like the safety of airplanes, or the safety of cars, or the safety of drugs. You know, I don’t think there’s any big thing we’re missing. I just think we need to get better at controlling these models. And so these are the two risks I’m worried about, and our responsible scaling plan— which I'll recognize is a very long-winded answer to your question.
Our responsible scaling plan is designed to address these two types of risks. And so every time we develop a new model, we basically test it for its ability to do both of these bad things.
So if I were to back up a little bit, I think we have an interesting dilemma with AI systems where they're not yet powerful enough to present these catastrophes. I don’t know that they ever will. Or maybe they won't. But the case for worry, the case for risk is strong enough that we should act now, and they're getting better very, very fast, right?
I testified in the Senate that we might have serious bio risks within two to three years—that was about a year ago. Things have proceeded apace, and so we have this thing where it's surprisingly hard to address these risks because they're not here today; they don't exist. They're like ghosts.
But they're coming at us so fast because the models are improving so fast. So how do you deal with something that’s not here today, doesn’t exist, but is coming at us very fast?
So the solution we came up with for that, in collaboration with, you know, people like the organization METR and Paul Christiano, is: okay, for that you need tests to tell you when the risk is getting close; you need an early warning system.
And so every time we have a new model, we test it for its capability to do these CBRN tasks as well as testing it for, you know, how capable it is of doing tasks autonomously on its own.
In the latest version of our RSP, which we released in the last month or two, the way we test the autonomy risks is by measuring the model's ability to do aspects of AI research itself, because when AI models can do AI research, they become truly autonomous, and that threshold is important for a bunch of other reasons as well.
And so what do we then do with these tasks? The RSP basically develops what we've called an if-then structure, which is if the models pass a certain capability, then we impose a certain set of safety and security requirements on them.
So today's models are at what's called ASL2. ASL1 is for systems that manifestly don't pose any risk of autonomy or misuse. So, for example, a chess-playing bot like Deep Blue would be ASL1; it's just manifestly the case that you can't use Deep Blue for anything other than chess. It was just designed for chess; no one's going to use it to conduct a masterful cyber attack or to run wild and take over the world.
ASL2 is today’s AI systems where we’ve measured them and we think these systems are simply not smart enough to autonomously self-replicate or conduct a bunch of tasks and also not smart enough to provide meaningful information about CBRN risks and how to build CBRN weapons above and beyond what can be known from looking at Google.
In fact, sometimes they do provide information but not above and beyond a search engine, and not in a way that can be stitched together.
ASL3 is going to be the point at which the models are helpful enough to enhance the capabilities of non-state actors. State actors, unfortunately, can already do a lot of these very dangerous and destructive things to a high level of proficiency. The difference is that non-state actors are not capable of it.
So when we get to ASL3, we'll take special security precautions designed to be sufficient to prevent theft of the model by non-state actors, and misuse of the model as it's deployed. We'll have to have enhanced filters targeted at these particular areas: cyber, bio, nuclear, and model autonomy—which is less a misuse risk and more a risk of the model doing bad things itself.
ASL4 is getting to the point where these models could enhance the capability of an already knowledgeable state actor, and/or become the main source of such a risk. Like, if you wanted to engage in such a risk, the main way you'd do it is through a model.
And then I think ASL4, on the autonomy side, is some amount of acceleration in AI research capabilities by an AI model. Then ASL5 is where we would get to models that are truly capable, that could exceed humanity in their ability to do any of these tasks.
The point of the if-then structure commitment is basically to say, look, I don’t know. I’ve been working with these models for many years and I’ve been worried about risk for many years.
It's actually kind of dangerous to cry wolf. It's actually kind of dangerous to say this model is risky, and people look at it and say it's manifestly not dangerous. Again, the delicacy is that the risk isn't here today, but it's coming at us fast. How do you deal with that? It's really vexing for a risk planner.
And so this if-then structure basically says, look, we don't want to antagonize a bunch of people; we don't want to harm our own ability to have a place in the conversation by imposing very onerous burdens on models that are not dangerous today. So the if-then trigger commitment is basically a way to deal with this.
It says you clamp down hard when you can show that the model is dangerous, and of course, what has to come with that is enough of a buffer threshold that you know you're not at high risk of missing the danger.
It's not a perfect framework; we've had to keep changing it. We came out with a new one just a few weeks ago, and probably moving forward we might release new ones multiple times a year, because it's hard to get these policies right, technically, organizationally, and from a research perspective. But that is the proposal: if-then commitments and triggers in order to minimize burdens and false alarms, and to really react appropriately when the dangers are here.
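To make the if-then idea concrete, here is a minimal sketch of what a capability-triggered policy could look like in code. Every eval name, threshold, and safeguard list below is a hypothetical placeholder, not Anthropic's actual RSP criteria; the point is only the shape of the mapping from measured capabilities to required safeguards.

```python
# Minimal sketch of an "if-then" capability policy. All names, thresholds, and
# safeguard lists are hypothetical placeholders, not Anthropic's actual RSP.

from dataclasses import dataclass

@dataclass
class EvalResult:
    cbrn_uplift: float     # hypothetical score: uplift over a search-engine baseline
    autonomy_score: float  # hypothetical score: performance on AI-research subtasks

# "Then" side: safeguards required at each AI Safety Level (illustrative only).
ASL_REQUIREMENTS = {
    2: ["baseline security", "standard harm filters"],
    3: ["security sufficient to prevent model theft by non-state actors",
        "enhanced deployment filters for cyber / bio / nuclear"],
    4: ["to be specified once ASL3 is reached (e.g. interpretability-based checks)"],
}

def assign_asl(result: EvalResult) -> int:
    """'If' side: map eval results to a level, with conservative (buffered) triggers."""
    if result.cbrn_uplift >= 0.5 or result.autonomy_score >= 0.5:  # hypothetical triggers
        return 4
    if result.cbrn_uplift >= 0.2 or result.autonomy_score >= 0.2:
        return 3
    return 2

def required_safeguards(result: EvalResult) -> list[str]:
    return ASL_REQUIREMENTS[assign_asl(result)]
```

The buffer mentioned above corresponds to setting those trigger thresholds conservatively, so the clampdown comes before the genuinely dangerous capability level rather than after.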
What do you think the timeline is for ASL3, where several of those triggers fire, and what do you think the timeline is for ASL4?
Yeah, so that is hotly debated within the company. We are working actively to prepare ASL3 security measures as well as ASL3 deployment measures. I'm not going to go into detail, but we've made a lot of progress on both, and I think we're prepared to be ready quite soon.
I would not be surprised at all if we hit ASL3 next year. There was some concern that we might even hit it this year; that could still happen. It's very hard to say, but I would be very, very surprised if it was like 2030. I think it's much sooner than that.
So there are protocols for detecting it, the if-then, and then there are protocols for how to respond to it.
Yeah, so how difficult is the second, the latter?
Yeah, I think for ASL3 it’s primarily about security and about, you know, filters on the model relating to a very narrow set of areas when we deploy the model because at ASL3 the model isn’t autonomous yet.
And you don't have to worry about the model itself behaving in a bad way, even when it's deployed internally. So I think the ASL3 measures are, I won't say straightforward; they're rigorous, but they're easier to reason about.
I think once we get to ASL4, we start to have worries about the models being smart enough that they might sandbag tests; they might not tell the truth about tests. We had some results come out about "sleeper agents," and there was a more recent paper about whether the models can sandbag their own abilities, that is, present themselves as being less capable than they are.
And so I think with ASL4, there's going to be an important component of using things other than just interacting with the models—for example, interpretability or hidden chains of thought—where you have to look inside the model and verify, via some other mechanism that is not as easily corrupted as what the model says, that the model indeed has some property.
So we’re still working on ASL4. One of the properties of the RSP is that we don’t specify ASL4 until we’ve hit ASL3. I think that’s proven to be a wise decision because even with ASL3, again it’s hard to know this stuff in detail and we want to take as much time as we can possibly take to get these things right.
So for ASL3, the bad actor will be the humans, so there it's a little bit simpler. For ASL4, it's both?
I think it's both. And so with deception, that's where mechanistic interpretability comes into play, and hopefully the techniques used for that are not made accessible to the model.
Yeah, I mean of course you can hook up the mechanistic interpretability to the model itself, but then you’ve kind of lost it as a reliable indicator of the model state.
There are a bunch of exotic ways you can think of that it might also not be reliable, like if the model gets smart enough that it can jump across computers and read the code where you're looking at its internal state.
We’ve thought about some of those; I think they’re exotic enough. There are ways to render them unlikely. But generally, you want to preserve mechanistic interpretability as a kind of verification set or test set that’s separate from the training process of the model.
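As a rough illustration of that "verification set" idea, here is a minimal sketch, with hypothetical placeholder functions rather than real tooling: the interpretability probe is consulted only at verification time and never fed back into training, so the model is never optimized against it.

```python
# Sketch of keeping an interpretability check "held out" from training.
# All functions are hypothetical placeholders, not real tooling.

def train_step(model, batch):
    # Placeholder: ordinary optimization on the task objective only.
    pass

def behavioral_evals_pass(model) -> bool:
    # Placeholder: what the model says and does on capability/safety evals.
    return True

def probe_flags_deception(model) -> bool:
    # Placeholder: a check on internal activations, kept outside the training loop.
    return False

def train(model, batches):
    for batch in batches:
        train_step(model, batch)
        # Deliberately no probe call here: feeding its output back into training
        # would "spend" it, like training on your test set.

def verify_for_release(model) -> bool:
    return behavioral_evals_pass(model) and not probe_flags_deception(model)
```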
See, I think as these models get better and better at conversation and become smarter, social engineering becomes a threat too, because they can start being very convincing to the engineers inside companies.
Oh yeah, yeah. We've seen lots of examples of demagoguery in our lives from humans, and there's a concern that models could do that as well.
One of the ways that Claude has been getting more and more powerful is that it's now able to do some agentic stuff—computer use. There's also the analysis within the sandbox of Claude itself, but let's talk about computer use.
That seems to me super exciting: that you can just give Claude a task and it takes a bunch of actions, figures it out, and has access to your computer through screenshots. So can you explain how that works and where that’s headed?
Yeah, it’s actually relatively simple. So Claude has had for a long time, since Claude 3 back in March, the ability to analyze images and respond to them with text. The only new thing we added is those images can be screenshots of a computer, and in response, we trained the model to give a location on the screen where you can click or buttons on the keyboard you can press in order to take action.
It turns out that with actually not all that much additional training, the models can get quite good at that task. It’s a good example of generalization. You know, people sometimes say if you get to low Earth orbit, you’re like halfway to anywhere, right? Because of how much it takes to escape the gravity well. If you have a strong pre-trained model, I feel like you’re halfway to anywhere in terms of the intelligence space.
And so actually, it didn't take all that much to get Claude to do this. You can just set that in a loop: give the model a screenshot, it tells you what to click on, give it the next screenshot, it tells you what to click on, and that turns into a kind of full, almost video-like interaction with the model, and it's able to do all of these tasks.
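For the curious, here is a minimal sketch of that screenshot-and-action loop. The functions take_screenshot, execute, and query_model are hypothetical placeholders standing in for the OS-level automation and the model API call; they are not Anthropic's actual SDK or tool names.

```python
# Minimal sketch of the screenshot -> action loop described above.
# take_screenshot(), execute(), and query_model() are hypothetical placeholders.

import base64

def take_screenshot() -> bytes:
    # Placeholder: capture the screen here (e.g. with a screenshot library).
    return b""

def execute(action: dict) -> None:
    # Placeholder: move the mouse, click, or press keys here.
    print("executing", action)

def query_model(task: str, screenshot_b64: str) -> dict:
    # Placeholder: send the task plus the screenshot to the model and parse its
    # reply into an action such as {"type": "click", "x": 312, "y": 84}.
    return {"type": "done"}

def run_agent(task: str, max_steps: int = 20) -> None:
    """Loop: screenshot -> ask the model what to do -> execute -> repeat."""
    for _ in range(max_steps):
        shot = base64.b64encode(take_screenshot()).decode()
        action = query_model(task, shot)
        if action.get("type") == "done":
            break
        execute(action)
```

Each executed action produces the next screenshot for the loop, which is what makes the sequence feel like a continuous interaction rather than isolated image queries.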
You know, we showed these demos where it's able to fill out spreadsheets, it's able to kind of like interact with a website, it's able to