
Why Vertical LLM Agents Are The New $1 Billion SaaS Opportunities


28m read
·Nov 3, 2024

This is their first ever experience talking to this Godlike feeling, you know, AI that was all of a sudden doing these tasks that would take me, when I practiced, like a whole day. And it's being done in a minute and a half. The whole company, all 120 of us, did not sleep for those, you know, months before GPT-4. We felt like we had this amazing opportunity to run far ahead of the market. That's why you're the first man on the moon.

[Music]

Yeah, welcome back to another episode of the Light Cone. I'm Gary, this is Jared, and Diana. Harj is out, but he'll be back on the next one. And today we have a very special guest, Jake Heller of CaseText. I think of Jake as a little bit like one of the first people on the surface of the Moon. He created CaseText more than, I think, 11 or 12 years ago actually. And in the first 10 years, he went from $0 to a $100 million valuation. Then, in a matter of 2 months after the release of GPT-4, that valuation went to a liquid exit to Thomson Reuters for $650 million. So you have a lot of lessons about how to create real value from really large language models. I think you were one of our friends in YC, one of the first people to actually realize this is a sea change and revolution. Not only that, but "we're gonna bet the company on it," and you were super right. So welcome, Jake! Happy to be here.

One of the cool things I think about Jake's story, and the reason why we wanted to bring him on today, is that if you just look at the companies that good founders are starting now, it's a lot of vertical AI agents. I mean, I was trying to count the ones in S24; we have literally dozens of the YC companies in the last batch that are building vertical-specific AI agents. And I think Jake is the founder who is currently running the most successful vertical AI agent. It's by far the largest acquisition, and it's actually deployed at scale in a lot of mission-critical situations.

The inspiration for this was, we hosted this retreat a few months ago, and Jake gave an incredible talk about how he built it. We thought that it'd be super useful for people who watch the Light Cone, who are interested in this area, to hear directly from one of the most successful builders in this area how he did it. So how did you do it?

Well, first of all, like a lot of these things, there's a certain amount of luck. Over the course of our decade-long journey, we started investing very deeply in AI and natural language processing, and we became close with a number of different research labs, including some of the folks at OpenAI. When it came time for them to start testing early versions of, uh, we didn't realize it was GPT-4 at the time, but what became GPT-4, we got a very early kind of view of it. So, you know, months before the public release of GPT-4, we, as a company, were all under NDA, all working on this thing. I'll never forget the first time I saw it.

It took maybe 48 hours for us to decide to take every single person in the company and shift what they were working on, from the projects we were working on at the time, to 100% of the company all working on building this new product we called Co-Counsel, based on the GPT-4 technology.

How many people was that?

We were about 120 people at the time.

So you took like 120 people and completely changed what they were all working on?

Yes, yes, yes, in 48 hours.

Yes! And for the people watching, CaseText originally, I mean, had always been in the legal space. You're a lawyer and you built something for yourself. And, you know, the first versions of it were actually annotated versions of case law.

Yeah, that's exactly right. So in the very early origins of the company, the mission of the company, what we were always focused on, is how can we build something that brings the best of technology to the legal space? I, as a lawyer, actually liked the job a lot. The part of my job that I hated the most was when I had to interact with the technology that lawyers have to use regularly to get the job done. I remember thinking, and this is like 2012, when I was at a law firm: if I wanted to do something really trivial, I had like a new iPhone at the time, I could go on Google and find movie times or where's the closest open Thai restaurant with vegetarian options. That was super easy. But if I wanted to find the piece of evidence that was going to exonerate my client and make it so he doesn't have to go to jail for the rest of his life, or the key legal case that will help me win a billion-dollar lawsuit, well, that's going to be like 5 days in a row till 5:00 a.m. every day. It's like, there's got to be a better way.

What is the process as a lawyer? You would have to read stacks and stacks of documents pretty much, yeah?

Right before I started practicing, before everything went virtual or online, you would literally be in a basement with bankers' boxes full of documents, reading them one by one by one to try to find, you know, all the emails in a company like Pfizer or Google to see if there is potential fraud. And then if you wanted to find case law, slightly before my time, you'd literally go to the library and open up books and just start reading. You know, new products were coming out that were some of the first web-based research tools, but they were pretty clunky. It was just hard to find the relevant information; you couldn't do Ctrl+F or any of this stuff yet, basically.

And what was interesting about you is, you also happened to be the rare breed of having also computer science training, so this must have driven you nuts.

Yeah, exactly. I mean, in the law firm, I'll never forget I was building like browser plugins to go on top of the tools I was using just to make my life more efficient and effective. One of the reasons I left the law firm to start a company and apply to YC was I got in trouble with the general counsel, who thought like, "Hey, why are you spending all your time, you know, doing this tech stuff?" And also made it at the time very clear that my law firm owns all that technology. So I decided to do something different.

So do you want to tell us a little bit about the first 10 years of CaseText, the sort of like long slog in the pre-LLM era?

One of the lessons I think that I took away from that time period is that when you start a company, you may not get the idea exactly right. You may have the right general direction, you know there's a problem you're trying to solve, but it could take a very long time to figure out what the solution is. For us, for example, you know, we saw that there was this kind of combined issue of bad technology in the legal sphere, but also, like, a lot of lawyers use content to do things like research and understand what the law is. So we thought, "Okay, well, we can do the technology better, but how are we going to get this content?" We spent like a couple of years trying to get, as Gary said, lawyers to annotate case law and to provide information. So it's like a UGC site, like user-generated content. That was a big focus of ours, the kind of one-two punch of better technology, but also better content.

Our heroes were Stack Overflow, Wikipedia, GitHub, and other kinds of open-source or UGC websites, and it was a total failure. We could not get lawyers to contribute their time and information. I think these are just different populations. The typical Wikipedia editor has more time on their hands than they know what to do with, and many do contribute; they're adding content for free and altruistically. Lawyers bill by the hour; their time is incredibly valuable. They're always running out of time. They had no time to contribute to some UGC site.

So we had to pivot, and we started investing very deeply. At the time it was not called AI, it was just natural language processing and machine learning. We saw that, first of all, we didn't need to create all this UGC to replicate some of the best benefits of what our competitors had in these big content databases. Some of it you could basically do, even then, on an automated basis. And then also we were starting to create these user experiences that were a lot better than what our competitors could offer, based on what at the time seems kind of quaint, like AI stuff, like, you know, the same recommendation algorithms that power Pandora's and Spotify's recommended music. They look at basically how this song relates to that song: people who listen to this also listen to this and this and this.

Right? Similarly, we looked at how cases cite, you know, other cases. They all reference earlier opinions; they kind of build out this network of citations, and we found ways that we could check a lawyer's work. They'd upload their work so far, and it would be like, "Well, everybody who talks about this case talks about this case too, and you missed that." So, cool experiences like that.

But the truth is, until the very end, until Co-Counsel, a lot of what we did were, relatively speaking, kind of incremental improvements on the legal workflow. One of the things that's kind of weird about this is when there's just an incremental improvement, it's actually pretty easy to ignore. A lot of our clients, they never say this literally, but you kind of get this impression. You walk into their office and you try to pitch them a product and you say, "This is going to change everything about the way you practice," and they go, "Well, I make $5 million a year. I don't want anything to change. I do not want to introduce anything that has the opportunity to make my life at all worse, or even potentially more efficient," because they bill by the hour.

It was really only after, much later, when ChatGPT came out. You know, at the time we were privately and secretly working on GPT-4, ChatGPT came out, and all of a sudden every lawyer in America, probably in the world, saw, "Oh my God, I don't know exactly how this is going to change my work, but it's going to change it very substantially." Like, they could feel it. The same, you know, guys and gals who were telling us, "I make $5 million a year. Why would I change anything about my life?" were now like, "I make $5 million a year. This is going to change something. I need to be ahead of this." The technology itself, and we'll get into it in a second, really changed what we could build for lawyers, but the market's perception of what was necessary really changed as well.

For the first time in our 10 years, you know, even before we launched Co-Counsel publicly based on GPT-4, they were calling us, like, "We know you work on AI. We need to get on top of this. What can you show us? What can we work on?" I think it's because the change was not incremental anymore, it was fundamental. All of a sudden, they had to pay attention; they could not ignore it.

I guess the mental model I have for you is there's this concept of the idea maze. You know, the founder goes in at the beginning of the maze, and they're just feeling around, actually in the arena, talking to, you know, customers, learning: where are the walls? Which path to go? Should I go left or right? And then, as is actually common for startup founders in the idea maze, you will actually reach a dead end. Then, usually, you have to pivot.

Yeah, and then I think you have a very interesting story because you were sort of towards the end of maybe like one of the, you know, parts that weren't going to get you all the way to product-market fit. But then LLMs dropped, and then it's like the maze got shaken up.

Yeah, and then you are actually much closer to product-market fit than absolutely anyone else.

Yeah, that's exactly right. That's why you're the first man on the moon.

Yeah, I think there's something to that. And the thing is, you know, each time we progressed through that maze, it felt like maybe now we're at product-market fit. You know, we were making real revenue before we launched Co-Counsel, and we had real customers, and they said really great things about us. I keep on thinking about this article written by Marc Andreessen in like the early 2000s. I think it's called "The Only Thing That Matters." In it, he describes what it feels like to have product-market fit. He lists things like: your servers will go down, you can't hire support people and salespeople fast enough, you're going to eat for a year free at Buck's, the kind of famous Woodside diner where a lot of VCs will take you. I read that early on in my, you know, career and I was like, okay, well, that's hyperbolic. But when we launched Co-Counsel, it was literally exactly that. Our servers were going down. We could not hire support people fast enough. We couldn't hire salespeople fast enough. I ate a lot at Buck's. You know, it used to be a really big day if we were in the ABA Journal or some other, you know, legal-specific publication; now we were on CNN and MSNBC, and, like, all of a sudden everything changed. And that's what real product-market fit looks like. I think Marc, writing in like 2005 or whenever the article came out, was exactly right about what it looked like in 2023.

Can you talk about that crazy time? Because it was only two months from when you launched Co-Counsel to getting bought for $650 million. So what happened in those two months?

Well, to be clear, the transaction only closed six months after we launched, but the conversations started within two months of launch. So we started building Co-Counsel, and, just for background purposes, the idea we came up with, again, within like 48 hours, like a weekend, after seeing GPT-4, is something that doesn't sound really crazy today, but it felt crazy at the time: this AI legal assistant. By which we mean it's almost like a new member of the firm. You can just talk to it, not unlike how you might talk to something like ChatGPT today, and give it tasks like, "I need you to read these million documents for me and tell me if there's any evidence of fraud happening in this company." And then within a couple of hours, it's like, "I've read all the documents; here's what the summary is." Or have it summarize documents, or do legal research and put together a whole memo, after researching hundreds or thousands of cases, answering the lawyer's initial research question.

And in that sense, it was this really powerful extension of the workforce of these law firms; that was the concept from the beginning. We made a very early initial version of it, and, because under our agreement with OpenAI we could not be public about this product, they did let us extend the NDA to a handful of our customers. So we started having our customers use it. For months before GPT-4 was launched publicly, we had a number of law firms, and they had no idea they were using GPT-4, but they were seeing something really special. Right? This is actually even before ChatGPT. So this is their first ever experience talking to this Godlike-feeling, you know, AI that was all of a sudden doing these tasks that would take me, when I practiced, like a whole day, and it's being done in a minute and a half.

Right? And so, as you might imagine, it was nuts. I mean, first of all, the whole company, all 120 of us, did not sleep for those, you know, months before GPT-4 was publicly launched and therefore we could publicly launch the product. We felt like we had this amazing opportunity to run far ahead of the market. Something really beautiful happens when everybody's working super hard, which is you iterate so quickly. And actually, I still see some companies out there that are stuck where we were in the first month of seeing GPT-4, right? And I think it's because they're just not as intensely focused and engaged as we were able to be during those, like, six months or so before the public launch of GPT-4.

To do this transition, you kind of had to shake up the company. You kind of went into deep founder mode, because there was a lot of pushback from employees. Like, "Oh, this thing was working. Why should we go and throw ourselves into the deep end of AI?" Tell us about that founder mode moment for you.

So, first of all, this is especially true when you're running a business for 10 years, because they've seen you wander through that maze and bump into dead ends. A lot of those folks have been there for most or all of that time, watching, you know, me as the founder saying, "We're definitely going this direction. It's definitely going to work," and sometimes it doesn't. You only get so many of those with employees, right? So this was maybe my last one that I had with some of these folks, and they're like, "Here Jake goes again with this crazy new technology and some idea we're gonna invest deeply in." And yeah, it took some work to convince people. And if you imagine what some of the different roles are: if you're in the go-to-market role, if you're selling or marketing a product, and we're growing 70-80% year-over-year, we're between $15 and $20 million in ARR, things weren't terrible, right?

That's great!

Yeah, we were doing great. But so they were like, "What? Why?" You know, even the board: some of the members got it immediately, some of them had to be persuaded. Right? And about the founder mode moment, one thing that really worked for me is, uh, I led the way through example. I built the first version of it myself.

Wow!

Even with a 120-person company, with like a whole bunch of engineers and lawyers and stuff?

Before that, you like opened up your IDE and actually built the thing yourself?

Oh yeah. And part of it was the NDA only extended at first to me and my co-founder. That was it. That was a blessing, though. It turned out to be like perfect. And even after the NDA got extended a little bit, we kept it pretty small at first, for the first, you know, little bit of time. I made up my mind within 48 hours that the whole company was going to do this, but we actually only told the company, I think, a week and a half after we first got access. During that week and a half, we built the very first version, like a prototype version of this.

And I'll never forget this. The timing is just so funny. We saw it on like a Friday; we had it all weekend long, we're working with it. And then Monday was an executive offsite where everybody came, all my executives came, and they expected we were going to be talking about how we're going to hit our sales target for the next quarter. And it's like, guys, we're talking about none of that. You know, we are talking about something totally different right now. Let me show you something on my laptop, you know?

So, yeah, I built the first version myself, but going through that process, me and then a handful of other people, I think was really helpful. We also brought in customers early, and that helped convince a lot of people. As soon as like a skeptical sales or marketing or whatever person, or even an engineer, was on the other end of a Zoom call where a customer was reacting to the product in real-time and giving us their honest reactions and like seeing the look on their face—again, you have to imagine it's almost hard to imagine that the world was like pre-ChatGPT, but then some of these people were seeing that idea for the first time and they were just blown away.

That really changed minds quickly. I mean, we saw people go through existential crises live, you know, on Zoom calls, like, "Oh my God!" You see their expression change in all kinds of ways. It's like, "What am I going to do?" The very common reaction amongst the senior attorneys we showed it to was like, "Well, I've got to retire soon. Like, you know, do I have to deal with this?" Some of this was really driven by GPT-4 coming out.

Like, you had access to 3; you had access even to 2, I think.

We had access. We were in a close relationship with a lot of the labs, including OpenAI, and they kept on showing us stuff kind of early on in its development. And they were like, "Well, can you build something with this for legal?" And every time we were like, "No, this sucks." Like, you know, by the time we got to 3 and 3.5, it was like, "Okay, well, this is plausible-sounding English and sounds kind of like a lawyer," so kudos for that, but it is just making stuff up wildly.

We just couldn't. It's very hard to connect it to a real use case, especially in legal, where it's so important that you actually get the facts right; you can't hallucinate. You can't even, you know, make the wrong kinds of assumptions. We had to do a lot of work with those earlier models to even get them close to usable, and they just weren't, really. I mean, one totem, or one example along the way: when GPT-3.5 came out, a study was run, and it showed that GPT-3.5 scored in the 10th percentile on the bar exam, right? So it did better than some people, actually, but only the bottom 10%, probably the ones who were just filling it out randomly, basically.

When we got early access to GPT-4, we were like, "Let's run the study again." We worked with OpenAI; we're like, "We're going to confirm this test is not in the training set," and it wasn't. It was a totally new test to it. And on the test we ran, it did better than 90% of the test takers, right? So it's a big difference.

And also we started running some tests like, "Okay, here's like four or five cases to read. Using those cases, write a memo responding to this question." We did a lot of prompt work to get it to essentially just do it accurately, to cite the actual things in context that we gave it and not make things up. And we were like, "Okay, well this is very different than we saw before." So it was a big moment for us.

Honestly, I'm not sure what the mindset was of the researchers we were working with, but it almost felt like by the time we were having that meeting, it felt like one of those other meetings we had in the past where we were getting ready to say, like, "This is not going to work for legal. Keep on trying." And I think they saw us go through maybe some form of the existential crisis on that call that our customers did.

We were like, "Oh wait, this is super, super, super different." I guess, you know, today we have zero shot; we have, you know, chain of thoughts reasoning. I think a lot of people look at it as it's not merely the text itself but also the instructions that lead up to, you know, the workflow. But you know, way at the beginning, nobody knew any of this stuff. How did you start? You had your sort of tests that you had written for previous versions of the model. They outperformed, but then there's this moment where you say, "Okay, well now it's something, but what do we do next, and how do we do it?"

So the process that we started with then is actually not too dissimilar to what we're doing today. It started with a question of like, "Okay, well what problem are we trying to solve for the user?" Right? The user wants to do research—legal research. So—and they want like a memo answering their question with citations to the original source. So like, that's the end result. And then we're like, "Well how do we go from that end result, like working backwards almost? What would it take to get there?"

What ends up happening a lot with the things that we built for Co-Counsel, which we call skills (which felt very unique at the time; I think a lot of companies now call their AI capabilities skills), is that when you're building these skills, it turns out it usually takes a lot of work to go from, say, the customer inputting something, say a set of documents or a question or what have you, to the end result that they're looking for.

And the way that we thought about it was: how would the best attorney in the world approach this problem? And so in the case of research, for example, the best attorney would, you know, get the request, say, from a partner and then break that request down into actual search queries that run against these platforms. Sometimes they'd use special search syntax; it actually looks almost like SQL, right?

So, from the English-language query, you have to break it down into these different search queries, maybe a dozen different search queries. You'd be really diligent, and then you'd execute the search queries against these databases of law. They come back with, say, 100 results each, and then the most diligent, best attorney would sit down and just read every single one of these results that come back, all the case law, statutes, regulations, and you start to do things like make notes and summarize and compile an outline of what your response might be, line by line or paragraph by paragraph.

Actually, yeah, 100%. And you start like just taking out those insights you're getting from what you're reading. And then finally, based on all that work and all the citations you've gathered, et cetera, then finally you put together your research memo.

We're like, "Okay, well, each one of those steps along the way for the vast majority of them, those were impossible to accomplish with previous technology. But now they're prompts."

Think step by step?

Yeah, think step by step, yeah, exactly. But we actually broke it down. Getting to the final result may be a dozen or two dozen different individual prompts, each of which might, by the way, be thinking step by step themselves. But for each of those prompts, you know, as part of this chain of actions you take to get to the final result, we had a very clear sense of what good looks like.
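To make that decomposition concrete, here is a minimal sketch of what a "skill" built as a chain of prompt steps might look like. The step structure mirrors the research workflow described above, but the helper names (call_llm, run_search), the prompt wording, and the step boundaries are illustrative assumptions, not CaseText's actual implementation.

```python
# Minimal sketch of a "skill" as a chain of prompt steps (illustrative only).
# call_llm and run_search are hypothetical placeholders for an LLM API call and a
# search against a legal research database.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (e.g., a chat-completion request)."""
    raise NotImplementedError

def run_search(query: str) -> list[str]:
    """Placeholder for a query against a case-law database; returns document texts."""
    raise NotImplementedError

def research_skill(question: str) -> str:
    # Step 1: break the English-language question into precise search queries.
    queries = call_llm(
        "Break this legal research question into up to 12 precise search queries, "
        f"one per line:\n\n{question}"
    ).splitlines()

    # Step 2: run each query against the database and collect the results.
    results = [doc for q in queries if q.strip() for doc in run_search(q)]

    # Step 3: read each result and extract only the relevant points, with citations.
    notes = [
        call_llm(
            "Summarize how this authority bears on the question, quoting it exactly "
            f"and citing it. Question: {question}\n\nAuthority:\n{doc}"
        )
        for doc in results
    ]

    # Step 4: compile the notes into a memo that cites only the sources provided.
    return call_llm(
        "Using ONLY the notes below, write a research memo answering the question, "
        f"with citations to the quoted sources.\n\nQuestion: {question}\n\nNotes:\n"
        + "\n\n".join(notes)
    )
```

Each numbered step is one of the "now they're prompts" pieces: small enough to test on its own, with a clear definition of what a good output looks like.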

We had a series, like a battery, of tests before, but this got way more intense, where we'd write, at first, maybe a few dozen tests, and then a few hundred, and then a few thousand for every single one of those prompts. So, you know, if the job to be done at the very beginning of this research process, for example, is taking the English-language query and breaking it down into search queries, we had a very clear sense of what good search queries look like and wrote gold-standard answers: given this input, this is what the output looks like, right?

And so our prompt engineers, and I was one of them at the very beginning, we were all just kind of in it together. We'd write the tests first, basically, and then write these English-language prompts to try to get it so that out of 1,200 times it got the right answer 1,199 times, or what have you.

Sort of like test-driven development.

Oh yeah, really applying the approach from software engineering to prompt development.

That's exactly right. And the funny thing is, I never really believed in test-driven development before prompting. I was like, "The code works or it doesn't, it's fine; you'll see it when you..." But with prompting, actually, I think it becomes even more important because of the nature of these LLMs.

They might go in crazy directions unexpectedly, and so, you know, you might very easily add in a set of instructions to solve one problem you're seeing with one set of tests and then break something in another set of tests.

That exact theory of test-driven development applies, you know, 10x more, I'd say, in the world of prompting.
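As a rough illustration of what a test-driven prompt workflow can look like in practice, here is a small sketch for a single prompt step (the "question to search queries" step). The gold cases, the pass-rate threshold, and the generate_search_queries wrapper are assumptions for illustration; a real suite would have hundreds or thousands of cases per prompt.

```python
# Sketch of test-driven prompt development for one prompt step (illustrative only).
# generate_search_queries wraps the prompt under test; GOLD_CASES stands in for a much
# larger gold-standard set.

GOLD_CASES = [
    {
        "question": "Can an employer in California enforce a non-compete clause?",
        "must_contain": ["non-compete", "California"],
    },
    {
        "question": "What is the statute of limitations for breach of a written contract in New York?",
        "must_contain": ["statute of limitations", "New York"],
    },
]

MIN_PASS_RATE = 0.99

def generate_search_queries(question: str) -> list[str]:
    """Placeholder for the prompt step under test."""
    raise NotImplementedError

def test_search_query_prompt() -> None:
    passed = 0
    for case in GOLD_CASES:
        queries = generate_search_queries(case["question"])
        joined = " ".join(queries).lower()
        if all(term.lower() in joined for term in case["must_contain"]):
            passed += 1
    rate = passed / len(GOLD_CASES)
    # Fail loudly if a prompt edit quietly regresses previously-passing cases.
    assert rate >= MIN_PASS_RATE, f"pass rate {rate:.1%} is below {MIN_PASS_RATE:.0%}"
```

The threshold is the point: an instruction added to fix one failing case can silently break others, and a suite like this is what catches it.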

There are a lot of naysayers saying that a lot of companies are just building GPT wrappers and there's not a lot of IP getting built, but actually there's a lot of finesse to how you explain all of this. Can you tell us about that and how much more there is to be built?

Oh yeah, I mean, I think the thing is when you're actually trying to solve a problem for a customer and actually doing the job—in our case, of like what a young associate might do—and do it really well, there are many layers of things you have to add in to actually get the job done.

And by the time you add that all up, you're not a GPT wrapper; you're a full application that may include, in our case, proprietary datasets like the law itself, and our annotations to the law that we added automatically. It may include connections into customer databases; in our case, in legal, they have these very specific legal document management systems, you know, so connecting into those is very important.

It may include something as subtle as how well you OCR, and what OCR programs you use and how you set those up. One of the tasks that Co-Counsel does, for example, is reviewing large sets of documents. Once you start working with a lot of documents, you see stuff with handwriting all over it, and pages that are tilted in the scan.

There's this crazy thing that they do in law where they print four pages on one page to save room, and OCR is going to read it straight across, but actually it goes, you know, 1, 2, 3, 4. So by the time you've dealt with all the edge cases, frankly, before you even hit the large language model, everything else up to the large language model, there might be dozens of things you build into your application to actually make it work and work well.
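As one small example of the kind of pre-LLM plumbing being described, here is a rough sketch of splitting a "4-up" scanned page into its four mini-pages before OCR. The use of Pillow and the top-left, top-right, bottom-left, bottom-right reading order are assumptions for illustration; real pipelines would also handle deskewing and handwriting.

```python
# Rough sketch: split a "4-up" scanned page into its four mini-pages before OCR,
# so the text isn't read straight across two mini-pages (illustrative only).

from PIL import Image  # pip install pillow

def split_four_up(page: Image.Image) -> list[Image.Image]:
    w, h = page.size
    boxes = [
        (0, 0, w // 2, h // 2),      # mini-page 1: top-left
        (w // 2, 0, w, h // 2),      # mini-page 2: top-right
        (0, h // 2, w // 2, h),      # mini-page 3: bottom-left
        (w // 2, h // 2, w, h),      # mini-page 4: bottom-right
    ]
    return [page.crop(box) for box in boxes]

# Each cropped mini-page would then be deskewed and sent to OCR as its own page.
```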

Then you get to the prompting piece and writing out tests and very specific prompts and the strategy for how you break down, you know, a big problem into step by step by step kind of thinking.

And how you feed in the information, how you format that information the right way: all of that also becomes, you know, your IP. It's very hard to build, and therefore very hard to replicate.

Which is all the business logic. Even all the very successful SaaS companies with a very specific domain need very, very custom, esoteric, niche integrations, like plugging into this esoteric law database.

Yeah, absolutely. Two things I think about all the time. It's like basically all SaaS for a while was just a SQL wrapper, right? Like, if you think about very successful companies like Salesforce, they built that business logic around basically just databases and connections between tables in a database.

Sometimes it's bridging the gap between something that a very technical person can do but most people can't, and making it accessible. Or bridging the gap between something that almost works, like you can do a lot of cool demos in ChatGPT without writing a line of code, and something that actually works. The thing that almost works works, you know, 70% of the time. But getting to 100% of the time is a very different kind of task, and people will pay $20 a month for the 70% and maybe $500 or $1,000 a month for something that actually works, depending on the use case, right? So there's a lot of value in going that last mile, or 100 miles, whatever it is.

Yeah, can you talk about how you went from 70% to 100%? Because I think the other knock on this technology that we hear a lot is like, "Oh, these LLMs hallucinate too much; they're not accurate enough for real-world use." But as you said earlier, like the use case that you're working on is a mission-critical use case. There's like a lot at stake if the agent gives bad information to lawyers who are working on important court cases.

How did you make it accurate enough for lawyers who are conservative by nature to trust it?

This test-driven development framework, first of all, goes a long way, because you can start seeing, you know, patterns in why it's making a mistake. And then you add instructions against that pattern, and then sometimes it still doesn't, you know, do the right thing, and then you really ask yourself, "Okay, well, was I being super clear in my instructions? Am I leaving out information it needs to see? Or including too much or too little information for it to really get the full context?"

And usually these things are pretty intelligent, and so usually you can root-cause why you're failing certain tests and then build to a place where you're actually passing those tests and just getting it right, you know? And one of the things we learned is, if it passes, frankly, even like 100 tests, the odds that it will handle the next 100,000, on any random distribution of user inputs, 100% accurately are very high.
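A sketch of the pattern-spotting loop being described might look something like this; the result records and pattern labels are illustrative assumptions, and in practice the triage is a person reading the failing outputs, not code.

```python
# Sketch of triaging eval failures by the pattern behind them (illustrative only).
# Each result record is assumed to carry a human-assigned label for why it failed.

from collections import Counter

def triage_failures(results: list[dict]) -> Counter:
    """results: one dict per test case, e.g. {"passed": False, "pattern": "made_up_citation"}."""
    return Counter(r["pattern"] for r in results if not r["passed"])

# A report like Counter({"made_up_citation": 7, "missed_context": 2}) points at the one
# targeted instruction to add to the prompt first (e.g. "cite only from the provided
# sources"), after which the whole suite is rerun to make sure nothing else broke.
```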

One of the things that strikes me as tricky: many founders we work with are very tempted to just raw-dog it. There are no evals, no test-driven development; it's just vibes-only prompt engineering. And maybe, I mean, you switched over to this very quickly. Was it just obvious from the beginning? You're like, we just can't do it that other way; we should not raw-dog any of these prompts?

Yeah, I think the biggest thing, first of all, depends on the use case. For a lot of things that we were working on, for better or for worse, there was a right answer, and if you get the wrong answer, lawyers are not going to be happy about it. You know, I had been a lawyer myself but also been with lawyers for a decade. Every time we made the smallest mistake in anything that we did, we heard about it immediately. Right?

And so I had that voice in my head, maybe, as I was going through this process. And that was the learning from the 10 years of slogging through pre-LLMs: you're like, "No, it has to be 100%!"

Oh yeah, oh yeah, that's probably true of way more domains than we realize actually. It could be.

Because, and the other thing that we were thinking about a lot is, you can lose faith in these things really quickly, right? You have one bad experience, especially if your first experience is bad, and you're like, "You know, maybe I'll check on this AI stuff a year from now," especially if you're a busy lawyer, not a technologist.

So we knew we had to make that first encounter, that first week, really, really work for the lawyer, or else they're not going to invest in it deeply.

So let's talk a bit about OpenAI's o1, because it is a very different model. I mean, up to this point, with GPT-4 and all of that previous generation, the analogy in terms of the intelligence is sort of system one thinking in the Daniel Kahneman sense, right? He has this whole economic theory; he won the Nobel Prize around it.

System one thinking is just very fast; it's these decisions that humans make very intuitively, based on patterns, and LLMs are fantastic at that, but they're terrible at the executive function. Because what I'm hearing, with all the stuff that you're describing, is that you're kind of giving the LLM an executive function: how do you think, right? How do I manage you? It's really that slower thinking.

I think o1 is exciting. We haven't seen things built with it yet because it just got announced a few days ago, right? I think it's getting to that system two thinking, and I think this has been a big area of research; I saw a lot in the news a year ago where a lot of the researchers were excited to unlock this, because this is the missing piece toward AGI.

Let's talk about it: what are your thoughts on o1 and how this changes things?

First of all, I think it's a very impressive model. Like with other models, we gave it the kinds of tests that we knew were failing before, and it's not just math; the degree of thoroughness, precision, and intelligence applied to some of these questions is impressive. Sometimes it's the stuff that you wouldn't expect you'd need a super smart model to do.

In one of the tests that we run, we give it, uh, a lawyer's real legal brief, but we edit very slightly some of that lawyer's quotations of the case to make it a wrong quotation or a wrong kind of summarization of the case. So he has this like 40-page legal brief, and you alter things; just adding a word like "not" can change the meaning of something entirely.

Right? And then we give the full text of the case as well to the AI and we say, "Well, you know, what did the lawyer get wrong about this case, if anything?" And literally every LLM before that would be like, "Nothing, it's perfectly right." It's just not a precise enough thinker about some of the very nuanced things that we altered about the brief to make it slightly wrong. And then o1 got it, got it immediately.

Like you said, like it thinks actually for a while. Like it sits there for a minute, you're like, is this anything? Is it on? You know? Like—but then it starts answering, and it's like, "Oh well, you know, you changed an 'and' to a 'neither nor.'"

So those are the kinds of tests that you kind of expect even, frankly, earlier LLMs to be able to pass, but they just could not, and all of a sudden o1 is doing these things that take, like, precise, detailed thinking.
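The altered-brief test can be sketched roughly as follows. The prompt wording, the call_llm placeholder, and the pass check are illustrative assumptions, not the actual eval.

```python
# Rough sketch of the altered-brief eval: flip the meaning of one quotation in a real
# brief (e.g., insert a "not"), give the model the brief plus the full case, and check
# that it flags the doctored passage instead of saying "nothing is wrong".

def call_llm(prompt: str) -> str:
    """Placeholder for a reasoning-model API call."""
    raise NotImplementedError

def altered_brief_test(brief: str, case_text: str, original: str, altered: str) -> bool:
    doctored = brief.replace(original, altered, 1)
    answer = call_llm(
        "Here is a legal brief and the full text of the case it discusses. "
        "What, if anything, did the lawyer get wrong about this case?\n\n"
        f"BRIEF:\n{doctored}\n\nCASE:\n{case_text}"
    )
    # Pass if the model points at the doctored passage rather than saying "nothing".
    return altered.lower() in answer.lower() or original.lower() in answer.lower()
```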

Obviously, we don't have the internals on o1 or how it really works. We have, you know, this broad idea of chain of thought. Seemingly, we know that if OpenAI had a giant corpus of internal monologue of people thinking through doing things step by step, it would be even a lot better.

It sort of rhymes with, uh, the thing you did to, you know, take your first step on the moon, right? It rhymes with breaking it down into, you know, chunks where you can get to 100% accuracy, instead of just throwing it all in the context window and, you know, hoping maybe magically it will work.

Yeah, do you think that that's what's happening then?

I think there's a good shot that they've, you know, maybe changed what their contractors are doing. Instead of just doing, you know, input in, answer out, they were doing input in, "How would I think about solving this problem?", and then answer out.

But then, you know, the interesting thing is it's kind of limited by the intelligence of the people writing those instructions. And one of the things that we're investigating, for what it's worth, with o1 is: can we prompt it to tell it what to think about during its thinking process?

Inject, like, again, we've hired some of the best lawyers in the country: how would some of the best lawyers in the country think about solving this problem?

Maybe. You know, we have no conclusive evidence one way or the other yet that this dramatically improves things. It's so early, and just not enough time has passed yet. There's a chance that one of the new prompting techniques with o1 is teaching it not just, like, how to answer the question or what examples of good answers look like, but how to think. And I think that's another really interesting opportunity here: "injecting" domain expertise, or just your own intelligence.
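To illustrate what "injecting domain expertise into the thinking process" might look like, here is a sketch of a prompt that prepends expert guidance on how to reason about the task. The guidance text and helper are assumptions for illustration, and, as noted above, there is no conclusive evidence yet that this improves a reasoning model.

```python
# Sketch of "teaching the model how to think" by prepending expert reasoning guidance
# to the task (illustrative only; effectiveness with reasoning models is an open question).

EXPERT_GUIDANCE = """When checking whether a brief mischaracterizes a case, a careful
litigator would: (1) locate every quotation and pin cite in the brief, (2) compare each
against the cited opinion word by word, (3) check whether the holding is being stretched
beyond its facts, and (4) only then form an overall judgment."""

def build_prompt(task: str) -> str:
    return f"{EXPERT_GUIDANCE}\n\nTASK:\n{task}\n\nWork through the steps above before answering."
```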

I'm just so thankful, because I think you're sort of sharing the breadcrumbs. You know, there are a great many other spaces where this technology is just beginning. I mean, you go to pretty much any company, and people have no concept of what's just happened.

Yeah.

Like they actually literally still repeat all of those sort of tired tropes of, "Oh, you better be fine-tuning," or all the—I mean, these things are just not connected to like what we're seeing day-to-day with startups and founders trying to create things for users.

What I'm kind of glad for is that we get to actually share this news, like this knowledge, because like even the things we talked about, you know, hey, you should probably do evals. Like there's a lot of alpha in getting to 100%, not just 70%.

These are sort of the breadcrumbs that will actually go on to create all of the billion dollar companies, maybe thousands of them actually.

We hope so. I mean, I think that you're about to start to see a lot of other fields, like law, really level up, when you don't have to spend, you know, millions of dollars and six months literally in a basement reading document by document by document.

Right? When you actually can just get past that and get just the results, and now you're thinking strategically and intelligently.

The unlock for these companies, I mean, they currently pay, again, millions of dollars in salaries for these jobs to be done, each of them, right? So for any company to come out with an AI that can do even 70% of that—the value is like really there.

And I just want to encourage people to not kind of give up based on those tropes. Right? Like, "Oh, it hallucinates too much. It's too inaccurate," or whatever the excuse is for any given example. It's like, there's a path, and you can do it, and there's some good news in that. You know what? The jobs aren't going to go away; they'll just be more interesting.

That's what I think.

Yeah, well with that we're out of time, but Jake, thank you so much for being with us.

Thanks for having me!

See you guys next time!

