Model Context Protocol (MCP), clearly explained (why it matters)
Greg: Everyone is talking about MCPs. It's gone completely viral, but the reality is most people have no idea what MCPs are, what they mean, or what the startup opportunities associated with them are. So in this episode I brought on Professor Ross Mike, who is probably the best explainer of technical concepts in a really easy way that someone who's non-technical can really understand. He explains it beautifully in such a short amount of time, and if you stick around to the end you'll hear a couple of his startup ideas that incorporate MCPs. So enjoy the episode and see you soon. [Music]
Greg: All right, well, we got Professor Ross Mike on the pod, and the reason we have him is that I don't know what the hell MCPs are. I've been seeing it on X and I need a succinct, clear Professor Ross Mike explanation. Yes, I've read a bunch of threads on it and I've seen a couple videos on it, but there's nothing like a Ross Mike explanation, so I'm here for the "what do I need to know about MCPs," and that's why you're here. Thank you for coming on; I appreciate it, thank you very much.
Professor Ross Mike: Yeah, class is definitely in session. I'll just start sharing my screen, okay? So, understanding MCP is really important, but you'll also realize the benefits and why it's sort of a big deal, but not really, at the same time. You see, one of the things in programming land that we have, and that programmers love, is standards, and the reason standards are important is that they allow us engineers to build systems that communicate with each other. The most popular one, which you might have heard of or might not, and you don't really need to know the details, is REST: REST APIs. They're basically a standard that every company follows when they construct their APIs and their services, so that I as an engineer am able to connect with them.
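To make the idea of a standard concrete, here is a minimal sketch of why REST conventions matter, using Python's requests library. The endpoints and payloads are hypothetical; the point is that once you know the pattern, every service reads the same way.

```python
import requests

# Because REST is a shared convention, the same verbs and shapes work
# across completely different companies' APIs. (Endpoints are hypothetical.)

# Read a resource: GET plus a predictable URL, JSON comes back.
user = requests.get("https://api.example-one.com/v1/users/42").json()

# Create a resource: POST plus a JSON body, a JSON representation comes back.
invoice = requests.post(
    "https://api.example-two.com/v1/invoices",
    json={"amount_cents": 1999, "currency": "USD"},
).json()

# The payloads differ, but the pattern (verbs, URLs, JSON, status codes)
# is the standard that lets engineers connect to services they've never seen.
```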
Professor Ross Mike: Now, understanding that engineering is all about standards and having these formalities we follow to make life easier, when we think in the context of an LLM, I want you to understand this one important thing: LLMs by themselves are incapable of doing anything meaningful. What do I mean by that? If you remember the first ChatGPT, GPT-3, or was it 3.5, I'm not sure, but if you just open any chatbot and tell it to send you an email, it won't know how to do that; it will just tell you, "Hey, I can't send you an email." The most you can do with an LLM is ask it questions, maybe ask it to tell you about some historical figure, whatever it may be. LLMs are truly incapable of doing anything meaningful, and by meaningful I mean it'd be nice if it could send me an email, if it could do some specific task on my behalf.
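As a rough illustration of that limitation, here is what talking to a bare LLM looks like through a typical chat-completions API, sketched with the OpenAI Python SDK (the model name is illustrative): text goes in, text comes out, and nothing else happens.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A bare LLM call: the only thing it can do is return more text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Send an email to my boss saying I'm running late."}
    ],
)

# It cannot actually send an email; it can only describe or refuse.
print(response.choices[0].message.content)
```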
Professor Ross Mike: But the only thing an LLM in its current state is good at is predicting the next text, right? So for example, if I say "My Big Fat Greek," an LLM, with all its data sources and all its training material, will determine that the next word is "Wedding," right? So this is the most an LLM by itself can do. The next evolution was that developers figured out how to take LLMs and combine them with tools, and you can think of a tool like an API, for example. Most of us are aware that ChatGPT and these other chatbots are able to search the internet. For example, Perplexity, right? Perplexity gives you the option to chat with an LLM, but that LLM has the ability to fetch information from the internet and present it to you. The LLM itself is not capable of doing that, but what they've done is construct a tool; they've given the LLM access to an external service, right? And there are plenty of these services. I think there's Brave Search, and OpenAI offers an API now.
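This "LLM plus tools" evolution is usually implemented with function calling. Here is a minimal sketch using the OpenAI-style tool schema; the web_search tool is hypothetical, and note that the model only asks for the tool, so your own glue code still has to execute it.

```python
from openai import OpenAI

client = OpenAI()

# Describe an external capability to the model. The model never runs this
# itself; it can only ask for it by name, with arguments.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",  # hypothetical tool we'd wire to a search API
        "description": "Search the web and return top results.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": "What happened in AI news today?"}],
    tools=tools,
)

# If the model decides the tool is needed, we get back a structured request,
# and it's our glue code's job to actually call the search service.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
```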
Professor Ross Mike: So LLMs started to become a bit more powerful when we connected tools to them, right? I can give you an example. Let's say every time I get an email, I want there to be an entry in a spreadsheet. Now, most of you know there are services like Zapier, n8n, or any of those automation services; if I build out an automation and connect that to my LLM, it just became a bit more meaningful. Now that's awesome and cool, but it gets really frustrating when you want to build an assistant that does multiple things. Imagine: search the internet, read your emails, summarize this. You start to become someone who glues a bunch of different tools to these LLMs, and it can get very frustrating, very cumbersome.
Professor Ross Mike: If you're wondering why we don't have an Iron Man-level Jarvis assistant, it's because combining these tools and making them work with the LLM is one thing, but then stacking these tools on top of each other, making it cohesive, making it work together is a nightmare in itself. And this is where we're currently at. Before I continue, does this make sense? This is where we started: LLMs by themselves. Write me a poem, tell me about World War I. And then the second evolution is, "Oh, we now have tools," right? We now have these external services that we can connect to our LLM. The problem here is they're difficult; it's annoying, and as someone who works at an AI startup, Tempo, we have a lot of tools. For example, we do a search: you have to find an external service, you have to connect it to the LLM, and you have to make sure the LLM doesn't hallucinate or do something stupid. And believe it or not, as cool as LLMs are by themselves, they're very, very dumb, but these tools make them just a bit more capable.
Professor Ross Mike: So this is where we're at. Uh, Greg, we good so far?
Greg: Crystal clear. I'm loving this. Beautiful. Quick break in the pod to tell you a little bit about Startup Empire. Startup Empire is my private membership where it's a bunch of people like me, like you, who want to build out their startup ideas. They're looking for content to help accelerate that, they're looking for potential co-founders, they're looking for tutorials from people like me to come in and tell them: how do you do email marketing? How do you build an audience? How do you go viral on Twitter? All these different things. That's exactly what Startup Empire is, and it's for people who want to start a startup but are looking for ideas, or for people who have a startup but just aren't seeing the traction that they need. So you can check out the link to StartupEmpire.co in the description.
Professor Ross Mike: Now enters MCP, and what does MCP mean? I think the simplest way, right, without getting too technical (I've read the threads too, and as a technical person I appreciate them, but for the non-techie I can assume it's frustrating), is to think of it this way: think of every tool that I have to connect to make my LLM valuable as a different language. So tool one is English, tool two is Spanish, tool three is Japanese, right? Imagine every tool is its own language. And it's not that there isn't a standard for how APIs work, but every service provider constructs their APIs differently. There's different information you have to pass; there are just varying degrees of things that you have to set up, and again, it just feels like gluing a bunch of different things together. Will it work? Yes, but at scale it gets very difficult.
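Here is a sketch of what he means by every tool being its own language: two hypothetical providers exposing the same basic operation, "send a message," with completely different shapes, so each one needs its own glue code.

```python
import requests

# Two hypothetical services, same job ("send a message"), different dialects.

def send_via_service_a(text: str) -> None:
    # Service A: token in a header, nested JSON body, POST to /messages.
    requests.post(
        "https://api.service-a.example/v2/messages",
        headers={"Authorization": "Bearer TOKEN_A"},
        json={"message": {"body": text, "format": "plain"}},
    )

def send_via_service_b(text: str) -> None:
    # Service B: key as a query param, flat form-encoded body, POST to /send.
    requests.post(
        "https://service-b.example/api/send",
        params={"api_key": "TOKEN_B"},
        data={"text": text},
    )

# Multiply this by every tool you want the LLM to use, and the glue code
# (plus the prompt text describing each dialect) grows with it.
```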
Professor Ross Mike: MCP you can consider to be a layer between your LLM and the services and tools, and this layer translates all those different languages into a unified language that makes complete sense to the LLM, right? So it's the evolution of LLM plus tools, but this evolution makes it very simple for the LLM to connect to and access different outside resources, because that's what tools are at the end of the day. So with MCP I'm able to connect to an outside data source, an outside database maybe, a tool like Convex or Supabase, right? Imagine I just tell the LLM, "You know what, create me a new entry in my database," and it's connected to my database via MCP and it knows exactly what to do and how to do it.
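To ground the database example, here is a minimal MCP server sketch using FastMCP from the official MCP Python SDK. The SQLite table and the create_entry tool are my own illustration, not anything from the episode: the point is that the server describes its tools in the standard MCP format, so any MCP client's LLM can discover and call them without bespoke glue.

```python
import sqlite3

from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("notes-db")  # server name shown to clients

@mcp.tool()
def create_entry(title: str, body: str) -> str:
    """Insert a new row into the notes database."""
    # Illustrative schema; a real service would wrap its own storage here.
    conn = sqlite3.connect("notes.db")
    conn.execute("CREATE TABLE IF NOT EXISTS notes (title TEXT, body TEXT)")
    conn.execute("INSERT INTO notes VALUES (?, ?)", (title, body))
    conn.commit()
    conn.close()
    return f"Created entry: {title}"

if __name__ == "__main__":
    # Speaks the MCP protocol over stdio; the client handles the rest.
    mcp.run()
```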
Professor Ross Mike: In the second evolution, LLMs and tools, there's a lot of manual work that goes on; there's a lot of step-by-step planning that you have to do, and there are a lot of edge cases where it can fail. And this is why, as exciting as the space is, none of us have a Jarvis-level assistant yet. It feels like we're there and we're close, but that second system makes it very difficult. And what's frustrating is this: let me think of a simple service, a simple tool. Imagine every time a Slack message comes in, your LLM reads that Slack message and shoots you a text, right? Sounds pretty trivial.
Professor Ross Mike: Here's the frustrating part: imagine Slack updates their API, or the text service makes a change, and let's say that service is connected to other services, or you have some sort of automation, a step-by-step thing that you've planned. It becomes a nightmare; it becomes terrifying, and this is why even in the age of LLMs good engineers will still get paid, because stuff like this exists. But what MCP does is unify the LLM and the service, right? It creates this layer where the service and the LLM can communicate efficiently.
Professor Ross Mike: Now let's get into some practicality. You can think of the MCP ecosystem as follows: you have an MCP client, you have the protocol, you have an MCP server, and you have a service, right? An MCP client is something like Tempo, Windsurf, or Cursor, and they are basically the client-facing side, the LLM-facing side, of this ecosystem. The protocol is that two-way connection between the client and the server, and the server is what translates that external service, its capabilities and what it can do, to the client. And that's why between the MCP client and the MCP server there's the MCP protocol.
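On the other side of the protocol, a client connects to a server, asks it what it can do, and calls tools by name. Here is a sketch using the same Python SDK's client support, assuming the server from the earlier sketch is saved as server.py; notice that tool discovery and tool calls look identical no matter which service sits behind the server.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the MCP server from the earlier sketch as a subprocess.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # The protocol lets the client discover capabilities:
            # every MCP server answers this request the same way.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Calling a tool is equally uniform, whatever service is behind it.
            result = await session.call_tool(
                "create_entry", arguments={"title": "hello", "body": "from MCP"}
            )
            print(result)

asyncio.run(main())
```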
Professor Ross Mike: But here's the fascinating part, and this is why I think Anthropic were playing 3D chess when they built this: it's the way this is architected. The MCP server is now in the hands of the service provider. So let's say me and Greg run a dev tool company, maybe we're doing a database, like, "Listen, we're going to build the best database company in the world and we want people's LLMs to have access to this database." It is now on us to construct the MCP server so that the client can fully access it. So Anthropic in a way sort of said, "Listen, we want our LLMs to be more powerful, more capable, but it's your job to figure this out." And this is why you've noticed all the external service providers are now building different MCP servers; they're building out repos and all this stuff, right?
Professor Ross Mike: So this is a big deal in the sense that LLMs are going to be more capable, but from a technological perspective all they did was create a standard, a standard that it seems like all companies and all engineers are going to converge upon, because you can construct any system, any API, however you please. The problem is, if you want to scale, if you want to grow, if you want other developers and other businesses to connect and work with your service, it has to be in a fashion that makes sense for them. Imagine if all of us just spoke different languages; standards allow us to communicate in a way that makes sense to all of us, and MCP is that for LLMs, because LLMs by themselves are not that capable. They're just systems that are great at prediction; they know how to predict the next word. But when you add this MCP protocol as a whole, you now have a way for them to be capable of doing important stuff.
Professor Ross Mike: Now, understanding all this, it's not all sunshine and rainbows; there are some technical challenges. If anyone has set up an MCP server on any of their favorite MCP clients, you'll notice it's annoying: there's a lot of downloading, you have to move this file, you have to copy this, that, and the third, and it's a lot of local stuff. There are some kinks that have to be figured out, but once this is figured out, finalized, and polished, or maybe they update the standard, or maybe someone comes up with a better one, we start to enter a world where LLMs become more capable. And that is literally all MCP is: making LLMs more capable. We're doing that with tools right now, and it's kind of working, but MCP seems to be the next evolution.
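The "local stuff" he mentions is mostly editing a client config file by hand. As a sketch, this is roughly the shape of the mcpServers entry that clients such as Claude Desktop and Cursor read, written out from Python so the structure is explicit; the exact file name and location vary by client and OS, so both are illustrative here.

```python
import json

# The registration most MCP clients expect today: which command to run
# to start each local server. (File location varies by client and OS;
# this filename is illustrative.)
config = {
    "mcpServers": {
        "notes-db": {
            "command": "python",
            "args": ["/absolute/path/to/server.py"],
        }
    }
}

with open("claude_desktop_config.json", "w") as f:
    json.dump(config, f, indent=2)
```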
Professor Ross Mike: I think, Greg, I saw your latest video. Manus is a great example of number two. They have tons of tools, and kudos to them, they've engineered it well in a way where the tools work cohesively. I didn't get to try it out, so I'm just looking at what people have done, but I can tell you this: it's a lot of engineering hours; it's a lot of "one change happens, something broke, someone's on call and not sleeping." But with MCP, it's structured in a way where, if we all follow this standard, the LLM will have access to everything it needs, and we will all be happy users.
Professor Ross Mike: So, in short, that is literally all MCP is. It's not Einstein's fifth law of physics or anything crazy like that; it's literally a standard for LLMs, and it's something to be excited about. And yeah, I hope that clarified things. I just kept rambling, so I apologize for that.
Greg: No, no, this is exactly what I wanted. I want to end on one question for you. So, it's now crystal clear to me what MCPs are, but, well, before I even ask my question: every time there's been a popularized protocol, for example HTTPS or SMTP, there have been a lot of big businesses created on top of it, and there's been basically this "why now," this opening of opportunities. The average person listening to this podcast is building out their ideas. Does this matter at all for that person?
Professor Ross Mike: Yeah, I think that's a great question. I'll speak to the technical and the non-technical. For the technical person, there's a lot you can do here. I just don't have time, Greg, but one thing I was thinking of was an MCP app store, and I'll just give this idea out for free, because this podcast is all about ideas, basically. There are a lot of these repos out there of MCP servers, and it'd be cool if someone could go on a site (I even bought the domain; it does nothing, but again, please, anybody, steal this idea), look at the different MCP servers there, see the GitHub code and whatever, click install or deploy, and have that server deployed and given a specific URL that they can then paste into an MCP client and work with. So for the technical person: if you make millions, all I ask is that you send me $1,000.
Professor Ross Mike: But for the non-technical person, what I would really focus on is staying up to date with the platforms that are building out MCP capability, and just seeing where the standards are going, right? Because, like you said, when these standards are finalized... I don't know if MCP has fully won, I think it needs to be challenged, or I don't know if Anthropic is going to make an update; we don't know, it's very early. But I would say pay very close attention to what the final standard is going to be, because once that standard is finalized and all these service providers build out their MCP servers or whatever it is, you can start to integrate much more seamlessly and easily, right? This is why, again, every week there's a new chatbot interface with new tools, and it wins because this part, step number two, is not easy, right? Especially making it cohesive and making it work fast. Like, I can sit for two hours and build something like this, but building out that user experience, making it flawless, limiting the hallucinations, is very, very hard. I mean, this is a lot of the work we do at Tempo, but MCP makes integrating a lot easier, and you can think of these as Lego pieces that you can continue to stack.
Professor Ross Mike: So, for my smart and wise business owners, my startup-ideas podcast enjoyers, I would really just pay close attention, right? Even for myself, I don't think with this MCP stuff we're at a place where any shots can be fired that amount to a smart business decision. But this is one of those things where you sit and you watch, you're just observing and learning, and when the right thing happens at the right time, you strike. So, I don't see any crazy business opportunities right now for a non-technical person, or even for a technical person. Like, imagine if OpenAI comes out with a standard tomorrow and we all just shift to that, right? It's very early stages, but I think understanding how this works means you'll understand how the next thing works, and when that becomes finalized, you hit the ground running.
Greg: Amen. All right, Ross Mike, Professor Ross Mike, there's no one like you. We'll include in the show notes where you can follow him for more really clear explanations around this whole AI coding world, and, uh, dude, I'll see you in Miami in a few weeks.
Professor Ross Mike: Yeah man, I appreciate you. I'm booking my flight soon so yeah, definitely bro, I'll see you soon. Thank you, everybody. [Music]