Okay. So while people are waiting for the latest series of new models, the GPT-5s, the Gemini 2s, the Claude 4s, one of the things that people really overlook is that the real power of LLMs nowadays mostly comes from what you put in the context window that you're sending to the LLM.
And that means getting the right data at the right time for whatever particular query you want to make of the LLM. As we know, all the LLMs have a training cutoff date, meaning that they don't have any information from after that date. But certainly over the last few months, we've seen a lot of the big providers start to incorporate search into the online apps for their particular LLMs.
So for example, we saw OpenAI launch ChatGPT Search. We've seen Google add grounding with Google Search. And the one big player that's been missing in this has been Anthropic.
That's until now. So in this video, I'm going to talk about what Anthropic has announced with their Model Context Protocol. I'm going to talk about how it takes some of these ideas way beyond what other people have been looking at.
And how it stands a really strong chance of becoming the fundamental protocol for agents going forward. So the idea of giving an LLM some kind of protocol to be able to refer to other data is not new. If you think back to ChatGPT plugins in March 2023, this is exactly what OpenAI was trying to do.
They were basically trying to give people a way to ping external websites. Maybe that was going to be Bing search. Maybe that was going to be a whole bunch of things.
It didn't take off; their implementation of it was not great. And you can imagine that Anthropic has been sitting on the sidelines for quite a while, thinking about how they were going to do this. And the way that they've done this is not to specifically provide a search function themselves or any sort of unique thing like that. What they've developed is an open-source protocol. This is what they're calling the Model Context Protocol.
And the idea here is that this is a way to connect LLMs to data and to systems of data. You could also think of it as tool use: you're connecting these models (initially, in this case, it's going to be the Claude 3.5 Sonnet model) to external data, both for bringing data in and for sending it back to that tool. So what Anthropic is proposing here is an open standard that everybody can use, not just for Anthropic models but for other models, so that people can start to build tools where you could swap in LLMs easily.
You could swap out tools for LLMs easily. You have a real plug-and-play kind of system going on here. If we look at the Model Context Protocol, we can see that the key thing here is that this is a two-way connection between data sources and their hosts.
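Under the hood, that two-way connection is carried by JSON-RPC 2.0 style messages. As a rough sketch of what a tool call from the host looks like on the wire (the method and parameter names below reflect my reading of the spec, and the get_weather tool is a made-up example, not a real server):

```python
import json

# Sketch of the JSON-RPC 2.0 style message a host sends to an MCP server
# when the LLM wants to call a tool. Method and parameter names follow my
# reading of the spec; "get_weather" is a hypothetical example tool.
def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize a tools/call request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example: the host asking a hypothetical weather tool about a city.
print(build_tool_call(1, "get_weather", {"location": "London"}))
```

The server answers with a response message carrying the same `id`, which is what lets the host match replies to requests.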
Now they're talking about AI-powered tools here. Really, what they're talking about is the host, which today is going to be the Claude desktop app, but you could imagine very quickly it's going to be a VS Code integration.
It's going to be something that's built into some kind of slide tool. And what this protocol does is it allows the LLM to call these external data sources and bring them in so that it can use them for particular queries that the user has. Now that could be simple things, like asking what the weather is in a particular location, right through to replacing things like code editors, where suddenly you just have a plugin in VS Code.
And the LLM can basically tell that plugin to create these files, populate them with this code, run some tests to see if it worked, and send that back to the LLM. So really, this is going to be like a backbone behind everything when we're running agents here. So if we jump in and look at the docs, we can see this diagram, which explains a lot of this.
So you've got this MCP host, and this is going to be like your VS Code, like I mentioned. Currently, it's the Claude desktop app. And the Claude desktop app is out there for Mac, for Windows, and for Windows arm64.
You can install it, and you can then customize it to run what's called an MCP server. Now these servers, you can really just think of them as scripts, which currently are going to be things that live on your device, meaning on your local computer.
In the future, though, you can imagine that they're going to be microservices as tools in the cloud. And this is one of the things I've experimented with a lot in Google Cloud Functions: building tools that your agents can just call when they need them. And once you've got that tool set up, you can have it being used by 20 different agents.
It doesn't matter. And I kind of feel that this protocol is going to take this idea to the next level. So, if we look at the diagram, we've got the Claude desktop app as the host.
Once it's set up (and I'll show you the actual setup after this), it can then call a number of servers. Now you could imagine that one of these servers is going to be a search service. In the example servers that they give you, there's a Brave Search one.
But the thing is that they've actually already open-sourced a bunch of these MCP servers. You can see we've got a file system one for being able to change things in our file system. We've got GitHub to be able to access things in GitHub.
And that includes reading from GitHub and also pushing to GitHub, which is interesting. And then there are a whole bunch of things around scraping, memory, Slack integrations, et cetera. And the really cool thing is that not only have they given you a bunch of prebuilt servers, but they've published an SDK.
In fact, two SDKs: both a TypeScript SDK and a Python SDK for you to be able to create these servers yourself. So I'm going to have a look at the basics in this video, but certainly in future videos, I think we'll look at actually building full-on agents where you've got these servers going back and forth to the LLM host as you go along.
Now the idea is that once you've got this server set up, let's say this is Brave Search, we can also then populate it with another one, say, GitHub or a custom RAG database for your particular information. And look at this third example: even though currently we need to have the servers running locally on our machines, I think, you can have those servers call external APIs.
So already, if you've got some kind of database in the cloud, whether that's a Chroma DB, a Pinecone DB, et cetera, my guess is you're very quickly going to see integrations come out so that you'll be able to wire your local server up to things in the cloud like that, and then be able to pull that data back to the host that you're using. Now, remember, at the moment the only host out there, I think, is the Claude desktop app.
I would imagine you'll see integrations come for a lot of the low-code/no-code sort of agent tools that are out there, et cetera. So the simplified way of thinking about how this actually works is that you've now got a protocol that can give your LLM a whole suite of tools that you can create and just plug in for your use case. So to get started using this, you're going to need to install the Claude desktop app.
And that app will work whether you've got a free account or a paid account; it doesn't matter. I'll walk through doing the setup on my Mac, but they have instructions in here for how you can do this on Windows as well.
And then we'll look at adding some MCP servers and perhaps how we could create something custom ourselves. Okay. So the first thing that you want to do is to find your Claude application support folder.
On the Mac, this is going to be in ~/Library/Application Support/Claude. And what I'm going to do is just cd into that. Once I've done that, I can then open this up with my IDE.
So in this case, I'm going to be using Cursor as the IDE, but you could use VS Code, where you just do code dot, et cetera. Now, whether you do this in Cursor or in the terminal, what you're going to want to do is create a new Claude desktop config file, claude_desktop_config.json. So if I come in here, I'll just do it in the terminal.
We'll see that if we go back now to Cursor, we've got this config here. And this is where we're going to paste in the server information. And by server, really, we're just pointing to a script, right?
We're not connecting to the cloud or anything with this part of it. But this is so that the Claude desktop app will be able to actually use these for accessing things. So we'll come back to this.
Let's look next at how to actually set up some of these servers. Okay. So to get started with trying out some of their pre-made MCP servers, you can just come over to the GitHub link in the docs, and you'll see that they have a whole bunch of pre-made servers in here. Let's look at, for example, the Brave Search one.
We can see that these are actually made in TypeScript. If you want to see what it actually does, what the prompt is and stuff like that, this is the tool definition.
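From memory, the Brave Search web search tool definition looks roughly like this (paraphrased; check the server source in Anthropic's repo for the exact names and wording):

```json
{
  "name": "brave_web_search",
  "description": "Performs a web search using the Brave Search API",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "The search query" },
      "count": { "type": "number", "description": "Number of results to return" }
    },
    "required": ["query"]
  }
}
```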
So this is actually telling Claude what it's going to use here and what the schema of it is: what it expects the search query to be, what it expects the count to be, et cetera. So it's got a bunch of different search things in here. And you can see that they've got something similar for the file system.
So if you want to be able to read and write files in your local directories, et cetera, this is something that you can do in here. You don't need to install these definitions; basically, the first time it runs, it will install them for you.
But what you'll need to do is tell it which folders, et cetera, it can access. So you want to get these definitions, in this case for the file system and for Brave Search. And actually, for Brave Search, you're going to need to go and set up an API key. It's quite simple.
Just follow the directions in here, set up a free account, and then you can use this. And then what we're going to do is copy these all across.
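After copying those across, the claude_desktop_config.json ends up looking roughly like this (the npm package names are the ones from Anthropic's servers repo at the time of writing; the API key and folder path are placeholders you'd fill in yourself):

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "YOUR_API_KEY_HERE" }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/folder"]
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```

Each entry is just a command the desktop app will run to start that server, which is why you can point it at any script you like.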
Now you can see here, I've got Brave Search set up. So we set up our mcpServers, which is going to be an object that contains brave-search. I'm going to have the file system.
I'm going to have Puppeteer in here as well. Now, once that's done, you want to launch the Claude desktop app, so you can see here, I've basically just launched this. Now there are a couple of ways that you can see if your tools have been loaded.
So if you come over here, we can see we've got 16 MCP tools. You can see we've got a Brave Search web search and local search. We've got create directory. These are from the file tools.
So I think the file tools have nine different things that you can do here. We can list out directories, that kind of thing. And then we can see we've got the Puppeteer tools in there as well, for being able to navigate to a URL and get some details out, that kind of thing, as we go through here.
All right. So if I wanted to do a normal search, I could do a search with it, but let's give it something a little bit more challenging.
Okay. Now, first off, I'm going to tell it to basically go to the VentureBeat site and extract out the titles that are about Anthropic. So the idea here is that we want it to use Puppeteer to actually do this.
So it's going to ask if it can use Puppeteer. I'm going to say yes. If you look, already it's opened up Google Chrome for Testing.
And it's asking me now whether it can use another tool for evaluating from Puppeteer. I'm going to allow that. We can see what it's actually searching for in here: okay, it's looking for different kinds of H2 and H3 tags.
And sure enough, it comes back and says: I found several recent articles about Anthropic on VentureBeat. Here are the titles. And then: Anthropic releases Model Context Protocol. Anthropic bets on personalization.
So let's come over and have a look. Okay, so sure enough, if we come over to VentureBeat, we can see the Anthropic releases Model Context Protocol one. We can see Anthropic bets on personalization.
And I think it had one more. Ah yes, Amazon doubles down on Anthropic. So we can see that it's got all of those.
And it's got another one as well in there. Okay. Now, if we want to save that and use a different tool, I can just put in: save those into a text file in agents Claude saves.
So I've made a folder specifically where it can do this. And you can see, sure enough, it's going to ask for create directory. So it's going to check that the directory is there and do a whole sort of if-exists kind of thing.
And then it wants to be able to list the directories, and then be able to write to the file system. So at the moment, you need to approve everything that it does, each time.
And this can be a little bit frustrating. My guess is that at some point you'll be able to just approve certain ones so that they work in all of your chats. But I guess it makes sense that it asks for now.
Okay. So looking at this, it's basically saying that it saved it as VentureBeat Anthropic articles. If I just bring over that folder,
sure enough, we can see that folder is there. And if we come down and look at it, we can see the titles have been extracted out nicely. So just using this, we're able to do this kind of thing.
Now with Puppeteer, you can also do things like capture a screenshot of a website. You can do a whole bunch of different things in here. So this was using the pre-made servers that we looked at.
Now, as I mentioned earlier on, they have two SDKs with which you can make your own MCP servers: a Python SDK and a TypeScript SDK. What I'm going to do is leave it here for this video.
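Just to give a flavor of what these servers are doing under the hood before then, here's a toy, stdlib-only sketch of the JSON-RPC dispatch an MCP server performs. To be clear, this is not the official Python SDK (which handles the stdio transport, schemas, and lifecycle for you), and the get_time tool is a made-up example:

```python
import json
from datetime import datetime, timezone

# NOT the official MCP SDK: a toy, stdlib-only sketch of the kind of
# JSON-RPC-style dispatch an MCP server performs. The "get_time" tool
# is a hypothetical example.

TOOLS = [
    {
        "name": "get_time",
        "description": "Return the current UTC time as an ISO 8601 string",
        "inputSchema": {"type": "object", "properties": {}},
    }
]

def handle_request(raw: str) -> str:
    """Answer a single JSON-RPC request with a JSON response string."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        # The host asks what tools this server offers.
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call":
        # The host (on the LLM's behalf) invokes a tool by name.
        result = {"content": [{
            "type": "text",
            "text": datetime.now(timezone.utc).isoformat(),
        }]}
    else:
        result = {}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A real server loops over stdin/stdout; here we just answer one request.
print(handle_request('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
```

The real SDKs give you decorators and typed helpers instead of hand-rolled dispatch like this, which is what we'll look at next time.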
And in the next video, I'll walk through, step by step, actually making some of our own tools: showing you how to use the SDK, how to point the MCP server at where your tool code is actually going to be, and some tips and tricks of things you'd want to be wary of when you're cramming a lot of tools into this kind of thing. So anyway, just to finish up, I would say that this is definitely a big deal. And the reason why is that they've made the whole protocol open source.
They've allowed people to use it not just for their models but for any models. My guess is that pretty quickly we will see other people adopt this, and it will become more of a standard. Now, it will be interesting to see how OpenAI reacts to this.
Do they get on board, or do they insist on having their own standard for this kind of thing? You could imagine that, combined with things like Artifacts inside the Claude desktop app, you can now do a whole lot more with those artifacts than you could before. And in some ways, you're going to be able to design your own sort of Cursor editor or Bolt editor,
different sorts of AI coding editors. And I think we're going to see this used for a lot of different ideas going forward. Anyway, I'd love to hear from you in the comments about what tools you would most like to see implemented with this.
I will put something together for the next video. As always, if you found the video useful, and you want to be reminded about the next one, please click and subscribe. And I will talk to you in the next video.
Bye for now.