Powering up AI models by handing them tools

Most LLMs are great for simple conversations, but they lack the ability to interact with their environment. Using the Model Context Protocol (MCP), we can give our LLM access to a wide range of open-source tools that allow it to take agentic action.

In this tutorial we will first go over the basic features of MCP, then build a simple AI agent that can use tools to access the web and write files. We do this by creating a host that uses multiple clients to connect to existing MCP servers, makes the tools from these servers available to an LLM and handles tool calls made by the LLM.

How Model Context Protocol works

MCP is an open protocol that standardises how applications provide context to LLMs. It can be used to create servers that provide tools and data to an MCP client in a standardised format, and hosts that connect one or more of these clients to an LLM. The LLM can then call on the tools and data made available via the host.

             ┌──>client_1<──>server_1
LLM<──>host<─┤
             └──>client_2<──>server_2

Diagram showing the connection between the host, LLM, clients and servers

There are many MCP servers available already, with more being added every day. Their functionality ranges from git use, to Spotify control, to web search and more. These servers can transform an LLM into a useful assistant that can perform complex tasks.

Tools

Tools are callable functions made available by the server. The LLM can ask for a function to be called, with specific arguments if required, and for the outcome of the call to be returned to it. For example, a calculation function that takes two values and returns their sum.
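
To make this concrete, here is a rough sketch of what such a tool could look like on the server side, using the SDK’s McpServer helper and zod for the input schema. The server and tool names here are hypothetical, and we won’t be writing a server in this tutorial (you would also need to add zod as a dependency to run this).

// add-server.ts (illustrative sketch only, not part of this tutorial)

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "calculator", version: "1.0.0" });

// A tool declares an input schema and returns content blocks with the result
server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
  content: [{ type: "text", text: String(a + b) }],
}));

// Expose the server over stdio, like the servers we connect to later on
await server.connect(new StdioServerTransport());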

Resources

Resources are data made available to the LLM. Unlike calling a tool, reading a resource has no side effects. For example, a resource might fetch rows of error logs from a database, but it would never write new data to that database.
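
Continuing the hypothetical calculator server sketched above, a read-only resource might be registered like this (the URI scheme and log text are made up for illustration):

// Illustrative sketch: expose recent error logs as a read-only resource
server.resource("error-logs", "logs://errors/recent", async (uri) => ({
  contents: [{ uri: uri.href, text: "recent error log lines would go here" }],
}));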

Prompts

Prompts are reusable templates made available by the server. A prompt takes an argument, inserts it into the template and returns a message with instructions, ready to pass to the LLM. For example, a prompt called research might take a subject as an argument and return detailed instructions on how the LLM is expected to research that subject.
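
Again continuing the same sketch, a research prompt might look something like this (the wording of the instructions is just a placeholder):

// Illustrative sketch: a prompt template that expands a subject into instructions
server.prompt("research", { subject: z.string() }, ({ subject }) => ({
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: `Research the topic "${subject}". Gather reliable sources, compare them and summarise the key findings.`,
      },
    },
  ],
}));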

Advanced features

There are other, more advanced features included in the protocol, such as sampling, roots, and Server-Sent Events transport. We won’t need those for this tutorial, but they will give you an idea of the flexibility of MCP.

Setting up the project

Before we write any code, we need to set up our environment and dependencies.

1. Create an account with Anthropic

To make calls to the Anthropic API, you will need to create an account in the Anthropic console and add some credit. $5 will be more than enough for this tutorial.

2. Create a project folder

Create a folder for the project and change the current working directory to this folder.

mkdir mcp-host && cd mcp-host

3. Install dependencies

Install pnpm if you don’t have it already. You can install it using Corepack which is included with Node.js.

corepack enable pnpm

Then initialise pnpm and install the required dependencies.

pnpm init && pnpm add @anthropic-ai/sdk @modelcontextprotocol/sdk dotenv && pnpm add -D @types/node

We will also need uv to run MCP servers written in Python. If you use Homebrew, you can install it like this (otherwise see the uv documentation for other install methods).

brew install uv

4. Set up env variables and TypeScript config

Create a file to safely store env variables and add your Anthropic API key, which you can find in the Anthropic console.

echo "ANTHROPIC_API_KEY=[YOUR API KEY HERE]" > .env

We’ll also add some basic configuration for TypeScript. First create the config file.

touch tsconfig.json

Then add the following rules to tsconfig.json so we can use top-level await.

{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "target": "esnext",
    "types": ["node"]
  },
  "include": ["**/*.ts"]
}

Creating a host that connects to tools

Before we start making calls to the LLM API, let’s build and test the connection with the MCP servers. Create a file called host.ts and add a class with a connect function that takes an array of servers, creates an MCP client for each one and connects that client to its server. For each client we fetch the tools offered by the connected server, collect them in a list to pass to the LLM later, and record which client owns each tool in a map so we can quickly find the right client when the time comes to call a tool. For now we just log the tool names so we can check that the connections worked and see which tools we’ll be making available to the LLM.

// host.ts

import { StdioServerParameters } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { Tool } from "@anthropic-ai/sdk/resources";

type Server = StdioServerParameters & { name: string };

export class Host {
  private clients: Client[] = [];
  private tools: Tool[] = [];
  private toolToClientMap: Record<string, Client> = {};

  async connect(servers: Server[]) {
    for (const server of servers) {
      const transport = new StdioClientTransport(server);
      const client = new Client({
        name: server.name + "-client",
        version: "1.0.0",
      });
      await client.connect(transport);
      this.clients.push(client);
      console.log(`Connected to ${server.name}`);

      const tools = (await client.listTools()).tools;
      for (const tool of tools) {
        if (tool.inputSchema) {
          this.tools.push({
            name: tool.name,
            description: tool.description,
            input_schema: tool.inputSchema as Tool.InputSchema,
          });
          this.toolToClientMap[tool.name] = client;
        }
      }
    }

    console.log(
      "Available tools: ",
      this.tools.map((tool) => tool.name)
    );
  }

  async disconnect() {
    for (const client of this.clients) {
      await client.close();
    }
  }
}

Passing tools to the host

Create a file called main.ts where we will initialise the host and pass it some tools to use. For this tutorial we will use the ‘fetch’ and ‘filesystem’ servers created by the MCP team. The ‘fetch’ server can be used to read a website at a given URL, and ‘filesystem’ can read, write and edit files in folders it is given access to. The third item in the args of ‘filesystem’ is the path to the directory it is allowed to access. Change this to the path of the folder you created for this project.

After we’ve connected to the servers and listed all the tools, we disconnect to end the process.

// main.ts

import { Host } from "./host";

const servers = [
  {
    name: "fetch",
    command: "uvx",
    args: ["mcp-server-fetch"],
  },
  {
    name: "filesystem",
    command: "npx",
    args: [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "/Users/yourname/mcp-host", // Replace with path to folder for this project
    ],
  },
];

const host = new Host();
await host.connect(servers);
await host.disconnect();

Now run the app and check the logs to see the host connecting to servers and listing the available tools.

npx tsx main.ts

Connecting the host to an LLM

Once we’ve confirmed we can connect to the MCP servers and list their tools, we can connect the host to an LLM and pass it a list of the available tools. Anthropic’s Claude models conveniently accept a tools argument and will include a “tool use” block in the response when the model wants to call a tool.

When we receive a response, we first log the text. Then, if a tool use block is present, we use the map we created earlier to find the right client. We tell that client to run the tool with the arguments provided by the LLM, pass the result back to Claude as a new message and wait for the next response. We keep looping over this whole process.

The final response will have a stop_reason value of end_turn, indicating the model has completed the request. Checking for that explicitly is the safer approach, but it’s unnecessary for this simple project (there is a short sketch of the stricter check after the code below).

// host.ts

import Anthropic from "@anthropic-ai/sdk";
import { StdioServerParameters } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import {
  ContentBlockParam,
  ImageBlockParam,
  MessageParam,
  TextBlockParam,
  Tool,
} from "@anthropic-ai/sdk/resources";

type Server = StdioServerParameters & { name: string };
type ToolResult = {
  content: string | Array<TextBlockParam | ImageBlockParam>;
  isError: boolean;
};

export class Host {
  private llmClient: Anthropic;
  private clients: Client[] = [];
  private tools: Tool[] = [];
  private messages: MessageParam[] = [];
  private toolToClientMap: Record<string, Client> = {};

  constructor(llmClient: Anthropic) {
    this.llmClient = llmClient;
  }

  async connect(servers: Server[]) {
    for (const server of servers) {
      const transport = new StdioClientTransport(server);
      const client = new Client({
        name: server.name + "-client",
        version: "1.0.0",
      });
      await client.connect(transport);
      this.clients.push(client);
      console.log(`Connected to MCP server ${server.name}`);

      const tools = (await client.listTools()).tools;
      for (const tool of tools) {
        if (tool.inputSchema) {
          this.tools.push({
            name: tool.name,
            description: tool.description,
            input_schema: tool.inputSchema as Tool.InputSchema,
          });
          this.toolToClientMap[tool.name] = client;
        }
      }
    }
  }

  async disconnect() {
    for (const client of this.clients) {
      await client.close();
    }
  }

  private async handleResponse(response: Anthropic.Messages.Message) {
    const textBlock = response.content.find((block) => block.type === "text");
    if (textBlock?.type === "text") {
      console.log(`[Claude]: ${textBlock.text}`);
    }

    this.messages.push({
      role: "assistant",
      content: response.content,
    });

    const toolUse = response.content.find((block) => block.type === "tool_use");
    if (toolUse?.type === "tool_use") {
      const client = this.toolToClientMap[toolUse.name];
      console.log(`Using tool: ${toolUse.name} ...`);
      const toolResult = (await client.callTool({
        name: toolUse.name,
        arguments: toolUse.input as Record<string, unknown>,
      })) as ToolResult;

      const message = [
        {
          tool_use_id: toolUse.id,
          type: "tool_result",
          content: toolResult.content,
          is_error: toolResult.isError,
        } as const,
      ];

      await this.call(message);
    }
  }

  async call(message: string | Array<ContentBlockParam>) {
    this.messages.push({
      role: "user",
      content: message,
    });

    const response = await this.llmClient.messages.create({
      max_tokens: 2048,
      messages: this.messages,
      tools: this.tools,
      model: "claude-3-5-haiku-latest",
    });

    return await this.handleResponse(response);
  }
}
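
If you do want the stricter check mentioned earlier, handleResponse could branch on stop_reason rather than only looking for a tool use block. A minimal sketch of that variation (not used in the rest of this tutorial):

// Sketch: branch on stop_reason instead of just checking for a tool use block
if (response.stop_reason === "end_turn") {
  return; // the model has finished its turn, no more tool calls expected
}
if (response.stop_reason === "tool_use") {
  // find the tool use block, call the tool and pass the result back as above
}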

Calling the LLM

Finally, we update the main script to instantiate the Anthropic client and pass it to the host. We pass in the server config and tell the host to connect to the servers as before. Then we send the host a message instructing it to fetch the introduction page of the MCP site, summarise the content and write the summary to a file for us.

As the LLM is only allowed access to a single folder via the ‘filesystem’ server, it will write the file in the directory you have provided for it.

// main.ts

import "dotenv/config";
import Anthropic from "@anthropic-ai/sdk";
import { Host } from "./host";

const anthropicClient = new Anthropic({
  apiKey: process.env["ANTHROPIC_API_KEY"],
});

const servers = [
  {
    name: "fetch",
    command: "uvx",
    args: ["mcp-server-fetch"],
  },
  {
    name: "filesystem",
    command: "npx",
    args: [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "/Users/yourname/mcp-host", // Replace with path to folder for this project
    ],
  },
];

const host = new Host(anthropicClient);
await host.connect(servers);
await host.call(
  "Please fetch https://www.anthropic.com/news/model-context-protocol, summarise the content of that page, then write it to a markdown file."
);
await host.disconnect();

Run the app again. Once the process is complete, have a look at the new file that was just created. If all went well, you will find a nice summary of MCP inside.

npx tsx main.ts

Conclusion

We now have a simple way to greatly enhance the capabilities of Claude by giving it access to any number of tools. We can give it persistent memory, the ability to surf the web or even to send emails on our behalf. All thanks to the power of MCP.

Key takeaways:

  1. Giving an LLM access to tools allows it to take action to affect its environment. This greatly increases the usefulness of the LLM.
  2. The Model Context Protocol provides an excellent standard for tool interfaces, allowing us to make use of the many available open-source servers.
  3. By creating a relatively simple host ourselves, we can connect an LLM to tools, while remaining in full control.