How to call tools with Mistral AI using MCP servers

In a previous post we saw how to build a host that connects an AI model to Model Context Protocol (MCP) tools. In that post we used Anthropic’s Claude model to do the tool calling. As MCP was originally created by Anthropic, it is not surprising that their API is the most directly compatible with the protocol. However, due to recent events, we might want to use a different, European, model for our agents and tool calling workflows.

In this tutorial we will create helper functions to both convert MCP server output into a format Mistral’s API can handle and convert the API’s responses to an MCP compliant format. We’ll then use those helpers to connect a couple of MCP servers to one of Mistral’s free models and use it to fetch and summarise data from the web.

This tutorial assumes you are familiar with TypeScript and MCP.

Setting up the project

Before we write any code, we need to set up our environment and dependencies. First, create a folder for the project and change the current working directory to this folder.

mkdir mistral-mcp && cd mistral-mcp

Install pnpm if you haven’t already. You can install it using Corepack, which is included with Node.js.

corepack enable pnpm

Then initialise pnpm and install the required dependencies.

pnpm init && pnpm add @mistralai/mistralai @modelcontextprotocol/sdk dotenv && pnpm add -D @types/node typescript

We’ll also need to install uv (a Python package manager) to run MCP servers written in Python.

brew install uv
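
If you’re not on macOS or don’t use Homebrew, uv also ships a standalone install script:

curl -LsSf https://astral.sh/uv/install.sh | sh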

Configuring TypeScript and Node.js

First we add some basic configuration for TypeScript. Create a tsconfig.json file, then add the following rules.

tsconfig.json
{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "bundler",
    "esModuleInterop": true,
    "target": "esnext",
    "types": ["node"]
  },
  "include": ["**/*.ts"]
}

Then we need to update our package.json to treat our files as ES modules by default, so we can use import syntax and top-level await.

package.json
{
  "name": "mistral-mcp",
  "type": "module",
  "scripts": {
    "dev": "npx tsx main.ts"
  }
  // Rest of the file can stay as is
}

Connecting to the Mistral API

To use AI models via their API, you will have to create an account on the Mistral AI console. For this project we can use their free models, so you won’t have to set up any payment method for your account.

Once you have registered, go to the API keys section and create a new key. Then create a file in your project directory to safely store environment variables, and add your API key to it.

echo "MISTRAL_API_KEY=[YOUR API KEY HERE]" > .env
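
If the project lives in a git repository, make sure the file with your key is never committed:

echo ".env" >> .gitignore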

Let’s give the Mistral API a try without using any tools first. Create a TypeScript file using touch main.ts, then add the following code to call the API and log the result. Mistral’s AI models love cheese, so let’s have the model write a poem about its favourite subject.

main.ts
import "dotenv/config";
import { Mistral } from "@mistralai/mistralai";

const mistral = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

const response = await mistral.chat.complete({
  model: "mistral-small-latest",
  messages: [
    {
      role: "user",
      content: "Please write a poem about cheese",
    },
  ],
});

if (response.choices?.length) {
  console.dir(response.choices[0], { depth: null });
}

Run the code using pnpm dev and enjoy Mistral’s poetic prowess.
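
The poem will differ on every run, but going by the response types in Mistral’s TypeScript SDK, the logged choice should have roughly this shape (a sketch, not verbatim output):

// Approximate shape of response.choices[0]; the content will vary
{
  index: 0,
  message: {
    role: "assistant",
    content: "In the heart of France, where the... (your poem here)",
    toolCalls: null,
  },
  finishReason: "stop",
}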

Adding MCP servers

To access the tools we’ll need, we can add two MCP servers: fetch, which can get the HTML for a site at a given URL, and filesystem, which will allow the AI model to read and write files. The third item in the args of filesystem is the path to the directory it is allowed to access. Change this to the path of the folder you created for this project.

We will need a connect function to initialise a client for each server and connect the two. We will also need a disconnect function, which we run after all tasks are done to close the connection between each client and its server and end the process.

Let’s create an agent class that contains all the required functions and keeps track of shared state, like the list of clients. We’ll add logging to the connect function so we can see which servers the agent has connected to. Then we call the connect function with the config for the servers we want, followed by the call function. We won’t use the tools yet; for now we just check that we can connect successfully.

main.ts
import "dotenv/config";
import { Mistral } from "@mistralai/mistralai";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

class Agent {
  ai: Mistral;
  clients: Client[] = [];

  constructor(ai: Mistral) {
    this.ai = ai;
  }

  async connect(servers: MCPServer[]) {
    for (const server of servers) {
      const transport = new StdioClientTransport(server);
      const client = new Client({
        name: server.name + "-client",
        version: "1.0.0",
      });
      await client.connect(transport);
      this.clients.push(client);
      console.log(`Connected to MCP server ${server.name}`);
    }
  }

  async disconnect() {
    for (const client of this.clients) {
      await client.close();
    }
  }

  async call(message: string) {
    return this.ai.chat.complete({
      model: "mistral-small-latest",
      messages: [{ role: "user", content: message }],
    });
  }
}

type MCPServer = {
  name: string;
  command: string;
  args: string[];
};

const servers: MCPServer[] = [
  {
    name: "fetch",
    command: "uvx",
    args: ["mcp-server-fetch"],
  },
  {
    name: "filesystem",
    command: "npx",
    args: [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "/Users/yourname/mistral-mcp", // Replace with path to folder for this project
    ],
  },
];

const ai = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });
const agent = new Agent(ai);

await agent.connect(servers);

const response = await agent.call("Please write a poem about cheese");
if (response.choices?.length) {
  console.dir(response.choices[0], { depth: null });
}

await agent.disconnect();

Listing tools

Now that we can connect to the servers, we want to list the tools available from each server so we can tell the AI model which tools it can call. However, MCP servers list their tools in a format that can’t be passed directly to the Mistral API, so we’ll need to convert the data to the expected format.

Let’s specify some types for the result of the list tools function of the MCP server and for the tool format Mistral is expecting. This will help us determine the data transformation we need to do. We can also move the MCPServer type here for consistency.

types.ts
export type MCPServer = {
  name: string;
  command: string;
  args: string[];
};

type MCPTool = {
  name: string;
  description?: string;
  inputSchema: {
    type: "object";
    properties?: Record<string, unknown>;
  };
};

export type MCPListToolsResult = {
  tools: MCPTool[];
  nextCursor?: string;
};

export type MistralTool = {
  type?: "function";
  function: {
    name: string;
    description?: string;
    strict?: boolean;
    parameters: Record<string, unknown>;
  };
};

Looking at the types, we can see that the structure of the MCP tool and the Mistral tool are not that different. We only need to specify that the tool is of type function and map its input schema onto the function’s parameters field. Let’s write a simple converter to do so.

converters.ts
import type { MCPListToolsResult, MistralTool } from "./types";

export function toMistralTools(
  listToolResult: MCPListToolsResult
): MistralTool[] {
  return listToolResult.tools.map((tool) => ({
    type: "function",
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.inputSchema,
    },
  }));
}
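
To see the conversion in action, we can run a made-up tools list through the converter. The schema below is a hypothetical, simplified version of what the fetch server actually reports:

import { toMistralTools } from "./converters";
import type { MCPListToolsResult } from "./types";

// Hypothetical listTools result; the real fetch server reports a richer schema
const listToolsResult: MCPListToolsResult = {
  tools: [
    {
      name: "fetch",
      description: "Fetch a URL and return its contents",
      inputSchema: {
        type: "object",
        properties: { url: { type: "string" } },
      },
    },
  ],
};

console.dir(toMistralTools(listToolsResult), { depth: null });
// -> [{ type: "function", function: { name: "fetch", ... } }]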

Calling tools

Now that we have a way to pass the tools from the MCP server to Mistral we’ll also need a way to convert the response data that the API returns when it wants to call a tool. Let’s first look at the types again and then build another converter function. Add the following types to the types file.

types.ts
// Previously added types stay the same

export type MistralToolCall = {
  id?: string;
  type?: "function" | string;
  function: {
    name: string;
    arguments: Record<string, unknown> | string;
  };
  index?: number;
};

export type MCPCallToolRequest = {
  name: string;
  arguments?: Record<string, unknown>;
};

Again we see the structures are fairly similar, with the MCP format being a little simpler. One thing to be aware of is that the arguments for the function call that Mistral returns will be in stringified JSON format, which means we’ll have to parse them. Also note that the Mistral tool call includes a tool call ID. We do not need to pass that ID to the MCP server, but we do want to save it so we can include it when we send the result of the tool call back to Mistral.

converters.ts
import type {
  MCPCallToolRequest,
  MCPListToolsResult,
  MistralTool,
  MistralToolCall,
} from "./types";

// toMistralTools function stays the same as before

export function toMcpToolCall(toolCall: MistralToolCall): MCPCallToolRequest {
  const call = toolCall.function;
  const toolCallArguments =
    typeof call.arguments === "string"
      ? (JSON.parse(call.arguments) as Record<string, unknown>)
      : call.arguments;
  return {
    name: call.name,
    arguments: toolCallArguments,
  };
}
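
As a quick sanity check, here is a hypothetical tool call in Mistral’s format and what our converter turns it into:

import { toMcpToolCall } from "./converters";
import type { MistralToolCall } from "./types";

// Hypothetical tool call as Mistral might return it
const exampleCall: MistralToolCall = {
  id: "abc123",
  type: "function",
  function: {
    name: "fetch",
    arguments: '{"url":"https://mistral.ai"}', // stringified JSON
  },
};

console.dir(toMcpToolCall(exampleCall));
// -> { name: "fetch", arguments: { url: "https://mistral.ai" } }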

Let’s use our new converters to have Mistral make a tool call. We will give the agent a more challenging task, which involves fetching data from a website. The model will return a request for a tool call, which we will convert into the right format. We will then use that to call the tool via the server and log the result.

main.ts
import "dotenv/config";
import { Mistral } from "@mistralai/mistralai";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import {
  ChatCompletionResponse,
  Tool,
} from "@mistralai/mistralai/models/components";
import { toMcpToolCall, toMistralTools } from "./converters";
import { MCPServer } from "./types";

class Agent {
  ai: Mistral;
  clients: Client[] = [];
  tools: Tool[] = [];
  toolToClientMap: Record<string, Client> = {};

  constructor(ai: Mistral) {
    this.ai = ai;
  }

  async connect(servers: MCPServer[]) {
    for (const server of servers) {
      const transport = new StdioClientTransport(server);
      const client = new Client({
        name: server.name + "-client",
        version: "1.0.0",
      });
      await client.connect(transport);
      this.clients.push(client);
      console.log(`Connected to MCP server ${server.name}`);
      // Store the tools and their corresponding clients in a map,
      // so we can call the right client when a tool is called later.
      const toolsList = await client.listTools();
      const toolsForMistral = toMistralTools(toolsList);
      this.tools.push(...toolsForMistral);
      for (const tool of toolsForMistral) {
        this.toolToClientMap[tool.function.name] = client;
      }
    }
  }

  async disconnect() {
    for (const client of this.clients) {
      await client.close();
    }
  }

  async call(message: string) {
    const response = await this.ai.chat.complete({
      model: "mistral-small-latest",
      messages: [{ role: "user", content: message }],
      tools: this.tools, // Pass the tools to the API
    });
    if (response.choices?.length) {
      await this.handleResponse(response);
    }
  }

  async handleResponse(response: ChatCompletionResponse) {
    if (!response.choices?.length) {
      throw new Error("No response choices found");
    }
    const { message: assistantMessage, finishReason } = response.choices[0];
    if (typeof assistantMessage.content === "string") {
      console.log(`[Mistral]: ${assistantMessage.content}`);
    }
    if (finishReason === "tool_calls") {
      if (!assistantMessage.toolCalls) {
        throw new Error("No tool calls found in response");
      }
      for (const toolCall of assistantMessage.toolCalls) {
        if (!toolCall.id) {
          throw new Error("Tool call ID not found");
        }
        const mcpToolCall = toMcpToolCall(toolCall);
        const client = this.toolToClientMap[toolCall.function.name];
        console.log(`Using tool: ${toolCall.function.name} ...`);
        const toolResult = await client.callTool(mcpToolCall);
        console.log("Tool call result:");
        console.dir(toolResult, { depth: null });
      }
    }
  }
}

const servers: MCPServer[] = [
  {
    name: "fetch",
    command: "uvx",
    args: ["mcp-server-fetch"],
  },
  {
    name: "filesystem",
    command: "npx",
    args: [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "/Users/yourname/mistral-mcp", // Replace with path to folder for this project
    ],
  },
];

const ai = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });
const agent = new Agent(ai);

await agent.connect(servers);
await agent.call(
  "Please fetch the content at https://mistral.ai/news/mistral-small-3, summarise it in your own words and write the summary to a markdown file."
);
await agent.disconnect();

Returning tool results to the API

Now that we have the result of the tool call, we want to return it to the AI model so it can determine the next action to take. This means we’ll have to convert the MCP tool call output to a message format Mistral can handle. Once again, let’s first specify the types so we can see what transformations are needed.

types.ts
// Previously added types stay the same

type MCPTextContent = {
  type: "text";
  text: string;
};

type MCPImageContent = {
  type: "image";
  data: string;
  mimeType: string;
};

type MCPResourceContent = {
  type: "resource";
  resource:
    | {
        text: string;
        uri: string;
        mimeType?: string;
      }
    | { blob: string; uri: string; mimeType?: string };
};

type MCPContent = MCPTextContent | MCPImageContent | MCPResourceContent;

export type MCPCallToolResult =
  | {
      content: MCPContent[];
      isError?: boolean;
    }
  | { toolResult?: unknown };

type MistralImageURLChunk = {
  type: "image_url";
  imageUrl:
    | string
    | {
        url: string;
        detail?: string | null;
      };
};

type MistralTextChunk = {
  type: "text";
  text: string;
};

type MistralReferenceChunk = {
  type: "reference";
  referenceIds: number[];
};

type MistralContentChunk =
  | MistralTextChunk
  | MistralImageURLChunk
  | MistralReferenceChunk;

export type MistralToolMessage = {
  role: "tool";
  content: string | MistralContentChunk[] | null;
  toolCallId?: string | null;
  name?: string | null;
};

We can see here that tools can return not only text, but also images and different types of resources. For now we will focus on handling text results only. Converting other result types to the appropriate Mistral content type will be left as an exercise for the reader; a sketch of the image case follows the converter below.

converters.ts
import type {
  MCPCallToolRequest,
  MCPListToolsResult,
  MCPCallToolResult,
  MistralTool,
  MistralToolCall,
  MistralToolMessage,
} from "./types";

// toMistralTools and toMcpToolCall functions we added previously stay the same

export function toMistralMessage(
  toolCallId: string,
  callToolResult: MCPCallToolResult
): MistralToolMessage {
  let content: MistralToolMessage["content"];
  if ("content" in callToolResult && callToolResult.content) {
    content = callToolResult.content.map((item) => {
      if (item.type === "text") {
        return {
          type: "text",
          text: item.text,
        } as const;
      } else {
        throw new Error(`Unsupported content type: ${item.type}`);
      }
    });
  } else if ("toolResult" in callToolResult && callToolResult.toolResult) {
    // Handle case where content is not provided and toolResult is used instead
    content = JSON.stringify(callToolResult.toolResult);
  } else {
    throw new Error("No tool result found");
  }
  return {
    content: content,
    toolCallId,
    role: "tool",
  };
}
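
As a starting point for that exercise, here is a minimal sketch of the image case. It assumes the model you call is vision-capable and accepts base64 data URLs in an image_url chunk, so check the capabilities of your chosen model first:

// Sketch: convert an MCP image content item into a Mistral image_url chunk.
// Assumes the target model accepts base64 data URLs as image input.
function toMistralImageChunk(item: {
  type: "image";
  data: string;
  mimeType: string;
}) {
  return {
    type: "image_url",
    imageUrl: { url: `data:${item.mimeType};base64,${item.data}` },
  } as const;
}

You could then return such a chunk from the map in toMistralMessage instead of throwing on image content.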

The toMistralMessage function converts the result of the MCP tool call to the Mistral message format and also handles some unexpected result data. Let’s put it all together now. When we receive a response from Mistral that includes a tool call, we call the tool, convert the result to a Mistral message and use the call function to send that message back to the API.

main.ts
import "dotenv/config";
import { Mistral } from "@mistralai/mistralai";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import {
  AssistantMessage,
  ChatCompletionResponse,
  Messages,
  Tool,
} from "@mistralai/mistralai/models/components";
import { toMistralMessage, toMistralTools, toMcpToolCall } from "./converters";
import { MCPServer } from "./types";

class Agent {
  ai: Mistral;
  clients: Client[] = [];
  tools: Tool[] = [];
  toolToClientMap: Record<string, Client> = {};
  messages: Messages[] = [];

  constructor(ai: Mistral) {
    this.ai = ai;
  }

  // Connect and disconnect functions stay the same as before

  async call(message: string | Messages) {
    if (typeof message === "string") {
      this.messages.push({
        role: "user",
        content: message,
      });
    } else {
      this.messages.push(message);
    }
    const response = await this.ai.chat.complete({
      model: "mistral-small-latest",
      messages: this.messages,
      tools: this.tools,
    });
    if (response.choices?.length) {
      await this.handleResponse(response);
    }
  }

  async handleResponse(response: ChatCompletionResponse) {
    if (!response.choices?.length) {
      throw new Error("No response choices found");
    }
    const { message: assistantMessage, finishReason } = response.choices[0];
    if (
      assistantMessage.content &&
      typeof assistantMessage.content === "string"
    ) {
      console.log(`[Mistral]: ${assistantMessage.content}`);
    }
    if (assistantMessage.role === "assistant") {
      this.messages.push(
        assistantMessage as AssistantMessage & { role: "assistant" }
      );
    }
    if (finishReason === "tool_calls") {
      if (!assistantMessage.toolCalls) {
        throw new Error("No tool calls found in response");
      }
      for (const toolCall of assistantMessage.toolCalls) {
        if (!toolCall.id) {
          throw new Error("Tool call ID not found");
        }
        const mcpToolCall = toMcpToolCall(toolCall);
        const client = this.toolToClientMap[toolCall.function.name];
        console.log(`Using tool: ${toolCall.function.name} ...`);
        const toolResult = await client.callTool(mcpToolCall);
        const message = toMistralMessage(toolCall.id, toolResult);
        await this.call(message);
      }
    }
  }
}

// Instantiate the agent and call it, same as before

Now we call the AI with our instructions, triggering a loop of tool calls and returned results that ends when there are no more tool calls to make. The expectation is that the model will call the fetch tool to get the content of the website, receive the response, call the filesystem tool to write a summary of the content to a file, and then send a response without a tool call to report the results, ending the loop.

Conclusion

Although Mistral’s tool calling does not follow the MCP standard (yet), the format it uses for tools and messages is similar enough to make conversion possible. With a few simple converters we can build extensible AI agents that use Mistral instead of relying on American models.

We can provide more tools to our agents by simply adding additional MCP servers to the config and asking the agent to use them in the prompt when we call it. Just keep in mind that the Mistral Small model is limited in capabilities (and the free tier is rate limited), so switch to the larger or more specialised models for more complex workflows.
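
For example, adding the official memory server (a hypothetical extension, not used in this tutorial) would be a one-entry change to the config:

// Hypothetical addition: the official MCP memory server
const servers: MCPServer[] = [
  // ...the fetch and filesystem entries from before
  {
    name: "memory",
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-memory"],
  },
];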