A Library for Direct MCP Server Communication in Your Code

I'm going to show you a library, mcp-use, that lets you communicate with your MCP servers directly from your own code, using any LLM you prefer. Previously, communicating with MCP servers required a dedicated MCP client; we already have clients like Windsurf, Cursor, and Claude Desktop. Now you can use this new MCP client library instead. It works through an agent, comes with some pretty cool features, and is really easy to install. I'll show you how to set it up. Keep in mind that using it properly does require some coding knowledge, but even if you're new to code, that's not a big issue; I'll also show you how to vibe code with it.

Library Installation

Let's look at the installation of the library. It's a Python-based library, so the first step is to check whether Python is installed on your system.
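You can check from your terminal:

```bash
python --version    # Windows
python3 --version   # macOS / Linux
```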

Then, you'll create a virtual environment. Once it's created, you need to activate it. The commands for both Windows and macOS/Linux are provided below.

Windows:

```bash
python -m venv venv
.\venv\Scripts\activate
```

macOS / Linux:

```bash
python3 -m venv venv
source venv/bin/activate
```

This is the command to install the library; it's on the GitHub repo too:

```bash
pip install mcp-use
```

If Python 3 on your system is invoked as `python3` (which you'll know from the version command mentioned earlier), make sure to run everything with `pip3`, not `pip`.

If you're planning to use an OpenAI key or model to interact with your MCP servers, you need to install langchain-openai. For Anthropic, you need to install langchain-anthropic. For other providers, like Groq or Llama, you can find the required libraries listed at their respective documentation links.
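For example:

```bash
pip install langchain-openai     # for OpenAI models
pip install langchain-anthropic  # for Anthropic models
```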

Project Setup

Once everything is installed, open up your terminal. You can see that I'm in a Python environment. I've installed the pip packages, and now I'll show you how to move forward.

First, open this directory in your code editor to launch the project. At this point, your project will only contain the virtual environment folder; you need to create the other files yourself and add the code manually. Start by creating a new file named .env.

Create the .env file and paste the following line with your API key. You only need to paste the key for the provider you're using. I'm using OpenAI in this example, so I pasted that key.

.env file:

```
OPENAI_API_KEY="your-api-key-here"
```

Code Explanation

Let me quickly explain the code and how it works.

At the top, you can see the imports from mcp_use, LangChain, and OpenAI, since we're using OpenAI for the LLM. We define a function, and load_dotenv() loads the environment variables from the .env file. Then we create an MCP client using the MCP configuration from a separate file; that's where the Airbnb MCP configuration lives.
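For reference, the configuration file might look something like this. This is a minimal sketch using mcp-use's standard config format; the `@openbnb/mcp-server-airbnb` package name is an assumption based on the community Airbnb MCP server, so substitute whichever server you're actually using:

```json
{
  "mcpServers": {
    "airbnb": {
      "command": "npx",
      "args": ["-y", "@openbnb/mcp-server-airbnb"]
    }
  }
}
```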

Next, we define the LLM we want to use; if you're using Anthropic, the setup will be a bit different. Then we choose our model and create an agent. The agent takes the LLM, the client we created, and a limit on the maximum number of steps it can take. We give it a prompt, which it runs against the MCP server, and then it prints the result for us.

Here is a basic example of using the Airbnb MCP:

```python
import asyncio

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient


async def main():
    # Load environment variables (the API key) from the .env file
    load_dotenv()

    # Create an MCP client from a configuration file
    # (assumes a file named 'airbnb_mcp_config.json' exists)
    client = MCPClient.from_config_file("airbnb_mcp_config.json")

    # Define the LLM (OpenAI in this case)
    llm = ChatOpenAI(model="gpt-4o")

    # Create an agent with the LLM, the client we created,
    # and the maximum number of steps it can take
    agent = MCPAgent(llm=llm, client=client, max_steps=5)

    # Give the agent a prompt to run against the MCP server,
    # then print the result
    result = await agent.run("Find listings with a pool and good ratings.")
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

You can modify it however you like and build really interesting applications. You don't need a separate client anymore: you can bind an LLM to an MCP server and create modular applications. If you've seen our WhatsApp MCP article, that same concept can be used here to make fully autonomous WhatsApp agents.
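For instance, pointing the same agent at a WhatsApp MCP server is just a config swap (a purely illustrative sketch; 'whatsapp_mcp_config.json' is a hypothetical file name for a config describing your WhatsApp MCP server):

```python
# Swap the config file to target a different MCP server
# ('whatsapp_mcp_config.json' is a hypothetical file name)
client = MCPClient.from_config_file("whatsapp_mcp_config.json")
```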

Running the Code

Now, let me run it for you. The server has started and it's running. It looks like there was some kind of error, but we still got the output: we received the listings from the Airbnb MCP. It gave us the links, filtered by our stated preferences, like having a pool and good ratings, and handpicked listings based on those conditions. This is a cool implementation. It works, and the possibilities for creating different agents are endless.

The code you just saw is already in the GitHub repository.

Framework Features

One issue you might run into is that your editor doesn't have the context of this framework. To give it that context, you can often add documentation. For example, in Cursor, scroll down to the features section and go to "Docs". Add a new doc, and in the link field, paste the link to the README file from the GitHub repo. You don't need to provide the link to the entire repository; the README alone contains the full documentation. The editor will read it, index it, and use it as context. To use it in code, type the @ sign, go into Docs, and select the mcp-use docs. The editor will reference them and generate code based on the framework properly.

Another thing you can do is convert the repo into an LLM-ingestible format if you have any questions about it. To do that, replace "hub" with "ingest" in the GitHub URL, so github.com/... becomes gitingest.com/... . This opens the repository in a tool that converts the entire repo into readable text you can use with any LLM. You can then ask questions about it whenever you're confused or need clarification.

You've seen it in action, and you can check the repo for other example use cases, like Playwright and Airbnb. I used the Airbnb one, but with OpenAI. The Blender MCP server can also be used.

This framework also supports:

* HTTP connections: you can connect to servers already running on localhost.
* Multi-server support: multiple servers can be defined in a single configuration file. If you're working with multiple MCP servers, you can either specify which result should come from which server or handle it dynamically by setting use_server_manager to true, in which case the agent will intelligently choose the right MCP server (see the sketch after this list).
* Tool access control: you can also restrict which tools the agent has access to.
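Here's a rough sketch of those features together, reusing the API from the earlier example. The config entries are assumptions: the Airbnb server package and the localhost URL are placeholders, and use_server_manager / disallowed_tools are the parameter names as I understand them from the docs:

```python
import asyncio

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

# Two servers in one config: a command-based server and an HTTP
# connection to a server already running on localhost (both entries
# are placeholders; adjust them to the servers you actually run).
config = {
    "mcpServers": {
        "airbnb": {
            "command": "npx",
            "args": ["-y", "@openbnb/mcp-server-airbnb"],
        },
        "local_http": {
            "url": "http://localhost:8931/sse",
        },
    }
}


async def main():
    load_dotenv()
    client = MCPClient.from_dict(config)

    agent = MCPAgent(
        llm=ChatOpenAI(model="gpt-4o"),
        client=client,
        use_server_manager=True,           # agent picks the right server
        disallowed_tools=["file_system"],  # tool access control
    )

    result = await agent.run("Find me a well-rated listing with a pool.")
    print(result)


asyncio.run(main())
```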

This is a solid framework, and I'm already thinking of all the wild ways to build new applications with it. You should check it out too. I'm working on a few projects with it right now. If you don't fully understand it, I've already shown how you can use an LLM to make sense of everything.