Add MCP support #3672

Open · wants to merge 6 commits into main

Conversation


@Antonin-Deniau Antonin-Deniau commented Mar 28, 2025

This is a rough implementation of MCP in Aider.
It currently supports adding stdio MCP servers with this configuration in ~/.aider.conf.yml:

mcp: true
mcp-servers:
  - git-server
mcp-server-command:
  - "git-server:uvx mcp-server-git"
mcp-tool-permission:
  - "git-server:git_status=auto"
mcp-server-env:
  - "git-server:GIT_SERVER=server"

Todo:

  • Add HTTP support for remote MCP servers
  • Change the way async is implemented in the mcp module? Currently I use threading and a message queue to keep everything else synchronous (see the sketch after this list)
  • Use a better structure for the config file? I used a format that is easy to write in both the config file and as CLI arguments
  • Delete the command that I used to test MCP tools
  • Add documentation
  • Add MCP resources
  • Add MCP prompts
  • Add tests
  • Add Anthropic's mcp dependency, since the feature depends on it (I don't know how to add it to requirements.txt properly)
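
For reference, a minimal sketch of that threading + message queue pattern (CallArguments is the name used in the diff; the class and method names here are purely illustrative, not the actual implementation):

import asyncio
import queue
import threading
from dataclasses import dataclass

@dataclass
class CallArguments:
    server_name: str
    function: str
    args: dict

class McpBridge:
    """Illustrative only: a background thread services MCP tool calls so the
    rest of aider can stay synchronous."""

    def __init__(self):
        self.message_queue = queue.Queue()
        self.result_queue = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            call = self.message_queue.get()
            if call is None:  # shutdown sentinel
                break
            # A real implementation would dispatch to the MCP session for
            # call.server_name here; this stub just echoes the request.
            result = asyncio.run(self._fake_tool_call(call))
            self.result_queue.put(result)

    async def _fake_tool_call(self, call):
        return f"{call.server_name}.{call.function}({call.args!r})"

    def call_tool(self, server_name, function, args):
        """Sync call to the thread with queues."""
        self.message_queue.put(CallArguments(server_name, function, args))
        return self.result_queue.get()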


CLAassistant commented Mar 28, 2025

CLA assistant check
All committers have signed the CLA.

@bartlomiejwolk

How to interpret the config in ~/.aider.conf.yml?

Here's what I know from Claude Desktop / Roo Cline:

{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "c:/Users/<user>/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
        "D:/"
      ],
      "disabled": true,
      "alwaysAllow": []
    }
  }
}

@Antonin-Deniau
Author

> How to interpret the config in ~/.aider.conf.yml?
>
> Here's what I know from Claude Desktop / Roo Cline:
>
> {
>   "mcpServers": {
>     "filesystem": {
>       "command": "node",
>       "args": [
>         "c:/Users/<user>/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
>         "D:/"
>       ],
>       "disabled": true,
>       "alwaysAllow": []
>     }
>   }
> }

For this, the config should be something like:

mcp: true
mcp-servers:
  - filesystem
mcp-server-command:
  - "filesystem:node c:/Users/<user>/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js D:/"

But now you made me realize that I haven't checked whether it works on Windows...


wladimiiir commented Mar 30, 2025

I checked your code, and I have to say - great job implementing this! While I’m not a Python developer and can’t comment on the threading aspect 😁, I do have some feedback.

One suggestion: I’d avoid using the reflection message as the MCP tool’s response. If I’m not mistaken, only three reflections are allowed by default, which could limit the agent’s usability by restricting iterations. Changing max_reflection_count would have broader effects, like impacting incorrect SEARCH/REPLACE blocks.

From a UX perspective, I’d also reconsider how servers are defined in the config file. Something like this might be more intuitive:

mcp: true
mcp-servers:
  git-server:
    command: "uvx mcp-server-git"
    tool-permissions:
      git_status: auto
    env:
      GIT_SERVER: server
  another-server:
    command: "some-command"
    tool-permissions:
      some_tool: auto
    env:
      SOME_ENV: value


bartlomiejwolk commented Mar 30, 2025

I like the proposal by @wladimiiir for config formatting (no hyphens).


@lutzleonhardt lutzleonhardt left a comment


Hi @Antonin-Deniau

General

Thank you for your PR and for adding the ability to let aider use MCP :)
I think there is a potential problem with the queues/threading, but besides that (and the todos you mentioned) your code looks very clean and nice to me.

Just some additional ideas:

  • What about adding a new coder, /mcp or /agent, that only adds the tool prompt when I really want tools in an ongoing conversation? There is always the overhead of adding the tool prompt with each request.
  • Is there a way to leverage native tool use via litellm and the OpenAI tool spec? i.e. LiteLLM plans to integrate an MCP bridge which could maybe be used: [Feature] MCP bridge support BerriAI/litellm#7934

"""Sync call to the thread with queues."""

self.message_queue.put(CallArguments(server_name, function, args))
result = self.result_queue.get()


IMO I see a potential problem here.
All server threads read from the same message_queue. Whichever thread calls get() first gets the message. If the server name does not match, that thread just calls task_done() and discards it.
This means the correct server thread might never see that message, because it has already been consumed by the wrong thread.
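
One possible remedy, sketched here purely to illustrate the point (not code from this PR): give each server thread its own request queue, so a message can only be consumed by the thread it is addressed to.

import queue

class PerServerQueues:
    def __init__(self, server_names):
        # One request queue per server thread; a thread can no longer steal
        # a message meant for another server.
        self.requests = {name: queue.Queue() for name in server_names}
        self.results = queue.Queue()

    def call(self, server_name, function, args):
        self.requests[server_name].put((function, args))
        return self.results.get()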

@@ -1455,6 +1466,14 @@ def send_message(self, inp):
self.reflected_message = add_rel_files_message
return

tool_results = self.check_for_tool_calls(content)
for tool_result in tool_results:
    if self.reflected_message:

@lutzleonhardt lutzleonhardt Mar 30, 2025


As already mentioned by others: max_reflections is the limit for consecutive tool calls.

        while message:
            self.reflected_message = None
            list(self.send_message(message))

            if not self.reflected_message:
                break

            if self.num_reflections >= self.max_reflections:
                self.io.tool_warning(f"Only {self.max_reflections} reflections allowed, stopping.")
                return

            self.num_reflections += 1
            message = self.reflected_message

Author


What would be the correct way to implement that? Since it behaves almost exactly like a diff edit / a file inclusion,
should I implement a separate reflection limit / reflected_message only for the MCP tools?


@wladimiiir wladimiiir Mar 31, 2025


I would say you could have another field self.mcp_tool_result_message = tool_result instead of using self.reflected_message. And then:

while message:
    self.reflected_message = None
    list(self.send_message(message))

    if not self.reflected_message and not self.mcp_tool_result_message:
        break

    if self.num_reflections >= self.max_reflections:
        self.io.tool_warning(f"Only {self.max_reflections} reflections allowed, stopping.")
        return
    if self.num_mcp_iterations >= self.max_mcp_iterations:
        self.io.tool_warning(f"Only {self.max_mcp_iterations} MCP iterations allowed, stopping.")
        return

    if self.reflected_message:
        self.num_reflections += 1
        message = self.reflected_message
    if self.mcp_tool_result_message:
        self.num_mcp_iterations += 1
        message = self.mcp_tool_result_message

Author


Thanks, it seems to work. Maybe I'll add max_mcp_iterations to the configuration too.


for server in self.get_enabled_servers():
    if server.command:
        t = threading.Thread(target=self._server_loop, args=(server, loop))

@lutzleonhardt lutzleonhardt Mar 30, 2025


You have one thread per MCP server, but because of the blocking behaviour of the queues the tool calls are effectively synchronous: the main thread is blocked while waiting. In addition, all threads read from one queue, which means only one thread can receive a given message at the moment.

Alternatives (provided by Sonnet):

  1. Single AsyncIO Event Loop in a Background Thread
class AsyncioManager:
    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self.thread = threading.Thread(target=self._run_event_loop, daemon=True)
        self.thread.start()
        self.servers = {}

    def _run_event_loop(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()
        
    def call_async(self, coro):
        future = concurrent.futures.Future()
        
        async def wrapper():
            try:
                result = await coro
                future.set_result(result)
            except Exception as e:
                future.set_exception(e)
                
        asyncio.run_coroutine_threadsafe(wrapper(), self.loop)
        return future.result()  # This blocks until result is available
  2. Asynchronous API with Callbacks
class AsyncToolManager:
    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self.thread = threading.Thread(target=self._run_event_loop, daemon=True)
        self.thread.start()
        
    def execute_tool(self, server_name, tool_name, arguments, callback):
        """Non-blocking tool execution with callback"""
        
        async def run_tool():
            result = await self._async_execute_tool(server_name, tool_name, arguments)
            # Note: the callback runs on the event loop thread, not the main thread
            callback(result)
            
        asyncio.run_coroutine_threadsafe(run_tool(), self.loop)

@@ -1106,6 +1108,11 @@ def fmt_system_prompt(self, prompt):
)
else:
quad_backtick_reminder = ""

# Add MCP tools information if MCP is enabled


The two sections could be combined into one, right?

prompt = prompt.format(
    fence=self.fence,
    quad_backtick_reminder=quad_backtick_reminder,
    lazy_prompt=lazy_prompt,
    platform=platform_text,
    shell_cmd_prompt=shell_cmd_prompt,
    shell_cmd_reminder=shell_cmd_reminder,
    language=language,
)

if self.main_model.system_prompt_prefix:
    prompt = self.main_model.system_prompt_prefix + prompt

# Add MCP tools information if MCP is enabled
if is_mcp_enabled():
    mcp_tools_info = self.gpt_prompts.mcp_tools_prefix + "\n\n" + get_available_tools_prompt()
    prompt += "\n\n" + mcp_tools_info

Remove task_done to avoid deadlock
@ishaan-jaff

Hi @lutzleonhardt - I'm the litellm maintainer. We'd love to help aider support MCP. Here's our MCP bridge in the litellm Python SDK: https://docs.litellm.ai/docs/mcp#litellm-python-sdk-mcp-bridge

Any thoughts on this? How can we help you?

Add an MCP configuration file
Remove redundant enable in MCP server class
Add num_mcp_iterations & max_mcp_iterations so it won't clash with max_reflections
@Antonin-Deniau
Author

Since the main YAML configuration file cannot support a nested configuration structure, I added an mcp-configuration CLI argument that points to a YAML file specifically for the MCP configuration, e.g.:
In the regular configuration file:

mcp-configuration: ~/.aider/mcp.yml

In ~/.aider/mcp.yml

servers:
- name: git-server
  command: "python -m mcp_server_git"
  env:
    GIT_SERVER: server
  permissions:
    git_status: auto

I removed the redundant enable property in the MCP server class.
I added num_mcp_iterations & max_mcp_iterations so it won't clash with max_reflections.
I fixed some deadlocks in the message queues for now.

@mark-bradshaw

Can we have a fallback in the MCP config file names, like what happens with a lot of other aider config files? That way there can be MCP config that's specific to a particular repo, but also config that is global. So the final MCP config would be ~/.aider/mcp.yml + ./.aider.mcp.yml (see the sketch below).
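
A rough sketch of what that fallback could look like (the file names follow the comment above; the merge order and the helper itself are assumptions, not existing aider behaviour):

import os

import yaml  # PyYAML

def load_mcp_config():
    # Global config first, then the repo-local file so it can add to
    # (or override) the globally defined servers.
    candidates = [os.path.expanduser("~/.aider/mcp.yml"), "./.aider.mcp.yml"]
    merged = {"servers": []}
    for path in candidates:
        if not os.path.exists(path):
            continue
        with open(path) as f:
            data = yaml.safe_load(f) or {}
        merged["servers"].extend(data.get("servers", []))
    return merged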

@lutzleonhardt

> Hi @lutzleonhardt - I'm the litellm maintainer. We'd love to help aider support MCP. Here's our MCP bridge in the litellm Python SDK: https://docs.litellm.ai/docs/mcp#litellm-python-sdk-mcp-bridge
>
> Any thoughts on this? How can we help you?

Hi @ishaan-jaff
Thanks for the offer to help :)
Aider already uses LiteLLM to access the models. But for the MCP bridge a dedicated server (the LiteLLM proxy) needs to be run, right? I assume this is not suitable here, because aider wants to stay independent of a dedicated server component.

…background, using non-blocking message queue on the MCP side
Fixed some problems with MCP tool permissions
Use thread.join to gracefully stop the thread
@Antonin-Deniau
Author

The new commit now aggregates all the sessions into one async while loop in a background thread.

There is currently a problem I'm trying to debug: when we send a Ctrl-C (to cancel an LLM call, for example), it is sent to the MCP server too, which makes it crash in a subsequent call. It causes issues with this one, for example: https://github.com/modelcontextprotocol/servers/tree/main/src/git
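
For reference, Ctrl-C sends SIGINT to the whole foreground process group, which includes the spawned server. One generic way to shield a child process is to start it in its own session/process group; whether this can be passed through the MCP stdio transport still needs to be checked:

import subprocess

# Illustrative only: launch the stdio server outside the terminal's foreground
# process group so it does not receive the Ctrl-C aimed at aider itself.
proc = subprocess.Popen(
    ["uvx", "mcp-server-git"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    start_new_session=True,  # POSIX only; Windows would need creationflags instead
)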

@jerzydziewierz

Please do not invent a new format for configuring MCP servers!
The MCP server configuration format is already well established, and changing it from JSON to YAML will create endless maintenance headaches for all users.

@jerzydziewierz

> From a UX perspective, I’d also reconsider how servers are defined in the config file. Something like this might be more intuitive:
>
> mcp: true
> mcp-servers:
>   git-server:
>     command: "uvx mcp-server-git"

No, please don't!

Again, the way to configure MCP servers is well established, and inventing a new configuration format will create endless headaches from incomplete translatability between the classic format and this new custom YAML approach.
