mcp-server-code-execution-mode
Community server by elusznik
An MCP server that executes Python code in isolated rootless containers with optional MCP server proxying. Implementation of Anthropic's and Cloudflare's ideas for reducing MCP tool definitions context bloat.
## Installation

```shell
podman pull python:3.13-slim
```

## Description
# MCP Code Execution Server: Zero-Context Discovery for 100+ MCP Tools

**Stop paying 30,000 tokens per query.** This bridge implements Anthropic's discovery pattern with rootless security, reducing MCP context from 30K to 200 tokens while proxying any stdio server.

## Overview

This bridge implements the **"Code Execution with MCP"** pattern, a convergence of ideas from industry leaders:

- **Apple's [CodeAct](https://machinelearning.apple.com/research/codeact)**: "Your LLM Agent Acts Better when Generating Code."
- **Anthropic's [Code execution with MCP](https://www.anthropic.com/engineering/code-execution-with-mcp)**: "Building more efficient agents."
- **Cloudflare's [Code Mode](https://blog.cloudflare.com/code-mode/)**: "LLMs are better at writing code to call MCP, than at calling MCP directly."
- **Docker's [Dynamic MCPs](https://www.docker.com/blog/dynamic-mcps-stop-hardcoding-your-agents-world/)**: "Stop Hardcoding Your Agents' World."
- **[Terminal Bench](https://www.tbench.ai)'s [Terminus](https://www.tbench.ai/terminus)**: "A realistic terminal environment for evaluating LLM agents."

Instead of exposing hundreds of individual tools to the LLM (which consumes massive context and confuses the model), this bridge exposes **one** tool: `run_python`. The LLM writes Python code to discover, call, and compose other tools.

### Why This vs. JS "Code Mode"?
While there are JavaScript-based alternatives (like [`universal-tool-calling-protocol/code-mode`](https://github.com/universal-tool-calling-protocol/code-mode)), this project is built for **Data Science** and **Security**:

| Feature | This Project (Python) | JS Code Mode (Node.js) |
| :--- | :--- | :--- |
| **Native Language** | **Python** (the language of AI/ML) | TypeScript/JavaScript |
| **Data Science** | **Native** (`pandas`, `numpy`, `scikit-learn`) | Impossible / hacky |
| **Isolation** | **Hard** (Podman/Docker containers) | Soft (Node.js VM) |
| **Security** | **Enterprise** (rootless, no net, read-only) | Process-level |
| **Philosophy** | **Infrastructure** (standalone bridge) | Library (embeddable) |

**Choose this if:** you want your agent to analyze data, generate charts, or use scientific libraries, or if you require strict container-based isolation for running untrusted code.

## What This Solves (That Others Don't)

### The Pain: MCP Token Bankruptcy

Connect Claude to 11 MCP servers with ~100 tools and you load **30,000 tokens** of tool schemas into every prompt. That's **$0.09 per query** before you ask a single question. Scale to 50 servers and your context window *breaks*.

### Why Existing "Solutions" Fail

- **Docker MCP Gateway**: Manages containers beautifully, but still streams **all tool schemas** into Claude's context. No token optimization.
- **Cloudflare Code Mode**: V8 isolates are fast, but you **can't proxy your existing MCP servers** (Serena, Wolfram, custom tools). Platform lock-in.
- **Academic papers**: Describe Anthropic's discovery pattern, but provide **no hardened implementation**.
- **Proofs of concept**: Skip security (no rootless), skip persistence (cold starts), skip proxying edge cases.
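The token arithmetic above can be checked directly. The $3-per-million-token input price used here is an assumption chosen to match the README's $0.09 figure; the document itself does not state a rate:

```python
# Back-of-envelope cost of preloading tool schemas into every prompt.
# PRICE_PER_MTOK is an assumed input price ($3 per 1M tokens); swap in
# your model's actual rate.
PRICE_PER_MTOK = 3.00
schema_tokens = 30_000  # ~11 servers x ~100 tools of schema text

cost_per_query = schema_tokens / 1_000_000 * PRICE_PER_MTOK
print(f"${cost_per_query:.2f} per query")  # → $0.09 per query
```

At 1,000 queries a day that is roughly $90/day spent on schemas alone, which is the overhead the discovery pattern eliminates.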
### The Fix: Discovery-First Architecture

- **Constant ~200-token overhead** regardless of server count
- **Proxy any stdio MCP server** into rootless containers
- **Fuzzy search across servers** without preloading schemas
- **Production-hardened** with capability dropping and security isolation

### Architecture: How It Differs

```text
Traditional MCP (Context-Bound)
┌─────────────────────────────┐
│ LLM Context (30K tokens)    │
│  - serverA.tool1: {...}     │
│  - serverA.tool2: {...}     │
│  - serverB.tool1: {...}     │
│  - … (dozens more)          │
└─────────────────────────────┘
        ↓ LLM picks tool
        ↓ Tool executes

This Bridge (Discovery-First)
┌─────────────────────────────┐
│ LLM Context (≈200 tokens)   │
│ "Use discovered_servers(),  │
│  query_tool_docs(),         │
│  search_tool_docs()"        │
└─────────────────────────────┘
```
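A minimal sketch of what the model's generated code looks like under this pattern. The helper names (`discovered_servers`, `search_tool_docs`) come from the context snippet above, but these stub implementations, their signatures, and the registry data are illustrative assumptions, not the bridge's actual code:

```python
# Illustrative stubs of the discovery-first pattern: the LLM writes
# ordinary Python against a tiny discovery API instead of receiving
# every tool schema up front. REGISTRY is fake example data.
REGISTRY = {
    "filesystem": {"read_file": "Read a file from disk by path."},
    "wolfram": {"query": "Evaluate a Wolfram|Alpha query string."},
}

def discovered_servers():
    """List the names of proxied MCP servers."""
    return sorted(REGISTRY)

def search_tool_docs(keyword):
    """Fuzzy-search tool docs across servers without preloading schemas."""
    kw = keyword.lower()
    hits = []
    for server, tools in REGISTRY.items():
        for tool, doc in tools.items():
            if kw in tool.lower() or kw in doc.lower():
                hits.append((server, tool, doc))
    return hits

# Code the LLM might generate inside run_python:
print(discovered_servers())        # → ['filesystem', 'wolfram']
print(search_tool_docs("file"))    # only matching docs enter context
```

The point is that only the handful of matching docstrings ever reach the model's context; the rest of the registry stays on the bridge side.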