<img src="assets/jinni_banner_1280x640.png" alt="Jinni Banner" width="400"/>

# Jinni: Bring Your Project Into Context

<a href="https://glama.ai/mcp/servers/@smat-dev/jinni">
  <img width="380" height="200" src="https://glama.ai/mcp/servers/@smat-dev/jinni/badge" alt="Jinni: Bring Your Project Into Context MCP server" />
</a>

## Installation

```bash
pip install jinni
```
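Once installed, you can try the CLI straight away. A minimal sketch – the exact invocation is an assumption here, so check `jinni --help` for the real interface:

```bash
# Dump context for the current project (assumed form: jinni [TARGET_PATH]);
# the CLI also copies the result to the clipboard, ready to paste into a chat
jinni .

# Or redirect the dump to a file / pipe it wherever you need it
jinni . > context.md
```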
<img src="assets/jinni_banner_1280x640.png" alt="Jinni Banner" width="400"/> # Jinni: Bring Your Project Into Context <a href="https://glama.ai/mcp/servers/@smat-dev/jinni"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@smat-dev/jinni/badge" alt="Jinni: Bring Your Project Into Context MCP server" /> </a> Jinni is a tool to efficiently provide Large Language Models the context of your projects. It gives a consolidated view of relevant project files, overcoming the limitations and inefficiencies of reading files one by one. Each file's content is preceded by a simple header indicating its path: ``` ```path=src/app.py print("hello") ``` The philosophy behind this tool is that LLM context windows are large, models are smart, and directly seeing your project best equips the model to help with anything you throw at it. There is an MCP (Model Context Protocol) server for integration with AI tools and a command-line utility (CLI) for manual use that copies project context to the clipboard ready to paste wherever you need it. These tools are opinionated about what counts as relevant project context to best work out of the box in most use cases, automatically excluding: * Binary files * Dotfiles and hidden directories * Common naming conventions for logs, build directories, tempfiles, etc Inclusions/exclusions are customizable with complete granularity if required using `.contextfiles` – this works like `.gitignore` except defining inclusions. `.gitignore` files themselves are also respected automatically, but any rules in `.contextfiles` take priority. The MCP server can provide as much or as little of the project as desired. By default the scope is the whole project, but the model can ask for specific modules / matching patterns / etc. # MCP Quickstart MCP server config file for Cursor / Roo / Claude Desktop / client of choice: ```json { "mcpServers": { "jinni": { "command": "uvx", "args": ["jinni-server"] } } } ``` *You can optionally constrain the server to only read within a tree for security in case your LLM goes rogue: add `"--root", "/absolute/path/"` to the `args` list.* Install uv if it is not on your system: https://docs.astral.sh/uv/getting-started/installation/ Reload your IDE and you can now ask the agent to read in context. If you want to restrict this to particular modules / paths just ask - e.g. "Read context for tests". In action with Cursor: <img src="assets/use_example.png" alt="Usage Example"> # Note For Cursor Users Cursor can silently drop context that is larger than the allowed maximum, so if you have a sizable project and the agent acts like the tool call never happened, try reducing what you are bringing in ("read context for xyz") ## Components 1. **`jinni` MCP Server:** * Integrates with MCP clients like Cursor, Cline, Roo, Claude Desktop, etc. * Exposes a `read_context` tool that returns a concatenated string of relevant file contents from a specified project directory. 2. **`jinni` CLI:** * A command-line tool for manually generating the project context dump. * Useful for feeding context to LLMs via copy-paste or file input. Or pipe the output wherever you need it. ## Features * **Efficient Context Gathering:** Reads and concatenates relevant project files in one operation. * **Intelligent Filtering (Gitignore-Style Inclusion):** * Uses a system based on `.gitignore` syntax (`pathspec` library's `gitwildmatch`). * Automatically loads `.gitignore` files from the project root downward. These exclusions can be overridden by rules in `.contextfiles`. 
Install uv if it is not on your system: https://docs.astral.sh/uv/getting-started/installation/

Reload your IDE and you can now ask the agent to read in context. If you want to restrict this to particular modules / paths, just ask – e.g. "Read context for tests".

In action with Cursor:

<img src="assets/use_example.png" alt="Usage Example">

# Note For Cursor Users

Cursor can silently drop context that is larger than the allowed maximum, so if you have a sizable project and the agent acts as if the tool call never happened, try reducing what you are bringing in ("read context for xyz").

## Components

1. **`jinni` MCP Server:**
    * Integrates with MCP clients like Cursor, Cline, Roo, Claude Desktop, etc.
    * Exposes a `read_context` tool that returns a concatenated string of relevant file contents from a specified project directory.
2. **`jinni` CLI:**
    * A command-line tool for manually generating the project context dump.
    * Useful for feeding context to LLMs via copy-paste or file input, or for piping the output wherever you need it.

## Features

* **Efficient Context Gathering:** Reads and concatenates relevant project files in one operation.
* **Intelligent Filtering (Gitignore-Style Inclusion):**
    * Uses a system based on `.gitignore` syntax (the `pathspec` library's `gitwildmatch`).
    * Automatically loads `.gitignore` files from the project root downward. These exclusions can be overridden by rules in `.contextfiles`.
    * Supports hierarchical configuration using `.contextfiles` placed within your project directories. Rules are applied dynamically based on the file/directory being processed.
    * **Matching Behavior:** Patterns match against the path relative to the **target directory** being processed. Output paths remain relative to the original project root.
    * **Rule Root Behavior:** Each target has its own rule root:
        * Targets within the project root (or CWD) use the project root/CWD as their rule root.
        * External targets use themselves as their rule root, ensuring self-contained rule sets.
    * **Overrides:** Supports `--overrides` (CLI) or `rules` (MCP) to use a specific set of rules exclusively. When overrides are active, both built-in default rules and any `.contextfiles` are ignored. Path matching for overrides is still relative to the target directory (see the sketch after this list).
    * **Explicit Target Inclusion:** Files explicitly provided as targets are always included (bypassing rule checks, but not binary/size checks).
* **Customizable Configuration (`.contextfiles` / Overrides):**
    * Define precisely which files/directories to include or exclude using `.gitignore`-style patterns applied to the **relative path** (see the example `.contextfiles` below).
    * Patterns starting with `!` negate the match (an exclusion).
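As a concrete illustration of the rules above, here is a hypothetical `.contextfiles` (the paths are made up; the syntax is the gitignore-style matching described in the feature list):

```gitignore
# .contextfiles – gitignore-style patterns that define *inclusions*

# Include all Python sources under src/
src/**/*.py

# Include top-level markdown docs
docs/*.md

# '!' negates the match: exclude generated code even though src/**/*.py matched it
!src/generated/**
```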
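The overrides mechanism can be sketched similarly – the flag name `--overrides` comes from the feature list above, but the exact argument shape is an assumption here, and `overrides.rules` is a hypothetical file:

```bash
# Use ONLY the rules in overrides.rules: built-in defaults and any
# .contextfiles are ignored while overrides are active
jinni --overrides overrides.rules ./my_project
```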