consult7

Community

by szeider


MCP server to consult a language model with large context size


Description

# Consult7 MCP Server

**Consult7** is a Model Context Protocol (MCP) server that enables AI agents to consult large context window models via [OpenRouter](https://openrouter.ai) for analyzing extensive file collections - entire codebases, document repositories, or mixed content that exceeds the current agent's context limits.

## Why Consult7?

**Consult7** enables any MCP-compatible agent to offload file analysis to large context models (up to 2M tokens). It is useful when:

- the agent's current context is full
- the task requires specialized model capabilities
- you need to analyze a large codebase in a single query
- you want to compare results from different models

> "For Claude Code users, Consult7 is a game changer."

## How it works

**Consult7** collects files from the specific paths you provide (with optional wildcards in filenames), assembles them into a single context, and sends them to a large context window model along with your query. The result is fed directly back to the agent you are working with.

## Example Use Cases

### Quick codebase summary

* **Files:** `["/Users/john/project/src/*.py", "/Users/john/project/lib/*.py"]`
* **Query:** "Summarize the architecture and main components of this Python project"
* **Model:** `"google/gemini-2.5-flash"`
* **Mode:** `"fast"`

### Deep analysis with reasoning

* **Files:** `["/Users/john/webapp/src/*.py", "/Users/john/webapp/auth/*.py", "/Users/john/webapp/api/*.js"]`
* **Query:** "Analyze the authentication flow across this codebase. Think step by step about security vulnerabilities and suggest improvements"
* **Model:** `"anthropic/claude-sonnet-4.5"`
* **Mode:** `"think"`

### Generate a report saved to file

* **Files:** `["/Users/john/project/src/*.py", "/Users/john/project/tests/*.py"]`
* **Query:** "Generate a comprehensive code review report with architecture analysis, code quality assessment, and improvement recommendations"
* **Model:** `"google/gemini-2.5-pro"`
* **Mode:** `"think"`
* **Output File:** `"/Users/john/reports/code_review.md"`
* **Result:** Returns `"Result has been saved to /Users/john/reports/code_review.md"` instead of flooding the agent's context

## Featured Model: Gemini 3 Pro

Consult7 now supports **Google's Gemini 3 Pro** (`google/gemini-3-pro-preview`), the flagship reasoning model with a 1M context window and state-of-the-art performance on reasoning benchmarks.

**Quick mnemonics for power users:**

- **`gemt`** = Gemini 3 Pro + think mode (flagship reasoning)
- **`gptt`** = GPT-5.2 + think mode (latest GPT)
- **`grot`** = Grok 4 + think mode (alternative reasoning)
- **`gemf`** = Gemini Flash Lite + fast mode (ultra fast)
- **`ULTRA`** = run `gemt`, `gptt`, and `grot` in parallel for maximum insight

These mnemonics make it easy to reference model+mode combinations in your queries.

## Installation

### Claude Code

Simply run:

```bash
claude mcp add -s user consult7 uvx -- consult7 your-openrouter-api-key
```

### Claude Desktop

Add to your Claude Desktop configuration file:

```json
{
  "mcpServers": {
    "consult7": {
      "type": "stdio",
      "command": "uvx",
      "args": ["consult7", "your-openrouter-api-key"]
    }
  }
}
```

Replace `your-openrouter-api-key` with your actual OpenRouter API key. No installation is required: `uvx` automatically downloads and runs consult7 in an isolated environment.

## Command Line Options

```bash
uvx consult7 <api-key> [--test]
```

- `<api-key>`: Required. Your OpenRouter API key
- `--test`: Optional. Test the API connection

The model and mode are specified when calling the tool, not at startup.

## Supported Models

Consult7 supports **all 500+ models** available on OpenRouter. Below are the flagship models with optimized dynamic file size limits:

| Model | Context | Use Case |
|-------|---------|----------|
| `openai/gpt-5.2` | 400k | Latest GPT, balanced performance |
| `google/gemini-3-pro-preview` | 1M | **Flagship reasoning model** |
| `google/gemini-2.5-pro` | 1M | Best for complex analysis |
| `google/gemini-2.5-flash` | 1M | Fast, good for most tasks |
| `google/gemini-2.5-flash-lite` | 1M | Ultra fast, simple queries |
| `anthropic/claude-sonnet-4.5` | 1M | Excellent reasoning |
| `anthropic/claude-opus-4.5` | 200k | Best quality, slower |
| `x-ai/grok-4` | 256k | Alternative reasoning model |
| `x-ai/grok-4-fast` | 2M | Largest context window |

**Quick mnemonics:**

- `gptt` = `openai/gpt-5.2` + `think` (latest GPT, deep reasoning)
- `gemt` = `google/gemini-3-pro-preview` + `think` (Gemini 3 Pro, flagship reasoning)
- `grot` = `x-ai/grok-4` + `think` (Grok 4, deep reasoning)
- `oput` = `anthropic/claude-opus-4.5` + `think` (Claude Opus, deep reasoning)
- `opuf` = `anthropic/claude-opus-4.5` + `fast` (Claude Opus, no reasoning)
- `gemf` = `google/gemini-2.5-flash-lite` + `fast` (ultra fast)
- `ULTRA` = call `gemt`, `gptt`, `grot`, and `oput` in parallel (4 frontier models for maximum insight)

You can use any OpenRouter model ID (e.g., `deepseek/deepseek-r1-0528`). See the [full model l
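The collection step described under "How it works" (wildcard paths gathered into one prompt for a large-context model) can be sketched roughly as below. This is an illustrative approximation, not Consult7's actual internals; the function name `assemble_context` and the prompt layout are made up for this example.

```python
import glob


def assemble_context(patterns: list[str], query: str) -> str:
    """Gather files matching each path pattern into a single prompt string.

    Each pattern may contain wildcards in the filename part, e.g.
    "/Users/john/project/src/*.py". File contents are concatenated with
    simple path separators, then the user's query is appended at the end.
    """
    parts = []
    for pattern in patterns:
        # sorted() keeps the assembled context deterministic across runs
        for path in sorted(glob.glob(pattern)):
            with open(path, encoding="utf-8") as f:
                parts.append(f"--- {path} ---\n{f.read()}")
    return "\n\n".join(parts) + f"\n\nQuery: {query}"
```

The assembled string would then be sent, together with the chosen mode, to the selected OpenRouter model (e.g. `google/gemini-3-pro-preview`), and the model's answer returned to the calling agent.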
