mcphost

Community

by mark3labs


A CLI host application that enables Large Language Models (LLMs) to interact with external tools through the Model Context Protocol (MCP).

Installation

go install github.com/mark3labs/mcphost@latest
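
After installing, a minimal first run might look like the sketch below. It assumes one of the provider API keys described in the README's Environment Setup section, with interactive mode as the default (per the README's Usage section); the key value is a placeholder.

```bash
# Sketch of a first run, assuming ANTHROPIC_API_KEY holds a valid key.
# Interactive mode is the default; see the README's Usage section
# for model selection and other flags.
export ANTHROPIC_API_KEY='your-anthropic-key'
mcphost
```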

Description

# MCPHost 🤖

A CLI host application that enables Large Language Models (LLMs) to interact with external tools through the Model Context Protocol (MCP). Currently supports Claude, OpenAI, Google Gemini, and Ollama models.

Discuss the Project on [Discord](https://discord.gg/RqSS2NQVsY)

## Table of Contents

- [Overview](#overview-)
- [Features](#features-)
- [Requirements](#requirements-)
- [Environment Setup](#environment-setup-)
- [Installation](#installation-)
- [SDK Usage](#sdk-usage-)
- [Configuration](#configuration-)
  - [MCP Servers](#mcp-servers)
  - [Environment Variable Substitution](#environment-variable-substitution)
  - [Simplified Configuration Schema](#simplified-configuration-schema)
  - [Tool Filtering](#tool-filtering)
  - [Legacy Configuration Support](#legacy-configuration-support)
  - [Transport Types](#transport-types)
  - [System Prompt](#system-prompt)
- [Usage](#usage-)
  - [Interactive Mode](#interactive-mode-default)
  - [Script Mode](#script-mode)
  - [Hooks System](#hooks-system)
  - [Non-Interactive Mode](#non-interactive-mode)
  - [Model Generation Parameters](#model-generation-parameters)
  - [Available Models](#available-models)
  - [Examples](#examples)
  - [Flags](#flags)
  - [Authentication Subcommands](#authentication-subcommands)
  - [Configuration File Support](#configuration-file-support)
  - [Interactive Commands](#interactive-commands)
- [Automation & Scripting](#automation--scripting-)
- [MCP Server Compatibility](#mcp-server-compatibility-)
- [Contributing](#contributing-)
- [License](#license-)
- [Acknowledgments](#acknowledgments-)

## Overview 🌟

MCPHost acts as a host in the MCP client-server architecture, where:

- **Hosts** (like MCPHost) are LLM applications that manage connections and interactions
- **Clients** maintain 1:1 connections with MCP servers
- **Servers** provide context, tools, and capabilities to the LLMs

This architecture allows language models to:

- Access external tools and data sources 🛠️
- Maintain consistent context across interactions 🔄
- Execute commands and retrieve information safely 🔒

Currently supports:

- Anthropic Claude models (Claude 3.5 Sonnet, Claude 3.5 Haiku, etc.)
- OpenAI models (GPT-4, GPT-4 Turbo, GPT-3.5, etc.)
- Google Gemini models (Gemini 2.0 Flash, Gemini 1.5 Pro, etc.)
- Any Ollama-compatible model with function calling support
- Any OpenAI-compatible API endpoint

## Features ✨

- Interactive conversations with multiple AI models
- **Non-interactive mode** for scripting and automation
- **Script mode** for executable YAML-based automation scripts
- Support for multiple concurrent MCP servers
- **Tool filtering** with `allowedTools` and `excludedTools` per server
- Dynamic tool discovery and integration
- Tool calling capabilities across all supported models
- Configurable MCP server locations and arguments
- Consistent command interface across model types
- Configurable message history window for context management
- **OAuth authentication** support for Anthropic (alternative to API keys)
- **Hooks system** for custom integrations and security policies
- **Environment variable substitution** in configs and scripts
- **Builtin servers** for common functionality (filesystem, bash, todo, http)

## Requirements 📋

- Go 1.23 or later
- For OpenAI/Anthropic: API key for the respective provider
- For Ollama: Local Ollama installation with desired models
- For Google/Gemini: Google API key (see https://aistudio.google.com/app/apikey)
- One or more MCP-compatible tool servers

## Environment Setup 🔧

1. API Keys:
   ```bash
   # For all providers (use --provider-api-key flag or these environment variables)
   export OPENAI_API_KEY='your-openai-key'        # For OpenAI
   export ANTHROPIC_API_KEY='your-anthropic-key'  # For Anthropic
   export GOOGLE_API_KEY='your-google-key'        # For Google/Gemini
   ```

2. Ollama Setup:
   - Install Ollama from https://ollama.ai
   - Pull your desired model:
     ```bash
     ollama pull mistral
     ```
   - Ensure Ollama is running:
     ```bash
     ollama serve
     ```

   You can also configure the Ollama client using standard environment variables, such as `OLLAMA_HOST` for the Ollama base URL.

3. Google API Key (for Gemini):
   ```bash
   export GOOGLE_API_KEY='your-api-key'
   ```

4. OpenAI Compatible Setup:
   - Get your API server base URL, API key and model name
   - Use `--provider-url` and `--provider-api-key` flags or set environment variables (a sketch follows this list)

5. Self-Signed Certificates (TLS):

   If your provider uses self-signed certificates (e.g., local Ollama with HTTPS), you can skip certificate verification:
   ```bash
   mcphost --provider-url https://192.168.1.100:443 --tls-skip-verify
   ```

   ⚠️ **WARNING**: Only use `--tls-skip-verify` for development or when connecting to trusted servers with self-signed certificates. This disables TLS certificate verification and is insecure for production use.
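Step 4 above names the relevant flags but includes no command; as a hedged sketch, connecting to an OpenAI-compatible endpoint might look like this (the URL and key values are placeholders, the flags are the ones listed in step 4):

```bash
# Sketch: point mcphost at an OpenAI-compatible endpoint.
# --provider-url and --provider-api-key are the flags from step 4;
# the URL and key values here are placeholders.
mcphost --provider-url https://api.example.com/v1 --provider-api-key 'your-key'
```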
## Installation 📦

```bash
go install github.com/mark3labs/mcphost@latest
```

## SDK Usage 🛠️

MCPHost also provides a Go SDK for programmatic access wi
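
The Configuration sections listed in the table of contents are not part of this excerpt. As an illustration of the tool filtering feature mentioned above, a hypothetical `mcpServers` entry might look like the sketch below. It assumes the conventional MCP JSON config shape (`command` and `args` per server); only the `allowedTools` key is confirmed by the feature list, and the server package and tool names are placeholders:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "allowedTools": ["read_file", "list_directory"]
    }
  }
}
```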
