tuui

Community

by AI-QL


A desktop MCP client designed as a tool unitary utility integration, accelerating AI adoption through the Model Context Protocol (MCP) and enabling cross-vendor LLM API orchestration.

Installation


Description

# <img src="https://cdn.jsdelivr.net/gh/AI-QL/tuui@main/buildAssets/icons/icon.png" alt="Logo" width="24"/> TUUI - Local AI Playground with MCP

#### TUUI is a desktop MCP client designed as a tool unitary utility integration, accelerating AI adoption through the Model Context Protocol (MCP) and enabling cross-vendor LLM API orchestration.

## `Zero accounts` `Full control` `Open source` `Download and Run`

[![](https://img.shields.io/badge/Windows-blue?logo=icloud)](https://github.com/AI-QL/tuui/releases/latest) [![](https://img.shields.io/badge/Linux-orange?logo=linux)](https://github.com/AI-QL/tuui/releases/latest) [![](https://img.shields.io/badge/macOS-lightgrey?logo=apple)](https://github.com/AI-QL/tuui/releases/latest)

## 📜 Introduction

[![](https://camo.githubusercontent.com/077907eb137aa9b2d46ca4af30b77714cb69225eb8be49ad89f3e0ae668c90ca/68747470733a2f2f62616467652e6d6370782e6465763f747970653d636c69656e74)](https://modelcontextprotocol.io/clients#aiql-tuui) [![](https://img.shields.io/badge/Vue3-brightgreen.svg)](https://vuejs.org) [![](https://img.shields.io/badge/Vuetify-blue.svg)](https://vuetifyjs.com) [![LICENSE](https://img.shields.io/github/license/AI-QL/tuui)](https://github.com/AI-QL/tuui/blob/main/LICENSE) [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/AI-QL/tuui)

This repository is essentially an **LLM chat desktop application based on MCP**. It also represents a bold experiment in **creating a complete project using AI**: many components within the project have been directly converted or generated from the prototype project through AI. Given the quality and safety considerations around AI-generated content, this project employs strict syntax checks and naming conventions. Therefore, for any further development, please use the linting tools I've set up to check and automatically fix syntax issues.
## ✨ Features

- ✨ Accelerate AI tool integration via MCP
- ✨ Orchestrate cross-vendor LLM APIs through dynamic configuration
- ✨ Automated application testing support
- ✨ TypeScript support
- ✨ Multilingual support
- ✨ Basic layout manager
- ✨ Global state management through the Pinia store
- ✨ Quick support through the GitHub community and official documentation

## 🚀 Getting Started

You can quickly get started with the project through a variety of options tailored to your role and needs:

- To `explore` the project, visit the wiki page: [TUUI.com](https://www.tuui.com)
- To `download` and use the application directly, go to the releases page: [Releases](https://github.com/AI-QL/tuui/releases/latest)
- For `developer` setup, refer to the installation guide: [Getting Started (English)](docs/src/en/installation-and-build/getting-started.md) | [快速入门 (中文)](docs/src/zhHans/installation-and-build/getting-started.md)
- To `ask the AI` directly about the project, visit: [TUUI@DeepWiki](https://deepwiki.com/AI-QL/tuui)

## ⚙️ Core Requirements

**To use MCP-related features, ensure the following preconditions are met for your environment:**

- Set up an LLM backend (e.g., `ChatGPT`, `Claude`, `Qwen`, or self-hosted) that supports tool/function calling.
- For NPX/NODE-based servers: install `Node.js` to execute JavaScript/TypeScript tools.
- For UV/UVX-based servers: install `Python` and the `UV` library.
- For Docker-based servers: install `Docker`.
- For macOS/Linux systems: modify the default MCP configuration (e.g., adjust CLI paths or permissions).
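As an illustration of the NPX-based case above, many MCP clients launch such servers from a JSON configuration in the common `mcpServers` format shown below. This is a generic sketch using the reference `@modelcontextprotocol/server-filesystem` package; TUUI's exact configuration schema may differ, and the directory path is a placeholder.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```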
> Refer to the [MCP Server Issue](#mcp-server-issue) documentation for guidance.

For guidance on configuring the LLM, refer to the template (e.g., Qwen):

```json
{
  "name": "Qwen",
  "apiKey": "",
  "url": "https://dashscope.aliyuncs.com/compatible-mode",
  "path": "/v1/chat/completions",
  "model": "qwen-turbo",
  "modelList": ["qwen-turbo", "qwen-plus", "qwen-max"],
  "maxTokensValue": "",
  "mcp": true
}
```

The configuration accepts either a JSON object (for a single chatbot) or a JSON array (for multiple chatbots):

```json
[
  {
    "name": "Openrouter && Proxy",
    "apiKey": "",
    "url": "https://api3.aiql.com",
    "urlList": ["https://api3.aiql.com", "https://openrouter.ai/api"],
    "path": "/v1/chat/completions",
    "model": "openai/gpt-4.1-mini",
    "modelList": [
      "openai/gpt-4.1-mini",
      "openai/gpt-4.1",
      "anthropic/claude-sonnet-4",
      "google/gemini-2.5-pro-preview"
    ],
    "maxTokensValue": "",
    "mcp": true
  },
  {
    "name": "DeepInfra",
    "apiKey": "",
    "url": "https://api.deepinfra.com",
    "path": "/v1/openai/chat/completions",
    "model": "Qwen/Qwen3-32B",
    "modelList": [
      "Qwen/Qwen3-32B",
      "Qwen/Qwen3-235B-A22B",
      "meta-llama/Meta-Llama-3.1-70B-Instruct"
    ],
    "mcp": true
  }
]
```

### 📕 Additional Configuration

| Configuration | Description | Location | Note |
| --- | --- | --- | --- |
| LLM Endpoints | Default LLM Chatbots config | [llm.json](/src/main/assets/config/llm.json) | Full config types c |
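To make the role of the template fields concrete, here is a minimal TypeScript sketch of how a chatbot entry like the Qwen template could be turned into an OpenAI-compatible Chat Completions request: `url` and `path` concatenate into the endpoint, `apiKey` becomes a bearer token, and `model` selects the model. The `buildRequest` helper and the request shape are illustrative assumptions, not TUUI's actual internals.

```typescript
// Hypothetical helper: assemble a Chat Completions request from a config entry.
interface ChatbotConfig {
  name: string;
  apiKey: string;
  url: string;   // base URL of the OpenAI-compatible backend
  path: string;  // endpoint path, e.g. "/v1/chat/completions"
  model: string;
  mcp: boolean;  // whether MCP tool calling is enabled for this chatbot
}

function buildRequest(cfg: ChatbotConfig, userMessage: string) {
  return {
    endpoint: cfg.url + cfg.path, // base URL + path form the full endpoint
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${cfg.apiKey}`,
    },
    body: {
      model: cfg.model,
      messages: [{ role: "user", content: userMessage }],
    },
  };
}

const qwen: ChatbotConfig = {
  name: "Qwen",
  apiKey: "sk-placeholder", // placeholder, not a real key
  url: "https://dashscope.aliyuncs.com/compatible-mode",
  path: "/v1/chat/completions",
  model: "qwen-turbo",
  mcp: true,
};

const req = buildRequest(qwen, "Hello");
console.log(req.endpoint);
// -> https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions
```

Splitting `url` from `path` is what lets entries like DeepInfra point at a non-standard path (`/v1/openai/chat/completions`) while reusing the same request logic.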
