chatgpt-cli

Community

by kardolus

0.0
0 reviews

ChatGPT CLI is a versatile tool for interacting with LLMs through OpenAI, Azure, and other popular providers like Perplexity AI and Llama. It supports prompt files, history tracking, and live data injection via MCP (Model Context Protocol), making it ideal for both casual users and developers seeking a powerful, customizable GPT experience.

Installation

brew tap kardolus/chatgpt-cli && brew install chatgpt-cli

Description

# ChatGPT CLI

![Test Workflow](https://github.com/kardolus/chatgpt-cli/actions/workflows/test.yml/badge.svg?branch=main) [![Public Backlog](https://img.shields.io/badge/public%20backlog-808080)](https://github.com/users/kardolus/projects/2)

**Tested and Compatible with OpenAI ChatGPT, Azure OpenAI Service, Perplexity AI, Llama and 302.AI!**

ChatGPT CLI provides a powerful command-line interface for seamless interaction with ChatGPT models via OpenAI and Azure, featuring streaming capabilities and extensive configuration options.

![a screenshot](cmd/chatgpt/resources/vhs.gif)

## Table of Contents

- [Features](#features)
- [Prompt Support](#prompt-support)
  - [Using the prompt flag](#using-the---prompt-flag)
  - [Example](#example)
  - [Explore More Prompts](#explore-more-prompts)
- [MCP Support](#mcp-support)
  - [Overview](#overview)
  - [Examples](#examples)
  - [Default Version Behavior](#default-version-behavior)
  - [Handling MCP Replies](#handling-mcp-replies)
  - [Config](#config)
- [Installation](#installation)
  - [Using Homebrew (macOS)](#using-homebrew-macos)
  - [Direct Download](#direct-download)
    - [Apple Silicon](#apple-silicon)
    - [macOS Intel chips](#macos-intel-chips)
    - [Linux (amd64)](#linux-amd64)
    - [Linux (arm64)](#linux-arm64)
    - [Linux (386)](#linux-386)
    - [FreeBSD (amd64)](#freebsd-amd64)
    - [FreeBSD (arm64)](#freebsd-arm64)
    - [Windows (amd64)](#windows-amd64)
- [Getting Started](#getting-started)
- [Configuration](#configuration)
  - [General Configuration](#general-configuration)
  - [LLM Specific Configuration](#llm-specific-configuration)
  - [Custom Config and Data Directory](#custom-config-and-data-directory)
    - [Example for Custom Directories](#example-for-custom-directories)
  - [Variables for interactive mode](#variables-for-interactive-mode)
  - [Switching Between Configurations with --target](#switching-between-configurations-with---target)
  - [Azure Configuration](#azure-configuration)
  - [Perplexity Configuration](#perplexity-configuration)
  - [302.AI Configuration](#302ai-configuration)
- [Command-Line Autocompletion](#command-line-autocompletion)
  - [Enabling Autocompletion](#enabling-autocompletion)
  - [Persistent Autocompletion](#persistent-autocompletion)
- [Markdown Rendering](#markdown-rendering)
- [Development](#development)
  - [Using the Makefile](#using-the-makefile)
  - [Testing the CLI](#testing-the-cli)
- [Reporting Issues and Contributing](#reporting-issues-and-contributing)
- [Uninstallation](#uninstallation)
- [Useful Links](#useful-links)
- [Additional Resources](#additional-resources)

## Features

* **Streaming mode**: Real-time interaction with the GPT model.
* **Query mode**: Single input-output interactions with the GPT model.
* **Interactive mode**: The interactive mode allows for a more conversational experience with the model. Prints the token usage when combined with query mode.
* **Thread-based context management**: Enjoy seamless conversations with the GPT model with individualized context for each thread, much like your experience on the OpenAI website. Each unique thread has its own history, ensuring relevant and coherent responses across different chat instances.
* **Sliding window history**: To stay within token limits, the chat history automatically trims while still preserving the necessary context. The size of this window can be adjusted through the `context-window` setting.
* **Custom context from any source**: You can provide the GPT model with a custom context during conversation. This context can be piped in from any source, such as local files, standard input, or even another program. This flexibility allows the model to adapt to a wide range of conversational scenarios.
* **Support for images**: Upload an image or provide an image URL using the `--image` flag. Note that image support may not be available for all models. You can also pipe an image directly: `pngpaste - | chatgpt "What is this photo?"`
* **Generate images**: Use the `--draw` and `--output` flags to generate an image from a prompt (requires image-capable models like `gpt-image-1`).
* **Edit images**: Use the `--draw` flag with `--image` and `--output` to modify an existing image using a prompt (e.g., "add sunglasses to the cat"). Supported formats: PNG, JPEG, and WebP.
* **Audio support**: You can upload audio files using the `--audio` flag to ask questions about spoken content. This feature is compatible only with audio-capable models like `gpt-4o-audio-preview`. Currently, only `.mp3` and `.wav` formats are supported.
* **Transcription support**: You can also use the `--transcribe` flag to generate a transcript of the uploaded audio. This uses OpenAI's transcription endpoint (compatible with models like `gpt-4o-transcribe`) and supports a wider range of formats, incl…
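The `context-window` setting referenced in the features above lives in the tool's configuration file. As a rough sketch only: the README confirms the `context-window` setting and provider switching via `--target`, but the file path and the other key names shown here are assumptions, so consult the project's Configuration section for the authoritative schema:

```yaml
# Hypothetical config sketch (e.g. a config.yaml in the CLI's config directory).
# Only `context-window` is confirmed by the README excerpt above;
# `model` and `api_key` are illustrative, assumed key names.
model: gpt-4o          # assumed key: which model to query
api_key: sk-...        # assumed key: provider credential (keep out of version control)
context-window: 8192   # sliding-window size for chat history (see Features)
```

With per-provider configurations in place, the README's `--target` flag is what switches between them (for example, an OpenAI profile versus an Azure one) without editing the file each time.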

Reviews (0)

No reviews yet. Be the first!