npcpy


Community

by NPC-Worldwide


The AI toolkit for the AI developer

Installation

# these are primarily for audio; skip if you don't need TTS
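The install commands themselves are missing from this listing. A minimal sketch, assuming the standard PyPI package name `npcpy` (matching this page's title); the audio/TTS extras referenced in the comment above may require additional system dependencies:

```shell
# core library only; skip audio extras if you don't need TTS
pip install npcpy
```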

Description

<p align="center"> <a href= "https://github.com/cagostino/npcpy/blob/main/docs/npcpy.md"> <img src="https://raw.githubusercontent.com/cagostino/npcpy/main/npcpy/npc-python.png" alt="npc-python logo" width=250></a> </p>

# npcpy

Welcome to `npcpy`, the core library of the NPC Toolkit, which supercharges natural language processing pipelines and agent tooling. `npcpy` is a flexible framework for building state-of-the-art applications and conducting novel research with LLMs.

Here is an example of getting a response from a particular agent:

```python
from npcpy.npc_compiler import NPC

simon = NPC(
    name='Simon Bolivar',
    primary_directive='Liberate South America from the Spanish Royalists.',
    model='gemma3:4b',
    provider='ollama'
)
response = simon.get_llm_response(
    "What is the most important territory to retain in the Andes mountains?"
)
print(response['response'])
```

```
The most important territory to retain in the Andes mountains is **Cuzco**. It’s the heart of the Inca Empire, a crucial logistical hub, and holds immense symbolic value for our liberation efforts. Control of Cuzco is paramount.
```

Here is an example of getting responses from an agent with tools:

```python
import os

from npcpy.npc_compiler import NPC
from npcpy.npc_sysenv import render_markdown


def list_files(directory: str = ".") -> list:
    """List all files in a directory."""
    return os.listdir(directory)


def read_file(filepath: str) -> str:
    """Read and return the contents of a file."""
    with open(filepath, 'r') as f:
        return f.read()


# Create an agent with fast, verifiable tools
assistant = NPC(
    name='File Assistant',
    primary_directive='You are a helpful assistant who can list and read files.',
    model='llama3.2',
    provider='ollama',
    tools=[list_files, read_file],
)

response = assistant.get_llm_response(
    "List the files in the current directory.",
    # this is the default for NPCs, but not the default for the
    # upstream get_llm_response function
    auto_process_tool_calls=True,
)

# show the keys of the response returned by get_llm_response
print(response.keys())
```

```
dict_keys(['response', 'raw_response', 'messages', 'tool_calls', 'tool_results'])
```

```python
for tool_call in response['tool_results']:
    render_markdown(tool_call['tool_call_id'])
    for arg in tool_call['arguments']:
        render_markdown('- ' + arg + ': ' + str(tool_call['arguments'][arg]))
    render_markdown('- Results:' + str(tool_call['result']))
```

```
• directory: .
• Results:['research_pipeline.jinx', '.DS_Store', 'mkdocs.yml', 'LICENSE', '.pytest_cache', 'npcpy', 'Makefile', 'test_data', 'README.md.backup', 'tests', 'screenshot.png', 'MANIFEST.in', 'docs', 'hero_image_tech_startup.png', 'README.md', 'test.png', 'npcpy.png', 'setup.py', '.gitignore', '.env', 'examples', 'npcpy.egg-info', 'bloomington_weather_image.png.png', '.github', '.python-version', 'generated_image.png', 'documents', '.env.example', '.git', '.npcsh_global', 'hello.txt', '.readthedocs.yaml', 'reports']
```

Here is an example of setting up an agent team to use Jinja Execution (Jinx) templates, which are processed entirely with prompts, allowing you to use them with models that do or do not possess tool-calling support:

```python
import os

from jinja2 import Environment, Undefined, DictLoader  # Jinja2 components used inside Jinx code

from npcpy.npc_compiler import NPC, Team, Jinx
from npcpy.tools import auto_tools

# --- REVISED file_reader_jinx ---
file_reader_jinx = Jinx(jinx_data={
    "jinx_name": "file_reader",
    "description": "Read a file and optionally summarize its contents using an LLM.",
    "inputs": ["filename"],
    "steps": [
        {
            "name": "read_file_content",
            "engine": "python",
            "code": '''
import os
from jinja2 import Environment, Undefined, DictLoader  # local import for the Jinx step

# The 'filename' input to the file_reader jinx might be a Jinja template
# string like "{{ source_filename }}" or a direct filename, so render it
# against the current execution context.

# Get the Jinja environment from the NPC if available, otherwise create a
# default one. The 'npc' variable is available in the Jinx execution
# context; make sure it exists before accessing its 'jinja_env'.
execution_jinja_env = npc.jinja_env if npc else Environment(loader=DictLoader({}), undefined=Undefined)

# Render the filename. The current 'context' should contain the variables
# needed for rendering. For declarative calls, the parent Jinx's inputs
# (like 'source_filename') will be in this context. We also need to ensure
# the value from context['filename'] is treated as a template string.
''',
        },
    ],
})
```

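The template-rendering step discussed above can be sketched in isolation with plain Jinja2 (a minimal sketch independent of npcpy; the template string and the `source_filename` context variable are illustrative):

```python
from jinja2 import Environment, DictLoader, Undefined

# A default environment like the fallback used in the Jinx step
env = Environment(loader=DictLoader({}), undefined=Undefined)

# The 'filename' input may itself be a template referencing a
# parent Jinx's inputs, so it is rendered against the current context
context = {"source_filename": "notes.txt"}
rendered = env.from_string("{{ source_filename }}").render(context)
print(rendered)  # -> notes.txt
```

With `undefined=Undefined`, variables missing from the context render as empty strings rather than raising, which suits prompt-template pipelines where inputs may be optional.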