Multi-agent orchestration for Python. Compose agent teams in YAML, stream events live, and expose them over A2A or MCP.
features
Composable agent types: LLM, ReAct, Sequential, Parallel, and Loop. Chain a researcher into a writer, fan out across analysts, or loop a coder with a reviewer until the tests pass.
Every agent is an async generator yielding typed events — thoughts, tool calls, partial tokens, results — observable from any parent.
Declare entire orchestras in YAML. Tools, models, planners, loops, pipelines — all in one file. Run with orx my-agents.yaml.
20+ built-in tools: filesystem, shell, background tasks, memory, agent-as-tool, tool search. Each caps its own output to prevent context overflow.
Persistent per-project memories that survive sessions. Four typed categories with YAML frontmatter and an auto-maintained index.
TaskPlanner injects task status into every prompt and keeps the agent working until all tasks are complete; PlanReAct adds structured chain-of-thought (a rough sketch follows this list).
First-class Model Context Protocol for tool servers and Agent-to-Agent for cross-service communication. Expose any agent as A2A with one flag.
OpenAI, Anthropic, Google, Mistral, Cohere, Groq, DeepSeek, Ollama — and 22 more. Auto-detects provider from model name.
OpenAI · Azure · Anthropic · Google · Vertex AI · AWS Bedrock · Mistral · Cohere · Groq · DeepSeek · Ollama · xAI · NVIDIA · HuggingFace · Together · Fireworks · Perplexity · OpenRouter · IBM · Upstage · +9 more
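As an illustration of the planner feature above, here is a minimal sketch of attaching TaskPlanner to an agent. Only the TaskPlanner and PlanReAct names come from the feature list; the import path and the planner parameter are assumptions, not documented API.

# Sketch only: the import path and the planner= parameter are assumed.
from orxhestra import LlmAgent
from orxhestra.planners import TaskPlanner  # hypothetical import path
from langchain_openai import ChatOpenAI

agent = LlmAgent(
    name="coder",
    model=ChatOpenAI(model="gpt-5.4"),
    planner=TaskPlanner(),  # assumed: injects task status into each prompt
    instructions="Work through the task list until every task is done.",
)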
Create powerful AI agents with a few lines of Python. Stream events in real time and compose complex orchestration patterns.
import asyncio

from orxhestra import LlmAgent, Runner
from langchain_openai import ChatOpenAI

agent = LlmAgent(
    name="assistant",
    model=ChatOpenAI(model="gpt-5.4"),
    instructions="You are a helpful assistant.",
)

runner = Runner(agent=agent, app_name="my-app")

async def main():
    async for event in runner.astream(
        user_id="user-1",
        session_id="session-1",
        new_message="Hello!",
    ):
        print(event.content)

asyncio.run(main())
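Each event carries its type, so a consumer can branch on what arrived. A rough sketch inside the same async context as above; the event.type and event.tool_name fields are assumptions for illustration (only event.content appears in the quickstart):

# Hypothetical event filtering: event.type and event.tool_name are assumed
# field names; only event.content is shown in the quickstart above.
async for event in runner.astream(
    user_id="user-1",
    session_id="session-1",
    new_message="Refactor the API routes.",
):
    if event.type == "tool_call":
        print(f"-> {event.tool_name}")            # a tool or sub-agent was invoked
    elif event.type == "token":
        print(event.content, end="", flush=True)  # partial model output
    elif event.type == "result":
        print("\n" + event.content)               # final answer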
agents
LLM: Chat model agent with tools, instructions, and structured output.
ReAct: Reasoning + acting loop with automatic tool use.
Sequential: Runs sub-agents in order, like a pipeline.
Parallel: Runs sub-agents concurrently for maximum throughput.
Loop: Repeats sub-agents until an exit condition is met.
A2A: Connects to remote agents via the A2A protocol.
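These types can be composed directly in Python as well as in YAML. A minimal sketch, assuming the class names SequentialAgent, ParallelAgent, and LoopAgent mirror the YAML types (only LlmAgent and Runner appear in the quickstart; the other class names and the max_iterations parameter are assumptions):

# Sketch: SequentialAgent, ParallelAgent, LoopAgent and max_iterations are
# assumed names that mirror the YAML types (sequential, parallel, loop).
from orxhestra import LlmAgent, SequentialAgent, ParallelAgent, LoopAgent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-5.4")

researcher = LlmAgent(name="researcher", model=model, instructions="Gather sources.")
analyst_a = LlmAgent(name="analyst_a", model=model, instructions="Analyze market data.")
analyst_b = LlmAgent(name="analyst_b", model=model, instructions="Analyze user feedback.")
writer = LlmAgent(name="writer", model=model, instructions="Draft the report.")
reviewer = LlmAgent(name="reviewer", model=model, instructions="Critique the draft.")

pipeline = SequentialAgent(
    name="report_pipeline",
    agents=[
        researcher,
        ParallelAgent(name="analysts", agents=[analyst_a, analyst_b]),   # fan out
        LoopAgent(name="draft_loop", agents=[writer, reviewer], max_iterations=3),
    ],
)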
cli
orx> add error handling to the API routes

♪ analyzing…

┌ read_file
│ src/api/routes.py
└ done (0.1s)

┌ write_todos
│ (task list update)
└ done (0.0s)

Tasks:
▸ Add try/except to route handlers
○ Add custom ErrorResponse model
○ Write tests for error cases

┌ edit_file
│ src/api/routes.py
└ done (0.2s)

┌ shell_exec
│ pytest tests/test_api.py
└ done (2.8s)

✓ done — added structured error handling to all 4 route handlers.

15.4s · 3,200 tokens · 14:32
composer
# Define an entire coding agent in YAML
defaults:
  model:
    provider: openai
    name: gpt-5.4

tools:
  filesystem:
    builtin: "filesystem"
  shell:
    builtin: "shell"

agents:
  planner:
    type: llm
    instructions: Output actionable steps for the coder.

  coder:
    type: llm
    tools: [filesystem, shell]
    instructions: Execute the plan. Never ask the user.

  reviewer:
    type: llm
    tools: [shell]
    instructions: Review the coder's changes and run the tests.

  dev_loop:
    type: loop
    agents: [coder, reviewer]

  coordinator:
    type: sequential
    agents: [planner, dev_loop]

main_agent: coordinator
# interactive agent
$ orx orx.yaml

# or expose as A2A server
$ orx orx.yaml --serve -p 9000
$ curl -X POST http://localhost:9000/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"text": "Hello!"}]
      }
    }
  }'
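The same request from Python, using httpx to send the JSON-RPC message shown above (the response schema is not documented here, so the sketch just prints the raw JSON):

# Equivalent of the curl call above, using httpx (any HTTP client works).
import httpx

payload = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
        "message": {"role": "user", "parts": [{"text": "Hello!"}]}
    },
}

response = httpx.post("http://localhost:9000/", json=payload, timeout=60.0)
print(response.json())  # raw JSON-RPC response; shape depends on the A2A server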