orxhestra

v… · Apache 2.0

Multi-agent orchestration for Python. Compose agent teams in YAML, stream events live, and expose them over A2A or MCP.

$ pip install orxhestra

features

Everything you need to orchestrate agents

Agent ensemble

LLM, ReAct, Sequential, Parallel, Loop. Chain a researcher into a writer, fan out across analysts, or loop a coder with a reviewer until tests pass.

Event streaming

Every agent is an async generator yielding typed events — thoughts, tool calls, partial tokens, results — observable from any parent.
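A minimal sketch of that async-generator event pattern, with plain Python and illustrative names (`Event`, `toy_agent`, and the event kinds are assumptions, not orxhestra's actual API): each agent yields typed events, and any parent coroutine can observe them as they happen simply by iterating.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "thought", "tool_call", "token", "result"
    content: str

async def toy_agent():
    # An agent is just an async generator of typed events.
    yield Event("thought", "I should greet the user")
    yield Event("token", "Hello")
    yield Event("result", "Hello!")

async def collect():
    # A parent observes a child by iterating its generator.
    return [e async for e in toy_agent()]

events = asyncio.run(collect())
```

Because the contract is "async iterable of events", parents can forward, filter, or re-yield child events without knowing what kind of agent produced them.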

Composer

Declare entire orchestras in YAML. Tools, models, planners, loops, pipelines — all in one file. Run with orx my-agents.yaml.

Tool system

20+ built-in tools: filesystem, shell, background tasks, memory, agent-as-tool, tool search. Each caps its own output to prevent context overflow.

Auto-memory

Persistent per-project memories that survive sessions. Four typed categories with YAML frontmatter and an auto-maintained index.
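One plausible shape for such a memory file, and a stdlib-only way to split its frontmatter from its body (the field names and layout here are illustrative, not orxhestra's actual file format):

```python
MEMORY_FILE = """\
---
category: decision
created: 2025-01-15
---
We chose SQLite for session storage to avoid a server dependency.
"""

def split_frontmatter(text: str) -> tuple[dict, str]:
    """Minimal frontmatter split using only the stdlib (no PyYAML)."""
    _, header, body = text.split("---\n", 2)
    meta = dict(line.split(": ", 1) for line in header.strip().splitlines())
    return meta, body.strip()

meta, body = split_frontmatter(MEMORY_FILE)
```

Typed metadata up front makes it cheap to maintain an index: a tool can scan only the frontmatter of each file without loading every memory body.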

Planners

TaskPlanner injects status into every prompt and keeps the agent working until all tasks complete. PlanReAct adds structured chain-of-thought.
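A rough sketch of status injection (the rendered format is illustrative): the planner prepends the task list to every prompt so the model always sees what remains, and the outer loop only stops once nothing is pending.

```python
def render_status(tasks: dict[str, bool]) -> str:
    """Render tasks as a checklist to prepend to each prompt."""
    lines = [f"[{'x' if done else ' '}] {name}" for name, done in tasks.items()]
    return "Tasks:\n" + "\n".join(lines)

def all_done(tasks: dict[str, bool]) -> bool:
    # The exit condition for the planner's keep-working loop.
    return all(tasks.values())

tasks = {"add error handling": True, "write tests": False}
prompt_header = render_status(tasks)
```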

MCP & A2A

First-class Model Context Protocol for tool servers and Agent-to-Agent for cross-service communication. Expose any agent as A2A with one flag.

29 LLM providers

OpenAI, Anthropic, Google, Mistral, Cohere, Groq, DeepSeek, Ollama — and 22 more. Auto-detects provider from model name.

OpenAI · Azure · Anthropic · Google · Vertex AI · AWS Bedrock · Mistral · Cohere · Groq · DeepSeek · Ollama · xAI · NVIDIA · HuggingFace · Together · Fireworks · Perplexity · OpenRouter · IBM · Upstage · +9 more
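A plausible sketch of name-based auto-detection (the prefix table is illustrative; orxhestra's real mapping may differ and likely covers far more cases): match the model name against known prefixes and fall through when nothing matches.

```python
# Illustrative prefix table; a real mapping would be more complete.
PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
    "mistral-": "mistral",
    "deepseek-": "deepseek",
}

def detect_provider(model_name: str) -> str:
    for prefix, provider in PREFIXES.items():
        if model_name.startswith(prefix):
            return provider
    return "unknown"
```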

Build agents in minutes

Create powerful AI agents with a few lines of Python. Stream events in real time and compose complex orchestration patterns.

  • 29 LLM providers with auto-detection
  • Persistent auto-memory across sessions
  • Dark/light themes, thinking spinner, tool approval
  • Session management built-in
  • Type-safe with full Pydantic models
agent.py
import asyncio

from orxhestra import LlmAgent, Runner
from langchain_openai import ChatOpenAI

agent = LlmAgent(
    name="assistant",
    model=ChatOpenAI(model="gpt-5.4"),
    instructions="You are a helpful assistant.",
)

runner = Runner(agent=agent, app_name="my-app")

async def main():
    async for event in runner.astream(
        user_id="user-1",
        session_id="session-1",
        new_message="Hello!",
    ):
        print(event.content)

asyncio.run(main())

agents

A complete ensemble

LlmAgent

Chat model agent with tools, instructions, and structured output.

ReActAgent

Reasoning + acting loop with automatic tool use.

SequentialAgent

Runs sub-agents in order, like a pipeline.

ParallelAgent

Runs sub-agents concurrently for maximum throughput.

LoopAgent

Repeats sub-agents until an exit condition is met.

A2AAgent

Connects to remote agents via the A2A protocol.
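The Sequential, Parallel, and Loop patterns above can be sketched with plain coroutines (no orxhestra imports; the point here is the control structure, not the API):

```python
import asyncio

async def step(name: str) -> str:
    # Stand-in for running one sub-agent to completion.
    return f"{name}:done"

async def sequential(names):
    # Pipeline: each sub-agent runs after the previous one finishes.
    return [await step(n) for n in names]

async def parallel(names):
    # Fan-out: all sub-agents run concurrently.
    return await asyncio.gather(*(step(n) for n in names))

async def loop(name, should_exit, max_iters=5):
    # Repeat until the exit condition is met (or a safety cap is hit).
    results = []
    for _ in range(max_iters):
        results.append(await step(name))
        if should_exit(results):
            break
    return results

seq = asyncio.run(sequential(["planner", "coder"]))
par = asyncio.run(parallel(["analyst_a", "analyst_b"]))
looped = asyncio.run(loop("coder", lambda r: len(r) >= 2))
```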

cli

Your terminal, orchestrated

orxhestra-code: a full-featured coding agent with permissions, multi-file editing, and project-aware context.
terminal
orx> add error handling to the API routes

  ♪ analyzing…
  ┌ read_file
  │ src/api/routes.py
  └ done (0.1s)
  ┌ write_todos
  │ (task list update)
  └ done (0.0s)

Tasks:
   Add try/except to route handlers
   Add custom ErrorResponse model
   Write tests for error cases

  ┌ edit_file
  │ src/api/routes.py
  └ done (0.2s)
  ┌ shell_exec
  │ pytest tests/test_api.py
  └ done (2.8s)

✓ done — added structured error handling to all 4 route handlers.
  15.4s · 3,200 tokens · 14:32

composer

Declare. Compose. Run.

orx.yaml
# Define an entire coding agent in YAML
defaults:
  model:
    provider: openai
    name: gpt-5.4

tools:
  filesystem:
    builtin: "filesystem"
  shell:
    builtin: "shell"

agents:
  planner:
    type: llm
    instructions: Output actionable steps for the coder.

  coder:
    type: llm
    tools: [filesystem, shell]
    instructions: Execute the plan. Never ask the user.

  reviewer:
    type: llm
    instructions: Review the coder's changes and flag issues.

  dev_loop:
    type: loop
    agents: [coder, reviewer]

  coordinator:
    type: sequential
    agents: [planner, dev_loop]

main_agent: coordinator
terminal
# interactive agent
$ orx orx.yaml

# or expose as A2A server
$ orx orx.yaml --serve -p 9000
curl
$ curl -X POST http://localhost:9000/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"text": "Hello!"}]
      }
    }
  }'
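The same request can be built from Python; this sketch only constructs the JSON-RPC body shown in the curl example above (no network call, and the helper name is illustrative):

```python
import json

def a2a_message(text: str, req_id: str = "1") -> str:
    """Build the A2A message/send payload as a JSON string."""
    payload = {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "message/send",
        "params": {"message": {"role": "user", "parts": [{"text": text}]}},
    }
    return json.dumps(payload)

body = a2a_message("Hello!")
```

POST this body to the server with `Content-Type: application/json`, exactly as the curl example does.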

Ready to orchestrate?

Get started with orxhestra in minutes.