
Documentation Index

Fetch the complete documentation index at: https://docs.xpander.ai/llms.txt

Use this file to discover all available pages before exploring further.

Xpander.ai is the platform for building production AI agents that connect to enterprise APIs, data, and channels. There are two ways to start building AI agents:
  • Agent Studio: a visual app at app.xpander.ai where you click together instructions, tools, and knowledge
  • SDK + CLI: the same platform, driven from Python
The SDK lets you use different agent frameworks, create tools from private APIs, run agents in your own containers, or embed Xpander in your Python tools. Anything you build in Agent Studio is also available in code. New to xpander overall? Start with What is xpander.ai?.

Install

You need all three steps. The CLI and SDK are separate packages with different runtimes: the CLI ships via npm, the SDK via pip.
# 1. CLI: scaffolds projects, deploys, streams logs
npm install -g xpander-cli

# 2. SDK: the runtime library you import in Python
pip install "xpander-sdk[agno]"

# 3. Auth: opens a browser, writes ~/.xpander/credentials
xpander login
The local dev server needs Python 3.12+; the SDK itself runs on Python 3.9+.

Quickstart

10-minute scaffold-to-deploy walkthrough. Start here once you've installed.

When to use code

Agent Studio covers most agent-building. Reach for the SDK when one of these applies:
  • Use a specific framework. You're already invested in Agno, OpenAI Agents SDK, LangChain, or AWS Strands.
  • Wrap a private API as a tool. Decorate a Python function with @register_tool; the SDK generates the JSON schema from your type hints. Example: a lookup_customer(id) tool that hits your internal billing service.
  • Custom dependencies. Numerical libraries, system packages, your own internal SDKs: anything the serverless runtime doesn't ship. Deploy as a container instead.
  • Embed in an existing service. Run an agent inside code you already deploy (a FastAPI worker, a cron job, a Slack bot) without standing up a separate process.
  • Programmatic scale. Spawn many tasks at once. Example: backfill structured fields across 10k support tickets, or run an eval suite that compares two agent versions on a fixed prompt set.
If none of that applies, Agent Studio is faster.
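To make the hint-to-schema idea concrete, here is a toy stand-in for such a decorator. This is illustrative only: the real @register_tool in xpander-sdk may use a different signature and produce a richer schema.

```python
# Toy sketch: derive a JSON-schema-style tool description from type hints.
# NOT the real xpander-sdk decorator; names and shapes here are assumptions.
from typing import get_type_hints

PY_TO_JSON = {int: "integer", str: "string", float: "number", bool: "boolean"}

REGISTRY: dict = {}

def register_tool(fn):
    """Record a schema built from the function's signature and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    props = {name: {"type": PY_TO_JSON.get(t, "string")} for name, t in hints.items()}
    REGISTRY[fn.__name__] = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": list(props)},
    }
    return fn

@register_tool
def lookup_customer(id: str) -> dict:
    """Look up a customer in the internal billing service."""
    return {"id": id, "plan": "enterprise"}  # placeholder for a real API call
```

The LLM sees only the generated schema; your Python body runs when the model calls the tool.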

How it fits together

Xpander splits into two halves:
  1. The control plane (cloud or self-hosted) owns the agent's identity: instructions, tools, model + credentials, knowledge bases, session storage.
  2. Your process owns the execution loop: the framework that decides what to call and when.
They talk through Backend, which fetches the agent and returns a dict ready to splat into your framework's Agent constructor.
┌─────────────────────────┐         ┌─────────────────────────────┐
│ xpander control plane   │         │ Your process                │
│ (cloud or self-hosted)  │         │                             │
│                         │         │ Backend.aget_args(...)      │
│ • Agent definition      │ ──────▶ │   returns framework args    │
│ • Tools / connectors    │         │                             │
│ • Knowledge bases       │         │ AgnoAgent(**args)           │
│ • Model + credentials   │         │   runs the LLM loop         │
│ • Instructions          │         │                             │
│ • Postgres (sessions)   │ ◀────── │ @on_task receives a Task,   │
└─────────────────────────┘         │ writes task.result, returns │
                                    └─────────────────────────────┘
This split is why the SDK stays small and your code stays your code. There's no xpander-flavored wrapper around your framework. You instantiate the framework's own Agent class with arguments xpander provides.
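The handoff can be sketched with stand-in classes. Backend and AgnoAgent below are stubs shaped like the flow described above, not the real xpander-sdk or Agno APIs; the point is the pattern, not the exact signatures.

```python
# Sketch of the split: stubs mimic control plane + framework, nothing here
# is the real xpander-sdk or Agno implementation.
import asyncio

class Backend:
    """Stub control-plane client; the real one fetches the agent definition."""
    async def aget_args(self, agent_id: str) -> dict:
        # The real call returns instructions, tools, model config, storage,
        # etc., keyed to match the chosen framework's Agent constructor.
        return {"instructions": "You are a support agent.", "tools": []}

class AgnoAgent:
    """Stub for the framework's own Agent class (no xpander wrapper)."""
    def __init__(self, **kwargs):
        self.instructions = kwargs.get("instructions")
        self.tools = kwargs.get("tools", [])

async def main() -> AgnoAgent:
    args = await Backend().aget_args("my-agent-id")  # control plane -> dict
    return AgnoAgent(**args)                         # splat into the framework

agent = asyncio.run(main())
```

Because the dict's keys match the framework constructor, swapping frameworks changes which Agent class you instantiate, not where the agent's definition lives.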

What you'll work with

A typical project pulls in three pieces:
  • Python SDK (xpander-sdk): runtime classes (Backend, Agents, Task) and decorators (@on_task, @register_tool). The class-by-class breakdown lives in Core Concepts.
  • CLI (xpander): scaffolds projects, deploys containers, streams logs, manages auth. Every command is in the CLI Reference.
  • A framework: Agno is the recommended path because the SDK does the most wiring for it. OpenAI Agents SDK, LangChain, and AWS Strands are also supported; the Frameworks overview compares what's auto-wired vs. manual for each.
When you run xpander agent new, the CLI generates a starter project in your current directory. Here's what it creates:
xpander_handler.py     # Your @on_task handler. The entry point.
xpander_config.json    # Agent ID, framework selection.
agent_instructions.json
requirements.txt
Dockerfile             # For container deployment.
.env                   # XPANDER_API_KEY, XPANDER_ORGANIZATION_ID, XPANDER_AGENT_ID.
For most projects, xpander_handler.py is the only file you'll edit.
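The .env file carries the three variables named above; with placeholder values (fill in your own), it looks like:

```
XPANDER_API_KEY=<your-api-key>
XPANDER_ORGANIZATION_ID=<your-org-id>
XPANDER_AGENT_ID=<your-agent-id>
```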

Quickstart

Scaffold and run your first agent locally in under 10 minutes.

Core Concepts (SDK lens)

The SDK class names mapped onto agents, tasks, threads, tools, and memory.

Frameworks

Pick a framework and see what xpander wires up for you.

SDK Reference

Per-module class and method documentation.