xpander.ai is a platform for building production AI agents. An agent is the platform's central object: a configured LLM with instructions, tools, knowledge bases, memory, and a deployment target. This page is the SDK companion that tells you which Python class each of those concepts becomes when you `import xpander_sdk`.
The job of this page: when you see `Backend`, `Task`, or `agent.tools.functions` in code, you should know exactly what they are without context-switching back to the conceptual docs.
Product Guides → Core Concepts: the same model from the Agent Studio side. Read that for the concepts; read this for the class names.
The two halves
xpander splits responsibilities between a control plane (cloud or self-hosted) and your runtime. Reading code, you'll move between three classes constantly: `Backend`, `Agent`, and `Task`.

Backend
Backend is the gateway. It's the only class that knows how to talk to the control plane to fetch a fully resolved agent definition. Inside an `@on_task` handler, the typical use is one line: `args = await Backend().aget_args(task=task)`. That call resolves:
- Model client with credentials.
- Instructions (system prompt, role, goal).
- Tools (connectors + your `@register_tool` functions).
- Session DB.
- Memory settings.
- Output schema.
Passing those straight into `agno.agent.Agent(**args)` is the production pattern.
Pass the current task so the SDK can forward task-level overrides (instructions overrides, expected output, output schema) through to the framework. Both get_args (sync) and aget_args (async) accept the same arguments. The async form is what you’ll use inside @on_task and any FastAPI service; the sync form is fine in scripts and notebooks. This async/sync pairing holds across the entire SDK.
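Put together, a minimal handler sketch. This assumes Agno is the selected framework; `agent.arun(...)` and `response.content` are Agno's API, and the rest follows the class names on this page:

```python
from agno.agent import Agent
from xpander_sdk import Backend, Task, on_task

@on_task
async def handler(task: Task):
    # Resolve the fully configured agent definition from the control
    # plane; credentials are read from the XPANDER_* env vars.
    args = await Backend().aget_args(task=task)
    agent = Agent(**args)
    # Feed the task's text and file content in as one message.
    response = await agent.arun(task.to_message())
    task.result = response.content
    return task  # returning cleanly marks the task completed
```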
Agent
Agent is the loaded, in-memory representation of an agent. It carries everything the control plane knows about it:
- Name and unique identifier.
- Instructions (role, goal, general).
- Framework selection.
- Model + provider.
- Deployment type (`Serverless` or `Container`).
- Tools, knowledge bases, sub-agents.
- Memory settings.
There are two ways to fetch one:

- `Agents().get(agent_id=...)`: returns a fully loaded `Agent`. Heavyweight.
- `Agents().list()`: returns `AgentsListItem` summaries (just names + IDs). Faster when you're enumerating. Call `.aload()` on a list item to upgrade it to a full `Agent`.
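The enumerate-then-load pattern as a sketch. `alist` is the assumed async variant of `.list()` (following the SDK's `a`-prefix convention), and `item.name` is inferred from the 'names + IDs' description:

```python
from xpander_sdk import Agents

async def find_agent(name: str):
    # Cheap: enumerate lightweight summaries (names + IDs only).
    for item in await Agents().alist():  # assumed async variant of .list()
        if item.name == name:
            # Expensive: upgrade the summary to a full Agent.
            return await item.aload()
    return None
```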
Task
A `Task` is one execution. It has an ID, a status, an input, and a result.
Statuses: `pending`, `executing`, `completed`, `failed`, plus a few transitional states.
Inside @on_task, the platform creates the Task for you and you receive it as a parameter. Your job is to set task.result and return the task. The decorator marks it completed if you return cleanly, or failed if you raise.
Helpers on Task for framework integration:
- `task.to_message()`: joins input text, file URLs, and any embedded readable file content into a single string ready to feed into Agno.
- `task.get_files()`: returns PDFs as `agno.media.File` objects.
- `task.get_images()`: returns image URLs as `agno.media.Image` objects.
Any task can be re-fetched later with `Tasks().get(task_id)`. That's how you implement retries, audit logging, or deferred result fetching.
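A deferred-fetch sketch using that repository. `aget` is the assumed async variant of `.get(task_id)`, and the comparison assumes plain string status values as listed above:

```python
from xpander_sdk import Tasks

async def poll_result(task_id: str):
    # Re-load a task by ID from the control plane, e.g. from an
    # audit job or a separate process waiting on the result.
    task = await Tasks().aget(task_id=task_id)  # assumed async form of .get
    if task.status == "completed":  # statuses: pending, executing, completed, failed
        return task.result
    return None
```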
Threads (sessions)
The SDK doesn't have a class called `Thread`. What Agent Studio shows as a thread is a `session_id` shared across multiple `Task`s, with the conversation history persisted in the agent's Postgres schema.
If your agent has Agno session storage enabled, you can list and inspect those sessions directly with `agent.get_user_sessions(user_id)` and `agent.get_session(session_id)`.
`agent.get_db()` returns the underlying `agno.db.postgres.AsyncPostgresDb` instance scoped to this agent's schema. Reach for it directly only when you want to do something Agno doesn't expose, like writing custom Postgres queries against session metadata.
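A sketch of inspecting a user's threads via those helpers. The `aget` async variant and the awaitability of the session helpers are assumptions, following the SDK's async/sync pairing:

```python
from xpander_sdk import Agents

async def show_threads(agent_id: str, user_id: str) -> None:
    agent = await Agents().aget(agent_id=agent_id)  # assumed async variant of .get
    for session in await agent.get_user_sessions(user_id):
        # Each session corresponds to one "thread" in Agent Studio.
        print(session)
```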
Connectors and tools
Whatever you select in Agent Studio under "Tools" becomes available in code as part of `agent.tools`. There are three flavors and one container:
- Connectors: pre-built integrations from the catalog (Slack, Gmail, GitHub, 2,000+). Authenticated and configured in Agent Studio.
- Custom tools: Python functions you've decorated with `@register_tool` in your handler.
- MCP servers: Model Context Protocol servers, either remote (URL) or local (process), wired in through Agent Studio.
The container is `agent.tools`: the unified `ToolsRepository` that flattens all three into one list. `agent.tools.list` enumerates `Tool` objects; `agent.tools.functions` returns normalized callables ready to bind to LangChain, OpenAI Agents SDK, or any framework that accepts plain Python functions.
Each `Tool` has a `parameters` JSON schema that the agent's LLM uses to call it. The SDK auto-generates that schema from your function's type hints and docstring, which is why annotation matters more than usual here.
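The generation step can be illustrated with a plain-Python sketch. This is an illustration of the idea, not the SDK's actual implementation; the type mapping and field names are assumptions:

```python
import inspect
from typing import get_type_hints

def schema_from_function(fn):
    """Derive a minimal JSON-schema-like dict from a function's signature."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters belong in the schema
    sig = inspect.signature(fn)
    required = [
        name for name, param in sig.parameters.items()
        if param.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": {n: {"type": type_map.get(t, "object")} for n, t in hints.items()},
            "required": required,
        },
    }

def get_weather(city: str, units: str = "metric") -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city} ({units})"

schema = schema_from_function(get_weather)
```

Untyped parameters would fall through to a generic type here, which is exactly why annotating every parameter pays off.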
Knowledge bases
KnowledgeBase represents one document collection. The platform-managed type is the default; external KBs (your own vector store) are an advanced setup.
Attached collections show up on `agent.knowledge_bases` and are queried automatically by the framework. Call `kb.search` directly only when you're building something outside the agent loop, like a search box or a one-off enrichment job.
Memory
There are three memory layers, and they're not the same thing. The Agno framework configures all of them through `agent.agno_settings`:
- Session storage (`session_storage=True`, default): keeps conversation history within a single thread. Postgres-backed, scoped per agent.
- User memories (`user_memories=True` or `agentic_memory=True`): facts the agent should remember about a specific user, across all their sessions. The two flags select between manual and agentic-managed mode.
- Agent memories (`agent_memories=True` or `agentic_culture=True`): organization-wide knowledge the agent should carry into every conversation. Same two-flag pattern.
- Session storage is essentially free.
- User and agent memories cost LLM calls to maintain.
Decorators
The SDK exports a small set of decorators that handle the lifecycle for you:

- `@on_task`: the entry point. Stands up the HTTP server and subscribes to the platform task stream.
- `@on_boot` / `@on_shutdown`: lifecycle hooks that run before and after the handler is registered.
- `@register_tool`: turns a Python function into an agent tool. The SDK generates the JSON schema from your type hints.
- `@on_tool_before` / `@on_tool_after` / `@on_tool_error`: observe tool calls without modifying the tools themselves. Use for logging, metrics, alerting.
- `@on_auth_event`: fires when an MCP OAuth flow needs the user to log in.
Most agents need only `@on_task` plus `@register_tool`. The rest are situational and covered in their own pages.
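A minimal `@register_tool` sketch. The decorator name comes from this page; the function body and names are hypothetical:

```python
from xpander_sdk import register_tool

@register_tool
def get_order_status(order_id: str) -> str:
    """Look up the shipping status of an order by its ID."""
    # Hypothetical lookup; the SDK derives the tool's JSON schema
    # from the type hint (order_id: str) and this docstring.
    return f"Order {order_id}: shipped"
```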
Configuration
Configuration is the explicit form of the credentials and base URL. Most code never instantiates it: the SDK reads `XPANDER_API_KEY`, `XPANDER_ORGANIZATION_ID`, and `XPANDER_BASE_URL` automatically. With a self-hosted control plane, set `base_url` and use the agent controller's API key, not your cloud key. See SDK configuration for the full setup.
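If you do need the explicit form, a hedged sketch; the keyword names here are inferred from the environment variables above and are assumptions, not confirmed SDK signatures:

```python
from xpander_sdk import Backend, Configuration

config = Configuration(
    api_key="...",           # XPANDER_API_KEY
    organization_id="...",   # XPANDER_ORGANIZATION_ID
    base_url="https://...",  # XPANDER_BASE_URL; controller URL when self-hosting
)
backend = Backend(configuration=config)  # parameter name assumed
```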
Cheat sheet
A one-line answer for the questions you'll have most often:

| Question | Answer |
|---|---|
| Where do I configure the agent’s behavior? | Agent Studio. The SDK reads it. |
| Where do my custom tools live? | In your xpander_handler.py, decorated with @register_tool. |
| How do I run an agent in code? | Inside @on_task: Agent(**(await Backend(...).aget_args(task=task))). Framework-specific for others. |
| How do I receive incoming tasks? | A function decorated with @on_task. |
| How do I find session data? | agent.get_user_sessions(user_id) or agent.get_session(session_id). |
| How do I add a document to a KB? | KnowledgeBases().get(kb_id).add_documents([url]). |
| Async or sync? | Every method has both. Async in production, sync in scripts. |
Next steps
- Frameworks: Agno. What `Backend.aget_args()` actually wires up, with the `AgnoSettings` reference.
- Custom Tools. The full `@register_tool` decorator surface.
- Memory & State. Session storage, user memories, and agent memories explained.
- SDK Reference. Per-method reference for every class on this page.

