
The OpenAI Agents SDK is OpenAI's lightweight framework for building tool-using agents. It gives you a small Agent class plus a Runner that drives the LLM loop. xpander.ai supplies the agent's identity (instructions, tools, model, knowledge bases); this page wires the two together. In this guide, we'll create an xpander agent and run it with the OpenAI Agents SDK.

What doesn’t come built in

Unlike Agno, some capabilities aren’t auto-wired. Configure them yourself:
  • Knowledge-base retrieval: wrap xpander_agent.knowledge_bases_retriever() in a @function_tool and concatenate it onto openai_agents_sdk_tools. Full example in step 5.
  • Session storage: the OpenAI Agents SDK has no Postgres-backed store. Persist result.to_input_list() between turns and pass it back as input=previous_items + new_message on the next run (a sketch is in Troubleshooting), or move to the Agno integration for a managed store.
  • Guardrails: implement as pre-checks before Runner.run, or use the OpenAI Agents SDK's own input_guardrails parameter on Agent (see the sketch after this list). Agno's PII / prompt-injection / OpenAI-moderation pre-hooks don't apply here.
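As a sketch of the input_guardrails route, with a placeholder check you'd replace by a real PII or prompt-injection detector (input_guardrail, GuardrailFunctionOutput, and InputGuardrailTripwireTriggered are importable from agents):
from agents import GuardrailFunctionOutput, InputGuardrailTripwireTriggered, input_guardrail

@input_guardrail
async def reject_pii(ctx, agent, user_input) -> GuardrailFunctionOutput:
    # Placeholder check; swap in a real PII / prompt-injection detector.
    flagged = isinstance(user_input, str) and "ssn" in user_input.lower()
    return GuardrailFunctionOutput(output_info={"flagged": flagged}, tripwire_triggered=flagged)

# Attach it when building the native Agent in step 3:
#   native = OpenAIAgent(..., input_guardrails=[reject_pii])
# A tripped guardrail raises InputGuardrailTripwireTriggered from Runner.run;
# catch it and write a refusal into task.result.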

Prerequisites

  • Complete the Quickstart so the CLI, SDK, and xpander login are already set up.
  • Python 3.12+ for the local handler.
  • OPENAI_API_KEY in your shell. The OpenAI Agents SDK uses its own OpenAI client and does not pick up the LLM credentials configured on the agent in Agent Studio. If you’ve set a custom key on the agent, mirror it into your .env so the runner uses it.
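Put together, a local .env for this integration looks like the following (all values are placeholders):
.env
XPANDER_API_KEY=xpd_...
XPANDER_ORGANIZATION_ID=org_01H...
XPANDER_AGENT_ID=agt_01H...
OPENAI_API_KEY=sk-...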

1. Install

Both packages are required. The OpenAI Agents SDK ships under the openai-agents distribution but imports as agents.
# 1. xpander runtime (provides the openai_agents_sdk_tools adapter property).
pip install xpander-sdk

# 2. OpenAI Agents SDK itself (imported as `agents`).
pip install openai-agents

2. Set up scaffolding

# Create an agent named "my-first-agent"
xpander agent new \
    --name "my-first-agent" \
    --framework "open-ai-agents" \
    --folder "."
These files get created:
./
├── xpander_handler.py        # Your @on_task entry point. The file you'll edit most.
├── xpander_config.json       # Agent ID, organization ID, API key, framework selection.
├── agent_instructions.json   # role / goal / general (the agent's system prompt).
├── requirements.txt          # Python dependencies (xpander-sdk and openai-agents pinned here).
├── Dockerfile                # Used by xpander agent deploy.
└── .env                      # XPANDER_API_KEY, XPANDER_ORGANIZATION_ID, XPANDER_AGENT_ID.

xpander_config.json reference

xpander_config.json
{
  "agent_id": "agt_01H...",
  "organization_id": "org_01H...",
  "api_key": "xpd_...",
  "framework": "open-ai-agents"
}

3. Create task handler

The full pattern, wrapped in @on_task so the platform routes tasks to it. The load-bearing lines are the four reads off xpander_agent, each explained below:
xpander_handler.py
from dotenv import load_dotenv
load_dotenv()  # loads XPANDER_API_KEY, OPENAI_API_KEY, etc. before any sdk import

from xpander_sdk import on_task, Task, Agents
from agents import Agent as OpenAIAgent, Runner

@on_task
async def handler(task: Task) -> Task:
    # 1. Load the xpander agent (instructions, tools, model are all on it).
    xpander_agent = await Agents(configuration=task.configuration).aget(
        agent_id=task.agent_id,
    )

    # 2. Build the OpenAI Agents SDK's own Agent from three xpander fields.
    native = OpenAIAgent(
        name=xpander_agent.name,
        instructions=xpander_agent.instructions.full,
        tools=xpander_agent.openai_agents_sdk_tools,
        model=xpander_agent.model_name,
    )

    # 3. Run the LLM loop with the task's user message.
    result = await Runner.run(native, input=task.to_message())

    # 4. Write the result back so the platform can store and display it.
    task.result = result.final_output
    return task
Here’s what’s happening:
  1. Agents(configuration=...).aget(agent_id=...) calls the xpander control plane and returns a fully-hydrated Agent object. Its instructions, tool repository, model, and knowledge-base links are all populated.
  2. xpander_agent.instructions.full is a single string that wraps the agent’s general description, role list, and goal list in <description>, <instructions>, and <goals> tags. Drop it straight into the OpenAI Agents SDK’s instructions parameter.
  3. xpander_agent.openai_agents_sdk_tools is a computed property that wraps every xpander tool (connectors, custom @register_tool functions, MCP tools) as a FunctionTool from agents.tool. Each wrapper’s on_invoke_tool calls back into xpander’s tool execution path, so connector auth, observability, and retries still work.
  4. xpander_agent.model_name is the model identifier configured on the agent (e.g. gpt-4.1, gpt-4o). Pass it to the OpenAI Agents SDK’s model parameter.
  5. Runner.run(native, input=task.to_message()) drives the LLM loop. task.to_message() returns the task’s user message (text plus any attachments) in the shape the runner expects.
  6. Writing back to task.result lets xpander store the output and surface it in the API, Agent Studio, and any wired channels.

4. Edit the agent’s system prompt

agent_instructions.json contains the agent’s system prompt and has exactly three fields:
agent_instructions.json
{
  "role": [
    "You are a customer support assistant for Acme.",
    "Always confirm the customer's account ID before taking any action."
  ],
  "goal": [
    "Resolve the customer's issue in as few turns as possible.",
    "Escalate to a human if the request involves a refund over $500."
  ],
  "general": "Be concise, professional, and friendly. Never invent policy details; if you don't know something, say so and offer to escalate."
}
Save the file and the next xpander agent dev or xpander agent deploy syncs it to the control plane.
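For reference, xpander_agent.instructions.full (step 3) wraps these three fields in <description>, <instructions>, and <goals> tags; given the file above, it renders roughly like this (exact layout is an approximation):
<description>
Be concise, professional, and friendly. Never invent policy details; if you don't know something, say so and offer to escalate.
</description>
<instructions>
You are a customer support assistant for Acme.
Always confirm the customer's account ID before taking any action.
</instructions>
<goals>
Resolve the customer's issue in as few turns as possible.
Escalate to a human if the request involves a refund over $500.
</goals>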

5. Wire knowledge-base retrieval (optional)

The OpenAI Agents SDK doesn't auto-wire xpander's knowledge bases, so expose the retriever as a @function_tool the runner can call. The two integration points are building the retriever tool and concatenating it onto the auto-wired tool list:
xpander_handler.py
from agents import Agent as OpenAIAgent, Runner, function_tool

# Defined inside the handler, after step 1 loads xpander_agent,
# so the tool closes over the hydrated agent.
@function_tool
async def search_knowledge_base(query: str, num_documents: int = 5) -> list[dict]:
    """Search the agent's linked knowledge bases. Returns top-k matching documents."""
    # knowledge_bases_retriever() returns a callable: (query, agent=None, num_documents=5, **kwargs)
    retriever = xpander_agent.knowledge_bases_retriever()
    return retriever(query=query, num_documents=num_documents)

native = OpenAIAgent(
    name=xpander_agent.name,
    instructions=xpander_agent.instructions.full,
    # Concatenate the auto-wired tools with the KB retriever.
    tools=[*xpander_agent.openai_agents_sdk_tools, search_knowledge_base],
    model=xpander_agent.model_name,
)
The retriever runs concurrent searches across every linked KB and returns the top N results by score.

6. Set up streaming (optional)

For token-by-token output, decorate an async def that yields TaskUpdateEvent objects instead of returning a Task. The decorator detects the difference automatically.
streaming_handler.py
from datetime import datetime, timezone
from xpander_sdk import on_task, Task, Agents, TaskUpdateEvent, TaskUpdateEventType
from agents import Agent as OpenAIAgent, Runner

@on_task
async def handler(task: Task):
    xpander_agent = await Agents(configuration=task.configuration).aget(agent_id=task.agent_id)
    native = OpenAIAgent(
        name=xpander_agent.name,
        instructions=xpander_agent.instructions.full,
        tools=xpander_agent.openai_agents_sdk_tools,
        model=xpander_agent.model_name,
    )

    final_output = ""
    # Runner.run_streamed yields events as the LLM produces them.
    streaming = Runner.run_streamed(native, input=task.to_message())
    async for event in streaming.stream_events():
        # Surface each text delta to the platform's SSE stream.
        if event.type == "raw_response_event" and getattr(event.data, "delta", None):
            chunk = event.data.delta
            final_output += chunk
            yield TaskUpdateEvent(
                type=TaskUpdateEventType.Chunk,
                task_id=task.id,
                organization_id=task.organization_id,
                time=datetime.now(timezone.utc),
                data=chunk,
            )

    task.result = final_output
    yield TaskUpdateEvent(
        type=TaskUpdateEventType.TaskFinished,
        task_id=task.id,
        organization_id=task.organization_id,
        time=datetime.now(timezone.utc),
        data=task,
    )
Here’s what’s happening:
  1. Runner.run_streamed(native, input=task.to_message()) returns a RunResultStreaming. Iterating streaming.stream_events() yields raw response events, run-item events, and a final completion event.
  2. The Chunk event forwards each text delta to the platform’s SSE stream so clients render output as it arrives.
  3. The TaskFinished event signals the end of the stream and carries the final task back to the platform.
A streaming handler is exposed only through POST /invoke and returns Server-Sent Events, while the platform's SSE listener for cloud-deployed agents expects a regular handler that returns a Task. If you need both an interactive streaming experience and platform-routed tasks, run two handlers, or have your streaming endpoint proxy through a regular handler.

7. Test local development

Run the handler with the dev server. Tasks created from any channel (REST, Slack, Agent Studio) route to your laptop:
# Starts the @on_task HTTP server and subscribes to the platform event stream.
xpander agent dev
Routing cloud traffic to a local instance is a preview feature. Inbound traffic goes to your deployed container by default; when a local instance is running via xpander agent dev, it takes over and all tasks route to your locally running agent instead. Only one can be active at a time. If a container is already deployed, run xpander agent stop first, then start dev. When you stop the local server, the cloud-based container automatically reclaims traffic.
For one-shot testing without a server:
# Calls your handler exactly once with the given prompt and exits.
python3 xpander_handler.py \
    --invoke \
    --prompt "Quick test" \
    --output_format json \
    --output_schema '{"answer":"string"}'
--output_format and --output_schema are useful for testing structured output without changing the agent’s settings in the control plane.

8. Deploy to xpander cloud

When the local handler works, push it as a managed container:
# Bundles the project, builds a Docker image, rolls out the new version.
xpander agent deploy
What happens:
  1. The CLI bundles xpander_handler.py, requirements.txt, the Dockerfile, and the rest of the project.
  2. xpander builds a Docker image, pushes it, and rolls out a new immutable version. The previous version stays available for instant rollback.
  3. Once the rollout finishes, the platform routes inbound tasks to the new container. The first deploy takes a couple of minutes; subsequent deploys are faster thanks to layer caching.
Stream logs from the running container while the rollout settles:
xpander agent logs

Secrets and environment variables

.env ships with the deploy by default. For values you don’t want bundled into the image (production keys, rotating secrets), upload them to xpander’s secret store instead:
# Pushes the variables in your local .env to the agent's secret store
# and injects them into the container at runtime.
xpander secrets-sync
OPENAI_API_KEY belongs in the secret store, not the bundled .env. Re-run xpander secrets-sync whenever you rotate a secret. Don’t commit .env to source control either way.

Lifecycle hooks

Containers support @on_boot and @on_shutdown for one-time resource setup and teardown. Use them for caches you want to warm before the first task lands, or open connections you want to close cleanly when the container is replaced:
from xpander_sdk import on_boot, on_shutdown

@on_boot
async def warmup():
    # Pre-load a model, open a DB pool, fetch config, etc.
    ...

@on_shutdown
async def cleanup():
    # Flush queues, close connections, etc.
    ...

When to redeploy

Anything that changes Python code, dependencies, or the Dockerfile needs a redeploy. The control-plane bits stay live without one:
  • Live (no redeploy): instructions, model selection, attached agents, attached knowledge bases, tool selection from the catalog.
  • Needs xpander agent deploy: any change to xpander_handler.py, requirements.txt, Dockerfile, or other files in the container.
Full deployment reference, including rollback and lifecycle controls, is on the Containers page.

Troubleshooting

The OpenAI Agents SDK instantiates its own OpenAI client and reads OPENAI_API_KEY from the environment. It does not pick up a custom LLM key configured on the agent in Agent Studio. If the runner is using the wrong key, mirror the cloud-side custom key into your local .env (or your container’s secret store) as OPENAI_API_KEY.
Backend.aget_args() currently dispatches only to the Agno builder. For every other framework, including the OpenAI Agents SDK, you load the Agent yourself with Agents().aget(...) and read the fields you need.
There’s no built-in session storage for the OpenAI Agents SDK. The runner exposes result.to_input_list(), which returns the full conversation as input items you can persist (Postgres, Redis, your own store) and pass back as input=previous_items + new_message on the next turn. If you’d rather not build that yourself, switch to the Agno integration.
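A minimal sketch of that pattern, reusing the native agent and Runner from step 3, with an in-memory dict as a stand-in for your store:
store: dict[str, list] = {}  # stand-in for Postgres/Redis, keyed by conversation ID

async def run_turn(session_id: str, user_message: str) -> str:
    previous_items = store.get(session_id, [])
    result = await Runner.run(
        native,
        input=previous_items + [{"role": "user", "content": user_message}],
    )
    store[session_id] = result.to_input_list()  # persist the full conversation
    return result.final_output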
Handoffs work as in any OpenAI Agents SDK app: xpander_agent.openai_agents_sdk_tools only supplies tools, not the runner's handoff configuration, so you declare handoffs on the native Agent yourself. Each agent in the handoff chain can independently load its own xpander tools.
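For example, a sketch of a triage-to-specialist handoff, where billing_xpander_agent is a hypothetical second xpander agent loaded the same way as in step 3:
billing = OpenAIAgent(
    name="billing",
    instructions=billing_xpander_agent.instructions.full,
    tools=billing_xpander_agent.openai_agents_sdk_tools,
    model=billing_xpander_agent.model_name,
)
triage = OpenAIAgent(
    name=xpander_agent.name,
    instructions=xpander_agent.instructions.full,
    tools=xpander_agent.openai_agents_sdk_tools,
    model=xpander_agent.model_name,
    handoffs=[billing],  # standard OpenAI Agents SDK handoff declaration
)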
The OpenAI Agents SDK supports other model clients through its model-agnostic interface. xpander_agent.model_name is just a string. Pass it to whichever client you instantiate. The underlying model has to support tool calling for the integration to work end to end.
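As a sketch, one way to do that with the SDK's OpenAIChatCompletionsModel and an OpenAI-compatible endpoint (the base URL and env var name are placeholders):
import os

from openai import AsyncOpenAI
from agents import Agent as OpenAIAgent, OpenAIChatCompletionsModel

client = AsyncOpenAI(
    base_url="https://my-gateway.example.com/v1",  # placeholder endpoint
    api_key=os.environ["GATEWAY_API_KEY"],         # placeholder env var
)

native = OpenAIAgent(
    name=xpander_agent.name,
    instructions=xpander_agent.instructions.full,
    tools=xpander_agent.openai_agents_sdk_tools,
    model=OpenAIChatCompletionsModel(
        model=xpander_agent.model_name,  # still just a string
        openai_client=client,
    ),
)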

Next steps

Quickstart

The 10-minute scaffold-to-deploy walkthrough that produced the handler shown above.

Custom Tools

Wrap private APIs as tools with @register_tool and ship them through openai_agents_sdk_tools.

Compare with Agno

What you’d gain by switching: session storage, knowledge-base auto-wiring, Backend.aget_args().

Containers

Ship the handler as a container managed by xpander.

Core Concepts

The SDK class names mapped onto agents, tasks, threads, and memory.

Frameworks overview

What’s auto-wired vs. manual for Agno, OpenAI Agents SDK, LangChain, and AWS Strands.