The OpenAI Agents SDK is OpenAI’s lightweight framework for building tool-using agents. It gives you a small `Agent` class plus a `Runner` that drives the LLM loop. xpander.ai supplies the agent’s identity (instructions, tools, model, knowledge bases); this page wires the two together.
In this guide, we’ll create an xpander Agent and run it with the OpenAI Agents SDK.
What doesn’t come built in
Unlike Agno, some capabilities aren’t auto-wired. Configure them yourself:

| Capability | How to wire it |
|---|---|
| Knowledge-base retrieval | Wrap `xpander_agent.knowledge_bases_retriever()` in a `@function_tool` and concatenate it onto `openai_agents_sdk_tools`. Full example in step 5. |
| Session storage | The OpenAI Agents SDK has no Postgres-backed store. Persist `result.to_input_list()` between turns and pass it back as `input=previous_items + new_message` on the next run, or move to the Agno integration for a managed store. |
| Guardrails | Implement as pre-checks before `Runner.run`, or use the OpenAI Agents SDK’s own `input_guardrails` parameter on `Agent`. Agno’s PII / prompt-injection / OpenAI-moderation pre-hooks don’t apply here. |
Prerequisites
- Complete the Quickstart so the CLI, SDK, and `xpander login` are already set up.
- Python 3.12+ for the local handler.
- `OPENAI_API_KEY` in your shell. The OpenAI Agents SDK uses its own OpenAI client and does not pick up the LLM credentials configured on the agent in Agent Studio. If you’ve set a custom key on the agent, mirror it into your `.env` so the runner uses it.
1. Install
Both packages are required. The OpenAI Agents SDK ships under the `openai-agents` distribution but imports as `agents`.
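A hedged install sketch: `openai-agents` is the distribution name stated above, while the xpander package name (`xpander-sdk` here) is an assumption to check against your scaffold’s `requirements.txt`:

```bash
# openai-agents installs the `agents` package; xpander-sdk is the assumed xpander SDK name.
pip install openai-agents xpander-sdk
```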
2. Set up scaffolding
xpander_config.json reference
xpander_config.json
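A hedged sketch of what the file typically holds; the exact keys come from the scaffold, so treat these as placeholders:

```json
{
  "organization_id": "YOUR_ORG_ID",
  "api_key": "YOUR_XPANDER_API_KEY",
  "agent_id": "YOUR_AGENT_ID"
}
```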
3. Create task handler
The full pattern, wrapped in `@on_task` so the platform routes tasks to it. The highlighted lines are the integration’s load-bearing reads:
xpander_handler.py
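A minimal sketch of the handler, assuming the xpander import names and the `Configuration` shape flagged in the comments; only the reads explained below are confirmed API surface:

```python
import json
from agents import Agent, Runner  # OpenAI Agents SDK (distribution: openai-agents)
# xpander import names are assumptions based on the identifiers used on this page;
# check the handler generated by the scaffold for the exact module layout.
from xpander_sdk import Agents, Configuration, Task, on_task

cfg = json.load(open("xpander_config.json"))
configuration = Configuration(api_key=cfg["api_key"], organization_id=cfg["organization_id"])  # assumed shape

@on_task
async def handle_task(task: Task):
    # Load the fully-hydrated xpander Agent from the control plane.
    xpander_agent = await Agents(configuration=configuration).aget(agent_id=cfg["agent_id"])

    # Build the native OpenAI Agents SDK agent from the xpander-side identity.
    native = Agent(
        name=xpander_agent.name,  # assumed attribute
        instructions=xpander_agent.instructions.full,
        tools=xpander_agent.openai_agents_sdk_tools,
        model=xpander_agent.model_name,
    )

    # Drive the LLM loop, then write the output back so xpander can surface it.
    result = await Runner.run(native, input=task.to_message())
    task.result = result.final_output
    return task
```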
- `Agents(configuration=...).aget(agent_id=...)` calls the xpander control plane and returns a fully-hydrated `Agent` object. Its instructions, tool repository, model, and knowledge-base links are all populated.
- `xpander_agent.instructions.full` is a single string that wraps the agent’s `general` description, `role` list, and `goal` list in `<description>`, `<instructions>`, and `<goals>` tags. Drop it straight into the OpenAI Agents SDK’s `instructions` parameter.
- `xpander_agent.openai_agents_sdk_tools` is a computed property that wraps every xpander tool (connectors, custom `@register_tool` functions, MCP tools) as a `FunctionTool` from `agents.tool`. Each wrapper’s `on_invoke_tool` calls back into xpander’s tool execution path, so connector auth, observability, and retries still work.
- `xpander_agent.model_name` is the model identifier configured on the agent (e.g. `gpt-4.1`, `gpt-4o`). Pass it to the OpenAI Agents SDK’s `model` parameter.
- `Runner.run(native, input=task.to_message())` drives the LLM loop. `task.to_message()` returns the task’s user message (text plus any attachments) in the shape the runner expects.
- Writing back to `task.result` lets xpander store the output and surface it in the API, Agent Studio, and any wired channels.
4. Edit the agent’s system prompt
`agent_instructions.json` contains the agent’s system prompt and has exactly three fields:
agent_instructions.json
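A hedged sketch of the three fields (the `general` string plus the `role` and `goal` lists described in step 3); every value below is a placeholder:

```json
{
  "general": "You are the support triage agent for the Acme help desk.",
  "role": ["Classify incoming tickets", "Escalate anything marked urgent"],
  "goal": ["Resolve or route every ticket within one business day"]
}
```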
`xpander agent dev` or `xpander agent deploy` syncs it to the control plane.
5. Wire knowledge-base retrieval (optional)
The OpenAI Agents SDK doesn’t auto-wire xpander’s knowledge bases, so expose the retriever as a `@function_tool` the runner can call. The highlighted lines show the two integration points: building the retriever and concatenating it onto the auto-wired tool list.
xpander_handler.py
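A minimal sketch of those two points, inside the handler from step 3; `knowledge_bases_retriever()` comes from the table above, but its call signature is an assumption:

```python
from agents import Agent, function_tool

# Inside the @on_task handler, after loading xpander_agent as in step 3.
@function_tool
def knowledge_base_search(query: str) -> str:
    """Search the agent's attached knowledge bases."""
    # Retriever call shape is an assumption; adapt to your SDK version.
    return str(xpander_agent.knowledge_bases_retriever(query=query))

native = Agent(
    name=xpander_agent.name,
    instructions=xpander_agent.instructions.full,
    # Concatenate the retriever tool onto the auto-wired xpander tools.
    tools=xpander_agent.openai_agents_sdk_tools + [knowledge_base_search],
    model=xpander_agent.model_name,
)
```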
6. Set up streaming (optional)
For token-by-token output, decorate an `async def` that yields `TaskUpdateEvent` objects instead of returning a `Task`. The decorator detects the difference automatically.
streaming_handler.py
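A hedged sketch of the streaming handler; the `Runner.run_streamed` / `stream_events()` calls and the raw-response filtering are standard OpenAI Agents SDK usage, while the `Chunk` / `TaskFinished` import path and constructor arguments are assumptions based on the event names described below:

```python
from agents import Runner
from openai.types.responses import ResponseTextDeltaEvent
# Event names come from this page; import path and constructors are assumptions.
from xpander_sdk import Chunk, Task, TaskFinished, on_task

@on_task
async def handle_task(task: Task):
    # `native` is the OpenAI Agents SDK Agent built from the xpander agent as in step 3.
    streaming = Runner.run_streamed(native, input=task.to_message())

    async for event in streaming.stream_events():
        # Forward each text delta to the platform's SSE stream.
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            yield Chunk(event.data.delta)  # constructor shape is an assumption

    # Signal the end of the stream and carry the final task back.
    task.result = streaming.final_output
    yield TaskFinished(task)  # constructor shape is an assumption
```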
- `Runner.run_streamed(native, input=task.to_message())` returns a `RunResultStreaming`. Iterating `streaming.stream_events()` yields raw response events, run-item events, and a final completion event.
- The `Chunk` event forwards each text delta to the platform’s SSE stream so clients render output as it arrives.
- The `TaskFinished` event signals the end of the stream and carries the final task back to the platform.
The streaming handler is exposed as `POST /invoke`, returning Server-Sent Events. The platform’s SSE listener for cloud-deployed agents expects a regular handler that returns a `Task`. So if you need both an interactive streaming experience and platform-routed tasks, run two handlers, or have your streaming endpoint proxy through a regular handler.
7. Test local development
Run the handler with the dev server. Tasks created from any channel (REST, Slack, Agent Studio) route to your laptop. `--output_format` and `--output_schema` are useful for testing structured output without changing the agent’s settings in the control plane.
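A hedged invocation sketch; the flags are the ones named above, but their value syntax is an assumption to check against your CLI’s help output:

```bash
# Route tasks for this agent to the local handler.
xpander agent dev

# Optionally test structured output without touching control-plane settings
# (flag value syntax is an assumption).
xpander agent dev --output_format json --output_schema ./schema.json
```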
8. Deploy to xpander cloud
When the local handler works, push it as a managed container:

- The CLI bundles `xpander_handler.py`, `requirements.txt`, the `Dockerfile`, and the rest of the project.
- xpander builds a Docker image, pushes it, and rolls out a new immutable version. The previous version stays available for instant rollback.
- Once the rollout finishes, the platform routes inbound tasks to the new container. The first deploy takes a couple of minutes; subsequent deploys are faster thanks to layer caching.
Secrets and environment variables
`.env` ships with the deploy by default. For values you don’t want bundled into the image (production keys, rotating secrets), upload them to xpander’s secret store instead:
`OPENAI_API_KEY` belongs in the secret store, not the bundled `.env`. Re-run `xpander secrets-sync` whenever you rotate a secret. Don’t commit `.env` to source control either way.
Lifecycle hooks
Containers support `@on_boot` and `@on_shutdown` for one-time resource setup and teardown. Use them for caches you want to warm before the first task lands, or open connections you want to close cleanly when the container is replaced:
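A minimal sketch, assuming `on_boot` and `on_shutdown` import from the same place as `on_task` and take no arguments (both are assumptions); `create_db_pool` is a hypothetical helper:

```python
# Hook names come from this page; import path and signatures are assumptions.
from xpander_sdk import on_boot, on_shutdown

db_pool = None

@on_boot
async def warm_up():
    # Open connections / warm caches once, before the first task lands.
    global db_pool
    db_pool = await create_db_pool()  # hypothetical helper

@on_shutdown
async def tear_down():
    # Close cleanly when the container is replaced.
    if db_pool is not None:
        await db_pool.close()
```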
When to redeploy
Anything that changes Python code, dependencies, or the Dockerfile needs a redeploy. The control-plane bits stay live without one:

- Live (no redeploy): instructions, model selection, attached agents, attached knowledge bases, tool selection from the catalog.
- Needs `xpander agent deploy`: any change to `xpander_handler.py`, `requirements.txt`, `Dockerfile`, or other files in the container.
Troubleshooting
Wrong model or wrong key used at runtime
The OpenAI Agents SDK instantiates its own OpenAI client and reads `OPENAI_API_KEY` from the environment. It does not pick up a custom LLM key configured on the agent in Agent Studio. If the runner is using the wrong key, mirror the cloud-side custom key into your local `.env` (or your container’s secret store) as `OPENAI_API_KEY`.
Why doesn't `Backend.aget_args()` work for the OpenAI Agents SDK?
`Backend.aget_args()` currently dispatches only to the Agno builder. For every other framework, including the OpenAI Agents SDK, you load the `Agent` yourself with `Agents().aget(...)` and read the fields you need.
How do I keep conversation history across turns?
There’s no built-in session storage for the OpenAI Agents SDK. The runner exposes `result.to_input_list()`, which returns the full conversation as input items you can persist (Postgres, Redis, your own store) and pass back as `input=previous_items + new_message` on the next turn. If you’d rather not build that yourself, switch to the Agno integration.
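A hedged sketch of that pattern; `to_input_list()` and the list-style `input` are standard OpenAI Agents SDK usage, while the in-memory store and `native` (built as in step 3) are illustrative:

```python
from agents import Runner

# Illustrative in-memory store; swap in Postgres/Redis keyed by your thread id.
history: dict[str, list] = {}

async def run_turn(thread_id: str, user_message: str) -> str:
    previous_items = history.get(thread_id, [])
    new_message = [{"role": "user", "content": user_message}]
    result = await Runner.run(native, input=previous_items + new_message)
    # Persist the full conversation for the next turn.
    history[thread_id] = result.to_input_list()
    return result.final_output
```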
Can I use the runner's built-in handoffs?
Yes. `xpander_agent.openai_agents_sdk_tools` only supplies tools, not the runner’s handoff configuration. You declare handoffs on the native `Agent` exactly as you would in any OpenAI Agents SDK app. Each agent in the handoff chain can independently load its own xpander tools.
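A minimal sketch, assuming a second xpander agent (`billing_agent`) has been loaded with `Agents().aget(...)` the same way as `xpander_agent` in step 3:

```python
from agents import Agent, Runner

# Each agent in the chain carries its own xpander-wired tools.
billing = Agent(
    name="Billing",
    instructions=billing_agent.instructions.full,
    tools=billing_agent.openai_agents_sdk_tools,
    model=billing_agent.model_name,
)

triage = Agent(
    name="Triage",
    instructions=xpander_agent.instructions.full,
    tools=xpander_agent.openai_agents_sdk_tools,
    model=xpander_agent.model_name,
    handoffs=[billing],  # standard OpenAI Agents SDK handoff declaration
)

# Inside the @on_task handler:
result = await Runner.run(triage, input=task.to_message())
```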
Does this work with non-OpenAI models?
The OpenAI Agents SDK supports other model clients through its model-agnostic interface. `xpander_agent.model_name` is just a string; pass it to whichever client you instantiate. The underlying model has to support tool calling for the integration to work end to end.

Next steps
Quickstart
The 10-minute scaffold-to-deploy walkthrough that produced the handler shown above.
Custom Tools
Wrap private APIs as tools with `@register_tool` and ship them through `openai_agents_sdk_tools`.
Compare with Agno
What you’d gain by switching: session storage, knowledge-base auto-wiring, `Backend.aget_args()`.
Containers
Ship the handler as a container managed by xpander.
Core Concepts
The SDK class names mapped onto agents, tasks, threads, and memory.
Frameworks overview
What’s auto-wired vs. manual for Agno, OpenAI Agents SDK, LangChain, and AWS Strands.

