AWS Strands is the agent framework from AWS for orchestrating tool-using LLMs. It ships a small
Agent class with first-class Bedrock support and a callable run interface. xpander.ai supplies the agent’s identity (instructions, tools, model, knowledge bases); this page wires the two together.
In this guide, we’ll build an agent that runs on a native `strands.Agent`, with its instructions, tools, and model all coming from xpander.
What doesn’t come built in
Unlike the Agno path, Strands doesn’t have a one-call shortcut for pulling everything in at once, so we fetch the agent definition from xpander and hand the pieces to Strands ourselves. It’s only a few extra lines, but a few capabilities aren’t auto-wired and need manual wiring:

| Capability | How to wire it |
|---|---|
| Knowledge-base retrieval | Wrap `xpander_agent.knowledge_bases_retriever()` in a `@strands.tool` and concatenate it onto `strands_tools`. Full example in step 5. |
| Session storage | Strands has no Postgres-backed store. Use Strands’ own `SessionManager` and `conversation_manager` for in-process history, or move to the Agno integration for a managed store. |
| Guardrails | Implement as pre-checks before `invoke_async`, or as Strands hooks on the `Agent`. Agno’s PII / prompt-injection / OpenAI-moderation pre-hooks don’t apply here. |
Prerequisites
- Complete the Quickstart so the CLI, SDK, and `xpander login` are already set up.
- Python 3.12+ for the local handler.
- AWS credentials in your shell (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, or an instance profile). Strands defaults to AWS Bedrock when `model=` is a string. If you wire a non-Bedrock client instead, an alternate provider key (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) takes its place. Strands does not pick up the LLM credentials configured on the agent in Agent Studio; if you’ve set a custom key on the agent, mirror it into your `.env` so the runner uses it.
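A sketch of the corresponding `.env`, with placeholder values:

```shell
# Default Bedrock path — standard AWS credentials
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1

# Only if you wire a non-Bedrock client instead
# OPENAI_API_KEY=...
# ANTHROPIC_API_KEY=...
```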
1. Install
Both packages are required. Strands ships under the `strands-agents` distribution but imports as `strands`.
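Assuming the xpander SDK’s PyPI name is `xpander-sdk` (the Strands distribution name is stated above):

```shell
pip install xpander-sdk strands-agents
```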
2. Set up scaffolding
xpander_config.json reference
xpander_config.json
3. Create task handler
The full pattern, wrapped in `@on_task` so the platform routes tasks to it. The highlighted lines are the integration’s load-bearing reads:
xpander_handler.py
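A minimal sketch of that pattern; the `xpander_sdk` import paths are assumptions inferred from the names used in this guide:

```python
from strands import Agent
from xpander_sdk import Agents, Task, on_task  # import paths assumed


@on_task
async def handle_task(task: Task) -> Task:
    # Load the fully-hydrated agent definition from the xpander control plane.
    xpander_agent = await Agents(configuration=task.configuration).aget(
        agent_id=task.agent_id
    )

    # Hand the xpander-managed pieces to a native Strands Agent.
    native = Agent(
        system_prompt=xpander_agent.instructions.full,  # kwarg is system_prompt=
        tools=xpander_agent.strands_tools,
        model=xpander_agent.model_name,  # string → BedrockModel(model_id=...)
    )

    # Drive the LLM loop and write the output back for the platform to store.
    result = await native.invoke_async(task.to_message())
    task.result = str(result)
    return task
```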
- `Agents(configuration=task.configuration).aget(agent_id=task.agent_id)` calls the xpander control plane and returns a fully-hydrated `Agent` object. Its instructions, tool repository, model, and knowledge-base links are all populated.
- `xpander_agent.instructions.full` is a single string that wraps the agent’s `general` description, `role` list, and `goal` list in `<description>`, `<instructions>`, and `<goals>` tags. Drop it straight into Strands’ `system_prompt=` kwarg (note the kwarg name; it isn’t `instructions=`).
- `xpander_agent.strands_tools` is a computed property that wraps every xpander tool (connectors, custom `@register_tool` functions, MCP tools) with `@strands.tool`. Each wrapper’s underlying callable invokes xpander’s tool execution path, so connector auth, observability, and retries still work.
- `xpander_agent.model_name` is the model identifier configured on the agent (e.g. `anthropic.claude-sonnet-4-5-20250929-v1:0`, `gpt-4o`). Strands wraps a string as `BedrockModel(model_id=...)` automatically. For non-Bedrock providers, swap in an explicit model client (see the Troubleshooting section).
- `native.invoke_async(task.to_message())` drives the LLM loop. `task.to_message()` returns the task’s user message in the shape Strands expects.
- Writing back to `task.result` lets xpander store the output and surface it in the API, Agent Studio, and any wired channels. `str(result)` concatenates the text blocks from `result.message` into a single string.
4. Edit the agent’s system prompt
`agent_instructions.json` contains the agent’s system prompt and has exactly three fields:
agent_instructions.json
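An illustrative shape for the file, using the three fields named in this guide (the values are placeholders):

```json
{
  "general": "Support agent for the Acme API.",
  "role": ["You are a senior support engineer."],
  "goal": ["Resolve the user's question and cite the docs you used."]
}
```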
`xpander agent dev` or `xpander agent deploy` syncs it to the control plane. `general` is also exposed as `xpander_agent.instructions.description`, which the handler passes to Strands’ `description=` kwarg so other agents that wrap this one as a tool see the right summary.
5. Wire knowledge-base retrieval (optional)
Strands doesn’t auto-wire xpander’s knowledge bases, so expose the retriever as a `@strands.tool` the agent can call. The highlighted lines show the two integration points: building the retriever and concatenating it onto the auto-wired tool list.
xpander_handler.py
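A sketch of those two integration points, inside the same `@on_task` handler; the retriever’s call signature here is an assumption, so adapt it to whatever `knowledge_bases_retriever()` actually returns in your SDK version:

```python
import strands
from strands import Agent

# Inside the @on_task handler, after xpander_agent has been loaded:
retriever = xpander_agent.knowledge_bases_retriever()

@strands.tool
def knowledge_base_search(query: str) -> str:
    """Retrieve relevant passages from the agent's attached knowledge bases."""
    return str(retriever(query))  # call shape assumed

native = Agent(
    system_prompt=xpander_agent.instructions.full,
    tools=xpander_agent.strands_tools + [knowledge_base_search],  # concatenate
    model=xpander_agent.model_name,
)
```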
6. Set up streaming (optional)
For token-by-token output, decorate an `async def` that yields `TaskUpdateEvent` objects instead of returning a `Task`. The decorator detects the difference automatically. Strands exposes `agent.stream_async(...)`, which yields a sequence of dict events; text deltas arrive on events that carry a `"data"` key.
streaming_handler.py
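A sketch of the streaming shape. The event classes’ import path and constructor kwargs (`task_id=`, `content=`, `task=`) are assumptions; check your xpander SDK version for the real `TaskUpdateEvent` shapes:

```python
from strands import Agent
from xpander_sdk import Agents, Task, on_task
from xpander_sdk import Chunk, TaskFinished  # import path assumed


@on_task
async def handle_task(task: Task):
    xpander_agent = await Agents(configuration=task.configuration).aget(
        agent_id=task.agent_id
    )
    native = Agent(
        system_prompt=xpander_agent.instructions.full,
        tools=xpander_agent.strands_tools,
        model=xpander_agent.model_name,
    )

    # stream_async yields dicts: "data" carries text deltas,
    # "current_tool_use" tool calls, "result" the final AgentResult.
    async for event in native.stream_async(task.to_message()):
        if "data" in event:
            yield Chunk(task_id=task.id, content=event["data"])
        elif "result" in event:
            task.result = str(event["result"])

    yield TaskFinished(task=task)
```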
- `native.stream_async(task.to_message())` returns an async iterator. Each event is a dict; text deltas carry a `"data"` key, tool-use events carry `"current_tool_use"`, and a final completion event carries the `AgentResult` under `"result"`.
- The `Chunk` event forwards each text delta to the platform’s SSE stream so clients render output as it arrives.
- The `TaskFinished` event signals the end of the stream and carries the final task back to the platform.
A streaming handler serves `POST /invoke`, returning Server-Sent Events. The platform’s SSE listener for cloud-deployed agents expects a regular handler that returns a `Task`. So if you need both an interactive streaming experience and platform-routed tasks, run two handlers, or have your streaming endpoint proxy through a regular handler.
7. Test local development
Run the handler with the dev server. Tasks created from any channel (REST, Slack, Agent Studio) route to your laptop. `--output_format` and `--output_schema` are useful for testing structured output without changing the agent’s settings in the control plane.
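The dev server mentioned here is started with the CLI command this guide names elsewhere:

```shell
xpander agent dev
```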
8. Deploy to xpander cloud
When the local handler works, push it as a managed container:

- The CLI bundles `xpander_handler.py`, `requirements.txt`, the `Dockerfile`, and the rest of the project.
- xpander builds a Docker image, pushes it, and rolls out a new immutable version. The previous version stays available for instant rollback.
- Once the rollout finishes, the platform routes inbound tasks to the new container. The first deploy takes a couple of minutes; subsequent deploys are faster thanks to layer caching.
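The deploy described above is a single CLI call:

```shell
xpander agent deploy
```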
Secrets and environment variables
`.env` ships with the deploy by default. For values you don’t want bundled into the image (production keys, rotating secrets), upload them to xpander’s secret store instead:
AWS credentials (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`) belong in the secret store, not the bundled `.env`. The same applies to alternate-provider keys (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`) when you wire a non-Bedrock client. Re-run `xpander secrets-sync` whenever you rotate a secret. Don’t commit `.env` to source control either way.
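After rotating a value in the secret store, re-sync with the command named above (any arguments it takes are not shown in this guide):

```shell
xpander secrets-sync
```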
Lifecycle hooks
Containers support `@on_boot` and `@on_shutdown` for one-time resource setup and teardown. Use them for caches you want to warm before the first task lands, or connections you want to open once and close cleanly when the container is replaced:
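A sketch of the hook shape; the decorator import path is an assumption, and `build_cache()` is a hypothetical helper standing in for your own setup code:

```python
from xpander_sdk import on_boot, on_shutdown  # import path assumed

resources = {}

@on_boot
async def warm_up():
    # Runs once when the container starts, before the first task lands.
    resources["cache"] = await build_cache()  # hypothetical helper

@on_shutdown
async def tear_down():
    # Runs once when the container is being replaced.
    await resources["cache"].close()
```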
When to redeploy
Anything that changes Python code, dependencies, or the Dockerfile needs a redeploy. The control-plane bits stay live without one:

- Live (no redeploy): instructions, model selection, attached agents, attached knowledge bases, tool selection from the catalog.
- Needs `xpander agent deploy`: any change to `xpander_handler.py`, `requirements.txt`, `Dockerfile`, or other files in the container.
Troubleshooting
Why doesn't `Backend.aget_args()` work for Strands?
`Backend.aget_args()` currently dispatches only to the Agno builder and raises `NotImplementedError` for any other framework. For Strands, you load the `Agent` yourself with `Agents().aget(...)` and pass the fields you need (instructions, tools, model name) to Strands’ `Agent` constructor.
`TypeError: Agent.__init__() got an unexpected keyword argument 'instructions'`
Strands names the system-prompt kwarg `system_prompt=`, not `instructions=`. Pass `system_prompt=xpander_agent.instructions.full` to the `Agent` constructor. The xpander side reads from `instructions` (the Pydantic field on the SDK’s `Agent`); the Strands side accepts `system_prompt`. The two are not the same kwarg.
Wrong model, wrong region, or `NoCredentialsError` from boto3
Strands wraps a string `model=` as `BedrockModel(model_id=...)`, and the underlying boto3 client reads standard AWS credentials (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, or an instance profile). It does not pick up a custom LLM key configured on the agent in Agent Studio. If the runner can’t authenticate or hits the wrong region, set the AWS env vars in your local `.env` (or your container’s secret store) and confirm the model ID is enabled in that region’s Bedrock model catalog.
How do I use a non-AWS model?
Pass an explicit Strands model client instead of a string. The xpander tools layer is provider-agnostic, but the underlying model has to support tool calling for the integration to work end to end.
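For example, with Strands’ optional OpenAI provider (installed via the `strands-agents[openai]` extra); the import path and `client_args` shape reflect our reading of Strands’ model-provider docs and may differ in your release:

```python
import os

from strands import Agent
from strands.models.openai import OpenAIModel  # import path assumed

model = OpenAIModel(
    client_args={"api_key": os.environ["OPENAI_API_KEY"]},
    model_id="gpt-4o",
)

native = Agent(
    system_prompt=xpander_agent.instructions.full,
    tools=xpander_agent.strands_tools,
    model=model,  # explicit client instead of a Bedrock model-id string
)
```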
How do I keep conversation history across turns?
Strands has its own `SessionManager` and `conversation_manager` for in-process history. For durable cross-process state, use Strands’ built-in session managers (or a custom one), or switch to the Agno integration for an auto-wired Postgres store. The convenience helpers `xpander_agent.get_user_sessions()` and `xpander_agent.get_session()` raise `NotImplementedError` outside Agno.
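A sketch of durable history via Strands’ file-backed session manager; the import path and kwargs reflect our reading of current Strands releases and may differ in yours:

```python
from strands import Agent
from strands.session.file_session_manager import FileSessionManager  # path assumed

# Persist turns to local disk, keyed by a stable session id.
session_manager = FileSessionManager(session_id="user-1234")

native = Agent(
    system_prompt=xpander_agent.instructions.full,  # from the loaded xpander agent
    tools=xpander_agent.strands_tools,
    model=xpander_agent.model_name,
    session_manager=session_manager,
)
```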
Why does the tool input show `payload` as the top-level field?
The `strands_tools` wrapper’s input schema is `{"payload": <tool.parameters>}`, so the LLM is asked to nest its tool arguments under `payload`. This mirrors how xpander stores connector schemas internally and keeps the same shape across every framework adapter. You don’t need to do anything in your handler; the wrapper unpacks `payload` before invoking the tool.

Next steps
Quickstart
The 10-minute scaffold-to-deploy walkthrough that produced the handler shown above.
Custom Tools
Wrap private APIs as tools with `@register_tool` and ship them through `strands_tools`.

Compare with Agno
What you’d gain by switching: session storage, knowledge-base auto-wiring, `Backend.aget_args()`.

Containers
Ship the handler as a container managed by xpander.
Core Concepts
The SDK class names mapped onto agents, tasks, threads, and memory.
Frameworks overview
What’s auto-wired vs. manual for Agno, OpenAI Agents SDK, LangChain, and AWS Strands.

