Documentation Index

Fetch the complete documentation index at: https://docs.xpander.ai/llms.txt

Use this file to discover all available pages before exploring further.

AWS Strands is the agent framework from AWS for orchestrating tool-using LLMs. It ships a small Agent class with first-class Bedrock support and a callable run interface. xpander.ai supplies the agent’s identity (instructions, tools, model, knowledge bases); this page wires the two together. In this guide, we’ll build an agent that runs on a native strands.Agent, with its instructions, tools, and model all coming from xpander.

What doesn’t come built in

Unlike the Agno path, Strands doesn’t have a one-call shortcut for pulling everything in at once, so we grab the agent definition from xpander and hand the pieces to Strands ourselves. It’s only a few extra lines, but some capabilities aren’t auto-wired, and you have to connect them yourself:
  • Knowledge-base retrieval: Wrap xpander_agent.knowledge_bases_retriever() in a @strands.tool and concatenate it onto strands_tools. Full example in step 5.
  • Session storage: Strands has no Postgres-backed store. Use Strands’ own SessionManager and conversation_manager for in-process history, or move to the Agno integration for a managed store.
  • Guardrails: Implement as pre-checks before invoke_async, or as Strands hooks on the Agent. Agno’s PII / prompt-injection / OpenAI-moderation pre-hooks don’t apply here.
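One way to implement the guardrail pre-checks mentioned above is a small gate that runs before invoke_async. This is only a sketch: the patterns, function name, and rejection message are illustrative, and a real deployment would use a proper PII or prompt-injection classifier rather than regexes.

```python
import re

# Illustrative deny-list patterns; swap in a real classifier for production.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                          # SSN-shaped numbers
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
]

def passes_guardrails(text: str) -> bool:
    """Return False if the message trips any pre-check pattern."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

# Inside the handler, gate the call to invoke_async:
#   if not passes_guardrails(user_message):
#       task.result = "Request blocked by input guardrails."
#       return task
```

The same check could instead be registered as a Strands hook on the Agent if you want it to run on every model call rather than once per task.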

Prerequisites

  • Complete the Quickstart so the CLI, SDK, and xpander login are already set up.
  • Python 3.12+ for the local handler.
  • AWS credentials in your shell (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, or an instance profile). Strands defaults to AWS Bedrock when model= is a string. If you wire a non-Bedrock client instead, an alternate provider key (OPENAI_API_KEY, ANTHROPIC_API_KEY) takes its place. Strands does not pick up the LLM credentials configured on the agent in Agent Studio; if you’ve set a custom key on the agent, mirror it into your .env so the runner uses it.
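For the default Bedrock path, a minimal local .env might look like the fragment below. All values are placeholders; the commented-out keys only matter if you wire a non-Bedrock client.

```shell
# .env -- placeholder values; never commit real keys to source control
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1

# Only needed when you swap in a non-Bedrock model client:
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
```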

1. Install

Both packages are required. Strands ships under the strands-agents distribution but imports as strands.
# 1. xpander runtime (provides the strands_tools adapter property).
pip install xpander-sdk

# 2. AWS Strands itself (imported as `strands`).
pip install strands-agents

2. Set up scaffolding

# Create an agent named "my-first-agent"
xpander agent new \
    --name "my-first-agent" \
    --framework "strands-agents" \
    --folder "."
These files get created:
./

├── xpander_handler.py        # Your @on_task entry point. The file you'll edit most.
├── xpander_config.json       # Agent ID, organization ID, API key, framework selection.
├── agent_instructions.json   # role / goal / general (the agent's system prompt).
├── requirements.txt          # Python dependencies (xpander-sdk and strands-agents pinned here).
├── Dockerfile                # Used by xpander agent deploy.
└── .env                      # XPANDER_API_KEY, XPANDER_ORGANIZATION_ID, XPANDER_AGENT_ID.

xpander_config.json reference

xpander_config.json
{
  "agent_id": "agt_01H...",
  "organization_id": "org_01H...",
  "api_key": "xpd_...",
  "framework": "strands-agents"
}

3. Create task handler

Here is the full pattern, wrapped in @on_task so the platform routes tasks to it. The load-bearing lines are the reads from the xpander agent object (instructions, tools, model) that feed the Strands constructor:
xpander_handler.py
from dotenv import load_dotenv
load_dotenv()  # loads XPANDER_API_KEY, AWS_ACCESS_KEY_ID, etc. before any sdk import

from xpander_sdk import on_task, Task, Agents
from strands import Agent as StrandsAgent

@on_task
async def handler(task: Task) -> Task:
    # 1. Load the xpander agent (instructions, tools, model are all on it).
    xpander_agent = await Agents(configuration=task.configuration).aget(
        agent_id=task.agent_id,
    )

    # 2. Build the Strands Agent from xpander fields. Strands wraps a model-id
    #    string as BedrockModel(model_id=...) automatically.
    native = StrandsAgent(
        name=xpander_agent.name,
        description=xpander_agent.instructions.description,
        system_prompt=xpander_agent.instructions.full,
        tools=xpander_agent.strands_tools,
        model=xpander_agent.model_name,
    )

    # 3. Run the LLM loop with the task's user message.
    result = await native.invoke_async(task.to_message())

    # 4. Write the result back so the platform can store and display it.
    #    str(result) concatenates the text blocks from result.message.
    task.result = str(result)
    return task
Here’s what’s happening:
  1. Agents(configuration=task.configuration).aget(agent_id=task.agent_id) calls the xpander control plane and returns a fully-hydrated Agent object. Its instructions, tool repository, model, and knowledge-base links are all populated.
  2. xpander_agent.instructions.full is a single string that wraps the agent’s general description, role list, and goal list in <description>, <instructions>, and <goals> tags. Drop it straight into Strands’ system_prompt= kwarg (note the kwarg name; it isn’t instructions=).
  3. xpander_agent.strands_tools is a computed property that wraps every xpander tool (connectors, custom @register_tool functions, MCP tools) with @strands.tool. Each wrapper’s underlying callable invokes xpander’s tool execution path, so connector auth, observability, and retries still work.
  4. xpander_agent.model_name is the model identifier configured on the agent (e.g. anthropic.claude-sonnet-4-5-20250929-v1:0, gpt-4o). Strands wraps a string as BedrockModel(model_id=...) automatically. For non-Bedrock providers, swap in an explicit model client (see the Troubleshooting section).
  5. native.invoke_async(task.to_message()) drives the LLM loop. task.to_message() returns the task’s user message in the shape Strands expects.
  6. Writing back to task.result lets xpander store the output and surface it in the API, Agent Studio, and any wired channels. str(result) concatenates the text blocks from result.message into a single string.
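To make point 2 concrete, here is an illustrative reconstruction of the shape instructions.full is described as having. The real string is produced by the SDK, and the exact whitespace and ordering may differ; render_full_instructions is a hypothetical helper, not SDK API.

```python
# Sketch of the documented shape: general -> <description>,
# role list -> <instructions>, goal list -> <goals>.
def render_full_instructions(general: str, roles: list[str], goals: list[str]) -> str:
    return (
        f"<description>{general}</description>\n"
        "<instructions>" + "\n".join(roles) + "</instructions>\n"
        "<goals>" + "\n".join(goals) + "</goals>"
    )

prompt = render_full_instructions(
    general="Be concise and professional.",
    roles=["You are a support assistant."],
    goals=["Resolve issues quickly."],
)
```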

4. Edit the agent’s system prompt

agent_instructions.json contains the agent’s system prompt and has exactly three fields:
agent_instructions.json
{
  "role": [
    "You are a customer support assistant for Acme.",
    "Always confirm the customer's account ID before taking any action."
  ],
  "goal": [
    "Resolve the customer's issue in as few turns as possible.",
    "Escalate to a human if the request involves a refund over $500."
  ],
  "general": "Be concise, professional, and friendly. Never invent policy details; if you don't know something, say so and offer to escalate."
}
Save the file and the next xpander agent dev or xpander agent deploy syncs it to the control plane. general is also exposed as xpander_agent.instructions.description, which the handler passes to Strands’ description= kwarg so other agents that wrap this one as a tool see the right summary.
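Since a malformed agent_instructions.json only surfaces at the next sync, a quick local sanity check can catch mistakes early. The field names come straight from the file above; the validator itself is an illustrative sketch, not part of the CLI or SDK.

```python
import json

# The three fields agent_instructions.json must contain, with expected types.
REQUIRED = {"role": list, "goal": list, "general": str}

def validate_instructions(raw: str) -> dict:
    """Parse agent_instructions.json content and check the three fields."""
    data = json.loads(raw)
    for field, expected in REQUIRED.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"{field!r} must be a {expected.__name__}")
    return data
```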

5. Wire knowledge-base retrieval (optional)

Strands doesn’t auto-wire xpander’s knowledge bases, so expose the retriever as a @strands.tool the agent can call. The two integration points are building the retriever and concatenating it onto the auto-wired tool list.
xpander_handler.py
from strands import Agent as StrandsAgent, tool

# These run inside the @on_task handler, after xpander_agent has been loaded.
@tool
def search_knowledge_base(query: str, num_documents: int = 5) -> list[dict]:
    """Search the agent's linked knowledge bases. Returns top-k matching documents."""
    # knowledge_bases_retriever() returns a callable: (query, agent=None, num_documents=5, **kwargs)
    retriever = xpander_agent.knowledge_bases_retriever()
    return retriever(query=query, num_documents=num_documents)

native = StrandsAgent(
    name=xpander_agent.name,
    description=xpander_agent.instructions.description,
    system_prompt=xpander_agent.instructions.full,
    # Concatenate the auto-wired tools with the KB retriever.
    tools=[*xpander_agent.strands_tools, search_knowledge_base],
    model=xpander_agent.model_name,
)
The retriever runs concurrent searches across every linked KB and returns the top N results by score.
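The fan-out-and-merge behaviour just described can be sketched as follows. The real retriever lives in the SDK; search_all, the per-KB callables, and the {"text", "score"} result shape here are all illustrative assumptions.

```python
import asyncio

# Fan out one query per knowledge base, then keep the global top-N by score.
async def search_all(kbs, query: str, num_documents: int = 5) -> list[dict]:
    results_per_kb = await asyncio.gather(*(kb(query) for kb in kbs))
    merged = [doc for results in results_per_kb for doc in results]
    merged.sort(key=lambda d: d["score"], reverse=True)
    return merged[:num_documents]
```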

6. Set up streaming (optional)

For token-by-token output, decorate an async def that yields TaskUpdateEvent objects instead of returning a Task. The decorator detects the difference automatically. Strands exposes agent.stream_async(...), which yields a sequence of dict events; text deltas arrive on events that carry a "data" key.
streaming_handler.py
from datetime import datetime, timezone
from xpander_sdk import on_task, Task, Agents, TaskUpdateEvent, TaskUpdateEventType
from strands import Agent as StrandsAgent

@on_task
async def handler(task: Task):
    xpander_agent = await Agents(configuration=task.configuration).aget(agent_id=task.agent_id)
    native = StrandsAgent(
        name=xpander_agent.name,
        description=xpander_agent.instructions.description,
        system_prompt=xpander_agent.instructions.full,
        tools=xpander_agent.strands_tools,
        model=xpander_agent.model_name,
    )

    final_output = ""
    # stream_async yields dict events: text deltas carry a "data" key,
    # tool calls and tool results carry their own keys.
    async for event in native.stream_async(task.to_message()):
        chunk = event.get("data") if isinstance(event, dict) else None
        if not chunk:
            continue
        final_output += chunk
        # Forward each text delta to the platform's SSE stream.
        yield TaskUpdateEvent(
            type=TaskUpdateEventType.Chunk,
            task_id=task.id,
            organization_id=task.organization_id,
            time=datetime.now(timezone.utc),
            data=chunk,
        )

    task.result = final_output
    yield TaskUpdateEvent(
        type=TaskUpdateEventType.TaskFinished,
        task_id=task.id,
        organization_id=task.organization_id,
        time=datetime.now(timezone.utc),
        data=task,
    )
Here’s what’s happening:
  1. native.stream_async(task.to_message()) returns an async iterator. Each event is a dict; text deltas carry a "data" key, tool-use events carry "current_tool_use", and a final completion event carries the AgentResult under "result".
  2. The Chunk event forwards each text delta to the platform’s SSE stream so clients render output as it arrives.
  3. The TaskFinished event signals the end of the stream and carries the final task back to the platform.
A streaming handler is exposed only through POST /invoke, which returns Server-Sent Events. The platform’s SSE listener for cloud-deployed agents expects a regular handler that returns a Task. So if you need both an interactive streaming experience and platform-routed tasks, run two handlers, or have your streaming endpoint proxy through a regular handler.
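On the client side, the POST /invoke stream arrives in the standard Server-Sent Events wire format. A minimal sketch of extracting the data payloads is shown below; real clients should use a proper SSE library, and this parser assumes only the standard `data:` field.

```python
# Minimal parser for the SSE wire format: events are separated by blank
# lines, and each payload lives on one or more "data:" lines.
def parse_sse(raw: str) -> list[str]:
    payloads = []
    for block in raw.split("\n\n"):
        data_lines = [
            line[len("data:"):].lstrip()
            for line in block.splitlines()
            if line.startswith("data:")
        ]
        if data_lines:
            payloads.append("\n".join(data_lines))
    return payloads
```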

7. Test local development

Run the handler with the dev server. Tasks created from any channel (REST, Slack, Agent Studio) route to your laptop:
# Starts the @on_task HTTP server and subscribes to the platform event stream.
xpander agent dev
Routing cloud traffic to a local instance is a preview feature. Inbound traffic goes to your deployed container by default. When a local instance is running via xpander agent dev, it takes over and all tasks route to your locally running agent instead. Only one can be active at a time. If a container is already deployed, run xpander agent stop first, then start dev. When you stop the local server, the cloud-based container automatically reclaims traffic.
For one-shot testing without a server:
# Calls your handler exactly once with the given prompt and exits.
python3 xpander_handler.py \
    --invoke \
    --prompt "Quick test" \
    --output_format json \
    --output_schema '{"answer":"string"}'
--output_format and --output_schema are useful for testing structured output without changing the agent’s settings in the control plane.
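The schema string above maps field names to type names. A client-side check of the one-shot JSON output might look like the sketch below; the validator and its supported type names are illustrative, not part of the CLI.

```python
import json

# Illustrative mapping from schema type names to Python types.
TYPE_NAMES = {"string": str, "number": (int, float), "boolean": bool}

def matches_schema(output_json: str, schema_json: str) -> bool:
    """Check a flat {"field": "type"} schema against the handler output."""
    output, schema = json.loads(output_json), json.loads(schema_json)
    return all(
        isinstance(output.get(field), TYPE_NAMES[type_name])
        for field, type_name in schema.items()
    )
```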

8. Deploy to xpander cloud

When the local handler works, push it as a managed container:
# Bundles the project, builds a Docker image, rolls out the new version.
xpander agent deploy
What happens:
  1. The CLI bundles xpander_handler.py, requirements.txt, the Dockerfile, and the rest of the project.
  2. xpander builds a Docker image, pushes it, and rolls out a new immutable version. The previous version stays available for instant rollback.
  3. Once the rollout finishes, the platform routes inbound tasks to the new container. The first deploy takes a couple of minutes; subsequent deploys are faster thanks to layer caching.
Stream logs from the running container while the rollout settles:
xpander agent logs

Secrets and environment variables

.env ships with the deploy by default. For values you don’t want bundled into the image (production keys, rotating secrets), upload them to xpander’s secret store instead:
# Pushes the variables in your local .env to the agent's secret store
# and injects them into the container at runtime.
xpander secrets-sync
AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) belong in the secret store, not the bundled .env. The same applies to alternate-provider keys (OPENAI_API_KEY, ANTHROPIC_API_KEY) when you wire a non-Bedrock client. Re-run xpander secrets-sync whenever you rotate a secret. Don’t commit .env to source control either way.

Lifecycle hooks

Containers support @on_boot and @on_shutdown for one-time resource setup and teardown. Use them for caches you want to warm before the first task lands, or open connections you want to close cleanly when the container is replaced:
from xpander_sdk import on_boot, on_shutdown

@on_boot
async def warmup():
    # Pre-load a model, open a DB pool, fetch config, etc.
    ...

@on_shutdown
async def cleanup():
    # Flush queues, close connections, etc.
    ...

When to redeploy

Anything that changes Python code, dependencies, or the Dockerfile needs a redeploy. The control-plane bits stay live without one:
  • Live (no redeploy): instructions, model selection, attached agents, attached knowledge bases, tool selection from the catalog.
  • Needs xpander agent deploy: any change to xpander_handler.py, requirements.txt, Dockerfile, or other files in the container.
Full deployment reference, including rollback and lifecycle controls, is on the Containers page.

Troubleshooting

Backend.aget_args() currently dispatches only to the Agno builder and raises NotImplementedError for any other framework. For Strands, load the Agent yourself with Agents().aget(...) and pass the fields you need (instructions, tools, model name) into Strands’ Agent constructor.
Strands names the system-prompt kwarg system_prompt=, not instructions=. Pass system_prompt=xpander_agent.instructions.full to the Agent constructor. The xpander side reads from instructions (the Pydantic field on the SDK’s Agent); the Strands side accepts system_prompt. The two are not the same kwarg.
Strands wraps a string model= as BedrockModel(model_id=...) and the underlying boto3 client reads standard AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, or an instance profile). It does not pick up a custom LLM key configured on the agent in Agent Studio. If the runner can’t authenticate or hits the wrong region, set the AWS env vars in your local .env (or your container’s secret store) and confirm the model ID is enabled in that region’s Bedrock model catalog.
For non-Bedrock providers, pass an explicit Strands model client instead of a string:
from strands.models.openai import OpenAIModel

native = StrandsAgent(
    name=xpander_agent.name,
    description=xpander_agent.instructions.description,
    system_prompt=xpander_agent.instructions.full,
    tools=xpander_agent.strands_tools,
    model=OpenAIModel(
        client_args={"api_key": "sk-..."},
        model_id=xpander_agent.model_name,
    ),
)
The xpander tools layer is provider-agnostic. The underlying model has to support tool calling for the integration to work end to end.
Strands has its own SessionManager and conversation_manager for in-process history. For durable cross-process state, use Strands’ built-in session managers (or a custom one), or switch to the Agno integration for an auto-wired Postgres store. The convenience helpers xpander_agent.get_user_sessions() and xpander_agent.get_session() raise NotImplementedError outside Agno.
The strands_tools wrapper’s input schema is {"payload": <tool.parameters>}, so the LLM is asked to nest its tool arguments under payload. This mirrors how xpander stores connector schemas internally and keeps the same shape across every framework adapter. You don’t need to do anything in your handler; the wrapper unpacks payload before invoking the tool.
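The payload-nesting behaviour just described can be sketched as follows. unwrap_payload and call_tool are illustrative names, not SDK API; they only demonstrate the unpacking the wrapper performs before invoking the underlying tool.

```python
# The LLM emits arguments nested under "payload"; the wrapper unpacks them
# before calling the tool, falling back to the raw dict if "payload" is absent.
def unwrap_payload(llm_args: dict) -> dict:
    return llm_args.get("payload", llm_args)

def call_tool(tool_fn, llm_args: dict):
    return tool_fn(**unwrap_payload(llm_args))
```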

Next steps

Quickstart

The 10-minute scaffold-to-deploy walkthrough that produced the handler shown above.

Custom Tools

Wrap private APIs as tools with @register_tool and ship them through strands_tools.

Compare with Agno

What you’d gain by switching: session storage, knowledge-base auto-wiring, Backend.aget_args().

Containers

Ship the handler as a container managed by xpander.

Core Concepts

The SDK class names mapped onto agents, tasks, threads, and memory.

Frameworks overview

What’s auto-wired vs. manual for Agno, OpenAI Agents SDK, LangChain, and AWS Strands.