

LangChain and LangGraph let you build custom tool-calling and graph-based agent runtimes. xpander.ai supplies the agent definition (instructions, tools, model, knowledge-base links), and your handler wires those fields into a native LangGraph flow. In this guide, we’ll build an agent that runs on a native LangGraph ReAct loop, with its tools, model, and instructions all coming from xpander.

What doesn’t come built in

Unlike the Agno path, LangChain doesn't have a one-call shortcut for pulling everything in at once, so we fetch the agent definition from xpander and pass the pieces into create_react_agent ourselves. It's only a few extra lines, but a few capabilities aren't auto-wired; you wire them into your graph yourself:
  • Knowledge-base retrieval: Call xpander_agent.knowledge_bases_retriever() and expose it as a LangChain tool (or call it directly inside a node). See the sketch after this list.
  • Session storage: Use LangGraph's own checkpointer/state flow (also sketched below), or move to Agno if you want the managed session-storage path from Backend.aget_args().
  • Automatic guardrails, context-optimization plumbing, and multi-agent team runtime wiring: Build those behaviors yourself in your graph, or switch to Agno.
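For example, a minimal sketch of the knowledge-base wiring, assuming knowledge_bases_retriever() returns a callable that takes a query string (check the SDK reference for the exact signature):

from langchain_core.tools import tool


def make_kb_tool(xpander_agent):
    # Assumption: the retriever is callable with a query string and
    # returns results that render cleanly as text.
    retriever = xpander_agent.knowledge_bases_retriever()

    @tool
    def search_knowledge_base(query: str) -> str:
        """Search the agent's attached knowledge bases for relevant passages."""
        return str(retriever(query))

    return search_knowledge_base

# Pass it alongside the xpander tools:
# tools = [*xpander_agent.tools.functions, make_kb_tool(xpander_agent)]

And a minimal session-storage sketch using LangGraph's own checkpointer (MemorySaver is in-memory only; swap in a persistent checkpointer for production):

from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

checkpointer = MemorySaver()
react_agent = create_react_agent(llm, tools, checkpointer=checkpointer)

# Reusing the same thread_id across invocations continues the session.
response = await react_agent.ainvoke(
    {"messages": [("user", "Follow-up question")]},
    config={"configurable": {"thread_id": task.id}},
)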

Prerequisites

  • Complete the Quickstart. You should already have the CLI installed, xpander login completed, and a scaffolded agent project.
  • Python 3.12+ for local development.
  • An LLM provider key in your shell that matches your LangChain provider package. For example: OPENAI_API_KEY for langchain-openai, ANTHROPIC_API_KEY for langchain-anthropic.

1. Install

The packages in the first command are all required for the default OpenAI example; the provider swaps in the second are optional.
# 1. Install xpander runtime and LangChain/LangGraph dependencies.
pip install \
    xpander-sdk \
    langchain \
    langchain-openai \
    langgraph \
    python-dotenv

# 2. Optional: swap provider package for your model vendor.
pip install langchain-anthropic
pip install langchain-ollama

2. Set up scaffolding

# Create an agent named "my-langchain-agent"
xpander agent new \
    --name "my-langchain-agent" \
    --framework "langchain" \
    --folder "."
These files get created:
./

├── xpander_handler.py        # Your @on_task entry point. The file you'll edit most.
├── xpander_config.json       # Agent ID, organization ID, API key, framework selection.
├── agent_instructions.json   # role / goal / general (the agent's system prompt).
├── requirements.txt          # Python dependencies.
├── Dockerfile                # Used by xpander agent deploy.
└── .env                      # XPANDER_API_KEY, XPANDER_ORGANIZATION_ID, XPANDER_AGENT_ID.

xpander_config.json reference

xpander_config.json
{
  "agent_id": "agt_01H...",
  "organization_id": "org_01H...",
  "api_key": "xpd_...",
  "framework": "langchain"
}

3. Create task handler

The full pattern, wrapped in @on_task so the platform routes tasks to it:
xpander_handler.py
from dotenv import load_dotenv
load_dotenv()

from xpander_sdk import on_task, Task, Agents
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


def system_prompt(instructions):
    # xpander instructions are structured (role, goal, general).
    # LangChain expects one system string.
    parts = []
    if instructions.general:
        parts.append(f"System: {instructions.general}")
    if instructions.goal_str:
        parts.append(f"Goals:\n{instructions.goal_str}")
    if instructions.role:
        parts.append("Instructions:\n" + "\n".join(f"- {r}" for r in instructions.role))
    return "\n\n".join(parts)


@on_task
async def handler(task: Task) -> Task:
    # 1. Load the agent definition from xpander.
    xpander_agent = await Agents(configuration=task.configuration).aget(agent_id=task.agent_id)

    # 2. Build a LangChain chat model from the configured model name.
    llm = ChatOpenAI(model=xpander_agent.model_name, temperature=0)

    # 3. Bind xpander tools as LangGraph-compatible callables.
    react_agent = create_react_agent(llm, xpander_agent.tools.functions)

    # 4. Run the ReAct loop.
    response = await react_agent.ainvoke({
        "messages": [
            ("system", system_prompt(xpander_agent.instructions)),
            ("user", task.to_message()),
        ]
    })

    # 5. Write result back so xpander can persist and display it.
    last = response["messages"][-1]
    task.result = last.content if hasattr(last, "content") else str(last)
    return task
Here’s what’s happening:
  1. Agents(...).aget(agent_id=task.agent_id) returns a fully loaded agent object.
  2. xpander_agent.model_name is used as the LLM model id in your LangChain client (see the provider-swap note below).
  3. xpander_agent.tools.functions returns one callable per tool, each with a signature derived from the tool's payload schema and a generated docstring that LangChain/LangGraph can use.
  4. xpander_agent.instructions contains general, role, and goal fields so you can build the system prompt format your graph expects.
  5. task.result = ... hands the output back to xpander for storage and UI/API visibility.
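If you installed a different provider package in step 1, swap only the chat-model construction; for example with langchain-anthropic (assuming the agent's configured model name is an Anthropic model id):

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model=xpander_agent.model_name, temperature=0)

The rest of the handler stays the same.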

4. Edit the agent system prompt

agent_instructions.json contains the agent's system prompt and maps directly to xpander_agent.instructions in code (rendered example below):
agent_instructions.json
{
  "role": [
    "You are a customer support assistant for Acme.",
    "Always confirm the customer's account ID before taking any action."
  ],
  "goal": [
    "Resolve the customer's issue in as few turns as possible.",
    "Escalate to a human if the request involves a refund over $500."
  ],
  "general": "Be concise, professional, and friendly. Never invent policy details; if you don't know something, say so and offer to escalate."
}
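Run through the system_prompt() helper from step 3, that file yields a single system string along these lines (assuming goal_str joins the goal entries with newlines):

System: Be concise, professional, and friendly. Never invent policy details; if you don't know something, say so and offer to escalate.

Goals:
Resolve the customer's issue in as few turns as possible.
Escalate to a human if the request involves a refund over $500.

Instructions:
- You are a customer support assistant for Acme.
- Always confirm the customer's account ID before taking any action.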
Save the file and the next xpander agent dev or xpander agent deploy syncs it to the control plane.

5. Stream chunks from LangGraph (optional)

For streaming output, use an async generator handler that yields TaskUpdateEvent values:
xpander_handler.py (streaming variant)
from datetime import datetime, timezone
from xpander_sdk import on_task, Task, Agents, TaskUpdateEvent, TaskUpdateEventType
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@on_task
async def handler(task: Task):
    xpander_agent = await Agents(configuration=task.configuration).aget(agent_id=task.agent_id)
    llm = ChatOpenAI(model=xpander_agent.model_name, temperature=0)
    react_agent = create_react_agent(llm, xpander_agent.tools.functions)

    # Reuses the system_prompt() helper defined in step 3.
    messages = [
        ("system", system_prompt(xpander_agent.instructions)),
        ("user", task.to_message()),
    ]

    final_content = ""
    async for chunk in react_agent.astream({"messages": messages}):
        if "agent" in chunk:
            for msg in chunk["agent"].get("messages", []):
                if hasattr(msg, "content") and msg.content:
                    final_content = msg.content
                    yield TaskUpdateEvent(
                        type=TaskUpdateEventType.Chunk,
                        task_id=task.id,
                        organization_id=task.organization_id,
                        time=datetime.now(timezone.utc),
                        data=msg.content,
                    )

    task.result = final_content
    yield TaskUpdateEvent(
        type=TaskUpdateEventType.TaskFinished,
        task_id=task.id,
        organization_id=task.organization_id,
        time=datetime.now(timezone.utc),
        data=task,
    )
This pattern is for the decorator’s streaming mode, which is served through POST /invoke as SSE output.

6. Filter tool outputs with schema enforcement (optional)

When a tool returns large payloads, configure output schema filtering in Agent Studio for that tool. This keeps only relevant fields and reduces token usage before results are handed back to your LangChain loop.

7. Test local development

Run the handler with the dev server. Tasks created from any channel (REST, Slack, Agent Studio) route to your laptop:
# Starts the @on_task HTTP server and subscribes to the platform event stream.
xpander agent dev
Routing cloud traffic to a local instance is a preview feature. Inbound traffic goes to your deployed container by default; when a local instance is running via xpander agent dev, it takes over and all tasks route to your locally running agent instead. Only one can be active at a time. If a container is already deployed, run xpander agent stop first, then start dev. When you stop the local server, the cloud-based container automatically reclaims traffic.
For one-shot testing without a server:
# Calls your handler once and exits.
python3 xpander_handler.py \
    --invoke \
    --prompt "Quick test" \
    --output_format json \
    --output_schema '{"answer":"string"}'
--output_format and --output_schema are useful for testing structured output without changing the agent’s settings in the control plane.

8. Deploy to xpander cloud

When the local handler works, push it as a managed container:
# Bundles the project, builds the image, and rolls out a new version.
xpander agent deploy
What happens:
  1. The CLI bundles xpander_handler.py, requirements.txt, the Dockerfile, and the rest of the project.
  2. xpander builds a Docker image, pushes it, and rolls out a new immutable version. The previous version stays available for instant rollback.
  3. Once the rollout finishes, the platform routes inbound tasks to the new container. The first deploy takes a couple of minutes; subsequent deploys are faster thanks to layer caching.
Stream logs from the running container while the rollout settles:
xpander agent logs

Secrets and environment variables

.env ships with the deploy by default. For values you don’t want bundled into the image (production keys, rotating secrets), upload them to xpander’s secret store instead:
# Pushes the variables in your local .env to the agent's secret store
# and injects them into the container at runtime.
xpander secrets-sync
LLM provider keys belong in the secret store, not the bundled .env. Re-run xpander secrets-sync whenever you rotate a secret. Don’t commit .env to source control either way.
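Inside the handler, read synced values the way you'd read any environment variable. A minimal sketch, assuming the secret store injects entries as environment variables (which the .env flow implies):

import os

# Set via `xpander secrets-sync`; raises KeyError if the secret is missing.
openai_api_key = os.environ["OPENAI_API_KEY"]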

Lifecycle hooks

Containers support @on_boot and @on_shutdown for one-time resource setup and teardown. Use them for caches you want to warm before the first task lands, or open connections you want to close cleanly when the container is replaced:
from xpander_sdk import on_boot, on_shutdown

@on_boot
async def warmup():
    # Pre-load a model, open a DB pool, fetch config, etc.
    ...

@on_shutdown
async def cleanup():
    # Flush queues, close connections, etc.
    ...
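For example, a minimal sketch that opens a shared HTTP connection pool at boot and closes it at shutdown; httpx is an illustrative choice here and would need to be in requirements.txt:

import httpx
from xpander_sdk import on_boot, on_shutdown

# Shared connection pool, opened once per container rather than per task.
http_client: httpx.AsyncClient | None = None

@on_boot
async def warmup():
    global http_client
    http_client = httpx.AsyncClient(timeout=30)

@on_shutdown
async def cleanup():
    if http_client is not None:
        await http_client.aclose()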

When to redeploy

Anything that changes Python code, dependencies, or the Dockerfile needs a redeploy. The control-plane bits stay live without one:
  • Live (no redeploy): instructions, model selection, attached agents, attached knowledge bases, tool selection from the catalog.
  • Needs xpander agent deploy: any change to xpander_handler.py, requirements.txt, Dockerfile, or other files in the container.
Full deployment reference, including rollback and lifecycle controls, is on the Containers page.

Next steps

Pre-built tools

What goes into agent.tools.functions, and how connectors authenticate.

Custom tools

Wrap a private API as a tool with @register_tool.

Full LangChain example

A standalone runnable script you can copy.

Frameworks overview

What is auto-wired vs. manual for each supported framework.