LangChain and LangGraph let you build custom tool-calling and graph-based agent runtimes. xpander.ai supplies the agent definition (instructions, tools, model, knowledge-base links), and your handler wires those fields into a native LangGraph flow. In this guide, we’ll build an agent that runs on a native LangGraph ReAct loop, with its tools, model, and instructions all coming from xpander.
## What doesn’t come built in
Unlike the Agno path, LangChain doesn’t have a one-call shortcut for pulling everything in at once, so we grab the agent definition from xpander and pass the pieces into `create_react_agent` ourselves. It’s only a few extra lines, but a few capabilities aren’t auto-wired, so you wire them into your graph yourself:
| Capability | How to wire it |
|---|---|
| Knowledge-base retrieval | Call `xpander_agent.knowledge_bases_retriever()` and expose it as a LangChain tool (or call it directly inside a node). |
| Session storage | Use LangGraph’s own checkpointer/state flow, or move to Agno if you want the managed session-storage path from `Backend.aget_args()`. |
| Automatic guardrails, context-optimization plumbing, and multi-agent team runtime wiring | Build those behaviors yourself in your graph, or switch to Agno. |
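Since retrieval isn’t auto-wired, one way to expose it is to wrap `knowledge_bases_retriever()` in a plain callable with a docstring that LangChain’s tool conversion can pick up. A minimal sketch with a stand-in agent object — the retriever’s exact signature and return shape here are assumptions, not the SDK’s documented API:

```python
# Sketch: wrapping the xpander knowledge-base retriever as a tool-shaped
# callable. StubAgent stands in for the real xpander agent object; the
# retriever's signature and return type are assumptions.

class StubAgent:
    def knowledge_bases_retriever(self):
        # Assumed shape: the real method returns a retriever callable
        # backed by the agent's attached knowledge bases.
        def retrieve(query: str) -> list[str]:
            return [f"doc matching: {query}"]
        return retrieve

def make_kb_tool(agent):
    """Build a docstring-carrying callable you can register as a tool."""
    retriever = agent.knowledge_bases_retriever()

    def search_knowledge_base(query: str) -> str:
        """Search the agent's attached knowledge bases for relevant passages."""
        return "\n".join(retriever(query))

    return search_knowledge_base

kb_tool = make_kb_tool(StubAgent())
print(kb_tool("deployment steps"))  # -> doc matching: deployment steps
```

In a real graph you would pass the wrapped callable into your tool list (or call the retriever directly inside a node, as the table notes).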
## Prerequisites
- Complete the Quickstart. You should already have the CLI installed, `xpander login` completed, and a scaffolded agent project.
- Python 3.12+ for local development.
- An LLM provider key in your shell that matches your LangChain provider package. For example: `OPENAI_API_KEY` for `langchain-openai`, `ANTHROPIC_API_KEY` for `langchain-anthropic`.
## 1. Install
All packages below are required for the default OpenAI example.
## 2. Set up scaffolding
xpander_config.json reference
xpander_config.json
## 3. Create task handler
The full pattern, wrapped in `@on_task` so the platform routes tasks to it:
xpander_handler.py
- `Agents(...).aget(agent_id=task.agent_id)` returns a fully loaded agent object.
- `xpander_agent.model_name` is used as the LLM model id in your LangChain client.
- `xpander_agent.tools.functions` returns one callable per tool, with a `payload` schema signature and generated docstrings LangChain/LangGraph can use.
- `xpander_agent.instructions` contains `general`, `role`, and `goal` fields so you can build the system prompt format your graph expects.
- `task.result = ...` hands the output back to xpander for storage and UI/API visibility.
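For instance, the `general`, `role`, and `goal` fields can be folded into one system prompt string before handing it to the graph. A sketch with a stand-in instructions object — the field names come from the list above, but this `Instructions` class and the exact composition format are assumptions:

```python
from dataclasses import dataclass

# Stand-in for xpander_agent.instructions; the real object exposes
# general, role, and goal fields (names per the docs), but this
# dataclass itself is a local stub.
@dataclass
class Instructions:
    general: str
    role: str
    goal: str

def build_system_prompt(instructions: Instructions) -> str:
    """Fold the three instruction fields into one system prompt string."""
    parts = [
        instructions.general,
        f"Your role: {instructions.role}",
        f"Your goal: {instructions.goal}",
    ]
    # Skip empty fields so a missing section doesn't leave blank gaps.
    return "\n\n".join(p for p in parts if p)

prompt = build_system_prompt(
    Instructions(
        general="You are a helpful release-notes assistant.",
        role="Summarize merged pull requests.",
        goal="Produce a concise changelog entry.",
    )
)
print(prompt)
```

The resulting string is what you would pass as the system message when constructing your ReAct graph.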
## 4. Edit the agent system prompt
`agent_instructions.json` contains the agent’s system prompt and maps directly to `agent.instructions` in code:
agent_instructions.json
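A hypothetical shape for the file, assuming top-level keys that mirror the `general`, `role`, and `goal` fields the handler reads — the exact schema in your scaffolded project may differ:

```json
{
  "general": "You are a helpful release-notes assistant.",
  "role": "Summarize merged pull requests into plain-language bullets.",
  "goal": "Produce a concise changelog entry per release."
}
```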
Running `xpander agent dev` or `xpander agent deploy` syncs it to the control plane.
## 5. Stream chunks from LangGraph (optional)
For streaming output, use an async generator handler that yields `TaskUpdateEvent` values:
xpander_handler.py (streaming variant)
Chunks are streamed back via `POST /invoke` as SSE output.
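The streaming shape boils down to an async generator that yields one update per chunk as the graph produces them. A stripped-down sketch with local stand-ins for the xpander event type and the graph stream — `TaskUpdateEvent`’s real fields and the graph’s streaming call are assumptions here:

```python
import asyncio
from dataclasses import dataclass

# Stand-in for xpander's TaskUpdateEvent; the real type and its
# fields are assumptions in this sketch.
@dataclass
class TaskUpdateEvent:
    chunk: str

async def fake_graph_stream():
    # Stand-in for the LangGraph streaming call in the real handler.
    for token in ["Build", "ing", " answer", "..."]:
        await asyncio.sleep(0)
        yield token

async def handle_task(task_text: str):
    """Async-generator handler: yield one event per streamed chunk."""
    async for token in fake_graph_stream():
        yield TaskUpdateEvent(chunk=token)

async def main() -> str:
    chunks = []
    async for event in handle_task("hello"):
        chunks.append(event.chunk)
    return "".join(chunks)

print(asyncio.run(main()))  # prints "Building answer..."
```

The key point is that the handler is itself an async generator, so the platform can forward each yielded event as an SSE chunk instead of waiting for the final result.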
## 6. Filter tool outputs with schema enforcement (optional)
When a tool returns large payloads, configure output schema filtering in Agent Studio for that tool. This keeps only relevant fields and reduces token usage before results are handed back to your LangChain loop.
## 7. Test local development
Run the handler with the dev server. Tasks created from any channel (REST, Slack, Agent Studio) route to your laptop:
`--output_format` and `--output_schema` are useful for testing structured output without changing the agent’s settings in the control plane.
## 8. Deploy to xpander cloud
When the local handler works, push it as a managed container:
- The CLI bundles `xpander_handler.py`, `requirements.txt`, the `Dockerfile`, and the rest of the project.
- xpander builds a Docker image, pushes it, and rolls out a new immutable version. The previous version stays available for instant rollback.
- Once the rollout finishes, the platform routes inbound tasks to the new container. The first deploy takes a couple of minutes; subsequent deploys are faster thanks to layer caching.
## Secrets and environment variables
.env ships with the deploy by default. For values you don’t want bundled into the image (production keys, rotating secrets), upload them to xpander’s secret store instead:
Re-run `xpander secrets-sync` whenever you rotate a secret. Don’t commit `.env` to source control either way.
## Lifecycle hooks
Containers support `@on_boot` and `@on_shutdown` for one-time resource setup and teardown. Use them for caches you want to warm before the first task lands, or open connections you want to close cleanly when the container is replaced:
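The pattern looks roughly like this; a no-op decorator stands in for xpander’s `@on_boot`/`@on_shutdown` so the sketch stays self-contained, and the cache-warming logic is purely illustrative:

```python
# Identity decorator standing in for xpander's @on_boot / @on_shutdown,
# so this sketch runs without the SDK installed.
def hook(fn):
    return fn

CACHE: dict[str, str] = {}

@hook  # in the real handler: @on_boot
def warm_cache() -> None:
    """Runs once when the container starts, before the first task lands."""
    CACHE["greeting_template"] = "Hello, {name}!"

@hook  # in the real handler: @on_shutdown
def close_resources() -> None:
    """Runs once when the container is being replaced."""
    CACHE.clear()

warm_cache()
print(CACHE["greeting_template"].format(name="world"))  # prints "Hello, world!"
close_resources()
print(len(CACHE))  # prints 0
```

In a real handler the boot hook might also open database pools or HTTP clients, with the shutdown hook closing them so the replaced container exits cleanly.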
## When to redeploy
Anything that changes Python code, dependencies, or the Dockerfile needs a redeploy. The control-plane bits stay live without one:
- Live (no redeploy): instructions, model selection, attached agents, attached knowledge bases, tool selection from the catalog.
- Needs `xpander agent deploy`: any change to `xpander_handler.py`, `requirements.txt`, `Dockerfile`, or other files in the container.
## Next steps
- **Pre-built tools** — What goes into `agent.tools.functions`, and how connectors authenticate.
- **Custom tools** — Wrap a private API as a tool with `@register_tool`.
- **Full LangChain example** — A standalone runnable script you can copy.
- **Frameworks overview** — What is auto-wired vs. manual for each supported framework.

