Custom tools are plain Python functions you decorate with `@register_tool` so the agent can call them like any other tool.
Use this when:
- The connectors catalog doesn’t have what you need.
- You want to call a private API.
- You want the LLM to run logic that already lives in your codebase.
Prerequisites
- Complete the Quickstart so the CLI, SDK, and `xpander login` are already set up.
- Python 3.12+ for the local handler.
- At least one agent loaded in code via `Agents().aget(agent_id=...)` or inside an `@on_task` handler.
1. Turn a Python function into a tool
Write a function with type hints and a docstring, decorate it with `@register_tool`, and make sure the module is imported by your handler. That's all the agent needs to start calling it.
xpander_handler.py
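A minimal sketch of what the handler module might contain. The `xpander_sdk` import path is an assumption (check your SDK's actual module name), and a no-op fallback decorator keeps the snippet runnable without the SDK installed:

```python
# xpander_handler.py -- minimal custom-tool sketch.
try:
    from xpander_sdk import register_tool  # import path is an assumption
except ImportError:
    def register_tool(fn):
        """No-op stand-in so this sketch runs without the SDK installed."""
        return fn

@register_tool
def weather_check(location: str) -> str:
    """Return a one-line weather summary for a city.

    Use when the user asks about current conditions. `location` is a
    city name; the return value is human-readable text.
    """
    # Stub: real code would call a weather API here.
    return f"Sunny, 21°C in {location}"
```

Because the function stays plain Python, you can call `weather_check("Paris")` from a unit test exactly the way the agent would.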
- You don't write the schema yourself. Type hints are the schema. `location: str` becomes a required string parameter; add `= "Paris"` and it becomes optional with a default.
- The docstring is the tool's pitch to the LLM. It's the text the model reads when deciding whether to use this tool, so write it for that audience: what the tool does, what kind of input it expects, what it returns.
- The function name becomes the tool name. The LLM calls it as `weather_check`. Pick names the model can disambiguate from other tools on the agent.
- The decorator runs at import time. As long as the module containing it is imported by your handler, the tool shows up in `agent.tools`. Conditional imports break discovery (the tool will be silently missing), so import the module unconditionally and gate the function's behavior inside its body.
- The function stays normal Python. You can unit-test it, call it from scripts, and refactor it without affecting the agent's other capabilities.
2. Use Pydantic for richer input schemas
When the LLM benefits from structured input (nested objects, enums, validated fields), use a Pydantic model as the parameter type. The decorator picks up the model and exposes the same field constraints to the LLM that you'd get from a normal Pydantic validator.
tools.py
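A sketch of what `tools.py` might look like. The model and function names here are illustrative, and the `xpander_sdk` import path is an assumption with a runnable no-op fallback:

```python
# tools.py -- Pydantic-typed custom tool (illustrative sketch).
from enum import Enum
from pydantic import BaseModel, Field

try:
    from xpander_sdk import register_tool  # import path is an assumption
except ImportError:
    def register_tool(fn):  # no-op stand-in so the sketch runs without the SDK
        return fn

class Status(str, Enum):
    ACTIVE = "active"
    CHURNED = "churned"

class CustomerFilter(BaseModel):
    status: Status                                        # required field
    region: str = "EMEA"                                  # optional, has a default
    min_lifetime_value: float = Field(default=0.0, ge=0)  # optional and validated

@register_tool
def find_customers(payload: CustomerFilter) -> list[str]:
    """Return customer IDs matching the given filter."""
    # Stub: real code would query a CRM or database.
    return [f"{payload.region}-{payload.status.value}-0001"]
```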
- The model's fields become a nested object in the tool's schema. The LLM sees `status` (required) plus `region` and `min_lifetime_value` (optional with defaults), and supplies them as a structured payload.
- The SDK validates the incoming payload before your function runs. A bad value never reaches your code; the agent gets a `ValueError` it can react to in the next turn.
- Anything Pydantic can model, you can use here: `Optional`, `Union`, `Literal`, `Enum`, field validators. Whatever the model expresses ends up in the schema the LLM sees.
3. Make the tool async when it does I/O
Most tools that matter make a network call, hit a database, or talk to a queue. Make the function `async def` and the framework awaits it for you. Sync and async tools coexist on the same agent; you pick per tool.
- `async def` is the only difference from a sync tool. Type hints, docstring, decorator: all the same.
- Use async-native clients for the actual I/O: `httpx.AsyncClient` for HTTP, `asyncpg` or SQLAlchemy 2.x async for Postgres, `aio-pika` for RabbitMQ, etc. A blocking client inside an `async def` blocks the event loop and slows every other tool call running concurrently.
- Return the value, don't return the coroutine. Always `await` the I/O inside the function. If you forget the `await`, the SDK passes the coroutine object back as the tool result and the LLM sees garbage.
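An async tool sketch under the same assumptions as before (hypothetical SDK import path with a no-op fallback). The network call is simulated with `asyncio.sleep` so the example stays self-contained; real code would use `httpx.AsyncClient` or another async-native client:

```python
# Async custom tool -- illustrative sketch.
import asyncio

try:
    from xpander_sdk import register_tool  # import path is an assumption
except ImportError:
    def register_tool(fn):  # no-op stand-in so the sketch runs without the SDK
        return fn

@register_tool
async def order_status(order_id: str) -> str:
    """Look up an order's shipping status in the fulfillment system."""
    # Real code would await an async client here, e.g.:
    #   async with httpx.AsyncClient() as client:
    #       resp = await client.get(f"https://fulfillment.example/orders/{order_id}")
    await asyncio.sleep(0)  # stand-in for the awaited I/O call
    return f"order {order_id}: shipped"  # return the value, not a coroutine
```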
4. Wire custom tools into your framework
Once decorated, the tool joins `agent.tools.list` alongside connectors and any MCP-server tools. You don't write per-framework registration glue: each framework reads the same agent property it already uses for connectors, and your custom tool comes along for the ride.
- Agno
- OpenAI Agents SDK
- LangChain
- AWS Strands
Custom tools are already inside the args dict returned by `Backend.aget_args()`. Just splat the args into Agno's `Agent`:
xpander_handler.py
5. Make a tool selectively available
`@register_tool` adds a tool to the agent's permanent capability set: every task gets it. If you only want a tool available for a single call (a debug helper, a test stub, a per-request override), skip the decorator and pass it through `Backend.aget_args(tools=[...])` instead. The tool gets appended to the resolved tools list for that one call only. The agent's permanent capability set stays untouched:
Use `@register_tool` when a tool belongs to the agent for good; reach for `tools=[...]` when it's situational.
6. Inspect and invoke custom tools directly
Custom tools are queryable through the same APIs as connectors. Reach for these when you want to sanity-check that your decorator picked them up, build a UI showing the agent's full surface, or invoke a tool by hand outside the LLM loop.
- Custom tools and connectors look the same to the platform. They share `agent.tools.list`, the same lookup helpers, and the same `ainvoke_tool` entry point. `tool.is_local` distinguishes the two when you need to.
- Direct invocation skips the LLM. Useful for migration scripts, batch jobs, or testing the tool's contract before exposing it to the model.
- The payload structure follows the schema you declared. For a tool typed with primitives, the payload is the keyword args dict. For a Pydantic-typed parameter, the payload is the model's serialized form.
| Property | Returns | What it's for |
|---|---|---|
| `agent.tools.list` | `list[Tool]` | Canonical enumeration. Each `Tool` carries `id`, `name`, `description`, `parameters` (JSON schema), `is_local`, and a Pydantic schema. |
| `agent.tools.functions` | `list[Callable]` | Normalized callables for every tool with a `payload: <PydanticModel>` parameter. Bind into LangChain or any "plain Python function" framework. |
| `agent.tools.get_tool_by_name(name)` / `get_tool_by_id(id)` | `Tool` | Look up a single tool. `id` and `name` are equal for `@register_tool` functions (both default to the function's `__name__`). |
7. Deploy custom tools to the cloud
Custom tools live in your handler's source code, so they ship with `xpander agent deploy` like any other code change.
`add_to_graph=True`:
- Code changes need a redeploy. Adding, editing, or removing a `@register_tool` function all require `xpander agent deploy`. Control-plane changes (instructions, model, attached connectors) stay live without one.
- `add_to_graph=True` syncs on agent load, not on deploy. The first task after rollout triggers the platform sync; the deploy itself doesn't. The flag is idempotent: subsequent loads skip already-synced tools, so you can leave it set permanently.
- Leave `add_to_graph` off when you don't need platform-side features. The LLM doesn't need it to call the tool, and skipping it keeps Agent Studio's view scoped to platform-managed configuration.
Troubleshooting
Custom tool doesn't show up in `agent.tools.list`
The decorator runs at import time. If the module that holds the tool is never imported (because the handler has a conditional import, or because the tool lives in a file the handler doesn't reference), the decorator never executes and the tool is invisible. Move the `@register_tool` function into a module the handler imports unconditionally, or add an explicit `from .tools import *` to your handler's top imports.
Schema doesn't match what the LLM sends
Schema generation reads type hints, not runtime values. A parameter typed as `dict` produces a generic object schema with no field constraints; the LLM will fill it with whatever it thinks is reasonable. To get a strict schema, model the parameter with a Pydantic `BaseModel` (Section 2). For unions and enums, use `typing.Union` and `enum.Enum` so Pydantic emits the right schema fragments.
`TypeError` because a parameter is missing a type hint
Untyped parameters fall back to `Any`, which produces a permissive schema and often leads to runtime errors when the LLM passes the wrong type. Annotate every parameter. If you really need a free-form value, use `typing.Any` explicitly so future readers know it was a deliberate choice.
Async tool runs but the agent never sees the result
The framework awaits coroutines automatically, but only if the function itself is `async def`. If you write `def run(...): return some_coroutine()` instead of `async def run(...)`, the SDK passes the coroutine object back as the tool result and the LLM gets garbage. Make the function `async def` and `await` inside it.
`add_to_graph=True` tool isn't appearing in Agent Studio
The graph sync runs in the background the next time the agent is loaded with `Agents().aget(agent_id=...)`. If you only ever call `Backend.aget_args(task=task)` inside an `@on_task` handler, the agent is loaded each task and the sync fires there too. Force a sync from a script: load the agent, wait a couple of seconds for the background task, then refresh the Tools tab in Agent Studio.
Output schema configured in Agent Studio but the LLM still sees the full payload
Output schema filtering applies to remote connector calls, not to `@register_tool` functions. Whatever your function returns reaches the LLM verbatim. To shrink local-tool output, project to the fields you want before returning, or wrap the call in a tool hook that rewrites the response.
Next steps
Pre-built connectors
The other half of `agent.tools.list`: Slack, Gmail, GitHub, and 2,000+ more.

Tool hooks
Observe, log, and rewrite every tool call across the agent's lifetime.

Output Response Filtering
How large tool responses get filtered before reaching the LLM.

Frameworks
How `agent.tools.functions`, `openai_agents_sdk_tools`, and `strands_tools` map onto each framework.

Containers
Ship the handler (and its custom tools) as a managed container.

Custom tools example
A standalone runnable script that wires `@register_tool` end to end.
