

Custom tools are plain Python functions you decorate with @register_tool so the agent can call them like any other tool. Use this when:
  • The connectors catalog doesn’t have what you need.
  • You want to call a private API.
  • You want the LLM to run logic that already lives in your codebase.

Prerequisites

  • Complete the Quickstart so the CLI, SDK, and xpander login are already set up.
  • Python 3.12+ for the local handler.
  • At least one agent loaded in code via Agents().aget(agent_id=...) or inside an @on_task handler.

1. Turn a Python function into a tool

Write a function with type hints and a docstring, decorate it with @register_tool, and make sure the module is imported by your handler. That’s all the agent needs to start calling it.
xpander_handler.py
from xpander_sdk import register_tool

@register_tool
def weather_check(location: str) -> str:
    """Check the current weather for a city."""
    return f"Weather in {location}: Sunny, 25C"
What this means in practice:
  1. You don’t write the schema yourself. Type hints are the schema. location: str becomes a required string parameter; add = "Paris" and it becomes optional with a default.
  2. The docstring is the tool’s pitch to the LLM. It’s the text the model reads when deciding whether to use this tool, so write it for that audience: what the tool does, what kind of input it expects, what it returns.
  3. The function name becomes the tool name. The LLM calls it as weather_check. Pick names the model can disambiguate from other tools on the agent.
  4. The decorator runs at import time. As long as the module containing it is imported by your handler, the tool shows up in agent.tools. Conditional imports break discovery (the tool will be silently missing), so import the module unconditionally and gate the function’s behavior inside its body.
  5. The function stays normal Python. You can unit-test it, call it from scripts, and refactor it without affecting the agent’s other capabilities.
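The “type hints are the schema” idea above can be sketched with nothing but stdlib introspection. This is a simplified illustration of the kind of schema a decorator can derive from a signature, not the SDK’s actual implementation (describe_params is a hypothetical helper):

```python
import inspect

def describe_params(fn):
    """Sketch: derive a minimal parameter schema from a function's signature."""
    schema = {}
    for name, param in inspect.signature(fn).parameters.items():
        schema[name] = {
            "type": param.annotation.__name__,
            # A parameter with no default is required; a default makes it optional.
            "required": param.default is inspect.Parameter.empty,
        }
    return schema

def weather_check(location: str, units: str = "celsius") -> str:
    """Check the current weather for a city."""
    return f"Weather in {location}: Sunny, 25C"

print(describe_params(weather_check))
# {'location': {'type': 'str', 'required': True}, 'units': {'type': 'str', 'required': False}}
```

Note how adding `= "celsius"` flips a parameter from required to optional without any schema authoring on your part.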

2. Use Pydantic for richer input schemas

When the LLM benefits from structured input (nested objects, enums, validated fields), use a Pydantic model as the parameter type. The decorator picks up the model and exposes the same field constraints to the LLM that you’d get from a normal Pydantic validator.
tools.py
from typing import Optional
from pydantic import BaseModel
from xpander_sdk import register_tool

class CustomerFilter(BaseModel):
    status: str
    region: Optional[str] = None
    min_lifetime_value: float = 0.0

@register_tool
def search_customers(filter: CustomerFilter, limit: int = 50) -> list[dict]:
    """Search customers by status, optionally filtered by region and minimum lifetime value."""
    # Your implementation goes here.
    return []
What this means in practice:
  1. The model’s fields become a nested object in the tool’s schema. The LLM sees status (required), region and min_lifetime_value (optional with defaults), and supplies them as a structured payload.
  2. The SDK validates the incoming payload before your function runs. A bad value never reaches your code; the agent gets a ValueError it can react to in the next turn.
  3. Anything Pydantic can model, you can use here. Optional, Union, Literal, Enum, field validators. Whatever the model expresses ends up in the schema the LLM sees.
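To make point 1 concrete, here is a hand-written approximation of the JSON Schema the LLM would see for search_customers. The real schema is generated by Pydantic and may differ in detail; this sketch only shows the nesting and required/optional split:

```python
# Approximate schema for the CustomerFilter parameter: only fields
# without defaults land in "required".
customer_filter_schema = {
    "type": "object",
    "properties": {
        "status": {"type": "string"},
        "region": {"type": "string"},                      # Optional[str] = None
        "min_lifetime_value": {"type": "number", "default": 0.0},
    },
    "required": ["status"],
}

# The tool's top-level schema: the Pydantic model nests as an object.
tool_schema = {
    "type": "object",
    "properties": {
        "filter": customer_filter_schema,
        "limit": {"type": "integer", "default": 50},
    },
    "required": ["filter"],
}
```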

3. Make the tool async when it does I/O

Most tools that matter make a network call, hit a database, or talk to a queue. Make the function async def and the framework awaits it for you. Sync and async tools coexist on the same agent; you pick per tool.
import httpx
from xpander_sdk import register_tool

@register_tool
async def fetch_external_data(api_endpoint: str, headers: dict | None = None) -> dict:
    """Fetch JSON from an external HTTP endpoint."""
    async with httpx.AsyncClient() as client:
        response = await client.get(api_endpoint, headers=headers or {})
        return response.json()
What this means in practice:
  1. async def is the only difference from a sync tool. Type hints, docstring, decorator, all the same.
  2. Use async-native clients for the actual I/O. httpx.AsyncClient for HTTP, asyncpg or SQLAlchemy 2.x async for Postgres, aio-pika for RabbitMQ, etc. A blocking client inside an async def blocks the event loop and slows every other tool call running concurrently.
  3. Return the value, don’t return the coroutine. Always await the I/O inside the function. If you forget the await, the SDK passes the coroutine object back as the tool result and the LLM sees garbage.
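The forgotten-await pitfall from point 3 is easy to reproduce with plain asyncio, no SDK involved. This standalone sketch contrasts returning the awaited value with leaking the coroutine object:

```python
import asyncio

async def fetch_value() -> int:
    await asyncio.sleep(0)          # stand-in for real I/O
    return 42

async def good_tool() -> int:
    return await fetch_value()      # awaited: the caller sees 42

def bad_tool():
    return fetch_value()            # not awaited: returns a coroutine object

result = asyncio.run(good_tool())
leaked = bad_tool()
print(result)                       # 42
print(type(leaked).__name__)        # coroutine
leaked.close()                      # silence the "never awaited" warning
```

The second print is exactly the “garbage” the LLM would see as a tool result.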

4. Wire custom tools into your framework

Once decorated, the tool joins agent.tools.list alongside connectors and any MCP-server tools. You don’t write per-framework registration glue: each framework reads the same agent property it already uses for connectors, and your custom tool comes along for the ride.
Custom tools are already inside the args dict returned by Backend.aget_args(). Just splat the args into Agno’s Agent:
xpander_handler.py
from dotenv import load_dotenv
load_dotenv()

from xpander_sdk import on_task, Task, Backend, register_tool
from agno.agent import Agent

@register_tool
def weather_check(location: str) -> str:
    """Check the current weather for a city."""
    return f"Weather in {location}: Sunny, 25C"

@on_task
async def handler(task: Task) -> Task:
    backend = Backend(configuration=task.configuration)
    # weather_check is already in args["tools"] alongside the connectors.
    agno_agent = Agent(**(await backend.aget_args(task=task)))
    result = await agno_agent.arun(input=task.to_message())
    task.result = result.content
    return task
The decorator emits one tool definition; each per-framework property wraps it in the shape that framework expects.

5. Make a tool selectively available

@register_tool adds a tool to the agent’s permanent capability set: every task gets it. If you only want a tool available for a single call (a debug helper, a test stub, a per-request override), skip the decorator and pass it through Backend.aget_args(tools=[...]) instead. The tool gets appended to the resolved tools list for that one call only. The agent’s permanent capability set stays untouched:
from datetime import datetime
from xpander_sdk import Backend

backend = Backend(configuration=task.configuration)

def _local_clock() -> str:
    """Return the developer's local clock for debugging."""
    return datetime.now().isoformat()

args = await backend.aget_args(
    task=task,
    tools=[_local_clock],   # appended to connectors + @register_tool functions
)
Reach for @register_tool when a tool belongs to the agent for good. Reach for tools=[...] when it’s situational.
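The resolution described above can be pictured as a simple list merge. This is a conceptual sketch of the layering, not the SDK’s code; the tool names are made up:

```python
# Permanent capability set: every task gets these.
connectors = ["slack_send_message"]      # attached connectors
registered = ["weather_check"]           # @register_tool functions

# Situational extras: only this one call gets these.
per_call = ["_local_clock"]              # passed via aget_args(tools=[...])

resolved = connectors + registered + per_call   # what this call's args contain
permanent = connectors + registered             # what every other task sees

print(resolved)    # ['slack_send_message', 'weather_check', '_local_clock']
print(permanent)   # ['slack_send_message', 'weather_check']
```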

6. Inspect and invoke custom tools directly

Custom tools are queryable through the same APIs as connectors. Reach for these when you want to sanity-check that your decorator picked them up, build a UI showing the agent’s full surface, or invoke a tool by hand outside the LLM loop.
from xpander_sdk import Agents

agent = await Agents().aget(agent_id="agt_01H...")

# Every tool on the agent: connectors + @register_tool functions + MCP tools.
for tool in agent.tools.list:
    kind = "local" if tool.is_local else "connector"
    print(tool.name, kind, tool.description[:60])

# Look up a single custom tool and invoke it directly.
weather_tool = agent.tools.get_tool_by_name("weather_check")
result = await agent.ainvoke_tool(
    tool=weather_tool,
    payload={"location": "Paris"},
)
print(result.is_success, result.result)
What this means in practice:
  1. Custom tools and connectors look the same to the platform. They share agent.tools.list, the same lookup helpers, and the same ainvoke_tool entry point. tool.is_local distinguishes the two when you need to.
  2. Direct invocation skips the LLM. Useful for migration scripts, batch jobs, or testing the tool’s contract before exposing it to the model.
  3. The payload structure follows the schema you declared. For a tool typed with primitives, the payload is the keyword args dict. For a Pydantic-typed parameter, the payload is the model’s serialized form.
| Property | Returns | What it’s for |
|---|---|---|
| agent.tools.list | list[Tool] | Canonical enumeration. Each Tool carries id, name, description, parameters (JSON schema), is_local, and a Pydantic schema. |
| agent.tools.functions | list[Callable] | Normalized callables for every tool with a payload: <PydanticModel> parameter. Bind into LangChain or any “plain Python function” framework. |
| agent.tools.get_tool_by_name(name) / get_tool_by_id(id) | Tool | Look up a single tool. id and name are equal for @register_tool functions (both default to the function’s __name__). |
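The payload convention from point 3 can be demonstrated without the SDK: for a tool typed with primitives, the payload is just a keyword-argument dict, so invoking the underlying function is a plain unpack (a sketch under that assumption):

```python
def weather_check(location: str) -> str:
    """Check the current weather for a city."""
    return f"Weather in {location}: Sunny, 25C"

payload = {"location": "Paris"}      # the shape ainvoke_tool's payload takes
result = weather_check(**payload)    # primitives: payload keys map to kwargs
print(result)                        # Weather in Paris: Sunny, 25C
```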

7. Deploy custom tools to the cloud

Custom tools live in your handler’s source code, so they ship with xpander agent deploy like any other code change:
xpander agent deploy
By default, a deployed tool only exists inside your container. The framework can call it because it’s in the local registry, but the platform’s record of the agent doesn’t list it. It won’t appear in Agent Studio’s Tools tab, and you can’t reference it in dependency rules. To register the tool with the platform too, pass add_to_graph=True:
@register_tool(add_to_graph=True)
async def lookup_internal_ticket(ticket_id: str) -> dict:
    """Look up an internal support ticket by ID."""
    ...
What this means in practice:
  1. Code changes need a redeploy. Adding, editing, or removing a @register_tool function all require xpander agent deploy. Control-plane changes (instructions, model, attached connectors) stay live without one.
  2. add_to_graph=True syncs on agent load, not on deploy. The first task after rollout triggers the platform sync; the deploy itself doesn’t. The flag is idempotent. Subsequent loads skip already-synced tools, so you can leave it set permanently.
  3. Leave add_to_graph off when you don’t need platform-side features. The LLM doesn’t need it to call the tool, and skipping it keeps Agent Studio’s view scoped to platform-managed configuration.
For the full deployment surface (rollback, scaling, secrets), see the Containers page.

Troubleshooting

The decorator runs at import time. If the module that holds the tool is never imported (because the handler has a conditional import, or because the tool lives in a file the handler doesn’t reference), the decorator never executes and the tool is invisible. Move the @register_tool function into a module the handler imports unconditionally, or add an explicit from .tools import * to your handler’s top imports.
Schema generation reads type hints, not runtime values. A parameter typed as dict produces a generic object schema with no field constraints; the LLM will fill it with whatever it thinks is reasonable. To get a strict schema, model the parameter with a Pydantic BaseModel (Section 2). For unions and enums, use typing.Union and enum.Enum so Pydantic emits the right schema fragments.
Untyped parameters fall back to Any, which produces a permissive schema and often leads to runtime errors when the LLM passes the wrong type. Annotate every parameter. If you really need a free-form value, use typing.Any explicitly so future readers know it was a deliberate choice.
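You can see what any schema generator has to work with by inspecting annotations directly. This stdlib sketch shows why untyped and dict-typed parameters give the generator so little to go on:

```python
import inspect
from typing import get_type_hints

def loose(payload, options: dict) -> str:   # payload untyped, options generic
    return "ok"

hints = get_type_hints(loose)
sig = inspect.signature(loose)

print(hints)                    # {'options': <class 'dict'>, 'return': <class 'str'>}
print("payload" in hints)       # False: untyped params carry no hint at all
# dict is just "an object" -- there are no field names or types to emit.
print(sig.parameters["options"].annotation is dict)   # True
```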
The framework awaits coroutines automatically, but only if the function itself is async def. If you write def run(...): return some_coroutine() instead of async def run(...), the SDK passes the coroutine object back as the tool result and the LLM gets garbage. Make the function async def and await inside it.
The graph sync runs in the background the next time the agent is loaded with Agents().aget(agent_id=...). If you only ever call Backend.aget_args(task=task) inside an @on_task handler, the agent is loaded each task and the sync fires there too. Force a sync from a script: load the agent, wait a couple of seconds for the background task, then refresh the Tools tab in Agent Studio.
Output schema filtering applies to remote connector calls, not to @register_tool functions. Whatever your function returns reaches the LLM verbatim. To shrink local-tool output, project to the fields you want before returning, or wrap the call in a tool hook that rewrites the response.

Next steps

Pre-built connectors

The other half of agent.tools.list. Slack, Gmail, GitHub, and 2,000+ more.

Tool hooks

Observe, log, and rewrite every tool call across the agent’s lifetime.

Output Response Filtering

How large tool responses get filtered before reaching the LLM.

Frameworks

How agent.tools.functions, openai_agents_sdk_tools, and strands_tools map onto each framework.

Containers

Ship the handler (and its custom tools) as a managed container.

Custom tools example

A standalone runnable script that wires @register_tool end to end.