

Task.areport_metrics records the task’s token usage, tool calls, and execution outcome to the platform’s metrics store. The @on_task runtime calls this automatically at the end of every handler if task.tokens is set, so you only need to invoke it directly when you’re driving a task outside the runtime.
from xpander_sdk import Tokens

task.tokens = Tokens(prompt_tokens=2_140, completion_tokens=380)
task.used_tools = ["web_search", "fetch_pdf"]
task.duration = 4.7

await task.areport_metrics()
task.tokens must be set before calling areport_metrics; the method raises ValueError if it is missing.

Parameters

Parameter      Type           Required  Description
configuration  Configuration  No        Override SDK config. Falls back to task.configuration, then env-derived defaults.

Returns

None.

What gets reported

The metrics report includes:
  • execution_id: task id.
  • source: task source.
  • memory_thread_id: same as task id (memory and execution share an id).
  • task: input text (task.input.text).
  • status: current task.status.
  • internal_status: current task.internal_status.
  • ai_model: fixed to "xpander".
  • api_calls_made: tool names from task.used_tools (or [] for orchestration tasks with return_metrics=True).
  • result: final result string.
  • llm_tokens: token counts wrapped in ExecutionTokens(worker=task.tokens).

Examples

Inside a custom event loop

If you’re driving an agent without @on_task, report metrics manually before returning to the caller:
from xpander_sdk import Backend, Tokens
from xpander_sdk.modules.tasks.sub_modules.task import Task

backend = Backend()
task = await backend.ainvoke_agent(agent_id="agent-123", prompt="...")

# ... run the agent yourself, track tokens ...

task.tokens = Tokens(prompt_tokens=1_500, completion_tokens=320)
task.used_tools = ["search", "summarize"]
task.duration = 3.1

await task.areport_metrics()
This makes the run show up in the dashboard with proper token accounting.

Sync version

task.report_metrics()

Errors

Raises ModuleException on persistence failures, or ValueError if task.tokens is None.
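A defensive call site might look like the sketch below. Since ModuleException's import path is not shown here, persistence failures are caught broadly; safe_report and its logging behavior are illustrative, not part of the SDK:

```python
import asyncio
import logging

log = logging.getLogger("metrics")


async def safe_report(task) -> bool:
    """Report metrics, treating persistence failures as non-fatal.

    `task` is any object with a `tokens` attribute and an async
    `areport_metrics()` coroutine. Returns True if the report succeeded.
    """
    # Fail fast on the documented precondition rather than inside the SDK.
    if getattr(task, "tokens", None) is None:
        raise ValueError("set task.tokens before calling areport_metrics")
    try:
        await task.areport_metrics()
        return True
    except Exception as exc:  # e.g. ModuleException on persistence failure
        log.warning("metrics report failed: %s", exc)
        return False
```

The run still completes even if the metrics store is temporarily unreachable; only the dashboard accounting is lost.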