
Task.aevents()

Task.aevents() is an async generator of TaskUpdateEvent objects emitted by the platform as the task runs: message chunks, tool-call requests/results, reasoning steps, sub-agent triggers, plan updates, and auth events. Use it to build live UIs or to react to specific event types in real time.
from xpander_sdk import TaskUpdateEventType

# Task must be created with events_streaming=True
task = await agent.acreate_task(
    prompt="Long-running research task",
    events_streaming=True,
)

async for event in task.aevents():
    print(event.type, event.task_id)
    if event.type == TaskUpdateEventType.TaskFinished:
        break
The task must be created with events_streaming=True. Calling aevents() on a non-streaming task raises ValueError immediately.

Parameters

None.

Yields TaskUpdateEvent

Each event has:
| Field | Type | Description |
| --- | --- | --- |
| type | TaskUpdateEventType | What kind of event. See below. |
| task_id | str | Task this event belongs to. |
| organization_id | str | Owning organization. |
| time | datetime | Event timestamp. |
| data | Any | Event-specific payload. |
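Because every event shares this envelope, a dispatch table keyed on type is a natural way to consume the stream. A minimal self-contained sketch of the pattern, using toy Event/EventType stand-ins rather than the real SDK classes:

```python
import asyncio
from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto
from typing import Any

class EventType(Enum):          # stand-in for TaskUpdateEventType
    Chunk = auto()
    TaskFinished = auto()

@dataclass
class Event:                    # stand-in for TaskUpdateEvent's envelope
    type: EventType
    task_id: str
    time: datetime
    data: Any

async def fake_events():        # stands in for task.aevents()
    yield Event(EventType.Chunk, "t1", datetime.now(), "hello ")
    yield Event(EventType.Chunk, "t1", datetime.now(), "world")
    yield Event(EventType.TaskFinished, "t1", datetime.now(), None)

async def consume(events) -> str:
    """Dispatch each event by type; stop on the terminal event."""
    text: list[str] = []
    handlers = {
        EventType.Chunk: lambda e: text.append(e.data),
    }
    async for event in events:
        if event.type is EventType.TaskFinished:
            break
        handler = handlers.get(event.type)
        if handler:
            handler(event)
    return "".join(text)

print(asyncio.run(consume(fake_events())))  # hello world
```

The same shape scales to the full event-type table below: one handler per TaskUpdateEventType, with unknown types falling through harmlessly.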

Event types

| TaskUpdateEventType | When | data payload |
| --- | --- | --- |
| TaskCreated | Task is created. | A Task object. |
| TaskUpdated | Any non-final task mutation (status change, result patched). | A Task object. |
| TaskFinished | Task reaches a terminal state. | A Task object. |
| Chunk | Streaming text from the LLM. | str (text chunk). |
| AuthEvent | Auth required (MCP OAuth, ECA). | Auth-specific dict. Also dispatched to @on_auth_event handlers. |
| ToolCallRequest | Agent invokes a tool. | A ToolCallRequest (operation_id, tool_name, payload, reasoning, …). |
| ToolCallResult | Tool finishes. | A ToolCallResult (operation_id, result, is_error, …). |
| SubAgentTrigger | A sub-agent task is created. | Triggering details. |
| Think | Reasoning step (think tool). | Reasoning data. |
| Analyze | Reasoning step (analyze tool). | Reasoning data. |
| PlanUpdated | Deep-planning state changes. | A DeepPlanning object. |
| TaskCompactization | Auto-compaction event (Layer 2). | A TaskCompactizationEvent. |
If a typed payload fails to parse (e.g. due to schema drift), the SDK still yields the event with data set to the raw dict: the envelope (type, task_id, time) always propagates.
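Because data degrades to a raw dict when the typed parse fails, consumers that read typed attributes should guard with isinstance. A hedged sketch with a hypothetical ToolCallResult stand-in (field names taken from the table above; not the SDK's actual class):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ToolCallResult:           # stand-in for the SDK's typed payload
    operation_id: str
    result: Any
    is_error: bool

def describe(data: Any) -> str:
    """Handle both the typed payload and the raw-dict fallback."""
    if isinstance(data, ToolCallResult):
        return f"{'error' if data.is_error else 'ok'}: {data.result}"
    if isinstance(data, dict):  # schema drift: typed parse failed
        return f"raw: {data.get('result')}"
    return f"unknown payload: {data!r}"

print(describe(ToolCallResult("op1", "42", False)))   # ok: 42
print(describe({"result": "42", "extra_field": 1}))   # raw: 42
```

Falling back to .get() on the dict branch keeps the consumer alive across schema drift instead of crashing on a missing attribute.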

Examples

Filter on event type

async for event in task.aevents():
    if event.type == TaskUpdateEventType.Chunk:
        print(event.data, end="", flush=True)
    elif event.type == TaskUpdateEventType.TaskFinished:
        print()
        break
Chunk events have a string data payload (the streaming text), making it easy to render token-by-token output.

Render tool calls

async for event in task.aevents():
    if event.type == TaskUpdateEventType.ToolCallRequest:
        req = event.data
        print(f"→ {req.tool_name}({req.payload})")
    elif event.type == TaskUpdateEventType.ToolCallResult:
        res = event.data
        status = "✗" if res.is_error else "✓"
        print(f"{status} {res.tool_name}: {res.result}")

Stop on first failure

async for event in task.aevents():
    if event.type == TaskUpdateEventType.ToolCallResult and event.data.is_error:
        print("Tool failed, stopping.")
        await task.astop()
        break

Track auth flows in-band

async for event in task.aevents():
    if event.type == TaskUpdateEventType.AuthEvent:
        print(f"Auth required: {event.data}")
        # Show URL to user, etc.
If you’ve registered an @on_auth_event handler, it still fires automatically; the in-band AuthEvent simply lets you handle the same flow inline as well.

Sync version

for event in task.events():
    print(event.type, event.data)
task.events() consumes the async generator on a synchronous executor and yields events in order. Don’t call it from inside a running event loop.
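The general shape of such a sync bridge can be sketched with a helper that drives an async generator on its own event loop. This is a generic pattern, not the SDK's actual implementation, and it carries the same restriction: it must not be called while another loop is running in the same thread.

```python
import asyncio

async def agen():
    """Any async generator, e.g. task.aevents()."""
    for i in range(3):
        yield i

def iter_sync(async_gen):
    """Drain an async generator from plain synchronous code."""
    loop = asyncio.new_event_loop()
    try:
        while True:
            try:
                # Run one step of the generator to completion.
                yield loop.run_until_complete(async_gen.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()

print(list(iter_sync(agen())))  # [0, 1, 2]
```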

Lifecycle

The generator reconnects automatically if the SSE stream drops mid-task. It exits when:
  1. The platform closes the stream (typically after TaskFinished).
  2. The connection fails permanently (network, auth).
  3. You break out of the loop.
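Reconnection covers dropped streams, but a stream that stays open while the task stalls will block the loop indefinitely. One way to bound that on the consumer side is a per-event timeout via asyncio.wait_for around __anext__. This is a generic asyncio pattern, not an SDK feature; the toy generator below simulates a stalled stream:

```python
import asyncio

async def slow_events():                 # stands in for task.aevents()
    yield "started"
    await asyncio.sleep(10)              # simulated stall
    yield "never reached"

async def consume_with_timeout(events, per_event_timeout=0.1):
    """Collect events, giving up if none arrives within the timeout."""
    received = []
    it = events.__aiter__()
    while True:
        try:
            event = await asyncio.wait_for(it.__anext__(), per_event_timeout)
        except asyncio.TimeoutError:
            received.append("<stalled>")
            break
        except StopAsyncIteration:
            break
        received.append(event)
    return received

print(asyncio.run(consume_with_timeout(slow_events())))
# ['started', '<stalled>']
```

In production you would pick a timeout comfortably above the platform's expected inter-event gap, and likely stop or inspect the task on expiry rather than just breaking.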

Errors

  • ValueError: task wasn’t created with events_streaming=True.
  • Network failures propagate after the SSE retry budget is exhausted.
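Guarding the stream start might look like the sketch below. The toy async generator stands in for task.aevents() and mimics its documented ValueError on a non-streaming task; the real guard would wrap the actual call the same way:

```python
import asyncio

async def aevents(events_streaming: bool):
    """Toy stand-in: raises like aevents() on a non-streaming task."""
    if not events_streaming:
        raise ValueError("task was not created with events_streaming=True")
    yield "event"

async def main(streaming: bool) -> str:
    try:
        async for event in aevents(streaming):
            return event       # first event, for the sake of the demo
    except ValueError as exc:
        return f"cannot stream: {exc}"
    return "done"

print(asyncio.run(main(False)))
# cannot stream: task was not created with events_streaming=True
```

Note that the error surfaces on the first iteration of the async for, not when the generator object is created, so the try must enclose the loop itself.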

When not to stream

For batch jobs that don’t care about progress, skip events_streaming and poll task.areload() (or use webhooks). Streaming holds an HTTP connection open per task: fine for one or two, expensive for hundreds.
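The polling alternative can be sketched as follows. FakeTask, its status values, and the interval are illustrative stand-ins; a real loop would call task.areload() and check whichever terminal-status field the Task object exposes:

```python
import asyncio

class FakeTask:
    """Toy stand-in for a non-streaming Task: finishes after 3 reloads."""
    def __init__(self):
        self.status = "running"
        self._reloads = 0

    async def areload(self):           # mimics task.areload()
        self._reloads += 1
        if self._reloads >= 3:
            self.status = "finished"

async def poll_until_done(task, interval=0.01):
    """Sleep-then-reload until the task reaches a terminal state."""
    while task.status != "finished":
        await asyncio.sleep(interval)  # use a few seconds in practice
        await task.areload()
    return task.status

print(asyncio.run(poll_until_done(FakeTask())))  # finished
```

Each poll is a short-lived request, so hundreds of tasks can share a connection pool instead of each holding an SSE stream open.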