
The test panel and output console provide visibility during development. The Monitor tab provides the same for production traffic.

Test a workflow from the canvas

The top toolbar includes a green play button labeled with the active test preset name (the default is “Hello World”). Click the play button to run the workflow with the current test configuration. Click the gear icon next to it to open the test configuration panel.

Input modes

The test panel has two input modes, toggled by a switch at the top:
  • Text mode is a plain text area. Type a message and hit run. This is the fastest way to test.
  • JSON mode switches to a structured JSON editor for payloads that match what a webhook or API call would send. Use this when your workflow’s logic depends on specific field names or nested data structures.
Below the input field, an expandable Files section lets you drag and drop or browse to upload files for workflows that process documents (OCR nodes, agent analysis, compliance scanning).
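As an illustration, a JSON-mode payload mirroring a webhook request might look like the one below. The field names (customer_id, documents) are hypothetical; use whatever fields your workflow’s nodes actually reference.

```json
{
  "customer_id": "cus_1042",
  "priority": "high",
  "documents": [
    { "name": "invoice.pdf", "url": "https://example.com/files/invoice.pdf" }
  ]
}
```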

Test presets

The test panel includes a name field and a Save as Preset button that stores the current input as a named preset. Saved presets appear in the play button’s label, so you can switch between scenarios without reopening the panel.

Read results in the output console

When you run a workflow, the output console appears at the bottom of the canvas and streams events in real time. Events appear as they happen, so you can watch a multi-step workflow progress through its nodes. Each event shows a timestamp, event type, duration, and status:
| Event | What it means |
| --- | --- |
| Task Created | The workflow run has started. A new task was created from your test input. |
| Tool Request | A node is calling an external tool or connector. Shows which tool and what data was sent. |
| Tool Result | The tool call completed. Shows the response data returned. |
| Workflow Finished | All nodes have executed. Shows final status and total duration. |
Clicking any event row expands the full JSON payload, so you can inspect exactly what data flowed between steps. Copy buttons on each payload make it easy to grab a result for comparison.

The console toolbar includes:
  • Run Again to re-execute with the same input after making changes
  • Search events to filter a long event list
  • Clear to reset the console
  • Expand for full-screen view
During an active run, a Stop button lets you cancel early. The status bar at the bottom shows the timestamp of the last test run.
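As a sketch of what an expanded event payload might contain, a Tool Request entry could look like the following. The exact schema isn’t guaranteed, and the node and tool names here are hypothetical.

```json
{
  "event": "tool_request",
  "timestamp": "2025-01-15T09:32:41.120Z",
  "node": "ocr-extract",
  "tool": "document-ocr",
  "payload": {
    "file": "invoice.pdf",
    "language": "en"
  }
}
```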
Output console vs. Monitor tab: The output console shows events from test runs you trigger from the canvas. The Monitor tab (covered next) shows data from all runs, including production traffic from webhooks, schedules, and API calls. Use the console while building. Use the Monitor tab once the workflow is live.

Monitor production runs

Once your workflow is published and handling real traffic, switch from Builder to Monitor using the toggle in the top toolbar. The Monitor tab has three sub-tabs: Threads, Metrics, and Tasks.

Threads: trace a specific run

The Threads view lists every conversation session your workflow has handled. Each row shows the thread ID, trigger type (Chat, Webhook, API), the input that started the run, and when it last executed.

Click any thread to open its log panel, which reconstructs the full execution timeline step by step: the input message, each node’s request and response payloads, and the final output. The log header shows total execution duration down to the millisecond, and a Download Log button lets you export the full log for offline analysis. Use this to trace which node produced an unexpected result.

On the right side of the log panel, an AI Insights section with an Evaluate button uses AI to analyze the execution and surface observations you might miss when reading raw payloads.
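The exported log format isn’t specified here, but conceptually each timeline step pairs a node with its request and response. A hypothetical step entry might look like:

```json
{
  "thread_id": "thr_8f2c",
  "step": 3,
  "node": "summarize-agent",
  "request": { "prompt": "Summarize the attached report...", "input_tokens": 912 },
  "response": { "status": "completed", "output_tokens": 188, "duration_ms": 2140 }
}
```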

Metrics: track resource consumption

The Metrics sub-tab shows five bar charts, each filterable by date range:
| Chart | What it tracks |
| --- | --- |
| Requests | Workflow runs per day |
| Outbound API calls by AI | External API calls agent nodes made per day |
| Input tokens | Tokens sent to LLMs per day |
| Output tokens | Tokens generated by LLMs per day |
| Total tokens | Combined input and output tokens per day |
The requests chart tracks workflow activity. The API calls chart helps spot unexpected spikes from loops or misconfigured nodes. The token charts help track LLM costs and identify optimization opportunities, like switching a summarization step to a lighter model.
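To make the cost math concrete with hypothetical rates: at $3 per million input tokens and $15 per million output tokens, a day showing 2 million input tokens and 400,000 output tokens would cost 2 × $3 + 0.4 × $15 = $12. Check your model provider’s actual pricing; the point is that the input/output split matters, since output tokens typically cost several times more than input tokens.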

Tasks: find runs by status

The Tasks sub-tab provides a flat list of every individual execution, regardless of which thread it belongs to. Each row shows the task ID, creation and update timestamps, and current status (Completed, Failed, or Running). Filter or scan for tasks with a Failed status, then cross-reference the task ID in the Threads view to see the full execution log and diagnose what went wrong.
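Conceptually, each task row is a record like the one below (field names hypothetical); the task ID is the value you carry over to the Threads view.

```json
{
  "task_id": "task_3e91",
  "created_at": "2025-01-15T09:32:40Z",
  "updated_at": "2025-01-15T09:32:46Z",
  "status": "Failed"
}
```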

What’s next

Versioning & Rollback

Publish workflow versions and roll back when a change causes problems.

Triggers

Configure how workflows start: webhooks, API calls, schedules, and more.