Create a new multi-step agentic workflow with a visual canvas of nodes
name is required — all other fields are optional. Data flows left-to-right through the pipeline: from a START trigger, through your processing nodes, to an END output.
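A minimal sketch of a create-request payload, assuming a plain JSON body whose keys match the field names documented below; the payload envelope is an assumption, only the field names and defaults come from this reference:

```python
import json

# Minimal payload: "name" is the only required field. Everything else
# defaults server-side (e.g. deployment falls back to serverless,
# access to organizational).
minimal = {"name": "Customer Onboarding Pipeline"}

# Layering on a few common optional fields from this reference.
# The "instructions" key name is an assumption based on the field
# description below.
richer = {
    "name": "Daily Report Generator",
    "description": "Summarizes yesterday's metrics and emails the team",
    "instructions": "Keep the report under 300 words.",
    "output_format": "markdown",
}

print(json.dumps(minimal))
```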
openai (default), anthropic, etc.
pointer — Invokes one of your xpander AI agents with its full tool set and memory
classifier — An LLM that classifies, labels, or routes data based on natural language instructions
parallel — Runs multiple branches simultaneously
code — Executes custom code for deterministic logic (the only node that doesn't use an LLM)
guardrail — An AI judge that evaluates a natural language condition and returns Pass/Fail
summarizer — An agent that answers specific questions from large payloads
wait — Pauses execution until a condition is met (webhook callback or human approval)
send_to_end — Skips remaining nodes and routes directly to the END block
text or json (JSON Schema applies when output_format is json)
serverless (default) or container
personal or organizational (default)
Returns a WorkflowResponse object with a generated ID and webhook URL.
type: orchestration (set automatically)
API Key for authentication
Request model for creating a new workflow on the xpander.ai platform.
A workflow is a directed acyclic graph (DAG) of execution nodes. Each node can be an agent invocation, a classifier (LLM-based routing), a code block, a guardrail, a summarizer, a wait/HITL node, or an action (API call).
Only name is required. The workflow's type is automatically set to
'orchestration' — do not use the Agents API to create workflows.
Agent-specific fields (graph, attached_tools, delegation_*, deep_planning, framework, agno_settings, connectivity_details) are not applicable to workflows and are excluded from this model.
Human-readable name for the workflow. Examples: 'Customer Onboarding Pipeline', 'Daily Report Generator'.
Description of what this workflow does, its trigger conditions, and expected outcomes.
Emoji icon for the workflow.
Avatar identifier for visual representation.
Default LLM provider for LLM-powered nodes (classifiers, guardrails, summarizers) in this workflow. Individual nodes can override this in their own settings.
openai, nim, amazon_bedrock, azure_ai_foundary, huggingFace, friendlyAI, anthropic, gemini, fireworks, google_ai_studio, helicone, bytedance, tzafon_lightcone, open_router, nebius, cloudflare_ai_gw
Default model for LLM-powered nodes. Individual nodes can override.
Default reasoning depth for agent nodes in this workflow.
low, medium, high, xhigh
Custom API base URL for the LLM provider.
Reference key to stored LLM credentials.
Credential storage type.
xpander, custom
Direct LLM credentials (prefer llm_credentials_key).
Extra headers for LLM API requests.
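Taken together, the provider, model, reasoning, and credential fields form a workflow-level LLM default. A sketch assuming these key names (only model_provider and model_name are confirmed by the node example later on this page; the other key names are illustrative):

```python
# Workflow-wide LLM defaults; individual nodes may override them in
# their own settings blocks.
llm_defaults = {
    "model_provider": "anthropic",            # one of the providers listed above
    "model_name": "claude-sonnet-4-6",        # matches the example node settings
    "reasoning_effort": "medium",             # low | medium | high | xhigh (key name assumed)
    "llm_credentials_key": "prod-anthropic",  # reference to stored credentials (preferred)
    "llm_extra_headers": {"x-request-tag": "reporting"},  # key name assumed
}
```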
High-level instructions for the workflow. These provide context to agent nodes within the workflow about the overall workflow purpose.
Description of the workflow's expected final output.
Format for the workflow's final output.
text, markdown, json, voice
JSON Schema for structured workflow output (when output_format is 'json').
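When output_format is 'json', the schema field constrains the workflow's final output. A sketch using standard JSON Schema; the exact key name for the schema field is an assumption:

```python
# Structured output configuration: the schema below is plain JSON Schema
# describing the workflow's final result. "output_schema" is an assumed
# key name; "output_format" comes from this reference.
structured_output = {
    "output_format": "json",
    "output_schema": {
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "items_processed": {"type": "integer"},
        },
        "required": ["summary"],
    },
}
```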
The workflow's execution DAG — an ordered list of nodes that define the execution flow.
Nodes execute sequentially following next_node_ids edges. The first non-special node in the list is the entry point.
Node types:
action: Invokes a connector operation or xpander built-in tool.
agent: Invokes an AI agent as a step in the workflow.
classifier: LLM-based routing — classifies input into groups and routes to different downstream nodes.
guardrail: LLM-based pass/fail check with "pass" and "fail" groups.
summarizer: LLM-based text processing — summarizes, extracts, or transforms the previous node's output.
code: Executes arbitrary Python code.
wait: Pauses execution for an external event (webhook or human-in-the-loop approval).
send_to_end: Routes execution to end nodes (end-summarizer, end-classifier).
parallel: Executes multiple child nodes simultaneously.
Flow control:
Strategies (per-node):
Example (Extract → Email → Summarize):

[
  {
    "type": "action",
    "id": "step-1",
    "name": "Extract Content",
    "next_node_ids": ["step-2"],
    "definition": {
      "asset_id": "<connection_id><operation_id>",
      "type": "action",
      "instructions": "Extract content from the provided URL"
    }
  },
  {
    "type": "action",
    "id": "step-2",
    "name": "Send Email",
    "next_node_ids": [],
    "definition": {
      "asset_id": "<connection_id><operation_id>",
      "type": "action",
      "instructions": "Send summary to recipient"
    }
  },
  {
    "type": "summarizer",
    "id": "end-summarizer",
    "name": "Output Summarizer",
    "next_node_ids": [],
    "definition": {
      "instructions": "Summarize the workflow execution results",
      "settings": {"model_provider": "anthropic", "model_name": "claude-sonnet-4-6"}
    }
  }
]
Execution strategies: retry (max retries on failure), iterative (repeated execution with stop conditions), stop (halt conditions), max_runs_per_day (daily execution limit), and agentic_context (persistent memory across workflow runs).
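The strategy names above map naturally onto a settings object. A hedged sketch: the top-level keys come from this reference, while the inner shapes (max_retries and the like) are assumptions.

```python
# Per-task execution strategies; all fields are optional.
strategies = {
    "retry_strategy": {"max_retries": 3},          # retry on failure (inner key assumed)
    "iterative_strategy": {"max_iterations": 10},  # repeated execution (inner key assumed)
    "stop_strategy": {"timeout_seconds": 3600},    # halt conditions (inner key assumed)
    "max_runs_per_day": 24,                        # daily execution limit
    "agentic_context_enabled": True,               # persistent memory across workflow runs
}
```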
Notifications on workflow completion — email, Slack, or webhook on success/error.
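Per the on_success/on_error shape described later on this page, notifications map channel types to lists of channel configurations. Channel keys and config shapes here are assumptions based on the channels named (email, Slack, webhook):

```python
# on_success / on_error each map a notification channel to a list of
# channel configurations. Keys inside each config are illustrative.
notifications = {
    "on_success": {
        "email": [{"to": "team@example.com"}],
    },
    "on_error": {
        "slack": [{"channel": "#workflow-alerts"}],
        "webhook": [{"url": "https://example.com/hooks/workflow-failed"}],
    },
}
```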
How this workflow can be triggered: SDK, scheduled task, webhook, assistant UI, MCP, A2A, Telegram, Slack.
Where the workflow runs. 'serverless' (recommended) or 'container'.
serverless, container
Who can access this workflow. 'personal' or 'organizational'.
personal, organizational
Target deployment environment.
Enable NeMo guardrails.
Enable prompt caching for LLM-powered nodes.
Enable real-time event streaming.
Require OIDC pre-authentication before workflow execution.
Allowed OIDC token audiences.
Forward OIDC token to LLM provider.
OIDC audience for LLM access.
OIDC audience for MCP server access.
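The five OIDC fields above can be grouped into one sketch. Key names here paraphrase the field descriptions and are assumptions, not confirmed names from the request model:

```python
# OIDC pre-authentication settings for workflow execution.
# All key names are assumed; semantics follow the descriptions above.
oidc = {
    "oidc_auth_enabled": True,                   # require OIDC before execution
    "oidc_allowed_audiences": ["workflow-api"],  # allowed token audiences
    "oidc_forward_to_llm": False,                # forward the token to the LLM provider
    "oidc_llm_audience": None,                   # audience for LLM access (unused here)
    "oidc_mcp_audience": "mcp-server",           # audience for MCP server access
}
```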
Successful Response
Response model for workflow endpoints.
Inherits from AIAgent but excludes agent-specific fields that are not relevant to workflows (orchestrations). This provides a clean API surface for workflow consumers without exposing confusing agent-only concepts.
The workflow's execution logic is defined in orchestration_nodes — a DAG
of typed nodes. Agent-specific fields like graph, attached_tools,
delegation_*, framework, and agno_settings are hidden.
Enumeration of the agent delegation end strategies.
Attributes:
ReturnToStart: When the last agent finishes and is about to announce "finish", it summarizes and returns to the first agent.
FinishWithLast: Finish at the last agent.
return-to-start, finish-with-last
serverless, container
personal, organizational
Enumeration of possible agent statuses.
Attributes:
DRAFT: Agent is in a draft state.
ACTIVE: Agent is active and operational.
INACTIVE: Agent is inactive and not operational.
DRAFT, ACTIVE, INACTIVE
Enumeration of the agent types.
Attributes:
Manager: Marks the agent as a managing agent.
Regular: Marks the agent as a regular agent.
A2A: Marks the agent as an external agent used via the A2A protocol.
Curl: Marks the agent as an external agent invoked via cURL.
Orchestration: Marks the agent as an Orchestration object.
manager, regular, a2a, curl, orchestration
Enumeration of the agent delegation types.
Attributes:
Router: Marks the agent as a router agent; xpanderAI's LLM decides which sub-agent to trigger.
Sequence: Marks the agent as a sequence agent; sub-agents delegate to other sub-agents.
router, sequence
Enumeration of the agent delegation memory strategies.
Attributes:
Full: The memory object is passed completely between agents.
Summarization: Between each sub-agent delegation, a summarization occurs, and a new thread is created for each agent.
OriginalInput: The sub-agent receives the initial task with a fresh memory thread.
full, summarization, original-input
openai, nim, amazon_bedrock, azure_ai_foundary, huggingFace, friendlyAI, anthropic, gemini, fireworks, google_ai_studio, helicone, bytedance, tzafon_lightcone, open_router, nebius, cloudflare_ai_gw
low, medium, high, xhigh
text, markdown, json, voice
xpander, custom
Configuration for event-based notifications.
Attributes:
on_success: Notifications to send when an operation succeeds. Maps notification types to a list of notification configurations.
on_error: Notifications to send when an operation fails. Maps notification types to a list of notification configurations.
Configuration object for task-level execution strategies.
This model groups optional strategy configurations that control how a task is executed and managed over time, including retries, iterative execution, stopping conditions, and daily run limits.
Attributes:
retry_strategy: Optional retry policy configuration that defines how the task should behave when execution fails (e.g., max attempts, backoff rules).
iterative_strategy: Optional iterative execution configuration for tasks that may run in repeated cycles/steps until completion or a stop condition is met.
stop_strategy: Optional stopping policy configuration that defines when the task should stop running (e.g., timeout, max iterations, success criteria).
max_runs_per_day: Optional limit on how many times the task is allowed to run within a 24-hour period. If not set, no explicit daily limit is enforced.
agentic_context_enabled: Whether agentic memory is enabled and accessible to the executor.