Agent nodes use an LLM to reason about data, make decisions, and generate output. Three types are available, each designed for a specific kind of reasoning.
Agent nodes vs. action nodes: Agent nodes use an LLM to reason about data. Action nodes run deterministic operations (API calls, code, email) without an LLM. Use agent nodes when the step requires judgment; use action nodes when the step has a fixed, predictable outcome.
| Node | Purpose | Best for |
| --- | --- | --- |
| Agent | Full AI agent with tool access | Multi-step reasoning, querying systems, making decisions |
| Classifier | Route inputs to different branches | Intent detection, categorization, triage |
| Summarizer | Condense or extract from content | Distilling long inputs, extracting key fields, formatting output |
The Agent node connects a full Xpander agent (with its tools, knowledge base, and memory) into your workflow as a single step. When the workflow reaches this node, it hands the input to the agent along with your instructions, and the agent reasons through the task using whatever connectors and knowledge it has access to.
Select an agent from the dropdown. This can be any agent you’ve built in Agent Studio, arriving with its system prompt, tools, knowledge base, and memory already configured. If you don’t have one yet, click + New agent to create one inline.

The Instructions field tailors the agent’s behavior for this specific workflow step. These instructions are appended to the agent’s existing system prompt, so you don’t need to repeat its general configuration. Focus on what this step should accomplish:
```
Enrich the incoming lead with company data. Look up the company in Clearbit and cross-reference with our CRM. If the company has more than 500 employees, flag it as enterprise tier.
```
Use {{variable}} placeholders to inject data from previous workflow steps. Click the Workflow variable placeholders helper to see what’s available.

Persist memory thread is on by default, meaning the agent reuses the same memory thread across workflow runs and builds up context over time. Turn it off when each run should start fresh.

Run asynchronously fires the agent without waiting for its response, useful for side effects (like logging) that shouldn’t block the workflow.

Output type controls the format returned: text by default, structured output (JSON to a schema), or voice for audio-enabled agents.
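The workflow engine resolves {{variable}} placeholders for you before the instructions reach the agent, but the substitution step is easy to picture. A minimal sketch (the function name and behavior for unknown placeholders are illustrative assumptions, not the product’s actual implementation):

```python
import re

def render_instructions(template: str, variables: dict) -> str:
    """Replace {{variable}} placeholders with values from earlier steps.

    Hypothetical sketch only -- the workflow engine does this for you.
    """
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        # Leave unknown placeholders intact so missing data is easy to spot.
        return str(variables.get(key, match.group(0)))

    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

instructions = "Enrich the lead for {{company_name}} ({{employee_count}} employees)."
print(render_instructions(instructions, {"company_name": "Acme", "employee_count": 750}))
# → Enrich the lead for Acme (750 employees).
```

Leaving unmatched placeholders in place (rather than substituting an empty string) makes it obvious when an upstream step failed to produce the expected variable.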
The Classifier node reads input, evaluates it against criteria you define, and sends it down the matching branch. Unlike keyword matching or regex rules, the Classifier uses an LLM to understand intent. “I can’t log in and I’ve been charged twice” routes to billing, not authentication, because the Classifier understands that the core issue is the charge.
The Classifier works through groups. Each group defines a category with evaluation criteria, and each group becomes a separate output branch on the canvas.

You start with two groups: Group 1 (which you should rename) and Other. The Other group is a fixed catch-all for anything that doesn’t match your defined groups. It cannot be deleted.

For each group, write natural language Evaluation Criteria describing what qualifies:
```
Group: Billing
Evaluation Criteria: The input relates to charges, payments, invoices, refunds, subscription changes, or pricing questions.
```

```
Group: Technical Support
Evaluation Criteria: The input describes a bug, error, system outage, integration failure, or requests help with configuration.
```
Click + Add Group to create additional categories. Each new group adds another output branch on the canvas, letting you wire different downstream logic for each category.

The Auto-extract relevant data checkbox (on by default) tells the Classifier to pull out the data points relevant to the matched group and pass them downstream. The next node receives a focused extraction, not the raw input.

The Classifier has its own LLM settings in the advanced configuration, independent of any agent. You can choose a fast, inexpensive model for classification while reserving a more capable model for complex reasoning in Agent nodes.

The Additional instructions field lets you add examples or edge case guidance (“When the input mentions both billing and technical issues, prioritize billing”).
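The branching behavior itself is simple once the LLM has produced a label: the input goes down the branch whose group matched, and anything else falls through to Other. A sketch of that routing step (the `classify` call is a stand-in for the node’s LLM, and all names here are hypothetical):

```python
def route(label: str, groups: list[str]) -> str:
    """Send a classification label down its branch, falling back to Other.

    `label` is what the LLM classifier returned; `groups` are the
    group names you defined on the node. Other is the fixed catch-all.
    """
    return label if label in groups else "Other"

groups = ["Billing", "Technical Support"]
# Suppose a hypothetical classify() call (an LLM in the real node) returned:
label = "Billing"
print(route(label, groups))          # → Billing
print(route("Small talk", groups))   # → Other
```

This is why the Other group cannot be deleted: without a catch-all branch, a label outside your defined groups would leave the workflow with nowhere to go.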
The Summarizer node distills large inputs (full API responses, long documents, multi-message histories) into what the next step needs. Beyond simple condensation, you can instruct it to extract specific fields, reformat data, or highlight only the changes since the last run.
The Summarizer has no agent to select and no groups to define, just an Instructions field where you describe what to do with the input.

The default instruction is “Summarize the input content,” but you’ll almost always want to be more specific:
```
Extract the customer name, account ID, issue category, and severity level. Ignore the conversation metadata and internal routing information.
```

```
Compare this report against the previous run's output. List only the metrics that changed by more than 10%.
```
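The Summarizer carries out instructions like that with an LLM, but the deterministic core of a “changed by more than 10%” comparison can be sketched to show what the node is being asked to compute (function and field names here are illustrative assumptions):

```python
def changed_metrics(previous: dict, current: dict, threshold: float = 0.10) -> dict:
    """Return metrics whose relative change versus the previous run exceeds threshold."""
    changes = {}
    for name, new_value in current.items():
        old_value = previous.get(name)
        if old_value in (None, 0):
            continue  # no usable baseline to compare against
        delta = (new_value - old_value) / abs(old_value)
        if abs(delta) > threshold:
            changes[name] = f"{delta:+.0%}"
    return changes

prev = {"signups": 100, "revenue": 2000, "churn": 5}
curr = {"signups": 125, "revenue": 2040, "churn": 5}
print(changed_metrics(prev, curr))  # → {'signups': '+25%'}
```

The advantage of phrasing this as a Summarizer instruction rather than code is that the node also handles messy, unstructured report text; the sketch only shows the comparison you are asking it to perform.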
Like the Classifier, the Summarizer has independent LLM settings where you can choose a lighter model, since summarization is less demanding than multi-step reasoning.
Summarizer node vs. Output Summarizer: The Summarizer node lives in the middle of your workflow as a processing step between other nodes. The Output Summarizer is part of the END block and formats the final result. Use the Summarizer node when an intermediate step produces too much data for the next step. Use the Output Summarizer when you want to control how the workflow’s final result is presented.