
Xpander connects to multiple LLM providers through a single interface. You can assign different models to different parts of your system so a ticket classifier runs on a fast, cheap model while a customer response agent uses a frontier model.

Supported providers

| Provider | Example models |
| --- | --- |
| Anthropic | Claude Opus 4.6 (T1), Sonnet 4.6 (T2), Sonnet 4.5 (T2) |
| OpenAI | GPT-5.4 (T1), GPT-5 (T2), GPT-5 Mini (T3), GPT-4o (T2) |
| Google AI Studio | Gemini models |
| Amazon Bedrock | Claude and Titan via AWS |
| Azure AI Foundry | Azure-hosted models |
| Nvidia | NIM-hosted models |
| Fireworks | Fast inference models |
| OpenRouter | Multi-provider routing |
The table above shows common examples. The full list varies by plan. Also available: Nebius Token Factory, Cloudflare AI Gateway, Tzafon LightCone, ByteDance ModelArk.
Each provider shows a Featured tab (curated models) and a Custom tab (enter any model ID). Models are labeled by tier: T1 (most capable), T2 (balanced), T3 (fast/cheap). Use Xpander’s built-in API access (no keys needed) or bring your own credentials through Admin Settings (enterprise plans).
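As a purely illustrative sketch (the field names below are assumptions, not xpander's configuration format), a Custom-tab entry with your own credentials boils down to a provider, a model ID, and where the key comes from:

```python
# Hypothetical shape of a bring-your-own-credentials model entry.
# Keys and values are illustrative assumptions, not a real xpander schema.
custom_model = {
    "provider": "OpenRouter",        # any supported provider
    "model_id": "example/model-id",  # free-form ID from the Custom tab
    "credentials": "org-api-key",    # managed in Admin Settings (enterprise plans)
    "tier": "T2",                    # capability label shown in the dropdown
}
```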

Model selection hierarchy

More specific settings override broader ones:
Org default (Admin Settings)
  └── Per-agent (Agent > General > LLM Settings)
        └── Per-workflow (Workflow > Settings > LLM Settings)
              └── Per-node (Classifier / Summarizer / Guardrail)
Org default: New agents and workflows inherit this unless overridden. Configure in Admin Settings, LLM Settings tab.
Per-agent: Each agent picks its own provider and model in the General tab. This is where most model decisions happen.
Per-workflow: Applies to all AI nodes in a workflow unless a specific node overrides it.
Per-node: Classifier, Summarizer, and Guardrail nodes each have independent LLM settings in Advanced Configuration.
Agent nodes don’t have LLM settings. They inherit the model from the agent you selected. Only Classifier, Summarizer, and Guardrail have per-node model selection.
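To make the override order concrete, here is a minimal sketch of the resolution logic. The function and argument names are assumptions for illustration only, not part of xpander's API:

```python
def resolve_model(org_default, agent=None, workflow=None, node=None):
    """Return the most specific model setting that is defined."""
    # More specific scopes win when set; fall back to the org default.
    for setting in (node, workflow, agent):
        if setting is not None:
            return setting
    return org_default


# A Classifier node override beats the workflow, agent, and org defaults.
print(resolve_model(
    org_default="gpt-5",        # Admin Settings > LLM Settings
    agent="claude-sonnet-4-5",  # Agent > General > LLM Settings
    workflow=None,              # no workflow-level override
    node="gpt-5-mini",          # Classifier > Advanced Configuration
))  # prints: gpt-5-mini
```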

Pick the right tier for the task

| Tier | Cost | Speed | Use when |
| --- | --- | --- | --- |
| T1 (frontier) | Highest | Slowest | Complex reasoning, customer-facing responses, compliance evaluation |
| T2 (balanced) | Moderate | Moderate | Classification, summarization, data extraction. Best default. |
| T3 (fast/cheap) | Lowest | Fastest | High-volume routing, sentiment detection, simple categorization |

Example: mixed models in one workflow

[Webhook trigger]
        │
        ▼
[Classifier]          ← T3 (GPT-5 Mini): route tickets by category
    │
    ├── Billing ──► [Agent]            ← T1 (Claude Opus): draft accurate refund response
    │                  │
    │                  ▼
    │              [Guardrail]         ← T2 (Claude Sonnet): check policy compliance
    │
    └── Technical ──► [Agent]          ← T2 (Claude Sonnet): look up docs + draft fix
                         │
                         ▼
                   [Summarizer]        ← T3 (GPT-5 Mini): compress for internal log
The classifier runs on T3 because routing is straightforward. The billing agent uses T1 because refund responses need precision. The guardrail uses T2 for balanced quality and speed. The summarizer uses T3 because internal logs don’t need frontier quality.
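The same assignment can be pictured as data. This is only a sketch of the mapping above; the node names and keys are assumptions, not a real xpander configuration file:

```python
# Hypothetical node-to-model mapping for the workflow above.
workflow_models = {
    "classifier":        {"tier": "T3", "model": "GPT-5 Mini"},     # cheap, fast routing
    "billing_agent":     {"tier": "T1", "model": "Claude Opus"},    # precise refund drafts
    "billing_guardrail": {"tier": "T2", "model": "Claude Sonnet"},  # policy compliance check
    "technical_agent":   {"tier": "T2", "model": "Claude Sonnet"},  # docs lookup + draft fix
    "summarizer":        {"tier": "T3", "model": "GPT-5 Mini"},     # internal log compression
}
```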

Output formats

Each agent supports four output formats in the General tab: Default (text), Markdown, Structured output (JSON to a schema), and Voice (audio). Structured output is particularly useful in multi-agent setups. A specialist returning JSON is more reliable for the coordinator to parse than free text.
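For example, a specialist that classifies support tickets could be given a schema along these lines (a sketch only; the field names are assumptions, and the schema editor in xpander may differ):

```python
# Hypothetical JSON Schema for a ticket-classification specialist.
# Field names are illustrative assumptions.
ticket_schema = {
    "type": "object",
    "properties": {
        "category":         {"type": "string", "enum": ["billing", "technical", "other"]},
        "sentiment":        {"type": "string", "enum": ["positive", "neutral", "negative"]},
        "summary":          {"type": "string"},
        "refund_requested": {"type": "boolean"},
    },
    "required": ["category", "summary"],
}
```

A coordinator can then read `category` or `refund_requested` directly instead of parsing free text.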

What’s next

Multi-Agent Tasks

Coordinate specialized agents on shared tasks.

Building Workflows

Backend automation on the visual canvas.