Xpander connects to multiple LLM providers through a single interface. You can assign different models to different parts of your system so a ticket classifier runs on a fast, cheap model while a customer response agent uses a frontier model.
| Provider | Models |
| --- | --- |
| Anthropic | Claude Opus 4.6 (T1), Sonnet 4.6 (T2), Sonnet 4.5 (T2) |
| OpenAI | GPT-5.4 (T1), GPT-5 (T2), GPT-5 Mini (T3), GPT-4o (T2) |
| Google AI Studio | Gemini models |
| Amazon Bedrock | Claude and Titan via AWS |
| Azure AI Foundry | Azure-hosted models |
| Nvidia | NIM-hosted models |
| Fireworks | Fast inference models |
| OpenRouter | Multi-provider routing |
The table above shows common examples. The full list varies by plan. Also available: Nebius Token Factory, Cloudflare AI Gateway, Tzafon LightCone, ByteDance ModelArk.
Each provider shows a Featured tab (curated models) and a Custom tab (enter any model ID). Models are labeled by tier: T1 (most capable), T2 (balanced), T3 (fast/cheap).

Use Xpander’s built-in API access (no keys needed) or bring your own credentials through Admin Settings (enterprise plans).
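To make the tier trade-off concrete, here is a minimal sketch of tier-based selection. The `pick_model` helper and the criticality labels are hypothetical, not an Xpander API; the model names come from the table above:

```python
# Illustrative tier-to-model mapping; the choices below are examples from
# the provider table, not a fixed Xpander default.
TIER_MODELS = {
    "T1": "Claude Opus 4.6",  # most capable
    "T2": "GPT-5",            # balanced
    "T3": "GPT-5 Mini",       # fast/cheap
}

def pick_model(criticality: str) -> str:
    """Hypothetical helper: map a rough task-criticality label to a tier."""
    tier = {"high": "T1", "medium": "T2", "low": "T3"}[criticality]
    return TIER_MODELS[tier]

print(pick_model("low"))  # → GPT-5 Mini
```

The point is simply that tier, not a specific model name, is usually the decision you make first.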
- **Org default:** New agents and workflows inherit this unless overridden. Configure in Admin Settings, LLM Settings tab.
- **Per-agent:** Each agent picks its own provider and model in the General tab. This is where most model decisions happen.
- **Per-workflow:** Applies to all AI nodes in a workflow unless a specific node overrides it.
- **Per-node:** Classifier, Summarizer, and Guardrail nodes each have independent LLM settings in Advanced Configuration.
Agent nodes don’t have LLM settings. They inherit the model from the agent you selected. Only Classifier, Summarizer, and Guardrail have per-node model selection.
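The override order above amounts to "most specific setting wins". A minimal sketch, assuming hypothetical names (this is not the Xpander SDK):

```python
from typing import Optional

def resolve_model(org_default: str,
                  workflow_model: Optional[str] = None,
                  node_model: Optional[str] = None) -> str:
    """Return the most specific model setting that is defined."""
    for setting in (node_model, workflow_model, org_default):
        if setting is not None:
            return setting

# A Classifier node with its own setting beats the workflow and org default:
print(resolve_model("GPT-5", workflow_model="Sonnet 4.6", node_model="GPT-5 Mini"))
# A node with no override falls back to the workflow setting:
print(resolve_model("GPT-5", workflow_model="Sonnet 4.6"))
```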
The classifier runs on T3 because routing is straightforward. The billing agent uses T1 because refund responses need precision. The guardrail uses T2 for balanced quality and speed. The summarizer uses T3 because internal logs don’t need frontier quality.
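The scenario above can be captured as plain data. This is an illustrative mapping, not an Xpander config format:

```python
# Tier assignment for the example pipeline described in the text.
NODE_TIERS = {
    "classifier": "T3",     # routing is straightforward
    "billing_agent": "T1",  # refund responses need precision
    "guardrail": "T2",      # balanced quality and speed
    "summarizer": "T3",     # internal logs don't need frontier quality
}

# Which nodes run on the cheap tier?
print(sorted(node for node, tier in NODE_TIERS.items() if tier == "T3"))
# → ['classifier', 'summarizer']
```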
Each agent supports four output formats in the General tab: Default (text), Markdown, Structured output (JSON to a schema), and Voice (audio).

Structured output is particularly useful in multi-agent setups. A specialist returning JSON is more reliable for the coordinator to parse than free text.
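To illustrate why, here is a hypothetical specialist reply in structured-output mode; the field names are invented for this example, not a schema Xpander ships:

```python
import json

# A specialist agent configured for structured output returns JSON that
# conforms to a schema, so the coordinator parses fields, not prose.
specialist_reply = '{"intent": "refund", "amount": 49.99, "escalate": false}'

record = json.loads(specialist_reply)
# The coordinator can rely on the schema-guaranteed fields being present:
assert {"intent", "amount", "escalate"} <= record.keys()
print(record["intent"], record["escalate"])  # → refund False
```

With free text, the coordinator would need brittle string matching to extract the same facts.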