Start with LLM Models
Learn about the LLM models supported by xpander.ai
Single Query (Quick Start)
Send one-off queries to your preferred LLM with built-in tool support.
Event Streaming (Slack, Teams, REST API, Real-Time Voice)
Handle real-time events and messages from various platforms with continuous LLM interactions.
Multi-Step Tasks
Break down complex tasks into manageable steps with automatic tool execution and state management.
Best Practices for LLM Provider Integration
When integrating with different LLM providers, follow these best practices for consistent behavior (all four are illustrated in the sketch after this list):

- Initialize memory without specifying a provider
- Always specify the provider when retrieving tools
- Add raw LLM responses directly to memory
- Always specify the provider when extracting tool calls
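A minimal sketch of the four practices, assuming an existing `agent` object, an OpenAI client named `client`, and the `XpanderClient` / `LLMProvider` imports from the SDK (exact method names may vary by SDK version):

```python
from xpander_sdk import XpanderClient, LLMProvider

# 1. Initialize memory without specifying a provider.
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
)

# 2. Always specify the provider when retrieving tools.
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,  # standardized, OpenAI-compatible memory
    tools=tools,
)

# 3. Add the raw LLM response directly to memory.
agent.add_messages(response.model_dump())

# 4. Always specify the provider when extracting tool calls.
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=LLMProvider.OPEN_AI,
)
agent.run_tools(tool_calls=tool_calls)
```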
Note: The xpander.ai SDK handles format conversion between the standardized memory format and provider-specific formats. However, you must always specify which provider format to use when retrieving tools and extracting tool calls.
Complete Example: Gemini via OpenAI-compatible API
Here’s a complete example using Gemini through its OpenAI-compatible API:
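The sketch below assumes Gemini's OpenAI-compatible endpoint (`https://generativelanguage.googleapis.com/v1beta/openai/`), the `GEMINI_API_KEY`, `XPANDER_API_KEY`, and `XPANDER_AGENT_ID` environment variables, the `gemini-2.0-flash` model name, and the SDK calls shown in the previous sketch; adapt these to your setup:

```python
import os

from openai import OpenAI
from xpander_sdk import XpanderClient, LLMProvider

# Gemini exposes an OpenAI-compatible endpoint, so the standard
# OpenAI client works once pointed at Google's base URL.
gemini = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

xpander = XpanderClient(api_key=os.environ["XPANDER_API_KEY"])
agent = xpander.agents.get(agent_id=os.environ["XPANDER_AGENT_ID"])

# Start a task; memory is initialized without naming a provider.
agent.add_task(input="What can you do?")

while not agent.is_finished():
    response = gemini.chat.completions.create(
        model="gemini-2.0-flash",
        messages=agent.messages,  # standardized memory format
        tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
        tool_choice="auto",
    )

    # Add the raw LLM response directly to memory.
    agent.add_messages(response.model_dump())

    # Extract tool calls in the same provider format, then run them.
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI,
    )
    agent.run_tools(tool_calls=tool_calls)

print(agent.retrieve_execution_result().result)
```

Because Gemini is reached through an OpenAI-compatible API, the OpenAI provider format is used for both tool retrieval and tool-call extraction.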
Message Structure
The xpander.ai SDK stores messages in a standard, OpenAI-compatible format within `agent.messages`, regardless of the LLM provider used:

- Each message is a dictionary with `role` and `content` fields
- Roles include `system`, `user`, `assistant`, and `tool`
- Tool calls are stored in the `tool_calls` field of assistant messages
- The memory structure is consistent across all providers
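For illustration, `agent.messages` after one assistant turn that invoked a hypothetical `get_weather` tool might look like this (all values are made up):

```python
[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": "{\"city\": \"Paris\"}",
            },
        }],
    },
    {"role": "tool", "tool_call_id": "call_abc123", "content": "18°C, clear skies"},
]
```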
Next Steps
Add Voice & Chat Interfaces
Add voice, chat, and other human-machine interfaces to your agents
Manage Multi-User State
Learn how to handle state across multiple users and sessions
Manage Agent Memory
Configure and manage memory state between your AI agents
Add User Authentication
Implement secure user authentication for your agents