LLM SDKs
Use the xpander SDK with AI agents built on LLM provider frameworks.
Single Query (Quick Start)
Send one-off queries to your preferred LLM with built-in tool support.
Event streaming (Slack, Teams, REST API, real-time voice)
Handle real-time events and messages from various platforms with continuous LLM interactions.
Multi-Step Tasks
Break down complex tasks into manageable steps with automatic tool execution and state management.
Best Practices for LLM Provider Integration
When integrating with different LLM providers, follow these best practices for consistent behavior:
- Create a task to initialize memory.
- Always specify the provider when retrieving tools.
- Add raw LLM responses directly to memory.
- Always specify the provider when extracting tool calls.
Note: The xpander.ai SDK handles format conversion between the standardized messages format and provider-specific formats. However, you must always specify which provider format to use when retrieving tools and extracting tool calls.
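The four practices above can be seen end to end in a runnable sketch. A stub class stands in for the real SDK agent here so the snippet is self-contained; the method names (`add_task`, `get_tools`, `add_messages`, `extract_tool_calls`) and the provider argument mirror the practices described, but the real xpander SDK's signatures may differ.

```python
# Illustrative stub standing in for the real xpander SDK agent (hypothetical
# names and signatures). It mirrors the four best practices: create a task,
# retrieve tools per provider, store raw responses, extract calls per provider.
class StubAgent:
    def __init__(self):
        self.messages = []  # standardized, OpenAI-compatible message list
        self._tools = {"openai": [{"type": "function",
                                   "function": {"name": "search", "parameters": {}}}]}

    def add_task(self, input):
        # 1. Creating a task initializes memory with the user's input.
        self.messages.append({"role": "user", "content": input})

    def get_tools(self, llm_provider):
        # 2. Tools are returned in the named provider's format.
        return self._tools[llm_provider]

    def add_messages(self, raw_response):
        # 3. Raw LLM responses are appended to memory as-is.
        self.messages.append(raw_response)

    def extract_tool_calls(self, llm_response, llm_provider):
        # 4. Tool calls are parsed assuming the named provider's format.
        assert llm_provider == "openai", "provider must always be specified"
        return llm_response.get("tool_calls", [])


agent = StubAgent()
agent.add_task(input="What is the weather in Paris?")
tools = agent.get_tools(llm_provider="openai")

# A provider response in OpenAI format, added to memory unchanged:
response = {"role": "assistant", "content": None,
            "tool_calls": [{"id": "call_1", "type": "function",
                            "function": {"name": "search", "arguments": "{}"}}]}
agent.add_messages(response)
calls = agent.extract_tool_calls(llm_response=response, llm_provider="openai")
print(len(calls))  # 1 tool call extracted, ready for execution
```

The point of the stub is the shape of the loop, not the internals: the same four calls, with an explicit provider argument on the tool-facing ones, apply whichever LLM backend you use.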
Complete Example: Gemini via OpenAI-compatible API
Here’s a complete example using Gemini through its OpenAI-compatible API:
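The original code for this example is not present in this copy of the page; as a stand-in, here is a minimal sketch of the request such a call would make. The base URL and model name follow Google's published OpenAI-compatibility endpoint, but treat both as assumptions to verify against current Gemini documentation; the actual client call is shown as a comment so the snippet stays self-contained.

```python
import json

# Google's OpenAI-compatible endpoint for Gemini (verify against current docs).
GEMINI_OPENAI_BASE = "https://generativelanguage.googleapis.com/v1beta/openai"

# The request body uses the same standardized message format the SDK stores.
payload = {
    "model": "gemini-2.0-flash",  # assumed model name; substitute your own
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the xpander SDK in one line."},
    ],
}

# With the official openai package, the same request would look like
# (untested sketch; requires a Gemini API key):
#   from openai import OpenAI
#   client = OpenAI(api_key=GEMINI_API_KEY, base_url=GEMINI_OPENAI_BASE + "/")
#   completion = client.chat.completions.create(**payload)

print(json.dumps(payload, indent=2))
```

Because the payload is plain OpenAI-format messages, the rest of the agent loop (adding the raw response to memory, extracting tool calls with the OpenAI provider setting) is unchanged when Gemini is the backend.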
Message Structure
The xpander.ai SDK stores messages in a standard, OpenAI-compatible format within `agent.messages`, regardless of the LLM provider used:

- Each message is a dictionary with `role` and `content` fields
- Roles include `system`, `user`, `assistant`, and `tool`
- Tool calls are stored in the `tool_calls` field of assistant messages
- The messages structure is consistent across all providers