LLM Integration
Integrating different LLM providers with the xpander.ai SDK
The xpander.ai SDK is designed to work with multiple Large Language Model (LLM) providers. This guide explains how to integrate each of them with your xpander.ai agents.
Supported LLM Providers
The SDK supports the following LLM providers through the `LLMProvider` enum:
| Provider | Enum Value | Format |
|---|---|---|
| OpenAI | `LLMProvider.OPEN_AI` | Standard OpenAI format |
| FriendliAI | `LLMProvider.FRIENDLI_AI` | FriendliAI format (Claude, etc.) |
| Gemini | `LLMProvider.GEMINI_OPEN_AI` | Google Gemini with OpenAI format |
| Ollama | `LLMProvider.OLLAMA` | Ollama format for local models |
| LangChain | `LLMProvider.LANG_CHAIN` | LangChain format |
| Real-time OpenAI | `LLMProvider.REAL_TIME_OPEN_AI` | Real-time OpenAI format |
| NVIDIA NIM | `LLMProvider.NVIDIA_NIM` | NVIDIA NIM format |
| Amazon Bedrock | `LLMProvider.AMAZON_BEDROCK` | Amazon Bedrock format |
Integration Basics
Integrating an LLM provider involves three key steps, each sketched below:
1. Initialize memory - initialize the agent's memory with the execution input and instructions
2. Format tools - get tools formatted for the specific LLM provider
3. Extract tool calls - parse the LLM response to extract tool calls
Provider Specification Requirements
Important: The xpander.ai SDK requires explicit provider specification when getting tools and extracting tool calls. Always specify the `llm_provider` parameter with the appropriate provider enum value.
Initialize memory
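A minimal sketch of this step, assuming the Python SDK; the bootstrap names used here (`XpanderClient`, `agents.get`, `add_task`, `memory.init_messages`, `execution.input_message`) should be verified against your SDK version:

```python
from xpander_sdk import XpanderClient, LLMProvider

# Assumed bootstrap: replace the key and agent ID with your own values.
xpander_client = XpanderClient(api_key="YOUR_XPANDER_API_KEY")
agent = xpander_client.agents.get(agent_id="YOUR_AGENT_ID")

# Give the agent a task so there is execution input to seed memory with.
agent.add_task(input="What can you do?")

# Step 1: initialize memory with the execution input and instructions.
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI,
)
```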
Get tools formatted for your LLM provider
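Continuing the sketch, the tools are requested in the target provider's format and handed to that provider's client. The OpenAI client call below is standard; `agent.get_tools` and `agent.messages` are the same assumed names as above:

```python
from openai import OpenAI

openai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

# Step 2: fetch tool definitions formatted for the provider you will call.
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)

response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,  # stored in a standard, provider-compatible format
    tools=tools,
    tool_choice="auto",
)
```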
Extract tool calls with explicit provider
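Finally, the raw response is parsed with the same provider named explicitly. Again, `extract_tool_calls` and `run_tools` are assumed entry points; confirm them against your SDK version:

```python
# Step 3: extract tool calls from the raw LLM response, naming the provider.
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=LLMProvider.OPEN_AI,
)

# Execute the extracted tool calls through the agent.
agent.run_tools(tool_calls=tool_calls)
```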
Provider-Specific Integration
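The three steps are identical across providers; only the enum value and the model client change. As one illustration, Gemini can be driven through Google's published OpenAI-compatible endpoint, which is why the SDK exposes `LLMProvider.GEMINI_OPEN_AI`. The xpander method names below are the same assumptions as in the sketches above:

```python
from openai import OpenAI
from xpander_sdk import XpanderClient, LLMProvider

# Gemini speaks the OpenAI wire format at this endpoint, so the OpenAI
# client can be reused with a different base_url and API key.
gemini_client = OpenAI(
    api_key="YOUR_GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = gemini_client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=agent.messages,
    tools=agent.get_tools(llm_provider=LLMProvider.GEMINI_OPEN_AI),
)

# Extraction names the same provider that formatted the tools.
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=LLMProvider.GEMINI_OPEN_AI,
)
```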
Best Practices
Always specify the correct provider for all operations:
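For example, with the same assumed method names as above, the provider is named at every point where message or tool formats matter:

```python
provider = LLMProvider.OPEN_AI

agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=provider,
)
tools = agent.get_tools(llm_provider=provider)
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=provider,
)
```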
Use the same provider for both retrieving tools and extracting tool calls:
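Pinning the provider in a single constant, as above, guarantees this. Mixing providers means the tools and the extracted tool calls are in different formats, so extraction can fail or come back empty:

```python
# Anti-pattern: tools formatted for one provider, extraction for another.
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=LLMProvider.GEMINI_OPEN_AI,  # mismatched provider
)
```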
Message Format Handling
The xpander.ai SDK automatically handles message format conversion for all LLM providers: it stores all messages in a standard format in `agent.messages`, and this format is compatible with every supported provider.
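In practice, this means `agent.messages` can be handed directly to a provider client and the reply appended back without any manual conversion. A sketch reusing the OpenAI client from above; `add_messages` is an assumed entry point, so check it against your SDK version:

```python
# agent.messages is provider-neutral, so no conversion step is needed here.
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,
    tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
)

# Append the assistant turn back into the agent's shared message store.
agent.add_messages(response.model_dump())
```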