The xpander.ai SDK is designed to work with multiple Large Language Model (LLM) providers. This guide explains how to integrate various LLM providers with your xpander.ai agents.

Supported LLM Providers

The SDK supports the following LLM providers through the LLMProvider enum:

| Provider | Enum Value | Format |
|---|---|---|
| OpenAI | LLMProvider.OPEN_AI | Standard OpenAI format |
| FriendliAI | LLMProvider.FRIENDLI_AI | FriendliAI format (Claude, etc.) |
| Gemini | LLMProvider.GEMINI_OPEN_AI | Google Gemini with OpenAI format |
| Ollama | LLMProvider.OLLAMA | Ollama format for local models |
| LangChain | LLMProvider.LANG_CHAIN | LangChain format |
| Real-time OpenAI | LLMProvider.REAL_TIME_OPEN_AI | Real-time OpenAI format |
| NVIDIA NIM | LLMProvider.NVIDIA_NIM | NVIDIA NIM format |
| Amazon Bedrock | LLMProvider.AMAZON_BEDROCK | Amazon Bedrock format |

Integration Basics

Integrating an LLM provider involves three key steps:

  1. Initialize memory - load the execution input and instructions into the agent's memory
  2. Format tools - Get tools formatted for the specific LLM provider
  3. Extract tool calls - Parse the LLM response to extract tool calls

Provider Specification Requirements

Important: The xpander.ai SDK requires explicit provider specification when getting tools and extracting tool calls. Always specify the llm_provider parameter with the appropriate provider enum value.

Step 1: Initialize memory

from xpander_sdk import LLMProvider

# Initialize memory - no llm_provider is needed at this step
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions
)
Step 2: Get tools formatted for your LLM provider

# Get tools in the format required by your specific LLM provider
# Always specify the provider to get the correct format
openai_tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)
claude_tools = agent.get_tools(llm_provider=LLMProvider.FRIENDLI_AI)
gemini_tools = agent.get_tools(llm_provider=LLMProvider.GEMINI_OPEN_AI)
Step 3: Extract tool calls with explicit provider

# Always explicitly specify the provider when extracting tool calls
# This must match the provider used for getting tools
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=LLMProvider.OPEN_AI
)

Provider-Specific Integration

from xpander_sdk import XpanderClient, LLMProvider
from openai import OpenAI
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()
XPANDER_API_KEY = os.environ["XPANDER_API_KEY"]
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

# Initialize clients
xpander_client = XpanderClient(api_key=XPANDER_API_KEY)
openai_client = OpenAI(api_key=OPENAI_API_KEY)

# Get agent
agent = xpander_client.agents.get(agent_id="agent-1234")

# Add task and initialize memory
agent.add_task(input="What are the latest developments in AI?")
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions
)

# Run until completion
while not agent.is_finished():
    # Get OpenAI-formatted tools
    tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)
    
    # Call OpenAI
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=agent.messages,
        tools=tools,
        tool_choice="auto",
        temperature=0.0
    )
    
    # Add response to memory
    agent.add_messages(messages=response.model_dump())
    
    # Extract tool calls with explicit provider
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI
    )
    
    # Execute tools if any
    if tool_calls:
        results = agent.run_tools(tool_calls=tool_calls)
        print(f"Executed {len(results)} tools")

# Get final result
result = agent.retrieve_execution_result()
print(result.result)
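The loop above is the same for every provider once the LLM call itself is factored out. The sketch below shows that provider-agnostic shape; the helper name `run_agent_loop` and the injected `call_llm` / `extract_tool_calls` callables are illustrative, not part of the SDK:

```python
def run_agent_loop(agent, call_llm, extract_tool_calls, llm_provider):
    """Run an agent to completion against any LLM provider.

    call_llm(messages, tools) must return the provider's raw response dict;
    extract_tool_calls(llm_response, llm_provider) must return a list of
    tool calls. Both tool retrieval and extraction use the same provider.
    """
    while not agent.is_finished():
        # Get tools in the provider's format
        tools = agent.get_tools(llm_provider=llm_provider)
        # Call the LLM with the agent's messages and tools
        response = call_llm(messages=agent.messages, tools=tools)
        # Add the response to memory
        agent.add_messages(messages=response)
        # Extract tool calls with the same provider used for the tools
        tool_calls = extract_tool_calls(
            llm_response=response, llm_provider=llm_provider
        )
        if tool_calls:
            agent.run_tools(tool_calls=tool_calls)
    return agent.retrieve_execution_result()
```

For OpenAI, `call_llm` would wrap `openai_client.chat.completions.create(...)` followed by `model_dump()`, and `extract_tool_calls` would be `XpanderClient.extract_tool_calls`.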

Best Practices

Always specify the correct provider for all operations:

# For retrieving tools
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)

# For extracting tool calls
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=LLMProvider.OPEN_AI
)

Use the same provider for both retrieving tools and extracting tool calls:

# Use the same provider in both places
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)
# ...call LLM...
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),
    llm_provider=LLMProvider.OPEN_AI  # Same provider as used for tools
)
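One way to make a provider mismatch impossible is to bind the provider once and reuse it in both places. A minimal sketch, assuming nothing beyond the calls shown above (the helper name `bind_provider` is ours, not part of the SDK):

```python
def bind_provider(agent, extract_fn, llm_provider):
    """Return (get_tools, extract_tool_calls) closures that share one
    provider, so the two calls can never drift apart."""
    def get_tools():
        return agent.get_tools(llm_provider=llm_provider)
    def extract_tool_calls(llm_response):
        return extract_fn(llm_response=llm_response, llm_provider=llm_provider)
    return get_tools, extract_tool_calls
```

With the SDK this would be used as `bind_provider(agent, XpanderClient.extract_tool_calls, LLMProvider.OPEN_AI)`.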

Message Format Handling

The xpander.ai SDK handles message format conversion automatically. All messages are stored in a single standard format in agent.messages, which the SDK adapts to whichever LLM provider you use:

[
  {
    "role": "system",
    "content": "System instructions..."
  },
  {
    "role": "user",
    "content": "User query..."
  },
  {
    "role": "assistant",
    "content": "Assistant response...",
    "tool_calls": [...]  // If any tools were called
  }
]
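If you build or modify messages yourself, a few lines can sanity-check that they still match this shape. The validator below is our sketch, not an SDK function, and assumes only the fields shown above (plus the common "tool" role used for tool results):

```python
def validate_messages(messages):
    """Check that each message has a known role, string (or null) content,
    and a list-valued tool_calls field when present."""
    allowed_roles = {"system", "user", "assistant", "tool"}
    for msg in messages:
        if msg.get("role") not in allowed_roles:
            return False
        # content may be null on assistant messages that only call tools
        if not isinstance(msg.get("content"), (str, type(None))):
            return False
        if "tool_calls" in msg and not isinstance(msg["tool_calls"], list):
            return False
    return True
```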