The xpander.ai SDK is a powerful library that allows you to build, manage, and deploy AI agents with function calling capabilities. The SDK provides a clean, intuitive interface for working with the xpander.ai platform, enabling you to create agents that can perform complex tasks by combining LLMs with tools.

Language Support

The xpander.ai SDK is available in multiple programming languages:

  • Python - Full support with published packages
  • TypeScript/JavaScript - Full support with published packages
  • C# - Supported but not publicly published (contact support for binaries)
  • Java - Supported but not publicly published (contact support for binaries)

The SDK is built using projen and jsii, allowing for consistent functionality across programming languages.

Installation

Install the xpander.ai SDK using your language's package manager. For Python:

pip install xpander-sdk

For TypeScript/JavaScript, the package is also published to npm:

npm install xpander-sdk

Quick Start

Step 1: Import and initialize

from xpander_sdk import XpanderClient
from dotenv import load_dotenv
import os

# Load API key from environment
load_dotenv()
XPANDER_API_KEY = os.environ.get("XPANDER_API_KEY")

# Initialize client
client = XpanderClient(api_key=XPANDER_API_KEY)

Step 2: Get or create an agent

# List your agents
agents = client.agents.list()

# Get an existing agent
if agents:
    agent = client.agents.get(agent_id=agents[0].id)
else:
    # Create a new agent
    agent = client.agents.create(
        name="Research Assistant",
        description="An agent that performs research tasks",
        instructions="You are a helpful research assistant. Provide accurate information and cite sources."
    )

Step 3: Run a task

from xpander_sdk import LLMProvider
from openai import OpenAI

# OpenAI client for LLM interactions
openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Add a task
agent.add_task(input="What are the environmental benefits of renewable energy?")

# Initialize memory
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Run until completion
while not agent.is_finished():
    # Get next action from LLM
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=agent.messages,
        tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
        tool_choice="auto",
        temperature=0.0
    )
    
    # Add LLM response to memory
    agent.add_messages(messages=response.model_dump())
    
    # Extract and run tool calls
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI
    )
    
    if tool_calls:
        results = agent.run_tools(tool_calls=tool_calls)
        print(f"Executed {len(results)} tools")

# Get results
result = agent.retrieve_execution_result()
print(result.result)

SDK Structure

The xpander.ai SDK is organized around a few key components: the XpanderClient entry point, agents, memory, and tools. The concepts below describe how they fit together.

Key Concepts

Agents

Agents are the central entity in the xpander.ai SDK. Each agent has:

  • Instructions: Guidance on how the agent should behave
  • Memory: Storage for conversation history
  • Tools: Functions the agent can call to perform actions
  • Executions: Tasks assigned to the agent
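To make this anatomy concrete, here is an illustrative, stdlib-only sketch of what an agent bundles. The class and field names below are conceptual, not the SDK's actual definitions:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptualAgent:
    """Conceptual model of an agent's anatomy (not the real SDK class)."""
    instructions: str                               # guidance on behavior
    memory: list = field(default_factory=list)      # conversation history
    tools: dict = field(default_factory=dict)       # name -> callable
    executions: list = field(default_factory=list)  # tasks assigned so far

agent = ConceptualAgent(instructions="You are a helpful research assistant.")
agent.tools["echo"] = lambda text: text.upper()
agent.executions.append({"input": "Summarize renewable energy trends.", "status": "pending"})
```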

Tasks and Executions

Tasks are assigned to agents using the add_task() method (Python) or addTask() method (TypeScript). Each task creates an execution, which represents a single run of the agent with a specific input.
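The lifecycle can be sketched as a small state machine. This is illustrative only; the SDK's execution object has its own fields and status values:

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"

class SketchExecution:
    """One run of an agent for a single input (conceptual sketch)."""
    def __init__(self, input_message: str):
        self.input_message = input_message
        self.status = Status.PENDING
        self.result = None

    def start(self):
        self.status = Status.RUNNING

    def finish(self, result: str):
        self.status = Status.COMPLETED
        self.result = result

ex = SketchExecution("What are the benefits of renewable energy?")
ex.start()
ex.finish("Lower emissions, reduced fuel costs, energy independence.")
```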

Function Calling

The xpander.ai SDK provides a powerful system for function calling, allowing agents to invoke both remote tools (hosted on the xpander.ai platform) and local tools (functions defined in your application).
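For intuition, extracting tool calls from an OpenAI-style chat completion boils down to reading the assistant message's tool_calls array. A simplified, stdlib-only sketch (the SDK's XpanderClient.extract_tool_calls additionally normalizes across providers):

```python
import json

def extract_tool_calls_sketch(llm_response: dict) -> list:
    """Pull (id, name, arguments) out of an OpenAI-style response dict."""
    message = llm_response["choices"][0]["message"]
    calls = []
    for call in message.get("tool_calls") or []:
        calls.append({
            "id": call["id"],
            "name": call["function"]["name"],
            # Arguments arrive as a JSON string and must be parsed
            "arguments": json.loads(call["function"]["arguments"]),
        })
    return calls

response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {"name": "calculate",
                             "arguments": '{"expression": "2 + 2"}'},
            }],
        }
    }]
}
```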

Examples

Basic Research Agent

from xpander_sdk import XpanderClient, LLMProvider
from openai import OpenAI
from dotenv import load_dotenv
import os
import time

# Load environment variables
load_dotenv()
XPANDER_API_KEY = os.environ.get("XPANDER_API_KEY")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

# Initialize clients
xpander_client = XpanderClient(api_key=XPANDER_API_KEY)
openai_client = OpenAI(api_key=OPENAI_API_KEY)

# Create or get agent
agent = xpander_client.agents.create(
    name="Research Assistant",
    description="AI assistant for research tasks",
    instructions="You are a helpful research assistant. Your task is to help users find information from the web, summarize content, and answer questions accurately."
)

# Add a research task
agent.add_task(input="What are the latest advancements in renewable energy?")

# Initialize memory
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions
)

# Run until completion
while not agent.is_finished():
    # Get next action from LLM
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=agent.messages,
        tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
        tool_choice="auto",
        temperature=0.0
    )
    
    # Add LLM response to memory
    agent.add_messages(messages=response.model_dump())
    
    # Extract and run tool calls
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI
    )
    
    if tool_calls:
        results = agent.run_tools(tool_calls=tool_calls)
        print(f"Executed {len(results)} tools")
    
    # Short delay to prevent rate limiting
    time.sleep(0.5)

# Get results
result = agent.retrieve_execution_result()
print(result.result)

Agent with Local Tools

from xpander_sdk import XpanderClient, LLMProvider, ToolCallType, ToolCallResult
from openai import OpenAI
from dotenv import load_dotenv
import os
import time
import math

# Load environment variables
load_dotenv()
XPANDER_API_KEY = os.environ.get("XPANDER_API_KEY")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

# Initialize clients
xpander_client = XpanderClient(api_key=XPANDER_API_KEY)
openai_client = OpenAI(api_key=OPENAI_API_KEY)

# Create a calculator agent
agent = xpander_client.agents.create(
    name="Calculator Assistant",
    description="An agent that can perform calculations",
    instructions="You are a helpful assistant that can perform mathematical calculations."
)

# Define a local tool for calculations
def calculate(expression):
    """Safely evaluate a mathematical expression."""
    try:
        # Define allowed symbols and functions
        allowed_names = {
            "abs": abs, "round": round, "min": min, "max": max,
            "sum": sum, "pow": pow, "math": math
        }
        
        # Evaluate with builtins disabled and only whitelisted names in scope
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return ToolCallResult(
            function_name="calculate",
            tool_call_id="some_id",
            is_success=True,
            result=str(result)
        )
    except Exception as e:
        return ToolCallResult(
            function_name="calculate",
            tool_call_id="some_id", 
            is_success=False,
            error=f"Error evaluating expression: {str(e)}"
        )

# Register the local tool handler
agent.register_local_tool_handler(tool_name="calculate", handler=calculate)

# Add a task
agent.add_task(input="I need to calculate the area of a circle with radius 5 cm and also the volume of a sphere with the same radius.")

# Initialize memory
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions
)

# Run agent loop
while not agent.is_finished():
    # Get next action from LLM
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=agent.messages,
        tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
        tool_choice="auto",
        temperature=0.0
    )
    
    # Add LLM response to memory
    agent.add_messages(messages=response.model_dump())
    
    # Extract tool calls
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI
    )
    
    # Filter for local tool calls
    local_tool_calls = XpanderClient.retrieve_pending_local_tool_calls(
        tool_calls=tool_calls
    )
    
    # Run local tools
    if local_tool_calls:
        results = agent.run_tools(tool_calls=local_tool_calls)
        print(f"Executed {len(results)} calculations")

# Get result
result = agent.retrieve_execution_result()
print(result.result)
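A note on the calculate tool above: eval with restricted builtins is still risky, since attribute chains on exposed objects can escape the sandbox. If you only need arithmetic plus math functions, walking the AST is a safer sketch (safe_eval is an illustrative helper, not an SDK function):

```python
import ast
import math
import operator

# Whitelisted operators for binary and unary expressions
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.Mod: operator.mod,
    ast.USub: operator.neg, ast.UAdd: operator.pos,
}

def safe_eval(expression: str):
    """Evaluate an arithmetic expression by walking its AST instead of eval()."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        # Allow math.<name> attributes (but no dunders) and calls like math.sqrt(2)
        if isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name) \
                and node.value.id == "math" and not node.attr.startswith("_"):
            return getattr(math, node.attr)
        if isinstance(node, ast.Call):
            func = _eval(node.func)
            return func(*[_eval(arg) for arg in node.args])
        raise ValueError(f"Disallowed expression node: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval"))
```

For the task in the example, safe_eval("math.pi * 5 ** 2") yields the circle's area, while anything outside the whitelist raises ValueError.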

Best Practices

Environment Variables

Always store API keys and other sensitive information in environment variables, preferably using a tool like python-dotenv to manage them:

from dotenv import load_dotenv
import os

load_dotenv()
XPANDER_API_KEY = os.environ.get("XPANDER_API_KEY")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
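If you prefer not to depend on python-dotenv, a small stdlib-only helper gives the same fail-fast behavior. The function name require_env is mine, not part of the SDK:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value or fail fast with a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example: XPANDER_API_KEY = require_env("XPANDER_API_KEY")
```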

Error Handling

Implement robust error handling for API calls and tool executions:

from xpander_sdk.exceptions import AuthenticationError, ResourceNotFoundError, ApiError

try:
    result = agent.run_tool(tool=tool_call)
except AuthenticationError:
    # Handle authentication issues
    print("Authentication failed. Check your API key.")
except ResourceNotFoundError:
    # Handle missing resources
    print("Resource not found. Check your agent ID.")
except ApiError as e:
    # Handle API errors
    print(f"API error: {e.status_code} - {str(e)}")
except Exception as e:
    # Handle unexpected errors
    print(f"Unexpected error: {str(e)}")

Rate Limiting

Add delays between API calls to prevent rate limiting:

import time

# Add a short delay between iterations
time.sleep(1)
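A fixed sleep works, but exponential backoff recovers faster when the API is healthy and waits longer when it is overloaded. A generic stdlib-only sketch (with_backoff is an illustrative helper, not an SDK function):

```python
import random
import time

def with_backoff(fn, max_retries: int = 5, base_delay: float = 0.5):
    """Call fn(), retrying with exponentially growing delays plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the original error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Example:
# response = with_backoff(
#     lambda: openai_client.chat.completions.create(model="gpt-4o", messages=agent.messages)
# )
```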

Memory Management

Initialize memory properly before running executions:

agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

LLM Provider Support

The xpander.ai SDK supports multiple LLM providers:

Provider            Enum Value                   Description
OpenAI              LLMProvider.OPEN_AI          Default OpenAI format
Claude/Anthropic    LLMProvider.FRIENDLI_AI      Claude via FriendliAI format
Gemini              LLMProvider.GEMINI_OPEN_AI   Google Gemini, OpenAI format
Ollama              LLMProvider.OLLAMA           Ollama format
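When the provider is decided at runtime, a plain lookup keeps the mapping in one place. The enum member names come from the table above; the model-name prefixes are illustrative assumptions:

```python
# Map model-name prefixes to LLMProvider member names (names from the table above).
PROVIDER_BY_MODEL_PREFIX = {
    "gpt-": "OPEN_AI",
    "claude-": "FRIENDLI_AI",
    "gemini-": "GEMINI_OPEN_AI",
    "llama": "OLLAMA",
}

def provider_for_model(model: str) -> str:
    """Return the LLMProvider member name for a model, by prefix match."""
    for prefix, provider in PROVIDER_BY_MODEL_PREFIX.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"No provider mapping for model: {model}")

# Usage sketch: getattr(LLMProvider, provider_for_model("gpt-4o"))
```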