Bring Your Own LLM
Guide to using the xpander SDK with your own LLM deployment
While xpander.ai offers a serverless solution where you can use AI agents without managing LLM infrastructure, you might want to use your own LLM deployment for:
- Running models locally for lower latency
- Using specific model versions or custom fine-tuned models
- Implementing local function calling
- Maintaining full control over your LLM stack
- Handling sensitive data locally
Choose Your LLM Provider
To integrate your LLM with xpander’s tools and local functions, you’ll need to select one LLM provider and model combination. Import the necessary packages and initialize your client:
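As a minimal sketch, the snippet below uses OpenAI as the provider. The xpander-side calls (`XpanderClient`, `agents.get`) follow the SDK's documented pattern; the API keys, agent ID, and model choice are placeholders you replace with your own values.

```python
from openai import OpenAI
from xpander_sdk import XpanderClient, LLMProvider

# Your own LLM client -- any supported provider/model combination works the same way
llm_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

# xpander client and the agent whose tools you want to expose to your LLM
xpander_client = XpanderClient(api_key="YOUR_XPANDER_API_KEY")
agent = xpander_client.agents.get(agent_id="YOUR_AGENT_ID")
```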
Get Tools to the LLM
Configure your LLM client with the appropriate tools based on your provider:
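Continuing the sketch above with an OpenAI-compatible client: `agent.get_tools` returns the agent's tools in the schema the chosen provider expects, so they can be passed straight to the chat completion call. The model name and prompt are placeholders.

```python
# Fetch the agent's tools formatted for the chosen provider
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)

response = llm_client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "What tools do you have access to?"}],
    tools=tools,          # xpander tools in OpenAI tool-calling format
    tool_choice="auto",
)
```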
Run Tools for the LLM
Process the LLM response and execute the tools:
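A sketch of the execution step, continuing the example above: `extract_tool_calls` converts the raw provider response into xpander's unified format, and `run_tools` executes the calls against the agent's connected tools. The result attributes printed at the end are illustrative rather than guaranteed field names.

```python
# Convert the provider-specific response into xpander's unified tool-call format
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),  # raw response as a plain dict
    llm_provider=LLMProvider.OPEN_AI,
)

# Execute the calls against the agent's connected tools
results = agent.run_tools(tool_calls=tool_calls)
for result in results:
    print(result.function_name, result.result)  # illustrative attribute names
```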
Understanding extract_tool_calls
The `extract_tool_calls` method is a crucial utility that handles the conversion between different LLM function/tool-calling formats. Here's how it works:
- Purpose: It standardizes the various LLM response formats into a single, unified format that xpander can process.
- Provider-Specific Handling: The method automatically detects the response structure of the provider you specify and parses it accordingly.
- Automatic API Translation: Once standardized, these tool calls can be executed uniformly regardless of which provider produced them, as shown in the sketch after this list.
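As a rough sketch of what "uniform" means here, the snippet below reuses the response from the earlier example. Only the `llm_provider` hint would change for a different deployment; the field names printed on each extracted call are illustrative, not guaranteed by the SDK.

```python
# Only the provider hint changes between deployments; the extracted calls share
# one shape, so the execution step below is identical for every provider.
tool_calls = XpanderClient.extract_tool_calls(
    llm_response=response.model_dump(),  # response from the example above
    llm_provider=LLMProvider.OPEN_AI,    # swap for your own provider's enum value
)

for call in tool_calls:
    print(call.name, call.payload)       # illustrative field names

# Execution is uniform once the calls are standardized
agent.run_tools(tool_calls=tool_calls)
```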
Need help? Visit our Discord community or documentation.