SDK Function Calling
How to use the xpander AI SDK to work with tools and function calling
Getting Tools
The xpander SDK allows you to retrieve tools connected to your agent in the xpander platform.
Request
Response
The tools are returned in the format required by your LLM provider. Here’s an example response for different providers:
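For illustration, here is the shape of a single tool definition as returned for the OpenAI provider. The tool name and parameter schema below are hypothetical, not actual platform tools:

```python
# Hypothetical example of one tool in OpenAI's `tools` format.
# The tool name and parameters are illustrative only.
openai_format_tool = {
    "type": "function",
    "function": {
        "name": "weather-get-current",  # hypothetical tool identifier
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# This structure is what OpenAI's chat completions API expects
# in its `tools` parameter.
print(openai_format_tool["function"]["name"])
```

Other providers (Anthropic, Bedrock, etc.) use their own schemas; the SDK returns whichever format matches the provider you specify.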
Run Tools
To use tools with your LLM, follow these steps:
- Get the tools from your agent
- Pass them to your LLM
- Extract and execute any tool calls
- Process the results
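As a provider-agnostic sketch of step 3, tool calls can be read directly off an OpenAI-style chat completion response. The response dict below is hand-built for illustration; in a real flow it comes back from the LLM, and `XpanderClient.extract_tool_calls` performs this normalization for you across providers:

```python
import json

# A hand-built, OpenAI-style chat completion response containing
# one tool call (illustrative only).
llm_response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "weather-get-current",  # hypothetical tool name
                    "arguments": '{"city": "Paris"}',
                },
            }]
        }
    }]
}

# Step 3: extract the tool calls from the response.
calls = llm_response["choices"][0]["message"]["tool_calls"]
for call in calls:
    name = call["function"]["name"]
    args = json.loads(call["function"]["arguments"])
    # In a real flow, these would be handed to agent.run_tools(...).
    print(name, args)
```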
Function Calling Methods
There are two main approaches to executing tools; the SDK supports both parallel and sequential execution.
- Automated LLM Parsing (Recommended): Use the extract_tool_calls method to extract tool calls from the LLM response, then the run_tools method to execute them according to the rules of the agent.
- Manual Invocation: Build tool call dictionaries yourself and pass them directly to run_tools.
Tool Classes
The SDK provides two main classes for working with tools:
ToolCall
A class representing a single tool invocation, with the following attributes:
- name (str): The identifier of the tool to be called
- type (ToolCallType): The type of the tool call, usually ToolCallType.XPANDER
- payload (dict): Contains three parameter dictionaries:
  - bodyParams: Parameters sent in the request body
  - queryParams: URL query parameters
  - pathParams: URL path parameters
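A minimal sketch of the payload structure described above, using hypothetical parameter names and values:

```python
# Illustrative ToolCall payload; all parameter names/values are hypothetical.
payload = {
    "bodyParams": {"message": "Hello"},   # sent in the request body
    "queryParams": {"limit": 10},         # appended as URL query parameters
    "pathParams": {"user_id": "u-123"},   # substituted into the URL path
}
print(sorted(payload))
```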
ToolCallType
An enum class that defines the types of tool calls:
- XPANDER: Standard xpander.ai platform tools
- FUNCTION: Traditional function calls (for example, local functions)
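Conceptually, the enum looks like the Python sketch below. This is not the SDK's actual definition, and the member values are assumptions; only the two member names come from the description above:

```python
from enum import Enum

class ToolCallType(Enum):
    # Member values are illustrative; the SDK defines its own.
    XPANDER = "xpander"     # standard xpander.ai platform tools
    FUNCTION = "function"   # traditional/local function calls

print(ToolCallType.XPANDER.name)
```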
The key differences are:
- XpanderClient.extract_tool_calls(): Static utility method that converts LLM-specific function call formats (OpenAI, Anthropic, etc.) into xpander's standardized ToolCall format
- agent.run_tools(): Executes one or more tool calls, accepting either:
  - Parsed ToolCall objects from extract_tool_calls()
  - Manual dictionary format for direct invocation
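A sketch of the manual dictionary format, mirroring the ToolCall attributes. The tool name, type value, and parameters are hypothetical, and the exact keys the SDK accepts may differ:

```python
# Hypothetical manual tool-call dictionary for direct invocation.
manual_call = {
    "name": "weather-get-current",  # hypothetical tool identifier
    "type": "xpander",              # assumed string form of ToolCallType.XPANDER
    "payload": {
        "bodyParams": {},
        "queryParams": {"city": "Paris"},
        "pathParams": {},
    },
}
# In a real flow this would be passed to agent.run_tools([manual_call]).
print(manual_call["name"])
```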
Tool Response
The tool_response object represents the output from a tool invoked by an AI Agent. It contains key attributes that describe the invocation's execution and results. This data is used to build the AI Agent's memory and guide further interactions.
Tool Response Object Structure
Attribute | Type | Description |
---|---|---|
function_name | str | The name of the invoked function or tool. |
status_code | int | The HTTP status code returned by the tool or system. |
result | dict | The actual response or output from the tool. |
payload | dict | The data payload generated by the AI for the tool invocation. |
tool_call_id | str | A unique identifier for the specific tool invocation. |
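The attribute table above can be sketched as a dataclass. This is illustrative only; the SDK defines its own class, and the example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ToolResponse:
    function_name: str  # name of the invoked function or tool
    status_code: int    # HTTP status code returned by the tool
    result: dict        # actual response/output from the tool
    payload: dict       # AI-generated payload for the invocation
    tool_call_id: str   # unique id of this specific invocation

# Hypothetical example values:
resp = ToolResponse(
    function_name="weather-get-current",
    status_code=200,
    result={"temp_c": 21},
    payload={"queryParams": {"city": "Paris"}},
    tool_call_id="call_1",
)
print(resp.function_name, resp.status_code)
```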
Available Providers and Models
Here are all the supported LLM providers and their corresponding models. Choose the combination that best fits your needs:
Framework | Model Name | Model Identifier (String) |
---|---|---|
Amazon Bedrock | Anthropic Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |
Anthropic Claude 3.5 Sonnet | anthropic.claude-3-5-sonnet-20240620-v1:0 | |
Cohere Command R | cohere.command-r-v1:0 | |
Cohere Command R Plus | cohere.command-r-plus-v1:0 | |
Meta Llama 3.1 8B Instruct | meta.llama3-1-8b-instruct-v1:0 | |
Meta Llama 3.1 70B Instruct | meta.llama3-1-70b-instruct-v1:0 | |
Meta Llama 3.1 405B Instruct | meta.llama3-1-405b-instruct-v1:0 | |
Mistral Large 2402 | mistral.mistral-large-2402-v1:0 | |
Mistral Large 2407 | mistral.mistral-large-2407-v1:0 | |
Mistral Small 2402 | mistral.mistral-small-2402-v1:0 | |
Nvidia NIM | Meta Llama 3.1 70B Instruct | meta/llama-3.1-70b-instruct |
Ollama | Qwen2.5-Coder | qwen2.5-coder |
OpenAI | OpenAI GPT-4 | gpt-4 |
OpenAI GPT-4o | gpt-4o | |
OpenAI GPT-4o Mini | gpt-4o-mini |