Getting Tools

The xpander SDK allows you to retrieve tools connected to your agent in the xpander platform.

Request
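A minimal sketch of fetching the tools, assuming your API key and agent ID are available in environment variables. The agents.get accessor and the llm_provider argument shown here are assumptions and may differ slightly between SDK versions:

import os

from xpander_sdk import LLMProvider, XpanderClient

# Assumption: credentials and the agent ID come from environment variables.
xpander_client = XpanderClient(api_key=os.environ["XPANDER_API_KEY"])
agent = xpander_client.agents.get(agent_id=os.environ["XPANDER_AGENT_ID"])

# Retrieve the tools connected to the agent, formatted for the chosen LLM provider.
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)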

Response

The tools are returned in the format required by your LLM provider. Here’s an example of what the response looks like:
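With the OpenAI provider, for example, each tool comes back as a standard function-calling definition. The snippet below is an illustrative shape only; the actual names, descriptions, and parameter schemas come from the tools connected to your agent:

# Illustrative shape of an OpenAI-format tool definition (not real output).
tools = [
    {
        "type": "function",
        "function": {
            "name": "tavily-insights-fetchInsightsFromTavilyAI",
            "description": "Fetch insights from Tavily AI",  # assumed description
            "parameters": {
                "type": "object",
                "properties": {
                    "bodyParams": {"type": "object"},
                    "queryParams": {"type": "object"},
                    "pathParams": {"type": "object"},
                },
                "required": [],
            },
        },
    }
]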

Run Tools

To use tools with your LLM, follow these steps:

  1. Get the tools from your agent
  2. Pass them to your LLM
  3. Extract and execute any tool calls
  4. Process the results
Passing tools to LLM
llm_response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=conversation_history,
    tools=tools
) 

Function Calling Methods

There are two main approaches to executing tools. The SDK supports both parallel and sequential execution.

  1. Automated LLM Parsing (Recommended):
# Parse LLM response automatically into tool calls
tools_to_run = XpanderClient.extract_tool_calls(llm_response=llm_response.model_dump())

# Execute multiple tool calls at once
results = agent.run_tools(tools_to_run)  

The extract_tool_calls method extracts tool calls from the LLM response, and the run_tools method executes them according to the rules of the agent.

  2. Manual Invocation:
from xpander_sdk import ToolCall, ToolCallType

# Create a ToolCall object
toolcall = ToolCall(
    name="tavily-insights-fetchInsightsFromTavilyAI",
    type=ToolCallType.XPANDER,
    payload={
        "bodyParams": {
            "query": "qwen2.5-coder news on HackerNews",
            "max_results": "5",
            "include_raw_content": "true",
            "include_images": "false",
            "exclude_domains": ["example.com"], 
        },
        "queryParams": {},
        "pathParams": {}
    }
)

# Execute single tool call
result = agent.run_tool(toolcall)

Tool Classes

The SDK provides two main classes for working with tools:

ToolCall

A class representing a single tool invocation with the following attributes:

  • name: (str) The identifier of the tool to be called
  • type: (ToolCallType) The type of the tool call, usually ToolCallType.XPANDER
  • payload: (dict) Contains three parameter dictionaries:
    • bodyParams: Parameters sent in the request body
    • queryParams: URL query parameters
    • pathParams: URL path parameters

ToolCallType

An enum class that defines the types of tool calls:

  • XPANDER: Standard xpander.ai platform tools
  • FUNCTION: Traditional function calls (for example, local functions you execute in your own code; see the sketch below)
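The distinction matters when a response mixes both kinds of calls. A minimal sketch of routing them, where get_current_time is a hypothetical local function (not part of the SDK) standing in for your own code:

from xpander_sdk import ToolCallType, XpanderClient

# Hypothetical local function used to illustrate FUNCTION-type calls,
# which your own code executes instead of the xpander platform.
def get_current_time(payload: dict) -> str:
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).isoformat()

tool_calls = XpanderClient.extract_tool_calls(llm_response=llm_response.model_dump())

# Route platform tools to xpander; handle local function calls yourself.
xpander_calls = [call for call in tool_calls if call.type == ToolCallType.XPANDER]
local_calls = [call for call in tool_calls if call.type == ToolCallType.FUNCTION]

results = agent.run_tools(xpander_calls)
local_results = [get_current_time(call.payload) for call in local_calls]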

The key differences are:

  • XpanderClient.extract_tool_calls(): Static utility method that converts LLM-specific function call formats (OpenAI, Anthropic, etc.) into xpander’s standardized ToolCall format
  • agent.run_tools(): Executes one or more tool calls, accepting either:
    • Parsed ToolCall objects from extract_tool_calls()
    • Manual dictionary format for direct invocation

Tool Response

The tool_response object represents the output from a tool invoked by an AI Agent. It contains key attributes that provide information about the tool invocation’s execution and results. This data is used to build the AI Agent’s memory and guide further interactions.

Tool Response Object Structure

Attribute | Type | Description
function_name | str | The name of the invoked function or tool.
status_code | int | The HTTP status code returned by the tool or system.
result | dict | The actual response or output from the tool.
payload | dict | The data payload generated by the AI for the tool invocation.
tool_call_id | str | A unique identifier for the specific tool invocation.
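These attributes are typically used to feed the tool output back to the LLM so it can continue the conversation. A minimal sketch, assuming the OpenAI chat format, the conversation_history list from earlier, and that the assistant message containing the original tool calls has already been appended:

# Append each tool result as an OpenAI tool message keyed by its tool_call_id.
for tool_response in results:
    conversation_history.append({
        "role": "tool",
        "tool_call_id": tool_response.tool_call_id,
        "content": str(tool_response.result),
    })

# Let the LLM continue now that it can see the tool results.
llm_response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=conversation_history,
    tools=tools,
)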

Available Providers and Models

Here are all the supported LLM providers and their corresponding models. Choose the combination that best fits your needs:

Framework | Model Name | Model Identifier (String)
Amazon Bedrock | Anthropic Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0
Amazon Bedrock | Anthropic Claude 3.5 Sonnet | anthropic.claude-3-5-sonnet-20240620-v1:0
Amazon Bedrock | Cohere Command R | cohere.command-r-v1:0
Amazon Bedrock | Cohere Command R Plus | cohere.command-r-plus-v1:0
Amazon Bedrock | Meta Llama 3.1 8B Instruct | meta.llama3-1-8b-instruct-v1:0
Amazon Bedrock | Meta Llama 3.1 70B Instruct | meta.llama3-1-70b-instruct-v1:0
Amazon Bedrock | Meta Llama 3.1 405B Instruct | meta.llama3-1-405b-instruct-v1:0
Amazon Bedrock | Mistral Large 2402 | mistral.mistral-large-2402-v1:0
Amazon Bedrock | Mistral Large 2407 | mistral.mistral-large-2407-v1:0
Amazon Bedrock | Mistral Small 2402 | mistral.mistral-small-2402-v1:0
Nvidia NIM | Meta Llama 3.1 70B Instruct | meta/llama-3.1-70b-instruct
Ollama | Qwen2.5-Coder | qwen2.5-coder
OpenAI | OpenAI GPT-4 | gpt-4
OpenAI | OpenAI GPT-4o | gpt-4o
OpenAI | OpenAI GPT-4o Mini | gpt-4o-mini
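The provider determines the tool format returned to you, and the model identifier string is what you pass to that provider’s client. A short sketch using OpenAI GPT-4o Mini from the table above, reusing the agent and openai_client objects from earlier (the llm_provider argument is an assumption):

from xpander_sdk import LLMProvider

# Tools formatted for OpenAI; the model identifier comes from the table above.
tools = agent.get_tools(llm_provider=LLMProvider.OPEN_AI)

llm_response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=conversation_history,
    tools=tools,
)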