The Agent class is the central component of the xpander.ai SDK, representing an AI agent capable of executing tasks, using tools, and maintaining conversational state.

Overview

An Agent has several key capabilities:

  • Running tasks and managing execution flow
  • Executing tools and function calls
  • Managing conversation memory and state
  • Building and navigating graph-based workflows
  • Integrating with external services through agentic interfaces

All agent instances are retrieved from the XpanderClient. You typically won’t create Agent instances directly.

Retrieving an Agent

from xpander_sdk import XpanderClient
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()
XPANDER_API_KEY = os.environ["XPANDER_API_KEY"]
AGENT_ID = os.environ["AGENT_ID"]

# Initialize the client
client = XpanderClient(api_key=XPANDER_API_KEY)

# Get an agent by ID
agent = client.agents.get(agent_id=AGENT_ID)

# Access basic properties
print(f"Agent ID: {agent.id}")
print(f"Agent Name: {agent.name}")
print(f"Description: {agent.metadata.description}")

Core Properties

Property     | Type       | Description                          | Availability
-------------|------------|--------------------------------------|-----------------------------
id           | string     | Unique identifier for the agent      | Always available
name         | string     | Name of the agent                    | Always available
instructions | string     | System instructions for the agent    | Always available
metadata     | object     | Contains metadata like description   | Always available
execution    | Execution  | Current execution context            | After adding a task
memory       | Memory     | Memory management interface          | Always available
messages     | List[Dict] | Conversation messages                | After memory initialization
graph        | Graph      | Workflow graph system                | Always available
tool_choice  | string     | Setting for tool selection behavior  | Always available

Some properties require initialization before they can be accessed. For example, the execution property is only available after calling add_task() (Python) or addTask() (TypeScript), and messages is only available after initializing memory with memory.init_messages() (Python) or memory.initMessages() (TypeScript).

Initialization Sequence

Most agents follow this initialization sequence:

1. Get an agent

agent = client.agents.get(agent_id=AGENT_ID)

2. Add a task

agent.add_task(input="Find information about AI startups")

3. Initialize memory

agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

4. Run the agent loop

while not agent.is_finished():
    # Run LLM and process tools
    # ...

Task Management Methods

add_task() / addTask()

Adds a new task for the agent to process.

# Basic task
execution = agent.add_task(input="Find information about AI startups")

# Task with files
execution = agent.add_task(
    input="Analyze this document",
    files=[{
        "content": "Content of document...",
        "name": "document.txt"
    }],
    use_worker=True
)

# Access execution properties
print(f"Execution status: {agent.execution.status}")
print(f"Input message: {agent.execution.input_message}")

Parameters

Parameter  | Type       | Required | Description
-----------|------------|----------|-------------------------------------------------
input      | string     | Yes      | Task description or user query
files      | List[Dict] | No       | Files to include with the task
use_worker | boolean    | No       | Whether to use an async worker (default: True)
thread_id  | string     | No       | Thread ID for conversation continuity

Returns

Returns an Execution object representing the new task.
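
Because thread_id is optional, you can continue an earlier conversation by passing the thread identifier from a previous run. A minimal sketch, assuming the thread ID was captured from an earlier execution (the value below is a placeholder):

# Continue a previous conversation thread
# (placeholder value; capture the real thread ID from an earlier run)
previous_thread_id = "YOUR_EXISTING_THREAD_ID"

execution = agent.add_task(
    input="Follow up: which of those startups raised funding this year?",
    thread_id=previous_thread_id
)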

is_finished() / isFinished()

Checks if the current task execution is finished.

# Run until execution is complete
while not agent.is_finished():
    # Run execution cycle
    pass

# After completion
print("Task completed!")

Returns

Returns a boolean: True if the execution is completed, False otherwise.
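
Once is_finished() returns True, you would typically fetch the final output with retrieve_execution_result(), documented below:

# When the loop exits, the execution is complete and the result can be retrieved
if agent.is_finished():
    result = agent.retrieve_execution_result()
    print(result.result)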

For multi-agent workflows, you can enforce transitions to the next agent without checking is_finished(). This allows you to implement custom transition logic based on your specific requirements rather than relying on the agent’s built-in completion status. See Enforcing Agent Transitions Without Manager for details on implementing this advanced pattern.

retrieve_execution_result() / retrieveExecutionResult()

Gets the result of the current execution.

# Get the execution result
execution_result = agent.retrieve_execution_result()

# Access result properties
print(f"Status: {execution_result.status}")
print(f"Result: {execution_result.result}")

# Check for errors
if execution_result.error:
    print(f"Error: {execution_result.error}")

Returns

Returns an ExecutionResult object containing status and result content.

Memory Management Methods

memory.init_messages() / memory.initMessages()

Initializes the agent’s memory with the task input and instructions.

# Initialize memory after adding a task
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Check that messages are available
print(f"Message count: {len(agent.messages)}")
for message in agent.messages:
    print(f"Role: {message['role']}, Content: {message['content'][:50]}...")

Parameters

Parameter    | Type        | Required | Description
-------------|-------------|----------|------------------------------------------------------
input        | string      | Yes      | Input message to initialize with
instructions | string      | Yes      | System instructions for the agent
llm_provider | LLMProvider | No       | The LLM provider type (default: LLMProvider.OPEN_AI)
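
Since llm_provider is optional, it can be omitted to use the default provider (LLMProvider.OPEN_AI):

# llm_provider defaults to LLMProvider.OPEN_AI when omitted
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions
)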

add_messages() / addMessages()

Adds messages to the agent’s conversation memory.

# Get response from LLM
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,
    tools=agent.get_tools(),
    tool_choice="auto"
)

# Add LLM response to memory
agent.add_messages(messages=response.model_dump())

# Check updated message count
print(f"Updated message count: {len(agent.messages)}")

Parameters

Parameter | Type | Required | Description
----------|------|----------|-----------------------------------------
messages  | Dict | Yes      | Messages to add (e.g., LLM response)
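
After adding an LLM response, the new entries are reflected in agent.messages and can be inspected directly. A small sketch, assuming each stored message exposes a role key as in the earlier examples:

# Inspect the most recently added message
last_message = agent.messages[-1]
print(f"Last message role: {last_message['role']}")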

Complete Example

Here’s a complete example of using an Agent to execute a task:

from xpander_sdk import XpanderClient, LLMProvider
from openai import OpenAI
from dotenv import load_dotenv
import os

# Load environment variables
load_dotenv()
XPANDER_API_KEY = os.environ["XPANDER_API_KEY"]
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
AGENT_ID = os.environ["AGENT_ID"]

# Initialize clients
xpander_client = XpanderClient(api_key=XPANDER_API_KEY)
openai_client = OpenAI(api_key=OPENAI_API_KEY)

# Get the agent
agent = xpander_client.agents.get(agent_id=AGENT_ID)
print(f"Agent: {agent.name} ({agent.id})")

# Add a task
agent.add_task(input="Find the top 3 AI startups in San Francisco and their main products")

# Initialize memory
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Run the agent until the task is complete
while not agent.is_finished():
    # Get next action from LLM
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=agent.messages,
        tools=agent.get_tools(),
        tool_choice="auto",
        temperature=0.0
    )
    
    # Add LLM response to memory
    agent.add_messages(messages=response.model_dump())
    
    # Extract tool calls
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI
    )
    
    # Run the tools
    if tool_calls:
        results = agent.run_tools(tool_calls=tool_calls)
        print(f"Executed {len(results)} tools")

# Get and print the final result
result = agent.retrieve_execution_result()
print("\nFINAL RESULT:")
print("=" * 50)
print(result.result)