The xpander.ai SDK provides a Memory system for agents to manage conversation history, including user inputs, system instructions, and tool call results. Memory is a critical component that allows agents to maintain context during task execution.

Accessing Memory

Memory is accessed through the memory property of an Agent instance.

from xpander_sdk import XpanderClient

# Initialize client
client = XpanderClient(api_key="your-api-key")

# Get an agent
agent = client.agents.get(agent_id="agent-id")

# Access memory
memory = agent.memory

Key Methods

init_messages() / initMessages()

Initializes the agent’s memory with system instructions and a user input message. This method must be called before attempting to access or modify the agent’s messages.

from xpander_sdk import LLMProvider

# Initialize memory after adding a task
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Check that messages are now available
print(f"Number of messages: {len(agent.messages)}")

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| input | string | Yes | User input message to initialize with |
| instructions | string | Yes | System instructions for the agent |
| llm_provider | LLMProvider | No | LLM provider format (default: LLMProvider.OPEN_AI) |

The init_messages() method sets up the initial conversation state with system instructions and user input. It must be called before any other memory operations.

add_messages() / addMessages()

Adds messages (typically LLM responses) to the agent’s memory.

# Get response from OpenAI
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,
    tools=agent.get_tools(),
    temperature=0.0
)

# Add LLM response to memory
agent.memory.add_messages(messages=response.model_dump())

print("Added LLM response to memory")

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| messages | Dict | Yes | Messages object to add (e.g., an LLM response) |
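
The shape of the object passed to add_messages is whatever model_dump() produces: a plain dict mirroring the chat-completions response. The sketch below uses made-up values purely to illustrate that shape; the SDK accepts the whole dump, not just the inner message.

```python
# Illustrative dump of a chat-completions response (values are made up).
# model_dump() on a real response yields this kind of plain dict.
response_dump = {
    "id": "chatcmpl-abc",
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Hello!"},
        }
    ],
}

# The assistant turn lives under choices[0]["message"].
assistant_message = response_dump["choices"][0]["message"]
print(assistant_message["role"])  # assistant
```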

add_tool_call_results() / addToolCallResults()

Adds tool call results to the agent’s memory. This updates the conversation state with the results of executed tools.

from xpander_sdk import ToolCallResult

# Create tool call results
tool_results = [
    ToolCallResult(
        function_name="get_weather",
        tool_call_id="call-123",
        is_success=True,
        result="The weather in New York is sunny with a temperature of 72°F.",
        payload={"location": "New York"}
    )
]

# Add tool results to memory
agent.memory.add_tool_call_results(tool_call_results=tool_results)

# Check updated message count
print(f"Message count after tool results: {len(agent.messages)}")

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| tool_call_results | List[ToolCallResult] | Yes | Results of tool executions |

Working with Messages

After initializing memory, the agent’s messages property contains the conversation history in a format compatible with LLM providers.

# Initialize memory first
agent.memory.init_messages(
    input="Tell me about AI agents",
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Access messages
for message in agent.messages:
    print(f"Role: {message['role']}")
    # content can be None on assistant turns that only call tools
    content = message.get('content') or ''
    print(f"Content: {content[:50]}...")
    print("---")

# Use messages directly with OpenAI
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,
    temperature=0.7
)

Message Format

The format of messages depends on the LLM provider specified during initialization. For OpenAI (the default), messages are a list of dictionaries, each with a role (system, user, assistant, or tool) and, where applicable, a content field.
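
The exact fields the SDK emits may differ, but an OpenAI-format history generally looks like the following sketch (the tool-call ID and values are made up for illustration):

```python
# Illustrative OpenAI-format conversation history, like agent.messages exposes.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather like in New York?"},
    {
        "role": "assistant",
        "content": None,  # assistant turns that call tools may carry no text
        "tool_calls": [
            {
                "id": "call-123",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"location": "New York"}',
                },
            }
        ],
    },
    {
        "role": "tool",          # tool results reference the call they answer
        "tool_call_id": "call-123",
        "content": "Sunny, 72°F.",
    },
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'tool']
```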

Complete Example

Here’s a complete example of memory management during an agent execution cycle:

from xpander_sdk import XpanderClient, LLMProvider
from openai import OpenAI
import os

# Initialize clients
xpander_client = XpanderClient(api_key=os.environ["XPANDER_API_KEY"])
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Get agent
agent = xpander_client.agents.get(agent_id=os.environ["AGENT_ID"])

# Add a task
agent.add_task(input="What's the weather like in New York?")

# Initialize memory with OpenAI format
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

print(f"Initial message count: {len(agent.messages)}")

# Run the agent loop
while not agent.is_finished():
    # Get next action from LLM
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=agent.messages,
        tools=agent.get_tools(),
        tool_choice="auto",
        temperature=0.0
    )
    
    # Add LLM response to memory
    agent.memory.add_messages(messages=response.model_dump())
    print(f"Message count after LLM response: {len(agent.messages)}")
    
    # Extract tool calls
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI
    )
    
    # Execute tools if any
    if tool_calls:
        results = agent.run_tools(tool_calls=tool_calls)
        print(f"Executed {len(results)} tools")
        print(f"Message count after tool execution: {len(agent.messages)}")

# Final message count
print(f"Final message count: {len(agent.messages)}")
print("Final message types:")

for i, message in enumerate(agent.messages):
    print(f"{i}: Role: {message.get('role')}, Has content: {'content' in message}")

Memory Management Tips

Message Limits

LLM providers have token limits for input context. When working with long conversations:

  1. Monitor token usage - track the approximate token count of agent.messages before each LLM call
  2. Summarize long histories - condense earlier turns into a short summary when conversations grow very long
  3. Reuse the thread_id parameter - pass the same thread_id to add_task to continue an existing conversation
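
A minimal trimming sketch along the lines of points 1 and 2. trim_history is a hypothetical helper, not part of the SDK, and the chars-per-token estimate is a rough heuristic (about 4 characters per token for English text); for accurate counts use your provider's tokenizer.

```python
def estimate_tokens(message: dict) -> int:
    """Very rough token estimate: ~4 characters per token."""
    content = message.get("content") or ""
    return max(1, len(str(content)) // 4)

def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Drop the oldest non-system turns until the history fits the budget."""
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]
    while rest and sum(map(estimate_tokens, system + rest)) > max_tokens:
        rest.pop(0)  # always preserve system instructions
    return system + rest

history = [
    {"role": "system", "content": "You are concise."},
    {"role": "user", "content": "a" * 400},       # ~100 tokens
    {"role": "assistant", "content": "b" * 400},  # ~100 tokens
    {"role": "user", "content": "c" * 40},        # ~10 tokens
]
trimmed = trim_history(history, max_tokens=120)
print([m["role"] for m in trimmed])  # ['system', 'assistant', 'user']
```

In a real loop you would pass the trimmed list to the LLM call while leaving agent.memory itself untouched, or replace the dropped turns with a single summary message.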

Best Practices

# Good practice: always initialize memory after adding a task
agent.add_task(input="What are the best programming languages for AI?")
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Good practice: ensure messages exist before accessing them
if hasattr(agent, 'messages') and len(agent.messages) > 0:
    print(f"First message role: {agent.messages[0]['role']}")