xpander.ai’s state management system enables agents to maintain continuity across multiple interactions and effectively manage messages. This guide explains how to implement stateful agents that remember previous conversations and maintain context over time.
## Thread Concepts
Threads in xpander.ai are the foundation of persistent conversations. A thread represents:
- A sequence of messages between the user and agent(s)
- The conversation context that persists across multiple executions
- An identifiable object that can be retrieved for future interactions
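Conceptually, you can picture a thread as a container that pairs an identifier with an ordered list of messages. The sketch below is purely illustrative and does not reflect the SDK's internal classes:

```python
from dataclasses import dataclass, field

# Illustrative only: a conceptual model of a thread, not the SDK's actual representation.
@dataclass
class ConversationThread:
    thread_id: str                                 # identifier used to resume the conversation later
    messages: list = field(default_factory=list)   # ordered messages exchanged with the agent

    def append(self, message: dict) -> None:
        """Persist one more message in the conversation history."""
        self.messages.append(message)
```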
## Understanding Agent Messages
Agent messages, which are stored within threads, consist of:
- Message History: Sequential record of all messages in the conversation
- Context Information: Additional data that informs the agent’s understanding
- Tool Call Results: Results from previous tool executions
- System Instructions: Core guidelines defining the agent’s behavior
This message system ensures that agents maintain coherent context throughout their interactions.
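Because the execution loop later in this guide passes `agent.messages` straight to the OpenAI Chat Completions API, you can think of the history as a list of chat-format messages. The example below is a hedged illustration of that shape; the `load_dataset` tool name is invented for the example and is not part of the SDK:

```python
# Illustrative shape of a message history in standard chat format (not SDK-specific).
example_messages = [
    # System instructions: core guidelines defining the agent's behavior
    {"role": "system", "content": "You are a data analysis assistant."},
    # Message history: the user/assistant exchange so far
    {"role": "user", "content": "Hello, I would like to analyze some data"},
    # A previous tool call made by the agent (hypothetical "load_dataset" tool)
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "load_dataset", "arguments": '{"name": "sales"}'},
        }],
    },
    # Tool call result: output from that previous tool execution
    {"role": "tool", "tool_call_id": "call_1", "content": '{"rows": 1200}'},
    # Context information carried forward into later turns
    {"role": "assistant", "content": "The sales dataset has 1,200 rows."},
]
```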
## Creating and Using Threads
When you create a new task for an agent, you can either:
- Start a new conversation thread (default behavior)
- Continue an existing thread by providing a `thread_id`
```python
# Create a new thread
agent.add_task(input="Hello, I would like to analyze some data")
# A new thread_id is automatically generated

# Continue an existing thread
agent.add_task(
    input="What were the key insights from our previous analysis?",
    thread_id="thread_abc123"  # Use the thread_id from a previous execution
)
```
## Retrieving and Storing Thread IDs
After an execution completes, you can retrieve the thread ID for future use:
```python
# After an execution completes
result = agent.retrieve_execution_result()
thread_id = result.memory_thread_id

# In a real application, save the thread ID in your database, for example:
# database.save_thread_id(user_id="user123", thread_id=thread_id)
```
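The `database.save_thread_id` call above is only a placeholder. If you need something concrete, a small SQLite table is one lightweight option; the helpers below are a hypothetical sketch, not part of the xpander.ai SDK:

```python
import sqlite3
from typing import Optional

# Hypothetical helpers for persisting thread IDs per user (not part of the SDK).
def save_thread_id(user_id: str, thread_id: str, db_path: str = "threads.db") -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS threads (user_id TEXT PRIMARY KEY, thread_id TEXT)"
        )
        conn.execute(
            "INSERT INTO threads (user_id, thread_id) VALUES (?, ?) "
            "ON CONFLICT(user_id) DO UPDATE SET thread_id = excluded.thread_id",
            (user_id, thread_id),
        )

def load_thread_id(user_id: str, db_path: str = "threads.db") -> Optional[str]:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS threads (user_id TEXT PRIMARY KEY, thread_id TEXT)"
        )
        row = conn.execute(
            "SELECT thread_id FROM threads WHERE user_id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None
```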
## Agent Execution Loop with Thread Management
Here’s a simple implementation showing how to build an agent that can maintain conversation state:
### Important Note About Message Initialization

When you call `add_task()`, the system automatically:
- Creates a new thread with the system prompt
- Initializes the agent's messages with the messages it needs to start the conversation
- Makes the execution input available as `agent.execution.input_message`

Do not use `agent.memory.init_messages()`; it is deprecated. Use `agent.add_task()` instead, which handles message initialization automatically and supports thread continuity through the optional `thread_id` parameter.
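As a quick illustration of that initialization behavior (a sketch that assumes `agent` has already been fetched from the client, as in the full implementation below):

```python
# Sketch: add_task() creates or continues a thread and initializes the message state.
agent.add_task(input="Summarize yesterday's sales report")

print(agent.execution.input_message)  # the task input exposed by the SDK
print(len(agent.messages))            # messages are already initialized; no init_messages() needed
```

With that in mind, the full agent loop with thread management looks like this: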
```python
from xpander_sdk import XpanderClient, LLMProvider
from openai import OpenAI
import os

# Function to run an agent with thread continuity
def run_agent_with_thread(prompt, thread_id=None):
    """Run an agent with optional thread continuity."""
    # Initialize clients
    xpander_client = XpanderClient(api_key=os.environ.get("XPANDER_API_KEY"))
    openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

    # Get the agent
    agent = xpander_client.agents.get(agent_id=os.environ.get("XPANDER_AGENT_ID"))

    # Add a task (with optional thread_id)
    # This automatically creates a new thread or continues an existing one
    agent.add_task(input=prompt, thread_id=thread_id)

    # Run the agent until the task is complete
    while not agent.is_finished():
        # Get the next action from the LLM
        response = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=agent.messages,
            tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
            tool_choice=agent.tool_choice,
            temperature=0.0
        )

        # Add the LLM response to the agent's messages
        agent.add_messages(response.model_dump())

        # Extract and execute tool calls
        tool_calls = XpanderClient.extract_tool_calls(
            llm_response=response.model_dump(),
            llm_provider=LLMProvider.OPEN_AI
        )
        if tool_calls:
            agent.run_tools(tool_calls=tool_calls)

    # Get the final result
    result = agent.retrieve_execution_result()

    # Return both the result text and the thread_id for future use
    return result.result, result.memory_thread_id


# Example usage: first conversation
result, thread_id = run_agent_with_thread("What's the latest news about AI?")
print(f"Result: {result}")
print(f"Thread ID: {thread_id}")

# Continue the conversation (use the previous thread_id)
follow_up, thread_id = run_agent_with_thread(
    "Tell me more about the ethical implications.",
    thread_id=thread_id
)
print(f"Follow-up: {follow_up}")
```