Managing conversation history and context for AI agents
The xpander.ai SDK provides a Memory system for agents to manage conversation history, including user inputs, system instructions, and tool call results. Memory is a critical component that allows agents to maintain context during task execution.
Initializes the agent’s memory with system instructions and a user input message. This method must be called before attempting to access or modify the agent’s messages.
```python
from xpander_sdk import LLMProvider

# Initialize memory after adding a task
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Check that messages are now available
print(f"Number of messages: {len(agent.messages)}")
```
llm_provider: the LLM provider message format to use (default: LLMProvider.OPEN_AI)
The init_messages() method sets up the initial conversation state with system instructions and user input. It must be called before any other memory operations.
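To illustrate the ordering requirement, the conversation state that init_messages() establishes is essentially an ordered message list with the system instructions first, followed by the user input. The sketch below is a plain-Python approximation of that state, not the SDK's internal implementation; build_initial_messages is a hypothetical helper named here for clarity.

```python
# Illustration only: an approximation of the conversation state that
# init_messages() establishes. The real SDK manages this internally.
def build_initial_messages(instructions: str, user_input: str) -> list:
    return [
        {"role": "system", "content": instructions},  # instructions come first
        {"role": "user", "content": user_input},      # then the task input
    ]

messages = build_initial_messages(
    "You are a helpful assistant.", "Tell me about AI agents"
)
print(len(messages))  # 2
```

Until this initial state exists, there is no message list for later operations (adding LLM responses or tool results) to append to, which is why initialization must happen first.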
Adds messages (typically LLM responses) to the agent’s memory.
```python
# Get a response from OpenAI
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,
    tools=agent.get_tools(),
    temperature=0.0
)

# Add the LLM response to memory
agent.memory.add_messages(messages=response.model_dump())
print("Added response from LLM to memory")
```
Adds tool call results to the agent’s memory. This updates the conversation state with the results of executed tools.
```python
from xpander_sdk import ToolCallResult

# Create tool call results
tool_results = [
    ToolCallResult(
        function_name="get_weather",
        tool_call_id="call-123",
        is_success=True,
        result="The weather in New York is sunny with a temperature of 72°F.",
        payload={"location": "New York"}
    )
]

# Add tool results to memory
agent.memory.add_tool_call_results(tool_call_results=tool_results)

# Check the updated message count
print(f"Message count after tool results: {len(agent.messages)}")
```
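Conceptually, each tool result ends up in the history as a provider-formatted message; for OpenAI that is a "tool" role message keyed by tool_call_id. The sketch below shows that mapping; tool_result_to_message is a hypothetical helper for illustration, since the SDK performs the equivalent conversion internally.

```python
# Illustration only: how a tool call result maps onto an OpenAI-style
# "tool" message. The SDK does this conversion internally.
def tool_result_to_message(tool_call_id: str, result: str) -> dict:
    return {
        "role": "tool",
        "tool_call_id": tool_call_id,  # must match the assistant's tool call
        "content": result,
    }

msg = tool_result_to_message(
    "call-123",
    "The weather in New York is sunny with a temperature of 72°F.",
)
print(msg["role"])  # tool
```

Matching tool_call_id back to the assistant's original tool call is what lets the LLM associate each result with the request that produced it.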
After initializing memory, the agent’s messages property contains the conversation history in a format compatible with LLM providers.
```python
from xpander_sdk import LLMProvider

# Initialize memory first
agent.memory.init_messages(
    input="Tell me about AI agents",
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Access messages (content may be None for assistant messages
# that only carry tool calls, so fall back to an empty string)
for message in agent.messages:
    print(f"Role: {message['role']}")
    print(f"Content: {(message.get('content') or '')[:50]}...")
    print("---")

# Use messages directly with OpenAI
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,
    temperature=0.7
)
```
The format of messages depends on the LLM provider specified during initialization. For OpenAI (the default), messages follow this structure:
```json
[
  {
    "role": "system",
    "content": "You are a helpful assistant..."
  },
  {
    "role": "user",
    "content": "Tell me about AI agents"
  },
  {
    "role": "assistant",
    "content": "AI agents are software entities that can...",
    "tool_calls": [...]  // Optional
  },
  {
    "role": "tool",  // Added when tools are executed
    "tool_call_id": "call_1234",
    "content": "The weather in New York is sunny with a temperature of 72°F."
  }
]
```
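Given that structure, a quick sanity check over a history can catch common mistakes, such as a missing system message or a tool message without its tool_call_id. The validator below is an illustrative helper, not part of the SDK:

```python
# Illustration only: a sanity check for an OpenAI-format message history.
def validate_history(messages: list) -> bool:
    # The system message must come first.
    if not messages or messages[0].get("role") != "system":
        return False
    # Every tool message must reference the tool call it answers.
    return all(
        "tool_call_id" in m for m in messages if m.get("role") == "tool"
    )

history = [
    {"role": "system", "content": "You are a helpful assistant..."},
    {"role": "user", "content": "Tell me about AI agents"},
]
print(validate_history(history))  # True
```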
```python
from xpander_sdk import LLMProvider

# Good practice: always initialize memory after adding a task
agent.add_task(input="What are the best programming languages for AI?")
agent.memory.init_messages(
    input=agent.execution.input_message,
    instructions=agent.instructions,
    llm_provider=LLMProvider.OPEN_AI
)

# Good practice: ensure messages exist before accessing them
if hasattr(agent, 'messages') and len(agent.messages) > 0:
    print(f"First message role: {agent.messages[0]['role']}")
```