# Message Properties and Methods
The Messages API provides access to conversation history and methods for modifying messages in xpander agents.
## agent.messages
Array of conversation messages in OpenAI-compatible format.
```python
messages = agent.messages
```
Returns: Array of message objects with the following structure:
```typescript
interface Message {
  role: string;             // "system", "user", "assistant", or "tool"
  content: string;          // The message content
  tool_calls?: ToolCall[];  // For assistant messages with tool calls
  tool_call_id?: string;    // For tool messages
}
```
Messages are automatically populated when `agent.add_task()` is called. The array is in OpenAI-compatible format and can be used directly with any supported LLM provider without manual conversion.
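As a minimal sketch, assuming an `agent` instance has already been loaded from the SDK, the history can be inspected right after a task is created:

```python
# Create a task; the SDK seeds the conversation history automatically
agent.add_task(input="Summarize the latest sales report")

# agent.messages is now an OpenAI-compatible list of dicts
for msg in agent.messages:
    print(msg["role"], "-", (msg.get("content") or "")[:60])
```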
## add_messages() / addMessages()
Adds one or more messages to the conversation history.
```python
# Add LLM response to messages
agent.add_messages(response.model_dump())

# Add custom message
agent.add_messages({
    "role": "system",
    "content": "Provide concise answers."
})

# Add multiple messages
agent.add_messages([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there!"}
])
```
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `messages` | LLM response or `Message` object(s) | Yes | Messages to add (accepts raw LLM responses, single messages, or arrays) |
Returns: void
## Type Definitions
### Message
Standard message object in OpenAI-compatible format.
```typescript
interface Message {
  role: string;             // "system", "user", "assistant", or "tool"
  content: string;          // The message content
  tool_calls?: ToolCall[];  // For assistant messages with tool calls
  tool_call_id?: string;    // For tool messages
}
```
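For illustration, the same structure as it appears in `agent.messages` on the Python side; the values below are hypothetical, not taken from a real run:

```python
# Hypothetical assistant message carrying a tool call, followed by the
# matching tool result message that references it via tool_call_id
assistant_msg = {
    "role": "assistant",
    "content": "",
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "New York"}'}
    }]
}

tool_msg = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"temperature_f": 72}'
}
```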
## Usage Examples
### Passing Messages to LLM API
```python
# Pass messages directly to LLM API
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=agent.messages,
    tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
    temperature=0.0
)
```
### Analyzing Messages
```python
# Count tokens (content can be None on tool-call messages, so default to "")
from tiktoken import encoding_for_model

enc = encoding_for_model("gpt-4o")
total_tokens = sum(
    len(enc.encode(msg.get("content") or ""))
    for msg in agent.messages
)

# Extract user messages
user_messages = [
    msg.get("content") for msg in agent.messages
    if msg.get("role") == "user"
]

# Find messages containing specific content
relevant_messages = [
    msg for msg in agent.messages
    if msg.get("content") and "specific term" in msg["content"].lower()
]
```
### Complete Agent Loop with Messages
```python
# Initialize task and get initial messages
agent.add_task(input="What's the weather in New York?")

while not agent.is_finished():
    # Get LLM response
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=agent.messages,
        tools=agent.get_tools(llm_provider=LLMProvider.OPEN_AI),
        temperature=0
    )

    # Add LLM response to messages
    agent.add_messages(response.model_dump())

    # Extract and run tool calls
    tool_calls = XpanderClient.extract_tool_calls(
        llm_response=response.model_dump(),
        llm_provider=LLMProvider.OPEN_AI
    )
    if tool_calls:
        results = agent.run_tools(tool_calls=tool_calls)
        # Tool results are automatically added to messages
```
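One way to read the final answer after the loop exits is to take the last non-empty assistant message from the history. This is a sketch that relies only on `agent.messages`; your SDK version may also expose a dedicated result accessor:

```python
# After the loop: pull the most recent assistant message as the final answer
final_answer = next(
    (msg.get("content") for msg in reversed(agent.messages)
     if msg.get("role") == "assistant" and msg.get("content")),
    None
)
print(final_answer)
```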
## Integration with Tasks API
The Messages API is tightly integrated with the Tasks API:
- Messages are initialized when `agent.add_task()` is called
- Message history is loaded when a `thread_id` is provided to `add_task()` (see the sketch below)
- The execution ends when the LLM calls the `xpfinish-agent-execution-finished` tool
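A minimal sketch of resuming an earlier conversation; `previous_thread_id` is a hypothetical variable holding a thread ID saved from a prior task (how to obtain it is covered in the Tasks API documentation):

```python
# Continue an existing conversation: passing thread_id makes the SDK load
# the stored message history into agent.messages before the new input
agent.add_task(
    input="And what about tomorrow?",
    thread_id=previous_thread_id  # hypothetical variable with a saved thread ID
)

# History from the earlier exchange plus the new user message
print(len(agent.messages))
```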
For task management functionality, see the Tasks API documentation.