Update an existing AI agent’s configuration, tools, or knowledge bases
Returns the updated AIAgent object with all current configuration.
Only the following fields can be updated via PATCH: name, instructions, model_provider, model_name, output_format, and expected_output. The attached_tools, knowledge_bases, and graph fields are not supported via PATCH. To attach tools or knowledge bases, use the xpander.ai platform or the Python SDK.
API Key for authentication
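With the API key in hand, a full update call might look like the sketch below. This is a minimal illustration only: the base URL, endpoint path, and `x-api-key` header name are assumptions rather than documented values, and field types (for example, whether `instructions` is a plain string or a structured object) are guesses based on the parameter descriptions that follow.

```python
# Minimal sketch of an update call, assuming a REST endpoint of the form
# PATCH /v1/agents/{agent_id} and an x-api-key header; the base URL, path,
# and header name are assumptions, not documented values.
import requests

BASE_URL = "https://api.example.com"   # placeholder; substitute the real API host
API_KEY = "YOUR_API_KEY"               # API Key for authentication
AGENT_ID = "YOUR_AGENT_ID"

# Only the PATCH-supported fields are sent; attached_tools, knowledge_bases,
# and graph are not accepted by this endpoint.
payload = {
    "name": "Research Assistant",
    "instructions": "Summarize the latest papers on the requested topic.",
    "model_provider": "openai",
    "model_name": "gpt-4o",
    "output_format": "markdown",
    "expected_output": "A bulleted list of key findings",
}

resp = requests.patch(
    f"{BASE_URL}/v1/agents/{AGENT_ID}",
    headers={"x-api-key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
agent = resp.json()  # AIAgent object with all current configuration
print(agent)
```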
Display name for the agent
System instructions configuration
Emoji icon representing the agent
Reset the agent's graph configuration (default: false)
AI model provider (e.g., openai, anthropic)
Target environment ID
Specific model version (e.g., gpt-4o, gpt-4.1, claude-sonnet-4-5-20250929)
Agent status (enum: ACTIVE, INACTIVE)
Available options: DRAFT, ACTIVE, INACTIVE
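Because a PATCH body typically carries only the fields being changed, flipping an agent's status can be a one-field request. The sketch below relies on the same assumptions as the earlier example (hypothetical base URL, path, and header name) plus standard partial-update PATCH semantics.

```python
import requests

BASE_URL = "https://api.example.com"   # placeholder API host (assumption)
API_KEY = "YOUR_API_KEY"
AGENT_ID = "YOUR_AGENT_ID"

# Deactivate the agent by sending only the status field; allowed values
# per the enum above are DRAFT, ACTIVE, and INACTIVE.
resp = requests.patch(
    f"{BASE_URL}/v1/agents/{AGENT_ID}",
    headers={"x-api-key": API_KEY},
    json={"status": "INACTIVE"},
    timeout=30,
)
resp.raise_for_status()
```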
Successful Response
Available options: serverless, container
Display name of the agent
Agent framework used (e.g., agno)
Description of the agent's purpose and capabilities
Array of tools available to the agent
Emoji icon representing the agent
Current deployment status: ACTIVE or INACTIVE
Available options: DRAFT, ACTIVE, INACTIVE
Array of knowledge bases attached to the agent
UUID of the organization that owns this agent
Enumeration of the agent types.
Attributes: Manager marks the agent as a managing agent; Regular marks the agent as a regular agent; A2A marks the agent as an external agent used via the A2A protocol; Curl marks the agent as an external agent invoked via a cURL command.
Available options: manager, regular, a2a, curl
AI model provider (e.g., openai, anthropic)
Available options: openai, nim, amazon_bedrock, huggingFace, friendlyAI, anthropic, gemini, fireworks, google_ai_studio, helicone, open_router, nebius
Specific model version (e.g., gpt-4o, gpt-4.1, claude-sonnet-4-5-20250929)
Output format: markdown or json
Available options: text, markdown, json
JSON schema for structured output when output_format is json
Natural-language description of the desired output (e.g., "A bulleted list of key findings")
Auto-generated webhook URL for agent invocations
Auto-generated unique identifier for the agent (e.g. 'emerald-emu')
Custom LLM API base URL for self-hosted or proxied model endpoints
LLM configuration per graph node (provider, model, temperature)
Agno framework settings (memory, session storage, moderation, tool limits)
Whether the agent has unpublished configuration changes
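The reference above describes the fields of the returned AIAgent object but does not spell out their exact key names, so the helper below assumes snake_case keys derived from the descriptions (for example webhook_url and has_unpublished_changes). Treat every key as hypothetical and adjust to the real response payload.

```python
def summarize_agent(agent: dict) -> None:
    """Print a few fields of an AIAgent response payload.

    Every key name here is an assumption based on the field descriptions
    above (snake_case), not a documented schema.
    """
    print("name:", agent.get("name"))                    # display name of the agent
    print("status:", agent.get("status"))                # DRAFT, ACTIVE, or INACTIVE
    print("model:", agent.get("model_provider"), agent.get("model_name"))
    print("output format:", agent.get("output_format"))  # text, markdown, or json
    print("webhook:", agent.get("webhook_url"))          # auto-generated invocation URL
    if agent.get("has_unpublished_changes"):
        print("Unpublished configuration changes are pending.")
```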