Deploy an AI agent to make it active and available for task execution
Returns the updated agent:

- `id`: Agent identifier
- `name`: Agent display name
- `status`: Updated status (`ACTIVE` or `INACTIVE`)
- `deployment_type`: Deployment infrastructure
- `model_provider`: AI model provider
- `model_name`: Specific model version
- `framework`: Agent framework used
- `organization_id`: UUID of the organization that owns this agent
- `description`: Auto-generated from instructions if not previously set
- `has_pending_changes`: Indicates whether there are unpublished changes
- `version`: Current version number of the agent

Notes:

- `has_pending_changes` returns to `false` after successful deployment
- `ACTIVE` status indicates the agent is ready to handle tasks

Authentication: API Key
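Putting the fields above together, a successful deploy response might look like the following sketch. The field names come from the documentation; all values (IDs, names, version number) are illustrative, not from a live API:

```python
# Illustrative deploy response assembled from the documented fields.
# Every value here is hypothetical.
deploy_response = {
    "id": "agent_123",                      # hypothetical identifier
    "name": "Research Assistant",
    "status": "ACTIVE",                     # ACTIVE or INACTIVE after deploy
    "deployment_type": "serverless",        # serverless | container
    "model_provider": "openai",
    "model_name": "gpt-4o",
    "framework": "agno",
    "organization_id": "f3b1c2d4-0000-4000-8000-000000000000",  # hypothetical UUID
    "description": "Auto-generated from instructions",
    "has_pending_changes": False,           # resets to false after a successful deploy
    "version": 3,
}

# The agent is ready to handle tasks once status is ACTIVE
# and no unpublished changes remain.
assert deploy_response["status"] == "ACTIVE"
assert deploy_response["has_pending_changes"] is False
```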
Successful Response
- Deployment infrastructure. Available options: `serverless`, `container`
- Display name of the agent
- Agent framework used (e.g., `agno`)
- Description of the agent's purpose and capabilities
- Array of tools available to the agent
- Emoji icon representing the agent
- Current deployment status. Available options: `DRAFT`, `ACTIVE`, `INACTIVE` (after deployment, `ACTIVE` or `INACTIVE`)
- Array of knowledge bases attached to the agent
- UUID of the organization that owns this agent
- Agent type. Available options: `manager`, `regular`, `a2a`, `curl`
  - `manager`: marks the agent as a managing agent
  - `regular`: marks the agent as a regular agent
  - `a2a`: marks the agent as an external agent used via the A2A protocol
  - `curl`: marks the agent as an external agent invoked via cURL
- AI model provider (e.g., `openai`, `anthropic`). Available options: `openai`, `nim`, `amazon_bedrock`, `huggingFace`, `friendlyAI`, `anthropic`, `gemini`, `fireworks`, `google_ai_studio`, `helicone`, `open_router`, `nebius`
- Specific model version (e.g., `gpt-4o`, `gpt-4.1`, `claude-sonnet-4-5-20250929`)
- Output format. Available options: `text`, `markdown`, `json`
- JSON schema for structured output when the output format is `json`
- Natural-language description of the desired output (e.g., "A bulleted list of key findings")
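To make the structured-output fields concrete, here is a minimal sketch of an output configuration. The documentation names only the output format and its options; the `output_schema` and `output_description` key names, and the schema contents, are assumptions for illustration:

```python
# Hypothetical structured-output configuration.
# Key names "output_schema" and "output_description" are assumed, not documented.
structured_output = {
    "output_format": "json",  # available options: text, markdown, json
    "output_schema": {        # JSON schema applied when the format is "json"
        "type": "object",
        "properties": {
            "findings": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["findings"],
    },
    "output_description": "A bulleted list of key findings",
}

# The schema only constrains output when the format is json.
assert structured_output["output_format"] == "json"
```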
- Auto-generated webhook URL for agent invocations
- Auto-generated unique identifier for the agent (e.g., `emerald-emu`)
- Custom LLM API base URL for self-hosted or proxied model endpoints
- LLM configuration per graph node (provider, model, temperature)
- Agno framework settings (memory, session storage, moderation, tool limits)
- Whether the agent has unpublished configuration changes
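Since invocations go to the agent's auto-generated webhook URL and requests are authenticated with an API key, an invocation can be sketched as below. The URL, the `X-API-Key` header name, and the payload shape are all assumptions for illustration; substitute the webhook URL and authentication header from your own deployment:

```python
import json
import urllib.request

# Hypothetical webhook invocation. URL, header name, and payload shape
# are illustrative assumptions; only "webhook URL + API key auth" is documented.
webhook_url = "https://example.com/webhooks/agents/emerald-emu"  # slug-style identifier
payload = {"input": "Summarize the latest report"}

request = urllib.request.Request(
    webhook_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "YOUR_API_KEY",  # hypothetical header name
    },
    method="POST",
)

# The request is constructed but not sent here; pass it to
# urllib.request.urlopen(request) against a real deployment.
assert request.get_method() == "POST"
```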