POST /v1/agents/{agent_id}/invoke
curl --request POST \
  --url https://api.xpander.ai/v1/agents/{agent_id}/invoke \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --data '{
    "input": {
      "text": "What is xpander.ai?",
      "user": {
        "id": "user-uuid",
        "email": "user@example.com"
      }
    }
  }'
{
  "id": "02843b37-4d77-48a0-8da7-2c76afe54401",
  "agent_id": "47c3b020-9a6b-4699-8d94-d92d71929b53",
  "organization_id": "91fbe9bc-35b3-41e8-b59d-922fb5a0f031",
  "status": "completed",
  "result": "Hello! 👋",
  "created_at": "2026-04-10T19:01:22.898813Z",
  "finished_at": "2026-04-10T19:01:33.120590Z",
  "source": "api"
}
Invoke an agent synchronously. The request blocks until the agent finishes (typically 5–30 seconds) and returns the completed task with the result. For longer-running tasks, use Invoke Agent (Async) or Invoke Agent (Stream).
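The synchronous call can be sketched with Python's standard library (a minimal example, not an official SDK; the endpoint and payload fields follow this reference, and the agent ID and API key are placeholders):

```python
import json
import urllib.request

API_BASE = "https://api.xpander.ai/v1"

def build_invoke_request(agent_id: str, text: str, api_key: str) -> urllib.request.Request:
    """Build the POST request for a synchronous invoke."""
    payload = {"input": {"text": text}}
    return urllib.request.Request(
        f"{API_BASE}/agents/{agent_id}/invoke",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

def invoke(agent_id: str, text: str, api_key: str) -> dict:
    """Send the request and return the completed task object.

    Blocks until the agent finishes (typically 5-30 seconds).
    """
    with urllib.request.urlopen(build_invoke_request(agent_id, text, api_key)) as resp:
        return json.load(resp)

# task = invoke("<agent-id>", "What is xpander.ai?", "<your-api-key>")
# print(task["result"])
```

Because the call blocks, set a client-side timeout longer than your agent's typical run time if your HTTP stack defaults to something short.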

Path Parameters

agent_id
string
required
Agent ID (UUID)

Request Body

input.text, input.user.id, and input.user.email are required. user_oidc_token is optional for MCP OAuth-backed tools.
input
object
required
user_oidc_token
string
Optional OIDC token for MCP OAuth-authenticated tools.
id
string
Thread ID for multi-turn conversations. Pass the id from a previous task’s response to continue the same conversation. The agent will have full context of all prior messages. If omitted, a new thread is created automatically.
disable_attachment_injection
boolean
default:"false"
When true, files passed in input.files are not injected into the LLM context window. The file URLs are still available to the agent’s tools, but the raw content won’t be prepended to the prompt. Use this when:
  • Files are large (would exceed the model’s context limit)
  • You want tools to process the files rather than the LLM reading them directly
  • You’re passing many files and don’t need them all in context
Default (false): file contents are downloaded, extracted, and injected directly into the LLM prompt as context.
think_mode
string
default:"default"
Controls the agent’s reasoning depth. default for standard reasoning, harder for deeper chain-of-thought analysis. Use harder for complex multi-step tasks that benefit from more deliberate planning.
instructions_override
string
Additional instructions appended to the agent’s system prompt for this invocation only. Use this to adjust behavior per-request without changing the agent’s configuration — for example, restricting output format, adding constraints, or changing tone.
expected_output
string
Natural-language description of the desired output (e.g., "A bulleted list of key findings"). Guides the agent’s response style.

Query Parameters

version
string
Agent version to invoke. Defaults to the latest deployed version. Use "draft" to test undeployed changes.

Response

The response is the full task object. The key fields you’ll use:
id
string
Task/thread ID. Pass this back as id in your next request to continue the conversation.
status
string
completed, failed, error, or stopped
result
string
The agent’s response. If output_format is json, this is a JSON string — parse it with JSON.parse() or jq.
created_at
string
ISO 8601 timestamp when the task was created
finished_at
string
ISO 8601 timestamp when the task completed
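Handling the fields above typically looks like this (a sketch; the sample task values are illustrative, and the "Z" suffix is normalized so the stdlib parser accepts it on any recent Python):

```python
import json
from datetime import datetime

# An illustrative task object in the shape documented above.
task = json.loads("""{
  "id": "02843b37-4d77-48a0-8da7-2c76afe54401",
  "status": "completed",
  "result": "Hello!",
  "created_at": "2026-04-10T19:01:22.898813Z",
  "finished_at": "2026-04-10T19:01:33.120590Z"
}""")

def parse_ts(ts: str) -> datetime:
    """Parse an ISO 8601 timestamp; fromisoformat needs +00:00 rather than Z before 3.11."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

if task["status"] == "completed":
    elapsed = parse_ts(task["finished_at"]) - parse_ts(task["created_at"])
    print(f"done in {elapsed.total_seconds():.1f}s: {task['result']}")  # prints "done in 10.2s: Hello!"
else:
    print(f"task {task['id']} ended with status {task['status']}")
```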

Simplest Possible Invoke

curl -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "What is xpander.ai?"}}'
Extract just the result:
curl -s ... | jq -r '.result'

Multi-Turn Conversation

Pass the id from the first response to continue the thread:
# Turn 1
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "Hi, my name is David and I work at xpander.ai"}}'

# Response includes: "id": "6525177e-06a1-4063-82fe-37382d2302a5"

# Turn 2 — pass the same id
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {"text": "What is my name and where do I work?"},
    "id": "6525177e-06a1-4063-82fe-37382d2302a5"
  }'

# Agent responds: "Your name is David and you work at xpander.ai."
The agent remembers all previous messages in the thread. Always reuse the same id for follow-ups.

With User Identity

Pass the required user fields so the agent can personalize responses and use identity-aware tools:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What is my user ID and email?",
      "user": {
        "id": "user-123",
        "email": "david@xpander.ai"
      }
    }
  }'
{
  "status": "completed",
  "result": "Your user ID is user-123 and your email is david@xpander.ai."
}
The user object is visible to the agent as context. Both id and email are required.

With MCP OAuth

Pass user_oidc_token when the agent uses MCP tools that require OAuth on behalf of the user:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "Create a calendar event for tomorrow at 10am",
      "user": {
        "id": "user-123",
        "email": "david@xpander.ai"
      }
    },
    "user_oidc_token": "USER_OIDC_TOKEN"
  }'

Processing Files

Files passed in input.files are downloaded and injected directly into the LLM context window by default. This works well for small-to-medium files:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What is the abstract of this paper?",
      "files": ["https://assets.xpanderai.io/static/pdf/bitcoin.pdf"]
    }
  }'
{
  "status": "completed",
  "result": "The abstract describes a peer-to-peer electronic cash system that enables direct online payments without financial institutions, solving the double-spending problem through a distributed network using proof-of-work and cryptographic signatures."
}
The 9-page Bitcoin whitepaper (above) processes successfully — its content fits within the model’s context window.

Large Files Fail with Direct Injection

Large files will exceed the model’s context limit and return an error:
# This 185-page PDF will fail — too large for the context window
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What year was the Apple Macintosh introduced?",
      "files": ["https://assets.xpanderai.io/static/pdf/Introducing_the_Apple_Macintosh_1984.pdf"]
    }
  }'
{
  "status": "error",
  "result": "Error code: 413 - {'error': {'type': 'request_too_large', 'message': 'Request exceeds the maximum size'}}"
}
Files are injected into the LLM context by default. Documents over ~100 pages will typically exceed the model’s token limit. For large documents, use a Knowledge Base instead — add the document to a KB, attach it to the agent, and the agent will search it automatically via RAG.

Disable Context Injection

Set disable_attachment_injection: true to pass the file URL to the agent’s tools without injecting its content into the LLM prompt:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "Analyze this dataset",
      "files": ["https://example.com/data.csv"]
    },
    "disable_attachment_injection": true
  }'
With this flag, the file URL is available to the agent’s tools but the raw content is not prepended to the prompt. Use this when you want the agent’s tools to process the file rather than the LLM reading it directly.
For very large files (185+ pages), even disable_attachment_injection: true may not be enough — the file can exceed the HTTP request size limit before reaching the LLM. Use a Knowledge Base for production workflows with large documents.

Per-Request Instruction Override

Append instructions for this specific invocation without changing the agent’s configuration:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {"text": "Tell me about xpander pricing"},
    "title": "Pricing Inquiry",
    "instructions_override": "Always respond in exactly 2 sentences. Never use emojis."
  }'
{
  "status": "completed",
  "title": "Pricing Inquiry",
  "result": "xpander.ai offers two main pricing tiers: a Free plan with 2 serverless agents and 100 AI actions, and an In-House plan at $940/month for up to 10 agents and 200K actions. You can deploy on xpander's managed cloud or your own infrastructure."
}

Structured JSON Output

Invoke an agent configured for structured output:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "Look up Stripe"}}'
To configure output_format and output_schema for structured output, use the Update Agent endpoint or configure it in the dashboard under the Output tab.
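When structured output is configured, result is a JSON string, not an object, so parse it before use (a Python sketch; the parsed shape here is an assumed example of what an output_schema might produce):

```python
import json

# A task object as returned by the API; the "result" value is an assumed
# example of what a configured output_schema might yield for "Look up Stripe".
task = {
    "status": "completed",
    "result": '{"company": "Stripe", "founded": 2010}',
}

if task["status"] == "completed":
    data = json.loads(task["result"])  # result is a JSON *string*
    print(data["company"])  # prints "Stripe"
```

The shell equivalent is `jq -r '.result | fromjson'`, which parses the inner string a second time.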

Example Response

{
  "id": "6525177e-06a1-4063-82fe-37382d2302a5",
  "agent_id": "<agent-id>",
  "organization_id": "<org-id>",
  "status": "completed",
  "input": {
    "text": "What is xpander.ai in one sentence?"
  },
  "result": "xpander.ai is a full-stack platform for building, deploying, and running autonomous AI agents in production.",
  "source": "api",
  "think_mode": "default",
  "disable_attachment_injection": false,
  "created_at": "2026-02-07T10:15:22.100233Z",
  "started_at": "2026-02-07T10:15:22.500000Z",
  "finished_at": "2026-02-07T10:15:28.997811Z",
  "execution_attempts": 1
}


Authorizations

x-api-key
string
header
required

API Key for authentication

Path Parameters

agent_id
string
required

Query Parameters

version
string | null

The agent/workflow version to invoke. Defaults to the latest deployed version.

Body

application/json
input
AgentExecutionInput · object
required
id
string

Pass the id from a previous response to continue a multi-turn conversation with the same agent. Omit for a new conversation.

Example:

"02843b37-4d77-48a0-8da7-2c76afe54401"

Response

Successful Response

id
string

Unique task identifier. Use this to poll for results when invoking async.

Example:

"02843b37-4d77-48a0-8da7-2c76afe54401"

agent_id
string
Example:

"47c3b020-9a6b-4699-8d94-d92d71929b53"

organization_id
string
Example:

"91fbe9bc-35b3-41e8-b59d-922fb5a0f031"

status
enum<string>

pending | executing | completed | error | failed | stopped

Available options:
pending,
executing,
completed,
error,
failed,
stopped
Example:

"completed"

result
string | null

The agent's response. Populated once status is completed; for async invocations it stays null until polling shows completion.

Example:

"Hello! 👋"

created_at
string<date-time>
Example:

"2026-04-10T19:01:22.898813Z"

finished_at
string<date-time> | null

Null until the task completes.

Example:

"2026-04-10T19:01:33.120590Z"

source
string
Example:

"api"