POST /v1/agents/{agent_id}/invoke
Invoke Agent (Sync)
curl --request POST \
  --url https://api.xpander.ai/v1/agents/{agent_id}/invoke \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --data '
{
  "input": {
    "text": "hello"
  }
}
'
{
  "id": "5a79676d-f22a-40a8-8f1d-aa63f66e2980",
  "agent_id": "d5be10e3-47f7-4bcd-acbc-48ef4b207821",
  "organization_id": "91fbe9bc-35b3-41e8-b59d-922fb5a0f031",
  "status": "completed",
  "input": {
    "text": "hello",
    "files": []
  },
  "result": "Hello! 👋 Welcome! I'm your SharePoint assistant...",
  "created_at": "2026-02-12T01:10:56.775600Z",
  "started_at": "2026-02-12T01:10:58.000000Z",
  "finished_at": "2026-02-12T01:11:09.471397Z"
}
Invoke an agent synchronously. The request blocks until the agent finishes (typically 5–30 seconds) and returns the completed task with the result. For longer-running tasks, use Invoke Agent (Async) or Invoke Agent (Stream).

Path Parameters

agent_id
string
required
Agent ID (UUID)

Request Body

Only input.text is required. Everything else is optional.
input
object
required
id
string
Thread ID for multi-turn conversations. Pass the id from a previous task’s response to continue the same conversation. The agent will have full context of all prior messages. If omitted, a new thread is created automatically.
disable_attachment_injection
boolean
default:"false"
When true, files passed in input.files are not injected into the LLM context window. The file URLs are still available to the agent’s tools, but the raw content won’t be prepended to the prompt.
Use this when:
  • Files are large (would exceed the model’s context limit)
  • You want tools to process the files rather than the LLM reading them directly
  • You’re passing many files and don’t need them all in context
Default (false): file contents are downloaded, extracted, and injected directly into the LLM prompt as context.
think_mode
string
default:"default"
Controls the agent’s reasoning depth. default for standard reasoning, harder for deeper chain-of-thought analysis. Use harder for complex multi-step tasks that benefit from more deliberate planning.
instructions_override
string
Additional instructions appended to the agent’s system prompt for this invocation only. Use this to adjust behavior per-request without changing the agent’s configuration — for example, restricting output format, adding constraints, or changing tone.
expected_output
string
Natural-language description of the desired output (e.g., "A bulleted list of key findings"). Guides the agent’s response style.
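Putting the fields above together, a request body that uses every documented optional field might look like this (all values are illustrative):

```json
{
  "input": {
    "text": "Summarize this report",
    "files": ["https://example.com/report.pdf"]
  },
  "id": "6525177e-06a1-4063-82fe-37382d2302a5",
  "think_mode": "harder",
  "disable_attachment_injection": false,
  "instructions_override": "Respond in formal English.",
  "expected_output": "A three-bullet summary"
}
```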

Query Parameters

version
string
Agent version to invoke. Defaults to the latest deployed version. Use "draft" to test undeployed changes.

Response

The response is the full task object. The key fields you’ll use:
id
string
Task/thread ID. Pass this back as id in your next request to continue the conversation.
status
string
completed, failed, error, or stopped
result
string
The agent’s response. If output_format is json, this is a JSON string — parse it with JSON.parse() or jq.
created_at
string
ISO 8601 timestamp when the task was created
finished_at
string
ISO 8601 timestamp when the task completed
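In a client, those fields can be handled in a few lines. A minimal Python sketch (the read_result helper and the sample task dict are illustrative, not part of any SDK):

```python
# Illustrative sketch of handling a returned task object; the sample dict
# below mirrors the documented response fields, with made-up values.

def read_result(task):
    """Return the agent's result, or raise if the task did not complete."""
    if task["status"] != "completed":
        raise RuntimeError(
            f"task {task['id']} ended with status {task['status']}: {task.get('result')}"
        )
    return task["result"]

task = {
    "id": "5a79676d-f22a-40a8-8f1d-aa63f66e2980",
    "status": "completed",
    "result": "Hello! I'm your assistant.",
}

print(read_result(task))  # the agent's response text
```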

Simplest Possible Invoke

curl -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "What is xpander.ai?"}}'
Extract just the result:
curl -s ... | jq -r '.result'

Multi-Turn Conversation

Pass the id from the first response to continue the thread:
# Turn 1
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "Hi, my name is David and I work at xpander.ai"}}'

# Response includes: "id": "6525177e-06a1-4063-82fe-37382d2302a5"

# Turn 2 — pass the same id
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {"text": "What is my name and where do I work?"},
    "id": "6525177e-06a1-4063-82fe-37382d2302a5"
  }'

# Agent responds: "Your name is David and you work at xpander.ai."
The agent remembers all previous messages in the thread. Always reuse the same id for follow-ups.
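The thread-continuation pattern above can be sketched as a small payload builder. This helper is hypothetical (not part of any SDK); it only shows where the thread id belongs in the request body:

```python
# Hypothetical helper: builds the JSON body for an invoke call,
# continuing an existing thread when a thread_id is supplied.

def build_payload(text, thread_id=None):
    """Build an invoke request body; include "id" only when continuing a thread."""
    payload = {"input": {"text": text}}
    if thread_id is not None:
        payload["id"] = thread_id  # reuse the id from a previous response
    return payload

first = build_payload("Hi, my name is David")  # no "id": a new thread is created
follow = build_payload(
    "What is my name and where do I work?",
    thread_id="6525177e-06a1-4063-82fe-37382d2302a5",
)
```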

With User Identity

Pass user details so the agent can personalize its responses and use identity-aware tools (like sending emails on behalf of the user):
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What is my name?",
      "user": {
        "email": "david@xpander.ai",
        "first_name": "David",
        "last_name": "Twizer"
      }
    }
  }'
{
  "status": "completed",
  "result": "Hi David! Your name is David Twizer, and I can see you're part of the xpander.ai team."
}
The user object is visible to the agent as context. The email field is required when providing user details.

Processing Files

Files passed in input.files are downloaded and injected directly into the LLM context window by default. This works well for small-to-medium files:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What is the abstract of this paper?",
      "files": ["https://assets.xpanderai.io/static/pdf/bitcoin.pdf"]
    }
  }'
{
  "status": "completed",
  "result": "The abstract describes a peer-to-peer electronic cash system that enables direct online payments without financial institutions, solving the double-spending problem through a distributed network using proof-of-work and cryptographic signatures."
}
The 9-page Bitcoin whitepaper (above) processes successfully — its content fits within the model’s context window.

Large Files Fail with Direct Injection

Large files will exceed the model’s context limit and return an error:
# This 185-page PDF will fail — too large for the context window
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What year was the Apple Macintosh introduced?",
      "files": ["https://assets.xpanderai.io/static/pdf/Introducing_the_Apple_Macintosh_1984.pdf"]
    }
  }'
{
  "status": "error",
  "result": "Error code: 413 - {'error': {'type': 'request_too_large', 'message': 'Request exceeds the maximum size'}}"
}
Files are injected into the LLM context by default. Documents over ~100 pages will typically exceed the model’s token limit. For large documents, use a Knowledge Base instead — add the document to a KB, attach it to the agent, and the agent will search it automatically via RAG.

Disable Context Injection

Set disable_attachment_injection: true to pass the file URL to the agent’s tools without injecting its content into the LLM prompt:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "Analyze this dataset",
      "files": ["https://example.com/data.csv"]
    },
    "disable_attachment_injection": true
  }'
With this flag, the file URL is available to the agent’s tools but the raw content is not prepended to the prompt. Use this when you want the agent’s tools to process the file rather than the LLM reading it directly.
For very large files (185+ pages), even disable_attachment_injection: true may not be enough — the file can exceed the HTTP request size limit before reaching the LLM. Use a Knowledge Base for production workflows with large documents.

Per-Request Instruction Override

Append instructions for this specific invocation without changing the agent’s configuration:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {"text": "Tell me about xpander pricing"},
    "title": "Pricing Inquiry",
    "instructions_override": "Always respond in exactly 2 sentences. Never use emojis."
  }'
{
  "status": "completed",
  "title": "Pricing Inquiry",
  "result": "xpander.ai offers two main pricing tiers: a Free plan with 2 serverless agents and 100 AI actions, and an In-House plan at $940/month for up to 10 agents and 200K actions. You can deploy on xpander's managed cloud or your own infrastructure."
}

Structured JSON Output

When the agent is configured for structured output (via output_format and a JSON Schema in output_schema), invoke it normally; the result field will be a JSON string matching the configured schema:
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "Look up Stripe"}}'
To configure output_format and output_schema for structured output, use the Update Agent endpoint or configure it in the dashboard under the Output tab.
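Because result arrives as a JSON string when output_format is json, parse it before use. A minimal sketch (the sample string is illustrative; a real value depends on your configured output_schema):

```python
import json

# Illustrative result string; real field names depend on your output_schema.
result = '{"company": "Stripe", "founded": 2010}'

data = json.loads(result)  # result is a JSON string, not a parsed object
print(data["company"])
```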

Example Response

{
  "id": "6525177e-06a1-4063-82fe-37382d2302a5",
  "agent_id": "<agent-id>",
  "organization_id": "<org-id>",
  "status": "completed",
  "input": {
    "text": "What is xpander.ai in one sentence?",
  },
  "result": "xpander.ai is a full-stack platform for building, deploying, and running autonomous AI agents in production.",
  "source": "api",
  "think_mode": "default",
  "disable_attachment_injection": false,
  "created_at": "2026-02-07T10:15:22.100233Z",
  "started_at": "2026-02-07T10:15:22.500000Z",
  "finished_at": "2026-02-07T10:15:28.997811Z",
  "execution_attempts": 1
}

Authorizations

x-api-key
string
header
required

API Key for authentication

Path Parameters

agent_id
string
required

Body

application/json
input
AgentExecutionInput · object
required

The input to send to the agent

id
string | null

Thread ID for multi-turn conversations. Pass the id from a previous task to continue the conversation.

expected_output
string | null

Natural-language description of the desired output

think_mode
enum<string> | null

Controls the agent's reasoning depth. "default" for standard reasoning, "harder" for deeper analysis.

Available options:
default,
harder
disable_attachment_injection
boolean | null

When true, files in input.files are not injected into the LLM context window

instructions_override
string | null

Additional instructions appended to the agent's system prompt for this invocation only. Use this to adjust behavior per-request without changing the agent's configuration.

Response

Successful Response

Response from invoking an agent. Contains the execution details and result.

id
string
required

Unique identifier for this execution

agent_id
string
required

ID of the agent that was invoked

organization_id
string
required

Organization UUID

input
AgentExecutionInput · object
required

The input that was sent to the agent

status
enum<string> | null
required

Current status of the execution (pending, executing, completed, etc.)

Available options:
pending,
executing,
paused,
error,
failed,
completed,
stopped
created_at
string<date-time>
required

ISO 8601 timestamp when the task was created

result
string | null

The agent's response. If output_format is json, this is a JSON string - parse it with JSON.parse() or jq

started_at
string<date-time> | null

ISO 8601 timestamp when the task started executing

finished_at
string<date-time> | null

ISO 8601 timestamp when the task completed