Invoke an agent synchronously. The request blocks until the agent finishes (typically 5–30 seconds) and returns the completed task with the result. For longer-running tasks, use Invoke Agent (Async) or Invoke Agent (Stream).
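A minimal synchronous invocation looks like this (endpoint and headers match the examples below; the prompt text is illustrative):

```bash
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "Summarize the latest AI news"}}'
```

The call returns only once the task completes, with the status and result in the response body.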
URLs of files for the agent to process (PDFs, images, CSVs, etc.). By default, file contents are injected directly into the LLM context window. For large files, set disable_attachment_injection: true to keep them out of context and let tools handle them instead.
Identity of the end user invoking the agent. The agent sees this as context — it can greet the user by name, personalize responses, and use the email for tools like email sending.
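A sketch of passing end-user identity; the field names here (input.user with name and email) are assumptions, not confirmed by this page, so check the request schema for the exact shape:

```bash
# Hypothetical field names - verify "user", "name", and "email" against the request schema
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "Draft my weekly status email",
      "user": {"name": "David", "email": "david@xpander.ai"}
    }
  }'
```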
Thread ID for multi-turn conversations. Pass the id from a previous task’s response to continue the same conversation. The agent will have full context of all prior messages. If omitted, a new thread is created automatically.
When true, files passed in input.files are not injected into the LLM context window. The file URLs are still available to the agent’s tools, but the raw content won’t be prepended to the prompt. Use this when:
- Files are large (would exceed the model’s context limit)
- You want tools to process the files rather than the LLM reading them directly
- You’re passing many files and don’t need them all in context
Default (false): file contents are downloaded, extracted, and injected directly into the LLM prompt as context.
Controls the agent’s reasoning depth: default for standard reasoning, harder for deeper chain-of-thought analysis. Use harder for complex multi-step tasks that benefit from more deliberate planning.
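A sketch of requesting deeper reasoning; the request-body field name is not shown on this page, so reasoning_effort below is an assumption to confirm against the parameter list:

```bash
# "reasoning_effort" is an assumed field name - check the request schema for the real one
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {"text": "Plan a three-phase migration from MySQL to Postgres"},
    "reasoning_effort": "harder"
  }'
```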
Additional instructions appended to the agent’s system prompt for this invocation only. Use this to adjust behavior per-request without changing the agent’s configuration — for example, restricting output format, adding constraints, or changing tone.
Pass the id from the first response to continue the thread:
```bash
# Turn 1
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "Hi, my name is David and I work at xpander.ai"}}'
# Response includes: "id": "6525177e-06a1-4063-82fe-37382d2302a5"

# Turn 2 - pass the same id
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {"text": "What is my name and where do I work?"},
    "id": "6525177e-06a1-4063-82fe-37382d2302a5"
  }'
# Agent responds: "Your name is David and you work at xpander.ai."
```
The agent remembers all previous messages in the thread. Always reuse the same id for follow-ups.
Files passed in input.files are downloaded and injected directly into the LLM context window by default. This works well for small-to-medium files:
```bash
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What is the abstract of this paper?",
      "files": ["https://assets.xpanderai.io/static/pdf/bitcoin.pdf"]
    }
  }'
```
{ "status": "completed", "result": "The abstract describes a peer-to-peer electronic cash system that enables direct online payments without financial institutions, solving the double-spending problem through a distributed network using proof-of-work and cryptographic signatures."}
The 9-page Bitcoin whitepaper (above) processes successfully — its content fits within the model’s context window.
Large files will exceed the model’s context limit and return an error:
```bash
# This 185-page PDF will fail - too large for the context window
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "What year was the Apple Macintosh introduced?",
      "files": ["https://assets.xpanderai.io/static/pdf/Introducing_the_Apple_Macintosh_1984.pdf"]
    }
  }'
```
{ "status": "error", "result": "Error code: 413 - {'error': {'type': 'request_too_large', 'message': 'Request exceeds the maximum size'}}"}
Files are injected into the LLM context by default. Documents over ~100 pages will typically exceed the model’s token limit. For large documents, use a Knowledge Base instead — add the document to a KB, attach it to the agent, and the agent will search it automatically via RAG.
With this flag, the file URL is available to the agent’s tools but the raw content is not prepended to the prompt. Use this when you want the agent’s tools to process the file rather than the LLM reading it directly.
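A sketch of the flag in use, assuming it sits at the top level of the request body alongside input (like the other per-request options on this page); verify the placement against the request schema:

```bash
# Placement of "disable_attachment_injection" assumed top-level - confirm in the schema
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {
      "text": "Process this report with your tools",
      "files": ["https://assets.xpanderai.io/static/pdf/bitcoin.pdf"]
    },
    "disable_attachment_injection": true
  }'
```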
For very large files (185+ pages), even disable_attachment_injection: true may not be enough — the file can exceed the HTTP request size limit before reaching the LLM. Use a Knowledge Base for production workflows with large documents.
Append instructions for this specific invocation without changing the agent’s configuration:
```bash
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "input": {"text": "Tell me about xpander pricing"},
    "title": "Pricing Inquiry",
    "instructions_override": "Always respond in exactly 2 sentences. Never use emojis."
  }'
```
{ "status": "completed", "title": "Pricing Inquiry", "result": "xpander.ai offers two main pricing tiers: a Free plan with 2 serverless agents and 100 AI actions, and an In-House plan at $940/month for up to 10 agents and 200K actions. You can deploy on xpander's managed cloud or your own infrastructure."}
```bash
curl -s -X POST "https://api.xpander.ai/v1/agents/<agent-id>/invoke" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{"input": {"text": "Look up Stripe"}}'
```
To configure output_format and output_schema for structured output, use the Update Agent endpoint or configure it in the dashboard under the Output tab.
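For illustration, a schema like the following could back the "Look up Stripe" call above. The field names output_format and output_schema come from the note above, but the endpoint path, HTTP method, and accepted values are assumptions; treat this as a sketch and check the Update Agent reference:

```bash
# Sketch only: PATCH method, path, and "json" value are assumed - see the Update Agent docs
curl -s -X PATCH "https://api.xpander.ai/v1/agents/<agent-id>" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <your-api-key>" \
  -d '{
    "output_format": "json",
    "output_schema": {
      "type": "object",
      "properties": {
        "company": {"type": "string"},
        "industry": {"type": "string"},
        "founded": {"type": "integer"}
      },
      "required": ["company"]
    }
  }'
```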
Response
Successful Response
Response from invoking an agent. Contains the execution details and result.