Agents act on real systems, not just generate text. The relevant security question shifts from "what might the model say" to "what can the agent do, and under whose authority?" The OWASP Top 10 for LLM Applications risks (excessive agency, sensitive data exposure, prompt injection) only become real when AI gains the ability to act.
xpander.ai enforces limits at two layers:
- Application layer: what each agent can do once running (operation surface, credentials, safety controls)
- Infrastructure layer: where agents run, what networks they can reach, where data lives
## Permission model
Each agent's permissions are part of its operation surface, not its system prompt. A Jira agent built for PR triage doesn't have access to org-admin endpoints because those operations were never wired into its connector, not because a prompt tells the model "don't call them." A capability that doesn't exist can't be jailbroken, leaked through tool injection, or coaxed out by a clever user.
This aligns with NIST Zero Trust Architecture (explicit verification, no implicit trust, least privilege by default), applied at the agent's operation surface, not just the network edge. The platform's specialized agents are how this is exposed: a connector with a narrow set of operations, a named owner, and an audit trail.
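The distinction can be sketched in a few lines of Python. All names here are hypothetical, not xpander's actual API; the point is that the dispatcher can only reach operations that were registered, so an unwired operation fails structurally rather than by policy.

```python
# Illustrative sketch: a connector exposes only the operations wired into it.
# Connector and operation names are invented for this example.

class Connector:
    def __init__(self, name, operations):
        self.name = name
        self._operations = dict(operations)  # the agent's entire surface

    def invoke(self, op_name, **kwargs):
        op = self._operations.get(op_name)
        if op is None:
            # Not "forbidden by a prompt": the capability simply doesn't exist.
            raise KeyError(f"{op_name!r} is not part of this connector")
        return op(**kwargs)

jira_triage = Connector("jira-pr-triage", {
    "add_comment": lambda issue, body: f"commented on {issue}",
    "transition_issue": lambda issue, state: f"{issue} -> {state}",
})

print(jira_triage.invoke("add_comment", issue="PR-42", body="triaged"))
try:
    jira_triage.invoke("delete_project", key="ORG")  # never wired in
except KeyError as e:
    print("blocked:", e)
```

No amount of prompt injection changes what `_operations` contains; the attack surface is fixed at wiring time.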
## Credential isolation
The model sees a tool description ("create_lead", with these parameters) and the call result, but never the OAuth token, API key, or IAM session credentials in between. The AI Gateway and Agent Workers broker authentication outside the reasoning loop. An attacker who compromises a prompt or tool output cannot exfiltrate a credential the model never had.
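A minimal sketch of this brokering pattern, with invented names: the model-facing view carries only the tool name, arguments, and result, while the gateway resolves the secret at invocation time.

```python
# Illustrative only: the gateway injects the credential after the model has
# chosen a tool call, so the token never enters the model's context.

SECRETS = {"crm": "oauth-token-abc123"}  # stands in for a Kubernetes secret

def model_view(tool_call):
    # What the model sees: tool name and arguments, no credential fields.
    return {"tool": tool_call["tool"], "args": tool_call["args"]}

def gateway_invoke(tool_call):
    token = SECRETS[tool_call["connector"]]   # resolved at invocation time
    # ... perform the authenticated downstream request using `token` ...
    return {"status": "ok", "lead_id": 7}     # only the result flows back

call = {"connector": "crm", "tool": "create_lead", "args": {"name": "Acme"}}
assert "oauth" not in str(model_view(call))   # credential absent from model view
print(gateway_invoke(call)["status"])
```

A compromised prompt or poisoned tool output can steer what the model asks for, but there is no credential in its context to exfiltrate.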
Credentials live as Kubernetes secrets and are read by the gateway and workers at invocation time. Supported patterns:
- OAuth 2.0 with auto-refresh, scoped at the connector level
- API keys and tokens stored as secrets, injected at invocation time
- AWS IAM roles via service accounts (IRSA), where tools assume per-connector roles. See IAM Best Practices.
- End-user delegated identity, where each user's own OAuth tokens authorize the call (so audit trails attribute actions to the right human)
For self-hosted deployments, create a Kubernetes secret with your LLM provider keys and reference it via envFromSecretKeys in your Helm values rather than passing keys as --set flags. See Self-Hosted Kubernetes for the canonical setup.
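For instance, the provider keys might live in a Secret like the following. The secret name, namespace, and key below are placeholders, and the exact `envFromSecretKeys` schema is defined by the chart; consult the Self-Hosted Kubernetes guide for the real layout.

```yaml
# Hypothetical Secret holding LLM provider keys (all names illustrative).
apiVersion: v1
kind: Secret
metadata:
  name: llm-provider-keys
  namespace: xpander          # assumption: the platform's namespace
type: Opaque
stringData:
  OPENAI_API_KEY: sk-REDACTED # never commit real keys to version control
```

Your Helm values would then point `envFromSecretKeys` at `llm-provider-keys`. Keeping the key out of `--set` flags also keeps it out of shell history and Helm release metadata.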
## Network boundary
xpander self-hosted environments make outbound connections only. The control plane never initiates traffic into your cluster; the data plane (where every task runs) lives entirely inside your VPC, data center, or air-gapped network. Your firewall only needs egress on port 443 to xpander's control plane. No inbound rules.
```
┌─────────────────────┐                          ┌──────────────────────┐
│  xpander Cloud      │                          │  Your VPC / Cluster  │
│  (Control Plane)    │ ◀── Outbound TLS ─────── │  (Data Plane)        │
│                     │     metadata +           │                      │
│  Agent registry     │     heartbeats only      │  Agent Controller    │
│  Connector defs     │                          │  Agent Workers       │
│  Event logging      │                          │  AI Gateway          │
│                     │  ✗  No inbound           │  PostgreSQL · Redis  │
└─────────────────────┘                          └──────────────────────┘
```
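A Kubernetes NetworkPolicy on the data-plane namespace can make the egress-only posture explicit. This is a sketch, not shipped configuration: the namespace name is an assumption, and in practice you would also need to permit DNS (UDP/TCP 53) for name resolution.

```yaml
# Hypothetical policy: pods in the platform namespace may only open
# outbound TCP 443; all other egress (and all ingress) is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-443-only
  namespace: xpander        # assumption: the data-plane namespace
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes:
    - Ingress               # no ingress rules listed = deny all inbound
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 443
```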
Two outbound connectivity options:
| Option | Target | Notes |
|---|---|---|
| Public TLS | 15.197.85.80, 166.117.85.46 | Encrypted over public internet |
| AWS PrivateLink | com.amazonaws.vpce.us-west-2.vpce-svc-0101884b32f655197 | Private AWS backbone. See PrivateLink setup. |
Clients (the SDK, Slack, MCP, Web Chat, your own application servers) connect directly to your cluster, not through xpander Cloud. Runtime data never transits the control plane:
- Agent invocations stay in your environment
- Chat threads are served from your cluster's PostgreSQL
- Knowledge base queries hit your vector store
- Task results round-trip through your ingress
What does leave the cluster: the user identity used for Workbench login, and the agent metadata that tells your cluster which agent definition to load.
## Encryption
The platform configures encryption at each layer; you provide the substrate.
| Layer | What's encrypted | How |
|---|---|---|
| In transit | All platform traffic | TLS, with ingress certs from cert-manager (Let's Encrypt or internal CA), externally provisioned certs mounted as a secret, or self-signed for dev |
| At rest, application data | Agent state, threads, messages, memory, KB embeddings (PostgreSQL); cache and sessions (Redis) | Encrypted persistent volumes (EBS+KMS, Azure Managed Disk, GCP CMEK) or transparent DB encryption |
| At rest, secrets | Kubernetes secrets in etcd | Envelope encryption (one-flag on EKS/AKS/GKE, KMS configuration on self-managed clusters) |
For xpander Cloud, all of the above uses cloud-provider managed keys with TLS terminated at the platform's ingress.
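On self-managed clusters, the KMS configuration mentioned above is an `EncryptionConfiguration` passed to the API server via `--encryption-provider-config`. A representative sketch, with the plugin name and socket path left for you to fill in:

```yaml
# Envelope-encrypt Secrets in etcd with an external KMS plugin.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: cloud-kms                       # your KMS plugin's name
          endpoint: unix:///var/run/kms.sock    # plugin's gRPC socket
          timeout: 3s
      - identity: {}   # fallback so pre-existing plaintext data stays readable
```

On EKS, AKS, and GKE the equivalent is enabled with a single cluster flag or setting, as noted in the table.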
## Safety controls
Each control operates at the input/output boundary of a tool call and is toggled on per agent; there is no separate moderation service.
- PII detection scans both inputs and outputs and can mask values before they reach the model or the downstream tool
- Prompt injection blocking inspects tool outputs for the patterns used to override system instructions through retrieved content (the "ignore previous instructions" class of attack)
- Content moderation filters unsafe categories in generated responses
- Step limits cap how many tool calls a single task can make, preventing runaway reasoning loops
- Locked parameters let you pin specific tool arguments (e.g., always use a specific account ID) so the model can't override them at runtime
These are application-layer controls. They complement, not replace, infrastructure isolation.
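Two of these controls, step limits and locked parameters, reduce to a few lines of logic. An illustrative sketch (not the platform's implementation; all names invented):

```python
# Sketch of two per-agent safety controls: a cap on tool calls per task,
# and pinned arguments that override whatever the model proposes.

MAX_STEPS = 3
LOCKED = {"account_id": "acct-primary"}  # pinned; the model can't change it

def run_task(proposed_calls):
    results = []
    for step, call in enumerate(proposed_calls):
        if step >= MAX_STEPS:
            results.append("halted: step limit reached")
            break
        args = {**call["args"], **LOCKED}  # locked values always win
        results.append(f"{call['tool']}({args})")
    return results

# A runaway loop proposing 5 calls, each trying to spoof the account ID:
out = run_task([
    {"tool": "charge", "args": {"account_id": "acct-evil", "amount": 5}},
] * 5)
print(out[0])   # locked account_id replaced the model's value
print(out[-1])  # the loop was halted at the step limit
```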
## OWASP Top 10 mapping
| OWASP Top 10 (LLM) risk | Where the platform addresses it |
|---|---|
| Excessive agency | Operation-surface scoping per agent; approval gates on workflow Wait nodes |
| Insecure output handling | PII masking, content moderation, schema-validated tool outputs |
| Sensitive information disclosure | Credentials never reach the model; deployment isolation; PII detection |
| Prompt injection | Built-in injection detection on tool outputs; sandboxed tool execution |
| Supply chain risks | Pinned chart and image versions; immutable deploy snapshots; version-controlled connector definitions |
| Unbounded consumption | Step limits, token budgets, per-agent cost ceilings |
For deeper context on how these controls fit into a governance program, see enterprise AI governance for secure agentic automation.