All AI agents in xpander are built using our powerful Agent Graph System, which automatically generates and manages multi-step workflows. The journey starts in our visual Agent Builder at app.xpander.ai, where you can design and test your agent’s behavior.

After designing your agent, you have two deployment options:

  1. Serverless Platform - Deploy directly from the console using our managed LLMs

  2. SDK with Your Own LLM - Use the same agent with your preferred LLM deployment

Start with the Visual Agent Builder

Regardless of your deployment choice, begin by designing your agent in the visual builder:

  1. Sign up at app.xpander.ai

  2. Create a new agent using the Agent Builder

  3. Design your agent’s workflow and test its behavior

  4. Choose your deployment option

The Agent Builder lets you design your agent's workflow, attach tools, and test its behavior before you commit to a deployment option.

Choose Your Deployment

Option 1: Serverless Platform

After designing your agent, you can start interacting with it using the serverless platform:

  1. Choose your Source Node (Slack, WebUI, or Webhook)

  2. Click “Save” in the Agent Builder

  3. Your agent is ready to use!

You can add a “task” type source node to your agent to schedule its execution. This will automatically inject the specified “prompt” into the agent and initiate its run.
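Conceptually, a scheduled task source node pairs a schedule with the prompt it injects. The sketch below is purely illustrative; the field names are assumptions for explanation, not the actual xpander configuration schema:

```python
# Illustrative only: field names are assumed, not the real xpander schema
task_source_node = {
    "type": "task",
    "schedule": "0 9 * * 1-5",  # cron-style: 09:00 on weekdays
    "prompt": "Summarize open support tickets from the last 24 hours",
}
```

When the schedule fires, the platform injects the configured prompt into the agent and starts a run, exactly as if a user had sent it.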

Option 2: SDK with Your Own LLM

Use the same agent design with your preferred LLM deployment:

Step 1: Prerequisites

  1. Python 3.8 or higher

  2. Node.js 18 or higher

  3. An xpander account and API key

  4. An LLM provider API key (OpenAI in the example below)

Step 2: Installation

# Create an isolated environment and install the SDKs
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install xpander-sdk openai
Step 3: SDK Setup

from openai import OpenAI
from xpander_sdk import XpanderClient

openai_client = OpenAI(api_key=openAIKey)              # OpenAI API key
xpander_client = XpanderClient(api_key=xpanderAPIKey)  # personal xpander API key
agent1 = xpander_client.agents.get(agent_id=xpanderAgentID)  # agent ID from the builder
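Rather than hard-coding secrets, a common pattern is to read the three values the setup code expects from environment variables. The variable and environment-variable names below are placeholder conventions for this example, not names required by either SDK:

```python
import os

# Env-var names are a convention chosen for this example, not SDK requirements
openAIKey = os.environ.get("OPENAI_API_KEY", "")
xpanderAPIKey = os.environ.get("XPANDER_API_KEY", "")
xpanderAgentID = os.environ.get("XPANDER_AGENT_ID", "")
```

This keeps keys out of source control and lets the same script run unchanged across environments.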
Step 4: Run

# Conversation history passed to the model; seed it with the user's request
memory = [{"role": "user", "content": "Your prompt here"}]

llm_response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=memory,
    tools=agent1.get_tools(),  # tools generated from the agent graph
)

# Pull any tool calls out of the response and execute them via xpander
tools_to_run = XpanderClient.extract_tool_calls(llm_response=llm_response.model_dump())
tool_responses = agent1.run_tools(tools_to_run)
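A single completion call only surfaces the model's tool requests; a working agent loops, feeding tool results back into the conversation until the model answers directly. The helper below sketches that merge step; the `tool_call_id` and `result` attributes on each tool response are assumptions about the SDK's return shape, not confirmed API:

```python
def append_tool_results(memory, assistant_message, tool_responses):
    """Merge an assistant turn and its executed tool results back into
    the conversation history before the next completions call."""
    memory.append(assistant_message)  # the message that contained tool_calls
    for response in tool_responses:
        memory.append({
            "role": "tool",
            # Attribute names below are assumed, not confirmed SDK fields
            "tool_call_id": response.tool_call_id,
            "content": str(response.result),
        })
    return memory
```

On each iteration you would merge the assistant message and tool results into `memory`, then repeat the completion call until the response contains no tool calls.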

Next Steps

  1. Start building AI agents in the Agent Builder

  2. Select your agent runtime and LLM

Need assistance? Join our Discord community or consult our documentation.