```bash
# Install Ollama (follow instructions at https://ollama.ai)

# Pull a model
ollama pull gpt-oss:20b
```
Create `local_model_example.py`:
```python
from dotenv import load_dotenv

load_dotenv()

from xpander_sdk import Backend
from agno.agent import Agent
from agno.models.ollama import Ollama

# Initialize backend
backend = Backend()

# Override model with local Ollama
agno_agent = Agent(**backend.get_args(override={
    'model': Ollama(id="gpt-oss:20b")
}))

# Test the local model
agno_agent.print_response(message="How to install xpander-cli?")
```
Make sure Ollama is running with your chosen model before you run the example.
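If you want to verify this programmatically, a quick check against Ollama's local REST API works. This is a minimal sketch assuming the default endpoint on `localhost:11434`, whose `/api/tags` route lists the models you have pulled:

```python
import requests

# Readiness check, assuming Ollama's default local API port (11434).
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

# The response is {"models": [{"name": "gpt-oss:20b", ...}, ...]}
models = [m["name"] for m in resp.json().get("models", [])]
print("Pulled models:", models)

if not any(name.startswith("gpt-oss") for name in models):
    print("gpt-oss:20b is not pulled yet -- run: ollama pull gpt-oss:20b")
```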
Run the example:
```bash
python local_model_example.py
```
This example demonstrates the model override capability: you can run inference on a local Ollama model while keeping all Xpander backend benefits, including configuration management, conversation memory, and storage. The hybrid architecture keeps inference local for privacy while cloud-based features handle everything else.
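One pattern this enables is parameterizing which local model the agent uses, since the `override` dict replaces only the keys you specify. The sketch below reuses the same `Backend` and `get_args` calls as the example above; the `LOCAL_MODEL_ID` environment variable name is hypothetical, chosen here for illustration:

```python
import os

from dotenv import load_dotenv
from xpander_sdk import Backend
from agno.agent import Agent
from agno.models.ollama import Ollama

load_dotenv()

# LOCAL_MODEL_ID is a hypothetical variable name for this sketch; set it to
# any model id you have pulled with `ollama pull`.
model_id = os.getenv("LOCAL_MODEL_ID", "gpt-oss:20b")

backend = Backend()

# Only the 'model' key is overridden; memory, storage, and configuration
# still come from the Xpander backend, exactly as in the example above.
agent = Agent(**backend.get_args(override={"model": Ollama(id=model_id)}))
agent.print_response(message="How to install xpander-cli?")
```

With this pattern, switching models is a one-line environment change (`LOCAL_MODEL_ID=llama3.1 python local_model_example.py`) rather than a code edit.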