Module 1: Supercharge Your IDE with MCP
Create and optimize a GitHub MCP Server with Agentic RAG
Module Summary
- Goal: Create a custom GitHub MCP Server with Agentic RAG optimization
- Estimated Time: 15-20 minutes
- Prerequisites: GitHub account, Cursor IDE, xpander.ai account
🚀 In this module, you’ll learn how to build a powerful GitHub search agent with optimized Agentic RAG capabilities and expose it as a Model Context Protocol (MCP) Server to supercharge your Cursor IDE experience. By building this agent, you’ll understand how to create practical AI tools that integrate seamlessly with your development workflow, improving coding efficiency and access to real-time code examples.
🔍 Creating Your GitHub Search Agent
Step 1: Set up xpander.ai
- Navigate to the xpander.ai platform in your browser
- Sign in with your existing credentials or sign up for a new account
- In the left navigation menu, go to Agents
- Click the + New Agent button to open the Workbench
- When prompted by the agent builder, click Skip (we’ll create the agent manually for this workshop)
Step 2: Add GitHub Search Tools
- Click the + (plus) button in your agent canvas
- Select Apps from the menu
- Select GitHub Search Manager
- Click Sign in with GitHub Search Manager
- Give the interface a name (like “github-search-your-name” or just “github-search”)
- Click Save
After authorizing GitHub Search, you’ll see the available operations for this App in the right-hand menu. Operations you add from this menu are reflected as Tools in the AI Agent’s runtime.
Step 3: Add GitHub Operations
Add the following GitHub operations to your agent. Select each operation and confirm by clicking “Add to agent” at the bottom of the right selection menu.
- Find Code Snippets by Query Terms: GET /search/code
- Find Repositories by Criteria: GET /search/repositories
- Search Topics by Criteria: GET /search/topics
- Find Users by Criteria: GET /search/users
- Search Commits by Criteria: GET /search/commits
Step 4: Configure Agent Settings
- In the AI Workbench, click the Gear Icon in the top-right corner
- In the Agent Builder Settings panel, click Generate Details
- Enter a short prompt describing the agent (for example, a GitHub search assistant that finds repositories, code, users, commits, and topics) so the builder can generate your agent description
👉 This step automatically generates the AI Agent instructions for us. Let’s further optimize them to match our exact use case.
- Go to the Instructions section and define your agent’s Role, Goal, and Instructions
- Click Save Changes, close the settings window, and then click Deploy.
Your agent should look similar to this:
🧪 Testing Your Agent with Raw API Responses
Let’s test your newly created GitHub search agent:
- In the left pane of the builder interface, you’ll find the testing area
- Type a query that searches for topics, for example: “Find GitHub topics related to the Model Context Protocol”
- Look at the generated payload by clicking on it; you’ll see both the AI-generated payload and the raw API response
Using the tester tab is extremely useful for seeing how the AI Agent executes tool calls, how it populates parameters, and what responses are returned from the remote APIs.
- Now try a repositories search in the testing area, for example: “Find popular repositories that implement MCP servers”
Because the API response for this tool call is huge, it is not optimal for AI consumption and the call could fail (you might not see a response at all). Continue to the next step to see how Agentic RAG helps filter and optimize these oversized responses.
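To get a feel for the problem, you can call the same public endpoint that the Find Repositories by Criteria operation wraps and measure the unfiltered payload yourself. The sketch below is only an illustration using Python’s standard library; the query string and page size are arbitrary choices, not values from the workshop.

```python
# Rough illustration: fetch the raw GitHub Search API response that the
# "Find Repositories by Criteria" tool wraps, and print how much JSON an
# unfiltered call would hand to the model.
import json
import urllib.request

req = urllib.request.Request(
    "https://api.github.com/search/repositories?q=model+context+protocol&per_page=30",
    headers={"Accept": "application/vnd.github+json"},
)
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

print(f"{data['total_count']} total matches, {len(data['items'])} items in this page")
print(f"~{len(json.dumps(data)) // 1024} KB of JSON before any filtering")
```

A single page of results can easily run to tens of kilobytes, most of it fields the model never needs, which is exactly the kind of payload Agentic RAG is designed to trim.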
🧠 Optimizing Responses with Agentic RAG
Optimizing Repository Search
Let’s improve the repository search results using Agentic RAG:
- Tool calls can be configured individually. Click the gear icon next to the Find Repositories by Criteria operation in the graph.
- Click Advanced and then switch to Raw Editor (this allows the AI to send natural-language queries, and the server performs semantic search over the API response)
- Configure the search and return fields (an illustrative configuration is sketched below):
  - Searchable Fields: the fields the AI can search against
  - Returnable Fields: the data the AI will receive
- Save and deploy your changes
- Now run the repositories query again
Notice how much cleaner and more focused the response is now! Instead of overwhelming JSON data, you get just the meaningful content that matters to your query.
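For reference, one plausible configuration for this operation is shown below. The field names come from the public GitHub /search/repositories response; the exact names and syntax expected by the Raw Editor may differ, so treat this as an illustration rather than the required setting.
- Searchable Fields: full_name, description, topics, language
- Returnable Fields: full_name, html_url, description, language, stargazers_count, topics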
Configure Agentic RAG for the other operations as well
Go through the tabs in this section to configure the correct Searchable and Returnable fields for the remaining tools, so that Agentic RAG can optimize those tool calls as well.
To optimize the user search operation:
- Click the gear icon next to the Find Users by Criteria operation
- Go to Advanced → Raw Editor
- Configure the Searchable Fields and Returnable Fields for this operation
To optimize the topics search operation:
- Click the gear icon next to the Search Topics by Criteria operation
- Go to Advanced → Raw Editor
- Configure the Searchable Fields and Returnable Fields for this operation
To optimize the commit search operation:
- Click the gear icon next to the Search Commits by Criteria operation
- Go to Advanced → Raw Editor
- Configure the Searchable Fields and Returnable Fields for this operation
To optimize the code snippet search operation:
- Click the gear icon next to the Find Code Snippets by Query Terms operation
- Go to Advanced → Raw Editor
- Configure the Searchable Fields and Returnable Fields for this operation (illustrative field sets for these four operations are sketched below)
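As with the repositories operation, the field choices below are only a plausible starting point based on the public GitHub Search API response shapes; the dot notation for nested fields is an assumption, so adjust to whatever the Raw Editor actually shows for each operation.
- Find Users by Criteria
  - Searchable: login, type
  - Returnable: login, html_url, type, score
- Search Topics by Criteria
  - Searchable: name, display_name, short_description, description
  - Returnable: name, display_name, short_description, created_by
- Search Commits by Criteria
  - Searchable: commit.message, commit.author.name, repository.full_name
  - Returnable: sha, html_url, commit.message, commit.author.date, repository.full_name
- Find Code Snippets by Query Terms
  - Searchable: name, path, repository.full_name
  - Returnable: name, path, html_url, repository.full_name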
You have now configured Agentic RAG for four more operations: code snippet search, commit search, user search, and topics search.
Continue the workshop from the WebUI
To continue testing your agent:
- Keep the AI Workbench tab open
- Open the web chat interface by clicking the Chat source node (at the top of the canvas) and clicking the URL.
- From your chat history, select the last conversation thread you started before configuring Agentic RAG, so the context is maintained
Now test your agent again, checking the generated payload and the agent’s response for each call. Use prompts that steer the agent toward a specific operation (for example: “Find users who contribute to MCP tooling” or “Search for recent commits that mention the Model Context Protocol”) and check whether the data returned is the optimized, filtered version. You can use the same thread or create a new one.
📄 Adding Code Reading Capabilities
Let’s enhance our agent by adding the ability to read code directly from GitHub URLs:
- In the left navigation menu, go to Cloud Functions
- Click the New button
- Replace the example code with the workshop’s GitHub code-reader snippet (a rough sketch of what such a function can look like follows this list)
- Name it GitHub Code Reader and click Save
- Return to your agent in the Agents section
- Click the + button
- Select Custom Action
- Add the GitHub Code Reader function to your agent canvas. It could take a minute for the Action to appear in the menu as it gets validated. Refresh the Agent page if you don’t see it.
- Click Deploy
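The exact snippet is provided in the workshop materials. As a rough sketch of what a GitHub code reader can look like, here is a plain-Python version; the function name is arbitrary, and the real xpander.ai Cloud Function wrapper and signature may differ.

```python
# Hypothetical sketch of a GitHub code reader: it accepts either a regular
# github.com ".../blob/..." file URL or a raw.githubusercontent.com URL and
# returns the file's text content.
import urllib.request

def read_github_file(url: str) -> str:
    # Rewrite a github.com blob URL to its raw.githubusercontent.com form
    if "github.com" in url and "/blob/" in url:
        url = url.replace("github.com", "raw.githubusercontent.com").replace("/blob/", "/")
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")
```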
Now run a prompt that asks the agent to read a specific GitHub file URL and explain the code it finds there.
The screenshot is from the agent’s WebUI. You can access yours by clicking the ‘Chat’ button on the canvas; the WebUI URL will appear there.
🌎 Enhancing Your Agent with Web Search
Your GitHub search agent can currently only access information available on GitHub. Let’s add web search capabilities to make it more versatile:
- Return to your AI Agent workbench
- Click the + button to add components to the AI Agent
- Select Built-in Actions
- Add Fetch Tavily AI Insights (for comprehensive web searching)
- Click the gear icon in your agent canvas to edit the agent settings
- Update your agent’s instructions by adding rules that require a web search first (for example: “Always call Fetch Tavily AI Insights to gather up-to-date context before invoking any GitHub operation”)
- Save & Deploy your changes
- In the Tester tab, try a query that requires recent information, for example: “What are the current best practices for building MCP servers? Show me example repositories.”
Your agent should now provide more comprehensive answers by combining web search results with GitHub code examples:
Sometimes, our autonomous agent might choose to use GitHub before Tavily, despite the explicit instructions that are configured in the system prompt. This happens because the AI model sees all available tools at once and makes decisions based on its understanding of the prompt and tool descriptions.
This reveals a fascinating design challenge in AI systems: How can we create truly autonomous agents capable of making intelligent decisions while ensuring they reliably follow critical business logic when needed? 🤔
Continue to the Dependency Graph section to find out 👉
🔄 Creating the Agent Dependency Graph
In production, to ensure your agent follows the right search strategy, you should create a dependency graph that deterministically enforces the required business logic:
- In your agent canvas, connect nodes by dragging lines between operations to create the dependency graph
- Set up a workflow that requires an internet search before any GitHub operation
- Connect the Tavily AI Insights node as a prerequisite to the GitHub operations: drag a line from the top of each GitHub operation to the bottom of the Tavily operation
- Ensure the flow follows a logical progression
- Repeat the earlier query, this time explicitly asking the agent not to start with Tavily
You should see that the AI Agent fails to follow the prompt, because it tried to invoke a tool call that violates the dependency graph: Error: The tool ‘GitHubRepoDiscoveryFindRepositoriesByCriteria’ was selected incorrectly and cannot be executed at this stage.
- Now run a prompt that does not conflict with the dependency graph
- Verify that the AI follows the correct workflow, searching the web for information before accessing GitHub operations
Congratulations! You’ve built a reliable AI Agent that balances autonomy with guardrails. By creating this structured workflow, you’ve addressed both the API response overflow challenge and the challenge of keeping autonomous, multi-step behavior on track.
🌐 Exposing Your Agent as an MCP server
Step 1: Access your Agent MCP URL
The final step is to expose your optimized GitHub agent as a Model Context Protocol (MCP) server:
- In your agent canvas, click the + button
- Select Sources → MCP
- Click Deploy
- After the short deployment phase completes, click the MCP source node on the canvas to reveal your MCP server’s unique URL.
- Configure this endpoint in your Cursor IDE
Go to Cursor -> Settings -> MCP and click “+ Add new global MCP server”, or edit ~/.cursor/mcp.json directly.
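A minimal ~/.cursor/mcp.json entry might look like the sketch below. The server name is arbitrary and the URL is the placeholder from this page; depending on your Cursor version, you may instead need to wrap a remote server with a stdio bridge such as mcp-remote.

```json
{
  "mcpServers": {
    "xpander-github-search": {
      "url": "https://mcp.xpander.ai/your-server-url/"
    }
  }
}
```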
Make sure to replace https://mcp.xpander.ai/your-server-url/ with your actual MCP server URL from the deployment step!
Step 2: Customize the Cursor Settings
- Add an explicit instruction to prioritize MCP commands in the “system prompt” configuration of your Cursor IDE.
- Make sure the MCP server is active (Cursor Settings -> MCP)
- Restart Cursor IDE to apply the changes
Step 3: Ask Cursor Agent to invoke MCP
Test your MCP integration by giving the Cursor AI Agent a prompt that exercises the new server, for example: “Use the GitHub MCP server to find popular repositories that implement MCP servers and show me a code example.”
If you don’t want to manually approve each MCP tool call, you can go to Cursor settings and enable auto-run mode (“Allow Agent to run tools without asking for confirmation, such as executing commands and writing to files”).
✅ Checkpoint
Congratulations! By completing this module, you should now be able to:
- Create and configure a GitHub search agent with optimized Agentic RAG
- Customize searchable and returnable fields for each tool call
- Add custom code execution to your agent
- Structure agent workflows using dependency graphs
- Deploy your agent as an MCP server for Cursor IDE
🔄 Next Steps
Now that you’ve supercharged your IDE with an optimized GitHub MCP server, you’re ready to build your first coding agent - proceed to the next module.