Overview
Spur’s MCP (Model Context Protocol) server connects your AI assistant directly to your Spur tests. Whether you use Cursor, Claude Code, ChatGPT, or any MCP-compatible client, you can discover tests, trigger runs, and debug failures through natural language – without leaving your development environment. Authentication uses OAuth: when you connect for the first time, your browser will open to authorize access to your Spur account.
How to Set Up
Cursor
Click this link to install!
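If the link is unavailable, you can register the server manually. A minimal sketch of a .cursor/mcp.json entry, assuming Cursor's remote-server config format; the "spur" key is just a label of your choosing:

```json
{
  "mcpServers": {
    "spur": {
      "url": "https://app.spurtest.com/api/mcp"
    }
  }
}
```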
Claude Code
Run this command in your terminal:
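The exact command may differ from this sketch, which assumes Claude Code's claude mcp add CLI with an HTTP transport; check the install instructions in your Spur dashboard:

```sh
# Hypothetical invocation: registers the server under the name "spur"
claude mcp add --transport http spur https://app.spurtest.com/api/mcp
```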
ChatGPT
- Enable Dev mode in your workspace settings (requires owner permissions)
- Create a new App
- Set the MCP URL to https://app.spurtest.com/api/mcp
- Set Auth to OAuth
Available Tools
Test Discovery
- list_tests – Lists all tests in the active application with IDs, suites, and descriptions
- get_test_details – Returns the full definition of a test: steps, environments, viewports, and scenario rows
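Your MCP client issues these calls for you, but for reference, a tools/call request for list_tests looks roughly like this on the wire (standard MCP JSON-RPC; this sketch assumes list_tests takes no required arguments):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_tests",
    "arguments": {}
  }
}
```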
Test Execution
- run_test – Runs a single test by test_id and env_id. Optionally specify browser, viewport, or scenario row
- run_tests – Runs multiple tests in one collection run, grouped by shared configuration
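A sketch of a run_test call using the documented test_id and env_id parameters; the viewport key and the ID values are illustrative guesses at the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "run_test",
    "arguments": {
      "test_id": "test_123",
      "env_id": "env_staging",
      "viewport": "mobile"
    }
  }
}
```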
Run Analysis
- get_test_run_overview – Start here. Summarizes a run’s status, step results, warnings, and failures
- get_test_run_details – Deep dive into steps, sub-steps, configs, and artifacts. Use after the overview
- get_test_runs – Lists the last 50 runs for a given test
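Once a run completes, the overview call needs only a run identifier. The run_id name and value below are guesses at the schema, and the Debugging tools in the next section likely follow the same shape:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_test_run_overview",
    "arguments": {
      "run_id": "run_456"
    }
  }
}
```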
Debugging
- get_test_run_console_logs – Browser console output and JavaScript errors (last 100 entries)
- get_test_run_network_logs – HTTP requests and responses captured during the run (last 5 entries)
- get_test_run_screenshots – Screenshots from the test execution for visual inspection
Application Management
- list_applications – Lists all applications on your account and shows the active one
- switch_application – Switches the active application for subsequent calls
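Switching context is a single call. Assuming switch_application takes an application identifier (the application_id name and value are guesses at the schema):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "switch_application",
    "arguments": {
      "application_id": "app_789"
    }
  }
}
```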
Common Workflows
Discover and Run Tests
Ask your assistant: “What tests do I have for checkout?” The agent calls list_tests, scans names and descriptions, and returns matching tests. You can then run specific tests directly from the conversation.
[Image: Natural language test discovery through your IDE]
Execute with Specific Configurations
Ask: “Run the checkout test on staging with mobile viewport.” The agent calls run_test with the correct environment and viewport, then monitors the job status.
Debug Failed Runs
Ask: “What went wrong in this run?” The agent calls get_test_run_overview to identify failing steps, then drills into get_test_run_details, get_test_run_console_logs, or get_test_run_screenshots to surface the root cause.
[Image: Debugging workflow from overview to console logs]
Best Practices
- Start with the overview: When analyzing results, always use get_test_run_overview first. This provides context before diving into detailed logs or screenshots.
- Be specific in your prompts: “Run the checkout test on production” works better than “run some tests.” Include environment names, viewport preferences, or scenario details when relevant.
- Model quality matters: More capable models (Claude Opus, GPT-4) are better at choosing the right tools and interpreting results. Smaller models may need more explicit guidance.
- Use multi-test runs strategically: When testing the same flow across environments, use run_tests to group configurations efficiently rather than running tests individually (see the sketch below).
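One possible shape for a grouped run_tests call; the test_ids and env_id argument names and values are guesses at the schema:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "run_tests",
    "arguments": {
      "test_ids": ["test_checkout", "test_login"],
      "env_id": "env_staging"
    }
  }
}
```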
