Overview

Spur’s MCP (Model Context Protocol) server connects your AI assistant directly to your Spur tests. Whether you use Cursor, Claude Code, ChatGPT, or any MCP-compatible client, you can discover tests, trigger runs, and debug failures through natural language – without leaving your development environment. Authentication uses OAuth. When you connect for the first time, your browser will open to authorize access to your Spur account.

How to Set Up

Cursor

Click the Install MCP Server link for one-click setup.
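
If the one-click link isn't available in your environment, you can register the server by hand. Below is a minimal sketch of a .cursor/mcp.json entry, assuming Cursor's standard mcpServers config shape and the same mcp-remote bridge used in the Claude Code command below; the server name "Spur" is arbitrary:

  {
    "mcpServers": {
      "Spur": {
        "command": "npx",
        "args": ["-y", "mcp-remote@latest", "https://app.spurtest.com/api/mcp"]
      }
    }
  }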

Claude Code

Run this command in your terminal:
claude mcp add Spur -- npx -y mcp-remote@latest https://app.spurtest.com/api/mcp

ChatGPT

  1. Enable Dev mode in your workspace settings (requires owner permissions)
  2. Create a new App
  3. Set the MCP URL to https://app.spurtest.com/api/mcp
  4. Set Auth to OAuth

Available Tools

  • list_tests – Lists all tests in the active application with IDs, suites, and descriptions
  • get_test_details – Returns the full definition of a test: steps, environments, viewports, and scenario rows
  • run_test – Runs a single test by test_id and env_id. Optionally specify browser, viewport, or scenario row
  • run_tests – Runs multiple tests in one collection run, grouped by shared configuration
  • get_test_run_overview – Start here. Summarizes a run’s status, step results, warnings, and failures
  • get_test_run_details – Deep dive into steps, sub-steps, configs, and artifacts. Use after the overview
  • get_test_runs – Lists the last 50 runs for a given test
  • get_test_run_console_logs – Browser console output and JavaScript errors (last 100 entries)
  • get_test_run_network_logs – HTTP requests and responses captured during the run (last 5 entries)
  • get_test_run_screenshots – Screenshots from the test execution for visual inspection
  • list_applications – Lists all applications on your account and shows the active one
  • switch_application – Switches the active application for subsequent calls
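
Every tool above is invoked through MCP’s standard JSON-RPC 2.0 tools/call method; your MCP client builds these requests for you, so this is purely illustrative. A minimal sketch for the simplest tool, list_tests (the id is an arbitrary request counter):

  {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": { "name": "list_tests", "arguments": {} }
  }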

Common Workflows

Discover and Run Tests

Ask your assistant: “What tests do I have for checkout?” The agent calls list_tests, scans names and descriptions, and returns matching tests. You can then run specific tests directly from the conversation.

[Image: Natural language test discovery through your IDE]

Execute with Specific Configurations

Ask: “Run the checkout test on staging with mobile viewport.” The agent calls run_test with the correct environment and viewport, then monitors the job status.
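
In wire terms, that prompt maps to a single run_test call with the same tools/call shape shown above. The parameter names test_id and env_id come from the tool list; the values and the exact viewport argument format here are hypothetical:

  {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "run_test",
      "arguments": {
        "test_id": "checkout-test-id",
        "env_id": "staging-env-id",
        "viewport": "mobile"
      }
    }
  }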

Debug Failed Runs

Ask: “What went wrong in this run?” The agent calls get_test_run_overview to identify failing steps, then drills into get_test_run_details, get_test_run_console_logs, or get_test_run_screenshots to surface the root cause.

[Image: Debugging workflow from overview to console logs]

Best Practices

  • Start with the overview: When analyzing results, always use get_test_run_overview first. This provides context before diving into detailed logs or screenshots.
  • Be specific in your prompts: “Run the checkout test on production” works better than “run some tests.” Include environment names, viewport preferences, or scenario details when relevant.
  • Model quality matters: More capable models (Claude Opus, GPT-4) are better at choosing the right tools and interpreting results. Smaller models may need more explicit guidance.
  • Use multi-test runs strategically: When testing the same flow across environments, use run_tests to group configurations efficiently rather than running tests individually.