The Problem You Are Solving
Analytics and tracking implementations break silently. Unlike a broken button or a crashed page, a missing tracking event produces no visible error. The user experience looks fine. But behind the scenes:

- Events stop firing after a code deploy
- Required fields disappear from payloads
- Data types change (string becomes number, casing shifts)
- Third-party pixels and affiliate tags get dropped
- UTM parameters are stripped during redirects
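Several of these failures are mechanical enough to check in code. As one illustration, here is a quick test that UTM parameters and affiliate tokens survive a redirect, sketched with Python's standard library (the URLs and parameter names are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical landing URL and post-redirect URL, for illustration only.
before = "https://example.com/?utm_source=newsletter&utm_campaign=spring&aff_id=A123"
after = "https://example.com/checkout?utm_source=newsletter"

def missing_params(src_url, dst_url, keys):
    """Return the tracked parameters present in src_url but absent from dst_url."""
    src = parse_qs(urlparse(src_url).query)
    dst = parse_qs(urlparse(dst_url).query)
    return [k for k in keys if k in src and k not in dst]

tracked = ["utm_source", "utm_campaign", "aff_id"]
print(missing_params(before, after, tracked))  # → ['utm_campaign', 'aff_id']
```

The same comparison works for any before/after URL pair in a funnel.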
How Teams Typically Catch These Issues Today
Most teams rely on manual QA: an analyst opens DevTools, clicks through the flow, and inspects network requests by hand. This process has fundamental limitations:

- It does not scale. A site with 30+ tracked events across multiple regions, brands, and browsers creates thousands of combinations to check.
- It is error-prone. Humans miss subtle changes — a field that switched from lowercase to uppercase, a value that went from `"12.99"` to `12.99`.
- It is reactive. Manual QA happens after deploys. Issues often reach production before anyone checks.
- It consumes analyst time. Every hour spent in DevTools is an hour not spent on actual data analysis and strategy.
How Spur Automates This
Spur replaces the manual DevTools process with an automated browser agent. Here is how it works at a high level.

What the Agent Actually Does
Opens a real browser
Spur launches a real Chrome or Safari browser — not a simulator. It behaves exactly like a user visiting your site, including cookies, consent banners, and third-party scripts.
Performs the user flow
The agent navigates to the right page, clicks through the flow (view a product, add to cart, complete checkout, etc.), and triggers the same events a real user would.
Captures all network traffic
While the agent navigates, every HTTP request and response is captured in real time — including analytics events, tracking pixels, API calls, and third-party scripts. You can inspect this data yourself in the Network & Console Monitoring panel after every run.
Validates against your expectations
You tell Spur what to check using plain language. For example: “Confirm the product_detail event contains product_id, product_name, and price as a number.” The agent searches the captured network data, finds the matching request, and validates field by field. See how the agent breaks down and verifies assertions to understand exactly what happens behind the scenes.
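The find-and-validate step can be pictured with a small sketch. This is not Spur's implementation, just the idea, using a hand-built list of captured requests and hypothetical field names:

```python
# A hand-built stand-in for captured network traffic; values are made up.
captured = [
    {"url": "https://cdn.example.com/app.js", "payload": {}},
    {"url": "https://analytics.example.com/collect",
     "payload": {"event": "product_detail", "product_id": "SKU-1",
                 "product_name": "Mug", "price": 12.99}},
]

def find_event(requests, event_name):
    """Search the captured traffic for the request carrying the named event."""
    for req in requests:
        if req["payload"].get("event") == event_name:
            return req
    return None

req = find_event(captured, "product_detail")
assert req is not None, "event never fired"

# Validate field by field: presence, then type.
payload = req["payload"]
for field in ("product_id", "product_name"):
    assert field in payload, f"missing field: {field}"
assert isinstance(payload["price"], (int, float)), "price must be a number"
print("product_detail validated")
```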
Reports results with evidence
Each validation produces a clear pass or fail, along with the actual data it found — including the endpoint, request method, status code, and payload snippet. You can click through to the full network log for any examined request. No digging through DevTools required.
Key Concepts
Events and Payloads
An event is a network request your site sends to an analytics platform (like Adobe Analytics, Google Analytics, Tealium, Segment, etc.) when something happens — a page loads, a user clicks a button, an order completes. Each event carries a payload: a bundle of data fields describing what happened. For example, a product view event might include a product ID, a product name, and a price. When you validate an event, you are answering three questions:

- Did the event fire at all? (The most common failure — roughly 50% of issues)
- Are all required fields present? (About 40% of issues)
- Are the values in the correct format? (About 10% — wrong types, casing, etc.)
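A minimal sketch of checking a payload against those three categories, assuming the captured payload is already available as a Python dict (field names follow the product_detail example; this is illustrative, not Spur's code):

```python
def check_event(payload, required_types):
    """Classify failures into the three common categories.
    `payload` is the captured event data (None if the event never fired);
    `required_types` maps field name -> expected Python type(s)."""
    if payload is None:
        return ["event did not fire"]
    problems = []
    for field, expected in required_types.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            problems.append(f"wrong type for {field}: got {type(payload[field]).__name__}")
    return problems

spec = {"product_id": str, "product_name": str, "price": (int, float)}
# A price captured as the string "12.99" instead of the number 12.99:
print(check_event({"product_id": "SKU-1", "product_name": "Mug", "price": "12.99"}, spec))
# → ['wrong type for price: got str']
```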
Log Steps
In Spur, you validate events using Log steps. A Log step is a plain-language instruction that tells the agent what to check in the captured network data. You write it like you would explain it to a colleague: "Confirm the product_detail event contains product_id, product_name, and price as a number."

Network & Console Monitoring
See how Spur captures network traffic and console output, how the agent verifies your assertions, and how to inspect raw logs in test results.
Log Assertions — Full Reference
Technical reference for writing Log steps, adding them to tests, and troubleshooting.
Thinking Through Your Validation Strategy
Before building tests, take a step back and think about what matters most. Not every event needs the same level of scrutiny.

Prioritize by Business Impact
P0 — Must validate every deploy:

- Order confirmation / purchase events (revenue attribution)
- Affiliate and commission tracking (direct revenue impact)
- Consent and privacy events (legal compliance)
- Core conversion events (signup, subscription)

P1 — Validate regularly:

- Product detail page views (merchandising analytics)
- Search and navigation events (UX analytics)
- Campaign attribution parameters (marketing ROI)

P2 — Validate as capacity allows:

- Page scroll and engagement events
- Feature usage tracking
- A/B test instrumentation
Map Your Validation Matrix
For each priority event, consider the dimensions you need to cover:

| Dimension | Example |
|---|---|
| Regions | US, UK, EU, APAC |
| Brands / Properties | Main brand, sub-brands |
| Browsers | Chrome, Safari, mobile |
| Environments | Staging, production |
| User states | Logged in, guest, returning |
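The matrix explodes quickly, which is the scaling problem automation solves. A quick count using the dimensions above (the exact counts are illustrative):

```python
from itertools import product

# Dimensions from the table above; the value lists are illustrative.
dimensions = {
    "region": ["US", "UK", "EU", "APAC"],
    "browser": ["Chrome", "Safari", "mobile"],
    "environment": ["staging", "production"],
    "user_state": ["logged in", "guest", "returning"],
}
events = 30  # a site with 30+ tracked events

combos = list(product(*dimensions.values()))
print(len(combos))           # 4 * 3 * 2 * 3 = 72 cells per event
print(len(combos) * events)  # 2160 checks per full manual pass
```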
Building Your First Validation Test
Here is how to approach it, step by step.

Step 1: Pick your highest-priority event
Start with the one event that would cause the most damage if it broke. For most teams, this is either:

- Purchase / order confirmation — revenue and attribution
- Main page view event — highest volume, most dependencies
Step 2: Document what “correct” looks like
Write down (or gather from your tech spec):

- The event name or endpoint
- Every required field
- The expected data type for each field (string, number, boolean)
- Any format requirements (e.g., currency as decimal, IDs as strings)
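One way to make that documentation machine-checkable is to write it as data rather than prose. A hypothetical spec for the product_detail event (field names and formats are assumptions based on the earlier example):

```python
# Hypothetical "what correct looks like" for the product_detail event,
# captured as data so a test can consume it directly.
PRODUCT_DETAIL_SPEC = {
    "event": "product_detail",
    "required_fields": ["product_id", "product_name", "price"],
    "types": {"product_id": str, "product_name": str, "price": float},
    "formats": {"price": "decimal currency", "product_id": "string ID"},
}

# Check a sample captured payload for required-field coverage.
sample = {"product_id": "SKU-1", "product_name": "Mug", "price": 12.99}
missing = [f for f in PRODUCT_DETAIL_SPEC["required_fields"] if f not in sample]
print(missing)  # → []
```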
Step 3: Create the test in Spur
Build a test that:

- Navigates to the page or completes the user flow
- Uses Log steps to validate the event payload
Step 4: Run and iterate
Run the test against your staging environment first. Review the results using the Network & Console Monitoring panel:

- Did the agent find the right event? Click the examined request badge to verify.
- Are there false positives (flagging things that are actually fine)?
- Are there fields you forgot to include?
Step 5: Schedule and expand
Once validated, schedule the test to run:

- After every deploy — catch regressions immediately
- Daily — catch issues from third-party script updates or infrastructure changes
Common Validation Patterns
Analytics Event Validation
The most common use case. Confirm that tracking events fire with the correct payload during key user flows.

Affiliate and UTM Parameter Validation
UTM parameters and affiliate tokens in URLs drive campaign attribution and commission payouts. If they are dropped or malformed at any point in the funnel, revenue goes untracked.

Data Layer Validation
Many analytics implementations use a data layer (like Tealium’s `utag.data` or Google’s `dataLayer`) that is accessible in the browser. Spur can capture and validate these attributes as part of the same test flow.
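Conceptually, a data layer check is a dictionary comparison. A sketch assuming a snapshot of the data layer has already been captured as a dict (keys and values here are made up; in practice the snapshot comes from the live browser):

```python
# A captured snapshot of the page's data layer, represented as a plain dict.
data_layer = {
    "page_name": "product detail",
    "product_id": "SKU-1",
    "currency": "USD",
}

# Expected attributes for this page; report (expected, actual) on mismatch.
expected = {"page_name": "product detail", "currency": "USD"}
mismatches = {k: (v, data_layer.get(k))
              for k, v in expected.items() if data_layer.get(k) != v}
print(mismatches)  # → {}
```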
Cross-Platform Consistency
Run the same validation across Chrome, Safari, and mobile to ensure events fire consistently across all platforms.

What Changes With Automation
| | Manual | Automated with Spur |
|---|---|---|
| Time per validation cycle | 2–4 hours | 5–10 minutes |
| Coverage | Spot-checking (~30%) | 100% — all fields, every run |
| Multi-region | Each tested separately | All regions in parallel |
| Multi-browser | Manual switching | Chrome, Safari, mobile in parallel |
| Error detection | Visual inspection — easy to miss subtle issues | AI flags exact discrepancies with expected vs. actual |
| Documentation | Manual screenshots | Structured reports with network traces |
| Frequency | Ad-hoc after releases | Scheduled daily + on-demand |
| Detection speed | Days to weeks (or never) | Within minutes of a deploy |
Getting Started
Now that you understand the fundamentals of how analytics validation works with Spur, dive into the feature documentation to see exactly how to use it:

Network & Console Monitoring
See how validation results look in practice — assertion breakdowns, evidence, and raw network/console logs.
Log Assertions Reference
Technical reference for adding Log steps to your tests.
Create Your First Test
Step-by-step guide to building and running your first Spur test.
Running Tests
Execute and monitor your validation tests across environments.
CI/CD Integration
Trigger validation tests automatically on every deploy.
Scheduling
Set up recurring validation runs on a daily or weekly cadence.
