If you work in data science, analytics, or marketing technology, you know the pain: every release cycle means hours of manually checking that tracking events fire correctly, payloads contain the right fields, and downstream data pipelines receive clean inputs. When something breaks, it often goes undetected for days or weeks — leading to bad dashboards, broken attribution, and lost revenue. Spur automates this entire validation process. This guide explains how it works, why it matters, and how to get started — even if you have never automated anything before.

The Problem You Are Solving

Analytics and tracking implementations break silently. Unlike a broken button or a crashed page, a missing tracking event produces no visible error. The user experience looks fine. But behind the scenes:
  • Events stop firing after a code deploy
  • Required fields disappear from payloads
  • Data types change (string becomes number, casing shifts)
  • Third-party pixels and affiliate tags get dropped
  • UTM parameters are stripped during redirects
These failures are invisible to end users — but they corrupt your data, break attribution models, and undermine every decision made from that data.

How Teams Typically Catch These Issues Today

Most teams rely on manual QA: an analyst opens browser DevTools, clicks through each flow, and visually inspects the network requests one by one. This process has fundamental limitations:
  • It does not scale. A site with 30+ tracked events across multiple regions, brands, and browsers creates thousands of combinations to check.
  • It is error-prone. Humans miss subtle changes — a field that switched from lowercase to uppercase, a value that went from "12.99" to 12.99.
  • It is reactive. Manual QA happens after deploys. Issues often reach production before anyone checks.
  • It consumes analyst time. Every hour spent in DevTools is an hour not spent on actual data analysis and strategy.

How Spur Automates This

Spur replaces the manual DevTools process with an automated browser agent. Here is how it works at a high level:

What the Agent Actually Does

1. Opens a real browser

Spur launches a real Chrome or Safari browser — not a simulator. It behaves exactly like a user visiting your site, including cookies, consent banners, and third-party scripts.
2. Performs the user flow

The agent navigates to the right page, clicks through the flow (view a product, add to cart, complete checkout, etc.), and triggers the same events a real user would.
3. Captures all network traffic

While the agent navigates, every HTTP request and response is captured in real time — including analytics events, tracking pixels, API calls, and third-party scripts. You can inspect this data yourself in the Network & Console Monitoring panel after every run.
4. Validates against your expectations

You tell Spur what to check using plain language. For example: “Confirm the product_detail event contains product_id, product_name, and price as a number.” The agent searches the captured network data, finds the matching request, and validates field by field. See how the agent breaks down and verifies assertions to understand exactly what happens behind the scenes.
5. Reports results with evidence

Each validation produces a clear pass or fail, along with the actual data it found — including the endpoint, request method, status code, and payload snippet. You can click through to the full network log for any examined request. No digging through DevTools required.
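Conceptually, steps 3 through 5 boil down to searching a captured request log for the matching event and reporting evidence for the match. A minimal sketch in Python, assuming a list of already-captured requests (the record shapes and URLs here are illustrative, not Spur's internal format):

```python
# Illustrative sketch of steps 3-5: search captured traffic for an event
# and report pass/fail with evidence. The request records are mocked --
# Spur captures these from a real browser session.

captured = [
    {"url": "https://analytics.example.com/collect", "method": "POST",
     "status": 200,
     "payload": {"event_name": "product_detail",
                 "product_id": "ABC-12345", "price": 68.00}},
    {"url": "https://cdn.example.com/app.js", "method": "GET",
     "status": 200, "payload": None},
]

def find_event(requests, event_name):
    """Return the first captured request whose payload matches the event name."""
    for req in requests:
        payload = req.get("payload") or {}
        if payload.get("event_name") == event_name:
            return req
    return None

match = find_event(captured, "product_detail")
if match:
    # Report with evidence: endpoint, method, status code, payload snippet.
    print(f"PASS: {match['method']} {match['url']} -> {match['status']}")
    print(f"  payload: {match['payload']}")
else:
    print("FAIL: product_detail event not found in captured traffic")
```

The useful part of this pattern is the evidence: a pass or fail alone tells you little, but the matched endpoint and payload snippet let you confirm the right request was examined.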

Key Concepts

Events and Payloads

An event is a network request your site sends to an analytics platform (like Adobe Analytics, Google Analytics, Tealium, Segment, etc.) when something happens — a page loads, a user clicks a button, an order completes. Each event carries a payload: a bundle of data fields describing what happened. For example, a product view event might include:
{
  "event_name": "product_detail",
  "product_id": "ABC-12345",
  "product_name": "Classic Oxford Shirt",
  "price": 68.00,
  "currency": "USD",
  "category": "Men > Shirts",
  "brand": "Main Brand",
  "color": "Blue",
  "size": "M",
  "in_stock": true
}
When Spur validates an event, it checks:
  • Did the event fire at all? (The most common failure — roughly 50% of issues)
  • Are all required fields present? (About 40% of issues)
  • Are the values in the correct format? (About 10% — wrong types, casing, etc.)
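Those three checks can be expressed as a tiny validator. The sketch below assumes a parsed payload dict and a simple expectation spec mapping field names to expected types; it illustrates the three failure classes, not Spur's actual implementation:

```python
def validate_event(payload, required):
    """Classify the three failure modes described above.

    payload  -- the captured event payload (dict), or None if the
                event never fired
    required -- mapping of field name -> expected type, e.g.
                {"product_id": str, "price": float}
    """
    if payload is None:
        return ["event did not fire"]                       # ~50% of issues
    errors = []
    for field, expected_type in required.items():
        if field not in payload:
            errors.append(f"missing field: {field}")        # ~40% of issues
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}: "
                          f"got {type(payload[field]).__name__}")  # ~10%
    return errors

spec = {"product_id": str, "product_name": str, "price": float}
good = {"product_id": "ABC-12345", "product_name": "Classic Oxford Shirt",
        "price": 68.00}
bad  = {"product_id": "ABC-12345", "price": "68.00"}  # price as a string

print(validate_event(good, spec))  # []
print(validate_event(bad, spec))
```

Note how the `bad` payload reproduces two of the silent failures from earlier in this guide: a dropped field and a numeric value that quietly became a string.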

Log Steps

In Spur, you validate events using Log steps. A Log step is a plain-language instruction that tells the agent what to check in the captured network data. You write it like you would explain it to a colleague:
Log Confirm the product_detail event fired and contains product_id, 
    product_name, and price as a number
The agent handles the rest — finding the right request, parsing the payload, and checking each field.

Thinking Through Your Validation Strategy

Before building tests, take a step back and think about what matters most. Not every event needs the same level of scrutiny.

Prioritize by Business Impact

P0 — Must validate every deploy:
  • Order confirmation / purchase events (revenue attribution)
  • Affiliate and commission tracking (direct revenue impact)
  • Consent and privacy events (legal compliance)
  • Core conversion events (signup, subscription)
P1 — Validate weekly or after relevant changes:
  • Product detail page views (merchandising analytics)
  • Search and navigation events (UX analytics)
  • Campaign attribution parameters (marketing ROI)
P2 — Validate monthly or on major releases:
  • Page scroll and engagement events
  • Feature usage tracking
  • A/B test instrumentation

Map Your Validation Matrix

For each priority event, consider the dimensions you need to cover:
Dimension           | Example
Regions             | US, UK, EU, APAC
Brands / Properties | Main brand, sub-brands
Browsers            | Chrome, Safari, mobile
Environments        | Staging, production
User states         | Logged in, guest, returning
Spur runs all of these combinations in parallel — what takes hours manually takes minutes automated.
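The size of that matrix is simply the product of the dimension sizes, which is why manual coverage collapses so quickly. A quick illustration, using example values from the table above:

```python
from itertools import product

# Example dimension values; your own matrix will differ.
dimensions = {
    "region":  ["US", "UK", "EU", "APAC"],
    "browser": ["Chrome", "Safari", "mobile"],
    "env":     ["staging", "production"],
    "user":    ["logged in", "guest", "returning"],
}

combos = list(product(*dimensions.values()))
# 4 * 3 * 2 * 3 = 72 combinations for a SINGLE event;
# 30 tracked events would mean 2,160 validation runs per cycle.
print(len(combos))       # 72
print(len(combos) * 30)  # 2160
```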

Building Your First Validation Test

Here is how to approach it, step by step.

Step 1: Pick your highest-priority event

Start with the one event that would cause the most damage if it broke. For most teams, this is either:
  • Purchase / order confirmation — revenue and attribution
  • Main page view event — highest volume, most dependencies

Step 2: Document what “correct” looks like

Write down (or gather from your tech spec):
  • The event name or endpoint
  • Every required field
  • The expected data type for each field (string, number, boolean)
  • Any format requirements (e.g., currency as decimal, IDs as strings)

Step 3: Create the test in Spur

Build a test that:
  1. Navigates to the page or completes the user flow
  2. Uses Log steps to validate the event payload
Example test structure:
1. Navigate to a product detail page
2. Verify the product page loaded (UI check)
3. Log Confirm the product_detail event fired
4. Log Confirm product_detail contains product_id as a non-empty string
5. Log Confirm product_detail contains price as a number greater than 0
6. Log Confirm product_detail contains product_name, category, and brand
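Steps 3 through 6 of that structure map onto concrete checks against the captured payload. A hedged sketch of what each Log step effectively verifies; the predicates and sample payload are illustrative, not Spur syntax:

```python
# A sample captured payload stands in for the real product_detail event.
payload = {
    "event_name": "product_detail",
    "product_id": "ABC-12345",
    "product_name": "Classic Oxford Shirt",
    "price": 68.00,
    "category": "Men > Shirts",
    "brand": "Main Brand",
}

# One predicate per Log step in the example test above.
checks = {
    "event fired":
        payload.get("event_name") == "product_detail",
    "product_id is a non-empty string":
        isinstance(payload.get("product_id"), str) and payload["product_id"] != "",
    "price is a number greater than 0":
        isinstance(payload.get("price"), (int, float)) and payload["price"] > 0,
    "descriptive fields present":
        all(f in payload for f in ("product_name", "category", "brand")),
}

for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```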

Step 4: Run and iterate

Run the test against your staging environment first. Review the results using the Network & Console Monitoring panel:
  • Did the agent find the right event? Click the examined request badge to verify.
  • Are there false positives (flagging things that are actually fine)?
  • Are there fields you forgot to include?
Tune your Log step assertions until the test reliably catches real issues and ignores noise. See best practices for writing effective assertions for tips.

Step 5: Schedule and expand

Once validated, schedule the test to run:
  • After every deploy — catch regressions immediately
  • Daily — catch issues from third-party script updates or infrastructure changes
Then repeat for your next priority event.

Common Validation Patterns

Analytics Event Validation

The most common use case. Confirm that tracking events fire with the correct payload during key user flows.
Log Confirm the purchase event contains order_id, revenue as a number, 
    and items as an array with at least one entry

Affiliate and UTM Parameter Validation

UTM parameters and affiliate tokens in URLs drive campaign attribution and commission payouts. If they are dropped or malformed at any point in the funnel, revenue goes untracked.
Log Confirm the request URL to the affiliate endpoint contains 
    utm_source, utm_medium, and utm_campaign parameters
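A check like this amounts to parsing the query string of the captured request URL. A minimal sketch with Python's standard library (the affiliate URL is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical affiliate request URL captured during the funnel.
url = ("https://affiliate.example.com/track?utm_source=newsletter"
       "&utm_medium=email&utm_campaign=spring_sale&token=abc123")

params = parse_qs(urlparse(url).query)
required = ["utm_source", "utm_medium", "utm_campaign"]
missing = [p for p in required if not params.get(p)]

if missing:
    print(f"FAIL: missing parameters: {missing}")
else:
    print("PASS: all UTM parameters present")
```

Because `parse_qs` drops parameters with empty values by default, this check also catches a parameter that survives the redirect but arrives blank.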

Data Layer Validation

Many analytics implementations use a data layer (like Tealium’s utag.data or Google’s dataLayer) that is accessible in the browser. Spur can capture and validate these attributes as part of the same test flow.
Log Confirm the data layer contains user_segment and page_type 
    with non-empty values

Cross-Platform Consistency

Run the same validation across Chrome, Safari, and mobile to ensure events fire consistently across all platforms.

What Changes With Automation

                          | Manual                                          | Automated with Spur
Time per validation cycle | 2–4 hours                                       | 5–10 minutes
Coverage                  | Spot-checking (~30%)                            | 100% — all fields, every run
Multi-region              | Each tested separately                          | All regions in parallel
Multi-browser             | Manual switching                                | Chrome, Safari, mobile in parallel
Error detection           | Visual inspection — easy to miss subtle issues  | AI flags exact discrepancies with expected vs. actual
Documentation             | Manual screenshots                              | Structured reports with network traces
Frequency                 | Ad-hoc after releases                           | Scheduled daily + on-demand
Detection speed           | Days to weeks (or never)                        | Within minutes of a deploy

Getting Started

Now that you understand the fundamentals of how analytics validation works with Spur, dive into the feature documentation to see exactly how to use it.