Manual test execution gives you full control over when and how your tests run. You can select specific environments, browsers, and scenarios for each test run.

[Screenshot: Manual test execution interface]

Running a Test

1. Navigate to your test suite

Go to the test suite containing the test you want to run.
2. Select the test

Click on the test name or use the Run button next to it.
3. Configure the run

Choose your run settings (a combined configuration sketch follows these options):

Environments
  • Select one or multiple environments to test
  • Tests will run in parallel across selected environments
  • See Environments for configuration details
Viewport Configuration (optional)
  • Choose between Desktop and Mobile presets, or provide custom dimensions
  • Standard desktop sizes (1920x1080, 1440x900)
  • Mobile device sizes (375x667, 414x896)
Scenarios (for tests with data tables)
  • Select all scenarios or specific ones
  • Different scenarios can target different environments
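
Taken together, these settings form a single run configuration. The sketch below models that configuration as data; the interface and field names are illustrative assumptions for this page, not the platform's actual API.

```typescript
// Illustrative shape of a manual run configuration.
// All names here are assumptions for the sketch, not platform APIs.
interface RunConfig {
  testId: string;
  environments: string[];                       // one or more environments; runs execute in parallel
  viewport?: { width: number; height: number }; // optional override of the environment default
  scenarios?: string[];                         // omit to run every data-table row
}

const config: RunConfig = {
  testId: "checkout-flow",
  environments: ["staging", "production"],
  viewport: { width: 1440, height: 900 },
  scenarios: ["Credit Card", "PayPal"],
};
```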
4. Review run summary

Before starting, review:
  • Total number of test runs
  • Environments selected
  • Viewport configurations
  • Estimated execution time
5. Start the run

Click “Start Run” to begin execution.
6. Monitor progress

Track real-time progress:
  • Test execution status
  • Video recording of test actions
  • Step-by-step logs
  • Any errors or failures
7. View results

When complete, review:
  • Pass/fail status per environment
  • Detailed execution logs
  • Screenshots and videos
  • Performance metrics

[Screenshot: Run configuration modal]

Multi-Environment Testing

Running a test across multiple environments:
  1. Select Multiple Environments - Choose all environments you want to test (e.g., Dev, Staging, Production)
  2. Parallel Execution - Tests run simultaneously across all environments
  3. Separate Results - Each environment gets its own result set
  4. Comparison View - Easily compare results across environments
Multi-environment testing is perfect for validating that a feature works consistently across all your deployments before a release.
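
As a rough model of what “parallel execution with separate results” means, the hedged sketch below fans one test out across environments with `Promise.all`; `runTestIn` is a hypothetical stand-in for the platform's runner, not a documented API.

```typescript
// Conceptual sketch only: fan one test out across environments in parallel.
// `runTestIn` is a hypothetical placeholder, not a documented API.
type RunResult = { environment: string; passed: boolean };

async function runTestIn(environment: string): Promise<RunResult> {
  // ...execute the test against this environment (placeholder logic)...
  return { environment, passed: true };
}

// Each environment produces its own result set, concurrently.
async function runAcrossEnvironments(envs: string[]): Promise<RunResult[]> {
  return Promise.all(envs.map(runTestIn));
}

runAcrossEnvironments(["dev", "staging", "production"]).then((results) => {
  for (const r of results) {
    console.log(`${r.environment}: ${r.passed ? "pass" : "fail"}`);
  }
});
```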

Viewport Configuration

Configure the screen dimensions for your test runs to ensure your application works across different device sizes.

Desktop Viewports

Standard desktop screen sizes:
  • 1920x1080 - Full HD display
  • 1440x900 - Common laptop size
  • 1366x768 - Standard laptop display

Mobile Viewports

Mobile device screen sizes:
  • 375x667 - iPhone SE, iPhone 8
  • 414x896 - iPhone 11, iPhone XR
  • 360x640 - Common Android size

Custom Viewports

Set specific screen dimensions for your requirements:
  • Test responsive breakpoints
  • Verify layouts at specific sizes
  • Simulate unique device configurations
Default viewport settings are configured per environment. You can override these for individual test runs.
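
If it helps to see the presets as data, here is one way to encode the sizes listed above, with a custom viewport alongside them. The preset keys are invented for this sketch, not platform identifiers.

```typescript
// The viewport sizes listed above, encoded as data.
// Preset names are invented for this sketch, not platform identifiers.
type Viewport = { width: number; height: number };

const VIEWPORT_PRESETS: Record<string, Viewport> = {
  "desktop-fhd": { width: 1920, height: 1080 }, // Full HD display
  "laptop":      { width: 1440, height: 900 },  // common laptop size
  "laptop-sd":   { width: 1366, height: 768 },  // standard laptop display
  "iphone-se":   { width: 375,  height: 667 },  // iPhone SE, iPhone 8
  "iphone-11":   { width: 414,  height: 896 },  // iPhone 11, iPhone XR
  "android":     { width: 360,  height: 640 },  // common Android size
};

// A custom viewport is just explicit dimensions, e.g. a tablet breakpoint:
const tabletBreakpoint: Viewport = { width: 768, height: 1024 };
```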

Scenario Selection

For tests using data tables:
  • Run All Scenarios - Execute every row in the table
  • Select Specific Scenarios - Choose individual rows to test
  • Environment-Specific Scenarios - Run different scenarios in different environments
Example:
Test: Checkout Flow
Scenarios:
  - Credit Card (All environments)
  - PayPal (Staging + Production)
  - Test Payment (Staging only)
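
Expressed as data, the example above maps each scenario to the environments it should run in. The structure and helper below are illustrative only.

```typescript
// The Checkout Flow example as a scenario-to-environment mapping (illustrative).
const checkoutScenarios: Record<string, string[]> = {
  "Credit Card":  ["dev", "staging", "production"], // all environments
  "PayPal":       ["staging", "production"],
  "Test Payment": ["staging"],                      // staging only
};

// Which scenarios would run in a given environment?
function scenariosFor(env: string): string[] {
  return Object.entries(checkoutScenarios)
    .filter(([, envs]) => envs.includes(env))
    .map(([name]) => name);
}

console.log(scenariosFor("production")); // ["Credit Card", "PayPal"]
```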

Run Summary

Before execution starts, review the run summary:
Metric          Description
Total Runs      Number of test executions (tests × environments × viewports)
Environments    List of selected environments
Viewports       Viewport configurations
Scenarios       Number of scenarios per test
Estimated Time  Expected execution duration
Running tests across many environments and viewports increases execution time and resource usage. Start with a small selection and expand as needed.
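
That warning follows from the Total Runs formula being multiplicative. A two-line illustration:

```typescript
// Total Runs grows multiplicatively, which is why broad selections get expensive.
function totalRuns(tests: number, environments: number, viewports: number): number {
  return tests * environments * viewports;
}

console.log(totalRuns(1, 3, 2)); // 1 test × 3 environments × 2 viewports = 6 runs
console.log(totalRuns(5, 3, 2)); // 30 runs from the same environment/viewport selection
```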

Monitoring Execution

During test execution, you can:
  • Watch Live Video - See tests run in real-time
  • View Logs - Check step-by-step execution details
  • Track Progress - Monitor completion percentage
  • Identify Issues - Spot failures as they happen
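
For intuition, monitoring amounts to repeatedly sampling the run's state until it finishes. The loop below is a sketch of that idea; `fetchRunStatus` and the `RunStatus` fields are invented for illustration — in practice the platform surfaces all of this live in the UI.

```typescript
// Sketch of progress monitoring as a polling loop. `fetchRunStatus` and the
// RunStatus fields are assumptions for illustration only.
interface RunStatus {
  completedSteps: number;
  totalSteps: number;
  failures: string[];
  done: boolean;
}

let tick = 0;
async function fetchRunStatus(runId: string): Promise<RunStatus> {
  tick += 1; // stand-in for querying the real run state
  return { completedSteps: tick, totalSteps: 3, failures: [], done: tick >= 3 };
}

async function monitor(runId: string): Promise<void> {
  let status: RunStatus;
  do {
    status = await fetchRunStatus(runId);
    const pct = Math.round((status.completedSteps / status.totalSteps) * 100);
    console.log(`${pct}% complete, ${status.failures.length} failure(s)`);
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // poll every 5s
  } while (!status.done);
}

monitor("example-run-id");
```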

[Screenshot: Real-time test execution monitoring]

Results and Debugging

After execution completes:

Pass/Fail Status

  • Overall test result
  • Per-environment breakdown
  • Per-viewport breakdown
  • Per-scenario results

Execution Artifacts

  • Videos - Full recording of test execution
  • Screenshots - Captured at each step and on failures
  • Logs - Detailed step-by-step execution log
  • Network Activity - HTTP requests and responses
  • Console Output - Browser console logs

Debugging Failed Tests

When a test fails:
  1. Check the error message and screenshot
  2. Watch the video to see what happened
  3. Review network logs for API issues
  4. Compare results across environments
  5. Check environment-specific variables
If a test passes in one environment but fails in another, the issue is likely environment-specific configuration or data. Check your environment variables and test data.
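
Step 4 of the checklist above can be mechanical: diff the per-environment pass/fail map and start with the outliers. A minimal illustrative helper:

```typescript
// Find environments whose result diverges from the rest (illustrative).
type EnvResults = Record<string, boolean>; // environment -> passed?

function failingOutliers(results: EnvResults): string[] {
  const outcomes = new Set(Object.values(results));
  if (outcomes.size <= 1) return []; // consistent across all environments
  return Object.entries(results)
    .filter(([, passed]) => !passed)
    .map(([env]) => env);
}

console.log(failingOutliers({ dev: true, staging: true, production: false }));
// ["production"] — start with that environment's variables and test data
```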

Best Practices

  1. Start Small - Test in a single environment first, then expand
  2. Use Appropriate Viewports - Test the screen sizes relevant to your users
  3. Select Relevant Scenarios - Run only the scenarios you need to verify
  4. Monitor First Runs - Watch tests execute to catch issues early
  5. Compare Environments - Use multi-environment runs to spot inconsistencies
  6. Check Prerequisites - Ensure environment variables and test data are configured
Manual runs are great for development and debugging. For regular testing, consider using Scheduled Tests or Test Plans.

Quick Actions

Common manual run patterns:

Quick Validation

  • Single environment (usually Dev)
  • Default viewport
  • All scenarios
  • Perfect for rapid iteration

Pre-Deployment Check

  • Staging + Production
  • Desktop and mobile viewports
  • Critical scenarios
  • Validate before release

Bug Investigation

  • Specific environment
  • Single viewport
  • Selected scenario
  • Reproduce and debug issue

Cross-Environment Regression

  • All environments
  • Key viewports
  • All scenarios
  • Ensure consistency
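
These patterns can also be thought of as saved presets. Below is one hypothetical encoding, reusing the configuration shape sketched earlier on this page; all names and values are examples.

```typescript
// The four patterns above as hypothetical presets (example values only).
const runPresets = {
  quickValidation: {
    environments: ["dev"],                // single environment, default viewport
    scenarios: "all",
  },
  preDeploymentCheck: {
    environments: ["staging", "production"],
    viewports: [{ width: 1920, height: 1080 }, { width: 375, height: 667 }],
    scenarios: ["Credit Card", "PayPal"], // critical scenarios only
  },
  bugInvestigation: {
    environments: ["production"],         // the environment showing the bug
    viewports: [{ width: 414, height: 896 }],
    scenarios: ["PayPal"],                // the scenario that reproduces it
  },
  crossEnvironmentRegression: {
    environments: ["dev", "staging", "production"],
    viewports: [{ width: 1920, height: 1080 }, { width: 375, height: 667 }],
    scenarios: "all",
  },
};
```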

Troubleshooting

Test won't start

Check:
  • All required environment variables are set
  • The selected environment is properly configured
  • The test has no blocking dependencies

Test passes in one environment but fails in another

Likely causes:
  • Environment-specific data
  • Different application versions
  • Timing or performance differences
Solution: Review environment configurations and test data.

An environment can't be selected

Likely causes:
  • The test is restricted to specific environments
  • Required environment variables are missing
  • The environment is deactivated
Solution: Check the test configuration and environment settings.
Test results are typically available within a few minutes of execution. Large test suites or multi-environment runs may take longer.
