
Manual test execution interface
Running a Test
1. Navigate to your test suite
Go to the test suite containing the test you want to run.
2. Select the test
Click on the test name or use the Run button next to it.
3. Configure the run
Choose your run settings:
Environments
- Select one or multiple environments to test
- Tests will run in parallel across selected environments
- See Environments for configuration details
Browsers
- Desktop browsers: Chrome, Safari, Firefox
- Mobile browsers: Mobile Chrome, Mobile Safari
- Custom viewports and screen sizes
Scenarios
- Select all scenarios or specific ones
- Different scenarios can target different environments
4. Review run summary
Before starting, review:
- Total number of test runs
- Environments selected
- Browser configurations
- Estimated execution time
5. Start the run
Click “Start Run” to begin execution.
6. Monitor progress
Track real-time progress:
- Test execution status
- Video recording of test actions
- Step-by-step logs
- Any errors or failures
7. View results
When complete, review:
- Pass/fail status per environment
- Detailed execution logs
- Screenshots and videos
- Performance metrics

Run configuration modal
Multi-Environment Testing
Running a test across multiple environments:
- Select Multiple Environments - Choose all environments you want to test (e.g., Dev, Staging, Production)
- Parallel Execution - Tests run simultaneously across all environments
- Separate Results - Each environment gets its own result set
- Comparison View - Easily compare results across environments
Multi-environment testing is perfect for validating that a feature works consistently across all your deployments before a release.
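Conceptually, a multi-environment run fans the same test out to every selected environment and collects one result per environment. The sketch below illustrates the idea in TypeScript; `runTest` and the result shape are assumptions for illustration, not the platform's API.

```typescript
// Hypothetical fan-out across environments; `runTest` stands in for
// whatever actually executes one test against one environment.
type Result = { environment: string; passed: boolean };

async function runTest(testId: string, environment: string): Promise<Result> {
  // ...execute the test against `environment`...
  return { environment, passed: true }; // stubbed outcome
}

// Tests run simultaneously; each environment yields its own result set.
async function runAcrossEnvironments(
  testId: string,
  environments: string[],
): Promise<Result[]> {
  return Promise.all(environments.map((env) => runTest(testId, env)));
}

runAcrossEnvironments("checkout-flow", ["Dev", "Staging", "Production"]).then(
  (results) =>
    results.forEach((r) => console.log(r.environment, r.passed ? "pass" : "fail")),
);
```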
Browser Configuration
Desktop Testing
Select from supported desktop browsers:
- Chrome - Most common browser
- Safari - Apple ecosystem
- Firefox - Additional coverage
Mobile Testing
Test mobile experiences:
- Mobile Chrome - Android devices
- Mobile Safari - iOS devices
- Custom viewport sizes for different devices
Custom Viewports
Set specific screen dimensions:
- Standard desktop (1920x1080, 1440x900)
- Tablets (768x1024, 1024x768)
- Mobile devices (375x667, 414x896)
- Custom sizes for specific requirements
Default browser and viewport settings are configured per environment. You can override these for individual test runs.
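For reference, the standard sizes above map onto width × height presets like these (the preset names are illustrative, not platform identifiers):

```typescript
// Illustrative viewport presets matching the sizes listed above.
const viewports: Record<string, { width: number; height: number }> = {
  desktop:         { width: 1920, height: 1080 },
  laptop:          { width: 1440, height: 900 },
  tabletPortrait:  { width: 768,  height: 1024 },
  tabletLandscape: { width: 1024, height: 768 },
  smallPhone:      { width: 375,  height: 667 },
  largePhone:      { width: 414,  height: 896 },
};
```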
Scenario Selection
For tests using data tables:
- Run All Scenarios - Execute every row in the table
- Select Specific Scenarios - Choose individual rows to test
- Environment-Specific Scenarios - Run different scenarios in different environments
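Each row of the data table is one scenario, so selecting specific scenarios amounts to picking a subset of rows. A hypothetical sketch (the row shape is an assumption):

```typescript
// Hypothetical shape: each row of a test's data table is one scenario.
type Scenario = Record<string, string>;

const scenarios: Scenario[] = [
  { user: "admin",  plan: "pro"  },
  { user: "member", plan: "free" },
  { user: "guest",  plan: "free" },
];

// "Run All Scenarios" executes every row; selecting specific scenarios
// is just picking a subset of rows before the run starts.
const selected = scenarios.filter((row) => row.plan === "free");
```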
Run Summary
Before execution starts, review the run summary:

| Metric | Description |
|---|---|
| Total Runs | Number of test executions (tests × environments × browsers) |
| Environments | List of selected environments |
| Browsers | Browser configurations |
| Scenarios | Number of scenarios per test |
| Estimated Time | Expected execution duration |
Running tests across many environments and browsers increases execution time and resource usage. Start with a small selection and expand as needed.
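As a sanity check on the summary, Total Runs is simply the product of your selections; a minimal sketch:

```typescript
// Illustrative only: how the Total Runs figure in the summary is derived.
interface RunSelection {
  tests: number;        // tests selected in the suite
  environments: number; // environments chosen (e.g., Dev, Staging)
  browsers: number;     // browser configurations per environment
}

function totalRuns({ tests, environments, browsers }: RunSelection): number {
  return tests * environments * browsers;
}

// 2 tests x 3 environments x 2 browsers = 12 executions
console.log(totalRuns({ tests: 2, environments: 3, browsers: 2 })); // 12
```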
Monitoring Execution
During test execution, you can:
- Watch Live Video - See tests run in real-time
- View Logs - Check step-by-step execution details
- Track Progress - Monitor completion percentage
- Identify Issues - Spot failures as they happen

Real-time test execution monitoring
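If you prefer to follow a run programmatically, the pattern is a simple status poll. The sketch below is hypothetical; the status URL and response shape are assumptions, not a documented endpoint:

```typescript
// Hypothetical status poll; the endpoint and response shape are
// assumptions for illustration, not a documented API.
interface RunStatus {
  state: "queued" | "running" | "passed" | "failed";
  progress: number; // completion percentage, 0-100
}

async function waitForRun(statusUrl: string, intervalMs = 5000): Promise<RunStatus> {
  for (;;) {
    const status: RunStatus = await (await fetch(statusUrl)).json();
    console.log(`${status.state} (${status.progress}%)`);
    if (status.state === "passed" || status.state === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```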
Results and Debugging
After execution completes:
Pass/Fail Status
- Overall test result
- Per-environment breakdown
- Per-browser breakdown
- Per-scenario results
Execution Artifacts
- Videos - Full recording of test execution
- Screenshots - Captured at each step and on failure
- Logs - Detailed step-by-step execution log
- Network Activity - HTTP requests and responses
- Console Output - Browser console logs
Debugging Failed Tests
When a test fails:
- Check the error message and screenshot
- Watch the video to see what happened
- Review network logs for API issues
- Compare results across environments
- Check environment-specific variables
If a test passes in one environment but fails in another, the issue is likely environment-specific configuration or data. Check your environment variables and test data.
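Comparing outcomes across environments can be automated: any test that both passes and fails across environments is environment-dependent by definition. An illustrative sketch:

```typescript
// Illustrative: flag tests whose outcome differs between environments,
// which usually points at environment-specific configuration or data.
type Outcome = { test: string; environment: string; passed: boolean };

function environmentDependentTests(outcomes: Outcome[]): string[] {
  const resultsByTest = new Map<string, Set<boolean>>();
  for (const o of outcomes) {
    const seen = resultsByTest.get(o.test) ?? new Set<boolean>();
    seen.add(o.passed);
    resultsByTest.set(o.test, seen);
  }
  // A test with both pass and fail outcomes varies by environment.
  return [...resultsByTest]
    .filter(([, results]) => results.size > 1)
    .map(([test]) => test);
}
```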
Best Practices
- Start Small - Test in a single environment first, then expand
- Use Appropriate Browsers - Don’t test all browsers unless necessary
- Select Relevant Scenarios - Run only the scenarios you need to verify
- Monitor First Runs - Watch tests execute to catch issues early
- Compare Environments - Use multi-environment runs to spot inconsistencies
- Check Prerequisites - Ensure environment variables and test data are configured
Manual runs are great for development and debugging. For regular testing, consider using Scheduled Tests or Test Plans.
Quick Actions
Common manual run patterns:
Quick Validation
- Single environment (usually Dev)
- Default browser
- All scenarios
- Perfect for rapid iteration
Pre-Deployment Check
- Staging + Production
- All browsers
- Critical scenarios
- Validate before release
Bug Investigation
- Specific environment
- Single browser
- Selected scenario
- Reproduce and debug issue
Cross-Environment Regression
- All environments
- Primary browsers
- All scenarios
- Ensure consistency
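Each pattern is just a different run configuration. The shapes below are illustrative, not the platform's schema:

```typescript
// Hypothetical run-configuration shapes for the patterns above;
// field names and values are illustrative, not the platform's schema.
interface RunConfig {
  environments: string[];
  browsers: string[];
  scenarios: "all" | string[]; // every row, or a named subset
}

const quickValidation: RunConfig = {
  environments: ["Dev"],
  browsers: ["Chrome"], // default browser
  scenarios: "all",
};

const preDeploymentCheck: RunConfig = {
  environments: ["Staging", "Production"],
  browsers: ["Chrome", "Safari", "Firefox", "Mobile Chrome", "Mobile Safari"],
  scenarios: ["checkout", "login"], // critical scenarios only
};
```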
Troubleshooting
Test won't start
Check:
- All required environment variables are set
- Selected environment is properly configured
- Test has no blocking dependencies
Different results in different environments
Likely causes:
- Environment-specific data
- Different application versions
- Timing or performance differences
Cannot select certain environments
Likely causes:
- Test is restricted to specific environments
- Missing required environment variables
- Environment is deactivated
Test results are typically available within a few minutes of execution. Large test suites or multi-environment runs may take longer.
Next Steps
- Learn about Environments configuration
- Set up Scheduled Tests for automation
- Create Test Plans for comprehensive testing
- Configure Slack notifications for results
