Test Analytics

Transform test data into actionable insights for improving test reliability and performance.

Overview

Analytics track test execution patterns over time, revealing issues that single-run results miss. Use this data to prioritize fixes, optimize performance, and catch regressions early.

Key Metrics

Pass Rate

The percentage of test runs that pass over a given time period.

| Range | Status | Action |
|---|---|---|
| 95-100% | Healthy | Monitor for regressions |
| 85-95% | Attention needed | Review failing tests weekly |
| Below 85% | Critical | Prioritize test fixes immediately |
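
A minimal sketch of the calculation; the record shape and names here are illustrative, not a real API:

```python
# Hypothetical run results collected over the period: (test_name, passed).
runs = [
    ("login_test", True),
    ("login_test", True),
    ("checkout_test", False),
    ("checkout_test", True),
]

def pass_rate(runs):
    """Percentage of runs that passed over the period."""
    if not runs:
        return 100.0
    passed = sum(1 for _, ok in runs if ok)
    return 100.0 * passed / len(runs)

rate = pass_rate(runs)
status = "Healthy" if rate >= 95 else "Attention needed" if rate >= 85 else "Critical"
print(f"{rate:.1f}% -> {status}")  # 75.0% -> Critical
```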

Failure Rate

Distinguishes tests that fail consistently from tests that fail intermittently.

Consistent failures (same test, same error): Likely a real bug or outdated test.

Intermittent failures: Points to flakiness, timing issues, or environment problems.
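
One way to separate the two, sketched over hypothetical run records:

```python
from collections import defaultdict

# Hypothetical run records: (test_name, passed, error_message_or_None).
runs = [
    ("pay_test", False, "AssertionError: total"),
    ("pay_test", False, "AssertionError: total"),
    ("cart_test", True, None),
    ("cart_test", False, "TimeoutError"),
]

by_test = defaultdict(list)
for name, passed, error in runs:
    by_test[name].append((passed, error))

for name, results in by_test.items():
    outcomes = {passed for passed, _ in results}
    errors = {error for passed, error in results if not passed}
    if outcomes == {False} and len(errors) == 1:
        # Every run fails with the same error.
        print(f"{name}: consistent failure ({errors.pop()}) -> likely real bug")
    elif True in outcomes and False in outcomes:
        # Mixed pass/fail results without code changes.
        print(f"{name}: intermittent failure -> suspect flakiness, timing, or environment")
```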

Flakiness Score

Measures how often a test flip-flops between pass and fail without code changes.

Flakiness = (Flip-flop runs / Total runs) × 100

| Score | Classification | Priority |
|---|---|---|
| 0-5% | Stable | No action |
| 5-15% | Moderate | Schedule review |
| 15%+ | Flaky | Fix immediately |

Flaky tests erode trust in your test suite and waste developer time investigating false failures.
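
A small sketch of the formula above, assuming a chronological run history for a single test with no code changes in between:

```python
def flakiness_score(outcomes):
    """Flakiness = (flip-flop runs / total runs) x 100, where a flip-flop
    is a run whose outcome differs from the previous run of the same test."""
    if len(outcomes) < 2:
        return 0.0
    flips = sum(1 for prev, cur in zip(outcomes, outcomes[1:]) if prev != cur)
    return 100.0 * flips / len(outcomes)

history = [True, False, True, True, False, True]  # hypothetical run history
score = flakiness_score(history)
label = "Stable" if score < 5 else "Moderate" if score < 15 else "Flaky"
print(f"{score:.0f}% -> {label}")  # 67% -> Flaky
```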

Execution Time

Track average duration and trends per test.

Warning signs:

  • Sudden spikes: New performance regression
  • Gradual increase: Technical debt accumulating
  • High variance: Inconsistent test environment
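
A rough sketch of checks for these warning signs; the thresholds (1.5x for spikes, 1.2x for drift, 25% for variance) are illustrative, not prescribed values:

```python
from statistics import mean, stdev

durations = [1.2, 1.3, 1.2, 1.4, 1.3, 2.9]  # hypothetical per-run seconds

# Sudden spike: latest run well above the prior baseline.
baseline = mean(durations[:-1])
latest = durations[-1]
if latest > 1.5 * baseline:
    print(f"Sudden spike: {latest:.1f}s vs {baseline:.1f}s baseline")

# Gradual increase: recent runs slower on average than earlier ones.
half = len(durations) // 2
if mean(durations[half:]) > 1.2 * mean(durations[:half]):
    print("Gradual increase: recent runs slower than earlier runs")

# High variance: spread is large relative to the mean.
if stdev(durations) > 0.25 * mean(durations):
    print("High variance: inconsistent test environment?")
```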

Trend Analysis

Time Period Views

| View | Best for |
|---|---|
| Daily | Catching immediate regressions after deploys |
| Weekly | Identifying patterns and recurring issues |
| Monthly | Measuring overall suite health improvements |
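
A sketch of bucketing the same run records into daily, weekly, and monthly views; the data shape is hypothetical:

```python
from collections import defaultdict
from datetime import date

# Hypothetical (run_date, passed) records.
runs = [
    (date(2024, 5, 1), True),
    (date(2024, 5, 1), False),
    (date(2024, 5, 8), True),
]

def pass_rate_by(runs, bucket):
    """Group runs by a bucket key and compute pass rate per bucket."""
    groups = defaultdict(list)
    for day, passed in runs:
        groups[bucket(day)].append(passed)
    return {k: 100.0 * sum(v) / len(v) for k, v in sorted(groups.items())}

print(pass_rate_by(runs, lambda d: d))                    # daily
print(pass_rate_by(runs, lambda d: d.isocalendar()[:2]))  # weekly (year, week)
print(pass_rate_by(runs, lambda d: (d.year, d.month)))    # monthly
```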

Identifying Patterns

Look for correlations between:

  • Deploy times and failures: Regressions from code changes
  • Time of day: Infrastructure issues during peak load
  • Day of week: Environment drift over weekends
  • Test file changes: Brittle test modifications
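
For the deploy correlation, a minimal sketch that flags failures landing shortly after a deploy; the one-hour window is an assumption:

```python
from datetime import datetime, timedelta

# Hypothetical deploy times and failure timestamps.
deploys = [datetime(2024, 5, 1, 14, 0), datetime(2024, 5, 2, 9, 30)]
failures = [datetime(2024, 5, 1, 14, 20), datetime(2024, 5, 1, 23, 5)]

WINDOW = timedelta(hours=1)  # illustrative correlation window

for failed_at in failures:
    near = [d for d in deploys if timedelta(0) <= failed_at - d <= WINDOW]
    if near:
        print(f"{failed_at}: within {WINDOW} of deploy at {near[0]} -> suspect regression")
    else:
        print(f"{failed_at}: no nearby deploy -> check time-of-day or environment factors")
```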

Tracking Improvements

Set baselines, fix issues, compare metrics after 1-2 weeks, and document what worked.
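
A tiny before/after comparison sketch, with made-up metric names and values:

```python
# Hypothetical snapshots: baseline taken before fixes, current taken 1-2 weeks after.
baseline = {"pass_rate": 88.0, "flakiness": 12.0, "avg_duration_s": 42.0}
current = {"pass_rate": 94.0, "flakiness": 6.0, "avg_duration_s": 38.5}

for metric, before in baseline.items():
    after = current[metric]
    print(f"{metric}: {before} -> {after} ({after - before:+.1f})")
```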

Acting on Data

Prioritizing Flaky Tests

Sort by impact: Flakiness score x Run frequency. High-frequency flaky tests waste the most time. Export the list, categorize root causes (timing, data, environment), apply fixes, and monitor for improvement over 5+ runs.
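
A sketch of the impact ranking; the per-test stats are hypothetical:

```python
# Hypothetical per-test stats: (name, flakiness_pct, runs_per_week).
tests = [
    ("login_test", 18.0, 500),
    ("report_test", 30.0, 20),
    ("search_test", 8.0, 1000),
]

# Impact = flakiness score x run frequency: high-frequency flaky
# tests waste the most investigation time.
ranked = sorted(tests, key=lambda t: t[1] * t[2], reverse=True)
for name, flakiness, freq in ranked:
    print(f"{name}: impact={flakiness * freq:.0f}")
```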

Identifying Slow Tests

Tests exceeding baseline duration by 2x deserve attention. Reduce unnecessary waits, mock slow dependencies, parallelize operations, or split large tests.
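
A sketch of the 2x filter, again with hypothetical durations:

```python
# Hypothetical (name, baseline_s, current_avg_s) durations.
tests = [
    ("export_test", 4.0, 9.5),
    ("login_test", 1.0, 1.1),
]

# Flag tests whose current average exceeds their baseline by 2x.
slow = [(n, b, c) for n, b, c in tests if c > 2 * b]
for name, baseline, current in slow:
    print(f"{name}: {current}s vs {baseline}s baseline ({current / baseline:.1f}x)")
```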

Setting Up Regression Alerts

Configure notifications when:

| Condition | Threshold | Alert Type |
|---|---|---|
| Pass rate drops | >5% decrease in 24h | Immediate |
| New test failures | Previously stable test fails 2+ times | Same day |
| Duration spike | >50% increase from baseline | Daily digest |
| Flakiness increase | Score rises above 15% | Weekly review |
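
These conditions are straightforward to evaluate in code; a sketch mirroring the thresholds above (metric names are illustrative):

```python
# Hypothetical current metrics for one suite.
metrics = {
    "pass_rate_drop_24h": 6.5,    # percentage-point decrease in 24h
    "repeat_failures": 3,         # failures of a previously stable test
    "duration_increase_pct": 60,  # % increase from baseline
    "flakiness_pct": 16,
}

checks = [
    ("Pass rate drops", "Immediate", metrics["pass_rate_drop_24h"] > 5),
    ("New test failures", "Same day", metrics["repeat_failures"] >= 2),
    ("Duration spike", "Daily digest", metrics["duration_increase_pct"] > 50),
    ("Flakiness increase", "Weekly review", metrics["flakiness_pct"] > 15),
]
for condition, alert_type, fired in checks:
    if fired:
        print(f"{condition} -> {alert_type} alert")
```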

Filtering and Grouping

By Project

Compare test health across different projects to identify which need attention.

By Suite

Group related tests to find systemic issues:

  • Authentication suite failing: Check auth service
  • API suite slow: Review backend performance
  • UI suite flaky: Investigate selector stability
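
A sketch of per-suite grouping over hypothetical results:

```python
from collections import defaultdict

# Hypothetical (suite, test, passed) results.
results = [
    ("auth", "login_test", False),
    ("auth", "logout_test", False),
    ("api", "users_test", True),
]

by_suite = defaultdict(list)
for suite, _, passed in results:
    by_suite[suite].append(passed)

for suite, outcomes in by_suite.items():
    rate = 100.0 * sum(outcomes) / len(outcomes)
    print(f"{suite}: {rate:.0f}% pass rate")
```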

By Individual Test

Drill down to specific test history for debugging persistent issues.

By Time Period

| Filter | Use case |
|---|---|
| Last 24 hours | Post-deploy verification |
| Last 7 days | Sprint review |
| Last 30 days | Monthly health report |
| Custom range | Investigating specific incidents |

By Environment

Compare results across environments to isolate infrastructure issues. Local vs CI differences indicate setup problems; staging vs production gaps reveal deployment issues.
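
A minimal comparison sketch, with made-up per-environment pass rates and an illustrative 5-point gap threshold:

```python
# Hypothetical pass rates for the same suite in each environment.
env_rates = {"local": 99.0, "ci": 91.0, "staging": 97.0, "production": 89.0}

if env_rates["local"] - env_rates["ci"] > 5:
    print("Local vs CI gap -> check CI setup and configuration")
if env_rates["staging"] - env_rates["production"] > 5:
    print("Staging vs production gap -> check deployment differences")
```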

Quick Actions

| Goal | Steps |
|---|---|
| Find flakiest tests | Filter by flakiness > 10%, sort descending |
| Identify slowest tests | Sort by avg duration, filter > 30s |
| Check recent regressions | Filter last 24h, status = fail, previously = pass |
| Review suite health | Group by suite, compare pass rates |
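
The first two quick actions, sketched over hypothetical analytics rows:

```python
# Hypothetical per-test analytics rows.
rows = [
    {"test": "checkout_test", "flakiness": 22.0, "avg_duration_s": 12.0},
    {"test": "search_test", "flakiness": 12.0, "avg_duration_s": 45.0},
    {"test": "login_test", "flakiness": 2.0, "avg_duration_s": 1.0},
]

# Find flakiest tests: filter flakiness > 10%, sort descending.
flakiest = sorted(
    (r for r in rows if r["flakiness"] > 10),
    key=lambda r: r["flakiness"],
    reverse=True,
)
print([r["test"] for r in flakiest])  # ['checkout_test', 'search_test']

# Identify slowest tests: filter > 30s, sort by avg duration.
slowest = sorted(
    (r for r in rows if r["avg_duration_s"] > 30),
    key=lambda r: r["avg_duration_s"],
    reverse=True,
)
print([r["test"] for r in slowest])  # ['search_test']
```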

Next: Learn about Test Suites to organize tests for better analytics grouping.