WebPageTest Complete Guide: Deep Performance Analysis

WebPageTest is the most powerful free performance analysis tool available today. Unlike Lighthouse -- which runs a single simulated test in your local browser -- WebPageTest runs tests from real server infrastructure across dozens of global locations, generates detailed waterfall charts for every network request, captures filmstrip screenshots every 100 milliseconds, and provides granular breakdowns of connection timing, server response, and render progression. For serious performance work, it is an indispensable tool.

This guide walks through every stage of a WebPageTest workflow: from configuring a test correctly, to interpreting the waterfall, to using scripted tests for authenticated pages, and finally to automating test runs through the API in a CI/CD pipeline. Whether you are diagnosing a slow TTFB, hunting down a render-blocking resource causing poor LCP, or measuring the impact of a CDN change, WebPageTest gives you the data to do it with confidence.

Workflow: your URL + test config → test agent (real Chrome, 3 runs, median) → results (waterfall + filmstrip, CWV + TTFB) → action (fix + re-test).

Step-by-step walkthrough

Step 1: Pick test location and device

Navigate to webpagetest.org and paste your URL into the main field. The first decision -- test location -- has the biggest impact on your TTFB results. Choose a location that represents your primary user base: if most of your users are on the US East Coast, use Virginia; for European users, London or Frankfurt. For global analysis, run separate tests from multiple regions and compare the TTFB columns to identify geographic CDN gaps.

For device selection, the default "Desktop" Chrome profile is suitable for understanding your core loading pipeline. However, for a realistic picture of real-world CWV scores, always also test with a mobile emulation profile. WebPageTest's "Motorola G (gen 4)" emulation applies CPU throttling that approximates the mid-range Android hardware Lighthouse uses as its baseline for mobile scoring -- a far closer match to typical field conditions than an unthrottled desktop run.

Tip: Run tests from the same location each time you want to compare results. Switching between Virginia and London between test runs introduces confounding variables that make before/after comparisons unreliable.

Step 2: Configure advanced settings

Click the Advanced Settings tab to reveal the full configuration panel. The most important settings are connection speed, number of test runs, and repeat view. Use 4G (9 Mbps, 170ms RTT) for mobile tests or Cable (5 Mbps, 28ms RTT) for desktop. These presets match typical field conditions and produce waterfall data that correlates well with real-user CrUX scores.

Set Number of Tests to Run to at least 3. WebPageTest automatically selects a median run and uses it as the representative result. With 3 runs the median discards the fastest and slowest run; with 5 or 9 runs the median is more stable still, which matters for rigorous before/after comparisons. Enable First View and Repeat View to see both cold-cache and warm-cache performance in a single test submission.
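The median-selection logic is easy to reason about; a minimal sketch (the function name and sample Speed Index values are illustrative, not part of WebPageTest itself):

```javascript
// Pick the median run from an array of per-run Speed Index values.
// WebPageTest does this server-side; this sketch just illustrates why
// odd run counts (3, 5, 9) are recommended: the median is a real run,
// and the fastest/slowest outliers are discarded automatically.
function medianRunIndex(speedIndexes) {
  const order = speedIndexes
    .map((si, i) => ({ si, i }))
    .sort((a, b) => a.si - b.si);
  return order[Math.floor((order.length - 1) / 2)].i; // middle run
}

console.log(medianRunIndex([2100, 3400, 2050])); // → 0 (run 0 is the median of 3)
```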

WebPageTest -- recommended advanced settings
Test URL:          https://your-site.com/
Test Location:     Virginia, USA -- EC2 (us-east-1)
Browser:           Chrome
Connection:        4G (9 Mbps / 170ms RTT)
Number of Tests:   3
Repeat View:       First View and Repeat View

Advanced:
  Capture Video:      Yes   (required for filmstrip)
  Capture Timeline:   Yes   (required for long-task analysis)
  Block Ads:          No    (test real-world conditions)
  Ignore SSL Errors:  No
  Inject Script:      (leave blank for baseline)
  DNS Override:       (use for staging environment tests)
Tip: Enable "Capture Video" every time. Without it, you lose access to the Filmstrip view and the visual comparison tool, which are among WebPageTest's most valuable outputs.

Step 3: Read summary cards and CWV scores

When the test completes, the results page opens with a row of summary metric cards at the top. These show LCP, CLS, TBT (Total Blocking Time -- a Lighthouse proxy for INP), TTFB, Start Render, Speed Index, and Time to Interactive. Each metric is color-coded green (Good), orange (Needs Improvement), or red (Poor) against the same Core Web Vitals thresholds Google uses.
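The color-coding amounts to a simple threshold check. A sketch: the LCP, CLS, and TTFB cut-offs below are Google's published Core Web Vitals values, while the TBT cut-offs are an assumption borrowed from Lighthouse's lab guidance:

```javascript
// Classify a metric value as Good / Needs Improvement / Poor,
// the same way the summary cards color-code results.
const THRESHOLDS = {
  LCP:  [2500, 4000], // ms (Good <= 2500, Poor > 4000)
  CLS:  [0.1, 0.25],
  TBT:  [200, 600],   // ms -- assumed from Lighthouse lab guidance
  TTFB: [800, 1800],  // ms
};

function rate(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  return value <= good ? 'Good' : value <= poor ? 'Needs Improvement' : 'Poor';
}

console.log(rate('LCP', 1800)); // → Good
console.log(rate('TBT', 320));  // → Needs Improvement
```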

Below the summary cards you will find a First View vs. Repeat View comparison table. A dramatic improvement in Repeat View LCP (say, 4.2s down to 0.8s) indicates that your images are successfully cached but are not being preloaded for first-time visitors -- a prime candidate for adding rel="preload" or serving from a CDN with proper cache headers. If Repeat View shows little improvement, your caching strategy needs attention.

Example summary cards: LCP 1.8s (Good) · CLS 0.04 (Good) · TBT 320ms (Needs Improvement) · TTFB 210ms (Good) · Speed Index 2.1s (Good)

Step 4: Waterfall chart analysis

The waterfall chart is the most diagnostic view in WebPageTest. Each horizontal bar represents a single network request. The bar's left edge marks when the request started; its right edge marks when the response completed. The bar is divided into color-coded segments: DNS lookup (teal), initial connection (orange), SSL negotiation (purple), time to first byte (green), and content download (blue).

The critical elements to identify are: the Start Render line (a solid vertical green line marking when the browser first paints), any long request bars that begin before this line (these are blocking render), long gaps between resource bars (usually main-thread work delaying the discovery and fetch of later resources), and deep request chains where a late-discovered resource triggers additional fetches that delay LCP.

Example waterfall rows (0-1500ms timeline): index.html, critical.css, vendor.js, hero.webp (LCP), with Start Render and LCP markers.
Reading the waterfall: Rows that appear far to the right on the timeline -- especially your LCP image -- indicate late discovery. The fix is usually adding a <link rel="preload"> tag in the document head so the browser discovers and starts fetching the resource earlier, before the HTML parser reaches it.

Step 5: Filmstrip and visual progress

Click the Filmstrip View tab to see screenshots captured every 100 milliseconds during the page load. The filmstrip is particularly valuable for diagnosing the perceived loading experience -- the gap between when the page first renders something meaningful and when it is visually complete determines whether users feel the page is fast or slow, independent of the raw network timings.

Look for two patterns. First, a long blank white screen at the beginning of the filmstrip (before Start Render) indicates render-blocking resources. Second, large portions of the layout shifting or appearing late indicate either an LCP element that is being lazy-loaded, or images without explicit width and height attributes forcing layout recalculation. The filmstrip makes these problems visually obvious in a way that metric numbers alone cannot.

Tip: Use the WebPageTest Visual Comparison feature to run side-by-side filmstrips of your page before and after an optimization. This is the most compelling way to demonstrate performance improvements to stakeholders who are not comfortable reading waterfall charts.

Step 6: Content breakdown and third parties

The Content Breakdown section shows a pie chart and table of your page weight divided by content type (HTML, CSS, JavaScript, images, fonts, other) and by domain. Third-party origins -- analytics scripts, chat widgets, A/B testing platforms, social embeds -- commonly account for 30% to 60% of the total request count and frequently appear on the critical path, blocking LCP.

To quantify the impact of a specific third party, use WebPageTest's Block field in the Advanced Settings. Enter the domain to block (for example, googletagmanager.com) and run the test again. Compare LCP and TBT between the blocked and unblocked versions. This gives you a precise figure for the performance cost of each third-party integration -- essential data when making architectural decisions or negotiating with marketing teams.

Tip: Request Blocking data from WebPageTest is far more actionable than generic "remove unused JavaScript" advice from Lighthouse. It tells you exactly which specific origin is causing the problem and by how much.
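The same comparison can be automated against the public REST API. A sketch: the runtest.php endpoint and its `block` parameter (a space-delimited domain list) come from the public API documentation, while the helper names and sample numbers are hypothetical:

```javascript
// Quantify a third party: submit a baseline test and a test with the
// domain blocked, then diff the median first-view metrics.
const WPT = 'https://www.webpagetest.org';

// Pure helper: metric deltas between baseline and blocked first views.
function blockingDelta(baseline, blocked) {
  return {
    lcpSavingMs: baseline.lcp - blocked.lcp,
    tbtSavingMs: baseline.tbt - blocked.tbt,
  };
}

async function submitTest(url, { block } = {}) {
  const params = new URLSearchParams({
    url, k: process.env.WPT_API_KEY, f: 'json', runs: '3',
    location: 'Dulles:Chrome.4G', // location:browser.connectivity
  });
  if (block) params.set('block', block);
  const res = await fetch(`${WPT}/runtest.php?${params}`);
  const body = await res.json();
  return body.data.testId; // poll jsonResult.php?test=<id> for completion
}

// Usage sketch (not run here): submit both tests, wait for completion, then:
// blockingDelta({ lcp: 4200, tbt: 600 }, { lcp: 2900, tbt: 250 })
//   → { lcpSavingMs: 1300, tbtSavingMs: 350 }
```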

Step 7: Scripted tests for login flows

Many of the most performance-critical pages in an application -- dashboards, checkout pages, account settings -- sit behind authentication. WebPageTest's Script feature lets you automate a user journey so the test agent navigates through login before measuring your target page. Scripts run in the actual browser instance and support the full range of browser actions.

The script syntax uses simple tab-delimited commands. Common commands include navigate to load a URL, setValue to fill form fields by selector, submitForm to submit forms, setCookie to inject session cookies directly (faster than a full login flow), and exec to run arbitrary JavaScript. Note that by default WebPageTest records every step of the script, so wrap the setup steps in logData 0 ... logData 1 to ensure only the final navigate to your target URL is measured.

WebPageTest Script -- authenticate then test dashboard
// Turn off measurement while we authenticate
logData  0

// Step 1: Navigate to login page
navigate  https://your-app.com/login

// Step 2: Wait for form to render
sleep  2000

// Step 3: Fill in credentials
setValue  name=email    your-test-user@example.com
setValue  name=password testpassword123

// Step 4: Submit the login form
submitForm  id=login-form

// Step 5: Wait for post-login redirect to complete
sleep  3000

// Step 6: Turn measurement back on and load the page under test
logData  1
navigate  https://your-app.com/dashboard/

// WebPageTest measures only this final navigate call

// Alternative: inject a session cookie to skip the login UI entirely
// (faster, more stable) -- use these two lines as the whole script instead:
//   setCookie  https://your-app.com  session=your-session-token-here
//   navigate   https://your-app.com/dashboard/
Tip: Use the setCookie approach when possible. Injecting a session cookie skips the login page entirely and produces cleaner, more reproducible test results without the timing variability of form interactions.

Step 8: API automation for CI/CD

Running WebPageTest manually is useful for diagnostics, but real performance discipline requires automated testing on every deployment. WebPageTest provides a REST API and an official webpagetest npm package that lets you submit tests, poll for completion, retrieve results as JSON, and fail CI builds when metrics exceed defined thresholds.

The typical CI integration pattern is: trigger a test after deployment to a staging or preview URL, wait for results (usually 60--120 seconds), extract LCP, CLS, and TBT from the JSON response, compare against your performance budget, and fail the build with a descriptive error message if any metric is out of range. Combine this with the test result URL included in the build log so developers can click through directly to the waterfall for debugging.

JavaScript -- WebPageTest API automation (Node.js)
import WebPageTest from 'webpagetest';
import { promisify } from 'node:util';

const wpt = new WebPageTest('www.webpagetest.org', process.env.WPT_API_KEY);
// The webpagetest package is callback-based; promisify it for async/await.
const runTest = promisify(wpt.runTest.bind(wpt));

const BUDGETS = {
  'largest-contentful-paint': 2500,  // ms
  'cumulative-layout-shift':  0.10,
  'total-blocking-time':      200,   // ms
  'time-to-first-byte':       800,   // ms
};

async function runAudit(url) {
  console.log(`Starting WebPageTest audit for ${url}`);

  const result = await runTest(url, {
    location:      'Dulles:Chrome',
    connectivity:  '4G',
    runs:          3,
    firstViewOnly: false,
    video:         true,
    pollResults:   5,    // poll every 5s until the test completes
    timeout:       240,  // give up after 240s
  });

  if (result.statusCode !== 200) {
    throw new Error(`Test failed: ${result.statusText}`);
  }

  const fv = result.data.median.firstView;

  // Metric key names can vary between WebPageTest versions; check the
  // JSON your instance returns and adjust accordingly.
  const metrics = {
    'largest-contentful-paint': fv['chromeUserTiming.LargestContentfulPaint'],
    'cumulative-layout-shift':  fv['chromeUserTiming.CumulativeLayoutShift'],
    'total-blocking-time':      fv.TotalBlockingTime,
    'time-to-first-byte':       fv.TTFB,
  };

  console.log('Results:', metrics);
  console.log('Full report:', result.data.summary);

  let passed = true;
  for (const [key, value] of Object.entries(metrics)) {
    const budget = BUDGETS[key];
    if (value === undefined || value > budget) {
      console.error(`FAIL ${key}: ${value} exceeds budget ${budget}`);
      passed = false;
    } else {
      console.log(`PASS ${key}: ${value}`);
    }
  }

  if (!passed) process.exit(1);
}

runAudit(process.env.DEPLOY_URL || 'https://your-site.com/');
Tip: Store your WebPageTest API key as an environment secret in your CI system. The free API key has a daily quota; consider the WebPageTest Pro plan if you are running tests on every pull request in a large team.

Pro tips

Use Test History to track regression over time

WebPageTest stores all test results at a permanent URL. Keep a spreadsheet (or a monitoring dashboard) linking to each test run keyed by deployment date. When a metric regresses, you can diff the waterfall from the last good deployment against the broken one to identify the offending resource or change.

Always test Repeat View, not just First View

For sites with repeat visitors -- blogs, SaaS apps, e-commerce -- Repeat View performance may matter more than First View. Users who return daily will load the majority of their sessions with a warm cache. If your Repeat View LCP is still slow, check your Cache-Control headers and service worker scope.

Block third parties to measure their impact precisely

The Block field in Advanced Settings is one of the most underused WebPageTest features. Block individual third-party domains one at a time, run the test, and record the LCP delta. You will quickly identify which integrations have the highest performance cost, giving you prioritized ammunition for removing or deferring them.

Use the DNS Override field to test staging environments accurately

When testing a staging environment that is not publicly accessible, use WebPageTest's DNS Override feature to point your production domain to your staging server IP. This lets you test with your actual production domain and all its associated SSL certificates, CDN routing, and DNS behavior -- without making the staging server public.

Run multi-step comparison tests with Visual Comparison

WebPageTest's Visual Comparison feature generates side-by-side filmstrip and Speed Index graphs for up to six URLs at once. This is the fastest way to compare your site across page templates, compare your performance to competitors, or demonstrate the before/after impact of an optimization to a non-technical audience.

Common issues

Test results vary too much between runs

High variance between test runs usually indicates one of three causes: server-side variability (inconsistent TTFB from an origin that is not well-cached or auto-scaling), CDN routing variability (the test agent is hitting different CDN edge nodes on each run), or test agent network congestion. The fix is to run more tests -- 5 or 9 runs -- and use the median. Also check whether your TTFB is consistent: a coefficient of variation above 15% in TTFB indicates a server-side problem that should be fixed before you optimize the front end.
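The TTFB consistency check is a one-liner worth automating. A sketch, with hypothetical per-run values:

```javascript
// Coefficient of variation (standard deviation / mean) for per-run
// TTFB values. A CV above ~0.15 (15%) suggests server-side variability
// that should be fixed before front-end optimization.
function coefficientOfVariation(values) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance) / mean;
}

const ttfbRuns = [210, 230, 480, 215, 225]; // ms, hypothetical 5-run sample
console.log(coefficientOfVariation(ttfbRuns).toFixed(2)); // → 0.38 (high: fix the server first)
```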

WebPageTest results do not match PageSpeed Insights field data

This is expected and normal. WebPageTest is a lab tool running from a specific geographic location with a specific network profile. PageSpeed Insights field data comes from real Chrome users across diverse devices, networks, and locations worldwide. The gap reflects the difference between controlled lab conditions and the messy reality of real-user traffic. Use WebPageTest to debug specific bottlenecks identified by field data -- not as a direct replacement for CrUX-based metrics.

The scripted test fails to authenticate or navigate correctly

Login script failures are usually caused by timing issues: the form or redirect takes longer than the script expects. Add sleep commands after each navigation to allow the page to settle. If the login flow uses JavaScript-rendered forms (React, Vue, Angular), the form elements may not exist in the DOM at the moment the script tries to fill them. Increase the sleep duration or use the exec command to poll for element presence before interacting. Alternatively, switch to the setCookie approach, which bypasses the login UI entirely and is far more reliable.

API tests return "queued" for too long in CI/CD

The public WebPageTest infrastructure has a shared queue, and wait times can reach several minutes during peak usage. For time-sensitive CI pipelines, use a dedicated WebPageTest Pro server instance or a private agent to eliminate queue time. Alternatively, configure your API client with a generous timeout value (180 seconds or more) and implement exponential backoff polling. Avoid polling too aggressively -- once per 10 seconds is sufficient and prevents rate-limiting.
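A polling sketch with capped backoff, assuming the public jsonResult.php convention that statusCode 200 means complete and lower codes mean queued or running (the function names are illustrative):

```javascript
// Backoff schedule: double the delay each attempt, capped at 10s so we
// never poll more often than the queue needs.
function backoffDelays(baseMs = 2000, capMs = 10000, total = 8) {
  const delays = [];
  for (let i = 0; i < total; i++) delays.push(Math.min(baseMs * 2 ** i, capMs));
  return delays;
}

async function waitForResult(testId, apiBase = 'https://www.webpagetest.org') {
  for (const delay of backoffDelays()) {
    const res = await fetch(`${apiBase}/jsonResult.php?test=${testId}`);
    const body = await res.json();
    if (body.statusCode === 200) return body.data;   // complete
    await new Promise((r) => setTimeout(r, delay));  // still queued/running
  }
  throw new Error(`Test ${testId} did not complete in time`);
}

console.log(backoffDelays()); // 2000, 4000, 8000, then capped at 10000
```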

Summary

Feature | What to look for | Common fix
TTFB | Green TTFB segment in row 1 | CDN, server caching, edge compute
Waterfall | Long bars before the Start Render line | defer/async scripts, eliminate render-blocking CSS
Filmstrip | Blank-screen duration | Inline critical CSS, preload key fonts
Content Breakdown | Third-party domain weight | Defer non-critical third parties, self-host fonts
Scripting | Authenticated page load times | Use setCookie for reliable auth flows
Repeat View | First vs. Repeat LCP delta | Improve Cache-Control headers, add a service worker
API / CI | Automated regression detection | Fail builds that exceed performance budgets

Frequently asked questions

Is WebPageTest free to use?

The public instance at webpagetest.org is free for most use cases, with generous rate limits. A paid WebPageTest Pro subscription unlocks additional test agents, private instances, bulk testing, and advanced scripting features. For CI/CD automation with high test volumes, the paid API tier is the recommended option.

How many test runs should I use for accurate results?

Always run at least 3 tests and use the median result. Network conditions, server response times, and CDN routing can vary between individual runs by 10-20%. For high-precision comparisons -- such as before and after an optimization -- run 5 to 9 tests per configuration and compare median values to reduce noise.

What is the difference between First View and Repeat View?

First View simulates a new visitor with an empty browser cache. Repeat View runs the same test immediately after, simulating a returning visitor whose browser has cached static assets. Comparing the two shows how much your caching strategy (Cache-Control headers, service workers) improves performance for repeat visits.

How do I test pages that require authentication?

Use the WebPageTest Script feature to automate authentication before the page load. You can navigate to the login page, fill in credentials using setValue commands, and submit the form -- or inject session cookies directly using setCookie. The script runs in a real browser instance, so JavaScript-heavy authentication flows work as expected.

What should I look for first in a WebPageTest waterfall?

Start at the top of the waterfall and look for the Start Render line (the vertical green line marking first paint). Any resource bar that starts before this line and is long -- particularly synchronous scripts in the document head or parser-blocking stylesheets -- is a candidate for optimization via defer, async, or preload. Also look for request chains where one resource triggers another, creating sequential delays that push discovery of your LCP resource late into the waterfall.

Priya Patel

Performance Consultant at WebVitals.tools

Priya specializes in deep performance analysis and web tooling for enterprise engineering teams. She has conducted WebPageTest audits and built CI performance pipelines for clients across e-commerce, fintech, and media publishing sectors.