Video Tutorial

Lighthouse Audit Walkthrough: Complete Tutorial for 2026

[Mock report preview: Performance 94; FCP 1.1 s, LCP 1.9 s, TBT 220 ms, CLS 0.04, SI 2.3 s. Opportunities: eliminate render-blocking resources (-0.45 s), properly size images (-0.30 s), reduce unused JavaScript (-0.15 s). Diagnostics: minimize main-thread work (3.2 s), avoid enormous network payloads (2.4 MB total). YouTube video coming soon.]

Lighthouse is Chrome's built-in auditing engine -- and it is the most direct way to get a performance score, identify exactly which resources are slowing your page down, and generate an ordered list of fixes. Every web developer should know how to run it, read it accurately, and act on what it says. This walkthrough covers the entire workflow from opening the panel to integrating Lighthouse into your CI pipeline.

Understanding Lighthouse starts with understanding what it actually measures. Unlike field data tools that collect metrics from real users over weeks, Lighthouse runs a single controlled test in a simulated environment: a throttled 4G network and a 4x CPU slowdown that approximates a mid-range Android phone. That controlled environment is intentional -- it gives you a reproducible baseline for debugging, even if the absolute score does not perfectly match what your real users experience.

Step-by-step walkthrough

Step 1: Open DevTools and navigate to the Lighthouse tab

Open Chrome DevTools using F12 on Windows/Linux or Cmd+Opt+I on macOS. The Lighthouse tab is in the main tab bar alongside Elements, Console, Network, and Performance. If you do not see it, click the double-chevron (>>) at the right end of the tab bar -- it holds overflow tabs that do not fit the current window width.

One important prerequisite: run your Lighthouse audit in Incognito mode. Browser extensions can interfere with audit results by injecting scripts, modifying the DOM, or consuming CPU cycles that inflate TBT. In Incognito mode, extensions are disabled by default. Open an Incognito window with Cmd+Shift+N (Mac) or Ctrl+Shift+N (Windows), navigate to your URL, then open DevTools and go to the Lighthouse tab.

Step 2: Configure audit categories and device mode

The Lighthouse panel presents two key configuration options before running: which categories to audit and which device to simulate. For a performance-focused audit, select Performance at minimum. You can also add Accessibility, Best Practices, and SEO if you want a broader quality check -- each category adds modest time to the audit.

For device, always start with Mobile. Google uses mobile-first indexing and evaluates your site's Core Web Vitals on mobile by default. The mobile simulation applies 4x CPU throttling and a slow-4G network preset (roughly 1.6 Mbps download, 150 ms RTT), which represents a mid-range Android device on an average connection. If your analytics show 80%+ desktop traffic, also run a Desktop audit -- but fix mobile first regardless.

Note: The "Simulated throttling" option (default) runs the page at full speed then mathematically adjusts the result. "Applied throttling" actually slows the network and CPU during the recording. Applied throttling is more accurate but slower. For day-to-day debugging, simulated throttling is fine. For the most reliable scores, use the CLI with applied throttling.

Step 3: Run the audit and read the report

Click the Analyze page load button. Lighthouse will reload the page in the throttled environment, collect timing data, run a battery of audits against the loaded page, and generate the report. The whole process takes between 30 and 90 seconds depending on page complexity and machine speed.

At the top of the report you will see colored gauge dials for each audited category, each showing a 0-100 score. The color bands are: 0-49 (red, Poor), 50-89 (orange, Needs Improvement), and 90-100 (green, Good). These bands apply at the category level -- individual metrics inside Performance have their own thresholds (for example, LCP is rated Good below 2.5 seconds, regardless of the overall Performance score).
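
Those category bands are easy to misremember; as a quick reference, the mapping can be sketched in shell (the `band` helper is ours, not part of Lighthouse):

```shell
# Map a 0-100 category score to its Lighthouse color band
band() {
  if   [ "$1" -ge 90 ]; then echo "Good"
  elif [ "$1" -ge 50 ]; then echo "Needs Improvement"
  else                       echo "Poor"
  fi
}

band 94   # -> Good
band 71   # -> Needs Improvement
```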

Shell -- Lighthouse CLI equivalent of one DevTools audit run
# Install Lighthouse globally
npm install -g lighthouse

# Mobile simulation is the default, matching DevTools
lighthouse https://your-site.com \
  --output=html \
  --output-path=./audit-report.html \
  --view

# The CLI performs one run per invocation; loop to collect
# several reports and take the median score yourself
for i in 1 2 3; do
  lighthouse https://your-site.com \
    --output=json \
    --output-path="./audit-run-$i.json" \
    --quiet
done

Step 4: Interpret the Performance score breakdown

The Performance score is not any single metric -- it is a weighted average of five lab measurements (Time to Interactive was removed from the score in Lighthouse 10). Knowing the weights helps you prioritize: fix the heavy-weight metrics first. Here is the breakdown for 2026:

Metric                          Weight   Good threshold   What it measures
Total Blocking Time (TBT)       30%      < 200 ms         Main-thread blocking between FCP and TTI
Largest Contentful Paint (LCP)  25%      < 2.5 s          Time to render the largest above-fold element
Cumulative Layout Shift (CLS)   25%      < 0.1            Visual stability -- unexpected element shifts
First Contentful Paint (FCP)    10%      < 1.8 s          Time to first rendered text or image
Speed Index (SI)                10%      < 3.4 s          How quickly content is visually populated

Because TBT carries the most weight (30%), reducing main-thread work has the biggest single impact on the score. Large JavaScript bundles, third-party scripts, and long-running event handlers are the primary culprits. The Diagnostics section will point to the specific scripts contributing to TBT.
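
Mechanically, Lighthouse first maps each raw metric onto a 0-1 subscore via a log-normal curve, then takes the weighted sum. A rough sketch of that last step (`weighted_score` is our own helper, and the subscore values in the example call are made up):

```shell
# weighted_score takes (weight, subscore) pairs and prints the 0-100 score.
# Real subscores come from a report's .audits[...].score fields,
# each already normalized to the 0-1 range.
weighted_score() {
  awk 'BEGIN { t = 0
    for (i = 1; i < ARGC; i += 2) t += ARGV[i] * ARGV[i+1]
    printf "%.0f\n", 100 * t }' "$@"
}

#              TBT        LCP        CLS        FCP        SI
weighted_score 0.30 0.95  0.25 0.90  0.25 1.00  0.10 0.88  0.10 0.92
# -> 94
```

Plugging in hypothetical subscores like this is a quick way to see how much a given metric improvement would move the overall score.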

Step 5: Review Opportunities and Diagnostics

Below the score gauges, the Lighthouse report is divided into three sections: Opportunities, Diagnostics, and Passed audits. Opportunities are the most actionable -- each one shows an estimated potential time savings in seconds, calculated based on the specific resources found on your page. Fix the highest-saving Opportunity first.

Common Opportunities and what they mean in practice:

  • Eliminate render-blocking resources: CSS or JavaScript loaded in <head> that delays FCP. Fix by adding defer to non-critical JS and inlining critical CSS.
  • Properly size images: Images served at larger dimensions than displayed. Fix by serving responsive images with srcset and correct sizes attributes, or using a CDN with on-the-fly resizing.
  • Serve images in next-gen formats: images shipped as JPEG/PNG where WebP or AVIF would be smaller. Fix by converting images and serving WebP with a JPEG fallback via <picture>.
  • Reduce unused JavaScript: Script bytes downloaded but not executed during page load. Fix by code-splitting and lazy-loading routes with dynamic import().
  • Reduce unused CSS: Large CSS bundles with rules not applied to the current page. Fix using CSS Modules, scoped styles, or a tool like PurgeCSS.

Diagnostics do not carry estimated savings but highlight structural problems. "Avoid enormous network payloads" (over 1.6 MB) suggests your total page weight is excessive. "Minimize main-thread work" breaks CPU time down by category (script evaluation, layout, style recalculation), showing where the time actually goes.
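
The numeric values behind these Diagnostics live in the saved report JSON. For example, total page weight is the `total-byte-weight` audit; a sketch of pulling it out with jq (the report file below is a fabricated stand-in containing only that one field):

```shell
# Fabricated stand-in; a real saved report exposes the same path
echo '{"audits":{"total-byte-weight":{"numericValue":2516582}}}' > report.json

# Convert bytes to MB, rounded to two decimals
jq '.audits["total-byte-weight"].numericValue / 1048576 * 100 | round / 100' report.json
# -> 2.4
```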

Step 6: Save and compare reports

After running an audit, open the tools menu (three-dot icon) in the top-right corner of the report and choose Save as JSON. This JSON file is a complete machine-readable representation of the entire audit -- every metric, every audit finding, every resource timing.

To compare two saved reports side-by-side, go to the Lighthouse Report Viewer at googlechrome.github.io/lighthouse/viewer and drag-drop your JSON files. The viewer renders a full Lighthouse report from the JSON and lets you examine the historical audit as if you had just run it. For before/after comparisons, load the baseline JSON and the post-fix JSON in separate browser tabs.

Shell -- compare two CLI audit runs programmatically
# Run a baseline audit and save to JSON
lighthouse https://your-site.com \
  --output=json \
  --output-path=./baseline.json \
  --quiet

# After making changes, run again
lighthouse https://your-site.com \
  --output=json \
  --output-path=./after-fix.json \
  --quiet

# Compare performance scores using jq
echo "Baseline score:"
jq '.categories.performance.score * 100' baseline.json

echo "After-fix score:"
jq '.categories.performance.score * 100' after-fix.json

# Compare LCP values
echo "Baseline LCP (ms):"
jq '.audits["largest-contentful-paint"].numericValue' baseline.json

echo "After-fix LCP (ms):"
jq '.audits["largest-contentful-paint"].numericValue' after-fix.json
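
The same two files can feed a simple regression gate in a script. A sketch (the report contents below are fabricated stand-ins carrying only the one field real reports share):

```shell
# Fabricated stand-ins; real saved reports expose the same path
echo '{"categories":{"performance":{"score":0.82}}}' > baseline.json
echo '{"categories":{"performance":{"score":0.91}}}' > after-fix.json

before=$(jq '.categories.performance.score * 100 | round' baseline.json)
after=$(jq '.categories.performance.score * 100 | round' after-fix.json)
echo "Score moved from $before to $after"

# Fail the script if the score regressed by more than 2 points
[ "$after" -ge $((before - 2)) ] || { echo "Performance regression!"; exit 1; }
```

The 2-point tolerance is an arbitrary example; pick one wider than your observed run-to-run variance so the gate does not flap.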

Step 7: Use the Lighthouse CLI for CI integration

Running Lighthouse manually in DevTools is useful during development, but regressions inevitably slip through when audits are not automated. The Lighthouse CI package (@lhci/cli) integrates with any continuous integration system -- GitHub Actions, GitLab CI, CircleCI, Jenkins -- and blocks merges when performance falls below your thresholds.

The configuration file (.lighthouserc.yml or .lighthouserc.json) specifies which URLs to test, how many runs to average, which assertions to enforce, and where to upload results. The LHCI server (optional) stores historical data so you can see your score trend across every commit.

YAML -- .lighthouserc.yml for GitHub Actions
ci:
  collect:
    url:
      - https://your-site.com/
      - https://your-site.com/blog/
      - https://your-site.com/products/
    numberOfRuns: 3
    # mobile simulation is the Lighthouse default; set settings.preset: desktop to override
  assert:
    assertions:
      categories:performance:
        - error
        - minScore: 0.85
          aggregationMethod: median-run
      largest-contentful-paint:
        - warn
        - maxNumericValue: 2500
          aggregationMethod: median-run
      cumulative-layout-shift:
        - error
        - maxNumericValue: 0.1
          aggregationMethod: median-run
      total-blocking-time:
        - warn
        - maxNumericValue: 300
          aggregationMethod: median-run
  upload:
    target: temporary-public-storage
YAML -- GitHub Actions workflow step
- name: Run Lighthouse CI
  run: |
    npm install -g @lhci/cli@0.14.x
    lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}

Step 8: Prioritize fixes and track progress

A Lighthouse audit will typically surface 10-20 issues. Attempting to fix everything at once is ineffective -- changes interact in unexpected ways, and you lose the ability to attribute score changes to specific fixes. Instead, work in focused sprints: pick the top 2-3 Opportunities by estimated savings, implement only those fixes, re-run the audit, and confirm the expected improvement before moving on.

Maintain a simple tracking document that records the metric values and Lighthouse score before and after each sprint. This builds institutional knowledge about which types of changes move the needle on your specific stack. For a Next.js site, image optimization via next/image might be transformative. For a Shopify theme, third-party script management might dominate. Your before/after data will show you which categories of work deliver the best return for your specific architecture.

Important: Lighthouse scores (0-100) are lab data. Verify that improvements in Lighthouse translate to improvements in your field data (CrUX / PageSpeed Insights field section) after 28 days. The two do not always move in lockstep -- real users on real devices and networks can behave differently from the Lighthouse simulation.

Pro tips

Run three audits and use the median

Lighthouse score variance of 5-15 points between runs on the same page is normal. This is caused by non-determinism in the JavaScript engine, OS scheduler interference, and DNS lookup timing. Run three audits back-to-back (or set numberOfRuns: 3 in Lighthouse CI) and use the median result. Never make decisions based on a single run.
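
Picking the median of three scores is a one-liner once the scores are in hand (the `median3` helper is ours):

```shell
# Median of three numbers: sort numerically, take the middle line
median3() { printf '%s\n' "$@" | sort -n | sed -n '2p'; }

median3 88 94 91   # -> 91
```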

Inspect the LCP element before diving into fixes

The Lighthouse report identifies your LCP element in the "Largest Contentful Paint" audit row -- click "Expand details" to see the exact element. Common LCP elements include hero images, banner images, and large text blocks. Knowing the exact element tells you whether to optimize image delivery, font loading, or server response time.

Throttle in DevTools Network panel to stress-test

After running Lighthouse, switch to the Network tab and set throttling to "Slow 3G". Reload the page manually and observe which resources block rendering longest. This manual walkthrough often surfaces issues (render-blocking fonts, hero images without preload hints) that Lighthouse flags in the abstract but that are easier to understand visually in the waterfall.

Use the treemap to find large JavaScript modules

Click "View Treemap" at the top of the Lighthouse report. This opens a visual breakdown of every JavaScript module loaded, sized proportionally by byte count. Large vendor bundles and unused code paths are immediately visible. Use this to identify candidates for code-splitting and tree-shaking.

Check third-party impact with the blocking filter

Run two audits: one normal and one with third-party scripts blocked via the Network panel's request blocking feature. The score difference tells you exactly how much performance cost your third-party scripts impose. If blocking ads, chat widgets, and analytics improves TBT by 400ms, that is your real target.

Always audit the actual production URL

Staging environments often differ from production in CDN configuration, caching headers, image compression, and third-party script loading. A staging audit can give a false picture -- either optimistically (fewer third-party scripts) or pessimistically (no CDN edge caching). When you are measuring for a production decision, audit the production URL.

Common issues

Score variance between DevTools and PageSpeed Insights

Developers frequently see a score of 92 in DevTools Lighthouse and 71 in PageSpeed Insights (PSI) for the same URL. This discrepancy is normal and expected. DevTools Lighthouse runs on your local machine with your CPU and network, then applies a mathematical throttling model. PSI runs on Google's dedicated servers in controlled data centers with applied throttling on real network hardware. The PSI environment is more representative of actual user conditions.

If your DevTools score consistently runs 15+ points higher than PSI, your page is sensitive to real-world variable conditions: perhaps long DNS resolution times, slow CDN edge nodes, or heavyweight JavaScript that hits harder on the constrained PSI hardware than the simulation models. Focus your optimizations on the PSI score rather than the DevTools score.

High TBT despite small JavaScript bundles

Total Blocking Time accumulates from any task longer than 50ms on the main thread -- not just JavaScript parsing and evaluation. Inline script execution, third-party synchronous scripts, style recalculation triggered by JavaScript DOM manipulation, and even some CSS animations can contribute to TBT. If your bundle analyzer shows small JavaScript files but your TBT is still high, look at the Performance panel's Main thread breakdown. Sort by "Total time" to find which task types are dominating.
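
The 50 ms rule means TBT is the sum of each long task's excess over 50 ms, whatever produced the task. A sketch with hypothetical task durations in ms (`tbt_ms` is our helper, not a Lighthouse API):

```shell
# Sum the portion of each main-thread task that exceeds 50 ms
tbt_ms() { printf '%s\n' "$@" | awk '$1 > 50 { t += $1 - 50 } END { print t + 0 }'; }

# Tasks of 30, 120, 80, 45, and 250 ms block for (70 + 30 + 200) = 300 ms
tbt_ms 30 120 80 45 250   # -> 300
```

This is why one 250 ms task hurts far more than five 50 ms tasks: only the portion above 50 ms counts, and short tasks contribute nothing.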

Third-party scripts are a frequent surprise. A chat widget loaded synchronously in <head> can add 300ms of TBT on its own. Move all non-critical third-party scripts to load with defer or async, and consider loading the most heavyweight ones only after the first user interaction.

LCP image appears fast in DevTools but slow in field data

Lighthouse discovers your LCP image in a controlled environment where the browser has no cache and the network is throttled uniformly. Real users have varying connection speeds, CDN edge proximity, and device memory constraints. If your LCP image is not being served via a globally distributed CDN, users far from your origin server will experience dramatically longer fetch times than Lighthouse simulates.

Additionally, check whether your LCP image has a fetchpriority="high" attribute. Without it, the browser deprioritizes image loading relative to render-blocking CSS and JavaScript. Adding fetchpriority="high" to your LCP image element can reduce LCP by 200-500ms in field data without changing a single byte of the image itself.

Passed audits masking real issues

The "Passed audits" section is collapsed by default, and developers rarely expand it. This is a mistake. Some passed audits are borderline -- for example, a page might pass "Avoid large layout shifts" with a CLS of 0.09 (just under the 0.1 threshold). If that same page has user-initiated interactions that trigger additional layout shifts, real-world CLS measured by the web-vitals library may exceed the threshold. Always review the specific numeric values in passed audits, not just the fact that they passed.

Summary

Step  Action                                 Key outcome                               Time
1     Open DevTools Lighthouse tab           No extensions needed; use Incognito       1 min
2     Configure categories and device        Always start with Mobile                  1 min
3     Run audit, read score gauges           0-100 score per category                  2 min
4     Interpret Performance metric weights   TBT (30%) and LCP (25%) dominate          5 min
5     Review Opportunities and Diagnostics   Ordered list of fixes with time savings   5 min
6     Save JSON, compare before/after        Historical record of audit results        3 min
7     Integrate Lighthouse CI                Automated regression prevention           30 min setup
8     Prioritize and track fix sprints       Steady score improvement over time        Ongoing

Frequently asked questions

Why does my Lighthouse score vary between runs?

Lighthouse uses CPU and network throttling to simulate a mid-range mobile device, but the host machine's workload at the moment of the test still affects results. Background processes, garbage collection pauses, and variability in DNS resolution can shift scores by 5-10 points. Run at least three audits and use the median result. For more stable scores, run the CLI five times in a loop (each invocation performs a single run) and check the variance across the saved reports. If your scores vary by more than 15 points, investigate whether a background process is consuming significant CPU during audits.

What is the difference between TBT and INP?

Total Blocking Time (TBT) is a lab metric that measures the total time the main thread was blocked by tasks longer than 50ms between FCP and TTI. It correlates with INP but is not the same thing. INP is a field metric measuring the actual latency of real user interactions. A low TBT strongly predicts a good INP, but you can have poor INP from specific interaction handlers even with a decent TBT. Use TBT in Lighthouse for debugging and INP from field data (PageSpeed Insights field section, CrUX) for the definitive answer on real-user interactivity.

Should I audit in Incognito mode?

Yes. Running Lighthouse in Incognito mode prevents browser extensions from interfering with the audit. Extensions can inject scripts, modify the DOM, or consume CPU resources that inflate TBT and LCP measurements. Lighthouse does warn when extensions are detected, but disabling them via Incognito is the safest approach for consistent, reproducible results. Note that Incognito mode also starts with an empty cache, which means the audit always runs a fresh first-visit simulation -- exactly what you want for a performance baseline.

How do Lighthouse scores translate to Core Web Vitals pass/fail?

Lighthouse scores (0-100) are lab measurements and do not directly determine CWV pass/fail status. Core Web Vitals pass/fail is determined by field data from CrUX using the 75th percentile of real user measurements over a 28-day window. A Lighthouse score of 90+ strongly suggests good field data, but the relationship is not guaranteed -- a page with a score of 85 might still pass CWV if real users have faster devices and connections than the Lighthouse simulation assumes. Conversely, a lab score of 90 can coexist with poor field data if third-party scripts, large images, or server performance degrade the experience for real users in ways the simulation does not fully capture.
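
The 75th-percentile rule can be illustrated with a nearest-rank sketch over hypothetical per-session LCP samples (`p75` is our helper; CrUX computes this over millions of real sessions):

```shell
# Nearest-rank 75th percentile: sort, then take the ceil(0.75 * N)-th value
p75() {
  printf '%s\n' "$@" | sort -n |
    awk '{ a[NR] = $1 } END { r = int(NR * 0.75); if (r < NR * 0.75) r++; print a[r] }'
}

# Hypothetical LCP samples (ms); p75 must be "Good" (< 2500 ms) to pass CWV
p75 1200 1800 2100 3000 2400 1500 2600 1900   # -> 2400
```

Note how one slow outlier (3000 ms) does not fail the page: the metric asks what the slowest quarter of sessions experience, not the worst single session.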

Can I run Lighthouse on a staging environment behind a login?

Yes, using the Lighthouse CLI. Use the --extra-headers flag to pass authentication cookies or tokens, or use Puppeteer to navigate to an authenticated page before running the audit programmatically. The Chrome DevTools Lighthouse panel can also audit authenticated pages if you are already logged in -- it runs in the same browser session, preserving your cookies and session state. For Lighthouse CI in automated environments, a common pattern is to use a headless Puppeteer script to log in and extract a session cookie, then pass that cookie as a header to the Lighthouse CLI via --extra-headers='{"Cookie": "session=..."}'.
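
Building that --extra-headers value by hand invites quoting bugs; jq can construct it safely. A sketch (the cookie value and staging URL are placeholders):

```shell
# Construct the header JSON with jq so special characters are escaped correctly
SESSION_COOKIE="session=PLACEHOLDER"
HEADERS=$(jq -cn --arg cookie "$SESSION_COOKIE" '{Cookie: $cookie}')
echo "$HEADERS"   # -> {"Cookie":"session=PLACEHOLDER"}

# Then pass it straight to the CLI:
# lighthouse https://staging.your-site.com --extra-headers="$HEADERS"
```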

Alex Rivera

Senior Performance Engineer at WebVitals.tools

Alex specializes in Lighthouse-driven performance optimization and CI integration. With over a decade of experience auditing production sites across e-commerce, SaaS, and media, Alex has developed workflows that translate lab audit findings into measurable field data improvements. Author of the WebVitals.tools Lighthouse CI starter configuration.