Chrome DevTools Performance Panel: Record and Analyze Web Vitals
The Chrome DevTools Performance panel is the most powerful free tool available for diagnosing web performance problems at the code level. Unlike higher-level audit tools that tell you what is slow, DevTools shows you exactly why -- which JavaScript functions are blocking the main thread, when layout shifts occur and what triggers them, how long each user interaction takes before the browser can paint a response, and precisely where the Largest Contentful Paint element loads relative to every other resource request.
This tutorial walks through the complete workflow: opening the panel and configuring recording options, capturing a page-load trace, interpreting the Web Vitals lane and the flame chart, measuring Interaction to Next Paint using the Interactions track, applying CPU and network throttling to simulate real-device conditions, and finally exporting profiles for team sharing. By the end, you will have a repeatable process for finding and verifying performance improvements in any web project.
Step-by-step walkthrough
Open the Performance panel
Open Chrome DevTools by pressing F12 on Windows and Linux, or Cmd+Option+I on macOS. You can also right-click anywhere on the page and choose "Inspect", then navigate to the Performance tab. If the Performance tab is not visible in the tab bar, click the double-arrow overflow icon (>>) on the right side of the tab strip -- the panel is there but hidden due to narrow window width.
For the most usable timeline view, dock DevTools to the bottom of the browser window rather than the right side. The wider layout gives you more horizontal space in the timeline, making it easier to spot individual events and read the flame chart. You can change the dock position from the three-dot menu in the top-right of the DevTools panel.
Enable Web Vitals, Screenshots, and Memory
Before starting any recording, configure the checkboxes in the Performance panel toolbar. The three most important options are Web Vitals, Screenshots, and Memory. Each adds a separate lane to the timeline that gives you additional context when analyzing the recording.
The Web Vitals checkbox adds a dedicated lane at the very top of the timeline. This lane displays the LCP event as a blue diamond, layout shift events as pink regions, and INP measurements for any interactions recorded. Without this lane enabled, you would need to hunt for these events manually in the flame chart. Screenshots adds a filmstrip across the top of the timeline -- small thumbnail frames captured at regular intervals that let you see exactly what the user was seeing at any point in time. Memory adds a heap allocation graph that helps identify memory leaks and garbage collection pauses that cause visual jank.
Record a page load
To capture a full page-load trace, click the reload button (the circular arrow icon, not the plain record circle). DevTools reloads the page, records the entire loading sequence, and stops recording once the page becomes idle. For a cold-cache measurement that reflects first-time visitors, also check Disable cache in the Network panel before recording.
Wait until the recording completes and the timeline renders before interacting with the panel. On pages with heavy JavaScript, this can take 10-20 seconds. Once the timeline appears, you will see colored sections across the top representing CPU activity, a filmstrip if Screenshots was enabled, and the various lanes below (Web Vitals, Main, Interactions, Network, Frames).
// Step-by-step recording checklist:
1. Open DevTools (F12) and go to the Performance tab
2. Check: Web Vitals, Screenshots, Memory
3. Set CPU throttle: 4x slowdown (mid-range mobile)
4. Set Network: Fast 3G or Slow 4G
5. Click the RELOAD button (circular arrow) -- NOT the plain record circle
6. Wait for recording to stop automatically
7. The timeline renders -- do not click yet
8. Zoom to the first 5 seconds for LCP analysis: click and drag on the filmstrip row
Read the timeline -- LCP markers and CLS regions
The Web Vitals lane is the most important place to start. Locate the blue diamond -- that is the LCP event. Hover over it to see a tooltip showing the LCP element (e.g., img.hero-image or h1.page-title) and the exact timestamp in milliseconds. Click the diamond to pin the detail in the bottom pane, which shows the element's full CSS selector, its size, and whether it was loaded from the network or from cache.
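To corroborate the panel's LCP marker, you can also log LCP candidates from the Console with a PerformanceObserver. This is a minimal sketch, browser-only by nature; the guard makes it a harmless no-op in environments where the entry type is unsupported:

```javascript
// Log LCP candidates to the console. Returns false where the
// 'largest-contentful-paint' entry type is unsupported
// (e.g. Node, non-Chromium browsers).
function observeLcp(report = console.log) {
  if (typeof PerformanceObserver === 'undefined' ||
      !(PerformanceObserver.supportedEntryTypes || [])
        .includes('largest-contentful-paint')) {
    return false; // not supported in this environment
  }
  const po = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Each candidate may be superseded by a later, larger element;
      // the last one reported is the page's LCP element.
      report({ time: entry.startTime, element: entry.element, size: entry.size });
    }
  });
  // buffered: true replays entries from before the observer started
  po.observe({ type: 'largest-contentful-paint', buffered: true });
  return true;
}
```

Run it in the Console while the page loads and compare the last logged `time` against the blue diamond's timestamp in the Web Vitals lane.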
Layout shift events appear as pink highlighted regions in the Web Vitals lane. Each region represents one or more layout shifts that contributed to your CLS score. Click a layout shift region to see the affected elements in the detail pane -- you will see each element's ID or class, its previous bounding box, its new bounding box after the shift, and its individual impact score. This information is exactly what you need to trace CLS back to a missing image dimension, a late-loading font, or a dynamically injected banner advertisement.
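CLS aggregates those individual impact scores using session windows: shifts less than a second apart are grouped, each window is capped at five seconds, and shifts that follow recent user input are excluded. Here is a sketch of that aggregation over plain entry objects -- an illustration of the published windowing rules, not a DevTools API; the entry shape mirrors the browser's layout-shift entries:

```javascript
// Aggregate layout-shift entries (sorted by startTime) into a CLS
// score using session windows: shifts < 1s apart share a window,
// a window is capped at 5s, and shifts with recent user input are
// excluded. Entry shape assumed:
// { startTime: ms, value: number, hadRecentInput: boolean }
function computeCls(entries) {
  let cls = 0;          // worst session window seen so far
  let windowValue = 0;  // running total of the current window
  let windowStart = 0;
  let lastTime = 0;
  for (const e of entries) {
    if (e.hadRecentInput) continue; // user-initiated shifts don't count
    const startNewWindow =
      windowValue > 0 &&
      (e.startTime - lastTime >= 1000 ||   // > 1s gap since last shift
       e.startTime - windowStart >= 5000); // window already 5s long
    if (startNewWindow || windowValue === 0) {
      windowStart = e.startTime;
      windowValue = 0;
    }
    windowValue += e.value;
    lastTime = e.startTime;
    cls = Math.max(cls, windowValue);
  }
  return cls;
}
```

Feeding it the impact scores you read off the detail pane lets you predict how much a single fix (say, reserving space for a banner) would lower the page's CLS.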
Analyze the main thread with the flame chart
The Main lane contains the JavaScript execution flame chart -- the single most useful section for diagnosing slow interactions and render-blocking work. Expand the Main lane by clicking the triangle next to it. Long tasks appear as wide gray bars with a red triangle in the top-right corner. Any task that takes more than 50ms is considered long because it blocks the browser from responding to user input for that duration.
Click on any long task bar to select it. The detail pane at the bottom will show the total duration and a call stack summary. Switch to the Bottom-Up tab to see functions sorted by self time (time spent executing that function itself, not counting sub-calls). This is the quickest way to identify the exact function doing expensive work. Switch to the Call Tree tab to trace the full execution path from the top-level event handler down to the costly function. If your source maps are configured correctly, function names and file paths will be readable rather than minified.
// In the detail pane (bottom panel), four tabs are available:
Summary -- Total time breakdown: scripting, rendering, painting, idle
Bottom-Up -- Functions sorted by self time (best for finding root cause)
Call Tree -- Top-down execution path from the event root
Event Log -- Chronological log of all events in selected time range
// Workflow for diagnosing a long task:
1. Click the long task bar in the flame chart
2. Open "Bottom-Up" tab
3. Sort by "Self Time" column (click header)
4. The top row is the most expensive function
5. Click the "..." disclosure to expand and see the full call path
6. Click the source link on the right to jump to the code
// Enable source maps for readable function names:
DevTools Settings > Preferences > Sources > Enable JavaScript source maps
Measure INP via the Interactions track
INP (Interaction to Next Paint) measures the latency of the worst user interaction during a page visit. Unlike LCP, INP cannot be observed in a standard page-load recording -- you need to start a recording and then interact with the page. Click the plain record button (not the reload button), then perform realistic interactions: click buttons, open dropdown menus, type in search inputs, dismiss dialogs, and toggle filters. Stop recording after 30-60 seconds of interactions.
In the resulting timeline, expand the Interactions track. Each interaction appears as a horizontal bar. The bar's total length represents the full interaction latency from the user's input event to the browser's next paint. The bar is color-coded: green means the interaction was under 200ms (Good), yellow means 200-500ms (Needs Improvement), and red means over 500ms (Poor). Click any interaction bar to see a three-part breakdown in the detail pane: Input Delay (time from event to event handler start), Processing Time (event handler execution), and Presentation Delay (time from handler end to paint). Most INP problems stem from long Processing Time caused by heavy JavaScript in event handlers, or from long Presentation Delay caused by expensive style recalculation or layout work triggered by DOM mutations.
// INP score thresholds
Good: < 200ms (green)
Needs improvement: 200-500ms (yellow)
Poor: > 500ms (red)
// The three phases of an interaction (click the bar to see):
1. Input Delay
-- From: user presses button / mouse click
-- To: event handler starts executing
-- Caused by: main thread busy with another task
-- Fix: reduce long tasks; defer non-critical JS
2. Processing Time
-- From: event handler starts
-- To: event handler finishes (all microtasks done)
-- Caused by: heavy synchronous JS in click handlers
-- Fix: yield with scheduler.yield() or setTimeout(fn,0)
3. Presentation Delay
-- From: handler finishes
-- To: browser paints the next frame
-- Caused by: layout thrashing, expensive CSS recalcs
-- Fix: batch DOM writes; avoid layout-triggering reads
inside animation frames
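The yielding fix for long Processing Time can be sketched as a chunked loop that returns control to the main thread between batches. The helper names and chunk size below are illustrative; scheduler.yield() is used where the browser supports it, with a setTimeout fallback elsewhere:

```javascript
// Return a promise that resolves after yielding to the main thread,
// so queued input events can run before the next chunk starts.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && scheduler.yield) {
    return scheduler.yield(); // continuation is prioritized by the browser
  }
  return new Promise((resolve) => setTimeout(resolve, 0)); // fallback
}

// Illustrative helper: filter a large array without producing one
// long task -- each chunk is a separate, short main-thread task.
async function filterInChunks(items, predicate, chunkSize = 500) {
  const out = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      if (predicate(item)) out.push(item);
    }
    await yieldToMain(); // one long task becomes many short ones
  }
  return out;
}
```

After applying a change like this, re-record the interaction: the single wide bar in the Main lane should split into several sub-50ms tasks, and the interaction bar should turn green.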
Throttle CPU and network for realistic conditions
Performance metrics measured on a developer machine -- typically a fast laptop with a wired or Wi-Fi connection -- will be dramatically better than what median real users experience. A developer machine runs JavaScript 5-10x faster than the median Android phone used by mobile visitors. Without throttling, you risk declaring performance "good" in your lab tests while real users on mobile are experiencing Poor ratings in the field.
The CPU throttle dropdown is in the Performance panel toolbar. Select 4x slowdown to approximate a mid-range Android phone (the recommended setting for general mobile testing, and the value Lighthouse uses). Select 6x slowdown for low-end device simulation. The Network throttle is adjacent: choose Fast 3G (1.6 Mbps down, 750 Kbps up, 150ms latency) or Slow 4G for mobile testing, and leave it at No Throttling for desktop users. Apply both CPU and network throttling simultaneously -- each alone is an incomplete simulation.
Export the profile and share
After capturing a recording that shows a problem, save it immediately before closing DevTools. Click the download icon (or press the Save Profile button) in the Performance panel toolbar. DevTools exports the recording as a .json file -- these files can range from a few hundred kilobytes to tens of megabytes depending on the recording length and the amount of data captured.
Share the exported .json file with teammates or attach it to a GitHub issue or Jira ticket. Anyone can load it back into Chrome DevTools via the upload icon in the Performance panel, without needing to reproduce the issue themselves. This is invaluable for async collaboration -- a back-end engineer can open the profile and see exactly which JavaScript function is the bottleneck, even without access to the production environment. Saved profiles also provide a performance baseline: record a profile before and after an optimization, then compare the two side by side to confirm that long tasks shortened and metric markers moved left.
Pro tips
Use Incognito for clean recordings
Chrome extensions and cached service workers can distort your measurements. Open an Incognito window before recording so you get a clean environment without extension overhead or stale cache artifacts. Press Ctrl+Shift+N (Windows) or Cmd+Shift+N (macOS) to open an Incognito window quickly.
Add performance marks in your code
Use performance.mark('my-feature-start') and performance.mark('my-feature-end') around a unit of work, then performance.measure('my-feature', 'my-feature-start', 'my-feature-end') to annotate the recording with named events. These marks and measures appear in the Timings track of the Performance panel, making it easy to correlate user-visible events with low-level execution work.
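A small wrapper keeps the mark names consistent. The function and measure names below are illustrative; the same code also runs in Node (where performance has been a global since v16), so it doubles as an instrumentation utility in scripts:

```javascript
// Wrap a unit of work in User Timing marks; the resulting measure
// appears in the Performance panel's Timings track under `name`.
function timed(name, fn) {
  performance.mark(`${name}-start`);
  const result = fn();
  performance.mark(`${name}-end`);
  performance.measure(name, `${name}-start`, `${name}-end`);
  return result;
}

// Example: time a synchronous computation (name is illustrative).
const total = timed('sum-rows', () =>
  Array.from({ length: 1000 }, (_, i) => i).reduce((a, b) => a + b, 0)
);
```

During a recording, each `timed(...)` call shows up as a labeled bar you can line up against the flame chart below it.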
Compare before and after with side-by-side profiles
Load the baseline profile and the optimized profile into the Performance panel of two separate Chrome windows via the upload icon. With the two timelines side by side, you can directly compare LCP timestamps, long task counts, and interaction latencies between a baseline build and an optimized build.
Inspect third-party scripts separately
In the flame chart, third-party script activity often dominates the main thread. Click a long task and check the Summary tab to see which script file it originates from, then switch to the Bottom-Up tab and use its grouping dropdown to group activity by domain. Cross-reference with the Network tab filtered to third-party domains. If a tag manager or analytics library causes most long tasks, consider loading it with the async or defer attribute, or moving it behind a user-consent interaction.
Use remote debugging for real-device accuracy
Connect an Android phone via USB, enable USB Debugging in Developer Options, and navigate to chrome://inspect on your desktop Chrome. Select your mobile device and open a remote DevTools session. You will get a Performance panel recording from the actual phone hardware, with real-world CPU speed, thermal conditions, and GPU performance -- far more accurate than software throttling on a laptop.
Focus on the 75th-percentile experience
Google's Good threshold for Web Vitals is measured at the 75th percentile of field data -- meaning 75% of real user sessions must meet the target. A single optimized recording in DevTools represents the best case. Aim for lab measurements well below the thresholds (under 1.5s LCP, under 150ms INP) to leave headroom for the variance between users with slower devices and connections.
Common issues
LCP is delayed by a render-blocking resource
The most frequent cause of high LCP in the Performance panel is a render-blocking script or stylesheet that delays the browser from painting the LCP element. In the Network lane of the timeline, look for a long orange or purple bar early in the load that extends past the LCP timestamp. This bar represents a parser-blocking resource. To fix it: add async or defer to non-critical script tags, inline critical CSS directly in the document <head>, and use <link rel="preload"> for the LCP image if it is discovered late in the HTML or behind a stylesheet.
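The fixes above can be sketched in the document head. The file paths here are hypothetical placeholders for your own assets:

```html
<head>
  <!-- Inline only the critical above-the-fold CSS so first paint
       does not wait on a stylesheet request -->
  <style>/* critical above-the-fold rules */</style>

  <!-- Preload the LCP image so the browser discovers it immediately
       (path is illustrative) -->
  <link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">

  <!-- Non-critical script no longer blocks the parser -->
  <script src="/js/analytics.js" defer></script>
</head>
```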
CLS caused by web font swapping
A common source of layout shift is web fonts loading after text has already rendered in a fallback font, causing text blocks to reflow when the web font's metrics apply. In the Web Vitals lane, a layout shift at 1-3 seconds into the load whose affected element is a text node is the signature pattern. Fix it by adding font-display: optional, or font-display: swap combined with size-adjusted fallback metrics using the CSS size-adjust, ascent-override, and descent-override descriptors in your @font-face blocks. Preloading the font file with <link rel="preload" as="font"> also shrinks the swap window significantly.
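A sketch of the size-adjusted fallback approach -- the font name, source path, and override percentages below are placeholders you would tune to the real font's metrics:

```css
/* Fallback face whose metrics approximate the web font, so the
   swap causes little or no reflow. Values here are illustrative. */
@font-face {
  font-family: "BrandSans";
  src: url("/fonts/brand-sans.woff2") format("woff2");
  font-display: swap;
}
@font-face {
  font-family: "BrandSans-fallback";
  src: local("Arial");
  size-adjust: 105%;
  ascent-override: 92%;
  descent-override: 24%;
}
body {
  font-family: "BrandSans", "BrandSans-fallback", sans-serif;
}
```

Re-record after the change: the pink layout shift region around the font swap should shrink or disappear from the Web Vitals lane.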
High INP from unoptimized event handlers
When the Interactions track shows yellow or red bars, click the longest one and check the Processing Time breakdown. A long Processing Time nearly always means the event handler is doing too much synchronous work in a single call -- filtering a large array, re-rendering an entire component tree, or making a synchronous XHR. The fix depends on the cause: for large data processing, move the work to a Web Worker; for React component re-renders, use React.memo, useMemo, and useTransition to defer non-urgent updates; for animation-related work, batch reads and writes using requestAnimationFrame. After each code change, record a new interaction profile and verify that the bar for the fixed interaction is now green.
Noisy or inconsistent recordings
Performance profiles can vary significantly between runs due to browser caching, background OS processes, garbage collection timing, and JIT compilation state. If your recordings are giving wildly different LCP or task durations on successive runs, try these stabilization steps: disable browser extensions (use Incognito), close other browser tabs and heavy applications, record with the browser window focused (background tabs are throttled), and run at least three recordings and compare the median. If variance persists, the inconsistency itself is meaningful -- it may indicate that a flaky third-party script or non-deterministic lazy-loading mechanism is causing real-world variance in your users' experience.
Summary
| Panel / Feature | What It Shows | Best Used For |
|---|---|---|
| Web Vitals lane | LCP marker, CLS regions, INP events | First pass -- locate CWV events on the timeline |
| Flame chart (Main) | JS call stacks, long tasks, render work | Identifying which function is blocking the thread |
| Interactions track | Per-interaction latency with 3-phase breakdown | Diagnosing INP -- input delay vs processing vs paint |
| Network lane | Resource load timing overlaid on the timeline | Correlating resource loads with rendering events |
| Screenshots filmstrip | Visual progress frames at ~100ms intervals | Verifying what the user sees at any point in loading |
| CPU throttle | Software simulation of slower device JS speed | Realistic mobile performance measurement (4x) |
| Bottom-Up tab | Functions sorted by self execution time | Finding the root-cause function in a long task |
Frequently asked questions
What is the difference between the record button and the reload button in the Performance panel?
The circular record button starts recording without reloading the page -- useful for capturing interactions on an already-loaded page. The reload button (circular arrow) reloads the page from scratch and automatically stops recording when the page becomes idle. For measuring LCP and page-load metrics, use the reload button, with Disable cache checked in the Network panel, to get a cold-cache baseline. For measuring INP and interaction performance, use the plain record button after the page has fully loaded.
How do I find which JavaScript function is causing a long task?
In the flame chart, click a long task bar (gray with red triangle in the top-right corner). The Bottom-Up and Call Tree tabs in the detail pane show the most expensive functions sorted by self time and total time. Self time is the time spent in the function itself, excluding callees, so it is the most useful for identifying the root cause. Enable source maps in DevTools Settings to see original source file names instead of minified identifiers.
Why does my recorded LCP differ from what PageSpeed Insights reports?
PageSpeed Insights shows field data from CrUX (real Chrome users over 28 days) alongside a Lighthouse lab simulation. The Performance panel measures a single lab run on your local machine with your network connection and hardware. Differences arise from network conditions, device speed, cache state, and geographic location. Use DevTools with 4x CPU throttle and Fast 3G network throttle enabled to get a closer approximation of median-user conditions, but always treat CrUX field data as the authoritative source for SEO ranking purposes.
Can I use the Performance panel to debug Cumulative Layout Shift?
Yes. Enable the Web Vitals checkbox before recording, then look for pink highlighted regions in the Web Vitals lane -- these represent layout shift windows. Click a layout shift event to see the affected elements listed in the detail pane, including the element selector, the previous and current bounding boxes, and the individual impact score. The Experience track also labels layout shift events. Combine this with the Layout Shift regions to trace CLS back to specific DOM mutations and their timing relative to other events on the timeline.
How much CPU throttling should I apply to simulate real users?
For global audiences, 4x CPU slowdown approximates a mid-range Android phone -- this is the recommended setting and what Lighthouse uses for its mobile simulation. For emerging markets or budget devices, use 6x. For desktop audiences, no CPU throttling is typically needed unless you are targeting users with older or low-power machines. Always combine CPU throttling with network throttling -- Fast 3G or Slow 4G for mobile testing -- to simulate both the compute and bandwidth constraints that real users experience simultaneously.