Core Web Vitals FAQ

52 plain-English answers about Core Web Vitals -- from the definitions of LCP, CLS, and INP to the specific fixes that move the needle in production. Every answer links to deeper guides so you can take the next step.

This page collects the questions we hear most often from teams shipping performance work -- whether they are trying to pass Google's thresholds for ranking, cut bounce rate, or just catch regressions before they reach users. We keep it up to date as browsers, tools, and Google's guidance evolve. Every answer is sourced from web.dev, the Chrome team's public research, and our own measurements across thousands of real production sites.

If a question you have is missing, check the more detailed LCP, CLS, INP, and TTFB guides, browse the fixes library for framework-specific patterns, or watch one of the video tutorials. The questions below are marked up with FAQPage structured data so Google can surface them directly in search results, and we update this page whenever we learn something new is being asked.

52 questions · 5 categories · Last updated April 22, 2026
Categories: Metrics · Tools · Optimization · Frameworks · Hosting

Metrics

14 questions

What are Core Web Vitals?

Core Web Vitals are a set of field metrics Google uses to measure real-world user experience on the web. As of 2026 the three core metrics are Largest Contentful Paint (LCP) for loading, Cumulative Layout Shift (CLS) for visual stability, and Interaction to Next Paint (INP) for responsiveness. They are collected from real Chrome users and exposed in the Chrome UX Report (CrUX) and Google Search Console.

What is a good LCP score?

A good LCP is 2.5 seconds or less at the 75th percentile of visits to a page. Between 2.5 and 4.0 seconds is classed as "needs improvement" and anything above 4.0 seconds is "poor". LCP measures the render time of the largest image or text block visible within the viewport during initial load.

What is a good CLS score?

A good CLS is 0.1 or less at the 75th percentile. Between 0.1 and 0.25 is classed as "needs improvement", and above 0.25 is "poor". CLS measures how much visible content shifts unexpectedly during the page lifecycle, scored as the largest "session window" of shifts: shifts that occur close together in time are summed, and the worst window becomes the score.

What is a good INP score?

A good INP is 200 milliseconds or less at the 75th percentile. Between 200 and 500ms is classed as "needs improvement", and above 500ms is "poor". INP replaced First Input Delay (FID) in March 2024 and measures the full input-to-paint latency of the slowest interaction on a page (ignoring rare outliers on interaction-heavy pages).

What is TTFB and how does it differ from LCP?

Time to First Byte is the interval between the browser issuing the navigation request and receiving the first byte of the response. LCP includes TTFB plus everything after it: downloading and parsing the HTML, fetching the largest element's asset, and rendering it. A slow TTFB caps how fast LCP can ever be.

Is First Contentful Paint (FCP) part of Core Web Vitals?

No. FCP is a supporting metric: it tracks when the first DOM content paints. It is useful for debugging slow starts but Google does not use it for ranking. LCP, CLS, and INP are the three ranking-relevant metrics.

What replaced First Input Delay (FID)?

Interaction to Next Paint (INP) replaced FID as a Core Web Vital in March 2024. FID only measured the delay before the first interaction handler ran. INP measures the full interaction latency -- input delay, processing time, and presentation delay -- for the worst interaction on the page, which is a much closer proxy for perceived responsiveness.

Why is my lab LCP different from my field LCP?

Lab tools like Lighthouse simulate a single environment (one device profile, one network, one cold cache). Field data from CrUX aggregates the 75th percentile across real devices, networks, and cache states, so it can differ substantially from any single lab run. Trust the field number for ranking; use lab data to debug.

What does "75th percentile" mean in Core Web Vitals?

Google evaluates Core Web Vitals at the p75 of visits, meaning 75% of real-user page views must meet the threshold. This is deliberately stricter than a median so sites cannot mask problems by averaging great sessions with terrible ones. If the p75 LCP is 2.5s, three out of four visits were at or below 2.5s -- and the slowest quarter were worse.
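As a concrete sketch, here is a nearest-rank p75 over raw RUM samples (illustrative only -- CrUX's exact aggregation may differ):

```javascript
// Nearest-rank p75: the value that 75% of samples are at or below.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1; // 0-based index
  return sorted[rank];
}
```

Feed it a page's LCP samples in milliseconds and compare the result against the 2500ms threshold.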

Does LCP include text or only images?

LCP considers the largest image or text block visible in the initial viewport -- whichever is larger. Candidates include <img> elements, <image> elements inside <svg>, <video> poster images, elements with a CSS background image loaded via url() (gradients do not count), and block-level elements containing text. See the LCP guide for edge cases.

Does scrolling affect my CLS?

User-initiated shifts (within 500ms of a tap, click, or key) are excluded from CLS. However, shifts caused by scroll-triggered animations, intersection-observer-loaded content, or sticky headers that snap into place often DO count, because the browser cannot link them back to a user input.

Can my Core Web Vitals fail Google's thresholds but my site still rank well?

Yes. Core Web Vitals act as a tiebreaker between similarly relevant results, not a primary ranking signal. Strong content and authority often outweigh middling vitals. That said, failing vitals caps the ceiling: if a faster competitor has similar content, Google will prefer it, and poor vitals also hurt conversion regardless of rankings.

Do Core Web Vitals apply to mobile and desktop separately?

Yes. Google tracks and ranks mobile and desktop field data independently. Most sites are weaker on mobile due to slower CPUs and networks, which is why the mobile LCP ceiling of 2.5s is genuinely aggressive. Mobile-first indexing means mobile vitals carry more weight for most pages.

How long does it take for a fix to show up in Google Search Console?

Google Search Console's Core Web Vitals report reflects a 28-day rolling window of CrUX data. A fix you ship today will take roughly 14 days to noticeably move the p75 and up to 28 days to fully propagate. To see results immediately, use a RUM tool alongside it.

Tools

10 questions

What's the difference between Lighthouse and PageSpeed Insights?

Lighthouse is the underlying auditing engine that runs in Chrome DevTools, Node, and CI. PageSpeed Insights is a hosted front-end that runs Lighthouse on Google's servers AND shows you the CrUX field data for the URL. If you want lab + field side by side, use PageSpeed Insights. If you want reproducible runs in CI, use Lighthouse directly.

Should I trust Lighthouse scores over CrUX?

No. CrUX (field data) is what Google uses for ranking and what actual users experience. Lighthouse is a lab simulation with a fixed device and network profile that's useful for debugging specific changes, but its score can be wildly different from field reality.

What is CrUX and where does the data come from?

The Chrome User Experience Report is an anonymized dataset of real-user performance metrics from opted-in Chrome users. It's updated monthly in BigQuery and daily via the CrUX API. Google Search Console, PageSpeed Insights, and most RUM dashboards surface CrUX data.

Do I need a RUM tool if I already have CrUX data?

Yes, for any serious site. CrUX aggregates over a rolling 28-day window and only segments by device type, connection type, and country -- it cannot answer "is LCP worse for logged-in users" or "which URL pattern is failing". A RUM setup captures every session with custom dimensions and gives you near-real-time dashboards.

How does WebPageTest differ from Lighthouse?

WebPageTest is a third-party lab tool that runs tests from physical devices in 40+ locations with granular control over network, browser, and test scripts. It produces waterfall charts, filmstrips, and side-by-side video comparisons that Lighthouse cannot. Use Lighthouse for quick runs and WebPageTest for deep debugging.

What's the best free tool for monitoring Core Web Vitals over time?

For small sites the Google Search Console Core Web Vitals report is free and good enough for weekly checkpoints. For per-deploy monitoring, self-host the open source web-vitals library and send measurements to your existing analytics (GA4, Plausible, PostHog). See our RUM setup tutorial.

Can I measure Core Web Vitals in Chrome DevTools live?

Yes. Open the Performance panel, click record, interact with the page, then stop. The timing track shows LCP, CLS, and the largest INP interaction. You can also enable the Core Web Vitals overlay via the Rendering panel for a live heads-up display. Full walkthrough in the Chrome DevTools tutorial.

What's the simplest snippet to log field LCP to my analytics?

Install the web-vitals library and send each metric to your analytics provider on visibilitychange. A full example with sendBeacon, attribution, and p75 dashboard guidance is in our RUM tutorial.
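A minimal browser-side sketch using the open source web-vitals library -- the /analytics endpoint is a placeholder for your own collector:

```javascript
// Assumes `npm install web-vitals`; runs in the browser, not Node.
import { onLCP, onCLS, onINP } from "web-vitals";

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // "LCP" | "CLS" | "INP"
    value: metric.value, // ms for LCP/INP, unitless for CLS
    id: metric.id,       // unique per page load, for deduplication
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  (navigator.sendBeacon && navigator.sendBeacon("/analytics", body)) ||
    fetch("/analytics", { body, method: "POST", keepalive: true });
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
```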

How do I test Core Web Vitals on a staging site?

Lab tools work on any URL, so run Lighthouse or WebPageTest against staging. For field data you'd need to push real traffic to staging -- most teams instead ship to a canary subdomain or feature-flag the change for a percentage of production users and compare RUM numbers.

Why does PageSpeed Insights say "insufficient data"?

CrUX requires a minimum volume of Chrome traffic before it publishes field data for a URL or origin. Low-traffic pages fall back to origin-level data, and brand-new sites may show no field data at all for 30-60 days. Lab data (Lighthouse) always runs regardless.

Optimization

12 questions

How do I reduce my LCP from 4s to under 2.5s?

Work the waterfall: fix TTFB first (static cache, edge), then preload the hero image and fonts, then move render-blocking scripts to defer or async, then inline critical CSS. The LCP guide walks through a full fix sequence with code.

What's the single highest-leverage fix for LCP?

For most sites: adding fetchpriority="high" and a <link rel="preload" as="image"> for the hero image. That one change commonly shaves 400-1200ms, because browsers otherwise discover the image late -- for example when it is referenced from CSS or set by JavaScript, where the preload scanner cannot see it. Combine with proper sizes and srcset for full effect.
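Assuming a hero image at /hero.avif (paths and sizes are placeholders), the pattern looks like:

```html
<!-- In <head>: start the hero fetch immediately -->
<link rel="preload" as="image" href="/hero.avif"
      imagesrcset="/hero-800.avif 800w, /hero-1600.avif 1600w"
      imagesizes="100vw">

<!-- The element itself: high priority, never lazy-loaded -->
<img src="/hero.avif"
     srcset="/hero-800.avif 800w, /hero-1600.avif 1600w"
     sizes="100vw"
     fetchpriority="high" loading="eager"
     width="1600" height="900" alt="Hero">
```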

How do I reduce CLS caused by web fonts?

Use size-adjust, ascent-override, and descent-override in a @font-face declaration that matches the fallback metrics to the custom font. Pair with font-display: swap (or optional). Full recipe in the font-loading CLS fix.
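A sketch of the technique -- the override percentages and font names below are illustrative, so derive real values from your font's metrics (tools like Fontaine or Capsize can compute them):

```css
/* Fallback face whose metrics are adjusted to match the web font,
   so the swap does not reflow surrounding text. */
@font-face {
  font-family: "Inter-fallback";
  src: local("Arial");
  size-adjust: 107%;
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

@font-face {
  font-family: "Inter";
  src: url("/fonts/inter.woff2") format("woff2");
  font-display: swap;
}

body {
  font-family: "Inter", "Inter-fallback", sans-serif;
}
```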

Why is my INP bad even though my main-thread work looks fast?

INP is input-to-paint, not just script execution. If a handler finishes in 30ms but then forces a large layout and paint, the total can be 400ms. Break tasks into chunks with scheduler.yield(), defer non-visible work, and avoid forcing layout inside event handlers. See the INP guide.
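A sketch of the chunking pattern: use scheduler.yield() where available, with a setTimeout fallback for other browsers:

```javascript
// Yield control back to the browser so pending input and paints can run.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a long list without blocking the main thread beyond ~50ms at a time.
async function processInChunks(items, handleItem, budgetMs = 50) {
  let deadline = Date.now() + budgetMs;
  for (const item of items) {
    handleItem(item);
    if (Date.now() > deadline) {
      await yieldToMain();
      deadline = Date.now() + budgetMs;
    }
  }
}
```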

How do I stop third-party scripts from ruining INP?

Use the Partytown library or an <iframe> sandbox to move the script off the main thread. Load tag managers with defer or on user interaction. Never block the initial render on analytics or A/B tests -- ship the page, then hydrate.

What's a performance budget and how do I set one?

A performance budget is a hard ceiling on metrics (LCP, bundle size, total JS) that a page cannot exceed. Set it in Lighthouse CI or our budget calculator, then fail the build if a PR breaches it. Typical starters: 170kB JS on mobile, 2.5s LCP, 0.1 CLS.
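A sketch of a lighthouserc.json for Lighthouse CI assertions -- the exact audit IDs and thresholds are a starting point, so check the Lighthouse CI docs for your version:

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-byte-weight": ["warn", { "maxNumericValue": 1600000 }]
      }
    }
  }
}
```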

Should I lazy-load my hero image?

No. Never lazy-load anything in the initial viewport. Lazy-loading the LCP element hides it from the preload scanner, so the browser only requests it after layout determines it is visible, which usually adds 200-800ms. Use loading="eager" and fetchpriority="high" instead.

How do I compress my JavaScript bundle?

Serve Brotli over HTTPS (most CDNs do this automatically). Code-split routes with dynamic imports. Tree-shake by using ES modules and avoiding import * as. Replace heavy libs: moment.js -> date-fns or Temporal, lodash -> native array methods or tree-shakable lodash-es. See JavaScript bundle INP fix.

What image format should I use?

AVIF for photos (25-40% smaller than WebP, 50% smaller than JPEG), WebP as a fallback, and the original format (JPEG, PNG) as a final fallback. Use the <picture> element with <source type="image/avif"> so older browsers get the format they support. Most modern CDNs (Cloudflare, Vercel, Netlify) can do this conversion on the fly.
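For example, assuming your build or CDN has already generated both modern formats:

```html
<picture>
  <source type="image/avif" srcset="/img/photo.avif">
  <source type="image/webp" srcset="/img/photo.webp">
  <img src="/img/photo.jpg" width="1200" height="800" alt="Photo">
</picture>
```

The browser picks the first source type it supports and falls back to the plain <img> otherwise.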

How do I find which CSS is render-blocking?

In Chrome DevTools, open Coverage (Ctrl-Shift-P -> "Show Coverage"), reload with recording on, and look for large CSS files with low use percentages. Anything blocking the critical path should be inlined (up to ~14kB). Move the rest behind media="print" with a small onload swap.
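The onload swap mentioned above looks like this (with a noscript fallback for users without JavaScript):

```html
<!-- Inline critical CSS first -->
<style>/* critical above-the-fold rules, up to ~14kB */</style>

<!-- Load the full stylesheet without blocking render -->
<link rel="stylesheet" href="/full.css" media="print" onload="this.media='all'">
<noscript><link rel="stylesheet" href="/full.css"></noscript>
```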

What's the difference between preload, prefetch, and preconnect?

preconnect opens the DNS + TCP + TLS connection early. preload fetches a resource you WILL use on the current page, with high priority. prefetch fetches a resource you MIGHT use on a future navigation, with low priority. Misusing them (for example, preloading resources the page never uses) wastes bandwidth.
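Side by side (hosts and paths are placeholders):

```html
<!-- preconnect: open DNS + TCP + TLS to an origin you'll fetch from soon -->
<link rel="preconnect" href="https://fonts.example.com" crossorigin>

<!-- preload: high-priority fetch of a resource this page definitely uses -->
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/inter.woff2" crossorigin>

<!-- prefetch: low-priority fetch for a likely next navigation -->
<link rel="prefetch" href="/checkout">
```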

How small should my critical CSS be?

Aim for under 14kB compressed (one TCP round trip on slow connections). Anything larger than that and you lose the "first packet" advantage. Generate critical CSS automatically with tools like Critters or Beasties and load the full stylesheet asynchronously.

Frameworks

8 questions

What's the best framework for Core Web Vitals in 2026?

Any framework with strong defaults can hit great vitals, but the easiest paths today are Astro (static by default), SvelteKit (small runtime), and Next.js App Router with the React Server Components pattern. The framework matters less than disciplined use -- we have fixes for every major one in /fixes/.

Why is my Next.js LCP bad when Lighthouse says 95?

Lighthouse tests a single cold load under one fixed device and network profile. Real users arrive on slower devices and networks, and apps that fetch data and hydrate after the shell arrives can paint the hero element much later for them than a lab run suggests. See our LCP on Next.js fix.

How do I fix CLS in a React app?

Reserve space with aspect-ratio or explicit dimensions on all images and async content. Use Suspense boundaries with fixed-size skeletons, never collapsed divs. For lists that hydrate, render server-side markup that matches the hydrated output to avoid shifts. Full checklist in CLS in React.

Should I use the Astro Islands pattern?

If your site is mostly static content with a few interactive components, yes -- Astro's selective hydration ships far less JS than a fully-hydrated framework, which is usually the biggest INP win. For heavily dynamic apps, SvelteKit or Next.js App Router often fit better.

How do I improve INP in a WordPress site?

Reduce plugins (each plugin's frontend JS runs on every page), swap the theme's unminified jQuery-based slider for a CSS-based one, move analytics to a tag manager loaded on interaction, and enable persistent object cache. See INP in WordPress.

Are Server Components faster than client rendering for LCP?

Usually yes, because the HTML for the initial view arrives ready-to-render instead of requiring a JS bundle download, parse, execute, and hydrate cycle. The typical LCP delta is 300-1200ms on mobile. The tradeoff is complexity around data fetching and cache invalidation.

Does Turbopack or Vite make my site faster for users?

Neither directly: they primarily speed up development builds and local iteration. User-facing performance depends on the production bundle your app ships and its architecture, not the dev server. Don't pick a framework based on dev-server speed alone.

How do I prevent CLS from React component lazy-loading?

Always wrap React.lazy with a Suspense fallback that reserves the exact height of the loaded component. Use content-visibility: auto only for off-screen content. If the component has variable height, measure with ResizeObserver on first render and memoize the height per viewport.
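A sketch in React -- the component name and the 320px reserved height are illustrative:

```jsx
import { lazy, Suspense } from "react";

// Loaded on demand; the fallback reserves the component's height so the
// surrounding content does not shift when it arrives.
const Reviews = lazy(() => import("./Reviews"));

function ProductPage() {
  return (
    <Suspense fallback={<div style={{ minHeight: 320 }} aria-hidden="true" />}>
      <Reviews />
    </Suspense>
  );
}
```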

Hosting

8 questions

Does hosting on Vercel improve Core Web Vitals automatically?

Vercel's edge network and automatic image optimization can help TTFB and LCP, but it will not fix a heavy bundle or render-blocking scripts. You still need to follow framework best practices. See TTFB on Vercel for the specific edge-vs-serverless tuning.

Netlify vs Vercel for web performance?

Both are competitive at the edge. Netlify has slightly wider edge POP coverage and cheaper egress; Vercel has better Next.js integration and tighter image optimization defaults. For static sites the difference is marginal; for heavy SSR with ISR, test both. Framework-specific fixes: Vercel, Netlify.

Do I need a CDN if my hosting already has edge caching?

Modern platforms (Vercel, Netlify, Cloudflare Pages) already include a CDN. You don't need a separate one for static assets. You might add a specialized CDN (BunnyCDN, CloudFront) if you serve large video or images and need custom cache rules.

Will switching to Cloudflare Pages fix my TTFB?

If your TTFB is dominated by cold-start times or single-region hosting, yes. Cloudflare Pages runs on workers at the edge with minimal cold starts. If your TTFB is dominated by slow database queries or heavy server rendering, Cloudflare alone won't help -- fix the origin.

What is edge computing and does it help LCP?

Edge compute runs your code on servers geographically close to users (hundreds of POPs instead of one origin region). It cuts TTFB by 50-300ms for users far from your origin, which directly helps LCP. Edge functions for TTFB has details.

Should I serve images from my own domain or a CDN?

Same-origin images skip an extra DNS lookup and TLS handshake (saving 50-200ms on slow networks). Modern hosting platforms serve their optimized images from your own domain automatically. If you must use a third-party image CDN, add <link rel="preconnect"> for its origin.

How do I cache HTML at the edge for an SSR site?

Set Cache-Control: s-maxage=60, stale-while-revalidate=86400 (or similar) on pages that change rarely. Use Cache-Control: private for logged-in pages. Most platforms respect these headers at the edge. Vercel uses CDN-Cache-Control specifically for this.
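For example, response headers for two kinds of pages (values are illustrative starting points):

```text
# Public page: edge caches for 60s, serves stale while revalidating for a day
Cache-Control: public, s-maxage=60, stale-while-revalidate=86400

# Logged-in page: keep it out of shared caches entirely
Cache-Control: private, no-store
```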

Is shared hosting incompatible with good Core Web Vitals?

Good vitals are possible on shared hosting, but much harder -- shared hosts often have slow TTFB due to oversubscribed servers and no edge presence. Put a CDN like Cloudflare in front to get edge caching. For dynamic sites with high traffic, move to managed hosting or a Jamstack platform.

Still stuck?

If your question is not answered here, the deep-dive guides and fix pages cover most real-world failure modes with code you can copy.

  • Read the LCP, CLS, INP, or TTFB guide for the metric you need to fix.
  • Find a framework-specific fix in the fixes library.
  • Watch the video tutorials for Lighthouse, Chrome DevTools, WebPageTest, and RUM setup.
  • Use the CWV checker to get a live read of any URL.