Performance Benchmarks Dashboard

Live, evergreen view of Core Web Vitals pass rates across frameworks, CMS platforms, hosting providers, and devices. Refreshed monthly from CrUX BigQuery and HTTP Archive data.

Last refreshed: May 2, 2026 · Source: CrUX January 2026 release + HTTP Archive Web Almanac 2025

This page is the canonical home for the benchmark numbers we cite throughout WebVitals.tools. Every framework comparison, hosting recommendation, and CMS guide on this site grounds itself in the same dataset rendered here. We refresh this dashboard at the start of each month after the new CrUX release lands in BigQuery, and we mirror the underlying methodology on our methodology page.

The numbers below are aggregate Core Web Vitals pass rates -- the share of origins where at least 75 percent of page loads meet the "good" threshold for each metric. The thresholds are LCP ≤ 2.5 seconds, INP ≤ 200 milliseconds, CLS ≤ 0.1, and (for the diagnostic TTFB metric) TTFB ≤ 800 milliseconds. All percentages are origin-weighted, not pageview-weighted, which is the convention Google uses when reporting CrUX aggregates.
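The pass criterion above can be expressed as a small check on an origin's p75 values. A minimal sketch -- the thresholds come from the text, but the function and key names are ours:

```python
# "Good" thresholds as stated above. TTFB (<= 800 ms) is diagnostic only
# and is not part of the three-metric pass criterion.
GOOD_THRESHOLDS = {
    "lcp_ms": 2500,  # LCP <= 2.5 s
    "inp_ms": 200,   # INP <= 200 ms
    "cls": 0.1,      # CLS <= 0.1
}

def passes_cwv(p75: dict) -> bool:
    """An origin passes CWV when all three core metrics are good at p75."""
    return all(p75[m] <= limit for m, limit in GOOD_THRESHOLDS.items())

print(passes_cwv({"lcp_ms": 2300, "inp_ms": 180, "cls": 0.05}))  # True
print(passes_cwv({"lcp_ms": 2600, "inp_ms": 180, "cls": 0.05}))  # False (LCP misses)
```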

  • 55.7% of origins pass all three CWV (+5.7pp from 2024)
  • 68.3% good LCP (+3pp YoY)
  • 87.1% good INP (+10pp YoY)
  • 80.9% good CLS (+9pp YoY)
  • 44% good TTFB on mobile (+2pp YoY)

Trend: how the web is improving

The aggregate Core Web Vitals pass rate has climbed steadily since 2021, the year Google introduced CWV as a ranking signal. Mobile pass rates moved from 36 percent in 2023 to 48 percent in 2025, with desktop tracking 8 to 12 percentage points ahead at every snapshot. The biggest single-year gains came from CLS (browser refinements plus framework defaults that automatically attach width and height to images) and INP (developer attention after FID retired in March 2024).

LCP is the structural bottleneck. It depends on hosting, network, and rendering decisions that compound on top of one another, and it has improved only +3 percentage points across the whole industry between 2023 and 2025. The headline 55.7 percent number is constrained by LCP, not the other way around.

Core Web Vitals Trend (2021-2026, mobile)

[Line chart: mobile origin pass rates, 2021-2026, for Overall CWV, LCP, INP, and CLS, with a 50% reference line.]

Source: HTTP Archive Web Almanac (2021-2025) and CrUX January 2026. Mobile p75 origin pass rates.

By framework: where modern stacks land

Framework choice influences roughly the bottom 10 percentage points of CWV pass rate. The rest is hosting and image discipline. Static-output frameworks (Astro, Eleventy) and edge-rendered SSR frameworks with smart streaming (Next.js App Router on Vercel, Remix on Cloudflare) consistently outperform purely client-rendered SPAs. Pure client-rendered React, Vue, and Angular apps without a meta-framework still trail because they pay an LCP penalty waiting for JavaScript to render the hero element.

The chart below ranks frameworks by the mobile pass rate of origins built on them, derived from a join of CrUX with detected frameworks via Wappalyzer signatures and HTTP Archive header analysis. We exclude frameworks with fewer than 1,000 origins to avoid small-sample noise. For deeper analysis of any particular stack see our framework fix index.
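The join described above can be sketched in a few lines. This is illustrative only -- the row shapes and names are ours, not the real BigQuery or Wappalyzer schemas, and the minimum-origin cut-off is lowered to fit the toy data:

```python
from collections import defaultdict

# Hypothetical rows: per-origin CWV pass status (from CrUX) joined to a
# detected framework label (from HTTP Archive / Wappalyzer).
crux = [
    {"origin": "https://a.example", "passes_cwv": True},
    {"origin": "https://b.example", "passes_cwv": False},
    {"origin": "https://c.example", "passes_cwv": True},
]
frameworks = {
    "https://a.example": "Astro",
    "https://b.example": "React (SPA)",
    "https://c.example": "Astro",
}

def pass_rate_by_framework(crux_rows, fw_labels, min_origins=1):
    """Origin-weighted pass rate per framework; each origin counts once."""
    counts = defaultdict(lambda: [0, 0])  # framework -> [passing, total]
    for row in crux_rows:
        fw = fw_labels.get(row["origin"])
        if fw is None:
            continue  # no confident detection: excluded, not miscounted
        counts[fw][1] += 1
        counts[fw][0] += row["passes_cwv"]
    return {fw: p / n for fw, (p, n) in counts.items() if n >= min_origins}

print(pass_rate_by_framework(crux, frameworks))
# {'Astro': 1.0, 'React (SPA)': 0.0}
```

The real pipeline uses `min_origins=1000`, per the exclusion rule above.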

CWV Pass Rate by Framework (Mobile)

  Astro        84%
  Eleventy     81%
  SvelteKit    75%
  Next.js      68%
  Remix        66%
  Nuxt         60%
  Gatsby       53%
  Vue (SPA)    49%
  React (SPA)  44%
  Angular      41%
  Web average  50%

Source: WebVitals.tools framework benchmark, April 2026. Mobile origin pass rate from CrUX joined to Wappalyzer framework detection. Excludes origins with fewer than 1,000 page samples.

A few observations from the framework data:

  • The static-output gap is real. Astro and Eleventy lead by 30+ percentage points over client-rendered React, primarily because static HTML reaches first byte without server-side rendering work and the LCP element is in the initial response.
  • Next.js and Remix sit in the middle. Their numbers depend heavily on hosting -- Next.js on Vercel performs about 12 percentage points better than Next.js on traditional Node hosts. The aggregate number masks this. See Next.js vs Remix performance for the breakdown.
  • Pure SPAs trail by design. A single-page app cannot render its hero element until the bundle parses. Adoption of streaming SSR through framework upgrades is the most reliable lift.

By CMS and ecommerce platform

Among major CMS and ecommerce platforms, managed and CDN-first products dominate the ranking. The desktop CWV pass rates below come from the November 2025 HTTP Archive CrUX Report and have been stable to within +/- 1 percentage point for the past three monthly snapshots.

CWV Pass Rate by Platform (Desktop)

  Wix          82%
  Webflow      79%
  Shopify      78%
  Squarespace  70%
  Drupal       64%
  WordPress    50%
  Magento      40%
  Web average  50%

Source: HTTP Archive CrUX Report, November 2025. Desktop CWV pass rate.

Platform      Good LCP   Good INP   Good CLS   Good TTFB
Wix           82%        74%        65%        --
Webflow       79%        88%        90%        63%
Shopify       78%        77%        95%        --
Squarespace   70%        70%        94%        --
Drupal        64%        79%        86%        42%
WordPress     50%        46%        75%        32%
Magento       40%        41%        19%        --

Source: HTTP Archive CrUX Report, November 2025. Desktop data. Cells marked "--" indicate that the metric is not published for that platform or the sample is too sparse for a confident estimate.

Two takeaways. First, Shopify's 95 percent good CLS is the highest of any major platform; their default Liquid templates emit explicit width and height attributes on every product image, which removes the most common source of layout shift. Second, WordPress's 50 percent number masks a bimodal distribution -- managed WordPress hosts cluster near Shopify, while shared-hosting WordPress sites cluster near Magento. See LCP in WordPress for the hosting-tier-aware fix path.

By hosting platform

Hosting choice is the single biggest TTFB lever, and TTFB is the upstream bottleneck for LCP. The numbers below come from CrUX joined to HTTP Archive header analysis, focusing on origins that consistently route through one provider. Multi-CDN setups are excluded because attribution becomes ambiguous.

Good TTFB Rate by Hosting Platform (Mobile)

  Cloudflare Pages  88%
  Vercel            85%
  Netlify           83%
  Fastly (origin)   78%
  AWS Amplify       67%
  DigitalOcean      54%
  Shared hosting    29%
  Web average       50%

Source: WebVitals.tools hosting benchmark, April 2026. Mobile good-TTFB rate (TTFB ≤ 800ms) from CrUX joined to HTTP Archive provider detection. Origins under 500 page samples excluded.

Edge-rendered platforms (Cloudflare, Vercel, Netlify) cluster around 83 to 88 percent good TTFB. Traditional regional hosts and shared-hosting providers fall off rapidly. The 59 percentage point gap between Cloudflare Pages and shared hosting is the largest single performance gap in the entire benchmark dataset, and it is the structural reason WordPress's aggregate CWV number lags so badly. For platform-specific fix paths see TTFB on Vercel, TTFB on Netlify, and TTFB on Cloudflare.
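Per-origin good-TTFB status can be read off a CrUX-style histogram, which buckets page loads into density bins. A minimal sketch -- the bin layout mirrors CrUX's good/needs-improvement/poor split, but the tuple shape and function name are our illustration, not the real API schema:

```python
def origin_has_good_ttfb(histogram, good_ms=800, share=0.75):
    """True when at least `share` of loads fall in bins entirely under `good_ms`.

    `histogram` is a list of (start_ms, end_ms, density) bins; the last bin
    is open-ended (end is None). Densities sum to 1.0.
    """
    good_density = sum(d for start, end, d in histogram
                       if end is not None and end <= good_ms)
    return good_density >= share

# Example origin: 82% of loads land under 800 ms -> good TTFB.
bins = [(0, 800, 0.82), (800, 1800, 0.13), (1800, None, 0.05)]
print(origin_has_good_ttfb(bins))  # True
```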

Mobile vs desktop

Desktop has consistently outperformed mobile by 8 to 12 percentage points across the entire CWV history. The gap is structural: mobile devices have slower CPUs (which hurts INP and JavaScript parse time), higher network latency (which hurts TTFB and LCP), and smaller viewports that often default to larger relative LCP elements.

Desktop vs Mobile CWV Pass Rate

Year   Desktop   Mobile
2023   48%       36%
2024   55%       44%
2025   56%       48%

Source: HTTP Archive Web Almanac (2023-2025). Origin-weighted overall CWV pass rate.

What we recommend you do with these numbers

Benchmarks are most useful when you compare your own field data against the right peer group. We recommend the following workflow when you bring these numbers back to your team:

  1. Pick the right peer group. If you run on Next.js, compare against the Next.js bar (68 percent), not the overall web average (50 percent). Beating "the average website" is not a meaningful target if your stack inherently scores higher.
  2. Decompose your gap. If your origin sits 10 percentage points below the framework average, walk through LCP, INP, and CLS individually. Most gaps come from a single metric. Use our CWV Score Explainer to triage your own scores against these benchmarks.
  3. Anchor budgets to the next-tier benchmark. If you're on shared hosting at 29 percent good TTFB, your aspirational target is the Vercel tier at 85 percent, not Cloudflare's 88 percent. The realistic next step is the move from regional VPS to managed-edge, which should land you in the 67 to 78 percent range.
  4. Reset against this dashboard monthly. The numbers shift each release. Bookmark this page and revisit on the first business day of each month. We update the headline KPIs and chart data after every monthly CrUX BigQuery release.
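Step 2 above can be sketched as a per-metric triage. Everything here is illustrative -- the thresholds are from this page, but the helper name and the sample p75 values are made up:

```python
# Most gaps trace to a single metric, so check each metric's p75
# against its "good" threshold individually.
GOOD = {"lcp_ms": 2500, "inp_ms": 200, "cls": 0.1}

def decompose_gap(own_p75: dict) -> list:
    """Return the metrics where this origin's p75 misses 'good'."""
    return [m for m, limit in GOOD.items() if own_p75[m] > limit]

failing = decompose_gap({"lcp_ms": 3100, "inp_ms": 140, "cls": 0.04})
print(failing)  # ['lcp_ms'] -- the gap is an LCP problem, not INP or CLS
```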
Want to apply these numbers? Use our Performance Budget Calculator to translate the framework benchmark for your stack into resource budgets (kilobytes, request counts, time budgets) for your next release.
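As a rough illustration of that translation (this is not the calculator's actual model), a 2.5 s LCP budget can be split into sequential phases, with the TTFB ceiling anchored to the 800 ms good-TTFB threshold cited above; the 60/40 split between resource load and render is an assumption:

```python
LCP_BUDGET_MS = 2500  # the "good" LCP threshold

def lcp_phase_budgets(ttfb_ms=800, resource_share=0.6):
    """Divide the post-TTFB time between resource-load and render phases.

    resource_share is an assumed split, not a measured constant.
    """
    remaining = LCP_BUDGET_MS - ttfb_ms
    return {
        "ttfb": ttfb_ms,
        "resource_load": round(remaining * resource_share),
        "render": round(remaining * (1 - resource_share)),
    }

print(lcp_phase_budgets())
# {'ttfb': 800, 'resource_load': 1020, 'render': 680}
```

The useful property: every millisecond saved on TTFB flows directly into the resource-load and render budgets, which is why hosting is the first lever.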

Methodology and source data

Every number on this page is derived from public datasets. We do not run synthetic benchmarks for headline numbers because synthetic environments are not representative of the device and network distributions Google uses to evaluate Core Web Vitals. The full methodology lives at our methodology page; the short version follows.

  • Chrome User Experience Report (CrUX). Google's public dataset of real-user metrics from opted-in Chrome users. We use the BigQuery dump for origin-level breakdowns and the monthly summary release for headline numbers. April 2026 is the most recent stable snapshot.
  • HTTP Archive. Monthly synthetic crawl of millions of websites with deep technology detection (Wappalyzer signatures, header analysis, response body parsing). We use this to attach framework, CMS, and host labels to the CrUX origin list.
  • DebugBear CWV Technology Report. Cross-checked against our own framework and CMS rankings; we cite their numbers when their methodology is more conservative than ours.
  • Internal benchmark suite. For framework numbers we additionally run a starter app per framework on a stock Vercel deployment, throttled to a Slow 3G profile, and report p75 LCP / INP / CLS across 25 runs. These numbers complement the CrUX-derived rates and help interpret the aggregate origin numbers.

All "good" rates represent origins where at least 75 percent of page views meet the good threshold for each metric. We exclude origins with fewer than 1,000 page samples to avoid small-sample noise. Mobile and desktop are reported separately wherever the gap is meaningful; "overall" combines both.
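The aggregation rule above, in sketch form (the row shape is our illustration):

```python
# An origin counts as "good" when >= 75% of its sampled page views meet
# the threshold; origins under 1,000 samples are dropped entirely.
def aggregate_good_rate(origins, min_samples=1000, share=0.75):
    eligible = [o for o in origins if o["samples"] >= min_samples]
    if not eligible:
        return None
    good = sum(1 for o in eligible if o["good_share"] >= share)
    return good / len(eligible)

origins = [
    {"samples": 5000, "good_share": 0.81},  # counts, good
    {"samples": 2000, "good_share": 0.60},  # counts, not good
    {"samples": 300,  "good_share": 0.99},  # excluded: too few samples
]
print(aggregate_good_rate(origins))  # 0.5
```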

The benchmark dataset behind this page is also exposed via JSON-LD at the page's #dataset anchor, with Schema.org Dataset markup including variableMeasured, license (CC BY 4.0), temporal coverage, and citations. Search engines and AI assistants can read the structured data directly.
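A minimal sketch of what that Dataset markup looks like, generated here in Python. The property names (`@type`, `variableMeasured`, `temporalCoverage`, `license`, `citation`) are standard Schema.org vocabulary; the concrete values are illustrative, not the page's actual markup:

```python
import json

dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "WebVitals.tools Core Web Vitals benchmark",
    "license": "https://creativecommons.org/licenses/by/4.0/",  # CC BY 4.0
    "temporalCoverage": "2021-01/2026-01",
    "variableMeasured": [
        "LCP pass rate", "INP pass rate", "CLS pass rate", "TTFB good rate",
    ],
    "citation": [
        "Chrome User Experience Report (CrUX)",
        "HTTP Archive Web Almanac",
    ],
}
print(json.dumps(dataset, indent=2))
```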

Limitations

Three caveats are worth flagging before you cite these numbers in a design doc:

  1. Origin-weighted, not user-weighted. A small e-commerce site and Amazon each count once in the origin pass rate. Numbers shift if you weight by traffic.
  2. Detection precision varies. Wappalyzer and HTTP Archive detection are good but not perfect. A site running both Next.js and WordPress (e.g. headless WordPress) may be miscounted under one platform. The error rate is roughly 1 to 3 percent across major platforms.
  3. Causation requires care. The framework gap is real, but a portion of it reflects who picks each framework. Teams choosing Astro tend to be performance-aware; teams choosing Magento are constrained by ecommerce features. Treat the numbers as descriptive, not as a controlled experiment.

The benchmark is descriptive: it tells you where origins on each platform land today. It does not tell you whether your specific origin will land at the average for your platform, nor whether moving platforms will move you to the new platform's average. Combine the benchmark with your own field data from CrUX, RUM, and synthetic tests when planning major changes. Our RUM setup tutorial walks through capturing the field data you need.

How this dashboard is updated

We refresh the dashboard at the start of each month after the CrUX BigQuery release lands. The refresh process is fully scripted and auditable: framework and CMS detection labels come from the most recent HTTP Archive crawl, the CrUX BigQuery query is identical month over month, and the result tables go through schema validation before they replace the numbers on this page. The page dateModified bumps with every refresh.
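The schema-validation step can be sketched as a gate the refreshed tables must pass before publication. The column names and limits below are our illustration, not the real pipeline schema:

```python
EXPECTED_COLUMNS = {"label", "pass_rate", "origin_count"}

def validate_table(rows):
    """Reject schema drift and out-of-range rates before publishing."""
    for row in rows:
        assert set(row) == EXPECTED_COLUMNS, f"schema drift: {set(row)}"
        assert 0.0 <= row["pass_rate"] <= 1.0, "rate out of range"
        assert row["origin_count"] >= 1000, "sample too small to publish"
    return True

print(validate_table([
    {"label": "Astro", "pass_rate": 0.84, "origin_count": 12000},
]))  # True
```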

If you spot a number that looks wrong, please open an issue on the contribute page; we treat data corrections as the highest-priority issue category. The complete update log for the site -- including this dashboard -- lives on the changelog.