About WebVitals.tools

An open-source reference for Core Web Vitals -- built by developers, maintained in public, and kept accurate through rigorous testing and transparent editorial standards.

Mission

Core Web Vitals documentation is scattered across browser vendor blogs, framework changelogs, and conference talks that go stale within months of publication. When developers search for "how to fix LCP in Next.js" or "why is my CLS score fluctuating," they frequently land on outdated articles that reference old API names, superseded thresholds, or tools that no longer exist. That gap is the reason WebVitals.tools exists.

The mission of this site is to be the single most reliable, up-to-date, and practically useful reference for anyone working to improve Largest Contentful Paint, Cumulative Layout Shift, Interaction to Next Paint, and Time to First Byte. We serve front-end engineers, performance engineers, and site owners who need actionable guidance rather than abstract theory. Every page is written for someone who has a real problem to solve today, not a student working through a textbook.

The site is fully open-source under the MIT license, meaning the content and code can be forked, adapted, and redistributed freely. Community pull requests are welcomed and credited. Transparency is not a policy we adopted after the fact; it has been baked into how the site is structured from day one: a public changelog, a documented testing methodology, and named authors for every piece of content.

What You Will Find Here

Four interconnected pillars cover everything from foundational understanding to shipping code.

Editorial Standards

How we decide what to publish, how we keep it accurate, and how we signal when something has changed.

We cite primary sources

Every factual claim about Chrome's behavior, a threshold value, or a metric definition links back to the originating source: the web.dev documentation, the Chrome Status entry, the CrUX BigQuery schema, or the relevant W3C specification. We do not accept "common knowledge" as a citation. If we cannot find a primary source for a claim, we omit the claim until we can verify it ourselves.

We re-test before we re-publish

When a framework ships a major version, when Chrome changes a scoring algorithm, or when our own benchmark runs detect a regression in a previously verified fix, we re-run the full test matrix before updating the page. We do not change a number because a PR author says it changed; we verify it against our own controlled environment first. The complete description of our test environment, throttling settings, sample sizes, and statistical methods is published on the methodology page.

We keep the changelog visible

Any update that changes a benchmark number, a recommended configuration, or a threshold value is recorded in the public changelog with a date and a summary. Readers who bookmarked a page six months ago can check the changelog to understand what, if anything, has shifted since their last visit. Silent edits that alter factual claims without a record are against our policy.

We separate lab results from field data

Lab measurements produced by Lighthouse or WebPageTest are always clearly labeled as such. Chrome User Experience Report (CrUX) data, which reflects real user conditions, is labeled separately. We never present a synthetic lab score as if it were a CrUX field result, and we note when the two diverge in ways that might confuse a reader trying to reconcile their Search Console data with a Lighthouse audit.
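
To make the distinction concrete, here is a minimal Node.js sketch of pulling a field p75 for LCP from the CrUX API; it assumes Node 18+ and a CrUX API key, and the exact response shape should be checked against the official API reference rather than taken from this example:

    // Sketch: query the CrUX API for field LCP, to contrast with a lab Lighthouse run.
    // Assumes Node 18+ (global fetch) and a CrUX API key in the CRUX_API_KEY env var.
    const endpoint =
      `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`;

    async function fieldLcpP75(url) {
      const res = await fetch(endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          url,                        // page-level record; use "origin" for origin-level data
          formFactor: 'PHONE',        // field data is segmented by device class
          metrics: ['largest_contentful_paint'],
        }),
      });
      if (!res.ok) throw new Error(`CrUX API error: ${res.status}`);
      const data = await res.json();
      // This p75 comes from real users over the trailing collection window --
      // it is not comparable 1:1 with a single synthetic lab score.
      return Number(data.record.metrics.largest_contentful_paint.percentiles.p75);
    }

    fieldLcpP75('https://example.com/').then((p75) => {
      console.log(`Field LCP p75: ${p75} ms`);
    });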

Authors and Contributors

Named contributors behind the research, writing, and testing on this site.

Alex Rivera

Alex specializes in rendering performance and image optimization, with a particular focus on how modern bundlers and CDN edge networks interact with LCP timings. Alex leads the LCP guide and the majority of the Next.js and React fix pages.

Marcus Chen

Marcus focuses on interaction responsiveness and JavaScript runtime performance, including long-task attribution, scheduler APIs, and INP measurement. He is the primary author of the INP guide and the framework benchmark methodology that drives the fixes library.

Sara Kim

Sara covers layout stability and the CSS patterns that cause unexpected CLS in production. Her background in design systems means she approaches layout shift from both the engineering and design sides, and her work on the CLS guide reflects both perspectives.

Priya Patel

Priya specializes in server-side performance, TTFB optimization, and infrastructure-level fixes including CDN configuration, caching strategy, and edge rendering. She is the primary author of the TTFB guide and the hosting-platform fix pages covering Vercel, Netlify, Cloudflare, and AWS.

Core Editorial Team

In addition to the named authors above, a core editorial team reviews all content for technical accuracy, factual sourcing, and consistency with our style guide before publication. Team members are also responsible for the monthly benchmark re-runs and the daily changelog updates that keep the site current.

If you would like your name listed here as a contributor, see the contribute page for information on how to get involved.

How We Test

Every benchmark, fix recommendation, and threshold claim on this site is grounded in a documented, reproducible test process.

We use a combination of lab tools (Lighthouse, WebPageTest, Chrome DevTools Performance panel) and field data (the Chrome User Experience Report, the web-vitals JavaScript library, and custom real-user monitoring setups) to produce the numbers and recommendations on this site. No fix is recommended based solely on a single lab run. We require statistical consistency across multiple runs under controlled conditions before we publish a result as representative.
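
As an illustration of the field-data side of that toolchain, the sketch below uses the web-vitals library in the browser to report each metric to a collection endpoint; the /analytics path is a placeholder for whatever RUM collector you actually use, not part of this site:

    // Sketch: report the Core Web Vitals (plus TTFB) from real users
    // using the web-vitals library. The "/analytics" endpoint is hypothetical.
    import { onLCP, onCLS, onINP, onTTFB } from 'web-vitals';

    function sendToAnalytics(metric) {
      const body = JSON.stringify({
        name: metric.name,       // "LCP", "CLS", "INP", or "TTFB"
        value: metric.value,     // ms for LCP/INP/TTFB, unitless score for CLS
        rating: metric.rating,   // "good" | "needs-improvement" | "poor"
        id: metric.id,           // unique per page load, useful for deduplication
      });
      // sendBeacon survives page unload; fall back to fetch with keepalive.
      if (!navigator.sendBeacon?.('/analytics', body)) {
        fetch('/analytics', { method: 'POST', body, keepalive: true });
      }
    }

    onLCP(sendToAnalytics);
    onCLS(sendToAnalytics);
    onINP(sendToAnalytics);
    onTTFB(sendToAnalytics);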

Framework benchmarks are run against starter applications left at their framework defaults, with a Slow 3G throttling profile applied, and we report the p75 value across 25 runs for each metric. This setup is designed to surface the performance characteristics that affect real users on mid-range devices and constrained networks, not the best-case numbers you get on a developer's local machine.
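
For readers who want the percentile step spelled out, here is a small Node.js sketch of computing a p75 over repeated run results; the run values are invented, and the linear-interpolation method shown is just one common choice, not necessarily the exact statistic described on the methodology page:

    // Sketch: compute the 75th percentile of LCP values from repeated benchmark runs.
    // The numbers below are made-up examples, not published benchmark results.
    function percentile(values, p) {
      const sorted = [...values].sort((a, b) => a - b);
      // Linear interpolation between the closest ranks.
      const rank = (p / 100) * (sorted.length - 1);
      const lower = Math.floor(rank);
      const upper = Math.ceil(rank);
      const weight = rank - lower;
      return sorted[lower] * (1 - weight) + sorted[upper] * weight;
    }

    const lcpRunsMs = [2480, 2510, 2390, 2650, 2540, 2470, 2600, 2520]; // one value per run
    console.log(`LCP p75: ${Math.round(percentile(lcpRunsMs, 75))} ms`);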

The full description of our tools, environment configuration, statistical methods, outlier rejection criteria, and update cadence is published on the methodology page. If you spot a discrepancy between our published numbers and results you are seeing in your own testing, the methodology page is the best starting point for understanding the likely source of the difference.

Open-Source and Licensing

All content and code on this site is freely available under the MIT license.

WebVitals.tools is fully open-source. The entire repository -- HTML pages, CSS, JavaScript, and written content -- is published under the MIT license. You are free to fork it, adapt it, translate it, build on top of it, or redistribute it for any purpose, commercial or non-commercial, as long as you retain the license notice.

The repository lives at github.com/SensaraIO/webvitals-tools. You will find the source for every page on the site, the benchmark scripts used to produce the numbers in the fixes library, and a CONTRIBUTING.md file that describes the editorial workflow for new pages and corrections.

Choosing MIT was a deliberate decision. We want the content to flow freely into documentation generators, AI training sets, developer tools, and framework documentation without legal friction. Attribution is appreciated but not required. If this site's content helps you ship a faster page, that is the whole point.

The site itself is built entirely with static HTML files for maximum performance. There is no build step, no bundler, and no server-side runtime. The architecture is intentional: a site about web performance should have exemplary web performance.

Contact and Contribute

Found an error? Want to add a framework fix? There are several ways to get involved.

The most direct way to improve the site is through the GitHub repository. Open an issue to report a factual error, a stale benchmark, or a missing framework. Submit a pull request if you have already written a correction or a new page. All contributions are reviewed against the same editorial standards we apply to our own content before they are merged.

If you are not comfortable with GitHub, you can also use the contact information in the repository to reach the editorial team directly. We treat substantive technical corrections as a priority because accuracy is the most important property of this site.

See the contribute page for a full breakdown of the contribution process, the page template checklist, code style notes, and the GitHub workflow from fork to deploy. Contributors who submit accepted corrections or new pages are credited by name in the relevant page's byline and on this About page.

Quick links for contributors

  • GitHub repository -- source code, issues, and pull requests
  • Contribute page -- editorial process, page template, and code style guide
  • Methodology page -- how we test and what counts as a verifiable result
  • Changelog -- log of all content updates since launch
  • LCP guide -- a good example of the target quality level for guide pages
  • Blog -- examples of the analysis and case-study style we use for editorial posts
  • Fixes library -- the format for framework-specific fix pages
  • Tools -- interactive resources built on top of the content