How We Reduced LCP by 60%: A Step-by-Step Case Study
In March 2026, a mid-market e-commerce site running Next.js 15 approached us with a problem: their Largest Contentful Paint was 4.8 seconds on mobile, well into the "poor" range. Google Search Console flagged 78% of their URLs as having poor CWV scores. Their organic traffic had dropped 23% over two months. This is the full story of how we brought LCP down to 1.9 seconds -- a 60% reduction -- in three weeks.
The site, an online furniture retailer with approximately 12,000 product pages, was built on Next.js with a headless CMS. It used server-side rendering for product pages and static generation for category pages. Despite choosing a modern stack, they had accumulated performance debt through unoptimized images, excessive third-party scripts, and render-blocking CSS.
Before and After: LCP Optimization Results
The initial audit
Before touching any code, we spent two days gathering data. We ran PageSpeed Insights on 50 representative URLs (homepage, category pages, product detail pages, and the checkout flow), recorded WebPageTest filmstrips for the top 10 landing pages, and analyzed 28 days of CrUX field data from Search Console.
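A note on how those numbers roll up: lab tools report individual runs, while CrUX field data reports the 75th percentile. A minimal sketch of the p75 aggregation used throughout this post (a hypothetical helper, not the client's actual tooling):

```javascript
// Hypothetical helper: compute the 75th percentile of LCP samples
// (in milliseconds), nearest-rank method -- the smallest value with
// at least 75% of samples at or below it.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

console.log(p75([1200, 4800, 3100, 2600, 5200, 1900, 4100, 3800])); // 4100
```

This is why a handful of very slow loads can drag a site into the "poor" bucket even when the median looks acceptable.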
The data told a clear story. The LCP element on product pages was the hero product image -- a 2.4MB PNG served without responsive sizing. On category pages, it was a promotional banner loaded from a third-party CDN. The homepage LCP was a carousel image that loaded after JavaScript execution.
Audit findings summary
| Issue | Impact | LCP Savings |
|---|---|---|
| Unoptimized hero images (PNG, no srcset) | Critical | -1.2s |
| Render-blocking CSS (380KB uncompressed) | High | -0.6s |
| Font loading blocking render (4 font files) | High | -0.5s |
| Third-party scripts (14 scripts, 890KB) | Medium | -0.4s |
| Server response time (cold SSR: 1.2s) | Medium | -0.2s |
Fix 1: Image optimization (LCP -1.2s)
The hero product image was the single biggest bottleneck. The original implementation served the same 2400x1600 PNG regardless of viewport size -- a 2.4MB file that took 1.8 seconds to download on a typical 4G connection. We implemented four changes.
First, we converted all product images to AVIF with WebP fallback using a Sharp-based image pipeline integrated into the Next.js build process. AVIF delivered 65% smaller files than PNG at equivalent visual quality. Second, we added responsive srcset and sizes attributes so mobile devices downloaded appropriately sized images. Third, we added fetchpriority="high" to the hero image and a corresponding <link rel="preload"> in the document head. Fourth, we removed the lazy loading attribute from the LCP image -- it had loading="lazy" applied globally to all images, which delayed the hero image fetch.
// Before: Single large PNG, lazy loaded, no priority hints
<img
  src="/images/products/sofa-hero.png"
  alt="Modern sectional sofa"
  loading="lazy"
  className="product-hero-image"
/>
// After: AVIF/WebP with responsive sizes and priority hints
<picture>
  <source
    type="image/avif"
    srcSet="/images/products/sofa-hero-400.avif 400w,
            /images/products/sofa-hero-800.avif 800w,
            /images/products/sofa-hero-1200.avif 1200w"
    sizes="(max-width: 768px) 100vw, 50vw"
  />
  <source
    type="image/webp"
    srcSet="/images/products/sofa-hero-400.webp 400w,
            /images/products/sofa-hero-800.webp 800w,
            /images/products/sofa-hero-1200.webp 1200w"
    sizes="(max-width: 768px) 100vw, 50vw"
  />
  <img
    src="/images/products/sofa-hero-800.webp"
    alt="Modern sectional sofa"
    width="1200"
    height="800"
    fetchpriority="high"
    className="product-hero-image"
  />
</picture>
The preload link in the document head ensured the browser started fetching the hero image before it even parsed the page body:
<link
  rel="preload"
  as="image"
  type="image/avif"
  href="/images/products/sofa-hero-800.avif"
  imagesrcset="/images/products/sofa-hero-400.avif 400w,
               /images/products/sofa-hero-800.avif 800w,
               /images/products/sofa-hero-1200.avif 1200w"
  imagesizes="(max-width: 768px) 100vw, 50vw"
/>
Image file sizes dropped from 2.4MB (PNG) to 180KB (AVIF at 800w) -- a 92% reduction. The mobile hero image was now 85KB. This single change reduced median LCP by 1.2 seconds.
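Because the responsive variants follow a simple naming convention, the srcset strings can be generated rather than hand-written. A hypothetical build-time helper (the paths and widths are illustrative, not the client's actual pipeline code; in the real build, Sharp first emits the resized AVIF/WebP files these URLs point to):

```javascript
// Hypothetical helper: generate a srcset string for one image,
// given the base path, output format, and breakpoint widths.
function buildSrcSet(base, ext, widths) {
  return widths.map((w) => `${base}-${w}.${ext} ${w}w`).join(', ');
}

console.log(buildSrcSet('/images/products/sofa-hero', 'avif', [400, 800, 1200]));
// /images/products/sofa-hero-400.avif 400w, /images/products/sofa-hero-800.avif 800w, /images/products/sofa-hero-1200.avif 1200w
```

Generating these strings in one place keeps the markup, the preload link, and the image pipeline output in sync when breakpoints change.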
Watch out for loading="lazy" on the LCP element. Global lazy loading applied to all images is a common anti-pattern that delays the most important image on the page. See our lazy loading pitfalls guide for more detail.
Fix 2: Critical CSS extraction (LCP -0.6s)
The site loaded a single 380KB CSS bundle that blocked rendering. Only about 45KB of that CSS was needed for above-the-fold content. We used the critters package (wired into Next.js behind the experimental optimizeCss flag) to automatically inline critical CSS and defer the rest.
The Next.js configuration change was minimal -- we enabled the experimental optimizeCss feature and added critters to the build pipeline. The result: the browser could start rendering after downloading just 45KB of inline CSS instead of waiting for the entire 380KB bundle.
// next.config.js
module.exports = {
  experimental: {
    optimizeCss: true, // Enables critters for critical CSS inlining
  },
};

// tailwind.config.js -- remove unused utility classes
// (reduced CSS from 380KB to 120KB before critical extraction)
module.exports = {
  content: ['./src/**/*.{js,jsx,ts,tsx}'],
  safelist: ['dark'], // Keep dark mode classes
};
We also ran a Tailwind CSS purge that removed unused utility classes, reducing the total CSS from 380KB to 120KB. Combined with critical CSS inlining, the render-blocking CSS dropped to 45KB inlined in the HTML. The remaining 75KB loaded asynchronously via media="print" onload="this.media='all'".
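For reference, the deferred stylesheet used the standard media-swap pattern. A sketch (the bundle path is illustrative, not the client's actual filename):

```html
<!-- Non-critical CSS: starts as print media (non-blocking), swaps to all on load -->
<link
  rel="stylesheet"
  href="/styles/non-critical.css"
  media="print"
  onload="this.media='all'"
/>
<noscript>
  <link rel="stylesheet" href="/styles/non-critical.css" />
</noscript>
```

The noscript fallback ensures users without JavaScript still get the full stylesheet, just without the deferral.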
Fix 3: Font loading strategy (LCP -0.5s)
The site loaded four custom font files (two weights of a display font and two weights of a body font) via Google Fonts with the default blocking behavior. On slow connections, this added as much as 500ms to the first render. We made three changes.
First, we self-hosted the fonts using next/font, which automatically inlines the @font-face declarations and uses font-display: swap. This eliminated the external request to Google Fonts and the associated DNS lookup, TCP connection, and TLS handshake. Second, we preloaded the two primary font files (regular weight for body, bold for headings) with <link rel="preload">. Third, we used the size-adjust CSS property to create metric-compatible fallback fonts, minimizing the layout shift when custom fonts loaded.
// app/fonts.ts
import localFont from 'next/font/local';

export const displayFont = localFont({
  src: [
    { path: '../fonts/display-bold.woff2', weight: '700', style: 'normal' },
    { path: '../fonts/display-medium.woff2', weight: '500', style: 'normal' },
  ],
  display: 'swap',
  fallback: ['system-ui', 'Arial', 'sans-serif'],
  adjustFontFallback: 'Arial', // Generates size-adjust automatically
  variable: '--font-display',
  preload: true,
});

export const bodyFont = localFont({
  src: [
    { path: '../fonts/body-regular.woff2', weight: '400', style: 'normal' },
    { path: '../fonts/body-medium.woff2', weight: '500', style: 'normal' },
  ],
  display: 'swap',
  fallback: ['system-ui', 'Arial', 'sans-serif'],
  adjustFontFallback: 'Arial',
  variable: '--font-body',
  preload: true,
});
The font loading optimization reduced render-blocking time by 500ms and also improved CLS from 0.18 to 0.06 by preventing layout shifts from font swaps. For more detail on this approach, see our font loading CLS fix guide.
Fix 4: Third-party script management (LCP -0.4s)
A DevTools audit revealed 14 third-party scripts totaling 890KB. These included analytics (Google Analytics, Hotjar, Segment), advertising (Google Ads, Meta Pixel), customer support (Intercom), and marketing tools (Klaviyo, OptinMonster). Several loaded synchronously in the document head, directly blocking rendering.
We categorized each script by business criticality and loading priority. Analytics could load after the page was interactive. Chat widgets could use facade patterns (loading only when the user clicks). Ad pixels could defer until after LCP. We moved every non-critical script to next/script, using the afterInteractive strategy (runs after hydration) or the lazyOnload strategy (runs during browser idle time after the load event).
import Script from 'next/script';

// Analytics: load after the page becomes interactive
<Script
  src="https://www.googletagmanager.com/gtag/js?id=G-XXXXX"
  strategy="afterInteractive"
/>

// Chat widget: defer to browser idle time after the load event
<Script
  src="https://widget.intercom.io/widget/xxxxx"
  strategy="lazyOnload"
/>

// Ad pixels: defer until after the main content has loaded
<Script
  src="https://connect.facebook.net/en_US/fbevents.js"
  strategy="lazyOnload"
/>
For Intercom, we implemented a facade pattern -- a lightweight CSS-only chat button that loaded the full 340KB Intercom widget only when clicked. This alone saved 340KB from the initial page load. Our third-party scripts guide covers this technique in detail.
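The core of the facade is "load the real widget at most once, on first interaction". A minimal framework-agnostic sketch of that logic (hypothetical; the real implementation also replaces the placeholder button with Intercom's launcher, and loadWidget would inject the actual script tag):

```javascript
// Hypothetical facade helper: defer an expensive widget until first click.
// `loadWidget` is assumed to inject the real <script> tag (e.g. Intercom's embed).
function createWidgetFacade(loadWidget) {
  let loaded = false;
  return function onClick() {
    if (loaded) return; // subsequent clicks are no-ops
    loaded = true;
    loadWidget();
  };
}
```

Wiring the returned handler to the lightweight CSS-only button means the 340KB bundle is only fetched by users who actually open the chat.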
Fix 5: Server response optimization (LCP -0.2s)
The server-side rendering time for product pages averaged 1.2 seconds on cold starts. After investigation, the bottleneck was the headless CMS API call that fetched product data -- it averaged 800ms with no caching layer. We added three optimizations.
First, we implemented ISR (Incremental Static Regeneration) with a 60-second revalidation window for product pages. This meant the first request after a cache miss still hit the CMS API, but subsequent requests served a cached static page in under 50ms. Second, we added stale-while-revalidate caching headers so edge CDN nodes served stale content while revalidating in the background. Third, we moved the Next.js deployment from a single-region setup to Vercel's edge network, reducing geographic latency for users outside North America.
// app/products/[slug]/page.tsx
export const revalidate = 60; // Revalidate every 60 seconds

export async function generateStaticParams() {
  // Pre-generate the top 500 products at build time
  const products = await getTopProducts(500);
  return products.map((p) => ({ slug: p.slug }));
}

export default async function ProductPage({
  params,
}: {
  params: Promise<{ slug: string }>; // Next.js 15: params is a Promise
}) {
  const { slug } = await params;
  const product = await getProduct(slug);
  // ...render product
}
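The stale-while-revalidate piece can be sketched as a small helper for building the header value. This is hypothetical (the 60s/300s windows are illustrative, not the client's actual settings):

```javascript
// Hypothetical helper: build a Cache-Control value for edge CDNs that
// serve stale content while revalidating in the background.
function swrCacheControl(sMaxAgeSeconds, staleWindowSeconds) {
  return `public, s-maxage=${sMaxAgeSeconds}, stale-while-revalidate=${staleWindowSeconds}`;
}

console.log(swrCacheControl(60, 300));
// public, s-maxage=60, stale-while-revalidate=300
```

With headers like this, a user who hits an edge node just after the cache window expires still gets the fast stale response; only the background revalidation pays the CMS latency.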
TTFB dropped from 1.2s to 180ms for cached pages (85% of traffic) and 600ms for cache misses (down from 1.2s due to the edge deployment). See our server response TTFB guide for more optimization strategies.
Results after three weeks
The combined effect of all five optimizations brought LCP from 4.8 seconds to 1.9 seconds at the 75th percentile (the threshold Google uses for CWV assessment). Here is the full breakdown of metrics before and after:
Full metrics comparison (p75 field data)
| Metric | Before | After | Change |
|---|---|---|---|
| LCP | 4.8s | 1.9s | -60% |
| CLS | 0.18 | 0.04 | -78% |
| INP | 280ms | 165ms | -41% |
| TTFB | 1.2s | 0.18s | -85% |
| Total JS | 890KB | 245KB | -72% |
| Total CSS | 380KB | 45KB inline | -88% |
Optimization timeline: cumulative LCP reduction
Business impact
Within 28 days of deploying all fixes (the time needed for CrUX data to fully update), the business saw measurable results:
- CWV pass rate: 22% to 94% of URLs rated "Good" in Search Console
- Organic traffic: +18% month-over-month (recovering the earlier 23% loss and then some)
- Bounce rate: -12% on product pages (users no longer abandoned slow-loading pages)
- Conversion rate: +7.3% on mobile (from 1.8% to 1.93%), attributable to faster page loads
- Lighthouse score: 38 to 94 (mobile performance)
The organic traffic recovery alone represented approximately $45,000/month in attributed revenue based on their average order value and organic conversion rate.
Ongoing monitoring setup
To prevent future regressions, we implemented three layers of monitoring. First, a Lighthouse CI check in the GitHub Actions pipeline that blocks merges if LCP exceeds 2.5s in lab conditions. Second, the web-vitals library sending real-user data to their analytics platform with alerting thresholds. Third, a monthly CrUX review via Google Search Console's Core Web Vitals report.
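A Lighthouse CI budget of this kind is typically expressed in a lighthouserc.js file. A sketch of what the merge-blocking check might look like (the URLs and exact thresholds here are assumptions, not the client's actual config):

```javascript
// lighthouserc.js -- hypothetical sketch of the CI budget described above
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'], // representative pages would go here
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Fail the build if lab LCP exceeds the 2.5s "good" threshold
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
  },
};
```

Running each URL three times and asserting on the aggregate reduces flakiness from lab-condition variance.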
In the two months since the optimization, LCP has remained stable at 1.8-2.0s despite ongoing feature development. The CI check has caught three potential regressions before they reached production. For details on setting up a similar monitoring pipeline, see our performance monitoring tutorial.
Key takeaways
- Always audit before optimizing. Two days of measurement saved weeks of guessing. The data clearly showed images were the biggest bottleneck, not JavaScript -- contrary to the team's initial assumption.
- Image optimization delivers the most LCP impact per effort. Converting formats, adding responsive sizing, and fixing loading priorities took two developer-days and cut 1.2 seconds. It should always be the first thing you fix.
- Check for loading="lazy" on your LCP element. This is one of the most common LCP anti-patterns and is trivial to fix once identified.
- Third-party scripts add up invisibly. No single script was catastrophic, but 14 scripts collectively blocked 400ms of render time. Audit your third-party scripts quarterly.
- ISR is underutilized. Many Next.js sites run full SSR when ISR with a short revalidation window would serve the vast majority of requests from cache.
- Set up monitoring from day one. Without CI checks and RUM alerts, performance improvements erode within months as new features and dependencies are added.
Frequently asked questions
How long did the LCP optimization project take?
The entire optimization effort took approximately three weeks from initial audit to final deployment. The image optimization changes alone took two days and delivered the single biggest improvement. We recommend tackling optimizations in priority order -- images and fonts first, then JavaScript and server-side improvements -- so you see measurable gains within the first week.
What was the biggest single improvement to LCP?
Converting hero images from PNG to AVIF with responsive srcset and adding fetchpriority="high" plus a preload link was the single largest improvement, reducing LCP by approximately 1.2 seconds. Image optimization is almost always the highest-impact LCP fix because the LCP element is an image on over 70% of web pages.
Does this approach work for sites not built with Next.js?
The core principles apply to any framework or platform. Image optimization, font loading strategies, critical CSS extraction, and third-party script management are universal. The specific implementation details differ -- for example, WordPress uses different caching plugins and Shopify uses Liquid templates -- but the diagnostic process and optimization priorities remain the same. See our framework-specific fix guides for implementations across 12 frameworks.
How do you maintain good LCP scores after the initial optimization?
We set up three safeguards: a Lighthouse CI budget in the CI/CD pipeline that fails builds if LCP exceeds 2.5s, real-user monitoring with the web-vitals library sending data to our analytics dashboard, and a monthly performance review using CrUX data from Google Search Console. The CI check catches regressions before they ship, while RUM catches issues in production.
What tools did you use to diagnose the LCP issues?
We used a combination of Chrome DevTools Performance panel for waterfall analysis, WebPageTest for filmstrip comparisons and detailed network timing, PageSpeed Insights for field data from CrUX, and the web-vitals JavaScript library for continuous real-user monitoring. DevTools was most useful for identifying the specific bottlenecks, while field data confirmed the improvements affected real users. See our guide to measuring CWV for tool-by-tool instructions.