Fix LCP in Vite: Optimize Largest Contentful Paint with Modern Bundling
Vite has transformed the frontend development experience with near-instant dev server startup and native ESM serving, but many teams ship Vite applications with Largest Contentful Paint scores well above the 2.5-second threshold. The reason is usually not Vite itself but a handful of configuration decisions left at their defaults: no hero image preload, a monolithic vendor bundle, uncompressed assets, and a React plugin still running Babel instead of SWC. Unlike frameworks such as Next.js, Vite is deliberately un-opinionated about production performance configuration, which means the responsibility for LCP optimization sits entirely with you. This guide covers every meaningful Vite-specific lever, from switching to @vitejs/plugin-react-swc and writing build.rollupOptions.output.manualChunks to enabling the modulepreload polyfill, wiring up vite-plugin-image-optimizer and vite-imagetools, configuring asset inlining thresholds, and using server.warmup to reduce cold-start latency. Follow these steps and it is realistic to drop LCP from 4.1 seconds to 1.5 seconds on a median mobile connection.
- Add <link rel="preload"> for your hero image in index.html
- Switch from @vitejs/plugin-react to @vitejs/plugin-react-swc
- Set build.rollupOptions.output.manualChunks to split vendor from app code
- Install vite-plugin-image-optimizer for build-time image compression
- Enable build.modulePreload.polyfill: true for broader browser support
- Raise build.assetsInlineLimit to inline small assets and cut round trips
Expected results
The following improvements are typical across Vite 5 projects after applying all six steps. Exact gains depend on your asset sizes and server infrastructure, but the ratios are consistent.
Before
4.1s
LCP score (Poor) — default Vite config, Babel plugin, single vendor chunk, uncompressed images, no hero preload
After
1.5s
LCP score (Good) — SWC plugin, manual chunks, compressed WebP/AVIF images, preloaded hero, modulepreload enabled
Common causes of poor LCP in Vite apps
Before diving into fixes, it helps to understand why Vite applications so frequently score poorly on LCP despite fast development builds. The core issue is that Vite's default production configuration prioritizes correctness and compatibility over aggressive performance optimization. Most of these defaults are sensible starting points, but they leave significant headroom on the table.
- No hero image preloading. Vite does not automatically inject preload hints for images referenced in CSS or JS. The browser discovers the hero image only after parsing and partially executing JavaScript, adding hundreds of milliseconds to LCP on first load. This is the single most impactful fix for most Vite apps and requires nothing more than one line in index.html.
- Single large vendor chunk. By default, Vite (via Rollup) generates one vendor chunk containing every dependency. React, React DOM, your router, your date library, your form library, and anything else all land in the same file. This bundle is often 400-800KB uncompressed, and the browser must download, parse, and execute the relevant portions before it can render the LCP element.
- Babel transformation overhead in the default React plugin. The standard @vitejs/plugin-react uses Babel to transform JSX and apply Fast Refresh. While Babel has a rich plugin ecosystem, it is significantly slower than SWC for raw transformation throughput. In development this means longer cold starts; in production it means slightly larger output due to Babel runtime helpers injected per file.
- Unoptimized images served at their original resolution and format. Vite copies images from public/ or src/assets/ to the dist directory without any compression, resizing, or format conversion. A 3MB JPEG hero image on a fast desktop connection goes unnoticed in local testing but wrecks LCP on a 4G mobile device.
- Missing modulepreload for dynamically imported chunks. Vite's code-splitting emits dynamic imports for route-level chunks, but without the modulepreload polyfill those chunks are fetched lazily and sequentially rather than in parallel, adding latency that compounds with each nested import.
- Small assets creating unnecessary round trips. The default assetsInlineLimit is 4096 bytes (4KB). Assets below this threshold are inlined as base64. Raising the threshold eliminates extra HTTP requests for small fonts, icons, and background images that might otherwise each cost 20-50ms on a high-latency connection.
For a deeper understanding of how these factors interact, see the complete LCP guide covering thresholds, measurement tools, and the full optimization hierarchy. If you are also working on image-specific issues across frameworks, the image optimization guide provides format and compression benchmarks.
Step-by-step fix
Step 1: Switch to @vitejs/plugin-react-swc
The SWC-based React plugin compiles JSX 20x faster than Babel in development and produces marginally smaller output in production by omitting per-file Babel runtime helpers. More importantly, eliminating Babel removes a class of subtle interop bugs and simplifies your dependency tree. If you use Babel plugins that have no SWC equivalent (certain AST-level transforms), this step may not be immediately possible, but for the vast majority of Vite React projects the migration is a one-line change.
If you rely on @vitejs/plugin-legacy for IE11 or old Chromium support, note that legacy mode re-introduces a Babel-based transformation pass for the legacy bundle. The SWC plugin handles the modern bundle while legacy handles the fallback. Both can coexist, but the legacy bundle will be larger and slower; profile real user data before enabling legacy mode, since global IE11 share is now negligible.
npm remove @vitejs/plugin-react
npm install -D @vitejs/plugin-react-swc
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
export default defineConfig({
plugins: [react()],
});
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react-swc';
export default defineConfig({
plugins: [
react({
// SWC decorator support if you use experimental decorators
// jsxImportSource: '@emotion/react', // uncomment for Emotion users
}),
],
});
After this change, run npm run build and compare bundle sizes with du -sh dist/assets/*.js. You will typically see a 2-5% reduction in total JS output due to fewer Babel helper injections. The main benefit, however, is in development iteration speed, which indirectly helps LCP by making optimization work faster to test.
Step 2: Preload the hero image in index.html
This is the highest-impact single change for LCP in any Vite application. By default, the browser discovers above-the-fold images only after parsing HTML, downloading JavaScript, executing it, and rendering the component tree. A <link rel="preload"> tag in index.html tells the browser to fetch the image immediately, in parallel with everything else, as soon as the HTML document begins parsing. This alone typically saves 300-800ms on LCP depending on your image size and server response time.
Vite's index.html is the application shell for all SPA routes. For multi-page apps (using Rollup's multi-entry configuration), add the preload tag to each relevant HTML entry point.
<!-- Preload the LCP hero image -->
<link
rel="preload"
as="image"
href="/images/hero.webp"
fetchpriority="high"
type="image/webp"
>
<!-- If you use responsive images, preload with imagesrcset -->
<link
rel="preload"
as="image"
imagesrcset="
/images/hero-480w.webp 480w,
/images/hero-960w.webp 960w,
/images/hero-1440w.webp 1440w
"
imagesizes="100vw"
fetchpriority="high"
>
The fetchpriority="high" attribute (Fetch Priority API) signals to the browser that this resource is more important than other images on the page. Chromium 101+ and Safari 17.2+ support this attribute. For older browsers, the attribute is safely ignored and the preload still works. Combine this with responsive image techniques covered in the responsive images for LCP guide to maximize coverage across viewport sizes.
If your hero image is imported through the build pipeline (from src/assets/ rather than public/), its filename will include a content hash like hero.a3b4c5d6.webp. A static preload tag cannot reference a hashed filename. Either place the hero image in the public/ directory (no hashing, stable filename) or use a Vite plugin like vite-plugin-html to inject preload tags dynamically at build time using the manifest.
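If you prefer not to add a dependency, Vite's transformIndexHtml plugin hook can inject the preload tag itself. Below is a minimal sketch; the hard-coded image path is a placeholder, and a production version would resolve the hashed filename from the build manifest or the Rollup bundle instead:

```javascript
// Minimal sketch of a custom Vite plugin that injects a hero preload tag.
// The default href is a placeholder; look the hashed filename up in the
// build manifest if the image goes through Vite's asset pipeline.
function heroPreloadPlugin(href = '/images/hero.webp') {
  return {
    name: 'hero-preload',
    transformIndexHtml() {
      // Vite merges returned tag descriptors into the emitted HTML <head>.
      return [
        {
          tag: 'link',
          attrs: {
            rel: 'preload',
            as: 'image',
            href,
            fetchpriority: 'high',
          },
          injectTo: 'head',
        },
      ];
    },
  };
}
```

Add `heroPreloadPlugin()` to the `plugins` array in vite.config; because it returns tag descriptors rather than string-replacing HTML, it composes cleanly with other HTML-transforming plugins.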
Step 3: Configure manual chunks with build.rollupOptions
Vite's default chunking strategy puts all node_modules code into a single vendor chunk. This is convenient but suboptimal for LCP because the browser must download and parse the entire vendor bundle before any rendering can begin. Manual chunk configuration lets you separate stable, rarely-changing libraries (React, React DOM) from frequently-changing application dependencies, improving cache hit rates and reducing the JS payload required on first visit.
The goal is not to create as many chunks as possible — excessive splitting increases HTTP overhead and can hurt performance on HTTP/1.1 servers. Aim for 3-5 vendor chunks, splitting along lines of change frequency and usage patterns. Since Vite ships ESM by default, modern browsers can fetch multiple ES module chunks in parallel via HTTP/2 multiplexing without the waterfall problem that affected CommonJS loaders.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react-swc';
export default defineConfig({
plugins: [react()],
build: {
rollupOptions: {
output: {
manualChunks(id) {
// Core React runtime — most stable, always cached
if (id.includes('node_modules/react/') ||
id.includes('node_modules/react-dom/') ||
id.includes('node_modules/scheduler/')) {
return 'react-core';
}
// Routing — changes rarely
if (id.includes('node_modules/react-router') ||
id.includes('node_modules/@remix-run/router')) {
return 'router';
}
// Data fetching and state
if (id.includes('node_modules/@tanstack/') ||
id.includes('node_modules/zustand/') ||
id.includes('node_modules/jotai/')) {
return 'state';
}
// Everything else in node_modules
if (id.includes('node_modules/')) {
return 'vendor';
}
},
},
},
},
});
After applying this configuration, inspect chunk sizes with rollup-plugin-visualizer (or npx vite-bundle-visualizer). Target a maximum initial JS payload of 150KB compressed for the fastest LCP. Any chunk that exceeds 200KB compressed and is loaded synchronously on the critical path is worth investigating further.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react-swc';
import { visualizer } from 'rollup-plugin-visualizer';
export default defineConfig({
plugins: [
react(),
// Run: ANALYZE=true npm run build
process.env.ANALYZE && visualizer({
open: true,
gzipSize: true,
brotliSize: true,
filename: 'dist/stats.html',
}),
].filter(Boolean),
build: {
rollupOptions: {
output: {
manualChunks(id) {
if (id.includes('node_modules/react/') ||
id.includes('node_modules/react-dom/') ||
id.includes('node_modules/scheduler/')) {
return 'react-core';
}
if (id.includes('node_modules/react-router')) {
return 'router';
}
if (id.includes('node_modules/')) {
return 'vendor';
}
},
},
},
},
});
CSS code splitting is enabled by default in Vite (build.cssCodeSplit: true). Each JS chunk that imports CSS will produce a corresponding CSS file that is loaded alongside its JS chunk. This is generally beneficial for LCP because critical page CSS can be inlined or preloaded separately from component-level styles. If you find that CSS splitting creates too many small files, you can disable it with build.cssCodeSplit: false to consolidate all styles into a single stylesheet, though this trades chunk granularity for fewer HTTP requests.
Vite also handles dependency pre-bundling automatically via esbuild during development (optimizeDeps). When you first start the dev server, Vite pre-bundles CommonJS dependencies into ESM, caches them in node_modules/.vite/deps, and serves them as single files. This is why the first npm run dev is slower than subsequent runs. Pre-bundling does not affect production builds, but understanding it helps explain why your dev LCP and prod LCP can differ significantly.
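Pre-bundling can be tuned when a heavy CommonJS dependency is discovered late and triggers a mid-session re-bundle. A hedged sketch of the relevant optimizeDeps options; the package names below are illustrative placeholders, not requirements:

```javascript
// vite.config.js -- illustrative optimizeDeps tuning. The listed package
// names are examples only; substitute the dependencies your app uses.
import { defineConfig } from 'vite';

export default defineConfig({
  optimizeDeps: {
    // Pre-bundle these up front instead of on first browser request,
    // avoiding a dev-server re-bundle when they are discovered lazily.
    include: ['lodash-es', 'date-fns'],
    // Skip pre-bundling for packages that already ship clean ESM
    // and are cheap to serve as-is (hypothetical example).
    exclude: ['some-pure-esm-lib'],
  },
});
```

This affects development and vite preview behavior only; production builds still go through Rollup as described above.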
Step 4: Optimize images with vite-plugin-image-optimizer and vite-imagetools
Unoptimized images are responsible for the majority of LCP failures in Vite applications. A typical JPEG hero image exported from a design tool is 2-4MB. After compression and WebP conversion it should be 80-200KB. That difference translates directly to seconds of LCP on a median mobile connection. Vite provides two complementary plugins for image optimization, each solving a different part of the problem.
For deeper coverage of image formats, compression tools, and srcset patterns, see the image optimization guide and the dedicated responsive images for LCP fix.
npm install -D vite-plugin-image-optimizer
npm install -D vite-imagetools
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react-swc';
import { ViteImageOptimizer } from 'vite-plugin-image-optimizer';
import { imagetools } from 'vite-imagetools';
export default defineConfig({
plugins: [
react(),
// Compress all images at build time using sharp and svgo
ViteImageOptimizer({
png: {
quality: 80,
compressionLevel: 9,
},
jpeg: {
quality: 80,
progressive: true,
},
jpg: {
quality: 80,
progressive: true,
},
webp: {
lossless: false,
quality: 80,
alphaQuality: 80,
force: false,
},
avif: {
lossless: false,
quality: 60,
force: false,
},
svg: {
// svgo options
multipass: true,
plugins: [
{ name: 'removeViewBox', active: false },
{ name: 'removeDimensions', active: true },
],
},
}),
// Enable query-based transforms: import heroUrl from './hero.jpg?w=960&format=webp'
imagetools(),
],
});
With vite-imagetools active, you can import images with transform directives directly in your component code. This is particularly useful for generating responsive images that reference the correct Vite-hashed URLs in your srcset attributes.
// vite-imagetools generates typed metadata for each variant
import heroSm from './hero.jpg?w=480&format=webp&as=url';
import heroMd from './hero.jpg?w=960&format=webp&as=url';
import heroLg from './hero.jpg?w=1440&format=webp&as=url';
import heroFallback from './hero.jpg?w=960&format=jpeg&as=url';
export function Hero() {
return (
<picture>
<source
type="image/webp"
srcSet={`${heroSm} 480w, ${heroMd} 960w, ${heroLg} 1440w`}
sizes="100vw"
/>
<img
src={heroFallback}
alt="Application hero image"
width={960}
height={540}
fetchPriority="high"
decoding="sync"
/>
</picture>
);
}
Note the fetchPriority="high" and decoding="sync" attributes on the fallback img. fetchPriority mirrors the behavior you get from the preload tag: the browser treats this image as high priority on the network. decoding="sync" asks the browser to decode the image synchronously with rendering, so the first paint includes the fully decoded image rather than painting first and swapping the image in afterward. Use decoding="sync" only for the LCP image; use decoding="async" for all below-fold images so their decode work never competes with the initial render.
Step 5: Enable modulepreload polyfill and tune asset inlining
Vite emits <link rel="modulepreload"> tags for ES module chunks, which tell supporting browsers to fetch and parse modules before they are needed. The modulepreload polyfill extends this capability to browsers that support ES modules but not the modulepreload link relation, specifically Safari versions before 17 and some Chromium-based browsers on older Android. The polyfill adds approximately 1KB to your main bundle, which is a worthwhile tradeoff.
The asset inlining threshold determines which static assets get base64-encoded directly into the JavaScript or CSS output. The default of 4096 bytes (4KB) is conservative. Raising it to 8192 bytes reduces HTTP requests for small fonts, icons, and background images without meaningfully increasing bundle size, since base64 encoding adds roughly 33% overhead.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react-swc';
import { ViteImageOptimizer } from 'vite-plugin-image-optimizer';
import { imagetools } from 'vite-imagetools';
export default defineConfig({
plugins: [react(), ViteImageOptimizer(), imagetools()],
build: {
// Inline assets smaller than 8KB as base64 (default: 4096)
assetsInlineLimit: 8192,
// Enable modulepreload polyfill for broader browser support
modulePreload: {
polyfill: true,
// Optionally exclude specific paths from preload injection
// resolveDependencies: (filename, deps, ctx) => deps,
},
// Enable CSS code splitting (on by default, shown for clarity)
cssCodeSplit: true,
// Target modern browsers for smaller output
// Remove if you need legacy support
target: ['es2020', 'chrome87', 'firefox78', 'safari14'],
rollupOptions: {
output: {
manualChunks(id) {
if (id.includes('node_modules/react/') ||
id.includes('node_modules/react-dom/') ||
id.includes('node_modules/scheduler/')) {
return 'react-core';
}
if (id.includes('node_modules/react-router')) {
return 'router';
}
if (id.includes('node_modules/')) {
return 'vendor';
}
},
},
},
},
});
Setting build.target to modern browser targets allows Rollup and esbuild to emit smaller, more efficient JavaScript by using newer syntax (optional chaining, nullish coalescing, native async/await) without transpilation. The targets listed above cover approximately 95% of global browser traffic as of early 2026 according to Can I Use data. If your audience requires older browser support, adjust accordingly and accept the corresponding bundle size increase.
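The roughly 33% base64 overhead quoted above follows directly from how the encoding works: every 3 input bytes become 4 output characters. A quick check in Node:

```javascript
// Base64 encodes each 3-byte group as 4 ASCII characters, so an inlined
// asset grows by about a third before the bundle is gzip/brotli compressed.
const asset = Buffer.alloc(3000); // stand-in for a 3000-byte icon
const inlined = asset.toString('base64');

console.log(inlined.length); // 4000 characters for 3000 bytes
console.log(inlined.length / asset.length); // ≈ 1.33 overhead ratio
```

In practice gzip or brotli on the emitted bundle claws back much of that overhead, which is why inlining small assets is usually a net win despite the larger raw output.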
Step 6: Configure server.warmup for dev and preview
Vite 5 introduced server.warmup, which pre-transforms a list of files when the dev server starts, before any browser requests arrive. Without warmup, the first request for each module triggers a cold transformation that adds latency to your initial page load in development and in vite preview. While this does not affect production LCP directly, it meaningfully shortens the feedback loop during optimization work, ensuring that your measured dev-server LCP reflects actual performance characteristics rather than cold-start overhead.
Warmup also helps identify modules that are expensive to transform, which often correlates with runtime performance bottlenecks worth addressing before shipping.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react-swc';
export default defineConfig({
plugins: [react()],
server: {
warmup: {
// Pre-transform these files on server start
clientFiles: [
'./src/main.tsx',
'./src/App.tsx',
'./src/components/Hero.tsx',
'./src/components/Layout.tsx',
'./src/pages/Home.tsx',
],
},
},
build: {
assetsInlineLimit: 8192,
modulePreload: { polyfill: true },
rollupOptions: {
output: {
manualChunks(id) {
if (id.includes('node_modules/react/') ||
id.includes('node_modules/react-dom/')) {
return 'react-core';
}
if (id.includes('node_modules/')) {
return 'vendor';
}
},
},
},
},
});
List the files that appear on your most-visited routes. The warmup list is path-resolved relative to vite.config.ts, so use the same paths you would reference in imports. Keep the list focused on the critical rendering path; warming up your entire src/ tree defeats the purpose and slows server startup.
Verification
Measure LCP before and after applying these changes using a consistent methodology. Lab measurements (Lighthouse, WebPageTest) provide reproducibility; field measurements (Chrome User Experience Report, PageSpeed Insights) reflect real user conditions including device variability, network quality, and third-party script interference.
Measuring LCP in Vite specifically has one important nuance: because Vite apps are typically SPAs, the LCP element on client-side-navigated pages may be measured differently than on full page loads. Always measure LCP on a full hard reload of the entry route for consistent comparisons during optimization.
- Lighthouse CLI: npx lighthouse https://your-staging-url.com --output=json --only-categories=performance. Run three times and average the results to reduce variance from network jitter.
- WebPageTest (webpagetest.org): Use a Moto G4 profile on a 4G connection to simulate median mobile conditions. Enable filmstrip mode to visually confirm which element is measured as LCP and when it paints relative to other resources.
- Chrome DevTools Performance tab: Record a page load and look for the LCP timing marker in the timeline. Expand the Network panel and verify that your hero image request starts in the first 200ms of navigation, confirming that the preload tag is working.
- PageSpeed Insights: Enter your production URL to see field data from CrUX alongside lab data. Field data at the 75th percentile is the metric Google uses for search ranking. Lab data alone can be misleading for SPAs with personalization or A/B testing.
- Vite build output: After each change, run npm run build and review the chunk sizes printed to stdout. Aim for the largest single JS chunk to be under 200KB before gzip. Chunks above 500KB before gzip are almost always worth splitting further or replacing with lighter alternatives.
For guidance on interpreting these measurements and setting meaningful LCP targets for your specific audience, the JavaScript performance guide covers bundle analysis, main-thread profiling, and the relationship between parse/execute time and LCP. The complete LCP guide has a full section on measurement tooling and field vs. lab data interpretation.
Common pitfalls
- Preloading the wrong image. Only the LCP element should have a high-priority preload. If you preload an image that is not the LCP element, you waste bandwidth competing for the same network connection as the actual LCP resource. Use the Chrome DevTools Performance tab to confirm which element Chrome selects as the LCP candidate before adding preload tags.
- Over-splitting chunks and creating waterfalls. Creating too many small chunks can paradoxically hurt LCP. If chunk A imports chunk B which imports chunk C, and each must be downloaded before the next starts (a waterfall), you are worse off than with one larger chunk. Use modulepreload to tell the browser to fetch all chunks in parallel, or consolidate chunks that always load together.
- Using vite-imagetools without type declarations. vite-imagetools requires adding a module declaration to your tsconfig.json or a .d.ts file; otherwise TypeScript will error on image imports with query strings. Add "vite-imagetools/client" to your compilerOptions.types array or create a src/vite-env.d.ts with /// <reference types="vite-imagetools/client" />.
- Ignoring the legacy plugin tradeoff. If you enable @vitejs/plugin-legacy, Vite generates two bundles: a modern ESM bundle and a legacy IIFE bundle for old browsers. The legacy bundle is loaded via a <script nomodule> tag. Modern browsers ignore it, but it still appears in the network waterfall and can affect DNS prefetching and cache priming on CDNs that do not differentiate. Audit whether you genuinely have legacy browser traffic before enabling this plugin.
- Not accounting for dependency pre-bundling differences. Because Vite pre-bundles dependencies in development and uses Rollup in production, your dev bundle and prod bundle can differ in meaningful ways. Always verify that your manual chunk configuration actually takes effect in the production build by inspecting dist/assets/ after npm run build, not by assuming dev behavior matches.
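The type declaration described in the vite-imagetools pitfall can live in a single ambient file. This sketch mirrors the triple-slash reference mentioned above; the wildcard fallback is a hypothetical pattern for query strings the packaged types may not cover:

```typescript
// src/vite-env.d.ts -- makes TypeScript accept query-string image imports.
/// <reference types="vite/client" />
/// <reference types="vite-imagetools/client" />

// Hypothetical manual fallback: a wildcard module declaration for a
// specific query suffix, if the packaged types miss your pattern.
declare module '*&as=url' {
  const src: string;
  export default src;
}
```
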
Quick checklist
- @vitejs/plugin-react-swc installed and the Babel-based plugin removed
- <link rel="preload" as="image" fetchpriority="high"> added for the hero image in index.html
- build.rollupOptions.output.manualChunks configured to separate React core and router
- vite-plugin-image-optimizer installed and compressing JPEG/PNG/WebP at build time
- vite-imagetools used to generate responsive srcset variants in WebP/AVIF
- build.modulePreload.polyfill: true set for Safari and older browser support
- build.assetsInlineLimit raised to 8192 to reduce small-asset round trips
- server.warmup.clientFiles lists critical-path components for faster dev feedback
- LCP verified with Lighthouse CLI and WebPageTest on a mobile profile before and after
Frequently asked questions
A well-optimized Vite application using hero image preloading, manual chunk splitting, and modern image formats should achieve LCP under 1.6 seconds on a fast connection. If you are seeing LCP above 3 seconds, the most common causes are an unpreloaded hero image, a large monolithic JavaScript vendor chunk, and uncompressed image assets. Measure with WebPageTest on a mobile throttling profile to simulate median real-user conditions rather than relying on your fast development machine and local network.
SWC primarily improves developer experience and build throughput, not runtime LCP in production. The indirect benefit is that faster builds encourage more frequent optimization iterations, and eliminating Babel runtime helpers can slightly reduce bundle size by 2-5%. For production LCP, focus on preloading, chunk splitting, and image optimization first. Switch to SWC for the development speed gain and the simpler dependency graph, not as a primary LCP fix.
Without manual chunks, Vite generates one large vendor bundle that the browser must download and parse before rendering can begin. Separating stable vendor libraries (React, React DOM) from frequently-changing application code means browsers can cache the React bundle across deploys and only re-download app-specific chunks when your code changes. This reduces the first-visit JS payload and speeds up repeat visits. Smaller individual chunks also give the browser's parser more parallelism opportunities on multi-core devices.
Vite emits <link rel="modulepreload"> tags that tell browsers to fetch ES module chunks early, before they are dynamically imported at runtime. The polyfill extends this behavior to browsers that support ES modules but not the modulepreload link relation, specifically Safari versions before 17 and some older Chromium builds. The polyfill adds roughly 1KB to your main bundle. For apps targeting only modern browsers in controlled environments (internal tools, for instance), you can set build.modulePreload.polyfill: false to skip it. For public-facing sites, enabling it is the safer choice.
They solve different problems. vite-plugin-image-optimizer compresses existing images at build time using sharp and svgo, reducing file sizes without changing your component markup or import statements. vite-imagetools lets you import images with transform directives in your component code, generating responsive srcsets, modern formats, and resized variants that reference Vite's hashed filenames. Most production Vite apps benefit from both: use the optimizer for baseline compression across all images, and imagetools for the hero and other above-the-fold images where responsive srcsets directly impact LCP.
Related resources
Complete LCP Guide
The comprehensive guide to understanding and optimizing Largest Contentful Paint across all frameworks and deployment targets.
Fix LCP in React
Framework-agnostic React techniques for preloading, lazy loading, and reducing hydration delay that complement Vite-specific optimizations.
Fix LCP in Next.js
See how Next.js handles LCP optimization with its built-in Image component, ISR, and React Server Components.
Responsive Images for LCP
Cross-framework guide to srcset, sizes, picture elements, and modern image formats for maximum LCP improvement.
Continue learning
JavaScript Performance
Bundle analysis, main-thread profiling, and the relationship between JavaScript parse time and LCP in modern SPAs.
Image Optimization
Format comparisons, compression benchmarks, and tooling for WebP, AVIF, and responsive images.
Fix LCP in React
React-specific patterns for preloading, Suspense boundaries, and hydration that apply regardless of your build tool.
LCP Deep Dive
Measurement tooling, thresholds, CrUX field data, and the complete optimization hierarchy for Largest Contentful Paint.