Fix INP with Webpack: Reduce Long Tasks via Better Bundling

Interaction to Next Paint (INP) measures the latency between a user gesture and the moment the browser produces a visible response. For applications bundled with Webpack, the primary driver of poor INP is long JavaScript tasks on the main thread -- tasks that block event processing for hundreds of milliseconds. A single 600KB vendor chunk, a duplicated copy of React, or a synchronously executed third-party analytics script can each push INP well past 200ms, the threshold beyond which Google classifies the metric as needing improvement. The good news is that Webpack 5 ships with a powerful set of bundling controls -- SplitChunksPlugin, persistent caching, tree shaking via the sideEffects field, and native support for dynamic import() -- that, when tuned correctly, consistently move INP from the 280-350ms range down to 100-130ms without requiring changes to application logic. This guide walks through each technique in order of impact, with concrete configuration snippets and before/after measurements from a real production migration.

TL;DR -- quick wins:
  • Run webpack-bundle-analyzer first -- you cannot fix what you cannot see.
  • Set optimization.runtimeChunk: 'single' immediately -- zero risk, instant cache improvement.
  • Add a "sideEffects" declaration to package.json -- false, or an array listing CSS and other side-effectful files -- to unlock full tree shaking.
  • Split each top-level route with import(/* webpackChunkName: "..." */ './Page').
  • Use cache: { type: 'filesystem' } to keep CI build times under control as chunks grow.

Expected results

The improvements below come from a React 18 + Webpack 5 single-page application with an initial JS payload of 1.4MB (parsed). After applying all seven steps, the initial payload dropped to 310KB, with the remaining code loaded on demand per route.

Before: 310ms -- INP (Needs Improvement). A single 1.4MB vendor bundle causing 480ms long tasks on mid-range Android devices.

After: 118ms -- INP (Good). Granular chunks, lazy routes, tree-shaken libraries, deferred third-party scripts.

Common causes of Webpack-induced INP regressions

Before reaching for configuration changes, it is worth understanding exactly which Webpack patterns block the main thread. Most INP problems in Webpack apps trace back to one of these root causes:

  • A single monolithic vendor chunk. When all node_modules code lands in one file, the browser must parse and compile the entire chunk before it can execute any application code. On a Pixel 4a, a 1.2MB vendor chunk takes 420-500ms to parse -- every interaction during that window is queued.
  • No runtime chunk extraction. Without optimization.runtimeChunk, the Webpack module manifest is embedded in the main entry chunk. Any change to any module hash changes the entry chunk hash, busting the CDN cache for the entire application on every deploy.
  • Missing or incorrect sideEffects annotation. Without this field, Webpack's production mode cannot safely eliminate dead exports, leaving hundreds of kilobytes of unused utility functions in the bundle.
  • Synchronous top-level imports of route components. When every route component is imported statically at the top of the router file, all route code is parsed before any route activates. A user visiting only the home page still pays the parse cost for the settings, admin, and checkout routes.
  • Duplicate dependencies at different semver ranges. npm's hoisting algorithm sometimes installs two copies of the same library at different versions. A project might end up with both lodash@4.17.20 and lodash@4.17.21, or two separate copies of React, each adding to long task duration.
  • Module Federation remotes loaded synchronously during interactions. Webpack 5 Module Federation is powerful but has a non-obvious performance cost: if a remote container has not been fetched before the user triggers an interaction, the interaction handler stalls on a network request, directly adding to INP.
  • Third-party analytics and chat scripts executed before interactivity. Scripts bundled directly into the entry chunk rather than loaded asynchronously steal main thread time during the period when users are most likely to interact.

Step-by-step fix

Step 1: Profile with webpack-bundle-analyzer

No bundling optimization should begin without a visualization of the current state. webpack-bundle-analyzer generates an interactive treemap showing the parsed size of every module in every chunk. Install it, run a production build, and open the report before changing any configuration.

Shell
npm install --save-dev webpack-bundle-analyzer
JavaScript -- webpack.config.js
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  mode: 'production',
  plugins: [
    // Only run during analysis, not in standard CI builds
    process.env.ANALYZE === 'true' && new BundleAnalyzerPlugin({
      analyzerMode: 'static',
      reportFilename: 'bundle-report.html',
      openAnalyzer: false,
    }),
  ].filter(Boolean),
};
Shell -- run analysis
ANALYZE=true npx webpack --config webpack.config.js
open bundle-report.html

In the treemap, look for three patterns: a single enormous vendors rectangle, duplicate module names appearing in multiple chunks, and large libraries whose functionality you are only using partially (Lodash, Moment.js, date-fns/esm). These are your targets for the steps that follow.

For deeper profiling, pair the bundle analyzer with the JavaScript performance guide to correlate bundle weight with specific long tasks in the Chrome DevTools Performance panel.

Step 2: Tune SplitChunksPlugin for granular splitting

Webpack's default SplitChunksPlugin configuration produces a single vendors chunk containing all node_modules code. For INP, the goal is to split that chunk into smaller, more focused pieces: one chunk for the framework runtime (React + ReactDOM, ~130KB), one for UI components (MUI, Radix, etc.), one for data utilities, and one per major third-party library. Smaller chunks parse faster individually and can be loaded on demand.

JavaScript -- webpack.config.js (SplitChunks)
module.exports = {
  mode: 'production',
  optimization: {
    splitChunks: {
      chunks: 'all',         // Split async AND sync chunks
      minSize: 20000,        // 20KB minimum chunk size
      maxSize: 244000,       // 244KB maximum -- split larger chunks
      minChunks: 1,
      cacheGroups: {
        // React core -- highly stable, cache forever
        reactCore: {
          test: /[\\/]node_modules[\\/](react|react-dom|scheduler)[\\/]/,
          name: 'vendor-react',
          chunks: 'all',
          priority: 40,
          enforce: true,
        },
        // UI component library -- changes less often than app code
        uiLibrary: {
          test: /[\\/]node_modules[\\/](@mui|@radix-ui|framer-motion)[\\/]/,
          name: 'vendor-ui',
          chunks: 'all',
          priority: 30,
        },
        // Data / utility libraries
        utils: {
          test: /[\\/]node_modules[\\/](lodash-es|date-fns|immer|zustand)[\\/]/,
          name: 'vendor-utils',
          chunks: 'all',
          priority: 20,
        },
        // All remaining node_modules
        defaultVendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendor-misc',
          chunks: 'all',
          priority: 10,
        },
      },
    },
  },
};

After this change, a build that previously emitted one vendors.js at 1.1MB will instead emit four chunks averaging 150-250KB each. The vendor-react chunk is essentially permanent -- it changes only when you upgrade React -- so it receives a long-lived cache header and is never re-downloaded by returning visitors.
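For the long-lived cache header to pay off, chunk filenames must change only when their content changes. A minimal sketch of content-hashed output naming (the dist path is illustrative):

```javascript
// webpack.config.js -- content-hashed filenames so unchanged chunks
// keep the same URL (and stay in CDN caches) across deploys
const path = require('path');

module.exports = {
  output: {
    filename: '[name].[contenthash:8].js',
    chunkFilename: '[name].[contenthash:8].js',
    path: path.resolve(__dirname, 'dist'),
  },
};
```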

Watch the chunk count: more chunks mean more HTTP requests. With HTTP/2 multiplexing, 8-15 parallel chunk requests is typically fine; beyond 20 initial chunks you may see a net regression on slower connections. Use maxInitialRequests: 6 and maxAsyncRequests: 10 in splitChunks to cap the count.
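A sketch of those request caps layered onto the splitChunks block (the values mirror the guidance above and should be tuned per project):

```javascript
// webpack.config.js -- cap parallel chunk requests so granular
// splitting does not turn into a request waterfall
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      maxInitialRequests: 6,  // at most 6 chunks on initial page load
      maxAsyncRequests: 10,   // at most 10 chunks per on-demand import()
      // ...cacheGroups as configured in Step 2
    },
  },
};
```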

Step 3: Extract the runtime chunk

The Webpack runtime is a small JavaScript manifest (~3KB) that maps module IDs to chunk filenames. By default it is embedded in the entry chunk, meaning that any change to any module anywhere in the application changes the entry chunk hash. Extracting it into a standalone file is a one-line configuration change with zero downsides.

JavaScript -- webpack.config.js
module.exports = {
  mode: 'production',
  optimization: {
    // Isolate the runtime manifest into its own tiny chunk
    runtimeChunk: 'single',
    // Use content-based hashes for stable long-term caching
    moduleIds: 'deterministic',
    chunkIds: 'deterministic',
  },
};

With runtimeChunk: 'single', a typical deploy now invalidates the cache for only two files: the runtime manifest and the application chunk that actually changed. The vendor-react chunk, unchanged for months, stays in the CDN cache. This directly reduces the amount of JavaScript a returning user must download and re-parse on deployment day, which is often when INP regressions spike in field data.

Use moduleIds: 'deterministic' alongside runtime extraction. Without it, Webpack assigns sequential integer IDs to modules; adding a new dependency anywhere in the tree shifts all IDs, causing all chunk hashes to change and negating the caching benefit.

Step 4: Code-split routes with dynamic import()

Every statically imported route component is included in the initial bundle, parsed before any interaction occurs. Replacing static imports with dynamic import() tells Webpack to emit a separate async chunk per route. The browser fetches and parses each route chunk only when the user navigates to that route, eliminating the parse cost for all other routes on page load.

JavaScript -- Before (static imports)
// All route components parsed upfront, even for unvisited routes
import HomePage from './pages/HomePage';
import DashboardPage from './pages/DashboardPage';
import SettingsPage from './pages/SettingsPage';
import CheckoutPage from './pages/CheckoutPage';
import AdminPage from './pages/AdminPage';

const routes = [
  { path: '/', component: HomePage },
  { path: '/dashboard', component: DashboardPage },
  { path: '/settings', component: SettingsPage },
  { path: '/checkout', component: CheckoutPage },
  { path: '/admin', component: AdminPage },
];
JavaScript -- After (dynamic imports with named chunks)
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

// Each route becomes a separate async chunk
// webpackChunkName controls the output filename
const HomePage = lazy(
  () => import(/* webpackChunkName: "page-home" */ './pages/HomePage')
);
const DashboardPage = lazy(
  () => import(/* webpackChunkName: "page-dashboard" */ './pages/DashboardPage')
);
const SettingsPage = lazy(
  () => import(/* webpackChunkName: "page-settings" */ './pages/SettingsPage')
);
const CheckoutPage = lazy(
  () => import(/* webpackChunkName: "page-checkout" */ './pages/CheckoutPage')
);
const AdminPage = lazy(
  () => import(/* webpackChunkName: "page-admin" */ './pages/AdminPage')
);

// Wrap the router outlet in Suspense; PageSkeleton is any lightweight loading placeholder
function AppRouter() {
  return (
    <Suspense fallback={<PageSkeleton />}>
      <Routes>
        <Route path="/" element={<HomePage />} />
        <Route path="/dashboard" element={<DashboardPage />} />
        <Route path="/settings" element={<SettingsPage />} />
        <Route path="/checkout" element={<CheckoutPage />} />
        <Route path="/admin" element={<AdminPage />} />
      </Routes>
    </Suspense>
  );
}

For heavy components within a page (rich text editors, data grids, PDF viewers), apply the same pattern at the component level. A tiptap rich text editor with its extensions can weigh 180KB -- loading it only when the user activates an edit form eliminates a 60ms parse task from every page load.
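Outside React, one way to wire this up is to memoize the dynamic import so repeated interactions share a single fetch. createLazyLoader is a hypothetical helper written here for illustration, not a library API:

```javascript
// Memoize a dynamic import: the first call starts the fetch,
// later calls reuse the same in-flight (or resolved) promise.
function createLazyLoader(loader) {
  let promise = null;
  return function load() {
    if (promise === null) {
      promise = loader();
    }
    return promise;
  };
}

// Browser usage (sketch): fetch the editor chunk only on first activation.
// const loadEditor = createLazyLoader(
//   () => import(/* webpackChunkName: "editor" */ './editor')
// );
// editButton.addEventListener('click', async () => {
//   const { mountEditor } = await loadEditor();
//   mountEditor(document.querySelector('#editor-root'));
// });
```

Because the promise is cached, a double-click does not trigger a second network request, and the parse cost lands only when the user actually needs the feature.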

You can also add prefetch hints so the browser loads the next likely route chunk during idle time:

JavaScript -- prefetch hint
// webpackPrefetch injects a <link rel="prefetch"> tag automatically
// The chunk downloads at idle priority after the current route finishes
const CheckoutPage = lazy(
  () => import(
    /* webpackChunkName: "page-checkout" */
    /* webpackPrefetch: true */
    './pages/CheckoutPage'
  )
);

This approach works with any Webpack-based setup, including Create React App. If your project uses Vite or another non-Webpack bundler, consult Fix INP in React for framework-specific guidance, and Fix INP in Next.js for Next.js-specific dynamic import patterns.

Step 5: Tree shaking with the sideEffects field

Tree shaking removes exports that are never imported anywhere in the dependency graph. Webpack 5 enables this automatically in production mode, but it cannot safely tree-shake a module unless it knows the module has no side effects (global mutations, CSS injections, polyfills). The sideEffects field in package.json provides this guarantee.

JSON -- package.json
{
  "name": "my-app",
  "sideEffects": [
    "*.css",
    "*.scss",
    "./src/polyfills.js",
    "./src/global-styles.js"
  ]
}

Setting "sideEffects": false tells Webpack every module is pure. The more targeted array form above is safer -- it marks only CSS and known side-effectful files as exceptions, while allowing all application JavaScript to be tree-shaken. This is especially impactful for libraries like Lodash when imported via lodash-es (the ESM edition), which exposes hundreds of named exports. An import like import { debounce } from 'lodash-es' will include only the debounce module rather than the full 71KB library.

JavaScript -- webpack.config.js (ensure production optimizations)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  mode: 'production',
  optimization: {
    usedExports: true,      // Mark unused exports in module scope
    sideEffects: true,      // Read sideEffects from package.json
    minimize: true,
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          compress: {
            passes: 2,      // Two compression passes for better DCE
            drop_console: true,
          },
        },
      }),
    ],
  },
};

After enabling sideEffects and switching from lodash to lodash-es, a representative application saw its utility chunk shrink from 94KB to 11KB. That 83KB reduction translates directly to a shorter parse task and a measurable INP improvement. See the JavaScript performance guide for more on identifying oversized dependencies.

Step 6: Persistent filesystem caching and Module Federation cost

Webpack 5's persistent cache stores the compiled module graph on disk between builds. This has two practical benefits: it keeps CI build times fast as your chunk count grows, and it removes the temptation to disable performance-critical settings (like the SplitChunks configuration above) just to speed up slow local builds.

JavaScript -- webpack.config.js (persistent cache)
module.exports = {
  mode: 'production',
  cache: {
    type: 'filesystem',
    buildDependencies: {
      // Invalidate cache when config files change
      config: [__filename],
    },
    // Version the cache by Node.js version and environment
    version: `${process.env.NODE_ENV}-${process.versions.node}`,
  },
};

If your project uses Webpack 5 Module Federation to share code between micro-frontends, be aware of its INP cost. Each remote container is a separately deployed Webpack build. When a host application needs a component from a remote, it issues a network request to fetch the remote's manifest and then the component chunk. If this happens inside an event handler -- for example, when the user clicks a button that renders a component from a remote -- the interaction is blocked by the full network round-trip.

The fix is to prefetch remote containers eagerly, outside of any interaction path:

JavaScript -- Module Federation eager prefetch
// In the host application's bootstrap.js (not index.js)
// Prefetch remote manifests during idle time
if ('requestIdleCallback' in window) {
  requestIdleCallback(() => {
    // Kick off the remote fetch before any user interaction
    import('checkout/CartWidget').catch(() => {
      // Silently ignore prefetch failures
    });
  });
}
JavaScript -- webpack.config.js (Module Federation config)
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        checkout: 'checkout@https://checkout.example.com/remoteEntry.js',
      },
      shared: {
        // Mark shared modules as singleton to prevent duplicate parsing
        react: { singleton: true, requiredVersion: '^18.0.0' },
        'react-dom': { singleton: true, requiredVersion: '^18.0.0' },
      },
    }),
  ],
};

The singleton: true flag is critical. Without it, Module Federation may load two copies of React -- one from the host and one from the remote -- doubling the parse cost and potentially causing hook errors at runtime. A full INP guide covers the broader interaction model that explains why each additional parse millisecond translates directly to INP regression.

Step 7: Deduplicate dependencies and defer third-party scripts

Duplicate packages and eagerly executed third-party scripts are two separate problems that both inflate main thread work. Resolve.alias handles duplicates; a combination of deferred script loading and event-driven initialization handles third-party scripts.

To find duplicates, run the following before touching any configuration:

Shell -- find duplicate packages
# npm: list every installed copy of react
npm ls react

# Deduplicate the npm lockfile
npm dedupe

# yarn (v2+): report duplicates without changing the lockfile
yarn dedupe --check

If npm ls shows React listed at two different versions, force resolution to a single copy with Webpack's resolve.alias:

JavaScript -- webpack.config.js (resolve alias)
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      // Force all imports of react to resolve to the root installation
      react: path.resolve(__dirname, 'node_modules/react'),
      'react-dom': path.resolve(__dirname, 'node_modules/react-dom'),
    },
  },
};
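Alternatively, fix the duplication in the package tree itself: npm 8.3+ supports an overrides field in package.json (Yarn has an equivalent resolutions field). The versions below are illustrative:

```json
{
  "name": "my-app",
  "overrides": {
    "react": "18.2.0",
    "react-dom": "18.2.0"
  }
}
```

After editing the field, delete node_modules and reinstall so the lockfile is rebuilt with a single copy.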

For third-party analytics, chat, and A/B testing scripts, move them out of the Webpack bundle entirely and load them asynchronously via the HTML template, firing only after the page is interactive. Using html-webpack-plugin:

HTML -- index.html template (deferred third-party scripts)
<!-- Load analytics only after the browser is idle -->
<script>
  window.addEventListener('load', function () {
    if ('requestIdleCallback' in window) {
      requestIdleCallback(loadAnalytics, { timeout: 4000 });
    } else {
      setTimeout(loadAnalytics, 2000);
    }

    function loadAnalytics() {
      var script = document.createElement('script');
      script.src = 'https://cdn.example-analytics.com/analytics.min.js';
      script.async = true; // injected scripts are async by default; defer has no effect here
      document.head.appendChild(script);
    }
  });
</script>

Do not bundle analytics scripts into your Webpack output. Beyond the INP cost, bundling means the vendor's SDK updates cannot reach users without a full redeploy, and every redeploy that picks up a new SDK version busts the chunk cache for all users. For further detail on third-party script management, see the JavaScript performance guide. To measure whether your changes have moved the needle, use the performance budget tool to set an INP threshold and receive alerts when a new deploy crosses it.

Verification

After applying these changes, verify INP improvement at both the lab and field level.

Lab verification with Chrome DevTools

Open Chrome DevTools, navigate to the Performance panel, and record a trace starting from page load. Interact with the page -- click buttons, open menus, submit forms -- for 30 seconds. After stopping the recording, inspect the main thread track: long tasks (those exceeding 50ms) are flagged with red shading in the timeline.

Compare the number and peak duration of long tasks before and after your Webpack changes. After applying granular SplitChunks and route-level code splitting, the initial parse task cluster should shrink from a single 400-600ms block to several shorter 40-80ms blocks spread across route navigations.

Also open the Network panel filtered to JS, reload the page, and check that the initial chunk set matches your expectations: a small runtime manifest, a stable vendor-react chunk, and a compact app entry point. Route chunks should only appear in the Network panel when you navigate to those routes.
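To make the before/after comparison less eyeball-driven, you can total the main thread blockage from long task entries. summarizeLongTasks is a hypothetical helper; the 50ms cutoff matches the long task definition:

```javascript
// Summarize long task entries: count them and total the time beyond
// 50ms, since only the excess over 50ms actually delays input handling.
function summarizeLongTasks(entries) {
  const longTasks = entries.filter((entry) => entry.duration > 50);
  return {
    count: longTasks.length,
    totalBlockingMs: longTasks.reduce(
      (sum, entry) => sum + (entry.duration - 50),
      0
    ),
  };
}

// Browser wiring (paste into the DevTools console; 'longtask'
// entries are supported in Chromium-based browsers):
// new PerformanceObserver((list) => {
//   console.table(summarizeLongTasks(list.getEntries()));
// }).observe({ type: 'longtask', buffered: true });
```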

Field verification with CrUX and PageSpeed Insights

Field INP data takes 28 days to fully reflect a production change in the Chrome User Experience Report. After deploying, monitor INP in PageSpeed Insights weekly. Pay attention to the 75th percentile value on mobile, as that is the metric Google uses for Search ranking. A meaningful improvement typically becomes visible in CrUX data within 7-14 days of a successful deploy.

For real-time field monitoring, instrument your production pages with the web-vitals JavaScript library and emit INP values to your analytics backend. The complete INP guide covers attribution options for identifying which interaction elements are responsible for your worst INP readings.
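A minimal sketch of that instrumentation, assuming the web-vitals attribution build (v4 field names) and a hypothetical /analytics/vitals endpoint; serializeINP is a helper defined here, not part of the library:

```javascript
// Keep only the fields the analytics backend needs (pure, easy to test).
function serializeINP(metric) {
  return JSON.stringify({
    name: metric.name,                // "INP"
    value: Math.round(metric.value),  // milliseconds
    rating: metric.rating,            // "good" | "needs-improvement" | "poor"
    target: metric.attribution && metric.attribution.interactionTarget,
  });
}

// Browser wiring (sketch):
// import { onINP } from 'web-vitals/attribution';
// onINP((metric) => {
//   const body = serializeINP(metric);
//   // sendBeacon survives page unload; fall back to fetch with keepalive
//   if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics/vitals', body))) {
//     fetch('/analytics/vitals', { method: 'POST', body, keepalive: true });
//   }
// });
```

The attribution target tells you which element's interactions are driving the worst readings, which is the starting point for any follow-up fix.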

Common pitfalls

  • Setting maxSize too low. A maxSize of 20KB forces Webpack to emit dozens of tiny chunks. While each parses quickly in isolation, the waterfall of HTTP requests can slow initial load on high-latency connections. Keep maxSize at 150-250KB for initial chunks.
  • Setting sideEffects: false when modules import CSS for its side effect. Webpack will drop those style-only imports from the bundle, breaking styles entirely. Always list CSS files explicitly in the sideEffects array.
  • Using React.lazy without a Suspense boundary. A lazy component rendered outside a Suspense boundary makes React throw during render, and a failed chunk load surfaces as an unhandled rejection. Wrap lazy routes in Suspense with a skeleton fallback and pair it with an ErrorBoundary for resilience.
  • Forgetting to update the cache version key after changing Webpack configuration. Persistent filesystem cache stores the old module graph until the version key changes. After a major configuration change, increment the version string in the cache config to force a clean rebuild.
  • Measuring INP only in the lab. Lab tools like Lighthouse run on a throttled CPU simulation but cannot reproduce the full diversity of real-user interaction patterns. A click on a complex event handler may perform well in Lighthouse but poorly under real user conditions. Always validate with CrUX field data from PageSpeed Insights after deploying to production.

Frequently asked questions

How much INP improvement should you expect from these changes?

Typical results after applying SplitChunks tuning, runtime chunk extraction, and route-level code splitting are a 40-60% reduction in long task duration. An application with INP of 310ms often reaches 120ms after these changes, moving from the "needs improvement" band into the "good" band (under 200ms). Results vary based on initial bundle size, the ratio of vendor to application code, and the proportion of users on low-end mobile devices.

Why do large bundles hurt INP?

Large JavaScript bundles cause long tasks on the main thread at parse and compile time. When a user clicks a button while the browser is still executing a 500ms parsing task, the response is delayed for the full duration of that task. INP measures that delay directly. Reducing the amount of JavaScript parsed before and during interactions is therefore one of the highest-leverage INP fixes available, regardless of framework.

Does Module Federation hurt INP?

Module Federation can hurt INP when remote containers are fetched during an interaction handler. If a user clicks a button that triggers a lazy-loaded remote component for the first time, the interaction is blocked by the network round-trip to retrieve the remote manifest and chunk. The fix is to prefetch remotes during idle time using requestIdleCallback, and to set singleton: true on shared modules to prevent loading duplicate copies of React or other heavy libraries.

How do you verify that the changes worked?

Use the Chrome DevTools Performance panel with the long tasks view. Record a page load and interaction session, then look for red-flagged blocks on the main thread timeline. After applying Webpack optimizations the number and duration of those blocks should decrease. Validate INP field data using the Chrome User Experience Report (CrUX) in PageSpeed Insights at least two weeks after deploying to production. For continuous monitoring, emit INP readings from the web-vitals library to your analytics backend.

Should you prioritize tree shaking or code splitting?

Both are necessary and address different problems. Tree shaking removes dead code from chunks that are already loaded, reducing parse and compile time for code that must be present on initial load. Code splitting delays loading entire chunks until they are needed, eliminating parse cost for code paths the current user has not activated. Apply both: use sideEffects and production mode for tree shaking, and import() with React.lazy or a router-level splitting strategy for code splitting.

Quick checklist

  • Bundle analyzed with webpack-bundle-analyzer before any changes
  • SplitChunksPlugin configured with named cacheGroups separating React, UI library, and utilities
  • optimization.runtimeChunk: 'single' and moduleIds: 'deterministic' set
  • All top-level routes use React.lazy with named webpackChunkName comments
  • "sideEffects" field in package.json lists CSS and side-effectful files; all other modules tree-shaken
  • cache: { type: 'filesystem' } enabled with a versioned cache key
  • Duplicate packages identified and resolved with npm dedupe and resolve.alias
  • Third-party scripts loaded via requestIdleCallback rather than bundled into the Webpack entry
  • INP verified in Chrome DevTools Performance panel and CrUX field data
