JavaScript Performance Guide: Bundle Size, Code Splitting, Main Thread

JavaScript is the single largest contributor to poor Core Web Vitals across the web. Unlike images or fonts, a JavaScript file does not just consume download bandwidth -- it also occupies the browser's main thread for parsing, compilation, and execution. On a mid-range mobile device those three phases together can consume hundreds of milliseconds for a 500 KB bundle, blocking every user interaction in the process. This guide walks through the complete JavaScript performance stack: measuring and reducing bundle size, splitting code to load only what each route needs, keeping the main thread free with modern scheduling APIs, taming third-party scripts, and applying the right framework-level knobs in React, Astro, and Vue.

Whether you are shipping a Next.js e-commerce store with a bloated vendor chunk, a Vue SPA that stalls on every button click, or a content site throttled by tag-manager chaos, the techniques here will move your INP score into the green and improve user-perceived responsiveness across every device category.

Why JavaScript dominates Core Web Vitals

JavaScript uniquely affects all three Core Web Vitals, but its impact on Interaction to Next Paint (INP) is the most direct. INP measures the delay from a user gesture (click, tap, keypress) to the next visual update. For a response to feel instant, that delay must stay under 200 ms at the 75th percentile. The main thread is the only thread that can both handle user events and update the DOM, so any synchronous work running on it during a user gesture extends the INP measurement directly.

JavaScript contributes to this in four distinct ways:

  • Parse and compile cost. Every kilobyte of JavaScript the browser downloads must be tokenized, parsed into an AST, and compiled to bytecode before any of it runs. On a budget Android phone such as a Moto G4, a 1 MB JavaScript bundle can take 3-4 seconds to parse and compile, and that work happens entirely on the main thread.
  • Long event handlers. Synchronous event listeners that do expensive DOM queries, layout reads, or complex state updates create long tasks. The browser cannot respond to a second user gesture until the first event handler completes.
  • Third-party script contention. Analytics, A/B testing, and chat widgets run their own JavaScript on the same main thread. A third-party script that fires a 200 ms long task during a user click directly worsens your INP -- even though the code is not yours.
  • Hydration cost for server-rendered apps. Frameworks that ship full HTML from the server still need to attach event listeners in the browser (hydration). A large React application can spend 1-2 seconds on hydration alone, during which the page looks interactive but is not.

JavaScript also affects LCP when the largest element is rendered client-side, and CLS when scripts inject content that shifts layout. But INP is where JavaScript optimization delivers the most measurable Core Web Vitals improvement. The JavaScript bundle and INP fix guide covers framework-specific measurement steps in detail.

Key stat: According to the HTTP Archive 2025 Web Almanac, the median page loads 509 KB of JavaScript on desktop and 461 KB on mobile. Pages in the top 10% by JavaScript weight load over 2.1 MB -- a figure that makes good INP nearly impossible on budget hardware.

Reducing bundle size: tree shaking, code splitting, dynamic imports

The most effective way to reduce JavaScript's impact on the main thread is to ship less of it. Three techniques work together to accomplish this: tree shaking removes dead code, code splitting divides the bundle into per-route chunks, and dynamic imports defer loading until the code is actually needed.

Tree shaking

Tree shaking is the process of statically analyzing your ES module import graph and removing exports that are never imported. It requires three conditions: source modules must use import/export syntax (not require()), the package must declare "sideEffects": false (or list specific side-effectful files) in its package.json, and your bundler must run in production mode.
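As an illustrative sketch (the package and file names here are hypothetical), a library that is pure except for a polyfill file and its stylesheets would declare:

```json
{
  "name": "my-ui-kit",
  "sideEffects": ["./src/polyfills.js", "*.css"]
}
```

With this declaration, the bundler may drop any unimported module except the listed files, which always stay in the bundle.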

The most common tree-shaking failure is importing from a barrel file (index.js) that re-exports everything from a library. Lodash is the classic example: import _ from 'lodash' imports the entire 72 KB library. The fix is either named imports from the modular build (import debounce from 'lodash/debounce') or switching to a tree-shakeable alternative like lodash-es.
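To make the difference concrete, here is the barrel import alongside the two tree-shake-friendly forms (sizes are approximate, and this snippet is an illustration of import styles rather than a runnable program):

```javascript
// Pulls in all of lodash (~70 KB minified) -- the barrel defeats tree shaking
import _ from 'lodash';
_.debounce(save, 300);

// Modular build: only the debounce module and its own dependencies
import debounce from 'lodash/debounce';

// ES-module build: a named import that bundlers can tree-shake
import { debounce } from 'lodash-es';
```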

Verify tree shaking is working with your bundler's bundle analyzer. For webpack, install webpack-bundle-analyzer; for Rollup and Vite, use rollup-plugin-visualizer. Look for large modules appearing in chunks that should not need them -- that signals a missing sideEffects declaration or a CommonJS import.

Route-level code splitting

Even after tree shaking, a large application will have more JavaScript than any single route needs. Code splitting at the route boundary ensures each page only downloads the code it requires. Most modern meta-frameworks handle this automatically: Next.js splits by page, Nuxt splits by route, SvelteKit splits by layout. For custom webpack configurations, the entry points or SplitChunksPlugin control this boundary.

The target for a well-split bundle is:

  • Initial JavaScript (blocking or render-critical) under 100 KB compressed
  • Per-route JavaScript under 50 KB compressed for typical content pages
  • Vendor chunks cached separately from application code so incremental deploys do not bust the vendor cache
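For a hand-rolled webpack setup, the vendor/application split described above can be sketched roughly like this (chunk names and the configuration shape follow webpack's SplitChunksPlugin; treat it as a starting point, not a tuned config):

```javascript
// webpack.config.js (excerpt)
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        // All node_modules code goes into one long-cached vendor chunk,
        // so application deploys do not invalidate it
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          reuseExistingChunk: true,
        },
      },
    },
  },
};
```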

Dynamic imports

Dynamic imports (import()) are the runtime complement to build-time code splitting. They let you defer the loading of a module until the moment it is needed -- on user interaction, on route change, or on intersection with the viewport.

JavaScript
// Before: static import loads the heavy chart library on every page
import { Chart } from 'chart.js';

// After: dynamic import loads it only when the user opens the analytics tab
async function openAnalyticsTab() {
  const { Chart } = await import('chart.js');
  const ctx = document.getElementById('revenue-chart').getContext('2d');
  new Chart(ctx, { type: 'bar', data: chartData });
}

// Pattern: load on interaction, not on page load
document.getElementById('analytics-tab').addEventListener('click', openAnalyticsTab);

// TypeScript: dynamic import with type safety
type ChartModule = typeof import('chart.js');
const loadChart = (): Promise<ChartModule> => import('chart.js');

Dynamic imports are particularly powerful for modal dialogs, rich text editors, video players, and map components -- features that are present on the page but only needed after a specific user action. Moving these imports from static to dynamic is often a 1-line change that removes 100+ KB from the initial bundle.

Avoiding long tasks: scheduler.yield, Web Workers, requestIdleCallback

A long task is any continuous block of main-thread work exceeding 50 ms. The Chrome DevTools Performance panel highlights them in red in the Main thread row. Every long task is a window where user input is silently queued -- clicks, taps, and keypresses accumulate unprocessed until the task completes. For INP, what matters is the worst-case delay across all interactions; a single 300 ms long task triggered by clicking a dropdown can make an otherwise fast page feel sluggish.

scheduler.yield()

The scheduler.yield() method, part of the Prioritized Task Scheduling API alongside scheduler.postTask(), provides a standard way to yield control back to the browser between chunks of work. It shipped in Chrome 129, so other browsers still need a fallback. Unlike setTimeout(fn, 0), scheduler.yield() places your continuation at the front of the task queue: pending user input runs during the gap, but other queued tasks cannot starve the resumed work.

JavaScript
// Process a large array without blocking input
async function processItems(items) {
  const CHUNK_SIZE = 50;

  for (let i = 0; i < items.length; i += CHUNK_SIZE) {
    const chunk = items.slice(i, i + CHUNK_SIZE);

    for (const item of chunk) {
      renderItem(item); // DOM work stays on main thread
    }

    // Yield to the browser after each chunk.
    // scheduler.yield() shipped in Chrome 129; feature-detect it and
    // fall back to a zero-timeout in other browsers.
    if ('scheduler' in globalThis && 'yield' in scheduler) {
      await scheduler.yield();
    } else {
      // Fallback: yield via a zero-timeout macrotask
      await new Promise(resolve => setTimeout(resolve, 0));
    }
  }
}

// Usage -- the loop now pauses between chunks,
// allowing the browser to handle queued clicks
processItems(thousandItems);

Web Workers

Web Workers run JavaScript on a separate thread, entirely outside the main thread. They cannot touch the DOM, but they can perform any CPU-intensive computation and pass results back via postMessage(). Use Workers for: JSON parsing of large payloads, CSV/Excel processing, cryptographic operations, sorting and filtering large datasets, and image manipulation via OffscreenCanvas.

JavaScript
// worker.js -- runs on a separate thread
self.onmessage = function(event) {
  const { items } = event.data;

  // Heavy computation -- sorting 100k items by multiple criteria
  const sorted = items.sort((a, b) => {
    if (a.category !== b.category) return a.category.localeCompare(b.category);
    return b.score - a.score;
  });

  self.postMessage({ sorted });
};

// main.js -- keeps the main thread free during the sort
const worker = new Worker(new URL('./worker.js', import.meta.url));

function sortProductsInBackground(products) {
  return new Promise((resolve) => {
    worker.onmessage = (event) => resolve(event.data.sorted);
    worker.postMessage({ items: products });
  });
}

// The UI remains fully responsive while sorting runs
const sorted = await sortProductsInBackground(productCatalog);
renderProductGrid(sorted);

requestIdleCallback

For work that is genuinely non-urgent -- prefetching data, pre-warming caches, logging analytics events -- requestIdleCallback() schedules execution during browser idle periods. Unlike setTimeout, it explicitly communicates low priority and will not run during frame-critical moments. Always provide a timeout option to ensure the work eventually runs even on busy pages.
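A minimal sketch of that pattern, assuming a fallback for browsers (and test environments) where requestIdleCallback is unavailable:

```javascript
// Schedule non-urgent work during idle time. The timeout guarantees the
// work eventually runs even on a busy page; the setTimeout branch is a
// fallback where requestIdleCallback does not exist.
function runWhenIdle(fn, timeout = 2000) {
  return new Promise((resolve) => {
    if (typeof requestIdleCallback === 'function') {
      requestIdleCallback(() => resolve(fn()), { timeout });
    } else {
      setTimeout(() => resolve(fn()), 0);
    }
  });
}

// Usage in a page: runWhenIdle(() => warmImageCache(), 1000);
```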

Do not use requestIdleCallback for work that must complete before a user interaction; its scheduling is not deterministic enough. For interaction-critical work that simply needs to be broken into chunks, scheduler.yield() or scheduler.postTask() with a user-visible priority is more appropriate.
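A hedged sketch of such a postTask wrapper, using scheduler.postTask where available and a plain timeout fallback (which, as an assumption of this sketch, simply ignores the priority) elsewhere:

```javascript
// Run a callback with an explicit priority: 'user-visible' for work the
// user is actively waiting on, 'background' for everything else.
function postTask(fn, priority = 'background') {
  if (globalThis.scheduler?.postTask) {
    return scheduler.postTask(fn, { priority });
  }
  // Fallback: no priority signal, just a macrotask
  return new Promise((resolve) => setTimeout(() => resolve(fn()), 0));
}

// Hypothetical interaction-critical work gets user-visible priority
function updateFilterResults() {
  // placeholder for DOM work the user is waiting on
}
postTask(() => updateFilterResults(), 'user-visible');
```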

Managing third-party scripts: defer/async, Partytown, consent gating

Third-party scripts are often the largest source of uncontrolled main-thread work. A standard e-commerce page includes analytics, tag managers, A/B testing, chat widgets, social sharing buttons, retargeting pixels, and consent management platforms -- easily 15-20 separate script loads. Each one competes for the same main thread your application code needs to respond to user input.

See the third-party scripts and LCP fix guide for a complete audit workflow, and the INP fixes for Next.js guide, which covers the Next.js Script component loading strategies in detail.

defer and async attributes

The most universally applicable fix is ensuring every third-party script uses either defer or async. A synchronous script tag in <head> blocks HTML parsing and delays everything -- including the LCP element. The difference between the two attributes matters:

  • async: downloads in parallel, executes immediately when the download completes (may interrupt parsing). Best for fully independent scripts like Google Analytics where order does not matter.
  • defer: downloads in parallel, executes in order after HTML parsing is complete. Best for scripts that need the DOM or that depend on each other.
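In markup, the two attributes look like this (the URLs are placeholders):

```html
<!-- Independent analytics: execution order does not matter -->
<script async src="https://example.com/analytics.js"></script>

<!-- DOM-dependent widget: runs in order after parsing completes -->
<script defer src="https://example.com/chat-widget.js"></script>
```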

Partytown

Partytown is an open-source library from Builder.io that relocates third-party scripts to a Web Worker. It proxies DOM reads and writes between the worker and the main thread using synchronous XHR requests intercepted by a service worker. The result: Google Tag Manager, HubSpot, Segment, and similar heavy scripts no longer block main-thread input handling, because they execute on a worker.

Partytown has trade-offs: the synchronous proxy adds latency to DOM-touching third-party calls, and some scripts with complex DOM dependencies (live chat, A/B testing that must hide content immediately) require additional configuration. It is most reliable for pure analytics and tracking scripts. Integration is straightforward with Next.js via @builder.io/partytown and the type="text/partytown" script attribute.
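As a sketch, opting a script into Partytown is a one-attribute change once the library is installed (the URL here is a placeholder):

```html
<!-- Runs inside Partytown's web worker instead of on the main thread -->
<script type="text/partytown" src="https://example.com/gtm.js"></script>
```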

GDPR and CCPA compliance creates a natural performance opportunity: if a user has not consented to analytics tracking, do not load the analytics script at all. Consent-gated loading means the majority of first-page-load sessions for users in consent-required jurisdictions see zero third-party script overhead. Implement this by listening to your CMP (Consent Management Platform) callback before injecting third-party script tags, rather than loading scripts unconditionally and suppressing data collection afterward.
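A minimal sketch of consent-gated loading. The CMP callback shape, the consent flag name, and the script URL are all assumptions -- substitute your consent platform's real API:

```javascript
// Inject a script tag only after the user grants analytics consent.
function injectScript(src) {
  const s = document.createElement('script');
  s.src = src;
  s.defer = true;
  document.head.appendChild(s);
}

// cmp.onConsentChange and consent.analytics are hypothetical names --
// adapt them to your CMP vendor's actual callback and flags.
function loadAnalyticsAfterConsent(cmp, load = injectScript) {
  cmp.onConsentChange((consent) => {
    if (consent.analytics) {
      load('https://example.com/analytics.js');
    }
  });
}
```

Until the callback fires with analytics consent, no third-party bytes are downloaded, parsed, or executed.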

Framework-specific levers: React Server Components, Astro islands, Vue async components

Modern frameworks have introduced architectural patterns that fundamentally reduce the amount of JavaScript shipped to the browser. These are not micro-optimizations -- they are design decisions that can reduce JavaScript payload by 40-80% compared to a conventional SPA approach.

React Server Components

React Server Components (RSC), stable since Next.js 13 App Router, render on the server and send only HTML and a lightweight serialized component tree to the client. Server Components have zero JavaScript weight in the browser bundle because they never hydrate. The client receives rendered HTML for the server component subtree plus a JSON representation used for navigation -- no React component code, no event listener code, no imported library code from the server component.

The key rule: Server Components handle data fetching and static rendering; Client Components (marked with "use client") handle interactivity. Move as much of your component tree into Server Components as possible. A data-heavy dashboard that previously shipped 200 KB of components + data-fetching code might ship only 40 KB of interactive Client Component code with RSC. For INP, less hydration work means the page becomes truly interactive faster and with fewer main-thread tasks. See the Next.js INP guide for measurement and migration steps.
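As an illustrative sketch (the component, file, and helper names are hypothetical), the server/client split in an App Router project looks like this:

```tsx
// app/dashboard/page.tsx -- Server Component: ships zero JS to the browser
import { SalesChart } from './SalesChart';

export default async function DashboardPage() {
  const sales = await fetchSales(); // runs on the server only
  return <SalesChart data={sales} />;
}

// app/dashboard/SalesChart.tsx -- Client Component: only this code hydrates
'use client';
import { useState } from 'react';

export function SalesChart({ data }) {
  const [range, setRange] = useState('30d');
  // interactive chart rendering...
}
```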

Astro islands

Astro ships zero JavaScript by default. Every component is rendered to static HTML at build time unless you explicitly opt into client-side hydration with a client:* directive. The island architecture means only the interactive components on a page hydrate -- the navigation, a search box, a shopping cart -- while the rest of the page is pure HTML with no JavaScript overhead.

Astro's client directives give precise control over when hydration happens:

  • client:load -- hydrate immediately on page load (use sparingly)
  • client:idle -- hydrate when the browser is idle via requestIdleCallback
  • client:visible -- hydrate when the component scrolls into the viewport via IntersectionObserver
  • client:media="(max-width: 768px)" -- hydrate only when a CSS media query matches

For most content-heavy sites (blogs, documentation, marketing pages), Astro with client:visible on interactive widgets delivers INP scores that are structurally impossible to achieve with a conventional React or Vue SPA, because the initial page load involves almost no JavaScript execution.
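A sketch of the island pattern in an Astro template (component paths are illustrative):

```astro
---
// Everything in this file renders to static HTML at build time.
import Hero from '../components/Hero.astro';
import SearchBox from '../components/SearchBox.jsx';
---
<Hero />
<!-- Only SearchBox ships JavaScript, and it hydrates on scroll-in -->
<SearchBox client:visible />
```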

Vue async components

Vue's defineAsyncComponent() wraps any component in a dynamic import, deferring its code until the component is actually rendered. Combined with Vue Router's lazy-loaded routes and Nuxt's built-in code splitting, async components give fine-grained control over when component code loads.

JavaScript
// Vue 3: async component with loading and error states
import { defineAsyncComponent } from 'vue';

// The HeavyChart component is not included in the initial bundle.
// Its code is fetched only when the component mounts.
const HeavyChart = defineAsyncComponent({
  loader: () => import('./HeavyChart.vue'),
  loadingComponent: ChartSkeleton,  // shown while downloading
  errorComponent: ChartError,       // shown on network failure
  delay: 200,                       // show loading component after 200ms
  timeout: 5000                     // error after 5s
});

// Vue Router: lazy-loaded route -- each route gets its own chunk
const routes = [
  {
    path: '/analytics',
    component: () => import('./views/AnalyticsView.vue')
  },
  {
    path: '/settings',
    component: () => import('./views/SettingsView.vue')
  }
];

In Nuxt 3.16+, lazy hydration is built in: Lazy-prefixed components accept hydrate-on-visible, hydrate-on-idle, and hydrate-on-interaction props, bringing island-style hydration control comparable to Astro's directives into the Vue ecosystem.

Common mistakes

  • Lazy-loading with loading="lazy" is not JavaScript lazy loading. The HTML attribute only applies to images and iframes. JavaScript code splitting requires import() -- adding defer to a script only changes execution timing, not whether the code is parsed and compiled.
  • Splitting into too many tiny chunks. HTTP/2 handles multiplexing well, but a page that loads 80 separate 5 KB script files still incurs request overhead, header compression loss, and browser scheduling cost. Aim for a moderate number of chunks (5-15 for a medium-sized app) rather than one chunk per component.
  • Putting expensive work in scroll and resize handlers without debouncing. These events fire dozens of times per second. Any synchronous work inside them runs on the main thread at that frequency and will cause dropped frames and poor INP on interactions that trigger viewport changes.
  • Using requestIdleCallback for interaction-critical prefetching. If you prefetch the next-page bundle in an idle callback, but the user navigates before the callback fires, the navigation stalls. Use rel="prefetch" or explicit on-hover prefetching instead for navigation-critical resources.
  • Not auditing third-party script weight after adding new vendors. Each new marketing tool, A/B testing platform, or analytics integration adds JavaScript that silently degrades INP. Run a quarterly third-party audit using DevTools Coverage and the Network panel filtered to third-party origins.
  • Blocking the main thread with synchronous localStorage or sessionStorage reads in event handlers. While typically fast, these calls can block on certain mobile browsers under memory pressure. For data that does not need to be read synchronously, prefer asynchronous storage such as IndexedDB.
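The debouncing fix mentioned in the scroll-handler bullet above can be sketched in a few lines:

```javascript
// Collapse a burst of calls (e.g. scroll events firing every frame) into
// a single trailing call after `wait` ms of silence.
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Usage in a browser (updateStickyHeader is a hypothetical handler):
// window.addEventListener('scroll', debounce(updateStickyHeader, 150));
```

The heavy handler now runs once per pause in scrolling instead of dozens of times per second.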

Tools and validation

Measuring JavaScript performance correctly requires both lab tools for debugging and field data for ground-truth INP. See the performance glossary for definitions of key terms referenced in these tools.

Bundle analysis

  • webpack-bundle-analyzer -- generates an interactive treemap of your webpack bundle. Run npx webpack-bundle-analyzer stats.json after generating stats with webpack --profile --json > stats.json. Look for duplicated packages and unexpectedly large vendor chunks.
  • rollup-plugin-visualizer / vite-bundle-visualizer -- equivalent treemap for Vite/Rollup projects. Add to vite.config.ts plugins array and open the generated stats.html.
  • Bundlephobia (bundlephobia.com) -- checks the minified + gzipped size of any npm package before you install it. Shows tree-shakeable weight alongside total weight.
  • Import Cost (VS Code extension) -- shows inline package sizes as you type import statements, giving immediate feedback during development.

Main thread profiling

  • Chrome DevTools Performance panel -- records a full main-thread trace. Look for red-flagged long tasks, the INP candidate interaction, and the breakdown of scripting vs. rendering work in the summary pie chart. Use CPU throttling (4x or 6x slowdown) to simulate mobile device conditions.
  • web-vitals library -- the official web-vitals npm package measures INP in real user sessions and reports the responsible interaction element. Essential for understanding which specific interaction drives poor field INP.
  • Chrome DevTools Performance Insights panel -- a higher-level view that calls out INP blockers, long tasks, and render-blocking resources with actionable labels, without requiring manual trace analysis.

Field data

  • PageSpeed Insights -- shows field INP at the 75th percentile from CrUX data, broken down by desktop and mobile. This is the number that affects Google ranking.
  • Chrome User Experience Report (CrUX) API -- query historical field data by URL or origin to track INP trends over time and compare against competitors.
  • Lighthouse CI -- integrates lab-measured JavaScript performance into your CI/CD pipeline. Set performance budget assertions to catch bundle size regressions before they reach production.

Frequently asked questions

How do large JavaScript bundles hurt INP?

Large JavaScript bundles delay INP in two ways: first, they occupy the main thread during parse and compile, blocking any input response; second, they push more event handler work into a single synchronous call stack. Reducing bundle size by 50% with tree shaking and code splitting is often the single fastest way to move INP from poor into the good range (under 200 ms).

What is a long task and why does it matter for INP?

A long task is any main-thread work that runs continuously for more than 50 ms. During a long task the browser cannot process user input, so any click or keypress queued during that window will be delayed until the task finishes. The delay between the input event and the first frame of visual response is what INP measures. Breaking long tasks with scheduler.yield(), requestIdleCallback(), or Web Workers is the primary technique for improving INP.

What is the difference between async and defer?

Both async and defer prevent a script from blocking HTML parsing while it downloads. The difference is in execution timing: async scripts execute as soon as they finish downloading, which can interrupt parsing; defer scripts execute in order after the HTML is fully parsed but before DOMContentLoaded. For most third-party scripts that do not need to run early, defer is the safer choice. Use async only for independent scripts like analytics where order does not matter.

What does tree shaking require to work?

Tree shaking relies on static ES module analysis and requires three conditions: the source must use ES module syntax (import/export); the package must mark side-effect-free files in its package.json sideEffects field; and the bundler must be set to production mode. Webpack 5 and Rollup perform tree shaking automatically when these conditions are met. CommonJS modules (require) cannot be tree-shaken and must be replaced with ES module equivalents.

When should I use scheduler.yield() versus a Web Worker?

Use scheduler.yield() (or a postTask wrapper) when you need to break up a sequence of DOM-touching work across multiple frames -- it yields control back to the browser without leaving the main thread. Use a Web Worker when the computation is CPU-intensive and has no need to touch the DOM: data parsing, image processing, cryptographic operations, sorting large arrays. Workers run on a separate thread entirely, so they cannot block user input regardless of how long they run.

Written by

Marcus Chen

INP specialist at WebVitals.tools. Focuses on main-thread scheduling, bundle optimization, and interaction latency across React and Vue applications.