Fix TTFB in Vue / Nuxt 3

Time to First Byte (TTFB) measures how long the browser waits for the first byte of the HTML document. For a Vue SPA, the HTML shell arrives quickly but contains no content -- data fetching happens entirely in the browser after JavaScript boots, so a page that feels slow can still report a fast TTFB. Nuxt 3's server-side rendering avoids this pattern, but TTFB can still be high if SSR pages call slow APIs or are not cached at the edge. This guide shows how to move from a slow SPA to a fast, edge-cached Nuxt 3 SSR setup.

Expected results

Before: 1,100 ms TTFB (Poor) -- SPA with client-side data fetching, no caching, remote origin

After: 185 ms TTFB (Good) -- Nuxt SSR, Nitro edge, ISR routeRules, long-lived cache headers

Step-by-step fix

Switch from SPA to Nuxt SSR

A Vue SPA sends an empty <div id="app"></div> shell. The browser downloads the JS bundle, boots Vue, then fetches data -- all before any content is painted. With Nuxt SSR, the server renders the full HTML on the first request and sends it to the browser. Use useFetch with server: true (the default) so data is fetched during SSR and inlined in the HTML response rather than making a second round-trip from the browser.

Nuxt 3 -- useFetch with SSR data fetching
// pages/products/[slug].vue

<script setup>
// Bad (Vue SPA pattern): fetches in onMounted -- client only, high TTFB
// onMounted(async () => {
//   product.value = await $fetch(`/api/products/${route.params.slug}`);
// });

// Good: useFetch runs on the server during SSR
// The fetched data is serialized into the HTML, hydrated on the client
const route = useRoute();
const { data: product, error } = await useFetch(
  `/api/products/${route.params.slug}`,
  {
    // key prevents re-fetching on client hydration
    key: `product-${route.params.slug}`,
    // server: true is default -- runs during SSR
    // Set a server-side cache to avoid hitting the DB on every request
    getCachedData(key, nuxtApp) {
      return nuxtApp.payload.data[key] ?? nuxtApp.static.data[key];
    },
  }
);

// SEO: set meta from SSR data
useSeoMeta({
  title: product.value?.name,
  description: product.value?.description,
});
</script>

<template>
  <div v-if="product">
    <h1>{{ product.name }}</h1>
    <p>{{ product.description }}</p>
  </div>
  <div v-else-if="error">Failed to load product.</div>
</template>
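The getCachedData option above is, at its core, a lookup into data Nuxt has already serialized for the page. A minimal sketch of that idea in plain JavaScript, with a simplified stand-in for nuxtApp.payload (the names here are illustrative, not Nuxt internals):

```javascript
// Simplified model of the payload cache that useFetch's getCachedData taps
// into: data fetched during SSR is serialized into the page payload, so the
// client can reuse it on hydration instead of fetching a second time.
function makeCachedFetcher(fetcher) {
  const payload = { data: {} }; // stand-in for nuxtApp.payload

  return async function cachedFetch(key, ...args) {
    if (key in payload.data) {
      return payload.data[key]; // what getCachedData returns on hydration
    }
    const value = await fetcher(...args);
    payload.data[key] = value; // "serialized" for later reuse
    return value;
  };
}

// Usage: the second call with the same key never touches the network.
(async () => {
  let requests = 0;
  const fetchProduct = makeCachedFetcher(async (slug) => {
    requests += 1;
    return { slug, name: `Product ${slug}` };
  });
  await fetchProduct('product-widget', 'widget');
  await fetchProduct('product-widget', 'widget');
  console.log(requests); // 1 -- the second call was served from the payload
})();
```

Without a getCachedData-style lookup, useFetch would re-run on the client after hydration, paying the API round-trip a second time.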

Configure Nitro routeRules for ISR and SWR

Even fast SSR adds latency per request because the server must render HTML before sending the first byte. Nitro's routeRules let you cache rendered pages at the CDN edge. isr (incremental static regeneration) serves a cached page and regenerates it in the background after the TTL expires, using the platform's native incremental regeneration where available. swr (stale-while-revalidate) achieves the same effect through Cache-Control headers, relying on the CDN to honor them. Use prerender: true for pages with no dynamic data, so they are built once at deploy time.

nuxt.config.ts -- Nitro routeRules
// nuxt.config.ts
export default defineNuxtConfig({
  nitro: {
    routeRules: {
      // Pre-render at build time -- zero TTFB (served from CDN)
      '/': { prerender: true },
      '/about': { prerender: true },

      // ISR: re-generate after 60 s, serve stale in the meantime
      '/products/**': { isr: 60 },

      // SWR: serve cached, revalidate in background every 5 min
      '/blog/**': { swr: 300 },

      // API routes: cache at CDN for 10 s, stale for 60 s
      '/api/products/**': {
        headers: {
          'cache-control': 's-maxage=10, stale-while-revalidate=60',
        },
      },

      // Auth routes: never cache
      '/account/**': { cache: false },
    },
  },
});

// Result: /products/* pages are served from edge cache with ~10 ms TTFB
// after the first SSR warms the cache. Subsequent requests skip the server.
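What swr: 300 promises can be pictured as a small in-memory cache: once the TTL has passed, the stale page is still served instantly while a fresh render happens in the background. A minimal sketch of those semantics (illustrative only, not Nitro's implementation):

```javascript
// Minimal stale-while-revalidate cache: after `ttlMs`, a stale entry is
// still served immediately while a background refresh replaces it.
// `now` is injectable so the behaviour can be demonstrated deterministically.
function createSwrCache(regenerate, ttlMs, now = Date.now) {
  const entries = new Map(); // key -> { value, expiresAt, refreshing }

  return async function get(key) {
    const entry = entries.get(key);
    if (!entry) {
      // Cache miss: only the very first request pays the full render cost.
      const value = await regenerate(key);
      entries.set(key, { value, expiresAt: now() + ttlMs, refreshing: false });
      return value;
    }
    if (now() > entry.expiresAt && !entry.refreshing) {
      // Stale: serve the old value now, regenerate in the background.
      entry.refreshing = true;
      regenerate(key).then((value) => {
        entries.set(key, { value, expiresAt: now() + ttlMs, refreshing: false });
      });
    }
    return entry.value; // stale or fresh -- never blocks after the first hit
  };
}
```

This is why TTFB stays low even while content updates: no visitor after the first ever waits on a render.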

Deploy to the edge with a Nitro preset

Hosting Nuxt on a single-region Node.js server means every user outside that region incurs round-trip latency before the first byte. Edge deployment runs Nuxt's Nitro server in distributed workers close to users worldwide, reducing that latency to milliseconds. Configure the Nitro preset for your provider in nuxt.config.ts -- no code changes to your app are needed.

nuxt.config.ts -- Nitro edge preset
// nuxt.config.ts

// Option A: Deploy to Vercel Edge (Vercel Edge Functions / Edge Runtime)
export default defineNuxtConfig({
  nitro: {
    preset: 'vercel-edge',
  },
});

// Option B: Deploy to Cloudflare Pages (Cloudflare Workers)
export default defineNuxtConfig({
  nitro: {
    preset: 'cloudflare-pages',
  },
});

// Option C: Deploy to Netlify Edge Functions (Deno-based edge runtime)
export default defineNuxtConfig({
  nitro: {
    preset: 'netlify-edge',
  },
});

// The preset is auto-detected when deploying via the provider's CLI.
// Override it manually only when the auto-detection picks the wrong target.

// Verify the build output target:
// $ npx nuxi build
// Nitro build output: .output/server/index.mjs (edge-compatible module)

Set long-lived Cache-Control headers for static assets

Nuxt outputs JavaScript, CSS, and image assets with content hashes in their filenames under /_nuxt/. Because the hash changes whenever the file changes, it is safe to cache these assets for up to a year. Configure your hosting platform to send Cache-Control: public, max-age=31536000, immutable for all /_nuxt/** paths. Repeat visitors will load these assets from the browser cache with zero network round trips.

vercel.json -- immutable asset caching
{
  "headers": [
    {
      "source": "/_nuxt/(.*)",
      "headers": [
        {
          "key": "Cache-Control",
          "value": "public, max-age=31536000, immutable"
        }
      ]
    },
    {
      "source": "/favicon.ico",
      "headers": [
        { "key": "Cache-Control", "value": "public, max-age=86400" }
      ]
    }
  ]
}

# Equivalent Netlify _headers file:
# /_nuxt/*
#   Cache-Control: public, max-age=31536000, immutable

# Equivalent .htaccess for Apache self-hosting:
# <LocationMatch "^/_nuxt/">
#   Header set Cache-Control "public, max-age=31536000, immutable"
# </LocationMatch>
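The header rules above encode a single decision: content-hashed paths get a year-long immutable cache, everything else gets a short or no cache. That decision can be sketched as a function (the paths, TTLs, and the no-cache default are illustrative choices, not platform behavior):

```javascript
// Decide a Cache-Control header from the request path, mirroring the
// vercel.json rules above: content-hashed /_nuxt/ assets are immutable.
function cacheControlFor(path) {
  if (path.startsWith('/_nuxt/')) {
    // The filename contains a content hash -- a changed file gets a new
    // URL, so the old one can safely be cached for a year.
    return 'public, max-age=31536000, immutable';
  }
  if (path === '/favicon.ico') {
    return 'public, max-age=86400'; // one day
  }
  // HTML documents: force revalidation (edge caching is handled by
  // routeRules instead -- see the earlier section).
  return 'no-cache';
}

console.log(cacheControlFor('/_nuxt/entry.BqK3x9.js'));
// -> public, max-age=31536000, immutable
```

The key invariant: never send `immutable` for a URL whose content can change, because browsers will not revalidate it.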

Enable HTTP/2 and Brotli compression

HTTP/2 multiplexes many asset requests over a single TCP connection, removing the request queuing that HTTP/1.1's per-connection limits impose on asset-heavy pages. Brotli compresses HTML, JS, and CSS more efficiently than gzip, typically cutting transfer size by a further 15--25%, which directly shortens the time to deliver the HTML document. Nitro can pre-compress static assets at build time with compressPublicAssets; HTTP/2 and compression of dynamic responses are handled by the hosting platform, or by the reverse proxy in front of a self-hosted Node server. Both are automatic on most modern hosting platforms.

nuxt.config.ts + .htaccess -- compression
// nuxt.config.ts -- enable Nitro compression (for self-hosted Node)
export default defineNuxtConfig({
  nitro: {
    compressPublicAssets: {
      brotli: true,  // generate .br files for all public assets
      gzip: true,    // fallback .gz files for clients without Brotli
    },
    // Serve pre-compressed files automatically
    serveStatic: true,
  },
});

// Vercel / Cloudflare Pages: Brotli and HTTP/2 are automatic.
// No configuration is needed -- the platform handles it.

// .htaccess for Apache self-hosting:
// LoadModule brotli_module modules/mod_brotli.so
// <IfModule mod_brotli.c>
//   AddOutputFilterByType BROTLI_COMPRESS text/html text/css
//   AddOutputFilterByType BROTLI_COMPRESS application/javascript
//   AddOutputFilterByType BROTLI_COMPRESS application/json
// </IfModule>

// Verify compression in Chrome DevTools:
// Network tab > HTML document > Response Headers
// content-encoding: br  (Brotli active)
// content-encoding: gzip  (gzip fallback)
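Node's built-in zlib module ships both codecs, so you can compare their output sizes directly. A quick sketch with a synthetic HTML payload (the sample markup is made up for illustration):

```javascript
// Compare Brotli and gzip output sizes for a sample HTML payload using
// Node's built-in zlib -- the same codecs compressPublicAssets emits
// .br and .gz files with.
import { brotliCompressSync, gzipSync } from 'node:zlib';

const html =
  '<!DOCTYPE html><html><body>' +
  '<p class="card">product card</p>'.repeat(500) +
  '</body></html>';
const input = Buffer.from(html);

const br = brotliCompressSync(input);
const gz = gzipSync(input);

console.log(`original: ${input.length} bytes`);
console.log(`brotli:   ${br.length} bytes`);
console.log(`gzip:     ${gz.length} bytes`);
// Brotli is typically the smaller of the two on text payloads.
```

Real-world HTML is less repetitive than this sample, so expect smaller (but still meaningful) savings in production.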

Quick checklist

  • App uses Nuxt 3 SSR instead of a client-only Vue SPA
  • Data is fetched with useFetch (server-side) not onMounted (client-side)
  • Nitro routeRules configure ISR or SWR caching for all page routes
  • Nitro preset is set to an edge provider (vercel-edge, cloudflare-pages, netlify-edge)
  • /_nuxt/** assets have Cache-Control: public, max-age=31536000, immutable
  • Brotli compression is active -- verify content-encoding: br in Network tab response headers

Frequently asked questions

Why does a Vue SPA show a fast TTFB but still feel slow?

A Vue SPA returns an empty HTML shell almost immediately, but the browser must download and execute the JavaScript bundle before any data is fetched and meaningful content is painted. The document TTFB is fast, but Largest Contentful Paint is delayed by JS execution and the subsequent API round-trip. Nuxt SSR returns fully rendered HTML, so both TTFB and LCP improve significantly. The SPA pattern can easily produce 3--5 s LCP on slow connections; SSR with edge caching can bring it under 1.5 s.

Does switching to Nuxt SSR automatically improve TTFB?

Not automatically. A Nuxt app that calls a slow upstream API on every SSR request will have higher TTFB than a Vue SPA whose empty shell is cached on a CDN. The performance gain comes from combining SSR with response caching at the Nitro layer using routeRules ISR or SWR. Once the cache is warm, Nuxt serves pre-rendered HTML from edge nodes and TTFB drops to under 50 ms for those routes.

What counts as a good TTFB?

Google's guidance rates TTFB (a supporting metric for the Core Web Vitals, not a Core Web Vital itself) as Good when under 800 ms, Needs Improvement between 800 ms and 1,800 ms, and Poor above 1,800 ms. In practice, to achieve a Good LCP score, TTFB typically needs to be under 400 ms since it is the first component of the LCP timeline. For edge-deployed cached responses, target under 100 ms; for uncached SSR, target under 400 ms.

How do I measure TTFB in a Vue or Nuxt app?

In the browser, open the DevTools Network panel, reload the page, click the HTML document request, and read the TTFB value in the Timing tab. For field data, install the web-vitals package and call onTTFB(console.log) in a Nuxt plugin. For synthetic monitoring, WebPageTest shows TTFB prominently in its waterfall view and separates DNS, TCP, TLS, and server processing time. Nuxt DevTools shows server timing headers in development mode.
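The same measurement can be scripted in Node: start a timer when the request is sent and stop at the first response byte. A self-contained sketch against a local server whose 120 ms delay stands in for a slow SSR origin:

```javascript
// Measure time-to-first-byte the way the DevTools Timing tab does:
// the interval between sending the request and the first response byte.
import http from 'node:http';

function measureTtfb(options) {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    http
      .get(options, (res) => {
        res.once('data', () => {
          res.resume(); // drain the rest of the body
          resolve(Number(process.hrtime.bigint() - start) / 1e6);
        });
        res.on('error', reject);
      })
      .on('error', reject);
  });
}

// Demo: a local server that waits 120 ms before responding, mimicking an
// uncached SSR render.
const server = http.createServer((req, res) => {
  setTimeout(() => res.end('<!DOCTYPE html><html><body>ok</body></html>'), 120);
});

server.listen(0, async () => {
  const { port } = server.address();
  const ttfb = await measureTtfb({ host: '127.0.0.1', port, path: '/' });
  console.log(`TTFB: ${ttfb.toFixed(1)} ms`); // roughly 120 ms here
  server.close();
});
```

Point the same function at a deployed URL (with https instead of http) to spot-check a route before and after enabling routeRules caching.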

Does Pinia add TTFB overhead?

Pinia itself adds negligible TTFB overhead. The risk is in Pinia stores that call external APIs during SSR -- the API response time adds directly to TTFB. Mitigate this by caching API responses in Nitro's built-in storage layer, using useFetch cache keys, or fronting the upstream API with a CDN so the SSR data fetch is served from cache in under 10 ms.

What are the most common causes of high TTFB in Vue apps?

The most common causes are: uncached server-side rendering (each request triggers full page generation), slow database queries without indexes, hosting on a single-region origin server far from users, and missing CDN caching headers. For Vue, check that static/ISR pages are being served from CDN edge nodes rather than hitting the origin on every request.
