Fix TTFB in Next.js
TTFB in Next.js is almost entirely determined by your choice of rendering strategy. Pages using getServerSideProps (Pages Router) or dynamic rendering (App Router) must query a database and render React to HTML on every request -- easily adding 200-800ms to TTFB. Switching to SSG or ISR eliminates server processing time entirely, serving pre-built HTML from CDN edge nodes with TTFB under 50ms. This guide covers every optimization in order of impact.
Expected results
Before: 1.4s TTFB (Poor) -- SSR with unoptimized DB queries, no caching, central server
After: 45ms TTFB (Good) -- ISR served from Vercel edge, streaming for dynamic routes
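To verify numbers like these on your own pages, TTFB can be computed from the Navigation Timing API: the delta between the start of the navigation and `responseStart`, when the first response byte arrives. A minimal sketch (the `computeTTFB` helper is illustrative, not a Next.js API):

```typescript
// Compute TTFB from a PerformanceNavigationTiming-like entry.
// TTFB = time from navigation start until the first response byte.
interface NavTiming {
  startTime: number;      // navigation start (0 for the initial navigation)
  responseStart: number;  // first byte of the response received
}

export function computeTTFB(entry: NavTiming): number {
  return entry.responseStart - entry.startTime;
}

// In the browser, observe the real navigation entry (illustrative):
// new PerformanceObserver((list) => {
//   for (const entry of list.getEntries()) {
//     console.log('TTFB:', computeTTFB(entry as PerformanceNavigationTiming), 'ms');
//   }
// }).observe({ type: 'navigation', buffered: true });

// With the figures from this guide:
const before = computeTTFB({ startTime: 0, responseStart: 1400 }); // 1400ms -- poor
const after = computeTTFB({ startTime: 0, responseStart: 45 });    // 45ms -- good
```

In production, the `onTTFB` helper from the `web-vitals` package wraps the same measurement with edge cases handled.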
Step-by-step fix
Switch from SSR to SSG or ISR
The single biggest TTFB improvement in Next.js is replacing getServerSideProps with getStaticProps (Pages Router) or switching from dynamic to static rendering (App Router). SSG pages are pre-built and served from CDN edge nodes with TTFB under 50ms. SSR pages generate on every request and can take 200-800ms.
// app/products/[slug]/page.tsx
// This makes the page STATIC with revalidation (ISR)
export const revalidate = 3600; // regenerate at most once per hour

// Without this export, dynamic data fetches make it dynamic (SSR)
// With it, Next.js pre-renders and caches at the edge
export async function generateStaticParams() {
  const products = await getTopProducts(1000); // pre-build top 1000
  return products.map(p => ({ slug: p.slug }));
}

export default async function ProductPage({ params }: { params: { slug: string } }) {
  const product = await getProduct(params.slug);
  return <ProductContent product={product} />;
}
// pages/products/[slug].tsx (Pages Router equivalent)
// Dynamic routes also need getStaticPaths to tell Next.js which slugs to pre-build
export async function getStaticPaths() {
  const products = await getTopProducts(1000);
  return {
    paths: products.map(p => ({ params: { slug: p.slug } })),
    fallback: 'blocking', // render uncached slugs on demand, then cache them
  };
}

export async function getStaticProps({ params }) {
  const product = await getProduct(params.slug);
  return {
    props: { product },
    revalidate: 3600, // ISR: regenerate every hour
  };
}
Use streaming SSR with Suspense
When a page must be dynamic (SSR), use Suspense to stream the initial HTML shell immediately while data-dependent sections load asynchronously. The browser receives the first byte almost instantly (matching a static page's TTFB for the shell), and the slow parts stream in progressively.
// app/dashboard/page.tsx
// The page shell streams immediately; data sections stream as they're ready
import { Suspense } from 'react';

export default function DashboardPage() {
  return (
    <main>
      {/* This renders immediately -- shell TTFB is <50ms */}
      <DashboardHeader />
      <nav>...</nav>
      {/* These stream in when their async data is ready */}
      <Suspense fallback={<MetricsSkeleton />}>
        <DashboardMetrics /> {/* awaits DB query -- streams in */}
      </Suspense>
      <Suspense fallback={<ChartSkeleton />}>
        <RevenueChart /> {/* awaits API call -- streams independently */}
      </Suspense>
    </main>
  );
}

// Each async component fetches its own data -- parallel, not serial
async function DashboardMetrics() {
  const metrics = await db.metrics.findMany({ ... }); // 200ms
  return <MetricsGrid metrics={metrics} />;
}
Cache data fetches in the App Router
In the App Router, fetch requests are automatically memoized within a single render (so multiple components can call the same endpoint without duplicate requests). For persistence across requests, use cache options or unstable_cache to cache expensive database calls.
import { unstable_cache } from 'next/cache';

// Cache this function's result for 1 hour, tagged for invalidation
const getCachedProduct = unstable_cache(
  async (slug: string) => {
    return db.products.findUnique({ where: { slug } });
  },
  ['product'],
  { revalidate: 3600, tags: ['products'] }
);

// Using fetch with caching
async function getProducts() {
  const res = await fetch('https://api.example.com/products', {
    next: {
      revalidate: 3600, // cache for 1 hour
    },
    // OR, as a top-level option instead of next.revalidate:
    // cache: 'force-cache', // cache indefinitely
    // cache: 'no-store',    // never cache (SSR behavior)
  });
  return res.json();
}

// On-demand revalidation from a Server Action or Route Handler
import { revalidateTag } from 'next/cache';

async function updateProduct(id: string, data: ProductData) {
  await db.products.update({ where: { id }, data });
  revalidateTag('products'); // Purge all product cache entries
}
Use Edge Functions for dynamic personalization
For pages that must be dynamic but only need lightweight personalization (locale detection, A/B tests, cookie-based auth), use Edge Functions. They run on Vercel's edge network worldwide, reducing TTFB from 200-400ms (central serverless) to under 50ms (edge).
// app/api/user-data/route.ts
import { kv } from '@vercel/kv';

export const runtime = 'edge'; // Run on Vercel's edge network

export async function GET(request: Request) {
  const country = request.headers.get('x-vercel-ip-country') ?? 'US';
  const userId = getCookieUserId(request);
  // Edge functions: fast KV reads, not complex DB queries
  const userData = await kv.get(`user:${userId}`);
  return Response.json({ country, userData });
}
// middleware.ts -- runs at the edge, before any page loads
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const country = request.headers.get('x-vercel-ip-country');
  // Redirect to locale-specific page at edge -- no origin roundtrip
  if (country === 'DE' && !request.nextUrl.pathname.startsWith('/de')) {
    return NextResponse.redirect(new URL('/de' + request.nextUrl.pathname, request.url));
  }
}
Optimize database queries on SSR routes
For routes that must use SSR, minimize database query time. Add indexes on columns used in WHERE and ORDER BY clauses, use connection pooling (Prisma Accelerate, PgBouncer), and run independent queries in parallel with Promise.all instead of serially.
// Bad: serial queries add up
async function getDashboardData(id: string) {
  const user = await db.users.findUnique({ where: { id } }); // 50ms
  const orders = await db.orders.findMany({ where: { userId: id } }); // 80ms
  const metrics = await db.metrics.findFirst({ where: { userId: id } }); // 60ms
  // Total: 190ms serial
}

// Good: parallel queries complete in max(50, 80, 60) = 80ms
async function getDashboardData(id: string) {
  const [user, orders, metrics] = await Promise.all([
    db.users.findUnique({ where: { id } }),
    db.orders.findMany({ where: { userId: id } }),
    db.metrics.findFirst({ where: { userId: id } }),
  ]);
  // Total: 80ms parallel
}
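The indexes themselves live in the database schema. With Prisma, for example, the `orders` lookup above would be backed by an index on the filtered column -- a sketch, with model and field names assumed to match the queries in this guide:

```prisma
model Order {
  id     String @id
  userId String

  // Index the column used in the WHERE clause, so
  // db.orders.findMany({ where: { userId } }) becomes an index scan
  // instead of a full table scan
  @@index([userId])
}
```

For raw SQL the equivalent is `CREATE INDEX` on the same column; either way, verify with `EXPLAIN` that the query actually uses the index.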
Quick checklist
- Pages that don't need per-request data use SSG or ISR (not SSR)
- Dynamic routes use Suspense for streaming SSR
- Data fetches use unstable_cache or fetch cache options
- Site deployed on Vercel or a CDN-backed platform
- Independent data fetches run in parallel with Promise.all
- Database queries have indexes on WHERE and ORDER BY columns
Frequently asked questions
What causes high TTFB in Next.js?
High TTFB is almost always caused by SSR (getServerSideProps or dynamic App Router routes) combined with slow database queries, no caching, or geographic distance. Each SSR page must query the database, render React to HTML, and transmit the response before the browser receives the first byte. Static pages have none of these delays.
Does deploying to Vercel reduce TTFB automatically?
Yes, for static and ISR pages -- these are cached at edge nodes worldwide with TTFB under 50ms. For SSR pages, Vercel routes to the nearest serverless function region, reducing latency but not eliminating server processing time. Only static or ISR content gets CDN-level TTFB.
How does ISR compare to SSR for TTFB?
With ISR, regeneration happens in the background -- users always receive cached HTML immediately (TTFB under 50ms) while the regeneration happens for the next visitor. With SSR, every request waits for a fresh render (200-800ms depending on database complexity). ISR gives you SSG performance with near-SSR freshness.
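This background-regeneration model is essentially stale-while-revalidate. A minimal in-memory sketch of the idea in plain TypeScript -- a conceptual illustration, not how Next.js implements ISR internally:

```typescript
// Stale-while-revalidate: serve the cached value instantly,
// rebuild in the background once it goes stale.
type Entry<T> = { value: T; builtAt: number };

function makeSWRCache<T>(build: () => Promise<T>, revalidateMs: number) {
  let entry: Entry<T> | null = null;
  let rebuilding = false;

  return async function get(): Promise<T> {
    const now = Date.now();
    if (entry === null) {
      // First request: block, like the initial static build
      entry = { value: await build(), builtAt: now };
      return entry.value;
    }
    const current = entry;
    if (now - current.builtAt > revalidateMs && !rebuilding) {
      // Stale: kick off a rebuild for the NEXT visitor...
      rebuilding = true;
      build().then(value => {
        entry = { value, builtAt: Date.now() };
        rebuilding = false;
      });
    }
    // ...but this visitor gets the cached value immediately
    return current.value;
  };
}
```

Every caller after the first build gets a fast cached response; staleness is bounded by `revalidateMs`, just as `revalidate` bounds it for an ISR page.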
Can Edge Functions help with dynamic content?
Edge Functions generate responses at the edge, eliminating the round trip to a central function. They dramatically improve TTFB for dynamic content but have constraints: no Node.js APIs, limited memory, restricted package support. Best for lightweight personalization, auth redirects, A/B testing, and simple data lookups via edge KV stores.
How should I cache data in the App Router?
Use unstable_cache() for non-fetch data sources (Prisma, SDKs). Use { next: { revalidate: 60 } } on fetch calls for time-based caching. Use { cache: 'force-cache' } for indefinite caching. Invalidate by tag with revalidateTag() from Server Actions when data changes.
Why is my TTFB still high after these fixes?
The most common causes are: uncached server-side rendering (each request triggers full page generation), slow database queries without indexes, hosting on a single-region origin server far from users, and missing CDN caching headers. For Next.js, check that static/ISR pages are being served from CDN edge nodes rather than hitting the origin on every request.
Related resources
Complete TTFB Guide
Deep dive into what TTFB measures, CDNs, server caching, and edge computing.
Fix LCP in Next.js
TTFB is the floor for LCP -- fix TTFB first, then optimize the rendering pipeline.
Fix INP in Next.js
Reduce interaction latency with Server Components, code splitting, and startTransition.
Continue learning
Complete TTFB Guide
Deep dive into TTFB -- thresholds, measurement, and optimization strategies.
Fix LCP in Next.js
Related performance optimization for the same framework.
Fix CLS in Next.js
Related performance optimization for the same framework.
CWV Score Explainer
Enter your scores for personalized fix recommendations.