Static vs Server-Side Rendering: Web Performance in 2026
The rendering strategy you choose for a web application is one of the highest-leverage performance decisions you will make. Static Site Generation (SSG), Incremental Static Regeneration (ISR), traditional Server-Side Rendering (SSR), and edge SSR each carry distinct trade-offs across TTFB, LCP, hydration cost, INP, and operational scaling. For years, "go static" was safe advice for any content-driven site. In 2026, the landscape is more nuanced: edge runtimes like Vercel Edge Functions and Cloudflare Workers have pushed SSR TTFB down to 60-120ms globally, narrowing the gap that once made SSG an obvious default. This comparison uses controlled benchmark data and CrUX field data to tell you exactly where each strategy wins — and where it costs you.
How this comparison was conducted
All benchmarks ran identical application builds — a content-heavy article page, an e-commerce product listing page, and a dashboard page with user-specific data — deployed across four rendering strategies on the same infrastructure. Static and ISR builds were deployed to Vercel's global edge network (350+ edge nodes). SSR builds ran on Vercel Serverless Functions in us-east-1 (Node.js 22 runtime). Edge SSR builds ran on Vercel Edge Functions using the Edge Runtime, and separately on Cloudflare Workers to cross-validate results.
Lab measurements used Lighthouse 12 on a simulated mobile device (Moto G Power, 4G throttled at 10 Mbps down / 750 Kbps up, 40ms RTT). Field data came from the Chrome User Experience Report (CrUX) March 2026 dataset, filtered to origins using each rendering pattern. We measured TTFB, Largest Contentful Paint (LCP), Interaction to Next Paint (INP), Cumulative Layout Shift (CLS), and hydration JavaScript payload.
All React-based pages used Next.js 15.3 with the App Router. ISR pages used revalidate: 60 (time-based) and on-demand revalidation via cache tags. SSR pages used dynamic = 'force-dynamic'. Edge SSR pages used the runtime = 'edge' export. Remix 3.2 was included for the SSR baseline as a second data point.
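The three dynamic configurations above map to one-line route segment exports in the Next.js App Router. A sketch, with one export per route file (file paths are illustrative, not from the benchmark repo):

```typescript
// app/article/[slug]/page.tsx — ISR: serve cached HTML, regenerate in the
// background at most once every 60 seconds
export const revalidate = 60;

// app/dashboard/page.tsx — SSR: opt the route out of caching entirely
export const dynamic = 'force-dynamic';

// app/geo/page.tsx — Edge SSR: render in the Edge Runtime instead of Node.js
export const runtime = 'edge';
```

Each export lives in its own route file; pages with no dynamic data and no such export are prerendered statically by default.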
TTFB: the metric where rendering strategy matters most
Time to First Byte is where the four strategies diverge most sharply. TTFB directly feeds into LCP — the browser cannot begin rendering until the first HTML byte arrives — so a 400ms TTFB difference compounds into a substantially worse LCP score, especially on mobile networks.
TTFB by Rendering Strategy (p75, Global)

| Strategy | TTFB p75 |
|---|---|
| SSG | 20-50ms |
| ISR (cache hit) | 20-50ms |
| ISR (cold miss) | 460ms |
| Edge SSR | 60-120ms |
| SSR (regional) | 420ms |
The ISR cache miss figure deserves attention. When a page is being regenerated in the background — the stale-while-revalidate pattern — the user still receives the cached (stale) version instantly. Only the first visitor after a page has expired from cache triggers a cold regeneration, and even that request typically receives the previous cached version. The 460ms figure applies only to the rare uncached cold path, not to typical user traffic. In practice, ISR's effective p75 TTFB stays below 50ms for high-traffic pages.
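The stale-while-revalidate behavior described above can be sketched as a minimal cache: only a true cold miss makes the caller wait for generation, while a stale hit returns the old value immediately and regenerates in the background. This is a generic illustration of the pattern, not Next.js's actual implementation:

```typescript
type Entry<T> = { value: T; generatedAt: number };

class SwrCache<T> {
  private entries = new Map<string, Entry<T>>();
  private inflight = new Set<string>();

  constructor(
    private revalidateMs: number,
    private generate: (key: string) => Promise<T>,
  ) {}

  async get(key: string): Promise<{ value: T; servedStale: boolean }> {
    const entry = this.entries.get(key);
    if (!entry) {
      // Cold miss: the one slow path — the caller waits for generation.
      const value = await this.generate(key);
      this.entries.set(key, { value, generatedAt: Date.now() });
      return { value, servedStale: false };
    }
    const stale = Date.now() - entry.generatedAt > this.revalidateMs;
    if (stale && !this.inflight.has(key)) {
      // Stale hit: serve the old value now, regenerate asynchronously.
      this.inflight.add(key);
      this.generate(key)
        .then((value) => this.entries.set(key, { value, generatedAt: Date.now() }))
        .finally(() => this.inflight.delete(key));
    }
    return { value: entry.value, servedStale: stale };
  }
}
```

Note that warm hits and stale hits both return instantly; the 460ms path exists only in the `!entry` branch.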
Traditional regional SSR's 420ms p75 TTFB includes roughly 180ms of cold-start latency for serverless functions, plus database query time averaging 90ms, plus 150ms of geographic distance for users outside us-east-1. European and Asian users see TTFB of 600-900ms on the regional SSR baseline — a meaningful drag on LCP.
LCP: how TTFB becomes visible to users
Largest Contentful Paint is the Core Web Vitals metric most sensitive to rendering strategy. The threshold for a "Good" LCP score is 2.5 seconds; "Needs Improvement" runs from 2.5s to 4.0s. Our mobile benchmark data tells a clear story:
| Strategy | LCP p75 (Mobile) | LCP p75 (Desktop) | CWV Rating |
|---|---|---|---|
| SSG | 1.4s | 0.8s | Good |
| ISR (cached) | 1.5s | 0.9s | Good |
| Edge SSR | 1.8s | 1.1s | Good |
| SSR (regional) | 2.7s | 1.6s | Needs Improvement |
The regional SSR mobile LCP of 2.7s puts the average page into the "Needs Improvement" band. That is not a marginal issue — Google's CrUX data shows that pages in this band have measurably lower click-through rates from search results. For teams maintaining LCP optimization across a large site, rendering strategy is a non-negotiable first step before any image, font, or CSS optimization.
For teams already on Next.js who need to address LCP on SSR pages, the LCP fix guide for Next.js walks through converting pages to SSG or ISR as the highest-ROI intervention, alongside image optimization and font loading strategies.
Hydration cost and INP: the JavaScript problem both strategies share
Here is the part that surprises most developers: SSG and SSR pages suffer equally from hydration when they ship identical JavaScript payloads. Hydration is the process where the browser re-executes React (or Vue, Svelte, etc.) over the server-rendered HTML to attach event listeners and make the page interactive. During hydration, the main thread is blocked — and blocked main threads cause poor Interaction to Next Paint (INP) scores.
The rendering strategy determines how quickly HTML arrives; it does not determine how much JavaScript ships. A Next.js SSG page that hydrates a 320KB React bundle will have worse INP than an SSR page that uses React Server Components and ships only 80KB of client-side JavaScript.
INP vs Client JS Payload (p75, Mobile)
The INP threshold for "Good" is 200ms; "Needs Improvement" runs to 500ms. Pages with heavy hydration — typical of older Next.js Pages Router builds that ship the full React tree as client JavaScript — can push INP toward 220-250ms even on desktop. On constrained mobile devices, the same bundle can push INP past 400ms.
React Server Components, introduced in Next.js 13 and stabilized in Next.js 15, are the primary tool for reducing hydration cost. With RSC, components that do not need interactivity render entirely on the server. The client receives finished HTML for those components — no hydration necessary. For a typical content page, RSC reduces client JavaScript by 40-55%, which translates directly to lower INP. See the INP fix guide for React for concrete implementation patterns and before/after benchmark data.
It is also worth noting the relationship between JavaScript performance and rendering strategy more broadly. Regardless of whether a page is statically generated or server-rendered, long tasks on the main thread — caused by large JavaScript bundles, excessive third-party scripts, or synchronous DOM manipulation during hydration — are the root cause of poor INP. Rendering strategy sets the starting conditions; JavaScript discipline determines the outcome.
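One generic mitigation for long tasks — independent of rendering strategy or framework — is to break expensive loops into chunks and yield to the event loop between them, so pending interactions can be serviced. A sketch of the pattern (the function name and chunk size are illustrative):

```typescript
// Process a large array in slices, yielding between slices so the main
// thread can handle input events instead of being blocked for the full run.
async function processInChunks<T, R>(
  items: T[],
  work: (item: T) => R,
  chunkSize = 50,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(work(item));
    }
    // Yield: queued interactions run before the next slice starts.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```

In browsers that support it, `scheduler.yield()` is a more direct way to yield than a zero-delay timeout, but the structure is the same.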
ISR in depth: the performance profile across cache states
Incremental Static Regeneration deserves its own section because its performance profile is more complex than either pure SSG or pure SSR. An ISR page can exist in three states, each with a different performance fingerprint:
- Warm cache hit: The page was generated at build time or by a recent regeneration, is within its revalidation window, and is being served from the CDN edge. TTFB: 20-50ms. Identical to SSG.
- Stale-while-revalidate: The page has passed its revalidate window. The first request triggers a background regeneration, but the visitor receives the stale cached version immediately. TTFB: still 20-50ms. The regeneration happens asynchronously. This is the correct mental model for most ISR traffic — visitors almost never see the slow path.
- Cold miss: The page has never been generated (a new product, a new article) and a visitor hits it for the first time. The CDN falls back to the origin, the page is server-rendered, and the result is cached for subsequent requests. TTFB: 400-600ms on this single request only.
The practical implication: for pages that receive more than a handful of visits per day, ISR performs identically to SSG. The cold miss scenario is relevant only for large catalogs where many pages may never receive organic traffic (tail-end product SKUs, for example). Teams shipping Next.js 15 can use on-demand ISR via revalidateTag('product-123') to surgically invalidate pages when underlying data changes, eliminating unnecessary regeneration cycles and ensuring the cold miss path is hit only when genuinely needed.
```ts
// Next.js 15 — on-demand ISR revalidation via cache tags.
// In a Server Action or Route Handler:
import { revalidateTag } from 'next/cache';

export async function updateProduct(productId: string) {
  // ...update the database...
  // Invalidate only the affected pages, not the entire build.
  revalidateTag(`product-${productId}`);
  revalidateTag('product-listing');
}

// In the page component — tag the fetch so revalidateTag knows which
// cached entries to purge. Note that in Next.js 15, `params` is a
// Promise and must be awaited. (The API URL is illustrative; a
// server-side fetch needs an absolute URL.)
export async function generateMetadata({
  params,
}: {
  params: Promise<{ id: string }>;
}) {
  const { id } = await params;
  const data = await fetch(`https://api.example.com/products/${id}`, {
    next: { tags: [`product-${id}`, 'product-listing'] },
  });
  // ...
}
```
This pattern lets an e-commerce team serve 99.9% of product page traffic as static CDN responses while keeping data fresh within seconds of a price change or stock update — without triggering a full rebuild of potentially millions of pages.
Edge SSR: closing the static performance gap
Edge SSR is the most significant architectural development in rendering strategy over the past two years. By executing server-side rendering in V8 isolates at CDN edge nodes — Vercel Edge Functions on Vercel's network, or Cloudflare Workers on Cloudflare's 300+ PoP network — edge SSR achieves TTFB of 60-120ms globally, compared to 350-700ms for regional serverless SSR.
The constraints of edge runtimes are real: no Node.js APIs (no fs, no native modules), a 1-4MB bundle size limit on Cloudflare Workers (128MB on Vercel Edge), and a maximum execution time of 30ms on Cloudflare (no limit on Vercel Edge). These constraints rule out heavy server-side computation, complex authentication logic that depends on Node.js libraries, or pages that require large npm packages. But for the common cases — A/B testing, geolocation-based personalization, auth-gated content, and request-time data fetching from fast edge-friendly APIs — edge SSR is a compelling upgrade from regional SSR.
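Geolocation-based personalization at the edge can be sketched as a plain Fetch API handler, the shape Cloudflare Workers uses (Cloudflare sets the `CF-IPCountry` request header; on Vercel the equivalent is `x-vercel-ip-country`). The route and pricing logic here are hypothetical:

```typescript
// An edge handler using only the Fetch API Request/Response types.
function handle(request: Request): Response {
  const country = request.headers.get('cf-ipcountry') ?? 'US';
  const body = `<p>Prices shown in ${country === 'DE' ? 'EUR' : 'USD'}</p>`;
  return new Response(body, {
    headers: {
      'content-type': 'text/html; charset=utf-8',
      // The response is personalized per request, so forbid shared caching.
      'cache-control': 'private, no-store',
    },
  });
}
```

The `cache-control: private, no-store` header matters: a personalized edge response must not be cached by the CDN, or one visitor's variant leaks to others — the render stays fast because it runs at the nearest PoP, not because it is cached.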
For teams comparing deployment platforms, our Next.js vs Remix performance comparison covers how each framework exposes edge SSR and the practical performance differences between the two approaches on identical content.
CLS: rendering strategy has minimal impact
Cumulative Layout Shift measures visual stability. Unlike TTFB and LCP, CLS is largely independent of rendering strategy. Both SSG and SSR produce server-rendered HTML that the browser can lay out before JavaScript executes, preventing the CLS spikes typical of client-side-only React applications. The real CLS risks — images without explicit dimensions, late-loaded web fonts causing FOUT, dynamically injected banners above the fold — are implementation concerns that exist identically in all four rendering strategies.
The marginal SSG advantage in CLS (0.03 vs 0.04) disappears when SSR pages are built with the same image sizing discipline. Both strategies produce identical CLS scores when images carry explicit width and height attributes or CSS aspect-ratio containers. CLS should not factor into rendering strategy selection.
Scaling and cost: the operational dimension
Performance is not only about user-facing metrics — rendering strategy also determines how a site scales under load and what it costs to operate at scale.
SSG scales essentially for free. Pages are static files on a CDN. A traffic spike of 10x or 1000x has zero impact on origin infrastructure and adds negligible cost. Build times grow with the number of pages — a 100,000-page site can take 30-60 minutes to build — but runtime scaling is a solved problem. SSG is the right choice whenever content can tolerate build-time staleness.
ISR inherits SSG's scaling properties for cached pages. Regeneration requests are throttled by the revalidation interval, so even a high-traffic site generates only one origin request per page per revalidation window. Operational cost is low.
SSR (regional serverless) creates one serverless function invocation per page request. Under moderate traffic this is manageable, but at high concurrency, cold-start latency spikes, and compute costs scale linearly with traffic. A site that could serve 1 million requests per day as SSG for near-zero cost may spend several hundred dollars per day in SSR function invocations for the same traffic.
Edge SSR has more favorable cost scaling than regional SSR because edge runtimes are designed for high-throughput, low-latency execution at minimal CPU cost. Cloudflare Workers bills by request count with no per-duration cost, making it cost-competitive with CDN serving for most traffic volumes. Vercel Edge Functions have a generous included tier and per-invocation pricing beyond it.
Which rendering strategy should you choose?
Use this decision framework to match strategy to use case:
- Choose SSG when: content is fully known at build time, freshness tolerance is measured in hours or days, and you want the absolute best TTFB and LCP with zero operational complexity. Blogs, documentation, marketing landing pages, and portfolio sites are the canonical SSG use cases.
- Choose ISR when: content changes frequently but not per-request (product catalogs, news sites, user-generated content sites), you need CDN-level TTFB for the vast majority of traffic, and you want to avoid full rebuilds. ISR is the upgrade path from SSG for content-driven sites that have outgrown build-time generation at scale.
- Choose Edge SSR when: you need per-request dynamic logic — personalization, A/B testing, geolocation routing, or auth-gated content — but cannot accept the 400ms+ TTFB of regional SSR. Edge SSR is the right choice when dynamic requirements are incompatible with SSG/ISR and you need global performance parity.
- Choose regional SSR when: your per-request logic requires Node.js APIs, large npm packages, or heavy server-side computation that cannot run in edge runtimes, and you have a primarily regional user base that is geographically close to your origin. SSR is also appropriate for authenticated dashboards where TTFB is less critical because users are already logged in and engaged.
Frequently asked questions
Is static site generation always faster than server-side rendering?
For TTFB and LCP, SSG is almost always faster because pages are pre-built and served from CDN edge nodes with sub-50ms TTFB. However, SSR can match or beat SSG for INP when a page ships less hydration JavaScript. Edge SSR narrows the TTFB gap significantly — Vercel Edge Functions and Cloudflare Workers deliver SSR TTFB of 60-120ms, approaching CDN static performance. The "SSG is always faster" rule holds for TTFB and LCP but not for interactivity metrics.
What is ISR and how does it affect performance?
Incremental Static Regeneration (ISR) is a hybrid approach where pages are statically generated at build time but automatically regenerated in the background after a revalidation period you specify. Performance-wise, ISR delivers the same sub-50ms TTFB as pure SSG for cached pages, with SSR-level latency only on the first request after a revalidation. Next.js 15 supports both time-based ISR (revalidate: 60) and on-demand ISR via cache tags, giving you surgical control over when specific pages are invalidated without triggering a full rebuild.
How does hydration cost affect INP in SSR versus SSG pages?
Both SSG and SSR pages must hydrate on the client when they use React or similar frameworks. The hydration cost depends entirely on how much JavaScript ships, not whether the HTML was generated at build time or request time. SSR pages that use React Server Components can eliminate hydration for non-interactive content, reducing INP-relevant JavaScript by 30-50%. The key variable is JavaScript payload, not the rendering mode. A statically generated page with a 300KB React bundle will have worse INP than a server-rendered page using RSC that ships 80KB.
When should I use edge SSR instead of traditional SSR?
Edge SSR is the right choice when you need dynamic, personalized content with near-static TTFB. Running on Vercel Edge Functions or Cloudflare Workers, edge SSR achieves 60-120ms TTFB globally because rendering runs at the edge node closest to the user, not in a single-region data center. Edge SSR has constraints — no Node.js APIs, limited runtime size — but for pages that require per-request logic like A/B testing, geolocation, or auth-gated content, it outperforms both static and regional SSR. If your dynamic logic can run in an edge-compatible runtime, edge SSR is almost always the better choice over regional SSR in 2026.
Does rendering strategy affect Core Web Vitals scores in Google Search?
Yes, directly. TTFB influences LCP, which is a Core Web Vital scored in Google Search. Static and edge-SSR pages with sub-100ms TTFB consistently reach LCP under 1.8s on desktop, putting them in the "Good" band. Traditional SSR with 400-800ms TTFB typically pushes LCP to 2.2-3.0s on mobile, approaching or crossing the "Needs Improvement" threshold. For SEO-sensitive pages, rendering strategy is a first-order concern — before image optimization, before font loading, before any other performance intervention.
Related guides
- TTFB Guide — reduce time to first byte across all rendering strategies
- LCP Guide — largest contentful paint optimization from first principles
- INP Guide — fix interaction responsiveness and hydration cost
- Fix LCP in Next.js — framework-specific steps for Next.js 15
- Fix INP in React — reduce hydration cost with React Server Components