Edge Functions for TTFB: How to Achieve Sub-100ms Response Times

Edge functions are one of the most powerful TTFB optimizations available today, capable of reducing server response time from 1.5 seconds to under 40ms for users anywhere in the world. Traditional server-side rendering runs in a single data center, meaning every user pays a network round-trip penalty proportional to their distance from that origin. A user in Tokyo visiting a US-hosted site incurs 150-200ms of network latency before the server even starts processing the request. Edge functions eliminate this by executing your rendering logic at whichever of 200-300+ globally distributed points of presence (PoPs) is closest to the user -- typically 20-30ms away.

The technology behind this speed is V8 isolates -- the same sandboxing mechanism used inside Chrome -- instead of traditional containers or virtual machines. Isolates start in under 1ms with no cold start penalty, consume minimal memory, and handle thousands of concurrent requests per node. Cloudflare Workers pioneered this model; Vercel Edge Runtime and Netlify Edge Functions are built on the same principle. The catch is a constrained runtime: no native Node.js modules, no persistent filesystem, and limited memory per request.

This guide walks through five steps: choosing your edge runtime, deciding what to migrate, implementing edge-compatible data patterns, handling auth and personalization at the edge, and setting up robust fallback strategies. For platform-specific TTFB optimizations, also see fixing TTFB on Vercel, fixing TTFB on Cloudflare Pages, and the server response time guide. The complete TTFB guide covers all factors affecting server response time.

Expected results

Following all steps in this guide typically produces these improvements:

Before: 1.5s TTFB (Poor) -- Single-region SSR, network latency to distant users, cold start penalty

After: 40ms TTFB (Good) -- Edge rendering 20-30ms from users, sub-1ms cold starts, edge KV for data

Step-by-step fix

Understand edge computing architectures

Before migrating to edge functions, you need to understand the capabilities and constraints of each platform. The major edge runtimes differ significantly in their PoP count, runtime compatibility, data access options, and pricing model. Choosing the wrong platform for your use case can result in a complex migration that delivers disappointing TTFB gains.

The comparison below covers the four most widely deployed edge runtimes as of 2026. Cloudflare Workers has the most mature ecosystem with the largest PoP network and the richest set of edge-native services (KV, D1, R2, Queues, Durable Objects). Vercel Edge Runtime has the tightest Next.js integration. Netlify Edge Functions offer the simplest migration path for existing Netlify deployments. Lambda@Edge runs closest to the full Node.js runtime but has significant cold start overhead and limited PoP placement.

Text -- Edge runtime comparison table
┌─────────────────────┬──────────────┬──────────────┬───────────────┬──────────────────┐
│ Feature             │ CF Workers   │ Vercel Edge  │ Netlify Edge  │ Lambda@Edge      │
├─────────────────────┼──────────────┼──────────────┼───────────────┼──────────────────┤
│ PoP count           │ 300+         │ ~50          │ ~50           │ 13 (regions)     │
│ Cold start          │ <1ms         │ <5ms         │ <5ms          │ 100-3000ms       │
│ Runtime             │ V8 Isolates  │ V8 Isolates  │ Deno          │ Node.js 18-20    │
│ Max execution time  │ 30s (paid)   │ 25s          │ 30s           │ 5s (viewer)      │
│ Memory limit        │ 128MB        │ 128MB        │ 512MB         │ 128MB-10GB       │
│ Node.js compat.     │ Partial      │ Partial      │ Partial       │ Full             │
│ Native binaries     │ No           │ No           │ No            │ Yes              │
│ File system access  │ No           │ No           │ No            │ Ephemeral /tmp   │
│ Edge KV store       │ Workers KV   │ Vercel KV    │ Netlify Blobs │ No (ElastiCache) │
│ Edge SQL            │ D1 (SQLite)  │ Neon/Turso   │ No            │ No               │
│ Streaming response  │ Yes          │ Yes          │ Yes           │ Partial          │
│ Pricing model       │ Per-request  │ Per-request  │ Included      │ Per-request+GB   │
│ Best for            │ High traffic │ Next.js      │ Netlify sites │ Full Node.js API │
└─────────────────────┴──────────────┴──────────────┴───────────────┴──────────────────┘

Key insight: Lambda@Edge is NOT a true edge runtime for TTFB purposes.
It runs in 13 AWS regions, not 300+ PoPs, meaning users in Southeast Asia,
Africa, and Latin America still experience 100-200ms network latency.
Use Lambda@Edge only when you need full Node.js compatibility.
For TTFB, prefer Cloudflare Workers, Vercel Edge, or Netlify Edge.
JavaScript -- Cloudflare Workers: minimal Hello World with timing
// worker.js -- Cloudflare Workers basic structure
// Deploy: npx wrangler deploy worker.js

export default {
  async fetch(request, env, ctx) {
    const start = Date.now();

    // Request metadata available without origin round trip:
    const country = request.cf?.country;        // 'US', 'DE', 'JP' ...
    const city    = request.cf?.city;           // 'New York', 'Berlin' ...
    const colo    = request.cf?.colo;           // 'EWR', 'FRA', 'NRT' (Cloudflare PoP)
    const asn     = request.cf?.asn;            // Autonomous System Number
    const tlsVersion = request.cf?.tlsVersion;  // 'TLSv1.3'

    // Generate response at the edge -- no origin needed
    const html = `<!DOCTYPE html><html><body>
      <p>Served from ${colo} in ${country} | ${Date.now() - start}ms</p>
    </body></html>`;

    return new Response(html, {
      headers: {
        'Content-Type': 'text/html;charset=UTF-8',
        'Cache-Control': 'public, s-maxage=60',
        // Expose edge timing for debugging
        'Server-Timing': `edge;dur=${Date.now() - start};desc="CF-${colo}"`,
      },
    });
  },
};

// Benchmark: ~1ms execution time, 10-30ms total TTFB from any location
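On the monitoring side, the Server-Timing header emitted above can be parsed back out in client or RUM code. A minimal sketch -- parseServerTiming is an illustrative helper, not a platform API:

TypeScript -- Parse the Server-Timing header emitted by the worker (sketch)

```typescript
// server-timing.ts -- parse headers like 'edge;dur=1;desc="CF-EWR"'
type TimingMetric = { name: string; dur?: number; desc?: string };

export function parseServerTiming(header: string): TimingMetric[] {
  return header.split(',').map(entry => {
    const [name, ...params] = entry.trim().split(';');
    const metric: TimingMetric = { name: name.trim() };
    for (const p of params) {
      const [key, rawValue] = p.trim().split('=');
      if (key === 'dur') metric.dur = Number(rawValue);
      if (key === 'desc') metric.desc = rawValue?.replace(/^"|"$/g, '');
    }
    return metric;
  });
}
```

This lets you log per-PoP edge duration alongside browser-measured TTFB and spot PoPs where edge execution is unexpectedly slow.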

Migrate server-rendered pages to edge runtime

Not every route is a good candidate for edge runtime. The edge runtime's constraints -- no native Node.js modules, no persistent connections, limited memory -- mean some routes must stay on origin. The key is identifying the routes that serve the most users and have the highest TTFB impact, then checking whether they are edge-compatible.

The migration decision is mostly about data dependencies. A page that renders entirely from data in a KV store, or that is mostly static with a short revalidation window, is an ideal edge candidate. A page that requires a complex multi-table PostgreSQL query joining 10 tables with 500ms of query time should stay on origin -- moving it to the edge just puts an extra edge-to-origin network round trip in front of the same 500ms query. Focus edge migration on high-traffic, low-data-complexity pages where TTFB directly affects conversion: homepages, category landing pages, and marketing pages.
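As a back-of-envelope sanity check before migrating a route, the trade-off can be modeled with a few assumed latencies (all numbers below are illustrative, not measurements):

TypeScript -- Rough TTFB placement model (sketch)

```typescript
// TTFB ≈ user->runtime network + runtime->data latency + compute time
function estimateTTFBMs(networkMs: number, dataMs: number, computeMs: number): number {
  return networkMs + dataMs + computeMs;
}

// Homepage rendered from edge KV: nearby PoP + fast KV read
const homepageEdge = estimateTTFBMs(25, 5, 5);     // ~35ms
// Same page from a distant origin for a far-away user
const homepageOrigin = estimateTTFBMs(180, 5, 5);  // ~190ms

// Report page with a 500ms query: the edge function still has to reach the
// origin database, so migration buys nothing -- it can even lose slightly
const reportEdge = estimateTTFBMs(25, 180 + 500, 50);  // ~755ms
const reportOrigin = estimateTTFBMs(180, 500, 50);     // ~730ms
```

The homepage improves by roughly 5x, while the report page gets marginally worse -- exactly the split the paragraph above describes.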

TypeScript -- Next.js middleware: route-level edge runtime assignment
// middleware.ts -- Runs on edge runtime for every matched request
// This file ALWAYS runs on the edge (Vercel, Netlify, or Cloudflare)

import { NextRequest, NextResponse } from 'next/server';

export const config = {
  // Only run middleware on these paths
  matcher: ['/((?!api|_next/static|_next/image|favicon.ico).*)'],
};

export function middleware(request: NextRequest) {
  const { pathname, searchParams } = request.nextUrl;

  // Geolocation-based redirect (no origin needed)
  // Note: request.geo was removed in Next.js 15 -- on newer versions use
  // geolocation(request) from '@vercel/functions' instead
  const country = request.geo?.country || 'US';
  if (pathname === '/' && country !== 'US') {
    return NextResponse.redirect(
      new URL(`/${country.toLowerCase()}${pathname}`, request.url)
    );
  }

  // A/B test assignment at edge (no origin round trip)
  const variant = request.cookies.get('ab-variant')?.value
    || (Math.random() < 0.5 ? 'a' : 'b');

  const response = NextResponse.next();
  response.cookies.set('ab-variant', variant, {
    maxAge: 60 * 60 * 24 * 30, // 30 days
    sameSite: 'lax',
  });
  response.headers.set('x-ab-variant', variant);

  return response;
}

// Execution: ~0.5ms on Vercel Edge, ~0.2ms on Cloudflare
TypeScript -- Next.js: per-route edge runtime with compatibility check
// app/blog/[slug]/page.tsx
// Mark this route as edge runtime

import { notFound } from 'next/navigation';

export const runtime = 'edge'; // Opt into edge runtime

// Edge-compatible data fetching (HTTP-based, no persistent connection)
async function getPost(slug: string) {
  // Use fetch() -- available in edge runtime
  // Turso: libSQL over HTTP (edge-compatible)
  const response = await fetch(`${process.env.TURSO_DB_URL}/v2/pipeline`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.TURSO_AUTH_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      requests: [{
        type: 'execute',
        stmt: {
          sql: 'SELECT title, content, published_at FROM posts WHERE slug = ?',
          args: [{ type: 'text', value: slug }],
        },
      }],
    }),
  });

  const data = await response.json();
  // v2 pipeline responses nest the result: results[0].response.result.rows,
  // and each cell is a { type, value } object rather than a bare value
  return data.results?.[0]?.response?.result?.rows?.[0] ?? null;
}

export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  if (!post) return notFound();

  // Cells arrive in column order: 0 = title, 1 = content, 2 = published_at
  return (
    <article>
      <h1>{post[0].value}</h1>
      <div dangerouslySetInnerHTML={{ __html: post[1].value }} />
    </article>
  );
}

// Routes NOT suitable for edge runtime (keep on Node.js origin):
// - Pages using sharp, canvas, or other native binaries
// - Routes with >128MB memory requirements
// - Pages with long-running computation (>25s)
// - API routes using node:crypto, node:fs, node:child_process
JavaScript -- Edge compatibility audit: detect Node.js-only APIs
// scripts/edge-compat-audit.mjs
// Run before migrating routes to edge runtime
// Usage: node scripts/edge-compat-audit.mjs

import { readFileSync, readdirSync, statSync } from 'fs';
import { join } from 'path';

const NODE_ONLY_APIS = [
  'require(', 'process.env.', '__dirname', '__filename',
  "from 'fs'", "from 'path'", "from 'crypto'", "from 'child_process'",
  "from 'net'", "from 'http'", "from 'https'", "from 'stream'",
  'new Buffer(', 'Buffer.from',  // Use Uint8Array instead
  "require('sharp')", "require('bcrypt')", "require('canvas')",
];

function auditFile(filePath) {
  const content = readFileSync(filePath, 'utf8');
  const issues = [];

  for (const api of NODE_ONLY_APIS) {
    if (content.includes(api)) {
      issues.push(api);
    }
  }

  if (issues.length > 0) {
    console.log(`\n⚠️  ${filePath}`);
    console.log(`   Node.js-only APIs found: ${issues.join(', ')}`);
    console.log(`   ↳ Keep on origin runtime`);
  } else {
    console.log(`✅ ${filePath} -- edge-compatible`);
  }
}

function auditDirectory(dir) {
  for (const entry of readdirSync(dir)) {
    const fullPath = join(dir, entry);
    const stat = statSync(fullPath);
    if (stat.isDirectory() && !entry.startsWith('.') && entry !== 'node_modules') {
      auditDirectory(fullPath);
    } else if (entry.match(/\.(ts|tsx|js|jsx)$/)) {
      auditFile(fullPath);
    }
  }
}

auditDirectory('./app');

Implement edge data patterns

The biggest challenge with edge functions is data access. Traditional server rendering assumes a persistent connection pool to a PostgreSQL or MySQL database. Edge functions spin up in under 1ms and cannot maintain persistent connections -- they need data access patterns that work over HTTP with per-request connections. Fortunately, a well-designed edge data architecture can keep total data access time under 10ms for the common case.

The three-tier edge data pattern covers most use cases: a KV store at the edge for user session state and feature flags (1-5ms read latency), an HTTP-based edge database like Turso or Neon for relational queries that cannot be avoided (10-30ms), and the Cache API for short-lived computed responses that save re-executing queries for multiple users requesting the same data. Cloudflare Durable Objects provide the fourth tier for stateful coordination (e.g., rate limiting, presence) that requires consistency across requests.
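The tier fall-through logic can be sketched generically, with tiers injected so the same helper works over the Cache API, KV, or an HTTP database client. The Tier shape and readTiered helper below are illustrative, not a platform API:

TypeScript -- Tiered edge data read with backfill (sketch)

```typescript
interface Tier<T> {
  name: string;
  get(key: string): Promise<T | null>;
  put?(key: string, value: T): Promise<void>;
}

// Walk tiers fastest-first; on a hit, backfill the faster tiers that missed
export async function readTiered<T>(
  key: string,
  tiers: Tier<T>[],
): Promise<{ value: T | null; source: string }> {
  const missed: Tier<T>[] = [];
  for (const tier of tiers) {
    const value = await tier.get(key);
    if (value !== null) {
      // In a worker, wrap the backfill in ctx.waitUntil() so it does not
      // block the response
      await Promise.all(missed.map(t => t.put?.(key, value)));
      return { value, source: tier.name };
    }
    missed.push(tier);
  }
  return { value: null, source: 'none' };
}
```

On the second request for the same key, the read is served from the fastest tier that was backfilled on the first request, so steady-state latency converges on the Cache API or KV numbers above.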

TypeScript -- Cloudflare Workers KV: read and write session data at edge
// Cloudflare Workers KV -- edge key-value store
// KV reads: ~1-5ms globally, KV writes: eventual consistency (~60s propagation)
// Ideal for: session state, feature flags, user preferences, A/B assignments

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const sessionId = request.headers.get('Cookie')
      ?.match(/session=([^;]+)/)?.[1];

    // Read user session from KV (1-5ms, edge-local cache hit is ~0.1ms)
    let userData = null;
    if (sessionId) {
      const raw = await env.SESSIONS.get(sessionId, { type: 'json' });
      userData = raw as { userId: string; role: string; locale: string } | null;
    }

    // Read feature flags from KV (cached per-PoP for 60s by default)
    const flags = await env.FEATURE_FLAGS.get('global', {
      type: 'json',
      cacheTtl: 60,  // PoP-local cache for 60s -- reduces KV API calls
    }) as Record<string, boolean> | null;

    // Generate personalized response without origin round trip
    const locale = userData?.locale || 'en';
    const showNewDashboard = flags?.['new-dashboard'] && userData?.role === 'admin';

    const html = renderPage({ locale, showNewDashboard, user: userData });

    return new Response(html, {
      headers: {
        'Content-Type': 'text/html;charset=UTF-8',
        'Cache-Control': 'private, max-age=0',
      },
    });
  },
};

// Write session after login (origin handles auth, edge handles reads):
// await env.SESSIONS.put(sessionId, JSON.stringify(userData), {
//   expirationTtl: 86400  // 24h TTL
// });

// KV read latency breakdown:
// PoP-local cache hit: ~0.1ms
// Regional KV read:    ~2-5ms
// Cross-region KV:     ~10-30ms (rare, only before replication)
TypeScript -- Edge Cache API: cache computed page fragments at edge
// Cache API in Cloudflare Workers / Vercel Edge
// Stores responses at the edge PoP -- faster than KV for large HTML fragments
// Cache-API stores are per-PoP (not globally replicated like KV)

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Only cache GET requests for non-personalized pages
    if (request.method !== 'GET') {
      return fetchFromOrigin(request);
    }

    const cache = caches.default;

    // Build a normalized cache key (strip tracking params)
    const cacheUrl = new URL(request.url);
    ['utm_source','utm_medium','utm_campaign','fbclid','gclid'].forEach(
      p => cacheUrl.searchParams.delete(p)
    );
    const cacheKey = new Request(cacheUrl.toString(), request);

    // Check edge cache first (sub-1ms on hit)
    let response = await cache.match(cacheKey);

    if (response) {
      // Append cache status header for debugging
      const headers = new Headers(response.headers);
      headers.set('X-Cache', 'HIT');
      // Spreading a Response does not copy its status -- pass it explicitly
      return new Response(response.body, {
        status: response.status,
        statusText: response.statusText,
        headers,
      });
    }

    // Cache miss: fetch from origin database
    const startOrigin = Date.now();
    response = await fetchFromOrigin(request);
    const originMs = Date.now() - startOrigin;

    if (response.ok) {
      const headers = new Headers(response.headers);
      headers.set('Cache-Control', 'public, s-maxage=300, stale-while-revalidate=86400');
      headers.set('X-Cache', 'MISS');
      headers.set('X-Origin-Ms', String(originMs));

      // Spreading a Response does not copy its status -- pass it explicitly
      const cachedResponse = new Response(response.body, {
        status: response.status,
        statusText: response.statusText,
        headers,
      });
      // Store in edge cache without blocking the response
      ctx.waitUntil(cache.put(cacheKey, cachedResponse.clone()));
      return cachedResponse;
    }

    return response;
  },
};
TypeScript -- Turso edge database: HTTP-based SQL for edge functions
// Turso: SQLite distributed via HTTP -- works in any edge runtime
// install: npm install @libsql/client

import { createClient } from '@libsql/client/http';

// Turso client connects over HTTPS -- no persistent TCP required
const db = createClient({
  url: process.env.TURSO_DATABASE_URL!,   // libsql://db-name-org.turso.io
  authToken: process.env.TURSO_AUTH_TOKEN!,
});

// In edge function:
export async function getProductPage(slug: string) {
  // Single query -- ~10-20ms from Turso replica closest to the edge PoP
  const { rows } = await db.execute({
    sql: `
      SELECT p.name, p.description, p.price, p.image_url,
             c.name as category
      FROM products p
      JOIN categories c ON p.category_id = c.id
      WHERE p.slug = ?
      LIMIT 1
    `,
    args: [slug],
  });

  return rows[0] ?? null;
}

// Neon serverless Postgres is an alternative for teams already on Postgres:
// import { neon } from '@neondatabase/serverless';
// const sql = neon(process.env.DATABASE_URL!);
// const [post] = await sql`SELECT * FROM posts WHERE slug = ${slug}`;
//
// Both Turso and Neon have read replicas that colocate with edge PoPs,
// reducing round-trip time to 5-15ms vs 50-200ms for a single-region DB.

Handle authentication and personalization at the edge

Authentication is one of the most impactful operations you can move to the edge. Traditional auth patterns redirect users to an origin auth endpoint, verify the session against a database, and redirect back -- adding 2-4 round trips and 300-800ms to every authenticated page load. At the edge, you can validate a JWT token cryptographically without any network call, and use the claims in the token to personalize the response immediately.

The pattern that works best for edge auth is signed JWTs (using RS256 or ES256) where the JWT contains all the claims needed for rendering decisions (user ID, role, locale, subscription tier). The edge function imports the public key, verifies the signature using Web Crypto, reads the claims, and renders the appropriate response -- all in 2-5ms. This replaces a multi-step origin auth flow that typically takes 200-600ms. The same JWT validation approach powers geolocation-based content delivery and A/B test bucketing without the overhead of origin personalization services.

TypeScript -- Edge JWT validation using Web Crypto API (no origin needed)
// edge-auth.ts -- JWT validation at the edge
// Compatible with Cloudflare Workers, Vercel Edge, Netlify Edge (Deno)

const PUBLIC_KEY_PEM = `-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA...
-----END PUBLIC KEY-----`;

// Import the RSA public key for RS256 JWT verification
async function importPublicKey(pem: string): Promise<CryptoKey> {
  const pemBody = pem
    .replace(/-----BEGIN PUBLIC KEY-----/, '')
    .replace(/-----END PUBLIC KEY-----/, '')
    .replace(/\s/g, '');

  const binaryKey = Uint8Array.from(atob(pemBody), c => c.charCodeAt(0));

  return crypto.subtle.importKey(
    'spki',
    binaryKey.buffer,
    { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
    false,
    ['verify']
  );
}

export async function verifyJWT(token: string): Promise<Record<string, unknown> | null> {
  try {
    const [headerB64, payloadB64, signatureB64] = token.split('.');
    if (!headerB64 || !payloadB64 || !signatureB64) return null;

    // Verify signature using Web Crypto (sub-1ms -- no network call)
    const key = await importPublicKey(PUBLIC_KEY_PEM);
    const data = new TextEncoder().encode(`${headerB64}.${payloadB64}`);
    const signature = Uint8Array.from(
      atob(signatureB64.replace(/-/g, '+').replace(/_/g, '/')),
      c => c.charCodeAt(0)
    );

    const valid = await crypto.subtle.verify('RSASSA-PKCS1-v1_5', key, signature, data);
    if (!valid) return null;

    // Decode payload and check expiry
    const payload = JSON.parse(atob(payloadB64));
    if (payload.exp < Math.floor(Date.now() / 1000)) return null;

    return payload;
  } catch {
    return null;
  }
}

// Usage in edge function:
// const token = request.cookies.get('auth-token')?.value;
// const user = token ? await verifyJWT(token) : null;
// Execution time: ~2-4ms (no origin round trip)
// Equivalent origin auth: ~200-600ms (DB session lookup)
TypeScript -- Geolocation and A/B testing personalization at edge
// Cloudflare Workers: geolocation + A/B + auth -- fully at the edge
import { verifyJWT } from './edge-auth';

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const cf = (request as any).cf; // Cloudflare request metadata

    // 1. Geolocation-based content (no origin round trip)
    const country    = cf?.country     || 'US';
    const region     = cf?.region      || '';
    const timezone   = cf?.timezone    || 'America/New_York';
    const continent  = cf?.continent   || 'NA';

    // Show EU cookie banner without any origin logic
    const showCookieBanner = continent === 'EU';

    // 2. A/B test assignment (stable per-user via persisted cookie)
    let abVariant = request.headers
      .get('Cookie')?.match(/ab_pricing=([^;]+)/)?.[1];

    if (!abVariant) {
      // First visit: assign randomly, then persist via the Set-Cookie below
      abVariant = Math.random() < 0.5 ? 'control' : 'treatment';
    }

    // 3. Auth validation (no origin round trip)
    const token = request.headers.get('Cookie')?.match(/auth=([^;]+)/)?.[1];
    const user = token ? await verifyJWT(token) : null;

    // 4. Render edge response with all personalization applied
    const html = buildHTML({ country, showCookieBanner, abVariant, user });

    const response = new Response(html, {
      headers: {
        'Content-Type': 'text/html;charset=UTF-8',
        'Cache-Control': 'private, max-age=0',
        // abVariant is always set by this point, so no conditional is needed
        'Set-Cookie': `ab_pricing=${abVariant}; Path=/; Max-Age=2592000; SameSite=Lax`,
        'Vary': 'Cookie',
      },
    });

    return response;
  },
};

// Total edge execution time: ~3-8ms
// All 3 operations (geo, A/B, auth) run with zero origin round trips
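When a stable identifier is available (session id, user id), the bucket can be derived deterministically by hashing it instead of persisting a random draw. A sketch using an FNV-1a hash -- bucketFor is an illustrative helper, not a platform API:

TypeScript -- Deterministic A/B bucketing from a stable id (sketch)

```typescript
// FNV-1a 32-bit hash: tiny, fast, and stable across requests and PoPs
export function bucketFor(
  id: string,
  buckets: string[] = ['control', 'treatment'],
): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < id.length; i++) {
    hash ^= id.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // keep as unsigned 32-bit
  }
  return buckets[hash % buckets.length];
}
```

The same id always maps to the same bucket, so assignments survive cookie loss and no Set-Cookie round trip is strictly required.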

Set up edge-to-origin fallback patterns

Edge functions that fail without a fallback turn a performance optimization into an availability risk. A bug in your edge function, an edge runtime deployment issue, or a KV store outage can return 500 errors to all users unless you have a graceful degradation strategy. The correct pattern is: try the edge path first, and fall back to the origin server on any error, so the worst case is "slightly slower" rather than "broken."

Beyond error fallback, streaming responses from the edge are one of the most powerful TTFB techniques available. With streaming, the edge function can send the HTML <head> and above-the-fold content to the browser immediately while concurrently fetching below-the-fold content from the origin or database. The browser starts rendering and downloading critical assets before the full response body is available. This is particularly effective for Largest Contentful Paint -- even if the full page takes 300ms to generate, the LCP element can be painting at 50ms because the image URL was in the first chunk of HTML. See also the broader TTFB guide for how streaming relates to Core Web Vitals.

TypeScript -- Cloudflare Workers: graceful origin fallback on edge errors
// worker.ts -- Edge-first with automatic origin fallback

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Enable automatic passthrough if the worker throws an unhandled exception
    // This means an uncaught error routes to origin instead of returning 500
    ctx.passThroughOnException();

    try {
      // Attempt to serve from edge (KV cache or generated response)
      const cached = await tryEdgeCache(request, env);
      if (cached) {
        return addTimingHeaders(cached, 'edge-cache-hit');
      }

      const generated = await generateAtEdge(request, env);
      if (generated) {
        // Store in edge cache for subsequent requests (non-blocking)
        ctx.waitUntil(storeInEdgeCache(request, generated.clone(), env));
        return addTimingHeaders(generated, 'edge-generated');
      }
    } catch (err) {
      // Log the error but don't fail -- fall through to origin
      console.error('Edge function error:', err);
      // ctx.passThroughOnException() above handles this automatically,
      // but explicit catch gives us error logging
    }

    // Fallback: proxy to origin server
    const originResponse = await fetch(request, {
      // Set a reasonable timeout -- if origin is slow, still serve something
      signal: AbortSignal.timeout(5000),
    });

    return addTimingHeaders(originResponse, 'origin-fallback');
  },
};

function addTimingHeaders(response: Response, source: string): Response {
  const headers = new Headers(response.headers);
  headers.set('X-Served-By', source);
  // Spreading a Response does not copy its status -- pass it explicitly
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}
TypeScript -- Streaming HTML response from edge for faster LCP
// Cloudflare Workers: stream HTML head immediately, fetch body concurrently
// Browser starts rendering <head> (fonts, LCP preload) before body arrives

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // The HTML head -- rendered synchronously, sent immediately
    const headHTML = `
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <link rel="preconnect" href="https://img.example.com">
  <link rel="preload" as="image" href="https://img.example.com/hero.avif"
    fetchpriority="high">
  <link rel="stylesheet" href="/styles/critical.css">
  <title>My Page</title>
</head>
<body>
  <header><!-- static header rendered at edge --></header>
  <main>`;

    // Start fetching page body from origin/DB concurrently with stream setup
    const bodyPromise = fetchPageBody(request, env);

    // Create a TransformStream to pipe chunks as they arrive
    const { readable, writable } = new TransformStream();
    const writer = writable.getWriter();
    const encoder = new TextEncoder();

    // Write head immediately (browser receives this ~5ms after request)
    async function streamContent() {
      await writer.write(encoder.encode(headHTML));

      // Await the body (origin fetch or DB query)
      try {
        const body = await bodyPromise;  // e.g. ~80ms for DB query
        await writer.write(encoder.encode(body));
      } catch {
        await writer.write(encoder.encode('<p>Error loading content.</p>'));
      }

      await writer.write(encoder.encode('</main></body></html>'));
      await writer.close();
    }

    // Start streaming without awaiting completion. In Workers, wrap this in
    // ctx.waitUntil(streamContent()) so the isolate is kept alive until the
    // stream closes.
    streamContent();

    return new Response(readable, {
      headers: {
        'Content-Type': 'text/html;charset=UTF-8',
        // Do not set Transfer-Encoding manually -- the runtime applies
        // chunked encoding to streamed bodies automatically
        'X-Content-Type-Options': 'nosniff',
      },
    });
  },
};

// Effect on Core Web Vitals:
// TTFB: ~5ms (first chunk arrives immediately)
// LCP:  browser preloads hero image during head streaming
// Even though full page takes 85ms, LCP is determined by when
// the preloaded image finishes loading -- not when </html> arrives

Quick checklist

  • Edge runtime selected (Cloudflare Workers / Vercel Edge / Netlify Edge) based on PoP coverage and data needs
  • High-traffic, low-data-complexity routes audited and migrated to edge runtime
  • Edge KV store used for session data, feature flags, and user preferences
  • HTTP-based edge database (Turso or Neon) used for SQL queries in edge functions
  • JWT validated at edge using Web Crypto API (no origin session lookup)
  • Origin fallback configured for all edge functions (passThroughOnException or try/catch with origin fetch)
  • Streaming HTML implemented on high-traffic pages to flush <head> with LCP preload before body completes

Frequently asked questions

How are edge functions different from traditional serverless functions?

Traditional serverless functions (AWS Lambda, Google Cloud Functions) run in a single region and have cold starts of 200-3000ms. Edge functions run in 200-300+ locations worldwide with near-zero cold starts (0-5ms) because they use V8 isolates instead of containers. The trade-off is a restricted runtime -- edge functions support only a subset of Node.js APIs and cannot use packages that depend on native binaries. For TTFB, edge functions are transformative: a user in Singapore visiting a US-hosted site goes from a 300ms network round trip to a 15ms response from a Singapore PoP.

Can edge functions query a database?

Yes, with the right database. Traditional databases require a persistent TCP connection pool that edge functions cannot maintain across requests. Use databases with HTTP-based query APIs designed for edge runtimes: Turso (libSQL over HTTP), Neon (serverless Postgres with HTTP driver), PlanetScale (MySQL via HTTP), Upstash Redis (Redis via HTTP), or Cloudflare D1 (SQLite at the edge, Workers only). For read-heavy workloads, cache query results in KV or the Cache API to avoid paying the database round trip on every request. Edge DB round-trip latency is typically 5-30ms when the database has a replica colocated with the edge PoP.

Can I run any Node.js code on the Vercel Edge Runtime?

No. Vercel Edge Runtime uses a reduced Web Standard API surface and does not support the full Node.js runtime. Unsupported in edge runtime: native Node.js modules (fs, path, crypto beyond Web Crypto), packages that depend on native binaries (sharp, bcrypt), Node.js streams, and synchronous APIs. Supported: fetch, Request/Response, Web Crypto, TextEncoder, URL, ReadableStream, the Cache API, and most pure-JavaScript npm packages. Run next build and check the build output for edge compatibility warnings before deploying.

Can edge functions read cookies and manage sessions?

Edge functions have full access to request cookies and can set response cookies. For session validation, verify a signed JWT in the cookie using Web Crypto (no network call, ~2-4ms). For session storage beyond what fits in a JWT, use an edge KV store: read the session key from the cookie, look up the data in KV (~2-5ms), and include it in the edge response. The full flow -- cookie read, KV lookup, response generation -- adds 5-15ms to TTFB instead of the 100-500ms required for an origin database session lookup.

What happens if an edge function fails?

Edge functions should always have an origin fallback. If the edge function throws an unhandled exception, most platforms will pass the request through to the origin server -- but only if you configure this. In Cloudflare Workers, call ctx.passThroughOnException() at the start of your fetch handler to enable automatic origin fallback on errors. In Vercel, wrap your edge logic in try/catch and manually fetch(request) the origin on error. In Netlify Edge Functions, use context.next() to pass through to the next handler. Never deploy edge functions without fallback logic -- a TTFB optimization that breaks your site on failure is not worth the risk.

Related resources