Fix TTFB on Cloudflare Pages: Optimize Time to First Byte
Cloudflare Pages runs on the world's largest edge network with over 300 locations, making it one of the fastest hosting platforms for static and dynamic content. Despite this infrastructure advantage, poorly configured projects can still suffer from slow TTFB due to unnecessary function invocations, missing cache rules, cold starts on Workers, and suboptimal KV store usage.
Cloudflare Pages integrates tightly with Cloudflare Workers for server-side rendering and API routes. Static assets are served from Cloudflare's edge with near-zero TTFB. Dynamic content processed by Workers Functions also runs at the edge, but requires proper caching and architecture to deliver consistently fast responses.
This guide covers five Cloudflare Pages optimizations: leveraging the static asset pipeline, configuring Workers Functions efficiently, using the Cache API and KV for edge caching, enabling Smart Placement for database-heavy functions, and setting Cloudflare-specific headers. Applied together, these techniques can deliver consistent sub-50ms TTFB globally.
Expected results
Following all steps in this guide typically produces these improvements:
Before: 900ms TTFB (Poor) -- Uncached Workers Functions fetching from distant origin databases on every request
After: 35ms TTFB (Good) -- Edge-cached responses with KV-backed data and Smart Placement for origin queries
Step-by-step fix
Maximize static asset serving
Cloudflare Pages automatically serves static files from its 300+ edge locations with aggressive caching. The fastest TTFB comes from pre-built HTML that never hits a Workers Function. Use static site generators or framework adapters that produce static HTML wherever possible, and reserve Workers Functions for truly dynamic routes.
// astro.config.mjs
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  output: 'hybrid', // Static by default, opt-in SSR
  adapter: cloudflare({
    mode: 'directory', // Recommended for Pages
    runtime: {
      mode: 'local',
      type: 'pages',
    },
  }),
});
// src/pages/blog/[slug].astro
// STATIC: pre-built at deploy time (~10ms TTFB)
export async function getStaticPaths() {
  const posts = await getAllPosts();
  return posts.map(p => ({ params: { slug: p.slug }, props: { post: p } }));
}

// src/pages/search.astro
// DYNAMIC: runs on Workers at edge (~30ms TTFB)
export const prerender = false;
// next.config.js
// Use @cloudflare/next-on-pages for optimal Cloudflare integration
module.exports = {
  // Static pages are pre-rendered and served from edge
  // Dynamic pages run on Workers
  // ISR-like behavior via fetch caching
  experimental: {
    // No need for unstable_cache -- use Cache API directly
  },
  // Headers for static content
  async headers() {
    return [{
      source: '/(.*)',
      headers: [{
        key: 'CDN-Cache-Control',
        value: 'max-age=3600', // Cloudflare-specific CDN cache
      }],
    }];
  },
};

// Deploy: npx @cloudflare/next-on-pages
// Automatically splits static and dynamic routes
Optimize Workers Functions with Cache API
Workers Functions on Cloudflare Pages run at the edge on every request to dynamic routes. Without caching, each request processes the full function logic. The Cloudflare Cache API lets you cache function responses at the edge, serving subsequent requests in under 5ms.
// functions/api/products/[id].ts
// Cache API: store function responses at the edge
export async function onRequestGet(context) {
  const { params, request } = context;
  const cacheKey = new Request(request.url, request);

  // Check edge cache first
  const cache = caches.default;
  const cached = await cache.match(cacheKey);
  if (cached) return cached; // ~3ms TTFB

  // Cache miss: fetch from origin
  const product = await fetch(
    `${context.env.API_URL}/products/${params.id}`
  ).then(r => r.json());

  const response = new Response(JSON.stringify(product), {
    headers: {
      'Content-Type': 'application/json',
      'Cache-Control': 'public, s-maxage=300, stale-while-revalidate=600',
    },
  });

  // Store in edge cache (non-blocking)
  context.waitUntil(cache.put(cacheKey, response.clone()));
  return response; // ~50ms TTFB (origin fetch)
  // Next request: ~3ms (edge cache hit)
}
// functions/products/[slug].ts
// Cache rendered HTML pages at the edge
export async function onRequestGet(context) {
  const { params, request, env } = context;

  // Create cache key based on the path (ignore query params for HTML)
  const cacheUrl = new URL(request.url);
  cacheUrl.search = ''; // Normalize cache key
  const cacheKey = new Request(cacheUrl.toString());

  const cache = caches.default;
  const cached = await cache.match(cacheKey);
  if (cached) return cached; // ~3ms

  // Render HTML
  const product = await env.DB.prepare(
    'SELECT * FROM products WHERE slug = ?'
  ).bind(params.slug).first();
  const html = renderProductPage(product);

  const response = new Response(html, {
    headers: {
      'Content-Type': 'text/html',
      'Cache-Control': 'public, s-maxage=3600, stale-while-revalidate=86400',
    },
  });

  context.waitUntil(cache.put(cacheKey, response.clone()));
  return response;
}
Use KV and D1 for edge-native data access
Cloudflare KV provides globally distributed key-value storage with sub-10ms reads at the edge. D1 is Cloudflare's serverless SQLite database that runs close to your Workers. Both eliminate the need for long-distance origin database calls that are the primary cause of slow TTFB in dynamic applications.
// functions/api/config.ts
// KV reads are ~5ms at the edge
export async function onRequestGet(context) {
  const { env } = context;

  // Read from KV (globally distributed, ~5ms)
  const config = await env.SITE_CONFIG.get('homepage', { type: 'json' });
  if (!config) {
    return new Response('Not found', { status: 404 });
  }

  return Response.json(config, {
    headers: {
      'Cache-Control': 'public, max-age=60',
      // KV itself acts as a cache, but CDN caching adds another layer
    },
  });
}
// Populate KV from a build script or webhook:
// npx wrangler kv key put --binding=SITE_CONFIG "homepage" '{"featured": [...]}'
// (older Wrangler versions use the `kv:key put` form)
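Repeated reads of the same KV key within one isolate can also be deduplicated in memory. The following is a best-effort sketch (the helper `memoizeAsync` is ours, not a Cloudflare API); isolates are recycled frequently, so treat this as a micro-cache on top of KV, not a replacement for it:

```typescript
// Memoize async lookups (e.g. KV reads) per Workers isolate. The first
// call starts the read; concurrent and later calls in the same isolate
// reuse the in-flight promise instead of hitting KV again.
function memoizeAsync<T>(
  fn: (key: string) => Promise<T>
): (key: string) => Promise<T> {
  const cache = new Map<string, Promise<T>>();
  return (key: string) => {
    let hit = cache.get(key);
    if (!hit) {
      hit = fn(key);
      cache.set(key, hit);
    }
    return hit;
  };
}

// Hypothetical usage with the SITE_CONFIG binding above:
// const getConfig = memoizeAsync((k) => env.SITE_CONFIG.get(k, { type: 'json' }));
```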
// functions/api/search.ts
// D1: SQLite at the edge, ~10-20ms queries
export async function onRequestGet(context) {
  const { env, request } = context;
  const url = new URL(request.url);
  const q = url.searchParams.get('q') || '';

  // D1 query runs close to the Worker (no cross-region latency)
  const { results } = await env.DB.prepare(
    'SELECT id, title, excerpt FROM articles WHERE title LIKE ? LIMIT 20'
  ).bind(`%${q}%`).all();

  return Response.json({ results }, {
    headers: {
      'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300',
    },
  });
}
// D1 query: ~10ms (same-location as Worker)
// vs. external DB: ~100-300ms (cross-region network)
// Combined with Cache API: ~3ms for repeated queries
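One caveat on the search example: it binds `%${q}%` directly, so `%` and `_` in user input act as SQL wildcards. The parameter binding already prevents injection, but a query like `100%` matches unintended rows. A small escaper (the name `escapeLike` is ours) fixes this when paired with an `ESCAPE` clause:

```typescript
// Escape SQL LIKE wildcards so '%' and '_' in user input match
// literally. Use with an ESCAPE clause in the query, e.g.:
//   ... WHERE title LIKE ? ESCAPE '\'
// and bind `%${escapeLike(q)}%` instead of `%${q}%`.
function escapeLike(input: string): string {
  return input.replace(/[\\%_]/g, (ch) => '\\' + ch);
}
```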
Enable Smart Placement for origin-dependent functions
When your Workers Functions must query a traditional database (PostgreSQL, MySQL) hosted in a specific region, edge execution can actually hurt TTFB because the Worker runs far from the database. Cloudflare Smart Placement automatically detects which functions benefit from running near the origin rather than at the edge, and routes them accordingly.
# wrangler.toml
name = "my-site"
compatibility_date = "2026-04-01"
[placement]
mode = "smart"

# Cloudflare automatically determines optimal placement:
# - Functions with KV/D1 access: run at the edge
# - Functions with external DB calls: run near the database
# Placement is inferred per Worker from its observed subrequest
# patterns; wrangler.toml has no documented per-route placement rules.
// functions/api/dashboard.ts
// This function calls an origin API backed by a regional database.
// Smart Placement runs it near that origin, not at the edge.
export async function onRequestGet(context) {
  const { env, request } = context;

  // Origin call (for direct PostgreSQL/MySQL access from a Worker,
  // use a Hyperdrive binding for connection pooling and caching)
  const result = await fetch(env.API_URL + '/dashboard-data', {
    headers: { 'Authorization': `Bearer ${env.API_TOKEN}` },
  });
  const data = await result.json();

  return Response.json(data, {
    headers: {
      'Cache-Control': 'private, max-age=30',
    },
  });
}
// Without Smart Placement (Worker at edge, DB in us-east):
// Worker to DB: ~200ms round trip
// Total TTFB: ~250ms
// With Smart Placement (Worker near DB):
// Worker to DB: ~5ms round trip
// Total TTFB: ~40ms
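The before/after arithmetic above can be made explicit with a toy model. This is purely illustrative (our own function, not a Cloudflare API), but it shows why the database round trip dominates TTFB:

```typescript
// Rough TTFB model for a dynamic route: client-to-Worker latency plus
// one Worker-to-database round trip per query. All numbers in ms.
function estimateTtfbMs(
  clientToWorkerMs: number,
  workerToDbMs: number,
  dbRoundTrips: number
): number {
  return clientToWorkerMs + workerToDbMs * dbRoundTrips;
}

// Edge Worker, DB in us-east:  estimateTtfbMs(50, 200, 1) -> 250
// Smart-placed Worker near DB: estimateTtfbMs(35, 5, 1)   -> 40
```

Note that functions issuing several sequential queries multiply the round-trip cost, which is why Smart Placement helps most on chatty database workloads.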
Configure Cloudflare-specific performance headers
Cloudflare supports several unique headers and features that optimize TTFB beyond standard HTTP caching. These include CDN-Cache-Control for edge-specific caching, Early Hints for preloading critical resources, and Tiered Caching for reducing origin load.
// functions/_middleware.ts
// Apply performance headers to all responses
export async function onRequest(context) {
  const startTime = Date.now();
  const response = await context.next();

  // Clone response to modify headers
  const newResponse = new Response(response.body, response);

  // CDN-Cache-Control: separate from browser Cache-Control
  newResponse.headers.set('CDN-Cache-Control', 'max-age=3600');

  // Early Hints: preload critical resources before HTML arrives
  // (Cloudflare sends 103 Early Hints automatically for Link headers)
  newResponse.headers.set('Link', [
    '</styles/main.css>; rel=preload; as=style',
    '</scripts/app.js>; rel=preload; as=script',
    '<https://api.fontshare.com>; rel=preconnect',
  ].join(', '));

  // Server-Timing: expose function processing time
  newResponse.headers.set('Server-Timing', `edge;dur=${Date.now() - startTime}`);

  return newResponse;
}
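On the client, the Server-Timing header set by the middleware can be read back when debugging TTFB (browsers also surface it via `PerformanceResourceTiming.serverTiming`). A minimal parser sketch (the function name is ours):

```typescript
// Parse a Server-Timing header like "edge;dur=12, cache;desc=HIT"
// into a map of metric name -> duration in ms. Entries without a
// dur attribute are skipped.
function parseServerTiming(header: string): Map<string, number> {
  const metrics = new Map<string, number>();
  for (const entry of header.split(',')) {
    const parts = entry.trim().split(';');
    const name = parts[0];
    for (const attr of parts.slice(1)) {
      const match = attr.trim().match(/^dur=([\d.]+)$/);
      if (match) metrics.set(name, Number(match[1]));
    }
  }
  return metrics;
}
```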
# Cloudflare Dashboard -> Caching -> Cache Rules
Rule 1: Cache HTML pages aggressively
When: URI Path contains "/blog/" OR URI Path contains "/docs/"
Then:
- Cache eligibility: Eligible for cache
- Edge TTL: 1 day
- Browser TTL: Respect origin
- Cache Key: Include host, path (ignore query string)
Rule 2: Bypass cache for authenticated routes
When: Cookie contains "session_id"
Then:
- Cache eligibility: Bypass cache
Rule 3: Cache API responses
When: URI Path starts with "/api/"
Then:
- Edge TTL: 5 minutes
- Respect stale-while-revalidate
# Enable Tiered Caching (Settings -> Caching):
# Reduces origin requests by using upper-tier edge nodes as cache
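The cache rules above mirror what the Cache-Control values in this guide express in code. A tiny builder (ours, not a library function) keeps the `s-maxage` and `stale-while-revalidate` directives consistent across functions:

```typescript
// Build a Cache-Control value for shared (CDN) caches with background
// refresh: s-maxage sets the edge TTL, stale-while-revalidate lets the
// edge serve the stale copy while refetching asynchronously.
function edgeCacheControl(sMaxageSec: number, swrSec: number): string {
  return `public, s-maxage=${sMaxageSec}, stale-while-revalidate=${swrSec}`;
}

// e.g. edgeCacheControl(300, 600) for the products API above
```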
Quick checklist
- Static pages pre-built at deploy time with automatic edge serving
- Workers Functions use Cache API to store responses at the edge
- KV used for globally distributed data with sub-10ms reads
- D1 configured for edge-native database queries
- Smart Placement enabled for origin-dependent functions
- CDN-Cache-Control headers set separately from browser caching
- Early Hints configured for critical resource preloading
Frequently asked questions
Is Cloudflare Pages faster than Vercel or Netlify?
Cloudflare Pages typically delivers the fastest TTFB for static content because Cloudflare operates the largest edge network (300+ locations vs. Vercel's 30+ and Netlify's 100+). For dynamic content, all three are comparable when properly configured, but Cloudflare's Workers run at the edge by default (no cold starts) while Vercel and Netlify serverless functions run in single regions.
When should I use the Cache API versus KV?
The Cache API caches HTTP responses at individual edge locations and is best for caching rendered HTML or API responses with standard HTTP TTLs. KV is a globally distributed key-value store that replicates data everywhere and is best for structured data you read frequently (configuration, user sessions, feature flags). Use the Cache API for page caching and KV for application data.
Do Cloudflare Workers have cold starts?
Workers use V8 isolates that spin up in under 5ms, effectively eliminating cold starts. This is fundamentally different from AWS Lambda (used by Vercel and Netlify) where cold starts take 200-2000ms. This is Cloudflare's biggest TTFB advantage for dynamic content.
Should I use D1 or an external database?
Use D1 for read-heavy workloads where data can be SQLite-compatible and eventual consistency is acceptable. D1 is much faster for Workers because it runs in the same network. Use an external database (PostgreSQL, MySQL) for complex queries, multi-table joins, and workloads requiring strong consistency. When using external databases, enable Hyperdrive for connection pooling.
What does Tiered Caching do?
Without Tiered Caching, a cache miss at an edge location goes directly to your origin. With Tiered Caching, it first checks a nearby upper-tier data center that aggregates cache from multiple edge locations. This reduces origin requests by 60-90% and means most cache misses are served from a closer upper-tier node instead of the distant origin.
Related resources
Complete TTFB Guide
Deep dive into Time to First Byte -- thresholds, measurement, and optimization.
Fix TTFB on Vercel
Vercel-specific TTFB optimizations with ISR and Edge Runtime.
Fix TTFB on Netlify
Netlify-specific TTFB optimizations with Edge Functions and caching.
Edge Functions for TTFB
How edge computing achieves sub-100ms TTFB globally.