
Fix LCP in Sanity

Largest Contentful Paint (LCP) on a Sanity-powered site usually fails for one of three reasons: the front-end fetches a GROQ query on every request, the hero image is served as a raw upload instead of through Sanity's image pipeline, or the framework's image component is bypassed and the placeholder pattern blocks the LCP candidate. Sanity itself is fast -- the CDN, the hotspot crop tool, and the auto-format negotiation are all production-grade. The fixes below align your front-end with what Sanity already does well, and remove the runtime GROQ fetch from the critical path. Most teams move from a 3.5 to 5 second LCP into Google's Good band (under 2.5 seconds) by working through these five steps in order, and the changes are framework-agnostic enough to apply to Next.js, Astro, SvelteKit, Nuxt, or Remix front-ends.

Expected results

Before: 4.6s LCP (Poor) -- raw image, GROQ on request, no preload, blur placeholder on hero

After: 1.8s LCP (Good) -- CDN image, ISR cache, preload + priority, framework image component

Step-by-step fix

Use the Sanity image CDN with explicit width, format, and quality

Sanity stores every uploaded image as a single source asset and exposes a CDN at cdn.sanity.io that can resize, crop, recompress, and reformat the image on demand. The @sanity/image-url helper builds those URLs for you. The mistake most teams make is rendering the raw asset URL directly, which serves the full-resolution original and ignores AVIF and WebP support entirely. For an LCP image that ends up around 1200 pixels wide, you almost never want more than 1500 pixels of width, quality 75 to 80, and auto=format so the CDN can pick AVIF for browsers that accept it. The CDN's response sizes typically drop from 800KB on the original JPEG to under 90KB on AVIF, which is the single biggest LCP improvement on most Sanity sites.

TypeScript -- lib/sanity-image.ts
import imageUrlBuilder from '@sanity/image-url';
import type { SanityImageSource } from '@sanity/image-url/lib/types/types';
import { sanityClient } from './sanity-client';

// The builder accepts an asset reference, an expanded asset, or a full image object
export type SanityImage = SanityImageSource;

const builder = imageUrlBuilder(sanityClient);

// Bad: raw asset URL, full resolution, no auto-format
// const heroSrc = post.hero.asset.url;

// Good: explicit width, auto-format, quality 78
export function heroImage(image: SanityImage) {
  return builder
    .image(image)
    .width(1500)
    .height(844)        // explicit aspect ratio prevents CLS too
    .fit('crop')
    .auto('format')     // AVIF or WebP based on Accept header
    .quality(78)
    .url();
}

// For responsive srcset, generate multiple widths.
// Pass a proportional height at each width so every srcset entry
// keeps the same crop as the 1500x844 fallback:
export function heroSrcset(image: SanityImage) {
  return [400, 800, 1200, 1500]
    .map(w => `${builder.image(image).width(w).height(Math.round(w * 844 / 1500)).fit('crop').auto('format').quality(78).url()} ${w}w`)
    .join(', ');
}
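For reference, the builder above compiles down to plain query parameters on the cdn.sanity.io URL. The sketch below reproduces that query string by hand -- buildCdnParams is a hypothetical helper, not part of @sanity/image-url -- so you can eyeball what the CDN actually receives in logs or tests:

```typescript
// Hypothetical helper mirroring the query string that the heroImage()
// call above produces. Not a replacement for @sanity/image-url; just a
// way to verify the parameters without pulling in the builder.
export function buildCdnParams(opts: {
  width: number;
  height?: number;
  quality?: number;
}): string {
  const params = new URLSearchParams();
  params.set('w', String(opts.width));
  if (opts.height !== undefined) {
    params.set('h', String(opts.height));
    params.set('fit', 'crop'); // crop to the exact aspect when both dims are set
  }
  params.set('auto', 'format'); // let the CDN negotiate AVIF/WebP via Accept
  params.set('q', String(opts.quality ?? 78));
  return params.toString();
}

// A heroImage() URL ends up shaped like:
//   https://cdn.sanity.io/images/<projectId>/<dataset>/<assetId>-<origW>x<origH>.jpg?w=1500&h=844&fit=crop&auto=format&q=78
```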

Project only the fields you need with GROQ

A common Sanity anti-pattern is fetching the full document with *[_type == "post" && slug.current == $slug][0] and never narrowing the projection. Sanity returns every field, every reference, and every Portable Text block in body[]. Even when the front-end only renders the title and the hero image, you pay for a 30 to 50 KB JSON response. Tightening the projection cuts the GROQ payload to under 5 KB on most pages, which lets the CDN respond faster and the framework hydrate sooner. Always pull the asset's metadata.dimensions so you can set width and height attributes without a second round-trip.

GROQ -- queries/post-by-slug.ts
// Bad: pulls every field and every reference
export const POST_FAT = `*[_type == "post" && slug.current == $slug][0]`;

// Good: tight projection, only what the page renders
export const POST_LEAN = `*[_type == "post" && slug.current == $slug][0]{
  _id,
  title,
  "slug": slug.current,
  publishedAt,
  excerpt,
  hero{
    alt,
    asset->{
      _id,
      url,
      "lqip": metadata.lqip,
      "dimensions": metadata.dimensions
    }
  },
  "author": author->{ name, "slug": slug.current },
  body[]{ ..., _type == "image" => { ..., asset->{ url, metadata } } }
}`;
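The lean projection implies a concrete result shape. A hand-written LeanPost type -- a sketch written to match the projection above, not something Sanity emits (Sanity TypeGen can generate the equivalent from your schema) -- keeps downstream components honest about what the query actually returns:

```typescript
// Hand-maintained types mirroring POST_LEAN's projection. If the
// projection changes, these must change with it.
export interface LeanPost {
  _id: string;
  title: string;
  slug: string;
  publishedAt: string;
  excerpt?: string;
  hero: {
    alt?: string;
    asset: {
      _id: string;
      url: string;
      lqip?: string; // base64 placeholder; unused for the LCP hero (see step 5)
      dimensions: { width: number; height: number; aspectRatio: number };
    };
  };
  author: { name: string; slug: string };
  body: unknown[]; // Portable Text blocks
}

// Narrow helper: the only fields the hero render path needs.
export function heroDims(post: LeanPost): { width: number; height: number } {
  const { width, height } = post.hero.asset.dimensions;
  return { width, height };
}
```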

Preload the LCP image at the page boundary

Even with a fast CDN and a tight GROQ payload, the browser cannot start fetching the hero image until it has parsed the <img> tag. With most JavaScript frameworks, that tag does not appear in the initial HTML until well into the document, and on streaming responses it can arrive even later. A rel="preload" link tag in the <head> tells the browser to begin the fetch on byte zero of the response. Compute the URL on the server using the same image builder from step 1 so you preload exactly what you render. On Next.js App Router, this lives in your page server component. On Astro, generate it in the frontmatter. On SvelteKit, generate it in +page.server.ts and inject from the layout.

TSX -- app/blog/[slug]/page.tsx (Next.js App Router)
import { heroImage, heroSrcset } from '@/lib/sanity-image';
import { sanityClient } from '@/lib/sanity-client';
import { POST_LEAN } from '@/queries/post-by-slug';

export default async function PostPage({ params }: { params: { slug: string } }) {
  const post = await sanityClient.fetch(POST_LEAN, { slug: params.slug }, {
    next: { tags: [`post:${params.slug}`] }
  });

  const heroSrc = heroImage(post.hero);
  const heroSet = heroSrcset(post.hero);

  return (
    <>
      {/* Preload the hero before any layout work happens */}
      <link
        rel="preload"
        as="image"
        href={heroSrc}
        imageSrcSet={heroSet}
        imageSizes="(max-width: 768px) 100vw, 1200px"
        fetchPriority="high"
      />
      <article>
        <h1>{post.title}</h1>
        {/* hero image rendered with framework component below */}
      </article>
    </>
  );
}
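Outside React, the same preload can be emitted as a plain HTML string from the server (Astro frontmatter, SvelteKit +page.server.ts feeding the layout). The sketch below uses a hypothetical preloadLinkHtml helper to show the exact attribute names the browser expects in raw HTML -- lowercase imagesrcset, imagesizes, and fetchpriority, unlike their camelCased JSX counterparts:

```typescript
// Hypothetical framework-agnostic helper: render the preload tag as an
// HTML string for server-side head injection. Note the lowercase HTML
// attribute names (imagesrcset, not imageSrcSet).
const escapeAttr = (v: string) =>
  v.replace(/&/g, '&amp;').replace(/"/g, '&quot;');

export function preloadLinkHtml(href: string, srcset: string, sizes: string): string {
  return (
    `<link rel="preload" as="image" fetchpriority="high" ` +
    `href="${escapeAttr(href)}" ` +
    `imagesrcset="${escapeAttr(srcset)}" ` +
    `imagesizes="${escapeAttr(sizes)}">`
  );
}
```

CDN image URLs contain `&` between query parameters, so attribute escaping is not optional here.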

Cache the GROQ response on the edge with ISR and cache tags

A GROQ query that runs on every request is by far the largest contributor to a slow LCP on Sanity sites. The fix is to render statically and revalidate on demand. Next.js exposes revalidateTag, which pairs with the next.tags option on each fetch and lets your Sanity webhook bust the cache for one document at a time. Astro's static output and SvelteKit's adapter-static achieve the same thing on rebuild, and a Sanity webhook can call your build hook on publish. Whichever framework you choose, the goal is the same -- the GROQ round-trip moves out of the user's request path. Use Sanity's stega encoding for visual editing, but only enable the live-preview path on a separate /preview/ route so production never pays the latency cost.

TypeScript -- app/api/revalidate/route.ts (Next.js)
import { revalidateTag } from 'next/cache';
import { NextRequest } from 'next/server';

export async function POST(req: NextRequest) {
  const body = await req.json();
  const secret = req.headers.get('x-sanity-webhook-secret');
  if (secret !== process.env.SANITY_WEBHOOK_SECRET) {
    return new Response('Unauthorized', { status: 401 });
  }
  // Sanity sends { _id, _type, slug } per the webhook projection
  if (body._type === 'post' && body.slug?.current) {
    revalidateTag(`post:${body.slug.current}`);
  }
  return Response.json({ revalidated: true });
}

// Configure the Sanity webhook (Settings > API > Webhooks):
// URL:        https://your-site.com/api/revalidate
// Trigger:    On Create, On Update, On Delete
// Filter:     _type == "post"
// Projection: { _id, _type, slug }
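The tag string in the route handler must match the tag passed to fetch in page.tsx exactly, or revalidation silently does nothing. A small shared module avoids the drift; postCacheTag and tagsToRevalidate below are hypothetical helpers sketching that pattern under the `post:<slug>` tag format used above:

```typescript
// Hypothetical shared helper: both the page's fetch options and the
// webhook route derive their tag from one function, so the tag format
// can never drift between the two call sites.
export function postCacheTag(slug: string): string {
  return `post:${slug}`;
}

// Shape of the webhook body under the projection { _id, _type, slug }
// configured above.
type WebhookBody = { _id: string; _type: string; slug?: { current?: string } };

export function tagsToRevalidate(body: WebhookBody): string[] {
  if (body._type === 'post' && body.slug?.current) {
    return [postCacheTag(body.slug.current)];
  }
  return []; // other document types: nothing to bust
}

// page.tsx:  { next: { tags: [postCacheTag(params.slug)] } }
// route.ts:  tagsToRevalidate(body).forEach(revalidateTag)
```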

Use the framework image component, not raw img

The final step is the most overlooked. Once the image URL is right and the preload is in place, render the hero through your framework's image component rather than a raw <img> tag. next/image sets fetchpriority=high when you pass priority, skips lazy loading, and injects the responsive sizes attribute. astro:assets integrates Sanity URLs through a custom image service and respects build-time width hints. @sveltejs/enhanced-img rewrites images at build time and emits AVIF and WebP fallbacks. Whichever component you use, disable Sanity's blur placeholder for the LCP element: the LQIP is a tiny base64 string, too small and too low-entropy to count as the LCP candidate, so the LCP clock keeps running until the full image paints over it. With preload + priority you no longer need the placeholder anyway.

TSX -- components/HeroImage.tsx (Next.js)
import Image from 'next/image';
import { heroImage } from '@/lib/sanity-image';

type Props = { hero: SanityImage; alt: string };

export function HeroImage({ hero, alt }: Props) {
  // next.config must allow cdn.sanity.io under images.remotePatterns
  const src = heroImage(hero);
  // width/height come from the GROQ projection's metadata.dimensions alias
  const { width, height } = hero.asset.dimensions;

  return (
    <Image
      src={src}
      alt={alt}
      width={width}
      height={height}
      sizes="(max-width: 768px) 100vw, 1200px"
      priority           // sets fetchpriority=high, skips lazy
      placeholder="empty" // disable LQIP for LCP element
      quality={78}
    />
  );
}

Quick checklist

  • Hero rendered via @sanity/image-url with explicit width, auto('format'), and quality(75-80)
  • GROQ projection narrowed to render-only fields, with asset->metadata.dimensions included
  • rel="preload" link with imagesrcset and fetchpriority="high" emitted in head on server
  • Sanity webhook posts to /api/revalidate; production never runs GROQ on the request path
  • Hero rendered through next/image, astro:assets, or enhanced-img with priority and no LQIP placeholder
  • Field LCP measured in PageSpeed Insights at the 75th percentile after 28 days of CrUX data

Frequently asked questions

What LCP score should a Sanity site aim for?

Aim for an LCP under 2.0 seconds at the 75th percentile from Chrome User Experience Report data. Sanity's CDN serves images quickly, so most of the LCP budget is spent on framework hydration and font loading. Pages that wait on a GROQ fetch on every request commonly score 3.5 to 5 seconds. Pages that pre-render with ISR or static output and use Sanity's image pipeline correctly land under 2.0 seconds without trouble.

Can Sanity's CDN serve AVIF and WebP automatically?

Yes. Sanity's image-url builder supports the auto=format parameter, which negotiates the best format the requesting browser supports. Modern Chromium and Safari builds receive AVIF, older browsers receive WebP, and very old browsers fall back to JPEG. Pair auto=format with explicit quality (q=75 to q=80) to keep payloads small without visible artifacts.

Should I use full static rendering or ISR?

For most marketing and editorial pages, full static rendering with on-demand revalidation gives the best LCP. Use ISR (incremental static regeneration) when the underlying query takes longer than the build budget can absorb or when you need per-segment freshness. Both patterns remove the GROQ round-trip from the user request, which is the single biggest source of LCP regressions on Sanity sites.

How do I preload the hero image in my framework?

Compute the LCP image URL during page render, not on the client. In Next.js App Router, build the imagesrcset string in your page server component and pass it to a custom preload component. In Astro, use the imageService API at build time. In SvelteKit, use +page.server.ts and emit the link from the layout. The point is the preload tag must be in the HTML the browser parses, not added by client JavaScript after hydration.

Should I keep Sanity's blur placeholder on the hero?

Sanity's LQIP (low-quality image placeholder) renders a small base64 string immediately, then swaps in the full image once it loads. For the LCP element, that swap is the LCP event, and the placeholder is not an eligible LCP candidate because it is too small. Disable the placeholder for the hero image and rely on preload + priority instead. For below-fold imagery, the placeholder is fine and helps with perceived performance.

How do I find out which element is my LCP?

Open Chrome DevTools, switch to the Performance tab, throttle the network to Slow 4G, and record a load. The Largest Contentful Paint marker in the timings track will tell you the element and its URL. Compare the request waterfall before and after each change in this guide. Field data from PageSpeed Insights running CrUX will confirm the improvement after about 28 days of real-user traffic.
