
How to Set Up Performance Monitoring for Your Website

Performance monitoring is the practice of continuously collecting, analyzing, and alerting on your website's speed metrics. Without monitoring, performance improvements are guesswork and regressions go undetected until users complain (or leave). This guide walks through setting up a complete monitoring stack from scratch.

A robust monitoring setup has three layers: Real User Monitoring (RUM) for field data from actual visitors, synthetic testing for consistent lab measurements, and alerting to catch regressions before they impact business metrics.

Step-by-step guide

1. Choose between RUM and synthetic monitoring

Both approaches have strengths. RUM captures the real experience across the full range of devices, networks, and locations your users actually have. Synthetic monitoring provides consistent, reproducible measurements from controlled environments -- perfect for detecting regressions in CI/CD pipelines.

Most production sites need both. Use RUM as the source of truth (it aligns with Google's Chrome User Experience Report, or CrUX), and synthetic as a fast feedback loop during development.

2. Add the web-vitals library to your site

The web-vitals library is the standard way to collect Core Web Vitals (CWV) field data. It uses the same measurement methodology as CrUX, ensuring your data matches what Google sees. Install it as an npm package (npm install web-vitals) or load it from a CDN.
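
If you go the CDN route, a dynamic import is enough to get started; a minimal sketch (the unpkg URL and version pin are illustrative -- check the library's documentation for the current recommendation):

JavaScript -- Loading web-vitals from a CDN (sketch)
import('https://unpkg.com/web-vitals@4?module').then(({ onLCP, onCLS, onINP }) => {
  // Log to the console while experimenting; swap in your reporter later
  onLCP(console.log);
  onCLS(console.log);
  onINP(console.log);
});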

JavaScript -- Basic RUM setup
import { onLCP, onCLS, onINP, onFCP, onTTFB } from 'web-vitals';

const ENDPOINT = '/api/performance-metrics';

function sendMetric(metric) {
  const data = {
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    delta: metric.delta,
    id: metric.id,
    page: window.location.pathname,
    userAgent: navigator.userAgent,
    connectionType: navigator.connection?.effectiveType || 'unknown',
    timestamp: Date.now(),
  };

  const body = JSON.stringify(data);

  // sendBeacon survives page unload; fall back to fetch with keepalive
  // if it is unavailable or refuses the payload
  if (!(navigator.sendBeacon && navigator.sendBeacon(ENDPOINT, body))) {
    fetch(ENDPOINT, { method: 'POST', body, keepalive: true });
  }
}

onLCP(sendMetric);
onCLS(sendMetric);
onINP(sendMetric);
onFCP(sendMetric);
onTTFB(sendMetric);
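
If request volume becomes a concern, a common variant is to queue reports and flush them in a single request when the page is hidden; a minimal sketch (the endpoint in the next step would then need to accept an array instead of a single metric):

JavaScript -- Batched reporting variant (sketch)
const queue = new Set();

// Register this in place of sendMetric in the onLCP/onCLS/... calls above
function queueMetric(metric) {
  queue.add({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    id: metric.id,
    page: window.location.pathname,
  });
}

// Flush when the page is hidden, which also covers the page being unloaded
function flushQueue() {
  if (queue.size === 0) return;
  navigator.sendBeacon(ENDPOINT, JSON.stringify([...queue]));
  queue.clear();
}

addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') flushQueue();
});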

3. Build a performance data endpoint

Your metrics endpoint receives data from the web-vitals library, validates it, and stores it. The endpoint should handle high volumes (every page view sends multiple metrics) and be fast to avoid impacting the user experience. Use a serverless function or a lightweight API route.

TypeScript -- Next.js API route example
// app/api/performance-metrics/route.ts
import { NextRequest, NextResponse } from 'next/server';
// `db` below stands in for your own storage client (for example a Prisma
// instance exported from your app); swap in whatever you use

export async function POST(request: NextRequest) {
  try {
    const metric = await request.json();

    // Validate required fields
    if (!metric.name || metric.value === undefined) {
      return NextResponse.json({ error: 'Invalid metric' }, { status: 400 });
    }

    // Store in your database / analytics service
    await db.performanceMetrics.create({
      data: {
        metricName: metric.name,
        value: metric.value,
        rating: metric.rating,
        page: metric.page,
        connectionType: metric.connectionType,
        timestamp: new Date(metric.timestamp),
      },
    });

    return NextResponse.json({ ok: true });
  } catch (error) {
    return NextResponse.json({ error: 'Server error' }, { status: 500 });
  }
}
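
Before wiring up the client, you can smoke-test the route with a quick script or from the browser console; the values below are illustrative:

JavaScript -- Quick manual test of the metrics endpoint (sketch)
fetch('http://localhost:3000/api/performance-metrics', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: 'LCP',
    value: 2140,
    rating: 'good',
    page: '/',
    connectionType: '4g',
    timestamp: Date.now(),
  }),
}).then((res) => console.log(res.status)); // expect 200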

4. Set up a performance dashboard

Visualize your metric data on a dashboard showing p75 values over time, broken down by page, device type, and connection speed. The p75 (75th percentile) is the value CrUX evaluates against the thresholds -- a metric passes when its p75 falls in the Good range, meaning at least 75% of page loads had a Good experience.

Popular dashboard tools include Grafana (open source), Datadog, and custom dashboards built with Chart.js or D3. At minimum, show LCP, CLS, and INP p75 trend lines with the Good/Poor threshold lines overlaid.
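
If you build a custom dashboard, p75 is simple to compute from the raw samples; a minimal sketch, assuming you have already fetched the values for one metric, page, and time window:

JavaScript -- Computing p75 from raw metric values (sketch)
function percentile(values, p) {
  if (values.length === 0) return null;
  const sorted = [...values].sort((a, b) => a - b);
  // Nearest-rank method: smallest value with at least p% of samples at or below it
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[index];
}

// Example: LCP samples in milliseconds for one page over one day
const lcpSamples = [1800, 2100, 1400, 3200, 2600, 1900, 2200, 4100];
console.log(percentile(lcpSamples, 75)); // 2600 -> above 2500, so "Needs Improvement"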

Tip: Add deployment markers to your time-series charts. When you see a performance change, being able to correlate it with a specific deployment makes debugging vastly faster.
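
One lightweight way to get those markers is to have CI record each release next to your metrics; a sketch, assuming a hypothetical /api/deployments endpoint in the same service:

JavaScript -- Recording a deployment marker (sketch; endpoint is hypothetical)
const DEPLOY_ENDPOINT = 'https://example.com/api/deployments';

// Call from a post-deploy CI step, e.g. recordDeployment(process.env.GITHUB_SHA)
async function recordDeployment(sha) {
  await fetch(DEPLOY_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sha, deployedAt: new Date().toISOString() }),
  });
}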

5. Configure performance budgets and alerts

Define your performance budgets based on CWV thresholds: LCP under 2.5s, CLS under 0.1, INP under 200ms. Then add tighter budgets for your high-priority pages. Configure alerts via Slack, email, or PagerDuty when the p75 of any metric crosses from Good to Needs Improvement.

JSON -- Performance budget example
{
  "budgets": [
    {
      "pages": ["/*"],
      "metrics": {
        "LCP": { "warn": 2000, "critical": 2500 },
        "CLS": { "warn": 0.08, "critical": 0.1 },
        "INP": { "warn": 150, "critical": 200 },
        "TBT": { "warn": 200, "critical": 300 }
      }
    },
    {
      "pages": ["/checkout/*"],
      "metrics": {
        "LCP": { "warn": 1500, "critical": 2000 },
        "INP": { "warn": 100, "critical": 150 }
      }
    }
  ],
  "alerts": {
    "slack": "#perf-alerts",
    "email": "team@example.com"
  }
}
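
How these budgets turn into alerts depends on your stack; one option is a scheduled job that compares current p75 values against each budget and posts to a webhook. A minimal sketch, where queryP75 and the Slack webhook URL are placeholders for your own infrastructure:

JavaScript -- Scheduled budget check (sketch)
const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL; // placeholder

async function checkBudget(budget, queryP75) {
  for (const [metricName, thresholds] of Object.entries(budget.metrics)) {
    // queryP75 is assumed to return the current p75 for a metric on these pages
    const p75 = await queryP75(metricName, budget.pages);
    if (p75 == null) continue;

    const level =
      p75 > thresholds.critical ? 'critical' :
      p75 > thresholds.warn ? 'warn' : null;
    if (!level) continue;

    await fetch(SLACK_WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `[${level.toUpperCase()}] ${metricName} p75 is ${p75} on ${budget.pages.join(', ')} (threshold: ${thresholds[level]})`,
      }),
    });
  }
}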

6. Add Lighthouse CI to your deployment pipeline

Lighthouse CI runs Lighthouse audits as part of your CI/CD pipeline, blocking deployments that degrade performance. Install it globally or as a dev dependency, configure performance assertions, and add it to your GitHub Actions or GitLab CI workflow.
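
If you run Lighthouse CI through its CLI (lhci autorun) rather than a prebuilt action, the assertions live in a lighthouserc.js at the project root; a minimal sketch with illustrative URLs and thresholds:

JavaScript -- lighthouserc.js (sketch)
module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run start',
      url: ['http://localhost:3000/', 'http://localhost:3000/blog/'],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // 'warn' reports without failing the build; 'error' fails it
        'categories:performance': ['warn', { minScore: 0.9 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};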

YAML -- GitHub Actions workflow
name: Lighthouse CI
on: [pull_request]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - run: npm ci
      - run: npm run build

      # Serve the built app in the background so Lighthouse can reach
      # localhost:3000, then wait for it to come up (adjust for your framework)
      - run: npm run start &
      - run: npx wait-on http://localhost:3000

      - name: Run Lighthouse CI
        uses: treosh/lighthouse-ci-action@v11
        with:
          urls: |
            http://localhost:3000/
            http://localhost:3000/blog/
          budgetPath: ./lighthouse-budget.json
          uploadArtifacts: true
          temporaryPublicStorage: true

Tip: Start with warn assertions during the first week so you do not block deployments. Once your baseline is established, switch critical pages to error assertions that block merges.

Frequently asked questions

What is the difference between RUM and synthetic monitoring?

Real User Monitoring (RUM) collects performance data from actual site visitors in real time. Synthetic monitoring runs automated Lighthouse or WebPageTest audits on a schedule from fixed locations. RUM shows you what users actually experience (including diverse devices and networks), while synthetic gives you consistent, reproducible measurements for regression detection.

How much does performance monitoring cost?

Free options include the web-vitals library with your own analytics endpoint, Lighthouse CI with temporary public storage, and Google Search Console CWV reports. Paid services like SpeedCurve, Calibre, or DebugBear offer dashboards, historical data, and advanced features starting around $20-50 per month for small sites.

How do I know if a performance regression is real?

Look at the p75 (75th percentile) of your metric over at least 7 days. Single data points can be noisy due to network conditions and device variability. A sustained change in the p75 that lasts more than 3 days is likely a real regression. Correlate timing with deployments using deployment markers on your dashboard.

Should I monitor all pages or just key pages?

Start with your highest-traffic pages and key conversion paths (homepage, product pages, checkout). The web-vitals library automatically collects data from every page load, but focus your alerting and dashboards on pages that matter most to business outcomes. Expand coverage as your monitoring infrastructure matures.

How quickly should I respond to performance alerts?

Treat performance regressions like availability incidents. A sudden LCP jump from 1.5s to 4s can reduce conversion rates by 20-30%. Set critical alerts for metrics that cross the Poor threshold and warning alerts for Needs Improvement. Critical alerts should trigger the same response as a downtime alert.

Marcus Chen

Performance Engineer at WebVitals.tools

Marcus specializes in web performance measurement and monitoring. He has optimized Core Web Vitals for over 200 production sites across e-commerce, SaaS, and publishing.