
Key metrics to measure and monitor web performance

Core Web Vitals

Google's essential metrics for user experience quality

LCP (Largest Contentful Paint)

Loading performance metric: measures when the largest content element becomes visible.

  • Good: ≤ 2.5s
  • Needs Improvement: 2.5-4.0s
  • Poor: > 4.0s

CLS (Cumulative Layout Shift)

Visual stability metric: measures unexpected layout shifts during page load.

  • Good: ≤ 0.1
  • Needs Improvement: 0.1-0.25
  • Poor: > 0.25

FID (First Input Delay)

Interactivity metric: the time between the first user interaction and the browser's response.

  • Good: ≤ 100ms
  • Needs Improvement: 100-300ms
  • Poor: > 300ms

Note: in March 2024 Google replaced FID with INP (Interaction to Next Paint) as the responsiveness Core Web Vital; INP's thresholds are ≤ 200ms (good) and > 500ms (poor).
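The threshold tables above can be encoded in a small classifier; a minimal sketch (the names and structure are illustrative, values taken from the tables):

```javascript
// Illustrative helper: classify a metric value against the Core Web Vitals
// thresholds above, as [good-limit, poor-limit] (LCP/FID in ms, CLS unitless)
const THRESHOLDS = {
  LCP: [2500, 4000],
  FID: [100, 300],
  CLS: [0.1, 0.25]
}

function rateMetric(name, value) {
  const [good, poor] = THRESHOLDS[name]
  if (value <= good) return 'good'
  if (value <= poor) return 'needs-improvement'
  return 'poor'
}

console.log(rateMetric('LCP', 1800)) // 'good'
console.log(rateMetric('CLS', 0.3))  // 'poor'
```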

Additional Performance Metrics

Other important metrics for comprehensive performance monitoring

⏱️ Loading Metrics

  • FCP (First Contentful Paint): target ≤ 1.8s
  • TTI (Time to Interactive): target ≤ 3.8s
  • Speed Index: target ≤ 3.4s

🔄 Runtime Metrics

  • TBT (Total Blocking Time): target ≤ 200ms
  • Long Tasks: any main-thread task > 50ms
  • Memory Usage: JS heap size
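TBT relates directly to long tasks: it sums the portion of each main-thread task that exceeds the 50ms threshold. A minimal sketch of that relationship:

```javascript
// Sketch: Total Blocking Time as the sum of each long task's time beyond 50ms
function totalBlockingTime(taskDurations) {
  return taskDurations
    .filter(d => d > 50)                   // only long tasks (> 50ms) count
    .reduce((sum, d) => sum + (d - 50), 0) // only the excess over 50ms blocks
}

console.log(totalBlockingTime([30, 120, 80])) // 70 + 30 = 100 (ms)
```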

Measurement Tools & APIs

Tools and APIs for collecting performance metrics

// Web Vitals library (v3+ renamed get* to on*; v4 replaced FID with INP)
import { onCLS, onINP, onFCP, onLCP, onTTFB } from 'web-vitals'

onCLS(console.log)
onINP(console.log)
onFCP(console.log)
onLCP(console.log)
onTTFB(console.log)

// Performance Observer API
let clsValue = 0

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.entryType === 'largest-contentful-paint') {
      console.log('LCP:', entry.startTime)
    }
    if (entry.entryType === 'layout-shift' && !entry.hadRecentInput) {
      // CLS is the running sum of unexpected shift scores, not a single entry
      clsValue += entry.value
      console.log('CLS so far:', clsValue)
    }
    if (entry.entryType === 'first-input') {
      console.log('FID:', entry.processingStart - entry.startTime)
    }
  }
})

observer.observe({
  entryTypes: ['largest-contentful-paint', 'layout-shift', 'first-input']
})

// Navigation Timing API
const navigation = performance.getEntriesByType('navigation')[0]
console.log('DOM Load:', navigation.domContentLoadedEventEnd)
console.log('Page Load:', navigation.loadEventEnd)

// Resource Timing API
const resources = performance.getEntriesByType('resource')
resources.forEach(resource => {
  console.log(`${resource.name}: ${resource.duration}ms`)
})
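Resource Timing entries can also be aggregated by type, which is useful later when checking resource budgets; a sketch (field names follow the Resource Timing API):

```javascript
// Sketch: total transferred bytes per resource type from Resource Timing entries
function bytesByType(entries) {
  const totals = {}
  for (const { initiatorType, transferSize = 0 } of entries) {
    totals[initiatorType] = (totals[initiatorType] || 0) + transferSize
  }
  return totals
}

// In the browser: bytesByType(performance.getEntriesByType('resource'))
console.log(bytesByType([
  { initiatorType: 'script', transferSize: 120000 },
  { initiatorType: 'script', transferSize: 30000 },
  { initiatorType: 'img', transferSize: 500000 }
]))
// { script: 150000, img: 500000 }
```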

Real User Monitoring (RUM)

Collecting and analyzing real user performance data

📈 RUM Implementation

// Custom RUM analytics
class PerformanceTracker {
  constructor(endpoint) {
    this.endpoint = endpoint
    this.metrics = {}
    this.initTracking()
  }

  initTracking() {
    // Track Core Web Vitals (web-vitals v3+ callback API)
    import('web-vitals').then(({ onCLS, onINP, onLCP }) => {
      onCLS(this.sendMetric.bind(this))
      onINP(this.sendMetric.bind(this))
      onLCP(this.sendMetric.bind(this))
    })

    // Track custom metrics
    this.trackCustomMetrics()
  }

  sendMetric(metric) {
    const data = {
      name: metric.name,
      value: metric.value,
      rating: metric.rating,
      url: location.href,
      timestamp: Date.now(),
      userAgent: navigator.userAgent,
      connection: navigator.connection?.effectiveType
    }

    // Send to analytics endpoint
    fetch(this.endpoint, {
      method: 'POST',
      body: JSON.stringify(data),
      keepalive: true
    })
  }

  trackCustomMetrics() {
    // Track feature usage
    this.trackFeaturePerformance()

    // Track user interactions
    this.trackInteractionMetrics()
  }

  // Placeholder: time app-specific features with performance.mark()/measure()
  trackFeaturePerformance() {}

  // Placeholder: record interaction latencies (clicks, key presses, scrolls)
  trackInteractionMetrics() {}
}
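Sending one POST per metric can get chatty; a common refinement is to batch metrics client-side and flush them in a single request. A hypothetical sketch (class and parameter names are illustrative):

```javascript
// Sketch: queue metrics and deliver them in batches to cut request overhead
class MetricBatcher {
  constructor(send, maxSize = 10) {
    this.send = send       // delivery function, e.g. (batch) => fetch(endpoint, ...)
    this.maxSize = maxSize // flush automatically once this many metrics queue up
    this.queue = []
  }

  add(metric) {
    this.queue.push(metric)
    if (this.queue.length >= this.maxSize) this.flush()
  }

  flush() {
    if (this.queue.length === 0) return
    this.send(this.queue.splice(0)) // empty the queue and hand off the batch
  }
}
```

In the browser you would also call `flush()` from a `visibilitychange` listener so queued metrics are not lost when the page is hidden.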

🛠️ Analytics Platforms

  • Google Analytics 4: Core Web Vitals reporting (free)
  • Google PageSpeed Insights: field + lab data (free)
  • Web Vitals Chrome Extension: real-time monitoring (free)
  • Lighthouse CI: automated testing (free)

Performance Budget Monitoring

Setting and monitoring performance budgets

// Lighthouse CI budget configuration
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'],
      numberOfRuns: 3
    },
    assert: {
      assertions: {
        'categories:performance': ['warn', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.9 }],

        // Core Web Vitals budgets (FID has no lab audit; max-potential-fid is the proxy)
        'largest-contentful-paint': ['warn', { maxNumericValue: 2500 }],
        'max-potential-fid': ['warn', { maxNumericValue: 100 }],
        'cumulative-layout-shift': ['warn', { maxNumericValue: 0.1 }],

        // Resource budgets
        'resource-summary:script:size': ['error', { maxNumericValue: 350000 }],
        'resource-summary:stylesheet:size': ['warn', { maxNumericValue: 50000 }],
        'resource-summary:image:size': ['warn', { maxNumericValue: 500000 }],

        // Network budgets
        'network-requests': ['warn', { maxNumericValue: 50 }],
        'total-byte-weight': ['warn', { maxNumericValue: 1000000 }]
      }
    }
  }
}

// Custom budget monitoring
const PERFORMANCE_BUDGETS = {
  LCP: 2500,
  FID: 100,
  CLS: 0.1,
  FCP: 1800,
  TTI: 3800
}

function checkBudgets(metrics) {
  Object.entries(metrics).forEach(([name, value]) => {
    const budget = PERFORMANCE_BUDGETS[name]
    if (budget && value > budget) {
      console.warn(`Performance budget exceeded: ${name} = ${value} (budget: ${budget})`)
      // Send alert to monitoring system
    }
  })
}
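A variant of checkBudgets that returns the violations instead of only warning makes it easy for a CI step to act on the result; a sketch using the same budget values:

```javascript
// Sketch: collect budget violations so a CI step can fail the build on them
const BUDGETS = { LCP: 2500, FID: 100, CLS: 0.1, FCP: 1800, TTI: 3800 }

function findViolations(metrics, budgets = BUDGETS) {
  return Object.entries(metrics)
    .filter(([name, value]) => budgets[name] !== undefined && value > budgets[name])
    .map(([name, value]) => ({ name, value, budget: budgets[name] }))
}

console.log(findViolations({ LCP: 3100, CLS: 0.05 }))
// [ { name: 'LCP', value: 3100, budget: 2500 } ]
// A CI step could exit non-zero whenever the returned list is non-empty
```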

Synthetic Performance Testing

Automated performance testing in controlled environments

🔬 Lighthouse Automation

// Automated Lighthouse testing
const lighthouse = require('lighthouse')
const chromeLauncher = require('chrome-launcher')

async function runLighthouse(url) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })

  const options = {
    logLevel: 'info',
    output: 'json',
    onlyCategories: ['performance'],
    port: chrome.port
  }

  try {
    const result = await lighthouse(url, options)
    const metrics = result.lhr.audits

    return {
      performance: result.lhr.categories.performance.score * 100,
      fcp: metrics['first-contentful-paint'].numericValue,
      lcp: metrics['largest-contentful-paint'].numericValue,
      cls: metrics['cumulative-layout-shift'].numericValue,
      // FID cannot be measured in the lab; max-potential-fid is the proxy
      maxPotentialFid: metrics['max-potential-fid'].numericValue
    }
  } finally {
    await chrome.kill() // always release the Chrome instance
  }
}

// CI/CD Integration
runLighthouse(process.env.DEPLOY_URL)
  .then(metrics => {
    console.log('Performance Score:', metrics.performance)
    if (metrics.performance < 90) {
      process.exit(1) // Fail build
    }
  })
  .catch(err => {
    console.error('Lighthouse run failed:', err)
    process.exit(1)
  })

🌐 WebPageTest API

// WebPageTest automation
const WebPageTest = require('webpagetest')

const wpt = new WebPageTest('www.webpagetest.org', 'API_KEY')

const options = {
  location: 'Dulles:Chrome',
  connectivity: '3G',
  runs: 3,
  firstViewOnly: false,
  video: true
}

wpt.runTest('https://example.com', options, (err, data) => {
  if (err || data.statusCode !== 200) {
    return console.error('Test submission failed:', err || data.statusText)
  }

  const testId = data.data.testId

  // Fetch results once the test completes (the library's pollResults option
  // can be set in the test options to wait for completion automatically)
  wpt.getTestResults(testId, (err, results) => {
    if (err) return console.error(err)

    const metrics = results.data.average.firstView

    console.log({
      loadTime: metrics.loadTime,
      firstByte: metrics.TTFB,
      startRender: metrics.render,
      speedIndex: metrics.SpeedIndex
    })
  })
})

Monitoring Best Practices

Best practices for effective performance monitoring

✅ DO

  • Monitor real user metrics (RUM)
  • Set up automated performance testing
  • Track metrics over time
  • Set up alerts for regressions
  • Monitor on various devices/networks
  • Focus on user-centric metrics
  • Segment metrics by user groups
  • Correlate performance with business metrics

❌ AVOID

  • Relying only on synthetic testing
  • Ignoring mobile performance
  • Testing only on fast connections
  • Focusing on vanity metrics
  • Not monitoring continuously
  • Ignoring performance regressions
  • Testing only production builds
  • Not considering user context
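One practice worth making concrete: Core Web Vitals field data is conventionally evaluated at the 75th percentile of page loads, so RUM aggregation should do the same rather than averaging. A simple nearest-rank sketch:

```javascript
// Sketch: nearest-rank percentile, e.g. the p75 used for Core Web Vitals field data
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b)
  const index = Math.ceil((p / 100) * sorted.length) - 1
  return sorted[Math.max(0, index)]
}

const lcpSamples = [1200, 1800, 2100, 2600, 3900]
console.log('p75 LCP:', percentile(lcpSamples, 75)) // 2600 ms
```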