Bringing Lighthouse to the App: Building Performance Metrics for React Native

At Indeed we’ve open sourced a new React Native repository that makes it simple to measure Lighthouse-style scores in your mobile apps. We think it will help other organizations better measure their app performance, especially companies that, like Indeed, are transitioning from a web-first to an app-first approach.

You can check out the code here, and read on for more details.

The Challenge

Indeed has traditionally been a web company. Site speed wasn’t just a nice-to-have; it was fundamental to how we built systems. We believe good software is fast, and for many years we have relied on Lighthouse to keep us honest. We’ve written in depth on this topic in the past, but as we transition to a mobile-app-first company, we needed a way to bring the same performance rigor to our native code.

As React Native proliferated across our most critical pages—ViewJob, SERP, Homepage—we found ourselves flying blind. We had no standardized way to measure whether our mobile performance was improving, degrading, or holding steady. We needed answers to fundamental questions: How fast did our screens load? When could users actually interact with them? Were we maintaining the performance standards that Indeed was known for?

The Solution: Core Web Vitals for React Native

Rather than reinvent the wheel, we looked to the industry standards that had proven effective on the web: Core Web Vitals. These metrics — designed by Google to capture the essence of user experience — translated remarkably well to mobile apps. We just needed to adapt them for React Native’s unique threading model and lifecycle.

The Metrics That Matter

  1. Time to First Frame (TTFF) — When users saw content
    Our analog to Largest Contentful Paint (LCP). It measures how quickly users see meaningful content after a component starts mounting. In a native app context, this needs to be fast — there’s no network request to fetch the document, no HTML parsing, no CSS cascade. Code is pre-bundled. Users expect instant visual feedback.
    Threshold: < 300ms is good, > 800ms is poor.
  2. Time to Interactive (TTI) — When users could actually do something
    The most critical metric for mobile apps. We measure when a component transitions from “loading” to “ready for interaction.” Unlike the web, where TTI was algorithmically determined, we let components self-report when they’re truly interactive — when data is loaded, UI is rendered, and touch handlers are ready. While not ideal in every case, we’ve found algorithmic TTI (e.g., TTI Polyfill) can also be inaccurate.
    Threshold: < 500ms is good, > 1500ms is poor.
  3. First Input Delay (FID) — How responsive the app felt
    Captures the delay between a user’s first touch and when the app responds. On mobile, touch interactions should feel instantaneous. Any perceptible lag breaks the illusion of direct manipulation that makes mobile apps feel native.
    Threshold: < 50ms is good, > 150ms is poor.
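The three thresholds above can be expressed as a small classification helper. This is an illustrative sketch in plain TypeScript (the names `THRESHOLDS` and `rate` are ours, not part of the open-sourced package):

```typescript
// Classify a measured value against the good/poor thresholds listed above.
type Rating = "good" | "needs-improvement" | "poor";

const THRESHOLDS = {
  TTFF: { good: 300, poor: 800 },  // ms
  TTI:  { good: 500, poor: 1500 }, // ms
  FID:  { good: 50,  poor: 150 },  // ms
} as const;

function rate(metric: keyof typeof THRESHOLDS, valueMs: number): Rating {
  const t = THRESHOLDS[metric];
  if (valueMs < t.good) return "good";
  if (valueMs > t.poor) return "poor";
  return "needs-improvement";
}
```

For example, a 250ms TTFF rates “good,” while a 1600ms TTI rates “poor.”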

Why These Thresholds?

Our thresholds are significantly stricter than their Core Web Vitals counterparts. This was intentional. Native apps need to be faster than web apps:

  • ✅ No network requests for initial render
  • ✅ Code is pre-bundled in the app
  • ✅ No HTML/CSS/JS parsing overhead
  • ✅ Users expect native app speed

For context, Core Web Vitals consider LCP < 2.5s “good.” Our equivalent “good” TTFF threshold is < 300ms, roughly 8× stricter, and even our “poor” line (800ms) sits well inside the web’s “good” range. Mobile users have different expectations, and our thresholds reflect that reality.

Integration: Dead Simple

The entire system is packaged as a single React hook. Integration takes three steps:

import React, { useEffect } from 'react';
import { View } from 'react-native';
// usePerformanceMeasurement is exported by the open-sourced package

function MyComponent({ dataLoaded }: { dataLoaded: boolean }): JSX.Element {
  // 1. Add the hook
  const { markInteractive, panResponder } = usePerformanceMeasurement({
    provider: 'myapp',
    componentName: 'MyComponent',
  });

  // 2. Mark when interactive
  useEffect(() => {
    if (dataLoaded) {
      markInteractive();
    }
  }, [dataLoaded, markInteractive]);

  // 3. Attach pan responder to root view
  return (
    <View {...panResponder.panHandlers}>
      {/* Your component */}
    </View>
  );
}

That’s it. No configuration files, no complex setup, virtually no performance overhead in production. The hook handles everything: timing, measurement, logging, and cleanup.

Technical Implementation

Architecture Overview

The measurement system follows a component’s lifecycle from mount to interaction:

Component Mount → TTFF Measurement → TTI Marking → FID Capture → Logging

1. Measuring Time to First Frame

React Native’s InteractionManager is key. It lets us run code after the current frame finishes rendering — the perfect hook for measuring TTFF:

const mountStartTime = useRef(Date.now()).current; // captured when the component mounts

useEffect(() => {
  const handle = InteractionManager.runAfterInteractions(() => {
    const ttff = Date.now() - mountStartTime;
    // TTFF captured after the first frame renders; report it here
  });
  return () => handle.cancel();
}, []);

2. Marking Time to Interactive

Components know best when they’re truly interactive. Rather than trying to algorithmically determine this (as Lighthouse does for web), we provide a markInteractive() callback that components call when they’re ready:

const { markInteractive } = usePerformanceMeasurement({
  provider: 'viewjob',
  componentName: 'ViewJobMainContent'
});

useEffect(() => {
  if (dataLoaded && uiReady) {
    markInteractive(); // Component decides when it's interactive
  }
}, [dataLoaded, uiReady]);

3. Capturing First Input Delay

React Native’s PanResponder gives us comprehensive input capture across all touch types. We measure the delay between touch start and when the main thread can process it:

const panResponder = PanResponder.create({
  onStartShouldSetPanResponder: () => {
    const inputTime = Date.now();
    setImmediate(() => {
      const processingTime = Date.now();
      const fid = processingTime - inputTime; // Main thread delay
      // Record fid so it can be included in the single logging event
    });
    return false; // Don't capture the gesture
  },
});

The setImmediate is crucial — it ensures we measure the actual main thread processing delay, not just the touch handler execution time.
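The technique can be demonstrated in plain TypeScript, since Node also provides setImmediate: the callback runs only after the current event-loop turn completes, so the elapsed time approximates how long the thread was blocked. The helper names below are illustrative, not part of the package:

```typescript
// Queue a measurement; it resolves after the current event-loop turn,
// so the delay reflects how long the thread stayed busy.
function measureEventLoopDelay(): Promise<number> {
  const inputTime = Date.now();
  return new Promise<number>((resolve) => {
    setImmediate(() => resolve(Date.now() - inputTime));
  });
}

// Simulate a busy main thread by blocking synchronously.
function busyWait(ms: number): void {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spin
  }
}
```

Calling `measureEventLoopDelay()` and then `busyWait(50)` resolves with a delay of at least 50ms, which is exactly the kind of main-thread congestion FID surfaces.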

4. Smart Logging Strategy

  • Wait for FID: Delay logging until first user interaction
  • Timeout fallback: Log after 5 seconds even without interaction
  • Single event: All metrics logged together for easier analysis

This approach gives complete performance profiles while avoiding metric fragmentation.

Real-World Results

We first integrated this system into ViewJob, one of Indeed’s highest-traffic pages. Here’s what we learned:

Console Output (Development)

[Performance-Debug] TTI marked: 172ms
[Performance-Debug] TTFF captured: 347ms
[Performance] viewjob/ViewJobMainContent: {
  TTFF_ms: 347,
  TTI_ms: 172,
  FID_ms: 0,
  FID_type: "touch"
}

The Lighthouse Score Equivalent

To make performance actionable, we created a composite score (0–100) that mirrors Lighthouse scoring:

const PERFORMANCE_WEIGHTS = {
  TTFF: 0.25, // Visual loading
  TTI: 0.45,  // Interactivity (most critical)
  FID: 0.30   // Responsiveness
};

TTI gets the highest weight (45%) because mobile users expect immediate interactivity. Visual loading and responsiveness are important, but nothing frustrates users more than tapping a button that doesn’t respond.
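One way to combine the weights with the thresholds listed earlier is to map each metric onto a 0–100 sub-score and take the weighted sum. Lighthouse itself scores metrics through log-normal curves; the linear interpolation below is our own simplification for illustration:

```typescript
const PERFORMANCE_WEIGHTS = { TTFF: 0.25, TTI: 0.45, FID: 0.3 } as const;
const THRESHOLDS = {
  TTFF: { good: 300, poor: 800 },
  TTI:  { good: 500, poor: 1500 },
  FID:  { good: 50,  poor: 150 },
} as const;

type Metric = keyof typeof PERFORMANCE_WEIGHTS;

// 100 at or below the "good" line, 0 at or beyond "poor", linear in between.
function subScore(metric: Metric, valueMs: number): number {
  const { good, poor } = THRESHOLDS[metric];
  if (valueMs <= good) return 100;
  if (valueMs >= poor) return 0;
  return Math.round((100 * (poor - valueMs)) / (poor - good));
}

function compositeScore(m: Record<Metric, number>): number {
  return Math.round(
    (Object.keys(PERFORMANCE_WEIGHTS) as Metric[]).reduce(
      (sum, k) => sum + PERFORMANCE_WEIGHTS[k] * subScore(k, m[k]),
      0
    )
  );
}
```

Feeding in the sample ViewJob measurements from the console output above (TTFF 347ms, TTI 172ms, FID 0ms) yields a score of 98 under this simplified curve.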

ViewJob Performance:
• Average score: 81 (Good)
• P75 score: 95 (Excellent)

These scores give us a single number to track over time, making it easy to spot regressions and measure improvements.

What We Learned

1. Native Apps Should Be Faster

Our initial thresholds were too lenient — we started with web-based Core Web Vitals and quickly realized native apps should perform better. The absence of network latency and parsing overhead means users rightfully expect faster experiences.

2. Components Know Best

Letting components self-report interactivity (markInteractive()) proved more accurate than algorithmic detection. Components understand their own loading states, data dependencies, and UI readiness in ways that external observers cannot.

3. Complete Profiles Matter

Waiting to log all metrics together (rather than logging each individually) made analysis significantly easier. It’s much simpler to query for “sessions with TTI > 500ms” than to join three separate metric events.

Looking Forward

This measurement system is now our foundation for mobile performance at Indeed. We’re expanding it beyond ViewJob to SERP, Homepage, and other React Native surfaces. Each integration gives us more data, more insights, and more confidence that we’re maintaining the performance standards Indeed is known for.

But measurement is just the beginning. The real value comes from what we do with the data:

  • Automated alerts when performance degrades
  • Performance budgets enforced in CI/CD
  • A/B testing to validate that optimizations actually improve user experience
  • Correlation analysis between performance and business metrics

We’re no longer flying blind in the mobile world. We have the metrics, the thresholds, and the tooling to ensure that as Indeed becomes app-first, we remain performance-first.

Get Involved

At Indeed we’ve open sourced this repository because we think it will help other organizations better measure their app performance, especially companies that, like Indeed, are transitioning from a web-first to an app-first approach. To contribute, please see our contribution guidelines: CONTRIBUTING.md.