React Scalability & Performance Guide for Interviews
Table of Contents
- Performance Optimization Fundamentals
- Handling 1M+ Users
- Code Splitting & Lazy Loading
- State Management at Scale
- Rendering Optimization
- Network & Data Optimization
- Bundle Size Optimization
- Caching Strategies
- Monitoring & Metrics
- Interview Question Examples
Performance Optimization Fundamentals
Key Concepts to Explain
What is Performance in React? Performance in React means ensuring your application remains responsive, loads quickly, and provides smooth user interactions even under heavy load or with large datasets.
Core Metrics:
- FCP (First Contentful Paint): When first content appears (< 1.8s is good)
- LCP (Largest Contentful Paint): When main content loads (< 2.5s is good)
- TTI (Time to Interactive): When page becomes fully interactive (< 3.8s is good)
- CLS (Cumulative Layout Shift): Visual stability (< 0.1 is good)
- FID (First Input Delay): Responsiveness to the first user interaction (< 100ms is good; note that INP, Interaction to Next Paint, replaced FID as a Core Web Vital in 2024, with < 200ms considered good)
Interview Answer Framework
When asked about performance, structure your answer:
- Identify the bottleneck (rendering, network, bundle size)
- Measure first (use React DevTools Profiler, Chrome DevTools)
- Apply appropriate technique (memoization, virtualization, code splitting)
- Verify improvement (measure again)
Handling 1M+ Users
Infrastructure Considerations
Client-Side Scalability:
- Efficient rendering: Only render what's visible
- Memory management: Prevent memory leaks, clean up listeners
- Bundle optimization: Keep initial bundle < 200KB gzipped
- CDN usage: Serve static assets from edge locations
Server-Side Considerations:
- SSR/SSG: Pre-render pages to reduce client-side work
- API pagination: Never load all 1M records at once
- Rate limiting: Protect backend from overload
- Caching layers: Redis, CDN caching for frequently accessed data
Architectural Patterns for Scale

// Bad: Loading everything at once
const UsersList = () => {
  const [users, setUsers] = useState([]);

  useEffect(() => {
    fetch('/api/users') // Returns 1M users!
      .then(res => res.json())
      .then(setUsers);
  }, []);

  return users.map(user => <UserCard key={user.id} {...user} />);
};
// Good: Pagination + Virtualization
import { useState, useRef } from 'react';
import { useQuery } from '@tanstack/react-query';
import { useVirtualizer } from '@tanstack/react-virtual';

const UsersList = () => {
  const [page, setPage] = useState(1);
  const { data, isLoading } = useQuery({
    queryKey: ['users', page],
    queryFn: () =>
      fetch(`/api/users?page=${page}&limit=50`).then(res => res.json()),
  });

  const parentRef = useRef(null);
  const virtualizer = useVirtualizer({
    count: data?.users.length ?? 0, // virtualize the rows loaded so far
    getScrollElement: () => parentRef.current,
    estimateSize: () => 100,
  });

  if (isLoading) return <div>Loading...</div>;

  return (
    <div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
      <div style={{ height: `${virtualizer.getTotalSize()}px`, position: 'relative' }}>
        {virtualizer.getVirtualItems().map(virtualRow => (
          <div
            key={virtualRow.index}
            style={{
              position: 'absolute',
              top: 0,
              left: 0,
              width: '100%',
              transform: `translateY(${virtualRow.start}px)`,
            }}
          >
            <UserCard user={data.users[virtualRow.index]} />
          </div>
        ))}
      </div>
    </div>
  );
};
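Libraries like @tanstack/react-virtual do the windowing math for you, but interviewers often ask what that math actually is. Here is a minimal sketch in plain JavaScript for the fixed-row-height case (the function name and overscan default are illustrative, not a library API):

```javascript
// Core windowing math behind list virtualization: given the scroll offset,
// compute which fixed-height rows are visible, plus an overscan buffer
// so fast scrolling does not show blank rows.
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 3) {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1;
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount - 1, last + overscan),
  };
}

// With a 600px viewport and 50px rows, only ~12 of 1,000 rows are
// visible at any time; everything else stays out of the DOM.
const range = visibleRange(525, 600, 50, 1000, 3);
```

Only `range.end - range.start + 1` rows get rendered, each absolutely positioned at `index * rowHeight`, which is exactly the `translateY(${virtualRow.start}px)` trick above.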
Code Splitting & Lazy Loading
Why It Matters
For 1M users, your initial bundle must be tiny. Code splitting means users only download code they need.
Techniques to Explain
Route-Based Code Splitting:
import { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// These components are split into separate chunks
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Analytics = lazy(() => import('./pages/Analytics'));
const Settings = lazy(() => import('./pages/Settings'));

function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<LoadingSpinner />}>
        <Routes>
          <Route path="/dashboard" element={<Dashboard />} />
          <Route path="/analytics" element={<Analytics />} />
          <Route path="/settings" element={<Settings />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}
Component-Based Code Splitting:
// Heavy chart library loaded only when needed
const HeavyChart = lazy(() => import('./components/HeavyChart'));

function DashboardPage() {
  const [showChart, setShowChart] = useState(false);

  return (
    <div>
      <button onClick={() => setShowChart(true)}>View Chart</button>
      {showChart && (
        <Suspense fallback={<div>Loading chart...</div>}>
          <HeavyChart data={data} />
        </Suspense>
      )}
    </div>
  );
}
Library Code Splitting:
// Load heavy libraries only when needed
const loadPDF = async () => {
  const { jsPDF } = await import('jspdf');
  return new jsPDF();
};

const exportToPDF = async () => {
  const pdf = await loadPDF();
  // Use PDF library
};
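One detail worth mentioning in an interview: `import()` already caches the loaded module, but a small memoizing wrapper is still handy when the loader performs one-time setup work beyond the import itself. A sketch, where `loadChartLib` and its stand-in initializer are hypothetical:

```javascript
// Memoized async loader: the initializer runs at most once, and every
// caller shares the same promise (no duplicate work on rapid clicks).
function once(loader) {
  let promise;
  return () => (promise ??= loader());
}

let initCount = 0;
const loadChartLib = once(() => {
  initCount++; // one-time setup, e.g. registering chart plugins
  return Promise.resolve({ render: () => 'chart' }); // stand-in for import('chart-lib')
});

loadChartLib();
loadChartLib(); // reuses the first call's promise
```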
State Management at Scale
Choosing the Right Solution
For 1M+ Users:
- Local state: Use useState for component-specific data
- Server state: Use React Query/TanStack Query (automatic caching, background refetching)
- Global state: Use Zustand or Jotai (minimal re-renders)
- Form state: Use React Hook Form (uncontrolled components)
Anti-Patterns to Avoid
// Bad: Everything in global state
const useGlobalStore = create((set) => ({
  users: [],
  products: [],
  orders: [],
  currentUser: null,
  theme: 'light',
  // 50 more fields...
}));

// Good: Separate concerns
const useAuth = create((set) => ({
  currentUser: null,
  login: (user) => set({ currentUser: user }),
}));

const useTheme = create((set) => ({
  theme: 'light',
  setTheme: (theme) => set({ theme }),
}));

// Server data managed separately with React Query
const { data: users } = useQuery({
  queryKey: ['users'],
  queryFn: fetchUsers,
  staleTime: 5 * 60 * 1000, // 5 minutes
});
State Normalization
// Bad: Nested data causes deep re-renders
const state = {
  posts: [
    {
      id: 1,
      title: 'Post 1',
      author: { id: 1, name: 'John' },
      comments: [
        { id: 1, text: 'Comment 1', author: { id: 2, name: 'Jane' } }
      ]
    }
  ]
};

// Good: Normalized structure
const state = {
  posts: {
    byId: {
      1: { id: 1, title: 'Post 1', authorId: 1, commentIds: [1] }
    },
    allIds: [1]
  },
  users: {
    byId: {
      1: { id: 1, name: 'John' },
      2: { id: 2, name: 'Jane' }
    }
  },
  comments: {
    byId: {
      1: { id: 1, text: 'Comment 1', authorId: 2 }
    }
  }
};
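The conversion from the nested shape to the normalized one is a few lines of plain JavaScript (in a real app a library such as normalizr does this generically). This sketch assumes exactly the post/author/comment shape shown above:

```javascript
// Flatten nested posts into byId/allIds lookup tables so each entity
// lives in exactly one place and updates touch only one record.
function normalizePosts(posts) {
  const state = {
    posts: { byId: {}, allIds: [] },
    users: { byId: {} },
    comments: { byId: {} },
  };
  for (const post of posts) {
    state.users.byId[post.author.id] = post.author;
    const commentIds = [];
    for (const comment of post.comments) {
      state.users.byId[comment.author.id] = comment.author;
      state.comments.byId[comment.id] = {
        id: comment.id,
        text: comment.text,
        authorId: comment.author.id, // store a reference, not a copy
      };
      commentIds.push(comment.id);
    }
    state.posts.byId[post.id] = {
      id: post.id,
      title: post.title,
      authorId: post.author.id,
      commentIds,
    };
    state.posts.allIds.push(post.id);
  }
  return state;
}

const normalized = normalizePosts([
  {
    id: 1,
    title: 'Post 1',
    author: { id: 1, name: 'John' },
    comments: [{ id: 1, text: 'Comment 1', author: { id: 2, name: 'Jane' } }],
  },
]);
```

Renaming 'Jane' now means updating `users.byId[2]` once, instead of walking every post and comment that embeds her.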
Rendering Optimization
React.memo and useMemo
When to Use:
- Component receives same props frequently
- Component has expensive calculations
- Rendering thousands of items in a list
import { memo, useState, useCallback, useMemo } from 'react';

// Expensive list item that shouldn't re-render unless its own data changes
const UserCard = memo(({ user, onSelect }) => {
  console.log('Rendering UserCard', user.id);
  return (
    <div onClick={() => onSelect(user.id)}>
      <h3>{user.name}</h3>
      <p>{user.email}</p>
    </div>
  );
});

// Parent component
function UserList({ users }) {
  const [selectedId, setSelectedId] = useState(null);
  const [searchTerm, setSearchTerm] = useState('');

  // Memoize callback to prevent UserCard re-renders
  const handleSelect = useCallback((id) => {
    setSelectedId(id);
  }, []);

  // Memoize expensive filtering
  const filteredUsers = useMemo(() => {
    return users.filter(user =>
      user.name.toLowerCase().includes(searchTerm.toLowerCase())
    );
  }, [users, searchTerm]);

  return (
    <div>
      <input value={searchTerm} onChange={e => setSearchTerm(e.target.value)} />
      {filteredUsers.map(user => (
        <UserCard key={user.id} user={user} onSelect={handleSelect} />
      ))}
    </div>
  );
}
Virtualization for Large Lists

import { FixedSizeList } from 'react-window';

// Render 100,000 items efficiently
function VirtualizedList({ items }) {
  const Row = ({ index, style }) => (
    <div style={style}>
      {items[index].name}
    </div>
  );

  return (
    <FixedSizeList
      height={600}
      itemCount={items.length}
      itemSize={50}
      width="100%"
    >
      {Row}
    </FixedSizeList>
  );
}
Debouncing and Throttling

import { useDebouncedValue } from './hooks';

function SearchComponent() {
  const [search, setSearch] = useState('');
  const debouncedSearch = useDebouncedValue(search, 300);

  // Only fires API call when user stops typing for 300ms
  const { data } = useQuery({
    queryKey: ['search', debouncedSearch],
    queryFn: () => searchAPI(debouncedSearch),
    enabled: debouncedSearch.length > 2,
  });

  return (
    <input
      value={search}
      onChange={e => setSearch(e.target.value)}
      placeholder="Search..."
    />
  );
}
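Throttling, the companion technique, fires at most once per interval instead of waiting for quiet. A leading-edge sketch in plain JavaScript; the clock is injectable (`now`) purely so the sketch is deterministic to test, and the names are illustrative:

```javascript
// Leading-edge throttle: the first call in each interval runs
// immediately; calls arriving inside the interval are dropped.
// Good for scroll/resize handlers where you want steady updates.
function throttle(fn, interval, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    if (now() - last >= interval) {
      last = now();
      return fn(...args);
    }
    // Dropped call: returns undefined.
  };
}

let t = 0; // fake clock for demonstration
const track = throttle(x => x * 2, 100, () => t);
```

Debounce (wait for quiet, then fire once) suits search inputs; throttle (fire at a steady cap) suits scroll and resize handlers.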
Network & Data Optimization
API Request Optimization
Techniques for 1M+ Users:
- Request Deduplication: Prevent duplicate simultaneous requests
- Prefetching: Load data before user needs it
- Background Refetching: Keep data fresh without blocking UI
- Infinite Queries: Load data as user scrolls
- Optimistic Updates: Update UI immediately, rollback on error
// React Query handles all these automatically
const { data, fetchNextPage, hasNextPage } = useInfiniteQuery({
  queryKey: ['posts'],
  queryFn: ({ pageParam = 1 }) => fetchPosts(pageParam),
  getNextPageParam: (lastPage) => lastPage.nextPage,
  staleTime: 5 * 60 * 1000, // Cache for 5 minutes
  refetchOnWindowFocus: true, // Refresh when tab regains focus
});

// Prefetch next page
const prefetchNextPage = () => {
  queryClient.prefetchQuery({
    queryKey: ['posts', nextPage],
    queryFn: () => fetchPosts(nextPage),
  });
};

// Optimistic update
const mutation = useMutation({
  mutationFn: updatePost,
  onMutate: async (newPost) => {
    await queryClient.cancelQueries(['posts']);
    const previous = queryClient.getQueryData(['posts']);
    // Optimistically update
    queryClient.setQueryData(['posts'], old => [...old, newPost]);
    return { previous };
  },
  onError: (err, newPost, context) => {
    // Rollback on error
    queryClient.setQueryData(['posts'], context.previous);
  },
});
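For intuition, the request deduplication React Query performs can be sketched in a few lines: concurrent calls for the same key share one in-flight promise. `dedupedFetch` here is an illustrative helper, not a React Query API:

```javascript
// Request deduplication: the first call for a key starts the request;
// callers arriving while it is in flight get the same promise back.
const inflight = new Map();

function dedupedFetch(key, fetcher) {
  if (inflight.has(key)) return inflight.get(key);
  const promise = fetcher(key).finally(() => inflight.delete(key)); // clear once settled
  inflight.set(key, promise);
  return promise;
}

let networkCalls = 0;
const fakeFetch = () => {
  networkCalls++;
  return Promise.resolve({ ok: true });
};

// Two components request the same data at the same time:
const a = dedupedFetch('/api/users?page=1', fakeFetch);
const b = dedupedFetch('/api/users?page=1', fakeFetch);
```

With thousands of components mounting at once, this is the difference between one request per resource and one request per component.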
GraphQL for Efficient Data Fetching

// Fetch only what you need
const USER_QUERY = gql`
  query GetUser($id: ID!) {
    user(id: $id) {
      id
      name
      email
      # Only fetch required fields
    }
  }
`;

// Batch multiple queries
const DASHBOARD_QUERY = gql`
  query GetDashboard {
    user { id name }
    posts { id title }
    notifications { id message }
  }
`;
Bundle Size Optimization
Analysis Tools
# Analyze bundle size
npm run build
npx vite-bundle-visualizer
# or
npx webpack-bundle-analyzer
Optimization Techniques
1. Tree Shaking:
// Bad: Imports entire library
import _ from 'lodash';
// Good: Import only what you need
import debounce from 'lodash/debounce';
import throttle from 'lodash/throttle';
2. Dynamic Imports:
// Load heavy dependency only when needed
const exportToExcel = async (data) => {
  const XLSX = await import('xlsx');
  const wb = XLSX.utils.book_new();
  // ... rest of export logic
};
3. Replace Heavy Libraries:
// Instead of moment.js (288KB)
import moment from 'moment';
// Use date-fns (tree-shakeable, ~13KB for common functions)
import { format, addDays } from 'date-fns';
4. Image Optimization:
// Use next-gen formats
<picture>
  <source srcSet="image.webp" type="image/webp" />
  <source srcSet="image.jpg" type="image/jpeg" />
  <img src="image.jpg" alt="Description" loading="lazy" />
</picture>

// Or use a service
<img src="https://cdn.example.com/image.jpg?w=400&q=80" />
Caching Strategies
Multi-Layer Caching
1. HTTP Caching (Browser/CDN):
// Set proper cache headers on server
res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
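Which header fits which asset depends on whether the filename carries a content hash: hashed bundles can be cached "forever" because any change produces a new URL, while the HTML entry point must always revalidate. A sketch of one common policy; the regex and TTL values are illustrative choices, not a standard:

```javascript
// Pick a Cache-Control header based on the asset type.
// Hashed filenames (e.g. app.3f2a8c1d.js) are immutable by construction.
function cacheControlFor(path) {
  const hashed = /\.[0-9a-f]{8,}\.(js|css|woff2?|png|jpg|svg)$/i.test(path);
  if (hashed) return 'public, max-age=31536000, immutable'; // 1 year, never revalidate
  if (path.endsWith('.html')) return 'no-cache'; // always revalidate the entry point
  return 'public, max-age=300'; // short TTL for everything else
}
```

The key insight to state in an interview: `immutable` is only safe because the bundler renames the file on every content change.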
2. Service Worker Caching:
// Cache static assets
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('v1').then((cache) => {
      return cache.addAll([
        '/index.html',
        '/styles.css',
        '/app.js',
      ]);
    })
  );
});
3. React Query Caching:
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 5 * 60 * 1000, // 5 minutes
      cacheTime: 10 * 60 * 1000, // 10 minutes (renamed gcTime in TanStack Query v5)
      refetchOnWindowFocus: false,
      retry: 3,
    },
  },
});
4. localStorage/IndexedDB:
// Cache user preferences
const savePreferences = (prefs) => {
  localStorage.setItem('userPrefs', JSON.stringify(prefs));
};

// For larger datasets, use IndexedDB (here via the `idb` wrapper library)
import { openDB } from 'idb';

const db = await openDB('myDatabase', 1, {
  upgrade(db) {
    db.createObjectStore('users', { keyPath: 'id' });
  },
});
await db.put('users', userData);
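One gap in the localStorage approach above: entries never expire. A small TTL wrapper fixes that. This sketch takes the storage object and clock as parameters so it runs anywhere; a Map stands in for localStorage, and the names are illustrative:

```javascript
// A localStorage-style cache with a time-to-live per entry.
// Stale entries are evicted lazily on read.
function makeTtlCache(storage, now = Date.now) {
  return {
    set(key, value, ttlMs) {
      storage.set(key, JSON.stringify({ value, expires: now() + ttlMs }));
    },
    get(key) {
      const raw = storage.get(key);
      if (!raw) return undefined;
      const { value, expires } = JSON.parse(raw);
      if (now() > expires) {
        storage.delete(key); // expired: evict and miss
        return undefined;
      }
      return value;
    },
  };
}

let t = 0; // fake clock for demonstration
const cache = makeTtlCache(new Map(), () => t);
cache.set('userPrefs', { theme: 'dark' }, 1000);
```

With real localStorage you would pass an adapter exposing `set`/`get`/`delete` over `setItem`/`getItem`/`removeItem`.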
Monitoring & Metrics
What to Monitor at Scale
Real User Monitoring (RUM):
// Web Vitals reporting (web-vitals v2 API; v3+ renames these to onCLS, onLCP, etc.)
import { getCLS, getFID, getFCP, getLCP, getTTFB } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify(metric);
  // Use sendBeacon for reliability: it survives page unload
  navigator.sendBeacon('/analytics', body);
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getFCP(sendToAnalytics);
getLCP(sendToAnalytics);
getTTFB(sendToAnalytics);
Error Tracking:
// Use Sentry, LogRocket, or similar
import * as Sentry from "@sentry/react";

Sentry.init({
  dsn: "your-dsn",
  integrations: [new Sentry.BrowserTracing()],
  tracesSampleRate: 0.1, // Sample 10% of transactions at scale
});
Performance Profiling:
// React DevTools Profiler API
import { Profiler } from 'react';

function onRenderCallback(
  id, // Component identifier
  phase, // "mount" or "update"
  actualDuration, // Time spent rendering
) {
  if (actualDuration > 16) { // More than one frame at 60fps
    console.warn(`Slow render: ${id} took ${actualDuration}ms`);
  }
}

<Profiler id="Navigation" onRender={onRenderCallback}>
  <Navigation />
</Profiler>
Interview Question Examples
Question 1: "How would you handle rendering 100,000 table rows?"
Answer Structure:
- Never render all at once: "I'd use virtualization with react-window or @tanstack/react-virtual"
- Explain virtualization: "Only renders visible rows plus a buffer, dramatically reducing DOM nodes"
- Code example: Show FixedSizeList or useVirtualizer
- Mention alternatives: "For simple cases, pagination might be better UX"
- Consider backend: "Ideally, implement server-side pagination too"
Question 2: "Your React app is slow. How do you debug it?"
Answer Framework:
- Measure first: "Use React DevTools Profiler to identify slow components"
- Check for unnecessary re-renders: "Look for components rendering without prop changes"
- Use Performance tab: "Check for long tasks, layout thrashing"
- Network waterfall: "Check for sequential requests that could be parallel"
- Bundle analysis: "Check if bundle is too large"
- Fix systematically: memo, useMemo, useCallback, code splitting
Question 3: "How do you optimize a React app for 1 million users?"
Comprehensive Answer:
Infrastructure:
- CDN for static assets
- Server-side rendering or static generation for initial load
- Horizontal scaling of API servers
- Database optimization (indexing, read replicas)
Frontend:
- Aggressive code splitting (< 200KB initial bundle)
- Virtualization for large lists
- Optimistic updates for better perceived performance
- Service worker for offline capability and asset caching
- Lazy loading of images and non-critical components
Data Management:
- React Query for automatic caching and deduplication
- Normalized state structure
- Pagination or infinite scroll instead of loading everything
- Debounced search and autocomplete
Monitoring:
- Real user monitoring (Web Vitals)
- Error tracking
- Performance budgets in CI/CD
- A/B testing for performance improvements
Question 4: "Explain the difference between useMemo and useCallback"
Answer:
- useMemo: Memoizes a computed value. Returns the result of a function.
- useCallback: Memoizes a function itself. Returns the function.
// useMemo: Prevents recalculating expensive values
const expensiveValue = useMemo(() => {
  return data.reduce((sum, item) => sum + item.value, 0);
}, [data]);

// useCallback: Prevents creating new function instances
const handleClick = useCallback(() => {
  console.log('Clicked', id);
}, [id]);
When to use:
- useMemo: Expensive calculations, filtered/sorted arrays
- useCallback: Passing callbacks to memoized child components
Question 5: "What's your approach to state management in a large app?"
Answer: "I follow a multi-tier approach:
- Server state (React Query/TanStack Query): API data, background sync, caching
- URL state (React Router): Current page, filters, search params
- Local state (useState): Component-specific UI state
- Global state (Zustand/Jotai): Authentication, theme, truly global UI state
I avoid putting everything in global state. Most 'global' data is actually server data that should be cached, not managed globally. This keeps each concern separate and optimizes re-renders."
Key Takeaways for Interviews
- Always measure before optimizing: "Premature optimization is the root of all evil"
- Know the trade-offs: Every optimization has costs (complexity, development time)
- Think about scale: How would this work with 1M users? 1B records?
- Consider user experience: Fast perceived performance > actual performance
- Use the right tool: Not every app needs virtualization or complex state management
- Monitor in production: What you measure in dev ≠ real user experience
- Optimize the critical path: Initial load matters most
- Be specific in answers: Give concrete examples, show code, mention libraries
- Discuss trade-offs: "React.memo adds memory overhead but reduces render time"
- Stay current: Mention React Server Components and Suspense for data fetching where relevant