🟣 Power User Workflows — Guide 9 of 11
Claude Code · Advanced · 30 min

Performance Optimisation with AI

Profile your application, analyse bundle sizes, fix Core Web Vitals, and optimise rendering — with AI helping you find and fix the bottlenecks that matter.

What you will build
A measurably faster application with optimised bundles, rendering, and Core Web Vitals scores

Measuring before optimising

The cardinal rule of performance optimisation is: measure first, optimise second. Optimising without measuring is guessing; you might spend hours improving something that was never a bottleneck.

Ask Claude Code: Set up a comprehensive performance measurement baseline for a Next.js application. Install and configure the following tools. First, Lighthouse CI: create a lighthouserc.js configuration that runs Lighthouse on 5 key pages (home, product listing, product detail, checkout, and dashboard) and saves the results as JSON. Second, create a bundle analysis setup: install @next/bundle-analyzer and configure next.config.js to generate a bundle report when ANALYZE=true is set. Third, add Core Web Vitals tracking using the web-vitals library: create a utility at src/lib/performance.ts that measures LCP (Largest Contentful Paint), CLS (Cumulative Layout Shift), INP (Interaction to Next Paint), and TTFB (Time to First Byte), and sends the results to an API endpoint for logging. (FID, First Input Delay, was retired as a Core Web Vital in 2024 in favour of INP, and recent versions of the web-vitals library no longer report it.)

Run the Lighthouse audit: npx lhci collect. Review the scores: green is 90 to 100, yellow is 50 to 89, red is 0 to 49. Focus your optimisation on whatever scores red or yellow.

Run the bundle analysis: ANALYZE=true npm run build. The report opens in your browser showing every package and its size in your JavaScript bundles. Look for packages that are surprisingly large; these are your first targets.

Ask Claude Code: Create a performance dashboard page that shows the current Lighthouse scores, bundle size, and Core Web Vitals for the last 30 days as a chart. This dashboard is your compass: every change should move the numbers in the right direction.
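As a minimal sketch of what the src/lib/performance.ts utility might contain: the thresholds below are Google's published "good" / "needs improvement" / "poor" boundaries for each vital, while the payload shape and the /api/vitals endpoint name are illustrative assumptions, not part of any particular API.

```typescript
// Sketch of src/lib/performance.ts: classify a Core Web Vitals sample.
// Thresholds are Google's published boundaries; everything else is illustrative.

type MetricName = "LCP" | "CLS" | "INP" | "TTFB" | "FID";
type Rating = "good" | "needs-improvement" | "poor";

// [upper bound for "good", upper bound for "needs-improvement"] per metric
const THRESHOLDS: Record<MetricName, [number, number]> = {
  LCP: [2500, 4000],  // milliseconds
  CLS: [0.1, 0.25],   // unitless layout-shift score
  INP: [200, 500],    // milliseconds
  TTFB: [800, 1800],  // milliseconds
  FID: [100, 300],    // milliseconds (superseded by INP)
};

export function rateMetric(name: MetricName, value: number): Rating {
  const [good, needsImprovement] = THRESHOLDS[name];
  if (value <= good) return "good";
  if (value <= needsImprovement) return "needs-improvement";
  return "poor";
}

// Shape of the record shipped to the logging endpoint. In the browser, each
// handler from the web-vitals library would pass its metric value here and
// send the result, e.g. navigator.sendBeacon("/api/vitals", JSON.stringify(r)).
export function buildReport(name: MetricName, value: number) {
  return { name, value, rating: rateMetric(name, value), ts: Date.now() };
}
```

Keeping the rating logic in a pure function like this also makes it reusable on the server side when aggregating the logged samples.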

Bundle size optimisation

JavaScript bundle size directly affects load time. Every kilobyte takes time to download, parse, and execute.

Ask Claude Code: Analyse the bundle report and identify optimisation opportunities. Common findings include: large libraries imported for a single function (importing all of lodash when you only use debounce), client-side dependencies that should be server-side only, duplicate packages at different versions, and polyfills for features the target browsers already support. For each finding, show me the current import, the optimised alternative, and the size savings.

Typical replacements: for lodash, replace import _ from 'lodash' with import debounce from 'lodash/debounce', or use a native implementation. For moment.js (often 300KB+), switch to date-fns (tree-shakeable) or the native Intl API. For large icon libraries, import individual icons instead of the entire library.

Ask Claude Code: Implement code splitting. Identify components that are not needed on initial page load and lazy-load them. Use dynamic imports: const HeavyChart = dynamic(() => import('./HeavyChart'), { loading: () => <ChartSkeleton /> }). The chart component only downloads when it is about to render, not on page load. Apply this to below-the-fold content, modals and dialogs, admin-only features, and complex forms.

Run the bundle analysis again after each change: ANALYZE=true npm run build. Compare the before and after bundle sizes. A 30 percent reduction in JavaScript bundle size typically translates to 15 to 20 percent faster page loads.

Ask Claude Code: Add a bundle size budget to the CI pipeline. Create a script that fails the build if the main bundle exceeds 200KB gzipped. This prevents future regressions: nobody can add a large dependency without the build catching it.
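The budget script the last prompt asks for could look something like the sketch below. The 200KB figure comes from the text above; the bundle path in the usage comment is hypothetical, since the real chunk names depend on your build.

```typescript
// Sketch of a CI bundle-budget check: fail the build when the gzipped
// size of the main bundle exceeds a fixed budget.
import { gzipSync } from "node:zlib";

const BUDGET_BYTES = 200 * 1024; // 200KB gzipped, per the budget above

export function gzippedSize(contents: Buffer | string): number {
  return gzipSync(contents).length;
}

export function checkBudget(contents: Buffer | string, budget = BUDGET_BYTES): void {
  const size = gzippedSize(contents);
  if (size > budget) {
    // An uncaught error exits non-zero, which is what makes CI fail the build.
    throw new Error(`Bundle is ${size} bytes gzipped; budget is ${budget} bytes`);
  }
}

// Usage in CI (the chunk path is hypothetical and build-dependent):
// checkBudget(require("node:fs").readFileSync(".next/static/chunks/main.js"));
```

Measuring the gzipped size rather than the on-disk size matters: that is what actually travels over the network, and minified JavaScript compresses heavily.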

Core Web Vitals: LCP, CLS, and INP

Google uses Core Web Vitals as a ranking factor. They also correlate directly with user experience and conversion rates. Ask Claude Code: Analyse and fix Core Web Vitals for the application.

Start with LCP (Largest Contentful Paint), the time until the largest visible element loads. Target: under 2.5 seconds. Common causes of poor LCP: large hero images not optimised for the web, render-blocking CSS or JavaScript, slow server response time, and web fonts blocking text display. Fixes: use the Next.js Image component with the priority prop for above-the-fold images, preload critical fonts and use font-display: swap, and ensure server-side rendering for the initial content.

Ask Claude Code: Fix CLS (Cumulative Layout Shift), visual elements moving around as the page loads. Target: under 0.1. Common causes: images without explicit width and height, dynamically injected content like ads or banners, web fonts causing text to reflow, and client-side data fetching that changes layout after initial render. Fixes: always set width and height on images and iframes, reserve space for dynamic content with min-height, use font-display: optional to prevent layout shifts from fonts, and use server-side data fetching to render complete layouts.

Ask Claude Code: Fix INP (Interaction to Next Paint), how quickly the page responds to user input. Target: under 200 milliseconds. Common causes: heavy JavaScript execution blocking the main thread, synchronous operations in event handlers, large DOM trees causing slow re-renders, and third-party scripts consuming main thread time. Fixes: use web workers for heavy computation, debounce rapid interactions like search-as-you-type, virtualise long lists instead of rendering thousands of DOM nodes, and defer non-critical third-party scripts.

After applying the fixes, re-run Lighthouse and compare scores.
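One of the INP fixes above, debouncing rapid interactions, can be sketched as follows. The injectable scheduler parameter is purely a testing convenience I have added so the behaviour can be verified without real timers; in application code you would use the default.

```typescript
// Debounce: collapse a burst of rapid calls (e.g. search-as-you-type) into a
// single trailing call, keeping heavy work off the interaction path so the
// page can paint the next frame quickly.

type Scheduler = {
  set: (fn: () => void, ms: number) => unknown;
  clear: (id: unknown) => void;
};

// Default scheduler backed by real timers.
const realTimers: Scheduler = {
  set: (fn, ms) => setTimeout(fn, ms),
  clear: (id) => clearTimeout(id as ReturnType<typeof setTimeout>),
};

export function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  ms: number,
  timers: Scheduler = realTimers,
): (...args: A) => void {
  let pending: unknown;
  return (...args: A) => {
    // Each new call cancels the previous pending one, so only the
    // trailing call in a burst actually runs.
    if (pending !== undefined) timers.clear(pending);
    pending = timers.set(() => fn(...args), ms);
  };
}
```

For a search box, wrapping the fetch in debounce(runSearch, 300) means typing ten characters triggers one request instead of ten, and the main thread stays free to respond to the keystrokes themselves.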

Rendering performance for React applications

React re-renders components when state or props change. Unnecessary re-renders waste CPU cycles and make the UI feel sluggish.

Ask Claude Code: Set up React rendering profiling. Install React DevTools profiler integration. Create a script that wraps key components with a render counter: log a message every time a component renders, including the component name and the render reason. Add the React Profiler component to the app layout to capture render timing data. Run the application and interact with it normally. Identify components that re-render excessively; a component that re-renders 50 times while the user types in a search box is a problem.

Ask Claude Code: Apply targeted rendering optimisations. For components that re-render because their parent re-renders but their own props have not changed, wrap them with React.memo. For expensive calculations that run on every render, use useMemo to cache the result: const sortedItems = useMemo(() => [...items].sort(compareFn), [items]). (Copy the array before sorting: Array.prototype.sort mutates in place, and mutating props or state causes subtle bugs.) For callback functions passed as props that cause child re-renders, use useCallback to maintain a stable reference.

Important: do not memoise everything. React.memo has overhead, because comparing props takes time. Only memoise components where re-rendering is measurably expensive (large lists, complex charts, deep component trees).

Ask Claude Code: Implement virtualisation for the product listing page that shows hundreds of items. Install react-window and replace the simple map rendering with a virtualised list component that only renders the items currently visible in the viewport, typically 10 to 20 items out of hundreds. This reduces DOM nodes from hundreds to dozens and makes scrolling silky smooth. Measure the render time before and after: the improvement should be dramatic for lists over 50 items.
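react-window does the heavy lifting for you, but the core idea of virtualisation is a small calculation, sketched below under the assumption of fixed-height rows. The overscan margin and function name are my own illustrative choices.

```typescript
// The heart of list virtualisation: from the scroll position, compute which
// slice of a fixed-row-height list is visible, plus an overscan margin so
// fast scrolling does not reveal blank rows before the next render.

export function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  rowCount: number,
  overscan = 3,
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1;
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount - 1, last + overscan), // inclusive index
  };
}
```

Only rows start through end get rendered. For a 1000-row list with 40px rows in a 600px viewport, that is roughly 15 to 21 rows in the DOM instead of 1000, which is where the scrolling smoothness comes from.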

Image and asset optimisation

Images are typically the heaviest assets on a web page.

Ask Claude Code: Audit and optimise all images in the application. Run a script that lists every image loaded by the application with its file format, dimensions, file size, and whether it is above or below the fold. For each image, recommend optimisations: convert PNG and JPEG to WebP (typically 25 to 35 percent smaller with equivalent quality), serve AVIF to browsers that support it (an additional 20 percent saving), and resize images to their maximum display size, because serving a 3000px image that displays at 300px wastes bandwidth. Generate responsive image sizes: 640w, 768w, 1024w, 1280w.

Use the Next.js Image component for automatic optimisation: <Image src={url} width={400} height={300} alt="description" sizes="(max-width: 768px) 100vw, 400px" />. The sizes attribute tells the browser which image size to download based on the viewport width.

Ask Claude Code: Add lazy loading for below-the-fold images, so the browser only downloads images when they are about to enter the viewport. Next.js Image does this by default, but verify it is working with the Network tab in DevTools: images should load as you scroll, not all at once. For above-the-fold images, add the priority prop to disable lazy loading and trigger an immediate download.

Ask Claude Code: Optimise web fonts. Most sites load 4 to 8 font weights but only use 2 or 3. Audit which weights are actually used in the CSS and remove the unused ones. Use font-display: swap so text renders immediately with a fallback font while the custom font downloads. Subset fonts to include only the characters used on your site; a full font file might be 100KB where a Latin-only subset is 20KB. Preload critical fonts with a link tag in the document head.

Each of these optimisations individually saves a little time, but together they can cut page weight by 50 percent or more.
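To see why the generated widths and the sizes attribute matter, here is a rough sketch of the selection logic: given the CSS width an image will occupy and the device pixel ratio, pick the smallest generated width that is still sharp. This is a simplification of the browser's actual srcset selection, intended only to illustrate the saving.

```typescript
// Given the widths generated at build time and the slot an image occupies,
// pick the smallest candidate that covers the physical pixels needed
// (CSS width x device pixel ratio). A rough model of srcset selection.

const GENERATED_WIDTHS = [640, 768, 1024, 1280]; // from the guide above

export function pickWidth(
  cssWidth: number,
  dpr = 1,
  candidates: number[] = GENERATED_WIDTHS,
): number {
  const needed = cssWidth * dpr;
  const sorted = [...candidates].sort((a, b) => a - b);
  // Smallest candidate that covers the needed physical width...
  for (const w of sorted) if (w >= needed) return w;
  // ...or the largest available if nothing is big enough.
  return sorted[sorted.length - 1];
}
```

A 400px slot on a 1x display gets the 640w file rather than the 1280w one; on a 2x retina display the same slot needs 800 physical pixels, so the 1024w file is chosen. Without the sizes hint, the browser has to assume the image may span the full viewport and downloads a larger file than necessary.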

Server-side and API performance

Frontend optimisations mean nothing if the server takes 3 seconds to respond.

Ask Claude Code: Profile the API routes in the application. Add timing middleware that logs the duration of every API request: the route, the method, the duration in milliseconds, and the response status code. Run the application under realistic load using a script that simulates 50 concurrent users making typical requests, and identify the slowest endpoints. Common server-side bottlenecks include unoptimised database queries (covered in the database management guide), missing caching for expensive operations, synchronous operations that could run in parallel, and large response payloads.

Ask Claude Code: Add caching at the appropriate level. For data that changes infrequently (product catalogue, configuration), add a 5-minute cache using a simple in-memory Map with TTL (time-to-live). For per-request deduplication (the same API called multiple times during a page render), use React cache() or a request-scoped cache. For CDN caching of static pages, set appropriate Cache-Control headers: public, max-age=3600 for pages that can be cached, and private, no-cache for personalised content.

Ask Claude Code: Implement response compression. Verify that the server sends gzip or Brotli compressed responses by checking the Content-Encoding header; if not, add compression middleware. Brotli typically achieves 15 to 20 percent better compression than gzip. For JSON API responses, also consider reducing the response size: remove null fields, use shorter key names in high-volume endpoints, and paginate large lists instead of returning everything at once.

Re-measure API response times after each optimisation and create a before-and-after report showing the improvement for each endpoint. The target is under 200 milliseconds for simple reads and under 500 milliseconds for complex operations.
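The 5-minute in-memory cache described above can be sketched as a Map with per-entry expiry. The injectable clock is a testing convenience I have added so expiry can be verified without waiting; the class name is illustrative.

```typescript
// A minimal in-memory cache with per-entry TTL, suitable for data that
// changes infrequently (catalogue, configuration). Note: this cache lives
// per server process, so it is wiped on restart and not shared between
// instances; use a shared store like Redis when that matters.

export class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.store.delete(key); // expired: drop lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```

Typical usage in a route handler: check cache.get(key) first, and only on a miss run the expensive query and cache.set(key, result) before returning.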

Continuous performance monitoring

Performance is not a one-time fix; it degrades over time as features are added and dependencies grow.

Ask Claude Code: Set up continuous performance monitoring. Add Lighthouse CI to the GitHub Actions pipeline. On every pull request, run Lighthouse on the 5 key pages and post the results as a PR comment showing the score change from the main branch. Flag any PR that decreases a Lighthouse score by more than 5 points. Add a performance budget file at performance-budget.json with limits: maximum JavaScript bundle size 250KB gzipped, maximum image payload per page 500KB, LCP under 2.5 seconds, CLS under 0.1, INP under 200 milliseconds. Create a CI check that fails when any budget is exceeded.

Ask Claude Code: Add Real User Monitoring. The web-vitals tracking we set up earlier captures data from real users. Create a weekly performance report that aggregates this data: P50 and P95 for each Core Web Vital (P50 is the median experience, P95 is the worst 5 percent), a breakdown by device type (mobile vs desktop), a breakdown by connection speed, and the slowest pages by LCP. P95 matters more than P50: your slowest users are often your most frustrated. If P50 LCP is 1.5 seconds but P95 is 6 seconds, you have a problem that averages hide.

Ask Claude Code: Create a performance regression detector. Compare this week's metrics to last week's. If any metric degrades by more than 10 percent, generate an alert with the specific pages and metrics affected. Link the degradation to the commits deployed during that period so you can identify which change caused the regression.

This continuous monitoring loop (measure, budget, alert, fix) ensures your application stays fast as it grows. Performance is a feature, and like any feature, it needs ongoing attention and maintenance.
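The P50/P95 aggregation and the 10 percent regression check described above reduce to a few lines. This sketch uses the nearest-rank percentile method; the function names are illustrative.

```typescript
// Weekly report helpers: percentile aggregation over raw web-vitals samples,
// and a week-over-week regression check that flags a >10% degradation.

export function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

export function regressed(lastWeek: number, thisWeek: number, threshold = 0.1): boolean {
  // For LCP, CLS, INP, and TTFB, higher is worse, so an increase of more
  // than the threshold counts as a regression.
  return thisWeek > lastWeek * (1 + threshold);
}
```

Running percentile(lcpSamples, 50) and percentile(lcpSamples, 95) side by side is exactly what exposes the averages-hide-problems case from the text: a healthy median with a poor tail.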

Related Lesson

Performance Engineering

This guide is hands-on and practical. The full curriculum covers the conceptual foundations in depth with structured lessons and quizzes.
