🔵 Build Real Projects — Guide 13 of 16
Claude Code · Intermediate · 35 min

Build a File Upload Pipeline

Handle file uploads properly: client-side validation, cloud storage with S3/R2, image resizing, progress indicators, and secure download links.

What you will build
A file upload system with cloud storage, image processing, and progress tracking

Understanding file upload architecture

File uploads seem simple but involve multiple components working together. The browser reads a file from the user's device and sends it over HTTP to a server; the server processes it (validation, resizing, virus scanning) and stores it somewhere durable (cloud storage, not local disk). There are two main upload patterns: server-side upload, where the file goes to your server first and your server forwards it to cloud storage; and direct upload, where the browser uploads directly to cloud storage using a presigned URL, bypassing your server entirely. Direct upload is preferred for large files because it keeps your server from becoming a bottleneck.

Ask Claude Code: Create a new Next.js project for a file upload system. Set up TypeScript and Tailwind CSS. Create a types file with interfaces for UploadedFile (id, original name, stored name, size in bytes, mime type, url, thumbnail url for images, uploaded at timestamp) and UploadProgress (file name, percent complete, status as pending or uploading or processing or complete or error, error message). Create an API route at src/app/api/upload/route.ts that accepts a multipart form data POST request, validates that the file type is an image or PDF with a maximum size of 10MB, generates a unique filename, and saves it to a local uploads directory for now. Return the file metadata as JSON.

Test with curl: curl -X POST -F "file=@test-image.jpg" http://localhost:3000/api/upload. You should get back a JSON response with the file details. This local storage approach works for development; we will switch to cloud storage in a later section.
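The server-side checks the route performs can be sketched as pure helpers. This is a minimal illustration, not the generated route itself; validateUpload, makeStorageName, and the constants are hypothetical names:

```typescript
import { randomUUID } from "node:crypto";

// Types the upload route accepts, and the 10 MB ceiling from the spec above.
const ALLOWED_TYPES = new Set([
  "image/jpeg",
  "image/png",
  "image/gif",
  "image/webp",
  "application/pdf",
]);
const MAX_BYTES = 10 * 1024 * 1024;

// Returns null when the file passes, or a human-readable error otherwise.
export function validateUpload(mimeType: string, sizeBytes: number): string | null {
  if (!ALLOWED_TYPES.has(mimeType)) return "Unsupported file type";
  if (sizeBytes > MAX_BYTES) return "File too large, maximum 10MB";
  return null;
}

// Unique stored filename: a UUID plus the original extension, so two users
// uploading photo.jpg never collide on disk.
export function makeStorageName(originalName: string): string {
  const dot = originalName.lastIndexOf(".");
  const ext = dot > 0 ? originalName.slice(dot) : "";
  return `${randomUUID()}${ext}`;
}
```

The route handler would call validateUpload before writing anything, and use makeStorageName for the on-disk (later R2) key while keeping the original name only as display metadata.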

Building the upload UI with progress tracking

Users need visual feedback during uploads.

Ask Claude Code: Create a file upload component at src/components/FileUploader.tsx. It should have a drag-and-drop zone with a dashed border that highlights when a file is dragged over it. Also include a traditional file input button as a fallback. When files are selected, show a list of pending uploads with the file name, the size in a human-readable format like 2.4 MB, and a progress bar for each file. Upload files one at a time using an XMLHttpRequest wrapper that reports progress events (fetch does not expose upload progress). Update the progress bar in real time as bytes are sent. Show a checkmark when complete and a red X with an error message on failure.

The drag-and-drop zone uses the onDragOver, onDragEnter, onDragLeave, and onDrop events. The key to progress tracking is XMLHttpRequest's upload.onprogress event, which provides loaded and total byte counts. Calculate the percent with Math.round((loaded / total) * 100).

Ask Claude Code: Add client-side file validation before uploading. Check the file type against an allowed list (jpg, png, gif, webp, pdf). Check that the file size is under 10MB. Show validation errors inline — for example, a red message below the file saying File too large, maximum 10MB. For images, generate a local preview using URL.createObjectURL before uploading. Show the preview thumbnail next to the progress bar.

Test by dragging in multiple files of different types and sizes. Valid images should show previews and upload with progress. Invalid files should show error messages immediately without attempting the upload.

Ask Claude Code: Add a cancel button on each uploading file that aborts the XMLHttpRequest and removes the file from the list. Also add a retry button on failed uploads.
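The XHR wrapper itself is browser-side, but the two pure pieces it leans on — the percent formula and the human-readable size string — can be sketched directly. The function names here are illustrative, not from any library:

```typescript
// Percent complete from XMLHttpRequest's upload.onprogress byte counts.
export function percentComplete(loaded: number, total: number): number {
  if (total === 0) return 0; // guard against division by zero
  return Math.round((loaded / total) * 100);
}

// Human-readable file size, e.g. 2516582 bytes -> "2.4 MB".
export function formatSize(bytes: number): string {
  const units = ["B", "KB", "MB", "GB"];
  let value = bytes;
  let i = 0;
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024;
    i += 1;
  }
  // Whole numbers for bytes, one decimal place above that.
  return `${value.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
}
```

In the component, the onprogress handler would just call percentComplete(e.loaded, e.total) when e.lengthComputable is true and feed the result to the progress bar.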

Cloud storage with S3 or Cloudflare R2

Local file storage does not scale — if your server restarts, moves, or scales to multiple instances, locally stored files can be lost or inaccessible. Cloud object storage solves this. Cloudflare R2 is S3-compatible (same API) with no egress fees, making it ideal for file storage.

Ask Claude Code: Set up Cloudflare R2 for file storage. Install the @aws-sdk/client-s3 package (and @aws-sdk/s3-request-presigner, which provides signed URL generation). Create a storage utility at src/lib/storage.ts that configures the S3 client for R2 using environment variables for the account ID, access key, secret key, and bucket name. Create functions for uploadFile (takes a buffer and key, returns the URL), getSignedUrl (generates a temporary download URL), deleteFile, and listFiles. For local development, create a mock storage that saves to disk so I can develop without R2 credentials. Add the R2 environment variables to .env.example.

Ask Claude Code: Update the upload API route to use the cloud storage utility instead of local file saving. Generate a unique key for each file using a UUID prefix to avoid name collisions. Store the file in R2 and save the metadata, including the R2 key, to an in-memory array for now. Update the GET endpoint to return file metadata with signed download URLs.

The signed URL pattern is important for security: instead of making files publicly accessible, you generate temporary URLs that expire after a set time. This prevents unauthorized access and hotlinking.

Ask Claude Code: Set the signed URL expiration to 1 hour. Add an API endpoint that generates a fresh signed URL when a user wants to download a file.

Test the complete flow: upload a file through the UI, verify it appears in your R2 bucket via the Cloudflare dashboard, and download it using the signed URL.
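A minimal configuration sketch of what src/lib/storage.ts could look like, assuming the @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner packages are installed and R2 credentials are in environment variables (the variable names are assumptions, not a fixed convention):

```typescript
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// R2 speaks the S3 API; the endpoint is your account-scoped R2 URL.
const s3 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

const BUCKET = process.env.R2_BUCKET_NAME!;

export async function uploadFile(buffer: Buffer, key: string, contentType: string) {
  await s3.send(
    new PutObjectCommand({ Bucket: BUCKET, Key: key, Body: buffer, ContentType: contentType })
  );
  return key;
}

// Temporary download URL; defaults to the 1-hour expiry used in this guide.
export async function signedDownloadUrl(key: string, expiresInSeconds = 3600) {
  return getSignedUrl(s3, new GetObjectCommand({ Bucket: BUCKET, Key: key }), {
    expiresIn: expiresInSeconds,
  });
}

export async function deleteFile(key: string) {
  await s3.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: key }));
}
```

Because getSignedUrl signs the request locally, generating a download URL makes no network call; only the eventual GET hits R2.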

Image processing and thumbnails

Uploaded images need processing: resizing for thumbnails, compressing for web display, and converting formats for compatibility.

Ask Claude Code: Set up image processing using the sharp library. Install sharp: npm install sharp. Create an image processor at src/lib/image-processor.ts with functions to resize an image to specific dimensions while maintaining aspect ratio, generate a thumbnail at 200x200 pixels, convert to WebP format for smaller file sizes, strip EXIF data for privacy (EXIF can contain GPS coordinates), and get image dimensions and metadata.

The process for each uploaded image should be: validate that it is a real image (not a renamed exe), read the dimensions and metadata, generate a web-optimized version at max 1920px wide in WebP format, generate a thumbnail at 200x200 in WebP format, and upload all three versions (original, web, thumbnail) to R2.

Ask Claude Code: Update the upload pipeline to process images automatically. When an image is uploaded, run it through the processor to create the web and thumbnail versions. Store all versions in R2 with related keys like uploads/abc123/original.jpg, uploads/abc123/web.webp, and uploads/abc123/thumb.webp. Return URLs for all versions in the API response. For non-image files like PDFs, skip the image processing and just store the original.

Test by uploading a large photo and verifying that three versions appear in R2. Check that the web version is significantly smaller than the original and that the thumbnail is 200x200.

Ask Claude Code: Add a fallback for when sharp fails to process an image. Some image formats or corrupted files can cause sharp to throw. Catch errors, log them, and fall back to storing just the original without processed versions.
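The per-image pipeline above could be sketched roughly as follows, assuming sharp is installed; processImage is an illustrative name. Note that sharp strips EXIF and other metadata by default when re-encoding, and rotate() with no arguments applies the EXIF orientation first so the pixels stay upright after the metadata is gone:

```typescript
import sharp from "sharp";

// Produces the web-optimized and thumbnail versions described above.
// Throws if the buffer is not a real, parseable image — which doubles as
// the "not a renamed exe" validation step.
export async function processImage(original: Buffer) {
  const image = sharp(original).rotate(); // auto-orient from EXIF

  const meta = await image.metadata(); // fails fast on non-images

  // Web version: max 1920px wide, never enlarged, re-encoded as WebP.
  const web = await image
    .clone()
    .resize({ width: 1920, withoutEnlargement: true })
    .webp({ quality: 80 })
    .toBuffer();

  // Thumbnail: 200x200, center-cropped to fill the square.
  const thumb = await image
    .clone()
    .resize(200, 200, { fit: "cover" })
    .webp({ quality: 70 })
    .toBuffer();

  return { width: meta.width, height: meta.height, web, thumb };
}
```

The caller would upload original, web, and thumb under related keys (uploads/abc123/original.jpg, web.webp, thumb.webp), and wrap the call in try/catch so a corrupt file falls back to storing just the original.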

Presigned uploads for large files

For files over a few megabytes, uploading through your server creates unnecessary load and latency. Presigned URLs let the browser upload directly to R2 or S3, bypassing your server entirely.

Ask Claude Code: Create a presigned upload flow. Add an API endpoint at POST /api/upload/presign that accepts the file name, file type, and file size. It validates the request, generates a presigned PUT URL for R2, and returns the URL along with the required headers. The frontend then uses this URL to upload the file directly from the browser to R2. After the upload completes, the frontend calls POST /api/upload/confirm with the file key to trigger any server-side processing like thumbnail generation. Update the FileUploader component to use presigned uploads.

The flow is: the user selects a file, the frontend requests a presigned URL from the API, the frontend uploads directly to R2 using the presigned URL with progress tracking via XMLHttpRequest, and the frontend confirms the upload to trigger processing. This architecture handles files of any size without straining your server.

Ask Claude Code: Add multipart upload support for files over 100MB. The S3 API supports splitting large files into parts that upload in parallel. Create a function that splits a file into 10MB chunks, requests presigned URLs for each part, uploads the chunks in parallel with a concurrency limit of 3, and completes the multipart upload. Show aggregate progress across all chunks. This is how professional file hosting services handle large uploads.

Test with a 200MB file — it should upload in parallel chunks with accurate progress, completing much faster than a single sequential upload.

Ask Claude Code: Add upload resume capability. If a chunk fails, retry just that chunk. If the browser tab closes, allow resuming the upload by checking which parts already exist.
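The bookkeeping behind the multipart flow — splitting a file into 10 MB parts and aggregating progress across them — is pure arithmetic and can be sketched on its own. partRanges and aggregateProgress are illustrative names:

```typescript
export const PART_SIZE = 10 * 1024 * 1024; // 10 MB per part

// Byte ranges to slice from the File/Blob, one per presigned part URL.
export function partRanges(totalBytes: number): Array<{ start: number; end: number }> {
  const ranges: Array<{ start: number; end: number }> = [];
  for (let start = 0; start < totalBytes; start += PART_SIZE) {
    ranges.push({ start, end: Math.min(start + PART_SIZE, totalBytes) });
  }
  return ranges;
}

// Aggregate percent across all parts, given bytes uploaded so far per part
// (each part's XHR onprogress handler updates its own slot).
export function aggregateProgress(uploadedPerPart: number[], totalBytes: number): number {
  const loaded = uploadedPerPart.reduce((sum, n) => sum + n, 0);
  return totalBytes === 0 ? 100 : Math.round((loaded / totalBytes) * 100);
}
```

Each range maps to one Blob.slice(start, end) call and one presigned part upload; the resume feature amounts to skipping ranges whose parts the server reports as already complete.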

File management and gallery view

Users need to browse, organize, and manage their uploaded files.

Ask Claude Code: Create a gallery page at src/app/gallery/page.tsx that displays all uploaded files. Images show as a thumbnail grid with the file name, size, and upload date below each thumbnail. Non-image files show as icons based on file type — a document icon for PDFs, a spreadsheet icon for CSV files. Add view switching between grid view (thumbnails) and list view (a table with columns for thumbnail, name, size, type, uploaded date, and action buttons). Add click-to-preview: clicking an image opens a modal with the full-size web-optimized version. Clicking a PDF opens it in a new tab. Add a download button that uses the signed URL.

Ask Claude Code: Add file organization features. Let users create folders and move files between them. Add a rename function that updates the display name without changing the storage key. Add multi-select with checkboxes so users can delete or move multiple files at once. Add a confirmation dialog for deletion that shows the file names and warns that this cannot be undone.

Ask Claude Code: Add a search feature that filters files by name, file type, and date range. Add sorting by name, size, date, and type. The search should filter in real time as the user types. For file types, add filter buttons like All, Images, Documents, and Other that toggle visibility.

Test the gallery with a mix of images and documents. Verify that grid and list views display correctly, previews work for different file types, multi-select and batch operations work, and search filters the results accurately. This gallery component is reusable — most applications that handle files need a similar interface.
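The real-time search and type-filter logic is a pure function over the file list, which keeps it trivially testable apart from the React component. FileMeta and filterFiles are hypothetical names for this sketch:

```typescript
export interface FileMeta {
  name: string;
  mimeType: string;
  sizeBytes: number;
  uploadedAt: string; // ISO timestamp
}

// Applies the search box and the All/Images/Documents/Other toggle together.
export function filterFiles(
  files: FileMeta[],
  query: string,
  category: "all" | "images" | "documents" | "other"
): FileMeta[] {
  const q = query.trim().toLowerCase();
  return files.filter((f) => {
    if (q && !f.name.toLowerCase().includes(q)) return false;
    const isImage = f.mimeType.startsWith("image/");
    const isDoc = f.mimeType === "application/pdf";
    if (category === "images") return isImage;
    if (category === "documents") return isDoc;
    if (category === "other") return !isImage && !isDoc;
    return true; // "all"
  });
}
```

The component would call this on every keystroke and toggle click, then apply the chosen sort to the result before rendering.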

Security and validation hardening

File uploads are one of the most common attack vectors in web applications.

Ask Claude Code: Create a comprehensive security layer for the file upload system. Add the following protections.

File type validation on both client and server: do not trust the file extension alone. Read the file's magic bytes to verify that the actual file type matches the claimed type. For example, a JPEG starts with the bytes FF D8 FF and a PNG starts with 89 50 4E 47. Reject files where the extension does not match the magic bytes.

File size limits enforced on both client and server. The client check provides fast feedback but is not a security measure, since it can be bypassed. The server check is the real security boundary.

Filename sanitization: strip special characters, path traversal sequences like ../, and null bytes from filenames. Generate a UUID-based storage key so the original filename is only used for display, never for storage paths.

Content scanning: for image files, attempt to load them with sharp and verify they are valid images. Malicious files disguised as images will fail to parse. Reject any file that fails validation.

Rate limiting: limit uploads to 10 files per minute per IP address. Use a simple in-memory rate limiter that tracks request counts per IP with a sliding window. Return 429 Too Many Requests when the limit is exceeded.

Ask Claude Code: Create a test file that attempts each attack vector: a renamed executable with a jpg extension, a file with path traversal in the name, a file exceeding the size limit, and rapid successive uploads exceeding the rate limit. Verify that every attack is blocked with an appropriate error message. Run the tests and confirm all pass.
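Three of these protections are pure logic and can be sketched directly: the magic-byte check, filename sanitization, and the sliding-window rate limiter. The table of signatures and all function names here are illustrative:

```typescript
// Leading bytes ("magic bytes") for the types this pipeline accepts.
const MAGIC: Record<string, number[]> = {
  "image/jpeg": [0xff, 0xd8, 0xff],
  "image/png": [0x89, 0x50, 0x4e, 0x47],
  "image/gif": [0x47, 0x49, 0x46, 0x38], // "GIF8"
  "application/pdf": [0x25, 0x50, 0x44, 0x46], // "%PDF"
};

// True only if the file's first bytes match the claimed MIME type.
export function magicBytesMatch(claimedType: string, bytes: Uint8Array): boolean {
  const magic = MAGIC[claimedType];
  if (!magic) return false; // unknown type -> reject
  return magic.every((b, i) => bytes[i] === b);
}

// Strip traversal sequences, path separators, null bytes, and control chars.
// This name is display-only; storage keys are UUID-based.
export function sanitizeFilename(name: string): string {
  return (
    name
      .replace(/\.\./g, "")
      .replace(/[\/\\]/g, "")
      .replace(/[\x00-\x1f]/g, "") // includes null bytes
      .trim() || "unnamed"
  );
}

// In-memory sliding-window limiter: max 10 uploads per minute per IP.
const WINDOW_MS = 60_000;
const MAX_PER_WINDOW = 10;
const hits = new Map<string, number[]>();

export function allowUpload(ip: string, now = Date.now()): boolean {
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_PER_WINDOW) {
    hits.set(ip, recent);
    return false; // caller responds with 429 Too Many Requests
  }
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```

A renamed executable fails magicBytesMatch because its leading bytes (e.g. MZ for a Windows exe) do not match the claimed image type, and ../../etc/passwd sanitizes down to a harmless display string.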

Deploying and monitoring the upload pipeline

Deploy the complete system with monitoring to track usage and catch issues.

Ask Claude Code: Prepare the file upload system for production deployment. Replace the in-memory file metadata storage with a PostgreSQL database on Railway. Create a files table with all the UploadedFile fields plus a user_id column for future authentication. Set up database connection pooling with a maximum of 10 connections. Add database indexes on user_id and uploaded_at for query performance. Deploy the Next.js app to Vercel and the database to Railway.

For monitoring, ask Claude Code: Add logging throughout the upload pipeline. Log each upload attempt with the file type, size, and duration. Log processing steps like thumbnail generation with timing. Log errors with full context, including the file details and the error stack trace. Use structured JSON logging so logs are searchable. Calculate and log aggregate metrics: total uploads today, total storage used, average upload duration, and error rate.

Ask Claude Code: Create a simple admin page at /admin/uploads that shows upload statistics: total files, total storage used, an uploads-per-day chart for the last 30 days, a breakdown by file type, and recent errors. Add a file browser that lets the admin view any uploaded file, see its metadata and all versions, and delete files. This admin view is essential for monitoring storage costs and catching abuse.

Test the complete production flow: upload through the UI, verify the file appears in R2, verify the metadata is in the database, verify thumbnails were generated, download via the signed URL, and check the admin stats page. Your file upload pipeline now handles everything a production application needs: client validation, cloud storage, image processing, progress tracking, security hardening, and monitoring.
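"Structured JSON logging" just means one JSON object per log line, so log tooling can filter on fields instead of grepping prose. A minimal sketch with illustrative names (not a real logging library):

```typescript
export interface UploadLogEntry {
  event: "upload";
  fileType: string;
  sizeBytes: number;
  durationMs: number;
  ok: boolean;
  at: string; // ISO timestamp
}

// One searchable JSON line per upload attempt.
export function makeUploadLog(
  fileType: string,
  sizeBytes: number,
  durationMs: number,
  ok: boolean
): string {
  const entry: UploadLogEntry = {
    event: "upload",
    fileType,
    sizeBytes,
    durationMs,
    ok,
    at: new Date().toISOString(),
  };
  return JSON.stringify(entry);
}

// One of the aggregate metrics mentioned above, computed over parsed entries.
export function errorRate(entries: UploadLogEntry[]): number {
  if (entries.length === 0) return 0;
  return entries.filter((e) => !e.ok).length / entries.length;
}
```

In the route, console.log(makeUploadLog(...)) after each attempt is enough; Vercel's log viewer and most log drains can then filter on event, ok, or fileType directly.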

Related Lesson

Working with Cloud Services

This guide is hands-on and practical. The full curriculum covers the conceptual foundations in depth with structured lessons and quizzes.
