Automated Code Review
Set up automated code review with Git hooks, security scanning, PR-level analysis, and quality gates — catch issues before they reach production.
Why automated code review matters
Manual code review is essential but slow. A reviewer needs to read every line, understand the context, check for bugs, verify style consistency, spot security issues, and ensure test coverage. Automated review handles the mechanical parts — style violations, common bugs, security vulnerabilities, and test coverage — so human reviewers can focus on architecture, logic, and design decisions.

The automation pyramid has three layers. Pre-commit hooks catch issues before code is even committed; they run in seconds and give instant feedback. CI pipeline checks run on every push and take minutes; they catch integration issues and run the full test suite. PR-level analysis runs when a pull request is opened and provides a comprehensive review. Together, these layers create a quality gate that catches the majority of issues automatically.

Ask Claude Code: Create a new project for demonstrating automated code review. Initialise a Git repository with git init. Create a basic Node.js project with TypeScript. Add a few source files with deliberate issues we will catch later: a file with inconsistent formatting, a file with an unused variable, a file with a hardcoded API key, and a file with a SQL injection vulnerability.

Install the core tools: npm install -D eslint prettier husky lint-staged. These four packages form the foundation of the automation. ESLint catches code quality issues. Prettier enforces formatting. Husky manages Git hooks. lint-staged runs checks only on staged files for speed.
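To make the SQL injection demo file concrete, here is a minimal sketch of the vulnerable pattern alongside the safe alternative. The `buildUserQuery` helpers are hypothetical names for illustration, not part of any real library:

```javascript
// VULNERABLE: user input is concatenated straight into the SQL text,
// so a crafted username can change the query's meaning.
function buildUserQuery(username) {
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

// SAFE: a parameterised query keeps the input out of the SQL text entirely.
function buildUserQuerySafe(username) {
  return { text: 'SELECT * FROM users WHERE name = $1', values: [username] };
}

const payload = "x' OR '1'='1";
console.log(buildUserQuery(payload));     // injected condition ends up inside the SQL
console.log(buildUserQuerySafe(payload)); // input stays in the values array
```

This is exactly the kind of flaw a later AI review pass can flag that a formatter never will.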
Pre-commit hooks with Husky and lint-staged
Pre-commit hooks run automatically before every commit. If they fail, the commit is rejected.

Ask Claude Code: Set up Husky and lint-staged for pre-commit hooks. Run npx husky init to create the .husky directory. Configure lint-staged in package.json to run Prettier formatting on all TypeScript and JavaScript files, ESLint with the fix flag on all TypeScript files, and TypeScript type checking with tsc --noEmit on the entire project. The Husky pre-commit hook should run npx lint-staged.

Configure ESLint with a strict ruleset: install @typescript-eslint/eslint-plugin and @typescript-eslint/parser. Create an .eslintrc.json that extends the recommended configs plus strict type-checking rules. Enable rules such as @typescript-eslint/no-unused-vars, @typescript-eslint/no-explicit-any, consistent-return, and prefer-const. Create a .prettierrc with opinionated settings: single quotes, no semicolons, 2-space indent, and trailing commas.

Test by staging the file with formatting issues and running git commit -m "test". The commit should be blocked with ESLint errors. Fix one issue and try again. The commit should auto-format with Prettier, fix auto-fixable ESLint issues, and succeed only when all checks pass.

Ask Claude Code: Add a commit message hook using commitlint. Install @commitlint/cli and @commitlint/config-conventional. Create a commitlint.config.js that enforces conventional commits: the message must start with a type like feat, fix, docs, style, refactor, test, or chore, followed by a colon and a description. This standardised format enables automatic changelog generation and semantic versioning later. Test with a bad commit message like "updated stuff" — it should be rejected. Try "fix: resolve null pointer in user lookup" — it should succeed.
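The conventional-commit check that commitlint performs can be sketched in a few lines. This is an illustrative simplification, not commitlint's actual parser (which also handles scopes, footers, and body rules):

```javascript
// Simplified conventional-commit validation: "<type>(<optional scope>): <description>"
const TYPES = ['feat', 'fix', 'docs', 'style', 'refactor', 'test', 'chore'];

function isConventional(message) {
  // Match a type word, an optional (scope), an optional "!", then ": description"
  const match = /^(\w+)(\([^)]+\))?!?: .+/.exec(message);
  return match !== null && TYPES.includes(match[1]);
}

console.log(isConventional('updated stuff'));                            // false — no type prefix
console.log(isConventional('fix: resolve null pointer in user lookup')); // true
```

Seeing the rule as code makes it obvious why "updated stuff" is rejected: there is no machine-readable type for changelog tooling to group on.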
Security scanning in the commit pipeline
Security vulnerabilities caught before commit never reach your repository.

Ask Claude Code: Add security scanning to the pre-commit pipeline. Install gitleaks as a pre-commit check. Gitleaks scans staged files for secrets: API keys, passwords, tokens, private keys, and other sensitive data. Download the gitleaks binary and add a Husky pre-commit step that runs gitleaks protect --staged. Test by staging the file with the hardcoded API key. The commit should be blocked with a message identifying the leaked secret. Create a .gitleaks.toml configuration file that customises the rules: add your specific patterns (for example, your company's internal API key format), and add allowlist entries for test files and documentation that may contain example keys.

Ask Claude Code: Add npm audit to the pre-push hook. Create a .husky/pre-push file that runs npm audit --audit-level=high. This checks all dependencies for known security vulnerabilities before code is pushed to the remote repository. If a high-severity vulnerability is found, the push is blocked. Add a script that generates a security report: npm audit --json > security-report.json. Parse this JSON file and create a human-readable summary showing the vulnerability name, severity, affected package, and recommended fix (usually a version upgrade).

Ask Claude Code: Add a dependency licence checker. Install license-checker and add a script that verifies all dependencies use permissible licences — MIT, Apache-2.0, BSD, and ISC are typically safe. Flag any GPL or AGPL dependencies that might have copyleft implications. Flag any dependencies with unknown or missing licences. Run this check weekly or before releases. This catches legal issues that technical reviews miss — using a GPL library in a proprietary product can have serious legal consequences.
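The audit-summary script might look like the sketch below. It assumes the npm v7+ report shape, where `vulnerabilities` is keyed by package name and `fixAvailable` may be an object naming the fixed version; the sample report is invented for illustration:

```javascript
// Summarise the output of `npm audit --json` (npm v7+ report shape assumed).
function summariseAudit(report) {
  return Object.values(report.vulnerabilities || {}).map((v) => {
    const fix =
      v.fixAvailable && v.fixAvailable.version
        ? `upgrade ${v.fixAvailable.name} to ${v.fixAvailable.version}`
        : 'no direct fix available yet';
    return `[${v.severity.toUpperCase()}] ${v.name} (${v.range}): ${fix}`;
  });
}

// Invented sample report, standing in for a real security-report.json.
const sample = {
  vulnerabilities: {
    lodash: {
      name: 'lodash',
      severity: 'high',
      range: '<4.17.21',
      fixAvailable: { name: 'lodash', version: '4.17.21' },
    },
  },
};
console.log(summariseAudit(sample).join('\n'));
// [HIGH] lodash (<4.17.21): upgrade lodash to 4.17.21
```

In the real pipeline you would read security-report.json with fs.readFileSync and JSON.parse instead of the inline sample.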
CI pipeline with GitHub Actions
Pre-commit hooks run locally and can be bypassed. CI pipelines run on the server and cannot be skipped.

Ask Claude Code: Create a GitHub Actions workflow at .github/workflows/code-review.yml that runs on every pull request. The workflow should have four jobs running in parallel: lint (run ESLint on the entire codebase and report issues as annotations on the PR), typecheck (run tsc --noEmit and report type errors), test (run the test suite with coverage and fail if coverage drops below 80 percent), and security (run gitleaks on the diff and npm audit). Each job should use Node.js 20, cache node_modules for speed, and report results clearly. For the test job, upload the coverage report as an artifact and post a coverage summary as a PR comment.

Ask Claude Code: Add a build verification job that runs npm run build and verifies the production build succeeds. Many issues only surface during the build — unused imports that tree-shaking removes, server-side rendering errors, and missing environment variables. This job should run after lint and typecheck pass, saving time by not building obviously broken code.

Ask Claude Code: Create a workflow status badge and add it to the README. When someone opens the repository, they immediately see whether the main branch is passing all checks. Configure branch protection on main: require all CI checks to pass and require at least one PR approval before merging. This means no code reaches main without passing every automated check and receiving human review. The combination of required CI checks and branch protection creates an enforceable quality gate — even repository admins cannot bypass it without deliberately changing the branch protection settings.
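A skeleton of the workflow could look like the following. It is a sketch under assumptions: your test runner enforces the coverage threshold itself, and the gitleaks action typically needs a GITHUB_TOKEN; adapt step details to your project.

```yaml
# Sketch of .github/workflows/code-review.yml — four parallel jobs on every PR.
name: code-review
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npx eslint . --max-warnings 0
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npx tsc --noEmit
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npm test -- --coverage   # coverage threshold enforced by the test config
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with: { fetch-depth: 0 }      # gitleaks needs history to scan the diff
      - uses: gitleaks/gitleaks-action@v2
      - run: npm audit --audit-level=high
```

Because the jobs have no dependencies on each other, GitHub runs them in parallel, so total feedback time is bounded by the slowest job rather than the sum of all four.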
AI-powered PR review with Claude Code
Automated linting catches syntax issues. AI-powered review catches logic issues, design problems, and improvement opportunities.

Ask Claude Code: Create a GitHub Actions workflow that runs Claude Code as an automated reviewer on every PR. The workflow should check out the PR branch, install Claude Code, and run it with a prompt that instructs it to review the diff and post comments. The review prompt should cover: identify potential bugs or logic errors, flag performance concerns like N+1 queries or unnecessary re-renders, check error handling completeness, suggest simplifications or better patterns, verify documentation for new public functions, and check that test coverage addresses the changed code paths. Claude Code should post its review as a GitHub PR comment using the gh CLI. Format the review with clear sections: a summary of changes, issues found ranked by severity, suggestions for improvement, and a positive note about what was done well.

Ask Claude Code: Add a configuration file at .github/review-config.json that customises the review. Define areas of focus per directory: for src/api/ focus on input validation and error handling, for src/components/ focus on accessibility and performance, for src/lib/ focus on type safety and documentation. Define ignore patterns: skip generated files, lock files, and migration files. Define severity thresholds: only flag issues rated medium or higher to avoid noise.

Ask Claude Code: Create a review summary dashboard. After each PR review, append the findings to a review-log.json file. Create a script that analyses the log and reports: the most common issues found (helps identify team knowledge gaps), the files that generate the most review comments (indicates complex or problematic areas), and the trend of issues over time (are we improving?). This data turns code review from a gatekeeping activity into a learning tool.
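The log-analysis script could be as simple as the sketch below. The log-entry shape ({ file, issue, severity }) is a hypothetical schema for review-log.json, not a fixed format:

```javascript
// Aggregate review-log entries into "most common issues" and "hot files".
function analyseReviewLog(entries) {
  const countBy = (key) =>
    entries.reduce((acc, e) => {
      acc[e[key]] = (acc[e[key]] || 0) + 1;
      return acc;
    }, {});
  const top = (counts) =>
    Object.entries(counts).sort((a, b) => b[1] - a[1]).slice(0, 3);
  return {
    commonIssues: top(countBy('issue')), // team knowledge gaps
    hotFiles: top(countBy('file')),      // complex or problematic areas
  };
}

// Invented sample log, standing in for a real review-log.json.
const log = [
  { file: 'src/api/users.ts', issue: 'missing error handling', severity: 'medium' },
  { file: 'src/api/users.ts', issue: 'missing error handling', severity: 'high' },
  { file: 'src/lib/db.ts', issue: 'no input validation', severity: 'high' },
];
console.log(analyseReviewLog(log));
```

Adding a timestamp field to each entry would let the same aggregation be bucketed by month to show the trend over time.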
Custom linting rules and code standards
Generic linting rules catch generic issues. Custom rules catch your project's specific pitfalls.

Ask Claude Code: Create custom ESLint rules for our project. Write a rule that flags direct database queries outside of the data access layer — any file outside src/lib/db/ that imports the database client should trigger a warning. Write a rule that requires all API route handlers to include error handling (a try-catch block wrapping the main logic). Write a rule that flags console.log statements in production code (allow them in test files). Save each rule in an eslint-rules/ directory as a separate plugin. Register them in the ESLint configuration.

Ask Claude Code: Add an architecture boundary checker. Create a script that analyses imports across the project and verifies architectural rules: components should not import from API routes, API routes should not import from components, the data layer should not import from the presentation layer, and utility functions should not have side effects. Violations indicate architectural drift — code that breaks the intended separation of concerns. Run this check in CI and flag violations on PRs.

Ask Claude Code: Create a code complexity checker. Using a script that analyses the TypeScript AST, flag functions that exceed 50 lines, files with more than 10 imports, deeply nested conditionals (more than 3 levels), and functions with more than 5 parameters. These are not hard rules but signals — when code exceeds these thresholds, it usually benefits from refactoring. Post the complexity report as a PR comment showing any new threshold violations introduced by the PR.

The combination of standard linting, security scanning, custom rules, and AI review creates a comprehensive quality gate. Code that passes all these checks is significantly more likely to be correct, secure, and maintainable.
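As a taste of what a custom rule looks like, here is a sketch of the console.log rule in the standard ESLint rule shape (meta plus a create function returning AST visitors). The plugin wiring into eslint-rules/ and the test-file exemption are left out for brevity:

```javascript
// Custom ESLint rule: flag console.log calls in production code.
const noConsoleLog = {
  meta: {
    type: 'suggestion',
    messages: { noConsole: 'Remove console.log before committing.' },
    schema: [],
  },
  create(context) {
    return {
      // ESLint calls this visitor for every function-call node in the file.
      CallExpression(node) {
        const { callee } = node;
        if (
          callee.type === 'MemberExpression' &&
          callee.object.name === 'console' &&
          callee.property.name === 'log'
        ) {
          context.report({ node, messageId: 'noConsole' });
        }
      },
    };
  },
};

module.exports = noConsoleLog;
```

The database-client and try-catch rules follow the same pattern: match the relevant AST node type (ImportDeclaration, TryStatement) and call context.report when the project-specific condition is violated.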
Monitoring and maintaining the review pipeline
Automated review pipelines need maintenance — rules need updating, false positives need suppressing, and new patterns need new rules.

Ask Claude Code: Create a pipeline health dashboard as a simple HTML page. Track and display: average CI pipeline duration (should be under 5 minutes for fast feedback), failure rate by job (which checks fail most often), false positive rate (how often automated comments are dismissed without action), most common ESLint violations (indicates areas needing team education or rule adjustment), and security vulnerability trend (are we accumulating or resolving?).

Ask Claude Code: Add a feedback mechanism for automated review comments. When a developer sees an AI review comment on their PR, they can react with a thumbs up (useful) or thumbs down (not useful). Track these reactions and report on the AI review's usefulness. If a specific type of comment consistently gets thumbs down, adjust the review prompt to reduce that type of feedback.

Ask Claude Code: Create a quarterly review pipeline audit checklist. Include: update all linting packages to the latest version, review and update custom rules based on the last quarter's false positive data, run npm audit fix to address any new dependency vulnerabilities, review the CI pipeline duration and optimise slow steps, check that branch protection rules are still correctly configured, and update the review prompt based on the most common issues that automated review missed but human reviewers caught.

The automated review pipeline is a living system. It needs regular attention to stay effective, but the time it saves dwarfs the maintenance cost. A well-tuned pipeline catches 80 percent of mechanical issues automatically, freeing human reviewers to focus on the 20 percent that requires human judgement — architecture decisions, user experience implications, and business logic correctness.
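The usefulness metric from the thumbs-up/thumbs-down reactions can be computed with a small aggregation. The reaction-record shape ({ type, reaction }) is a made-up schema for illustration:

```javascript
// Per comment type, compute the share of AI review comments rated useful.
function usefulnessByType(reactions) {
  const stats = {};
  for (const { type, reaction } of reactions) {
    stats[type] = stats[type] || { up: 0, down: 0 };
    stats[type][reaction === 'up' ? 'up' : 'down'] += 1;
  }
  return Object.fromEntries(
    Object.entries(stats).map(([type, { up, down }]) => [type, up / (up + down)])
  );
}

// Invented sample reactions: performance comments land well, style nits do not.
const reactions = [
  { type: 'performance', reaction: 'up' },
  { type: 'performance', reaction: 'up' },
  { type: 'style-nit', reaction: 'down' },
  { type: 'style-nit', reaction: 'down' },
  { type: 'style-nit', reaction: 'up' },
];
console.log(usefulnessByType(reactions));
```

A comment type whose score stays low across a quarter is a concrete signal to trim that instruction from the review prompt.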