# Code Review

Bucket: Engineering · Slash command: `/zsl:code-review` · Source: `skills/engineering/code-review/SKILL.md`
## When this skill activates

Claude Code matches this skill against the trigger text below. You can also invoke it explicitly with the slash command.

> Comprehensive pre-PR code review of the current branch, with an issues-only tone and an approval gate before applying fixes. Use when the user wants a code review, mentions `/code-review`, or asks to scan the branch before opening a PR.

Please review this pull request and provide feedback on:

- Code quality and best practices
- Potential bugs or issues
- Performance considerations
- Security concerns
- Test coverage (see coverage analysis below)

Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback.
## Sentry / Error Handling Analysis

Before suggesting Sentry additions:

- **Check existing patterns** — Search for similar code in the codebase. If 5+ routes handle 401s without Sentry, that's the project pattern.
- **Distinguish error types:**
    - DO log to Sentry: unexpected exceptions, external API failures, missing configuration, data corruption, errors caught in catch blocks
    - DO NOT log to Sentry: expected HTTP responses (401 unauthenticated, 403 forbidden, 400 validation errors, 404 not found for user input)
- **Apply the "would this wake someone up?" test** — If this condition happening at 3am shouldn't trigger an alert, don't log it as an exception.
- **Expected conditions are not exceptions:**
    - User not authenticated → 401 (expected, don't log)
    - Invalid input → 400 (expected, don't log)
    - External API down → 503 (unexpected, DO log)
    - Database query fails → (unexpected, DO log)

If suggesting Sentry, verify the codebase doesn't already handle similar cases differently.
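The distinction above can be sketched as a route handler. This is a self-contained illustration: the handler and helper names are hypothetical, and a tiny stand-in object replaces the real Sentry client (in real code you would `import * as Sentry from "@sentry/node"` and call its `captureException`):

```typescript
// Stand-in for the Sentry client so the sketch runs on its own.
const captured: unknown[] = [];
const Sentry = { captureException: (e: unknown) => { captured.push(e); } };

type HandlerResult = { status: number; body: string };

// Hypothetical route handler. `fetchOrders` stands in for a DB call.
async function getOrders(
  authHeader: string | null,
  fetchOrders: () => Promise<string[]>,
): Promise<HandlerResult> {
  if (!authHeader) {
    // Expected condition: respond 401, do NOT log to Sentry.
    return { status: 401, body: "Unauthenticated" };
  }
  try {
    const orders = await fetchOrders();
    return { status: 200, body: JSON.stringify(orders) };
  } catch (err) {
    // Unexpected failure (DB down, corrupted data): DO log to Sentry.
    Sentry.captureException(err);
    return { status: 500, body: "Internal error" };
  }
}
```

The 401 path returns without touching Sentry at all; only the catch block, which represents a genuinely unexpected failure, reports the error.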
## Coverage Exclusion Analysis

Before suggesting coverage exclusions:

- **Read existing config** — Check `jest.config.ts` (`collectCoverageFrom`) and `codecov.yml` (`ignore`). Skip files already excluded or outside collection scope.
- **Analyze file content (don't pattern-match paths):**
    - SHOULD HAVE UNIT TESTS: exports pure functions (no Supabase/React/fetch imports, input → output, no side effects). Examples: `validation.ts`, `*-utils.ts`, `mock-data.ts` with logic.
    - SHOULD BE EXCLUDED: direct `createClient()` usage, route handlers, React components with `useEffect`/`useState`.
- **Verify before suggesting** — Is the file in collection scope? Does an existing exclusion pattern cover it? Does the file contain ANY pure functions?

If pure functions exist → suggest unit tests, not exclusion.
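For reference, a hypothetical `jest.config.ts` fragment showing where the collection scope lives (the globs are illustrative, not from any real project; `codecov.yml` has an analogous `ignore:` list):

```typescript
// Hypothetical jest.config.ts fragment.
// In a real project you would also: import type { Config } from "jest";
const config = {
  collectCoverageFrom: [
    "src/**/*.{ts,tsx}",    // collection scope: files outside this glob
    "!src/**/*.d.ts",       // are already invisible to coverage
    "!src/app/**/route.ts", // route handlers: excluded by an existing pattern
  ],
};

export default config;
```

A file matched by a `!`-prefixed pattern (or outside the glob entirely) needs no new exclusion suggestion.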
## SQL Migration Analysis

Before suggesting SQL changes:

- **Check existing comments** — Inline SQL comments (`-- comment`) are valid documentation. Don't suggest `COMMENT ON` statements if inline comments already explain the logic.
- **`COMMENT ON` is for tooling** — Use `COMMENT ON` when database documentation tools need metadata. Use inline comments for developer readability. Both are valid; don't require both.
- **Index comments** — Partial indexes with `WHERE` clauses benefit from inline comments explaining the filtering logic. `COMMENT ON INDEX` is optional and only useful if you use database documentation generators.
## Type Safety / Runtime Validation Analysis

Before suggesting runtime type checks or validation:

- **Check database constraints first** — If a column has `CHECK`, `FOREIGN KEY`, `NOT NULL`, or enum type constraints, the database already enforces valid values. Runtime validation is redundant.
- **Type casts from the DB are often necessary** — Supabase generates `string` for constrained columns (e.g., `country: string` even with `CHECK (country IN ('NZ', 'AU'))`). A type cast like `as "NZ" | "AU"` is correct: it tells TypeScript what the DB guarantees.
- **Don't add validation for impossible states:**
    - Redundant: `if (country !== "NZ" && country !== "AU") throw` when the DB has a CHECK constraint
    - Appropriate: validation at API boundaries where user input hasn't been validated yet
- **If suggesting runtime checks, verify the constraint isn't already enforced at:**
    - the database level (CHECK, FK, NOT NULL, enum types)
    - the API/form validation layer (zod schemas, form validation)
    - the type system (discriminated unions, branded types)

Defensive code for impossible states adds noise without value.
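A minimal sketch of the cast pattern described above (the row interface and function name are hypothetical; the `country` column and its CHECK constraint come from the example in the bullet list):

```typescript
// DB column: country text CHECK (country IN ('NZ', 'AU'))
// A Supabase-generated row type widens the column to `string`:
interface ProfileRow {
  country: string;
}

type CountryCode = "NZ" | "AU";

// The cast narrows the generated type to what the CHECK constraint
// already guarantees. No runtime branch or throw is needed here.
function countryFromRow(row: ProfileRow): CountryCode {
  return row.country as CountryCode;
}
```

The cast documents a database guarantee rather than hiding a hole; a `zod` check or `if`/`throw` at this point would re-validate a value the DB cannot produce incorrectly.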
## Documentation Convention Analysis

Before suggesting JSDoc, comments, or documentation:

- **Check existing conventions** — Search for `@param`, `@returns`, and `/**` patterns in the codebase. If <5% of functions have JSDoc, don't suggest adding it to one file.
- **Self-documenting code is preferred** — Clear prop names, TypeScript interfaces, and descriptive function names often eliminate the need for JSDoc.
- **Don't suggest inconsistent documentation:**
    - Wrong: adding JSDoc to one component when 50 others have none
    - Right: noting that documentation is sparse project-wide (if it's actually a problem)
- **CLAUDE.md guidance** — "Don't add docstrings, comments, or type annotations to code you didn't change"

Documentation suggestions should match codebase conventions, not ideal-world standards.
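As an illustration of the self-documenting preference (the function and its parameters are hypothetical), a name plus typed signature can carry everything a JSDoc block would repeat:

```typescript
// No JSDoc needed: the name, parameter names, and types already say
// what this does and what it returns.
function isWithinBusinessHours(
  date: Date,
  openHour: number,
  closeHour: number,
): boolean {
  const hour = date.getHours();
  return hour >= openHour && hour < closeHour;
}
```

Suggesting a `/** Checks whether a date falls within business hours... */` block here would only restate the signature.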
## Over-Engineering Analysis

Before suggesting configurability or abstraction:

- **Constants vs environment variables:**
    - Use constants: values that rarely change and have sensible defaults (cache TTLs, timeouts, retry counts)
    - Use env vars: values that MUST differ per environment (API keys, URLs, feature flags for A/B tests)
- **Apply the "how often would this change?" test** — If the answer is "rarely" or "never in production," a constant is sufficient. Environment variables add deployment complexity for no benefit.
- **DRY without over-abstracting:**
    - DO suggest: extracting magic numbers to named constants (improves readability)
    - DO NOT suggest: environment variables for internal implementation details
- **Examples:**
    - Cache duration `300` → `CACHE_MAX_AGE_SECONDS = 300` (good)
    - Cache duration → `process.env.CACHE_MAX_AGE` (over-engineering unless there is a proven need)
    - Debounce delay `150` → `BLUR_DELAY_MS = 150` (good)
    - Creating a `ConfigService` for 3 constants (over-engineering)

If suggesting configurability, explain the concrete use case requiring runtime changes.
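The examples above can be sketched as follows; the constant names come from the list, while the helper function is hypothetical:

```typescript
// Preferred: named constants for internal values with sensible defaults.
const CACHE_MAX_AGE_SECONDS = 300; // cache TTL; does not vary per environment
const BLUR_DELAY_MS = 150;         // debounce delay for blur handling

function cacheControlHeader(): string {
  return `public, max-age=${CACHE_MAX_AGE_SECONDS}`;
}

// Over-engineered alternative, to avoid unless a concrete per-environment
// need is demonstrated:
//   const maxAge = Number(process.env.CACHE_MAX_AGE ?? "300");
```

The constant version is greppable, typo-proof at compile time, and adds no deployment surface; the env-var version forces every environment to know about an internal tuning knob.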
## Instructions

- Use `git diff main...HEAD` (or `staging...HEAD`, depending on the project's base branch) to see changes since branching
- Read all modified files completely (minimum 1500 lines)
- Before suggesting any change, search for similar patterns in the codebase to understand existing conventions
- Review against the checklist above
- Provide specific, actionable feedback with file:line references
- Highlight any issues that should be fixed before creating a PR
- Note any validation that should be run (`make lint`)
- **Tone: issues only** — Do not praise code or summarize what was done well. Only report things that should be improved, fixed, or reconsidered. If there are no issues, state "No issues found." and nothing else.
## Workflow (IMPORTANT)

1. **Present findings first** — After reviewing, present ALL findings as a numbered list with file:line references and confidence levels. Group by severity (Critical / Important / Minor).
2. **Propose a fix plan** — List which findings you plan to fix and which you recommend skipping (with reasoning). For any findings you believe are false positives, mark them and explain why.
3. **Ask for approval** — End with: "Shall I proceed with these fixes?" Wait for explicit approval.
4. **Only then apply fixes** — Do not edit files until the user approves the plan.

This approval gate prevents applying incorrect fixes based on false positives and lets the user decide which issues are worth addressing.
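A findings report following this workflow might look like the sketch below; every file, line number, and finding here is invented purely to show the shape:

```
## Findings

### Critical
1. src/api/orders.ts:42 - unhandled promise rejection in payment flow (confidence: high)

### Important
2. src/lib/cache.ts:17 - cache key omits the user ID, may leak data across users (confidence: medium)

### Minor
3. src/components/Form.tsx:88 - debounce logic duplicated from form-utils.ts (confidence: high)

Fix plan: 1 and 3. Recommend skipping 2 until the intended cache scope is confirmed; it may be a false positive if the cache is deliberately shared.

Shall I proceed with these fixes?
```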
Be thorough and critical - this review should catch issues before they reach the PR stage.