ASK KNOX
beta
LESSON 183

The Read Limit Trap (and How to Fix It)

Claude silently truncates files at 2000 lines — and doesn't know it missed anything. Here's how to build a guard that forces chunked reads on large files.

6 min read·Claude Code Operations

There is a bug in every Claude Code session involving large files. Most developers have never noticed it.

When Claude reads a file, it reads a maximum of 2000 lines. If the file is 3000 lines, Claude reads lines 1-2000. Then it continues working. It does not know there are 1000 more lines. It does not warn you. It does not report a truncation. It simply proceeds.

The Failure Mode in Practice

Consider a configuration file for a complex system — say, a 2800-line Kubernetes manifest or a long pytest conftest. Claude reviews it for issues. It reads lines 1-2000, finds nothing wrong, and reports the file is clean.

The problem is in lines 2001-2800. Claude never saw them.

Or consider a 3000-line service module. Claude is asked to add a new method. It reads the file, concludes the method does not exist, and adds it. The method already exists — at line 2400. Claude has introduced a duplicate.

These failures are particularly hard to debug because Claude's analysis is internally consistent. It is not wrong about what it read. It is just missing roughly 30% of the file.

Where Large Files Hide

Files most likely to exceed 2000 lines in typical projects:

  • Generated code (migrations, protobuf outputs, schema snapshots)
  • Long configuration files (Terraform state, complex YAML configs)
  • Comprehensive test suites (conftest.py with many fixtures, large test files)
  • Data fixtures (seed files, mock API responses)
  • Legacy modules that were never refactored (the dreaded utils.py)
  • Compiled or minified assets that somehow ended up in source control

Run this to find them in your current project:

find . \( -name "*.py" -o -name "*.ts" -o -name "*.js" -o -name "*.yaml" -o -name "*.json" \) -print0 \
  | xargs -0 wc -l 2>/dev/null \
  | grep -v ' total$' \
  | sort -rn \
  | head -20

Any file over 2000 lines is a potential trap.
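To narrow the output to just the files above the threshold, the same pipeline can be filtered with awk. A sketch — extend the -name patterns to match your project:

```shell
# List only files over the 2000-line threshold
# ($2 != "total" drops the summary lines wc emits per batch)
find . \( -name "*.py" -o -name "*.ts" -o -name "*.yaml" \) -print0 \
  | xargs -0 wc -l 2>/dev/null \
  | awk '$1 > 2000 && $2 != "total" {print $1, $2}' \
  | sort -rn
```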

Fix 1 — CLAUDE.md Instruction

The first line of defense is a behavioral instruction in your project's CLAUDE.md. Claude follows these well for explicit, prompted tasks:

## File Reading Rules

The Read tool has a 2000-line hard limit per call. It will not warn you when truncation occurs.

Before reading any file that might be large:
1. Run `wc -l <filepath>` to check the line count
2. If the count exceeds 2000, use offset and limit parameters to read in chunks:
   - Chunk 1: offset=0, limit=2000
   - Chunk 2: offset=2000, limit=2000
   - Continue until you reach the end of the file
3. Synthesize your understanding across all chunks before acting

Never assume a single Read call captured the full file.

This works well for prompted tasks where Claude is explicitly asked to read a file. It is less reliable for incidental reads that happen automatically as part of a larger task.
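The chunking arithmetic in step 2 can be sketched as a small loop. This demo builds a hypothetical 4500-line file and prints the sequence of Read calls needed to cover it (the path is an example, not a real tool invocation):

```shell
# Demo: create a 4500-line file, then print the chunked Read calls
# needed to cover it end to end
FILE=/tmp/demo-large.py        # hypothetical demo path
seq 1 4500 > "$FILE"

TOTAL=$(wc -l < "$FILE")
OFFSET=0
while [ "$OFFSET" -lt "$TOTAL" ]; do
  echo "Read(file_path='$FILE', offset=$OFFSET, limit=2000)"
  OFFSET=$((OFFSET + 2000))
done
```

For a 4500-line file this prints three calls, at offsets 0, 2000, and 4000.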

Fix 2 — The PreToolUse Guard Hook

The more robust solution is a hook that intercepts every Read call before it executes. If the target file exceeds 2000 lines and no offset/limit parameters were provided, the hook exits 2 — blocking the read and forcing Claude to chunk it correctly.

This is a complete implementation:

#!/bin/bash
# read-limit-guard.sh
# PreToolUse hook on Read
# Blocks reads of files >2000 lines when no offset/limit is set
#
# Place at: ~/.claude/hooks/read-limit-guard.sh
# Make executable: chmod +x ~/.claude/hooks/read-limit-guard.sh

set -euo pipefail

# Tool input JSON arrives on stdin
TOOL_INPUT=$(cat)

# Extract fields from tool input
# Extract fields from the tool input. Hook payloads may nest the tool
# arguments under a "tool_input" key, so fall back to the top level.
FILE_PATH=$(echo "$TOOL_INPUT" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    d = d.get('tool_input', d) or {}
    print(d.get('file_path', ''))
except Exception:
    print('')
" 2>/dev/null)

OFFSET=$(echo "$TOOL_INPUT" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    d = d.get('tool_input', d) or {}
    val = d.get('offset', '')
    print(str(val) if val != '' else '')
except Exception:
    print('')
" 2>/dev/null)

LIMIT=$(echo "$TOOL_INPUT" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    d = d.get('tool_input', d) or {}
    val = d.get('limit', '')
    print(str(val) if val != '' else '')
except Exception:
    print('')
" 2>/dev/null)

# Skip if no file path resolved
[ -z "$FILE_PATH" ] && exit 0

# Skip if offset or limit already provided — Claude is chunking correctly
[ -n "$OFFSET" ] && exit 0
[ -n "$LIMIT" ] && exit 0

# Skip if file doesn't exist (let Claude handle the error naturally)
[ ! -f "$FILE_PATH" ] && exit 0

# Skip binary and media files — line counts are meaningless for them
case "$FILE_PATH" in
  *.png|*.jpg|*.jpeg|*.gif|*.svg|*.ico|*.webp|\
  *.pdf|*.zip|*.tar|*.gz|*.whl|*.pyc|\
  *.mp4|*.mp3|*.mov|*.avi|*.wav)
    exit 0
    ;;
esac

# Count lines (strip the padding some wc implementations add)
LINE_COUNT=$(wc -l < "$FILE_PATH" 2>/dev/null | tr -d '[:space:]' || echo "0")

# Block if over the limit
if [ "$LINE_COUNT" -gt 2000 ]; then
  echo "READ BLOCKED: '$FILE_PATH' has $LINE_COUNT lines." >&2
  echo "" >&2
  echo "The Read tool is limited to 2000 lines per call." >&2
  echo "To read this file completely, use offset and limit parameters:" >&2
  echo "  - Read(file_path='$FILE_PATH', offset=0, limit=2000)" >&2
  echo "  - Read(file_path='$FILE_PATH', offset=2000, limit=2000)" >&2
  echo "  - Continue until you reach line $LINE_COUNT" >&2
  exit 2
fi

exit 0

Wiring It Into settings.json

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/read-limit-guard.sh"
          }
        ]
      }
    ]
  }
}

After adding this, make the script executable:

chmod +x ~/.claude/hooks/read-limit-guard.sh

Testing the Hook

Verify it works before trusting it:

# Create a test file with 2100 lines
seq 1 2100 > /tmp/test-large-file.txt

# Simulate what Claude's Read tool sends as input
echo '{"file_path": "/tmp/test-large-file.txt"}' | ~/.claude/hooks/read-limit-guard.sh
echo "Exit code: $?"

You should see the blocking message and exit code 2. Then verify that chunked reads pass through:

echo '{"file_path": "/tmp/test-large-file.txt", "offset": 0, "limit": 2000}' | ~/.claude/hooks/read-limit-guard.sh
echo "Exit code: $?"

This should exit 0 silently — offset is set, so Claude is already chunking correctly.

Edge Cases the Script Handles

Binary and media files. Line counts are meaningless for images, PDFs, and compiled files. The script skips these by extension.

Non-existent files. If the file does not exist, the hook exits 0 and lets Claude encounter the error naturally. The hook is not the right place to handle missing files.

Already-chunking reads. If offset or limit is set, Claude is already working around the limit. The hook exits 0 — no need to block a read that is already being done correctly.

Tool input parsing failures. If the JSON parse fails for any reason, the script exits 0. Fail open on the guard: it is better to allow a potentially problematic read than to block all reads because of a parsing error.

The Generalizable Pattern

The read-limit guard is an instance of a broader pattern: PreToolUse hooks as guardrails.

The pattern:

  1. Identify a mistake Claude makes silently (no error, no warning)
  2. Write a PreToolUse hook that detects the conditions leading to that mistake
  3. Exit 2 with a clear error message explaining what to do instead
  4. Wire it into settings.json

Other applications of this pattern:

  • Block git push to main directly (exit 2 with "create a branch and PR instead")
  • Block file writes to production config directories
  • Block bash commands that include API keys as arguments (credential leak prevention)
  • Block large file creation without confirmation

Every one of these is a silent mistake that Claude might make without guardrails. With a PreToolUse hook and exit 2, they become enforced rules.
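As an illustration of the first item, here is what the guard logic for a push-to-main blocker might look like, written as a shell function so the exit-code contract is easy to see. In a real hook you would read the JSON from stdin and `exit 2` instead of `return 2`, exactly as read-limit-guard.sh does, and wire it with a "Bash" matcher; the function name and the naive substring matching are assumptions for this sketch:

```shell
# push_guard: returns 2 to block, 0 to allow — the same contract a
# PreToolUse hook signals with its exit code.
push_guard() {
  local cmd
  cmd=$(printf '%s' "$1" | python3 -c "
import sys, json
try:
    d = json.load(sys.stdin)
    d = d.get('tool_input', d)   # payload may nest under tool_input
    print(d.get('command', ''))
except Exception:
    print('')                    # fail open on parse errors
")
  case "$cmd" in
    *'git push'*' main'*|*'git push'*' master'*)
      echo 'PUSH BLOCKED: create a branch and open a PR instead.' >&2
      return 2 ;;
  esac
  return 0
}
```

Calling push_guard with a tool-input JSON whose command contains "git push origin main" prints the blocked message and returns 2; any other command returns 0.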

Lesson 183 Drill

  1. Copy the read-limit-guard.sh script to ~/.claude/hooks/
  2. Make it executable with chmod +x
  3. Wire it into your settings.json PreToolUse hooks
  4. Find the three largest files in your current project
  5. Test that the hook blocks a simulated large-file read
  6. Test that chunked reads (with offset/limit) pass through correctly

When you see Claude automatically chunk its reads on large files — without you having to remind it — the hook is doing its job.