Great resource. One topic that could add value to the mastery guide: context pre-filtering as a token optimization strategy.
I tracked Claude Code on a real project (~800 files) and found it averages 23 tool calls per prompt, consuming ~180K tokens. Roughly 70% of that context is files the model reads and ignores.
Adding an MCP server that pre-indexes the codebase (tree-sitter AST + dependency graph) and serves only the relevant code per query cut average tool calls to 2.3 and per-task cost from $0.78 to $0.33.
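To make the idea concrete, here is a minimal sketch of the pre-filtering step. It uses Python's stdlib `ast` module as a stand-in for the tree-sitter index, and the function names (`build_index`, `relevant_files`) are illustrative, not the actual server's API:

```python
import ast

def build_index(files: dict[str, str]):
    """Pre-index sources: symbols defined per file, plus an import graph.
    (Stand-in for the tree-sitter AST + dependency graph described above.)"""
    defs, imports = {}, {}
    for path, source in files.items():
        tree = ast.parse(source)
        # Top-level definitions this file provides.
        defs[path] = {
            n.name for n in tree.body
            if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        }
        # Modules this file depends on.
        imports[path] = {
            a.name for n in ast.walk(tree)
            if isinstance(n, ast.Import) for a in n.names
        } | {
            n.module for n in ast.walk(tree)
            if isinstance(n, ast.ImportFrom) and n.module
        }
    return defs, imports

def relevant_files(defs, imports, query_symbols: set[str]) -> set[str]:
    """Serve only files that define a queried symbol, plus their direct dependents,
    instead of letting the model read (and discard) the whole repo."""
    hits = {p for p, syms in defs.items() if syms & query_symbols}
    modules = {p.removesuffix(".py") for p in hits}
    dependents = {p for p, mods in imports.items() if mods & modules}
    return hits | dependents
```

Example: querying for `charge` returns only the defining file and its one dependent, so the rest of the repo never enters context:

```python
repo = {
    "billing.py": "def charge(user): ...",
    "api.py": "import billing\ndef handler(): ...",
    "unrelated.py": "def misc(): ...",
}
defs, imports = build_index(repo)
relevant_files(defs, imports, {"charge"})  # {"billing.py", "api.py"}
```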
Could be a useful addition to the guide as an advanced optimization technique.
Data: vexp.dev/benchmark · FastAPI writeup: reddit.com/r/ClaudeCode