tokenburn v0.3

where the money went · by token type

type           est. cost   % of cost
cache reads    $10420      55.9%
cache creates  $6532       35.0%
output         $1674       9.0%
input          $27.80      0.1%
costs estimated using Opus 4.7 rates ($15 input / $75 output / $18.75 cache write / $1.50 cache read per MTok) · actual spend may differ for Sonnet/Haiku turns
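A minimal sketch of how these estimates are derived, assuming the four rates above apply to every turn (the token counts in the example are hypothetical):

```python
# Per-MTok rates from the note above: input $15, output $75,
# cache write $18.75, cache read $1.50 (Opus rates assumed for all turns).
RATES = {"input": 15.00, "output": 75.00, "cache_create": 18.75, "cache_read": 1.50}

def estimate_cost(tokens_by_type: dict[str, int]) -> float:
    """Estimated dollar cost from per-type token counts."""
    return sum(count / 1_000_000 * RATES[kind] for kind, count in tokens_by_type.items())

# e.g. 2M cache reads plus 100K output tokens:
print(estimate_cost({"cache_read": 2_000_000, "output": 100_000}))  # → 10.5
```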

model spend · concentration

model                        spend     % of total
claude-opus-4-7              $10576    60.4%
claude-opus-4-6              $6783     38.7%
claude-sonnet-4-6            $99.48    0.6%
claude-haiku-4-5-20251001    $41.77    0.2%
claude-opus-4-5-20251101     $7.661    0.0%
claude-sonnet-4-5-20250929   $0.640    0.0%
nemotron-3-nano              $0.315    0.0%

fix opportunities · 5 found

99% of spend on Opus
Opus is 5× more expensive than Sonnet for both input and output tokens. Routine work (file edits, summaries, explanations) does not require Opus.
Set the model to Sonnet for non-critical sessions. Potential savings: ~79% of total spend.
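The ~79% figure follows from simple arithmetic, sketched here under the assumption that token volume stays the same and every Opus turn is re-priced at one fifth:

```python
opus_share = 0.99     # fraction of current spend on Opus models
price_ratio = 5       # Opus is ~5x Sonnet per token, input and output alike

# Same token volume, Opus turns re-priced at Sonnet rates:
new_spend = (1 - opus_share) + opus_share / price_ratio
savings = 1 - new_spend
print(f"{savings:.0%}")  # → 79%
```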
84% of spend from home-directory sessions
Sessions started without a project directory load your full ~/.claude/ context (global CLAUDE.md, all memory files) on every turn — creating more cache writes and longer prompts.
Always run `claude` from inside a project folder. Home-dir sessions also don't get project-level cost attribution, making it harder to see what's expensive.
Cache writes = $6532 (37% of spend)
Cache creates happen when Claude loads a large context for the first time in a session. Long CLAUDE.md files, large skills, and many MCP tool lists all trigger expensive cache writes.
Audit your global CLAUDE.md — trim to under 2K tokens. Disable MCP servers you don't use in every session. Each new session re-creates the cache.
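A quick way to check that budget, as a rough sketch: the common ~4-characters-per-token heuristic, not Claude's actual tokenizer, and the path is only the default global location.

```python
from pathlib import Path

def rough_tokens(path: Path) -> int:
    """Approximate token count at ~4 characters per token (heuristic only)."""
    return len(path.read_text(encoding="utf-8")) // 4

# e.g. check the global file against the ~2K-token budget:
# rough_tokens(Path.home() / ".claude" / "CLAUDE.md") < 2000
```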
Cache reads = $10420 — but that's good
High cache-read cost means Claude is successfully reusing context instead of re-reading raw files. You're already saving money on repeated context — this is working as intended.
No action needed. Keep sessions long (don't restart Claude mid-task) to maximize cache reuse within a session.
72% of injections unclassified
These are context injections tokenburn couldn't identify as a skill, hook, or MCP result. They may be large tool results, system prompts, or custom contexts from plugins.
Check your /sources page for the full injection list. Unclassified injections with high token counts are worth investigating — they may be redundant context being added repeatedly.

injected context · by source type

these tokens are injected INTO your prompts by skills, hooks, and MCP servers — on top of what you type

source         est. tokens   % of injected
unknown        4.90M         72.1%
skill_inject   1.49M         22.0%
hook           401.7K        5.9%
injected tokens become input tokens on each turn they appear · see /sources for name-level drill-down
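To see why per-turn injections add up, a sketch with hypothetical numbers: the injection size and turn count are illustrative, and the rate is the Opus input price from the estimate note above.

```python
tokens_per_turn = 8_000   # hypothetical skill/hook injection size
turns = 200               # hypothetical session length
rate_per_mtok = 15.0      # Opus input rate, $/MTok

# The injection is re-sent as input on every turn it appears:
cost = tokens_per_turn * turns / 1_000_000 * rate_per_mtok
print(f"${cost:.2f}")     # → $24.00
```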