Track AI coding costs & generate client invoices
Estimate AI token costs across models, add your developer rate, and generate invoice-ready breakdowns for clients.
| Model | Input $/1M tokens | Output $/1M tokens | Context | Best For |
|---|---|---|---|---|
| Claude Opus 4 | $15.00 | $75.00 | 200K | Complex architecture, debugging |
| Claude Sonnet 4 | $3.00 | $15.00 | 200K | Everyday coding, balanced |
| GPT-4o | $2.50 | $10.00 | 128K | General coding, multimodal |
| GPT-4.1 | $2.00 | $8.00 | 1M | Long context coding |
| Gemini 2.5 Pro | $1.25 | $10.00 | 1M | Large codebases |
| Claude Haiku 4 | $0.80 | $4.00 | 200K | Quick edits, boilerplate |
| DeepSeek V3 | $0.27 | $1.10 | 128K | Budget coding |
| GPT-4o Mini | $0.15 | $0.60 | 128K | High-volume, simple tasks |
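The table above translates directly into a per-session cost estimate. A minimal sketch, assuming the rates shown (model prices change often, so verify against each provider's current pricing page before invoicing; the model-name keys here are illustrative):

```python
# Prices are $ per 1M tokens, taken from the table above.
PRICES = {  # model: (input rate, output rate)
    "claude-opus-4": (15.00, 75.00),
    "claude-sonnet-4": (3.00, 15.00),
    "gpt-4o": (2.50, 10.00),
    "gpt-4.1": (2.00, 8.00),
    "gemini-2.5-pro": (1.25, 10.00),
    "claude-haiku-4": (0.80, 4.00),
    "deepseek-v3": (0.27, 1.10),
    "gpt-4o-mini": (0.15, 0.60),
}

def token_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one session's token usage."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 500K input + 100K output on Sonnet 4 is $1.50 + $1.50 = $3.00.
```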
Log your AI model, token usage, and time for each coding session. Most API dashboards provide usage data. Cursor shows token counts in settings.
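One lightweight way to keep that log is a record per session. This is a sketch, not a prescribed schema; the field names and sample entries are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Session:
    day: date
    model: str
    input_tokens: int
    output_tokens: int
    review_hours: float  # your hands-on time for this session
    note: str = ""       # what the session produced

# One entry per coding session, filled from your API dashboard:
log = [
    Session(date(2025, 6, 2), "claude-sonnet-4", 420_000, 95_000, 1.5, "auth refactor"),
    Session(date(2025, 6, 3), "claude-haiku-4", 120_000, 30_000, 0.5, "boilerplate CRUD"),
]
```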
Break invoices into AI token costs and developer review time. Clients appreciate the transparency, and the breakdown justifies your rates.
A common markup is 1.5x-3x on AI costs: you're providing expertise in prompt engineering, code review, and architecture decisions, not just reselling API calls.
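Putting those two ideas together, an invoice line item is the marked-up AI cost plus review time at your rate. A minimal sketch; the $120/hour rate and 2x markup are placeholder assumptions, not recommendations:

```python
def invoice_lines(ai_cost: float, review_hours: float,
                  dev_rate: float = 120.0, markup: float = 2.0) -> dict:
    """Split a bill into a marked-up AI line and a review-time line.

    dev_rate ($/hour) and markup are illustrative defaults.
    """
    ai_line = round(ai_cost * markup, 2)
    time_line = round(review_hours * dev_rate, 2)
    return {
        f"AI usage ({markup}x markup)": ai_line,
        "Developer review": time_line,
        "Total": round(ai_line + time_line, 2),
    }

# $3.00 of tokens at 2x markup, plus 1.5h of review at $120/h:
# AI usage = $6.00, review = $180.00, total = $186.00.
```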
Use expensive models (Opus) for complex tasks and cheaper models (Haiku, GPT-4o Mini) for boilerplate. This keeps client costs down and protects your margins.
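The savings from routing are easy to quantify with the table rates. Sketching the same hypothetical boilerplate job (200K input, 50K output tokens) on two tiers:

```python
def cost(in_tok: int, out_tok: int, in_rate: float, out_rate: float) -> float:
    """Dollar cost given per-1M-token rates from the pricing table."""
    return (in_tok * in_rate + out_tok * out_rate) / 1_000_000

# Same job, two models:
opus = cost(200_000, 50_000, 15.00, 75.00)  # $3.00 + $3.75 = $6.75
haiku = cost(200_000, 50_000, 0.80, 4.00)   # $0.16 + $0.20 = $0.36
# Haiku runs the boilerplate job at roughly 1/19th of the Opus price.
```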
Note which features were AI-assisted vs hand-coded. Some clients want to understand the AI's role in their codebase.
Prorate your Cursor/Copilot subscription costs and include them. A $20/month subscription spread across 160 billable hours is only $0.125/hour, but it adds up across projects.
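The proration is simple arithmetic: the monthly fee divided by your billable hours, times the hours a given project consumed. A small sketch using the $20/160-hour figures from above:

```python
def prorated_tool_cost(monthly_fee: float, billable_hours: float,
                       project_hours: float) -> float:
    """Share of a flat subscription fee attributable to one project."""
    return round(monthly_fee / billable_hours * project_hours, 2)

# $20/month over 160 billable hours = $0.125/hour,
# so a 40-hour project carries $5.00 of subscription cost.
```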