About AIPriceBoard
The definitive resource for comparing AI API pricing across every major LLM provider — updated regularly so you always know what you're paying for.
Our Mission
AI API pricing changes constantly. Providers release new models, adjust per-token rates, and introduce tiered pricing that makes it hard for developers and businesses to make informed decisions. AIPriceBoard was built to solve that problem — giving you a single, clear view of what every major LLM costs per 1M input and output tokens.
Whether you're a solo developer prototyping an AI application, a startup optimizing for cost-efficiency, or an enterprise evaluating multi-model strategies, AIPriceBoard gives you the data you need to make smart choices.
What We Track
- Input token pricing — cost per 1M tokens sent to the model
- Output token pricing — cost per 1M tokens generated by the model
- Context window size — maximum tokens the model can process at once
- Model capabilities — vision, function calling, fine-tuning availability
- Provider tiers — free tiers, rate limits, and enterprise pricing
- Speed benchmarks — tokens per second for latency-sensitive applications
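To illustrate how the per-1M-token rates above translate into the cost of a single API call, here is a minimal sketch. The rates and token counts used are hypothetical examples, not actual provider pricing:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Dollar cost of one API call, given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate_per_m \
         + (output_tokens / 1_000_000) * output_rate_per_m

# Hypothetical rates: $3.00 per 1M input tokens, $15.00 per 1M output tokens.
# A call sending 2,000 tokens and receiving 800 tokens back:
cost = request_cost(2_000, 800, input_rate_per_m=3.0, output_rate_per_m=15.0)
print(f"${cost:.4f}")  # 0.006 (input) + 0.012 (output) = $0.0180
```

Note that output tokens typically cost several times more than input tokens, which is why long generations dominate the bill even when prompts are large.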
Providers We Cover
Who Uses AIPriceBoard
Data Accuracy Notice: Pricing data on AIPriceBoard is sourced from official provider documentation and pricing pages. AI API prices change frequently — always verify current pricing directly with the provider before making production or budget decisions. AIPriceBoard is not affiliated with OpenAI, Anthropic, Google, Meta, Mistral, or any other AI company.