What is Gram?
Gram is a free, privacy-focused token estimator and cost calculator for AI language models. It
helps you estimate how many tokens your text will use and calculate the approximate cost across
different AI providers including OpenAI, Anthropic, Google, DeepSeek, and Mistral.
How accurate are the token estimates?
Token counts are based on approximation algorithms and may differ slightly from actual API
responses. Different models use different tokenizers (GPT-based or Llama-based), which can
produce varying token counts for the same text. Gram is designed to help you estimate costs and
plan API usage, not to provide exact counts for billing purposes.
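For intuition, here is a minimal TypeScript sketch of what an approximate counter can look like. The heuristic (roughly 4 characters or 0.75 words per token) and the function name are illustrative assumptions, not Gram's actual algorithm; real tokenizers (BPE-based) can diverge noticeably.

```typescript
// Rough token estimation sketch -- illustrative only, not Gram's actual algorithm.
// Assumes the common heuristic of ~4 characters (or ~0.75 words) per token for English text.
function estimateTokens(text: string): number {
  const charEstimate = text.length / 4; // ~4 characters per token
  const wordEstimate =
    text.trim().split(/\s+/).filter(Boolean).length / 0.75; // ~0.75 words per token
  // Average the two heuristics to smooth out edge cases like long words or dense punctuation.
  return Math.round((charEstimate + wordEstimate) / 2);
}

console.log(estimateTokens("How many tokens does this sentence use?")); // rough estimate, ~10
```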
Is my data sent to any servers?
No! Gram runs entirely in your browser. All token estimation and calculations happen on your
device. Your text never leaves your computer, ensuring complete privacy.
How is pricing calculated?
Pricing is based on the publicly available rates in each provider's documentation. Costs are
calculated using the estimated token count multiplied by the provider's rate per million tokens.
Prices may change over time, so always verify with the official provider documentation for
production use.
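To make the arithmetic concrete, here is a small TypeScript sketch of that formula. The rates, model shape, and token counts below are placeholders for illustration, not real provider prices.

```typescript
// Illustration of the cost formula: estimated tokens x (rate per million tokens).
// The rates below are placeholder numbers, not real provider pricing.
interface ModelRate {
  inputPerMillion: number;  // USD per 1M input tokens (placeholder)
  outputPerMillion: number; // USD per 1M output tokens (placeholder)
}

function estimateCost(inputTokens: number, outputTokens: number, rate: ModelRate): number {
  return (inputTokens / 1_000_000) * rate.inputPerMillion +
         (outputTokens / 1_000_000) * rate.outputPerMillion;
}

// Example: 1,200 input tokens and 300 output tokens at hypothetical rates.
const exampleRate: ModelRate = { inputPerMillion: 3.0, outputPerMillion: 15.0 };
console.log(estimateCost(1_200, 300, exampleRate).toFixed(6)); // "0.008100"
```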
What models are supported?
Gram supports major models from OpenAI (GPT), Anthropic (Claude), Google (Gemini),
DeepSeek, and Mistral. You can compare costs across all supported models using the "Compare
Models" feature.
Can I upload files?
Yes! You can drag and drop text files into the input area, or paste content directly.
Gram will automatically process and tokenize the content.
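For the curious, here is a minimal sketch of how a browser app can read a dropped text file without sending it anywhere. The element id and handler are hypothetical, not Gram's actual code.

```typescript
// Sketch of reading a dropped text file entirely in the browser -- no upload involved.
// The "input-area" element id is an assumption for illustration.
const dropZone = document.getElementById("input-area");

dropZone?.addEventListener("dragover", (event) => event.preventDefault());

dropZone?.addEventListener("drop", async (event: DragEvent) => {
  event.preventDefault();
  const file = event.dataTransfer?.files[0];
  if (!file) return;
  const text = await file.text(); // read the file's contents locally
  // The text can now be handed to the local token estimator.
  console.log(`Loaded ${file.name}: ${text.length} characters`);
});
```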
What is the history feature?
Gram saves your recent estimations locally in your browser (using localStorage). This allows you
to quickly revisit previous calculations. Your history is private and stored only on your
device.
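As a rough sketch of how such a history could be kept, the snippet below stores records under a single localStorage key. The key name and record shape are assumptions for illustration, not Gram's actual schema.

```typescript
// Sketch of storing recent estimations in localStorage -- key name and record
// shape are assumptions, not Gram's actual schema.
interface EstimationRecord {
  text: string;
  tokens: number;
  timestamp: number;
}

const HISTORY_KEY = "gram-history";

function saveEstimation(record: EstimationRecord, limit = 20): void {
  const history: EstimationRecord[] = JSON.parse(localStorage.getItem(HISTORY_KEY) ?? "[]");
  history.unshift(record); // newest first
  localStorage.setItem(HISTORY_KEY, JSON.stringify(history.slice(0, limit)));
}

function loadHistory(): EstimationRecord[] {
  return JSON.parse(localStorage.getItem(HISTORY_KEY) ?? "[]");
}
```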
Who built Gram?
Gram is built by Peter Abu-Ekpeshie.