Credits


Details on Warp credits and how they are calculated.

Any interaction with Warp’s Agent consumes credits. Credits are primarily based on AI usage — the number of credits a task consumes varies based on the size and complexity of your codebase, the size of the task, the model you’re using, the amount of context the agent needs to gather, and more.

Credits also include a small hosting fee, charged only when agents run in the cloud on Warp-hosted infrastructure. For details on cloud agent credits, see Cloud Agent Credits.

Each interaction consumes at least one credit, though more complex interactions may use multiple credits. Because of factors such as codebase size, model choice, number of tool calls, and the nature of LLMs, credit usage is non-deterministic — two similar prompts can still use a different number of credits.

Since there’s no exact formula for predicting usage, we recommend building an intuition by experimenting with different prompts and models and tracking how many credits they consume.

Tracking your credit usage

In an Agent conversation, a turn represents a single exchange (a response from the LLM). To see how many credits a turn consumed, hover over the credit count chip at the bottom of the Agent’s response:

The conversation usage footer shows how many credits a conversation has consumed, and breaks down the usage by credits, tool calls, context window, files changed, diffs applied, and more.

  • Seat-level allocation: On team plans, credit limits apply per seat — each team member has their own allowance. Individual users (not on a team) also have their own credit allocation.
  • Cloud Agent Credits: Individual users can run cloud agents via CLI/API using their normal Warp credits, Cloud Agent Credits, or a Build plan with available credits. Integrations (Slack, Linear) require team membership.
  • Hitting the credit limit: Once you hit your monthly credit limit, your access depends on your plan. On the Free plan, AI access stops until your next billing cycle. On paid plans, you can continue using AI with usage-based billing via Add-on Credits.

In addition to direct Agent conversations, the following features also consume credits:

  • Generate helps you look up commands and suggestions as you type. As you refine your input, multiple credits may be used before you select a final suggestion.
  • AI Autofill in Workflows consumes a credit each time it runs.

A credit in Warp is a unit of work representing the total processing required to complete an interaction with an Agent. It is not the same as “one user message” — instead, it scales with the number of tokens processed during the interaction.

In short: the more tokens used, the more credits consumed.
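As a purely illustrative sketch of that scaling — the threshold and rounding below are invented placeholders, not Warp’s actual formula — token-based crediting might look like:

```python
import math

def estimate_credits(tokens_processed: int, tokens_per_credit: int = 50_000) -> int:
    """Toy model: every interaction costs at least one credit, and larger
    token counts consume proportionally more. The tokens_per_credit value
    is a made-up placeholder, not a real Warp constant."""
    return max(1, math.ceil(tokens_processed / tokens_per_credit))

print(estimate_credits(10_000))   # small, quick task -> 1
print(estimate_credits(240_000))  # large, context-heavy task -> 5
```

The exact numbers don’t matter; the shape does — credit usage grows with tokens processed, with a one-credit floor per interaction.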

Several factors influence how many credits a single interaction consumes.

Smaller, faster models typically consume fewer credits than larger, reasoning-based models.

For example, Claude Opus 4.6 and Claude Opus 4.5 tend to consume the most tokens and credits in Warp, followed by Claude Sonnet 4.6, GPT-5.4, GPT-5.3 Codex, Gemini 3 Pro, and others in roughly that order. This generally correlates with model pricing as well.

Warp’s Agents make a variety of tool calls, including:

  • Searching for files (grep)
  • Retrieving and reading files
  • Making and applying code diffs
  • Gathering web or documentation context
  • Running other utilities

Some prompts require only a couple of tool calls, while others may trigger many — especially if the Agent needs to explore your development environment, navigate a large codebase, or apply complex changes. More tool calls = more credits.
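To make “more tool calls = more credits” concrete, here is a hedged toy sketch (all token counts are invented) of how each tool result joins the context the model must reprocess, inflating the total tokens for the turn:

```python
def simulate_turn(tool_results: list[int], prompt_tokens: int = 2_000) -> int:
    """Toy agent turn: the prompt plus every accumulated tool result is
    fed back to the model on each step, so each tool call inflates the
    total tokens processed. All numbers here are illustrative only."""
    context = prompt_tokens
    total = 0
    for result_tokens in tool_results:
        total += context          # model reads the current context
        context += result_tokens  # tool output joins the context
    total += context              # final response pass
    return total

# A couple of tool calls vs. many: same prompt, very different totals.
print(simulate_turn([500, 500]))   # 7500 tokens
print(simulate_turn([500] * 10))   # 49500 tokens
```

Because the growing context is re-read at every step, token usage (and with it credit usage) grows faster than linearly in the number of tool calls.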

Some tasks are straightforward and may require only a single quick response, without much thinking or reasoning. Others can involve multiple stages—such as planning, generating intermediate outputs, verifying results, applying changes, and self-correcting—each of which can add to the credits count.

Prompts that include large amounts of context (such as attached blocks, long user query messages, etc.) or file attachments like images may also increase the number of credits used due to increased token consumption.

Many model prompts include repeated content, like system instructions:

  • Cache hits: if the model provider can match a prefix or a part of the prompt from a past request, it can reuse results from the cache, reducing both tokens consumed and latency.
  • Cache misses: if no match is found, the full prompt may be processed again, which can increase credit consumption.

Because cache results depend on model provider behavior and timing, two similar prompts may still have different credit counts depending on when you run them.
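The hit/miss behavior above can be sketched with a toy prefix cache — a deliberate simplification of provider-side prompt caching, whose real matching and expiry rules differ by provider:

```python
def tokens_to_process(prompt: list[str], cached_prefixes: list[list[str]]) -> int:
    """Toy prompt cache: find the longest cached prefix matching this
    prompt and only 'process' the tokens after it. Real providers use
    their own matching and expiry rules; this is illustrative only."""
    best = 0
    for prefix in cached_prefixes:
        if prompt[:len(prefix)] == prefix and len(prefix) > best:
            best = len(prefix)
    return len(prompt) - best

system = ["You", "are", "a", "helpful", "agent."]
cache = [system]

# Cache hit: the shared system-prompt prefix is reused, only 3 new tokens.
print(tokens_to_process(system + ["Fix", "the", "bug"], cache))        # 3
# Cache miss: nothing matches, all 4 tokens are processed from scratch.
print(tokens_to_process(["Unrelated", "question", "about", "rust"], cache))  # 4
```

This is why repeated content like system instructions is comparatively cheap on cache hits, while a cold or expired cache means the full prompt is billed again.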

These are the most common factors affecting credit usage, though there are others. Understanding them can help you manage your credits more efficiently and get the most from your plan.

Cloud Agent Credits

Cloud Agent Credits are a type of credit consumed only by cloud agent runs — AI requests that run on Warp-hosted compute.

The following scenarios use Cloud Agent Credits:

  • First-party integrations — Running agents through Slack or Linear integrations
  • Cloud agent runs — Using oz agent run-cloud via the CLI
  • Oz API — Running agents through Warp’s Oz API
  • Cloud Mode — Running an agent from Cloud Mode in the Warp app

The following scenarios do not use Cloud Agent Credits:

  • Local agent runs — Using oz agent run on your local machine
  • Self-hosted compute — Using oz agent run on GitHub Actions, CI/CD pipelines, or other self-hosted infrastructure