# Agent FAQs

## General

### What data is sent and/or stored when using Agents in Warp?

See our [Privacy Page](/support-and-community/privacy-and-security/privacy/) for more information on how we handle data used by Agents in Warp.

### What happened to the old Warp AI chat panel?

Agent Mode has replaced the previous AI chat panel and covers all of the chat panel's use cases and more: not only can it run commands for you, it can also gather context without you needing to copy and paste. To start a similar chat panel, click the AI button in the menu bar to open a new AI pane.

### Is my data used for model training?

Warp reserves the right to use data collected to train models and improve Warp. Warp has Zero Data Retention agreements with all of its model providers (e.g. Anthropic, OpenAI). Learn more about telemetry on our [Privacy Page](/support-and-community/privacy-and-security/privacy/).

### What model are you using for Agent Mode?

Warp supports a curated list of LLMs from providers such as OpenAI, Anthropic, and Google (Gemini). To view the full list of supported models and learn how to switch between them, visit the [Model Choice](/agent-platform/capabilities/model-choice/) page.

### Can I use my own LLM API key?

Warp supports [Bring Your Own Key (BYOK)](/support-and-community/plans-and-billing/bring-your-own-api-key/) for users on paid plans (starting with Build). You can connect your own Anthropic, OpenAI, or Google API keys to route requests directly through your account. Organizations on the Enterprise plan can additionally enable managed "Bring Your Own LLM" configurations to meet strict security or compliance requirements.

## Billing

Every Warp plan includes a set number of credits per user per month. See [pricing](https://www.warp.dev/pricing) to compare plans. Credit limits apply to Agent Mode, Generate (Legacy), and [AI autofill in Workflows](/knowledge-and-collaboration/warp-drive/workflows/#ai-autofill).
For questions about what counts as a credit, what counts as a token, and how often credits refresh, see [Credits](/support-and-community/plans-and-billing/credits/) and the [Plans & Pricing](/support-and-community/plans-and-billing/plans-pricing-refunds/) page.

## Common AI error messages

#### "Message token limit exceeded" error

This error means your input (plus attached context) exceeds the maximum context window of the model you're using. If you exceed the limit for your selected model, you may receive no output.

To fix this, try:

* Starting a new conversation
* Reducing the number of blocks or lines attached to your query

#### "Monthly request limit exceeded" or "Monthly credit limit exceeded" errors

Once you exceed your monthly credit limit (see [pricing](https://www.warp.dev/pricing) for current limits), premium models are disabled until your quota resets at the start of your next billing cycle. On paid plans with Add-on Credits, you can continue using AI with usage-based billing.

#### Request failed with error: QuotaLimit

Once you exceed your AI token limits, all models are disabled. Note that credits and tokens are tracked separately: even though a plan may include a set number of credits, it also has a limited number of tokens.

#### Request failed with error: ErrorStatus (403, "Your account has been blocked from using AI features")

This message means your account has been blocked from using AI features, typically due to a violation of our [Terms of Service](https://www.warp.dev/terms-of-service) or suspected abuse (e.g. attempting to bypass credit or token limits). If you believe this was an error, please contact our team at [appeals@warp.dev](mailto:appeals@warp.dev); we'll review your case and respond as soon as possible.

:::caution
Any error that does not mention appeals@warp.dev is unrelated to being blocked and should be reported as feedback or a bug. See [Sending Us Feedback](/support-and-community/troubleshooting-and-support/sending-us-feedback/) for more.
:::

## Gathering AI debugging ID

If you run into issues with the Agent, we may ask for the AI debugging ID to troubleshoot the specific conversation. To gather it, see [Gathering AI Debugging ID](/support-and-community/troubleshooting-and-support/sending-us-feedback/#gathering-ai-debugging-id) for detailed steps.
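As a closing tip on the "Message token limit exceeded" error above, the sketch below shows why large attachments trigger it. It assumes the common rule of thumb of roughly 4 characters per token for English text and a placeholder 128,000-token window; neither is Warp's actual tokenizer or a real model limit, so treat this purely as a back-of-the-envelope estimate.

```python
# Rough sketch: estimate whether a prompt plus attached context is likely to
# exceed a model's context window. The ~4 characters-per-token ratio is a
# common heuristic, not Warp's actual tokenizer, and the window size below
# is a placeholder; check your selected model's real limit.

CHARS_PER_TOKEN = 4          # heuristic; varies by tokenizer and language
CONTEXT_WINDOW = 128_000     # hypothetical limit for the selected model

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_window(prompt: str, attached_blocks: list[str]) -> bool:
    """Return True if the combined input is likely under the context window."""
    total = estimate_tokens(prompt) + sum(estimate_tokens(b) for b in attached_blocks)
    return total < CONTEXT_WINDOW

# A megabyte of attached logs is ~250k estimated tokens, well past the window.
print(fits_in_window("explain this error", ["x" * 1_000_000]))  # False
```

If an estimate like this comes out close to or over the limit, the fixes listed earlier (start a new conversation, attach fewer blocks or lines) are the way out.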