AI Credits
How credits work, what affects cost, and real examples of credit usage for the AI Documentation Agent and AI Assistant.
How AI credits work
Documentation.AI uses credits for two AI features:
- The AI Documentation Agent (editor-facing — helps you write and edit docs)
- The AI Assistant or Ask AI widget (end-user-facing — answers questions for your users)
Only these features consume credits. Browsing, editing, publishing, and normal site usage do not consume credits.
Credits reset every billing cycle. Unused credits do not roll over.
AI Assistant / Ask AI
The AI Assistant powers the on-page Ask AI widget for your end users.
Each end-user question costs a flat 0.1 credit, regardless of answer length or which page it is asked from.
For example, on the Professional plan (500 credits), your users can ask up to 5,000 questions per month.
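The capacity math above can be sketched as a quick calculation. The 0.1-credit-per-question rate and the Professional plan's 500 credits come from this page; any other credit amount you pass in is a placeholder:

```python
# Each Ask AI question costs a flat 0.1 credit (per this page).
COST_PER_QUESTION = 0.1

def questions_per_cycle(plan_credits: float) -> int:
    """Maximum end-user questions a plan's monthly credits can cover."""
    return round(plan_credits / COST_PER_QUESTION)

print(questions_per_cycle(500))  # Professional plan -> 5000
```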
AI Documentation Agent
The AI Documentation Agent helps you create, edit, and organize documentation. Documentation.AI recently upgraded the writer and code analysis model stack, so more capable models can improve output quality, but complex tasks may use more credits as a result.
What determines credit cost
Four practical factors affect how many credits a task uses:
- Number of steps — Each action the agent takes, such as reading files, searching, editing, or validating, adds to the total. A small change may take 2–3 steps, while a broader task can take many more.
- Amount of content processed — Larger files, more pages, and longer conversations require the agent to read and work through more content. A short page edit costs less than updating a large section of your docs.
- Type of operation — Reading and searching usually cost less than writing, rewriting, or validating content. Tasks that create new pages or update multiple files generally use more credits.
- Model usage and task complexity — Some tasks require deeper planning, code-aware analysis, or background validation. When the system uses more capable models to complete a more complex task, credit usage can be higher.
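The four factors above can be combined into a rough, purely illustrative estimator. Every weight below is invented for the sketch; this is not Documentation.AI's actual billing formula:

```python
# Illustrative only: invented weights, not the real pricing model.
STEP_WEIGHTS = {
    "read": 0.10,      # reading files / searching is cheap
    "search": 0.10,
    "write": 0.30,     # writing and rewriting cost more
    "validate": 0.25,
}

def estimate_credits(steps, content_factor=1.0, model_factor=1.0):
    """Rough estimate: sum per-step weights, scaled up for larger
    content (content_factor > 1.0) and more capable models
    (model_factor > 1.0)."""
    base = sum(STEP_WEIGHTS[s] for s in steps)
    return round(base * content_factor * model_factor, 2)

# A small edit: read the file, write the change, validate the result.
print(estimate_credits(["read", "write", "validate"]))  # 0.65
```

With these made-up weights, a three-step edit lands inside the 0.4–0.7 "quick edit" range shown in the table below, which is the intuition the factors are meant to convey.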
Typical credit usage
These examples reflect current agent behavior. If you saw older examples elsewhere, note that complex code-aware or background tasks may now cost roughly 2–3x more, because the agent can do more planning, analysis, and validation during a request.
| Task | Steps | Approx. credits | Example |
|---|---|---|---|
| Quick edit | 2–3 | 0.4 – 0.7 | Fix a typo, change a heading, update a link |
| Single page edit | 3–4 | 0.8 – 1.3 | Rewrite a section, add a callout, update a code example |
| Navigation change | 3–4 | 0.8 – 1.3 | Add a tab, move a page to a different group, add a nav icon |
| Structural edit | 4–6 | 1.5 – 2.8 | Reorganize navigation groups, add access roles, update site config |
| Create new page | 3–5 | 1.2 – 2.2 | Create a new MDX page with frontmatter, sections, and components |
| Multi-page task | 8–15 | 3.0 – 6.5 | Create 5+ pages, update navigation, validate structure |
| Large or iterative task | 15–30 | 7.0 – 18.0 | Import 10+ pages from external source, restructure entire navigation, long back-and-forth conversations with multiple revisions |
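The worked examples on this page quote dollar figures at a rate of $0.10 per credit (for example, 0.6 credits is shown as $0.06). Assuming that rate holds, converting a credit estimate from the table to dollars is a one-liner:

```python
# Rate inferred from this page's examples (0.6 credits = $0.06).
USD_PER_CREDIT = 0.10

def credits_to_usd(credits: float) -> float:
    """Approximate dollar cost of a task, given its credit usage."""
    return round(credits * USD_PER_CREDIT, 2)

# Top of the "large or iterative task" range:
print(credits_to_usd(18.0))  # 1.8
```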
Worked example: "Check the latest commits and update the documentation"
This is a common task where you connect a GitHub repository and ask the agent to review recent code changes and update docs accordingly. The system may analyze connected repositories and may use different models during planning, code analysis, writing, and validation behind the scenes, which is why this kind of request often costs more than a basic edit.
| Step | What the agent does | Why |
|---|---|---|
| 1 | Analyzes code from connected repos | Fetches and reads recent commits from GitHub |
| 2 | Inspects file tree | Understands current doc structure |
| 3 | Reads site config | Understands navigation and existing pages |
| 4 | Searches existing documentation | Finds pages that need updating |
| 5 | Reads relevant pages (×2–4) | Loads current content of pages that need changes |
| 6 | Generates execution plan | Plans updates across multiple pages |
| 7–10 | Edits each page (×2–4) | Updates content to reflect the code changes |
| 11 | Marks tasks complete | Tracks progress through the plan |
| 12 | Responds to you | Summarizes all changes made |
Total: ~8–18 credits ($0.80–$1.80) depending on how many pages need updating
This task costs more because the agent needs to read from GitHub, search your docs, and then edit multiple pages. If only 1–2 pages need updating, it will usually land near the lower end of the range. If the commits affect many areas of your docs, it can move well beyond 10 credits.
Tasks that involve many files, long conversations, or multiple revisions can use 5–10+ credits in a single session. If you are working on a large task, check your credit balance in the dashboard periodically.
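One way to think about the range quoted above is as a fixed overhead (repo analysis, searching, planning) plus a per-page cost. The numbers below are invented to match the ~8–18 credit range on this page, not an official formula:

```python
# Illustrative: tuned to the ~8-18 credit range quoted above.
BASE_OVERHEAD = 5.0   # analyze commits, inspect tree, search, plan
PER_PAGE = 1.6        # read + edit one affected page

def github_update_estimate(pages: int) -> float:
    """Rough credit estimate for a 'check commits and update docs' task."""
    return round(BASE_OVERHEAD + pages * PER_PAGE, 1)

print(github_update_estimate(2))  # 8.2  (only 1-2 pages: low end)
print(github_update_estimate(8))  # 17.8 (many areas affected: high end)
```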
Worked example: "Fix a typo"
Here is a step-by-step breakdown of a simple edit task — asking the agent to change a single word on a page.
| Step | What the agent does | Why |
|---|---|---|
| 1 | Reads the file | Needs to see the current content before editing |
| 2 | Edits the file | Makes the requested change |
| 3 | Responds to you | Confirms what was changed |
Total: ~0.6 credits ($0.06)
Worked example: "Create 5 new pages from our API docs"
Here is a step-by-step breakdown of a complex, multi-page creation task.
| Step | What the agent does | Why |
|---|---|---|
| 1 | Inspects file tree | Understands current doc structure |
| 2 | Reads site config | Understands navigation layout |
| 3 | Generates execution plan | Plans all 5 pages before starting |
| 4–8 | Creates each page (×5) | Writes MDX content for each page |
| 9 | Updates navigation | Adds all new pages to the sidebar |
| 10 | Validates structure | Checks JSON and MDX are valid |
| 11 | Responds to you | Summarizes everything that was done |
Total: ~4.5 credits ($0.45)
Multi-turn conversations
If you send multiple messages in the same conversation (for example, asking the agent to edit a page and then asking a follow-up), each message is a separate request that uses credits independently. However, follow-up messages in the same conversation tend to be cheaper because:
- The agent already has context from previous messages
- Repeated content is cached, reducing processing costs
A typical follow-up message costs 30–50% less than the first message in a conversation.
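Using the 30–50% follow-up discount described above, a conversation's total cost can be sketched like this. The 40% midpoint and the 1.0-credit first message are illustrative values:

```python
def conversation_cost(first_message: float, followups: int,
                      discount: float = 0.4) -> float:
    """Total credits for a conversation where each follow-up costs
    (1 - discount) times the first message; discount is 0.3-0.5
    per this page."""
    followup_cost = first_message * (1 - discount)
    return round(first_message + followups * followup_cost, 2)

# A 1.0-credit edit plus two follow-ups at a 40% discount:
print(conversation_cost(1.0, followups=2))  # 2.2
```

Two follow-ups in the same conversation cost about what a single fresh conversation would, which is why batching related asks into one thread is cheaper.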
Why the same task can cost different amounts
You might notice that similar tasks sometimes use slightly different amounts of credits. This is normal and happens because:
- Document size matters — Editing a 200-line page costs more than editing a 50-line page because the agent processes more content.
- Retries — If the agent's first edit attempt produces invalid JSON or doesn't match the file correctly, it retries automatically. Each retry adds a small amount of credits.
- Conversation length — Later messages in a long conversation process more context (the full conversation history), which increases cost slightly.
Where to view credit usage
You can view your current credit balance and usage history from Settings → AI Usage in your Documentation.AI dashboard.
The usage page shows:
- Credits remaining in your current billing cycle
- Credit usage over time
- A breakdown by feature (Documentation Agent vs. AI Assistant)
If you run out of credits, the AI Documentation Agent and AI Assistant features will be disabled until your next billing cycle or until you upgrade your plan.
Tips to use credits efficiently
- Be specific — "Change the title of getting-started.mdx to Quick Start" costs less than "update the getting started page" because the agent doesn't need to search and read first.
- Use follow-ups — After the agent edits a page, ask for changes in the same conversation rather than starting a new one. Follow-ups are cheaper.
- Batch related changes — "Add pages for authentication, rate limiting, and webhooks" in one message is cheaper than three separate conversations.
- Use the AI Assistant for questions — If you just need to look up something in your docs, use Ask AI (0.1 credits) instead of the Documentation Agent.