---
summary: "How OpenClaw builds prompt context and reports token usage + costs"
read_when:
- Explaining token usage, costs, or context windows
- Debugging context growth or compaction behavior
title: "Token Use and Costs"
---
# Token use & costs
OpenClaw tracks **tokens**, not characters. Tokens are model-specific, but most
OpenAI-style models average ~4 characters per token for English text.
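As a rough rule of thumb, that puts a 2,000-character English message at roughly 500 tokens.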
## How the system prompt is built
OpenClaw assembles its own system prompt on every run. It includes:
- Tool list + short descriptions
- Skills list (only metadata; instructions are loaded on demand with `read`)
- Self-update instructions
- Workspace + bootstrap files (`AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md` when new, plus `MEMORY.md` and/or `memory.md` when present). Large files are truncated by `agents.defaults.bootstrapMaxChars` (default: 20000), and total bootstrap injection is capped by `agents.defaults.bootstrapTotalMaxChars` (default: 24000); see the config sketch below. `memory/*.md` files are on-demand via memory tools and are not auto-injected.
- Time (UTC + user timezone)
- Reply tags + heartbeat behavior
- Runtime metadata (host/OS/model/thinking)
See the full breakdown in [System Prompt](/concepts/system-prompt).
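If you need to tighten how much bootstrap text gets injected, both caps can be lowered in config. A minimal sketch, assuming the keys sit under `agents.defaults` exactly as the dotted paths above suggest (values shown are the defaults):
```yaml
agents:
  defaults:
    bootstrapMaxChars: 20000       # per-file truncation cap
    bootstrapTotalMaxChars: 24000  # cap on total bootstrap injection
```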
## What counts in the context window
Everything the model receives counts toward the context limit:
- System prompt (all sections listed above)
- Conversation history (user + assistant messages)
- Tool calls and tool results
- Attachments/transcripts (images, audio, files)
- Compaction summaries and pruning artifacts
- Provider wrappers or safety headers (not visible, but still counted)
For a practical breakdown (per injected file, tools, skills, and system prompt size), use `/context list` or `/context detail`. See [Context](/concepts/context).
## How to see current token usage
Use these in chat:
- `/status` → **emoji-rich status card** with the session model, context usage,
  last response input/output tokens, and **estimated cost** (API key only).
- `/usage off|tokens|full` → appends a **per-response usage footer** to every reply.
  - Persists per session (stored as `responseUsage`).
  - OAuth auth **hides cost** (tokens only).
- `/usage cost` → shows a local cost summary from OpenClaw session logs.
Other surfaces:
- **TUI/Web TUI:** `/status` + `/usage` are supported.
- **CLI:** `openclaw status --usage` and `openclaw channels list` show
provider quota windows (not per-response costs).
## Cost estimation (when shown)
Costs are estimated from your model pricing config:
```
models.providers.<provider>.models[].cost
```
These are **USD per 1M tokens** for `input`, `output`, `cacheRead`, and
`cacheWrite`. If pricing is missing, OpenClaw shows tokens only. OAuth tokens
never show dollar cost.
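As an illustration, a pricing entry might look like the sketch below. Only the `cost` keys come from the schema above; the provider name, the model entry layout (including the `id` field), and the dollar figures are placeholders, not canonical values:
```yaml
models:
  providers:
    anthropic:                   # example provider
      models:
        - id: "claude-opus-4-6"  # model identifier field assumed for illustration
          cost:
            input: 15.0          # USD per 1M input tokens
            output: 75.0         # USD per 1M output tokens
            cacheRead: 1.5       # USD per 1M cache-read tokens
            cacheWrite: 18.75    # USD per 1M cache-write tokens
```
With `input: 15.0`, a response that consumed 200,000 input tokens would be estimated at 200,000 / 1,000,000 × $15 = $3.00 on the input side; output and cache tokens are priced the same way from their respective rates.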
## Cache TTL and pruning impact
Provider prompt caching only applies within the cache TTL window. OpenClaw can
optionally run **cache-ttl pruning**: it prunes the session once the cache TTL
has expired, then resets the cache window so subsequent requests can re-use the
freshly cached context instead of re-caching the full history. This keeps cache
write costs lower when a session goes idle past the TTL.
Configure it in [Gateway configuration](/gateway/configuration) and see the
behavior details in [Session pruning](/concepts/session-pruning).
Heartbeat can keep the cache **warm ** across idle gaps. If your model cache TTL
is `1h` , setting the heartbeat interval just under that (e.g., `55m` ) can avoid
re-caching the full prompt, reducing cache write costs.
For Anthropic API pricing, cache reads are significantly cheaper than input
tokens, while cache writes are billed at a higher multiplier. See Anthropic’s
prompt caching pricing for the latest rates and TTL multipliers:
[https://docs.anthropic.com/docs/build-with-claude/prompt-caching](https://docs.anthropic.com/docs/build-with-claude/prompt-caching)
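As a rough illustration: with cache reads billed at 0.1× the base input rate and 1-hour cache writes at 2× (the multipliers Anthropic has published; confirm current numbers at the link above), re-reading a 100k-token cached prefix costs about 1/20th of re-writing it, which is why keeping the cache warm matters.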
### Example: keep 1h cache warm with heartbeat
```yaml
agents:
  defaults:
    model:
      primary: "anthropic/claude-opus-4-6"
    models:
      "anthropic/claude-opus-4-6":
        params:
          cacheRetention: "long"
    heartbeat:
      every: "55m"
```
## Tips for reducing token pressure
- Use `/compact` to summarize long sessions.
- Trim large tool outputs in your workflows.
- Keep skill descriptions short (skill list is injected into the prompt).
- Prefer smaller models for verbose, exploratory work.
See [Skills](/tools/skills) for the exact skill list overhead formula.