From 8ced108f7450f5a29f7b33e93a1d38b36e0bf366 Mon Sep 17 00:00:00 2001 From: "B.A. Baracus" Date: Mon, 6 Apr 2026 02:26:51 +0000 Subject: [PATCH 1/2] =?UTF-8?q?docs:=20README=20overhaul=20=E2=80=94=20add?= =?UTF-8?q?=20analyze.js,=20wake=20integration,=20Quick=20start,=20fix=20p?= =?UTF-8?q?rovider=20table=20and=20architecture?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Six changes: - Add ## Quick start block (monitor.js, analyze.js, token-status.sh) - Add ## Analysis section with all 8 analyze.js subcommands and output descriptions - Add ## Wake integration section — token-status.sh docs, output format, cache guard note - Provider support table: add google-gemini and xai-* rows - Architecture block: add analyze.js, gemini.js, xai.js, docs/analyze.md - Related: add token-status.sh as first item, fix issue link to trentuna/token-monitor#1 164/164 tests pass. --- README.md | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 63 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 799e9fa..b839eb7 100644 --- a/README.md +++ b/README.md @@ -4,6 +4,14 @@ Modular LLM API quota and usage visibility tool. Extracts rate-limit and usage d **Why it exists:** team-vigilio hit its 7-day rate limit (9+ days of 429s). api-ateam ran out of credit mid-session. We kept flying blind. This tool surfaces quota health before the failure. 
+## Quick start + +```bash +node monitor.js # run now — human-readable output + log +node analyze.js # analyze accumulated logs — burn rates, rotation +~/os/token-status.sh # what Vigilio's wake prompt sees (automated path) +``` + ## Usage ```bash @@ -14,6 +22,29 @@ node monitor.js --provider team-nadja # single provider node monitor.js --no-log # suppress log file write ``` +## Analysis + +```bash +node analyze.js # full report +node analyze.js --burn-rate # burn rate per account +node analyze.js --weekly # weekly budget reconstruction +node analyze.js --stagger # reset schedule (next 48h) +node analyze.js --rotation # rotation recommendation +node analyze.js --json # JSON output (all sections) +node analyze.js --provider team-nadja # filter to one provider +node analyze.js --prune [--dry-run] # archive and prune logs > 30 days +``` + +**Burn Rate** — delta analysis of 7d utilization over time, projected exhaustion at current rate. + +**Reset Schedule** — providers resetting within the next 48 hours, sorted ascending by time to reset. + +**Weekly Reconstruction** — peak and average 7d utilization per provider per ISO week. Shows exhaustion events. + +**Rotation Recommendation** — ranked provider list by headroom, deprioritizing maxed/rejected/invalid-key accounts. + +**Underspend Alerts** — active accounts with ≥ 40% of 5h window unused and < 2h until reset. 
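The burn-rate projection is simple enough to sketch. The following is a minimal, hypothetical JavaScript illustration — the function name `projectExhaustion` and the `ts`/`utilization_7d` field shapes mirror the JSONL log entries but are not the actual `analyze.js` internals:

```javascript
// Given chronological samples of 7d utilization (0..1), compute the hourly
// fill rate and extrapolate time until the window is exhausted.
function projectExhaustion(samples) {
  const first = samples[0];
  const last = samples[samples.length - 1];
  const hours = (Date.parse(last.ts) - Date.parse(first.ts)) / 3.6e6;
  const rate = (last.utilization_7d - first.utilization_7d) / hours;
  if (rate <= 0) return { rate, exhaustedInHours: Infinity }; // flat or draining
  return { rate, exhaustedInHours: (1 - last.utilization_7d) / rate };
}

const result = projectExhaustion([
  { ts: "2026-04-06T00:00:00Z", utilization_7d: 0.40 },
  { ts: "2026-04-06T12:00:00Z", utilization_7d: 0.64 },
]);
// 0.24 gained over 12h → 0.02/h; 0.36 headroom left
console.log(result.exhaustedInHours.toFixed(1)); // ≈ "18.0"
```

Dividing remaining headroom by the observed hourly rate is the whole trick; a flat or negative delta projects `Infinity`, meaning the account is draining rather than filling.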
+ ## Example output ``` @@ -80,6 +111,8 @@ Overall: 1 CRITICAL, 3 OK, 3 UNKNOWN | team-vigilio, team-ludo, team-molto, team-nadja, team-buio | Anthropic Teams (direct) | 5h/7d utilization (0–100%), status, reset countdown, severity | | shelley-proxy | Shelley/exe.dev proxy | Token headroom, request headroom, per-call USD cost | | api-ateam | Anthropic API (pay-per-use) | Key validity only — no billing API exists | +| google-gemini | Gemini API (free tier) | Quota violation detail, retry delay, key validity | +| xai-face, xai-amy, xai-murdock, xai-ba, xai-vigilio | xAI/Grok | Request/token remaining counts, rate limit status | ## Severity levels @@ -124,14 +157,19 @@ grep '"team-vigilio"' ~/.logs/token-monitor/*.jsonl | \ ``` monitor.js — CLI entrypoint, orchestrates probes +analyze.js — analysis CLI (burn rates, weekly, stagger, rotation) providers/ index.js — reads ~/.pi/agent/models.json, returns typed provider list anthropic-teams.js — unified schema parser (oat01 keys, all team-* providers) anthropic-api.js — pay-per-use (api03 keys) — reports "no billing data" shelley-proxy.js — classic schema + Exedev-Gateway-Cost header + gemini.js — Gemini API (free tier, quota via response body) + xai.js — x.ai/Grok (rate-limit headers) logger.js — JSONL log to ~/.logs/token-monitor/ report.js — human-readable summary + severity logic test.js — test suite (run: node test.js) +docs/ + analyze.md — analysis CLI full reference ``` ## How to add a new provider @@ -152,6 +190,30 @@ The tool makes one minimal API call per provider to extract headers: ## Related +- `~/os/token-status.sh` — wake-prompt integration (calls monitor.js, formats for beat.sh) - `~/projects/provider-check/` — predecessor (liveness only, no quota depth) - `~/.pi/agent/models.json` — provider configuration source -- Forgejo issue: trentuna/a-team#91 +- Forgejo issue: trentuna/token-monitor#1 + +## Wake integration + +`~/os/token-status.sh` is the automated interface. 
It runs `monitor.js --json` +and formats the output into a compact summary block for injection into +Vigilio's wake prompt via `beat.sh`. + +```bash +# Manual invocation (same as what the wake prompt sees) +~/os/token-status.sh + +# Output format: +## Token Economics +Anthropic Teams (5 seats): + team-vigilio ✗ MAXED 7d:100% resets 23h + team-molto ✓ 5h:32% 7d:45% resets 4h + ... +→ Current recommendation: use team-molto | avoid team-vigilio +``` + +File location: `~/os/token-status.sh` +Called by: `~/os/beat.sh` (Vigilio wake script) +Uses: `monitor.js --json` with 20-minute cache guard (won't double-probe within a session) From b504977853d929dd762948277c8f6e8f65b0347b Mon Sep 17 00:00:00 2001 From: "H.M. Murdock" Date: Mon, 6 Apr 2026 02:27:23 +0000 Subject: [PATCH 2/2] =?UTF-8?q?docs:=20add=20phase3-piggyback.md=20?= =?UTF-8?q?=E2=80=94=20piggyback=20header=20capture=20+=20repo=20location?= =?UTF-8?q?=20recommendation?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- docs/phase3-piggyback.md | 85 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 85 insertions(+) create mode 100644 docs/phase3-piggyback.md diff --git a/docs/phase3-piggyback.md b/docs/phase3-piggyback.md new file mode 100644 index 0000000..9e16c94 --- /dev/null +++ b/docs/phase3-piggyback.md @@ -0,0 +1,85 @@ +# Phase 3: Piggyback Header Capture + +## What it is + +Instead of making a dedicated probe call at each wake to read rate-limit state, **piggyback capture** reads the same headers from LLM responses that are already happening during normal pi sessions. Zero extra API calls. No probe latency. Every real conversation generates a data point automatically. + +## Why it's better + +The current probe approach has a structural limitation: it samples state once per wake, at most every 20 minutes (cache guard). Real usage happens between probes — the 5h window can fill and reset while Vigilio is working. 
Piggyback gets a reading on every turn, tied to actual usage events, with sub-minute resolution.
+
+- **Zero overhead** — no extra API calls, no added latency, no token spend
+- **Temporal accuracy** — readings tied to real usage moments, not arbitrary probe intervals
+- **Richer signal** — burn rate analysis improves dramatically with higher sample frequency
+- **No polling logic** — the data arrives when there's data to report
+
+## Headers already present
+
+Every Anthropic API response already carries the full rate-limit family. These are the same headers `anthropic-teams.js` parses today:
+
+```
+anthropic-ratelimit-unified-status
+anthropic-ratelimit-unified-5h-utilization
+anthropic-ratelimit-unified-5h-reset
+anthropic-ratelimit-unified-7d-utilization
+anthropic-ratelimit-unified-7d-reset
+anthropic-ratelimit-unified-representative-claim
+anthropic-ratelimit-unified-reset
+```
+
+Present on every response — 200 and 429 alike. The data is already flowing; we just aren't capturing it.
+
+## Where pi would need instrumentation
+
+Pi's extension system exposes `before_provider_request` for outbound payload inspection, but **no documented hook exposes raw response headers** on the inbound side. The intercept point is the **custom provider wrapper**.
+
+Pi supports `registerProvider("anthropic", { baseUrl: ... })` to override the endpoint. A thin proxy wrapper could:
+
+1. Forward the request to the real Anthropic endpoint
+2. Capture response headers before returning the stream to pi
+3. Append a JSONL entry to the piggyback log
+
+This would live at `~/.pi/agent/extensions/token-monitor-piggyback.ts` — a pi extension that registers itself as the Anthropic provider, wraps the actual call, and writes the side channel.
+
+Alternative approach, if a response hook is added to pi in the future: `pi.on("after_provider_response", ...)` with `event.headers` exposed. Cleaner, no proxy indirection. Worth filing upstream.
+
+## Data interface: minimal viable integration
+
+The pi extension writes to:
+
+```
+~/.logs/token-monitor/piggyback.jsonl
+```
+
+Same path structure as today's probe logs. Same JSONL format:
+
+```jsonl
+{"ts":"2026-04-06T14:23:01Z","source":"piggyback","provider":"team-vigilio","type":"teams-direct","status":"allowed","utilization_5h":0.42,"utilization_7d":0.61,"reset_in_seconds":14400}
+```
+
+`analyze.js` reads probe entries and piggyback entries from the same log directory; the `source` field distinguishes them. Probe entries continue working as a fallback when no real conversation has occurred yet (first wake of the day, dormant accounts).
+
+## What remains unknown
+
+- **Header exposure**: Pi doesn't currently expose raw response headers in any extension event. The custom provider proxy approach works but adds complexity. Check whether a future pi release adds `after_provider_response`.
+- **Streaming interception**: Anthropic responses stream. Headers arrive before the body. The proxy needs to capture headers and write the log entry without buffering the full response — should be fine but needs testing.
+- **Multi-provider coverage**: Piggyback naturally works for whichever provider is active. Dormant accounts still need probe calls to confirm they're dormant. Hybrid approach is probably permanent.
+- **Extension packaging**: Should this live as a pi extension in `commons/pi/extensions/` alongside bootstrap, or as a standalone script? Depends on repo location decision below.
+
+---
+
+## Repo location recommendation
+
+**Options assessed:**
+
+1. **Stay as `trentuna/token-monitor`** — Works fine, but it's isolated. The tool serves all trentuna members; a separate repo means separate cloning, separate updates, and `token-status.sh` already lives in `~/os/` outside it. The split is already awkward.
+
+2. **Move into `trentuna/commons`** — Natural fit. `commons` is the shared config layer for all trentuna members; `bootstrap.sh` already handles pi setup. 
Token monitoring is infrastructure, not a standalone product. `analyze.js`, `monitor.js`, and a future piggyback extension would sit alongside other shared operational tools. Ludo explicitly named this option. + +3. **Split: code in token-monitor, `token-status.sh` to `vigilio/os`** — The split already exists informally (`token-status.sh` is in `~/os/`). Formalizing it adds a cross-repo dependency without resolving the underlying issue. More moving parts, same problem. + +4. **Merge into `vigilio/os` entirely** — `token-status.sh` already lives here and it's close to `vigilio.sh`. But `vigilio/os` is vigilio-specific; the monitor is multi-member infrastructure. Wrong home. + +**Recommendation: Option 2 — move into `trentuna/commons`.** + +The monitor is trentuna infrastructure. `commons` already owns bootstrap, pi config, and model provisioning for all members. Token monitoring belongs in the same layer. A future piggyback extension would live in `commons/pi/extensions/`, wired up by bootstrap automatically for every member. `token-status.sh` stays in `~/os/` (vigilio-local runtime script) and just calls the tool from its new location — one path update.