token-monitor/providers
Vigilio Desto c7e6438398
Fix xai probe: double /v1 URL bug, use /v1/models instead of chat completion
Two bugs caused all xai providers to show 'error' in the monitor:

1. Double /v1 in URL: models.json baseUrl is https://api.x.ai/v1 (OpenAI-
   compatible convention), and the probe was appending /v1/chat/completions,
   producing https://api.x.ai/v1/v1/chat/completions → HTTP 4xx.
   Fix: strip trailing /vN from baseUrl before constructing the probe URL.

2. Wrong model: the probe used grok-3-mini, which requires specific x.ai
   console permissions not granted to our keys; they only have access to
   grok-4-1-fast-reasoning.
   Fix: use GET /v1/models instead — lightweight, no model guessing,
   returns 200 (valid key) or 401 (invalid). Includes available models
   in result for visibility.
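The two fixes above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual xai.js: the function names (`buildProbeUrl`, `probeXai`) and the result object shape are assumptions; only the URL-stripping rule and the GET /v1/models probe with its 200/401 semantics come from the description above.

```javascript
// Sketch of the probe fix: strip a trailing /vN from baseUrl before
// appending the probe path, then use the lightweight models endpoint.
function buildProbeUrl(baseUrl) {
  // An OpenAI-compatible baseUrl like https://api.x.ai/v1 would otherwise
  // yield https://api.x.ai/v1/v1/models (the double-/v1 bug).
  const root = baseUrl.replace(/\/v\d+\/?$/, "");
  return `${root}/v1/models`;
}

// Hypothetical probe: 200 means the key is valid, 401 means it is not;
// the available model ids are included in the result for visibility.
async function probeXai(baseUrl, apiKey) {
  const res = await fetch(buildProbeUrl(baseUrl), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (res.status === 401) return { status: "invalid-key" };
  if (!res.ok) return { status: "error", httpStatus: res.status };
  const body = await res.json();
  return { status: "ok", models: (body.data ?? []).map((m) => m.id) };
}
```

Probing GET /v1/models avoids guessing a model the key may not be entitled to, which is what broke the original chat-completion probe.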

158/158 tests pass (unit tests for parseXaiHeaders unchanged).
2026-04-05 06:31:29 +00:00
anthropic-api.js build: token-monitor v0.1.0 — modular LLM API quota visibility 2026-04-04 17:01:05 +00:00
anthropic-teams.js build: token-monitor v0.1.0 — modular LLM API quota visibility 2026-04-04 17:01:05 +00:00
gemini.js build: add gemini and xai provider modules 2026-04-04 17:52:37 +00:00
index.js test: add gemini and xai parser unit tests 2026-04-04 17:51:38 +00:00
shelley-proxy.js build: token-monitor v0.1.0 — modular LLM API quota visibility 2026-04-04 17:01:05 +00:00
xai.js Fix xai probe: double /v1 URL bug, use /v1/models instead of chat completion 2026-04-05 06:31:29 +00:00