Two bugs caused all xai providers to show 'error' in the monitor:
1. Double /v1 in the URL: the models.json baseUrl is https://api.x.ai/v1
(the OpenAI-compatible convention), and the probe appended /v1/chat/completions,
producing https://api.x.ai/v1/v1/chat/completions → HTTP 4xx.
Fix: strip a trailing /vN from baseUrl before constructing the probe URL.
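A minimal sketch of the suffix-stripping fix (helper name hypothetical, not the actual code):

```typescript
// Hypothetical helper: strip a trailing /v1, /v2, ... version segment so the
// probe can append its own versioned path without doubling the segment.
function stripVersionSuffix(baseUrl: string): string {
  return baseUrl.replace(/\/v\d+\/?$/, "");
}

// https://api.x.ai/v1 + /v1/chat/completions would double the segment;
// stripping first yields a clean probe URL.
const base = stripVersionSuffix("https://api.x.ai/v1"); // "https://api.x.ai"
const probeUrl = `${base}/v1/chat/completions`;
console.log(probeUrl); // https://api.x.ai/v1/chat/completions
```

A baseUrl without a version suffix (e.g. https://api.example.com) passes through unchanged, so the helper is safe to apply to every provider's baseUrl, not just xai's.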
2. Wrong probe model: the probe used grok-3-mini, which requires x.ai
console permissions not granted to our keys; the keys can access
grok-4-1-fast-reasoning only.
Fix: probe GET /v1/models instead; it's lightweight, requires no model
guessing, and returns 200 (valid key) or 401 (invalid key). The available
models are included in the probe result for visibility.
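The /v1/models probe can be sketched as below (function name, result shape, and response parsing are assumptions, not the actual implementation; the endpoint and status codes are from the fix description):

```typescript
// Sketch of the key-validity probe: GET /v1/models answers 200 with a model
// list for a valid key and 401 for an invalid one, so no model name is needed.
async function probeXaiKey(
  baseUrl: string,
  apiKey: string,
): Promise<{ ok: boolean; models: string[] }> {
  // Reuse the trailing-/vN strip so the path is not doubled (bug 1).
  const url = `${baseUrl.replace(/\/v\d+\/?$/, "")}/v1/models`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (res.status === 401) return { ok: false, models: [] };
  if (!res.ok) throw new Error(`unexpected status ${res.status} from ${url}`);
  // OpenAI-compatible list shape: { data: [{ id: "..." }, ...] }
  const body = (await res.json()) as { data?: { id: string }[] };
  return { ok: true, models: (body.data ?? []).map((m) => m.id) };
}
```

Surfacing the model IDs in the result lets the monitor show which models a key can actually reach, which would have made the grok-3-mini permission gap visible immediately.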
158/158 tests pass (unit tests for parseXaiHeaders unchanged).