Error Format

All errors follow a consistent JSON format:
{
  "error": {
    "message": "Human-readable error description",
    "type": "error_type"
  }
}
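
Because the envelope is the same for every error, a small helper can pull out the fields regardless of which error occurred. This is a minimal sketch; parse_error is not part of any SDK:

```python
import json

def parse_error(body: str):
    """Extract (type, message) from an ARouter error response body."""
    err = json.loads(body).get("error", {})
    return err.get("type", "unknown"), err.get("message", "")

etype, msg = parse_error(
    '{"error": {"message": "invalid api key", "type": "authentication_error"}}'
)
```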

Error Types

| HTTP Status | Error Type | Description |
|---|---|---|
| 400 | invalid_request_error | Malformed request body or missing required fields |
| 401 | authentication_error | Missing, invalid, or expired API key |
| 402 | payment_required | Insufficient credits — top up your account |
| 403 | permission_error / forbidden_error | Key doesn’t have access to the requested provider or resource |
| 413 | invalid_request_error | Request body too large (max 10 MB) |
| 429 | rate_limit_error | Rate limit exceeded (RPM or daily cap) |
| 502 | server_error | Upstream provider request failed |
| 503 | server_error | No available providers for the requested model |
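
The table splits cleanly into transient errors (worth retrying) and permanent ones. One way to encode that split for use in retry logic — the names here are ours, not part of any SDK:

```python
# Transient statuses worth retrying, per the table above.
# Everything else (400, 401, 402, 403, 413) is permanent:
# retrying won't help until the request or account is fixed.
RETRYABLE_STATUSES = {429, 502, 503}

def is_retryable(status_code: int) -> bool:
    return status_code in RETRYABLE_STATUSES
```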

Common Errors and Solutions

401 — Invalid API Key

{
  "error": {
    "message": "invalid api key",
    "type": "authentication_error"
  }
}
Fix: Check that your API key is correct and hasn’t been revoked.

403 — Provider Not Allowed

{
  "error": {
    "message": "provider 'anthropic' is not allowed for this API key",
    "type": "forbidden_error"
  }
}
Fix: Your API key has an allowed_providers restriction. Use a different key or update the allowed providers via the management API.

429 — Rate Limited

{
  "error": {
    "message": "rate limit exceeded",
    "type": "rate_limit_error"
  }
}
Fix: Implement exponential backoff. Consider raising your key’s rate limit.

502 — Upstream Failed

{
  "error": {
    "message": "upstream request failed",
    "type": "server_error"
  }
}
Fix: The LLM provider returned an error or is unreachable. ARouter automatically handles key failover, but the provider itself may be experiencing issues. Retry or switch to a different provider.
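
ARouter can route across candidates server-side (see the retry strategy below), but the same idea is easy to sketch client-side. This is a self-contained illustration: UpstreamError stands in for the SDK's APIStatusError, and the model names are made up.

```python
class UpstreamError(Exception):
    """Stand-in for the SDK's APIStatusError (carries the HTTP status)."""
    def __init__(self, status_code):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

def complete_with_fallback(create, candidates):
    """Try each candidate model in order, moving on when its provider fails."""
    last = None
    for model in candidates:
        try:
            return create(model)
        except UpstreamError as e:
            if e.status_code in (502, 503):  # provider down: try the next model
                last = e
                continue
            raise                            # permanent errors propagate immediately
    if last is not None:
        raise last
    raise ValueError("no candidate models given")
```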

Handling Errors in Code

from openai import OpenAI, APIStatusError, APIConnectionError

client = OpenAI(
    base_url="https://api.arouter.ai/v1",
    api_key="lr_live_xxxx",
)

try:
    response = client.chat.completions.create(
        model="openai/gpt-5.4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except APIStatusError as e:
    if e.status_code == 401:
        print("Invalid API key")
    elif e.status_code == 402:
        print("Insufficient credits — top up your account")
    elif e.status_code == 429:
        print("Rate limited, backing off...")
    elif e.status_code in (502, 503):
        print("Provider error, retrying...")
    else:
        print(f"API error {e.status_code}: {e.message}")
except APIConnectionError:
    print("Network error — check your connection")

Retry Strategy

For production applications, we recommend:
  1. Retry on 429, 502, and 503 with exponential backoff
  2. Do not retry on 400, 401, 403 — these are permanent errors
  3. Set a max retry count (e.g., 3 attempts)
  4. Consider multi-model routing — if one model cannot serve the request, send an ordered candidate list via models and route
import time
from openai import OpenAI, APIStatusError

def call_with_retry(client, request, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(**request)
        except APIStatusError as e:
            if e.status_code in (429, 502, 503) and attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            raise
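
The fixed 2 ** attempt delay above can synchronize many clients into retrying at the same moment. Adding random "full jitter" spreads the retries out — a sketch, not part of the documented API:

```python
import random
import time

def sleep_with_jitter(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Full jitter: sleep a uniform random time in [0, min(cap, base * 2**attempt)]."""
    delay = random.uniform(0, min(cap, base * (2 ** attempt)))
    time.sleep(delay)
    return delay
```

Drop this in place of the time.sleep(2 ** attempt) line; the cap keeps late attempts from waiting arbitrarily long.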

Handling Errors During Streaming

When streaming (stream: true), errors behave differently depending on when they occur:
  • Before any tokens are sent — ARouter returns a standard HTTP error response with a non-200 status code. Handle this the same as non-streaming errors.
  • After tokens have been sent — The HTTP status is already 200 OK. The error is delivered as an SSE event in the stream body.
Mid-stream errors look like:
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion.chunk",
  "error": {
    "code": "server_error",
    "message": "Provider disconnected unexpectedly"
  },
  "choices": [
    { "index": 0, "delta": { "content": "" }, "finish_reason": "error" }
  ]
}
Check the finish_reason on each chunk. If it’s "error", the stream has terminated abnormally.
for await (const chunk of stream) {
  // Check for mid-stream error
  if ("error" in chunk) {
    console.error(`Stream error: ${(chunk as any).error.message}`);
    break;
  }
  if (chunk.choices[0]?.finish_reason === "error") {
    console.error("Stream terminated due to an error");
    break;
  }
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
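
The same two checks in Python, written against the chunk shape shown above (a sketch; consume_stream is our helper, and the attribute layout is assumed to match the SDK's streaming chunks):

```python
def consume_stream(stream):
    """Yield content tokens from a streaming response, stopping on error."""
    for chunk in stream:
        # Mid-stream errors arrive as an error field on an otherwise-normal chunk.
        err = getattr(chunk, "error", None)
        if err:
            print(f"Stream error: {err['message']}")
            break
        choice = chunk.choices[0] if chunk.choices else None
        if choice is not None and choice.finish_reason == "error":
            print("Stream terminated due to an error")
            break
        if choice is not None and choice.delta.content:
            yield choice.delta.content
```
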
See the Streaming Guide for complete error handling examples.