
Provider List

ARouter supports the following providers. All are accessible through the OpenAI-compatible /v1/chat/completions endpoint using the provider/model format.
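Any OpenAI-compatible client works once pointed at ARouter, and the request shape can also be built by hand. A minimal stdlib sketch, assuming the base URL https://api.arouter.ai/v1 shown in the curl example later on (the key and `chat_request` helper are placeholders for illustration; the request is built but not sent):

```python
import json
import urllib.request

API_BASE = "https://api.arouter.ai/v1"  # from the curl example below; confirm in your dashboard
API_KEY = "lr_live_xxxx"                # placeholder key

def chat_request(provider: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request in provider/model form."""
    payload = {
        "model": f"{provider}/{model}",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = chat_request("openai", "gpt-5.4", "Hello!")
# urllib.request.urlopen(req) would actually send it; that is left to the caller.
```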
Provider ID: openai
| Model | Description |
| --- | --- |
| gpt-5.4 | Flagship multimodal model |
| gpt-5.4-mini | Fast and cost-efficient |
| gpt-5.4-nano | Ultra-lightweight, ideal for classification & extraction |
| o4-mini | Latest compact reasoning model |
| o3 | Advanced reasoning model |
response = client.chat.completions.create(
    model="openai/gpt-5.4",
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: anthropic
| Model | Description |
| --- | --- |
| claude-sonnet-4.6 | Latest Sonnet, best balanced performance |
| claude-opus-4.6 | Most capable Claude model |
| claude-haiku-4.5 | Fast and lightweight |
Works with both the OpenAI-compatible endpoint and the native Anthropic endpoint:
# Via OpenAI SDK
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.6", ...
)

# Via Anthropic SDK (native)
message = anthropic_client.messages.create(
    model="claude-sonnet-4.6", ...
)
Provider ID: google
| Model | Description |
| --- | --- |
| gemini-2.5-flash | Fast multimodal model with built-in thinking |
| gemini-2.5-pro | Most capable Gemini model, 1M context |
Works with both the OpenAI-compatible endpoint and the native Gemini endpoint:
# Via OpenAI SDK
response = client.chat.completions.create(
    model="google/gemini-2.5-flash", ...
)

# Via Gemini SDK (native)
model = genai.GenerativeModel("gemini-2.5-flash")
response = model.generate_content("Hello!")
Provider ID: deepseek
| Model | Description |
| --- | --- |
| deepseek-v3.2 | Flagship general model, GPT-5 class at a fraction of the cost |
| deepseek-r1 | Chain-of-thought reasoning model |
response = client.chat.completions.create(
    model="deepseek/deepseek-v3.2", ...
)
Provider ID: x-ai
| Model | Description |
| --- | --- |
| grok-4.20 | Latest flagship model with lowest hallucination rate |
| grok-4.1-fast | Ultra-fast model with 2M context window |
response = client.chat.completions.create(
    model="x-ai/grok-4.20", ...
)
Provider ID: mistralai
| Model | Description |
| --- | --- |
| mistral-large-2512 | Mistral Large 3, most capable Mistral model |
| mistral-medium-3.1 | Balanced performance and cost |
| codestral-2508 | Optimized for code generation |
response = client.chat.completions.create(
    model="mistralai/mistral-large-2512", ...
)
Provider ID: groq
| Model | Description |
| --- | --- |
| meta-llama/llama-4-maverick | Llama 4 Maverick on Groq, multimodal and ultra-fast |
| meta-llama/llama-4-scout | Llama 4 Scout on Groq, 10M context window |
response = client.chat.completions.create(
    model="groq/meta-llama/llama-4-maverick", ...
)
Provider ID: moonshotai
| Model | Description |
| --- | --- |
| kimi-k2.5 | Latest Kimi flagship, multimodal with vision |
response = client.chat.completions.create(
    model="moonshotai/kimi-k2.5", ...
)
Provider ID: minimax
| Model | Description |
| --- | --- |
| minimax-m2.7 | Latest flagship, long-context reasoning |
Works with both the OpenAI-compatible endpoint and the native MiniMax endpoint:
response = client.chat.completions.create(
    model="minimax/minimax-m2.7", ...
)
Provider ID: meta-llama
| Model | Description |
| --- | --- |
| llama-4-maverick | Llama 4 flagship, natively multimodal, 128K context |
| llama-4-scout | Llama 4 Scout, 10M context window, single-GPU efficient |
response = client.chat.completions.create(
    model="meta-llama/llama-4-maverick", ...
)
Provider ID: qwen
| Model | Description |
| --- | --- |
| qwen3.5-397b-a17b | Qwen 3.5 flagship, multimodal with video understanding |
| qwen3-coder | 480B coding specialist |
response = client.chat.completions.create(
    model="qwen/qwen3.5-397b-a17b", ...
)
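Because every provider above is reachable through the same endpoint, falling back between providers reduces to a loop over qualified model identifiers. A minimal sketch; `try_providers` and the `stub` caller are illustrative helpers, not part of ARouter, and a real `call_model` would issue the actual API request:

```python
def try_providers(prompt, candidates, call_model):
    """Try each provider/model id in order; return the first successful reply."""
    last_err = None
    for model_id in candidates:
        try:
            return model_id, call_model(model_id, prompt)
        except Exception as err:  # a real client would narrow this to API errors
            last_err = err
    raise RuntimeError(f"all providers failed: {last_err}")

# Demo with a stub caller that "fails" for the first provider:
def stub(model_id, prompt):
    if model_id.startswith("x-ai/"):
        raise ValueError("unavailable")
    return f"reply from {model_id}"

used, reply = try_providers("Hello!", ["x-ai/grok-4.20", "openai/gpt-5.4"], stub)
# used == "openai/gpt-5.4"
```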

Listing Available Models

Use the models endpoint to see all models accessible with your current API key:
curl https://api.arouter.ai/v1/models \
  -H "Authorization: Bearer lr_live_xxxx"
Response:
{
  "object": "list",
  "data": [
    {
      "id": "openai/gpt-5.4",
      "object": "model",
      "owned_by": "openai"
    },
    {
      "id": "anthropic/claude-sonnet-4.6",
      "object": "model",
      "owned_by": "anthropic"
    }
  ]
}
The models list is filtered based on your API key’s allowed_providers setting. If your key restricts providers, only models from allowed providers will appear.
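When a key restricts providers, it can be handy to group the models response by its provider prefix. A small sketch using the sample response above; `partition` keeps the full remainder as the model name, so nested ids like groq/meta-llama/llama-4-maverick split correctly:

```python
from collections import defaultdict

# Sample /v1/models response, as shown above.
models_response = {
    "object": "list",
    "data": [
        {"id": "openai/gpt-5.4", "object": "model", "owned_by": "openai"},
        {"id": "anthropic/claude-sonnet-4.6", "object": "model", "owned_by": "anthropic"},
    ],
}

by_provider = defaultdict(list)
for m in models_response["data"]:
    provider, _, model = m["id"].partition("/")
    by_provider[provider].append(model)

print(dict(by_provider))
# {'openai': ['gpt-5.4'], 'anthropic': ['claude-sonnet-4.6']}
```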