Provider List
ARouter supports the following providers. All are accessible through the OpenAI-compatible `/v1/chat/completions` endpoint using the `provider/model` format.
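As a concrete illustration of the `provider/model` format, here is a minimal sketch of a chat-completions request using only the standard library. The base URL and the `AROUTER_BASE_URL` / `AROUTER_API_KEY` environment variable names are assumptions, not part of ARouter's documented interface; substitute the values for your deployment.

```python
import json
import os
import urllib.request

# Assumed base URL and env var names; adjust to your ARouter deployment.
BASE_URL = os.environ.get("AROUTER_BASE_URL", "https://api.arouter.example")
API_KEY = os.environ.get("AROUTER_API_KEY", "")

def build_chat_request(provider: str, model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat payload using the provider/model format."""
    return {
        "model": f"{provider}/{model}",  # e.g. "anthropic/claude-sonnet-4.6"
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("anthropic", "claude-sonnet-4.6", "Hello!")

if API_KEY:  # only send the request when a key is configured
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works for every provider below; only the `provider/model` string changes.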
OpenAI

Provider ID: `openai`

| Model | Description |
|---|---|
| gpt-5.4 | Flagship multimodal model |
| gpt-5.4-mini | Fast and cost-efficient |
| gpt-5.4-nano | Ultra-lightweight, ideal for classification & extraction |
| o4-mini | Latest compact reasoning model |
| o3 | Advanced reasoning model |
Anthropic

Provider ID: `anthropic`

Works with both the OpenAI-compatible endpoint and the native Anthropic endpoint.

| Model | Description |
|---|---|
| claude-sonnet-4.6 | Latest Sonnet — best balanced performance |
| claude-opus-4.6 | Most capable Claude model |
| claude-haiku-4.5 | Fast and lightweight |
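Since the native Anthropic endpoint is supported alongside the OpenAI-compatible one, a request can also use Anthropic's Messages API shape. The sketch below assumes ARouter mirrors that API at `/v1/messages` with an `x-api-key` header; the base URL, path, and env var names are assumptions to verify against your deployment.

```python
import json
import os
import urllib.request

# Assumed base URL, path, and env var names; verify against your deployment.
BASE_URL = os.environ.get("AROUTER_BASE_URL", "https://api.arouter.example")
API_KEY = os.environ.get("AROUTER_API_KEY", "")

# Anthropic's native Messages API uses its own schema and requires max_tokens.
native_payload = {
    "model": "claude-sonnet-4.6",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}],
}

if API_KEY:  # skip the network call when no key is configured
    req = urllib.request.Request(
        f"{BASE_URL}/v1/messages",
        data=json.dumps(native_payload).encode(),
        headers={"x-api-key": API_KEY, "content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["content"][0]["text"])
```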
Google Gemini

Provider ID: `google`

Works with both the OpenAI-compatible endpoint and the native Gemini endpoint.

| Model | Description |
|---|---|
| gemini-2.5-flash | Fast multimodal model with built-in thinking |
| gemini-2.5-pro | Most capable Gemini model, 1M context |
DeepSeek

Provider ID: `deepseek`

| Model | Description |
|---|---|
| deepseek-v3.2 | Flagship general model — GPT-5 class at a fraction of the cost |
| deepseek-r1 | Chain-of-thought reasoning model |
xAI (Grok)

Provider ID: `x-ai`

| Model | Description |
|---|---|
| grok-4.20 | Latest flagship model with lowest hallucination rate |
| grok-4.1-fast | Ultra-fast model with 2M context window |
Mistral

Provider ID: `mistralai`

| Model | Description |
|---|---|
| mistral-large-2512 | Mistral Large 3 — most capable Mistral model |
| mistral-medium-3.1 | Balanced performance and cost |
| codestral-2508 | Optimized for code generation |
Groq

Provider ID: `groq`

| Model | Description |
|---|---|
| meta-llama/llama-4-maverick | Llama 4 Maverick on Groq — multimodal, ultra-fast |
| meta-llama/llama-4-scout | Llama 4 Scout on Groq — 10M context window |
Kimi (Moonshot)

Provider ID: `moonshotai`

| Model | Description |
|---|---|
| kimi-k2.5 | Latest Kimi flagship — multimodal with vision |
MiniMax

Provider ID: `minimax`

The native MiniMax endpoint is also supported.

| Model | Description |
|---|---|
| minimax-m2.7 | Latest flagship — long-context reasoning |
Meta Llama

Provider ID: `meta-llama`

| Model | Description |
|---|---|
| llama-4-maverick | Llama 4 flagship — natively multimodal, 128K context |
| llama-4-scout | Llama 4 Scout — 10M context window, single GPU efficient |
Qwen (Alibaba)

Provider ID: `qwen`

| Model | Description |
|---|---|
| qwen3.5-397b-a17b | Qwen 3.5 flagship — multimodal with video understanding |
| qwen3-coder | 480B coding specialist |
Listing Available Models

Use the models endpoint to see all models accessible with your current API key. The models list is filtered based on your API key's `allowed_providers` setting: if your key restricts providers, only models from allowed providers will appear.
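A minimal sketch of querying the models endpoint, assuming the standard OpenAI-compatible `/v1/models` path and response shape (`{"data": [{"id": ...}, ...]}`); the base URL and env var names are placeholders for your deployment.

```python
import json
import os
import urllib.request

# Assumed base URL and env var names; adjust to your ARouter deployment.
BASE_URL = os.environ.get("AROUTER_BASE_URL", "https://api.arouter.example")
API_KEY = os.environ.get("AROUTER_API_KEY", "")

def parse_models(response_body: dict) -> list:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in response_body.get("data", [])]

# Illustrative response shape; with a restricted key, only models from
# allowed providers would appear in the data array.
sample = {"data": [{"id": "openai/gpt-5.4"}, {"id": "anthropic/claude-sonnet-4.6"}]}
print(parse_models(sample))

if API_KEY:  # live call, only when a key is configured
    req = urllib.request.Request(
        f"{BASE_URL}/v1/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        print(parse_models(json.load(resp)))
```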