Provider List

ARouter supports the following providers. All are accessible through the OpenAI-compatible /v1/chat/completions endpoint using the provider/model format.
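Every request, regardless of provider, goes through the same endpoint. As a minimal sketch of the raw HTTP request (assuming the `https://api.arouter.com` base URL and `lr_live_xxxx` key format used later in this document), built with the standard library only:

```python
import json
import urllib.request

# The "provider/model" string selects both the provider and the model.
payload = {
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Build the request; urllib.request.urlopen(req) would send it.
req = urllib.request.Request(
    "https://api.arouter.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer lr_live_xxxx",
        "Content-Type": "application/json",
    },
)
```

The SDK examples below do the same thing through the OpenAI client configured with this base URL and key.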
Provider ID: openai
Model          Description
gpt-4o         Most capable multimodal model
gpt-4o-mini    Fast and affordable
gpt-4-turbo    GPT-4 with vision
gpt-4          Original GPT-4
gpt-3.5-turbo  Fast and low cost
o1             Reasoning model
o1-mini        Lightweight reasoning
o3-mini        Latest compact reasoning
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: anthropic
Model                       Description
claude-sonnet-4-20250514    Latest Sonnet, best balanced
claude-3-5-sonnet-20241022  Previous generation Sonnet
claude-3-5-haiku-20241022   Fast and compact
claude-3-opus-20240229      Most capable Claude 3
Works with both the OpenAI-compatible endpoint and the native Anthropic endpoint:
# Via OpenAI SDK
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Via Anthropic SDK (native)
message = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,  # required by the Anthropic Messages API
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: google
Model                  Description
gemini-2.0-flash       Fast multimodal model
gemini-2.0-flash-lite  Ultra-fast lightweight
gemini-1.5-pro         Long-context reasoning
gemini-1.5-flash       Fast and efficient
Works with both the OpenAI-compatible endpoint and the native Gemini endpoint:
# Via OpenAI SDK
response = client.chat.completions.create(
    model="google/gemini-2.0-flash",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Via Gemini SDK (native)
model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content("Hello!")
Provider ID: deepseek
Model              Description
deepseek-chat      General chat model
deepseek-reasoner  Chain-of-thought reasoning
response = client.chat.completions.create(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: xai
Model        Description
grok-2       Most capable Grok model
grok-2-mini  Lightweight Grok
grok-beta    Beta release
response = client.chat.completions.create(
    model="xai/grok-2",
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: mistral
Model                 Description
mistral-large-latest  Most capable Mistral model
mistral-small-latest  Fast and efficient
codestral-latest      Optimized for code
response = client.chat.completions.create(
    model="mistral/mistral-large-latest",
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: groq
Model                    Description
llama-3.3-70b-versatile  Llama 3.3 on Groq hardware
llama-3.1-8b-instant     Ultra-fast 8B model
mixtral-8x7b-32768       Mixtral MoE on Groq
response = client.chat.completions.create(
    model="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: kimi
Model             Context  Description
moonshot-v1-8k    8K       Standard context
moonshot-v1-32k   32K      Extended context
moonshot-v1-128k  128K     Ultra-long context
response = client.chat.completions.create(
    model="kimi/moonshot-v1-128k",
    messages=[{"role": "user", "content": "Hello!"}],
)
Provider ID: minimax
Model          Description
abab6.5s-chat  Speed-optimized
abab6.5-chat   Balanced
abab5.5-chat   Previous generation
Accessible through the OpenAI-compatible endpoint; the native MiniMax endpoint is also supported.
response = client.chat.completions.create(
    model="minimax/abab6.5s-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)

Listing Available Models

Use the models endpoint to see all models accessible with your current API key:
curl https://api.arouter.com/v1/models \
  -H "Authorization: Bearer lr_live_xxxx"
Response:
{
  "object": "list",
  "data": [
    {
      "id": "openai/gpt-4o",
      "object": "model",
      "owned_by": "openai"
    },
    {
      "id": "anthropic/claude-sonnet-4-20250514",
      "object": "model",
      "owned_by": "anthropic"
    }
  ]
}
The models list is filtered by your API key's allowed_providers setting: if the key restricts providers, only models from the allowed providers appear in the response.
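Since every model ID carries its provider prefix, the response is easy to group client-side. A small sketch using the JSON shape shown above:

```python
from collections import defaultdict

# Abbreviated data in the shape returned by /v1/models.
models = [
    {"id": "openai/gpt-4o", "object": "model", "owned_by": "openai"},
    {"id": "anthropic/claude-sonnet-4-20250514", "object": "model", "owned_by": "anthropic"},
]

# Split each "provider/model" ID on the first slash and bucket by provider.
by_provider = defaultdict(list)
for m in models:
    provider, _, name = m["id"].partition("/")
    by_provider[provider].append(name)

# by_provider -> {"openai": ["gpt-4o"], "anthropic": ["claude-sonnet-4-20250514"]}
```

This is handy for checking at startup which providers a given key can actually reach.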