ARouter’s provider proxy forwards requests directly to any supported provider’s native API, authenticated with your ARouter API key. You don’t need individual API keys for each provider.

Endpoint

POST /{provider}/{path}
GET  /{provider}/{path}
Where {provider} is the provider slug and {path} is the provider’s native API path.
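The mapping is purely mechanical: the proxy URL is the provider’s native path appended to the ARouter host under the provider slug. A minimal sketch (the api.arouter.ai host comes from the examples below; the helper name is ours):

```python
def proxy_url(provider: str, path: str) -> str:
    """Build an ARouter proxy URL from a provider slug and a native API path."""
    return f"https://api.arouter.ai/{provider}/{path.lstrip('/')}"

# e.g. the Anthropic Messages endpoint:
proxy_url("anthropic", "/v1/messages")
```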

When to Use the Provider Proxy

Use case | Recommended approach
Standard LLM chat completions | /v1/chat/completions with provider/model format
Provider-specific features (e.g., Anthropic Batches, Gemini cached content) | /{provider}/{path} proxy
Native streaming format | /{provider}/{path} proxy
Provider-specific parameters not in the ARouter schema | /{provider}/{path} proxy

Supported Providers

Provider | Slug | Base URL
OpenAI | openai | https://api.openai.com
Anthropic | anthropic | https://api.anthropic.com
Google Gemini | google | https://generativelanguage.googleapis.com
DeepSeek | deepseek | https://api.deepseek.com
xAI | xai | https://api.x.ai
Kimi (Moonshot) | kimi | https://api.moonshot.cn
MiniMax | minimax | https://api.minimax.chat
Mistral | mistral | https://api.mistral.ai
Groq | groq | https://api.groq.com
Cohere | cohere | https://api.cohere.com
NVIDIA | nvidia | https://integrate.api.nvidia.com
Dashscope (Alibaba) | dashscope | https://dashscope.aliyuncs.com
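For migrating existing code, the table above amounts to a base-URL swap: replace the native host with api.arouter.ai/{slug} and keep the rest of the URL. A sketch of that rewrite (the slug-to-host mapping is copied from the table, abbreviated here; to_proxy is a hypothetical helper, not part of ARouter):

```python
# Native base URLs keyed by ARouter provider slug (subset of the table above).
NATIVE_BASES = {
    "openai": "https://api.openai.com",
    "anthropic": "https://api.anthropic.com",
    "google": "https://generativelanguage.googleapis.com",
    "deepseek": "https://api.deepseek.com",
}

def to_proxy(native_url: str) -> str:
    """Rewrite a provider's native API URL to its ARouter proxy equivalent."""
    for slug, base in NATIVE_BASES.items():
        if native_url.startswith(base):
            return f"https://api.arouter.ai/{slug}{native_url[len(base):]}"
    raise ValueError(f"no known provider base URL in {native_url!r}")
```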

Examples

OpenAI — Chat Completions

curl https://api.arouter.ai/openai/v1/chat/completions \
  -H "Authorization: Bearer lr_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
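The same call from Python using only the standard library, shown here up to request construction (the key and model are the placeholders from the curl example; uncomment the last lines to actually send it):

```python
import json
import urllib.request

payload = {
    "model": "gpt-5.4",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.arouter.ai/openai/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer lr_live_xxxx",
        "Content-Type": "application/json",
    },
    method="POST",
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```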

Anthropic — Messages

curl https://api.arouter.ai/anthropic/v1/messages \
  -H "Authorization: Bearer lr_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4.6",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Anthropic — Message Batches

# Create a batch
curl https://api.arouter.ai/anthropic/v1/messages/batches \
  -X POST \
  -H "Authorization: Bearer lr_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "requests": [
      {
        "custom_id": "request-1",
        "params": {
          "model": "claude-sonnet-4.6",
          "max_tokens": 1024,
          "messages": [{"role": "user", "content": "Hello!"}]
        }
      }
    ]
  }'

# Check batch status
curl https://api.arouter.ai/anthropic/v1/messages/batches/msgbatch_xxx \
  -H "Authorization: Bearer lr_live_xxxx"
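Batches are processed asynchronously, so clients typically poll the status endpoint above until processing finishes. A sketch of the completion check (the processing_status field and its ended value follow Anthropic’s Message Batches API; since the proxy passes responses through unchanged, verify the exact field names against Anthropic’s docs):

```python
def batch_finished(status: dict) -> bool:
    """True once an Anthropic message batch has stopped processing."""
    return status.get("processing_status") == "ended"

# Example status payloads as returned through the proxy:
batch_finished({"id": "msgbatch_xxx", "processing_status": "in_progress"})  # False
batch_finished({"id": "msgbatch_xxx", "processing_status": "ended"})        # True
```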

Google Gemini — Generate Content

curl "https://api.arouter.ai/google/v1beta/models/gemini-2.5-flash:generateContent" \
  -X POST \
  -H "Authorization: Bearer lr_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {"role": "user", "parts": [{"text": "Hello!"}]}
    ]
  }'

Google Gemini — List Models

curl "https://api.arouter.ai/google/v1beta/models" \
  -H "Authorization: Bearer lr_live_xxxx"

DeepSeek — Chat Completions

curl https://api.arouter.ai/deepseek/v1/chat/completions \
  -H "Authorization: Bearer lr_live_xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Notes

  • Authentication: ARouter injects the provider’s API key automatically. You only need your ARouter key.
  • Native format: Requests and responses are passed through as-is — ARouter does not transform them.
  • Usage tracking: Token usage is still recorded against your ARouter account.
  • Path passthrough: Everything after /{provider}/ is forwarded to the provider unchanged.
  • Anthropic version header: When proxying to Anthropic, the anthropic-version header is injected automatically. You don’t need to include it.