ARouter gives you one API key and one endpoint for leading LLM providers. If you already use an OpenAI-compatible client, the migration is usually just base_url, api_key, and optionally your app attribution headers.
1. Get Your API Key
Sign up at the ARouter Dashboard and create an API key.
Your key will look like lr_live_xxxxxxxxxxxx.
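Rather than hardcoding the key, you can load it from the environment. A minimal sketch, assuming you store it under a variable named AROUTER_API_KEY (that name is our convention for this example, not something ARouter requires):

```python
import os

def load_api_key(env=os.environ):
    """Fetch the ARouter key from the environment and sanity-check its format."""
    key = env.get("AROUTER_API_KEY")
    if key is None:
        raise RuntimeError("Set AROUTER_API_KEY to your lr_live_... key")
    # Keys issued by the ARouter Dashboard start with the lr_live_ prefix.
    if not key.startswith("lr_live_"):
        raise RuntimeError("Unexpected key format; ARouter keys start with lr_live_")
    return key
```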
2. Install the ARouter SDK
The first-party @arouter/sdk works in any Node.js or TypeScript project. Install it with npm install @arouter/sdk (or the equivalent yarn add / pnpm add command).
import { ARouter } from "@arouter/sdk";

const client = new ARouter({
  apiKey: "lr_live_xxxx",
  baseURL: "https://api.arouter.ai",
});

const response = await client.chatCompletion({
  model: "openai/gpt-5.4",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
See the Node.js / TypeScript SDK guide for streaming, key management, and x402 payment examples.
3. Use Your Existing SDK
Already using an OpenAI, Anthropic, or Go SDK? The only required changes are the base URL and the API key; the attribution headers are optional.
The examples below cover Python (OpenAI), Node.js (OpenAI), Python (Anthropic), Go, cURL, and fetch.
Python (OpenAI)

from openai import OpenAI

client = OpenAI(
    base_url="https://api.arouter.ai/v1",
    api_key="lr_live_xxxx",
    default_headers={
        "HTTP-Referer": "https://myapp.com",  # Optional
        "X-Title": "My AI App",  # Optional
    },
)

response = client.chat.completions.create(
    model="openai/gpt-5.4",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
Node.js (OpenAI)

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.arouter.ai/v1",
  apiKey: "lr_live_xxxx",
  defaultHeaders: {
    "HTTP-Referer": "https://myapp.com", // Optional
    "X-Title": "My AI App", // Optional
  },
});

const response = await client.chat.completions.create({
  model: "openai/gpt-5.4",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
Python (Anthropic)

import anthropic

client = anthropic.Anthropic(
    base_url="https://api.arouter.ai",
    api_key="lr_live_xxxx",
)

message = client.messages.create(
    model="claude-sonnet-4.6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)

print(message.content[0].text)
Go

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/arouter-ai/arouter-go"
)

func main() {
	client := arouter.NewClient("lr_live_xxxx",
		arouter.WithBaseURL("https://api.arouter.ai/v1"),
		arouter.WithHeader("HTTP-Referer", "https://myapp.com"),
		arouter.WithHeader("X-Title", "My AI App"),
	)

	resp, err := client.CreateChatCompletion(context.Background(), arouter.ChatCompletionRequest{
		Model: "openai/gpt-5.4",
		Messages: []arouter.Message{
			{Role: "user", Content: "Hello!"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(resp.Choices[0].Message.Content)
}
cURL

curl https://api.arouter.ai/v1/chat/completions \
  -H "Authorization: Bearer lr_live_xxxx" \
  -H "HTTP-Referer: https://myapp.com" \
  -H "X-Title: My AI App" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
fetch

const response = await fetch("https://api.arouter.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": "Bearer lr_live_xxxx",
    "Content-Type": "application/json",
    "HTTP-Referer": "https://myapp.com", // optional: source tracking
    "X-Title": "My AI App", // optional: display name
  },
  body: JSON.stringify({
    model: "openai/gpt-5.4",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
HTTP-Referer and X-Title are optional. Include them if you want ARouter Dashboard analytics to attribute requests to a specific app or workflow.
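If you set these headers in more than one place, a small helper keeps them consistent. A sketch (the function is ours, not part of any SDK); since both headers are optional, it only emits the ones you actually provide:

```python
def attribution_headers(referer=None, title=None):
    """Build the optional ARouter attribution headers, omitting any that are unset."""
    headers = {}
    if referer is not None:
        headers["HTTP-Referer"] = referer  # which site or app the traffic comes from
    if title is not None:
        headers["X-Title"] = title  # display name shown in Dashboard analytics
    return headers
```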
4. Use the API Directly
If you prefer not to install an SDK, you can call ARouter with any HTTP client:
import json
import requests

response = requests.post(
    "https://api.arouter.ai/v1/chat/completions",
    headers={
        "Authorization": "Bearer lr_live_xxxx",
        "HTTP-Referer": "https://myapp.com",  # Optional
        "X-Title": "My AI App",  # Optional
        "Content-Type": "application/json",
    },
    data=json.dumps({
        "model": "openai/gpt-5.4",
        "messages": [{"role": "user", "content": "Hello!"}],
    }),
)

print(response.json()["choices"][0]["message"]["content"])
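If you would rather avoid third-party dependencies entirely, the same request can be built with only the Python standard library. A sketch using urllib; for illustration the request is constructed but not sent:

```python
import json
import urllib.request

payload = {
    "model": "openai/gpt-5.4",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Build the POST request with the same endpoint and headers as above.
req = urllib.request.Request(
    "https://api.arouter.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer lr_live_xxxx",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send it:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```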
5. Try Different Providers
With ARouter, switching providers is just changing the model string:
# OpenAI
response = client.chat.completions.create(model="openai/gpt-5.4", ...)

# Anthropic (via the OpenAI SDK!)
response = client.chat.completions.create(model="anthropic/claude-sonnet-4.6", ...)

# Google Gemini
response = client.chat.completions.create(model="google/gemini-2.5-flash", ...)

# DeepSeek
response = client.chat.completions.create(model="deepseek/deepseek-v3.2", ...)
If you omit the provider prefix (e.g. just "gpt-5.4"), ARouter defaults to OpenAI.
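That defaulting rule can be made explicit in your own code. A hypothetical helper (ours, not part of any ARouter SDK) that mirrors the behavior described above:

```python
def qualify_model(model, default_provider="openai"):
    """Return a provider-qualified model string, applying ARouter's
    default provider when no prefix is given."""
    if "/" in model:
        return model  # already qualified, e.g. "anthropic/claude-sonnet-4.6"
    return f"{default_provider}/{model}"
```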
What’s Next?
Authentication: Learn about the three auth methods
Model Routing: Understand the provider/model format and multi-model routing
Billing & Credits: Review pricing, balance, and credit rules
Request Attribution: Attribute traffic to your app in Dashboard analytics
Streaming: Enable real-time streaming responses
Tool Calling: Give models access to your functions
Structured Outputs: Force models to return JSON that conforms to a schema
Prompt Caching: Reduce cost and latency for repeated prompts
Key Management: Create scoped keys for your team
FAQ: Common questions answered