Query Parameters
The Models API supports query parameters to filter results.
output_modalities
Filter models by their output capabilities. Accepts a comma-separated list of modalities or "all" to include every model.
| Value | Description |
|---|---|
| text | Models that produce text output (default) |
| image | Models that generate images |
| audio | Models that produce audio output |
| embeddings | Embedding models |
| all | Include all models, skip modality filtering |
supported_parameters
Filter models by the API parameters they support. For example, to find models that support tool calling:
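For instance, the filter can be assembled as a query string in Python. The base URL below is a placeholder for this sketch, not the documented endpoint:

```python
from urllib.parse import urlencode

# Placeholder base URL -- substitute the actual ARouter endpoint.
BASE_URL = "https://api.arouter.example/v1/models"

# Keep only models that support tool calling.
params = {"supported_parameters": "tools"}
url = f"{BASE_URL}?{urlencode(params)}"
print(url)  # .../v1/models?supported_parameters=tools
```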
List Models
Response Format
Model Object Schema
Each model in the data array contains the following fields:
| Field | Type | Description |
|---|---|---|
| id | string | Unique model identifier used in API requests, e.g. "openai/gpt-5.4" |
| canonical_slug | string | Permanent slug for the model that never changes |
| name | string | Human-readable display name |
| created | number | Unix timestamp of when the model was added to ARouter |
| description | string | Detailed description of the model's capabilities |
| context_length | number | Maximum context window size in tokens |
| architecture | Architecture | Technical capabilities object |
| pricing | Pricing | Cost structure for using this model (USD per token) |
| top_provider | TopProvider | Configuration details for the primary provider |
| per_request_limits | object \| null | Rate limiting information (null if no limits) |
| supported_parameters | string[] | Array of supported API parameters |
| default_parameters | object \| null | Default parameter values (null if none) |
| expiration_date | string \| null | Deprecation date (null if not deprecated) |
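As an illustration, a single entry in the data array might look like the sketch below. All field values, and the shapes of the nested architecture/pricing objects, are invented for this example:

```python
# Illustrative model entry; every value here is invented.
model = {
    "id": "openai/gpt-5.4",
    "canonical_slug": "openai/gpt-5.4",
    "name": "GPT-5.4",
    "created": 1700000000,
    "description": "Example description.",
    "context_length": 128000,
    "architecture": {"output_modalities": ["text"]},       # assumed shape
    "pricing": {"prompt": "0.000003", "completion": "0.000015"},  # assumed shape
    "top_provider": {"context_length": 128000},            # assumed shape
    "per_request_limits": None,
    "supported_parameters": ["tools", "temperature", "max_tokens"],
    "default_parameters": None,
    "expiration_date": None,
}

print(model["id"], model["context_length"])
```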
Architecture Object
Pricing Object
All pricing values are in USD per token. A value of "0" means the feature is free.
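Because the values are USD-per-token strings, estimating a request's cost means converting them to floats and multiplying by token counts. The prompt/completion field names below are assumptions for this sketch:

```python
# Assumed pricing field names; values are USD per token, as strings.
pricing = {"prompt": "0.000003", "completion": "0.000015"}

prompt_tokens = 1000
completion_tokens = 500

cost = (float(pricing["prompt"]) * prompt_tokens
        + float(pricing["completion"]) * completion_tokens)
print(f"${cost:.6f}")  # $0.010500
```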
Top Provider Object
Supported Parameters
The supported_parameters array lists which OpenAI-compatible parameters work with a model:
| Parameter | Description |
|---|---|
| tools | Function calling capabilities |
| tool_choice | Tool selection control |
| max_tokens | Response length limiting |
| temperature | Randomness control |
| top_p | Nucleus sampling |
| reasoning | Internal reasoning mode |
| include_reasoning | Include reasoning in response |
| structured_outputs | JSON schema enforcement |
| response_format | Output format specification |
| stop | Custom stop sequences |
| frequency_penalty | Repetition reduction |
| presence_penalty | Topic diversity |
| seed | Deterministic outputs |
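Client-side, the supported_parameters array can be used to narrow a fetched model list, for example keeping only models that support tool calling. A minimal sketch with invented model entries:

```python
# Invented entries standing in for the API's data array.
models = [
    {"id": "alpha/model-a", "supported_parameters": ["tools", "temperature"]},
    {"id": "beta/model-b", "supported_parameters": ["temperature", "top_p"]},
]

# Keep models whose supported_parameters include "tools".
tool_models = [m for m in models if "tools" in m["supported_parameters"]]
print([m["id"] for m in tool_models])  # ['alpha/model-a']
```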
Using Models
Use the id directly as the model field in your requests:
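In Python, for example, the id plugs straight into the model field of a chat completion payload. The endpoint path and auth header are assumptions for this sketch:

```python
import json
import urllib.request

# Placeholder endpoint and API key -- substitute your own.
API_URL = "https://api.arouter.example/v1/chat/completions"
API_KEY = "your-api-key"

payload = {
    "model": "openai/gpt-5.4",  # a model id from the list
    "messages": [{"role": "user", "content": "Hello!"}],
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
)
# response = urllib.request.urlopen(request)  # uncomment to send
```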
Filtering by Supported Parameters
Find models that support tool calling:
Auto Routing
In addition to specific model IDs, ARouter supports automatic model selection:
| Model | Description |
|---|---|
"auto" | ARouter automatically selects the best available model for your request |
The model field in the response always shows the model that was actually used. See Model Routing for details.
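Requesting auto routing is just a matter of passing "auto" as the model; the payload mirrors a normal chat request, and the response fragment below is an invented example of the model field reporting the routed choice:

```python
payload = {
    "model": "auto",  # let ARouter pick the model
    "messages": [{"role": "user", "content": "Summarize this."}],
}

# Invented response fragment: model reports what was actually used.
response = {"model": "openai/gpt-5.4", "choices": []}
print(f"Routed to: {response['model']}")
```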
Model Variants
You can append suffixes to any model ID to influence routing behavior:
| Suffix | Effect |
|---|---|
| :nitro | Route to the highest-throughput instance — optimized for speed |
| :floor | Route to the lowest-cost instance — optimized for price |
| :free | Route to the free-tier instance (rate limits apply) |
| :thinking | Enable extended reasoning / chain-of-thought mode |
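A variant is just the base ID with the suffix appended, so it can be built with plain string formatting:

```python
base_id = "openai/gpt-5.4"

# Append a variant suffix to influence routing.
fast = f"{base_id}:nitro"    # optimized for speed
cheap = f"{base_id}:floor"   # optimized for price
print(fast, cheap)  # openai/gpt-5.4:nitro openai/gpt-5.4:floor
```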
Tokenization
Different models tokenize text differently. Some models (GPT, Claude, Llama) break text into multi-character chunks; others tokenize by character (PaLM). This means token counts — and therefore costs — vary between models even for identical inputs and outputs. Costs are billed according to the tokenizer for the model in use. Use the usage field in each response to get the exact token counts:
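A sketch of reading the usage field follows; the exact field names assume the common OpenAI-style usage shape, and the response fragment is invented:

```python
# Invented response fragment with a usage block (assumed OpenAI-style shape).
response = {
    "model": "openai/gpt-5.4",
    "usage": {
        "prompt_tokens": 1000,
        "completion_tokens": 500,
        "total_tokens": 1500,
    },
}

usage = response["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```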
Notes
- The model list is filtered by your account’s enabled providers. If a provider is not enabled, its models will not appear.
- New models are added automatically as providers release them.
- Use model IDs from this list directly in the model field of your chat completion requests.