Structured Outputs force the model to return a JSON object that matches a schema you define. This eliminates the need to parse, validate, or retry unstructured model output. ARouter supports two structured output modes:
Mode          Description
json_object   Guarantees valid JSON; no schema enforcement
json_schema   Guarantees JSON matching your exact schema

Using Structured Outputs

Pass response_format in your request body:

JSON Object Mode

{
  "model": "openai/gpt-5.4",
  "messages": [
    {
      "role": "user",
      "content": "Return information about the planet Mars as JSON"
    }
  ],
  "response_format": { "type": "json_object" }
}
The model returns valid JSON, but the schema is not enforced:
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "{\"name\":\"Mars\",\"diameter_km\":6779,\"moons\":2,\"habitable\":false}"
      }
    }
  ]
}
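Because json_object mode guarantees only that the content is valid JSON, not that it has any particular shape, you should validate the fields client-side before using them. A minimal sketch with the standard library, using the example content string above (real model output may use different field names):

```python
import json

# The assistant's content is a JSON string, as in the example response above.
content = '{"name":"Mars","diameter_km":6779,"moons":2,"habitable":false}'

data = json.loads(content)

# json_object mode guarantees valid JSON, not a particular shape,
# so check for the fields you need before trusting them.
expected = {"name", "diameter_km", "moons", "habitable"}
missing = expected - data.keys()
if missing:
    raise ValueError(f"response missing fields: {missing}")

print(data["name"], data["moons"])  # Mars 2
```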

JSON Schema Mode

Use json_schema to enforce a strict schema:
{
  "model": "openai/gpt-5.4",
  "messages": [
    {
      "role": "system",
      "content": "You are a data extraction assistant."
    },
    {
      "role": "user",
      "content": "Extract the planet information: Mars is a terrestrial planet with 2 moons and a diameter of 6779 km."
    }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "planet_info",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "diameter_km": { "type": "number" },
          "moons": { "type": "integer" },
          "habitable": { "type": "boolean" }
        },
        "required": ["name", "diameter_km", "moons", "habitable"],
        "additionalProperties": false
      }
    }
  }
}
Response:
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "{\"name\":\"Mars\",\"diameter_km\":6779,\"moons\":2,\"habitable\":false}"
      },
      "finish_reason": "stop"
    }
  ]
}

Full Example

import json
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI(
    base_url="https://api.arouter.ai/v1",
    api_key="lr_live_xxxx",
)

# Define your schema using Pydantic
class PlanetInfo(BaseModel):
    name: str
    diameter_km: float
    moons: int
    habitable: bool

# Using OpenAI's parse() method (recommended)
response = client.beta.chat.completions.parse(
    model="openai/gpt-5.4",
    messages=[
        {"role": "system", "content": "You are a data extraction assistant."},
        {"role": "user", "content": "Tell me about Mars."},
    ],
    response_format=PlanetInfo,
)

planet = response.choices[0].message.parsed
print(planet.name)        # "Mars"
print(planet.moons)       # 2
print(planet.diameter_km) # 6779.0

# Or manually using response_format dict
response = client.chat.completions.create(
    model="openai/gpt-5.4",
    messages=[
        {"role": "user", "content": "Tell me about Mars as JSON."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "planet_info",
            "strict": True,
            "schema": PlanetInfo.model_json_schema(),
        },
    },
)
data = json.loads(response.choices[0].message.content)
planet = PlanetInfo(**data)

Model Support

json_schema mode with strict: true is supported by:
  • openai/gpt-5.4, openai/gpt-5.4-pro, openai/o3, openai/o4-mini
  • anthropic/claude-sonnet-4.6, anthropic/claude-opus-4.5
  • google/gemini-2.5-flash, google/gemini-2.5-pro
json_object mode (no schema enforcement) is more broadly supported. Check GET /v1/models for the latest capability information.
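If you want to check support at runtime rather than hard-coding model IDs, you can filter the GET /v1/models listing. The response shape sketched here is an assumption (an OpenAI-style data array with a hypothetical supported_parameters field), so verify the real field names against the endpoint before relying on this:

```python
def supports_json_schema(models_payload: dict, model_id: str) -> bool:
    """Check a /v1/models listing for structured output support.

    Assumes an OpenAI-style {"data": [...]} payload where each entry
    carries a hypothetical "supported_parameters" list -- confirm the
    actual field names against the live models endpoint.
    """
    for entry in models_payload.get("data", []):
        if entry.get("id") == model_id:
            return "response_format" in entry.get("supported_parameters", [])
    return False

# Example with a hand-written payload (no network call):
payload = {
    "data": [
        {"id": "openai/gpt-5.4", "supported_parameters": ["response_format"]},
        {"id": "example/legacy-model", "supported_parameters": []},
    ]
}
print(supports_json_schema(payload, "openai/gpt-5.4"))  # True
```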

Streaming with Structured Outputs

Structured outputs work with streaming: the JSON content arrives incrementally as deltas, and you assemble it client-side before parsing. Note that the buffer is not valid JSON until the stream completes.
const stream = await client.chat.completions.create({
  model: "openai/gpt-5.4",
  messages: [{ role: "user", content: "Tell me about Mars." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "planet_info",
      strict: true,
      schema: { /* ... */ },
    },
  },
  stream: true,
});

let jsonBuffer = "";
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) jsonBuffer += content;
}

const data = JSON.parse(jsonBuffer);
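The same assembly loop in Python, sketched against simulated delta strings rather than a live stream (with the SDK you would pass stream=True and read each chunk's delta content the same way):

```python
import json

# Simulated delta contents, in the order a real stream would deliver them:
chunks = ['{"name":', ' "Mars",', ' "moons": 2}']

json_buffer = ""
for content in chunks:
    # With the SDK: content = chunk.choices[0].delta.content
    json_buffer += content

# Only parse once the stream is complete -- partial buffers are invalid JSON.
data = json.loads(json_buffer)
print(data["moons"])  # 2
```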

Best Practices

  1. Use strict: true — This guarantees schema compliance. Without it, the model may return valid JSON that doesn’t match the schema exactly.
  2. Set additionalProperties: false — Required for strict mode. Prevents the model from adding extra keys.
  3. List all required fields explicitly — In strict mode, every field in properties should also be in required.
  4. Include a system prompt — Telling the model its role as a data extraction or structured output assistant improves reliability.
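Practices 2 and 3 can be checked mechanically before sending a request. A small sketch that raises if a schema dict violates the strict-mode rules (the helper name is ours, not part of any SDK):

```python
def check_strict_schema(schema: dict) -> None:
    """Raise ValueError if a JSON schema violates the strict-mode rules above."""
    if schema.get("additionalProperties") is not False:
        raise ValueError('strict mode requires "additionalProperties": false')
    props = set(schema.get("properties", {}))
    required = set(schema.get("required", []))
    if props != required:
        raise ValueError(f"every property must be listed as required; mismatch: {props ^ required}")

planet_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "moons": {"type": "integer"},
    },
    "required": ["name", "moons"],
    "additionalProperties": False,
}
check_strict_schema(planet_schema)  # passes silently
```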

Error Handling

If the model cannot produce valid JSON that matches your schema (e.g., a prompt that fundamentally conflicts with the schema), the response will have finish_reason: "length" or finish_reason: "content_filter". Always check finish_reason before parsing content.
response = client.chat.completions.create(...)

choice = response.choices[0]
if choice.finish_reason != "stop":
    raise ValueError(f"Unexpected finish reason: {choice.finish_reason}")

data = json.loads(choice.message.content)
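The check above can be wrapped into a reusable guard. This sketch works on any object shaped like the chat completion responses earlier in this page (choices[0].finish_reason / .message.content); the simulated response here is only for illustration:

```python
import json
from types import SimpleNamespace

def parse_structured(response) -> dict:
    """Validate finish_reason, then parse the assistant's JSON content."""
    choice = response.choices[0]
    if choice.finish_reason != "stop":
        raise ValueError(f"unexpected finish reason: {choice.finish_reason}")
    return json.loads(choice.message.content)

# Simulated response object, mirroring the example payloads above:
ok = SimpleNamespace(choices=[SimpleNamespace(
    finish_reason="stop",
    message=SimpleNamespace(content='{"name": "Mars", "moons": 2}'),
)])
print(parse_structured(ok)["name"])  # Mars
```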