Instructor extracts structured outputs from LLMs, available in Python & JS.

Quick Start

import instructor
from pydantic import BaseModel
from openai import OpenAI

client = instructor.from_openai(OpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key="PORTKEY_API_KEY",  # your Portkey API key
    default_headers={"x-portkey-provider": "@openai-prod"}  # provider slug from Model Catalog
))

class User(BaseModel):
    name: str
    age: int

user = client.chat.completions.create(
    model="gpt-4o",
    response_model=User,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}]
)

print(user.name, user.age)  # John Doe 30
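Because `response_model` is a plain Pydantic model, the structured output is validated client-side: Instructor parses the LLM's response through the model, and a validation failure can be fed back to the LLM for a retry (via the `max_retries` parameter). A minimal sketch of that validation step, with no API call involved (assumes Pydantic v2, which Instructor requires):

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# Well-formed LLM output parses cleanly into the model:
user = User.model_validate({"name": "John Doe", "age": 30})
print(user.name, user.age)  # John Doe 30

# Malformed output raises ValidationError, which Instructor can
# send back to the model as feedback for another attempt:
try:
    User.model_validate({"name": "John Doe", "age": "thirty"})
except ValidationError:
    print("validation failed")
```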

Setup

1. Add Provider: Go to Model Catalog → Add Provider → Select your provider → Enter credentials.

2. Get Portkey API Key: Create your API key at app.portkey.ai/api-keys.

Switch Providers

Change the provider header to use different LLMs:
client = instructor.from_openai(OpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key="PORTKEY_API_KEY",
    default_headers={"x-portkey-provider": "@anthropic-prod"}
))

user = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    response_model=User,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}]
)

Add Caching

Reduce costs with response caching:
from portkey_ai import createHeaders

cache_config = {"cache": {"mode": "simple"}}

client = instructor.from_openai(OpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key="PORTKEY_API_KEY",
    default_headers=createHeaders(
        provider="@openai-prod",
        config=cache_config  # Or use config ID from Portkey app
    )
))
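Beyond `simple` mode (exact prompt match), Portkey configs also support semantic caching and a cache TTL. A hedged sketch of a richer cache config — the `semantic` mode and `max_age` option are taken from Portkey's config schema, but check your plan supports them:

```python
# Cache config with semantic matching and a TTL.
cache_config = {
    "cache": {
        "mode": "semantic",  # match on meaning, not exact prompt text
        "max_age": 3600,     # expire cached responses after one hour (seconds)
    }
}
```

Pass it through `createHeaders(config=cache_config)` exactly as in the example above.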

Advanced Features

Add fallbacks, load balancing, timeouts, or automatic retries via Portkey Configs — defined inline as a dict or saved in the Portkey app and referenced by config ID.
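As an illustration, here is a hedged sketch of a config combining fallback and retries. The structure (`strategy`, `retry`, `targets`) follows Portkey's config schema; the `@openai-prod` / `@anthropic-prod` slugs are assumptions — substitute the provider slugs from your own Model Catalog:

```python
# Try OpenAI first; on failure, retry, then fall back to Anthropic.
fallback_config = {
    "strategy": {"mode": "fallback"},  # try targets in order
    "retry": {"attempts": 3},          # retry transient failures
    "targets": [
        {"provider": "@openai-prod"},
        {"provider": "@anthropic-prod"},
    ],
}
```

Pass this dict (or its saved config ID) to `createHeaders(config=...)` when building the client, as in the caching example.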