Portkey works with any tool or application that supports OpenAI-compatible APIs. Add enterprise features—observability, reliability, cost controls—with just 2 configuration changes.

Quick Start

Find your project’s OpenAI settings and update:
  1. Base URL: https://api.portkey.ai/v1
  2. API Key: Your Portkey API key
That’s it! Your app now routes through Portkey.

All requests appear in Portkey logs

You now get:
  • ✅ Full observability (costs, latency, logs)
  • ✅ Access to 250+ LLM providers
  • ✅ Automatic fallbacks and retries
  • ✅ Budget controls per team/project

Why Add Portkey?

Enterprise Observability

Every request logged with costs, latency, tokens. Track usage across teams.

Multi-Provider Access

Switch between OpenAI, Anthropic, Google, and 250+ models without code changes.

Production Reliability

Automatic fallbacks, retries, load balancing—configured once, works everywhere.

Cost & Access Control

Budget limits per team. Rate limiting. Centralized credential management.

Setup

1. Add Provider in Model Catalog

  1. Go to Model Catalog → Add Provider
  2. Select your provider (OpenAI, Anthropic, Google, etc.)
  3. Choose existing credentials or create new by entering your API keys
  4. Name your provider (e.g., openai-prod)
Your provider slug will be @openai-prod (or whatever you named it).

Complete Model Catalog Guide →

Set up budgets, rate limits, and manage credentials

2. Get Portkey API Key

Create your Portkey API key at app.portkey.ai/api-keys

3. Configure Your Application

Most OpenAI-compatible apps have settings for:
  • Base URL / Endpoint: https://api.portkey.ai/v1
  • API Key: your Portkey API key (from step 2)
  • Model (if configurable): @openai-prod/gpt-4o
If your app only accepts a bare model name (like gpt-4o), use a Portkey config to set your default model.

Common Integration Patterns

Pattern 1: Direct Configuration

If your app allows a custom base URL and API key:
Base URL: https://api.portkey.ai/v1
API Key: PORTKEY_API_KEY
Model: @openai-prod/gpt-4o
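Under the hood, this pattern just sends a standard OpenAI-style chat completion request to Portkey's base URL. A minimal sketch using only the Python standard library (the API key value is a placeholder; the actual send is commented out since it needs a live key):

```python
import json
import urllib.request

PORTKEY_API_KEY = "PORTKEY_API_KEY"  # placeholder -- use your real Portkey API key

# An OpenAI-style chat completion payload, with the model set to the
# full provider slug so Portkey knows where to route it.
payload = {
    "model": "@openai-prod/gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    "https://api.portkey.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {PORTKEY_API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; any OpenAI-compatible SDK
# builds an equivalent request once you point it at the Portkey base URL.
```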

Pattern 2: With Config

If your app only accepts model names like gpt-4o:
  1. Create a config in Portkey dashboard:
{
  "override_params": {
    "model": "@openai-prod/gpt-4o"
  }
}
  2. Use the config in your app:
Base URL: https://api.portkey.ai/v1
API Key: PORTKEY_API_KEY
Model: gpt-4o  (the config will override this)
Add config to API key defaults or pass via header.
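When passing the config per request, it rides along as a header next to your normal auth. A sketch of the header set, assuming a hypothetical config ID (`pc-my-config-id`) from the dashboard:

```python
import json

# Headers for routing a request through a saved Portkey config.
# "pc-my-config-id" is a hypothetical config ID; the key is a placeholder.
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer PORTKEY_API_KEY",
    "x-portkey-config": "pc-my-config-id",
}

body = json.dumps({
    "model": "gpt-4o",  # replaced by the config's override_params
    "messages": [{"role": "user", "content": "Hello!"}],
})
```

If the config is attached to your API key's defaults instead, the header is unnecessary and the plain OpenAI-format request is enough.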

Pattern 3: Environment Variables

Many apps use environment variables:
OPENAI_API_BASE=https://api.portkey.ai/v1
OPENAI_API_KEY=PORTKEY_API_KEY
OPENAI_MODEL=@openai-prod/gpt-4o
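Apps that use this pattern read the variables at startup and build their endpoint from them. A small sketch of what that lookup amounts to (the variable names are the OpenAI-style ones shown above; the key is a placeholder):

```python
import os

# Simulate the environment the app would be launched with.
os.environ["OPENAI_API_BASE"] = "https://api.portkey.ai/v1"
os.environ["OPENAI_API_KEY"] = "PORTKEY_API_KEY"  # placeholder
os.environ["OPENAI_MODEL"] = "@openai-prod/gpt-4o"

# What an OpenAI-compatible app typically does with them:
base_url = os.environ["OPENAI_API_BASE"]
model = os.environ["OPENAI_MODEL"]
endpoint = f"{base_url}/chat/completions"
```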

Switching Providers

Change the model string to switch providers:
@openai-prod/gpt-4o        # OpenAI
@anthropic-prod/claude-sonnet-4    # Anthropic
@google-prod/gemini-2.0-flash      # Google
All without changing your application code!
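The model string packs both pieces of routing information: the provider slug (before the `/`) and the model name (after it). A sketch of that structure, with a helper name that is mine, not a Portkey API:

```python
def split_model_slug(model: str) -> tuple[str, str]:
    """Split "@provider-slug/model-name" into (provider, model).

    Illustrative helper -- Portkey does this parsing server-side.
    """
    if not model.startswith("@") or "/" not in model:
        raise ValueError(f"expected '@provider/model', got {model!r}")
    provider, _, name = model.partition("/")
    return provider.lstrip("@"), name

print(split_model_slug("@openai-prod/gpt-4o"))      # ('openai-prod', 'gpt-4o')
print(split_model_slug("@google-prod/gemini-2.0-flash"))
```

Swapping providers is therefore only ever a one-string change in your app's settings.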

Advanced Features via Configs

For production features like fallbacks, caching, and load balancing:
  1. Create a config in Portkey dashboard
  2. Attach it to your API key defaults
  3. Or pass via header: x-portkey-config: your-config-id
Example config with fallbacks:
{
  "strategy": {"mode": "fallback"},
  "targets": [
    {"override_params": {"model": "@openai-prod/gpt-4o"}},
    {"override_params": {"model": "@anthropic-prod/claude-sonnet-4"}}
  ]
}
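The fallback strategy tries each target in order and returns the first success. A rough sketch of those semantics (this is an illustration of the behavior, not Portkey's implementation; the failure is simulated):

```python
targets = ["@openai-prod/gpt-4o", "@anthropic-prod/claude-sonnet-4"]

def call_model(model: str) -> str:
    # Stand-in for a real request; pretend the first provider is down.
    if model.startswith("@openai-prod"):
        raise RuntimeError("provider unavailable")
    return f"response from {model}"

def with_fallback(targets: list[str]) -> str:
    last_error = None
    for model in targets:
        try:
            return call_model(model)
        except RuntimeError as err:
            last_error = err  # move on to the next target
    raise last_error

result = with_fallback(targets)  # falls through to the Anthropic target
```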

Learn About Configs →

Fallbacks, retries, caching, load balancing, and more

Enterprise Governance

For teams and organizations:

Budget Controls

Set budget limits per team or project:
  1. Go to Model Catalog
  2. Create separate providers for each team
  3. Set budget and rate limits
  4. Distribute team-specific API keys

Budget Limits Guide →

Set up cost controls and alerts

Access Management

Control who can access which models:
  • Model provisioning - Enable/disable models per provider
  • API key scopes - Limit what each key can do
  • Team workspaces - Isolate teams with separate workspaces

Access Control Guide →

Set up RBAC and team permissions

Usage Tracking

Track usage by team, project, or user:
  • Add metadata to API keys
  • Filter logs by metadata tags
  • Monitor costs per team
  • Set up alerts for unusual usage
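Metadata travels with each request as a JSON-encoded header. A sketch, assuming Portkey's x-portkey-metadata header; the tag keys here (_user, team, project) are illustrative examples:

```python
import json

# Tag a request with team/project metadata so it can be filtered
# later in Portkey logs and cost dashboards.
metadata = {"_user": "alice", "team": "search", "project": "rag-bot"}
headers = {
    "Authorization": "Bearer PORTKEY_API_KEY",  # placeholder
    "x-portkey-metadata": json.dumps(metadata),
}
```

Attaching the same metadata to an API key's defaults applies it to every request made with that key.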

Metadata Guide →

Track and filter by custom tags

Common Applications

Portkey works with:
  • AI Tools - Cursor, Windsurf, Cline, Continue
  • Chat UIs - Open WebUI, LibreChat, AnythingLLM
  • Automation - n8n, Make, Zapier
  • No-Code - LangFlow, Flowise, Dify
  • Frameworks - Any OpenAI-compatible SDK
  • Custom Apps - Your own applications
See the specific integration guides for setup details.

Troubleshooting

Can’t find OpenAI settings?

Look for sections labeled:
  • “OpenAI Compatible”
  • “Custom Endpoint”
  • “API Configuration”
  • “LLM Settings”
  • “Model Provider”

Model not working?

Make sure you’re using the full provider slug:
  • ✅ @openai-prod/gpt-4o (correct)
  • ❌ gpt-4o (needs provider slug)
Or use a config to set the default model.

Getting authentication errors?

  1. Verify your Portkey API key is correct
  2. Check that base URL is exactly: https://api.portkey.ai/v1
  3. Make sure provider slug matches what’s in Model Catalog

Next Steps

Questions? Join our Discord Community or check out more integrations.