Quick Start
Find your project’s OpenAI settings and update:
- Base URL: https://api.portkey.ai/v1
- API Key: your Portkey API key
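With just those two settings, any OpenAI-style request flows through Portkey. A minimal standard-library sketch (the API key is a placeholder, and `@openai-prod` is the example provider slug used in Setup below):

```python
import json
import urllib.request

BASE_URL = "https://api.portkey.ai/v1"
PORTKEY_API_KEY = "YOUR_PORTKEY_API_KEY"  # placeholder

payload = {
    # Model strings use the @provider-slug/model format described below.
    "model": "@openai-prod/gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {PORTKEY_API_KEY}",
    },
)
# urllib.request.urlopen(req) would send it; omitted here so the
# sketch runs without network access.
```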

All requests appear in Portkey logs, and you get:
- ✅ Full observability (costs, latency, logs)
- ✅ Access to 250+ LLM providers
- ✅ Automatic fallbacks and retries
- ✅ Budget controls per team/project
Why Add Portkey?
Enterprise Observability
Every request is logged with cost, latency, and token counts. Track usage across teams.
Multi-Provider Access
Switch between OpenAI, Anthropic, Google, and 250+ models without code changes.
Production Reliability
Automatic fallbacks, retries, load balancing—configured once, works everywhere.
Cost & Access Control
Budget limits per team. Rate limiting. Centralized credential management.
Setup
1. Add Provider in Model Catalog
- Go to Model Catalog → Add Provider
- Select your provider (OpenAI, Anthropic, Google, etc.)
- Choose existing credentials or create new ones by entering your provider API key
- Name your provider (e.g., openai-prod)

You’ll reference this provider by its slug: @openai-prod (or whatever you named it).
Complete Model Catalog Guide →
Set up budgets, rate limits, and manage credentials
2. Get Portkey API Key
Create your Portkey API key at app.portkey.ai/api-keys
3. Configure Your Application
Most OpenAI-compatible apps have settings for:
- Base URL / Endpoint
- API Key

Common Integration Patterns
Pattern 1: Direct Configuration (Recommended)
If your app allows a custom base URL and API key, set them to the Quick Start values above.

Pattern 2: With Config
If your app only accepts model names like gpt-4o:
- Create a config in Portkey dashboard:
- Use the config in your app:
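As a sketch, a config of this kind can pin the provider through override_params, so the app’s plain gpt-4o resolves to your catalog provider (field names follow Portkey’s config format, but verify the exact shape against the Configs docs; @openai-prod is the example slug from Setup):

```json
{
  "override_params": {
    "model": "@openai-prod/gpt-4o"
  }
}
```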
Pattern 3: Environment Variables
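For example, from Python, using the variable names the official OpenAI SDKs conventionally read (your app may expect different names, so check its docs):

```python
import os

# Assumed variable names (OpenAI SDK convention); your app may differ.
os.environ["OPENAI_BASE_URL"] = "https://api.portkey.ai/v1"
os.environ["OPENAI_API_KEY"] = "YOUR_PORTKEY_API_KEY"  # placeholder
```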
Many apps use environment variables for these settings.

Switching Providers
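Since the provider lives in the model string, a tiny helper (hypothetical, just to make the @provider-slug/model shape explicit) is enough:

```python
def portkey_model(provider_slug: str, model: str) -> str:
    """Build the full model string Portkey expects: @<provider-slug>/<model>."""
    return f"@{provider_slug.lstrip('@')}/{model}"

# Provider slugs are whatever you named them in Model Catalog.
print(portkey_model("openai-prod", "gpt-4o"))  # → @openai-prod/gpt-4o
print(portkey_model("@anthropic-prod", "claude-3-5-sonnet-latest"))
```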
Change the model string to switch providers.

Advanced Features via Configs
For production features like fallbacks, caching, and load balancing:
- Create a config in the Portkey dashboard
- Attach it to your API key defaults
- Or pass it via header: x-portkey-config: your-config-id
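As a hedged sketch, a fallback config that tries one catalog provider and falls back to another might look like this (field names follow Portkey’s documented strategy/targets shape, and the provider slugs are the hypothetical names from Setup; confirm both against the Configs docs):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@openai-prod" },
    { "provider": "@anthropic-prod" }
  ]
}
```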
Learn About Configs →
Fallbacks, retries, caching, load balancing, and more
Enterprise Governance
For teams and organizations:

Budget Controls
Set budget limits per team or project:
- Go to Model Catalog
- Create separate providers for each team
- Set budget and rate limits
- Distribute team-specific API keys
Budget Limits Guide →
Set up cost controls and alerts
Access Management
Control who can access which models:
- Model provisioning - Enable/disable models per provider
- API key scopes - Limit what each key can do
- Team workspaces - Isolate teams with separate workspaces
Access Control Guide →
Set up RBAC and team permissions
Usage Tracking
Track usage by team, project, or user:
- Add metadata to API keys
- Filter logs by metadata tags
- Monitor costs per team
- Set up alerts for unusual usage
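One way this looks in practice: send metadata as a JSON-encoded header so it shows up as filterable tags in logs (the x-portkey-metadata header name and the team/project keys below are illustrative; confirm against the Metadata guide):

```python
import json

# Hypothetical tags; any flat key-value pairs can serve as filters.
metadata = {"team": "search", "project": "rag-bot"}
headers = {
    "Authorization": "Bearer YOUR_PORTKEY_API_KEY",  # placeholder
    "x-portkey-metadata": json.dumps(metadata),
}
```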
Metadata Guide →
Track and filter by custom tags
Common Applications
Portkey works with:
✅ AI Tools - Cursor, Windsurf, Cline, Continue
✅ Chat UIs - Open WebUI, LibreChat, AnythingLLM
✅ Automation - n8n, Make, Zapier
✅ No-Code - LangFlow, Flowise, Dify
✅ Frameworks - Any OpenAI-compatible SDK
✅ Custom Apps - Your own applications

See specific integration guides:
Cursor
AI code editor integration
Cline
VS Code AI assistant
Open WebUI
Self-hosted chat interface
n8n
Workflow automation
Troubleshooting
Can’t find OpenAI settings?
Look for sections labeled:
- “OpenAI Compatible”
- “Custom Endpoint”
- “API Configuration”
- “LLM Settings”
- “Model Provider”
Model not working?
Make sure you’re using the full provider slug:
- ✅ @openai-prod/gpt-4o (correct)
- ❌ gpt-4o (needs provider slug)
Getting authentication errors?
- Verify your Portkey API key is correct
- Check that the base URL is exactly https://api.portkey.ai/v1
- Make sure the provider slug matches what’s in Model Catalog
Next Steps
Model Catalog
Set up providers and budgets
Configs
Configure fallbacks and routing
Observability
Track costs and performance
Guardrails
Add PII detection and filtering

