- 1600+ LLMs through one interface - switch providers instantly
- Observability - track costs, tokens, and latency for every request
- Reliability - automatic fallbacks, retries, and caching
- Governance - budget limits, usage tracking, and team access controls
For enterprise deployments across teams, see Enterprise Governance.
1. Setup
1. Add Provider: Go to Model Catalog → Add Provider.
2. Configure Credentials: Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like openai-prod.
3. Get Portkey API Key: Go to API Keys and generate your Portkey API key.
2. Configure Codex CLI
Create or edit ~/.codex/config.json:
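A minimal sketch of what this file can look like, assuming the open source Codex CLI's custom-provider config format and Portkey's @provider-slug/model naming from the Model Catalog (openai-prod is the slug created during setup):

```json
{
  "model": "@openai-prod/gpt-4o",
  "provider": "portkey",
  "providers": {
    "portkey": {
      "name": "Portkey",
      "baseURL": "https://api.portkey.ai/v1",
      "envKey": "PORTKEY_API_KEY"
    }
  }
}
```

envKey tells Codex CLI which environment variable holds the credential, so your Portkey API key never lives in the config file itself.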
Set your Portkey API key in the shell (export PORTKEY_API_KEY=...) and add the line to ~/.zshrc or ~/.bashrc for persistence.

Switch Providers
Change models by updating the model field in your config:
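For example, to route through a hypothetical anthropic-prod provider from your Model Catalog:

```json
{
  "model": "@anthropic-prod/claude-sonnet-4"
}
```

Only the model string changes; the providers block and API key stay the same.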
Want fallbacks, load balancing, or caching? Create a Portkey Config and attach it to your API key. See Enterprise Governance for examples.
3. Enterprise Governance
For organizations deploying Codex CLI across development teams, Portkey provides:
- Cost Management: Budget limits and spend tracking per team
- Access Control: Team-specific API keys with role-based permissions
- Usage Analytics: Track patterns across teams and projects
- Model Management: Control which models teams can access
Set Budget Limits Per Team
Create team-specific providers with budget and rate limits:
- Go to Model Catalog → Add Provider
- Create providers for each team (e.g., openai-frontend, anthropic-backend)
- Set budget and rate limits per provider

Control Model Access
Provision only the models each team needs. Each team's provider slug gives access only to their approved models, as in the sketch below.
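For instance, a developer on the frontend team would point their model string at the team's slug (names hypothetical):

```json
{
  "model": "@openai-frontend/gpt-4o"
}
```

Requests for models not provisioned under that slug are refused by the gateway.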

Add Reliability Features
Use Portkey Configs for fallbacks, load balancing, and caching. For example, to load-balance across providers, create a config like the sketch below at Configs.
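A sketch of such a config using Portkey's strategy/targets schema (slugs and weights are illustrative; this splits traffic 70/30):

```json
{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "provider": "@openai-prod", "weight": 0.7 },
    { "provider": "@anthropic-prod", "weight": 0.3 }
  ]
}
```

Attach the saved config to your Portkey API key and Codex CLI requests are distributed automatically, with no client-side changes.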
Create Team API Keys
Generate a separate Portkey API key for each team from the API Keys page, and attach the team's provider or config so usage is tracked and limited per team.
Monitor Usage
Track everything in the Portkey dashboard:
- Cost by team
- Model usage patterns
- Request volumes and errors
- Detailed logs for debugging

Portkey Features
Observability
Track 40+ metrics including cost, tokens, and latency across all providers. Filter by team or project using metadata.
Request Logs
Every request logged with complete details:
- Full request/response payloads
- Cost breakdown
- Performance metrics

1600+ LLMs
Switch between any model through one interface. See Supported Providers to view all 1600+ supported models.
Metadata Tracking
Track custom metrics:
- Language and framework usage
- Task types (generation vs. completion)
- Project-specific patterns

See Custom Metadata for details.
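Metadata travels as a flat JSON object on each request, for example via Portkey's x-portkey-metadata header; a sketch of what a Codex CLI deployment might attach (keys are illustrative):

```json
{
  "team": "frontend",
  "language": "typescript",
  "task": "code-generation"
}
```

These values then become filters in the analytics dashboard.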
Enterprise Access
- Budget Controls: Set spending limits with automatic cutoffs
- SSO: Enterprise SSO integration
- Organization Management: Teams, projects, and role-based access
- Audit Logs: Compliance and audit logging
Reliability
- Fallbacks: Auto-switch on provider failures
- Conditional Routing: Route based on complexity or language
- Load Balancing: Distribute across providers
- Caching: Cache common patterns
- Smart Retries: Automatic retry with backoff
- Budget Limits: Enforce spending limits
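These features compose in a single Portkey Config. A sketch combining fallback, retries, and caching, following the same config schema as the load-balancing example above (slugs hypothetical):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@openai-prod" },
    { "provider": "@anthropic-prod" }
  ],
  "retry": { "attempts": 3 },
  "cache": { "mode": "simple" }
}
```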
Security Guardrails
Protect your codebase:
- Prevent API key exposure
- Block malicious patterns
- Enforce coding standards
- PII detection and masking

See Guardrails for setup.
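Guardrails created in the dashboard are referenced from a config by ID; a sketch, assuming Portkey's input_guardrails/output_guardrails config keys (IDs hypothetical):

```json
{
  "input_guardrails": ["pii-detector"],
  "output_guardrails": ["code-standards-check"]
}
```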
FAQs
How do I update budget limits?
Go to Model Catalog → click your provider → update limits → save.
Can I use multiple providers with one API key?
Yes. Create a config with multiple providers and attach it to your API key.
Can I use Portkey with the open source Codex CLI?
Yes! Portkey fully integrates with OpenAI’s open source Codex CLI.
What happens when a team exceeds their budget?
Requests are blocked until limits are adjusted. Admins receive notifications.
Next Steps
Join our Community. For enterprise support and custom features, contact our enterprise team.

