OpenAI Codex CLI is a lightweight coding agent that runs in your terminal. Add Portkey to get:
  • 1600+ LLMs through one interface - switch providers instantly
  • Observability - track costs, tokens, and latency for every request
  • Reliability - automatic fallbacks, retries, and caching
  • Governance - budget limits, usage tracking, and team access controls
This guide shows how to configure Codex CLI with Portkey in under 5 minutes.
For enterprise deployments across teams, see Enterprise Governance.

1. Setup

1. Add Provider: Go to Model Catalog → Add Provider.
2. Configure Credentials: Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like openai-prod.
3. Get Portkey API Key: Go to API Keys and generate your Portkey API key.
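
To sanity-check the key before wiring up Codex, you can hit Portkey's OpenAI-compatible endpoint directly (a quick test assuming the openai-prod slug from step 2):

curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{"model": "@openai-prod/gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'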

2. Configure Codex CLI

Create or edit ~/.codex/config.json:
{
  "provider": "portkey",
  "model": "@openai-prod/gpt-4o",
  "providers": {
    "portkey": {
      "name": "Portkey",
      "baseURL": "https://api.portkey.ai/v1",
      "envKey": "PORTKEY_API_KEY"
    }
  }
}
Set your environment variable:
export PORTKEY_API_KEY="your-portkey-api-key"
Add to ~/.zshrc or ~/.bashrc for persistence.
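For example, to persist it for zsh:

echo 'export PORTKEY_API_KEY="your-portkey-api-key"' >> ~/.zshrc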
Test your integration:
codex "explain this repository to me"
Done! Monitor usage in the Portkey Dashboard.

Switch Providers

Change models by updating the model field in your config:
  • @anthropic-prod/claude-3-5-sonnet-20241022
  • @openai-prod/gpt-4o
  • @google-prod/gemini-2.0-flash-exp
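For example, switching Codex to Claude is a one-field change in ~/.codex/config.json (the providers block stays as-is and is omitted here):

{
  "provider": "portkey",
  "model": "@anthropic-prod/claude-3-5-sonnet-20241022"
}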
Want fallbacks, load balancing, or caching? Create a Portkey Config and attach it to your API key. See Enterprise Governance for examples.
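As a sketch, a fallback config uses the same shape as the load-balance example in the Enterprise Governance section below: requests go to the first target, and Portkey retries the second on failure (the slugs assume the providers created in step 1):

{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
  ]
}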

3. Enterprise Governance

For organizations deploying Codex CLI across development teams, Portkey provides:
  • Cost Management: Budget limits and spend tracking per team
  • Access Control: Team-specific API keys with role-based permissions
  • Usage Analytics: Track patterns across teams and projects
  • Model Management: Control which models teams can access
Create team-specific providers with budget and rate limits:
  1. Go to Model Catalog → Add Provider
  2. Create providers for each team (e.g., openai-frontend, anthropic-backend)
  3. Set budget and rate limits per provider
Provision only the models each team needs: each team’s provider slug gives access only to their approved models.
Use Portkey Configs for fallbacks, load balancing, and caching. Example: load-balance across providers:

{
  "strategy": { "mode": "load-balance" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
  ]
}
Create configs at Configs.
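Caching uses the same config mechanism; a minimal sketch, assuming Portkey's simple/semantic cache modes with max_age in seconds:

{
  "cache": { "mode": "semantic", "max_age": 3600 }
}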
Generate API keys with metadata tracking:
from portkey_ai import Portkey

# Authenticate with an admin-scoped Portkey API key
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

# Create a team key; the metadata defaults are attached to every
# request made with it, so usage can be filtered by team in analytics
api_key = portkey.api_keys.create(
    name="frontend-team",
    type="organisation",
    workspace_id="YOUR_WORKSPACE_ID",
    defaults={
        "metadata": {
            "environment": "production",
            "team": "frontend"
        }
    },
    # Restrict what the key itself can do in the Portkey API
    scopes=["logs.view", "configs.read"]
)
See API Keys docs.
Track everything in the Portkey dashboard:
  • Cost by team
  • Model usage patterns
  • Request volumes and errors
  • Detailed logs for debugging

Portkey Features

Observability

Track 40+ metrics including cost, tokens, and latency across all providers. Filter by team or project using metadata.

Request Logs

Every request logged with complete details:
  • Full request/response payloads
  • Cost breakdown
  • Performance metrics

1600+ LLMs

Switch between any model through one interface. See the full list of 1600+ supported providers and models in the Model Catalog.

Metadata Tracking

Track custom metrics:
  • Language and framework usage
  • Task types (generation vs. completion)
  • Project-specific patterns
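
Codex CLI requests inherit the metadata defaults attached to your Portkey API key (see Enterprise Governance above). If you also call the gateway directly from scripts, metadata can be attached per request via the x-portkey-metadata header; a sketch using the OpenAI SDK, with illustrative team/task values:

import json
import os

from openai import OpenAI

# Point the OpenAI client at Portkey's gateway; the API key is your
# Portkey key, and the provider is resolved from the model slug
client = OpenAI(
    api_key=os.environ["PORTKEY_API_KEY"],
    base_url="https://api.portkey.ai/v1",
    default_headers={
        # Custom metadata, filterable in Portkey analytics and logs
        "x-portkey-metadata": json.dumps({"team": "frontend", "task": "generation"})
    },
)

response = client.chat.completions.create(
    model="@openai-prod/gpt-4o",
    messages=[{"role": "user", "content": "Write a docstring for this function."}],
)
print(response.choices[0].message.content)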

See the Custom Metadata docs.


Security Guardrails

Protect your codebase:
  • Prevent API key exposure
  • Block malicious patterns
  • Enforce coding standards
  • PII detection and masking
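
Guardrails are attached through a Portkey Config; a minimal sketch, assuming guardrails already created in the Portkey UI (the IDs below are placeholders):

{
  "input_guardrails": ["your-pii-guardrail-id"],
  "output_guardrails": ["your-code-standards-guardrail-id"]
}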

See the Guardrails docs.

FAQs

How do I update a team’s budget or rate limits?
Go to Model Catalog → click your provider → update limits → save.

Can I use multiple providers with a single API key?
Yes. Create a config with multiple providers and attach it to your API key.

Does Portkey work with OpenAI’s Codex CLI?
Yes! Portkey fully integrates with OpenAI’s open source Codex CLI.

What happens when a budget limit is reached?
Requests are blocked until limits are adjusted. Admins receive notifications.

Next Steps

Join our Community
For enterprise support and custom features, contact our enterprise team.