Add Portkey to LibreChat to get:
  • Unified access to 1600+ LLMs through a single API
  • Real-time observability with 40+ metrics and detailed logs
  • Enterprise governance with budget limits and RBAC
  • Security guardrails for PII detection and content filtering
For enterprise governance setup, see Enterprise Governance.

1. Setup Portkey

Step 1: Add Provider
Go to Model Catalog → AI Providers and add your provider (OpenAI, Anthropic, etc.) with your API credentials.

Step 2: Create Config (Optional)
Go to Configs and create a config for routing, fallbacks, or other features:
{"override_params": {"model": "@openai-prod/gpt-4o"}}

Step 3: Create Portkey API Key
Go to API Keys → Create New API Key. Optionally attach your config from Step 2.

2. Integrate with LibreChat

Configure Files

docker-compose.override.yml
services:
  api:
    volumes:
    - type: bind
      source: ./librechat.yaml
      target: /app/librechat.yaml
.env
PORTKEY_API_KEY=YOUR_PORTKEY_API_KEY
PORTKEY_GATEWAY_URL=https://api.portkey.ai/v1
librechat.yaml (example using a Portkey config):
version: 1.1.4
cache: true
endpoints:
  custom:
    - name: "Portkey"
      apiKey: "dummy"
      baseURL: ${PORTKEY_GATEWAY_URL}
      headers:
        x-portkey-api-key: "${PORTKEY_API_KEY}"
        x-portkey-config: "pc-libre-xxx"
      models:
        default: ["@openai-prod/gpt-4o"]
        fetch: true
      titleConvo: true
      titleModel: "current_model"
      modelDisplayLabel: "Portkey"
LibreChat requires an apiKey field; set it to "dummy", since authentication is handled by the Portkey headers.
For per-user cost tracking in centralized deployments, see this community guide.
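With `fetch: true`, LibreChat populates its model picker from the gateway's model list. The sketch below (an assumption-level mirror of that call, using the standard OpenAI-style `/models` endpoint) lets you check what LibreChat will see:

```python
import json
import os
import urllib.request

def model_ids(payload: dict) -> list:
    """Extract model ids from an OpenAI-style /models response."""
    return [m["id"] for m in payload.get("data", [])]

def fetch_models(api_key: str, gateway_url: str = "https://api.portkey.ai/v1") -> list:
    """Mirror the model-list fetch LibreChat performs when `fetch: true` is set."""
    req = urllib.request.Request(
        f"{gateway_url}/models",
        headers={"x-portkey-api-key": api_key},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return model_ids(json.load(resp))

if __name__ == "__main__":
    key = os.environ.get("PORTKEY_API_KEY")
    if key:
        for mid in fetch_models(key):
            print(mid)
```

If the list comes back empty, LibreChat will fall back to the `default` models in librechat.yaml.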

3. Enterprise Governance

Create providers per team with budget and rate limits in Model Catalog, and use the catalog to provision which models are exposed to each workspace.
Create configs in Configs; update them anytime without redeploying. For example:
{"strategy": {"mode": "single"}, "targets": [{"override_params": {"model": "@openai-prod/gpt-4o"}}]}
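Configs can also list multiple targets with a routing strategy. As an illustrative sketch (the provider slugs are placeholders; use slugs from your own Model Catalog), a fallback config that retries on a second provider when the first fails might look like:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet" } }
  ]
}
```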
Create API keys with metadata for tracking and scoped permissions:
from portkey_ai import Portkey

# An admin-scoped key is required to provision team API keys
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

api_key = portkey.api_keys.create(
    name="engineering-team",
    type="organisation",
    workspace_id="YOUR_WORKSPACE_ID",
    defaults={
        # Config attached by default to every request made with this key
        "config_id": "your-config-id",
        # Metadata is logged with each request for cost attribution
        "metadata": {"environment": "production", "department": "engineering"}
    },
    # Limit what the key itself is allowed to do
    scopes=["logs.view", "configs.read"]
)
Distribute the API keys to your teams and monitor usage in the Portkey dashboard: cost tracking, model usage patterns, request volumes, and error rates.

Portkey Features

  • Reliability
  • Enterprise

FAQs

Can I route to multiple providers through one Portkey API key?
Yes. Create multiple providers in Model Catalog, add them to a single config, and attach that config to your API key.

How do I track usage per team?
Create separate providers per team, use metadata tags in configs, or set up team-specific API keys. Monitor in the analytics dashboard.

What happens when a budget limit is reached?
Requests are blocked, admins are notified, and usage stats remain visible. Adjust limits as needed.
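When a limit blocks requests, clients see a failed gateway response. A minimal sketch of surfacing that to operators (the status codes are assumptions; confirm the codes your gateway actually returns before relying on them):

```python
def classify_gateway_error(status_code: int, body: str) -> str:
    """Map a failed gateway response to an operator-friendly message.

    Assumes budget/rate-limit rejections arrive as 429 and auth failures
    as 401/403; verify against your deployment's real responses.
    """
    if status_code == 429:
        return "Limit hit: raise the budget/rate limit in Model Catalog or wait."
    if status_code in (401, 403):
        return "Key rejected: check the Portkey API key and its scopes."
    return f"Gateway error {status_code}: {body[:200]}"

print(classify_gateway_error(429, ""))
```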

Next Steps

For enterprise support, contact our enterprise team.