- Unified access to 1600+ LLMs through a single API
- Real-time observability with 40+ metrics and detailed logs
- Enterprise governance with budget limits and RBAC
- Security guardrails for PII detection and content filtering
For enterprise governance setup, see Enterprise Governance.
1. Setup Portkey
Step 1: Add Provider
Go to Model Catalog → AI Providers and add your provider (OpenAI, Anthropic, etc.) with your API credentials.

Step 2: Create Config (Optional)
Configs define routing, fallbacks, caching, and other gateway behavior. Create one in Configs if you want these features; you can attach it to an API key in the next step.
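A config is a JSON routing recipe that can be attached to your API key. A minimal sketch, assuming two providers already added in Model Catalog (the `@openai-prod` and `@anthropic-backup` slugs are placeholders for your own provider slugs):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@openai-prod" },
    { "provider": "@anthropic-backup" }
  ]
}
```

Fallback mode tries targets in order, so requests survive a single provider outage.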
Step 3: Create Portkey API Key
Go to API Keys → Create New API Key. Optionally attach your config from Step 2.

2. Integrate with LibreChat
Configure Files
Set up two files: docker-compose.override.yml (see docs) and .env.
LibreChat requires an apiKey field; use "dummy", since authentication happens via Portkey headers. For per-user cost tracking in centralized deployments, see this community guide.
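A minimal sketch of the endpoint configuration in librechat.yaml, assuming a custom endpoint pointed at the Portkey gateway (the endpoint name and model list are illustrative; field names follow LibreChat's custom-endpoint schema):

```yaml
# librechat.yaml — route a custom endpoint through the Portkey gateway
endpoints:
  custom:
    - name: "Portkey"
      apiKey: "dummy"                      # required by LibreChat; real auth is in headers
      baseURL: "https://api.portkey.ai/v1"
      headers:
        x-portkey-api-key: "${PORTKEY_API_KEY}"   # set in .env
      models:
        default: ["gpt-4o", "claude-sonnet-4-20250514"]  # illustrative model slugs
      titleConvo: true
```

Mount this file into the api container via docker-compose.override.yml (a volume entry such as `./librechat.yaml:/app/librechat.yaml`) and set `PORTKEY_API_KEY` in .env.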
3. Enterprise Governance
Budget Controls & Rate Limits
Create providers per team with budget & rate limits in Model Catalog.

Model Access Rules
Use Model Catalog to provision which models are exposed to each workspace. Create configs in Configs; update them anytime without redeploying.
Team-Specific API Keys
Create API keys with metadata for tracking and scoped permissions:
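As an illustration, the kind of metadata and scoping you might attach when creating a team key (field names here are illustrative, not the exact Admin API schema):

```json
{
  "name": "engineering-prod-key",
  "metadata": {
    "team": "engineering",
    "environment": "production"
  },
  "permissions": ["completions", "logs.view"]
}
```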
Deploy & Monitor
Distribute API keys to teams and monitor usage in the Portkey dashboard: cost tracking, model usage patterns, request volumes, and error rates.
Portkey Features
Observability
Track 40+ metrics: cost, tokens, latency. Filter by custom metadata.
1600+ LLMs
Switch providers by changing the model slug in your config.
Guardrails
PII detection, content filtering, compliance controls.
Custom Metadata
Filter logs, track usage, attribute costs by team.
Reliability
Fallbacks
Auto-switch to backup on failure.
Load Balancing
Distribute requests by weight.
Caching
Reduce costs with response caching.
Retries
Exponential backoff on failures.
Conditional Routing
Route by metadata conditions.
Budget Limits
Control spending per team.
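The reliability features above are all expressed in the same config JSON and can be combined. A sketch pairing retries with response caching (the provider slug, status codes, and cache TTL are illustrative values):

```json
{
  "targets": [{ "provider": "@openai-prod" }],
  "retry": { "attempts": 3, "on_status_codes": [429, 500, 502, 503] },
  "cache": { "mode": "simple", "max_age": 3600 }
}
```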
Enterprise
SSO
SAML 2.0, Okta, Azure AD support.
Organization Management
Workspaces, teams, RBAC.
Audit Logs
Access control and compliance tracking.
Budget Controls
Granular spending limits.
FAQs
Can I use multiple LLM providers with the same API key?
Yes. Create multiple providers in Model Catalog, add them to a single config, and attach that config to your API key.
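For example, a single config can load-balance across two providers behind one API key (provider slugs and weights are illustrative):

```json
{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "provider": "@openai-prod", "weight": 0.7 },
    { "provider": "@anthropic-prod", "weight": 0.3 }
  ]
}
```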
How do I track costs for different teams?
Create separate providers per team, use metadata tags in configs, or set up team-specific API keys. Monitor in the analytics dashboard.
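One way to attribute requests is a metadata header sent by any OpenAI-compatible client. A small Python sketch that builds such a header; the `team` key is a free-form example, while `_user` is the key Portkey's dashboard uses for per-user attribution:

```python
import json

def portkey_metadata_header(team: str, user: str) -> dict:
    """Build the x-portkey-metadata header so Portkey can attribute a
    request to a team and user. Metadata is free-form key/value pairs;
    the key names here (other than _user) are illustrative."""
    return {"x-portkey-metadata": json.dumps({"team": team, "_user": user})}

headers = portkey_metadata_header("engineering", "alice@example.com")
print(headers["x-portkey-metadata"])
```

Merge the returned dict into the client's request headers; Portkey then exposes these fields as filters in logs and analytics.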
What happens if a team exceeds their budget limit?
Requests are blocked, admins are notified, and usage stats remain visible. Adjust limits as needed.
Next Steps
For enterprise support, contact our enterprise team.