- 1600+ LLMs through one interface - switch providers instantly
- Observability - track costs, tokens, and latency for every request
- Reliability - automatic fallbacks, retries, and caching
- Governance - budget limits, usage tracking, and team access controls
For enterprise deployments across teams, see Enterprise Governance.
1. Setup
1. Add Provider
Go to Model Catalog → Add Provider.
2. Configure Credentials
Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like openai-prod.
3. Get Portkey API Key
Go to API Keys and generate your Portkey API key.
2. Configure Cline
1. Open Settings
In VS Code:
- Press Cmd/Ctrl + Shift + P
- Search for Cline: Open in new tab
- Click the settings gear icon ⚙️
2. Add Portkey Configuration
In API Configuration, enter:
- API Provider: OpenAI Compatible
- Base URL: https://api.portkey.ai/v1
- API Key: Your Portkey API key
- Model ID: @openai-prod/gpt-4o (or your provider slug + model)
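These are the same values any OpenAI-compatible client would use, so you can sanity-check them outside Cline before saving. A minimal sketch in Python, assuming the openai package is installed and that your Portkey API key is exported in a (hypothetical) PORTKEY_API_KEY environment variable:

```python
import os
from openai import OpenAI

# Same values as the Cline API Configuration above.
client = OpenAI(
    base_url="https://api.portkey.ai/v1",   # Portkey's OpenAI-compatible endpoint
    api_key=os.environ["PORTKEY_API_KEY"],  # your Portkey API key, not a provider key
)

response = client.chat.completions.create(
    model="@openai-prod/gpt-4o",  # provider slug + model, the same string as Cline's Model ID
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(response.choices[0].message.content)
```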

Switch Providers
Change models by updating the Model ID field, e.g. swap @openai-prod/gpt-4o for another provider slug and model.
Want fallbacks, load balancing, or caching? Create a Portkey Config, attach it to your API key, and set Model ID to dummy. See Enterprise Governance for examples; a minimal fallback sketch is shown below.
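As a rough illustration, assuming Portkey's Config format with a strategy block and an ordered list of targets (field names are worth verifying against the Configs documentation, and the provider slugs and model names are only examples), a fallback config might look like this, written as a Python dict whose JSON equivalent is what you save in the Configs UI:

```python
# Hedged sketch of a Portkey Config: try the primary provider first and
# fall back to the second target if the request fails.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"override_params": {"model": "@openai-prod/gpt-4o"}},                 # primary
        {"override_params": {"model": "@anthropic-backend/claude-sonnet-4"}},  # fallback; illustrative slug/model
    ],
}
```

Save the config, attach it to the Portkey API key Cline uses, and set Cline's Model ID to dummy so the config decides which provider serves each request.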
3. Enterprise Governance
For organizations deploying Cline across development teams, Portkey provides:
- Cost Management: Budget limits and spend tracking per team
- Access Control: Team-specific API keys with role-based permissions
- Usage Analytics: Track patterns across teams and projects
- Model Management: Control which models teams can access
Set Budget Limits Per Team
Create team-specific providers with budget and rate limits:
- Go to Model Catalog → Add Provider
- Create providers for each team (e.g., openai-frontend, anthropic-backend)
- Set budget and rate limits per provider

Control Model Access
Provision only the models each team needs:
Each team’s provider slug gives access only to their approved models.

Add Reliability Features
Use Portkey Configs for fallbacks, load balancing, and caching. Create configs at Configs.
Example: Load-balance across providers
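A minimal sketch of such a load-balancing config, again as a Python dict whose JSON equivalent is saved in the Configs UI; the loadbalance strategy and per-target weights follow Portkey's Config format, while the slugs, models, and weights here are illustrative:

```python
# Hedged sketch: split Cline traffic roughly 70/30 between two providers.
loadbalance_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"override_params": {"model": "@openai-prod/gpt-4o"}, "weight": 0.7},
        {"override_params": {"model": "@anthropic-backend/claude-sonnet-4"}, "weight": 0.3},
    ],
}
```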
Create Team API Keys
Generate a separate Portkey API key for each team from the API Keys page (optionally with that team's config attached) so usage, budgets, and logs stay attributable per team.
Monitor Usage
Track everything in the Portkey dashboard:
- Cost by team
- Model usage patterns
- Request volumes and errors
- Detailed logs for debugging

Portkey Features
Observability
Track 40+ metrics including cost, tokens, and latency across all providers. Filter by team or project using metadata.
Request Logs
Every request is logged with complete details:
- Full request/response payloads
- Cost breakdown
- Performance metrics

1600+ LLMs
Switch between any model through one interface.
Supported Providers
View all 1600+ supported models
Metadata Tracking
Track custom metrics:
- Language and framework usage
- Task types (generation vs. completion)
- Project-specific patterns
Custom Metadata
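For requests made outside Cline against the same Portkey setup (team scripts, CI jobs), custom metadata can be attached per request so it appears in analytics. A small sketch reusing the OpenAI-compatible client from the setup section; it assumes Portkey's x-portkey-metadata header, and the tag keys (_user, team, task_type) are only examples:

```python
import json
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key=os.environ["PORTKEY_API_KEY"],  # hypothetical env var holding your Portkey key
)

# Illustrative tags; filter by these in the Portkey analytics dashboard.
metadata = {"_user": "alice", "team": "frontend", "task_type": "generation"}

response = client.chat.completions.create(
    model="@openai-prod/gpt-4o",
    messages=[{"role": "user", "content": "Summarize the latest changes."}],
    extra_headers={"x-portkey-metadata": json.dumps(metadata)},
)
```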
Enterprise Access
- Budget Controls: Set spending limits with automatic cutoffs
- SSO: Enterprise SSO integration
- Organization Management: Teams, projects, and role-based access
- Audit Logs: Compliance and audit logging
Reliability
- Fallbacks: Auto-switch on provider failures
- Conditional Routing: Route based on complexity or language
- Load Balancing: Distribute across providers
- Caching: Cache common patterns
- Smart Retries: Automatic retry with backoff
- Budget Limits: Enforce spending limits
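Caching and retries are also enabled through a Portkey Config rather than in Cline itself. A minimal sketch, assuming Portkey's cache and retry config blocks (field names are worth double-checking in the Configs docs):

```python
# Hedged sketch: cache repeated prompts and retry failed requests automatically.
# Attach this config to the Portkey API key that Cline uses.
cache_and_retry_config = {
    "cache": {"mode": "simple"},  # semantic caching is also available
    "retry": {"attempts": 3},     # automatic retries with backoff
}
```

Because this config only adds behavior and does not pick a provider, Cline's Model ID can stay set to a real @slug/model value.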
Security Guardrails
Protect your codebase:
- Prevent API key exposure
- Block malicious patterns
- Enforce coding standards
- PII detection and masking
Guardrails
FAQs
How do I update budget limits?
Go to Model Catalog → click your provider → update limits → save.
Can I use multiple providers with one API key?
Yes. Create a config with multiple providers and attach it to your API key.
How do I track costs per team?
Options:
- Create separate providers for each team
- Use metadata tags in requests
- Set up team-specific API keys
- Filter in the analytics dashboard
What happens when a team exceeds their budget?
Requests are blocked until limits are adjusted. Admins receive notifications.
Can I use local or self-hosted models?
Yes. Add your local endpoint (Ollama, etc.) as a provider in Model Catalog.
How do I prevent sensitive data exposure?
Use Portkey’s Guardrails for:
- API key detection
- PII masking
- Request/response filtering
- Custom security rules
Next Steps
Schedule a Demo
Schedule a 1:1 call with our team to see how Portkey can transform your development workflow with Cline.
For enterprise support and custom features, contact our enterprise team.

