Cline is an AI coding assistant for VS Code. Add Portkey to get:
  • 1600+ LLMs through one interface - switch providers instantly
  • Observability - track costs, tokens, and latency for every request
  • Reliability - automatic fallbacks, retries, and caching
  • Governance - budget limits, usage tracking, and team access controls
This guide shows how to configure Cline with Portkey in under 5 minutes.
For enterprise deployments across teams, see Enterprise Governance.

1. Setup

  1. Add Provider: Go to Model Catalog → Add Provider.
  2. Configure Credentials: Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like openai-prod.
  3. Get Portkey API Key: Go to API Keys and generate your Portkey API key.

2. Configure Cline

  1. Open Settings: In VS Code, press Cmd/Ctrl + Shift + P, search for Cline: Open in new tab, and click the settings gear icon ⚙️.
  2. Add Portkey Configuration: In API Configuration, enter:
     • API Provider: OpenAI Compatible
     • Base URL: https://api.portkey.ai/v1
     • API Key: Your Portkey API key
     • Model ID: @openai-prod/gpt-4o (or your provider slug + model)
Done! Monitor usage in the Portkey Dashboard.
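Because Cline's OpenAI Compatible provider sends standard chat-completion requests, you can sanity-check the same three values outside VS Code. A minimal sketch in Python using only the standard library (the slug openai-prod is the example from Setup; substitute your own key and slug):

```python
import json

# The three values Cline asks for (placeholders — use your own):
base_url = "https://api.portkey.ai/v1"
api_key = "YOUR_PORTKEY_API_KEY"
model_id = "@openai-prod/gpt-4o"  # provider slug + model

# Portkey's OpenAI-compatible endpoint expects a standard
# chat-completions payload, with routing driven by the model slug:
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
payload = {
    "model": model_id,
    "messages": [{"role": "user", "content": "Say hello"}],
}

# POST json.dumps(payload) to f"{base_url}/chat/completions" with any
# HTTP client to confirm your key and slug resolve before opening Cline.
print(json.dumps(payload))
```

If the request succeeds outside the editor but Cline still fails, the problem is in the extension settings rather than your Portkey setup.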

Switch Providers

Change models by updating the Model ID field:
@anthropic-prod/claude-3-5-sonnet-20241022
@openai-prod/gpt-4o
@google-prod/gemini-2.0-flash-exp
All requests route through Portkey automatically.
Want fallbacks, load balancing, or caching? Create a Portkey Config, attach it to your API key, and set Model ID to dummy. See Enterprise Governance for examples.

3. Enterprise Governance

For organizations deploying Cline across development teams, Portkey provides:
  • Cost Management: Budget limits and spend tracking per team
  • Access Control: Team-specific API keys with role-based permissions
  • Usage Analytics: Track patterns across teams and projects
  • Model Management: Control which models teams can access
Create team-specific providers with budget and rate limits:
  1. Go to Model Catalog → Add Provider
  2. Create providers for each team (e.g., openai-frontend, anthropic-backend)
  3. Set budget and rate limits per provider
Provision only the models each team needs. Each team’s provider slug gives access only to their approved models.
Use Portkey Configs for fallbacks, load balancing, and caching. Example: load-balance across two providers:
```json
{
	"strategy": { "mode": "load-balance" },
	"targets": [
		{ "override_params": { "model": "@openai-prod/gpt-4o" } },
		{ "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
	]
}
```
Create configs at Configs.
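A fallback config follows the same shape. With "mode": "fallback", Portkey tries targets in order and moves to the next only when a request fails (same example slugs as above):

```json
{
	"strategy": { "mode": "fallback" },
	"targets": [
		{ "override_params": { "model": "@openai-prod/gpt-4o" } },
		{ "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
	]
}
```

Attach the config to your API key and set Cline's Model ID to dummy, as described in Switch Providers above.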
Generate API keys with metadata tracking:
```python
from portkey_ai import Portkey

portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

api_key = portkey.api_keys.create(
    name="frontend-team",
    type="organisation",
    workspace_id="YOUR_WORKSPACE_ID",
    defaults={
        "metadata": {
            "environment": "production",
            "team": "frontend"
        }
    },
    scopes=["logs.view", "configs.read"]
)
```
See API Keys docs.
Track everything in the Portkey dashboard:
  • Cost by team
  • Model usage patterns
  • Request volumes and errors
  • Detailed logs for debugging

Portkey Features

Observability

Track 40+ metrics including cost, tokens, and latency across all providers. Filter by team or project using metadata.

Request Logs

Every request logged with complete details:
  • Full request/response payloads
  • Cost breakdown
  • Performance metrics

1600+ LLMs

Switch between any model through one interface. See Supported Providers to view all 1600+ supported models.

Metadata Tracking

Track custom metrics:
  • Language and framework usage
  • Task types (generation vs. completion)
  • Project-specific patterns
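One way to attach these dimensions, assuming you call the gateway directly, is Portkey's x-portkey-metadata request header, which carries a JSON object of custom key-value pairs (the field names below are illustrative, not required):

```python
import json

# Illustrative metadata dimensions for a Cline-style request:
metadata = {
    "language": "typescript",
    "task_type": "generation",
    "project": "checkout-service",
}

headers = {
    "Authorization": "Bearer YOUR_PORTKEY_API_KEY",
    "Content-Type": "application/json",
    # Portkey reads this header and makes the fields available
    # for filtering in logs and analytics:
    "x-portkey-metadata": json.dumps(metadata),
}

print(headers["x-portkey-metadata"])
```

The same fields then appear as filters in the dashboard, so cost and usage can be sliced by language, task type, or project.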

Security Guardrails

Protect your codebase:
  • Prevent API key exposure
  • Block malicious patterns
  • Enforce coding standards
  • PII detection and masking

See Guardrails for setup.

FAQs

How do I update a team’s budget or rate limits?
Go to Model Catalog → click your provider → update limits → save.

Can one API key route across multiple providers?
Yes. Create a config with multiple providers and attach it to your API key.

How do I track usage per team?
Options:
  • Create separate providers for each team
  • Use metadata tags in requests
  • Set up team-specific API keys
  • Filter in the analytics dashboard

What happens when a team hits its budget limit?
Requests are blocked until limits are adjusted. Admins receive notifications.

Can I use local models with Cline through Portkey?
Yes. Add your local endpoint (Ollama, etc.) as a provider in Model Catalog.

How do I protect against sensitive data leaving the codebase?
Use Portkey’s Guardrails for:
  • API key detection
  • PII masking
  • Request/response filtering
  • Custom security rules

Next Steps

Schedule a Demo

Schedule a 1:1 call with our team to see how Portkey can transform your development workflow with Cline.
Join our Community
For enterprise support and custom features, contact our enterprise team.