Cursor is an AI-first code editor. Add Portkey to get:
  • 1600+ LLMs — Not just OpenAI & Anthropic
  • Observability — Track costs, tokens, latency for every request
  • Governance — Budget limits, rate limits, RBAC
  • Guardrails — PII detection, content filtering
When using Portkey, Cursor-specific features (autocomplete, Apply from Chat, inline refactoring) require Cursor Pro/Enterprise plans.
For enterprise governance, see Enterprise Governance.

1. Setup

  1. Add Provider: Go to Model Catalog → Add Provider.
  2. Configure Credentials: Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like openai-prod.
  3. Create Config: Go to Configs and create:
{
  "override_params": {
    "model": "@openai-prod/gpt-4o"
  }
}
Save and note the Config ID.
  4. Get Portkey API Key: Go to API Keys → Create new key → Attach your config → Save.
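
Before wiring Cursor up, you can sanity-check the new key with the Portkey SDK. This is a minimal sketch assuming the openai-prod slug and the config attached above; the placeholder key and prompt are illustrative only:
from portkey_ai import Portkey

# Smoke test: confirm the Portkey API key (with the config attached)
# routes a request before pointing Cursor at it.
portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY")

completion = portkey.chat.completions.create(
    model="@openai-prod/gpt-4o",  # provider slug + model from Setup
    messages=[{"role": "user", "content": "Reply with the word ok."}],
)
print(completion.choices[0].message.content)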

2. Configure Cursor

  1. Open Cursor → Settings → Cursor Settings → Models
  2. Scroll to API Keys section
  3. Enable OpenAI API Key toggle and enter your Portkey API Key
  4. Enable Override OpenAI Base URL and enter: https://api.portkey.ai/v1
  5. Click Verify
Done! Monitor usage in the Portkey Dashboard.
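
Under the hood, Cursor now sends standard OpenAI-style requests to Portkey's gateway. The sketch below, using the OpenAI Python SDK, shows the equivalent request: the Portkey API key goes where the OpenAI key would, and the base URL points at Portkey. The model name here is illustrative; the attached config rewrites it via override_params.
from openai import OpenAI

# Equivalent of Cursor's override settings: Portkey key in the OpenAI key
# field, Portkey's gateway as the base URL.
client = OpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
)

reply = client.chat.completions.create(
    model="gpt-4o",  # rewritten by the attached config's override_params
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)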

Enterprise Governance

For organizations using Cursor, Portkey adds governance controls:
Create providers with spending limits per team:
  1. Go to Model Catalog
  2. Create a provider for each team with budget and rate limits
Control which models teams can access at the integration level.
Use Configs for fallbacks, load balancing, caching:
{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-sonnet-4-20250514" } }
  ]
}
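
You can try a routing config outside Cursor before attaching it to team keys. A rough sketch with the Portkey SDK, assuming the config above was saved; pc-loadbalance-xyz is a placeholder Config ID (the SDK also accepts the config inline):
from portkey_ai import Portkey

# Test the load-balancing config directly; in Cursor, the same config is
# attached to the team's Portkey API key instead of being passed here.
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    config="pc-loadbalance-xyz",  # placeholder Config ID from the dashboard
)

response = portkey.chat.completions.create(
    model="gpt-4o",  # overridden per target by override_params
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.model)  # shows which target served the request
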
Create team-specific API keys with metadata:
from portkey_ai import Portkey

# Key management requires an admin-scoped Portkey API key
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

api_key = portkey.api_keys.create(
    name="frontend-team",
    workspace_id="YOUR_WORKSPACE_ID",
    defaults={
        # Defaults applied to every request made with this key
        "config_id": "your-config-id",
        "metadata": {"team": "frontend", "environment": "production"}
    },
    # Limit what the key itself can do in Portkey
    scopes=["logs.view", "configs.read"]
)
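
Hand the returned key to the team to enter in Cursor's OpenAI API Key field; requests made with it inherit the attached config, and the metadata tags appear in logs and cost reports for per-team attribution.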

Features

Observability

Track 40+ metrics: cost, tokens, latency, performance. Filter by custom metadata.

Logs

Complete request/response tracking with metadata tags and cost attribution.

Reliability

Fallbacks, load balancing, and caching, configured through Configs (as in the load-balancing example above), keep Cursor's requests flowing when a provider fails or hits rate limits.

Guardrails

Protect code and data with real-time checks:
  • PII detection and masking
  • Content filtering
  • Custom security rules

See the Guardrails documentation to configure input/output protection.
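
As a rough sketch, guardrails attach to the same Config that routes Cursor's traffic. The guardrail IDs below are placeholders you create in the Portkey UI, and the field names follow Portkey's guardrail-enabled Configs; treat this as an illustration rather than a drop-in config:
from portkey_ai import Portkey

# Sketch only: a Config that runs guardrail checks on requests and responses.
# Replace the placeholder guardrail IDs with ones created in the Portkey UI.
guarded_config = {
    "input_guardrails": ["your-pii-guardrail-id"],      # e.g. PII detection/masking
    "output_guardrails": ["your-content-filter-id"],    # e.g. content filtering
    "override_params": {"model": "@openai-prod/gpt-4o"},
}

portkey = Portkey(api_key="YOUR_PORTKEY_API_KEY", config=guarded_config)
response = portkey.chat.completions.create(
    model="gpt-4o",  # overridden by the config
    messages=[{"role": "user", "content": "My email is jane@example.com"}],
)
print(response.choices[0].message.content)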

FAQs

Can I use multiple providers with one Cursor setup?
Yes. Create multiple providers and attach them to a single config. The config connects to your API key.

How do I track usage per team?
Create separate providers per team, use metadata tags, or set up team-specific API keys.

What happens when a budget limit is reached?
Requests are blocked. Admins get notified. Limits can be adjusted anytime.

Next Steps

Community: Discord · GitHub