Zed is a next-generation code editor built for AI-powered development. Add Portkey to get:
  • 1600+ LLMs through one interface - switch providers instantly
  • Observability - track costs, tokens, and latency for every request
  • Reliability - automatic fallbacks, retries, and caching
  • Governance - budget limits, usage tracking, and team access controls
This guide shows how to configure Zed with Portkey in under 5 minutes.
For enterprise deployments across teams, see Enterprise Governance.

1. Setup

1. Add Provider: Go to Model Catalog → Add Provider.
2. Configure Credentials: Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like openai-prod.
3. Get Portkey API Key: Go to API Keys and generate your Portkey API key.

2. Configure Zed

Open settings.json in Zed: open the Command Palette (cmd-shift-p / ctrl-shift-p) and run zed: open settings. Add this configuration:
{
  "language_models": {
    "openai": {
      "api_url": "https://api.portkey.ai/v1",
      "available_models": [
        {
          "name": "@openai-prod/gpt-4o",
          "display_name": "GPT-4o via Portkey",
          "max_tokens": 128000
        },
        {
          "name": "@anthropic-prod/claude-3-5-sonnet-20241022",
          "display_name": "Claude 3.5 Sonnet via Portkey",
          "max_tokens": 200000
        }
      ]
    }
  }
}
Then set your Portkey API key as your OpenAI API key in Zed’s settings. Done! Monitor usage in the Portkey Dashboard.
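If you want to verify the routing before wiring up Zed, you can reproduce the request Zed makes. This is a minimal standard-library sketch, not Zed's actual client code; the key and model slug are placeholders:

```python
PORTKEY_BASE_URL = "https://api.portkey.ai/v1"

def build_chat_request(portkey_api_key: str, model: str, prompt: str):
    """Build the kind of chat-completions request Zed sends through Portkey."""
    url = f"{PORTKEY_BASE_URL}/chat/completions"
    headers = {
        "Content-Type": "application/json",
        # Zed sends the key stored as its "OpenAI" key; with the api_url
        # above, that key must be your Portkey API key.
        "Authorization": f"Bearer {portkey_api_key}",
    }
    body = {
        "model": model,  # a Model Catalog slug such as "@openai-prod/gpt-4o"
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body
```

POST the body to the URL (with curl, urllib, or any HTTP client) and you should get a normal chat completion back, with the request visible in Portkey's logs.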

Switch Providers

Add more models to your available_models array:
{
  "name": "@google-prod/gemini-2.0-flash-exp",
  "display_name": "Gemini 2.0 Flash via Portkey",
  "max_tokens": 1000000
}
All requests route through Portkey automatically.
Want fallbacks, load balancing, or caching? Create a Portkey Config, attach it to your API key, and set model name to dummy. See Enterprise Governance for examples.
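For example, a fallback config that retries on a second provider when the first fails could look like this sketch (the provider slugs match the examples above; substitute your own):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
  ]
}
```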

3. Enterprise Governance

For organizations deploying Zed across development teams, Portkey provides:
  • Cost Management: Budget limits and spend tracking per team
  • Access Control: Team-specific API keys with role-based permissions
  • Usage Analytics: Track patterns across teams and projects
  • Model Management: Control which models teams can access
Create team-specific providers with budget and rate limits:
  1. Go to Model Catalog → Add Provider
  2. Create providers for each team (e.g., openai-frontend, anthropic-backend)
  3. Set budget and rate limits per provider
Provision only the models each team needs: each team's provider slug gives access only to their approved models.
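For example, a frontend team's Zed settings could reference only that team's provider slug (the slug and model here are illustrative):

```json
{
  "language_models": {
    "openai": {
      "api_url": "https://api.portkey.ai/v1",
      "available_models": [
        {
          "name": "@openai-frontend/gpt-4o",
          "display_name": "GPT-4o (frontend team)",
          "max_tokens": 128000
        }
      ]
    }
  }
}
```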
Use Portkey Configs for fallbacks, load balancing, and caching. Example: load-balance across providers:
{
  "strategy": { "mode": "load-balance" },
  "targets": [
    { "override_params": { "model": "@openai-prod/gpt-4o" } },
    { "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
  ]
}
Create configs at Configs.
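Caching follows the same config pattern. A sketch of a simple-cache config (the max_age value is illustrative):

```json
{
  "cache": { "mode": "simple", "max_age": 3600 }
}
```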
Generate API keys with metadata tracking:
from portkey_ai import Portkey

# Authenticate with an admin-scoped Portkey API key
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

# Create a team API key whose requests are auto-tagged with metadata
api_key = portkey.api_keys.create(
    name="frontend-team",
    type="organisation",
    workspace_id="YOUR_WORKSPACE_ID",
    defaults={
        "metadata": {
            "environment": "production",
            "team": "frontend"
        }
    },
    scopes=["logs.view", "configs.read"]
)
See API Keys docs.
Track everything in the Portkey dashboard:
  • Cost by team
  • Model usage patterns
  • Request volumes and errors
  • Detailed logs for debugging

Portkey Features

Observability

Track 40+ metrics including cost, tokens, and latency across all providers. Filter by team or project using metadata.

Request Logs

Every request logged with complete details:
  • Full request/response payloads
  • Cost breakdown
  • Performance metrics

1600+ LLMs

Switch between any model through one interface. See the Model Catalog for all 1600+ supported providers and models.

Metadata Tracking

Track custom metrics:
  • Language and framework usage
  • Task types (generation vs. completion)
  • Project-specific patterns
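Custom metadata travels as the x-portkey-metadata request header, a JSON object whose fields become filterable in Portkey logs and analytics. A small helper sketch (the field names you pass are your own choice):

```python
import json

def portkey_metadata_headers(**metadata: str) -> dict:
    """Build the x-portkey-metadata header for custom request tracking.

    Each field in the JSON object becomes a filterable dimension in
    Portkey's logs and analytics dashboards.
    """
    return {"x-portkey-metadata": json.dumps(metadata)}
```

Pass the result as extra headers on any request to api.portkey.ai, for example via an OpenAI-compatible client's default headers.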


Security Guardrails

Protect your codebase:
  • Prevent API key exposure
  • Block malicious patterns
  • Enforce coding standards
  • PII detection and masking


FAQs

How do I update a provider's budget or rate limits?
Go to Model Catalog → click your provider → update limits → save.

Can one API key route across multiple providers?
Yes. Create a config with multiple providers and attach it to your API key.

How do I track usage per team?
Options:
  • Create separate providers for each team
  • Use metadata tags in requests
  • Set up team-specific API keys
  • Filter in the analytics dashboard

What happens when a budget limit is reached?
Requests are blocked until limits are adjusted. Admins receive notifications.

Next Steps

Join our Community
For enterprise support and custom features, contact our enterprise team.