AnythingLLM is an all-in-one Desktop & Docker AI application with built-in RAG and AI agents. Add Portkey to get:
  • 1600+ LLMs through one interface - switch providers instantly
  • Observability - track costs, tokens, and latency for every request
  • Reliability - automatic fallbacks, retries, and caching
  • Governance - budget limits, usage tracking, and team access controls
This guide shows how to configure AnythingLLM with Portkey in under 5 minutes.
For enterprise deployments across teams, see Enterprise Governance.

1. Setup

1. Add Provider

Go to Model Catalog → Add Provider.

2. Configure Credentials

Select your provider (OpenAI, Anthropic, etc.), enter your API key, and create a slug like openai-prod.

3. Get Portkey API Key

Go to API Keys and generate your Portkey API key.

2. Configure AnythingLLM

1. Open Settings

Launch AnythingLLM and navigate to Settings > AI Providers > LLM.

2. Configure Provider

Select Generic OpenAI from the LLM Provider dropdown, then configure:
  • Base URL: https://api.portkey.ai/v1
  • API Key: Your Portkey API key
  • Chat Model: @openai-prod/gpt-4o (or your provider slug + model)
  • Token Context Window: Set based on your model's limits
  • Max Tokens: Configure according to your needs
Done! Monitor usage in the Portkey Dashboard.
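To sanity-check these values before chatting, you can send one test request to the same endpoint AnythingLLM will call. A minimal sketch using the OpenAI Python SDK; the @openai-prod slug is the example from step 1, so substitute your own:

from openai import OpenAI

# Same settings AnythingLLM uses: Portkey's OpenAI-compatible endpoint,
# with your Portkey API key in place of a provider key
client = OpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
)

response = client.chat.completions.create(
    model="@openai-prod/gpt-4o",  # provider slug + model, as in the Chat Model field
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

If this prints a reply, the same values will work in AnythingLLM.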

Switch Providers

Change models by updating the Chat Model field:
@anthropic-prod/claude-3-5-sonnet-20241022
@openai-prod/gpt-4o
@google-prod/gemini-2.0-flash-exp
All requests route through Portkey automatically.
Want fallbacks, load balancing, or caching? Create a Portkey Config, attach it to your API key, and set Chat Model to dummy. See Enterprise Governance for examples.
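For example, a config that falls back from OpenAI to Anthropic when a request fails could look like this (a sketch using the same example provider slugs as above):

{
	"strategy": { "mode": "fallback" },
	"targets": [
		{ "override_params": { "model": "@openai-prod/gpt-4o" } },
		{ "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
	]
}

Attach the config to your Portkey API key, set Chat Model to dummy, and Portkey picks the actual model from the config.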

3. Enterprise Governance

For organizations deploying AnythingLLM across teams, Portkey provides:
  • Cost Management: Budget limits and spend tracking per team
  • Access Control: Team-specific API keys with role-based permissions
  • Usage Analytics: Track patterns across teams and projects
  • Model Management: Control which models teams can access
Create team-specific providers with budget and rate limits:
  1. Go to Model Catalog → Add Provider
  2. Create providers for each team (e.g., openai-frontend, anthropic-backend)
  3. Set budget and rate limits per provider
Provision only the models each team needs. Each team's provider slug then gives access only to their approved models.
Use Portkey Configs for fallbacks, load balancing, and caching. Example: load balancing across providers:
{
	"strategy": { "mode": "load-balance" },
	"targets": [
		{ "override_params": { "model": "@openai-prod/gpt-4o" } },
		{ "override_params": { "model": "@anthropic-prod/claude-3-5-sonnet-20241022" } }
	]
}
Create configs at Configs.
Generate API keys with metadata tracking:
from portkey_ai import Portkey

# An admin-scoped Portkey key is required to create API keys
portkey = Portkey(api_key="YOUR_ADMIN_API_KEY")

api_key = portkey.api_keys.create(
    name="frontend-team",
    type="organisation",
    workspace_id="YOUR_WORKSPACE_ID",
    # Default metadata is attached to every request made with this key,
    # so team usage can be filtered in analytics and logs
    defaults={
        "metadata": {
            "environment": "production",
            "team": "frontend"
        }
    },
    # Limit what the key itself can do in the Portkey API
    scopes=["logs.view", "configs.read"]
)
See API Keys docs.
Track everything in the Portkey dashboard:
  • Cost by team
  • Model usage patterns
  • Request volumes and errors
  • Detailed logs for debugging

Portkey Features

Observability

Track 40+ metrics including cost, tokens, and latency across all providers. Filter by team or project using metadata.

Request Logs

Every request logged with complete details:
  • Full request/response payloads
  • Cost breakdown
  • Performance metrics

1600+ LLMs

Switch between any model through one interface.

Supported Providers

View all 1600+ supported models

Metadata Tracking

Track custom metrics:
  • Language and framework usage
  • Task types (generation vs. completion)
  • Project-specific patterns

Custom Metadata
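Metadata can also be attached per request when you call Portkey directly. A minimal sketch with the OpenAI Python SDK using Portkey's x-portkey-metadata header (the field names here are illustrative):

import json
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
    # Portkey reads this header and attaches the metadata to the request log
    default_headers={
        "x-portkey-metadata": json.dumps({"team": "frontend", "task": "generation"})
    },
)

AnythingLLM itself doesn't send custom headers, so for desktop users the practical route is default metadata on the API key, as in the api_keys.create example above.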

Enterprise Access

Team-specific API keys, role-based permissions, and budget controls for organization-wide deployments (see Enterprise Governance above).

Reliability

Automatic fallbacks, retries, load balancing, and caching keep requests flowing even when a provider fails.

Security Guardrails

Protect your data:
  • Prevent sensitive data leaks
  • PII detection and masking
  • Content filtering
  • Custom security rules

Guardrails
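Guardrails are wired in through configs. A sketch of the shape, assuming you have already created guardrails in the Portkey UI (the IDs below are placeholders):

{
	"input_guardrails": ["your-pii-guardrail-id"],
	"output_guardrails": ["your-content-filter-id"]
}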

FAQs

How do I update a provider's budget or rate limits?
Go to Model Catalog → click your provider → update limits → save.
Can I use multiple providers for fallbacks or load balancing?
Yes. Create a config with multiple providers and attach it to your API key.
How do I track usage by team?
Options:
  • Create separate providers for each team
  • Use metadata tags in requests
  • Set up team-specific API keys
  • Filter in the analytics dashboard
What happens when a team hits its budget limit?
Requests are blocked until limits are adjusted. Admins receive notifications.

Next Steps

Join our Community
For enterprise support and custom features, contact our enterprise team.