# Migrating from Helicone
Switch from Helicone to BrainstormRouter — concept mapping, what changes, and what you gain.
## Overview
Helicone is an LLM observability platform. BrainstormRouter is a runtime control plane that includes observability but adds intelligent routing, budget enforcement, and multi-provider failover. This guide maps Helicone concepts to BrainstormRouter equivalents.
## Concept mapping
| Helicone | BrainstormRouter | Notes |
|---|---|---|
| Proxy URL (oai.helicone.ai) | api.brainstormrouter.com | Same pattern — swap the base URL |
| Helicone-Auth header | Authorization: Bearer br_live_... | Standard Bearer auth |
| Custom properties (Helicone-Property-*) | X-BR-* request headers | Similar per-request metadata |
| Request logging | Automatic — every request logged | No opt-in needed |
| Cost tracking | X-BR-Actual-Cost response header + dashboard | Real-time, per-request |
| Rate limiting | Per-key rate limits + budget caps | Configured per API key |
| Caching | Semantic cache (automatic) | Deduplicates similar requests |
| User tracking | Tenant + API key isolation | Multi-tenant by default |
| Prompt templates | Routing presets (@preset/slug) | Model selection + config bundles |
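If you attach Helicone custom properties to requests today, the mapping to X-BR-* headers is mechanical. The helper below is a hypothetical migration-script sketch: it assumes BrainstormRouter accepts arbitrary X-BR-* metadata headers that mirror Helicone-Property-* names one-for-one, which you should verify against your dashboard before relying on it.

```python
def helicone_props_to_br(headers: dict) -> dict:
    """Translate Helicone-Property-* headers to assumed X-BR-* equivalents,
    passing every other header through unchanged."""
    prefix = "Helicone-Property-"
    out = {}
    for name, value in headers.items():
        if name.startswith(prefix):
            out["X-BR-" + name[len(prefix):]] = value
        else:
            out[name] = value
    return out

migrated = helicone_props_to_br({
    "Helicone-Property-Session": "abc123",
    "Content-Type": "application/json",
})
```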
## What changes
1. Base URL
```diff
- base_url = "https://oai.helicone.ai/v1"
+ base_url = "https://api.brainstormrouter.com/v1"
```
2. Authentication
```diff
- headers = {"Helicone-Auth": "Bearer sk-helicone-..."}
- # Plus your OpenAI key in Authorization header
+ headers = {"Authorization": "Bearer br_live_..."}
+ # Provider keys configured in dashboard, not per-request
```
With BrainstormRouter, you add provider keys once in the dashboard. The gateway uses them automatically — you never send provider keys in API calls.
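To make the difference concrete, here is a small sketch contrasting the two header sets. The function names are illustrative, not part of either product's SDK; the Helicone side assumes the common proxy pattern of sending both the observability key and the provider key.

```python
def helicone_headers(helicone_key: str, openai_key: str) -> dict:
    # Helicone proxy pattern: observability key plus the provider key
    # travel with every request.
    return {
        "Helicone-Auth": f"Bearer {helicone_key}",
        "Authorization": f"Bearer {openai_key}",
    }

def brainstormrouter_headers(gateway_key: str) -> dict:
    # BrainstormRouter: one gateway key per request; provider keys
    # stay in the dashboard and never leave your config.
    return {"Authorization": f"Bearer {gateway_key}"}
```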
3. Model names
```diff
- model = "gpt-4o"
+ model = "openai/gpt-4o"  # explicit provider
+ model = "auto"           # or let BR pick the best model
```
## What BrainstormRouter adds
- Intelligent routing — Thompson sampling learns which models work best for your traffic
- Multi-provider failover — If OpenAI is down, automatically routes to Anthropic or Google
- Budget enforcement — Per-key daily spend caps that actually block requests when hit
- Circuit breakers — Isolate failing providers without affecting other traffic
- Quality scoring — Track model quality over time, not just cost and latency
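Because budget caps actually block requests, your client code should be prepared for the gateway to refuse traffic. The sketch below is an assumption-heavy outline, not documented behavior: it supposes a blocked budget surfaces as an HTTP 402 and a rate limit as a 429, raised by your SDK as an exception carrying a `status_code` attribute. Check the real error shapes in your dashboard logs before adopting it.

```python
import time

def call_with_budget_fallback(create_fn, max_retries: int = 2, base_delay: float = 1.0):
    """Invoke an SDK-style create function, retrying rate limits with
    exponential backoff and failing fast on an exhausted budget cap.

    Assumed (not from official docs): 402 = budget cap hit, 429 = rate limit.
    """
    for attempt in range(max_retries + 1):
        try:
            return create_fn()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status == 402:
                # Budget exhausted: retrying is pointless, stop sending traffic.
                raise RuntimeError("daily budget cap hit; halt requests") from exc
            if status == 429 and attempt < max_retries:
                time.sleep(base_delay * (2 ** attempt))  # back off, then retry
                continue
            raise
```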
## Migration steps
1. Sign up at brainstormrouter.com/dashboard
2. Add your provider keys (OpenAI, Anthropic, etc.) in Configure → Providers
3. Create a gateway API key in Configure → API Keys
4. Change your base URL and auth header
5. (Optional) Set up routing presets to replace Helicone prompt templates
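For the optional last step, the concept table above maps prompt templates to routing presets addressed as @preset/slug in the model field. The helper and slug below are illustrative; only the @preset/ addressing pattern comes from the table.

```python
def preset_model(slug: str) -> str:
    # Routing presets are referenced through the model field using the
    # "@preset/" prefix (per the concept table); the slug is yours to define.
    return f"@preset/{slug}"

# Hypothetical usage with an OpenAI-compatible client:
# client.chat.completions.create(
#     model=preset_model("support-chat"),
#     messages=[{"role": "user", "content": "Hello"}],
# )
```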
## SDK example
```python
from openai import OpenAI

# Before (Helicone)
# client = OpenAI(
#     base_url="https://oai.helicone.ai/v1",
#     default_headers={"Helicone-Auth": "Bearer sk-helicone-..."},
# )

# After (BrainstormRouter)
client = OpenAI(
    base_url="https://api.brainstormrouter.com/v1",
    api_key="br_live_...",
)

response = client.chat.completions.create(
    model="auto",  # intelligent routing — or specify "openai/gpt-4o"
    messages=[{"role": "user", "content": "Hello"}],
)
```