Migrating from Helicone

Switch from Helicone to BrainstormRouter — concept mapping, what changes, and what you gain.

Overview

Helicone is an LLM observability platform. BrainstormRouter is a runtime control plane that includes observability but adds intelligent routing, budget enforcement, and multi-provider failover. This guide maps Helicone concepts to BrainstormRouter equivalents.

Concept mapping

Helicone | BrainstormRouter | Notes
--- | --- | ---
Proxy URL (oai.helicone.ai) | api.brainstormrouter.com | Same pattern — swap the base URL
Helicone-Auth header | Authorization: Bearer br_live_... | Standard Bearer auth
Custom properties (Helicone-Property-*) | X-BR-* request headers | Similar per-request metadata
Request logging | Automatic — every request logged | No opt-in needed
Cost tracking | X-BR-Actual-Cost response header + dashboard | Real-time, per-request
Rate limiting | Per-key rate limits + budget caps | Configured per API key
Caching | Semantic cache (automatic) | Deduplicates similar requests
User tracking | Tenant + API key isolation | Multi-tenant by default
Prompt templates | Routing presets (@preset/slug) | Model selection + config bundles
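If you attach per-request metadata today, the mapping from Helicone custom properties to X-BR-* headers can be scripted. This is a minimal sketch; it assumes the X-BR-* suffix mirrors the Helicone property name one-to-one, which you should confirm against your gateway's header reference.

```python
def migrate_property_headers(headers: dict[str, str]) -> dict[str, str]:
    """Rewrite Helicone-Property-* headers as X-BR-* headers, passing others through."""
    prefix = "Helicone-Property-"
    migrated = {}
    for name, value in headers.items():
        if name.startswith(prefix):
            migrated["X-BR-" + name[len(prefix):]] = value
        else:
            migrated[name] = value
    return migrated

# Example: {"Helicone-Property-Session": "abc123"} becomes {"X-BR-Session": "abc123"}
```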

What changes

1. Base URL

- base_url = "https://oai.helicone.ai/v1"
+ base_url = "https://api.brainstormrouter.com/v1"

2. Authentication

- headers = {"Helicone-Auth": "Bearer sk-helicone-..."}
- # Plus your OpenAI key in Authorization header
+ headers = {"Authorization": "Bearer br_live_..."}
+ # Provider keys configured in dashboard, not per-request

With BrainstormRouter, you add provider keys once in the dashboard. The gateway uses them automatically — you never send provider keys in API calls.
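The difference is easiest to see side by side. A sketch of the two header shapes, with placeholder key values:

```python
def helicone_headers(helicone_key: str, openai_key: str) -> dict[str, str]:
    # Before: two secrets travel with every request --
    # the Helicone key and your OpenAI key.
    return {
        "Helicone-Auth": f"Bearer {helicone_key}",
        "Authorization": f"Bearer {openai_key}",
    }

def brainstormrouter_headers(gateway_key: str) -> dict[str, str]:
    # After: one gateway key; provider keys stay in the dashboard.
    return {"Authorization": f"Bearer {gateway_key}"}
```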

3. Model names

- model = "gpt-4o"
+ model = "openai/gpt-4o"     # explicit provider
+ model = "auto"              # or let BR pick the best model
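If your codebase references bare model names in many places, a small shim can qualify them during migration. The provider mapping below is illustrative, not an official list:

```python
# Illustrative mapping -- extend with the models your code actually uses.
KNOWN_PROVIDERS = {"gpt-4o": "openai", "claude-3-5-sonnet": "anthropic"}

def qualify_model(model: str) -> str:
    """Prefix a bare model name with its provider; leave 'auto' and qualified names alone."""
    if model == "auto" or "/" in model:
        return model
    provider = KNOWN_PROVIDERS.get(model)
    return f"{provider}/{model}" if provider else model
```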

What BrainstormRouter adds

  • Intelligent routing — Thompson sampling learns which models work best for your traffic
  • Multi-provider failover — If OpenAI is down, automatically routes to Anthropic or Google
  • Budget enforcement — Per-key daily spend caps that actually block requests when hit
  • Circuit breakers — Isolate failing providers without affecting other traffic
  • Quality scoring — Track model quality over time, not just cost and latency
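Budget caps are enforced by the gateway, but you can also mirror spend client-side for early warning using the X-BR-Actual-Cost response header. A sketch, assuming the header carries a decimal USD amount:

```python
class SpendTracker:
    """Accumulate per-request cost from the X-BR-Actual-Cost response header."""

    def __init__(self, daily_cap_usd: float):
        self.daily_cap_usd = daily_cap_usd
        self.spent_usd = 0.0

    def record(self, response_headers: dict[str, str]) -> None:
        cost = response_headers.get("X-BR-Actual-Cost")
        if cost is not None:
            self.spent_usd += float(cost)

    def near_cap(self, threshold: float = 0.8) -> bool:
        # Alert before the gateway starts blocking requests at the cap.
        return self.spent_usd >= threshold * self.daily_cap_usd
```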

Migration steps

  1. Sign up at brainstormrouter.com/dashboard
  2. Add your provider keys (OpenAI, Anthropic, etc.) in Configure → Providers
  3. Create a gateway API key in Configure → API Keys
  4. Change your base URL and auth header
  5. (Optional) Set up routing presets to replace Helicone prompt templates

SDK example

from openai import OpenAI

# Before (Helicone)
# client = OpenAI(
#     base_url="https://oai.helicone.ai/v1",
#     api_key="sk-...",  # OpenAI key still sent per request
#     default_headers={"Helicone-Auth": "Bearer sk-helicone-..."},
# )

# After (BrainstormRouter)
client = OpenAI(
    base_url="https://api.brainstormrouter.com/v1",
    api_key="br_live_...",
)

response = client.chat.completions.create(
    model="auto",  # intelligent routing — or specify "openai/gpt-4o"
    messages=[{"role": "user", "content": "Hello"}],
)