Your First 5 Minutes
From zero to your first AI API call — account, provider keys, and your first request.
How BrainstormRouter works
BrainstormRouter is a free, BYOK (Bring Your Own Key) AI gateway. You plug in your existing provider API keys (OpenAI, Anthropic, Google), and BrainstormRouter routes traffic across them with intelligent model selection, caching, memory, and guardrails.
You pay your providers directly. BrainstormRouter never marks up token costs and never touches your provider billing. The gateway itself is free to use — no credit card required, no trial period, no usage limits beyond rate limiting.
Research Project: BrainstormRouter is a free AI research project. No warranties, SLAs, or company associations. Use at your own discretion.
Step 1: Sign in
Go to brainstormrouter.com/dashboard and sign in with GitHub or Google. No email verification needed.
On first sign-in, BrainstormRouter automatically creates your workspace (tenant) and gives you admin access. This takes about 2 seconds.
Step 2: Add your provider keys (BYOK)
Navigate to Configure → Providers in the sidebar.
Click Add Provider and paste your API key for any supported provider:
| Provider | Where to get a key |
|---|---|
| Anthropic | console.anthropic.com |
| OpenAI | platform.openai.com |
| Google AI | aistudio.google.com |
| Groq | console.groq.com |
| Mistral | console.mistral.ai |
| Cohere | dashboard.cohere.com |
Your keys are encrypted at rest and never sent back to the browser. BrainstormRouter uses them only to route requests on your behalf.
You can add multiple providers. The more providers you add, the more models become available for routing, fallback, and cost optimization.
Step 3: Generate a gateway API key
Navigate to Configure → API Keys in the sidebar.
Click Create Key and configure:
- Name — A label like "Development" or "Production"
- Rate limit — Requests per minute (optional)
- Budget — Daily or monthly spend cap in USD (optional)
Click Create. Your key is shown only once, so copy it immediately.
This is the key you'll use in all API calls. It authenticates you to the BrainstormRouter gateway, which then uses your provider keys to call models.
Step 4: Send your first request
Use any OpenAI-compatible SDK. Just change the base URL:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.brainstormrouter.com/v1",
    api_key="br_live_...",  # Your gateway key from Step 3
)

response = client.chat.completions.create(
    model="auto",  # BrainstormRouter picks the best model
    messages=[{"role": "user", "content": "Hello, world!"}],
)

print(response.choices[0].message.content)
```
Setting model: "auto" lets BrainstormRouter pick the optimal model based on request complexity and your available providers. Or specify a model directly: anthropic/claude-sonnet-4-5, openai/gpt-4o, google/gemini-2.0-flash.
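Because the gateway speaks the OpenAI wire format, you can also call it without an SDK. A minimal sketch of the raw request using only the standard library (the /v1/chat/completions path and bearer-token header follow the OpenAI convention; the helper below is illustrative, not an official client):

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, content: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.brainstormrouter.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("br_live_...", "anthropic/claude-sonnet-4-5", "Hello, world!")
# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
# print(json.load(resp)["choices"][0]["message"]["content"])
```

The same shape works from curl or any HTTP client: POST JSON with your gateway key as a bearer token.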
Step 5: Set up MCP for Claude Code (optional)
If you use Claude Code and want BrainstormRouter tools available as an MCP server:
```shell
claude mcp add brainstormrouter \
  -e BRAINSTORMROUTER_API_KEY=br_live_... \
  -- npx -y @brainstormrouter/mcp
```
This gives Claude Code access to 15 tools: routing, model selection, memory, presets, observability, and more.
For MCP, your key needs the agent role (or admin, which includes all permissions). Keys created from the dashboard default to admin.
What's happening under the hood
When you send a request, BrainstormRouter:
- Authenticates your gateway key and checks rate limits / budget
- Selects a model using Thompson sampling, complexity assessment, and cost-quality frontiers
- Routes the request to the best available provider using your BYOK keys
- Falls back automatically if a provider fails (circuit breaker + cascade)
- Records usage, cost, and quality metrics for your dashboard
- Caches semantically similar requests to reduce latency and cost
All of this happens in a single request — no configuration required.
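The Thompson-sampling step can be illustrated with a toy Beta-Bernoulli bandit over candidate models. This is a conceptual sketch of the technique named above, not BrainstormRouter's actual implementation, and the per-model win/loss counts are made up:

```python
import random

def thompson_pick(stats: dict[str, tuple[int, int]], rng: random.Random) -> str:
    """Sample each model's Beta(wins+1, losses+1) posterior and pick the
    highest draw — the core of Thompson sampling."""
    draws = {
        model: rng.betavariate(wins + 1, losses + 1)
        for model, (wins, losses) in stats.items()
    }
    return max(draws, key=draws.get)

# Hypothetical quality outcomes observed so far, per model: (wins, losses).
stats = {
    "anthropic/claude-sonnet-4-5": (40, 10),
    "openai/gpt-4o": (35, 15),
    "google/gemini-2.0-flash": (20, 30),
}
rng = random.Random(0)
picks = [thompson_pick(stats, rng) for _ in range(1000)]
# Models with better observed outcomes win most draws, but weaker ones
# still receive occasional exploration traffic.
print(max(set(picks), key=picks.count))
```

The appeal of this approach for routing is that exploration is automatic: a model with few observations has a wide posterior, so it occasionally wins the draw and gathers fresh quality data without any explicit exploration schedule.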
Next steps
- Authentication — Roles, RBAC, and key constraints
- Auto Mode — How intelligent routing works
- Memory — Persistent context across requests
- Routing — How the router picks models
- Usage & Billing — Monitor spend in the dashboard