# LangChain

Use BrainstormRouter as the LLM backend for LangChain agents and chains.
## Setup

```bash
npm install @langchain/openai brainstormrouter
```
## Basic usage

LangChain's OpenAI integration accepts a custom `baseURL`:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  configuration: {
    baseURL: "https://api.brainstormrouter.com/v1",
  },
  apiKey: "br_live_...",
  model: "anthropic/claude-sonnet-4",
});

const response = await model.invoke("Explain quantum computing");
console.log(response.content);
```
## With memory

BrainstormRouter's distinguishing feature is persistent memory across conversations. Pre-load context with the SDK, then enable agentic mode so LangChain calls pick it up automatically:

```typescript
import BrainstormRouter from "brainstormrouter";
import { ChatOpenAI } from "@langchain/openai";

// 1. Pre-load memory via the SDK
const br = new BrainstormRouter({ apiKey: "br_live_..." });
await br.memory.append("User is building a Next.js e-commerce app", { block: "project" });
await br.memory.append("User prefers concise answers", { block: "human" });

// 2. LangChain uses the enriched model
const model = new ChatOpenAI({
  configuration: { baseURL: "https://api.brainstormrouter.com/v1" },
  apiKey: "br_live_...",
  model: "anthropic/claude-sonnet-4",
  modelKwargs: { mode: "agentic" }, // enables memory access
});

// The model now has access to the stored context
const response = await model.invoke("What framework am I using?");
// "You're building a Next.js e-commerce app."
```
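As a rough mental model, agentic mode folds the stored memory blocks into the model's context before each call. The sketch below is a local illustration only, not BrainstormRouter's actual server-side logic; the block names follow the examples above:

```typescript
// Conceptual sketch: memory entries grouped by block, merged into a
// system prompt. Illustration only -- the real merge happens inside
// the BrainstormRouter service.
type MemoryStore = Record<string, string[]>;

function appendMemory(store: MemoryStore, text: string, block: string): void {
  (store[block] ??= []).push(text);
}

function buildSystemPrompt(store: MemoryStore): string {
  return Object.entries(store)
    .map(([block, entries]) => `[${block}]\n${entries.join("\n")}`)
    .join("\n\n");
}

const store: MemoryStore = {};
appendMemory(store, "User is building a Next.js e-commerce app", "project");
appendMemory(store, "User prefers concise answers", "human");
// buildSystemPrompt(store) now contains a [project] and a [human] section
```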
## Tool calling

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const weatherTool = tool(async ({ city }) => `72°F and sunny in ${city}`, {
  name: "get_weather",
  description: "Get weather for a city",
  schema: z.object({ city: z.string() }),
});

const model = new ChatOpenAI({
  configuration: { baseURL: "https://api.brainstormrouter.com/v1" },
  apiKey: "br_live_...",
  model: "anthropic/claude-sonnet-4",
}).bindTools([weatherTool]);

const response = await model.invoke("What's the weather in San Francisco?");
```
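Note that `invoke` only returns the model's *request* to call a tool; your code executes the tool and sends the result back on the next turn. The standalone sketch below illustrates that round trip with plain objects standing in for LangChain's `response.tool_calls` entries (the `{ name, args, id }` shape mirrors LangChain's tool-call format; the dispatcher itself is an assumption for illustration):

```typescript
// Standalone sketch of the tool-call round trip. Plain objects stand in
// for the entries LangChain surfaces on response.tool_calls.
type ToolCall = { name: string; args: { city: string }; id: string };

// Registry mapping tool names to implementations (same logic as weatherTool).
const tools: Record<string, (args: { city: string }) => Promise<string>> = {
  get_weather: async ({ city }) => `72°F and sunny in ${city}`,
};

async function runToolCalls(toolCalls: ToolCall[]): Promise<string[]> {
  // Execute each requested tool; in a real loop you would send these
  // results back to the model as tool messages on the next invoke.
  return Promise.all(toolCalls.map((tc) => tools[tc.name](tc.args)));
}

const results = await runToolCalls([
  { name: "get_weather", args: { city: "San Francisco" }, id: "call_1" },
]);
// results[0] === "72°F and sunny in San Francisco"
```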
## Streaming

```typescript
const stream = await model.stream("Write a haiku about APIs");
for await (const chunk of stream) {
  process.stdout.write(chunk.content as string);
}
```
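Chunks arrive incrementally, so to keep the complete text you accumulate `chunk.content` as you print it. A self-contained sketch, with a mock async generator standing in for the `AsyncIterable` that `model.stream` returns:

```typescript
// Accumulating streamed chunks into the full response text.
// The mock generator stands in for model.stream's return value.
async function* mockStream(): AsyncGenerator<{ content: string }> {
  for (const piece of ["Silent ", "requests\n", "return"]) {
    yield { content: piece };
  }
}

let full = "";
for await (const chunk of mockStream()) {
  process.stdout.write(chunk.content); // render as it arrives
  full += chunk.content;               // keep the complete text
}
// full === "Silent requests\nreturn"
```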
## Multi-model comparison

Use presets to A/B test different models through LangChain:

```typescript
const cheap = new ChatOpenAI({
  configuration: { baseURL: "https://api.brainstormrouter.com/v1" },
  apiKey: "br_live_...",
  model: "@preset/fast-cheap",
});

const quality = new ChatOpenAI({
  configuration: { baseURL: "https://api.brainstormrouter.com/v1" },
  apiKey: "br_live_...",
  model: "@preset/high-quality",
});
```
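To compare the presets, run the same prompt through both and inspect the answers side by side. The harness below is a sketch written against a minimal `invoke` interface; the mock models are assumptions so the example is self-contained — in practice you would pass the `cheap` and `quality` instances from above:

```typescript
// Side-by-side A/B harness over anything exposing invoke().
interface Invokable {
  invoke(prompt: string): Promise<{ content: string }>;
}

async function compare(
  models: Record<string, Invokable>,
  prompt: string,
): Promise<Record<string, string>> {
  // Fan the prompt out to every model in parallel, keyed by label.
  const entries = await Promise.all(
    Object.entries(models).map(async ([label, m]) => {
      const res = await m.invoke(prompt);
      return [label, res.content] as const;
    }),
  );
  return Object.fromEntries(entries);
}

// Hypothetical stand-ins for the cheap/quality ChatOpenAI instances.
const mock = (tag: string): Invokable => ({
  invoke: async (p) => ({ content: `${tag}: ${p}` }),
});

const answers = await compare(
  { cheap: mock("fast-cheap"), quality: mock("high-quality") },
  "Summarize our Q3 goals",
);
```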