## Overview
The Aden SDK integrates seamlessly with popular AI frameworks. Because Aden instruments the underlying LLM SDKs (OpenAI, Anthropic, Gemini), any framework that uses these SDKs is automatically tracked.
For frameworks that make direct HTTP calls instead of using the official SDKs, use fetch instrumentation.
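For direct SDK usage, `instrument()` patches the SDK clients themselves (see the comparison table at the end of this page). A minimal sketch, assuming `instrument()` accepts the same `emitMetric` option as `instrumentFetch()`:

```typescript
// Sketch only: direct SDK instrumentation. The emitMetric option
// here is assumed to match instrumentFetch(); check the Aden types.
import { instrument, createConsoleEmitter } from "aden";
import OpenAI from "openai";

await instrument({
  emitMetric: createConsoleEmitter({ pretty: true }),
});

// Calls made through the official SDK are now tracked.
const client = new OpenAI();
await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
```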
## Vercel AI SDK
The Vercel AI SDK makes direct HTTP calls to LLM APIs, so use fetch instrumentation:
```typescript
import { instrumentFetch, createConsoleEmitter } from "aden";
import { generateText, streamText, generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";

// Instrument fetch BEFORE making any AI calls
await instrumentFetch({
  emitMetric: createConsoleEmitter({ pretty: true }),
});
```
### generateText
```typescript
const result = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What is TypeScript?",
});

console.log(result.text);
// Metrics automatically captured: latency, tokens, model, etc.
```
### streamText
```typescript
const result = await streamText({
  model: openai("gpt-4o-mini"),
  prompt: "Count from 1 to 10",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
// Streaming metrics captured with final token counts
```
### generateObject
```typescript
import { z } from "zod";

const result = await generateObject({
  model: openai("gpt-4o-mini"),
  schema: z.object({
    name: z.string(),
    age: z.number(),
  }),
  prompt: "Generate a fictional person",
});

console.log(result.object);
```
### Multi-Provider
Works with all Vercel AI SDK providers:
```typescript
// OpenAI
await generateText({ model: openai("gpt-4o-mini"), prompt: "Hello" });

// Anthropic
await generateText({ model: anthropic("claude-3-5-haiku-latest"), prompt: "Hello" });

// Google
await generateText({ model: google("gemini-2.0-flash"), prompt: "Hello" });
```
## LangChain.js
LangChain.js provider packages bundle their own nested copies of the LLM SDKs, which prototype patching can miss, so use fetch instrumentation for reliable capture:
```typescript
import { instrumentFetch, createConsoleEmitter } from "aden";
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

await instrumentFetch({
  emitMetric: createConsoleEmitter({ pretty: true }),
});
```
### Basic Chat
```typescript
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

const result = await model.invoke([
  new HumanMessage("What is TypeScript?"),
]);

console.log(result.content);
```
### Streaming
```typescript
const model = new ChatOpenAI({ model: "gpt-4o-mini", streaming: true });

const stream = await model.stream([
  new HumanMessage("Count from 1 to 5"),
]);

for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```
### LCEL Chains
```typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const parser = new StringOutputParser();
const chain = RunnableSequence.from([model, parser]);

const result = await chain.invoke([
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is 2+2?"),
]);
```
### Multi-Model Chains
```typescript
const researcher = new ChatOpenAI({ model: "gpt-4o-mini" });
const summarizer = new ChatAnthropic({ model: "claude-3-5-haiku-latest" });
const parser = new StringOutputParser();

// Research with OpenAI
const research = await researcher.invoke([
  new SystemMessage("You are a research assistant."),
  new HumanMessage("What are 3 facts about the moon?"),
]);

// Summarize with Anthropic
const summary = await summarizer.pipe(parser).invoke([
  new SystemMessage("Summarize the following in one sentence."),
  new HumanMessage(String(research.content)),
]);
```
### Tool Calling

```typescript
const model = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools([
  {
    name: "get_weather",
    description: "Get the weather for a location",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" },
      },
      required: ["location"],
    },
  },
]);

const result = await model.invoke([
  new HumanMessage("What's the weather in Tokyo?"),
]);

console.log("Tool calls:", result.tool_calls?.length ?? 0);
```
## LlamaIndex.ts
LlamaIndex.ts uses fetch-based API calls, so use fetch instrumentation:
```typescript
import { instrumentFetch, createConsoleEmitter } from "aden";
import { OpenAI } from "@llamaindex/openai";
import { Anthropic } from "@llamaindex/anthropic";
import { Document, VectorStoreIndex, Settings } from "llamaindex";

await instrumentFetch({
  emitMetric: createConsoleEmitter({ pretty: true }),
});
```
### Basic LLM
```typescript
const llm = new OpenAI({ model: "gpt-4o-mini" });

const result = await llm.complete({ prompt: "What is TypeScript?" });
console.log(result.text);
```
### Chat Interface
```typescript
const llm = new OpenAI({ model: "gpt-4o-mini" });

const response = await llm.chat({
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is TypeScript?" },
  ],
});

console.log(response.message.content);
```
### Streaming
```typescript
const llm = new OpenAI({ model: "gpt-4o-mini" });

const stream = await llm.complete({
  prompt: "Count from 1 to 5",
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.text);
}
```
### RAG Pipeline
```typescript
// Set the global LLM
Settings.llm = new OpenAI({ model: "gpt-4o-mini" });

// Create documents
const documents = [
  new Document({ text: "TypeScript is a typed superset of JavaScript." }),
  new Document({ text: "LlamaIndex is a data framework for LLM applications." }),
];

// Create index and query engine
const index = await VectorStoreIndex.fromDocuments(documents);
const queryEngine = index.asQueryEngine();

// Query (LLM calls automatically tracked)
const response = await queryEngine.query({
  query: "What is TypeScript?",
});

console.log(response.toString());
```
## Mastra
Mastra is an agent framework built on the Vercel AI SDK. Use fetch instrumentation combined with Aden’s `withAgent()` for agent context tracking:
```typescript
import { instrumentFetch, createConsoleEmitter, withAgent } from "aden";
import { Agent, createTool } from "@mastra/core";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

await instrumentFetch({
  emitMetric: createConsoleEmitter({ pretty: true }),
});
```
### Basic Agent
```typescript
const agent = new Agent({
  name: "Assistant",
  instructions: "You are a helpful assistant.",
  model: openai("gpt-4o-mini"),
});

// Wrap with Aden's withAgent for agent-level tracking
const response = await withAgent("AssistantAgent", async () => {
  return agent.generate("What is TypeScript in one sentence?");
});

console.log(response.text);
// Metrics include: agent_stack: ["AssistantAgent"]
```
### Agent with Tools

```typescript
const calculatorTool = createTool({
  id: "calculator",
  description: "Perform basic arithmetic",
  inputSchema: z.object({
    operation: z.enum(["add", "subtract", "multiply", "divide"]),
    a: z.number(),
    b: z.number(),
  }),
  execute: async ({ operation, a, b }) => {
    const ops = { add: a + b, subtract: a - b, multiply: a * b, divide: a / b };
    return { result: ops[operation] };
  },
});

const agent = new Agent({
  name: "Calculator",
  instructions: "Use the calculator tool to solve math problems.",
  model: openai("gpt-4o-mini"),
  tools: { calculator: calculatorTool },
});

const response = await withAgent("CalculatorAgent", async () => {
  return agent.generate("What is 42 * 17?");
});
```
### Multi-Agent Workflow
```typescript
// Research agent (OpenAI)
const researcher = new Agent({
  name: "Researcher",
  instructions: "Research and provide detailed information.",
  model: openai("gpt-4o-mini"),
});

// Summarizer agent (Anthropic)
const summarizer = new Agent({
  name: "Summarizer",
  instructions: "Summarize information concisely.",
  model: anthropic("claude-3-5-haiku-latest"),
});

// Step 1: Research
const research = await withAgent("ResearcherAgent", async () => {
  return researcher.generate("What are the key features of TypeScript?");
});
// → agent_stack: ["ResearcherAgent"]

// Step 2: Summarize
const summary = await withAgent("SummarizerAgent", async () => {
  return summarizer.generate(`Summarize this: ${research.text}`);
});
// → agent_stack: ["SummarizerAgent"]
```
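The `agent_stack` field is an array, which suggests `withAgent()` contexts can nest. A hypothetical sketch (nesting behavior is an assumption, not confirmed by the examples above):

```typescript
// Hypothetical: if withAgent() contexts nest, LLM calls made inside
// the inner callback would presumably be tagged with
// agent_stack: ["OrchestratorAgent", "ResearcherAgent"].
const nested = await withAgent("OrchestratorAgent", async () => {
  return withAgent("ResearcherAgent", async () => {
    return researcher.generate("What are the key features of TypeScript?");
  });
});
```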
### Streaming Agent
```typescript
const agent = new Agent({
  name: "Storyteller",
  instructions: "Tell short, engaging stories.",
  model: openai("gpt-4o-mini"),
});

const stream = await withAgent("StorytellerAgent", async () => {
  return agent.stream("Tell a very short story about a robot.");
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```
## Choosing Instrumentation Method
| Framework | Recommended Method | Reason |
| --- | --- | --- |
| Vercel AI SDK | `instrumentFetch()` | Makes direct HTTP calls |
| LangChain.js | `instrumentFetch()` | Uses nested SDK copies |
| LlamaIndex.ts | `instrumentFetch()` | Uses fetch-based calls |
| Mastra | `instrumentFetch()` + `withAgent()` | Fetch-based + agent tracking |
| Direct SDK usage | `instrument()` | Patches SDK prototypes |
When using `instrumentFetch()`, all HTTP calls to the OpenAI, Anthropic, and Google APIs are captured automatically, regardless of which library makes them.
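The examples on this page log metrics with `createConsoleEmitter()`, but `emitMetric` takes a callback, so you can forward metrics anywhere. A minimal sketch, assuming the callback receives a single metric object and may be async (the exact payload shape and the endpoint below are hypothetical):

```typescript
import { instrumentFetch } from "aden";

// Hypothetical custom emitter: POST each metric to an internal
// collector instead of logging to the console. The metric payload
// shape is an assumption; consult the Aden types.
await instrumentFetch({
  emitMetric: async (metric: unknown) => {
    await fetch("https://metrics.internal.example/ingest", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(metric),
    });
  },
});
```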
## Cleanup
Don’t forget to clean up instrumentation on shutdown:
```typescript
import { uninstrumentFetch } from "aden";

// On application shutdown
uninstrumentFetch();
```
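In a long-running service, a typical place for this is a signal handler (a minimal Node.js sketch):

```typescript
import { uninstrumentFetch } from "aden";

// Restore the original global fetch before the process exits.
for (const signal of ["SIGINT", "SIGTERM"] as const) {
  process.once(signal, () => {
    uninstrumentFetch();
    process.exit(0);
  });
}
```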
## Next Steps