## LLM Providers

| Provider | Status | Instrumentation Method | Features |
|---|---|---|---|
| OpenAI | ✅ Supported | instrument() | Streaming, Tool calls, Vision, JSON mode |
| Anthropic | ✅ Supported | instrument() | Streaming, Tool calls, Prompt caching |
| Google Gemini | ✅ Supported | instrument() | Streaming, Tool calls, Chat sessions |
## Agent Frameworks

| Framework | Status | Instrumentation Method | Notes |
|---|---|---|---|
| Vercel AI SDK | ✅ Supported | instrumentFetch() | generateText, streamText, generateObject |
| LangChain.js | ✅ Supported | instrumentFetch() | Chains, LCEL, agents, tool binding |
| LlamaIndex.ts | ✅ Supported | instrumentFetch() | RAG pipelines, chat engines |
| Mastra | ✅ Supported | instrumentFetch() + withAgent() | Agents, tools, workflows |
Frameworks that bundle their own SDK copies require instrumentFetch() to intercept HTTP calls. Direct SDK usage can use instrument().
## Feature Support

| Feature | OpenAI | Anthropic | Gemini |
|---|---|---|---|
| Basic completions | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ |
| Tool/Function calls | ✅ | ✅ | ✅ |
| Token tracking | ✅ | ✅ | ✅ |
| Cached token tracking | ✅ | ✅ | ❌ |
| Rate limit tracking | ✅ | ✅ | ❌ |
| Latency metrics | ✅ | ✅ | ✅ |
| Error tracking | ✅ | ✅ | ✅ |
| Multi-agent tracking | ✅ | ✅ | ✅ |
| Cost control | ✅ | ✅ | ✅ |
## Instrumentation Methods

### instrument() - SDK Instrumentation
Best for direct SDK usage. Patches SDK prototypes at startup.
```typescript
import { instrument } from "aden";
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import { GoogleGenerativeAI } from "@google/generative-ai";

await instrument({
  emitMetric: myEmitter,
  sdks: { OpenAI, Anthropic, GoogleGenerativeAI },
});
```
Use when:
- Using OpenAI, Anthropic, or Gemini SDKs directly
- Building custom agents without a framework
### instrumentFetch() - HTTP Instrumentation
Best for frameworks that make direct HTTP calls or bundle their own SDK copies.
```typescript
import { instrumentFetch } from "aden";

await instrumentFetch({
  emitMetric: myEmitter,
});
```
Use when:
- Using Vercel AI SDK
- Using LangChain.js
- Using LlamaIndex.ts
- Using Mastra or other fetch-based frameworks
## Examples

| Example | Description | File |
|---|---|---|
| OpenAI Basic | Completions, streaming, tool calls | openai-basic.ts |
| Anthropic Basic | Messages, streaming, prompt caching | anthropic-basic.ts |
| Gemini Basic | Content generation, chat sessions | gemini-basic.ts |
| Vercel AI SDK | generateText, streamText, generateObject | vercel-ai-sdk.ts |
| LangChain | LCEL chains, multi-model, tool binding | langchain-example.ts |
| LlamaIndex | RAG pipelines, chat interface | llamaindex-example.ts |
| Mastra | Agents, tools, multi-agent workflows | mastra-example.ts |
| Multi-Agent | Sequential, parallel, debate patterns | multi-agent-example.ts |
| Cost Control | Local policy engine without server | cost-control-local.ts |
| Control Actions | All 5 control actions demo | control-actions.ts |
View the complete example code on GitHub.
## Coming Soon

| Framework | Status | ETA |
|---|---|---|
| CrewAI.js | 🔜 Planned | - |
| AutoGen.js | 🔜 Planned | - |