Overview

The Aden TypeScript SDK provides real-time usage tracking, budget enforcement, and cost control for LLM applications. It automatically instruments OpenAI, Anthropic Claude, and Google Gemini API calls without modifying your application code.

GitHub Repository: view source code and contribute

Key Features

Multi-Provider Support

Works with OpenAI, Anthropic, and Google Gemini with a single integration.

Zero Code Changes

Automatic instrumentation - just call instrument() once at startup.

Real-Time Cost Control

Budget limits, throttling, and automatic model degradation.

Comprehensive Metrics

Track tokens, latency, costs, tool calls, and more.
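
As an illustration of the cost-control idea (a standalone sketch, not the SDK's actual API), automatic model degradation might look like this. The model names and the 10% threshold are assumptions chosen for the example:

```typescript
// Illustrative sketch only: model names and thresholds are
// assumptions for this example, not part of the Aden SDK.
interface BudgetState {
  limitUsd: number; // hard budget ceiling
  spentUsd: number; // spend accumulated so far
}

// Pick a model based on how much of the budget remains.
function pickModel(budget: BudgetState): string {
  const remaining = 1 - budget.spentUsd / budget.limitUsd;
  if (remaining <= 0) throw new Error("Budget exhausted");
  if (remaining < 0.1) return "gpt-4o-mini"; // degrade near the limit
  return "gpt-4o";
}

console.log(pickModel({ limitUsd: 100, spentUsd: 50 })); // "gpt-4o"
console.log(pickModel({ limitUsd: 100, spentUsd: 95 })); // "gpt-4o-mini"
```

The SDK applies this kind of policy automatically; the sketch only shows the shape of the decision.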

Supported Providers

Provider        SDK Package               Status
OpenAI          openai                    Full support (Chat, Responses API, streaming)
Anthropic       @anthropic-ai/sdk         Full support (Messages API, streaming, tools)
Google Gemini   @google/generative-ai     Full support (generateContent, chat)

Framework Compatibility

The SDK works seamlessly with popular AI frameworks:
  • Vercel AI SDK - Via fetch instrumentation
  • LangChain - Instruments underlying LLM providers
  • LlamaIndex - Works with instrumented providers
  • Mastra - Full agent stack tracking support

What Gets Tracked

Every LLM API call is captured with:
Metric             Description
input_tokens       Prompt/input tokens used
output_tokens      Completion/output tokens generated
cached_tokens      Tokens served from prompt cache
reasoning_tokens   Reasoning tokens (o1/o3 models)
latency_ms         Request duration in milliseconds
model              Model name (e.g., gpt-4o, claude-3-5-sonnet)
provider           Provider name (openai, anthropic, gemini)
tool_calls         Function/tool calls made
trace_id           OpenTelemetry-compatible trace ID
agent_stack        Named agent context for multi-agent systems
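
The fields above suggest a metric payload roughly shaped like the following. This is a hypothetical type sketch inferred from the table; the SDK's actual exported type may differ in names and optionality:

```typescript
// Hypothetical shape inferred from the tracked-metric table above;
// the Aden SDK's real exported type may differ.
interface AdenMetric {
  provider: "openai" | "anthropic" | "gemini";
  model: string;             // e.g. "gpt-4o"
  input_tokens: number;
  output_tokens: number;
  cached_tokens?: number;    // prompt-cache hits, when reported
  reasoning_tokens?: number; // reasoning models (o1/o3) only
  latency_ms: number;
  tool_calls?: number;       // count of function/tool calls
  trace_id: string;          // OpenTelemetry-compatible
  agent_stack?: string[];    // named agent context
}

const sample: AdenMetric = {
  provider: "openai",
  model: "gpt-4o",
  input_tokens: 10,
  output_tokens: 25,
  latency_ms: 342,
  trace_id: "4bf92f3577b34da6a3ce929d0e0e4736",
};
```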

Quick Example

import { instrument, createConsoleEmitter } from "aden";
import OpenAI from "openai";

// 1. Instrument at startup (before creating clients)
await instrument({
  emitMetric: createConsoleEmitter({ pretty: true }),
  sdks: { OpenAI },
});

// 2. Use your SDK normally - metrics are captured automatically
const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

// Console output:
// {
//   provider: "openai",
//   model: "gpt-4o",
//   input_tokens: 10,
//   output_tokens: 25,
//   latency_ms: 342,
//   ...
// }
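
Token counts like those in the console output translate directly into dollar cost. A minimal standalone sketch of that arithmetic (the per-million-token prices here are illustrative placeholders, not current OpenAI pricing, and this helper is not part of the SDK):

```typescript
// Illustrative per-million-token prices; real prices change and
// should come from your own pricing table.
const PRICES_PER_MTOK: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
};

function estimateCostUsd(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = PRICES_PER_MTOK[model];
  if (!p) throw new Error(`No pricing for model: ${model}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// The example above used 10 input and 25 output tokens:
console.log(estimateCostUsd("gpt-4o", 10, 25)); // 0.000275
```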

Next Steps