LLM Observability & Cost Control

Aden provides SDKs for TypeScript and Python that automatically track every LLM API call in your application - usage, latency, and cost - and give you real-time controls to prevent budget overruns.

Key Features

Multi-Provider Support

Works with OpenAI, Anthropic Claude, and Google Gemini with a single integration.

Zero Code Changes

Automatic instrumentation - just call instrument() once at startup.

Real-Time Cost Control

Set budgets, throttle requests, and automatically degrade to cheaper models.

OpenTelemetry Compatible

Trace IDs, span IDs, and agent stacks for multi-agent observability.

Quick Start

import { instrument, createConsoleEmitter } from "aden";
import OpenAI from "openai";

// Instrument at startup
await instrument({
  emitMetric: createConsoleEmitter({ pretty: true }),
  sdks: { OpenAI },
});

// Use your SDK normally - metrics captured automatically
const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

What Gets Tracked

Every LLM API call is captured with comprehensive metrics:
Metric         Description
input_tokens   Prompt/input tokens used
output_tokens  Completion/output tokens generated
cached_tokens  Tokens served from the prompt cache
latency_ms     Request duration in milliseconds
model          Model name (e.g., gpt-4o, claude-3-5-sonnet)
provider       Provider name (openai, anthropic, gemini)
tool_calls     Function/tool calls made
trace_id       OpenTelemetry-compatible trace ID
agent_stack    Named agent context for multi-agent systems
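
The metrics above can be modeled as a single event type. This is an illustrative sketch based on the table, not Aden's published event schema - the interface name, field types, and example values are assumptions:

```typescript
// Illustrative shape of one tracked LLM call, based on the metrics table.
// Not the SDK's actual schema; names and types are assumptions.
interface LLMCallMetric {
  input_tokens: number;    // prompt/input tokens used
  output_tokens: number;   // completion/output tokens generated
  cached_tokens: number;   // tokens served from the prompt cache
  latency_ms: number;      // request duration in milliseconds
  model: string;           // e.g. "gpt-4o", "claude-3-5-sonnet"
  provider: "openai" | "anthropic" | "gemini";
  tool_calls: number;      // function/tool calls made
  trace_id: string;        // OpenTelemetry-compatible trace ID
  agent_stack: string[];   // named agent context for multi-agent systems
}

// Example event for a single chat completion.
const example: LLMCallMetric = {
  input_tokens: 12,
  output_tokens: 48,
  cached_tokens: 0,
  latency_ms: 830,
  model: "gpt-4o",
  provider: "openai",
  tool_calls: 0,
  trace_id: "4bf92f3577b34da6a3ce929d0e0e4736",
  agent_stack: ["router", "researcher"],
};
```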

Cost Control Actions

Prevent budget overruns with real-time controls:
Action    Effect
allow     Request proceeds normally
block     Request rejected when the budget is exhausted
throttle  Request delayed for rate limiting
degrade   Switch to a cheaper model when approaching the budget
alert     Proceed with a notification at the warning threshold
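
As a sketch of how these actions relate to budget utilization, here is a minimal policy that maps the fraction of budget spent to an action. The function name and thresholds are illustrative assumptions, not Aden defaults - in practice you configure budgets through the platform rather than writing this logic yourself:

```typescript
type CostAction = "allow" | "alert" | "degrade" | "throttle" | "block";

// Illustrative policy: map the fraction of budget spent to an action.
// Thresholds are arbitrary examples, not Aden defaults.
function decideAction(spent: number, budget: number): CostAction {
  const used = spent / budget;
  if (used >= 1.0) return "block";     // budget exhausted: reject the request
  if (used >= 0.9) return "throttle";  // nearly exhausted: delay for rate limiting
  if (used >= 0.75) return "degrade";  // approaching budget: switch to a cheaper model
  if (used >= 0.5) return "alert";     // warning threshold: proceed with notification
  return "allow";                      // plenty of headroom: proceed normally
}
```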

Need Help?

Book a Discovery Call

Schedule a call with our team to learn more.