Overview

The Aden Python SDK provides real-time usage tracking, budget enforcement, and cost control for LLM applications. It automatically instruments OpenAI, Anthropic Claude, and Google Gemini API calls without modifying your application code.

GitHub Repository

View source code and contribute

Key Features

Multi-Provider Support

Works with OpenAI, Anthropic, and Google Gemini with a single integration.

Zero Code Changes

Automatic instrumentation - just call instrument() once at startup.

Real-Time Cost Control

Budget limits, throttling, and automatic model degradation.

Comprehensive Metrics

Track tokens, latency, costs, tool calls, and more.
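To make the cost-control idea concrete, here is a minimal, self-contained sketch of budget-based model degradation. The names (BudgetGuard, DEGRADE_MAP) and the 80% threshold are illustrative only and are not part of the Aden API; see the SDK's own configuration for the real mechanism.

```python
# Hypothetical sketch of budget enforcement with model degradation.
# BudgetGuard and DEGRADE_MAP are illustrative names, not Aden APIs.
DEGRADE_MAP = {"gpt-4o": "gpt-4o-mini"}  # fall back to a cheaper model

class BudgetGuard:
    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def record(self, cost_usd: float) -> None:
        # Accumulate spend as each call's cost is reported.
        self.spent_usd += cost_usd

    def choose_model(self, requested: str) -> str:
        # Once 80% of the budget is spent, degrade to a cheaper model.
        if self.spent_usd >= 0.8 * self.limit_usd:
            return DEGRADE_MAP.get(requested, requested)
        return requested

guard = BudgetGuard(limit_usd=10.0)
guard.record(9.0)
print(guard.choose_model("gpt-4o"))  # → gpt-4o-mini
```

A real implementation would also need per-call pricing and throttling, but the core decision (swap to a cheaper model as the budget nears exhaustion) follows this shape.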

Supported Providers

Provider         Package                Status
OpenAI           openai                 Full support (Chat Completions, streaming, tools)
Anthropic        anthropic              Full support (Messages API, streaming, tools)
Google Gemini    google-generativeai    Full support (generateContent, chat)

Framework Compatibility

The SDK works with popular Python AI frameworks:
  • PydanticAI - Full integration support
  • LangChain - Instruments underlying LLM providers
  • LlamaIndex - Works with instrumented providers
  • LiveKit Voice Agents - Specialized voice agent support

What Gets Tracked

Every LLM API call is captured with:
Metric              Description
input_tokens        Prompt/input tokens used
output_tokens       Completion/output tokens generated
cached_tokens       Tokens served from prompt cache
reasoning_tokens    Reasoning tokens (o1/o3 models)
latency_ms          Request duration in milliseconds
model               Model name (e.g., gpt-4o, claude-3-5-sonnet)
provider            Provider name (openai, anthropic, gemini)
tool_calls          Function/tool calls made
trace_id            OpenTelemetry-compatible trace ID
rate_limit          Rate limit information from headers
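As a rough mental model, one tracked record can be pictured as a plain data object with the fields above. The LLMMetric class below is purely illustrative (not an Aden type); the SDK's actual payload shape may differ.

```python
from dataclasses import dataclass, field

# Illustrative shape of one tracked metric record; field names mirror
# the table above, but this dataclass is NOT part of the Aden SDK.
@dataclass
class LLMMetric:
    provider: str
    model: str
    input_tokens: int
    output_tokens: int
    latency_ms: int
    cached_tokens: int = 0
    reasoning_tokens: int = 0
    tool_calls: list = field(default_factory=list)
    trace_id: str = ""

m = LLMMetric(provider="openai", model="gpt-4o",
              input_tokens=10, output_tokens=25, latency_ms=342)
print(m.input_tokens + m.output_tokens)  # → 35 total tokens for this call
```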

Quick Example

from aden import instrument, MeterOptions, create_console_emitter
from openai import OpenAI

# 1. Instrument at startup (before creating clients)
instrument(MeterOptions(
    emit_metric=create_console_emitter(pretty=True),
))

# 2. Use your SDK normally - metrics are captured automatically
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Console output:
# {
#   "provider": "openai",
#   "model": "gpt-4o",
#   "input_tokens": 10,
#   "output_tokens": 25,
#   "latency_ms": 342,
#   ...
# }
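The console emitter above prints each metric to stdout. Assuming emit_metric accepts any callable that receives the metric payload (an assumption; check the SDK reference for the exact callback signature), a custom emitter could forward metrics elsewhere, e.g. to a JSON Lines file:

```python
import json

# Hypothetical custom emitter; assumes the emit callback receives a
# dict-like metric payload — verify the real signature in the SDK docs.
def jsonl_emitter(path: str):
    def emit(metric: dict) -> None:
        # Append each metric as one JSON line.
        with open(path, "a") as f:
            f.write(json.dumps(metric) + "\n")
    return emit

emit = jsonl_emitter("metrics.jsonl")
emit({"provider": "openai", "model": "gpt-4o", "input_tokens": 10})
```

Under that assumption it would be wired up as emit_metric=jsonl_emitter("metrics.jsonl") in MeterOptions, in place of the console emitter.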

Next Steps