Overview
The Aden Python SDK provides real-time usage tracking, budget enforcement, and cost control for LLM applications. It automatically instruments OpenAI, Anthropic Claude, and Google Gemini API calls without modifying your application code.
Key Features
Multi-Provider Support
Works with OpenAI, Anthropic, and Google Gemini with a single integration.
Zero Code Changes
Automatic instrumentation - just call instrument() once at startup.
Real-Time Cost Control
Budget limits, throttling, and automatic model degradation.
Comprehensive Metrics
Track tokens, latency, costs, tool calls, and more.
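To illustrate how zero-code-change instrumentation generally works, the sketch below wraps a client method so every call is recorded transparently. The client class, method name, and metric fields here are hypothetical stand-ins for illustration only, not the SDK's actual internals or API:

```python
import functools
import time

metrics = []  # collected call records (illustrative, in-memory)

class FakeLLMClient:
    """Stand-in for a provider client such as OpenAI's."""
    def complete(self, prompt):
        return f"echo: {prompt}"

def instrument(client):
    """Wrap client.complete so each call records latency and metadata.

    The application keeps calling client.complete exactly as before;
    only the wrapper observes the call.
    """
    original = client.complete

    @functools.wraps(original)
    def wrapper(prompt):
        start = time.perf_counter()
        result = original(prompt)
        metrics.append({
            "latency_ms": (time.perf_counter() - start) * 1000,
            "provider": "fake",
        })
        return result

    client.complete = wrapper
    return client

client = instrument(FakeLLMClient())
print(client.complete("hi"))  # application code is unchanged
print(len(metrics))           # 1
```

Because the wrapper preserves the original method's signature and return value, existing application code needs no changes — which is the pattern the "Zero Code Changes" feature describes.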
Supported Providers
| Provider | Package | Status |
|---|---|---|
| OpenAI | openai | Full support (Chat Completions, streaming, tools) |
| Anthropic | anthropic | Full support (Messages API, streaming, tools) |
| Google Gemini | google-generativeai | Full support (generateContent, chat) |
Framework Compatibility
The SDK works with popular Python AI frameworks:

- PydanticAI - Full integration support
- LangChain - Instruments underlying LLM providers
- LlamaIndex - Works with instrumented providers
- LiveKit Voice Agents - Specialized voice agent support
What Gets Tracked
Every LLM API call is captured with:

| Metric | Description |
|---|---|
| input_tokens | Prompt/input tokens used |
| output_tokens | Completion/output tokens generated |
| cached_tokens | Tokens served from prompt cache |
| reasoning_tokens | Reasoning tokens (o1/o3 models) |
| latency_ms | Request duration in milliseconds |
| model | Model name (e.g., gpt-4o, claude-3-5-sonnet) |
| provider | Provider name (openai, anthropic, gemini) |
| tool_calls | Function/tool calls made |
| trace_id | OpenTelemetry-compatible trace ID |
| rate_limit | Rate limit information from headers |
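As an illustration of these fields, a single tracked call could be represented as a plain record like the one below. The values, nested shapes, and the cache-discount arithmetic are made up for the example; this is not a literal SDK payload:

```python
# Hypothetical record for one tracked call, using the field names from
# the table above; all values are illustrative.
record = {
    "input_tokens": 1200,
    "output_tokens": 350,
    "cached_tokens": 800,
    "reasoning_tokens": 0,
    "latency_ms": 842.5,
    "model": "gpt-4o",
    "provider": "openai",
    "tool_calls": [{"name": "get_weather", "arguments": {"city": "Oslo"}}],
    "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
    "rate_limit": {"remaining_requests": 4999},
}

# Providers typically discount cached prompt tokens, so the fully billed
# input portion is the part not served from cache:
billable_input = record["input_tokens"] - record["cached_tokens"]
print(billable_input)  # 400
```

Having cached_tokens and reasoning_tokens broken out separately is what makes per-call cost accounting possible, since both are priced differently from ordinary input and output tokens.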