LLM Observability & Cost Control
Aden provides SDKs for TypeScript and Python that automatically track every LLM API call in your application - usage, latency, and costs - and give you real-time controls to prevent budget overruns.
TypeScript SDK
Instrument OpenAI, Anthropic, and Gemini in Node.js applications.
Python SDK
Track LLM usage in Python with support for FastAPI, Django, and more.
Key Features
Multi-Provider Support
Works with OpenAI, Anthropic Claude, and Google Gemini with a single integration.
Zero Code Changes
Automatic instrumentation - just call instrument() once at startup.
Real-Time Cost Control
Set budgets, throttle requests, and automatically degrade to cheaper models.
OpenTelemetry Compatible
Trace IDs, span IDs, and agent stacks for multi-agent observability.
Quick Start
- TypeScript
- Python
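The tabs above hold the per-language snippets. To illustrate the "call instrument() once at startup" pattern described under Key Features, here is a minimal, self-contained Python sketch of how wrap-based instrumentation works; `FakeClient`, `instrument`, and `RECORDS` are illustrative names, not the actual Aden SDK API:

```python
import time
from functools import wraps

# Illustrative stand-in for a provider client (not a real SDK class).
class FakeClient:
    def chat(self, model: str, prompt: str) -> str:
        return f"echo: {prompt}"

RECORDS = []  # captured usage records

def instrument(client) -> None:
    """Wrap client.chat once at startup so every call is recorded."""
    original = client.chat

    @wraps(original)
    def wrapped(model: str, prompt: str) -> str:
        start = time.perf_counter()
        result = original(model, prompt)
        RECORDS.append({
            "model": model,
            "latency_ms": (time.perf_counter() - start) * 1000,
        })
        return result

    client.chat = wrapped

client = FakeClient()
instrument(client)           # one call at startup
client.chat("gpt-4o", "hi")  # tracked transparently; call sites unchanged
```

After `instrument()` runs, existing call sites need no changes - that is the "zero code changes" property the feature list refers to.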
What Gets Tracked
Every LLM API call is captured with comprehensive metrics:

| Metric | Description |
|---|---|
| input_tokens | Prompt/input tokens used |
| output_tokens | Completion/output tokens generated |
| cached_tokens | Tokens served from prompt cache |
| latency_ms | Request duration in milliseconds |
| model | Model name (e.g., gpt-4o, claude-3-5-sonnet) |
| provider | Provider name (openai, anthropic, gemini) |
| tool_calls | Function/tool calls made |
| trace_id | OpenTelemetry-compatible trace ID |
| agent_stack | Named agent context for multi-agent systems |
Cost Control Actions
Prevent budget overruns with real-time controls:

| Action | Effect |
|---|---|
| allow | Request proceeds normally |
| block | Request rejected when budget exhausted |
| throttle | Request delayed for rate limiting |
| degrade | Switch to cheaper model when approaching budget |
| alert | Proceed with notification at warning threshold |
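The spend-based actions in the table can be sketched as a simple threshold check. The thresholds and the `decide` function below are illustrative assumptions, not Aden's actual policy engine; `throttle` is omitted because it depends on request rate rather than spend:

```python
def decide(spend: float, budget: float,
           warn_at: float = 0.8, degrade_at: float = 0.95) -> str:
    """Map current spend against a budget to one of the control actions."""
    if spend >= budget:
        return "block"      # budget exhausted: reject the request
    ratio = spend / budget
    if ratio >= degrade_at:
        return "degrade"    # near the limit: switch to a cheaper model
    if ratio >= warn_at:
        return "alert"      # warning threshold: proceed but notify
    return "allow"          # normal operation

print(decide(10.0, 100.0))   # → allow
print(decide(96.0, 100.0))   # → degrade
print(decide(100.0, 100.0))  # → block
```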
Platform Products
General Ledger API
Complete API for financial operations - chart of accounts, journal entries, AP/AR, and reporting.
MCP Server
Model Context Protocol server for AI-powered financial operations.
Need Help?
Book a Discovery Call
Schedule a call with our team to learn more.