By the end of this guide, you’ll see every LLM call automatically tracked with latency, tokens, and cost.
Prerequisites

- Runtime: Node.js 18+ or Python 3.9+
- LLM provider: an API key for OpenAI, Anthropic, or Google
Step 1: Install the SDK
TypeScript:

```shell
npm install aden-ts dotenv openai
```

Python:

```shell
pip install aden-py python-dotenv openai
```
Step 2: Set Up Environment
Create a .env file in your project root:
```
OPENAI_API_KEY=sk-xxx
ADEN_API_URL=https://kube.acho.io
ADEN_API_KEY=your-aden-api-key
```
Don’t have an Aden API key yet? You can still follow along using the console emitter for local testing.
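At heart, a console emitter is just a function that formats metric fields into a log line. The sketch below is purely illustrative — the dict keys are assumptions for this example, not the actual aden metric schema:

```python
# Illustrative sketch of a console emitter: format metric fields into a
# single log line. The dict keys here are assumed, not aden's schema.
def console_emit(metric: dict) -> str:
    line = (
        f"[aden] {metric['provider']}/{metric['model']} | "
        f"{metric['tokens']} tokens | {metric['latency_ms']}ms | "
        f"${metric['cost_usd']:.5f}"
    )
    print(line)
    return line
```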
Step 3: Instrument Your Application
Add instrumentation before creating any LLM clients. This wraps the SDK to capture metrics automatically.
TypeScript:

```typescript
import "dotenv/config";
import OpenAI from "openai";
import { instrument, createConsoleEmitter } from "aden-ts";

// Instrument at startup - must come before creating clients
await instrument({
  apiKey: process.env.ADEN_API_KEY,
  serverUrl: process.env.ADEN_API_URL,
  emitMetric: createConsoleEmitter({ pretty: true }),
  onAlert: (alert) => console.log(`[Aden ${alert.level}] ${alert.message}`),
  sdks: { OpenAI },
});
```
Python:

```python
import os

from dotenv import load_dotenv

load_dotenv()

from aden import instrument, MeterOptions, create_console_emitter

# Instrument at startup - must come before creating clients
instrument(MeterOptions(
    api_key=os.environ.get("ADEN_API_KEY"),
    server_url=os.environ.get("ADEN_API_URL"),
    emit_metric=create_console_emitter(pretty=True),
    on_alert=lambda alert: print(f"[Aden {alert.level}] {alert.message}"),
))
```
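Conceptually, "wrapping the SDK" means replacing a client method with a version that times the call and records usage before returning the original result. A minimal stdlib-only sketch of the pattern (not the actual aden implementation):

```python
import functools
import time

metrics = []  # collected metric records

def with_metrics(fn):
    """Wrap a callable so each call records latency and token usage."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        metrics.append({
            "latency_ms": (time.perf_counter() - start) * 1000,
            "tokens": getattr(result, "total_tokens", None),
        })
        return result
    return wrapper
```

instrument() applies this kind of wrapper to the provider SDK's request methods, which is why it must run before any client is created.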
Step 4: Make Your First Call
Use your LLM SDK exactly as you normally would. Aden captures metrics transparently.
TypeScript:

```typescript
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is 2+2?" }],
});

console.log(response.choices[0].message.content);
```
Python:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is 2+2?"}],
)

print(response.choices[0].message.content)
```
Step 5: Handle Budget Errors
When cost controls are active, a request is blocked once the budget is exhausted. Handle this gracefully:
TypeScript:

```typescript
import { RequestCancelledError } from "aden-ts";

async function runAgent(userInput: string): Promise<string> {
  try {
    const openai = new OpenAI();
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userInput }],
    });
    return response.choices[0]?.message?.content ?? "";
  } catch (e) {
    if (e instanceof RequestCancelledError) {
      return `Sorry, your budget has been exhausted. ${e.message}`;
    }
    throw e;
  }
}
```
Python:

```python
from aden import RequestCancelledError

def run_agent(user_input: str) -> str:
    try:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": user_input}],
        )
        return response.choices[0].message.content
    except RequestCancelledError as e:
        return f"Sorry, your budget has been exhausted. {e}"
```
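For intuition about why a request gets cancelled: a budget guard checks projected spend before each request and refuses the call once the limit would be exceeded. A self-contained sketch of the idea (the real error type comes from the aden SDK; this class and exception are hypothetical stand-ins):

```python
class BudgetExhaustedError(Exception):
    """Hypothetical stand-in for the SDK's RequestCancelledError."""

class BudgetGuard:
    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> None:
        # Refuse the request if it would push spend past the budget.
        if self.spent_usd + cost_usd > self.limit_usd:
            raise BudgetExhaustedError(
                f"budget of ${self.limit_usd:.2f} exhausted"
            )
        self.spent_usd += cost_usd
```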
Step 6: Clean Up on Exit
Always call uninstrument() when your application shuts down to flush remaining metrics:
TypeScript:

```typescript
import { uninstrument } from "aden-ts";

// In your shutdown handler
await uninstrument();
```
Python:

```python
from aden import uninstrument

# In your shutdown handler
uninstrument()
```
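The reason the flush matters: emitters typically buffer metrics and send them in batches, so anything still buffered at process exit is lost unless it is flushed. A stdlib-only sketch of the pattern (hypothetical names, not the aden internals):

```python
import atexit

class BufferedEmitter:
    def __init__(self, batch_size: int = 10):
        self.batch_size = batch_size
        self.buffer = []
        self.sent = []

    def emit(self, metric: dict) -> None:
        self.buffer.append(metric)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # Send (here: just move) everything still buffered.
        self.sent.extend(self.buffer)
        self.buffer.clear()

emitter = BufferedEmitter()
# uninstrument() plays the role of this atexit hook: flush before exit.
atexit.register(emitter.flush)
```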
Complete Example
Here’s everything together:
TypeScript:

```typescript
import "dotenv/config";
import OpenAI from "openai";
import {
  instrument,
  uninstrument,
  createConsoleEmitter,
  RequestCancelledError,
} from "aden-ts";

// Initialize Aden FIRST
await instrument({
  apiKey: process.env.ADEN_API_KEY,
  serverUrl: process.env.ADEN_API_URL,
  emitMetric: createConsoleEmitter({ pretty: true }),
  onAlert: (alert) => console.log(`[Aden ${alert.level}] ${alert.message}`),
  sdks: { OpenAI },
});

// Your agent function
async function runAgent(userInput: string): Promise<string> {
  try {
    const openai = new OpenAI();
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userInput }],
    });
    return response.choices[0]?.message?.content ?? "";
  } catch (e) {
    if (e instanceof RequestCancelledError) {
      return `Sorry, your budget has been exhausted. ${e.message}`;
    }
    throw e;
  }
}

// Main entry point
async function main() {
  try {
    const result = await runAgent("Hello, world!");
    console.log(result);
  } finally {
    await uninstrument();
  }
}

main();
```
Python:

```python
import os

from dotenv import load_dotenv

load_dotenv()

from openai import OpenAI
from aden import (
    instrument,
    uninstrument,
    MeterOptions,
    create_console_emitter,
    RequestCancelledError,
)

# Initialize Aden FIRST
instrument(MeterOptions(
    api_key=os.environ.get("ADEN_API_KEY"),
    server_url=os.environ.get("ADEN_API_URL"),
    emit_metric=create_console_emitter(pretty=True),
    on_alert=lambda alert: print(f"[Aden {alert.level}] {alert.message}"),
))

# Your agent function
def run_agent(user_input: str) -> str:
    try:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": user_input}],
        )
        return response.choices[0].message.content
    except RequestCancelledError as e:
        return f"Sorry, your budget has been exhausted. {e}"

# Main entry point
if __name__ == "__main__":
    try:
        result = run_agent("Hello, world!")
        print(result)
    finally:
        uninstrument()
```
You’re Instrumented
Run your code and you’ll see metrics in your console:
```
[aden] openai/gpt-4o | 45 tokens | 892ms | $0.00135
```
Every LLM call is now tracked with latency, token usage, model, and estimated cost.
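Cost is estimated from token counts and per-model pricing. The sketch below shows the arithmetic only; the per-million-token prices are placeholder assumptions for illustration, not aden's actual pricing table:

```python
# Placeholder prices in USD per million tokens; real prices vary by model.
PRICES = {"gpt-4o": {"input": 2.50, "output": 10.00}}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from token counts and per-million pricing."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```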
Next Steps