By the end of this guide, you’ll see every LLM call automatically tracked with latency, tokens, and cost.

Prerequisites

Runtime: Node.js 18+ or Python 3.9+
LLM Provider: an API key for OpenAI, Anthropic, or Google

Step 1: Install the SDK

npm install aden-ts dotenv openai

Step 2: Set Up Environment

Create a .env file in your project root:
OPENAI_API_KEY=sk-xxx
ADEN_API_URL=https://kube.acho.io
ADEN_API_KEY=your-aden-api-key
Don’t have an Aden API key yet? You can still follow along using the console emitter for local testing.
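A quick startup check catches missing variables before they cause confusing failures later. The helper below is a hypothetical convenience, not part of the aden-ts SDK; the key names match the .env file above:

```typescript
// Returns the names of any required environment variables that are unset.
// Illustrative helper only - the key names mirror the .env file above.
function missingEnvKeys(env: Record<string, string | undefined>): string[] {
  const required = ["OPENAI_API_KEY", "ADEN_API_URL", "ADEN_API_KEY"];
  return required.filter((key) => !env[key]);
}
```

Call it with process.env before instrumenting and warn (or exit) if it returns a non-empty array.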

Step 3: Instrument Your Application

Add instrumentation before creating any LLM clients. This wraps the SDK to capture metrics automatically.
import "dotenv/config";
import OpenAI from "openai";
import { instrument, createConsoleEmitter } from "aden-ts";

// Instrument at startup - must come before creating clients
await instrument({
  apiKey: process.env.ADEN_API_KEY,
  serverUrl: process.env.ADEN_API_URL,
  emitMetric: createConsoleEmitter({ pretty: true }),
  onAlert: (alert) => console.log(`[Aden ${alert.level}] ${alert.message}`),
  sdks: { OpenAI },
});

Step 4: Make Your First Call

Use your LLM SDK exactly as you normally would. Aden captures metrics transparently.
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is 2+2?" }],
});

console.log(response.choices[0].message.content);

Step 5: Handle Budget Errors

When cost controls are active, requests may be blocked once the budget is exhausted. Handle this gracefully:
import { RequestCancelledError } from "aden-ts";

async function runAgent(userInput: string): Promise<string> {
  try {
    const openai = new OpenAI();
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userInput }],
    });
    return response.choices[0]?.message?.content ?? "";
  } catch (e) {
    if (e instanceof RequestCancelledError) {
      return `Sorry, your budget has been exhausted. ${e.message}`;
    }
    throw e;
  }
}

Step 6: Clean Up on Exit

Always call uninstrument() when your application shuts down to flush remaining metrics:
import { uninstrument } from "aden-ts";

// In your shutdown handler
await uninstrument();
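In a long-running service, the shutdown handler is typically wired to process signals. A sketch of one way to do that, using a small guard so repeated signals only flush once (the guard is illustrative, not part of aden-ts):

```typescript
// Wraps an async cleanup so it runs at most once, even if several
// signals arrive. In a real app, pass uninstrument from "aden-ts".
function once(cleanup: () => Promise<void>): () => Promise<void> {
  let pending: Promise<void> | null = null;
  return () => (pending ??= cleanup());
}

// Hypothetical wiring in your entry point:
// const shutdown = once(() => uninstrument());
// process.on("SIGINT", () => shutdown().then(() => process.exit(0)));
// process.on("SIGTERM", () => shutdown().then(() => process.exit(0)));
```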

Complete Example

Here’s everything together:
import "dotenv/config";
import OpenAI from "openai";
import {
  instrument,
  uninstrument,
  createConsoleEmitter,
  RequestCancelledError,
} from "aden-ts";

// Initialize Aden FIRST
await instrument({
  apiKey: process.env.ADEN_API_KEY,
  serverUrl: process.env.ADEN_API_URL,
  emitMetric: createConsoleEmitter({ pretty: true }),
  onAlert: (alert) => console.log(`[Aden ${alert.level}] ${alert.message}`),
  sdks: { OpenAI },
});

// Your agent function
async function runAgent(userInput: string): Promise<string> {
  try {
    const openai = new OpenAI();
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: userInput }],
    });
    return response.choices[0]?.message?.content ?? "";
  } catch (e) {
    if (e instanceof RequestCancelledError) {
      return `Sorry, your budget has been exhausted. ${e.message}`;
    }
    throw e;
  }
}

// Main entry point
async function main() {
  try {
    const result = await runAgent("Hello, world!");
    console.log(result);
  } finally {
    await uninstrument();
  }
}

main();

You’re Instrumented

Run your code and you’ll see metrics in your console:
[aden] openai/gpt-4o | 45 tokens | 892ms | $0.00135
Every LLM call is now tracked with latency, token usage, model, and estimated cost.
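Instead of the console emitter, you can pass your own emitMetric function, for example to aggregate spend across calls. The metric field names below (model, totalTokens, latencyMs, costUsd) are assumptions for illustration; check the aden-ts type definitions for the real shape:

```typescript
// A minimal custom-emitter sketch that tracks cumulative cost.
// The LlmMetric shape here is assumed, not taken from aden-ts.
interface LlmMetric {
  model: string;
  totalTokens: number;
  latencyMs: number;
  costUsd: number;
}

function createCostTracker() {
  let totalCost = 0;
  return {
    // Could be passed as emitMetric in the instrument() options.
    emit(metric: LlmMetric) {
      totalCost += metric.costUsd;
      console.log(
        `[aden] ${metric.model} | ${metric.totalTokens} tokens | ` +
          `${metric.latencyMs}ms | $${metric.costUsd}`,
      );
    },
    total: () => totalCost,
  };
}
```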

Next Steps