Basic Setup

1. Install dependencies

npm install aden openai
2. Instrument at startup

Add instrumentation before creating any LLM clients:
import { instrument, createConsoleEmitter } from "aden";
import OpenAI from "openai";

await instrument({
  emitMetric: createConsoleEmitter({ pretty: true }),
  sdks: { OpenAI },
});
3. Use your SDK normally

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is 2+2?" }],
});

console.log(response.choices[0].message.content);
4. Clean up on shutdown

import { uninstrument } from "aden";

// In your shutdown handler
await uninstrument();

Multi-Provider Example

Instrument all providers at once:
import { instrument, createConsoleEmitter } from "aden";
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";
import { GoogleGenerativeAI } from "@google/generative-ai";

await instrument({
  emitMetric: createConsoleEmitter({ pretty: true }),
  sdks: {
    OpenAI,
    Anthropic,
    GoogleGenerativeAI,
  },
});

// All providers are now instrumented
const openai = new OpenAI();
const anthropic = new Anthropic();
const gemini = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

Streaming Support

Streaming calls are fully supported. Metrics are emitted when the stream completes:
const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
// Metrics emitted here after stream completes

Tool Calls

Tool calls are automatically tracked:
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get weather for a location",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string" },
          },
        },
      },
    },
  ],
});

// Metric includes:
// tool_call_count: 1
// tool_names: "get_weather"

Production Setup

For production, connect to the Aden control server:
import { instrument } from "aden";
import OpenAI from "openai";

await instrument({
  apiKey: process.env.ADEN_API_KEY,
  serverUrl: process.env.ADEN_API_URL,
  sdks: { OpenAI },

  // Track usage per user for budgets
  getContextId: () => getCurrentUserId(),

  // Handle alerts
  onAlert: (alert) => {
    console.warn(`[${alert.level}] ${alert.message}`);
    // Send to Slack, PagerDuty, etc.
  },
});

Complete Example

import {
  instrument,
  uninstrument,
  createConsoleEmitter,
  createBatchEmitter,
  createHttpTransport,
  createMultiEmitter,
} from "aden";
import OpenAI from "openai";

async function main() {
  // Set up multiple emitters
  const emitter = createMultiEmitter([
    // Log to console in development
    createConsoleEmitter({ pretty: true }),
    // Batch and send to backend
    createBatchEmitter({
      handler: createHttpTransport({
        url: "https://your-backend.com/metrics",
        headers: { "X-API-Key": process.env.METRICS_API_KEY! },
      }),
      batchSize: 50,
      flushIntervalMs: 5000,
    }),
  ]);

  await instrument({
    emitMetric: emitter,
    sdks: { OpenAI },
    trackCallRelationships: true,
    trackToolCalls: true,
  });

  const openai = new OpenAI();

  // Make API calls...
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello, world!" }],
  });

  console.log(response.choices[0].message.content);

  // Clean up
  await uninstrument();
}

main().catch(console.error);

Next Steps