
Overview

The Aden SDK provides OpenTelemetry-compatible tracing for LLM calls, including:
  • Trace IDs - Group related operations
  • Span IDs - Unique identifiers per operation
  • Parent-child relationships - Link calls in a chain
  • Agent stacks - Track nested agent contexts
  • Call sequences - Order of calls within a trace
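Taken together, these fields describe a per-call trace context. As a rough sketch (field names mirror the examples on this page; the SDK's actual exported type may differ):

```typescript
// Sketch of the per-call trace context. Field names are taken from
// the examples below; the exact exported type may differ.
interface TraceContext {
  trace_id: string;          // groups related operations
  span_id: string;           // unique identifier per operation
  parent_span_id?: string;   // links calls in a chain
  agent_stack: string[];     // nested agent contexts, outermost first
  call_sequence: number;     // 1-based order of calls within the trace
}

const example: TraceContext = {
  trace_id: "abc123",
  span_id: "span2",
  parent_span_id: "span1",
  agent_stack: ["ResearchAgent"],
  call_sequence: 2,
};
```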

Automatic Session Tracking

Related LLM calls are automatically grouped:
// First call starts a new trace
await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is AI?" }],
});
// → trace_id: "abc123", span_id: "span1", call_sequence: 1

// Subsequent calls continue the trace
await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Tell me more" }],
});
// → trace_id: "abc123", span_id: "span2", parent_span_id: "span1", call_sequence: 2

Named Agent Tracking

Use withAgent() to track calls under named agents:
import { withAgent } from "aden";

await withAgent("ResearchAgent", async () => {
  await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Research topic X" }],
  });
  // → agent_stack: ["ResearchAgent"]
});

Nested Agents

Agent contexts can be nested:
await withAgent("OrchestratorAgent", async () => {
  // Orchestrator's call
  await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Plan the research" }],
  });
  // → agent_stack: ["OrchestratorAgent"]

  // Delegate to sub-agent
  await withAgent("WebSearchAgent", async () => {
    await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: "Search for X" }],
    });
    // → agent_stack: ["OrchestratorAgent", "WebSearchAgent"]
  });

  // Back to orchestrator
  await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Summarize findings" }],
  });
  // → agent_stack: ["OrchestratorAgent"]
});
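The nesting behavior above is what you would get from Node's AsyncLocalStorage: each withAgent() pushes a name for the duration of its callback, and the stack unwinds automatically when the callback returns. A minimal sketch of that mechanism (illustrative only, not the SDK's internals):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Each call to withAgentSketch runs its callback with the agent name
// appended to the inherited stack; the stack unwinds automatically
// when the callback settles.
const agentStore = new AsyncLocalStorage<string[]>();

async function withAgentSketch<T>(name: string, fn: () => Promise<T>): Promise<T> {
  const stack = [...(agentStore.getStore() ?? []), name];
  return agentStore.run(stack, fn);
}

function currentAgentStack(): string[] {
  return agentStore.getStore() ?? [];
}
```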

Request Context Isolation

Use enterMeterContext() to isolate metrics per HTTP request or user session:
import { enterMeterContext } from "aden";
import express from "express";

const app = express();

app.use((req, res, next) => {
  enterMeterContext({
    sessionId: req.headers["x-request-id"] as string,
    metadata: {
      userId: (req as any).userId, // set by your auth middleware, if any
      endpoint: req.path,
    },
  });
  next();
});

app.post("/chat", async (req, res) => {
  // All LLM calls here share this request's trace
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: req.body.messages,
  });
  // → trace_id isolated to this request
  res.json(response);
});

Async Context Wrapper

For more control, use withMeterContextAsync():
import { withMeterContextAsync } from "aden";

await withMeterContextAsync(
  async () => {
    // All calls in this function share the same trace
    await openai.chat.completions.create({ ... });
    await anthropic.messages.create({ ... });
  },
  {
    sessionId: "session-123",
    metadata: { userId: "user-456" },
  }
);

Accessing Current Context

Inspect the active trace context at any point with getCurrentContext():
import { getCurrentContext } from "aden";

const context = getCurrentContext();

console.log(context);
// {
//   trace_id: "abc123",
//   span_id: "current-span",
//   call_sequence: 3,
//   agent_stack: ["ResearchAgent", "WebSearchAgent"],
//   session_id: "session-123",
//   metadata: { userId: "user-456" }
// }

Custom ID Generators

Provide custom trace and span ID generators:
import { v4 as uuidv4 } from "uuid";

await instrument({
  emitMetric: myEmitter,
  sdks: { OpenAI },

  generateTraceId: () => `trace-${uuidv4()}`,
  generateSpanId: () => `span-${uuidv4()}`,
});

Integration with OpenTelemetry

The SDK’s trace/span IDs are compatible with OpenTelemetry. You can correlate LLM metrics with your existing traces:
import { trace } from "@opentelemetry/api";

await instrument({
  emitMetric: myEmitter,
  sdks: { OpenAI },

  // Use OpenTelemetry's current trace
  generateTraceId: () => {
    const span = trace.getActiveSpan();
    return span?.spanContext().traceId || crypto.randomUUID();
  },
});

Multi-Agent Patterns

Sequential Agents

async function research(topic: string) {
  return withAgent("ResearchAgent", async () => {
    const plan = await withAgent("PlanningAgent", async () => {
      const res = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: `Plan research for: ${topic}` }],
      });
      return res.choices[0].message.content;
    });

    const data = await withAgent("DataGatheringAgent", async () => {
      const res = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: `Gather data based on: ${plan}` }],
      });
      return res.choices[0].message.content;
    });

    return await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: `Synthesize: ${data}` }],
    });
  });
}

Parallel Agents

async function multiPerspectiveAnalysis(topic: string) {
  return withAgent("OrchestratorAgent", async () => {
    const [technical, business, user] = await Promise.all([
      withAgent("TechnicalAnalyst", async () =>
        openai.chat.completions.create({
          model: "gpt-4o",
          messages: [{ role: "user", content: `Technical analysis of: ${topic}` }],
        })
      ),
      withAgent("BusinessAnalyst", async () =>
        openai.chat.completions.create({
          model: "gpt-4o",
          messages: [{ role: "user", content: `Business analysis of: ${topic}` }],
        })
      ),
      withAgent("UserResearcher", async () =>
        openai.chat.completions.create({
          model: "gpt-4o",
          messages: [{ role: "user", content: `User perspective on: ${topic}` }],
        })
      ),
    ]);

    return { technical, business, user };
  });
}

Debate Pattern

async function debate(topic: string) {
  return withAgent("DebateModerator", async () => {
    const proPosition = await withAgent("ProDebater", async () => {
      const res = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: `Argue FOR: ${topic}` }],
      });
      return res.choices[0].message.content;
    });

    const conPosition = await withAgent("ConDebater", async () => {
      const res = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [{ role: "user", content: `Argue AGAINST: ${topic}` }],
      });
      return res.choices[0].message.content;
    });

    return await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{
        role: "user",
        content: `Synthesize these positions:\nPro: ${proPosition}\nCon: ${conPosition}`,
      }],
    });
  });
}

Visualizing Agent Flows

The collected metrics enable visualization of agent flows:
trace_id: abc123
├── OrchestratorAgent (span: s1)
│   ├── call 1: gpt-4o, 50ms
│   ├── WebSearchAgent (span: s2, parent: s1)
│   │   └── call 2: gpt-4o, 120ms
│   ├── DataAnalysisAgent (span: s3, parent: s1)
│   │   └── call 3: gpt-4o, 200ms
│   └── call 4: gpt-4o, 80ms (synthesis)
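A tree like the one above can be reconstructed from the flat stream of emitted records by indexing on parent_span_id. A minimal sketch (the record shape is assumed from the fields shown throughout this page, not a documented export):

```typescript
// Assumed record shape, derived from the fields shown on this page.
interface SpanRecord {
  span_id: string;
  parent_span_id?: string;
  agent: string;        // e.g. the innermost entry of agent_stack
  duration_ms: number;
}

// Index children by parent span, then print the tree depth-first.
function renderTree(records: SpanRecord[]): string {
  const children = new Map<string | undefined, SpanRecord[]>();
  for (const r of records) {
    const list = children.get(r.parent_span_id) ?? [];
    list.push(r);
    children.set(r.parent_span_id, list);
  }
  const lines: string[] = [];
  const walk = (parent: string | undefined, depth: number) => {
    for (const r of children.get(parent) ?? []) {
      lines.push(`${"  ".repeat(depth)}${r.agent} (span: ${r.span_id}, ${r.duration_ms}ms)`);
      walk(r.span_id, depth + 1);
    }
  };
  walk(undefined, 0);
  return lines.join("\n");
}
```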

Next Steps