The Aden Python SDK integrates seamlessly with popular AI frameworks. Because Aden instruments the underlying LLM SDKs (OpenAI, Anthropic, Gemini), any framework that uses these SDKs is automatically tracked.
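For context, here is a minimal setup sketch. It assumes the SDK exposes an `instrument()` counterpart to the `uninstrument()` helper shown at the end of this page; call it once at startup, before any framework code runs:

```python
# Minimal setup sketch: assumes `instrument()` is the counterpart to the
# `uninstrument()` helper shown later on this page.
from aden import instrument

# Patch the OpenAI / Anthropic / Gemini SDKs once at startup;
# any framework built on top of them is then tracked automatically.
instrument()
```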
PydanticAI is a Python agent framework that uses OpenAI, Anthropic, and other LLM providers under the hood. Aden’s global instrumentation automatically captures all LLM calls made by PydanticAI agents.
```python
from pydantic_ai import Agent

agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a helpful assistant. Keep responses brief.",
)

result = await agent.run("What is the capital of France?")
print(result.output)
# Metrics automatically captured: latency, tokens, model, etc.
```
PydanticAI excels at structured output with Pydantic models:
```python
from pydantic import BaseModel
from pydantic_ai import Agent


class TaskAnalysis(BaseModel):
    task: str
    complexity: str  # low, medium, high
    estimated_steps: int
    required_tools: list[str]
    recommendation: str


agent = Agent(
    "openai:gpt-4o-mini",
    output_type=TaskAnalysis,
    system_prompt="Analyze the given task and provide a structured analysis.",
)

result = await agent.run("Build a REST API for user authentication")
analysis = result.output
print(f"Task: {analysis.task}")
print(f"Complexity: {analysis.complexity}")
print(f"Steps: {analysis.estimated_steps}")
print(f"Tools: {', '.join(analysis.required_tools)}")
```
Tool calls made by PydanticAI agents are also captured in the metrics:

```python
from pydantic_ai import Agent, RunContext

agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a helpful assistant with access to weather data.",
)


@agent.tool
async def get_weather(ctx: RunContext[None], location: str) -> str:
    """Get current weather for a location."""
    # Your weather API implementation
    return f"Weather in {location}: 72°F, Sunny, Humidity: 45%"


@agent.tool
async def get_forecast(ctx: RunContext[None], location: str, days: int = 3) -> str:
    """Get weather forecast for a location."""
    return f"{days}-day forecast for {location}: Mostly sunny."


result = await agent.run("What's the weather like in San Francisco?")
print(result.output)
# Tool calls tracked with tool_call_count and tool_names in metrics
```
PydanticAI supports multiple LLM providers, all tracked by Aden:
```python
from pydantic_ai import Agent

# OpenAI agent
openai_agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a creative writer. Be concise.",
)

# Anthropic agent
anthropic_agent = Agent(
    "anthropic:claude-3-5-haiku-latest",
    system_prompt="You are a technical analyst. Be precise.",
)

# Gemini agent
gemini_agent = Agent(
    "gemini-1.5-flash",
    system_prompt="You are a helpful assistant.",
)

# All calls automatically tracked
creative = await openai_agent.run("Describe a sunset in one sentence.")
technical = await anthropic_agent.run("Explain why the sky appears red during sunset.")
```
Streaming responses are tracked as well:

```python
from pydantic_ai import Agent

agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a storyteller.",
)

async with agent.run_stream("Tell me a very short story about a robot.") as result:
    async for text in result.stream_text(delta=True):
        print(text, end="", flush=True)
# Streaming metrics captured with final token counts
```
Multi-step workflows that chain agents together are linked into a single trace:

```python
from pydantic_ai import Agent

# Research agent
researcher = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a researcher. Gather key facts briefly.",
)

# Writer agent
writer = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a writer. Create content from research notes.",
)

# Step 1: Research
research_result = await researcher.run(
    "Research the key benefits of renewable energy. List 3 bullet points."
)
# → trace_id: "abc123", call_sequence: 1

# Step 2: Write based on research
write_result = await writer.run(
    f"Write a brief paragraph based on these notes:\n{research_result.output}"
)
# → trace_id: "abc123", call_sequence: 2, parent_span_id links to research call

print(write_result.output)
```
Any Python framework that uses the official OpenAI, Anthropic, or Google Generative AI SDKs will automatically work with Aden’s global instrumentation.
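As an illustration, a direct call through the official OpenAI SDK (the same client most frameworks wrap internally) is captured in the same way. The model name and prompt below are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Because Aden patches the OpenAI SDK itself, this call is tracked even
# though no agent framework is involved.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)
```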
To remove Aden's instrumentation, call `uninstrument()` when your application shuts down:

```python
from aden import uninstrument

# On application shutdown
uninstrument()

# Or guarantee cleanup with try/finally
try:
    # Your application code
    pass
finally:
    uninstrument()
```