SDKs

The OpenSearch AI Observability SDKs instrument LLM applications using standard OpenTelemetry. They fill the gap that general-purpose OTel doesn't cover: tracing your own agent logic — the workflows, agents, and tools that sit above raw LLM calls — and submitting evaluation scores back through the same pipeline.

The SDKs are thin wrappers: they do not replace OpenTelemetry, they configure it. Remove a decorator and your code still runs unchanged.

Pipeline setup — one call (register()) creates a TracerProvider, wires up an OTLP exporter, and activates auto-instrumentation for any installed LLM library instrumentors (OpenAI, Anthropic, Bedrock, LangChain, and more).
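Conceptually, `register()` performs the standard OTel wiring order: create a provider, attach an exporter behind a batch processor, then activate instrumentors. The sketch below uses stand-in classes (not the real SDK or OpenTelemetry types) purely to show that wiring shape; the `register()` signature here is an assumption, not the SDK's actual API.

```python
# Illustrative stand-ins for TracerProvider / OTLPSpanExporter /
# BatchSpanProcessor, to show the order in which register() wires them.

class OTLPExporter:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint


class BatchSpanProcessor:
    def __init__(self, exporter):
        self.exporter = exporter


class TracerProvider:
    def __init__(self):
        self.processors = []

    def add_span_processor(self, processor):
        self.processors.append(processor)


def register(endpoint: str, instrumentors=()):
    """Hypothetical register(): one call wires the whole pipeline."""
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPExporter(endpoint)))
    # In the real SDK this step activates any installed LLM library
    # instrumentors (OpenAI, Anthropic, Bedrock, LangChain, ...).
    activated = list(instrumentors)
    return provider, activated


provider, active = register(
    "http://localhost:4318/v1/traces",
    instrumentors=["openai", "langchain"],
)
```

The point of the one-call setup is that application code never touches provider or exporter objects directly; it only adds decorators.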

Application tracing — decorators (Python) or wrapper functions (JavaScript) that produce OTel spans with GenAI semantic convention attributes, covering four span types:

| Type | Use for |
| --- | --- |
| `workflow` | Top-level orchestration — the entry point of a pipeline run |
| `task` | A discrete unit of work inside a workflow |
| `agent` | Autonomous decision-making logic that calls tools or LLMs |
| `tool` | A function invoked by an agent |
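The mechanics behind these decorators can be sketched with a decorator factory that records a span around each call. Everything here is illustrative: the `SPANS` buffer stands in for the exporter, and the `span.type` attribute name is an assumption, not the SDK's actual convention.

```python
import functools

SPANS = []  # stand-in for the span buffer an exporter would drain


def traced(span_type):
    """Illustrative decorator factory: one factory, four span types."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": fn.__name__,
                    "attributes": {"span.type": span_type}}
            try:
                return fn(*args, **kwargs)
            finally:
                # A real SDK ends the OTel span here; we just record it.
                SPANS.append(span)
        return wrapper
    return decorator


workflow = traced("workflow")
agent = traced("agent")
tool = traced("tool")


@tool
def search(query):
    return f"results for {query}"


@workflow
def pipeline(query):
    return search(query)


result = pipeline("opensearch")
```

Note the nesting: the inner `tool` span closes before the outer `workflow` span, which is how the trace tree reconstructs the call hierarchy.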

Evaluation scoring — score() emits evaluation metrics as OTel spans at span, trace, or session level. No separate client or index needed — scores travel through the same OTLP pipeline as traces.
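A score, then, is just one more record in the OTLP stream: a name, a numeric value, and the level it attaches to. The field names below are hypothetical, chosen only to make the idea concrete; they are not the SDK's actual attribute schema.

```python
def make_score_span(name, value, level, target_id):
    """Sketch of what a score() call might put on the wire.

    All attribute names here are illustrative assumptions.
    """
    if level not in ("span", "trace", "session"):
        raise ValueError(f"unknown score level: {level!r}")
    return {
        "span.kind": "evaluation",
        "score.name": name,
        "score.value": value,
        "score.level": level,      # span | trace | session
        "score.target": target_id,  # id of the thing being scored
    }


# Attach a relevance score to a whole trace (id is a made-up example).
score_span = make_score_span("relevance", 0.92, "trace", "trace-abc123")
```

Because the score references its target by id, it can be written long after the original trace was exported, e.g. from an offline evaluation job.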

AWS support — built-in SigV4 signing for OpenSearch Ingestion (OSIS) and OpenSearch Service endpoints.
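SigV4 itself is a short HMAC chain over date, region, and service. The stdlib sketch below shows only the signing-key derivation step from the AWS SigV4 specification, not the SDK's exporter code; the secret key and service name (`es` for OpenSearch Service domains) are example values.

```python
import hashlib
import hmac


def sigv4_signing_key(secret_key: str, date: str, region: str,
                      service: str) -> bytes:
    """Derive a SigV4 signing key per the AWS SigV4 HMAC chain."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode(), date)   # yyyymmdd
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")


# Example values only; not real credentials.
key = sigv4_signing_key("EXAMPLE-SECRET", "20240101", "us-east-1", "es")
```

The SDKs apply the resulting signature to each OTLP export request, so no sidecar or proxy is needed to reach SigV4-protected endpoints.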

```mermaid
flowchart LR
    A["Your Application<br/>@workflow / @agent / @tool<br/>score()"] -->|"OTLP HTTP/gRPC"| B["OTel Collector<br/>or Data Prepper"]
    B --> C["OpenSearch<br/>traces + scores"]
```

The SDK configures a BatchSpanProcessor that exports spans in the background, so your application code never blocks waiting on network I/O.
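The batch-processor pattern is worth seeing once: the application thread only enqueues finished spans, while a daemon thread drains them in batches and hands them to the exporter. This is a minimal stdlib sketch of that pattern, not OpenTelemetry's actual `BatchSpanProcessor` implementation.

```python
import queue
import threading


class BatchProcessor:
    """Sketch of the batch-export pattern: enqueue fast, export elsewhere."""

    def __init__(self, export, batch_size=3):
        self.export = export          # callable that ships one batch
        self.batch_size = batch_size
        self.q = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def on_end(self, span):
        self.q.put(span)              # the app thread never does network I/O

    def _run(self):
        batch = []
        while True:
            span = self.q.get()
            if span is None:          # shutdown sentinel: flush and exit
                if batch:
                    self.export(batch)
                return
            batch.append(span)
            if len(batch) >= self.batch_size:
                self.export(batch)
                batch = []

    def shutdown(self):
        self.q.put(None)
        self.worker.join()


exported = []
proc = BatchProcessor(exported.append, batch_size=2)
for i in range(5):
    proc.on_end(f"span-{i}")
proc.shutdown()
```

The shutdown sentinel matters in practice: without a final flush, spans recorded just before process exit would be silently dropped.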