
Send Data

The OpenSearch Observability Stack ingests telemetry through a standards-based pipeline. Applications emit traces, metrics, and logs using OpenTelemetry, which are collected, processed, and routed to their respective storage backends.

This section covers every layer of the ingestion pipeline, from instrumenting your code to configuring the collectors and processors that deliver data to OpenSearch and Prometheus.

The following diagram shows the end-to-end data flow from your applications to the storage and query layer:

```mermaid
flowchart LR
    A["Application<br/>(OTel SDK)"] -->|"OTLP gRPC :4317<br/>OTLP HTTP :4318"| B["OTel Collector"]
    B -->|"OTLP :21890"| C["Data Prepper"]
    B -->|"OTLP HTTP :9090"| D["Prometheus"]
    C --> E["OpenSearch"]
    D --> F["OpenSearch<br/>Dashboards"]
    E --> F
```

Traces and logs flow through the OTel Collector into Data Prepper, which processes and indexes them in OpenSearch. Metrics are forwarded from the OTel Collector to Prometheus via OTLP HTTP, where they are stored and queried by OpenSearch Dashboards.
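This routing can be expressed in an OTel Collector configuration along the following lines. This is a sketch, not a drop-in config: the hostnames `data-prepper` and `prometheus` assume a shared container network, and TLS is disabled for brevity.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # Traces and logs go to Data Prepper over OTLP gRPC
  otlp/dataprepper:
    endpoint: data-prepper:21890
    tls:
      insecure: true
  # Metrics go to Prometheus's native OTLP HTTP receiver
  otlphttp/prometheus:
    endpoint: http://prometheus:9090/api/v1/otlp

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/dataprepper]
    logs:
      receivers: [otlp]
      exporters: [otlp/dataprepper]
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/prometheus]
```

Note that Prometheus must be started with OTLP ingestion enabled (`--web.enable-otlp-receiver` in recent releases) for the metrics pipeline to work.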

| Protocol | Port | Transport | Use Case |
| --- | --- | --- | --- |
| OTLP gRPC | 4317 | HTTP/2 + Protobuf | SDK default, highest throughput |
| OTLP HTTP | 4318 | HTTP/1.1 + Protobuf or JSON | Browser, serverless, firewall-restricted environments |

Both endpoints accept traces, metrics, and logs. CORS is enabled on the HTTP endpoint for browser-based instrumentation.
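Because the HTTP endpoint accepts JSON, the wire format is easy to inspect without any SDK. The sketch below builds a minimal OTLP/JSON trace payload using only the Python standard library; the span name and service name are made-up examples.

```python
import json
import os
import time

# Minimal OTLP/JSON trace payload, as POSTed to http://localhost:4318/v1/traces.
# Trace/span IDs are lowercase hex strings (16 and 8 bytes respectively);
# timestamps are stringified Unix nanoseconds.
now = time.time_ns()
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "my-service"}}
        ]},
        "scopeSpans": [{
            "scope": {"name": "manual-example"},
            "spans": [{
                "traceId": os.urandom(16).hex(),
                "spanId": os.urandom(8).hex(),
                "name": "GET /checkout",
                "kind": 2,  # SPAN_KIND_SERVER
                "startTimeUnixNano": str(now - 5_000_000),  # started 5 ms ago
                "endTimeUnixNano": str(now),
            }],
        }],
    }]
}
body = json.dumps(payload)
```

Sending `body` with `Content-Type: application/json` to `/v1/traces` on port 4318 (for example with `curl`) should produce a span visible in Trace Analytics once the pipeline is running.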

| Component | Port | Protocol | Description |
| --- | --- | --- | --- |
| OTel Collector (gRPC) | 4317 | OTLP gRPC | Primary telemetry ingestion |
| OTel Collector (HTTP) | 4318 | OTLP HTTP | HTTP telemetry ingestion |
| OTel Collector (metrics) | 8888 | Prometheus scrape | Collector self-monitoring |
| Data Prepper | 21890 | OTLP gRPC | Trace and log processing |
| Prometheus | 9090 | OTLP HTTP | Metrics storage |

Core instrumentation framework. Learn about the OTel Collector configuration, auto-instrumentation for zero-code setup, manual instrumentation for custom telemetry, and sampling strategies for controlling data volume.
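To illustrate what a sampling strategy decides, the core idea behind OTel's `TraceIdRatioBased` head sampler can be sketched in a few lines. This is a simplification of the concept, not the SDK implementation:

```python
def should_sample(trace_id_hex: str, ratio: float) -> bool:
    """Deterministic head-sampling decision: keep the trace when the low
    8 bytes of its trace ID fall below ratio * 2**64. Because every span
    in a trace shares the trace ID, the decision is consistent across
    all services that apply the same ratio."""
    bound = round(ratio * (1 << 64))
    low64 = int(trace_id_hex, 16) & ((1 << 64) - 1)
    return low64 < bound

# At ratio=0.5, roughly half of all trace IDs pass the check.
print(should_sample("0" * 32, 0.5))  # True: low bits are 0, below the bound
```

Tail-based strategies, by contrast, defer the keep/drop decision until the whole trace has been collected, which is why they run in the Collector or Data Prepper rather than in the SDK.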

Language-specific guides for instrumenting your services. Covers Python, Java, Node.js, Go, .NET, and browser applications, plus dedicated guidance for AI/LLM agent observability.

Configure the backend processing layer. Covers Data Prepper pipelines for trace and log processing, Prometheus for metrics storage, and index management in OpenSearch.
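As a sketch of the processing layer, a Data Prepper trace pipeline that listens on port 21890 and indexes into OpenSearch looks roughly like the following. Hostnames and credentials are placeholders, and processor names should be checked against your Data Prepper version:

```yaml
otel-trace-pipeline:
  source:
    otel_trace_source:
      port: 21890
      ssl: false
  processor:
    - otel_traces:          # normalizes OTLP spans for trace analytics
  sink:
    - opensearch:
        hosts: ["https://opensearch:9200"]
        username: admin      # placeholder credentials
        password: admin
        index_type: trace-analytics-raw
```

The `trace-analytics-raw` index type tells the sink to use the index naming and mappings that the Trace Analytics views in OpenSearch Dashboards expect.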

Collect telemetry from your infrastructure. Covers host metrics, container monitoring, Kubernetes observability, and cloud provider integrations.

To start sending data from any application, set two environment variables and run with auto-instrumentation:

```sh
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_SERVICE_NAME="my-service"
```

Then follow the language-specific guide in the Applications section, or jump straight to Auto-Instrumentation for zero-code setup.
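For reference, the endpoint variable is resolved per signal: gRPC exporters (port 4317) use it as-is, while OTLP/HTTP exporters append a signal-specific path unless a signal-specific variable overrides it. The sketch below mimics that resolution logic from the OTLP exporter spec using only the standard library; it is an illustration, not the SDK code:

```python
import os

def signal_endpoint(signal: str) -> str:
    """Resolve the OTLP/HTTP URL for a signal the way SDK exporters do:
    a signal-specific variable (e.g. OTEL_EXPORTER_OTLP_TRACES_ENDPOINT)
    is used verbatim; otherwise '/v1/<signal>' is appended to the generic
    OTEL_EXPORTER_OTLP_ENDPOINT (sketch of the OTLP env-var spec)."""
    specific = os.environ.get(f"OTEL_EXPORTER_OTLP_{signal.upper()}_ENDPOINT")
    if specific:
        return specific
    base = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318")
    return base.rstrip("/") + f"/v1/{signal}"

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"
print(signal_endpoint("traces"))  # http://localhost:4318/v1/traces
```

This is why the quick-start gRPC endpoint above has no path component: only the HTTP transport uses per-signal URL paths.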

  • Get Started — Platform overview and sandbox setup
  • Investigate — Query and explore ingested data
  • APM — Application performance monitoring views