
Kubernetes (Helm)

Deploy the Observability Stack to any Kubernetes cluster using the Helm umbrella chart. This creates the same observability platform as the local Docker Compose stack — OpenSearch, OpenSearch Dashboards, Data Prepper, OTel Collector, and Prometheus — as Kubernetes workloads.

You'll need:

  • A Kubernetes cluster (1.26+) — kind, EKS, GKE, AKS, etc.
  • Helm v3.12+
  • kubectl configured for your cluster
Clone the repository and install the chart:

```sh
git clone https://github.com/opensearch-project/observability-stack.git
cd observability-stack
helm install obs charts/observability-stack
```

This deploys all components with sensible defaults. The stack is ready when all pods are running:

```sh
kubectl get pods
```

Once the pods are up, forward the Dashboards service to your machine:

```sh
kubectl port-forward svc/obs-opensearch-dashboards 5601:5601
```
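If you'd rather block than poll, kubectl wait can pause until the release's pods report Ready — a sketch assuming the chart applies the standard app.kubernetes.io/instance label:

```sh
# Wait up to 5 minutes for every pod in the release to become Ready
# (assumes pods carry the standard Helm label app.kubernetes.io/instance=obs)
kubectl wait pods -l app.kubernetes.io/instance=obs \
  --for=condition=Ready --timeout=300s
```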

Open http://localhost:5601. Default credentials: admin / My_password_123!@#

Services running inside the cluster can send telemetry directly to the OTel Collector:

| Protocol | Endpoint                          |
| -------- | --------------------------------- |
| gRPC     | obs-opentelemetry-collector:4317  |
| HTTP     | obs-opentelemetry-collector:4318  |

Set the environment variable in your pod spec or deployment:

```yaml
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://obs-opentelemetry-collector:4317
```
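In context, the variable sits in the container spec of a workload. A minimal sketch, assuming a hypothetical my-service image and name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service               # hypothetical example workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:latest   # hypothetical image
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://obs-opentelemetry-collector:4317
```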

For sending telemetry from outside the cluster (e.g. local development), port-forward the OTLP endpoints:

```sh
kubectl port-forward svc/obs-opentelemetry-collector 4317:4317 4318:4318
```

Then point your application at the forwarded endpoint:

```sh
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```
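To sanity-check the forwarded HTTP endpoint, an empty OTLP/HTTP request should draw a response from the Collector (/v1/traces is the standard OTLP/HTTP traces route):

```sh
# An empty JSON body is enough to confirm the Collector is answering;
# expect an HTTP status code rather than a connection error
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST http://localhost:4318/v1/traces \
  -H 'Content-Type: application/json' -d '{}'
```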

Key values in charts/observability-stack/values.yaml:

| Value                      | Default            | Description                                           |
| -------------------------- | ------------------ | ----------------------------------------------------- |
| opensearchUsername         | admin              | OpenSearch admin username                             |
| opensearchPassword         | My_password_123!@# | OpenSearch admin password                             |
| examples.enabled           | true               | Deploy example agents that generate sample telemetry  |
| opentelemetry-demo.enabled | false              | Deploy the OTel Demo microservices app                |
| ismRetentionDays           | 7                  | Days to retain trace/log indices (0 = rollover only)  |
| gateway.enabled            | false              | Enable Gateway API ingress                            |
| opensearchExporter.enabled | true               | Deploy Prometheus exporter for OpenSearch metrics     |

Override values at install time:

```sh
helm install obs charts/observability-stack \
  --set opensearchPassword="YourSecurePassword123!" \
  --set examples.enabled=false
```
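For more than a couple of overrides, a values file is easier to maintain. The keys below mirror the --set flags above (the file name is arbitrary):

```yaml
# my-values.yaml (hypothetical file name)
opensearchPassword: "YourSecurePassword123!"
examples:
  enabled: false
```

Then install with helm install obs charts/observability-stack -f my-values.yaml.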

All components read credentials from a single Kubernetes Secret (opensearch-credentials), sourced from opensearchUsername and opensearchPassword in values.yaml; no passwords are hardcoded in sub-chart configurations. Pods read the Secret as environment variables at startup, so changing the password takes two steps: update OpenSearch's internal security index via the Security API, then run helm upgrade with the new value to update the Secret and restart the pods.
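The two-step rotation might look like this — a sketch assuming port-forwarded access to OpenSearch on localhost:9200 and the default admin user:

```sh
# 1) Update the password in OpenSearch's internal security index
curl -k -u admin:'My_password_123!@#' -X PATCH \
  https://localhost:9200/_plugins/_security/api/internalusers/admin \
  -H 'Content-Type: application/json' \
  -d '[{"op": "replace", "path": "/password", "value": "YourSecurePassword123!"}]'

# 2) Update the Secret and restart pods via Helm
helm upgrade obs charts/observability-stack \
  --set opensearchPassword='YourSecurePassword123!'
```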

For local development (kind):

```sh
helm install obs charts/observability-stack \
  --set opensearch.singleNode=true \
  --set opensearch.replicas=1 \
  --set opensearch.resources.requests.memory=1Gi \
  --set opensearch.resources.limits.memory=1Gi \
  --set opensearch.opensearchJavaOpts="-Xms512m -Xmx512m" \
  --set opensearch.persistence.size=2Gi \
  --set data-prepper.resources.requests.memory=512Mi \
  --set data-prepper.resources.limits.memory=512Mi
```
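These flags assume a cluster already exists; if not, kind can create a throwaway one first (the cluster name is arbitrary):

```sh
kind create cluster --name obs-dev
```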

For production sizing, dedicated node roles, and advanced cluster topologies, see the chart README sizing guide and the official Tuning your cluster documentation.

For public-facing demos where login isn’t needed:

```sh
helm install obs charts/observability-stack \
  -f charts/observability-stack/values-anonymous-auth.yaml
```

A Terraform module for deploying to AWS EKS is included at terraform/aws/. A single terraform apply provisions:

  • VPC with public/private subnets
  • EKS cluster with managed node groups
  • ALB with TLS termination via ACM certificate, routing to OpenSearch Dashboards
  • WAF rules for web application protection
  • Route 53 DNS records pointing to ALB
  • Helm release of the full Observability Stack
  • Optional anonymous authentication for public demos
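The apply itself follows the usual Terraform flow; exact input variables depend on the module:

```sh
cd terraform/aws
terraform init
terraform apply    # review the plan before confirming
```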

See the EKS Deployment Guide for the step-by-step checklist, including S3 state backend setup, multi-region support, verification steps, troubleshooting, and cost estimates.

```sh
# Keep data (PVCs)
helm uninstall obs

# Remove everything including data
helm uninstall obs
kubectl delete pvc -l app.kubernetes.io/part-of=observability-stack
```