Version: v1.0.0-rc.1 (pre-release)

Run OpenChoreo on Your Environment

This guide walks you through setting up OpenChoreo on any Kubernetes cluster (k3s, GKE, EKS, DOKS, AKS, or self-managed). You will install each plane one at a time, and after each one you will do something real with it: log in, deploy a service, or trigger a build.

It uses a single-cluster topology (all planes in one cluster). For split-cluster setups, follow Multi-Cluster Connectivity.

All gateways are configured with HTTPS using self-signed certificates by default. You can replace them with certificates from a real CA later.

OpenChoreo has four planes:

  • Control Plane runs the API, console, identity provider, and controllers.
  • Data Plane runs your workloads and routes traffic to them.
  • Workflow Plane builds container images from source using Argo Workflows.
  • Observability Plane collects logs and metrics from all other planes.

What you will get:

  • OpenChoreo running on your Kubernetes cluster with HTTPS
  • A reachable console URL over your cluster LoadBalancer
  • A deployed web app you can open in your browser
  • Optional source-to-image build pipeline and log collection

Prerequisites​

Tool      Version   Purpose
kubectl   v1.32+    Kubernetes CLI
Helm      v3.12+    Package manager

Recommended cluster baseline: Kubernetes 1.32+, LoadBalancer support, and a default StorageClass.

Verify everything is installed:

kubectl version --client
helm version --short
kubectl get nodes
kubectl auth can-i '*' '*' --all-namespaces

Step 1: Install Prerequisites​

These are third-party components that OpenChoreo depends on. None of them are OpenChoreo-specific; they are standard Kubernetes building blocks.

Gateway API CRDs​

The Gateway API is the Kubernetes-native way to manage ingress and routing. OpenChoreo uses it to route traffic to workloads in every plane.

kubectl apply --server-side \
-f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/experimental-install.yaml

cert-manager​

cert-manager automates TLS certificate management. OpenChoreo uses it to issue certificates for internal communication between planes and for gateway TLS.

helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.19.2 \
--set crds.enabled=true \
--wait --timeout 180s

External Secrets Operator​

External Secrets Operator syncs secrets from external providers into Kubernetes. OpenChoreo uses it to pull secrets from a ClusterSecretStore into the right namespaces. For alternative backends, see Secret Management.

helm upgrade --install external-secrets oci://ghcr.io/external-secrets/charts/external-secrets \
--namespace external-secrets \
--create-namespace \
--version 1.3.2 \
--set installCRDs=true \
--wait --timeout 180s

kgateway​

kgateway is the Gateway API implementation that actually handles traffic. It watches for Gateway and HTTPRoute resources across all namespaces, so installing it once is enough. Every plane creates its own Gateway resource in its own namespace, and this single kgateway controller manages all of them.

helm upgrade --install kgateway-crds oci://cr.kgateway.dev/kgateway-dev/charts/kgateway-crds \
--create-namespace --namespace openchoreo-control-plane \
--version v2.2.1
helm upgrade --install kgateway oci://cr.kgateway.dev/kgateway-dev/charts/kgateway \
--namespace openchoreo-control-plane --create-namespace \
--version v2.2.1 \
--set controller.extraEnv.KGW_ENABLE_GATEWAY_API_EXPERIMENTAL_FEATURES=true

OpenBao (Secret Backend)​

OpenChoreo uses External Secrets Operator to manage secrets. All secrets are stored in a ClusterSecretStore named default and synced into the right namespaces using ExternalSecret resources. This guide uses OpenBao (an open-source Vault fork) as the secret backend. You can swap it for any ESO-supported provider by replacing the default ClusterSecretStore. See Secret Management for details.

helm upgrade --install openbao oci://ghcr.io/openbao/charts/openbao \
--namespace openbao \
--create-namespace \
--version 0.25.6 \
--values https://raw.githubusercontent.com/openchoreo/openchoreo/main/install/k3d/common/values-openbao.yaml \
--wait --timeout 300s
Production

For production, provide your own values file with server.dev.enabled=false and configure proper storage and unsealing. See the OpenBao Helm chart docs.
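As a rough illustration, a production-leaning override might look like the sketch below. This assumes Raft-based integrated storage; the key names follow the Vault/OpenBao chart conventions, so verify them against the OpenBao Helm chart docs before use.

```yaml
server:
  dev:
    enabled: false      # never run dev mode in production
  ha:
    enabled: true
    raft:
      enabled: true     # integrated storage; you still need an unseal strategy
```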

The values file runs a postStart script that configures Kubernetes auth, creates reader/writer policies, and seeds the following secrets into the store:

Secret                      Value                           Used By
backstage-backend-secret    local-dev-backend-secret        Backstage session signing
backstage-client-secret     backstage-portal-secret         Backstage OAuth with Thunder
backstage-jenkins-api-key   placeholder-not-in-use          Placeholder
opensearch-username         admin                           OpenSearch access
opensearch-password         ThisIsTheOpenSearchPassword1    OpenSearch access

Create the ClusterSecretStore​

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-secrets-openbao
  namespace: openbao
---
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: default
spec:
  provider:
    vault:
      server: "http://openbao.openbao.svc:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "openchoreo-secret-writer-role"
          serviceAccountRef:
            name: "external-secrets-openbao"
            namespace: "openbao"
EOF

Step 2: Setup TLS​

Create a self-signed CA that all planes will use for TLS. A bootstrap ClusterIssuer generates a CA certificate, then a second ClusterIssuer uses that CA to issue certificates cluster-wide.

kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-bootstrap
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openchoreo-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: openchoreo-ca
  secretName: openchoreo-ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-bootstrap
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: openchoreo-ca
spec:
  ca:
    secretName: openchoreo-ca-secret
EOF

Wait for the CA to be ready:

kubectl wait --for=condition=Ready certificate/openchoreo-ca \
-n cert-manager --timeout=60s
Production TLS

Replace the openchoreo-ca ClusterIssuer with a real CA (e.g. Let's Encrypt via an ACME ClusterIssuer) for trusted certificates. All Certificate resources in this guide reference openchoreo-ca, so swapping the issuer is a single change.

tip

All curl commands in this guide use -k to skip certificate verification since we are using self-signed certs. Browsers will also show a certificate warning that you need to accept.

Step 3: Setup Control Plane​

The control plane is the brain of OpenChoreo. It runs the API server, the web console, the identity provider, and the controllers that reconcile your resources.

First, install a minimal control plane to get a LoadBalancer address. The chart requires valid hostnames upfront, so temporary values are used here and replaced with real ones after discovering the external IP.

helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-control-plane \
--create-namespace \
--values - <<'EOF'
openchoreoApi:
  http:
    hostnames:
      - "api.placeholder.tld"
backstage:
  baseUrl: "https://console.placeholder.tld"
  secretName: backstage-secrets
  http:
    hostnames:
      - "console.placeholder.tld"
security:
  oidc:
    issuer: "https://thunder.placeholder.tld"
gateway:
  tls:
    enabled: false
EOF

Some pods will crash-loop at this point because Thunder and Backstage secrets are not configured yet. That is expected. The only thing needed from this step is the Gateway's LoadBalancer address.

EKS only: make the LoadBalancer internet-facing
kubectl patch svc gateway-default -n openchoreo-control-plane \
-p '{"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-scheme":"internet-facing"}}}'

Wait for the service to get an external address, then resolve it:

kubectl get svc gateway-default -n openchoreo-control-plane -w
CP_LB_IP=$(kubectl get svc gateway-default -n openchoreo-control-plane -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if [ -z "$CP_LB_IP" ]; then
CP_LB_HOSTNAME=$(kubectl get svc gateway-default -n openchoreo-control-plane -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
CP_LB_IP=$(dig +short "$CP_LB_HOSTNAME" | head -1)
fi

export CP_BASE_DOMAIN="openchoreo.${CP_LB_IP//./-}.nip.io"
echo "Control plane base domain: ${CP_BASE_DOMAIN}"
echo " Console: console.${CP_BASE_DOMAIN}"
echo " API: api.${CP_BASE_DOMAIN}"
echo " Thunder: thunder.${CP_BASE_DOMAIN}"

If your cluster returns only a hostname and dig is not available, use nslookup to resolve the IP, or use your own DNS domain instead of nip.io.
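The nip.io derivation above in isolation: the bash expansion `${CP_LB_IP//./-}` swaps every dot for a dash so the IP can be embedded in a resolvable wildcard hostname. A quick sanity check with an illustrative address (203.0.113.10 is a documentation IP, not your real LoadBalancer IP):

```shell
# Illustrative only: 203.0.113.10 stands in for your real LoadBalancer IP.
CP_LB_IP="203.0.113.10"

# Replace all dots with dashes, then wrap in the nip.io wildcard domain.
CP_BASE_DOMAIN="openchoreo.${CP_LB_IP//./-}.nip.io"
echo "$CP_BASE_DOMAIN"   # prints openchoreo.203-0-113-10.nip.io
```

nip.io resolves any `<name>.<dashed-ip>.nip.io` hostname back to that IP, which is why no DNS setup is needed for this guide.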

Create the Control Plane TLS Certificate​

Issue a wildcard certificate covering all control plane subdomains:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cp-gateway-tls
  namespace: openchoreo-control-plane
spec:
  secretName: cp-gateway-tls
  issuerRef:
    name: openchoreo-ca
    kind: ClusterIssuer
  dnsNames:
    - "*.${CP_BASE_DOMAIN}"
    - "${CP_BASE_DOMAIN}"
  privateKey:
    rotationPolicy: Always
EOF
kubectl wait --for=condition=Ready certificate/cp-gateway-tls \
-n openchoreo-control-plane --timeout=60s

Install Thunder (Identity Provider)​

Thunder handles authentication and OAuth flows. The setup job is a pre-install helm hook that bootstraps users, groups, and OAuth applications on the very first helm install. To change these later, uninstall Thunder, delete the PVC, and reinstall.

curl -fsSL https://raw.githubusercontent.com/openchoreo/openchoreo/main/install/k3d/common/values-thunder.yaml \
| sed "s#http://thunder.openchoreo.localhost:8080#https://thunder.${CP_BASE_DOMAIN}#g" \
| sed "s#thunder.openchoreo.localhost#thunder.${CP_BASE_DOMAIN}#g" \
| sed "s#http://openchoreo.localhost:8080#https://console.${CP_BASE_DOMAIN}#g" \
| sed "s#port: 8080#port: 443#g" \
| sed 's#scheme: "http"#scheme: "https"#g' \
| helm upgrade --install thunder oci://ghcr.io/asgardeo/helm-charts/thunder \
--namespace thunder \
--create-namespace \
--version 0.26.0 \
--values -
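What the sed chain is doing, shown on a miniature input: each expression rewrites one of the k3d-local endpoints baked into the upstream values file to the HTTPS nip.io domain. The `redirectUri` line below is a made-up stand-in for the real values file, and the domain is illustrative:

```shell
# Illustrative domain; in the real pipeline CP_BASE_DOMAIN is already exported.
CP_BASE_DOMAIN="openchoreo.203-0-113-10.nip.io"

# Rewrite the k3d-local Thunder endpoint to the HTTPS nip.io domain.
echo 'redirectUri: http://thunder.openchoreo.localhost:8080/callback' \
  | sed "s#http://thunder.openchoreo.localhost:8080#https://thunder.${CP_BASE_DOMAIN}#g"
# prints redirectUri: https://thunder.openchoreo.203-0-113-10.nip.io/callback
```

Using `#` as the sed delimiter avoids having to escape the slashes inside the URLs.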

Wait for Thunder to be ready:

kubectl wait -n thunder \
--for=condition=available --timeout=300s deployment -l app.kubernetes.io/name=thunder

You can browse and modify the Thunder configuration at:

echo "https://thunder.${CP_BASE_DOMAIN}/develop"

Username   Password
admin      admin

Backstage Secrets​

The web console (Backstage) needs a backend secret for session signing and an OAuth client secret to authenticate with Thunder:

kubectl apply -f - <<EOF
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: backstage-secrets
  namespace: openchoreo-control-plane
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: default
  target:
    name: backstage-secrets
  data:
    - secretKey: backend-secret
      remoteRef:
        key: backstage-backend-secret
        property: value
    - secretKey: client-secret
      remoteRef:
        key: backstage-client-secret
        property: value
    - secretKey: jenkins-api-key
      remoteRef:
        key: backstage-jenkins-api-key
        property: value
EOF
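For orientation, this is roughly the plain Kubernetes Secret that ESO materializes from the ExternalSecret above, using the values seeded into OpenBao earlier. It is a sketch for understanding the mapping, not something you apply yourself:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backstage-secrets
  namespace: openchoreo-control-plane
type: Opaque
stringData:
  backend-secret: local-dev-backend-secret
  client-secret: backstage-portal-secret
  jenkins-api-key: placeholder-not-in-use
```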

Configure the Control Plane​

Upgrade with real hostnames, TLS enabled, and JWKS skip-verify for self-signed certs. Backstage also needs NODE_TLS_REJECT_UNAUTHORIZED=0 so its Node.js runtime can reach Thunder's token endpoint through the self-signed gateway.

helm upgrade openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-control-plane \
--reuse-values \
--values - <<EOF
openchoreoApi:
  config:
    server:
      publicUrl: "https://api.${CP_BASE_DOMAIN}"
    security:
      authentication:
        jwt:
          jwks:
            skip_tls_verify: true
  http:
    hostnames:
      - "api.${CP_BASE_DOMAIN}"
backstage:
  secretName: backstage-secrets
  baseUrl: "https://console.${CP_BASE_DOMAIN}"
  http:
    hostnames:
      - "console.${CP_BASE_DOMAIN}"
  auth:
    redirectUrls:
      - "https://console.${CP_BASE_DOMAIN}/api/auth/openchoreo-auth/handler/frame"
  extraEnv:
    - name: NODE_TLS_REJECT_UNAUTHORIZED
      value: "0"
security:
  oidc:
    issuer: "https://thunder.${CP_BASE_DOMAIN}"
    jwksUrl: "https://thunder.${CP_BASE_DOMAIN}/oauth2/jwks"
    authorizationUrl: "https://thunder.${CP_BASE_DOMAIN}/oauth2/authorize"
    tokenUrl: "https://thunder.${CP_BASE_DOMAIN}/oauth2/token"
gateway:
  tls:
    enabled: true
    hostname: "*.${CP_BASE_DOMAIN}"
    certificateRefs:
      - name: cp-gateway-tls
EOF

Update Thunder's HTTPRoute to use the real domain:

helm upgrade thunder oci://ghcr.io/asgardeo/helm-charts/thunder \
--namespace thunder \
--version 0.26.0 \
--reuse-values \
--set "httproute.hostnames[0]=thunder.${CP_BASE_DOMAIN}"

Wait for all deployments to come up:

kubectl wait -n openchoreo-control-plane \
--for=condition=available --timeout=300s deployment --all

Wait for Cluster Gateway CA​

The control plane creates a self-signed CA for internal communication between the cluster-gateway and cluster-agents. Wait for cert-manager to issue it before proceeding. Other planes will copy this CA certificate from the Secret directly.

kubectl wait -n openchoreo-control-plane \
--for=condition=Ready certificate/cluster-gateway-ca --timeout=120s

Verify HTTPS is working:

curl -sk https://thunder.${CP_BASE_DOMAIN}/health/readiness
curl -sk https://api.${CP_BASE_DOMAIN}/health
curl -sk -o /dev/null -w "%{http_code}" https://console.${CP_BASE_DOMAIN}/

Step 4: Install Default Resources​

OpenChoreo needs some base resources before you can deploy anything: a project, environments, component types, and a deployment pipeline.

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/main/samples/getting-started/all.yaml

Label the default namespace as a control plane namespace:

kubectl label namespace default openchoreo.dev/control-plane=true

Step 5: Setup Data Plane​

The data plane is where your workloads actually run. It has its own gateway for routing traffic, and a cluster-agent that connects back to the control plane to receive deployment instructions.

Namespace and Certificates​

Each plane needs a copy of the cluster-gateway CA certificate so its agent can establish a trusted connection to the control plane. The CA is stored in the cluster-gateway-ca Secret created by cert-manager in the control plane namespace.

kubectl create namespace openchoreo-data-plane --dry-run=client -o yaml | kubectl apply -f -

# Copy cluster-gateway CA from the cert-manager Secret so the agent can verify the gateway server
kubectl get secret cluster-gateway-ca -n openchoreo-control-plane \
-o jsonpath='{.data.ca\.crt}' | base64 -d | \
kubectl create configmap cluster-gateway-ca \
--from-file=ca.crt=/dev/stdin \
-n openchoreo-data-plane \
--dry-run=client -o yaml | kubectl apply -f -

Install the Data Plane​

helm upgrade --install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-data-plane \
--create-namespace \
--set gateway.tls.enabled=false
Single-node clusters (k3s, Rancher Desktop, minikube)

On single-node clusters all LoadBalancer services share the same IP. Add --set gateway.httpPort=8080 --set gateway.httpsPort=8443 to avoid port conflicts with the control plane gateway. Update publicHTTPPort / publicHTTPSPort in the ClusterDataPlane registration below to match.

Multi-cluster deployments

When each plane runs in its own cluster, you typically get separate LoadBalancer IPs, so port conflicts are not an issue. However, if you use non-standard ports (e.g., NodePort), set gateway.httpPort and gateway.httpsPort accordingly and make sure publicHTTPPort / publicHTTPSPort in the ClusterDataPlane registration match the externally reachable ports. See Multi-Cluster Connectivity for the full setup.

EKS only: make the LoadBalancer internet-facing
kubectl patch svc gateway-default -n openchoreo-data-plane \
-p '{"metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-scheme":"internet-facing"}}}'

Get the data plane IP and derive a domain:

kubectl get svc gateway-default -n openchoreo-data-plane -w
DP_LB_IP=$(kubectl get svc gateway-default -n openchoreo-data-plane -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if [ -z "$DP_LB_IP" ]; then
DP_LB_HOSTNAME=$(kubectl get svc gateway-default -n openchoreo-data-plane -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
DP_LB_IP=$(dig +short "$DP_LB_HOSTNAME" | head -1)
fi

export DP_DOMAIN="apps.openchoreo.${DP_LB_IP//./-}.nip.io"
echo "Data plane domain: *.${DP_DOMAIN}"

Create the Data Plane TLS Certificate​

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dp-gateway-tls
  namespace: openchoreo-data-plane
spec:
  secretName: dp-gateway-tls
  issuerRef:
    name: openchoreo-ca
    kind: ClusterIssuer
  dnsNames:
    - "*.${DP_DOMAIN}"
    - "${DP_DOMAIN}"
  privateKey:
    rotationPolicy: Always
EOF

kubectl wait --for=condition=Ready certificate/dp-gateway-tls \
-n openchoreo-data-plane --timeout=60s

Enable TLS on the Data Plane​

helm upgrade openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-data-plane \
--reuse-values \
--values - <<EOF
gateway:
  tls:
    enabled: true
    hostname: "*.${DP_DOMAIN}"
    certificateRefs:
      - name: dp-gateway-tls
EOF

Register the Data Plane​

The ClusterDataPlane resource tells the control plane about this data plane. It includes the agent's CA certificate (so the control plane trusts its WebSocket connection) and the gateway's public address (so the control plane knows how to route traffic to workloads). As a cluster-scoped resource, it is visible to all namespaces.

Multi-cluster deployments

When clusterAgent.tls.generateCerts=true is set (as in this guide), the agent creates its own self-signed CA stored in the cluster-agent-tls Secret. The cluster-gateway-ca Secret copied earlier is only used by the agent to trust the control plane's cluster-gateway server. The ClusterDataPlane registration must reference the agent's own CA so the cluster-gateway can verify incoming agent connections. See Multi-Cluster Connectivity for details.

AGENT_CA=$(kubectl get secret cluster-agent-tls \
-n openchoreo-data-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ClusterDataPlane
metadata:
  name: default
spec:
  planeID: default
  clusterAgent:
    clientCA:
      value: |
$(echo "$AGENT_CA" | sed 's/^/        /')
  secretStoreRef:
    name: default
  gateway:
    ingress:
      external:
        http:
          host: ${DP_DOMAIN}
          listenerName: http
          port: 80
    name: gateway-default
    namespace: openchoreo-data-plane
EOF
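The `sed` inside the command substitution above exists purely to satisfy YAML: every line of the multi-line PEM must be indented further than the `value:` key, or the manifest is invalid. In isolation, with a fake two-line certificate standing in for the real chain:

```shell
# A stand-in, two-line "certificate"; the real AGENT_CA is a full PEM chain.
AGENT_CA='-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----'

# Prefix every line with spaces so the PEM nests under "value: |".
# The exact count (eight here) just has to exceed the indentation of "value:".
echo "$AGENT_CA" | sed 's/^/        /'
```

Without this shift, the second PEM line would start at column zero and YAML parsing would fail at apply time.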

Try it: Log In and Deploy​

Open the OpenChoreo console in your browser:

echo "https://console.${CP_BASE_DOMAIN}"

Username               Password
admin@openchoreo.dev   Admin@123

You should see the OpenChoreo console. Deploy a sample web app:

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/main/samples/from-image/react-starter-web-app/react-starter.yaml
kubectl wait --for=condition=available deployment \
-l openchoreo.dev/component=react-starter -A --timeout=180s

HOSTNAME=$(kubectl get httproute -A -l openchoreo.dev/component=react-starter \
-o jsonpath='{.items[0].spec.hostnames[0]}')
DP_HTTPS_PORT=$(kubectl get gateway gateway-default -n openchoreo-data-plane \
-o jsonpath='{.spec.listeners[?(@.protocol=="HTTPS")].port}')
PORT_SUFFIX=$([ "$DP_HTTPS_PORT" = "443" ] && echo "" || echo ":${DP_HTTPS_PORT}")

echo "https://${HOSTNAME}${PORT_SUFFIX}"

Open that URL in your browser (accept the self-signed certificate warning). You should see the React starter application running.
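The `PORT_SUFFIX` expression above, shown in isolation: it appends `:<port>` to the URL only when the gateway's HTTPS listener is on a non-standard port (for example, after the single-node `--set gateway.httpsPort=8443` override). The hostname below is illustrative:

```shell
# Standard port 443: no suffix needed, browsers imply it.
DP_HTTPS_PORT="443"
PORT_SUFFIX=$([ "$DP_HTTPS_PORT" = "443" ] && echo "" || echo ":${DP_HTTPS_PORT}")
echo "https://my-app.example.com${PORT_SUFFIX}"   # prints https://my-app.example.com

# Non-standard port: the suffix is appended explicitly.
DP_HTTPS_PORT="8443"
PORT_SUFFIX=$([ "$DP_HTTPS_PORT" = "443" ] && echo "" || echo ":${DP_HTTPS_PORT}")
echo "https://my-app.example.com${PORT_SUFFIX}"   # prints https://my-app.example.com:8443
```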

Step 6: Setup Workflow Plane (Optional)​

The workflow plane takes source code, builds a container image, pushes it to a registry, and tells the control plane about the new image. It uses Argo Workflows to run build pipelines.

Namespace and Certificates​

Same process as the data plane. Copy the cluster-gateway CA from the cert-manager Secret so the workflow plane's agent can connect to the control plane:

kubectl create namespace openchoreo-workflow-plane --dry-run=client -o yaml | kubectl apply -f -

# Copy cluster-gateway CA from the cert-manager Secret so the agent can verify the gateway server
kubectl get secret cluster-gateway-ca -n openchoreo-control-plane \
-o jsonpath='{.data.ca\.crt}' | base64 -d | \
kubectl create configmap cluster-gateway-ca \
--from-file=ca.crt=/dev/stdin \
-n openchoreo-workflow-plane \
--dry-run=client -o yaml | kubectl apply -f -

Install the Workflow Plane​

helm upgrade --install openchoreo-workflow-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-workflow-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-workflow-plane \
--create-namespace \
--set clusterAgent.tls.generateCerts=true

Install Workflow Templates​

Build pipelines are defined as ClusterWorkflowTemplates. Each build workflow (docker, react, etc.) is composed from smaller shared templates, so you can swap out individual pieces without touching the rest.

Start with the checkout step. This controls how source code is cloned. If your repos live behind a corporate proxy or need a custom auth flow, this is the template you would edit.

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/main/samples/getting-started/workflow-templates/checkout-source.yaml

Next, install the build coordinator templates (docker, react, ballerina-buildpack, google-cloud-buildpacks) and the generate-workload step:

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/main/samples/getting-started/workflow-templates.yaml

Finally, install the publish step. This controls where built images get pushed. The default below uses ttl.sh, an anonymous ephemeral registry where images expire automatically.

TTL.sh

Images on ttl.sh expire after 24 hours. For production, use your cloud provider's registry (ECR, GAR, ACR). See Container Registry Configuration.

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/main/samples/getting-started/workflow-templates/publish-image.yaml

Register the Workflow Plane​

The workflow plane references the same default ClusterSecretStore created in Step 1.

AGENT_CA=$(kubectl get secret cluster-agent-tls \
-n openchoreo-workflow-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ClusterWorkflowPlane
metadata:
  name: default
spec:
  planeID: default
  clusterAgent:
    clientCA:
      value: |
$(echo "$AGENT_CA" | sed 's/^/        /')
  secretStoreRef:
    name: default
EOF

Try it: Build from Source​

Apply a sample component that builds a Go service from source:

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/main/samples/from-source/services/go-docker-greeter/greeting-service.yaml

Watch the build progress:

kubectl get workflow -n workflows-default --watch

After the build completes, wait for the deployment:

kubectl wait --for=condition=available deployment \
-l openchoreo.dev/component=greeting-service -A --timeout=300s

Resolve the hostname and call the service:

HOSTNAME=$(kubectl get httproute -A -l openchoreo.dev/component=greeting-service \
-o jsonpath='{.items[0].spec.hostnames[0]}')
PATH_PREFIX=$(kubectl get httproute -A -l openchoreo.dev/component=greeting-service \
-o jsonpath='{.items[0].spec.rules[0].matches[0].path.value}')
DP_HTTPS_PORT=$(kubectl get gateway gateway-default -n openchoreo-data-plane \
-o jsonpath='{.spec.listeners[?(@.protocol=="HTTPS")].port}')
PORT_SUFFIX=$([ "$DP_HTTPS_PORT" = "443" ] && echo "" || echo ":${DP_HTTPS_PORT}")

curl -k "https://${HOSTNAME}${PORT_SUFFIX}${PATH_PREFIX}/greeter/greet"

OpenChoreo built your code, pushed the image to ttl.sh, and deployed it to the data plane.

k3s / Rancher Desktop

On k3s with the Docker runtime, the generate-workload-cr build step may fail with crun: the requested cgroup controller 'pids' is not available. This happens because the step runs podman inside the workflow pod and Docker does not delegate all cgroup v2 controllers. The image build and push still succeed. You can work around this by manually creating the Workload CR using the workload descriptor from the source repo and referencing the built image. Switching Rancher Desktop to the containerd runtime avoids this issue.

Step 7: Setup Observability Plane (Optional)​

OpenChoreo follows a modular architecture: the observability plane consists of core system services plus optional modules, each providing a specific capability. For example, to collect and search logs, you install a logs module. This guide installs the observability plane with OpenSearch-based logs and tracing modules and a Prometheus-based metrics module.

Namespace and Certificates​

kubectl create namespace openchoreo-observability-plane --dry-run=client -o yaml | kubectl apply -f -

# Copy cluster-gateway CA from the cert-manager Secret so the agent can verify the gateway server
kubectl get secret cluster-gateway-ca -n openchoreo-control-plane \
-o jsonpath='{.data.ca\.crt}' | base64 -d | \
kubectl create configmap cluster-gateway-ca \
--from-file=ca.crt=/dev/stdin \
-n openchoreo-observability-plane \
--dry-run=client -o yaml | kubectl apply -f -

Observability Plane Secrets​

The observability plane needs two secrets:

  • opensearch-admin-credentials β€” used by the OpenSearch cluster setup job (username, password keys).
  • observer-secret β€” injected via envFrom into the Observer deployment. Required keys: OPENSEARCH_USERNAME, OPENSEARCH_PASSWORD, UID_RESOLVER_OAUTH_CLIENT_SECRET.
kubectl apply -f - <<EOF
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: opensearch-admin-credentials
  namespace: openchoreo-observability-plane
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: default
  target:
    name: opensearch-admin-credentials
  data:
    - secretKey: username
      remoteRef:
        key: opensearch-username
        property: value
    - secretKey: password
      remoteRef:
        key: opensearch-password
        property: value
---
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: observer-secret
  namespace: openchoreo-observability-plane
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: default
  target:
    name: observer-secret
  data:
    - secretKey: OPENSEARCH_USERNAME
      remoteRef:
        key: opensearch-username
        property: value
    - secretKey: OPENSEARCH_PASSWORD
      remoteRef:
        key: opensearch-password
        property: value
    - secretKey: UID_RESOLVER_OAUTH_CLIENT_SECRET
      remoteRef:
        key: observer-oauth-client-secret
        property: value
EOF

kubectl wait -n openchoreo-observability-plane \
--for=condition=Ready externalsecret/opensearch-admin-credentials \
externalsecret/observer-secret --timeout=60s

Install the Observability Plane​

OpenChoreo uses a modular observability plane. In this step you will:

  • Install the observability plane core services in the openchoreo-observability-plane namespace.
  • Install the logs module (observability-logs-opensearch) to collect and search logs in OpenSearch.
  • Install the metrics module (observability-metrics-prometheus) for Prometheus-compatible metrics collection.
  • Install the traces module (observability-tracing-opensearch) for distributed tracing on OpenSearch.

Install the observability plane core​

helm upgrade --install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-observability-plane \
--create-namespace \
--timeout 25m \
--values - <<EOF
observer:
  openSearchSecretName: opensearch-admin-credentials
  secretName: observer-secret
gateway:
  tls:
    enabled: false
EOF
Single-node clusters (k3s, Rancher Desktop, minikube)

On single-node clusters all LoadBalancer services share the same IP. Add --set gateway.httpPort=9080 --set gateway.httpsPort=9443 to avoid port conflicts with the control plane and data plane gateways.

Install the logs module (OpenSearch)​

helm upgrade --install observability-logs-opensearch \
oci://ghcr.io/openchoreo/helm-charts/observability-logs-opensearch \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 0.3.8 \
--set openSearchSetup.openSearchSecretName="opensearch-admin-credentials"

Install the metrics module (Prometheus)​

helm upgrade --install observability-metrics-prometheus \
oci://ghcr.io/openchoreo/helm-charts/observability-metrics-prometheus \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 0.2.4

Install the traces module (OpenSearch)​

helm upgrade --install observability-traces-opensearch \
oci://ghcr.io/openchoreo/helm-charts/observability-tracing-opensearch \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 0.3.7 \
--set openSearch.enabled=false \
--set openSearchSetup.openSearchSecretName="opensearch-admin-credentials"

Create the Observability Plane TLS Certificate​

Get the observer gateway IP and create a certificate:

OBS_LB_IP=$(kubectl get svc gateway-default -n openchoreo-observability-plane -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if [ -z "$OBS_LB_IP" ]; then
OBS_LB_HOSTNAME=$(kubectl get svc gateway-default -n openchoreo-observability-plane -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
OBS_LB_IP=$(dig +short "$OBS_LB_HOSTNAME" | head -1)
fi

export OBS_BASE_DOMAIN="openchoreo.observability.${OBS_LB_IP//./-}.nip.io"
export OBS_DOMAIN="observer.${OBS_BASE_DOMAIN}"
echo "Observer domain: ${OBS_DOMAIN}"
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: obs-gateway-tls
  namespace: openchoreo-observability-plane
spec:
  secretName: obs-gateway-tls
  issuerRef:
    name: openchoreo-ca
    kind: ClusterIssuer
  dnsNames:
    - "*.${OBS_BASE_DOMAIN}"
    - "${OBS_DOMAIN}"
  privateKey:
    rotationPolicy: Always
EOF

kubectl wait --for=condition=Ready certificate/obs-gateway-tls \
-n openchoreo-observability-plane --timeout=60s

Configure the Observability Plane to use the newly created certificate​

helm upgrade openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 1.0.0-rc.1 \
--namespace openchoreo-observability-plane \
--reuse-values \
--timeout 10m \
--values - <<EOF
observer:
  openSearchSecretName: opensearch-admin-credentials
  secretName: observer-secret
  controlPlaneApiUrl: "https://api.${CP_BASE_DOMAIN}"
  http:
    hostnames:
      - "observer.${OBS_BASE_DOMAIN}"
  cors:
    allowedOrigins:
      - "https://console.${CP_BASE_DOMAIN}"
  authzTlsInsecureSkipVerify: true
security:
  oidc:
    issuer: "https://thunder.${CP_BASE_DOMAIN}"
    jwksUrl: "https://thunder.${CP_BASE_DOMAIN}/oauth2/jwks"
    tokenUrl: "https://thunder.${CP_BASE_DOMAIN}/oauth2/token"
    jwksUrlTlsInsecureSkipVerify: "true"
    uidResolverTlsInsecureSkipVerify: "true"
gateway:
  tls:
    enabled: true
    hostname: "*.${OBS_BASE_DOMAIN}"
    certificateRefs:
      - name: obs-gateway-tls
EOF

Register the Observability Plane​

AGENT_CA=$(kubectl get secret cluster-agent-tls \
-n openchoreo-observability-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

OBS_HTTPS_PORT=$(kubectl get gateway gateway-default -n openchoreo-observability-plane \
-o jsonpath='{.spec.listeners[?(@.protocol=="HTTPS")].port}')
OBS_PORT_SUFFIX=$([ "$OBS_HTTPS_PORT" = "443" ] && echo "" || echo ":${OBS_HTTPS_PORT}")

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ClusterObservabilityPlane
metadata:
  name: default
spec:
  planeID: default
  clusterAgent:
    clientCA:
      value: |
$(echo "$AGENT_CA" | sed 's/^/        /')
  observerURL: https://${OBS_DOMAIN}${OBS_PORT_SUFFIX}
EOF
Single-cluster vs multi-cluster

The observerURL uses the gateway-exposed HTTPS URL so it works in both single-cluster and multi-cluster topologies, since it is accessed by clients outside the cluster.

Tell the data plane (and workflow plane, if installed) where to send their telemetry:

kubectl patch clusterdataplane default --type merge \
-p '{"spec":{"observabilityPlaneRef":{"kind":"ClusterObservabilityPlane","name":"default"}}}'

# If you installed the workflow plane:
kubectl patch clusterworkflowplane default --type merge \
-p '{"spec":{"observabilityPlaneRef":{"kind":"ClusterObservabilityPlane","name":"default"}}}'
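After the patch, each plane's spec carries the reference. A sketch of the merged result on the ClusterDataPlane:

```yaml
spec:
  observabilityPlaneRef:
    kind: ClusterObservabilityPlane
    name: default
```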

Enable Fluent Bit for log collection

helm upgrade -n openchoreo-observability-plane observability-logs-opensearch \
oci://ghcr.io/openchoreo/helm-charts/observability-logs-opensearch \
--version 0.3.8 \
--reuse-values \
--set fluent-bit.enabled=true

Verify the observer is reachable:

OBS_HTTPS_PORT=$(kubectl get gateway gateway-default -n openchoreo-observability-plane \
-o jsonpath='{.spec.listeners[?(@.protocol=="HTTPS")].port}')
OBS_PORT_SUFFIX=$([ "$OBS_HTTPS_PORT" = "443" ] && echo "" || echo ":${OBS_HTTPS_PORT}")

curl -sk "https://${OBS_DOMAIN}${OBS_PORT_SUFFIX}/health"
Self-signed certificates

The observer runs on a different domain than the console. Your browser needs to accept the self-signed certificate for the observer domain separately. Open the observer URL in your browser and accept the certificate warning, otherwise the console's observability features (logs, metrics, traces) will fail with a certificate error.

echo "https://${OBS_DOMAIN}${OBS_PORT_SUFFIX}/health"

Production Configuration​

This guide gets all planes running on a real cluster with self-signed TLS. For production hardening, see:

Cleanup​

Delete plane registrations:

kubectl delete clusterdataplane default 2>/dev/null
kubectl delete clusterworkflowplane default 2>/dev/null
kubectl delete clusterobservabilityplane default 2>/dev/null

Uninstall OpenChoreo planes and prerequisites:

helm uninstall openchoreo-observability-plane -n openchoreo-observability-plane 2>/dev/null
helm uninstall openchoreo-workflow-plane -n openchoreo-workflow-plane 2>/dev/null
helm uninstall openchoreo-data-plane -n openchoreo-data-plane 2>/dev/null
helm uninstall openchoreo-control-plane -n openchoreo-control-plane 2>/dev/null
helm uninstall thunder -n thunder 2>/dev/null
helm uninstall kgateway -n openchoreo-control-plane 2>/dev/null
helm uninstall kgateway-crds 2>/dev/null
helm uninstall external-secrets -n external-secrets 2>/dev/null
helm uninstall cert-manager -n cert-manager 2>/dev/null

Delete namespaces:

kubectl delete namespace \
openchoreo-control-plane \
thunder \
openchoreo-data-plane \
openchoreo-workflow-plane \
openchoreo-observability-plane \
external-secrets \
cert-manager 2>/dev/null

Next Steps​