Version: v0.14.x

On Managed Kubernetes

Try OpenChoreo on managed Kubernetes services (GKE, EKS, AKS, etc.) with automatic TLS certificates. This guide uses nip.io for free wildcard DNS based on your LoadBalancer IP.

nip.io + Let's Encrypt limits

nip.io is convenient for quick tests, but Let's Encrypt rate limits are common on shared/public test domains. If certificate issuance fails with rate-limit errors, switch to your own domain and a DNS-01 ClusterIssuer.

What you'll get:

  • Control plane with Let's Encrypt TLS
  • Data plane over HTTPS (self-signed on nip.io)
  • Single cluster deployment
  • Access via *.nip.io domains
  • ~30 minutes to complete

Prerequisites

  • Kubernetes 1.32+ cluster with at least 3 nodes (4 CPU, 8GB RAM each)
  • LoadBalancer support (cloud provider or MetalLB)
  • Public IP accessible from the internet (for Let's Encrypt HTTP-01 validation)
  • kubectl v1.32+ configured to access your cluster
  • Helm v3.12+
  • Gateway API CRDs, cert-manager, and External Secrets Operator installed in your cluster
kubectl version
helm version --short
kubectl get nodes
kubectl auth can-i '*' '*' --all-namespaces
Don't have Gateway API CRDs, cert-manager, or External Secrets Operator? Install them as follows:

Install Gateway API CRDs:

kubectl apply --server-side \
-f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/experimental-install.yaml

Install cert-manager:

helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.19.2 \
--set crds.enabled=true

kubectl wait --for=condition=Available deployment/cert-manager -n cert-manager --timeout=180s
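
The cert-manager webhook must also be ready before any Issuer or Certificate is created; a hedged extra wait (deployment name as created by the upstream chart):

kubectl wait --for=condition=Available deployment/cert-manager-webhook -n cert-manager --timeout=180s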

Install External Secrets Operator:

helm upgrade --install external-secrets oci://ghcr.io/external-secrets/charts/external-secrets \
--namespace external-secrets \
--create-namespace \
--version 1.3.2 \
--set installCRDs=true

kubectl wait --for=condition=Available deployment/external-secrets -n external-secrets --timeout=180s

Step 1: Setup Control Plane

helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 0.14.0 \
--namespace openchoreo-control-plane \
--create-namespace \
--set global.baseDomain=placeholder.nip.io \
--set thunder.configuration.server.publicUrl=http://thunder.openchoreo.placeholder.nip.io \
--set thunder.configuration.gateClient.hostname=thunder.openchoreo.placeholder.nip.io

This does an initial deployment with placeholder values. We'll update it with the real domain once we know the LoadBalancer IP.

This installs:

  • controller-manager: the controllers that reconcile OpenChoreo resources and manage the platform lifecycle.
  • openchoreo-api: REST API server that the console and CLI talk to.
  • backstage: the web console for managing your platform.
  • thunder: built-in identity provider handling authentication and OAuth flows.
  • cluster-gateway: accepts WebSocket connections from cluster-agents in remote planes.
  • kgateway: gateway controller for routing external traffic to services.
  • OpenChoreo CRDs: Organization, Project, Component, Environment, DataPlane, BuildPlane, and others that define the platform's API.

The control plane is OpenChoreo's brain. In production, you'd typically run this in its own dedicated cluster, isolated from your workloads.

For all available configuration options, see the Control Plane Helm Reference.

Get LoadBalancer IP

Wait for LoadBalancer to get an external IP (press Ctrl+C once EXTERNAL-IP appears):

kubectl get svc gateway-default -n openchoreo-control-plane -w

Get the IP address:

CP_LB_IP=$(kubectl get svc gateway-default -n openchoreo-control-plane -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
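
On some providers (EKS in particular) the LoadBalancer publishes a hostname rather than an IP, so the jsonpath above comes back empty. A hedged fallback, assuming dig is available locally, resolves the hostname to one of its addresses (the same applies to the data plane LoadBalancer in Step 2):

# Fall back to the hostname field and resolve it to an IP for nip.io
CP_LB_HOSTNAME=$(kubectl get svc gateway-default -n openchoreo-control-plane -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
CP_LB_IP=$(dig +short "$CP_LB_HOSTNAME" | head -n 1)
echo "Control Plane LoadBalancer IP: $CP_LB_IP"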

Configure Domain and TLS

Set domain variable (converts IP to nip.io format):

export CP_DOMAIN="openchoreo.${CP_LB_IP//./-}.nip.io"
echo "Control Plane Domain: $CP_DOMAIN"

Create Issuer and Certificate:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-http01
  namespace: openchoreo-control-plane
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: tryout@openchoreo.dev
    privateKeySecretRef:
      name: letsencrypt-http01
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - name: gateway-default
            namespace: openchoreo-control-plane
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: control-plane-tls
  namespace: openchoreo-control-plane
spec:
  secretName: control-plane-tls
  issuerRef:
    name: letsencrypt-http01
    kind: Issuer
  dnsNames:
  - "console.${CP_DOMAIN}"
  - "api.${CP_DOMAIN}"
  - "thunder.${CP_DOMAIN}"
EOF

This creates a Let's Encrypt issuer using HTTP-01 validation. The Certificate resource requests certificates for the console, API, and Thunder (identity provider) domains. Let's Encrypt will make HTTP requests to these domains to verify you control them, which is why your LoadBalancer needs to be publicly accessible.
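
Before waiting on the certificate, a quick hedged reachability check (assuming curl is installed) that port 80 on the new domain answers from the internet:

# Any HTTP status code is fine here; a timeout suggests the LoadBalancer is not publicly reachable
curl -sS -o /dev/null -w '%{http_code}\n' --max-time 10 "http://console.${CP_DOMAIN}/"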

Wait until READY shows True (may take 1-2 minutes):

kubectl get certificate control-plane-tls -n openchoreo-control-plane -w

Upgrade with TLS

helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 0.14.0 \
--namespace openchoreo-control-plane \
--reuse-values \
--set global.baseDomain=${CP_DOMAIN} \
--set global.tls.enabled=true \
--set global.tls.secretName=control-plane-tls \
--set gateway.selfSignedIssuer.enabled=false \
--set backstage.baseUrl=https://console.${CP_DOMAIN} \
--set-string backstage.auth.redirectUrls\[0\]=https://console.${CP_DOMAIN}/api/auth/openchoreo-auth/handler/frame \
--set thunder.configuration.server.publicUrl=https://thunder.${CP_DOMAIN} \
--set thunder.configuration.gateClient.hostname=thunder.${CP_DOMAIN} \
--set thunder.configuration.gateClient.port=443 \
--set thunder.configuration.gateClient.scheme="https" \
--set-string thunder.configuration.cors.allowedOrigins\[0\]=https://console.${CP_DOMAIN} \
--set-string thunder.configuration.cors.allowedOrigins\[1\]=https://thunder.${CP_DOMAIN} \
--set-string thunder.configuration.passkey.allowedOrigins\[0\]=https://console.${CP_DOMAIN}

This upgrades the control plane with these settings:

  • global.baseDomain: the real nip.io domain based on your LoadBalancer IP.
  • global.tls.enabled and global.tls.secretName: enables TLS using the certificate we just created.
  • backstage.baseUrl and backstage.auth.redirectUrls: serves Backstage on console.<domain> with correct OAuth callbacks.
  • thunder.configuration.*: updates Thunder external URLs, CORS, and passkey origins for HTTPS.

Patch Backstage HTTPRoute host to console.<domain>:

kubectl patch httproute openchoreo-backstage -n openchoreo-control-plane --type merge \
-p "{\"spec\":{\"hostnames\":[\"console.${CP_DOMAIN}\"]}}"

Apply Default Resources

The control plane needs default resources (project, environments, component types, workflows) to function. These are normally created by Helm post-install hooks, but hooks can fail silently on some clusters (especially EKS). Apply them explicitly to ensure they exist:

kubectl apply -f https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.14/samples/getting-started/all.yaml

Verify the resources were created:

kubectl get project,environment,componenttype -n default

You should see the default project, three environments (development, staging, production), and component types (service, web-application, scheduled-task).

Label the default namespace

Label the default namespace to mark it as a control plane namespace:

kubectl label namespace default openchoreo.dev/controlplane-namespace=true

Step 2: Setup Data Plane

helm upgrade --install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 0.14.0 \
--namespace openchoreo-data-plane \
--set gateway.httpPort=80 \
--set gateway.httpsPort=443 \
--set gatewayController.enabled=false \
--set gateway.selfSignedIssuer.enabled=true \
--create-namespace

This installs the data plane into the openchoreo-data-plane namespace, with these settings:

  • gateway.httpPort and gateway.httpsPort: the ports where KGateway listens for traffic to your applications.
  • gatewayController.enabled=false: reuses the gateway controller already installed with the control plane (single-cluster mode).
  • gateway.selfSignedIssuer.enabled=true: keeps the data plane certificate flow self-signed in this HTTP-01 guide.

This installs:

  • cluster-agent: maintains a WebSocket connection to the control plane's cluster-gateway. This is how the control plane sends deployment instructions to the data plane.
  • gateway: KGateway with Envoy proxy that routes incoming traffic to your deployed applications.
  • fluent-bit: collects logs from your workloads and forwards them to the observability plane.
  • Gateway API CRDs: Gateway, HTTPRoute, and other resources for traffic routing.

The data plane is where your workloads actually run. In this guide we're installing it in the same cluster as the control plane, but in production you'd typically have it in a completely separate cluster. This separation is intentional: your application code never runs alongside the control plane, and the control plane's credentials are never exposed to your workloads.

For all available configuration options, see the Data Plane Helm Reference.
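
A quick sanity check that the data plane components came up (exact pod names depend on the chart):

kubectl get pods -n openchoreo-data-plane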

Get Gateway LoadBalancer IP

Wait for LoadBalancer to get an external IP (press Ctrl+C once EXTERNAL-IP appears):

kubectl get svc gateway-default -n openchoreo-data-plane -w

Get the IP address:

DP_LB_IP=$(kubectl get svc gateway-default -n openchoreo-data-plane -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Configure Domain

export DP_BASE_DOMAIN="openchoreo.${DP_LB_IP//./-}.nip.io"
export DP_DOMAIN="app.${DP_BASE_DOMAIN}"
echo "Data Plane Base Domain: $DP_BASE_DOMAIN"
echo "Data Plane Virtual Host: $DP_DOMAIN"

Configure TLS

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openchoreo-gateway-tls
  namespace: openchoreo-data-plane
spec:
  secretName: openchoreo-gateway-tls
  issuerRef:
    name: openchoreo-selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
  - "*.development.${DP_DOMAIN}"
  - "*.staging.${DP_DOMAIN}"
  - "*.production.${DP_DOMAIN}"
  - "*.default.development.${DP_DOMAIN}"
  - "*.default.staging.${DP_DOMAIN}"
  - "*.default.production.${DP_DOMAIN}"
EOF
Wildcard Certificates

This guide intentionally keeps the data plane on self-signed TLS.

  • *.development.${DP_DOMAIN}, *.staging.${DP_DOMAIN}, and *.production.${DP_DOMAIN} cover service hostnames: <namespace>.<environment>.${DP_DOMAIN}.
  • *.default.development.${DP_DOMAIN}, *.default.staging.${DP_DOMAIN}, and *.default.production.${DP_DOMAIN} cover web-application hostnames in the default namespace: <component>-<environment>.default.${DP_DOMAIN}. If you deploy web applications in non-default namespaces, add explicit dnsNames entries. For trusted certificates, use your own domain with DNS-01 and a custom ClusterIssuer.

Upgrade with Domain

helm upgrade --install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 0.14.0 \
--namespace openchoreo-data-plane \
--reuse-values \
--set gateway.tls.certName=openchoreo-gateway-tls \
--set gateway.tls.hostname="*.${DP_DOMAIN}"

Remove the HTTPS listener hostname filter so all environment hostnames can match:

kubectl patch gateway gateway-default -n openchoreo-data-plane --type json \
-p '[{"op":"remove","path":"/spec/listeners/1/hostname"}]' || true

Register with the Control Plane

CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-data-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

The control plane only accepts agent connections signed by a CA it recognizes. When you installed the data plane, cert-manager generated a CA and used it to sign the cluster-agent's client certificate. This command extracts that CA so you can tell the control plane about it.
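
A hedged sanity check, assuming openssl is available locally, that the extracted value is really a CA certificate:

echo "$CA_CERT" | openssl x509 -noout -subject -issuer -dates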

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: DataPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-dataplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$CA_CERT" | sed 's/^/        /')
  gateway:
    organizationVirtualHost: "${DP_DOMAIN}"
    publicVirtualHost: "${DP_DOMAIN}"
    publicHTTPPort: 80
    publicHTTPSPort: 443
    organizationHTTPPort: 80
    organizationHTTPSPort: 443
  secretStoreRef:
    name: default
EOF

This creates a DataPlane resource that tells the control plane about your data plane, with these settings:

  • planeID: identifies this data plane. Must match what the cluster-agent was configured with, which defaults to default-dataplane.
  • clusterAgent.clientCA: the CA certificate that signed the agent's client certificate. The control plane uses this to verify incoming connections.
  • gateway.publicVirtualHost: where your deployed applications become accessible.
  • gateway.publicHTTPPort and gateway.publicHTTPSPort: explicit invoke URL ports used by the DataPlane API model.
  • secretStoreRef: references the External Secrets ClusterSecretStore for managing secrets.

Verify

kubectl get dataplane -n default

The cluster-agent should now be connected. You can check its logs:

kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=10

Step 3: Setup Build Plane (Optional)

The Build Plane runs Argo Workflows to build container images from your source code. You need a container registry to store built images.

ttl.sh is a free, anonymous registry. Images expire after 24 hours. No setup required.

export REGISTRY_PREFIX=$(cat /dev/urandom | LC_ALL=C tr -dc 'a-z0-9' | fold -w 8 | head -n 1)
echo "Your registry prefix: $REGISTRY_PREFIX"
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
--version 0.14.0 \
--namespace openchoreo-build-plane \
--create-namespace \
--set global.defaultResources.registry.host=ttl.sh \
--set global.defaultResources.registry.repoPath=$REGISTRY_PREFIX \
--set global.defaultResources.registry.tlsVerify=true
Note

Images on ttl.sh expire after 24 hours. For production, use your cloud provider's registry (ECR, GCR, ACR). See Container Registry Configuration.

This installs:

  • cluster-agent: connects to the control plane to receive build instructions.
  • argo-workflows: executes the actual build pipelines as Kubernetes workflows.
  • Argo Workflows CRDs: Workflow, WorkflowTemplate, and other resources for defining build pipelines.

For all available configuration options, see the Build Plane Helm Reference.

Register with the Control Plane

BP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-build-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: BuildPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-buildplane"
  secretStoreRef:
    name: openbao
  clusterAgent:
    clientCA:
      value: |
$(echo "$BP_CA_CERT" | sed 's/^/        /')
EOF

Verify

kubectl get buildplane -n default
kubectl logs -n openchoreo-build-plane -l app=cluster-agent --tail=10

Step 4: Setup Observability Plane (Optional)

helm upgrade --install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 0.14.0 \
--namespace openchoreo-observability-plane \
--create-namespace \
--set openSearch.enabled=true \
--set openSearchCluster.enabled=false \
--set security.oidc.jwksUrl="https://thunder.${CP_DOMAIN}/oauth2/jwks" \
--set security.oidc.tokenUrl="https://thunder.${CP_DOMAIN}/oauth2/token" \
--set clusterAgent.enabled=true \
--timeout 10m

This installs the observability plane with these settings:

  • openSearch.enabled: deploys OpenSearch for storing logs and traces.
  • openSearchCluster.enabled: set to false to use the simpler single-node deployment instead of the operator-based cluster.
  • security.oidc.jwksUrl: the JWKS endpoint for validating JWT tokens. This points to Thunder's JWKS endpoint (using HTTPS since we configured TLS) so the Observer API can authenticate requests.

This installs:

  • cluster-agent: connects to the control plane.
  • opensearch: stores logs and traces from your workloads.
  • observer: REST API that abstracts OpenSearch. The console and other components query logs through this instead of talking to OpenSearch directly.
  • opentelemetry-collector: receives traces and metrics from your applications.

The observability plane collects logs, metrics, and traces from your data and build planes. Like the other planes, it could run in a completely separate cluster in production.

For all available configuration options, see the Observability Plane Helm Reference.

Register with the Control Plane

OP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-observability-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ObservabilityPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-observabilityplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$OP_CA_CERT" | sed 's/^/        /')
  observerURL: http://observer.openchoreo-observability-plane.svc.cluster.local:8080
EOF

The observerURL tells the control plane where to find the Observer API.
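
That URL is the in-cluster DNS name of the Observer service; a hedged check that the Service behind it exists (service name inferred from the URL):

kubectl get svc observer -n openchoreo-observability-plane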

Link Other Planes to Observability

kubectl patch dataplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
kubectl patch buildplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'

This tells the data plane and build plane to send their logs and traces to this observability plane.
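
To confirm the patches landed, print the new field on each plane (both should read "default"):

kubectl get dataplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}{"\n"}'
kubectl get buildplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}{"\n"}'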

Verify

kubectl get observabilityplane -n default
kubectl logs -n openchoreo-observability-plane -l app=cluster-agent --tail=10

Access OpenChoreo

Service          URL
Console          https://console.${CP_DOMAIN}
API              https://api.${CP_DOMAIN}
Deployed Apps    https://<namespace>.<environment>.${DP_DOMAIN}/<component>/...
Registry         https://registry.${CP_DOMAIN} (if Build Plane installed)

Default credentials: admin@openchoreo.dev / Admin@123
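
If CP_DOMAIN and DP_DOMAIN are still set in your shell, you can print the concrete URLs:

echo "Console: https://console.${CP_DOMAIN}"
echo "API:     https://api.${CP_DOMAIN}"
echo "Apps:    https://<namespace>.<environment>.${DP_DOMAIN}/<component>/..."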

Two Different IPs

The Control Plane and Data Plane have separate LoadBalancer IPs. nip.io resolves based on the IP embedded in the hostname, so openchoreo.1-2-3-4.nip.io resolves to 1.2.3.4. This means your console and your deployed apps can have completely different IPs, which is actually how you'd set things up in production with separate clusters.


Moving to Production

This guide provides a quick way to explore OpenChoreo. For production deployments, follow these guides to harden your setup:

  1. Identity & Security: Replace default credentials with a real Identity Provider.

  2. Networking & Domains: Move away from nip.io to your own domains.

  3. Infrastructure: Scale out and isolate your planes.

Control Plane Namespaces

The installation (together with the default resources applied in Step 1) creates and configures the default namespace with everything required, so you can immediately start creating projects and deploying components.

What's already configured in the default namespace:

  • DataPlane resource (connected to your Data Plane cluster)
  • Component Types: service, webapp, scheduled-task
  • Component Workflows: docker, google-cloud-buildpacks, ballerina-buildpack, react
  • Environments: development, staging, production
  • DeploymentPipeline: default (promotes development β†’ staging β†’ production)
  • Project: default
Additional Namespaces (Optional)

If you need to create additional namespaces to organize resources for different teams or business units, follow the Namespace Management Guide for step-by-step instructions on provisioning all required resources.

Next Steps

  1. Deploy your first component to see OpenChoreo in action.
  2. Start planning your production architecture with the Deployment Topology guide.

Cleanup

Delete plane registrations:

kubectl delete dataplane default -n default 2>/dev/null
kubectl delete buildplane default -n default 2>/dev/null
kubectl delete observabilityplane default -n default 2>/dev/null

Uninstall OpenChoreo components:

helm uninstall openchoreo-observability-plane -n openchoreo-observability-plane 2>/dev/null
helm uninstall openchoreo-build-plane -n openchoreo-build-plane 2>/dev/null
helm uninstall openchoreo-data-plane -n openchoreo-data-plane
helm uninstall openchoreo-control-plane -n openchoreo-control-plane
helm uninstall external-secrets -n external-secrets
helm uninstall cert-manager -n cert-manager

Delete namespaces:

kubectl delete namespace openchoreo-control-plane openchoreo-data-plane openchoreo-build-plane openchoreo-observability-plane external-secrets cert-manager 2>/dev/null

Delete CRDs:

kubectl get crd -o name | grep -E '\.openchoreo\.dev$' | xargs -r kubectl delete
kubectl get crd -o name | grep -E '\.cert-manager\.io$' | xargs -r kubectl delete
kubectl get crd -o name | grep -E '\.gateway\.networking\.k8s\.io$' | xargs -r kubectl delete

Troubleshooting

Certificate not issuing

kubectl describe certificate control-plane-tls -n openchoreo-control-plane
kubectl get challenges -A
kubectl describe issuer letsencrypt-http01 -n openchoreo-control-plane

Common issues:

  • LoadBalancer not publicly accessible (HTTP-01 validation requires public access)
  • Firewall blocking port 80
  • Rate limits (Let's Encrypt has rate limits). If this happens on nip.io, use your own domain and DNS-01.

Agent not connecting

kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=20
kubectl logs -n openchoreo-control-plane -l app=cluster-gateway --tail=20

Look for connection errors in the logs. Common issues:

  • PlaneID mismatch: The planeID in the plane CR must match the clusterAgent.planeId Helm value
  • CA certificate mismatch
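
For the planeID mismatch case, a hedged way to compare the two sides (the Helm value path is clusterAgent.planeId as noted above; the CR field is spec.planeID):

helm get values openchoreo-data-plane -n openchoreo-data-plane -a | grep -i planeid
kubectl get dataplane default -n default -o jsonpath='{.spec.planeID}{"\n"}'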

Wildcard certificates with HTTP-01 validation

HTTP-01 validation cannot be used for wildcard certificates (*.domain.com). The data plane gateway uses a wildcard hostname by default.

Options:

  • Self-signed certificates: Use the default self-signed issuer (quick test path in this guide).
  • DNS-01 validation: Use a custom domain and cert-manager DNS-01 solver for trusted certificates.
  • Explicit SANs: Add exact hostnames used by your project/environment layout.
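
To illustrate the DNS-01 option above, here is a minimal hedged sketch of a ClusterIssuer using Cloudflare as the DNS provider; the email, token Secret, and issuer name are placeholders, and other providers have analogous solver blocks in the cert-manager documentation:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-dns01
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
EOF

Reference this issuer from your Certificate resources (issuerRef kind: ClusterIssuer) and switch the dnsNames to a wildcard on your own domain.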