Version: v0.9.x

On Self-Hosted Kubernetes

Try OpenChoreo on any self-hosted Kubernetes cluster, whether it's running on your laptop, a VM, or an on-premises environment. This is the fastest way to explore OpenChoreo without cloud provider costs.

What you'll get:

  • Full OpenChoreo installation on your cluster
  • All four planes: Control, Data, Build, and Observability
  • Access via .localhost domains
  • ~15-20 minutes to complete

Prerequisites

This guide uses k3d, which runs k3s inside Docker containers.

  • Docker v26.0+ with at least 8 GB RAM and 4 CPU cores allocated
  • Disk space: ~10 GB free
  • k3d v5.8+
  • kubectl v1.32+
  • Helm v3.12+
Verify the installed versions:

docker --version && docker info > /dev/null
k3d --version
kubectl version --client
helm version --short
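These minimums can also be enforced in a script. As a sketch (the helper below is not part of OpenChoreo), sort -V can compare version strings:

```shell
# version_ge VERSION MINIMUM -> exit 0 when VERSION >= MINIMUM.
# sort -V orders versions semantically; if MINIMUM sorts first
# (or the two are equal), VERSION satisfies the floor.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check a k3d version against the v5.8+ requirement above
version_ge "5.8.3" "5.8.0" && echo "k3d version OK"
```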
macOS Users

For optimal compatibility and to avoid buildpack build issues, we recommend using Colima with VZ and Rosetta support:

colima start --vm-type=vz --vz-rosetta --cpu 4 --memory 8
Colima Users

Set K3D_FIX_DNS=0 when creating clusters to avoid DNS issues. See k3d-io/k3d#1449.

Create Cluster

curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/main/install/k3d/single-cluster/config.yaml | k3d cluster create --config=-

This creates a cluster named openchoreo with:

  • 1 server node (no agents)
  • Port mappings: Control Plane (8080/8443), Data Plane (19080/19443), Build Plane (10081)
  • kubectl context set to k3d-openchoreo

Install cert-manager

helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true

Wait for cert-manager to be ready:

kubectl wait --for=condition=available deployment --all -n cert-manager --timeout=120s

Step 1: Setup Control Plane

helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-control-plane \
--create-namespace \
--set global.baseDomain=openchoreo.localhost \
--set global.port=":8080" \
--set traefik.ports.web.exposedPort=8080 \
--set traefik.ports.websecure.exposedPort=8443 \
--set thunder.configuration.server.publicUrl=http://thunder.openchoreo.localhost:8080 \
--set thunder.configuration.gateClient.hostname=thunder.openchoreo.localhost \
--set thunder.configuration.gateClient.port=8080 \
--set thunder.configuration.gateClient.scheme="http"

Wait for deployment to be ready:

kubectl wait -n openchoreo-control-plane --for=condition=available --timeout=300s deployment --all
kubectl wait -n openchoreo-control-plane --for=condition=complete job --all

Step 2: Setup Data Plane

macOS Users

Due to Rosetta emulation issues, macOS users (Rancher Desktop, Docker Desktop, k3d, kind, or Colima) should add --set gateway.envoy.mountTmpVolume=true. Non-macOS users can omit this flag.

helm upgrade --install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-data-plane \
--create-namespace \
--set gateway.httpPort=19080 \
--set gateway.httpsPort=19443 \
--set external-secrets.enabled=true \
--set gateway.envoy.mountTmpVolume=true

Create a Certificate for Gateway TLS:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openchoreo-gateway-tls
  namespace: openchoreo-data-plane
spec:
  secretName: openchoreo-gateway-tls
  issuerRef:
    name: openchoreo-selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
    - "*.openchoreoapis.localhost"
EOF

Register with the control plane:

PlaneID Consistency

The planeID in the DataPlane CR must match the clusterAgent.planeId Helm value (default: "default-dataplane"). If you customized clusterAgent.planeId during Helm installation, update the planeID field below to match.

CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-data-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: DataPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-dataplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$CA_CERT" | sed 's/^/        /')
  gateway:
    organizationVirtualHost: "openchoreoapis.internal"
    publicVirtualHost: "openchoreoapis.localhost"
  secretStoreRef:
    name: default
EOF

Verify:

kubectl get dataplane -n default
kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=10
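The registration above embeds the decoded CA as a YAML literal block: base64 -d decodes the secret data, and sed prefixes each line with spaces so the multi-line PEM nests correctly under value: |. The pattern can be tried standalone with a dummy value (a real run uses the cluster-agent-tls secret, not this placeholder):

```shell
# Placeholder standing in for the decoded CA certificate
CA_CERT='-----BEGIN CERTIFICATE-----
placeholder
-----END CERTIFICATE-----'

# Prefix every line with spaces so it sits at the right nesting
# depth under "value: |" in the DataPlane CR
echo "$CA_CERT" | sed 's/^/        /'
```

Every output line carries the same indent, which is what lets the whole PEM live inside a single YAML field.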

Step 3: Setup Build Plane (Optional)

The Build Plane enables OpenChoreo's built-in CI capabilities.

helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-build-plane \
--create-namespace \
--set external-secrets.enabled=false \
--set global.defaultResources.registry.endpoint=host.k3d.internal:10082 \
--set registry.service.type=LoadBalancer

Register with the control plane:

PlaneID Consistency

The planeID in the BuildPlane CR must match the clusterAgent.planeId Helm value (default: "default-buildplane"). If you customized clusterAgent.planeId during Helm installation, update the planeID field below to match.

BP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-build-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: BuildPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-buildplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$BP_CA_CERT" | sed 's/^/        /')
EOF

Verify:

kubectl get buildplane -n default
kubectl logs -n openchoreo-build-plane -l app=cluster-agent --tail=10

Step 4: Setup Observability Plane (Optional)

helm upgrade --install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-observability-plane \
--create-namespace \
--set openSearch.enabled=true \
--set openSearchCluster.enabled=false \
--set security.oidc.jwksUrl="http://thunder-service.openchoreo-control-plane.svc.cluster.local:8090/oauth2/jwks" \
--set external-secrets.enabled=false \
--timeout 10m

Register with the control plane:

PlaneID Consistency

The planeID in the ObservabilityPlane CR must match the clusterAgent.planeId Helm value (default: "default-observabilityplane"). If you customized clusterAgent.planeId during Helm installation, update the planeID field below to match.

OP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-observability-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ObservabilityPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-observabilityplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$OP_CA_CERT" | sed 's/^/        /')
  observerURL: http://observer.openchoreo-observability-plane.svc.cluster.local:8080
EOF

Link the Data Plane (and Build Plane if installed) to use observability:

kubectl patch dataplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
kubectl patch buildplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'

Verify:

kubectl get observabilityplane -n default
kubectl logs -n openchoreo-observability-plane -l app=cluster-agent --tail=10

Access OpenChoreo

  • Console: http://openchoreo.localhost:8080
  • API: http://api.openchoreo.localhost:8080
  • Deployed Apps: http://<env>.openchoreoapis.localhost:19080/<component>/...

Default credentials: admin@openchoreo.dev / Admin@123

Remote Cluster Access

If your cluster is running on a remote VM or server, use SSH tunneling to access OpenChoreo from your local machine:

ssh -L 8080:localhost:8080 \
-L 8443:localhost:8443 \
-L 19080:localhost:19080 \
-L 19443:localhost:19443 \
user@remote-host

This forwards:

  • 8080/8443: Control Plane UI (Console and API)
  • 19080/19443: Data Plane Gateway (Deployed applications)

Keep this SSH session open and access OpenChoreo via http://openchoreo.localhost:8080 in your local browser.
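If you reconnect often, the same forwards can be kept in an SSH client configuration entry (the host alias, hostname, and user below are placeholders; substitute your own):

```
Host openchoreo-remote
    HostName remote-host
    User user
    LocalForward 8080 localhost:8080
    LocalForward 8443 localhost:8443
    LocalForward 19080 localhost:19080
    LocalForward 19443 localhost:19443
```

With this in ~/.ssh/config, ssh -N openchoreo-remote opens all four forwards without starting a remote shell.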


Moving to Production

This guide provides a quick way to explore OpenChoreo. For production deployments, follow these guides to harden your setup:

  1. Identity & Security: Replace default credentials with a real Identity Provider.

  2. Networking & Domains: Move away from localhost to your own domains.

  3. Infrastructure: Scale out and isolate your planes.

Next Steps

  1. Deploy your first component to see OpenChoreo in action.
  2. Explore the sample applications.

Cleanup

Uninstall OpenChoreo components:

helm uninstall openchoreo-observability-plane -n openchoreo-observability-plane 2>/dev/null
helm uninstall openchoreo-build-plane -n openchoreo-build-plane 2>/dev/null
helm uninstall openchoreo-data-plane -n openchoreo-data-plane
helm uninstall openchoreo-control-plane -n openchoreo-control-plane
helm uninstall cert-manager -n cert-manager

Delete namespaces and plane registrations:

kubectl delete dataplane default -n default 2>/dev/null
kubectl delete buildplane default -n default 2>/dev/null
kubectl delete observabilityplane default -n default 2>/dev/null
kubectl delete namespace openchoreo-control-plane openchoreo-data-plane openchoreo-build-plane openchoreo-observability-plane cert-manager 2>/dev/null

If you created a k3d cluster for this guide:

k3d cluster delete openchoreo

Troubleshooting

Pods stuck in Pending

kubectl describe pod <pod-name> -n <namespace>

Common causes:

  • Insufficient resources (increase RAM/CPU allocation)
  • PVC issues (check storage provisioner)

Agent not connecting

kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=20
kubectl logs -n openchoreo-control-plane -l app=cluster-gateway --tail=20

Common issues:

  • DataPlane/BuildPlane CR not created
  • PlaneID mismatch: The planeID in the plane CR must match the clusterAgent.planeId Helm value
  • CA certificate mismatch
  • Network connectivity between namespaces

Gateway pods crash on macOS

If you see "Failed to create temporary file" errors:

helm upgrade openchoreo-data-plane ... --set gateway.envoy.mountTmpVolume=true