On Self-Hosted Kubernetes
Try OpenChoreo on any self-hosted Kubernetes cluster, whether it's running on your laptop, a VM, or an on-premises environment. This is the fastest way to explore OpenChoreo without cloud provider costs.
What you'll get:
- Full OpenChoreo installation on your cluster
- All four planes: Control, Data, Build, and Observability
- Access via `.localhost` domains
- ~15-20 minutes to complete
Prerequisites
Choose one of the following: create a local cluster with k3d, or use an existing Kubernetes cluster.
k3d

k3d runs k3s in Docker containers.

- Docker v26.0+ with at least 8 GB RAM and 4 CPU cores allocated
- Disk space: ~10 GB free
- k3d v5.8+
- kubectl v1.32+
- Helm v3.12+
- cert-manager (installed in a later step)

Verify your tooling:

```shell
docker --version && docker info > /dev/null
k3d --version
kubectl version --client
helm version --short
```

Set K3D_FIX_DNS=0 when creating clusters to avoid DNS issues. See k3d-io/k3d#1449.
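If you prefer to gate the setup on these minimum versions programmatically, a small comparison helper can wrap the checks above. This is a sketch: it assumes GNU `sort -V` is available and that `k3d --version` prints a dotted version number on its first line.

```shell
# Sketch: dotted-version comparison for the prerequisite checks above.
# version_ge A B succeeds when version A >= version B (needs GNU `sort -V`).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Hypothetical usage: pull the first dotted version out of `k3d --version`
# and compare it against the documented minimum.
if command -v k3d >/dev/null 2>&1; then
  k3d_ver=$(k3d --version | grep -oE '[0-9]+(\.[0-9]+)+' | head -n1)
  if version_ge "$k3d_ver" 5.8; then
    echo "k3d $k3d_ver meets the 5.8 minimum"
  else
    echo "k3d $k3d_ver is older than 5.8; upgrade before continuing"
  fi
fi
```

The same pattern works for kubectl and Helm by swapping the command and minimum version.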
Create Cluster

```shell
curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.8/install/k3d/single-cluster/config.yaml | k3d cluster create --config=-
```

This creates a cluster named openchoreo with:

- 1 server node (no agents)
- Port mappings: Control Plane (8080/8443), Data Plane (19080/19443), Build Plane (10081)
- kubectl context set to k3d-openchoreo
Existing Cluster

- Kubernetes 1.32+ cluster with at least 8 GB RAM and 4 CPU cores
- kubectl v1.32+ configured to access your cluster
- Helm v3.12+
- cert-manager installed in your cluster

Verify your access:

```shell
kubectl version
helm version --short
kubectl get nodes
kubectl auth can-i '*' '*' --all-namespaces
```
Note Your Ingress Configuration

Check if you have an ingress controller:

```shell
kubectl get ingressclass
```

Common configurations:

- Rancher Desktop: Built-in Traefik on ports 80/443, class name traefik
- Docker Desktop: No default ingress
- OrbStack: Built-in ingress on ports 80/443
Install cert-manager

```shell
helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```

Wait for cert-manager to be ready:

```shell
kubectl wait --for=condition=available deployment/cert-manager -n cert-manager --timeout=120s
```
Step 1: Setup Control Plane
```shell
helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
  --version 0.8.0 \
  --namespace openchoreo-control-plane \
  --create-namespace \
  --set global.baseDomain=openchoreo.localhost \
  --set global.port=":8080" \
  --set traefik.ports.web.exposedPort=8080 \
  --set traefik.ports.websecure.exposedPort=8443 \
  --set thunder.configuration.server.publicUrl=http://thunder.openchoreo.localhost:8080 \
  --set thunder.configuration.gateClient.hostname=thunder.openchoreo.localhost \
  --set thunder.configuration.gateClient.port=8080 \
  --set thunder.configuration.gateClient.scheme="http"
```
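If you'd rather not manage a long list of `--set` flags, the same values can live in a file passed with `-f`. This is a mechanical translation of the flags above into a values file (a sketch; the filename is arbitrary):

```yaml
# values-control-plane.yaml — mirrors the --set flags above
global:
  baseDomain: openchoreo.localhost
  port: ":8080"
traefik:
  ports:
    web:
      exposedPort: 8080
    websecure:
      exposedPort: 8443
thunder:
  configuration:
    server:
      publicUrl: http://thunder.openchoreo.localhost:8080
    gateClient:
      hostname: thunder.openchoreo.localhost
      port: 8080
      scheme: http
```

Install with `helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane --version 0.8.0 --namespace openchoreo-control-plane --create-namespace -f values-control-plane.yaml`.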
Wait for pods to be ready:
```shell
kubectl get pods -n openchoreo-control-plane -w
```
Step 2: Setup Data Plane
Due to Rosetta emulation issues, macOS users (Rancher Desktop, Docker Desktop, k3d, kind, or Colima) should add `--set gateway.envoy.mountTmpVolume=true`. Users on other platforms can omit this flag.
```shell
helm upgrade --install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
  --version 0.8.0 \
  --namespace openchoreo-data-plane \
  --create-namespace \
  --set gateway.httpPort=19080 \
  --set gateway.httpsPort=19443 \
  --set external-secrets.enabled=true \
  --set gateway.envoy.mountTmpVolume=true
```
Create a Certificate for Gateway TLS:

```shell
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openchoreo-gateway-tls
  namespace: openchoreo-data-plane
spec:
  secretName: openchoreo-gateway-tls
  issuerRef:
    name: openchoreo-selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
    - "*.openchoreoapis.localhost"
EOF
```
Register with the control plane:

```shell
CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-data-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: DataPlane
metadata:
  name: default
  namespace: default
spec:
  agent:
    enabled: true
    clientCA:
      value: |
$(echo "$CA_CERT" | sed 's/^/        /')
  gateway:
    organizationVirtualHost: "openchoreoapis.internal"
    publicVirtualHost: "openchoreoapis.localhost"
  secretStoreRef:
    name: default
EOF
```
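The `sed` substitution in the registration snippet exists to indent every line of the PEM bundle so it nests under the `value: |` block scalar; YAML requires block-scalar content to be indented past its key. A standalone sketch of the same trick, using a stand-in certificate value:

```shell
# Sketch: indent a multi-line value for embedding under a YAML block scalar.
# FAKE_PEM stands in for the CA bundle read from the cluster-agent-tls secret.
FAKE_PEM='-----BEGIN CERTIFICATE-----
MIIBfakebody
-----END CERTIFICATE-----'

# Prefix every line with eight spaces, deep enough to sit under the
# `value: |` key in the DataPlane manifest.
INDENTED=$(printf '%s\n' "$FAKE_PEM" | sed 's/^/        /')
printf '%s\n' "$INDENTED"
```

Because the heredoc delimiter (`EOF`) is unquoted, the `$(...)` substitution runs before `kubectl apply` sees the manifest, so the indented certificate lands inline in the YAML.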
Verify:

```shell
kubectl get dataplane -n default
kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=10
```
Step 3: Setup Build Plane (Optional)
The Build Plane enables OpenChoreo's built-in CI capabilities.
For k3d:

```shell
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
  --version 0.8.0 \
  --namespace openchoreo-build-plane \
  --create-namespace \
  --set external-secrets.enabled=false \
  --set global.defaultResources.registry.endpoint=host.k3d.internal:10082 \
  --set registry.service.type=LoadBalancer
```
For an existing cluster:

```shell
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
  --version 0.8.0 \
  --namespace openchoreo-build-plane \
  --create-namespace \
  --set external-secrets.enabled=false \
  --set global.defaultResources.registry.endpoint=<REGISTRY_ENDPOINT> \
  --set registry.service.type=LoadBalancer \
  --set registry.service.port=10082
```
The Build Plane deploys an HTTP container registry. You may need to configure two things based on your cluster setup:
1. Registry Endpoint: Update global.defaultResources.registry.endpoint to an address accessible from both the build pods (for pushing) and the kubelet (for pulling). Common values:
   - `host.docker.internal:10082` - for clusters that support host.docker.internal
   - `<node-ip>:10082` - using the node's IP address
   - `registry.openchoreo-build-plane.svc.cluster.local:5000` - in-cluster only (won't work if the kubelet can't reach cluster services)
2. HTTP Registry Access: Container runtimes default to HTTPS for registries. If image pulls fail with "http: server gave HTTP response to HTTPS client", configure your container runtime to allow HTTP for this registry. For Rancher Desktop users, see Configuring Private Registries. For other platforms, consult your Kubernetes distribution's documentation on configuring insecure registries.
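On k3s-based clusters (k3d and Rancher Desktop typically run k3s), HTTP registry access is configured through a registries file. A sketch, assuming the `host.docker.internal:10082` endpoint from above; the path shown is the k3s default, and the node needs a restart (or the k3d cluster must be created with this file mounted) for the change to take effect:

```yaml
# /etc/rancher/k3s/registries.yaml — allow plain-HTTP access to the
# Build Plane registry (adjust the address to match
# global.defaultResources.registry.endpoint)
mirrors:
  "host.docker.internal:10082":
    endpoint:
      - "http://host.docker.internal:10082"
```

The `http://` scheme on the endpoint is what tells containerd not to expect TLS from this registry.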
Register with the control plane:

```shell
BP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-build-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: BuildPlane
metadata:
  name: default
  namespace: default
spec:
  agent:
    enabled: true
    clientCA:
      value: |
$(echo "$BP_CA_CERT" | sed 's/^/        /')
EOF
```
Verify:

```shell
kubectl get buildplane -n default
kubectl logs -n openchoreo-build-plane -l app=cluster-agent --tail=10
```
Step 4: Setup Observability Plane (Optional)
```shell
helm upgrade --install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
  --version 0.8.0 \
  --namespace openchoreo-observability-plane \
  --create-namespace \
  --set openSearch.enabled=true \
  --set openSearchCluster.enabled=false \
  --set external-secrets.enabled=false \
  --set clusterAgent.enabled=true \
  --timeout 10m
```
Register with the control plane:

```shell
OP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-observability-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)

kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ObservabilityPlane
metadata:
  name: default
  namespace: default
spec:
  agent:
    enabled: true
    clientCA:
      value: |
$(echo "$OP_CA_CERT" | sed 's/^/        /')
  observerURL: http://observer.openchoreo-observability-plane.svc.cluster.local:8080
EOF
```
Link the Data Plane (and Build Plane, if installed) to use observability:

```shell
kubectl patch dataplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
kubectl patch buildplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
```
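The merge patch simply sets one field on the existing resource; if you keep the plane CRs as manifests, the same link can be declared up front instead (a fragment of the DataPlane spec from Step 2):

```yaml
# DataPlane (or BuildPlane) spec fragment — equivalent to the merge patch
spec:
  observabilityPlaneRef: default
```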
Verify:

```shell
kubectl get observabilityplane -n default
kubectl logs -n openchoreo-observability-plane -l app=cluster-agent --tail=10
```
Access OpenChoreo
| Service | URL |
|---|---|
| Console | http://openchoreo.localhost:8080 |
| API | http://api.openchoreo.localhost:8080 |
| Deployed Apps | http://<env>.openchoreoapis.localhost:19080/<component>/... |
Default credentials: admin@openchoreo.dev / Admin@123
If your cluster is running on a remote VM or server, use SSH tunneling to access OpenChoreo from your local machine:
```shell
ssh -L 8080:localhost:8080 \
    -L 8443:localhost:8443 \
    -L 19080:localhost:19080 \
    -L 19443:localhost:19443 \
    user@remote-host
```
This forwards:
- 8080/8443: Control Plane UI (Console and API)
- 19080/19443: Data Plane Gateway (Deployed applications)
Keep this SSH session open and access OpenChoreo via http://openchoreo.localhost:8080 in your local browser.
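If you reconnect often, the same forwards can live in your SSH client configuration instead of being retyped (a sketch; the `openchoreo-remote` alias is arbitrary, and `remote-host`/`user` are the placeholders from above):

```
# ~/.ssh/config — persistent equivalent of the ssh -L flags above
Host openchoreo-remote
    HostName remote-host
    User user
    LocalForward 8080 localhost:8080
    LocalForward 8443 localhost:8443
    LocalForward 19080 localhost:19080
    LocalForward 19443 localhost:19443
```

Then connect with `ssh openchoreo-remote`.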
Next Steps
- Deploy your first component
- Explore the sample applications
Cleanup
Uninstall OpenChoreo components:

```shell
helm uninstall openchoreo-observability-plane -n openchoreo-observability-plane 2>/dev/null
helm uninstall openchoreo-build-plane -n openchoreo-build-plane 2>/dev/null
helm uninstall openchoreo-data-plane -n openchoreo-data-plane
helm uninstall openchoreo-control-plane -n openchoreo-control-plane
helm uninstall cert-manager -n cert-manager
```
Delete plane registrations and namespaces:

```shell
kubectl delete dataplane default -n default 2>/dev/null
kubectl delete buildplane default -n default 2>/dev/null
kubectl delete observabilityplane default -n default 2>/dev/null
kubectl delete namespace openchoreo-control-plane openchoreo-data-plane openchoreo-build-plane openchoreo-observability-plane cert-manager 2>/dev/null
```
If you created a k3d cluster for this guide:
```shell
k3d cluster delete openchoreo
```
Troubleshooting

Pods stuck in Pending

```shell
kubectl describe pod <pod-name> -n <namespace>
```
Common causes:
- Insufficient resources (increase RAM/CPU allocation)
- PVC issues (check storage provisioner)
Agent not connecting

```shell
kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=20
kubectl logs -n openchoreo-control-plane -l app=cluster-gateway --tail=20
```
Common issues:
- DataPlane/BuildPlane CR not created
- CA certificate mismatch
- Network connectivity between namespaces
Gateway pods crash on macOS

If you see "Failed to create temporary file" errors:

```shell
helm upgrade openchoreo-data-plane ... --set gateway.envoy.mountTmpVolume=true
```