On Self-Hosted Kubernetes
Try OpenChoreo on any self-hosted Kubernetes cluster - whether it's running on your laptop, a VM, or an on-premises environment. This is the fastest way to explore OpenChoreo without cloud provider costs.
What you'll get:
- Full OpenChoreo installation on your cluster
- All four planes: Control, Data, Build, and Observability
- Access via `.localhost` domains
- ~15-20 minutes to complete
Prerequisites
Choose your environment: a local k3d cluster or an existing Kubernetes cluster.

k3d
k3d runs k3s in Docker containers.
- Docker v26.0+ with at least 8 GB RAM and 4 CPU cores allocated
- Disk space: ~10 GB free
- k3d v5.8+
- kubectl v1.32+
- Helm v3.12+
Verify the tools are installed:
docker --version && docker info > /dev/null
k3d --version
kubectl version --client
helm version --short
On macOS, for optimal compatibility and to avoid buildpack build issues, we recommend using Colima with VZ and Rosetta support:
colima start --vm-type=vz --vz-rosetta --cpu 4 --memory 8
Set K3D_FIX_DNS=0 when creating clusters to avoid DNS issues. See k3d-io/k3d#1449.
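One way to apply this is to prefix the variable to the cluster-create command shown in the next step:
# K3D_FIX_DNS=0 applies only to this invocation of k3d
curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/main/install/k3d/single-cluster/config.yaml | K3D_FIX_DNS=0 k3d cluster create --config=-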
Create Cluster
curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/main/install/k3d/single-cluster/config.yaml | k3d cluster create --config=-
This creates a cluster named openchoreo with:
- 1 server node (no agents)
- Port mappings: Control Plane (8080/8443), Data Plane (19080/19443), Build Plane (10081)
- kubectl context set to `k3d-openchoreo`
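To confirm the cluster is up and your kubectl context switched, you can run:
k3d cluster list
kubectl config current-context   # should print k3d-openchoreo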
Install cert-manager
helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
Wait for cert-manager to be ready:
kubectl wait --for=condition=available deployment/cert-manager -n cert-manager --timeout=120s
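cert-manager also ships webhook and cainjector deployments, and the webhook must be up before certificates can be issued; if a later step fails on certificate creation, you can wait for all of them:
kubectl wait --for=condition=available deployment --all -n cert-manager --timeout=120s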
Existing Cluster
- Kubernetes 1.32+ cluster with at least 8 GB RAM and 4 CPU cores
- kubectl v1.32+ configured to access your cluster
- Helm v3.12+
- cert-manager installed in your cluster
Use containerd as the container runtime. If you plan to use the Build Plane, configure your runtime to allow HTTP registries before proceeding—see the note in Step 3: Setup Build Plane.
Verify cluster access and permissions:
kubectl version
helm version --short
kubectl get nodes
kubectl auth can-i '*' '*' --all-namespaces
Note Your Ingress Configuration
Check if you have an ingress controller:
kubectl get ingressclass
Common configurations:
- Rancher Desktop: Built-in Traefik on ports 80/443, class name `traefik`
- Docker Desktop: No default ingress
- OrbStack: Built-in ingress on ports 80/443
Step 1: Setup Control Plane
helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-control-plane \
--create-namespace \
--set global.baseDomain=openchoreo.localhost \
--set global.port=":8080" \
--set traefik.ports.web.exposedPort=8080 \
--set traefik.ports.websecure.exposedPort=8443 \
--set thunder.configuration.server.publicUrl=http://thunder.openchoreo.localhost:8080 \
--set thunder.configuration.gateClient.hostname=thunder.openchoreo.localhost \
--set thunder.configuration.gateClient.port=8080 \
--set thunder.configuration.gateClient.scheme="http"
Wait for deployment to be ready:
kubectl wait -n openchoreo-control-plane --for=condition=available --timeout=300s deployment --all
kubectl wait -n openchoreo-control-plane --for=condition=complete job --all
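If a wait times out, inspecting the pods usually shows why:
kubectl get pods -n openchoreo-control-plane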
Step 2: Setup Data Plane
Due to Rosetta emulation issues, macOS users (whether on Rancher Desktop, Docker Desktop, k3d, kind, or Colima) should add `--set gateway.envoy.mountTmpVolume=true`. Users on other platforms can omit this flag.
helm upgrade --install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-data-plane \
--create-namespace \
--set gateway.httpPort=19080 \
--set gateway.httpsPort=19443 \
--set external-secrets.enabled=true \
--set gateway.envoy.mountTmpVolume=true
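As with the control plane, you can wait for the data plane components before continuing:
kubectl wait -n openchoreo-data-plane --for=condition=available --timeout=300s deployment --all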
Create a Certificate for Gateway TLS:
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openchoreo-gateway-tls
  namespace: openchoreo-data-plane
spec:
  secretName: openchoreo-gateway-tls
  issuerRef:
    name: openchoreo-selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
    - "*.openchoreoapis.localhost"
EOF
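You can wait for the certificate to be issued before registering the data plane:
kubectl wait --for=condition=Ready certificate/openchoreo-gateway-tls -n openchoreo-data-plane --timeout=120s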
Register with the control plane:
The planeID in the DataPlane CR must match the clusterAgent.planeId Helm value (default: "default-dataplane"). If you customized clusterAgent.planeId during Helm installation, update the planeID field below to match.
CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-data-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: DataPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-dataplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$CA_CERT" | sed 's/^/        /')
  gateway:
    organizationVirtualHost: "openchoreoapis.internal"
    publicVirtualHost: "openchoreoapis.localhost"
  secretStoreRef:
    name: default
EOF
Verify:
kubectl get dataplane -n default
kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=10
Step 3: Setup Build Plane (Optional)
The Build Plane enables OpenChoreo's built-in CI capabilities.
For a k3d cluster:
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-build-plane \
--create-namespace \
--set external-secrets.enabled=false \
--set global.defaultResources.registry.endpoint=host.k3d.internal:10082 \
--set registry.service.type=LoadBalancer
For an existing cluster, set <REGISTRY_ENDPOINT> as described in the notes below:
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-build-plane \
--create-namespace \
--set external-secrets.enabled=false \
--set global.defaultResources.registry.endpoint=<REGISTRY_ENDPOINT> \
--set registry.service.type=LoadBalancer \
--set registry.service.port=10082
The Build Plane deploys an HTTP container registry. You may need to configure two things based on your cluster setup:
1. Registry Endpoint: Update `global.defaultResources.registry.endpoint` to an address accessible from both the build pods (for pushing) and the kubelet (for pulling). Common values:
   - `host.docker.internal:10082` - For clusters that support `host.docker.internal`
   - `<node-ip>:10082` - Using the node's IP address
   - `registry.openchoreo-build-plane.svc.cluster.local:5000` - In-cluster only (won't work if the kubelet can't reach cluster services)
2. HTTP Registry Access: Container runtimes default to HTTPS for registries. If image pulls fail with "http: server gave HTTP response to HTTPS client", configure your container runtime to allow HTTP for this registry. For Rancher Desktop users, see Configuring Private Registries. For other platforms, consult your Kubernetes distribution's documentation on configuring insecure registries.
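As an illustration only (file locations and mechanisms vary by distribution), on nodes that use containerd's registry hosts configuration, allowing HTTP for the hypothetical endpoint `host.docker.internal:10082` might look like:
# Hypothetical sketch for containerd-based nodes; adjust endpoint and paths for your setup.
# Requires containerd's registry config_path to include /etc/containerd/certs.d.
sudo mkdir -p "/etc/containerd/certs.d/host.docker.internal:10082"
sudo tee "/etc/containerd/certs.d/host.docker.internal:10082/hosts.toml" <<'EOF'
server = "http://host.docker.internal:10082"

[host."http://host.docker.internal:10082"]
  capabilities = ["pull", "resolve", "push"]
EOF
# Restart containerd (or the node) for the change to take effect.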
Register with the control plane:
The planeID in the BuildPlane CR must match the clusterAgent.planeId Helm value (default: "default-buildplane"). If you customized clusterAgent.planeId during Helm installation, update the planeID field below to match.
BP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-build-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: BuildPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-buildplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$BP_CA_CERT" | sed 's/^/        /')
EOF
Verify:
kubectl get buildplane -n default
kubectl logs -n openchoreo-build-plane -l app=cluster-agent --tail=10
Step 4: Setup Observability Plane (Optional)
helm upgrade --install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 0.0.0-latest-dev \
--namespace openchoreo-observability-plane \
--create-namespace \
--set openSearch.enabled=true \
--set openSearchCluster.enabled=false \
--set security.oidc.jwksUrl="http://thunder-service.openchoreo-control-plane.svc.cluster.local:8090/oauth2/jwks" \
--set external-secrets.enabled=false \
--timeout 10m
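You can wait for the observability components to become available (OpenSearch can take a few minutes):
kubectl wait -n openchoreo-observability-plane --for=condition=available --timeout=600s deployment --all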
Register with the control plane:
The planeID in the ObservabilityPlane CR must match the clusterAgent.planeId Helm value (default: "default-observabilityplane"). If you customized clusterAgent.planeId during Helm installation, update the planeID field below to match.
OP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-observability-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ObservabilityPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-observabilityplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$OP_CA_CERT" | sed 's/^/        /')
  observerURL: http://observer.openchoreo-observability-plane.svc.cluster.local:8080
EOF
Link the Data Plane (and Build Plane if installed) to use observability:
kubectl patch dataplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
kubectl patch buildplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
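To confirm the references were set:
kubectl get dataplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}'
kubectl get buildplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}'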
Verify:
kubectl get observabilityplane -n default
kubectl logs -n openchoreo-observability-plane -l app=cluster-agent --tail=10
Access OpenChoreo
| Service | URL |
|---|---|
| Console | http://openchoreo.localhost:8080 |
| API | http://api.openchoreo.localhost:8080 |
| Deployed Apps | http://<env>.openchoreoapis.localhost:19080/<component>/... |
Default credentials: admin@openchoreo.dev / Admin@123
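As a quick reachability check (forcing `openchoreo.localhost` to loopback in case your resolver doesn't handle `.localhost` names):
curl -s -o /dev/null -w '%{http_code}\n' --resolve openchoreo.localhost:8080:127.0.0.1 http://openchoreo.localhost:8080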
If your cluster is running on a remote VM or server, use SSH tunneling to access OpenChoreo from your local machine:
ssh -L 8080:localhost:8080 \
-L 8443:localhost:8443 \
-L 19080:localhost:19080 \
-L 19443:localhost:19443 \
user@remote-host
This forwards:
- 8080/8443: Control Plane UI (Console and API)
- 19080/19443: Data Plane Gateway (Deployed applications)
Keep this SSH session open and access OpenChoreo via http://openchoreo.localhost:8080 in your local browser.
Moving to Production
This guide provides a quick way to explore OpenChoreo. For production deployments, follow these guides to harden your setup:
- Identity & Security: Replace default credentials with a real Identity Provider.
  - Identity Configuration (Google, Okta, etc.)
  - Secret Management (Vault, AWS Secrets Manager)
- Networking & Domains: Move away from localhost to your own domains.
  - Deployment Topology (TLS certificates, Multi-region, Multi-cluster)
- Infrastructure: Scale out and isolate your planes.
  - Multi-Cluster Connectivity (Isolate Control Plane from Data Planes)
  - Container Registry (Switch to ECR/GCR/ACR)
  - Observability (Configure persistent OpenSearch and retention)
Next Steps
- Deploy your first component to see OpenChoreo in action.
- Explore the sample applications.
Cleanup
Uninstall OpenChoreo components:
helm uninstall openchoreo-observability-plane -n openchoreo-observability-plane 2>/dev/null
helm uninstall openchoreo-build-plane -n openchoreo-build-plane 2>/dev/null
helm uninstall openchoreo-data-plane -n openchoreo-data-plane
helm uninstall openchoreo-control-plane -n openchoreo-control-plane
helm uninstall cert-manager -n cert-manager   # skip if cert-manager predated this guide
Delete namespaces and plane registrations:
kubectl delete dataplane default -n default 2>/dev/null
kubectl delete buildplane default -n default 2>/dev/null
kubectl delete observabilityplane default -n default 2>/dev/null
kubectl delete namespace openchoreo-control-plane openchoreo-data-plane openchoreo-build-plane openchoreo-observability-plane cert-manager 2>/dev/null
If you created a k3d cluster for this guide:
k3d cluster delete openchoreo
Troubleshooting
Pods stuck in Pending
kubectl describe pod <pod-name> -n <namespace>
Common causes:
- Insufficient resources (increase RAM/CPU allocation)
- PVC issues (check storage provisioner)
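To see how much of each node's capacity is already requested:
kubectl describe nodes | grep -A 8 'Allocated resources'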
Agent not connecting
kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=20
kubectl logs -n openchoreo-control-plane -l app=cluster-gateway --tail=20
Common issues:
- DataPlane/BuildPlane CR not created
- PlaneID mismatch: The `planeID` in the plane CR must match the `clusterAgent.planeId` Helm value
- CA certificate mismatch
- Network connectivity between namespaces
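A planeID mismatch is the most common cause; you can compare the CR against your Helm overrides (an empty result from helm get values means the default planeId is in use):
kubectl get dataplane default -n default -o jsonpath='{.spec.planeID}'
helm get values openchoreo-data-plane -n openchoreo-data-plane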
Gateway pods crash on macOS
If you see "Failed to create temporary file" errors:
helm upgrade openchoreo-data-plane ... --set gateway.envoy.mountTmpVolume=true