On Self-Hosted Kubernetes
Try OpenChoreo on any self-hosted Kubernetes cluster, whether it's running on your laptop, a VM, or an on-premises environment. This is the fastest way to explore OpenChoreo without cloud provider costs.
What you'll get:
- Full OpenChoreo installation on your cluster
- All four planes: Control, Data, Build, and Observability
- Access via .localhost domains
- ~15-20 minutes to complete
Prerequisites
The setup differs depending on whether you create a local k3d cluster or use an existing Kubernetes cluster.
k3d
k3d runs k3s in Docker containers. You'll need:
- Docker v26.0+ with at least 8 GB RAM and 4 CPU cores allocated
- Disk space: ~10 GB free
- k3d v5.8+
- kubectl v1.32+
- Helm v3.12+
Verify your tools:
docker --version && docker info > /dev/null
k3d --version
kubectl version --client
helm version --short
On macOS, for optimal compatibility and to avoid buildpack build issues, we recommend using Colima with VZ and Rosetta support:
colima start --vm-type=vz --vz-rosetta --cpu 4 --memory 8
Set K3D_FIX_DNS=0 when creating clusters to avoid DNS issues. See k3d-io/k3d#1449.
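For example, applied inline to the cluster-creation command used in the next step:
curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.10/install/k3d/single-cluster/config.yaml | K3D_FIX_DNS=0 k3d cluster create --config=-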
Create Cluster
curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.10/install/k3d/single-cluster/config.yaml | k3d cluster create --config=-
This creates a cluster named openchoreo with port mappings for Control Plane (8080/8443), Data Plane (19080/19443), and Build Plane (10081). Your kubectl context is automatically set to k3d-openchoreo.
Fluent Bit (used for observability log collection) requires a unique Machine ID (/etc/machine-id) to start. By default, k3d node containers do not generate this file. If you are enabling observability, you must manually generate this ID on every node in your cluster before installation:
docker exec <node name> sh -c "cat /proc/sys/kernel/random/uuid | tr -d '-' > /etc/machine-id"
For example, to generate the machine ID for the k3d-openchoreo-op-server-0 node:
docker exec k3d-openchoreo-op-server-0 sh -c "cat /proc/sys/kernel/random/uuid | tr -d '-' > /etc/machine-id"
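If your cluster has several nodes, here is a minimal sketch that loops over the server and agent containers, assuming the default k3d naming scheme for a cluster named openchoreo:
# Generate a machine ID on every server and agent node container
for node in $(docker ps --format '{{.Names}}' | grep -E '^k3d-openchoreo.*(server|agent)-[0-9]+$'); do
  docker exec "$node" sh -c "cat /proc/sys/kernel/random/uuid | tr -d '-' > /etc/machine-id"
done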
Install cert-manager
helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
Wait for cert-manager to be ready:
kubectl wait --for=condition=available deployment/cert-manager -n cert-manager --timeout=120s
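The chart also deploys the cainjector and webhook; if you want to wait for everything it installed:
kubectl wait --for=condition=available deployment --all -n cert-manager --timeout=120s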
Existing Cluster
To install on an existing cluster, you'll need:
- Kubernetes 1.32+ cluster with at least 8 GB RAM and 4 CPU cores
- kubectl v1.32+ configured to access your cluster
- Helm v3.12+
- cert-manager installed in your cluster
Use containerd as the container runtime. If you plan to use the Build Plane, configure your runtime to allow HTTP registries before proceeding; see the note in Step 3: Setup Build Plane.
Verify your cluster access:
kubectl version
helm version --short
kubectl get nodes
kubectl auth can-i '*' '*' --all-namespaces
Note Your Ingress Configuration
Check if you have an ingress controller:
kubectl get ingressclass
Common configurations:
- Rancher Desktop: Built-in Traefik on ports 80/443, class name traefik
- Docker Desktop: No default ingress
- OrbStack: Built-in ingress on ports 80/443
OpenChoreo's control plane gateway needs to bind to ports 80/443. If you have Traefik or another ingress controller running on these ports, you must either:
- Disable it (e.g., for Rancher Desktop: disable Traefik in Preferences → Kubernetes)
- Configure it to use different ports
You can verify port availability with:
# Check if anything is listening on port 80
lsof -i :80
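Checking the HTTPS port as well doesn't hurt:
# Check if anything is listening on port 443
lsof -i :443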
Step 1: Setup Control Plane
Due to Rosetta emulation issues, macOS users (whether on Rancher Desktop, Docker Desktop, k3d, kind, or Colima) should add --set gateway.envoy.mountTmpVolume=true. Non-macOS users can omit this flag.
For k3d:
helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 0.10.0 \
--namespace openchoreo-control-plane \
--create-namespace \
--set global.baseDomain=openchoreo.localhost \
--set global.port=":8080" \
--set gateway.httpPort=80 \
--set gateway.httpsPort=443 \
--set thunder.configuration.server.publicUrl=http://thunder.openchoreo.localhost:8080 \
--set thunder.configuration.gateClient.hostname=thunder.openchoreo.localhost \
--set thunder.configuration.gateClient.port=8080 \
--set thunder.configuration.gateClient.scheme="http" \
--set gateway.envoy.mountTmpVolume=true
For an existing cluster (using the standard ports 80/443):
helm upgrade --install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 0.10.0 \
--namespace openchoreo-control-plane \
--create-namespace \
--set global.baseDomain=openchoreo.localhost \
--set global.port=":80" \
--set gateway.httpPort=80 \
--set gateway.httpsPort=443 \
--set thunder.configuration.server.publicUrl=http://thunder.openchoreo.localhost:80 \
--set thunder.configuration.gateClient.hostname=thunder.openchoreo.localhost \
--set thunder.configuration.gateClient.port=80 \
--set thunder.configuration.gateClient.scheme="http" \
--set gateway.envoy.mountTmpVolume=true
This installs the control plane into the openchoreo-control-plane namespace, with these settings:
- global.baseDomain: the base domain for all services. The console will be at openchoreo.localhost, the API at api.openchoreo.localhost.
- global.port: appended to URLs since we're using non-standard port 8080.
- gateway.httpPort and gateway.httpsPort: the ports where KGateway listens for incoming traffic.
- thunder.configuration.*: configures Thunder, the built-in identity provider. These settings tell Thunder where it's accessible and how to reach the API gateway.
This installs:
- controller-manager: the controllers that reconcile OpenChoreo resources and manage the platform lifecycle.
- openchoreo-api: the REST API server that the console and CLI talk to.
- backstage: the web console for managing your platform.
- thunder: the built-in identity provider handling authentication and OAuth flows.
- cluster-gateway: accepts WebSocket connections from cluster-agents in remote planes.
- kgateway: the gateway controller for routing external traffic to services.
- OpenChoreo CRDs: Organization, Project, Component, Environment, DataPlane, BuildPlane, and others that define the platform's API.
The control plane is OpenChoreo's brain. In production, you'd typically run this in its own dedicated cluster, isolated from your workloads.
For all available configuration options, see the Control Plane Helm Reference.
Wait for the control plane deployments and jobs to finish:
kubectl wait -n openchoreo-control-plane --for=condition=available --timeout=300s deployment --all
kubectl wait -n openchoreo-control-plane --for=condition=complete job --all
Create a Certificate for Gateway TLS:
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: control-plane-tls
  namespace: openchoreo-control-plane
spec:
  secretName: control-plane-tls
  issuerRef:
    name: openchoreo-selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
    - "*.openchoreo.localhost"
EOF
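Before moving on, you can wait for cert-manager to issue the certificate (Certificate resources expose a Ready condition):
kubectl wait --for=condition=Ready certificate/control-plane-tls -n openchoreo-control-plane --timeout=120s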
Step 2: Setup Data Plane
helm upgrade --install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 0.10.0 \
--namespace openchoreo-data-plane \
--create-namespace \
--set gateway.httpPort=19080 \
--set gateway.httpsPort=19443 \
--set external-secrets.enabled=true \
--set gatewayController.enabled=false \
--set gateway.envoy.mountTmpVolume=true \
--set gateway.selfSignedIssuer.enabled=false
This installs the data plane into the openchoreo-data-plane namespace, with these settings:
- gateway.httpPort and gateway.httpsPort: the ports where KGateway listens for traffic to your applications. We use 19080/19443 to keep them distinct from the control plane's ports.
- external-secrets.enabled: installs the External Secrets Operator for syncing secrets from external stores.
- gateway.envoy.mountTmpVolume: fixes Envoy crashes on macOS. Non-macOS users can omit this flag.
This installs:
- cluster-agent: maintains a WebSocket connection to the control plane's cluster-gateway. This is how the control plane sends deployment instructions to the data plane.
- gateway: KGateway with an Envoy proxy that routes incoming traffic to your deployed applications.
- fluent-bit: collects logs from your workloads and forwards them to the observability plane.
- external-secrets: syncs secrets from external secret stores like Vault or AWS Secrets Manager.
- Gateway API CRDs: Gateway, HTTPRoute, and other resources for traffic routing.
The data plane is where your workloads actually run. In this guide we're installing it in the same cluster as the control plane, but in production you'd typically have it in a completely separate cluster. This separation is intentional: your application code never runs alongside the control plane, and the control plane's credentials are never exposed to your workloads.
For all available configuration options, see the Data Plane Helm Reference.
Create a Certificate for Gateway TLS:
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: openchoreo-gateway-tls
  namespace: openchoreo-data-plane
spec:
  secretName: openchoreo-gateway-tls
  issuerRef:
    name: openchoreo-selfsigned-issuer
    kind: ClusterIssuer
  dnsNames:
    - "*.openchoreoapis.localhost"
EOF
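As before, you can wait for the certificate to be issued:
kubectl wait --for=condition=Ready certificate/openchoreo-gateway-tls -n openchoreo-data-plane --timeout=120s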
Register with the Control Plane
CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-data-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
The control plane only accepts agent connections signed by a CA it recognizes. When you installed the data plane, cert-manager generated a CA and used it to sign the cluster-agent's client certificate. This command extracts that CA so you can tell the control plane about it.
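If you want to sanity-check the extracted CA, openssl can print its subject and validity window:
echo "$CA_CERT" | openssl x509 -noout -subject -dates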
kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: DataPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-dataplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$CA_CERT" | sed 's/^/        /')
  gateway:
    organizationVirtualHost: "openchoreoapis.internal"
    publicVirtualHost: "openchoreoapis.localhost"
  secretStoreRef:
    name: default
EOF
This creates a DataPlane resource that tells the control plane about your data plane, with these settings:
- planeID: identifies this data plane. Must match what the cluster-agent was configured with, which defaults to default-dataplane.
- clusterAgent.clientCA: the CA certificate that signed the agent's client certificate. The control plane uses this to verify incoming connections.
- gateway.publicVirtualHost: where your deployed applications become accessible. When you deploy a component later, it'll be reachable at something like http://dev.openchoreoapis.localhost:19080/your-component/.
- secretStoreRef: references the External Secrets ClusterSecretStore for managing secrets.
Verify
kubectl get dataplane -n default
The cluster-agent should now be connected. You can check its logs:
kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=10
Step 3: Setup Build Plane (Optional)
The Build Plane enables OpenChoreo's built-in CI capabilities. It runs Argo Workflows and hosts a container registry for your built images.
For k3d:
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
--version 0.10.0 \
--namespace openchoreo-build-plane \
--create-namespace \
--set external-secrets.enabled=false \
--set global.defaultResources.registry.endpoint=host.k3d.internal:10082 \
--set registry.service.type=LoadBalancer
This installs the build plane with these settings:
- global.defaultResources.registry.endpoint: the address where built images are pushed and pulled. host.k3d.internal is a special hostname that k3d nodes resolve to the host machine.
- registry.service.type: exposes the registry via LoadBalancer so it's accessible from outside the cluster.
This installs:
- cluster-agent: connects to the control plane to receive build instructions.
- argo-workflows: executes the actual build pipelines as Kubernetes workflows.
- registry: a container registry that stores your built images.
- Argo Workflows CRDs: Workflow, WorkflowTemplate, and other resources for defining build pipelines.
For all available configuration options, see the Build Plane Helm Reference.
For an existing cluster:
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
--version 0.10.0 \
--namespace openchoreo-build-plane \
--create-namespace \
--set external-secrets.enabled=false \
--set global.defaultResources.registry.endpoint=<REGISTRY_ENDPOINT> \
--set registry.service.type=LoadBalancer \
--set registry.service.port=10082
This installs the build plane with these settings:
- global.defaultResources.registry.endpoint: the address where built images are pushed and pulled. This needs to be accessible from both the build pods (for pushing) and the kubelet (for pulling). Common values are host.docker.internal:10082 or your node's IP address.
- registry.service.type and registry.service.port: expose the registry via LoadBalancer.
This installs:
- cluster-agent: connects to the control plane to receive build instructions.
- argo-workflows: executes the actual build pipelines as Kubernetes workflows.
- registry: a container registry that stores your built images.
- Argo Workflows CRDs: Workflow, WorkflowTemplate, and other resources for defining build pipelines.
For all available configuration options, see the Build Plane Helm Reference.
The Build Plane deploys an HTTP container registry. Container runtimes default to HTTPS for registries. If image pulls fail with "http: server gave HTTP response to HTTPS client", configure your container runtime to allow HTTP for this registry. For Rancher Desktop users, see Configuring Private Registries.
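As an illustration, for a containerd-based runtime that reads registry settings from a certs.d directory, a hosts.toml entry along these lines marks the registry as plain HTTP. Run it on each node; the endpoint host.docker.internal:10082 is just an example, and the exact path depends on how your containerd is configured:
# Assumes containerd is configured with config_path = "/etc/containerd/certs.d"
mkdir -p /etc/containerd/certs.d/host.docker.internal:10082
cat > /etc/containerd/certs.d/host.docker.internal:10082/hosts.toml <<'EOF'
server = "http://host.docker.internal:10082"

[host."http://host.docker.internal:10082"]
  capabilities = ["pull", "resolve", "push"]
EOF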
Register with the Control Plane
BP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-build-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: BuildPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-buildplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$BP_CA_CERT" | sed 's/^/        /')
EOF
The planeID must match the helm chart's default of default-buildplane. Like the data plane, the build plane could run in a completely separate cluster if you wanted to isolate your CI workloads.
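If you do want a custom ID, change it on both sides. A sketch with a hypothetical ID my-buildplane, using the clusterAgent.planeId Helm value noted in Troubleshooting:
# Hypothetical: custom plane ID (must match spec.planeID in the BuildPlane CR)
helm upgrade --install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
  --version 0.10.0 \
  --namespace openchoreo-build-plane \
  --reuse-values \
  --set clusterAgent.planeId=my-buildplane
Then set spec.planeID: "my-buildplane" in the BuildPlane resource above.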
Verify
kubectl get buildplane -n default
kubectl logs -n openchoreo-build-plane -l app=cluster-agent --tail=10
Step 4: Setup Observability Plane (Optional)
helm upgrade --install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 0.10.0 \
--namespace openchoreo-observability-plane \
--create-namespace \
--set openSearch.enabled=true \
--set openSearchCluster.enabled=false \
--set security.oidc.jwksUrl="http://thunder-service.openchoreo-control-plane.svc.cluster.local:8090/oauth2/jwks" \
--set external-secrets.enabled=false \
--timeout 10m
This installs the observability plane with these settings:
- openSearch.enabled: deploys OpenSearch for storing logs and traces.
- openSearchCluster.enabled: set to false to use the simpler single-node deployment instead of the operator-based cluster.
- security.oidc.jwksUrl: the JWKS endpoint for validating JWT tokens. This points to Thunder's JWKS endpoint so the Observer API can authenticate requests.
This installs:
- cluster-agent: connects to the control plane.
- opensearch: stores logs and traces from your workloads.
- observer: a REST API that abstracts OpenSearch. The console and other components query logs through this instead of talking to OpenSearch directly.
- opentelemetry-collector: receives traces and metrics from your applications.
- prometheus: collects metrics from your workloads (via kube-prometheus-stack).
The observability plane collects logs, metrics, and traces from your data and build planes. Like the other planes, it could run in a completely separate cluster in production.
For all available configuration options, see the Observability Plane Helm Reference.
Register with the Control Plane
OP_CA_CERT=$(kubectl get secret cluster-agent-tls -n openchoreo-observability-plane -o jsonpath='{.data.ca\.crt}' | base64 -d)
kubectl apply -f - <<EOF
apiVersion: openchoreo.dev/v1alpha1
kind: ObservabilityPlane
metadata:
  name: default
  namespace: default
spec:
  planeID: "default-observabilityplane"
  clusterAgent:
    clientCA:
      value: |
$(echo "$OP_CA_CERT" | sed 's/^/        /')
  observerURL: http://observer.openchoreo-observability-plane.svc.cluster.local:8080
EOF
The observerURL tells the control plane where to find the Observer API.
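To spot-check that the Observer API is reachable, you can port-forward the service named in that URL (18080 is an arbitrary local port; any HTTP response, even a 404, shows the service is up):
kubectl port-forward -n openchoreo-observability-plane svc/observer 18080:8080
# In a second terminal:
curl -i http://localhost:18080/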
Link Other Planes to Observability
kubectl patch dataplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
kubectl patch buildplane default -n default --type merge -p '{"spec":{"observabilityPlaneRef":"default"}}'
This tells the data plane and build plane to send their logs and traces to this observability plane.
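You can confirm both references took effect:
kubectl get dataplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}{"\n"}'
kubectl get buildplane default -n default -o jsonpath='{.spec.observabilityPlaneRef}{"\n"}'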
Verify
kubectl get observabilityplane -n default
kubectl logs -n openchoreo-observability-plane -l app=cluster-agent --tail=10
Enable log collection:
helm upgrade --install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 0.10.0 \
--namespace openchoreo-observability-plane \
--reuse-values \
--set fluent-bit.enabled=true \
--timeout 10m
Access OpenChoreo
| Service | URL |
|---|---|
| Console | http://openchoreo.localhost:8080 |
| API | http://api.openchoreo.localhost:8080 |
| Deployed Apps | http://<env>.openchoreoapis.localhost:19080/<component>/... |
Default credentials: admin@openchoreo.dev / Admin@123
If your cluster is running on a remote VM or server, use SSH tunneling to access OpenChoreo from your local machine:
ssh -L 8080:localhost:8080 \
-L 8443:localhost:8443 \
-L 19080:localhost:19080 \
-L 19443:localhost:19443 \
user@remote-host
This forwards the Control Plane UI (8080/8443) and Data Plane Gateway (19080/19443) to your local machine. Keep this SSH session open and access OpenChoreo via http://openchoreo.localhost:8080 in your local browser.
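Note that the .localhost hostnames must also resolve on your local machine. Most modern systems resolve *.localhost to loopback automatically; if yours doesn't, add the hostnames you use to /etc/hosts, for example:
127.0.0.1 openchoreo.localhost api.openchoreo.localhost thunder.openchoreo.localhost dev.openchoreoapis.localhost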
Moving to Production
This guide provides a quick way to explore OpenChoreo. For production deployments, follow these guides to harden your setup:
- Identity & Security: Replace default credentials with a real Identity Provider.
  - Identity Configuration (Google, Okta, etc.)
  - Secret Management (Vault, AWS Secrets Manager)
- Networking & Domains: Move away from localhost to your own domains.
  - Deployment Topology (TLS certificates, Multi-region, Multi-cluster)
- Infrastructure: Scale out and isolate your planes.
  - Multi-Cluster Connectivity (Isolate Control Plane from Data Planes)
  - Container Registry (Switch to ECR/GCR/ACR)
  - Observability (Configure persistent OpenSearch and retention)
Next Steps
- Deploy your first component to see OpenChoreo in action.
- Explore the sample applications.
Cleanup
Uninstall OpenChoreo components:
helm uninstall openchoreo-observability-plane -n openchoreo-observability-plane 2>/dev/null
helm uninstall openchoreo-build-plane -n openchoreo-build-plane 2>/dev/null
helm uninstall openchoreo-data-plane -n openchoreo-data-plane
helm uninstall openchoreo-control-plane -n openchoreo-control-plane
helm uninstall cert-manager -n cert-manager
Delete namespaces and plane registrations:
kubectl delete dataplane default -n default 2>/dev/null
kubectl delete buildplane default -n default 2>/dev/null
kubectl delete observabilityplane default -n default 2>/dev/null
kubectl delete namespace openchoreo-control-plane openchoreo-data-plane openchoreo-build-plane openchoreo-observability-plane cert-manager 2>/dev/null
If you created a k3d cluster for this guide:
k3d cluster delete openchoreo
Troubleshooting
Pods stuck in Pending
kubectl describe pod <pod-name> -n <namespace>
Common causes:
- Insufficient resources (increase RAM/CPU allocation)
- PVC issues (check storage provisioner)
Agent not connecting
kubectl logs -n openchoreo-data-plane -l app=cluster-agent --tail=20
kubectl logs -n openchoreo-control-plane -l app=cluster-gateway --tail=20
Common issues:
- DataPlane/BuildPlane CR not created
- PlaneID mismatch: the planeID in the plane CR must match the clusterAgent.planeId Helm value
- CA certificate mismatch
- Network connectivity between namespaces
Gateway pods crash on macOS
If you see "Failed to create temporary file" errors:
helm upgrade openchoreo-data-plane ... --set gateway.envoy.mountTmpVolume=true