Version: v0.6.x

Single Cluster Setup

This guide provides step-by-step instructions for setting up a local development environment for OpenChoreo using k3d (k3s in Docker).

Prerequisites

  • Docker v20.10+ installed and running
  • k3d v5.8+ installed
  • kubectl v1.32+ installed
  • Helm v3.12+ installed (Helm v4 is not supported yet)

Verify Prerequisites

Before proceeding, verify that all tools are installed and meet the minimum version requirements:

# Check Docker (should be v20.10+)
docker --version

# Check k3d (should be v5.8+)
k3d --version

# Check kubectl (should be v1.32+)
kubectl version --client

# Check Helm (should be v3.12+)
helm version --short

Make sure Docker is running:

docker info

Note: If you're using Colima, set the K3D_FIX_DNS=0 environment variable when creating clusters to avoid DNS issues. See k3d-io/k3d#1449 for more details.
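
For example, on Colima you can export the variable in the same shell session before running the cluster-create command shown in the Quick Setup section below:

# Disable the k3d DNS fix for this shell session (Colima only)
export K3D_FIX_DNS=0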

Quick Setup

This setup uses pre-built images and Helm charts from the OpenChoreo registry.

1. Create OpenChoreo k3d Cluster

Create a new k3d cluster using the provided configuration:

curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/k3d/single-cluster/config.yaml | k3d cluster create --config=-

This will:

  • Create a cluster named "openchoreo"
  • Set up a single k3d cluster with 1 server and 2 agents
  • Configure port mappings for accessing OpenChoreo services:
    • Control Plane: localhost:8080 (HTTP), localhost:8443 (HTTPS)
    • Data Plane: localhost:9080 (HTTP), localhost:9443 (HTTPS) - for deployed workloads
    • Build Plane: localhost:10081 (Argo Workflows UI), localhost:10082 (Container Registry)
    • Observability Plane: localhost:11081 (OpenSearch Dashboard), localhost:11082 (OpenSearch API)
  • Set kubectl context to "k3d-openchoreo"

Tip: For faster setup or if you have a slow network, consider preloading images after creating the cluster. See the Image Preloading section at the end of this guide.

Verify the cluster is running:

kubectl get nodes

You should see nodes in Ready status.
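
If you prefer to block until the nodes report Ready rather than polling manually, a kubectl wait along these lines works too:

# Wait up to 5 minutes for all nodes to become Ready
kubectl wait --for=condition=Ready nodes --all --timeout=300s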

2. Install OpenChoreo Control Plane

Install the OpenChoreo control plane using the following helm install command. This will create the openchoreo-control-plane namespace automatically:

helm install openchoreo-control-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-control-plane \
--version 0.6.0 \
--kube-context k3d-openchoreo \
--namespace openchoreo-control-plane \
--create-namespace \
--values https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/k3d/single-cluster/values-cp.yaml

Wait for the installation to complete and verify all pods are running:

kubectl get pods -n openchoreo-control-plane

You should see pods for:

  • controller-manager (Running)
  • cluster-gateway-* (Running) - Gateway for agent-based data plane communication
  • cert-manager-* (3 pods, all Running)
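
To block until the control plane is fully up instead of re-running the get command, the same kubectl wait pattern used later in this guide works here as well:

# Wait up to 5 minutes for all control plane pods to become Ready
# (if a Completed helper pod lingers in the namespace, the wait may time out on it)
kubectl wait --for=condition=Ready pod --all -n openchoreo-control-plane --timeout=300s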

3. Install OpenChoreo Data Plane

Install the OpenChoreo data plane using the following helm install command. This will create the openchoreo-data-plane namespace automatically:

helm install openchoreo-data-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-data-plane \
--version 0.6.0 \
--kube-context k3d-openchoreo \
--namespace openchoreo-data-plane \
--create-namespace \
--values https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/k3d/single-cluster/values-dp.yaml

Wait for the data plane components to be ready:

kubectl get pods -n openchoreo-data-plane

You should see pods for:

  • cluster-agent-* (Running) - Agent for secure control plane communication
  • openchoreo-data-plane-gateway-* (Running)
  • external-secrets-* (3 pods, all Running)
  • fluent-bit-* (Running on each node)
  • gateway-default-* (Running)
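
Because fluent-bit needs to run on every node, it is typically deployed as a DaemonSet; a quick way to confirm it has rolled out across all nodes:

# DESIRED and READY should match (one fluent-bit pod per schedulable node)
kubectl get daemonset -n openchoreo-data-plane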

Configure DataPlane

Register the data plane with the control plane by running:

curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/add-data-plane.sh | bash -s -- --control-plane-context k3d-openchoreo --enable-agent --agent-ca-namespace openchoreo-control-plane

This script creates a DataPlane resource in the default namespace with agent-based communication enabled. The agent provides secure WebSocket-based connectivity between the control plane and data plane without requiring direct Kubernetes API access.

Verify the DataPlane was created and check its status:

# Check DataPlane resource
kubectl get dataplane -n default

# Verify agent mode is enabled
kubectl get dataplane default -n default -o jsonpath='{.spec.agent.enabled}'

# Check DataPlane status
kubectl get dataplane default -n default -o jsonpath='{.status.conditions[?(@.type=="Ready")]}' | jq '.'

The agent.enabled field should show true, and the Ready condition should have status True once the agent successfully connects to the control plane.
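
If the Ready condition stays False, the agent's logs on the data plane side are the first place to look. Assuming the Deployment is named cluster-agent to match the pod prefix shown earlier (adjust the name if your release differs):

# Tail the cluster agent logs to check its WebSocket connection to the control plane
kubectl logs -n openchoreo-data-plane deploy/cluster-agent --tail=50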

4. Install OpenChoreo Build Plane (Optional)

The Build Plane is required if you plan to use OpenChoreo's internal CI capabilities. If you're only deploying pre-built container images, you can skip this step.

Install the OpenChoreo build plane for CI/CD capabilities using the following helm install command. This will create the openchoreo-build-plane namespace automatically:

helm install openchoreo-build-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-build-plane \
--version 0.6.0 \
--kube-context k3d-openchoreo \
--namespace openchoreo-build-plane \
--create-namespace \
--values https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/k3d/single-cluster/values-bp.yaml

Wait for the build plane components to be ready:

kubectl get pods -n openchoreo-build-plane

You should see pods for:

  • argo-server-* (Running)
  • argo-workflow-controller-* (Running)
  • registry-* (Running)
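
Since Argo Workflows ships its own CustomResourceDefinitions, you can also confirm its API resources are registered, mirroring the CRD check used in the verification step below:

# Argo Workflows CRDs (workflows, workflowtemplates, cronworkflows, ...) live under argoproj.io
kubectl get crds | grep argoproj.io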

Configure BuildPlane

Register the build plane with the control plane by running:

curl -s https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/add-build-plane.sh | bash -s -- --control-plane-context k3d-openchoreo

This script creates a BuildPlane resource in the default namespace.

Verify that the BuildPlane was created:

kubectl get buildplane -n default
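
For more detail on what the script registered, you can describe the resource to see its full spec and status:

# Inspect the BuildPlane resource created by the script
kubectl describe buildplane default -n default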

5. Install OpenChoreo Observability Plane (Optional)

Non-HA mode

By default, the OpenChoreo Observability Plane deploys OpenSearch in a highly available (HA) configuration, which requires more CPU and memory. If you are running in a resource-constrained environment or simply do not need HA guarantees, you can install the Observability Plane in Non-HA mode instead. This lightweight option reduces resource usage and is suitable for small or development clusters. The command below creates the openchoreo-observability-plane namespace automatically.

helm install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--create-namespace \
--version 0.6.0 \
--kube-context k3d-openchoreo \
--namespace openchoreo-observability-plane \
--values https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/k3d/single-cluster/values-op.yaml \
--set openSearch.enabled=true \
--set openSearchCluster.enabled=false

HA mode

Prerequisites

Install the OpenSearch Kubernetes operator as follows. This will create the openchoreo-observability-plane namespace automatically:

helm repo add opensearch-operator https://opensearch-project.github.io/opensearch-k8s-operator/
helm repo update
helm install opensearch-operator opensearch-operator/opensearch-operator \
--create-namespace \
--namespace openchoreo-observability-plane \
--version 2.8.0

Install the OpenChoreo observability plane for monitoring and logging capabilities using the following helm install command.

helm install openchoreo-observability-plane oci://ghcr.io/openchoreo/helm-charts/openchoreo-observability-plane \
--version 0.6.0 \
--kube-context k3d-openchoreo \
--namespace openchoreo-observability-plane \
--values https://raw.githubusercontent.com/openchoreo/openchoreo/release-v0.6/install/k3d/single-cluster/values-op.yaml

Wait for the observability plane components to be ready:

kubectl get pods -n openchoreo-observability-plane

You should see pods for:

  • observer-* (Running) - Log processing service
  • opensearch-master-0 (Running) - Log storage backend
  • opensearch-cluster-setup-* (Completed) - One-time setup job

Verify that all pods are ready:

kubectl wait --for=condition=Ready pod --all -n openchoreo-observability-plane --timeout=600s

Verify FluentBit is sending logs to OpenSearch:

# Check if kubernetes indices are being created
kubectl exec -n openchoreo-observability-plane opensearch-master-0 -- curl -s "http://localhost:9200/_cat/indices?v" | grep kubernetes

# Check recent log count
kubectl exec -n openchoreo-observability-plane opensearch-master-0 -- curl -s "http://localhost:9200/kubernetes-*/_count" | jq '.count'

If the indices exist and the count is greater than 0, FluentBit is successfully collecting and storing logs.
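
You can also confirm the OpenSearch cluster itself is healthy using the same exec-based pattern:

# Cluster health should report "green" (single-node, non-HA setups often report "yellow")
kubectl exec -n openchoreo-observability-plane opensearch-master-0 -- curl -s "http://localhost:9200/_cluster/health" | jq '.status'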

Configure Observer Integration​

Configure the DataPlane and BuildPlane to use the observer service.

# Configure DataPlane to use observer service
kubectl patch dataplane default -n default --type merge -p '{"spec":{"observer":{"url":"http://observer.openchoreo-observability-plane:8080","authentication":{"basicAuth":{"username":"dummy","password":"dummy"}}}}}'

# Configure BuildPlane to use observer service
kubectl patch buildplane default -n default --type merge -p '{"spec":{"observer":{"url":"http://observer.openchoreo-observability-plane:8080","authentication":{"basicAuth":{"username":"dummy","password":"dummy"}}}}}'

Important: Without this configuration, build logs will not be pushed to the observability plane and application logs will not be visible in the Backstage portal, significantly impacting the developer experience.

This configuration enables:

  • Application logs to appear in Backstage portal
  • Enhanced logging and monitoring across build and data planes
  • Integration with the observability plane for comprehensive platform monitoring
  • Centralized log publishing and access through the observer service

Verify the observer configuration:

# Check DataPlane observer config
kubectl get dataplane default -n default -o jsonpath='{.spec.observer}' | jq '.'

# Check BuildPlane observer config
kubectl get buildplane default -n default -o jsonpath='{.spec.observer}' | jq '.'

6. Verify OpenChoreo Installation

Check that default OpenChoreo resources were created:

# Check default organization and project
kubectl get organizations,projects,environments -A

# Check default component types
kubectl get componenttypes -n default

# Check all OpenChoreo CRDs
kubectl get crds | grep openchoreo

# Check gateway resources
kubectl get gateway,httproute -n openchoreo-data-plane

Check that all components are running:

# Check cluster info
kubectl cluster-info --context k3d-openchoreo

# Check control plane pods
kubectl get pods -n openchoreo-control-plane

# Check data plane pods
kubectl get pods -n openchoreo-data-plane

# Check build plane pods (if installed)
kubectl get pods -n openchoreo-build-plane

# Check observability plane pods (if installed)
kubectl get pods -n openchoreo-observability-plane

# Check nodes (should be Ready)
kubectl get nodes

Next Steps

After completing this setup you can:

  1. Deploy your first component to get started with OpenChoreo
  2. Test the GCP microservices demo to see multi-component applications in action
  3. Deploy additional sample applications from the OpenChoreo samples
  4. Develop and test new OpenChoreo features

Image Preloading (Optional)

If you have a slow network or want to save bandwidth when re-creating clusters, you can preload images before installing components. This pulls images to your host machine first, then imports them into the k3d cluster.

Run this after creating the cluster (Step 1) but before installing components (Steps 2-5). See the k3d single-cluster README for detailed preloading instructions.
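
As a rough sketch of the pattern (the image list and exact references are in the linked README; the image name below is only a placeholder):

# Pull an image to the host first, then import it into the k3d cluster
docker pull ghcr.io/openchoreo/<image>:<tag>
k3d image import ghcr.io/openchoreo/<image>:<tag> -c openchoreo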

Cleaning Up

To completely remove the development environment:

# Delete the k3d cluster
k3d cluster delete openchoreo
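
To confirm the cluster is gone (k3d normally also removes the k3d-openchoreo kubeconfig context it created):

# The openchoreo cluster should no longer be listed
k3d cluster list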