
Ceph and Rook-Ceph

So what's Rook-Ceph then? Rook is an open-source cloud-native storage orchestrator, with Ceph as the underlying storage technology. It is a CNCF graduated project, meaning it has a high level of maturity and support in the cloud-native community.

A storage orchestrator provides automation of deployment, configuration, management, scaling, monitoring and more of the storage layer.


Before we install our storage solution, we need one quick preparatory step. Kubernetes CSI has the concept of snapshots, and the feature is enabled in modern Kubernetes distributions; however, we need to install the resource definitions (CRDs, in k8s speak) before we can use those resources. Doing this prior to installing Ceph means Ceph will install the appropriate configuration to work with CSI snapshots in a k8s-native fashion.

You can find full details in the kubernetes-csi/external-snapshotter repository on GitHub.

For our purposes, let's just default everything:

git clone https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter
kubectl kustomize client/config/crd | kubectl create -f -
kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
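Once applied, the snapshot CRDs should be registered with the API server. A quick way to confirm (CRD names as defined by the external-snapshotter project):

```shell
# The three snapshot CRDs installed by external-snapshotter;
# this returns all three if the kustomize apply above succeeded:
kubectl get crd volumesnapshots.snapshot.storage.k8s.io \
  volumesnapshotclasses.snapshot.storage.k8s.io \
  volumesnapshotcontents.snapshot.storage.k8s.io
```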

Install the rook-ceph operator

Rook-Ceph consists of two parts, the Rook-Ceph operator and a CephCluster, or multiple CephClusters. Operators are a typical pattern you will encounter within Kubernetes that we have not covered before, and in short, they extend Kubernetes functionality to manage new kinds of resources. In this case, the CephCluster resource.

Technically, Ceph can use any of the following:

  • Raw devices (no partitions or formatted filesystem)
  • Raw partitions (no formatted filesystem)
  • LVM Logical Volumes (no formatted filesystem)
  • Encrypted devices (no formatted filesystem)
  • Multipath devices (no formatted filesystem)
  • Persistent Volumes available from a storage class in block mode

We're going to stick with a dedicated disk, however, which in our case is /dev/sda on each node.
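Before handing a disk to Ceph, it's worth confirming on each node that the disk really is empty. A quick sanity check (assuming /dev/sda, as above):

```shell
# An empty FSTYPE column and no child partitions means the disk is
# raw and usable by Ceph:
lsblk -f /dev/sda

# wipefs lists any leftover filesystem or partition-table signatures;
# add -a to erase them (destructive!):
wipefs /dev/sda
```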

Let's install the operator using our go-to tool, Helm. For our use case the operator defaults are mostly fine, but you may want to browse through the official documentation.

helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace \
  --namespace rook-ceph \
  rook-ceph rook-release/rook-ceph \
  --set csi.kubeletDirPath=/var/lib/k0s/kubelet

You should see the Helm chart's release notes if the install succeeded:

NAME: rook-ceph
LAST DEPLOYED: Sat Mar  2 07:05:39 2024
NAMESPACE: rook-ceph
STATUS: deployed
The Rook Operator has been installed. Check its status by running:
  kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"

Visit https://rook.io/docs/rook/latest for instructions on how to create and configure Rook clusters

Important Notes:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
- Each CephCluster must be deployed to its own namespace, the samples use `rook-ceph` for the namespace.
- The sample manifests assume you also installed the rook-ceph operator in the `rook-ceph` namespace.
- The helm chart includes all the RBAC required to create a CephCluster CRD in the same namespace.
- Any disk devices you add to the cluster in the 'CephCluster' must be empty (no filesystem and no partitions).

Make sure the operator has been installed, and is up and running:

kubectl -n rook-ceph get pods
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-7b585b7fb6-2qktk   1/1     Running   0          36s
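The operator also registers the custom resource definitions it manages. The CephCluster definition, for example, should now exist in the cluster:

```shell
# The operator registers the CephCluster (and related) CRDs;
# this returns the CRD if registration succeeded:
kubectl get crd cephclusters.ceph.rook.io
```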

Install a CephCluster

Now that we have the Ceph operator, we can use it to instantiate a new CephCluster.

We'll set some basic parameters, but check out the full configuration for options that may be applicable to your setup.

helm install --create-namespace \
  --namespace rook-ceph \
  rook-ceph-cluster rook-release/rook-ceph-cluster \
  --set operatorNamespace=rook-ceph \
  --set toolbox.enabled=true

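Provisioning takes several minutes. You can watch the CephCluster resource until its phase reports Ready (column names as reported by Rook):

```shell
# Watch the cluster resource; PHASE moves through Progressing to Ready,
# and HEALTH should eventually show HEALTH_OK:
kubectl -n rook-ceph get cephcluster -w

# Meanwhile, the operator spins up the mon, mgr and osd pods:
kubectl -n rook-ceph get pods
```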
Examine storage status

Given that we chose to install the Ceph toolbox above, this is easy:

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph status
  cluster:
    id:     7ca62de3-1762-4da1-9392-7a8bfc5f77ac
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 13m)
    mgr: a(active, since 11m), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 12m), 3 in (since 12m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   12 pools, 169 pgs
    objects: 238 objects, 799 KiB
    usage:   98 MiB used, 1.4 TiB / 1.4 TiB avail
    pgs:     169 active+clean

  io:
    client:   852 B/s rd, 1 op/s rd, 0 op/s wr

The key thing to see here is the health: HEALTH_OK line. The output also shows that we have 3 mon instances, 2 mgr instances (one active, one standby), and 3 OSDs.
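The toolbox gives you the full Ceph CLI, so other health and capacity commands are run the same way (pod lookup as above):

```shell
# Capture the toolbox pod name once, then reuse it:
TOOLS_POD=$(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" \
  -o jsonpath='{.items[0].metadata.name}')

# Per-OSD status, including host placement and usage:
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph osd status

# Cluster-wide and per-pool capacity:
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph df
```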

See the asciicast below: the operator progresses through provisioning a Ceph cluster, first setting up the mons, then the mgr, and finally the OSD resources.