Dynamic Volume provisioning using OpenEBS


A freshly deployed bare-metal cluster requires a lot of attention when it comes to volume provisioning. We can use a hostPath volume, which mounts a file or directory from the host node's filesystem into a Pod, but this is not something most Pods need. Another volume type is local: a local volume represents a mounted local storage device such as a disk, partition, or directory. However, local volumes can only be used as statically created PersistentVolumes; dynamic volume provisioning is not supported by default.

Since my cluster is not deployed in the cloud, but on my own physical servers managed by MAAS, I wanted to extend the cluster's capabilities with dynamic volume provisioning. OpenEBS turned out to be very helpful in this situation.

What is Dynamic Volume provisioning?

Dynamic volume provisioning allows storage volumes to be created on demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when it is requested by users. To enable dynamic provisioning, a cluster administrator needs to pre-create one or more StorageClass objects for users. StorageClass objects define which provisioner should be used and what parameters should be passed to that provisioner when dynamic provisioning is invoked.
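
As a sketch, dynamic provisioning ties a PersistentVolumeClaim to a StorageClass, and the class's provisioner creates the backing volume on demand. The names, provisioner, and parameters below are purely illustrative:

```yaml
# A StorageClass names a provisioner and the parameters it should
# receive (the provisioner and parameter values here are made up).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: example.io/provisioner   # hypothetical provisioner
parameters:
  type: ssd
---
# A PVC that references the class triggers dynamic provisioning;
# no PersistentVolume has to be pre-created by an administrator.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```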

What is OpenEBS?

OpenEBS helps Developers and Platform SREs easily deploy Kubernetes Stateful Workloads that require fast and highly reliable container-attached storage. OpenEBS turns any storage available on the Kubernetes worker nodes into local or distributed Kubernetes Persistent Volumes.

OpenEBS manages the storage available on each of the Kubernetes nodes and uses that storage to provide Local or Distributed (also known as Replicated) PersistentVolumes to Stateful workloads.

(source: https://openebs.io/docs)

Local Volumes

Local Volumes are accessible only from a single node in the cluster. Pods using a Local Volume have to be scheduled on the node where the volume is provisioned. Local Volumes are typically preferred for workloads like Cassandra, MongoDB, Elasticsearch, etc. that are distributed in nature and have high availability built into them.

OpenEBS can create PersistentVolumes using raw block devices or partitions, or using sub-directories on Hostpaths, or using LVM, ZFS, or sparse files.
The local volumes are directly mounted into the Stateful Pod, without any added overhead from OpenEBS in the data path, decreasing latency. OpenEBS provides additional tooling for Local Volumes for monitoring, backup/restore, disaster recovery, snapshots when backed by ZFS or LVM, capacity-based scheduling, and more.
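
For illustration, a claim for a local hostpath volume could look like the sketch below. The claim name and size are made up; openebs-hostpath is one of the StorageClasses a default OpenEBS installation provides:

```yaml
# PVC requesting a local volume backed by a hostpath sub-directory.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc   # illustrative name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce          # local volumes live on a single node
  resources:
    requests:
      storage: 5Gi           # illustrative size
```

Note that the class uses WaitForFirstConsumer binding, so the volume is only provisioned once a Pod using the claim is scheduled.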

Replicated (Distributed) Volumes

Replicated Volumes have their data synchronously replicated to multiple nodes, so volumes can sustain node failures. Replication can also be set up across availability zones, helping applications move across zones. Replicated Volumes are capable of enterprise storage features like snapshots, clones, volume expansion, and so forth. They are preferred for Stateful workloads like MySQL, Jira, GitLab, etc. Depending on the type of storage attached to the Kubernetes worker nodes and on application performance requirements, we can select from different storage engines: Jiva, cStor, or Mayastor. In the case of Replicated Volumes, OpenEBS creates a microservice for each Distributed PersistentVolume using one of its engines. The Stateful Pod writes the data to the OpenEBS engine, which synchronously replicates it to multiple nodes in the cluster. The OpenEBS engine itself is deployed as a Pod and orchestrated by Kubernetes. When the node running the Stateful Pod fails, the Pod is rescheduled to another node in the cluster and OpenEBS provides access to the data using the available copies on other nodes. Stateful Pods connect to the OpenEBS Distributed PersistentVolume using iSCSI (cStor and Jiva) or NVMe-oF (Mayastor).

It is worth mentioning that each storage provider or each engine can have different volume capabilities.

Access Modes

A PersistentVolume can be mounted on a host in any way supported by the resource provider. The ways Pods can access a PersistentVolume are described by access modes.

ReadWriteOnce (RWO) - the volume can be mounted as read-write by a single node. ReadWriteOnce can still allow multiple Pods to access the volume when the Pods are running on the same node.
ReadOnlyMany (ROX) - the volume can be mounted as read-only by many nodes.
ReadWriteMany (RWX) - the volume can be mounted as read-write by many nodes.
ReadWriteOncePod (RWOP) - the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod if you want to ensure that only one Pod across the whole cluster can read from or write to the PVC. This is only supported for CSI volumes and Kubernetes 1.22+.
Why is this so important? Because not every resource provider supports PersistentVolumes with, for instance, the ReadWriteMany (RWX) mode set.
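
As an illustration, the access mode is requested on the PersistentVolumeClaim. A claim that must stay exclusive to a single Pod could be sketched like this (the name and size are made up):

```yaml
# PVC demanding single-Pod exclusivity via ReadWriteOncePod
# (requires a CSI driver and Kubernetes 1.22+).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exclusive-claim    # illustrative name
spec:
  accessModes:
    - ReadWriteOncePod     # only one Pod in the whole cluster may use it
  resources:
    requests:
      storage: 1Gi         # illustrative size
```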

Dynamic NFS Provisioner

It turns out that applications that were not designed to run natively in the cloud may require a PersistentVolume that can be accessed by multiple Pods simultaneously.

OpenEBS Dynamic NFS Provisioner can be used to dynamically provision NFS volumes on top of different kinds of block storage available on the Kubernetes nodes. PersistentVolumes provisioned by the NFS provisioner have ReadWriteMany (RWX) capabilities, so we can share volume data across Pods running on different nodes.
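
As a sketch, a claim for such a shared volume only differs in the StorageClass and access mode it requests. The name and size are made up; openebs-kernel-nfs is the class the installation described below creates:

```yaml
# PVC for an NFS-backed volume that Pods on different nodes can share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc       # illustrative name
spec:
  storageClassName: openebs-kernel-nfs
  accessModes:
    - ReadWriteMany          # mountable read-write by many nodes
  resources:
    requests:
      storage: 10Gi          # illustrative size
```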

Please note that the Dynamic NFS Provisioner project is still in BETA.

Configuring provisioners in the cluster

I decided to install the cStor and Dynamic NFS provisioners.

My cluster has three worker nodes. Each worker node has two disks: 50GB and 100GB. The 50GB disk is used for the operating system. The 100GB disk has no filesystem and is not mounted on the node, since cStor requires raw block devices.

We have to start by installing the iSCSI driver. On Ubuntu, the following commands will do the work.

sudo apt-get install open-iscsi -y
sudo systemctl enable --now iscsid

Then we can add the openebs repo to Helm.

helm repo add openebs https://openebs.github.io/charts

To enable cStor and the NFS provisioner, we have to set additional values to true while installing openebs.

helm upgrade --install openebs openebs/openebs \
        --set cstor.enabled=true \
        --set nfs-provisioner.enabled=true \
        --namespace openebs --create-namespace

Now we have to wait a bit. Once it finishes, we can see new Pods running in the openebs namespace.

❯ kubectl get pods -n openebs
NAME                                                              READY   STATUS    RESTARTS      AGE
openebs-cstor-admission-server-689d6687f-rhvp5                    1/1     Running   0             16d
openebs-cstor-csi-controller-0                                    6/6     Running   0             16d
openebs-cstor-csi-node-6xm66                                      2/2     Running   0             16d
openebs-cstor-csi-node-876m7                                      2/2     Running   0             16d
openebs-cstor-csi-node-txh5s                                      2/2     Running   0             16d
openebs-cstor-cspc-operator-7dffb6f55-gzc7l                       1/1     Running   0             16d
openebs-cstor-cvc-operator-7c545f6c94-nngjc                       1/1     Running   0             16d
openebs-localpv-provisioner-686b564b5d-v7x5d                      1/1     Running   1 (16d ago)   16d
openebs-ndm-8c67w                                                 1/1     Running   0             16d
openebs-ndm-9xqvr                                                 1/1     Running   0             16d
openebs-ndm-nldhl                                                 1/1     Running   0             16d
openebs-ndm-operator-7ddccf59c4-5h8bj                             1/1     Running   0             16d
openebs-nfs-provisioner-787d694555-8k72g                          1/1     Running   0             16d

We also have new StorageClasses in the cluster. However, cStor is still not configured.

❯ kubectl get storageclasses
NAME                                  PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device                        openebs.io/local       Delete          WaitForFirstConsumer   false                  16d
openebs-hostpath                      openebs.io/local       Delete          WaitForFirstConsumer   false                  16d
openebs-kernel-nfs                    openebs.io/nfsrwx      Delete          Immediate              false                  16d

Now let's see the block devices attached to the nodes.

❯ kubectl get blockdevices
NAMESPACE   NAME                                           NODENAME   SIZE          CLAIMSTATE   STATUS   AGE
openebs     blockdevice-19d2d2fdc1c0e274aa3ba199d8fba897   vm0302     99998934528   Unclaimed    Active   16d
openebs     blockdevice-2806021afad58e5ef20c5c82b78fd943   vm0202     99998934528   Unclaimed    Active   16d
openebs     blockdevice-a251ba13122b4b5f8c2ce9471cf4b03e   vm0102     99998934528   Unclaimed    Active   16d

So we have 3 block devices, 100GB each, 1 per node. We will use them to create CStorPoolCluster. When creating it, we have to provide the name, node selector, block device name, and data RAID group type - we can choose either "mirror" or "stripe".

apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: openebs-cstor-disk-pool
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "vm0102"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-a251ba13122b4b5f8c2ce9471cf4b03e"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "vm0202"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-2806021afad58e5ef20c5c82b78fd943"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "vm0302"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-19d2d2fdc1c0e274aa3ba199d8fba897"
      poolConfig:
        dataRaidGroupType: "stripe"
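
Assuming the manifest above is saved as cspc.yaml, it can be applied and the resulting pool instances inspected with commands like these:

```shell
kubectl apply -f cspc.yaml

# The pool cluster and its per-node pool instances should
# eventually report a healthy/ONLINE status.
kubectl get cspc -n openebs
kubectl get cspi -n openebs
```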

While the pool is being created, we can notice new Pods being spawned in the openebs namespace.

❯ kubectl get pods -n openebs
NAME                                                              READY   STATUS    RESTARTS      AGE
openebs-cstor-disk-pool-d5rg-75b66c4fbf-f86vt                     3/3     Running   0             17d
openebs-cstor-disk-pool-pxgl-7cb86c885b-zfntm                     3/3     Running   0             17d
openebs-cstor-disk-pool-rvl2-6fc5798694-j57lx                     3/3     Running   0             16d

Once the pool gets initialized, we are ready to create a new StorageClass. Let's name it openebs-cstor-csi-default and make it the default for the cluster.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-cstor-csi-default
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: openebs-cstor-disk-pool
  replicaCount: "3"
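
Because the class is annotated as the cluster default, a claim can simply omit storageClassName. A sketch (the claim name and size are made up):

```yaml
# PVC that relies on the default StorageClass; no storageClassName
# is set, so openebs-cstor-csi-default is picked up automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cstor-pvc        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi       # illustrative size
```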

And that's it! Now we can start deploying applications to the cluster.

For more insight into the NFS provisioner, read my post "The Pod is stuck on ContainerCreating".


OpenEBS provides multiple provisioning engines out of the box, each with different capabilities. Additionally, it is easy to install and simple to maintain. It is definitely a good candidate when it comes to dynamic volume provisioning solutions.