StorPool CSI

Requirements

  • A working StorPool cluster;

  • A working Kubernetes deployment. Please see Kubelet configuration for details;

  • All hosts running a Kubelet must have the storpool_block service running;

  • The kubernetes StorPool module must be installed on all hosts running a Kubelet as well as on all nodes with the storpool_mgmt service running;

  • kubectl must be installed on all nodes with the storpool_mgmt service running;

  • The jq tool must be installed. It is used for parsing JSON output from the command line.
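
A quick way to verify these prerequisites is shown below; the prompts are illustrative, and it is assumed that the block service runs under the storpool_block.service systemd unit:

[k8s-node]# systemctl is-active storpool_block.service
[sp-mgmt-node]# command -v kubectl jq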

Kubelet configuration

Due to the way StorPool and the Kubelet manage CPU resources, the Kubelet will fail to start with its default configuration because of a cgroup configuration mismatch. Unless configured otherwise, the Kubelet tries to use all available cores to enforce Pod QoS policies. If StorPool has already been deployed, the Kubelet will fail to start because the kernel will not allow it to use the cores dedicated exclusively to StorPool. The issue can be resolved by changing one or more parameters in either of the following ways:

  • Recommended: Set the SP_MACHINE_SLICE_NAME variable to kubepods. This is the default cgroup name the Kubelet places Pods into. By letting StorPool create a cgroup that excludes the exclusively provisioned cores, the Kubelet can start successfully and enforce QoS policies for Pods using the remaining resources. Also set the Kubelet's cgroup driver to cgroupfs, as StorPool cannot modify cgroups via the systemd driver; remember to set the cgroup driver to cgroupfs in your container runtime as well. A configuration sketch is shown after this list.

  • Not recommended: Disable the Pod QoS policy by setting the --cgroups-per-qos Kubelet parameter to false, or by setting the cgroupsPerQOS option in the Kubelet configuration file to false. This effectively turns off QoS policy enforcement and is not recommended.
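
A minimal sketch of the recommended setup, assuming SP_MACHINE_SLICE_NAME is set in /etc/storpool.conf (or the per-host override file used in your deployment) and that the Kubelet reads its configuration from /var/lib/kubelet/config.yaml; adjust both paths to your environment and configure the matching cgroupfs driver in your container runtime:

# /etc/storpool.conf (excerpt)
SP_MACHINE_SLICE_NAME=kubepods

# /var/lib/kubelet/config.yaml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs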

Installation

Create a ServiceAccount, ClusterRole, and ClusterRoleBinding for the StorPool CSI driver, granting it the required permissions.

Note

An example ServiceAccount, ClusterRole, and ClusterRoleBinding configuration can be found in /usr/share/doc/storpool/examples/kubernetes/storpool-provisioner.yaml

[k8s-master]# cat > storpool-provisioner.yaml << _EOF_
kind: ServiceAccount
apiVersion: v1
metadata:
  name: storpool-provisioner
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner-role
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: storpool-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: storpool-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi-driver.storpool.com
spec:
  attachRequired: false
  podInfoOnMount: false
  volumeLifecycleModes: # added in Kubernetes 1.16
  - Persistent
  - Ephemeral
_EOF_

[k8s-master]# kubectl apply -f storpool-provisioner.yaml

Prepare a kubeconfig file

Note

In the following example, the certificate-authority-data and the Kubernetes master's endpoint are taken from ~/.kube/config on the Kubernetes master; the head -n5 command below copies these lines into the new kubeconfig file.

[k8s-master]# CLUSTER_NAME=cluster1
[k8s-master]# USER_NAME=storpool

[k8s-master]# SP_SECRET_NAME=$(kubectl get serviceaccounts --namespace=kube-system storpool-provisioner -o json | jq -r '.secrets[0].name')
[k8s-master]# echo "SP_SECRET_NAME=$SP_SECRET_NAME"

[k8s-master]# PROVISIONER_TOKEN=$(kubectl get secret --namespace=kube-system "$SP_SECRET_NAME" -o json | jq -r .data.token | base64 -d)
[k8s-master]# echo "PROVISIONER_TOKEN=$PROVISIONER_TOKEN"

[k8s-master]# head -n5 ~/.kube/config > kubeconfig

[k8s-master]# cat >> kubeconfig << _EOF_
  name: $CLUSTER_NAME
contexts:
- context:
    cluster: $CLUSTER_NAME
    user: $USER_NAME@$CLUSTER_NAME
  name: $CLUSTER_NAME
current-context: ""
kind: Config
preferences: {}
users:
- name: $USER_NAME@$CLUSTER_NAME
  user:
    token: $PROVISIONER_TOKEN
_EOF_

Note

The kubeconfig file on the node supports multiple clusters, contexts, and users. If there is more than one Kubernetes cluster, each of them needs to be added to the file.

Example configuration file with two Kubernetes clusters:

apiVersion: v1
kind: Config
current-context: ""
preferences: {}

clusters:
- cluster:
    certificate-authority-data: DQ0FTSXdEUVlKS29aSWh2Y05BUUVCQl...
    server: https://10.8.111.253:6443
  name: $CLUSTER_NAME_1

- cluster:
    certificate-authority-data: 1Q5a0VJdUtmUnd2NkhpRwpNcWJDSHNT...
    server: https://10.8.222.253:6443
  name: $CLUSTER_NAME_2

contexts:
- context:
    cluster: $CLUSTER_NAME_1
    user: $USER_NAME_1@$CLUSTER_NAME_1
  name: $CLUSTER_NAME_1

- context:
    cluster: $CLUSTER_NAME_2
    user: $USER_NAME_2@$CLUSTER_NAME_2
  name: $CLUSTER_NAME_2

users:
- name: $USER_NAME_1@$CLUSTER_NAME_1
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_1

- name: $USER_NAME_2@$CLUSTER_NAME_2
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_2

Copy the generated kubeconfig file to /etc/storpool/kubeconfig on every node with the storpool_mgmt service running, for example as shown below.
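
The following sketch assumes the file was created as ./kubeconfig on the Kubernetes master and that sp-mgmt-node-1 is one of the nodes running the storpool_mgmt service; repeat for each such node:

[k8s-master]# scp kubeconfig root@sp-mgmt-node-1:/etc/storpool/kubeconfig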

To validate the configuration, the following command should not return an error:

[sp-mgmt-node]# kubectl --kubeconfig=/etc/storpool/kubeconfig --context=cluster1 get pv

Note

Validate from every node with the storpool_mgmt service running.

The name of the cluster (cluster1 in the above example) must be registered via the StorPool CLI. Execute the following once on a node that is a member of the StorPool cluster and configured to access the API:

[k8s-master]# storpool kubernetes add name cluster1

To check Kubernetes status in StorPool:

[k8s-master]# storpool kubernetes list
[k8s-master]# storpool kubernetes status

Enable and start storpool_kubcsi.service on each host running a Kubelet:

[k8s-node]# systemctl enable --now storpool_kubcsi.service

Restart the Kubelet so it can load the StorPool CSI plugin:

[k8s-node]# systemctl restart kubelet.service
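
To confirm that the plugin has registered, you can check that the node's CSINode object lists the csi-driver.storpool.com driver; the command below is one possible check:

[k8s-master]# kubectl get csinode -o yaml | grep csi-driver.storpool.com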

Create a StorageClass that uses the StorPool CSI to provision volumes:

Note

An example StorageClass can be found in /usr/share/doc/storpool/examples/kubernetes/storage-class-example.yaml

[k8s-master]# cat > storpool-csi-sc.yaml << _EOF_
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storpool-csi
  namespace: kube-system
# annotations:
#   storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-driver.storpool.com
reclaimPolicy: Delete
parameters:
  template: "ssd"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
_EOF_
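
Apply the manifest to create the StorageClass:

[k8s-master]# kubectl apply -f storpool-csi-sc.yaml

To verify provisioning, you can create a PersistentVolumeClaim that references the new StorageClass; the claim name and size below are only examples:

[k8s-master]# cat > storpool-csi-test-pvc.yaml << _EOF_
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storpool-csi-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: storpool-csi
  resources:
    requests:
      storage: 1Gi
_EOF_

[k8s-master]# kubectl apply -f storpool-csi-test-pvc.yaml

Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the claim will remain Pending until a Pod that uses it is scheduled.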