Kubernetes Integration
1. Introduction
StorPool provides and maintains a Kubernetes integration that allows a Kubernetes cluster to transparently use StorPool volumes as persistent storage for its data. It comes in two flavors: StorPool CSI and StorPool iSCSI CSI.
2. Comparison
StorPool CSI utilizes the storpool_block service to attach StorPool volumes as block devices in the /dev directory. Hyper-converged deployments would benefit from this CSI.
StorPool iSCSI CSI utilizes the host's iSCSI initiator (client) to attach volumes to it. This CSI would benefit deployments where Kubernetes and StorPool do not share the same hosts.
3. StorPool CSI
3.1. Requirements
- A working StorPool cluster;
- A working Kubernetes deployment. Please see Kubelet configuration for details;
- All hosts running a Kubelet must have the storpool_block service running;
- The kubernetes StorPool module must be installed on all hosts running a Kubelet, as well as on all nodes with the storpool_mgmt service running;
- kubectl must be installed on all nodes with the storpool_mgmt service running;
- The jq tool must be installed; it is used for parsing JSON output from the command line.
3.2. Kubelet configuration
Due to the architecture of StorPool and the Kubelet, the Kubelet's default configuration will fail on startup because of a CGroup configuration mismatch. If not specified otherwise, the Kubelet tries to use all available cores to enforce Pod QoS policies. If StorPool has already been deployed, the Kubelet will fail to start because the kernel will not allow it to use the cores exclusively dedicated to StorPool. The issue can be resolved by changing one or more parameters:
- Recommended: Set the SP_MACHINE_SLICE_NAME variable value to kubepods. This is the default CGroup name the Kubelet tries to place Pods into. By letting StorPool create a CGroup that excludes the exclusively provisioned cores, the Kubelet can start successfully and enforce QoS policies for Pods using the remaining resources. Also set the cgroup driver to cgroupfs, as StorPool cannot modify cgroups via the systemd driver. Remember to also set the cgroup driver to cgroupfs in your container runtime.
- Not recommended: Disable the Pod QoS policy by setting the --cgroups-per-qos Kubelet parameter to false, or by setting the cgroupsPerQOS option in the Kubelet config file to false. This setting effectively turns off QoS policy enforcement and is not recommended.
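A minimal sketch of the recommended settings, assuming the Kubelet reads its configuration from /var/lib/kubelet/config.yaml and containerd is the container runtime (the file paths and the containerd stanza are illustrative; adjust them to your deployment):

```
# /etc/storpool.conf -- let StorPool create the "kubepods" CGroup slice
SP_MACHINE_SLICE_NAME=kubepods

# /var/lib/kubelet/config.yaml -- switch the Kubelet to the cgroupfs driver
cgroupDriver: cgroupfs

# /etc/containerd/config.toml -- match the driver in the container runtime, under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = false
```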
3.3. Installation
Create a StorPool ServiceAccount, ClusterRole and ClusterRoleBinding for the StorPool CSI and give it the required permissions.
Note
An example StorPool ServiceAccount, ClusterRole and ClusterRoleBinding configuration can be found in /usr/share/doc/storpool/examples/kubernetes/storpool-provisioner.yaml
[k8s-master]# cat > storpool-provisioner.yaml << _EOF_
kind: ServiceAccount
apiVersion: v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner-role
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: storpool-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: storpool-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi-driver.storpool.com
spec:
  attachRequired: false
  podInfoOnMount: false
  volumeLifecycleModes: # added in Kubernetes 1.16
    - Persistent
    - Ephemeral
_EOF_
[k8s-master]# kubectl apply -f storpool-provisioner.yaml
Prepare a kubeconfig file:
Note
In the following example we are using the certificate-authority-data and the Kubernetes master's endpoint from ~/.kube/config on the Kubernetes master.
[k8s-master]# CLUSTER_NAME=cluster1
[k8s-master]# USER_NAME=storpool
[k8s-master]# SP_SECRET_NAME=$(kubectl get serviceaccounts --namespace=kube-system storpool-provisioner -o json | jq -r '.secrets[0].name')
[k8s-master]# echo "SP_SECRET_NAME=$SP_SECRET_NAME"
[k8s-master]# PROVISIONER_TOKEN=$(kubectl get secret --namespace=kube-system "$SP_SECRET_NAME" -o json | jq -r .data.token | base64 -d)
[k8s-master]# echo "PROVISIONER_TOKEN=$PROVISIONER_TOKEN"
[k8s-master]# head -n5 ~/.kube/config > kubeconfig
[k8s-master]# cat >> kubeconfig << _EOF_
  name: $CLUSTER_NAME
contexts:
- context:
    cluster: $CLUSTER_NAME
    user: $USER_NAME@$CLUSTER_NAME
  name: $CLUSTER_NAME
current-context: ""
kind: Config
preferences: {}
users:
- name: $USER_NAME@$CLUSTER_NAME
  user:
    token: $PROVISIONER_TOKEN
_EOF_
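The token pipeline above is why jq and base64 are required: the ServiceAccount token is stored base64-encoded in the Secret's data.token field. A self-contained illustration using a mock Secret JSON (the file path and token value below are fabricated):

```shell
# Mock of `kubectl get secret ... -o json` output (values are made up)
cat > /tmp/mock-secret.json << 'EOF'
{"data": {"token": "ZXhhbXBsZS10b2tlbg=="}}
EOF
# Same extraction as above: jq selects the field, base64 -d decodes it
PROVISIONER_TOKEN=$(jq -r .data.token /tmp/mock-secret.json | base64 -d)
echo "PROVISIONER_TOKEN=$PROVISIONER_TOKEN"
```

Running this prints PROVISIONER_TOKEN=example-token.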
Note
The kubeconfig file supports multiple clusters, contexts, and users. If there are more Kubernetes clusters, they need to be added to the config file.
Example configuration file with two Kubernetes clusters:
apiVersion: v1
kind: Config
current-context: ""
preferences: {}
clusters:
- cluster:
    certificate-authority-data: DQ0FTSXdEUVlKS29aSWh2Y05BUUVCQl...
    server: https://10.8.111.253:6443
  name: $CLUSTER_NAME_1
- cluster:
    certificate-authority-data: 1Q5a0VJdUtmUnd2NkhpRwpNcWJDSHNT...
    server: https://10.8.222.253:6443
  name: $CLUSTER_NAME_2
contexts:
- context:
    cluster: $CLUSTER_NAME_1
    user: $USER_NAME_1@$CLUSTER_NAME_1
  name: $CLUSTER_NAME_1
- context:
    cluster: $CLUSTER_NAME_2
    user: $USER_NAME_2@$CLUSTER_NAME_2
  name: $CLUSTER_NAME_2
users:
- name: $USER_NAME_1@$CLUSTER_NAME_1
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_1
- name: $USER_NAME_2@$CLUSTER_NAME_2
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_2
Copy the prepared file to /etc/storpool/kubeconfig on every node with the storpool_mgmt service running.
To validate the configuration, the following command should not return an error:
[sp-mgmt-node]# kubectl --kubeconfig=/etc/storpool/kubeconfig --context=cluster1 get pv
Note
Validate from every node with the storpool_mgmt service running.
The name of the cluster (cluster1 in the above example) must be registered via the StorPool CLI. Execute the following once on a node that is a member of the StorPool cluster and configured to access the API:
[k8s-master]# storpool kubernetes add name cluster1
To check Kubernetes status in StorPool:
[k8s-master]# storpool kubernetes list
[k8s-master]# storpool kubernetes status
Enable and start storpool_kubcsi.service on each host running a Kubelet:
[k8s-node]# systemctl enable --now storpool_kubcsi.service
Restart the Kubelet so it can load the StorPool CSI plugin:
[k8s-node]# systemctl restart kubelet.service
Create a StorageClass that uses the StorPool CSI to provision volumes:
Note
An example StorPool StorageClass can be found in /usr/share/doc/storpool/examples/kubernetes/storage-class-example.yaml
[k8s-master]# cat > storpool-csi-sc.yaml << _EOF_
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storpool-csi
  namespace: kube-system
  # annotations:
  #   storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-driver.storpool.com
reclaimPolicy: Delete
parameters:
  template: "ssd"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
_EOF_
[k8s-master]# kubectl apply -f storpool-csi-sc.yaml
4. StorPool iSCSI CSI
4.1. Requirements
- A working StorPool cluster;
- An iSCSI portal group configured in StorPool;
- A working Kubernetes deployment;
- The kubernetes StorPool module must be installed on all hosts running a Kubelet, as well as on all nodes with the storpool_mgmt service running;
- kubectl must be installed on all nodes with the storpool_mgmt service running;
- An iSCSI initiator (client) must be installed on all nodes which run a kubelet service;
- The jq tool must be installed; it is used for parsing JSON output from the command line.
4.2. Installation
Create a StorPool ServiceAccount, ClusterRole and ClusterRoleBinding for the StorPool iSCSI CSI and give it the required permissions.
Note
An example StorPool ServiceAccount, ClusterRole and ClusterRoleBinding configuration can be found in /usr/share/doc/storpool/examples/kubernetes/storpool-iscsi-provisioner.yaml
[k8s-master]# cat > storpool-iscsi-provisioner.yaml << _EOF_
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner-role
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: storpool-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: storpool-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi-iscsi-driver.storpool.com
spec:
  attachRequired: false
  podInfoOnMount: false
  volumeLifecycleModes: # added in Kubernetes 1.16
    - Persistent
    - Ephemeral
_EOF_
[k8s-master]# kubectl apply -f storpool-iscsi-provisioner.yaml
Prepare a kubeconfig file:
Note
In the following example we are using the certificate-authority-data and the Kubernetes master's endpoint from ~/.kube/config on the Kubernetes master.
[k8s-master]# CLUSTER_NAME=cluster1
[k8s-master]# USER_NAME=storpool
[k8s-master]# SP_SECRET_NAME=$(kubectl get serviceaccounts --namespace=kube-system storpool-provisioner -o json | jq -r '.secrets[0].name')
[k8s-master]# echo "SP_SECRET_NAME=$SP_SECRET_NAME"
[k8s-master]# PROVISIONER_TOKEN=$(kubectl get secret --namespace=kube-system "$SP_SECRET_NAME" -o json | jq -r .data.token | base64 -d)
[k8s-master]# echo "PROVISIONER_TOKEN=$PROVISIONER_TOKEN"
[k8s-master]# head -n5 ~/.kube/config > kubeconfig
[k8s-master]# cat >> kubeconfig << _EOF_
  name: $CLUSTER_NAME
contexts:
- context:
    cluster: $CLUSTER_NAME
    user: $USER_NAME@$CLUSTER_NAME
  name: $CLUSTER_NAME
current-context: ""
kind: Config
preferences: {}
users:
- name: $USER_NAME@$CLUSTER_NAME
  user:
    token: $PROVISIONER_TOKEN
_EOF_
Note
The kubeconfig file supports multiple clusters, contexts, and users. If there are more Kubernetes clusters, they need to be added to the config file.
Example configuration file with two Kubernetes clusters:
apiVersion: v1
kind: Config
current-context: ""
preferences: {}
clusters:
- cluster:
    certificate-authority-data: DQ0FTSXdEUVlKS29aSWh2Y05BUUVCQl...
    server: https://10.8.111.253:6443
  name: $CLUSTER_NAME_1
- cluster:
    certificate-authority-data: 1Q5a0VJdUtmUnd2NkhpRwpNcWJDSHNT...
    server: https://10.8.222.253:6443
  name: $CLUSTER_NAME_2
contexts:
- context:
    cluster: $CLUSTER_NAME_1
    user: $USER_NAME_1@$CLUSTER_NAME_1
  name: $CLUSTER_NAME_1
- context:
    cluster: $CLUSTER_NAME_2
    user: $USER_NAME_2@$CLUSTER_NAME_2
  name: $CLUSTER_NAME_2
users:
- name: $USER_NAME_1@$CLUSTER_NAME_1
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_1
- name: $USER_NAME_2@$CLUSTER_NAME_2
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_2
Copy the prepared file to /etc/storpool/kubeconfig on every node with the storpool_mgmt service running.
To validate the configuration, the following command should not return an error:
[sp-mgmt-node]# kubectl --kubeconfig=/etc/storpool/kubeconfig --context=cluster1 get pv
Note
Validate from every node with the storpool_mgmt service running.
The name of the cluster (cluster1 in the above example) must be registered via the StorPool CLI. Execute the following once on a node that is a member of the StorPool cluster and configured to access the API:
[k8s-master]# storpool kubernetes add name cluster1
To check Kubernetes status in StorPool:
[k8s-master]# storpool kubernetes list
[k8s-master]# storpool kubernetes status
Before installing the StorPool iSCSI CSI node plugin, each node running a kubelet service must be added as an iSCSI initiator in StorPool. StorPool iSCSI CSI uses the value in /etc/iscsi/initiatorname.iscsi as the initiator's IQN. Run the following command for each kubelet node:
[k8s-master]# storpool iscsi config initiator <iqn-name-here> create
Note
Since the command above creates an initiator without authentication, one might want to enable connections only from trusted subnets. This can be achieved with the following command:
[k8s-master]# storpool iscsi config initiator <iqn-name-here> addNet <iscsi-subnet-as-prefix>
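When registering many nodes, the IQN can be extracted from the initiator name file and passed to the command above; a sketch against a mock file (the IQN value is fabricated; on a real node read /etc/iscsi/initiatorname.iscsi directly):

```shell
# Mock of /etc/iscsi/initiatorname.iscsi (the IQN below is made up)
cat > /tmp/initiatorname.iscsi << 'EOF'
InitiatorName=iqn.1994-05.com.redhat:k8s-node-1
EOF
# Strip the InitiatorName= key, leaving only the IQN
IQN=$(sed -n 's/^InitiatorName=//p' /tmp/initiatorname.iscsi)
echo "$IQN"
# The result would then be passed to:
#   storpool iscsi config initiator "$IQN" create
```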
Next, deploy the StorPool iSCSI CSI node plugin:
[k8s-master]# cat > storpool-iscsi-node-plugin.yaml << _EOF_
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: storpool-iscsi-provisioner
spec:
  selector:
    matchLabels:
      app: storpool-iscsi-provisioner
  template:
    metadata:
      labels:
        app: storpool-iscsi-provisioner
    spec:
      serviceAccount: storpool-provisioner
      hostNetwork: true
      containers:
        - name: storpool-iscsi
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: cts.storpool.com/public/storpool-iscsi-provisioner:latest
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  [
                    "/bin/sh",
                    "-c",
                    "rm -rf /var/lib/kubelet/plugins_registry/csi-iscsi-driver.storpool.com",
                  ]
          volumeMounts:
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - mountPath: /sys
              name: sysfs
            - mountPath: /dev
              name: dev
            - name: registration-dir
              mountPath: /var/lib/kubelet/plugins_registry
            - name: host
              mountPath: /host
      volumes:
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
          name: registration-dir
        - name: sysfs
          hostPath:
            path: /sys
            type: Directory
        - name: dev
          hostPath:
            path: /dev
            type: Directory
        - name: host
          hostPath:
            path: /
            type: Directory
_EOF_
[k8s-master]# kubectl apply -f storpool-iscsi-node-plugin.yaml
Create a StorageClass that uses the StorPool iSCSI CSI driver. Replace the values of the template and iscsiPGName parameters with the correct values for your setup:
[k8s-master]# cat > storpool-csi-iscsi-sc.yaml << _EOF_
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storpool-csi-iscsi
  namespace: kube-system
  # annotations:
  #   storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-iscsi-driver.storpool.com
reclaimPolicy: Delete
parameters:
  template: "ssd"
  iscsiPGName: "kubernetes-iscsi-pg"
volumeBindingMode: Immediate
allowVolumeExpansion: true
_EOF_
[k8s-master]# kubectl apply -f storpool-csi-iscsi-sc.yaml
5. Usage example
Create a PersistentVolumeClaim that uses a configured StorageClass to dynamically provision a volume. The example below uses "storpool-csi" for the StorageClass, but the same PersistentVolumeClaim can be used for storpool-csi-iscsi.
[k8s-master]# cat > pvclaim.yaml << _EOF_
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-storpool-csi-request
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: storpool-csi
_EOF_
[k8s-master]# kubectl apply -f pvclaim.yaml
[k8s-master]# kubectl get pvc
If everything is working, the persistent volume claim should be listed with status Pending; since the storpool-csi class uses WaitForFirstConsumer, it stays Pending until a Pod uses the claim and then changes to Bound. Once provisioned, the newly created volume should be listed among the persistent volumes:
[k8s-master]# kubectl get pv
Hint
In case this is not happening, kubectl describe pvc my-storpool-csi-request might list a StorPool-generated event (e.g. noResources, or template missing); more information will be available in the storpool_mgmt API logs (/var/log/debug and /var/log/storpool/mgmt.log).
Create a pod:
[k8s-master]# cat > pod.yaml << _EOF_
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: my-storpool-csi-request
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
_EOF_
[k8s-master]# kubectl apply -f pod.yaml
[k8s-master]# kubectl get pod
The integration also supports live expansion of the filesystem. To trigger a live expansion, use kubectl edit pvc my-storpool-csi-request and increase the size in spec -> resources -> requests -> storage.
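As an alternative to the interactive edit, the resize can be scripted with kubectl patch. The sketch below only builds and validates the patch body locally with jq; 20Gi is an arbitrary target size, and the apply step is shown as a comment since it needs a live cluster:

```shell
# Patch body that bumps spec.resources.requests.storage
PATCH='{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
# Sanity-check the JSON locally before applying it
echo "$PATCH" | jq -r '.spec.resources.requests.storage'
# Apply against a live cluster with:
#   kubectl patch pvc my-storpool-csi-request --type merge -p "$PATCH"
```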
Note
Use kubectl exec -it task-pv-pod -- /bin/sh -c "kill 1" if you'd like to restart the container and update the size immediately.