StorPool iSCSI CSI
Requirements
A working StorPool cluster;
An iSCSI portal group configured in StorPool;
A working Kubernetes deployment;
The kubernetes StorPool module must be installed on all hosts running a kubelet, as well as on all nodes with the storpool_mgmt service running;
kubectl must be installed on all nodes with the storpool_mgmt service running;
An iSCSI initiator (client) must be installed on all nodes which run a kubelet service;
The jq tool must be installed; it is used for parsing JSON output from the command line.
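Several of the installation steps below pipe kubectl output through jq. As a quick illustration of that pattern, the following sketch parses a JSON document shaped like kubectl's ServiceAccount output (the document and the secret name are invented, not real cluster output):

```shell
# Invented JSON shaped like `kubectl get serviceaccounts ... -o json` output.
SAMPLE='{"secrets":[{"name":"storpool-provisioner-token-abcde"}]}'
# Extract the first secret's name, as done during kubeconfig preparation below.
SP_SECRET_NAME=$(printf '%s' "$SAMPLE" | jq -r '.secrets[0].name')
echo "$SP_SECRET_NAME"
```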
Installation
Create a StorPool ServiceAccount, ClusterRole, and ClusterRoleBinding for the StorPool CSI and give it the required permissions.
Note
An example StorPool ServiceAccount, ClusterRole, and ClusterRoleBinding configuration can be found in /usr/share/doc/storpool/examples/kubernetes/storpool-iscsi-provisioner.yaml.
[k8s-master]# cat > storpool-iscsi-provisioner.yaml << _EOF_
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner-role
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: storpool-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: storpool-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi-iscsi-driver.storpool.com
spec:
  attachRequired: false
  podInfoOnMount: false
  volumeLifecycleModes: # added in Kubernetes 1.16
    - Persistent
    - Ephemeral
_EOF_
[k8s-master]# kubectl apply -f storpool-iscsi-provisioner.yaml
Prepare a kubeconfig file:
Note
In the following example we use the certificate-authority-data and the Kubernetes master's endpoint from ~/.kube/config on the Kubernetes master.
[k8s-master]# CLUSTER_NAME=cluster1
[k8s-master]# USER_NAME=storpool
[k8s-master]# SP_SECRET_NAME=$(kubectl get serviceaccounts --namespace=kube-system storpool-provisioner -o json | jq -r '.secrets[0].name')
[k8s-master]# echo "SP_SECRET_NAME=$SP_SECRET_NAME"
[k8s-master]# PROVISIONER_TOKEN=$(kubectl get secret --namespace=kube-system "$SP_SECRET_NAME" -o json | jq -r .data.token | base64 -d)
[k8s-master]# echo "PROVISIONER_TOKEN=$PROVISIONER_TOKEN"
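Secret data in Kubernetes is stored base64-encoded, which is why the PROVISIONER_TOKEN pipeline above ends in base64 -d. A self-contained sketch of that round trip, with a made-up token value standing in for the real one:

```shell
# "my-demo-token" stands in for the real service-account token.
ENCODED=$(printf '%s' "my-demo-token" | base64)
# Decoding recovers the original token, exactly as in the pipeline above.
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```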
[k8s-master]# head -n5 ~/.kube/config > kubeconfig
[k8s-master]# cat >> kubeconfig << _EOF_
  name: $CLUSTER_NAME
contexts:
- context:
    cluster: $CLUSTER_NAME
    user: $USER_NAME@$CLUSTER_NAME
  name: $CLUSTER_NAME
current-context: ""
kind: Config
preferences: {}
users:
- name: $USER_NAME@$CLUSTER_NAME
  user:
    token: $PROVISIONER_TOKEN
_EOF_
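Since the heredoc delimiter above is unquoted, the shell substitutes $CLUSTER_NAME, $USER_NAME, and $PROVISIONER_TOKEN before the text reaches the file. A minimal self-contained illustration of that substitution, written to a throwaway path:

```shell
# Placeholder values; on a real system these are set as shown above.
CLUSTER_NAME=cluster1
USER_NAME=storpool
cat > /tmp/kubeconfig.demo << _EOF_
contexts:
- context:
    cluster: $CLUSTER_NAME
    user: $USER_NAME@$CLUSTER_NAME
  name: $CLUSTER_NAME
_EOF_
# The expanded user entry appears exactly once in the written file.
grep -c 'storpool@cluster1' /tmp/kubeconfig.demo
```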
Note
The node config file supports multiple clusters, contexts, and users. If there are more Kubernetes clusters, they need to be added to the config file.
Example configuration file with two Kubernetes clusters:
apiVersion: v1
kind: Config
current-context: ""
preferences: {}
clusters:
- cluster:
    certificate-authority-data: DQ0FTSXdEUVlKS29aSWh2Y05BUUVCQl...
    server: https://10.8.111.253:6443
  name: $CLUSTER_NAME_1
- cluster:
    certificate-authority-data: 1Q5a0VJdUtmUnd2NkhpRwpNcWJDSHNT...
    server: https://10.8.222.253:6443
  name: $CLUSTER_NAME_2
contexts:
- context:
    cluster: $CLUSTER_NAME_1
    user: $USER_NAME_1@$CLUSTER_NAME_1
  name: $CLUSTER_NAME_1
- context:
    cluster: $CLUSTER_NAME_2
    user: $USER_NAME_2@$CLUSTER_NAME_2
  name: $CLUSTER_NAME_2
users:
- name: $USER_NAME_1@$CLUSTER_NAME_1
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_1
- name: $USER_NAME_2@$CLUSTER_NAME_2
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_2
Copy the generated kubeconfig file to /etc/storpool/kubeconfig on every node with the storpool_mgmt service running.
To validate the configuration, the following command should not return an error:
[sp-mgmt-node]# kubectl --kubeconfig=/etc/storpool/kubeconfig --context=cluster1 get pv
Note
Run this validation from every node with the storpool_mgmt service running.
The name of the cluster (cluster1 in the above example) must be registered via the StorPool CLI. Execute the following once on a node that is a member of the StorPool cluster and configured to access the API:
[k8s-master]# storpool kubernetes add name cluster1
To check Kubernetes status in StorPool:
[k8s-master]# storpool kubernetes list
[k8s-master]# storpool kubernetes status
Before installing the StorPool iSCSI CSI node plugin, each node running a kubelet service must be added as an iSCSI initiator in StorPool. StorPool iSCSI CSI uses the value in /etc/iscsi/initiatorname.iscsi as the initiator's IQN. Run the following command for each kubelet node:
[k8s-master]# storpool iscsi config initiator <iqn-name-here> create
Note
Since the command above creates an initiator without authentication, you may want to allow connections only from trusted subnets. This can be achieved with the following command:
[k8s-master]# storpool iscsi config initiator <iqn-name-here> addNet <iscsi-subnet-as-prefix>
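On each node, the IQN to register is the value after InitiatorName= in /etc/iscsi/initiatorname.iscsi. A sketch of extracting it, using a temporary file with an invented IQN in place of the real one:

```shell
# Mimic /etc/iscsi/initiatorname.iscsi; this IQN is illustrative only.
printf 'InitiatorName=iqn.1994-05.com.example:node1\n' > /tmp/initiatorname.demo
# Strip the key, keeping only the IQN itself.
IQN=$(sed -n 's/^InitiatorName=//p' /tmp/initiatorname.demo)
echo "$IQN"
# On a real node the result would then be registered with:
#   storpool iscsi config initiator "$IQN" create
```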
Next, deploy the StorPool iSCSI CSI node plugin:
[k8s-master]# cat > storpool-iscsi-node-plugin.yaml << _EOF_
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: storpool-iscsi-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: storpool-iscsi-provisioner
  template:
    metadata:
      labels:
        app: storpool-iscsi-provisioner
    spec:
      serviceAccountName: storpool-provisioner
      hostNetwork: true
      containers:
        - name: storpool-iscsi
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: cts.storpool.com/public/storpool-iscsi-provisioner:latest
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  [
                    "/bin/sh",
                    "-c",
                    "rm -rf /var/lib/kubelet/plugins_registry/csi-iscsi-driver.storpool.com",
                  ]
          volumeMounts:
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: sysfs
              mountPath: /sys
            - name: dev
              mountPath: /dev
            - name: registration-dir
              mountPath: /var/lib/kubelet/plugins_registry
            - name: host
              mountPath: /host
      volumes:
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
        - name: sysfs
          hostPath:
            path: /sys
            type: Directory
        - name: dev
          hostPath:
            path: /dev
            type: Directory
        - name: host
          hostPath:
            path: /
            type: Directory
_EOF_
[k8s-master]# kubectl apply -f storpool-iscsi-node-plugin.yaml
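Once the DaemonSet is applied, one plugin pod per kubelet node should reach the Running phase. A hedged sketch of counting running pods by parsing output shaped like kubectl's JSON pod listing (the sample document below is invented, standing in for a two-node cluster):

```shell
# Invented sample shaped like
# `kubectl get pods -l app=storpool-iscsi-provisioner -o json` on a 2-node cluster.
PODS='{"items":[{"status":{"phase":"Running"}},{"status":{"phase":"Running"}}]}'
# Count items whose phase is Running.
RUNNING=$(printf '%s' "$PODS" | jq '[.items[] | select(.status.phase == "Running")] | length')
echo "$RUNNING"
```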
Create a StorageClass for the StorPool iSCSI CSI driver. Replace the values of the template and iscsiPGName parameters with the correct values for your setup:
[k8s-master]# cat > storpool-csi-iscsi-sc.yaml << _EOF_
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storpool-csi-iscsi
  namespace: kube-system
  # annotations:
  #   storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-iscsi-driver.storpool.com
reclaimPolicy: Delete
parameters:
  template: "ssd"
  iscsiPGName: "kubernetes-iscsi-pg"
volumeBindingMode: Immediate
allowVolumeExpansion: true
_EOF_
[k8s-master]# kubectl apply -f storpool-csi-iscsi-sc.yaml
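To verify end-to-end provisioning, a test PersistentVolumeClaim can be created against the new class; the claim name and size below are illustrative:

```yaml
# Illustrative PVC; adjust the name and requested size as needed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storpool-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: storpool-csi-iscsi
```

After applying it with kubectl apply, the claim should reach the Bound state and a corresponding volume should appear in the StorPool cluster.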