Kubernetes Integration
1. Introduction
StorPool provides and maintains a Kubernetes integration that makes it possible for a Kubernetes cluster to transparently use StorPool volumes as persistent storage for its data.
2. Requirements
Working StorPool cluster
Working Kubernetes deployment
All Kubernetes workers joined to the StorPool cluster as clients (a quick verification sketch is shown below)
The kubernetes StorPool module installed on all Kubernetes workers, as well as on all nodes with the storpool_mgmt service running
kubectl installed on all nodes with the storpool_mgmt service running
The jq tool, used for parsing JSON output from the command line
[k8s-master]# yum --enablerepo=extras install -y epel-release
[k8s-master]# yum install -y jq
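A quick way to verify the prerequisites is to list the Kubernetes nodes and the running StorPool services. This is only a sketch; the storpool CLI command must be run from a node configured to access the StorPool API:
[k8s-master]# kubectl get nodes -o wide
[sp-node]# storpool service list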
3. Installation
Create a Kubernetes ServiceAccount, ClusterRole, and ClusterRoleBinding for the StorPool provisioner and give it the required permissions.
Note
The StorPool provisioner configuration can be found in /usr/share/doc/storpool/examples/kubernetes/storpool-provisioner.yaml
[k8s-master]# cat > storpool-provisioner.yaml <<_EOF_
kind: ServiceAccount
apiVersion: v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["csinodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["volumeattachments"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["volumeattachments/status"]
  verbs: ["patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: storpool-provisioner-role
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: storpool-provisioner
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: storpool-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi-driver.storpool.com
spec:
  attachRequired: false
  podInfoOnMount: false
  volumeLifecycleModes: # added in Kubernetes 1.16
  - Persistent
  - Ephemeral
_EOF_
[k8s-master]# kubectl apply -f storpool-provisioner.yaml
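Optionally, verify that the objects were created:
[k8s-master]# kubectl -n kube-system get serviceaccount storpool-provisioner
[k8s-master]# kubectl get clusterrole storpool-provisioner
[k8s-master]# kubectl get clusterrolebinding storpool-provisioner-role
[k8s-master]# kubectl get csidriver csi-driver.storpool.com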
Prepare a kubeconfig file.
Note
In the following example we are borrowing the certificate-authority-data and the Kubernetes master's endpoint from ~/.kube/config on the Kubernetes master.
[k8s-master]# CLUSTER_NAME=cluster1
[k8s-master]# USER_NAME=storpool
[k8s-master]# SP_SECRET_NAME=$(kubectl get serviceaccounts --namespace=kube-system storpool-provisioner -o json | jq -r '.secrets[0].name')
[k8s-master]# echo "SP_SECRET_NAME=$SP_SECRET_NAME"
[k8s-master]# PROVISIONER_TOKEN=$(kubectl get secret --namespace=kube-system "$SP_SECRET_NAME" -o json | jq -r .data.token | base64 -d)
[k8s-master]# echo "PROVISIONER_TOKEN=$PROVISIONER_TOKEN"
[k8s-master]# head -n5 ~/.kube/config > kubeconfig
[k8s-master]# cat >>kubeconfig <<_EOF_
  name: $CLUSTER_NAME
contexts:
- context:
    cluster: $CLUSTER_NAME
    user: $USER_NAME@$CLUSTER_NAME
  name: $CLUSTER_NAME
current-context: ""
kind: Config
preferences: {}
users:
- name: $USER_NAME@$CLUSTER_NAME
  user:
    token: $PROVISIONER_TOKEN
_EOF_
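The generated file can be inspected locally before it is distributed. This is an optional check; kubectl config view only prints the parsed file and does not contact the cluster:
[k8s-master]# kubectl --kubeconfig=./kubeconfig config view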
Note
The node config file supports multiple clusters, configurations and/or users. If there are more Kubernetes clusters, they need to be added to the config file.
Example configuration file with two Kubernetes clusters:
apiVersion: v1
kind: Config
current-context: ""
preferences: {}
clusters:
- cluster:
    certificate-authority-data: DQ0FTSXdEUVlKS29aSWh2Y05BUUVCQl...
    server: https://10.8.111.253:6443
  name: $CLUSTER_NAME_1
- cluster:
    certificate-authority-data: 1Q5a0VJdUtmUnd2NkhpRwpNcWJDSHNT...
    server: https://10.8.222.253:6443
  name: $CLUSTER_NAME_2
contexts:
- context:
    cluster: $CLUSTER_NAME_1
    user: $USER_NAME_1@$CLUSTER_NAME_1
  name: $CLUSTER_NAME_1
- context:
    cluster: $CLUSTER_NAME_2
    user: $USER_NAME_2@$CLUSTER_NAME_2
  name: $CLUSTER_NAME_2
users:
- name: $USER_NAME_1@$CLUSTER_NAME_1
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_1
- name: $USER_NAME_2@$CLUSTER_NAME_2
  user:
    token: $PROVISIONER_TOKEN_FOR_CLUSTER_2
Copy the generated kubeconfig file to /etc/storpool/kubeconfig on every node with the storpool_mgmt service running.
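One way to distribute the file is shown below; this is only a sketch, and the node names are hypothetical placeholders for the nodes running the storpool_mgmt service:
[k8s-master]# for node in sp-node1 sp-node2 sp-node3; do scp kubeconfig root@"$node":/etc/storpool/kubeconfig; done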
To validate the configuration, the following command should not return an error:
[sp-mgmt-node]# kubectl --kubeconfig=/etc/storpool/kubeconfig --context=cluster1 get pv
Note
Validate from every node with the storpool_mgmt service running.
The name of the cluster (cluster1 in the above example) must be registered via the StorPool CLI. Execute the following once on a node that is a member of the StorPool cluster and configured to access the API:
[k8s-master]# storpool kubernetes add name cluster1
To check the Kubernetes status in StorPool:
[k8s-master]# storpool kubernetes list
[k8s-master]# storpool kubernetes status
Enable and start the storpool_kubcsi.service on each Kubernetes worker node:
[k8-node]# systemctl enable --now storpool_kubcsi.service
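To confirm that the service is running on each worker, an optional check:
[k8-node]# systemctl status storpool_kubcsi.service
[k8-node]# journalctl -u storpool_kubcsi.service --no-pager -n 20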
4. Usage example
Create a storage class in Kubernetes. For accountability reasons, create a new StorPool template dedicated to the Kubernetes cluster (a sketch is shown below).
Hint
Uncomment the annotation if this storage class is to be used as the default class.
Update the template if necessary; k8s-nvme-only is just an example name for the template in StorPool.
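If a suitable template does not exist yet, it can be created with the StorPool CLI. The exact parameters (replication, placement groups) depend on the cluster, so the following is only a sketch assuming a placement group named nvme exists:
[sp-node]# storpool template k8s-nvme-only replication 3 placeAll nvme
[sp-node]# storpool template list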
[k8s-master]# TEMPLATE="k8s-nvme-only"
[k8s-master]# cat > storageClass.yaml <<_EOF_
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storpool-csi
  namespace: kube-system
  #annotations:
  #  storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-driver.storpool.com
reclaimPolicy: Delete
parameters:
  template: "$TEMPLATE"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
_EOF_
[k8s-master]# kubectl apply -f "storageClass.yaml"
[k8s-master]# kubectl get sc
Now a PersistentVolumeClaim can be created; it should get provisioned automatically.
[k8s-master]# cat > pvclaim.yaml <<_EOF_
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-storpool-csi-request
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: storpool-csi
_EOF_
[k8s-master]# kubectl apply -f "pvclaim.yaml"
[k8s-master]# kubectl get pvc
The persistent volume claim should be listed, and its status should be Pending (with volumeBindingMode set to WaitForFirstConsumer, the volume is provisioned once the first pod using the claim is scheduled). After provisioning, the newly created volume will appear among the PVs:
[k8s-master]# kubectl get pv
Hint
If this does not happen, kubectl describe pvc my-storpool-csi-request might list a StorPool-generated event (for example noResources or a missing template); more information will be available in the storpool_mgmt API logs (/var/log/debug and /var/log/storpool/mgmt.log).
Create a pod:
[k8s-master]# cat > pod.yaml <<_EOF_
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: my-storpool-csi-request
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
_EOF_
[k8s-master]# kubectl apply -f pod.yaml
[k8s-master]# kubectl get pod
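Once the pod reaches the Running state, the StorPool-backed volume should be mounted at the path specified in volumeMounts. An optional check (the mount path comes from the pod definition above):
[k8s-master]# kubectl wait --for=condition=Ready pod/task-pv-pod --timeout=120s
[k8s-master]# kubectl exec -it task-pv-pod -- df -h /usr/share/nginx/html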
The integration also supports live expansion of the filesystem. To trigger a live expansion, use kubectl edit pvc my-storpool-csi-request and increase the size in spec -> resources -> requests -> storage.
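The same change can be made non-interactively with kubectl patch; this is only a sketch, and the 20Gi value is just an example size:
[k8s-master]# kubectl patch pvc my-storpool-csi-request --type merge -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'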
Note
Use kubectl exec -it task-pv-pod -- /bin/sh -c "kill 1" if you'd like to restart the container and update the size immediately.