Plugin functionality
This page describes the implementation of the StorPool plugin for CloudStack. It is relevant mainly for developers working on the plugin.
Creating template from a snapshot
When the bypass option is enabled, the snapshot exists only on PRIMARY (StorPool) storage. From this snapshot a template will be created on SECONDARY storage.
Creating ROOT volume from templates
When the first volume based on a given template is created, if a snapshot of the template does not exist on StorPool, the template is first downloaded (cached) to PRIMARY storage. The cached copy is mapped to a StorPool snapshot, so creating subsequent volumes from the same template does not incur additional copying of data to PRIMARY storage.
This cached snapshot is garbage collected when the original template is deleted from CloudStack. This cleanup is done by a background task in CloudStack.
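For developers inspecting the caching behaviour, the cached template copy can be looked up on the StorPool side. A rough sketch (the snapshot naming scheme and the UUID fragment below are assumptions; the exact naming is deployment-specific):

```shell
# List StorPool snapshots and search for the cached template copy.
# The template's CloudStack UUID (placeholder below) typically appears
# in the snapshot name or tags -- check your deployment's naming scheme.
storpool snapshot list | grep "d0ac3154"   # hypothetical template UUID fragment
```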
Creating a ROOT volume from an ISO image
We just need to create the volume. The ISO installation is handled by CloudStack.
Creating a DATA volume
A DATA volume is created by CloudStack the first time it is attached to a VM.
Creating volume from snapshot
We rely on the fact that the snapshot already exists on PRIMARY, so no data is copied. When there is no corresponding StorPool snapshot, the snapshot is first copied from SECONDARY to the StorPool PRIMARY storage.
Resizing volumes
A resize command must be sent to the agent on the host where the VM using the volume is running, so that the new size becomes visible to the VM.
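Conceptually, the operation performed on the host is similar to the following commands (a sketch only; the volume and domain names are placeholders, the plugin performs this through the agent rather than a shell, and CLI syntax may differ between StorPool versions):

```shell
# Grow the StorPool volume first (placeholder volume name).
storpool volume my-data-volume size 20G

# Then notify the running guest via libvirt, so the resize becomes
# visible inside the VM without a reboot (placeholder domain/target).
virsh blockresize my-vm-domain vdb 20G
```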
Creating snapshots
The snapshot is first created on the PRIMARY storage (StorPool), then backed up to SECONDARY storage (tested with NFS secondary); this applies when the bypass option is not enabled. The original StorPool snapshot is kept, so that creating volumes from the snapshot does not need to copy the data again to PRIMARY. When the snapshot is deleted from CloudStack, so is the corresponding StorPool snapshot.
Currently snapshots are taken in RAW format.
Reverting volume to snapshot
It’s handled by StorPool.
Migrating volumes to other Storage pools
Tested with storage pools on NFS only.
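As an illustration, a volume migration can be triggered with CloudMonkey (cmk); the IDs below are placeholders:

```shell
# Migrate a volume to another primary storage pool (IDs are placeholders).
# livemigrate=true allows the move while the volume is attached to a running VM.
cmk migrate volume volumeid=<volume-uuid> storageid=<pool-uuid> livemigrate=true
```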
Virtual machine snapshot and group snapshot
StorPool supports consistent snapshots of volumes attached to a virtual machine.
BW/IOPS limitations
Max IOPS limits are applied to StorPool volumes through custom service offerings, by adding IOPS limits to the corresponding system disk offering.
CloudStack provides no way to specify a maximum bandwidth (BW) limit.
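For example, a disk offering with IOPS limits can be created with CloudMonkey (cmk); the names and values below are illustrative:

```shell
# Create a disk offering whose min/max IOPS will be applied to the
# StorPool volumes created from it (all values are illustrative).
cmk create diskoffering name=storpool-limited \
    displaytext="StorPool, 1000 IOPS cap" \
    disksize=10 miniops=500 maxiops=1000
```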
Support for host HA
Supported for versions 4.19 and newer.
If StorPool is used as primary storage, the administrator can choose the StorPool-provided heartbeat mechanism. It relies on the presence of the host in the storage network, so there is no need for additional primary storage over NFS solely for HA purposes. The StorPool heartbeat mechanism currently works for compute-only nodes; support for hyper-converged nodes is on the roadmap.
The StorPool heartbeat plug-in for CloudStack is part of the standard CloudStack install. There is no additional work required to add and enable this component.
If there is more than one primary storage in the cluster and you want node fencing when one of them is down, set the kvm.ha.fence.on.storage.heartbeat.failure global setting to true.
This setting can be configured on the global settings page in the CloudStack UI, or via the updateConfiguration call of the CloudStack API.
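For example, using CloudMonkey (cmk) against a configured management server:

```shell
# Enable node fencing on storage heartbeat failure (global setting).
cmk update configuration name=kvm.ha.fence.on.storage.heartbeat.failure value=true
```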
Supported operations for volume encryption
Supported virtual machine operations:
Live migration of VM to another host
Virtual machine snapshots (group snapshot without memory)
Revert VM snapshot
Delete VM snapshot
Supported Volume operations:
Attach/detach volume
Live migrate volume between two StorPool primary storages
Volume snapshot
Delete snapshot
Revert snapshot
Note that volume snapshots are allowed only when sp.bypass.secondary.storage is set to true (see Plugin settings). This means that snapshots are not backed up to secondary storage.
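The setting can be enabled the same way as other global settings, for example with CloudMonkey (cmk):

```shell
# Keep snapshots only on StorPool PRIMARY (skip backup to secondary storage).
cmk update configuration name=sp.bypass.secondary.storage value=true
```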