CloudStack integration

The StorPool primary storage plugin is used on both the CloudStack management server and the agents. It is intended to be self-contained and non-intrusive. All StorPool-related communication (for example, data copying and volume resizing) is done with StorPool-specific commands.

The information provided here applies to CloudStack versions 4.17.0.0 and newer, where the StorPool plugin is part of the standard CloudStack installation. If you are using an earlier version of CloudStack, you should install the plugin provided in the StorPool CloudStack repository.

For more high-level information on how you can accelerate your CloudStack deployment using StorPool, see the StorPool site.

Introduction

The StorPool plug-in is deeply integrated with CloudStack and works with KVM hypervisors. When used with service or disk offerings, an administrator can build an environment in which each root or data disk a user creates leads to the dynamic creation of a StorPool volume with guaranteed performance. Such a StorPool volume is associated with exactly one CloudStack volume, so the performance of the CloudStack volume does not vary depending on how heavily other tenants are using the system. Volume migration is supported from non-managed storage pools (for example, local storage) to StorPool, and between StorPool storage pools. For more information about StorPool volumes, see Volumes and snapshots.

CloudStack overview

For more information, see CloudStack’s documentation for Storage Setup.

Primary and Secondary storage

Primary storage is associated with a cluster or zone, and it stores the virtual disks for all the VMs running on hosts in that cluster/zone.

Secondary storage includes the following:

  • Templates are OS images that can be used to boot VMs, and can include additional configuration information, such as installed applications.

  • ISO images are disc images containing data or bootable media for operating systems.

  • Disk volume snapshots are saved copies of VM data, which can be used for data recovery or for creating new templates.

ROOT and DATA volumes

ROOT volumes correspond to the boot disk of a VM. They are created automatically by CloudStack during VM creation, based on a system disk offering corresponding to the service offering the user VM is based on. The ROOT volume disk offering may be changed, but only to another system-created disk offering.

DATA volumes correspond to additional disks. These can be created by users and then attached to or detached from VMs. DATA volumes are created based on a user-defined disk offering.

Setup

Setting up StorPool

First, perform the StorPool installation. Note the following:

  • Create a template to be used by CloudStack (see the example after this list). You must set placeHead, placeAll, placeTail, and replication.

  • There is no need to set a default volume size, because it is determined by the CloudStack disk and service offerings.
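For example, such a template could be created with the StorPool CLI as sketched below. The template name cloudstack, the placement groups ssd and hdd, and replication 3 are illustrative values, not requirements:

# Illustrative values; use your own template name, placement groups, and replication factor
storpool template cloudstack replication 3 placeAll ssd placeHead ssd placeTail hdd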

Setting up a StorPool PRIMARY storage pool in CloudStack

Tip

For more information about this procedure, check Adding Primary Storage in the CloudStack documentation.

The next step after installing StorPool is configuring the plugin. Log in to the CloudStack UI, go to Infrastructure > Primary Storage > Add Primary Storage, and enter the following details (a CLI equivalent is sketched after this list):

  • Command: createStoragePool

  • Scope: select Zone-Wide

  • Hypervisor: select KVM

  • Zone: pick the appropriate zone

  • Zone id: enter your zone id

  • Name: enter a name for the primary storage

  • Protocol: select custom

  • Path: enter /dev/storpool (a required argument, though not actually used by the plugin).

  • Provider: select StorPool

  • Managed: leave unchecked (currently ignored)

  • Capacity Bytes: used for accounting purposes only. May be more or less than the actual StorPool template capacity.

  • Capacity IOPS: currently not used (may be used later for max IOPS limitations on volumes from this pool).

  • URL: enter SP_API_HTTP=address:port;SP_AUTH_TOKEN=token;SP_TEMPLATE=template_name. At present one template can be used for at most one Storage Pool.

    • SP_API_HTTP - address of StorPool API

    • SP_AUTH_TOKEN - StorPool’s token

    • SP_TEMPLATE - name of StorPool’s template

    • For more information about these values, see Node configuration options and Templates.

    • For versions 4.19.1.0 and newer, you can use an alternative format for the URL: storpool://{SP_AUTH_TOKEN}@{SP_API_HTTP}:{SP_API_HTTP_PORT}/{SP_TEMPLATE}

  • Storage Tags: If left blank, the StorPool storage plugin will use the pool name to create a corresponding storage tag. This storage tag may be used later, when defining service or disk offerings.
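The same storage pool can also be registered from the CloudStack CLI via the createStoragePool API call. The sketch below uses placeholder values in braces, and the exact parameter set may vary slightly between CloudStack versions:

create storagepool scope=zone zoneid={zone id} hypervisor=KVM provider=StorPool name={primary storage name} url="SP_API_HTTP={address}:{port};SP_AUTH_TOKEN={token};SP_TEMPLATE={template name}"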

After adding StorPool as primary storage you can set the following parameters in the Settings tab (a CLI example follows the list):

sp.bypass.secondary.storage

When set to true, snapshots of StorPool managed storage are kept only on primary (StorPool) storage and are not backed up to secondary storage.

sp.cluster.id

Used for StorPool multi-cluster authorization (it is set automatically for each cluster).

sp.enable.alternative.endpoint

Used for StorPool primary storage; defines whether an alternative endpoint needs to be used.

sp.alternative.endpoint

Used for StorPool primary storage as an alternative endpoint. The structure of the endpoint is SP_API_HTTP=address:port;SP_AUTH_TOKEN=token;SP_TEMPLATE=template_name.

storpool.volume.tags.checkup

Minimal interval (in seconds) to check and report if a StorPool volume created by CloudStack exists in CloudStack’s database.

storpool.snapshot.tags.checkup

Minimal interval (in seconds) to check and report if a StorPool Snapshot created by CloudStack exists in CloudStack’s database.

storpool.delete.after.interval

The interval (in seconds) after which a StorPool snapshot will be deleted.

storpool.list.snapshots.delete.after.interval

The interval (in seconds) at which to fetch StorPool snapshots with the deleteAfter flag.
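These settings can be changed in the Settings tab of the primary storage, or via the updateConfiguration API call. A hedged sketch for enabling the bypass option on a given primary storage (placeholder UUID):

update configuration name=sp.bypass.secondary.storage value=true storageid={primary storage UUID}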

Plugin functionality

Creating template from a snapshot

When the bypass option is enabled, the snapshot exists only on PRIMARY (StorPool) storage. From this snapshot, a template will be created on SECONDARY storage.

Creating ROOT volume from templates

When creating the first volume based on a given template, if a snapshot of the template does not exist on StorPool, it will first be downloaded (cached) to PRIMARY storage. It is mapped to a StorPool snapshot, so creating consecutive volumes from the same template does not incur additional copying of data to PRIMARY storage.

This cached snapshot is garbage collected when the original template is deleted from CloudStack. This cleanup is done by a background task in CloudStack.
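For illustration, deploying a VM from a registered template via the deployVirtualMachine API call is what triggers this caching on first use (placeholder UUIDs):

deploy virtualmachine zoneid={zone UUID} templateid={template UUID} serviceofferingid={service offering UUID}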

Creating a ROOT volume from an ISO image

The plugin only needs to create the volume; the ISO installation is handled by CloudStack.

Creating a DATA volume

DATA volumes are created by CloudStack the first time they are attached to a VM.

Creating volume from snapshot

The plugin uses the fact that the snapshot already exists on PRIMARY storage, so no data is copied. Snapshots are copied from SECONDARY to StorPool PRIMARY only when there is no corresponding StorPool snapshot.

Resizing volumes

The plugin sends a resize command to the agent on the host where the VM with the attached volume is running, so that the resize is visible to the VM.
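For example, a volume can be resized via the resizeVolume API call (placeholder values):

resize volume id={volume UUID} size={new size in GB}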

Creating snapshots

The snapshot is first created on PRIMARY storage (StorPool) and then, when the bypass option is not enabled, backed up to SECONDARY storage (tested with NFS secondary). The original StorPool snapshot is kept, so creating volumes from the snapshot does not require copying the data to PRIMARY again. When the snapshot is deleted from CloudStack, so is the corresponding StorPool snapshot.

Currently snapshots are taken in RAW format.
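For example, a snapshot can be taken via the createSnapshot API call (placeholder UUID):

create snapshot volumeid={volume UUID}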

Reverting volume to snapshot

This operation is handled natively by StorPool.

Migrating volumes to other Storage pools

Tested with storage pools on NFS only.

Virtual machine snapshot and group snapshot

StorPool supports consistent snapshots of volumes attached to a virtual machine.

BW/IOPS limitations

Max IOPS limits are applied to StorPool volumes through custom service offerings, by adding IOPS limits to the corresponding system disk offering.

CloudStack has no way to specify a maximum bandwidth (BW) limit.

Support for host HA

Supported for versions 4.19 and newer.

If StorPool is used as primary storage, the administrator can choose the StorPool-provided heartbeat mechanism. It relies on the presence of the host on the storage network, so no additional NFS primary storage is needed just for HA purposes. The StorPool heartbeat mechanism currently works for compute-only nodes; support for hyper-converged nodes is on the roadmap.

The StorPool heartbeat plug-in for CloudStack is part of the standard CloudStack install. There is no additional work required to add and enable this component.

If there is more than one primary storage in the cluster and the administrator wants node fencing when one of them is down, the kvm.ha.fence.on.storage.heartbeat.failure global setting should be set to true.
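For example, via the updateConfiguration API call:

update configuration name=kvm.ha.fence.on.storage.heartbeat.failure value=true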

Supported operations for volume encryption

Supported virtual machine operations:

  • Live migration of VM to another host

  • Virtual machine snapshots (group snapshot without memory)

  • Revert VM snapshot

  • Delete VM snapshot

Supported Volume operations:

  • Attach/detach volume

  • Live migrate volume between two StorPool primary storages

  • Volume snapshot

  • Delete snapshot

  • Revert snapshot

Note that volume snapshots are allowed only when sp.bypass.secondary.storage is set to true, which means the snapshots are not backed up to secondary storage.

Temporarily backup StorPool volume before expunge

Sometimes a user could delete a volume by mistake. The StorPool plug-in provides a way to prevent data loss in such a situation. When the storpool.delete.after.interval and storpool.list.snapshots.delete.after.interval global settings are set (see Setup), the plugin creates a backup of the volume before it is deleted. This way, the user is able to see the snapshot in CloudStack's UI/CLI and to create a volume from it.

Note that this mechanism supports neither encrypted volumes nor creating a template from the snapshot. You can only restore the volume from the snapshot via the createVolume API call.
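A hedged example of restoring such a volume via the createVolume API call (placeholder values):

create volume snapshotid={snapshot UUID} name={name for the restored volume}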

Using QoS

Supported for versions 4.20 and newer. Also supported via a custom plugin build for versions 4.17 to 4.19.

StorPool provides the storpool_qos service (see Quality of service), which tracks and configures the storage tier for all volumes based on a qc tag specifying the storage tier of each volume.

To manage the QoS limits with a qc tag, add a resource detail with the key SP_QOSCLASS to each disk offering to which a tier should be applied; the value is the tier name from the configuration file of the storpool_qos service:

add resourcedetail resourceid={diskofferingid} details[0].key=SP_QOSCLASS details[0].value={the name of the tier from the config} resourcetype=DiskOffering

To change the tier via CloudStack, use the changeOfferingForVolume API call. The size parameter is required, but you can pass the current volume size. Example:

change offeringforvolume id={The UUID of the Volume} diskofferingid={The UUID of the disk offering} size={The current or a new size for the volume}

Users who were using offerings to change the StorPool template via the SP_TEMPLATE detail will keep this functionality, but should use the changeOfferingForVolume API call instead of:

  • resizeVolume API call for DATA disk

  • scaleVirtualMachine API call for ROOT disk

If the disk offering has both SP_TEMPLATE and SP_QOSCLASS defined, the SP_QOSCLASS detail takes priority, and the volume's QoS is set using the respective qc tag value. If the QoS of a volume is changed manually, the storpool_qos service automatically resets the QoS limits to follow the qc tag value once per minute.

Creating Disk Offering for each tier

Go to Service Offerings > Disk Offering > Add disk offering. Then add the disk offering detail with an API call in the CloudStack CLI:

add resourcedetail resourcetype=diskoffering resourceid=$UUID details[0].key=SP_QOSCLASS details[0].value=$TIER_NAME

Creating VM with QoS

To deploy a virtual machine:

  1. Go to Compute > Instances > Add Instances.

  2. For the ROOT volume, choose the option Override disk offering. This will set the required qc tag from the disk offering (DO) detail.

To create a DATA disk with QoS:

  1. Create volume via GUI/CLI.

  2. Choose a disk offering which has the required SP_QOSCLASS detail.

To update the tier of a ROOT/DATA volume, go to Storage > Volumes, select the volume, and click Change disk offering for the volume in the upper-right corner.