CloudStack integration

The StorPool primary storage plugin is used on both the CloudStack management servers and the agents. It is intended to be self-contained and non-intrusive. All StorPool-related communication (for example, data copying, volume resize) is done with StorPool-specific commands.

The information provided here applies to CloudStack versions 4.17.0.0 and newer, where the StorPool plugin is part of the standard CloudStack installation. If you are running a CloudStack version older than 4.17.0.0, install the plugin provided in the StorPool CloudStack repository.

For more high-level information on how you can accelerate your CloudStack deployment using StorPool, see the StorPool site.

Introduction

The StorPool plug-in is deeply integrated with CloudStack and works with KVM hypervisors. With appropriate service or disk offerings, an administrator can build an environment in which each root or data disk a user creates results in the dynamic creation of a StorPool volume with guaranteed performance. Such a StorPool volume is associated with exactly one CloudStack volume, so the performance of the CloudStack volume does not vary depending on how heavily other tenants are using the system. Volume migration is supported from non-managed storage pools (for example, local storage) to StorPool, and between StorPool storage pools. For more information about StorPool volumes, see 15.  Volumes and snapshots.

CloudStack overview

For more information, see CloudStack’s documentation for Storage Setup.

Primary and Secondary storage

Primary storage is associated with a cluster or zone, and it stores the virtual disks for all the VMs running on hosts in that cluster/zone.

Secondary storage includes the following:

  • Templates are OS images that can be used to boot VMs, and can include additional configuration information, such as installed applications.

  • ISO images are disc images containing data or bootable media for operating systems.

  • Disk volume snapshots are saved copies of VM data, which can be used for data recovery or for creating new templates.

ROOT and DATA volumes

ROOT volumes correspond to the boot disk of a VM. They are created automatically by CloudStack during VM creation, based on the system disk offering corresponding to the service offering the user VM is based on. The disk offering of a ROOT volume can be changed, but only to another system-created disk offering.

DATA volumes correspond to additional disks. These can be created by users and then attached to or detached from VMs. DATA volumes are created based on a user-defined disk offering.

Setup

Setting up StorPool

First, perform the StorPool installation. Note the following:

  • Create a template to be used by CloudStack. You must set placeHead, placeAll, placeTail, and replication (see the sketch after this list).

  • There is no need to set a default volume size, because it is determined by the CloudStack disk and service offerings.
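For example, a minimal sketch of creating such a template with the StorPool CLI; the template name cloudstack and the placement group names ssd and hdd are placeholder values for illustration:

# Placeholder names; adjust the replication factor and placement groups to your cluster
storpool template cloudstack replication 3 placeHead ssd placeAll hdd placeTail ssd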

Setting up a StorPool PRIMARY storage pool in CloudStack

Tip

For more information about this procedure, check Adding Primary Storage in the CloudStack documentation.

The next step after installing StorPool is configuring the plugin. Log in to the CloudStack UI, go to Infrastructure > Primary Storage > Add Primary Storage, and enter the following details:

  • Command: createStoragePool

  • Scope: select Zone-Wide

  • Hypervisor: select KVM

  • Zone: pick appropriate zone

  • Zone id: enter your zone id

  • Name: enter a name for the primary storage

  • Protocol: select custom

  • Path: enter /dev/storpool (a required argument, but its value is not actually used by the plugin).

  • Provider: select StorPool

  • Managed: leave unchecked (currently ignored)

  • Capacity Bytes: used for accounting purposes only. May be more or less than the actual StorPool template capacity.

  • Capacity IOPS: currently not used (it may later be used for max IOPS limitations on volumes from this pool).

  • URL: enter SP_API_HTTP=address:port;SP_AUTH_TOKEN=token;SP_TEMPLATE=template_name (see the example after this list). At present, one template can be used for at most one storage pool.

    • SP_API_HTTP - address of StorPool API

    • SP_AUTH_TOKEN - StorPool’s token

    • SP_TEMPLATE - name of StorPool’s template

    • For more information about these values, see 6.  Node configuration options and 12.14.  Templates.

    • For versions 4.19.1.0 and newer, you can use an alternative format for the URL: storpool://{SP_AUTH_TOKEN}@{SP_API_HTTP}:{SP_API_HTTP_PORT}/{SP_TEMPLATE}

  • Storage Tags: If left blank, the StorPool storage plugin will use the pool name to create a corresponding storage tag. This storage tag may be used later, when defining service or disk offerings.
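For example, assuming an API address of 10.1.100.1, port 81, a token 1234567890, and a template named cloudstack (all placeholder values), the URL would be:

SP_API_HTTP=10.1.100.1:81;SP_AUTH_TOKEN=1234567890;SP_TEMPLATE=cloudstack

or, in the alternative format for versions 4.19.1.0 and newer:

storpool://1234567890@10.1.100.1:81/cloudstack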

After adding StorPool as primary storage, you can set the following parameters in the Settings tab:

sp.bypass.secondary.storage

Controls whether snapshot backups to secondary storage are bypassed for StorPool managed storage; see the notes on the bypass option below.

sp.cluster.id

For StorPool multi-cluster authorization (it is set automatically for each cluster).

sp.enable.alternative.endpoint

Used for StorPool primary storage; defines whether the alternative endpoint should be used.

sp.alternative.endpoint

Used for StorPool primary storage; specifies the alternative endpoint. The format is SP_API_HTTP=address:port;SP_AUTH_TOKEN=token;SP_TEMPLATE=template_name.

storpool.volume.tags.checkup

Minimal interval (in seconds) to check and report if a StorPool volume created by CloudStack exists in CloudStack’s database.

storpool.snapshot.tags.checkup

Minimal interval (in seconds) to check and report if a StorPool snapshot created by CloudStack exists in CloudStack’s database.
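These settings can be changed from the UI or through the updateConfiguration API call. A sketch using the CloudStack CLI (the value shown is a placeholder):

update configuration name=sp.bypass.secondary.storage value=true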

Plugin functionality

Actions

Plugin action                       | CloudStack action                                 | management/agent              | Implementation details
------------------------------------+---------------------------------------------------+-------------------------------+---------------------------------------------
Create ROOT volume from ISO         | create VM from ISO                                | management                    | createVolumeAsync
Create ROOT volume from Template    | create VM from Template                           | management + agent            | copyAsync (T => T, T => V)
Create DATA volume                  | create Volume                                     | management                    | createVolumeAsync
Attach ROOT/DATA volume             | start VM (+attach/detach Volume)                  | agent                         | connectPhysicalDisk
Detach ROOT/DATA volume             | stop VM                                           | agent                         | disconnectPhysicalDiskByPath
Migrate VM                          | migrate VM                                        | agent                         | attach + detach
Delete ROOT volume                  | destroy VM (expunge)                              | management                    | deleteAsync
Delete DATA volume                  | delete Volume (detached)                          | management                    | deleteAsync
Create ROOT/DATA volume snapshot    | snapshot volume                                   | management + agent            | takeSnapshot + copyAsync (S => S)
Create volume from snapshot         | create volume from snapshot                       | management + agent(?)         | copyAsync (S => V)
Create TEMPLATE from ROOT volume    | create template from volume                       | management + agent            | copyAsync (V => T)
Create TEMPLATE from snapshot       | create template from snapshot                     | SECONDARY STORAGE             |
Download volume                     | download volume                                   | management + agent            | copyAsync (V => V)
Revert ROOT/DATA volume to snapshot | revert to snapshot                                | management                    | revertSnapshot
(Live) resize ROOT/DATA volume      | resize volume                                     | management + agent            | resize + StorpoolResizeCmd
Delete SNAPSHOT (ROOT/DATA)         | delete snapshot                                   | management                    | StorpoolSnapshotStrategy
Delete TEMPLATE                     | delete template                                   | agent                         | deletePhysicalDisk
migrate VM/volume                   | migrate VM/volume to another storage              | management/management + agent | copyAsync (V => V)
VM snapshot                         | group snapshot of VM’s disks                      | management                    | StorpoolVMSnapshotStrategy takeVMSnapshot
revert VM snapshot                  | revert group snapshot of VM’s disks               | management                    | StorpoolVMSnapshotStrategy revertVMSnapshot
delete VM snapshot                  | delete group snapshot of VM’s disks               | management                    | StorpoolVMSnapshotStrategy deleteVMSnapshot
VM vc_policy tag                    | vc_policy tag for all disks attached to VM        | management                    | StorPoolCreateTagsCmd
delete VM vc_policy tag             | remove vc_policy tag for all disks attached to VM | management                    | StorPoolDeleteTagsCmd

Note the following:

  • When using multiple clusters, set the value of StorPool’s SP_CLUSTER_ID in the “sp.cluster.id” setting for each CloudStack cluster.

  • Secondary storage can be bypassed by setting “sp.bypass.secondary.storage” to true. In this case only snapshots are affected: they will not be backed up to secondary storage. See the example below.
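For multi-cluster setups, a sketch of setting the cluster-scoped value with the CloudStack CLI (the UUID and the SP_CLUSTER_ID value are placeholders; the plugin normally sets this automatically):

update configuration name=sp.cluster.id value={SP_CLUSTER_ID} clusterid={cluster UUID}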

Creating template from a snapshot

When the bypass option is enabled, the snapshot exists only on PRIMARY (StorPool) storage. From this snapshot, a template will be created on SECONDARY storage.
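A sketch of this operation with the CloudStack CLI (all IDs and names are placeholders):

create template name={template name} displaytext={description} snapshotid={snapshot UUID} ostypeid={OS type UUID}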

Creating ROOT volume from templates

When creating the first volume based on a given template, if a snapshot of the template does not exist on StorPool, it is first downloaded (cached) to PRIMARY storage. The cache is mapped to a StorPool snapshot, so creating consecutive volumes from the same template does not incur additional copying of data to PRIMARY storage.

This cached snapshot is garbage collected when the original template is deleted from CloudStack. This cleanup is done by a background task in CloudStack.

Creating a ROOT volume from an ISO image

We just need to create the volume. The ISO installation is handled by CloudStack.

Creating a DATA volume

DATA volumes are created by CloudStack the first time they are attached to a VM.

Creating volume from snapshot

Because the snapshot already exists on PRIMARY storage, no data is copied. When there is no corresponding StorPool snapshot, the snapshot is first copied from SECONDARY to StorPool PRIMARY storage.
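A sketch with the CloudStack CLI (the name and UUID are placeholders):

create volume name={volume name} snapshotid={snapshot UUID}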

Resizing volumes

A resize command is sent to the agent on the host where the VM with the attached volume is running, so that the resize becomes visible to the VM.
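A sketch of a (live) resize with the CloudStack CLI (the UUID and size are placeholders; the size is in GB):

resize volume id={volume UUID} size={new size in GB}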

Creating snapshots

When the bypass option is not enabled, the snapshot is first created on PRIMARY storage (StorPool) and then backed up to SECONDARY storage (tested with NFS secondary storage). The original StorPool snapshot is kept, so that creating volumes from the snapshot does not copy the data to PRIMARY storage again. When the snapshot is deleted from CloudStack, the corresponding StorPool snapshot is deleted as well.

Currently snapshots are taken in RAW format.
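A sketch of taking a volume snapshot with the CloudStack CLI (the UUID is a placeholder):

create snapshot volumeid={volume UUID}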

Reverting volume to snapshot

Reverting a volume to a snapshot is handled entirely by StorPool.
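A sketch with the CloudStack CLI (the UUID is a placeholder):

revert snapshot id={snapshot UUID}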

Migrating volumes to other Storage pools

Migration has been tested with storage pools on NFS only.
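A sketch of a live volume migration with the CloudStack CLI (the UUIDs are placeholders):

migrate volume volumeid={volume UUID} storageid={destination pool UUID} livemigrate=true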

Virtual machine snapshot and group snapshot

StorPool supports consistent snapshots of volumes attached to a virtual machine.
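A sketch of taking a group snapshot without memory with the CloudStack CLI (the UUID is a placeholder):

create vmsnapshot virtualmachineid={VM UUID} snapshotmemory=false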

BW/IOPS limitations

Max IOPS limits are applied to StorPool volumes through custom service offerings, by adding IOPS limits to the corresponding system disk offering.

CloudStack has no way to specify a max bandwidth (BW) limit.
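A sketch of creating a disk offering with IOPS limits using the CloudStack CLI (all names and values are placeholders):

create diskoffering name={offering name} displaytext={description} disksize={size in GB} miniops={min IOPS} maxiops={max IOPS}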

Support for host HA

Supported for versions 4.19 and newer.

If StorPool is used as primary storage, the administrator can choose the StorPool-provided heartbeat mechanism. It relies on the presence of the host in the storage network, removing the need for additional NFS primary storage used only for HA purposes. The StorPool heartbeat mechanism currently works for compute-only nodes; support for hyper-converged nodes is on the roadmap.

The StorPool heartbeat plug-in for CloudStack is part of the standard CloudStack installation. No additional work is required to add and enable this component.

If there is more than one primary storage pool in the cluster and the administrator wants node fencing when any one of them is down, set the kvm.ha.fence.on.storage.heartbeat.failure global setting to true.

Supported operations for volume encryption

Supported virtual machine operations:

  • Live migration of VM to another host

  • Virtual machine snapshots (group snapshot without memory)

  • Revert VM snapshot

  • Delete VM snapshot

Supported Volume operations:

  • Attach/detach volume

  • Live migrate volume between two StorPool primary storages

  • Volume snapshot

  • Delete snapshot

  • Revert snapshot

Note that volume snapshots are allowed only when sp.bypass.secondary.storage is set to true. This means that the snapshots are not backed up to secondary storage.

Using QoS

Supported for versions 4.20 and newer. Also supported via a custom plugin build in versions 4.17 to 4.19.

StorPool provides the ‘storpool_qos’ service (see Quality of service) that tracks and configures the storage tier for all volumes, based on a qc tag specifying the storage tier for each volume.

To manage the QoS limits with a qc tag, add a resource detail with the key SP_QOSCLASS to each disk offering to which a tier should be applied; the value is the tier name from the configuration file of the storpool_qos service:

add resourcedetail resourceid={diskofferingid} details[0].key=SP_QOSCLASS details[0].value={the name of the tier from the config} resourcetype=DiskOffering

To change the tier via CloudStack, use the changeOfferingForVolume API call. The size parameter is required, but the user can pass the current volume size. Example:

change offeringforvolume id={The UUID of the Volume} diskofferingid={The UUID of the disk offering} size={The current or a new size for the volume}

Users who were using the offerings to change the StorPool template via the SP_TEMPLATE detail will continue to have this functionality, but should use the changeOfferingForVolume API call instead of:

  • resizeVolume API call for DATA disk

  • scaleVirtualMachine API call for ROOT disk

If the disk offering has both SP_TEMPLATE and SP_QOSCLASS defined, the SP_QOSCLASS detail will be prioritized, setting the volume’s QoS using the respective qc tag value. If the QoS for a volume is changed manually, the ‘storpool_qos’ service will automatically reset the QoS limits to match the qc tag value once per minute.

Creating Disk Offering for each tier

Go to Service Offerings > Disk Offering > Add disk offering. Then add the disk offering detail with an API call in the CloudStack CLI:

add resourcedetail resourcetype=diskoffering resourceid={disk offering UUID} details[0].key=SP_QOSCLASS details[0].value={tier name}

Creating VM with QoS

To deploy a virtual machine:

  1. Go to Compute > Instances > Add Instances.

  2. For the ROOT volume, choose the option Override disk offering. This will set the required qc tag from the disk offering (DO) detail.

Creating DATA disk with QoS:

  1. Create volume via GUI/CLI.

  2. Choose a disk offering which has the required SP_QOSCLASS detail.

To update the tier of a ROOT/DATA volume, go to Storage > Volumes, select the volume, and click Change disk offering for the volume in the upper-right corner.

Plugin development

Note

The information in this section is intended for developers. You can ignore it if you just want to use the StorPool plugin with your CloudStack setup.

Building

Go to the source directory and run:

mvn -Pdeveloper -DskipTests install

The resulting JAR file is located in the target/ subdirectory. Note the following:

  • Checkstyle errors: before compilation, a code style check is performed; if it fails, the compilation is aborted. In short: no trailing whitespace, indent using 4 spaces (not tabs), and comment out or remove unused imports.

  • You need to build both the KVM plugin and the StorPool plugin proper; see the sketch below.
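A sketch of building both modules from the CloudStack source root (the module paths are assumptions based on the upstream source layout; -am also builds required dependencies):

mvn -Pdeveloper -DskipTests install -pl plugins/hypervisors/kvm -am
mvn -Pdeveloper -DskipTests install -pl plugins/storage/volume/storpool -am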

Installing

For each CloudStack management host:

scp ./target/cloud-plugin-storage-volume-storpool-{version}.jar {MGMT_HOST}:/usr/share/cloudstack-management/lib/

For each CloudStack agent host:

scp ./target/cloud-plugin-storage-volume-storpool-{version}.jar {AGENT_HOST}:/usr/share/cloudstack-agent/plugins/

Note the following:

  • CloudStack management and agent services must be restarted after adding the plugin to the respective directories (see the example after this list).

  • Agents should have access to the StorPool management API, since attach and detach operations happen on the agent. This is required due to CloudStack’s design.
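For example, on systemd-based distributions the services are typically restarted with:

systemctl restart cloudstack-management    # on each management host
systemctl restart cloudstack-agent         # on each agent host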