Volumes

Volumes are the basic service of the StorPool storage system. The basic features of a volume are as follows:

  • It always has a name and a certain size.

  • It can be read from and written to.

  • It can be attached to hosts as a read-only or read-write block device under the /dev/storpool directory.

  • It may have one or more tags, created or changed using the name=value form.

The name of a volume is a string consisting of one or more of the allowed characters: upper- and lowercase Latin letters (a-z, A-Z), digits (0-9), and the delimiters dot (.), colon (:), dash (-), and underscore (_). The same rules apply to the keys and values used for volume tags. The volume name, including tags, cannot exceed 200 bytes.
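For example, the following statement creates a volume whose name and tag follow these rules (the name and tag values are purely illustrative):

# storpool volume db-data.01:primary create size 100G template hybrid tag env=prod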

When creating a volume, you must specify at minimum its name, the template or placement/replication details, and its size. Here is an example:

# storpool volume testvolume create size 100G template hybrid

Volume parameters

When performing volume operations you can use the following parameters:

placeAll

Place all objects in placementGroup (Default value: default).

placeTail

Name of the placementGroup for the reader (Default value: same as placeAll value).

placeHead

Place the third replica in a different placementGroup (Default value: same as placeAll value).

template

Use a template with preconfigured placement, replication, and/or limits; for details, see Templates. Using templates is strongly encouraged, as it makes tracking and capacity management easier.

parent

Use a snapshot as a parent for this volume.

reuseServer

Allow placing multiple copies on the same server.

baseOn

Use a parent volume; this will create a transient snapshot used as a parent. For details, see Snapshots.

iops

Set the maximum IOPS limit for this volume.

bw

Set maximum bandwidth limit (in MB/s).

tag

Set a tag for this volume in the form name=value.

create

Create the volume; fail if it already exists. Mandatory when creating a new volume. Creating volumes without setting this option is deprecated.

update

Update the volume; fail if it does not exist. Mandatory for operations that modify a volume; see the examples in Managing volumes. Modifying volumes without setting this option is deprecated.

limitType

Specify whether the iops and bw limits apply to the total size of the block device or to each GiB of it (one of “total” or “perGiB”).
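Several of these parameters can be combined in a single statement. Below is a sketch using the placement groups hdd and ssd and the snapshot testvolume@35691 that appear elsewhere in this section; the volume names and limit values are illustrative:

# storpool volume bigvolume create size 500G placeAll hdd placeTail ssd iops 5000 bw 200 limitType total tag env=test
# storpool volume restored create parent testvolume@35691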

The create option is useful in scripts when you have to prevent an unintended update of a volume:

# storpool volume test create template hybrid
OK
# storpool volume test create size 200G template hybrid
Error: Volume 'test' already exists

A statement with the update parameter will fail with an error if the volume does not exist:

# storpool volume test update template hybrid size +100G
OK
# storpool volume test1 update template hybrid
Error: volume 'test1' does not exist

Listing all volumes

To list all available volumes:

# storpool volume list
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| volume               |    size  | rdnd. | placeHead  | placeAll   | placeTail  |   iops  |    bw   | parent               | template             | flags     | tags       |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| testvolume           |  100 GiB |     3 | ultrastar  | ultrastar  | ssd        |       - |       - | testvolume@35691     | hybrid               |           | name=value |
| testvolume_8_2       |  100 GiB |   8+2 |       nvme |      nvme  | nvme       |       - |       - | testvolume_8_2@35693 | nvme                 |           | name=value |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Flags:
R - allow placing two disks within a replication chain onto the same server
t - volume move target; waiting for the move to finish
G - IOPS and bandwidth limits are per GiB and depend on the volume/snapshot size

Listing exported volumes

To list volumes exported to other sub-clusters in a multi-cluster setup:

# storpool volume list exports
---------------------------------
| remote    | volume | globalId |
---------------------------------
| Lab-D-cl2 | test   | d.n.buy  |
---------------------------------

To list volumes exported in other sub-clusters to this one in a multi-cluster setup:

# storpool volume list remote
--------------------------------------------------------------------------
| location | remoteId | name | size         | creationTimestamp   | tags |
--------------------------------------------------------------------------
| Lab-D    | d.n.buy  | test | 137438953472 | 2020-05-27 11:57:38 |      |
--------------------------------------------------------------------------

Note

Once attached, a remotely exported volume will no longer be visible in volume list remote, even if the export is still visible in the remote cluster in volume list exports. Each export invocation in the local cluster is used up by an attach operation in the remote cluster.

Volume status

To get an overview of all volumes and snapshots and their state in the system:

# storpool volume status
----------------------------------------------------------------------------------------------------------------------------------------------------
| volume               |     size | rdnd. | tags       |  alloc % |   stored |  on disk | syncing | missing | status    | flags | drives down      |
----------------------------------------------------------------------------------------------------------------------------------------------------
| testvolume           |  100 GiB |     3 | name=value |    0.0 % |     0  B |     0  B |    0  B |    0  B | up        |       |                  |
| testvolume@35691     |  100 GiB |     3 |            |  100.0 % |  100 GiB |  317 GiB |    0  B |    0  B | up        | S     |                  |
----------------------------------------------------------------------------------------------------------------------------------------------------
| 2 volumes            |  200 GiB |       |            |   50.0 % |  100 GiB |  317 GiB |    0  B |    0  B |           |       |                  |
----------------------------------------------------------------------------------------------------------------------------------------------------

Flags:
  S - snapshot
  B - balancer blocked on this volume
  D - decreased redundancy (degraded)
  M - migrating data to a new disk
  R - allow placing two disks within a replication chain onto the same server
  t - volume move target; waiting for the move to finish
  C - disk placement constraints violated, rebalance needed

The columns in this output are:

  • volume - name of the volume or snapshot (see flags below)

  • size - the provisioned volume size; for example, the size visible inside a VM

  • rdnd. - number of copies for this volume or its erasure coding scheme

  • tags - all custom key=value tags configured for this volume or snapshot

  • alloc % - what percentage of the volume's provisioned size has been allocated

  • stored - space allocated on this volume

  • on disk - the size allocated on all drives in the cluster after replication and the overhead from data protection

  • syncing - how much data is not in sync after a drive or server was missing; the data is recovered automatically once the missing drive or server is back in the cluster

  • missing - how much data is not available for this volume when the volume's status is down; see status below

  • status - shows the status of the volume, which could be one of:

    • up - all copies are available

    • down - none of the copies are available for some parts of the volume

    • up soon - all copies are available and the volume will soon come up

  • flags - flags denoting features of this volume:

    • S - stands for snapshot, which is essentially a read-only (frozen) volume

    • B - used to denote that the balancer is blocked for this volume (usually when some of the drives are missing)

    • D - this flag is displayed when some of the copies are either not available or outdated and the volume is running with decreased redundancy

    • M - displayed when changing the replication or a cluster re-balance is in progress

    • R - displayed when the policy for keeping copies on different servers is overridden

    • C - displayed when the volume or snapshot placement constraints are violated

  • drives down - displayed when the volume is in the down state, listing the drives required to get the volume back up.

Sizes are shown in B, KiB, MiB, GiB, TiB, or PiB.
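For example, in the status output above the snapshot testvolume@35691 has 100 GiB stored with 3 copies, so its data takes roughly 3 × 100 GiB = 300 GiB on the cluster's drives; the 317 GiB in the on disk column is this amount plus the overhead from data protection.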

To get just the status data from the storpool_controller services in the cluster, without any information about the stored and on-disk sizes:

# storpool volume quickStatus
----------------------------------------------------------------------------------------------------------------------------------------------------
| volume               |     size | rdnd. | tags       |  alloc % |   stored |  on disk | syncing | missing | status    | flags | drives down      |
----------------------------------------------------------------------------------------------------------------------------------------------------
| testvolume           |  100 GiB |     3 | name=value |    0.0 % |     0  B |     0  B |    0  B |    0  B | up        |       |                  |
| testvolume@35691     |  100 GiB |     3 |            |    0.0 % |     0  B |     0  B |    0  B |    0  B | up        | S     |                  |
----------------------------------------------------------------------------------------------------------------------------------------------------
| 2 volumes            |  200 GiB |       |            |    0.0 % |     0  B |     0  B |    0  B |    0  B |           |       |                  |
----------------------------------------------------------------------------------------------------------------------------------------------------

Note

quickStatus has less impact on the storpool_server services, and thus on end-user operations, because the gathered data does not include the detailed per-volume storage statistics provided with status.

Used space estimation

To check the estimated used space by the volumes in the system:

# storpool volume usedSpace
-----------------------------------------------------------------------------------------
| volume               |        size | rdnd. |      stored |        used | missing info |
-----------------------------------------------------------------------------------------
| testvolume           |     100 GiB |     3 |     1.9 GiB |     100 GiB |         0  B |
-----------------------------------------------------------------------------------------

The columns are as follows:

  • volume - name of the volume

  • size - the provisioned size of this volume

  • rdnd. - number of copies for this volume or its erasure coding scheme

  • stored - how much data is stored in this volume itself (not counting its parent snapshots)

  • used - how much data has been written (including the data written in parent snapshots)

  • missing info - if this value is anything other than 0 B, some of the storpool_controller services in the cluster are probably not running correctly.
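For example, testvolume above stores only 1.9 GiB of data written directly to it, while 100 GiB in total has been written when its parent snapshots are counted in; the rest of the data resides in the parent snapshots.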

Note

The used column shows how much data is accessible and reserved for this volume.

Listing disk sets and objects

To list the target disk sets and objects of a volume:

# storpool volume testvolume list
volume testvolume
size 100 GiB
replication 3
placeHead hdd
placeAll hdd
placeTail ssd
target disk sets:
       0: 1122 1323 1203
       1: 1424 1222 1301
       2: 1121 1324 1201
[snip]
  object: disks
       0: 1122 1323 1203
       1: 1424 1222 1301
       2: 1121 1324 1201
[snip]

Hint

In this example, the volume has hybrid placement, with two copies on HDDs and one copy on SSDs (the rightmost column in the disk sets). The target disk sets are lists of triplets of drives in the cluster, used as a template for the actual objects of the volume.

To get detailed info about the disks used for this volume and the number of objects on each of them:

# storpool volume testvolume info
  diskId | count
    1101 |   200
    1102 |   200
    1103 |   200
  [snip]

chain                | count
1121-1222-1404       |  25
1121-1226-1303       |  25
1121-1226-1403       |  25
[snip]

diskSet              | count
218-313-402          |   3
218-317-406          |   3
219-315-402          |   3

Note

The order of the disks in a diskSet does not follow placeHead, placeAll, placeTail; check the actual order in the storpool volume <volumename> list output. This is done so that disk sets containing the same disks in a different order are counted together; for example, [101, 201, 301] is accounted as the same diskSet as [201, 101, 301].

Managing volumes

To rename a volume:

# storpool volume testvolume update rename newvolume
OK

Attention

Changing the name of a volume will not wait for clients that have this volume attached to update the name of the symlink. Always use client sync for all clients that have the volume attached.
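A sketch of what this may look like, assuming the volume is attached on the client with ID 11 (both the client ID and the exact form of the sub-command are assumptions; see Attachments for details):

# storpool client 11 sync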

To add a tag for a volume:

# storpool volume testvolume update tag name=value

To change a tag for a volume:

# storpool volume testvolume update tag name=newvalue

To remove a tag just set it to an empty value:

# storpool volume testvolume update tag name=

To resize a volume up:

# storpool volume testvolume update size +1G
OK

To shrink a volume (resize down):

# storpool volume testvolume update size 50G shrinkOk

Attention

Shrinking a StorPool volume changes the size of the block device, but does not adjust the size of an LVM volume or a filesystem contained in it. Failing to adjust the size of the filesystem or LVM before shrinking the StorPool volume will result in data loss.
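A minimal sketch of the safe order for a volume holding an ext4 filesystem directly on the block device (the device path follows the /dev/storpool convention described above; adjust the names and sizes for your setup):

# umount /dev/storpool/testvolume
# e2fsck -f /dev/storpool/testvolume
# resize2fs /dev/storpool/testvolume 50G
# storpool volume testvolume update size 50G shrinkOk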

To delete a volume:

# storpool volume vol1 delete vol1

Note

To avoid accidents, the volume name must be entered twice. As a safety precaution, attached volumes cannot be deleted even when not in use. For details, see Attachments.

A volume based on a snapshot can be converted to a stand-alone volume. For example, the testvolume below is based on an anonymous snapshot:

# storpool_tree
StorPool
  `-testvolume@37126
     `-testvolume

To rebase it against root (also known as “promote”):

# storpool volume testvolume rebase
OK
# storpool_tree
StorPool
  |- testvolume@37126 [snapshot]
  `- testvolume [volume]

The rebase operation can also target a particular snapshot in a chain of parent snapshots of the volume:

# storpool_tree
StorPool
  `- testvolume-snap1 [snapshot]
     `- testvolume-snap2 [snapshot]
        `- testvolume-snap3 [snapshot]
           `- testvolume [volume]
# storpool volume testvolume rebase testvolume-snap2
OK

After the operation, the volume is directly based on testvolume-snap2 and includes all changes from testvolume-snap3:

# storpool_tree
StorPool
  `- testvolume-snap1 [snapshot]
     `- testvolume-snap2 [snapshot]
        |- testvolume [volume]
        `- testvolume-snap3 [snapshot]

To back up a volume named testvolume to a configured remote location LocationA-CityB:

# storpool volume testvolume backup LocationA-CityB
OK

After this operation, a temporary snapshot will be created and transferred to the LocationA-CityB location. After the transfer completes, the local temporary snapshot will be deleted, and the remote snapshot will be visible as exported from LocationA-CityB. For more information on working with snapshot exports, see Remote snapshots.

When backing up a volume, one or more tags may be applied to the remote snapshot, as in the example below:

# storpool volume testvolume backup LocationA-CityB tag key=value # [tag key2=value2]
OK

To move a volume to a different cluster in a multi-cluster environment:

# storpool volume testvolume moveToRemote Lab-D-cl2 # onAttached export

Note

Moving a volume to a remote cluster will fail if the volume is attached on a local host. What to do in such a case can be specified with the onAttached parameter, as shown in the comment in the example above. For more information, see Volume and snapshot move.