Volumes
Volumes are the basic service of the StorPool storage system. The basic features of a volume are as follows:
It always has a name and a certain size.
It can be read from and written to.
It can be attached to hosts as a read-only or read-write block device under the /dev/storpool directory.
It may have one or more tags, created or changed using the name=value form.
For an overview of how volumes work, see Volumes and snapshots.
Command parameters
You can perform volume operations on the command line using the
storpool volume command.
Common parameters
When no volume name is specified to the storpool volume command, you can use the following parameters:
list: Get a list of volumes; for details, see Listing all volumes.
groupBackup: Back up a group of volumes to a remote location.
quickStatus: Get just the status data from the storpool_controller services in the cluster; for details, see Volume status.
status: Get an overview of all volumes and snapshots and their state in the system; for details, see Volume status.
usedSpace: Check the estimated space used by the volumes in the system; for details, see Used space estimation.
Volume-specific parameters
When a volume name is specified to the storpool volume command, you can use the following parameters:
backup: Back up a volume to a remote location; for details, see Backing up to a remote location.
baseOn: Use a parent volume; this will create a transient snapshot used as a parent. For details, see Snapshots.
bound: Create a bound snapshot.
bw: Set the maximum bandwidth limit (in MB/s).
create: Create the volume; fail if it already exists. Mandatory when creating a new volume. Creating volumes without setting this option is deprecated. For details, see Creating a volume.
delete: Delete a volume; for details, see Deleting volumes.
freeze: Freeze a volume.
iops: Set the maximum IOPS limit for this volume (in IOPS).
limitType: Specify whether the iops and bw limits apply to the total size of the block device or to each GiB (one of “total” or “perGiB”).
list: List the target disk sets and objects of a volume; for details, see Listing disk sets and objects.
moveToRemote: Move a volume to a different cluster in a multi-cluster environment; for details, see Moving to another cluster.
placeAll: Name of the “All” placement group. Default value: default.
placeHead: Name of the “Head” placement group (see Placement groups). Default value: same as placeAll.
placeTail: Name of the “Tail” placement group. Default value: same as placeAll.
parent: Use a snapshot as a parent for this volume.
rebase: Convert a volume from based-on-a-snapshot to a stand-alone volume; for details, see Rebasing volumes.
remote: Create from a remote volume.
rename: Rename a volume; for details, see Renaming volumes.
revertToSnapshot: Revert a volume to an existing snapshot; for details, see Volume operations.
reuseServer: Place multiple copies on the same server.
tag: Set a tag for this volume in the form name=value; for details, see Using tags.
template: Use a template with preconfigured placement, replication, and/or limits; for details, see Templates. Using templates is strongly encouraged, as it makes tracking and capacity management easier.
update: Update the volume; fail if it does not exist. Mandatory for operations where a volume is modified; see the examples in Managing volumes. Modifying volumes without setting this option is deprecated.
A statement with the update parameter will fail with an error if the volume does not exist:
# storpool volume test update template hybrid size +100G
OK
# storpool volume test1 update template hybrid
Error: volume 'test1' does not exist
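The difference between the two limitType values described above can be illustrated with a small calculation: with “total” the iops or bw value applies to the whole block device, while with “perGiB” the effective limit scales with the volume size. The sketch below is illustrative; the function name and figures are hypothetical, not part of the StorPool CLI:

```python
def effective_iops_limit(iops: int, limit_type: str, size_gib: int) -> int:
    """Compute the effective IOPS limit for a volume.

    With "total" the limit applies to the whole block device;
    with "perGiB" it is multiplied by the volume size in GiB.
    Illustrative sketch only, not StorPool's implementation.
    """
    if limit_type == "total":
        return iops
    if limit_type == "perGiB":
        return iops * size_gib
    raise ValueError(f"unknown limitType: {limit_type!r}")

# A 100 GiB volume with iops 50:
print(effective_iops_limit(50, "perGiB", 100))  # 5000
print(effective_iops_limit(50, "total", 100))   # 50
```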
Creating a volume
When creating a volume you must specify at minimum its name, the template or placement/replication details (see Templates and Placement groups), and its size. Here is an example:
# storpool volume testvolume create size 100G template hybrid
The name of a volume is a string consisting of one or more of the following allowed characters:
Upper and lower Latin letters (a-z, A-Z)
Numbers (0-9)
Delimiters: dot (.), colon (:), dash (-), or underscore (_)
The same rules apply to the keys and values used for volume tags. Note that the volume name, including its tags, cannot exceed 200 bytes.
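The naming rules above can be expressed as a simple check. The sketch below validates the bare name only (the 200-byte limit formally applies to the name including tags); it is an illustration of the documented rules, not the exact server-side validation:

```python
import re

# Allowed characters per the rules above: Latin letters, digits,
# and the delimiters dot, colon, dash, underscore.
_ALLOWED = re.compile(r"^[A-Za-z0-9.:\-_]+$")

def check_volume_name(name: str) -> bool:
    """Validate a volume name against the documented character set
    and the 200-byte length limit (checked here for the bare name).
    """
    return bool(_ALLOWED.match(name)) and len(name.encode("utf-8")) <= 200

print(check_volume_name("test.volume-1"))  # True
print(check_volume_name("bad name"))       # False: space is not allowed
```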
The create parameter is useful in scripts when you have to prevent an
involuntary update of a volume:
# storpool volume test create template hybrid
OK
# storpool volume test create size 200G template hybrid
Error: Volume 'test' already exists
Listing all volumes
To list all available volumes:
# storpool volume list
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| volume | size | rdnd. | placeHead | placeAll | placeTail | iops | bw | parent | template | flags | tags |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| testvolume | 100 GiB | 3 | ultrastar | ultrastar | ssd | - | - | testvolume@35691 | hybrid | | name=value |
| testvolume_8_2 | 100 GiB | 8+2 | nvme | nvme | nvme | - | - | testvolume_8_2@35693 | nvme | | name=value |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Flags:
R - allow placing two disks within a replication chain onto the same server
t - volume move target. Waiting for the move to finish
G - IOPS and bandwidth limits are per GiB and depend on volume/snapshot size
Listing exported volumes
To list volumes exported to other sub-clusters in the multi-cluster:
# storpool volume list exports
---------------------------------
| remote | volume | globalId |
---------------------------------
| Lab-D-cl2 | test | d.n.buy |
---------------------------------
To list volumes exported in other sub-clusters to this one in a multi-cluster setup:
# storpool volume list remote
--------------------------------------------------------------------------
| location | remoteId | name | size | creationTimestamp | tags |
--------------------------------------------------------------------------
| Lab-D | d.n.buy | test | 137438953472 | 2020-05-27 11:57:38 | |
--------------------------------------------------------------------------
Note
Once attached, a remotely exported volume will no longer be visible
with volume list remote, even if the export is still visible in
the remote cluster with volume list exports. Each export
invocation in the local cluster is consumed by one attach in
the remote cluster.
Volume status
To get an overview of all volumes and snapshots and their state in the system:
# storpool volume status
----------------------------------------------------------------------------------------------------------------------------------------------------
| volume | size | rdnd. | tags | alloc % | stored | on disk | syncing | missing | status | flags | drives down |
----------------------------------------------------------------------------------------------------------------------------------------------------
| testvolume | 100 GiB | 3 | name=value | 0.0 % | 0 B | 0 B | 0 B | 0 B | up | | |
| testvolume@35691 | 100 GiB | 3 | | 100.0 % | 100 GiB | 317 GiB | 0 B | 0 B | up | S | |
----------------------------------------------------------------------------------------------------------------------------------------------------
| 2 volumes | 200 GiB | | | 50.0 % | 100 GiB | 317 GiB | 0 B | 0 B | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------
Flags:
S - snapshot
B - balancer blocked on this volume
D - decreased redundancy (degraded)
M - migrating data to a new disk
R - allow placing two disks within a replication chain onto the same server
t - volume move target. Waiting for the move to finish
C - disk placement constraints violated, rebalance needed
The columns in this output are:
volume: name of the volume or snapshot (see flags below)
size: provisioned volume size; for example, the size visible inside a VM
rdnd.: number of copies for this volume, or its erasure coding scheme
tags: all custom key=value tags configured for this volume or snapshot
alloc %: how much space is used on this volume, in percent
stored: space allocated on this volume
on disk: the size allocated on all drives in the cluster, after replication and the overhead from data protection
syncing: how much data is not in sync after a drive or server was missing; the data is recovered automatically once the missing drive or server is back in the cluster
missing: how much data is not available for this volume when the volume status is down; see status below
status: the status of the volume, which can be one of:
up: all copies are available
down: none of the copies are available for some parts of the volume
up soon: all copies are available and the volume will soon get up
flags: flags denoting features of this volume:
S: stands for snapshot, which is essentially a read-only (frozen) volume
B: the balancer is blocked for this volume (usually when some of the drives are missing)
D: some of the copies are either not available or outdated, and the volume is with decreased redundancy
M: changing the replication or a cluster re-balance is in progress
R: the policy for keeping copies on different servers is overridden
C: the volume or snapshot placement constraints are violated
drives down: displayed when the volume is in down state, listing the drives required to get the volume back up.
Sizes are shown in B, KiB, MiB, GiB, TiB, or PiB.
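Some listings (such as volume list remote earlier) show sizes in raw bytes, while the status output uses these binary units. A small helper can convert between the two; this is an illustrative sketch, and the actual CLI formatting may differ in rounding:

```python
def human_size(n: int) -> str:
    """Render a byte count using the B/KiB/MiB/GiB/TiB/PiB units
    seen in the status output. Illustrative helper only."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]
    i = 0
    x = float(n)
    while x >= 1024 and i < len(units) - 1:
        x /= 1024
        i += 1
    return f"{int(x)} {units[i]}" if x == int(x) else f"{x:.1f} {units[i]}"

# The raw size shown by `volume list remote` in the earlier example:
print(human_size(137438953472))  # 128 GiB
```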
To get just the status data from the storpool_controller services in the
cluster, without any info for stored, on disk size, and so on:
# storpool volume quickStatus
----------------------------------------------------------------------------------------------------------------------------------------------------
| volume | size | rdnd. | tags | alloc % | stored | on disk | syncing | missing | status | flags | drives down |
----------------------------------------------------------------------------------------------------------------------------------------------------
| testvolume | 100 GiB | 3 | name=value | 0.0 % | 0 B | 0 B | 0 B | 0 B | up | | |
| testvolume@35691 | 100 GiB | 3 | | 0.0 % | 0 B | 0 B | 0 B | 0 B | up | S | |
----------------------------------------------------------------------------------------------------------------------------------------------------
| 2 volumes | 200 GiB | | | 0.0 % | 0 B | 0 B | 0 B | 0 B | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------
Note
The quickStatus option has less impact on the storpool_server
services, and thus on end-user operations, because the gathered data
does not include the per-volume detailed storage stats provided with
status.
Used space estimation
To check the estimated used space by the volumes in the system:
# storpool volume usedSpace
-----------------------------------------------------------------------------------------
| volume | size | rdnd. | stored | used | missing info |
-----------------------------------------------------------------------------------------
| testvolume | 100 GiB | 3 | 1.9 GiB | 100 GiB | 0 B |
-----------------------------------------------------------------------------------------
The columns are as follows:
volume: name of the volume
size: the provisioned size of this volume
rdnd.: number of copies for this volume, or its erasure coding scheme
stored: how much data is stored for this volume (without referring to any of its parent snapshots)
used: how much data has been written (including the data written in parent snapshots)
missing info: if this value is anything other than 0 B, some of the storpool_controller services in the cluster are probably not running correctly.
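The relationship between stored and used can be pictured with a simplified model: used covers the data written to the volume itself plus the data still referenced from its parent snapshots. The figures below are hypothetical, chosen to mirror the example output above, and the model assumes the two amounts do not overlap:

```python
def used_estimate(stored_gib: float, inherited_gib: float) -> float:
    """Simplified model: `used` = data written to the volume itself
    plus data still referenced from parent snapshots, assuming the
    two do not overlap. Not StorPool's actual accounting."""
    return stored_gib + inherited_gib

# testvolume stores 1.9 GiB of its own data; the 98.1 GiB inherited
# from parent snapshots is a hypothetical figure for illustration.
print(used_estimate(1.9, 98.1))  # 100.0
```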
Note
The used column shows how much data is accessible and reserved for
this volume.
Listing disk sets and objects
To list the target disk sets and objects of a volume:
# storpool volume testvolume list
volume testvolume
size 100 GiB
replication 3
placeHead hdd
placeAll hdd
placeTail ssd
target disk sets:
0: 1122 1323 1203
1: 1424 1222 1301
2: 1121 1324 1201
[snip]
object: disks
0: 1122 1323 1203
1: 1424 1222 1301
2: 1121 1324 1201
[snip]
Hint
In this example, the volume has hybrid placement, with two copies on HDDs and one copy on SSDs (the rightmost column of the disk sets). The target disk sets are lists of triplets of drives in the cluster, used as a template for the actual objects of the volume.
To get detailed info about the disks used for this volume and the number of objects on each of them:
# storpool volume testvolume info
diskId | count
1101 | 200
1102 | 200
1103 | 200
[snip]
chain | count
1121-1222-1404 | 25
1121-1226-1303 | 25
1121-1226-1403 | 25
[snip]
diskSet | count
218-313-402 | 3
218-317-406 | 3
219-315-402 | 3
Note
The order of the drives in a diskSet does not follow placeHead,
placeAll, placeTail; check the actual order in the
storpool volume <volumename> list output. The reason is to count
similar diskSets with a different order in the same slot, i.e.
[101, 201, 301] is accounted as the same diskSet as
[201, 101, 301].
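The order-insensitive accounting described in the note can be sketched as follows: diskSets that differ only in drive order fall into the same bucket. This is an illustration of the counting rule, not StorPool's implementation:

```python
from collections import Counter

# Count diskSets ignoring the order of drives within each set, so
# [101, 201, 301] and [201, 101, 301] land in the same bucket.
disk_sets = [
    (101, 201, 301),
    (201, 101, 301),
    (218, 313, 402),
]
counts = Counter(tuple(sorted(s)) for s in disk_sets)
print(counts[(101, 201, 301)])  # 2
```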
Managing volumes
Renaming volumes
To rename a volume:
# storpool volume testvolume update rename newvolume
OK
Attention
Changing the name of a volume will not wait for clients that have this volume attached to update the name of the symlink. Always use client sync for all clients with the volume attached.
Resizing volumes
StorPool supports both online volume enlargement and shrinkage. The volumes can be in use and don’t need to be detached before resizing.
To resize a volume up:
# storpool volume testvolume update size +1G
OK
To shrink a volume (resize down):
# storpool volume testvolume update size 50G shrinkOk
Attention
When shrinking a volume, the volume is truncated to the new size, discarding all data beyond that. Ensure that any filesystems or partitions are shrunk first, and verify that no data will be lost when shrinking the volume.
Deleting volumes
To delete a volume:
# storpool volume vol1 delete vol1
Note
To avoid accidents, the volume name must be entered twice. As a safety precaution, attached volumes cannot be deleted even when not in use. For details, see Attachments.
Rebasing volumes
A volume can be converted from being based on a snapshot to a stand-alone volume.
For example, the testvolume below is based on an anonymous snapshot:
# storpool_tree
StorPool
`-testvolume@37126
`-testvolume
To rebase it against root (also known as “promote”):
# storpool volume testvolume rebase
OK
# storpool_tree
StorPool
`- testvolume@255 [snapshot]
`- testvolume [volume]
The rebase operation can also target a particular snapshot from the chain of parent snapshots of this volume:
# storpool_tree
StorPool
`- testvolume-snap1 [snapshot]
`- testvolume-snap2 [snapshot]
`- testvolume-snap3 [snapshot]
`- testvolume [volume]
# storpool volume testvolume rebase testvolume-snap2
OK
After the operation the volume is directly based on testvolume-snap2 and
includes all changes from testvolume-snap3:
# storpool_tree
StorPool
`- testvolume-snap1 [snapshot]
`- testvolume-snap2 [snapshot]
|- testvolume [volume]
`- testvolume-snap3 [snapshot]
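The effect of rebasing on the parent chain can be modeled with a toy example. The dictionary below is purely illustrative (not StorPool's data structures) and mirrors the chain shown above:

```python
# Toy model of the parent chain: each entry maps a volume or snapshot
# to its parent (None means based on root). Illustrative only.
parents = {
    "testvolume": "testvolume-snap3",
    "testvolume-snap3": "testvolume-snap2",
    "testvolume-snap2": "testvolume-snap1",
    "testvolume-snap1": None,
}

def rebase(volume: str, target: str) -> None:
    """Re-point `volume` directly at `target`; the changes from the
    snapshots that were between them are merged into the volume."""
    parents[volume] = target

rebase("testvolume", "testvolume-snap2")
print(parents["testvolume"])  # testvolume-snap2
```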
Backing up to a remote location
To back up a volume named testvolume to a configured remote location
LocationA-CityB:
# storpool volume testvolume backup LocationA-CityB
OK
After this operation, a temporary snapshot will be created and
transferred to the LocationA-CityB location. After the transfer completes, the
local temporary snapshot will be deleted, and the remote snapshot will be visible
as exported from LocationA-CityB. For more information on
working with snapshot exports, see Remote snapshots.
When backing up a volume, the remote snapshot may have one or more tags applied, as in the example below:
# storpool volume testvolume backup LocationA-CityB tag key=value # [tag key2=value2]
OK
Moving to another cluster
To move a volume to a different cluster in a multi-cluster environment (more on clusters here):
# storpool volume testvolume moveToRemote Lab-D-cl2 # onAttached export
Note
Moving a volume to a remote cluster will fail if the volume is
attached on a local host. What to do in such a case can be
specified with the onAttached parameter, as in the comment in the
example above. More info on moving volumes is available in
Moving volumes and snapshots.