Snapshots
Snapshots are read-only point-in-time images of volumes. They are created once
and cannot be changed. They can be attached to hosts as read-only block devices
under /dev/storpool. Volumes and snapshots share the same namespace, so their
names are unique within a StorPool cluster. Volumes can be based on snapshots;
such volumes contain only the changes since the snapshot was taken. After a
volume is created from a snapshot, writes are recorded within the volume. Reads
from the volume may be served by the volume itself or by its parent snapshot,
depending on whether the volume contains changed data for the read request.
For more information, see Volumes and snapshots.
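For example, the following minimal sequence (each command is shown in detail in the rest of this section; testclone is a hypothetical volume name) creates a snapshot and then a thin clone based on it:
# storpool volume testvolume snapshot testsnap
OK
# storpool volume testclone parent testsnap
OK
Writes to testclone are stored in the clone itself, while unmodified data is still read through testsnap.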
Creating snapshots
To create an unnamed (also known as anonymous) snapshot of a volume:
# storpool volume testvolume snapshot
OK
This will create a snapshot named testvolume@<ID>, where ID is a unique
serial number. Note that any tags on the volume will not be propagated to the
snapshot; to set tags on the snapshot at creation time:
# storpool volume testvolume tag key=value snapshot
To create a named snapshot of a volume:
# storpool volume testvolume snapshot testsnap
OK
To directly set tags:
# storpool volume testvolume snapshot testsnapplustags tag key=value
To create a bound snapshot on a volume:
# storpool volume testvolume bound snapshot
OK
This snapshot will be automatically deleted when the last child volume created from it is deleted, which is useful for non-persistent images.
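A sketch of the lifecycle (the anonymous snapshot name testvolume@1234 is illustrative, and the volume delete syntax is assumed to mirror the name-twice snapshot delete shown below):
# storpool volume testvolume bound snapshot
OK
# storpool volume scratch1 parent testvolume@1234
OK
# storpool volume scratch1 delete scratch1
OK
After scratch1, the last child, is deleted, testvolume@1234 is removed automatically.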
Listing snapshots
To list the snapshots:
# storpool snapshot list
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| snapshot | size | rdnd. | placeHead | placeAll | placeTail | created on | volume | iops | bw | parent | template | flags | targetDeleteDate | tags |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| testsnap | 100 GiB | 3 | hdd | hdd | ssd | 2019-08-30 04:11:23 | testvolume | - | - | testvolume@1430 | hybrid-r3 | | - | key=value |
| testvolume@1430 | 100 GiB | 3 | hdd | hdd | ssd | 2019-08-30 03:56:58 | testvolume | - | - | | hybrid-r3 | A | - | |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Flags:
A - anonymous snapshot with auto-generated name
B - bound snapshot
D - snapshot currently in the process of deletion
T - transient snapshot (created during volume cloning)
R - allow placing two disks within a replication chain onto the same server
P - snapshot delete blocked due to multiple children
To list the snapshots only for a particular volume:
# storpool volume testvolume list snapshots
[snip]
To list the target disk sets and objects of a snapshot:
# storpool snapshot testsnap list
[snip]
The output is similar to that of storpool volume <volumename> list; for
details, see Listing disk sets and objects.
To get detailed info about the disks used for this snapshot and the number of objects on each of them:
# storpool snapshot testsnap info
[snip]
The output is similar to that of storpool volume <volumename> info.
Volume operations
To create a volume based on an existing snapshot (cloning):
# storpool volume testvolume parent centos73-base-snap
OK
To revert a volume to an existing snapshot:
# storpool volume testvolume revertToSnapshot centos73-working
OK
This is also possible through the use of templates with a parent snapshot (see Templates):
# storpool volume spd template centos73-base
OK
To create a volume based on another existing volume (cloning):
# storpool volume testvolume1 baseOn testvolume
OK
Note
This operation will first create an anonymous bound snapshot on testvolume,
and will then create testvolume1 with the bound snapshot as parent. The
snapshot will exist until both volumes are deleted and will be automatically
deleted afterwards.
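The intermediate snapshot can be observed in the snapshot list, where it carries the A (anonymous) and B (bound) flags; for example:
# storpool snapshot list | grep testvolume@
[snip]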
Deleting snapshots
To delete a snapshot:
# storpool snapshot spdb_snap1 delete spdb_snap1
OK
Note
To avoid accidents, the name of the snapshot must be entered twice.
Sometimes the system will not delete the snapshot immediately; during this
period, the snapshot is visible with * in the output of storpool volume status
or storpool snapshot list.
To set a snapshot for deferred deletion:
# storpool snapshot testsnap deleteAfter 1d
OK
This sets a target delete date for this snapshot exactly one day from the present time.
Note
The snapshot will be deleted at the desired point in time only if delayed snapshot delete is enabled in the local cluster; see Management configuration for details.
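The scheduled deletion date can be checked in the targetDeleteDate column of the snapshot list; for example:
# storpool snapshot list | grep testsnap
[snip]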
A snapshot can also be bound to its child volumes, in which case it will exist until all its child volumes are deleted:
# storpool snapshot testsnap bind
OK
The opposite operation is also possible, to unbind such snapshot:
# storpool snapshot testsnap unbind
OK
To get the space that will be freed if a snapshot is deleted:
# storpool snapshot space
----------------------------------------------------------------------------------------------------------------
| snapshot | on volume | size | rdnd. | stored | used | missing info |
----------------------------------------------------------------------------------------------------------------
| testsnap | testvolume | 100 GiB | 3 | 27 GiB | -135 GiB | 0 B |
| testvolume@3794 | testvolume | 100 GiB | 3 | 27 GiB | 1.9 GiB | 0 B |
| testvolume@3897 | testvolume | 100 GiB | 3 | 507 MiB | 432 KiB | 0 B |
| testvolume@3899 | testvolume | 100 GiB | 3 | 334 MiB | 224 KiB | 0 B |
| testvolume@4332 | testvolume | 100 GiB | 3 | 73 MiB | 36 KiB | 0 B |
| testvolume@4333 | testvolume | 100 GiB | 3 | 45 MiB | 40 KiB | 0 B |
| testvolume@4334 | testvolume | 100 GiB | 3 | 59 MiB | 16 KiB | 0 B |
| frozenvolume | - | 8 GiB | 2 | 80 MiB | 80 MiB | 0 B |
----------------------------------------------------------------------------------------------------------------
This is used mainly for accounting purposes. The columns are as follows:
snapshot
Name of the snapshot.
on volume
The name of this snapshot's child volume, if any. For example, a frozen volume has this field empty.
size
The size of the snapshot as provisioned.
rdnd.
Number of copies for this snapshot, or its erasure coding scheme.
stored
How much data is actually written.
used
The amount of data that would be freed from the underlying drives (before redundancy) if the snapshot is removed.
missing info
If this value is anything other than 0 B, some of the storpool_controller
services in the cluster are probably not running correctly.
The used column can be negative in some cases when the snapshot has more
than one child volume. In these cases deleting the snapshot would "free"
negative space; that is, it would end up taking more space on the underlying disks.
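As a hypothetical illustration with made-up numbers: if a snapshot stores 27 GiB of data still referenced by two child volumes, deleting it frees its own 27 GiB, but each child chain then has to keep a separate copy of that data, consuming roughly 2 x 27 GiB = 54 GiB; the net change is 27 - 54 = -27 GiB, i.e. 27 GiB more used space.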
Snapshot parameters
Similar to volumes, snapshots can have different placement groups and other parameters. You can use the following parameters:
placeAll
Place all objects in placementGroup; default value: default.
placeTail
Name of placementGroup for reader; default value: same as the value of placeAll.
placeHead
Place the third replica in a different placementGroup; default value: same as the value of placeAll.
reuseServer
Place multiple copies on the same server.
tag
Set a tag in the form key=value.
template
Use a template with preconfigured placement, replication, and/or limits (check Templates for details).
iops
Set the maximum IOPS limit for this snapshot (in IOPS).
bw
Set the maximum bandwidth limit (in MB/s).
limitType
Specify whether the iops and bw limits apply to the total size of the block device or per each GiB (one of "total" or "perGiB").
Note
The bandwidth and IOPS limits apply only to the particular snapshot when it is attached; they do not limit any child volumes that use this snapshot as a parent.
Here are two examples - one for setting a template, and one for removing a tag on a snapshot:
# storpool snapshot testsnap template all-ssd
OK
# storpool snapshot testsnapplustags tag key=
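Other parameters from the list above follow the same pattern. For example, to set an IOPS limit (a sketch; the argument syntax is assumed to match the corresponding volume operation, so check the CLI help first):
# storpool snapshot testsnap iops 5000
OK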
Similar to the same operation for volumes, a snapshot can be renamed:
# storpool snapshot testsnap rename ubuntu1604-base
OK
Attention
Changing the name of a snapshot will not wait for clients that have this snapshot attached to update the name of the symlink. Always use client sync for all clients with the snapshot attached.
A snapshot can also be rebased to root (promoted) or rebased to another parent snapshot in a chain:
# storpool snapshot testsnap rebase # [parent-snapshot-name]
OK
Remote snapshots
In case multi-site or multicluster is enabled (the cluster has a
storpool_bridge service running), a snapshot can be exported and become
visible to other configured clusters.
For example, to export a snapshot snap1 to a location named StorPool-Rome:
# storpool snapshot snap1 export StorPool-Rome
OK
To list the presently exported snapshots:
# storpool snapshot list exports
-------------------------------------------------------------------------------
| remote | snapshot | globalId | backingUp | volumeMove |
-------------------------------------------------------------------------------
| StorPool-Rome | snap1 | nzkr.b.cuj | false | false |
-------------------------------------------------------------------------------
To list the snapshots exported from remote sites:
# storpool snapshot list remote
------------------------------------------------------------------------------------------
| location | remoteId | name | onVolume | size | creationTimestamp | tags |
------------------------------------------------------------------------------------------
| s02 | a.o.cxz | snapshot1 | | 107374182400 | 2019-08-20 03:21:42 | |
------------------------------------------------------------------------------------------
A single snapshot can be exported to multiple configured locations.
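For example, assuming a second configured location named StorPool-Sofia (hypothetical):
# storpool snapshot snap1 export StorPool-Rome
OK
# storpool snapshot snap1 export StorPool-Sofia
OK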
To create a clone of a remote snapshot locally:
# storpool snapshot snapshot1-copy template hybrid-r3 remote s02 a.o.cxz # [tag key=value]
In this example, the remote location is s02 and the remoteId is a.o.cxz.
Any key=value tags may be configured at creation time.
To unexport a local snapshot:
# storpool snapshot snap1 unexport StorPool-Rome
OK
Instead of specifying a remote location, you can use the all keyword; the
system will then attempt to unexport the snapshot from all locations it was
previously exported to.
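For example:
# storpool snapshot snap1 unexport all
OK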
Note
If the snapshot is presently being transferred, the unexport operation
will fail. It can be forced by adding force to the end of the unexport
command; however, this is discouraged in favor of waiting for any active
transfer to complete.
To unexport a remote snapshot:
# storpool snapshot remote s02 a.o.cxz unexport
OK
The snapshot will no longer be visible in the output of storpool snapshot list remote.
To unexport a remote snapshot and also set for deferred deletion in the remote site:
# storpool snapshot remote s02 a.o.cxz unexport deleteAfter 1h
OK
This will attempt to set a target delete date for a.o.cxz in the remote site
exactly one hour from the present time. If the minimumDeleteDelay flag (see
Minimum deletion delay) in the remote site has a higher value (for example,
1 day), the selected value will be overridden by the minimumDeleteDelay value
- in this example, 1 day. For more information on deferred deletion, see
Remote deferred deletion.
To move a snapshot to a different cluster in a multi-cluster environment (see Cluster):
# storpool snapshot snap1 moveToRemote Lab-D-cl2
Note
Moving a snapshot to a remote cluster is forbidden for attached snapshots. For more information on snapshot moving, see Volume and snapshot move.