Attachments
Attaching a volume or snapshot makes it accessible to a client under the
/dev/storpool and /dev/storpool-byid directories. Volumes can be attached as
read-only or read-write. Snapshots are always attached read-only.
Here is an example of attaching a volume named testvolume to a client with ID 1.
This creates the block device /dev/storpool/testvolume:
# storpool attach volume testvolume client 1
OK
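As a quick sanity check you can verify from the client that the symlink exists and points to a block device; this uses plain Linux tools rather than the StorPool CLI:
# ls -l /dev/storpool/testvolume
# lsblk /dev/storpool/testvolume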
To attach a volume or snapshot to the node you are currently connected to:
# storpool attach volume testvolume here
OK
# storpool attach snapshot testsnap here
OK
By default, this command blocks until the volume is attached to the client and
the /dev/storpool/<volumename> symlink is created. For example, if the
storpool_block service has not been started, the command will wait
indefinitely. To set a timeout for this operation:
# storpool attach volume testvolume here timeout 10 # seconds
OK
To completely disregard the readiness check:
# storpool attach volume testvolume here noWait
OK
Note
The use of noWait is discouraged in favor of the default behaviour of the
attach command.
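If you do use noWait, the command may return before the symlink exists. A minimal sketch for waiting on it yourself, assuming a standard shell and coreutils (the 30-second limit is an arbitrary choice):
# timeout 30 sh -c 'until [ -e /dev/storpool/testvolume ]; do sleep 1; done'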
By default, a volume is attached as a read-write block device. To attach it read-only:
# storpool volume testvolume2 attach client 12 mode ro
OK
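To confirm the mode from the client side, blockdev from util-linux reports whether the device is read-only (this is not a StorPool command):
# blockdev --getro /dev/storpool/testvolume2
1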
To list all attachments:
# storpool attach list
-------------------------------------------------------------------
| client | volume | globalId | mode | tags |
-------------------------------------------------------------------
| 11 | testvolume | d.n.a1z | RW | vc-policy=no |
| 12 | testvolume1 | d.n.c2p | RW | vc-policy=no |
| 12 | testvolume2 | d.n.uwp | RO | vc-policy=no |
| 14 | testsnap | d.n.s1m | RO | vc-policy=no |
-------------------------------------------------------------------
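Because the output is a pipe-delimited table, standard text tools can filter it; for example, to show only the attachments of client 12 (plain awk, not a StorPool option):
# storpool attach list | awk -F'|' '$2 + 0 == 12'
| 12 | testvolume1 | d.n.c2p | RW | vc-policy=no |
| 12 | testvolume2 | d.n.uwp | RO | vc-policy=no |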
To detach:
# storpool detach volume testvolume client 1 # or 'here' if the command is being executed on client ID 1
If a volume is actively being written to or read from, a detach operation will fail:
# storpool detach volume testvolume client 11
Error: 'testvolume' is open at client 11
In this case the detach can be forced; note, however, that forcing a detach is discouraged:
# storpool detach volume testvolume client 11 force yes
OK
Attention
Any operation on the volume will receive an I/O error when it is forcefully detached. Some mounted filesystems may cause a kernel panic when a block device disappears while there are live operations on it, so be extra careful if such filesystems are mounted directly on a hypervisor node.
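Before resorting to force it is usually worth checking what is keeping the device open on the client; fuser (or lsof) is standard Linux tooling, not part of the StorPool CLI:
# fuser -v /dev/storpool/testvolume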
If a volume or snapshot is attached to more than one client, it can be detached from all nodes with a single command:
# storpool detach volume testvolume all
OK
# storpool detach snapshot testsnap all
OK