Management configuration
Tip
Please consult with StorPool support before changing the management configuration defaults.
The mgmtConfig submenu is used to set some internal configuration parameters.
Listing current configuration
To list the presently configured parameters:
# storpool mgmtConfig list
relocator on, interval 5.000 s
relocator transaction: min objects 320, max objects 4294967295
relocator recovery: max tasks per disk 2, max objects per disk 2400
relocator recovery objects trigger 32
relocator min free 150 GB
relocator max objects per HDD tail 0
balancer auto off, interval 5.000 s
snapshot delete interval 1.000 s
disks soft-eject interval 5.000 s
snapshot delayed delete off
snapshot dematerialize interval 1.000 s
mc owner check interval 2.000 s
mc autoreconcile interval 2.000 s
reuse server implicit on disk down disabled
max local recovery requests 1
max remote recovery requests 2
maintenance state production
max disk latency nvme 1000.000 ms
max disk latency ssd 1000.000 ms
max disk latency hdd 1000.000 ms
max disk latency journal 50.000 ms
backup template name backup_template
aggScoreSpace sameAg 99
aggScoreSpace suppress for disk full below 1%
aggScoreSpace restore for disk full above 2%
Local and remote recovery
Using the maxLocalRecoveryRequests and maxRemoteRecoveryRequests parameters, you can set the number of parallel requests to issue while performing local or remote recovery, respectively. The values of the parameters should be between 1 and 64.
To set the default local recovery requests for all disks:
StorPool> mgmtConfig maxLocalRecoveryRequests 1
OK
StorPool> mgmtConfig maxRemoteRecoveryRequests 2
OK
You can override the values per disk in the following way:
StorPool> disk 1111 maxRecoveryRequestsOverride local 1
OK
StorPool> disk 1111 maxRecoveryRequestsOverride remote 2
OK
You can also clear the overrides so that the defaults take precedence:
StorPool> disk 1111 maxRecoveryRequestsOverride local clear
OK
StorPool> disk 1111 maxRecoveryRequestsOverride remote clear
OK
An example use case is the need to speed up or slow down a re-balancing or a remote transfer, based on the operational requirements at the time: for example, to lower the impact on latency-sensitive user operations, to decrease the time required to get a cluster back to full redundancy, or to complete a remote transfer faster.
These parameters were introduced with the 19.3 revision 19.01.2592.cf99471bd release.
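As an illustration only (the values below are examples, not tuning recommendations), the parallelism could be raised for the duration of a planned re-balancing and then restored to the defaults:
# storpool mgmtConfig maxLocalRecoveryRequests 4 # temporarily allow more parallel local recovery
OK
# storpool mgmtConfig maxRemoteRecoveryRequests 8 # and more parallel remote transfers
OK
# storpool mgmtConfig maxLocalRecoveryRequests 1 # restore the defaults once the operation completes
OK
# storpool mgmtConfig maxRemoteRecoveryRequests 2
OK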
Miscellaneous parameters
To disable the delayed snapshot deletion (default on):
# storpool mgmtConfig delayedSnapshotDelete off
OK
When enabled, all snapshots with a configured deletion time will be cleared at that date and time.
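To re-enable it later and verify the change (the grep filter below is just one way to pick out the relevant line from the listing):
# storpool mgmtConfig delayedSnapshotDelete on
OK
# storpool mgmtConfig list | grep 'snapshot delayed delete'
snapshot delayed delete on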
To change the default interval (5 sec.) between the periodic checks of whether disks marked for ejection can actually be ejected:
# storpool mgmtConfig disksSoftEjectInterval 20000 # value in ms - 20 sec.
OK
To change the default interval (5 sec.) for the relocator to check if there is new work to be done:
# storpool mgmtConfig relocatorInterval 20000 # value is in ms - 20 sec.
OK
To set a number of objects per disk in recovery at a time different from the default (3200):
# storpool mgmtConfig relocatorMaxRecoveryObjectsPerDisk 2000 # value in number of objects per disk
OK
To change the default maximum number of recovery tasks per disk (2 tasks):
# storpool mgmtConfig relocatorMaxRecoveryTasksPerDisk 4 # value is number of tasks per disk - will set 4 tasks
OK
To change the minimum (default 320) or the maximum (default 4294967295) number of objects per transaction for the relocator:
# storpool mgmtConfig relocatorMaxTrObjects 2147483647
OK
# storpool mgmtConfig relocatorMinTrObjects 640
OK
To change the maximum number of objects per transaction for HDD tail drives (0 means unset, a value of 1 or more sets the number of objects):
# storpool mgmtConfig relocatorMaxTrObjectsPerHddTail 2
To change the maximum number of objects in recovery that a disk may have while remaining usable by the relocator (default 32):
# storpool mgmtConfig relocatorRecoveryObjectsTrigger 64
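As an illustration only (the values are examples, not recommendations), the relocator recovery settings could be throttled during peak hours and the result checked in the listing:
# storpool mgmtConfig relocatorMaxRecoveryTasksPerDisk 1 # fewer parallel recovery tasks per disk
OK
# storpool mgmtConfig relocatorMaxRecoveryObjectsPerDisk 1200 # fewer objects per disk in recovery
OK
# storpool mgmtConfig list | grep 'relocator recovery'
relocator recovery: max tasks per disk 1, max objects per disk 1200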
To change the default interval (1 sec.) for checking for new snapshots to delete:
# storpool mgmtConfig snapshotDeleteInterval 2000 # value is in ms - 2 sec.
Snapshot dematerialization
To enable snapshot dematerialization or change the interval:
# storpool mgmtConfig snapshotDematerializeInterval 30000 # sets the interval to 30 seconds, 0 disables it
Snapshot dematerialization checks for and removes all objects that do not refer to any data, that is, objects with no change since the last snapshot (or ever). This helps reduce the number of used objects per disk in clusters with a large number of snapshots and a small number of changed blocks between the snapshots in the chain.
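For example, to verify the currently configured interval after a change (illustrative grep filter):
# storpool mgmtConfig list | grep 'snapshot dematerialize'
snapshot dematerialize interval 30.000 s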
To update the free space threshold in GB below which the relocator will stop adding new tasks:
# storpool mgmtConfig relocatorGBFreeBeforeAdd 75 # value is in GB
Multi-cluster parameters
To set or change the default multi-cluster owner check interval:
# storpool mgmtConfig mcOwnerCheckInterval 2000 # sets the interval to 2 seconds, 0 disables it
To set or change the default multi-cluster auto-reconcile interval:
# storpool mgmtConfig mcAutoReconcileInterval 2000 # sets the interval to 2 seconds, 0 disables it
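The resulting values can be checked in the listing (illustrative grep filter):
# storpool mgmtConfig list | grep 'mc '
mc owner check interval 2.000 s
mc autoreconcile interval 2.000 s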
Reusing server on disk failure
If a disk is down and a new volume cannot be allocated, enabling the reuseServerImplicitOnDiskDown option will retry the new volume allocation as if the reuseServer parameter was specified. This is helpful for minimum installations with 3 nodes when one of the nodes or a disk is down.
To enable the option:
# storpool mgmtConfig reuseServerImplicitOnDiskDown enable
The only downside is that the volume will have two of its replicas on drives in the same server. When the missing node comes back, a re-balancing will be required so that all replicas created on the same server are redistributed across all nodes. A new needbalance alert will be raised for these occasions.
This option is turned on by default for all new installations. Its history is as follows:
Introduced in 19.1 revision 19.01.1025.0baac06a6.
As of 19.3 revision 19.01.2318.10e55fce0, all volumes and snapshots that violate some placement constraints are visible in the output of storpool volume status and storpool volume quickStatus with the flag C; for details, see Volume status.
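After the missing node returns, such volumes could be identified before re-balancing, for example (the exact output columns depend on the release):
# storpool volume quickStatus # volumes and snapshots violating placement constraints carry the C flag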
Changing default template
To change the default template used when receiving a snapshot from a remote cluster through the storpool_bridge service (replacing the now-deprecated SP_BRIDGE_TEMPLATE option):
# storpool mgmtConfig backupTemplateName all-flash # the all-flash template should exist
OK
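Since the template must already exist, it could be checked first, for example:
# storpool template list | grep all-flash # confirm the all-flash template exists before setting it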
Cluster maintenance mode
A full cluster maintenance mode is available for maintenance activities that affect the entire cluster. An example would be a scheduled restart of a network switch that will be reported as missing network for all nodes in a cluster.
This mode does not perform any checks, and is mainly for informational purposes, to sync context between customers and StorPool’s support teams. Full cluster maintenance mode can be used in addition to the per-node maintenance state when necessary (see Maintenance mode).
To change the full cluster maintenance state to maintenance:
# storpool mgmtConfig maintenanceState maintenance
OK
To switch back into the production state:
# storpool mgmtConfig maintenanceState production
OK
If you only need to do this for a single node, you can use storpool maintenance, as described in Maintenance mode.
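A minimal sketch of wrapping a planned switch restart in the full cluster maintenance mode (the middle step stands in for the actual maintenance activity):
# storpool mgmtConfig maintenanceState maintenance
OK
(perform the planned maintenance, for example the switch restart)
# storpool mgmtConfig maintenanceState production
OK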
Latency thresholds
Note
For individual per-disk latency thresholds, see the Disk list performance information section.
To define a global latency threshold before ejecting an HDD drive:
# storpool mgmtConfig maxDiskLatencies hdd 1000 # value is in milliseconds
To define a global latency threshold before ejecting an SSD drive:
# storpool mgmtConfig maxDiskLatencies ssd 1000 # value is in milliseconds
To define a global latency threshold before ejecting an NVMe drive:
# storpool mgmtConfig maxDiskLatencies nvme 1000 # value is in milliseconds
To define a global latency limit before ejecting a journal device:
# storpool mgmtConfig maxDiskLatencies journal 50 # value is in milliseconds
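The effective thresholds can be checked in the listing (illustrative grep filter):
# storpool mgmtConfig list | grep 'max disk latency'
max disk latency nvme 1000.000 ms
max disk latency ssd 1000.000 ms
max disk latency hdd 1000.000 ms
max disk latency journal 50.000 ms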
Aggregate score parameters
To configure different defaults for the disk space aggregation algorithm, use:
# storpool mgmtConfig aggScoreSpace suppressEnd 1
# storpool mgmtConfig aggScoreSpace restore 2
# storpool mgmtConfig aggScoreSpace sameAg 99
Note
These settings will be gradually changed in all production installations to new defaults that are much less aggressive when a large amount of data gets deleted. With the new defaults, the impact on user operations is much smaller than with the previous defaults, with a mostly linear relation to the amount of data written and then freed from a disk:
# storpool mgmtConfig aggScoreSpace suppressEnd 90
# storpool mgmtConfig aggScoreSpace restore 95
# storpool mgmtConfig aggScoreSpace sameAg 10
Added with the 21.0 revision 21.0.841.983f5880c release.
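As with the other parameters, the currently active values can be checked in the listing (illustrative grep filter; output shown assuming the new defaults above are applied):
# storpool mgmtConfig list | grep aggScoreSpace
aggScoreSpace sameAg 10
aggScoreSpace suppress for disk full below 90%
aggScoreSpace restore for disk full above 95%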