Multi-server
The multi-server feature enables running up to seven separate storpool_server instances on a single node. This makes sense for dedicated storage nodes, or for a heavily loaded converged setup where more resources are isolated for the storage system.
For example, a dedicated storage node with 36 drives provides better peak performance with four server instances, each controlling a quarter of the disks/SSDs, than with a single instance. Another good example is a converged node with 16 SSDs/HDDs, where two server instances, each controlling half of the drives and running on separate CPU cores (or even on two threads of a single CPU core), provide better peak performance than a single server instance.
Configuration
The configuration of the CPUs on which the different instances run is done via cgroups, through the storpool_cg tool; for details, see Cgroup options.
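For example, after changing the number of server instances, the cgroup layout can be reviewed and re-applied roughly as follows (a sketch; the exact storpool_cg subcommands and options depend on the installed release, so verify against Cgroup options):
# storpool_cg print   # show the currently computed cgroup configuration
# storpool_cg conf    # re-generate and apply it for the new number of instances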
Configuring which drive is handled by which instance is done with the storpool_initdisk tool. For example, if you have two drives with IDs 1101 and 1102, both controlled by the first server instance, the output from storpool_initdisk would look like this:
# storpool_initdisk --list
/dev/sde1, diskId 1101, version 10007, server instance 0, cluster init.b, SSD
/dev/sdf1, diskId 1102, version 10007, server instance 0, cluster init.b, SSD
Setting the second SSD drive (1102) to be controlled by the second server instance is done like this (X is the drive letter and N is the partition number, for example /dev/sdf1):
# storpool_initdisk -r -i 1 /dev/sdXN
Hint
The above command will fail if the storpool_server service is running; eject the disk before re-assigning it to another instance.
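For example, moving drive 1102 from the first to the second instance could look like this (a sketch; the eject step assumes the storpool disk <diskId> eject CLI command and that the cluster can tolerate the disk going away temporarily):
# storpool disk 1102 eject             # detach the disk from the running server instance
# storpool_initdisk -r -i 1 /dev/sdf1  # re-assign it to server instance 1
# storpool_initdisk --list             # verify that 1102 now shows "server instance 1"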
In some cases, if the first server instance was configured with a large amount of cache (see SP_CACHE_SIZE in Node configuration options), it is recommended when migrating from one to two instances to first split the cache between the instances (for example, from 8192 to 4096). These parameters are taken care of automatically by the storpool_cg tool; for details, see Cgroup options.
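As an illustration only (a sketch, assuming the value is set in /etc/storpool.conf as described in Node configuration options), splitting an 8192 cache between two instances would mean changing
SP_CACHE_SIZE=8192
to
SP_CACHE_SIZE=4096
before bringing up the second instance.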
Helper
StorPool provides a tool for easy reconfiguration between different numbers of server instances. It can be used to print the required commands. For example, for a node with some SSDs and some HDDs, automatically assigned to three SSD-only and one HDD-only server instances:
[root@s25 ~]# /usr/lib/storpool/multi-server-helper.py -i 4 -s 3
/usr/sbin/storpool_initdisk -r -i 0 2532 0000:01:00.0-p1 # SSD
/usr/sbin/storpool_initdisk -r -i 0 2534 0000:02:00.0-p1 # SSD
/usr/sbin/storpool_initdisk -r -i 0 2533 0000:06:00.0-p1 # SSD
/usr/sbin/storpool_initdisk -r -i 0 2531 0000:07:00.0-p1 # SSD
/usr/sbin/storpool_initdisk -r -i 1 2505 /dev/sde1 # SSD
/usr/sbin/storpool_initdisk -r -i 1 2506 /dev/sdf1 # SSD
/usr/sbin/storpool_initdisk -r -i 1 2507 /dev/sdg1 # SSD
/usr/sbin/storpool_initdisk -r -i 1 2508 /dev/sdh1 # SSD
/usr/sbin/storpool_initdisk -r -i 2 2501 /dev/sda1 # SSD
/usr/sbin/storpool_initdisk -r -i 2 2502 /dev/sdb1 # SSD
/usr/sbin/storpool_initdisk -r -i 2 2503 /dev/sdc1 # SSD
/usr/sbin/storpool_initdisk -r -i 2 2504 /dev/sdd1 # SSD
/usr/sbin/storpool_initdisk -r -i 3 2511 /dev/sdi1 # WBC
/usr/sbin/storpool_initdisk -r -i 3 2512 /dev/sdj1 # WBC
/usr/sbin/storpool_initdisk -r -i 3 2513 /dev/sdk1 # WBC
/usr/sbin/storpool_initdisk -r -i 3 2514 /dev/sdl1 # WBC
/usr/sbin/storpool_initdisk -r -i 3 2515 /dev/sdn1 # WBC
/usr/sbin/storpool_initdisk -r -i 3 2516 /dev/sdo1 # WBC
/usr/sbin/storpool_initdisk -r -i 3 2517 /dev/sdp1 # WBC
/usr/sbin/storpool_initdisk -r -i 3 2518 /dev/sdq1 # WBC
[root@s25 ~]# /usr/lib/storpool/multi-server-helper.py -h
usage: multi-server-helper.py [-h] [-i INSTANCES] [-s [SSD_ONLY]]

Prints relevant commands for dispersing the drives to multiple server
instances

optional arguments:
  -h, --help            show this help message and exit
  -i INSTANCES, --instances INSTANCES
                        Number of instances
  -s [SSD_ONLY], --ssd-only [SSD_ONLY]
                        Splits by type, 's' SSD-only instances plus i-s HDD
                        instances (default s: 1)
Note that the commands can be executed only when the relevant storpool_server* service instances are stopped, and a cgroup re-configuration will likely be required after the setup changes (see Cgroup options for more information on how to update cgroups).
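For example, reshaping a node into four instances could look roughly like this (a sketch; the storpool_server_N systemd unit names and the storpool_cg invocation are assumptions, adjust them to your setup, and review the helper's output before piping it to a shell):
# systemctl stop storpool_server storpool_server_1 storpool_server_2 storpool_server_3
# /usr/lib/storpool/multi-server-helper.py -i 4 -s 3 | sh
# storpool_cg conf    # update the cgroup configuration for the new layout
# systemctl start storpool_server storpool_server_1 storpool_server_2 storpool_server_3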