Creating cgroups configurations for dedicated storage and hyperconverged machines

All parameters described in Creating cgroups configurations for hypervisors can also be used on storage and hyperconverged machines.

Warning

Before running storpool_cg on a storage or hyperconverged machine, make sure that all its disks are initialized for StorPool. For details, see 7.  Storage devices.
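You can check which drives have already been initialized with storpool_initdisk; this is the same listing that storpool_cg later uses to detect the number of server instances (see below):

$ storpool_initdisk --list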

Dedicated storage machines

Here is a sample output from storpool_cg on a dedicated storage machine, whose disks are configured to run in four storpool_server instances and which will also run the storpool_iscsi service:

$ storpool_cg conf --noop
########## START SUMMARY ##########
slice: storpool limit: 26382M
  subslice: storpool/common limit: 23054M
  subslice: storpool/alloc limit: 3328M
slice: system limit: 2445M
slice: user limit: 2G
###################################
cpus for StorPool: [1, 2, 3, 6, 7, 8, 9]
socket:0
  core: 0 cpu: 0, 6 <--- 6 - mgmt,block,beacon
  core: 1 cpu: 1, 7 <--- 1 - rdma; 7 - iscsi
  core: 2 cpu: 2, 8 <--- 2 - server; 8 - server_1
  core: 3 cpu: 3, 9 <--- 3 - server_2; 9 - server_3
  core: 4 cpu: 4,10
  core: 5 cpu: 5,11
###################################
SP_CACHE_SIZE=2048
SP_CACHE_SIZE_1=2048
SP_CACHE_SIZE_2=2048
SP_CACHE_SIZE_3=2048
########### END SUMMARY ###########

The first thing to notice are the SP_CACHE_SIZE{_X} variables at the bottom. By default, when run on a node with local disks, storpool_cg sets the cache sizes for the different storpool_server instances. These values are written to /etc/storpool.conf.d/cache-size.conf.
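For reference, the generated file is a plain KEY=VALUE file in the usual storpool.conf format. For the node above it would contain entries similar to the following (an illustrative sketch; the exact contents depend on your node):

$ cat /etc/storpool.conf.d/cache-size.conf
SP_CACHE_SIZE=2048
SP_CACHE_SIZE_1=2048
SP_CACHE_SIZE_2=2048
SP_CACHE_SIZE_3=2048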

Cache size

If you don’t want storpool_cg to set the server caches (for example, because you have already set them yourself), you can set the set_cache_size command line parameter to false:

$ storpool_cg conf --noop set_cache_size=false
########## START SUMMARY ##########
slice: storpool limit: 26382M
  subslice: storpool/common limit: 23054M
  subslice: storpool/alloc limit: 3328M
slice: system limit: 2445M
slice: user limit: 2G
###################################
cpus for StorPool: [1, 2, 3, 6, 7, 8, 9]
socket:0
  core: 0 cpu: 0, 6 <--- 6 - mgmt,block,beacon
  core: 1 cpu: 1, 7 <--- 1 - rdma; 7 - iscsi
  core: 2 cpu: 2, 8 <--- 2 - server; 8 - server_1
  core: 3 cpu: 3, 9 <--- 3 - server_2; 9 - server_3
  core: 4 cpu: 4,10
  core: 5 cpu: 5,11
########### END SUMMARY ###########

As shown in the example above, the SP_CACHE_SIZE{_X} variables no longer appear in the configuration summary, which means they will not be changed.

Number of servers

storpool_cg detects how many server instances will be running on the machine by reading the storpool_initdisk --list output (see 7.3.2.  Initializing a drive). If you have not yet configured the right number of servers on the machine, you can override this detection by specifying the servers command line parameter:

$ storpool_cg conf --noop set_cache_size=false servers=2
########## START SUMMARY ##########
slice: storpool limit: 26382M
  subslice: storpool/common limit: 23054M
  subslice: storpool/alloc limit: 3328M
slice: system limit: 2445M
slice: user limit: 2G
###################################
cpus for StorPool: [1, 2, 6, 7, 8]
socket:0
  core: 0 cpu: 0, 6 <--- 6 - mgmt,block,beacon
  core: 1 cpu: 1, 7 <--- 1 - rdma; 7 - iscsi
  core: 2 cpu: 2, 8 <--- 2 - server; 8 - server_1
  core: 3 cpu: 3, 9
  core: 4 cpu: 4,10
  core: 5 cpu: 5,11
########### END SUMMARY ###########

Hyperconverged machines

On hyperconverged machines, storpool_cg should be run with the converged command line parameter set to true (or 1). There are two major differences compared to configuring storage-only nodes:

  • A machine.slice will be created for the machine.

  • The memory limit of the storpool.slice will be calculated to be as small as possible, leaving more memory available for virtual machines (the machine.slice).

$ storpool_cg conf --noop converged=1
##########START SUMMARY##########
slice: machine limit: 356G
slice: storpool limit: 16134M
  subslice: storpool/common limit: 12806M
  subslice: storpool/alloc limit: 3328M
slice: system limit: 2836M
slice: user limit: 2G
#################################
cpus for StorPool: [3, 5, 7, 23, 25, 27]
socket:0
  core: 0 cpu: 0,20
  core: 1 cpu: 2,22
  core: 2 cpu: 4,24
  core: 3 cpu: 6,26
  core: 4 cpu: 8,28
  core: 8 cpu:10,30
  core: 9 cpu:12,32
  core:10 cpu:14,34
  core:11 cpu:16,36
  core:12 cpu:18,38
socket:1
  core: 0 cpu: 1,21
  core: 1 cpu: 3,23 <--- 3 - rdma; 23 - server
  core: 2 cpu: 5,25 <--- 5 - server_1; 25 - mgmt,beacon
  core: 3 cpu: 7,27 <--- 7 - iscsi; 27 - block
  core: 4 cpu: 9,29
  core: 8 cpu:11,31
  core: 9 cpu:13,33
  core:10 cpu:15,35
  core:11 cpu:17,37
  core:12 cpu:19,39
#################################
SP_CACHE_SIZE=1024
SP_CACHE_SIZE_1=4096
###########END SUMMARY###########

Warning

If the machine does not boot with the kernel memsw cgroups feature enabled, you should indicate this to storpool_cg conf by setting set_memsw to false (or 0).
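For example, on a hyperconverged node without memsw support, the dry run might look as follows (following the same key=value parameter pattern as above):

$ storpool_cg conf --noop converged=1 set_memsw=0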

Note that storpool_cg will use only CPUs from the network interface's local cpulist, which are commonly restricted to one NUMA node. If you want to allow storpool_cg to use all CPUs on the machine, indicate this to storpool_cg conf by setting numa_overflow to true (or 1).
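For example, to let storpool_cg pick CPUs from both NUMA nodes on the hyperconverged machine above (again a dry run, using the same key=value parameter pattern):

$ storpool_cg conf --noop converged=1 numa_overflow=1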