Verifying a machine’s cgroups state and configuration

You can use the storpool_cg tool with the print and check commands to view or check the current cgroups configuration. You can also use the storpool_process tool to find all StorPool processes running on the machine and report their cpuset and memory cgroups.

storpool_cg print

The storpool_cg print command reads the cgroups filesystem and reports its current state in a human-readable format. The output uses the same format that storpool_cg uses for printing configurations.

storpool_cg print is useful for familiarizing yourself with the machine’s configuration. Here is an example:

$ storpool_cg print
slice: storpool.slice limit: 26631M
  subslice: storpool.slice/alloc limit: 3328M
  subslice: storpool.slice/common limit: 23303M
slice: system.slice limit: 2G
slice: user.slice limit: 2G
socket:0
  core:0 cpus:[ 0  1]  --
  core:1 cpus:[ 2  3]  --  nic    | rdma
  core:2 cpus:[ 4  5]  --  server | server_1
  core:3 cpus:[ 6  7]  --  iscsi  | beacon,mgmt,block
socket:1
  core:0 cpus:[ 8  9]  --
  core:1 cpus:[10 11]  --
  core:2 cpus:[12 13]  --
  core:3 cpus:[14 15]  --

When running the storpool_cg tool with the print command, you can use the following options:

  • -N / --numas: Print the NUMA nodes in the CPU configuration. Default is False.

  • -S / --slices: Print the cpuset slices in the CPU configuration. Default is False.

  • -U / --usage: Print the memory usage of the memory slices.

  • -E / --expect: Print the expected memory usage for some StorPool components.

NUMA nodes and cpuset slices

You can use the --numas and --slices options to display the NUMA nodes and cpuset slices for the CPUs:

$ storpool_cg print --numas --slices
slice: storpool.slice limit: 26631M
  subslice: storpool.slice/alloc limit: 3328M
  subslice: storpool.slice/common limit: 23303M
slice: system.slice limit: 2G
slice: user.slice limit: 2G
socket:0
  core:0 cpus:[ 0  1]  --  numa:[0 0]  --  system user      | system user
  core:1 cpus:[ 2  3]  --  numa:[0 0]  --  storpool: nic    | storpool: rdma
  core:2 cpus:[ 4  5]  --  numa:[0 0]  --  storpool: server | storpool: server_1
  core:3 cpus:[ 6  7]  --  numa:[0 0]  --  storpool: iscsi  | storpool: beacon,mgmt,block
socket:1
  core:0 cpus:[ 8  9]  --  numa:[1 1]  --  system user      | system user
  core:1 cpus:[10 11]  --  numa:[1 1]  --  system user      | system user
  core:2 cpus:[12 13]  --  numa:[1 1]  --  system user      | system user
  core:3 cpus:[14 15]  --  numa:[1 1]  --  system user      | system user

Memory usage

With the -U/--usage option you will see a table with the memory usage of each memory slice that the tool normally prints, as well as the amount of memory left for the kernel.

$ storpool_cg print --usage
slice                      usage    limit    perc    free
=========================================================
machine.slice               0.00 / 13.21G   0.00%  13.21G
storpool.slice              2.86 / 10.17G  28.09%   7.32G
  storpool.slice/alloc      0.20 /  4.38G   4.61%   4.17G
  storpool.slice/common     2.66 /  5.80G  45.81%   3.14G
system.slice                2.13 /  4.44G  47.84%   2.32G
user.slice                  0.65 /  2.00G  32.73%   1.35G
=========================================================
ALL SLICES                  5.64 / 29.82G  18.91%  24.19G

                        reserved    total    perc  kernel
=========================================================
NON KERNEL                 29.82 / 31.26G  95.40%   1.44G
=========================================================
cpus for StorPool: [1, 2, 3, 4, 5, 6, 7]
socket:0
  core:0 cpus:[ 0  1]  --         | bridge,mgmt
  core:1 cpus:[ 2  3]  --  nic    | rdma
  core:2 cpus:[ 4  5]  --  server | server_1
  core:3 cpus:[ 6  7]  --  iscsi  | beacon,block
socket:1
  core:0 cpus:[ 8  9]  --
  core:1 cpus:[10 11]  --
  core:2 cpus:[12 13]  --
  core:3 cpus:[14 15]  --

Expected memory usage

You can use the -E/--expect option to see a table with the expected memory usage for some StorPool components and their respective real usage. Here is an example:

$ storpool_cg print --expect
sp service   usage expected buffer buffer used %
================================================
block            -     0.06   0.30
cache         1.04     1.04     no
  cache_0     0.52     0.52     no
  cache_1     0.52     0.52     no
servers       1.01     0.96   3.04         0.01%
  server_0    0.50     0.48   1.52         0.01%
  server_1    0.50     0.48   1.52         0.01%
sp_logs       0.38     0.39     no
================================================
total         2.96     2.46   3.34         0.15%

The table contains the following details:

  • All raw values are in GiB.

  • The cache and servers rows are sums of the individual caches and servers listed below them.

  • The usage column is filled in from the sizes of the StorPool-specific files in /dev/shm/ (see the example after this list).

  • The values in the expected column are calculated internally by cgtool.

  • The buffer column is the extra memory that cgtool reserves for the service as a safety margin.

  • The values in the total row are not the sum of the above components. Instead, the tool shows the real usage read from the cgroups memory controller.
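
To cross-check the values in the usage column, you can look at the sizes of the shared-memory files directly. The exact names of the StorPool-specific files vary between setups, so the listing below simply shows everything under /dev/shm/:

$ ls -lh /dev/shm/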

storpool_cg check

storpool_cg check runs a series of checks on the current cgroup configuration and reports any errors or warnings it finds. It can be used to identify cgroup-related problems. Here is an example:

$ storpool_cg check
M: ==== cpuset ====
E: user.slice and machine.slice cpusets intersect
E: machine.slice and system.slice cpusets intersect
M: ==== memory ====
W: memory left for kernel is 0MB
E: sum of storpool.slice, user.slice, system.slice, machine.slice limits is 33549.0MB, while total memory is 31899.46875MB
M: Done.

If you want to obtain the result from this command in JSON format, you can use the -J / --json option.
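
For example, to pretty-print the machine-readable report (assuming Python 3 is available on the machine and that the report is written to standard output):

$ storpool_cg check --json | python3 -m json.tool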

storpool_process

storpool_process is a tool that can find all StorPool processes running on the machine and report their cpuset and memory cgroups. It can be used to check which cgroups the StorPool processes are running in, so that you can quickly spot problems (for example, StorPool processes running in the root cgroup).

To list all StorPool processes run:

$ storpool_process list
[pid] [service]  [cpuset]              [memory]
1121  stat       system.slice          system.slice/storpool_stat.service
1181  stat       system.slice          system.slice/storpool_stat.service
1261  stat       system.slice          system.slice/storpool_stat.service
1262  stat       system.slice          system.slice/storpool_stat.service
1263  stat       system.slice          system.slice/storpool_stat.service
1266  stat       system.slice          system.slice/storpool_stat.service
5743  server     storpool.slice/server storpool.slice
14483 block      storpool.slice/block  storpool.slice
21327 stat       system.slice          system.slice/storpool_stat.service
27379 rdma       storpool.slice/rdma   storpool.slice
27380 rdma       storpool.slice/rdma   storpool.slice
27381 rdma       storpool.slice/rdma   storpool.slice
27382 rdma       storpool.slice/rdma   storpool.slice
27383 rdma       storpool.slice/rdma   storpool.slice
28940 mgmt       storpool.slice/mgmt   storpool.slice/alloc
29346 controller system.slice          system.slice
29358 controller system.slice          system.slice
29752 nvmed      storpool.slice/beacon storpool.slice
29764 nvmed      storpool.slice/beacon storpool.slice
30838 block      storpool.slice/block  storpool.slice
31055 server     storpool.slice/server storpool.slice
31086 mgmt       storpool.slice/mgmt   storpool.slice/alloc
31450 beacon     storpool.slice/beacon storpool.slice
31469 beacon     storpool.slice/beacon storpool.slice

By default, processes are sorted by pid. You can change the sort order using the -S parameter; for example, to sort by service and then by pid:

$ storpool_process list -S service pid
[pid] [service]  [cpuset]              [memory]
31450 beacon     storpool.slice/beacon storpool.slice
31469 beacon     storpool.slice/beacon storpool.slice
14483 block      storpool.slice/block  storpool.slice
30838 block      storpool.slice/block  storpool.slice
29346 controller system.slice          system.slice
29358 controller system.slice          system.slice
28940 mgmt       storpool.slice/mgmt   storpool.slice/alloc
31086 mgmt       storpool.slice/mgmt   storpool.slice/alloc
29752 nvmed      storpool.slice/beacon storpool.slice
29764 nvmed      storpool.slice/beacon storpool.slice
27379 rdma       storpool.slice/rdma   storpool.slice
27380 rdma       storpool.slice/rdma   storpool.slice
27381 rdma       storpool.slice/rdma   storpool.slice
27382 rdma       storpool.slice/rdma   storpool.slice
27383 rdma       storpool.slice/rdma   storpool.slice
5743  server     storpool.slice/server storpool.slice
31055 server     storpool.slice/server storpool.slice
1121  stat       system.slice          system.slice/storpool_stat.service
1181  stat       system.slice          system.slice/storpool_stat.service
1261  stat       system.slice          system.slice/storpool_stat.service
1262  stat       system.slice          system.slice/storpool_stat.service
1263  stat       system.slice          system.slice/storpool_stat.service
1266  stat       system.slice          system.slice/storpool_stat.service
21327 stat       system.slice          system.slice/storpool_stat.service

You can also use the storpool_process tool to reclassify misplaced StorPool processes into their correct cgroups. If the proper cgroups are configured in storpool.conf, you can run storpool_process reclassify, and the tool will move each process to its correct cpuset and memory cgroup. It is advisable to run storpool_process reclassify -N (or even storpool_process reclassify -N -v) first to see which processes are affected and where they will be moved, as shown in the example below.
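
For example, first preview which processes would be affected and where they would be moved, and then perform the actual reclassification:

$ storpool_process reclassify -N -v
$ storpool_process reclassify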