Background services

A StorPool installation provides background services that take care of different parts of its functionality on each node participating in the cluster.

For details about how to control the services, see Managing services with storpool_ctl.

storpool_beacon

The beacon must be the first StorPool process started on all nodes in the cluster. It informs all members about the availability of the node on which it is installed. If the number of visible nodes changes, every storpool_beacon service checks that its node still participates in the quorum, which means it can communicate with more than half of the expected nodes, including itself (see SP_EXPECTED_NODES in Node configuration options).
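
For example, with SP_EXPECTED_NODES set to 5, a node remains in the quorum only while it can see at least 3 nodes, including itself. A minimal illustrative fragment of /etc/storpool.conf (the value shown is an example only):

# With 5 expected nodes, quorum requires more than half of them,
# that is, at least 3 visible nodes (including this one).
SP_EXPECTED_NODES=5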

If the storpool_beacon service starts successfully, it will send messages like those shown below to the system log (/var/log/messages, /var/log/syslog, or similar) for every node that comes up in the StorPool cluster:

[snip]
Jan 21 16:22:18 s01 storpool_beacon[18839]: [info] incVotes(1) from 0 to 1, voteOwner 1
Jan 21 16:23:10 s01 storpool_beacon[18839]: [info] peer 2, beaconStatus UP bootupTime 1390314187662389
Jan 21 16:23:10 s01 storpool_beacon[18839]: [info] incVotes(1) from 1 to 2, voteOwner 2
Jan 21 16:23:10 s01 storpool_beacon[18839]: [info] peer up 1
[snip]

storpool_server

The storpool_server service must be started on each node that provides its storage devices (HDD, SSD, or NVMe drives) to the cluster. If the service starts successfully, all the drives intended to be used as StorPool disks should be listed in the system log, as shown in the example below:

Dec 14 09:54:19 s11 storpool_server[13658]: [info] /dev/sdl1: adding as data disk 1101 (ssd)
Dec 14 09:54:19 s11 storpool_server[13658]: [info] /dev/sdb1: adding as data disk 1111
Dec 14 09:54:20 s11 storpool_server[13658]: [info] /dev/sda1: adding as data disk 1114
Dec 14 09:54:20 s11 storpool_server[13658]: [info] /dev/sdk1: adding as data disk 1102 (ssd)
Dec 14 09:54:20 s11 storpool_server[13658]: [info] /dev/sdj1: adding as data disk 1113
Dec 14 09:54:22 s11 storpool_server[13658]: [info] /dev/sdi1: adding as data disk 1112
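
After the drives are added, they should also become visible cluster-wide. A quick way to verify this from any node with CLI access is to list the disks and look for the IDs reported in the log above (output omitted here):

# storpool disk list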

On a dedicated node (for example, one with a larger amount of spare resources) you can start more than one instance of the storpool_server service (up to seven); for details, see Multi-server.
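
To see how many server instances are configured and running on a node, you can list the matching systemd units; this is a generic check, and the exact unit names may vary between releases:

# systemctl list-units --all 'storpool_server*'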

storpool_block

The storpool_block service provides the client (initiator) functionality. StorPool volumes can be attached only to the nodes where this service is running. When attached to a node, a volume can be used and manipulated as a regular block device via the /dev/storpool/{volume_name} symlink:

# lsblk /dev/storpool/test
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sp-2 251:2    0  100G  0 disk
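
For example, assuming the usual CLI syntax and a volume named test (an example name), the volume can be attached to the local node before inspecting it with lsblk as in the example above:

# storpool attach volume test here
# ls -l /dev/storpool/test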

storpool_mgmt

The storpool_mgmt service should be started on at least two management nodes in the cluster. It receives requests from user space tools (CLI or API), executes them in the StorPool cluster, and returns the results to the sender. An automatic failover mechanism is available: if the node with the active storpool_mgmt service fails, the SP_API_HTTP_HOST IP address is automatically configured on the node with the lowest SP_OURID that has a running storpool_mgmt service.
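
To see which node currently holds the floating API address, you can compare the configured SP_API_HTTP_HOST value with the addresses present on the candidate nodes. The address below is a placeholder, and the check assumes the option is set directly in /etc/storpool.conf rather than in a drop-in file:

# grep SP_API_HTTP_HOST /etc/storpool.conf
SP_API_HTTP_HOST=10.1.100.200
# ip -br addr | grep 10.1.100.200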

storpool_bridge

The storpool_bridge service is started on two or more nodes in the cluster, with one of them being active at a time (similar to the storpool_mgmt service). This service synchronizes snapshots between the current cluster and one or more StorPool clusters in different locations, for backup and disaster recovery use cases.

storpool_controller

The storpool_controller service is started on all nodes running the storpool_server service. It collects information from all storpool_server instances in order to provide statistics to the API.

Note

The storpool_controller service requires port 47567 to be open on the nodes where the API (storpool_mgmt) service is running.
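
For example, on nodes using firewalld the port could be opened as shown below; this assumes TCP and firewalld, so adjust the commands to your firewall tooling:

# firewall-cmd --permanent --add-port=47567/tcp
# firewall-cmd --reload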

storpool_nvmed

The storpool_nvmed service is started on all nodes that run the storpool_server service and have NVMe devices. It handles the management of the NVMe devices: unbinding them from the kernel’s NVMe driver and passing them to the storpool_pci or vfio_pci driver. You can configure this using the SP_NVME_PCI_DRIVER option in the /etc/storpool.conf file. For more information, see NVMe SSD drives.
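
A minimal illustrative fragment of /etc/storpool.conf selecting the driver; the value shown is only an example, see NVMe SSD drives for the exact supported values:

# Pass NVMe devices to the vfio_pci driver instead of storpool_pci (example value)
SP_NVME_PCI_DRIVER=vfio_pci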

storpool_stat

The storpool_stat service is started on all nodes. It collects the following system metrics from all nodes:

  • CPU stats - queue run/wait, user, system, and so on, per CPU

  • Memory usage stats per cgroup

  • Network stats for the StorPool services

  • The I/O stats of the system drives

  • Per-host validation checks (for example, whether there are processes in the root cgroup, whether the API is reachable if it is configured, and so on)

On some nodes it collects additional information:

  • On all nodes with the storpool_block service: the I/O stats of all attached StorPool volumes

  • On server nodes: stats for the communication of storpool_server with the drives

For more information, see Monitoring metrics collected.

The collected data can be viewed at https://analytics.storpool.com. It can also be submitted to an InfluxDB instance run by your organization; this can be configured in storpool.conf. For details, see Monitoring and issue reports.

storpool_qos

The storpool_qos service tracks changes to volumes that match certain criteria, and takes care of updating the I/O performance settings of the matching volumes. For details, see Quality of service.
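
If, as is common, the matching criteria are expressed as volume tags, a volume could be included in a QoS class by tagging it from the CLI. The tag name qc and its value below are illustrative only; see Quality of service for the exact conventions:

# storpool volume test tag qc=tier1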

storpool_iscsi

storpool_iscsi is a client service (like storpool_block) that translates the operations received via iSCSI to the StorPool internal protocol.

The service differs from the rest of the StorPool services in that it requires four network interfaces instead of the usual two. Two of them are used for communication with the cluster, and the other two for providing the iSCSI service to initiators.

The service itself needs a separate IP address for every portal and network (different from the ones used by the host kernel). These addresses are handled by the service’s TCP/IP stack and have their own MAC addresses.

Note

Currently, it is not possible to reuse the host IP address for the iSCSI service.

Note that the iSCSI service cannot operate without hardware acceleration. For details, see Network interfaces and the StorPool System Requirements document.

For more information on configuring and using the service, see iSCSI options and Setting iSCSI targets.

storpool_abrtsync

The storpool_abrtsync service automatically sends reports about aborted services to StorPool’s monitoring system.

storpool_cgmove

The storpool_cgmove service finds and moves all processes from the root cgroup into a slice, so that they:

  • Cannot eat up memory in the root cgroup

  • Are accounted for in one of the slices

The service does this once, when the system boots. For more information about the configuration options for the service, see Cgroup options.
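
To check whether any processes are still left in the root cgroup after boot, you can inspect it directly. The path below assumes a unified (cgroup v2) hierarchy; on cgroup v1 systems, check the root cgroup.procs file of the relevant controller instead, for example /sys/fs/cgroup/memory/cgroup.procs:

# cat /sys/fs/cgroup/cgroup.procs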

If you need to further manage the cgroups of StorPool processes running on the machine, it is recommended to use the storpool_process tool. For details about cgroups-related alerts, see Monitoring alerts.

storpool_havm

The storpool_havm (highly available virtual machine tracking) service tracks the state of one or more virtual machines and keeps each of them active on one of the nodes in the cluster. The sole purpose of this service is to offload the orchestration responsibility for virtual machines where fast startup after a failover event is crucial.

A virtual machine is configured with a predefined VM XML and predefined volume names on all nodes where the StorPool API (the storpool_mgmt service) is running. The storpool_havm@<vm_name> service gets enabled on each API node in the cluster and then starts tracking the state of this virtual machine.
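
For example, for a tracked virtual machine named nfsvm (a hypothetical name), the templated unit could be enabled on each API node as shown below, or via your usual service-management workflow:

# systemctl enable --now storpool_havm@nfsvm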

The VM is kept active on the active API node. In the typical case where the active API changes due to a service restart, the VM gets live-migrated to the new active API node.

In case of a failure of the node where the active API was last running, the service takes care of fencing the block devices on the old API node and starting the VM on the currently active node.

The primary use case is virtual machines providing NFS or S3 services.

This service is available starting with release 20.0 revision 20.0.19.1a208ffab.

storpool_logd

The StorPool log daemon (storpool_logd) receives log messages from all StorPool services running in the cluster, as well as the Linux kernel logs, for further analysis and advanced monitoring.

Tracking the storage service logs for the whole cluster enables more advanced monitoring, as well as safer maintenance operations. In the long term, it allows for:

  • Better accountability

  • Reduced time for investigating issues or incidents

  • Log inspection over longer periods

  • Retroactive detection, across the whole installed base, of issues identified in a production cluster

The service reads messages from two log streams, enqueues them into a persistent backend, and sends them to StorPool’s infrastructure. Once the reception is confirmed, messages are removed from the backend.

The service tries its best to ensure that the logs are delivered. Logs can survive both process restarts and entire node restarts. storpool_logd prioritizes persisted messages over incoming ones, so new messages will be dropped if the persistent storage is full.

Monitoring cluster-wide logs allows raising alerts on cluster-wide events, based either on specific messages or on classified message frequency thresholds. The ultimate goal is to lower the risk of accidents and unplanned downtime by proactively detecting issues based on similarities in the observed service behavior.

The main use cases for the service are:

  • Proactively detect abnormal and unexpected messages and raise alerts

  • Proactively detect an abnormal rate of messages and raise alerts

  • Enhance situational awareness by allowing operators to monitor the logs for a whole cluster in one view

  • Allow for easier tracking of newly discovered failure scenarios

The service has relevant configuration sections for cases where a proxy is required for sending data, or where a custom instance is used to override the URL of the default instance provisioned in StorPool’s infrastructure.

This service is available starting with release 20.0 revision 20.0.372.4cd1679db.