Add and Remove a Node

1. Add a Node

Follow the Installation Guide to set up the new node. After the node has been added to the cluster (the StorPool services are started), update /etc/storpool.conf on all nodes:

Attention

The change to SP_EXPECTED_NODES must happen ONLY AFTER all nodes are up and in the cluster (visible in storpool net list); otherwise a large expansion can cause the cluster to flap.

  1. If this is a voting node (usually all server nodes are voting, and client-only nodes are non-voting), increase the value of SP_EXPECTED_NODES. See the User’s Guide for details.

  2. Add a section for the new node with its node-specific settings (see the sketch after this list).

  3. Copy the updated storpool.conf file to all nodes in the cluster (a copy sketch also follows this list). There is no need to restart any services after this change.

    Attention

    Since rel.18.02 it is important that each node has valid configuration sections for all nodes in the cluster in its local /etc/storpool.conf file. Keep the /etc/storpool.conf files consistent across all nodes in the cluster.
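
    For illustration, the resulting changes to /etc/storpool.conf might look like the sketch below. The hostname node5, the node ID 5, and the node count are hypothetical; use the real values for your cluster, and note that SP_EXPECTED_NODES is only raised for voting nodes:

    SP_EXPECTED_NODES=5

    [node5]
    SP_OURID=5

    A minimal way to distribute the updated file, assuming root SSH access between the nodes and the same hypothetical hostnames:

    # for h in node1 node2 node3 node4 node5; do scp /etc/storpool.conf ${h}:/etc/storpool.conf; done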

2. Remove a Node from the Cluster

  1. Notify StorPool support that the node will be decommissioned, so that monitoring and notifications for it can be stopped.

  2. If this is a server node, rebalance all data out of the disks of this node (a scripted sketch is shown below):

    For each disk on this node run:

    # storpool disk <diskID> softEject
    

    After all disks are marked as softEject, run the balancer:

    # /usr/lib/storpool/balancer.sh -F -c0
    # storpool balancer commit
    

    Note

    This operation may take several hours to complete, depending on the amount of data to be relocated, the number of disks, and how loaded the cluster is while the rebalancing takes place.
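
    The per-disk soft-eject can be scripted. This is a minimal sketch; the disk IDs 101 102 103 are hypothetical and must be replaced with the actual IDs of this node's disks, as reported by storpool disk list:

    # for diskID in 101 102 103; do storpool disk ${diskID} softEject; done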

  3. Wait for the relocator to complete before the node is disconnected from the cluster. Monitor the rebalancing progress with:

    # storpool relocator status
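
    To poll the progress periodically, one option (assuming the watch utility is available on the node) is:

    # watch -n 60 storpool relocator status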
    
  4. If the node being removed is configured as voting, verify that enough voting nodes will remain in the cluster when this node is disconnected. Check the expected and actual number of voting nodes in the cluster with:

    # storpool net list
    ...
    Quorum status: 4 voting beacons up out of 4 expected
    

    Attention

    After the node is removed, the number of remaining voting nodes must be more than 50% of the expected nodes listed above. If this requirement is not satisfied, decrease the number of expected nodes before disconnecting the node (see below).
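
    For example, with 4 expected nodes at least 3 voting beacons must remain up (more than 50% of 4). Removing one voting node leaves 3 out of 4, which still satisfies the requirement; removing a second one without first lowering the expected count would leave 2 out of 4 and break the quorum.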

  5. After the rebalancing has completed, disconnect the node from the cluster by running:

    # systemctl disable --now storpool_beacon.service storpool_stat.service storpool_abrtsync.service
    

    Check that the node is no longer part of the cluster with:

    # storpool service list
    

    The node can now be physically disconnected from the storage network.

  6. Update /etc/storpool.conf on all remaining nodes in the cluster as follows:

    6.1. If this is a voting node, decrease SP_EXPECTED_NODES.

    6.2. Remove the node-specific section from /etc/storpool.conf.

    6.3. Copy the updated storpool.conf file to all nodes in the cluster. There is no need to restart any services after this change.

  7. After /etc/storpool.conf has been updated, if the number of expected voting nodes in the cluster was reduced, make the new value active by executing the following on the currently active management node:

    # echo "expected ${EXPECTED}" | socat - unix-sendto:/var/run/storpool/beacon.cmd.sock
    

    where ${EXPECTED} is the new number of expected voting nodes, e.g. echo "expected 4" | socat - unix-sendto:/var/run/storpool/beacon.cmd.sock.

    Check that the expected number of voting nodes is correct with:

    # storpool net list
    
  8. Forget all StorPool disks previously used by the removed node:

    For each disk on this node run:

    # storpool disk <diskID> forget
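
    As with the soft-eject above, this can be scripted. The disk IDs 101 102 103 are hypothetical; use the IDs that belonged to the removed node:

    # for diskID in 101 102 103; do storpool disk ${diskID} forget; done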