Overview
Multi-cluster
The main use case for multi-cluster mode is seamless scalability within the same datacenter. A volume can be live-migrated between the sub-clusters of a multi-cluster setup, allowing workloads to be balanced across the sub-clusters in a location.
![Location A with three interconnected sub-clusters: Cluster A0, Cluster A1, and Cluster A2, each containing multiple nodes and a bridge](../../_images/graphviz-be332ebfd0edca7bd33e71822eeb6659846c4f1d.png)
Multi-cluster illustration
For a detailed overview, see Introduction to multi-cluster mode.
Multi site
Remotely connected clusters in different locations are referred to as multi site. When two remote clusters are connected, snapshots can be transferred efficiently between them. The usual use cases are remote backup and disaster recovery (see also Disaster Recovery Engine).
![Location A (Cluster A0, A1, A2) and Location B (Cluster B0, B1, B2) connected to each other](../../_images/graphviz-ad152d97e0890d55c79e682b11f6ef9b6809f0da.png)
Multi site illustration
Setup
Connecting clusters regardless of their locations requires the storpool_bridge service to be running on at least two nodes in each cluster.
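To confirm that the bridge is actually up before proceeding, a quick check along these lines can be used; it assumes the service is managed by a systemd unit named after the service and that the StorPool CLI is available on the node:

```
# Check the local systemd unit for the bridge (unit name assumed to match the
# service name).
systemctl status storpool_bridge

# List the StorPool services known to the cluster; the bridge instances should
# be reported as running.
storpool service list
```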
Each node running this service needs the following parameters configured in its /etc/storpool.conf or /etc/storpool.conf.d/*.conf files (see Introduction):

- SP_BRIDGE_HOST; see Address for the bridge service. Note that port 3749 must be open in the firewalls between the two locations.
- SP_CLUSTER_ID; see Cluster ID.
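As a minimal sketch of the parameters above, the per-node configuration could look like the following; the file name, address, and cluster ID are placeholders, not values to copy verbatim:

```
# /etc/storpool.conf.d/bridge.conf (example values only)

# Public address on which the local storpool_bridge listens; remote bridges
# connect to it over TCP port 3749, so this port must be reachable through the
# firewalls between the two locations.
SP_BRIDGE_HOST=198.51.100.10

# ID of the local cluster, as described in Cluster ID above (placeholder value).
SP_CLUSTER_ID=<your-cluster-id>
```

If firewalld manages the path between the locations, opening the port could be done with `firewall-cmd --permanent --add-port=3749/tcp` followed by `firewall-cmd --reload`; use the equivalent for whatever firewall is in place.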
A backup template should be configured through mgmtConfig (see Management configuration). It tells the local bridge which template to use for incoming snapshots from the VolumeBackup call, and when no template is specified explicitly.
Warning
The backupTemplateName mgmtConfig option must be configured in the destination cluster for storpool volume XXX backup LOCATION to work (otherwise the transfer will not start).
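As an illustration of the two sides of this requirement, the flow could look roughly as follows; backup-hdd, testvolume, and LocationB are hypothetical names, and the exact mgmtConfig invocation should be verified against Management configuration:

```
# In the destination cluster: set the template used for incoming backup
# snapshots (CLI form assumed; see Management configuration).
storpool mgmtConfig backupTemplateName backup-hdd

# In the source cluster: back up a volume to the remote location.
storpool volume testvolume backup LocationB
```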
For more information about the setup steps, see Connecting two clusters.