Connecting two clusters
Cluster parameters
In the examples in this document there are two clusters, named Cluster_A and Cluster_B:
| Option | Bridge A | Bridge B |
|---|---|---|
| SP_CLUSTER_ID | locationAId.aId | locationBId.bId |
| SP_BRIDGE_HOST | 10.10.10.1 | 10.10.20.1 |
| Public key (bridge.key.txt) | aaaaaaaaaaaaa.bbbbbbbbbbbb.ccccccccccccc.ddddddddddddd | eeeeeeeeeeeee.ffffffffffff.ggggggggggggg.hhhhhhhhhhhhh |
To connect these two clusters through their bridge services, you have to introduce each of them to the other. Note the following:
- In a multi-cluster setup the location is the same for both clusters.
- The procedure is the same in both cases, with the slight difference that in the multi-cluster case the remote bridges are usually configured with noCrypto (see Remote bridge).
Cluster A steps
The following parameters from Cluster_B will be required (one way to collect them is sketched after this list):
- The SP_CLUSTER_ID: locationBId.bId
- The SP_BRIDGE_HOST IP address: 10.10.20.1
- The public key located in /usr/lib/storpool/bridge/bridge.key.txt on the remote bridge host in Cluster_B: eeeeeeeeeeeee.ffffffffffff.ggggggggggggg.hhhhhhhhhhhhh
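One way to gather these values on the bridge node in Cluster_B is sketched below. It assumes the configuration lives in the standard /etc/storpool.conf file and that the key file path is the one mentioned above; adjust the paths to your installation. The same approach works on the Cluster_A side when performing the Cluster B steps later.
user@hostB # grep -E 'SP_CLUSTER_ID|SP_BRIDGE_HOST' /etc/storpool.conf   # cluster ID and bridge IP
user@hostB # cat /usr/lib/storpool/bridge/bridge.key.txt                 # bridge public key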
Using the CLI in Cluster_A, add Cluster_B’s location with the following commands:
user@hostA # storpool location add locationBId location_b
user@hostA # storpool cluster add location_b bId
user@hostA # storpool cluster list
--------------------------------------------
| name | id | location |
--------------------------------------------
| location_b-cl1 | bId | location_b |
--------------------------------------------
The remote name is location_b-cl1, where the clN number is automatically generated based on the cluster ID.
The last step to perform in Cluster_A is to register Cluster_B’s bridge. The command looks like this:
user@hostA # storpool remoteBridge register location_b-cl1 10.10.20.1 eeeeeeeeeeeee.ffffffffffff.ggggggggggggg.hhhhhhhhhhhhh
To list the registered bridges in Cluster_A:
user@hostA # storpool remoteBridge list
----------------------------------------------------------------------------------------------------------------------------
| ip | remote | minimumDeleteDelay | publicKey | noCrypto |
----------------------------------------------------------------------------------------------------------------------------
| 10.10.20.1 | location_b-cl1 | | eeeeeeeeeeeee.ffffffffffff.ggggggggggggg.hhhhhhhhhhhhh | 0 |
----------------------------------------------------------------------------------------------------------------------------
Hint
The public key in /usr/lib/storpool/bridge/bridge.key.txt will be generated on the first run of the storpool_bridge service.
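For example, assuming the service is managed through systemd on the bridge node (an assumption; use your distribution's service manager otherwise), you could start it and then read the newly generated key like this:
user@hostB # systemctl enable --now storpool_bridge   # first start generates bridge.key.txt
user@hostB # cat /usr/lib/storpool/bridge/bridge.key.txt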
Note
The noCrypto parameter is usually 1 in a multi-cluster setup with a secure datacenter network, for higher throughput and lower latency during migrations.
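As a sketch only: assuming your CLI version accepts a trailing noCrypto keyword on the register subcommand (verify against the storpool remoteBridge register help output before relying on it), the multi-cluster registration would look like this:
user@hostA # storpool remoteBridge register location_b-cl1 10.10.20.1 eeeeeeeeeeeee.ffffffffffff.ggggggggggggg.hhhhhhhhhhhhh noCrypto
After this, storpool remoteBridge list should show 1 in the noCrypto column for that remote.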
![Bridge A in Cluster A connected to Bridge B in Cluster B (registration done from Cluster A only)](../../_images/graphviz-09ae56d82ee5d3e6c8953b156d8fdd7ae19f828e.png)
Cluster B steps
Similarly, the parameters from Cluster_A will be required for registering the location, cluster, and bridge(s) in Cluster_B:
- The SP_CLUSTER_ID: locationAId.aId
- The SP_BRIDGE_HOST IP address in Cluster_A: 10.10.10.1
- The public key in /usr/lib/storpool/bridge/bridge.key.txt on the remote bridge host in Cluster_A: aaaaaaaaaaaaa.bbbbbbbbbbbb.ccccccccccccc.ddddddddddddd
Similarly, the commands to run in Cluster_B should be:
user@hostB # storpool location add locationAId location_a
user@hostB # storpool cluster add location_a aId
user@hostB # storpool cluster list
--------------------------------------------
| name | id | location |
--------------------------------------------
| location_a-cl1 | aId | location_a |
--------------------------------------------
user@hostB # storpool remoteBridge register location_a-cl1 10.10.10.1 aaaaaaaaaaaaa.bbbbbbbbbbbb.ccccccccccccc.ddddddddddddd
user@hostB # storpool remoteBridge list
-------------------------------------------------------------------------------------------------------------------------
| ip | remote | minimumDeleteDelay | publicKey | noCrypto |
-------------------------------------------------------------------------------------------------------------------------
| 10.10.10.1 | location_a-cl1 | | aaaaaaaaaaaaa.bbbbbbbbbbbb.ccccccccccccc.ddddddddddddd | 0 |
-------------------------------------------------------------------------------------------------------------------------
At this point, provided network connectivity is working, the two bridges will be connected.
![Bridge A in Cluster A and Bridge B in Cluster B connected in both directions](../../_images/graphviz-ed4ce0f7675989b501845b80f714ba5295210997.png)
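If the bridges do not connect, a basic network check from one bridge host toward the other bridge's SP_BRIDGE_HOST address can help rule out connectivity problems. The TCP port below is an assumption (3749 is commonly used by the StorPool bridge); confirm the actual port and any firewall rules for your installation.
user@hostB # ping -c 3 10.10.10.1       # basic reachability of Cluster_A's bridge IP
user@hostB # nc -vz 10.10.10.1 3749     # assumed bridge TCP port; adjust if different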
Bridge redundancy
Redundancy for the bridge services is achieved by configuring and starting the storpool_bridge service on two (or more) nodes in each cluster. Currently, only one bridge is active at a time; it fails over to another node when the active node or service is restarted.
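For example, assuming systemd-managed services and a second node that already has the StorPool configuration in place (the host name hostA2 is illustrative only), adding a standby bridge could look like this:
user@hostA2 # systemctl enable --now storpool_bridge
Since the remote bridge registration shown above is performed through the cluster-wide CLI, it should not need to be repeated for the additional bridge node.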