Clustering is a mechanism that enables multiple processes and programs to work
together as one entity. For example, when you search for something on
google.com, it may seem like your search request is processed by only one web
server. In reality, your search request is processed by many web servers
connected in a cluster. Similarly, you can have multiple instances of
OpenDaylight working together as one entity.
Advantages of clustering are:
* Scaling: If you have multiple instances of OpenDaylight running, you can
  potentially do more work and store more data than you could with only one
  instance. You can also break up your data into smaller chunks (shards) and
  either distribute that data across the cluster or perform certain operations
  on certain members of the cluster.
* High Availability: If you have multiple instances of OpenDaylight running and
  one of them crashes, the remaining instances will keep working and remain
  available.
* Data Persistence: You will not lose any data stored in OpenDaylight after a
  manual restart or a crash.
The following sections describe how to set up clustering on both individual and
multiple OpenDaylight instances.

Multiple Node Clustering
------------------------

The following sections describe how to set up multiple node clusters in OpenDaylight.
Deployment Considerations
^^^^^^^^^^^^^^^^^^^^^^^^^

To implement clustering, the deployment considerations are as follows:
* To set up a cluster with multiple nodes, we recommend that you use a minimum
  of three machines. You can set up a cluster with just two nodes. However, if
  one of the two nodes fails, the cluster will not be operational.

  .. note:: This is because clustering in OpenDaylight requires a majority of the
     nodes to be up and one node cannot be a majority of two nodes.
* Every device that belongs to a cluster needs to have an identifier.
  OpenDaylight uses the node's ``role`` for this purpose. After you define the
  first node's role as *member-1* in the ``akka.conf`` file, OpenDaylight uses
  *member-1* to identify that node.
* Data shards are used to contain all or a certain segment of OpenDaylight's
  MD-SAL datastore. For example, one shard can contain all the inventory data
  while another shard contains all of the topology data.

  If you do not specify a module in the ``modules.conf`` file and do not specify
  a shard in ``module-shards.conf``, then (by default) all the data is placed in
  the default shard (which must also be defined in the ``module-shards.conf``
  file). Each shard has replicas configured. You can specify the details of where
  the replicas reside in the ``module-shards.conf`` file.
* If you have a three node cluster and would like to be able to tolerate any
  single node crashing, a replica of every defined data shard must be running
  on all three cluster nodes.

  .. note:: This is because OpenDaylight's clustering implementation requires a
     majority of the defined shard replicas to be running in order to
     function. If you define data shard replicas on two of the cluster nodes
     and one of those nodes goes down, the corresponding data shards will not
     be operational.
* If you have a three node cluster and have defined replicas for a data shard
  on each of those nodes, that shard will still function even if only two of
  the cluster nodes are running. Note that if one of those remaining two nodes
  goes down, the shard will not be operational.
* It is recommended that you have multiple seed nodes configured. After a
  cluster member is started, it sends a message to all of its seed nodes.
  The cluster member then sends a join command to the first seed node that
  responds. If none of its seed nodes reply, the cluster member repeats this
  process until it successfully establishes a connection or it is shut down.
* After a node is unreachable, it remains down for a configurable period of time
  (10 seconds, by default). Once a node goes down, you need to restart it so
  that it can rejoin the cluster. Once a restarted node joins a cluster, it
  will synchronize with the lead node automatically.
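The majority arithmetic behind these recommendations can be sketched in a few
lines of Python (an illustration only, not part of OpenDaylight):

```python
def has_majority(total_nodes: int, up_nodes: int) -> bool:
    # A cluster (or shard) stays operational only while a strict
    # majority of its members is reachable.
    return up_nodes > total_nodes // 2

def tolerated_failures(total_nodes: int) -> int:
    # Largest number of simultaneous failures that still leaves a majority.
    return (total_nodes - 1) // 2

# Two nodes: losing one leaves 1 of 2, which is not a majority.
# Three nodes: losing one leaves 2 of 3, which is.
print(tolerated_failures(2), tolerated_failures(3))
```

This is why a two node cluster stops being operational as soon as either node
fails, while a three node cluster survives the loss of any single node.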
.. _getting-started-clustering-scripts:

Clustering Scripts
------------------

OpenDaylight includes some scripts to help with the clustering configuration.

Scripts are stored in the OpenDaylight distribution/bin folder, and
maintained in the distribution project
`repository <https://git.opendaylight.org/gerrit/p/integration/distribution>`_
in the folder distribution-karaf/src/main/assembly/bin/.
Configure Cluster Script
^^^^^^^^^^^^^^^^^^^^^^^^

This script is used to configure the cluster parameters (e.g. akka.conf,
module-shards.conf) on a member of the controller cluster. The user should
restart the node to apply the changes.
The script can be used at any time, even before the controller is started
for the first time.

Usage::

    bin/configure_cluster.sh <index> <seed_nodes_list>

* index: Integer within 1..N, where N is the number of seed nodes. This indicates
  which controller node (1..N) is configured by the script.
* seed_nodes_list: List of seed nodes (IP address), separated by comma or space.

The IP address at the provided index should belong to the member executing
the script. When running this script on multiple seed nodes, keep the
seed_node_list the same, and vary the index from 1 through N.
Optionally, shards can be configured in a more granular way by modifying the
file "custom_shard_configs.txt" in the same folder as this tool. Please see
that file for more details.
Example::

    bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3

The above command will configure member 2 (IP address 192.168.0.2) of a
cluster made of 192.168.0.1, 192.168.0.2 and 192.168.0.3.
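For a whole cluster, the per-node invocations differ only in the index. A
small Python sketch (a hypothetical helper, not shipped with OpenDaylight)
that prints the command to run on each machine:

```python
seed_nodes = ["192.168.0.1", "192.168.0.2", "192.168.0.3"]

# configure_cluster.sh takes the member index plus the full seed node
# list; the list is identical everywhere, only the index varies, and
# command i must be executed on the machine owning seed_nodes[i-1].
commands = [
    f"bin/configure_cluster.sh {i} {' '.join(seed_nodes)}"
    for i in range(1, len(seed_nodes) + 1)
]
for command in commands:
    print(command)
```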
Setting Up a Multiple Node Cluster
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run OpenDaylight in a three node cluster, perform the following:

First, determine the three machines that will make up the cluster. After that,
do the following on each machine:

#. Copy the OpenDaylight distribution zip file to the machine.
#. Unzip the distribution.
#. Open the following .conf files:

   * configuration/initial/akka.conf
   * configuration/initial/module-shards.conf
#. In each configuration file, make the following changes:

   Find every instance of the following lines and replace ``127.0.0.1`` with the
   hostname or IP address of the machine on which this file resides and
   OpenDaylight will run::

       hostname = "127.0.0.1"

   .. note:: The value you need to specify will be different for each node in the
      cluster.
#. Find the following lines and replace ``127.0.0.1`` with the hostname or IP
   address of any of the machines that will be part of the cluster::

       seed-nodes = ["akka.tcp://opendaylight-cluster-data@${IP_OF_MEMBER1}:2550",
                     <url-to-cluster-member-2>,
                     <url-to-cluster-member-3>]
#. Find the following section and specify the role for each member node. Here
   we assign the first node with the *member-1* role, the second node with the
   *member-2* role, and the third node with the *member-3* role::

       roles = [
         "member-1"
       ]

   .. note:: This step should use a different role on each node.
#. Open the configuration/initial/module-shards.conf file and update the
   replicas so that each shard is replicated to all three nodes::

       replicas = [
           "member-1",
           "member-2",
           "member-3"
       ]

   For reference, see the sample configuration files below.
#. Move into the ``<karaf-distribution-directory>/bin`` directory.
#. Run the following command::

       JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf

#. Enable clustering by running the following command at the Karaf command line::

       feature:install odl-mdsal-clustering

OpenDaylight should now be running in a three node cluster. You can use any of
the three member nodes to access the data residing in the datastore.
Sample ``akka.conf`` file::

    odl-cluster-data {
      bounded-mailbox {
        mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
        mailbox-capacity = 1000
        mailbox-push-timeout-time = 100ms
      }

      metric-capture-enabled = true

      akka {
        loggers = ["akka.event.slf4j.Slf4jLogger"]

        actor {
          provider = "akka.cluster.ClusterActorRefProvider"
          serializers {
            java = "akka.serialization.JavaSerializer"
            proto = "akka.remote.serialization.ProtobufSerializer"
          }

          serialization-bindings {
            "com.google.protobuf.Message" = proto
          }
        }

        remote {
          log-remote-lifecycle-events = off
          netty.tcp {
            hostname = "10.194.189.96"
            port = 2550
            maximum-frame-size = 419430400
            send-buffer-size = 52428800
            receive-buffer-size = 52428800
          }
        }

        cluster {
          seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.194.189.96:2550",
                        "akka.tcp://opendaylight-cluster-data@10.194.189.98:2550",
                        "akka.tcp://opendaylight-cluster-data@10.194.189.101:2550"]

          auto-down-unreachable-after = 10s

          roles = [
            "member-1"
          ]
        }
      }
    }
::

    odl-cluster-rpc {
      bounded-mailbox {
        mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
        mailbox-capacity = 1000
        mailbox-push-timeout-time = 100ms
      }

      metric-capture-enabled = true

      akka {
        loggers = ["akka.event.slf4j.Slf4jLogger"]

        actor {
          provider = "akka.cluster.ClusterActorRefProvider"
        }

        remote {
          log-remote-lifecycle-events = off
          netty.tcp {
            hostname = "10.194.189.96"
            port = 2551
          }
        }

        cluster {
          seed-nodes = ["akka.tcp://opendaylight-cluster-rpc@10.194.189.96:2551"]

          auto-down-unreachable-after = 10s
        }
      }
    }
Sample ``module-shards.conf`` file::

    module-shards = [
        {
            name = "default"
            shards = [
                {
                    name = "default"
                    replicas = [
                        "member-1",
                        "member-2",
                        "member-3"
                    ]
                }
            ]
        },
        {
            name = "topology"
            shards = [
                {
                    name = "topology"
                    replicas = [
                        "member-1",
                        "member-2",
                        "member-3"
                    ]
                }
            ]
        }
    ]
Cluster Monitoring
------------------

OpenDaylight exposes shard information via MBeans, which can be explored with
JConsole, VisualVM, or other JMX clients, or exposed via a REST API using
`Jolokia <https://jolokia.org/features-nb.html>`_, provided by the
``odl-jolokia`` Karaf feature. This is convenient, due to a significant focus
on REST in OpenDaylight.
The basic URI that lists a schema of all available MBeans, but not their
content, is::

    GET /jolokia/list
To read the information about the shards local to the queried OpenDaylight
instance use the following REST calls. For the config datastore::

    GET /jolokia/read/org.opendaylight.controller:type=DistributedConfigDatastore,Category=ShardManager,name=shard-manager-config

For the operational datastore::

    GET /jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational
The output contains information on shards present on the node::

    {
      "request": {
        "mbean": "org.opendaylight.controller:Category=ShardManager,name=shard-manager-operational,type=DistributedOperationalDatastore",
        "type": "read"
      },
      "value": {
        "LocalShards": [
          "member-1-shard-default-operational",
          "member-1-shard-entity-ownership-operational",
          "member-1-shard-topology-operational",
          "member-1-shard-inventory-operational",
          "member-1-shard-toaster-operational"
        ],
        "SyncStatus": true,
        "MemberName": "member-1"
      },
      "timestamp": 1483738005,
      "status": 200
    }
The exact names from the ``LocalShards`` list are needed for further
exploration, as they will be used as part of the URI to look up detailed info
on a particular shard. An example output for the
``member-1-shard-default-operational`` looks like this::
    {
      "request": {
        "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-default-operational,type=DistributedOperationalDatastore",
        "type": "read"
      },
      "value": {
        "ReadWriteTransactionCount": 0,
        "InMemoryJournalLogSize": 1,
        "ReplicatedToAllIndex": 4,
        "Leader": "member-1-shard-default-operational",
        "RaftState": "Leader",
        "LastCommittedTransactionTime": "2017-01-06 13:19:00.135",
        "LastLeadershipChangeTime": "2017-01-06 13:18:37.605",
        "PeerAddresses": "member-3-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.3:2550/user/shardmanager-operational/member-3-shard-default-operational, member-2-shard-default-operational: akka.tcp://opendaylight-cluster-data@192.168.16.2:2550/user/shardmanager-operational/member-2-shard-default-operational",
        "WriteOnlyTransactionCount": 0,
        "FollowerInitialSyncStatus": false,
        "FollowerInfo": [
          {
            "timeSinceLastActivity": "00:00:00.320",
            "id": "member-3-shard-default-operational"
          },
          {
            "timeSinceLastActivity": "00:00:00.320",
            "id": "member-2-shard-default-operational"
          }
        ],
        "FailedReadTransactionsCount": 0,
        "StatRetrievalTime": "810.5 μs",
        "FailedTransactionsCount": 0,
        "PendingTxCommitQueueSize": 0,
        "VotedFor": "member-1-shard-default-operational",
        "SnapshotCaptureInitiated": false,
        "CommittedTransactionsCount": 6,
        "TxCohortCacheSize": 0,
        "PeerVotingStates": "member-3-shard-default-operational: true, member-2-shard-default-operational: true",
        "StatRetrievalError": null,
        "AbortTransactionsCount": 0,
        "ReadOnlyTransactionCount": 0,
        "ShardName": "member-1-shard-default-operational",
        "LeadershipChangeCount": 1,
        "InMemoryJournalDataSize": 450
      },
      "timestamp": 1483740350,
      "status": 200
    }
The output helps identify the shard state (leader/follower, voting/non-voting),
peers, follower details if the shard is a leader, and other statistics useful
when debugging the cluster.

The ODLTools team maintains a Python based `tool
<https://github.com/opendaylight/odltools>`_
that takes advantage of the above MBeans exposed via Jolokia.
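As a sketch of how such a reply can be consumed programmatically (assuming a
Jolokia response already fetched and shaped like the samples above; the helper
names below are ours, not an OpenDaylight API):

```python
import json

# Trimmed Jolokia reply, shaped like the shard-manager sample above.
reply = json.loads("""
{
  "value": {
    "LocalShards": [
      "member-1-shard-default-operational",
      "member-1-shard-topology-operational"
    ],
    "MemberName": "member-1"
  },
  "status": 200
}
""")

def local_shards(jolokia_reply):
    # The shard names are needed verbatim in the per-shard MBean URI.
    return jolokia_reply["value"]["LocalShards"]

def shard_read_uri(shard_name, datastore="Operational"):
    # URI pattern taken from the example above.
    return ("/jolokia/read/org.opendaylight.controller:"
            "Category=Shards,name=%s,type=Distributed%sDatastore"
            % (shard_name, datastore))

for name in local_shards(reply):
    print(shard_read_uri(name))
```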
.. _cluster_admin_api:

Split Brain Resolver
--------------------

A fundamental problem in distributed systems is that network
partitions (split brain scenarios) and machine crashes are indistinguishable
for the observer, i.e. a node can observe that there is a problem with another
node, but it cannot tell if it has crashed and will never be available again,
if there is a network issue that might or might not heal again after a while, or
if the process is unresponsive because of overload, CPU starvation or long
garbage collection pauses.
When there is a crash, we would like to remove the affected node immediately
from the cluster membership. When there is a network partition or unresponsive
process we would like to wait for a while in the hope that it is a transient
problem that will heal again, but at some point, we must give up and continue
with the nodes on one side of the partition and shut down nodes on the other
side. Also, certain features are not fully available during partitions, so it
might not matter whether the partition is transient if it just takes too
long. These two goals are in conflict with each other and there is a trade-off
between how quickly we can remove a crashed node and premature action on
transient network partitions.
You need to enable the Split Brain Resolver by configuring it as the downing
provider in the configuration::

    akka.cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"

You should also consider the different downing strategies, described below.

.. note:: If no downing provider is specified, the default ``NoDowning`` provider is used.
All strategies are inactive until the cluster membership and the information about
unreachable nodes have been stable for a certain time period. Continuously adding
more nodes while there is a network partition does not influence this timeout, since
the status of those nodes will not be changed to Up while there are unreachable nodes.
Joining nodes are not counted in the logic of the strategies.
Setting ``akka.cluster.split-brain-resolver.stable-after`` to a shorter duration for
quicker removal of crashed nodes can be done at the price of risking premature action
on transient network partitions that otherwise would have healed. Do not set this to a
shorter duration than the membership dissemination time in the cluster, which depends
on the cluster size. Recommended minimum durations for different cluster sizes:

============ ============
Cluster size stable-after
============ ============
5            7 s
10           10 s
20           13 s
50           17 s
100          20 s
1000         30 s
============ ============
.. note:: It is important that you use the same configuration on all nodes.
When reachability observations by the failure detector are changed, the SBR
decisions are deferred until there are no changes within the stable-after
duration. If this continues for too long it might be an indication of an
unstable system/network and it could result in delayed or conflicting
decisions on separate sides of a network partition.

As a precaution for that scenario all nodes are downed if no decision is
made within stable-after + down-all-when-unstable from the first unreachability
event. The measurement is reset if all unreachable nodes have been healed, downed or
removed, or if there are no changes within stable-after * 2.
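The timing rules above can be expressed as a small calculation (a sketch; the
4 second lower bound for the derived value is the default from the Akka
reference configuration):

```python
def sbr_timings(stable_after):
    # down-all-when-unstable = on derives to 3/4 of stable-after,
    # but not less than 4 seconds.
    down_all_when_unstable = max(0.75 * stable_after, 4.0)
    return {
        # No decision within this window downs all nodes.
        "give_up_after": stable_after + down_all_when_unstable,
        # The measurement is reset if nothing changes for stable-after * 2.
        "reset_after": 2 * stable_after,
        "down_all_when_unstable": down_all_when_unstable,
    }

print(sbr_timings(20))  # e.g. stable-after = 20s
```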
Configuration::

    akka.cluster.split-brain-resolver {
      # Time margin after which shards or singletons that belonged to a downed/removed
      # partition are created in surviving partition. The purpose of this margin is that
      # in case of a network partition the persistent actors in the non-surviving partitions
      # must be stopped before corresponding persistent actors are started somewhere else.
      # This is useful if you implement downing strategies that handle network partitions,
      # e.g. by keeping the larger side of the partition and shutting down the smaller side.
      # Decision is taken by the strategy when there has been no membership or
      # reachability changes for this duration, i.e. the cluster state is stable.
      stable-after = 20s

      # When reachability observations by the failure detector are changed the SBR decisions
      # are deferred until there are no changes within the 'stable-after' duration.
      # If this continues for too long it might be an indication of an unstable system/network
      # and it could result in delayed or conflicting decisions on separate sides of a network
      # partition.
      # As a precaution for that scenario all nodes are downed if no decision is made within
      # `stable-after + down-all-when-unstable` from the first unreachability event.
      # The measurement is reset if all unreachable have been healed, downed or removed, or
      # if there are no changes within `stable-after * 2`.
      # The value can be on, off, or a duration.
      # By default it is 'on' and then it is derived to be 3/4 of stable-after, but not less than
      # 4 seconds.
      down-all-when-unstable = on
    }
Keep majority
^^^^^^^^^^^^^

This strategy is used by default, because it works well for most systems.
It will down the unreachable nodes if the current node is in the majority part
based on the last known membership information. Otherwise it downs the reachable
nodes, i.e. its own part. If the parts are of equal size the part containing the
node with the lowest address is kept.

This strategy is a good choice when the number of nodes in the cluster changes
dynamically and you therefore cannot use static-quorum.
* If there are membership changes at the same time as the network partition
  occurs, for example, the status of two members are changed to Up on one side
  but that information is not disseminated to the other side before the
  connection is broken, it will down all nodes on the side that could be in
  minority if the joining nodes were changed to Up on the other side.
  Note that if the joining nodes were not changed to Up and becoming a majority
  on the other side then each part will shut down itself, terminating the whole
  cluster.

* If there are more than two partitions and none is in majority each part will
  shut down itself, terminating the whole cluster.

* If more than half of the nodes crash at the same time the other running nodes
  will down themselves because they think that they are not in majority, and
  thereby the whole cluster is terminated.
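The decision rule can be illustrated with a short sketch (ours, not Akka code):

```python
def keep_majority_survivors(own_side, other_side):
    # The larger part survives; on equal sizes the part containing the
    # node with the lowest address is kept.
    if len(own_side) > len(other_side):
        return own_side
    if len(own_side) < len(other_side):
        return other_side
    lowest = min(own_side + other_side)
    return own_side if lowest in own_side else other_side

# 3-node cluster split 2 vs 1: the two-node side is kept.
print(keep_majority_survivors(["10.0.0.1", "10.0.0.2"], ["10.0.0.3"]))
# 2 vs 2 tie: the side holding the lowest address wins.
print(keep_majority_survivors(["10.0.0.3", "10.0.0.4"], ["10.0.0.1", "10.0.0.2"]))
```

Because every node evaluates the same rule against the same last known
membership, both sides of a clean partition reach a consistent verdict.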
The decision can be based on nodes with a configured role instead of all nodes
in the cluster. This can be useful when some types of nodes are more valuable
than others.

Configuration::

    akka.cluster.split-brain-resolver.active-strategy=keep-majority

    akka.cluster.split-brain-resolver.keep-majority {
      # if the 'role' is defined the decision is based only on members with that 'role'
      role = ""
    }
Static quorum
^^^^^^^^^^^^^

The strategy named static-quorum will down the unreachable nodes if the number
of remaining nodes is greater than or equal to a configured quorum-size.
Otherwise, it will down the reachable nodes, i.e. it will shut down that side
of the partition.

This strategy is a good choice when you have a fixed number of nodes in the
cluster, or when you can define a fixed number of nodes with a certain role.
* If there are unreachable nodes when starting up the cluster, before reaching
  this limit, the cluster may shut itself down immediately. This is not an issue
  if you start all nodes at approximately the same time or use
  ``akka.cluster.min-nr-of-members`` to define the required number of members
  before the leader changes the member status of ‘Joining’ members to ‘Up’. You
  can tune the timeout after which downing decisions are made using the
  stable-after setting.

* You should not add more members to the cluster than quorum-size * 2 - 1.
  If the exceeded cluster size remains when an SBR decision is needed it will down
  all nodes because otherwise there is a risk that both sides may down each other
  and thereby form two separate clusters.

* If the cluster is split into 3 (or more) parts, each part that is smaller than
  the configured quorum-size will down itself and possibly shut down the whole
  cluster.

* If more nodes than the configured quorum-size crash at the same time the other
  running nodes will down themselves because they think that they are not in the
  majority, and thereby the whole cluster is terminated.
The decision can be based on nodes with a configured role instead of all nodes
in the cluster. This can be useful when some types of nodes are more valuable
than others.

By defining a role for a few stable nodes in the cluster and using that in the
configuration of static-quorum you will be able to dynamically add and remove
other nodes without this role and still have good decisions of what nodes to
keep running and what nodes to shut down in the case of network partitions.
The advantage of this approach compared to keep-majority is that you do not risk
splitting the cluster into two separate clusters, i.e. a split brain.
Configuration::

    akka.cluster.split-brain-resolver.active-strategy=static-quorum

    akka.cluster.split-brain-resolver.static-quorum {
      # minimum number of nodes that the cluster must have
      quorum-size = undefined

      # if the 'role' is defined the decision is based only on members with that 'role'
      role = ""
    }
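The sizing constraint and the decision rule can be sketched as (illustration
only, not Akka code):

```python
def static_quorum_keeps_side(reachable_nodes, quorum_size):
    # A side survives only if it still sees at least quorum-size nodes.
    return reachable_nodes >= quorum_size

def max_cluster_size(quorum_size):
    # More members than quorum-size * 2 - 1 risks both sides reaching
    # quorum and forming two separate clusters.
    return quorum_size * 2 - 1

# quorum-size = 2 on a 3-node cluster: a 2-node side survives,
# an isolated single node downs itself.
print(static_quorum_keeps_side(2, 2), static_quorum_keeps_side(1, 2))
print(max_cluster_size(2))
```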
Keep oldest
^^^^^^^^^^^

The strategy named keep-oldest will down the part that does not contain the oldest
member. The oldest member is interesting because the active Cluster Singleton
instance is running on the oldest member.

This strategy is good to use if you use Cluster Singleton and do not want to shut
down the node where the singleton instance runs. If the oldest node crashes a new
singleton instance will be started on the next oldest node.
* If down-if-alone is configured to on, then if the oldest node has partitioned
  from all other nodes the oldest will down itself and keep all other nodes running.
  The strategy will not down the single oldest node when it is the only remaining
  node in the cluster.

* If there are membership changes at the same time as the network partition occurs,
  for example, the status of the oldest member is changed to Exiting on one side but
  that information is not disseminated to the other side before the connection is
  broken, it will detect this situation and make the safe decision to down all nodes
  on the side that sees the oldest as Leaving. Note that this has the drawback that
  if the oldest was Leaving and not changed to Exiting then each part will shut down
  itself, terminating the whole cluster.
The decision can be based on nodes with a configured role instead of all nodes in
the cluster.

Configuration::

    akka.cluster.split-brain-resolver.active-strategy=keep-oldest

    akka.cluster.split-brain-resolver.keep-oldest {
      # Enable downing of the oldest node when it is partitioned from all other nodes
      down-if-alone = on

      # if the 'role' is defined the decision is based only on members with that 'role',
      # i.e. using the oldest member (singleton) within the nodes with that role
      role = ""
    }
Down all
^^^^^^^^

The strategy named down-all will down all nodes.

This strategy can be a safe alternative if the network environment is highly unstable
with unreachability observations that can’t be fully trusted, including frequent
occurrences of indirectly connected nodes. Due to the instability there is an increased
risk of different information on different sides of partitions and therefore the other
strategies may result in conflicting decisions. In such environments it can be better
to shut down all nodes and start up a new fresh cluster.
* This strategy is not recommended for large clusters (> 10 nodes) because any minor
  problem will shut down all nodes, and that is more likely to happen in larger clusters
  since there are more nodes that may fail.

Configuration::

    akka.cluster.split-brain-resolver.active-strategy=down-all
Lease majority
^^^^^^^^^^^^^^

The strategy named lease-majority uses a distributed lease (lock) to decide which
nodes are allowed to survive. Only one SBR instance can acquire the lease and make
the decision to remain up. The other side will not be able to acquire the lease and
will therefore down itself.

This strategy is very safe since coordination is added by an external arbiter.

* In some cases the lease will be unavailable when needed for a decision from all
  SBR instances, e.g. because it is on another side of a network partition, and then
  all nodes will be downed.
Configuration::

    akka {
      cluster {
        downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
        split-brain-resolver {
          active-strategy = "lease-majority"
          lease-majority {
            lease-implementation = "akka.coordination.lease.kubernetes"
          }
        }
      }
    }

Default configuration::

    akka.cluster.split-brain-resolver.lease-majority {
      lease-implementation = ""

      # This delay is used on the minority side before trying to acquire the lease,
      # as a best effort to try to keep the majority side.
      acquire-lease-delay-for-minority = 2s

      # If the 'role' is defined the majority/minority is based only on members with that 'role'.
      role = ""
    }
Indirectly connected nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^

In a malfunctioning network there can be situations where nodes are observed as
unreachable via some network links but they are still indirectly connected via
other nodes, i.e. it’s not a clean network partition (or node crash).

When this situation is detected the Split Brain Resolvers will keep fully
connected nodes and down all the indirectly connected nodes.

If there is a combination of indirectly connected nodes and a clean network
partition it will combine the above decision with the ordinary decision,
e.g. keep majority, after excluding suspicious failure detection observations.
Multiple Data Centers
---------------------

An OpenDaylight cluster can run across multiple data centers in a way that
tolerates network partitions between them.

Nodes can be assigned to data centers by setting the
``akka.cluster.multi-data-center.self-data-center`` configuration property.
A node can only belong to one data center and if nothing is specified a node will
belong to the default data center.

The grouping of nodes is not limited to the physical boundaries of data centers,
it could also be used as a logical grouping for other reasons, such as isolation
of certain nodes to improve stability or splitting up a large cluster into smaller
groups of nodes for better scalability.
Failure Detection
^^^^^^^^^^^^^^^^^

Failure detection is performed by sending heartbeat messages to detect if a node
is unreachable. This is done more frequently and with more certainty among the
nodes in the same data center than across data centers.

Two different failure detectors can be configured for these two purposes:

* ``akka.cluster.failure-detector`` for failure detection within the own data center

* ``akka.cluster.multi-data-center.failure-detector`` for failure detection across
  different data centers

Heartbeat messages for failure detection across data centers are only performed
between a number of the oldest nodes on each side. The number of nodes is configured
with ``akka.cluster.multi-data-center.cross-data-center-connections``.

This influences how rolling updates should be performed. Don’t stop all of the oldest nodes
that are used for gossip at the same time. Stop one or a few at a time so that new
nodes can take over the responsibility. It’s best to leave the oldest nodes until last.
Configuration::

    multi-data-center {
      # Defines which data center this node belongs to. It is typically used to make islands of the
      # cluster that are colocated. This can be used to make the cluster aware that it is running
      # across multiple availability zones or regions. It can also be used for other logical
      # grouping of nodes.
      self-data-center = "default"

      # Try to limit the number of connections between data centers. Used for gossip and heartbeating.
      # This will not limit connections created for the messaging of the application.
      # If the cluster does not span multiple data centers, this value has no effect.
      cross-data-center-connections = 5

      # The n oldest nodes in a data center will choose to gossip to another data center with
      # this probability. Must be a value between 0.0 and 1.0 where 0.0 means never, 1.0 means always.
      # When a data center is first started (nodes < 5) a higher probability is used so other data
      # centers find out about the new nodes more quickly
      cross-data-center-gossip-probability = 0.2

      failure-detector {
        # FQCN of the failure detector implementation.
        # It must implement akka.remote.FailureDetector and have
        # a public constructor with a com.typesafe.config.Config and
        # akka.actor.EventStream parameter.
        implementation-class = "akka.remote.DeadlineFailureDetector"

        # Number of potentially lost/delayed heartbeats that will be
        # accepted before considering it to be an anomaly.
        # This margin is important to be able to survive sudden, occasional,
        # pauses in heartbeat arrivals, due to for example garbage collect or
        # network drop.
        acceptable-heartbeat-pause = 10 s

        # How often keep-alive heartbeat messages should be sent to each connection.
        heartbeat-interval = 3 s

        # After the heartbeat request has been sent the first failure detection
        # will start after this period, even though no heartbeat message has
        # been received.
        expected-response-after = 1 s
      }
    }
Failover
^^^^^^^^

It is desirable to have the possibility to fail over to a different
data center, in case all nodes become unreachable. To achieve that,
shards in the backup data center must be in "non-voting" state.

The API to manipulate voting states on shards is defined as RPCs in the
`cluster-admin.yang <https://git.opendaylight.org/gerrit/gitweb?p=controller.git;a=blob;f=opendaylight/md-sal/sal-cluster-admin-api/src/main/yang/cluster-admin.yang>`_
file in the *controller* project, which is well documented. A summary is
provided below.

Unless otherwise indicated, the below POST requests are to be sent to any
single cluster node.

To create an active/backup setup with a 6 node cluster (3 active and 3 backup
nodes in two locations), the following configuration is used:
* for member-1, member-2 and member-3 (active data center)::

      akka.cluster.multi-data-center {
        self-data-center = "main"
      }

* for member-4, member-5, member-6 (backup data center)::

      akka.cluster.multi-data-center {
        self-data-center = "backup"
      }
There is an RPC to set voting states of all shards on
a list of nodes to a given state::

    POST /restconf/operations/cluster-admin:change-member-voting-states-for-all-shards

or, using the RFC 8040 URL::

    POST /rests/operations/cluster-admin:change-member-voting-states-for-all-shards
This RPC needs the list of nodes and the desired voting state as input. For
creating the backup nodes, this example input can be used::

    {
      "input": {
        "member-voting-state": [
          {
            "member-name": "member-4",
            "voting": false
          },
          {
            "member-name": "member-5",
            "voting": false
          },
          {
            "member-name": "member-6",
            "voting": false
          }
        ]
      }
    }
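The input can be generated for any member list; a small sketch (the ``voting``
leaf name follows the cluster-admin.yang model referenced above):

```python
import json

def voting_states_input(member_names, voting):
    # Payload shape follows the change-member-voting-states-for-all-shards
    # example above.
    return {
        "input": {
            "member-voting-state": [
                {"member-name": name, "voting": voting}
                for name in member_names
            ]
        }
    }

payload = voting_states_input(["member-4", "member-5", "member-6"], False)
print(json.dumps(payload, indent=2))
```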
When an active/backup deployment already exists, with shards on the backup
nodes in non-voting state, all that is needed for a fail-over from the active
data center to the backup data center is to flip the voting state of each
shard (on each node, active AND backup). That can be easily achieved with the
following RPC call (no parameters needed)::

    POST /restconf/operations/cluster-admin:flip-member-voting-states-for-all-shards

or, using the RFC 8040 URL::

    POST /rests/operations/cluster-admin:flip-member-voting-states-for-all-shards
If it is an unplanned outage where the primary voting nodes are down, the
"flip" RPC must be sent to a backup non-voting node. In this case there are no
shard leaders to carry out the voting changes. However, there is a special
case: if the node that receives the RPC is non-voting, is to be changed to
voting, and there is no leader, it will apply the voting changes locally and
attempt to become the leader. If successful, it persists the voting changes
and replicates them to the remaining nodes.
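As a sketch of what the fail-over call looks like from a script, the following
Python builds the request using only the standard library. The port (8181) and
``admin``/``admin`` credentials are typical Karaf defaults, not something the
RPC mandates; adjust them for your deployment:

```python
import base64
import urllib.request

# Sketch only: port 8181 and admin/admin are common Karaf defaults, not a
# requirement. During an unplanned outage, point `host` at one of the
# surviving (backup, non-voting) nodes.
def flip_voting_request(host, user="admin", password="admin"):
    url = (f"http://{host}:8181/rests/operations/"
           "cluster-admin:flip-member-voting-states-for-all-shards")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, data=b"", method="POST")
    req.add_header("Authorization", f"Basic {token}")
    return req

# A caller would then execute it with:
#   urllib.request.urlopen(flip_voting_request("10.0.0.4"))
```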
When the primary site is fixed and you want to fail back to it, care must be
taken when bringing the site back up. Because it was down when the voting
states were flipped on the secondary, its persisted database won't contain
those changes. If brought back up in that state, the nodes will think they're
still voting. If the nodes have connectivity to the secondary site, they
should follow the leader in the secondary site and sync with it. However, if
this does not happen, the primary site may elect its own leader, thereby
partitioning the two clusters, which can lead to undesirable results. Therefore
it is recommended to either clean the databases (i.e., the ``journal`` and
``snapshots`` directories) on the primary nodes before bringing them back up,
or restore them from a recent backup of the secondary site (see section
:ref:`cluster_backup_restore`).
It is also possible to gracefully remove a node from a cluster with the
following RPC::

    POST /restconf/operations/cluster-admin:remove-all-shard-replicas

or, using the RFC 8040 RESTCONF endpoint::

    POST /rests/operations/cluster-admin:remove-all-shard-replicas

with this example input::

    {
      "input": {
        "member-name": "member-1"
      }
    }
or just one particular shard::

    POST /restconf/operations/cluster-admin:remove-shard-replica

or, using the RFC 8040 RESTCONF endpoint::

    POST /rests/operations/cluster-admin:remove-shard-replica

with this example input::

    {
      "input": {
        "shard-name": "default",
        "member-name": "member-2",
        "data-store-type": "config"
      }
    }
Now that a (potentially dead/unrecoverable) node has been removed, another one
can be added at runtime, without changing the configuration files of the
healthy nodes (which would require a reboot)::

    POST /restconf/operations/cluster-admin:add-replicas-for-all-shards

or, using the RFC 8040 RESTCONF endpoint::

    POST /rests/operations/cluster-admin:add-replicas-for-all-shards

No input is required, but this RPC needs to be sent to the new node, to
instruct it to replicate all shards from the cluster.
.. note:: While the cluster admin API allows adding and removing shards
   dynamically, the ``module-shards.conf`` and ``modules.conf`` files are still
   used on startup to define the initial configuration of shards. Modifications
   from the use of the API are not stored to those static files, but to the
   journal.
Extra Configuration Options
---------------------------
============================================== ================= ======= ==============================================================================================================================================================================
Name                                           Type              Default Description
============================================== ================= ======= ==============================================================================================================================================================================
max-shard-data-change-executor-queue-size      uint32 (1..max)   1000    The maximum queue size for each shard's data store data change notification executor.
max-shard-data-change-executor-pool-size       uint32 (1..max)   20      The maximum thread pool size for each shard's data store data change notification executor.
max-shard-data-change-listener-queue-size      uint32 (1..max)   1000    The maximum queue size for each shard's data store data change listener.
max-shard-data-store-executor-queue-size       uint32 (1..max)   5000    The maximum queue size for each shard's data store executor.
shard-transaction-idle-timeout-in-minutes      uint32 (1..max)   10      The maximum amount of time a shard transaction can be idle without receiving any messages before it self-destructs.
shard-snapshot-batch-count                     uint32 (1..max)   20000   The minimum number of entries to be present in the in-memory journal log before a snapshot is to be taken.
shard-snapshot-data-threshold-percentage       uint8 (1..100)    12      The percentage of Runtime.totalMemory() used by the in-memory journal log before a snapshot is to be taken.
shard-heartbeat-interval-in-millis             uint16 (100..max) 500     The interval at which a shard will send a heartbeat message to its remote shard.
operation-timeout-in-seconds                   uint16 (5..max)   5       The maximum amount of time for Akka operations (remote or local) to complete before failing.
shard-journal-recovery-log-batch-size          uint32 (1..max)   5000    The maximum number of journal log entries to batch on recovery for a shard before committing to the data store.
shard-transaction-commit-timeout-in-seconds    uint32 (1..max)   30      The maximum amount of time a shard transaction three-phase commit can be idle without receiving the next message before it aborts the transaction.
shard-transaction-commit-queue-capacity        uint32 (1..max)   20000   The maximum allowed capacity for each shard's transaction commit queue.
shard-initialization-timeout-in-seconds        uint32 (1..max)   300     The maximum amount of time to wait for a shard to initialize from persistence on startup before failing an operation (e.g., transaction create or change listener registration).
shard-leader-election-timeout-in-seconds       uint32 (1..max)   30      The maximum amount of time to wait for a shard to elect a leader before failing an operation (e.g., transaction create).
enable-metric-capture                          boolean           false   Enable or disable metric capture.
bounded-mailbox-capacity                       uint32 (1..max)   1000    The maximum queue size that an actor's mailbox can reach.
persistent                                     boolean           true    Enable or disable data persistence.
shard-isolated-leader-check-interval-in-millis uint32 (1..max)   5000    The interval at which the shard leader checks whether a majority of its followers are active; if not, it marks itself as isolated.
============================================== ================= ======= ==============================================================================================================================================================================
These configuration options are set in the
``etc/org.opendaylight.controller.cluster.datastore.cfg`` configuration file.
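For illustration, a fragment of that file might look like the following; the
values shown are the defaults from the table above, so setting them as-is
changes nothing:

```properties
# etc/org.opendaylight.controller.cluster.datastore.cfg
# Example fragment; values shown are the defaults from the table above.
persistent=true
shard-heartbeat-interval-in-millis=500
shard-transaction-commit-timeout-in-seconds=30
shard-snapshot-batch-count=20000
enable-metric-capture=false
```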