-
-For reference, view a sample akka.conf file here: https://gist.github.com/moizr/88f4bd4ac2b03cfa45f0
-
-[start=5]
-.. Run the following commands on each of your cluster’s nodes:
-
-* *JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf*
-
-* *JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf*
-
-* *JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf*
-
-'''
-
-The OpenDaylight controller can now run in a three node cluster. Use any of the three member nodes to access the data residing in the datastore.
-
-Say you want to view information about shard designated as _member-1_ on a node. To do so, query the shard’s data by making the following HTTP request:
-
-'''
-
-*GET http://_<host>_:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore*
-
-NOTE: If prompted, enter _admin_ as both the username and password.
-
-'''
-
-This request should return the following information:
-
- {
- "timestamp": 1410524741,
- "status": 200,
- "request": {
- "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore",
- "type": "read"
- },
- "value": {
- "ReadWriteTransactionCount": 0,
- "LastLogIndex": -1,
- "MaxNotificationMgrListenerQueueSize": 1000,
- "ReadOnlyTransactionCount": 0,
- "LastLogTerm": -1,
- "CommitIndex": -1,
- "CurrentTerm": 1,
- "FailedReadTransactionsCount": 0,
- "Leader": "member-1-shard-inventory-config",
- "ShardName": "member-1-shard-inventory-config",
- "DataStoreExecutorStats": {
- "activeThreadCount": 0,
- "largestQueueSize": 0,
- "currentThreadPoolSize": 1,
- "maxThreadPoolSize": 1,
- "totalTaskCount": 1,
- "largestThreadPoolSize": 1,
- "currentQueueSize": 0,
- "completedTaskCount": 1,
- "rejectedTaskCount": 0,
- "maxQueueSize": 5000
- },
- "FailedTransactionsCount": 0,
- "CommittedTransactionsCount": 0,
- "NotificationMgrExecutorStats": {
- "activeThreadCount": 0,
- "largestQueueSize": 0,
- "currentThreadPoolSize": 0,
- "maxThreadPoolSize": 20,
- "totalTaskCount": 0,
- "largestThreadPoolSize": 0,
- "currentQueueSize": 0,
- "completedTaskCount": 0,
- "rejectedTaskCount": 0,
- "maxQueueSize": 1000
- },
- "LastApplied": -1,
- "AbortTransactionsCount": 0,
- "WriteOnlyTransactionCount": 0,
- "LastCommittedTransactionTime": "1969-12-31 16:00:00.000",
- "RaftState": "Leader",
- "CurrentNotificationMgrListenerQueueStats": []
- }
- }
-
-The key thing here is the name of the shard. Shard names are structured as follows:
-
-_<member-name>_-shard-_<shard-name-as-per-configuration>_-_<store-type>_
-
-Here are a couple sample data short names:
-
-* member-1-shard-topology-config
-* member-2-shard-default-operational
-
-===== Enabling HA on a Multiple Node Cluster
-
-To enable HA in a three node cluster:
-
-'''
-
-. Open the configuration/initial/module-shards.conf file on each cluster node.
-. Add _member-2_ and _member-3_ to the replica list for each data shard.
-. Restart all of the nodes. The nodes should automatically sync up with member-1. After some time, the cluster should be ready for operation.
-
-'''
-
-When HA is enabled, you must have at least three replicas of every shard. Each node’s configuration files should look something like this:
-
- module-shards = [
- {
- name = "default"
- shards = [
- {
- name="default"
- replicas = [
- "member-1",
- "member-2",
- "member-3"
- ]
- }
- ]
- },
- {
- name = "topology"
- shards = [
- {
- name="topology"
- replicas = [
- "member-1",
- "member-2",
- "member-3"
- ]
- }
- ]
- },
- {
- name = "inventory"
- shards = [
- {
- name="inventory"
- replicas = [
- "member-1",
- "member-2",
- "member-3"
- ]
- }
- ]
- },
- {
- name = "toaster"
- shards = [
- {
- name="toaster"
- replicas = [
- "member-1",
- "member-2",
- "member-3"
- ]
- }
- ]
- }
- ]
-
-When HA is enabled on multiple nodes, shards will replicate the data for those nodes. Whenever the lead replica on a data shard is brought down, another replica takes its place. As a result, the cluster should remain available. To determine which replica is acting as the lead on a data shard, make an HTTP request to obtain the information for a data shard on any of the nodes. The resulting information will indicate which replica is acting as the lead.
-
++
+For reference, see the <<_sample_config_files,sample config files>> below.
++
+. Move into the +<karaf-distribution-directory>/bin+ directory.
+. Run the following command:
++
+ JAVA_MAX_MEM=4G JAVA_MAX_PERM_MEM=512m ./karaf
++
+. Enable clustering by running the following command at the Karaf command line:
++
+ feature:install odl-mdsal-clustering
+
+OpenDaylight should now be running in a three-node cluster. You can use any of
+the three member nodes to access the data residing in the datastore.
+
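Shard MBean names follow the pattern `<member-name>-shard-<shard-name-as-per-configuration>-<store-type>`. As a minimal sketch (the port 8181 and the `/jolokia/read/` path mirror the Jolokia request documented above and are assumptions about your deployment), a small helper can assemble a shard's MBean name and the corresponding read URL:

```python
# Sketch: build a shard MBean name and the Jolokia URL used to read its stats.
# The URL layout (port 8181, /jolokia/read/...) is an assumption based on the
# request shown in this document; adjust for your deployment.

def shard_mbean(member: str, shard: str, store: str) -> str:
    """Shard names are <member-name>-shard-<shard-name>-<store-type>."""
    datastore = {
        "config": "DistributedConfigDatastore",
        "operational": "DistributedOperationalDatastore",
    }[store]
    name = f"{member}-shard-{shard}-{store}"
    return (f"org.opendaylight.controller:Category=Shards,"
            f"name={name},type={datastore}")

def jolokia_read_url(host: str, mbean: str) -> str:
    """URL for reading the given MBean over Jolokia (port assumed)."""
    return f"http://{host}:8181/jolokia/read/{mbean}"

if __name__ == "__main__":
    mbean = shard_mbean("member-1", "inventory", "config")
    print(jolokia_read_url("localhost", mbean))
```

For example, `shard_mbean("member-1", "inventory", "config")` reproduces the MBean name used in the sample request above.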
+// This doesn't work at the moment. The install -s command fails.
+//===== Debugging Clustering
+//
+//To debug clustering first install Jolokia by entering the following command:
+//
+// install -s mvn:org.jolokia/jolokia-osgi/1.1.5
+//
+//After that, you can view specific information about the cluster. For example,
+//to view information about the shard designated _member-1_ on a node, query the
+//shard's data by sending the following HTTP request:
+//
+//*GET http://_<host>_:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore*
+//
+//NOTE: If prompted, enter your credentials for OpenDaylight. The default
+// credentials are a username and password of _admin_.
+//
+//This request should return the following information:
+//
+// {
+// "timestamp": 1410524741,
+// "status": 200,
+// "request": {
+// "mbean": "org.opendaylight.controller:Category=Shards,name=member-1-shard-inventory-config,type=DistributedConfigDatastore",
+// "type": "read"
+// },
+// "value": {
+// "ReadWriteTransactionCount": 0,
+// "LastLogIndex": -1,
+// "MaxNotificationMgrListenerQueueSize": 1000,
+// "ReadOnlyTransactionCount": 0,
+// "LastLogTerm": -1,
+// "CommitIndex": -1,
+// "CurrentTerm": 1,
+// "FailedReadTransactionsCount": 0,
+// "Leader": "member-1-shard-inventory-config",
+// "ShardName": "member-1-shard-inventory-config",
+// "DataStoreExecutorStats": {
+// "activeThreadCount": 0,
+// "largestQueueSize": 0,
+// "currentThreadPoolSize": 1,
+// "maxThreadPoolSize": 1,
+// "totalTaskCount": 1,
+// "largestThreadPoolSize": 1,
+// "currentQueueSize": 0,
+// "completedTaskCount": 1,
+// "rejectedTaskCount": 0,
+// "maxQueueSize": 5000
+// },
+// "FailedTransactionsCount": 0,
+// "CommittedTransactionsCount": 0,
+// "NotificationMgrExecutorStats": {
+// "activeThreadCount": 0,
+// "largestQueueSize": 0,
+// "currentThreadPoolSize": 0,
+// "maxThreadPoolSize": 20,
+// "totalTaskCount": 0,
+// "largestThreadPoolSize": 0,
+// "currentQueueSize": 0,
+// "completedTaskCount": 0,
+// "rejectedTaskCount": 0,
+// "maxQueueSize": 1000
+// },
+// "LastApplied": -1,
+// "AbortTransactionsCount": 0,
+// "WriteOnlyTransactionCount": 0,
+// "LastCommittedTransactionTime": "1969-12-31 16:00:00.000",
+// "RaftState": "Leader",
+// "CurrentNotificationMgrListenerQueueStats": []
+// }
+// }
+//
+//The key information is the name of the shard. Shard names are structured as follows:
+//
+//_<member-name>_-shard-_<shard-name-as-per-configuration>_-_<store-type>_
+//
+//Here are a couple of sample shard names:
+//
+//* member-1-shard-topology-config
+//* member-2-shard-default-operational
+
+===== Sample Config Files
+
+.Sample +akka.conf+ file
+----
+odl-cluster-data {
+ bounded-mailbox {
+ mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
+ mailbox-capacity = 1000
+ mailbox-push-timeout-time = 100ms
+ }
+
+ metric-capture-enabled = true
+
+ akka {
+ loglevel = "DEBUG"
+ loggers = ["akka.event.slf4j.Slf4jLogger"]
+
+ actor {
+
+ provider = "akka.cluster.ClusterActorRefProvider"
+ serializers {
+ java = "akka.serialization.JavaSerializer"
+ proto = "akka.remote.serialization.ProtobufSerializer"
+ }
+
+ serialization-bindings {
+ "com.google.protobuf.Message" = proto
+
+ }
+ }
+ remote {
+ log-remote-lifecycle-events = off
+ netty.tcp {
+ hostname = "10.194.189.96"
+ port = 2550
+ maximum-frame-size = 419430400
+ send-buffer-size = 52428800
+ receive-buffer-size = 52428800
+ }
+ }
+
+ cluster {
+ seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.194.189.96:2550"]
+
+ auto-down-unreachable-after = 10s
+
+ roles = [
+ "member-1"
+ ]
+
+ }
+ }
+}
+
+odl-cluster-rpc {
+ bounded-mailbox {
+ mailbox-type = "org.opendaylight.controller.cluster.common.actor.MeteredBoundedMailbox"
+ mailbox-capacity = 1000
+ mailbox-push-timeout-time = 100ms
+ }
+
+ metric-capture-enabled = true
+
+ akka {
+ loglevel = "INFO"
+ loggers = ["akka.event.slf4j.Slf4jLogger"]
+
+ actor {
+ provider = "akka.cluster.ClusterActorRefProvider"
+
+ }
+ remote {
+ log-remote-lifecycle-events = off
+ netty.tcp {
+ hostname = "10.194.189.96"
+ port = 2551
+ }
+ }
+
+ cluster {
+ seed-nodes = ["akka.tcp://opendaylight-cluster-rpc@10.194.189.96:2551"]
+
+ auto-down-unreachable-after = 10s
+ }
+ }
+}
+----
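The sample above is member-1's file. On each of the other nodes, the `hostname` should be that node's own IP address and `roles` should name that member, while `seed-nodes` stays the same on every node. As a rough sketch of the differences for member-2 (the IP address here is a placeholder):

```hocon
odl-cluster-data {
  akka {
    remote.netty.tcp.hostname = "10.194.189.97"   # this node's own IP (placeholder)
    cluster {
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.194.189.96:2550"]
      roles = [ "member-2" ]
    }
  }
}
```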
+
+.Sample +module-shards.conf+ file
+----
+module-shards = [
+ {
+ name = "default"
+ shards = [
+ {
+ name="default"
+ replicas = [
+ "member-1",
+ "member-2",
+ "member-3"
+ ]
+ }
+ ]
+ },
+ {
+ name = "topology"
+ shards = [
+ {
+ name="topology"
+ replicas = [
+ "member-1",
+ "member-2",
+ "member-3"
+ ]
+ }
+ ]
+ },
+ {
+ name = "inventory"
+ shards = [
+ {
+ name="inventory"
+ replicas = [
+ "member-1",
+ "member-2",
+ "member-3"
+ ]
+ }
+ ]
+ },
+ {
+ name = "toaster"
+ shards = [
+ {
+ name="toaster"
+ replicas = [
+ "member-1",
+ "member-2",
+ "member-3"
+ ]
+ }
+ ]
+ }
+]
+----
\ No newline at end of file