= Cluster Wide Services
The existing OpenDaylight service deployment model assumes symmetric
clusters, where all services are activated on all nodes in the cluster.
However, many services require a single active service instance per
cluster. Examples are global MD-SAL RPC services, services that use
centralized data processing, or the OpenFlow Topology Manager, which
needs to interact with all OF switches connected to a clustered
controller and determine how the switches are interconnected.
We call such services 'singleton services'.
A developer of a singleton service must create logic that determines
the active service instance, manages service failovers, and ensures that
a service instance always runs in the surviving partition of a cluster.
This logic would have to interact with the Entity Ownership Service (EOS),
and it is not easy to get right. Leaving it to individual services
would mean that each service would design and implement essentially the
same functionality, each with its own behavior, engineering effort and issues.
== General Cluster Singleton Service Approach
The main idea is to represent each service as a single cluster-wide
service instance. The Entity Ownership Service (EOS) provides the basic
leadership election for an Entity instance, so we can delegate the
candidate election to the EOS. Every Cluster Singleton service *type*
must have its own Entity, and every Cluster Singleton service *instance*
must have its own Entity Candidate. Every registered Entity Candidate
should be notified about its actual role in the cluster.
To ensure that there is only one active (i.e. fully instantiated) service
instance in the cluster at any given time, we use the "double-candidate"
approach: a service instance maintains not only a candidate registration
for the ownership of the service's Entity in the cluster, but also an
additional (guard) ownership registration that ensures a full shutdown of
the service instance before the overall ownership of the service is
relinquished. To achieve the overall ownership of a singleton service,
a service candidate must hold ownership of both these entities (see the
sequence diagram below).
.Double Candidate Solution (Async. Close Guard)
include::01_doubleCandidateSimpleSequence.plantuml[]
The double-candidate approach prevents the shutdown of a service with
outstanding asynchronous operations, such as unfinished MD-SAL Data
Store transactions. The **main entity** candidate reflects the
actual role of the service in the cluster; the **close guard entity**
candidate is a guard that tracks the outstanding asynchronous operations.
Every new Leader must register its own **close guard entity** candidate.
A Leader that wishes to relinquish its leadership must close its
**close guard entity** candidate. This is typically done in the last
step of the shutdown procedure. When the old Leader relinquishes its
**close guard entity** ownership, the new Leader takes the leadership
for the **close guard entity** candidate (it has to hold ownership of
both candidates). That is the signal for the new Leader to fully start
its application instance, and the old Leader has then stopped
successfully. Figure 1 shows the entire sequence.
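The handover above can be modeled in a few lines. The following is a minimal, self-contained sketch of the async close-guard logic; the class and method names are hypothetical illustrations, not the real EOS API:

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical model of the async close-guard handover: the new Leader
// already owns the main entity, but may only start fully after the old
// Leader has closed its close-guard candidate.
class CloseGuardHandover {
    // Completed when the old Leader closes its close-guard registration.
    private final CompletableFuture<Void> oldGuardClosed = new CompletableFuture<>();
    private volatile boolean newLeaderStarted = false;

    // Old Leader: finish outstanding async operations, then release the guard.
    void oldLeaderShutdown() {
        // ... flush outstanding MD-SAL transactions here ...
        oldGuardClosed.complete(null);
    }

    // New Leader: chain its full start on the guard release.
    CompletableFuture<Void> newLeaderStart() {
        return oldGuardClosed.thenRun(() -> newLeaderStarted = true);
    }

    boolean isNewLeaderStarted() {
        return newLeaderStarted;
    }
}
```

The key point the sketch captures is ordering: `newLeaderStart()` never completes before `oldLeaderShutdown()` has released the guard.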
IMPORTANT: A prerequisite of the double-candidate approach (async. close guard) is that the actual ownership does not change upon a new candidate registration.
=== Cluster Singleton Service
The double-candidate solution is relevant for all services, and we don't want to implement the same code for every instance. We can therefore hide the whole EOS interaction from the user and encapsulate it in an "ODL Cluster Singleton Service Provider" parent.
.Class Diagram Cluster Singleton Service
include::02_classClusterSingletonService.plantuml[]
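To make the contract concrete, here is a dependency-free sketch of the service interface. The method names match the samples later in this document; note that the real ODL interface returns Guava's `ListenableFuture` from `closeServiceInstance()`, which this sketch substitutes with `java.util.concurrent.CompletableFuture`:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the ClusterSingletonService contract (CompletableFuture is
// substituted for Guava's ListenableFuture to keep the sketch self-contained).
interface SingletonServiceSketch {
    void instantiateServiceInstance();              // invoked on the elected leader only
    CompletableFuture<Void> closeServiceInstance(); // async shutdown, may finish transactions
    String getServiceGroupIdentifier();             // services with the same id form one group
}

// Trivial implementation used to illustrate the lifecycle.
class EchoService implements SingletonServiceSketch {
    boolean active;

    @Override
    public void instantiateServiceInstance() {
        active = true;
    }

    @Override
    public CompletableFuture<Void> closeServiceInstance() {
        active = false;
        return CompletableFuture.completedFuture(null);
    }

    @Override
    public String getServiceGroupIdentifier() {
        return "sample-service-group";
    }
}
```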
=== Cluster Singleton Service Grouping
Sometimes we wish to have a couple of services run on the same Cluster Node. The double-candidate EOS interaction can therefore be realized for a list of ClusterSingletonService instances.
.Class Diagram Cluster Singleton Service Group
include::03_classClusterSingletonServiceGroup.plantuml[]
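The grouping behavior can be sketched as follows; all names here are hypothetical illustrations, not the real ODL classes. A group activates every member service when its node gains ownership and closes them all when ownership is lost, so services sharing one group identifier always run together on the same node:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a singleton service group.
class ServiceGroupSketch {
    interface Member {
        void start();
        void stop();
    }

    private final String identifier;
    private final List<Member> members = new ArrayList<>();
    private boolean owner;

    ServiceGroupSketch(String identifier) {
        this.identifier = identifier;
    }

    void register(Member m) {
        members.add(m);
        if (owner) {
            m.start(); // late registration on the currently active node
        }
    }

    // Callback a node would receive from the ownership layer.
    void ownershipChanged(boolean isOwner) {
        if (isOwner == owner) {
            return;
        }
        owner = isOwner;
        for (Member m : members) {
            if (isOwner) {
                m.start();
            } else {
                m.stop();
            }
        }
    }

    String getIdentifier() {
        return identifier;
    }
}
```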
=== Cluster Singleton Service Provider
The provider implementation is realized as a stand-alone service which has to be instantiated on every ClusterNode and must be available to every dependent application. The class diagram looks as follows.
.Class Diagram Cluster Singleton Service Provider
include::04_classClusterSingletonServiceProvider.plantuml[]
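A minimal sketch of the provider's registration bookkeeping, under the assumption (mirrored from the samples below) that registration returns a closeable handle like `ClusterSingletonServiceRegistration`. The class and helper names are hypothetical:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical provider sketch: one instance per cluster node; services
// are bucketed into groups by their group identifier, and closing the
// returned registration handle deregisters the service.
class ProviderSketch {
    interface Service {
        String getServiceGroupIdentifier();
    }

    private final Map<String, Set<Service>> groups = new HashMap<>();

    AutoCloseable registerClusterSingletonService(Service s) {
        String id = s.getServiceGroupIdentifier();
        groups.computeIfAbsent(id, k -> new HashSet<>()).add(s);
        return () -> groups.get(id).remove(s); // close() deregisters the service
    }

    int groupSize(String id) {
        return groups.getOrDefault(id, new HashSet<>()).size();
    }
}
```

Bucketing by identifier is what lets several services share one double-candidate EOS interaction, as described in the grouping section above.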
=== Cluster Singleton Service RPC implementation sample
We'd like to show a sample of grouped RPC services. The RPC services don't need to be part of the same project.
[source,java]
----
public class SampleClusterSingletonServiceRPC_1 implements ClusterSingletonService, AutoCloseable {

    /* Property contains an entity name guard for all instances of this group of services */
    private static final String CLUSTER_SERVICE_GROUP_IDENTIFIER = "sample-service-group";

    private ClusterSingletonServiceRegistration registration;

    public SampleClusterSingletonServiceRPC_1(final ClusterSingletonServiceProvider provider) {
        Preconditions.checkArgument(provider != null);
        this.registration = provider.registerClusterSingletonService(this);
    }

    @Override
    public void instantiateServiceInstance() {
        // TODO : implement start service functionality
    }

    @Override
    public ListenableFuture<Void> closeServiceInstance() {
        // TODO : implement sync. or async. stop service functionality
        return Futures.immediateFuture(null);
    }

    @Override
    public String getServiceGroupIdentifier() {
        return CLUSTER_SERVICE_GROUP_IDENTIFIER;
    }

    @Override
    public void close() throws Exception {
        if (registration != null) {
            registration.close();
        }
    }
}
----

[source,java]
----
public class SampleClusterSingletonServiceRPC_2 implements ClusterSingletonService, AutoCloseable {

    /* Property contains an entity name guard for all instances of this group of services */
    private static final String CLUSTER_SERVICE_GROUP_IDENTIFIER = "sample-service-group";

    private ClusterSingletonServiceRegistration registration;

    public SampleClusterSingletonServiceRPC_2(final ClusterSingletonServiceProvider provider) {
        Preconditions.checkArgument(provider != null);
        this.registration = provider.registerClusterSingletonService(this);
    }

    @Override
    public void instantiateServiceInstance() {
        // TODO : implement start service functionality
    }

    @Override
    public ListenableFuture<Void> closeServiceInstance() {
        // TODO : implement sync. or async. stop service functionality
        return Futures.immediateFuture(null);
    }

    @Override
    public String getServiceGroupIdentifier() {
        return CLUSTER_SERVICE_GROUP_IDENTIFIER;
    }

    @Override
    public void close() throws Exception {
        if (registration != null) {
            registration.close();
        }
    }
}
----
Both RPC services are registered on every ClusterNode, but each RPC has only one active instance in the whole Cluster.
=== Cluster Singleton Application
An OSGi module application can be understood as a service too, so we focus on the OSGi container as an application loader. Every OSGi application has its own lifecycle, which should be adapted to use the EOS so that only the master node loads fully. We wish to encapsulate the EOS interaction in an ODL application Loader.
.Life cycle of plug-ins in OSGi
include::05_pluginOsgiLifeCycle.plantuml[]
==== Application Module instantiation
Every "ODL app." has a Provider class which is instantiated in the __AbstractModule<ODL app>__ class. The Module has a method __createInstance()__ which starts the application Provider. The application Provider therefore has to implement the __ClusterSingletonService__ interface, and the application Provider initialization (or constructor) has to register itself with the ClusterSingletonServiceProvider. The application Provider body will be initialized only on the ClusterNode elected as leader.
.Base Cluster-wide app instantiation
include::06_baseAppSingleInstance.plantuml[]
So we are able to hide the whole EOS interaction from the user and encapsulate it inside the "ClusterSingletonServiceProvider". An application/service only needs to implement the relevant interface and register itself with the provider.
A simplified sequence diagram (without the double candidate) is shown in the next figure:
.Simple Cluster-wide app instantiation (without double candidate)
include::07_processAppSingleInstSimply.plantuml[]
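The simplified flow above boils down to reacting to ownership notifications: instantiate the application on the node that becomes owner, close it on the node that loses ownership. A minimal sketch, with illustrative names rather than the real ODL API:

```java
// Hypothetical sketch of the simplified (single-candidate) flow.
class SimpleSingletonSketch {
    interface App {
        void instantiate();
        void close();
    }

    private final App app;
    private boolean active;

    SimpleSingletonSketch(App app) {
        this.app = app;
    }

    // Ownership-change callback a node would receive from the EOS.
    void ownershipChanged(boolean isOwner) {
        if (isOwner && !active) {
            app.instantiate();
            active = true;
        } else if (!isOwner && active) {
            app.close();
            active = false;
        }
    }

    boolean isActive() {
        return active;
    }
}
```

The double-candidate variant differs only in that `app.instantiate()` is deferred until the close-guard entity has also been acquired.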
The full sequence implementation diagram for __AbstractClusterProjectProvider__ is shown in the next figure:
.Cluster-wide app instantiation
include::08_processAppSingleInst.plantuml[]
[source,java]
----
public class ClusterSingletonProjectSample implements ClusterSingletonService, AutoCloseable {

    /* Property contains an entity name guard for all instances of this group of services */
    private static final String CLUSTER_SERVICE_GROUP_IDENTIFIER = "sample-service-group";

    private ClusterSingletonServiceRegistration registration;

    public ClusterSingletonProjectSample(final ClusterSingletonServiceProvider provider) {
        Preconditions.checkArgument(provider != null);
        this.registration = provider.registerClusterSingletonService(this);
    }

    @Override
    public void instantiateServiceInstance() {
        // TODO : implement start project functionality
    }

    @Override
    public ListenableFuture<Void> closeServiceInstance() {
        // TODO : implement sync. or async. stop project functionality
        return Futures.immediateFuture(null);
    }

    @Override
    public String getServiceGroupIdentifier() {
        return CLUSTER_SERVICE_GROUP_IDENTIFIER;
    }

    @Override
    public void close() throws Exception {
        if (registration != null) {
            registration.close();
        }
    }
}
----

[source,java]
----
public class ApplicationModule extends ProjectAbstractModule {

    @Override
    public java.lang.AutoCloseable createInstance() {
        AbstractServiceProvider projectProvider =
                new ClusterSingletonProjectSample(getClusterSingletonServiceProviderDependency());
        return projectProvider;
    }
}
----