module odl-mdsal-lowlevel-control {
namespace "tag:opendaylight.org,2017:controller:yang:lowlevel:control";
import odl-mdsal-lowlevel-common {
revision-date "2017-02-15";
organization "OpenDaylight";
contact "Vratko Polak <vrpolak@cisco.com>";
description "Control RPCs used to dynamically register, unregister, start or stop
the operations under test, which are defined in odl-mdsal-lowlevel-target (llt).
Control RPCs are backed by an implementation upon feature installation.
Their registration shall only affect the local member,
but their invocation can interact with Entity Ownership or Singleton.
The 'mdsal' in the module name refers to the component which defines most APIs
accessed by the agent implementation. The intent is to test clustering behavior,
but most RPCs do not access APIs from the clustering component of the Controller project.
TODO: Unify grammar: present or future tense, or imperative mood.";
revision "2017-02-15" {
description "Initial revision for Carbon clustering testing.";
rpc register-constant {
description "Upon receiving this, the member has to create an llt:get-constant
implementation (global RPC). If the registration fails for any reason,
propagate the corresponding error.";
uses llc:constant-grouping;
rpc unregister-constant {
description "Upon receiving this, the member has to unregister
any llt:get-constant implementations it has registered.
If no implementation has been registered, do nothing.";
rpc register-bound-constant {
description "Upon receiving this, the member has to create and register
a bound llt:get-contexted-constant implementation (routed RPC).
If the registration fails for any reason, propagate the corresponding error.";
uses llc:context-grouping;
uses llc:constant-grouping;
rpc unregister-bound-constant {
description "Upon receiving this, the member has to unregister
any llt:get-contexted-constant implementations bound to the context.
If no bound implementation for the context has been registered, do nothing.";
uses llc:context-grouping;
rpc register-singleton-constant {
description "Upon receiving this, the member checks whether it has already registered
a singleton application, and fails if yes. If no, the member creates
an application implementation based on the given constant
and registers the implementation as a singleton application.
If the registration fails for any reason, propagate the corresponding error.
If the application is instantiated, it creates and registers
an llt:get-singleton-constant implementation, which returns the given constant.
When the application instance is closed, it unregisters that
llt:get-singleton-constant implementation.";
uses llc:constant-grouping;
rpc unregister-singleton-constant {
description "Upon receiving this, the member checks whether it has currently registered
a singleton application, and fails if no. If yes, the member shall unregister
the application, presumably causing application instantiation on another member
and closing of the local application instance (unregistering llt:get-singleton-constant).
If the unregistration fails for any reason, propagate the corresponding error.";
rpc register-flapping-singleton {
description "Upon receiving this, the member checks whether it has already created
a 'flapping' application implementation with the 'active' flag set, and fails if yes.
If no, the member creates a flapping application implementation,
sets the active flag, initializes a local variable flap-count to 0,
and registers the implementation as a singleton application.
If the registration fails for any reason, propagate the corresponding error.
If the application is instantiated, it immediately un-registers itself.
When the application instance is closed, it increments flap-count
and, if the active flag is set, re-registers the application implementation as a singleton.
If either un-registration or re-registration fails, the 'active' flag is unset
and flap-count is set to the negative of its previous value (minus one in the case of un-registration)
to signal that a failure has happened.";
rpc unregister-flapping-singleton {
description "Upon receiving this, the member checks whether it has created
a flapping application, and fails if no. If yes, the member shall
set the active flag to false and return the current flap-count value.";
description "Number of successful re-registrations. If negative,
the (negated) cycle number at which a failure occurred.";
rpc start-publish-notifications {
description "Upon receiving this, the member checks whether it is already in the middle of publishing
for this id, and fails if yes. If no, the member shall clear any state tracking data possibly present
from a previous call with this id, and start publishing llt:id-sequence
notifications with the given id and sequence numbers increasing from 1.
The RPC shall return immediately, before the first notification is published.
The publishing task stops on the first error or after the given time.";
uses llc:id-grouping;
description "This RPC has to work (roughly) this long.";
leaf notifications-per-second {
description "An upper limit of publishes per second this RPC shall try to achieve.";
rpc check-publish-notifications {
description "Upon receiving this, the member shall immediately return
the current tracking data related to the current (or previous) task
started by start-publish-notifications with this id.";
uses llc:id-grouping;
description "True if a publishing task for this id is running, false otherwise.";
description "How many notifications were published for this id since the last start.
If there was no start-publish-notifications call for this id, this leaf is absent.";
description "If no task has been started by start-publish-notifications for this id,
or if the last such call has not encountered an error, this leaf is absent.
Otherwise it contains a string message from the last error, including a stacktrace if possible.";
description "Upon receiving this, the member checks whether it has already subscribed
a yang listener for the given id, and fails if yes.
If no, the member subscribes a Yang notification listener to listen for
llt:id-sequence notifications. The member also creates a local variable
(called local-number) for the sequence number and initializes it to 0.
Also, three local counters are initialized to 0: all-not, id-not, err-not.
Upon receiving any id-sequence notification, all-not is incremented.
Each id-sequence notification of matching id shall increment id-not.
If local-number was one less than the sequence number (from a notification matching the id),
increment local-number, else increment err-not.";
uses llc:id-grouping;
rpc unsubscribe-ynl {
description "Upon receiving this, the member checks whether it has currently subscribed
a yang listener for the given id, and fails if no. If yes, the member
shall unsubscribe the listener and return the values of the local variables.";
uses llc:id-grouping;
description "Number of received id-sequence notifications of any id.";
description "Number of received id-sequence notifications of matching id
and any sequence number.";
description "Number of received id-sequence notifications of matching id,
but out-of-order sequence number.";
description "Value of local-number; it should be equal to
the sequence number of the last compatible id-sequence notification received.";
grouping transactions-params {
description "This RPC has to work (roughly) this long.";
leaf transactions-per-second {
description "An upper limit of transactions per second this RPC shall try to achieve.";
grouping transactions-result {
description "Number of all transactions executed.";
description "Number of transactions that inserted data.";
description "Number of transactions that deleted data.";
rpc write-transactions {
description "Upon receiving this, the member shall make sure the outer list item
of llt:id-ints exists for the given id, and then start creating (one by one)
and submitting transactions to randomly add or delete items on the inner list for that id.
The randomness should avoid creating conflicting writes (at least for non-chained
transactions). The recommended way is to require that the random number's
low significant bits differ from those of the past ~100k numbers.
To ensure a balanced number of deletes, the first write can create
a random set of numbers. Other writes shall be one per number.
The writes shall use the old API: transactions (or chains) created directly on the datastore
(as opposed to DOMDataTreeProducer).
A .get with a timeout on the currently earliest non-complete Future (from .submit)
shall be used as the primary wait method to throttle the submission rate.
This RPC shall not return until all transactions are confirmed successful,
or an exception is raised (the exception should propagate to the restconf response).
OptimisticLockException is always considered an error.";
uses llc:id-grouping;
uses transactions-params;
leaf chained-transactions {
description "If true, write transactions shall be created on a transaction chain
(created at the start of the RPC call and deleted at its end).
If false, write transactions shall be created separately.";
uses transactions-result;
rpc produce-transactions {
description "Upon receiving this, the member shall make sure the outer list item
of llt:in-ints exists for the given id, make sure a shard for
the whole (config) id-ints is created (by creating and closing a producer
for the whole id-ints), and create a DOMDataTreeProducer for that item (using that shard).
FIXME: Is the above the normal way of creating prefix-based shards?
Then start creating (one by one) and submitting transactions
to randomly add or delete items on the inner list for that id.
To ensure a balanced number of deletes, the first write can create
a random set of random numbers. Other writes shall be one per number.
The writes shall use the DOMDataTreeProducer API, as opposed to transactions (or chains)
created directly on the datastore.
A .get with a timeout on the currently earliest non-complete Future (from .submit)
shall be used as the primary wait method to throttle the submission rate.
This RPC shall not return until all transactions are confirmed successful,
or an exception is raised (the exception should propagate to the restconf response).
OptimisticLockException is always considered an error.
In either case, the producer should be closed before returning,
but the shard and the whole id item shall be kept as they are.";
uses llc:id-grouping;
uses transactions-params;
leaf isolated-transactions {
description "The value for the DOMDataTreeProducer#createTransaction argument.";
uses transactions-result;
rpc create-prefix-shard {
description "Upon receiving this, the member creates a prefix shard at the instance-identifier, with replicas
on the required members.";
type instance-identifier;
rpc remove-prefix-shard {
description "Upon receiving this, the member removes the prefix-based shard identified by this prefix.
This must be called from the same node that created the shard.";
type instance-identifier;
rpc become-prefix-leader {
description "Upon receiving this, the member shall ask the appropriate API
to become Leader of the given shard (presumably the llt:list-ints one,
created by produce-transactions) and return immediately.";
type instance-identifier;
rpc remove-shard-replica {
description "A specialised copy of cluster-admin:remove-shard-replica.
FIXME: Is this really needed for prefix shards, or even module shards
(or is the cluster-admin RPC sufficient)?";
description "The name of the config shard for which
to remove the replica on the current member.";
rpc add-shard-replica {
description "A specialised copy of cluster-admin:add-shard-replica.
FIXME: Is this really needed for prefix shards, or even module shards
(or is the cluster-admin RPC sufficient)?";
description "The name of the config shard for which
to add the replica on the current member.";
rpc is-client-aborted {
description "Return state of cds-access-client.
FIXME: Is an input needed?";
description "True if the local client is aborted (or unreachable), false otherwise.";
description "Upon receiving this, the member checks whether it has already subscribed,
and fails if yes. If no, the member subscribes a Data Tree Change Listener
to listen for changes on the whole llt:id-ints. The first notification received is stored immediately.
For every notification received after the first one, its data (getDataBefore()) is compared with the
last stored notification (called the local copy); if they match, the local copy is overwritten with
the notification's data (getDataAfter()). If they do not match, the new notification is ignored.";
rpc unsubscribe-dtcl {
description "Upon receiving this, the member checks whether it has currently subscribed
a Data Tree Change Listener for llt:id-ints changes, and fails if no. If yes, the member
shall unsubscribe the listener, read the state of id-ints, compare that
to the local copy, and return whether the local copy is the same.";
description "True if and only if the read id-ints is equal to the local copy.";
description "Upon receiving this, the member checks whether it has already subscribed,
and fails if yes. If no, the member subscribes a DOMDataTreeListener
to listen for changes on the whole llt:id-ints, and stores
the state from the initial notification to a local variable (called the local copy).
Each Data Tree Change from further notifications shall be applied
to the local copy if it is compatible
(the old state from the notification is equal to the local copy state).
If a notification is not compatible, it shall be ignored.";
rpc unsubscribe-ddtl {
description "Upon receiving this, the member checks whether it has currently subscribed
a DOMDataTreeListener for llt:id-ints changes, and fails if no. If yes, the member
shall unsubscribe the listener, read the state of id-ints (by briefly subscribing
and unsubscribing again), compare that to the local copy,
and return whether the local copy is the same.";
description "True if and only if the read id-ints is equal to the local copy.";
// The following calls are not required for Carbon testing.
rpc deconfigure-id-ints-shard {
description "Upon receiving this, the member shall ask the appropriate API
to remove the llt:id-ints shard (presumably created by produce-transactions)
and return immediately.
It is expected the data would move to the root prefix shard seamlessly.
TODO: Make shard name configurable by input?";
rpc register-default-constant {
description "Upon receiving this, the member has to create and register
a default llt:get-contexted-constant implementation (routed RPC).
If the registration fails for any reason, propagate the corresponding error.";
uses llc:constant-grouping;
rpc unregister-default-constant {
description "Upon receiving this, the member has to unregister
any llt:get-contexted-constant default implementations it has registered.
If no default implementation has been registered, do nothing.";
rpc shutdown-shard-replica {
description "Upon receiving this, the member will try to gracefully shut down the local configuration
data store module-based shard replica.";
description "The name of the configuration data store module-based shard to be shut down
rpc shutdown-prefix-shard-replica {
description "Upon receiving this, the member will try to gracefully shut down the local configuration
data store prefix-based shard replica.";
description "The prefix of the configuration data store prefix-based shard to be shut down
type instance-identifier;