module odl-mdsal-lowlevel-control {
    namespace "tag:opendaylight.org,2017:controller:yang:lowlevel:control";
    import odl-mdsal-lowlevel-common {
        revision-date "2017-02-15";
    organization "OpenDaylight";
    contact "Vratko Polak <vrpolak@cisco.com>";
    description "Control RPCs used to dynamically register, unregister, start or stop
        the operations under test, which are defined in odl-mdsal-lowlevel-target (llt).
        Control RPCs are backed by an implementation upon feature installation.
        Their registration shall only affect the local member,
        but their invocation can interact with Entity Ownership or Singleton.
        The 'mdsal' in the module name refers to the component which defines most APIs
        accessed by the agent implementation. The intent is to test clustering behavior,
        but most RPCs do not access APIs from the clustering component of the Controller project.
        TODO: Unify grammar: present or future tense, or imperative mood.";
    revision "2017-02-15" {
        description "Initial revision for Carbon clustering testing.";
    rpc register-constant {
        description "Upon receiving this, the member has to create an llt:get-constant
            implementation (global RPC). If the registration fails for any reason,
            propagate the corresponding error.";
        uses llc:constant-grouping;
    rpc unregister-constant {
        description "Upon receiving this, the member has to unregister
            any llt:get-constant implementations it has registered.
            If no implementation has been registered, do nothing.";
    rpc register-bound-constant {
        description "Upon receiving this, the member has to create and register
            a bound llt:get-contexted-constant implementation (routed RPC).
            If the registration fails for any reason, propagate the corresponding error.";
        uses llc:context-grouping;
        uses llc:constant-grouping;
    rpc unregister-bound-constant {
        description "Upon receiving this, the member has to unregister
            any llt:get-contexted-constant implementations bound to the context.
            If no bound implementation for the context has been registered, do nothing.";
        uses llc:context-grouping;
    rpc register-singleton-constant {
        description "Upon receiving this, the member checks whether it has already registered
            a singleton application, and fails if yes. If no, the member creates
            an application implementation based on the given constant
            and registers the implementation as a singleton application.
            If the registration fails for any reason, propagate the corresponding error.
            If the application is instantiated, it creates and registers
            a llt:get-singleton-constant implementation, which returns the given constant.
            When the application instance is closed, it unregisters that
            llt:get-singleton-constant implementation.";
        uses llc:constant-grouping;
    rpc unregister-singleton-constant {
        description "Upon receiving this, the member checks whether it has currently registered
            a singleton application, and fails if no. If yes, the member shall unregister
            the application, presumably causing application instantiation on another member,
            and closing of the local application instance (unregistering llt:get-singleton-constant).
            If the unregistration fails for any reason, propagate the corresponding error.";
    rpc register-flapping-singleton {
        description "Upon receiving this, the member checks whether it has already created
            a 'flapping' application implementation and the 'active' flag is set, and fails if yes.
            If no, the member creates a flapping application implementation,
            sets the active flag, initializes a local variable flap-count to 0,
            and registers the implementation as a singleton application.
            If the registration fails for any reason, propagate the corresponding error.
            If the application is instantiated, it immediately un-registers itself.
            When the application instance is closed, it increments flap-count
            and, if the active flag is set, re-registers the application implementation as a singleton.
            If either un-registration or re-registration fails, the 'active' flag is unset
            and flap-count is set to the negative of its previous value (minus one in the case
            of un-registration) to signal that a failure has happened.";
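        // Worked example of the flap-count sign convention (illustrative only,
        // not part of the contract): after 5 successful re-registrations,
        // flap-count is 5. If a re-registration then fails, flap-count becomes -5;
        // if an un-registration fails instead, flap-count becomes -6
        // (the negative of the previous value, minus one).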
    rpc unregister-flapping-singleton {
        description "Upon receiving this, the member checks whether it has created
            a flapping application, and fails if no. If yes, the member shall
            set the active flag to false and return the current flap-count value.";
            description "Number of successful re-registrations. If negative,
                (minus) the cycle number when a failure occurred.";
    rpc start-publish-notifications {
        description "Upon receiving this, the member checks whether it is already in the middle
            of publishing for this id, and fails if yes. If no, the member shall clear any state
            tracking data possibly present from the previous call with this id, and start publishing
            llt:id-sequence notifications with the given id and sequence numbers increasing from 1.
            The RPC shall return immediately, before the first notification is published.
            The publishing task stops on the first error or after the given time.";
        uses llc:id-grouping;
            description "This RPC has to work (roughly) this long.";
        leaf notifications-per-second {
            description "An upper limit on notifications published per second that this RPC shall try to achieve.";
    rpc check-publish-notifications {
        description "Upon receiving this, the member shall immediately return
            the current tracking data related to the current (or previous) task
            started by start-publish-notifications with this id.";
        uses llc:id-grouping;
            description "True if a publishing task for this id is running, false otherwise.";
            description "How many notifications were published for this id since the last start.
                If there was no start-publish-notifications call for this id, this leaf is absent.";
            description "If no task has been started by start-publish-notifications for this id,
                or if the last such call has not encountered an error, this leaf is absent.
                Otherwise it contains a string message from the last error, including a stacktrace if possible.";
        description "Upon receiving this, the member checks whether it has already subscribed
            a YANG listener for the given id, and fails if yes.
            If no, the member subscribes a YANG notification listener to listen for
            llt:id-sequence notifications. The member also creates a local variable
            (called local-number) for the sequence number and initializes it to 0.
            Three local counters are also initialized to 0: all-not, id-not, err-not.
            Upon receiving any id-sequence notification, all-not is incremented.
            Each id-sequence notification of matching id shall increment id-not.
            If local-number was one less than the sequence number (from a notification matching id),
            increment local-number, else increment err-not.";
        uses llc:id-grouping;
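        // Illustrative counter trace (an assumed stream: matching-id notifications
        // with sequence numbers 1, 2, 4, 3, then one notification of a different id):
        //   seq 1:    all-not=1, id-not=1, local-number=1, err-not=0
        //   seq 2:    all-not=2, id-not=2, local-number=2, err-not=0
        //   seq 4:    all-not=3, id-not=3, local-number=2, err-not=1  (4 != 2+1)
        //   seq 3:    all-not=4, id-not=4, local-number=3, err-not=1  (3 == 2+1)
        //   other id: all-not=5, id-not=4, local-number=3, err-not=1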
    rpc unsubscribe-ynl {
        description "Upon receiving this, the member checks whether it has currently subscribed
            a YANG listener for the given id, and fails if no. If yes, the member
            shall unsubscribe the listener and return the values of the local variables.";
        uses llc:id-grouping;
            description "Number of received id-sequence notifications of any id.";
            description "Number of received id-sequence notifications of matching id
                and any sequence number.";
            description "Number of received id-sequence notifications of matching id,
                but out-of-order sequence number.";
            description "Value of local-number; it should be equal to
                the sequence number of the last compatible id-sequence notification received.";
    rpc write-transactions {
        description "Upon receiving this, the member shall make sure the outer list item
            of llt:id-ints exists for the given id, and then start creating (one by one)
            and submitting transactions to randomly add or delete items on the inner list for that id.
            The randomness should avoid creating conflicting writes (at least for non-chained
            transactions). The recommended way is to require that the random number's
            low significant bits differ from those of the past ~100k numbers.
            To ensure a balanced number of deletes, the first write can create
            a random set of numbers. Other writes shall be one per number.
            The writes shall use the old API, transaction (chains) created directly on the datastore
            (as opposed to DOMDataTreeProducer).
            .get with a timeout on the currently earliest non-complete Future (from .submit)
            shall be used as the primary wait method to throttle the submission rate.
            This RPC shall not return until all transactions are confirmed successful,
            or an exception is raised (the exception should propagate to the restconf response).
            OptimisticLockException is always considered an error.";
        uses llc:id-grouping;
            description "This RPC has to work (roughly) this long.";
        leaf transactions-per-second {
            description "An upper limit on transactions per second that this RPC shall try to achieve.";
        leaf chained-transactions {
            description "If true, write transactions shall be created on a transaction chain
                (created at the start of the RPC call, and deleted at its end).
                If false, write transactions shall be created separately.";
            description "Number of all transactions executed.";
            description "Number of transactions that inserted data.";
            description "Number of transactions that deleted data.";
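    // Sketch of the recommended submission/throttling loop (Java-like pseudocode;
    // names such as allocateUniqueLowBits, newTransaction and maxInFlight are
    // illustrative assumptions, not actual MD-SAL API):
    //   Deque<ListenableFuture<Void>> inFlight = new ArrayDeque<>();
    //   while (elapsed() < duration) {
    //       long value = allocateUniqueLowBits();   // low bits differ from past ~100k values
    //       WriteTransaction tx = newTransaction(); // chained or standalone
    //       writeOrDelete(tx, value);               // random add or delete on the inner list
    //       inFlight.add(tx.submit());
    //       if (inFlight.size() > maxInFlight) {
    //           inFlight.poll().get(timeout, MILLISECONDS); // throttle on earliest Future
    //       }
    //   }
    //   for (ListenableFuture<Void> f : inFlight) {
    //       f.get(timeout, MILLISECONDS);           // confirm all transactions succeeded
    //   }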
    rpc produce-transactions {
        description "Upon receiving this, the member shall make sure the outer list item
            of llt:id-ints exists for the given id, make sure a shard for
            the whole (config) id-ints is created (by creating and closing a producer
            for the whole id-ints), and create a DOMDataTreeProducer for that item (using that shard).
            FIXME: Is the above the normal way of creating prefix-based shards?
            Then start creating (one by one) and submitting transactions
            to randomly add or delete items on the inner list for that id.
            To ensure a balanced number of deletes, the first write can create
            a random set of numbers. Other writes shall be one per number.
            The writes shall use the DOMDataTreeProducer API, as opposed to transaction (chains)
            created directly on the datastore.
            .get with a timeout on the currently earliest non-complete Future (from .submit)
            shall be used as the primary wait method to throttle the submission rate.
            This RPC shall not return until all transactions are confirmed successful,
            or an exception is raised (the exception should propagate to the restconf response).
            OptimisticLockException is always considered an error.
            In either case, the producer should be closed before returning,
            but the shard and the whole id item shall be kept as they are.";
        uses llc:id-grouping;
            description "This RPC has to work (roughly) this long.";
        leaf transactions-per-second {
            description "An upper limit on transactions per second that this RPC shall try to achieve.";
        leaf isolated-transactions {
            description "The value for the DOMDataTreeProducer#createTransaction argument.";
            description "Number of all transactions executed.";
            description "Number of transactions that inserted data.";
            description "Number of transactions that deleted data.";
    rpc create-prefix-shard {
        description "Upon receiving this, the member creates a prefix shard at the instance-identifier,
            with replicas on the required members.";
            type instance-identifier;
    rpc remove-prefix-shard {
        description "Upon receiving this, the member removes the prefix-based shard identified by this prefix.
            This must be called from the same node that created the shard.";
            type instance-identifier;
    rpc become-prefix-leader {
        description "Upon receiving this, the member shall ask the appropriate API
            to become the leader of the given shard (presumably the llt:list-ints one,
            created by produce-transactions) and return immediately.";
            type instance-identifier;
    rpc remove-shard-replica {
        description "A specialised copy of cluster-admin:remove-shard-replica.
            FIXME: Is this really needed for prefix shards, or even module shards
            (or is the cluster-admin RPC sufficient)?";
            description "The name of the config shard for which
                to remove the replica on the current member.";
    rpc add-shard-replica {
        description "A specialised copy of cluster-admin:add-shard-replica.
            FIXME: Is this really needed for prefix shards, or even module shards
            (or is the cluster-admin RPC sufficient)?";
            description "The name of the config shard for which
                to add the replica on the current member.";
    rpc is-client-aborted {
        description "Return the state of cds-access-client.
            FIXME: Is an input needed?";
            description "True if the local client is aborted (or unreachable), false otherwise.";
        description "Upon receiving this, the member checks whether it has already subscribed,
            and fails if yes. If no, the member subscribes a Data Tree Change Listener
            to listen for changes on the whole llt:id-ints. The first notification received is stored immediately.
            Every notification received after the first one has its data (getDataBefore()) compared with the
            last stored notification (called the local copy); if they match, the local copy is overwritten with
            this notification's data (getDataAfter()). If they do not match, the new notification is ignored.";
    rpc unsubscribe-dtcl {
        description "Upon receiving this, the member checks whether it has currently subscribed
            a Data Tree Change Listener for llt:id-ints changes, and fails if no. If yes, the member
            shall unsubscribe the listener, read the state of id-ints, compare that
            to the local copy, and return whether the local copy is the same.";
            description "True if and only if the read id-ints is equal to the local copy.";
        description "Upon receiving this, the member checks whether it has already subscribed,
            and fails if yes. If no, the member subscribes a DOMDataTreeListener
            to listen for changes on the whole llt:id-ints, and stores
            the state from the initial notification in a local variable (called the local copy).
            Each Data Tree Change from further notifications shall be applied
            to the local copy if it is compatible
            (the old state from the notification is equal to the local copy state).
            If a notification is not compatible, it shall be ignored.";
    rpc unsubscribe-ddtl {
        description "Upon receiving this, the member checks whether it has currently subscribed
            a DOMDataTreeListener for llt:id-ints changes, and fails if no. If yes, the member
            shall unsubscribe the listener, read the state of id-ints (by briefly subscribing
            and unsubscribing again), compare that to the local copy,
            and return whether the local copy is the same.";
            description "True if and only if the read id-ints is equal to the local copy.";
    // The following calls are not required for Carbon testing.
    rpc deconfigure-id-ints-shard {
        description "Upon receiving this, the member shall ask the appropriate API
            to remove the llt:id-ints shard (presumably created by produce-transactions)
            and return immediately.
            It is expected the data would move to the root prefix shard seamlessly.
            TODO: Make the shard name configurable by input?";
    rpc register-default-constant {
        description "Upon receiving this, the member has to create and register
            a default llt:get-contexted-constant implementation (routed RPC).
            If the registration fails for any reason, propagate the corresponding error.";
        uses llc:constant-grouping;
    rpc unregister-default-constant {
        description "Upon receiving this, the member has to unregister
            any llt:get-contexted-constant default implementations it has registered.
            If no default implementation has been registered, do nothing.";
    rpc shutdown-shard-replica {
        description "Upon receiving this, the member will try to gracefully shut down the local
            configuration data store module-based shard replica.";
            description "The name of the configuration data store module-based shard to be shut down