The OpenDaylight Controller is a Java-based, model-driven controller that uses
YANG as its modeling language for various aspects of the system and
applications, and together with its components serves as a base platform for
other OpenDaylight applications.

The OpenDaylight Controller relies on the following technologies:
- **OSGi** - This framework is the back-end of OpenDaylight, as it
  allows dynamic loading of bundles and packaged JAR files, and
  binding bundles together for exchanging information.

- **Karaf** - Application container built on top of OSGi, which
  simplifies operational aspects of packaging and installing
  applications.

- **YANG** - a data modeling language used to model configuration and
  state data manipulated by the applications, remote procedure calls,
  and notifications.
The OpenDaylight Controller provides the following model-driven subsystems
as a foundation for Java applications:

- **`Config Subsystem <#_config_subsystem>`__** - an activation,
  dependency-injection and configuration framework, which allows
  two-phase commits of configuration and dependency-injection, and
  allows for run-time rewiring.

- **`MD-SAL <#_md_sal_overview>`__** - messaging and data storage
  functionality for data, notifications and RPCs modeled by application
  developers. MD-SAL uses YANG as the modeling language for both interface and
  data definitions, and provides a messaging and data-centric runtime
  for such services based on YANG modeling.

- **MD-SAL Clustering** - enables cluster support for core MD-SAL
  functionality and provides location-transparent access to the
  data store.
The OpenDaylight Controller supports external access to applications and
data using the following model-driven protocols:

- **NETCONF** - XML-based RPC protocol, which provides the ability for a
  client to invoke YANG-modeled RPCs, receive notifications, and
  read, modify and manipulate YANG-modeled data.

- **RESTCONF** - HTTP-based protocol, which provides REST-like APIs to
  manipulate YANG-modeled data and invoke YANG-modeled RPCs, using XML
  or JSON as the payload format.
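For illustration, reading YANG-modeled data over RESTCONF might look like the following HTTP exchange (the module and container names here are hypothetical placeholders):

```
GET /restconf/config/example-module:example-container HTTP/1.1
Accept: application/json
```

The same data could be requested as XML by sending ``Accept: application/xml`` instead; the RESTCONF section below describes the URI structure in detail.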
OpenDaylight Controller MD-SAL: Overview
----------------------------------------

The Model-Driven Service Adaptation Layer (MD-SAL) is a message-bus-inspired,
extensible middleware component that provides messaging and
data storage functionality based on data and interface models defined by
application developers (i.e. user-defined models).

The MD-SAL:

- Defines a **common layer, concepts, data model building blocks and
  messaging patterns** and provides the infrastructure / framework for
  applications and inter-application communication.

- Provides common support for user-defined transport and payload
  formats, including payload serialization and adaptation (e.g. binary,
  XML or JSON).

The MD-SAL uses **YANG** as the modeling language for both interface and
data definitions, and provides a messaging and data-centric runtime for
such services based on YANG modeling.
The MD-SAL provides two different API types (flavours):

- **MD-SAL Binding:** MD-SAL APIs which extensively use APIs and
  classes generated from YANG models, which provide compile-time
  type safety.

- **MD-SAL DOM:** (Document Object Model) APIs which use a DOM-like
  representation of data, which makes them more powerful, but provides
  less compile-time safety.
.. note::

    The model-driven nature of the MD-SAL and the **DOM**-based APIs allows for
    behind-the-scenes API and payload type mediation and transformation
    to facilitate seamless communication between applications - this
    enables other components and applications to provide connectors /
    expose different sets of APIs and derive most of their functionality
    purely from models, which all existing code can benefit from without
    modification. For example, the **RESTCONF Connector** is an application
    built on top of MD-SAL which exposes YANG-modeled application APIs
    transparently via HTTP and adds support for XML and JSON payload
    formats.
Basic concepts
--------------

Basic concepts are the building blocks which are used by applications, and
which MD-SAL uses to define messaging patterns and to provide
services and behavior based on developer-supplied YANG models.

Data Tree
    All state-related data are modeled and represented as a data tree,
    with the possibility to address any element / subtree.

    - **Operational Data Tree** - The reported state of the system,
      published by the providers using MD-SAL. Represents a feedback
      loop for applications to observe the state of the network / system.

    - **Configuration Data Tree** - The intended state of the system or
      network, populated by consumers, which expresses their intention.

Instance Identifier
    A unique identifier of a node / subtree in the data tree, which provides
    unambiguous information on how to reference and retrieve the node /
    subtree from the conceptual data trees.

Notification
    An asynchronous transient event which may be consumed by subscribers,
    and they may act upon it.

RPC
    An asynchronous request-reply message pair, where the request is triggered
    by a consumer and sent to the provider, which in the future replies with
    a reply message.

    .. note::

        In MD-SAL terminology, the term *RPC* is used to define the
        input and output for a procedure (function) that is to be
        provided by a provider, and mediated by the MD-SAL; that means
        it may not result in a remote call.
Messaging Patterns
------------------

MD-SAL provides several messaging patterns using the broker, derived from the
basic concepts. They are intended to transfer YANG-modeled data between
applications and provide data-centric integration between applications
instead of API-centric integration.

- **Unicast communication**

  - **Remote Procedure Calls** - unicast between consumer and
    provider, where the consumer sends a **request** message to the provider,
    which asynchronously responds with a **reply** message.

- **Publish / Subscribe**

  - **Notifications** - multicast transient message which is published
    by the provider and is delivered to subscribers.

  - **Data Change Events** - multicast asynchronous event, which is
    sent by the data broker if there is a change in the conceptual data tree,
    and is delivered to subscribers.

- **Transactional access to Data Tree**

  - Transactional **reads** from the conceptual **data tree** - read-only
    transactions with isolation from other running transactions.

  - Transactional **modification** of the conceptual **data tree** - write
    transactions with isolation from other running transactions.

  - **Transaction chaining**
MD-SAL Data Transactions
------------------------

The MD-SAL **Data Broker** provides transactional access to the conceptual
**data trees** representing configuration and operational state.
The **data tree** usually represents the state of the modeled data; usually
this is the state of the controller, applications and also external systems.

**Transactions** provide a **`stable and isolated
view <#_transaction_isolation>`__** separate from other currently running
transactions. The state of a running transaction and its underlying data tree
is not affected by other concurrently running transactions.
Write-only transaction
    The transaction provides only modification capabilities, but does not
    provide read capabilities. A write-only transaction is allocated using
    ``newWriteOnlyTransaction()``.

    .. note::

        This allows less state tracking for write-only transactions and
        allows MD-SAL Clustering to optimize the internal representation of
        the transaction in the cluster.

Read-write transaction
    The transaction provides both read and write capabilities. It is
    allocated using ``newReadWriteTransaction()``.

Read-only transaction
    The transaction provides a stable read-only view based on the current data
    tree. The read-only view is not affected by any subsequent write
    transactions. A read-only transaction is allocated using
    ``newReadOnlyTransaction()``.

    .. note::

        If an application needs to observe changes in the data tree
        itself, it should use **data tree listeners** instead of read-only
        transactions and polling the data tree.
Transactions may be allocated using the **data broker** itself or using a
**transaction chain**. In the case of a **transaction chain**, the newly
allocated transaction is not based on the current state of the data tree, but
rather on the state introduced by the previous transaction from the same chain,
even if the commit for the previous transaction has not yet occurred (but
the transaction was submitted).
Write-Only & Read-Write Transaction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Write-only and read-write transactions provide modification capabilities
for the conceptual data trees. The usual workflow is:
1. The application allocates a new transaction using
   ``newWriteOnlyTransaction()`` or ``newReadWriteTransaction()``.

2. The application `modifies the data tree <#_modification_of_data_tree>`__
   using ``put``, ``merge`` and/or ``delete``.

3. The application finishes the transaction using
   ```submit()`` <#_submitting_transaction>`__, which seals the transaction
   and submits it to be processed.

4. The application observes the result of the transaction commit using
   either blocking or asynchronous calls.
The **initial state** of a write transaction is a **stable snapshot**
of the current data tree state, captured when the transaction was created;
its state and underlying data tree are not affected by other
concurrently running transactions.

Write transactions are **isolated** from other concurrent write
transactions. All **`writes are local <#_transaction_local_state>`__**
to the transaction, represent only a **proposal of state change**
for the data tree, and **are not visible** to any other concurrently running
transactions (including read-only transactions).

The transaction **`commit may fail <#_commit_failure_scenarios>`__** due
to failing verification of data, or due to a concurrent transaction modifying
the affected data in an incompatible way.
Modification of Data Tree
^^^^^^^^^^^^^^^^^^^^^^^^^

Write-only and read-write transactions provide the following methods to
modify the data tree:

::

    <T> void put(LogicalDatastoreType store, InstanceIdentifier<T> path, T data);

Stores a piece of data at the specified path. This acts as an **add /
replace** operation, which is to say that the whole subtree will be
replaced by the specified data.

::

    <T> void merge(LogicalDatastoreType store, InstanceIdentifier<T> path, T data);

Merges a piece of data with the existing data at the specified path.
Any **pre-existing data** which is not explicitly overwritten
**will be preserved**. This means that if you store a container, its
child subtrees will be merged.

::

    void delete(LogicalDatastoreType store, InstanceIdentifier<?> path);

Removes a whole subtree from the specified path.
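The difference between ``put`` and ``merge`` can be illustrated with a small, self-contained sketch. This is **not** the MD-SAL API; it just models a data subtree as a nested map to show the replace-vs-merge semantics described above:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual illustration only: a nested map stands in for a data subtree.
public class PutVsMerge {

    // put: the whole subtree at the path is replaced by the specified data
    static void put(Map<String, Object> tree, String path, Map<String, Object> data) {
        tree.put(path, new HashMap<>(data));
    }

    // merge: pre-existing children not explicitly overwritten are preserved
    @SuppressWarnings("unchecked")
    static void merge(Map<String, Object> tree, String path, Map<String, Object> data) {
        Map<String, Object> existing =
            (Map<String, Object>) tree.computeIfAbsent(path, k -> new HashMap<String, Object>());
        existing.putAll(data);
    }

    public static void main(String[] args) {
        Map<String, Object> tree = new HashMap<>();

        put(tree, "container", Map.of("a", 1, "b", 2));
        put(tree, "container", Map.of("a", 10));
        // put replaced the whole container: child "b" is gone now
        System.out.println(tree.get("container"));

        put(tree, "container", Map.of("a", 1, "b", 2));
        merge(tree, "container", Map.of("a", 10));
        // merge overwrote "a" but preserved sibling "b"
        System.out.println(tree.get("container"));
    }
}
```

In MD-SAL the same distinction applies recursively to the whole YANG-modeled subtree, not just one level of children.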
Submitting transaction
^^^^^^^^^^^^^^^^^^^^^^

A transaction is submitted to be processed and committed using the following
method:

::

    CheckedFuture<Void,TransactionCommitFailedException> submit();

Applications publish the changes proposed in the transaction by calling
``submit()`` on the transaction. This **seals the transaction**
(preventing any further writes using this transaction) and submits it to
be processed and applied to the global conceptual data tree. The
``submit()`` method does not block, but rather returns a
``ListenableFuture``, which will complete successfully once processing
of the transaction is finished and the changes are applied to the data tree. If
the **commit** of the data fails, the future will fail with a
``TransactionCommitFailedException``.

The application may listen on the commit state asynchronously using the
``ListenableFuture``.
::

    Futures.addCallback( writeTx.submit(), new FutureCallback<Void>() {
        public void onSuccess( Void result ) {
            LOG.debug("Transaction committed successfully.");
        }

        public void onFailure( Throwable t ) {
            LOG.error("Commit failed.", t);
        }
    });

- Submits ``writeTx`` and registers an application-provided
  ``FutureCallback`` on the returned future.

- Invoked when the future completes successfully - transaction ``writeTx``
  was successfully committed to the data tree.

- Invoked when the future fails - the commit of transaction ``writeTx``
  failed. The supplied exception provides additional details and the cause
  of the failure.
If the application needs to block until the commit is finished, it may use
``checkedGet()`` to wait for the commit to finish.

::

    try {
        writeTx.submit().checkedGet();
    } catch (TransactionCommitFailedException e) {
        LOG.error("Commit failed.", e);
    }

- Submits ``writeTx`` and blocks until the commit of ``writeTx`` is
  finished. If the commit fails, a ``TransactionCommitFailedException``
  will be thrown.

- Catches ``TransactionCommitFailedException`` and logs it.
Transaction local state
^^^^^^^^^^^^^^^^^^^^^^^

Read-write transactions maintain transaction-local state, which renders
all modifications as if they had already happened, but only locally to
the transaction.

Reads from the transaction return data as if the previous modifications
in the transaction had already happened.
Let us assume the initial state of the data tree for ``PATH`` is ``A``.

::

    ReadWriteTransaction rwTx = broker.newReadWriteTransaction();

    rwTx.read(OPERATIONAL,PATH).get();
    rwTx.put(OPERATIONAL,PATH,B);
    rwTx.read(OPERATIONAL,PATH).get();
    rwTx.put(OPERATIONAL,PATH,C);
    rwTx.read(OPERATIONAL,PATH).get();

- Allocates a new ``ReadWriteTransaction``.

- Read from ``rwTx`` will return value ``A`` for ``PATH``.

- Writes value ``B`` to ``PATH`` using ``rwTx``.

- Read will return value ``B`` for ``PATH``, since the previous write
  occurred in the same transaction.

- Writes value ``C`` to ``PATH`` using ``rwTx``.

- Read will return value ``C`` for ``PATH``, since the previous write
  occurred in the same transaction.
Transaction isolation
~~~~~~~~~~~~~~~~~~~~~

Running (not submitted) transactions are isolated from each other, and
changes done in one transaction are not observable in other currently
running transactions.

Let us assume the initial state of the data tree for ``PATH`` is ``A``.
::

    ReadOnlyTransaction txRead = broker.newReadOnlyTransaction();
    ReadWriteTransaction txWrite = broker.newReadWriteTransaction();

    txRead.read(OPERATIONAL,PATH).get();
    txWrite.put(OPERATIONAL,PATH,B);
    txWrite.read(OPERATIONAL,PATH).get();
    txWrite.submit().get();
    txRead.read(OPERATIONAL,PATH).get();

    ReadOnlyTransaction txAfterCommit = broker.newReadOnlyTransaction();
    txAfterCommit.read(OPERATIONAL,PATH).get();
- Allocates a read-only transaction, which is based on the data tree
  containing value ``A`` for ``PATH``.

- Allocates a read-write transaction, which is based on the data tree
  containing value ``A`` for ``PATH``.

- Read from the read-only transaction returns value ``A`` for ``PATH``.

- The data tree is updated using the read-write transaction; ``PATH``
  contains ``B``. The change is not public, but only local to the
  transaction.

- Read from the read-write transaction returns value ``B`` for ``PATH``.

- Submits the changes in the read-write transaction to be committed to the
  data tree. Once the commit finishes, the changes will be published and
  ``PATH`` will be updated to value ``B``. Previously allocated transactions
  are not affected by this change.

- Read from the previously allocated read-only transaction still returns
  value ``A`` for ``PATH``, since it provides a stable and isolated view.

- Allocates a new read-only transaction, which is based on the data tree
  containing value ``B`` for ``PATH``.

- Read from the new read-only transaction returns value ``B`` for ``PATH``,
  since the read-write transaction was committed.
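The snapshot rules above can be sketched with a self-contained toy broker. This is **not** the MD-SAL implementation, only a conceptual illustration, assuming each transaction captures the published tree at allocation time and a commit atomically swaps in a new tree:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of snapshot isolation; not the MD-SAL Data Broker.
public class SnapshotIsolation {
    // The currently published, immutable data tree.
    private Map<String, String> current = Map.of();

    // Read-only transaction: a stable reference to the tree at allocation time.
    public Map<String, String> newReadOnlyTransaction() {
        return current;
    }

    // Write transaction: a private copy; writes stay local until submit.
    public Map<String, String> newWriteTransaction() {
        return new HashMap<>(current);
    }

    // Commit: atomically publish the transaction's proposed state.
    public void submit(Map<String, String> writeTx) {
        current = Map.copyOf(writeTx);
    }

    public static void main(String[] args) {
        SnapshotIsolation broker = new SnapshotIsolation();

        Map<String, String> init = broker.newWriteTransaction();
        init.put("PATH", "A");
        broker.submit(init);

        Map<String, String> txRead = broker.newReadOnlyTransaction();
        Map<String, String> txWrite = broker.newWriteTransaction();
        txWrite.put("PATH", "B");   // local to txWrite only
        broker.submit(txWrite);     // publishes B

        System.out.println(txRead.get("PATH"));                          // still A
        System.out.println(broker.newReadOnlyTransaction().get("PATH")); // B
    }
}
```

The real data broker keeps versioned immutable tree snapshots for the same effect, which is why an already-allocated read-only transaction keeps seeing ``A`` even after ``B`` is committed.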
.. note::

    The examples contain blocking calls on futures only to illustrate that an
    action happened after another asynchronous action. The use of the
    blocking call ``ListenableFuture#get()`` is discouraged for most
    use cases; you should use
    ``Futures#addCallback(ListenableFuture, FutureCallback)`` to listen
    asynchronously for the result.
Commit failure scenarios
~~~~~~~~~~~~~~~~~~~~~~~~

A transaction commit may fail for the following reasons:

Optimistic Lock Failure
    Another transaction finished earlier and **modified the same node in
    a non-compatible way**. The commit (and the returned future) will
    fail with an ``OptimisticLockFailedException``.

    It is the responsibility of the caller to create a new transaction
    and submit the same modification again in order to update the data tree.

    .. note::

        An ``OptimisticLockFailedException`` usually indicates **multiple
        writers** to the same data subtree, which may conflict on the same
        data. In most cases, retrying has a good probability of success.

        There are scenarios, albeit unusual, where any number of retries
        will not succeed. Therefore it is strongly recommended to limit
        the number of retries (to 2 or 3) to avoid an endless loop.

Data Validation Failure
    The data change introduced by this transaction **did not pass
    validation** by commit handlers, or the data was incorrectly structured.
    The returned future will fail with a
    ``DataValidationFailedException``. The user **should not retry** by
    creating a new transaction with the same data, since it will probably
    fail again.
Example conflict of two transactions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This example illustrates two concurrent transactions which derive from the
same initial state of the data tree and propose conflicting modifications.

::

    WriteTransaction txA = broker.newWriteOnlyTransaction();
    WriteTransaction txB = broker.newWriteOnlyTransaction();

    txA.put(CONFIGURATION, PATH, A);
    txB.put(CONFIGURATION, PATH, B);

    CheckedFuture<?,?> futureA = txA.submit();
    CheckedFuture<?,?> futureB = txB.submit();
- Updates ``PATH`` to value ``A`` using ``txA``.

- Updates ``PATH`` to value ``B`` using ``txB``.

- Seals & submits ``txA``. The commit will be processed asynchronously
  and the data tree will be updated to contain value ``A`` for ``PATH``.
  The returned ``ListenableFuture`` will complete successfully once the
  state is applied to the data tree.

- Seals & submits ``txB``. The commit of ``txB`` will fail, because the
  previous transaction modified the same path in a concurrent way. The
  state introduced by ``txB`` will not be applied. The returned
  ``ListenableFuture`` will fail with an ``OptimisticLockFailedException``,
  which indicates that a concurrent transaction prevented the
  submitted transaction from being applied.
Example asynchronous retry-loop
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

    private void doWrite( final int tries ) {
        WriteTransaction writeTx = dataBroker.newWriteOnlyTransaction();

        MyDataObject data = ...;
        InstanceIdentifier<MyDataObject> path = ...;
        writeTx.put( LogicalDatastoreType.OPERATIONAL, path, data );

        Futures.addCallback( writeTx.submit(), new FutureCallback<Void>() {
            public void onSuccess( Void result ) {
                // the commit succeeded - no retry necessary
            }

            public void onFailure( Throwable t ) {
                if( t instanceof OptimisticLockFailedException && (( tries - 1 ) > 0)) {
                    doWrite( tries - 1 );
                }
            }
        });
    }
Concurrent change compatibility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are several sets of changes which could be considered incompatible
between two transactions which are derived from the same initial state.
Rules for conflict detection apply recursively for each subtree level.

The following table shows state changes and failures between two concurrent
transactions which are based on the same initial state; ``tx1`` is
submitted before ``tx2``.

.. note::

    The following tables store numeric values and show data using
    ``toString()`` to simplify the examples.
+---------------+------------------+------------------+------------------------------------+
| Initial state | tx1              | tx2              | Observable Result                  |
+===============+==================+==================+====================================+
| Empty         | ``put(A,1)``     | ``put(A,2)``     | ``tx2`` will fail, ``A`` is ``1``  |
+---------------+------------------+------------------+------------------------------------+
| Empty         | ``put(A,1)``     | ``merge(A,2)``   | ``A`` is ``2``                     |
+---------------+------------------+------------------+------------------------------------+
| Empty         | ``merge(A,1)``   | ``put(A,2)``     | ``tx2`` will fail, ``A`` is ``1``  |
+---------------+------------------+------------------+------------------------------------+
| Empty         | ``merge(A,1)``   | ``merge(A,2)``   | ``A`` is ``2``                     |
+---------------+------------------+------------------+------------------------------------+
| A=0           | ``put(A,1)``     | ``put(A,2)``     | ``tx2`` will fail, ``A`` is ``1``  |
+---------------+------------------+------------------+------------------------------------+
| A=0           | ``put(A,1)``     | ``merge(A,2)``   | ``A`` is ``2``                     |
+---------------+------------------+------------------+------------------------------------+
| A=0           | ``merge(A,1)``   | ``put(A,2)``     | ``tx2`` will fail, ``A`` is ``1``  |
+---------------+------------------+------------------+------------------------------------+
| A=0           | ``merge(A,1)``   | ``merge(A,2)``   | ``A`` is ``2``                     |
+---------------+------------------+------------------+------------------------------------+
| A=0           | ``delete(A)``    | ``put(A,2)``     | ``tx2`` will fail, ``A`` does not  |
|               |                  |                  | exist                              |
+---------------+------------------+------------------+------------------------------------+
| A=0           | ``delete(A)``    | ``merge(A,2)``   | ``A`` is ``2``                     |
+---------------+------------------+------------------+------------------------------------+

Table: Concurrent change resolution for leaves and leaf-list items
+---------------+--------------------+--------------------+-----------------------------------------+
| Initial state | ``tx1``            | ``tx2``            | Result                                  |
+===============+====================+====================+=========================================+
| Empty         | put(TOP,[])        | put(TOP,[])        | ``tx2`` will fail, state is TOP=[]      |
+---------------+--------------------+--------------------+-----------------------------------------+
| Empty         | put(TOP,[])        | merge(TOP,[])      | TOP=[]                                  |
+---------------+--------------------+--------------------+-----------------------------------------+
| Empty         | put(TOP,[FOO=1])   | put(TOP,[BAR=1])   | ``tx2`` will fail, state is TOP=[FOO=1] |
+---------------+--------------------+--------------------+-----------------------------------------+
| Empty         | put(TOP,[FOO=1])   | merge(TOP,[BAR=1]) | TOP=[FOO=1,BAR=1]                       |
+---------------+--------------------+--------------------+-----------------------------------------+
| Empty         | merge(TOP,[FOO=1]) | put(TOP,[BAR=1])   | ``tx2`` will fail, state is TOP=[FOO=1] |
+---------------+--------------------+--------------------+-----------------------------------------+
| Empty         | merge(TOP,[FOO=1]) | merge(TOP,[BAR=1]) | TOP=[FOO=1,BAR=1]                       |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | put(TOP,[FOO=1])   | put(TOP,[BAR=1])   | ``tx2`` will fail, state is TOP=[FOO=1] |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | put(TOP,[FOO=1])   | merge(TOP,[BAR=1]) | state is TOP=[FOO=1,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | merge(TOP,[FOO=1]) | put(TOP,[BAR=1])   | ``tx2`` will fail, state is TOP=[FOO=1] |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | merge(TOP,[FOO=1]) | merge(TOP,[BAR=1]) | state is TOP=[FOO=1,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | delete(TOP)        | put(TOP,[BAR=1])   | ``tx2`` will fail, state is empty       |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | delete(TOP)        | merge(TOP,[BAR=1]) | state is TOP=[BAR=1]                    |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | put(TOP/FOO,1)     | put(TOP/BAR,1)     | state is TOP=[FOO=1,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | put(TOP/FOO,1)     | merge(TOP/BAR,1)   | state is TOP=[FOO=1,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | merge(TOP/FOO,1)   | put(TOP/BAR,1)     | state is TOP=[FOO=1,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | merge(TOP/FOO,1)   | merge(TOP/BAR,1)   | state is TOP=[FOO=1,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | delete(TOP)        | put(TOP/BAR,1)     | ``tx2`` will fail, state is empty       |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[]        | delete(TOP)        | merge(TOP/BAR,1)   | ``tx2`` will fail, state is empty       |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[FOO=1]   | put(TOP/FOO,2)     | put(TOP/BAR,1)     | state is TOP=[FOO=2,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[FOO=1]   | put(TOP/FOO,2)     | merge(TOP/BAR,1)   | state is TOP=[FOO=2,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[FOO=1]   | merge(TOP/FOO,2)   | put(TOP/BAR,1)     | state is TOP=[FOO=2,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[FOO=1]   | merge(TOP/FOO,2)   | merge(TOP/BAR,1)   | state is TOP=[FOO=2,BAR=1]              |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[FOO=1]   | delete(TOP/FOO)    | put(TOP/BAR,1)     | state is TOP=[BAR=1]                    |
+---------------+--------------------+--------------------+-----------------------------------------+
| TOP=[FOO=1]   | delete(TOP/FOO)    | merge(TOP/BAR,1)   | state is TOP=[BAR=1]                    |
+---------------+--------------------+--------------------+-----------------------------------------+

Table: Concurrent change resolution for containers, lists and list items
OpenDaylight Controller MD-SAL: RPC routing
-------------------------------------------

The MD-SAL provides a way to deliver Remote Procedure Calls (RPCs) to a
particular implementation based on content in the input, as it is modeled
in YANG. This part of the RPC input is referred to as a **context
reference**.

The MD-SAL does not dictate the name of the leaf which is used for this
RPC routing, but provides the necessary functionality for the YANG model
author to define their **context reference** in their model of RPCs.

MD-SAL routing behavior is modeled using the following terminology and its
application to YANG models:

Context type
    The logical type of RPC routing. A context type is modeled as a YANG
    ``identity`` and is referenced in models to provide scoping
    information.

Context instance
    A conceptual location in the data tree which represents the context in
    which an RPC could be executed. A context instance usually represents a
    logical point to which RPC execution is attached.

Context reference
    A field of the RPC input payload which contains an instance identifier
    referencing the **context instance** in which the RPC should be
    executed.
Modeling a routed RPC
~~~~~~~~~~~~~~~~~~~~~

In order to define routed RPCs, the YANG model author needs to declare
(or reuse) a **context type**, a set of possible **context instances**, and
finally the RPCs which will contain a **context reference** on which they
will be routed.

Declaring a routing context type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

    identity node-context {
        description "Identity used to mark node context";
    }

This declares an identity named ``node-context``, which is used as a
marker for node-based routing and is used in other places to reference
that context type.
Declaring possible context instances
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to define the possible values of **context instances** for routed
RPCs, we need to model that set accordingly using the ``context-instance``
extension from the ``yang-ext`` model.

::

    import yang-ext { prefix ext; }

    /** Base structure **/
    list node {
        ext:context-instance "node-context";
        // other node-related fields would go here
    }

The statement ``ext:context-instance "node-context";`` marks any element
of the ``list node`` as a possible valid **context instance** in
``node-context``-based routing.
.. note::

    The existence of a **context instance** node in the operational or
    config data tree is not strongly tied to the existence of an RPC
    implementation.

    For most routed RPC models, there is a relationship between the data
    present in the operational data tree and RPC implementation
    availability, but this is not enforced by MD-SAL. This provides some
    flexibility for YANG model writers to better specify their routing
    model and requirements for implementations. Details of when RPC
    implementations are available should be documented in the YANG model.

    If a user invokes an RPC with a **context instance** that has no
    registered implementation, the RPC invocation will fail with the
    exception ``DOMRpcImplementationNotAvailableException``.
Declaring a routed RPC
^^^^^^^^^^^^^^^^^^^^^^

To declare an RPC to be routed based on ``node-context``, we need to add a
leaf of the ``instance-identifier`` type (or a type derived from
``instance-identifier``) to the RPC and mark it as a **context
reference**.

This is achieved using the YANG extension ``context-reference`` from the
``yang-ext`` model on the leaf which will be used for RPC routing.

::

    rpc example-routed-rpc {
        input {
            leaf node {
                ext:context-reference "node-context";
                type "instance-identifier";
            }
            // other input to the RPC would go here
        }
    }

The statement ``ext:context-reference "node-context"`` marks the
``leaf node`` as a **context reference** of type ``node-context``. The
value of this leaf will be used by the MD-SAL to select the particular
RPC implementation that registered itself as the implementation of the
RPC for the particular **context instance**.

.. note::

    From a user perspective (e.g. when invoking RPCs) there is no difference
    between routed and non-routed RPCs. Routing information is just an
    additional leaf in the RPC input which must be populated.
Implementing a routed RPC
~~~~~~~~~~~~~~~~~~~~~~~~~

Registering implementations
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Implementations of a routed RPC (e.g., southbound plugins) will specify
an instance-identifier for the **context reference** (in this case a
node) for which they want to provide an implementation during
registration. Consumers, i.e. those calling the RPC, are required to
specify that instance-identifier (in this case the identifier of a node)
when invoking the RPC.

Simple code which showcases this for add-flow via the Binding-Aware APIs
(`RoutedServiceTest.java <https://git.opendaylight.org/gerrit/gitweb?p=controller.git;a=blob;f=opendaylight/md-sal/sal-binding-it/src/test/java/org/opendaylight/controller/test/sal/binding/it/RoutedServiceTest.java;h=d49d6f0e25e271e43c8550feb5eef63d96301184;hb=HEAD>`__):

::

    62  public void onSessionInitiated(ProviderContext session) {
    63      assertNotNull(session);
    64      firstReg = session.addRoutedRpcImplementation(SalFlowService.class, salFlowService1);

Line 64: We are registering salFlowService1 as an implementation of
``SalFlowService``.

::

    107  NodeRef nodeOne = createNodeRef("foo:node:1");
    110   * Provider 1 registers path of node 1
    112  firstReg.registerPath(NodeContext.class, nodeOne);

Line 107: We are creating a NodeRef (an encapsulation of an
InstanceIdentifier) for "foo:node:1".

Line 112: We register salFlowService1 as the implementation for nodeOne.

The salFlowService1 will be executed only for RPCs which contain an
Instance Identifier for foo:node:1.
OpenDaylight Controller MD-SAL: RESTCONF
----------------------------------------

RESTCONF operations overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

| RESTCONF allows access to the datastores in the controller.
| There are two datastores:

- Config: contains data inserted via the controller

- Operational: contains other data

| Each request must start with the URI /restconf.
| RESTCONF listens on port 8080 for HTTP requests.
RESTCONF supports **OPTIONS**, **GET**, **PUT**, **POST**, and
**DELETE** operations. Request and response data can be in either XML
or JSON format. XML structures according to YANG are defined at
`XML-YANG <http://tools.ietf.org/html/rfc6020>`__, and JSON structures
at
`JSON-YANG <http://tools.ietf.org/html/draft-lhotka-netmod-yang-json-02>`__.
Data in the request must have a correctly set **Content-Type** field in
the HTTP header with the allowed value of the media type, and the media
type of the requested data has to be set in the **Accept** field. Get
the media types for each resource by calling the OPTIONS operation.
Most of the paths of the RESTCONF endpoints use `Instance
Identifier <https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Concepts#Instance_Identifier>`__;
``<identifier>`` is used in the explanation of the operations below,
and follows these rules:

- It must start with <moduleName>:<nodeName>, where <moduleName> is the
  name of the module and <nodeName> is the name of a node in the
  module. After the first <moduleName>:<nodeName> pair it is sufficient
  to use just <nodeName>. Each <nodeName> has to be separated by /.

- <nodeName> can represent a data node which is a list or a container.
  If the data node is a list, the keys of the list must follow the data
  node name, for example <nodeName>/<valueOfKey1>/<valueOfKey2>.

- | The format <moduleName>:<nodeName> also has to be used when a node
    is added by an augmentation from another module:
  | Module A has node A1. Module B augments node A1 by adding node X.
    Module C augments node A1 by adding node X. For clarity, it has to
    be known which node is X (for example: C:X). For more details about
    encoding, see `RESTCONF 02 - Encoding YANG Instance Identifiers in
    the Request
    URI <http://tools.ietf.org/html/draft-bierman-netconf-restconf-02#section-5.3.1>`__.

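The identifier rules above can be condensed into a small helper.
This is a hypothetical illustration (the function and its argument
shape are not part of RESTCONF); it qualifies only the first node with
the module name and appends list-key values after a list's node name:

```python
# Hypothetical helper assembling a RESTCONF <identifier> path following
# the rules above; "nodes" is a list of (nodeName, [keyValues]) pairs.
def build_identifier(module, nodes):
    parts = []
    for i, (name, keys) in enumerate(nodes):
        # Only the first node needs the <moduleName>: prefix.
        qualified = f"{module}:{name}" if i == 0 else name
        parts.append(qualified)
        parts.extend(keys)  # list keys follow the list's node name
    return "/".join(parts)

print(build_identifier("opendaylight-inventory",
                       [("nodes", []), ("node", ["openflow:1"]), ("table", ["2"])]))
# opendaylight-inventory:nodes/node/openflow:1/table/2
```
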
Mount point
^^^^^^^^^^^

| A node can be behind a mount point. In this case, the URI has to be
  in the format <identifier>/**yang-ext:mount**/<identifier>. The first
  <identifier> is the path to the mount point, and the second
  <identifier> is the path to the node behind the mount point. A URI
  can end in the mount point itself by using
  <identifier>/**yang-ext:mount**.
| More information on how to actually use mount points is available at:
  `OpenDaylight
  Controller:Config:Examples:Netconf <https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf>`__.

OPTIONS /restconf
^^^^^^^^^^^^^^^^^

- Returns the XML description of the resources with the required
  request and response media types in Web Application Description
  Language (WADL).

GET /restconf/config/<identifier>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Returns a data node from the Config datastore.

- <identifier> points to the data node which must be retrieved.

GET /restconf/operational/<identifier>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Returns the value of the data node from the Operational datastore.

- <identifier> points to the data node which must be retrieved.

PUT /restconf/config/<identifier>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Updates or creates data in the Config datastore and returns the state
  about success.

- <identifier> points to the data node which must be stored.

**Example:**

::

    PUT http://<controllerIP>:8080/restconf/config/module1:foo/bar
    Content-Type: application/xml

**Example with mount point:**

::

    PUT http://<controllerIP>:8080/restconf/config/module1:foo1/foo2/yang-ext:mount/module2:foo/bar
    Content-Type: application/xml

POST /restconf/config
^^^^^^^^^^^^^^^^^^^^^

- Creates the data if it does not exist.

**Example:**

::

    POST URL: http://localhost:8080/restconf/config/
    content-type: application/yang.data+json
    JSON payload:

       {
         "toaster:toaster" :
         {
           "toaster:toasterManufacturer" : "General Electric",
           "toaster:toasterModelNumber" : "123",
           "toaster:toasterStatus" : "up"
         }
       }

POST /restconf/config/<identifier>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Creates the data if it does not exist in the Config datastore, and
  returns the state about success.

- <identifier> points to the data node where the data must be stored.

- The root element of the data must have the namespace (if the data is
  in XML) or the module name (if the data is in JSON).

**Example:**

::

    POST http://<controllerIP>:8080/restconf/config/module1:foo
    Content-Type: application/xml
    <bar xmlns="module1namespace">
      ...
    </bar>

**Example with mount point:**

::

    POST http://<controllerIP>:8080/restconf/config/module1:foo1/foo2/yang-ext:mount/module2:foo
    Content-Type: application/xml
    <bar xmlns="module2namespace">
      ...
    </bar>

POST /restconf/operations/<moduleName>:<rpcName>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Invokes the RPC.

- <moduleName>:<rpcName> - <moduleName> is the name of the module and
  <rpcName> is the name of the RPC in this module.

- The root element of the data sent to the RPC must have the name
  "input".

- The result can be a status code or retrieved data having the root
  element "output".

**Example:**

::

    POST http://<controllerIP>:8080/restconf/operations/module1:fooRpc
    Content-Type: application/xml
    Accept: application/xml

The answer from the server contains the result of the RPC in an
"output" element.

**An example using a JSON payload:**

::

    POST http://localhost:8080/restconf/operations/toaster:make-toast
    Content-Type: application/yang.data+json
    JSON payload:

       {
         "input" :
         {
           "toaster:toasterDoneness" : "10",
           "toaster:toasterToastType" : "wheat-bread"
         }
       }

Even though this is the default for the toasterToastType value in the
YANG model, you still need to define it.

DELETE /restconf/config/<identifier>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Removes the data node in the Config datastore and returns the state
  about success.

- <identifier> points to the data node which must be removed.

More information is available in the `RESTCONF
draft <http://tools.ietf.org/html/draft-bierman-netconf-restconf-02>`__.

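The operations above are plain HTTP, so any client works. The following
sketch (not from the original document) builds the described requests
with only the Python standard library; the helper name and the
host/port defaults are assumptions, and the code only constructs the
request object without sending it:

```python
from urllib import request

# Illustrative helper for the RESTCONF operations described above.
def restconf_request(method, datastore, identifier, body=None,
                     host="localhost", port=8080, media="application/xml"):
    url = f"http://{host}:{port}/restconf/{datastore}/{identifier}"
    headers = {"Accept": media}            # media type of the wanted data
    if body is not None:
        headers["Content-Type"] = media    # required when a payload is sent
    return request.Request(url, data=body, headers=headers, method=method)

# GET /restconf/config/module1:foo/bar
req = restconf_request("GET", "config", "module1:foo/bar")
print(req.get_method(), req.full_url)
# GET http://localhost:8080/restconf/config/module1:foo/bar
```

Sending the request against a running controller would be
`urllib.request.urlopen(req)`; authentication headers, if needed, are
added the same way as the other headers.
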
How RESTCONF works
~~~~~~~~~~~~~~~~~~

RESTCONF uses these base classes:

InstanceIdentifier
    Represents the path in the data tree.

ConsumerSession
    Used for invoking RPCs.

DataBrokerService
    Offers manipulation with transactions and reading data from the
    datastores.

SchemaContext
    Holds information about YANG modules.

MountService
    Returns a MountInstance based on the InstanceIdentifier pointing to
    a mount point.

MountInstance
    Contains the SchemaContext behind the mount point.

DataSchemaNode
    Provides information about the schema node.

SimpleNode
    Possesses the same name as the schema node, and contains the value
    representing the data node value.

CompositeNode
    Can contain CompositeNode-s and SimpleNode-s.

GET in action
~~~~~~~~~~~~~

Figure 1 shows the GET operation with URI restconf/config/M:N, where M
is the module name, and N is the node name.

.. figure:: ./images/Get.png

1. The requested URI is translated into an InstanceIdentifier which
   points to the data node. During this translation, the DataSchemaNode
   that conforms to the data node is obtained. If the data node is
   behind a mount point, the MountInstance is obtained as well.

2. RESTCONF asks for the value of the data node from DataBrokerService
   based on the InstanceIdentifier.

3. DataBrokerService returns a CompositeNode as data.

4. StructuredDataToXmlProvider or StructuredDataToJsonProvider is
   called, based on the **Accept** field of the HTTP request. These two
   providers can transform a CompositeNode regarding DataSchemaNode to
   an XML or JSON document.

5. The XML or JSON is returned as the answer to the request from the
   client.

PUT in action
~~~~~~~~~~~~~

Figure 2 shows the PUT operation with the URI restconf/config/M:N,
where M is the module name, and N is the node name. Data is sent in the
request either in XML or JSON format.

.. figure:: ./images/Put.png

1. The input data is sent to JsonToCompositeNodeProvider or
   XmlToCompositeNodeProvider. The correct provider is selected based
   on the Content-Type field of the HTTP request. These two providers
   can transform input data to a CompositeNode. However, this
   CompositeNode does not contain enough information for transactions.

2. The requested URI is translated into an InstanceIdentifier which
   points to the data node. The DataSchemaNode conforming to the data
   node is obtained during this translation. If the data node is behind
   a mount point, the MountInstance is obtained as well.

3. The CompositeNode can be normalized by adding additional information
   from the DataSchemaNode.

4. RESTCONF begins the transaction, and puts the CompositeNode with the
   InstanceIdentifier into it. The response to the client is a status
   code which depends on the result of the transaction.

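The provider selection in step 4 of GET and step 1 of PUT is plain
content negotiation. A toy sketch of that dispatch (illustrative only;
the strings are the provider class names from the steps above, and the
substring matching is a simplification of real media-type parsing):

```python
# Serializer choice on GET is driven by the Accept header ...
def select_serializer(accept_header):
    if "json" in accept_header:
        return "StructuredDataToJsonProvider"
    return "StructuredDataToXmlProvider"

# ... while parser choice on PUT/POST is driven by Content-Type.
def select_parser(content_type):
    if "json" in content_type:
        return "JsonToCompositeNodeProvider"
    return "XmlToCompositeNodeProvider"

print(select_serializer("application/yang.data+json"))
print(select_parser("application/xml"))
```
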
Something practical
~~~~~~~~~~~~~~~~~~~

1. Create a new flow on the switch openflow:1 in table 2.

**HTTP request:**

::

    Method: POST
    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2
    Content-Type: application/xml

::

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <flow
        xmlns="urn:opendaylight:flow:inventory">
        <strict>false</strict>
        ...
        <table_id>2</table_id>
        <cookie_mask>10</cookie_mask>
        <out_port>10</out_port>
        <installHw>false</installHw>
        <out_group>2</out_group>
        ...
        <ipv4-destination>10.0.0.1/24</ipv4-destination>
        ...
        <hard-timeout>0</hard-timeout>
        <idle-timeout>0</idle-timeout>
        <flow-name>FooXf22</flow-name>
        <priority>2</priority>
        <barrier>false</barrier>
    </flow>

**HTTP response:**

::

    Status: 204 No Content

2. Change *strict* to *true* in the previous flow.

**HTTP request:**

::

    Method: PUT
    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
    Content-Type: application/xml

::

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <flow
        xmlns="urn:opendaylight:flow:inventory">
        <strict>true</strict>
        ...
        <table_id>2</table_id>
        <cookie_mask>10</cookie_mask>
        <out_port>10</out_port>
        <installHw>false</installHw>
        <out_group>2</out_group>
        ...
        <ipv4-destination>10.0.0.1/24</ipv4-destination>
        ...
        <hard-timeout>0</hard-timeout>
        <idle-timeout>0</idle-timeout>
        <flow-name>FooXf22</flow-name>
        <priority>2</priority>
        <barrier>false</barrier>
    </flow>

3. Show the flow: check that *strict* is *true*.

**HTTP request:**

::

    Method: GET
    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
    Accept: application/xml

**HTTP response:**

::

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <flow
        xmlns="urn:opendaylight:flow:inventory">
        <strict>true</strict>
        ...
        <table_id>2</table_id>
        <cookie_mask>10</cookie_mask>
        <out_port>10</out_port>
        <installHw>false</installHw>
        <out_group>2</out_group>
        ...
        <ipv4-destination>10.0.0.1/24</ipv4-destination>
        ...
        <hard-timeout>0</hard-timeout>
        <idle-timeout>0</idle-timeout>
        <flow-name>FooXf22</flow-name>
        <priority>2</priority>
        <barrier>false</barrier>
    </flow>

4. Delete the flow created.

**HTTP request:**

::

    Method: DELETE
    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111

Websocket change event notification subscription tutorial
---------------------------------------------------------

Subscribing to data change notifications makes it possible to obtain
notifications about data manipulation (insert, change, delete) done on
any specified **path** of any specified **datastore** with a specific
**scope**. In the following examples, *{odlAddress}* is the address of
the server where OpenDaylight is running and *{odlPort}* is the port on
which it listens.

Websocket notifications subscription process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this section we will learn what steps need to be taken in order to
successfully subscribe to data change event notifications.

Create stream
^^^^^^^^^^^^^

In order to use event notifications you first need to call the RPC that
creates a notification stream that you can later listen to. You need to
provide three parameters to this RPC:

- **path**: the datastore path that you plan to listen to. You can
  register a listener on containers, lists, and leaves.

- **datastore**: the datastore type, *OPERATIONAL* or *CONFIGURATION*.

- **scope**: represents the scope of the data change. Possible options
  are:

  - BASE: only changes directly to the data tree node specified in the
    path will be reported

  - ONE: changes to the node and to direct child nodes will be
    reported

  - SUBTREE: changes anywhere in the subtree starting at the node will
    be reported

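The three scopes can be made concrete with a small sketch. This is an
illustrative model, not OpenDaylight code: paths are represented as
tuples of node names, and the function decides whether a change at
``change_path`` is reported to a listener registered at
``listen_path``:

```python
# Illustrative semantics of BASE / ONE / SUBTREE for a data change
# listener (paths modeled as tuples of node names).
def is_reported(listen_path, change_path, scope):
    if scope == "BASE":
        return change_path == listen_path
    if scope == "ONE":
        return (change_path == listen_path or
                (len(change_path) == len(listen_path) + 1 and
                 change_path[:len(listen_path)] == listen_path))
    if scope == "SUBTREE":
        return change_path[:len(listen_path)] == listen_path
    raise ValueError("unknown scope: " + scope)

toaster = ("toaster",)
status = ("toaster", "toasterStatus")          # direct child
deep = ("toaster", "toasterSlot", "setting")   # grandchild
print(is_reported(toaster, status, "BASE"))    # False
print(is_reported(toaster, status, "ONE"))     # True
print(is_reported(toaster, deep, "ONE"))       # False
print(is_reported(toaster, deep, "SUBTREE"))   # True
```
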
The RPC to create the stream can be invoked via RESTCONF like this:

- URI:
  http://{odlAddress}:{odlPort}/restconf/operations/sal-remote:create-data-change-event-subscription

- HEADER: Content-Type=application/json

- OPERATION: POST

- DATA:

  ::

      {
          "input": {
              "path": "/toaster:toaster/toaster:toasterStatus",
              "sal-remote-augment:datastore": "OPERATIONAL",
              "sal-remote-augment:scope": "ONE"
          }
      }

The response should look something like this:

::

    {
        "output": {
            "stream-name": "toaster:toaster/toaster:toasterStatus/datastore=CONFIGURATION/scope=SUBTREE"
        }
    }

**stream-name** is important because you will need to use it when you
subscribe to the stream in the next step.

Internally, this will create a new listener for *stream-name* if it did
not already exist.

Subscribe to stream
^^^^^^^^^^^^^^^^^^^

In order to subscribe to the stream and obtain the WebSocket location
you need to call *GET* on your stream path. The URI should generally be
http://{odlAddress}:{odlPort}/restconf/streams/stream/{streamName},
where *{streamName}* is the *stream-name* parameter contained in the
response from the *create-data-change-event-subscription* RPC from the
previous step. For example:

::

    http://{odlAddress}:{odlPort}/restconf/streams/stream/toaster:toaster/datastore=CONFIGURATION/scope=SUBTREE

The expected response status is 200 OK and the response body should be
empty. You will get your WebSocket location from the **Location**
header of the response. For example, in our particular toaster example
the Location header would have this value:
*ws://{odlAddress}:8185/toaster:toaster/datastore=CONFIGURATION/scope=SUBTREE*

During this phase there is an internal check to see if a listener for
the *stream-name* from the URI exists. If not, a new listener is
registered with the DOM data broker.

Receive notifications
^^^^^^^^^^^^^^^^^^^^^

You should now have a data change notification stream created and the
location of a WebSocket. You can use this WebSocket to listen to data
change notifications. To listen to notifications you can use a
JavaScript client, or if you are using the Chrome browser you can use
the `Simple WebSocket
Client <https://chrome.google.com/webstore/detail/simple-websocket-client/pfdhoblngboilpfeibdedpjgfnlcodoo>`__.

Also, for testing purposes, there is a simple Java application named
WebSocketClient. The application is placed in the
*-sal-rest-connector-classes.class* project. It accepts a WebSocket URI
as an input parameter. After starting the utility (the WebSocketClient
class directly in Eclipse/IntelliJ IDEA), received notifications should
be displayed in the console.

Notifications are always in XML format and look like this:

::

    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
        <eventTime>2014-09-11T09:58:23+02:00</eventTime>
        <data-changed-notification xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:remote">
            <data-change-event>
                <path xmlns:meae="http://netconfcentral.org/ns/toaster">/meae:toaster</path>
                <operation>updated</operation>
                <!-- updated data -->
            </data-change-event>
        </data-changed-notification>
    </notification>

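A notification like the one above is easy to process outside the
browser as well. The following sketch (an illustration, not part of the
original tutorial) extracts each event's operation with the Python
standard library; the embedded XML string is a simplified copy of the
example above:

```python
import xml.etree.ElementTree as ET

# Namespace of the data-changed-notification payload shown above.
NS = "{urn:opendaylight:params:xml:ns:yang:controller:md:sal:remote}"

notification = """<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2014-09-11T09:58:23+02:00</eventTime>
  <data-changed-notification xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:remote">
    <data-change-event>
      <path>/meae:toaster</path>
      <operation>updated</operation>
    </data-change-event>
  </data-changed-notification>
</notification>"""

root = ET.fromstring(notification)
# Walk every data-change-event and print its operation.
for event in root.iter(NS + "data-change-event"):
    print(event.find(NS + "operation").text)   # updated
```
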
Example use case
^^^^^^^^^^^^^^^^

The typical use case is listening to data change events to update web
page data in real time. In this tutorial we will be using the toaster
as the example.

When you call the *make-toast* RPC, *toasterStatus* is set to "down" to
reflect that the toaster is busy making toast. When it finishes,
*toasterStatus* is set to "up" again. We will listen to these toaster
status changes in the datastore and reflect them on our web page in
real time thanks to WebSocket data change notifications.

Simple JavaScript client implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We will create a simple JavaScript web application that listens for
updates on the *toasterStatus* leaf and updates an element of our web
page according to the new toaster status state.

Create stream
^^^^^^^^^^^^^

First you need to create the stream that you are planning to subscribe
to. This can be achieved by invoking the
"create-data-change-event-subscription" RPC on RESTCONF via an AJAX
request. You need to provide the datastore **path** that you plan to
listen on, the **datastore type**, and the **scope**. If the request is
successful you can extract the **stream-name** from the response and
use that to subscribe to the newly created stream. The *{username}* and
*{password}* fields represent the credentials that you use to connect
to OpenDaylight via RESTCONF:

The default user name and password are "admin".

.. code:: javascript

    function createStream() {
        $.ajax({
            type: 'POST',
            url: 'http://{odlAddress}:{odlPort}/restconf/operations/sal-remote:create-data-change-event-subscription',
            headers: {
                'Authorization': 'Basic ' + btoa('{username}:{password}'),
                'Content-Type': 'application/json'
            },
            data: JSON.stringify(
                {
                    'input': {
                        'path': '/toaster:toaster/toaster:toasterStatus',
                        'sal-remote-augment:datastore': 'OPERATIONAL',
                        'sal-remote-augment:scope': 'ONE'
                    }
                }
            )
        }).done(function (data) {
            // this function will be called when the ajax call is executed successfully
            subscribeToStream(data.output['stream-name']);
        }).fail(function (data) {
            // this function will be called when the ajax call fails
            console.log("Create stream call unsuccessful");
        })
    }

Subscribe to stream
^^^^^^^^^^^^^^^^^^^

The next step is to subscribe to the stream. To subscribe to the stream
you need to call *GET* on
*http://{odlAddress}:{odlPort}/restconf/streams/stream/{stream-name}*.
If the call is successful, you get the WebSocket address for this
stream in the **Location** parameter inside the response header. You
can get the response header by calling
``getResponseHeader('Location')`` on the HttpRequest object inside the
*done()* function call:

.. code:: javascript

    function subscribeToStream(streamName) {
        $.ajax({
            type: 'GET',
            url: 'http://{odlAddress}:{odlPort}/restconf/streams/stream/' + streamName,
            headers: {
                'Authorization': 'Basic ' + btoa('{username}:{password}')
            }
        }).done(function (data, textStatus, httpReq) {
            // we need a function that has the http request object parameter in order to access response headers
            listenToNotifications(httpReq.getResponseHeader('Location'));
        }).fail(function (data) {
            console.log("Subscribe to stream call unsuccessful");
        })
    }

Receive notifications
^^^^^^^^^^^^^^^^^^^^^

Once you have the WebSocket server location you can connect to it and
start receiving data change events. You need to define functions that
will handle events on the WebSocket. In order to process incoming
events from OpenDaylight you need to provide a function that will
handle *onmessage* events. The function must have one parameter that
represents the received event object. The event data will be stored in
*event.data*. The data will be in XML format that you can then easily
parse, for example with jQuery:

.. code:: javascript

    function listenToNotifications(socketLocation) {
        try {
            var notificationSocket = new WebSocket(socketLocation);

            notificationSocket.onmessage = function (event) {
                // we process our received event here
                console.log('Received toaster data change event.');
                $($.parseXML(event.data)).find('data-change-event').each(
                    function () {
                        var operation = $(this).find('operation').text();
                        if (operation == 'updated') {
                            // toaster status was updated so we call the function that gets the value of the toasterStatus leaf
                            updateToasterStatus();
                        }
                    }
                );
            }
            notificationSocket.onerror = function (error) {
                console.log("Socket error: " + error);
            }
            notificationSocket.onopen = function (event) {
                console.log("Socket connection opened.");
            }
            notificationSocket.onclose = function (event) {
                console.log("Socket connection closed.");
            }
        } catch (e) {
            // if there is a problem on socket creation we get an exception (i.e. when the socket address is incorrect)
            alert("Error when creating WebSocket: " + e);
        }
    }

The *updateToasterStatus()* function calls *GET* on the path that was
modified and sets the toaster status in some web page element according
to the received data. After the WebSocket connection has been
established you can test events by calling the make-toast RPC via
RESTCONF.

For more information about WebSockets in JavaScript, visit `Writing
WebSocket client
applications <https://developer.mozilla.org/en-US/docs/WebSockets/Writing_WebSocket_client_applications>`__.

OpenDaylight Controller: Configuration
--------------------------------------

The Controller configuration operation has three stages:

- First, a Proposed configuration is created. Its target is to replace
  the old configuration.

- Second, the Proposed configuration is validated. If it passes
  validation successfully, its state changes to Validated.

- Finally, a Validated configuration can be Committed, and the affected
  modules reconfigured.

In fact, each configuration operation is wrapped in a transaction. Once
a transaction is created, it can be configured, that is to say, a user
can abort the transaction during this stage. After the transaction
configuration is done, it is committed to the validation stage. In this
stage, the validation procedures are invoked. If one or more
validations fail, the transaction can be reconfigured. Upon success,
the second phase commit is invoked. If this commit is successful, the
transaction enters the last stage, Committed. After that, the desired
modules are reconfigured. If the second phase commit fails, it means
that the transaction is unhealthy: basically, a new configuration
instance creation failed, and the application can be in an inconsistent
state.

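The life cycle above can be sketched as a tiny state machine. This is
an illustration only (the class and state names are invented, not the
config subsystem's Java types): validation is the first phase, and only
a transaction that passes it reaches the second phase commit:

```python
# Minimal sketch of the transaction life cycle: proposed -> validated
# -> committed, with a failed validation leaving the transaction
# reconfigurable.
class ConfigTransaction:
    def __init__(self, validators):
        self.state = "proposed"
        self._validators = validators   # callables returning True/False

    def commit(self):
        # First phase: run all validation procedures.
        if not all(v() for v in self._validators):
            self.state = "proposed"     # can be reconfigured and retried
            raise ValueError("validation failed")
        self.state = "validated"
        # Second phase: the real commit; affected modules get
        # reconfigured here in the real subsystem.
        self.state = "committed"
        return self.state

tx = ConfigTransaction([lambda: True])
print(tx.commit())   # committed
```
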
.. figure:: ./images/configuration.jpg
   :alt: Configuration states

   Configuration states

.. figure:: ./images/Transaction.jpg
   :alt: Transaction states

   Transaction states

Validation
~~~~~~~~~~

To secure the consistency and safety of the new configuration and to
avoid conflicts, the configuration validation process is necessary.
Usually, validation checks the input parameters of a new configuration,
and mostly verifies module-specific relationships. The validation
procedure results in a decision on whether the proposed configuration
is valid.

Dependency resolver
~~~~~~~~~~~~~~~~~~~

Since there can be dependencies between modules, a change in one
module's configuration can affect the state of other modules.
Therefore, we need to verify whether dependencies on other modules can
be resolved. The Dependency Resolver acts in a manner similar to
dependency injectors. Basically, a dependency tree is built.

APIs and SPIs
~~~~~~~~~~~~~

This section describes configuration system APIs and SPIs.

SPIs
^^^^

**Module** (org.opendaylight.controller.config.spi) is the common
interface for all modules: every module must implement it. The module
is designated to hold configuration attributes, validate them, and
create instances of services based on the attributes. This instance
must implement the AutoCloseable interface, to allow resource clean-up.
If the module was created from an already running instance, it contains
an old instance of the module. A module can implement multiple
services. If the module depends on other modules, setters need to be
annotated with @RequireInterface.

1. The module needs to be configured, that is, set with all required
   attributes.

2. The module is then moved to the commit stage for validation. If the
   validation fails, the module attributes can be reconfigured.
   Otherwise, a new instance is either created, or an old instance is
   reconfigured. A module instance is identified by a ModuleIdentifier,
   consisting of the factory name and the instance name.

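The Module contract (hold attributes, validate, create a closeable
service instance) can be mimicked in a few lines. This is a toy
analogue in Python, not the config SPI itself; the ToasterModule name
and the darkness attribute are invented for illustration:

```python
# Toy analogue of the config SPI Module contract.
class ToasterModule:
    def __init__(self, darkness=5):
        self.darkness = darkness           # configuration attribute

    def validate(self):
        # Module-specific validation of the configured attributes.
        if not 1 <= self.darkness <= 10:
            raise ValueError("darkness out of range")

    def get_instance(self):
        # Create the service instance; it exposes close() in the spirit
        # of AutoCloseable for resource clean-up.
        module = self
        class Service:
            def describe(self):
                return f"toaster(darkness={module.darkness})"
            def close(self):
                pass
        return Service()

m = ToasterModule(darkness=7)
m.validate()                    # first phase: validation
svc = m.get_instance()          # second phase: instance creation
print(svc.describe())           # toaster(darkness=7)
```
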
| **ModuleFactory** (org.opendaylight.controller.config.spi) - the
  ModuleFactory interface must be implemented by each module factory.
| A module factory can create a new module instance in two ways:

- From an existing module instance

- | An entirely new instance
  | A ModuleFactory can also return default modules, useful for
    populating the registry with already existing configurations. A
    module factory implementation must have a globally unique name.

APIs
^^^^

+--------------------------------------+--------------------------------------+
| ConfigRegistry                       | Represents functionality provided by |
|                                      | a configuration transaction (create, |
|                                      | destroy module, validate, or abort   |
|                                      | transaction).                        |
+--------------------------------------+--------------------------------------+
| ConfigTransactionController          | Represents functionality for         |
|                                      | manipulating configuration           |
|                                      | transactions (begin, commit config). |
+--------------------------------------+--------------------------------------+
| RuntimeBeanRegistratorAwareConfigBean| The module implementing this         |
|                                      | interface will receive               |
|                                      | RuntimeBeanRegistrator before        |
|                                      | getInstance is invoked.              |
+--------------------------------------+--------------------------------------+

Runtime beans
^^^^^^^^^^^^^

+--------------------------------------+--------------------------------------+
| RuntimeBean                          | Common interface for all runtime     |
|                                      | beans.                               |
+--------------------------------------+--------------------------------------+
| RootRuntimeBeanRegistrator           | Represents functionality for root    |
|                                      | runtime bean registration, which     |
|                                      | subsequently allows hierarchical     |
|                                      | registrations.                       |
+--------------------------------------+--------------------------------------+
| HierarchicalRuntimeBeanRegistration  | Represents functionality for runtime |
|                                      | bean registration and                |
|                                      | unregistration from the hierarchy.   |
+--------------------------------------+--------------------------------------+

JMX APIs
^^^^^^^^

| The JMX API is purposed as a transition between the Client API and
  the JMX platform.

+--------------------------------------+--------------------------------------+
| ConfigTransactionControllerMXBean    | Extends ConfigTransactionController, |
|                                      | executed by Jolokia clients on       |
|                                      | configuration transaction.           |
+--------------------------------------+--------------------------------------+
| ConfigRegistryMXBean                 | Represents the entry point of        |
|                                      | configuration management.            |
+--------------------------------------+--------------------------------------+
| Object names                         | Object Name is the pattern used in   |
|                                      | JMX to locate JMX beans. It consists |
|                                      | of a domain and key properties (at   |
|                                      | least one key-value pair). The       |
|                                      | domain is                            |
|                                      | "org.opendaylight.controller". The   |
|                                      | only mandatory property is "type".   |
+--------------------------------------+--------------------------------------+

Use case scenarios
^^^^^^^^^^^^^^^^^^

A few samples of successful and unsuccessful transaction scenarios
follow.

**Successful commit scenario**

1. The user creates a transaction by calling the createTransaction()
   method on ConfigRegistry.

2. ConfigRegistry creates a transaction controller, and registers the
   transaction as a new bean.

3. Runtime configurations are copied to the transaction. The user can
   create modules and set their attributes.

4. The configuration transaction is to be committed.

5. The validation process is performed.

6. After successful validation, the second phase commit begins.

7. Modules proposed to be destroyed are destroyed, and their service
   instances are closed.

8. Runtime beans are set to the registrator.

9. The transaction controller invokes the method getInstance on each
   module.

10. The transaction is committed, and resources are either closed or
    released.

| **Validation failure scenario**
| The transaction is the same as in the previous case until the
  validation step:

1. If validation fails (that is to say, illegal input attribute values
   or a dependency resolver failure), a ValidationException is thrown
   and exposed to the user.

2. The user can decide to reconfigure the transaction and commit again,
   or abort the current transaction.

3. On aborted transactions, the TransactionController and
   JMXRegistrator are closed.

4. An unregistration event is sent to ConfigRegistry.

Default module instances
^^^^^^^^^^^^^^^^^^^^^^^^

The configuration subsystem provides a way for modules to create
default instances. A default instance is an instance of a module that
is created at module bundle start-up, that is, when the module becomes
visible to the configuration subsystem (for example, when its bundle is
activated in the OSGi environment). By default, no default instances
are produced.

A default instance does not differ from instances created later in the
module life cycle. The only difference is that the configuration for
the default instance cannot be provided by the configuration subsystem.
The module has to acquire the configuration for these instances on its
own, for example, from environment variables. After the creation of a
default instance, it acts as a regular instance and fully participates
in the configuration subsystem (it can be reconfigured or deleted in
subsequent transactions).
