:Data Tree:
    An instantiated logical tree that represents configuration or operational
    state data of a modeled problem domain (for example, a controller or a
    network element).

:Data Tree Consumer:
    A component acting on data after this data is introduced into one or more
    particular subtrees of a Data Tree.

:Data Tree Identifier:
    A unique identifier for a particular subtree of a Data Tree. It is composed
    of the logical data store type and the instance identifier of the subtree's
    root node. It is represented by a ``DOMDataTreeIdentifier``.

:Data Tree Producer:
    A component responsible for providing data for one or more particular
    subtrees of a Data Tree.

:Data Tree Shard:
    A component responsible for providing storage or access to a particular
    subtree of a Data Tree.

:Shard Layout:
    A longest-prefix mapping between Data Tree Identifiers and the Data Tree
    Shards responsible for providing access to the corresponding data subtrees.

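To make the Data Tree Identifier definition concrete, the sketch below models it as a (data store type, path) pair. ``TreeId`` and its members are illustrative stand-ins, not the real ``DOMDataTreeIdentifier`` and ``YangInstanceIdentifier`` classes:

```java
import java.util.List;

// Illustrative stand-in for DOMDataTreeIdentifier: a logical data store
// type plus the instance identifier (path) of the subtree's root node.
class TreeId {
    enum StoreType { CONFIGURATION, OPERATIONAL }

    final StoreType store;
    final List<String> path;

    TreeId(StoreType store, List<String> path) {
        this.store = store;
        this.path = path;
    }

    // One identifier contains another if both live in the same logical
    // store and its path is a prefix of the other identifier's path.
    boolean contains(TreeId other) {
        return store == other.store
            && path.size() <= other.path.size()
            && other.path.subList(0, path.size()).equals(path);
    }
}
```

The prefix relation sketched in ``contains`` is what subtree claims and longest-prefix shard lookups later in this document build upon.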
Data Tree is a Namespace
------------------------

The concept of a data tree comes from `RFC6020
<https://tools.ietf.org/html/rfc6020>`_. It is vaguely
split into two instances, configuration and operational. The implicit
assumption is that **config implies oper**, i.e. any configuration data is
also valid operational data. Further interactions between the two are left
undefined and the YANG language is not strictly extensible in the number and
semantics of these instances, leaving a lot to implementation details. An
outline of data tree use, which is consistent with the current MD-SAL design,
is described in `draft-kwatsen-netmod-opstate
<https://tools.ietf.org/html/draft-kwatsen-netmod-opstate>`_.

The OpenDaylight MD-SAL design makes no inherent assumptions about the
relationship between the configuration and operational data tree instances.
They are treated as separate entities and they are both fully addressable via
the ``DOMDataTreeIdentifier`` objects. It is up to MD-SAL plugins (e.g.
protocol plugins or applications) to maintain this relationship. This reflects
the asynchronous nature of applying configuration and also the fact that the
intended configuration data may be subject to translation (such as template
configuration instantiation).

Both the configuration and operational namespaces (data trees) are instances
of the Conceptual Data Tree. Any data item in the conceptual data tree is
addressed via a ``YangInstanceIdentifier`` object, which is a unique,
hierarchical, content-based identifier. All applications use these identifier
objects to identify data to MD-SAL services, which in turn are expected to
route operations on that data to the backend instances where the data actually
resides, keeping applications unaware of how the namespace is partitioned.

.. _identifiers-vs-locators:

Identifiers versus Locators
---------------------------

It is important to note that when we talk about Identifiers and Locators,
we **do not** mean `URIs and URLs
<https://en.wikipedia.org/wiki/Uniform_Resource_Identifier>`_,
but rather URNs and URLs as strictly separate entities. MD-SAL plugins do not
have access to locators and it is the job of MD-SAL services to provide
location independence.

The details of how a particular MD-SAL service achieves location independence
are currently left up to the service's implementation, which leads to the
problem of having MD-SAL services cooperate, such as storing data in different
backends (in-memory, SQL, NoSQL, etc.) and providing unified access to all
available data. Note that data availability is subject to the capabilities of a
particular storage engine and its operational state, which leads to the design
decision that a ``YangInstanceIdentifier`` lookup needs to be performed in two
steps:

#. A longest-prefix match is performed to locate the storage backend instance

#. Masked path elements are resolved by the storage engine

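The first of these steps can be sketched as follows. ``ShardTable`` and its methods are hypothetical illustrations, not the actual MD-SAL registry; step 2 is left to whichever storage engine the lookup selects:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of step 1: find the backend registered at the longest
// matching prefix of the requested path. The remaining ("masked") path
// elements would then be resolved by that storage engine (step 2).
class ShardTable {
    private final Map<List<String>, String> shards = new HashMap<>();

    void register(List<String> prefix, String shardName) {
        shards.put(prefix, shardName);
    }

    String lookup(List<String> path) {
        // Walk from the most specific prefix towards the root.
        for (int len = path.size(); len >= 0; len--) {
            String shard = shards.get(path.subList(0, len));
            if (shard != null) {
                return shard;
            }
        }
        return null;
    }
}
```

Registering a backend at the empty prefix plays the role of a default store, so every lookup resolves to some backend.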
A process similar to the first step above is performed today by the Distributed
Data Store implementation to split data into Shards. The concept of a Shard as
currently implemented is limited to specifying namespaces, and it does not
allow for pluggable storage engines.

In the context of the Conceptual Data Tree, the concept of a Shard is
generalized as the shorthand equivalent of a storage backend instance. A Shard
can be attached at any (even wildcard) ``YangInstanceIdentifier``. This
contract is exposed via the ``DOMShardedDataTree``, which is an MD-SAL SPI
class that implements a ``YangInstanceIdentifier`` -> ``Shard`` registry
service. This is an omnipresent MD-SAL service, the Shard Registry, whose
visibility scope is a single OpenDaylight instance (i.e. a cluster member).
**Shard Layout** refers to the mapping information contained in this service.

Federation, Replication and High Availability
---------------------------------------------

Support for various multi-node scenarios is a concern outside of core MD-SAL.
If a particular scenario requires the shard layout to be replicated (either
fully or partially), it is up to Shard providers to maintain an omnipresent
service on each node, which in turn is responsible for dynamically registering
``DOMDataTreeShard`` instances with the Shard Registry service.

Since the Shard Layout is strictly local to a particular OpenDaylight instance,
an OpenDaylight cluster is not strictly consistent in its mapping of
``YangInstanceIdentifier`` to data. When a query for the entire data tree is
executed, the returned result will vary between member instances based on the
differences in their Shard Layouts. This allows each node to project its local
operational details, and it allows the data set being worked on to be
partitioned based on workload and node availability.

Partial symmetry of the conceptual data tree can still be maintained to
the extent that a particular deployment requires. For example, the Shard
containing the OpenFlow topology can be configured to be registered on all
cluster members, leading to queries into that topology returning consistent
results.

Data Tree Listener
------------------

A Data Tree Listener is a data consumer, for example a process that wants
to act on data after it has been introduced to the Conceptual Data Tree.

A Data Tree Listener implements the :mdsal-apidoc:`DOMDataTreeListener
<DOMDataTreeListener.html>` interface and registers itself using
:mdsal-apidoc:`DOMDataTreeService <DOMDataTreeService.html>`.

A Data Tree Listener may register for multiple subtrees. Each time it is
invoked it will be provided with the current state of all subtrees that it
is registered for.

.. todo:: Consider linking / inlining interface

.. code-block:: java
   :caption: DOMDataTreeListener interface signature

   public interface DOMDataTreeListener extends EventListener {

       void onDataTreeChanged(Collection<DataTreeCandidate> changes,            // (1)
               Map<DOMDataTreeIdentifier, NormalizedNode<?, ?>> subtrees);

       void onDataTreeFailed(Collection<DOMDataTreeListeningException> causes); // (2)
   }

#. Invoked when the data tree to which the Data Tree Listener is subscribed
   has changed. `changes` contains the collection of changes, `subtrees`
   contains the current state of all subtrees to which the listener is
   subscribed.

#. Invoked when a subtree listening failure occurs. For example, a failure
   can be triggered when a connection to an external subtree source is
   lost.

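The notification contract above can be illustrated with a small sketch: on every invocation the listener receives the current state of *all* registered subtrees, not only the one that changed. All names here (``TreeListener``, ``Registration``) are simplified stand-ins for the MD-SAL classes:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of the DOMDataTreeListener notification contract.
class ListenerSketch {
    interface TreeListener {
        // changes: what changed; subtrees: current state of ALL subtrees
        void onDataTreeChanged(List<String> changes, Map<String, String> subtrees);
    }

    static class Registration {
        private final Map<String, String> state = new HashMap<>();
        private final TreeListener listener;

        Registration(TreeListener listener, List<String> subtrees) {
            this.listener = listener;
            subtrees.forEach(s -> state.put(s, null)); // registered, no data yet
        }

        // Deliver a change to one subtree; the notification always carries
        // a snapshot of every registered subtree.
        void publish(String subtree, String data) {
            if (state.containsKey(subtree)) {
                state.put(subtree, data);
                listener.onDataTreeChanged(List.of(subtree), new HashMap<>(state));
            }
        }
    }
}
```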
Data Tree Producer
------------------

A Data Tree Producer represents a source of data in the system. Data Tree
Producer implementations are not required to implement a specific interface,
but use a :mdsal-apidoc:`DOMDataTreeProducer <DOMDataTreeProducer.html>`
instance to publish data (i.e. to modify the Conceptual Data Tree).

A Data Tree Producer is exclusively bound to one or more subtrees of the
Conceptual Data Tree, i.e. binding a Data Tree Producer to a subtree prevents
other Data Tree Producers from modifying the subtree.

* A failed Data Tree Producer still holds a claim to the namespace to which
  it is bound (i.e. the exclusive lock of the subtree) until it is closed.

:mdsal-apidoc:`DOMDataTreeProducer <DOMDataTreeProducer.html>` represents a
Data Tree Producer context:

* allows transactions to be submitted to the subtrees specified at creation
  time
* at any given time there may be at most a single open transaction
* once a transaction is submitted, it will proceed to be committed

.. todo:: Consider linking / inlining interface

.. code-block:: java
   :caption: DOMDataTreeProducer interface signature

   public interface DOMDataTreeProducer extends DOMDataTreeProducerFactory, AutoCloseable {
       DOMDataWriteTransaction createTransaction(boolean isolated);                    // (1)
       DOMDataTreeProducer createProducer(Collection<DOMDataTreeIdentifier> subtrees); // (2)
   }

#. Allocates a new transaction. All previously allocated transactions must
   have been either submitted or canceled. Setting `isolated` to `true`
   disables state compression for this transaction.

#. Creates a sub-producer for the provided `subtrees`. The parent producer
   loses the ability to access the specified paths until the resulting child
   producer is closed.

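The exclusivity and single-transaction rules above can be sketched as follows; ``ProducerSketch`` is a hypothetical illustration, not the actual ``DOMDataTreeProducer`` implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the producer contract: exclusive subtree claims
// and at most one open transaction per producer.
class ProducerSketch implements AutoCloseable {
    // Stand-in for the instance-wide claim registry.
    private static final Set<String> CLAIMED = new HashSet<>();

    private final Set<String> subtrees;
    private boolean transactionOpen;

    ProducerSketch(Set<String> subtrees) {
        for (String subtree : subtrees) {
            if (!CLAIMED.add(subtree)) {
                throw new IllegalStateException(subtree + " is already bound");
            }
        }
        this.subtrees = subtrees;
    }

    void createTransaction() {
        if (transactionOpen) {
            throw new IllegalStateException("previous transaction still open");
        }
        transactionOpen = true;
    }

    void submit() {
        // In the real API the commit then proceeds asynchronously.
        transactionOpen = false;
    }

    @Override
    public void close() {
        CLAIMED.removeAll(subtrees); // release the namespace claim
    }
}
```

Note that the claim is released only on ``close()``, mirroring the rule that even a failed producer holds its namespace until it is closed.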
- **A Data Tree Shard** is always bound to either the ``OPERATIONAL`` or the
  ``CONFIG`` space, never to both at the same time.

- **Data Tree Shards** may be nested: the parent shard must be aware of
  sub-shards and execute every request in the context of a self-consistent
  view of sub-shard liveness. Data requests passing through it must be
  multiplexed with sub-shard creations/deletions. In other words, if an
  application creates a transaction rooted at the parent Shard and attempts
  to access data residing in a sub-shard, the parent Shard implementation
  must coordinate with the sub-shard implementation to provide the illusion
  that the data resides in the parent shard. In the case of a transaction
  running concurrently with sub-shard creation or deletion, these operations
  need to execute atomically with respect to each other, which is to say that
  each transaction must execute completely as if the sub-shard
  creation/deletion occurred before the transaction started, or as if the
  transaction completed before the sub-shard creation/deletion request was
  executed. This requirement can also be satisfied by the Shard
  implementation preventing transactions from completing. A Shard
  implementation may choose to abort any open transactions prior to
  executing a sub-shard operation.

- **Shard Layout** is local to an OpenDaylight instance.

- **Shard Layout** is modified by agents (registering / unregistering Data
  Tree Shards) in order to make the Data Tree Shard and the underlying data
  available to plugins and applications executing on that particular
  OpenDaylight instance.

Registering a Shard with the Conceptual Data Tree
-------------------------------------------------

.. note:: Namespace in this context means a Data Tree Identifier prefix.

#. **Claim a namespace** - An agent that is registering a shard must prove
   that it has sufficient rights to modify the subtree where the shard is
   going to be attached. A namespace for the shard is claimed by binding a
   Data Tree Producer instance to the same subtree where the shard will be
   bound. The Data Tree Producer must not have any open child producers, and
   it should not have any outstanding transactions.

#. **Create a shard instance** - Once a namespace is claimed, the agent
   creates the shard instance.

#. **Attach shard** - The agent registers the created shard instance and
   provides the Data Tree Producer instance in the registration to verify the
   namespace claim. The newly created Shard is checked for its ability to
   cooperate with its parent shard. If the check is successful, the newly
   created Shard is attached to its parent shard and recorded in the Shard
   Layout.

#. **Remove namespace claim** (optional) - If the Shard is providing storage
   for applications, the agent should close the Data Tree Producer instance
   to make the subtree available to applications.

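The four steps can be outlined in code. Everything here (``Registry``, ``claimNamespace``, ``attachShard``) is an illustrative stand-in for the MD-SAL SPI, not its actual API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical outline of the shard registration sequence.
class ShardRegistrationSketch {
    interface Shard { }

    static class Registry {
        private final Map<String, Shard> layout = new HashMap<>();
        private final Set<String> claims = new HashSet<>();

        // Step 1: claim the namespace (stand-in for binding a producer).
        Runnable claimNamespace(String prefix) {
            if (!claims.add(prefix)) {
                throw new IllegalStateException(prefix + " is already claimed");
            }
            return () -> claims.remove(prefix);
        }

        // Step 3: attach the shard; an existing claim proves the right to do so.
        void attachShard(String prefix, Shard shard) {
            if (!claims.contains(prefix)) {
                throw new IllegalStateException("namespace not claimed");
            }
            layout.put(prefix, shard);
        }

        Shard lookup(String prefix) {
            return layout.get(prefix);
        }
    }

    static void register(Registry registry, String prefix, Shard shard) {
        Runnable claim = registry.claimNamespace(prefix); // step 1
        try {
            registry.attachShard(prefix, shard);          // steps 2-3
        } finally {
            claim.run();                                  // step 4 (optional)
        }
    }
}
```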
Steps 1, 2 and 3 may fail, and the recovery strategy depends
on which step failed and on the failure reason.

.. todo:: Describe possible failures and recovery scenarios