YANG Tools Developer Guide
==========================
YANG Tools is a set of libraries and tooling providing support for the
use of `YANG <https://tools.ietf.org/html/rfc6020>`__ in Java (or other
JVM-based language) projects and applications.
YANG Tools provides the following features in OpenDaylight:
- parsing of YANG sources and semantic inference of relationships
  across YANG models as defined in
  `RFC6020 <https://tools.ietf.org/html/rfc6020>`__
- representation of YANG-modeled data in Java

  - **Normalized Node** representation - a DOM-like tree model which
    uses a conceptual meta-model more tailored to YANG and OpenDaylight
    use-cases than a standard XML DOM model allows for.
- serialization / deserialization of YANG-modeled data driven by YANG
  models

  - XML - as defined in
    `RFC6020 <https://tools.ietf.org/html/rfc6020>`__

  - JSON - as defined in
    `draft-lhotka-netmod-yang-json-01 <https://tools.ietf.org/html/draft-lhotka-netmod-yang-json-01>`__
- support for third-party generators processing YANG models.
The YANG Tools project consists of the following logical subsystems:
- **Commons** - Set of general-purpose code which is not specific to
  YANG, but is also useful outside the YANG Tools implementation.
- **YANG Model and Parser** - YANG semantic model and lexical and
  semantic parser of YANG models, which creates an in-memory
  cross-referenced representation of YANG models that is used by other
  components to determine their behaviour based on the model.
- **YANG Data** - Definition of the Normalized Node APIs and Data Tree
  APIs, reference implementation of these APIs and implementation of
  XML and JSON codecs for Normalized Nodes.
- **YANG Maven Plugin** - Maven plugin which integrates the YANG parser
  into the Maven build lifecycle and provides a code-generation
  framework for components which want to generate code or other
  artefacts based on YANG models.
The project defines base concepts and helper classes which are
project-agnostic and can be used outside of the YANG Tools project scope.
- yang-data-codec-gson

- yang-maven-plugin-it

- yang-maven-plugin-spi
Class diagram of yang model API

.. figure:: images/yang-model-api.png
The Yang Statement Parser works on the idea of statement concepts as
defined in RFC6020, section 6.3. The basic concepts are ModelStatement
and StatementDefinition, following the RFC6020 idea of having a
sequence of statements, where every statement contains a keyword and
zero or one argument. ModelStatement is extended by DeclaredStatement
(as it comes from the source, e.g. a YANG source) and
EffectiveStatement, which contains other substatements and tends to
represent the result of semantic processing of other statements (uses,
augment for YANG). IdentifierNamespace represents the common superclass
for YANG model namespaces.
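The relationships described above can be illustrated with a simplified
sketch. Note that this is a hypothetical, stripped-down model written
for illustration only; the real interfaces in yang-model-api are
generic over argument types and carry many more methods.

.. code:: java

    import java.util.List;

    // Hypothetical sketch of the statement concepts; not the real API.

    // Every YANG statement has a keyword and zero or one argument.
    interface ModelStatement {
        String keyword();
        String argument(); // null when the statement has no argument
    }

    // A statement exactly as it was written in the source.
    interface DeclaredStatement extends ModelStatement {
    }

    // The result of semantic processing (e.g. uses/augment expansion),
    // holding its effective substatements.
    interface EffectiveStatement extends ModelStatement {
        List<EffectiveStatement> effectiveSubstatements();
    }

    // A minimal effective statement, e.g. "container parent-container { ... }".
    record SimpleEffectiveStatement(String keyword, String argument,
            List<EffectiveStatement> effectiveSubstatements) implements EffectiveStatement {
    }

The key point is the split: a DeclaredStatement mirrors the source text,
while an EffectiveStatement reflects the model after semantic inference.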
The input of the Yang Statement Parser is a collection of
StatementStreamSource objects. The StatementStreamSource interface is
used for inference of the effective model and is required to emit its
statements using the supplied StatementWriter. Each source (e.g. a YANG
source) has to be processed in three steps in order to emit different
statements for each step. This package provides support for various
namespaces used across the statement parser in order to map relations
during the declaration phase.
Currently, there are two implementations of StatementStreamSource:

- YangStatementSourceImpl - intended for YANG sources

- YinStatementSourceImpl - intended for YIN sources
Class diagram of yang data API

.. figure:: images/yang-data-api.png
Codecs which enable serialization of NormalizedNodes into YANG-modeled
data in XML or JSON format and deserialization of YANG-modeled data in
XML or JSON format into NormalizedNodes.
Maven plugin which integrates the YANG parser into the Maven build
lifecycle and provides a code-generation framework for components which
want to generate code or other artefacts based on a YANG model.
Working with YANG Model
~~~~~~~~~~~~~~~~~~~~~~~
The first thing you need to do if you want to work with YANG models is
to instantiate a SchemaContext object. This object type describes one
or more parsed YANG modules.
In order to create it you need to utilize the YANG statement parser,
which takes one or more StatementStreamSource objects as input and then
produces the SchemaContext object.
A StatementStreamSource object contains the source file information. It
has two implementations, one for YANG sources - YangStatementSourceImpl,
and one for YIN sources - YinStatementSourceImpl.
Here is an example of creating StatementStreamSource objects for YANG
files, providing them to the YANG statement parser and building the
resulting SchemaContext:
.. code:: java

    StatementStreamSource yangModuleSource = new YangStatementSourceImpl("/example.yang", false);
    StatementStreamSource yangModuleSource2 = new YangStatementSourceImpl("/example2.yang", false);

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild();
    reactor.addSources(yangModuleSource, yangModuleSource2);

    SchemaContext schemaContext = reactor.buildEffective();
First, StatementStreamSource objects should be instantiated with two
constructor arguments: the path to the yang source file (which is a
regular String object) and a boolean which determines whether the path
is absolute or relative.
Next comes the initiation of a new yang parsing cycle, which is
represented by a CrossSourceStatementReactor.BuildAction object. You
can get it by calling the method newBuild() on the
CrossSourceStatementReactor object (RFC6020_REACTOR) in the
YangInferencePipeline class.
Then you should feed yang sources to it by calling the method
addSources(), which takes one or more StatementStreamSource objects as
arguments.
Finally, you call the method buildEffective() on the reactor object,
which returns an EffectiveSchemaContext (a concrete implementation of
SchemaContext). Now you are ready to work with the contents of the
added yang sources.
Let us explain how to work with models contained in the newly created
SchemaContext. If you want to get all the modules in the schemaContext,
you have to call the method getModules(), which returns a Set of
modules. Similarly, if you want to get all the data definitions in the
schemaContext, you call the method getDataDefinitions(), etc.
.. code:: java

    Set<Module> modules = schemaContext.getModules();
    Set<DataSchemaNode> dataSchemaNodes = schemaContext.getDataDefinitions();
Usually you want to access specific modules. Getting a concrete module
from the SchemaContext is a matter of calling one of these methods:
- findModuleByName(),

- findModuleByNamespace(),

- findModuleByNamespaceAndRevision().
In the first case, you need to provide the module name as it is defined
in the yang source file and the module revision date if it is specified
in the yang source file (if it is not defined, you can just pass a null
value). In order to provide the revision date in the proper format, you
can use a utility class named SimpleDateFormatUtil.
.. code:: java

    Module exampleModule = schemaContext.findModuleByName("example-module", null);

    Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
    Module exampleModule2 = schemaContext.findModuleByName("example-module", revisionDate);
In the second case, you have to provide the module namespace in the
form of a URI object:

.. code:: java

    Module exampleModule = schema.findModuleByNamespace(new URI("opendaylight.org/example-module"));
In the third case, you provide both the module namespace and the
revision date as arguments.
Once you have a Module object, you can access its contents as they are
defined in the YANG Model API. One way to do this is to use methods
like getIdentities() or getRpcs(), which will give you a Set of
objects. Alternatively, you can access a DataSchemaNode directly via
the method getDataChildByName(), which takes a QName object as its only
argument. Here are a few examples:
.. code:: java

    Set<AugmentationSchema> augmentationSchemas = exampleModule.getAugmentations();
    Set<ModuleImport> moduleImports = exampleModule.getImports();

    ChoiceSchemaNode choiceSchemaNode = (ChoiceSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-choice"));

    ContainerSchemaNode containerSchemaNode = (ContainerSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-container"));
The YANG statement parser can work in three modes:

- default mode

- mode with active resolution of if-feature statements

- mode with active semantic version processing
The default mode is active when you initialize the parsing cycle as
usual by calling the method newBuild() without passing any arguments to
it. The second and third modes can be activated by invoking newBuild()
with a special argument. You can either activate just one of them or
both by passing the proper arguments. Let us explain how these modes
work.
The mode with active resolution of if-features makes yang statements
containing an if-feature statement conditional based on the supported
features. These features are provided in the form of a QName-based
java.util.function.Predicate object. In the example below, only two
features are supported: example-feature-1 and example-feature-2. The
Predicate which contains this information is passed to the method
newBuild() and the mode is activated.
.. code:: java

    Predicate<QName> isFeatureSupported = qName -> {
        Set<QName> supportedFeatures = new HashSet<>();
        supportedFeatures.add(QName.create("example-namespace", "2016-08-31", "example-feature-1"));
        supportedFeatures.add(QName.create("example-namespace", "2016-08-31", "example-feature-2"));
        return supportedFeatures.contains(qName);
    };

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(isFeatureSupported);
In case no features should be supported, you should provide a
Predicate<QName> object whose test() method just returns false:
.. code:: java

    Predicate<QName> isFeatureSupported = qName -> false;

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(isFeatureSupported);
When this mode is not activated, all features in the processed YANG
sources are supported.
The mode with active semantic version processing changes the way YANG
import statements work - each module import is processed based on the
specified semantic version statement and the revision-date statement is
ignored. In order to activate this mode, you have to provide the
StatementParserMode.SEMVER_MODE enum constant as argument to the method
newBuild():

.. code:: java

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(StatementParserMode.SEMVER_MODE);
Before you use a semantic version statement in a YANG module, you need
to define an extension for it so that the YANG statement parser can
recognize it:
.. code:: none

    module semantic-version {
        namespace "urn:opendaylight:yang:extension:semantic-version";
        prefix sv;

        revision 2016-02-02 {
            description "Initial version";
        }
        sv:semantic-version "0.0.1";

        extension semantic-version {
            argument "semantic-version" {
                yin-element false;
            }
        }
    }
In the example above, you see a YANG module which defines semantic
version as an extension. This extension can be imported into other
modules in which we want to utilize the semantic versioning concept.
Below is a simple example of semantic versioning usage. With the
semantic version processing mode active, the foo module imports the bar
module based on its semantic version. Notice how both modules import
the module with the semantic-version extension.
.. code:: none

    module foo {
        // ...

        import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }
        import bar { prefix bar; sv:semantic-version "0.1.2"; }

        revision "2016-02-01" {
            description "Initial version";
        }
        sv:semantic-version "0.1.1";

        // ...
    }
.. code:: none

    module bar {
        // ...

        import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }

        revision "2016-01-01" {
            description "Initial version";
        }
        sv:semantic-version "0.1.2";

        // ...
    }
Every semantic version must have the form x.y.z, where x corresponds to
the major version, y corresponds to the minor version and z corresponds
to the patch version. If no semantic version is specified in a module
or an import statement, then the default one is used - 0.0.0.
A major version number of 0 indicates that the model is still in
development and is subject to change.
Following the release of major version 1, all modules will increment
the major version number when backwards incompatible changes to the
model are introduced.
The minor version is changed when features are added to the model that
do not impact current clients' use of the model.
The patch version is incremented when non-feature changes (such as
bugfixes or clarifications of human-readable descriptions that do not
impact model functionality) are made that maintain backwards
compatibility.
When importing a module with the semantic version processing mode
activated, only the module with the newest (highest) compatible
semantic version is imported. Two semantic versions are compatible when
all of the following conditions are met:
- the major version in the import statement and the major version in
  the imported module are equal. For instance, 1.5.3 is compatible with
  1.5.3, 1.5.4, 1.7.2, etc., but it is not compatible with 0.5.2 or any
  version with a different major version number.
- the combination of minor version and patch version in the import
  statement is not higher than the one in the imported module. For
  instance, 1.5.2 is compatible with 1.5.2, 1.5.4, 1.6.8, etc. In fact,
  1.5.2 is also compatible with versions like 1.5.1, 1.4.9 or 1.3.7, as
  they have an equal major version. However, they will not be imported
  because their minor and patch versions are lower (older).
If the import statement does not specify a semantic version, then the
default one is chosen - 0.0.0. Thus, the module is imported only if it
has a semantic version compatible with the default one, for example
0.0.0, 0.1.3, 0.3.5 and so on.
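The two conditions above can be sketched in plain Java. The helper
below is hypothetical and written only to illustrate the selection
rule (YANG Tools ships its own semantic version implementation); it
checks whether a module version satisfies an import declaration:

.. code:: java

    // Hypothetical illustration of the compatibility rules above,
    // not part of the YANG Tools API.
    final class SemVerCheck {
        private SemVerCheck() {
        }

        // Returns true when a module with version 'moduleVer' satisfies an
        // import declaring 'importVer' (both in "x.y.z" form):
        // equal major, and the import's minor.patch not higher than the module's.
        static boolean isCompatible(String importVer, String moduleVer) {
            int[] imp = parse(importVer);
            int[] mod = parse(moduleVer);
            if (imp[0] != mod[0]) {
                return false; // major versions must be equal
            }
            if (imp[1] != mod[1]) {
                return imp[1] < mod[1]; // minor version decides
            }
            return imp[2] <= mod[2]; // equal minor: patch decides
        }

        private static int[] parse(String version) {
            String[] parts = version.split("\\.");
            return new int[] { Integer.parseInt(parts[0]),
                    Integer.parseInt(parts[1]), Integer.parseInt(parts[2]) };
        }
    }

For example, an import of 1.5.2 is satisfied by modules 1.5.2, 1.5.4 or
1.6.8, but not by 1.4.9 (lower minor) or 0.5.2 (different major).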
Working with YANG Data
~~~~~~~~~~~~~~~~~~~~~~
If you want to work with YANG data you are going to need NormalizedNode
objects, which are specified in the YANG Data API. NormalizedNode is an
interface at the top of the YANG Data hierarchy. It is extended through
sub-interfaces which define the behaviour of specific NormalizedNode
types like AnyXmlNode, ChoiceNode, LeafNode, ContainerNode, etc.
Concrete implementations of these interfaces are defined in the
yang-data-impl module. Once you have one or more NormalizedNode
instances, you can perform CRUD operations on the YANG data tree, which
is an in-memory database designed to store normalized nodes in a
tree-like structure.
In some cases it is clear which NormalizedNode type belongs to which
yang statement (e.g. AnyXmlNode, ChoiceNode, LeafNode). However, there
are some normalized nodes which are named differently from their yang
counterparts. They are listed below:
- LeafSetNode - leaf-list

- OrderedLeafSetNode - leaf-list that is ordered-by user

- LeafSetEntryNode - concrete entry in a leaf-list

- MapNode - keyed list

- OrderedMapNode - keyed list that is ordered-by user

- MapEntryNode - concrete entry in a keyed list

- UnkeyedListNode - unkeyed list

- UnkeyedListEntryNode - concrete entry in an unkeyed list
In order to create a concrete NormalizedNode object you can use the
utility classes Builders or ImmutableNodes. These classes can be found
in the yang-data-impl module and they provide methods for building each
type of normalized node. Here is a simple example of building a
normalized node:
.. code:: java

    ContainerNode containerNode = Builders.containerBuilder().withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"))).build();

    ContainerNode containerNode2 = Builders.containerBuilder(containerSchemaNode).build();
Both examples produce the same result. NodeIdentifier is one of the
four types of YangInstanceIdentifier (these types are described in the
javadoc of YangInstanceIdentifier). The purpose of a
YangInstanceIdentifier is to uniquely identify a particular node in the
data tree. In the first example, you have to add the NodeIdentifier
before building the resulting node. In the second example it is added
using the provided ContainerSchemaNode object.
The ImmutableNodes class offers similar builder methods and also adds
an overloaded method called fromInstanceId(), which allows you to
create a NormalizedNode object based on a YangInstanceIdentifier and a
SchemaContext. Below is an example which shows the use of this method:
.. code:: java

    YangInstanceIdentifier.NodeIdentifier contId = new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"));

    NormalizedNode<?, ?> contNode = ImmutableNodes.fromInstanceId(schemaContext, YangInstanceIdentifier.create(contId));
Let us show a more complex example of creating a NormalizedNode. First,
consider the following YANG module:
.. code:: none

    module example-module {
        namespace "opendaylight.org/example-module";
        // ...

        container parent-container {
            container child-container {
                list parent-ordered-list {
                    ordered-by user;
                    key "parent-key-leaf";

                    leaf parent-key-leaf {
                        type string;
                    }

                    leaf parent-ordinary-leaf {
                        type string;
                    }

                    list child-ordered-list {
                        ordered-by user;
                        key "child-key-leaf";

                        leaf child-key-leaf {
                            type string;
                        }

                        leaf child-ordinary-leaf {
                            type string;
                        }
                    }
                }
            }
        }
    }
In the following example, two normalized nodes based on the module
above are written to and read from the data tree:
.. code:: java

    TipProducingDataTree inMemoryDataTree = InMemoryDataTreeFactory.getInstance().create(TreeType.OPERATIONAL);
    inMemoryDataTree.setSchemaContext(schemaContext);

    // first data tree modification
    MapEntryNode parentOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
            new YangInstanceIdentifier.NodeIdentifierWithPredicates(
                    parentOrderedListQName, parentKeyLeafQName, "pkval1"))
            .withChild(Builders.leafBuilder().withNodeIdentifier(
                    new YangInstanceIdentifier.NodeIdentifier(parentOrdinaryLeafQName))
                    .withValue("plfval1").build()).build();

    OrderedMapNode parentOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
            new YangInstanceIdentifier.NodeIdentifier(parentOrderedListQName))
            .withChild(parentOrderedListEntryNode).build();

    ContainerNode parentContainerNode = Builders.containerBuilder().withNodeIdentifier(
            new YangInstanceIdentifier.NodeIdentifier(parentContainerQName))
            .withChild(Builders.containerBuilder().withNodeIdentifier(
                    new NodeIdentifier(childContainerQName)).withChild(parentOrderedListNode).build()).build();

    YangInstanceIdentifier path1 = YangInstanceIdentifier.of(parentContainerQName);

    DataTreeModification treeModification = inMemoryDataTree.takeSnapshot().newModification();
    treeModification.write(path1, parentContainerNode);

    // second data tree modification
    MapEntryNode childOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
            new YangInstanceIdentifier.NodeIdentifierWithPredicates(
                    childOrderedListQName, childKeyLeafQName, "chkval1"))
            .withChild(Builders.leafBuilder().withNodeIdentifier(
                    new YangInstanceIdentifier.NodeIdentifier(childOrdinaryLeafQName))
                    .withValue("chlfval1").build()).build();

    OrderedMapNode childOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
            new YangInstanceIdentifier.NodeIdentifier(childOrderedListQName))
            .withChild(childOrderedListEntryNode).build();

    ImmutableMap.Builder<QName, Object> builder = ImmutableMap.builder();
    ImmutableMap<QName, Object> keys = builder.put(parentKeyLeafQName, "pkval1").build();

    YangInstanceIdentifier path2 = YangInstanceIdentifier.of(parentContainerQName).node(childContainerQName)
            .node(parentOrderedListQName).node(new NodeIdentifierWithPredicates(parentOrderedListQName, keys)).node(childOrderedListQName);

    treeModification.write(path2, childOrderedListNode);
    treeModification.ready();
    inMemoryDataTree.validate(treeModification);
    inMemoryDataTree.commit(inMemoryDataTree.prepare(treeModification));

    DataTreeSnapshot snapshotAfterCommits = inMemoryDataTree.takeSnapshot();
    Optional<NormalizedNode<?, ?>> readNode = snapshotAfterCommits.readNode(path1);
    Optional<NormalizedNode<?, ?>> readNode2 = snapshotAfterCommits.readNode(path2);
First comes the creation of the in-memory data tree instance, and the
schema context (containing the model mentioned above) of this tree is
set. After that, two normalized nodes are built. The first one consists
of a parent container, a child container and a parent ordered list
which contains a key leaf and an ordinary leaf. The second normalized
node is a child ordered list that also contains a key leaf and an
ordinary leaf.
In order to add a child node to a node, the method withChild() is used.
It takes a NormalizedNode as argument. When creating a list entry,
YangInstanceIdentifier.NodeIdentifierWithPredicates should be used as
its identifier. Its arguments are the QName of the list, the QName of
the list key and the value of the key. The method withValue() specifies
a value for the ordinary leaf in the list.
Before writing a node to the data tree, a path (YangInstanceIdentifier)
which determines its place in the data tree needs to be defined. The
path of the first normalized node starts at the parent container. The
path of the second normalized node points to the child ordered list
contained in the parent ordered list entry specified by the key value
"pkval1".
The write operation is performed with both normalized nodes mentioned
earlier. It consists of several steps. The first step is to instantiate
a DataTreeModification object based on a DataTreeSnapshot. A
DataTreeSnapshot gives you the current state of the data tree. Then
comes the write operation, which writes a normalized node at the
provided path in the data tree. After doing both write operations, the
method ready() has to be called, marking the modification as ready for
application to the data tree; no further operations within the
modification are allowed. The modification is then validated - checked
whether it can be applied to the data tree. Finally, we commit it to
the data tree.
Now you can access the written nodes. In order to do this, you have to
create a new DataTreeSnapshot instance and call the method readNode()
with a path argument pointing to a particular node in the tree.
Serialization / deserialization of YANG Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to deserialize YANG-modeled data which have the form of an
XML document, you can use the XML parser found in the module
yang-data-codec-xml. The parser walks through the XML document
containing YANG-modeled data based on the provided SchemaContext and
emits node events into a NormalizedNodeStreamWriter. The parser
disallows multiple instances of the same element except for leaf-list
and list entries. The parser also expects that the YANG-modeled data in
the XML source are wrapped in a root element; otherwise it will not
work correctly.
Here is an example of using the XML parser:
.. code:: java

    InputStream resourceAsStream = ExampleClass.class.getResourceAsStream("/example-module.xml");

    XMLInputFactory factory = XMLInputFactory.newInstance();
    XMLStreamReader reader = factory.createXMLStreamReader(resourceAsStream);

    NormalizedNodeResult result = new NormalizedNodeResult();
    NormalizedNodeStreamWriter streamWriter = ImmutableNormalizedNodeStreamWriter.from(result);

    XmlParserStream xmlParser = XmlParserStream.create(streamWriter, schemaContext);
    xmlParser.parse(reader);

    NormalizedNode<?, ?> transformedInput = result.getResult();
The XML parser utilizes javax.xml.stream.XMLStreamReader for parsing
the XML document. First, you should create an instance of this reader
using XMLInputFactory and then load the XML document (in the form of an
InputStream object) into it.
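XMLStreamReader itself is part of the JDK, so its event-driven style of
reading can be shown independently of YANG Tools. The sketch below is a
made-up illustration (the element names do not correspond to any real
model) that simply collects the names of start elements:

.. code:: java

    import java.io.StringReader;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamException;
    import javax.xml.stream.XMLStreamReader;

    // Minimal illustration of the pull-based XMLStreamReader: each call to
    // next() advances to the following event (start element, characters,
    // end element, ...). Here we only record start-element names.
    final class XmlEvents {
        static List<String> startElements(String xml) throws XMLStreamException {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLStreamReader reader = factory.createXMLStreamReader(new StringReader(xml));
            List<String> names = new ArrayList<>();
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    names.add(reader.getLocalName());
                }
            }
            return names;
        }
    }

XmlParserStream consumes the same kind of event stream, but instead of
merely collecting names it matches each element against the schema and
emits the corresponding NormalizedNode events to the supplied writer.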
In order to emit node events while parsing the data you need to
instantiate a NormalizedNodeStreamWriter. This writer is actually an
interface, and therefore you need to use a concrete implementation of
it. In this example it is the ImmutableNormalizedNodeStreamWriter,
which constructs immutable instances of NormalizedNodes.
There are two ways to create an instance of this writer using the
static overloaded method from(). One version of this method takes a
NormalizedNodeResult as argument. This object type is a result holder
in which the resulting NormalizedNode will be stored. The other version
takes a NormalizedNodeContainerBuilder as argument; all created nodes
will be written to this builder.
The next step is to create an instance of the XML parser. The parser
itself is represented by a class named XmlParserStream. You can use one
of two versions of the static overloaded method create() to construct
this object. One version accepts a NormalizedNodeStreamWriter and a
SchemaContext as arguments, the other version takes the same arguments
plus a SchemaNode. Node events are emitted to the writer. The
SchemaContext is used to check whether the YANG data in the XML source
comply with the provided YANG model(s). The last argument, a SchemaNode
object, describes the node that is the parent of nodes defined in the
XML data. If you do not provide this argument, the parser uses the
SchemaContext as the parent node.
The parser is now ready to walk through the XML. Parsing is initiated
by calling the method parse() on the XmlParserStream object with the
XMLStreamReader as its argument.
Finally, you can access the result of parsing - a tree of
NormalizedNodes containing the data as they are defined in the parsed
XML document - by calling the method getResult() on the
NormalizedNodeResult object.
Introducing schema source repositories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Writing YANG driven generators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Introducing specific extension support for YANG parser
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~