.. _yangtools-developer-guide:

YANG Tools Developer Guide
==========================

YANG Tools is a set of libraries and tooling providing support for the
use of `YANG <https://tools.ietf.org/html/rfc6020>`__ in Java (or other
JVM-based language) projects and applications.

YANG Tools provides the following features in OpenDaylight:

- parsing of YANG sources and semantic inference of relationships across
  YANG models as defined in
  `RFC6020 <https://tools.ietf.org/html/rfc6020>`__

- representation of YANG-modeled data in Java

  - **Normalized Node** representation - a DOM-like tree model which
    uses a conceptual meta-model more tailored to YANG and OpenDaylight
    use-cases than the standard XML DOM model allows for.

- serialization / deserialization of YANG-modeled data driven by YANG
  models

  - XML - as defined in
    `RFC6020 <https://tools.ietf.org/html/rfc6020>`__

  - JSON - as defined in
    `draft-lhotka-netmod-yang-json-01 <https://tools.ietf.org/html/draft-lhotka-netmod-yang-json-01>`__

- support for third-party generators processing YANG models.

The YANG Tools project consists of the following logical subsystems:

- **Commons** - a set of general-purpose code which is not specific to
  YANG, but is also useful outside the YANG Tools implementation.

- **YANG Model and Parser** - YANG semantic model and a lexical and
  semantic parser of YANG models, which creates an in-memory
  cross-referenced representation of YANG models that is used by other
  components to determine their behaviour based on the model.

- **YANG Data** - definition of the Normalized Node APIs and Data Tree
  APIs, a reference implementation of these APIs, and an implementation
  of XML and JSON codecs for Normalized Nodes.

- **YANG Maven Plugin** - a Maven plugin which integrates the YANG
  parser into the Maven build lifecycle and provides a code-generation
  framework for components that want to generate code or other artefacts
  based on the YANG model.

The project defines base concepts and helper classes which are
project-agnostic and could be used outside the YANG Tools project scope.

- yang-data-codec-gson

- yang-maven-plugin-it

- yang-maven-plugin-spi

Class diagram of the YANG Model API:

.. figure:: images/yang-model-api.png

The YANG Statement Parser is built around the concept of statements as
defined in RFC6020, section 6.3. The basic abstractions are
ModelStatement and StatementDefinition, following the RFC6020 idea of
having a sequence of statements, where every statement contains a
keyword and zero or one argument. ModelStatement is extended by
DeclaredStatement (as it comes from the source, e.g. a YANG source) and
EffectiveStatement, which contains other substatements and tends to
represent the result of semantic processing of other statements (uses,
augment in YANG). IdentifierNamespace represents the common superclass
for YANG model namespaces.

The input to the YANG Statement Parser is a collection of
StatementStreamSource objects. The StatementStreamSource interface is
used for inference of the effective model and is required to emit its
statements using the supplied StatementWriter. Each source (e.g. a YANG
source) has to be processed in three steps in order to emit different
statements in each step. This package provides support for various
namespaces used across the statement parser in order to map relations
during the declaration phase.

Currently, there are two implementations of StatementStreamSource:

- YangStatementSourceImpl - intended for YANG sources

- YinStatementSourceImpl - intended for YIN sources

Class diagram of the YANG Data API:

.. figure:: images/yang-data-api.png

Codecs which enable serialization of NormalizedNodes into YANG-modeled
data in XML or JSON format, and deserialization of YANG-modeled data in
XML or JSON format into NormalizedNodes.

A Maven plugin which integrates the YANG parser into the Maven build
lifecycle and provides a code-generation framework for components that
want to generate code or other artefacts based on the YANG model.

Working with YANG Model
~~~~~~~~~~~~~~~~~~~~~~~

The first thing you need to do if you want to work with YANG models is
to instantiate a SchemaContext object. This object type describes one
or more parsed YANG modules.

In order to create it, you need to utilize the YANG statement parser,
which takes one or more StatementStreamSource objects as input and
produces the SchemaContext object.

The StatementStreamSource object contains the source file information.
It has two implementations: one for YANG sources -
YangStatementSourceImpl, and one for YIN sources - YinStatementSourceImpl.

Here is an example of creating StatementStreamSource objects for YANG
files, providing them to the YANG statement parser and building the
SchemaContext:

.. code:: java

    StatementStreamSource yangModuleSource = new YangStatementSourceImpl("/example.yang", false);
    StatementStreamSource yangModuleSource2 = new YangStatementSourceImpl("/example2.yang", false);

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild();
    reactor.addSources(yangModuleSource, yangModuleSource2);

    SchemaContext schemaContext = reactor.buildEffective();

First, StatementStreamSource objects should be instantiated with two
constructor arguments: the path to the yang source file (which is a
regular String object) and a boolean which determines whether the path
is absolute or relative.

Next comes the initiation of a new yang parsing cycle, which is
represented by a CrossSourceStatementReactor.BuildAction object. You can
obtain it by calling the method newBuild() on the
CrossSourceStatementReactor object (RFC6020_REACTOR) in the
YangInferencePipeline class.

Then you should feed yang sources to it by calling the method
addSources(), which takes one or more StatementStreamSource objects as
arguments.

Finally, you call the method buildEffective() on the reactor object,
which returns an EffectiveSchemaContext (a concrete implementation of
SchemaContext). Now you are ready to work with the contents of the added
yang sources.

Let us explain how to work with models contained in the newly created
SchemaContext. If you want to get all the modules in the schemaContext,
you have to call the method getModules(), which returns a Set of
modules. Similarly, if you want to get all the data definitions in the
schemaContext, you need to call the method getDataDefinitions(), etc.

.. code:: java

    Set<Module> modules = schemaContext.getModules();
    Set<DataSchemaNode> dataSchemaNodes = schemaContext.getDataDefinitions();

Usually you want to access specific modules. Getting a concrete module
from the SchemaContext is a matter of calling one of these methods:

- findModuleByName(),

- findModuleByNamespace(),

- findModuleByNamespaceAndRevision().

In the first case, you need to provide the module name as it is defined
in the yang source file and the module revision date if it is specified
in the yang source file (if it is not defined, you can just pass a null
value). In order to provide the revision date in the proper format, you
can use a utility class named SimpleDateFormatUtil.

.. code:: java

    Module exampleModule = schemaContext.findModuleByName("example-module", null);

    Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
    Module exampleModule = schemaContext.findModuleByName("example-module", revisionDate);

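The revision format used above is the YANG revision-date form,
yyyy-MM-dd. If SimpleDateFormatUtil is not at hand, the same Date can be
produced with the plain JDK; a minimal sketch (the class and method
names here are made up for illustration):

.. code:: java

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class RevisionDateExample {
        // YANG revision dates use the yyyy-MM-dd form.
        static Date parseRevision(final String revision) throws ParseException {
            final SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
            format.setLenient(false);
            return format.parse(revision);
        }

        public static void main(final String[] args) throws ParseException {
            // The resulting Date can then be passed to findModuleByName().
            final Date revisionDate = parseRevision("2015-09-02");
            System.out.println(new SimpleDateFormat("yyyy-MM-dd").format(revisionDate));
        }
    }
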
In the second case, you have to provide the module namespace in the form
of a URI object:

.. code:: java

    Module exampleModule = schema.findModuleByNamespace(new URI("opendaylight.org/example-module"));

In the third case, you provide both the module namespace and the
revision date as arguments.

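Following the same pattern as the previous two lookups, the third
variant might be sketched as follows (an illustrative sketch reusing the
namespace and revision values from the examples above):

.. code:: java

    Date revisionDate = SimpleDateFormatUtil.getRevisionFormat().parse("2015-09-02");
    Module exampleModule = schemaContext.findModuleByNamespaceAndRevision(
            new URI("opendaylight.org/example-module"), revisionDate);
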
Once you have a Module object, you can access its contents as they are
defined in the YANG Model API. One way to do this is to use methods like
getIdentities() or getRpcs(), which will give you a Set of objects.
Alternatively, you can access a DataSchemaNode directly via the method
getDataChildByName(), which takes a QName object as its only argument.
Here are a few examples:

.. code:: java

    Set<AugmentationSchema> augmentationSchemas = exampleModule.getAugmentations();
    Set<ModuleImport> moduleImports = exampleModule.getImports();

    ChoiceSchemaNode choiceSchemaNode = (ChoiceSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-choice"));

    ContainerSchemaNode containerSchemaNode = (ContainerSchemaNode) exampleModule.getDataChildByName(QName.create(exampleModule.getQNameModule(), "example-container"));

The YANG statement parser can work in three modes:

- default mode

- mode with active resolution of if-feature statements

- mode with active semantic version processing

The default mode is active when you initialize the parsing cycle as
usual, by calling the method newBuild() without passing any arguments to
it. The second and third modes can be activated by invoking newBuild()
with a special argument. You can activate either just one of them or
both by passing the proper arguments. Let us explain how these modes
work.

The mode with active resolution of if-features makes YANG statements
containing an if-feature statement conditional based on the supported
features. These features are provided in the form of a QName-based
java.util.Set object. In the example below, only two features are
supported: example-feature-1 and example-feature-2. The Set which
contains this information is passed to the method newBuild() and the
mode is activated:

.. code:: java

    Set<QName> supportedFeatures = ImmutableSet.of(
        QName.create("example-namespace", "2016-08-31", "example-feature-1"),
        QName.create("example-namespace", "2016-08-31", "example-feature-2"));

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);

In case no features should be supported, you should provide an empty
Set<QName> object:

.. code:: java

    Set<QName> supportedFeatures = ImmutableSet.of();

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(supportedFeatures);

When this mode is not activated, all features in the processed YANG
sources are supported.

The mode with active semantic version processing changes the way YANG
import statements work - each module import is processed based on the
specified semantic version statement, and the revision-date statement is
ignored. In order to activate this mode, you have to provide the
StatementParserMode.SEMVER_MODE enum constant as an argument to the
method newBuild():

.. code:: java

    CrossSourceStatementReactor.BuildAction reactor = YangInferencePipeline.RFC6020_REACTOR.newBuild(StatementParserMode.SEMVER_MODE);

Before you use a semantic version statement in a YANG module, you need
to define an extension for it so that the YANG statement parser can
recognize it:

.. code:: none

    module semantic-version {
        namespace "urn:opendaylight:yang:extension:semantic-version";
        prefix sv;

        revision 2016-02-02 {
            description "Initial version";
        }
        sv:semantic-version "0.0.1";

        extension semantic-version {
            argument "semantic-version" {
                ...
            }
        }
    }

In the example above, you see a YANG module which defines the semantic
version as an extension. This extension can be imported into other
modules in which we want to utilize the semantic versioning concept.

Below is a simple example of semantic versioning usage. With the
semantic version processing mode active, the foo module imports the bar
module based on its semantic version. Notice how both modules import the
module with the semantic-version extension.

.. code:: none

    module foo {
        ...
        import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }
        import bar { prefix bar; sv:semantic-version "0.1.2"; }

        revision "2016-02-01" {
            description "Initial version";
        }
        sv:semantic-version "0.1.1";
        ...
    }

.. code:: none

    module bar {
        ...
        import semantic-version { prefix sv; revision-date 2016-02-02; sv:semantic-version "0.0.1"; }

        revision "2016-01-01" {
            description "Initial version";
        }
        sv:semantic-version "0.1.2";
        ...
    }

Every semantic version must have the following form: x.y.z. The x
corresponds to a major version, the y corresponds to a minor version and
the z corresponds to a patch version. If no semantic version is
specified in a module or an import statement, then the default one
(0.0.0) is used.

A major version number of 0 indicates that the model is still in
development and is subject to change.

Following a release of major version 1, all modules will increment their
major version number when backwards-incompatible changes to the model
are made.

The minor version is changed when features are added to the model that
do not impact current clients' use of the model.

The patch version is incremented when non-feature changes (such as
bugfixes or clarifications of human-readable descriptions that do not
impact model functionality) are made that maintain backwards
compatibility.

When importing a module with the semantic version processing mode
activated, only the module with the newest (highest) compatible semantic
version is imported. Two semantic versions are compatible when all of
the following conditions are met:

- the major version in the import statement and the major version in the
  imported module are equal. For instance, 1.5.3 is compatible with
  1.5.3, 1.5.4, 1.7.2, etc., but it is not compatible with 0.5.2 or
  2.5.3, as their major versions differ.

- the combination of minor version and patch version in the import
  statement is not higher than the one in the imported module. For
  instance, 1.5.2 is compatible with 1.5.2, 1.5.4, 1.6.8, etc. In fact,
  1.5.2 is also compatible with versions like 1.5.1, 1.4.9 or 1.3.7, as
  they have an equal major version. However, they will not be imported
  because their minor and patch versions are lower (older).

If the import statement does not specify a semantic version, then the
default one (0.0.0) is chosen. Thus, the module is imported only if it
has a semantic version compatible with the default one, for example
0.0.0, 0.1.3, 0.3.5 and so on.

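The selection rules above can be sketched in plain Java. This is an
illustrative re-implementation, not the yangtools API; the class and
method names are hypothetical. The check answers whether a module's
version can satisfy an import statement's version (same major version,
and a minor/patch combination that is not older):

.. code:: java

    public final class SemVerCheck {
        // True when the module version satisfies the import statement's
        // version: equal major, and the module's (minor, patch) pair is
        // not lower than the import's.
        static boolean satisfiesImport(final int[] imported, final int[] module) {
            if (imported[0] != module[0]) {
                return false;                   // major versions must match
            }
            if (module[1] != imported[1]) {
                return module[1] > imported[1]; // newer minor wins
            }
            return module[2] >= imported[2];    // otherwise compare patch
        }

        // Splits an "x.y.z" string into its three numeric components.
        static int[] parse(final String version) {
            final String[] parts = version.split("\\.");
            return new int[] { Integer.parseInt(parts[0]),
                               Integer.parseInt(parts[1]),
                               Integer.parseInt(parts[2]) };
        }

        public static void main(final String[] args) {
            // Examples from the rules above: import 1.5.2 accepts newer 1.6.8 ...
            System.out.println(satisfiesImport(parse("1.5.2"), parse("1.6.8"))); // true
            // ... but not a different major version ...
            System.out.println(satisfiesImport(parse("1.5.2"), parse("2.5.3"))); // false
            // ... and not an older minor/patch combination.
            System.out.println(satisfiesImport(parse("1.5.2"), parse("1.4.9"))); // false
        }
    }
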
Working with YANG Data
~~~~~~~~~~~~~~~~~~~~~~

If you want to work with YANG data, you are going to need NormalizedNode
objects that are specified in the YANG Data API. NormalizedNode is an
interface at the top of the YANG Data hierarchy. It is extended through
sub-interfaces which define the behaviour of specific NormalizedNode
types like AnyXmlNode, ChoiceNode, LeafNode, ContainerNode, etc.
Concrete implementations of these interfaces are defined in the
yang-data-impl module. Once you have one or more NormalizedNode
instances, you can perform CRUD operations on the YANG data tree, which
is an in-memory database designed to store normalized nodes in a
tree-like structure.

In some cases it is clear which NormalizedNode type belongs to which
yang statement (e.g. AnyXmlNode, ChoiceNode, LeafNode). However, there
are some normalized nodes which are named differently from their yang
counterparts. They are listed below:

- LeafSetNode - leaf-list

- OrderedLeafSetNode - leaf-list that is ordered-by user

- LeafSetEntryNode - concrete entry in a leaf-list

- MapNode - keyed list

- OrderedMapNode - keyed list that is ordered-by user

- MapEntryNode - concrete entry in a keyed list

- UnkeyedListNode - unkeyed list

- UnkeyedListEntryNode - concrete entry in an unkeyed list

In order to create a concrete NormalizedNode object, you can use the
utility class Builders or ImmutableNodes. These classes can be found in
the yang-data-impl module and they provide methods for building each
type of normalized node. Here is a simple example of building a
normalized node:

.. code:: java

    ContainerNode containerNode = Builders.containerBuilder().withNodeIdentifier(new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"))).build();

    ContainerNode containerNode2 = Builders.containerBuilder(containerSchemaNode).build();

Both examples produce the same result. NodeIdentifier is one of the four
types of YangInstanceIdentifier (these types are described in the
javadoc of YangInstanceIdentifier). The purpose of YangInstanceIdentifier
is to uniquely identify a particular node in the data tree. In the first
example, you have to add the NodeIdentifier before building the
resulting node. In the second example, it is added using the provided
ContainerSchemaNode object.

The ImmutableNodes class offers similar builder methods and also adds an
overloaded method called fromInstanceId(), which allows you to create a
NormalizedNode object based on a YangInstanceIdentifier and a
SchemaContext. Below is an example which shows the use of this method:

.. code:: java

    YangInstanceIdentifier.NodeIdentifier contId = new YangInstanceIdentifier.NodeIdentifier(QName.create(moduleQName, "example-container"));

    NormalizedNode<?, ?> contNode = ImmutableNodes.fromInstanceId(schemaContext, YangInstanceIdentifier.create(contId));

Let us show a more complex example of creating a NormalizedNode. First,
consider the following YANG module:

.. code:: none

    module example-module {
        namespace "opendaylight.org/example-module";
        ...

        container parent-container {
            container child-container {
                list parent-ordered-list {
                    ordered-by user;
                    key "parent-key-leaf";

                    leaf parent-key-leaf {
                        type string;
                    }

                    leaf parent-ordinary-leaf {
                        type string;
                    }

                    list child-ordered-list {
                        ordered-by user;
                        key "child-key-leaf";

                        leaf child-key-leaf {
                            type string;
                        }

                        leaf child-ordinary-leaf {
                            type string;
                        }
                    }
                }
            }
        }
    }

In the following example, two normalized nodes based on the module above
are written to and read from the data tree:

.. code:: java

    TipProducingDataTree inMemoryDataTree = InMemoryDataTreeFactory.getInstance().create(TreeType.OPERATIONAL);
    inMemoryDataTree.setSchemaContext(schemaContext);

    // first data tree modification
    MapEntryNode parentOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifierWithPredicates(
            parentOrderedListQName, parentKeyLeafQName, "pkval1"))
        .withChild(Builders.leafBuilder().withNodeIdentifier(
            new YangInstanceIdentifier.NodeIdentifier(parentOrdinaryLeafQName))
            .withValue("plfval1").build()).build();

    OrderedMapNode parentOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(parentOrderedListQName))
        .withChild(parentOrderedListEntryNode).build();

    ContainerNode parentContainerNode = Builders.containerBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(parentContainerQName))
        .withChild(Builders.containerBuilder().withNodeIdentifier(
            new NodeIdentifier(childContainerQName)).withChild(parentOrderedListNode).build()).build();

    YangInstanceIdentifier path1 = YangInstanceIdentifier.of(parentContainerQName);

    DataTreeModification treeModification = inMemoryDataTree.takeSnapshot().newModification();
    treeModification.write(path1, parentContainerNode);

    // second data tree modification
    MapEntryNode childOrderedListEntryNode = Builders.mapEntryBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifierWithPredicates(
            childOrderedListQName, childKeyLeafQName, "chkval1"))
        .withChild(Builders.leafBuilder().withNodeIdentifier(
            new YangInstanceIdentifier.NodeIdentifier(childOrdinaryLeafQName))
            .withValue("chlfval1").build()).build();

    OrderedMapNode childOrderedListNode = Builders.orderedMapBuilder().withNodeIdentifier(
        new YangInstanceIdentifier.NodeIdentifier(childOrderedListQName))
        .withChild(childOrderedListEntryNode).build();

    ImmutableMap.Builder<QName, Object> builder = ImmutableMap.builder();
    ImmutableMap<QName, Object> keys = builder.put(parentKeyLeafQName, "pkval1").build();

    YangInstanceIdentifier path2 = YangInstanceIdentifier.of(parentContainerQName).node(childContainerQName)
        .node(parentOrderedListQName).node(new NodeIdentifierWithPredicates(parentOrderedListQName, keys)).node(childOrderedListQName);

    treeModification.write(path2, childOrderedListNode);
    treeModification.ready();
    inMemoryDataTree.validate(treeModification);
    inMemoryDataTree.commit(inMemoryDataTree.prepare(treeModification));

    DataTreeSnapshot snapshotAfterCommits = inMemoryDataTree.takeSnapshot();
    Optional<NormalizedNode<?, ?>> readNode = snapshotAfterCommits.readNode(path1);
    Optional<NormalizedNode<?, ?>> readNode2 = snapshotAfterCommits.readNode(path2);

First comes the creation of the in-memory data tree instance. The schema
context (containing the model mentioned above) of this tree is set.
After that, two normalized nodes are built. The first one consists of a
parent container, a child container and a parent ordered list which
contains a key leaf and an ordinary leaf. The second normalized node is
a child ordered list that also contains a key leaf and an ordinary leaf.

In order to add a child node to a node, the method withChild() is used.
It takes a NormalizedNode as an argument. When creating a list entry,
YangInstanceIdentifier.NodeIdentifierWithPredicates should be used as
its identifier. Its arguments are the QName of the list, the QName of
the list key and the value of the key. The method withValue() specifies
a value for the ordinary leaf in the list.

Before writing a node to the data tree, a path (YangInstanceIdentifier)
which determines its place in the data tree needs to be defined. The
path of the first normalized node starts at the parent container. The
path of the second normalized node points to the child ordered list
contained in the parent ordered list entry specified by the key value
"pkval1".

The write operation is performed with both normalized nodes mentioned
earlier. It consists of several steps. The first step is to instantiate
a DataTreeModification object based on a DataTreeSnapshot. A
DataTreeSnapshot gives you the current state of the data tree. Then
comes the write operation, which writes a normalized node at the
provided path in the data tree. After doing both write operations, the
method ready() has to be called, marking the modification as ready for
application to the data tree. No further operations within the
modification are allowed. The modification is then validated - checked
whether it can be applied to the data tree. Finally, we commit it to the
data tree.

Now you can access the written nodes. In order to do this, you have to
create a new DataTreeSnapshot instance and call the method readNode()
with a path argument pointing to a particular node in the tree.

Serialization / deserialization of YANG Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to deserialize YANG-modeled data which have the form of an
XML document, you can use the XML parser found in the module
yang-data-codec-xml. The parser walks through the XML document
containing YANG-modeled data based on the provided SchemaContext and
emits node events into a NormalizedNodeStreamWriter. The parser
disallows multiple instances of the same element, except for leaf-list
and list entries. The parser also expects that the YANG-modeled data in
the XML source are wrapped in a root element, otherwise it will not work
correctly.

Here is an example of using the XML parser:

.. code:: java

    InputStream resourceAsStream = ExampleClass.class.getResourceAsStream("/example-module.xml");

    XMLInputFactory factory = XMLInputFactory.newInstance();
    XMLStreamReader reader = factory.createXMLStreamReader(resourceAsStream);

    NormalizedNodeResult result = new NormalizedNodeResult();
    NormalizedNodeStreamWriter streamWriter = ImmutableNormalizedNodeStreamWriter.from(result);

    XmlParserStream xmlParser = XmlParserStream.create(streamWriter, schemaContext);
    xmlParser.parse(reader);

    NormalizedNode<?, ?> transformedInput = result.getResult();

The XML parser utilizes javax.xml.stream.XMLStreamReader for parsing an
XML document. First, you should create an instance of this reader using
XMLInputFactory and then load an XML document (in the form of an
InputStream object) into it.

In order to emit node events while parsing the data, you need to
instantiate a NormalizedNodeStreamWriter. This writer is actually an
interface, and therefore you need to use a concrete implementation of
it. In this example it is the ImmutableNormalizedNodeStreamWriter, which
constructs immutable instances of NormalizedNodes.

There are two ways to create an instance of this writer using the static
overloaded method from(). One version of this method takes a
NormalizedNodeResult as an argument. This object type is a result holder
in which the resulting NormalizedNode will be stored. The other version
takes a NormalizedNodeContainerBuilder as an argument. All created nodes
will be written to this builder.

The next step is to create an instance of the XML parser. The parser
itself is represented by a class named XmlParserStream. You can use one
of two versions of the static overloaded method create() to construct
this object. One version accepts a NormalizedNodeStreamWriter and a
SchemaContext as arguments, the other version takes the same arguments
plus a SchemaNode. Node events are emitted to the writer. The
SchemaContext is used to check whether the YANG data in the XML source
comply with the provided YANG model(s). The last argument, a SchemaNode
object, describes the node that is the parent of nodes defined in the
XML data. If you do not provide this argument, the parser sets the
SchemaContext as the parent.

The parser is now ready to walk through the XML. Parsing is initiated by
calling the method parse() on the XmlParserStream object with the
XMLStreamReader as its argument.

Finally, you can access the result of parsing - a tree of
NormalizedNodes containing the data as they are defined in the parsed
XML document - by calling the method getResult() on the
NormalizedNodeResult object.

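To illustrate the event stream the parser consumes, here is a
self-contained plain-StAX sketch, independent of yangtools (the class
name and XML content are made up for illustration), that walks the same
kind of XMLStreamReader:

.. code:: java

    import java.io.StringReader;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamException;
    import javax.xml.stream.XMLStreamReader;

    public class StaxWalkExample {
        // Counts START_ELEMENT events, mimicking how XmlParserStream walks
        // the reader and turns each element event into a node event.
        static int countElements(final String xml) throws XMLStreamException {
            final XMLStreamReader reader =
                XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(xml));
            int elements = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    elements++;
                }
            }
            return elements;
        }

        public static void main(final String[] args) throws XMLStreamException {
            // A root element wrapping the modeled data, as the parser expects.
            final String xml = "<example-container><example-leaf>value</example-leaf></example-container>";
            System.out.println(countElements(xml)); // prints 2
        }
    }
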
Introducing schema source repositories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Writing YANG driven generators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Introducing specific extension support for YANG parser
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~