.. _netconf-user-guide:

NETCONF User Guide
==================

NETCONF is an XML-based protocol used to configure and monitor devices
in the network. The base NETCONF protocol is described in
`RFC-6241 <http://tools.ietf.org/html/rfc6241>`__.

**NETCONF in OpenDaylight:**

OpenDaylight supports the NETCONF protocol as a northbound server as
well as a southbound plugin. It also includes a set of test tools for
simulating NETCONF devices and clients.
Southbound (netconf-connector)
------------------------------

The NETCONF southbound plugin is capable of connecting to remote NETCONF
devices and exposing their configuration/operational datastores, RPCs
and notifications as MD-SAL mount points. These mount points allow
applications and remote users (over RESTCONF) to interact with the
mounted device.

In terms of RFCs, the connector supports:

- `RFC-6241 <http://tools.ietf.org/html/rfc6241>`__

- `RFC-5277 <https://tools.ietf.org/html/rfc5277>`__

- `RFC-6022 <https://tools.ietf.org/html/rfc6022>`__

- `draft-ietf-netconf-yang-library-06 <https://tools.ietf.org/html/draft-ietf-netconf-yang-library-06>`__

**Netconf-connector is fully model-driven (utilizing the YANG modeling
language), so in addition to the above RFCs, it supports any
data/RPC/notifications described by a YANG model that is implemented by
the device.**

NETCONF southbound can be activated by installing the
``odl-netconf-connector-all`` Karaf feature.
Netconf-connector configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are two ways to configure netconf-connector: NETCONF or RESTCONF.
This guide focuses on using RESTCONF.

There are two different endpoints related to RESTCONF protocols:

- | ``http://localhost:8181/restconf`` is related to `draft-bierman-netconf-restconf-02 <https://tools.ietf.org/html/draft-bierman-netconf-restconf-02>`__
  | and can be activated by installing the ``odl-restconf-nb-bierman02`` feature.
  | This user guide uses this approach.

- | ``http://localhost:8181/rests`` is related to `RFC-8040 <https://tools.ietf.org/html/rfc8040>`__
  | and can be activated by installing the ``odl-restconf-nb-rfc8040`` feature.

In case of `RFC-8040 <https://tools.ietf.org/html/rfc8040>`__, resources
for the configuration and operational datastores start with

::

   http://localhost:8181/rests/data/network-topology:network-topology

which responds with the content of both datastores. Query parameters can
be used to distinguish between them:

::

   http://localhost:8181/rests/data/network-topology:network-topology?content=config

for the configuration datastore and

::

   http://localhost:8181/rests/data/network-topology:network-topology?content=nonconfig

for the operational datastore.
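As a rough illustration of the URL scheme above (this helper is not part
of OpenDaylight; the base address is the default local one), the three
forms can be assembled as:

```python
# Sketch: assemble RFC-8040 style RESTCONF data URLs. The optional
# "content" query parameter selects the datastore.
BASE = "http://localhost:8181/rests/data"

def data_url(path, content=None):
    """Return the /rests/data URL for *path*.

    content may be None (both datastores), "config" or "nonconfig".
    """
    if content not in (None, "config", "nonconfig"):
        raise ValueError("unsupported content value: %s" % content)
    url = "%s/%s" % (BASE, path)
    return url if content is None else "%s?content=%s" % (url, content)
```

For example, ``data_url("network-topology:network-topology", "config")``
yields the configuration-datastore URL shown above.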
The default configuration contains all the necessary dependencies (file:
01-netconf.xml) and a single instance of netconf-connector (file:
99-netconf-connector.xml) called **controller-config**, which connects
itself to the NETCONF northbound in OpenDaylight in a loopback fashion.
The connector mounts the NETCONF server for config-subsystem in order to
enable the RESTCONF protocol for config-subsystem. This RESTCONF still
goes via NETCONF, but using RESTCONF is much more user friendly than
using NETCONF directly.
Spawning additional netconf-connectors while the controller is running
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Preconditions:

1. OpenDaylight is running.

2. In Karaf, you must have the netconf-connector installed (at the Karaf
   prompt, type: ``feature:install odl-netconf-connector-all``); the
   loopback NETCONF mountpoint will be automatically configured and
   activated.

3. Wait until the log displays the following entry:

   ::

      RemoteDevice{controller-config}: NETCONF connector initialized

To configure a new netconf-connector, you need to send the following
request to RESTCONF:

::

   POST
   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules

Headers:

- Accept: application/xml

- Content-Type: application/xml
Payload:

::

   <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
     <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
     <name>new-netconf-device</name>
     <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">127.0.0.1</address>
     <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
     <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</username>
     <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</password>
     <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
     <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
       <name>global-event-executor</name>
     </event-executor>
     <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
       <name>binding-osgi-broker</name>
     </binding-registry>
     <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
       <name>dom-broker</name>
     </dom-registry>
     <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
       <name>global-netconf-dispatcher</name>
     </client-dispatcher>
     <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
       <name>global-netconf-processing-executor</name>
     </processing-executor>
     <keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
       <name>global-netconf-ssh-scheduled-executor</name>
     </keepalive-executor>
   </module>
This spawns a new netconf-connector which tries to connect to (or mount)
a NETCONF device at 127.0.0.1 and port 830. You can check the
config-subsystem’s configuration datastore; the new netconf-connector
will now be present there. Just invoke:

::

   GET
   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules

The response will contain the module for new-netconf-device.
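The POST above can also be prepared from a short script. The sketch
below is an illustration (not part of OpenDaylight); it uses only the
Python standard library and assumes the default admin/admin
credentials. Actually sending the request requires a running controller:

```python
# Sketch: build the HTTP POST that spawns a new netconf-connector via the
# controller-config loopback mountpoint. Sending it (urlopen) needs a
# running controller, so only the request object is constructed here.
import base64
from urllib import request

MODULES_URL = ("http://localhost:8181/restconf/config/"
               "network-topology:network-topology/topology/topology-netconf/"
               "node/controller-config/yang-ext:mount/config:modules")

def spawn_request(payload_xml, user="admin", password="admin"):
    """Return a urllib Request carrying the sal-netconf-connector payload."""
    creds = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return request.Request(
        MODULES_URL,
        data=payload_xml.encode(),
        method="POST",
        headers={
            "Content-Type": "application/xml",
            "Accept": "application/xml",
            "Authorization": "Basic " + creds,
        },
    )

# To actually send it: urllib.request.urlopen(spawn_request(xml_payload))
```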
Right after the new netconf-connector is created, it writes some useful
metadata into MD-SAL’s datastore under the network-topology subtree.
This metadata can be found at:

::

   GET
   http://localhost:8181/restconf/operational/network-topology:network-topology/

Information about connection status, device capabilities, etc. can be
found there.
Connecting to a device not supporting NETCONF monitoring
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The netconf-connector in OpenDaylight relies on ietf-netconf-monitoring
support when connecting to a remote NETCONF device. The
ietf-netconf-monitoring support allows netconf-connector to list and
download all YANG schemas that are used by the device. NETCONF connector
can only communicate with a device if it knows the set of used schemas
(or at least a subset). However, some devices use YANG models internally
but do not support NETCONF monitoring. Netconf-connector can also
communicate with these devices, but you have to side-load the necessary
YANG models into OpenDaylight’s YANG model cache for netconf-connector.
In general there are two situations you might encounter:
**1. NETCONF device does not support ietf-netconf-monitoring but it does
list all its YANG models as capabilities in the HELLO message**

This could be a device that internally uses only the ietf-inet-types YANG
model with revision 2010-09-24. In the HELLO message sent from this
device, the following capability is reported:

::

   urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&revision=2010-09-24

**For such devices you only need to put the schema into the folder
cache/schema inside your Karaf distribution.**

The file with the YANG schema for ietf-inet-types has to be called
ietf-inet-types@2010-09-24.yang. This is the required naming format of
the cache.
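The naming rule can be captured in a one-line helper (illustrative only,
not part of OpenDaylight):

```python
# Sketch: the schema cache expects files named <module-name>@<revision>.yang.
def cache_file_name(module, revision):
    return "%s@%s.yang" % (module, revision)
```

For the example above, ``cache_file_name("ietf-inet-types", "2010-09-24")``
gives the expected file name.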
**2. NETCONF device does not support ietf-netconf-monitoring and it does
NOT list its YANG models as capabilities in the HELLO message**

Compared to a device that lists its YANG models in the HELLO message, in
this case there would be no capability with ietf-inet-types in the HELLO
message. This type of device basically provides no information about the
YANG schemas it uses, so it is up to the user of OpenDaylight to properly
configure netconf-connector for this device.

Netconf-connector has an optional configuration attribute called
yang-module-capabilities, and this attribute can contain a list of "YANG
module based" capabilities. By setting this configuration attribute, it
is possible to override the "yang-module-based" capabilities reported in
the HELLO message of the device. To do this, we need to modify the
configuration of netconf-connector by adding this XML (it needs to be
added next to the address, port, username etc. configuration elements):
::

   <yang-module-capabilities xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
     <capability xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2010-09-24
     </capability>
   </yang-module-capabilities>

**Remember to also put the YANG schemas into the cache folder.**

To add multiple capabilities, simply repeat the capability element
inside the yang-module-capabilities element; the capability element is
modeled as a leaf-list. With this configuration, the remote device
reports usage of ietf-inet-types in the eyes of netconf-connector.
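As a sketch (not part of OpenDaylight), the capability URIs and the
leaf-list override can be generated programmatically; note that the
ampersand separating the module and revision parameters must be escaped
as ``&amp;`` inside the XML:

```python
# Sketch: format "YANG module based" capability URIs and the
# yang-module-capabilities override shown above. The "&" between the
# module and revision query parameters must be XML-escaped to "&amp;".
from xml.sax.saxutils import escape

NS = "urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf"

def module_capability(namespace, module, revision):
    return "%s?module=%s&revision=%s" % (namespace, module, revision)

def capabilities_xml(capabilities):
    body = "".join(
        '  <capability xmlns="%s">%s</capability>\n' % (NS, escape(cap))
        for cap in capabilities)
    return ('<yang-module-capabilities xmlns="%s">\n%s'
            '</yang-module-capabilities>' % (NS, body))
```

Passing several capability URIs to ``capabilities_xml`` produces the
repeated capability elements described above.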
Reconfiguring Netconf-Connector While the Controller is Running
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to change the configuration of a running module while the
whole controller is running. This example continues where the previous
one left off and changes the configuration of the brand new
netconf-connector after it was spawned. Using one RESTCONF request, we
will change both the username and password for the netconf-connector.

To update an existing netconf-connector, you need to send the following
request:

::

   PUT
   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device
Payload:

::

   <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
     <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
     <name>new-netconf-device</name>
     <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">bob</username>
     <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">passwd</password>
     <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
     <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
       <name>global-event-executor</name>
     </event-executor>
     <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
       <name>binding-osgi-broker</name>
     </binding-registry>
     <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
       <name>dom-broker</name>
     </dom-registry>
     <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
       <name>global-netconf-dispatcher</name>
     </client-dispatcher>
     <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
       <name>global-netconf-processing-executor</name>
     </processing-executor>
     <keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
       <name>global-netconf-ssh-scheduled-executor</name>
     </keepalive-executor>
   </module>
Since a PUT is a replace operation, the whole configuration must be
specified along with the new values for username and password. This
should result in a 2xx response and the instance of netconf-connector
called new-netconf-device will be reconfigured to use username bob and
password passwd. The new configuration can be verified by executing:

::

   GET
   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device

With the new configuration, the old connection will be closed and a new
one established.
Destroying Netconf-Connector While the Controller is Running
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using RESTCONF, one can also destroy an instance of a module. In case of
netconf-connector, the module will be destroyed, the NETCONF connection
dropped, and all resources cleaned up. To do this, simply issue a DELETE
request to the following URL:

::

   DELETE
   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device

The last element of the URL is the name of the instance and its
predecessor is the type of that module (in our case the type is
**sal-netconf-connector** and the name **new-netconf-device**). The type
and name are actually the keys of the module list.
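Since the type and name are the keys of the module list, the instance
URL can be derived mechanically. A small illustrative helper (not part
of OpenDaylight):

```python
# Sketch: the last two segments of a module instance URL are the module
# list keys: <type>/<name>.
MODULE_BASE = ("http://localhost:8181/restconf/config/"
               "network-topology:network-topology/topology/topology-netconf/"
               "node/controller-config/yang-ext:mount/config:modules/module")

def module_instance_url(module_type, name):
    return "%s/%s/%s" % (MODULE_BASE, module_type, name)
```

For the example in this section, the type is
``odl-sal-netconf-connector-cfg:sal-netconf-connector`` and the name is
``new-netconf-device``.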
Netconf-connector configuration with MD-SAL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is also possible to configure new NETCONF connectors directly through
MD-SAL using the network-topology model. You can configure new NETCONF
connectors either through the NETCONF server for MD-SAL (port 2830) or
through RESTCONF. This guide focuses on RESTCONF.

To enable NETCONF connector configuration through MD-SAL, install
either the ``odl-netconf-topology`` or
``odl-netconf-clustered-topology`` feature. We will explain the
difference between these features later.

Preconditions:

1. OpenDaylight is running.

2. In Karaf, you must have the ``odl-netconf-topology`` or
   ``odl-netconf-clustered-topology`` feature installed.

3. The ``odl-restconf`` feature must be installed.

4. Wait until the log displays the following entry:

   ::

      Successfully pushed configuration snapshot 02-netconf-topology.xml(odl-netconf-topology,odl-netconf-topology)

   or until

   ::

      GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/

   returns a non-empty response, for example:

   ::

      <topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
        <topology-id>topology-netconf</topology-id>
      </topology>
Spawning new NETCONF connectors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To create a new NETCONF connector, you need to send the following
request:

::

   PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

Headers:

- Accept: application/xml

- Content-Type: application/xml

Payload:

::

   <node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
     <node-id>new-netconf-device</node-id>
     <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
     <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
     <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
     <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
     <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
     <!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values -->
     <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
     <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
     <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
     <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
     <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
     <!-- keepalive-delay set to 0 turns off keepalives -->
     <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
   </node>

Note that the device name in the <node-id> element must match the last
element of the RESTCONF URL.
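A quick sanity check for this rule (illustrative helper, not part of
OpenDaylight):

```python
# Sketch: verify that the <node-id> in the payload matches the last path
# segment of the PUT URL, a common source of configuration errors.
def node_id_matches(put_url, node_id):
    return put_url.rstrip("/").rsplit("/", 1)[-1] == node_id
```

Running such a check before issuing the PUT catches mismatched names
early, since the controller would otherwise reject the request or create
an unexpected node entry.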
Reconfiguring an existing connector
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The steps to reconfigure an existing connector are exactly the same as
when spawning a new connector. The old connection will be disconnected
and a new connector with the new configuration will be created.

Deleting an existing connector
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To remove an already configured NETCONF connector, you need to send the
following request:

::

   DELETE http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
Connecting to a device supporting only NETCONF 1.0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OpenDaylight is a schema-based distribution and heavily depends on YANG
models. However, some legacy NETCONF devices are not schema-based and
implement just RFC 4741. This type of device does not utilize YANG
models internally and OpenDaylight does not know how to communicate
with such devices, how to validate data, or what the semantics of data
are.

NETCONF connector can communicate with these devices as well, but the
trade-off is reduced usability of the NETCONF mountpoints. Using
RESTCONF with such devices is not supported. Communicating with
schemaless devices from application code is also slightly different.

To connect to a schemaless device, there is an optional configuration
option in the netconf-node-topology model called schemaless. You have
to set this option to true.
Clustered NETCONF connector
~~~~~~~~~~~~~~~~~~~~~~~~~~~

To spawn NETCONF connectors that are cluster-aware, you need to install
the ``odl-netconf-clustered-topology`` Karaf feature.

The ``odl-netconf-topology`` and ``odl-netconf-clustered-topology``
features are considered **INCOMPATIBLE**. They both manage the same
space in the datastore and would issue conflicting writes if
installed together.

Configuration of clustered NETCONF connectors works the same as the
configuration through the topology model in the previous section.

When a new clustered connector is configured, the configuration gets
distributed among the member nodes and a NETCONF connector is spawned on
each node. From these nodes a master is chosen which handles the schema
download from the device and all the communication with the device. You
will be able to read/write to/from the device from all slave nodes due
to the proxy data brokers implemented.

You can use the ``odl-netconf-clustered-topology`` feature in a single
node scenario as well, but the akka-based code will still be used, so
for a scenario where only a single node is used,
``odl-netconf-topology`` is recommended.
Netconf-connector utilization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the connector is up and running, users can utilize the new mount
point instance, either by using RESTCONF or from their application code.
This chapter deals with using RESTCONF; more information for app
developers can be found in the developers guide or in the official
tutorial application **ncmount** that can be found in the coretutorials
project:

- https://github.com/opendaylight/coretutorials/tree/stable/beryllum/ncmount
Reading data from the device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Just invoke (no body needed):

::

   GET
   http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/

This will return the entire content of the operational datastore from
the device. To view just the configuration datastore, change
**operational** in this URL to **config**.
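The operational/config switch can be expressed as a tiny helper
(illustrative, not part of OpenDaylight):

```python
# Sketch: build the mount-point URL for either datastore of a mounted device.
def device_data_url(node, datastore="operational"):
    if datastore not in ("operational", "config"):
        raise ValueError("unknown datastore: %s" % datastore)
    return ("http://localhost:8181/restconf/%s/"
            "network-topology:network-topology/topology/topology-netconf/"
            "node/%s/yang-ext:mount/" % (datastore, node))
```

For example, ``device_data_url("new-netconf-device", "config")`` yields
the configuration-datastore variant of the URL above.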
Writing configuration data to the device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In general, you cannot simply write any data you want to the device. The
data have to conform to the YANG models implemented by the device. In
this example we are adding a new interface-configuration to the mounted
device (assuming the device supports the Cisco-IOS-XR-ifmgr-cfg YANG
model). In fact, this request comes from the tutorial dedicated to the
**ncmount** tutorial app.

Invoke:

::

   POST
   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/Cisco-IOS-XR-ifmgr-cfg:interface-configurations

Payload:

::

   <interface-configuration xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg">
     <interface-name>mpls</interface-name>
     <description>Interface description</description>
     <bandwidth>32</bandwidth>
     <link-status></link-status>
   </interface-configuration>

This should return a 200 response code with no body.
This call is transformed into a couple of NETCONF RPCs. The resulting
NETCONF RPCs that go directly to the device can be found in the
OpenDaylight logs after invoking ``log:set TRACE
org.opendaylight.controller.sal.connect.netconf`` in the Karaf
shell. Seeing the NETCONF RPCs might help with debugging.

This request is very similar to the one where we spawned a new netconf
device. That’s because we used the loopback netconf-connector to write
configuration data into the config-subsystem datastore, and
config-subsystem picked it up from there.
Invoking custom RPCs
^^^^^^^^^^^^^^^^^^^^

Devices can implement any additional RPC, and as long as the device
provides YANG models for it, it can be invoked from OpenDaylight. The
following example shows how to invoke the get-schema RPC (get-schema is
quite common among NETCONF devices). Invoke:

::

   POST
   http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ietf-netconf-monitoring:get-schema

Payload:

::

   <input xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
     <identifier>ietf-yang-types</identifier>
     <version>2013-07-15</version>
   </input>

This call should fetch the source for the ietf-yang-types YANG model
from the mounted device.
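The ``<input>`` body for such RPCs can also be built programmatically; a
minimal sketch (illustrative, not part of OpenDaylight) for get-schema:

```python
# Sketch: build the <input> payload for ietf-netconf-monitoring:get-schema.
# The version element is optional in the model, so it is only emitted
# when a revision is given.
from xml.sax.saxutils import escape

MONITORING_NS = "urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"

def get_schema_input(identifier, version=None):
    lines = ['<input xmlns="%s">' % MONITORING_NS,
             "  <identifier>%s</identifier>" % escape(identifier)]
    if version is not None:
        lines.append("  <version>%s</version>" % escape(version))
    lines.append("</input>")
    return "\n".join(lines)
```

``get_schema_input("ietf-yang-types", "2013-07-15")`` reproduces the
payload shown above.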
Netconf-connector + Netopeer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`Netopeer <https://github.com/cesnet/netopeer>`__ (an open-source
NETCONF server) can be used for testing/exploring NETCONF southbound in
OpenDaylight.

Netopeer installation
^^^^^^^^^^^^^^^^^^^^^

A `Docker <https://www.docker.com/>`__ container with netopeer will be
used in this guide. To install Docker and start the `netopeer
image <https://index.docker.io/u/dockeruser/netopeer/>`__, perform the
following steps:

1. Install docker: http://docs.docker.com/linux/step_one/

2. Start the netopeer image:

   ::

      docker run --rm -t -p 1831:830 dockeruser/netopeer

3. Verify netopeer is running by invoking (netopeer should send its
   HELLO message right away):

   ::

      ssh root@localhost -p 1831 -s netconf
Mounting netopeer NETCONF server
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Preconditions:

- OpenDaylight is started with the ``odl-restconf-all`` and
  ``odl-netconf-connector-all`` features.

- Netopeer is up and running in docker.

Now just follow the chapter: `Spawning
netconf-connector <#_spawning_additional_netconf_connectors_while_the_controller_is_running>`__.
In the payload change the:

- name, e.g., to netopeer

- username/password to your system credentials

- ip to localhost

- port to 1831

After netopeer is mounted successfully, its configuration can be read
using RESTCONF by invoking:

::

   GET
   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/netopeer/yang-ext:mount/
Northbound (NETCONF servers)
----------------------------

OpenDaylight provides two types of NETCONF servers:

- **NETCONF server for config-subsystem (listening by default on port
  1830)**

  - Serves as a default interface for config-subsystem and allows
    users to spawn/reconfigure/destroy modules (or applications) in
    runtime.

- **NETCONF server for MD-SAL (listening by default on port 2830)**

  - Serves as an alternative interface for MD-SAL (besides RESTCONF)
    and allows users to read/write data from MD-SAL’s datastore and to
    invoke its RPCs (NETCONF notifications are not available in the
    Boron release of OpenDaylight).

The reason for having two NETCONF servers is that config-subsystem and
MD-SAL are two different components of OpenDaylight and require a
different approach for NETCONF message handling and data translation.
These two components will probably merge in the future.

Since the Nitrogen release, there is a performance regression in NETCONF
servers accepting SSH connections. While opening a connection takes
less than 10 seconds on Carbon, on Nitrogen the time can increase up to
60 seconds. Please see https://bugs.opendaylight.org/show_bug.cgi?id=9020
NETCONF server for config-subsystem
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This NETCONF server is the primary interface for config-subsystem. It
allows the users to interact with config-subsystem in a standardized
NETCONF manner.

In terms of RFCs, these are supported:

- `RFC-6241 <http://tools.ietf.org/html/rfc6241>`__

- `RFC-5277 <https://tools.ietf.org/html/rfc5277>`__

- `RFC-6470 <https://tools.ietf.org/html/rfc6470>`__

  - (partially, only the schema-change notification is available)

- `RFC-6022 <https://tools.ietf.org/html/rfc6022>`__

For regular users it is recommended to use RESTCONF + the
controller-config loopback mountpoint instead of using pure NETCONF. How
to do that is specific to each component/module/application in
OpenDaylight and can be found in their dedicated user guides.
NETCONF server for MD-SAL
~~~~~~~~~~~~~~~~~~~~~~~~~

This NETCONF server is just a generic interface to MD-SAL in
OpenDaylight. It uses the standard MD-SAL APIs and serves as an
alternative to RESTCONF. It is fully model-driven and supports any data
and RPCs that are supported by MD-SAL.

In terms of RFCs, these are supported:

- `RFC-6241 <http://tools.ietf.org/html/rfc6241>`__

- `RFC-6022 <https://tools.ietf.org/html/rfc6022>`__

- `draft-ietf-netconf-yang-library-06 <https://tools.ietf.org/html/draft-ietf-netconf-yang-library-06>`__

Notifications over NETCONF are not supported in the Boron release.

Install the NETCONF northbound for MD-SAL by installing the
``odl-netconf-mdsal`` feature in Karaf. The default binding port is
**2830**.
Configuration
^^^^^^^^^^^^^

The default configuration can be found in the file *08-netconf-mdsal.xml*.
The file contains the configuration for all necessary dependencies and a
single SSH endpoint starting on port 2830. There is also a (by default
disabled) TCP endpoint. It is possible to start multiple endpoints at
the same time, either in the initial configuration file or while
OpenDaylight is running.

The credentials for the SSH endpoint can also be configured here; the
defaults are admin/admin. Credentials in the SSH endpoint are not yet
managed by the centralized AAA component and have to be configured
separately.
Verifying MD-SAL’s NETCONF server
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

After the NETCONF server is available, it can be examined by a
command-line ssh tool:

::

   ssh admin@localhost -p 2830 -s netconf

The server will respond by sending its HELLO message and can be used as
a regular NETCONF server from then on.
Mounting the MD-SAL’s NETCONF server
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To perform this operation, just spawn a new netconf-connector as
described in `Spawning
netconf-connector <#_spawning_additional_netconf_connectors_while_the_controller_is_running>`__.
Just change the IP to "127.0.0.1", the port to "2830" and the name to
"controller-mdsal".

Now the MD-SAL’s datastore can be read over RESTCONF via NETCONF by
invoking:

::

   GET
   http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/controller-mdsal/yang-ext:mount

This might not seem very useful, since MD-SAL can be accessed
directly from RESTCONF or from application code, but the same method
can be used to mount and control other OpenDaylight instances by the
"master OpenDaylight".
NETCONF testtool
----------------

**NETCONF testtool is a set of standalone runnable jars that can:**

- Simulate NETCONF devices (suitable for scale testing)

- Stress/performance test NETCONF devices

- Stress/performance test RESTCONF devices

These jars are part of OpenDaylight’s controller project and are built
from the NETCONF codebase in OpenDaylight.

Download the testtool from the OpenDaylight Nexus at:
https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.1.0-Boron/

**Nexus contains 3 executable tools:**

- executable.jar - device simulator

- stress.client.tar.gz - NETCONF stress/performance measuring tool

- perf-client.jar - RESTCONF stress/performance measuring tool

Each executable tool provides help. Just invoke:

::

   java -jar <name-of-the-tool.jar> --help
NETCONF device simulator
~~~~~~~~~~~~~~~~~~~~~~~~

NETCONF testtool (or NETCONF device simulator) is a tool that:

- Simulates 1 or more NETCONF devices

- Is suitable for scale, performance or CRUD testing

- Uses the core implementation of the NETCONF server from OpenDaylight

- Generates configuration files for the controller so that the
  OpenDaylight distribution (Karaf) can easily connect to all simulated
  devices

- Provides broad configuration options

- Can start a fully fledged MD-SAL datastore

- Supports notifications
Building testtool
^^^^^^^^^^^^^^^^^

1. Check out the latest NETCONF repository from
   `git <https://git.opendaylight.org/gerrit/#/admin/projects/netconf>`__

2. Move into the ``opendaylight/netconf/tools/netconf-testtool/`` folder

3. Build testtool using the ``mvn clean install`` command

Netconf-testtool is now part of the default maven build profile for the
controller and can also be downloaded from Nexus. The executable jar for
testtool can be found at:
`nexus-artifacts <https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.1.0-Boron/>`__
Starting testtool
^^^^^^^^^^^^^^^^^

1. After successfully building or downloading, move into the
   ``opendaylight/netconf/tools/netconf-testtool/target/`` folder, where
   you will find the file
   ``netconf-testtool-1.1.0-SNAPSHOT-executable.jar`` (or, if downloaded
   from Nexus, just take that jar file).

2. Execute this file using, e.g.:

   ::

      java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar

This execution runs the testtool with default values for all parameters,
and you should see this log output from the testtool:

::

   10:31:08.206 [main] INFO  o.o.c.n.t.t.NetconfDeviceSimulator - Starting 1, SSH simulated devices starting on port 17830
   10:31:08.675 [main] INFO  o.o.c.n.t.t.NetconfDeviceSimulator - All simulated devices started successfully from port 17830 to 17830
Default parameters
^^^^^^^^^^^^^^^^^^

The default parameters for testtool are:

- Run 1 simulated device

- Device port is 17830

- YANG modules used by the device are only: ietf-netconf-monitoring,
  ietf-yang-types, ietf-inet-types (these modules are required for the
  device to support NETCONF monitoring and are included in the testtool)

- Connection timeout is set to 30 minutes (quite high, but when testing
  with 10000 devices it might take some time for all of them to fully
  establish a connection)

- Debug level is set to false

- No distribution is modified to connect automatically to the NETCONF
  testtool
To verify that the simulated device is up and running, we can try to
connect to it using a command line ssh tool. Execute this command to
connect to the device:

.. code-block:: bash

    ssh admin@localhost -p 17830 -s netconf

Just accept the server with yes (if required) and provide any password
(testtool accepts all users with all passwords). You should see the
hello message sent by the simulated device.
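Before attempting the NETCONF subsystem handshake, it can help to confirm that the device port is accepting TCP connections at all. A minimal probe (a generic sketch, not part of the testtool):

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# With a simulated device running: port_open("localhost", 17830) -> True
```

If the probe fails, check that the testtool process is still running and that no firewall sits between you and the device port.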
Testtool help
^^^^^^^^^^^^^

.. code-block:: none

    usage: netconf testtool [-h] [--edit-content EDIT-CONTENT] [--async-requests {true,false}] [--thread-amount THREAD-AMOUNT] [--throttle THROTTLE]
                            [--auth AUTH AUTH] [--controller-destination CONTROLLER-DESTINATION] [--device-count DEVICES-COUNT]
                            [--devices-per-port DEVICES-PER-PORT] [--schemas-dir SCHEMAS-DIR] [--notification-file NOTIFICATION-FILE]
                            [--initial-config-xml-file INITIAL-CONFIG-XML-FILE] [--starting-port STARTING-PORT]
                            [--generate-config-connection-timeout GENERATE-CONFIG-CONNECTION-TIMEOUT]
                            [--generate-config-address GENERATE-CONFIG-ADDRESS] [--generate-configs-batch-size GENERATE-CONFIGS-BATCH-SIZE]
                            [--distribution-folder DISTRO-FOLDER] [--ssh {true,false}] [--exi {true,false}] [--debug {true,false}]
                            [--md-sal {true,false}] [--time-out TIME-OUT] [-ip IP] [--thread-pool-size THREAD-POOL-SIZE] [--rpc-config RPC-CONFIG]

    optional arguments:
      -h, --help             show this help message and exit
      --edit-content EDIT-CONTENT
      --async-requests {true,false}
      --thread-amount THREAD-AMOUNT
                             The number of threads to use for configuring devices.
      --throttle THROTTLE    Maximum amount of async requests that can be open at a time, with multiple threads this gets divided among all threads
      --auth AUTH AUTH       Username and password for HTTP basic authentication in order username password.
      --controller-destination CONTROLLER-DESTINATION
                             Ip address and port of controller. Must be in following format <ip>:<port> if available it will be used for spawning
                             netconf connectors via topology configuration as a part of URI. Example (http://<controller
                             destination>/restconf/config/network-topology:network-topology/topology/topology-netconf/node/<node-id>) otherwise it will
                             just start simulated devices and skip the execution of PUT requests
      --device-count DEVICES-COUNT
                             Number of simulated netconf devices to spin. This is the number of actual ports open for the devices.
      --devices-per-port DEVICES-PER-PORT
                             Amount of config files generated per port to spoof more devices than are actually running
      --schemas-dir SCHEMAS-DIR
                             Directory containing yang schemas to describe simulated devices. Some schemas e.g. netconf monitoring and inet types are
                             included by default
      --notification-file NOTIFICATION-FILE
                             Xml file containing notifications that should be sent to clients after create subscription is called
      --initial-config-xml-file INITIAL-CONFIG-XML-FILE
                             Xml file containing initial simulated configuration to be returned via get-config rpc
      --starting-port STARTING-PORT
                             First port for simulated device. Each other device will have previous+1 port number
      --generate-config-connection-timeout GENERATE-CONFIG-CONNECTION-TIMEOUT
                             Timeout to be generated in initial config files
      --generate-config-address GENERATE-CONFIG-ADDRESS
                             Address to be placed in generated configs
      --generate-configs-batch-size GENERATE-CONFIGS-BATCH-SIZE
                             Number of connector configs per generated file
      --distribution-folder DISTRO-FOLDER
                             Directory where the karaf distribution for controller is located
      --ssh {true,false}     Whether to use ssh for transport or just pure tcp
      --exi {true,false}     Whether to use exi to transport xml content
      --debug {true,false}   Whether to use debug log level instead of INFO
      --md-sal {true,false}  Whether to use md-sal datastore instead of default simulated datastore.
      --time-out TIME-OUT    The maximum time in seconds for executing each PUT request
      -ip IP                 Ip address which will be used for creating a socket address. It can either be a machine name, such as java.sun.com, or a
                             textual representation of its IP address.
      --thread-pool-size THREAD-POOL-SIZE
                             The number of threads to keep in the pool, when creating a device simulator. Even if they are idle.
      --rpc-config RPC-CONFIG
                             Rpc config file. It can be used to define custom rpc behavior, or override the default one. Usable for testing buggy device
                             behavior.
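As the ``--starting-port`` description above notes, each simulated device is assigned the previous port plus one. A small sketch of that arithmetic (a hypothetical helper, not testtool code; the ``<port>-sim-device`` node-id convention matches the RESTCONF URLs used later in this guide):

```python
def simulated_devices(starting_port: int, device_count: int) -> list[str]:
    """Return the node ids of the simulated devices, one per open port."""
    # Each device gets the previous port + 1, starting at --starting-port.
    return [f"{starting_port + i}-sim-device" for i in range(device_count)]


print(simulated_devices(17830, 3))
# ['17830-sim-device', '17831-sim-device', '17832-sim-device']
```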
Testtool default simple datastore supported operations:

get-schema
    returns YANG schemas loaded from user specified directory,

edit-config
    always returns OK and stores the XML from the input in a local
    variable available for get-config and get RPC. Every edit-config
    replaces the previous data,

commit
    always returns OK, but does not actually commit the data,

get-config
    returns local XML stored by edit-config,

get
    returns local XML stored by edit-config with netconf-state subtree,
    but also supports filtering.

lock & unlock
    returns always OK with no lock guarantee

create-subscription
    returns always OK and after the operation is triggered, provided
    NETCONF notifications (if any) are fed to the client. No filtering
    or stream recognition is supported.

Note: when operation="delete" is present in the payload for edit-config,
it will wipe its local store to simulate the removal of data.
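The simple datastore semantics above can be sketched as a toy model (an illustration only, not the testtool's actual implementation):

```python
class SimpleDatastore:
    """Toy model of the testtool's default simple datastore semantics."""

    def __init__(self):
        self.data = None  # local XML stored by edit-config

    def edit_config(self, xml: str, operation: str = "merge") -> str:
        if operation == "delete":
            self.data = None   # wipe the local store to simulate removal
        else:
            self.data = xml    # every edit-config replaces the previous data
        return "ok"

    def commit(self) -> str:
        return "ok"            # always OK, nothing is actually committed

    def get_config(self):
        return self.data       # local XML stored by edit-config


store = SimpleDatastore()
store.edit_config("<cont>x</cont>")
assert store.get_config() == "<cont>x</cont>"
store.edit_config("", operation="delete")
assert store.get_config() is None
```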
When using the MD-SAL datastore, testtool behaves more like a normal
NETCONF server and is suitable for CRUD testing. create-subscription is
not supported when testtool is running with the MD-SAL datastore.
Notifications
^^^^^^^^^^^^^

Testtool supports notifications via the --notification-file switch. To
trigger the notification feed, the create-subscription operation has to be
invoked. The XML file provided should look like this example file:

.. code-block:: xml

    <?xml version='1.0' encoding='UTF-8' standalone='yes'?>

    <!-- Notifications are processed in the order they are defined in XML -->

    <!-- Notification that is sent only once right after create-subscription is called -->

    <!-- Content of each notification entry must contain the entire notification with event time. Event time can be hardcoded, or generated by testtool if XXXX is set as eventtime in this XML -->

    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
        <eventTime>2011-01-04T12:30:46</eventTime>
        <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
            <random-content>single no delay</random-content>
        </random-notification>
    </notification>

    <!-- Repeated notification that is sent 5 times with a 2 second delay in between -->

    <!-- Delay in seconds from previous notification -->

    <!-- Number of times this notification should be repeated -->

    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
        <eventTime>XXXX</eventTime>
        <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
            <random-content>scheduled 5 times 10 seconds each</random-content>
        </random-notification>
    </notification>

    <!-- Single notification that is sent only once right after the previous notification -->

    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
        <eventTime>XXXX</eventTime>
        <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
            <random-content>single with delay</random-content>
        </random-notification>
    </notification>
Connecting testtool with controller Karaf distribution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Auto connect to OpenDaylight
''''''''''''''''''''''''''''

It is possible to make OpenDaylight auto connect to the simulated
devices spawned by testtool (so the user does not have to post a
configuration for every NETCONF connector via RESTCONF). The testtool is
able to modify the OpenDaylight distribution to auto connect to the
simulated devices after the feature ``odl-netconf-connector-all`` is
installed. When running testtool, issue this command (just point the
testtool to the distribution):

.. code-block:: bash

    java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar --device-count 10 --distribution-folder ~/distribution-karaf-0.4.0-SNAPSHOT/ --debug true

With the distribution-folder parameter, the testtool will modify the
distribution to include configuration for netconf-connector to connect
to all simulated devices, so there is no need to spawn
netconf-connectors via RESTCONF.
Running testtool and OpenDaylight on different machines
'''''''''''''''''''''''''''''''''''''''''''''''''''''''

The testtool binds to 0.0.0.0 by default, so it should be accessible from
remote machines. However, you need to set the parameter
"generate-config-address" (when using autoconnect) to the address of the
machine where testtool will be run so OpenDaylight can connect. The
default value is localhost.
Executing operations via RESTCONF on a mounted simulated device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Simulated devices support basic RPCs for editing their config. This part
shows how to edit data for a simulated device via RESTCONF.

The controller and RESTCONF assume that the data that can be manipulated
for a mounted device is described by a YANG schema. For demonstration, we
will define a simple YANG model:

.. code-block:: none

    module test {
        yang-version 1;
        namespace "urn:opendaylight:test";
        prefix "tt";

        revision "2014-10-17";

        container cont {
            leaf l {
                type string;
            }
        }
    }

Save this schema in a file called ``test@2014-10-17.yang`` and store it in a
directory called test-schemas/, e.g., your home folder.
Editing data for simulated device
'''''''''''''''''''''''''''''''''

- Start the device with the following command:

  .. code-block:: bash

      java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar --device-count 10 --distribution-folder ~/distribution-karaf-0.4.0-SNAPSHOT/ --debug true --schemas-dir ~/test-schemas/

- Start OpenDaylight

- Install the odl-netconf-connector-all feature

- Install the odl-restconf feature

- Check that you can see config data for the simulated device by
  executing a GET request to:

  .. code-block:: none

      http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/

- The data should be just an empty data container

- Now execute an edit-config request by executing a POST request to:

  .. code-block:: none

      http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount

  with headers:

  .. code-block:: none

      Accept application/xml
      Content-Type application/xml

  and payload:

  .. code-block:: xml

      <cont xmlns="urn:opendaylight:test">
        <l>Content</l>
      </cont>

- Check that you can see the modified config data for the simulated
  device by executing a GET request to:

  .. code-block:: none

      http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/

- Check that you can see the same modified data in operational for the
  simulated device by executing a GET request to:

  .. code-block:: none

      http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/

Data will be mirrored in the operational datastore only when using the
default simple datastore.
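The GET/POST URLs in the steps above differ only in datastore and node id, so they can be generated with a small helper (hypothetical, not an OpenDaylight API):

```python
def mount_url(node_id: str, datastore: str = "config",
              host: str = "localhost", port: int = 8181) -> str:
    """Build the RESTCONF mount-point URL for a netconf-topology node."""
    return (f"http://{host}:{port}/restconf/{datastore}/"
            "network-topology:network-topology/topology/topology-netconf/"
            f"node/{node_id}/yang-ext:mount/")


print(mount_url("17830-sim-device"))
print(mount_url("17830-sim-device", datastore="operational"))
```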
Testing User defined RPC
^^^^^^^^^^^^^^^^^^^^^^^^

The NETCONF testtool allows using custom RPCs. A custom RPC needs to be
defined in a YANG model provided to the testtool along with the
``--schemas-dir`` parameter.

The input and output of the custom RPC should be provided with the
``--rpc-config`` parameter as a path to a file containing the definition
of the input and output. The format of the custom RPC file is XML, as
shown below.

Start the device with the following command:

.. code-block:: bash

    java -jar netconf/tools/netconf-testtool/target/netconf-testtool-1.7.0-SNAPSHOT-executable.jar --schemas-dir ~/test-schemas/ --rpc-config ~/tmp/customrpc.xml --debug=true

Example YANG model file:
.. code-block:: none

    module example-ops {
        namespace "urn:example-ops:reboot";
        prefix "ops";

        import ietf-yang-types {
            prefix "yang";
        }

        revision "2016-07-07" {
            description "Initial version.";
            reference "example document.";
        }

        rpc reboot {
            description "Reboot operation.";
            input {
                leaf delay {
                    type uint32;
                    units "seconds";
                    default 0;
                    description
                        "Delay in seconds.";
                }
                leaf message {
                    type string;
                    description
                        "Log message to display when reboot is started.";
                }
            }
        }
    }
Example payload (RPC config file customrpc.xml):

.. code-block:: xml

    <rpcs>
        <rpc>
            <input>
                <reboot xmlns="urn:example-ops:reboot">
                    <message>message</message>
                </reboot>
            </input>
            <output>
                <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
                    <ok/>
                </rpc-reply>
            </output>
        </rpc>
    </rpcs>
The RPC can then be invoked via RESTCONF:

.. code-block:: none

    POST http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/example-ops:get-reboot-info

If successful, the command will return code 200.

A working example of a user defined RPC can be found in the
TestToolTest.java class of the tools[netconf-testtool] project.
Slow creation of devices on virtual machines
''''''''''''''''''''''''''''''''''''''''''''

When testtool seems to take an unusually long time to create the devices,
use this flag when running it:

.. code-block:: none

    -Dorg.apache.sshd.registerBouncyCastle=false
Too many open files
'''''''''''''''''''

When testtool or OpenDaylight starts to fail with a TooManyFilesOpen
exception, you need to increase the limit of open files in your OS. To
find out the limit in linux, execute:

.. code-block:: bash

    ulimit -a

Example sufficient configuration in linux:

.. code-block:: none

    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 63338
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 500000
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 63338
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

To set these limits, edit the file /etc/security/limits.conf, for example:

.. code-block:: none

    *         hard    nofile      500000
    *         soft    nofile      500000
    root      hard    nofile      500000
    root      soft    nofile      500000
The testtool might end unexpectedly with a simple message: "Killed".
This means that the OS killed the tool due to too much memory consumed
or too many threads spawned. To find out the reason on linux, you can use
this command:

.. code-block:: bash

    dmesg | egrep -i -B100 'killed process'

Also take a look at the file /proc/sys/kernel/threads-max. It limits
the number of threads spawned by a process. A sufficient (but probably
much more than enough) value is, e.g., 126676.
NETCONF stress/performance measuring tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is basically a NETCONF client that puts NETCONF servers under a heavy
load of NETCONF RPCs and measures the time until a configurable number
of them is processed.

RESTCONF stress-performance measuring tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Very similar to the NETCONF stress tool, with the difference of using the
RESTCONF protocol instead of NETCONF.
YANGLIB remote repository
-------------------------

There are scenarios in NETCONF deployment that require a centralized
YANG models repository. The YANGLIB plugin provides such a remote repository.

To start this plugin, you have to install the odl-yanglib feature. Then you
have to configure YANGLIB either through RESTCONF or NETCONF. We will
show how to configure YANGLIB through RESTCONF.

YANGLIB configuration through RESTCONF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You have to specify the local YANG modules directory you want to provide.
Then you have to specify the address and port where you want to provide YANG
sources. For example, we want to serve yang sources from the folder /sources
on the localhost:5000 address. The configuration for this scenario will be
as follows:

.. code-block:: none

    PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/yanglib:yanglib/example
Headers:

- Accept: application/xml

- Content-Type: application/xml

Payload:

.. code-block:: xml

    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
      <name>example</name>
      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">prefix:yanglib</type>
      <broker xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">
        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
        <name>binding-osgi-broker</name>
      </broker>
      <cache-folder xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">/sources</cache-folder>
      <binding-addr xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">localhost</binding-addr>
      <binding-port xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">5000</binding-port>
    </module>

This should result in a 2xx response and a new YANGLIB instance should be
created. This YANGLIB takes all YANG sources from the /sources folder and
for each generates a URL in the form:

.. code-block:: none

    http://localhost:5000/schemas/{modelName}/{revision}

The YANG source for the particular module will be hosted at this URL.

The YANGLIB instance also writes this URL along with the source identifier
to the ietf-netconf-yang-library/modules-state/module list.
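Given the ``name@revision.yang`` file-naming convention used earlier for YANG sources, the URL mapping described above can be sketched as follows (hypothetical helper, not part of the YANGLIB plugin):

```python
def schema_url(filename: str, host: str = "localhost", port: int = 5000) -> str:
    """Map a "<module>@<revision>.yang" file to its hosted-schema URL."""
    stem = filename.removesuffix(".yang")
    name, revision = stem.split("@")
    return f"http://{host}:{port}/schemas/{name}/{revision}"


print(schema_url("test@2014-10-17.yang"))
# http://localhost:5000/schemas/test/2014-10-17
```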
Netconf-connector with YANG library as fallback
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is an optional configuration in netconf-connector called
yang-library. You can specify a YANG library to be plugged in as an
additional source provider into the mount's schema repository. Since the
YANGLIB plugin advertises the provided modules through the yang-library
model, we can use it in the mount point's configuration as the YANG
library. To do this, we need to modify the configuration of
netconf-connector by adding this:

.. code-block:: xml

    <yang-library xmlns="urn:opendaylight:netconf-node-topology">
      <yang-library-url xmlns="urn:opendaylight:netconf-node-topology">http://localhost:8181/restconf/operational/ietf-yang-library:modules-state</yang-library-url>
      <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
      <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
    </yang-library>

This will register the YANGLIB-provided sources as fallback schemas for
the particular mount point.
NETCONF Call Home
-----------------

.. important::

    The call home feature is experimental and will change in a future
    release. In particular, the YANG models will change to those specified
    in `RFC 8071 <https://tools.ietf.org/html/rfc8071>`__.

Call Home Installation
~~~~~~~~~~~~~~~~~~~~~~

The ODL Call-Home server is installed in Karaf by installing the Karaf
feature ``odl-netconf-callhome-ssh``. The RESTCONF feature is recommended
for configuring Call Home & testing its functionality.

.. code-block:: none

    feature:install odl-netconf-callhome-ssh

.. note::

    In order to test Call Home functionality we recommend Netopeer.
    See `Netopeer Call Home <https://github.com/CESNET/netopeer/wiki/CallHome>`__ to learn how to enable call-home on Netopeer.
Northbound Call-Home API
~~~~~~~~~~~~~~~~~~~~~~~~

The northbound Call Home API is used for administering the Call-Home server.
The following describes this configuration.

Global Configuration
^^^^^^^^^^^^^^^^^^^^

Configuring global credentials
''''''''''''''''''''''''''''''

The ODL Call-Home server allows the user to configure global credentials,
which will be used for devices which do not have device-specific credentials
configured.

This is done by creating
``/odl-netconf-callhome-server:netconf-callhome-server/global/credentials``
with username and passwords specified.
*Configuring global username & passwords to try*

.. code-block:: none

    PUT
    /restconf/config/odl-netconf-callhome-server:netconf-callhome-server/global/credentials HTTP/1.1
    Content-Type: application/json
    Accept: application/json

.. code-block:: json

    {
      "credentials": {
        "username": "example",
        "passwords": [ "first-password-to-try", "second-password-to-try" ]
      }
    }
Configuring to accept any ssh server key using global credentials
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

By default, the Netconf Call-Home server accepts only incoming connections
from allowed devices
``/odl-netconf-callhome-server:netconf-callhome-server/allowed-devices``.
If the user desires to allow all incoming connections, it is possible to set
``accept-all-ssh-keys`` to ``true`` in
``/odl-netconf-callhome-server:netconf-callhome-server/global``.

The names of these devices in ``netconf-topology`` will be in the format
``ip-address:port``. For naming devices, see Device-Specific
Configuration.

*Allowing unknown devices to connect*

This is a debug feature and should not be used in production. Besides being an obvious
security issue, this also causes the Call-Home server to drastically increase its output
to the log.

.. code-block:: none

    PUT
    /restconf/config/odl-netconf-callhome-server:netconf-callhome-server/global HTTP/1.1
    Content-Type: application/json
    Accept: application/json

.. code-block:: json

    {
      "global": {
        "accept-all-ssh-keys": "true"
      }
    }
Device-Specific Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Allowing Device & Configuring Name
''''''''''''''''''''''''''''''''''

The Netconf Call Home server uses the device-provided SSH server key (host key)
to identify the device. The pairing of name and server key is configured in
``/odl-netconf-callhome-server:netconf-callhome-server/allowed-devices``.
This list is colloquially called a whitelist.

If the Call-Home server finds the SSH host key in the whitelist, it continues
to negotiate a NETCONF connection over an SSH session. If the SSH host key is
not found, the connection between the Call Home server and the device is dropped
immediately. In either case, the device that connects to the Call Home server
leaves a record of its presence in the operational store.

*Example of configuring a device*

.. code-block:: none

    PUT
    /restconf/config/odl-netconf-callhome-server:netconf-callhome-server/allowed-devices/device/example HTTP/1.1
    Content-Type: application/json
    Accept: application/json

.. code-block:: json

    {
      "device": {
        "unique-id": "example",
        "ssh-host-key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDHoH1jMjltOJnCt999uaSfc48ySutaD3ISJ9fSECe1Spdq9o9mxj0kBTTTq+2V8hPspuW75DNgN+V/rgJeoUewWwCAasRx9X4eTcRrJrwOQKzb5Fk+UKgQmenZ5uhLAefi2qXX/agFCtZi99vw+jHXZStfHm9TZCAf2zi+HIBzoVksSNJD0VvPo66EAvLn5qKWQD4AdpQQbKqXRf5/W8diPySbYdvOP2/7HFhDukW8yV/7ZtcywFUIu3gdXsrzwMnTqnATSLPPuckoi0V2jd8dQvEcu1DY+rRqmqu0tEkFBurlRZDf1yhNzq5xWY3OXcjgDGN+RxwuWQK3cRimcosH"
      }
    }
Configuring Device with Device-specific Credentials
'''''''''''''''''''''''''''''''''''''''''''''''''''

The Call Home server also allows configuring credentials on a per-device basis.
This is done by introducing a ``credentials`` container into the
device-specific configuration. The format is the same as for global credentials.

*Configuring Device with Credentials*

.. code-block:: none

    PUT
    /restconf/config/odl-netconf-callhome-server:netconf-callhome-server/allowed-devices/device/example HTTP/1.1
    Content-Type: application/json
    Accept: application/json

.. code-block:: json

    {
      "device": {
        "unique-id": "example",
        "credentials": {
          "username": "example",
          "passwords": [ "password" ]
        },
        "ssh-host-key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDHoH1jMjltOJnCt999uaSfc48ySutaD3ISJ9fSECe1Spdq9o9mxj0kBTTTq+2V8hPspuW75DNgN+V/rgJeoUewWwCAasRx9X4eTcRrJrwOQKzb5Fk+UKgQmenZ5uhLAefi2qXX/agFCtZi99vw+jHXZStfHm9TZCAf2zi+HIBzoVksSNJD0VvPo66EAvLn5qKWQD4AdpQQbKqXRf5/W8diPySbYdvOP2/7HFhDukW8yV/7ZtcywFUIu3gdXsrzwMnTqnATSLPPuckoi0V2jd8dQvEcu1DY+rRqmqu0tEkFBurlRZDf1yhNzq5xWY3OXcjgDGN+RxwuWQK3cRimcosH"
      }
    }
Operational Status
''''''''''''''''''

Once an entry is made into the config side of "allowed-devices", the Call-Home
server will populate a corresponding operational device that is the same as the
config device but has an additional status. By default, this status is
*DISCONNECTED*. Once a device calls home, this status will change to one of:

*CONNECTED* — The device is currently connected and the NETCONF mount is
available for network management.

*FAILED_AUTH_FAILURE* — The last attempted connection was unsuccessful because
the Call-Home server was unable to provide the acceptable credentials of the
device. The device is also disconnected and not available for network
management.

*FAILED_NOT_ALLOWED* — The last attempted connection was unsuccessful because
the device was not recognized as an acceptable device. The device is also
disconnected and not available for network management.

*FAILED* — The last attempted connection was unsuccessful for a reason other
than not being allowed to connect or incorrect client credentials. The device
is also disconnected and not available for network management.

*DISCONNECTED* — The device is currently disconnected.
Rogue Devices
'''''''''''''

Devices which are not on the whitelist might try to connect to the Call-Home
server. In these cases, the server will keep a record by instantiating an
operational device. There will be no corresponding config device for these
rogues. They can be identified readily because their device id, rather than
being user-supplied, will be of the form "address:port". Note that if a device
calls back multiple times, there will only be a single operational entry (even
if the port changes); these devices are recognized by their unique host key.
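The naming behaviour described above (whitelisted devices keep their configured ``unique-id``, while rogues get an "address:port" id) can be modelled in a few lines (a simplified sketch, not the Call-Home server's actual code):

```python
def device_record_id(whitelist: dict[str, str], ssh_host_key: str,
                     address: str, port: int) -> str:
    """Pick the operational-store id for a device that called home."""
    if ssh_host_key in whitelist:
        # Known host key: use the user-supplied unique-id; NETCONF session
        # negotiation continues over the SSH session.
        return whitelist[ssh_host_key]
    # Unknown host key: the connection is dropped, but an operational
    # record is still kept under an "address:port" id.
    return f"{address}:{port}"


wl = {"AAAAB3Nza...": "example"}
assert device_record_id(wl, "AAAAB3Nza...", "10.0.0.5", 4334) == "example"
assert device_record_id(wl, "some-other-key", "10.0.0.5", 4334) == "10.0.0.5:4334"
```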
Southbound Call-Home API
~~~~~~~~~~~~~~~~~~~~~~~~

The Call-Home server listens for incoming TCP connections and assumes that the
other side of the connection is a device calling home via a NETCONF connection
with SSH for management. The server uses port 6666 by default, and this can be
configured via a blueprint configuration file.

The device **must** initiate the connection, and the server will not try to
re-establish the connection in case of a drop. By requirement, the server
cannot assume it has connectivity to the device due to NAT or firewalls,
among other reasons.
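The direction inversion described above can be illustrated with plain sockets (a sketch only; the real server speaks SSH/NETCONF on port 6666 by default, while an ephemeral port is used here so the example is runnable anywhere):

```python
import socket
import threading


def call_home_server(srv: socket.socket, results: list) -> None:
    """The server side only listens; it never dials out to the device."""
    conn, _peer = srv.accept()         # wait for a device to call home
    results.append(conn.recv(4096))    # read whatever the device sends first
    conn.close()


srv = socket.socket()
srv.bind(("127.0.0.1", 0))             # the real server defaults to port 6666
srv.listen(1)
results: list = []
t = threading.Thread(target=call_home_server, args=(srv, results))
t.start()

# The "device": it initiates the TCP connection; in real life an SSH session
# is then established over this connection with the usual roles reversed.
dev = socket.create_connection(srv.getsockname())
dev.sendall(b"SSH-2.0-device")         # placeholder for the device's SSH banner
dev.close()
t.join()
srv.close()
print(results[0])
```

Note the asymmetry: if this TCP connection drops, only the device can restore it, which is exactly why the server keeps no reconnect logic.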