+++ /dev/null
-.. _alto-developer-guide:
-
-ALTO Developer Guide
-====================
-
-Overview
---------
-
-The topics of this guide are:
-
-1. How to add alto projects as dependencies;
-
-2. How to put/fetch data from ALTO;
-
-3. Basic API and DataType;
-
-4. How to use customized service implementations.
-
-Adding ALTO Projects as Dependencies
-------------------------------------
-
-Most ALTO packages can be added as dependencies in Maven projects by
-putting the following code in the *pom.xml* file.
-
-::
-
- <dependency>
- <groupId>org.opendaylight.alto</groupId>
- <artifactId>${THE_NAME_OF_THE_PACKAGE_YOU_NEED}</artifactId>
- <version>${ALTO_VERSION}</version>
- </dependency>
-
-The current stable version for ALTO is ``0.3.0-Boron``.
-
-Putting/Fetching data from ALTO
--------------------------------
-
-Using RESTful API
-~~~~~~~~~~~~~~~~~
-
-There are two kinds of RESTful APIs for ALTO: the one provided by
-``alto-northbound`` which follows the formats defined in `RFC
-7285 <https://tools.ietf.org/html/rfc7285>`__, and the one provided by
-RESTCONF whose format is defined by the YANG model proposed in `this
-draft <https://tools.ietf.org/html/draft-shi-alto-yang-model-03>`__.
-
-One way to get the URLs for the resources from ``alto-northbound`` is to
-visit the IRD service first where there is a ``uri`` field for every
-entry. However, the IRD service is not yet implemented so currently the
-developers have to construct the URLs themselves. The base URL is
-``/alto`` and below is a list of the specific paths defined in
-``alto-core/standard-northbound-route`` using Jersey ``@Path``
-annotation:
-
-- ``/ird/{rid}``: the path to access *IRD* services;
-
-- ``/networkmap/{rid}[/{tag}]``: the path to access *Network Map* and
- *Filtered Network Map* services;
-
-- ``/costmap/{rid}[/{tag}[/{mode}/{metric}]]``: the path to access
- *Cost Map* and *Filtered Cost Map* services;
-
-- ``/endpointprop``: the path to access *Endpoint Property* services;
-
-- ``/endpointcost``: the path to access *Endpoint Cost* services.
-
-.. note::
-
- The segments in brackets are optional.
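
These path templates can be filled in mechanically. A minimal Python sketch of the URL construction (the resource ids below are made-up examples, not resources guaranteed to exist on a controller):

```python
# Build alto-northbound paths from the templates listed above.
BASE = "/alto"

def network_map_url(rid, tag=None):
    """Path for the Network Map / Filtered Network Map services."""
    url = f"{BASE}/networkmap/{rid}"
    return url if tag is None else f"{url}/{tag}"

def cost_map_url(rid, tag=None, mode=None, metric=None):
    """Path for the Cost Map / Filtered Cost Map services.
    The mode/metric segments are only meaningful together with a tag."""
    url = f"{BASE}/costmap/{rid}"
    if tag is not None:
        url += f"/{tag}"
        if mode is not None and metric is not None:
            url += f"/{mode}/{metric}"
    return url

full = cost_map_url("default-cost-map", "tag123", "numerical", "routingcost")
# full == "/alto/costmap/default-cost-map/tag123/numerical/routingcost"
```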
-
-If you want to fetch the data using RESTCONF, it is highly recommended
-to take a look at the ``apidoc`` page
-(``http://{controller_ip}:8181/apidoc/explorer/index.html``)
-after installing the ``odl-alto-release`` feature in Karaf.
-
-It is also worth pointing out that ``alto-northbound`` only supports
-``GET`` and ``POST`` operations so it is impossible to manipulate the
-data through its RESTful APIs. To modify the data, use ``PUT`` and
-``DELETE`` methods with RESTCONF.
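
Such a RESTCONF modification can be issued with any HTTP client. A minimal Python sketch, assuming the default ``localhost:8181`` endpoint, ``admin``/``admin`` credentials, and a hypothetical resource path (adjust all three to your deployment):

```python
import base64
import json
import urllib.request

# Hypothetical RESTCONF config path; substitute a real ALTO resource path.
url = "http://localhost:8181/restconf/config/alto-resourcepool:context"
body = json.dumps({"context": []}).encode()

req = urllib.request.Request(url, data=body, method="PUT")
req.add_header("Content-Type", "application/json")
# RESTCONF is protected by HTTP basic authentication (admin/admin by default).
token = base64.b64encode(b"admin:admin").decode()
req.add_header("Authorization", "Basic " + token)
# urllib.request.urlopen(req)  # uncomment to run against a live controller
```

A ``DELETE`` works the same way with ``method="DELETE"`` and no body.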
-
-.. note::
-
- The current implementation uses the ``configuration`` data store and
- that enables the developers to modify the data directly through
- RESTCONF. In the future this approach might be disabled in the core
- packages of ALTO but may still be available as an extension.
-
-Using MD-SAL
-~~~~~~~~~~~~
-
-You can also fetch data from the datastore directly.
-
-First you must get access to the datastore by registering your
-module with a data broker.
-
-Then an ``InstanceIdentifier`` must be created. Here is an example of
-how to build an ``InstanceIdentifier`` for a *network map*:
-
-::
-
- import org.opendaylight...alto...Resources;
- import org.opendaylight...alto...resources.NetworkMaps;
- import org.opendaylight...alto...resources.network.maps.NetworkMap;
- import org.opendaylight...alto...resources.network.maps.NetworkMapKey;
- ...
- protected InstanceIdentifier<NetworkMap> getNetworkMapIID(String resourceId) {
- ResourceId rid = ResourceId.getDefaultInstance(resourceId);
- NetworkMapKey key = new NetworkMapKey(rid);
- InstanceIdentifier<NetworkMap> iid = InstanceIdentifier.builder(Resources.class)
- .child(NetworkMaps.class)
- .child(NetworkMap.class, key)
- .build();
- return iid;
- }
- ...
-
-With the ``InstanceIdentifier`` you can use ``ReadOnlyTransaction``,
-``WriteTransaction`` and ``ReadWriteTransaction`` to manipulate the data
-accordingly. The ``simple-impl`` package, which provides some of the
-RESTful APIs mentioned above, uses this method to get data from the
-datastore and then converts it into RFC7285-compatible objects.
-
-Basic API and DataType
-----------------------
-
-a. alto-basic-types: Defines basic types of the ALTO protocol.
-
-b. alto-service-model-api: Includes the YANG models for the five basic
- ALTO services defined in `RFC
- 7285 <https://tools.ietf.org/html/rfc7285>`__.
-
-c. alto-resourcepool: Manages the meta data of each ALTO service,
- including capabilities and versions.
-
-d. alto-northbound: Provides the root of RFC7285-compatible services at
- http://localhost:8080/alto.
-
-e. alto-northbound-route: Provides the root of the network map resources
- at http://localhost:8080/alto/networkmap/.
-
-How to customize service
-------------------------
-
-Define new service API
-~~~~~~~~~~~~~~~~~~~~~~
-
-Add a new module in ``alto-core/standard-service-models``. For example,
-we name our service model module ``model-example``.
-
-Implement service RPC
-~~~~~~~~~~~~~~~~~~~~~
-
-Add a new module in ``alto-basic`` to implement a service RPC in
-``alto-core``.
-
-Currently ``alto-core/standard-service-models/model-base`` has defined a
-template of the service RPC. You can define your own RPC using
-``augment`` in YANG. Here is an example in ``alto-simpleird``.
-
-.. code::
-
- grouping "alto-ird-request" {
- container "ird-request" {
- }
- }
- grouping "alto-ird-response" {
- container "ird" {
- container "meta" {
- }
- list "resource" {
- key "resource-id";
- leaf "resource-id" {
- type "alto-types:resource-id";
- }
- }
- }
- }
- augment "/base:query/base:input/base:request" {
- case "ird-request-data" {
- uses "alto-ird-request";
- }
- }
- augment "/base:query/base:output/base:response" {
- case "ird-response-data" {
- uses "alto-ird-response";
- }
- }
-
-Register northbound route
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If necessary, you can add a northbound route module in
-``alto-core/standard-northbound-routes``.
+++ /dev/null
-.. _bier-dev-guide:
-
-BIER Developer Guide
-====================
-
-BIER Architecture
------------------
-
-- **Channel**
-
- - Channel (multicast flow) configuration and deploying information management.
-
-- **Common**
-
- - Common YANG models collection.
-
-- **Drivers**
-
- - Southbound NETCONF interface for BIER; it implements the standard interface (ietf-bier).
- If your BFR's NETCONF interface is non-standard, you should add your own driver interface.
-
-- **Sbi-Adapter**
-
- - Adapter for different BIER south-bound NETCONF interfaces.
-
-- **Service**
-
- - Major processor function for BIER.
-
-- **Bierman**
-
- - BIER topology management, and BIER information (BIER, BIER-TE, label info) configuration.
-
-- **Pce**
-
- - Path computation element for BIER-TE.
-
-- **Bierapp**
-
- - BIER UI, which shows the topology and configures BIER/BIER-TE and channels.
-
-
-APIs in BIER
-------------
-
-The sections below give details about the APIs and the configurable
-parameters of each component.
-
-BIER Information Manager
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-API Description
-^^^^^^^^^^^^^^^
-
-- bier/bierman/api/src/main/yang/bier-topology-api.yang
-
- - **load-topology**
-
- - Load the BIER topology and list the topo-names of all BIER topologies.
-
- - **configure-domain**
-
- - Configure domain in given BIER topology.
-
- - **configure-subdomain**
-
- - Configure sub-domain in given BIER domain and topology.
-
- - **delete-domain**
-
- - Delete given domain in given topology.
-
- - **delete-subdomain**
-
- - Delete given sub-domain in given domain and topology.
-
- - **query-topology**
-
- - Query the given topology in the BIER topology, and then display its
- details, such as node and link information.
-
- - **query-node**
-
- - Query given nodes in given topology, and then display these nodes'
- detail, such as information of node-name, router-id,
- termination-point list, BIER domain and sub-domain list, etc.
-
- - **query-link**
-
- - Query given link in given topology, and then display this link's detail.
-
- - **query-domain**
-
- - Query domain in given BIER topology, and then display the domain-id list.
-
- - **query-subdomain**
-
- - Query sub-domain in given domain and given topology, and then display
- the sub-domain-id list.
-
- - **query-subdomain-node**
-
- - Query nodes which have been assigned to given sub-domain and domain in given
- topology, and then display these nodes' details.
-
- - **query-subdomain-link**
-
- - Query links which have been assigned to given sub-domain and domain in given
- topology, and then display these links' details.
-
- - **query-te-subdomain-node**
-
- - Query te-nodes which have been assigned to given sub-domain and domain in given
- topology, and then display these te-nodes' details.
-
- - **query-te-subdomain-link**
-
- - Query te-links which have been assigned to given sub-domain and domain in given
- topology, and then display these te-links' details.
-
-
-- bier/bierman/api/src/main/yang/bier-config-api.yang
-
- - **configure-node**
-
- - Configure node information in the given topology, as defined in ietf-bier,
- such as domains, sub-domains, bitstringlength, bfr-id, encapsulation-type, etc.
-
- - **delete-node**
-
- - Delete the given node, which is assigned to the given sub-domain and domain in
- the given topology.
-
- - **delete-ipv4**
-
- - Delete the BIER mapping entry of IPv4.
-
- - **delete-ipv6**
-
- - Delete the BIER mapping entry of IPv6.
-
-
-- bier/bierman/api/src/main/yang/bier-te-config-api.yang
-
- - **configure-te-node**
-
- - Configure adjacency information for a node, such as domains, sub-domains, si,
- bitstringlength, tpid, bitposition, etc.
-
- - **configure-te-label**
-
- - Configure BIER-TE label range for node.
-
- - **delete-te-label**
-
- - Delete BIER-TE label range of node.
-
- - **delete-te-bsl**
-
- - Delete BIER-TE bitstringlength, including all SIs that belong to this bitstringlength.
-
- - **delete-te-si**
-
- - Delete BIER-TE SI, including all bitpositions that belong to this SI.
-
- - **delete-te-bp**
-
- - Delete the BIER-TE bitposition of an adjacency.
-
-Parameters Description
-^^^^^^^^^^^^^^^^^^^^^^
-
-- **topology-id**
-
- - BIER topology identifier.
-
-- **node-id**
-
- - Node identifier in network topology.
-
-- **latitude**
-
- - Node’s latitude, default value is 0.
-
-- **longitude**
-
- - Node’s longitude, default value is 0.
-
-- **tp-id**
-
- - Termination point identifier.
-
-- **domain-id**
-
- - BIER domain identifier.
-
-- **encapsulation-type**
-
- - Base identity for BIER encapsulation. Default value is "bier-encapsulation-mpls".
-
-- **bitstringlength**
-
- - The bitstringlength type for imposition mode. Its value can be chosen from 64,
- 128, 256, 512, 1024, 2048, and 4096.
-
- - The BitStringLength ("Imposition BitStringLength") and sub-domain ("Imposition
- sub-domain") to use when it imposes (as a BFIR) a BIER encapsulation on a
- particular set of packets.
-
-- **bfr-id**
-
- - BIER bfr identifier. BFR-id is a number in the range [1, 65535].
-
- - Bfr-id is unique within the sub-domain. A BFR-id is a small unstructured positive
- integer. For instance, if a particular BIER sub-domain contains 1,374 BFRs, each
- one could be given a BFR-id in the range 1-1374.
-
- - If a given BFR belongs to more than one sub-domain, it may (though it need not)
- have a different BFR-id for each sub-domain.
-
-- **ipv4-bfr-prefix**
-
- - BIER BFR IPv4 prefix.
-
- - A BFR's BFR-Prefix MUST be an IP address (either IPv4 or IPv6) of the BFR, and MUST be
- unique and routable within the BIER domain. It is RECOMMENDED that the BFR-prefix be a
- loopback address of the BFR. Two BFRs in the same BIER domain MUST NOT be assigned the
- same BFR-Prefix. Note that a BFR in a given BIER domain has the same BFR-prefix in all
- the sub-domains of that BIER domain.
-
-- **ipv6-bfr-prefix**
-
- - BIER BFR IPv6 prefix.
-
-- **sub-domain-id**
-
- - Sub-domain identifier. Each sub-domain is identified by a sub-domain-id in the range [0, 255].
-
- - A BIER domain may contain one or more sub-domains. Each BIER domain MUST contain at least one
- sub-domain, the "default sub-domain" (also denoted "sub-domain zero"). If a BIER domain
- contains more than one sub-domain, each BFR in the domain MUST be provisioned to know the set
- of sub-domains to which it belongs.
-
-- **igp-type**
-
- - The IGP type. Enum type contains OSPF and ISIS.
-
-- **mt-id**
-
- - Multi-topology associated with BIER sub-domain.
-
-- **bitstringlength**
-
- - Disposition bitstringlength.
-
- - The BitStringLengths ("Disposition BitStringLengths") that it will process when
- (as a BFR or BFER) it receives packets from a particular sub-domain.
-
-- **bier-mpls-label-base**
-
- - BIER mpls-label, range in [0, 1048575].
-
-- **bier-mpls-label-range-size**
-
- - BIER mpls-label range size.
-
-- **link-id**
-
- - The identifier of a link in the topology.
-
- - A link is specific to a topology to which it belongs.
-
-
-- **source-node**
-
- - Source node identifier; must be in the same topology.
-
-- **source-tp**
-
- - Termination point within source node that terminates the link.
-
-- **dest-node**
-
- - Destination node identifier; must be in the same topology.
-
-- **dest-tp**
-
- - Termination point within destination node that terminates the link.
-
-- **delay**
-
- - The link delay, default value is 0.
-
-- **loss**
-
- - The packet loss on the link; default value is 0.
-
-Channel Manager
-~~~~~~~~~~~~~~~
-
-API Description
-^^^^^^^^^^^^^^^
-
-- bier/channel/api/src/main/yang/bier-channel-api.yang
-
- - **get-channel**
-
- - Display the names of all channels in the given BIER topology.
-
- - **query-channel**
-
- - Query a specific channel in the given topology and display its information (multicast
- flow information and related BFIR/BFER information).
-
- - **add-channel**
-
- - Create channel with multicast information in given BIER topology.
-
- - **modify-channel**
-
- - Modify the information of the channel created above.
-
- - **remove-channel**
-
- - Remove given channel in given topology.
-
- - **deploy-channel**
-
- - Deploy the channel, and configure the BFIR and BFERs for this multicast flow in the given topology.
-
-Parameters Description
-^^^^^^^^^^^^^^^^^^^^^^
-
-- **topology-id**
-
- - BIER topology identifier.
-
-- **channel-name**
-
- - BIER channel (multicast flow information) name.
-
-- **src-ip**
-
- - The IPv4 address of the multicast source. A value of zero means that the receiver
- is interested in all sources relevant to one group.
-
-- **dst-group**
-
- - The IPv4 address of the multicast group.
-
-- **domain-id**
-
- - BIER domain identifier.
-
-- **sub-domain-id**
-
- - BIER sub-domain identifier.
-
-- **source-wildcard**
-
- - The wildcard information of source, in the range [1, 32].
-
-- **group-wildcard**
-
- - The wildcard information of the multicast group, in the range [1, 32].
-
-- **ingress-node**
-
- - BFIR (Bit-Forwarding Ingress Router).
-
-- **ingress-bfr-id**
-
- - The bfr-id of the BFIR.
-
-- **egress-node**
-
- - BFER (Bit-Forwarding Egress Router).
-
-- **egress-bfr-id**
-
- - The bfr-id of the BFER.
-
-- **bier-forwarding-type**
-
- - The forwarding type, enum type contains BIER and BIER-TE.
-
-.. note:: For more information about BIER terminology, see `YANG Data Model for BIER Protocol <https://datatracker.ietf.org/doc/draft-ietf-bier-bier-yang/?include_text=1>`_.
-
-
-Sample Configurations
----------------------
-
-1. Configure Domain And Sub-domain
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-1.1. Configure Domain
-^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:configure-domain*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "domain": [
- {
- "domain-id": "1"
- },
- {
- "domain-id": "2"
- }
- ]
- }
- }
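
A request like the one above can be sent with any HTTP client. A minimal Python sketch, assuming the default RESTCONF endpoint ``localhost:8181`` and ``admin``/``admin`` credentials:

```python
import base64
import json
import urllib.request

# Payload mirroring the configure-domain sample above.
payload = {
    "input": {
        "topo-id": "bier-topo",
        "domain": [{"domain-id": "1"}, {"domain-id": "2"}],
    }
}
url = ("http://localhost:8181/restconf/operations/"
       "bier-topology-api:configure-domain")

req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                             method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization",
               "Basic " + base64.b64encode(b"admin:admin").decode())
# urllib.request.urlopen(req)  # uncomment to run against a live controller
```

The other RPC samples in this section differ only in the URL suffix and the ``input`` body.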
-
-1.2. Configure Sub-domain
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:configure-subdomain*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "domain-id": "1",
- "sub-domain": [
- {
- "sub-domain-id": "0"
- },
- {
- "sub-domain-id": "1"
- }
- ]
- }
- }
-
-2. Configure Node
-~~~~~~~~~~~~~~~~~
-
-2.1. Configure BIER Parameters
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-config-api:configure-node*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "node-id": "node1",
- "domain": [
- {
- "domain-id": "2",
- "bier-global": {
- "sub-domain": [
- {
- "sub-domain-id": "0",
- "igp-type": "ISIS",
- "mt-id": "1",
- "bfr-id": "3",
- "bitstringlength": "64-bit",
- "af": {
- "ipv4": [
- {
- "bitstringlength": "64",
- "bier-mpls-label-base": "56",
- "bier-mpls-label-range-size": "100"
- }
- ]
- }
- }
- ],
- "encapsulation-type": "bier-encapsulation-mpls",
- "bitstringlength": "64-bit",
- "bfr-id": "33",
- "ipv4-bfr-prefix": "192.168.1.1/24",
- "ipv6-bfr-prefix": "1030:0:0:0:C9B4:FF12:48AA:1A2B/60"
- }
- }
- ]
- }
- }
-
-2.2. Configure BIER-TE label
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-te-config-api:configure-te-label*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "node-id": "node1",
- "label-base": "100",
- "label-range-size": "20"
- }
- }
-
-2.3. Configure BIER-TE Parameters
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-te-config-api:configure-te-node*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "node-id": "node1",
- "te-domain": [
- {
- "domain-id": "1",
- "te-sub-domain": [
- {
- "sub-domain-id": "0",
- "te-bsl": [
- {
- "bitstringlength": "64-bit",
- "te-si": [
- {
- "si": "1",
- "te-bp": [
- {
- "tp-id":"tp1",
- "bitposition": "1"
- }
- ]
- }
- ]
- }
- ]
- }
- ]
- }
- ]
- }
- }
-
-3. Query BIER Topology Information
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-3.1. Load Topology
-^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:load-topology*
-
-no request body.
-
-3.2. Query Topology
-^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-topology*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo"
- }
- }
-
-3.3. Query BIER Node
-^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-node*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "node-id": "node1"
- }
- }
-
-3.4. Query BIER Link
-^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-link*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "node-id": "node1"
- }
- }
-
-3.5. Query Domain
-^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-domain*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo"
- }
- }
-
-3.6. Query Sub-domain
-^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-subdomain*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "domain-id": "1"
- }
- }
-
-3.7. Query Sub-domain Node
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-subdomain-node*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "domain-id": "1",
- "sub-domain-id": "0"
- }
- }
-
-3.8. Query Sub-domain Link
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-subdomain-link*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "domain-id": "1",
- "sub-domain-id": "0"
- }
- }
-
-3.9. Query BIER-TE Sub-domain Node
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-te-subdomain-node*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "domain-id": "1",
- "sub-domain-id": "0"
- }
- }
-
-3.10. Query BIER-TE Sub-domain Link
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:query-te-subdomain-link*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "domain-id": "1",
- "sub-domain-id": "0"
- }
- }
-
-4. BIER Channel Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-4.1. Configure Channel
-^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-channel-api:add-channel*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "name": "channel-1",
- "src-ip": "1.1.1.1",
- "dst-group": "224.1.1.1",
- "domain-id": "1",
- "sub-domain-id": "11",
- "source-wildcard": "24",
- "group-wildcard": "30"
- }
- }
-
-4.2. Modify Channel
-^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-channel-api:modify-channel*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "name": "channel-1",
- "src-ip": "2.2.2.2",
- "dst-group": "225.1.1.1",
- "domain-id": "1",
- "sub-domain-id": "11",
- "source-wildcard": "24",
- "group-wildcard": "30"
- }
- }
-
-5. Deploy Channel
-~~~~~~~~~~~~~~~~~
-
-**REST API** : *POST /restconf/operations/bier-channel-api:deploy-channel*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "channel-name": "channel-1",
- "bier-forwarding-type": "bier-te",
- "ingress-node": "node1",
- "egress-node": [
- {
- "node-id": "node2"
- },
- {
- "node-id": "node3"
- }
- ]
- }
- }
-
-6. Query Channel Information
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-6.1. Get Channel
-^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-channel-api:get-channel*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo"
- }
- }
-
-6.2. Query Channel
-^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-channel-api:query-channel*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "channel-name": [
- "channel-1",
- "channel-2"
- ]
- }
- }
-
-7. Remove Channel
-~~~~~~~~~~~~~~~~~
-
-**REST API** : *POST /restconf/operations/bier-channel-api:remove-channel*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "channel-name": "channel-1"
- }
- }
-
-8. Delete BIER and BIER-TE Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-8.1. Delete BIER Node
-^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-config-api:delete-node*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "node-id": "node3",
- "domain-id": "1",
- "subdomain-id": "0"
- }
- }
-
-8.2. Delete IPv4 of BIER Node
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-config-api:delete-ipv4*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "domain-id": "1",
- "sub-domain-id": "0",
- "node-id": "node1",
- "ipv4": {
- "bier-mpls-label-base": "10",
- "bier-mpls-label-range-size": "16",
- "bitstringlength": "64"
- }
- }
- }
-
-8.3. Delete IPv6 of BIER Node
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-config-api:delete-ipv6*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "domain-id": "1",
- "sub-domain-id": "0",
- "node-id": "node1",
- "ipv6": {
- "bier-mpls-label-base": "10",
- "bier-mpls-label-range-size": "16",
- "bitstringlength": "64"
- }
- }
- }
-
-8.4. Delete BIER-TE BSL
-^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-te-config-api:delete-te-bsl*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "node-id": "node1",
- "domain-id": "1",
- "sub-domain-id": "0",
- "bitstringlength": "64-bit"
- }
- }
-
-8.5. Delete BIER-TE SI
-^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-te-config-api:delete-te-si*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "node-id": "node1",
- "domain-id": "1",
- "sub-domain-id": "0",
- "bitstringlength": "64-bit",
- "si": "1"
- }
- }
-
-8.6. Delete BIER-TE BP
-^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-te-config-api:delete-te-bp*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topology-id": "bier-topo",
- "node-id": "node1",
- "domain-id": "1",
- "sub-domain-id": "0",
- "bitstringlength": "64-bit",
- "si": "1",
- "tp-id": "tp1"
- }
- }
-
-8.7. Delete BIER-TE Label
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-te-config-api:delete-te-label*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "node-id": "node1"
- }
- }
-
-8.8. Delete Sub-domain
-^^^^^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:delete-subdomain*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "domain-id": "1",
- "subdomain-id": "0"
- }
- }
-
-8.9. Delete Domain
-^^^^^^^^^^^^^^^^^^
-
-**REST API** : *POST /restconf/operations/bier-topology-api:delete-domain*
-
-**Sample JSON Data**
-
-.. code:: json
-
- {
- "input": {
- "topo-id": "bier-topo",
- "domain-id": "1"
- }
- }
+++ /dev/null
-CAPWAP Developer Guide
-======================
-
-Overview
---------
-
-The Control And Provisioning of Wireless Access Points (CAPWAP) plugin
-project aims to provide a new southbound interface for the controller to
-monitor and manage CAPWAP-compliant wireless termination point (WTP)
-network devices. The CAPWAP feature provides REST-based
-northbound APIs.
-
-CAPWAP Architecture
--------------------
-
-The CAPWAP feature is implemented as an MD-SAL based provider module,
-which helps discover WTP devices and update their states in the MD-SAL
-operational datastore.
-
-CAPWAP APIs and Interfaces
---------------------------
-
-This section describes the APIs for interacting with the CAPWAP plugin.
-
-Discovered WTPs
-~~~~~~~~~~~~~~~
-
-The CAPWAP project maintains a YANG-based list of discovered CAPWAP WTPs
-in MD-SAL. These models are available via RESTCONF.
-
-- Name: Discovered-WTPs
-
-- URL:
- `http://${ipaddress}:8181/restconf/operational/capwap-impl:capwap-ac-root/ <http://${ipaddress}:8181/restconf/operational/capwap-impl:capwap-ac-root/>`__
-
-- Description: Displays list of discovered WTPs and their basic
- attributes
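
The same list can be fetched programmatically. A minimal Python sketch, assuming a local controller and default ``admin``/``admin`` credentials:

```python
import base64
import json
import urllib.request

# Substitute your controller's address for 127.0.0.1.
url = "http://127.0.0.1:8181/restconf/operational/capwap-impl:capwap-ac-root/"

req = urllib.request.Request(url)
req.add_header("Authorization",
               "Basic " + base64.b64encode(b"admin:admin").decode())
# with urllib.request.urlopen(req) as resp:
#     wtps = json.load(resp)  # discovered WTPs and their basic attributes
```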
-
-API Reference Documentation
----------------------------
-
-Go to
-`http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__,
-sign in, and expand the capwap-impl panel. From there, users can execute
-various API calls to test their CAPWAP deployment.
-
+++ /dev/null
-.. _cardinal-dev-guide:
-
-Cardinal: OpenDaylight Monitoring as a Service
-==============================================
-
-Overview
---------
-
-Cardinal (OpenDaylight Monitoring as a Service) enables OpenDaylight and
-the underlying software defined network to be remotely monitored by
-deployed Network Management Systems (NMS) or Analytics suite. In the
-Boron release, Cardinal adds:
-
-1. OpenDaylight MIB.
-
-2. Enable ODL diagnostics/monitoring to be exposed across SNMP (v2c, v3)
- and REST north-bound.
-
-3. Extend ODL System health, Karaf parameter and feature info, ODL
- plugin scalability and network parameters.
-
-4. Support autonomous notifications (SNMP Traps).
-
-Cardinal Architecture
----------------------
-
-The Cardinal architecture can be found at the below link:
-
-https://wiki.opendaylight.org/images/8/89/Cardinal-ODL_Monitoring_as_a_Service_V2.pdf
-
-Key APIs and Interfaces
------------------------
-
-There are six main APIs for issuing snmpget requests for Karaf info,
-system info, OpenFlow devices, and NETCONF devices. To expose these APIs,
-it is assumed that you already have the ``odl-cardinal`` and ``odl-restconf``
-features installed. You can do that by entering the following at the Karaf console:
-
-::
-
- feature:install odl-cardinal
- feature:install odl-restconf-all
- feature:install odl-l2switch-switch
- feature:install odl-netconf-all
- feature:install odl-netconf-connector-all
- feature:install odl-netconf-mdsal
- feature:install cardinal-features4
- feature:install odl-cardinal-api
- feature:install odl-cardinal-ui
- feature:install odl-cardinal-rest
-
-System Info APIs
-~~~~~~~~~~~~~~~~
-
-Open the REST interface and, using basic authentication, execute the
-REST API for system info:
-
-::
-
- http://localhost:8181/restconf/operational/cardinal:CardinalSystemInfo/
-
-You should get a response code of 200 OK with the
-following output:
-
-::
-
- {
- "CardinalSystemInfo": {
- "odlSystemMemUsage": " 9",
- "odlSystemSysInfo": " OpenDaylight Node Information",
- "odlSystemOdlUptime": " 00:29",
- "odlSystemCpuUsage": " 271",
- "odlSystemHostAddress": " Address of the Host should come up"
- }
- }
-
-Karaf Info APIs
-~~~~~~~~~~~~~~~
-
-Open the REST interface and, using basic authentication, execute the
-REST API for Karaf info:
-
-::
-
- http://localhost:8181/restconf/operational/cardinal-karaf:CardinalKarafInfo/
-
-You should get a response code of 200 OK with the
-following output:
-
-::
-
- {
- "CardinalKarafInfo": {
- "odlKarafBundleListActive1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
- "odlKarafBundleListActive2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
- "odlKarafBundleListActive3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
- "odlKarafBundleListActive4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
- "odlKarafBundleListActive5": " org.apache.karaf.service.guard_3.0.6 [5]",
- "odlKarafBundleListActive6": " org.apache.felix.configadmin_1.8.4 [6]",
- "odlKarafBundleListActive7": " org.apache.felix.fileinstall_3.5.2 [7]",
- "odlKarafBundleListActive8": " org.objectweb.asm.all_5.0.3 [8]",
- "odlKarafBundleListActive9": " org.apache.aries.util_1.1.1 [9]",
- "odlKarafBundleListActive10": " org.apache.aries.proxy.api_1.0.1 [10]",
- "odlKarafBundleListInstalled1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
- "odlKarafBundleListInstalled2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
- "odlKarafBundleListInstalled3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
- "odlKarafBundleListInstalled4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
- "odlKarafBundleListInstalled5": " org.apache.karaf.service.guard_3.0.6 [5]",
- "odlKarafFeatureListInstalled1": " config",
- "odlKarafFeatureListInstalled2": " region",
- "odlKarafFeatureListInstalled3": " package",
- "odlKarafFeatureListInstalled4": " http",
- "odlKarafFeatureListInstalled5": " war",
- "odlKarafFeatureListInstalled6": " kar",
- "odlKarafFeatureListInstalled7": " ssh",
- "odlKarafFeatureListInstalled8": " management",
- "odlKarafFeatureListInstalled9": " odl-netty",
- "odlKarafFeatureListInstalled10": " odl-lmax",
- "odlKarafBundleListResolved1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
- "odlKarafBundleListResolved2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
- "odlKarafBundleListResolved3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
- "odlKarafBundleListResolved4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
- "odlKarafBundleListResolved5": " org.apache.karaf.service.guard_3.0.6 [5]",
- "odlKarafFeatureListUnInstalled1": " aries-annotation",
- "odlKarafFeatureListUnInstalled2": " wrapper",
- "odlKarafFeatureListUnInstalled3": " service-wrapper",
- "odlKarafFeatureListUnInstalled4": " obr",
- "odlKarafFeatureListUnInstalled5": " http-whiteboard",
- "odlKarafFeatureListUnInstalled6": " jetty",
- "odlKarafFeatureListUnInstalled7": " webconsole",
- "odlKarafFeatureListUnInstalled8": " scheduler",
- "odlKarafFeatureListUnInstalled9": " eventadmin",
- "odlKarafFeatureListUnInstalled10": " jasypt-encryption"
- }
- }
-
-OpenFlowInfo APIs
-~~~~~~~~~~~~~~~~~
-
-Open the REST interface and, using basic authentication, execute the following REST API for system info:
-
-http://localhost:8181/restconf/operational/cardinal-openflow:Devices
-
-You should receive a 200 OK response with output similar to the following:
-
-::
-
- {
- "Devices": {
- "openflow": [
- {
- "macAddress": "6a:80:ef:06:d3:46",
- "status": "Connected",
- "flowStats": " ",
- "interface": "s1",
- "manufacturer": "Nicira, Inc.",
- "nodeName": "openflow:1:LOCAL",
- "meterStats": " "
- },
- {
- "macAddress": "32:56:c7:41:5d:9a",
- "status": "Connected",
- "flowStats": " ",
- "interface": "s2-eth2",
- "manufacturer": "Nicira, Inc.",
- "nodeName": "openflow:2:2",
- "meterStats": " "
- },
- {
- "macAddress": "36:a8:3b:fe:e2:21",
- "status": "Connected",
- "flowStats": " ",
- "interface": "s3-eth1",
- "manufacturer": "Nicira, Inc.",
- "nodeName": "openflow:3:1",
- "meterStats": " "
- }
- ]
- }
- }
-
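-As a quick sanity check, the response above can also be consumed programmatically.
-The following sketch (illustrative only, not part of Cardinal) parses an abridged
-copy of the sample payload and lists the nodes reporting a ``Connected`` status:

```python
import json

# Abridged sample response body from the cardinal-openflow:Devices endpoint.
response_body = """
{
  "Devices": {
    "openflow": [
      {"macAddress": "6a:80:ef:06:d3:46", "status": "Connected",
       "interface": "s1", "nodeName": "openflow:1:LOCAL"},
      {"macAddress": "32:56:c7:41:5d:9a", "status": "Connected",
       "interface": "s2-eth2", "nodeName": "openflow:2:2"}
    ]
  }
}
"""

def connected_nodes(body: str) -> list[str]:
    """Return node names of all devices whose status is 'Connected'."""
    devices = json.loads(body)["Devices"]["openflow"]
    return [d["nodeName"] for d in devices if d["status"] == "Connected"]

print(connected_nodes(response_body))
```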
-
-Configuration for NETCONF devices:
-
-1. To configure or update a netconf-connector via the topology, send the following request to RESTCONF:
-
-::
-
- Method: PUT
- URI: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
- Headers:
- Accept: application/xml
- Content-Type: application/xml
-
-Payload:
-
-.. code-block:: xml
-
- <node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
- <node-id>new-netconf-device</node-id>
- <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
- <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
- <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
- <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
- <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
- <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
- </node>
-
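-The XML payload above can be generated programmatically instead of hand-edited.
-A minimal stdlib-only sketch (the PUT request itself, with basic authentication,
-is left out) might look like:

```python
import xml.etree.ElementTree as ET

TOPO_NS = "urn:TBD:params:xml:ns:yang:network-topology"
NETCONF_NS = "urn:opendaylight:netconf-node-topology"

def netconf_node_payload(node_id: str, host: str, port: int,
                         username: str, password: str) -> str:
    """Build the <node> payload for the topology-netconf PUT shown above."""
    node = ET.Element(ET.QName(TOPO_NS, "node"))
    ET.SubElement(node, ET.QName(TOPO_NS, "node-id")).text = node_id
    fields = {"host": host, "port": str(port), "username": username,
              "password": password, "tcp-only": "false", "keepalive-delay": "0"}
    for name, value in fields.items():
        ET.SubElement(node, ET.QName(NETCONF_NS, name)).text = value
    return ET.tostring(node, encoding="unicode")

payload = netconf_node_payload("new-netconf-device", "127.0.0.1",
                               17830, "admin", "admin")
```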
-2. To delete a netconf-connector, issue a DELETE request to the following URI:
-http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
-
-NetConf Info APIs
-~~~~~~~~~~~~~~~~~
-
-Open the REST interface and, using basic authentication, execute the following REST API for system info:
-
-http://localhost:8181/restconf/operational/cardinal-netconf:Devices
-
-You should receive a 200 OK response with output similar to the following:
-
-::
-
- {
- "Devices": {
- "netconf": [
- {
- "status": "connecting",
- "host": "127.0.0.1",
- "nodeId": "new-netconf-device1",
- "port": "17830"
- },
- {
- "status": "connecting",
- "host": "127.0.0.1",
- "nodeId": "new-netconf-device",
- "port": "17830"
- },
- {
- "status": "connecting",
- "host": "127.0.0.1",
- "nodeId": "controller-config",
- "port": "1830"
- }
- ]
- }
- }
+++ /dev/null
-.. _didm-developer-guide:
-
-DIDM Developer Guide
-====================
-
-Overview
---------
-
-The Device Identification and Driver Management (DIDM) project addresses
-the need to provide device-specific functionality. Device-specific
-functionality is code that performs a feature, and the code is
-knowledgeable of the capability and limitations of the device. For
-example, configuring VLANs and adjusting FlowMods are features, and
-there may be different implementations for different device types.
-Device-specific functionality is implemented as Device Drivers. Device
-Drivers need to be associated with the devices they can be used with. To
-determine this association requires the ability to identify the device
-type.
-
-DIDM Architecture
------------------
-
-The DIDM project creates the infrastructure to support the following
-functions:
-
-- **Discovery** - Determination that a device exists in the controller
- management domain and connectivity to the device can be established.
- For devices that support the OpenFlow protocol, the existing
- discovery mechanism in OpenDaylight suffices. Devices that do not
- support OpenFlow will be discovered through manual means such as the
- operator entering device information via GUI or REST API.
-
-- **Identification** – Determination of the device type.
-
-- **Driver Registration** – Registration of Device Drivers as routed
- RPCs.
-
-- **Synchronization** – Collection of device information, device
- configuration, and link (connection) information.
-
-- **Data Models for Common Features** – Data models will be defined to
- perform common features such as VLAN configuration. For example,
- applications can configure a VLAN by writing the VLAN data to the
- data store as specified by the common data model.
-
-- **RPCs for Common Features** – Configuring VLANs and adjusting
- FlowMods are example of features. RPCs will be defined that specify
- the APIs for these features. Drivers implement features for specific
- devices and support the APIs defined by the RPCs. There may be
- different Driver implementations for different device types.
-
-Key APIs and Interfaces
------------------------
-
-.. _didm-flow-objective-api:
-
-FlowObjective API
-~~~~~~~~~~~~~~~~~
-
-The following is the list of APIs used to create flow objectives that
-install flow rules in an OpenFlow switch in a pipeline-agnostic way.
-These APIs are currently consumed by the Atrium project.
-
-Install the Forwarding Objective:
-
-``http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:forward``
-
-Install the Filter Objective:
-
-``http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:filter``
-
-Install the Next Objective:
-
-``http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:next``
-
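-The three RPC endpoints above share a common RESTCONF operations URL shape.
-A tiny helper (purely illustrative, not part of DIDM) can build them:

```python
def restconf_operation_url(controller_ip: str, operation: str,
                           port: int = 8181) -> str:
    """Build the RESTCONF URL for invoking an RPC such as the flow objectives."""
    return f"http://{controller_ip}:{port}/restconf/operations/{operation}"

for op in ("atrium-flow-objective:forward",
           "atrium-flow-objective:filter",
           "atrium-flow-objective:next"):
    print(restconf_operation_url("127.0.0.1", op))
```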
-Flow mod driver API
-~~~~~~~~~~~~~~~~~~~
-
-This release includes a flow mod driver for the HP 3800. The
-driver adjusts flows and pushes them to the device. The API takes
-the flow to be adjusted as input and returns the adjusted flow as
-output in the REST output container. Here is the REST API to adjust and
-push flows to an HP 3800 device:
-
-::
-
- http://<CONTROLLER-IP>:8181/restconf/operations/openflow-feature:adjust-flow
-
-Here is an example of an ARP flow and how it gets adjusted and pushed to
-an HP 3800 device:
-
-**adjust-flow input.**
-
-::
-
- <?xml version="1.0" encoding="UTF-8" standalone="no"?>
- <input xmlns="urn:opendaylight:params:xml:ns:yang:didm:drivers:openflow" xmlns:opendaylight-inventory="urn:opendaylight:inventory">
- <node>/opendaylight-inventory:nodes/opendaylight-inventory:node[opendaylight-inventory:id='openflow:673249119553088']</node>
- <flow>
- <match>
- <ethernet-match>
- <ethernet-type>
- <type>2054</type>
- </ethernet-type>
- </ethernet-match>
- </match>
- <flags>SEND_FLOW_REM</flags>
- <priority>0</priority>
- <flow-name>ARP_FLOW</flow-name>
- <instructions>
- <instruction>
- <order>0</order>
- <apply-actions>
- <action>
- <order>0</order>
- <output-action>
- <output-node-connector>CONTROLLER</output-node-connector>
- <max-length>65535</max-length>
- </output-action>
- </action>
- <action>
- <order>1</order>
- <output-action>
- <output-node-connector>NORMAL</output-node-connector>
- <max-length>65535</max-length>
- </output-action>
- </action>
- </apply-actions>
- </instruction>
- </instructions>
- <idle-timeout>180</idle-timeout>
- <hard-timeout>1800</hard-timeout>
- <cookie>10</cookie>
- </flow>
- </input>
-
-In the output, you can see that the table ID has been identified for the
-given flow and two flow mods are created as a result of the adjustment.
-The first catches ARP packets in hardware table 100 with an instruction
-to go to table 200. The second flow mod is in table 200 with two actions:
-output normal and output controller.
-
-**adjust-flow output.**
-
-::
-
- {
- "output": {
- "flow": [
- {
- "idle-timeout": 180,
- "instructions": {
- "instruction": [
- {
- "order": 0,
- "apply-actions": {
- "action": [
- {
- "order": 1,
- "output-action": {
- "output-node-connector": "NORMAL",
- "max-length": 65535
- }
- },
- {
- "order": 0,
- "output-action": {
- "output-node-connector": "CONTROLLER",
- "max-length": 65535
- }
- }
- ]
- }
- }
- ]
- },
- "strict": false,
- "table_id": 200,
- "flags": "SEND_FLOW_REM",
- "cookie": 10,
- "hard-timeout": 1800,
- "match": {
- "ethernet-match": {
- "ethernet-type": {
- "type": 2054
- }
- }
- },
- "flow-name": "ARP_FLOW",
- "priority": 0
- },
- {
- "idle-timeout": 180,
- "instructions": {
- "instruction": [
- {
- "order": 0,
- "go-to-table": {
- "table_id": 200
- }
- }
- ]
- },
- "strict": false,
- "table_id": 100,
- "flags": "SEND_FLOW_REM",
- "cookie": 10,
- "hard-timeout": 1800,
- "match": {},
- "flow-name": "ARP_FLOW",
- "priority": 0
- }
- ]
- }
- }
-
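-The adjustment result can be verified programmatically. This sketch (an
-illustration, parsing an abridged copy of the output above) checks that the
-two expected flow mods were produced:

```python
import json

# Abridged adjust-flow output from the example above.
adjusted = json.loads("""
{
  "output": {
    "flow": [
      {"table_id": 200,
       "instructions": {"instruction": [
         {"order": 0, "apply-actions": {"action": [
           {"order": 1, "output-action": {"output-node-connector": "NORMAL"}},
           {"order": 0, "output-action": {"output-node-connector": "CONTROLLER"}}]}}]}},
      {"table_id": 100,
       "instructions": {"instruction": [
         {"order": 0, "go-to-table": {"table_id": 200}}]}}
    ]
  }
}
""")

def table_ids(output: dict) -> set[int]:
    """Collect the table IDs of all adjusted flow mods."""
    return {flow["table_id"] for flow in output["output"]["flow"]}

flows = {f["table_id"]: f for f in adjusted["output"]["flow"]}
```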
-API Reference Documentation
----------------------------
-
-Go to
-http://${controller-ip}:8181/apidoc/explorer/index.html
-and look under the DIDM section to see all the available REST calls and
-tables.
+++ /dev/null
-.. _eman-dev-guide:
-
-eman Developer Guide
-====================
-
-Overview
---------
-
-The OpenDaylight Energy Management (eman) plugin implements an abstract
-Information Model that describes energy measurement and control features
-that may be supported by a variety of device types. The eman plugin may
-support a number of southbound interfaces to accommodate a set of
-protocols, including but not limited to SNMP, NETCONF, IPDR. The plugin
-presents a northbound REST API. This framework enables any number of
-applications to interoperate with any number of devices in order to
-measure and optimize energy usage. The Information Model will be
-inherited from the `SCTE 216 standard – Adaptive Power Systems Interface
-Specification (APSIS)
-<http://www.scte.org/SCTEDocs/Standards/ANSI_SCTE%20216%202015.pdf>`_,
-which in turn inherits definitions within the `IETF eman document set
-<https://datatracker.ietf.org/wg/eman/documents/>`_.
-
-This documentation is directed to developers who may use the eman features
-to build other OpenDaylight features or applications.
-
-eman is composed of the following Karaf features:
- * ``eman`` includes the YANG model and its implementation
- * ``eman-api`` adds support for REST
-
-Developers will typically interface with ``eman-api``.
-
-
-eman Architecture
------------------
-
-``eman`` defines a YANG model that represents the IETF energy management
-Information Model, and includes RPCs. The implementation of the model
-currently supports an SNMP 'binding' via interfacing with the
-OpenDaylight SNMP module. In the future, other Southbound protocols may
-be supported.
-
-Developers may use the ``eman-api`` feature to read and write energy
-related data and commands to devices that support the IETF eman MIBs.
-
-Key APIs and Interfaces
------------------------
-
-The eman API currently supports a subset of the IETF eman Information Model,
-including the EnergyObjectPowerMeasurement table. Users of the API may
-get individual attributes or the entire table. When querying the table, the
-results are written into the MD-SAL for subsequent access. For example,
-a developer may periodically poll a device for its powerMeasurements and
-later fetch the collection of measurements to obtain a measurement history.
-
-
-Operational API
----------------
-
-Via MD-SAL, the following endpoint provides access to previously
-captured power measurements.
-
-.. note::
- "eo" indicates "energy object" as per the IETF Information Model
-
-operational::
-
- eman:eoDevices/eoDevice{id}/eoPowerMeasurement{id}
-
- id indicates an index into a collection
-
-EoDevices may contain a collection of individual eoDevice objects, which
-in turn may contain a collection of eoPowerMeasurement objects.
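-
-This containment hierarchy can be mirrored in client code. The following is a
-hypothetical dataclass sketch (class and field names follow the model above;
-the sample values are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class EoPowerMeasurement:
    """One entry under eoDevice{id}/eoPowerMeasurement{id}."""
    id: int
    eoPower: float                 # measured power value
    eoPowerUnitMultiplier: int = 0

@dataclass
class EoDevice:
    """One entry under eman:eoDevices/eoDevice{id}."""
    id: int
    measurements: list = field(default_factory=list)

# eoDevices: a collection of eoDevice objects, each holding measurements.
devices = [EoDevice(id=0, measurements=[EoPowerMeasurement(id=0, eoPower=42.0)])]
```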
-
-Operations API
---------------
-
-A set of RPCs enable interactions with devices.
-
-get-eoAttribute enables a query on an individual attribute of an energy object::
-
- get-eoAttribute
-
- deviceIP indicates IP address of target device
- attribute indicates name of requested attribute
-
-.. note:: Future releases will provide an enumeration of allowed names.
-
-The supported names are:
-
-* eoPower
-* eoPowerNameplate
-* eoPowerUnitMultiplier
-* eoPowerAccuracy
-* eoPowerMeasurementCaliber
-* eoPowerCurrentType
-* eoPowerMeasurementLocal
-* eoPowerAdminState
-* eoPowerOperState
-* eoPowerStateEnterReason
-
-set-eoAttribute enables sending a command to an energy object::
-
- set-eoAttribute
-
- deviceIP. IP address of target device
- attribute. string indicating the name of the attribute to set. Currently, no attributes are supported.
-
-get-eoDevicePowerMeasures reads an eoPowerMeasurements table from a device
-and stores the result in MD-SAL, making it available via the operational API::
-
- get-eoDevicePowerMeasures
-
- deviceIP. IP address of target device
-
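-A client invoking get-eoAttribute might validate the attribute name against
-the supported list before calling the RPC. A minimal sketch follows; the JSON
-field names (``deviceIP``, ``attribute``) mirror the parameter names above but
-the exact RPC input encoding is an assumption, not confirmed against the eman
-YANG model:

```python
import json

# Attribute names supported by get-eoAttribute, per the list above.
SUPPORTED_ATTRIBUTES = {
    "eoPower", "eoPowerNameplate", "eoPowerUnitMultiplier", "eoPowerAccuracy",
    "eoPowerMeasurementCaliber", "eoPowerCurrentType", "eoPowerMeasurementLocal",
    "eoPowerAdminState", "eoPowerOperState", "eoPowerStateEnterReason",
}

def get_eo_attribute_input(device_ip: str, attribute: str) -> str:
    """Build a JSON input body for get-eoAttribute (field names hypothetical)."""
    if attribute not in SUPPORTED_ATTRIBUTES:
        raise ValueError(f"unsupported attribute: {attribute}")
    return json.dumps({"input": {"deviceIP": device_ip, "attribute": attribute}})
```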
-API Reference Documentation
----------------------------
-
-See eman project page for additional information:
-https://wiki.opendaylight.org/view/eman:Main
+++ /dev/null
-.. _faas_dev_guide:
-
-Fabric As A Service
-===================
-
-FaaS (Fabric As A Service) has two layers of APIs. We describe the top-level
-API in the user guide. This document focuses on the Fabric-level API and
-describes each API's semantics and an example implementation. The second
-layer defines an abstraction layer called the *Fabric* API. The idea is to
-abstract the network into a topology formed by a collection of fabric
-objects rather than a variety of physical devices. Each Fabric object
-provides a collection of unified services. The top-level API enables
-application developers or users to write applications that map a high-level
-model such as GBP or Intent into a logical network model, while the lower
-level gives the application more control at the individual fabric object
-level. More importantly, the Fabric API acts as an SPI (Service Provider
-Interface): a fabric provider or vendor can implement it based on its own
-fabric technology, such as TRILL or SPB.
-
-For details on how to use the top-level API, please refer to the user
-guide.
-
-FaaS Architecture
------------------
-
-FaaS has a three-layered architecture: on the top is the FaaS
-application layer, in the middle is the Fabric Manager, and at the bottom
-are different types of fabric objects. From the bottom up, these are:
-
-Fabric and its controller (Fabric Controller)
- The Fabric object provides an abstraction of a homogeneous network,
- or a portion of the network, and has a built-in Fabric controller
- which provides the management plane and control plane for the fabric.
- The fabric controller implements the services required by the Fabric
- Service API and monitors and controls the fabric operation.
-
-Fabric Manager
- Fabric Manager manages all the fabric objects. It also acts as a
- unified Fabric Controller, providing inter-fabric control and
- configuration. In addition, Fabric Manager is the FaaS API service
- via which the FaaS user-level logical network API (the top-level
- API mentioned previously) is exposed and implemented.
-
-FaaS renderer for GBP (Group Based Policy)
- FaaS renderer for GBP is an application of FaaS and provides the
- rendering service between GBP model and logical network model
- provided by Fabric Manager.
-
-Fabric APIs and Interfaces
---------------------------
-
-FaaS APIs comprise four groups, as defined below.
-
-Fabric Provisioning API
- This set of APIs is used to create and remove Fabric abstractions;
- in other words, these APIs provision the underlay networks and
- prepare for creating overlay networks (the logical networks) on top of them.
-
-Fabric Service API
- This set of APIs is used to create logical network over the Fabrics.
-
-EndPoint API
- The EndPoint API is used to bind a physical port, which is the location
- where the attachment of an EndPoint happens or will happen.
-
-OAM API
- These APIs are for Operations, Administration and Maintenance
- purposes. In the current release, the OAM API is not yet implemented.
-
-Fabric Provisioning API
-~~~~~~~~~~~~~~~~~~~~~~~
-
-- `http://${ipaddress}:8181/restconf/operations/fabric:compose-fabric <http://${ipaddress}:8181/restconf/operations/fabric:compose-fabric>`__
-
-- `http://${ipaddress}:8181/restconf/operations/fabric:decompose-fabric <http://${ipaddress}:8181/restconf/operations/fabric:decompose-fabric>`__
-
-- `http://${ipaddress}:8181/restconf/operations/fabric:get-all-fabrics <http://${ipaddress}:8181/restconf/operations/fabric:get-all-fabrics>`__
-
-Fabric Service API
-~~~~~~~~~~~~~~~~~~
-
-- RESTCONF for creating logical ports, switches, routers, routing entries
- and links. Among them, both switches and routers have ports, and links
- connect ports. These five logical elements are the basic building blocks
- of a logical network.
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-switch <http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-switch>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-switch <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-switch>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-router <http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-router>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-router <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-router>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:add-static-route <http://${ipaddress}:8181/restconf/operations/fabric-service:add-static-route>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-logic-port <http://${ipaddress}:8181/restconf/operations/fabric-service:create-logic-port>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logic-port <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logic-port>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-gateway <http://${ipaddress}:8181/restconf/operations/fabric-service:create-gateway>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-gateway <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-gateway>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-fabric <http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-fabric>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-device <http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-device>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:add-port-function <http://${ipaddress}:8181/restconf/operations/fabric-service:add-port-function>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:add-acl <http://${ipaddress}:8181/restconf/operations/fabric-service:add-acl>`__
-
- - `http://${ipaddress}:8181/restconf/operations/fabric-service:del-acl <http://${ipaddress}:8181/restconf/operations/fabric-service:del-acl>`__
-
-EndPoint API
-~~~~~~~~~~~~
-
-The following APIs bind physical ports to the logical ports on
-the logical switches:
-
-- `http://${ipaddress}:8181/restconf/operations/fabric-endpoint:register-endpoint <http://${ipaddress}:8181/restconf/operations/fabric-endpoint:register-endpoint>`__
-
-- `http://${ipaddress}:8181/restconf/operations/fabric-endpoint:unregister-endpoint <http://${ipaddress}:8181/restconf/operations/fabric-endpoint:unregister-endpoint>`__
-
-- `http://${ipaddress}:8181/restconf/operations/fabric-endpoint:locate-endpoint <http://${ipaddress}:8181/restconf/operations/fabric-endpoint:locate-endpoint>`__
-
-Others API
-~~~~~~~~~~
-
-- `http://${ipaddress}:8181/restconf/operations/fabric-resource:create-fabric-port <http://${ipaddress}:8181/restconf/operations/fabric-resource:create-fabric-port>`__
-
-API Reference Documentation
----------------------------
-
-Go to
-`http://${ipaddress}:8181/restconf/apidoc/index.html <http://${ipaddress}:8181/restconf/apidoc/index.html>`__
-and expand the *FaaS*-related panel for more APIs.
-
.. toctree::
:maxdepth: 1
- alto-developer-guide
- bier-developer-guide
- capwap-developer-guide
- cardinal_-opendaylight-monitoring-as-a-service
- didm-developer-guide
distribution-version
distribution-test-features
- eman-developer-guide
- fabric-as-a-service
- iotdm-developer-guide
jsonrpc
- l2switch-developer-guide
- lacp-developer-guide
nemo-developer-guide
- network-intent-composition-(nic)-developer-guide
- netide-developer-guide
neutron-service-developer-guide
neutron-northbound
odl-parent-developer-guide
- ocp-plugin-developer-guide
- odl-sdni-developer-guide
- of-config-developer-guide
openflow-protocol-library-developer-guide
openflow-plugin-project-developer-guide
opflex-agent-ovs-developer-guide
opflex-genie-developer-guide
opflex-libopflex-developer-guide
ovsdb-developer-guide
- packetcable-developer-guide
p4plugin-developer-guide
service-function-chaining
snmp4sdn-developer-guide
- topology-processing-framework-developer-guide
unified-secure-channel
yang-tools
+++ /dev/null
-.. _iotdm_dev_guide:
-
-IoTDM Developer Guide
-=====================
-
-Overview
---------
-
-The Internet of Things Data Management (IoTDM) on OpenDaylight project
-is about developing a data-centric middleware that will act as a oneM2M
-compliant IoT Data Broker and enable authorized applications to retrieve
-IoT data uploaded by any device. The OpenDaylight platform is used to
-implement the oneM2M data store which models a hierarchical containment
-tree, where each node in the tree represents a oneM2M resource.
-Typically, IoT devices and applications interact with the resource tree
-over standard protocols such as CoAP, MQTT, and HTTP. Initially, the
-oneM2M resource tree is used by applications to retrieve data. Possible
-applications are inventory or device management systems or big data
-analytic systems designed to make sense of the collected data. But, at
-some point, applications will need to configure the devices. Features
-and tools will have to be provided to enable configuration of the
-devices based on applications responding to user input, network
-conditions, or some set of programmable rules or policies possibly
-triggered by the receipt of data collected from the devices. The
-OpenDaylight platform, with its rich unique cross-section of SDN
-capabilities, NFV, and now IoT device and application management, can be
-bundled with a targeted set of features and deployed anywhere in the
-network to give the network service provider ultimate control. Depending
-on the use case, the OpenDaylight IoT platform can be configured with
-only IoT data collection capabilities where it is deployed near the IoT
-devices and its footprint needs to be small, or it can be configured to
-run as a highly scaled up and out distributed cluster with IoT, SDN and
-NFV functions enabled and deployed in a high traffic data center.
-
-oneM2M Architecture
--------------------
-
-The architecture provides a framework that enables the support of the
-oneM2M resource containment tree. The onem2m-core implements the MDSAL
-RPCs defined in the onem2m-api YANG files. These RPCs enable oneM2M
-resources to be created, read, updated, and deleted (CRUD), and also
-enables the management of subscriptions. When resources are CRUDed, the
-onem2m-notifier issues oneM2M notification events to interested
-subscribers. TS0001: oneM2M Functional Architecture and TS0004: oneM2M
-Service Layer Protocol are great reference documents to learn details of
-oneM2M resource types, message flow, formats, and CRUD/N semantics. Both
-of these specifications can be found at
-http://onem2m.org/technical/published-documents
-
-The oneM2M resource tree is modeled in YANG and essentially is a
-meta-model for the tree. The oneM2M wire protocols allow the resource
-tree to be constructed via HTTP or CoAP messages that populate nodes in
-the tree with resource specific attributes. Each oneM2M resource type
-has semantic behaviour associated with it. For example: a container
-resource has attributes which control quotas on how many and how big the
-collection of data or content instance objects that can exist below it
-in the tree. Depending on the resource type, the oneM2M core software
-implements and enforces the resource type specific rules to ensure a
-well-behaved resource tree.
-
-The resource tree can be simultaneously accessed by many concurrent
-applications wishing to manage or access the tree, and also many devices
-can be reporting in new data or sensor readings into their appropriate
-place in the tree.
-
-Key APIs and Interfaces
------------------------
-
-The APIs to access the oneM2M datastore are well documented in TS0004
-(referenced above), found on onem2m.org.
-
-RESTCONF is available too, but generally HTTP and CoAP are used to access
-the oneM2M data tree.
-
+++ /dev/null
-.. _l2switch-dev-guide:
-
-L2Switch Developer Guide
-========================
-
-Overview
---------
-
-The L2Switch project provides Layer2 switch functionality.
-
-L2Switch Architecture
----------------------
-
-- Packet Handler
-
- - Decodes the packets coming to the controller and dispatches them
- appropriately
-
-- Loop Remover
-
- - Removes loops in the network
-
-- Arp Handler
-
- - Handles the decoded ARP packets
-
-- Address Tracker
-
- - Learns the Addresses (MAC and IP) of entities in the network
-
-- Host Tracker
-
- - Tracks the locations of hosts in the network
-
-- L2Switch Main
-
- - Installs flows on each switch based on network traffic
-
-Key APIs and Interfaces
------------------------
-
-- Packet Handler
-
-- Loop Remover
-
-- Arp Handler
-
-- Address Tracker
-
-- Host Tracker
-
-- L2Switch Main
-
-Packet Dispatcher
-~~~~~~~~~~~~~~~~~
-
-Classes
-^^^^^^^
-
-- AbstractPacketDecoder
-
- - Defines the methods that all decoders must implement
-
-- EthernetDecoder
-
- - The base decoder which decodes the packet into an Ethernet packet
-
-- ArpDecoder, Ipv4Decoder, Ipv6Decoder
-
- - Decodes Ethernet packets into either an ARP, IPv4, or IPv6
- packet
-
-Further development
-^^^^^^^^^^^^^^^^^^^
-
-There is a need for more decoders. A developer can write
-
-- A decoder for another EtherType, e.g., LLDP.
-
-- A higher-layer decoder for the body of the IPv4 or IPv6
- packet, e.g., TCP and UDP.
-
-How to write a new decoder
-
-- extends AbstractDecoder<A, B>
-
- - A refers to the notification that the new decoder consumes
-
- - B refers to the notification that the new decoder produces
-
-- implements xPacketListener
-
- - The new decoder must specify which notification it is listening to
-
-- canDecode method
-
- - This method should examine the consumed notification to see
- whether the new decoder can decode the contents of the packet
-
-- decode method
-
- - This method does the actual decoding of the packet
-
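-The canDecode/decode contract can be illustrated outside Java. The following
-is a hypothetical Python sketch keyed on the Ethernet EtherType (a real
-decoder extends AbstractPacketDecoder in Java and consumes/produces MD-SAL
-notifications rather than raw bytes):

```python
import struct

ETHERTYPE_ARP = 0x0806

class ArpDecoderSketch:
    """Illustrative canDecode/decode pair keyed on the Ethernet EtherType."""

    def can_decode(self, frame: bytes) -> bool:
        # An Ethernet II header is 14 bytes; the EtherType sits at offset 12.
        if len(frame) < 14:
            return False
        (ethertype,) = struct.unpack_from("!H", frame, 12)
        return ethertype == ETHERTYPE_ARP

    def decode(self, frame: bytes) -> dict:
        # Produce the "notification" the next layer consumes: here, just the
        # ARP operation code (offset 6 within the ARP payload).
        assert self.can_decode(frame)
        (opcode,) = struct.unpack_from("!H", frame, 14 + 6)
        return {"ethertype": ETHERTYPE_ARP, "opcode": opcode}
```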
-Loop Remover
-~~~~~~~~~~~~
-
-Classes
-^^^^^^^
-
-- **LoopRemoverModule**
-
- - Reads config subsystem value for *is-install-lldp-flow*
-
- - If *is-install-lldp-flow* is true, then an
- **InitialFlowWriter** is created
-
- - Creates and initializes the other LoopRemover classes
-
-- **InitialFlowWriter**
-
- - Only created when *is-install-lldp-flow* is true
-
- - Installs a flow, which forwards all LLDP packets to the
- controller, on each switch
-
-- **TopologyLinkDataChangeHandler**
-
- - Listens to data change events on the Topology tree
-
- - When these changes occur, it waits *graph-refresh-delay* seconds
- and then tells **NetworkGraphImpl** to update
-
- - Writes an STP (Spanning Tree Protocol) status of "forwarding" or
- "discarding" to each link in the Topology data tree
-
- - Forwarding links can forward packets.
-
- - Discarding links cannot forward packets.
-
-- **NetworkGraphImpl**
-
- - Creates a loop-free graph of the network
-
-Configuration
-^^^^^^^^^^^^^
-
-- graph-refresh-delay
-
- - Used in TopologyLinkDataChangeHandler
-
- - A higher value has the advantage of doing fewer graph updates, at
- the potential cost of losing some packets because the graph didn't
- update immediately.
-
- - A lower value has the advantage of handling network topology
- changes quicker, at the cost of doing more computation.
-
-- is-install-lldp-flow
-
- - Used in LoopRemoverModule
-
- - "true" means a flow that sends all LLDP packets to the controller
- will be installed on each switch
-
- - "false" means this flow will not be installed
-
-- lldp-flow-table-id
-
- - The LLDP flow will be installed on the specified flow table of
- each switch
-
-- lldp-flow-priority
-
- - The LLDP flow will be installed with the specified priority
-
-- lldp-flow-idle-timeout
-
- - The LLDP flow will timeout (removed from the switch) if the flow
- doesn’t forward a packet for *x* seconds
-
-- lldp-flow-hard-timeout
-
- - The LLDP flow will timeout (removed from the switch) after *x*
- seconds, regardless of how many packets it is forwarding
-
-Further development
-^^^^^^^^^^^^^^^^^^^
-
-No suggestions at the moment.
-
-Validating changes to Loop Remover
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-STP Status information is added to the Inventory data tree.
-
-- A status of "forwarding" means the link is active and packets are
- flowing on it.
-
-- A status of "discarding" means the link is inactive and packets are
- not sent over it.
-
-The STP status of a link can be checked through a browser or a REST
-Client.
-
-::
-
- http://10.194.126.91:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/node-connector/openflow:1:2
-
-The STP status should still be there after changes are made.
-
-Arp Handler
-~~~~~~~~~~~
-
-Classes
-^^^^^^^
-
-- **ArpHandlerModule**
-
- - Reads config subsystem value for *is-proactive-flood-mode*
-
- - If *is-proactive-flood-mode* is true, then a
- ProactiveFloodFlowWriter is created
-
- - If *is-proactive-flood-mode* is false, then an
- InitialFlowWriter is created
-
-- **ProactiveFloodFlowWriter**
-
- - Only created when *is-proactive-flood-mode* is true
-
- - Installs a flood flow on each switch. With this flood flow, a
- packet that doesn’t match any other flows will be
- flooded/broadcast from that switch.
-
-- **InitialFlowWriter**
-
- - Only created when *is-proactive-flood-mode* is false
-
- - Installs a flow, which sends all ARP packets to the controller, on
- each switch
-
-- **ArpPacketHandler**
-
- - Only created when *is-proactive-flood-mode* is false
-
- - Handles and processes the controller’s incoming ARP packets
-
- - Uses **PacketDispatcher** to send the ARP packet back into the
- network
-
-- **PacketDispatcher**
-
- - Only created when *is-proactive-flood-mode* is false
-
- - Sends packets out to the network
-
- - Uses **InventoryReader** to determine which node-connector to
- send a packet on
-
-- **InventoryReader**
-
- - Only created when *is-proactive-flood-mode* is false
-
- - Maintains a list of each switch’s node-connectors
-
-Configuration
-^^^^^^^^^^^^^
-
-- is-proactive-flood-mode
-
- - "true" means that flood flows will be installed on each switch.
- With this flood flow, each switch will flood a packet that doesn’t
- match any other flows.
-
- - Advantage: Fewer packets are sent to the controller because
- those packets are flooded to the network.
-
- - Disadvantage: A lot of network traffic is generated.
-
- - "false" means the previously mentioned flood flows will not be
- installed. Instead an ARP flow will be installed on each switch
- that sends all ARP packets to the controller.
-
- - Advantage: Less network traffic is generated.
-
- - Disadvantage: The controller handles more packets (ARP requests
- & replies) and the ARP process takes longer than if there were
- flood flows.
-
-- flood-flow-table-id
-
- - The flood flow will be installed on the specified flow table of
- each switch
-
-- flood-flow-priority
-
- - The flood flow will be installed with the specified priority
-
-- flood-flow-idle-timeout
-
- - The flood flow will timeout (removed from the switch) if the flow
- doesn’t forward a packet for *x* seconds
-
-- flood-flow-hard-timeout
-
- - The flood flow will timeout (removed from the switch) after *x*
- seconds, regardless of how many packets it is forwarding
-
-- arp-flow-table-id
-
- - The ARP flow will be installed on the specified flow table of each
- switch
-
-- arp-flow-priority
-
- - The ARP flow will be installed with the specified priority
-
-- arp-flow-idle-timeout
-
- - The ARP flow will time out (be removed from the switch) if the flow
- doesn’t forward a packet for *x* seconds
-
-- arp-flow-hard-timeout
-
- - The ARP flow will time out (be removed from the switch) after
- *arp-flow-hard-timeout* seconds, regardless of how many packets it
- is forwarding
-
-Further development
-^^^^^^^^^^^^^^^^^^^
-
-The **ProactiveFloodFlowWriter** needs to be improved. It has the
-advantage of sending less traffic to the controller; however, it
-generates too much network traffic.
-
-Address Tracker
-~~~~~~~~~~~~~~~
-
-Classes
-^^^^^^^
-
-- AddressTrackerModule
-
- - Reads config subsystem value for *observe-addresses-from*
-
- - If *observe-addresses-from* contains "arp", then an
- AddressObserverUsingArp is created
-
- - If *observe-addresses-from* contains "ipv4", then an
- AddressObserverUsingIpv4 is created
-
- - If *observe-addresses-from* contains "ipv6", then an
- AddressObserverUsingIpv6 is created
-
-- AddressObserverUsingArp
-
- - Registers for ARP packet notifications
-
- - Uses **AddressObservationWriter** to write address observations
- from ARP packets
-
-- AddressObserverUsingIpv4
-
- - Registers for IPv4 packet notifications
-
- - Uses **AddressObservationWriter** to write address observations
- from IPv4 packets
-
-- AddressObserverUsingIpv6
-
- - Registers for IPv6 packet notifications
-
- - Uses **AddressObservationWriter** to write address observations
- from IPv6 packets
-
-- AddressObservationWriter
-
- - Writes new Address Observations to the Inventory data tree
-
- - Updates existing Address Observations with updated "last seen"
- timestamps
-
- - Uses the *timestamp-update-interval* configuration variable to
- determine whether or not to update
-
-Configuration
-^^^^^^^^^^^^^
-
-- timestamp-update-interval
-
- - A last-seen timestamp is associated with each address. This
- last-seen timestamp will only be updated after
- *timestamp-update-interval* milliseconds.
-
- - A higher value has the advantage of performing fewer writes to
- the database.
-
- - A lower value has the advantage of keeping the last-seen
- timestamps fresher.
-
-- observe-addresses-from
-
- - IP and MAC addresses can be observed/learned from ARP, IPv4, and
- IPv6 packets. Set which packets to make these observations from.
-
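The role of *timestamp-update-interval* can be illustrated with a small sketch. This is hypothetical Python; the real **AddressObservationWriter** is Java and its internals may differ:

```python
class UpdateIntervalGate:
    """Sketch: only refresh a last-seen timestamp when at least
    timestamp-update-interval milliseconds have passed since the last write."""
    def __init__(self, interval_ms):
        self.interval_ms = interval_ms
        self.last_seen = {}  # (ip, mac) -> last-seen timestamp in ms

    def observe(self, key, now_ms):
        last = self.last_seen.get(key)
        if last is None or now_ms - last >= self.interval_ms:
            self.last_seen[key] = now_ms
            return True   # a write to the data tree happens
        return False      # write skipped, reducing datastore churn
```

With a 1000 ms interval, an address observed at t=0 ms and again at t=500 ms causes only one write; a third observation at t=1500 ms triggers the next write.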
-Further development
-^^^^^^^^^^^^^^^^^^^
-
-Further improvements can be made to the **AddressObservationWriter** so
-that it (1) doesn’t make any unnecessary writes to the DB and (2) is
-optimized for multi-threaded environments.
-
-Validating changes to Address Tracker
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Address Observations are added to the Inventory data tree.
-
-The Address Observations on a Node Connector can be checked through a
-browser or a REST Client.
-
-::
-
- http://10.194.126.91:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/node-connector/openflow:1:1
-
-The Address Observations should still be there after changes.
-
-Developer’s Guide for Host Tracker
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Validating changes to Host Tracker
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Host information is added to the Topology data tree.
-
-- Host address
-
-- Attachment point (link) to a node/switch
-
-This host information and attachment point information can be checked
-through a browser or a REST Client.
-
-::
-
- http://10.194.126.91:8080/restconf/operational/network-topology:network-topology/topology/flow:1/
-
-Host information should still be there after changes.
-
-L2Switch Main
-~~~~~~~~~~~~~
-
-Classes
-^^^^^^^
-
-- L2SwitchMainModule
-
- - Reads config subsystem value for *is-install-dropall-flow*
-
- - If *is-install-dropall-flow* is true, then an
- **InitialFlowWriter** is created
-
- - Reads config subsystem value for *is-learning-only-mode*
-
- - If *is-learning-only-mode* is false, then a
- **ReactiveFlowWriter** is created
-
-- InitialFlowWriter
-
- - Only created when *is-install-dropall-flow* is true
-
- - Installs a flow, which drops all packets, on each switch. This
- flow has low priority and means that packets that don’t match any
- higher-priority flows will simply be dropped.
-
-- ReactiveFlowWriter
-
- - Reacts to network traffic and installs MAC-to-MAC flows on
- switches. These flows have matches based on MAC source and MAC
- destination.
-
- - Uses **FlowWriterServiceImpl** to write these flows to the
- switches
-
-- FlowWriterService / FlowWriterServiceImpl
-
- - Writes flows to switches
-
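The **ReactiveFlowWriter** behaves like a classic learning switch. The following is a hypothetical Python sketch of the decision logic (the names are assumptions; the actual implementation is Java against MD-SAL):

```python
class ReactiveFlowSketch:
    """Learn source MACs from packet-ins; once both endpoints are known,
    a MAC-to-MAC flow can be installed instead of flooding."""
    def __init__(self):
        self.mac_to_port = {}  # MAC learning table for a single switch

    def packet_in(self, src_mac, dst_mac, in_port):
        self.mac_to_port[src_mac] = in_port  # learn/refresh the source MAC
        out_port = self.mac_to_port.get(dst_mac)
        if out_port is not None:
            # both MACs known: install a flow matching src MAC + dst MAC
            return ("install_mac_to_mac_flow", out_port)
        return ("flood", None)
```

The first packet between two hosts is flooded; once the reply arrives, both directions can be handled by installed flows.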
-Configuration
-^^^^^^^^^^^^^
-
-- is-install-dropall-flow
-
- - "true" means a drop-all flow will be installed on each switch, so
- the default action will be to drop a packet instead of sending it
- to the controller
-
- - "false" means this flow will not be installed
-
-- dropall-flow-table-id
-
- - The dropall flow will be installed on the specified flow table of
- each switch
-
- - This field is only relevant when "is-install-dropall-flow" is set
- to "true"
-
-- dropall-flow-priority
-
- - The dropall flow will be installed with the specified priority
-
- - This field is only relevant when "is-install-dropall-flow" is set
- to "true"
-
-- dropall-flow-idle-timeout
-
- - The dropall flow will time out (be removed from the switch) if the
- flow doesn’t forward a packet for *x* seconds
-
- - This field is only relevant when "is-install-dropall-flow" is set
- to "true"
-
-- dropall-flow-hard-timeout
-
- - The dropall flow will time out (be removed from the switch) after *x*
- seconds, regardless of how many packets it is forwarding
-
- - This field is only relevant when "is-install-dropall-flow" is set
- to "true"
-
-- is-learning-only-mode
-
- - "true" means that the L2Switch will only be learning addresses. No
- additional flows to optimize network traffic will be installed.
-
- - "false" means that the L2Switch will react to network traffic and
- install flows on the switches to optimize traffic. Currently,
- MAC-to-MAC flows are installed.
-
-- reactive-flow-table-id
-
- - The reactive flow will be installed on the specified flow table of
- each switch
-
- - This field is only relevant when "is-learning-only-mode" is set to
- "false"
-
-- reactive-flow-priority
-
- - The reactive flow will be installed with the specified priority
-
- - This field is only relevant when "is-learning-only-mode" is set to
- "false"
-
-- reactive-flow-idle-timeout
-
- - The reactive flow will time out (be removed from the switch) if the
- flow doesn’t forward a packet for *x* seconds
-
- - This field is only relevant when "is-learning-only-mode" is set to
- "false"
-
-- reactive-flow-hard-timeout
-
- - The reactive flow will time out (be removed from the switch) after *x*
- seconds, regardless of how many packets it is forwarding
-
- - This field is only relevant when "is-learning-only-mode" is set to
- "false"
-
-Further development
-^^^^^^^^^^^^^^^^^^^
-
-The **ReactiveFlowWriter** needs to be improved to install the
-MAC-to-MAC flows faster. For the first ping, the ARP request and reply
-succeed, but when the ping packets are then sent out, the first
-ping packet is sometimes dropped because the MAC-to-MAC flow isn’t
-installed quickly enough. The second, third, and following ping packets
-are successful.
-
-API Reference Documentation
----------------------------
-
-Further documentation can be found by checking out the L2Switch project.
-
-Checking out the L2Switch project
----------------------------------
-
-::
-
- git clone https://git.opendaylight.org/gerrit/p/l2switch.git
-
-The above command will create a directory called "l2switch" with the
-project.
-
-Testing your changes to the L2Switch project
---------------------------------------------
-
-Running the L2Switch project
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To run the base distribution, you can use the following command
-
-::
-
- ./distribution/base/target/distributions-l2switch-base-0.1.0-SNAPSHOT-osgipackage/opendaylight/run.sh
-
-If you need additional resources, you can use these command line
-arguments:
-
-::
-
- -Xms1024m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=1024m
-
-To run the karaf distribution, you can use the following command:
-
-::
-
- ./distribution/karaf/target/assembly/bin/karaf
-
-Create a network using mininet
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- sudo mn --controller=remote,ip=<Controller IP> --topo=linear,3 --switch ovsk,protocols=OpenFlow13
- sudo mn --controller=remote,ip=127.0.0.1 --topo=linear,3 --switch ovsk,protocols=OpenFlow13
-
-The above command will create a virtual network consisting of 3
-switches. Each switch will connect to the controller located at the
-specified IP, i.e. 127.0.0.1
-
-::
-
- sudo mn --controller=remote,ip=127.0.0.1 --mac --topo=linear,3 --switch ovsk,protocols=OpenFlow13
-
-The above command has the "mac" option, which makes it easier to
-distinguish between Host MAC addresses and Switch MAC addresses.
-
-Generating network traffic using mininet
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- h1 ping h2
-
-The above command will cause host1 (h1) to ping host2 (h2)
-
-::
-
- pingall
-
-*pingall* will cause each host to ping every other host.
-
-Miscellaneous mininet commands
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- link s1 s2 down
-
-This will bring the link between switch1 (s1) and switch2 (s2) down
-
-::
-
- link s1 s2 up
-
-This will bring the link between switch1 (s1) and switch2 (s2) up
-
-::
-
- link s1 h1 down
-
-This will bring the link between switch1 (s1) and host1 (h1) down
-
+++ /dev/null
-.. _lacp-dev-guide:
-
-LACP Developer Guide
-====================
-
-LACP Overview
--------------
-
-The OpenDaylight LACP (Link Aggregation Control Protocol) project can be
-used to aggregate multiple links between OpenDaylight controlled network
-switches and LACP enabled legacy switches or hosts operating in active
-LACP mode.
-
-OpenDaylight LACP passively negotiates automatic bundling of multiple
-links to form a single LAG (Link Aggregation Group). LAGs are realised
-in the OpenDaylight controlled switches using OpenFlow 1.3+ group table
-functionality.
-
-LACP Architecture
------------------
-
-- **inventory**
-
- - Maintains list of OpenDaylight controlled switches and port
- information
-
- - List of LAGs created and physical ports that are part of the LAG
-
- - Interacts with MD-SAL to update LACP related information
-
-- **inventorylistener**
-
- - This module interacts with MD-SAL for receiving
- node/node-connector notifications
-
-- **flow**
-
- - Programs the switch to punt LACP PDU (Protocol Data Unit) to
- controller
-
-- **packethandler**
-
- - Receives and transmits LACP PDUs to the LACP enabled endpoint
-
- - Provides infrastructure services for group table programming
-
-- **core**
-
- - Performs LACP state machine processing
-
-How LAG programming is implemented
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-A LAG representing multiple aggregated physical ports is realized
-in the OpenDaylight controlled switches by creating a group table entry
-(the group table is supported from OpenFlow 1.3 onwards). The group
-table entry has a group type **Select** and an action referring to the
-aggregated physical ports. Any data traffic to be sent out through the
-LAG can be sent through the **group entry** available for the LAG.
-
-Suppose there are ports P1-P8 in a node. When the LACP project is
-installed, a group table entry for handling broadcast traffic is
-automatically created on all the switches that have registered to the
-controller.
-
-+--------------------------+--------------------------+--------------------------+
-| GroupID | GroupType | EgressPorts |
-+==========================+==========================+==========================+
-| <B’castgID> | ALL | P1,P2,…P8 |
-+--------------------------+--------------------------+--------------------------+
-
-Now, assume P1 & P2 become part of LAG1. The group table would be
-programmed as follows:
-
-+--------------------------+--------------------------+--------------------------+
-| GroupID | GroupType | EgressPorts |
-+==========================+==========================+==========================+
-| <B’castgID> | ALL | P3,P4,…P8 |
-+--------------------------+--------------------------+--------------------------+
-| <LAG1> | SELECT | P1,P2 |
-+--------------------------+--------------------------+--------------------------+
-
-When a second LAG, LAG2, is formed with ports P3 and P4, the group
-table becomes:
-
-+--------------------------+--------------------------+--------------------------+
-| GroupID | GroupType | EgressPorts |
-+==========================+==========================+==========================+
-| <B’castgID> | ALL | P5,P6,…P8 |
-+--------------------------+--------------------------+--------------------------+
-| <LAG1> | SELECT | P1,P2 |
-+--------------------------+--------------------------+--------------------------+
-| <LAG2> | SELECT | P3,P4 |
-+--------------------------+--------------------------+--------------------------+
-
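The bookkeeping shown in the tables above can be sketched in Python. This is a simplified model of the assumed controller logic, not actual LACP project code:

```python
def form_lag(groups, lag_id, member_ports):
    """When a LAG forms, its member ports leave the broadcast (ALL) group
    and a new SELECT group is created for the LAG."""
    bcast = groups["BcastGID"]
    bcast["ports"] = [p for p in bcast["ports"] if p not in member_ports]
    groups[lag_id] = {"type": "SELECT", "ports": list(member_ports)}
    return groups
```

Starting from the initial table (ALL group over P1..P8), forming LAG1 with P1, P2 and LAG2 with P3, P4 reproduces the final table above.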
-How applications can program OpenFlow flows using LACP-created LAG groups
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-OpenDaylight controller modules can get the information of LAG by
-listening/querying the LACP Aggregator datastore.
-
-When any application receives packets, it can check whether the ingress
-port is part of a LAG by verifying the LAG Aggregator reference
-(lacp-agg-ref) for the source nodeConnector that the OpenFlow plugin
-provides.
-
-When applications want to add flows to egress out of the LAG, they must
-use the group entry corresponding to the LAG.
-
-From the above example, for a flow to egress out of LAG1,
-
-**add-flow eth\_type=<xxxx>,ip\_dst=<x.x.x.x>,actions=output:<LAG1>**
-
-Similarly, when applications want traffic to be broadcasted, they should
-use the group table entries **<B’castgID>,<LAG1>,<LAG2>** in output
-action.
-
-For all applications, the group table information is accessible from
-LACP Aggregator datastore.
-
+++ /dev/null
-.. _netide-dev-guide:
-
-NetIDE Developer Guide
-======================
-
-Overview
---------
-
-The NetIDE Network Engine enables portability and cooperation inside a
-single network by using a client/server multi-controller SDN
-architecture. Separate "Client SDN Controllers" host the various SDN
-Applications with their access to the actual physical network abstracted
-and coordinated through a single "Server SDN Controller", in this
-instance OpenDaylight. This allows applications written for
-Ryu/Floodlight/Pyretic to execute on OpenDaylight managed
-infrastructure.
-
-The "Network Engine" is modular by design:
-
-- An OpenDaylight plugin, "shim", sends/receives messages to/from
- subscribed SDN Client Controllers. This consumes the ODL OpenFlow
- Plugin
-
-- An initial suite of SDN Client Controller "Backends": Floodlight,
- Ryu, Pyretic. Further controllers may be added over time as the
- engine is extensible.
-
-The Network Engine provides a compatibility layer capable of translating
-calls of the network applications running on top of the client
-controllers, into calls for the server controller framework. The
-communication between the client and the server layers is achieved
-through the NetIDE intermediate protocol, which is an application-layer
-protocol on top of TCP that transmits the network control/management
-messages from the client to the server controller and vice-versa.
-Between client and server controller sits the Core Layer which also
-"speaks" the intermediate protocol. The core layer implements three main
-functions:
-
-i. interfacing with the client backends and server shim, controlling
- the lifecycle of controllers as well as modules in them,
-
-ii. orchestrating the execution of individual modules (in one client
- controller) or complete applications (possibly spread across
- multiple client controllers),
-
-iii. interfacing with the tools.
-
-.. figure:: ./images/netide/arch-engine.jpg
- :alt: NetIDE Network Engine Architecture
-
- NetIDE Network Engine Architecture
-
-NetIDE Intermediate Protocol
-----------------------------
-
-The Intermediate Protocol serves several needs, it has to:
-
-i. carry control messages between core and shim/backend, e.g., to
- start up/take down a particular module, providing unique
- identifiers for modules,
-
-ii. carry event and action messages between shim, core, and backend,
- properly demultiplexing such messages to the right module based on
- identifiers,
-
-iii. encapsulate messages specific to a particular SBI protocol version
- (e.g., OpenFlow 1.X, NETCONF, etc.) towards the client controllers
- with proper information to recognize these messages as such.
-
-The NetIDE packages can be added as dependencies in Maven projects by
-putting the following code in the *pom.xml* file.
-
-::
-
- <dependency>
- <groupId>org.opendaylight.netide</groupId>
- <artifactId>api</artifactId>
- <version>${NETIDE_VERSION}</version>
- </dependency>
-
-The current stable version for NetIDE is ``0.2.0-Boron``.
-
-Protocol specification
-~~~~~~~~~~~~~~~~~~~~~~
-
-Messages of the NetIDE protocol contain two basic elements: the NetIDE
-header and the data (or payload). The NetIDE header, described below, is
-placed before the payload and serves as the communication and control
-link between the different components of the Network Engine. The payload
-can contain management messages, used by the components of the Network
-Engine to exchange relevant information, or control/configuration
-messages (such as OpenFlow, NETCONF, etc.) crossing the Network Engine
-generated by either network application modules or by the network
-elements.
-
-The NetIDE header is defined as follows:
-
-::
-
- 0 1 2 3
- 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | netide_ver | type | length |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | xid |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | module_id |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- | |
- + datapath_id +
- | |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-
-where each tick mark represents one bit position. Alternatively, in a
-C-style coding format, the NetIDE header can be represented with the
-following structure:
-
-::
-
-    struct netide_header {
-        uint8_t  netide_ver;
-        uint8_t  type;
-        uint16_t length;
-        uint32_t xid;
-        uint32_t module_id;
-        uint64_t datapath_id;
-    };
-
-- ``netide_ver`` is the version of the NetIDE protocol (the current
- version is v1.2, which is identified with value 0x03).
-
-- ``length`` is the total length of the payload in bytes.
-
-- ``type`` contains a code that indicates the type of the message
- according to the following values:
-
- ::
-
- enum type {
- NETIDE_HELLO = 0x01 ,
- NETIDE_ERROR = 0x02 ,
- NETIDE_MGMT = 0x03 ,
- MODULE_ANNOUNCEMENT = 0x04 ,
- MODULE_ACKNOWLEDGE = 0x05 ,
- NETIDE_HEARTBEAT = 0x06 ,
- NETIDE_OPENFLOW = 0x11 ,
- NETIDE_NETCONF = 0x12 ,
- NETIDE_OPFLEX = 0x13
- };
-
-- ``datapath_id`` is a 64-bit field that uniquely identifies the
- network elements.
-
-- ``module_id`` is a 32-bit field that uniquely identifies Backends
- and application modules running on top of each client controller. The
- composition mechanism in the core layer leverages this field to
- implement the correct execution flow of these modules.
-
-- ``xid`` is the transaction identifier associated with each message.
- Replies must use the same value to facilitate the pairing.
-
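Given the layout above, the 20-byte header can be packed and unpacked with Python's ``struct`` module. Network (big-endian) byte order is an assumption here, as is typical for wire protocols:

```python
import struct

# netide_ver (1B), type (1B), length (2B), xid (4B), module_id (4B),
# datapath_id (8B); '!' selects network (big-endian) byte order
NETIDE_HEADER = struct.Struct("!BBHIIQ")

def pack_header(netide_ver, msg_type, length, xid, module_id, datapath_id):
    return NETIDE_HEADER.pack(netide_ver, msg_type, length, xid,
                              module_id, datapath_id)

def unpack_header(buf):
    fields = NETIDE_HEADER.unpack(buf[:NETIDE_HEADER.size])
    keys = ("netide_ver", "type", "length", "xid", "module_id", "datapath_id")
    return dict(zip(keys, fields))
```

A round trip through ``pack_header``/``unpack_header`` preserves all six fields and yields exactly 20 bytes on the wire.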
-Module announcement
-~~~~~~~~~~~~~~~~~~~
-
-The first operation performed by a Backend is registering itself and the
-modules that it is running to the Core. This is done by using the
-``MODULE_ANNOUNCEMENT`` and ``MODULE_ACKNOWLEDGE`` message types. As a
-result of this process, each Backend and application module can be
-recognized by the Core through an identifier (the ``module_id``) placed
-in the NetIDE header. First, a Backend registers itself by using the
-following schema: backend-<platform name>-<pid>.
-
-For example, a Ryu Backend will register by using the name
-backend-ryu-12345 in the message, where 12345 is the process ID of
-the registering instance of the Ryu platform. The format of the message
-is the following:
-
-::
-
- struct NetIDE_message {
- netide_ver = 0x03
- type = MODULE_ANNOUNCEMENT
- length = len("backend-<platform_name>-<pid>")
- xid = 0
- module_id = 0
- datapath_id = 0
- data = "backend-<platform_name>-<pid>"
- }
-
-The answer generated by the Core will include a ``module_id`` number and
-the Backend name in the payload (the same indicated in the
-``MODULE_ANNOUNCEMENT`` message):
-
-::
-
- struct NetIDE_message {
- netide_ver = 0x03
- type = MODULE_ACKNOWLEDGE
- length = len("backend-<platform_name>-<pid>")
- xid = 0
- module_id = MODULE_ID
- datapath_id = 0
- data = "backend-<platform_name>-<pid>"
- }
-
-Once a Backend has successfully registered itself, it can start
-registering its modules with the same procedure described above by
-indicating the name of the module in the data (e.g. data="Firewall").
-From this point on, the Backend will insert its own ``module_id`` in the
-header of the messages it generates (e.g. heartbeat, hello messages,
-OpenFlow echo messages from the client controllers, etc.). Otherwise, it
-will encapsulate the control/configuration messages (e.g. FlowMod,
-PacketOut, FeatureRequest, NETCONF request, etc.) generated by network
-application modules with the specific ``module_id`` values.
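The announcement construction can be sketched as follows (a hypothetical helper; the field values follow the message format shown above):

```python
MODULE_ANNOUNCEMENT = 0x04  # from the type enum above

def build_announcement(platform_name, pid):
    """Builds the header fields and payload for a Backend announcing itself."""
    payload = "backend-{}-{}".format(platform_name, pid).encode()
    header = {"netide_ver": 0x03, "type": MODULE_ANNOUNCEMENT,
              "length": len(payload), "xid": 0, "module_id": 0,
              "datapath_id": 0}
    return header, payload
```

Note that ``module_id`` is 0 in the announcement; the Core assigns the real identifier in its ``MODULE_ACKNOWLEDGE`` reply.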
-
-Heartbeat
-~~~~~~~~~
-
-The heartbeat mechanism has been introduced after the adoption of the
-ZeroMQ messaging queuing library to transmit the NetIDE messages.
-Unfortunately, the ZeroMQ library does not offer any mechanism to find
-out about disrupted connections (and also completely unresponsive
-peers). This limitation of the ZeroMQ library can be an issue for the
-Core’s composition mechanism and for the tools connected to the Network
-Engine, as they cannot understand when a client controller disconnects
-or crashes. As a consequence, Backends must periodically send (let’s say
-every 5 seconds) a "heartbeat" message to the Core. If the Core does not
-receive at least one "heartbeat" message from the Backend within a
-certain timeframe, the Core considers it disconnected, removes all the
-related data from its memory structures and informs the relevant tools.
-The format of the message is the following:
-
-::
-
- struct NetIDE_message {
- netide_ver = 0x03
- type = NETIDE_HEARTBEAT
- length = 0
- xid = 0
- module_id = backend-id
- datapath_id = 0
- data = 0
- }
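The Core-side liveness check described above can be modelled as follows. This is a sketch; the 5-second send period comes from the text, while the exact disconnect timeout is an assumption:

```python
class HeartbeatMonitor:
    """Marks a Backend disconnected when no heartbeat arrives within the
    timeout window (e.g. a few missed 5-second heartbeats)."""
    def __init__(self, timeout_s=15.0):
        self.timeout_s = timeout_s
        self.last_beat = {}  # module_id -> time of last heartbeat

    def heartbeat(self, module_id, now_s):
        self.last_beat[module_id] = now_s

    def disconnected(self, now_s):
        # Backends whose last heartbeat is older than the timeout
        return sorted(m for m, t in self.last_beat.items()
                      if now_s - t > self.timeout_s)
```

On a timeout, the Core would remove the Backend's state and notify the tools, as described above.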
-
-Handshake
-~~~~~~~~~
-
-Upon a successful connection with the Core, the client controller must
-immediately send a hello message with the list of the control and/or
-management protocols needed by the applications deployed on top of it.
-
-::
-
- struct NetIDE_message {
- struct netide_header header ;
- uint8 data [0]
- };
-
-The header contains the following values:
-
-- ``netide_ver=0x03``
-
-- ``type=NETIDE_HELLO``
-
-- ``length=2*NR_PROTOCOLS``
-
-- ``data`` contains one 2-byte word (in big endian order) for each
- protocol, with the first byte containing the code of the protocol
- according to the above enum, while the second byte indicates the
- version of the protocol (e.g. according to the ONF specification,
- 0x01 for OpenFlow v1.0, 0x02 for OpenFlow v1.1, etc.). The NETCONF
- version is marked with 0x01, which refers to the specification in
- RFC 6241, while the OpFlex version is marked with 0x00 since this
- protocol is still in a work-in-progress stage.
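Encoding the hello ``data`` field per the scheme above can be sketched like this (protocol codes come from the type enum earlier; the versions follow the ONF numbering):

```python
NETIDE_OPENFLOW = 0x11
NETIDE_NETCONF = 0x12

def hello_data(protocols):
    """protocols: list of (protocol_code, version) pairs; each pair becomes
    one 2-byte word, protocol code first, version second (big endian)."""
    return bytes(b for code, version in protocols for b in (code, version))
```

The resulting length is ``2*NR_PROTOCOLS``, matching the ``length`` field of the hello header.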
-
-The Core relays hello messages to the server controller which responds
-with another hello message containing the following:
-
-- ``netide_ver=0x03``
-
-- ``type=NETIDE_HELLO``
-
-- ``length=2*NR_PROTOCOLS``
-
-This response is sent only if at least one of the protocols requested by
-the client is supported. In particular, ``data`` contains the codes of
-the protocols that match the client’s request (2-byte words, big endian
-order). If the handshake fails because none of the requested protocols
-is supported by the server controller, the header of the answer is as
-follows:
-
-- ``netide_ver=0x03``
-
-- ``type=NETIDE_ERROR``
-
-- ``length=2*NR_PROTOCOLS``
-
-- ``data`` contains the codes of all the protocols supported by the
- server controller (2-byte words, big endian order). In this case,
- the TCP session is terminated by the server controller just after the
- answer is received by the client.
-
+++ /dev/null
-.. _nic-dev-guide:
-
-Network Intent Composition (NIC) Developer Guide
-================================================
-
-Overview
---------
-
-The Network Intent Composition (NIC) project provides the following
-features:
-
-- odl-nic-core-hazelcast: Provides a distributed intent mapping
- service, implemented using hazelcast, that stores metadata needed by
- odl-nic-core feature.
-
-- odl-nic-core-mdsal: Provides an intent rest API to external
- applications for CRUD operations on intents, conflict resolution and
- event handling. Uses MD-SAL as backend.
-
-- odl-nic-console: Provides a karaf CLI extension for intent CRUD
- operations and mapping service operations.
-
-- odl-nic-renderer-of - Generic OpenFlow Renderer.
-
-- odl-nic-renderer-vtn - a feature that transforms an intent to a
- network modification using the VTN project
-
-- odl-nic-renderer-gbp - a feature that transforms an intent to a
- network modification using the Group Policy project
-
-- odl-nic-renderer-nemo - a feature that transforms an intent to a
- network modification using the NEMO project
-
-- odl-nic-listeners - adds support for event listening. (depends on:
- odl-nic-renderer-of)
-
-- odl-nic-neutron-integration - allow integration with openstack
- neutron to allow coexistence between existing neutron security rules
- and intents pushed by ODL applications.
-
-*Only a single renderer feature should be installed at a time for the
-Boron release.*
-
-odl-nic-core-mdsal XOR odl-nic-core-hazelcast
----------------------------------------------
-
-This feature supplies the base models for the Network Intent Composition
-(NIC) capability. This includes the definition of intent as well as the
-configuration and operational data trees.
-
-This feature only provides an information model. The interface for NIC
-is to modify the information model via the configuration data tree,
-which will trigger the renderer to make the appropriate changes in the
-controlled network.
-
-Installation
-------------
-
-First you need to install one of the core installations:
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- feature:install odl-nic-core-service-mdsal odl-nic-console
-
-*OR*
-
-::
-
- feature:install odl-nic-core-service-hazelcast odl-nic-console
-
-Then pick a renderer:
-~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- feature:install odl-nic-listeners (will install odl-nic-renderer-of)
-
-*OR*
-
-::
-
- feature:install odl-nic-renderer-vtn
-
-*OR*
-
-::
-
- feature:install odl-nic-renderer-gbp
-
-*OR*
-
-::
-
- feature:install odl-nic-renderer-nemo
-
-REST Supported operations
--------------------------
-
-POST / PUT (configuration)
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-These operations create instances of an intent in the configuration data
-tree and trigger the creation or modification of an intent.
-
-GET (configuration / operational)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This operation lists all or fetches a single intent from the data tree.
-
-DELETE (configuration)
-~~~~~~~~~~~~~~~~~~~~~~
-
-This operation will cause an intent to be removed from the system and
-trigger any configuration changes on the network rendered from this
-intent to be removed.
-
-odl-nic-cli user guide
-----------------------
-
-This feature provides karaf console CLI command to manipulate the intent
-data model. The CLI essentially invokes the equivalent data operations.
-
-intent:add
-~~~~~~~~~~
-
-Creates a new intent in the configuration data tree
-
-::
-
- DESCRIPTION
- intent:add
-
- Adds an intent to the controller.
-
- Examples: --actions [ALLOW] --from <subject> --to <subject>
- --actions [BLOCK] --from <subject>
-
- SYNTAX
- intent:add [options]
-
- OPTIONS
- -a, --actions
- Action to be performed.
- -a / --actions BLOCK/ALLOW
- (defaults to [BLOCK])
- --help
- Display this help message
- -t, --to
- Second Subject.
- -t / --to <subject>
- (defaults to any)
- -f, --from
- First subject.
- -f / --from <subject>
- (defaults to any)
-
-intent:delete
-~~~~~~~~~~~~~
-
-Removes an existing intent from the system
-
-::
-
- DESCRIPTION
- intent:remove
-
- Removes an intent from the controller.
-
- SYNTAX
- intent:remove id
-
- ARGUMENTS
- id Intent Id
-
-intent:list
-~~~~~~~~~~~
-
-Lists all the intents in the system
-
-::
-
- DESCRIPTION
- intent:list
-
- Lists all intents in the controller.
-
- SYNTAX
- intent:list [options]
-
- OPTIONS
- -c, --config
- List Configuration Data (optional).
- -c / --config <ENTER>
- --help
- Display this help message
-
-intent:show
-~~~~~~~~~~~
-
-Displays the details of a single intent
-
-::
-
- DESCRIPTION
- intent:show
-
- Shows detailed information about an intent.
-
- SYNTAX
- intent:show id
-
- ARGUMENTS
- id Intent Id
-
-intent:map
-~~~~~~~~~~
-
-List/Add/Delete current state from/to the mapping service.
-
-::
-
- DESCRIPTION
- intent:map
-
- List/Add/Delete current state from/to the mapping service.
-
- SYNTAX
- intent:map [options]
-
- Examples: --list, -l [ENTER], to retrieve all keys.
- --add-key <key> [ENTER], to add a new key with empty contents.
- --del-key <key> [ENTER], to remove a key with its values.
- --add-key <key> --value [<value 1>, <value 2>, ...] [ENTER],
- to add a new key with some values (json format).
- OPTIONS
- --help
- Display this help message
- -l, --list
- List values associated with a particular key.
- -l / --filter <regular expression> [ENTER]
- --add-key
- Adds a new key to the mapping service.
- --add-key <key name> [ENTER]
- --value
- Specifies which value should be added/delete from the mapping service.
- --value "key=>value"... --value "key=>value" [ENTER]
- (defaults to [])
- --del-key
- Deletes a key from the mapping service.
- --del-key <key name> [ENTER]
-
-Sample Use case: MPLS
----------------------
-
-Description
-~~~~~~~~~~~
-
-The scope of this use-case is to add MPLS intents between two MPLS
-endpoints. The use-case tries to address the real-world scenario
-illustrated in the diagram below:
-
-.. figure:: ./images/nic/MPLS_VPN_Service_Diagram.png
- :alt: MPLS VPN Service Diagram
-
- MPLS VPN Service Diagram
-
-where PE (Provider Edge) and P (Provider) switches are managed by
-OpenDaylight. In NIC’s terminology the endpoints are the PE switches.
-There could be many P switches between the PEs.
-
-In order for NIC to recognize endpoints as MPLS endpoints, the user is
-expected to add mapping information about the PE switches to NIC’s
-mapping service to include the below properties:
-
-1. MPLS Label to identify a PE
-
-2. IPv4 Prefix for the customer sites that are connected to a PE
-
-3. Switch-Port: the ingress (or egress) port for the source (or
-   destination) endpoint of the source (or destination) PE
-
-An intent:add between two MPLS endpoints renders OpenFlow rules for:
-
-1. push/pop labels to the MPLS endpoint nodes after an IPv4 Prefix
-   match.
-
-2. a forward-to-port rule after an MPLS label match on all the switches
-   that form the shortest path between the endpoints (calculated using
-   Dijkstra's algorithm).
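The shortest-path computation referred to above can be sketched with a standard Dijkstra implementation. This is illustrative Python only; NIC's actual path computation lives in its renderer code:

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra over an adjacency map {node: [(neighbor, cost), ...]}.
    Assumes dst is reachable from src."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # walk back from dst to src to recover the path
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

Disjoint-path algorithms such as Suurballe, used by the slow-reroute failover constraint, build on the same distance computation.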
-
-Additionally, we have also added constraints to the Intent model for
-protection and failover mechanisms to ensure end-to-end connectivity
-between endpoints. By specifying these constraints to intent:add, the
-use-case aims to reduce the risk of connectivity failure due to a
-single link or port down event on a forwarding device.
-
-- Protection constraint: Constraint that requires an end-to-end
- connectivity to be protected by providing redundant paths.
-
-- Failover constraint: Constraint that specifies the type of failover
-  implementation. slow-reroute uses disjoint-path calculation
-  algorithms, such as Suurballe's, to provide alternate end-to-end
-  routes. fast-reroute uses the failure-detection features of hardware
-  forwarding devices through OpenFlow group table features (future
-  plans). When the user requests no constraint, we default to offering
-  a single end-to-end route computed with Dijkstra's shortest path.
-
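The renderer's path computation can be sketched in a few lines. This is an illustrative sketch only (not NIC code), using the switch links of the demo topology from the next section, with unit cost per hop:

```python
import heapq

# Links from the demo topology (s1, s2a, s2b, s2c, s3), undirected,
# unit cost per hop.
edges = [("s1", "s2a"), ("s1", "s2b"), ("s2b", "s2c"),
         ("s3", "s2a"), ("s3", "s2c")]
graph = {}
for a, b in edges:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

def shortest_path(src, dst):
    """Dijkstra with unit weights; returns the path as a list of switches."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            if nxt not in visited:
                heapq.heappush(queue, (cost + 1, nxt, path + [nxt]))
    return None

print(shortest_path("s1", "s3"))  # ['s1', 's2a', 's3']
```

Flows for the intent would then land on s1, s2a and s3; the disjoint route (s1, s2b, s2c, s3) is the kind of alternate path a Suurballe-style slow-reroute constraint would provide.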
-How to use it?
-~~~~~~~~~~~~~~
-
-1. Start Karaf and install related features:
-
- ::
-
- feature:install odl-nic-core-service-mdsal odl-nic-core odl-nic-console odl-nic-listeners
-
-2. Start the Mininet topology. (The topology can be verified via the
-   topology URI using RESTCONF.)
-
- ::
-
- mn --controller=remote,ip=$CONTROLLER_IP --custom ~/shortest_path.py --topo shortest_path --switch ovsk,protocols=OpenFlow13
-
-   ::
-
-       cat shortest_path.py -->
-       from mininet.topo import Topo
-       from mininet.cli import CLI
-       from mininet.net import Mininet
-       from mininet.link import TCLink
-       from mininet.util import irange,dumpNodeConnections
-       from mininet.log import setLogLevel
-
-       class Fast_Failover_Demo_Topo(Topo):
-
-           def __init__(self):
-               # Initialize topology and default options
-               Topo.__init__(self)
-
-               s1 = self.addSwitch('s1',dpid='0000000000000001')
-               s2a = self.addSwitch('s2a',dpid='000000000000002a')
-               s2b = self.addSwitch('s2b',dpid='000000000000002b')
-               s2c = self.addSwitch('s2c',dpid='000000000000002c')
-               s3 = self.addSwitch('s3',dpid='0000000000000003')
-               self.addLink(s1, s2a)
-               self.addLink(s1, s2b)
-               self.addLink(s2b, s2c)
-               self.addLink(s3, s2a)
-               self.addLink(s3, s2c)
-               host_1 = self.addHost('h1',ip='10.0.0.1',mac='10:00:00:00:00:01')
-               host_2 = self.addHost('h2',ip='10.0.0.2',mac='10:00:00:00:00:02')
-               self.addLink(host_1, s1)
-               self.addLink(host_2, s3)
-
-       topos = { 'shortest_path': ( lambda: Fast_Failover_Demo_Topo() ) }
-
-3. Update the mapping service with the required information.
-
- Sample payload:
-
- ::
-
- {
- "mappings": {
- "outer-map": [
- {
- "id": "uva",
- "inner-map": [
- {
- "inner-key": "ip_prefix",
- "value": "10.0.0.1/32"
- },
- {
- "inner-key": "mpls_label",
- "value": "15"
- },
- {
- "inner-key": "switch_port",
- "value": "openflow:1:1"
- }
- ]
- },
- {
- "id": "eur",
- "inner-map": [
- {
- "inner-key": "ip_prefix",
- "value": "10.0.0.2/32"
- },
- {
- "inner-key": "mpls_label",
- "value": "16"
- },
- {
- "inner-key": "switch_port",
- "value": "openflow:3:1"
- }
- ]
- }
- ]
- }
- }
-
-4. Create bidirectional Intents using Karaf command line or RestCONF:
-
- Example:
-
- ::
-
- intent:add -f uva -t eur -a ALLOW
- intent:add -f eur -t uva -a ALLOW
-
-5. Verify with the ovs-ofctl command on Mininet that the flows were
-   pushed correctly to the nodes that form the shortest path.
-
- Example:
-
- ::
-
- ovs-ofctl -O OpenFlow13 dump-flows s1
-
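The mapping payload in step 3 can also be built and pushed from a script. A hedged sketch: the RESTCONF path for NIC's mapping service used below is an assumption, so verify it against your release before use.

```python
import json
import urllib.request

# Build one PE entry of the mapping-service payload from step 3.
def pe_entry(pe_id, ip_prefix, mpls_label, switch_port):
    return {
        "id": pe_id,
        "inner-map": [
            {"inner-key": "ip_prefix", "value": ip_prefix},
            {"inner-key": "mpls_label", "value": mpls_label},
            {"inner-key": "switch_port", "value": switch_port},
        ],
    }

payload = {"mappings": {"outer-map": [
    pe_entry("uva", "10.0.0.1/32", "15", "openflow:1:1"),
    pe_entry("eur", "10.0.0.2/32", "16", "openflow:3:1"),
]}}

# ASSUMPTION: illustrative RESTCONF path; check your NIC release for
# the exact mapping-service URL.
req = urllib.request.Request(
    "http://localhost:8181/restconf/config/intent-mapping:mappings",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req)  # run only against a live controller
```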
+++ /dev/null
-.. _ocpplugin-dev-guide:
-
-OCP Plugin Developer Guide
-==========================
-
-This document is intended for both OCP (ORI [Open Radio Interface] C&M
-[Control and Management] Protocol) agent developers and OpenDaylight
-service/application developers. It describes essential information
-needed to implement an OCP agent that is capable of interoperating with
-the OCP plugin running in OpenDaylight, including the OCP connection
-establishment and state machines used on both ends of the connection. It
-also provides a detailed description of the northbound/southbound APIs
-that the OCP plugin exposes to allow automation and programmability.
-
-Overview
---------
-
-OCP is an ETSI standard protocol for control and management of Remote
-Radio Head (RRH) equipment. The OCP Project addresses the need for a
-southbound plugin that allows applications and controller services to
-interact with RRHs using OCP. The OCP southbound plugin will allow
-applications acting as a Radio Equipment Control (REC) to interact with
-RRHs that support an OCP agent.
-
-.. figure:: ./images/ocpplugin/ocp-sb-plugin.jpg
- :alt: OCP southbound plugin
-
- OCP southbound plugin
-
-Architecture
-------------
-
-OCP is a vendor-neutral standard communications interface defined to
-enable control and management between RE and REC of an ORI architecture.
-The OCP Plugin supports the implementation of the OCP specification; it
-is based on the Model Driven Service Abstraction Layer (MD-SAL)
-architecture.
-
-The OCP Plugin project consists of three main components: OCP southbound
-plugin, OCP protocol library and OCP service. For details on each of
-them, refer to the OCP Plugin User Guide.
-
-.. figure:: ./images/ocpplugin/plugin-design.jpg
- :alt: Overall architecture
-
- Overall architecture
-
-Connection Establishment
-------------------------
-
-The OCP layer is transported over a TCP/IP connection established
-between the RE and the REC. OCP provides the following functions:
-
-- Control & Management of the RE by the REC
-
-- Transport of AISG/3GPP Iuant Layer 7 messages and alarms between REC
- and RE
-
-Hello Message
-~~~~~~~~~~~~~
-
-The Hello message is used by the OCP agent during connection setup for
-version negotiation. When the connection is established, the OCP agent
-immediately sends a Hello message with the version field set to the
-highest OCP version it supports, along with the vendor ID and serial
-number of the radio head it is running on.
-
-The combination of the vendor ID and serial number is used by the OCP
-plugin to uniquely identify a managed radio head. When it receives no
-reply from the OCP plugin, the OCP agent can resend the Hello message,
-governed by a pre-defined Hello timeout (THLO) and Hello resend count
-(NHLO).
-
-According to the ORI spec, the default value of the TCP Link Monitoring
-Timer (TTLM) is 50 seconds. The RE shall trigger an OCP layer restart
-when TTLM expires in the RE or when the RE detects a TCP link failure,
-so we may define NHLO \* THLO = 50 seconds (e.g. NHLO = 10, THLO = 5
-seconds).
-
-The Hello message is a new type of indication message; it contains the
-supported OCP version, vendor ID and serial number, as shown below.
-
-**Hello message.**
-
-::
-
- <?xml version="1.0" encoding="UTF-8"?>
- <msg xmlns="http://uri.etsi.org/ori/002-2/v4.1.1">
- <header>
- <msgType>IND</msgType>
- <msgUID>0</msgUID>
- </header>
- <body>
- <helloInd>
- <version>4.1.1</version>
- <vendorId>XYZ</vendorId>
- <serialNumber>ABC123</serialNumber>
- </helloInd>
- </body>
- </msg>
-
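For illustration, the identifying fields of a Hello indication can be extracted with standard XML tooling, much as the plugin must do on receipt. A minimal sketch; the ``ocp:``-prefixed node id below merely mirrors the ``nodeId`` examples later in this guide and is an assumption:

```python
import xml.etree.ElementTree as ET

HELLO = """<?xml version="1.0" encoding="UTF-8"?>
<msg xmlns="http://uri.etsi.org/ori/002-2/v4.1.1">
  <header><msgType>IND</msgType><msgUID>0</msgUID></header>
  <body>
    <helloInd>
      <version>4.1.1</version>
      <vendorId>XYZ</vendorId>
      <serialNumber>ABC123</serialNumber>
    </helloInd>
  </body>
</msg>"""

NS = {"ori": "http://uri.etsi.org/ori/002-2/v4.1.1"}
# fromstring() rejects str input carrying an encoding declaration,
# so parse the message as bytes.
root = ET.fromstring(HELLO.encode("utf-8"))
vendor = root.findtext("ori:body/ori:helloInd/ori:vendorId", namespaces=NS)
serial = root.findtext("ori:body/ori:helloInd/ori:serialNumber", namespaces=NS)

# The plugin keys a managed radio head on (vendorId, serialNumber);
# the "ocp:" node-id format here is an assumption for illustration.
node_id = "ocp:%s-%s" % (vendor, serial)
print(node_id)  # ocp:XYZ-ABC123
```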
-Ack Message
-~~~~~~~~~~~
-
-The OCP plugin always responds to a Hello from the OCP agent with an
-Ack message: ACK(OK) if everything is in order, ACK(FAIL) if something
-is wrong.
-
-If the OCP agent receives ACK(OK), it moves to the Established state;
-if it receives ACK(FAIL), it moves to the Maintenance state. The
-failure codes and reasons of ACK(FAIL) are defined as below:
-
-- FAIL\_OCP\_VERSION (OCP version not supported)
-
-- FAIL\_NO\_MORE\_CAPACITY (OCP plugin cannot control any more radio
- heads)
-
-The result inside Ack message indicates OK or FAIL with different
-reasons.
-
-**Ack message.**
-
-::
-
- <?xml version="1.0" encoding="UTF-8"?>
- <msg xmlns="http://uri.etsi.org/ori/002-2/v4.1.1">
- <header>
- <msgType>ACK</msgType>
- <msgUID>0</msgUID>
- </header>
- <body>
- <helloAck>
- <result>FAIL_OCP_VERSION</result>
- </helloAck>
- </body>
- </msg>
-
-State Machines
-~~~~~~~~~~~~~~
-
-The following figures illustrate the Finite State Machine (FSM) of the
-OCP agent and OCP plugin for new connection procedure.
-
-.. figure:: ./images/ocpplugin/ocpagent-state-machine.jpg
- :alt: OCP agent state machine
-
- OCP agent state machine
-
-.. figure:: ./images/ocpplugin/ocpplugin-state-machine.jpg
- :alt: OCP plugin state machine
-
- OCP plugin state machine
-
-Northbound APIs
----------------
-
-There are ten exposed northbound APIs: health-check, set-time, re-reset,
-get-param, modify-param, create-obj, delete-obj, get-state, modify-state
-and get-fault.
-
-health-check
-~~~~~~~~~~~~
-
-The Health Check procedure allows the application to verify that the OCP
-layer is functioning correctly at the RE.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:health-check-nb
-
-POST Input
-^^^^^^^^^^
-
-+--------------------+----------+--------------------+--------------------+----------+
-| Field Name | Type | Description | Example | Required |
-| | | | | ? |
-+====================+==========+====================+====================+==========+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+--------------------+----------+--------------------+--------------------+----------+
-| tcpLinkMonTimeout | unsigned | TCP Link | 50 | Yes |
-| | Short | Monitoring Timeout | | |
-| | | (unit: seconds) | | |
-+--------------------+----------+--------------------+--------------------+----------+
-
-**Example.**
-
-::
-
- {
- "health-check-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "tcpLinkMonTimeout": "50"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| result | String, enumerated | Common default result codes |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "result": "SUCCESS"
- }
- }
-
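These northbound RPCs can be exercised from any HTTP client. A minimal Python sketch for ``health-check-nb``, assuming the controller's default RESTCONF port and credentials (``admin``/``admin``) — adjust both for your deployment. The same pattern applies to the other nine RPCs:

```python
import base64
import json
import urllib.request

def rpc_request(url, body, user="admin", password="admin"):
    """Build an authenticated RESTCONF POST for an ocp-service RPC."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + token},
        method="POST",
    )

body = {"health-check-nb": {"input": {
    "nodeId": "ocp:MTI-101-200",
    "tcpLinkMonTimeout": "50",
}}}
req = rpc_request(
    "http://localhost:8181/restconf/operations/ocp-service:health-check-nb",
    body)
# with urllib.request.urlopen(req) as resp:   # requires a running controller
#     print(json.load(resp)["output"]["result"])
```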
-set-time
-~~~~~~~~
-
-The Set Time procedure allows the application to set/update the absolute
-time reference that shall be used by the RE.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:set-time-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-| newTime | dateTime | New datetime setting | 2016-04-26T10:23:00- | Yes |
-| | | for radio head | 05:00 | |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "set-time-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "newTime": "2016-04-26T10:23:00-05:00"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| result | String, enumerated | Common default result codes + |
-| | | FAIL\_INVALID\_TIMEDATA |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "result": "SUCCESS"
- }
- }
-
-re-reset
-~~~~~~~~
-
-The RE Reset procedure allows the application to reset a specific RE.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:re-reset-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "re-reset-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| result | String, enumerated | Common default result codes |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "result": "SUCCESS"
- }
- }
-
-get-param
-~~~~~~~~~
-
-The Object Parameter Reporting procedure allows the application to
-retrieve the following information:
-
-1. the defined object types and instances within the Resource Model of
- the RE
-
-2. the values of the parameters of the objects
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:get-param-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-| objId | String | Object ID | RxSigPath\_5G:1 | Yes |
-+------------+------------+----------------------+----------------------+------------+
-| paramName | String | Parameter name | dataLink | Yes |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "get-param-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "objId": "RxSigPath_5G:1",
- "paramName": "dataLink"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| id | String | Object ID |
-+--------------------+--------------------+--------------------------------------+
-| name | String | Object parameter name |
-+--------------------+--------------------+--------------------------------------+
-| value | String | Object parameter value |
-+--------------------+--------------------+--------------------------------------+
-| result | String, enumerated | Common default result codes + |
-| | | "FAIL\_UNKNOWN\_OBJECT", |
-| | | "FAIL\_UNKNOWN\_PARAM" |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "obj": [
- {
- "id": "RxSigPath_5G:1",
- "param": [
- {
- "name": "dataLink",
- "value": "dataLink:1"
- }
- ]
- }
- ],
- "result": "SUCCESS"
- }
- }
-
-modify-param
-~~~~~~~~~~~~
-
-The Object Parameter Modification procedure allows the application to
-configure the values of the parameters of the objects identified by the
-Resource Model.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:modify-param-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-| objId | String | Object ID | RxSigPath\_5G:1 | Yes |
-+------------+------------+----------------------+----------------------+------------+
-| name | String | Object parameter | dataLink | Yes |
-| | | name | | |
-+------------+------------+----------------------+----------------------+------------+
-| value | String | Object parameter | dataLink:1 | Yes |
-| | | value | | |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "modify-param-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "objId": "RxSigPath_5G:1",
- "param": [
- {
- "name": "dataLink",
- "value": "dataLink:1"
- }
- ]
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| objId | String | Object ID |
-+--------------------+--------------------+--------------------------------------+
-| globResult | String, enumerated | Common default result codes + |
-| | | "FAIL\_UNKNOWN\_OBJECT", |
-| | | "FAIL\_PARAMETER\_FAIL", |
-| | | "FAIL\_NOSUCH\_RESOURCE" |
-+--------------------+--------------------+--------------------------------------+
-| name | String | Object parameter name |
-+--------------------+--------------------+--------------------------------------+
-| result | String, enumerated | "SUCCESS", "FAIL\_UNKNOWN\_PARAM", |
-| | | "FAIL\_PARAM\_READONLY", |
-| | | "FAIL\_PARAM\_LOCKREQUIRED", |
-| | | "FAIL\_VALUE\_OUTOF\_RANGE", |
-| | | "FAIL\_VALUE\_TYPE\_ERROR" |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "objId": "RxSigPath_5G:1",
- "globResult": "SUCCESS",
- "param": [
- {
- "name": "dataLink",
- "result": "SUCCESS"
- }
- ]
- }
- }
-
-create-obj
-~~~~~~~~~~
-
-The Object Creation procedure allows the application to create and
-initialize a new instance of the given object type on the RE.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:create-obj-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-| objType | String | Object type | RxSigPath\_5G | Yes |
-+------------+------------+----------------------+----------------------+------------+
-| name | String | Object parameter | dataLink | No |
-| | | name | | |
-+------------+------------+----------------------+----------------------+------------+
-| value | String | Object parameter | dataLink:1 | No |
-| | | value | | |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "create-obj-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "objType": "RxSigPath_5G",
- "param": [
- {
- "name": "dataLink",
- "value": "dataLink:1"
- }
- ]
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| objId | String | Object ID |
-+--------------------+--------------------+--------------------------------------+
-| globResult | String, enumerated | Common default result codes + |
-| | | "FAIL\_UNKNOWN\_OBJTYPE", |
-| | | "FAIL\_STATIC\_OBJTYPE", |
-| | | "FAIL\_UNKNOWN\_OBJECT", |
-| | | "FAIL\_CHILD\_NOTALLOWED", |
-| | | "FAIL\_OUTOF\_RESOURCES", |
-| | | "FAIL\_PARAMETER\_FAIL", |
-| | | "FAIL\_NOSUCH\_RESOURCE" |
-+--------------------+--------------------+--------------------------------------+
-| name | String | Object parameter name |
-+--------------------+--------------------+--------------------------------------+
-| result | String, enumerated | "SUCCESS", "FAIL\_UNKNOWN\_PARAM", |
-| | | "FAIL\_PARAM\_READONLY", |
-| | | "FAIL\_PARAM\_LOCKREQUIRED", |
-| | | "FAIL\_VALUE\_OUTOF\_RANGE", |
-| | | "FAIL\_VALUE\_TYPE\_ERROR" |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "objId": "RxSigPath_5G:0",
- "globResult": "SUCCESS",
- "param": [
- {
- "name": "dataLink",
- "result": "SUCCESS"
- }
- ]
- }
- }
-
-delete-obj
-~~~~~~~~~~
-
-The Object Deletion procedure allows the application to delete a given
-object instance and, recursively, all of its child objects on the RE.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:delete-obj-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-| objId | String | Object ID | RxSigPath\_5G:1 | Yes |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "delete-obj-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "obj-id": "RxSigPath_5G:0"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| result | String, enumerated | Common default result codes + |
-| | | "FAIL\_UNKNOWN\_OBJECT", |
-| | | "FAIL\_STATIC\_OBJTYPE", |
-| | | "FAIL\_LOCKREQUIRED" |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "result": "SUCCESS"
- }
- }
-
-get-state
-~~~~~~~~~
-
-The Object State Reporting procedure allows the application to acquire
-the current state (for the requested state type) of one or more objects
-of the RE resource model, and additionally configure event-triggered
-reporting of the detected state changes for all state types of the
-indicated objects.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:get-state-nb
-
-POST Input
-^^^^^^^^^^
-
-+--------------------+----------+--------------------+--------------------+----------+
-| Field Name | Type | Description | Example | Required |
-| | | | | ? |
-+====================+==========+====================+====================+==========+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+--------------------+----------+--------------------+--------------------+----------+
-| objId | String | Object ID | RxSigPath\_5G:1 | Yes |
-+--------------------+----------+--------------------+--------------------+----------+
-| stateType | String, | Valid values: | ALL | Yes |
-| | enumerat | "AST", "FST", | | |
-| | ed | "ALL" | | |
-+--------------------+----------+--------------------+--------------------+----------+
-| eventDrivenReporti | Boolean | Event-triggered | true | Yes |
-| ng | | reporting of state | | |
-| | | change | | |
-+--------------------+----------+--------------------+--------------------+----------+
-
-**Example.**
-
-::
-
- {
- "get-state-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "objId": "antPort:0",
- "stateType": "ALL",
- "eventDrivenReporting": "true"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| id | String | Object ID |
-+--------------------+--------------------+--------------------------------------+
-| type | String, enumerated | State type. Valid values: "AST", |
-| | | "FST" |
-+--------------------+--------------------+--------------------------------------+
-| value | String, enumerated | State value. Valid values: For state |
-| | | type = "AST": "LOCKED", "UNLOCKED". |
-| | | For state type = "FST": |
-| | | "PRE\_OPERATIONAL", "OPERATIONAL", |
-| | | "DEGRADED", "FAILED", |
-| | | "NOT\_OPERATIONAL", "DISABLED" |
-+--------------------+--------------------+--------------------------------------+
-| result | String, enumerated | Common default result codes + |
-| | | "FAIL\_UNKNOWN\_OBJECT", |
-| | | "FAIL\_UNKNOWN\_STATETYPE", |
-| | | "FAIL\_VALUE\_OUTOF\_RANGE" |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "obj": [
- {
- "id": "antPort:0",
- "state": [
- {
- "type": "FST",
- "value": "DISABLED"
- },
- {
- "type": "AST",
- "value": "LOCKED"
- }
- ]
- }
- ],
- "result": "SUCCESS"
- }
- }
-
-modify-state
-~~~~~~~~~~~~
-
-The Object State Modification procedure allows the application to
-trigger a change in the state of an object of the RE Resource Model.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:modify-state-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-| objId | String | Object ID | RxSigPath\_5G:1 | Yes |
-+------------+------------+----------------------+----------------------+------------+
-| stateType | String, | Valid values: "AST", | AST | Yes |
-| | enumerated | "FST", "ALL" | | |
-+------------+------------+----------------------+----------------------+------------+
-| stateValue | String, | Valid values: For | LOCKED | Yes |
-| | enumerated | state type = "AST": | | |
-| | | "LOCKED", | | |
-| | | "UNLOCKED". For | | |
-| | | state type = "FST": | | |
-| | | "PRE\_OPERATIONAL", | | |
-| | | "OPERATIONAL", | | |
-| | | "DEGRADED", | | |
-| | | "FAILED", | | |
-| | | "NOT\_OPERATIONAL", | | |
-| | | "DISABLED" | | |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "modify-state-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "objId": "RxSigPath_5G:1",
- "stateType": "AST",
- "stateValue": "LOCKED"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| objId | String | Object ID |
-+--------------------+--------------------+--------------------------------------+
-| stateType | String, enumerated | State type. Valid values: "AST", |
-| | | "FST" |
-+--------------------+--------------------+--------------------------------------+
-| stateValue | String, enumerated | State value. Valid values: For state |
-| | | type = "AST": "LOCKED", "UNLOCKED". |
-| | | For state type = "FST": |
-| | | "PRE\_OPERATIONAL", "OPERATIONAL", |
-| | | "DEGRADED", "FAILED", |
-| | | "NOT\_OPERATIONAL", "DISABLED" |
-+--------------------+--------------------+--------------------------------------+
-| result | String, enumerated | Common default result codes + |
-| | | "FAIL\_UNKNOWN\_OBJECT", |
-| | | "FAIL\_UNKNOWN\_STATETYPE", |
-| | | "FAIL\_UNKNOWN\_STATEVALUE", |
-| | | "FAIL\_STATE\_READONLY", |
-| | | "FAIL\_RESOURCE\_UNAVAILABLE", |
-| | | "FAIL\_RESOURCE\_INUSE", |
-| | | "FAIL\_PARENT\_CHILD\_CONFLICT", |
-| | | "FAIL\_PRECONDITION\_NOTMET" |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "objId": "RxSigPath_5G:1",
- "stateType": "AST",
- "stateValue": "LOCKED",
- "result": "SUCCESS",
- }
- }
-
-get-fault
-~~~~~~~~~
-
-The Fault Reporting procedure allows the application to acquire
-information about all current active faults associated with a primary
-object, as well as configure the RE to report when the fault status
-changes for any of the faults associated with the indicated primary
-object.
-
-Default URL:
-http://localhost:8181/restconf/operations/ocp-service:get-fault-nb
-
-POST Input
-^^^^^^^^^^
-
-+------------+------------+----------------------+----------------------+------------+
-| Field Name | Type | Description | Example | Required? |
-+============+============+======================+======================+============+
-| nodeId | String | Inventory node | ocp:MTI-101-200 | Yes |
-| | | reference for OCP | | |
-| | | radio head | | |
-+------------+------------+----------------------+----------------------+------------+
-| objId | String | Object ID | RE:0 | Yes |
-+------------+------------+----------------------+----------------------+------------+
-| eventDrive | Boolean | Event-triggered | true | Yes |
-| nReporting | | reporting of fault | | |
-+------------+------------+----------------------+----------------------+------------+
-
-**Example.**
-
-::
-
- {
- "get-fault-nb": {
- "input": {
- "nodeId": "ocp:MTI-101-200",
- "objId": "RE:0",
- "eventDrivenReporting": "true"
- }
- }
- }
-
-POST Output
-^^^^^^^^^^^
-
-+--------------------+--------------------+--------------------------------------+
-| Field Name | Type | Description |
-+====================+====================+======================================+
-| result | String, enumerated | Common default result codes + |
-| | | "FAIL\_UNKNOWN\_OBJECT", |
-| | | "FAIL\_VALUE\_OUTOF\_RANGE" |
-+--------------------+--------------------+--------------------------------------+
-| id (obj) | String | Object ID |
-+--------------------+--------------------+--------------------------------------+
-| id (fault) | String | Fault ID |
-+--------------------+--------------------+--------------------------------------+
-| severity | String | Fault severity |
-+--------------------+--------------------+--------------------------------------+
-| timestamp | dateTime | Time stamp |
-+--------------------+--------------------+--------------------------------------+
-| descr | String | Text description |
-+--------------------+--------------------+--------------------------------------+
-| affectedObj | String | Affected object |
-+--------------------+--------------------+--------------------------------------+
-
-**Example.**
-
-::
-
- {
- "output": {
- "result": "SUCCESS",
- "obj": [
- {
- "id": "RE:0",
- "fault": [
- {
- "id": "FAULT_OVERTEMP",
- "severity": "DEGRADED",
- "timestamp": "2012-02-12T16:35:00",
- "descr": "PA temp too high; Pout reduced",
- "affectedObj": [
- "TxSigPath_EUTRA:0",
- "TxSigPath_EUTRA:1"
- ]
- },
- {
- "id": "FAULT_VSWR_OUTOF_RANGE",
- "severity": "WARNING",
- "timestamp": "2012-02-12T16:01:05",
- }
- ]
- }
- ]
- }
- }
-
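A consumer of ``get-fault-nb`` typically walks the nested ``obj``/``fault`` lists. A small sketch that groups active faults by severity, using the reply shown above:

```python
# The "output" object of a get-fault-nb reply, as shown above.
output = {
    "result": "SUCCESS",
    "obj": [{
        "id": "RE:0",
        "fault": [
            {"id": "FAULT_OVERTEMP", "severity": "DEGRADED",
             "timestamp": "2012-02-12T16:35:00",
             "descr": "PA temp too high; Pout reduced",
             "affectedObj": ["TxSigPath_EUTRA:0", "TxSigPath_EUTRA:1"]},
            {"id": "FAULT_VSWR_OUTOF_RANGE", "severity": "WARNING",
             "timestamp": "2012-02-12T16:01:05"},
        ],
    }],
}

def faults_by_severity(reply):
    """Map severity -> list of fault IDs across all reported objects."""
    grouped = {}
    for obj in reply.get("obj", []):
        for fault in obj.get("fault", []):
            grouped.setdefault(fault["severity"], []).append(fault["id"])
    return grouped

print(faults_by_severity(output))
# {'DEGRADED': ['FAULT_OVERTEMP'], 'WARNING': ['FAULT_VSWR_OUTOF_RANGE']}
```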
-.. note::
-
- The northbound APIs described above wrap the southbound APIs to make
- them accessible to external applications via RESTCONF, as well as
- take care of synchronizing the RE resource model between radio heads
- and the controller’s datastore. See
- applications/ocp-service/src/main/yang/ocp-resourcemodel.yang for
- the YANG representation of the RE resource model.
-
-Java Interfaces (Southbound APIs)
----------------------------------
-
-The southbound APIs provide concrete implementations of the following
-OCP elementary functions: health-check, set-time, re-reset, get-param,
-modify-param, create-obj, delete-obj, get-state, modify-state and
-get-fault. Any OpenDaylight service or application (including the OCP
-service itself) that wants to speak OCP to radio heads will need to use
-them.
-
-SalDeviceMgmtService
-~~~~~~~~~~~~~~~~~~~~
-
-Interface SalDeviceMgmtService defines three methods corresponding to
-health-check, set-time and re-reset.
-
-**SalDeviceMgmtService.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.device.mgmt.rev150811;
-
- public interface SalDeviceMgmtService
- extends
- RpcService
- {
-
- Future<RpcResult<HealthCheckOutput>> healthCheck(HealthCheckInput input);
-
- Future<RpcResult<SetTimeOutput>> setTime(SetTimeInput input);
-
- Future<RpcResult<ReResetOutput>> reReset(ReResetInput input);
-
- }
-
-SalConfigMgmtService
-~~~~~~~~~~~~~~~~~~~~
-
-Interface SalConfigMgmtService defines two methods corresponding to
-get-param and modify-param.
-
-**SalConfigMgmtService.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.config.mgmt.rev150811;
-
- public interface SalConfigMgmtService
- extends
- RpcService
- {
-
- Future<RpcResult<GetParamOutput>> getParam(GetParamInput input);
-
- Future<RpcResult<ModifyParamOutput>> modifyParam(ModifyParamInput input);
-
- }
-
-SalObjectLifecycleService
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Interface SalObjectLifecycleService defines two methods corresponding to
-create-obj and delete-obj.
-
-**SalObjectLifecycleService.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.lifecycle.rev150811;
-
- public interface SalObjectLifecycleService
- extends
- RpcService
- {
-
- Future<RpcResult<CreateObjOutput>> createObj(CreateObjInput input);
-
- Future<RpcResult<DeleteObjOutput>> deleteObj(DeleteObjInput input);
-
- }
-
-SalObjectStateMgmtService
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Interface SalObjectStateMgmtService defines two methods corresponding to
-get-state and modify-state.
-
-**SalObjectStateMgmtService.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.state.mgmt.rev150811;
-
- public interface SalObjectStateMgmtService
- extends
- RpcService
- {
-
- Future<RpcResult<GetStateOutput>> getState(GetStateInput input);
-
- Future<RpcResult<ModifyStateOutput>> modifyState(ModifyStateInput input);
-
- }
-
-SalFaultMgmtService
-~~~~~~~~~~~~~~~~~~~
-
-Interface SalFaultMgmtService defines only one method corresponding to
-get-fault.
-
-**SalFaultMgmtService.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.fault.mgmt.rev150811;
-
- public interface SalFaultMgmtService
- extends
- RpcService
- {
-
- Future<RpcResult<GetFaultOutput>> getFault(GetFaultInput input);
-
- }
-
-Notifications
--------------
-
-In addition to indication messages, the OCP southbound plugin
-translates specific events (e.g., connect, disconnect) coming up from
-the OCP protocol library into MD-SAL Notification objects and publishes
-them to the MD-SAL. The OCP service likewise notifies the completion of
-certain operations via Notifications.
-
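This publish/subscribe flow can be pictured with a minimal sketch (plain Python standing in for MD-SAL's notification service; all class and method names here are illustrative, not the MD-SAL API):

```python
class NotificationBus:
    """Toy publish/subscribe bus mirroring how MD-SAL delivers notifications."""
    def __init__(self):
        self._listeners = {}          # notification type -> list of callbacks

    def register(self, ntype, callback):
        self._listeners.setdefault(ntype, []).append(callback)

    def publish(self, ntype, payload):
        for cb in self._listeners.get(ntype, []):
            cb(payload)

bus = NotificationBus()
events = []
# Analogous to SalDeviceMgmtListener.onDeviceConnected below.
bus.register("DeviceConnected", lambda n: events.append(("connected", n)))
bus.publish("DeviceConnected", {"nodeId": "ocp:1"})
# events now holds the delivered notification
```
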
-SalDeviceMgmtListener
-~~~~~~~~~~~~~~~~~~~~~
-
-An onDeviceConnected Notification will be published to the MD-SAL as
-soon as a radio head is connected to the controller, and when that radio
-head is disconnected the OCP southbound plugin will publish an
-onDeviceDisconnected Notification in response to the disconnect event
-propagated from the OCP protocol library.
-
-**SalDeviceMgmtListener.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.device.mgmt.rev150811;
-
- public interface SalDeviceMgmtListener
- extends
- NotificationListener
- {
-
- void onDeviceConnected(DeviceConnected notification);
-
- void onDeviceDisconnected(DeviceDisconnected notification);
-
- }
-
-OcpServiceListener
-~~~~~~~~~~~~~~~~~~
-
-The OCP service will publish an onAlignmentCompleted Notification to the
-MD-SAL once it has completed the OCP alignment procedure with the radio
-head.
-
-**OcpServiceListener.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.ocp.applications.ocp.service.rev150811;
-
- public interface OcpServiceListener
- extends
- NotificationListener
- {
-
- void onAlignmentCompleted(AlignmentCompleted notification);
-
- }
-
-SalObjectStateMgmtListener
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-When receiving a state change indication message, the OCP southbound
-plugin will propagate the indication message to upper layer
-services/applications by publishing a corresponding onStateChangeInd
-Notification to the MD-SAL.
-
-**SalObjectStateMgmtListener.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.state.mgmt.rev150811;
-
- public interface SalObjectStateMgmtListener
- extends
- NotificationListener
- {
-
- void onStateChangeInd(StateChangeInd notification);
-
- }
-
-SalFaultMgmtListener
-~~~~~~~~~~~~~~~~~~~~
-
-When receiving a fault indication message, the OCP southbound plugin
-will propagate the indication message to upper layer
-services/applications by publishing a corresponding onFaultInd
-Notification to the MD-SAL.
-
-**SalFaultMgmtListener.java.**
-
-::
-
- package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.fault.mgmt.rev150811;
-
- public interface SalFaultMgmtListener
- extends
- NotificationListener
- {
-
- void onFaultInd(FaultInd notification);
-
- }
+++ /dev/null
-.. _sdni-dev-guide:
-
-ODL-SDNi Developer Guide
-========================
-
-Overview
---------
-
-This project aims at enabling inter-SDN controller communication by
-developing SDNi (Software Defined Networking interface) as an
-application (ODL-SDNi App).
-
-ODL-SDNi Architecture
----------------------
-
-- SDNi Aggregator: Northbound SDNi plugin that acts as an aggregator
-  for collecting network information such as topology, statistics, and
-  hosts. This plugin can evolve to share additional network data across
-  federated SDN controllers as needs arise.
-
-- SDNi API: An autogenerated API view, accessible through RESTCONF, to
-  fetch the aggregated information from the northbound plugin (SDNi
-  Aggregator). The RESTCONF protocol operates on a conceptual datastore
-  defined with the YANG data modeling language.
-
-- SDNi Wrapper: The SDNi BGP Wrapper is responsible for sharing and
-  collecting information to/from federated controllers.
-
-- SDNi UI: This component displays the SDN controllers connected to
-  each other.
-
-SDNi Aggregator
----------------
-
-- SDNiAggregator connects with the Base Network Service Functions of
-  the controller. Currently it queries the network topology through
-  MD-SAL to create the SDNi network capability.
-
-- SDNiAggregator is customized to retrieve the host controller’s
-  details while running the controller in cluster mode. The rest of the
-  controller’s northbound APIs retrieve the entire topology information
-  of all the connected controllers.
-
-- The SDNiAggregator creates a topology structure. This structure is
-  populated by the various network functions.
-
-SDNi API
---------
-
-Topology and QoS data is fetched from SDNiAggregator through RESTCONF.
-
-`http://${controlleripaddress}:8181/apidoc/explorer/index.html <http://${controlleripaddress}:8181/apidoc/explorer/index.html>`__
-
-`http://${ipaddress}:8181/restconf/operations/opendaylight-sdni-topology-msg:getAllPeerTopology <http://${ipaddress}:8181/restconf/operations/opendaylight-sdni-topology-msg:getAllPeerTopology>`__
-
-**Peer Topology Data:** Controller IP Address, Links, Nodes, Link
-Bandwidths, MAC Address of switches, Latency, Host IP address.
-
-`http://${ipaddress}:8181/restconf/operations/opendaylight-sdni-qos-msg:get-all-node-connectors-statistics <http://${ipaddress}:8181/restconf/operations/opendaylight-sdni-qos-msg:get-all-node-connectors-statistics>`__
-
-**QOS Data:** Node, Port, Transmit Packets, Receive Packets, Collision
-Count, Receive Frame Error, Receive Over Run Error, Receive Crc Error
-
-`http://${ipaddress}:8181/restconf/operations/opendaylight-sdni-qos-msg:get-all-peer-node-connectors-statistics <http://${ipaddress}:8181/restconf/operations/opendaylight-sdni-qos-msg:get-all-peer-node-connectors-statistics>`__
-
-**Peer QOS Data:** Node, Port, Transmit Packets, Receive Packets,
-Collision Count, Receive Frame Error, Receive Over Run Error, Receive
-Crc Error
-
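A helper for building these RESTCONF operations URLs might look like this (a Python sketch; `sdni_rpc_url` is illustrative, and the controller address is a placeholder):

```python
def sdni_rpc_url(controller_ip, module, rpc, port=8181):
    """Build the RESTCONF operations URL for an SDNi RPC (sketch only)."""
    return ("http://{ip}:{port}/restconf/operations/{module}:{rpc}"
            .format(ip=controller_ip, port=port, module=module, rpc=rpc))

# The two endpoints documented above:
topo_url = sdni_rpc_url("10.0.0.1", "opendaylight-sdni-topology-msg",
                        "getAllPeerTopology")
qos_url = sdni_rpc_url("10.0.0.1", "opendaylight-sdni-qos-msg",
                       "get-all-node-connectors-statistics")
print(topo_url)
```
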
-SDNi Wrapper
-------------
-
-.. figure:: ./images/SDNiWrapper.png
- :alt: SDNiWrapper
-
- SDNiWrapper
-
-- SDNiWrapper is an extension of ODL-BGPCEP in which SDNi topology data
-  is exchanged along with the Update NLRI message. Refer to
-  http://tools.ietf.org/html/draft-ietf-idr-ls-distribution-04 for more
-  information on NLRI.
-
-- SDNiWrapper gets the controller’s network capabilities through the
-  SDNi Aggregator and serializes them in an Update NLRI message. This
-  NLRI message is exchanged between the clustered controllers through
-  BGP UPDATE messages. Similarly, a peer controller’s UPDATE message is
-  received, unpacked, and formatted into SDNi network capability data,
-  which is stored for later use.
-
-SDNi UI
--------
-
-This component displays the SDN controllers connected to each other.
-
-http://localhost:8181/index.html#/sdniUI/sdnController
-
-API Reference Documentation
----------------------------
-
-Go to
-`http://${controlleripaddress}:8181/apidoc/explorer/index.html <http://${controlleripaddress}:8181/apidoc/explorer/index.html>`__,
-sign in, and expand the opendaylight-sdni panel. From there, users can
-execute various API calls to test their SDNi deployment.
+++ /dev/null
-.. _ofconfig-dev-guide:
-
-OF-CONFIG Developer Guide
-=========================
-
-Overview
---------
-
-OF-CONFIG defines an OpenFlow switch as an abstraction called an
-OpenFlow Logical Switch. The OF-CONFIG protocol enables configuration of
-essential artifacts of an OpenFlow Logical Switch so that an OpenFlow
-controller can communicate and control the OpenFlow Logical switch via
-the OpenFlow protocol. OF-CONFIG introduces an operating context for one
-or more OpenFlow data paths called an OpenFlow Capable Switch for one or
-more switches. An OpenFlow Capable Switch is intended to be equivalent
-to an actual physical or virtual network element (e.g. an Ethernet
-switch) which is hosting one or more OpenFlow data paths by partitioning
-a set of OpenFlow related resources such as ports and queues among the
-hosted OpenFlow data paths. The OF-CONFIG protocol enables dynamic
-association of the OpenFlow related resources of an OpenFlow Capable
-Switch with specific OpenFlow Logical Switches which are being hosted on
-the OpenFlow Capable Switch. OF-CONFIG does not specify or report how
-the partitioning of resources on an OpenFlow Capable Switch is achieved.
-OF-CONFIG assumes that resources such as ports and queues are
-partitioned amongst multiple OpenFlow Logical Switches such that each
-OpenFlow Logical Switch can assume full control over the resources that
-are assigned to it.
-
-How to start
-------------
-
-- Start the OF-CONFIG feature as follows:
-
- ::
-
- feature:install odl-of-config-all
-
-Compatible with NETCONF
------------------------
-
-- Configure an OpenFlow Capable Switch via OpenFlow Configuration Points
-
- Method: POST
-
- URI:
- http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules
-
-   Headers: "Content-Type" and "Accept" header attributes set to
-   application/xml
-
- Payload:
-
- .. code:: xml
-
- <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
- <name>testtool</name>
- <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">10.74.151.67</address>
- <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
- <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">mininet</username>
- <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">mininet</password>
- <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
- <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
- <name>global-event-executor</name>
- </event-executor>
- <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
- <name>binding-osgi-broker</name>
- </binding-registry>
- <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
- <name>dom-broker</name>
- </dom-registry>
- <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
- <name>global-netconf-dispatcher</name>
- </client-dispatcher>
- <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
- <name>global-netconf-processing-executor</name>
- </processing-executor>
- </module>
-
-- NETCONF establishes the connections with OpenFlow Capable Switches
-  using the parameters in the previous step. During the handshake,
-  NETCONF also learns whether the OpenFlow switch supports NETCONF, and
-  this information is stored in the NETCONF topology as a property of
-  the node.
-
-- OF-CONFIG detects switches joining and leaving by monitoring the data
-  changes in the NETCONF topology. For details, refer to the
- `implementation <https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob_plain;f=southbound/southbound-impl/src/main/java/org/opendaylight/ofconfig/southbound/impl/OdlOfconfigApiServiceImpl.java;hb=refs/heads/stable/boron>`__.
-
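The mount-point payload above can also be generated programmatically. A hedged Python sketch of the essential fields (only the name, address, credential, and tcp-only elements are built; the executor, broker, and dispatcher elements from the full example are omitted):

```python
import xml.etree.ElementTree as ET

# Namespaces used in the payload example above.
NC = "urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf"
CFG = "urn:opendaylight:params:xml:ns:yang:controller:config"

def netconf_connector_module(name, address, port, username, password):
    """Build the core of the sal-netconf-connector <module> payload
    (sketch; several elements of the full example are left out)."""
    module = ET.Element("{%s}module" % CFG)
    ET.SubElement(module, "{%s}type" % CFG).text = "prefix:sal-netconf-connector"
    ET.SubElement(module, "{%s}name" % CFG).text = name
    for tag, value in [("address", address), ("port", str(port)),
                       ("username", username), ("password", password),
                       ("tcp-only", "false")]:
        ET.SubElement(module, "{%s}%s" % (NC, tag)).text = value
    return module

payload = ET.tostring(netconf_connector_module(
    "testtool", "10.74.151.67", 830, "mininet", "mininet"), encoding="unicode")
```
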
-The establishment of OF-CONFIG topology
----------------------------------------
-
-Firstly, OF-CONFIG will check whether the newly accessed switch supports
-OF-CONFIG by querying the NETCONF interface.
-
-1. During the establishment of the NETCONF connection, NETCONF and the
-   switches exchange their capabilities via the "hello" message.
-
-2. OF-CONFIG gets the connection information between NETCONF and the
-   switches by monitoring data changes via the DataChangeListener
-   interface.
-
-3. After the NETCONF connection is established, the OF-CONFIG module
-   checks whether the OF-CONFIG capability is in the switch’s
-   capabilities list obtained in step 1.
-
-4. If it is, OF-CONFIG runs the following processing steps to create
-   the topology database.
-
-For details, refer to the
-`implementation <https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob_plain;f=southbound/southbound-impl/src/main/java/org/opendaylight/ofconfig/southbound/impl/listener/OfconfigListenerHelper.java;hb=refs/heads/stable/boron>`__.
-
-Secondly, the capable switch node and logical switch node are added in
-the OF-CONFIG topology if the switch supports OF-CONFIG.
-
-The OF-CONFIG topology comprises a Capable Switch topology (underlay)
-and a Logical Switch topology (overlay). Both augment
-
-/topo:network-topology/topo:topology/topo:node
-
-NETCONF adds the nodes in the topology via the path
-"/topo:network-topology/topo:topology/topo:node" when it gets the
-configuration information of the switches.
-
-For details, refer to the
-`implementation <https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob;f=southbound/southbound-api/src/main/yang/odl-ofconfig-topology.yang;h=dbdaec46ee59da3791386011f571d7434dd1e416;hb=refs/heads/stable/boron>`__.
-
+++ /dev/null
-.. _packetcable-dev-guide:
-
-PacketCable Developer Guide
-===========================
-
-PCMM Specification
-------------------
-
-`PacketCable™ Multimedia
-Specification <http://www.cablelabs.com/specification/packetcable-multimedia-specification>`__
-
-System Overview
----------------
-
-These components introduce DOCSIS QoS Service Flow management using the
-PCMM protocol. The driver component is responsible for the
-PCMM/COPS/PDP functionality required to service requests from the
-PacketCable Provider and FlowManager. Requests are transposed into PCMM
-Gate Control messages and transmitted via COPS to the CCAP/CMTS. This
-plugin adheres to the PCMM/COPS/PDP functionality defined in the
-CableLabs specification. The PacketCable solution is an MD-SAL
-compliant component.
-
-PacketCable Components
-----------------------
-
-The packetcable Maven project comprises several modules.
-
-+--------------------------------------+--------------------------------------+
-| Bundle | Description |
-+======================================+======================================+
-| packetcable-driver                   | A common module that contains the    |
-| | COPS stack and manages all |
-| | connections to CCAPS/CMTSes. |
-+--------------------------------------+--------------------------------------+
-| packetcable-emulator | A basic CCAP emulator to facilitate |
-| | testing the plugin when no physical |
-|                                      | CCAP is available.                   |
-+--------------------------------------+--------------------------------------+
-| packetcable-policy-karaf | Generates a Karaf distribution with |
-| | a config that loads all the |
-| | packetcable features at runtime. |
-+--------------------------------------+--------------------------------------+
-| packetcable-policy-model | Contains the YANG information model. |
-+--------------------------------------+--------------------------------------+
-| packetcable-policy-server | Provider hosts the model processing, |
-| | RESTCONF, and API implementation. |
-+--------------------------------------+--------------------------------------+
-
-Setting Logging Levels
-~~~~~~~~~~~~~~~~~~~~~~
-
-From the Karaf console:
-
-::
-
- log:set <LEVEL> (<PACKAGE>|<BUNDLE>)
- Example
- log:set DEBUG org.opendaylight.packetcable.packetcable-policy-server
-
-Tools for Testing
------------------
-
-Postman REST client for Chrome
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-`Install the Chrome
-extension <https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en>`__
-
-`Download and import sample packetcable
-collection <https://git.opendaylight.org/gerrit/gitweb?p=packetcable.git;a=tree;f=packetcable-policy-server/doc/restconf-samples>`__
-
-View Rest API
-~~~~~~~~~~~~~
-
-1. Install the ``odl-mdsal-apidocs`` feature from the karaf console.
-
-2. Open http://localhost:8181/apidoc/explorer/index.html default dev
- build user/pass is admin/admin
-
-3. Navigate to the PacketCable section.
-
-Yang-IDE
-~~~~~~~~
-
-Editing yang can be done in any text editor but Yang-IDE will help
-prevent mistakes.
-
-`Setup and Build Yang-IDE for
-Eclipse <https://github.com/xored/yang-ide/wiki/Setup-and-build>`__
-
-Using Wireshark to Trace PCMM
------------------------------
-
-1. To start Wireshark with privileges, issue the following command:
-
- ::
-
- sudo wireshark &
-
-2. Select the interface to monitor.
-
-3. To display only COPS messages, apply “cops” in the filter field.
-
- .. figure:: ./images/packetcable-developer-wireshark.png
-
- Wireshark looking for COPS messages.
-
-Debugging and Verifying DQoS Gate (Flows) on the CCAP/CMTS
-----------------------------------------------------------
-
-Below are some of the most useful CCAP/CMTS commands to verify flows
-have been enabled on the CMTS.
-
-Cisco
-~~~~~
-
-`Cisco CMTS Cable Command
-Reference <http://www.cisco.com/c/en/us/td/docs/cable/cmts/cmd_ref/b_cmts_cable_cmd_ref.pdf>`__
-
-Find the Cable Modem
-~~~~~~~~~~~~~~~~~~~~
-
-::
-
- 10k2-DSG#show cable modem
- D
- MAC Address IP Address I/F MAC Prim RxPwr Timing Num I
- State Sid (dBmv) Offset CPE P
- 0010.188a.faf6 0.0.0.0 C8/0/0/U0 offline 1 0.00 1482 0 N
- 74ae.7600.01f3 10.32.115.150 C8/0/10/U0 online 1 -0.50 1431 0 Y
- 0010.188a.fad8 10.32.115.142 C8/0/10/UB w-online 2 -0.50 1507 1 Y
- 000e.0900.00dd 10.32.115.143 C8/0/10/UB w-online 3 1.00 1677 0 Y
- e86d.5271.304f 10.32.115.168 C8/0/10/UB w-online 6 -0.50 1419 1 Y
-
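When scripting against the CMTS, output like the above can be parsed. A Python sketch (assuming the whitespace-separated, 9-column layout shown; the parser is illustrative, not a supported tool):

```python
def parse_cable_modems(output):
    """Parse 'show cable modem' data rows (sketch; assumes the 9-column
    layout shown above and skips the two header lines)."""
    modems = []
    for line in output.strip().splitlines()[2:]:   # skip the header rows
        fields = line.split()
        if len(fields) != 9:
            continue
        modems.append({"mac": fields[0], "ip": fields[1],
                       "interface": fields[2], "state": fields[3]})
    return modems

# Trimmed sample of the output above.
sample = """\
MAC Address    IP Address      I/F        MAC      Prim RxPwr  Timing Num I
                                          State    Sid  (dBmv) Offset CPE P
0010.188a.faf6 0.0.0.0         C8/0/0/U0  offline  1    0.00   1482   0 N
74ae.7600.01f3 10.32.115.150   C8/0/10/U0 online   1    -0.50  1431   0 Y
"""
modems = parse_cable_modems(sample)
```
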
-Show PCMM Plugin Connection
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- 10k2-DSG#show packetcabl ?
- cms Gate Controllers connected to this PacketCable client
- event Event message server information
- gate PacketCable gate information
- global PacketCable global information
-
- 10k2-DSG#show packetcable cms
- GC-Addr GC-Port Client-Addr COPS-handle Version PSID Key PDD-Cfg
-
-
- 10k2-DSG#show packetcable cms
- GC-Addr GC-Port Client-Addr COPS-handle Version PSID Key PDD-Cfg
- 10.32.0.240 54238 10.32.15.3 0x4B9C8150/1 4.0 0 0 0
-
-Show COPS Messages
-~~~~~~~~~~~~~~~~~~
-
-::
-
- debug cops details
-
-Use CM Mac Address to List Service Flows
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- 10k2-DSG#show cable modem
- D
- MAC Address IP Address I/F MAC Prim RxPwr Timing Num I
- State Sid (dBmv) Offset CPE P
- 0010.188a.faf6 --- C8/0/0/UB w-online 1 0.50 1480 1 N
- 74ae.7600.01f3 10.32.115.150 C8/0/10/U0 online 1 -0.50 1431 0 Y
- 0010.188a.fad8 10.32.115.142 C8/0/10/UB w-online 2 -0.50 1507 1 Y
- 000e.0900.00dd 10.32.115.143 C8/0/10/UB w-online 3 0.00 1677 0 Y
- e86d.5271.304f 10.32.115.168 C8/0/10/UB w-online 6 -0.50 1419 1 Y
-
-
- 10k2-DSG#show cable modem 000e.0900.00dd service-flow
-
-
- SUMMARY:
- MAC Address IP Address Host MAC Prim Num Primary DS
- Interface State Sid CPE Downstream RfId
- 000e.0900.00dd 10.32.115.143 C8/0/10/UB w-online 3 0 Mo8/0/2:1 2353
-
-
- Sfid Dir Curr Sid Sched Prio MaxSusRate MaxBrst MinRsvRate Throughput
- State Type
- 23 US act 3 BE 0 0 3044 0 39
- 30 US act 16 BE 0 500000 3044 0 0
- 24 DS act N/A N/A 0 0 3044 0 17
-
-
-
- UPSTREAM SERVICE FLOW DETAIL:
-
- SFID SID Requests Polls Grants Delayed Dropped Packets
- Grants Grants
- 23 3 784 0 784 0 0 784
- 30 16 0 0 0 0 0 0
-
-
- DOWNSTREAM SERVICE FLOW DETAIL:
-
- SFID RP_SFID QID Flg Policer Scheduler FrwdIF
- Xmits Drops Xmits Drops
- 24 33019 131550 0 0 777 0 Wi8/0/2:2
-
- Flags Legend:
- $: Low Latency Queue (aggregated)
- ~: CIR Queue
-
-Deleting a PCMM Gate Message from the CMTS
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- 10k2-DSG#test cable dsd 000e.0900.00dd 30
-
-Find service flows
-~~~~~~~~~~~~~~~~~~
-
-All gate controllers currently connected to the PacketCable client are
-displayed.
-
-::
-
- show cable modem 00:11:22:33:44:55 service flow ????
- show cable modem
-
-Debug and display PCMM Gate messages
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-::
-
- debug packetcable gate control
- debug packetcable gate events
- show packetcable gate summary
- show packetcable global
- show packetcable cms
-
-Debug COPS messages
-~~~~~~~~~~~~~~~~~~~
-
-::
-
- debug cops detail
- debug packetcable cops
- debug cable dynamic_qos trace
-
-Integration Verification
-------------------------
-
-Check out the integration project and perform regression tests.
-
-::
-
- git clone ssh://${ODL_USERNAME}@git.opendaylight.org:29418/integration.git
- git clone https:/git.opendaylight.org/gerrit/integration.git
-
-1. Check and edit the
- integration/features/src/main/resources/features.xml and follow the
- directions there.
-
-2. Check and edit the integration/features/pom.xml and add a dependency
-   for your feature file.
-
-3. Build integration/features and debug:
-
-   ::
-
-       mvn clean install
-
-Test your feature in the integration/distributions/extra/karaf/
-distribution
-
-::
-
- cd integration/distributions/extra/karaf/
- mvn clean install
- cd target/assembly/bin
- ./karaf
-
-service-wrapper
-~~~~~~~~~~~~~~~
-
-Install http://karaf.apache.org/manual/latest/users-guide/wrapper.html
-
-::
-
- opendaylight-user@root>feature:install service-wrapper
- opendaylight-user@root>wrapper:install --help
- DESCRIPTION
- wrapper:install
-
- Install the container as a system service in the OS.
-
- SYNTAX
- wrapper:install [options]
-
- OPTIONS
- -d, --display
- The display name of the service.
- (defaults to karaf)
- --help
- Display this help message
- -s, --start-type
- Mode in which the service is installed. AUTO_START or DEMAND_START (Default: AUTO_START)
- (defaults to AUTO_START)
- -n, --name
- The service name that will be used when installing the service. (Default: karaf)
- (defaults to karaf)
- -D, --description
- The description of the service.
- (defaults to )
-
- opendaylight-user@root> wrapper:install
- Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/bin/karaf-wrapper
- Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/bin/karaf-service
- Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/etc/karaf-wrapper.conf
- Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/lib/libwrapper.so
- Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/lib/karaf-wrapper.jar
- Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/lib/karaf-wrapper-main.jar
-
- Setup complete. You may wish to tweak the JVM properties in the wrapper configuration file:
- /home/user/odl/distribution-karaf-0.5.0-Boron/etc/karaf-wrapper.conf
- before installing and starting the service.
-
-
- Ubuntu/Debian Linux system detected:
- To install the service:
- $ ln -s /home/user/odl/distribution-karaf-0.5.0-Boron/bin/karaf-service /etc/init.d/
-
- To start the service when the machine is rebooted:
- $ update-rc.d karaf-service defaults
-
- To disable starting the service when the machine is rebooted:
- $ update-rc.d -f karaf-service remove
-
- To start the service:
- $ /etc/init.d/karaf-service start
-
- To stop the service:
- $ /etc/init.d/karaf-service stop
-
- To uninstall the service :
- $ rm /etc/init.d/karaf-service
+++ /dev/null
-.. _topoprocessing-dev-guide:
-
-Topology Processing Framework Developer Guide
-=============================================
-
-Overview
---------
-
-The Topology Processing Framework allows developers to aggregate and
-filter topologies according to defined correlations. It also provides
-functionality which you can use to build your own topology model by
-automating the translation from one model to another, for example
-translating from the opendaylight-inventory model to the
-network-topology model.
-
-Architecture
-------------
-
-Chapter Overview
-~~~~~~~~~~~~~~~~
-
-In this chapter we describe the architecture of the Topology Processing
-Framework. In the first part, we provide information about available
-features and basic class relationships. In the second part, we describe
-our model specific approach, which is used to provide support for
-different models.
-
-Basic Architecture
-~~~~~~~~~~~~~~~~~~
-
-The Topology Processing Framework consists of several Karaf features:
-
-- odl-topoprocessing-framework
-
-- odl-topoprocessing-inventory
-
-- odl-topoprocessing-network-topology
-
-- odl-topoprocessing-i2rs
-
-- odl-topoprocessing-inventory-rendering
-
-The feature odl-topoprocessing-framework contains the
-topoprocessing-api, topoprocessing-spi and topoprocessing-impl bundles.
-This feature is the core of the Topology Processing Framework and is
-required by all other features.
-
-- topoprocessing-api - contains correlation definitions and definitions
- required for rendering
-
-- topoprocessing-spi - entry point for topoprocessing service (start
- and close)
-
-- topoprocessing-impl - contains base implementations of handlers,
- listeners, aggregators and filtrators
-
-TopoProcessingProvider is the entry point for Topology Processing
-Framework. It requires a DataBroker instance. The DataBroker is needed
-for listener registration. There is also the TopologyRequestListener
-which listens on aggregated topology requests (placed into the
-configuration datastore) and UnderlayTopologyListeners which listen on
-underlay topology data changes (made in operational datastore). The
-TopologyRequestHandler saves toporequest data and provides a method for
-translating a path to the specified leaf. When a change in the topology
-occurs, the registered UnderlayTopologyListener processes this
-information for further aggregation and/or filtration. Finally, after an
-overlay topology is created, it is passed to the TopologyWriter, which
-writes this topology into operational datastore.
-
-.. figure:: ./images/topoprocessing/TopologyRequestHandler_classesRelationship.png
- :alt: Class relationship
-
- Class relationship
-
-[1] TopologyRequestHandler instantiates TopologyWriter and
-TopologyManager. Then, according to the request, initializes either
-TopologyAggregator, TopologyFiltrator or LinkCalculator.
-
-[2] It creates as many instances of UnderlayTopologyListener as there
-are underlay topologies.
-
-[3] PhysicalNodes are created for relevant incoming nodes (those having
-node ID).
-
-[4a] It performs aggregation and creates logical nodes.
-
-[4b] It performs filtration and creates logical nodes.
-
-[4c] It performs link computation and creates links between logical
-nodes.
-
-[5] Logical nodes are put into wrapper.
-
-[6] The wrapper is translated into the appropriate format and written
-into datastore.
-
-Model Specific Approach
-~~~~~~~~~~~~~~~~~~~~~~~
-
-The Topology Processing Framework consists of several modules and Karaf
-features, which provide support for different input models. Currently we
-support the network-topology, opendaylight-inventory and i2rs models.
-For each of these input models, the Topology Processing Framework has
-one module and one Karaf feature.
-
-How it works
-^^^^^^^^^^^^
-
-**User point of view:**
-
-When you start the odl-topoprocessing-framework feature, the Topology
-Processing Framework starts without knowledge of how to work with any
-input model. To allow the Topology Processing Framework to process a
-particular input model, you must install one (or more) model specific
-features. Installing these features also starts the
-odl-topoprocessing-framework feature if it is not already running.
-These features inject the appropriate logic into the
-odl-topoprocessing-framework feature. From that point, the Topology
-Processing Framework is able to process the kinds of input models for
-which you installed features.
-
-**Developer point of view:**
-
-The topoprocessing-impl module contains (among other things) classes and
-interfaces, which are common for every model specific topoprocessing
-module. These classes and interfaces are implemented and extended by
-classes in particular model specific modules. Model specific modules
-also depend on the TopoProcessingProvider class in the
-topoprocessing-spi module. This dependency is injected during
-installation of model specific features in Karaf. When a model specific
-feature is started, it calls the registerAdapters(adapters) method of
-the injected TopoProcessingProvider object. After this step, the
-Topology Processing Framework is able to use registered model adapters
-to work with input models.
-
-To achieve the described functionality, we created a ModelAdapter
-interface. It represents an installed feature and provides methods for
-creating crucial structures specific to each model.
-
-.. figure:: ./images/topoprocessing/ModelAdapter.png
- :alt: ModelAdapter interface
-
- ModelAdapter interface
-
-Model Specific Features
-^^^^^^^^^^^^^^^^^^^^^^^
-
-- odl-topoprocessing-network-topology - this feature contains logic to
- work with network-topology model
-
-- odl-topoprocessing-inventory - this feature contains logic to work
- with opendaylight-inventory model
-
-- odl-topoprocessing-i2rs - this feature contains logic to work with
- i2rs model
-
-Inventory Model Support
-~~~~~~~~~~~~~~~~~~~~~~~
-
-The opendaylight-inventory model contains only nodes, termination
-points, and information regarding these structures. This model
-cooperates with the network-topology model, where other topology
-related information is stored. This means that we have to handle two
-input models at once. To support the inventory model, the
-InventoryListener and NotificationInterConnector classes were
-introduced. Please see the flow diagrams below.
-
-.. figure:: ./images/topoprocessing/Network_topology_model_flow_diagram.png
- :alt: Network topology model
-
- Network topology model
-
-.. figure:: ./images/topoprocessing/Inventory_model_listener_diagram.png
- :alt: Inventory model
-
- Inventory model
-
-Here we can see the InventoryListener and NotificationInterConnector
-classes. InventoryListener listens on data changes in the inventory
-model and passes these changes, wrapped as an UnderlayItem, to
-NotificationInterConnector for further processing. This item does not
-contain node information; it contains a leafNode (the node on which
-aggregation is based) instead. The node information is stored in the
-topology model, where UnderlayTopologyListener is registered as usual.
-This listener delivers the missing information.
-
-Then the NotificationInterConnector combines the two notifications into
-a complete UnderlayItem (no null values) and delivers this UnderlayItem
-for further processing (to next TopologyOperator).
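
A simplified sketch of this combining step (illustrative Python; the class and attribute names mirror the framework's concepts, not the actual Java API):

```python
class UnderlayItem:
    """Simplified stand-in for the framework's UnderlayItem."""
    def __init__(self, item=None, leaf_node=None, item_id=None):
        self.item = item            # node information (from network-topology)
        self.leaf_node = leaf_node  # aggregation key (from inventory)
        self.item_id = item_id


class NotificationInterConnector:
    """Buffers partial notifications until both halves of an item arrive."""
    def __init__(self, next_operator):
        self.next_operator = next_operator
        self.partial = {}  # item_id -> UnderlayItem that may have None fields

    def accept(self, incoming):
        stored = self.partial.setdefault(incoming.item_id,
                                         UnderlayItem(item_id=incoming.item_id))
        if incoming.item is not None:
            stored.item = incoming.item
        if incoming.leaf_node is not None:
            stored.leaf_node = incoming.leaf_node
        # deliver only complete items (no None values) downstream
        if stored.item is not None and stored.leaf_node is not None:
            self.next_operator(stored)
```

Only once both the inventory notification (carrying the leafNode) and the topology notification (carrying the node information) have arrived does the combined item move on to the next operator.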
-
-Aggregation and Filtration
---------------------------
-
-Chapter Overview
-~~~~~~~~~~~~~~~~
-
-The Topology Processing Framework allows the creation of aggregated
-topologies and filtered views over existing topologies. Currently,
-aggregation and filtration are supported for topologies that follow the
-`network-topology <https://github.com/opendaylight/yangtools/blob/master/model/ietf/ietf-topology/src/main/yang/network-topology%402013-10-21.yang>`__,
-opendaylight-inventory or i2rs models. When a request to create an
-aggregated or filtered topology is received, the framework creates one
-listener per underlay topology. Whenever any specified underlay
-topology changes, the appropriate listener is triggered and the change
-is processed. Two types of correlations (functionalities) are currently
-supported:
-
-- Aggregation
-
- - Unification
-
- - Equality
-
-- Filtration
-
-Terminology
-~~~~~~~~~~~
-
-We use the term underlay item (physical node) for items (nodes, links,
-termination points) from underlay topologies, and overlay item (logical
-node) for items from overlay topologies, regardless of whether those
-are actually physical network elements.
-
-Aggregation
-~~~~~~~~~~~
-
-Aggregation is an operation which creates an aggregated item from two or
-more items in the underlay topology if the aggregation condition is
-fulfilled. Requests for aggregated topologies must specify a list of
-underlay topologies over which the overlay (aggregated) topology will be
-created and a target field in the underlay item that the framework will
-check for equality.
-
-Create Overlay Node
-^^^^^^^^^^^^^^^^^^^
-
-First, each new underlay item is inserted into the proper topology
-store. Once the item is stored, the framework compares it (using the
-target field value) with all stored underlay items from underlay
-topologies. If there is a target-field match, a new overlay item is
-created containing pointers to all *equal* underlay items. The newly
-created overlay item is also given new references to its supporting
-underlay items.
-
-**Equality case:**
-
-If an item doesn’t fulfill the equality condition with any other item,
-processing finishes after adding the item into the topology store. It
-will stay there for future use, ready to form an aggregated item with a
-new underlay item with which it satisfies the equality condition.
-
-**Unification case:**
-
-An overlay item is created for every underlay item, even those which
-don’t fulfill the equality condition with any other item. Items which
-do satisfy the equality condition are combined into a single aggregated
-overlay item.
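
The difference between the two aggregation types can be illustrated with a short sketch (illustrative Python; `aggregate` and its parameters are hypothetical names, not the framework's API):

```python
from collections import defaultdict

def aggregate(underlay_items, target_field, mode="equality"):
    """Group underlay items by target-field value into overlay items.

    mode="equality":    overlay items only for groups of two or more items.
    mode="unification": every underlay item ends up in some overlay item.
    """
    groups = defaultdict(list)
    for item in underlay_items:
        groups[item[target_field]].append(item)
    overlay = []
    for members in groups.values():
        if mode == "equality" and len(members) < 2:
            continue  # item stays in the topology store, not in the overlay
        overlay.append(members)
    return overlay
```

With three nodes of which two share an IP target field, equality mode yields one overlay item (the aggregated pair), while unification mode also yields a singleton overlay item for the third node.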
-
-Update Node
-^^^^^^^^^^^
-
-Processing of updated underlay items depends on whether the target field
-has been modified. If yes, then:
-
-- if the underlay item belonged to some overlay item, it is removed
- from that item. Next, if the aggregation condition on the target
- field is satisfied, the item is inserted into another overlay item.
- If the condition isn’t met then:
-
- - in equality case - the item will not be present in overlay
- topology.
-
- - in unification case - the item will create an overlay item with a
- single underlay item and this will be written into overlay
- topology.
-
-- if the item didn’t belong to some overlay item, it is checked again
- for aggregation with other underlay items.
-
-Remove Node
-^^^^^^^^^^^
-
-The underlay item is removed from the corresponding topology store and
-from its overlay item (if it belongs to one); this way it is also
-removed from the overlay topology.
-
-**Equality case:**
-
-If there is only one underlay item left in the overlay item, the overlay
-item is removed.
-
-**Unification case:**
-
-The overlay item is removed once it refers to no underlay item.
-
-Filtration
-~~~~~~~~~~
-
-Filtration is an operation which results in the creation of an overlay
-topology containing only the items fulfilling the conditions set in the
-topoprocessing request.
-
-Create Underlay Item
-^^^^^^^^^^^^^^^^^^^^
-
-If a newly created underlay item passes all filtrators and their
-conditions, it is stored in the topology store and a creation
-notification is delivered to the topology manager. Otherwise, no
-operation is performed.
-
-Update Underlay Item
-^^^^^^^^^^^^^^^^^^^^
-
-First, the updated item is checked for presence in topology store:
-
-- if it is present in topology store:
-
- - if it meets the filtering conditions, then processUpdatedData
- notification is triggered
-
- - else processRemovedData notification is triggered
-
-- if item isn’t present in topology store
-
- - if item meets filtering conditions, then processCreatedData
- notification is triggered
-
- - else it is ignored
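
The decision tree above can be summarized in a small sketch (hypothetical Python illustration; the notification names match the text, but the function itself is not part of the framework):

```python
def process_updated_item(item, passes_filters, topology_store):
    """Decide which notification an updated underlay item triggers.

    Returns the notification name, mirroring the decision tree above,
    or None when the item is ignored.
    """
    present = item["id"] in topology_store
    if present:
        if passes_filters(item):
            return "processUpdatedData"
        # no longer passes the filters: drop it from the store
        del topology_store[item["id"]]
        return "processRemovedData"
    if passes_filters(item):
        topology_store[item["id"]] = item
        return "processCreatedData"
    return None  # not present and filtered out: ignored
```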
-
-Remove Underlay Item
-^^^^^^^^^^^^^^^^^^^^
-
-If the removed underlay node was supporting some overlay node, the
-overlay node is simply removed as well.
-
-Default Filtrator Types
-^^^^^^^^^^^^^^^^^^^^^^^
-
-There are seven types of default filtrators defined in the framework:
-
-- IPv4-address filtrator - checks if specified field meets IPv4 address
- + mask criteria
-
-- IPv6-address filtrator - checks if specified field meets IPv6 address
- + mask criteria
-
-- Specific number filtrator - checks for specific number
-
-- Specific string filtrator - checks for specific string
-
-- Range number filtrator - checks if the specified field is between
- the provided minimum and the provided maximum (both inclusive)
-
-- Range string filtrator - checks if the specified field is
- alphabetically between the provided minimum and the provided maximum
- (both inclusive)
-
-- Script filtrator - allows a user or application to implement their
- own filtrator
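
As an illustration, two of these filtrators could be expressed as simple predicates (a Python sketch of the stated semantics; the real filtrators are Java classes in the framework):

```python
import ipaddress

def ipv4_filtrator(prefix):
    """IPv4-address filtrator: does the field fall within address/mask?"""
    network = ipaddress.ip_network(prefix)
    return lambda value: ipaddress.ip_address(value) in network

def range_number_filtrator(minimum, maximum):
    """Range number filtrator: minimum <= value <= maximum (both inclusive)."""
    return lambda value: minimum <= value <= maximum
```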
-
-Register Custom Filtrator
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-There might be some use case that cannot be achieved with the default
-filtrators. In these cases, the framework offers the possibility for a
-user or application to register a custom filtrator.
-
-Pre-Filtration / Filtration & Aggregation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This feature was introduced in order to lower memory and processing
-demands. It is a combination of the filtration and aggregation
-operations. First, uninteresting items are filtered out, and then
-aggregation is performed only on the items that passed filtration. This
-way the framework saves compute time. The PreAggregationFiltrator and
-TopologyAggregator share the same TopoStoreProvider (and thus topology
-store), which results in lower memory demands (underlay items are
-stored in only one topology store - they aren’t stored twice).
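
A minimal sketch of the filter-then-aggregate pipeline (illustrative Python; the names are hypothetical, not the framework's API):

```python
def prefilter_and_aggregate(items, passes_filter, target_field):
    """Filter out uninteresting items first, then aggregate the survivors.

    Aggregation runs only over items that passed filtration, which is
    where the compute-time savings come from; the single shared store
    below mirrors the shared TopoStoreProvider.
    """
    shared_store = [item for item in items if passes_filter(item)]
    groups = {}
    for item in shared_store:
        groups.setdefault(item[target_field], []).append(item)
    return list(groups.values())
```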
-
-Link Computation
-----------------
-
-Chapter Overview
-~~~~~~~~~~~~~~~~
-
-While processing a topology request, we create overlay nodes with
-lists of supporting underlay nodes. Because these overlay nodes have
-completely new identifiers, we lose the link information. To regain
-this link information, we provide the Link Computation functionality.
-Its main purpose is to create new overlay links based on the links from
-the underlay topologies and the underlay items of the overlay items.
-The required information for Link Computation is provided via the Link
-Computation model in
-(`topology-link-computation.yang <https://git.opendaylight.org/gerrit/gitweb?p=topoprocessing.git;a=blob;f=topoprocessing-api/src/main/yang/topology-link-computation.yang;hb=refs/heads/stable/boron>`__).
-
-Link Computation Functionality
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Let us consider two topologies with following components:
-
-Topology 1:
-
-- Node: ``node:1:1``
-
-- Node: ``node:1:2``
-
-- Node: ``node:1:3``
-
-- Link: ``link:1:1`` (from ``node:1:1`` to ``node:1:2``)
-
-- Link: ``link:1:2`` (from ``node:1:3`` to ``node:1:2``)
-
-Topology 2:
-
-- Node: ``node:2:1``
-
-- Node: ``node:2:2``
-
-- Node: ``node:2:3``
-
-- Link: ``link:2:1`` (from ``node:2:1`` to ``node:2:3``)
-
-Now let’s say that we applied some operations over these topologies
-that result in aggregating together:
-
-- ``node:1:1`` and ``node:2:3`` (``node:1``)
-
-- ``node:1:2`` and ``node:2:2`` (``node:2``)
-
-- ``node:1:3`` and ``node:2:1`` (``node:3``)
-
-At this point we can no longer use the available links in the new
-topology because of the node ID changes, so we must create new overlay
-links with the source and destination nodes set to the new node IDs.
-This means that ``link:1:1`` from topology 1 will produce a new link,
-``link:1``. Since the original source (``node:1:1``) is aggregated
-under ``node:1``, that node becomes the source of ``link:1``. By the
-same method, the destination will be ``node:2``. The final output will
-be three links:
-
-- ``link:1``, from ``node:1`` to ``node:2``
-
-- ``link:2``, from ``node:3`` to ``node:2``
-
-- ``link:3``, from ``node:3`` to ``node:1``
-
-.. figure:: ./images/topoprocessing/LinkComputation.png
- :alt: Overlay topology with computed links
-
- Overlay topology with computed links
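
The computation in this example can be sketched as follows (illustrative Python; the function and its inputs are hypothetical, not the LinkCalculator API). Run on the two example topologies, it yields exactly the three links listed above:

```python
def compute_overlay_links(underlay_links, aggregation):
    """Rewrite underlay link endpoints to overlay node IDs.

    aggregation maps each underlay node ID to the overlay node it was
    aggregated into; links whose endpoints were both aggregated become
    overlay links (the rest would stay "waiting").
    """
    overlay_links = {}
    for link_id, (src, dst) in underlay_links.items():
        if src in aggregation and dst in aggregation:
            new_id = "link:%d" % (len(overlay_links) + 1)
            overlay_links[new_id] = (aggregation[src], aggregation[dst])
    return overlay_links
```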
-
-In-Depth Look
-~~~~~~~~~~~~~
-
-The main logic behind Link Computation is executed in the LinkCalculator
-operator. The required information is passed to LinkCalculator through
-the LinkComputation section of the topology request. This section is
-defined in the topology-link-computation.yang file. The main logic also
-covers cases when some underlay nodes may not pass through other
-topology operators.
-
-Link Computation Model
-^^^^^^^^^^^^^^^^^^^^^^
-
-There are three essential pieces of information for link computations.
-All of them are provided within the LinkComputation section. These
-pieces are:
-
-- output model
-
-.. code::
-
- leaf output-model {
- type identityref {
- base topo-corr:model;
- }
- description "Desired output model for computed links.";
- }
-
-- overlay topology with new nodes
-
-.. code::
-
- container node-info {
- leaf node-topology {
- type string;
- mandatory true;
- description "Topology that contains aggregated nodes.
- This topology will be used for storing computed links.";
- }
- uses topo-corr:input-model-grouping;
- }
-
-- underlay topologies with original links
-
-.. code::
-
- list link-info {
- key "link-topology input-model";
- leaf link-topology {
- type string;
- mandatory true;
- description "Topology that contains underlay (base) links.";
- }
- leaf aggregated-links {
- type boolean;
- description "Defines if link computation should be based on supporting-links.";
- }
- uses topo-corr:input-model-grouping;
- }
-
-This whole section is augmented into ``network-topology:topology``. By
-placing this section outside of the correlations section, we can send a
-link computation request separately from a topology operations request.
-
-Main Logic
-^^^^^^^^^^
-
-Taking into consideration that some of the underlay nodes may not
-transform into overlay nodes (e.g. they are filtered out), we created
-two possible states for links:
-
-- matched - a link is considered as matched when both original source
- and destination node were transformed to overlay nodes
-
-- waiting - a link is considered as waiting if original source,
- destination or both nodes are missing from the overlay topology
-
-All links in the waiting state are stored in the waitingLinks list,
-matched links are stored in the matchedLinks list, and overlay nodes
-are stored in the storedOverlayNodes list. All processing is based only
-on the information in these lists. Processing of created, updated and
-removed underlay items differs slightly for each case and is described
-separately in the next sections.
-
-**Processing Created Items**
-
-Created items can be either nodes or links, depending on the type of
-listener from which they came. In the case of a link, it is immediately
-added to waitingLinks and calculation for possible overlay link
-creations (calculatePossibleLink) is started. The flow diagram for this
-process is shown in the following picture:
-
-.. figure:: ./images/topoprocessing/LinkComputationFlowDiagram.png
- :alt: Flow diagram of processing created items
-
- Flow diagram of processing created items
-
-Searching for the source and destination nodes in the
-calculatePossibleLink method runs over each node in storedOverlayNodes,
-and the IDs of each supporting node are compared against the IDs from
-the underlay link’s source and destination nodes. If either node is
-missing, the link remains in the waiting state. If both the source and
-destination nodes are found, the corresponding overlay nodes are
-recorded as the new source and destination. The link is then removed
-from waitingLinks and a new CalculatedLink is added to the matched
-links. At the end, the new link (if it exists) is written into the
-datastore.
-
-If the created item is an overlay node, it is added to
-storedOverlayNodes and calculatePossibleLink is called for every link
-in waitingLinks.
-
-**Processing Updated Items**
-
-The difference from processing created items is that we have three
-possible types of updated items: overlay nodes, waiting underlay links,
-and matched underlay links.
-
-- In the case of a change in a matched link, this must be recalculated
- and based on the result it will either be matched with new source and
- destination or will be returned to waiting links. If the link is
- moved back to a waiting state, it must also be removed from the
- datastore.
-
-- In the case of change in a waiting link, it is passed to the
- calculation process and based on the result will either remain in
- waiting state or be promoted to the matched state.
-
-- In the case of a change in an overlay node, storedOverlayNodes must
- be updated properly and all links must be recalculated in case of
- changes.
-
-**Processing Removed items**
-
-As with processing updated items, there are three types of removed
-items:
-
-- In case of waiting link removal, the link is just removed from
- waitingLinks
-
-- In case of matched link removal, the link is removed from
- matchedLinks and the datastore
-
-- In case of overlay node removal, the node must be removed from
- storedOverlayNodes and all matched links must be recalculated
-
-Wrapper, RPC Republishing, Writing Mechanism
---------------------------------------------
-
-Chapter Overview
-~~~~~~~~~~~~~~~~
-
-During the process of aggregation and filtration, overlay items
-(so-called logical nodes) are created from underlay items (physical
-nodes). In the topology manager, overlay items are put into a wrapper.
-A wrapper is identified by a unique ID and contains a list of logical
-nodes. Wrappers are used to deal with transitivity of underlay items -
-which permits grouping of overlay items (into wrappers).
-
-.. figure:: ./images/topoprocessing/wrapper.png
- :alt: Wrapper
-
- Wrapper
-
-PN1, PN2, PN3 = physical nodes
-
-LN1, LN2 = logical nodes
-
-RPC Republishing
-~~~~~~~~~~~~~~~~
-
-All RPCs registered to handle underlay items are re-registered under
-their corresponding wrapper ID. The RPCs of the underlay items
-(belonging to an overlay item) are gathered and registered under the ID
-of their wrapper.
-
-RPC Call
-^^^^^^^^
-
-When an RPC is called on an overlay item, the call is delegated to its
-underlay items; that is, the RPC is called on all underlay items of
-that overlay item.
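
The delegation can be pictured with a tiny sketch (hypothetical Python illustration; the wrapper IDs and RPC shapes are invented for the example):

```python
class OverlayRpcDelegator:
    """Re-registered RPC: a call on the wrapper fans out to all underlay items."""
    def __init__(self, underlay_rpcs):
        self.underlay_rpcs = underlay_rpcs  # wrapper ID -> list of callables

    def invoke(self, wrapper_id, *args):
        # the RPC is called on every underlay item of the overlay item
        return [rpc(*args) for rpc in self.underlay_rpcs[wrapper_id]]
```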
-
-Writing Mechanism
-~~~~~~~~~~~~~~~~~
-
-When a wrapper (containing overlay items with their underlay items) is
-ready to be written into the datastore, it has to be converted into DOM
-format. After this translation is done, the result is written into the
-datastore. Physical nodes are stored as supporting-nodes. In order to
-use resources responsibly, the writing operation is divided into two
-steps. First, a set of threads registers prepared operations (deletes
-and puts), and then a single thread performs the actual write
-operations in a batch.
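
The two-step write can be sketched like this (illustrative Python; the framework's actual writer operates on DOM-format data in the MD-SAL datastore, so all names here are hypothetical):

```python
import threading

class BatchingWriter:
    """Two-step write: many threads register operations, one thread flushes."""
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = []   # prepared (operation, path, data) tuples
        self.datastore = {}

    def register_put(self, path, data):
        with self._lock:
            self._pending.append(("put", path, data))

    def register_delete(self, path):
        with self._lock:
            self._pending.append(("delete", path, None))

    def flush(self):
        """The single writer thread applies all prepared operations in one batch."""
        with self._lock:
            batch, self._pending = self._pending, []
        for op, path, data in batch:
            if op == "put":
                self.datastore[path] = data
            else:
                self.datastore.pop(path, None)
```

Nothing reaches the datastore until ``flush()`` runs, so registration stays cheap for the producer threads and the writes happen in one batch.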
-
-Topology Rendering Guide - Inventory Rendering
-----------------------------------------------
-
-Chapter Overview
-~~~~~~~~~~~~~~~~
-
-In the most recent OpenDaylight release, the opendaylight-inventory
-model is marked as deprecated. To facilitate migration from it to the
-network-topology model, there were requests to render (translate) data
-from inventory model (whether augmented or not) to another model for
-further processing. The Topology Processing Framework was extended to
-provide this functionality by implementing several rendering-specific
-classes. This chapter is a step-by-step guide on how to implement your
-own topology rendering using our inventory rendering as an example.
-
-Use case
-~~~~~~~~
-
-For the purpose of this guide we are going to render the following
-augmented fields from the OpenFlow model:
-
-- from inventory node:
-
- - manufacturer
-
- - hardware
-
- - software
-
- - serial-number
-
- - description
-
- - ip-address
-
-- from inventory node-connector:
-
- - name
-
- - hardware-address
-
- - current-speed
-
- - maximum-speed
-
-We also want to preserve the node ID and termination-point ID from the
-opendaylight-topology-inventory model, which is the network-topology
-part of the inventory model.
-
-Implementation
-~~~~~~~~~~~~~~
-
-There are two ways to implement support for your specific topology
-rendering:
-
-- add a module to your project that depends on the Topology Processing
- Framework
-
-- add a module to the Topology Processing Framework itself
-
-Regardless, a successful implementation must complete all of the
-following steps.
-
-Step1 - Target Model Creation
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Because the network-topology node does not have fields to store all of
-the desired data, it is necessary to create a new model into which this
-extra data can be rendered. For this guide we created the
-inventory-rendering model. The picture below shows how the data will be
-rendered and stored.
-
-.. figure:: ./images/topoprocessing/Inventory_Rendering_Use_case.png
- :alt: Rendering to the inventory-rendering model
-
- Rendering to the inventory-rendering model
-
-.. important::
-
- When implementing your version of the topology-rendering model in
- the Topology Processing Framework, the source file of the model
- (.yang) must be saved in the /topoprocessing-api/src/main/yang folder
- so that the corresponding structures can be generated during the
- build and can be accessed from every module through dependencies.
-
-Once the target model is created, you have to add an identifier
-through which you can set your new model as the output model. To do
-that, add another identity item to the topology-correlation.yang file.
-For our inventory-rendering model, the identity looks like this:
-
-.. code::
-
- identity inventory-rendering-model {
- description "inventory-rendering.yang";
- base model;
- }
-
-After that you will be able to set inventory-rendering-model as the
-output model in XML.
-
-Step2 - Module and Feature Creation
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. important::
-
- This and following steps are based on the `model specific
- approach <#_model_specific_approach>`__ in the Topology Processing
- Framework. We highly recommend that you familiarize yourself with
- this approach in advance.
-
-To create a base module and add it as a feature to Karaf in the
-Topology Processing Framework, we made the changes in the following
-`commit <https://git.opendaylight.org/gerrit/#/c/26223/>`__. Changes in
-other projects will likely be similar.
-
-+--------------------------------------+--------------------------------------+
-| File | Changes |
-+======================================+======================================+
-| pom.xml | add new module to topoprocessing |
-+--------------------------------------+--------------------------------------+
-| features.xml | add feature to topoprocessing |
-+--------------------------------------+--------------------------------------+
-| features/pom.xml | add dependencies needed by features |
-+--------------------------------------+--------------------------------------+
-| topoprocessing-artifacts/pom.xml | add artifact |
-+--------------------------------------+--------------------------------------+
-| topoprocessing-config/pom.xml | add configuration file |
-+--------------------------------------+--------------------------------------+
-| 81-topoprocessing-inventory-renderin | configuration file for new module |
-| g-config.xml | |
-+--------------------------------------+--------------------------------------+
-| topoprocessing-inventory-rendering/p | main pom for new module |
-| om.xml | |
-+--------------------------------------+--------------------------------------+
-| TopoProcessingProviderIR.java | contains startup method which |
-| | register new model adapter |
-+--------------------------------------+--------------------------------------+
-| TopoProcessingProviderIRModule.java | generated class which contains |
-| | createInstance method. You should |
-| | call your startup method from here. |
-+--------------------------------------+--------------------------------------+
-| TopoProcessingProviderIRModuleFactor | generated class. You will probably |
-| y.java | not need to edit this file |
-+--------------------------------------+--------------------------------------+
-| log4j.xml | configuration file for logger |
-| | topoprocessing-inventory-rendering-p |
-| | rovider-impl.yang |
-+--------------------------------------+--------------------------------------+
-
-Step3 - Module Adapters Creation
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-There are seven mandatory interfaces or abstract classes that need to
-be implemented in each module. They are:
-
-- TopoProcessingProvider - provides module registration
-
-- ModelAdapter - provides model specific instances
-
-- TopologyRequestListener - listens on changes in the configuration
- datastore
-
-- TopologyRequestHandler - processes configuration datastore changes
-
-- UnderlayTopologyListener - listens for changes in the specific model
-
-- LinkTranslator and NodeTranslator - used by OverlayItemTranslator to
- create NormalizedNodes from OverlayItems
-
-The naming convention we used was to prepend an abbreviation for the
-specific model to the name of the implementing class (e.g.
-IRModelAdapter refers to the class which implements ModelAdapter in the
-Inventory Rendering module). In the case of the provider class, we put
-the abbreviation at the end.
-
-.. important::
-
- - In the next sections, we use the terms TopologyRequestListener,
- TopologyRequestHandler, etc. without a prepended or appended
- abbreviation because the steps apply regardless of which specific
- model you are targeting.
-
- - If you want to implement rendering from inventory to
- network-topology, you can just copy-paste our module and
- additional changes will be required only in the output part.
-
-**Provider part**
-
-This part is the starting point of the whole module. It is responsible
-for creating and registering TopologyRequestListeners. It is necessary
-to create three classes which implement:
-
-- **TopoProcessingProviderModule** - is a generated class from
- topoprocessing-inventory-rendering-provider-impl.yang (created in
- previous step, file will appear after first build). Its method
- ``createInstance()`` is called at the feature start and must be
- modified to create an instance of TopoProcessingProvider and call its
- ``startup(TopoProcessingProvider topoProvider)`` function.
-
-- **TopoProcessingProvider** - in
- ``startup(TopoProcessingProvider topoProvider)`` function provides
- ModelAdapter registration to TopoProcessingProviderImpl.
-
-- **ModelAdapter** - provides creation of corresponding module specific
- classes.
-
-**Input part**
-
-This includes the creation of the classes responsible for input data
-processing. In this case, we had to create five classes implementing:
-
-- **TopologyRequestListener** and **TopologyRequestHandler** - when
- notified about a change in the configuration datastore, verify whether
- the change contains a topology request (has correlations in it) and
- create UnderlayTopologyListeners if needed. The implementation of
- these classes will differ according to the model in which the
- correlations are saved (network-topology or i2rs). If you are using
- network-topology as the input model, you can use our classes
- IRTopologyRequestListener and IRTopologyRequestHandler.
-
-- **UnderlayTopologyListener** - registers underlay listeners according
- to input model. In our case (listening in the inventory model), we
- created listeners for the network-topology model and inventory model,
- and set the NotificationInterConnector as the first operator and set
- the IRRenderingOperator as the second operator (after
- NotificationInterConnector). Same as for
- TopologyRequestListener/Handler, if you are rendering from the
- inventory model, you can use our class IRUnderlayTopologyListener.
-
-- **InventoryListener** - a new implementation of this class is
- required only for inventory input model. This is because the
- InventoryListener from topoprocessing-impl requires pathIdentifier
- which is absent in the case of rendering.
-
-- **TopologyOperator** - replaces classic topoprocessing operator.
- While the classic operator provides specific operations on topology,
- the rendering operator just wraps each received UnderlayItem to
- OverlayItem and sends them to write.
-
-.. important::
-
- For purposes of topology rendering from inventory to
- network-topology, there are misused fields in UnderlayItem as
- follows:
-
- - item - contains node from network-topology part of inventory
-
- - leafItem - contains node from inventory
-
- In case of implementing UnderlayTopologyListener or
- InventoryListener you have to carefully adjust UnderlayItem creation
- to these terms.
-
-**Output part**
-
-The output part of topology rendering is responsible for translating
-received overlay items to normalized nodes. In the case of inventory
-rendering, this is where node information from inventory are combined
-with node information from network-topology. This combined information
-is stored in our inventory-rendering model normalized node and passed to
-the writer.
-
-The output part consists of two translators implementing the
-NodeTranslator and LinkTranslator interfaces.
-
-**NodeTranslator implementation** - The NodeTranslator interface has
-one ``translate(OverlayItemWrapper wrapper)`` method. For our purposes,
-the important thing in the wrapper is the list of OverlayItems which
-have one or more common UnderlayItems. Although this is a list, in the
-case of rendering it will always contain only one OverlayItem. This
-item has a list of UnderlayItems but, again, in the case of rendering
-there will be only one UnderlayItem in this list. In NodeTranslator,
-the OverlayItem and corresponding UnderlayItem represent nodes from the
-translating model.
-
-The UnderlayItem has several attributes. How you use these attributes
-in your rendering is up to you, as you create this item in your
-topology operator. For example, as mentioned above, in our inventory
-rendering example the inventory node’s normalized node is stored in the
-UnderlayItem’s leafNode attribute, and we also store the node-id from
-the network-topology model in the UnderlayItem’s itemId attribute. You
-can now use these attributes to build a normalized node for your new
-model. How to read and create normalized nodes is out of the scope of
-this document.
-
-**LinkTranslator implementation** - The LinkTranslator interface also
-has one ``translate(OverlayItemWrapper wrapper)`` method. In our
-inventory rendering this method returns ``null``, because the inventory
-model doesn’t have links. But if you also need links, this is the place
-where you should translate them into normalized nodes for your model.
-In LinkTranslator, the OverlayItem and corresponding UnderlayItem
-represent links from the translating model. As in NodeTranslator, there
-will be only one OverlayItem and one UnderlayItem in the corresponding
-lists.
-
-Testing
-~~~~~~~
-
-If you want to test topoprocessing with some manually created underlay
-topologies (as in this guide), then you have to tell the Topology
-Processing Framework to listen for underlay topologies on the
-Configuration datastore instead of the Operational one.
-
-| You can do this in this config file
-| ``<topoprocessing_directory>/topoprocessing-config/src/main/resources/80-topoprocessing-config.xml``.
-| Here you have to change
-| ``<datastore-type>OPERATIONAL</datastore-type>``
-| to
-| ``<datastore-type>CONFIGURATION</datastore-type>``.
-
-
-You also have to add the dependency required to test "inventory" topologies.
-
-| In ``<topoprocessing_directory>/features/pom.xml``
-| add ``<openflowplugin.version>latest_snapshot</openflowplugin.version>``
- to properties section
-| and add this dependency to dependencies section
-
-.. code:: xml
-
- <dependency>
- <groupId>org.opendaylight.openflowplugin</groupId>
- <artifactId>features-openflowplugin</artifactId>
- <version>${openflowplugin.version}</version>
- <classifier>features</classifier><type>xml</type>
- </dependency>
-
-Replace ``latest_snapshot`` in ``<openflowplugin.version>`` with the latest snapshot version, which can be found `here <https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/openflowplugin/openflowplugin/>`__.
-
-| And in ``<topoprocessing_directory>/features/src/main/resources/features.xml``
-| add ``<repository>mvn:org.opendaylight.openflowplugin/features-openflowplugin/${openflowplugin.version}/xml/features</repository>``
- to repositories section.
-
-Now, after you rebuild the project and start Karaf, you can install the necessary features.
-
-| You can install all with one command:
-| ``feature:install odl-restconf-noauth odl-topoprocessing-inventory-rendering odl-openflowplugin-southbound odl-openflowplugin-nsf-model``
-
-Now you can send messages over REST from any REST client (e.g. Postman
-in Chrome). The messages must have the following headers:
-
-+--------------------------------------+--------------------------------------+
-| Header | Value |
-+======================================+======================================+
-| Content-Type: | application/xml |
-+--------------------------------------+--------------------------------------+
-| Accept: | application/xml |
-+--------------------------------------+--------------------------------------+
-| username: | admin |
-+--------------------------------------+--------------------------------------+
-| password: | admin |
-+--------------------------------------+--------------------------------------+
-
-First, send a topology request to
-http://localhost:8181/restconf/config/network-topology:network-topology/topology/render:1
-using the PUT method. An example of a simple rendering request:
-
-.. code:: xml
-
- <topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
- <topology-id>render:1</topology-id>
- <correlations xmlns="urn:opendaylight:topology:correlation" >
- <output-model>inventory-rendering-model</output-model>
- <correlation>
- <correlation-id>1</correlation-id>
- <type>rendering-only</type>
- <correlation-item>node</correlation-item>
- <rendering>
- <underlay-topology>und-topo:1</underlay-topology>
- </rendering>
- </correlation>
- </correlations>
- </topology>
-
-This request says that we want to create a topology named render:1,
-that this topology should be stored in the inventory-rendering-model,
-and that it should be created from topology und-topo:1 by node
-rendering.
-
-Next we send the network-topology part of topology und-topo:1. So to
-the URL
-http://localhost:8181/restconf/config/network-topology:network-topology/topology/und-topo:1
-we PUT:
-
-.. code:: xml
-
- <topology xmlns="urn:TBD:params:xml:ns:yang:network-topology"
- xmlns:it="urn:opendaylight:model:topology:inventory"
- xmlns:i="urn:opendaylight:inventory">
- <topology-id>und-topo:1</topology-id>
- <node>
- <node-id>openflow:1</node-id>
- <it:inventory-node-ref>
- /i:nodes/i:node[i:id="openflow:1"]
- </it:inventory-node-ref>
- <termination-point>
- <tp-id>tp:1</tp-id>
- <it:inventory-node-connector-ref>
- /i:nodes/i:node[i:id="openflow:1"]/i:node-connector[i:id="openflow:1:1"]
- </it:inventory-node-connector-ref>
- </termination-point>
- </node>
- </topology>
-
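-The ``inventory-node-ref`` and ``inventory-node-connector-ref`` leaves are
-path-valued references that must point at the inventory node and connector
-sent in the next step. A quick sanity check of that linkage, sketched with
-ElementTree over a trimmed copy of the payload above (illustration only):

```python
import xml.etree.ElementTree as ET

PAYLOAD = """\
<topology xmlns="urn:TBD:params:xml:ns:yang:network-topology"
          xmlns:it="urn:opendaylight:model:topology:inventory"
          xmlns:i="urn:opendaylight:inventory">
 <topology-id>und-topo:1</topology-id>
 <node>
  <node-id>openflow:1</node-id>
  <it:inventory-node-ref>/i:nodes/i:node[i:id="openflow:1"]</it:inventory-node-ref>
  <termination-point>
   <tp-id>tp:1</tp-id>
   <it:inventory-node-connector-ref>/i:nodes/i:node[i:id="openflow:1"]/i:node-connector[i:id="openflow:1:1"]</it:inventory-node-connector-ref>
  </termination-point>
 </node>
</topology>"""

NS = {"nt": "urn:TBD:params:xml:ns:yang:network-topology",
      "it": "urn:opendaylight:model:topology:inventory"}

node = ET.fromstring(PAYLOAD).find("nt:node", NS)
node_id = node.findtext("nt:node-id", namespaces=NS)
ref = node.findtext("it:inventory-node-ref", namespaces=NS)
# The reference should name the same inventory node id as the topology node
print(node_id in ref)  # True
```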
-The last input is the inventory part of the topology. To the URL
-http://localhost:8181/restconf/config/opendaylight-inventory:nodes we
-PUT:
-
-.. code:: xml
-
- <nodes
- xmlns="urn:opendaylight:inventory">
- <node>
- <id>openflow:1</id>
- <node-connector>
- <id>openflow:1:1</id>
- <port-number
- xmlns="urn:opendaylight:flow:inventory">1
- </port-number>
- <current-speed
- xmlns="urn:opendaylight:flow:inventory">10000000
- </current-speed>
- <name
- xmlns="urn:opendaylight:flow:inventory">s1-eth1
- </name>
- <supported
- xmlns="urn:opendaylight:flow:inventory">
- </supported>
- <current-feature
- xmlns="urn:opendaylight:flow:inventory">copper ten-gb-fd
- </current-feature>
- <configuration
- xmlns="urn:opendaylight:flow:inventory">
- </configuration>
- <peer-features
- xmlns="urn:opendaylight:flow:inventory">
- </peer-features>
- <maximum-speed
- xmlns="urn:opendaylight:flow:inventory">0
- </maximum-speed>
- <advertised-features
- xmlns="urn:opendaylight:flow:inventory">
- </advertised-features>
- <hardware-address
- xmlns="urn:opendaylight:flow:inventory">0E:DC:8C:63:EC:D1
- </hardware-address>
- <state
- xmlns="urn:opendaylight:flow:inventory">
- <link-down>false</link-down>
- <blocked>false</blocked>
- <live>false</live>
- </state>
- <flow-capable-node-connector-statistics
- xmlns="urn:opendaylight:port:statistics">
- <receive-errors>0</receive-errors>
- <receive-frame-error>0</receive-frame-error>
- <receive-over-run-error>0</receive-over-run-error>
- <receive-crc-error>0</receive-crc-error>
- <bytes>
- <transmitted>595</transmitted>
- <received>378</received>
- </bytes>
- <receive-drops>0</receive-drops>
- <duration>
- <second>28</second>
- <nanosecond>410000000</nanosecond>
- </duration>
- <transmit-errors>0</transmit-errors>
- <collision-count>0</collision-count>
- <packets>
- <transmitted>7</transmitted>
- <received>5</received>
- </packets>
- <transmit-drops>0</transmit-drops>
- </flow-capable-node-connector-statistics>
- </node-connector>
- <node-connector>
- <id>openflow:1:LOCAL</id>
- <port-number
- xmlns="urn:opendaylight:flow:inventory">4294967294
- </port-number>
- <current-speed
- xmlns="urn:opendaylight:flow:inventory">0
- </current-speed>
- <name
- xmlns="urn:opendaylight:flow:inventory">s1
- </name>
- <supported
- xmlns="urn:opendaylight:flow:inventory">
- </supported>
- <current-feature
- xmlns="urn:opendaylight:flow:inventory">
- </current-feature>
- <configuration
- xmlns="urn:opendaylight:flow:inventory">
- </configuration>
- <peer-features
- xmlns="urn:opendaylight:flow:inventory">
- </peer-features>
- <maximum-speed
- xmlns="urn:opendaylight:flow:inventory">0
- </maximum-speed>
- <advertised-features
- xmlns="urn:opendaylight:flow:inventory">
- </advertised-features>
- <hardware-address
- xmlns="urn:opendaylight:flow:inventory">BA:63:87:0C:76:41
- </hardware-address>
- <state
- xmlns="urn:opendaylight:flow:inventory">
- <link-down>false</link-down>
- <blocked>false</blocked>
- <live>false</live>
- </state>
- <flow-capable-node-connector-statistics
- xmlns="urn:opendaylight:port:statistics">
- <receive-errors>0</receive-errors>
- <receive-frame-error>0</receive-frame-error>
- <receive-over-run-error>0</receive-over-run-error>
- <receive-crc-error>0</receive-crc-error>
- <bytes>
- <transmitted>576</transmitted>
- <received>468</received>
- </bytes>
- <receive-drops>0</receive-drops>
- <duration>
- <second>28</second>
- <nanosecond>426000000</nanosecond>
- </duration>
- <transmit-errors>0</transmit-errors>
- <collision-count>0</collision-count>
- <packets>
- <transmitted>6</transmitted>
- <received>6</received>
- </packets>
- <transmit-drops>0</transmit-drops>
- </flow-capable-node-connector-statistics>
- </node-connector>
- <serial-number
- xmlns="urn:opendaylight:flow:inventory">None
- </serial-number>
- <manufacturer
- xmlns="urn:opendaylight:flow:inventory">Nicira, Inc.
- </manufacturer>
- <hardware
- xmlns="urn:opendaylight:flow:inventory">Open vSwitch
- </hardware>
- <software
- xmlns="urn:opendaylight:flow:inventory">2.1.3
- </software>
- <description
- xmlns="urn:opendaylight:flow:inventory">None
- </description>
- <ip-address
- xmlns="urn:opendaylight:flow:inventory">10.20.30.40
- </ip-address>
- <meter-features
- xmlns="urn:opendaylight:meter:statistics">
- <max_bands>0</max_bands>
- <max_color>0</max_color>
- <max_meter>0</max_meter>
- </meter-features>
- <group-features
- xmlns="urn:opendaylight:group:statistics">
- <group-capabilities-supported
- xmlns:x="urn:opendaylight:group:types">x:chaining
- </group-capabilities-supported>
- <group-capabilities-supported
- xmlns:x="urn:opendaylight:group:types">x:select-weight
- </group-capabilities-supported>
- <group-capabilities-supported
- xmlns:x="urn:opendaylight:group:types">x:select-liveness
- </group-capabilities-supported>
- <max-groups>4294967040</max-groups>
- <actions>67082241</actions>
- <actions>0</actions>
- </group-features>
- </node>
- </nodes>
-
-After this, the expected result from a GET request to
-http://127.0.0.1:8181/restconf/operational/network-topology:network-topology
-is:
-
-.. code:: xml
-
- <network-topology
- xmlns="urn:TBD:params:xml:ns:yang:network-topology">
- <topology>
- <topology-id>render:1</topology-id>
- <node>
- <node-id>openflow:1</node-id>
- <node-augmentation
- xmlns="urn:opendaylight:topology:inventory:rendering">
- <ip-address>10.20.30.40</ip-address>
- <serial-number>None</serial-number>
- <manufacturer>Nicira, Inc.</manufacturer>
- <description>None</description>
- <hardware>Open vSwitch</hardware>
- <software>2.1.3</software>
- </node-augmentation>
- <termination-point>
- <tp-id>openflow:1:1</tp-id>
- <tp-augmentation
- xmlns="urn:opendaylight:topology:inventory:rendering">
- <hardware-address>0E:DC:8C:63:EC:D1</hardware-address>
- <current-speed>10000000</current-speed>
- <maximum-speed>0</maximum-speed>
- <name>s1-eth1</name>
- </tp-augmentation>
- </termination-point>
- <termination-point>
- <tp-id>openflow:1:LOCAL</tp-id>
- <tp-augmentation
- xmlns="urn:opendaylight:topology:inventory:rendering">
- <hardware-address>BA:63:87:0C:76:41</hardware-address>
- <current-speed>0</current-speed>
- <maximum-speed>0</maximum-speed>
- <name>s1</name>
- </tp-augmentation>
- </termination-point>
- </node>
- </topology>
- </network-topology>
-
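-A client can also verify the rendering programmatically by extracting the
-augmented values from the response. A sketch with ElementTree, using a copy
-of the expected result above trimmed to one termination point (illustration
-only; in practice the body would come from the GET request):

```python
import xml.etree.ElementTree as ET

RESPONSE = """\
<network-topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
 <topology>
  <topology-id>render:1</topology-id>
  <node>
   <node-id>openflow:1</node-id>
   <node-augmentation xmlns="urn:opendaylight:topology:inventory:rendering">
    <ip-address>10.20.30.40</ip-address>
    <manufacturer>Nicira, Inc.</manufacturer>
    <software>2.1.3</software>
   </node-augmentation>
   <termination-point>
    <tp-id>openflow:1:1</tp-id>
    <tp-augmentation xmlns="urn:opendaylight:topology:inventory:rendering">
     <hardware-address>0E:DC:8C:63:EC:D1</hardware-address>
     <name>s1-eth1</name>
    </tp-augmentation>
   </termination-point>
  </node>
 </topology>
</network-topology>"""

NS = {"nt": "urn:TBD:params:xml:ns:yang:network-topology",
      "ir": "urn:opendaylight:topology:inventory:rendering"}

node = ET.fromstring(RESPONSE).find("nt:topology/nt:node", NS)
# Values copied from inventory into the rendered overlay node
ip = node.findtext("ir:node-augmentation/ir:ip-address", namespaces=NS)
tp_name = node.findtext(
    "nt:termination-point/ir:tp-augmentation/ir:name", namespaces=NS)
print(ip, tp_name)  # 10.20.30.40 s1-eth1
```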
-Use Cases
----------
-
-You can find use case examples on `this wiki page
-<https://wiki.opendaylight.org/view/Topology_Processing_Framework:Developer_Guide:Use_Case_Tutorial>`__.
-
-Key APIs and Interfaces
------------------------
-
-The basic provider class is ``TopoProcessingProvider``, which provides
-startup and shutdown methods. Otherwise, the framework communicates via
-requests and outputs stored in the MD-SAL datastores.
-
-API Reference Documentation
----------------------------
-
-You can find API examples on `this wiki
-page <https://wiki.opendaylight.org/view/Topology_Processing_Framework:Developer_Guide:REST_API_Specification>`__.