Merge "Technical debt - fix Renderer sonar issues"
[transportpce.git] / docs / developer-guide.rst
index 54a762fe919d245707c6a0469a64581e89db6582..7ea61b67d9b2277d1333dfd8b237137dd00d6141 100644 (file)
@@ -28,14 +28,14 @@ equipment\_ and Optical Line Management (OLM) is associated with a generic block
 relying on open models, each of them communicating through published APIs.
 
 
-.. figure:: ./images/tpce_architecture.jpg
+.. figure:: ./images/TransportPCE-Diagramm-Magnesium.jpg
    :alt: TransportPCE architecture
 
    TransportPCE architecture
 
-The current version of transportPCE is dedicated to the control of WDM transport
-infrastructure. OTN layer will be integrated in a later step. The WDM layer is
-built from colorless ROADMs and transponders.
+Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
+of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
+and transponders.
 
 The interest of using a controller to automatically provision services strongly
 relies on its ability to handle end-to-end optical services that span through
@@ -43,10 +43,24 @@ the different network domains, potentially equipped with equipment coming from
 different suppliers. Thus, interoperability in the optical layer is a key
 element to get the benefit of automated control.
 
-Initial design of TransportPCE leverages Open ROADM Multi-Source-Agreement (MSA)
+Initial design of TransportPCE leverages OpenROADM Multi-Source-Agreement (MSA)
 which defines interoperability specifications, consisting of both Optical
 interoperability and Yang data models.
 
+End to end OTN services such as OCH-OTU4, structured ODU4 or 10GE-ODU2e
+services are supported since Magnesium SR2. OTN support will continue to be
+improved in the following releases of Magnesium and Aluminium.
+
+An experimental support of Flexgrid is introduced in Aluminium. Depending on
+the OpenROADM device model, optical interfaces can be created according to the
+initial fixed grid (for R1.2.1, 96 channels regularly spaced by 50 GHz) or to
+a flexgrid (for R2.2.1, a specific number of subsequent 6.25 GHz frequency
+slots, depending on the capabilities of the ROADMs and transponders on the one
+hand, and on the channel rate on the other hand). The full support of Flexgrid,
+including path computation and the creation of B100G (Beyond 100 Gbps) higher
+rate interfaces, will be added in the following releases of Aluminium.
+
+
 Module description
 ~~~~~~~~~~~~~~~~~~
 
@@ -65,12 +79,18 @@ Renderer and the OLM to delete connections and reset power levels associated wit
 service. The service-list is updated following a successful service deletion. Neon SR0 adds
 support for services from ROADM to ROADM, which brings additional flexibility and notably
 allows reserving resources when transponders are not in place at day one.
+Magnesium SR2 fully supports end-to-end OTN services which are part of the OTN infrastructure.
+This concerns the management of OCH-OTU4 (also part of the optical infrastructure) and structured
+HO-ODU4 services. Moreover, once these two kinds of OTN infrastructure services are created, it is
+possible to manage some LO-ODU services (for the time being, only 10GE-ODU2e services).
+The full support of OTN services, including 1GE-ODU0 or 100GE, will be introduced in the next
+releases (Magnesium/Aluminium).
 
 PCE
-^^^^^^^^^^^^^^
+^^^
 
 The Path Computation Element (PCE) is the component responsible for path
-calculation. An interface allows the Renderer or external components such as an
+calculation. An interface allows the Service Handler or external components such as an
 orchestrator to request a path computation and get a response from the PCE
 including the computed path(s) in case of success, or errors and indication of
 the reason for the failure in case the request cannot be satisfied. Additional
@@ -80,22 +100,44 @@ allows keeping PCE aligned with the latest changes in the topology. Information
 about current and planned services is available in the MD-SAL data store.
 
 Current implementation of PCE allows finding the shortest path, minimizing either the hop
-count (default) or the propagation delay. Wavelength is assigned considering a fixed grid of
-96 wavelengths. In Neon SR0, the PCE calculates the OSNR, on the base of incremental
-noise specifications provided in Open RAODM MSA. The support of unidirectional ports is
-also added. PCE handles the following constraints as hard constraints:
+count (default) or the propagation delay. The central wavelength is assigned considering a
+fixed grid of 96 wavelengths spaced by 50 GHz. The assignment of wavelengths according to a
+flexible grid considering 768 subsequent slots of 6.25 GHz (total spectrum of 4.8 THz), and
+their occupation by existing services, is planned for later releases.
+In Neon SR0, the PCE calculates the OSNR based on the incremental noise specifications
+provided in the Open ROADM MSA. The support of unidirectional ports is also added.
+
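As a quick cross-check of the figures above, the fixed grid and the flexible grid cover the same total spectrum. The following is a small illustrative sketch, not TransportPCE code:

```python
import math

# Spectrum arithmetic from the text above (illustrative only).
FIXED_CHANNELS = 96          # fixed grid: 96 channels
FIXED_SPACING_GHZ = 50.0     # spaced by 50 GHz
FLEX_SLOTS = 768             # flexible grid: 768 subsequent slots
SLOT_WIDTH_GHZ = 6.25        # of 6.25 GHz each

fixed_spectrum_ghz = FIXED_CHANNELS * FIXED_SPACING_GHZ   # 4800 GHz
flex_spectrum_ghz = FLEX_SLOTS * SLOT_WIDTH_GHZ           # 4800 GHz = 4.8 THz

def slots_for_channel(width_ghz):
    """Number of 6.25 GHz slots needed to carry a channel of the given width."""
    return math.ceil(width_ghz / SLOT_WIDTH_GHZ)
```

So both grids span 4.8 THz, and a 37.5 GHz channel, for instance, occupies 6 subsequent slots.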
+PCE handles the following constraints as hard constraints:
 
 -   **Node exclusion**
 -   **SRLG exclusion**
 -   **Maximum latency**
 
+In Magnesium SR0, the interconnection of the PCE with GNPy (Gaussian Noise Python), an
+open-source library developed in the scope of the Telecom Infra Project for building route
+planning and optimizing performance in optical mesh networks, is fully supported.
+
+If the OSNR calculated by the PCE is too close to the limit defined in OpenROADM
+specifications, the PCE forwards the topology and the pre-computed path, translated into
+routing constraints, to the external GNPy tool through a REST interface. GNPy calculates a
+set of Quality of Transmission metrics for this path using its own library which includes
+models for OpenROADM. The result is sent back to the PCE. If the path is validated, the PCE
+sends back a response to the service handler. If GNPy invalidates the path, the PCE sends a
+new request to GNPy, including only the constraints expressed in the path-computation-request
+initiated by the Service Handler. GNPy then tries to calculate a path based on these relaxed
+constraints. The result of the path computation is provided to the PCE, which translates the
+path according to the topology handled in transportPCE and forwards the results to the Service Handler.
+
+GNPy relies on SNR and takes into account linear and non-linear impairments
+to check feasibility. In the related tests, the GNPy module runs externally in a
+Docker container and communication with TransportPCE is ensured via HTTPS.
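The validation-then-relaxation logic described above can be sketched as follows. `validate_with_gnpy` and `compute_with_gnpy` are hypothetical stand-ins for the REST exchanges, not actual TransportPCE or GNPy APIs:

```python
# Hedged sketch of the PCE <-> GNPy exchange described in the text.
# The two callables stand in for REST calls to the external GNPy tool.
def check_path(path, path_osnr_db, osnr_limit_db, margin_db,
               validate_with_gnpy, compute_with_gnpy, relaxed_constraints):
    """Return the path to send back to the Service Handler, or None on failure."""
    if path_osnr_db - osnr_limit_db > margin_db:
        # Far enough from the OpenROADM limit: no GNPy check needed.
        return path
    if validate_with_gnpy(path):
        # GNPy confirms the Quality of Transmission of the pre-computed path.
        return path
    # GNPy invalidated the path: ask it to compute one itself, using only the
    # constraints of the original path-computation-request.
    return compute_with_gnpy(relaxed_constraints)
```

The key design point, as in the text, is that the second GNPy request drops the PCE's pre-computed route and keeps only the Service Handler's original constraints.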
 
 Topology Management
-^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^
 
 Topology management module builds the Topology according to the Network model
-defined in OpenROADM. The topology is aligned with I2RS model. It includes
-several network layers:
+defined in OpenROADM. The topology is aligned with the IETF I2RS RFC 8345 model.
+It includes several network layers:
 
 -  **CLLI layer corresponds to the locations that host equipment**
 -  **Network layer corresponds to a first level of disaggregation where we
@@ -103,9 +145,13 @@ several network layers:
 -  **Topology layer introduces a second level of disaggregation where ROADMs
    Add/Drop modules ("SRGs") are separated from the degrees which includes line
    amplifiers and WSS that switch wavelengths from one to another degree**
+-  **OTN layer introduced in Magnesium includes transponders as well as switch-ponders and
+   mux-ponders having the ability to switch OTN containers from client to line cards. The
+   Magnesium SR0 release includes creation of the switching pool (used to model cross-connect
+   matrices), tributary-ports and tributary-slots at the initial connection of NETCONF devices.
+   The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
+   pool occupancy when OTN services are created, is supported since Magnesium SR2.**
 
-OTN layer which includes OTN elements having or not the ability to switch OTN
-containers from client to line cards is not currently implemented.
 
 Renderer
 ^^^^^^^^
@@ -115,12 +161,16 @@ implementation-request /service delete rpc, sets/deletes the path corresponding
 service between A and Z ends. The path description provided by the service-handler to the
 renderer is based on abstracted resources (nodes, links and termination-points), as provided
 by the PCE module. The renderer converts this path-description in a path topology based on
-device resources (circuit-packs, ports,…). The conversion from abstracted resources to
-device resources is performed relying on the portmapping module which maintains the
-connections between these different resource types. In Neon (SR0), portmapping modules
-has been enriched to support both openroadm 1.2.1 and 2.2 device models. The full support
-of openroadm 2.2 device models (both in the topology management and the rendering
-function) is planned at a later step (ORD2.2 full support is targeted for Neon SR1).
+device resources (circuit-packs, ports,…).
+
+The conversion from abstracted resources to device resources is performed relying on the
+portmapping module which maintains the connections between these different resource types.
+The portmapping module also allows keeping the topology independent from the device releases.
+In Neon (SR0), the portmapping module has been enriched to support both openroadm 1.2.1 and 2.2.1
+device models. The full support of openroadm 2.2.1 device models (both in the topology management
+and the rendering function) has been added in Neon SR1. In Magnesium, portmapping is enriched with
+the supported-interface-capability, OTN supporting-interfaces, and switching-pools (reflecting
+cross-connection capabilities of OTN switch-ponders).
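The role of portmapping can be illustrated with a toy lookup table. The node, connection-point and circuit-pack names below are hypothetical, and the structure is simplified with respect to the actual datastore:

```python
# Illustrative sketch of the association maintained by the portmapping module:
# abstracted termination-points on one side, device resources on the other.
portmapping = {
    "ROADM-A1": {                      # node-id in the abstracted topology
        "DEG1-TTP-TXRX": {             # logical connection point used by PCE/topology
            "circuit-pack": "1/0",     # device-level resources used by the renderer
            "port": "L1",
        },
        "SRG1-PP1-TXRX": {
            "circuit-pack": "3/0",
            "port": "C1",
        },
    },
}

def to_device_resource(node_id, lcp):
    """Translate an abstracted termination-point into device resources."""
    entry = portmapping[node_id][lcp]
    return entry["circuit-pack"], entry["port"]
```

Because the renderer only ever resolves device resources through this indirection, the topology itself stays independent from the device release, as noted above.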
 
 After the path is provided, the renderer first checks which interfaces already exist on the
 ports of the different nodes that the path crosses. It then creates the missing interfaces. After all
 notifies the Service Handler on the status of the path creation. If the path cannot be fully
 created, a rollback function is called to set the equipment on the path back to their initial
 configuration (as they were before invoking the Renderer).
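The create-then-rollback behaviour can be sketched as follows (hypothetical helper functions, not the actual renderer code):

```python
# Sketch of the renderer's create-then-rollback pattern described above.
def render_path(path, existing, create_interface, delete_interface):
    """Create missing interfaces along the path; undo everything on failure."""
    created = []
    try:
        for node, interface in path:
            if (node, interface) in existing:
                continue                      # interface already present on the port
            create_interface(node, interface)
            created.append((node, interface))
    except Exception:
        # Rollback: restore the initial configuration of every touched device,
        # in reverse order of creation.
        for node, interface in reversed(created):
            delete_interface(node, interface)
        return False
    return True
```

Note that only the interfaces created during this invocation are deleted on rollback, so pre-existing configuration is left untouched.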
 
+Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
+and ODU0 interfaces. The creation of these low-order OTN interfaces must be triggered through the
+otn-service-path RPC. Magnesium SR2 fully supports end-to-end OTN service implementation into devices
+(service-implementation-request / service-delete RPCs, topology alignment after the service has been created).
+
+
 OLM
-^^^^^^^^
+^^^
 
 Optical Line Management module implements two main features: it is responsible
 for setting up the optical power levels on the different interfaces, and is in
@@ -151,6 +207,57 @@ calculating the right power settings, sending it to the device, and check the
 PM retrieved from the device to verify that the setting was correctly applied
 and the configuration was successfully completed.
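The set-then-verify loop described above can be sketched as follows; `compute_target_power`, `set_power` and `read_pm` are hypothetical helpers standing in for the OLM internals:

```python
# Hedged sketch of the OLM power-setting loop: compute the right setting,
# send it to the device, then check the retrieved PM to verify it was applied.
def adjust_power(interface, compute_target_power, set_power, read_pm,
                 tolerance_db=0.5, retries=3):
    """Apply the computed power setting and verify it through the device PM."""
    target = compute_target_power(interface)
    for _ in range(retries):
        set_power(interface, target)
        measured = read_pm(interface)
        if abs(measured - target) <= tolerance_db:
            return True      # setting correctly applied
    return False             # configuration could not be verified
```

The tolerance and retry count are illustrative assumptions; the point is the feedback loop between configuration and PM retrieval.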
 
+
+Inventory
+^^^^^^^^^
+
+The TransportPCE Inventory module is responsible for keeping track of connected devices in an external MariaDB database.
+Other databases may be used as long as they comply with SQL and are compatible with OpenDaylight (for example MySQL).
+At present, the module supports extracting and persisting the inventory of OpenROADM MSA version 1.2.1 devices.
+Inventory module changes to support newer device models (2.2.1, etc.) and other models (network, service, etc.)
+will be progressively included.
+
+The inventory module can be activated by the associated karaf feature (odl-transportpce-inventory).
+The database properties are supplied in the "opendaylight-release" and "opendaylight-snapshots" profiles.
+Below is the settings.xml with properties included in the distribution.
+The module can be rebuilt from sources with different parameters.
+
+Sample entry in settings.xml to declare an external inventory database:
+::
+
+    <profiles>
+      <profile>
+          <id>opendaylight-release</id>
+    [..]
+         <properties>
+                 <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
+                 <transportpce.db.database><<databasename>></transportpce.db.database>
+                 <transportpce.db.username><<username>></transportpce.db.username>
+                 <transportpce.db.password><<password>></transportpce.db.password>
+                 <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
+         </properties>
+      </profile>
+    [..]
+      <profile>
+          <id>opendaylight-snapshots</id>
+    [..]
+         <properties>
+                 <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
+                 <transportpce.db.database><<databasename>></transportpce.db.database>
+                 <transportpce.db.username><<username>></transportpce.db.username>
+                 <transportpce.db.password><<password>></transportpce.db.password>
+                 <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
+         </properties>
+      </profile>
+    </profiles>
+
+
+Once the project is built and karaf is started, the cfg file is generated in the etc folder with the
+corresponding properties supplied in settings.xml. When devices with the OpenROADM 1.2.1 device model
+are mounted, the device listener in the inventory module loads several device attributes into the
+various tables of the supplied database.
+The database structure details can be retrieved from the file tests/inventory/initdb.sql inside the project sources.
+Installation scripts and a Dockerfile are also provided.
+
 Key APIs and Interfaces
 -----------------------
 
@@ -203,7 +310,6 @@ Netconf Service
 
    -  node list : composed of netconf nodes in topology-netconf
 
-
 Internal APIs
 ~~~~~~~~~~~~~
 
@@ -245,14 +351,27 @@ Renderer Service
 
    - service-path-rpc-result : result of service RPC
 
+Device Renderer
+^^^^^^^^^^^^^^^
+
+-  RPC call
+
+   -  service-path : used in SR0 as an intermediate solution to directly address the renderer
+      from a REST NBI to create OCH-OTU4-ODU4 interfaces on the network port of OTN devices.
+
+   -  otn-service-path : used in SR0 as an intermediate solution to directly address the renderer
+      from a REST NBI for OTN service creation. OTN service creation through the
+      service-implementation-request call from the Service Handler will be supported in later
+      Magnesium releases.
+
 Topology Management Service
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 -  Data structure
 
   -  network list : composed of networks (openroadm-topology, netconf-topology)
-   -  node list : composed of node-id
-   -  link list : composed of link-id
+   -  node list : composed of nodes identified by their node-id
+   -  link list : composed of links identified by their link-id
    -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)
 
@@ -280,6 +399,25 @@ odl-transportpce-stubmodels
       renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.
 
+Interfaces to external software
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section defines the interfaces implemented to interconnect TransportPCE modules with other
+software in order to perform specific tasks.
+
+GNPy interface
+^^^^^^^^^^^^^^
+
+-  Request structure
+
+   -  topology : composed of list of elements and connections
+   -  service : source, destination, explicit-route-objects, path-constraints
+
+-  Response structure
+
+   -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth
+   -  path-properties/path-route-objects : composed of path elements
+
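The request and response structures listed above can be summarised as plain dictionaries; the field values are placeholders, and the exact schema is defined by GNPy, not by this sketch:

```python
# Skeleton of the GNPy request/response structure listed above.
gnpy_request = {
    "topology": {
        "elements": [],        # list of network elements
        "connections": [],     # links between elements
    },
    "service": {
        "source": "node-A",                # placeholder endpoints
        "destination": "node-Z",
        "explicit-route-objects": [],      # pre-computed path, when provided
        "path-constraints": {},
    },
}

# Metric names returned in path-properties/path-metric.
gnpy_response_metrics = [
    "OSNR-0.1nm", "OSNR-bandwidth", "SNR-0.1nm", "SNR-bandwidth",
]
```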
 
 Running transportPCE project
 ----------------------------
@@ -347,16 +485,13 @@ Use the following JSON RPC to check that function internally named *portMapping*
 
 .. note::
 
-    in Neon SR0, the support of openroadm 2.2 device model is added. Thus 2.2 nodes can be
-    discovered and added to the portmapping node list. However, full topology management
-    support (and notably link discovery) is not provided for 2.2 nodes. The support for link
-    discovery and full topology management with 1.2.1 and 2.2 nodes will be added in a next release.
-
-.. note::
-
-    In ``org-openroadm-device.yang``, two types of optical nodes can be managed:
+    In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
         * rdm: ROADM device (optical switch)
         * xpdr: Xponder device (device that converts client to optical channel interface)
+        * ila: in line amplifier (optical amplifier)
+        * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)
+
+    TransportPCE currently supports rdm and xpdr node types.
 
 Depending on the kind of OpenROADM device connected, different kinds of *Logical Connection Points*
 should appear, if the node configuration is not empty:
@@ -449,10 +584,49 @@ From rdm to xpdr:
       }
     }
 
+OTN topology
+~~~~~~~~~~~~
+
+Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
+or SWITCH type connected to two different rdm devices. To check that these xpdr are present in the
+OTN topology, use the following request on the REST API:
+
+**REST API** : *GET /restconf/config/ietf-network:network/otn-topology*
+
+An optical connectivity service shall have been created in a first step. Since Magnesium SR2, the OTN
+links are automatically populated in the topology after the Och, OTU4 and ODU4 interfaces have
+been created on the two network ports of the xpdr.
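For illustration, the GET request above can be built with Python's standard library. Host and credentials are placeholders (admin/admin is the OpenDaylight default, and 8181 its default RESTCONF port); the request is only constructed here, not sent:

```python
import base64
import urllib.request

# Build (but do not send) the otn-topology GET request shown above.
base_url = "http://localhost:8181/restconf/config/ietf-network:network/otn-topology"
credentials = base64.b64encode(b"admin:admin").decode("ascii")

request = urllib.request.Request(
    base_url,
    headers={
        "Authorization": "Basic " + credentials,
        "Accept": "application/json",
    },
)
# urllib.request.urlopen(request) would return the otn-topology document
# once a controller is actually running.
```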
 
 Creating a service
 ~~~~~~~~~~~~~~~~~~
 
+Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
+network. Two kinds of end-to-end "optical" services are managed by TransportPCE:
+
+- 100GE service from client port to client port of two transponders (TPDR)
+- Optical Channel (OC) service from client add/drop port (PP port of SRG) to client add/drop
+  port of two ROADMs.
+
+For these services, TransportPCE automatically invokes the *renderer* module to create all required
+interfaces and cross-connections on each device supporting the service.
+As an example, the creation of a 100GE service implies, among other things, the creation of OCH, OTU4
+and ODU4 interfaces on the Network port of TPDR devices.
+
+Since Magnesium SR2, the *service handler* module directly manages some end-to-end OTN
+connectivity services.
+Before creating a low-order OTN service (1GE or 10GE services terminating on the client port of a
+MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container exists and has previously
+been configured (that is, structured to support low-order OTN containers).
+Thus, OTN service creation implies three steps:
+
+1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
+2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
+3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
+
+The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.
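The three steps above can be summarised as an ordered list of service-create stubs; these are simplified, illustrative fragments with only a few discriminating fields, not complete payloads:

```python
# Ordered sketch of the three service-create calls described above.
otn_service_sequence = [
    {"step": 1, "service-format": "OTU", "rate": "OTU4"},       # OCH-OTU4 infrastructure
    {"step": 2, "service-format": "ODU", "rate": "ODU4"},       # HO-ODU4 infrastructure
    {"step": 3, "service-format": "Ethernet", "rate": "10GE"},  # 10GE client service
]

def next_step(completed):
    """Return the next service to create, enforcing the order above."""
    return otn_service_sequence[len(completed)] if len(completed) < 3 else None
```

The ordering matters: each step relies on the container created by the previous one, which is why the HO-ODU4 service cannot be requested before the OCH-OTU4 exists.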
+
+
+100GE service creation
+^^^^^^^^^^^^^^^^^^^^^^
+
 Use the following REST RPC to invoke *service handler* module in order to create a bidirectional
 end-to-end optical connectivity service between two xpdr over an optical network composed of rdm
 nodes.
@@ -557,9 +731,467 @@ on xpdr nodes.This RPC invokes the *PCE* module to compute a path over the *open
 then invokes *renderer* and *OLM* to implement the end-to-end path into the devices.
 
 
+OC service creation
+^^^^^^^^^^^^^^^^^^^
+
+Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
+end-to-end Optical Channel (OC) connectivity service between two add/drop ports (PP port of SRG
+node) over an optical network only composed of rdm nodes.
+
+**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+        "input": {
+            "sdnc-request-header": {
+                "request-id": "request-1",
+                "rpc-action": "service-create",
+                "request-system-id": "appname"
+            },
+            "service-name": "something",
+            "common-id": "commonId",
+            "connection-type": "roadm-line",
+            "service-a-end": {
+                "service-rate": "100",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "OC",
+                "clli": "<clli-name>",
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-client-port>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-number>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-client-port>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-number>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "service-z-end": {
+                "service-rate": "100",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "OC",
+                "clli": "<clli-name>",
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-client-port>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-number>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-client-port>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-number>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "due-date": "yyyy-mm-ddT00:00:01Z",
+            "operator-contact": "some-contact-info"
+        }
+    }
+
+As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
+*openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
+the devices.
+
+OTN OCH-OTU4 service creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Use the following REST RPC to invoke the *service handler* module in order to create over the
+optical infrastructure a bidirectional end-to-end OTU4 over an optical wavelength connectivity
+service between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service
+configures the optical network infrastructure composed of rdm nodes.
+
+**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+        "input": {
+            "sdnc-request-header": {
+                "request-id": "request-1",
+                "rpc-action": "service-create",
+                "request-system-id": "appname"
+            },
+            "service-name": "something",
+            "common-id": "commonId",
+            "connection-type": "infrastructure",
+            "service-a-end": {
+                "service-rate": "100",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "OTU",
+                "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
+                "clli": "<clli-name>",
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "service-z-end": {
+                "service-rate": "100",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "OTU",
+                "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
+                "clli": "<clli-name>",
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "due-date": "yyyy-mm-ddT00:00:01Z",
+            "operator-contact": "some-contact-info"
+        }
+    }
+
+As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
+*openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
+the devices.
+
+OTN HO-ODU4 service creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Use the following REST RPC to invoke the *service handler* module in order to create over the
+optical infrastructure a bidirectional end-to-end ODU4 OTN service over an OTU4, structured to
+support low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between
+two network ports of OTN Xponders (MUXPDR or SWITCH).
+
+**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+        "input": {
+            "sdnc-request-header": {
+                "request-id": "request-1",
+                "rpc-action": "service-create",
+                "request-system-id": "appname"
+            },
+            "service-name": "something",
+            "common-id": "commonId",
+            "connection-type": "infrastructure",
+            "service-a-end": {
+                "service-rate": "100",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "ODU",
+                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
+                "clli": "<clli-name>",
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "service-z-end": {
+                "service-rate": "100",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "ODU",
+                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
+                "clli": "<clli-name>",
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-network-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "due-date": "yyyy-mm-ddT00:00:01Z",
+            "operator-contact": "some-contact-info"
+        }
+    }
+
+As for the previous RPC, this one invokes the *PCE* module to compute a path over the
+*otn-topology* that must contain OTU4 links with valid bandwidth parameters, and then
+invokes the *renderer* and *OLM* modules to implement the end-to-end path into the devices.
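
The service-create RPCs above can also be scripted. The following sketch builds a minimal
service-create input and posts it over RESTCONF. The helper names are hypothetical, the
controller URL and admin/admin credentials are assumptions (OpenDaylight defaults), and the
port/lgx details shown in the full samples are omitted for brevity.

```python
import base64
import json
import urllib.request

# Assumption: RESTCONF exposed on the OpenDaylight default port with admin/admin.
BASE = "http://localhost:8181/restconf/operations"

def build_service_create(name, a_node, z_node, service_format, rate):
    """Build a minimal service-create input (port/lgx sub-trees omitted)."""
    def endpoint(node):
        return {
            "service-rate": rate,
            "node-id": node,
            "service-format": service_format,
            "clli": node,
            "optic-type": "gray",
        }
    return {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname",
            },
            "service-name": name,
            "connection-type": "service",
            "service-a-end": endpoint(a_node),
            "service-z-end": endpoint(z_node),
        }
    }

def post_rpc(path, payload, user="admin", password="admin"):
    """POST an RPC input to the RESTCONF operations resource."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + token},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

A call such as ``post_rpc("/org-openroadm-service:service-create",
build_service_create("something", "<xpdr-node-id>", "<xpdr-node-id>", "ODU", "100"))``
would then submit the request to the *service handler*.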
+
+OTN 10GE-ODU2e service creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Use the following REST RPC to invoke the *service handler* module in order to create a
+bidirectional end-to-end 10GE-ODU2e OTN service over an ODU4 in the OTN infrastructure.
+Such a service must be created between two client ports of OTN Xponders (MUXPDR or SWITCH)
+configured to support 10GE interfaces.
+
+**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+        "input": {
+            "sdnc-request-header": {
+                "request-id": "request-1",
+                "rpc-action": "service-create",
+                "request-system-id": "appname"
+            },
+            "service-name": "something",
+            "common-id": "commonId",
+            "connection-type": "service",
+            "service-a-end": {
+                "service-rate": "10",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "Ethernet",
+                "clli": "<clli-name>",
+                "subrate-eth-sla": {
+                    "subrate-eth-sla": {
+                        "committed-info-rate": "10000",
+                        "committed-burst-size": "64"
+                    }
+                },
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "service-z-end": {
+                "service-rate": "10",
+                "node-id": "<xpdr-node-id>",
+                "service-format": "Ethernet",
+                "clli": "<clli-name>",
+                "subrate-eth-sla": {
+                    "subrate-eth-sla": {
+                        "committed-info-rate": "10000",
+                        "committed-burst-size": "64"
+                    }
+                },
+                "tx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "rx-direction": {
+                    "port": {
+                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
+                        "port-type": "fixed",
+                        "port-name": "<xpdr-client-port-in-otn-topology>",
+                        "port-rack": "000000.00",
+                        "port-shelf": "Chassis#1"
+                    },
+                    "lgx": {
+                        "lgx-device-name": "Some lgx-device-name",
+                        "lgx-port-name": "Some lgx-port-name",
+                        "lgx-port-rack": "000000.00",
+                        "lgx-port-shelf": "00"
+                    }
+                },
+                "optic-type": "gray"
+            },
+            "due-date": "yyyy-mm-ddT00:00:01Z",
+            "operator-contact": "some-contact-info"
+        }
+    }
+
+As for the previous RPC, this one invokes the *PCE* module to compute a path over the
+*otn-topology* that must contain ODU4 links with valid bandwidth parameters, and then
+invokes the *renderer* and *OLM* modules to implement the end-to-end path into the devices.
+
+
+.. note::
+    Since Magnesium SR2, OCH-OTU4, ODU4 and 10GE-ODU2e services are all recorded in the
+    service-list of the service datastore.
+
+.. note::
+    trib-slot is used when the equipment supports contiguous trib-slot allocation (supported
+    since Magnesium SR0). The trib-slot provided corresponds to the first of the trib-slots
+    used. complex-trib-slots is used when the equipment does not support contiguous trib-slot
+    allocation; in that case, the list of the different trib-slots to be used shall be
+    provided. Support for non-contiguous trib-slot allocation is planned for a later
+    Magnesium release.
+
 Deleting a service
 ~~~~~~~~~~~~~~~~~~
 
+Deleting any kind of service
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
Use the following REST RPC to invoke the *service handler* module in order to delete a given
optical connectivity service.
 
@@ -578,7 +1210,7 @@ connectivity service.
                 "notification-url": "http://localhost:8585/NotificationServer/notify"
             },
             "service-delete-req-info": {
-                "service-name": "test1",
+                "service-name": "something",
                 "tail-retention": "no"
             }
         }
@@ -587,10 +1219,128 @@ connectivity service.
The most important parameter of this REST RPC is the *service-name*.
 
 
+.. note::
+    Deleting OTN services implies proceeding in the reverse order of their creation. Thus,
+    OTN service deletion must respect the three following steps:
+
+    1. first delete all 10GE services supported over any ODU4 to be deleted
+    2. then delete the ODU4 service
+    3. finally delete the OCH-OTU4 service supporting the just-deleted ODU4
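
The mandatory teardown order can be sketched as follows. The helper names are hypothetical;
the payload mirrors the service-delete sample earlier in this section.

```python
def build_service_delete(service_name):
    """service-delete input as in the sample above (tail-retention 'no')."""
    return {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete",
                "request-system-id": "appname",
                "notification-url": "http://localhost:8585/NotificationServer/notify",
            },
            "service-delete-req-info": {
                "service-name": service_name,
                "tail-retention": "no",
            },
        }
    }

def teardown_order(ten_ge_services, odu4_service, och_otu4_service):
    """Yield service-delete payloads in the mandatory reverse order:
    10GE clients first, then the ODU4, then the supporting OCH-OTU4."""
    for name in ten_ge_services:
        yield build_service_delete(name)
    yield build_service_delete(odu4_service)
    yield build_service_delete(och_otu4_service)
```

Each yielded payload would be POSTed to
*/restconf/operations/org-openroadm-service:service-delete* in turn, waiting for each
deletion to complete before sending the next.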
+
+Invoking PCE module
+~~~~~~~~~~~~~~~~~~~
+
+Use the following REST RPCs to invoke the *PCE* module in order to check the connectivity
+between xponder nodes and the availability of a supporting optical connectivity between
+their network-ports.
+
+Checking OTU4 service connectivity
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
+
+**Sample JSON Data**
+
+.. code:: json
+
+   {
+      "input": {
+           "service-name": "something",
+           "resource-reserve": "true",
+           "service-handler-header": {
+             "request-id": "request1"
+           },
+           "service-a-end": {
+             "service-rate": "100",
+             "clli": "<clli-node>",
+             "service-format": "OTU",
+             "node-id": "<otn-node-id>"
+           },
+           "service-z-end": {
+             "service-rate": "100",
+             "clli": "<clli-node>",
+             "service-format": "OTU",
+             "node-id": "<otn-node-id>"
+           },
+           "pce-metric": "hop-count"
+       }
+   }
+
+.. note::
+    Here, <otn-node-id> corresponds to the node-id as it appears in the *openroadm-network*
+    topology layer.
+
+Checking ODU4 service connectivity
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
+
+**Sample JSON Data**
+
+.. code:: json
+
+   {
+      "input": {
+           "service-name": "something",
+           "resource-reserve": "true",
+           "service-handler-header": {
+             "request-id": "request1"
+           },
+           "service-a-end": {
+             "service-rate": "100",
+             "clli": "<clli-node>",
+             "service-format": "ODU",
+             "node-id": "<otn-node-id>"
+           },
+           "service-z-end": {
+             "service-rate": "100",
+             "clli": "<clli-node>",
+             "service-format": "ODU",
+             "node-id": "<otn-node-id>"
+           },
+           "pce-metric": "hop-count"
+       }
+   }
+
+.. note::
+    Here, <otn-node-id> corresponds to the node-id as it appears in the *otn-topology* layer.
+
+Checking 10GE/ODU2e service connectivity
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
+
+**Sample JSON Data**
+
+.. code:: json
+
+   {
+      "input": {
+           "service-name": "something",
+           "resource-reserve": "true",
+           "service-handler-header": {
+             "request-id": "request1"
+           },
+           "service-a-end": {
+             "service-rate": "10",
+             "clli": "<clli-node>",
+             "service-format": "Ethernet",
+             "node-id": "<otn-node-id>"
+           },
+           "service-z-end": {
+             "service-rate": "10",
+             "clli": "<clli-node>",
+             "service-format": "Ethernet",
+             "node-id": "<otn-node-id>"
+             },
+           "pce-metric": "hop-count"
+       }
+   }
+
+.. note::
+    Here, <otn-node-id> corresponds to the node-id as it appears in the *otn-topology* layer.
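
The three path-computation checks above differ only in service-format, service-rate and the
topology layer the node-ids come from. A small sketch of a request builder covering all three
cases (helper and constant names are assumptions for illustration):

```python
# (service-format, service-rate) pairs taken from the three samples above.
# OTU4 node-ids come from the openroadm-network layer; the other two
# checks use node-ids from the otn-topology layer.
CHECKS = {
    "OTU4": ("OTU", "100"),
    "ODU4": ("ODU", "100"),
    "10GE": ("Ethernet", "10"),
}

def build_path_computation(name, a_node, z_node, check):
    """Build a transportpce-pce:path-computation-request input."""
    fmt, rate = CHECKS[check]
    def end(node):
        return {"service-rate": rate, "clli": node,
                "service-format": fmt, "node-id": node}
    return {
        "input": {
            "service-name": name,
            "resource-reserve": "true",
            "service-handler-header": {"request-id": "request1"},
            "service-a-end": end(a_node),
            "service-z-end": end(z_node),
            "pce-metric": "hop-count",
        }
    }
```

The resulting dictionary would be POSTed to
*/restconf/operations/transportpce-pce:path-computation-request* as in the samples above.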
+
+
 Help
 ----
 
--  `TransportPCE Wiki <https://wiki.opendaylight.org/view/TransportPCE:Main>`__
-
--  TransportPCE Mailing List
-   (`developer <https://lists.opendaylight.org/mailman/listinfo/transportpce-dev>`__)
+-  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__