-Service Handler handles request coming from a higher level controller or an
-orchestrator through the northbound API, as defined in the Open ROADM service
-model. Current implementation addresses the following rpcs: service-create,
-service–delete, service-reroute.
-It checks the request consistency and trigs path calculation sending rpcs to the
-PCE. If a valid path is returned by the PCE, path configuration is initiated
-relying on Renderer and OLM.
-At the confirmation of a successful service creation, the Service Handler
-updates the service-list in the MD-SAL.
-For service deletion, the Service Handler relies on the Renderer and the OLM to
-delete connections and reset power levels associated with the service.
-The service-list is updated following a successful service deletion.
-
+Service Handler handles requests coming from a higher-level controller or an orchestrator
+through the northbound API, as defined in the Open ROADM service model. The current
+implementation addresses the following RPCs: service-create, temp-service-create,
+service-delete, temp-service-delete, service-reroute, and service-restoration. It checks the
+request consistency and triggers path calculation by sending RPCs to the PCE. If a valid path
+is returned by the PCE, path configuration is initiated relying on the Renderer and OLM. At
+the confirmation of a successful service creation, the Service Handler updates the
+service-list/temp-service-list in the MD-SAL. For service deletion, the Service Handler
+relies on the Renderer and the OLM to delete connections and reset the power levels
+associated with the service. The service-list is updated following a successful service
+deletion. Neon SR0 adds support for services from ROADM to ROADM, which brings additional
+flexibility and notably allows reserving resources when transponders are not yet in place at
+day one. Full support of OTN services, including OTU, HO-ODU and LO-ODU, will be introduced
+in a later Magnesium release.
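+
The service-create flow described above (consistency check, path computation, rendering, service-list update) can be sketched as follows. Every class and method name here is a hypothetical illustration for clarity, not the actual TransportPCE code, which is implemented in Java against the MD-SAL.

```python
# Hedged sketch of the service-create orchestration described above.
# All names (pce, renderer_olm, compute_path, implement, ...) are
# invented for illustration; they are not the real TransportPCE APIs.

def service_create(request, pce, renderer_olm, service_list):
    """Check the request, ask the PCE for a path, render it, record the service."""
    if not request.get("service-name"):
        return "rejected: inconsistent request"
    path = pce.compute_path(request)          # path computation RPC to the PCE
    if path is None:
        return "rejected: no valid path"
    if not renderer_olm.implement(path):      # path configuration via Renderer and OLM
        return "failed: path configuration"
    service_list.append(request["service-name"])  # update the service-list in the MD-SAL
    return "success"
```

The same skeleton applies to service deletion, with the Renderer/OLM step deleting connections and resetting power levels before the service-list is updated.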
+In Magnesium SR0, the interconnection of the PCE with GNPy (Gaussian Noise Python), an
+open-source library developed within the Telecom Infra Project for route planning and
+performance optimization in optical mesh networks, is fully supported.
+
+If the OSNR calculated by the PCE is too close to the limit defined in the OpenROADM
+specifications, the PCE forwards the topology and the pre-computed path, translated into
+routing constraints, to the external GNPy tool through a REST interface. GNPy calculates a
+set of Quality of Transmission (QoT) metrics for this path using its own library, which
+includes models for OpenROADM. The result is sent back to the PCE. If the path is validated,
+the PCE sends a response back to the Service Handler. If GNPy invalidates the path, the PCE
+sends a new request to GNPy, including only the constraints expressed in the
+path-computation-request initiated by the Service Handler. GNPy then tries to calculate a
+path based on these relaxed constraints. The result of the path computation is provided to
+the PCE, which translates the path according to the topology handled in transportPCE and
+forwards the results to the Service Handler.
+
+GNPy relies on SNR and takes both linear and non-linear impairments into account to check
+feasibility. In the related tests, the GNPy module runs externally in a Docker container and
+the communication with T-PCE is carried over HTTPS.
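+
The two-pass PCE/GNPy interaction described above can be sketched as a small decision flow. Everything below is a hypothetical illustration: `StubGnpyClient`, its methods, and the margin value do not correspond to the real TransportPCE or GNPy REST APIs.

```python
# Hedged sketch of the PCE / GNPy validation flow described above.
# StubGnpyClient stands in for the external GNPy tool reached over REST;
# the margin and all names are invented for illustration only.

OSNR_MARGIN_DB = 1.0  # example margin to the OpenROADM OSNR limit (assumption)


class StubGnpyClient:
    """Stand-in for the external GNPy tool."""

    def __init__(self, feasible_paths):
        self.feasible_paths = feasible_paths

    def check_qot(self, path):
        # GNPy would compute SNR-based QoT metrics for the pre-computed path here.
        return path in self.feasible_paths

    def compute_path(self, constraints):
        # GNPy recomputes a path using only the relaxed constraints.
        return "gnpy-alternate-path"


def validate_path(pce_path, osnr_db, osnr_limit_db, gnpy, constraints):
    """Return the path to hand back to the Service Handler."""
    if osnr_db - osnr_limit_db >= OSNR_MARGIN_DB:
        return pce_path                  # comfortably above the limit: no GNPy call
    if gnpy.check_qot(pce_path):
        return pce_path                  # GNPy validated the pre-computed path
    # GNPy invalidated the path: ask it to compute one from the original
    # path-computation-request constraints only.
    return gnpy.compute_path(constraints)
```

In the real implementation the two GNPy calls are REST requests carrying the topology and routing constraints; the sketch only captures the fallback logic.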
-The Renderer module, on request coming from the Service Handler through a
-service-implementation-request /service delete rpc, sets/deletes the path
-corresponding to a specific service between A and Z ends.
-It first checks what are the existing interfaces on the ports of the different
-nodes that the path crosses. It then creates missing interfaces. After all
-needed interfaces have been created it sets the connections required in the
-nodes and notifies the Service Handler on the status of the path creation.
-Path is created in 2 steps (from A to Z and Z to A). In case the path between
-A and Z could not be fully created, a rollback function is called to set the
-equipment on the path back to their initial configuration (as they were before
-invoking the Renderer).
+The Renderer module, on request coming from the Service Handler through a
+service-implementation-request / service-delete RPC, sets up or deletes the path
+corresponding to a specific service between the A and Z ends. The path description provided
+by the Service Handler to the Renderer is based on abstracted resources (nodes, links and
+termination-points), as provided by the PCE module. The Renderer converts this path
+description into a path topology based on device resources (circuit-packs, ports, …).
+
+The conversion from abstracted resources to device resources relies on the portmapping
+module, which maintains the correspondence between these different resource types. The
+portmapping module also keeps the topology independent from the device releases. In Neon
+(SR0), the portmapping module was enriched to support both OpenROADM 1.2.1 and 2.2.1 device
+models. Full support of the OpenROADM 2.2.1 device models (both in the topology management
+and the rendering function) was added in Neon SR1. In Magnesium, portmapping is enriched
+with the supported-interface-capability, OTN supporting-interfaces, and switching-pools
+(reflecting the cross-connection capabilities of OTN switch-ponders).
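+
The portmapping idea can be illustrated with a minimal lookup from an abstracted termination-point to device-level resources. The mapping content below (node, termination-point, circuit-pack and port names) is invented for illustration and does not reproduce the real portmapping datastore.

```python
# Hedged sketch of the portmapping role described above: resolving an
# abstracted (node, termination-point) pair from the topology into
# device resources. All entries are invented examples.

PORTMAPPING = {
    ("ROADM-A1", "DEG1-TTP-TXRX"): {"circuit-pack": "1/0", "port": "L1"},
    ("XPDR-A1", "XPDR1-NETWORK1"): {"circuit-pack": "1/0/1-PLUG-NET", "port": "1"},
}


def to_device_resources(node_id, tp_id):
    """Resolve an abstracted termination-point to its device resources."""
    try:
        return PORTMAPPING[(node_id, tp_id)]
    except KeyError:
        raise KeyError(f"no mapping for {tp_id} on {node_id}") from None
```

Keeping this indirection in one module is what lets the topology stay identical whether the underlying device speaks the 1.2.1 or the 2.2.1 OpenROADM device model.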
+
+After the path is provided, the Renderer first checks which interfaces already exist on the
+ports of the different nodes that the path crosses, and then creates the missing interfaces.
+After all needed interfaces have been created, it sets up the connections required in the
+nodes and notifies the Service Handler of the status of the path creation. The path is
+created in two steps (from A to Z and from Z to A). If the path between A and Z cannot be
+fully created, a rollback function is called to set the equipment on the path back to its
+initial configuration (as it was before invoking the Renderer).
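+
The rollback behaviour described above follows a common configure-or-undo pattern, sketched here in generic form. The callables and node names are hypothetical; the real Renderer rolls back interface and connection creation on OpenROADM devices.

```python
# Hedged sketch of path setup with rollback, as described above:
# configure each node along the path in order, and on any failure
# restore the already-configured nodes to their initial state.

def setup_path(nodes, configure, rollback_one):
    """Configure each node in order; on failure, undo what was done."""
    done = []
    for node in nodes:
        try:
            configure(node)
            done.append(node)
        except Exception:
            # Roll back in reverse order so each node is restored to the
            # configuration it had before the Renderer was invoked.
            for prev in reversed(done):
                rollback_one(prev)
            return False
    return True
```

The same routine is run twice, once per direction (A to Z, then Z to A), which is why a failure mid-path can be undone cleanly.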
+
+Magnesium brings support for OTN services. SR0 supports the creation of OTU4, ODU4,
+ODU2/ODU2e and ODU1 interfaces. The creation of these interfaces must be triggered through
+the otn-service-path RPC. Full support (service-implementation-request / service-delete RPC,
+topology alignment after the service has been created) will be provided in later releases of
+Magnesium.
+
- "netconf-node-topology:tcp-only": "false",
- "netconf-node-topology:reconnect-on-changed-schema": "false",
- "netconf-node-topology:host": "<node-ip-address>",
- "netconf-node-topology:default-request-timeout-millis": "120000",
- "netconf-node-topology:max-connection-attempts": "0",
- "netconf-node-topology:sleep-factor": "1.5",
- "netconf-node-topology:actor-response-wait-time": "5",
- "netconf-node-topology:concurrent-rpc-limit": "0",
- "netconf-node-topology:between-attempts-timeout-millis": "2000",
- "netconf-node-topology:port": "<netconf-port>",
- "netconf-node-topology:connection-timeout-millis": "20000",
- "netconf-node-topology:username": "<node-username>",
- "netconf-node-topology:password": "<node-password>",
- "netconf-node-topology:keepalive-delay": "300"
+ "netconf-node-topology:tcp-only": "false",
+ "netconf-node-topology:reconnect-on-changed-schema": "false",
+ "netconf-node-topology:host": "<node-ip-address>",
+ "netconf-node-topology:default-request-timeout-millis": "120000",
+ "netconf-node-topology:max-connection-attempts": "0",
+ "netconf-node-topology:sleep-factor": "1.5",
+ "netconf-node-topology:actor-response-wait-time": "5",
+ "netconf-node-topology:concurrent-rpc-limit": "0",
+ "netconf-node-topology:between-attempts-timeout-millis": "2000",
+ "netconf-node-topology:port": "<netconf-port>",
+ "netconf-node-topology:connection-timeout-millis": "20000",
+ "netconf-node-topology:username": "<node-username>",
+ "netconf-node-topology:password": "<node-password>",
+ "netconf-node-topology:keepalive-delay": "300"
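+
The JSON body above is sent to the controller to mount a netconf device under the standard OpenDaylight topology-netconf mount point. The helper below only assembles the URL and a trimmed body (it performs no network call, and sending it with an HTTP PUT is left to the reader); the URL shape follows the usual RESTCONF convention of that OpenDaylight era, so treat it as an assumption to verify against your controller version.

```python
# Hedged helper that builds the RESTCONF URL and a minimal JSON body for
# mounting a netconf node, mirroring the payload shown above. It only
# constructs the request; no HTTP call is made. The mount-point URL is an
# assumption based on the standard OpenDaylight topology-netconf layout.

import json


def build_mount_request(controller, node_id, host, port, user, password):
    url = (f"http://{controller}:8181/restconf/config/"
           "network-topology:network-topology/topology/topology-netconf/"
           f"node/{node_id}")
    body = {
        "node": [{
            "node-id": node_id,
            "netconf-node-topology:host": host,
            "netconf-node-topology:port": port,
            "netconf-node-topology:username": user,
            "netconf-node-topology:password": password,
            "netconf-node-topology:tcp-only": "false",
            "netconf-node-topology:keepalive-delay": "300",
        }]
    }
    return url, json.dumps(body)
```

The remaining parameters of the full payload (timeouts, reconnection policy, and so on) would be added to the same `node` entry.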
- "input": {
- "sdnc-request-header": {
- "request-id": "request-1",
- "rpc-action": "service-create",
- "request-system-id": "appname"
- },
- "service-name": "test1",
- "common-id": "commonId",
- "connection-type": "service",
- "service-a-end": {
- "service-rate": "100",
- "node-id": "<xpdr-node-id>",
- "service-format": "Ethernet",
- "clli": "<ccli-name>",
- "tx-direction": {
- "port": {
- "port-device-name": "<xpdr-client-port>",
- "port-type": "fixed",
- "port-name": "<xpdr-client-port-number>",
- "port-rack": "000000.00",
- "port-shelf": "Chassis#1"
- },
- "lgx": {
- "lgx-device-name": "Some lgx-device-name",
- "lgx-port-name": "Some lgx-port-name",
- "lgx-port-rack": "000000.00",
- "lgx-port-shelf": "00"
- }
- },
- "rx-direction": {
- "port": {
- "port-device-name": "<xpdr-client-port>",
- "port-type": "fixed",
- "port-name": "<xpdr-client-port-number>",
- "port-rack": "000000.00",
- "port-shelf": "Chassis#1"
- },
- "lgx": {
- "lgx-device-name": "Some lgx-device-name",
- "lgx-port-name": "Some lgx-port-name",
- "lgx-port-rack": "000000.00",
- "lgx-port-shelf": "00"
- }
- },
- "optic-type": "gray"
- },
- "service-z-end": {
- "service-rate": "100",
- "node-id": "<xpdr-node-id>",
- "service-format": "Ethernet",
- "clli": "<ccli-name>",
- "tx-direction": {
- "port": {
- "port-device-name": "<xpdr-client-port>",
- "port-type": "fixed",
- "port-name": "<xpdr-client-port-number>",
- "port-rack": "000000.00",
- "port-shelf": "Chassis#1"
- },
- "lgx": {
- "lgx-device-name": "Some lgx-device-name",
- "lgx-port-name": "Some lgx-port-name",
- "lgx-port-rack": "000000.00",
- "lgx-port-shelf": "00"
- }
- },
- "rx-direction": {
- "port": {
- "port-device-name": "<xpdr-client-port>",
- "port-type": "fixed",
- "port-name": "<xpdr-client-port-number>",
- "port-rack": "000000.00",
- "port-shelf": "Chassis#1"
- },
- "lgx": {
- "lgx-device-name": "Some lgx-device-name",
- "lgx-port-name": "Some lgx-port-name",
- "lgx-port-rack": "000000.00",
- "lgx-port-shelf": "00"
- }
- },
- "optic-type": "gray"
- },
- "due-date": "yyyy-mm-ddT00:00:01Z",
- "operator-contact": "some-contact-info"
- }
+ "input": {
+ "sdnc-request-header": {
+ "request-id": "request-1",
+ "rpc-action": "service-create",
+ "request-system-id": "appname"
+ },
+ "service-name": "test1",
+ "common-id": "commonId",
+ "connection-type": "service",
+ "service-a-end": {
+ "service-rate": "100",
+ "node-id": "<xpdr-node-id>",
+ "service-format": "Ethernet",
+ "clli": "<ccli-name>",
+ "tx-direction": {
+ "port": {
+ "port-device-name": "<xpdr-client-port>",
+ "port-type": "fixed",
+ "port-name": "<xpdr-client-port-number>",
+ "port-rack": "000000.00",
+ "port-shelf": "Chassis#1"
+ },
+ "lgx": {
+ "lgx-device-name": "Some lgx-device-name",
+ "lgx-port-name": "Some lgx-port-name",
+ "lgx-port-rack": "000000.00",
+ "lgx-port-shelf": "00"
+ }
+ },
+ "rx-direction": {
+ "port": {
+ "port-device-name": "<xpdr-client-port>",
+ "port-type": "fixed",
+ "port-name": "<xpdr-client-port-number>",
+ "port-rack": "000000.00",
+ "port-shelf": "Chassis#1"
+ },
+ "lgx": {
+ "lgx-device-name": "Some lgx-device-name",
+ "lgx-port-name": "Some lgx-port-name",
+ "lgx-port-rack": "000000.00",
+ "lgx-port-shelf": "00"
+ }
+ },
+ "optic-type": "gray"
+ },
+ "service-z-end": {
+ "service-rate": "100",
+ "node-id": "<xpdr-node-id>",
+ "service-format": "Ethernet",
+ "clli": "<ccli-name>",
+ "tx-direction": {
+ "port": {
+ "port-device-name": "<xpdr-client-port>",
+ "port-type": "fixed",
+ "port-name": "<xpdr-client-port-number>",
+ "port-rack": "000000.00",
+ "port-shelf": "Chassis#1"
+ },
+ "lgx": {
+ "lgx-device-name": "Some lgx-device-name",
+ "lgx-port-name": "Some lgx-port-name",
+ "lgx-port-rack": "000000.00",
+ "lgx-port-shelf": "00"
+ }
+ },
+ "rx-direction": {
+ "port": {
+ "port-device-name": "<xpdr-client-port>",
+ "port-type": "fixed",
+ "port-name": "<xpdr-client-port-number>",
+ "port-rack": "000000.00",
+ "port-shelf": "Chassis#1"
+ },
+ "lgx": {
+ "lgx-device-name": "Some lgx-device-name",
+ "lgx-port-name": "Some lgx-port-name",
+ "lgx-port-rack": "000000.00",
+ "lgx-port-shelf": "00"
+ }
+ },
+ "optic-type": "gray"
+ },
+ "due-date": "yyyy-mm-ddT00:00:01Z",
+ "operator-contact": "some-contact-info"
+ }
+OTU4/ODU4 service creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Use the following REST RPC to invoke the *service handler* module in order to create a
+bidirectional end-to-end optical connectivity service between two xpdr nodes over an optical
+network composed of rdm nodes.
+
+XXXXXXXXXXXXXXX (TO BE COMPLETED) XXXXXXXXXXXXXXXXX
+
+
+1GE/ODU0 and 10GE/ODU2e service creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Use the following REST RPC to invoke the *PCE* module in order to check the connectivity
+between xponder nodes and the availability of a supporting optical connection between the
+network-ports of the nodes.
+
+**REST API**: to complete
+
+After the end-to-end optical connectivity between the two xpdr nodes has been checked, use
+the otn-service-path RPC to invoke the *Renderer* and create the corresponding interfaces:
+
+- 1GE and ODU0 interfaces for 1GE services
+- 10GE and ODU2e interfaces for 10GE services
+
+The following example corresponds to the creation of a 10GE service.
+
+**REST API**: to complete
+
+.. note::
+ OTN links are not automatically populated in the topology after the ODU2e interfaces have
+ been created on the two client ports of the xpdr. The otn link can be posted manually through
+ the REST API (APIDoc).
+
+.. note::
+ With Magnesium SR0, the service-list corresponding to 1GE/10GE and OTU4/ODU4 services is not
+ updated in the datastore after the interfaces have been created in the device.
+