relying on open models, each of them communicating through published APIs.
.. figure:: ./images/TransportPCE-Diagram-Sulfur.jpg
:alt: TransportPCE architecture
TransportPCE architecture
monitoring device port state changes. Associated notifications are handled through
Kafka and DMaaP clients.
The Chlorine release brings structural changes to the project. Indeed, all the official
YANG models of the OpenROADM and ONF T-API communities are no longer managed directly
in the TransportPCE project but in a dedicated sub-project: transportpce/models.
The implementation of these models in TransportPCE now imports the already
compiled models through a Maven dependency.
From a functional point of view, Chlorine supports the autonomous reroute of WDM services
terminated on 100G or 400G transponders, as well as the beginning of developments around
the OpenROADM catalog management that will make it possible to support Alien Wavelength use cases.

Module description
~~~~~~~~~~~~~~~~~~
supported over ODU4 in transponders or switchponders using higher rate network
interfaces.
In the Silicon release, the management of the TopologyUpdateNotification coming from the *Topology Management*
module was implemented. This functionality enables the controller to update the information of existing
services according to the online status of the network infrastructure. If any service is affected by
the topology update and the *odl-transportpce-nbi* feature is installed, the Service Handler sends a
notification to a Kafka server with the service update information.

PCE
^^^
The population of OTN links (OTU4 and ODU4) and the adjustment of the tributary ports/slots
pool occupancy when OTN services are created have been supported since Magnesium SR2.
Since the Silicon release, the Topology Management module processes NETCONF events received through an
event stream (as defined in RFC 5277) between the devices and the NETCONF adapter of the controller.
The current implementation detects device configuration changes and updates the topology datastore accordingly.
It then sends a TopologyUpdateNotification to the *Service Handler* to indicate that a change has been
detected in the network that may affect some of the already existing services.
Renderer
^^^^^^^^
North API, interconnecting the Service Handler to higher level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring OpenROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA's device model.
ServiceHandler Service
- PCE to Topology Management
- Service Handler to Renderer
- Renderer to OLM
- Network Model to Service Handler
Pce Service
^^^^^^^^^^^
- This feature provides the ability to stub some of the TransportPCE modules, pce and
renderer (Stubpce and Stubrenderer).
- Stubs are used for development purposes and can be used for some of the functional tests.
Interfaces to external software
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the current version, only optical equipment compliant with OpenROADM datamodels is managed
by TransportPCE.
Since the Chlorine release, the Bierman implementation of RESTCONF is no longer supported in favor of RFC 8040.
REST API requests must therefore comply with the RFC 8040 format: for instance,
*/restconf/config/ietf-network:network/openroadm-topology* becomes
*/rests/data/ietf-network:networks/network=openroadm-topology*.

Connecting nodes
~~~~~~~~~~~~~~~~
To connect a node, use the following RESTCONF request:

**REST API** : *PUT /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>*
**Sample JSON Data**
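The body below is a minimal sketch for mounting a NETCONF device. The field names come from the OpenDaylight *netconf-node-topology* model; the exact set of supported attributes (credentials, timers) depends on the controller version, so adapt the placeholder values to your setup.

.. code:: json

    {
        "node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:username": "<user-name>",
                "netconf-node-topology:password": "<password>",
                "netconf-node-topology:host": "<node-ip-address>",
                "netconf-node-topology:port": "<netconf-port>",
                "netconf-node-topology:tcp-only": "false"
            }
        ]
    }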
Then check that the NETCONF session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.
**REST API** : *GET /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>?content=nonconfig*
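For reference, a successful mount is reflected in a reply of the following form. This is abridged to the relevant leaf; real replies carry many more *netconf-node-topology* attributes.

.. code:: json

    {
        "network-topology:node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:connection-status": "connected"
            }
        ]
    }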
Node configuration discovery
physical ports related to transmission. All *circuit-packs* inside the node configuration are
analyzed.
Use the following RESTCONF URI to check that function, internally named *portMapping*:

**REST API** : *GET /rests/data/transportpce-portmapping:network*
.. note::
topology can be partial. Check that link of type *ROADMtoROADM* exists between two adjacent rdm
nodes.
**REST API** : *GET /rests/data/ietf-network:networks/network=openroadm-topology*
If it is not the case, you need to manually complement the topology with *ROADMtoROADM* link using
the following REST RPC:
**REST API** : *POST /rests/operations/transportpce-networkutils:init-roadm-nodes*
**Sample JSON Data**
.. code:: json
    {
        "input": {
            "rdm-a-node": "<node-id-A>",
            "deg-a-num": "<degree-A-number>",
            "termination-point-a": "<Logical-Connection-Point>",
            "rdm-z-node": "<node-id-Z>",
            "deg-z-num": "<degree-Z-number>",
            "termination-point-z": "<Logical-Connection-Point>"
        }
    }
From xpdr to rdm:
^^^^^^^^^^^^^^^^^
**REST API** : *POST /rests/operations/transportpce-networkutils:init-xpdr-rdm-links*
**Sample JSON Data**
.. code:: json
    {
        "input": {
            "links-input": {
                "xpdr-node": "<xpdr-node-id>",
                "xpdr-num": "1",
                "network-num": "<xpdr-network-port-number>",
                "rdm-node": "<rdm-node-id>",
                "srg-num": "<srg-number>",
                "termination-point-num": "<Logical-Connection-Point>"
            }
        }
    }
From rdm to xpdr:
^^^^^^^^^^^^^^^^^
**REST API** : *POST /rests/operations/transportpce-networkutils:init-rdm-xpdr-links*
**Sample JSON Data**
.. code:: json
    {
        "input": {
            "links-input": {
                "xpdr-node": "<xpdr-node-id>",
                "xpdr-num": "1",
                "network-num": "<xpdr-network-port-number>",
                "rdm-node": "<rdm-node-id>",
                "srg-num": "<srg-number>",
                "termination-point-num": "<Logical-Connection-Point>"
            }
        }
    }
or SWITCH type connected to two different rdm devices. To check that these xpdr are present in the
OTN topology, use the following request on the REST API:
**REST API** : *GET /rests/data/ietf-network:networks/network=otn-topology*
An optical connectivity service shall have been created in a first step. Since Magnesium SR2, the OTN
links are automatically populated in the topology after the Och, OTU4 and ODU4 interfaces have
end-to-end optical connectivity service between two xpdr over an optical network composed of rdm
nodes.
**REST API** : *POST /rests/operations/org-openroadm-service:service-create*
**Sample JSON Data**
"node-id": "<xpdr-node-id>",
"service-format": "Ethernet",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"service-z-end": {
"node-id": "<xpdr-node-id>",
"service-format": "Ethernet",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"due-date": "yyyy-mm-ddT00:00:01Z",
end-to end Optical Channel (OC) connectivity service between two add/drop ports (PP port of SRG
node) over an optical network only composed of rdm nodes.
**REST API** : *POST /rests/operations/org-openroadm-service:service-create*
**Sample JSON Data**
"node-id": "<xpdr-node-id>",
"service-format": "OC",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"service-z-end": {
"node-id": "<xpdr-node-id>",
"service-format": "OC",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-client-port>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"due-date": "yyyy-mm-ddT00:00:01Z",
between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service configures the
optical network infrastructure composed of rdm nodes.
**REST API** : *POST /rests/operations/org-openroadm-service:service-create*
**Sample JSON Data**
"service-format": "OTU",
"otu-service-rate": "org-openroadm-otn-common-types:OTU4",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"service-z-end": {
"service-format": "OTU",
"otu-service-rate": "org-openroadm-otn-common-types:OTU4",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"due-date": "yyyy-mm-ddT00:00:01Z",
connectivity service between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such
a service configures the optical network infrastructure composed of rdm nodes.
**REST API** : *POST /rests/operations/org-openroadm-service:service-create*
**Sample JSON Data**
"service-format": "OTU",
"otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"service-z-end": {
"service-format": "OTU",
"otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"due-date": "yyyy-mm-ddT00:00:01Z",
low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between two network
ports of OTN Xponder (MUXPDR or SWITCH).
**REST API** : *POST /rests/operations/org-openroadm-service:service-create*
**Sample JSON Data**
"service-format": "ODU",
"otu-service-rate": "org-openroadm-otn-common-types:ODU4",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"service-z-end": {
"service-format": "ODU",
"otu-service-rate": "org-openroadm-otn-common-types:ODU4",
"clli": "<ccli-name>",
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"due-date": "yyyy-mm-ddT00:00:01Z",
Such a service must be created between two client ports of OTN Xponder (MUXPDR or SWITCH)
configured to support 10GE interfaces.
**REST API** : *POST /rests/operations/org-openroadm-service:service-create*
**Sample JSON Data**
"committed-burst-size": "64"
}
},
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"service-z-end": {
"committed-burst-size": "64"
}
},
"tx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"rx-direction": [{
    "port": {
        "port-device-name": "<xpdr-node-id-in-otn-topology>",
        "port-type": "fixed",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00"
    },
    "index": 0
}],
"optic-type": "gray"
},
"due-date": "yyyy-mm-ddT00:00:01Z",
Use the following REST RPC to invoke *service handler* module in order to delete a given optical
connectivity service.
**REST API** : *POST /rests/operations/org-openroadm-service:service-delete*
**Sample JSON Data**
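The body below sketches a typical *service-delete* request. The exact header fields depend on the OpenROADM service model revision in use, so treat the field set and placeholder values (request-id, notification URL) as assumptions to adapt.

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete",
                "request-system-id": "appname",
                "notification-url": "http://localhost:8585/NotificationServer/notify"
            },
            "service-delete-req-info": {
                "service-name": "<service-name>",
                "tail-retention": "no"
            }
        }
    }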
Checking OTU4 service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
**Sample JSON Data**
"service-format": "OTU",
"node-id": "<otn-node-id>"
},
"pce-routing-metric": "hop-count"
}
}
Checking ODU4 service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
**Sample JSON Data**
"service-format": "ODU",
"node-id": "<otn-node-id>"
},
"pce-routing-metric": "hop-count"
}
}
Checking 10GE/ODU2e service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
**Sample JSON Data**
"service-format": "Ethernet",
"node-id": "<otn-node-id>"
},
"pce-routing-metric": "hop-count"
}
}
This feature allows TransportPCE application to expose at its northbound interface other APIs than
those defined by the OpenROADM MSA. With this feature, TransportPCE provides part of the Transport-API
specified by the Open Networking Foundation. More specifically, the Topology Service, Connectivity and Notification
Service components are implemented, making it possible to:

1. Expose to higher level applications an abstraction of its OpenROADM topologies in the form of topologies respecting the T-API modelling.
2. Create/delete connectivity services between the Service Interface Points (SIPs) exposed by the T-API topology.
3. Create/delete Notification Subscription Services to expose T-API notifications to higher level applications through a Kafka server.

The current version of TransportPCE implements the *tapi-topology.yang*,
*tapi-connectivity.yang* and *tapi-notification.yang* models in the revision
2018-12-10 (T-API v2.1.2).

Additionally, support for the Path Computation Service will be added in future releases, which will allow T-PCE
to compute a path over the T-API topology.

T-API Topology Service
~~~~~~~~~~~~~~~~~~~~~~

- RPC calls implemented:

  - get-topology-details

  - get-node-details

  - get-node-edge-point-details

  - get-link-details

  - get-topology-list

As in IETF or OpenROADM topologies, T-API topologies are composed of lists of nodes and links that
abstract a set of network resources. T-API specifies the *T0 - Multi-layer topology* which is, as
indicated by its name, a single topology that collapses network logical abstraction for all network
represented by a bidirectional OTN link in TAPI topology, while retaining their available bandwidth
characteristics.
Phosphorus SR0 extends the T-API topology service implementation by bringing a fully described topology.
*T0 - Full Multi-layer topology* is derived from the existing *T0 - Multi-layer topology*, but the ROADM
infrastructure is not abstracted, so higher level applications can get more details on the composition
of the ROADM infrastructure controlled by TransportPCE. Each ROADM node found in the *openroadm-network*
is converted into a *Photonic Media* node. The details of these T-API nodes are obtained from the
*openroadm-topology*. Therefore, the external traffic ports of *Degree* and *SRG* nodes are represented
by a set of Network Edge Points (NEPs) and SIPs belonging to the *Photonic Media* node, and a pair of
roadm-to-roadm links present in *openroadm-topology* is represented by a bidirectional *OMS* link in the TAPI
topology.
Additionally, T-API topology related information is stored in the TransportPCE datastore in the same way as
the OpenROADM topology layers. When a node is connected to the controller through the corresponding *REST API*,
the T-API topology context is updated dynamically and stored.

.. note::

    A naming nomenclature is defined to map T-API and OpenROADM data, e.g.
    T-API_roadm_Name = OpenROADM_roadmID+T-API_layer and
    T-API_roadm_nep_Name = OpenROADM_roadmID+T-API_layer+OpenROADM_terminationPointID.

Three kinds of topologies are currently implemented. The first one is the *"T0 - Multi-layer topology"*
defined in the reference implementation of T-API. This topology gives an abstraction from data coming
from openroadm-topology and otn-topology. Such topology may be rather complex since most of devices are
represented through several nodes and links.
Another topology, named *"Transponder 100GE"*, is also implemented. The latter provides a higher level
of abstraction, much simpler, for the specific case of 100GE transponder, in the form of a single
DSR node.
Lastly, the *T0 - Full Multi-layer topology* was added. This topology collapses the data coming
from openroadm-network, openroadm-topology and otn-topology. It gives a complete view of the optical
network as defined in the reference implementation of T-API.
The figure below shows an example of TAPI abstractions as performed by TransportPCE starting from Aluminium SR2.
of respectively XPDR-A1-XPDR1 and XPDR-C1-XPDR1...
**REST API** : *POST /rests/operations/tapi-topology:get-topology-details*
This request builds the TAPI *T0 - Multi-layer topology* abstraction with regard to the current
state of *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
port is connected to Add/Drop nodes of the ROADM infrastructure are retrieved in order to
abstract only relevant information.

This request builds the TAPI *T0 - Full Multi-layer* topology with respect to the information existing in
the T-API topology context stored in the OpenDaylight datastores.

**Sample JSON Data**

.. code:: json

    {
        "tapi-topology:input": {
            "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology"
        }
    }

**REST API** : *POST /rests/operations/tapi-topology:get-node-details*

This request returns the information, stored in the Topology Context, of the corresponding T-API node.
The user can provide either the UUID associated with the node or its name.

**Sample JSON Data**

.. code:: json

    {
        "tapi-topology:input": {
            "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
            "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA"
        }
    }

**REST API** : *POST /rests/operations/tapi-topology:get-node-edge-point-details*

This request returns the information, stored in the Topology Context, of the corresponding T-API NEP.
The user can provide either the UUID associated with the NEP or its name.

**Sample JSON Data**

.. code:: json

    {
        "tapi-topology:input": {
            "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
            "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA",
            "tapi-topology:ep-id-or-name": "ROADM-A1+PHOTONIC_MEDIA+DEG1-TTP-TXRX"
        }
    }

**REST API** : *POST /rests/operations/tapi-topology:get-link-details*

This request returns the information, stored in the Topology Context, of the corresponding T-API link.
The user can provide either the UUID associated with the link or its name.

**Sample JSON Data**

.. code:: json

    {
        "tapi-topology:input": {
            "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
            "tapi-topology:link-id-or-name": "ROADM-C1-DEG1-DEG1-TTP-TXRXtoROADM-A1-DEG2-DEG2-TTP-TXRX"
        }
    }

T-API Connectivity & Common Services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Phosphorus SR0 extends the T-API interface support by implementing the T-API Connectivity Service.
This interface enables a higher level controller or an orchestrator to request the creation of
connectivity services as defined in the *tapi-connectivity* model. As it is necessary to indicate the
two (or more) SIPs (or endpoints) of the connectivity service, the *tapi-common* model is implemented
to retrieve from the datastore all the information related to the SIPs in the tapi-context.
The current implementation of the connectivity service maps the *connectivity-request* into the appropriate
*openroadm-service-create* request and relies on the Service Handler to perform path calculation and
configuration of devices. Results received from the PCE and the Renderer are mapped back into T-API to create
the corresponding Connection End Points (CEPs) and Connections in the T-API Connectivity Context, which is
stored in the datastore.

This first implementation includes the creation of:

- ROADM-to-ROADM tapi-connectivity service (MC connectivity service)
- OTN tapi-connectivity services (OCh/OTU, OTSi/OTU & ODU connectivity services)
- Ethernet tapi-connectivity services (DSR connectivity service)

- RPC calls implemented:

  - create-connectivity-service

  - get-connectivity-service-details

  - get-connection-details

  - delete-connectivity-service

  - get-connection-end-point-details

  - get-connectivity-service-list

  - get-service-interface-point-details

  - get-service-interface-point-list

Creating a T-API Connectivity service
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the *tapi* interface to create any end-to-end connectivity service on a T-API based
network. Two kinds of end-to-end "optical" connectivity services are managed by the TransportPCE T-API module:

- 10GE service from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
- Media Channel (MC) connectivity service from client add/drop port (PP port of SRG) to
  client add/drop port of two ROADMs

As mentioned earlier, the T-API module interfaces with the Service Handler to automatically invoke the
*renderer* module to create all required tapi connections and cross-connections on each device
supporting the service.

Before creating a low-order OTN connectivity service (1GE or 10GE services terminating on the
client port of a MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container
exists and has previously been configured (i.e. structured to support low-order OTN services)
to support low-order OTN containers.

Thus, OTN connectivity service creation implies three steps:

1. OTSi/OTU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in the Photonic Media layer)
2. ODU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in the DSR/ODU layer)
3. 10GE connectivity service from client port to client port of two OTN Xponders (MUXPDR or SWITCH in the DSR/ODU layer)

The first step corresponds to the OCH-OTU4 service from network port to network port of OpenROADM.
The corresponding T-API cross connections and top connections are created between the CEPs of the T-API
nodes involved in each request.

Additionally, an *MC connectivity service* can be created between two ROADMs to create an optical
tunnel and reserve resources in advance. This kind of service corresponds to the OC service creation
use case described earlier.

The management of other OTN services through T-API (1GE-ODU0, 100GE...) is planned for future releases.

Any-Connectivity service creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As for the service creation described for OpenROADM, the initial steps are the same:

- Connect the netconf devices to the controller
- Create the XPDR-RDM links and configure the RDM-to-RDM links (in the openroadm topologies)

Bidirectional T-API links between xpdr and rdm nodes must be created manually. To that end, use the
following REST RPC:

From xpdr <--> rdm:
^^^^^^^^^^^^^^^^^^^

**REST API** : *POST /rests/operations/transportpce-tapinetworkutils:init-xpdr-rdm-tapi-link*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "xpdr-node": "<XPDR_OpenROADM_id>",
            "network-tp": "<XPDR_TP_OpenROADM_id>",
            "rdm-node": "<ROADM_OpenROADM_id>",
            "add-drop-tp": "<ROADM_TP_OpenROADM_id>"
        }
    }

Use the following REST RPC to invoke the T-API module in order to create a bidirectional connectivity
service between two devices. The network should be composed of two ROADMs and two Xponders (SWITCH or MUX).

**REST API** : *POST /rests/operations/tapi-connectivity:create-connectivity-service*

**Sample JSON Data**

.. code:: json

    {
        "tapi-connectivity:input": {
            "tapi-connectivity:end-point": [
                {
                    "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
                    "tapi-connectivity:service-interface-point": {
                        "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
                    },
                    "tapi-connectivity:administrative-state": "UNLOCKED",
                    "tapi-connectivity:operational-state": "ENABLED",
                    "tapi-connectivity:direction": "BIDIRECTIONAL",
                    "tapi-connectivity:role": "SYMMETRIC",
                    "tapi-connectivity:protection-role": "WORK",
                    "tapi-connectivity:local-id": "<OpenROADM node ID>",
                    "tapi-connectivity:name": [
                        {
                            "tapi-connectivity:value-name": "OpenROADM node id",
                            "tapi-connectivity:value": "<OpenROADM node ID>"
                        }
                    ]
                },
                {
                    "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
                    "tapi-connectivity:service-interface-point": {
                        "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
                    },
                    "tapi-connectivity:administrative-state": "UNLOCKED",
                    "tapi-connectivity:operational-state": "ENABLED",
                    "tapi-connectivity:direction": "BIDIRECTIONAL",
                    "tapi-connectivity:role": "SYMMETRIC",
                    "tapi-connectivity:protection-role": "WORK",
                    "tapi-connectivity:local-id": "<OpenROADM node ID>",
                    "tapi-connectivity:name": [
                        {
                            "tapi-connectivity:value-name": "OpenROADM node id",
                            "tapi-connectivity:value": "<OpenROADM node ID>"
                        }
                    ]
                }
            ],
            "tapi-connectivity:connectivity-constraint": {
                "tapi-connectivity:service-layer": "<TAPI_Service_Layer>",
                "tapi-connectivity:service-type": "POINT_TO_POINT_CONNECTIVITY",
                "tapi-connectivity:service-level": "Some service-level",
                "tapi-connectivity:requested-capacity": {
                    "tapi-connectivity:total-size": {
                        "value": "<CAPACITY>",
                        "unit": "GB"
                    }
                }
            },
            "tapi-connectivity:state": "Some state"
        }
    }
+
+As for the previous RPC, MC and OTSi correspond to PHOTONIC_MEDIA layer services,
+ODU to ODU layer services and 10GE/DSR to DSR layer services. This RPC invokes the
+*Service Handler* module to trigger the *PCE* to compute a path over the
+*otn-topology*, which must contain ODU4 links with valid bandwidth parameters. Once the path is computed
+and validated, the T-API CEPs (associated with a NEP), cross connections and top connections will be created
+according to the service request and the topology objects inside the computed path. Then, the *renderer* and
+*OLM* are invoked to implement the end-to-end path into the devices and to update the status of the connections
+and connectivity service.
+
+.. note::
+ Refer to the "Unconstrained E2E Service Provisioning" use cases from T-API Reference Implementation to get
+ more details about the process of connectivity service creation.
+
+Deleting a connectivity service
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Use the following REST RPC to invoke the *T-API* module in order to delete a given optical
+connectivity service.
+
+**REST API** : *POST /rests/operations/tapi-connectivity:delete-connectivity-service*
+
+**Sample JSON Data**
+
+.. code:: json
+
+ {
+ "tapi-connectivity:input": {
+ "tapi-connectivity:service-id-or-name": "<Service_UUID_or_Name>"
+ }
+ }
+
+.. note::
+ Deleting OTN connectivity services implies proceeding in the reverse way to their creation. Thus, OTN
+ connectivity service deletion must respect the three following steps:
+
+ 1. first delete all 10GE services supported over any ODU4 to be deleted
+ 2. then delete the ODU4 service
+ 3. finally delete the MC-OTSi services supporting the just-deleted ODU4
+
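+The ordering constraint above can be captured in a small helper. This is only a sketch with hypothetical
+helper and service names: it builds the *delete-connectivity-service* request bodies (using the input shape
+shown earlier) in the mandatory order, and does not send them.
+
+.. code:: python
+
+    def ordered_delete_bodies(dsr_services, odu_service, photonic_service):
+        """Return delete-connectivity-service bodies in the mandatory order:
+        10GE/DSR services first, then the ODU4, then the supporting MC-OTSi."""
+        ordered = list(dsr_services) + [odu_service, photonic_service]
+        return [
+            {"tapi-connectivity:input": {
+                "tapi-connectivity:service-id-or-name": name}}
+            for name in ordered
+        ]
+
+    # Illustrative service names only
+    bodies = ordered_delete_bodies(["10GE-1", "10GE-2"], "ODU4-1", "MC-OTSi-1")
+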
+T-API Notification Service
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- RPC calls implemented:
+
+ - create-notification-subscription-service
+
+ - get-supported-notification-types
+
+ - delete-notification-subscription-service
+
+ - get-notification-subscription-service-details
+
+ - get-notification-subscription-service-list
+
+ - get-notification-list
+
+Sulfur SR1 extends the T-API interface support by implementing the T-API notification service. This feature
+allows TransportPCE to write and read tapi-notifications stored in topics of a Kafka server. It also upgrades
+the nbinotifications module to support the serialization of tapi-notifications into JSON format and their
+deserialization back from it. The current implementation of the notification service creates a Kafka topic
+and stores tapi-notifications upon reception of a create-notification-subscription-service request. Only
+connectivity-service related notifications are stored in the Kafka server.
+
+In comparison with OpenROADM notifications, for which several pre-defined Kafka topics are created when the
+nbinotifications module is instantiated, tapi-related Kafka topics are created on demand. Upon reception of a
+*create-notification-subscription-service* request, a new topic is created in the Kafka server.
+This topic is named after the connectivity-service UUID.
+
+.. note::
+ A Notification Subscription Service request may include a list of T-API object UUIDs; in that case, one
+ topic per UUID is created in the Kafka server.
+
+In the current implementation, only Connectivity Service related notifications are supported.
+
+**REST API** : *POST /rests/operations/tapi-notification:get-supported-notification-types*
+
+The response body will include the supported notification types and the associated object types.
+
+Use the following RPC to create a Notification Subscription Service.
+
+**REST API** : *POST /rests/operations/tapi-notification:create-notification-subscription-service*
+
+**Sample JSON Data**
+
+.. code:: json
+
+ {
+ "tapi-notification:input": {
+ "tapi-notification:subscription-filter": {
+ "tapi-notification:requested-notification-types": [
+ "ALARM_EVENT"
+ ],
+ "tapi-notification:requested-object-types": [
+ "CONNECTIVITY_SERVICE"
+ ],
+ "tapi-notification:requested-layer-protocols": [
+ "<LAYER_PROTOCOL_NAME>"
+ ],
+ "tapi-notification:requested-object-identifier": [
+ "<Service_UUID>"
+ ],
+ "tapi-notification:include-content": true,
+ "tapi-notification:local-id": "localId",
+ "tapi-notification:name": [
+ {
+ "tapi-notification:value-name": "Subscription name",
+ "tapi-notification:value": "<notification_service_name>"
+ }
+ ]
+ },
+ "tapi-notification:subscription-state": "ACTIVE"
+ }
+ }
+
+This call returns the *UUID* of the Notification Subscription Service, which can later be used to retrieve
+the details of the created subscription, to delete the subscription (and all the related Kafka topics) or to
+retrieve all the tapi notifications related to that subscription service.
+
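+The returned *UUID* plugs directly into the follow-up RPC bodies. A minimal sketch, assuming those RPCs take
+the same *subscription-id-or-name* input leaf as the *get-notification-list* sample shown further below, and
+using an illustrative UUID:
+
+.. code:: python
+
+    def subscription_rpc_body(subscription_uuid):
+        # Assumption: the same body shape works for
+        # get-notification-subscription-service-details and
+        # delete-notification-subscription-service
+        return {"tapi-notification:input": {
+            "tapi-notification:subscription-id-or-name": subscription_uuid}}
+
+    # Illustrative UUID only
+    body = subscription_rpc_body("b1f4bd3b-7fa9-367b-a8ab-6e80293d0b28")
+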
+The figure below shows an example of how the tapi and nbinotifications features work together to notify
+about a connectivity service creation process. Depending on the status of the process, a tapi-notification
+with the corresponding updated state of the connectivity service is sent to the "Service_UUID" topic.
+
+.. figure:: ./images/TransportPCE-tapi-nbinotifications-service-example.jpg
+ :alt: Example of tapi connectivity service notifications using the feature nbinotifications in TransportPCE
+
+Additionally, when a connectivity service breaks down or is restored, a tapi notification reporting the new
+status is sent to the Kafka server. An example of such a tapi notification is shown below.
+
+**Sample JSON T-API notification**
+
+.. code:: json
+
+ {
+ "nbi-notifications:notification-tapi-service": {
+ "layer-protocol-name": "<LAYER_PROTOCOL_NAME>",
+ "notification-type": "ATTRIBUTE_VALUE_CHANGE",
+ "changed-attributes": [
+ {
+ "value-name": "administrativeState",
+ "old-value": "<LOCKED_OR_UNLOCKED>",
+ "new-value": "<UNLOCKED_OR_LOCKED>"
+ },
+ {
+ "value-name": "operationalState",
+ "old-value": "DISABLED_OR_ENABLED",
+ "new-value": "ENABLED_OR_DISABLED"
+ }
+ ],
+ "target-object-name": [
+ {
+ "value-name": "Connectivity Service Name",
+ "value": "<SERVICE_UUID>"
+ }
+ ],
+ "uuid": "<NOTIFICATION_UUID>",
+ "target-object-type": "CONNECTIVITY_SERVICE",
+ "event-time-stamp": "2022-04-06T09:06:01+00:00",
+ "target-object-identifier": "<SERVICE_UUID>"
+ }
+ }
+
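+Consumers of these notifications typically extract the changed attributes to decide what happened to the
+service. A minimal sketch, parsing an illustrative notification that follows the structure of the sample
+above (the concrete state values here are invented for the example):
+
+.. code:: python
+
+    import json
+
+    # Illustrative notification mirroring the sample structure above
+    raw = json.dumps({
+        "nbi-notifications:notification-tapi-service": {
+            "layer-protocol-name": "DSR",
+            "notification-type": "ATTRIBUTE_VALUE_CHANGE",
+            "changed-attributes": [
+                {"value-name": "administrativeState",
+                 "old-value": "UNLOCKED", "new-value": "LOCKED"},
+                {"value-name": "operationalState",
+                 "old-value": "ENABLED", "new-value": "DISABLED"},
+            ],
+            "target-object-type": "CONNECTIVITY_SERVICE",
+        }
+    })
+
+    notif = json.loads(raw)["nbi-notifications:notification-tapi-service"]
+    # Index the attribute changes by name for easy lookup
+    changes = {c["value-name"]: (c["old-value"], c["new-value"])
+               for c in notif["changed-attributes"]}
+    # Here the service went down: operationalState ENABLED -> DISABLED
+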
+To retrieve the tapi connectivity service notifications stored in the Kafka server:
+
+**REST API** : *POST /rests/operations/tapi-notification:get-notification-list*
+
+**Sample JSON Data**
+
+.. code:: json
+
+ {
+ "tapi-notification:input": {
+ "tapi-notification:subscription-id-or-name": "<SUBSCRIPTION_UUID_OR_NAME>",
+ "tapi-notification:time-period": "<time_period>"
+ }
+ }
+
+Further developments will support more types of T-API objects, e.g., node, link, topology, connection...
+
odl-transportpce-dmaap-client
-----------------------------
This feature allows the TransportPCE application to send notifications to the ONAP Dmaap Message router
following service request results.
This feature listens on NBI notifications and sends the PublishNotificationService content to
-Dmaap on the topic "unauthenticated.TPCE" through a POST request on /events/unauthenticated.TPCE
+Dmaap on the topic "unauthenticated.TPCE" through a POST request on /events/unauthenticated.TPCE.
It uses Jackson to serialize the notification to JSON and a Jersey client to send the POST request.
odl-transportpce-nbinotifications
It is basically composed of two kinds of elements. First are the 'publishers', which are in charge of sending
notifications to a Kafka server. To protect the server and only allow specific classes to send notifications,
each publisher is dedicated to an authorized class.
-Then are the 'subscribers' that are in charge of reading notifications from a Kafka server.
+Then there are the 'subscribers', which are in charge of reading notifications from a Kafka server.
So when the feature is called to write a notification to a Kafka server, it serializes the notification
into JSON format and then publishes it in a topic of the server via a publisher.
And when the feature is called to read notifications from a Kafka server, it retrieves them from
To retrieve these service notifications stored in the Kafka server:
-**REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-process-service*
+**REST API** : *POST /rests/operations/nbi-notifications:get-notifications-process-service*
**Sample JSON Data**
To retrieve these alarm notifications stored in the Kafka server:
-**REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-alarm-service*
+**REST API** : *POST /rests/operations/nbi-notifications:get-notifications-alarm-service*
**Sample JSON Data**