Merge "Bump upstream dependencies to Cl-SR1"
[transportpce.git] / docs / developer-guide.rst
1 .. _transportpce-dev-guide:
2
3 TransportPCE Developer Guide
4 ============================
5
6 Overview
7 --------
8
TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with controllers of different layers (L2, L3 controller…), a
higher layer controller and/or an orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment, and to provision services according to a
request coming from a higher layer controller and/or an orchestrator.
This capability may rely on the controller only or it may be delegated to
distributed (standardized) protocols.
19
20
21 Architecture
22 ------------
23
The TransportPCE modular architecture is described in the diagram below. Each main
function, such as Topology management, Path Calculation Engine (PCE), Service
handler, Renderer (responsible for the path configuration through optical
equipment) and Optical Line Management (OLM), is associated with a generic block
relying on open models, each of them communicating through published APIs.
29
30
31 .. figure:: ./images/TransportPCE-Diagram-Sulfur.jpg
32    :alt: TransportPCE architecture
33
34    TransportPCE architecture
35
36 Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
37 of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
38 and transponders.
39
The value of using a controller to automatically provision services strongly
relies on its ability to handle end-to-end optical services that span across
different network domains, potentially equipped with devices coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.
45
The initial design of TransportPCE leverages the OpenROADM Multi-Source Agreement (MSA),
which defines interoperability specifications consisting of both optical
interoperability and YANG data models.
49
End-to-end OTN services such as OCH-OTU4, structured ODU4 or 10GE-ODU2e
services have been supported since Magnesium SR2. OTN support continued to be
improved in the following releases of Magnesium and Aluminium.
53
Flexgrid was introduced in Aluminium. Depending on the OpenROADM device model,
optical interfaces can be created according to the initial fixed grid (for
R1.2.1, 96 channels regularly spaced at 50 GHz), or to a flexgrid (for R2.2.1,
use of a specific number of contiguous frequency slots of 6.25 GHz, depending
on the one hand on the capabilities of the ROADMs and transponders, and on the
other hand on the rate of the channel).
60
Leveraging the flexgrid feature, high-rate services have been supported since Silicon.
The first implementation allows rendering 400GE services. This release also brings
asynchronous service creation and deletion, thanks to northbound notification
modules based on a Kafka implementation, allowing interactions with the DMaaP
bus of ONAP.
66
Phosphorus consolidates end-to-end support for high-rate services (ODUC4, OTUC4),
allowing service creation and deletion from the NBI. The support of path
computation for high-rate services (OTUC4) will be added across the different
Phosphorus releases, relying on GNPy for impairment-aware path computation.
Experimental support of T-API is provided, allowing service-create/delete from a
T-API version 2.1.1 compliant NBI. A T-API network topology, with different levels
of abstraction, and a service context are maintained in the MD-SAL. Service state
is managed by monitoring device port state changes. Associated notifications are
handled through Kafka and DMaaP clients.
76
The Chlorine release brings structural changes to the project. Indeed, all the official
YANG models of the OpenROADM and ONF T-API communities are no longer managed directly
in the TransportPCE project but in a dedicated sub-project: transportpce/models.
Also, the implementation of these models made in TransportPCE now imports
the already compiled models as a Maven dependency.
From a functional point of view, Chlorine supports the autonomous reroute of WDM services
terminated on 100G or 400G transponders, as well as the beginning of developments around
the OpenROADM catalog management that will allow supporting Alien Wavelength use cases.
85
86
87 Module description
88 ~~~~~~~~~~~~~~~~~~
89
90 ServiceHandler
91 ^^^^^^^^^^^^^^
92
The Service Handler handles requests coming from a higher level controller or an
orchestrator through the northbound API, as defined in the OpenROADM service model.
The current implementation addresses the following RPCs: service-create,
temp-service-create, service-delete, temp-service-delete, service-reroute, and
service-restoration. It checks the request consistency and triggers path calculation
by sending RPCs to the PCE. If a valid path is returned by the PCE, path configuration
is initiated relying on the Renderer and the OLM. At the confirmation of a successful
service creation, the Service Handler updates the service-list/temp-service-list in the
MD-SAL. For service deletion, the Service Handler relies on the Renderer and the OLM to
delete connections and reset power levels associated with the service. The service-list
is updated following a successful service deletion. Neon SR0 added support for services
from ROADM to ROADM, which brings additional flexibility and notably allows reserving
resources when transponders are not in place at day one. Magnesium SR2 fully supports
end-to-end OTN services which are part of the OTN infrastructure. It concerns the
management of OCH-OTU4 (also part of the optical infrastructure) and structured HO-ODU4
services. Moreover, once these two kinds of OTN infrastructure services have been
created, it is possible to manage some LO-ODU services (1GE-ODU0, 10GE-ODU2e). 100GE
services are also supported over ODU4 in transponders or switchponders using higher rate
network interfaces.
112
113 In Silicon release, the management of TopologyUpdateNotification coming from the *Topology Management*
114 module was implemented. This functionality enables the controller to update the information of existing
115 services according to the online status of the network infrastructure. If any service is affected by
116 the topology update and the *odl-transportpce-nbi* feature is installed, the Service Handler will send a
117 notification to a Kafka server with the service update information.
118
119 PCE
120 ^^^
121
122 The Path Computation Element (PCE) is the component responsible for path
123 calculation. An interface allows the Service Handler or external components such as an
124 orchestrator to request a path computation and get a response from the PCE
125 including the computed path(s) in case of success, or errors and indication of
126 the reason for the failure in case the request cannot be satisfied. Additional
127 parameters can be provided by the PCE in addition to the computed paths if
128 requested by the client module. An interface to the Topology Management module
129 allows keeping PCE aligned with the latest changes in the topology. Information
130 about current and planned services is available in the MD-SAL data store.
131
The current implementation of the PCE allows finding the shortest path, minimizing either
the hop count (default) or the propagation delay. The central wavelength is assigned
considering a fixed grid of 96 wavelengths spaced at 50 GHz. The assignment of wavelengths
according to a flexible grid considering 768 contiguous slots of 6.25 GHz (total spectrum
of 4.8 THz), and their occupation by existing services, is planned for later releases.
In Neon SR0, the PCE calculates the OSNR on the basis of incremental noise specifications
provided in the OpenROADM MSA. The support of unidirectional ports is also added.
139
140 PCE handles the following constraints as hard constraints:
141
142 -   **Node exclusion**
143 -   **SRLG exclusion**
144 -   **Maximum latency**
145
In Magnesium SR0, the interconnection of the PCE with GNPy (Gaussian Noise Python), an
open-source library developed in the scope of the Telecom Infra Project for building route
planning and optimizing performance in optical mesh networks, is fully supported.
Impairment-aware path computation for services of higher rates (beyond 100G) is planned
across the Phosphorus releases. It implies making the B100G OpenROADM specifications
available in the GNPy libraries.
151
If the OSNR calculated by the PCE is too close to the limit defined in the OpenROADM
specifications, the PCE forwards the topology and the pre-computed path, translated into
routing constraints, to the external GNPy tool through a REST interface. GNPy calculates a
set of Quality of Transmission metrics for this path using its own library, which includes
models for OpenROADM. The result is sent back to the PCE. If the path is validated, the PCE
sends back a response to the Service Handler. In case of invalidation of the path by GNPy,
the PCE sends a new request to GNPy, including only the constraints expressed in the
path-computation-request initiated by the Service Handler. GNPy then tries to calculate a
path based on these relaxed constraints. The result of the path computation is provided to
the PCE, which translates the path according to the topology handled in transportPCE and
forwards the results to the Service Handler.
162
GNPy relies on SNR and takes into account the linear and non-linear impairments
to check feasibility. In the related tests, the GNPy module runs externally in a
Docker container and the communication with TransportPCE is ensured via HTTPS.
166
167 Topology Management
168 ^^^^^^^^^^^^^^^^^^^
169
The Topology Management module builds the topology according to the network model
defined in OpenROADM. The topology is aligned with the IETF I2RS RFC 8345 model.
It includes several network layers:
173
174 -  **CLLI layer corresponds to the locations that host equipment**
175 -  **Network layer corresponds to a first level of disaggregation where we
176    separate Xponders (transponder, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where ROADM
   Add/Drop modules ("SRGs") are separated from the degrees which include line
   amplifiers and WSS that switch wavelengths from one degree to another**
180 -  **OTN layer introduced in Magnesium includes transponders as well as switch-ponders and
181    mux-ponders having the ability to switch OTN containers from client to line cards. Mg SR0
182    release includes creation of the switching pool (used to model cross-connect matrices),
183    tributary-ports and tributary-slots at the initial connection of NETCONF devices.
184    The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
185    pool occupancy when OTN services are created is supported since Magnesium SR2.**
186
Since the Silicon release, the Topology Management module processes NETCONF events received
through an event stream (as defined in RFC 5277) between devices and the NETCONF adapter of
the controller. The current implementation detects device configuration changes and updates
the topology datastore accordingly.
190 Then, it sends a TopologyUpdateNotification to the *Service Handler* to indicate that a change has been
191 detected in the network that may affect some of the already existing services.
192
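The different layers described above are exposed as separate networks of the IETF RFC 8345
model and can be read back over RESTCONF. The following minimal sketch (Python, assuming a
default local OpenDaylight endpoint and credentials) simply retrieves each layer; the
*clli-network* and *openroadm-network* names used for the first two layers are assumptions,
whereas *openroadm-topology* and *otn-topology* are the names used later in this guide.

.. code:: python

    # Illustrative sketch only: read each topology layer over RESTCONF (RFC 8040).
    # Endpoint, credentials and the clli-network/openroadm-network layer names are
    # assumptions; openroadm-topology and otn-topology are documented further below.
    import requests

    BASE = "http://localhost:8181/rests/data"
    AUTH = ("admin", "admin")
    HEADERS = {"Accept": "application/json"}

    for layer in ("clli-network", "openroadm-network", "openroadm-topology", "otn-topology"):
        url = f"{BASE}/ietf-network:networks/network={layer}"
        response = requests.get(url, auth=AUTH, headers=HEADERS, timeout=10)
        print(layer, response.status_code)
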
193 Renderer
194 ^^^^^^^^
195
The Renderer module, on request coming from the Service Handler through a
service-implementation-request / service-delete RPC, sets up or deletes the path corresponding
to a specific service between the A and Z ends. The path description provided by the Service
Handler to the renderer is based on abstracted resources (nodes, links and termination-points),
as provided by the PCE module. The renderer converts this path description into a path topology
based on device resources (circuit-packs, ports…).
202
The conversion from abstracted resources to device resources is performed relying on the
portmapping module, which maintains the mapping between these different resource types.
The portmapping module also allows keeping the topology independent from the device releases.
In Neon (SR0), the portmapping module was enriched to support both OpenROADM 1.2.1 and 2.2.1
device models. The full support of OpenROADM 2.2.1 device models (both in the topology
management and the rendering function) was added in Neon SR1. In Magnesium, portmapping is
enriched with the supported-interface-capability, OTN supporting-interfaces, and switching-pools
(reflecting cross-connection capabilities of OTN switch-ponders). The support for 7.1 device
models is introduced in Silicon (no devices of intermediate releases have been proposed and
made available to the market by equipment manufacturers).
213
After the path is provided, the renderer first checks which interfaces already exist on the
ports of the different nodes that the path crosses. It then creates the missing interfaces.
After all needed interfaces have been created, it sets the connections required in the nodes
and notifies the Service Handler of the status of the path creation. The path is created in
two steps (from A to Z and from Z to A). In case the path between A and Z could not be fully
created, a rollback function is called to set the equipment on the path back to its initial
configuration (as it was before invoking the Renderer).
221
Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
and ODU0 interfaces. The creation of these low-order OTN interfaces must be triggered through the
otn-service-path RPC. Magnesium SR2 fully supports end-to-end OTN service implementation into
devices (service-implementation-request / service-delete RPC, topology alignment after the service
has been created).
227
In the Silicon releases, higher rate OTN interfaces (OTUC4) must be triggered through the
otn-service-path RPC. Phosphorus SR0 supports end-to-end OTN service implementation into devices
(service-implementation-request / service-delete RPC, topology alignment after the service
has been created). One shall note that impairment-aware path calculation for higher rates will
be made available across the Phosphorus release train.
233
234 OLM
235 ^^^
236
237 Optical Line Management module implements two main features: it is responsible
238 for setting up the optical power levels on the different interfaces, and is in
239 charge of adjusting these settings across the life of the optical
240 infrastructure.
241
After the different connections have been established in the ROADMs, between two
degrees for an express path, or between an SRG and a degree for an add or drop
path (meaning the devices have set the WSS and all other required elements to
provide path continuity), power settings are provided as attributes of these
connections. This allows the device to set all complementary elements, such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) into the fiber span. This also applies
to X-Ponders, as their output power must comply with the specifications defined
for the Add/Drop ports (SRG) of the ROADM. OLM has the responsibility of
calculating the right power settings, sending them to the device, and checking
the PM retrieved from the device to verify that the settings were correctly
applied and the configuration successfully completed.
254
255
256 Inventory
257 ^^^^^^^^^
258
The TransportPCE Inventory module is responsible for keeping track of connected devices in an
external MariaDB database. Other databases may be used as long as they comply with SQL and are
compatible with OpenDaylight (for example MySQL). At present, the module supports extracting and
persisting the inventory of OpenROADM MSA version 1.2.1 devices. Inventory module changes to
support newer device models (2.2.1, etc.) and other models (network, service, etc.) will be
progressively included.
264
The inventory module can be activated by the associated karaf feature (odl-transportpce-inventory).
The database properties are supplied in the “opendaylight-release” and “opendaylight-snapshots”
profiles. Below is the settings.xml with the properties included in the distribution.
The module can be rebuilt from sources with different parameters.
269
270 Sample entry in settings.xml to declare an external inventory database:
271 ::
272
273     <profiles>
274       <profile>
275           <id>opendaylight-release</id>
276     [..]
277          <properties>
278                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
279                  <transportpce.db.database><<databasename>></transportpce.db.database>
280                  <transportpce.db.username><<username>></transportpce.db.username>
281                  <transportpce.db.password><<password>></transportpce.db.password>
282                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
283          </properties>
284     </profile>
285     [..]
286     <profile>
287           <id>opendaylight-snapshots</id>
288     [..]
289          <properties>
290                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
291                  <transportpce.db.database><<databasename>></transportpce.db.database>
292                  <transportpce.db.username><<username>></transportpce.db.username>
293                  <transportpce.db.password><<password>></transportpce.db.password>
294                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
295          </properties>
296         </profile>
297     </profiles>
298
299
Once the project is built and karaf is started, the cfg file is generated in the etc folder with
the corresponding properties supplied in settings.xml. When devices with the OpenROADM 1.2.1 device
model are mounted, the device listener in the inventory module loads several device attributes into
various tables of the supplied database. The database structure details can be retrieved from the
file tests/inventory/initdb.sql inside the project sources. Installation scripts and a docker file
are also provided.
306
307 Key APIs and Interfaces
308 -----------------------
309
310 External API
311 ~~~~~~~~~~~~
312
The north API, interconnecting the Service Handler to higher level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring OpenROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA’s device model.
317
318 ServiceHandler Service
319 ^^^^^^^^^^^^^^^^^^^^^^
320
321 -  RPC call
322
323    -  service-create (given service-name, service-aend, service-zend)
324
325    -  service-delete (given service-name)
326
327    -  service-reroute (given service-name, service-aend, service-zend)
328
329    -  service-restoration (given service-name, service-aend, service-zend)
330
331    -  temp-service-create (given common-id, service-aend, service-zend)
332
333    -  temp-service-delete (given common-id)
334
335 -  Data structure
336
337    -  service list : made of services
338    -  temp-service list : made of temporary services
   -  service : composed of service-name, topology which describes the detailed path (list of used resources)
340
341 -  Notification
342
343    - service-rpc-result : result of service RPC
344    - service-notification : service has been added, modified or removed
345
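As an illustration, the service list maintained by the Service Handler can be read back through
RESTCONF. The sketch below is not part of TransportPCE; it assumes a default local OpenDaylight
endpoint and credentials, and that the entries follow the OpenROADM service model (a *services*
list keyed by service-name).

.. code:: python

    # Illustrative sketch only: read the service-list datastore populated by the
    # Service Handler. Endpoint, credentials and the exact JSON layout (container
    # "service-list" holding a "services" list) are assumptions based on the
    # OpenROADM service model.
    import requests

    URL = "http://localhost:8181/rests/data/org-openroadm-service:service-list"
    response = requests.get(URL, auth=("admin", "admin"),
                            headers={"Accept": "application/json"}, timeout=10)
    if response.ok:
        container = response.json().get("org-openroadm-service:service-list", {})
        for service in container.get("services", []):
            print(service.get("service-name"))
    else:
        print("request failed or no service created yet:", response.status_code)
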
346 Netconf Service
347 ^^^^^^^^^^^^^^^
348
349 -  RPC call
350
351    -  connect-device : PUT
352    -  disconnect-device : DELETE
353    -  check-connected-device : GET
354
355 -  Data Structure
356
357    -  node list : composed of netconf nodes in topology-netconf
358
359 Internal APIs
360 ~~~~~~~~~~~~~
361
Internal APIs define REST APIs to interconnect TransportPCE modules:
363
364 -   Service Handler to PCE
365 -   PCE to Topology Management
366 -   Service Handler to Renderer
367 -   Renderer to OLM
368 -   Network Model to Service Handler
369
370 Pce Service
371 ^^^^^^^^^^^
372
373 -  RPC call
374
375    -  path-computation-request (given service-name, service-aend, service-zend)
376
377    -  cancel-resource-reserve (given service-name)
378
379 -  Notification
380
381    - service-path-rpc-result : result of service RPC
382
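For test or debugging purposes, the path-computation-request RPC listed above can also be
called directly over RESTCONF. The sketch below only shows the three parameters given in the
list; the module name and the end-point skeletons are assumptions, and the full input (end-point
details, routing constraints, request headers) follows the transportpce-pce YANG model.

.. code:: python

    # Illustrative sketch only: call the PCE directly with the parameters listed
    # above. The "transportpce-pce" module name and the end-point skeletons are
    # assumptions; refer to the transportpce-pce YANG model for the complete input.
    import requests

    URL = "http://localhost:8181/rests/operations/transportpce-pce:path-computation-request"
    payload = {
        "input": {
            "service-name": "test1",
            # placeholders: fill according to the transportpce-pce model
            "service-a-end": {"node-id": "<xpdr-node-id>"},
            "service-z-end": {"node-id": "<xpdr-node-id>"}
        }
    }
    response = requests.post(URL, json=payload, auth=("admin", "admin"), timeout=30)
    print(response.status_code, response.text)
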
383 Renderer Service
384 ^^^^^^^^^^^^^^^^
385
386 -  RPC call
387
388    -  service-implementation-request (given service-name, service-aend, service-zend)
389
390    -  service-delete (given service-name)
391
392 -  Data structure
393
394    -  service path list : composed of service paths
395    -  service path : composed of service-name, path description giving the list of abstracted elements (nodes, tps, links)
396
397 -  Notification
398
399    - service-path-rpc-result : result of service RPC
400
401 Device Renderer
402 ^^^^^^^^^^^^^^^
403
404 -  RPC call
405
   -  service-path: used in SR0 as an intermediate solution to directly address the renderer
      from a REST NBI to create OCH-OTU4-ODU4 interfaces on the network port of OTN devices.

   -  otn-service-path: used in SR0 as an intermediate solution to directly address the renderer
      from a REST NBI for OTN service creation. OTN service creation through the
      service-implementation-request call from the Service Handler will be supported in later
      Magnesium releases.
413
414 Topology Management Service
415 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
416
417 -  Data structure
418
   -  network list : composed of networks (openroadm-topology, netconf-topology)
   -  node list : composed of nodes identified by their node-id
   -  link list : composed of links identified by their link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)
424
425 OLM Service
426 ^^^^^^^^^^^
427
428 -  RPC call
429
430    -  get-pm (given node-id)
431
432    -  service-power-setup
433
434    -  service-power-turndown
435
436    -  service-power-reset
437
438    -  calculate-spanloss-base
439
440    -  calculate-spanloss-current
441
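For instance, the get-pm RPC listed above can be called directly over RESTCONF to read the
performance monitoring counters of a node. The sketch below only provides the node-id given in
the list; the module name and any additional selectors (resource type, granularity, ...) are
assumptions and should be checked against the transportpce-olm YANG model.

.. code:: python

    # Illustrative sketch only: retrieve PM counters from a node through get-pm.
    # The "transportpce-olm" module name and the minimal input are assumptions;
    # the complete input is defined in the transportpce-olm YANG model.
    import requests

    URL = "http://localhost:8181/rests/operations/transportpce-olm:get-pm"
    payload = {"input": {"node-id": "<node-id>"}}
    response = requests.post(URL, json=payload, auth=("admin", "admin"), timeout=30)
    print(response.status_code, response.text)
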
442 odl-transportpce-stubmodels
443 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
444
   -  This feature provides functions to stub some of the TransportPCE modules, the PCE and
      the renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.
448
449 Interfaces to external software
450 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
451
This section defines the interfaces implemented to interconnect TransportPCE modules with other
software in order to perform specific tasks.
454
455 GNPy interface
456 ^^^^^^^^^^^^^^
457
458 -  Request structure
459
460    -  topology : composed of list of elements and connections
461    -  service : source, destination, explicit-route-objects, path-constraints
462
463 -  Response structure
464
465    -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth,
466    -  path-properties/path-route-objects : composed of path elements
467
468
469 Running transportPCE project
470 ----------------------------
471
To use the transportPCE controller, the first step is to connect the controller to optical nodes
through the NETCONF connector.
474
475 .. note::
476
    In the current version, only optical equipment compliant with OpenROADM data models is managed
    by transportPCE.

    Since the Chlorine release, the Bierman implementation of RESTCONF is no longer supported, in
    favor of RFC 8040. Thus, REST API requests must be compliant with the RFC 8040 format.
482
483
484 Connecting nodes
485 ~~~~~~~~~~~~~~~~
486
To connect a node, use the following RESTconf request:
488
489 **REST API** : *PUT /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>*
490
491 **Sample JSON Data**
492
493 .. code:: json
494
495     {
496         "node": [
497             {
498                 "node-id": "<node-id>",
499                 "netconf-node-topology:tcp-only": "false",
500                 "netconf-node-topology:reconnect-on-changed-schema": "false",
501                 "netconf-node-topology:host": "<node-ip-address>",
502                 "netconf-node-topology:default-request-timeout-millis": "120000",
503                 "netconf-node-topology:max-connection-attempts": "0",
504                 "netconf-node-topology:sleep-factor": "1.5",
505                 "netconf-node-topology:actor-response-wait-time": "5",
506                 "netconf-node-topology:concurrent-rpc-limit": "0",
507                 "netconf-node-topology:between-attempts-timeout-millis": "2000",
508                 "netconf-node-topology:port": "<netconf-port>",
509                 "netconf-node-topology:connection-timeout-millis": "20000",
510                 "netconf-node-topology:username": "<node-username>",
511                 "netconf-node-topology:password": "<node-password>",
512                 "netconf-node-topology:keepalive-delay": "300"
513             }
514         ]
515     }
516
517
Then check that the NETCONF session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.
520
521 **REST API** : *GET /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>?content=nonconfig*
522
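A minimal Python sketch of this two-step procedure is given below. It is only an illustration:
the endpoint, credentials and example node-id are assumptions, and the PUT body is the JSON
payload shown above.

.. code:: python

    # Illustrative sketch only: mount a node and poll its connection status.
    # Endpoint, credentials and the example node-id are assumptions.
    import time
    import requests

    BASE = ("http://localhost:8181/rests/data/"
            "network-topology:network-topology/topology=topology-netconf")
    AUTH = ("admin", "admin")
    NODE_ID = "ROADM-A1"

    node_payload = {
        "node": [{
            "node-id": NODE_ID,
            "netconf-node-topology:host": "<node-ip-address>",
            "netconf-node-topology:port": "<netconf-port>",
            "netconf-node-topology:username": "<node-username>",
            "netconf-node-topology:password": "<node-password>",
            "netconf-node-topology:tcp-only": "false"
            # remaining timers and retry settings as in the sample above
        }]
    }
    requests.put(f"{BASE}/node={NODE_ID}", json=node_payload, auth=AUTH,
                 headers={"Content-Type": "application/json"}, timeout=10)

    while True:
        response = requests.get(f"{BASE}/node={NODE_ID}?content=nonconfig",
                                auth=AUTH, timeout=10)
        status = None
        if response.ok:
            node = next(iter(response.json().values()))[0]
            status = node.get("netconf-node-topology:connection-status")
        print("connection-status:", status)
        if status == "connected":
            break
        time.sleep(5)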
523
524 Node configuration discovery
525 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
526
Once the controller is connected to the node, the transportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical port related to transmission. All *circuit-packs* inside the node configuration are
analyzed.
531
Use the following RESTconf URI to check the result of that function, internally named *portMapping*.
533
534 **REST API** : *GET /rests/data/transportpce-portmapping:network*
535
536 .. note::
537
538     In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
539         * rdm: ROADM device (optical switch)
540         * xpdr: Xponder device (device that converts client to optical channel interface)
541         * ila: in line amplifier (optical amplifier)
542         * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)
543
544     TransportPCE currently supports rdm and xpdr
545
Depending on the kind of OpenROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:
548
549 -  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on a rdm equipment
550 -  SRG<srg-number>-PP<port-number>: created on the client port of a srg on a rdm equipment
551 -  XPDR<number>-CLIENT<port-number>: created on the client port of a xpdr equipment
552 -  XPDR<number>-NETWORK<port-number>: created on the line port of a xpdr equipment
553
554     For further details on openROADM device models, see `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.
555
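The sketch below (illustrative only, default endpoint and credentials assumed) lists the Logical
Connection Points discovered by the portMapping function; the node/mapping attribute names are
taken from the transportpce-portmapping model as an assumption.

.. code:: python

    # Illustrative sketch only: list the Logical Connection Points created by
    # portMapping. The "nodes"/"mapping" attribute names are assumptions based on
    # the transportpce-portmapping YANG model.
    import requests

    URL = "http://localhost:8181/rests/data/transportpce-portmapping:network"
    response = requests.get(URL, auth=("admin", "admin"),
                            headers={"Accept": "application/json"}, timeout=10)
    network = response.json().get("transportpce-portmapping:network", {})
    for node in network.get("nodes", []):
        print("node:", node.get("node-id"))
        for mapping in node.get("mapping", []):
            print("  ", mapping.get("logical-connection-point"),
                  "->", mapping.get("supporting-port"))
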
556 Optical Network topology
557 ~~~~~~~~~~~~~~~~~~~~~~~~
558
Before creating an optical connectivity service, your topology must contain at least two xpdr
devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
created by transportPCE. Nevertheless, depending on the configuration inside the optical nodes, this
topology can be partial. Check that a link of type *ROADMtoROADM* exists between two adjacent rdm
nodes.
564
565 **REST API** : *GET /rests/data/ietf-network:networks/network=openroadm-topology*
566
If this is not the case, you need to manually complement the topology with *ROADMtoROADM* links
using the following REST RPC:
569
570
571 **REST API** : *POST /rests/operations/transportpce-networkutils:init-roadm-nodes*
572
573 **Sample JSON Data**
574
575 .. code:: json
576
577     {
578       "input": {
579         "rdm-a-node": "<node-id-A>",
580         "deg-a-num": "<degree-A-number>",
581         "termination-point-a": "<Logical-Connection-Point>",
582         "rdm-z-node": "<node-id-Z>",
583         "deg-z-num": "<degree-Z-number>",
584         "termination-point-z": "<Logical-Connection-Point>"
585       }
586     }
587
588 *<Logical-Connection-Point> comes from the portMapping function*.
589
590 Unidirectional links between xpdr and rdm nodes must be created manually. To that end use the two
591 following REST RPCs:
592
593 From xpdr to rdm:
594 ^^^^^^^^^^^^^^^^^
595
596 **REST API** : *POST /rests/operations/transportpce-networkutils:init-xpdr-rdm-links*
597
598 **Sample JSON Data**
599
600 .. code:: json
601
602     {
603       "input": {
604         "links-input": {
605           "xpdr-node": "<xpdr-node-id>",
606           "xpdr-num": "1",
607           "network-num": "<xpdr-network-port-number>",
608           "rdm-node": "<rdm-node-id>",
609           "srg-num": "<srg-number>",
610           "termination-point-num": "<Logical-Connection-Point>"
611         }
612       }
613     }
614
615 From rdm to xpdr:
616 ^^^^^^^^^^^^^^^^^
617
618 **REST API** : *POST /rests/operations/transportpce-networkutils:init-rdm-xpdr-links*
619
620 **Sample JSON Data**
621
622 .. code:: json
623
624     {
625       "input": {
626         "links-input": {
627           "xpdr-node": "<xpdr-node-id>",
628           "xpdr-num": "1",
629           "network-num": "<xpdr-network-port-number>",
630           "rdm-node": "<rdm-node-id>",
631           "srg-num": "<srg-number>",
632           "termination-point-num": "<Logical-Connection-Point>"
633         }
634       }
635     }
636
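The three RPCs above can be chained from a small script when the topology has to be complemented
manually. The sketch below is illustrative only (default endpoint and credentials assumed) and
simply re-uses the JSON bodies shown above.

.. code:: python

    # Illustrative sketch only: post the ROADM-to-ROADM and XPDR<->ROADM link
    # initialization RPCs with the payloads documented above.
    import requests

    BASE = "http://localhost:8181/rests/operations/transportpce-networkutils"
    AUTH = ("admin", "admin")

    roadm_to_roadm = {"input": {
        "rdm-a-node": "<node-id-A>", "deg-a-num": "<degree-A-number>",
        "termination-point-a": "<Logical-Connection-Point>",
        "rdm-z-node": "<node-id-Z>", "deg-z-num": "<degree-Z-number>",
        "termination-point-z": "<Logical-Connection-Point>"}}

    xpdr_rdm_links = {"input": {"links-input": {
        "xpdr-node": "<xpdr-node-id>", "xpdr-num": "1",
        "network-num": "<xpdr-network-port-number>",
        "rdm-node": "<rdm-node-id>", "srg-num": "<srg-number>",
        "termination-point-num": "<Logical-Connection-Point>"}}}

    for rpc, payload in (("init-roadm-nodes", roadm_to_roadm),
                         ("init-xpdr-rdm-links", xpdr_rdm_links),
                         ("init-rdm-xpdr-links", xpdr_rdm_links)):
        response = requests.post(f"{BASE}:{rpc}", json=payload, auth=AUTH, timeout=30)
        print(rpc, response.status_code)
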
637 OTN topology
638 ~~~~~~~~~~~~
639
Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
or SWITCH type connected to two different rdm devices. To check that these xpdrs are present in the
OTN topology, use the following request on the REST API:
643
644 **REST API** : *GET /rests/data/ietf-network:networks/network=otn-topology*
645
An optical connectivity service shall have been created in a first step. Since Magnesium SR2, the
OTN links are automatically populated in the topology after the OCH, OTU4 and ODU4 interfaces have
been created on the two network ports of the xpdr.
649
650 Creating a service
651 ~~~~~~~~~~~~~~~~~~
652
Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
network. Two different kinds of end-to-end "optical" services are managed by TransportPCE:

- 100GE/400GE services from client port to client port of two transponders (TPDR)
- Optical Channel (OC) services from client add/drop port (PP port of SRG) to client add/drop port
  of two ROADMs.
658
For these services, TransportPCE automatically invokes the *renderer* module to create all required
interfaces and cross-connections on each device supporting the service.
As an example, the creation of a 100GE service implies, among other things, the creation of OCH or
Optical Tributary Signal (OTSi), OTU4 and ODU4 interfaces on the network port of TPDR devices.
The creation of a 400GE service implies the creation of OTSi, OTUC4, ODUC4 and ODU4 interfaces on
the network port of TPDR devices.
665
Since Magnesium SR2, the *service handler* module directly manages some end-to-end OTN
connectivity services.
Before creating a low-order OTN service (1GE or 10GE services terminating on the client port of a
MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container exists and has previously
been configured (i.e. structured) to support low-order OTN containers.
Thus, OTN service creation implies three steps:

1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
675
676 The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.
677
678
679 100GE service creation
680 ^^^^^^^^^^^^^^^^^^^^^^
681
682 Use the following REST RPC to invoke *service handler* module in order to create a bidirectional
683 end-to-end optical connectivity service between two xpdr over an optical network composed of rdm
684 nodes.
685
686 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
687
688 **Sample JSON Data**
689
690 .. code:: json
691
692     {
693         "input": {
694             "sdnc-request-header": {
695                 "request-id": "request-1",
696                 "rpc-action": "service-create",
697                 "request-system-id": "appname"
698             },
699             "service-name": "test1",
700             "common-id": "commonId",
701             "connection-type": "service",
702             "service-a-end": {
703                 "service-rate": "100",
704                 "node-id": "<xpdr-node-id>",
705                 "service-format": "Ethernet",
706                 "clli": "<ccli-name>",
707                 "tx-direction": [{
708                     "port": {
709                         "port-device-name": "<xpdr-client-port>",
710                         "port-type": "fixed",
711                         "port-name": "<xpdr-client-port-number>",
712                         "port-rack": "000000.00",
713                         "port-shelf": "Chassis#1"
714                     },
715                     "lgx": {
716                         "lgx-device-name": "Some lgx-device-name",
717                         "lgx-port-name": "Some lgx-port-name",
718                         "lgx-port-rack": "000000.00",
719                         "lgx-port-shelf": "00"
720                     },
721                     "index": 0
722                 }],
723                 "rx-direction": [{
724                     "port": {
725                         "port-device-name": "<xpdr-client-port>",
726                         "port-type": "fixed",
727                         "port-name": "<xpdr-client-port-number>",
728                         "port-rack": "000000.00",
729                         "port-shelf": "Chassis#1"
730                     },
731                     "lgx": {
732                         "lgx-device-name": "Some lgx-device-name",
733                         "lgx-port-name": "Some lgx-port-name",
734                         "lgx-port-rack": "000000.00",
735                         "lgx-port-shelf": "00"
736                     },
737                     "index": 0
738                 }],
739                 "optic-type": "gray"
740             },
741             "service-z-end": {
742                 "service-rate": "100",
743                 "node-id": "<xpdr-node-id>",
744                 "service-format": "Ethernet",
745                 "clli": "<ccli-name>",
746                 "tx-direction": [{
747                     "port": {
748                         "port-device-name": "<xpdr-client-port>",
749                         "port-type": "fixed",
750                         "port-name": "<xpdr-client-port-number>",
751                         "port-rack": "000000.00",
752                         "port-shelf": "Chassis#1"
753                     },
754                     "lgx": {
755                         "lgx-device-name": "Some lgx-device-name",
756                         "lgx-port-name": "Some lgx-port-name",
757                         "lgx-port-rack": "000000.00",
758                         "lgx-port-shelf": "00"
759                     },
760                     "index": 0
761                 }],
762                 "rx-direction": [{
763                     "port": {
764                         "port-device-name": "<xpdr-client-port>",
765                         "port-type": "fixed",
766                         "port-name": "<xpdr-client-port-number>",
767                         "port-rack": "000000.00",
768                         "port-shelf": "Chassis#1"
769                     },
770                     "lgx": {
771                         "lgx-device-name": "Some lgx-device-name",
772                         "lgx-port-name": "Some lgx-port-name",
773                         "lgx-port-rack": "000000.00",
774                         "lgx-port-shelf": "00"
775                     },
776                     "index": 0
777                 }],
778                 "optic-type": "gray"
779             },
780             "due-date": "yyyy-mm-ddT00:00:01Z",
781             "operator-contact": "some-contact-info"
782         }
783     }
784
The most important parameters for this REST RPC are the identification of the two physical client
ports on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* to implement the end-to-end path into
the devices.
788
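The sketch below (illustrative only, default endpoint and credentials assumed) shows how such a
request can be submitted programmatically and how the resulting entry can then be read back from
the service list; the *service-create.json* file is a hypothetical local copy of the JSON body
shown above.

.. code:: python

    # Illustrative sketch only: submit the service-create payload shown above and
    # read back the service-list. "service-create.json" is assumed to contain that
    # JSON body; endpoint and credentials are assumptions as well.
    import json
    import requests

    AUTH = ("admin", "admin")
    CREATE_URL = "http://localhost:8181/rests/operations/org-openroadm-service:service-create"
    LIST_URL = "http://localhost:8181/rests/data/org-openroadm-service:service-list"

    with open("service-create.json") as payload_file:
        service_create_body = json.load(payload_file)

    response = requests.post(CREATE_URL, json=service_create_body, auth=AUTH, timeout=30)
    print("service-create:", response.status_code, response.text)

    # Service creation is asynchronous: the final result is published through the
    # service-rpc-result notification and reflected in the service-list datastore.
    print(requests.get(LIST_URL, auth=AUTH, timeout=10).json())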
789
790 OC service creation
791 ^^^^^^^^^^^^^^^^^^^
792
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end Optical Channel (OC) connectivity service between two add/drop ports (PP port of an SRG
node) over an optical network composed only of rdm nodes.
796
797 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
798
799 **Sample JSON Data**
800
801 .. code:: json
802
803     {
804         "input": {
805             "sdnc-request-header": {
806                 "request-id": "request-1",
807                 "rpc-action": "service-create",
808                 "request-system-id": "appname"
809             },
810             "service-name": "something",
811             "common-id": "commonId",
812             "connection-type": "roadm-line",
813             "service-a-end": {
814                 "service-rate": "100",
815                 "node-id": "<xpdr-node-id>",
816                 "service-format": "OC",
817                 "clli": "<ccli-name>",
818                 "tx-direction": [{
819                     "port": {
820                         "port-device-name": "<xpdr-client-port>",
821                         "port-type": "fixed",
822                         "port-name": "<xpdr-client-port-number>",
823                         "port-rack": "000000.00",
824                         "port-shelf": "Chassis#1"
825                     },
826                     "lgx": {
827                         "lgx-device-name": "Some lgx-device-name",
828                         "lgx-port-name": "Some lgx-port-name",
829                         "lgx-port-rack": "000000.00",
830                         "lgx-port-shelf": "00"
831                     },
832                     "index": 0
833                 }],
834                 "rx-direction": [{
835                     "port": {
836                         "port-device-name": "<xpdr-client-port>",
837                         "port-type": "fixed",
838                         "port-name": "<xpdr-client-port-number>",
839                         "port-rack": "000000.00",
840                         "port-shelf": "Chassis#1"
841                     },
842                     "lgx": {
843                         "lgx-device-name": "Some lgx-device-name",
844                         "lgx-port-name": "Some lgx-port-name",
845                         "lgx-port-rack": "000000.00",
846                         "lgx-port-shelf": "00"
847                     },
848                     "index": 0
849                 }],
850                 "optic-type": "gray"
851             },
852             "service-z-end": {
853                 "service-rate": "100",
854                 "node-id": "<xpdr-node-id>",
855                 "service-format": "OC",
856                 "clli": "<ccli-name>",
857                 "tx-direction": [{
858                     "port": {
859                         "port-device-name": "<xpdr-client-port>",
860                         "port-type": "fixed",
861                         "port-name": "<xpdr-client-port-number>",
862                         "port-rack": "000000.00",
863                         "port-shelf": "Chassis#1"
864                     },
865                     "lgx": {
866                         "lgx-device-name": "Some lgx-device-name",
867                         "lgx-port-name": "Some lgx-port-name",
868                         "lgx-port-rack": "000000.00",
869                         "lgx-port-shelf": "00"
870                     },
871                     "index": 0
872                 }],
873                 "rx-direction": [{
874                     "port": {
875                         "port-device-name": "<xpdr-client-port>",
876                         "port-type": "fixed",
877                         "port-name": "<xpdr-client-port-number>",
878                         "port-rack": "000000.00",
879                         "port-shelf": "Chassis#1"
880                     },
881                     "lgx": {
882                         "lgx-device-name": "Some lgx-device-name",
883                         "lgx-port-name": "Some lgx-port-name",
884                         "lgx-port-rack": "000000.00",
885                         "lgx-port-shelf": "00"
886                     },
887                     "index": 0
888                 }],
889                 "optic-type": "gray"
890             },
891             "due-date": "yyyy-mm-ddT00:00:01Z",
892             "operator-contact": "some-contact-info"
893         }
894     }
895
896 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
897 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
898 the devices.
899
900 OTN OCH-OTU4 service creation
901 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
902
Use the following REST RPC to invoke the *service handler* module in order to create over the optical
infrastructure a bidirectional end-to-end OTU4 over an optical wavelength connectivity service
between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service configures the
optical network infrastructure composed of rdm nodes.
907
908 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
909
910 **Sample JSON Data**
911
912 .. code:: json
913
914     {
915         "input": {
916             "sdnc-request-header": {
917                 "request-id": "request-1",
918                 "rpc-action": "service-create",
919                 "request-system-id": "appname"
920             },
921             "service-name": "something",
922             "common-id": "commonId",
923             "connection-type": "infrastructure",
924             "service-a-end": {
925                 "service-rate": "100",
926                 "node-id": "<xpdr-node-id>",
927                 "service-format": "OTU",
928                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
929                 "clli": "<ccli-name>",
930                 "tx-direction": [{
931                     "port": {
932                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
933                         "port-type": "fixed",
934                         "port-name": "<xpdr-network-port-in-otn-topology>",
935                         "port-rack": "000000.00",
936                         "port-shelf": "Chassis#1"
937                     },
938                     "lgx": {
939                         "lgx-device-name": "Some lgx-device-name",
940                         "lgx-port-name": "Some lgx-port-name",
941                         "lgx-port-rack": "000000.00",
942                         "lgx-port-shelf": "00"
943                     },
944                     "index": 0
945                 }],
946                 "rx-direction": [{
947                     "port": {
948                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
949                         "port-type": "fixed",
950                         "port-name": "<xpdr-network-port-in-otn-topology>",
951                         "port-rack": "000000.00",
952                         "port-shelf": "Chassis#1"
953                     },
954                     "lgx": {
955                         "lgx-device-name": "Some lgx-device-name",
956                         "lgx-port-name": "Some lgx-port-name",
957                         "lgx-port-rack": "000000.00",
958                         "lgx-port-shelf": "00"
959                     },
960                     "index": 0
961                 }],
962                 "optic-type": "gray"
963             },
964             "service-z-end": {
965                 "service-rate": "100",
966                 "node-id": "<xpdr-node-id>",
967                 "service-format": "OTU",
968                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
969                 "clli": "<ccli-name>",
970                 "tx-direction": [{
971                     "port": {
972                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
973                         "port-type": "fixed",
974                         "port-name": "<xpdr-network-port-in-otn-topology>",
975                         "port-rack": "000000.00",
976                         "port-shelf": "Chassis#1"
977                     },
978                     "lgx": {
979                         "lgx-device-name": "Some lgx-device-name",
980                         "lgx-port-name": "Some lgx-port-name",
981                         "lgx-port-rack": "000000.00",
982                         "lgx-port-shelf": "00"
983                     },
984                     "index": 0
985                 }],
986                 "rx-direction": [{
987                     "port": {
988                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
989                         "port-type": "fixed",
990                         "port-name": "<xpdr-network-port-in-otn-topology>",
991                         "port-rack": "000000.00",
992                         "port-shelf": "Chassis#1"
993                     },
994                     "lgx": {
995                         "lgx-device-name": "Some lgx-device-name",
996                         "lgx-port-name": "Some lgx-port-name",
997                         "lgx-port-rack": "000000.00",
998                         "lgx-port-shelf": "00"
999                     },
1000                     "index": 0
1001                 }],
1002                 "optic-type": "gray"
1003             },
1004             "due-date": "yyyy-mm-ddT00:00:01Z",
1005             "operator-contact": "some-contact-info"
1006         }
1007     }
1008
1009 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
1010 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
1011 the devices.
1012
1013 OTSi-OTUC4 service creation
1014 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
1015
Use the following REST RPC to invoke the *service handler* module in order to create over the optical
infrastructure a bidirectional end-to-end OTUC4 over an Optical Tributary Signal connectivity service
between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service configures the
optical network infrastructure composed of rdm nodes.
1020
1021 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
1022
1023 **Sample JSON Data**
1024
1025 .. code:: json
1026
1027     {
1028         "input": {
1029             "sdnc-request-header": {
1030                 "request-id": "request-1",
1031                 "rpc-action": "service-create",
1032                 "request-system-id": "appname"
1033             },
1034             "service-name": "something",
1035             "common-id": "commonId",
1036             "connection-type": "infrastructure",
1037             "service-a-end": {
1038                 "service-rate": "400",
1039                 "node-id": "<xpdr-node-id>",
1040                 "service-format": "OTU",
1041                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1042                 "clli": "<ccli-name>",
1043                 "tx-direction": [{
1044                     "port": {
1045                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1046                         "port-type": "fixed",
1047                         "port-name": "<xpdr-network-port-in-otn-topology>",
1048                         "port-rack": "000000.00",
1049                         "port-shelf": "Chassis#1"
1050                     },
1051                     "lgx": {
1052                         "lgx-device-name": "Some lgx-device-name",
1053                         "lgx-port-name": "Some lgx-port-name",
1054                         "lgx-port-rack": "000000.00",
1055                         "lgx-port-shelf": "00"
1056                     },
1057                     "index": 0
1058                 }],
1059                 "rx-direction": [{
1060                     "port": {
1061                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1062                         "port-type": "fixed",
1063                         "port-name": "<xpdr-network-port-in-otn-topology>",
1064                         "port-rack": "000000.00",
1065                         "port-shelf": "Chassis#1"
1066                     },
1067                     "lgx": {
1068                         "lgx-device-name": "Some lgx-device-name",
1069                         "lgx-port-name": "Some lgx-port-name",
1070                         "lgx-port-rack": "000000.00",
1071                         "lgx-port-shelf": "00"
1072                     },
1073                     "index": 0
1074                 }],
1075                 "optic-type": "gray"
1076             },
1077             "service-z-end": {
1078                 "service-rate": "400",
1079                 "node-id": "<xpdr-node-id>",
1080                 "service-format": "OTU",
1081                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1082                 "clli": "<ccli-name>",
1083                 "tx-direction": [{
1084                     "port": {
1085                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1086                         "port-type": "fixed",
1087                         "port-name": "<xpdr-network-port-in-otn-topology>",
1088                         "port-rack": "000000.00",
1089                         "port-shelf": "Chassis#1"
1090                     },
1091                     "lgx": {
1092                         "lgx-device-name": "Some lgx-device-name",
1093                         "lgx-port-name": "Some lgx-port-name",
1094                         "lgx-port-rack": "000000.00",
1095                         "lgx-port-shelf": "00"
1096                     },
1097                     "index": 0
1098                 }],
1099                 "rx-direction": [{
1100                     "port": {
1101                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1102                         "port-type": "fixed",
1103                         "port-name": "<xpdr-network-port-in-otn-topology>",
1104                         "port-rack": "000000.00",
1105                         "port-shelf": "Chassis#1"
1106                     },
1107                     "lgx": {
1108                         "lgx-device-name": "Some lgx-device-name",
1109                         "lgx-port-name": "Some lgx-port-name",
1110                         "lgx-port-rack": "000000.00",
1111                         "lgx-port-shelf": "00"
1112                     },
1113                     "index": 0
1114                 }],
1115                 "optic-type": "gray"
1116             },
1117             "due-date": "yyyy-mm-ddT00:00:01Z",
1118             "operator-contact": "some-contact-info"
1119         }
1120     }
1121
1122 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
1123 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
1124 the devices.
1125
One shall note that in Phosphorus SR0, as the OpenROADM 400G specifications are not available
(neither in the GNPy libraries, nor in the *PCE* module), path validation will be performed using
the same assumptions as for 100G. This means the path may be validated even though the optical
performance does not reach the expected levels. This allows testing OpenROADM devices implementing
B100G rates, but shall not be used in operational conditions. The support for higher rate
impairment-aware path computation will be introduced across the Phosphorus release train.
1132
1133 ODUC4 service creation
1134 ^^^^^^^^^^^^^^^^^^^^^^
1135
For ODUC4 service creation, the REST RPC used to invoke the *service handler* module in order to
create an ODUC4 over the OTSi-OTUC4 has the same format as the RPC used for the creation of the
latter. Only "service-format" needs to be changed to "ODU", and "otu-service-rate":
"org-openroadm-otn-common-types:OTUCn" needs to be replaced by "odu-service-rate":
"org-openroadm-otn-common-types:ODUCn" in both the service-a-end and service-z-end containers.
1141
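Expressed on the JSON body, this transformation boils down to the following illustrative Python
fragment, where ``otuc4_request`` is a hypothetical skeleton of the OTSi-OTUC4 payload of the
previous section.

.. code:: python

    # Illustrative sketch only: derive the ODUC4 request from the OTSi-OTUC4 one.
    # "otuc4_request" is a minimal skeleton of the JSON body shown in the previous
    # section (see that section for the full payload).
    import copy

    otuc4_request = {"input": {
        "service-a-end": {"service-format": "OTU",
                          "otu-service-rate": "org-openroadm-otn-common-types:OTUCn"},
        "service-z-end": {"service-format": "OTU",
                          "otu-service-rate": "org-openroadm-otn-common-types:OTUCn"}}}

    oduc4_request = copy.deepcopy(otuc4_request)
    for end in ("service-a-end", "service-z-end"):
        end_point = oduc4_request["input"][end]
        end_point["service-format"] = "ODU"
        end_point.pop("otu-service-rate", None)
        end_point["odu-service-rate"] = "org-openroadm-otn-common-types:ODUCn"
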
1142 OTN HO-ODU4 service creation
1143 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1144
1145 Use the following REST RPC to invoke *service handler* module in order to create over the optical
1146 infrastructure a bidirectional end-to-end ODU4 OTN service over an OTU4 and structured to support
1147 low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between two network
1148 ports of OTN Xponder (MUXPDR or SWITCH).
1149
1150 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
1151
1152 **Sample JSON Data**
1153
1154 .. code:: json
1155
1156     {
1157         "input": {
1158             "sdnc-request-header": {
1159                 "request-id": "request-1",
1160                 "rpc-action": "service-create",
1161                 "request-system-id": "appname"
1162             },
1163             "service-name": "something",
1164             "common-id": "commonId",
1165             "connection-type": "infrastructure",
1166             "service-a-end": {
1167                 "service-rate": "100",
1168                 "node-id": "<xpdr-node-id>",
1169                 "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
1171                 "clli": "<ccli-name>",
1172                 "tx-direction": [{
1173                     "port": {
1174                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1175                         "port-type": "fixed",
1176                         "port-name": "<xpdr-network-port-in-otn-topology>",
1177                         "port-rack": "000000.00",
1178                         "port-shelf": "Chassis#1"
1179                     },
1180                     "lgx": {
1181                         "lgx-device-name": "Some lgx-device-name",
1182                         "lgx-port-name": "Some lgx-port-name",
1183                         "lgx-port-rack": "000000.00",
1184                         "lgx-port-shelf": "00"
1185                     },
1186                     "index": 0
1187                 }],
1188                 "rx-direction": [{
1189                     "port": {
1190                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1191                         "port-type": "fixed",
1192                         "port-name": "<xpdr-network-port-in-otn-topology>",
1193                         "port-rack": "000000.00",
1194                         "port-shelf": "Chassis#1"
1195                     },
1196                     "lgx": {
1197                         "lgx-device-name": "Some lgx-device-name",
1198                         "lgx-port-name": "Some lgx-port-name",
1199                         "lgx-port-rack": "000000.00",
1200                         "lgx-port-shelf": "00"
1201                     },
1202                     "index": 0
1203                 }],
1204                 "optic-type": "gray"
1205             },
1206             "service-z-end": {
1207                 "service-rate": "100",
1208                 "node-id": "<xpdr-node-id>",
1209                 "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
1211                 "clli": "<ccli-name>",
1212                 "tx-direction": [{
1213                     "port": {
1214                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1215                         "port-type": "fixed",
1216                         "port-name": "<xpdr-network-port-in-otn-topology>",
1217                         "port-rack": "000000.00",
1218                         "port-shelf": "Chassis#1"
1219                     },
1220                     "lgx": {
1221                         "lgx-device-name": "Some lgx-device-name",
1222                         "lgx-port-name": "Some lgx-port-name",
1223                         "lgx-port-rack": "000000.00",
1224                         "lgx-port-shelf": "00"
1225                     },
1226                     "index": 0
1227                 }],
1228                 "rx-direction": [{
1229                     "port": {
1230                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1231                         "port-type": "fixed",
1232                         "port-name": "<xpdr-network-port-in-otn-topology>",
1233                         "port-rack": "000000.00",
1234                         "port-shelf": "Chassis#1"
1235                     },
1236                     "lgx": {
1237                         "lgx-device-name": "Some lgx-device-name",
1238                         "lgx-port-name": "Some lgx-port-name",
1239                         "lgx-port-rack": "000000.00",
1240                         "lgx-port-shelf": "00"
1241                     },
1242                     "index": 0
1243                 }],
1244                 "optic-type": "gray"
1245             },
1246             "due-date": "yyyy-mm-ddT00:00:01Z",
1247             "operator-contact": "some-contact-info"
1248         }
1249     }
1250
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain OTU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* to implement the end-to-end path into the devices.
1254
1255 OTN 10GE-ODU2e service creation
1256 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1257
1258 Use the following REST RPC to invoke *service handler* module in order to create over the OTN
1259 infrastructure a bidirectional end-to-end 10GE-ODU2e OTN service over an ODU4.
1260 Such a service must be created between two client ports of OTN Xponder (MUXPDR or SWITCH)
1261 configured to support 10GE interfaces.
1262
1263 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
1264
1265 **Sample JSON Data**
1266
1267 .. code:: json
1268
1269     {
1270         "input": {
1271             "sdnc-request-header": {
1272                 "request-id": "request-1",
1273                 "rpc-action": "service-create",
1274                 "request-system-id": "appname"
1275             },
1276             "service-name": "something",
1277             "common-id": "commonId",
1278             "connection-type": "service",
1279             "service-a-end": {
1280                 "service-rate": "10",
1281                 "node-id": "<xpdr-node-id>",
1282                 "service-format": "Ethernet",
1283                 "clli": "<ccli-name>",
1284                 "subrate-eth-sla": {
1285                     "subrate-eth-sla": {
1286                         "committed-info-rate": "10000",
1287                         "committed-burst-size": "64"
1288                     }
1289                 },
1290                 "tx-direction": [{
1291                     "port": {
1292                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1293                         "port-type": "fixed",
1294                         "port-name": "<xpdr-client-port-in-otn-topology>",
1295                         "port-rack": "000000.00",
1296                         "port-shelf": "Chassis#1"
1297                     },
1298                     "lgx": {
1299                         "lgx-device-name": "Some lgx-device-name",
1300                         "lgx-port-name": "Some lgx-port-name",
1301                         "lgx-port-rack": "000000.00",
1302                         "lgx-port-shelf": "00"
1303                     },
1304                     "index": 0
1305                 }],
1306                 "rx-direction": [{
1307                     "port": {
1308                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1309                         "port-type": "fixed",
1310                         "port-name": "<xpdr-client-port-in-otn-topology>",
1311                         "port-rack": "000000.00",
1312                         "port-shelf": "Chassis#1"
1313                     },
1314                     "lgx": {
1315                         "lgx-device-name": "Some lgx-device-name",
1316                         "lgx-port-name": "Some lgx-port-name",
1317                         "lgx-port-rack": "000000.00",
1318                         "lgx-port-shelf": "00"
1319                     },
1320                     "index": 0
1321                 }],
1322                 "optic-type": "gray"
1323             },
1324             "service-z-end": {
1325                 "service-rate": "10",
1326                 "node-id": "<xpdr-node-id>",
1327                 "service-format": "Ethernet",
1328                 "clli": "<ccli-name>",
1329                 "subrate-eth-sla": {
1330                     "subrate-eth-sla": {
1331                         "committed-info-rate": "10000",
1332                         "committed-burst-size": "64"
1333                     }
1334                 },
1335                 "tx-direction": [{
1336                     "port": {
1337                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1338                         "port-type": "fixed",
1339                         "port-name": "<xpdr-client-port-in-otn-topology>",
1340                         "port-rack": "000000.00",
1341                         "port-shelf": "Chassis#1"
1342                     },
1343                     "lgx": {
1344                         "lgx-device-name": "Some lgx-device-name",
1345                         "lgx-port-name": "Some lgx-port-name",
1346                         "lgx-port-rack": "000000.00",
1347                         "lgx-port-shelf": "00"
1348                     },
1349                     "index": 0
1350                 }],
1351                 "rx-direction": [{
1352                     "port": {
1353                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1354                         "port-type": "fixed",
1355                         "port-name": "<xpdr-client-port-in-otn-topology>",
1356                         "port-rack": "000000.00",
1357                         "port-shelf": "Chassis#1"
1358                     },
1359                     "lgx": {
1360                         "lgx-device-name": "Some lgx-device-name",
1361                         "lgx-port-name": "Some lgx-port-name",
1362                         "lgx-port-rack": "000000.00",
1363                         "lgx-port-shelf": "00"
1364                     },
1365                     "index": 0
1366                 }],
1367                 "optic-type": "gray"
1368             },
1369             "due-date": "yyyy-mm-ddT00:00:01Z",
1370             "operator-contact": "some-contact-info"
1371         }
1372     }
1373
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* to implement the end-to-end path into the devices.
1377
1378
1379 .. note::
    Since Magnesium SR2, the entries corresponding to OCH-OTU4, ODU4 or 10GE-ODU2e services are
    updated in the service-list datastore.
1382
1383 .. note::
    trib-slot is used when the equipment supports contiguous trib-slot allocation (supported since
    Magnesium SR0). The trib-slot provided corresponds to the first of the used trib-slots.
    complex-trib-slots will be used when the equipment does not support contiguous trib-slot
    allocation. In this case, a list of the different trib-slots to be used shall be provided.
    The support for non-contiguous trib-slot allocation is planned for a later release.
1389
1390 Deleting a service
1391 ~~~~~~~~~~~~~~~~~~
1392
1393 Deleting any kind of service
1394 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1395
1396 Use the following REST RPC to invoke *service handler* module in order to delete a given optical
1397 connectivity service.
1398
1399 **REST API** : *POST /rests/operations/org-openroadm-service:service-delete*
1400
1401 **Sample JSON Data**
1402
1403 .. code:: json
1404
1405     {
1406         "input": {
1407             "sdnc-request-header": {
1408                 "request-id": "request-1",
1409                 "rpc-action": "service-delete",
1410                 "request-system-id": "appname",
1411                 "notification-url": "http://localhost:8585/NotificationServer/notify"
1412             },
1413             "service-delete-req-info": {
1414                 "service-name": "something",
1415                 "tail-retention": "no"
1416             }
1417         }
1418     }
1419
The most important parameter for this REST RPC is the *service-name*.
1421
1422
1423 .. note::
    Deleting OTN services implies proceeding in the reverse way to their creation. Thus, OTN
    service deletion must respect the three following steps:

    1. delete first all 10GE services supported over any ODU4 to be deleted
    2. delete ODU4
    3. delete OCH-OTU4 supporting the just deleted ODU4
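
For instance, assuming hypothetical service names, this deletion sequence translates into three successive
*service-delete* requests whose *service-delete-req-info* containers would be, in order:

.. code:: json

    [
        { "service-name": "10GE-service-1", "tail-retention": "no" },
        { "service-name": "HO-ODU4-service", "tail-retention": "no" },
        { "service-name": "OCH-OTU4-service", "tail-retention": "no" }
    ]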
1429
1430 Invoking PCE module
1431 ~~~~~~~~~~~~~~~~~~~
1432
1433 Use the following REST RPCs to invoke *PCE* module in order to check connectivity between xponder
1434 nodes and the availability of a supporting optical connectivity between the network-ports of the
1435 nodes.
1436
1437 Checking OTU4 service connectivity
1438 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1439
1440 **REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
1441
1442 **Sample JSON Data**
1443
1444 .. code:: json
1445
1446    {
1447       "input": {
1448            "service-name": "something",
1449            "resource-reserve": "true",
1450            "service-handler-header": {
1451              "request-id": "request1"
1452            },
1453            "service-a-end": {
1454              "service-rate": "100",
1455              "clli": "<clli-node>",
1456              "service-format": "OTU",
1457              "node-id": "<otn-node-id>"
1458            },
1459            "service-z-end": {
1460              "service-rate": "100",
1461              "clli": "<clli-node>",
1462              "service-format": "OTU",
1463              "node-id": "<otn-node-id>"
1464              },
1465            "pce-routing-metric": "hop-count"
1466        }
1467    }
1468
1469 .. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "openroadm-network"
    topology layer.
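
The response to this request reports whether a path could be computed. A hypothetical excerpt of a
successful response is sketched below; the exact structure follows the transportpce-pce model and may
differ between releases (in particular the *response-parameters* container carrying the computed path
description is omitted here):

.. code:: json

    {
        "transportpce-pce:output": {
            "configuration-response-common": {
                "request-id": "request1",
                "response-code": "200",
                "response-message": "Path is calculated",
                "ack-final-indicator": "Yes"
            }
        }
    }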
1472
1473 Checking ODU4 service connectivity
1474 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1475
1476 **REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
1477
1478 **Sample JSON Data**
1479
1480 .. code:: json
1481
1482    {
1483       "input": {
1484            "service-name": "something",
1485            "resource-reserve": "true",
1486            "service-handler-header": {
1487              "request-id": "request1"
1488            },
1489            "service-a-end": {
1490              "service-rate": "100",
1491              "clli": "<clli-node>",
1492              "service-format": "ODU",
1493              "node-id": "<otn-node-id>"
1494            },
1495            "service-z-end": {
1496              "service-rate": "100",
1497              "clli": "<clli-node>",
1498              "service-format": "ODU",
1499              "node-id": "<otn-node-id>"
1500              },
1501            "pce-routing-metric": "hop-count"
1502        }
1503    }
1504
1505 .. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.
1507
1508 Checking 10GE/ODU2e service connectivity
1509 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1510
1511 **REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
1512
1513 **Sample JSON Data**
1514
1515 .. code:: json
1516
1517    {
1518       "input": {
1519            "service-name": "something",
1520            "resource-reserve": "true",
1521            "service-handler-header": {
1522              "request-id": "request1"
1523            },
1524            "service-a-end": {
1525              "service-rate": "10",
1526              "clli": "<clli-node>",
1527              "service-format": "Ethernet",
1528              "node-id": "<otn-node-id>"
1529            },
1530            "service-z-end": {
1531              "service-rate": "10",
1532              "clli": "<clli-node>",
1533              "service-format": "Ethernet",
1534              "node-id": "<otn-node-id>"
1535              },
1536            "pce-routing-metric": "hop-count"
1537        }
1538    }
1539
1540 .. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.
1542
1543
1544 odl-transportpce-tapi
1545 ---------------------
1546
This feature allows the TransportPCE application to expose at its northbound interface APIs other than
those defined by the OpenROADM MSA. With this feature, TransportPCE provides part of the Transport-API
specified by the Open Networking Foundation. More specifically, the Topology, Connectivity and Notification
Service components are implemented, allowing it to:
1551
1552 1. Expose to higher level applications an abstraction of its OpenROADM topologies in the form of topologies respecting the T-API modelling.
1553 2. Create/delete connectivity services between the Service Interface Points (SIPs) exposed by the T-API topology.
1554 3. Create/Delete Notification Subscription Service to expose to higher level applications T-API notifications through a Kafka server.
1555
1556 The current version of TransportPCE implements the *tapi-topology.yang*,
1557 *tapi-connectivity.yang* and *tapi-notification.yang* models in the revision
1558 2018-12-10 (T-API v2.1.2).
1559
1560 Additionally, support for the Path Computation Service will be added in future releases, which will allow T-PCE
1561 to compute a path over the T-API topology.
1562
1563 T-API Topology Service
1564 ~~~~~~~~~~~~~~~~~~~~~~
1565
1566 -  RPC calls implemented:
1567
1568    -  get-topology-details
1569
1570    -  get-node-details
1571
1572    -  get-node-edge-point-details
1573
1574    -  get-link-details
1575
1576    -  get-topology-list
1577
1578
As in IETF or OpenROADM topologies, T-API topologies are composed of lists of nodes and links that
abstract a set of network resources. T-API specifies the *T0 - Multi-layer topology* which is, as
indicated by its name, a single topology that collapses the network logical abstraction of all network
layers. Thus, an OpenROADM device such as an OTN xponder that manages the ETH, ODU, OTU and optical
wavelength network layers will be represented in the T-API T0 topology by two nodes:
one *DSR/ODU* node and one *Photonic Media* node. These two nodes are linked together through one or
several *transitional links*, depending on the number of network/line ports on the device.
1586
Aluminium SR2 comes with a complete refactoring of this module, which handles in the same way the
multi-layer abstraction of any Xponder terminal device, whether it is a 100G transponder, an OTN
muxponder or an OTN switch. For all these devices, the implementation ensures that only relevant
ports appear in the resulting TAPI topology abstraction. In other words, only client/network ports
that are indirectly/directly connected to the ROADM infrastructure are considered for the abstraction.
Moreover, the whole ROADM infrastructure of the network is abstracted into a single photonic
node. Therefore, a pair of unidirectional xponder-output/xponder-input links present in *openroadm-topology*
is represented by a bidirectional *OMS* link in the TAPI topology.
In the same way, a pair of unidirectional OTN links (OTU4, ODU4) present in *otn-topology* is also
represented by a bidirectional OTN link in the TAPI topology, while retaining their available bandwidth
characteristics.
1598
1599 Phosphorus SR0 extends the T-API topology service implementation by bringing a fully described topology.
1600 *T0 - Full Multi-layer topology* is derived from the existing *T0 - Multi-layer topology*. But the ROADM
1601 infrastructure is not abstracted and the higher level application can get more details on the composition
1602 of the ROADM infrastructure controlled by TransportPCE. Each ROADM node found in the *openroadm-network*
1603 is converted into a *Photonic Media* node. The details of these T-API nodes are obtained from the
1604 *openroadm-topology*. Therefore, the external traffic ports of *Degree* and *SRG* nodes are represented
1605 with a set of Network Edge Points (NEPs) and SIPs belonging to the *Photonic Media* node and a pair of
1606 roadm-to-roadm links present in *openroadm-topology* is represented by a bidirectional *OMS* link in TAPI
1607 topology.
1608 Additionally, T-API topology related information is stored in TransportPCE datastore in the same way as
1609 OpenROADM topology layers. When a node is connected to the controller through the corresponding *REST API*,
1610 the T-API topology context gets updated dynamically and stored.
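
Assuming the T-API context is rooted at *tapi-common:context* as in the reference T-API model, the stored
topology information can for instance be read directly from the datastore with a RESTCONF request of the
following form (hypothetical example):

**REST API** : *GET /rests/data/tapi-common:context/tapi-topology:topology-context*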
1611
1612 .. note::
1613
    A naming nomenclature is defined to be able to map T-API and OpenROADM data, e.g.:

    - T-API_roadm_Name = OpenROADM_roadmID + T-API_layer
    - T-API_roadm_nep_Name = OpenROADM_roadmID + T-API_layer + OpenROADM_terminationPointID
1617
Three kinds of topologies are currently implemented. The first one is the *"T0 - Multi-layer topology"*
defined in the reference implementation of T-API. This topology gives an abstraction of the data coming
from openroadm-topology and otn-topology. Such a topology may be rather complex since most devices are
represented through several nodes and links.
Another topology, named *"Transponder 100GE"*, is also implemented. The latter provides a higher, much
simpler, level of abstraction for the specific case of 100GE transponders, in the form of a single
DSR node.
Lastly, the *"T0 - Full Multi-layer topology"* was added. This topology collapses the data coming
from openroadm-network, openroadm-topology and otn-topology. It gives a complete view of the optical
network as defined in the reference implementation of T-API.
1628
1629 The figure below shows an example of TAPI abstractions as performed by TransportPCE starting from Aluminium SR2.
1630
1631 .. figure:: ./images/TransportPCE-tapi-abstraction.jpg
1632    :alt: Example of T0-multi-layer TAPI abstraction in TransportPCE
1633
In this specific case, as far as the "A" side is concerned, we connect TransportPCE to two xponder
terminal devices at the netconf level:

- XPDR-A1 is a 100GE transponder and is represented by the XPDR-A1-XPDR1 node in *otn-topology*
- SPDR-SA1 is an OTN xponder that actually contains in its device configuration datastore two OTN
  xponder nodes (the OTN muxponder 10GE=>100G SPDR-SA1-XPDR1 and the OTN switch 4x100GE=>4x100G SPDR-SA1-XPDR2)

As represented on the bottom part of the figure, only one network port of XPDR-A1-XPDR1 is connected
to the ROADM infrastructure, and only one network port of the OTN muxponder is attached to the
ROADM infrastructure.
Such a network configuration results in the TAPI *T0 - Multi-layer topology* abstraction represented
in the center of the figure. Note that the OTN switch (SPDR-SA1-XPDR2), not being attached to the
ROADM infrastructure, is not abstracted.
Moreover, the 100GE transponders being connected, the TAPI *Transponder 100GE* topology results in a
single-layer DSR node with only the two Owned Node Edge Points representing the two 100GE client ports
of XPDR-A1-XPDR1 and XPDR-C1-XPDR1 respectively.
1648
1649
1650 **REST API** : *POST /rests/operations/tapi-topology:get-topology-details*
1651
1652 This request builds the TAPI *T0 - Multi-layer topology* abstraction with regard to the current
1653 state of *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
1654
1655 **Sample JSON Data**
1656
1657 .. code:: json
1658
1659     {
1660       "tapi-topology:input": {
1661         "tapi-topology:topology-id-or-name": "T0 - Multi-layer topology"
1662        }
1663     }
1664
1665 This request builds the TAPI *Transponder 100GE* abstraction with regard to the current state of
1666 *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
1667 Its main interest is to simply and directly retrieve 100GE client ports of 100G Transponders that may
1668 be connected together, through a point-to-point 100GE service running over a wavelength.
1669
1670 .. code:: json
1671
1672     {
1673       "tapi-topology:input": {
1674         "tapi-topology:topology-id-or-name": "Transponder 100GE"
1675         }
1676     }
1677
1678
1679 .. note::
1680
    As for the *T0 multi-layer* topology, only 100GE client ports whose associated 100G line
    port is connected to Add/Drop nodes of the ROADM infrastructure are retrieved, in order to
    abstract only relevant information.
1684
1685 This request builds the TAPI *T0 - Full Multi-layer* topology with respect to the information existing in
1686 the T-API topology context stored in OpenDaylight datastores.
1687
1688 .. code:: json
1689
1690     {
1691       "tapi-topology:input": {
1692         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology"
1693         }
1694     }
1695
1696 **REST API** : *POST /rests/operations/tapi-topology:get-node-details*
1697
This request returns the information, stored in the Topology Context, of the corresponding T-API node.
The user can provide either the UUID associated with the node or its name.
1700
1701 **Sample JSON Data**
1702
1703 .. code:: json
1704
1705     {
1706       "tapi-topology:input": {
1707         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1708         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA"
1709       }
1710     }
1711
1712 **REST API** : *POST /rests/operations/tapi-topology:get-node-edge-point-details*
1713
This request returns the information, stored in the Topology Context, of the corresponding T-API NEP.
The user can provide either the UUID associated with the NEP or its name.
1716
1717 **Sample JSON Data**
1718
1719 .. code:: json
1720
1721     {
1722       "tapi-topology:input": {
1723         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1724         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA",
1725         "tapi-topology:ep-id-or-name": "ROADM-A1+PHOTONIC_MEDIA+DEG1-TTP-TXRX"
1726       }
1727     }
1728
1729 **REST API** : *POST /rests/operations/tapi-topology:get-link-details*
1730
This request returns the information, stored in the Topology Context, of the corresponding T-API link.
The user can provide either the UUID associated with the link or its name.
1733
1734 **Sample JSON Data**
1735
1736 .. code:: json
1737
1738     {
1739       "tapi-topology:input": {
1740         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1741         "tapi-topology:link-id-or-name": "ROADM-C1-DEG1-DEG1-TTP-TXRXtoROADM-A1-DEG2-DEG2-TTP-TXRX"
1742       }
1743     }
1744
1745 T-API Connectivity & Common Services
1746 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1747
Phosphorus SR0 extends the T-API interface support by implementing the T-API Connectivity Service.
This interface enables a higher level controller or an orchestrator to request the creation of
connectivity services as defined in the *tapi-connectivity* model. As it is necessary to indicate the
two (or more) SIPs (or endpoints) of the connectivity service, the *tapi-common* model is implemented
to retrieve from the datastore all the information related to the SIPs in the tapi-context.
The current implementation of the connectivity service maps the *connectivity-request* into the appropriate
*openroadm-service-create* and relies on the Service Handler to perform path calculation and device
configuration. Results received from the PCE and the Renderer are mapped back into T-API to create the
corresponding Connection End Points (CEPs) and Connections in the T-API Connectivity Context, which is
stored in the datastore.
1758
1759 This first implementation includes the creation of:
1760
1761 -   ROADM-to-ROADM tapi-connectivity service (MC connectivity service)
1762 -   OTN tapi-connectivity services (OCh/OTU, OTSi/OTU & ODU connectivity services)
1763 -   Ethernet tapi-connectivity services (DSR connectivity service)
1764
1765 -  RPC calls implemented
1766
1767    -  create-connectivity-service
1768
1769    -  get-connectivity-service-details
1770
1771    -  get-connection-details
1772
1773    -  delete-connectivity-service
1774
1775    -  get-connection-end-point-details
1776
1777    -  get-connectivity-service-list
1778
1779    -  get-service-interface-point-details
1780
1781    -  get-service-interface-point-list
1782
1783 Creating a T-API Connectivity service
1784 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1785
Use the *tapi* interface to create any end-to-end connectivity service on a T-API based
network. Two kinds of end-to-end "optical" connectivity services are managed by the TransportPCE T-API module:

- 10GE service from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
- Media Channel (MC) connectivity service from client add/drop port (PP port of SRG) to
  client add/drop port of two ROADMs.

As mentioned earlier, the T-API module interfaces with the Service Handler to automatically invoke the
*renderer* module to create all required TAPI connections and cross-connections on each device
supporting the service.
1795
Before creating a low-order OTN connectivity service (1GE or 10GE services terminating on a
client port of a MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container
exists and has previously been configured (i.e. structured) to support low-order OTN containers.
1800
Thus, OTN connectivity service creation implies three steps (see the sketch after this list):

1. OTSi/OTU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in Photonic media layer)
2. ODU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in DSR/ODU layer)
3. 10GE connectivity service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH in DSR/ODU layer)
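
As an illustrative summary (not a request payload; the keys below are only labels for this sketch), these
three steps map onto the T-API layer protocol names used in the *create-connectivity-service* requests
shown later in this section:

.. code:: json

    [
        { "creation-step": "1 - OTSi/OTU", "layer-protocol-name": "PHOTONIC_MEDIA" },
        { "creation-step": "2 - ODU", "layer-protocol-name": "ODU" },
        { "creation-step": "3 - 10GE/DSR", "layer-protocol-name": "DSR" }
    ]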
1805
1806 The first step corresponds to the OCH-OTU4 service from network port to network port of OpenROADM.
1807 The corresponding T-API cross and top connections are created between the CEPs of the T-API nodes
1808 involved in each request.
1809
1810 Additionally, an *MC connectivity service* could be created between two ROADMs to create an optical
1811 tunnel and reserve resources in advance. This kind of service corresponds to the OC service creation
1812 use case described earlier.
1813
1814 The management of other OTN services through T-API (1GE-ODU0, 100GE...) is planned for future releases.
1815
1816 Any-Connectivity service creation
1817 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1818 As for the Service Creation described for OpenROADM, the initial steps are the same:
1819
1820 -   Connect netconf devices to the controller
1821 -   Create XPDR-RDM links and configure RDM-to-RDM links (in openroadm topologies)
1822
1823 Bidirectional T-API links between xpdr and rdm nodes must be created manually. To that end, use the
1824 following REST RPCs:
1825
1826 From xpdr <--> rdm:
1827 ^^^^^^^^^^^^^^^^^^^
1828
1829 **REST API** : *POST /rests/operations/transportpce-tapinetworkutils:init-xpdr-rdm-tapi-link*
1830
1831 **Sample JSON Data**
1832
1833 .. code:: json
1834
1835     {
1836         "input": {
1837             "xpdr-node": "<XPDR_OpenROADM_id>",
1838             "network-tp": "<XPDR_TP_OpenROADM_id>",
1839             "rdm-node": "<ROADM_OpenROADM_id>",
1840             "add-drop-tp": "<ROADM_TP_OpenROADM_id>"
1841         }
1842     }
1843
Use the following REST RPC to invoke the T-API module in order to create a bidirectional connectivity
service between two devices. The network should be composed of two ROADMs and two Xponders (SWITCH or MUX).
1846
1847 **REST API** : *POST /rests/operations/tapi-connectivity:create-connectivity-service*
1848
1849 **Sample JSON Data**
1850
1851 .. code:: json
1852
1853     {
1854         "tapi-connectivity:input": {
1855             "tapi-connectivity:end-point": [
1856                 {
1857                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1858                     "tapi-connectivity:service-interface-point": {
1859                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1860                     },
1861                     "tapi-connectivity:administrative-state": "UNLOCKED",
1862                     "tapi-connectivity:operational-state": "ENABLED",
1863                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1864                     "tapi-connectivity:role": "SYMMETRIC",
1865                     "tapi-connectivity:protection-role": "WORK",
1866                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1867                     "tapi-connectivity:name": [
1868                         {
1869                             "tapi-connectivity:value-name": "OpenROADM node id",
1870                             "tapi-connectivity:value": "<OpenROADM node ID>"
1871                         }
1872                     ]
1873                 },
1874                 {
1875                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1876                     "tapi-connectivity:service-interface-point": {
1877                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1878                     },
1879                     "tapi-connectivity:administrative-state": "UNLOCKED",
1880                     "tapi-connectivity:operational-state": "ENABLED",
1881                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1882                     "tapi-connectivity:role": "SYMMETRIC",
1883                     "tapi-connectivity:protection-role": "WORK",
1884                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1885                     "tapi-connectivity:name": [
1886                         {
1887                             "tapi-connectivity:value-name": "OpenROADM node id",
1888                             "tapi-connectivity:value": "<OpenROADM node ID>"
1889                         }
1890                     ]
1891                 }
1892             ],
1893             "tapi-connectivity:connectivity-constraint": {
1894                 "tapi-connectivity:service-layer": "<TAPI_Service_Layer>",
1895                 "tapi-connectivity:service-type": "POINT_TO_POINT_CONNECTIVITY",
1896                 "tapi-connectivity:service-level": "Some service-level",
1897                 "tapi-connectivity:requested-capacity": {
1898                     "tapi-connectivity:total-size": {
1899                         "value": "<CAPACITY>",
1900                         "unit": "GB"
1901                     }
1902                 }
1903             },
1904             "tapi-connectivity:state": "Some state"
1905         }
1906     }
1907
As in the previous RPCs, MC and OTSi correspond to PHOTONIC_MEDIA layer services,
ODU to ODU layer services and 10GE/DSR to DSR layer services. This RPC invokes the
*Service Handler* module to trigger the *PCE* to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters. Once the path is computed
and validated, the T-API CEPs (associated with a NEP), cross-connections and top connections will be created
according to the service request and the topology objects inside the computed path. Then, the *renderer* and
*OLM* are invoked to implement the end-to-end path into the devices and to update the status of the connections
and connectivity service.
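
For instance, in the case of a 10GE/DSR connectivity service, the placeholders of the previous request
would typically be filled with the following values (hypothetical excerpt, shown here out of their
original nesting):

.. code:: json

    {
        "tapi-connectivity:layer-protocol-name": "DSR",
        "tapi-connectivity:service-layer": "DSR",
        "tapi-connectivity:requested-capacity": {
            "tapi-connectivity:total-size": {
                "value": "10",
                "unit": "GB"
            }
        }
    }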
1916
1917 .. note::
    Refer to the "Unconstrained E2E Service Provisioning" use cases from the T-API Reference Implementation to get
    more details about the process of connectivity service creation.
1920
1921 Deleting a connectivity service
1922 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1923
1924 Use the following REST RPC to invoke *TAPI* module in order to delete a given optical
1925 connectivity service.
1926
1927 **REST API** : *POST /rests/operations/tapi-connectivity:delete-connectivity-service*
1928
1929 **Sample JSON Data**
1930
1931 .. code:: json
1932
1933     {
1934         "tapi-connectivity:input": {
1935             "tapi-connectivity:service-id-or-name": "<Service_UUID_or_Name>"
1936         }
1937     }
1938
1939 .. note::
    Deleting OTN connectivity services implies proceeding in the reverse way to their creation. Thus, OTN
    connectivity service deletion must respect the three following steps:

    1. delete first all 10GE services supported over any ODU4 to be deleted
    2. delete ODU4
    3. delete MC-OTSi supporting the just deleted ODU4
1945
1946 T-API Notification Service
1947 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1948
1949 -  RPC calls implemented:
1950
1951    -  create-notification-subscription-service
1952
1953    -  get-supported-notification-types
1954
1955    -  delete-notification-subscription-service
1956
1957    -  get-notification-subscription-service-details
1958
1959    -  get-notification-subscription-service-list
1960
1961    -  get-notification-list
1962
1963 Sulfur SR1 extends the T-API interface support by implementing the T-API notification service. This feature
1964 allows TransportPCE to write and read tapi-notifications stored in topics of a Kafka server. It also upgrades
1965 the nbinotifications module to support the serialization and deserialization of tapi-notifications into JSON
format and vice-versa. The current implementation of the notification service creates a Kafka topic and stores
tapi-notifications upon reception of a create-notification-subscription-service request. Only connectivity-service
related notifications are stored in the Kafka server.
1969
In comparison with OpenROADM notifications, for which several pre-defined Kafka topics are created when the
nbinotifications module is instantiated, tapi-related Kafka topics are created on demand. Upon reception of a
*create-notification-subscription-service* request, a new topic is created in the Kafka server.
This topic is named after the connectivity-service UUID.
1974
1975 .. note::
    A Notification Subscription Service creation request can include a list of T-API object UUIDs; in that case,
    one topic per UUID is created in the Kafka server.
1978
In the current implementation, only Connectivity Service related notifications are supported.
1980
1981 **REST API** : *POST /rests/operations/tapi-notification:get-supported-notification-types*
1982
The response body includes the types of notifications supported and the supported object types, as illustrated below.
1984
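An illustrative sketch of the response is given below; the exact field names and values are an assumption
based on the T-API notification model and may differ between releases (only connectivity-service related
notifications are currently supported):

.. code:: json

    {
        "tapi-notification:output": {
            "supported-notification-types": [
                "ALARM_EVENT",
                "ATTRIBUTE_VALUE_CHANGE"
            ],
            "supported-object-types": [
                "CONNECTIVITY_SERVICE"
            ]
        }
    }
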
1985 Use the following RPC to create a Notification Subscription Service.
1986
1987 **REST API** : *POST /rests/operations/tapi-notification:create-notification-subscription-service*
1988
1989 **Sample JSON Data**
1990
1991 .. code:: json
1992
1993     {
1994         "tapi-notification:input": {
1995             "tapi-notification:subscription-filter": {
1996                 "tapi-notification:requested-notification-types": [
1997                     "ALARM_EVENT"
1998                 ],
1999                 "tapi-notification:requested-object-types": [
2000                     "CONNECTIVITY_SERVICE"
2001                 ],
2002                 "tapi-notification:requested-layer-protocols": [
2003                     "<LAYER_PROTOCOL_NAME>"
2004                 ],
2005                 "tapi-notification:requested-object-identifier": [
2006                     "<Service_UUID>"
2007                 ],
2008                 "tapi-notification:include-content": true,
2009                 "tapi-notification:local-id": "localId",
2010                 "tapi-notification:name": [
2011                     {
2012                         "tapi-notification:value-name": "Subscription name",
2013                         "tapi-notification:value": "<notification_service_name>"
2014                     }
2015                 ]
2016             },
2017             "tapi-notification:subscription-state": "ACTIVE"
2018         }
2019     }
2020
This call returns the *UUID* of the Notification Subscription Service, which can later be used to retrieve the
details of the created subscription, to delete the subscription (and all the related Kafka topics) or to retrieve
all the tapi notifications related to that subscription service, as sketched below.
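
For instance, the details of the created subscription can be retrieved with the following request
(a hypothetical sketch, assuming the subscription is identified by *subscription-id-or-name* as in the
*get-notification-list* RPC shown further below):

**REST API** : *POST /rests/operations/tapi-notification:get-notification-subscription-service-details*

**Sample JSON Data**

.. code:: json

    {
        "tapi-notification:input": {
            "tapi-notification:subscription-id-or-name": "<SUBSCRIPTION_UUID_OR_NAME>"
        }
    }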
2024
The figure below shows an example of the combined use of tapi and nbinotifications to notify about a
connectivity service creation process. Depending on the status of the process, a tapi-notification with the
corresponding updated state of the connectivity service is sent to the topic "Service_UUID".
2028
2029 .. figure:: ./images/TransportPCE-tapi-nbinotifications-service-example.jpg
2030    :alt: Example of tapi connectivity service notifications using the feature nbinotifications in TransportPCE
2031
Additionally, when a connectivity service breaks down or is restored, a tapi notification reporting the new status
is sent to the Kafka server. An example of such a tapi notification is shown below.
2034
2035 **Sample JSON T-API notification**
2036
2037 .. code:: json
2038
2039     {
2040       "nbi-notifications:notification-tapi-service": {
2041         "layer-protocol-name": "<LAYER_PROTOCOL_NAME>",
2042         "notification-type": "ATTRIBUTE_VALUE_CHANGE",
2043         "changed-attributes": [
2044           {
2045             "value-name": "administrativeState",
2046             "old-value": "<LOCKED_OR_UNLOCKED>",
2047             "new-value": "<UNLOCKED_OR_LOCKED>"
2048           },
2049           {
2050             "value-name": "operationalState",
2051             "old-value": "DISABLED_OR_ENABLED",
2052             "new-value": "ENABLED_OR_DISABLED"
2053           }
2054         ],
2055         "target-object-name": [
2056           {
2057             "value-name": "Connectivity Service Name",
2058             "value": "<SERVICE_UUID>"
2059           }
2060         ],
2061         "uuid": "<NOTIFICATION_UUID>",
2062         "target-object-type": "CONNECTIVITY_SERVICE",
2063         "event-time-stamp": "2022-04-06T09:06:01+00:00",
2064         "target-object-identifier": "<SERVICE_UUID>"
2065       }
2066     }
2067
To retrieve these tapi connectivity service notifications stored in the Kafka server:
2069
2070 **REST API** : *POST /rests/operations/tapi-notification:get-notification-list*
2071
2072 **Sample JSON Data**
2073
2074 .. code:: json
2075
2076     {
2077         "tapi-notification:input": {
2078             "tapi-notification:subscription-id-or-name": "<SUBSCRIPTION_UUID_OR_NAME>",
2079             "tapi-notification:time-period": "time-period"
2080         }
2081     }
2082
Further developments will support more types of T-API objects, e.g. node, link, topology or connection.
2084
2085 odl-transportpce-dmaap-client
2086 -----------------------------
2087
This feature allows the TransportPCE application to send notifications to the ONAP DMaaP Message Router
following service request results.
It listens to NBI notifications and sends the PublishNotificationService content to
DMaaP on the topic "unauthenticated.TPCE" through a POST request on */events/unauthenticated.TPCE*.
It uses Jackson to serialize the notification to JSON and the Jersey client to send the POST request.
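
As an illustration, the JSON document POSTed to DMaaP could look like the sketch below; the field names are
hypothetical and only meant to illustrate the serialized PublishNotificationService content (refer to the
nbinotifications model for the exact attributes):

.. code:: json

    {
        "service-name": "something",
        "connection-type": "service",
        "message": "Service implemented !",
        "operational-state": "inService"
    }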
2093
2094 odl-transportpce-nbinotifications
2095 ---------------------------------
2096
This feature allows the TransportPCE application to write and read notifications stored in topics of a Kafka server.
It is basically composed of two kinds of elements. First are the 'publishers', which are in charge of sending a notification to
a Kafka server. To restrict publication to specific classes, each publisher
is dedicated to an authorized class.
Then there are the 'subscribers', which are in charge of reading notifications from a Kafka server.
So when the feature is called to write a notification to a Kafka server, it serializes the notification
into JSON format and then publishes it in a topic of the server via a publisher.
And when the feature is called to read notifications from a Kafka server, it retrieves them from
the topic of the server via a subscriber and deserializes them.
2106
For now, when the REST RPC service-create is called to create a bidirectional end-to-end service,
depending on the success or failure of the creation, the feature notifies the result of
the creation to a Kafka server. The topics that store these notifications are named after the connection type
2110 (service, infrastructure, roadm-line). For instance, if the RPC service-create is called to create an
2111 infrastructure connection, the service notifications related to this connection will be stored in
2112 the topic 'infrastructure'.
2113
The figure below shows an example of the nbinotifications feature being used to notify the
progress of a service creation.
2116
2117 .. figure:: ./images/TransportPCE-nbinotifications-service-example.jpg
2118    :alt: Example of service notifications using the feature nbinotifications in TransportPCE
2119
2120
2121 Depending on the status of the service creation, two kinds of notifications can be published
2122 to the topic 'service' of the Kafka server.
2123
If the service was correctly implemented, the following notification will be published:


-  **Service implemented !** : Indicates that the service was successfully implemented.
   It also contains all information concerning the new service.


Otherwise, this notification will be published:


-  **ServiceCreate failed ...** : Indicates that the process of service-create failed, and also contains
   the failure cause.


To retrieve these service notifications stored in the Kafka server:
2139
2140 **REST API** : *POST /rests/operations/nbi-notifications:get-notifications-process-service*
2141
2142 **Sample JSON Data**
2143
2144 .. code:: json
2145
2146     {
2147       "input": {
2148         "connection-type": "service",
2149         "id-consumer": "consumer",
2150         "group-id": "test"
2151        }
2152     }
2153
2154 .. note::
2155     The field 'connection-type' corresponds to the topic that stores the notifications.
2156
Another implementation of the notifications allows notifying any modification of the operational state of a service.
So when a service breaks down or is restored, a notification reporting the new status is sent to the Kafka server.
The topics that store these notifications in the Kafka server are also named after the connection type
(service, infrastructure, roadm-line), suffixed with the string 'alarm'.
2161
To retrieve these alarm notifications stored in the Kafka server:
2163
2164 **REST API** : *POST /rests/operations/nbi-notifications:get-notifications-alarm-service*
2165
2166 **Sample JSON Data**
2167
2168 .. code:: json
2169
2170     {
2171       "input": {
2172         "connection-type": "infrastructure",
2173         "id-consumer": "consumer",
2174         "group-id": "test"
2175        }
2176     }
2177
2178 .. note::
2179     This sample is used to retrieve all the alarm notifications related to infrastructure services.
2180
2181 Help
2182 ----
2183
2184 -  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__