1 .. _transportpce-dev-guide:
2
3 TransportPCE Developer Guide
4 ============================
5
6 Overview
7 --------
8
TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with controllers of different layers (L2, L3 controller…), a
higher-layer controller and/or an orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment, and to provision services according to a
request coming from a higher-layer controller and/or an orchestrator.
This capability may rely on the controller only, or it may be delegated to
distributed (standardized) protocols.
19
20
21 Architecture
22 ------------
23
The TransportPCE modular architecture is described in the diagram below. Each main
function, such as Topology Management, the Path Computation Engine (PCE), the Service
Handler, the Renderer (responsible for the path configuration through the optical
equipment) and Optical Line Management (OLM), is associated with a generic block
relying on open models, each of them communicating through published APIs.
29
30
31 .. figure:: ./images/TransportPCE-Diagram-Sulfur.jpg
32    :alt: TransportPCE architecture
33
34    TransportPCE architecture
35
36 Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
37 of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
38 and transponders.
39
The benefit of using a controller to automatically provision services strongly
relies on its ability to handle end-to-end optical services that span across
the different network domains, potentially equipped with equipment coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.
45
The initial design of TransportPCE leverages the OpenROADM Multi-Source Agreement (MSA),
which defines interoperability specifications consisting of both optical
interoperability and YANG data models.
49
End-to-end OTN services such as OCH-OTU4, structured ODU4 or 10GE-ODU2e
services have been supported since Magnesium SR2. OTN support continued to be
improved in the following releases of Magnesium and Aluminium.
53
Flexgrid was introduced in Aluminium. Depending on the OpenROADM device model,
optical interfaces can be created according to the initial fixed grid (for
R1.2.1, 96 channels regularly spaced by 50 GHz), or to a flexgrid (for R2.2.1,
use of a specific number of consecutive frequency slots of 6.25 GHz, depending
on the capabilities of the ROADMs and transponders on one side, and on the
rate of the channel on the other side).
60
Leveraging the flexgrid feature, high-rate services have been supported since Silicon.
The first implementation allows rendering 400 GE services. This release also brings
asynchronous service creation and deletion, thanks to northbound notification
modules based on a Kafka implementation, allowing interactions with the DMaaP
bus of ONAP.
66
Phosphorus consolidates end-to-end support for high-rate services (ODUC4, OTUC4),
allowing service creation and deletion from the NBI. The support of path
computation for high-rate services (OTUC4) will be added through the different
Phosphorus releases, relying on GNPy for impairment-aware path computation. An
experimental support of T-API is provided, allowing service-create/delete from a
T-API version 2.1.1 compliant NBI. A T-API network topology, with different levels
of abstraction, and a service context are maintained in the MD-SAL. Service state
is managed by monitoring device port state changes. Associated notifications are
handled through Kafka and DMaaP clients.
76
77
78 Module description
79 ~~~~~~~~~~~~~~~~~~
80
81 ServiceHandler
82 ^^^^^^^^^^^^^^
83
The Service Handler handles requests coming from a higher-level controller or an
orchestrator through the northbound API, as defined in the OpenROADM service model.
The current implementation addresses the following RPCs: service-create,
temp-service-create, service-delete, temp-service-delete, service-reroute, and service-restoration.
It checks the request consistency and triggers path calculation by sending RPCs to the PCE.
If a valid path is returned by the PCE, path configuration is initiated relying on the
Renderer and the OLM. At the confirmation of a successful service creation, the Service
Handler updates the service-list/temp-service-list in the MD-SAL. For service deletion,
the Service Handler relies on the Renderer and the OLM to delete connections and reset
power levels associated with the service. The service-list is updated following a
successful service deletion. Neon SR0 added support for services from ROADM
to ROADM, which brings additional flexibility and notably allows reserving resources
when transponders are not in place at day one. Magnesium SR2 fully supports end-to-end
OTN services which are part of the OTN infrastructure. It concerns the management of
OCH-OTU4 (also part of the optical infrastructure) and structured HO-ODU4 services.
Moreover, once these two kinds of OTN infrastructure services have been created, it is possible
to manage some LO-ODU services (1GE-ODU0, 10GE-ODU2e). 100GE services are also
supported over ODU4 in transponders or switchponders using higher-rate network
interfaces.
103
104 In Silicon release, the management of TopologyUpdateNotification coming from the *Topology Management*
105 module was implemented. This functionality enables the controller to update the information of existing
106 services according to the online status of the network infrastructure. If any service is affected by
107 the topology update and the *odl-transportpce-nbi* feature is installed, the Service Handler will send a
108 notification to a Kafka server with the service update information.
109
110 PCE
111 ^^^
112
113 The Path Computation Element (PCE) is the component responsible for path
114 calculation. An interface allows the Service Handler or external components such as an
115 orchestrator to request a path computation and get a response from the PCE
116 including the computed path(s) in case of success, or errors and indication of
117 the reason for the failure in case the request cannot be satisfied. Additional
118 parameters can be provided by the PCE in addition to the computed paths if
119 requested by the client module. An interface to the Topology Management module
120 allows keeping PCE aligned with the latest changes in the topology. Information
121 about current and planned services is available in the MD-SAL data store.
122
The current implementation of the PCE allows finding the shortest path, minimizing either the hop
count (default) or the propagation delay. The central wavelength is assigned considering a fixed
grid of 96 wavelengths spaced by 50 GHz. The assignment of wavelengths according to a flexible
grid, considering 768 consecutive slots of 6.25 GHz (a total spectrum of 4.8 THz) and their
occupation by existing services, is planned for later releases.
In Neon SR0, the PCE calculates the OSNR on the basis of the incremental noise specifications
provided in the OpenROADM MSA. The support of unidirectional ports was also added.
130
131 PCE handles the following constraints as hard constraints:
132
133 -   **Node exclusion**
134 -   **SRLG exclusion**
135 -   **Maximum latency**
136
In Magnesium SR0, the interconnection of the PCE with GNPy (Gaussian Noise Python), an
open-source library developed in the scope of the Telecom Infra Project for building route
planning and optimizing performance in optical mesh networks, is fully supported. Impairment-aware
path computation for services of higher rates (beyond 100G) is planned across the Phosphorus
releases. It implies making the B100G OpenROADM specifications available in the GNPy libraries.
142
If the OSNR calculated by the PCE is too close to the limit defined in the OpenROADM
specifications, the PCE forwards the topology and the pre-computed path, translated into routing
constraints, to the external GNPy tool through a REST interface. GNPy calculates a set of Quality of
Transmission metrics for this path using its own library, which includes models for OpenROADM.
The result is sent back to the PCE. If the path is validated, the PCE sends back a response to
the Service Handler. In case of invalidation of the path by GNPy, the PCE sends a new request to
GNPy, including only the constraints expressed in the path-computation-request initiated by the
Service Handler. GNPy then tries to calculate a path based on these relaxed constraints. The
result of the path computation is provided to the PCE, which translates the path according to the
topology handled in transportPCE and forwards the results to the Service Handler.
153
GNPy relies on SNR and takes into account the linear and non-linear impairments
to check feasibility. In the related tests, the GNPy module runs externally in a
Docker container and the communication with T-PCE is ensured via HTTPS.
157
158 Topology Management
159 ^^^^^^^^^^^^^^^^^^^
160
The Topology Management module builds the topology according to the network model
defined in OpenROADM. The topology is aligned with the IETF I2RS RFC 8345 model.
It includes several network layers:
164
165 -  **CLLI layer corresponds to the locations that host equipment**
166 -  **Network layer corresponds to a first level of disaggregation where we
167    separate Xponders (transponder, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where ROADM
   Add/Drop modules ("SRGs") are separated from the degrees, which include line
   amplifiers and WSS that switch wavelengths from one degree to another**
-  **OTN layer introduced in Magnesium includes transponders as well as switch-ponders and
   mux-ponders having the ability to switch OTN containers from client to line cards. The Magnesium
   SR0 release includes creation of the switching pool (used to model cross-connect matrices),
   tributary-ports and tributary-slots at the initial connection of NETCONF devices.
   The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
   pool occupancy when OTN services are created, have been supported since Magnesium SR2.**
177
Since the Silicon release, the Topology Management module processes NETCONF events received through an
event stream (as defined in RFC 5277) between the devices and the NETCONF adapter of the controller.
The current implementation detects device configuration changes and updates the topology datastore accordingly.
It then sends a TopologyUpdateNotification to the *Service Handler* to indicate that a change has been
detected in the network that may affect some of the already existing services.
183
184 Renderer
185 ^^^^^^^^
186
The Renderer module, on request coming from the Service Handler through a
service-implementation-request or service-delete RPC, sets up or deletes the path corresponding
to a specific service between the A and Z ends. The path description provided by the Service Handler
to the Renderer is based on abstracted resources (nodes, links and termination points), as provided
by the PCE module. The Renderer converts this path description into a path topology based on
device resources (circuit-packs, ports…).
193
The conversion from abstracted resources to device resources is performed relying on the
portmapping module, which maintains the connections between these different resource types.
The portmapping module also allows keeping the topology independent from the device releases.
In Neon (SR0), the portmapping module has been enriched to support both OpenROADM 1.2.1 and 2.2.1
device models. The full support of OpenROADM 2.2.1 device models (both in the topology management
and the rendering function) was added in Neon SR1. In Magnesium, portmapping is enriched with
the supported-interface-capability, OTN supporting-interfaces, and switching-pools (reflecting the
cross-connection capabilities of OTN switch-ponders). The support for 7.1 device models was
introduced in Silicon (no devices of intermediate releases have been proposed and made available
to the market by equipment manufacturers).
204
After the path is provided, the Renderer first checks which interfaces already exist on the
ports of the different nodes that the path crosses. It then creates the missing interfaces. After all
needed interfaces have been created, it sets up the connections required in the nodes and
notifies the Service Handler of the status of the path creation. The path is created in two steps
(from A to Z and from Z to A). In case the path between A and Z cannot be fully created, a
rollback function is called to set the equipment on the path back to its initial configuration
(as it was before invoking the Renderer).
212
Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
and ODU0 interfaces. The creation of these low-order OTN interfaces must be triggered through the
otn-service-path RPC. Magnesium SR2 fully supports end-to-end OTN service implementation into devices
(service-implementation-request/service-delete RPCs, topology alignment after the service
has been created).
218
In the Silicon release, the creation of higher-rate OTN interfaces (OTUC4) must be triggered through
the otn-service-path RPC. Phosphorus SR0 supports end-to-end OTN service implementation into devices
(service-implementation-request/service-delete RPCs, topology alignment after the service
has been created). One shall note that impairment-aware path calculation for higher rates will
be made available across the Phosphorus release train.
224
225 OLM
226 ^^^
227
228 Optical Line Management module implements two main features: it is responsible
229 for setting up the optical power levels on the different interfaces, and is in
230 charge of adjusting these settings across the life of the optical
231 infrastructure.
232
After the different connections have been established in the ROADMs, between two
degrees for an express path, or between an SRG and a degree for an add or drop
path (meaning the devices have set the WSS and all other required elements to
provide path continuity), power settings are provided as attributes of these
connections. This allows the device to set all complementary elements, such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) in the fiber span. This also applies
to Xponders, as their output power must comply with the specifications defined
for the add/drop ports (SRG) of the ROADM. The OLM has the responsibility of
calculating the right power settings, sending them to the device, and checking the
PM retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.
245
246
247 Inventory
248 ^^^^^^^^^
249
The TransportPCE Inventory module is responsible for keeping track of the connected devices in an external
MariaDB database. Other databases may be used as long as they comply with SQL and are compatible
with OpenDaylight (for example MySQL). At present, the module supports extracting and persisting the
inventory of devices compliant with OpenROADM MSA version 1.2.1. Inventory module changes to support newer device
models (2.2.1, etc.) and other models (network, service, etc.) will be progressively included.
255
The inventory module can be activated by the associated Karaf feature (odl-transportpce-inventory).
The database properties are supplied in the “opendaylight-release” and “opendaylight-snapshots”
profiles. Below is the settings.xml with the properties included in the distribution.
The module can be rebuilt from sources with different parameters.
260
261 Sample entry in settings.xml to declare an external inventory database:
262 ::
263
264     <profiles>
265       <profile>
266           <id>opendaylight-release</id>
267     [..]
268          <properties>
269                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
270                  <transportpce.db.database><<databasename>></transportpce.db.database>
271                  <transportpce.db.username><<username>></transportpce.db.username>
272                  <transportpce.db.password><<password>></transportpce.db.password>
273                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
274          </properties>
275     </profile>
276     [..]
277     <profile>
278           <id>opendaylight-snapshots</id>
279     [..]
280          <properties>
281                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
282                  <transportpce.db.database><<databasename>></transportpce.db.database>
283                  <transportpce.db.username><<username>></transportpce.db.username>
284                  <transportpce.db.password><<password>></transportpce.db.password>
285                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
286          </properties>
287         </profile>
288     </profiles>
289
290
Once the project is built and Karaf is started, the cfg file is generated in the etc folder with the
corresponding properties supplied in settings.xml. When devices with the OpenROADM 1.2.1 device model
are mounted, the device listener in the inventory module loads several device attributes into various
tables as per the supplied database. The database structure details can be retrieved from the file
tests/inventory/initdb.sql inside the project sources. Installation scripts and a docker file are also
provided.
297
298 Key APIs and Interfaces
299 -----------------------
300
301 External API
302 ~~~~~~~~~~~~
303
The northbound API, interconnecting the Service Handler to higher-level applications,
relies on the service model defined in the MSA. The Renderer and the OLM are
developed to allow configuring OpenROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA's device model.
308
309 ServiceHandler Service
310 ^^^^^^^^^^^^^^^^^^^^^^
311
312 -  RPC call
313
314    -  service-create (given service-name, service-aend, service-zend)
315
316    -  service-delete (given service-name)
317
318    -  service-reroute (given service-name, service-aend, service-zend)
319
320    -  service-restoration (given service-name, service-aend, service-zend)
321
322    -  temp-service-create (given common-id, service-aend, service-zend)
323
324    -  temp-service-delete (given common-id)
325
326 -  Data structure
327
328    -  service list : made of services
329    -  temp-service list : made of temporary services
   -  service : composed of service-name, topology which describes the detailed path (list of used resources)
331
332 -  Notification
333
334    - service-rpc-result : result of service RPC
335    - service-notification : service has been added, modified or removed
336
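As an illustration, the minimal sketch below reads the service-list maintained by the Service
Handler through the controller's RESTCONF interface. The controller address, the default
credentials and the names of the ``services`` list and of its leaves are assumptions to be checked
against your deployment and the OpenROADM service model.

.. code:: python

    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials

    # Read the service-list container maintained by the Service Handler in the MD-SAL.
    response = requests.get(f"{BASE}/rests/data/org-openroadm-service:service-list",
                            auth=AUTH, timeout=10)
    response.raise_for_status()
    service_list = response.json().get("org-openroadm-service:service-list", {})
    # The "services" list and its leaf names follow the OpenROADM service model.
    for service in service_list.get("services", []):
        print(service.get("service-name"), service.get("operational-state"))
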
337 Netconf Service
338 ^^^^^^^^^^^^^^^
339
340 -  RPC call
341
342    -  connect-device : PUT
343    -  disconnect-device : DELETE
344    -  check-connected-device : GET
345
346 -  Data Structure
347
348    -  node list : composed of netconf nodes in topology-netconf
349
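The connect-device, disconnect-device and check-connected-device operations map onto plain
RESTCONF verbs on the topology-netconf datastore, as detailed in the *Connecting nodes* section
below. A minimal sketch of the last two operations, assuming the default controller address and
credentials:

.. code:: python

    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials
    TOPO = f"{BASE}/rests/data/network-topology:network-topology/topology=topology-netconf"

    def check_connected_device(node_id):
        """check-connected-device: read the node from the operational datastore."""
        return requests.get(f"{TOPO}/node={node_id}?content=nonconfig", auth=AUTH, timeout=10)

    def disconnect_device(node_id):
        """disconnect-device: delete the node from the NETCONF topology."""
        return requests.delete(f"{TOPO}/node={node_id}", auth=AUTH, timeout=10)
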
350 Internal APIs
351 ~~~~~~~~~~~~~
352
353 Internal APIs define REST APIs to interconnect TransportPCE modules :
354
355 -   Service Handler to PCE
356 -   PCE to Topology Management
357 -   Service Handler to Renderer
358 -   Renderer to OLM
359 -   Network Model to Service Handler
360
361 Pce Service
362 ^^^^^^^^^^^
363
364 -  RPC call
365
366    -  path-computation-request (given service-name, service-aend, service-zend)
367
368    -  cancel-resource-reserve (given service-name)
369
370 -  Notification
371
372    - service-path-rpc-result : result of service RPC
373
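A hypothetical sketch of a path-computation-request call is shown below. Only the parameters
listed above come from this guide; the RPC path and the exact structure of the two endpoint
containers are assumptions to be checked against the transportpce-pce YANG model.

.. code:: python

    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials

    # Hypothetical payload: the leaf names inside service-a-end/service-z-end are
    # placeholders to be aligned with the transportpce-pce YANG model.
    payload = {
        "input": {
            "service-name": "test1",
            "service-a-end": {"node-id": "<xpdr-node-id>", "service-rate": "100"},
            "service-z-end": {"node-id": "<xpdr-node-id>", "service-rate": "100"}
        }
    }
    response = requests.post(f"{BASE}/rests/operations/transportpce-pce:path-computation-request",
                             json=payload, auth=AUTH, timeout=30)
    print(response.status_code, response.json())
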
374 Renderer Service
375 ^^^^^^^^^^^^^^^^
376
377 -  RPC call
378
379    -  service-implementation-request (given service-name, service-aend, service-zend)
380
381    -  service-delete (given service-name)
382
383 -  Data structure
384
385    -  service path list : composed of service paths
386    -  service path : composed of service-name, path description giving the list of abstracted elements (nodes, tps, links)
387
388 -  Notification
389
390    - service-path-rpc-result : result of service RPC
391
392 Device Renderer
393 ^^^^^^^^^^^^^^^
394
395 -  RPC call
396
   -  service-path : used in SR0 as an intermediate solution to address the renderer directly
      from a REST NBI to create OCH-OTU4-ODU4 interfaces on the network ports of OTN devices.

   -  otn-service-path : used in SR0 as an intermediate solution to address the renderer directly
      from a REST NBI for OTN service creation. OTN service creation through a
      service-implementation-request call from the Service Handler will be supported in later
      Magnesium releases.
404
405 Topology Management Service
406 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
407
408 -  Data structure
409
410    -  network list : composed of networks(openroadm-topology, netconf-topology)
411    -  node list : composed of nodes identified by their node-id
412    -  link list : composed of links identified by their link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)
415
416 OLM Service
417 ^^^^^^^^^^^
418
419 -  RPC call
420
421    -  get-pm (given node-id)
422
423    -  service-power-setup
424
425    -  service-power-turndown
426
427    -  service-power-reset
428
429    -  calculate-spanloss-base
430
431    -  calculate-spanloss-current
432
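As an illustration, a sketch of a get-pm call through RESTCONF is given below. Only the node-id
parameter listed above comes from this guide; the transportpce-olm YANG model defines additional
input leaves identifying the resource and the PM granularity, which a real call must provide.

.. code:: python

    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials

    # Minimal, hypothetical get-pm request: complete the input with the resource
    # identification and granularity leaves defined in the transportpce-olm model.
    payload = {"input": {"node-id": "<node-id>"}}
    response = requests.post(f"{BASE}/rests/operations/transportpce-olm:get-pm",
                             json=payload, auth=AUTH, timeout=30)
    print(response.status_code, response.json())
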
433 odl-transportpce-stubmodels
434 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
435
   -  This feature provides functions to stub some of the TransportPCE modules, namely the PCE and
      the renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.
439
440 Interfaces to external software
441 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
442
This section describes the interfaces implemented to interconnect TransportPCE modules with other
software in order to perform specific tasks.
445
446 GNPy interface
447 ^^^^^^^^^^^^^^
448
449 -  Request structure
450
451    -  topology : composed of list of elements and connections
452    -  service : source, destination, explicit-route-objects, path-constraints
453
454 -  Response structure
455
   -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth
457    -  path-properties/path-route-objects : composed of path elements
458
459
460 Running transportPCE project
461 ----------------------------
462
463 To use transportPCE controller, the first step is to connect the controller to optical nodes
464 through the NETCONF connector.
465
466 .. note::
467
    In the current version, only optical equipment compliant with OpenROADM data models is managed
    by transportPCE.
470
471
472 Connecting nodes
473 ~~~~~~~~~~~~~~~~
474
To connect a node, use the following RESTconf request:
476
477 **REST API** : *PUT /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>*
478
479 **Sample JSON Data**
480
481 .. code:: json
482
483     {
484         "node": [
485             {
486                 "node-id": "<node-id>",
487                 "netconf-node-topology:tcp-only": "false",
488                 "netconf-node-topology:reconnect-on-changed-schema": "false",
489                 "netconf-node-topology:host": "<node-ip-address>",
490                 "netconf-node-topology:default-request-timeout-millis": "120000",
491                 "netconf-node-topology:max-connection-attempts": "0",
492                 "netconf-node-topology:sleep-factor": "1.5",
493                 "netconf-node-topology:actor-response-wait-time": "5",
494                 "netconf-node-topology:concurrent-rpc-limit": "0",
495                 "netconf-node-topology:between-attempts-timeout-millis": "2000",
496                 "netconf-node-topology:port": "<netconf-port>",
497                 "netconf-node-topology:connection-timeout-millis": "20000",
498                 "netconf-node-topology:username": "<node-username>",
499                 "netconf-node-topology:password": "<node-password>",
500                 "netconf-node-topology:keepalive-delay": "300"
501             }
502         ]
503     }
504
505
Then check that the NETCONF session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.
508
509 **REST API** : *GET /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>?content=nonconfig*
510
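A minimal sketch automating this step with Python and the ``requests`` library is given below. It
assumes the controller's default RESTCONF endpoint and credentials (port 8181, admin/admin) and a
``node.json`` file containing the sample payload above with the placeholders filled in.

.. code:: python

    import json
    import time
    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials
    NODE_ID = "ROADM-A1"             # example node name

    # node.json holds the sample payload shown above, with placeholders filled in.
    with open("node.json") as f:
        payload = json.load(f)

    node_url = (f"{BASE}/rests/data/network-topology:network-topology"
                f"/topology=topology-netconf/node={NODE_ID}")
    resp = requests.put(node_url, json=payload, auth=AUTH, timeout=10)
    resp.raise_for_status()

    # Poll the operational datastore until the NETCONF session is established.
    while True:
        state = requests.get(f"{node_url}?content=nonconfig", auth=AUTH, timeout=10).json()
        status = state["network-topology:node"][0].get("netconf-node-topology:connection-status")
        print("connection-status:", status)
        if status == "connected":
            break
        time.sleep(5)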
511
512 Node configuration discovery
513 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
514
Once the controller is connected to the node, the transportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical port related to transmission. All *circuit-packs* inside the node configuration are
analyzed.
519
Use the following RESTconf URI to check the result of that function, internally named *portMapping*.
521
522 **REST API** : *GET /rests/data/transportpce-portmapping:network*
523
524 .. note::
525
526     In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
527         * rdm: ROADM device (optical switch)
528         * xpdr: Xponder device (device that converts client to optical channel interface)
529         * ila: in line amplifier (optical amplifier)
530         * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)
531
532     TransportPCE currently supports rdm and xpdr
533
Depending on the kind of OpenROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:
536
537 -  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on a rdm equipment
538 -  SRG<srg-number>-PP<port-number>: created on the client port of a srg on a rdm equipment
539 -  XPDR<number>-CLIENT<port-number>: created on the client port of a xpdr equipment
540 -  XPDR<number>-NETWORK<port-number>: created on the line port of a xpdr equipment
541
542     For further details on openROADM device models, see `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.
543
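A short sketch listing the Logical Connection Points discovered for each node is shown below. The
controller address and credentials are assumptions, and the ``nodes``, ``mapping`` and
``logical-connection-point`` names should be checked against the transportpce-portmapping YANG model.

.. code:: python

    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials

    response = requests.get(f"{BASE}/rests/data/transportpce-portmapping:network",
                            auth=AUTH, timeout=10)
    response.raise_for_status()
    network = response.json().get("transportpce-portmapping:network", {})
    # "nodes", "mapping" and "logical-connection-point" are assumed names taken
    # from the transportpce-portmapping model.
    for node in network.get("nodes", []):
        for mapping in node.get("mapping", []):
            print(node.get("node-id"), mapping.get("logical-connection-point"))
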
544 Optical Network topology
545 ~~~~~~~~~~~~~~~~~~~~~~~~
546
547 Before creating an optical connectivity service, your topology must contain at least two xpdr
548 devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
549 created by transportPCE. Nevertheless, depending on the configuration inside optical nodes, this
topology can be partial. Check that a link of type *ROADMtoROADM* exists between two adjacent rdm
nodes.
552
553 **REST API** : *GET /rests/data/ietf-network:networks/network=openroadm-topology*
554
If it is not the case, you need to manually complement the topology with *ROADMtoROADM* links using
the following REST RPC:
557
558
559 **REST API** : *POST /rests/operations/transportpce-networkutils:init-roadm-nodes*
560
561 **Sample JSON Data**
562
563 .. code:: json
564
565     {
566       "input": {
567         "rdm-a-node": "<node-id-A>",
568         "deg-a-num": "<degree-A-number>",
569         "termination-point-a": "<Logical-Connection-Point>",
570         "rdm-z-node": "<node-id-Z>",
571         "deg-z-num": "<degree-Z-number>",
572         "termination-point-z": "<Logical-Connection-Point>"
573       }
574     }
575
576 *<Logical-Connection-Point> comes from the portMapping function*.
577
Unidirectional links between xpdr and rdm nodes must be created manually. To that end, use the two
following REST RPCs (a scripted example combining both calls is given after the second RPC):
580
581 From xpdr to rdm:
582 ^^^^^^^^^^^^^^^^^
583
584 **REST API** : *POST /rests/operations/transportpce-networkutils:init-xpdr-rdm-links*
585
586 **Sample JSON Data**
587
588 .. code:: json
589
590     {
591       "input": {
592         "links-input": {
593           "xpdr-node": "<xpdr-node-id>",
594           "xpdr-num": "1",
595           "network-num": "<xpdr-network-port-number>",
596           "rdm-node": "<rdm-node-id>",
597           "srg-num": "<srg-number>",
598           "termination-point-num": "<Logical-Connection-Point>"
599         }
600       }
601     }
602
603 From rdm to xpdr:
604 ^^^^^^^^^^^^^^^^^
605
606 **REST API** : *POST /rests/operations/transportpce-networkutils:init-rdm-xpdr-links*
607
608 **Sample JSON Data**
609
610 .. code:: json
611
612     {
613       "input": {
614         "links-input": {
615           "xpdr-node": "<xpdr-node-id>",
616           "xpdr-num": "1",
617           "network-num": "<xpdr-network-port-number>",
618           "rdm-node": "<rdm-node-id>",
619           "srg-num": "<srg-number>",
620           "termination-point-num": "<Logical-Connection-Point>"
621         }
622       }
623     }
624
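The sketch below chains the two RPCs above with the same input body, as announced before the two
samples. The controller address and credentials are assumptions to adapt to your setup.

.. code:: python

    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials

    links_input = {
        "input": {
            "links-input": {
                "xpdr-node": "<xpdr-node-id>",
                "xpdr-num": "1",
                "network-num": "<xpdr-network-port-number>",
                "rdm-node": "<rdm-node-id>",
                "srg-num": "<srg-number>",
                "termination-point-num": "<Logical-Connection-Point>"
            }
        }
    }
    # Create the link in both directions (xpdr to rdm, then rdm to xpdr).
    for rpc in ("init-xpdr-rdm-links", "init-rdm-xpdr-links"):
        response = requests.post(f"{BASE}/rests/operations/transportpce-networkutils:{rpc}",
                                 json=links_input, auth=AUTH, timeout=10)
        print(rpc, response.status_code)
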
625 OTN topology
626 ~~~~~~~~~~~~
627
628 Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
629 or SWITCH type connected to two different rdm devices. To check that these xpdr are present in the
630 OTN topology, use the following command on the REST API :
631
632 **REST API** : *GET /rests/data/ietf-network:networks/network=otn-topology*
633
An optical connectivity service shall have been created as a first step. Since Magnesium SR2, the OTN
links are automatically populated in the topology after the Och, OTU4 and ODU4 interfaces have
been created on the two network ports of the xpdr.
637
638 Creating a service
639 ~~~~~~~~~~~~~~~~~~
640
Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
network. Two different kinds of end-to-end "optical" services are managed by TransportPCE:

- 100GE/400GE services from client port to client port of two transponders (TPDR)
- Optical Channel (OC) services from the client add/drop port (PP port of SRG) of one ROADM to the
  client add/drop port of another ROADM
646
For these services, TransportPCE automatically invokes the *renderer* module to create all required
interfaces and cross-connections on each device supporting the service.
As an example, the creation of a 100GE service implies, among other things, the creation of OCH or
Optical Tributary Signal (OTSi), OTU4 and ODU4 interfaces on the network port of TPDR devices.
The creation of a 400GE service implies the creation of OTSi, OTUC4, ODUC4 and ODU4 interfaces on
the network port of TPDR devices.
653
Since Magnesium SR2, the *service handler* module directly manages some end-to-end OTN
connectivity services.
Before creating a low-order OTN service (1GE or 10GE services terminating on a client port of a
MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container exists and has previously
been configured (that is, structured) to support low-order OTN containers.
Thus, OTN service creation implies three steps:

1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
663
664 The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.
665
666
667 100GE service creation
668 ^^^^^^^^^^^^^^^^^^^^^^
669
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end optical connectivity service between two xpdrs over an optical network composed of rdm
nodes.
673
674 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
675
676 **Sample JSON Data**
677
678 .. code:: json
679
680     {
681         "input": {
682             "sdnc-request-header": {
683                 "request-id": "request-1",
684                 "rpc-action": "service-create",
685                 "request-system-id": "appname"
686             },
687             "service-name": "test1",
688             "common-id": "commonId",
689             "connection-type": "service",
690             "service-a-end": {
691                 "service-rate": "100",
692                 "node-id": "<xpdr-node-id>",
693                 "service-format": "Ethernet",
694                 "clli": "<ccli-name>",
695                 "tx-direction": {
696                     "port": {
697                         "port-device-name": "<xpdr-client-port>",
698                         "port-type": "fixed",
699                         "port-name": "<xpdr-client-port-number>",
700                         "port-rack": "000000.00",
701                         "port-shelf": "Chassis#1"
702                     },
703                     "lgx": {
704                         "lgx-device-name": "Some lgx-device-name",
705                         "lgx-port-name": "Some lgx-port-name",
706                         "lgx-port-rack": "000000.00",
707                         "lgx-port-shelf": "00"
708                     }
709                 },
710                 "rx-direction": {
711                     "port": {
712                         "port-device-name": "<xpdr-client-port>",
713                         "port-type": "fixed",
714                         "port-name": "<xpdr-client-port-number>",
715                         "port-rack": "000000.00",
716                         "port-shelf": "Chassis#1"
717                     },
718                     "lgx": {
719                         "lgx-device-name": "Some lgx-device-name",
720                         "lgx-port-name": "Some lgx-port-name",
721                         "lgx-port-rack": "000000.00",
722                         "lgx-port-shelf": "00"
723                     }
724                 },
725                 "optic-type": "gray"
726             },
727             "service-z-end": {
728                 "service-rate": "100",
729                 "node-id": "<xpdr-node-id>",
730                 "service-format": "Ethernet",
731                 "clli": "<ccli-name>",
732                 "tx-direction": {
733                     "port": {
734                         "port-device-name": "<xpdr-client-port>",
735                         "port-type": "fixed",
736                         "port-name": "<xpdr-client-port-number>",
737                         "port-rack": "000000.00",
738                         "port-shelf": "Chassis#1"
739                     },
740                     "lgx": {
741                         "lgx-device-name": "Some lgx-device-name",
742                         "lgx-port-name": "Some lgx-port-name",
743                         "lgx-port-rack": "000000.00",
744                         "lgx-port-shelf": "00"
745                     }
746                 },
747                 "rx-direction": {
748                     "port": {
749                         "port-device-name": "<xpdr-client-port>",
750                         "port-type": "fixed",
751                         "port-name": "<xpdr-client-port-number>",
752                         "port-rack": "000000.00",
753                         "port-shelf": "Chassis#1"
754                     },
755                     "lgx": {
756                         "lgx-device-name": "Some lgx-device-name",
757                         "lgx-port-name": "Some lgx-port-name",
758                         "lgx-port-rack": "000000.00",
759                         "lgx-port-shelf": "00"
760                     }
761                 },
762                 "optic-type": "gray"
763             },
764             "due-date": "yyyy-mm-ddT00:00:01Z",
765             "operator-contact": "some-contact-info"
766         }
767     }
768
The most important parameters for this REST RPC are the identification of the two physical client
ports on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* modules to implement the end-to-end
path into the devices.
772
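A minimal sketch sending this request with Python is given below. It assumes the default controller
address and credentials, and a ``service-create.json`` file holding the sample payload above with
the placeholders filled in. Note that the RPC only acknowledges the request; path computation and
rendering run asynchronously, so the outcome must be followed in the service-list or through the
service-rpc-result notification.

.. code:: python

    import json
    import requests

    BASE = "http://localhost:8181"   # assumed controller address
    AUTH = ("admin", "admin")        # assumed default credentials

    # service-create.json holds the sample payload above with placeholders filled in.
    with open("service-create.json") as f:
        payload = json.load(f)

    response = requests.post(f"{BASE}/restconf/operations/org-openroadm-service:service-create",
                             json=payload, auth=AUTH, timeout=30)
    response.raise_for_status()
    print(json.dumps(response.json(), indent=2))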
773
774 OC service creation
775 ^^^^^^^^^^^^^^^^^^^
776
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end Optical Channel (OC) connectivity service between two add/drop ports (PP ports of SRG
nodes) over an optical network composed only of rdm nodes.
780
781 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
782
783 **Sample JSON Data**
784
785 .. code:: json
786
787     {
788         "input": {
789             "sdnc-request-header": {
790                 "request-id": "request-1",
791                 "rpc-action": "service-create",
792                 "request-system-id": "appname"
793             },
794             "service-name": "something",
795             "common-id": "commonId",
796             "connection-type": "roadm-line",
797             "service-a-end": {
798                 "service-rate": "100",
799                 "node-id": "<xpdr-node-id>",
800                 "service-format": "OC",
801                 "clli": "<ccli-name>",
802                 "tx-direction": {
803                     "port": {
804                         "port-device-name": "<xpdr-client-port>",
805                         "port-type": "fixed",
806                         "port-name": "<xpdr-client-port-number>",
807                         "port-rack": "000000.00",
808                         "port-shelf": "Chassis#1"
809                     },
810                     "lgx": {
811                         "lgx-device-name": "Some lgx-device-name",
812                         "lgx-port-name": "Some lgx-port-name",
813                         "lgx-port-rack": "000000.00",
814                         "lgx-port-shelf": "00"
815                     }
816                 },
817                 "rx-direction": {
818                     "port": {
819                         "port-device-name": "<xpdr-client-port>",
820                         "port-type": "fixed",
821                         "port-name": "<xpdr-client-port-number>",
822                         "port-rack": "000000.00",
823                         "port-shelf": "Chassis#1"
824                     },
825                     "lgx": {
826                         "lgx-device-name": "Some lgx-device-name",
827                         "lgx-port-name": "Some lgx-port-name",
828                         "lgx-port-rack": "000000.00",
829                         "lgx-port-shelf": "00"
830                     }
831                 },
832                 "optic-type": "gray"
833             },
834             "service-z-end": {
835                 "service-rate": "100",
836                 "node-id": "<xpdr-node-id>",
837                 "service-format": "OC",
838                 "clli": "<ccli-name>",
839                 "tx-direction": {
840                     "port": {
841                         "port-device-name": "<xpdr-client-port>",
842                         "port-type": "fixed",
843                         "port-name": "<xpdr-client-port-number>",
844                         "port-rack": "000000.00",
845                         "port-shelf": "Chassis#1"
846                     },
847                     "lgx": {
848                         "lgx-device-name": "Some lgx-device-name",
849                         "lgx-port-name": "Some lgx-port-name",
850                         "lgx-port-rack": "000000.00",
851                         "lgx-port-shelf": "00"
852                     }
853                 },
854                 "rx-direction": {
855                     "port": {
856                         "port-device-name": "<xpdr-client-port>",
857                         "port-type": "fixed",
858                         "port-name": "<xpdr-client-port-number>",
859                         "port-rack": "000000.00",
860                         "port-shelf": "Chassis#1"
861                     },
862                     "lgx": {
863                         "lgx-device-name": "Some lgx-device-name",
864                         "lgx-port-name": "Some lgx-port-name",
865                         "lgx-port-rack": "000000.00",
866                         "lgx-port-shelf": "00"
867                     }
868                 },
869                 "optic-type": "gray"
870             },
871             "due-date": "yyyy-mm-ddT00:00:01Z",
872             "operator-contact": "some-contact-info"
873         }
874     }
875
876 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
877 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
878 the devices.
879
880 OTN OCH-OTU4 service creation
881 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
882
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end OTU4 over an optical wavelength connectivity service
between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service configures the
optical network infrastructure composed of rdm nodes.
887
888 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
889
890 **Sample JSON Data**
891
892 .. code:: json
893
894     {
895         "input": {
896             "sdnc-request-header": {
897                 "request-id": "request-1",
898                 "rpc-action": "service-create",
899                 "request-system-id": "appname"
900             },
901             "service-name": "something",
902             "common-id": "commonId",
903             "connection-type": "infrastructure",
904             "service-a-end": {
905                 "service-rate": "100",
906                 "node-id": "<xpdr-node-id>",
907                 "service-format": "OTU",
908                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
909                 "clli": "<ccli-name>",
910                 "tx-direction": {
911                     "port": {
912                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
913                         "port-type": "fixed",
914                         "port-name": "<xpdr-network-port-in-otn-topology>",
915                         "port-rack": "000000.00",
916                         "port-shelf": "Chassis#1"
917                     },
918                     "lgx": {
919                         "lgx-device-name": "Some lgx-device-name",
920                         "lgx-port-name": "Some lgx-port-name",
921                         "lgx-port-rack": "000000.00",
922                         "lgx-port-shelf": "00"
923                     }
924                 },
925                 "rx-direction": {
926                     "port": {
927                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
928                         "port-type": "fixed",
929                         "port-name": "<xpdr-network-port-in-otn-topology>",
930                         "port-rack": "000000.00",
931                         "port-shelf": "Chassis#1"
932                     },
933                     "lgx": {
934                         "lgx-device-name": "Some lgx-device-name",
935                         "lgx-port-name": "Some lgx-port-name",
936                         "lgx-port-rack": "000000.00",
937                         "lgx-port-shelf": "00"
938                     }
939                 },
940                 "optic-type": "gray"
941             },
942             "service-z-end": {
943                 "service-rate": "100",
944                 "node-id": "<xpdr-node-id>",
945                 "service-format": "OTU",
946                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
947                 "clli": "<ccli-name>",
948                 "tx-direction": {
949                     "port": {
950                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
951                         "port-type": "fixed",
952                         "port-name": "<xpdr-network-port-in-otn-topology>",
953                         "port-rack": "000000.00",
954                         "port-shelf": "Chassis#1"
955                     },
956                     "lgx": {
957                         "lgx-device-name": "Some lgx-device-name",
958                         "lgx-port-name": "Some lgx-port-name",
959                         "lgx-port-rack": "000000.00",
960                         "lgx-port-shelf": "00"
961                     }
962                 },
963                 "rx-direction": {
964                     "port": {
965                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
966                         "port-type": "fixed",
967                         "port-name": "<xpdr-network-port-in-otn-topology>",
968                         "port-rack": "000000.00",
969                         "port-shelf": "Chassis#1"
970                     },
971                     "lgx": {
972                         "lgx-device-name": "Some lgx-device-name",
973                         "lgx-port-name": "Some lgx-port-name",
974                         "lgx-port-rack": "000000.00",
975                         "lgx-port-shelf": "00"
976                     }
977                 },
978                 "optic-type": "gray"
979             },
980             "due-date": "yyyy-mm-ddT00:00:01Z",
981             "operator-contact": "some-contact-info"
982         }
983     }
984
985 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
986 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
987 the devices.
988
989 OTSi-OTUC4 service creation
990 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
991
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end OTUC4 over an Optical Tributary Signal (OTSi)
connectivity service between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a
service configures the optical network infrastructure composed of rdm nodes.
996
997 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
998
999 **Sample JSON Data**
1000
1001 .. code:: json
1002
1003     {
1004         "input": {
1005             "sdnc-request-header": {
1006                 "request-id": "request-1",
1007                 "rpc-action": "service-create",
1008                 "request-system-id": "appname"
1009             },
1010             "service-name": "something",
1011             "common-id": "commonId",
1012             "connection-type": "infrastructure",
1013             "service-a-end": {
1014                 "service-rate": "400",
1015                 "node-id": "<xpdr-node-id>",
1016                 "service-format": "OTU",
1017                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1018                 "clli": "<ccli-name>",
1019                 "tx-direction": {
1020                     "port": {
1021                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1022                         "port-type": "fixed",
1023                         "port-name": "<xpdr-network-port-in-otn-topology>",
1024                         "port-rack": "000000.00",
1025                         "port-shelf": "Chassis#1"
1026                     },
1027                     "lgx": {
1028                         "lgx-device-name": "Some lgx-device-name",
1029                         "lgx-port-name": "Some lgx-port-name",
1030                         "lgx-port-rack": "000000.00",
1031                         "lgx-port-shelf": "00"
1032                     }
1033                 },
1034                 "rx-direction": {
1035                     "port": {
1036                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1037                         "port-type": "fixed",
1038                         "port-name": "<xpdr-network-port-in-otn-topology>",
1039                         "port-rack": "000000.00",
1040                         "port-shelf": "Chassis#1"
1041                     },
1042                     "lgx": {
1043                         "lgx-device-name": "Some lgx-device-name",
1044                         "lgx-port-name": "Some lgx-port-name",
1045                         "lgx-port-rack": "000000.00",
1046                         "lgx-port-shelf": "00"
1047                     }
1048                 },
1049                 "optic-type": "gray"
1050             },
1051             "service-z-end": {
1052                 "service-rate": "400",
1053                 "node-id": "<xpdr-node-id>",
1054                 "service-format": "OTU",
1055                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1056                 "clli": "<ccli-name>",
1057                 "tx-direction": {
1058                     "port": {
1059                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1060                         "port-type": "fixed",
1061                         "port-name": "<xpdr-network-port-in-otn-topology>",
1062                         "port-rack": "000000.00",
1063                         "port-shelf": "Chassis#1"
1064                     },
1065                     "lgx": {
1066                         "lgx-device-name": "Some lgx-device-name",
1067                         "lgx-port-name": "Some lgx-port-name",
1068                         "lgx-port-rack": "000000.00",
1069                         "lgx-port-shelf": "00"
1070                     }
1071                 },
1072                 "rx-direction": {
1073                     "port": {
1074                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1075                         "port-type": "fixed",
1076                         "port-name": "<xpdr-network-port-in-otn-topology>",
1077                         "port-rack": "000000.00",
1078                         "port-shelf": "Chassis#1"
1079                     },
1080                     "lgx": {
1081                         "lgx-device-name": "Some lgx-device-name",
1082                         "lgx-port-name": "Some lgx-port-name",
1083                         "lgx-port-rack": "000000.00",
1084                         "lgx-port-shelf": "00"
1085                     }
1086                 },
1087                 "optic-type": "gray"
1088             },
1089             "due-date": "yyyy-mm-ddT00:00:01Z",
1090             "operator-contact": "some-contact-info"
1091         }
1092     }
1093
1094 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
1095 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
1096 the devices.
1097
One shall note that in Phosphorus SR0, as the OpenROADM 400G specifications are not available (neither
in the GNPy libraries, nor in the *PCE* module), path validation will be performed using the same
assumptions as for 100G. This means the path may be validated even though the optical performance does
not reach the expected levels. This allows testing OpenROADM devices implementing B100G rates, but shall
not be used in operational conditions. The support for higher-rate impairment-aware path computation
will be introduced across the Phosphorus release train.
1104
1105 ODUC4 service creation
1106 ^^^^^^^^^^^^^^^^^^^^^^
1107
For ODUC4 service creation, the REST RPC to invoke the *service handler* module in order to create an
ODUC4 over the OTSi-OTUC4 has the same format as the RPC used for the creation of the latter. Only
"service-format" needs to be changed to "ODU", and "otu-service-rate" :
"org-openroadm-otn-common-types:OTUCn" needs to be replaced by "odu-service-rate" :
"org-openroadm-otn-common-types:ODUCn" in both the service-a-end and service-z-end containers.
1113
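As a purely illustrative sketch, the transformation described above can be applied programmatically
to the OTSi-OTUC4 payload of the previous section; the file name used here is an assumption.

.. code:: python

    import copy
    import json

    # otsi-otuc4-create.json holds the OTSi-OTUC4 service-create payload of the previous section.
    with open("otsi-otuc4-create.json") as f:
        otsi_otuc4 = json.load(f)

    oduc4 = copy.deepcopy(otsi_otuc4)
    oduc4["input"]["service-name"] = "ODUC4-service"
    for end in ("service-a-end", "service-z-end"):
        endpoint = oduc4["input"][end]
        endpoint["service-format"] = "ODU"
        # Replace the OTU rate leaf by the ODU rate leaf, as described above.
        endpoint.pop("otu-service-rate", None)
        endpoint["odu-service-rate"] = "org-openroadm-otn-common-types:ODUCn"

    print(json.dumps(oduc4, indent=2))
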
1114 OTN HO-ODU4 service creation
1115 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1116
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end ODU4 OTN service over an OTU4, structured to support
low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between two network
ports of OTN Xponders (MUXPDR or SWITCH).
1121
1122 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
1123
1124 **Sample JSON Data**
1125
1126 .. code:: json
1127
1128     {
1129         "input": {
1130             "sdnc-request-header": {
1131                 "request-id": "request-1",
1132                 "rpc-action": "service-create",
1133                 "request-system-id": "appname"
1134             },
1135             "service-name": "something",
1136             "common-id": "commonId",
1137             "connection-type": "infrastructure",
1138             "service-a-end": {
1139                 "service-rate": "100",
1140                 "node-id": "<xpdr-node-id>",
1141                 "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
1143                 "clli": "<ccli-name>",
1144                 "tx-direction": {
1145                     "port": {
1146                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1147                         "port-type": "fixed",
1148                         "port-name": "<xpdr-network-port-in-otn-topology>",
1149                         "port-rack": "000000.00",
1150                         "port-shelf": "Chassis#1"
1151                     },
1152                     "lgx": {
1153                         "lgx-device-name": "Some lgx-device-name",
1154                         "lgx-port-name": "Some lgx-port-name",
1155                         "lgx-port-rack": "000000.00",
1156                         "lgx-port-shelf": "00"
1157                     }
1158                 },
1159                 "rx-direction": {
1160                     "port": {
1161                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1162                         "port-type": "fixed",
1163                         "port-name": "<xpdr-network-port-in-otn-topology>",
1164                         "port-rack": "000000.00",
1165                         "port-shelf": "Chassis#1"
1166                     },
1167                     "lgx": {
1168                         "lgx-device-name": "Some lgx-device-name",
1169                         "lgx-port-name": "Some lgx-port-name",
1170                         "lgx-port-rack": "000000.00",
1171                         "lgx-port-shelf": "00"
1172                     }
1173                 },
1174                 "optic-type": "gray"
1175             },
1176             "service-z-end": {
1177                 "service-rate": "100",
1178                 "node-id": "<xpdr-node-id>",
1179                 "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
1181                 "clli": "<ccli-name>",
1182                 "tx-direction": {
1183                     "port": {
1184                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1185                         "port-type": "fixed",
1186                         "port-name": "<xpdr-network-port-in-otn-topology>",
1187                         "port-rack": "000000.00",
1188                         "port-shelf": "Chassis#1"
1189                     },
1190                     "lgx": {
1191                         "lgx-device-name": "Some lgx-device-name",
1192                         "lgx-port-name": "Some lgx-port-name",
1193                         "lgx-port-rack": "000000.00",
1194                         "lgx-port-shelf": "00"
1195                     }
1196                 },
1197                 "rx-direction": {
1198                     "port": {
1199                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1200                         "port-type": "fixed",
1201                         "port-name": "<xpdr-network-port-in-otn-topology>",
1202                         "port-rack": "000000.00",
1203                         "port-shelf": "Chassis#1"
1204                     },
1205                     "lgx": {
1206                         "lgx-device-name": "Some lgx-device-name",
1207                         "lgx-port-name": "Some lgx-port-name",
1208                         "lgx-port-rack": "000000.00",
1209                         "lgx-port-shelf": "00"
1210                     }
1211                 },
1212                 "optic-type": "gray"
1213             },
1214             "due-date": "yyyy-mm-ddT00:00:01Z",
1215             "operator-contact": "some-contact-info"
1216         }
1217     }
1218
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology*, which must contain OTU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* modules to implement the end-to-end path on the devices.
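
The RPC returns a synchronous acknowledgement before the path is actually rendered. The snippet below is
only an indicative sketch of the kind of response body to expect, assuming the *configuration-response-common*
grouping of the OpenROADM service models; the actual response codes and messages depend on the release and on
the outcome of the request.

.. code:: json

    {
        "output": {
            "configuration-response-common": {
                "request-id": "request-1",
                "response-code": "200",
                "response-message": "PCE calculation in progress",
                "ack-final-indicator": "No"
            }
        }
    }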
1222
1223 OTN 10GE-ODU2e service creation
1224 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1225
Use the following REST RPC to invoke the *service handler* module in order to create, over the OTN
infrastructure, a bidirectional end-to-end 10GE-ODU2e OTN service carried over an ODU4.
Such a service must be created between two client ports of OTN Xponders (MUXPDR or SWITCH)
configured to support 10GE interfaces.
1230
1231 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
1232
1233 **Sample JSON Data**
1234
1235 .. code:: json
1236
1237     {
1238         "input": {
1239             "sdnc-request-header": {
1240                 "request-id": "request-1",
1241                 "rpc-action": "service-create",
1242                 "request-system-id": "appname"
1243             },
1244             "service-name": "something",
1245             "common-id": "commonId",
1246             "connection-type": "service",
1247             "service-a-end": {
1248                 "service-rate": "10",
1249                 "node-id": "<xpdr-node-id>",
1250                 "service-format": "Ethernet",
1251                 "clli": "<ccli-name>",
1252                 "subrate-eth-sla": {
1253                     "subrate-eth-sla": {
1254                         "committed-info-rate": "10000",
1255                         "committed-burst-size": "64"
1256                     }
1257                 },
1258                 "tx-direction": {
1259                     "port": {
1260                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1261                         "port-type": "fixed",
1262                         "port-name": "<xpdr-client-port-in-otn-topology>",
1263                         "port-rack": "000000.00",
1264                         "port-shelf": "Chassis#1"
1265                     },
1266                     "lgx": {
1267                         "lgx-device-name": "Some lgx-device-name",
1268                         "lgx-port-name": "Some lgx-port-name",
1269                         "lgx-port-rack": "000000.00",
1270                         "lgx-port-shelf": "00"
1271                     }
1272                 },
1273                 "rx-direction": {
1274                     "port": {
1275                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1276                         "port-type": "fixed",
1277                         "port-name": "<xpdr-client-port-in-otn-topology>",
1278                         "port-rack": "000000.00",
1279                         "port-shelf": "Chassis#1"
1280                     },
1281                     "lgx": {
1282                         "lgx-device-name": "Some lgx-device-name",
1283                         "lgx-port-name": "Some lgx-port-name",
1284                         "lgx-port-rack": "000000.00",
1285                         "lgx-port-shelf": "00"
1286                     }
1287                 },
1288                 "optic-type": "gray"
1289             },
1290             "service-z-end": {
1291                 "service-rate": "10",
1292                 "node-id": "<xpdr-node-id>",
1293                 "service-format": "Ethernet",
1294                 "clli": "<ccli-name>",
1295                 "subrate-eth-sla": {
1296                     "subrate-eth-sla": {
1297                         "committed-info-rate": "10000",
1298                         "committed-burst-size": "64"
1299                     }
1300                 },
1301                 "tx-direction": {
1302                     "port": {
1303                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1304                         "port-type": "fixed",
1305                         "port-name": "<xpdr-client-port-in-otn-topology>",
1306                         "port-rack": "000000.00",
1307                         "port-shelf": "Chassis#1"
1308                     },
1309                     "lgx": {
1310                         "lgx-device-name": "Some lgx-device-name",
1311                         "lgx-port-name": "Some lgx-port-name",
1312                         "lgx-port-rack": "000000.00",
1313                         "lgx-port-shelf": "00"
1314                     }
1315                 },
1316                 "rx-direction": {
1317                     "port": {
1318                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1319                         "port-type": "fixed",
1320                         "port-name": "<xpdr-client-port-in-otn-topology>",
1321                         "port-rack": "000000.00",
1322                         "port-shelf": "Chassis#1"
1323                     },
1324                     "lgx": {
1325                         "lgx-device-name": "Some lgx-device-name",
1326                         "lgx-port-name": "Some lgx-port-name",
1327                         "lgx-port-rack": "000000.00",
1328                         "lgx-port-shelf": "00"
1329                     }
1330                 },
1331                 "optic-type": "gray"
1332             },
1333             "due-date": "yyyy-mm-ddT00:00:01Z",
1334             "operator-contact": "some-contact-info"
1335         }
1336     }
1337
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology*, which must contain ODU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* modules to implement the end-to-end path on the devices.
1341
1342
1343 .. note::
    Since Magnesium SR2, the service entries corresponding to OCH-OTU4, ODU4 and 10GE-ODU2e services are
    updated in the *service-list* datastore.
1346
1347 .. note::
    *trib-slot* is used when the equipment supports contiguous trib-slot allocation (supported from
    Magnesium SR0). The trib-slot provided corresponds to the first of the trib-slots used.
    *complex-trib-slots* will be used when the equipment does not support contiguous trib-slot
    allocation. In this case, a list of all the trib-slots to be used shall be provided
    (see the illustrative fragment below).
    Support for non-contiguous trib-slot allocation is planned for a later release.
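
To make the distinction concrete, the hypothetical fragment below illustrates the two alternative forms: a
single *trib-slot* value pointing at the first slot of a contiguous range, versus a *complex-trib-slots* list
enumerating every slot to be used. The two leaves are alternatives and are shown side by side only for
illustration; where exactly they appear in the request depends on the RPC and on the device model, so treat
the structure as an assumption rather than a reference.

.. code:: json

    {
        "trib-slot": ["1"],
        "complex-trib-slots": ["1", "3", "5", "7"]
    }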
1353
1354 Deleting a service
1355 ~~~~~~~~~~~~~~~~~~
1356
1357 Deleting any kind of service
1358 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1359
Use the following REST RPC to invoke the *service handler* module in order to delete a given optical
1361 connectivity service.
1362
1363 **REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*
1364
1365 **Sample JSON Data**
1366
1367 .. code:: json
1368
1369     {
1370         "input": {
1371             "sdnc-request-header": {
1372                 "request-id": "request-1",
1373                 "rpc-action": "service-delete",
1374                 "request-system-id": "appname",
1375                 "notification-url": "http://localhost:8585/NotificationServer/notify"
1376             },
1377             "service-delete-req-info": {
1378                 "service-name": "something",
1379                 "tail-retention": "no"
1380             }
1381         }
1382     }
1383
The most important parameter of this REST RPC is the *service-name*.
1385
1386
1387 .. note::
    Deleting OTN services implies proceeding in the reverse order of their creation. Thus, OTN
    service deletion must respect the following three steps:

    1. first delete all 10GE services supported over any ODU4 to be deleted
    2. delete the ODU4
    3. delete the OCH-OTU4 supporting the just deleted ODU4
1393
1394 Invoking PCE module
1395 ~~~~~~~~~~~~~~~~~~~
1396
Use the following REST RPCs to invoke the *PCE* module in order to check connectivity between xponder
nodes and the availability of a supporting optical connectivity between the network ports of the
nodes.
1400
1401 Checking OTU4 service connectivity
1402 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1403
1404 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1405
1406 **Sample JSON Data**
1407
1408 .. code:: json
1409
1410    {
1411       "input": {
1412            "service-name": "something",
1413            "resource-reserve": "true",
1414            "service-handler-header": {
1415              "request-id": "request1"
1416            },
1417            "service-a-end": {
1418              "service-rate": "100",
1419              "clli": "<clli-node>",
1420              "service-format": "OTU",
1421              "node-id": "<otn-node-id>"
1422            },
1423            "service-z-end": {
1424              "service-rate": "100",
1425              "clli": "<clli-node>",
1426              "service-format": "OTU",
1427              "node-id": "<otn-node-id>"
1428              },
1429            "pce-metric": "hop-count"
1430        }
1431    }
1432
1433 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the *openroadm-network* topology
    layer.
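
When the path computation succeeds, the RPC answers with a path description in addition to the usual
acknowledgement. The heavily trimmed sample below is indicative only, assuming the
*configuration-response-common* and *response-parameters*/*path-description* containers of the
transportpce-pce model; the actual resource lists are much longer and depend on the computed path.

.. code:: json

    {
        "output": {
            "configuration-response-common": {
                "request-id": "request1",
                "response-code": "200",
                "response-message": "Path is calculated",
                "ack-final-indicator": "Yes"
            },
            "response-parameters": {
                "path-description": {
                    "aToZ-direction": { "rate": 100, "aToZ": [ "..." ] },
                    "zToA-direction": { "rate": 100, "zToA": [ "..." ] }
                }
            }
        }
    }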
1436
1437 Checking ODU4 service connectivity
1438 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1439
1440 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1441
1442 **Sample JSON Data**
1443
1444 .. code:: json
1445
1446    {
1447       "input": {
1448            "service-name": "something",
1449            "resource-reserve": "true",
1450            "service-handler-header": {
1451              "request-id": "request1"
1452            },
1453            "service-a-end": {
1454              "service-rate": "100",
1455              "clli": "<clli-node>",
1456              "service-format": "ODU",
1457              "node-id": "<otn-node-id>"
1458            },
1459            "service-z-end": {
1460              "service-rate": "100",
1461              "clli": "<clli-node>",
1462              "service-format": "ODU",
1463              "node-id": "<otn-node-id>"
1464              },
1465            "pce-metric": "hop-count"
1466        }
1467    }
1468
1469 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the *otn-topology* layer.
1471
1472 Checking 10GE/ODU2e service connectivity
1473 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1474
1475 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1476
1477 **Sample JSON Data**
1478
1479 .. code:: json
1480
1481    {
1482       "input": {
1483            "service-name": "something",
1484            "resource-reserve": "true",
1485            "service-handler-header": {
1486              "request-id": "request1"
1487            },
1488            "service-a-end": {
1489              "service-rate": "10",
1490              "clli": "<clli-node>",
1491              "service-format": "Ethernet",
1492              "node-id": "<otn-node-id>"
1493            },
1494            "service-z-end": {
1495              "service-rate": "10",
1496              "clli": "<clli-node>",
1497              "service-format": "Ethernet",
1498              "node-id": "<otn-node-id>"
1499              },
1500            "pce-metric": "hop-count"
1501        }
1502    }
1503
1504 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the *otn-topology* layer.
1506
1507
1508 odl-transportpce-tapi
1509 ---------------------
1510
This feature allows the TransportPCE application to expose at its northbound interface APIs other than
those defined by the OpenROADM MSA. With this feature, TransportPCE provides part of the Transport-API
specified by the Open Networking Foundation. More specifically, the Topology, Connectivity and Notification
Service components are implemented, making it possible to:
1515
1516 1. Expose to higher level applications an abstraction of its OpenROADM topologies in the form of topologies respecting the T-API modelling.
1517 2. Create/delete connectivity services between the Service Interface Points (SIPs) exposed by the T-API topology.
1518 3. Create/Delete Notification Subscription Service to expose to higher level applications T-API notifications through a Kafka server.
1519
1520 The current version of TransportPCE implements the *tapi-topology.yang*,
1521 *tapi-connectivity.yang* and *tapi-notification.yang* models in the revision
1522 2018-12-10 (T-API v2.1.2).
1523
1524 Additionally, support for the Path Computation Service will be added in future releases, which will allow T-PCE
1525 to compute a path over the T-API topology.
1526
1527 T-API Topology Service
1528 ~~~~~~~~~~~~~~~~~~~~~~
1529
1530 -  RPC calls implemented:
1531
1532    -  get-topology-details
1533
1534    -  get-node-details
1535
1536    -  get-node-edge-point-details
1537
1538    -  get-link-details
1539
1540    -  get-topology-list
1541
1542
As in IETF or OpenROADM topologies, T-API topologies are composed of lists of nodes and links that
abstract a set of network resources. T-API specifies the *T0 - Multi-layer topology* which is, as
indicated by its name, a single topology that collapses the network logical abstraction of all network
layers. Thus, an OpenROADM device such as, for example, an OTN xponder that manages the ETH, ODU, OTU
and optical wavelength network layers, will be represented in the T-API T0 topology by two nodes:
one *DSR/ODU* node and one *Photonic Media* node. The two nodes are linked together through one or
several *transitional links*, depending on the number of network/line ports on the device.
1550
Aluminium SR2 comes with a complete refactoring of this module, handling in the same way the multi-layer
abstraction of any Xponder terminal device, whether it is a 100G transponder, an OTN muxponder or
an OTN switch. For all these devices, the implementation ensures that only relevant
ports appear in the resulting TAPI topology abstraction. In other words, only client/network ports
that are indirectly/directly connected to the ROADM infrastructure are considered for the abstraction.
Moreover, the whole ROADM infrastructure of the network is also abstracted into a single photonic
node. Therefore, a pair of unidirectional xponder-output/xponder-input links present in *openroadm-topology*
is represented by a bidirectional *OMS* link in the TAPI topology.
In the same way, a pair of unidirectional OTN links (OTU4, ODU4) present in *otn-topology* is also
represented by a bidirectional OTN link in the TAPI topology, while retaining its available bandwidth
characteristics.
1562
Phosphorus SR0 extends the T-API topology service implementation by bringing a fully described topology.
*T0 - Full Multi-layer topology* is derived from the existing *T0 - Multi-layer topology*, but the ROADM
infrastructure is no longer abstracted, so a higher level application can get more details on the composition
of the ROADM infrastructure controlled by TransportPCE. Each ROADM node found in the *openroadm-network*
is converted into a *Photonic Media* node, whose details are obtained from the *openroadm-topology*.
Therefore, the external traffic ports of *Degree* and *SRG* nodes are represented with a set of Network
Edge Points (NEPs) and SIPs belonging to the *Photonic Media* node, and a pair of roadm-to-roadm links
present in *openroadm-topology* is represented by a bidirectional *OMS* link in the TAPI topology.
Additionally, T-API topology related information is stored in the TransportPCE datastore in the same way as
the OpenROADM topology layers. When a node is connected to the controller through the corresponding *REST API*,
the T-API topology context is dynamically updated and stored.
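
Assuming the context is exposed under the standard *tapi-common:context* root (an assumption: the exact URL
may differ depending on the release and on the RESTCONF version in use), the stored T-API topology information
can also be browsed directly from the operational datastore:

**REST API** : *GET /restconf/operational/tapi-common:context*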
1575
1576 .. note::
1577
    A naming nomenclature is defined to be able to map T-API and OpenROADM data,
    e.g. T-API_roadm_Name = OpenROADM_roadmID+T-API_layer
    and T-API_roadm_nep_Name = OpenROADM_roadmID+T-API_layer+OpenROADM_terminationPointID.
1581
Three kinds of topologies are currently implemented. The first one is the *"T0 - Multi-layer topology"*
defined in the reference implementation of T-API. This topology gives an abstraction of the data coming
from openroadm-topology and otn-topology. Such a topology may be rather complex since most devices are
represented through several nodes and links.
Another topology, named *"Transponder 100GE"*, is also implemented. The latter provides a much simpler,
higher level of abstraction for the specific case of 100GE transponders, in the form of a single
DSR node.
Lastly, the *"T0 - Full Multi-layer topology"* was added. This topology collapses the data coming
from openroadm-network, openroadm-topology and otn-topology. It gives a complete view of the optical
network as defined in the reference implementation of T-API.
1592
1593 The figure below shows an example of TAPI abstractions as performed by TransportPCE starting from Aluminium SR2.
1594
1595 .. figure:: ./images/TransportPCE-tapi-abstraction.jpg
1596    :alt: Example of T0-multi-layer TAPI abstraction in TransportPCE
1597
In this specific case, as far as the "A" side is concerned, TransportPCE is connected to two xponder
terminal devices at the netconf level:

- XPDR-A1 is a 100GE transponder and is represented by the XPDR-A1-XPDR1 node in *otn-topology*
- SPDR-SA1 is an OTN xponder that actually contains two OTN xponder nodes in its device configuration
  datastore (the OTN muxponder 10GE=>100G SPDR-SA1-XPDR1 and the OTN switch 4x100GE=>4x100G SPDR-SA1-XPDR2)

As represented on the bottom part of the figure, only one network port of XPDR-A1-XPDR1 is connected
to the ROADM infrastructure, and only one network port of the OTN muxponder is attached to the
ROADM infrastructure.
Such a network configuration results in the TAPI *T0 - Multi-layer topology* abstraction represented
in the center of the figure. Note that the OTN switch (SPDR-SA1-XPDR2), not being attached to the
ROADM infrastructure, is not abstracted.
Moreover, the 100GE transponder being connected, the TAPI *Transponder 100GE* topology results in a
single-layer DSR node with only the two Owned Node Edge Points representing the two 100GE client ports
of XPDR-A1-XPDR1 and XPDR-C1-XPDR1 respectively.
1612
1613
1614 **REST API** : *POST /restconf/operations/tapi-topology:get-topology-details*
1615
1616 This request builds the TAPI *T0 - Multi-layer topology* abstraction with regard to the current
1617 state of *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
1618
1619 **Sample JSON Data**
1620
1621 .. code:: json
1622
1623     {
1624       "tapi-topology:input": {
1625         "tapi-topology:topology-id-or-name": "T0 - Multi-layer topology"
1626        }
1627     }
1628
1629 This request builds the TAPI *Transponder 100GE* abstraction with regard to the current state of
1630 *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
Its main interest is to directly retrieve the 100GE client ports of 100G transponders that may
be connected together through a point-to-point 100GE service running over a wavelength.
1633
1634 .. code:: json
1635
1636     {
1637       "tapi-topology:input": {
1638         "tapi-topology:topology-id-or-name": "Transponder 100GE"
1639         }
1640     }
1641
1642
1643 .. note::
1644
    As for the *T0 - Multi-layer* topology, only 100GE client ports whose associated 100G line
    port is connected to Add/Drop nodes of the ROADM infrastructure are retrieved, in order to
    abstract only relevant information.
1648
1649 This request builds the TAPI *T0 - Full Multi-layer* topology with respect to the information existing in
1650 the T-API topology context stored in OpenDaylight datastores.
1651
1652 .. code:: json
1653
1654     {
1655       "tapi-topology:input": {
1656         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology"
1657         }
1658     }
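
All three requests return the corresponding topology object taken from the T-API Topology Context. The heavily
trimmed sample below only sketches the overall shape of the answer (a *topology* with its *uuid*, *node* and
*link* lists); it is indicative and not an exhaustive reference of the returned attributes.

.. code:: json

    {
      "tapi-topology:output": {
        "topology": {
          "uuid": "<topology-uuid>",
          "name": [ "..." ],
          "node": [ "..." ],
          "link": [ "..." ]
        }
      }
    }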
1659
1660 **REST API** : *POST /restconf/operations/tapi-topology:get-node-details*
1661
1662 This request returns the information, stored in the Topology Context, of the corresponding T-API node.
The user can provide either the UUID associated with the node or its name.
1664
1665 **Sample JSON Data**
1666
1667 .. code:: json
1668
1669     {
1670       "tapi-topology:input": {
1671         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1672         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA"
1673       }
1674     }
1675
1676 **REST API** : *POST /restconf/operations/tapi-topology:get-node-edge-point-details*
1677
1678 This request returns the information, stored in the Topology Context, of the corresponding T-API NEP.
The user can provide either the UUID associated with the NEP or its name.
1680
1681 **Sample JSON Data**
1682
1683 .. code:: json
1684
1685     {
1686       "tapi-topology:input": {
1687         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1688         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA",
1689         "tapi-topology:ep-id-or-name": "ROADM-A1+PHOTONIC_MEDIA+DEG1-TTP-TXRX"
1690       }
1691     }
1692
1693 **REST API** : *POST /restconf/operations/tapi-topology:get-link-details*
1694
1695 This request returns the information, stored in the Topology Context, of the corresponding T-API link.
The user can provide either the UUID associated with the link or its name.
1697
1698 **Sample JSON Data**
1699
1700 .. code:: json
1701
1702     {
1703       "tapi-topology:input": {
1704         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1705         "tapi-topology:link-id-or-name": "ROADM-C1-DEG1-DEG1-TTP-TXRXtoROADM-A1-DEG2-DEG2-TTP-TXRX"
1706       }
1707     }
1708
1709 T-API Connectivity & Common Services
1710 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1711
Phosphorus SR0 extends the T-API interface support by implementing the T-API Connectivity Service.
This interface enables a higher level controller or an orchestrator to request the creation of
connectivity services as defined in the *tapi-connectivity* model. As it is necessary to indicate the
two (or more) SIPs (or endpoints) of the connectivity service, the *tapi-common* model is implemented
to retrieve from the datastore all the information related to the SIPs in the tapi-context.
The current implementation of the connectivity service maps the *connectivity-request* into the appropriate
*openroadm-service-create* and relies on the Service Handler to perform path calculation and configuration
of devices. Results received from the PCE and the Renderer are mapped back into T-API to create the
corresponding Connection End Points (CEPs) and Connections in the T-API Connectivity Context and store them
in the datastore.
1722
1723 This first implementation includes the creation of:
1724
1725 -   ROADM-to-ROADM tapi-connectivity service (MC connectivity service)
1726 -   OTN tapi-connectivity services (OCh/OTU, OTSi/OTU & ODU connectivity services)
1727 -   Ethernet tapi-connectivity services (DSR connectivity service)
1728
1729 -  RPC calls implemented
1730
1731    -  create-connectivity-service
1732
1733    -  get-connectivity-service-details
1734
1735    -  get-connection-details
1736
1737    -  delete-connectivity-service
1738
1739    -  get-connection-end-point-details
1740
1741    -  get-connectivity-service-list
1742
1743    -  get-service-interface-point-details
1744
1745    -  get-service-interface-point-list
1746
1747 Creating a T-API Connectivity service
1748 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1749
Use the *tapi* interface to create any end-to-end connectivity service on a T-API based
network. Two kinds of end-to-end "optical" connectivity services are managed by the TransportPCE T-API module:

- 10GE services from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
- Media Channel (MC) connectivity services from the client add/drop port (PP port of SRG) of one ROADM to
  the client add/drop port of another ROADM.
1755
As mentioned earlier, the T-API module interfaces with the Service Handler to automatically invoke the
*renderer* module and create all required TAPI connections and cross-connections on each device
supporting the service.

Before creating a low-order OTN connectivity service (1GE or 10GE services terminating on the
client port of a MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container
already exists and has been configured (i.e. structured) to support low-order OTN containers.
1764
Thus, OTN connectivity service creation implies three steps:

1. OTSi/OTU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH, in the Photonic Media layer)
2. ODU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH, in the DSR/ODU layer)
3. 10GE connectivity service from client port to client port of two OTN Xponders (MUXPDR or SWITCH, in the DSR/ODU layer)
1769
1770 The first step corresponds to the OCH-OTU4 service from network port to network port of OpenROADM.
1771 The corresponding T-API cross and top connections are created between the CEPs of the T-API nodes
1772 involved in each request.
1773
1774 Additionally, an *MC connectivity service* could be created between two ROADMs to create an optical
1775 tunnel and reserve resources in advance. This kind of service corresponds to the OC service creation
1776 use case described earlier.
1777
1778 The management of other OTN services through T-API (1GE-ODU0, 100GE...) is planned for future releases.
1779
1780 Any-Connectivity service creation
1781 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1782 As for the Service Creation described for OpenROADM, the initial steps are the same:
1783
1784 -   Connect netconf devices to the controller
1785 -   Create XPDR-RDM links and configure RDM-to-RDM links (in openroadm topologies)
1786
1787 Bidirectional T-API links between xpdr and rdm nodes must be created manually. To that end, use the
1788 following REST RPCs:
1789
1790 From xpdr <--> rdm:
1791 ^^^^^^^^^^^^^^^^^^^
1792
1793 **REST API** : *POST /restconf/operations/transportpce-tapinetworkutils:init-xpdr-rdm-tapi-link*
1794
1795 **Sample JSON Data**
1796
1797 .. code:: json
1798
1799     {
1800         "input": {
1801             "xpdr-node": "<XPDR_OpenROADM_id>",
1802             "network-tp": "<XPDR_TP_OpenROADM_id>",
1803             "rdm-node": "<ROADM_OpenROADM_id>",
1804             "add-drop-tp": "<ROADM_TP_OpenROADM_id>"
1805         }
1806     }
1807
Use the following REST RPC to invoke the T-API module in order to create a bidirectional connectivity
service between two devices. The network should be composed of two ROADMs and two Xponders (SWITCH or MUX).
1810
1811 **REST API** : *POST /restconf/operations/tapi-connectivity:create-connectivity-service*
1812
1813 **Sample JSON Data**
1814
1815 .. code:: json
1816
1817     {
1818         "tapi-connectivity:input": {
1819             "tapi-connectivity:end-point": [
1820                 {
1821                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1822                     "tapi-connectivity:service-interface-point": {
1823                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1824                     },
1825                     "tapi-connectivity:administrative-state": "UNLOCKED",
1826                     "tapi-connectivity:operational-state": "ENABLED",
1827                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1828                     "tapi-connectivity:role": "SYMMETRIC",
1829                     "tapi-connectivity:protection-role": "WORK",
1830                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1831                     "tapi-connectivity:name": [
1832                         {
1833                             "tapi-connectivity:value-name": "OpenROADM node id",
1834                             "tapi-connectivity:value": "<OpenROADM node ID>"
1835                         }
1836                     ]
1837                 },
1838                 {
1839                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1840                     "tapi-connectivity:service-interface-point": {
1841                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1842                     },
1843                     "tapi-connectivity:administrative-state": "UNLOCKED",
1844                     "tapi-connectivity:operational-state": "ENABLED",
1845                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1846                     "tapi-connectivity:role": "SYMMETRIC",
1847                     "tapi-connectivity:protection-role": "WORK",
1848                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1849                     "tapi-connectivity:name": [
1850                         {
1851                             "tapi-connectivity:value-name": "OpenROADM node id",
1852                             "tapi-connectivity:value": "<OpenROADM node ID>"
1853                         }
1854                     ]
1855                 }
1856             ],
1857             "tapi-connectivity:connectivity-constraint": {
1858                 "tapi-connectivity:service-layer": "<TAPI_Service_Layer>",
1859                 "tapi-connectivity:service-type": "POINT_TO_POINT_CONNECTIVITY",
1860                 "tapi-connectivity:service-level": "Some service-level",
1861                 "tapi-connectivity:requested-capacity": {
1862                     "tapi-connectivity:total-size": {
1863                         "value": "<CAPACITY>",
1864                         "unit": "GB"
1865                     }
1866                 }
1867             },
1868             "tapi-connectivity:state": "Some state"
1869         }
1870     }
1871
As for the previous RPC, MC and OTSi correspond to PHOTONIC_MEDIA layer services,
ODU to ODU layer services and 10GE/DSR to DSR layer services. This RPC invokes the
*Service Handler* module to trigger the *PCE* to compute a path over the
*otn-topology*, which must contain ODU4 links with valid bandwidth parameters. Once the path is computed
and validated, the T-API CEPs (associated with a NEP), cross connections and top connections are created
according to the service request and the topology objects inside the computed path. Then, the *renderer* and
*OLM* are invoked to implement the end-to-end path on the devices and to update the status of the connections
and of the connectivity service.
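
On success, the RPC answers with the created connectivity service object, whose *uuid* is the handle to use in
the subsequent get/delete RPCs. The trimmed sketch below assumes the standard *tapi-connectivity* output
structure and is indicative only:

.. code:: json

    {
        "tapi-connectivity:output": {
            "service": {
                "uuid": "<Service_UUID>",
                "end-point": [ "..." ],
                "connection": [ "..." ]
            }
        }
    }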
1880
1881 .. note::
    Refer to the "Unconstrained E2E Service Provisioning" use cases of the T-API Reference Implementation to get
    more details about the process of connectivity service creation.
1884
1885 Deleting a connectivity service
1886 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1887
Use the following REST RPC to invoke the *TAPI* module in order to delete a given optical
1889 connectivity service.
1890
1891 **REST API** : *POST /restconf/operations/tapi-connectivity:delete-connectivity-service*
1892
1893 **Sample JSON Data**
1894
1895 .. code:: json
1896
1897     {
1898         "tapi-connectivity:input": {
1899             "tapi-connectivity:service-id-or-name": "<Service_UUID_or_Name>"
1900         }
1901     }
1902
1903 .. note::
    Deleting OTN connectivity services implies proceeding in the reverse order of their creation. Thus, OTN
    connectivity service deletion must respect the following three steps:

    1. first delete all 10GE services supported over any ODU4 to be deleted
    2. delete the ODU4
    3. delete the MC-OTSi supporting the just deleted ODU4
1909
1910 T-API Notification Service
1911 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1912
1913 -  RPC calls implemented:
1914
1915    -  create-notification-subscription-service
1916
1917    -  get-supported-notification-types
1918
1919    -  delete-notification-subscription-service
1920
1921    -  get-notification-subscription-service-details
1922
1923    -  get-notification-subscription-service-list
1924
1925    -  get-notification-list
1926
Sulfur SR1 extends the T-API interface support by implementing the T-API notification service. This feature
allows TransportPCE to write and read tapi-notifications stored in topics of a Kafka server. It also upgrades
the nbinotifications module to support the serialization and deserialization of tapi-notifications to and
from JSON format. The current implementation of the notification service creates a Kafka topic and stores
tapi-notifications upon reception of a create-notification-subscription-service request. Only connectivity-service
related notifications are stored in the Kafka server.
1933
In comparison with OpenROADM notifications, for which several pre-defined Kafka topics are created when the
nbinotifications module is instantiated, tapi-related Kafka topics are created on demand. Upon reception of a
*create-notification-subscription-service* request, a new topic is created in the Kafka server.
This topic is named after the connectivity-service UUID.
1938
1939 .. note::
    A Notification Subscription Service creation request may include a list of T-API object UUIDs; in that case,
    one topic per UUID is created in the Kafka server.
1942
In the current implementation, only Connectivity Service related notifications are supported.
1944
1945 **REST API** : *POST /restconf/operations/tapi-notification:get-supported-notification-types*
1946
The response body includes the supported notification types and object types.
1948
1949 Use the following RPC to create a Notification Subscription Service.
1950
1951 **REST API** : *POST /restconf/operations/tapi-notification:create-notification-subscription-service*
1952
1953 **Sample JSON Data**
1954
1955 .. code:: json
1956
1957     {
1958         "tapi-notification:input": {
1959             "tapi-notification:subscription-filter": {
1960                 "tapi-notification:requested-notification-types": [
1961                     "ALARM_EVENT"
1962                 ],
1963                 "tapi-notification:requested-object-types": [
1964                     "CONNECTIVITY_SERVICE"
1965                 ],
1966                 "tapi-notification:requested-layer-protocols": [
1967                     "<LAYER_PROTOCOL_NAME>"
1968                 ],
1969                 "tapi-notification:requested-object-identifier": [
1970                     "<Service_UUID>"
1971                 ],
1972                 "tapi-notification:include-content": true,
1973                 "tapi-notification:local-id": "localId",
1974                 "tapi-notification:name": [
1975                     {
1976                         "tapi-notification:value-name": "Subscription name",
1977                         "tapi-notification:value": "<notification_service_name>"
1978                     }
1979                 ]
1980             },
1981             "tapi-notification:subscription-state": "ACTIVE"
1982         }
1983     }
1984
This call returns the *UUID* of the Notification Subscription Service, which can later be used to retrieve the
details of the created subscription, to delete the subscription (together with all the related Kafka topics) or
to retrieve all the tapi-notifications related to that subscription service.
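
For instance, the subscription details can be read back with the RPC below. The input parameter name is assumed
to follow the same *subscription-id-or-name* convention as *get-notification-list*; check the tapi-notification
model of your release if the call is rejected.

**REST API** : *POST /restconf/operations/tapi-notification:get-notification-subscription-service-details*

**Sample JSON Data**

.. code:: json

    {
        "tapi-notification:input": {
            "tapi-notification:subscription-id-or-name": "<SUBSCRIPTION_UUID_OR_NAME>"
        }
    }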
1988
The figure below shows an example of how tapi and nbinotifications are used to notify about a
connectivity service creation process. Depending on the status of the process, a tapi-notification with the
corresponding updated state of the connectivity service is sent to the "Service_UUID" topic.
1992
1993 .. figure:: ./images/TransportPCE-tapi-nbinotifications-service-example.jpg
1994    :alt: Example of tapi connectivity service notifications using the feature nbinotifications in TransportPCE
1995
Additionally, when a connectivity service breaks down or is restored, a tapi-notification reporting the new status
is sent to the Kafka server. An example of such a tapi-notification is shown below.
1998
1999 **Sample JSON T-API notification**
2000
2001 .. code:: json
2002
2003     {
2004       "nbi-notifications:notification-tapi-service": {
2005         "layer-protocol-name": "<LAYER_PROTOCOL_NAME>",
2006         "notification-type": "ATTRIBUTE_VALUE_CHANGE",
2007         "changed-attributes": [
2008           {
2009             "value-name": "administrativeState",
2010             "old-value": "<LOCKED_OR_UNLOCKED>",
2011             "new-value": "<UNLOCKED_OR_LOCKED>"
2012           },
2013           {
2014             "value-name": "operationalState",
2015             "old-value": "DISABLED_OR_ENABLED",
2016             "new-value": "ENABLED_OR_DISABLED"
2017           }
2018         ],
2019         "target-object-name": [
2020           {
2021             "value-name": "Connectivity Service Name",
2022             "value": "<SERVICE_UUID>"
2023           }
2024         ],
2025         "uuid": "<NOTIFICATION_UUID>",
2026         "target-object-type": "CONNECTIVITY_SERVICE",
2027         "event-time-stamp": "2022-04-06T09:06:01+00:00",
2028         "target-object-identifier": "<SERVICE_UUID>"
2029       }
2030     }
2031
To retrieve the tapi connectivity service notifications stored in the Kafka server:
2033
2034 **REST API** : *POST /restconf/operations/tapi-notification:get-notification-list*
2035
2036 **Sample JSON Data**
2037
2038 .. code:: json
2039
2040     {
2041         "tapi-notification:input": {
2042             "tapi-notification:subscription-id-or-name": "<SUBSCRIPTION_UUID_OR_NAME>",
2043             "tapi-notification:time-period": "time-period"
2044         }
2045     }
2046
Further developments will support more types of T-API objects, e.g. node, link, topology, connection...
2048
2049 odl-transportpce-dmaap-client
2050 -----------------------------
2051
This feature allows the TransportPCE application to send notifications to the ONAP Dmaap Message Router
following service request results.
It listens to NBI notifications and sends the PublishNotificationService content to
Dmaap on the topic "unauthenticated.TPCE" through a POST request on */events/unauthenticated.TPCE*.
It uses Jackson to serialize the notification to JSON and the Jersey client to send the POST request.
2057
2058 odl-transportpce-nbinotifications
2059 ---------------------------------
2060
This feature allows the TransportPCE application to write and read notifications stored in topics of a Kafka server.
It is basically composed of two kinds of elements. First are the 'publishers', which are in charge of sending
notifications to a Kafka server. To restrict notification sending to specific classes, each publisher
is dedicated to an authorized class.
Then there are the 'subscribers', which are in charge of reading notifications from a Kafka server.
When the feature is called to write a notification to a Kafka server, it serializes the notification
into JSON format and then publishes it in a topic of the server via a publisher.
When the feature is called to read notifications from a Kafka server, it retrieves them from
the corresponding topic of the server via a subscriber and deserializes them.
2070
For now, when the REST RPC service-create is called to create a bidirectional end-to-end service,
the feature notifies the result of the creation, whether it succeeded or failed, to a Kafka server.
The topics that store these notifications are named after the connection type
(service, infrastructure, roadm-line). For instance, if the RPC service-create is called to create an
infrastructure connection, the service notifications related to this connection are stored in
the topic 'infrastructure'.
2077
The figure below shows an example of how nbinotifications is used to notify the
progress of a service creation.
2080
2081 .. figure:: ./images/TransportPCE-nbinotifications-service-example.jpg
2082    :alt: Example of service notifications using the feature nbinotifications in TransportPCE
2083
2084
2085 Depending on the status of the service creation, two kinds of notifications can be published
2086 to the topic 'service' of the Kafka server.
2087
If the service was correctly implemented, the following notification is published:
2089
2090
2091 -  **Service implemented !** : Indicates that the service was successfully implemented.
2092    It also contains all information concerning the new service.
2093
2094
Otherwise, the following notification is published:
2096
2097
2098 -  **ServiceCreate failed ...** : Indicates that the process of service-create failed, and also contains
2099    the failure cause.
2100
2101
To retrieve the service notifications stored in the Kafka server:
2103
2104 **REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-process-service*
2105
2106 **Sample JSON Data**
2107
2108 .. code:: json
2109
2110     {
2111       "input": {
2112         "connection-type": "service",
2113         "id-consumer": "consumer",
2114         "group-id": "test"
2115        }
2116     }
2117
2118 .. note::
2119     The field 'connection-type' corresponds to the topic that stores the notifications.
2120
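The response aggregates the process notifications read from the requested topic. The trimmed example below is
indicative only; the exact field names follow the nbi-notifications model of the installed release, so treat
them as assumptions.

.. code:: json

    {
      "output": {
        "notifications-process-service": [
          {
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "service",
            "message": "Service implemented !",
            "operational-state": "inService"
          }
        ]
      }
    }
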
Another implementation of the notifications makes it possible to notify any modification of the operational state of a service.
So when a service breaks down or is restored, a notification reporting the new status is sent to the Kafka server.
The topics that store these notifications in the Kafka server are also named after the connection type
(service, infrastructure, roadm-line), suffixed with the string 'alarm'.
2125
To retrieve the alarm notifications stored in the Kafka server:
2127
2128 **REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-alarm-service*
2129
2130 **Sample JSON Data**
2131
2132 .. code:: json
2133
2134     {
2135       "input": {
2136         "connection-type": "infrastructure",
2137         "id-consumer": "consumer",
2138         "group-id": "test"
2139        }
2140     }
2141
2142 .. note::
2143     This sample is used to retrieve all the alarm notifications related to infrastructure services.
2144
2145 Help
2146 ----
2147
2148 -  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__