1 .. _transportpce-dev-guide:
2
3 TransportPCE Developer Guide
4 ============================
5
6 Overview
7 --------
8
TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with controllers of different layers (L2, L3 controller…), a
higher-layer controller and/or an orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment, and to provision services according to a
request coming from a higher-layer controller and/or an orchestrator.
This capability may rely on the controller only or it may be delegated to
distributed (standardized) protocols.
19
20
21 Architecture
22 ------------
23
TransportPCE modular architecture is described in the following diagram. Each main
function, such as Topology Management, Path Calculation Engine (PCE), Service
Handler, Renderer (responsible for the path configuration through optical
equipment) and Optical Line Management (OLM), is associated with a generic block
relying on open models, each of them communicating through published APIs.
29
30
31 .. figure:: ./images/TransportPCE-Diagramm-Magnesium.jpg
32    :alt: TransportPCE architecture
33
34    TransportPCE architecture
35
36 Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
37 of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
38 and transponders.
39
The value of using a controller to automatically provision services strongly
relies on its ability to handle end-to-end optical services that span
different network domains, potentially equipped with hardware coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.
45
The initial design of TransportPCE leverages the OpenROADM Multi-Source Agreement (MSA),
which defines interoperability specifications consisting of both optical
interoperability and YANG data models.
49
End-to-end OTN services such as OCH-OTU4, structured ODU4 or 10GE-ODU2e
have been supported since Magnesium SR2. OTN support will continue to be
improved in the following releases of Magnesium and Aluminium.
53
Experimental support of Flexgrid is introduced in Aluminium. Depending on the
OpenROADM device model, optical interfaces can be created according to the
initial fixed grid (for R1.2.1, 96 channels regularly spaced by 50 GHz), or to
a flexgrid (for R2.2.1, use of a specific number of contiguous frequency slots of
6.25 GHz, depending on the capabilities of ROADMs and transponders on the one hand,
and on the rate of the channel on the other hand). The full support of Flexgrid,
including path computation and the creation of B100G (Beyond 100 Gbps) higher
rate interfaces, will be added in the following releases of Aluminium.
62
63
64 Module description
65 ~~~~~~~~~~~~~~~~~~
66
67 ServiceHandler
68 ^^^^^^^^^^^^^^
69
The Service Handler handles requests coming from a higher-level controller or an orchestrator
through the northbound API, as defined in the Open ROADM service model. The current
implementation addresses the following RPCs: service-create, temp-service-create,
service-delete, temp-service-delete, service-reroute, and service-restoration. It checks the
request consistency and triggers path calculation by sending RPCs to the PCE. If a valid path is
returned by the PCE, path configuration is initiated relying on the Renderer and the OLM. At the
confirmation of a successful service creation, the Service Handler updates the
service-list/temp-service-list in the MD-SAL. For service deletion, the Service Handler relies on the
Renderer and the OLM to delete connections and reset power levels associated with the
service. The service-list is updated following a successful service deletion. Neon SR0 adds
the support of services from ROADM to ROADM, which brings additional flexibility and
notably allows reserving resources when transponders are not in place at day one.
Magnesium SR2 fully supports end-to-end OTN services which are part of the OTN infrastructure.
It concerns the management of OCH-OTU4 (also part of the optical infrastructure) and structured
HO-ODU4 services. Moreover, once these two kinds of OTN infrastructure services are created, it is
possible to manage some LO-ODU services (for the time being, only 10GE-ODU2e services).
The full support of OTN services, including 1GE-ODU0 or 100GE, will be introduced in subsequent
releases (Mg/Al).
88
89 In Silicon release, the management of TopologyUpdateNotification coming from the *Topology Management*
90 module was implemented. This functionality enables the controller to update the information of existing
91 services according to the online status of the network infrastructure. If any service is affected by
92 the topology update and the *odl-transportpce-nbi* feature is installed, the Service Handler will send a
93 notification to a Kafka server with the service update information.
94
95 PCE
96 ^^^
97
98 The Path Computation Element (PCE) is the component responsible for path
99 calculation. An interface allows the Service Handler or external components such as an
100 orchestrator to request a path computation and get a response from the PCE
101 including the computed path(s) in case of success, or errors and indication of
102 the reason for the failure in case the request cannot be satisfied. Additional
103 parameters can be provided by the PCE in addition to the computed paths if
104 requested by the client module. An interface to the Topology Management module
105 allows keeping PCE aligned with the latest changes in the topology. Information
106 about current and planned services is available in the MD-SAL data store.
107
The current implementation of the PCE finds the shortest path, minimizing either the hop
count (default) or the propagation delay. The central wavelength is assigned considering a fixed
grid of 96 wavelengths spaced by 50 GHz. The assignment of wavelengths according to a flexible
grid considering 768 contiguous slots of 6.25 GHz (total spectrum of 4.8 THz), and their
occupation by existing services, is planned for later releases.
In Neon SR0, the PCE calculates the OSNR based on the incremental noise specifications
provided in the Open ROADM MSA. The support of unidirectional ports is also added.
115
116 PCE handles the following constraints as hard constraints:
117
118 -   **Node exclusion**
119 -   **SRLG exclusion**
120 -   **Maximum latency**
121
In Magnesium SR0, the interconnection of the PCE with GNPy (Gaussian Noise Python), an
open-source library developed in the scope of the Telecom Infra Project for building route
planning and optimizing performance in optical mesh networks, is fully supported.
125
If the OSNR calculated by the PCE is too close to the limit defined in the OpenROADM
specifications, the PCE forwards the topology and the pre-computed path, translated into routing
constraints, to the external GNPy tool through a REST interface. GNPy calculates a set of Quality of
Transmission metrics for this path using its own library, which includes models for OpenROADM.
The result is sent back to the PCE. If the path is validated, the PCE sends back a response to
the Service Handler. If GNPy invalidates the path, the PCE sends a new request to
GNPy, including only the constraints expressed in the path-computation-request initiated by the
Service Handler. GNPy then tries to calculate a path based on these relaxed constraints. The result
of the path computation is provided to the PCE, which translates the path according to the topology
handled in TransportPCE and forwards the results to the Service Handler.
136
GNPy relies on the SNR and takes into account linear and non-linear impairments
to check feasibility. In the related tests, the GNPy module runs externally in a
Docker container and the communication with T-PCE is ensured via HTTPs.
140
141 Topology Management
142 ^^^^^^^^^^^^^^^^^^^
143
The Topology Management module builds the topology according to the network model
defined in OpenROADM. The topology is aligned with the IETF I2RS RFC 8345 model.
It includes several network layers:
147
-  **CLLI layer corresponds to the locations that host the equipment**
-  **Network layer corresponds to a first level of disaggregation where we
   separate Xponders (transponders, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where the ROADM
   Add/Drop modules ("SRGs") are separated from the degrees, which include line
   amplifiers and the WSS that switches wavelengths from one degree to another**
-  **OTN layer, introduced in Magnesium, includes transponders as well as switch-ponders and
   mux-ponders having the ability to switch OTN containers from client to line cards. The Mg SR0
   release includes the creation of the switching pool (used to model cross-connect matrices),
   tributary-ports and tributary-slots at the initial connection of NETCONF devices.
   The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
   pool occupancy when OTN services are created, have been supported since Magnesium SR2.**
160
Since the Silicon release, the Topology Management module processes NETCONF events received through an
event stream (as defined in RFC 5277) between devices and the NETCONF adapter of the controller.
The current implementation detects device configuration changes and updates the topology datastore accordingly.
Then, it sends a TopologyUpdateNotification to the *Service Handler* to indicate that a change has been
detected in the network that may affect some of the already existing services.
166
167 Renderer
168 ^^^^^^^^
169
The Renderer module, on request coming from the Service Handler through a
service-implementation-request or service-delete RPC, sets up or deletes the path corresponding
to a specific service between the A and Z ends. The path description provided by the Service
Handler to the Renderer is based on abstracted resources (nodes, links and termination-points),
as provided by the PCE module. The Renderer converts this path description into a path topology
based on device resources (circuit-packs, ports…).
176
The conversion from abstracted resources to device resources relies on the portmapping module,
which maintains the association between these different resource types.
The portmapping module also keeps the topology independent from the device releases.
In Neon (SR0), the portmapping module has been enriched to support both OpenROADM 1.2.1 and 2.2.1
device models. The full support of OpenROADM 2.2.1 device models (both in the topology management
and the rendering function) has been added in Neon SR1. In Magnesium, portmapping is enriched with
the supported-interface-capability, OTN supporting-interfaces, and switching-pools (reflecting
cross-connection capabilities of OTN switch-ponders).
185
After the path is provided, the Renderer first checks the existing interfaces on the
ports of the different nodes that the path crosses. It then creates missing interfaces. After all
needed interfaces have been created, it sets the connections required in the nodes and
notifies the Service Handler of the status of the path creation. The path is created in two steps
(from A to Z and from Z to A). In case the path between A and Z could not be fully created, a
rollback function is called to set the equipment on the path back to its initial configuration
(as it was before invoking the Renderer).
193
Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
and ODU0 interfaces. The creation of these low-order OTN interfaces must be triggered through the
otn-service-path RPC. Magnesium SR2 fully supports end-to-end OTN service implementation into devices
(service-implementation-request or service-delete RPC, topology alignment after the service has been created).
198
199
200 OLM
201 ^^^
202
203 Optical Line Management module implements two main features: it is responsible
204 for setting up the optical power levels on the different interfaces, and is in
205 charge of adjusting these settings across the life of the optical
206 infrastructure.
207
After the different connections have been established in the ROADMs (between two
degrees for an express path, or between an SRG and a degree for an add or drop
path), meaning the devices have set the WSS and all other required elements to
provide path continuity, power settings are provided as attributes of these
connections. This allows the device to set all complementary elements, such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) into the fiber span. This also applies
to X-Ponders, as their output power must comply with the specifications defined
for the Add/Drop ports (SRG) of the ROADM. The OLM is responsible for
calculating the right power settings, sending them to the device, and checking the
PM retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.
220
221
222 Inventory
223 ^^^^^^^^^
224
The TransportPCE Inventory module is responsible for keeping track of the connected devices in an external MariaDB database.
Other databases may be used as long as they comply with SQL and are compatible with OpenDaylight (for example MySQL).
At present, the module supports extracting and persisting the inventory of OpenROADM MSA version 1.2.1 devices.
Inventory module changes to support newer device models (2.2.1, etc.) and other models (network, service, etc.)
will be progressively included.
230
The inventory module can be activated by the associated karaf feature (odl-transportpce-inventory).
The database properties are supplied in the “opendaylight-release” and “opendaylight-snapshots” profiles.
Below is the settings.xml with the properties included in the distribution.
The module can be rebuilt from sources with different parameters.
235
236 Sample entry in settings.xml to declare an external inventory database:
237 ::
238
239     <profiles>
240       <profile>
241           <id>opendaylight-release</id>
242     [..]
243          <properties>
244                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
245                  <transportpce.db.database><<databasename>></transportpce.db.database>
246                  <transportpce.db.username><<username>></transportpce.db.username>
247                  <transportpce.db.password><<password>></transportpce.db.password>
248                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
249          </properties>
250     </profile>
251     [..]
252     <profile>
253           <id>opendaylight-snapshots</id>
254     [..]
255          <properties>
256                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
257                  <transportpce.db.database><<databasename>></transportpce.db.database>
258                  <transportpce.db.username><<username>></transportpce.db.username>
259                  <transportpce.db.password><<password>></transportpce.db.password>
260                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
261          </properties>
262         </profile>
263     </profiles>
264
265
Once the project is built and karaf is started, the cfg file is generated in the etc folder with the corresponding
properties supplied in settings.xml. When devices with the OpenROADM 1.2.1 device model are mounted, the device listener in
the inventory module loads several device attributes into various tables of the supplied database.
The database structure details can be retrieved from the file tests/inventory/initdb.sql inside the project sources.
Installation scripts and a docker file are also provided.
271
272 Key APIs and Interfaces
273 -----------------------
274
275 External API
276 ~~~~~~~~~~~~
277
The north API, interconnecting the Service Handler to higher-level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring Open ROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA's device model.
282
283 ServiceHandler Service
284 ^^^^^^^^^^^^^^^^^^^^^^
285
286 -  RPC call
287
288    -  service-create (given service-name, service-aend, service-zend)
289
290    -  service-delete (given service-name)
291
292    -  service-reroute (given service-name, service-aend, service-zend)
293
294    -  service-restoration (given service-name, service-aend, service-zend)
295
296    -  temp-service-create (given common-id, service-aend, service-zend)
297
298    -  temp-service-delete (given common-id)
299
300 -  Data structure
301
302    -  service list : made of services
303    -  temp-service list : made of temporary services
   -  service : composed of service-name, topology which describes the detailed path (list of used resources)
305
306 -  Notification
307
308    - service-rpc-result : result of service RPC
309    - service-notification : service has been added, modified or removed
310
311 Netconf Service
312 ^^^^^^^^^^^^^^^
313
314 -  RPC call
315
316    -  connect-device : PUT
317    -  disconnect-device : DELETE
318    -  check-connected-device : GET
319
320 -  Data Structure
321
322    -  node list : composed of netconf nodes in topology-netconf
323
324 Internal APIs
325 ~~~~~~~~~~~~~
326
327 Internal APIs define REST APIs to interconnect TransportPCE modules :
328
329 -   Service Handler to PCE
330 -   PCE to Topology Management
331 -   Service Handler to Renderer
332 -   Renderer to OLM
333 -   Network Model to Service Handler
334
335 Pce Service
336 ^^^^^^^^^^^
337
338 -  RPC call
339
340    -  path-computation-request (given service-name, service-aend, service-zend)
341
342    -  cancel-resource-reserve (given service-name)
343
344 -  Notification
345
346    - service-path-rpc-result : result of service RPC
347
348 Renderer Service
349 ^^^^^^^^^^^^^^^^
350
351 -  RPC call
352
353    -  service-implementation-request (given service-name, service-aend, service-zend)
354
355    -  service-delete (given service-name)
356
357 -  Data structure
358
359    -  service path list : composed of service paths
360    -  service path : composed of service-name, path description giving the list of abstracted elements (nodes, tps, links)
361
362 -  Notification
363
364    - service-path-rpc-result : result of service RPC
365
366 Device Renderer
367 ^^^^^^^^^^^^^^^
368
369 -  RPC call
370
371    -  service-path used in SR0 as an intermediate solution to address directly the renderer
372       from a REST NBI to create OCH-OTU4-ODU4 interfaces on network port of otn devices.
373
374    -  otn-service-path used in SR0 as an intermediate solution to address directly the renderer
375       from a REST NBI for otn-service creation. Otn service-creation through
376       service-implementation-request call from the Service Handler will be supported in later
377       Magnesium releases
378
379 Topology Management Service
380 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
381
382 -  Data structure
383
384    -  network list : composed of networks(openroadm-topology, netconf-topology)
385    -  node list : composed of nodes identified by their node-id
386    -  link list : composed of links identified by their link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)
389
390 OLM Service
391 ^^^^^^^^^^^
392
393 -  RPC call
394
395    -  get-pm (given node-id)
396
397    -  service-power-setup
398
399    -  service-power-turndown
400
401    -  service-power-reset
402
403    -  calculate-spanloss-base
404
405    -  calculate-spanloss-current
406
407 odl-transportpce-stubmodels
408 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
409
   -  This feature provides functions to stub some of the TransportPCE modules, namely the PCE and
      the Renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.
413
414 Interfaces to external software
415 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
416
This section defines the interfaces implemented to interconnect TransportPCE modules with other
software in order to perform specific tasks.
419
420 GNPy interface
421 ^^^^^^^^^^^^^^
422
423 -  Request structure
424
425    -  topology : composed of list of elements and connections
426    -  service : source, destination, explicit-route-objects, path-constraints
427
428 -  Response structure
429
430    -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth,
431    -  path-properties/path-route-objects : composed of path elements
432
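These structures can be sketched as follows. The snippet is a schematic skeleton built only from
the fields listed above; it does not claim to be the complete payload exchanged with GNPy.

.. code:: python

    # Schematic skeleton built only from the fields listed above; it is not the
    # complete payload exchanged with GNPy.
    gnpy_request = {
        "topology": {
            "elements": [],      # network elements (transceivers, ROADMs, amplifiers, fibers)
            "connections": [],   # unidirectional connections between those elements
        },
        "service": {
            "source": "<node-id-A>",
            "destination": "<node-id-Z>",
            "explicit-route-objects": {},   # the pre-computed path expressed as constraints
            "path-constraints": {},         # e.g. requested rate and format
        },
    }

    gnpy_response = {
        "path-properties": {
            "path-metric": [],          # OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth
            "path-route-objects": [],   # the path elements returned by GNPy
        },
    }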
433
434 Running transportPCE project
435 ----------------------------
436
To use the TransportPCE controller, the first step is to connect the controller to optical nodes
through the NETCONF connector.

.. note::

    In the current version, only optical equipment compliant with OpenROADM datamodels is managed
    by TransportPCE.
444
445
446 Connecting nodes
447 ~~~~~~~~~~~~~~~~
448
To connect a node, use the following request:
450
451 **REST API** : *POST /restconf/config/network-topology:network-topology/topology/topology-netconf/node/<node-id>*
452
453 **Sample JSON Data**
454
455 .. code:: json
456
457     {
458         "node": [
459             {
460                 "node-id": "<node-id>",
461                 "netconf-node-topology:tcp-only": "false",
462                 "netconf-node-topology:reconnect-on-changed-schema": "false",
463                 "netconf-node-topology:host": "<node-ip-address>",
464                 "netconf-node-topology:default-request-timeout-millis": "120000",
465                 "netconf-node-topology:max-connection-attempts": "0",
466                 "netconf-node-topology:sleep-factor": "1.5",
467                 "netconf-node-topology:actor-response-wait-time": "5",
468                 "netconf-node-topology:concurrent-rpc-limit": "0",
469                 "netconf-node-topology:between-attempts-timeout-millis": "2000",
470                 "netconf-node-topology:port": "<netconf-port>",
471                 "netconf-node-topology:connection-timeout-millis": "20000",
472                 "netconf-node-topology:username": "<node-username>",
473                 "netconf-node-topology:password": "<node-password>",
474                 "netconf-node-topology:keepalive-delay": "300"
475             }
476         ]
477     }
478
479
Then check that the NETCONF session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.
482
483 **REST API** : *GET /restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<node-id>*
484
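The same operation can be scripted, for example with python and the *requests* library. The sketch
below is illustrative only: the controller address, the ``admin/admin`` credentials, the node-id
and the ``node.json`` file (containing the JSON body shown above) are assumptions to be adapted to
your setup.

.. code:: python

    # Illustrative only: controller address, credentials, node-id and the file name
    # "node.json" (holding the JSON body shown above) are assumptions.
    import json
    import time

    import requests

    ODL = "http://localhost:8181/restconf"
    AUTH = ("admin", "admin")
    HEADERS = {"Content-Type": "application/json"}
    NODE_ID = "ROADM-A1"

    # Mount the NETCONF node.
    with open("node.json") as f:
        body = json.load(f)
    cfg_url = (f"{ODL}/config/network-topology:network-topology/"
               f"topology/topology-netconf/node/{NODE_ID}")
    requests.post(cfg_url, json=body, auth=AUTH, headers=HEADERS).raise_for_status()

    # Poll the operational datastore until the connection-status is "connected".
    oper_url = (f"{ODL}/operational/network-topology:network-topology/"
                f"topology/topology-netconf/node/{NODE_ID}")
    for _ in range(30):
        resp = requests.get(oper_url, auth=AUTH)
        if resp.ok and resp.json()["node"][0].get(
                "netconf-node-topology:connection-status") == "connected":
            print(NODE_ID, "is connected")
            break
        time.sleep(2)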
485
486 Node configuration discovery
487 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
488
Once the controller is connected to the node, the TransportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical ports related to transmission. All *circuit-packs* inside the node configuration are
analyzed.
493
Use the following request to check the result of that function, internally named *portMapping*.
495
496 **REST API** : *GET /restconf/config/portmapping:network*
497
498 .. note::
499
500     In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
501         * rdm: ROADM device (optical switch)
502         * xpdr: Xponder device (device that converts client to optical channel interface)
503         * ila: in line amplifier (optical amplifier)
504         * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)
505
    TransportPCE currently supports rdm and xpdr devices.
507
Depending on the kind of OpenROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:
510
511 -  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on a rdm equipment
512 -  SRG<srg-number>-PP<port-number>: created on the client port of a srg on a rdm equipment
513 -  XPDR<number>-CLIENT<port-number>: created on the client port of a xpdr equipment
514 -  XPDR<number>-NETWORK<port-number>: created on the line port of a xpdr equipment
515
516     For further details on openROADM device models, see `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.
517
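A scripted check of the port mapping might look as follows. This is an illustrative sketch: the
controller address, credentials and node-id are assumptions, and the mapping leaf names reflect
the portmapping model as understood here; adapt them to the model revision you run.

.. code:: python

    # Illustrative only: controller address, credentials, node-id and mapping leaf
    # names are assumptions to be checked against the deployed portmapping model.
    import requests

    ODL = "http://localhost:8181/restconf"
    AUTH = ("admin", "admin")
    NODE_ID = "ROADM-A1"

    resp = requests.get(f"{ODL}/config/portmapping:network/nodes/{NODE_ID}", auth=AUTH)
    resp.raise_for_status()
    for mapping in resp.json()["nodes"][0].get("mapping", []):
        # Each entry associates a Logical Connection Point with a physical circuit-pack/port.
        print(mapping["logical-connection-point"],
              mapping.get("supporting-circuit-pack-name"),
              mapping.get("supporting-port"))
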
518 Optical Network topology
519 ~~~~~~~~~~~~~~~~~~~~~~~~
520
Before creating an optical connectivity service, your topology must contain at least two xpdr
devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
created by TransportPCE. Nevertheless, depending on the configuration inside the optical nodes, this
topology can be partial. Check that a link of type *ROADMtoROADM* exists between two adjacent rdm
nodes.
526
527 **REST API** : *GET /restconf/config/ietf-network:network/openroadm-topology*
528
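A quick scripted check of the links present in the *openroadm-topology* might look as follows
(illustrative sketch: the controller address and credentials are assumptions, and the exact
augment prefix of the link-type attribute depends on the OpenROADM network model version).

.. code:: python

    # Illustrative only: controller address and credentials are assumptions; the
    # link-type attribute prefix depends on the OpenROADM network model version.
    import requests

    ODL = "http://localhost:8181/restconf"
    AUTH = ("admin", "admin")

    topo = requests.get(f"{ODL}/config/ietf-network:network/openroadm-topology", auth=AUTH).json()
    for link in topo["network"][0].get("ietf-network-topology:link", []):
        link_type = next((v for k, v in link.items() if k.endswith("link-type")), "unknown")
        print(link["link-id"], link_type)
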
If it is not the case, you need to manually complement the topology with a *ROADMtoROADM* link using
the following REST RPC:
531
532
533 **REST API** : *POST /restconf/operations/networkutils:init-roadm-nodes*
534
535 **Sample JSON Data**
536
537 .. code:: json
538
539     {
540       "networkutils:input": {
541         "networkutils:rdm-a-node": "<node-id-A>",
542         "networkutils:deg-a-num": "<degree-A-number>",
543         "networkutils:termination-point-a": "<Logical-Connection-Point>",
544         "networkutils:rdm-z-node": "<node-id-Z>",
545         "networkutils:deg-z-num": "<degree-Z-number>",
546         "networkutils:termination-point-z": "<Logical-Connection-Point>"
547       }
548     }
549
550 *<Logical-Connection-Point> comes from the portMapping function*.
551
552 Unidirectional links between xpdr and rdm nodes must be created manually. To that end use the two
553 following REST RPCs:
554
555 From xpdr to rdm:
556 ^^^^^^^^^^^^^^^^^
557
558 **REST API** : *POST /restconf/operations/networkutils:init-xpdr-rdm-links*
559
560 **Sample JSON Data**
561
562 .. code:: json
563
564     {
565       "networkutils:input": {
566         "networkutils:links-input": {
567           "networkutils:xpdr-node": "<xpdr-node-id>",
568           "networkutils:xpdr-num": "1",
569           "networkutils:network-num": "<xpdr-network-port-number>",
570           "networkutils:rdm-node": "<rdm-node-id>",
571           "networkutils:srg-num": "<srg-number>",
572           "networkutils:termination-point-num": "<Logical-Connection-Point>"
573         }
574       }
575     }
576
577 From rdm to xpdr:
578 ^^^^^^^^^^^^^^^^^
579
580 **REST API** : *POST /restconf/operations/networkutils:init-rdm-xpdr-links*
581
582 **Sample JSON Data**
583
584 .. code:: json
585
586     {
587       "networkutils:input": {
588         "networkutils:links-input": {
589           "networkutils:xpdr-node": "<xpdr-node-id>",
590           "networkutils:xpdr-num": "1",
591           "networkutils:network-num": "<xpdr-network-port-number>",
592           "networkutils:rdm-node": "<rdm-node-id>",
593           "networkutils:srg-num": "<srg-number>",
594           "networkutils:termination-point-num": "<Logical-Connection-Point>"
595         }
596       }
597     }
598
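The two RPCs above can be scripted as follows (illustrative sketch: the controller address,
credentials and the placeholder values in the payload are assumptions to be adapted).

.. code:: python

    # Illustrative only: controller address, credentials and the placeholder values
    # (node ids, numbers, termination points) are assumptions to be adapted.
    import requests

    ODL = "http://localhost:8181/restconf/operations"
    AUTH = ("admin", "admin")
    HEADERS = {"Content-Type": "application/json"}

    links_input = {"networkutils:input": {"networkutils:links-input": {
        "networkutils:xpdr-node": "XPDR-A1",
        "networkutils:xpdr-num": "1",
        "networkutils:network-num": "1",
        "networkutils:rdm-node": "ROADM-A1",
        "networkutils:srg-num": "1",
        "networkutils:termination-point-num": "SRG1-PP1-TXRX"}}}

    # The same body is used by both RPCs, as shown in the samples above.
    for rpc in ("networkutils:init-xpdr-rdm-links", "networkutils:init-rdm-xpdr-links"):
        requests.post(f"{ODL}/{rpc}", json=links_input,
                      auth=AUTH, headers=HEADERS).raise_for_status()
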
599 OTN topology
600 ~~~~~~~~~~~~
601
Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
or SWITCH type connected to two different rdm devices. To check that these xpdr are present in the
OTN topology, use the following REST API request:
605
606 **REST API** : *GET /restconf/config/ietf-network:network/otn-topology*
607
An optical connectivity service shall have been created in a first step. Since Magnesium SR2, the OTN
links are automatically populated in the topology after the OCH, OTU4 and ODU4 interfaces have
been created on the two network ports of the xpdr.
611
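A scripted check of the xpdr nodes present in the *otn-topology* might look as follows
(illustrative sketch: the controller address and credentials are assumptions, and the augment
prefix of the node-type attribute depends on the OpenROADM network model version).

.. code:: python

    # Illustrative only: controller address and credentials are assumptions; the
    # node-type attribute prefix depends on the OpenROADM network model version.
    import requests

    ODL = "http://localhost:8181/restconf"
    AUTH = ("admin", "admin")

    topo = requests.get(f"{ODL}/config/ietf-network:network/otn-topology", auth=AUTH).json()
    for node in topo["network"][0].get("node", []):
        node_type = next((v for k, v in node.items() if k.endswith("node-type")), "unknown")
        print(node["node-id"], node_type)
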
612 Creating a service
613 ~~~~~~~~~~~~~~~~~~
614
Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
network. Two kinds of end-to-end "optical" services are managed by TransportPCE:

- 100GE service from client port to client port of two transponders (TPDR)
- Optical Channel (OC) service from client add/drop port (PP port of SRG) to client add/drop port
  of two ROADMs.

For these services, TransportPCE automatically invokes the *renderer* module to create all required
interfaces and cross-connections on each device supporting the service.
As an example, the creation of a 100GE service implies, among other things, the creation of OCH, OTU4
and ODU4 interfaces on the network port of TPDR devices.

Since Magnesium SR2, the *service handler* module directly manages some end-to-end OTN
connectivity services.
Before creating a low-order OTN service (1GE or 10GE services terminating on client ports of MUXPDR
or SWITCH), the user must ensure that a high-order ODU4 container exists and has previously been
configured (that is, structured) to support low-order OTN containers.
Thus, OTN service creation implies three steps:

1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)

The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.
637
638
639 100GE service creation
640 ^^^^^^^^^^^^^^^^^^^^^^
641
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end optical connectivity service between two xpdrs over an optical network composed of rdm
nodes.
645
646 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
647
648 **Sample JSON Data**
649
650 .. code:: json
651
652     {
653         "input": {
654             "sdnc-request-header": {
655                 "request-id": "request-1",
656                 "rpc-action": "service-create",
657                 "request-system-id": "appname"
658             },
659             "service-name": "test1",
660             "common-id": "commonId",
661             "connection-type": "service",
662             "service-a-end": {
663                 "service-rate": "100",
664                 "node-id": "<xpdr-node-id>",
665                 "service-format": "Ethernet",
666                 "clli": "<ccli-name>",
667                 "tx-direction": {
668                     "port": {
669                         "port-device-name": "<xpdr-client-port>",
670                         "port-type": "fixed",
671                         "port-name": "<xpdr-client-port-number>",
672                         "port-rack": "000000.00",
673                         "port-shelf": "Chassis#1"
674                     },
675                     "lgx": {
676                         "lgx-device-name": "Some lgx-device-name",
677                         "lgx-port-name": "Some lgx-port-name",
678                         "lgx-port-rack": "000000.00",
679                         "lgx-port-shelf": "00"
680                     }
681                 },
682                 "rx-direction": {
683                     "port": {
684                         "port-device-name": "<xpdr-client-port>",
685                         "port-type": "fixed",
686                         "port-name": "<xpdr-client-port-number>",
687                         "port-rack": "000000.00",
688                         "port-shelf": "Chassis#1"
689                     },
690                     "lgx": {
691                         "lgx-device-name": "Some lgx-device-name",
692                         "lgx-port-name": "Some lgx-port-name",
693                         "lgx-port-rack": "000000.00",
694                         "lgx-port-shelf": "00"
695                     }
696                 },
697                 "optic-type": "gray"
698             },
699             "service-z-end": {
700                 "service-rate": "100",
701                 "node-id": "<xpdr-node-id>",
702                 "service-format": "Ethernet",
703                 "clli": "<ccli-name>",
704                 "tx-direction": {
705                     "port": {
706                         "port-device-name": "<xpdr-client-port>",
707                         "port-type": "fixed",
708                         "port-name": "<xpdr-client-port-number>",
709                         "port-rack": "000000.00",
710                         "port-shelf": "Chassis#1"
711                     },
712                     "lgx": {
713                         "lgx-device-name": "Some lgx-device-name",
714                         "lgx-port-name": "Some lgx-port-name",
715                         "lgx-port-rack": "000000.00",
716                         "lgx-port-shelf": "00"
717                     }
718                 },
719                 "rx-direction": {
720                     "port": {
721                         "port-device-name": "<xpdr-client-port>",
722                         "port-type": "fixed",
723                         "port-name": "<xpdr-client-port-number>",
724                         "port-rack": "000000.00",
725                         "port-shelf": "Chassis#1"
726                     },
727                     "lgx": {
728                         "lgx-device-name": "Some lgx-device-name",
729                         "lgx-port-name": "Some lgx-port-name",
730                         "lgx-port-rack": "000000.00",
731                         "lgx-port-shelf": "00"
732                     }
733                 },
734                 "optic-type": "gray"
735             },
736             "due-date": "yyyy-mm-ddT00:00:01Z",
737             "operator-contact": "some-contact-info"
738         }
739     }
740
The most important parameters of this REST RPC are the identification of the two physical client ports
on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the *openroadm-topology* and
then invokes the *renderer* and *OLM* modules to implement the end-to-end path into the devices.
744
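The RPC above can also be scripted, for example to wait until the service reaches its final state.
The sketch below is illustrative only: the controller address, credentials, the
``service-create-100GE.json`` file holding the body shown above, the service-name ``test1`` and the
service-list layout are assumptions to be checked against your deployment.

.. code:: python

    # Illustrative only: controller address, credentials, file name, service-name and
    # the service-list layout are assumptions.
    import json
    import time

    import requests

    ODL = "http://localhost:8181/restconf"
    AUTH = ("admin", "admin")
    HEADERS = {"Content-Type": "application/json"}

    with open("service-create-100GE.json") as f:
        body = json.load(f)

    resp = requests.post(f"{ODL}/operations/org-openroadm-service:service-create",
                         json=body, auth=AUTH, headers=HEADERS)
    print(resp.status_code, resp.json())   # immediate answer: request acknowledgement

    # Poll the service-list until the service appears with its final state.
    url = f"{ODL}/operational/org-openroadm-service:service-list/services/test1"
    for _ in range(30):
        reply = requests.get(url, auth=AUTH)
        if reply.ok:
            print(reply.json()["services"][0].get("operational-state"))
            break
        time.sleep(10)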
745
746 OC service creation
747 ^^^^^^^^^^^^^^^^^^^
748
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end Optical Channel (OC) connectivity service between two add/drop ports (PP port of SRG
node) over an optical network composed only of rdm nodes.
752
753 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
754
755 **Sample JSON Data**
756
757 .. code:: json
758
759     {
760         "input": {
761             "sdnc-request-header": {
762                 "request-id": "request-1",
763                 "rpc-action": "service-create",
764                 "request-system-id": "appname"
765             },
766             "service-name": "something",
767             "common-id": "commonId",
768             "connection-type": "roadm-line",
769             "service-a-end": {
770                 "service-rate": "100",
771                 "node-id": "<xpdr-node-id>",
772                 "service-format": "OC",
773                 "clli": "<ccli-name>",
774                 "tx-direction": {
775                     "port": {
776                         "port-device-name": "<xpdr-client-port>",
777                         "port-type": "fixed",
778                         "port-name": "<xpdr-client-port-number>",
779                         "port-rack": "000000.00",
780                         "port-shelf": "Chassis#1"
781                     },
782                     "lgx": {
783                         "lgx-device-name": "Some lgx-device-name",
784                         "lgx-port-name": "Some lgx-port-name",
785                         "lgx-port-rack": "000000.00",
786                         "lgx-port-shelf": "00"
787                     }
788                 },
789                 "rx-direction": {
790                     "port": {
791                         "port-device-name": "<xpdr-client-port>",
792                         "port-type": "fixed",
793                         "port-name": "<xpdr-client-port-number>",
794                         "port-rack": "000000.00",
795                         "port-shelf": "Chassis#1"
796                     },
797                     "lgx": {
798                         "lgx-device-name": "Some lgx-device-name",
799                         "lgx-port-name": "Some lgx-port-name",
800                         "lgx-port-rack": "000000.00",
801                         "lgx-port-shelf": "00"
802                     }
803                 },
804                 "optic-type": "gray"
805             },
806             "service-z-end": {
807                 "service-rate": "100",
808                 "node-id": "<xpdr-node-id>",
809                 "service-format": "OC",
810                 "clli": "<ccli-name>",
811                 "tx-direction": {
812                     "port": {
813                         "port-device-name": "<xpdr-client-port>",
814                         "port-type": "fixed",
815                         "port-name": "<xpdr-client-port-number>",
816                         "port-rack": "000000.00",
817                         "port-shelf": "Chassis#1"
818                     },
819                     "lgx": {
820                         "lgx-device-name": "Some lgx-device-name",
821                         "lgx-port-name": "Some lgx-port-name",
822                         "lgx-port-rack": "000000.00",
823                         "lgx-port-shelf": "00"
824                     }
825                 },
826                 "rx-direction": {
827                     "port": {
828                         "port-device-name": "<xpdr-client-port>",
829                         "port-type": "fixed",
830                         "port-name": "<xpdr-client-port-number>",
831                         "port-rack": "000000.00",
832                         "port-shelf": "Chassis#1"
833                     },
834                     "lgx": {
835                         "lgx-device-name": "Some lgx-device-name",
836                         "lgx-port-name": "Some lgx-port-name",
837                         "lgx-port-rack": "000000.00",
838                         "lgx-port-shelf": "00"
839                     }
840                 },
841                 "optic-type": "gray"
842             },
843             "due-date": "yyyy-mm-ddT00:00:01Z",
844             "operator-contact": "some-contact-info"
845         }
846     }
847
848 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
849 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
850 the devices.
851
852 OTN OCH-OTU4 service creation
853 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
854
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end OTU4 over optical wavelength connectivity service
between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service configures the
optical network infrastructure composed of rdm nodes.
859
860 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
861
862 **Sample JSON Data**
863
864 .. code:: json
865
866     {
867         "input": {
868             "sdnc-request-header": {
869                 "request-id": "request-1",
870                 "rpc-action": "service-create",
871                 "request-system-id": "appname"
872             },
873             "service-name": "something",
874             "common-id": "commonId",
875             "connection-type": "infrastructure",
876             "service-a-end": {
877                 "service-rate": "100",
878                 "node-id": "<xpdr-node-id>",
879                 "service-format": "OTU",
880                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
881                 "clli": "<ccli-name>",
882                 "tx-direction": {
883                     "port": {
884                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
885                         "port-type": "fixed",
886                         "port-name": "<xpdr-network-port-in-otn-topology>",
887                         "port-rack": "000000.00",
888                         "port-shelf": "Chassis#1"
889                     },
890                     "lgx": {
891                         "lgx-device-name": "Some lgx-device-name",
892                         "lgx-port-name": "Some lgx-port-name",
893                         "lgx-port-rack": "000000.00",
894                         "lgx-port-shelf": "00"
895                     }
896                 },
897                 "rx-direction": {
898                     "port": {
899                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
900                         "port-type": "fixed",
901                         "port-name": "<xpdr-network-port-in-otn-topology>",
902                         "port-rack": "000000.00",
903                         "port-shelf": "Chassis#1"
904                     },
905                     "lgx": {
906                         "lgx-device-name": "Some lgx-device-name",
907                         "lgx-port-name": "Some lgx-port-name",
908                         "lgx-port-rack": "000000.00",
909                         "lgx-port-shelf": "00"
910                     }
911                 },
912                 "optic-type": "gray"
913             },
914             "service-z-end": {
915                 "service-rate": "100",
916                 "node-id": "<xpdr-node-id>",
917                 "service-format": "OTU",
918                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
919                 "clli": "<ccli-name>",
920                 "tx-direction": {
921                     "port": {
922                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
923                         "port-type": "fixed",
924                         "port-name": "<xpdr-network-port-in-otn-topology>",
925                         "port-rack": "000000.00",
926                         "port-shelf": "Chassis#1"
927                     },
928                     "lgx": {
929                         "lgx-device-name": "Some lgx-device-name",
930                         "lgx-port-name": "Some lgx-port-name",
931                         "lgx-port-rack": "000000.00",
932                         "lgx-port-shelf": "00"
933                     }
934                 },
935                 "rx-direction": {
936                     "port": {
937                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
938                         "port-type": "fixed",
939                         "port-name": "<xpdr-network-port-in-otn-topology>",
940                         "port-rack": "000000.00",
941                         "port-shelf": "Chassis#1"
942                     },
943                     "lgx": {
944                         "lgx-device-name": "Some lgx-device-name",
945                         "lgx-port-name": "Some lgx-port-name",
946                         "lgx-port-rack": "000000.00",
947                         "lgx-port-shelf": "00"
948                     }
949                 },
950                 "optic-type": "gray"
951             },
952             "due-date": "yyyy-mm-ddT00:00:01Z",
953             "operator-contact": "some-contact-info"
954         }
955     }
956
957 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
958 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
959 the devices.
960
961 OTN HO-ODU4 service creation
962 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
963
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end ODU4 OTN service over an OTU4, structured to support
low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between two network
ports of OTN Xponders (MUXPDR or SWITCH).
968
969 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
970
971 **Sample JSON Data**
972
973 .. code:: json
974
975     {
976         "input": {
977             "sdnc-request-header": {
978                 "request-id": "request-1",
979                 "rpc-action": "service-create",
980                 "request-system-id": "appname"
981             },
982             "service-name": "something",
983             "common-id": "commonId",
984             "connection-type": "infrastructure",
985             "service-a-end": {
986                 "service-rate": "100",
987                 "node-id": "<xpdr-node-id>",
988                 "service-format": "ODU",
989                 "otu-service-rate": "org-openroadm-otn-common-types:ODU4",
990                 "clli": "<ccli-name>",
991                 "tx-direction": {
992                     "port": {
993                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
994                         "port-type": "fixed",
995                         "port-name": "<xpdr-network-port-in-otn-topology>",
996                         "port-rack": "000000.00",
997                         "port-shelf": "Chassis#1"
998                     },
999                     "lgx": {
1000                         "lgx-device-name": "Some lgx-device-name",
1001                         "lgx-port-name": "Some lgx-port-name",
1002                         "lgx-port-rack": "000000.00",
1003                         "lgx-port-shelf": "00"
1004                     }
1005                 },
1006                 "rx-direction": {
1007                     "port": {
1008                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1009                         "port-type": "fixed",
1010                         "port-name": "<xpdr-network-port-in-otn-topology>",
1011                         "port-rack": "000000.00",
1012                         "port-shelf": "Chassis#1"
1013                     },
1014                     "lgx": {
1015                         "lgx-device-name": "Some lgx-device-name",
1016                         "lgx-port-name": "Some lgx-port-name",
1017                         "lgx-port-rack": "000000.00",
1018                         "lgx-port-shelf": "00"
1019                     }
1020                 },
1021                 "optic-type": "gray"
1022             },
1023             "service-z-end": {
1024                 "service-rate": "100",
1025                 "node-id": "<xpdr-node-id>",
1026                 "service-format": "ODU",
1027                 "otu-service-rate": "org-openroadm-otn-common-types:ODU4",
1028                 "clli": "<ccli-name>",
1029                 "tx-direction": {
1030                     "port": {
1031                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1032                         "port-type": "fixed",
1033                         "port-name": "<xpdr-network-port-in-otn-topology>",
1034                         "port-rack": "000000.00",
1035                         "port-shelf": "Chassis#1"
1036                     },
1037                     "lgx": {
1038                         "lgx-device-name": "Some lgx-device-name",
1039                         "lgx-port-name": "Some lgx-port-name",
1040                         "lgx-port-rack": "000000.00",
1041                         "lgx-port-shelf": "00"
1042                     }
1043                 },
1044                 "rx-direction": {
1045                     "port": {
1046                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1047                         "port-type": "fixed",
1048                         "port-name": "<xpdr-network-port-in-otn-topology>",
1049                         "port-rack": "000000.00",
1050                         "port-shelf": "Chassis#1"
1051                     },
1052                     "lgx": {
1053                         "lgx-device-name": "Some lgx-device-name",
1054                         "lgx-port-name": "Some lgx-port-name",
1055                         "lgx-port-rack": "000000.00",
1056                         "lgx-port-shelf": "00"
1057                     }
1058                 },
1059                 "optic-type": "gray"
1060             },
1061             "due-date": "yyyy-mm-ddT00:00:01Z",
1062             "operator-contact": "some-contact-info"
1063         }
1064     }
1065
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain OTU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* modules to implement the end-to-end path into the devices.
1069
1070 OTN 10GE-ODU2e service creation
1071 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1072
Use the following REST RPC to invoke the *service handler* module in order to create, over the OTN
infrastructure, a bidirectional end-to-end 10GE-ODU2e OTN service over an ODU4.
Such a service must be created between two client ports of OTN Xponders (MUXPDR or SWITCH)
configured to support 10GE interfaces.
1077
1078 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
1079
1080 **Sample JSON Data**
1081
1082 .. code:: json
1083
1084     {
1085         "input": {
1086             "sdnc-request-header": {
1087                 "request-id": "request-1",
1088                 "rpc-action": "service-create",
1089                 "request-system-id": "appname"
1090             },
1091             "service-name": "something",
1092             "common-id": "commonId",
1093             "connection-type": "service",
1094             "service-a-end": {
1095                 "service-rate": "10",
1096                 "node-id": "<xpdr-node-id>",
1097                 "service-format": "Ethernet",
1098                 "clli": "<ccli-name>",
1099                 "subrate-eth-sla": {
1100                     "subrate-eth-sla": {
1101                         "committed-info-rate": "10000",
1102                         "committed-burst-size": "64"
1103                     }
1104                 },
1105                 "tx-direction": {
1106                     "port": {
1107                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1108                         "port-type": "fixed",
1109                         "port-name": "<xpdr-client-port-in-otn-topology>",
1110                         "port-rack": "000000.00",
1111                         "port-shelf": "Chassis#1"
1112                     },
1113                     "lgx": {
1114                         "lgx-device-name": "Some lgx-device-name",
1115                         "lgx-port-name": "Some lgx-port-name",
1116                         "lgx-port-rack": "000000.00",
1117                         "lgx-port-shelf": "00"
1118                     }
1119                 },
1120                 "rx-direction": {
1121                     "port": {
1122                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1123                         "port-type": "fixed",
1124                         "port-name": "<xpdr-client-port-in-otn-topology>",
1125                         "port-rack": "000000.00",
1126                         "port-shelf": "Chassis#1"
1127                     },
1128                     "lgx": {
1129                         "lgx-device-name": "Some lgx-device-name",
1130                         "lgx-port-name": "Some lgx-port-name",
1131                         "lgx-port-rack": "000000.00",
1132                         "lgx-port-shelf": "00"
1133                     }
1134                 },
1135                 "optic-type": "gray"
1136             },
1137             "service-z-end": {
1138                 "service-rate": "10",
1139                 "node-id": "<xpdr-node-id>",
1140                 "service-format": "Ethernet",
1141                 "clli": "<ccli-name>",
1142                 "subrate-eth-sla": {
1143                     "subrate-eth-sla": {
1144                         "committed-info-rate": "10000",
1145                         "committed-burst-size": "64"
1146                     }
1147                 },
1148                 "tx-direction": {
1149                     "port": {
1150                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1151                         "port-type": "fixed",
1152                         "port-name": "<xpdr-client-port-in-otn-topology>",
1153                         "port-rack": "000000.00",
1154                         "port-shelf": "Chassis#1"
1155                     },
1156                     "lgx": {
1157                         "lgx-device-name": "Some lgx-device-name",
1158                         "lgx-port-name": "Some lgx-port-name",
1159                         "lgx-port-rack": "000000.00",
1160                         "lgx-port-shelf": "00"
1161                     }
1162                 },
1163                 "rx-direction": {
1164                     "port": {
1165                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1166                         "port-type": "fixed",
1167                         "port-name": "<xpdr-client-port-in-otn-topology>",
1168                         "port-rack": "000000.00",
1169                         "port-shelf": "Chassis#1"
1170                     },
1171                     "lgx": {
1172                         "lgx-device-name": "Some lgx-device-name",
1173                         "lgx-port-name": "Some lgx-port-name",
1174                         "lgx-port-rack": "000000.00",
1175                         "lgx-port-shelf": "00"
1176                     }
1177                 },
1178                 "optic-type": "gray"
1179             },
1180             "due-date": "yyyy-mm-ddT00:00:01Z",
1181             "operator-contact": "some-contact-info"
1182         }
1183     }
1184
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* modules to implement the end-to-end path into the devices.
1188
1189
1190 .. note::
    Since Magnesium SR2, the service-list entries corresponding to OCH-OTU4, ODU4 or 10GE-ODU2e
    services are updated in the service-list datastore.
1193
1194 .. note::
    trib-slot is used when the equipment supports contiguous trib-slot allocation (supported from
    Magnesium SR0). The trib-slot provided corresponds to the first of the used trib-slots.
    complex-trib-slots will be used when the equipment does not support contiguous trib-slot
    allocation. In this case a list of the different trib-slots to be used shall be provided.
    The support for non-contiguous trib-slot allocation is planned for a later Magnesium release.
1200
1201 Deleting a service
1202 ~~~~~~~~~~~~~~~~~~
1203
1204 Deleting any kind of service
1205 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1206
1207 Use the following REST RPC to invoke *service handler* module in order to delete a given optical
1208 connectivity service.
1209
1210 **REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*
1211
1212 **Sample JSON Data**
1213
1214 .. code:: json
1215
1216     {
1217         "input": {
1218             "sdnc-request-header": {
1219                 "request-id": "request-1",
1220                 "rpc-action": "service-delete",
1221                 "request-system-id": "appname",
1222                 "notification-url": "http://localhost:8585/NotificationServer/notify"
1223             },
1224             "service-delete-req-info": {
1225                 "service-name": "something",
1226                 "tail-retention": "no"
1227             }
1228         }
1229     }
1230
The most important parameter for this REST RPC is the *service-name*.
1232
1233
1234 .. note::
    Deleting OTN services implies proceeding in the reverse order of their creation. Thus, OTN
    service deletion must follow the three steps below:

    1. delete all 10GE services supported over any ODU4 to be deleted
    2. delete the ODU4 service
    3. delete the OCH-OTU4 service supporting the just deleted ODU4
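
The ordering above can be scripted against the service-delete RPC shown earlier. The following is a
minimal sketch, assuming a controller reachable on localhost with the default RESTCONF port and
credentials and using hypothetical service names; adapt these values to your deployment.

.. code:: python

    import requests

    # Assumed controller endpoint and credentials -- adjust to your deployment.
    BASE_URL = "http://localhost:8181/restconf/operations"
    AUTH = ("admin", "admin")

    def delete_service(service_name):
        """Invoke org-openroadm-service:service-delete for one service."""
        payload = {
            "input": {
                "sdnc-request-header": {
                    "request-id": "delete-" + service_name,
                    "rpc-action": "service-delete",
                    "request-system-id": "appname",
                    "notification-url": "http://localhost:8585/NotificationServer/notify"
                },
                "service-delete-req-info": {
                    "service-name": service_name,
                    "tail-retention": "no"
                }
            }
        }
        response = requests.post(BASE_URL + "/org-openroadm-service:service-delete",
                                 json=payload, auth=AUTH)
        response.raise_for_status()

    # Hypothetical service names, deleted in the reverse order of their creation.
    for name in ["10GE-service-1", "ODU4-service-1", "OCH-OTU4-service-1"]:
        delete_service(name)
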
1240
1241 Invoking PCE module
1242 ~~~~~~~~~~~~~~~~~~~
1243
Use the following REST RPCs to invoke the *PCE* module in order to check connectivity between xponder
nodes and the availability of a supporting optical connectivity between the network ports of the
nodes.
1247
1248 Checking OTU4 service connectivity
1249 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1250
1251 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1252
1253 **Sample JSON Data**
1254
1255 .. code:: json
1256
1257    {
1258       "input": {
1259            "service-name": "something",
1260            "resource-reserve": "true",
1261            "service-handler-header": {
1262              "request-id": "request1"
1263            },
1264            "service-a-end": {
1265              "service-rate": "100",
1266              "clli": "<clli-node>",
1267              "service-format": "OTU",
1268              "node-id": "<otn-node-id>"
1269            },
1270            "service-z-end": {
1271              "service-rate": "100",
1272              "clli": "<clli-node>",
1273              "service-format": "OTU",
1274              "node-id": "<otn-node-id>"
1275              },
1276            "pce-metric": "hop-count"
1277        }
1278    }
1279
1280 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the "openroadm-network"
    topology layer.
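
For reference, this RPC can also be invoked programmatically. The sketch below is illustrative only:
it assumes a controller reachable on localhost with the default RESTCONF port and credentials, and
uses hypothetical CLLI and node-id values.

.. code:: python

    import requests

    # Assumed RESTCONF endpoint and credentials -- adjust to your deployment.
    URL = ("http://localhost:8181/restconf/operations/"
           "transportpce-pce:path-computation-request")
    AUTH = ("admin", "admin")

    payload = {
        "input": {
            "service-name": "something",
            "resource-reserve": "true",
            "service-handler-header": {"request-id": "request1"},
            "service-a-end": {
                "service-rate": "100",
                "clli": "NodeA",             # hypothetical CLLI
                "service-format": "OTU",
                "node-id": "XPDR-A1"         # node-id from openroadm-network
            },
            "service-z-end": {
                "service-rate": "100",
                "clli": "NodeC",             # hypothetical CLLI
                "service-format": "OTU",
                "node-id": "XPDR-C1"         # node-id from openroadm-network
            },
            "pce-metric": "hop-count"
        }
    }

    response = requests.post(URL, json=payload, auth=AUTH)
    print(response.json())                   # PCE response with the computed path
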
1283
1284 Checking ODU4 service connectivity
1285 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1286
1287 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1288
1289 **Sample JSON Data**
1290
1291 .. code:: json
1292
1293    {
1294       "input": {
1295            "service-name": "something",
1296            "resource-reserve": "true",
1297            "service-handler-header": {
1298              "request-id": "request1"
1299            },
1300            "service-a-end": {
1301              "service-rate": "100",
1302              "clli": "<clli-node>",
1303              "service-format": "ODU",
1304              "node-id": "<otn-node-id>"
1305            },
1306            "service-z-end": {
1307              "service-rate": "100",
1308              "clli": "<clli-node>",
1309              "service-format": "ODU",
1310              "node-id": "<otn-node-id>"
1311              },
1312            "pce-metric": "hop-count"
1313        }
1314    }
1315
1316 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.
1318
1319 Checking 10GE/ODU2e service connectivity
1320 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1321
1322 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1323
1324 **Sample JSON Data**
1325
1326 .. code:: json
1327
1328    {
1329       "input": {
1330            "service-name": "something",
1331            "resource-reserve": "true",
1332            "service-handler-header": {
1333              "request-id": "request1"
1334            },
1335            "service-a-end": {
1336              "service-rate": "10",
1337              "clli": "<clli-node>",
1338              "service-format": "Ethernet",
1339              "node-id": "<otn-node-id>"
1340            },
1341            "service-z-end": {
1342              "service-rate": "10",
1343              "clli": "<clli-node>",
1344              "service-format": "Ethernet",
1345              "node-id": "<otn-node-id>"
1346              },
1347            "pce-metric": "hop-count"
1348        }
1349    }
1350
1351 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.
1353
1354
1355 odl-transportpce-tapi
1356 ---------------------
1357
This feature allows the TransportPCE application to expose at its northbound interface APIs other than
those defined by the OpenROADM MSA. With this feature, TransportPCE provides part of the Transport-API
specified by the Open Networking Foundation. More specifically, the Topology Service and Connectivity
Service components are implemented, exposing to higher-level applications an abstraction of
its OpenROADM topologies in the form of topologies respecting the T-API modelling, as well as
creating/deleting connectivity services between the Service Interface Points (SIPs) exposed by the
T-API topology. The current version of TransportPCE implements the *tapi-topology.yang* and
*tapi-connectivity.yang* models in revision 2018-12-10 (T-API v2.1.2).
1366
1367 Additionally, support for the Notification Service component will be added in future releases, which
1368 will allow higher level applications to create/delete a Notification Subscription Service to receive
1369 several T-API notifications as defined in the *tapi-notification.yang* model.
1370
1371 T-API Topology Service
1372 ~~~~~~~~~~~~~~~~~~~~~~
1373
1374 -  RPC calls implemented:
1375
1376    -  get-topology-details
1377
1378    -  get-node-details
1379
1380    -  get-node-edge-point-details
1381
1382    -  get-link-details
1383
1384    -  get-topology-list
1385
1386
As in IETF or OpenROADM topologies, T-API topologies are composed of lists of nodes and links that
abstract a set of network resources. T-API specifies the *T0 - Multi-layer topology* which is, as
indicated by its name, a single topology that collapses the network logical abstraction of all network
layers. Thus, an OpenROADM device such as an OTN xponder, which manages the ETH, ODU, OTU and optical
wavelength network layers, is represented in the T-API T0 topology by two nodes:
one *DSR/ODU* node and one *Photonic Media* node. The two nodes are linked together through one or
several *transitional links*, depending on the number of network/line ports on the device.
1394
Aluminium SR2 comes with a complete refactoring of this module, handling in the same way the multi-layer
abstraction of any Xponder terminal device, whether it is a 100G transponder, an OTN muxponder or
an OTN switch. For all these devices, the implementation ensures that only relevant ports
appear in the resulting TAPI topology abstraction. In other words, only client/network ports
that are indirectly/directly connected to the ROADM infrastructure are considered for the abstraction.
Moreover, the whole ROADM infrastructure of the network is also abstracted as a single photonic
node. Therefore, a pair of unidirectional xponder-output/xponder-input links present in *openroadm-topology*
is represented by a bidirectional *OMS* link in the TAPI topology.
In the same way, a pair of unidirectional OTN links (OTU4, ODU4) present in *otn-topology* is also
represented by a bidirectional OTN link in the TAPI topology, while retaining its available bandwidth
characteristics.
1406
Phosphorus SR0 extends the T-API topology service implementation by bringing a fully described topology.
*T0 - Full Multi-layer topology* is derived from the existing *T0 - Multi-layer topology*, but the ROADM
infrastructure is not abstracted, so a higher-level application can get more details on the composition
of the ROADM infrastructure controlled by TransportPCE. Each ROADM node found in *openroadm-network*
is converted into a *Photonic Media* node. The details of these T-API nodes are obtained from
*openroadm-topology*. Therefore, the external traffic ports of *Degree* and *SRG* nodes are represented
by a set of Network Edge Points (NEPs) and SIPs belonging to the *Photonic Media* node, and a pair of
roadm-to-roadm links present in *openroadm-topology* is represented by a bidirectional *OMS* link in the
TAPI topology.
Additionally, T-API topology related information is stored in the TransportPCE datastore in the same way
as the OpenROADM topology layers. When a node is connected to the controller through the corresponding
*REST API*, the T-API topology context is dynamically updated and stored.
1419
1420 .. note::
1421
    A naming nomenclature is defined to map T-API and OpenROADM data, for example:

    - T-API_roadm_Name = OpenROADM_roadmID + T-API_layer
    - T-API_roadm_nep_Name = OpenROADM_roadmID + T-API_layer + OpenROADM_terminationPointID
1425
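
As an illustration only, the two naming rules above can be expressed as simple string concatenations;
the ROADM and termination-point identifiers below are example values.

.. code:: python

    def tapi_roadm_name(roadm_id, tapi_layer):
        """T-API node name = OpenROADM roadm ID + T-API layer."""
        return "{}+{}".format(roadm_id, tapi_layer)

    def tapi_roadm_nep_name(roadm_id, tapi_layer, tp_id):
        """T-API NEP name = OpenROADM roadm ID + T-API layer + termination-point ID."""
        return "{}+{}+{}".format(roadm_id, tapi_layer, tp_id)

    # Example values matching the samples used later in this section.
    print(tapi_roadm_name("ROADM-A1", "PHOTONIC_MEDIA"))
    # -> ROADM-A1+PHOTONIC_MEDIA
    print(tapi_roadm_nep_name("ROADM-A1", "PHOTONIC_MEDIA", "DEG1-TTP-TXRX"))
    # -> ROADM-A1+PHOTONIC_MEDIA+DEG1-TTP-TXRX
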
Three kinds of topologies are currently implemented. The first one is the *"T0 - Multi-layer topology"*
defined in the reference implementation of T-API. This topology gives an abstraction of the data coming
from openroadm-topology and otn-topology. Such a topology may be rather complex since most devices are
represented through several nodes and links.
Another topology, named *"Transponder 100GE"*, is also implemented. The latter provides a much simpler,
higher-level abstraction for the specific case of a 100GE transponder, in the form of a single
DSR node.
Lastly, the *T0 - Full Multi-layer topology* was added. This topology collapses the data coming
from openroadm-network, openroadm-topology and otn-topology. It gives a complete view of the optical
network as defined in the reference implementation of T-API.
1436
1437 The figure below shows an example of TAPI abstractions as performed by TransportPCE starting from Aluminium SR2.
1438
1439 .. figure:: ./images/TransportPCE-tapi-abstraction.jpg
1440    :alt: Example of T0-multi-layer TAPI abstraction in TransportPCE
1441
In this specific case, as far as the "A" side is concerned, TransportPCE is connected to two xponder
terminal devices at the NETCONF level:

- XPDR-A1 is a 100GE transponder and is represented by the XPDR-A1-XPDR1 node in *otn-topology*
- SPDR-SA1 is an OTN xponder that actually contains in its device configuration datastore two OTN
  xponder nodes (the OTN muxponder 10GE => 100G SPDR-SA1-XPDR1 and the OTN switch 4x100GE => 4x100G
  SPDR-SA1-XPDR2)

As represented on the bottom part of the figure, only one network port of XPDR-A1-XPDR1 is connected
to the ROADM infrastructure, and only one network port of the OTN muxponder is attached to the
ROADM infrastructure.
Such a network configuration results in the TAPI *T0 - Multi-layer topology* abstraction represented
in the center of the figure. Note that the OTN switch (SPDR-SA1-XPDR2), not being attached to the
ROADM infrastructure, is not abstracted.
Moreover, the 100GE transponder being connected, the TAPI *Transponder 100GE* topology results in a
single-layer DSR node with only the two Owned Node Edge Points representing the two 100GE client ports
of XPDR-A1-XPDR1 and XPDR-C1-XPDR1 respectively.
1456
1457
1458 **REST API** : *POST /restconf/operations/tapi-topology:get-topology-details*
1459
1460 This request builds the TAPI *T0 - Multi-layer topology* abstraction with regard to the current
1461 state of *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
1462
1463 **Sample JSON Data**
1464
1465 .. code:: json
1466
1467     {
1468       "tapi-topology:input": {
1469         "tapi-topology:topology-id-or-name": "T0 - Multi-layer topology"
1470        }
1471     }
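
A higher-level application would typically post this request and inspect the returned topology. The
following minimal sketch assumes a controller reachable on localhost with the default RESTCONF port
and credentials, and that the RPC reply carries the topology under an *output* container; both
assumptions should be checked against your deployment.

.. code:: python

    import requests

    # Assumed RESTCONF endpoint and credentials -- adjust to your deployment.
    URL = "http://localhost:8181/restconf/operations/tapi-topology:get-topology-details"
    AUTH = ("admin", "admin")

    payload = {"tapi-topology:input": {
        "tapi-topology:topology-id-or-name": "T0 - Multi-layer topology"}}

    reply = requests.post(URL, json=payload, auth=AUTH).json()
    # Tolerate both prefixed and unprefixed output containers (assumption).
    output = reply.get("output") or reply.get("tapi-topology:output", {})
    topology = output.get("topology", {})
    print(len(topology.get("node", [])), "nodes /", len(topology.get("link", [])), "links")
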
1472
1473 This request builds the TAPI *Transponder 100GE* abstraction with regard to the current state of
1474 *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
Its main interest is to directly retrieve the 100GE client ports of 100G transponders that may
be connected together through a point-to-point 100GE service running over a wavelength.
1477
1478 .. code:: json
1479
1480     {
1481       "tapi-topology:input": {
1482         "tapi-topology:topology-id-or-name": "Transponder 100GE"
1483         }
1484     }
1485
1486
1487 .. note::
1488
    As for the *T0 - Multi-layer* topology, only the 100GE client ports whose associated 100G line
    port is connected to Add/Drop nodes of the ROADM infrastructure are retrieved, in order to
    abstract only relevant information.
1492
1493 This request builds the TAPI *T0 - Full Multi-layer* topology with respect to the information existing in
1494 the T-API topology context stored in OpenDaylight datastores.
1495
1496 .. code:: json
1497
1498     {
1499       "tapi-topology:input": {
1500         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology"
1501         }
1502     }
1503
1504 **REST API** : *POST /restconf/operations/tapi-topology:get-node-details*
1505
This request returns the information stored in the Topology Context for the corresponding T-API node.
The user can provide either the UUID associated with the node or its name.
1508
1509 **Sample JSON Data**
1510
1511 .. code:: json
1512
1513     {
1514       "tapi-topology:input": {
1515         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1516         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA"
1517       }
1518     }
1519
1520 **REST API** : *POST /restconf/operations/tapi-topology:get-node-edge-point-details*
1521
This request returns the information stored in the Topology Context for the corresponding T-API NEP.
The user can provide either the UUID associated with the NEP or its name.
1524
1525 **Sample JSON Data**
1526
1527 .. code:: json
1528
1529     {
1530       "tapi-topology:input": {
1531         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1532         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA",
1533         "tapi-topology:ep-id-or-name": "ROADM-A1+PHOTONIC_MEDIA+DEG1-TTP-TXRX"
1534       }
1535     }
1536
1537 **REST API** : *POST /restconf/operations/tapi-topology:get-link-details*
1538
This request returns the information stored in the Topology Context for the corresponding T-API link.
The user can provide either the UUID associated with the link or its name.
1541
1542 **Sample JSON Data**
1543
1544 .. code:: json
1545
1546     {
1547       "tapi-topology:input": {
1548         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1549         "tapi-topology:link-id-or-name": "ROADM-C1-DEG1-DEG1-TTP-TXRXtoROADM-A1-DEG2-DEG2-TTP-TXRX"
1550       }
1551     }
1552
1553 T-API Connectivity & Common Services
1554 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1555
Phosphorus SR0 extends the T-API interface support by implementing the T-API Connectivity Service.
This interface enables a higher-level controller or an orchestrator to request the creation of
connectivity services as defined in the *tapi-connectivity* model. As it is necessary to indicate the
two (or more) SIPs (or endpoints) of the connectivity service, the *tapi-common* model is implemented
to retrieve from the datastore all the information related to the SIPs in the tapi-context.
The current implementation of the connectivity service maps the *connectivity-request* into the appropriate
*openroadm-service-create* request and relies on the Service Handler to perform path calculation and
configuration of devices. Results received from the PCE and the Renderer are mapped back into T-API to
create the corresponding Connection End Points (CEPs) and Connections in the T-API Connectivity Context,
which is stored in the datastore.
1566
1567 This first implementation includes the creation of:
1568
1569 -   ROADM-to-ROADM tapi-connectivity service (MC connectivity service)
1570 -   OTN tapi-connectivity services (OCh/OTU, OTSi/OTU & ODU connectivity services)
1571 -   Ethernet tapi-connectivity services (DSR connectivity service)
1572
1573 -  RPC calls implemented
1574
1575    -  create-connectivity-service
1576
1577    -  get-connectivity-service-details
1578
1579    -  get-connection-details
1580
1581    -  delete-connectivity-service
1582
1583    -  get-connection-end-point-details
1584
1585    -  get-connectivity-service-list
1586
1587    -  get-service-interface-point-details
1588
1589    -  get-service-interface-point-list
1590
1591 Creating a T-API Connectivity service
1592 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1593
Use the *tapi* interface to create any end-to-end connectivity service on a T-API based
network. Two kinds of end-to-end "optical" connectivity services are managed by the TransportPCE T-API module:

- 10GE service from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
- Media Channel (MC) connectivity service from client add/drop port (PP port of SRG) to
  client add/drop port of two ROADMs.
1599
As mentioned earlier, the T-API module interfaces with the Service Handler to automatically invoke the
*renderer* module to create all required TAPI connections and cross-connections on each device
supporting the service.
1603
Before creating a low-order OTN connectivity service (1GE or 10GE services terminating on a
client port of a MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container
exists and has previously been configured (i.e., structured) to support low-order OTN containers.
1608
Thus, OTN connectivity service creation implies three steps:

1. OTSi/OTU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in Photonic Media layer)
2. ODU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in DSR/ODU layer)
3. 10GE connectivity service from client port to client port of two OTN Xponders (MUXPDR or SWITCH in DSR/ODU layer)
1613
1614 The first step corresponds to the OCH-OTU4 service from network port to network port of OpenROADM.
1615 The corresponding T-API cross and top connections are created between the CEPs of the T-API nodes
1616 involved in each request.
1617
Additionally, an *MC connectivity service* can be created between two ROADMs to create an optical
tunnel and reserve resources in advance. This kind of service corresponds to the OC service creation
use case described earlier.
1621
1622 The management of other OTN services through T-API (1GE-ODU0, 100GE...) is planned for future releases.
1623
1624 Any-Connectivity service creation
1625 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1626 As for the Service Creation described for OpenROADM, the initial steps are the same:
1627
1628 -   Connect netconf devices to the controller
1629 -   Create XPDR-RDM links and configure RDM-to-RDM links (in openroadm topologies)
1630
1631 Bidirectional T-API links between xpdr and rdm nodes must be created manually. To that end, use the
1632 following REST RPCs:
1633
1634 From xpdr <--> rdm:
1635 ^^^^^^^^^^^^^^^^^^^
1636
1637 **REST API** : *POST /restconf/operations/transportpce-tapinetworkutils:init-xpdr-rdm-tapi-link*
1638
1639 **Sample JSON Data**
1640
1641 .. code:: json
1642
1643     {
1644         "input": {
1645             "xpdr-node": "<XPDR_OpenROADM_id>",
1646             "network-tp": "<XPDR_TP_OpenROADM_id>",
1647             "rdm-node": "<ROADM_OpenROADM_id>",
1648             "add-drop-tp": "<ROADM_TP_OpenROADM_id>"
1649         }
1650     }
1651
Use the following REST RPC to invoke the T-API module in order to create a bidirectional connectivity
service between two devices. The network should be composed of two ROADMs and two Xponders (SWITCH or MUX).
1654
1655 **REST API** : *POST /restconf/operations/tapi-connectivity:create-connectivity-service*
1656
1657 **Sample JSON Data**
1658
1659 .. code:: json
1660
1661     {
1662         "tapi-connectivity:input": {
1663             "tapi-connectivity:end-point": [
1664                 {
1665                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1666                     "tapi-connectivity:service-interface-point": {
1667                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1668                     },
1669                     "tapi-connectivity:administrative-state": "UNLOCKED",
1670                     "tapi-connectivity:operational-state": "ENABLED",
1671                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1672                     "tapi-connectivity:role": "SYMMETRIC",
1673                     "tapi-connectivity:protection-role": "WORK",
1674                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1675                     "tapi-connectivity:name": [
1676                         {
1677                             "tapi-connectivity:value-name": "OpenROADM node id",
1678                             "tapi-connectivity:value": "<OpenROADM node ID>"
1679                         }
1680                     ]
1681                 },
1682                 {
1683                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1684                     "tapi-connectivity:service-interface-point": {
1685                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1686                     },
1687                     "tapi-connectivity:administrative-state": "UNLOCKED",
1688                     "tapi-connectivity:operational-state": "ENABLED",
1689                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1690                     "tapi-connectivity:role": "SYMMETRIC",
1691                     "tapi-connectivity:protection-role": "WORK",
1692                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1693                     "tapi-connectivity:name": [
1694                         {
1695                             "tapi-connectivity:value-name": "OpenROADM node id",
1696                             "tapi-connectivity:value": "<OpenROADM node ID>"
1697                         }
1698                     ]
1699                 }
1700             ],
1701             "tapi-connectivity:connectivity-constraint": {
1702                 "tapi-connectivity:service-layer": "<TAPI_Service_Layer>",
1703                 "tapi-connectivity:service-type": "POINT_TO_POINT_CONNECTIVITY",
1704                 "tapi-connectivity:service-level": "Some service-level",
1705                 "tapi-connectivity:requested-capacity": {
1706                     "tapi-connectivity:total-size": {
1707                         "value": "<CAPACITY>",
1708                         "unit": "GB"
1709                     }
1710                 }
1711             },
1712             "tapi-connectivity:state": "Some state"
1713         }
1714     }
1715
As for the previous RPC, MC and OTSi correspond to PHOTONIC_MEDIA layer services,
ODU to ODU layer services and 10GE/DSR to DSR layer services. This RPC invokes the
*Service Handler* module to trigger the *PCE* to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters. Once the path is computed
and validated, the T-API CEPs (associated with a NEP), cross-connections and top connections are created
according to the service request and the topology objects inside the computed path. Then, the *renderer* and
*OLM* modules are invoked to implement the end-to-end path into the devices and to update the status of the
connections and connectivity service.
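
To illustrate the layer mapping described above, the sketch below shows how the end-point
*layer-protocol-name* could be filled for each of the three OTN creation steps. It is indicative only:
module prefixes (*tapi-connectivity:*) are omitted, and the SIP UUIDs must come from the SIPs actually
exposed in your T-API context.

.. code:: python

    # Indicative layer-protocol-name per OTN creation step, following the mapping above.
    OTN_CREATION_STEPS = [
        "PHOTONIC_MEDIA",  # step 1: OTSi/OTU service between network ports
        "ODU",             # step 2: ODU service between network ports
        "DSR",             # step 3: 10GE/DSR service between client ports
    ]

    def end_point(layer, sip_uuid, node_id):
        """Build one end-point entry of a create-connectivity-service request."""
        return {
            "layer-protocol-name": layer,
            "service-interface-point": {"service-interface-point-uuid": sip_uuid},
            "administrative-state": "UNLOCKED",
            "operational-state": "ENABLED",
            "direction": "BIDIRECTIONAL",
            "role": "SYMMETRIC",
            "protection-role": "WORK",
            "local-id": node_id,
            "name": [{"value-name": "OpenROADM node id", "value": node_id}],
        }
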
1724
1725 .. note::
    Refer to the "Unconstrained E2E Service Provisioning" use cases of the T-API Reference Implementation to get
    more details about the process of connectivity service creation.
1728
1729 Deleting a connectivity service
1730 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1731
Use the following REST RPC to invoke the *TAPI* module in order to delete a given optical
connectivity service.
1734
1735 **REST API** : *POST /restconf/operations/tapi-connectivity:delete-connectivity-service*
1736
1737 **Sample JSON Data**
1738
1739 .. code:: json
1740
1741     {
1742         "tapi-connectivity:input": {
1743             "tapi-connectivity:service-id-or-name": "<Service_UUID_or_Name>"
1744         }
1745     }
1746
1747 .. note::
    Deleting OTN connectivity services implies proceeding in the reverse order of their creation. Thus, OTN
    connectivity service deletion must follow the three steps below:

    1. delete all 10GE services supported over any ODU4 to be deleted
    2. delete the ODU4 service
    3. delete the MC-OTSi service supporting the just deleted ODU4
1753
1754 T-API Notification Service
1755 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1756
In future releases, the T-API notification service will be implemented. The objective will be to write and read
T-API notifications stored in topics of a Kafka server, as explained later in the odl-transportpce-nbinotifications
section, but based on the T-API models.
1760
1761
1762 odl-transportpce-dmaap-client
1763 -----------------------------
1764
This feature allows the TransportPCE application to send notifications to the ONAP Dmaap Message Router
following service request results.
This feature listens to NBI notifications and sends the PublishNotificationService content to
Dmaap on the topic "unauthenticated.TPCE" through a POST request on /events/unauthenticated.TPCE.
It uses Jackson to serialize the notification to JSON and the Jersey client to send the POST request.
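
For illustration, the request performed by the feature is equivalent to the sketch below; the Dmaap
Message Router address is a hypothetical value and the body is a placeholder for the serialized
PublishNotificationService content.

.. code:: python

    import requests

    # Hypothetical Dmaap Message Router address -- adjust to your ONAP deployment.
    DMAAP_URL = "http://dmaap-mr.example.com:3904/events/unauthenticated.TPCE"

    # Placeholder for the JSON-serialized PublishNotificationService content.
    notification = {"service-name": "service-1", "message": "Service implemented !"}

    requests.post(DMAAP_URL, json=notification)
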
1770
1771 odl-transportpce-nbinotifications
1772 ---------------------------------
1773
This feature allows the TransportPCE application to write and read notifications stored in topics of a Kafka server.
It is basically composed of two kinds of elements. First, the 'publishers', which are in charge of sending
notifications to the Kafka server. To ensure that only specific authorized classes can send notifications,
each publisher is dedicated to one such class.
Then, the 'subscribers', which are in charge of reading notifications from the Kafka server.
When the feature is called to write a notification to the Kafka server, it serializes the notification
into JSON format and then publishes it in a topic of the server via a publisher.
When the feature is called to read notifications from the Kafka server, it retrieves them from
the corresponding topic of the server via a subscriber and deserializes them.
1783
For now, when the REST RPC service-create is called to create a bidirectional end-to-end service,
the feature publishes the result of the creation, whether it succeeded or failed, to a Kafka server.
The topics that store these notifications are named after the connection type
(service, infrastructure, roadm-line). For instance, if the RPC service-create is called to create an
infrastructure connection, the service notifications related to this connection will be stored in
the topic 'infrastructure'.
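
Since the notifications are stored as JSON in Kafka topics, they can also, in principle, be read
directly with any Kafka client. The following minimal sketch assumes the kafka-python package and a
broker reachable at localhost:9092, both of which are assumptions to adapt to your deployment.

.. code:: python

    import json

    from kafka import KafkaConsumer  # assumes the kafka-python package is installed

    # Assumed broker address -- adjust to the Kafka server configured for TransportPCE.
    consumer = KafkaConsumer(
        "service",  # topic named after the connection type
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for record in consumer:
        print(record.value)  # one JSON notification per record
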
1790
The figure below shows an example of the nbinotifications feature being used to notify the
progress of a service creation.
1793
1794 .. figure:: ./images/TransportPCE-nbinotifications-service-example.jpg
1795    :alt: Example of service notifications using the feature nbinotifications in TransportPCE
1796
1797
1798 Depending on the status of the service creation, two kinds of notifications can be published
1799 to the topic 'service' of the Kafka server.
1800
If the service was correctly implemented, the following notification will be published:
1802
1803
1804 -  **Service implemented !** : Indicates that the service was successfully implemented.
1805    It also contains all information concerning the new service.
1806
1807
Otherwise, this notification will be published:
1809
1810
1811 -  **ServiceCreate failed ...** : Indicates that the process of service-create failed, and also contains
1812    the failure cause.
1813
1814
To retrieve these service notifications stored in the Kafka server:
1816
1817 **REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-process-service*
1818
1819 **Sample JSON Data**
1820
1821 .. code:: json
1822
1823     {
1824       "input": {
1825         "connection-type": "service",
1826         "id-consumer": "consumer",
1827         "group-id": "test"
1828        }
1829     }
1830
1831 .. note::
1832     The field 'connection-type' corresponds to the topic that stores the notifications.
1833
Another implementation of the notifications makes it possible to notify any modification of the operational state of a service.
So, when a service breaks down or is restored, an alarm notification carrying the new status is sent to the Kafka server.
The topics that store these notifications in the Kafka server are also named after the connection type
(service, infrastructure, roadm-line), combined with the string 'alarm'.
1838
To retrieve these alarm notifications stored in the Kafka server:
1840
1841 **REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-alarm-service*
1842
1843 **Sample JSON Data**
1844
1845 .. code:: json
1846
1847     {
1848       "input": {
1849         "connection-type": "infrastructure",
1850         "id-consumer": "consumer",
1851         "group-id": "test"
1852        }
1853     }
1854
1855 .. note::
1856     This sample is used to retrieve all the alarm notifications related to infrastructure services.
1857
1858 Help
1859 ----
1860
1861 -  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__