.. _transportpce-dev-guide:

TransportPCE Developer Guide
============================

Overview
--------

TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with Controllers of different layers (L2, L3 Controller…), a
higher layer Controller and/or an Orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment, and to provision services according to a
request coming from a higher layer controller and/or an orchestrator.
This capability may rely on the controller only or it may be delegated to
distributed (standardized) protocols.


Architecture
------------

TransportPCE modular architecture is described in the diagram below. Each main
function, such as Topology management, Path Calculation Engine (PCE), Service
handler, Renderer (responsible for the path configuration through optical
equipment) and Optical Line Management (OLM), is associated with a generic
block relying on open models, each of them communicating through published APIs.


.. figure:: ./images/TransportPCE-Diagramm-Magnesium.jpg
   :alt: TransportPCE architecture

   TransportPCE architecture

Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
and transponders.

The interest of using a controller to provision services automatically strongly
relies on its ability to handle end-to-end optical services that span across
the different network domains, potentially equipped with equipment coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.

The initial design of TransportPCE leverages the OpenROADM Multi-Source-Agreement
(MSA) which defines interoperability specifications, consisting of both optical
interoperability and Yang data models.

End-to-end OTN services such as OCH-OTU4, structured ODU4 or 10GE-ODU2e
services are supported since Magnesium SR2. OTN support will continue to be
improved in the following releases of Magnesium and Aluminium.

An experimental support of Flexgrid is introduced in Aluminium. Depending on
OpenROADM device models, optical interfaces can be created according to the
initial fixed grid (for R1.2.1, 96 channels regularly spaced by 50 GHz), or to
a flexgrid (for R2.2.1, use of a specific number of subsequent frequency slots
of 6.25 GHz, depending on the capabilities of ROADMs and transponders on one
side, and on the channel rate on the other side). The full support of Flexgrid,
including path computation and the creation of B100G (Beyond 100 Gbps) higher
rate interfaces, will be added in the following releases of Aluminium.


Module description
~~~~~~~~~~~~~~~~~~

ServiceHandler
^^^^^^^^^^^^^^

The Service Handler handles requests coming from a higher level controller or an orchestrator
through the northbound API, as defined in the Open ROADM service model. The current
implementation addresses the following rpcs: service-create, temp-service-create,
service-delete, temp-service-delete, service-reroute, and service-restoration. It checks the
request consistency and triggers path calculation by sending rpcs to the PCE. If a valid path
is returned by the PCE, path configuration is initiated relying on the Renderer and OLM. At
the confirmation of a successful service creation, the Service Handler updates the
service-list/temp-service-list in the MD-SAL. For service deletion, the Service Handler relies
on the Renderer and the OLM to delete connections and reset power levels associated with the
service. The service-list is updated following a successful service deletion. Neon SR0 added
support for services from ROADM to ROADM, which brings additional flexibility and notably
allows reserving resources when transponders are not in place at day one.
Magnesium SR2 fully supports end-to-end OTN services which are part of the OTN infrastructure.
It concerns the management of OCH-OTU4 (also part of the optical infrastructure) and structured
HO-ODU4 services. Moreover, once these two kinds of OTN infrastructure services are created, it
is possible to manage some LO-ODU services (for the time being, only 10GE-ODU2e services).
The full support of OTN services, including 1GE-ODU0 or 100GE, will be introduced in the next
releases (Mg/Al).

PCE
^^^

The Path Computation Element (PCE) is the component responsible for path
calculation. An interface allows the Service Handler or external components such as an
orchestrator to request a path computation and get a response from the PCE
including the computed path(s) in case of success, or errors and an indication of
the reason for the failure in case the request cannot be satisfied. Additional
parameters can be provided by the PCE in addition to the computed paths if
requested by the client module. An interface to the Topology Management module
allows keeping the PCE aligned with the latest changes in the topology. Information
about current and planned services is available in the MD-SAL data store.

The current implementation of the PCE allows finding the shortest path, minimizing either the hop
count (default) or the propagation delay. The central wavelength is assigned considering a fixed
grid of 96 wavelengths spaced by 50 GHz. The assignment of wavelengths according to a flexible
grid considering 768 subsequent slots of 6.25 GHz (total spectrum of 4.8 THz), and their
occupation by existing services, is planned for later releases.
In Neon SR0, the PCE calculates the OSNR, on the basis of incremental noise specifications
provided in the Open ROADM MSA. The support of unidirectional ports is also added.

The PCE handles the following constraints as hard constraints:

-   **Node exclusion**
-   **SRLG exclusion**
-   **Maximum latency**

In Magnesium SR0, the interconnection of the PCE with GNPy (Gaussian Noise Python), an
open-source library developed in the scope of the Telecom Infra Project for building route
planning and optimizing performance in optical mesh networks, is fully supported.

If the OSNR calculated by the PCE is too close to the limit defined in OpenROADM
specifications, the PCE forwards the topology and the pre-computed path, translated into
routing constraints, to the external GNPy tool through a REST interface. GNPy calculates a set
of Quality of Transmission metrics for this path using its own library, which includes models
for OpenROADM. The result is sent back to the PCE. If the path is validated, the PCE sends back
a response to the service handler. In case of invalidation of the path by GNPy, the PCE sends a
new request to GNPy, including only the constraints expressed in the path-computation-request
initiated by the Service Handler. GNPy then tries to calculate a path based on these relaxed
constraints. The result of the path computation is provided to the PCE, which translates the
path according to the topology handled in transportPCE and forwards the results to the Service
Handler.

GNPy relies on the SNR and takes into account the linear and non-linear impairments
to check feasibility. In the related tests, the GNPy module runs externally in a
Docker container and the communication with T-PCE is ensured via HTTPs.

Topology Management
^^^^^^^^^^^^^^^^^^^

The Topology management module builds the topology according to the Network model
defined in OpenROADM. The topology is aligned with the IETF I2RS RFC8345 model.
It includes several network layers:

-  **CLLI layer corresponds to the locations that host equipment**
-  **Network layer corresponds to a first level of disaggregation where we
   separate Xponders (transponders, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where ROADM
   Add/Drop modules ("SRGs") are separated from the degrees, which include line
   amplifiers and WSSs that switch wavelengths from one degree to another**
-  **OTN layer introduced in Magnesium includes transponders as well as switch-ponders and
   mux-ponders having the ability to switch OTN containers from client to line cards. The Mg SR0
   release includes creation of the switching pool (used to model cross-connect matrices),
   tributary-ports and tributary-slots at the initial connection of NETCONF devices.
   The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
   pool occupancy when OTN services are created, is supported since Magnesium SR2.**


Renderer
^^^^^^^^

The Renderer module, on request coming from the Service Handler through a
service-implementation-request / service-delete rpc, sets up or deletes the path corresponding
to a specific service between its A and Z ends. The path description provided by the
service-handler to the renderer is based on abstracted resources (nodes, links and
termination-points), as provided by the PCE module. The renderer converts this path
description into a path topology based on device resources (circuit-packs, ports,…).

The conversion from abstracted resources to device resources is performed relying on the
portmapping module, which maintains the connections between these different resource types.
The portmapping module also keeps the topology independent of the device releases.
In Neon (SR0), the portmapping module has been enriched to support both openroadm 1.2.1 and 2.2.1
device models. The full support of openroadm 2.2.1 device models (both in the topology management
and the rendering function) has been added in Neon SR1. In Magnesium, portmapping is enriched with
the supported-interface-capability, OTN supporting-interfaces, and switching-pools (reflecting
cross-connection capabilities of OTN switch-ponders).

After the path is provided, the renderer first checks the existing interfaces on the ports of
the different nodes that the path crosses. It then creates the missing interfaces. After all
needed interfaces have been created, it sets up the connections required in the nodes and
notifies the Service Handler of the status of the path creation. The path is created in two
steps (from A to Z and Z to A). In case the path between A and Z could not be fully created, a
rollback function is called to set the equipment on the path back to their initial configuration
(as they were before invoking the Renderer).

Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
and ODU0 interfaces. The creation of these low-order otn interfaces must be triggered through the
otn-service-path RPC. Magnesium SR2 fully supports end-to-end otn service implementation into devices
(service-implementation-request / service-delete rpc, topology alignment after the service has been created).


OLM
^^^

The Optical Line Management module implements two main features: it is responsible
for setting up the optical power levels on the different interfaces, and it is in
charge of adjusting these settings across the life of the optical
infrastructure.

After the different connections have been established in the ROADMs, between two
Degrees for an express path, or between an SRG and a Degree for an Add or Drop
path, meaning the devices have set the WSS and all other required elements to
provide path continuity, power settings are provided as attributes of these
connections. This allows the device to set all complementary elements such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) in the fiber span. This also applies
to X-Ponders, as their output power must comply with the specifications defined
for the Add/Drop ports (SRG) of the ROADM. OLM has the responsibility of
calculating the right power settings, sending them to the device, and checking
the PM retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.


Inventory
^^^^^^^^^

The TransportPCE Inventory module is responsible for keeping track of connected devices in an
external MariaDB database.
Other databases may be used as long as they comply with SQL and are compatible with OpenDaylight
(for example MySQL).
At present, the module supports extracting and persisting the inventory of devices compliant
with OpenROADM MSA version 1.2.1.
Inventory module changes to support newer device models (2.2.1, etc) and other models (network,
service, etc) will be progressively included.

The inventory module can be activated by the associated karaf feature (odl-transportpce-inventory).
The database properties are supplied in the "opendaylight-release" and "opendaylight-snapshots"
profiles.
Below is the settings.xml with the properties included in the distribution.
The module can be rebuilt from sources with different parameters.

Sample entry in settings.xml to declare an external inventory database:
::

    <profiles>
      <profile>
          <id>opendaylight-release</id>
          [..]
          <properties>
                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
                  <transportpce.db.database><<databasename>></transportpce.db.database>
                  <transportpce.db.username><<username>></transportpce.db.username>
                  <transportpce.db.password><<password>></transportpce.db.password>
                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
          </properties>
      </profile>
      [..]
      <profile>
          <id>opendaylight-snapshots</id>
          [..]
          <properties>
                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
                  <transportpce.db.database><<databasename>></transportpce.db.database>
                  <transportpce.db.username><<username>></transportpce.db.username>
                  <transportpce.db.password><<password>></transportpce.db.password>
                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
          </properties>
      </profile>
    </profiles>

Once the project is built and karaf is started, the cfg file is generated in the etc folder with
the corresponding properties supplied in settings.xml. When devices with the OpenROADM 1.2.1
device model are mounted, the device listener in the inventory module loads several device
attributes into various tables of the supplied database.
The database structure details can be retrieved from the file tests/inventory/initdb.sql inside
project sources.
Installation scripts and a docker file are also provided.
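
As a quick sanity check, the content of the inventory database can be inspected with any SQL
client. The sketch below uses the mysql-connector-python package; the credentials, database name
and table name are placeholders to adapt (the actual schema is defined in
tests/inventory/initdb.sql).

.. code:: python

    # Minimal sketch: list the rows persisted by the inventory module.
    # Credentials, database name and table name below are placeholders;
    # the actual schema is defined in tests/inventory/initdb.sql.
    import mysql.connector  # pip install mysql-connector-python

    connection = mysql.connector.connect(
        host="<hostname>", port=3306,
        user="<username>", password="<password>",
        database="<databasename>")
    cursor = connection.cursor()
    # "inv_dev_info" is a hypothetical table name: adapt it to initdb.sql.
    cursor.execute("SELECT * FROM inv_dev_info")
    for row in cursor.fetchall():
        print(row)
    connection.close()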

Key APIs and Interfaces
-----------------------

External API
~~~~~~~~~~~~

The North API, interconnecting the Service Handler to higher level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring Open ROADM devices through a southbound
Netconf/Yang interface and rely on the MSA's device model.

ServiceHandler Service
^^^^^^^^^^^^^^^^^^^^^^

-  RPC call

   -  service-create (given service-name, service-aend, service-zend)

   -  service-delete (given service-name)

   -  service-reroute (given service-name, service-aend, service-zend)

   -  service-restoration (given service-name, service-aend, service-zend)

   -  temp-service-create (given common-id, service-aend, service-zend)

   -  temp-service-delete (given common-id)

-  Data structure

   -  service list : made of services
   -  temp-service list : made of temporary services
   -  service : composed of service-name, topology which describes the detailed path (list of used resources)

-  Notification

   - service-rpc-result : result of service RPC
   - service-notification : service has been added, modified or removed

Netconf Service
^^^^^^^^^^^^^^^

-  RPC call

   -  connect-device : PUT
   -  disconnect-device : DELETE
   -  check-connected-device : GET

-  Data Structure

   -  node list : composed of netconf nodes in topology-netconf

Internal APIs
~~~~~~~~~~~~~

Internal APIs define REST APIs to interconnect TransportPCE modules:

-   Service Handler to PCE
-   PCE to Topology Management
-   Service Handler to Renderer
-   Renderer to OLM

Pce Service
^^^^^^^^^^^

-  RPC call

   -  path-computation-request (given service-name, service-aend, service-zend)

   -  cancel-resource-reserve (given service-name)

-  Notification

   - service-path-rpc-result : result of service RPC

Renderer Service
^^^^^^^^^^^^^^^^

-  RPC call

   -  service-implementation-request (given service-name, service-aend, service-zend)

   -  service-delete (given service-name)

-  Data structure

   -  service path list : composed of service paths
   -  service path : composed of service-name, path description giving the list of abstracted elements (nodes, tps, links)

-  Notification

   - service-path-rpc-result : result of service RPC

Device Renderer
^^^^^^^^^^^^^^^

-  RPC call

   -  service-path used in SR0 as an intermediate solution to address the renderer directly
      from a REST NBI to create OCH-OTU4-ODU4 interfaces on the network port of otn devices.

   -  otn-service-path used in SR0 as an intermediate solution to address the renderer directly
      from a REST NBI for otn-service creation. Otn service-creation through a
      service-implementation-request call from the Service Handler will be supported in later
      Magnesium releases.

Topology Management Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  Data structure

   -  network list : composed of networks (openroadm-topology, netconf-topology)
   -  node list : composed of nodes identified by their node-id
   -  link list : composed of links identified by their link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)

OLM Service
^^^^^^^^^^^

-  RPC call

   -  get-pm (given node-id)

   -  service-power-setup

   -  service-power-turndown

   -  service-power-reset

   -  calculate-spanloss-base

   -  calculate-spanloss-current
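
For illustration, the get-pm RPC can be invoked directly through RESTCONF. The sketch below
assumes a controller running on localhost with the default admin/admin credentials; the input
leaves shown (resource-type, granularity, resource-identifier) are assumptions to check against
the transportpce-olm YANG model.

.. code:: python

    # Sketch: retrieve PM data from a node through the OLM get-pm RPC.
    # Host, credentials and input leaves are assumptions to adapt.
    import requests

    URL = "http://localhost:8181/restconf/operations/transportpce-olm:get-pm"
    payload = {
        "input": {
            "node-id": "<node-id>",
            "resource-type": "interface",
            "granularity": "15min",
            "resource-identifier": {
                "resource-name": "<interface-name>"
            }
        }
    }
    response = requests.post(URL, json=payload, auth=("admin", "admin"))
    response.raise_for_status()
    print(response.json())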

odl-transportpce-stubmodels
^^^^^^^^^^^^^^^^^^^^^^^^^^^

   -  This feature provides functions to stub some of the TransportPCE modules, pce and
      renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.

Interfaces to external software
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section defines the interfaces implemented to interconnect TransportPCE modules with other
software in order to perform specific tasks.

GNPy interface
^^^^^^^^^^^^^^

-  Request structure

   -  topology : composed of a list of elements and connections
   -  service : source, destination, explicit-route-objects, path-constraints

-  Response structure

   -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth
   -  path-properties/path-route-objects : composed of path elements

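The sketch below only illustrates the general shape of such an exchange in Python; the GNPy
endpoint URL and the exact attribute names are assumptions to check against the GNPy version in
use.

.. code:: python

    # Illustrative shape of a GNPy path-computation request, following the
    # request structure described above. The endpoint and field names are
    # assumptions, not a reference.
    import requests

    gnpy_request = {
        "topology": {"elements": [], "connections": []},
        "service": {
            "source": "<node-A>",
            "destination": "<node-Z>",
            "explicit-route-objects": {},
            "path-constraints": {}
        }
    }
    response = requests.post(
        "http://<gnpy-host>:<gnpy-port>/<path-request-endpoint>",
        json=gnpy_request)
    # Expected reply: path-properties with path-metric entries and
    # path-route-objects listing the path elements.
    print(response.json())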

Running transportPCE project
----------------------------

To use the transportPCE controller, the first step is to connect the controller to optical nodes
through the NETCONF connector.

.. note::

    In the current version, only optical equipment compliant with open ROADM datamodels is
    managed by transportPCE.


Connecting nodes
~~~~~~~~~~~~~~~~

To connect a node, use the following JSON RPC:

**REST API** : *POST /restconf/config/network-topology:network-topology/topology/topology-netconf/node/<node-id>*

**Sample JSON Data**

.. code:: json

    {
        "node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:tcp-only": "false",
                "netconf-node-topology:reconnect-on-changed-schema": "false",
                "netconf-node-topology:host": "<node-ip-address>",
                "netconf-node-topology:default-request-timeout-millis": "120000",
                "netconf-node-topology:max-connection-attempts": "0",
                "netconf-node-topology:sleep-factor": "1.5",
                "netconf-node-topology:actor-response-wait-time": "5",
                "netconf-node-topology:concurrent-rpc-limit": "0",
                "netconf-node-topology:between-attempts-timeout-millis": "2000",
                "netconf-node-topology:port": "<netconf-port>",
                "netconf-node-topology:connection-timeout-millis": "20000",
                "netconf-node-topology:username": "<node-username>",
                "netconf-node-topology:password": "<node-password>",
                "netconf-node-topology:keepalive-delay": "300"
            }
        ]
    }


Then check that the netconf session has been correctly established between the controller and
the node. The status of **netconf-node-topology:connection-status** must be **connected**.

**REST API** : *GET /restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<node-id>*

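Both requests can be scripted. The following sketch, assuming a controller reachable on
localhost:8181 with the default admin/admin credentials, mounts a node and polls the operational
datastore until the connection status reaches *connected*.

.. code:: python

    # Sketch: mount a NETCONF node and wait until it is connected.
    # Controller address and credentials (admin/admin) are assumptions.
    import time

    import requests

    BASE = "http://localhost:8181/restconf"
    AUTH = ("admin", "admin")
    NODE_ID = "<node-id>"

    payload = {"node": [{
        "node-id": NODE_ID,
        "netconf-node-topology:host": "<node-ip-address>",
        "netconf-node-topology:port": "<netconf-port>",
        "netconf-node-topology:username": "<node-username>",
        "netconf-node-topology:password": "<node-password>",
        "netconf-node-topology:tcp-only": "false"
    }]}

    requests.post(f"{BASE}/config/network-topology:network-topology/topology"
                  f"/topology-netconf/node/{NODE_ID}",
                  json=payload, auth=AUTH).raise_for_status()

    while True:
        reply = requests.get(f"{BASE}/operational/network-topology:network-topology"
                             f"/topology/topology-netconf/node/{NODE_ID}", auth=AUTH)
        node = reply.json().get("node", [{}])[0]
        if node.get("netconf-node-topology:connection-status") == "connected":
            break
        time.sleep(5)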

Node configuration discovery
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the controller is connected to the node, the transportPCE application automatically
launches a discovery of the node configuration datastore and creates **Logical Connection
Points** for any physical ports related to transmission. All *circuit-packs* inside the node
configuration are analyzed.

Use the following JSON RPC to check the result of that function, internally named *portMapping*.

**REST API** : *GET /restconf/config/portmapping:network*

.. note::

    In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
        * rdm: ROADM device (optical switch)
        * xpdr: Xponder device (device that converts client to optical channel interface)
        * ila: in line amplifier (optical amplifier)
        * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)

    TransportPCE currently supports rdm and xpdr devices.

Depending on the kind of open ROADM device connected, different kinds of *Logical Connection
Points* should appear, if the node configuration is not empty:

-  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on a rdm equipment
-  SRG<srg-number>-PP<port-number>: created on the client port of a srg on a rdm equipment
-  XPDR<number>-CLIENT<port-number>: created on the client port of a xpdr equipment
-  XPDR<number>-NETWORK<port-number>: created on the line port of a xpdr equipment

    For further details on openROADM device models, see `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.

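The portmapping datastore can also be browsed programmatically. The sketch below, under the same
host and credential assumptions as before, prints the *Logical Connection Points* discovered for
each mounted node; the exact JSON key names are assumptions to check against the portmapping
YANG model.

.. code:: python

    # Sketch: list the Logical Connection Points discovered by portMapping.
    # Key names ("nodes", "mapping", "logical-connection-point") are
    # assumptions to check against the portmapping model.
    import requests

    reply = requests.get("http://localhost:8181/restconf/config/portmapping:network",
                         auth=("admin", "admin"))
    reply.raise_for_status()
    for node in reply.json().get("network", {}).get("nodes", []):
        for mapping in node.get("mapping", []):
            print(node["node-id"], mapping["logical-connection-point"])
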
Optical Network topology
~~~~~~~~~~~~~~~~~~~~~~~~

Before creating an optical connectivity service, your topology must contain at least two xpdr
devices connected to two different rdm devices. Normally, the *openroadm-topology* is
automatically created by transportPCE. Nevertheless, depending on the configuration inside
optical nodes, this topology can be partial. Check that a link of type *ROADMtoROADM* exists
between two adjacent rdm nodes.

**REST API** : *GET /restconf/config/ietf-network:network/openroadm-topology*

If it is not the case, you need to manually complement the topology with *ROADMtoROADM* links
using the following REST RPC:


**REST API** : *POST /restconf/operations/networkutils:init-roadm-nodes*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:rdm-a-node": "<node-id-A>",
        "networkutils:deg-a-num": "<degree-A-number>",
        "networkutils:termination-point-a": "<Logical-Connection-Point>",
        "networkutils:rdm-z-node": "<node-id-Z>",
        "networkutils:deg-z-num": "<degree-Z-number>",
        "networkutils:termination-point-z": "<Logical-Connection-Point>"
      }
    }

*<Logical-Connection-Point> comes from the portMapping function*.

Unidirectional links between xpdr and rdm nodes must be created manually. To that end, use the
two following REST RPCs (a combined sketch is given after the second one):

From xpdr to rdm:
^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/networkutils:init-xpdr-rdm-links*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:links-input": {
          "networkutils:xpdr-node": "<xpdr-node-id>",
          "networkutils:xpdr-num": "1",
          "networkutils:network-num": "<xpdr-network-port-number>",
          "networkutils:rdm-node": "<rdm-node-id>",
          "networkutils:srg-num": "<srg-number>",
          "networkutils:termination-point-num": "<Logical-Connection-Point>"
        }
      }
    }

From rdm to xpdr:
^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/networkutils:init-rdm-xpdr-links*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:links-input": {
          "networkutils:xpdr-node": "<xpdr-node-id>",
          "networkutils:xpdr-num": "1",
          "networkutils:network-num": "<xpdr-network-port-number>",
          "networkutils:rdm-node": "<rdm-node-id>",
          "networkutils:srg-num": "<srg-number>",
          "networkutils:termination-point-num": "<Logical-Connection-Point>"
        }
      }
    }
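
Since both RPCs take the same input body, they can be wrapped in a small helper that creates the
two unidirectional links in one call. This is only a convenience sketch under the usual
localhost and admin/admin assumptions.

.. code:: python

    # Sketch: create both unidirectional xpdr<->rdm links with one helper.
    # Controller address and credentials are assumptions.
    import requests

    BASE = "http://localhost:8181/restconf/operations"
    AUTH = ("admin", "admin")

    def init_xpdr_rdm_links(links_input):
        """links_input follows the networkutils:links-input structure above."""
        body = {"networkutils:input": {"networkutils:links-input": links_input}}
        for rpc in ("networkutils:init-xpdr-rdm-links",
                    "networkutils:init-rdm-xpdr-links"):
            requests.post(f"{BASE}/{rpc}", json=body, auth=AUTH).raise_for_status()

    init_xpdr_rdm_links({
        "networkutils:xpdr-node": "<xpdr-node-id>",
        "networkutils:xpdr-num": "1",
        "networkutils:network-num": "<xpdr-network-port-number>",
        "networkutils:rdm-node": "<rdm-node-id>",
        "networkutils:srg-num": "<srg-number>",
        "networkutils:termination-point-num": "<Logical-Connection-Point>"
    })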

OTN topology
~~~~~~~~~~~~

Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
or SWITCH type connected to two different rdm devices. To check that these xpdr are present in
the OTN topology, use the following command on the REST API:

**REST API** : *GET /restconf/config/ietf-network:network/otn-topology*

An optical connectivity service shall have been created in a first step. Since Magnesium SR2,
the OTN links are automatically populated in the topology after the Och, OTU4 and ODU4
interfaces have been created on the two network ports of the xpdr.

Creating a service
~~~~~~~~~~~~~~~~~~

Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
network. Two kinds of end-to-end "optical" services are managed by TransportPCE:

- 100GE service from client port to client port of two transponders (TPDR)
- Optical Channel (OC) service from client add/drop port (PP port of SRG) to client add/drop
  port of two ROADMs.

For these services, TransportPCE automatically invokes the *renderer* module to create all
required interfaces and cross-connections on each device supporting the service.
As an example, the creation of a 100GE service implies, among other things, the creation of OCH,
OTU4 and ODU4 interfaces on the Network port of TPDR devices.

Since Magnesium SR2, the *service handler* module directly manages some end-to-end otn
connectivity services.
Before creating a low-order OTN service (1GE or 10GE services terminating on client port of
MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container exists and has
previously been configured (that is, structured to support low-order otn services).
Thus, OTN service creation implies three steps (a sketch of the sequence is given below):

1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)

The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.
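
The three creation steps can be chained with a simple script. In the sketch below, each payload
file is assumed to contain one of the three sample request bodies shown in the following
subsections (OCH-OTU4, then HO-ODU4, then 10GE-ODU2e); file names, host and credentials are
hypothetical.

.. code:: python

    # Sketch: create the three OTN service layers in order. The JSON files
    # are hypothetical and expected to hold the sample bodies shown below.
    import json

    import requests

    URL = ("http://localhost:8181/restconf/operations/"
           "org-openroadm-service:service-create")

    for payload_file in ("och-otu4.json", "ho-odu4.json", "10ge-odu2e.json"):
        with open(payload_file) as f:
            reply = requests.post(URL, json=json.load(f), auth=("admin", "admin"))
        reply.raise_for_status()
        print(payload_file, "->", reply.json())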


100GE service creation
^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create a
bidirectional end-to-end optical connectivity service between two xpdr over an optical network
composed of rdm nodes.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "test1",
            "common-id": "commonId",
            "connection-type": "service",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

The most important parameters for this REST RPC are the identification of the two physical
client ports on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* to implement the end-to-end path
into the devices.
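
Once the RPC has been sent, the result can be verified in the service-list. The sketch below
posts the request body above (assumed to be saved in a local JSON file) and then reads back the
corresponding service entry; host, credentials, the file name and the service-list URL are
assumptions.

.. code:: python

    # Sketch: send the service-create body above (saved in a local file)
    # and read back the resulting entry from the service-list datastore.
    import json

    import requests

    BASE = "http://localhost:8181/restconf"
    AUTH = ("admin", "admin")

    with open("100ge-service-create.json") as f:  # hypothetical file name
        body = json.load(f)

    reply = requests.post(f"{BASE}/operations/org-openroadm-service:service-create",
                          json=body, auth=AUTH)
    reply.raise_for_status()
    print(reply.json())  # should report a successful request handling

    # The service should then appear in the service-list ("test1" matches
    # the service-name used in the sample body above).
    reply = requests.get(f"{BASE}/config/org-openroadm-service:service-list"
                         "/services/test1", auth=AUTH)
    print(reply.json())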


OC service creation
^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create a
bidirectional end-to-end Optical Channel (OC) connectivity service between two add/drop ports
(PP port of SRG node) over an optical network only composed of rdm nodes.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "roadm-line",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OC",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OC",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* to implement the end-to-end path
into the devices.

OTN OCH-OTU4 service creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create over the
optical infrastructure a bidirectional end-to-end OTU4 over an optical wavelength connectivity
service between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service
configures the optical network infrastructure composed of rdm nodes.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "infrastructure",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OTU",
                "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OTU",
                "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* to implement the end-to-end path
into the devices.

OTN HO-ODU4 service creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create over the
optical infrastructure a bidirectional end-to-end ODU4 OTN service over an OTU4, structured to
support low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created
between two network ports of OTN Xponders (MUXPDR or SWITCH).

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "infrastructure",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain OTU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* to implement the end-to-end path into the devices.

OTN 10GE-ODU2e service creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create over the
OTN infrastructure a bidirectional end-to-end 10GE-ODU2e OTN service over an ODU4.
Such a service must be created between two client ports of OTN Xponders (MUXPDR or SWITCH)
configured to support 10GE interfaces.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "service",
            "service-a-end": {
                "service-rate": "10",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "subrate-eth-sla": {
                    "subrate-eth-sla": {
                        "committed-info-rate": "10000",
                        "committed-burst-size": "64"
                    }
                },
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "10",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "subrate-eth-sla": {
                    "subrate-eth-sla": {
                        "committed-info-rate": "10000",
                        "committed-burst-size": "64"
                    }
                },
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* to implement the end-to-end path into the devices.


.. note::
    Since Magnesium SR2, the service-list corresponding to OCH-OTU4, ODU4 or 10GE-ODU2e services
    is updated in the service-list datastore.

.. note::
    trib-slot is used when the equipment supports contiguous trib-slot allocation (supported
    from Magnesium SR0). The trib-slot provided corresponds to the first of the used trib-slots.
    complex-trib-slots will be used when the equipment does not support contiguous trib-slot
    allocation. In this case a list of the different trib-slots to be used shall be provided.
    The support for non-contiguous trib-slot allocation is planned for a later Magnesium release.

Deleting a service
~~~~~~~~~~~~~~~~~~

Deleting any kind of service
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to delete a given
optical connectivity service.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete",
                "request-system-id": "appname",
                "notification-url": "http://localhost:8585/NotificationServer/notify"
            },
            "service-delete-req-info": {
                "service-name": "something",
                "tail-retention": "no"
            }
        }
    }

The most important parameter for this REST RPC is the *service-name*.


.. note::
    Deleting OTN services implies proceeding in the reverse way to their creation. Thus, OTN
    service deletion must respect the three following steps:

    1. delete first all 10GE services supported over any ODU4 to be deleted
    2. delete ODU4
    3. delete OCH-OTU4 supporting the just deleted ODU4
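
A sketch of this ordered deletion, under the usual localhost and admin/admin assumptions, simply
issues service-delete for each service name, low-order services first; the service names are
placeholders.

.. code:: python

    # Sketch: delete OTN services in the reverse order of their creation.
    # Service names and controller address are assumptions.
    import requests

    URL = ("http://localhost:8181/restconf/operations/"
           "org-openroadm-service:service-delete")

    # 10GE services first, then the ODU4, then the supporting OCH-OTU4.
    for service_name in ("<10GE-service>", "<ODU4-service>", "<OCH-OTU4-service>"):
        body = {"input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete"
            },
            "service-delete-req-info": {
                "service-name": service_name,
                "tail-retention": "no"
            }
        }}
        requests.post(URL, json=body, auth=("admin", "admin")).raise_for_status()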

Invoking PCE module
~~~~~~~~~~~~~~~~~~~

Use the following REST RPCs to invoke the *PCE* module in order to check connectivity between
xponder nodes and the availability of a supporting optical connectivity between the
network-ports of the nodes.

Checking OTU4 service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*

**Sample JSON Data**

.. code:: json

1245    {
1246       "input": {
1247            "service-name": "something",
1248            "resource-reserve": "true",
1249            "service-handler-header": {
1250              "request-id": "request1"
1251            },
1252            "service-a-end": {
1253              "service-rate": "100",
1254              "clli": "<clli-node>",
1255              "service-format": "OTU",
1256              "node-id": "<otn-node-id>"
1257            },
1258            "service-z-end": {
1259              "service-rate": "100",
1260              "clli": "<clli-node>",
1261              "service-format": "OTU",
1262              "node-id": "<otn-node-id>"
1263              },
1264            "pce-metric": "hop-count"
1265        }
1266    }
1267
.. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "openroadm-network"
    topology layer.
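
The ODU4 and 10GE checks below reuse exactly the same request, changing only the *service-format*
and *service-rate* fields. A single parameterized helper can therefore cover all three checks; the
sketch below again assumes the controller on localhost:8181 with default *admin/admin* credentials.

.. code:: python

    # Minimal sketch: invoke the transportpce-pce path-computation-request RPC.
    # Assumptions: controller on localhost:8181, default admin/admin credentials.
    # The same helper covers the OTU4, ODU4 and 10GE checks of this section;
    # only service_format and service_rate change.
    import json

    import requests

    URL = ("http://localhost:8181/restconf/operations/"
           "transportpce-pce:path-computation-request")


    def check_connectivity(service_name, node_a, node_z, clli_a, clli_z,
                           service_format="OTU", service_rate="100"):
        """Request a path computation and return the decoded JSON response."""
        payload = {
            "input": {
                "service-name": service_name,
                "resource-reserve": "true",
                "service-handler-header": {"request-id": "request1"},
                "service-a-end": {
                    "service-rate": service_rate,
                    "clli": clli_a,
                    "service-format": service_format,
                    "node-id": node_a,
                },
                "service-z-end": {
                    "service-rate": service_rate,
                    "clli": clli_z,
                    "service-format": service_format,
                    "node-id": node_z,
                },
                "pce-metric": "hop-count",
            }
        }
        response = requests.post(
            URL,
            data=json.dumps(payload),
            headers={"Content-Type": "application/json"},
            auth=("admin", "admin"),
            timeout=30,
        )
        response.raise_for_status()
        return response.json()


    # e.g. the 10GE/ODU2e check shown further below:
    # check_connectivity("something", "<otn-node-id>", "<otn-node-id>",
    #                    "<clli-node>", "<clli-node>", "Ethernet", "10")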

Checking ODU4 service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*

**Sample JSON Data**

.. code:: json

   {
      "input": {
           "service-name": "something",
           "resource-reserve": "true",
           "service-handler-header": {
             "request-id": "request1"
           },
           "service-a-end": {
             "service-rate": "100",
             "clli": "<clli-node>",
             "service-format": "ODU",
             "node-id": "<otn-node-id>"
           },
           "service-z-end": {
             "service-rate": "100",
             "clli": "<clli-node>",
             "service-format": "ODU",
             "node-id": "<otn-node-id>"
           },
           "pce-metric": "hop-count"
       }
   }

.. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.

Checking 10GE/ODU2e service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*

**Sample JSON Data**

.. code:: json

   {
      "input": {
           "service-name": "something",
           "resource-reserve": "true",
           "service-handler-header": {
             "request-id": "request1"
           },
           "service-a-end": {
             "service-rate": "10",
             "clli": "<clli-node>",
             "service-format": "Ethernet",
             "node-id": "<otn-node-id>"
           },
           "service-z-end": {
             "service-rate": "10",
             "clli": "<clli-node>",
             "service-format": "Ethernet",
             "node-id": "<otn-node-id>"
           },
           "pce-metric": "hop-count"
       }
   }

.. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.


odl-transportpce-tapi
---------------------

This feature allows the TransportPCE application to expose at its northbound interface APIs other
than those defined by the OpenROADM MSA. With this feature, TransportPCE provides part of the
Transport-API specified by the Open Networking Foundation. More specifically, part of the Topology
Service component is implemented, allowing TransportPCE to expose to higher-level applications an
abstraction of its OpenROADM topologies in the form of topologies respecting the T-API modelling.
The current version of TransportPCE implements the *tapi-topology.yang* model in revision
2018-12-10 (T-API v2.1.2).


-  RPC call

   -  get-topology-details

As in IETF or OpenROADM topologies, T-API topologies are composed of lists of nodes and links that
abstract a set of network resources. T-API specifies the *T0 - Multi-layer topology* which is, as
indicated by its name, a single topology that collapses the logical abstraction of all network
layers. Thus, an OpenROADM device such as an OTN xponder that manages the ETH, ODU, OTU and
optical wavelength network layers will be represented in the T-API T0 topology by two nodes:
one *DSR/ODU* node and one *Photonic Media* node. These two nodes are linked together through one
or several *transitional links*, depending on the number of network/line ports on the device.

Aluminium SR2 comes with a complete refactoring of this module, which now handles in the same way
the multi-layer abstraction of any Xponder terminal device, whether it is a 100G transponder, an
OTN muxponder or an OTN switch. For all these devices, the implementation ensures that only
relevant ports appear in the resulting TAPI topology abstraction. In other words, only
client/network ports that are indirectly/directly connected to the ROADM infrastructure are
considered for the abstraction. Moreover, the whole ROADM infrastructure of the network is
abstracted as a single photonic node. Therefore, a pair of unidirectional
xponder-output/xponder-input links present in *openroadm-topology* is represented by a
bidirectional *OMS* link in the TAPI topology. In the same way, a pair of unidirectional OTN links
(OTU4, ODU4) present in *otn-topology* is also represented by a bidirectional OTN link in the TAPI
topology, while retaining their available bandwidth characteristics.

Two kinds of topologies are currently implemented. The first one is the *"T0 - Multi-layer
topology"* defined in the reference implementation of T-API. This topology gives an abstraction of
the data coming from *openroadm-topology* and *otn-topology*. Such a topology may be rather
complex, since most devices are represented through several nodes and links.
Another topology, named *"Transponder 100GE"*, is also implemented. The latter provides a much
simpler, higher level of abstraction for the specific case of 100GE transponders, in the form of a
single DSR node.

The figure below shows an example of TAPI abstractions as performed by TransportPCE starting from Aluminium SR2.

.. figure:: ./images/TransportPCE-tapi-abstraction.jpg
   :alt: Example of T0-multi-layer TAPI abstraction in TransportPCE

In this specific case, as far as the "A" side is concerned, TransportPCE is connected to two
xponder terminal devices at the NETCONF level:

- XPDR-A1 is a 100GE transponder and is represented by the XPDR-A1-XPDR1 node in *otn-topology*
- SPDR-SA1 is an OTN xponder that actually contains in its device configuration datastore two OTN
  xponder nodes (the OTN muxponder 10GE=>100G SPDR-SA1-XPDR1 and the OTN switch 4x100GE => 4x100G
  SPDR-SA1-XPDR2)

As represented in the bottom part of the figure, only one network port of XPDR-A1-XPDR1 is
connected to the ROADM infrastructure, and only one network port of the OTN muxponder is attached
to the ROADM infrastructure.
Such a network configuration results in the TAPI *T0 - Multi-layer topology* abstraction
represented in the center of the figure. Note that the OTN switch (SPDR-SA1-XPDR2), not being
attached to the ROADM infrastructure, is not abstracted.
Moreover, the 100GE transponders being connected, the TAPI *Transponder 100GE* topology results in
a single-layer DSR node with only the two Owned Node Edge Points representing the two 100GE client
ports of XPDR-A1-XPDR1 and XPDR-C1-XPDR1 respectively.


**REST API** : *POST /restconf/operations/tapi-topology:get-topology-details*

This request builds the TAPI *T0 - Multi-layer topology* abstraction with regard to the current
state of the *openroadm-topology* and *otn-topology* topologies stored in the OpenDaylight
datastores.

**Sample JSON Data**

.. code:: json

    {
      "tapi-topology:input": {
        "tapi-topology:topology-id-or-name": "T0 - Multi-layer topology"
      }
    }

This request builds the TAPI *Transponder 100GE* abstraction with regard to the current state of
the *openroadm-topology* and *otn-topology* topologies stored in the OpenDaylight datastores.
Its main interest is to simply and directly retrieve the 100GE client ports of 100G transponders
that may be connected together through a point-to-point 100GE service running over a wavelength.

.. code:: json

    {
      "tapi-topology:input": {
        "tapi-topology:topology-id-or-name": "Transponder 100GE"
      }
    }


.. note::

    As for the *T0 - Multi-layer* topology, only the 100GE client ports whose associated 100G line
    port is connected to Add/Drop nodes of the ROADM infrastructure are retrieved, in order to
    abstract only relevant information.
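
Either topology can also be retrieved programmatically. The sketch below assumes the controller on
localhost:8181 with default *admin/admin* credentials; the response layout used for parsing
("output" -> "topology" -> "node") follows the tapi-topology model but may need adjustment
depending on the release.

.. code:: python

    # Minimal sketch: retrieve a TAPI topology by name and list its nodes.
    # Assumptions: controller on localhost:8181, default admin/admin credentials;
    # the response layout ("output" -> "topology" -> "node") follows the
    # tapi-topology model but is an assumption to verify on your release.
    import json

    import requests

    URL = ("http://localhost:8181/restconf/operations/"
           "tapi-topology:get-topology-details")


    def get_topology(name):
        """Invoke get-topology-details and return the decoded JSON response."""
        payload = {"tapi-topology:input": {"tapi-topology:topology-id-or-name": name}}
        response = requests.post(
            URL,
            data=json.dumps(payload),
            headers={"Content-Type": "application/json"},
            auth=("admin", "admin"),
            timeout=30,
        )
        response.raise_for_status()
        return response.json()


    topology = get_topology("T0 - Multi-layer topology")["output"]["topology"]
    for node in topology.get("node", []):
        # In T-API, each node carries a list of name/value pairs under "name".
        print(node.get("uuid"), [n.get("value") for n in node.get("name", [])])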

Help
----

-  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__