.. _transportpce-dev-guide:

TransportPCE Developer Guide
============================

Overview
--------

TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with controllers of different layers (L2, L3 controller…), a
higher layer controller and/or an orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment, and to provision services according to a
request coming from a higher layer controller and/or an orchestrator.
This capability may rely on the controller only, or it may be delegated to
distributed (standardized) protocols.


Architecture
------------

TransportPCE's modular architecture is described in the diagram below. Each main
function, such as Topology Management, Path Calculation Engine (PCE), Service
Handler, Renderer (responsible for the path configuration through optical
equipment) and Optical Line Management (OLM), is associated with a generic block
relying on open models, each of them communicating through published APIs.


.. figure:: ./images/TransportPCE-Diagramm-Magnesium.jpg
   :alt: TransportPCE architecture

   TransportPCE architecture

The Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
and transponders.

The benefit of using a controller to provision services automatically strongly
depends on its ability to handle end-to-end optical services that span across
different network domains, potentially equipped with hardware coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.

The initial design of TransportPCE leverages the OpenROADM Multi-Source-Agreement
(MSA), which defines interoperability specifications consisting of both optical
interoperability and YANG data models.

Experimental support of the OTN layer is introduced in the Magnesium release of
OpenDaylight. By experimental, we mean that not all features can be accessed through
the northbound API based on the RESTCONF-encoded OpenROADM Service model. In the meantime,
"east/west" APIs shall be used to trigger a path computation in the PCE (using the
path-computation-request RPC) and to create services (using the otn-service-path RPC).
With Magnesium SR2, TransportPCE starts to manage some end-to-end OTN services, for example
OCH-OTU4, structured ODU4 or 10GE-ODU2e services.
OTN support will continue to be improved in the following releases.


Module description
~~~~~~~~~~~~~~~~~~

ServiceHandler
^^^^^^^^^^^^^^

The Service Handler handles requests coming from a higher level controller or an orchestrator
through the northbound API, as defined in the OpenROADM service model. The current
implementation addresses the following RPCs: service-create, temp-service-create,
service-delete, temp-service-delete, service-reroute, and service-restoration. It checks the
request consistency and triggers path calculation by sending RPCs to the PCE. If a valid path is
returned by the PCE, path configuration is initiated relying on the Renderer and the OLM. At the
confirmation of a successful service creation, the Service Handler updates the
service-list/temp-service-list in the MD-SAL. For service deletion, the Service Handler relies on
the Renderer and the OLM to delete connections and reset power levels associated with the
service. The service-list is updated following a successful service deletion. Neon SR0 added
support for services from ROADM to ROADM, which brings additional flexibility and
notably allows reserving resources when transponders are not in place at day one.
Magnesium SR2 fully supports end-to-end OTN services which are part of the OTN infrastructure.
This concerns the management of OCH-OTU4 (also part of the optical infrastructure) and structured
HO-ODU4 services. Moreover, once these two kinds of OTN infrastructure services are created, it is
possible to manage some LO-ODU services (for the time being, only 10GE-ODU2e services).
The full support of OTN services, including 1GE-ODU0 or 100GE, will be introduced in subsequent
releases.

PCE
^^^

The Path Computation Element (PCE) is the component responsible for path
calculation. An interface allows the Service Handler or external components, such as an
orchestrator, to request a path computation and get a response from the PCE
including the computed path(s) in case of success, or errors and an indication of
the reason for the failure in case the request cannot be satisfied. Additional
parameters can be provided by the PCE in addition to the computed paths if
requested by the client module. An interface to the Topology Management module
allows keeping the PCE aligned with the latest changes in the topology. Information
about current and planned services is available in the MD-SAL data store.

The current implementation of the PCE allows finding the shortest path, minimizing either the hop
count (default) or the propagation delay. Wavelengths are assigned considering a fixed grid of
96 wavelengths. In Neon SR0, the PCE calculates the OSNR based on the incremental
noise specifications provided in the OpenROADM MSA. The support of unidirectional ports is
also added. The PCE handles the following constraints as hard constraints (an illustrative
request using them is sketched after this list):

-   **Node exclusion**
-   **SRLG exclusion**
-   **Maximum latency**

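As an illustration, the sketch below shows how such hard constraints might be
attached to a path-computation-request. This is a hedged example: the container
and leaf names (``hard-constraints``, ``exclude``, ``node-id``, ``srlg``,
``latency``, ``max-latency``) are assumptions based on the OpenROADM
routing-constraints grouping and may differ between model versions; all values
are placeholders.

.. code:: json

    {
        "input": {
            "service-name": "something",
            "service-handler-header": {
                "request-id": "request1"
            },
            "hard-constraints": {
                "exclude": {
                    "node-id": ["<node-id-to-exclude>"],
                    "srlg": ["<srlg-id-to-exclude>"]
                },
                "latency": {
                    "max-latency": "30"
                }
            }
        }
    }
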
In Magnesium SR0, the interconnection of the PCE with GNPY (Gaussian Noise Python), an
open-source library developed in the scope of the Telecom Infra Project for building route
planning and optimizing performance in optical mesh networks, is fully supported.

If the OSNR calculated by the PCE is too close to the limit defined in the OpenROADM
specifications, the PCE forwards, through a REST interface, the topology and the pre-computed
path translated into routing constraints to the external GNPy tool. GNPy calculates a set of
Quality of Transmission metrics for this path using its own library, which includes models for
OpenROADM. The result is sent back to the PCE. If the path is validated, the PCE sends back a
response to the Service Handler. If the path is invalidated by GNPy, the PCE sends a new request
to GNPy, including only the constraints expressed in the path-computation-request initiated by the
Service Handler. GNPy then tries to calculate a path based on these relaxed constraints. The result
of the path computation is provided to the PCE, which translates the path according to the topology
handled in transportPCE and forwards the results to the Service Handler.

GNPy relies on SNR and takes into account the linear and non-linear impairments
to check feasibility. In the related tests, the GNPy module runs externally in a
Docker container and the communication with T-PCE is ensured via HTTPS.

Topology Management
^^^^^^^^^^^^^^^^^^^

The Topology Management module builds the topology according to the Network model
defined in OpenROADM. The topology is aligned with the IETF I2RS RFC8345 model.
It includes several network layers:

-  **CLLI layer corresponds to the locations that host equipment**
-  **Network layer corresponds to a first level of disaggregation where we
   separate Xponders (transponders, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where ROADM
   Add/Drop modules ("SRGs") are separated from the degrees, which include line
   amplifiers and WSSs that switch wavelengths from one degree to another**
-  **OTN layer introduced in Magnesium includes transponders as well as switch-ponders and
   mux-ponders having the ability to switch OTN containers from client to line cards. The SR0
   release includes the creation of the switching pool (used to model cross-connect matrices),
   tributary-ports and tributary-slots at the initial connection of NETCONF devices.
   The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
   pool occupancy when OTN services are created, is supported since Magnesium SR2.**


Renderer
^^^^^^^^

The Renderer module, on request coming from the Service Handler through a
service-implementation-request/service-delete RPC, sets up or deletes the path corresponding to a
specific service between A and Z ends. The path description provided by the Service Handler to the
Renderer is based on abstracted resources (nodes, links and termination-points), as provided
by the PCE module. The Renderer converts this path description into a path topology based on
device resources (circuit-packs, ports…).

The conversion from abstracted resources to device resources is performed relying on the
portmapping module, which maintains the mapping between these different resource types.
The portmapping module also allows keeping the topology independent from the device releases.
In Neon (SR0), the portmapping module was enriched to support both OpenROADM 1.2.1 and 2.2.1
device models. Full support of the OpenROADM 2.2.1 device models (both in the topology management
and the rendering function) was added in Neon SR1. In Magnesium, portmapping is enriched with
the supported-interface-capability, OTN supporting-interfaces, and switching-pools (reflecting
cross-connection capabilities of OTN switch-ponders).

After the path is provided, the Renderer first checks what interfaces already exist on the
ports of the different nodes that the path crosses. It then creates the missing interfaces. After
all needed interfaces have been created, it sets up the connections required in the nodes and
notifies the Service Handler of the status of the path creation. The path is created in two steps
(from A to Z and from Z to A). In case the path between A and Z could not be fully created, a
rollback function is called to set the equipment on the path back to its initial configuration
(as it was before invoking the Renderer).

Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
and ODU0 interfaces. The creation of these low-order OTN interfaces must be triggered through the
otn-service-path RPC. Magnesium SR2 fully supports end-to-end OTN service implementation into
devices (service-implementation-request/service-delete RPC, topology alignment after the service
has been created).


OLM
^^^

The Optical Line Management module implements two main features: it is responsible
for setting up the optical power levels on the different interfaces, and is in
charge of adjusting these settings across the life of the optical
infrastructure.

After the different connections have been established in the ROADMs, between two
degrees for an express path, or between an SRG and a degree for an add or drop
path (meaning the devices have set the WSS and all other required elements to
provide path continuity), power settings are provided as attributes of these
connections. This allows the device to set all complementary elements, such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) in the fiber span. This also applies
to X-Ponders, as their output power must comply with the specifications defined
for the Add/Drop ports (SRG) of the ROADM. The OLM has the responsibility of
calculating the right power settings, sending them to the device, and checking the
PM retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.


Inventory
^^^^^^^^^

The TransportPCE Inventory module is responsible for keeping track of connected devices in an
external MariaDB database. Other databases may be used as long as they comply with SQL and are
compatible with OpenDaylight (for example MySQL).
At present, the module supports extracting and persisting the inventory of devices compliant with
OpenROADM MSA version 1.2.1. Changes to the Inventory module to support newer device models
(2.2.1, etc.) and other models (network, service, etc.) will be progressively included.

The Inventory module can be activated by the associated karaf feature (odl-transportpce-inventory).
The database properties are supplied in the "opendaylight-release" and "opendaylight-snapshots"
profiles. Below is the settings.xml with the properties included in the distribution.
The module can be rebuilt from sources with different parameters.

Sample entry in settings.xml to declare an external inventory database:
::

    <profiles>
      <profile>
        <id>opendaylight-release</id>
        [..]
        <properties>
          <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
          <transportpce.db.database><<databasename>></transportpce.db.database>
          <transportpce.db.username><<username>></transportpce.db.username>
          <transportpce.db.password><<password>></transportpce.db.password>
          <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
        </properties>
      </profile>
      [..]
      <profile>
        <id>opendaylight-snapshots</id>
        [..]
        <properties>
          <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
          <transportpce.db.database><<databasename>></transportpce.db.database>
          <transportpce.db.username><<username>></transportpce.db.username>
          <transportpce.db.password><<password>></transportpce.db.password>
          <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
        </properties>
      </profile>
    </profiles>


Once the project is built and karaf is started, the cfg file is generated in the etc folder with
the corresponding properties supplied in settings.xml. When devices with the OpenROADM 1.2.1 device
model are mounted, the device listener in the Inventory module loads several device attributes into
various tables of the supplied database.
The database structure details can be retrieved from the file tests/inventory/initdb.sql inside the
project sources. Installation scripts and a Docker file are also provided.

Key APIs and Interfaces
-----------------------

External API
~~~~~~~~~~~~

The North API, interconnecting the Service Handler to higher level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring OpenROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA's device model.
ServiceHandler Service
^^^^^^^^^^^^^^^^^^^^^^

-  RPC call

   -  service-create (given service-name, service-aend, service-zend)

   -  service-delete (given service-name)

   -  service-reroute (given service-name, service-aend, service-zend)

   -  service-restoration (given service-name, service-aend, service-zend)

   -  temp-service-create (given common-id, service-aend, service-zend)

   -  temp-service-delete (given common-id)

-  Data structure

   -  service list : made of services
   -  temp-service list : made of temporary services
   -  service : composed of service-name and topology, which describes the detailed path (list of used resources)

-  Notification

   - service-rpc-result : result of service RPC
   - service-notification : service has been added, modified or removed
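
For illustration, a service entry stored in the service-list may look like the
hedged sketch below; the attribute set comes from the OpenROADM service model,
only a few representative leaves are shown, and all values are placeholders.

.. code:: json

    {
        "service": [
            {
                "service-name": "test1",
                "common-id": "commonId",
                "connection-type": "service",
                "administrative-state": "inService",
                "operational-state": "inService"
            }
        ]
    }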

Netconf Service
^^^^^^^^^^^^^^^

-  RPC call

   -  connect-device : PUT
   -  disconnect-device : DELETE
   -  check-connected-device : GET

-  Data Structure

   -  node list : composed of netconf nodes in topology-netconf

Internal APIs
~~~~~~~~~~~~~

Internal APIs define REST APIs to interconnect TransportPCE modules:

-   Service Handler to PCE
-   PCE to Topology Management
-   Service Handler to Renderer
-   Renderer to OLM

Pce Service
^^^^^^^^^^^

-  RPC call

   -  path-computation-request (given service-name, service-aend, service-zend)

   -  cancel-resource-reserve (given service-name)

-  Notification

   - service-path-rpc-result : result of service RPC

Renderer Service
^^^^^^^^^^^^^^^^

-  RPC call

   -  service-implementation-request (given service-name, service-aend, service-zend)

   -  service-delete (given service-name)

-  Data structure

   -  service path list : composed of service paths
   -  service path : composed of service-name and a path description giving the list of abstracted elements (nodes, tps, links)

-  Notification

   - service-path-rpc-result : result of service RPC

Device Renderer
^^^^^^^^^^^^^^^

-  RPC call

   -  service-path : used in SR0 as an intermediate solution to address the renderer directly
      from a REST NBI to create OCH-OTU4-ODU4 interfaces on the network port of OTN devices.

   -  otn-service-path : used in SR0 as an intermediate solution to address the renderer directly
      from a REST NBI for OTN service creation. OTN service creation through the
      service-implementation-request call from the Service Handler will be supported in later
      Magnesium releases.

Topology Management Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  Data structure

   -  network list : composed of networks (openroadm-topology, netconf-topology)
   -  node list : composed of nodes identified by their node-id
   -  link list : composed of links identified by their link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)

OLM Service
^^^^^^^^^^^

-  RPC call

   -  get-pm (given node-id)

   -  service-power-setup

   -  service-power-turndown

   -  service-power-reset

   -  calculate-spanloss-base

   -  calculate-spanloss-current

odl-transportpce-stubmodels
^^^^^^^^^^^^^^^^^^^^^^^^^^^

   -  This feature provides functions to stub some of the TransportPCE modules, PCE and
      Renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.

Interfaces to external software
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section defines the interfaces implemented to interconnect TransportPCE modules with other
software in order to perform specific tasks.

GNPy interface
^^^^^^^^^^^^^^

-  Request structure

   -  topology : composed of a list of elements and connections
   -  service : source, destination, explicit-route-objects, path-constraints

-  Response structure

   -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth
   -  path-properties/path-route-objects : composed of path elements

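As an illustration, the metrics part of a GNPy response could resemble the
hedged sketch below; the exact JSON layout follows the path-computation YANG
model used by GNPy and may vary between versions, and all values here are
placeholders.

.. code:: json

    {
        "path-properties": {
            "path-metric": [
                {"metric-type": "SNR-bandwidth", "accumulative-value": "22.09"},
                {"metric-type": "SNR-0.1nm", "accumulative-value": "25.19"},
                {"metric-type": "OSNR-bandwidth", "accumulative-value": "23.63"},
                {"metric-type": "OSNR-0.1nm", "accumulative-value": "26.73"}
            ]
        }
    }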

Running transportPCE project
----------------------------

To use the transportPCE controller, the first step is to connect the controller to optical nodes
through the NETCONF connector.

.. note::

    In the current version, only optical equipment compliant with OpenROADM datamodels is managed
    by transportPCE.


Connecting nodes
~~~~~~~~~~~~~~~~

To connect a node, use the following RESTCONF request:

**REST API** : *POST /restconf/config/network-topology:network-topology/topology/topology-netconf/node/<node-id>*

**Sample JSON Data**

.. code:: json

    {
        "node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:tcp-only": "false",
                "netconf-node-topology:reconnect-on-changed-schema": "false",
                "netconf-node-topology:host": "<node-ip-address>",
                "netconf-node-topology:default-request-timeout-millis": "120000",
                "netconf-node-topology:max-connection-attempts": "0",
                "netconf-node-topology:sleep-factor": "1.5",
                "netconf-node-topology:actor-response-wait-time": "5",
                "netconf-node-topology:concurrent-rpc-limit": "0",
                "netconf-node-topology:between-attempts-timeout-millis": "2000",
                "netconf-node-topology:port": "<netconf-port>",
                "netconf-node-topology:connection-timeout-millis": "20000",
                "netconf-node-topology:username": "<node-username>",
                "netconf-node-topology:password": "<node-password>",
                "netconf-node-topology:keepalive-delay": "300"
            }
        ]
    }


Then check that the NETCONF session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.

**REST API** : *GET /restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<node-id>*

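Assuming the node is up, the answer should contain the connection status; a
minimal sketch (all other attributes omitted) is:

.. code:: json

    {
        "node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:connection-status": "connected"
            }
        ]
    }
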

Node configuration discovery
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the controller is connected to the node, the transportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical port related to transmission. All *circuit-packs* inside the node configuration are
analyzed.

Use the following request to check the result of that discovery function, internally named
*portMapping*.

**REST API** : *GET /restconf/config/portmapping:network*

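For illustration, an extract of a possible answer is sketched below; the
attribute names follow the TransportPCE portmapping model, but the exact
content depends on the device configuration and all values are placeholders.

.. code:: json

    {
        "nodes": [
            {
                "node-id": "<node-id>",
                "mapping": [
                    {
                        "logical-connection-point": "DEG1-TTP-TXRX",
                        "supporting-circuit-pack-name": "<circuit-pack-name>",
                        "supporting-port": "<port-name>"
                    }
                ]
            }
        ]
    }
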
.. note::

    In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
        * rdm: ROADM device (optical switch)
        * xpdr: Xponder device (device that converts client to optical channel interface)
        * ila: in-line amplifier (optical amplifier)
        * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)

    TransportPCE currently supports rdm and xpdr devices.

Depending on the kind of OpenROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:

-  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree of an rdm equipment
-  SRG<srg-number>-PP<port-number>: created on the client port of an srg of an rdm equipment
-  XPDR<number>-CLIENT<port-number>: created on the client port of an xpdr equipment
-  XPDR<number>-NETWORK<port-number>: created on the line port of an xpdr equipment

    For further details on openROADM device models, see `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.

Optical Network topology
~~~~~~~~~~~~~~~~~~~~~~~~

Before creating an optical connectivity service, your topology must contain at least two xpdr
devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
created by transportPCE. Nevertheless, depending on the configuration inside the optical nodes,
this topology can be partial. Check that a link of type *ROADMtoROADM* exists between two adjacent
rdm nodes.

**REST API** : *GET /restconf/config/ietf-network:network/openroadm-topology*

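For illustration, such a link should appear in the answer roughly as sketched
below; the attribute names follow the OpenROADM network model and are given
here as an assumption, with placeholder values.

.. code:: json

    {
        "ietf-network-topology:link": [
            {
                "link-id": "<roadm-to-roadm-link-id>",
                "org-openroadm-common-network:link-type": "ROADM-TO-ROADM"
            }
        ]
    }
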
If this is not the case, you need to manually complement the topology with *ROADMtoROADM* links
using the following REST RPC:


**REST API** : *POST /restconf/operations/networkutils:init-roadm-nodes*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:rdm-a-node": "<node-id-A>",
        "networkutils:deg-a-num": "<degree-A-number>",
        "networkutils:termination-point-a": "<Logical-Connection-Point>",
        "networkutils:rdm-z-node": "<node-id-Z>",
        "networkutils:deg-z-num": "<degree-Z-number>",
        "networkutils:termination-point-z": "<Logical-Connection-Point>"
      }
    }

*<Logical-Connection-Point> comes from the portMapping function*.

Unidirectional links between xpdr and rdm nodes must be created manually. To that end, use the two
following REST RPCs:

From xpdr to rdm:
^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/networkutils:init-xpdr-rdm-links*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:links-input": {
          "networkutils:xpdr-node": "<xpdr-node-id>",
          "networkutils:xpdr-num": "1",
          "networkutils:network-num": "<xpdr-network-port-number>",
          "networkutils:rdm-node": "<rdm-node-id>",
          "networkutils:srg-num": "<srg-number>",
          "networkutils:termination-point-num": "<Logical-Connection-Point>"
        }
      }
    }

From rdm to xpdr:
^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/networkutils:init-rdm-xpdr-links*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:links-input": {
          "networkutils:xpdr-node": "<xpdr-node-id>",
          "networkutils:xpdr-num": "1",
          "networkutils:network-num": "<xpdr-network-port-number>",
          "networkutils:rdm-node": "<rdm-node-id>",
          "networkutils:srg-num": "<srg-number>",
          "networkutils:termination-point-num": "<Logical-Connection-Point>"
        }
      }
    }

OTN topology
~~~~~~~~~~~~

Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
or SWITCH type connected to two different rdm devices. To check that these xpdr are present in the
OTN topology, use the following request on the REST API:

**REST API** : *GET /restconf/config/ietf-network:network/otn-topology*

An optical connectivity service shall have been created in a first step. Since Magnesium SR2, the
OTN links are automatically populated in the topology after the OCH, OTU4 and ODU4 interfaces have
been created on the two network ports of the xpdr.
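
For illustration, an OTU4 link of the otn-topology could carry bandwidth
attributes as in the hedged sketch below; the attribute names are assumptions
based on the OpenROADM OTN network model, and all values are placeholders.

.. code:: json

    {
        "ietf-network-topology:link": [
            {
                "link-id": "OTU4-<xpdr-A-network-port>to<xpdr-Z-network-port>",
                "org-openroadm-common-network:link-type": "OTN-LINK",
                "org-openroadm-otn-network-topology:available-bandwidth": "100000",
                "org-openroadm-otn-network-topology:used-bandwidth": "0"
            }
        ]
    }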

Creating a service
~~~~~~~~~~~~~~~~~~

Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
network. Two kinds of end-to-end "optical" services are managed by TransportPCE:

- 100GE service from client port to client port of two transponders (TPDR)
- Optical Channel (OC) service from client add/drop port (PP port of SRG) to client add/drop port
  of two ROADMs.

For these services, TransportPCE automatically invokes the *renderer* module to create all required
interfaces and cross-connections on each device supporting the service.
As an example, the creation of a 100GE service implies, among other things, the creation of OCH,
OTU4 and ODU4 interfaces on the network port of TPDR devices.

Since Magnesium SR2, the *service handler* module directly manages some end-to-end OTN
connectivity services.
Before creating a low-order OTN service (1GE or 10GE services terminating on the client port of a
MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container exists and has previously
been configured (i.e., structured) to support low-order OTN containers.
Thus, OTN service creation implies three steps:

1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)

The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.


100GE service creation
^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create a
bidirectional end-to-end optical connectivity service between two xpdr over an optical network
composed of rdm nodes.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "test1",
            "common-id": "commonId",
            "connection-type": "service",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

The most important parameters for this REST RPC are the identification of the two physical client
ports on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* to implement the end-to-end path
into the devices.
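
If the request is accepted, the synchronous answer carries a
configuration-response-common block defined in the OpenROADM service model; a
hedged sketch of a successful answer (all values are placeholders) is:

.. code:: json

    {
        "output": {
            "configuration-response-common": {
                "request-id": "request-1",
                "response-code": "200",
                "response-message": "<explanatory-message>",
                "ack-final-indicator": "Yes"
            }
        }
    }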


OC service creation
^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create a
bidirectional end-to-end Optical Channel (OC) connectivity service between two add/drop ports
(PP port of SRG node) over an optical network composed only of rdm nodes.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "roadm-line",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OC",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OC",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* to implement the end-to-end path
into the devices.

OTN OCH-OTU4 service creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create, over the
optical infrastructure, a bidirectional end-to-end OTU4 over an optical wavelength connectivity
service between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service
configures the optical network infrastructure composed of rdm nodes.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "infrastructure",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OTU",
                "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "OTU",
                "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*openroadm-topology* and then invokes the *renderer* and *OLM* to implement the end-to-end path
into the devices.

OTN HO-ODU4 service creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create, over the
optical infrastructure, a bidirectional end-to-end ODU4 OTN service over an OTU4, structured to
support low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between
two network ports of OTN Xponders (MUXPDR or SWITCH).

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "infrastructure",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-network-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology*, which must contain OTU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* to implement the end-to-end path into the devices.

OTN 10GE-ODU2e service creation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to create, over the OTN
infrastructure, a bidirectional end-to-end 10GE-ODU2e OTN service over an ODU4.
Such a service must be created between two client ports of OTN Xponders (MUXPDR or SWITCH)
configured to support 10GE interfaces.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "something",
            "common-id": "commonId",
            "connection-type": "service",
            "service-a-end": {
                "service-rate": "10",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "subrate-eth-sla": {
                    "subrate-eth-sla": {
                        "committed-info-rate": "10000",
                        "committed-burst-size": "64"
                    }
                },
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "10",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "subrate-eth-sla": {
                    "subrate-eth-sla": {
                        "committed-info-rate": "10000",
                        "committed-burst-size": "64"
                    }
                },
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-node-id-in-otn-topology>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-in-otn-topology>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology*, which must contain ODU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* to implement the end-to-end path into the devices.


.. note::
    Since Magnesium SR2, the service-list corresponding to OCH-OTU4, ODU4 or 10GE-ODU2e services is
    updated in the service-list datastore.

.. note::
    trib-slot is used when the equipment supports contiguous trib-slot allocation (supported since
    Magnesium SR0). The trib-slot provided corresponds to the first of the used trib-slots.
    complex-trib-slots will be used when the equipment does not support contiguous trib-slot
    allocation. In this case a list of the different trib-slots to be used shall be provided.
    The support for non-contiguous trib-slot allocation is planned for a later Magnesium release.

Deleting a service
~~~~~~~~~~~~~~~~~~

Deleting any kind of service
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following REST RPC to invoke the *service handler* module in order to delete a given
optical connectivity service.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete",
                "request-system-id": "appname",
                "notification-url": "http://localhost:8585/NotificationServer/notify"
            },
            "service-delete-req-info": {
                "service-name": "something",
                "tail-retention": "no"
            }
        }
    }

The most important parameter for this REST RPC is the *service-name*.


.. note::
    Deleting OTN services implies proceeding in the reverse order of their creation. Thus, OTN
    service deletion must respect the three following steps:

    1. first delete all 10GE services supported over any ODU4 to be deleted
    2. delete the ODU4
    3. delete the OCH-OTU4 supporting the just deleted ODU4

Invoking PCE module
~~~~~~~~~~~~~~~~~~~

Use the following REST RPCs to invoke the *PCE* module in order to check connectivity between
xponder nodes and the availability of a supporting optical connectivity between the network-ports
of the nodes.

Checking OTU4 service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*

**Sample JSON Data**

.. code:: json

   {
      "input": {
          "service-name": "something",
          "resource-reserve": "true",
          "service-handler-header": {
              "request-id": "request1"
          },
          "service-a-end": {
              "service-rate": "100",
              "clli": "<clli-node>",
              "service-format": "OTU",
              "node-id": "<otn-node-id>"
          },
          "service-z-end": {
              "service-rate": "100",
              "clli": "<clli-node>",
              "service-format": "OTU",
              "node-id": "<otn-node-id>"
          },
          "pce-metric": "hop-count"
      }
   }

.. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "openroadm-network"
    topology layer.

Checking ODU4 service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*

**Sample JSON Data**

.. code:: json

   {
      "input": {
          "service-name": "something",
          "resource-reserve": "true",
          "service-handler-header": {
              "request-id": "request1"
          },
          "service-a-end": {
              "service-rate": "100",
              "clli": "<clli-node>",
              "service-format": "ODU",
              "node-id": "<otn-node-id>"
          },
          "service-z-end": {
              "service-rate": "100",
              "clli": "<clli-node>",
              "service-format": "ODU",
              "node-id": "<otn-node-id>"
          },
          "pce-metric": "hop-count"
      }
   }

.. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.

Checking 10GE/ODU2e service connectivity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*

**Sample JSON Data**

.. code:: json

   {
      "input": {
          "service-name": "something",
          "resource-reserve": "true",
          "service-handler-header": {
              "request-id": "request1"
          },
          "service-a-end": {
              "service-rate": "10",
              "clli": "<clli-node>",
              "service-format": "Ethernet",
              "node-id": "<otn-node-id>"
          },
          "service-z-end": {
              "service-rate": "10",
              "clli": "<clli-node>",
              "service-format": "Ethernet",
              "node-id": "<otn-node-id>"
          },
          "pce-metric": "hop-count"
      }
   }

.. note::
    Here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.


Help
----

-  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__