1 .. _transportpce-dev-guide:
2
3 TransportPCE Developer Guide
4 ============================
5
6 Overview
7 --------
8
TransportPCE is an application running on top of the OpenDaylight
10 controller. Its primary function is to control an optical transport
11 infrastructure using a non-proprietary South Bound Interface (SBI). It may be
12 interconnected with Controllers of different layers (L2, L3 Controller…), a
13 higher layer Controller and/or an Orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
15 configure the optical equipment, and to provision services according to a
16 request coming from a higher layer controller and/or an orchestrator.
17 This capability may rely on the controller only or it may be delegated to
18 distributed (standardized) protocols.
19
20
21 Architecture
22 ------------
23
TransportPCE modular architecture is described in the diagram below. Each main
function such as Topology management, Path Calculation Engine (PCE), Service
handler, Renderer (responsible for the path configuration through optical
equipment) and Optical Line Management (OLM) is associated with a generic block
relying on open models, each of them communicating through published APIs.
29
30
31 .. figure:: ./images/TransportPCE-Diagram-Phosphorus.jpg
32    :alt: TransportPCE architecture
33
34    TransportPCE architecture
35
36 Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
37 of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
38 and transponders.
39
The benefit of using a controller to automatically provision services strongly
relies on its ability to handle end-to-end optical services that span across
different network domains, potentially equipped with equipment coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.
45
46 Initial design of TransportPCE leverages OpenROADM Multi-Source-Agreement (MSA)
47 which defines interoperability specifications, consisting of both Optical
48 interoperability and Yang data models.
49
End-to-end OTN services such as OCH-OTU4, structured ODU4 or 10GE-ODU2e
services are supported since Magnesium SR2. OTN support continued to be
improved in the following releases of Magnesium and Aluminium.
53
Flexgrid was introduced in Aluminium. Depending on OpenROADM device models,
optical interfaces can be created according to the initial fixed grid (for
R1.2.1, 96 channels regularly spaced by 50 GHz), or to a flexgrid (for R2.2.1,
use of a specific number of adjacent frequency slots of 6.25 GHz, depending on
the one hand on the ROADM and transponder capabilities, and on the other hand
on the rate of the channel).
60
Leveraging the Flexgrid feature, high-rate services are supported since Silicon.
The first implementation allows rendering 400 GE services. This release also brings
asynchronous service creation and deletion, thanks to northbound notification
modules based on a Kafka implementation, allowing interactions with the DMaaP
bus of ONAP.
66
Phosphorus consolidates end-to-end support for high-rate services (ODUC4, OTUC4),
allowing service creation and deletion from the NBI. The support of path
computation for high-rate services (OTUC4) will be added through the different
Phosphorus releases, relying on GNPy for impairment-aware path computation. An
experimental support of T-API is provided, allowing service-create/delete from a
T-API version 2.1.1 compliant NBI. A T-API network topology, with different levels
of abstraction, and a service context are maintained in the MD-SAL. Service state
is managed by monitoring device port state changes. Associated notifications are
handled through Kafka and DMaaP clients.
76
77
78 Module description
79 ~~~~~~~~~~~~~~~~~~
80
81 ServiceHandler
82 ^^^^^^^^^^^^^^
83
The Service Handler handles requests coming from a higher level controller or an
orchestrator through the northbound API, as defined in the OpenROADM service model.
The current implementation addresses the following RPCs: service-create, temp-service-
create, service-delete, temp-service-delete, service-reroute, and service-restoration.
It checks the request consistency and triggers path calculation by sending RPCs to the
PCE. If a valid path is returned by the PCE, path configuration is initiated relying on
the Renderer and the OLM. At the confirmation of a successful service creation, the Service
Handler updates the service-list/temp-service-list in the MD-SAL. For service deletion,
the Service Handler relies on the Renderer and the OLM to delete connections and reset
power levels associated with the service. The service-list is updated following a
successful service deletion. Neon SR0 added the support for services from ROADM
to ROADM, which brings additional flexibility and notably allows reserving resources
when transponders are not in place at day one. Magnesium SR2 fully supports end-to-end
OTN services which are part of the OTN infrastructure. It concerns the management of
OCH-OTU4 (also part of the optical infrastructure) and structured HO-ODU4 services.
Moreover, once these two kinds of OTN infrastructure services are created, it is possible
to manage some LO-ODU services (1GE-ODU0, 10GE-ODU2e). 100GE services are also
supported over ODU4 in transponders or switchponders using higher rate network
interfaces.
103
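As an illustration, the content of the service-list updated by the Service Handler can be read
back from the operational datastore. The request and the response excerpt below are indicative
sketches: leaf names follow the OpenROADM service model, and additional leaves are returned
depending on the model version in use.

**REST API** : *GET /restconf/operational/org-openroadm-service:service-list*

**Sample JSON response (excerpt)**

.. code:: json

    {
        "service-list": {
            "services": [
                {
                    "service-name": "test1",
                    "administrative-state": "inService",
                    "operational-state": "inService",
                    "connection-type": "service"
                }
            ]
        }
    }
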
104 PCE
105 ^^^
106
107 The Path Computation Element (PCE) is the component responsible for path
108 calculation. An interface allows the Service Handler or external components such as an
109 orchestrator to request a path computation and get a response from the PCE
110 including the computed path(s) in case of success, or errors and indication of
111 the reason for the failure in case the request cannot be satisfied. Additional
112 parameters can be provided by the PCE in addition to the computed paths if
113 requested by the client module. An interface to the Topology Management module
114 allows keeping PCE aligned with the latest changes in the topology. Information
115 about current and planned services is available in the MD-SAL data store.
116
The current implementation of the PCE allows finding the shortest path, minimizing either the hop
count (default) or the propagation delay. The central wavelength is assigned considering a fixed
grid of 96 wavelengths spaced by 50 GHz. The assignment of wavelengths according to a flexible
grid considering 768 adjacent slots of 6.25 GHz (total spectrum of 4.8 THz), and their
occupation by existing services, is planned for later releases.
In Neon SR0, the PCE calculates the OSNR, on the basis of incremental noise specifications
provided in the OpenROADM MSA. The support of unidirectional ports is also added.
124
125 PCE handles the following constraints as hard constraints:
126
127 -   **Node exclusion**
128 -   **SRLG exclusion**
129 -   **Maximum latency**
130
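As an illustration, such hard constraints are conveyed in the path computation request as an
optional *hard-constraints* container. The excerpt below is an indicative sketch only: the exact
leaf names depend on the routing-constraints model version used on the service and PCE interfaces.

.. code:: json

    {
        "hard-constraints": {
            "exclude": {
                "node-id": ["<node-id-to-exclude>"]
            },
            "latency": {
                "max-latency": "30"
            }
        }
    }
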
In Magnesium SR0, the interconnection of the PCE with GNPy (Gaussian Noise Python), an
open-source library developed in the scope of the Telecom Infra Project for building route
planning and optimizing performance in optical mesh networks, is fully supported. Impairment-
aware path computation for services of higher rates (beyond 100G) is planned across the Phosphorus
releases. It implies making B100G OpenROADM specifications available in the GNPy libraries.
136
If the OSNR calculated by the PCE is too close to the limit defined in the OpenROADM
specifications, the PCE forwards the topology and the pre-computed path, translated into routing
constraints, to the external GNPy tool through a REST interface. GNPy calculates a set of Quality of
Transmission metrics for this path using its own library which includes models for OpenROADM.
The result is sent back to the PCE. If the path is validated, the PCE sends back a response to
the Service Handler. In case of invalidation of the path by GNPy, the PCE sends a new request to
GNPy, including only the constraints expressed in the path-computation-request initiated by the
Service Handler. GNPy then tries to calculate a path based on these relaxed constraints. The
result of the path computation is provided to the PCE, which translates the path according to the
topology handled in transportPCE and forwards the results to the Service Handler.
147
GNPy relies on SNR and takes into account the linear and non-linear impairments
to check feasibility. In the related tests, the GNPy module runs externally in a
Docker container and the communication with TransportPCE is ensured via HTTPs.
151
152 Topology Management
153 ^^^^^^^^^^^^^^^^^^^
154
The Topology Management module builds the topology according to the network model
defined in OpenROADM. The topology is aligned with the IETF I2RS RFC8345 model.
157 It includes several network layers:
158
159 -  **CLLI layer corresponds to the locations that host equipment**
160 -  **Network layer corresponds to a first level of disaggregation where we
161    separate Xponders (transponder, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where ROADM
   Add/Drop modules ("SRGs") are separated from the degrees, which include line
   amplifiers and WSS that switch wavelengths from one degree to another**
165 -  **OTN layer introduced in Magnesium includes transponders as well as switch-ponders and
166    mux-ponders having the ability to switch OTN containers from client to line cards. Mg SR0
167    release includes creation of the switching pool (used to model cross-connect matrices),
168    tributary-ports and tributary-slots at the initial connection of NETCONF devices.
169    The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
170    pool occupancy when OTN services are created is supported since Magnesium SR2.**
171
172
173 Renderer
174 ^^^^^^^^
175
The Renderer module, on request coming from the Service Handler through a
service-implementation-request/service-delete RPC, sets/deletes the path corresponding to a specific
service between A and Z ends. The path description provided by the Service Handler to the
Renderer is based on abstracted resources (nodes, links and termination-points), as provided
by the PCE module. The Renderer converts this path description into a path topology based on
device resources (circuit-packs, ports…).
182
The conversion from abstracted resources to device resources is performed relying on the
portmapping module which maintains the connections between these different resource types.
The portmapping module also allows keeping the topology independent from the device releases.
In Neon (SR0), the portmapping module has been enriched to support both OpenROADM 1.2.1 and 2.2.1
device models. The full support of OpenROADM 2.2.1 device models (both in the topology management
and the rendering function) has been added in Neon SR1. In Magnesium, portmapping is enriched with
the supported-interface-capability, OTN supporting-interfaces, and switching-pools (reflecting
cross-connection capabilities of OTN switch-ponders). The support for 7.1 device models is
introduced in Silicon (no devices of intermediate releases have been proposed and made available
to the market by equipment manufacturers).
193
After the path is provided, the Renderer first checks which interfaces already exist on the
ports of the different nodes that the path crosses. It then creates the missing interfaces. After all
needed interfaces have been created, it sets the connections required in the nodes and
notifies the Service Handler on the status of the path creation. The path is created in 2 steps
(from A to Z and Z to A). In case the path between A and Z could not be fully created, a
rollback function is called to set the equipment on the path back to its initial configuration
(as it was before invoking the Renderer).
201
Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
and ODU0 interfaces. The creation of these OTN interfaces must be triggered through the
otn-service-path RPC. Magnesium SR2 fully supports end-to-end OTN service implementation into devices
(service-implementation-request/service-delete RPC, topology alignment after the service
has been created).
207
In the Silicon releases, the creation of higher rate OTN interfaces (OTUC4) must be triggered
through the otn-service-path RPC. Phosphorus SR0 supports end-to-end OTN service implementation into
devices (service-implementation-request/service-delete RPC, topology alignment after the service
has been created). One shall note that impairment-aware path calculation for higher rates will
be made available across the Phosphorus release train.
213
214 OLM
215 ^^^
216
217 Optical Line Management module implements two main features: it is responsible
218 for setting up the optical power levels on the different interfaces, and is in
219 charge of adjusting these settings across the life of the optical
220 infrastructure.
221
After the different connections have been established in the ROADMs, between 2
Degrees for an express path, or between an SRG and a Degree for an Add or Drop
path, meaning the devices have set the WSS and all other required elements to
provide path continuity, power settings are provided as attributes of these
connections. This allows the device to set all complementary elements such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) in the fiber span. This also applies
to X-Ponders, as their output power must comply with the specifications defined
for the Add/Drop ports (SRG) of the ROADM. OLM has the responsibility of
calculating the right power settings, sending them to the device, and checking the
PM retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.
234
235
236 Inventory
237 ^^^^^^^^^
238
The TransportPCE Inventory module is responsible for keeping track of connected devices in an external
MariaDB database. Other databases may be used as long as they comply with SQL and are compatible
with OpenDaylight (for example MySQL). At present, the module supports extracting and persisting
the inventory of OpenROADM MSA version 1.2.1 devices. Inventory module changes to support newer device
models (2.2.1, etc) and other models (network, service, etc) will be progressively included.

The inventory module can be activated by the associated karaf feature (odl-transportpce-inventory).
The database properties are supplied in the “opendaylight-release” and “opendaylight-snapshots”
profiles. Below is the settings.xml with properties included in the distribution.
The module can be rebuilt from sources with different parameters.
249
250 Sample entry in settings.xml to declare an external inventory database:
251 ::
252
253     <profiles>
254       <profile>
255           <id>opendaylight-release</id>
256     [..]
257          <properties>
258                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
259                  <transportpce.db.database><<databasename>></transportpce.db.database>
260                  <transportpce.db.username><<username>></transportpce.db.username>
261                  <transportpce.db.password><<password>></transportpce.db.password>
262                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
263          </properties>
264     </profile>
265     [..]
266     <profile>
267           <id>opendaylight-snapshots</id>
268     [..]
269          <properties>
270                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
271                  <transportpce.db.database><<databasename>></transportpce.db.database>
272                  <transportpce.db.username><<username>></transportpce.db.username>
273                  <transportpce.db.password><<password>></transportpce.db.password>
274                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
275          </properties>
276         </profile>
277     </profiles>
278
279
Once the project is built and karaf is started, the cfg file is generated in the etc folder with the
corresponding properties supplied in settings.xml. When devices with the OpenROADM 1.2.1 device model
are mounted, the device listener in the inventory module loads several device attributes into various
tables of the supplied database. The database structure details can be retrieved from the file
tests/inventory/initdb.sql inside the project sources. Installation scripts and a docker file are also
provided.
286
287 Key APIs and Interfaces
288 -----------------------
289
290 External API
291 ~~~~~~~~~~~~
292
The North API, interconnecting the Service Handler to higher level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring OpenROADM devices through a southbound
Netconf/Yang interface and rely on the MSA’s device model.
297
298 ServiceHandler Service
299 ^^^^^^^^^^^^^^^^^^^^^^
300
301 -  RPC call
302
303    -  service-create (given service-name, service-aend, service-zend)
304
305    -  service-delete (given service-name)
306
307    -  service-reroute (given service-name, service-aend, service-zend)
308
309    -  service-restoration (given service-name, service-aend, service-zend)
310
311    -  temp-service-create (given common-id, service-aend, service-zend)
312
313    -  temp-service-delete (given common-id)
314
315 -  Data structure
316
317    -  service list : made of services
318    -  temp-service list : made of temporary services
   -  service : composed of service-name, topology which describes the detailed path (list of used resources)
320
321 -  Notification
322
323    - service-rpc-result : result of service RPC
324    - service-notification : service has been added, modified or removed
325
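As an indicative example, a service deletion through this API takes the following form (a minimal
sketch based on the OpenROADM service model; optional leaves are omitted):

**REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete",
                "request-system-id": "appname"
            },
            "service-delete-req-info": {
                "service-name": "test1",
                "tail-retention": "no"
            }
        }
    }
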
326 Netconf Service
327 ^^^^^^^^^^^^^^^
328
329 -  RPC call
330
331    -  connect-device : PUT
332    -  disconnect-device : DELETE
333    -  check-connected-device : GET
334
335 -  Data Structure
336
337    -  node list : composed of netconf nodes in topology-netconf
338
339 Internal APIs
340 ~~~~~~~~~~~~~
341
Internal APIs define REST APIs to interconnect TransportPCE modules:
343
344 -   Service Handler to PCE
345 -   PCE to Topology Management
346 -   Service Handler to Renderer
347 -   Renderer to OLM
348
349 Pce Service
350 ^^^^^^^^^^^
351
352 -  RPC call
353
354    -  path-computation-request (given service-name, service-aend, service-zend)
355
356    -  cancel-resource-reserve (given service-name)
357
358 -  Notification
359
360    - service-path-rpc-result : result of service RPC
361
362 Renderer Service
363 ^^^^^^^^^^^^^^^^
364
365 -  RPC call
366
367    -  service-implementation-request (given service-name, service-aend, service-zend)
368
369    -  service-delete (given service-name)
370
371 -  Data structure
372
373    -  service path list : composed of service paths
374    -  service path : composed of service-name, path description giving the list of abstracted elements (nodes, tps, links)
375
376 -  Notification
377
378    - service-path-rpc-result : result of service RPC
379
380 Device Renderer
381 ^^^^^^^^^^^^^^^
382
383 -  RPC call
384
   -  service-path used in SR0 as an intermediate solution to directly address the renderer
      from a REST NBI to create OCH-OTU4-ODU4 interfaces on the network port of OTN devices.

   -  otn-service-path used in SR0 as an intermediate solution to directly address the renderer
      from a REST NBI for otn-service creation. OTN service creation through the
      service-implementation-request call from the Service Handler will be supported in later
      Magnesium releases.
392
393 Topology Management Service
394 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
395
396 -  Data structure
397
   -  network list : composed of networks (openroadm-topology, netconf-topology)
   -  node list : composed of nodes identified by their node-id
   -  link list : composed of links identified by their link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)
403
404 OLM Service
405 ^^^^^^^^^^^
406
407 -  RPC call
408
409    -  get-pm (given node-id)
410
411    -  service-power-setup
412
413    -  service-power-turndown
414
415    -  service-power-reset
416
417    -  calculate-spanloss-base
418
419    -  calculate-spanloss-current
420
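As an illustration, a get-pm request on this internal API may look as follows. This is a sketch
based on the transportpce-olm model; the resource type, granularity and identifiers depend on the
device and on the PM being retrieved.

**REST API** : *POST /restconf/operations/transportpce-olm:get-pm*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "node-id": "<node-id>",
            "resource-type": "interface",
            "granularity": "15min",
            "resource-identifier": {
                "resource-name": "<interface-name>"
            }
        }
    }
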
421 odl-transportpce-stubmodels
422 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
423
   -  This feature provides the ability to stub some of the TransportPCE modules, PCE and
      renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.
427
428 Interfaces to external software
429 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
430
This section describes the interfaces implemented to interconnect TransportPCE modules with other
software in order to perform specific tasks.
433
434 GNPy interface
435 ^^^^^^^^^^^^^^
436
437 -  Request structure
438
439    -  topology : composed of list of elements and connections
440    -  service : source, destination, explicit-route-objects, path-constraints
441
442 -  Response structure
443
   -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth
445    -  path-properties/path-route-objects : composed of path elements
446
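For illustration, the path-metric part of a GNPy response typically looks as follows (an indicative
excerpt; the values and the list of metrics depend on the GNPy version and on the request):

.. code:: json

    {
        "path-properties": {
            "path-metric": [
                { "metric-type": "SNR-bandwidth", "accumulative-value": 22.05 },
                { "metric-type": "SNR-0.1nm", "accumulative-value": 25.46 },
                { "metric-type": "OSNR-bandwidth", "accumulative-value": 23.62 },
                { "metric-type": "OSNR-0.1nm", "accumulative-value": 27.04 }
            ]
        }
    }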
447
448 Running transportPCE project
449 ----------------------------
450
To use the transportPCE controller, the first step is to connect the controller to optical nodes
452 through the NETCONF connector.
453
454 .. note::
455
    In the current version, only optical equipment compliant with OpenROADM datamodels is managed
    by transportPCE.
458
459
460 Connecting nodes
461 ~~~~~~~~~~~~~~~~
462
463 To connect a node, use the following JSON RPC
464
465 **REST API** : *POST /restconf/config/network-topology:network-topology/topology/topology-netconf/node/<node-id>*
466
467 **Sample JSON Data**
468
469 .. code:: json
470
471     {
472         "node": [
473             {
474                 "node-id": "<node-id>",
475                 "netconf-node-topology:tcp-only": "false",
476                 "netconf-node-topology:reconnect-on-changed-schema": "false",
477                 "netconf-node-topology:host": "<node-ip-address>",
478                 "netconf-node-topology:default-request-timeout-millis": "120000",
479                 "netconf-node-topology:max-connection-attempts": "0",
480                 "netconf-node-topology:sleep-factor": "1.5",
481                 "netconf-node-topology:actor-response-wait-time": "5",
482                 "netconf-node-topology:concurrent-rpc-limit": "0",
483                 "netconf-node-topology:between-attempts-timeout-millis": "2000",
484                 "netconf-node-topology:port": "<netconf-port>",
485                 "netconf-node-topology:connection-timeout-millis": "20000",
486                 "netconf-node-topology:username": "<node-username>",
487                 "netconf-node-topology:password": "<node-password>",
488                 "netconf-node-topology:keepalive-delay": "300"
489             }
490         ]
491     }
492
493
Then check that the NETCONF session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.
496
497 **REST API** : *GET /restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<node-id>*
498
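An indicative excerpt of the expected response is shown below (other operational leaves such as the
available capabilities are omitted):

.. code:: json

    {
        "node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:connection-status": "connected"
            }
        ]
    }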
499
500 Node configuration discovery
501 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
502
Once the controller is connected to the node, the transportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical ports related to transmission. All *circuit-packs* inside the node configuration are
analyzed.
507
Use the following request to check the result of that function, internally named *portMapping*.
509
510 **REST API** : *GET /restconf/config/portmapping:network*
511
512 .. note::
513
514     In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
515         * rdm: ROADM device (optical switch)
516         * xpdr: Xponder device (device that converts client to optical channel interface)
517         * ila: in line amplifier (optical amplifier)
518         * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)
519
    TransportPCE currently supports rdm and xpdr devices.
521
Depending on the kind of OpenROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:
524
525 -  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on a rdm equipment
526 -  SRG<srg-number>-PP<port-number>: created on the client port of a srg on a rdm equipment
527 -  XPDR<number>-CLIENT<port-number>: created on the client port of a xpdr equipment
528 -  XPDR<number>-NETWORK<port-number>: created on the line port of a xpdr equipment
529
530     For further details on openROADM device models, see `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.
531
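For illustration, a portmapping entry may look as follows. This is an indicative excerpt: attribute
names and the amount of information stored per mapping vary with the portmapping model revision and
the device release.

.. code:: json

    {
        "nodes": [
            {
                "node-id": "<node-id>",
                "mapping": [
                    {
                        "logical-connection-point": "DEG1-TTP-TXRX",
                        "supporting-circuit-pack-name": "<circuit-pack-name>",
                        "supporting-port": "<port-name>",
                        "port-direction": "bidirectional"
                    }
                ]
            }
        ]
    }
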
532 Optical Network topology
533 ~~~~~~~~~~~~~~~~~~~~~~~~
534
Before creating an optical connectivity service, your topology must contain at least two xpdr
devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
created by transportPCE. Nevertheless, depending on the configuration inside the optical nodes, this
topology can be partial. Check that a link of type *ROADMtoROADM* exists between two adjacent rdm
nodes.
540
541 **REST API** : *GET /restconf/config/ietf-network:network/openroadm-topology*
542
If it is not the case, you need to manually complement the topology with *ROADMtoROADM* links using
the following REST RPC:
545
546
547 **REST API** : *POST /restconf/operations/networkutils:init-roadm-nodes*
548
549 **Sample JSON Data**
550
551 .. code:: json
552
553     {
554       "networkutils:input": {
555         "networkutils:rdm-a-node": "<node-id-A>",
556         "networkutils:deg-a-num": "<degree-A-number>",
557         "networkutils:termination-point-a": "<Logical-Connection-Point>",
558         "networkutils:rdm-z-node": "<node-id-Z>",
559         "networkutils:deg-z-num": "<degree-Z-number>",
560         "networkutils:termination-point-z": "<Logical-Connection-Point>"
561       }
562     }
563
564 *<Logical-Connection-Point> comes from the portMapping function*.
565
566 Unidirectional links between xpdr and rdm nodes must be created manually. To that end use the two
567 following REST RPCs:
568
569 From xpdr to rdm:
570 ^^^^^^^^^^^^^^^^^
571
572 **REST API** : *POST /restconf/operations/networkutils:init-xpdr-rdm-links*
573
574 **Sample JSON Data**
575
576 .. code:: json
577
578     {
579       "networkutils:input": {
580         "networkutils:links-input": {
581           "networkutils:xpdr-node": "<xpdr-node-id>",
582           "networkutils:xpdr-num": "1",
583           "networkutils:network-num": "<xpdr-network-port-number>",
584           "networkutils:rdm-node": "<rdm-node-id>",
585           "networkutils:srg-num": "<srg-number>",
586           "networkutils:termination-point-num": "<Logical-Connection-Point>"
587         }
588       }
589     }
590
591 From rdm to xpdr:
592 ^^^^^^^^^^^^^^^^^
593
594 **REST API** : *POST /restconf/operations/networkutils:init-rdm-xpdr-links*
595
596 **Sample JSON Data**
597
598 .. code:: json
599
600     {
601       "networkutils:input": {
602         "networkutils:links-input": {
603           "networkutils:xpdr-node": "<xpdr-node-id>",
604           "networkutils:xpdr-num": "1",
605           "networkutils:network-num": "<xpdr-network-port-number>",
606           "networkutils:rdm-node": "<rdm-node-id>",
607           "networkutils:srg-num": "<srg-number>",
608           "networkutils:termination-point-num": "<Logical-Connection-Point>"
609         }
610       }
611     }
612
613 OTN topology
614 ~~~~~~~~~~~~
615
616 Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
617 or SWITCH type connected to two different rdm devices. To check that these xpdr are present in the
OTN topology, use the following command on the REST API:
619
620 **REST API** : *GET /restconf/config/ietf-network:network/otn-topology*
621
An optical connectivity service shall have been created in a first step. Since Magnesium SR2, the OTN
links are automatically populated in the topology after the Och, OTU4 and ODU4 interfaces have
been created on the two network ports of the xpdr.
625
626 Creating a service
627 ~~~~~~~~~~~~~~~~~~
628
Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
network. Two different kinds of end-to-end "optical" services are managed by TransportPCE:

- 100GE/400GE services from client port to client port of two transponders (TPDR)
- Optical Channel (OC) service from client add/drop port (PP port of SRG) to client add/drop port of
  two ROADMs.
634
For these services, TransportPCE automatically invokes the *renderer* module to create all required
interfaces and cross-connections on each device supporting the service.
As an example, the creation of a 100GE service implies, among other things, the creation of OCH or
Optical Tributary Signal (OTSi), OTU4 and ODU4 interfaces on the Network port of TPDR devices.
The creation of a 400GE service implies the creation of OTSi, OTUC4, ODUC4 and ODU4 interfaces on
the Network port of TPDR devices.
641
Since Magnesium SR2, the *service handler* module directly manages some end-to-end OTN
connectivity services.
Before creating a low-order OTN service (1GE or 10GE services terminating on the client port of a
MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container exists and has previously
been configured (i.e. structured) to support low-order OTN containers.
Thus, OTN service creation implies three steps:

1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
651
652 The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.
653
654
655 100GE service creation
656 ^^^^^^^^^^^^^^^^^^^^^^
657
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
659 end-to-end optical connectivity service between two xpdr over an optical network composed of rdm
660 nodes.
661
662 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
663
664 **Sample JSON Data**
665
666 .. code:: json
667
668     {
669         "input": {
670             "sdnc-request-header": {
671                 "request-id": "request-1",
672                 "rpc-action": "service-create",
673                 "request-system-id": "appname"
674             },
675             "service-name": "test1",
676             "common-id": "commonId",
677             "connection-type": "service",
678             "service-a-end": {
679                 "service-rate": "100",
680                 "node-id": "<xpdr-node-id>",
681                 "service-format": "Ethernet",
682                 "clli": "<ccli-name>",
683                 "tx-direction": {
684                     "port": {
685                         "port-device-name": "<xpdr-client-port>",
686                         "port-type": "fixed",
687                         "port-name": "<xpdr-client-port-number>",
688                         "port-rack": "000000.00",
689                         "port-shelf": "Chassis#1"
690                     },
691                     "lgx": {
692                         "lgx-device-name": "Some lgx-device-name",
693                         "lgx-port-name": "Some lgx-port-name",
694                         "lgx-port-rack": "000000.00",
695                         "lgx-port-shelf": "00"
696                     }
697                 },
698                 "rx-direction": {
699                     "port": {
700                         "port-device-name": "<xpdr-client-port>",
701                         "port-type": "fixed",
702                         "port-name": "<xpdr-client-port-number>",
703                         "port-rack": "000000.00",
704                         "port-shelf": "Chassis#1"
705                     },
706                     "lgx": {
707                         "lgx-device-name": "Some lgx-device-name",
708                         "lgx-port-name": "Some lgx-port-name",
709                         "lgx-port-rack": "000000.00",
710                         "lgx-port-shelf": "00"
711                     }
712                 },
713                 "optic-type": "gray"
714             },
715             "service-z-end": {
716                 "service-rate": "100",
717                 "node-id": "<xpdr-node-id>",
718                 "service-format": "Ethernet",
719                 "clli": "<ccli-name>",
720                 "tx-direction": {
721                     "port": {
722                         "port-device-name": "<xpdr-client-port>",
723                         "port-type": "fixed",
724                         "port-name": "<xpdr-client-port-number>",
725                         "port-rack": "000000.00",
726                         "port-shelf": "Chassis#1"
727                     },
728                     "lgx": {
729                         "lgx-device-name": "Some lgx-device-name",
730                         "lgx-port-name": "Some lgx-port-name",
731                         "lgx-port-rack": "000000.00",
732                         "lgx-port-shelf": "00"
733                     }
734                 },
735                 "rx-direction": {
736                     "port": {
737                         "port-device-name": "<xpdr-client-port>",
738                         "port-type": "fixed",
739                         "port-name": "<xpdr-client-port-number>",
740                         "port-rack": "000000.00",
741                         "port-shelf": "Chassis#1"
742                     },
743                     "lgx": {
744                         "lgx-device-name": "Some lgx-device-name",
745                         "lgx-port-name": "Some lgx-port-name",
746                         "lgx-port-rack": "000000.00",
747                         "lgx-port-shelf": "00"
748                     }
749                 },
750                 "optic-type": "gray"
751             },
752             "due-date": "yyyy-mm-ddT00:00:01Z",
753             "operator-contact": "some-contact-info"
754         }
755     }
756
The most important parameters for this REST RPC are the identification of the two physical client ports
on xpdr nodes. This RPC invokes the *PCE* module to compute a path over the *openroadm-topology* and
759 then invokes *renderer* and *OLM* to implement the end-to-end path into the devices.
760
761
762 OC service creation
763 ^^^^^^^^^^^^^^^^^^^
764
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end Optical Channel (OC) connectivity service between two add/drop ports (PP port of SRG
node) over an optical network only composed of rdm nodes.
768
769 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
770
771 **Sample JSON Data**
772
773 .. code:: json
774
775     {
776         "input": {
777             "sdnc-request-header": {
778                 "request-id": "request-1",
779                 "rpc-action": "service-create",
780                 "request-system-id": "appname"
781             },
782             "service-name": "something",
783             "common-id": "commonId",
784             "connection-type": "roadm-line",
785             "service-a-end": {
786                 "service-rate": "100",
787                 "node-id": "<xpdr-node-id>",
788                 "service-format": "OC",
789                 "clli": "<ccli-name>",
790                 "tx-direction": {
791                     "port": {
792                         "port-device-name": "<xpdr-client-port>",
793                         "port-type": "fixed",
794                         "port-name": "<xpdr-client-port-number>",
795                         "port-rack": "000000.00",
796                         "port-shelf": "Chassis#1"
797                     },
798                     "lgx": {
799                         "lgx-device-name": "Some lgx-device-name",
800                         "lgx-port-name": "Some lgx-port-name",
801                         "lgx-port-rack": "000000.00",
802                         "lgx-port-shelf": "00"
803                     }
804                 },
805                 "rx-direction": {
806                     "port": {
807                         "port-device-name": "<xpdr-client-port>",
808                         "port-type": "fixed",
809                         "port-name": "<xpdr-client-port-number>",
810                         "port-rack": "000000.00",
811                         "port-shelf": "Chassis#1"
812                     },
813                     "lgx": {
814                         "lgx-device-name": "Some lgx-device-name",
815                         "lgx-port-name": "Some lgx-port-name",
816                         "lgx-port-rack": "000000.00",
817                         "lgx-port-shelf": "00"
818                     }
819                 },
820                 "optic-type": "gray"
821             },
822             "service-z-end": {
823                 "service-rate": "100",
824                 "node-id": "<xpdr-node-id>",
825                 "service-format": "OC",
826                 "clli": "<ccli-name>",
827                 "tx-direction": {
828                     "port": {
829                         "port-device-name": "<xpdr-client-port>",
830                         "port-type": "fixed",
831                         "port-name": "<xpdr-client-port-number>",
832                         "port-rack": "000000.00",
833                         "port-shelf": "Chassis#1"
834                     },
835                     "lgx": {
836                         "lgx-device-name": "Some lgx-device-name",
837                         "lgx-port-name": "Some lgx-port-name",
838                         "lgx-port-rack": "000000.00",
839                         "lgx-port-shelf": "00"
840                     }
841                 },
842                 "rx-direction": {
843                     "port": {
844                         "port-device-name": "<xpdr-client-port>",
845                         "port-type": "fixed",
846                         "port-name": "<xpdr-client-port-number>",
847                         "port-rack": "000000.00",
848                         "port-shelf": "Chassis#1"
849                     },
850                     "lgx": {
851                         "lgx-device-name": "Some lgx-device-name",
852                         "lgx-port-name": "Some lgx-port-name",
853                         "lgx-port-rack": "000000.00",
854                         "lgx-port-shelf": "00"
855                     }
856                 },
857                 "optic-type": "gray"
858             },
859             "due-date": "yyyy-mm-ddT00:00:01Z",
860             "operator-contact": "some-contact-info"
861         }
862     }
863
864 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
865 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
866 the devices.
867
868 OTN OCH-OTU4 service creation
869 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
870
Use the following REST RPC to invoke the *service handler* module in order to create over the optical
infrastructure a bidirectional end-to-end OTU4 over an optical wavelength connectivity service
between two optical network ports of an OTN Xponder (MUXPDR or SWITCH). Such a service configures the
optical network infrastructure composed of rdm nodes.
875
876 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
877
878 **Sample JSON Data**
879
880 .. code:: json
881
882     {
883         "input": {
884             "sdnc-request-header": {
885                 "request-id": "request-1",
886                 "rpc-action": "service-create",
887                 "request-system-id": "appname"
888             },
889             "service-name": "something",
890             "common-id": "commonId",
891             "connection-type": "infrastructure",
892             "service-a-end": {
893                 "service-rate": "100",
894                 "node-id": "<xpdr-node-id>",
895                 "service-format": "OTU",
896                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
897                 "clli": "<ccli-name>",
898                 "tx-direction": {
899                     "port": {
900                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
901                         "port-type": "fixed",
902                         "port-name": "<xpdr-network-port-in-otn-topology>",
903                         "port-rack": "000000.00",
904                         "port-shelf": "Chassis#1"
905                     },
906                     "lgx": {
907                         "lgx-device-name": "Some lgx-device-name",
908                         "lgx-port-name": "Some lgx-port-name",
909                         "lgx-port-rack": "000000.00",
910                         "lgx-port-shelf": "00"
911                     }
912                 },
913                 "rx-direction": {
914                     "port": {
915                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
916                         "port-type": "fixed",
917                         "port-name": "<xpdr-network-port-in-otn-topology>",
918                         "port-rack": "000000.00",
919                         "port-shelf": "Chassis#1"
920                     },
921                     "lgx": {
922                         "lgx-device-name": "Some lgx-device-name",
923                         "lgx-port-name": "Some lgx-port-name",
924                         "lgx-port-rack": "000000.00",
925                         "lgx-port-shelf": "00"
926                     }
927                 },
928                 "optic-type": "gray"
929             },
930             "service-z-end": {
931                 "service-rate": "100",
932                 "node-id": "<xpdr-node-id>",
933                 "service-format": "OTU",
934                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
935                 "clli": "<ccli-name>",
936                 "tx-direction": {
937                     "port": {
938                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
939                         "port-type": "fixed",
940                         "port-name": "<xpdr-network-port-in-otn-topology>",
941                         "port-rack": "000000.00",
942                         "port-shelf": "Chassis#1"
943                     },
944                     "lgx": {
945                         "lgx-device-name": "Some lgx-device-name",
946                         "lgx-port-name": "Some lgx-port-name",
947                         "lgx-port-rack": "000000.00",
948                         "lgx-port-shelf": "00"
949                     }
950                 },
951                 "rx-direction": {
952                     "port": {
953                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
954                         "port-type": "fixed",
955                         "port-name": "<xpdr-network-port-in-otn-topology>",
956                         "port-rack": "000000.00",
957                         "port-shelf": "Chassis#1"
958                     },
959                     "lgx": {
960                         "lgx-device-name": "Some lgx-device-name",
961                         "lgx-port-name": "Some lgx-port-name",
962                         "lgx-port-rack": "000000.00",
963                         "lgx-port-shelf": "00"
964                     }
965                 },
966                 "optic-type": "gray"
967             },
968             "due-date": "yyyy-mm-ddT00:00:01Z",
969             "operator-contact": "some-contact-info"
970         }
971     }
972
973 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
974 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
975 the devices.
976
977 OTSi-OTUC4 service creation
978 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
979
Use the following REST RPC to invoke the *service handler* module in order to create over the optical
infrastructure a bidirectional end-to-end OTUC4 over an Optical Tributary Signal (OTSi)
connectivity service between two optical network ports of an OTN Xponder (MUXPDR or SWITCH). Such
a service configures the optical network infrastructure composed of rdm nodes.
984
985 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
986
987 **Sample JSON Data**
988
989 .. code:: json
990
991     {
992         "input": {
993             "sdnc-request-header": {
994                 "request-id": "request-1",
995                 "rpc-action": "service-create",
996                 "request-system-id": "appname"
997             },
998             "service-name": "something",
999             "common-id": "commonId",
1000             "connection-type": "infrastructure",
1001             "service-a-end": {
1002                 "service-rate": "400",
1003                 "node-id": "<xpdr-node-id>",
1004                 "service-format": "OTU",
1005                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1006                 "clli": "<ccli-name>",
1007                 "tx-direction": {
1008                     "port": {
1009                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1010                         "port-type": "fixed",
1011                         "port-name": "<xpdr-network-port-in-otn-topology>",
1012                         "port-rack": "000000.00",
1013                         "port-shelf": "Chassis#1"
1014                     },
1015                     "lgx": {
1016                         "lgx-device-name": "Some lgx-device-name",
1017                         "lgx-port-name": "Some lgx-port-name",
1018                         "lgx-port-rack": "000000.00",
1019                         "lgx-port-shelf": "00"
1020                     }
1021                 },
1022                 "rx-direction": {
1023                     "port": {
1024                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1025                         "port-type": "fixed",
1026                         "port-name": "<xpdr-network-port-in-otn-topology>",
1027                         "port-rack": "000000.00",
1028                         "port-shelf": "Chassis#1"
1029                     },
1030                     "lgx": {
1031                         "lgx-device-name": "Some lgx-device-name",
1032                         "lgx-port-name": "Some lgx-port-name",
1033                         "lgx-port-rack": "000000.00",
1034                         "lgx-port-shelf": "00"
1035                     }
1036                 },
1037                 "optic-type": "gray"
1038             },
1039             "service-z-end": {
1040                 "service-rate": "400",
1041                 "node-id": "<xpdr-node-id>",
1042                 "service-format": "OTU",
1043                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1044                 "clli": "<ccli-name>",
1045                 "tx-direction": {
1046                     "port": {
1047                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1048                         "port-type": "fixed",
1049                         "port-name": "<xpdr-network-port-in-otn-topology>",
1050                         "port-rack": "000000.00",
1051                         "port-shelf": "Chassis#1"
1052                     },
1053                     "lgx": {
1054                         "lgx-device-name": "Some lgx-device-name",
1055                         "lgx-port-name": "Some lgx-port-name",
1056                         "lgx-port-rack": "000000.00",
1057                         "lgx-port-shelf": "00"
1058                     }
1059                 },
1060                 "rx-direction": {
1061                     "port": {
1062                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1063                         "port-type": "fixed",
1064                         "port-name": "<xpdr-network-port-in-otn-topology>",
1065                         "port-rack": "000000.00",
1066                         "port-shelf": "Chassis#1"
1067                     },
1068                     "lgx": {
1069                         "lgx-device-name": "Some lgx-device-name",
1070                         "lgx-port-name": "Some lgx-port-name",
1071                         "lgx-port-rack": "000000.00",
1072                         "lgx-port-shelf": "00"
1073                     }
1074                 },
1075                 "optic-type": "gray"
1076             },
1077             "due-date": "yyyy-mm-ddT00:00:01Z",
1078             "operator-contact": "some-contact-info"
1079         }
1080     }
1081
1082 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
1083 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
1084 the devices.
1085
One shall note that in Phosphorus SR0, as the OpenROADM 400G specifications are not available (neither
in the GNPy libraries, nor in the *PCE* module), path validation will be performed using the same
assumptions as for 100G. This means the path may be validated even though the optical performance does
not reach the expected levels. This allows testing OpenROADM devices implementing B100G rates, but shall
not be used in operational conditions. The support for higher rate impairment-aware path computation
will be introduced across the Phosphorus release train.
1092
1093 ODUC4 service creation
1094 ^^^^^^^^^^^^^^^^^^^^^^
1095
For ODUC4 service creation, the REST RPC used to invoke the *service handler* module in order to create
an ODUC4 over the OTSi-OTUC4 has the same format as the RPC used for the OTSi-OTUC4 creation. Only
"service-format" needs to be changed to "ODU", and "otu-service-rate": "org-openroadm-otn-common-types:OTUCn"
needs to be replaced by "odu-service-rate": "org-openroadm-otn-common-types:ODUCn"
in both the service-a-end and service-z-end containers, as illustrated below.
1101
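The resulting service-a-end content (and similarly service-z-end) is sketched below; only the
relevant leaves are shown, the other leaves being identical to the OTSi-OTUC4 request:

.. code:: json

    {
        "service-rate": "400",
        "node-id": "<xpdr-node-id>",
        "service-format": "ODU",
        "odu-service-rate": "org-openroadm-otn-common-types:ODUCn",
        "clli": "<ccli-name>"
    }
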
1102 OTN HO-ODU4 service creation
1103 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1104
Use the following REST RPC to invoke the *service handler* module in order to create over the optical
1106 infrastructure a bidirectional end-to-end ODU4 OTN service over an OTU4 and structured to support
1107 low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between two network
1108 ports of OTN Xponder (MUXPDR or SWITCH).
1109
1110 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
1111
1112 **Sample JSON Data**
1113
1114 .. code:: json
1115
1116     {
1117         "input": {
1118             "sdnc-request-header": {
1119                 "request-id": "request-1",
1120                 "rpc-action": "service-create",
1121                 "request-system-id": "appname"
1122             },
1123             "service-name": "something",
1124             "common-id": "commonId",
1125             "connection-type": "infrastructure",
1126             "service-a-end": {
1127                 "service-rate": "100",
1128                 "node-id": "<xpdr-node-id>",
1129                 "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
1131                 "clli": "<ccli-name>",
1132                 "tx-direction": {
1133                     "port": {
1134                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1135                         "port-type": "fixed",
1136                         "port-name": "<xpdr-network-port-in-otn-topology>",
1137                         "port-rack": "000000.00",
1138                         "port-shelf": "Chassis#1"
1139                     },
1140                     "lgx": {
1141                         "lgx-device-name": "Some lgx-device-name",
1142                         "lgx-port-name": "Some lgx-port-name",
1143                         "lgx-port-rack": "000000.00",
1144                         "lgx-port-shelf": "00"
1145                     }
1146                 },
1147                 "rx-direction": {
1148                     "port": {
1149                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1150                         "port-type": "fixed",
1151                         "port-name": "<xpdr-network-port-in-otn-topology>",
1152                         "port-rack": "000000.00",
1153                         "port-shelf": "Chassis#1"
1154                     },
1155                     "lgx": {
1156                         "lgx-device-name": "Some lgx-device-name",
1157                         "lgx-port-name": "Some lgx-port-name",
1158                         "lgx-port-rack": "000000.00",
1159                         "lgx-port-shelf": "00"
1160                     }
1161                 },
1162                 "optic-type": "gray"
1163             },
1164             "service-z-end": {
1165                 "service-rate": "100",
1166                 "node-id": "<xpdr-node-id>",
1167                 "service-format": "ODU",
                "odu-service-rate": "org-openroadm-otn-common-types:ODU4",
1169                 "clli": "<ccli-name>",
1170                 "tx-direction": {
1171                     "port": {
1172                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1173                         "port-type": "fixed",
1174                         "port-name": "<xpdr-network-port-in-otn-topology>",
1175                         "port-rack": "000000.00",
1176                         "port-shelf": "Chassis#1"
1177                     },
1178                     "lgx": {
1179                         "lgx-device-name": "Some lgx-device-name",
1180                         "lgx-port-name": "Some lgx-port-name",
1181                         "lgx-port-rack": "000000.00",
1182                         "lgx-port-shelf": "00"
1183                     }
1184                 },
1185                 "rx-direction": {
1186                     "port": {
1187                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1188                         "port-type": "fixed",
1189                         "port-name": "<xpdr-network-port-in-otn-topology>",
1190                         "port-rack": "000000.00",
1191                         "port-shelf": "Chassis#1"
1192                     },
1193                     "lgx": {
1194                         "lgx-device-name": "Some lgx-device-name",
1195                         "lgx-port-name": "Some lgx-port-name",
1196                         "lgx-port-rack": "000000.00",
1197                         "lgx-port-shelf": "00"
1198                     }
1199                 },
1200                 "optic-type": "gray"
1201             },
1202             "due-date": "yyyy-mm-ddT00:00:01Z",
1203             "operator-contact": "some-contact-info"
1204         }
1205     }
1206
1207 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain OTU4 links with valid bandwidth parameters, and then
1209 invokes *renderer* and *OLM* to implement the end-to-end path into the devices.
1210
1211 OTN 10GE-ODU2e service creation
1212 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1213
Use the following REST RPC to invoke the *service handler* module in order to create, over the OTN
infrastructure, a bidirectional end-to-end 10GE-ODU2e OTN service over an ODU4.
Such a service must be created between two client ports of an OTN Xponder (MUXPDR or SWITCH)
configured to support 10GE interfaces.
1218
1219 **REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
1220
1221 **Sample JSON Data**
1222
1223 .. code:: json
1224
1225     {
1226         "input": {
1227             "sdnc-request-header": {
1228                 "request-id": "request-1",
1229                 "rpc-action": "service-create",
1230                 "request-system-id": "appname"
1231             },
1232             "service-name": "something",
1233             "common-id": "commonId",
1234             "connection-type": "service",
1235             "service-a-end": {
1236                 "service-rate": "10",
1237                 "node-id": "<xpdr-node-id>",
1238                 "service-format": "Ethernet",
1239                 "clli": "<ccli-name>",
1240                 "subrate-eth-sla": {
1241                     "subrate-eth-sla": {
1242                         "committed-info-rate": "10000",
1243                         "committed-burst-size": "64"
1244                     }
1245                 },
1246                 "tx-direction": {
1247                     "port": {
1248                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1249                         "port-type": "fixed",
1250                         "port-name": "<xpdr-client-port-in-otn-topology>",
1251                         "port-rack": "000000.00",
1252                         "port-shelf": "Chassis#1"
1253                     },
1254                     "lgx": {
1255                         "lgx-device-name": "Some lgx-device-name",
1256                         "lgx-port-name": "Some lgx-port-name",
1257                         "lgx-port-rack": "000000.00",
1258                         "lgx-port-shelf": "00"
1259                     }
1260                 },
1261                 "rx-direction": {
1262                     "port": {
1263                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1264                         "port-type": "fixed",
1265                         "port-name": "<xpdr-client-port-in-otn-topology>",
1266                         "port-rack": "000000.00",
1267                         "port-shelf": "Chassis#1"
1268                     },
1269                     "lgx": {
1270                         "lgx-device-name": "Some lgx-device-name",
1271                         "lgx-port-name": "Some lgx-port-name",
1272                         "lgx-port-rack": "000000.00",
1273                         "lgx-port-shelf": "00"
1274                     }
1275                 },
1276                 "optic-type": "gray"
1277             },
1278             "service-z-end": {
1279                 "service-rate": "10",
1280                 "node-id": "<xpdr-node-id>",
1281                 "service-format": "Ethernet",
1282                 "clli": "<ccli-name>",
1283                 "subrate-eth-sla": {
1284                     "subrate-eth-sla": {
1285                         "committed-info-rate": "10000",
1286                         "committed-burst-size": "64"
1287                     }
1288                 },
1289                 "tx-direction": {
1290                     "port": {
1291                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1292                         "port-type": "fixed",
1293                         "port-name": "<xpdr-client-port-in-otn-topology>",
1294                         "port-rack": "000000.00",
1295                         "port-shelf": "Chassis#1"
1296                     },
1297                     "lgx": {
1298                         "lgx-device-name": "Some lgx-device-name",
1299                         "lgx-port-name": "Some lgx-port-name",
1300                         "lgx-port-rack": "000000.00",
1301                         "lgx-port-shelf": "00"
1302                     }
1303                 },
1304                 "rx-direction": {
1305                     "port": {
1306                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1307                         "port-type": "fixed",
1308                         "port-name": "<xpdr-client-port-in-otn-topology>",
1309                         "port-rack": "000000.00",
1310                         "port-shelf": "Chassis#1"
1311                     },
1312                     "lgx": {
1313                         "lgx-device-name": "Some lgx-device-name",
1314                         "lgx-port-name": "Some lgx-port-name",
1315                         "lgx-port-rack": "000000.00",
1316                         "lgx-port-shelf": "00"
1317                     }
1318                 },
1319                 "optic-type": "gray"
1320             },
1321             "due-date": "yyyy-mm-ddT00:00:01Z",
1322             "operator-contact": "some-contact-info"
1323         }
1324     }
1325
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters, and then
invokes the *renderer* and *OLM* modules to implement the end-to-end path into the devices.
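
For illustration, the sketch below shows one possible way to send such a service-create request
from a script. It is a minimal example only: the controller address, the RESTCONF port (8181) and
the admin/admin credentials are assumptions corresponding to default OpenDaylight settings, and
the payload file name is hypothetical (it is expected to contain one of the JSON bodies shown
above).

.. code:: python

    # Minimal sketch: POST a service-create body (one of the samples above) to the service
    # handler through RESTCONF. Address, port and credentials are assumed defaults.
    import json

    import requests

    ODL_RESTCONF = "http://localhost:8181/restconf"   # assumed controller address


    def service_create(payload_file: str) -> dict:
        """Send a service-create RPC whose JSON body is stored in payload_file."""
        with open(payload_file) as f:
            body = json.load(f)
        response = requests.post(
            f"{ODL_RESTCONF}/operations/org-openroadm-service:service-create",
            json=body,
            auth=("admin", "admin"),              # assumed default credentials
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()


    if __name__ == "__main__":
        # "10ge-odu2e-service.json" is a hypothetical file containing the body above.
        print(service_create("10ge-odu2e-service.json"))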
1329
1330
1331 .. note::
    Since Magnesium SR2, the service entries corresponding to OCH-OTU4, ODU4 or 10GE-ODU2e services
    are updated in the service-list datastore.
1334
1335 .. note::
    trib-slot is used when the equipment supports contiguous trib-slot allocation (supported since
    Magnesium SR0). The trib-slot provided corresponds to the first of the used trib-slots.
    complex-trib-slots will be used when the equipment does not support contiguous trib-slot
    allocation; in that case, the list of the different trib-slots to be used shall be provided.
    Support for non-contiguous trib-slot allocation is planned for a later release.
1341
1342 Deleting a service
1343 ~~~~~~~~~~~~~~~~~~
1344
1345 Deleting any kind of service
1346 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1347
Use the following REST RPC to invoke the *service handler* module in order to delete a given
optical connectivity service.
1350
1351 **REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*
1352
1353 **Sample JSON Data**
1354
1355 .. code:: json
1356
1357     {
1358         "input": {
1359             "sdnc-request-header": {
1360                 "request-id": "request-1",
1361                 "rpc-action": "service-delete",
1362                 "request-system-id": "appname",
1363                 "notification-url": "http://localhost:8585/NotificationServer/notify"
1364             },
1365             "service-delete-req-info": {
1366                 "service-name": "something",
1367                 "tail-retention": "no"
1368             }
1369         }
1370     }
1371
The most important parameter of this REST RPC is the *service-name*.
1373
1374
1375 .. note::
    Deleting OTN services implies proceeding in the reverse order of their creation. Thus, OTN
    service deletion must follow the three steps below (a minimal invocation sketch follows this
    note):

    1. first delete all 10GE services supported over any ODU4 to be deleted
    2. then delete the ODU4 service
    3. finally delete the OCH-OTU4 service supporting the just deleted ODU4
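
As an illustration of this ordering constraint, the sketch below deletes a 10GE service, then its
supporting ODU4 service, and finally the OCH-OTU4 service, using the service-delete RPC shown
above. The three service names, the controller address and the credentials are hypothetical
assumptions to be adapted to the actual deployment.

.. code:: python

    # Minimal sketch: delete OTN services in the reverse order of their creation.
    # Service names are hypothetical; address and credentials are assumed defaults.
    import requests

    ODL_RESTCONF = "http://localhost:8181/restconf"


    def service_delete(service_name: str) -> dict:
        body = {
            "input": {
                "sdnc-request-header": {
                    "request-id": "request-1",
                    "rpc-action": "service-delete",
                    "request-system-id": "appname"
                },
                "service-delete-req-info": {
                    "service-name": service_name,
                    "tail-retention": "no"
                }
            }
        }
        response = requests.post(
            f"{ODL_RESTCONF}/operations/org-openroadm-service:service-delete",
            json=body,
            auth=("admin", "admin"),
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()


    # Hypothetical service names, deleted in the mandatory order.
    for name in ("10GE-service-1", "ODU4-service-1", "OCH-OTU4-service-1"):
        print(service_delete(name))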
1381
1382 Invoking PCE module
1383 ~~~~~~~~~~~~~~~~~~~
1384
Use the following REST RPCs to invoke the *PCE* module in order to check connectivity between
xponder nodes and the availability of a supporting optical connectivity between the network-ports
of the nodes.
1388
1389 Checking OTU4 service connectivity
1390 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1391
1392 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1393
1394 **Sample JSON Data**
1395
1396 .. code:: json
1397
1398    {
1399       "input": {
1400            "service-name": "something",
1401            "resource-reserve": "true",
1402            "service-handler-header": {
1403              "request-id": "request1"
1404            },
1405            "service-a-end": {
1406              "service-rate": "100",
1407              "clli": "<clli-node>",
1408              "service-format": "OTU",
1409              "node-id": "<otn-node-id>"
1410            },
1411            "service-z-end": {
1412              "service-rate": "100",
1413              "clli": "<clli-node>",
1414              "service-format": "OTU",
1415              "node-id": "<otn-node-id>"
1416              },
1417            "pce-metric": "hop-count"
1418        }
1419    }
1420
1421 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the "openroadm-network"
    topology layer.
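
The sketch below shows one way to issue this path computation request from a script and display
the raw answer. The controller address, the credentials and the node identifiers are assumptions
to be adapted to the actual deployment.

.. code:: python

    # Minimal sketch: ask the PCE whether an OTU4 path exists between two xponders.
    # Node-ids and CLLIs are hypothetical; address and credentials are assumed defaults.
    import requests

    ODL_RESTCONF = "http://localhost:8181/restconf"

    body = {
        "input": {
            "service-name": "something",
            "resource-reserve": "true",
            "service-handler-header": {"request-id": "request1"},
            "service-a-end": {
                "service-rate": "100",
                "clli": "NodeA",            # hypothetical CLLI
                "service-format": "OTU",
                "node-id": "SPDR-SA1"       # hypothetical node-id from "openroadm-network"
            },
            "service-z-end": {
                "service-rate": "100",
                "clli": "NodeC",
                "service-format": "OTU",
                "node-id": "SPDR-SC1"
            },
            "pce-metric": "hop-count"
        }
    }

    response = requests.post(
        f"{ODL_RESTCONF}/operations/transportpce-pce:path-computation-request",
        json=body,
        auth=("admin", "admin"),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    # Print the raw RPC output, which reports whether a path could be computed.
    print(response.json())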
1424
1425 Checking ODU4 service connectivity
1426 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1427
1428 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1429
1430 **Sample JSON Data**
1431
1432 .. code:: json
1433
1434    {
1435       "input": {
1436            "service-name": "something",
1437            "resource-reserve": "true",
1438            "service-handler-header": {
1439              "request-id": "request1"
1440            },
1441            "service-a-end": {
1442              "service-rate": "100",
1443              "clli": "<clli-node>",
1444              "service-format": "ODU",
1445              "node-id": "<otn-node-id>"
1446            },
1447            "service-z-end": {
1448              "service-rate": "100",
1449              "clli": "<clli-node>",
1450              "service-format": "ODU",
1451              "node-id": "<otn-node-id>"
1452              },
1453            "pce-metric": "hop-count"
1454        }
1455    }
1456
1457 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.
1459
1460 Checking 10GE/ODU2e service connectivity
1461 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1462
1463 **REST API** : *POST /restconf/operations/transportpce-pce:path-computation-request*
1464
1465 **Sample JSON Data**
1466
1467 .. code:: json
1468
1469    {
1470       "input": {
1471            "service-name": "something",
1472            "resource-reserve": "true",
1473            "service-handler-header": {
1474              "request-id": "request1"
1475            },
1476            "service-a-end": {
1477              "service-rate": "10",
1478              "clli": "<clli-node>",
1479              "service-format": "Ethernet",
1480              "node-id": "<otn-node-id>"
1481            },
1482            "service-z-end": {
1483              "service-rate": "10",
1484              "clli": "<clli-node>",
1485              "service-format": "Ethernet",
1486              "node-id": "<otn-node-id>"
1487              },
1488            "pce-metric": "hop-count"
1489        }
1490    }
1491
1492 .. note::
    Here, <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer.
1494
1495
1496 odl-transportpce-tapi
1497 ---------------------
1498
This feature allows the TransportPCE application to expose at its northbound interface APIs other
than those defined by the OpenROADM MSA. With this feature, TransportPCE provides part of the
Transport-API specified by the Open Networking Foundation. More specifically, part of the Topology
Service component is implemented, allowing TransportPCE to expose to higher-level applications an
abstraction of its OpenROADM topologies in the form of topologies respecting the T-API modelling.
The current version of TransportPCE implements the *tapi-topology.yang* model in the revision
2018-12-10 (T-API v2.1.2).
1505
1506
1507 -  RPC call
1508
1509    -  get-topology-details
1510
As in IETF or OpenROADM topologies, T-API topologies are composed of lists of nodes and links that
abstract a set of network resources. T-API specifies the *T0 - Multi-layer topology* which is, as
indicated by its name, a single topology that collapses the network logical abstraction for all
network layers. Thus, an OpenROADM device such as an OTN xponder that manages the ETH, ODU, OTU
and optical wavelength network layers will be represented in the T-API T0 topology by two nodes:
one *DSR/ODU* node and one *Photonic Media* node. These two nodes are linked together through one
or several *transitional links* depending on the number of network/line ports on the device.
1518
Aluminium SR2 comes with a complete refactoring of this module, handling in the same way the
multi-layer abstraction of any Xponder terminal device, whether it is a 100G transponder, an OTN
muxponder or an OTN switch. For all these devices, the implementation ensures that only relevant
ports appear in the resulting TAPI topology abstraction. In other words, only client/network ports
that are indirectly/directly connected to the ROADM infrastructure are considered for the abstraction.
Moreover, the whole ROADM infrastructure of the network is abstracted as a single photonic node.
Therefore, a pair of unidirectional xponder-output/xponder-input links present in *openroadm-topology*
is represented by a bidirectional *OMS* link in the TAPI topology.
In the same way, a pair of unidirectional OTN links (OTU4, ODU4) present in *otn-topology* is also
represented by a bidirectional OTN link in the TAPI topology, while retaining their available
bandwidth characteristics.
1530
Two kinds of topologies are currently implemented. The first one is the *"T0 - Multi-layer topology"*
defined in the reference implementation of T-API. This topology gives an abstraction of the data
coming from *openroadm-topology* and *otn-topology*. Such a topology may be rather complex since
most devices are represented through several nodes and links.
Another topology, named *"Transponder 100GE"*, is also implemented. The latter provides a much
simpler, higher-level abstraction for the specific case of 100GE transponders, in the form of a
single DSR node.
1538
1539 The figure below shows an example of TAPI abstractions as performed by TransportPCE starting from Aluminium SR2.
1540
1541 .. figure:: ./images/TransportPCE-tapi-abstraction.jpg
1542    :alt: Example of T0-multi-layer TAPI abstraction in TransportPCE
1543
In this specific case, as far as the "A" side is concerned, we connect TransportPCE to two xponder
terminal devices at the NETCONF level:

- XPDR-A1 is a 100GE transponder and is represented by the XPDR-A1-XPDR1 node in *otn-topology*
- SPDR-SA1 is an OTN xponder that actually contains in its device configuration datastore two OTN
  xponder nodes (the OTN muxponder 10GE=>100G SPDR-SA1-XPDR1 and the OTN switch 4x100GE => 4x100G
  SPDR-SA1-XPDR2)

As represented on the bottom part of the figure, only one network port of XPDR-A1-XPDR1 is connected
to the ROADM infrastructure, and only one network port of the OTN muxponder is attached to the
ROADM infrastructure.
Such a network configuration results in the TAPI *T0 - Multi-layer topology* abstraction
represented in the center of the figure. Note that the OTN switch (SPDR-SA1-XPDR2), not being
attached to the ROADM infrastructure, is not abstracted.
Moreover, since 100GE transponders are connected, the TAPI *Transponder 100GE* topology results in
a single-layer DSR node with only the two Owned Node Edge Points representing the two 100GE client
ports of XPDR-A1-XPDR1 and XPDR-C1-XPDR1 respectively.
1558
1559
1560 **REST API** : *POST /restconf/operations/tapi-topology:get-topology-details*
1561
1562 This request builds the TAPI *T0 - Multi-layer topology* abstraction with regard to the current
1563 state of *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
1564
1565 **Sample JSON Data**
1566
1567 .. code:: json
1568
1569     {
1570       "tapi-topology:input": {
1571         "tapi-topology:topology-id-or-name": "T0 - Multi-layer topology"
1572        }
1573     }
1574
1575 This request builds the TAPI *Transponder 100GE* abstraction with regard to the current state of
1576 *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
Its main interest is to simply and directly retrieve the 100GE client ports of 100G transponders
that may be connected together through a point-to-point 100GE service running over a wavelength.
1579
1580 .. code:: json
1581
1582     {
1583       "tapi-topology:input": {
1584         "tapi-topology:topology-id-or-name": "Transponder 100GE"
1585         }
1586     }
1587
1588
1589 .. note::
1590
    As for the *T0 - Multi-layer* topology, only the 100GE client ports whose associated 100G line
    port is connected to Add/Drop nodes of the ROADM infrastructure are retrieved, in order to
    abstract only relevant information.
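
As an illustration of the two requests above, the sketch below retrieves both abstractions and
prints the raw answers. The controller address and the credentials are assumptions corresponding
to default OpenDaylight settings.

.. code:: python

    # Minimal sketch: retrieve the two TAPI topology abstractions exposed by TransportPCE.
    # Address and credentials are assumed defaults.
    import requests

    ODL_RESTCONF = "http://localhost:8181/restconf"


    def get_topology_details(topology_name: str) -> dict:
        body = {"tapi-topology:input": {"tapi-topology:topology-id-or-name": topology_name}}
        response = requests.post(
            f"{ODL_RESTCONF}/operations/tapi-topology:get-topology-details",
            json=body,
            auth=("admin", "admin"),
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()


    for name in ("T0 - Multi-layer topology", "Transponder 100GE"):
        print(name, "->", get_topology_details(name))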
1594
1595 odl-transportpce-dmaap-client
1596 -----------------------------
1597
This feature allows the TransportPCE application to send notifications to the ONAP DMaaP Message
Router following service request results.
The feature listens to NBI notifications and sends the PublishNotificationService content to
DMaaP on the topic "unauthenticated.TPCE" through a POST request on */events/unauthenticated.TPCE*.
It uses Jackson to serialize the notification to JSON and the Jersey client to send the POST request.
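
For illustration only, the sketch below reproduces, with Python instead of the Jersey client, the
kind of request the feature issues towards the Message Router. The DMaaP host and port and the
payload fields are assumptions: the real payload is the serialized PublishNotificationService
notification produced by TransportPCE.

.. code:: python

    # Rough equivalent of the POST issued by the dmaap-client feature: a JSON notification
    # sent to the "unauthenticated.TPCE" topic of a DMaaP Message Router instance.
    # Host, port and payload fields are assumptions used only for illustration.
    import requests

    DMAAP_MR = "http://dmaap-mr.example.com:3904"   # hypothetical Message Router address

    # Purely illustrative payload.
    notification = {
        "service-name": "something",
        "message": "Service implemented !"
    }

    response = requests.post(
        f"{DMAAP_MR}/events/unauthenticated.TPCE",
        json=notification,
        timeout=30,
    )
    print(response.status_code)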
1603
1604 odl-transportpce-nbinotifications
1605 ---------------------------------
1606
This feature allows the TransportPCE application to write and read notifications stored in topics
of a Kafka server. It is basically composed of two kinds of elements: the 'publishers', in charge
of sending notifications to the Kafka server, and the 'subscribers', in charge of reading
notifications from the Kafka server. To restrict which classes are allowed to send notifications,
each publisher is dedicated to an authorized class.
When the feature is called to write a notification to the Kafka server, it serializes the
notification into JSON format and then publishes it to a topic of the server via a publisher.
When the feature is called to read notifications from the Kafka server, it retrieves them from
the topic of the server via a subscriber and deserializes them.
1616
For now, when the REST RPC service-create is called to create a bidirectional end-to-end service,
the feature notifies the result of the creation (success or failure) to the Kafka server.
The topics that store these notifications are named after the connection type
(service, infrastructure, roadm-line). For instance, if the RPC service-create is called to create
an infrastructure connection, the service notifications related to this connection will be stored
in the topic 'infrastructure'.
1623
The figure below shows an example of how the nbinotifications feature is used to notify the
progress of a service creation.
1626
1627 .. figure:: ./images/TransportPCE-nbinotifications-service-example.jpg
1628    :alt: Example of service notifications using the feature nbinotifications in TransportPCE
1629
1630
1631 Depending on the status of the service creation, two kinds of notifications can be published
1632 to the topic 'service' of the Kafka server.
1633
If the service was correctly implemented, the following notification will be published:
1635
1636
1637 -  **Service implemented !** : Indicates that the service was successfully implemented.
1638    It also contains all information concerning the new service.
1639
1640
Otherwise, the following notification will be published:
1642
1643
1644 -  **ServiceCreate failed ...** : Indicates that the process of service-create failed, and also contains
1645    the failure cause.
1646
1647
To retrieve these service notifications stored in the Kafka server:
1649
1650 **REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-process-service*
1651
1652 **Sample JSON Data**
1653
1654 .. code:: json
1655
1656     {
1657       "input": {
1658         "connection-type": "service",
1659         "id-consumer": "consumer",
1660         "group-id": "test"
1661        }
1662     }
1663
1664 .. note::
1665     The field 'connection-type' corresponds to the topic that stores the notifications.
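
The sketch below issues the RPC above from a script and prints the returned notifications. The
controller address and the credentials are assumptions corresponding to default OpenDaylight
settings.

.. code:: python

    # Minimal sketch: read back the process-service notifications stored in the 'service' topic.
    # Address and credentials are assumed defaults.
    import requests

    ODL_RESTCONF = "http://localhost:8181/restconf"

    body = {
        "input": {
            "connection-type": "service",
            "id-consumer": "consumer",
            "group-id": "test"
        }
    }

    response = requests.post(
        f"{ODL_RESTCONF}/operations/nbi-notifications:get-notifications-process-service",
        json=body,
        auth=("admin", "admin"),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())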
1666
Another implementation of the notifications allows notifying any modification of the operational
state of a service. When a service breaks down or is restored, a notification advertising the new
status is sent to the Kafka server. The topics that store these notifications in the Kafka server
are also named after the connection type (service, infrastructure, roadm-line), suffixed with the
string 'alarm'.
1671
To retrieve these alarm notifications stored in the Kafka server:
1673
1674 **REST API** : *POST /restconf/operations/nbi-notifications:get-notifications-alarm-service*
1675
1676 **Sample JSON Data**
1677
1678 .. code:: json
1679
1680     {
1681       "input": {
1682         "connection-type": "infrastructure",
1683         "id-consumer": "consumer",
1684         "group-id": "test"
1685        }
1686     }
1687
1688 .. note::
1689     This sample is used to retrieve all the alarm notifications related to infrastructure services.
1690
1691 Help
1692 ----
1693
1694 -  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__