1 .. _transportpce-dev-guide:
2
3 TransportPCE Developer Guide
4 ============================
5
6 Overview
7 --------
8
TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with controllers of different layers (L2, L3 controllers…), a
higher layer controller and/or an orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment and to provision services according to a
request coming from a higher layer controller and/or an orchestrator.
This capability may rely on the controller only, or it may be delegated to
distributed (standardized) protocols.
19
20
21 Architecture
22 ------------
23
The TransportPCE modular architecture is described in the diagram below. Each main
function, such as Topology Management, the Path Computation Element (PCE), the Service
Handler, the Renderer (responsible for the path configuration on the optical
equipment) and Optical Line Management (OLM), is associated with a generic block
relying on open models, each of them communicating through published APIs.
29
30
31 .. figure:: ./images/TransportPCE-Diagram-Sulfur.jpg
32    :alt: TransportPCE architecture
33
34    TransportPCE architecture
35
36 Fluorine, Neon and Sodium releases of transportPCE are dedicated to the control
37 of WDM transport infrastructure. The WDM layer is built from colorless ROADMs
38 and transponders.
39
The benefit of using a controller to automatically provision services strongly
relies on its ability to handle end-to-end optical services that span across
the different network domains, potentially equipped with equipment coming from
different suppliers. Interoperability in the optical layer is therefore a key
element to get the benefit of automated control.
45
The initial design of TransportPCE leverages the OpenROADM Multi-Source Agreement (MSA),
which defines interoperability specifications consisting of both optical
interoperability specifications and YANG data models.
49
End-to-end OTN services such as OCH-OTU4, structured ODU4 or 10GE-ODU2e
services are supported since Magnesium SR2. OTN support continued to be
improved in the following releases of Magnesium and Aluminium.
53
Flexgrid was introduced in Aluminium. Depending on the OpenROADM device model,
optical interfaces can be created according to the initial fixed grid (for
R1.2.1, 96 channels regularly spaced by 50 GHz), or to a flexgrid (for R2.2.1,
use of a specific number of contiguous frequency slots of 6.25 GHz, depending
on the one hand on the capabilities of ROADMs and transponders, and on the
other hand on the rate of the channel).
60
Leveraging the Flexgrid feature, high rate services are supported since Silicon.
The first implementation allows rendering 400 GE services. This release also brings
asynchronous service creation and deletion, thanks to northbound notification
modules based on a Kafka implementation, allowing interactions with the DMaaP
bus of ONAP.
66
Phosphorus consolidates end-to-end support for high rate services (ODUC4, OTUC4),
allowing service creation and deletion from the NBI. The support of path
computation for high rate services (OTUC4) has been added through the different
Phosphorus releases, relying on GNPy for impairment aware path computation. Experimental
support of T-API is provided, allowing service-create/delete from a T-API version
2.1.1 compliant NBI. A T-API network topology, with different levels of abstraction,
and a service context are maintained in the MD-SAL. Service state is managed,
monitoring device port state changes. Associated notifications are handled through
Kafka and DMaaP clients.
76
Sulfur introduces the OpenROADM service and network models 10.1, which include the
operational-modes catalog needed for the future support of Alien Wavelength use cases.
It also offers T-API notification support, handling the RPC associated with the
notification subscription service.
81
The Chlorine release brings structural changes to the project. Indeed, the official
YANG models of the OpenROADM and ONF T-API communities are no longer managed directly
in the TransportPCE project but in a dedicated sub-project: transportpce/models.
The implementation of these models in TransportPCE now imports the already
compiled models through a Maven dependency.
From a functional point of view, Chlorine supports the autonomous reroute of WDM services
terminated on 100G or 400G transponders, as well as the beginning of developments around
the OpenROADM catalog management.
90
The Argon release provides autonomous impairment aware path computation, relying on the
OpenROADM operational-modes catalog. It is used in a first step of the path validation,
to evaluate the Optical Signal to Noise Ratio as well as the penalty associated with the
signal across the calculated path. Validation of the optical path by GNPy is still
triggered in a second step, leveraging advanced calculation of the non-linear contribution.
96
97
98 Module description
99 ~~~~~~~~~~~~~~~~~~
100
101 ServiceHandler
102 ^^^^^^^^^^^^^^
103
The Service Handler handles requests coming from a higher level controller or an
orchestrator through the northbound API, as defined in the OpenROADM service model.
The current implementation addresses the following RPCs: service-create, temp-service-create,
service-delete, temp-service-delete, service-reroute, and service-restoration.
It checks the request consistency and triggers path calculation by sending RPCs to the PCE.
If a valid path is returned by the PCE, path configuration is initiated relying on the
Renderer and the OLM. At the confirmation of a successful service creation, the Service
Handler updates the service-list/temp-service-list in the MD-SAL. For service deletion,
the Service Handler relies on the Renderer and the OLM to delete connections and reset
power levels associated with the service. The service-list is updated following a
successful service deletion. Neon SR0 added the support for services from ROADM
to ROADM, which brings additional flexibility and notably allows reserving resources
when transponders are not in place at day one. Magnesium SR2 fully supports end-to-end
OTN services which are part of the OTN infrastructure. It concerns the management of
OCH-OTU4 (also part of the optical infrastructure) and structured HO-ODU4 services.
Moreover, once these two kinds of OTN infrastructure services are created, it is possible
to manage some LO-ODU services (1GE-ODU0, 10GE-ODU2e). 100GE services are also
supported over ODU4 in transponders or switchponders using higher rate network
interfaces.
123
124 In Silicon release, the management of TopologyUpdateNotification coming from the *Topology Management*
125 module was implemented. This functionality enables the controller to update the information of existing
126 services according to the online status of the network infrastructure. If any service is affected by
127 the topology update and the *odl-transportpce-nbinotifications* feature is installed, the Service
128 Handler will send a notification to a Kafka server with the service update information.
129
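The following minimal sketch illustrates how an external application could consume these
service update notifications from the Kafka broker used by the *nbinotifications* feature.
The broker address and topic name are assumptions for illustration purposes and depend on
the nbinotifications configuration of the deployment.

.. code:: python

    # Hedged sketch of a Kafka consumer for TransportPCE service notifications.
    # Broker address and topic name are assumptions; adapt them to your deployment.
    import json

    from kafka import KafkaConsumer  # kafka-python library

    consumer = KafkaConsumer(
        "service-notifications",             # hypothetical topic name
        bootstrap_servers="localhost:9092",  # address of the Kafka broker
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for record in consumer:
        # Each record value carries the service update information published
        # by the Service Handler (service name, operational state, ...).
        print(record.value)
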
130 PCE
131 ^^^
132
133 The Path Computation Element (PCE) is the component responsible for path
134 calculation. An interface allows the Service Handler or external components such as an
135 orchestrator to request a path computation and get a response from the PCE
136 including the computed path(s) in case of success, or errors and indication of
137 the reason for the failure in case the request cannot be satisfied. Additional
138 parameters can be provided by the PCE in addition to the computed paths if
139 requested by the client module. An interface to the Topology Management module
140 allows keeping PCE aligned with the latest changes in the topology. Information
141 about current and planned services is available in the MD-SAL data store.
142
The current implementation of the PCE allows finding the shortest path, minimizing either the hop
count (default) or the propagation delay. The support of a flexible grid was introduced in Aluminium.
The central wavelength assignment depends on the capabilities of the different devices on the path.
If one of the elements only supports a fixed grid, the wavelength is assigned considering a grid of
96 wavelengths spaced by 50 GHz. If all the devices on the path support a flexible grid, the assignment
of wavelengths is done according to a flexible grid considering 768 contiguous slots of 6.25 GHz
(total spectrum of 4.8 THz).
150
151 The PCE module handles the following constraints as hard constraints:
152
153 -   **Node exclusion**
154 -   **SRLG exclusion**
155 -   **Maximum latency**
156
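A minimal sketch of a path computation request carrying such hard constraints is shown below,
assuming the *transportpce-pce* RPC is reachable through RESTCONF on a local controller. The
payload must follow the transportpce-pce YANG model; the field names used here (node exclusion
and maximum latency under hard-constraints) are indicative only.

.. code:: python

    # Hedged sketch of a path-computation-request with hard constraints.
    # URL, credentials and payload details are assumptions to be adapted;
    # refer to the transportpce-pce YANG model for the authoritative structure.
    import requests

    payload = {
        "input": {
            "service-name": "service-test",
            "resource-reserve": "true",
            "service-handler-header": {"request-id": "request-1"},
            "hard-constraints": {
                "exclude": {"node-id": ["<roadm-node-id-to-exclude>"]},
                "latency": {"max-latency": 30},
            },
            "service-a-end": {"service-rate": "100", "service-format": "Ethernet",
                              "node-id": "<xpdr-node-id-A>", "clli": "<clli-A>"},
            "service-z-end": {"service-rate": "100", "service-format": "Ethernet",
                              "node-id": "<xpdr-node-id-Z>", "clli": "<clli-Z>"},
        }
    }

    response = requests.post(
        "http://localhost:8181/rests/operations/transportpce-pce:path-computation-request",
        json=payload,
        auth=("admin", "admin"),
    )
    print(response.status_code, response.json())
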
In Neon SR0, the PCE calculates the OSNR on the basis of the incremental noise specifications provided
in the OpenROADM MSA. The support of unidirectional ports is also added. The interconnection of the PCE
with GNPy (Gaussian Noise Python), an open-source library developed in the scope of the Telecom Infra
Project for building route planning and optimizing performance in optical mesh networks, is supported
since Magnesium SR0. This allowed introducing impairment aware path computation for (beyond 100G)
services across the Phosphorus releases.
163
In Argon, we introduce autonomous impairment aware path computation, leveraging the OpenROADM YANG
specification catalog (R10.1), which translates the optical specifications provided in the MSA into
models understandable by the controller. Each disaggregated element crossed along the path
(transponders, ROADM add/drop modules and degrees) is associated with an operational mode, whose
physical parameters are described in the catalog. This allows evaluating the degradations that each
element, whether it is a device or a fiber span, brings to the signal transmission. The resulting
Optical Signal to Noise Ratio is calculated, as well as the penalties associated with the cumulated
chromatic dispersion, Polarization Mode Dispersion (PMD), Polarization Dependent Loss (PDL)… and the
non-linear contribution is evaluated.
173
All of this is done in accordance with the OpenROADM optical specifications. Handling OpenROADM
specification catalogs improves the upgradability of the code, since a future evolution of the
specifications only implies adding new operational modes to the catalog while the associated code
remains unchanged.
177
In Argon SR0, to benefit from this new functionality, the specification catalog must be manually loaded
into the data store. The catalog includes two different parts, the first being dedicated to the
translation of OpenROADM specifications, the second (optional) being dedicated to specific operational
modes for transponders used in “bookended” mode (same transponders on both ends of the path). The
automatic filling of the first part of the catalog is planned in Argon SR1. This release will also
support the two RPCs used to fill the different parts of the catalog:

-   **add-openroadm-operational-mode-to-catalog**
-   **add-specific-operational-mode-to-catalog**
186
Autonomous impairment aware path computation is triggered in Argon for any path at the WDM layer,
whatever the service rate. The transmission margin is evaluated in both directions and the result is
provided in INFO logs. GNPy is used in a second step to enforce path validation. Indeed, it gives
complementary information to the calculation made from OpenROADM specifications, with a finer assessment
of the non-linear contribution, and potentially a consideration of the interaction with other channels
already provisioned on the network. This last capability will be added across the Argon releases.
The PCE forwards to the GNPy external tool, through a REST interface, the topology and the pre-computed
path translated into routing constraints. GNPy calculates a set of Quality of Transmission metrics for
this path using its own library which includes models for OpenROADM. The result is sent back to the PCE.
If the path is validated, the PCE sends back a response to the service handler. In case of invalidation
of the path by GNPy, the PCE sends a new request to GNPy, including only the constraints expressed in the
path-computation-request initiated by the Service Handler. GNPy then tries to calculate a path based on
these relaxed constraints. The result of the path computation is provided to the PCE, which translates
the path according to the topology handled in transportPCE and forwards the results to the Service
Handler.
202
GNPy relies on SNR and takes into account the linear and non-linear impairments to check feasibility.
In the related tests, the GNPy module runs externally in a Docker container and the communication with
TransportPCE is ensured via HTTPS.
206
207 Topology Management
208 ^^^^^^^^^^^^^^^^^^^
209
The Topology Management module builds the topology according to the network model
defined in OpenROADM. The topology is aligned with the IETF I2RS RFC 8345 model.
It includes several network layers:
213
214 -  **CLLI layer corresponds to the locations that host equipment**
215 -  **Network layer corresponds to a first level of disaggregation where we
216    separate Xponders (transponder, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where ROADM
   Add/Drop modules ("SRGs") are separated from the degrees, which include line
   amplifiers and WSS that switch wavelengths from one degree to another**
220 -  **OTN layer introduced in Magnesium includes transponders as well as switch-ponders and
221    mux-ponders having the ability to switch OTN containers from client to line cards. Mg SR0
222    release includes creation of the switching pool (used to model cross-connect matrices),
223    tributary-ports and tributary-slots at the initial connection of NETCONF devices.
224    The population of OTN links (OTU4 and ODU4), and the adjustment of the tributary ports/slots
225    pool occupancy when OTN services are created is supported since Magnesium SR2.**
226
Since the Silicon release, the Topology Management module processes NETCONF events received through an
event stream (as defined in RFC 5277) established between the devices and the NETCONF adapter of the
controller. The current implementation detects device configuration changes and updates the topology
datastore accordingly. It then sends a TopologyUpdateNotification to the *Service Handler* to indicate
that a change has been detected in the network that may affect some of the already existing services.
232
233 Renderer
234 ^^^^^^^^
235
The Renderer module, on request coming from the Service Handler through a
service-implementation-request or service-delete RPC, sets up or deletes the path corresponding
to a specific service between the A and Z ends. The path description provided by the Service Handler
to the Renderer is based on abstracted resources (nodes, links and termination-points), as provided
by the PCE module. The Renderer converts this path description into a path topology based on
device resources (circuit-packs, ports…).
242
The conversion from abstracted resources to device resources is performed relying on the
portmapping module, which maintains the mapping between these different resource types.
The portmapping module also allows keeping the topology independent from the device releases.
In Neon (SR0), the portmapping module has been enriched to support both OpenROADM 1.2.1 and 2.2.1
device models. The full support of OpenROADM 2.2.1 device models (both in the topology management
and the rendering function) has been added in Neon SR1. In Magnesium, portmapping is enriched with
the supported-interface-capability, OTN supporting-interfaces, and switching-pools (reflecting
cross-connection capabilities of OTN switch-ponders). The support for 7.1 device models is
introduced in Silicon (no devices of intermediate releases have been proposed and made available
to the market by equipment manufacturers).
253
After the path is provided, the Renderer first checks which interfaces already exist on the
ports of the different nodes that the path crosses. It then creates the missing interfaces. After all
needed interfaces have been created, it sets up the connections required in the nodes and
notifies the Service Handler of the status of the path creation. The path is created in two steps
(from A to Z and from Z to A). In case the path between A and Z could not be fully created, a
rollback function is called to set the equipment on the path back to its initial configuration
(as it was before invoking the Renderer).
261
Magnesium brings the support of OTN services. SR0 supports the creation of OTU4, ODU4, ODU2/ODU2e
and ODU0 interfaces. The creation of these low-order OTN interfaces must be triggered through the
otn-service-path RPC. Magnesium SR2 fully supports end-to-end OTN service implementation into devices
(service-implementation-request/service-delete RPCs, topology alignment after the service
has been created).

In the Silicon releases, higher rate OTN interfaces (OTUC4) must be triggered through the
otn-service-path RPC. Phosphorus SR0 supports end-to-end OTN service implementation into devices
(service-implementation-request/service-delete RPCs, topology alignment after the service
has been created). One shall note that impairment aware path calculation for higher rates will
be made available across the Phosphorus release train.
273
274 OLM
275 ^^^
276
277 Optical Line Management module implements two main features: it is responsible
278 for setting up the optical power levels on the different interfaces, and is in
279 charge of adjusting these settings across the life of the optical
280 infrastructure.
281
After the different connections have been established in the ROADMs, between two
degrees for an express path, or between an SRG and a degree for an add or drop
path (meaning the devices have set the WSS and all other required elements to
provide path continuity), power settings are provided as attributes of these
connections. This allows the device to set all complementary elements such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) in the fiber span. This also applies
to Xponders, as their output power must comply with the specifications defined
for the add/drop ports (SRG) of the ROADM. OLM has the responsibility of
calculating the right power settings, sending them to the device, and checking the
PM retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.
294
295
296 Inventory
297 ^^^^^^^^^
298
The TransportPCE Inventory module is responsible for keeping track of the connected devices in an
external MariaDB database. Other databases may be used as long as they comply with SQL and are
compatible with OpenDaylight (for example MySQL). At present, the module supports extracting and
persisting the inventory of OpenROADM MSA version 1.2.1 devices. Inventory module changes to support
newer device models (2.2.1, etc.) and other models (network, service, etc.) will be progressively
included.

The inventory module can be activated by the associated karaf feature (odl-transportpce-inventory).
The database properties are supplied in the “opendaylight-release” and “opendaylight-snapshots”
profiles. Below is the settings.xml with properties included in the distribution.
The module can be rebuilt from sources with different parameters.
309
310 Sample entry in settings.xml to declare an external inventory database:
311 ::
312
313     <profiles>
314       <profile>
315           <id>opendaylight-release</id>
316     [..]
317          <properties>
318                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
319                  <transportpce.db.database><<databasename>></transportpce.db.database>
320                  <transportpce.db.username><<username>></transportpce.db.username>
321                  <transportpce.db.password><<password>></transportpce.db.password>
322                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
323          </properties>
324     </profile>
325     [..]
326     <profile>
327           <id>opendaylight-snapshots</id>
328     [..]
329          <properties>
330                  <transportpce.db.host><<hostname>>:3306</transportpce.db.host>
331                  <transportpce.db.database><<databasename>></transportpce.db.database>
332                  <transportpce.db.username><<username>></transportpce.db.username>
333                  <transportpce.db.password><<password>></transportpce.db.password>
334                  <karaf.localFeature>odl-transportpce-inventory</karaf.localFeature>
335          </properties>
336         </profile>
337     </profiles>
338
339
Once the project is built and karaf is started, the cfg file is generated in the etc folder with the
corresponding properties supplied in settings.xml. When devices with the OpenROADM 1.2.1 device model
are mounted, the device listener in the inventory module loads several device attributes to various
tables as per the supplied database. The database structure details can be retrieved from the file
tests/inventory/initdb.sql inside the project sources. Installation scripts and a docker file are also
provided.
346
347 Key APIs and Interfaces
348 -----------------------
349
350 External API
351 ~~~~~~~~~~~~
352
The northbound API, interconnecting the Service Handler to higher level applications,
relies on the service model defined in the MSA. The Renderer and the OLM are
developed to allow configuring OpenROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA’s device model.
357
358 ServiceHandler Service
359 ^^^^^^^^^^^^^^^^^^^^^^
360
361 -  RPC call
362
363    -  service-create (given service-name, service-aend, service-zend)
364
365    -  service-delete (given service-name)
366
367    -  service-reroute (given service-name, service-aend, service-zend)
368
369    -  service-restoration (given service-name, service-aend, service-zend)
370
371    -  temp-service-create (given common-id, service-aend, service-zend)
372
373    -  temp-service-delete (given common-id)
374
375 -  Data structure
376
377    -  service list : made of services
378    -  temp-service list : made of temporary services
379    -  service : composed of service-name, topology wich describes the detailed path (list of used resources)
380
381 -  Notification
382
383    - service-rpc-result : result of service RPC
384    - service-notification : service has been added, modified or removed
385
386 Netconf Service
387 ^^^^^^^^^^^^^^^
388
389 -  RPC call
390
391    -  connect-device : PUT
392    -  disconnect-device : DELETE
393    -  check-connected-device : GET
394
395 -  Data Structure
396
397    -  node list : composed of netconf nodes in topology-netconf
398
399 Internal APIs
400 ~~~~~~~~~~~~~
401
Internal APIs define REST APIs to interconnect TransportPCE modules:
403
404 -   Service Handler to PCE
405 -   PCE to Topology Management
406 -   Service Handler to Renderer
407 -   Renderer to OLM
408 -   Network Model to Service Handler
409
410 Pce Service
411 ^^^^^^^^^^^
412
413 -  RPC call
414
415    -  path-computation-request (given service-name, service-aend, service-zend)
416
417    -  cancel-resource-reserve (given service-name)
418
419 -  Notification
420
421    - service-path-rpc-result : result of service RPC
422
423 Renderer Service
424 ^^^^^^^^^^^^^^^^
425
426 -  RPC call
427
428    -  service-implementation-request (given service-name, service-aend, service-zend)
429
430    -  service-delete (given service-name)
431
432 -  Data structure
433
434    -  service path list : composed of service paths
435    -  service path : composed of service-name, path description giving the list of abstracted elements (nodes, tps, links)
436
437 -  Notification
438
439    - service-path-rpc-result : result of service RPC
440
441 Device Renderer
442 ^^^^^^^^^^^^^^^
443
444 -  RPC call
445
446    -  service-path used in SR0 as an intermediate solution to address directly the renderer
447       from a REST NBI to create OCH-OTU4-ODU4 interfaces on network port of otn devices.
448
449    -  otn-service-path used in SR0 as an intermediate solution to address directly the renderer
450       from a REST NBI for otn-service creation. Otn service-creation through
451       service-implementation-request call from the Service Handler will be supported in later
452       Magnesium releases
453
454 Topology Management Service
455 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
456
457 -  Data structure
458
   -  network list : composed of networks (openroadm-topology, netconf-topology)
   -  node list : composed of nodes identified by their node-id
   -  link list : composed of links identified by their link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)
464
465 OLM Service
466 ^^^^^^^^^^^
467
468 -  RPC call
469
470    -  get-pm (given node-id)
471
472    -  service-power-setup
473
474    -  service-power-turndown
475
476    -  service-power-reset
477
478    -  calculate-spanloss-base
479
480    -  calculate-spanloss-current
481
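The sketch below illustrates how the get-pm RPC could be invoked through RESTCONF to retrieve
performance monitoring data from a node. It is only a hedged example: the controller address,
credentials and the input leaf names (resource type, granularity, resource identifier) are
assumptions to be checked against the transportpce-olm YANG model.

.. code:: python

    # Hedged sketch: retrieve PM data for an interface of a given node through
    # the transportpce-olm get-pm RPC. Field names below are indicative and
    # must be aligned with the transportpce-olm YANG model of your release.
    import requests

    payload = {
        "input": {
            "node-id": "<node-id>",
            "resource-type": "interface",
            "granularity": "15min",
            "resource-identifier": {"resource-name": "<interface-name>"},
        }
    }

    response = requests.post(
        "http://localhost:8181/rests/operations/transportpce-olm:get-pm",
        json=payload,
        auth=("admin", "admin"),
    )
    print(response.json())
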
482 odl-transportpce-stubmodels
483 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
484
   -  This feature provides the ability to stub some TransportPCE modules, namely the pce and
      the renderer (Stubpce and Stubrenderer).
      Stubs are used for development purposes and can be used for some of the functional tests.
488
489 Interfaces to external software
490 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
491
This section describes the interfaces implemented to interconnect TransportPCE modules with other
software in order to perform specific tasks.
494
495 GNPy interface
496 ^^^^^^^^^^^^^^
497
498 -  Request structure
499
500    -  topology : composed of list of elements and connections
501    -  service : source, destination, explicit-route-objects, path-constraints
502
503 -  Response structure
504
   -  path-properties/path-metric : OSNR-0.1nm, OSNR-bandwidth, SNR-0.1nm, SNR-bandwidth
506    -  path-properties/path-route-objects : composed of path elements
507
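A minimal sketch of the corresponding request body shape is given below. The GNPy endpoint URL,
credentials and the exact container names are assumptions that depend on how the GNPy server is
deployed; only a skeleton of the topology and service containers is shown.

.. code:: python

    # Hedged sketch of the request sent to GNPy: a topology (elements and
    # connections) plus a service (source, destination, constraints).
    # Endpoint URL, credentials and key names are assumptions.
    import requests

    gnpy_request = {
        "topology": {
            "elements": [],        # transceivers, ROADMs, fibers, amplifiers
            "connections": [],     # from-node/to-node connections
        },
        "service": {
            "path-request": [{
                "request-id": "request-1",
                "source": "<a-end-node>",
                "destination": "<z-end-node>",
                "explicit-route-objects": {},   # pre-computed path as route objects
                "path-constraints": {},         # rate, spacing, ...
            }]
        },
    }

    response = requests.post(
        "https://localhost:8008/api/v1/path-computation",  # hypothetical GNPy endpoint
        json=gnpy_request,
        auth=("gnpy", "gnpy"),   # hypothetical credentials
        verify=False,            # self-signed certificate in the test Docker setup
    )
    print(response.json())
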
508
509 Running transportPCE project
510 ----------------------------
511
512 To use transportPCE controller, the first step is to connect the controller to optical nodes
513 through the NETCONF connector.
514
515 .. note::
516
    In the current version, only optical equipment compliant with OpenROADM datamodels is managed
    by transportPCE.

    Since the Chlorine release, the Bierman implementation of RESTCONF is no longer supported and has
    been replaced by the RFC 8040 implementation. REST API requests must therefore be compliant with
    the RFC 8040 format.
522
523
524 Connecting nodes
525 ~~~~~~~~~~~~~~~~
526
527 To connect a node, use the following RESTconf request
528
529 **REST API** : *PUT /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>*
530
531 **Sample JSON Data**
532
533 .. code:: json
534
535     {
536         "node": [
537             {
538                 "node-id": "<node-id>",
539                 "netconf-node-topology:tcp-only": "false",
540                 "netconf-node-topology:reconnect-on-changed-schema": "false",
541                 "netconf-node-topology:host": "<node-ip-address>",
542                 "netconf-node-topology:default-request-timeout-millis": "120000",
543                 "netconf-node-topology:max-connection-attempts": "0",
544                 "netconf-node-topology:sleep-factor": "1.5",
545                 "netconf-node-topology:actor-response-wait-time": "5",
546                 "netconf-node-topology:concurrent-rpc-limit": "0",
547                 "netconf-node-topology:between-attempts-timeout-millis": "2000",
548                 "netconf-node-topology:port": "<netconf-port>",
549                 "netconf-node-topology:connection-timeout-millis": "20000",
550                 "netconf-node-topology:username": "<node-username>",
551                 "netconf-node-topology:password": "<node-password>",
552                 "netconf-node-topology:keepalive-delay": "300"
553             }
554         ]
555     }
556
557
Then check that the NETCONF session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.
560
561 **REST API** : *GET /rests/data/network-topology:network-topology/topology=topology-netconf/node=<node-id>?content=nonconfig*
562
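The whole sequence can be scripted. The sketch below, assuming a controller reachable on localhost
with default credentials, sends the connection request and then polls the operational datastore until
the device reports **connected**.

.. code:: python

    # Hedged sketch: mount a NETCONF device and wait until it is connected.
    # Controller address, credentials (admin/admin) and node-id are assumptions.
    import time

    import requests

    CONTROLLER = "http://localhost:8181"
    AUTH = ("admin", "admin")
    NODE_ID = "ROADM-A1"  # hypothetical node-id

    node_url = (f"{CONTROLLER}/rests/data/network-topology:network-topology/"
                f"topology=topology-netconf/node={NODE_ID}")

    payload = {"node": [{
        "node-id": NODE_ID,
        "netconf-node-topology:host": "<node-ip-address>",
        "netconf-node-topology:port": "<netconf-port>",
        "netconf-node-topology:username": "<node-username>",
        "netconf-node-topology:password": "<node-password>",
        "netconf-node-topology:tcp-only": "false",
    }]}

    requests.put(node_url, json=payload, auth=AUTH)

    # Poll the operational datastore until the NETCONF session is established.
    while True:
        state = requests.get(f"{node_url}?content=nonconfig", auth=AUTH).json()
        node = state.get("network-topology:node", [{}])[0]
        if node.get("netconf-node-topology:connection-status") == "connected":
            break
        time.sleep(2)
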
563
564 Node configuration discovery
565 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
566
Once the controller is connected to the node, the transportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical port related to transmission. All *circuit-packs* inside the node configuration are
analyzed.
571
Use the following RESTconf URI to check the result of this discovery function, internally named *portMapping*.
573
574 **REST API** : *GET /rests/data/transportpce-portmapping:network*
575
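A minimal sketch to retrieve the portmapping data and list the Logical Connection Points of each node
is shown below. The controller address and credentials are assumptions, and the exact layout of the
portmapping data (nodes and mapping lists) should be checked against the transportpce-portmapping
YANG model.

.. code:: python

    # Hedged sketch: dump the Logical Connection Points discovered by portMapping.
    # Controller address, credentials and the nodes/mapping layout are indicative.
    import requests

    response = requests.get(
        "http://localhost:8181/rests/data/transportpce-portmapping:network",
        auth=("admin", "admin"),
    )
    network = response.json().get("transportpce-portmapping:network", {})

    for node in network.get("nodes", []):
        print(node.get("node-id"))
        for mapping in node.get("mapping", []):
            # e.g. DEG1-TTP-TXRX, SRG1-PP1-TXRX, XPDR1-NETWORK1 ...
            print("  ", mapping.get("logical-connection-point"))
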
576 .. note::
577
578     In ``org-openroadm-device.yang``, four types of optical nodes can be managed:
579         * rdm: ROADM device (optical switch)
580         * xpdr: Xponder device (device that converts client to optical channel interface)
581         * ila: in line amplifier (optical amplifier)
582         * extplug: external pluggable (an optical pluggable that can be inserted in an external unit such as a router)
583
    TransportPCE currently supports rdm and xpdr nodes.
585
Depending on the kind of OpenROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:
588
589 -  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on a rdm equipment
590 -  SRG<srg-number>-PP<port-number>: created on the client port of a srg on a rdm equipment
591 -  XPDR<number>-CLIENT<port-number>: created on the client port of a xpdr equipment
592 -  XPDR<number>-NETWORK<port-number>: created on the line port of a xpdr equipment
593
594     For further details on openROADM device models, see `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.
595
596 Optical Network topology
597 ~~~~~~~~~~~~~~~~~~~~~~~~
598
599 Before creating an optical connectivity service, your topology must contain at least two xpdr
600 devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
601 created by transportPCE. Nevertheless, depending on the configuration inside optical nodes, this
topology can be partial. Check that a link of type *ROADMtoROADM* exists between any two adjacent rdm
nodes.
604
605 **REST API** : *GET /rests/data/ietf-network:networks/network=openroadm-topology*
606
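A quick way to perform this check is sketched below: it lists the links of the *openroadm-topology*
together with their types. The controller address, credentials and the attribute carrying the link
type are assumptions depending on the OpenROADM network model version in use.

.. code:: python

    # Hedged sketch: list openroadm-topology links and their types to verify
    # that ROADM-to-ROADM links exist between adjacent rdm nodes.
    import requests

    response = requests.get(
        "http://localhost:8181/rests/data/ietf-network:networks/network=openroadm-topology",
        auth=("admin", "admin"),
    )
    network = response.json()["ietf-network:network"][0]

    for link in network.get("ietf-network-topology:link", []):
        # The link-type attribute name is indicative (OpenROADM network augmentation).
        print(link["link-id"], link.get("org-openroadm-common-network:link-type"))
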
If it is not the case, you need to manually complement the topology with *ROADMtoROADM* links using
the following REST RPC:
609
610
611 **REST API** : *POST /rests/operations/transportpce-networkutils:init-roadm-nodes*
612
613 **Sample JSON Data**
614
615 .. code:: json
616
617     {
618       "input": {
619         "rdm-a-node": "<node-id-A>",
620         "deg-a-num": "<degree-A-number>",
621         "termination-point-a": "<Logical-Connection-Point>",
622         "rdm-z-node": "<node-id-Z>",
623         "deg-z-num": "<degree-Z-number>",
624         "termination-point-z": "<Logical-Connection-Point>"
625       }
626     }
627
628 *<Logical-Connection-Point> comes from the portMapping function*.
629
630 Unidirectional links between xpdr and rdm nodes must be created manually. To that end use the two
631 following REST RPCs:
632
633 From xpdr to rdm:
634 ^^^^^^^^^^^^^^^^^
635
636 **REST API** : *POST /rests/operations/transportpce-networkutils:init-xpdr-rdm-links*
637
638 **Sample JSON Data**
639
640 .. code:: json
641
642     {
643       "input": {
644         "links-input": {
645           "xpdr-node": "<xpdr-node-id>",
646           "xpdr-num": "1",
647           "network-num": "<xpdr-network-port-number>",
648           "rdm-node": "<rdm-node-id>",
649           "srg-num": "<srg-number>",
650           "termination-point-num": "<Logical-Connection-Point>"
651         }
652       }
653     }
654
655 From rdm to xpdr:
656 ^^^^^^^^^^^^^^^^^
657
658 **REST API** : *POST /rests/operations/transportpce-networkutils:init-rdm-xpdr-links*
659
660 **Sample JSON Data**
661
662 .. code:: json
663
664     {
665       "input": {
666         "links-input": {
667           "xpdr-node": "<xpdr-node-id>",
668           "xpdr-num": "1",
669           "network-num": "<xpdr-network-port-number>",
670           "rdm-node": "<rdm-node-id>",
671           "srg-num": "<srg-number>",
672           "termination-point-num": "<Logical-Connection-Point>"
673         }
674       }
675     }
676
677 OTN topology
678 ~~~~~~~~~~~~
679
680 Before creating an OTN service, your topology must contain at least two xpdr devices of MUXPDR
681 or SWITCH type connected to two different rdm devices. To check that these xpdr are present in the
OTN topology, use the following command on the REST API:
683
684 **REST API** : *GET /rests/data/ietf-network:networks/network=otn-topology*
685
An optical connectivity service shall have been created in a first step. Since Magnesium SR2, the OTN
links are automatically populated in the topology after the Och, OTU4 and ODU4 interfaces have
been created on the two network ports of the xpdr.
689
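A quick sketch to verify the content of the *otn-topology* (presence of MUXPDR/SWITCH nodes and of
the automatically created OTN links) is given below; the controller address, credentials and the
node-type attribute name are assumptions.

.. code:: python

    # Hedged sketch: list otn-topology nodes and links to check that OTN
    # Xponders (MUXPDR or SWITCH) and OTU4/ODU4 links are present.
    import requests

    response = requests.get(
        "http://localhost:8181/rests/data/ietf-network:networks/network=otn-topology",
        auth=("admin", "admin"),
    )
    network = response.json()["ietf-network:network"][0]

    for node in network.get("node", []):
        # The node-type attribute name is indicative and model-version dependent.
        print("node:", node["node-id"],
              node.get("org-openroadm-common-network:node-type"))

    for link in network.get("ietf-network-topology:link", []):
        print("link:", link["link-id"])
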
690 Creating a service
691 ~~~~~~~~~~~~~~~~~~
692
Use the *service handler* module to create any end-to-end connectivity service on an OpenROADM
network. Two different kinds of end-to-end "optical" services are managed by TransportPCE:

- 100GE/400GE services from client port to client port of two transponders (TPDR)
- Optical Channel (OC) services from client add/drop port (PP port of SRG) to client add/drop port of
  two ROADMs.
698
For these services, TransportPCE automatically invokes the *renderer* module to create all required
interfaces and cross-connections on each device supporting the service.
As an example, the creation of a 100GE service implies, among other things, the creation of OCH or
Optical Tributary Signal (OTSi), OTU4 and ODU4 interfaces on the network port of TPDR devices.
The creation of a 400GE service implies the creation of OTSi, OTUC4, ODUC4 and ODU4 interfaces on
the network port of TPDR devices.
705
Since Magnesium SR2, the *service handler* module directly manages some end-to-end OTN
connectivity services.
Before creating a low-order OTN service (1GE or 10GE services terminating on client ports of MUXPDR
or SWITCH), the user must ensure that a high-order ODU4 container exists and has previously been
configured (meaning structured to support low-order OTN containers).
Thus, OTN service creation implies three steps:

1. OCH-OTU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
2. HO-ODU4 service from network port to network port of two OTN Xponders (MUXPDR or SWITCH)
3. 10GE service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
715
716 The management of other OTN services (1GE-ODU0, 100GE...) is planned for future releases.
717
718
719 100GE service creation
720 ^^^^^^^^^^^^^^^^^^^^^^
721
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end optical connectivity service between two xpdrs over an optical network composed of rdm
nodes.
725
726 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
727
728 **Sample JSON Data**
729
730 .. code:: json
731
732     {
733         "input": {
734             "sdnc-request-header": {
735                 "request-id": "request-1",
736                 "rpc-action": "service-create",
737                 "request-system-id": "appname"
738             },
739             "service-name": "test1",
740             "common-id": "commonId",
741             "connection-type": "service",
742             "service-a-end": {
743                 "service-rate": "100",
744                 "node-id": "<xpdr-node-id>",
745                 "service-format": "Ethernet",
746                 "clli": "<ccli-name>",
747                 "tx-direction": [{
748                     "port": {
749                         "port-device-name": "<xpdr-client-port>",
750                         "port-type": "fixed",
751                         "port-name": "<xpdr-client-port-number>",
752                         "port-rack": "000000.00",
753                         "port-shelf": "Chassis#1"
754                     },
755                     "lgx": {
756                         "lgx-device-name": "Some lgx-device-name",
757                         "lgx-port-name": "Some lgx-port-name",
758                         "lgx-port-rack": "000000.00",
759                         "lgx-port-shelf": "00"
760                     },
761                     "index": 0
762                 }],
763                 "rx-direction": [{
764                     "port": {
765                         "port-device-name": "<xpdr-client-port>",
766                         "port-type": "fixed",
767                         "port-name": "<xpdr-client-port-number>",
768                         "port-rack": "000000.00",
769                         "port-shelf": "Chassis#1"
770                     },
771                     "lgx": {
772                         "lgx-device-name": "Some lgx-device-name",
773                         "lgx-port-name": "Some lgx-port-name",
774                         "lgx-port-rack": "000000.00",
775                         "lgx-port-shelf": "00"
776                     },
777                     "index": 0
778                 }],
779                 "optic-type": "gray"
780             },
781             "service-z-end": {
782                 "service-rate": "100",
783                 "node-id": "<xpdr-node-id>",
784                 "service-format": "Ethernet",
785                 "clli": "<ccli-name>",
786                 "tx-direction": [{
787                     "port": {
788                         "port-device-name": "<xpdr-client-port>",
789                         "port-type": "fixed",
790                         "port-name": "<xpdr-client-port-number>",
791                         "port-rack": "000000.00",
792                         "port-shelf": "Chassis#1"
793                     },
794                     "lgx": {
795                         "lgx-device-name": "Some lgx-device-name",
796                         "lgx-port-name": "Some lgx-port-name",
797                         "lgx-port-rack": "000000.00",
798                         "lgx-port-shelf": "00"
799                     },
800                     "index": 0
801                 }],
802                 "rx-direction": [{
803                     "port": {
804                         "port-device-name": "<xpdr-client-port>",
805                         "port-type": "fixed",
806                         "port-name": "<xpdr-client-port-number>",
807                         "port-rack": "000000.00",
808                         "port-shelf": "Chassis#1"
809                     },
810                     "lgx": {
811                         "lgx-device-name": "Some lgx-device-name",
812                         "lgx-port-name": "Some lgx-port-name",
813                         "lgx-port-rack": "000000.00",
814                         "lgx-port-shelf": "00"
815                     },
816                     "index": 0
817                 }],
818                 "optic-type": "gray"
819             },
820             "due-date": "yyyy-mm-ddT00:00:01Z",
821             "operator-contact": "some-contact-info"
822         }
823     }
824
The most important parameters for this REST RPC are the identification of the two physical client ports
on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the *openroadm-topology* and
then invokes the *renderer* and *OLM* to implement the end-to-end path into the devices.
828
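After the RPC has been accepted, the state of the created service can be followed in the
service-list. A hedged sketch is given below; the controller address, credentials, datastore path
and leaf names are assumptions to be checked against the OpenROADM service model.

.. code:: python

    # Hedged sketch: poll the operational service-list until the service
    # created above reaches the inService state. Path and leaf names are
    # indicative and should be checked against the OpenROADM service model.
    import time

    import requests

    SERVICE = "test1"
    url = ("http://localhost:8181/rests/data/org-openroadm-service:service-list/"
           f"services={SERVICE}?content=nonconfig")

    while True:
        response = requests.get(url, auth=("admin", "admin"))
        if response.ok:
            service = response.json()["org-openroadm-service:services"][0]
            print(service.get("administrative-state"), service.get("operational-state"))
            if service.get("operational-state") == "inService":
                break
        time.sleep(5)
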
829
830 OC service creation
831 ^^^^^^^^^^^^^^^^^^^
832
Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end Optical Channel (OC) connectivity service between two add/drop ports (PP ports of SRG
nodes) over an optical network only composed of rdm nodes.
836
837 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
838
839 **Sample JSON Data**
840
841 .. code:: json
842
843     {
844         "input": {
845             "sdnc-request-header": {
846                 "request-id": "request-1",
847                 "rpc-action": "service-create",
848                 "request-system-id": "appname"
849             },
850             "service-name": "something",
851             "common-id": "commonId",
852             "connection-type": "roadm-line",
853             "service-a-end": {
854                 "service-rate": "100",
855                 "node-id": "<xpdr-node-id>",
856                 "service-format": "OC",
857                 "clli": "<ccli-name>",
858                 "tx-direction": [{
859                     "port": {
860                         "port-device-name": "<xpdr-client-port>",
861                         "port-type": "fixed",
862                         "port-name": "<xpdr-client-port-number>",
863                         "port-rack": "000000.00",
864                         "port-shelf": "Chassis#1"
865                     },
866                     "lgx": {
867                         "lgx-device-name": "Some lgx-device-name",
868                         "lgx-port-name": "Some lgx-port-name",
869                         "lgx-port-rack": "000000.00",
870                         "lgx-port-shelf": "00"
871                     },
872                     "index": 0
873                 }],
874                 "rx-direction": [{
875                     "port": {
876                         "port-device-name": "<xpdr-client-port>",
877                         "port-type": "fixed",
878                         "port-name": "<xpdr-client-port-number>",
879                         "port-rack": "000000.00",
880                         "port-shelf": "Chassis#1"
881                     },
882                     "lgx": {
883                         "lgx-device-name": "Some lgx-device-name",
884                         "lgx-port-name": "Some lgx-port-name",
885                         "lgx-port-rack": "000000.00",
886                         "lgx-port-shelf": "00"
887                     },
888                     "index": 0
889                 }],
890                 "optic-type": "gray"
891             },
892             "service-z-end": {
893                 "service-rate": "100",
894                 "node-id": "<xpdr-node-id>",
895                 "service-format": "OC",
896                 "clli": "<ccli-name>",
897                 "tx-direction": [{
898                     "port": {
899                         "port-device-name": "<xpdr-client-port>",
900                         "port-type": "fixed",
901                         "port-name": "<xpdr-client-port-number>",
902                         "port-rack": "000000.00",
903                         "port-shelf": "Chassis#1"
904                     },
905                     "lgx": {
906                         "lgx-device-name": "Some lgx-device-name",
907                         "lgx-port-name": "Some lgx-port-name",
908                         "lgx-port-rack": "000000.00",
909                         "lgx-port-shelf": "00"
910                     },
911                     "index": 0
912                 }],
913                 "rx-direction": [{
914                     "port": {
915                         "port-device-name": "<xpdr-client-port>",
916                         "port-type": "fixed",
917                         "port-name": "<xpdr-client-port-number>",
918                         "port-rack": "000000.00",
919                         "port-shelf": "Chassis#1"
920                     },
921                     "lgx": {
922                         "lgx-device-name": "Some lgx-device-name",
923                         "lgx-port-name": "Some lgx-port-name",
924                         "lgx-port-rack": "000000.00",
925                         "lgx-port-shelf": "00"
926                     },
927                     "index": 0
928                 }],
929                 "optic-type": "gray"
930             },
931             "due-date": "yyyy-mm-ddT00:00:01Z",
932             "operator-contact": "some-contact-info"
933         }
934     }
935
936 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
937 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
938 the devices.
939
940 OTN OCH-OTU4 service creation
941 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
942
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end OTU4 over an optical wavelength connectivity service
between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a service configures the
optical network infrastructure composed of rdm nodes.
947
948 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
949
950 **Sample JSON Data**
951
952 .. code:: json
953
954     {
955         "input": {
956             "sdnc-request-header": {
957                 "request-id": "request-1",
958                 "rpc-action": "service-create",
959                 "request-system-id": "appname"
960             },
961             "service-name": "something",
962             "common-id": "commonId",
963             "connection-type": "infrastructure",
964             "service-a-end": {
965                 "service-rate": "100",
966                 "node-id": "<xpdr-node-id>",
967                 "service-format": "OTU",
968                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
969                 "clli": "<ccli-name>",
970                 "tx-direction": [{
971                     "port": {
972                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
973                         "port-type": "fixed",
974                         "port-name": "<xpdr-network-port-in-otn-topology>",
975                         "port-rack": "000000.00",
976                         "port-shelf": "Chassis#1"
977                     },
978                     "lgx": {
979                         "lgx-device-name": "Some lgx-device-name",
980                         "lgx-port-name": "Some lgx-port-name",
981                         "lgx-port-rack": "000000.00",
982                         "lgx-port-shelf": "00"
983                     },
984                     "index": 0
985                 }],
986                 "rx-direction": [{
987                     "port": {
988                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
989                         "port-type": "fixed",
990                         "port-name": "<xpdr-network-port-in-otn-topology>",
991                         "port-rack": "000000.00",
992                         "port-shelf": "Chassis#1"
993                     },
994                     "lgx": {
995                         "lgx-device-name": "Some lgx-device-name",
996                         "lgx-port-name": "Some lgx-port-name",
997                         "lgx-port-rack": "000000.00",
998                         "lgx-port-shelf": "00"
999                     },
1000                     "index": 0
1001                 }],
1002                 "optic-type": "gray"
1003             },
1004             "service-z-end": {
1005                 "service-rate": "100",
1006                 "node-id": "<xpdr-node-id>",
1007                 "service-format": "OTU",
1008                 "otu-service-rate": "org-openroadm-otn-common-types:OTU4",
1009                 "clli": "<ccli-name>",
1010                 "tx-direction": [{
1011                     "port": {
1012                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1013                         "port-type": "fixed",
1014                         "port-name": "<xpdr-network-port-in-otn-topology>",
1015                         "port-rack": "000000.00",
1016                         "port-shelf": "Chassis#1"
1017                     },
1018                     "lgx": {
1019                         "lgx-device-name": "Some lgx-device-name",
1020                         "lgx-port-name": "Some lgx-port-name",
1021                         "lgx-port-rack": "000000.00",
1022                         "lgx-port-shelf": "00"
1023                     },
1024                     "index": 0
1025                 }],
1026                 "rx-direction": [{
1027                     "port": {
1028                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1029                         "port-type": "fixed",
1030                         "port-name": "<xpdr-network-port-in-otn-topology>",
1031                         "port-rack": "000000.00",
1032                         "port-shelf": "Chassis#1"
1033                     },
1034                     "lgx": {
1035                         "lgx-device-name": "Some lgx-device-name",
1036                         "lgx-port-name": "Some lgx-port-name",
1037                         "lgx-port-rack": "000000.00",
1038                         "lgx-port-shelf": "00"
1039                     },
1040                     "index": 0
1041                 }],
1042                 "optic-type": "gray"
1043             },
1044             "due-date": "yyyy-mm-ddT00:00:01Z",
1045             "operator-contact": "some-contact-info"
1046         }
1047     }
1048
1049 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
1050 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
1051 the devices.
1052
1053 OTSi-OTUC4 service creation
1054 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
1055
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end OTUC4 over an Optical Tributary Signal (OTSi)
connectivity service between two optical network ports of OTN Xponders (MUXPDR or SWITCH). Such a
service configures the optical network infrastructure composed of rdm nodes.
1060
1061 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
1062
1063 **Sample JSON Data**
1064
1065 .. code:: json
1066
1067     {
1068         "input": {
1069             "sdnc-request-header": {
1070                 "request-id": "request-1",
1071                 "rpc-action": "service-create",
1072                 "request-system-id": "appname"
1073             },
1074             "service-name": "something",
1075             "common-id": "commonId",
1076             "connection-type": "infrastructure",
1077             "service-a-end": {
1078                 "service-rate": "400",
1079                 "node-id": "<xpdr-node-id>",
1080                 "service-format": "OTU",
1081                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1082                 "clli": "<ccli-name>",
1083                 "tx-direction": [{
1084                     "port": {
1085                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1086                         "port-type": "fixed",
1087                         "port-name": "<xpdr-network-port-in-otn-topology>",
1088                         "port-rack": "000000.00",
1089                         "port-shelf": "Chassis#1"
1090                     },
1091                     "lgx": {
1092                         "lgx-device-name": "Some lgx-device-name",
1093                         "lgx-port-name": "Some lgx-port-name",
1094                         "lgx-port-rack": "000000.00",
1095                         "lgx-port-shelf": "00"
1096                     },
1097                     "index": 0
1098                 }],
1099                 "rx-direction": [{
1100                     "port": {
1101                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1102                         "port-type": "fixed",
1103                         "port-name": "<xpdr-network-port-in-otn-topology>",
1104                         "port-rack": "000000.00",
1105                         "port-shelf": "Chassis#1"
1106                     },
1107                     "lgx": {
1108                         "lgx-device-name": "Some lgx-device-name",
1109                         "lgx-port-name": "Some lgx-port-name",
1110                         "lgx-port-rack": "000000.00",
1111                         "lgx-port-shelf": "00"
1112                     },
1113                     "index": 0
1114                 }],
1115                 "optic-type": "gray"
1116             },
1117             "service-z-end": {
1118                 "service-rate": "400",
1119                 "node-id": "<xpdr-node-id>",
1120                 "service-format": "OTU",
1121                 "otu-service-rate": "org-openroadm-otn-common-types:OTUCn",
1122                 "clli": "<ccli-name>",
1123                 "tx-direction": [{
1124                     "port": {
1125                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1126                         "port-type": "fixed",
1127                         "port-name": "<xpdr-network-port-in-otn-topology>",
1128                         "port-rack": "000000.00",
1129                         "port-shelf": "Chassis#1"
1130                     },
1131                     "lgx": {
1132                         "lgx-device-name": "Some lgx-device-name",
1133                         "lgx-port-name": "Some lgx-port-name",
1134                         "lgx-port-rack": "000000.00",
1135                         "lgx-port-shelf": "00"
1136                     },
1137                     "index": 0
1138                 }],
1139                 "rx-direction": [{
1140                     "port": {
1141                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1142                         "port-type": "fixed",
1143                         "port-name": "<xpdr-network-port-in-otn-topology>",
1144                         "port-rack": "000000.00",
1145                         "port-shelf": "Chassis#1"
1146                     },
1147                     "lgx": {
1148                         "lgx-device-name": "Some lgx-device-name",
1149                         "lgx-port-name": "Some lgx-port-name",
1150                         "lgx-port-rack": "000000.00",
1151                         "lgx-port-shelf": "00"
1152                     },
1153                     "index": 0
1154                 }],
1155                 "optic-type": "gray"
1156             },
1157             "due-date": "yyyy-mm-ddT00:00:01Z",
1158             "operator-contact": "some-contact-info"
1159         }
1160     }
1161
1162 As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
1163 *openroadm-topology* and then invokes *renderer* and *OLM* to implement the end-to-end path into
1164 the devices.
1165
One shall note that in Phosphorus SR0, as the OpenROADM 400G specifications are not available (neither
in the GNPy libraries, nor in the *PCE* module), path validation will be performed using the same
assumptions as for 100G. This means the path may be validated even though the optical performance does
not reach the expected level. This allows testing OpenROADM devices implementing B100G rates, but shall
not be used in operational conditions. The support for higher rate impairment aware path computation
will be introduced across the Phosphorus release train.
1172
1173 ODUC4 service creation
1174 ^^^^^^^^^^^^^^^^^^^^^^
1175
For ODUC4 service creation, the REST RPC used to invoke the *service handler* module in order to create an
ODUC4 over the OTSi-OTUC4 has the same format as the RPC used to create the latter. Only
"service-format" needs to be changed to "ODU", and "otu-service-rate" : "org-openroadm-otn-common-types:OTUCn"
needs to be replaced by "odu-service-rate" : "org-openroadm-otn-common-types:ODUCn"
in both service-a-end and service-z-end containers, as sketched below.
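
The excerpt below sketches the resulting *service-a-end* container (the same substitution applies to
*service-z-end*); all other fields remain identical to the OTUC4 request above:

.. code:: json

    {
        "service-a-end": {
            "service-rate": "400",
            "node-id": "<xpdr-node-id>",
            "service-format": "ODU",
            "odu-service-rate": "org-openroadm-otn-common-types:ODUCn",
            "clli": "<clli-name>"
        }
    }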
1181
1182 OTN HO-ODU4 service creation
1183 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1184
Use the following REST RPC to invoke the *service handler* module in order to create, over the optical
infrastructure, a bidirectional end-to-end ODU4 OTN service over an OTU4, structured to support
low-order OTN services (ODU2e, ODU0). As for OTU4, such a service must be created between two network
ports of an OTN Xponder (MUXPDR or SWITCH).
1189
1190 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
1191
1192 **Sample JSON Data**
1193
1194 .. code:: json
1195
1196     {
1197         "input": {
1198             "sdnc-request-header": {
1199                 "request-id": "request-1",
1200                 "rpc-action": "service-create",
1201                 "request-system-id": "appname"
1202             },
1203             "service-name": "something",
1204             "common-id": "commonId",
1205             "connection-type": "infrastructure",
1206             "service-a-end": {
1207                 "service-rate": "100",
1208                 "node-id": "<xpdr-node-id>",
1209                 "service-format": "ODU",
1210                 "otu-service-rate": "org-openroadm-otn-common-types:ODU4",
1211                 "clli": "<ccli-name>",
1212                 "tx-direction": [{
1213                     "port": {
1214                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1215                         "port-type": "fixed",
1216                         "port-name": "<xpdr-network-port-in-otn-topology>",
1217                         "port-rack": "000000.00",
1218                         "port-shelf": "Chassis#1"
1219                     },
1220                     "lgx": {
1221                         "lgx-device-name": "Some lgx-device-name",
1222                         "lgx-port-name": "Some lgx-port-name",
1223                         "lgx-port-rack": "000000.00",
1224                         "lgx-port-shelf": "00"
1225                     },
1226                     "index": 0
1227                 }],
1228                 "rx-direction": [{
1229                     "port": {
1230                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1231                         "port-type": "fixed",
1232                         "port-name": "<xpdr-network-port-in-otn-topology>",
1233                         "port-rack": "000000.00",
1234                         "port-shelf": "Chassis#1"
1235                     },
1236                     "lgx": {
1237                         "lgx-device-name": "Some lgx-device-name",
1238                         "lgx-port-name": "Some lgx-port-name",
1239                         "lgx-port-rack": "000000.00",
1240                         "lgx-port-shelf": "00"
1241                     },
1242                     "index": 0
1243                 }],
1244                 "optic-type": "gray"
1245             },
1246             "service-z-end": {
1247                 "service-rate": "100",
1248                 "node-id": "<xpdr-node-id>",
1249                 "service-format": "ODU",
1250                 "otu-service-rate": "org-openroadm-otn-common-types:ODU4",
1251                 "clli": "<ccli-name>",
1252                 "tx-direction": [{
1253                     "port": {
1254                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1255                         "port-type": "fixed",
1256                         "port-name": "<xpdr-network-port-in-otn-topology>",
1257                         "port-rack": "000000.00",
1258                         "port-shelf": "Chassis#1"
1259                     },
1260                     "lgx": {
1261                         "lgx-device-name": "Some lgx-device-name",
1262                         "lgx-port-name": "Some lgx-port-name",
1263                         "lgx-port-rack": "000000.00",
1264                         "lgx-port-shelf": "00"
1265                     },
1266                     "index": 0
1267                 }],
1268                 "rx-direction": [{
1269                     "port": {
1270                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1271                         "port-type": "fixed",
1272                         "port-name": "<xpdr-network-port-in-otn-topology>",
1273                         "port-rack": "000000.00",
1274                         "port-shelf": "Chassis#1"
1275                     },
1276                     "lgx": {
1277                         "lgx-device-name": "Some lgx-device-name",
1278                         "lgx-port-name": "Some lgx-port-name",
1279                         "lgx-port-rack": "000000.00",
1280                         "lgx-port-shelf": "00"
1281                     },
1282                     "index": 0
1283                 }],
1284                 "optic-type": "gray"
1285             },
1286             "due-date": "yyyy-mm-ddT00:00:01Z",
1287             "operator-contact": "some-contact-info"
1288         }
1289     }
1290
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain OTU4 links with valid bandwidth parameters, and then
invokes *renderer* and *OLM* to implement the end-to-end path into the devices.
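
Before requesting such a service, one may check that the *otn-topology* actually contains OTU4 links with
the expected available bandwidth. A possible way to do so (a hedged example, assuming the standard RFC 8040
RESTCONF data path and the *otn-topology* network identifier used by TransportPCE) is to read the topology
from the datastore:

**REST API** : *GET /rests/data/ietf-network:networks/network=otn-topology*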
1294
1295 OTN 10GE-ODU2e service creation
1296 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1297
Use the following REST RPC to invoke the *service handler* module in order to create, over the OTN
infrastructure, a bidirectional end-to-end 10GE-ODU2e OTN service over an ODU4.
Such a service must be created between two client ports of an OTN Xponder (MUXPDR or SWITCH)
configured to support 10GE interfaces.
1302
1303 **REST API** : *POST /rests/operations/org-openroadm-service:service-create*
1304
1305 **Sample JSON Data**
1306
1307 .. code:: json
1308
1309     {
1310         "input": {
1311             "sdnc-request-header": {
1312                 "request-id": "request-1",
1313                 "rpc-action": "service-create",
1314                 "request-system-id": "appname"
1315             },
1316             "service-name": "something",
1317             "common-id": "commonId",
1318             "connection-type": "service",
1319             "service-a-end": {
1320                 "service-rate": "10",
1321                 "node-id": "<xpdr-node-id>",
1322                 "service-format": "Ethernet",
1323                 "clli": "<ccli-name>",
1324                 "subrate-eth-sla": {
1325                     "subrate-eth-sla": {
1326                         "committed-info-rate": "10000",
1327                         "committed-burst-size": "64"
1328                     }
1329                 },
1330                 "tx-direction": [{
1331                     "port": {
1332                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1333                         "port-type": "fixed",
1334                         "port-name": "<xpdr-client-port-in-otn-topology>",
1335                         "port-rack": "000000.00",
1336                         "port-shelf": "Chassis#1"
1337                     },
1338                     "lgx": {
1339                         "lgx-device-name": "Some lgx-device-name",
1340                         "lgx-port-name": "Some lgx-port-name",
1341                         "lgx-port-rack": "000000.00",
1342                         "lgx-port-shelf": "00"
1343                     },
1344                     "index": 0
1345                 }],
1346                 "rx-direction": [{
1347                     "port": {
1348                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1349                         "port-type": "fixed",
1350                         "port-name": "<xpdr-client-port-in-otn-topology>",
1351                         "port-rack": "000000.00",
1352                         "port-shelf": "Chassis#1"
1353                     },
1354                     "lgx": {
1355                         "lgx-device-name": "Some lgx-device-name",
1356                         "lgx-port-name": "Some lgx-port-name",
1357                         "lgx-port-rack": "000000.00",
1358                         "lgx-port-shelf": "00"
1359                     },
1360                     "index": 0
1361                 }],
1362                 "optic-type": "gray"
1363             },
1364             "service-z-end": {
1365                 "service-rate": "10",
1366                 "node-id": "<xpdr-node-id>",
1367                 "service-format": "Ethernet",
1368                 "clli": "<ccli-name>",
1369                 "subrate-eth-sla": {
1370                     "subrate-eth-sla": {
1371                         "committed-info-rate": "10000",
1372                         "committed-burst-size": "64"
1373                     }
1374                 },
1375                 "tx-direction": [{
1376                     "port": {
1377                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1378                         "port-type": "fixed",
1379                         "port-name": "<xpdr-client-port-in-otn-topology>",
1380                         "port-rack": "000000.00",
1381                         "port-shelf": "Chassis#1"
1382                     },
1383                     "lgx": {
1384                         "lgx-device-name": "Some lgx-device-name",
1385                         "lgx-port-name": "Some lgx-port-name",
1386                         "lgx-port-rack": "000000.00",
1387                         "lgx-port-shelf": "00"
1388                     },
1389                     "index": 0
1390                 }],
1391                 "rx-direction": [{
1392                     "port": {
1393                         "port-device-name": "<xpdr-node-id-in-otn-topology>",
1394                         "port-type": "fixed",
1395                         "port-name": "<xpdr-client-port-in-otn-topology>",
1396                         "port-rack": "000000.00",
1397                         "port-shelf": "Chassis#1"
1398                     },
1399                     "lgx": {
1400                         "lgx-device-name": "Some lgx-device-name",
1401                         "lgx-port-name": "Some lgx-port-name",
1402                         "lgx-port-rack": "000000.00",
1403                         "lgx-port-shelf": "00"
1404                     },
1405                     "index": 0
1406                 }],
1407                 "optic-type": "gray"
1408             },
1409             "due-date": "yyyy-mm-ddT00:00:01Z",
1410             "operator-contact": "some-contact-info"
1411         }
1412     }
1413
As for the previous RPC, this RPC invokes the *PCE* module to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters, and then
invokes *renderer* and *OLM* to implement the end-to-end path into the devices.
1417
1418
1419 .. note::
    Since Magnesium SR2, the service-list corresponding to OCH-OTU4, ODU4 or 10GE-ODU2e services is
    updated in the service-list datastore.
1422
1423 .. note::
    trib-slot is used when the equipment supports contiguous trib-slot allocation (supported from
    Magnesium SR0). The trib-slot provided corresponds to the first of the used trib-slots.
    complex-trib-slots will be used when the equipment does not support contiguous trib-slot
    allocation. In this case, a list of the different trib-slots to be used shall be provided.
    The support for non-contiguous trib-slot allocation is planned for a later release.
1429
1430 Deleting a service
1431 ~~~~~~~~~~~~~~~~~~
1432
1433 Deleting any kind of service
1434 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1435
Use the following REST RPC to invoke the *service handler* module in order to delete a given optical
connectivity service.
1438
1439 **REST API** : *POST /rests/operations/org-openroadm-service:service-delete*
1440
1441 **Sample JSON Data**
1442
1443 .. code:: json
1444
1445     {
1446         "input": {
1447             "sdnc-request-header": {
1448                 "request-id": "request-1",
1449                 "rpc-action": "service-delete",
1450                 "request-system-id": "appname",
1451                 "notification-url": "http://localhost:8585/NotificationServer/notify"
1452             },
1453             "service-delete-req-info": {
1454                 "service-name": "something",
1455                 "tail-retention": "no"
1456             }
1457         }
1458     }
1459
The most important parameter for this REST RPC is the *service-name*.
1461
1462
1463 .. note::
    Deleting OTN services implies proceeding in the reverse way to their creation. Thus, OTN
    service deletion must respect the three following steps (illustrated after this note):

    1. delete first all 10GE services supported over any ODU4 to be deleted
    2. delete the ODU4
    3. delete the OCH-OTU4 supporting the just deleted ODU4
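
As an illustration with purely hypothetical service names, the *service-delete* RPC shown above would thus be
invoked three times, changing only the *service-name* of the *service-delete-req-info* container between calls:
first with ``10GE-service-1`` (and likewise for any other 10GE service carried by the ODU4 to be deleted), then
with ``ODU4-service-1``, and finally with ``OCH-OTU4-service-1``.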
1469
1470 Invoking PCE module
1471 ~~~~~~~~~~~~~~~~~~~
1472
Use the following REST RPCs to invoke the *PCE* module in order to check connectivity between xponder
nodes and the availability of a supporting optical connectivity between the network-ports of the
nodes.
1476
1477 Checking OTU4 service connectivity
1478 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1479
1480 **REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
1481
1482 **Sample JSON Data**
1483
1484 .. code:: json
1485
1486    {
1487       "input": {
1488            "service-name": "something",
1489            "resource-reserve": "true",
1490            "service-handler-header": {
1491              "request-id": "request1"
1492            },
1493            "service-a-end": {
1494              "service-rate": "100",
1495              "clli": "<clli-node>",
1496              "service-format": "OTU",
1497              "node-id": "<otn-node-id>"
1498            },
1499            "service-z-end": {
1500              "service-rate": "100",
1501              "clli": "<clli-node>",
1502              "service-format": "OTU",
1503              "node-id": "<otn-node-id>"
1504              },
1505            "pce-routing-metric": "hop-count"
1506        }
1507    }
1508
1509 .. note::
    here, the <otn-node-id> corresponds to the node-id as it appears in the "openroadm-network" topology layer
1512
1513 Checking ODU4 service connectivity
1514 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1515
1516 **REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
1517
1518 **Sample JSON Data**
1519
1520 .. code:: json
1521
1522    {
1523       "input": {
1524            "service-name": "something",
1525            "resource-reserve": "true",
1526            "service-handler-header": {
1527              "request-id": "request1"
1528            },
1529            "service-a-end": {
1530              "service-rate": "100",
1531              "clli": "<clli-node>",
1532              "service-format": "ODU",
1533              "node-id": "<otn-node-id>"
1534            },
1535            "service-z-end": {
1536              "service-rate": "100",
1537              "clli": "<clli-node>",
1538              "service-format": "ODU",
1539              "node-id": "<otn-node-id>"
1540              },
1541            "pce-routing-metric": "hop-count"
1542        }
1543    }
1544
1545 .. note::
    here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer
1547
1548 Checking 10GE/ODU2e service connectivity
1549 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1550
1551 **REST API** : *POST /rests/operations/transportpce-pce:path-computation-request*
1552
1553 **Sample JSON Data**
1554
1555 .. code:: json
1556
1557    {
1558       "input": {
1559            "service-name": "something",
1560            "resource-reserve": "true",
1561            "service-handler-header": {
1562              "request-id": "request1"
1563            },
1564            "service-a-end": {
1565              "service-rate": "10",
1566              "clli": "<clli-node>",
1567              "service-format": "Ethernet",
1568              "node-id": "<otn-node-id>"
1569            },
1570            "service-z-end": {
1571              "service-rate": "10",
1572              "clli": "<clli-node>",
1573              "service-format": "Ethernet",
1574              "node-id": "<otn-node-id>"
1575              },
1576            "pce-routing-metric": "hop-count"
1577        }
1578    }
1579
1580 .. note::
    here, the <otn-node-id> corresponds to the node-id as it appears in the "otn-topology" layer
1582
1583
1584 odl-transportpce-tapi
1585 ---------------------
1586
This feature allows the TransportPCE application to expose at its northbound interface APIs other than
those defined by the OpenROADM MSA. With this feature, TransportPCE provides part of the Transport-API (T-API)
specified by the Open Networking Foundation. More specifically, the Topology, Connectivity and Notification
Service components are implemented, making it possible to:
1591
1592 1. Expose to higher level applications an abstraction of its OpenROADM topologies in the form of topologies respecting the T-API modelling.
1593 2. Create/delete connectivity services between the Service Interface Points (SIPs) exposed by the T-API topology.
1594 3. Create/Delete Notification Subscription Service to expose to higher level applications T-API notifications through a Kafka server.
1595
1596 The current version of TransportPCE implements the *tapi-topology.yang*,
1597 *tapi-connectivity.yang* and *tapi-notification.yang* models in the revision
1598 2018-12-10 (T-API v2.1.2).
1599
1600 Additionally, support for the Path Computation Service will be added in future releases, which will allow T-PCE
1601 to compute a path over the T-API topology.
1602
1603 T-API Topology Service
1604 ~~~~~~~~~~~~~~~~~~~~~~
1605
1606 -  RPC calls implemented:
1607
1608    -  get-topology-details
1609
1610    -  get-node-details
1611
1612    -  get-node-edge-point-details
1613
1614    -  get-link-details
1615
1616    -  get-topology-list
1617
1618
As in IETF or OpenROADM topologies, T-API topologies are composed of lists of nodes and links that
abstract a set of network resources. T-API specifies the *T0 - Multi-layer topology* which is, as
indicated by its name, a single topology that collapses the network logical abstraction for all network
layers. Thus, an OpenROADM device such as, for example, an OTN xponder that manages the ETH, ODU, OTU
and optical wavelength network layers will be represented in the T-API T0 topology by two nodes:
one *DSR/ODU* node and one *Photonic Media* node. These two nodes are linked together through one or
several *transitional links*, depending on the number of network/line ports on the device.
1626
Aluminium SR2 comes with a complete refactoring of this module, handling in the same way the multi-layer
abstraction of any Xponder terminal device, whether it is a 100G transponder, an OTN muxponder or an
OTN switch. For all these devices, the implementation ensures that only relevant ports appear in the
resulting TAPI topology abstraction. In other words, only client/network ports that are
indirectly/directly connected to the ROADM infrastructure are considered for the abstraction.
Moreover, the whole ROADM infrastructure of the network is abstracted as a single photonic
node. Therefore, a pair of unidirectional xponder-output/xponder-input links present in *openroadm-topology*
is represented by a bidirectional *OMS* link in the TAPI topology.
In the same way, a pair of unidirectional OTN links (OTU4, ODU4) present in *otn-topology* is also
represented by a bidirectional OTN link in the TAPI topology, while retaining their available bandwidth
characteristics.
1638
Phosphorus SR0 extends the T-API topology service implementation by bringing a fully described topology.
*T0 - Full Multi-layer topology* is derived from the existing *T0 - Multi-layer topology*, but the ROADM
infrastructure is not abstracted, so the higher level application can get more details on the composition
of the ROADM infrastructure controlled by TransportPCE. Each ROADM node found in the *openroadm-network*
is converted into a *Photonic Media* node. The details of these T-API nodes are obtained from the
*openroadm-topology*. Therefore, the external traffic ports of *Degree* and *SRG* nodes are represented
with a set of Node Edge Points (NEPs) and SIPs belonging to the *Photonic Media* node, and a pair of
roadm-to-roadm links present in *openroadm-topology* is represented by a bidirectional *OMS* link in the
TAPI topology.
Additionally, T-API topology related information is stored in the TransportPCE datastore in the same way as
the OpenROADM topology layers. When a node is connected to the controller through the corresponding *REST API*,
the T-API topology context gets updated dynamically and stored.
1651
1652 .. note::
1653
    A naming nomenclature is defined to be able to map T-API and OpenROADM data, e.g.:

    - T-API_roadm_Name = OpenROADM_roadmID+T-API_layer
    - T-API_roadm_nep_Name = OpenROADM_roadmID+T-API_layer+OpenROADM_terminationPointID
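
For instance, with the identifiers used in the samples below, the ROADM *ROADM-A1* is exposed at the photonic
layer as the T-API node ``ROADM-A1+PHOTONIC_MEDIA``, and its termination point *DEG1-TTP-TXRX* as the NEP
``ROADM-A1+PHOTONIC_MEDIA+DEG1-TTP-TXRX``.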
1657
Three kinds of topologies are currently implemented. The first one is the *"T0 - Multi-layer topology"*
defined in the reference implementation of T-API. This topology gives an abstraction of the data coming
from openroadm-topology and otn-topology. Such a topology may be rather complex since most devices are
represented through several nodes and links.
Another topology, named *"Transponder 100GE"*, is also implemented. The latter provides a higher, much
simpler, level of abstraction for the specific case of 100GE transponders, in the form of a single
DSR node.
Lastly, the *T0 - Full Multi-layer* topology was added. This topology collapses the data coming
from openroadm-network, openroadm-topology and otn-topology. It gives a complete view of the optical
network as defined in the reference implementation of T-API.
1668
1669 The figure below shows an example of TAPI abstractions as performed by TransportPCE starting from Aluminium SR2.
1670
1671 .. figure:: ./images/TransportPCE-tapi-abstraction.jpg
1672    :alt: Example of T0-multi-layer TAPI abstraction in TransportPCE
1673
In this specific case, as far as the "A" side is concerned, we connect TransportPCE to two xponder
terminal devices at the NETCONF level:

- XPDR-A1 is a 100GE transponder and is represented by the XPDR-A1-XPDR1 node in *otn-topology*
- SPDR-SA1 is an OTN xponder that actually contains in its device configuration datastore two OTN
  xponder nodes (the OTN muxponder 10GE=>100G SPDR-SA1-XPDR1 and the OTN switch 4x100GE=>4x100G SPDR-SA1-XPDR2)

As represented on the bottom part of the figure, only one network port of XPDR-A1-XPDR1 is connected
to the ROADM infrastructure, and only one network port of the OTN muxponder is attached to the
ROADM infrastructure.
Such a network configuration results in the TAPI *T0 - Multi-layer topology* abstraction represented
in the center of the figure. Note that the OTN switch (SPDR-SA1-XPDR2), not being attached to the
ROADM infrastructure, is not abstracted.
Moreover, the 100GE transponders being connected, the TAPI *Transponder 100GE* topology results in a
single-layer DSR node with only the two Owned Node Edge Points representing the two 100GE client ports
of XPDR-A1-XPDR1 and XPDR-C1-XPDR1 respectively.
1688
1689
1690 **REST API** : *POST /rests/operations/tapi-topology:get-topology-details*
1691
1692 This request builds the TAPI *T0 - Multi-layer topology* abstraction with regard to the current
1693 state of *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
1694
1695 **Sample JSON Data**
1696
1697 .. code:: json
1698
1699     {
1700       "tapi-topology:input": {
1701         "tapi-topology:topology-id-or-name": "T0 - Multi-layer topology"
1702        }
1703     }
1704
1705 This request builds the TAPI *Transponder 100GE* abstraction with regard to the current state of
1706 *openroadm-topology* and *otn-topology* topologies stored in OpenDaylight datastores.
1707 Its main interest is to simply and directly retrieve 100GE client ports of 100G Transponders that may
1708 be connected together, through a point-to-point 100GE service running over a wavelength.
1709
1710 .. code:: json
1711
1712     {
1713       "tapi-topology:input": {
1714         "tapi-topology:topology-id-or-name": "Transponder 100GE"
1715         }
1716     }
1717
1718
1719 .. note::
1720
    As for the *T0 multi-layer* topology, only 100GE client ports whose associated 100G line
    port is connected to Add/Drop nodes of the ROADM infrastructure are retrieved, in order to
    abstract only relevant information.
1724
1725 This request builds the TAPI *T0 - Full Multi-layer* topology with respect to the information existing in
1726 the T-API topology context stored in OpenDaylight datastores.
1727
1728 .. code:: json
1729
1730     {
1731       "tapi-topology:input": {
1732         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology"
1733         }
1734     }
1735
1736 **REST API** : *POST /rests/operations/tapi-topology:get-node-details*
1737
1738 This request returns the information, stored in the Topology Context, of the corresponding T-API node.
The user can provide either the UUID of the node or its name.
1740
1741 **Sample JSON Data**
1742
1743 .. code:: json
1744
1745     {
1746       "tapi-topology:input": {
1747         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1748         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA"
1749       }
1750     }
1751
1752 **REST API** : *POST /rests/operations/tapi-topology:get-node-edge-point-details*
1753
1754 This request returns the information, stored in the Topology Context, of the corresponding T-API NEP.
The user can provide either the UUID of the NEP or its name.
1756
1757 **Sample JSON Data**
1758
1759 .. code:: json
1760
1761     {
1762       "tapi-topology:input": {
1763         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1764         "tapi-topology:node-id-or-name": "ROADM-A1+PHOTONIC_MEDIA",
1765         "tapi-topology:ep-id-or-name": "ROADM-A1+PHOTONIC_MEDIA+DEG1-TTP-TXRX"
1766       }
1767     }
1768
1769 **REST API** : *POST /rests/operations/tapi-topology:get-link-details*
1770
1771 This request returns the information, stored in the Topology Context, of the corresponding T-API link.
The user can provide either the UUID of the link or its name.
1773
1774 **Sample JSON Data**
1775
1776 .. code:: json
1777
1778     {
1779       "tapi-topology:input": {
1780         "tapi-topology:topology-id-or-name": "T0 - Full Multi-layer topology",
1781         "tapi-topology:link-id-or-name": "ROADM-C1-DEG1-DEG1-TTP-TXRXtoROADM-A1-DEG2-DEG2-TTP-TXRX"
1782       }
1783     }
1784
1785 T-API Connectivity & Common Services
1786 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1787
Phosphorus SR0 extends the T-API interface support by implementing the T-API Connectivity Service.
This interface enables a higher level controller or an orchestrator to request the creation of
connectivity services as defined in the *tapi-connectivity* model. As it is necessary to indicate the
two (or more) SIPs (or endpoints) of the connectivity service, the *tapi-common* model is implemented
to retrieve from the datastore all the information related to the SIPs in the tapi-context.
The current implementation of the connectivity service maps the *connectivity-request* into the appropriate
*openroadm-service-create* request and relies on the Service Handler to perform path calculation and configuration
of devices. Results received from the PCE and the Renderer are mapped back into T-API to create the
corresponding Connection End Points (CEPs) and Connections in the T-API Connectivity Context, which is stored
in the datastore.
1798
1799 This first implementation includes the creation of:
1800
1801 -   ROADM-to-ROADM tapi-connectivity service (MC connectivity service)
1802 -   OTN tapi-connectivity services (OCh/OTU, OTSi/OTU & ODU connectivity services)
1803 -   Ethernet tapi-connectivity services (DSR connectivity service)
1804
-  RPC calls implemented:
1806
1807    -  create-connectivity-service
1808
1809    -  get-connectivity-service-details
1810
1811    -  get-connection-details
1812
1813    -  delete-connectivity-service
1814
1815    -  get-connection-end-point-details
1816
1817    -  get-connectivity-service-list
1818
1819    -  get-service-interface-point-details
1820
1821    -  get-service-interface-point-list
1822
1823 Creating a T-API Connectivity service
1824 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1825
Use the *tapi* interface to create any end-to-end connectivity service on a T-API based
network. Two kinds of end-to-end "optical" connectivity services are managed by the TransportPCE T-API module:

- 10GE service from client port to client port of two OTN Xponders (MUXPDR or SWITCH)
- Media Channel (MC) connectivity service from client add/drop port (PP port of SRG) to
  client add/drop port of two ROADMs.
1831
As mentioned earlier, the T-API module interfaces with the Service Handler to automatically invoke the
*renderer* module to create all required TAPI connections and cross-connections on each device
supporting the service.
1835
Before creating a low-order OTN connectivity service (1GE or 10GE services terminating on a
client port of a MUXPDR or SWITCH), the user must ensure that a high-order ODU4 container
exists and has previously been configured (that is, structured) to support low-order OTN containers.
1840
Thus, OTN connectivity service creation implies three steps:

1. OTSi/OTU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in Photonic media layer)
2. ODU connectivity service from network port to network port of two OTN Xponders (MUXPDR or SWITCH in DSR/ODU layer)
3. 10GE connectivity service creation from client port to client port of two OTN Xponders (MUXPDR or SWITCH in DSR/ODU layer)
1845
1846 The first step corresponds to the OCH-OTU4 service from network port to network port of OpenROADM.
1847 The corresponding T-API cross and top connections are created between the CEPs of the T-API nodes
1848 involved in each request.
1849
1850 Additionally, an *MC connectivity service* could be created between two ROADMs to create an optical
1851 tunnel and reserve resources in advance. This kind of service corresponds to the OC service creation
1852 use case described earlier.
1853
1854 The management of other OTN services through T-API (1GE-ODU0, 100GE...) is planned for future releases.
1855
1856 Any-Connectivity service creation
1857 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1858 As for the Service Creation described for OpenROADM, the initial steps are the same:
1859
1860 -   Connect netconf devices to the controller
1861 -   Create XPDR-RDM links and configure RDM-to-RDM links (in openroadm topologies)
1862
1863 Bidirectional T-API links between xpdr and rdm nodes must be created manually. To that end, use the
1864 following REST RPCs:
1865
1866 From xpdr <--> rdm:
1867 ^^^^^^^^^^^^^^^^^^^
1868
1869 **REST API** : *POST /rests/operations/transportpce-tapinetworkutils:init-xpdr-rdm-tapi-link*
1870
1871 **Sample JSON Data**
1872
1873 .. code:: json
1874
1875     {
1876         "input": {
1877             "xpdr-node": "<XPDR_OpenROADM_id>",
1878             "network-tp": "<XPDR_TP_OpenROADM_id>",
1879             "rdm-node": "<ROADM_OpenROADM_id>",
1880             "add-drop-tp": "<ROADM_TP_OpenROADM_id>"
1881         }
1882     }
1883
Use the following REST RPC to invoke the T-API module in order to create a bidirectional connectivity
service between two devices. The network should be composed of two ROADMs and two Xponders (SWITCH or MUX).
1886
1887 **REST API** : *POST /rests/operations/tapi-connectivity:create-connectivity-service*
1888
1889 **Sample JSON Data**
1890
1891 .. code:: json
1892
1893     {
1894         "tapi-connectivity:input": {
1895             "tapi-connectivity:end-point": [
1896                 {
1897                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1898                     "tapi-connectivity:service-interface-point": {
1899                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1900                     },
1901                     "tapi-connectivity:administrative-state": "UNLOCKED",
1902                     "tapi-connectivity:operational-state": "ENABLED",
1903                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1904                     "tapi-connectivity:role": "SYMMETRIC",
1905                     "tapi-connectivity:protection-role": "WORK",
1906                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1907                     "tapi-connectivity:name": [
1908                         {
1909                             "tapi-connectivity:value-name": "OpenROADM node id",
1910                             "tapi-connectivity:value": "<OpenROADM node ID>"
1911                         }
1912                     ]
1913                 },
1914                 {
1915                     "tapi-connectivity:layer-protocol-name": "<Node_TAPI_Layer>",
1916                     "tapi-connectivity:service-interface-point": {
1917                         "tapi-connectivity:service-interface-point-uuid": "<SIP_UUID_of_NEP>"
1918                     },
1919                     "tapi-connectivity:administrative-state": "UNLOCKED",
1920                     "tapi-connectivity:operational-state": "ENABLED",
1921                     "tapi-connectivity:direction": "BIDIRECTIONAL",
1922                     "tapi-connectivity:role": "SYMMETRIC",
1923                     "tapi-connectivity:protection-role": "WORK",
1924                     "tapi-connectivity:local-id": "<OpenROADM node ID>",
1925                     "tapi-connectivity:name": [
1926                         {
1927                             "tapi-connectivity:value-name": "OpenROADM node id",
1928                             "tapi-connectivity:value": "<OpenROADM node ID>"
1929                         }
1930                     ]
1931                 }
1932             ],
1933             "tapi-connectivity:connectivity-constraint": {
1934                 "tapi-connectivity:service-layer": "<TAPI_Service_Layer>",
1935                 "tapi-connectivity:service-type": "POINT_TO_POINT_CONNECTIVITY",
1936                 "tapi-connectivity:service-level": "Some service-level",
1937                 "tapi-connectivity:requested-capacity": {
1938                     "tapi-connectivity:total-size": {
1939                         "value": "<CAPACITY>",
1940                         "unit": "GB"
1941                     }
1942                 }
1943             },
1944             "tapi-connectivity:state": "Some state"
1945         }
1946     }
1947
As for the previous RPC, MC and OTSi correspond to PHOTONIC_MEDIA layer services,
ODU to ODU layer services and 10GE/DSR to DSR layer services. This RPC invokes the
*Service Handler* module to trigger the *PCE* to compute a path over the
*otn-topology* that must contain ODU4 links with valid bandwidth parameters. Once the path is computed
and validated, the T-API CEPs (associated with a NEP), cross-connections and top connections will be created
according to the service request and the topology objects inside the computed path. Then, the *renderer* and
*OLM* are invoked to implement the end-to-end path into the devices and to update the status of the connections
and connectivity service.
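
As a purely illustrative sketch, for a 10GE (DSR) connectivity service the *<Node_TAPI_Layer>* placeholders of
both end points would be set to ``DSR``, and the connectivity-constraint could look as follows (the requested
capacity value is an assumption, expressed with the ``GB`` unit used in the sample above):

.. code:: json

    {
        "tapi-connectivity:connectivity-constraint": {
            "tapi-connectivity:service-layer": "DSR",
            "tapi-connectivity:service-type": "POINT_TO_POINT_CONNECTIVITY",
            "tapi-connectivity:requested-capacity": {
                "tapi-connectivity:total-size": {
                    "value": "10",
                    "unit": "GB"
                }
            }
        }
    }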
1956
1957 .. note::
    Refer to the "Unconstrained E2E Service Provisioning" use cases from the T-API Reference Implementation to get
    more details about the process of connectivity service creation.
1960
1961 Deleting a connectivity service
1962 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1963
Use the following REST RPC to invoke the *TAPI* module in order to delete a given optical
connectivity service.
1966
1967 **REST API** : *POST /rests/operations/tapi-connectivity:delete-connectivity-service*
1968
1969 **Sample JSON Data**
1970
1971 .. code:: json
1972
1973     {
1974         "tapi-connectivity:input": {
1975             "tapi-connectivity:service-id-or-name": "<Service_UUID_or_Name>"
1976         }
1977     }
1978
1979 .. note::
    Deleting OTN connectivity services implies proceeding in the reverse way to their creation. Thus, OTN
    connectivity service deletion must respect the three following steps:

    1. delete first all 10GE services supported over any ODU4 to be deleted
    2. delete ODU4
    3. delete MC-OTSi supporting the just deleted ODU4
1985
1986 T-API Notification Service
1987 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1988
1989 -  RPC calls implemented:
1990
1991    -  create-notification-subscription-service
1992
1993    -  get-supported-notification-types
1994
1995    -  delete-notification-subscription-service
1996
1997    -  get-notification-subscription-service-details
1998
1999    -  get-notification-subscription-service-list
2000
2001    -  get-notification-list
2002
Sulfur SR1 extends the T-API interface support by implementing the T-API notification service. This feature
allows TransportPCE to write and read tapi-notifications stored in topics of a Kafka server. It also upgrades
the nbinotifications module to support the serialization and deserialization of tapi-notifications into JSON
format and vice-versa. The current implementation of the notification service creates a Kafka topic and stores
tapi-notifications on reception of a create-notification-subscription-service request. Only connectivity-service
related notifications are stored in the Kafka server.
2009
In comparison with OpenROADM notifications, for which several pre-defined Kafka topics are created when the
nbinotifications module is instantiated, tapi-related Kafka topics are created on demand. Upon reception of a
*create-notification-subscription-service* request, a new topic is created in the Kafka server.
This topic is named after the connectivity-service UUID.
2014
2015 .. note::
    A Notification Subscription Service creation request may include a list of T-API object UUIDs; in that case
    one topic per UUID is created in the Kafka server.
2018
In the current implementation, only Connectivity Service related notifications are supported.
2020
2021 **REST API** : *POST /rests/operations/tapi-notification:get-supported-notification-types*
2022
The response body will include the supported notification types and object types.
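
A sketch of a possible response body is shown below. The leaf names (*supported-notification-types*,
*supported-object-types*) and the values returned are assumptions based on the T-API 2.1.2 *tapi-notification*
model and on the notification and object types used elsewhere in this guide; the actual content depends on the
implementation:

.. code:: json

    {
        "tapi-notification:output": {
            "supported-notification-types": [
                "ALARM_EVENT",
                "ATTRIBUTE_VALUE_CHANGE"
            ],
            "supported-object-types": [
                "CONNECTIVITY_SERVICE"
            ]
        }
    }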
2024
2025 Use the following RPC to create a Notification Subscription Service.
2026
2027 **REST API** : *POST /rests/operations/tapi-notification:create-notification-subscription-service*
2028
2029 **Sample JSON Data**
2030
2031 .. code:: json
2032
2033     {
2034         "tapi-notification:input": {
2035             "tapi-notification:subscription-filter": {
2036                 "tapi-notification:requested-notification-types": [
2037                     "ALARM_EVENT"
2038                 ],
2039                 "tapi-notification:requested-object-types": [
2040                     "CONNECTIVITY_SERVICE"
2041                 ],
2042                 "tapi-notification:requested-layer-protocols": [
2043                     "<LAYER_PROTOCOL_NAME>"
2044                 ],
2045                 "tapi-notification:requested-object-identifier": [
2046                     "<Service_UUID>"
2047                 ],
2048                 "tapi-notification:include-content": true,
2049                 "tapi-notification:local-id": "localId",
2050                 "tapi-notification:name": [
2051                     {
2052                         "tapi-notification:value-name": "Subscription name",
2053                         "tapi-notification:value": "<notification_service_name>"
2054                     }
2055                 ]
2056             },
2057             "tapi-notification:subscription-state": "ACTIVE"
2058         }
2059     }
2060
This call will return the *UUID* of the Notification Subscription Service, which can later be used to retrieve the
details of the created subscription, to delete the subscription (and all the related Kafka topics) or to retrieve
all the tapi notifications related to that subscription service.
2064
The figure below shows an example of the use of tapi and nbinotifications to notify the progress of a
connectivity service creation process. Depending on the status of the process, a tapi-notification with the
corresponding updated state of the connectivity service is sent to the topic "Service_UUID".
2068
2069 .. figure:: ./images/TransportPCE-tapi-nbinotifications-service-example.jpg
2070    :alt: Example of tapi connectivity service notifications using the feature nbinotifications in TransportPCE
2071
Additionally, when a connectivity service breaks down or is restored, a tapi notification advertising the new
status is sent to the Kafka server. An example of such a tapi notification is shown below.
2074
2075 **Sample JSON T-API notification**
2076
2077 .. code:: json
2078
2079     {
2080       "nbi-notifications:notification-tapi-service": {
2081         "layer-protocol-name": "<LAYER_PROTOCOL_NAME>",
2082         "notification-type": "ATTRIBUTE_VALUE_CHANGE",
2083         "changed-attributes": [
2084           {
2085             "value-name": "administrativeState",
2086             "old-value": "<LOCKED_OR_UNLOCKED>",
2087             "new-value": "<UNLOCKED_OR_LOCKED>"
2088           },
2089           {
2090             "value-name": "operationalState",
2091             "old-value": "DISABLED_OR_ENABLED",
2092             "new-value": "ENABLED_OR_DISABLED"
2093           }
2094         ],
2095         "target-object-name": [
2096           {
2097             "value-name": "Connectivity Service Name",
2098             "value": "<SERVICE_UUID>"
2099           }
2100         ],
2101         "uuid": "<NOTIFICATION_UUID>",
2102         "target-object-type": "CONNECTIVITY_SERVICE",
2103         "event-time-stamp": "2022-04-06T09:06:01+00:00",
2104         "target-object-identifier": "<SERVICE_UUID>"
2105       }
2106     }
2107
To retrieve these tapi connectivity service notifications stored in the Kafka server:
2109
2110 **REST API** : *POST /rests/operations/tapi-notification:get-notification-list*
2111
2112 **Sample JSON Data**
2113
2114 .. code:: json
2115
2116     {
2117         "tapi-notification:input": {
2118             "tapi-notification:subscription-id-or-name": "<SUBSCRIPTION_UUID_OR_NAME>",
2119             "tapi-notification:time-period": "time-period"
2120         }
2121     }
2122
Further development will support more types of T-API objects, e.g., node, link, topology, connection.
2124
2125 odl-transportpce-dmaap-client
2126 -----------------------------
2127
This feature allows the TransportPCE application to send notifications to the ONAP Dmaap Message Router
following service request results.
It listens on NBI notifications and sends the PublishNotificationService content to
Dmaap on the topic "unauthenticated.TPCE" through a POST request on /events/unauthenticated.TPCE.
It uses Jackson to serialize the notification to JSON and a Jersey client to send the POST request.
2133
2134 odl-transportpce-nbinotifications
2135 ---------------------------------
2136
This feature allows the TransportPCE application to write and read notifications stored in topics of a Kafka server.
It is basically composed of two kinds of elements. First are the 'publishers', which are in charge of sending a
notification to a Kafka server. To protect and only allow specific classes to send notifications, each publisher
is dedicated to an authorized class.
Second are the 'subscribers', which are in charge of reading notifications from a Kafka server.
So, when the feature is called to write a notification to a Kafka server, it serializes the notification
into JSON format and then publishes it in a topic of the server via a publisher.
And when the feature is called to read notifications from a Kafka server, it retrieves them from
the topic of the server via a subscriber and deserializes them.
2146
For now, when the REST RPC service-create is called to create a bidirectional end-to-end service,
depending on the success or failure of the creation, the feature notifies the result of
the creation to a Kafka server. The topics that store these notifications are named after the connection type
(service, infrastructure, roadm-line). For instance, if the RPC service-create is called to create an
infrastructure connection, the service notifications related to this connection will be stored in
the topic 'infrastructure'.
2153
The figure below shows an example of the nbinotifications feature used to notify the
progress of a service creation.
2156
2157 .. figure:: ./images/TransportPCE-nbinotifications-service-example.jpg
2158    :alt: Example of service notifications using the feature nbinotifications in TransportPCE
2159
2160
2161 Depending on the status of the service creation, two kinds of notifications can be published
2162 to the topic 'service' of the Kafka server.
2163
If the service was correctly implemented, the following notification will be published:
2165
2166
2167 -  **Service implemented !** : Indicates that the service was successfully implemented.
2168    It also contains all information concerning the new service.
2169
2170
Otherwise, the following notification will be published:
2172
2173
2174 -  **ServiceCreate failed ...** : Indicates that the process of service-create failed, and also contains
2175    the failure cause.
2176
2177
To retrieve these service notifications stored in the Kafka server:
2179
2180 **REST API** : *POST /rests/operations/nbi-notifications:get-notifications-process-service*
2181
2182 **Sample JSON Data**
2183
2184 .. code:: json
2185
2186     {
2187       "input": {
2188         "connection-type": "service",
2189         "id-consumer": "consumer",
2190         "group-id": "test"
2191        }
2192     }
2193
2194 .. note::
2195     The field 'connection-type' corresponds to the topic that stores the notifications.
2196
Another implementation of the notifications allows notifying any modification of the operational state of a service.
So when a service breaks down or is restored, a notification advertising the new status is sent to the Kafka server.
The topics that store these notifications in the Kafka server are also named after the connection type
(service, infrastructure, roadm-line), suffixed with the string 'alarm'.
2201
To retrieve these alarm notifications stored in the Kafka server:
2203
2204 **REST API** : *POST /rests/operations/nbi-notifications:get-notifications-alarm-service*
2205
2206 **Sample JSON Data**
2207
2208 .. code:: json
2209
2210     {
2211       "input": {
2212         "connection-type": "infrastructure",
2213         "id-consumer": "consumer",
2214         "group-id": "test"
2215        }
2216     }
2217
2218 .. note::
2219     This sample is used to retrieve all the alarm notifications related to infrastructure services.
2220
2221 Help
2222 ----
2223
2224 -  `TransportPCE Wiki <https://wiki.opendaylight.org/display/ODL/TransportPCE>`__