.. _transportpce-dev-guide:

TransportPCE Developer Guide
============================

Overview
--------

TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with Controllers of different layers (L2, L3 Controller…), a
higher layer Controller and/or an Orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment, and to provision services according to a
request coming from a higher layer controller and/or an orchestrator.
This capability may rely on the controller only, or it may be delegated to
distributed (standardized) protocols.


Architecture
------------

The TransportPCE modular architecture is described in the diagram below. Each main
function, such as Topology Management, the Path Computation Element (PCE), the Service
Handler, the Renderer (responsible for the path configuration through optical
equipment) and Optical Line Management (OLM), is associated with a generic block
relying on open models, each of them communicating through published APIs.


.. figure:: ./images/tpce_architecture.jpg
   :alt: TransportPCE architecture

   TransportPCE architecture

The current version of transportPCE is dedicated to the control of WDM transport
infrastructure. The OTN layer will be integrated in a later step. The WDM layer is
built from colorless ROADMs and transponders.

The interest of using a controller to automatically provision services strongly
relies on its ability to handle end-to-end optical services that span
different network domains, potentially equipped with hardware coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.

The initial design of TransportPCE leverages the Open ROADM Multi-Source Agreement
(MSA), which defines interoperability specifications consisting of both optical
interoperability and Yang data models.

Module description
~~~~~~~~~~~~~~~~~~

ServiceHandler
^^^^^^^^^^^^^^

The Service Handler handles requests coming from a higher level controller or an
orchestrator through the northbound API, as defined in the Open ROADM service
model. The current implementation addresses the following RPCs: service-create,
service-delete, service-reroute.
It checks the request consistency and triggers path calculation by sending RPCs to the
PCE. If a valid path is returned by the PCE, path configuration is initiated,
relying on the Renderer and the OLM.
At the confirmation of a successful service creation, the Service Handler
updates the service-list in the MD-SAL.
For service deletion, the Service Handler relies on the Renderer and the OLM to
delete connections and reset power levels associated with the service.
The service-list is updated following a successful service deletion.


PCE
^^^^^^^^^^^^^^

The Path Computation Element (PCE) is the component responsible for path
calculation. An interface allows the Renderer or external components such as an
orchestrator to request a path computation and get a response from the PCE
including the computed path(s) in case of success, or errors and an indication of
the reason for the failure in case the request cannot be satisfied. Additional
parameters can be provided by the PCE together with the computed paths if
requested by the client module. An interface to the Topology Management module
allows keeping the PCE aligned with the latest changes in the topology. Information
about current and planned services is available in the MD-SAL data store.

The current implementation of the PCE finds the shortest path, minimizing
either the hop count (default) or the propagation delay. The wavelength is assigned
considering a fixed grid of 96 wavelengths. The PCE currently handles the following
constraints as hard constraints:

-   **Node exclusion**
-   **SRLG exclusion**
-   **Maximum latency**

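As an illustration, the sketch below shows how such hard constraints could be carried in
a path computation request. The container and leaf names used here (``hard-constraints``,
``exclude``, ``node-id``, ``SRLG``, ``latency``, ``max-latency``) follow the spirit of the
OpenROADM routing-constraints grouping but are assumptions; they should be checked against
the YANG models bundled with your TransportPCE release, and the values are placeholders.

.. code:: json

    {
        "hard-constraints": {
            "exclude": {
                "node-id": ["<excluded-node-id>"],
                "SRLG": ["<excluded-srlg-id>"]
            },
            "latency": {
                "max-latency": "30"
            }
        }
    }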

Topology Management
^^^^^^^^^^^^^^^^^^^^^^^^

The Topology Management module builds the topology according to the network model
defined in OpenROADM. The topology is aligned with the I2RS model. It includes
several network layers:

-  **CLLI layer corresponds to the locations that host equipment**
-  **Network layer corresponds to a first level of disaggregation where we
   separate Xponders (transponders, muxponders or switchponders) from ROADMs**
-  **Topology layer introduces a second level of disaggregation where ROADM
   Add/Drop modules ("SRGs") are separated from the degrees, which include line
   amplifiers and the WSS that switches wavelengths from one degree to another**

The OTN layer, which includes OTN elements that may or may not have the ability to
switch OTN containers from client to line cards, is not currently implemented.

Renderer
^^^^^^^^

The Renderer module, on request coming from the Service Handler through a
service-implementation-request / service-delete RPC, sets up or deletes the path
corresponding to a specific service between the A and Z ends.
It first checks which interfaces already exist on the ports of the different
nodes that the path crosses. It then creates the missing interfaces. After all
needed interfaces have been created, it sets the connections required in the
nodes and notifies the Service Handler of the status of the path creation.
The path is created in two steps (from A to Z and from Z to A). In case the path between
A and Z could not be fully created, a rollback function is called to set the
equipment on the path back to its initial configuration (as it was before
invoking the Renderer).

OLM
^^^^^^^^

The Optical Line Management module implements two main features: it is responsible
for setting up the optical power levels on the different interfaces, and it is in
charge of adjusting these settings across the life of the optical
infrastructure.

After the different connections have been established in the ROADMs, between two
degrees for an express path, or between an SRG and a degree for an add or drop
path, meaning the devices have set the WSS and all other required elements to
provide path continuity, power settings are provided as attributes of these
connections. This allows the device to set all complementary elements, such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) into the fiber span. This also applies
to Xponders, as their output power must comply with the specifications defined
for the Add/Drop ports (SRG) of the ROADM. The OLM has the responsibility of
calculating the right power settings, sending them to the device, and checking the
PM data retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.

Key APIs and Interfaces
-----------------------

External API
~~~~~~~~~~~~

The northbound API, interconnecting the Service Handler to higher level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring Open ROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA’s device model.

ServiceHandler Service
^^^^^^^^^^^^^^^^^^^^^^

-  RPC call

   -  service-create (given service-name, service-aend, service-zend)

   -  service-delete (given service-name)

   -  service-reroute (given service-name, service-aend, service-zend)

-  Data structure

   -  service list : composed of services
   -  service : composed of service-name, topology which describes the detailed path (list of used resources)

-  Notification

   - service-rpc-result : result of service RPC
   - service-notification : service has been added, modified or removed

Netconf Service
^^^^^^^^^^^^^^^

-  RPC call

   -  connect-device : PUT
   -  disconnect-device : DELETE
   -  check-connected-device : GET

-  Data Structure

   -  node list : composed of netconf nodes in topology-netconf


Internal APIs
~~~~~~~~~~~~~

Internal APIs define REST APIs to interconnect TransportPCE modules:

-   Service Handler to PCE
-   PCE to Topology Management
-   Service Handler to Renderer
-   Renderer to OLM

Pce Service
^^^^^^^^^^^

-  RPC call

   -  path-computation-request (given service-name, service-aend, service-zend)

   -  cancel-resource-reserve (given service-name)

-  Notification

   - service-path-rpc-result : result of service RPC

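As an internal API illustration, a *path-computation-request* call could look like the
sketch below, assuming the RPC is exposed through RESTCONF at a URL such as
*POST /restconf/operations/transportpce-pce:path-computation-request*. The exact module
name and input leaf names (for instance ``resource-reserve``, ``pce-metric`` and
``service-handler-header``) should be verified against the YANG models of your release;
all values are placeholders.

.. code:: json

    {
      "input": {
        "service-name": "test1",
        "resource-reserve": "true",
        "pce-metric": "hop-count",
        "service-handler-header": {
          "request-id": "request-1"
        },
        "service-a-end": {
          "service-rate": "100",
          "service-format": "Ethernet",
          "clli": "<clli-A>",
          "node-id": "<xpdr-node-id-A>"
        },
        "service-z-end": {
          "service-rate": "100",
          "service-format": "Ethernet",
          "clli": "<clli-Z>",
          "node-id": "<xpdr-node-id-Z>"
        }
      }
    }
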
Renderer Service
^^^^^^^^^^^^^^^^

-  RPC call

   -  service-implementation-request (given service-name, service-aend, service-zend)

   -  service-delete (given service-name)

-  Data structure

   -  service path list : composed of service paths
   -  service path : composed of service-name, path description giving the list of abstracted elements (nodes, tps, links)

-  Notification

   - service-path-rpc-result : result of service RPC

Topology Management Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  Data structure

   -  network list : composed of networks (openroadm-topology, netconf-topology)
   -  node list : composed of node-id
   -  link list : composed of link-id
   -  node : composed of roadm, xponder
   -  link : composed of links of different types (roadm-to-roadm, express, add-drop ...)

OLM Service
^^^^^^^^^^^

-  RPC call

   -  get-pm (given node-id)

   -  service-power-setup

   -  service-power-turndown

   -  service-power-reset

   -  calculate-spanloss-base

   -  calculate-spanloss-current

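For illustration, the *get-pm* RPC, which retrieves performance monitoring data from a
node, could be invoked as sketched below, assuming it is exposed through RESTCONF at a
URL such as *POST /restconf/operations/transportpce-olm:get-pm*. The module name and the
input leaf names (``resource-type``, ``granularity``, ``resource-identifier``) should be
checked against the YANG models of your release; all values are placeholders.

.. code:: json

    {
      "input": {
        "node-id": "<node-id>",
        "resource-type": "interface",
        "granularity": "15min",
        "resource-identifier": {
          "resource-name": "<interface-name>"
        }
      }
    }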

Running transportPCE project
----------------------------

To use the transportPCE controller, the first step is to connect the controller to optical nodes
through the NETCONF connector.

.. note::

    In the current version, only optical equipment compliant with open ROADM datamodels is managed
    by transportPCE.


Connecting nodes
~~~~~~~~~~~~~~~~

To connect a node, use the following JSON RPC:

**REST API** : *POST /restconf/config/network-topology:network-topology/topology/topology-netconf/node/<node-id>*

**Sample JSON Data**

.. code:: json

    {
        "node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:tcp-only": "false",
                "netconf-node-topology:reconnect-on-changed-schema": "false",
                "netconf-node-topology:host": "<node-ip-address>",
                "netconf-node-topology:default-request-timeout-millis": "120000",
                "netconf-node-topology:max-connection-attempts": "0",
                "netconf-node-topology:sleep-factor": "1.5",
                "netconf-node-topology:actor-response-wait-time": "5",
                "netconf-node-topology:concurrent-rpc-limit": "0",
                "netconf-node-topology:between-attempts-timeout-millis": "2000",
                "netconf-node-topology:port": "<netconf-port>",
                "netconf-node-topology:connection-timeout-millis": "20000",
                "netconf-node-topology:username": "<node-username>",
                "netconf-node-topology:password": "<node-password>",
                "netconf-node-topology:keepalive-delay": "300"
            }
        ]
    }


Then check that the netconf session has been correctly established between the controller and the
node. The status of **netconf-node-topology:connection-status** must be **connected**.

**REST API** : *GET /restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<node-id>*

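The response should contain the connection status, as in the illustrative (truncated)
fragment below; other leaves, such as the advertised capabilities, are omitted here and
the values are placeholders.

.. code:: json

    {
      "node": [
        {
          "node-id": "<node-id>",
          "netconf-node-topology:connection-status": "connected"
        }
      ]
    }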

Node configuration discovery
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the controller is connected to the node, the transportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical ports related to transmission. All *circuit-packs* inside the node configuration are
analyzed.

Use the following REST request to check the result of that function, internally named *portMapping*.

**REST API** : *GET /restconf/config/portmapping:network*

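As an illustration, each node entry in the *portmapping* data contains a list of mappings
between a *Logical Connection Point* and the physical port that supports it, in the spirit
of the truncated sketch below. Leaf names such as ``supporting-circuit-pack-name`` and
``supporting-port`` should be checked against the transportpce portmapping YANG model of
your release; the values are placeholders.

.. code:: json

    {
      "network": {
        "nodes": [
          {
            "node-id": "<node-id>",
            "mapping": [
              {
                "logical-connection-point": "DEG1-TTP-TXRX",
                "supporting-circuit-pack-name": "<circuit-pack-name>",
                "supporting-port": "<port-name>"
              }
            ]
          }
        ]
      }
    }
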
.. note::

    In ``org-openroadm-device.yang``, two types of optical nodes can be managed:
        * rdm: ROADM device (optical switch)
        * xpdr: Xponder device (device that converts client to optical channel interface)

Depending on the kind of open ROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:

-  DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on an rdm equipment
-  SRG<srg-number>-PP<port-number>: created on the client port of an srg on an rdm equipment
-  XPDR<number>-CLIENT<port-number>: created on the client port of an xpdr equipment
-  XPDR<number>-NETWORK<port-number>: created on the line port of an xpdr equipment

    For further details on openROADM device models, see the `openROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.

Optical Network topology
~~~~~~~~~~~~~~~~~~~~~~~~

Before creating an optical connectivity service, your topology must contain at least two xpdr
devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
created by transportPCE. Nevertheless, depending on the configuration inside the optical nodes, this
topology can be partial. Check that a link of type *ROADMtoROADM* exists between the two adjacent rdm
nodes.

**REST API** : *GET /restconf/config/ietf-network:network/openroadm-topology*

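A *ROADMtoROADM* link should then appear in the returned *openroadm-topology*, roughly as
in the illustrative (truncated) link entry below. The augmentation prefix carrying the
link type depends on the OpenROADM network model version bundled with your release, so
treat the leaf names as assumptions and the values as placeholders.

.. code:: json

    {
      "link-id": "<roadm-A-deg-tp>to<roadm-Z-deg-tp>",
      "source": {
        "source-node": "<rdm-node-id-A>",
        "source-tp": "DEG<degree-A-number>-TTP-TXRX"
      },
      "destination": {
        "dest-node": "<rdm-node-id-Z>",
        "dest-tp": "DEG<degree-Z-number>-TTP-TXRX"
      },
      "org-openroadm-network-topology:link-type": "ROADM-TO-ROADM"
    }
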
If that is not the case, you need to manually complement the topology with the missing *ROADMtoROADM*
links using the following REST RPC:


**REST API** : *POST /restconf/operations/networkutils:init-roadm-nodes*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:rdm-a-node": "<node-id-A>",
        "networkutils:deg-a-num": "<degree-A-number>",
        "networkutils:termination-point-a": "<Logical-Connection-Point>",
        "networkutils:rdm-z-node": "<node-id-Z>",
        "networkutils:deg-z-num": "<degree-Z-number>",
        "networkutils:termination-point-z": "<Logical-Connection-Point>"
      }
    }

*<Logical-Connection-Point> comes from the portMapping function*.

Unidirectional links between xpdr and rdm nodes must be created manually. To that end, use the two
following REST RPCs:

From xpdr to rdm:
^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/networkutils:init-xpdr-rdm-links*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:links-input": {
          "networkutils:xpdr-node": "<xpdr-node-id>",
          "networkutils:xpdr-num": "1",
          "networkutils:network-num": "<xpdr-network-port-number>",
          "networkutils:rdm-node": "<rdm-node-id>",
          "networkutils:srg-num": "<srg-number>",
          "networkutils:termination-point-num": "<Logical-Connection-Point>"
        }
      }
    }

From rdm to xpdr:
^^^^^^^^^^^^^^^^^

**REST API** : *POST /restconf/operations/networkutils:init-rdm-xpdr-links*

**Sample JSON Data**

.. code:: json

    {
      "networkutils:input": {
        "networkutils:links-input": {
          "networkutils:xpdr-node": "<xpdr-node-id>",
          "networkutils:xpdr-num": "1",
          "networkutils:network-num": "<xpdr-network-port-number>",
          "networkutils:rdm-node": "<rdm-node-id>",
          "networkutils:srg-num": "<srg-number>",
          "networkutils:termination-point-num": "<Logical-Connection-Point>"
        }
      }
    }


Creating a service
~~~~~~~~~~~~~~~~~~

Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end optical connectivity service between two xpdrs over an optical network composed of rdm
nodes.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "test1",
            "common-id": "commonId",
            "connection-type": "service",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "optic-type": "gray"
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }

The most important parameters for this REST RPC are the identification of the two physical client ports
on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the *openroadm-topology* and
then invokes the *renderer* and the *OLM* to implement the end-to-end path into the devices.


Deleting a service
~~~~~~~~~~~~~~~~~~

Use the following REST RPC to invoke the *service handler* module in order to delete a given optical
connectivity service.

**REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete",
                "request-system-id": "appname",
                "notification-url": "http://localhost:8585/NotificationServer/notify"
            },
            "service-delete-req-info": {
                "service-name": "test1",
                "tail-retention": "no"
            }
        }
    }

The most important parameter for this REST RPC is the *service-name*.


Help
----

-  `TransportPCE Wiki <https://wiki.opendaylight.org/view/TransportPCE:Main>`__

-  TransportPCE Mailing List
   (`developer <https://lists.opendaylight.org/mailman/listinfo/transportpce-dev>`__)