.. _transportpce-dev-guide:

TransportPCE Developer Guide
============================
TransportPCE is an application running on top of the OpenDaylight
controller. Its primary function is to control an optical transport
infrastructure using a non-proprietary South Bound Interface (SBI). It may be
interconnected with controllers of different layers (L2, L3 controller…), a
higher-layer controller and/or an orchestrator through non-proprietary
Application Programming Interfaces (APIs). Control includes the capability to
configure the optical equipment and to provision services according to a
request coming from a higher-layer controller and/or an orchestrator.
This capability may rely on the controller only, or it may be delegated to
distributed (standardized) protocols.
The modular architecture of TransportPCE is described in the next diagram. Each main
function, such as Topology Management, Path Computation Element (PCE), Service
Handler, Renderer (responsible for configuring the path through the optical
equipment) and Optical Line Management (OLM), is associated with a generic block
relying on open models, each of them communicating through published APIs.
.. figure:: ./images/tpce_architecture.jpg
   :alt: TransportPCE architecture

   TransportPCE architecture
The current version of TransportPCE is dedicated to the control of a WDM transport
infrastructure. The OTN layer will be integrated in a later step. The WDM layer is
built from colorless ROADMs and transponders.
The benefit of using a controller to provision services automatically strongly
relies on its ability to handle end-to-end optical services that span the
different network domains, potentially equipped with hardware coming from
different suppliers. Thus, interoperability in the optical layer is a key
element to get the benefit of automated control.
The initial design of TransportPCE leverages the Open ROADM Multi-Source Agreement (MSA),
which defines interoperability specifications consisting of both optical
interoperability and YANG data models.
Service Handler
^^^^^^^^^^^^^^^

The Service Handler handles requests coming from a higher-level controller or an
orchestrator through the northbound API, as defined in the Open ROADM service
model. The current implementation addresses the following RPCs: service-create,
service-delete and service-reroute.
It checks the consistency of the request and triggers path calculation by sending RPCs to the
PCE. If a valid path is returned by the PCE, path configuration is initiated
relying on the Renderer and the OLM.
On confirmation of a successful service creation, the Service Handler
updates the service list in the MD-SAL.
For service deletion, the Service Handler relies on the Renderer and the OLM to
delete connections and reset the power levels associated with the service.
The service list is updated following a successful service deletion.
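The create flow described above can be sketched as follows. This is an illustrative sketch only: the callables stand in for the inter-module RPCs (PCE path computation, Renderer path configuration, OLM power setup) and are hypothetical, not TransportPCE's actual Java interfaces.

```python
# Hypothetical sketch of the service-create flow: validate, ask the PCE for a
# path, render it, set power levels, then record the service in the service
# list. The callables stand in for the real inter-module RPCs.

def service_create(request, compute_path, configure_path, power_setup, service_list):
    """Handle a service-create request; returns an RPC-result-like dict."""
    # 1. Check the consistency of the request.
    if not all(request.get(k) for k in ("service-name", "service-aend", "service-zend")):
        return {"status": "failed", "reason": "inconsistent request"}

    # 2. Trigger path calculation by the PCE.
    path = compute_path(request)
    if path is None:
        return {"status": "failed", "reason": "no valid path"}

    # 3. Configure the path (Renderer) and set power levels (OLM).
    configure_path(path)
    power_setup(path)

    # 4. On success, update the service list (kept in MD-SAL in TransportPCE).
    service_list[request["service-name"]] = path
    return {"status": "success", "path": path}
```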
Path Computation Element
^^^^^^^^^^^^^^^^^^^^^^^^

The Path Computation Element (PCE) is the component responsible for path
calculation. An interface allows the Renderer or external components such as an
orchestrator to request a path computation and get a response from the PCE
including the computed path(s) in case of success, or errors and an indication of
the reason for the failure in case the request cannot be satisfied. Additional
parameters can be provided by the PCE along with the computed paths if
requested by the client module. An interface to the Topology Management module
keeps the PCE aligned with the latest changes in the topology. Information
about current and planned services is available in the MD-SAL data store.
The current implementation of the PCE finds the shortest path, minimizing
either the hop count (default) or the propagation delay. Wavelengths are assigned
on a fixed grid of 96 wavelengths. The current PCE also handles a set of
hard constraints.
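To illustrate the two metrics, here is a generic Dijkstra sketch (not TransportPCE's actual PCE code) in which each link carries a weight of 1 when minimizing hop count, or its propagation delay when minimizing delay:

```python
# Illustrative shortest-path sketch: Dijkstra over a weighted graph, where the
# edge weight is either 1 (hop count, the default) or the per-link propagation
# delay. Generic algorithm only; wavelength assignment is not modeled here.
import heapq

def shortest_path(graph, src, dst, metric="hop-count"):
    """graph: {node: [(neighbor, delay_ms), ...]}. Returns (cost, path) or None."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, delay in graph.get(node, []):
            if neighbor not in visited:
                weight = 1 if metric == "hop-count" else delay
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no path between src and dst
```

Depending on the chosen metric, a route with fewer but longer hops can win over a route with more but faster hops, which is why the two options can return different paths on the same topology.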
Topology Management
^^^^^^^^^^^^^^^^^^^
The Topology Management module builds the topology according to the network model
defined in Open ROADM. The topology is aligned with the I2RS model. It includes
several network layers:
- **CLLI layer**: corresponds to the locations that host equipment
- **Network layer**: corresponds to a first level of disaggregation where
  Xponders (transponders, muxponders or switchponders) are separated from ROADMs
- **Topology layer**: introduces a second level of disaggregation where ROADM
  Add/Drop modules ("SRGs") are separated from the degrees, which include line
  amplifiers and WSS that switch wavelengths from one degree to another
The OTN layer, which includes OTN elements that may or may not have the ability to switch OTN
containers from client to line cards, is not currently implemented.
Renderer
^^^^^^^^

The Renderer module, on a request coming from the Service Handler through a
service-implementation-request or service-delete RPC, sets up or deletes the path
corresponding to a specific service between its A and Z ends.
It first checks which interfaces already exist on the ports of the different
nodes that the path crosses, and then creates the missing interfaces. After all
needed interfaces have been created, it sets up the connections required in the
nodes and notifies the Service Handler of the status of the path creation.
The path is created in two steps (from A to Z and from Z to A). In case the path between
A and Z could not be fully created, a rollback function is called to set the
equipment on the path back to its initial configuration (as it was before
invoking the Renderer).
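The configure-then-rollback pattern described above can be sketched as follows. The node list and the `configure`/`rollback` helpers are hypothetical placeholders for the Renderer's device operations:

```python
# Illustrative sketch of path setup with rollback: configure each node along
# the path; on any failure, undo the nodes already configured, in reverse
# order, so the equipment returns to its pre-Renderer configuration.

def render_path(nodes, configure, rollback):
    """configure(node) raises on error; rollback(node) restores the node."""
    configured = []
    try:
        for node in nodes:                    # one pass per direction in practice
            configure(node)
            configured.append(node)
        return {"status": "success"}
    except Exception as err:
        for node in reversed(configured):     # undo in reverse order
            rollback(node)
        return {"status": "failed", "reason": str(err)}
```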
Optical Line Management
^^^^^^^^^^^^^^^^^^^^^^^

The Optical Line Management (OLM) module implements two main features: it is responsible
for setting up the optical power levels on the different interfaces, and it is in
charge of adjusting these settings across the life of the optical
infrastructure.
After the different connections have been established in the ROADMs, between two
degrees for an express path, or between an SRG and a degree for an add or drop
path (meaning the devices have set the WSS and all other required elements to
provide path continuity), power settings are provided as attributes of these
connections. This allows the device to set all complementary elements, such as
VOAs, to guarantee that the signal is launched at a correct power level
(in accordance with the specifications) in the fiber span. This also applies
to Xponders, as their output power must comply with the specifications defined
for the Add/Drop ports (SRG) of the ROADM. The OLM is responsible for
calculating the right power settings, sending them to the device, and checking the
PM data retrieved from the device to verify that the settings were correctly applied
and the configuration was successfully completed.
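The kind of arithmetic involved can be sketched in two lines: a span loss is the difference between the power measured at the transmit end and at the receive end, and a PM reading is checked against the requested setting within some tolerance. The function names and the 0.5 dB tolerance are illustrative assumptions, not OLM's actual values:

```python
# Illustrative power bookkeeping of the kind OLM performs. All names and the
# tolerance threshold are hypothetical, for illustration only.

def span_loss_db(tx_power_dbm, rx_power_dbm):
    """Span loss in dB: power launched at one end minus power received at the other."""
    return tx_power_dbm - rx_power_dbm

def power_setting_ok(target_dbm, measured_dbm, tolerance_db=0.5):
    """Check a PM reading against the requested power setting."""
    return abs(target_dbm - measured_dbm) <= tolerance_db
```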
Key APIs and Interfaces
-----------------------

The north API, interconnecting the Service Handler to higher-level applications,
relies on the Service Model defined in the MSA. The Renderer and the OLM are
developed to allow configuring Open ROADM devices through a southbound
NETCONF/YANG interface and rely on the MSA's device model.
ServiceHandler Service
^^^^^^^^^^^^^^^^^^^^^^

- service-create (given service-name, service-aend, service-zend)

- service-delete (given service-name)

- service-reroute (given service-name, service-aend, service-zend)

- service list : composed of services
- service : composed of service-name and topology, which describes the detailed path (list of used resources)

- service-rpc-result : result of a service RPC
- service-notification : a service has been added, modified or removed
Netconf Service
^^^^^^^^^^^^^^^

- connect-device : PUT
- disconnect-device : DELETE
- check-connected-device : GET

- node list : composed of netconf nodes in topology-netconf
Internal APIs define REST APIs to interconnect TransportPCE modules:

- Service Handler to PCE
- PCE to Topology Management
- Service Handler to Renderer
PCE Service
^^^^^^^^^^^

- path-computation-request (given service-name, service-aend, service-zend)

- cancel-resource-reserve (given service-name)

- service-path-rpc-result : result of a service RPC
Renderer Service
^^^^^^^^^^^^^^^^

- service-implementation-request (given service-name, service-aend, service-zend)

- service-delete (given service-name)

- service path list : composed of service paths
- service path : composed of service-name and a path description giving the list of abstracted elements (nodes, tps, links)

- service-path-rpc-result : result of a service RPC
Topology Management Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^

- network list : composed of networks (openroadm-topology, netconf-topology)
- node list : composed of node-id
- link list : composed of link-id
- node : composed of roadm, xponder
- link : composed of links of different types (roadm-to-roadm, express, add-drop ...)
OLM Service
^^^^^^^^^^^

- get-pm (given node-id)

- service-power-setup

- service-power-turndown

- service-power-reset

- calculate-spanloss-base

- calculate-spanloss-current
Running TransportPCE project
----------------------------

To use the TransportPCE controller, the first step is to connect the controller to optical nodes
through the NETCONF connector.

In the current version, only optical equipment compliant with the Open ROADM datamodels is managed.
To connect a node, use the following JSON RPC:

**REST API** : *PUT /restconf/config/network-topology:network-topology/topology/topology-netconf/node/<node-id>*

**Sample JSON Data**

.. code:: json

    {
        "node": [
            {
                "node-id": "<node-id>",
                "netconf-node-topology:tcp-only": "false",
                "netconf-node-topology:reconnect-on-changed-schema": "false",
                "netconf-node-topology:host": "<node-ip-address>",
                "netconf-node-topology:default-request-timeout-millis": "120000",
                "netconf-node-topology:max-connection-attempts": "0",
                "netconf-node-topology:sleep-factor": "1.5",
                "netconf-node-topology:actor-response-wait-time": "5",
                "netconf-node-topology:concurrent-rpc-limit": "0",
                "netconf-node-topology:between-attempts-timeout-millis": "2000",
                "netconf-node-topology:port": "<netconf-port>",
                "netconf-node-topology:connection-timeout-millis": "20000",
                "netconf-node-topology:username": "<node-username>",
                "netconf-node-topology:password": "<node-password>",
                "netconf-node-topology:keepalive-delay": "300"
            }
        ]
    }
Then check that the NETCONF session has been correctly established between the controller and the
node: the status of **netconf-node-topology:connection-status** must be **connected**.

**REST API** : *GET /restconf/operational/network-topology:network-topology/topology/topology-netconf/node/<node-id>*
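The connect-and-check sequence can be scripted; the sketch below uses only the standard library and a PUT, matching the ``connect-device : PUT`` operation listed earlier. The controller address (localhost:8181) and the admin/admin credentials are assumed defaults, not values from this guide; adjust them for your deployment.

```python
# Minimal sketch of connecting a netconf node and polling its connection
# status through RESTCONF. Controller URL and credentials are assumptions.
import base64
import json
import urllib.request

BASE = "http://localhost:8181/restconf"
AUTH = "Basic " + base64.b64encode(b"admin:admin").decode()

def connect_node_payload(node_id, host, port, username, password):
    """Body of the connect-device request, with the most common settings."""
    return {"node": [{
        "node-id": node_id,
        "netconf-node-topology:host": host,
        "netconf-node-topology:port": port,
        "netconf-node-topology:username": username,
        "netconf-node-topology:password": password,
        "netconf-node-topology:tcp-only": "false",
        "netconf-node-topology:keepalive-delay": "300",
    }]}

def _request(method, url, body=None):
    data = json.dumps(body).encode() if body is not None else None
    headers = {"Authorization": AUTH, "Content-Type": "application/json"}
    req = urllib.request.Request(url, data=data, method=method, headers=headers)
    with urllib.request.urlopen(req) as resp:
        raw = resp.read()
        return json.loads(raw) if raw else {}

def connect_node(node_id, host, port, username, password):
    url = f"{BASE}/config/network-topology:network-topology/topology/topology-netconf/node/{node_id}"
    return _request("PUT", url, connect_node_payload(node_id, host, port, username, password))

def connection_status(node_id):
    url = f"{BASE}/operational/network-topology:network-topology/topology/topology-netconf/node/{node_id}"
    node = _request("GET", url)["node"][0]
    return node.get("netconf-node-topology:connection-status")  # expect "connected"
```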
Node configuration discovery
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the controller is connected to the node, the TransportPCE application automatically launches a
discovery of the node configuration datastore and creates **Logical Connection Points** for any
physical port related to transmission. All *circuit-packs* inside the node configuration are
analyzed.

Use the following JSON RPC to check that function, internally named *portMapping*:

**REST API** : *GET /restconf/config/portmapping:network*
In ``org-openroadm-device.yang``, two types of optical nodes can be managed:

- rdm: ROADM device (optical switch)
- xpdr: Xponder device (device that converts a client interface to an optical channel interface)

Depending on the kind of Open ROADM device connected, different kinds of *Logical Connection Points*
should appear, if the node configuration is not empty:

- DEG<degree-number>-TTP-<port-direction>: created on the line port of a degree on an rdm node
- SRG<srg-number>-PP<port-number>: created on the client port of an SRG on an rdm node
- XPDR<number>-CLIENT<port-number>: created on the client port of an xpdr node
- XPDR<number>-NETWORK<port-number>: created on the line port of an xpdr node

For further details on Open ROADM device models, see the `Open ROADM MSA white paper <https://0201.nccdn.net/1_2/000/000/134/c50/Open-ROADM-MSA-release-2-Device-White-paper-v1-1.pdf>`__.
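The naming patterns above can be made concrete with a tiny helper. The patterns come straight from the list; the helper itself is a hypothetical illustration, not part of TransportPCE:

```python
# Illustrative composition of Logical Connection Point names from the
# patterns listed above. The helper and its argument names are hypothetical.

def logical_connection_point(node_type, **kw):
    """Build an LCP name for a rdm or xpdr port."""
    if node_type == "rdm-degree":
        return f"DEG{kw['degree']}-TTP-{kw['direction']}"   # e.g. DEG1-TTP-TXRX
    if node_type == "rdm-srg":
        return f"SRG{kw['srg']}-PP{kw['port']}"             # e.g. SRG1-PP1
    if node_type == "xpdr-client":
        return f"XPDR{kw['xpdr']}-CLIENT{kw['port']}"       # e.g. XPDR1-CLIENT1
    if node_type == "xpdr-network":
        return f"XPDR{kw['xpdr']}-NETWORK{kw['port']}"      # e.g. XPDR1-NETWORK1
    raise ValueError(f"unknown node type: {node_type}")
```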
Optical Network topology
~~~~~~~~~~~~~~~~~~~~~~~~

Before creating an optical connectivity service, your topology must contain at least two xpdr
devices connected to two different rdm devices. Normally, the *openroadm-topology* is automatically
created by TransportPCE. Nevertheless, depending on the configuration inside the optical nodes, this
topology can be partial. Check that a link of type *ROADMtoROADM* exists between two adjacent rdm
nodes:

**REST API** : *GET /restconf/config/ietf-network:network/openroadm-topology*

If it is not the case, you need to manually complement the topology with *ROADMtoROADM* links using
the following REST RPC:

**REST API** : *POST /restconf/operations/networkutils:init-roadm-nodes*
**Sample JSON Data**

.. code:: json

    {
        "networkutils:input": {
            "networkutils:rdm-a-node": "<node-id-A>",
            "networkutils:deg-a-num": "<degree-A-number>",
            "networkutils:termination-point-a": "<Logical-Connection-Point>",
            "networkutils:rdm-z-node": "<node-id-Z>",
            "networkutils:deg-z-num": "<degree-Z-number>",
            "networkutils:termination-point-z": "<Logical-Connection-Point>"
        }
    }

*<Logical-Connection-Point> comes from the portMapping function*.
Unidirectional links between xpdr and rdm nodes must be created manually. To that end, use the two
following REST RPCs:

**REST API** : *POST /restconf/operations/networkutils:init-xpdr-rdm-links*

**Sample JSON Data**

.. code:: json

    {
        "networkutils:input": {
            "networkutils:links-input": {
                "networkutils:xpdr-node": "<xpdr-node-id>",
                "networkutils:xpdr-num": "1",
                "networkutils:network-num": "<xpdr-network-port-number>",
                "networkutils:rdm-node": "<rdm-node-id>",
                "networkutils:srg-num": "<srg-number>",
                "networkutils:termination-point-num": "<Logical-Connection-Point>"
            }
        }
    }
**REST API** : *POST /restconf/operations/networkutils:init-rdm-xpdr-links*

**Sample JSON Data**

.. code:: json

    {
        "networkutils:input": {
            "networkutils:links-input": {
                "networkutils:xpdr-node": "<xpdr-node-id>",
                "networkutils:xpdr-num": "1",
                "networkutils:network-num": "<xpdr-network-port-number>",
                "networkutils:rdm-node": "<rdm-node-id>",
                "networkutils:srg-num": "<srg-number>",
                "networkutils:termination-point-num": "<Logical-Connection-Point>"
            }
        }
    }
Creating a service
~~~~~~~~~~~~~~~~~~

Use the following REST RPC to invoke the *service handler* module in order to create a bidirectional
end-to-end optical connectivity service between two xpdr nodes over an optical network composed of rdm
nodes:

**REST API** : *POST /restconf/operations/org-openroadm-service:service-create*
**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-create",
                "request-system-id": "appname"
            },
            "service-name": "test1",
            "common-id": "commonId",
            "connection-type": "service",
            "service-a-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                }
            },
            "service-z-end": {
                "service-rate": "100",
                "node-id": "<xpdr-node-id>",
                "service-format": "Ethernet",
                "clli": "<clli-name>",
                "tx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                },
                "rx-direction": {
                    "port": {
                        "port-device-name": "<xpdr-client-port>",
                        "port-type": "fixed",
                        "port-name": "<xpdr-client-port-number>",
                        "port-rack": "000000.00",
                        "port-shelf": "Chassis#1"
                    },
                    "lgx": {
                        "lgx-device-name": "Some lgx-device-name",
                        "lgx-port-name": "Some lgx-port-name",
                        "lgx-port-rack": "000000.00",
                        "lgx-port-shelf": "00"
                    }
                }
            },
            "due-date": "yyyy-mm-ddT00:00:01Z",
            "operator-contact": "some-contact-info"
        }
    }
The most important parameters of this REST RPC are the identification of the two physical client ports
on the xpdr nodes. This RPC invokes the *PCE* module to compute a path over the *openroadm-topology* and
then invokes the *renderer* and *OLM* modules to implement the end-to-end path in the devices.
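Because the a-end and z-end carry the same duplicated port/lgx blocks, it can help to build the payload programmatically. The sketch below mirrors the sample; the grouping of the ``port`` and ``lgx`` blocks under ``tx-direction``/``rx-direction`` follows the Open ROADM service model, and the helper functions themselves are hypothetical, not TransportPCE code:

```python
# Illustrative builder for the service-create input: one function per service
# end, one to assemble the full RPC body. Helper names are hypothetical.

def endpoint(node_id, clli, client_port, port_number, rate="100"):
    """Build one service end (a-end or z-end) of the service-create input."""
    port = {
        "port-device-name": client_port,
        "port-type": "fixed",
        "port-name": port_number,
        "port-rack": "000000.00",
        "port-shelf": "Chassis#1",
    }
    lgx = {
        "lgx-device-name": "Some lgx-device-name",
        "lgx-port-name": "Some lgx-port-name",
        "lgx-port-rack": "000000.00",
        "lgx-port-shelf": "00",
    }
    return {
        "service-rate": rate,
        "node-id": node_id,
        "service-format": "Ethernet",
        "clli": clli,
        "tx-direction": {"port": dict(port), "lgx": dict(lgx)},
        "rx-direction": {"port": dict(port), "lgx": dict(lgx)},
    }

def service_create_input(name, a_end, z_end):
    """Assemble the full RPC input from the two ends."""
    return {"input": {
        "sdnc-request-header": {
            "request-id": "request-1",
            "rpc-action": "service-create",
            "request-system-id": "appname",
        },
        "service-name": name,
        "common-id": "commonId",
        "connection-type": "service",
        "service-a-end": a_end,
        "service-z-end": z_end,
        "due-date": "yyyy-mm-ddT00:00:01Z",
        "operator-contact": "some-contact-info",
    }}
```

The resulting dict can then be serialized with ``json.dumps`` and POSTed to the service-create URL shown above.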
Deleting a service
~~~~~~~~~~~~~~~~~~

Use the following REST RPC to invoke the *service handler* module in order to delete a given optical
connectivity service:

**REST API** : *POST /restconf/operations/org-openroadm-service:service-delete*

**Sample JSON Data**

.. code:: json

    {
        "input": {
            "sdnc-request-header": {
                "request-id": "request-1",
                "rpc-action": "service-delete",
                "request-system-id": "appname",
                "notification-url": "http://localhost:8585/NotificationServer/notify"
            },
            "service-delete-req-info": {
                "service-name": "test1",
                "tail-retention": "no"
            }
        }
    }

The most important parameter of this REST RPC is the *service-name*.
- `TransportPCE Wiki <https://wiki.opendaylight.org/view/TransportPCE:Main>`__

- TransportPCE Mailing List
  (`developer <https://lists.opendaylight.org/mailman/listinfo/transportpce-dev>`__)