1 OVSDB User Guide
2 ================
3
4 The OVSDB project implements the OVSDB protocol (RFC 7047), as well as
5 plugins to support OVSDB schemas, such as the Open\_vSwitch database
6 schema and the hardware\_vtep database schema.
7
8 OVSDB Plugins
9 -------------
10
11 Overview and Architecture
12 ~~~~~~~~~~~~~~~~~~~~~~~~~
13
14 There are currently two OVSDB Southbound plugins:
15
16 -  odl-ovsdb-southbound: Implements the OVSDB Open\_vSwitch database
17    schema.
18
19 -  odl-ovsdb-hwvtepsouthbound: Implements the OVSDB hardware\_vtep
20    database schema.
21
22 These plugins are normally installed and used automatically by higher
23 level applications such as odl-ovsdb-openstack; however, they can also
24 be installed separately and used via their REST APIs as is described in
25 the following sections.
26
27 OVSDB Southbound Plugin
28 ~~~~~~~~~~~~~~~~~~~~~~~
29
30 The OVSDB Southbound Plugin provides support for managing OVS hosts via
31 an OVSDB model in the MD-SAL which maps to important tables and
32 attributes present in the Open\_vSwitch schema. The OVSDB Southbound
33 Plugin is able to connect actively or passively to OVS hosts and operate
34 as the OVSDB manager of the OVS host. Using the OVSDB protocol it is
35 able to manage the OVS database (OVSDB) on the OVS host as defined by
36 the Open\_vSwitch schema.
37
38 OVSDB YANG Model
39 ^^^^^^^^^^^^^^^^
40
41 The OVSDB Southbound Plugin provides a YANG model which is based on the
42 abstract `network topology
43 model <https://github.com/opendaylight/yangtools/blob/stable/boron/yang/yang-parser-impl/src/test/resources/ietf/network-topology%402013-10-21.yang>`__.
44
45 The details of the OVSDB YANG model are defined in the
46 `ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/boron/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
47 file.
48
49 The OVSDB YANG model defines three augmentations:
50
51 **ovsdb-node-augmentation**
52     This augments the network-topology node and maps primarily to the
53     Open\_vSwitch table of the OVSDB schema. The ovsdb-node-augmentation
54     is a representation of the OVS host. It contains the following
55     attributes.
56
57     -  **connection-info** - holds the local and remote IP addresses and
58        TCP port numbers for the OpenDaylight-to-OVSDB-node connection
59
60     -  **db-version** - version of the OVSDB database
61
62     -  **ovs-version** - version of OVS
63
64     -  **list managed-node-entry** - a list of references to
65        ovsdb-bridge-augmentation nodes, which are the OVS bridges
66        managed by this OVSDB node
67
68     -  **list datapath-type-entry** - a list of the datapath types
69        supported by the OVSDB node (e.g. *system*, *netdev*) - depends
70        on newer OVS versions
71
72     -  **list interface-type-entry** - a list of the interface types
73        supported by the OVSDB node (e.g. *internal*, *vxlan*, *gre*,
74        *dpdk*, etc.) - depends on newer OVS versions
75
76     -  **list openvswitch-external-ids** - a list of the key/value pairs
77        in the Open\_vSwitch table external\_ids column
78
79     -  **list openvswitch-other-config** - a list of the key/value pairs
80        in the Open\_vSwitch table other\_config column
81
82     -  **list manager-entry** - a list of manager information entries and
83        connection status
84
85     -  **list qos-entries** - list of QoS entries present in the QoS
86        table
87
88     -  **list queues** - list of queue entries present in the queue
89        table
90
91 **ovsdb-bridge-augmentation**
92     This augments the network-topology node and maps to a specific
93     bridge in the OVSDB bridge table of the associated OVSDB node. It
94     contains the following attributes.
95
96     -  **bridge-uuid** - UUID of the OVSDB bridge
97
98     -  **bridge-name** - name of the OVSDB bridge
99
100     -  **bridge-openflow-node-ref** - a reference (instance-identifier)
101        of the OpenFlow node associated with this bridge
102
103     -  **list protocol-entry** - the version of OpenFlow protocol to use
104        with the OpenFlow controller
105
106     -  **list controller-entry** - a list of controller-uuid and
107        is-connected status of the OpenFlow controllers associated with
108        this bridge
109
110     -  **datapath-id** - the datapath ID associated with this bridge on
111        the OVSDB node
112
113     -  **datapath-type** - the datapath type of this bridge
114
115     -  **fail-mode** - the OVSDB fail mode setting of this bridge
116
117     -  **flow-node** - a reference to the flow node corresponding to
118        this bridge
119
120     -  **managed-by** - a reference to the ovsdb-node-augmentation
121        (OVSDB node) that is managing this bridge
122
123     -  **list bridge-external-ids** - a list of the key/value pairs in
124        the bridge table external\_ids column for this bridge
125
126     -  **list bridge-other-configs** - a list of the key/value pairs in
127        the bridge table other\_config column for this bridge
128
129 **ovsdb-termination-point-augmentation**
130     This augments the topology termination point model. The OVSDB
131     Southbound Plugin uses this model to represent both the OVSDB port
132     and OVSDB interface for a given port/interface in the OVSDB schema.
133     It contains the following attributes.
134
135     -  **port-uuid** - UUID of an OVSDB port row
136
137     -  **interface-uuid** - UUID of an OVSDB interface row
138
139     -  **name** - name of the port and interface
140
141     -  **interface-type** - the interface type
142
143     -  **list options** - a list of port options
144
145     -  **ofport** - the OpenFlow port number of the interface
146
147     -  **ofport\_request** - the requested OpenFlow port number for the
148        interface
149
150     -  **vlan-tag** - the VLAN tag value
151
152     -  **list trunks** - list of VLAN tag values for trunk mode
153
154     -  **vlan-mode** - the VLAN mode (e.g. access, native-tagged,
155        native-untagged, trunk)
156
157     -  **list port-external-ids** - a list of the key/value pairs in the
158        port table external\_ids column for this port
159
160     -  **list interface-external-ids** - a list of the key/value pairs
161        in the interface table external\_ids column for this interface
162
163     -  **list port-other-configs** - a list of the key/value pairs in
164        the port table other\_config column for this port
165
166     -  **list interface-other-configs** - a list of the key/value pairs
167        in the interface table other\_config column for this interface
168
169     -  **list interface-lldp** - LLDP Auto Attach configuration for the
170        interface
171
172     -  **qos** - UUID of the QoS entry in the QoS table assigned to this
173        port
174
175 Getting Started
176 ^^^^^^^^^^^^^^^
177
178 To install the OVSDB Southbound Plugin, use the following command at the
179 Karaf console:
180
181 ::
182
183     feature:install odl-ovsdb-southbound-impl-ui
184
185 After installing the OVSDB Southbound Plugin, and before any OVSDB
186 topology nodes have been created, the OVSDB topology will appear as
187 follows in the configuration and operational MD-SAL.
188
189 HTTP GET:
190
191 ::
192
193     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
194      or
195     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
196
197 Result Body:
198
199 ::
200
201     {
202       "topology": [
203         {
204           "topology-id": "ovsdb:1"
205         }
206       ]
207     }
208
209 Where
210
211 *<controller-ip>* is the IP address of the OpenDaylight controller
212
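The query above can also be scripted. The following is a minimal Python sketch using only the standard library; it assumes RESTCONF's default port 8181 (as in the URLs above) and the controller's default "admin"/"admin" credentials, both of which your deployment may override.

```python
import base64
import json
import urllib.request

def topology_url(controller_ip, datastore="config"):
    """Build the RESTCONF URL for the ovsdb:1 topology."""
    return ("http://%s:8181/restconf/%s/network-topology:network-topology"
            "/topology/ovsdb:1/" % (controller_ip, datastore))

def get_topology(controller_ip, datastore="config"):
    """GET the OVSDB topology from the given datastore as parsed JSON."""
    req = urllib.request.Request(topology_url(controller_ip, datastore))
    creds = base64.b64encode(b"admin:admin").decode("ascii")
    req.add_header("Authorization", "Basic " + creds)
    req.add_header("Accept", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Passing ``datastore="operational"`` queries the operational MD-SAL instead of the configuration MD-SAL.
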
213 OpenDaylight as the OVSDB Manager
214 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
215
216 An OVS host is a system which is running the OVS software and is capable
217 of being managed by an OVSDB manager. The OVSDB Southbound Plugin is
218 capable of connecting to an OVS host and operating as an OVSDB manager.
219 Depending on the configuration of the OVS host, the connection of
220 OpenDaylight to the OVS host will be active or passive.
221
222 Active Connection to OVS Hosts
223 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
224
225 An active connection is when the OVSDB Southbound Plugin initiates the
226 connection to an OVS host. This happens when the OVS host is configured
227 to listen for the connection (i.e. the OVSDB Southbound Plugin is active
228 and the OVS host is passive). The OVS host is configured with the
229 following command:
230
231 ::
232
233     sudo ovs-vsctl set-manager ptcp:6640
234
235 This configures the OVS host to listen on TCP port 6640.
236
237 The OVSDB Southbound Plugin can be configured via the configuration
238 MD-SAL to actively connect to an OVS host.
239
240 HTTP PUT:
241
242 ::
243
244     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
245
246 Body:
247
248 ::
249
250     {
251       "network-topology:node": [
252         {
253           "node-id": "ovsdb://HOST1",
254           "connection-info": {
255             "ovsdb:remote-port": "6640",
256             "ovsdb:remote-ip": "<ovs-host-ip>"
257           }
258         }
259       ]
260     }
261
262 Where
263
264 *<ovs-host-ip>* is the IP address of the OVS Host
265
266 Note that the configuration assigns a *node-id* of "ovsdb://HOST1" to
267 the OVSDB node. This *node-id* will be used as the identifier for this
268 OVSDB node in the MD-SAL.
269
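The request body above can be assembled programmatically. A minimal Python sketch follows; the helper name is ours, not part of the plugin's API, and the key names mirror the example above.

```python
def active_connection_body(node_id, ovs_host_ip, port=6640):
    """Build the PUT body that configures an active OVSDB connection."""
    return {
        "network-topology:node": [
            {
                "node-id": node_id,
                "connection-info": {
                    "ovsdb:remote-port": str(port),
                    "ovsdb:remote-ip": ovs_host_ip,
                },
            }
        ]
    }
```
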
270 Query the configuration MD-SAL for the OVSDB topology.
271
272 HTTP GET:
273
274 ::
275
276     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
277
278 Result Body:
279
280 ::
281
282     {
283       "topology": [
284         {
285           "topology-id": "ovsdb:1",
286           "node": [
287             {
288               "node-id": "ovsdb://HOST1",
289               "ovsdb:connection-info": {
290                 "remote-ip": "<ovs-host-ip>",
291                 "remote-port": 6640
292               }
293             }
294           ]
295         }
296       ]
297     }
298
299 As a result of the OVSDB node configuration being added to the
300 configuration MD-SAL, the OVSDB Southbound Plugin will attempt to
301 connect with the specified OVS host. If the connection is successful,
302 the plugin will connect to the OVS host as an OVSDB manager, query the
303 schemas and databases supported by the OVS host, and register to monitor
304 changes made to the OVSDB tables on the OVS host. It will also set an
305 external id key and value in the external-ids column of the
306 Open\_vSwitch table of the OVS host, which identifies the MD-SAL instance
307 identifier of the OVSDB node. This ensures that the OVSDB node will use
308 the same *node-id* in both the configuration and operational MD-SAL.
309
310 ::
311
312     "opendaylight-iid" = "instance identifier of OVSDB node in the MD-SAL"
313
314 When the OVS host sends the OVSDB Southbound Plugin the first update
315 message after the monitoring has been established, the plugin will
316 populate the operational MD-SAL with the information it receives from
317 the OVS host.
318
319 Query the operational MD-SAL for the OVSDB topology.
320
321 HTTP GET:
322
323 ::
324
325     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
326
327 Result Body:
328
329 ::
330
331     {
332       "topology": [
333         {
334           "topology-id": "ovsdb:1",
335           "node": [
336             {
337               "node-id": "ovsdb://HOST1",
338               "ovsdb:openvswitch-external-ids": [
339                 {
340                   "external-id-key": "opendaylight-iid",
341                   "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
342                 }
343               ],
344               "ovsdb:connection-info": {
345                 "local-ip": "<controller-ip>",
346                 "remote-port": 6640,
347                 "remote-ip": "<ovs-host-ip>",
348                 "local-port": 39042
349               },
350               "ovsdb:ovs-version": "2.3.1-git4750c96",
351               "ovsdb:manager-entry": [
352                 {
353                   "target": "ptcp:6640",
354                   "connected": true,
355                   "number_of_connections": 1
356                 }
357               ]
358             }
359           ]
360         }
361       ]
362     }
363
364 To disconnect an active connection, just delete the configuration MD-SAL
365 entry.
366
367 HTTP DELETE:
368
369 ::
370
371     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
372
373 Note that in the above example, the */* characters which are part of the
374 *node-id* are percent-encoded as "%2F" in the URL.
375
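This encoding can be produced programmatically. A small Python sketch (the helper name is ours); keeping ":" in ``safe`` matches the URLs in this guide, where only the "/" characters are encoded:

```python
from urllib.parse import quote

def encode_node_id(node_id):
    """Percent-encode an OVSDB node-id for use in a RESTCONF URL,
    leaving ':' as-is and encoding '/' as %2F."""
    return quote(node_id, safe=":")
```
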
376 Passive Connection to OVS Hosts
377 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
378
379 A passive connection is when the OVS host initiates the connection to
380 the OVSDB Southbound Plugin. This happens when the OVS host is
381 configured to connect to the OVSDB Southbound Plugin. The OVS host is
382 configured with the following command:
383
384 ::
385
386     sudo ovs-vsctl set-manager tcp:<controller-ip>:6640
387
388 The OVSDB Southbound Plugin is configured to listen for OVSDB
389 connections on TCP port 6640. This value can be changed by editing the
390 "./karaf/target/assembly/etc/custom.properties" file and changing the
391 value of the "ovsdb.listenPort" attribute.
392
393 When a passive connection is made, the OVSDB node will appear first in
394 the operational MD-SAL. If the Open\_vSwitch table does not contain an
395 external-ids value of *opendaylight-iid*, then the *node-id* of the new
396 OVSDB node will be created in the format:
397
398 ::
399
400     "ovsdb://uuid/<actual UUID value>"
401
402 If an *opendaylight-iid* value was already present in the
403 external-ids column, then the instance identifier defined there will be
404 used to create the *node-id* instead.
405
406 Query the operational MD-SAL for an OVSDB node after a passive
407 connection.
408
409 HTTP GET:
410
411 ::
412
413     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
414
415 Result Body:
416
417 ::
418
419     {
420       "topology": [
421         {
422           "topology-id": "ovsdb:1",
423           "node": [
424             {
425               "node-id": "ovsdb://uuid/163724f4-6a70-428a-a8a0-63b2a21f12dd",
426               "ovsdb:openvswitch-external-ids": [
427                 {
428                   "external-id-key": "system-id",
429                   "external-id-value": "ecf160af-e78c-4f6b-a005-83a6baa5c979"
430                 }
431               ],
432               "ovsdb:connection-info": {
433                 "local-ip": "<controller-ip>",
434                 "remote-port": 46731,
435                 "remote-ip": "<ovs-host-ip>",
436                 "local-port": 6640
437               },
438               "ovsdb:ovs-version": "2.3.1-git4750c96",
439               "ovsdb:manager-entry": [
440                 {
441                   "target": "tcp:10.11.21.7:6640",
442                   "connected": true,
443                   "number_of_connections": 1
444                 }
445               ]
446             }
447           ]
448         }
449       ]
450     }
451
452 Take note of the *node-id* that was created in this case.
453
454 Manage Bridges
455 ^^^^^^^^^^^^^^
456
457 The OVSDB Southbound Plugin can be used to manage bridges on an OVS
458 host.
459
460 This example shows how to add a bridge to the OVSDB node
461 *ovsdb://HOST1*.
462
463 HTTP PUT:
464
465 ::
466
467     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
468
469 Body:
470
471 ::
472
473     {
474       "network-topology:node": [
475         {
476           "node-id": "ovsdb://HOST1/bridge/brtest",
477           "ovsdb:bridge-name": "brtest",
478           "ovsdb:protocol-entry": [
479             {
480               "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
481             }
482           ],
483           "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
484         }
485       ]
486     }
487
488 Notice that the *ovsdb:managed-by* attribute is specified in the
489 command. This indicates the association of the new bridge node with its
490 OVSDB node.
491
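The *ovsdb:managed-by* instance identifier is the fiddliest part of the request body. A minimal Python sketch that builds it (the helper name is ours):

```python
def managed_by_iid(topology_id, node_id):
    """Build the instance-identifier string that ties a bridge node to
    its managing OVSDB node, as used in the request body above."""
    return ("/network-topology:network-topology"
            "/network-topology:topology[network-topology:topology-id='%s']"
            "/network-topology:node[network-topology:node-id='%s']"
            % (topology_id, node_id))
```
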
492 Bridges can be updated. In the following example, OpenDaylight is
493 configured to be the OpenFlow controller for the bridge.
494
495 HTTP PUT:
496
497 ::
498
499     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
500
501 Body:
502
503 ::
504
505     {
506       "network-topology:node": [
507             {
508               "node-id": "ovsdb://HOST1/bridge/brtest",
509                  "ovsdb:bridge-name": "brtest",
510                   "ovsdb:controller-entry": [
511                     {
512                       "target": "tcp:<controller-ip>:6653"
513                     }
514                   ],
515                  "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
516             }
517         ]
518     }
519
520 If the OpenDaylight OpenFlow Plugin is installed, then checking on the
521 OVS host will show that OpenDaylight has successfully connected as the
522 controller for the bridge.
523
524 ::
525
526     $ sudo ovs-vsctl show
527         Manager "ptcp:6640"
528             is_connected: true
529         Bridge brtest
530             Controller "tcp:<controller-ip>:6653"
531                 is_connected: true
532             Port brtest
533                 Interface brtest
534                     type: internal
535         ovs_version: "2.3.1-git4750c96"
536
537 Query the operational MD-SAL to see how the bridge appears.
538
539 HTTP GET:
540
541 ::
542
543     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/
544
545 Result Body:
546
547 ::
548
549     {
550       "node": [
551         {
552           "node-id": "ovsdb://HOST1/bridge/brtest",
553           "ovsdb:bridge-name": "brtest",
554           "ovsdb:datapath-type": "ovsdb:datapath-type-system",
555           "ovsdb:datapath-id": "00:00:da:e9:0c:08:2d:45",
556           "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']",
557           "ovsdb:bridge-external-ids": [
558             {
559               "bridge-external-id-key": "opendaylight-iid",
560               "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']"
561             }
562           ],
563           "ovsdb:protocol-entry": [
564             {
565               "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
566             }
567           ],
568           "ovsdb:bridge-uuid": "080ce9da-101e-452d-94cd-ee8bef8a4b69",
569           "ovsdb:controller-entry": [
570             {
571               "target": "tcp:10.11.21.7:6653",
572               "is-connected": true,
573               "controller-uuid": "c39b1262-0876-4613-8bfd-c67eec1a991b"
574             }
575           ],
576           "termination-point": [
577             {
578               "tp-id": "brtest",
579               "ovsdb:port-uuid": "c808ae8d-7af2-4323-83c1-e397696dc9c8",
580               "ovsdb:ofport": 65534,
581               "ovsdb:interface-type": "ovsdb:interface-type-internal",
582               "ovsdb:interface-uuid": "49e9417f-4479-4ede-8faf-7c873b8c0413",
583               "ovsdb:name": "brtest"
584             }
585           ]
586         }
587       ]
588     }
589
590 Notice that just like with the OVSDB node, an *opendaylight-iid* has
591 been added to the external-ids column of the bridge since it was created
592 via the configuration MD-SAL.
593
594 A bridge node may be deleted as well.
595
596 HTTP DELETE:
597
598 ::
599
600     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
601
602 Manage Ports
603 ^^^^^^^^^^^^
604
605 Similarly, ports may be managed by the OVSDB Southbound Plugin.
606
607 This example illustrates how a port and various attributes may be
608 created on a bridge.
609
610 HTTP PUT:
611
612 ::
613
614     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
615
616 Body:
617
618 ::
619
620     {
621       "network-topology:termination-point": [
622         {
623           "ovsdb:options": [
624             {
625               "ovsdb:option": "remote_ip",
626               "ovsdb:value" : "10.10.14.11"
627             }
628           ],
629           "ovsdb:name": "testport",
630           "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
631           "tp-id": "testport",
632           "vlan-tag": "1",
633           "trunks": [
634             {
635               "trunk": "5"
636             }
637           ],
638           "vlan-mode":"access"
639         }
640       ]
641     }
642
643 Ports can be updated. In this example, another VLAN trunk is added.
644
645 HTTP PUT:
646
647 ::
648
649     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
650
651 Body:
652
653 ::
654
655     {
656       "network-topology:termination-point": [
657         {
658           "ovsdb:name": "testport",
659           "tp-id": "testport",
660           "trunks": [
661             {
662               "trunk": "5"
663             },
664             {
665               "trunk": "500"
666             }
667           ]
668         }
669       ]
670     }
671
672 Query the operational MD-SAL for the port.
673
674 HTTP GET:
675
676 ::
677
678     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
679
680 Result Body:
681
682 ::
683
684     {
685       "termination-point": [
686         {
687           "tp-id": "testport",
688           "ovsdb:port-uuid": "b1262110-2a4f-4442-b0df-84faf145488d",
689           "ovsdb:options": [
690             {
691               "option": "remote_ip",
692               "value": "10.10.14.11"
693             }
694           ],
695           "ovsdb:port-external-ids": [
696             {
697               "external-id-key": "opendaylight-iid",
698               "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']/network-topology:termination-point[network-topology:tp-id='testport']"
699             }
700           ],
701           "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
702           "ovsdb:trunks": [
703             {
704               "trunk": 5
705             },
706             {
707               "trunk": 500
708             }
709           ],
710           "ovsdb:vlan-mode": "access",
711           "ovsdb:vlan-tag": 1,
712           "ovsdb:interface-uuid": "7cec653b-f407-45a8-baec-7eb36b6791c9",
713           "ovsdb:name": "testport",
714           "ovsdb:ofport": 1
715         }
716       ]
717     }
718
719 Remember that the OVSDB YANG model includes both OVSDB port and
720 interface table attributes in the termination-point augmentation. Both
721 kinds of attributes can be seen in the examples above. Again, note the
722 creation of an *opendaylight-iid* value in the external-ids column of
723 the port table.
724
725 Delete a port.
726
727 HTTP DELETE:
728
729 ::
730
731     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
732
733 Overview of QoS and Queue
734 ^^^^^^^^^^^^^^^^^^^^^^^^^
735
736 The OVSDB Southbound Plugin provides the capability of managing the QoS
737 and Queue tables on an OVS host with OpenDaylight configured as the
738 OVSDB manager.
739
740 QoS and Queue Tables in OVSDB
741 '''''''''''''''''''''''''''''
742
743 The OVSDB schema includes QoS and Queue tables. Unlike most other
744 tables in OVSDB (the Open\_vSwitch table being the other exception),
745 the QoS and Queue tables are "root set" tables, which means that
746 entries, or rows, in these tables are not automatically deleted if they
747 cannot be reached directly or indirectly from the Open\_vSwitch table. This means that QoS
748 entries can exist and be managed independently of whether or not they
749 are referenced in a Port entry. Similarly, Queue entries can be managed
750 independently of whether or not they are referenced by a QoS entry.
751
752 Modelling of QoS and Queue Tables in OpenDaylight MD-SAL
753 ''''''''''''''''''''''''''''''''''''''''''''''''''''''''
754
755 Since the QoS and Queue tables are "root set" tables, they are modeled
756 in the OpenDaylight MD-SAL as lists which are part of the attributes of
757 the OVSDB node model.
758
759 The MD-SAL QoS and Queue models have an additional identifier attribute
760 per entry (e.g. "qos-id" or "queue-id") which is not present in the
761 OVSDB schema. This identifier is used by the MD-SAL as a key for
762 referencing the entry. If the entry is created originally from the
763 configuration MD-SAL, then the value of the identifier is whatever is
764 specified by the configuration. If the entry is created on the OVSDB
765 node and received by OpenDaylight in an operational update, then the id
766 will be created in the following format.
767
768 ::
769
770     "queue-id": "queue://<UUID>"
771     "qos-id": "qos://<UUID>"
772
773 The UUID in the above identifiers is the actual UUID of the entry in the
774 OVSDB database.
775
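The generated identifier format can be sketched in a couple of lines of Python; the helper names are ours, not part of the plugin:

```python
def generated_id(kind, uuid):
    """Format the id the plugin generates for a QoS or Queue entry that
    was first learned from an operational update."""
    assert kind in ("qos", "queue")
    return "%s://%s" % (kind, uuid)

def parse_generated_id(identifier):
    """Split such an id back into its kind and UUID parts."""
    kind, _, uuid = identifier.partition("://")
    return kind, uuid
```
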
776 When the QoS or Queue entry is created by the configuration MD-SAL, the
777 identifier will be configured as part of the external-ids column of the
778 entry. This will ensure that the corresponding entry that is created in
779 the operational MD-SAL uses the same identifier.
780
781 ::
782
783     "queues-external-ids": [
784       {
785         "queues-external-id-key": "opendaylight-queue-id",
786         "queues-external-id-value": "QUEUE-1"
787       }
788     ]
789
790 See more in the examples that follow in this section.
791
792 The QoS schema in OVSDB currently defines two types of QoS entries.
793
794 -  linux-htb
795
796 -  linux-hfsc
797
798 These QoS types are defined in the QoS model. Additional types will need
799 to be added to the model in order to be supported. See the examples that
800 follow for how the QoS type is specified in the model.
801
802 QoS entries can be configured with additional attributes such as
803 "max-rate". These are configured via the *other-config* column of the
804 QoS entry. Refer to the OVSDB schema (in the reference section below) for
805 all of the relevant attributes that can be configured. The examples in
806 the rest of this section will demonstrate how the other-config column
807 may be configured.
808
809 Similarly, the Queue entries may be configured with additional
810 attributes via the other-config column.
811
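A QoS entry carrying an *other-config* attribute such as "max-rate" might be assembled as in the following Python sketch. The leaf names used here ("qos-other-config", "other-config-key", "other-config-value") and the "ovsdb:qos-type-linux-htb" identity are assumptions patterned on the external-ids lists shown elsewhere in this guide; check ovsdb.yang for the authoritative names.

```python
def qos_entry(qos_id, qos_type, max_rate_bps):
    """Build a QoS entry with a max-rate set via other-config.
    Key names are assumptions; consult ovsdb.yang before use."""
    return {
        "qos-id": qos_id,
        "qos-type": qos_type,
        "qos-other-config": [
            {
                "other-config-key": "max-rate",
                "other-config-value": str(max_rate_bps),
            }
        ],
    }
```
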
812 Managing QoS and Queues via Configuration MD-SAL
813 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
814
815 This section will show some examples on how to manage QoS and Queue
816 entries via the configuration MD-SAL. The examples will be illustrated
817 by using RESTCONF (see `QoS and Queue Postman
818 Collection <https://github.com/opendaylight/ovsdb/blob/stable/boron/resources/commons/Qos-and-Queue-Collection.json.postman_collection>`__
819 ).
820
821 A prerequisite for managing QoS and Queue entries is that the OVS host
822 must be present in the configuration MD-SAL.
823
824 For the following examples, the following OVS host is configured.
825
826 HTTP POST:
827
828 ::
829
830     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
831
832 Body:
833
834 ::
835
836     {
837       "node": [
838         {
839           "node-id": "ovsdb:HOST1",
840           "connection-info": {
841             "ovsdb:remote-ip": "<ovs-host-ip>",
842             "ovsdb:remote-port": "<ovs-host-ovsdb-port>"
843           }
844         }
845       ]
846     }
847
848 Where
849
850 -  *<controller-ip>* is the IP address of the OpenDaylight controller
851
852 -  *<ovs-host-ip>* is the IP address of the OVS host
853
854 -  *<ovs-host-ovsdb-port>* is the TCP port of the OVSDB server on the
855    OVS host (e.g. 6640)
856
857 This command creates an OVSDB node with the node-id "ovsdb:HOST1". This
858 OVSDB node will be used in the following examples.
859
860 QoS and Queue entries can be created and managed without a port, but
861 ultimately a QoS entry is associated with a port in order to be used.
862 For the following examples, a test bridge and port will be created.
863
864 Create the test bridge.
865
866 HTTP PUT:
867
868 ::
869
870     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test
871
872 Body:
873
874 ::
875
876     {
877       "network-topology:node": [
878         {
879           "node-id": "ovsdb:HOST1/bridge/br-test",
880           "ovsdb:bridge-name": "br-test",
881           "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"
882         }
883       ]
884     }
885
886 Create the test port (which is modeled as a termination point in the
887 OpenDaylight MD-SAL).
888
889 HTTP PUT:
890
891 ::
892
893     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
894
895 Body:
896
897 ::
898
899     {
900       "network-topology:termination-point": [
901         {
902           "ovsdb:name": "testport",
903           "tp-id": "testport"
904         }
905       ]
906     }
907
If all of the previous steps were successful, a query of the operational
MD-SAL should look something like the following results. This indicates
that the configuration commands have been successfully instantiated on
the OVS host.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test

Result Body:

::

    {
      "node": [
        {
          "node-id": "ovsdb:HOST1/bridge/br-test",
          "ovsdb:bridge-name": "br-test",
          "ovsdb:datapath-type": "ovsdb:datapath-type-system",
          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']",
          "ovsdb:datapath-id": "00:00:8e:5d:22:3d:09:49",
          "ovsdb:bridge-external-ids": [
            {
              "bridge-external-id-key": "opendaylight-iid",
              "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']"
            }
          ],
          "ovsdb:bridge-uuid": "3d225d8d-d060-4909-93ef-6f4db58ef7cc",
          "termination-point": [
            {
              "tp-id": "br-test",
              "ovsdb:port-uuid": "f85f7aa7-4956-40e4-9c94-e6ca2d5cd254",
              "ovsdb:ofport": 65534,
              "ovsdb:interface-type": "ovsdb:interface-type-internal",
              "ovsdb:interface-uuid": "29ff3692-6ed4-4ad7-a077-1edc277ecb1a",
              "ovsdb:name": "br-test"
            },
            {
              "tp-id": "testport",
              "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
              "ovsdb:port-external-ids": [
                {
                  "external-id-key": "opendaylight-iid",
                  "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
                }
              ],
              "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
              "ovsdb:name": "testport"
            }
          ]
        }
      ]
    }

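A quick sanity check is to confirm that the expected tp-ids appear in the operational result. A Python sketch, using an abbreviated copy of the result above (field names follow the model, values are from the example):

```python
import json

# Abbreviated operational GET result from above (only the fields we check).
response_text = """
{
  "node": [
    {
      "node-id": "ovsdb:HOST1/bridge/br-test",
      "termination-point": [
        {"tp-id": "br-test", "ovsdb:ofport": 65534},
        {"tp-id": "testport"}
      ]
    }
  ]
}
"""

data = json.loads(response_text)
tp_ids = {tp["tp-id"]
          for node in data["node"]
          for tp in node.get("termination-point", [])}
assert {"br-test", "testport"} <= tp_ids  # both ports were instantiated
```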
Create Queue
''''''''''''

Create a new Queue in the configuration MD-SAL.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Body:

::

    {
      "ovsdb:queues": [
        {
          "queue-id": "QUEUE-1",
          "dscp": 25,
          "queues-other-config": [
            {
              "queue-other-config-key": "max-rate",
              "queue-other-config-value": "3600000"
            }
          ]
        }
      ]
    }

Query Queue
'''''''''''

Now query the operational MD-SAL for the Queue entry.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Result Body:

::

    {
      "ovsdb:queues": [
        {
          "queue-id": "QUEUE-1",
          "queues-other-config": [
            {
              "queue-other-config-key": "max-rate",
              "queue-other-config-value": "3600000"
            }
          ],
          "queues-external-ids": [
            {
              "queues-external-id-key": "opendaylight-queue-id",
              "queues-external-id-value": "QUEUE-1"
            }
          ],
          "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
          "dscp": 25
        }
      ]
    }

Create QoS
''''''''''

Create a QoS entry. Note that the UUID of the Queue entry, obtained by
querying the operational MD-SAL, is specified in the queue-list of the
QoS entry. Queue entries may be added to the QoS entry when it is
created, or by a subsequent update to the QoS entry.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/

Body:

::

    {
      "ovsdb:qos-entries": [
        {
          "qos-id": "QOS-1",
          "qos-type": "ovsdb:qos-type-linux-htb",
          "qos-other-config": [
            {
              "other-config-key": "max-rate",
              "other-config-value": "4400000"
            }
          ],
          "queue-list": [
            {
              "queue-number": "0",
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
            }
          ]
        }
      ]
    }

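The hand-off above, reading the queue-uuid from the operational Queue entry and placing it in the QoS queue-list, can be sketched in Python (the UUID is taken from the examples; the flow, not the values, is the point):

```python
import json

# Operational result of the Queue query (abbreviated).
queue_result = json.loads(
    '{"ovsdb:queues": [{"queue-id": "QUEUE-1",'
    ' "queue-uuid": "83640357-3596-4877-9527-b472aa854d69", "dscp": 25}]}')
queue_uuid = queue_result["ovsdb:queues"][0]["queue-uuid"]

# Build the QoS PUT body referencing that queue at queue-number 0.
qos_body = {
    "ovsdb:qos-entries": [{
        "qos-id": "QOS-1",
        "qos-type": "ovsdb:qos-type-linux-htb",
        "queue-list": [{"queue-number": "0", "queue-uuid": queue_uuid}],
    }]
}
```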
Query QoS
'''''''''

Query the operational MD-SAL for the QoS entry.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/

Result Body:

::

    {
      "ovsdb:qos-entries": [
        {
          "qos-id": "QOS-1",
          "qos-other-config": [
            {
              "other-config-key": "max-rate",
              "other-config-value": "4400000"
            }
          ],
          "queue-list": [
            {
              "queue-number": 0,
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
            }
          ],
          "qos-type": "ovsdb:qos-type-linux-htb",
          "qos-external-ids": [
            {
              "qos-external-id-key": "opendaylight-qos-id",
              "qos-external-id-value": "QOS-1"
            }
          ],
          "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
        }
      ]
    }

Add QoS to a Port
'''''''''''''''''

Update the termination point entry to include the UUID of the QoS entry,
obtained by querying the operational MD-SAL, to associate a QoS entry
with a port.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

::

    {
      "network-topology:termination-point": [
        {
          "ovsdb:name": "testport",
          "tp-id": "testport",
          "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
        }
      ]
    }

Query the Port
''''''''''''''

Query the operational MD-SAL to see how the QoS entry appears in the
termination point model.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Result Body:

::

    {
      "termination-point": [
        {
          "tp-id": "testport",
          "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
          "ovsdb:port-external-ids": [
            {
              "external-id-key": "opendaylight-iid",
              "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
            }
          ],
          "ovsdb:qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31",
          "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
          "ovsdb:name": "testport"
        }
      ]
    }

Query the OVSDB Node
''''''''''''''''''''

Query the operational MD-SAL for the OVS host to see how the QoS and
Queue entries appear as lists in the OVS node model.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/

Result Body (edited to only show information relevant to the QoS and
Queue entries):

::

    {
      "node": [
        {
          "node-id": "ovsdb:HOST1",
          <content edited out>
          "ovsdb:queues": [
            {
              "queue-id": "QUEUE-1",
              "queues-other-config": [
                {
                  "queue-other-config-key": "max-rate",
                  "queue-other-config-value": "3600000"
                }
              ],
              "queues-external-ids": [
                {
                  "queues-external-id-key": "opendaylight-queue-id",
                  "queues-external-id-value": "QUEUE-1"
                }
              ],
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
              "dscp": 25
            }
          ],
          "ovsdb:qos-entries": [
            {
              "qos-id": "QOS-1",
              "qos-other-config": [
                {
                  "other-config-key": "max-rate",
                  "other-config-value": "4400000"
                }
              ],
              "queue-list": [
                {
                  "queue-number": 0,
                  "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
                }
              ],
              "qos-type": "ovsdb:qos-type-linux-htb",
              "qos-external-ids": [
                {
                  "qos-external-id-key": "opendaylight-qos-id",
                  "qos-external-id-value": "QOS-1"
                }
              ],
              "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
            }
          ]
          <content edited out>
        }
      ]
    }

Remove QoS from a Port
''''''''''''''''''''''

This example removes a QoS entry from the termination point and
associated port. Note that this is a PUT command on the termination
point with the QoS attribute absent. Other attributes of the termination
point should be included in the body of the command so that they are not
inadvertently removed.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

::

    {
      "network-topology:termination-point": [
        {
          "ovsdb:name": "testport",
          "tp-id": "testport"
        }
      ]
    }

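Because PUT replaces the whole termination point, one safe pattern is to take the current configuration, drop only the qos leaf, and PUT the remainder back. A sketch with the data from the earlier example:

```python
import copy

# Current configured termination point (from the "Add QoS to a Port" step).
current = {
    "network-topology:termination-point": [{
        "ovsdb:name": "testport",
        "tp-id": "testport",
        "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31",
    }]
}

body = copy.deepcopy(current)
for tp in body["network-topology:termination-point"]:
    tp.pop("qos", None)  # drop only the QoS reference; keep all other attributes
```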
Remove a Queue from QoS
'''''''''''''''''''''''

This example removes the specific Queue entry from the queue list in the
QoS entry. The queue entry is specified by the queue number, which is
"0" in this example.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/queue-list/0/

Remove Queue
''''''''''''

Once all references to a specific queue entry have been removed from QoS
entries, the Queue itself can be removed.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Remove QoS
''''''''''

The QoS entry may be removed when it is no longer referenced by any
ports.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/

References
^^^^^^^^^^

`Open vSwitch
schema <http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf>`__

`OVSDB and Netvirt Postman
Collection <https://github.com/opendaylight/ovsdb/blob/stable/boron/resources/commons>`__

OVSDB Hardware VTEP SouthBound Plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Overview
^^^^^^^^

The hwvtepsouthbound plugin is used to configure a hardware VTEP which
implements the hardware\_vtep OVSDB schema. This page shows how to use
the RESTCONF API of hwvtepsouthbound. There are two ways to connect to
ODL:

-  the user initiates the connection, or

-  the switch initiates the connection.

Both are described in the sections below.

User Initiates Connection
^^^^^^^^^^^^^^^^^^^^^^^^^

Prerequisite
''''''''''''

Configure the hwvtep device/node to listen for TCP connections in
passive mode. In addition, configure the management IP and the tunnel
source IP. Once this configuration is done, a physical switch is created
automatically by the hwvtep node.

Connect to a hwvtep device/node
'''''''''''''''''''''''''''''''

Send the RESTCONF request below to initiate the connection to a hwvtep
node from the controller, providing the listening IP address and port of
the hwvtep device/node.

REST API: POST
http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/

::

    {
     "network-topology:node": [
           {
               "node-id": "hwvtep://192.168.1.115:6640",
               "hwvtep:connection-info":
               {
                   "hwvtep:remote-port": 6640,
                   "hwvtep:remote-ip": "192.168.1.115"
               }
           }
       ]
    }

Please replace *odl* in the URL with the IP address of your OpenDaylight
controller and change *192.168.1.115* to your hwvtep node IP.

**NOTE**: The format of node-id is fixed. It will be one of the
following two forms:

User initiates connection from ODL:

::

     hwvtep://ip:port

Switch initiates connection:

::

     hwvtep://uuid/<uuid of switch>

The reason for using a UUID is that it allows us to distinguish between
multiple switches if they are behind a NAT.

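The two fixed node-id forms can be told apart mechanically. A small Python sketch (the function name is illustrative, not part of the plugin):

```python
def hwvtep_node_id_form(node_id):
    """Classify a hwvtep node-id as user- or switch-initiated (sketch)."""
    prefix = "hwvtep://"
    if not node_id.startswith(prefix):
        raise ValueError("not a hwvtep node-id: " + node_id)
    rest = node_id[len(prefix):]
    # Switch-initiated connections are keyed by the switch UUID.
    return "switch-initiated" if rest.startswith("uuid/") else "user-initiated"
```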
After this request is completed successfully, we can get the physical
switch from the operational data store.

REST API: GET
http://odl:8181/restconf/operational/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

There is no body in this request.

The response of the request is:

::

    {
       "node": [
             {
               "node-id": "hwvtep://192.168.1.115:6640",
               "hwvtep:switches": [
                 {
                   "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
                 }
               ],
               "hwvtep:connection-info": {
                 "local-ip": "192.168.92.145",
                 "local-port": 47802,
                 "remote-port": 6640,
                 "remote-ip": "192.168.1.115"
               }
             },
             {
               "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
               "hwvtep:management-ips": [
                 {
                   "management-ips-key": "192.168.1.115"
                 }
               ],
               "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
               "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
               "hwvtep:hwvtep-node-description": "",
               "hwvtep:tunnel-ips": [
                 {
                   "tunnel-ips-key": "192.168.1.115"
                 }
               ],
               "hwvtep:hwvtep-node-name": "br0"
             }
           ]
    }

If a physical switch has already been created by manual configuration,
we can get the node-id of the physical switch from this response, which
is presented in "switch-ref". If the switch does not exist, we need to
create the physical switch. Currently, most hwvtep devices do not
support running multiple switches.

Create a physical switch
''''''''''''''''''''''''

REST API: POST
http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/

request body:

::

    {
     "network-topology:node": [
           {
               "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
               "hwvtep-node-name": "br0",
               "hwvtep-node-description": "",
               "management-ips": [
                 {
                   "management-ips-key": "192.168.1.115"
                 }
               ],
               "tunnel-ips": [
                 {
                   "tunnel-ips-key": "192.168.1.115"
                 }
               ],
               "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
           }
       ]
    }

Note: "managed-by" must be provided by the user. Its value can be
obtained after the step *Connect to a hwvtep device/node*, since the
node-id of the hwvtep device is provided by the user. "managed-by" is a
reference typed as an instance identifier. Although instance identifiers
are somewhat awkward to construct over RESTCONF, the primary users of
the hwvtepsouthbound plugin are expected to be provider-type
applications such as NetVirt, for which instance identifiers are much
easier to handle in code.

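The "managed-by" string is mechanical to build from the topology-id and the node-id. A Python sketch (the helper name is illustrative):

```python
def managed_by_ref(topology_id, node_id):
    """Build the "managed-by" instance-identifier string (sketch)."""
    return ("/network-topology:network-topology/network-topology:topology"
            "[network-topology:topology-id='{}']/network-topology:node"
            "[network-topology:node-id='{}']").format(topology_id, node_id)

ref = managed_by_ref("hwvtep:1", "hwvtep://192.168.1.115:6640")
```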
Create a logical switch
'''''''''''''''''''''''

Creating a logical switch is effectively creating a logical network. For
VXLAN, it is a tunnel network with the same VNI.

REST API: POST
http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:

::

    {
     "logical-switches": [
           {
               "hwvtep-node-name": "ls0",
               "hwvtep-node-description": "",
               "tunnel-key": "10000"
           }
       ]
    }

Create a physical locator
'''''''''''''''''''''''''

After the VXLAN network is ready, we will add VTEPs to it. A VTEP is
described by a physical locator.

REST API: POST
http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:

::

    {
     "termination-point": [
           {
               "tp-id": "vxlan_over_ipv4:192.168.0.116",
               "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
               "dst-ip": "192.168.0.116"
           }
       ]
    }

The "tp-id" of a locator is "{encapsulation-type}:{dst-ip}".

Note: As far as we know, the OVSDB database does not allow the insertion
of a new locator on its own, so no locator is inserted after this
request is sent. The creation is deferred until another entity refers to
the locator, such as a remote-mcast-macs entry.

Create a remote-mcast-macs entry
''''''''''''''''''''''''''''''''

After adding a physical locator to a logical switch, we need to create a
remote-mcast-macs entry to handle unknown traffic.

REST API: POST
http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:

::

    {
     "remote-mcast-macs": [
           {
               "mac-entry-key": "00:00:00:00:00:00",
               "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
               "locator-set": [
                    {
                          "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
                    }
               ]
           }
       ]
    }

The physical locator *vxlan\_over\_ipv4:192.168.0.116* was just created
in "Create a physical locator". It should be noted that the
"locator-set" list is immutable; that is, we must provide the whole set
of "locator-ref" entries at once.

Note: "00:00:00:00:00:00" stands for "unknown-dst" since the type of
mac-entry-key is yang:mac and does not accept "unknown-dst".

Create a physical port
''''''''''''''''''''''

Now we add a physical port to the physical switch
"hwvtep://192.168.1.115:6640/physicalswitch/br0". The port is attached
to a physical server or an L2 network, and carries VLAN 100.

REST API: POST
http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640%2Fphysicalswitch%2Fbr0

::

    {
     "network-topology:termination-point": [
           {
               "tp-id": "port0",
               "hwvtep-node-name": "port0",
               "hwvtep-node-description": "",
               "vlan-bindings": [
                   {
                     "vlan-id-key": "100",
                     "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
                   }
              ]
           }
       ]
    }

At this point, we have completed the basic configuration.

Typically, hwvtep devices learn local MAC addresses automatically. But
they also support getting MAC address entries from ODL.

Create a local-mcast-macs entry
'''''''''''''''''''''''''''''''

It is similar to *Create a remote-mcast-macs entry*.

Create a remote-ucast-macs
''''''''''''''''''''''''''

REST API: POST
http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

request body:

::

    {
     "remote-ucast-macs": [
           {
               "mac-entry-key": "11:11:11:11:11:11",
               "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
               "ipaddr": "1.1.1.1",
               "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
           }
       ]
    }

Create a local-ucast-macs entry
'''''''''''''''''''''''''''''''

This is similar to *Create a remote-ucast-macs*.

Switch Initiates Connection
^^^^^^^^^^^^^^^^^^^^^^^^^^^

We do not need to connect to a hwvtep device/node when the switch
initiates the connection. After a switch connects to ODL successfully,
we get the node-ids of the switches by reading the operational data
store. Once the node-id of a hwvtep device is known, the remaining steps
are the same as when the user initiates the connection.

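Reading the node-ids from the operational data store can be sketched as follows (the response text is an abbreviated, hypothetical example of a switch-initiated node):

```python
import json

# Abbreviated operational topology response (hypothetical switch-initiated node).
topology_text = """
{
  "topology": [
    {
      "topology-id": "hwvtep:1",
      "node": [
        {"node-id": "hwvtep://uuid/37eb5abd-a6a3-4aba-9952-a4d301bdf371"}
      ]
    }
  ]
}
"""

topo = json.loads(topology_text)
node_ids = [n["node-id"]
            for t in topo["topology"]
            for n in t.get("node", [])]
```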
References
^^^^^^^^^^

https://wiki.opendaylight.org/view/User_talk:Pzhang