Merge "Fix link to AsciiDoc Tips"
[docs.git] / docs / user-guide / ovsdb-netvirt.rst
1 OVSDB NetVirt
2 =============
3
4 NetVirt
5 -------
6
7 The OVSDB NetVirt project delivers two major pieces of functionality:
8
9 1. The OVSDB Southbound Protocol, and
10
11 2. NetVirt, a network virtualization solution.
12
13 The following diagram shows the system-level architecture of OVSDB
14 NetVirt in an OpenStack-based solution.
15
16 .. figure:: ./images/ovsdb/ovsdb-netvirt-architecture.jpg
17    :alt: OVSDB NetVirt Architecture
18
19    OVSDB NetVirt Architecture
20
21 NetVirt is a network virtualization solution that is a Neutron service
22 provider, and therefore supports the OpenStack Neutron Networking API
23 and extensions.
24
25 The OVSDB component implements the OVSDB protocol (RFC 7047), as well as
26 plugins to support OVSDB Schemas, such as the Open\_vSwitch database
27 schema and the hardware\_vtep database schema.
28
29 NetVirt has MDSAL-based interfaces with Neutron on the northbound side,
30 and OVSDB and OpenFlow plugins on the southbound side.
31
32 OVSDB NetVirt currently supports Open vSwitch virtual switches via
33 OpenFlow and OVSDB. Work is underway to support hardware gateways.
34
35 NetVirt services are enabled by installing the odl-ovsdb-openstack
36 feature using the following command:
37
38 ::
39
40     feature:install odl-ovsdb-openstack
41
42 To enable NetVirt’s distributed Layer 3 routing services, the following
43 line must be uncommented in the etc/custom.properties file in the
44 OpenDaylight distribution prior to starting karaf:
45
46 ::
47
48     ovsdb.l3.fwd.enabled=yes
49
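If the line is present but commented out (verify the file contents first; the
exact comment form may differ between distributions), it can be enabled from
the distribution's top-level directory with a command such as:

::

    sed -i 's/^# *ovsdb.l3.fwd.enabled=yes/ovsdb.l3.fwd.enabled=yes/' etc/custom.properties
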
50 To start the OpenDaylight controller, run the following application in
51 your distribution:
52
53 ::
54
55     bin/karaf
56
57 More details about using NetVirt with OpenStack can be found in the
58 following places:
59
60 1. The "OpenDaylight and OpenStack" guide, and
61
62 2. `Getting Started with OpenDaylight OVSDB Plugin Network
63    Virtualization <https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization>`__
64
65 Some additional details about using OpenStack Security Groups and the
66 Data Plane Development Kit (DPDK) are provided below.
67
68 Security groups
69 ~~~~~~~~~~~~~~~
70
Security groups in OpenStack filter packets based on the configured
policies. The reference implementation in OpenStack uses iptables to
realize security groups. OpenDaylight uses OVS flows instead of iptables
rules, which removes the many layers of bridges/ports required by the
iptables implementation.

The rules are currently applied on the basis of the following
attributes: ingress/egress, protocol, port range, and IP prefix. In the
pipeline, table 40 is used for egress ACL rules and table 90 for ingress
ACL rules.
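
For example, the installed ACL flows can be inspected directly on the OVS
host, assuming the integration bridge is named *br-int* and the OpenFlow 1.3
pipeline is in use:

::

    # Egress ACL rules
    sudo ovs-ofctl -O OpenFlow13 dump-flows br-int table=40
    # Ingress ACL rules
    sudo ovs-ofctl -O OpenFlow13 dump-flows br-int table=90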
80
81 Stateful Implementation
82 ^^^^^^^^^^^^^^^^^^^^^^^
83
Security groups are implemented in two modes, stateful and stateless.
Stateful mode can be enabled by changing the corresponding setting from
false to true in etc/opendaylight/karaf/netvirt-impl-default-config.xml.
87
The stateful implementation uses the conntrack capabilities of OVS to
track existing connections. This mode requires OVS 2.5 and Linux kernel
4.3. OVS, which is integrated with the netfilter framework, tracks each
connection using the five-tuple (layer-3 protocol, source address,
destination address, layer-4 protocol, layer-4 key). The connection
state is independent of the upper-level state of connection-oriented
protocols like TCP, and even connectionless protocols like UDP will have
a pseudo state. With this implementation, OVS sends the packet to the
netfilter framework to determine whether there is an entry for the
connection. netfilter returns the packet to OVS with the appropriate
flags set. Below are the states we are interested in:
99
100 ::
101
    -trk - The packet was never sent to the netfilter framework
103
104 ::
105
    +trk+est - The connection is already known and was allowed previously;
    pass the packet to the next table.
108
109 ::
110
    +trk+new - This is a new connection. If there is a specific rule in the
    table which allows this traffic with a commit action, an entry will be
    made in the netfilter framework. If there is no specific rule to allow
    this traffic, the packet will be dropped.
115
So, by default, a packet will be dropped unless there is a rule to allow
the packet.
118
119 Stateless Implementation
120 ^^^^^^^^^^^^^^^^^^^^^^^^
121
The stateless mode is for OVS 2.4 and below, where connection tracking
is not supported. Here we have pseudo connection tracking using the TCP
SYN flag. Other than TCP packets, packets of all protocols are allowed
by default. For TCP, SYN packets will be dropped by default unless there
is a specific rule which allows TCP SYN packets to a particular port.
128
129 Fixed Rules
130 ^^^^^^^^^^^
131
Security groups are associated with the VM port when the VM is spawned.
By default a set of rules, referred to as the fixed security group
rules, is applied to the VM port. These include the DHCP rules, the ARP
rules and the conntrack rules. The conntrack rules will be inserted only
in the stateful mode.
137
138 DHCP rules
139 ''''''''''
140
The DHCP rules are added to the VM port when a VM is spawned. The fixed
DHCP rules are:
143
144 -  Allow DHCP server traffic ingress.
145
146    ::
147
148        cookie=0x0, duration=36.848s, table=90, n_packets=2, n_bytes=717,
149        priority=61006,udp,dl_src=fa:16:3e:a1:f9:d0,
150        tp_src=67,tp_dst=68 actions=goto_table:100
151
152    ::
153
154        cookie=0x0, duration=36.566s, table=90, n_packets=0, n_bytes=0, 
155        priority=61006,udp6,dl_src=fa:16:3e:a1:f9:d0,
156        tp_src=547,tp_dst=546 actions=goto_table:100
157
158 -  Allow DHCP client traffic egress.
159
160    ::
161
162        cookie=0x0, duration=2165.596s, table=40, n_packets=2, n_bytes=674, 
163        priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50
164
165    ::
166
167        cookie=0x0, duration=2165.513s, table=40, n_packets=0, n_bytes=0, 
168        priority=61012,udp6,tp_src=546,tp_dst=547 actions=goto_table:50
169
-  Prevent DHCP server traffic from the VM port (DHCP spoofing).
171
172    ::
173
174        cookie=0x0, duration=34.711s, table=40, n_packets=0, n_bytes=0, 
175        priority=61011,udp,in_port=2,tp_src=67,tp_dst=68 actions=drop
176
177    ::
178
179        cookie=0x0, duration=34.519s, table=40, n_packets=0, n_bytes=0, 
180        priority=61011,udp6,in_port=2,tp_src=547,tp_dst=546 actions=drop
181
ARP rules
'''''''''

The default ARP rules allow ARP traffic to go in and out of the VM
port.
187
188 ::
189
190     cookie=0x0, duration=35.015s, table=40, n_packets=10, n_bytes=420, 
191     priority=61010,arp,arp_sha=fa:16:3e:93:88:60 actions=goto_table:50
192
193 ::
194
195     cookie=0x0, duration=35.582s, table=90, n_packets=1, n_bytes=42, 
196     priority=61010,arp,arp_tha=fa:16:3e:93:88:60 actions=goto_table:100
197
198 Conntrack rules
199 '''''''''''''''
200
These rules are inserted only in stateful mode. The conntrack rules use
the netfilter framework to track packets. The rules below are added to
leverage it.

-  If a packet is not tracked (connection state -trk), it is sent to
   netfilter for tracking.

-  If the packet is already tracked (netfilter returns connection state
   +trk,+est) and the connection is established, the packet is allowed
   to go through the pipeline.

-  The third rule is the default drop rule, which will drop the packet
   if the packet is tracked and new (netfilter returns connection state
   +trk,+new). This rule has lower priority than the custom rules which
   will be added.
216
217    ::
218
219        cookie=0x0, duration=35.015s table=40,priority=61021,in_port=3,
       ct_state=-trk,action=ct(table=0)
221
222    ::
223
224        cookie=0x0, duration=35.015s table=40,priority=61020,in_port=3,
225        ct_state=+trk+est,action=goto_table:50
226
227    ::
228
229        cookie=0x0, duration=35.015s table=40,priority=36002,in_port=3,
230        ct_state=+new,actions=drop
231
232    ::
233
234        cookie=0x0, duration=35.015s table=90,priority=61022,
235        dl_dst=fa:16:3e:0d:8d:21,ct_state=+trk+est,action=goto_table:100
236
237    ::
238
239        cookie=0x0, duration=35.015s table=90,priority=61021,
       dl_dst=fa:16:3e:0d:8d:21,ct_state=-trk,action=ct(table=0)
241
242    ::
243
244        cookie=0x0, duration=35.015s table=90,priority=36002,
245        dl_dst=fa:16:3e:0d:8d:21,ct_state=+new,actions=drop
246
247 TCP SYN Rule
248 ''''''''''''
249
This rule is inserted in stateless mode only. It will drop TCP SYN
packets by default.
252
253 Custom Security Groups
254 ^^^^^^^^^^^^^^^^^^^^^^
255
Users can add security groups in OpenStack via the command line or the
UI. When such a security group is associated with a VM, the flows
related to each security group rule are added in the related tables. A
preconfigured security group, called the default security group, is
available in the Neutron DB.
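
As an illustration, a custom group and rule could be created with the
OpenStack CLI (the names and values below are hypothetical); once the group is
associated with a VM port, the corresponding flows are programmed into tables
40 and 90:

::

    openstack security group create web-sg
    openstack security group rule create web-sg \
        --ingress --protocol tcp --dst-port 80 --remote-ip 10.100.5.0/24
    openstack server add security group vm-name web-sg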
259
260 Stateful
261 ''''''''
262
If connection tracking is enabled, the match will include the connection
state and the action will include a commit along with the goto. The
commit sends the packet to the netfilter framework to cache the entry.
After a commit, for the next packet of this connection netfilter will
return +trk+est; the packet will match the fixed conntrack rule and get
forwarded to the next table.
269
270 ::
271
272     cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
273     priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
274     nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
275
276 ::
277
278     cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0, 
279     priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
280     nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100
281
282 ::
283
284     cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0, 
285     priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
286     nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
287
288 Stateless
289 '''''''''
290
If the mode is stateless, the match will have only the parameters
specified in the security rule and a goto in the action. The ct\_state
match and the commit action will be missing.
294
295 ::
296
297     cookie=0x0, duration=13211.171s, table=40, n_packets=0, n_bytes=0, 
298     priority=61007,icmp,dl_src=fa:16:3e:93:88:60,nw_dst=0.0.0.0/24,
299     icmp_type=2,icmp_code=4 actions=goto_table:50
300
301 ::
302
303     cookie=0x0, duration=199.674s, table=90, n_packets=0, n_bytes=0, 
304     priority=61007,udp,dl_dst=fa:16:3e:dc:49:ff,nw_src=10.100.5.3,tp_dst=2222 
305     actions=goto_table:100
306
307 ::
308
309     cookie=0x0, duration=199.780s, table=90, n_packets=0, n_bytes=0, 
310     priority=61007,tcp,dl_dst=fa:16:3e:93:88:60,nw_src=10.100.5.4,tp_dst=3333 
311     actions=goto_table:100
312
313 TCP/UDP Port Range
314 ''''''''''''''''''
315
TCP/UDP port ranges are supported with the help of port masks. This
dramatically reduces the number of flows required to cover a port range.
For example, tp\_dst=0x200/0xff00 matches all destination ports from 512
to 767 (0x200-0x2ff). The 7 rules below can cover the port range 333 to
777.
319
320 ::
321
322     cookie=0x0, duration=56.129s, table=90, n_packets=0, n_bytes=0, 
323     priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
324     tp_dst=0x200/0xff00 actions=goto_table:100
325
326 ::
327
328     cookie=0x0, duration=55.805s, table=90, n_packets=0, n_bytes=0, 
329     priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
330     tp_dst=0x160/0xffe0 actions=goto_table:100
331
332 ::
333
334     cookie=0x0, duration=55.587s, table=90, n_packets=0, n_bytes=0, 
335     priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
336     tp_dst=0x300/0xfff8 actions=goto_table:100
337
338 ::
339
340     cookie=0x0, duration=55.437s, table=90, n_packets=0, n_bytes=0, 
341     priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
342     tp_dst=0x150/0xfff0 actions=goto_table:100
343
344 ::
345
346     cookie=0x0, duration=55.282s, table=90, n_packets=0, n_bytes=0, 
347     priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
348     tp_dst=0x14e/0xfffe actions=goto_table:100
349
350 ::
351
352     cookie=0x0, duration=54.063s, table=90, n_packets=0, n_bytes=0, 
353     priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
354     tp_dst=0x308/0xfffe actions=goto_table:100
355
356 ::
357
358     cookie=0x0, duration=55.130s, table=90, n_packets=0, n_bytes=0, 
359     priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
360     tp_dst=333 actions=goto_table:100
361
362 CIDR/Remote Security Group
363 ^^^^^^^^^^^^^^^^^^^^^^^^^^
364
When adding a security group rule, we can select whether it applies to
a set of CIDRs or to the set of VMs which have a particular (remote)
security group associated with them.

If a CIDR is selected, there will be only one flow rule added, allowing
the traffic from/to the IPs belonging to that CIDR.
373
374 ::
375
376     cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
377     priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
378     nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
379
If a remote security group is selected, a flow will be inserted for
every VM which has that security group associated with it.
382
383 ::
384
385     cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0, 
386     priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
387     nw_src=10.100.5.3,tp_dst=2222    actions=ct(commit),goto_table:100
388
389 ::
390
391     cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0, 
392     priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
393     nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
394
395 Rules supported in ODL
396 ^^^^^^^^^^^^^^^^^^^^^^
397
398 The following rules are supported in the current implementation. The
direction (ingress/egress) must always be specified.
400
401 +--------------------+--------------------+--------------------+--------------------+
402 | Protocol           | Port Range         | IP Prefix          | Remote Security    |
403 |                    |                    |                    | Group supported    |
404 +--------------------+--------------------+--------------------+--------------------+
405 | Any                | Any                | Any                | Yes                |
406 +--------------------+--------------------+--------------------+--------------------+
407 | TCP                | 1 - 65535          | 0.0.0.0/0          | Yes                |
408 +--------------------+--------------------+--------------------+--------------------+
409 | UDP                | 1 - 65535          | 0.0.0.0/0          | Yes                |
410 +--------------------+--------------------+--------------------+--------------------+
411 | ICMP               | Any                | 0.0.0.0/0          | Yes                |
412 +--------------------+--------------------+--------------------+--------------------+
413
Table: Supported Rules
415
Note: IPv6 and the port-range feature are not supported as of today.
417
418 Using OVS with DPDK hosts and OVSDB NetVirt
419 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
420
421 The Data Plane Development Kit (`DPDK <http://dpdk.org/>`__) is a
422 userspace set of libraries and drivers designed for fast packet
423 processing. The userspace datapath variant of OVS can be built with DPDK
424 enabled to provide the performance features of DPDK to Open vSwitch
(OVS). In the 2.4.0 version of OVS, the Open\_vSwitch table schema was
enhanced to include the lists *datapath-types* and *interface-types*.
When the OVS with DPDK variant of OVS is running, the *interface-types*
list will include DPDK interface types such as *dpdk* and
429 *dpdkvhostuser*. The OVSDB Southbound Plugin includes this information
430 in the OVSDB YANG model in the MD-SAL, so when a specific OVS host is
431 running OVS with DPDK, it is possible for NetVirt to detect that
432 information by checking that DPDK interface types are included in the
433 list of supported interface types.
434
435 For example, query the operational MD-SAL for OVSDB nodes:
436
437 HTTP GET:
438
439 ::
440
441     http://{{CONTROLLER-IP}}:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
442
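For reference, the same query can be issued from the command line with curl;
the default *admin/admin* RESTCONF credentials are assumed here:

::

    curl -u admin:admin \
        http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
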
443 Result Body:
444
445 ::
446
447     {
448       "topology": [
449         {
450           "topology-id": "ovsdb:1",
451           "node": [
452             < content edited out >
453             {
454               "node-id": "ovsdb://uuid/f9b58b6d-04db-459a-b914-fff82b738aec",
455               < content edited out >
456               "ovsdb:interface-type-entry": [
457                 {
458                   "interface-type": "ovsdb:interface-type-ipsec-gre"
459                 },
460                 {
461                   "interface-type": "ovsdb:interface-type-internal"
462                 },
463                 {
464                   "interface-type": "ovsdb:interface-type-system"
465                 },
466                 {
467                   "interface-type": "ovsdb:interface-type-patch"
468                 },
469                 {
470                   "interface-type": "ovsdb:interface-type-dpdkvhostuser"
471                 },
472                 {
473                   "interface-type": "ovsdb:interface-type-dpdk"
474                 },
475                 {
476                   "interface-type": "ovsdb:interface-type-dpdkr"
477                 },
478                 {
479                   "interface-type": "ovsdb:interface-type-vxlan"
480                 },
481                 {
482                   "interface-type": "ovsdb:interface-type-lisp"
483                 },
484                 {
485                   "interface-type": "ovsdb:interface-type-geneve"
486                 },
487                 {
488                   "interface-type": "ovsdb:interface-type-gre"
489                 },
490                 {
491                   "interface-type": "ovsdb:interface-type-tap"
492                 },
493                 {
494                   "interface-type": "ovsdb:interface-type-stt"
495                 }
496               ],
497               < content edited out >
498               "ovsdb:datapath-type-entry": [
499                 {
500                   "datapath-type": "ovsdb:datapath-type-netdev"
501                 },
502                 {
503                   "datapath-type": "ovsdb:datapath-type-system"
504                 }
505               ],
506               < content edited out >
507             },
508             < content edited out >
509           ]
510         }
511       ]
512     }
513
514 This example illustrates the output of an OVS with DPDK host because the
515 list of interface types includes types supported by DPDK.
516
517 Bridges on OVS with DPDK hosts need to be created with the *netdev*
518 datapath type and DPDK specific ports need to be created with the
519 appropriate interface type. The OpenDaylight OVSDB Southbound Plugin
520 supports these attributes.
521
522 The OpenDaylight NetVirt application checks whether the OVS host is
523 using OVS with DPDK when creating the bridges that are expected to be
524 present on the host, e.g. *br-int*.
525
The following are some tips for supporting hosts using OVS with DPDK
when using NetVirt as the Neutron service provider and *devstack* to
deploy OpenStack.
529
In addition to the *networking-odl* ML2 plugin, enable the
*networking-ovs-dpdk* plugin in *local.conf*.
532
For working with OpenStack Liberty:

::

    enable_plugin networking-odl https://github.com/FedericoRessi/networking-odl integration/liberty
    enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk stable/liberty

For working with the OpenStack Mitaka (or later) branch:

::

    enable_plugin networking-odl https://github.com/openstack/networking-odl
    enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk
544
545 The order of these plugin lines is important. The *networking-odl*
plugin will install and set up *openvswitch*. The *networking-ovs-dpdk*
plugin will install OVS with DPDK. Note that the *networking-ovs-dpdk*
plugin is only being used here to set up OVS with DPDK. The
549 *networking-odl* plugin will be used as the Neutron ML2 driver.
550
551 For VXLAN tenant network support, the NetVirt application interacts with
552 OVS with DPDK host in the same way as OVS hosts using the kernel
553 datapath by creating VXLAN ports on *br-int* to communicate with other
554 tunnel endpoints. The IP address for the local tunnel endpoint may be
555 configured in the *local.conf* file. For example:
556
557 ::
558
559     ODL_LOCAL_IP=192.100.200.10
560
561 NetVirt will use this information to configure the VXLAN port on
562 *br-int*. On a host with the OVS kernel datapath, it is expected that
563 there will be a networking interface configured with this IP address. On
564 an OVS with DPDK host, an OVS bridge is created and a DPDK port is added
565 to the bridge. The local tunnel endpoint address is then assigned to the
566 bridge port of the bridge. So, for example, if the physical network
567 interface is associated with *eth0* on the host, a bridge named
568 *br-eth0* could be created. The DPDK port, such as *dpdk0* (per the
569 naming conventions of OVS with DPDK), is added to bridge *br-eth0*. The
570 local tunnel endpoint address is assigned to the network interface
*br-eth0*, which is attached to bridge *br-eth0*. None of this setup is
done by NetVirt. The *networking-ovs-dpdk* plugin can be made to perform
this setup by putting configuration like the following in *local.conf*:
574
575 ::
576
577     ODL_LOCAL_IP=192.168.200.9
    ODL_PROVIDER_MAPPINGS=physnet1:eth0,physnet2:eth1
579     OVS_DPDK_PORT_MAPPINGS=eth0:br-eth0,eth1:br-ex
580     OVS_BRIDGE_MAPPINGS=physnet1:br-eth0,physnet2:br-ex
581
582 The above settings associate the host networking interface *eth0* with
583 bridge *br-eth0*. The *networking-ovs-dpdk* plugin will determine the
584 DPDK port name associated with *eth0* and add it to the bridge
585 *br-eth0*. If using the NetVirt L3 support, these settings will enable
586 setup of the *br-ex* bridge and attach the DPDK port associated with
587 network interface *eth1* to it.
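
For reference, a rough sketch of the equivalent manual setup, which the
*networking-ovs-dpdk* plugin automates, might look like the following,
assuming the DPDK port for *eth0* is named *dpdk0* and using the tunnel
endpoint address from the example above:

::

    # Create the bridge with the userspace (netdev) datapath
    sudo ovs-vsctl add-br br-eth0 -- set bridge br-eth0 datapath_type=netdev
    # Attach the DPDK port to the bridge
    sudo ovs-vsctl add-port br-eth0 dpdk0 -- set Interface dpdk0 type=dpdk
    # Assign the local tunnel endpoint address to the bridge port
    sudo ip address add 192.168.200.9/24 dev br-eth0
    sudo ip link set br-eth0 up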
588
589 The following settings are included in *local.conf* to specify specific
590 attributes associated with OVS with DPDK. These are used by the
591 *networking-ovs-dpdk* plugin to configure OVS with DPDK.
592
593 ::
594
595     OVS_DATAPATH_TYPE=netdev
596     OVS_NUM_HUGEPAGES=8192
597     OVS_DPDK_MEM_SEGMENTS=8192
598     OVS_HUGEPAGE_MOUNT_PAGESIZE=2M
599     OVS_DPDK_RTE_LIBRTE_VHOST=y
600     OVS_DPDK_MODE=compute
601
Once the stack is up and running, virtual machines may be deployed on OVS
603 with DPDK hosts. The *networking-odl* plugin handles ensuring that
604 *dpdkvhostuser* interfaces are utilized by Nova instead of the default
605 *tap* interface. The *dpdkvhostuser* interface provides the best
606 performance for VMs on OVS with DPDK hosts.
607
608 A Nova flavor is created for VMs that may be deployed on OVS with DPDK
609 hosts.
610
611 ::
612
613     nova flavor-create largepage-flavor 1002 1024 4 1
614     nova flavor-key 1002 set "hw:mem_page_size=large"
615
616 Then, just specify the flavor when creating a VM.
617
618 ::
619
620     nova boot --flavor largepage-flavor --image cirros-0.3.4-x86_64-uec --nic net-id=<NET ID VALUE> vm-name
621
622 OVSDB Plugins
623 -------------
624
625 Overview and Architecture
626 ~~~~~~~~~~~~~~~~~~~~~~~~~
627
628 There are currently two OVSDB Southbound plugins:
629
630 -  odl-ovsdb-southbound: Implements the OVSDB Open\_vSwitch database
631    schema.
632
633 -  odl-ovsdb-hwvtepsouthbound: Implements the OVSDB hardware\_vtep
634    database schema.
635
636 These plugins are normally installed and used automatically by higher
637 level applications such as odl-ovsdb-openstack; however, they can also
638 be installed separately and used via their REST APIs as is described in
639 the following sections.
640
641 OVSDB Southbound Plugin
642 ~~~~~~~~~~~~~~~~~~~~~~~
643
644 The OVSDB Southbound Plugin provides support for managing OVS hosts via
645 an OVSDB model in the MD-SAL which maps to important tables and
646 attributes present in the Open\_vSwitch schema. The OVSDB Southbound
647 Plugin is able to connect actively or passively to OVS hosts and operate
648 as the OVSDB manager of the OVS host. Using the OVSDB protocol it is
649 able to manage the OVS database (OVSDB) on the OVS host as defined by
650 the Open\_vSwitch schema.
651
652 OVSDB YANG Model
653 ^^^^^^^^^^^^^^^^
654
655 The OVSDB Southbound Plugin provides a YANG model which is based on the
656 abstract `network topology
657 model <https://github.com/opendaylight/yangtools/blob/stable/beryllium/yang/yang-parser-impl/src/test/resources/ietf/network-topology%402013-10-21.yang>`__.
658
659 The details of the OVSDB YANG model are defined in the
660 `ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
661 file.
662
663 The OVSDB YANG model defines three augmentations:
664
665 **ovsdb-node-augmentation**
666     This augments the network-topology node and maps primarily to the
667     Open\_vSwitch table of the OVSDB schema. The ovsdb-node-augmentation
668     is a representation of the OVS host. It contains the following
669     attributes.
670
671     -  **connection-info** - holds the local and remote IP address and
672        TCP port numbers for the OpenDaylight to OVSDB node connections
673
674     -  **db-version** - version of the OVSDB database
675
676     -  **ovs-version** - version of OVS
677
678     -  **list managed-node-entry** - a list of references to
679        ovsdb-bridge-augmentation nodes, which are the OVS bridges
680        managed by this OVSDB node
681
682     -  **list datapath-type-entry** - a list of the datapath types
683        supported by the OVSDB node (e.g. *system*, *netdev*) - depends
684        on newer OVS versions
685
686     -  **list interface-type-entry** - a list of the interface types
687        supported by the OVSDB node (e.g. *internal*, *vxlan*, *gre*,
       *dpdk*, etc.) - depends on newer OVS versions
689
690     -  **list openvswitch-external-ids** - a list of the key/value pairs
691        in the Open\_vSwitch table external\_ids column
692
693     -  **list openvswitch-other-config** - a list of the key/value pairs
694        in the Open\_vSwitch table other\_config column
695
    -  **list manager-entry** - list of manager information entries and
697        connection status
698
699     -  **list qos-entries** - list of QoS entries present in the QoS
700        table
701
702     -  **list queues** - list of queue entries present in the queue
703        table
704
705 **ovsdb-bridge-augmentation**
    This augments the network-topology node and maps to a specific
707     bridge in the OVSDB bridge table of the associated OVSDB node. It
708     contains the following attributes.
709
710     -  **bridge-uuid** - UUID of the OVSDB bridge
711
712     -  **bridge-name** - name of the OVSDB bridge
713
714     -  **bridge-openflow-node-ref** - a reference (instance-identifier)
715        of the OpenFlow node associated with this bridge
716
717     -  **list protocol-entry** - the version of OpenFlow protocol to use
718        with the OpenFlow controller
719
720     -  **list controller-entry** - a list of controller-uuid and
721        is-connected status of the OpenFlow controllers associated with
722        this bridge
723
724     -  **datapath-id** - the datapath ID associated with this bridge on
725        the OVSDB node
726
727     -  **datapath-type** - the datapath type of this bridge
728
729     -  **fail-mode** - the OVSDB fail mode setting of this bridge
730
731     -  **flow-node** - a reference to the flow node corresponding to
732        this bridge
733
734     -  **managed-by** - a reference to the ovsdb-node-augmentation
735        (OVSDB node) that is managing this bridge
736
737     -  **list bridge-external-ids** - a list of the key/value pairs in
738        the bridge table external\_ids column for this bridge
739
740     -  **list bridge-other-configs** - a list of the key/value pairs in
741        the bridge table other\_config column for this bridge
742
743 **ovsdb-termination-point-augmentation**
744     This augments the topology termination point model. The OVSDB
745     Southbound Plugin uses this model to represent both the OVSDB port
746     and OVSDB interface for a given port/interface in the OVSDB schema.
747     It contains the following attributes.
748
749     -  **port-uuid** - UUID of an OVSDB port row
750
751     -  **interface-uuid** - UUID of an OVSDB interface row
752
753     -  **name** - name of the port and interface
754
755     -  **interface-type** - the interface type
756
757     -  **list options** - a list of port options
758
759     -  **ofport** - the OpenFlow port number of the interface
760
761     -  **ofport\_request** - the requested OpenFlow port number for the
762        interface
763
764     -  **vlan-tag** - the VLAN tag value
765
766     -  **list trunks** - list of VLAN tag values for trunk mode
767
768     -  **vlan-mode** - the VLAN mode (e.g. access, native-tagged,
769        native-untagged, trunk)
770
771     -  **list port-external-ids** - a list of the key/value pairs in the
772        port table external\_ids column for this port
773
774     -  **list interface-external-ids** - a list of the key/value pairs
775        in the interface table external\_ids interface for this interface
776
777     -  **list port-other-configs** - a list of the key/value pairs in
778        the port table other\_config column for this port
779
780     -  **list interface-other-configs** - a list of the key/value pairs
781        in the interface table other\_config column for this interface
782
    -  **list interface-lldp** - LLDP Auto Attach configuration for the
784        interface
785
786     -  **qos** - UUID of the QoS entry in the QoS table assigned to this
787        port
788
789 Getting Started
790 ^^^^^^^^^^^^^^^
791
792 To install the OVSDB Southbound Plugin, use the following command at the
793 Karaf console:
794
795 ::
796
797     feature:install odl-ovsdb-southbound-impl-ui
798
799 After installing the OVSDB Southbound Plugin, and before any OVSDB
800 topology nodes have been created, the OVSDB topology will appear as
801 follows in the configuration and operational MD-SAL.
802
803 HTTP GET:
804
805 ::
806
807     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
808      or
809     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
810
811 Result Body:
812
813 ::
814
815     {
816       "topology": [
817         {
818           "topology-id": "ovsdb:1"
819         }
820       ]
821     }
822
823 Where
824
825 *<controller-ip>* is the IP address of the OpenDaylight controller
826
827 OpenDaylight as the OVSDB Manager
828 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
829
830 An OVS host is a system which is running the OVS software and is capable
831 of being managed by an OVSDB manager. The OVSDB Southbound Plugin is
832 capable of connecting to an OVS host and operating as an OVSDB manager.
833 Depending on the configuration of the OVS host, the connection of
834 OpenDaylight to the OVS host will be active or passive.
835
836 Active Connection to OVS Hosts
837 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
838
839 An active connection is when the OVSDB Southbound Plugin initiates the
840 connection to an OVS host. This happens when the OVS host is configured
to listen for the connection (i.e. the OVSDB Southbound Plugin is active
and the OVS host is passive). The OVS host is configured with the
843 following command:
844
845 ::
846
847     sudo ovs-vsctl set-manager ptcp:6640
848
849 This configures the OVS host to listen on TCP port 6640.
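
The configured manager target can be verified on the OVS host with:

::

    sudo ovs-vsctl get-manager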
850
851 The OVSDB Southbound Plugin can be configured via the configuration
852 MD-SAL to actively connect to an OVS host.
853
854 HTTP PUT:
855
856 ::
857
858     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
859
860 Body:
861
862 ::
863
864     {
865       "network-topology:node": [
866         {
867           "node-id": "ovsdb://HOST1",
868           "connection-info": {
869             "ovsdb:remote-port": "6640",
870             "ovsdb:remote-ip": "<ovs-host-ip>"
871           }
872         }
873       ]
874     }
875
876 Where
877
878 *<ovs-host-ip>* is the IP address of the OVS Host
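
The same PUT can also be issued with curl, for example (default *admin/admin*
credentials are assumed, with the request body saved in a hypothetical file
named ovsdb-node.json):

::

    curl -u admin:admin -X PUT -H "Content-Type: application/json" \
        -d @ovsdb-node.json \
        http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1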
879
880 Note that the configuration assigns a *node-id* of "ovsdb://HOST1" to
881 the OVSDB node. This *node-id* will be used as the identifier for this
882 OVSDB node in the MD-SAL.
883
884 Query the configuration MD-SAL for the OVSDB topology.
885
886 HTTP GET:
887
888 ::
889
890     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
891
892 Result Body:
893
894 ::
895
896     {
897       "topology": [
898         {
899           "topology-id": "ovsdb:1",
900           "node": [
901             {
902               "node-id": "ovsdb://HOST1",
903               "ovsdb:connection-info": {
904                 "remote-ip": "<ovs-host-ip>",
905                 "remote-port": 6640
906               }
907             }
908           ]
909         }
910       ]
911     }
912
913 As a result of the OVSDB node configuration being added to the
914 configuration MD-SAL, the OVSDB Southbound Plugin will attempt to
915 connect with the specified OVS host. If the connection is successful,
916 the plugin will connect to the OVS host as an OVSDB manager, query the
917 schemas and databases supported by the OVS host, and register to monitor
918 changes made to the OVSDB tables on the OVS host. It will also set an
919 external id key and value in the external-ids column of the
Open\_vSwitch table of the OVS host which identifies the MD-SAL instance
921 identifier of the OVSDB node. This ensures that the OVSDB node will use
922 the same *node-id* in both the configuration and operational MD-SAL.
923
924 ::
925
926     "opendaylight-iid" = "instance identifier of OVSDB node in the MD-SAL"
927
928 When the OVS host sends the OVSDB Southbound Plugin the first update
929 message after the monitoring has been established, the plugin will
930 populate the operational MD-SAL with the information it receives from
931 the OVS host.
932
933 Query the operational MD-SAL for the OVSDB topology.
934
935 HTTP GET:
936
937 ::
938
939     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
940
941 Result Body:
942
943 ::
944
945     {
946       "topology": [
947         {
948           "topology-id": "ovsdb:1",
949           "node": [
950             {
951               "node-id": "ovsdb://HOST1",
952               "ovsdb:openvswitch-external-ids": [
953                 {
954                   "external-id-key": "opendaylight-iid",
955                   "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
956                 }
957               ],
958               "ovsdb:connection-info": {
959                 "local-ip": "<controller-ip>",
960                 "remote-port": 6640,
961                 "remote-ip": "<ovs-host-ip>",
962                 "local-port": 39042
963               },
964               "ovsdb:ovs-version": "2.3.1-git4750c96",
965               "ovsdb:manager-entry": [
966                 {
967                   "target": "ptcp:6640",
968                   "connected": true,
969                   "number_of_connections": 1
970                 }
971               ]
972             }
973           ]
974         }
975       ]
976     }
977
978 To disconnect an active connection, just delete the configuration MD-SAL
979 entry.
980
981 HTTP DELETE:
982
983 ::
984
985     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
986
Note in the above example that the */* characters which are part of the
*node-id* are percent-encoded as "%2F" in the URL.
989
990 Passive Connection to OVS Hosts
991 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
992
993 A passive connection is when the OVS host initiates the connection to
994 the OVSDB Southbound Plugin. This happens when the OVS host is
995 configured to connect to the OVSDB Southbound Plugin. The OVS host is
996 configured with the following command:
997
998 ::
999
1000     sudo ovs-vsctl set-manager tcp:<controller-ip>:6640
1001
1002 The OVSDB Southbound Plugin is configured to listen for OVSDB
1003 connections on TCP port 6640. This value can be changed by editing the
1004 "./karaf/target/assembly/etc/custom.properties" file and changing the
1005 value of the "ovsdb.listenPort" attribute.
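
For example, to listen on port 6641 instead (an illustrative value), the
following line would be set in custom.properties before starting the
controller:

::

    ovsdb.listenPort=6641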
1006
1007 When a passive connection is made, the OVSDB node will appear first in
1008 the operational MD-SAL. If the Open\_vSwitch table does not contain an
1009 external-ids value of *opendaylight-iid*, then the *node-id* of the new
1010 OVSDB node will be created in the format:
1011
1012 ::
1013
1014     "ovsdb://uuid/<actual UUID value>"
1015
If an *opendaylight-iid* value was already present in the
1017 external-ids column, then the instance identifier defined there will be
1018 used to create the *node-id* instead.
1019
1020 Query the operational MD-SAL for an OVSDB node after a passive
1021 connection.
1022
1023 HTTP GET:
1024
1025 ::
1026
1027     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
1028
1029 Result Body:
1030
1031 ::
1032
1033     {
1034       "topology": [
1035         {
1036           "topology-id": "ovsdb:1",
1037           "node": [
1038             {
1039               "node-id": "ovsdb://uuid/163724f4-6a70-428a-a8a0-63b2a21f12dd",
1040               "ovsdb:openvswitch-external-ids": [
1041                 {
1042                   "external-id-key": "system-id",
1043                   "external-id-value": "ecf160af-e78c-4f6b-a005-83a6baa5c979"
1044                 }
1045               ],
1046               "ovsdb:connection-info": {
1047                 "local-ip": "<controller-ip>",
1048                 "remote-port": 46731,
1049                 "remote-ip": "<ovs-host-ip>",
1050                 "local-port": 6640
1051               },
1052               "ovsdb:ovs-version": "2.3.1-git4750c96",
1053               "ovsdb:manager-entry": [
1054                 {
1055                   "target": "tcp:10.11.21.7:6640",
1056                   "connected": true,
1057                   "number_of_connections": 1
1058                 }
1059               ]
1060             }
1061           ]
1062         }
1063       ]
1064     }
1065
1066 Take note of the *node-id* that was created in this case.
1067
1068 Manage Bridges
1069 ^^^^^^^^^^^^^^
1070
1071 The OVSDB Southbound Plugin can be used to manage bridges on an OVS
1072 host.
1073
1074 This example shows how to add a bridge to the OVSDB node
1075 *ovsdb://HOST1*.
1076
1077 HTTP PUT:
1078
1079 ::
1080
1081     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
1082
1083 Body:
1084
1085 ::
1086
1087     {
1088       "network-topology:node": [
1089         {
1090           "node-id": "ovsdb://HOST1/bridge/brtest",
1091           "ovsdb:bridge-name": "brtest",
1092           "ovsdb:protocol-entry": [
1093             {
1094               "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
1095             }
1096           ],
1097           "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
1098         }
1099       ]
1100     }
1101
1102 Notice that the *ovsdb:managed-by* attribute is specified in the
1103 command. This indicates the association of the new bridge node with its
1104 OVSDB node.
1105
1106 Bridges can be updated. In the following example, OpenDaylight is
1107 configured to be the OpenFlow controller for the bridge.
1108
1109 HTTP PUT:
1110
1111 ::
1112
1113     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
1114
1115 Body:
1116
1117 ::
1118
1119     {
1120       "network-topology:node": [
1121             {
1122               "node-id": "ovsdb://HOST1/bridge/brtest",
1123                  "ovsdb:bridge-name": "brtest",
1124                   "ovsdb:controller-entry": [
1125                     {
1126                       "target": "tcp:<controller-ip>:6653"
1127                     }
1128                   ],
1129                  "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
1130             }
1131         ]
1132     }
1133
1134 If the OpenDaylight OpenFlow Plugin is installed, then checking on the
1135 OVS host will show that OpenDaylight has successfully connected as the
1136 controller for the bridge.
1137
1138 ::
1139
1140     $ sudo ovs-vsctl show
1141         Manager "ptcp:6640"
1142             is_connected: true
1143         Bridge brtest
1144             Controller "tcp:<controller-ip>:6653"
1145                 is_connected: true
1146             Port brtest
1147                 Interface brtest
1148                     type: internal
1149         ovs_version: "2.3.1-git4750c96"
1150
1151 Query the operational MD-SAL to see how the bridge appears.
1152
1153 HTTP GET:
1154
1155 ::
1156
1157     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/
1158
1159 Result Body:
1160
1161 ::
1162
1163     {
1164       "node": [
1165         {
1166           "node-id": "ovsdb://HOST1/bridge/brtest",
1167           "ovsdb:bridge-name": "brtest",
1168           "ovsdb:datapath-type": "ovsdb:datapath-type-system",
1169           "ovsdb:datapath-id": "00:00:da:e9:0c:08:2d:45",
1170           "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']",
1171           "ovsdb:bridge-external-ids": [
1172             {
1173               "bridge-external-id-key": "opendaylight-iid",
1174               "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']"
1175             }
1176           ],
1177           "ovsdb:protocol-entry": [
1178             {
1179               "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
1180             }
1181           ],
1182           "ovsdb:bridge-uuid": "080ce9da-101e-452d-94cd-ee8bef8a4b69",
1183           "ovsdb:controller-entry": [
1184             {
1185               "target": "tcp:10.11.21.7:6653",
1186               "is-connected": true,
1187               "controller-uuid": "c39b1262-0876-4613-8bfd-c67eec1a991b"
1188             }
1189           ],
1190           "termination-point": [
1191             {
1192               "tp-id": "brtest",
1193               "ovsdb:port-uuid": "c808ae8d-7af2-4323-83c1-e397696dc9c8",
1194               "ovsdb:ofport": 65534,
1195               "ovsdb:interface-type": "ovsdb:interface-type-internal",
1196               "ovsdb:interface-uuid": "49e9417f-4479-4ede-8faf-7c873b8c0413",
1197               "ovsdb:name": "brtest"
1198             }
1199           ]
1200         }
1201       ]
1202     }
1203
1204 Notice that just like with the OVSDB node, an *opendaylight-iid* has
1205 been added to the external-ids column of the bridge since it was created
1206 via the configuration MD-SAL.
1207
1208 A bridge node may be deleted as well.
1209
1210 HTTP DELETE:
1211
1212 ::
1213
1214     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
1215
1216 Manage Ports
1217 ^^^^^^^^^^^^
1218
1219 Similarly, ports may be managed by the OVSDB Southbound Plugin.
1220
1221 This example illustrates how a port and various attributes may be
1222 created on a bridge.
1223
1224 HTTP PUT:
1225
1226 ::
1227
1228     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
1229
1230 Body:
1231
1232 ::
1233
1234     {
1235       "network-topology:termination-point": [
1236         {
1237           "ovsdb:options": [
1238             {
1239               "ovsdb:option": "remote_ip",
1240               "ovsdb:value" : "10.10.14.11"
1241             }
1242           ],
1243           "ovsdb:name": "testport",
1244           "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
1245           "tp-id": "testport",
1246           "vlan-tag": "1",
1247           "trunks": [
1248             {
1249               "trunk": "5"
1250             }
1251           ],
1252           "vlan-mode":"access"
1253         }
1254       ]
1255     }
1256
1257 Ports can be updated - add another VLAN trunk.
1258
1259 HTTP PUT:
1260
1261 ::
1262
1263     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
1264
1265 Body:
1266
1267 ::
1268
1269     {
1270       "network-topology:termination-point": [
1271         {
1272           "ovsdb:name": "testport",
1273           "tp-id": "testport",
1274           "trunks": [
1275             {
1276               "trunk": "5"
1277             },
1278             {
1279               "trunk": "500"
1280             }
1281           ]
1282         }
1283       ]
1284     }
1285
1286 Query the operational MD-SAL for the port.
1287
1288 HTTP GET:
1289
1290 ::
1291
1292     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
1293
1294 Result Body:
1295
1296 ::
1297
1298     {
1299       "termination-point": [
1300         {
1301           "tp-id": "testport",
1302           "ovsdb:port-uuid": "b1262110-2a4f-4442-b0df-84faf145488d",
1303           "ovsdb:options": [
1304             {
1305               "option": "remote_ip",
1306               "value": "10.10.14.11"
1307             }
1308           ],
1309           "ovsdb:port-external-ids": [
1310             {
1311               "external-id-key": "opendaylight-iid",
1312               "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']/network-topology:termination-point[network-topology:tp-id='testport']"
1313             }
1314           ],
1315           "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
1316           "ovsdb:trunks": [
1317             {
1318               "trunk": 5
1319             },
1320             {
1321               "trunk": 500
1322             }
1323           ],
1324           "ovsdb:vlan-mode": "access",
1325           "ovsdb:vlan-tag": 1,
1326           "ovsdb:interface-uuid": "7cec653b-f407-45a8-baec-7eb36b6791c9",
1327           "ovsdb:name": "testport",
1328           "ovsdb:ofport": 1
1329         }
1330       ]
1331     }
1332
1333 Remember that the OVSDB YANG model includes both OVSDB port and
1334 interface table attributes in the termination-point augmentation. Both
1335 kinds of attributes can be seen in the examples above. Again, note the
1336 creation of an *opendaylight-iid* value in the external-ids column of
1337 the port table.
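
If desired, the corresponding rows can also be checked directly on the OVS
host, for example:

::

    sudo ovs-vsctl list Port testport
    sudo ovs-vsctl list Interface testport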
1338
1339 Delete a port.
1340
1341 HTTP DELETE:
1342
1343 ::
1344
1345     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest2/termination-point/testport/
1346
1347 Overview of QoS and Queue
1348 ^^^^^^^^^^^^^^^^^^^^^^^^^
1349
1350 The OVSDB Southbound Plugin provides the capability of managing the QoS
1351 and Queue tables on an OVS host with OpenDaylight configured as the
1352 OVSDB manager.
1353
1354 QoS and Queue Tables in OVSDB
1355 '''''''''''''''''''''''''''''
1356
The OVSDB includes QoS and Queue tables. Unlike most of the other
tables in the OVSDB (the Open\_vSwitch table being the other exception),
the QoS and Queue tables are "root set" tables, which means that
entries, or rows, in these tables are not automatically deleted if they
cannot be reached
1361 directly or indirectly from the Open\_vSwitch table. This means that QoS
1362 entries can exist and be managed independently of whether or not they
1363 are referenced in a Port entry. Similarly, Queue entries can be managed
1364 independently of whether or not they are referenced by a QoS entry.
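
For background, the same behavior can be seen when creating QoS and Queue rows
directly with ovs-vsctl; rows like the ones below persist even if no port
references them (the values are illustrative):

::

    sudo ovs-vsctl -- --id=@q0 create Queue other-config:max-rate=3600000 \
        -- create QoS type=linux-htb other-config:max-rate=3600000 queues:0=@q0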
1365
1366 Modelling of QoS and Queue Tables in OpenDaylight MD-SAL
1367 ''''''''''''''''''''''''''''''''''''''''''''''''''''''''
1368
1369 Since the QoS and Queue tables are "root set" tables, they are modeled
1370 in the OpenDaylight MD-SAL as lists which are part of the attributes of
1371 the OVSDB node model.
1372
The MD-SAL QoS and Queue models have an additional identifier attribute
1374 per entry (e.g. "qos-id" or "queue-id") which is not present in the
1375 OVSDB schema. This identifier is used by the MD-SAL as a key for
1376 referencing the entry. If the entry is created originally from the
1377 configuration MD-SAL, then the value of the identifier is whatever is
1378 specified by the configuration. If the entry is created on the OVSDB
1379 node and received by OpenDaylight in an operational update, then the id
1380 will be created in the following format.
1381
1382 ::
1383
1384     "queue-id": "queue://<UUID>"
1385     "qos-id": "qos://<UUID>"
1386
1387 The UUID in the above identifiers is the actual UUID of the entry in the
1388 OVSDB database.
1389
1390 When the QoS or Queue entry is created by the configuration MD-SAL, the
1391 identifier will be configured as part of the external-ids column of the
1392 entry. This will ensure that the corresponding entry that is created in
1393 the operational MD-SAL uses the same identifier.
1394
1395 ::
1396
1397     "queues-external-ids": [
1398       {
1399         "queues-external-id-key": "opendaylight-queue-id",
1400         "queues-external-id-value": "QUEUE-1"
1401       }
1402     ]
1403
1404 See more in the examples that follow in this section.
1405
1406 The QoS schema in OVSDB currently defines two types of QoS entries.
1407
1408 -  linux-htb
1409
1410 -  linux-hfsc
1411
1412 These QoS types are defined in the QoS model. Additional types will need
1413 to be added to the model in order to be supported. See the examples that
follow for how the QoS type is specified in the model.
1415
QoS entries can be configured with additional attributes such as
1417 "max-rate". These are configured via the *other-config* column of the
1418 QoS entry. Refer to OVSDB schema (in the reference section below) for
1419 all of the relevant attributes that can be configured. The examples in
1420 the rest of this section will demonstrate how the other-config column
1421 may be configured.
1422
1423 Similarly, the Queue entries may be configured with additional
1424 attributes via the other-config column.
1425
1426 Managing QoS and Queues via Configuration MD-SAL
1427 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1428
1429 This section will show some examples on how to manage QoS and Queue
1430 entries via the configuration MD-SAL. The examples will be illustrated
1431 by using RESTCONF (see `QoS and Queue Postman
1432 Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons/Qos-and-Queue-Collection.json.postman_collection>`__
1433 ).
1434
1435 A pre-requisite for managing QoS and Queue entries is that the OVS host
1436 must be present in the configuration MD-SAL.
1437
1438 For the following examples, the following OVS host is configured.
1439
1440 HTTP POST:
1441
1442 ::
1443
1444     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
1445
1446 Body:
1447
1448 ::
1449
1450     {
1451       "node": [
1452         {
1453           "node-id": "ovsdb:HOST1",
1454           "connection-info": {
1455             "ovsdb:remote-ip": "<ovs-host-ip>",
1456             "ovsdb:remote-port": "<ovs-host-ovsdb-port>"
1457           }
1458         }
1459       ]
1460     }
1461
1462 Where
1463
1464 -  *<controller-ip>* is the IP address of the OpenDaylight controller
1465
1466 -  *<ovs-host-ip>* is the IP address of the OVS host
1467
1468 -  *<ovs-host-ovsdb-port>* is the TCP port of the OVSDB server on the
1469    OVS host (e.g. 6640)
1470
1471 This command creates an OVSDB node with the node-id "ovsdb:HOST1". This
1472 OVSDB node will be used in the following examples.
1473
1474 QoS and Queue entries can be created and managed without a port, but
1475 ultimately, QoS entries are associated with a port in order to use them.
1476 For the following examples a test bridge and port will be created.
1477
1478 Create the test bridge.
1479
1480 HTTP PUT
1481
1482 ::
1483
1484     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test
1485
1486 Body:
1487
1488 ::
1489
1490     {
1491       "network-topology:node": [
1492         {
1493           "node-id": "ovsdb:HOST1/bridge/br-test",
1494           "ovsdb:bridge-name": "br-test",
1495           "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"
1496         }
1497       ]
1498     }
1499
1500 Create the test port (which is modeled as a termination point in the
1501 OpenDaylight MD-SAL).
1502
1503 HTTP PUT:
1504
1505 ::
1506
1507     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
1508
1509 Body:
1510
1511 ::
1512
1513     {
1514       "network-topology:termination-point": [
1515         {
1516           "ovsdb:name": "testport",
1517           "tp-id": "testport"
1518         }
1519       ]
1520     }
1521
1522 If all of the previous steps were successful, a query of the operational
1523 MD-SAL should look something like the following results. This indicates
1524 that the configuration commands have been successfully instantiated on
1525 the OVS host.
1526
1527 HTTP GET:
1528
1529 ::
1530
1531     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test
1532
1533 Result Body:
1534
1535 ::
1536
1537     {
1538       "node": [
1539         {
1540           "node-id": "ovsdb:HOST1/bridge/br-test",
1541           "ovsdb:bridge-name": "br-test",
1542           "ovsdb:datapath-type": "ovsdb:datapath-type-system",
1543           "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']",
1544           "ovsdb:datapath-id": "00:00:8e:5d:22:3d:09:49",
1545           "ovsdb:bridge-external-ids": [
1546             {
1547               "bridge-external-id-key": "opendaylight-iid",
1548               "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']"
1549             }
1550           ],
1551           "ovsdb:bridge-uuid": "3d225d8d-d060-4909-93ef-6f4db58ef7cc",
1552           "termination-point": [
1553             {
1554               "tp-id": "br-test",
1555               "ovsdb:port-uuid": "f85f7aa7-4956-40e4-9c94-e6ca2d5cd254",
1556               "ovsdb:ofport": 65534,
1557               "ovsdb:interface-type": "ovsdb:interface-type-internal",
1558               "ovsdb:interface-uuid": "29ff3692-6ed4-4ad7-a077-1edc277ecb1a",
1559               "ovsdb:name": "br-test"
1560             },
1561             {
1562               "tp-id": "testport",
1563               "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
1564               "ovsdb:port-external-ids": [
1565                 {
1566                   "external-id-key": "opendaylight-iid",
1567                   "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
1568                 }
1569               ],
1570               "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
1571               "ovsdb:name": "testport"
1572             }
1573           ]
1574         }
1575       ]
1576     }
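
Because the southbound plugin applies configuration to the OVS host
asynchronously, the bridge and port may take a moment to show up in the
operational datastore. The sketch below (Python ``requests``; the controller
address and credentials are assumptions) polls the operational MD-SAL for the
bridge created above.

::

    import time

    import requests

    CONTROLLER = "http://192.0.2.10:8181"   # assumed controller address
    AUTH = ("admin", "admin")               # assumed RESTCONF credentials

    BRIDGE_OPER_URL = (
        CONTROLLER
        + "/restconf/operational/network-topology:network-topology/"
        + "topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test"
    )

    def wait_for_bridge(timeout_s=30, interval_s=2):
        """Poll the operational MD-SAL until the bridge node is reported."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            resp = requests.get(BRIDGE_OPER_URL, auth=AUTH)
            if resp.status_code == 200:
                node = resp.json().get("node", [{}])[0]
                if "ovsdb:bridge-uuid" in node:
                    return node
            time.sleep(interval_s)
        raise TimeoutError("br-test did not appear in the operational datastore")

    bridge = wait_for_bridge()
    print("datapath-id:", bridge.get("ovsdb:datapath-id"))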
1577
1578 Create Queue
1579 ''''''''''''
1580
1581 Create a new Queue in the configuration MD-SAL.
1582
1583 HTTP PUT:
1584
1585 ::
1586
1587     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
1588
1589 Body:
1590
1591 ::
1592
1593     {
1594       "ovsdb:queues": [
1595         {
1596           "queue-id": "QUEUE-1",
1597           "dscp": 25,
1598           "queues-other-config": [
1599             {
1600               "queue-other-config-key": "max-rate",
1601               "queue-other-config-value": "3600000"
1602             }
1603           ]
1604         }
1605       ]
1606     }
1607
1608 Query Queue
1609 '''''''''''
1610
1611 Now query the operational MD-SAL for the Queue entry.
1612
1613 HTTP GET:
1614
1615 ::
1616
1617     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
1618
1619 Result Body:
1620
1621 ::
1622
1623     {
1624       "ovsdb:queues": [
1625         {
1626           "queue-id": "QUEUE-1",
1627           "queues-other-config": [
1628             {
1629               "queue-other-config-key": "max-rate",
1630               "queue-other-config-value": "3600000"
1631             }
1632           ],
1633           "queues-external-ids": [
1634             {
1635               "queues-external-id-key": "opendaylight-queue-id",
1636               "queues-external-id-value": "QUEUE-1"
1637             }
1638           ],
1639           "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
1640           "dscp": 25
1641         }
1642       ]
1643     }
1644
1645 Create QoS
1646 ''''''''''
1647
1648 Create a QoS entry. Note that the UUID of the Queue entry, obtained by
1649 querying the operational MD-SAL, is specified in the queue-list of the
1650 QoS entry. Queue entries may be added when the QoS entry is created or by
1651 a subsequent update to the QoS entry. A sketch of filling in the queue
1652 UUID programmatically follows the example below.
1653
1654 HTTP PUT:
1655
1656 ::
1657
1658     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
1659
1660 Body:
1661
1662 ::
1663
1664     {
1665       "ovsdb:qos-entries": [
1666         {
1667           "qos-id": "QOS-1",
1668           "qos-type": "ovsdb:qos-type-linux-htb",
1669           "qos-other-config": [
1670             {
1671               "other-config-key": "max-rate",
1672               "other-config-value": "4400000"
1673             }
1674           ],
1675           "queue-list": [
1676             {
1677               "queue-number": "0",
1678               "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
1679             }
1680           ]
1681         }
1682       ]
1683     }
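
To avoid copying the UUID by hand, the queue-uuid can be read from the
operational Queue entry shown above and inserted into the QoS body. A minimal
sketch (Python ``requests``; the controller address and credentials are
assumptions):

::

    import requests

    CONTROLLER = "http://192.0.2.10:8181"   # assumed controller address
    AUTH = ("admin", "admin")               # assumed RESTCONF credentials
    BASE = CONTROLLER + "/restconf"
    NODE = "network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1"

    # Read the operational Queue entry and pick out its UUID.
    queue = requests.get(
        f"{BASE}/operational/{NODE}/ovsdb:queues/QUEUE-1/", auth=AUTH
    ).json()["ovsdb:queues"][0]

    # Build a QoS entry that references the queue as queue-number 0.
    qos_body = {
        "ovsdb:qos-entries": [
            {
                "qos-id": "QOS-1",
                "qos-type": "ovsdb:qos-type-linux-htb",
                "qos-other-config": [
                    {"other-config-key": "max-rate",
                     "other-config-value": "4400000"}
                ],
                "queue-list": [
                    {"queue-number": "0", "queue-uuid": queue["queue-uuid"]}
                ],
            }
        ]
    }

    resp = requests.put(
        f"{BASE}/config/{NODE}/ovsdb:qos-entries/QOS-1/", json=qos_body, auth=AUTH
    )
    resp.raise_for_status()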
1684
1685 Query QoS
1686 '''''''''
1687
1688 Query the operational MD-SAL for the QoS entry.
1689
1690 HTTP GET:
1691
1692 ::
1693
1694     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
1695
1696 Result Body:
1697
1698 ::
1699
1700     {
1701       "ovsdb:qos-entries": [
1702         {
1703           "qos-id": "QOS-1",
1704           "qos-other-config": [
1705             {
1706               "other-config-key": "max-rate",
1707               "other-config-value": "4400000"
1708             }
1709           ],
1710           "queue-list": [
1711             {
1712               "queue-number": 0,
1713               "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
1714             }
1715           ],
1716           "qos-type": "ovsdb:qos-type-linux-htb",
1717           "qos-external-ids": [
1718             {
1719               "qos-external-id-key": "opendaylight-qos-id",
1720               "qos-external-id-value": "QOS-1"
1721             }
1722           ],
1723           "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
1724         }
1725       ]
1726     }
1727
1728 Add QoS to a Port
1729 '''''''''''''''''
1730
1731 Update the termination point entry to include the UUID of the QoS entry,
1732 obtained by querying the operational MD-SAL, to associate a QoS entry
1733 with a port.
1734
1735 HTTP PUT:
1736
1737 ::
1738
1739     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
1740
1741 Body:
1742
1743 ::
1744
1745     {
1746       "network-topology:termination-point": [
1747         {
1748           "ovsdb:name": "testport",
1749           "tp-id": "testport",
1750           "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
1751         }
1752       ]
1753     }
1754
1755 Query the Port
1756 ''''''''''''''
1757
1758 Query the operational MD-SAL to see how the QoS entry appears in the
1759 termination point model.
1760
1761 HTTP GET:
1762
1763 ::
1764
1765     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
1766
1767 Result Body:
1768
1769 ::
1770
1771     {
1772       "termination-point": [
1773         {
1774           "tp-id": "testport",
1775           "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
1776           "ovsdb:port-external-ids": [
1777             {
1778               "external-id-key": "opendaylight-iid",
1779               "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
1780             }
1781           ],
1782           "ovsdb:qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31",
1783           "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
1784           "ovsdb:name": "testport"
1785         }
1786       ]
1787     }
1788
1789 Query the OVSDB Node
1790 ''''''''''''''''''''
1791
1792 Query the operational MD-SAL for the OVS host to see how the QoS and
1793 Queue entries appear as lists in the OVS node model.
1794
1795 HTTP GET:
1796
1797 ::
1798
1799     http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/
1800
1801 Result Body (edited to only show information relevant to the QoS and
1802 Queue entries):
1803
1804 ::
1805
1806     {
1807       "node": [
1808         {
1809           "node-id": "ovsdb:HOST1",
1810           <content edited out>
1811           "ovsdb:queues": [
1812             {
1813               "queue-id": "QUEUE-1",
1814               "queues-other-config": [
1815                 {
1816                   "queue-other-config-key": "max-rate",
1817                   "queue-other-config-value": "3600000"
1818                 }
1819               ],
1820               "queues-external-ids": [
1821                 {
1822                   "queues-external-id-key": "opendaylight-queue-id",
1823                   "queues-external-id-value": "QUEUE-1"
1824                 }
1825               ],
1826               "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
1827               "dscp": 25
1828             }
1829           ],
1830           "ovsdb:qos-entries": [
1831             {
1832               "qos-id": "QOS-1",
1833               "qos-other-config": [
1834                 {
1835                   "other-config-key": "max-rate",
1836                   "other-config-value": "4400000"
1837                 }
1838               ],
1839               "queue-list": [
1840                 {
1841                   "queue-number": 0,
1842                   "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
1843                 }
1844               ],
1845               "qos-type": "ovsdb:qos-type-linux-htb",
1846               "qos-external-ids": [
1847                 {
1848                   "qos-external-id-key": "opendaylight-qos-id",
1849                   "qos-external-id-value": "QOS-1"
1850                 }
1851               ],
1852               "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
1853             }
1854           ]
1855           <content edited out>
1856         }
1857       ]
1858     }
1859
1860 Remove QoS from a Port
1861 ''''''''''''''''''''''
1862
1863 This example removes a QoS entry from the termination point and
1864 associated port. Note that this is a PUT command on the termination
1865 point with the QoS attribute absent. Other attributes of the termination
1866 point should be included in the body of the command so that they are not
1867 inadvertently removed.
1868
1869 HTTP PUT:
1870
1871 ::
1872
1873     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
1874
1875 Body:
1876
1877 ::
1878
1879     {
1880       "network-topology:termination-point": [
1881         {
1882           "ovsdb:name": "testport",
1883           "tp-id": "testport"
1884         }
1885       ]
1886     }
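
One way to avoid dropping other attributes by accident is a read-modify-write:
fetch the current configuration of the termination point, remove only the QoS
reference, and PUT the result back. A hedged sketch (Python ``requests``; the
controller address and credentials are assumptions):

::

    import requests

    CONTROLLER = "http://192.0.2.10:8181"   # assumed controller address
    AUTH = ("admin", "admin")               # assumed RESTCONF credentials
    TP_URL = (
        CONTROLLER
        + "/restconf/config/network-topology:network-topology/topology/ovsdb:1/"
        + "node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/"
    )

    # Read the current configuration of the termination point.
    data = requests.get(TP_URL, auth=AUTH).json()
    # The top-level key may or may not carry the module prefix, so accept either.
    key = ("network-topology:termination-point"
           if "network-topology:termination-point" in data
           else "termination-point")
    tp = data[key][0]

    # Drop only the QoS reference; every other attribute is written back unchanged.
    tp.pop("ovsdb:qos", None)
    tp.pop("qos", None)

    resp = requests.put(TP_URL,
                        json={"network-topology:termination-point": [tp]},
                        auth=AUTH)
    resp.raise_for_status()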
1887
1888 Remove a Queue from QoS
1889 '''''''''''''''''''''''
1890
1891 This example removes the specific Queue entry from the queue list in the
1892 QoS entry. The queue entry is specified by the queue number, which is
1893 "0" in this example.
1894
1895 HTTP DELETE:
1896
1897 ::
1898
1899     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/queue-list/0/
1900
1901 Remove Queue
1902 ''''''''''''
1903
1904 Once all references to a specific queue entry have been removed from QoS
1905 entries, the Queue itself can be removed.
1906
1907 HTTP DELETE:
1908
1909 ::
1910
1911     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
1912
1913 Remove QoS
1914 ''''''''''
1915
1916 The QoS entry may be removed when it is no longer referenced by any
1917 ports.
1918
1919 HTTP DELETE:
1920
1921 ::
1922
1923     http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
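
Putting the teardown steps together, the sketch below (Python ``requests``;
the controller address and credentials are assumptions) removes the example
configuration in an order that respects the references described above:
detach the QoS entry from the port, clear the queue-list, then delete the
Queue and finally the QoS entry.

::

    import requests

    CONTROLLER = "http://192.0.2.10:8181"   # assumed controller address
    AUTH = ("admin", "admin")               # assumed RESTCONF credentials
    BASE = (CONTROLLER
            + "/restconf/config/network-topology:network-topology/topology/ovsdb:1")
    NODE = BASE + "/node/ovsdb:HOST1"
    TP = BASE + "/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/"

    # 1. Detach the QoS entry from the port (PUT without the qos attribute).
    requests.put(
        TP,
        json={"network-topology:termination-point": [
            {"ovsdb:name": "testport", "tp-id": "testport"}
        ]},
        auth=AUTH,
    ).raise_for_status()

    # 2. Remove the queue reference (queue-number 0) from the QoS entry.
    requests.delete(NODE + "/ovsdb:qos-entries/QOS-1/queue-list/0/",
                    auth=AUTH).raise_for_status()

    # 3. Delete the Queue once no QoS entry references it.
    requests.delete(NODE + "/ovsdb:queues/QUEUE-1/", auth=AUTH).raise_for_status()

    # 4. Delete the QoS entry once no port references it.
    requests.delete(NODE + "/ovsdb:qos-entries/QOS-1/",
                    auth=AUTH).raise_for_status()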
1924
1925 References
1926 ^^^^^^^^^^
1927
1928 `Open vSwitch
1929 schema <http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf>`__
1930
1931 `OVSDB and Netvirt Postman
1932 Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons>`__
1933
1934 OVSDB Hardware VTEP SouthBound Plugin
1935 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1936
1937 Overview
1938 ^^^^^^^^
1939
1940 The hwvtepsouthbound plugin is used to configure a hardware VTEP which
1941 implements the hardware\_vtep OVSDB schema. This section shows how to use
1942 the RESTCONF API of hwvtepsouthbound. There are two ways to connect to ODL:
1943
1944 **the user initiates the connection**, or **the switch initiates the connection**.
1945
1946 Both are introduced below.
1947
1948 User Initiates Connection
1949 ^^^^^^^^^^^^^^^^^^^^^^^^^
1950
1951 Prerequisite
1952 ''''''''''''
1953
1954 Configure the hwvtep device/node to listen for the TCP connection in
1955 passive mode. In addition, the management IP and tunnel source IP must
1956 also be configured. After this configuration is done, a physical switch is
1957 created automatically by the hwvtep node.
1958
1959 Connect to a hwvtep device/node
1960 '''''''''''''''''''''''''''''''
1961
1962 Send the RESTCONF request below to initiate the connection to a hwvtep
1963 node from the controller. The listening IP address and port of the hwvtep
1964 device/node are provided in the request body.
1965
1966 REST API: POST
1967 http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/
1968
1969 ::
1970
1971     {
1972      "network-topology:node": [
1973            {
1974                "node-id": "hwvtep://192.168.1.115:6640",
1975                "hwvtep:connection-info":
1976                {
1977                    "hwvtep:remote-port": 6640,
1978                    "hwvtep:remote-ip": "192.168.1.115"
1979                }
1980            }
1981        ]
1982     }
1983
1984 Please replace *odl* in the URL with the IP address of your OpenDaylight
1985 controller and change *192.168.1.115* to the IP of your hwvtep node.
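
For reference, the request above can be issued as follows (a minimal Python
``requests`` sketch; the controller address and the credentials are
assumptions, while the device IP and port are the ones used in the example):

::

    import requests

    ODL = "http://192.0.2.10:8181"          # assumed controller address
    AUTH = ("admin", "admin")               # assumed RESTCONF credentials
    HWVTEP_IP, HWVTEP_PORT = "192.168.1.115", 6640

    TOPOLOGY_URL = (
        ODL + "/restconf/config/network-topology:network-topology/topology/hwvtep:1/"
    )

    body = {
        "network-topology:node": [
            {
                # node-id format for a user-initiated connection: hwvtep://ip:port
                "node-id": f"hwvtep://{HWVTEP_IP}:{HWVTEP_PORT}",
                "hwvtep:connection-info": {
                    "hwvtep:remote-port": HWVTEP_PORT,
                    "hwvtep:remote-ip": HWVTEP_IP,
                },
            }
        ]
    }

    requests.post(TOPOLOGY_URL, json=body, auth=AUTH).raise_for_status()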
1986
1987 **NOTE**: The format of the node-id is fixed. It will be one of the following two:
1988
1989 User initiates connection from ODL:
1990
1991 ::
1992
1993      hwvtep://ip:port
1994
1995 Switch initiates connection:
1996
1997 ::
1998
1999      hwvtep://uuid/<uuid of switch>
2000
2001 The reason for using a UUID is that it allows multiple switches to be
2002 distinguished when they are behind a NAT.
2003
2004 After this request is completed successfully, we can get the physical
2005 switch from the operational data store.
2006
2007 REST API: GET
2008 http://odl:8181/restconf/operational/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
2009
2010 There is no body in this request.
2011
2012 The response of the request is:
2013
2014 ::
2015
2016     {
2017        "node": [
2018              {
2019                "node-id": "hwvtep://192.168.1.115:6640",
2020                "hwvtep:switches": [
2021                  {
2022                    "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
2023                  }
2024                ],
2025                "hwvtep:connection-info": {
2026                  "local-ip": "192.168.92.145",
2027                  "local-port": 47802,
2028                  "remote-port": 6640,
2029                  "remote-ip": "192.168.1.115"
2030                }
2031              },
2032              {
2033                "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
2034                "hwvtep:management-ips": [
2035                  {
2036                    "management-ips-key": "192.168.1.115"
2037                  }
2038                ],
2039                "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
2040                "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
2041                "hwvtep:hwvtep-node-description": "",
2042                "hwvtep:tunnel-ips": [
2043                  {
2044                    "tunnel-ips-key": "192.168.1.115"
2045                  }
2046                ],
2047                "hwvtep:hwvtep-node-name": "br0"
2048              }
2049            ]
2050     }
2051
2052 If a physical switch has already been created by manual configuration,
2053 we can get its node-id from this response, where it is presented in
2054 “switch-ref”. If the switch does not exist, we need to create the
2055 physical switch. Currently, most hwvtep devices do not support running
2056 multiple switches.
2057
2058 Create a physical switch
2059 ''''''''''''''''''''''''
2060
2061 REST API: POST
2062 http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/
2063
2064 request body:
2065
2066 ::
2067
2068     {
2069      "network-topology:node": [
2070            {
2071                "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
2072                "hwvtep-node-name": "ps0",
2073                "hwvtep-node-description": "",
2074                "management-ips": [
2075                  {
2076                    "management-ips-key": "192.168.1.115"
2077                  }
2078                ],
2079                "tunnel-ips": [
2080                  {
2081                    "tunnel-ips-key": "192.168.1.115"
2082                  }
2083                ],
2084                "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
2085            }
2086        ]
2087     }
2088
2089 Note: "managed-by" must provided by user. We can get its value after the
2090 step *Connect to a hwvtep device/node* since the node-id of hwvtep
2091 device is provided by user. "managed-by" is a reference typed of
2092 instance identifier. Though the instance identifier is a little
2093 complicated for RestConf, the primary user of hwvtepsouthbound plugin
2094 will be provider-type code such as NetVirt and the instance identifier
2095 is much easier to write code for.
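
For illustration, the "managed-by" value can be assembled from the topology-id
and the node-id of the connection node created earlier. A small, purely
illustrative Python helper:

::

    def managed_by_ref(topology_id: str, node_id: str) -> str:
        """Build the instance-identifier string that points at the hwvtep node."""
        return (
            "/network-topology:network-topology/"
            f"network-topology:topology[network-topology:topology-id='{topology_id}']/"
            f"network-topology:node[network-topology:node-id='{node_id}']"
        )

    # The value used in the request body above.
    print(managed_by_ref("hwvtep:1", "hwvtep://192.168.1.115:6640"))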
2096
2097 Create a logical switch
2098 '''''''''''''''''''''''
2099
2100 Creating a logical switch effectively creates a logical network. For
2101 VXLAN, it is a tunnel network identified by a common VNI.
2102
2103 REST API: POST
2104 http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
2105
2106 request body:
2107
2108 ::
2109
2110     {
2111      "logical-switches": [
2112            {
2113                "hwvtep-node-name": "ls0",
2114                "hwvtep-node-description": "",
2115                "tunnel-key": "10000"
2116             }
2117        ]
2118     }
2119
2120 Create a physical locator
2121 '''''''''''''''''''''''''
2122
2123 After the VXLAN network is ready, we will add VTEPs to it. A VTEP is
2124 described by a physical locator.
2125
2126 REST API: POST
2127 http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
2128
2129 request body:
2130
2131 ::
2132
2133      {
2134       "termination-point": [
2135            {
2136                "tp-id": "vxlan_over_ipv4:192.168.0.116",
2137                "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
2138                "dst-ip": "192.168.0.116"
2139                }
2140           ]
2141      }
2142
2143 The "tp-id" of locator is "{encapsualation-type}: {dst-ip}".
2144
2145 Note: As far as we know, the OVSDB database does not allow a new locator
2146 to be inserted on its own, so no locator is inserted immediately after
2147 this request is sent. The creation is deferred until another entity
2148 refers to the locator, such as a remote-mcast-macs entry.
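
As a convenience, the locator body can be generated from the destination IP
and the encapsulation type. The shortening of the encapsulation type inside
the tp-id mirrors the example above and is an assumption about the general
convention. A hedged Python sketch:

::

    def physical_locator(dst_ip, encap="encapsulation-type-vxlan-over-ipv4"):
        """Build a termination-point entry describing a VTEP (physical locator)."""
        # e.g. "encapsulation-type-vxlan-over-ipv4" -> "vxlan_over_ipv4"
        short = encap.replace("encapsulation-type-", "").replace("-", "_")
        return {
            "termination-point": [
                {
                    "tp-id": f"{short}:{dst_ip}",
                    "encapsulation-type": encap,
                    "dst-ip": dst_ip,
                }
            ]
        }

    # Produces the body used in the request above.
    print(physical_locator("192.168.0.116"))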
2149
2150 Create a remote-mcast-macs entry
2151 ''''''''''''''''''''''''''''''''
2152
2153 After adding a physical locator to a logical switch, we need to create a
2154 remote-mcast-macs entry to handle unknown traffic.
2155
2156 REST API: POST
2157 http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
2158
2159 request body:
2160
2161 ::
2162
2163     {
2164      "remote-mcast-macs": [
2165            {
2166                "mac-entry-key": "00:00:00:00:00:00",
2167                "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
2168                "locator-set": [
2169                     {
2170                           "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
2171                     }
2172                ]
2173            }
2174        ]
2175     }
2176
2177 The physical locator *vxlan\_over\_ipv4:192.168.0.116* was just created
2178 in "Create a physical locator". It should be noted that the list
2179 "locator-set" is immutable, that is, we must provide the set of
2180 "locator-ref" entries as a whole.
2181
2182 Note: "00:00:00:00:00:00" stands for "unknown-dst" since the type of
2183 mac-entry-key is yang:mac and does not accept "unknown-dst".
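
Because the "locator-set" must be supplied as a whole, it is convenient to
generate the complete list from all locator tp-ids before sending the request.
A hedged Python sketch using the identifiers from the examples above:

::

    HWVTEP_NODE = (
        "/network-topology:network-topology/"
        "network-topology:topology[network-topology:topology-id='hwvtep:1']/"
        "network-topology:node"
        "[network-topology:node-id='hwvtep://192.168.1.115:6640']"
    )

    def remote_mcast_entry(logical_switch, locator_tp_ids):
        """Build a remote-mcast-macs entry whose locator-set lists every locator."""
        return {
            "remote-mcast-macs": [
                {
                    "mac-entry-key": "00:00:00:00:00:00",   # stands for unknown-dst
                    "logical-switch-ref": (
                        HWVTEP_NODE
                        + "/hwvtep:logical-switches"
                        + f"[hwvtep:hwvtep-node-name='{logical_switch}']"
                    ),
                    "locator-set": [
                        {"locator-ref": (
                            HWVTEP_NODE
                            + "/network-topology:termination-point"
                            + f"[network-topology:tp-id='{tp}']"
                        )}
                        for tp in locator_tp_ids
                    ],
                }
            ]
        }

    print(remote_mcast_entry("ls0", ["vxlan_over_ipv4:192.168.0.116"]))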
2184
2185 Create a physical port
2186 ''''''''''''''''''''''
2187
2188 Now we add a physical port to the physical switch
2189 "hwvtep://192.168.1.115:6640/physicalswitch/br0". The port is attached
2190 to a physical server or an L2 network and carries VLAN 100.
2191
2192 REST API: POST
2193 http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640%2Fphysicalswitch%2Fbr0
2194
2195 ::
2196
2197     {
2198      "network-topology:termination-point": [
2199            {
2200                "tp-id": "port0",
2201                "hwvtep-node-name": "port0",
2202                "hwvtep-node-description": "",
2203                "vlan-bindings": [
2204                    {
2205                      "vlan-id-key": "100",
2206                      "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
2207                    }
2208              ]
2209            }
2210        ]
2211     }
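
If several VLANs on the port need to be mapped to different logical switches,
the "vlan-bindings" list can carry one entry per VLAN. A hedged Python sketch
building such a body (the second logical switch, *ls1*, is only illustrative):

::

    HWVTEP_NODE = (
        "/network-topology:network-topology/"
        "network-topology:topology[network-topology:topology-id='hwvtep:1']/"
        "network-topology:node"
        "[network-topology:node-id='hwvtep://192.168.1.115:6640']"
    )

    def port_with_bindings(port_name, vlan_to_ls):
        """Build a physical-port termination point with one vlan-binding per VLAN."""
        return {
            "network-topology:termination-point": [
                {
                    "tp-id": port_name,
                    "hwvtep-node-name": port_name,
                    "hwvtep-node-description": "",
                    "vlan-bindings": [
                        {
                            "vlan-id-key": str(vlan),
                            "logical-switch-ref": (
                                HWVTEP_NODE
                                + "/hwvtep:logical-switches"
                                + f"[hwvtep:hwvtep-node-name='{ls}']"
                            ),
                        }
                        for vlan, ls in vlan_to_ls.items()
                    ],
                }
            ]
        }

    print(port_with_bindings("port0", {100: "ls0", 200: "ls1"}))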
2212
2213 At this point, we have completed the basic configuration.
2214
2215 Typically, hwvtep devices learn local MAC addresses automatically, but
2216 they also support receiving MAC address entries from ODL.
2217
2218 Create a local-mcast-macs entry
2219 '''''''''''''''''''''''''''''''
2220
2221 It is similar to *Create a remote-mcast-macs entry*.
2222
2223 Create a remote-ucast-macs
2224 ''''''''''''''''''''''''''
2225
2226 REST API: POST
2227 http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
2228
2229 request body:
2230
2231 ::
2234
2235     {
2236      "remote-ucast-macs": [
2237            {
2238                "mac-entry-key": "11:11:11:11:11:11",
2239                "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
2240                "ipaddr": "1.1.1.1",
2241                "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
2242            }
2243        ]
2244     }
2245
2246 Create a local-ucast-macs entry
2247 '''''''''''''''''''''''''''''''
2248
2249 This is similar to *Create a remote-ucast-macs*.
2250
2251 Switch Initiates Connection
2252 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2253
2254 We do not need to connect to a hwvtep device/node when the switch
2255 initiates the connection. After switches connect to ODL successfully, we
2256 get the node-ids of the switches by reading the operational data store.
2257 Once the node-id of a hwvtep device is received, the remaining steps are
2258 the same as when the user initiates the connection.
2259
2260 References
2261 ^^^^^^^^^^
2262
2263 https://wiki.opendaylight.org/view/User_talk:Pzhang
2264