The OVSDB NetVirt project delivers two major pieces of functionality:

1. The OVSDB Southbound Protocol, and

2. NetVirt, a network virtualization solution.

The following diagram shows the system-level architecture of OVSDB
NetVirt in an OpenStack-based solution.

.. figure:: ./images/ovsdb/ovsdb-netvirt-architecture.jpg
   :alt: OVSDB NetVirt Architecture

   OVSDB NetVirt Architecture
NetVirt is a network virtualization solution that is a Neutron service
provider, and therefore supports the OpenStack Neutron Networking API.

The OVSDB component implements the OVSDB protocol (RFC 7047), as well
as plugins to support OVSDB schemas, such as the Open\_vSwitch database
schema and the hardware\_vtep database schema.

NetVirt has MD-SAL-based interfaces with Neutron on the northbound
side, and the OVSDB and OpenFlow plugins on the southbound side.

OVSDB NetVirt currently supports Open vSwitch virtual switches via
OpenFlow and OVSDB. Work is underway to support hardware gateways.
NetVirt services are enabled by installing the odl-ovsdb-openstack
feature using the following command:

::

    feature:install odl-ovsdb-openstack
To enable NetVirt's distributed Layer 3 routing services, the following
line must be uncommented in the etc/custom.properties file in the
OpenDaylight distribution prior to starting karaf:

::

    ovsdb.l3.fwd.enabled=yes
To start the OpenDaylight controller, run the following application in
the distribution:
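Assuming the standard distribution layout, this is the karaf launcher
in the bin directory:

::

    ./bin/karaf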
More details about using NetVirt with OpenStack can be found in the
following references:

1. The "OpenDaylight and OpenStack" guide, and

2. `Getting Started with OpenDaylight OVSDB Plugin Network
   Virtualization <https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization>`__
Some additional details about using OpenStack Security Groups and the
Data Plane Development Kit (DPDK) are provided below.
Security Groups
~~~~~~~~~~~~~~~

Security groups in OpenStack filter packets based on the configured
policies. The reference implementation in OpenStack uses iptables to
realize security groups. In OpenDaylight, OVS flows are used instead of
iptables rules, which removes the many layers of bridges and ports
required by the iptables implementation.
The current rules are applied on the basis of the following attributes:
ingress/egress, protocol, port range, and prefix. In the pipeline,
table 40 is used for the egress ACL rules and table 90 for the ingress
ACL rules.
Stateful Implementation
^^^^^^^^^^^^^^^^^^^^^^^

The security group is implemented in two modes, stateful and stateless.
Stateful mode can be enabled by changing the corresponding flag from
false to true in etc/opendaylight/karaf/netvirt-impl-default-config.xml.
The stateful implementation uses the conntrack capabilities of OVS to
track existing connections. This mode requires OVS 2.5 and Linux kernel
4.3. OVS, integrated with the netfilter framework, tracks each
connection using the five-tuple (layer-3 protocol, source address,
destination address, layer-4 protocol, layer-4 key). The connection
state is independent of the upper-level state of connection-oriented
protocols like TCP, and even connectionless protocols like UDP will
have a pseudo state. With this implementation, OVS sends the packet to
the netfilter framework to learn whether there is an entry for the
connection. netfilter returns the packet to OVS with the appropriate
flag set. Below are the states we are interested in:
- -trk - the packet was never sent to the netfilter framework.

- +trk+est - the connection is already known and was allowed
  previously; pass the packet to the next table.

- +trk+new - this is a new connection. If there is a specific rule in
  the table which allows this traffic with a commit action, an entry
  will be made in the netfilter framework. If there is no specific rule
  to allow this traffic, the packet will be dropped.
So, by default, a packet is dropped unless there is a rule to allow it.
Stateless Implementation
^^^^^^^^^^^^^^^^^^^^^^^^

The stateless mode is for OVS 2.4 and below, where connection tracking
is not supported. Here we have pseudo connection tracking using the TCP
SYN flag. Packets of protocols other than TCP are allowed by default.
For TCP packets, SYN packets will be dropped by default unless there is
a specific rule which allows TCP SYN packets to a particular port.
Fixed Rules
^^^^^^^^^^^

The security groups are associated with the VM port when the VM is
spawned. By default, a set of rules referred to as the fixed security
group rules is applied to the VM port. This includes the DHCP rules,
the ARP rules, and the conntrack rules. The conntrack rules are
inserted only in the stateful mode.
DHCP Rules
''''''''''

The DHCP rules are added to the VM port when a VM is spawned. The fixed
DHCP rules are:

- Allow DHCP server traffic ingress.
::

    cookie=0x0, duration=36.848s, table=90, n_packets=2, n_bytes=717,
    priority=61006,udp,dl_src=fa:16:3e:a1:f9:d0,
    tp_src=67,tp_dst=68 actions=goto_table:100

::

    cookie=0x0, duration=36.566s, table=90, n_packets=0, n_bytes=0,
    priority=61006,udp6,dl_src=fa:16:3e:a1:f9:d0,
    tp_src=547,tp_dst=546 actions=goto_table:100
- Allow DHCP client traffic egress.

::

    cookie=0x0, duration=2165.596s, table=40, n_packets=2, n_bytes=674,
    priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50

::

    cookie=0x0, duration=2165.513s, table=40, n_packets=0, n_bytes=0,
    priority=61012,udp6,tp_src=546,tp_dst=547 actions=goto_table:50
- Prevent DHCP server traffic from the VM port (DHCP spoofing).

::

    cookie=0x0, duration=34.711s, table=40, n_packets=0, n_bytes=0,
    priority=61011,udp,in_port=2,tp_src=67,tp_dst=68 actions=drop

::

    cookie=0x0, duration=34.519s, table=40, n_packets=0, n_bytes=0,
    priority=61011,udp6,in_port=2,tp_src=547,tp_dst=546 actions=drop
ARP Rules
'''''''''

The default ARP rules allow the ARP traffic to go in and out of the VM
port.

::

    cookie=0x0, duration=35.015s, table=40, n_packets=10, n_bytes=420,
    priority=61010,arp,arp_sha=fa:16:3e:93:88:60 actions=goto_table:50

::

    cookie=0x0, duration=35.582s, table=90, n_packets=1, n_bytes=42,
    priority=61010,arp,arp_tha=fa:16:3e:93:88:60 actions=goto_table:100
Conntrack Rules
'''''''''''''''

These rules are inserted only in the stateful mode. The conntrack rules
use the netfilter framework to track packets. The below rules are added
to the VM port:

- If a packet is not tracked (connection state -trk), it is sent to
  netfilter for tracking.

- If the packet is already tracked (netfilter returns connection state
  +trk+est) and the connection is established, then allow the packet to
  go through the pipeline.

- The third rule is the default drop rule, which drops the packet if
  the packet is tracked and new (netfilter returns connection state
  +trk+new). This rule has lower priority than the custom rules which
  shall be added.
::

    cookie=0x0, duration=35.015s table=40,priority=61021,in_port=3,
    ct_state=-trk,action=ct(table=0)

::

    cookie=0x0, duration=35.015s table=40,priority=61020,in_port=3,
    ct_state=+trk+est,action=goto_table:50

::

    cookie=0x0, duration=35.015s table=40,priority=36002,in_port=3,
    ct_state=+new,actions=drop

::

    cookie=0x0, duration=35.015s table=90,priority=61022,
    dl_dst=fa:16:3e:0d:8d:21,ct_state=+trk+est,action=goto_table:100

::

    cookie=0x0, duration=35.015s table=90,priority=61021,
    dl_dst=fa:16:3e:0d:8d:21,ct_state=-trk,action=ct(table=0)

::

    cookie=0x0, duration=35.015s table=90,priority=36002,
    dl_dst=fa:16:3e:0d:8d:21,ct_state=+new,actions=drop
TCP SYN Rule
''''''''''''

This rule is inserted in the stateless mode only. It drops TCP SYN
packets by default.
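An illustrative default-drop flow for this case might look like the
following; the match values here are made up for illustration, not
captured from a running system:

::

    table=90, priority=36002,tcp,dl_dst=fa:16:3e:0d:8d:21,
    tcp_flags=+syn actions=drop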
Custom Security Groups
^^^^^^^^^^^^^^^^^^^^^^

Users can add security groups in OpenStack via the command line or the
UI. When such a security group is associated with a VM, the flows
related to each security group rule are added to the corresponding
tables. A preconfigured security group called the default security
group is available in the Neutron DB.
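For example, a custom security group and rule can be created with the
Neutron CLI of that era; the group name and rule values below are
illustrative:

::

    neutron security-group-create web-sg
    neutron security-group-rule-create --direction ingress --protocol tcp \
        --port-range-min 1111 --port-range-max 1111 \
        --remote-ip-prefix 10.100.5.0/24 web-sg
    nova boot --flavor m1.small --image cirros --security-groups web-sg vm1

Associating the group with a VM at boot causes the corresponding flows
to be programmed into tables 40 and 90.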
If connection tracking is enabled, the match will have the connection
state and the action will have a commit along with the goto. The commit
sends the packet to the netfilter framework to cache the entry. After a
commit, for the next packet of this connection netfilter will return
+trk+est and the packet will match the fixed conntrack rule and get
forwarded to the next table.
::

    cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
    priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
    nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50

::

    cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0,
    priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
    nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100

::

    cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0,
    priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
    nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
If the mode is stateless, the match will have only the parameters
specified in the security rule and a goto in the action. The ct\_state
match and the commit action will be missing.
::

    cookie=0x0, duration=13211.171s, table=40, n_packets=0, n_bytes=0,
    priority=61007,icmp,dl_src=fa:16:3e:93:88:60,nw_dst=0.0.0.0/24,
    icmp_type=2,icmp_code=4 actions=goto_table:50

::

    cookie=0x0, duration=199.674s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:dc:49:ff,nw_src=10.100.5.3,tp_dst=2222
    actions=goto_table:100

::

    cookie=0x0, duration=199.780s, table=90, n_packets=0, n_bytes=0,
    priority=61007,tcp,dl_dst=fa:16:3e:93:88:60,nw_src=10.100.5.4,tp_dst=3333
    actions=goto_table:100
Port Range
^^^^^^^^^^

The TCP/UDP port range is supported with the help of port masks. This
dramatically reduces the number of flows required to cover a port
range. Each flow matches a masked value rather than a single port; for
example, tp\_dst=0x200/0xff00 matches all ports from 0x200 (512)
through 0x2ff (767). The below seven rules can cover a port range from
333 to 777.
::

    cookie=0x0, duration=56.129s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
    tp_dst=0x200/0xff00 actions=goto_table:100

::

    cookie=0x0, duration=55.805s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
    tp_dst=0x160/0xffe0 actions=goto_table:100

::

    cookie=0x0, duration=55.587s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
    tp_dst=0x300/0xfff8 actions=goto_table:100

::

    cookie=0x0, duration=55.437s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
    tp_dst=0x150/0xfff0 actions=goto_table:100

::

    cookie=0x0, duration=55.282s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
    tp_dst=0x14e/0xfffe actions=goto_table:100

::

    cookie=0x0, duration=54.063s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
    tp_dst=0x308/0xfffe actions=goto_table:100

::

    cookie=0x0, duration=55.130s, table=90, n_packets=0, n_bytes=0,
    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
    tp_dst=333 actions=goto_table:100
CIDR/Remote Security Group
^^^^^^^^^^^^^^^^^^^^^^^^^^

When adding a security group rule, we can select whether it applies to
a CIDR or to the set of VMs which have a particular security group
associated.

If a CIDR is selected, there will be only one flow rule added, allowing
the traffic from/to the IPs belonging to that CIDR.
::

    cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
    priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
    nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
If a remote security group is selected, a flow will be inserted for
every VM which has that security group associated.
::

    cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0,
    priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
    nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100

::

    cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0,
    priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
    nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
Rules supported in ODL
^^^^^^^^^^^^^^^^^^^^^^

The following rules are supported in the current implementation. The
direction (ingress/egress) is always expected.
+--------------------+--------------------+--------------------+--------------------+
| Protocol           | Port Range         | IP Prefix          | Remote Security    |
|                    |                    |                    | Group supported    |
+====================+====================+====================+====================+
| Any                | Any                | Any                | Yes                |
+--------------------+--------------------+--------------------+--------------------+
| TCP                | 1 - 65535          | 0.0.0.0/0          | Yes                |
+--------------------+--------------------+--------------------+--------------------+
| UDP                | 1 - 65535          | 0.0.0.0/0          | Yes                |
+--------------------+--------------------+--------------------+--------------------+
| ICMP               | Any                | 0.0.0.0/0          | Yes                |
+--------------------+--------------------+--------------------+--------------------+

Table: Supported Rules
Note: IPv6 and the port-range feature are not supported as of today.
Using OVS with DPDK hosts and OVSDB NetVirt
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Data Plane Development Kit (`DPDK <http://dpdk.org/>`__) is a
userspace set of libraries and drivers designed for fast packet
processing. The userspace datapath variant of OVS can be built with
DPDK enabled to provide the performance features of DPDK to Open
vSwitch (OVS). In the 2.4.0 version of OVS, the Open\_vSwitch table
schema was enhanced to include the lists *datapath-types* and
*interface-types*. When the OVS with DPDK variant of OVS is running,
the *interface-types* list will include DPDK interface types such as
*dpdk* and *dpdkvhostuser*. The OVSDB Southbound Plugin includes this
information in the OVSDB YANG model in the MD-SAL, so when a specific
OVS host is running OVS with DPDK, it is possible for NetVirt to detect
that by checking whether DPDK interface types are included in the list
of supported interface types.
For example, query the operational MD-SAL for OVSDB nodes:

HTTP GET:

::

    http://{{CONTROLLER-IP}}:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/

Result Body:

::

    {
      "topology": [
        {
          "topology-id": "ovsdb:1",
          < content edited out >
          "node": [
            {
              "node-id": "ovsdb://uuid/f9b58b6d-04db-459a-b914-fff82b738aec",
              < content edited out >
              "ovsdb:interface-type-entry": [
                { "interface-type": "ovsdb:interface-type-ipsec-gre" },
                { "interface-type": "ovsdb:interface-type-internal" },
                { "interface-type": "ovsdb:interface-type-system" },
                { "interface-type": "ovsdb:interface-type-patch" },
                { "interface-type": "ovsdb:interface-type-dpdkvhostuser" },
                { "interface-type": "ovsdb:interface-type-dpdk" },
                { "interface-type": "ovsdb:interface-type-dpdkr" },
                { "interface-type": "ovsdb:interface-type-vxlan" },
                { "interface-type": "ovsdb:interface-type-lisp" },
                { "interface-type": "ovsdb:interface-type-geneve" },
                { "interface-type": "ovsdb:interface-type-gre" },
                { "interface-type": "ovsdb:interface-type-tap" },
                { "interface-type": "ovsdb:interface-type-stt" }
              ],
              < content edited out >
              "ovsdb:datapath-type-entry": [
                { "datapath-type": "ovsdb:datapath-type-netdev" },
                { "datapath-type": "ovsdb:datapath-type-system" }
              ],
              < content edited out >
            }
          ]
        }
      ]
    }
This example illustrates the output of an OVS with DPDK host because
the list of interface types includes types supported by DPDK.
Bridges on OVS with DPDK hosts need to be created with the *netdev*
datapath type, and DPDK-specific ports need to be created with the
appropriate interface type. The OpenDaylight OVSDB Southbound Plugin
supports these attributes, as sketched below.
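As a sketch of what this looks like through the OVSDB Southbound
Plugin, a bridge-creation request (following the bridge-creation
examples later in this document; the host and bridge names here are
illustrative) can carry the *netdev* datapath type:

::

    PUT http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrdpdk

    {
      "network-topology:node": [
        {
          "node-id": "ovsdb://HOST1/bridge/brdpdk",
          "ovsdb:bridge-name": "brdpdk",
          "ovsdb:datapath-type": "ovsdb:datapath-type-netdev",
          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
        }
      ]
    }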
The OpenDaylight NetVirt application checks whether the OVS host is
using OVS with DPDK when creating the bridges that are expected to be
present on the host, e.g. *br-int*.
Following are some tips for supporting hosts using OVS with DPDK when
using NetVirt as the Neutron service provider and *devstack* to deploy
OpenStack.

In addition to the *networking-odl* ML2 plugin, enable the
*networking-ovs-dpdk* plugin in *local.conf*.
::

    # For working with OpenStack Liberty
    enable_plugin networking-odl https://github.com/FedericoRessi/networking-odl integration/liberty
    enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk stable/liberty

::

    # For working with OpenStack Mitaka (or later)
    enable_plugin networking-odl https://github.com/openstack/networking-odl
    enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk
The order of these plugin lines is important. The *networking-odl*
plugin will install and set up *openvswitch*. The *networking-ovs-dpdk*
plugin will then install OVS with DPDK. Note that the
*networking-ovs-dpdk* plugin is only being used here to set up OVS with
DPDK; the *networking-odl* plugin will be used as the Neutron ML2
driver.
For VXLAN tenant network support, the NetVirt application interacts
with an OVS with DPDK host in the same way as with OVS hosts using the
kernel datapath: by creating VXLAN ports on *br-int* to communicate
with other tunnel endpoints. The IP address for the local tunnel
endpoint may be configured in the *local.conf* file. For example:

::

    ODL_LOCAL_IP=192.100.200.10
NetVirt will use this information to configure the VXLAN port on
*br-int*. On a host with the OVS kernel datapath, it is expected that
there will be a networking interface configured with this IP address.
On an OVS with DPDK host, an OVS bridge is created and a DPDK port is
added to the bridge. The local tunnel endpoint address is then assigned
to the bridge port of the bridge. So, for example, if the physical
network interface is associated with *eth0* on the host, a bridge named
*br-eth0* could be created. The DPDK port, such as *dpdk0* (per the
naming conventions of OVS with DPDK), is added to bridge *br-eth0*. The
local tunnel endpoint address is assigned to the network interface
*br-eth0* which is attached to bridge *br-eth0*. None of this setup is
done by NetVirt. The *networking-ovs-dpdk* plugin can be made to
perform this setup by putting configuration like the following in
*local.conf*.
::

    ODL_LOCAL_IP=192.168.200.9
    ODL_PROVIDER_MAPPINGS=physnet1:eth0,physnet2:eth1
    OVS_DPDK_PORT_MAPPINGS=eth0:br-eth0,eth1:br-ex
    OVS_BRIDGE_MAPPINGS=physnet1:br-eth0,physnet2:br-ex
The above settings associate the host networking interface *eth0* with
bridge *br-eth0*. The *networking-ovs-dpdk* plugin will determine the
DPDK port name associated with *eth0* and add it to the bridge
*br-eth0*. If using the NetVirt L3 support, these settings will also
set up the *br-ex* bridge and attach the DPDK port associated with
network interface *eth1* to it.
The following settings are included in *local.conf* to specify
attributes associated with OVS with DPDK. These are used by the
*networking-ovs-dpdk* plugin to configure OVS with DPDK.

::

    OVS_DATAPATH_TYPE=netdev
    OVS_NUM_HUGEPAGES=8192
    OVS_DPDK_MEM_SEGMENTS=8192
    OVS_HUGEPAGE_MOUNT_PAGESIZE=2M
    OVS_DPDK_RTE_LIBRTE_VHOST=y
    OVS_DPDK_MODE=compute
Once the stack is up and running, virtual machines may be deployed on
OVS with DPDK hosts. The *networking-odl* plugin ensures that
*dpdkvhostuser* interfaces are utilized by Nova instead of the default
*tap* interface. The *dpdkvhostuser* interface provides the best
performance for VMs on OVS with DPDK hosts.

A Nova flavor is created for VMs that may be deployed on OVS with DPDK
hosts.

::

    nova flavor-create largepage-flavor 1002 1024 4 1
    nova flavor-key 1002 set "hw:mem_page_size=large"

Then, just specify the flavor when creating a VM.

::

    nova boot --flavor largepage-flavor --image cirros-0.3.4-x86_64-uec --nic net-id=<NET ID VALUE> vm-name
Overview and Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~

There are currently two OVSDB Southbound plugins:

- odl-ovsdb-southbound: Implements the OVSDB Open\_vSwitch database
  schema.

- odl-ovsdb-hwvtepsouthbound: Implements the OVSDB hardware\_vtep
  database schema.

These plugins are normally installed and used automatically by higher
level applications such as odl-ovsdb-openstack; however, they can also
be installed separately and used via their REST APIs, as described in
the following sections.
OVSDB Southbound Plugin
~~~~~~~~~~~~~~~~~~~~~~~

The OVSDB Southbound Plugin provides support for managing OVS hosts
via an OVSDB model in the MD-SAL which maps to important tables and
attributes present in the Open\_vSwitch schema. The OVSDB Southbound
Plugin is able to connect actively or passively to OVS hosts and
operate as the OVSDB manager of the OVS host. Using the OVSDB protocol,
it is able to manage the OVS database (OVSDB) on the OVS host as
defined by the Open\_vSwitch schema.
OVSDB YANG Model
^^^^^^^^^^^^^^^^

The OVSDB Southbound Plugin provides a YANG model which is based on the
abstract `network topology
model <https://github.com/opendaylight/yangtools/blob/stable/beryllium/yang/yang-parser-impl/src/test/resources/ietf/network-topology%402013-10-21.yang>`__.

The details of the OVSDB YANG model are defined in the
`ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
file.

The OVSDB YANG model defines three augmentations:
**ovsdb-node-augmentation**
    This augments the network-topology node and maps primarily to the
    Open\_vSwitch table of the OVSDB schema. The ovsdb-node-augmentation
    is a representation of the OVS host. It contains the following
    attributes.

    - **connection-info** - holds the local and remote IP address and
      TCP port numbers for the OpenDaylight-to-OVSDB node connections

    - **db-version** - version of the OVSDB database

    - **ovs-version** - version of OVS

    - **list managed-node-entry** - a list of references to
      ovsdb-bridge-augmentation nodes, which are the OVS bridges
      managed by this OVSDB node

    - **list datapath-type-entry** - a list of the datapath types
      supported by the OVSDB node (e.g. *system*, *netdev*) - depends
      on newer OVS versions

    - **list interface-type-entry** - a list of the interface types
      supported by the OVSDB node (e.g. *internal*, *vxlan*, *gre*,
      *dpdk*, etc.) - depends on newer OVS versions

    - **list openvswitch-external-ids** - a list of the key/value pairs
      in the Open\_vSwitch table external\_ids column

    - **list openvswitch-other-config** - a list of the key/value pairs
      in the Open\_vSwitch table other\_config column

    - **list manager-entry** - a list of manager information entries
      and connection status

    - **list qos-entries** - a list of QoS entries present in the QoS
      table

    - **list queues** - a list of Queue entries present in the Queue
      table
**ovsdb-bridge-augmentation**
    This augments the network-topology node and maps to a specific
    bridge in the OVSDB bridge table of the associated OVSDB node. It
    contains the following attributes.

    - **bridge-uuid** - UUID of the OVSDB bridge

    - **bridge-name** - name of the OVSDB bridge

    - **bridge-openflow-node-ref** - a reference (instance-identifier)
      to the OpenFlow node associated with this bridge

    - **list protocol-entry** - the version of OpenFlow protocol to use
      with the OpenFlow controller

    - **list controller-entry** - a list of the controller-uuid and
      is-connected status of the OpenFlow controllers associated with
      this bridge

    - **datapath-id** - the datapath ID associated with this bridge on
      the OVS host

    - **datapath-type** - the datapath type of this bridge

    - **fail-mode** - the OVSDB fail mode setting of this bridge

    - **flow-node** - a reference to the flow node corresponding to
      this bridge

    - **managed-by** - a reference to the ovsdb-node-augmentation
      (OVSDB node) that is managing this bridge

    - **list bridge-external-ids** - a list of the key/value pairs in
      the bridge table external\_ids column for this bridge

    - **list bridge-other-configs** - a list of the key/value pairs in
      the bridge table other\_config column for this bridge
**ovsdb-termination-point-augmentation**
    This augments the topology termination point model. The OVSDB
    Southbound Plugin uses this model to represent both the OVSDB port
    and OVSDB interface for a given port/interface in the OVSDB schema.
    It contains the following attributes.

    - **port-uuid** - UUID of an OVSDB port row

    - **interface-uuid** - UUID of an OVSDB interface row

    - **name** - name of the port and interface

    - **interface-type** - the interface type

    - **list options** - a list of port options

    - **ofport** - the OpenFlow port number of the interface

    - **ofport\_request** - the requested OpenFlow port number for the
      interface

    - **vlan-tag** - the VLAN tag value

    - **list trunks** - list of VLAN tag values for trunk mode

    - **vlan-mode** - the VLAN mode (e.g. access, native-tagged,
      native-untagged, trunk)

    - **list port-external-ids** - a list of the key/value pairs in the
      port table external\_ids column for this port

    - **list interface-external-ids** - a list of the key/value pairs
      in the interface table external\_ids column for this interface

    - **list port-other-configs** - a list of the key/value pairs in
      the port table other\_config column for this port

    - **list interface-other-configs** - a list of the key/value pairs
      in the interface table other\_config column for this interface

    - **list interface-lldp** - LLDP Auto Attach configuration for the
      interface

    - **qos** - UUID of the QoS entry in the QoS table assigned to this
      port
Getting Started
^^^^^^^^^^^^^^^

To install the OVSDB Southbound Plugin, use the following command at
the Karaf console:

::

    feature:install odl-ovsdb-southbound-impl-ui

After installing the OVSDB Southbound Plugin, and before any OVSDB
topology nodes have been created, the OVSDB topology will appear as
follows in the configuration and operational MD-SAL.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/

or

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/

Result Body:

::

    {
      "topology": [
        {
          "topology-id": "ovsdb:1"
        }
      ]
    }

where *<controller-ip>* is the IP address of the OpenDaylight
controller.
OpenDaylight as the OVSDB Manager
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An OVS host is a system which is running the OVS software and is
capable of being managed by an OVSDB manager. The OVSDB Southbound
Plugin is capable of connecting to an OVS host and operating as an
OVSDB manager. Depending on the configuration of the OVS host, the
connection of OpenDaylight to the OVS host will be active or passive.
Active Connection to OVS Hosts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An active connection is when the OVSDB Southbound Plugin initiates the
connection to an OVS host. This happens when the OVS host is configured
to listen for the connection (i.e. the OVSDB Southbound Plugin is
active and the OVS host is passive). The OVS host is configured with
the following command:

::

    sudo ovs-vsctl set-manager ptcp:6640

This configures the OVS host to listen on TCP port 6640.
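The setting can be verified on the OVS host; ovs-vsctl also provides a
get-manager command for this:

::

    sudo ovs-vsctl get-manager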
The OVSDB Southbound Plugin can be configured via the configuration
MD-SAL to actively connect to an OVS host.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1

Body:

::

    {
      "network-topology:node": [
        {
          "node-id": "ovsdb://HOST1",
          "connection-info": {
            "ovsdb:remote-port": "6640",
            "ovsdb:remote-ip": "<ovs-host-ip>"
          }
        }
      ]
    }

where *<ovs-host-ip>* is the IP address of the OVS host.

Note that the configuration assigns a *node-id* of "ovsdb://HOST1" to
the OVSDB node. This *node-id* will be used as the identifier for this
OVSDB node in the MD-SAL.
Query the configuration MD-SAL for the OVSDB topology.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/

Result Body:

::

    {
      "topology": [
        {
          "topology-id": "ovsdb:1",
          "node": [
            {
              "node-id": "ovsdb://HOST1",
              "ovsdb:connection-info": {
                "remote-ip": "<ovs-host-ip>",
                "remote-port": "6640"
              }
            }
          ]
        }
      ]
    }
As a result of the OVSDB node configuration being added to the
configuration MD-SAL, the OVSDB Southbound Plugin will attempt to
connect with the specified OVS host. If the connection is successful,
the plugin will connect to the OVS host as an OVSDB manager, query the
schemas and databases supported by the OVS host, and register to
monitor changes made to the OVSDB tables on the OVS host. It will also
set an external id key and value in the external-ids column of the
Open\_vSwitch table of the OVS host which identifies the MD-SAL
instance identifier of the OVSDB node. This ensures that the OVSDB node
will use the same *node-id* in both the configuration and operational
MD-SAL.

::

    "opendaylight-iid" = "instance identifier of OVSDB node in the MD-SAL"
When the OVS host sends the OVSDB Southbound Plugin the first update
message after the monitoring has been established, the plugin will
populate the operational MD-SAL with the information it receives from
the OVS host.

Query the operational MD-SAL for the OVSDB topology.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/

Result Body:

::

    {
      "topology": [
        {
          "topology-id": "ovsdb:1",
          "node": [
            {
              "node-id": "ovsdb://HOST1",
              "ovsdb:openvswitch-external-ids": [
                {
                  "external-id-key": "opendaylight-iid",
                  "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
                }
              ],
              "ovsdb:connection-info": {
                "local-ip": "<controller-ip>",
                "remote-ip": "<ovs-host-ip>",
                "remote-port": 6640
              },
              "ovsdb:ovs-version": "2.3.1-git4750c96",
              "ovsdb:manager-entry": [
                {
                  "target": "ptcp:6640",
                  "connected": true,
                  "number_of_connections": 1
                }
              ],
              < content edited out >
            }
          ]
        }
      ]
    }
To disconnect an active connection, just delete the OVSDB node from the
configuration MD-SAL.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1

Note in the above example that the */* characters which are part of the
*node-id* are percent-encoded as "%2F".
Passive Connection to OVS Hosts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A passive connection is when the OVS host initiates the connection to
the OVSDB Southbound Plugin. This happens when the OVS host is
configured to connect to the OVSDB Southbound Plugin. The OVS host is
configured with the following command:

::

    sudo ovs-vsctl set-manager tcp:<controller-ip>:6640

The OVSDB Southbound Plugin is configured to listen for OVSDB
connections on TCP port 6640. This value can be changed by editing the
"./karaf/target/assembly/etc/custom.properties" file and changing the
value of the "ovsdb.listenPort" attribute.
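For example, to move the listener to another port (the port value below
is illustrative):

::

    ovsdb.listenPort=6641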
When a passive connection is made, the OVSDB node will appear first in
the operational MD-SAL. If the Open\_vSwitch table does not contain an
external-ids value of *opendaylight-iid*, then the *node-id* of the new
OVSDB node will be created in the format:

::

    "ovsdb://uuid/<actual UUID value>"

If an *opendaylight-iid* value was already present in the external-ids
column, then the instance identifier defined there will be used to
create the *node-id* instead.
Query the operational MD-SAL for an OVSDB node after a passive
connection.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/

Result Body:

::

    {
      "topology": [
        {
          "topology-id": "ovsdb:1",
          "node": [
            {
              "node-id": "ovsdb://uuid/163724f4-6a70-428a-a8a0-63b2a21f12dd",
              "ovsdb:openvswitch-external-ids": [
                {
                  "external-id-key": "system-id",
                  "external-id-value": "ecf160af-e78c-4f6b-a005-83a6baa5c979"
                }
              ],
              "ovsdb:connection-info": {
                "local-ip": "<controller-ip>",
                "local-port": 6640,
                "remote-port": 46731,
                "remote-ip": "<ovs-host-ip>"
              },
              "ovsdb:ovs-version": "2.3.1-git4750c96",
              "ovsdb:manager-entry": [
                {
                  "target": "tcp:10.11.21.7:6640",
                  "connected": true,
                  "number_of_connections": 1
                }
              ],
              < content edited out >
            }
          ]
        }
      ]
    }

Take note of the *node-id* that was created in this case.
Manage Bridges
^^^^^^^^^^^^^^

The OVSDB Southbound Plugin can be used to manage bridges on an OVS
host.

This example shows how to add a bridge to the OVSDB node
*ovsdb://HOST1*.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest

Body:

::

    {
      "network-topology:node": [
        {
          "node-id": "ovsdb://HOST1/bridge/brtest",
          "ovsdb:bridge-name": "brtest",
          "ovsdb:protocol-entry": [
            {
              "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
            }
          ],
          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
        }
      ]
    }

Notice that the *ovsdb:managed-by* attribute is specified in the
command. This indicates the association of the new bridge node with its
OVSDB node.
Bridges can be updated. In the following example, OpenDaylight is
configured to be the OpenFlow controller for the bridge.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest

Body:

::

    {
      "network-topology:node": [
        {
          "node-id": "ovsdb://HOST1/bridge/brtest",
          "ovsdb:bridge-name": "brtest",
          "ovsdb:controller-entry": [
            {
              "target": "tcp:<controller-ip>:6653"
            }
          ],
          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
        }
      ]
    }
If the OpenDaylight OpenFlow Plugin is installed, then checking on the
OVS host will show that OpenDaylight has successfully connected as the
controller for the bridge.

::

    $ sudo ovs-vsctl show
        ...
        Bridge brtest
            Controller "tcp:<controller-ip>:6653"
                is_connected: true
            Port brtest
                Interface brtest
                    type: internal
        ovs_version: "2.3.1-git4750c96"
Query the operational MD-SAL to see how the bridge appears.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/

Result Body:

::

    {
      "node": [
        {
          "node-id": "ovsdb://HOST1/bridge/brtest",
          "ovsdb:bridge-name": "brtest",
          "ovsdb:datapath-type": "ovsdb:datapath-type-system",
          "ovsdb:datapath-id": "00:00:da:e9:0c:08:2d:45",
          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']",
          "ovsdb:bridge-external-ids": [
            {
              "bridge-external-id-key": "opendaylight-iid",
              "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']"
            }
          ],
          "ovsdb:protocol-entry": [
            {
              "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
            }
          ],
          "ovsdb:bridge-uuid": "080ce9da-101e-452d-94cd-ee8bef8a4b69",
          "ovsdb:controller-entry": [
            {
              "target": "tcp:10.11.21.7:6653",
              "is-connected": true,
              "controller-uuid": "c39b1262-0876-4613-8bfd-c67eec1a991b"
            }
          ],
          "termination-point": [
            {
              "tp-id": "brtest",
              "ovsdb:port-uuid": "c808ae8d-7af2-4323-83c1-e397696dc9c8",
              "ovsdb:ofport": 65534,
              "ovsdb:interface-type": "ovsdb:interface-type-internal",
              "ovsdb:interface-uuid": "49e9417f-4479-4ede-8faf-7c873b8c0413",
              "ovsdb:name": "brtest"
            }
          ]
        }
      ]
    }

Notice that just like with the OVSDB node, an *opendaylight-iid* has
been added to the external-ids column of the bridge since it was
created via the configuration MD-SAL.
A bridge node may be deleted as well.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
Manage Ports
^^^^^^^^^^^^

Similarly, ports may be managed by the OVSDB Southbound Plugin.

This example illustrates how a port and various attributes may be
created on a bridge.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/

Body:

::

    {
      "network-topology:termination-point": [
        {
          "ovsdb:options": [
            {
              "ovsdb:option": "remote_ip",
              "ovsdb:value" : "10.10.14.11"
            }
          ],
          "ovsdb:name": "testport",
          "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
          "tp-id": "testport",
          "vlan-tag": "1",
          "trunks": [
            {
              "trunk": "5"
            }
          ],
          "vlan-mode": "access"
        }
      ]
    }
Ports can be updated - add another VLAN trunk.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/

Body:

::

    {
      "network-topology:termination-point": [
        {
          "ovsdb:name": "testport",
          "tp-id": "testport",
          "trunks": [
            {
              "trunk": "5"
            },
            {
              "trunk": "500"
            }
          ]
        }
      ]
    }
Query the operational MD-SAL for the port.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/

Result Body:

::

    {
      "termination-point": [
        {
          "tp-id": "testport",
          "ovsdb:port-uuid": "b1262110-2a4f-4442-b0df-84faf145488d",
          "ovsdb:options": [
            {
              "option": "remote_ip",
              "value": "10.10.14.11"
            }
          ],
          "ovsdb:port-external-ids": [
            {
              "external-id-key": "opendaylight-iid",
              "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']/network-topology:termination-point[network-topology:tp-id='testport']"
            }
          ],
          "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
          "ovsdb:trunks": [
            {
              "trunk": 5
            },
            {
              "trunk": 500
            }
          ],
          "ovsdb:vlan-mode": "access",
          "ovsdb:vlan-tag": 1,
          "ovsdb:interface-uuid": "7cec653b-f407-45a8-baec-7eb36b6791c9",
          "ovsdb:name": "testport"
        }
      ]
    }
Remember that the OVSDB YANG model includes both OVSDB port and
interface table attributes in the termination-point augmentation. Both
kinds of attributes can be seen in the examples above. Again, note the
creation of an *opendaylight-iid* value in the external-ids column of
the port entry.

A port may be deleted as well.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest2/termination-point/testport/
Overview of QoS and Queue
^^^^^^^^^^^^^^^^^^^^^^^^^

The OVSDB Southbound Plugin provides the capability of managing the QoS
and Queue tables on an OVS host, with OpenDaylight configured as the
OVSDB manager.

QoS and Queue Tables in OVSDB
'''''''''''''''''''''''''''''

The OVSDB includes QoS and Queue tables. Unlike most of the other
tables in the OVSDB, except the Open\_vSwitch table, the QoS and Queue
tables are "root set" tables, which means that entries, or rows, in
these tables are not automatically deleted if they cannot be reached
directly or indirectly from the Open\_vSwitch table. This means that
QoS entries can exist and be managed independently of whether or not
they are referenced in a Port entry. Similarly, Queue entries can be
managed independently of whether or not they are referenced by a QoS
entry.
Modelling of QoS and Queue Tables in OpenDaylight MD-SAL
''''''''''''''''''''''''''''''''''''''''''''''''''''''''

Since the QoS and Queue tables are "root set" tables, they are modeled
in the OpenDaylight MD-SAL as lists which are part of the attributes of
the OVSDB node model.

The MD-SAL QoS and Queue models have an additional identifier attribute
per entry (e.g. "qos-id" or "queue-id") which is not present in the
OVSDB schema. This identifier is used by the MD-SAL as a key for
referencing the entry. If the entry is created originally from the
configuration MD-SAL, then the value of the identifier is whatever is
specified by the configuration. If the entry is created on the OVSDB
node and received by OpenDaylight in an operational update, then the id
will be created in the following format.

::

    "queue-id": "queue://<UUID>"
    "qos-id": "qos://<UUID>"

The UUID in the above identifiers is the actual UUID of the entry in
the OVSDB database.

When the QoS or Queue entry is created by the configuration MD-SAL, the
identifier will be configured as part of the external-ids column of the
entry. This will ensure that the corresponding entry that is created in
the operational MD-SAL uses the same identifier.

::

    "queues-external-ids": [
      {
        "queues-external-id-key": "opendaylight-queue-id",
        "queues-external-id-value": "QUEUE-1"
      }
    ]

See more in the examples that follow in this section.
The QoS schema in OVSDB currently defines two types of QoS entries:

- linux-htb

- linux-hfsc

These QoS types are defined in the QoS model. Additional types will
need to be added to the model in order to be supported. See the
examples that follow for how the QoS type is specified in the model.

QoS entries can be configured with additional attributes such as
"max-rate". These are configured via the *other-config* column of the
QoS entry. Refer to the OVSDB schema (in the reference section below)
for all of the relevant attributes that can be configured. The examples
in the rest of this section will demonstrate how the other-config
column may be configured.

Similarly, the Queue entries may be configured with additional
attributes via the other-config column.
Managing QoS and Queues via Configuration MD-SAL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section will show some examples on how to manage QoS and Queue
entries via the configuration MD-SAL. The examples will be illustrated
using RESTCONF (see the `QoS and Queue Postman
Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons/Qos-and-Queue-Collection.json.postman_collection>`__).

A pre-requisite for managing QoS and Queue entries is that the OVS host
must be present in the configuration MD-SAL.

For the following examples, the following OVS host is configured.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/

Body:

::

    {
      "network-topology:node": [
        {
          "node-id": "ovsdb:HOST1",
          "connection-info": {
            "ovsdb:remote-ip": "<ovs-host-ip>",
            "ovsdb:remote-port": "<ovs-host-ovsdb-port>"
          }
        }
      ]
    }

where:

- *<controller-ip>* is the IP address of the OpenDaylight controller

- *<ovs-host-ip>* is the IP address of the OVS host

- *<ovs-host-ovsdb-port>* is the TCP port of the OVSDB server on the
  OVS host (e.g. 6640)

This command creates an OVSDB node with the node-id "ovsdb:HOST1". This
OVSDB node will be used in the following examples.
QoS and Queue entries can be created and managed without a port, but
ultimately, QoS entries are associated with a port in order to use
them. For the following examples, a test bridge and port will be
created.

Create the test bridge.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test

Body:

::

    {
      "network-topology:node": [
        {
          "node-id": "ovsdb:HOST1/bridge/br-test",
          "ovsdb:bridge-name": "br-test",
          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"
        }
      ]
    }
Create the test port (which is modeled as a termination point in the
OpenDaylight MD-SAL).

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

::

    {
      "network-topology:termination-point": [
        {
          "ovsdb:name": "testport",
          "tp-id": "testport"
        }
      ]
    }
If all of the previous steps were successful, a query of the
operational MD-SAL should look something like the following results.
This indicates that the configuration commands have been successfully
instantiated on the OVS host.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test

Result Body:

::

    {
      "node": [
        {
          "node-id": "ovsdb:HOST1/bridge/br-test",
          "ovsdb:bridge-name": "br-test",
          "ovsdb:datapath-type": "ovsdb:datapath-type-system",
          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']",
          "ovsdb:datapath-id": "00:00:8e:5d:22:3d:09:49",
          "ovsdb:bridge-external-ids": [
            {
              "bridge-external-id-key": "opendaylight-iid",
              "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']"
            }
          ],
          "ovsdb:bridge-uuid": "3d225d8d-d060-4909-93ef-6f4db58ef7cc",
          "termination-point": [
            {
              "tp-id": "br-test",
              "ovsdb:port-uuid": "f85f7aa7-4956-40e4-9c94-e6ca2d5cd254",
              "ovsdb:ofport": 65534,
              "ovsdb:interface-type": "ovsdb:interface-type-internal",
              "ovsdb:interface-uuid": "29ff3692-6ed4-4ad7-a077-1edc277ecb1a",
              "ovsdb:name": "br-test"
            },
            {
              "tp-id": "testport",
              "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
              "ovsdb:port-external-ids": [
                {
                  "external-id-key": "opendaylight-iid",
                  "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
                }
              ],
              "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
              "ovsdb:name": "testport"
            }
          ]
        }
      ]
    }
Create Queue
''''''''''''

Create a new Queue in the configuration MD-SAL.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Body:

::

    {
      "ovsdb:queues": [
        {
          "queue-id": "QUEUE-1",
          "queues-other-config": [
            {
              "queue-other-config-key": "max-rate",
              "queue-other-config-value": "3600000"
            }
          ]
        }
      ]
    }
Query Queue
'''''''''''

Now query the operational MD-SAL for the Queue entry.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Result Body:

::

    {
      "ovsdb:queues": [
        {
          "queue-id": "QUEUE-1",
          "queues-other-config": [
            {
              "queue-other-config-key": "max-rate",
              "queue-other-config-value": "3600000"
            }
          ],
          "queues-external-ids": [
            {
              "queues-external-id-key": "opendaylight-queue-id",
              "queues-external-id-value": "QUEUE-1"
            }
          ],
          "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
        }
      ]
    }
Create QoS
''''''''''

Create a QoS entry. Note that the UUID of the Queue entry, obtained by
querying the operational MD-SAL of the Queue entry, is specified in the
queue-list of the QoS entry. Queue entries may be added to the QoS
entry at the creation of the QoS entry, or by a subsequent update to
the QoS entry.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/

Body:

::

    {
      "ovsdb:qos-entries": [
        {
          "qos-id": "QOS-1",
          "qos-type": "ovsdb:qos-type-linux-htb",
          "qos-other-config": [
            {
              "other-config-key": "max-rate",
              "other-config-value": "4400000"
            }
          ],
          "queue-list": [
            {
              "queue-number": "0",
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
            }
          ]
        }
      ]
    }
Query QoS
'''''''''

Query the operational MD-SAL for the QoS entry.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/

Result Body:

::

    {
      "ovsdb:qos-entries": [
        {
          "qos-id": "QOS-1",
          "qos-other-config": [
            {
              "other-config-key": "max-rate",
              "other-config-value": "4400000"
            }
          ],
          "queue-list": [
            {
              "queue-number": 0,
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
            }
          ],
          "qos-type": "ovsdb:qos-type-linux-htb",
          "qos-external-ids": [
            {
              "qos-external-id-key": "opendaylight-qos-id",
              "qos-external-id-value": "QOS-1"
            }
          ],
          "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
        }
      ]
    }
Add QoS to a Port
'''''''''''''''''

Update the termination point entry to include the UUID of the QoS
entry, obtained by querying the operational MD-SAL, to associate a QoS
entry with a port.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

::

    {
      "network-topology:termination-point": [
        {
          "ovsdb:name": "testport",
          "tp-id": "testport",
          "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
        }
      ]
    }
Query the Port
''''''''''''''

Query the operational MD-SAL to see how the QoS entry appears in the
termination point model.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Result Body:

::

    {
      "termination-point": [
        {
          "tp-id": "testport",
          "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
          "ovsdb:port-external-ids": [
            {
              "external-id-key": "opendaylight-iid",
              "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
            }
          ],
          "ovsdb:qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31",
          "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
          "ovsdb:name": "testport"
        }
      ]
    }
Query the OVSDB Node
''''''''''''''''''''

Query the operational MD-SAL for the OVS host to see how the QoS and
Queue entries appear as lists in the OVS node model.

HTTP GET:

::

    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/

Result Body (edited to only show information relevant to the QoS and
Queue entries):

::

    {
      "node": [
        {
          "node-id": "ovsdb:HOST1",
          <content edited out>
          "ovsdb:queues": [
            {
              "queue-id": "QUEUE-1",
              "queues-other-config": [
                {
                  "queue-other-config-key": "max-rate",
                  "queue-other-config-value": "3600000"
                }
              ],
              "queues-external-ids": [
                {
                  "queues-external-id-key": "opendaylight-queue-id",
                  "queues-external-id-value": "QUEUE-1"
                }
              ],
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
            }
          ],
          "ovsdb:qos-entries": [
            {
              "qos-id": "QOS-1",
              "qos-other-config": [
                {
                  "other-config-key": "max-rate",
                  "other-config-value": "4400000"
                }
              ],
              "queue-list": [
                {
                  "queue-number": 0,
                  "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
                }
              ],
              "qos-type": "ovsdb:qos-type-linux-htb",
              "qos-external-ids": [
                {
                  "qos-external-id-key": "opendaylight-qos-id",
                  "qos-external-id-value": "QOS-1"
                }
              ],
              "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
            }
          ],
          <content edited out>
        }
      ]
    }
Remove QoS from a Port
''''''''''''''''''''''

This example removes a QoS entry from the termination point and
associated port. Note that this is a PUT command on the termination
point with the QoS attribute absent. Other attributes of the
termination point should be included in the body of the command so that
they are not inadvertently removed.

HTTP PUT:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/

Body:

::

    {
      "network-topology:termination-point": [
        {
          "ovsdb:name": "testport",
          "tp-id": "testport"
        }
      ]
    }
Remove a Queue from QoS
'''''''''''''''''''''''

This example removes the specific Queue entry from the queue list in
the QoS entry. The queue entry is specified by the queue number, which
is "0" in this example.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/queue-list/0/

Remove Queue
''''''''''''

Once all references to a specific Queue entry have been removed from
QoS entries, the Queue itself can be removed.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/

Remove QoS
''''''''''

The QoS entry may be removed when it is no longer referenced by any
ports.

HTTP DELETE:

::

    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
References
''''''''''

`Open vSwitch
schema <http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf>`__

`OVSDB and Netvirt Postman
Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons>`__
OVSDB Hardware VTEP SouthBound Plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Overview
^^^^^^^^

The hwvtepsouthbound plugin is used to configure a hardware VTEP which
implements the hardware\_vtep OVSDB schema. This section shows how to
use the RESTCONF API of hwvtepsouthbound. There are two ways to connect
to ODL:

**user initiates connection**, and

**switch initiates connection**.

Both will be introduced respectively.
User Initiates Connection
^^^^^^^^^^^^^^^^^^^^^^^^^

Prerequisite
''''''''''''

Configure the hwvtep device/node to listen for the tcp connection in
passive mode. In addition, the management IP and tunnel source IP are
also configured. After all this configuration is done, a physical
switch is created automatically by the hwvtep node.
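A sketch of this configuration using the OVS VTEP emulator and its
vtep-ctl tool; the switch name and IP addresses are illustrative, and
real hardware VTEPs have vendor-specific equivalents:

::

    sudo vtep-ctl add-ps br0
    sudo vtep-ctl set Physical_Switch br0 management_ips=192.168.1.115 tunnel_ips=192.168.1.115
    sudo vtep-ctl set-manager ptcp:6640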
Connect to a hwvtep device/node
'''''''''''''''''''''''''''''''

Send the below RESTCONF request if you want to initiate the connection
to a hwvtep node from the controller, where the listening IP and port
of the hwvtep device/node are provided.

HTTP PUT:

::

    http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/

Body:

::

    {
      "network-topology:node": [
        {
          "node-id": "hwvtep://192.168.1.115:6640",
          "hwvtep:connection-info": {
            "hwvtep:remote-port": 6640,
            "hwvtep:remote-ip": "192.168.1.115"
          }
        }
      ]
    }

Please replace *odl* in the URL with the IP address of your
OpenDaylight controller and change *192.168.1.115* to your hwvtep node
IP.
**NOTE**: The format of the node-id is fixed. It will be one of the
two:

User initiates connection from ODL:

::

    hwvtep://<ip>:<port>

Switch initiates connection:

::

    hwvtep://uuid/<uuid of switch>

The reason for using a UUID is that we can distinguish between multiple
switches if they are behind a NAT.
After this request is completed successfully, we can get the physical
switch from the operational data store.

HTTP GET:

::

    http://odl:8181/restconf/operational/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

There is no body in this request.

The response of the request is:

::

    {
      "node": [
        {
          "node-id": "hwvtep://192.168.1.115:6640",
          "hwvtep:switches": [
            {
              "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
            }
          ],
          "hwvtep:connection-info": {
            "local-ip": "192.168.92.145",
            "local-port": 47802,
            "remote-port": 6640,
            "remote-ip": "192.168.1.115"
          }
        },
        {
          "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
          "hwvtep:management-ips": [
            {
              "management-ips-key": "192.168.1.115"
            }
          ],
          "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
          "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
          "hwvtep:hwvtep-node-description": "",
          "hwvtep:tunnel-ips": [
            {
              "tunnel-ips-key": "192.168.1.115"
            }
          ],
          "hwvtep:hwvtep-node-name": "br0"
        }
      ]
    }
If there is a physical switch which has already been created by manual
configuration, we can get the node-id of the physical switch from this
response, which is presented in "switch-ref". If the switch does not
exist, we need to create the physical switch. Currently, most hwvtep
devices do not support running multiple switches.
Create a physical switch
''''''''''''''''''''''''

HTTP PUT:

::

    http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/

Body:

::

    {
      "network-topology:node": [
        {
          "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
          "hwvtep-node-name": "ps0",
          "hwvtep-node-description": "",
          "management-ips": [
            {
              "management-ips-key": "192.168.1.115"
            }
          ],
          "tunnel-ips": [
            {
              "tunnel-ips-key": "192.168.1.115"
            }
          ],
          "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
        }
      ]
    }

Note: "managed-by" must be provided by the user. We can get its value
after the step *Connect to a hwvtep device/node*, since the node-id of
the hwvtep device is provided by the user. "managed-by" is a reference
typed as an instance identifier. Though the instance identifier is a
little complicated for RESTCONF users to write, the primary user of the
hwvtepsouthbound plugin will be provider-type code such as NetVirt, for
which the instance identifier is much easier to handle in code.
Create a logical switch
'''''''''''''''''''''''

Creating a logical switch is effectively creating a logical network.
For VXLAN, it is a tunnel network with the same VNI.

HTTP PUT:

::

    http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

Body:

::

    {
      "logical-switches": [
        {
          "hwvtep-node-name": "ls0",
          "hwvtep-node-description": "",
          "tunnel-key": "10000"
        }
      ]
    }
Create a physical locator
'''''''''''''''''''''''''

After the VXLAN network is ready, we will add VTEPs to it. A VTEP is
described by a physical locator.

HTTP PUT:

::

    http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

Body:

::

    {
      "termination-point": [
        {
          "tp-id": "vxlan_over_ipv4:192.168.0.116",
          "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
          "dst-ip": "192.168.0.116"
        }
      ]
    }

The "tp-id" of the locator is "{encapsulation-type}:{dst-ip}".

Note: As far as we know, the OVSDB database does not allow the
insertion of a new locator alone. So, no locator is inserted after this
request is sent. The creation is deferred until another entity refers
to the locator, such as a remote-mcast-macs entry.
Create a remote-mcast-macs entry
''''''''''''''''''''''''''''''''

After adding a physical locator to a logical switch, we need to create
a remote-mcast-macs entry to handle unknown traffic.

HTTP PUT:

::

    http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

Body:

::

    {
      "remote-mcast-macs": [
        {
          "mac-entry-key": "00:00:00:00:00:00",
          "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
          "locator-set": [
            {
              "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://219.141.189.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
            }
          ]
        }
      ]
    }

The physical locator *vxlan\_over\_ipv4:192.168.0.116* was just created
in "Create a physical locator". It should be noted that the list
"locator-set" is immutable; that is, we must provide the set of
"locator-ref" values as a whole.

Note: "00:00:00:00:00:00" stands for "unknown-dst", since the type of
mac-entry-key is yang:mac and it does not accept "unknown-dst".
Create a physical port
''''''''''''''''''''''

Now we add a physical port into the physical switch
"hwvtep://192.168.1.115:6640/physicalswitch/br0". The port is attached
to a physical server or an L2 network and has VLAN 100.

HTTP PUT:

::

    http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640%2Fphysicalswitch%2Fbr0

Body:

::

    {
      "network-topology:termination-point": [
        {
          "tp-id": "port0",
          "hwvtep-node-name": "port0",
          "hwvtep-node-description": "",
          "vlan-bindings": [
            {
              "vlan-id-key": "100",
              "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
            }
          ]
        }
      ]
    }

At this point, we have completed the basic configuration.
Typically, hwvtep devices learn local MAC addresses automatically. But
they also support getting MAC address entries from ODL.

Create a local-mcast-macs entry
'''''''''''''''''''''''''''''''

This is similar to *Create a remote-mcast-macs entry*.

Create a remote-ucast-macs
''''''''''''''''''''''''''

HTTP PUT:

::

    http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640

Body:

::

    {
      "remote-ucast-macs": [
        {
          "mac-entry-key": "11:11:11:11:11:11",
          "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
          "ipaddr": "1.1.1.1",
          "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
        }
      ]
    }

Create a local-ucast-macs entry
'''''''''''''''''''''''''''''''

This is similar to *Create a remote-ucast-macs*.
Switch Initiates Connection
^^^^^^^^^^^^^^^^^^^^^^^^^^^

We do not need to connect to a hwvtep device/node when the switch
initiates the connection. After switches connect to ODL successfully,
we get the node-ids of the switches by reading the operational data
store. Once the node-id of a hwvtep device is received, the remaining
steps are the same as when the user initiates the connection.
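On the switch side, this typically just means pointing the
hardware\_vtep OVSDB server at the controller; with the OVS VTEP
emulator, for example:

::

    sudo vtep-ctl set-manager tcp:<controller-ip>:6640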
References
^^^^^^^^^^

https://wiki.opendaylight.org/view/User_talk:Pzhang