The OpenFlow Overlay (OfOverlay) feature enables the OpenFlow Overlay
renderer, which creates a network virtualization solution across nodes
that host Open vSwitch software switches.
===== Installing and Prerequisites
From the Karaf console in OpenDaylight:

feature:install odl-groupbasedpolicy-ofoverlay
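To confirm that the feature loaded, you can list installed features from the same console (the `grep` filter is just illustrative):

----
feature:list -i | grep groupbasedpolicy
----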
This renderer is designed to work with Open vSwitch (OVS) 2.1+ (although 2.3 is strongly recommended) and OpenFlow 1.3.
When used in conjunction with the <<Neutron,Neutron Mapper feature>>, no extra OfOverlay-specific setup is required.
When this feature is loaded "standalone", the user is required to configure infrastructure (a sketch follows this list), such as

* instantiating OVS bridges,
* attaching hosts to the bridges,
* and creating the VXLAN/VXLAN-GPE tunnel ports on the bridges.
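A minimal sketch of that standalone setup with `ovs-vsctl`, assuming a bridge named `sw1`, a controller at `<controller_ip>`, and an OVS build with VXLAN-GPE support (all names and addresses are illustrative):

----
# Create the bridge and point it at the OpenDaylight OpenFlow endpoint
ovs-vsctl add-br sw1
ovs-vsctl set-controller sw1 tcp:<controller_ip>:6653

# VXLAN tunnel port with flow-based key and destination, so the
# controller can set the tunnel ID and remote IP per flow
ovs-vsctl add-port sw1 vxlan-0 -- set interface vxlan-0 type=vxlan \
    options:remote_ip=flow options:key=flow

# VXLAN-GPE tunnel port (requires GPE support in the OVS build)
ovs-vsctl add-port sw1 vxlangpe-0 -- set interface vxlangpe-0 type=vxlan \
    options:remote_ip=flow options:key=flow options:exts=gpe
----

Hosts are then attached to the bridge as ordinary ports, e.g. `ovs-vsctl add-port sw1 <vm_interface>`.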
In Lithium, the *GBP* OfOverlay renderer also supports a table offset option, to offset the pipeline post-table 0.
This is set by changing:

<gbp-ofoverlay-table-offset>0</gbp-ofoverlay-table-offset>

in the configuration file:

./distribution-karaf/target/assembly/etc/opendaylight/karaf/15-groupbasedpolicy-ofoverlay.xml
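For example, to start the *GBP* pipeline at table 4 instead of table 0 (the offset value here is purely illustrative), change the element in that file; a controller restart is typically required for the configuration to be re-read:

----
<!-- shift the whole GBP pipeline by 4 tables, leaving tables 0-3
     free for other applications sharing the switch -->
<gbp-ofoverlay-table-offset>4</gbp-ofoverlay-table-offset>
----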
==== OpenFlow Overlay Architecture
These are the primary components of *GBP*. The OfOverlay components are highlighted in red.
.OfOverlay within *GBP*
image::groupbasedpolicy/ofoverlay-1-components.png[align="center",width=500]
In terms of the inner components of the *GBP* OfOverlay renderer:
.OfOverlay expanded view:
image::groupbasedpolicy/ofoverlay-2-components.png[align="center",width=500]
*OfOverlay Renderer*

Launches the components below:
*Policy Resolver*

Policy resolution is completely domain independent, and the OfOverlay renderer processes the resolved policy information internally. See the <<policyresolution,Policy Resolution process>>.
It listens to inputs to the _Tenants_ configuration datastore, validates tenant input, then writes this to the _Tenants_ operational datastore.

From there an internal notification is generated to the PolicyManager.

In the next release, this will be moving to a non-renderer-specific location.
*Endpoint Manager*

The endpoint repository, in Lithium, operates in *orchestrated* mode. This means the user is responsible for the provisioning of endpoints via:

* UX/GUI
* REST API
NOTE: When using the <<Neutron,Neutron mapper>> feature, everything is managed transparently via Neutron.
The Endpoint Manager is responsible for listening to Endpoint repository updates and notifying the Switch Manager when a valid Endpoint has been registered.

It also supplies utility functions to the flow pipeline process.
*Switch Manager*

The Switch Manager has been refactored in Lithium to be purely a state manager.

Switches are in one of three states:

* DISCONNECTED
* PREPARING
* READY
*Ready* is denoted by a connected switch:

* having a tunnel interface
* having at least one endpoint connected.
In this way, *GBP* does not write to switches it has no business managing.

*Preparing* simply means the switch has a controller connection but is missing one of the above _complete and necessary_ conditions.

*Disconnected* means a previously connected switch is no longer present in the Inventory operational datastore.
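A quick way to verify the tunnel-interface condition on a given node is to list the bridge's ports (bridge name illustrative):

----
# A vxlan/vxlan-gpe port must be present before the switch can become READY
ovs-vsctl list-ports sw1
----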
.OfOverlay Flow Pipeline
image::groupbasedpolicy/ofoverlay-3-flowpipeline.png[align="center",width=500]
The OfOverlay leverages Nicira registers as follows (a dump example follows this list):
* REG0 = Source EndpointGroup + Tenant ordinal
* REG1 = Source Conditions + Tenant ordinal
* REG2 = Destination EndpointGroup + Tenant ordinal
* REG3 = Destination Conditions + Tenant ordinal
* REG4 = Bridge Domain + Tenant ordinal
* REG5 = Flood Domain + Tenant ordinal
* REG6 = Layer 3 Context + Tenant ordinal
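These register loads appear verbatim in the flow dumps below; for example, `load:0xc->NXM_NX_REG0[]` stamps a packet with source EndpointGroup ordinal `0xc`, and `load:0x7->NXM_NX_REG6[]` with its Layer 3 Context ordinal. To inspect them on a node (bridge name illustrative):

----
# Dump the GBP pipeline; -O selects the OpenFlow 1.3 protocol version
ovs-ofctl -O OpenFlow13 dump-flows sw1
----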
*Port Security*

Table 0 of the OpenFlow pipeline. Responsible for ensuring that only valid connections can send packets into the pipeline:
cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
cookie=0x0, <snip> , priority=112,ipv6 actions=drop
cookie=0x0, <snip> , priority=111, ip actions=drop
cookie=0x0, <snip> , priority=110,arp actions=drop
cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
cookie=0x0, <snip> , priority=1 actions=drop
Ingress from tunnel interface, go to Table _Source Mapper_:

cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
Ingress from outside, go to Table _Ingress NAT Mapper_:

cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
ARP from Endpoint, go to Table _Source Mapper_:

cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
IPv4 from Endpoint, go to Table _Source Mapper_:

cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
DHCP DORA from Endpoint, go to Table _Source Mapper_:

cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
Series of DROP flows with priorities set to capture any non-specific traffic that should have matched above:

cookie=0x0, <snip> , priority=112,ipv6 actions=drop
cookie=0x0, <snip> , priority=111, ip actions=drop
cookie=0x0, <snip> , priority=110,arp actions=drop
"L2" catch-all for traffic not identified above:

cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
Default drop flow:

cookie=0x0, <snip> , priority=1 actions=drop
*Ingress NAT Mapper*

Table <<offset,_offset_>>+1.
ARP responder for external NAT address:

cookie=0x0, <snip> , priority=150,arp,arp_tpa=192.168.111.51,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:58:c3:dd->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e58c3dd->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xc0a86f33->NXM_OF_ARP_SPA[],IN_PORT
Translates from outside to inside and performs the same functions as the _Source Mapper_:

cookie=0x0, <snip> , priority=100,ip,nw_dst=192.168.111.51 actions=set_field:10.1.1.2->ip_dst,set_field:fa:16:3e:58:c3:dd->eth_dst,load:0x2->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0x3->NXM_NX_TUN_ID[0..31],goto_table:3
*Source Mapper*

Table <<offset,_offset_>>+2.

Determines, based on characteristics from the ingress port:

* which EndpointGroup(s) the traffic belongs to
* its Tunnel VNID ordinal

Establishes tunnels at valid destination switches for ingress.
Ingress Tunnel established at remote node with VNID Ordinal that maps to Source EPG, Forwarding Context etc.:

cookie=0x0, <snip>, priority=150,tun_id=0xd,in_port=3 actions=load:0xc->NXM_NX_REG0[],load:0xffffff->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],goto_table:3
Maps endpoint to Source EPG, Forwarding Context based on ingress port and MAC:

cookie=0x0, <snip> , priority=100,in_port=5,dl_src=fa:16:3e:b4:b4:b1 actions=load:0xc->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0xd->NXM_NX_TUN_ID[0..31],goto_table:3
Default drop flow:

cookie=0x0, duration=197.622s, table=2, n_packets=0, n_bytes=0, priority=1 actions=drop
*Destination Mapper*

Table <<offset,_offset_>>+3.

Determines, based on characteristics of the endpoint:

* which EndpointGroup(s) it belongs to
* its Tunnel Destination value

Manages routing based on valid ingress nodes ARP'ing for their default gateway, and matches on either gateway MAC or destination endpoint MAC.
ARP for default gateway for the 10.1.1.0/24 subnet:

cookie=0x0, <snip> , priority=150,arp,reg6=0x7,arp_tpa=10.1.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:28:4c:82->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e284c82->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xa010101->NXM_OF_ARP_SPA[],IN_PORT
Broadcast traffic destined for GroupTable:

cookie=0x0, <snip> , priority=140,reg5=0x5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=load:0x5->NXM_NX_TUN_ID[0..31],group:5
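The group referenced by `group:5` above, along with its buckets, can be inspected directly on the node (bridge name illustrative):

----
# Show OpenFlow group table entries, including the flood group
ovs-ofctl -O OpenFlow13 dump-groups sw1
----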
Layer 3 destination matching flows, where priority=100+masklength (e.g. the /32 host route below gets priority 100+32=132). Since *GBP* now supports L3Prefix endpoints, default routes etc. can be set:

cookie=0x0, <snip>, priority=132,ip,reg6=0x7,dl_dst=fa:16:3e:b4:b4:b1,nw_dst=10.1.1.3 actions=load:0xc->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x5->NXM_NX_REG7[],set_field:fa:16:3e:b4:b4:b1->eth_dst,dec_ttl,goto_table:4
Layer 2 destination matching flows, designed to be caught only after the last IP flow (the lowest-priority IP flow is 100):

cookie=0x0, duration=323.203s, table=3, n_packets=4, n_bytes=168, priority=50,reg4=0x4,dl_dst=fa:16:3e:58:c3:dd actions=load:0x2->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x2->NXM_NX_REG7[],goto_table:4
Default drop flow:

cookie=0x0, duration=323.207s, table=3, n_packets=6, n_bytes=588, priority=1 actions=drop
*Policy Enforcer*

Table <<offset,_offset_>>+4.

Once the Source and Destination EndpointGroups are assigned, policy is enforced based on resolved rules.

In the case of <<SFC,Service Function Chaining>>, the encapsulation and destination for traffic destined to a chain are discovered and enforced.
Policy flow, allowing IP traffic between EndpointGroups:

cookie=0x0, <snip> , priority=64998,ip,reg0=0x8,reg1=0x1,reg2=0xc,reg3=0x1 actions=goto_table:5
*Egress NAT Mapper*

Table <<offset,_offset_>>+5.

Performs the NAT function before traffic egresses the OVS instance to the underlay network.
Inside-to-outside NAT translation before sending to the underlay:

cookie=0x0, <snip> , priority=100,ip,reg6=0x7,nw_src=10.1.1.2 actions=set_field:192.168.111.51->ip_src,goto_table:6
*External Mapper*

Table <<offset,_offset_>>+6.

Manages post-policy enforcement for endpoint-specific destination effects, specifically for <<SFC,Service Function Chaining>>; this is why both symmetric and asymmetric chains
and distributed ingress/egress classification can be supported.
Output to the destination port held in REG7:

cookie=0x0, <snip>, priority=100 actions=output:NXM_NX_REG7[]
==== Configuring OpenFlow Overlay via REST
NOTE: Please see the <<UX,UX>> section on how to configure *GBP* via the GUI.
*Endpoint*

POST http://{{controllerIp}}:8181/restconf/operations/endpoint:register-endpoint

----
{
    "input": {
        "endpoint-group": "<epg0>",
        "endpoint-groups" : ["<epg1>","<epg2>"],
        "network-containment" : "<forwarding-model-context1>",
        "l2-context": "<bridge-domain1>",
        "mac-address": "<mac1>",
        "l3-address": [
            {
                "ip-address": "<ipaddress1>",
                "l3-context": "<l3_context1>"
            }
        ],
        "ofoverlay:port-name": "<ovs port name>",
        "tenant": "<tenant1>"
    }
}
----
NOTE: "port-name" is preceded by "ofoverlay" because, in OpenDaylight, base datastore objects can be _augmented_. In *GBP*, the base endpoint model has no renderer
specifics and hence can be leveraged across multiple renderers.
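As an illustration, the registration above can be driven from a shell; a minimal sketch, assuming the payload is saved as `endpoint.json` and the distribution uses the default admin/admin RESTCONF credentials:

----
curl -u admin:admin -X POST \
     -H "Content-Type: application/json" \
     -d @endpoint.json \
     http://<controller_ip>:8181/restconf/operations/endpoint:register-endpoint
----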
*OVS Augmentations to Inventory*

PUT http://{{controllerIp}}:8181/restconf/config/opendaylight-inventory:nodes/

----
{
    "opendaylight-inventory:nodes": {
        "node": [
            {
                "id": "openflow:123456",
                "ofoverlay:tunnel": [
                    {
                        "tunnel-type": "overlay:tunnel-type-vxlan",
                        "ip": "<ip_address_of_ovs>",
                        "port": 4789,
                        "node-connector-id": "openflow:123456:1"
                    }
                ]
            },
            {
                "id": "openflow:654321",
                "ofoverlay:tunnel": [
                    {
                        "tunnel-type": "overlay:tunnel-type-vxlan",
                        "ip": "<ip_address_of_ovs>",
                        "port": 4789,
                        "node-connector-id": "openflow:654321:1"
                    }
                ]
            }
        ]
    }
}
----
*Tenants* (see <<policyresolution,Policy Resolution>> and <<forwarding,Forwarding Model>> for details):
PUT http://{{controllerIp}}:8181/restconf/config/policy:tenants/policy:tenant/<tenant1>

----
{
    "policy:tenant": {
        "contract": [{
            "clause": [{
                "name": "allow-http-clause",
                "subject-refs": [ "allow-http-subject", "allow-icmp-subject" ]
            }],
            "id": "<contract1>",
            "subject": [{
                "name": "allow-http-subject",
                "rule": [{
                    "classifier-ref": [
                        { "direction": "in", "name": "http-dest" },
                        { "direction": "out", "name": "http-src" }
                    ],
                    "action-ref": [{ "name": "allow1", "order": 0 }],
                    "name": "allow-http-rule"
                }]
            },
            {
                "name": "allow-icmp-subject",
                "rule": [{
                    "classifier-ref": [{ "name": "icmp" }],
                    "action-ref": [{ "name": "allow1", "order": 0 }],
                    "name": "allow-icmp-rule"
                }]
            }]
        }],
        "endpoint-group": [{
            "consumer-named-selector": [
                { "contract": [ "<contract1>" ], "name": "<name>" }
            ],
            "id": "<epg1>",
            "provider-named-selector": []
        },
        {
            "consumer-named-selector": [],
            "id": "<epg2>",
            "provider-named-selector": [
                { "contract": [ "<contract1>" ], "name": "<name>" }
            ]
        }],
        "id": "<tenant1>",
        "l2-bridge-domain": [
            { "id": "<bridge-domain1>", "parent": "<l3_context1>" }
        ],
        "l2-flood-domain": [
            { "id": "<flood-domain1>", "parent": "<bridge-domain1>" },
            { "id": "<flood-domain2>", "parent": "<bridge-domain1>" }
        ],
        "l3-context": [{ "id": "<l3_context1>" }],
        "name": "<tenant_name>",
        "subject-feature-instances": {
            "classifier-instance": [{
                "classifier-definition-id": "<id>",
                "name": "http-dest",
                "parameter-value": [
                    { "int-value": 6, "name": "proto" },
                    { "int-value": 80, "name": "destport" }
                ]
            },
            {
                "classifier-definition-id": "<id>",
                "name": "http-src",
                "parameter-value": [
                    { "int-value": 6, "name": "proto" },
                    { "int-value": 80, "name": "sourceport" }
                ]
            },
            {
                "classifier-definition-id": "<id>",
                "name": "icmp",
                "parameter-value": [{ "int-value": 1, "name": "proto" }]
            }],
            "action-instance": [
                { "action-definition-id": "<id>", "name": "allow1" }
            ]
        },
        "subnet": [
            {
                "id": "<subnet1>",
                "ip-prefix": "<ip_prefix>",
                "parent": "<flood-domain1>",
                "virtual-router-ip": "<ip address>"
            },
            {
                "id": "<subnet2>",
                "ip-prefix": "<ip_prefix>",
                "parent": "<flood-domain2>",
                "virtual-router-ip": "<ip address>"
            }
        ]
    }
}
----
[[Demo]]
==== Tutorials
Comprehensive tutorials, along with a demonstration environment leveraging Vagrant,
can be found on the https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)[*GBP* wiki].