The OpenFlow Overlay (OfOverlay) feature enables the OpenFlow Overlay
renderer, which creates a network virtualization solution across nodes
that host Open vSwitch software switches.

===== Installing and Prerequisites

From the Karaf console in OpenDaylight:

 feature:install odl-groupbasedpolicy-ofoverlay

This renderer is designed to work with Open vSwitch (OVS) 2.1+ (although 2.3 is strongly recommended) and OpenFlow 1.3.

When used in conjunction with the <<Neutron,Neutron Mapper feature>>, no extra OfOverlay-specific setup is required.

When this feature is loaded "standalone", the user is required to configure infrastructure (an example setup is sketched after this list), such as:

* instantiating OVS bridges,
* attaching hosts to the bridges,
* and creating the VXLAN/VXLAN-GPE tunnel ports on the bridges.
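
For a standalone deployment, the steps above might look like the following sketch. This is illustrative only: it assumes a bridge named br0, a controller reachable at 192.168.50.1, a host interface veth-host1, and an OVS build with VXLAN-GPE support; none of these names or addresses are mandated by *GBP*.

 # Create a bridge, restrict it to OpenFlow 1.3, and point it at OpenDaylight
 ovs-vsctl add-br br0
 ovs-vsctl set bridge br0 protocols=OpenFlow13
 ovs-vsctl set-controller br0 tcp:192.168.50.1:6653
 # Attach a host (VM/container) interface to the bridge
 ovs-vsctl add-port br0 veth-host1
 # Create flow-based VXLAN and VXLAN-GPE tunnel ports; tunnel keys and
 # remote IPs are then selected per flow by the OfOverlay pipeline
 ovs-vsctl add-port br0 vxlan-0 -- set interface vxlan-0 type=vxlan options:remote_ip=flow options:key=flow
 ovs-vsctl add-port br0 vxlangpe-0 -- set interface vxlangpe-0 type=vxlan options:remote_ip=flow options:key=flow options:exts=gpe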

[[offset]]
The *GBP* OfOverlay renderer also supports a table offset option, used to shift the pipeline after table 0.
The table offset value is stored in the config datastore and may be rewritten at runtime.

 PUT http://{{controllerIp}}:8181/restconf/config/ofoverlay:of-overlay-config
 {
     "of-overlay-config": {
         "gbp-ofoverlay-table-offset": 6
     }
 }

The default value is set by changing:

 <gbp-ofoverlay-table-offset>0</gbp-ofoverlay-table-offset>

in the file:

 distribution-karaf/target/assembly/etc/opendaylight/karaf/15-groupbasedpolicy-ofoverlay.xml

To avoid overwriting runtime changes, the default value is used only when the OfOverlay renderer starts and no other
value has been written before.
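
For example, assuming a controller on localhost and the default admin/admin RESTCONF credentials (both assumptions, not part of the feature itself), the offset can be changed and read back at runtime with curl:

 # Set the runtime table offset to 6
 curl -u admin:admin -X PUT -H "Content-Type: application/json" \
     -d '{"of-overlay-config": {"gbp-ofoverlay-table-offset": 6}}' \
     http://localhost:8181/restconf/config/ofoverlay:of-overlay-config
 # Read the current value back
 curl -u admin:admin http://localhost:8181/restconf/config/ofoverlay:of-overlay-config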

==== OpenFlow Overlay Architecture

These are the primary components of *GBP*. The OfOverlay components are highlighted in red.

.OfOverlay within *GBP*
image::groupbasedpolicy/ofoverlay-1-components.png[align="center",width=500]

In terms of the inner components of the *GBP* OfOverlay renderer:

.OfOverlay expanded view
image::groupbasedpolicy/ofoverlay-2-components.png[align="center",width=500]

*OfOverlay Renderer*

Launches the components below:

*Policy Resolver*

Policy resolution is completely domain independent, and the OfOverlay renderer leverages the processed policy information internally. See the <<policyresolution,Policy Resolution process>>.

It listens for input to the _Tenants_ configuration datastore, validates the tenant input, then writes it to the _Tenants_ operational datastore.

From there, an internal notification is generated to the PolicyManager.

In the next release, this component will move to a location that is not renderer specific.

*Endpoint Manager*

The endpoint repository operates in *orchestrated* mode. This means the user is responsible for the provisioning of endpoints via:

* UX/GUI
* REST API

NOTE: When using the <<Neutron,Neutron mapper>> feature, everything is managed transparently via Neutron.

The Endpoint Manager is responsible for listening to Endpoint repository updates and notifying the Switch Manager when a valid Endpoint has been registered.

It also supplies utility functions to the flow pipeline process.

*Switch Manager*

The Switch Manager is purely a state manager.

Switches are in one of three states:

* DISCONNECTED
* PREPARING
* READY

*Ready* is denoted by a connected switch:

* having a tunnel interface
* having at least one endpoint connected.

In this way, *GBP* does not write to switches it has no business writing to.

*Preparing* simply means the switch has a controller connection but is missing one of the above _complete and necessary_ conditions.

*Disconnected* means a previously connected switch is no longer present in the Inventory operational datastore.
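
For example, whether a switch is currently present in the Inventory operational datastore can be checked over RESTCONF (localhost and the default admin/admin credentials are assumptions):

 curl -u admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/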

.OfOverlay Flow Pipeline
image::groupbasedpolicy/ofoverlay-3-flowpipeline.png[align="center",width=500]

The OfOverlay leverages Nicira registers as follows:

* REG0 = Source EndpointGroup + Tenant ordinal
* REG1 = Source Conditions + Tenant ordinal
* REG2 = Destination EndpointGroup + Tenant ordinal
* REG3 = Destination Conditions + Tenant ordinal
* REG4 = Bridge Domain + Tenant ordinal
* REG5 = Flood Domain + Tenant ordinal
* REG6 = Layer 3 Context + Tenant ordinal

*Port Security*

Table 0 of the OpenFlow pipeline. Responsible for ensuring that only valid connections can send packets into the pipeline:

 cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
 cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
 cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
 cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
 cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
 cookie=0x0, <snip> , priority=112,ipv6 actions=drop
 cookie=0x0, <snip> , priority=111, ip actions=drop
 cookie=0x0, <snip> , priority=110,arp actions=drop
 cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
 cookie=0x0, <snip> , priority=1 actions=drop

Ingress from tunnel interface, go to table _Source Mapper_:

 cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2

Ingress from outside, go to table _Ingress NAT Mapper_:

 cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1

ARP from Endpoint, go to table _Source Mapper_:

 cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2

IPv4 from Endpoint, go to table _Source Mapper_:

 cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2

DHCP DORA from Endpoint, go to table _Source Mapper_:

 cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2

Series of DROP entries with priorities set to capture any non-specific traffic that should have matched above:

 cookie=0x0, <snip> , priority=112,ipv6 actions=drop
 cookie=0x0, <snip> , priority=111, ip actions=drop
 cookie=0x0, <snip> , priority=110,arp actions=drop

"L2" catch-all for traffic not identified above:

 cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2

Default drop for anything else:

 cookie=0x0, <snip> , priority=1 actions=drop
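
To verify what the renderer has programmed, each table of the pipeline can be dumped directly from the switch with ovs-ofctl (the bridge name br0 is an assumption; substitute your own):

 # Port Security is table 0; subsequent tables sit at the configured offset
 ovs-ofctl -O OpenFlow13 dump-flows br0 table=0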

*Ingress NAT Mapper*

Table <<offset,_offset_>>+1.

ARP responder for the external NAT address:

 cookie=0x0, <snip> , priority=150,arp,arp_tpa=192.168.111.51,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:58:c3:dd->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e58c3dd->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xc0a86f33->NXM_OF_ARP_SPA[],IN_PORT

Translate from Outside to Inside and perform the same functions as _Source Mapper_:

 cookie=0x0, <snip> , priority=100,ip,nw_dst=192.168.111.51 actions=set_field:10.1.1.2->ip_dst,set_field:fa:16:3e:58:c3:dd->eth_dst,load:0x2->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0x3->NXM_NX_TUN_ID[0..31],goto_table:3

*Source Mapper*

Table <<offset,_offset_>>+2.

Determines, based on characteristics of the ingress port, which:

* EndpointGroup(s) it belongs to
* Forwarding context
* Tunnel VNID ordinal

Establishes tunnels at valid destination switches for ingress.

Ingress tunnel established at a remote node, with the VNID ordinal that maps to the Source EPG, Forwarding Context, etc.:

 cookie=0x0, <snip>, priority=150,tun_id=0xd,in_port=3 actions=load:0xc->NXM_NX_REG0[],load:0xffffff->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],goto_table:3

Maps an endpoint to its Source EPG and Forwarding Context based on ingress port and MAC address:

 cookie=0x0, <snip> , priority=100,in_port=5,dl_src=fa:16:3e:b4:b4:b1 actions=load:0xc->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0xd->NXM_NX_TUN_ID[0..31],goto_table:3

Default drop for anything else:

 cookie=0x0, duration=197.622s, table=2, n_packets=0, n_bytes=0, priority=1 actions=drop

*Destination Mapper*

Table <<offset,_offset_>>+3.

Determines, based on characteristics of the endpoint:

* EndpointGroup(s) it belongs to
* Forwarding context
* Tunnel Destination value

Manages routing based on valid ingress nodes ARP'ing for their default gateway, and matches on either the gateway MAC or the destination endpoint MAC.

ARP responder for the default gateway of the 10.1.1.0/24 subnet:

 cookie=0x0, <snip> , priority=150,arp,reg6=0x7,arp_tpa=10.1.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:28:4c:82->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e284c82->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xa010101->NXM_OF_ARP_SPA[],IN_PORT

Broadcast traffic destined for the GroupTable:

 cookie=0x0, <snip> , priority=140,reg5=0x5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=load:0x5->NXM_NX_TUN_ID[0..31],group:5

Layer 3 destination matching flows, where priority = 100 + mask length; the /32 host route below therefore has priority 132. Since *GBP* now supports L3Prefix endpoints, default routes etc. can also be set:

 cookie=0x0, <snip>, priority=132,ip,reg6=0x7,dl_dst=fa:16:3e:b4:b4:b1,nw_dst=10.1.1.3 actions=load:0xc->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x5->NXM_NX_REG7[],set_field:fa:16:3e:b4:b4:b1->eth_dst,dec_ttl,goto_table:4

Layer 2 destination matching flows, designed to be hit only after the last IP flow (the lowest-priority IP flow is 100):

 cookie=0x0, duration=323.203s, table=3, n_packets=4, n_bytes=168, priority=50,reg4=0x4,dl_dst=fa:16:3e:58:c3:dd actions=load:0x2->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x2->NXM_NX_REG7[],goto_table:4

Default drop for anything else:

 cookie=0x0, duration=323.207s, table=3, n_packets=6, n_bytes=588, priority=1 actions=drop

*Policy Enforcer*

Table <<offset,_offset_>>+4.

Once the Source and Destination EndpointGroups are assigned, policy is enforced based on the resolved rules.

In the case of <<SFC,Service Function Chaining>>, the encapsulation and destination for traffic destined to a chain are discovered and enforced.

Policy flow, allowing IP traffic between EndpointGroups:

 cookie=0x0, <snip> , priority=64998,ip,reg0=0x8,reg1=0x1,reg2=0xc,reg3=0x1 actions=goto_table:5

*Egress NAT Mapper*

Table <<offset,_offset_>>+5.

Performs the NAT function before egressing the OVS instance to the underlay network.

Inside-to-Outside NAT translation before sending to the underlay:

 cookie=0x0, <snip> , priority=100,ip,reg6=0x7,nw_src=10.1.1.2 actions=set_field:192.168.111.51->ip_src,goto_table:6

*External Mapper*

Table <<offset,_offset_>>+6.

Manages post-policy enforcement for endpoint-specific destination effects, specifically for <<SFC,Service Function Chaining>>; this is why both symmetric and asymmetric chains, as well as distributed ingress/egress classification, can be supported.

Output to the destination port held in REG7:

 cookie=0x0, <snip>, priority=100 actions=output:NXM_NX_REG7[]

==== Configuring OpenFlow Overlay via REST

NOTE: Please see the <<UX,UX>> section on how to configure *GBP* via the GUI.

*Endpoint*

 POST http://{{controllerIp}}:8181/restconf/operations/endpoint:register-endpoint
 {
     "input": {
         "endpoint-group": "<epg0>",
         "endpoint-groups" : ["<epg1>","<epg2>"],
         "network-containment" : "<forwarding-model-context1>",
         "l2-context": "<bridge-domain1>",
         "mac-address": "<mac1>",
         "l3-address": [
             {
                 "ip-address": "<ipaddress1>",
                 "l3-context": "<l3_context1>"
             }
         ],
         "*ofoverlay:port-name*": "<ovs port name>",
         "tenant": "<tenant1>"
     }
 }

NOTE: Observe the usage of "port-name" preceded by "ofoverlay:". In OpenDaylight, base datastore objects can be _augmented_. In *GBP*, the base endpoint model has no renderer
specifics and hence can be leveraged across multiple renderers.
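
As an illustration, and assuming a controller on localhost with the default admin/admin RESTCONF credentials (both assumptions), the registration above can be issued and verified with curl:

 # Save the JSON body above to register-endpoint.json (a file name chosen
 # here for illustration), removing the AsciiDoc bold markers (*) around
 # ofoverlay:port-name, then:
 curl -u admin:admin -X POST -H "Content-Type: application/json" \
     -d @register-endpoint.json \
     http://localhost:8181/restconf/operations/endpoint:register-endpoint
 # Registered endpoints appear in the operational datastore:
 curl -u admin:admin http://localhost:8181/restconf/operational/endpoint:endpoints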

*OVS Augmentations to Inventory*

 PUT http://{{controllerIp}}:8181/restconf/config/opendaylight-inventory:nodes/
 {
   "opendaylight-inventory:nodes": {
     "node": [
       {
         "id": "openflow:123456",
         "ofoverlay:tunnel": [{
           "tunnel-type": "overlay:tunnel-type-vxlan",
           "ip": "<ip_address_of_ovs>",
           "node-connector-id": "openflow:123456:1"
         }]
       },
       {
         "id": "openflow:654321",
         "ofoverlay:tunnel": [{
           "tunnel-type": "overlay:tunnel-type-vxlan",
           "ip": "<ip_address_of_ovs>",
           "node-connector-id": "openflow:654321:1"
         }]
       }
     ]
   }
 }

*Tenants*: see <<policyresolution,Policy Resolution>> and <<forwarding,Forwarding Model>> for details. Elided portions of the example below are marked <snip>:

 <snip>
 "name": "allow-http-clause",
 <snip>
     "allow-http-subject",
 <snip>
 "name": "allow-http-subject",
 <snip>
     "name": "allow-http-rule"
 <snip>
 "name": "allow-icmp-subject",
 <snip>
     "name": "allow-icmp-rule"
 <snip>
 "consumer-named-selector": [
 <snip>
 "provider-named-selector": []
 <snip>
 "consumer-named-selector": [],
 <snip>
 "provider-named-selector": [
 <snip>
 "l2-bridge-domain": [
 <snip>
 "subject-feature-instances": {
     "classifier-instance": [
 <snip>
         "classifier-definition-id": "<id>",
 <snip>
         "classifier-definition-id": "<id>",
 <snip>
         "classifier-definition-id": "<id>",
 <snip>
         "action-definition-id": "<id>"
 <snip>
 "ip-prefix": "<ip_prefix>",
 <snip>
 "virtual-router-ip": "<ip address>"
 <snip>
 "ip-prefix": "<ip prefix>",
 <snip>
 "virtual-router-ip": "<ip address>"
 <snip>

==== Tutorials[[Demo]]

Comprehensive tutorials, along with a demonstration environment leveraging Vagrant,
can be found on the https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)[*GBP* wiki].