==== Overview

The OpenFlow Overlay (OfOverlay) feature enables the OpenFlow Overlay
renderer, which creates a network virtualization solution across nodes
that host Open vSwitch software switches.

===== Installing and Pre-requisites

From the Karaf console in OpenDaylight:

 feature:install odl-groupbasedpolicy-ofoverlay
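
To verify that the feature and its dependencies were activated (the exact dependency list may vary by release):

 feature:list -i | grep groupbasedpolicy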

This renderer is designed to work with Open vSwitch (OVS) 2.1+ (although 2.3 is strongly recommended) and OpenFlow 1.3.

When used in conjunction with the <<Neutron,Neutron Mapper feature>>, no extra OfOverlay-specific setup is required.

When this feature is loaded "standalone", the user is required to configure the infrastructure themselves, such as the following (an `ovs-vsctl` sketch is shown after this list):

* instantiating OVS bridges,
* attaching hosts to the bridges,
* and creating the VXLAN/VXLAN-GPE tunnel ports on the bridges.
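
A minimal sketch of that manual setup using standard `ovs-vsctl` commands; the bridge name, interface names, controller IP, and southbound port below are illustrative placeholders, not values mandated by *GBP*:

----
# Create the bridge and point it at the OpenDaylight controller
ovs-vsctl add-br br-int
ovs-vsctl set-controller br-int tcp:<controller_ip>:6653

# Attach a host (VM) interface to the bridge
ovs-vsctl add-port br-int vnet0

# Create a VXLAN tunnel port; remote_ip/key are taken from the flow,
# matching how the renderer programs tunnels
ovs-vsctl add-port br-int vxlan-0 -- set interface vxlan-0 type=vxlan \
    options:remote_ip=flow options:key=flow options:dst_port=4789
----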

[[offset]]
The *GBP* OfOverlay renderer also supports a table offset option, which shifts the pipeline tables that follow table 0 by the configured value.
The table offset value is stored in the config datastore and may be rewritten at runtime.

----
PUT http://{{controllerIp}}:8181/restconf/config/ofoverlay:of-overlay-config
{
    "of-overlay-config": {
        "gbp-ofoverlay-table-offset": 6
    }
}
----
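
For example, with `curl` (assuming the default `admin:admin` RESTCONF credentials and a controller reachable at `localhost`; adjust as needed):

----
curl -u admin:admin -X PUT \
  -H "Content-Type: application/json" \
  -d '{"of-overlay-config": {"gbp-ofoverlay-table-offset": 6}}' \
  http://localhost:8181/restconf/config/ofoverlay:of-overlay-config

# Read the current value back
curl -u admin:admin \
  http://localhost:8181/restconf/config/ofoverlay:of-overlay-config
----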

The default value is set by changing:

 <gbp-ofoverlay-table-offset>0</gbp-ofoverlay-table-offset>

in the file:

 distribution-karaf/target/assembly/etc/opendaylight/karaf/15-groupbasedpolicy-ofoverlay.xml

To avoid overwriting runtime changes, the default value is used only when the OfOverlay renderer starts and no other
value has been written before.

==== OpenFlow Overlay Architecture

These are the primary components of *GBP*. The OfOverlay components are highlighted in red.

.OfOverlay within *GBP*
image::groupbasedpolicy/ofoverlay-1-components.png[align="center",width=500]

In terms of the inner components of the *GBP* OfOverlay renderer:

.OfOverlay expanded view:
image::groupbasedpolicy/ofoverlay-2-components.png[align="center",width=500]

*OfOverlay Renderer*

Launches the components below:

*Policy Resolver*

Policy resolution is completely domain independent; the OfOverlay renderer consumes the resolved policy information internally. See the <<policyresolution,Policy Resolution process>>.

It listens to inputs to the _Tenants_ configuration datastore, validates tenant input, then writes this to the Tenants operational datastore.

From there an internal notification is generated to the PolicyManager.

In the next release, this will be moving to a non-renderer-specific location.

*Endpoint Manager*

The endpoint repository operates in *orchestrated* mode. This means the user is responsible for the provisioning of endpoints via:

* <<UX,UX/GUI>>
* REST API

NOTE: When using the <<Neutron,Neutron mapper>> feature, everything is managed transparently via Neutron.

The Endpoint Manager is responsible for listening to Endpoint repository updates and notifying the Switch Manager when a valid Endpoint has been registered.

It also supplies utility functions to the flow pipeline process.

*Switch Manager*

The Switch Manager is purely a state manager.

Switches are in one of 3 states:

* DISCONNECTED
* PREPARING
* READY

*Ready* is denoted by a connected switch:

* having a tunnel interface
* having at least one endpoint connected.

In this way *GBP* does not write to switches it has no business managing.

*Preparing* simply means the switch has a controller connection but is missing one of the above _complete and necessary_ conditions.

*Disconnected* means a previously connected switch is no longer present in the Inventory operational datastore.

.OfOverlay Flow Pipeline
image::groupbasedpolicy/ofoverlay-3-flowpipeline.png[align="center",width=500]

The OfOverlay leverages Nicira registers as follows:

* REG0 = Source EndpointGroup + Tenant ordinal
* REG1 = Source Conditions + Tenant ordinal
* REG2 = Destination EndpointGroup + Tenant ordinal
* REG3 = Destination Conditions + Tenant ordinal
* REG4 = Bridge Domain + Tenant ordinal
* REG5 = Flood Domain + Tenant ordinal
* REG6 = Layer 3 Context + Tenant ordinal

*Port Security*

Table 0 of the OpenFlow pipeline, responsible for ensuring that only valid connections can send packets into the pipeline:

 cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
 cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
 cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
 cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
 cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
 cookie=0x0, <snip> , priority=112,ipv6 actions=drop
 cookie=0x0, <snip> , priority=111, ip actions=drop
 cookie=0x0, <snip> , priority=110,arp actions=drop
 cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
 cookie=0x0, <snip> , priority=1 actions=drop

Ingress from tunnel interface, go to Table _Source Mapper_:

 cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2

Ingress from outside, go to Table _Ingress NAT Mapper_:

 cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1

ARP from Endpoint, go to Table _Source Mapper_:

 cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2

IPv4 from Endpoint, go to Table _Source Mapper_:

 cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2

DHCP DORA from Endpoint, go to Table _Source Mapper_:

 cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2

A series of drop flows, with priorities set to capture any non-specific traffic that should have matched above:

 cookie=0x0, <snip> , priority=112,ipv6 actions=drop
 cookie=0x0, <snip> , priority=111, ip actions=drop
 cookie=0x0, <snip> , priority=110,arp actions=drop

"L2" catch-all for traffic not identified above:

 cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2

Drop flow:

 cookie=0x0, <snip> , priority=1 actions=drop

*Ingress NAT Mapper*

Table <<offset,_offset_>>+1.

ARP responder for the external NAT address:

 cookie=0x0, <snip> , priority=150,arp,arp_tpa=192.168.111.51,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:58:c3:dd->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e58c3dd->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xc0a86f33->NXM_OF_ARP_SPA[],IN_PORT

Translates from Outside to Inside and performs the same functions as the Source Mapper:

 cookie=0x0, <snip> , priority=100,ip,nw_dst=192.168.111.51 actions=set_field:10.1.1.2->ip_dst,set_field:fa:16:3e:58:c3:dd->eth_dst,load:0x2->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0x3->NXM_NX_TUN_ID[0..31],goto_table:3

*Source Mapper*

Table <<offset,_offset_>>+2.

Determines, based on characteristics of the ingress port, the endpoint's:

* EndpointGroup(s)
* Forwarding context
* Tunnel VNID ordinal

Establishes tunnels at valid destination switches for ingress.

Ingress tunnel established at the remote node, with a VNID ordinal that maps to the Source EPG, Forwarding Context, etc.:

 cookie=0x0, <snip>, priority=150,tun_id=0xd,in_port=3 actions=load:0xc->NXM_NX_REG0[],load:0xffffff->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],goto_table:3

Maps the endpoint to Source EPG and Forwarding Context based on ingress port and MAC:

 cookie=0x0, <snip> , priority=100,in_port=5,dl_src=fa:16:3e:b4:b4:b1 actions=load:0xc->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0xd->NXM_NX_TUN_ID[0..31],goto_table:3

Generic drop:

 cookie=0x0, duration=197.622s, table=2, n_packets=0, n_bytes=0, priority=1 actions=drop

*Destination Mapper*

Table <<offset,_offset_>>+3.

Determines, based on characteristics of the endpoint, its:

* EndpointGroup(s)
* Forwarding context
* Tunnel destination value

Manages routing based on valid ingress nodes ARP'ing for their default gateway, and matches on either the gateway MAC or the destination endpoint MAC.

ARP for the default gateway of the 10.1.1.0/24 subnet:

 cookie=0x0, <snip> , priority=150,arp,reg6=0x7,arp_tpa=10.1.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:28:4c:82->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e284c82->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xa010101->NXM_OF_ARP_SPA[],IN_PORT

Broadcast traffic destined for the GroupTable:

 cookie=0x0, <snip> , priority=140,reg5=0x5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=load:0x5->NXM_NX_TUN_ID[0..31],group:5

Layer 3 destination matching flows, where priority=100+masklength (so the /32 host route below gets priority 132, while a /0 default route would get priority 100). Since *GBP* now supports L3Prefix endpoints, default routes etc. can be set:

 cookie=0x0, <snip>, priority=132,ip,reg6=0x7,dl_dst=fa:16:3e:b4:b4:b1,nw_dst=10.1.1.3 actions=load:0xc->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x5->NXM_NX_REG7[],set_field:fa:16:3e:b4:b4:b1->eth_dst,dec_ttl,goto_table:4

Layer 2 destination matching flows, designed to be caught only after the last IP flow (the lowest-priority IP flow is 100):

 cookie=0x0, duration=323.203s, table=3, n_packets=4, n_bytes=168, priority=50,reg4=0x4,dl_dst=fa:16:3e:58:c3:dd actions=load:0x2->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x2->NXM_NX_REG7[],goto_table:4

General drop flow:

 cookie=0x0, duration=323.207s, table=3, n_packets=6, n_bytes=588, priority=1 actions=drop

*Policy Enforcer*

Table <<offset,_offset_>>+4.

Once the Source and Destination EndpointGroups are assigned, policy is enforced based on resolved rules.

In the case of <<SFC,Service Function Chaining>>, the encapsulation and destination for traffic destined to a chain are discovered and enforced.

Policy flow, allowing IP traffic between EndpointGroups:

 cookie=0x0, <snip> , priority=64998,ip,reg0=0x8,reg1=0x1,reg2=0xc,reg3=0x1 actions=goto_table:5

*Egress NAT Mapper*

Table <<offset,_offset_>>+5.

Performs the NAT function before traffic egresses the OVS instance to the underlay network.

Inside-to-Outside NAT translation before sending to the underlay:

 cookie=0x0, <snip> , priority=100,ip,reg6=0x7,nw_src=10.1.1.2 actions=set_field:192.168.111.51->ip_src,goto_table:6

*External Mapper*

Table <<offset,_offset_>>+6.

Manages post-policy enforcement for endpoint-specific destination effects, specifically for <<SFC,Service Function Chaining>>; this is why both symmetric and asymmetric chains
and distributed ingress/egress classification are supported.

Generic allow:

 cookie=0x0, <snip>, priority=100 actions=output:NXM_NX_REG7[]

==== Configuring OpenFlow Overlay via REST

NOTE: Please see the <<UX,UX>> section on how to configure *GBP* via the GUI.

*Endpoint*

----
POST http://{{controllerIp}}:8181/restconf/operations/endpoint:register-endpoint
{
    "input": {
        "endpoint-group": "<epg0>",
        "endpoint-groups": ["<epg1>","<epg2>"],
        "network-containment": "<forwarding-model-context1>",
        "l2-context": "<bridge-domain1>",
        "mac-address": "<mac1>",
        "l3-address": [
            {
                "ip-address": "<ipaddress1>",
                "l3-context": "<l3_context1>"
            }
        ],
        "ofoverlay:port-name": "<ovs port name>",
        "tenant": "<tenant1>"
    }
}
----

NOTE: The "port-name" leaf is prefixed with "ofoverlay" because it is an augmentation. In OpenDaylight, base datastore objects can be _augmented_. In *GBP*, the base endpoint model has no renderer
specifics and hence can be leveraged across multiple renderers.
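
A concrete sketch of the same call with sample values filled in; all UUIDs, addresses, and the port name below are illustrative placeholders only:

----
POST http://{{controllerIp}}:8181/restconf/operations/endpoint:register-endpoint
{
    "input": {
        "endpoint-group": "1eaf9a67-a171-42a8-9282-71cf702f61dd",
        "network-containment": "bf513e35-9fbf-4d1a-a42b-0aa2aeaf1c32",
        "l2-context": "70aeb9ea-4ca1-4fb9-9780-22b04b84a0d6",
        "mac-address": "00:00:00:00:35:02",
        "l3-address": [
            {
                "ip-address": "10.0.35.2",
                "l3-context": "f2311f52-890f-4095-8b85-485ec8b92b3c"
            }
        ],
        "ofoverlay:port-name": "vethl-h35-2",
        "tenant": "f5c7d344-d1c7-4208-8531-2c2693657e12"
    }
}
----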

*OVS Augmentations to Inventory*

----
PUT http://{{controllerIp}}:8181/restconf/config/opendaylight-inventory:nodes/
{
    "opendaylight-inventory:nodes": {
        "node": [
            {
                "id": "openflow:123456",
                "ofoverlay:tunnel": [
                    {
                        "tunnel-type": "overlay:tunnel-type-vxlan",
                        "ip": "<ip_address_of_ovs>",
                        "port": 4789,
                        "node-connector-id": "openflow:123456:1"
                    }
                ]
            },
            {
                "id": "openflow:654321",
                "ofoverlay:tunnel": [
                    {
                        "tunnel-type": "overlay:tunnel-type-vxlan",
                        "ip": "<ip_address_of_ovs>",
                        "port": 4789,
                        "node-connector-id": "openflow:654321:1"
                    }
                ]
            }
        ]
    }
}
----
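
To confirm that the augmentation was written, a single node can be read back with `curl` (assuming the default `admin:admin` RESTCONF credentials):

----
curl -u admin:admin \
  http://{{controllerIp}}:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:123456
----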

*Tenants* (see <<policyresolution,Policy Resolution>> and <<forwarding,Forwarding Model>> for details):

----
{
  "policy:tenant": {
    "contract": [
      {
        "clause": [
          {
            "name": "allow-http-clause",
            "subject-refs": [
              "allow-http-subject",
              "allow-icmp-subject"
            ]
          }
        ],
        "id": "<id>",
        "subject": [
          {
            "name": "allow-http-subject",
            "rule": [
              {
                "classifier-ref": [
                  {
                    "direction": "in",
                    "name": "http-dest"
                  },
                  {
                    "direction": "out",
                    "name": "http-src"
                  }
                ],
                "action-ref": [
                  {
                    "name": "allow1",
                    "order": 0
                  }
                ],
                "name": "allow-http-rule"
              }
            ]
          },
          {
            "name": "allow-icmp-subject",
            "rule": [
              {
                "classifier-ref": [
                  {
                    "name": "icmp"
                  }
                ],
                "action-ref": [
                  {
                    "name": "allow1",
                    "order": 0
                  }
                ],
                "name": "allow-icmp-rule"
              }
            ]
          }
        ]
      }
    ],
    "endpoint-group": [
      {
        "consumer-named-selector": [
          {
            "contract": [
              "<id>"
            ],
            "name": "<name>"
          }
        ],
        "id": "<id>",
        "provider-named-selector": []
      },
      {
        "consumer-named-selector": [],
        "id": "<id>",
        "provider-named-selector": [
          {
            "contract": [
              "<id>"
            ],
            "name": "<name>"
          }
        ]
      }
    ],
    "id": "<id>",
    "l2-bridge-domain": [
      {
        "id": "<id>",
        "parent": "<id>"
      }
    ],
    "l2-flood-domain": [
      {
        "id": "<id>",
        "parent": "<id>"
      },
      {
        "id": "<id>",
        "parent": "<id>"
      }
    ],
    "l3-context": [
      {
        "id": "<id>"
      }
    ],
    "name": "GBPPOC",
    "subject-feature-instances": {
      "classifier-instance": [
        {
          "classifier-definition-id": "<id>",
          "name": "http-dest",
          "parameter-value": [
            {
              "int-value": "6",
              "name": "proto"
            },
            {
              "int-value": "80",
              "name": "destport"
            }
          ]
        },
        {
          "classifier-definition-id": "<id>",
          "name": "http-src",
          "parameter-value": [
            {
              "int-value": "6",
              "name": "proto"
            },
            {
              "int-value": "80",
              "name": "sourceport"
            }
          ]
        },
        {
          "classifier-definition-id": "<id>",
          "name": "icmp",
          "parameter-value": [
            {
              "int-value": "1",
              "name": "proto"
            }
          ]
        }
      ],
      "action-instance": [
        {
          "name": "allow1",
          "action-definition-id": "<id>"
        }
      ]
    },
    "subnet": [
      {
        "id": "<id>",
        "ip-prefix": "<ip_prefix>",
        "parent": "<id>",
        "virtual-router-ip": "<ip address>"
      },
      {
        "id": "<id>",
        "ip-prefix": "<ip prefix>",
        "parent": "<id>",
        "virtual-router-ip": "<ip address>"
      }
    ]
  }
}
----
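
The original does not state a target URL for the tenant document; assuming the standard *GBP* policy YANG paths, it would typically be submitted with a PUT to the tenant's entry in the config datastore, for example:

----
PUT http://{{controllerIp}}:8181/restconf/config/policy:tenants/policy:tenant/<tenant_id>
----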

[[Demo]]
==== Tutorials

Comprehensive tutorials, along with a demonstration environment leveraging Vagrant,
can be found on the https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)[*GBP* wiki].
515 can be found on the https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)[*GBP* wiki]
516