Link Aggregation Control Protocol User Guide
============================================
This section contains information about how to use the LACP plugin
project with OpenDaylight, including configurations.
Link Aggregation Control Protocol Architecture
----------------------------------------------
The LACP Project within OpenDaylight implements Link Aggregation Control
Protocol (LACP) as an MD-SAL service module. It is used to auto-discover
and aggregate multiple links between an OpenDaylight-controlled network
and LACP-enabled endpoints or switches. The result is the creation of a
logical channel that represents the aggregation of the links. Link
aggregation provides link resiliency and bandwidth aggregation. This
implementation adheres to the IEEE Ethernet specification
`802.3ad <http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf>`__.
Configuring Link Aggregation Control Protocol
---------------------------------------------
This feature can be enabled in the Karaf console of the OpenDaylight
Karaf distribution by issuing the following command::

    feature:install odl-lacp-ui
1. Ensure that legacy (non-OpenFlow) switches are configured with
   LACP mode active and a long timeout, so that the LACP plugin in
   OpenDaylight has time to respond to their messages.

2. Flows that want to take advantage of LACP-configured Link
   Aggregation Groups (LAGs) must explicitly use an OpenFlow group
   table entry created by the LACP plugin. The plugin only creates
   group table entries; it does not program any flows on its own.
Administering or Managing Link Aggregation Control Protocol
-----------------------------------------------------------
LACP-discovered network inventory and network statistics can be viewed
using the following REST APIs.
1. List of aggregators available for a node::

       http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>

   Aggregator information will appear within the ``<lacp-aggregators>``
   XML tag.
2. To view only the information of an aggregator::

       http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>/lacp-aggregators/<agg-id>

   The group ID associated with the aggregator can be found inside the
   ``<lag-groupid>`` XML tag.

   The group table entry information for the ``<lag-groupid>`` added for
   the aggregator is also available in the ``opendaylight-inventory``
   operational datastore.
3. To view physical port information::

       http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>/node-connector/<node-connector-id>

   Ports that are associated with an aggregator will have the tag
   ``<lacp-agg-ref>`` updated with valid aggregator information.
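As a sketch of consuming these APIs from a script, the fragment below pulls the
group ID out of an aggregator response. The XML payload is hand-written for
illustration, using only the tags mentioned above; a real RESTCONF reply
carries additional fields and namespaces, and fetching it would require the
controller's credentials.

```python
import xml.etree.ElementTree as ET

# Hand-written, simplified aggregator response for illustration only;
# a real RESTCONF reply includes namespaces and many more fields.
sample_response = """
<lacp-aggregators>
  <agg-id>1</agg-id>
  <lag-groupid>60169</lag-groupid>
</lacp-aggregators>
"""

root = ET.fromstring(sample_response)
# The group ID to reference in flow actions lives in <lag-groupid>.
group_id = int(root.findtext("lag-groupid"))
print(group_id)
```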
The tutorial below demonstrates LACP LAG creation for a sample Mininet
topology.
Sample LACP Topology creation on Mininet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create the topology with the following command::

    sudo mn --controller=remote,ip=<Controller IP> --topo=linear,1 --switch ovsk,protocols=OpenFlow13
The above command will create a virtual network consisting of a switch
and a host. The switch will be connected to the controller.
Once the topology is discovered, verify the presence of a flow entry
with "dl\_type" set to "0x8809" to handle LACP packets, using the
command below::

    ovs-ofctl -O OpenFlow13 dump-flows s1
    OFPST_FLOW reply (OF1.3) (xid=0x2):
     cookie=0x300000000000001e, duration=60.067s, table=0, n_packets=0, n_bytes=0, priority=5,dl_dst=01:80:c2:00:00:02,dl_type=0x8809 actions=CONTROLLER:65535
Configure an additional link between the switch (s1) and host (h1)
using the commands below in the Mininet shell to aggregate the two
links::

    mininet> py net.addLink(s1, net.get('h1'))
    mininet> py s1.attach('s1-eth2')
The LACP module will listen for LACP control packets generated from a
legacy (non-OpenFlow) switch. In our example, host (h1) will act as a
LACP packet generator. In order to generate the LACP control packets, a
bond interface has to be created on the host (h1) with its mode set to
LACP and a long timeout. To configure the bond interface, create a new
file named ``bonding.conf`` under the ``/etc/modprobe.d/`` directory and
insert the line below in this new file::

    options bonding mode=4

Here ``mode=4`` refers to LACP, and the default timeout is set to long.
Enable the bond interface and associate both physical interfaces
``h1-eth0`` and ``h1-eth1`` as members of the bond interface on host
(h1) using the commands below in the Mininet shell::

    mininet> py net.get('h1').cmd('modprobe bonding')
    mininet> py net.get('h1').cmd('ip link add bond0 type bond')
    mininet> py net.get('h1').cmd('ip link set bond0 address <bond-mac-address>')
    mininet> py net.get('h1').cmd('ip link set h1-eth0 down')
    mininet> py net.get('h1').cmd('ip link set h1-eth0 master bond0')
    mininet> py net.get('h1').cmd('ip link set h1-eth1 down')
    mininet> py net.get('h1').cmd('ip link set h1-eth1 master bond0')
    mininet> py net.get('h1').cmd('ip link set bond0 up')
Once the bond0 interface is up, the host (h1) will send LACP packets to
the switch (s1). The LACP module will then create a LAG through the
exchange of LACP packets between the host (h1) and switch (s1). To view
the bond interface output on the host (h1) side::

    mininet> py net.get('h1').cmd('cat /proc/net/bonding/bond0')
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer2 (0)
    MII Polling Interval (ms): 100
    Aggregator selection policy (ad_select): stable
    Active Aggregator Info:
    Partner Mac Address: 00:00:00:00:01:01

    Slave Interface: h1-eth0
    Link Failure Count: 0
    Permanent HW addr: 00:00:00:00:00:11

    Slave Interface: h1-eth1
    Link Failure Count: 0
    Permanent HW addr: 00:00:00:00:00:12
A corresponding group table entry would be created on the OpenFlow
switch (s1) with "type" set to "select" to perform the LAG
functionality. To view the group entries::

    mininet> ovs-ofctl -O OpenFlow13 dump-groups s1
    OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
     group_id=60169,type=select,bucket=weight:0,actions=output:1,output:2
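If the group ID needs to be picked up by a script rather than read by eye, a
regular expression over the ``dump-groups`` output is one option. The sketch
below parses the sample line shown above; treating this text format as stable
is an assumption, not an official interface.

```python
import re

# Sample dump-groups line copied from the output above; real output may
# contain several groups, one per line.
output = "group_id=60169,type=select,bucket=weight:0,actions=output:1,output:2"

# Capture the numeric group ID of the select-type group.
match = re.search(r"group_id=(\d+),type=select", output)
group_id = int(match.group(1)) if match else None
print(group_id)  # 60169
```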
To apply the LAG functionality on the switches, the flows should be
configured with the action set to the group ID instead of an output
port. A sample add-flow configuration with the output action set to a
group ID::

    sudo ovs-ofctl -O OpenFlow13 add-flow s1 dl_type=0x0806,dl_src=SRC_MAC,dl_dst=DST_MAC,actions=group:60169