=== VTN OpenStack Configuration
This guide describes how to set up OpenStack for integration with the OpenDaylight Controller.

While the OpenDaylight Controller provides several ways to integrate with OpenStack, this guide focuses on the approach that uses the VTN features available on the OpenDaylight Controller. In this integration, VTN Manager works as the network service provider for OpenStack.

VTN Manager features enable OpenStack to work in a pure OpenFlow environment in which all switches in the data plane are OpenFlow switches.
To use the OpenDaylight Controller (ODL) as the network service provider for OpenStack, the following components are required:

* OpenDaylight Controller
* OpenStack Control Node
* OpenStack Compute Node
* OpenFlow switch, such as Mininet (not mandatory)
The VTN features support multiple OpenStack nodes; you can deploy multiple OpenStack Compute Nodes.
In the management plane, the OpenDaylight Controller, the OpenStack nodes and the OpenFlow switches should be able to communicate with each other.
In the data plane, the Open vSwitches running in the OpenStack nodes should communicate with each other through a physical or logical OpenFlow switch. The core OpenFlow switches are not mandatory; the Open vSwitches can also be connected to each other directly.
image::vtn/vtn_devstack_setup.png["LAB Setup",width=500]

NOTE: Ubuntu 14.04 was used on both nodes, and vSphere was used for this how-to.
- Install Ubuntu 14.04 LTS on two servers (the OpenStack Control Node and Compute Node respectively).
- While installing, Ubuntu mandates the creation of a user; we created the user "stack" (we will use the same user for running DevStack).

NOTE: You can also have multiple Compute Nodes.

TIP: Please do a minimal install to avoid issues in the DevStack bring-up.
- Log in to both servers.
- Disable the Ubuntu firewall:

----
sudo ufw disable
----

- Optionally install these packages:

----
sudo apt-get install net-tools
----

- Edit /etc/sudoers (sudo vim /etc/sudoers) and add an entry as follows:

----
stack ALL=(ALL) NOPASSWD: ALL
----
- Check the output of ifconfig -a; two interfaces should be listed, eth0 and eth1, as indicated in the image above.
- We connected the eth0 interface to the network where the ODL Controller is reachable.
- The eth1 interface on both servers was connected to a different network to act as the data plane for the VMs created using OpenStack.
- Manually edit the file /etc/network/interfaces (sudo vim /etc/network/interfaces) and make entries as follows:
----
stack@ubuntu-devstack:~/devstack$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loop-back network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address <IP_ADDRESS_TO_REACH_ODL>
netmask <NETMASK>
broadcast <BROADCAST_IP_ADDRESS>
gateway <GATEWAY_IP_ADDRESS>
auto eth1
iface eth1 inet static
address <IP_ADDRESS_UNIQ>
netmask <NETMASK>
----

NOTE: Please ensure that the eth0 interface is the default route and that it is able to reach the ODL_IP_ADDRESS.

NOTE: The entries for eth1 are not mandatory. If not set, you may have to run "ifup eth1" manually after the stacking is complete to activate the interface.
- Reboot both nodes after the user and network settings so that the network settings are applied.
- Log in again and check the output of ifconfig to ensure that both interfaces are listed.
==== ODL Settings and Execution
* VTN uses the configuration parameters from the 'vtn.ini' file for the OpenStack integration.
* These values will be set for the Open vSwitch in all the participating OpenStack nodes.
* The configuration file 'vtn.ini' needs to be created manually in the 'configuration' directory.
* The contents of 'vtn.ini' should be as follows:

----
bridgename=br-int
portname=eth1
protocols=OpenFlow13
failmode=secure
----
* The values of the configuration parameters must be changed based on the user environment.
* In particular, "portname" should be configured carefully: if the value is wrong, the OpenDaylight Controller fails to forward packets.
* The other parameters work fine as-is for general use cases.
** bridgename
*** The name of the bridge in Open vSwitch that will be created by the OpenDaylight Controller.
*** It must be "br-int".
** portname
*** The name of the port that will be created in the vBridge in Open vSwitch.
*** This must be the same as the name of the interface of the OpenStack Nodes which is used for interconnecting the OpenStack Nodes in the data plane (in our case: eth1).
*** By default, if vtn.ini is not created, VTN uses ens33 as the portname.
** protocols
*** The OpenFlow protocol through which the OpenFlow switch and the Controller communicate.
*** The values can be OpenFlow13 or OpenFlow10.
** failmode
*** The value can be "standalone" or "secure".
*** Please use "secure" for general use cases.
==== Start ODL Controller

* Please install the feature *odl-vtn-manager-neutron*, which provides the integration with the Neutron interface:

----
feature:install odl-vtn-manager-neutron
----
TIP: After running the ODL Controller, please ensure the ODL Controller listens on the ports 6633, 6653, 6640 and 8080.

TIP: Please allow these ports through the firewall so that DevStack is able to communicate with the ODL Controller.

NOTE: 6633/6653 - OpenFlow ports

NOTE: 6640 - OVS Manager port

NOTE: 8080 - Port for the REST API
===== VTN Devstack Script
* The local.conf is a user-maintained settings file, which allows all custom settings for DevStack to be contained in a single file. The file is processed strictly in sequence.
The following data needs to be set in the local.conf file:
* Set HOST_IP, as the automatic detection is unreliable.
* Set FLOATING_RANGE to a range not used on the local network, e.g. 192.168.1.224/27. This configures IP addresses ending in 225-254 to be used as floating IPs.
* Set FLAT_INTERFACE to the Ethernet interface that connects the host to your local network. This is the interface that should be configured with the static IP address mentioned above.
* If the *_PASSWORD variables are not set, you will be prompted to enter values during the execution of stack.sh.
* Set ADMIN_PASSWORD. This password is used for the admin and demo accounts set up as OpenStack users; you can log in to the OpenStack GUI only with these credentials.
* Set MYSQL_PASSWORD. The default here is a random hex string, which is inconvenient if you need to look at the database directly.
* Set RABBIT_PASSWORD. This is used by the messaging services used by both nodes.
* Set SERVICE_PASSWORD. This is used by the OpenStack services (Nova, Glance, etc.) to authenticate with Keystone.
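The FLOATING_RANGE arithmetic above can be sanity-checked before stacking. Below is a minimal sketch in plain bash (not part of DevStack; the 192.168.1.224/27 example range from the text is assumed). It prints the usable floating IP range, excluding the network and broadcast addresses:

```shell
#!/usr/bin/env bash
# Sanity-check a FLOATING_RANGE CIDR using pure bash arithmetic.
# Assumes the address given is the network address of the range.
cidr="192.168.1.224/27"
ip="${cidr%/*}"; prefix="${cidr#*/}"

# Convert dotted quad to a 32-bit integer.
IFS=. read -r a b c d <<< "$ip"
ipnum=$(( (a<<24) | (b<<16) | (c<<8) | d ))

size=$(( 1 << (32 - prefix) ))  # number of addresses in the range
first=$(( ipnum + 1 ))          # first usable floating IP (skip network addr)
last=$(( ipnum + size - 2 ))    # last usable floating IP (skip broadcast)

# Convert a 32-bit integer back to dotted quad.
fmt() { printf '%d.%d.%d.%d' $(( $1>>24 & 255 )) $(( $1>>16 & 255 )) $(( $1>>8 & 255 )) $(( $1 & 255 )); }

echo "Floating IPs: $(fmt $first) - $(fmt $last)"
# → Floating IPs: 192.168.1.225 - 192.168.1.254
```

For the /27 in the example this confirms the 225-254 range mentioned above (30 usable addresses).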
====== DevStack Control
----
[[local|localrc]]
HOST_IP=<CONTROL_NODE_MANAGEMENT_IF_IP_ADDRESS> #Please add the Control Node IP address in this line
FLAT_INTERFACE=<FLAT_INTERFACE_NAME>
SERVICE_HOST=$HOST_IP

RECLONE=yes #Make it "no" after stacking successfully the first time

LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=/opt/stack/logs
#OFFLINE=True #Uncomment this after stacking successfully the first time

ADMIN_PASSWORD=labstack
MYSQL_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
SERVICE_TOKEN=supersecrettoken
ENABLE_TENANT_TUNNELS=false

disable_service rabbit
enable_service quantum
enable_service n-cond
disable_service n-net
enable_service q-svc
enable_service q-dhcp
enable_service q-meta
enable_service horizon
enable_service quantum
enable_service tempest
ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-novnc,n-cauth,nova
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak

Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight
Q_ML2_TENANT_NETWORK_TYPE=local
Q_ML2_PLUGIN_TYPE_DRIVERS=local
disable_service n-net
enable_service q-dhcp
enable_service q-meta
enable_service neutron
enable_service odl-compute
ODL_MGR_IP=<ODL_IP_ADDRESS> #Please add the ODL IP address in this line
OVS_PHYSICAL_BRIDGE=br-int

url=http://<ODL_IP_ADDRESS>:8080/controller/nb/v2/neutron #Please add the ODL IP address in this line
----
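It is easy to miss one of the settings above before running stack.sh. The sketch below checks a local.conf for the variables this guide relies on; the variable list is our own choice for this setup, not an official DevStack requirement, and an inline sample stands in for the real file:

```shell
#!/usr/bin/env bash
# Sketch: verify that a local.conf defines the variables this guide uses.
# Replace the inline sample with e.g.:  sample="$(cat devstack/local.conf)"
required=(HOST_IP FLAT_INTERFACE ADMIN_PASSWORD MYSQL_PASSWORD
          RABBIT_PASSWORD SERVICE_PASSWORD ODL_MGR_IP)

sample='HOST_IP=192.168.64.72
FLAT_INTERFACE=eth1
ADMIN_PASSWORD=labstack
MYSQL_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
ODL_MGR_IP=192.168.64.73'

missing=0
for var in "${required[@]}"; do
  # Each required variable must appear as an assignment at line start.
  if ! grep -q "^${var}=" <<< "$sample"; then
    echo "missing: $var"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "local.conf: all required variables present"
```

Running it against the sample prints `local.conf: all required variables present`; against a real file it lists any assignment that was forgotten.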
====== DevStack Compute
----
[[local|localrc]]
HOST_IP=<COMPUTE_NODE_MANAGEMENT_IP_ADDRESS> #Add the Compute Node management IP address
SERVICE_HOST=<CONTROL_NODE_MANAGEMENT_IP_ADDRESS> #Add the Control Node management IP address here

RECLONE=yes #Make this "no" after stacking successfully once
#OFFLINE=True #Uncomment this line after stacking successfully the first time

LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=/opt/stack/logs

ADMIN_PASSWORD=labstack
MYSQL_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
SERVICE_TOKEN=supersecrettoken

ENABLED_SERVICES=n-cpu,rabbit,neutron

Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight
Q_ML2_TENANT_NETWORK_TYPE=local
Q_ML2_PLUGIN_TYPE_DRIVERS=local
enable_service odl-compute
ODL_MGR_IP=<ODL_IP_ADDRESS> #Add the ODL IP address here
OVS_PHYSICAL_BRIDGE=br-int
ENABLE_TENANT_TUNNELS=false

#Details of the Control Node for various services
[[post-config|/etc/neutron/plugins/ml2/ml2_conf.ini]]

MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://<CONTROLLER_NODE_IP_ADDRESS>:6080/vnc_auto.html" #Add the Controller Node IP address
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
----
Keep OFFLINE=True commented out in the local.conf files for the first run; this makes all the installations happen automatically.
Set RECLONE=yes only when you set up the DevStack environment from scratch.
===== Get Devstack (All nodes)

Install the git application using:

----
sudo apt-get install git
----

Clone DevStack:

----
git clone https://git.openstack.org/openstack-dev/devstack
----

Switch to the stable/juno branch:

----
cd devstack
git checkout stable/juno
----
===== Stack Control Node

.local.conf: <<_devstack_control,DevStack Control>>

* cd into the devstack directory on the Control Node.
* Copy the contents of local.conf (DevStack Control Node) above and save it as "local.conf" in the 'devstack' directory.
* Please modify the IP address values as required.
* Run stack.sh to start stacking:

----
./stack.sh
----
====== Verify Control Node stacking

* stack.sh prints out: Horizon is now available at http://<CONTROL_NODE_IP_ADDRESS>:8080/
* Execute the command 'sudo ovs-vsctl show' in the Control Node terminal and verify that the bridge 'br-int' is created.
===== Stack Compute Node

.local.conf: <<_devstack_compute,DevStack Compute>>

* cd into the devstack directory on the Compute Node.
* Copy the contents of local.conf (DevStack Compute Node) above and save it as "local.conf" in the 'devstack' directory.
* Please modify the IP address values as required.
* Run stack.sh to start stacking:

----
./stack.sh
----
====== Verify Compute Node Stacking

* stack.sh prints out: This is your host ip: <COMPUTE_NODE_IP_ADDRESS>
* Execute the command 'sudo ovs-vsctl show' in the Compute Node terminal and verify that the bridge 'br-int' is created.
* The output of ovs-vsctl show will be similar to the one seen on the Control Node.
===== Additional Verifications
* Please visit the ODL DLUX GUI after stacking all the nodes: http://<ODL_IP_ADDRESS>:8181/dlux/index.html. The switches, the topology and the ports that are currently read can be validated.

TIP: If the interconnection between the Open vSwitches is not seen, please bring up the interface for the data plane manually using the below command:

----
ifup <interface_name>
----

TIP: Some versions of OVS drop packets when there is a table miss, so please add the below flow on all the nodes with OVS version >= 2.1:

----
ovs-ofctl --protocols=OpenFlow13 add-flow br-int priority=0,actions=output:CONTROLLER
----

TIP: Please enable promiscuous mode on the networks involving the interconnect.
===== Create VM from Devstack Horizon GUI
* Log in to http://<CONTROL_NODE_IP>:8080/ to access the Horizon GUI.

image::vtn/OpenStackGui.png["Horizon",width=600]

* Enter the User Name as admin and the Password as labstack.
* We should first ensure that both hypervisors (Control Node and Compute Node) are listed by clicking on the Hypervisors tab.

image::vtn/Hypervisors.png["Hypervisors",width=512]
* Create a new network from the Horizon GUI.
* Click on the Networks tab.
* Click on the Create Network button.

image::vtn/Create_Network.png["Create Network",width=600]

* A popup screen will appear.
* Enter the network name and click the Next button.

image::vtn/Creare_Network_Step_1.png["Step 1",width=600]

* Create a subnetwork by giving the Network Address and click the Next button.

image::vtn/Create_Network_Step_2.png[Step 2,width=600]

* Specify the additional details for the subnetwork (please refer to the image for reference).

image::vtn/Create_Network_Step_3.png[Step 3,width=600]

* Click the Create button.
* Navigate to the Instances tab in the GUI.

image::vtn/Instance_Creation.png["Instance Creation",width=512]

* Click on the Launch Instances button.

image::vtn/Launch_Instance.png[Launch Instance,width=600]

* Click on the Details tab to enter the VM details. For this demo we are creating ten VMs (instances).

* In the Networking tab, we must select the network. To do this, drag and drop an available network to the selected networks, i.e. drag the vtn1 network we created from Available networks to Selected Networks, and click Launch to create the instances.

image::vtn/Launch_Instance_network.png[Launch Network,width=600]

* Ten VMs will be created.

image::vtn/Load_All_Instances.png[Load All Instances,width=600]

* Click on any VM displayed in the Instances tab and click the Console tab.

image::vtn/Instance_Console.png[Instance Console,width=600]

* Log in to the VM console and verify with a ping command.

image::vtn/Instance_ping.png[Ping,width=600]
===== Verification of Control and Compute Node after VM creation
The output of the 'sudo ovs-vsctl show' command after VM creation:
----
[stack@icehouse-compute-odl devstack]$ sudo ovs-vsctl show
    Manager "tcp:192.168.64.73:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:192.168.64.73:6633"
            is_connected: true
        fail_mode: secure
        Port "tapa2e1ef67-79"
            Interface "tapa2e1ef67-79"
        Port "tap5f34d39d-5e"
            Interface "tap5f34d39d-5e"
        Port "tapc2858395-f9"
            Interface "tapc2858395-f9"
        Port "tapa9ea900a-4b"
            Interface "tapa9ea900a-4b"
        Port "tapc63ef3de-53"
            Interface "tapc63ef3de-53"
        Port "tap01d51478-8b"
            Interface "tap01d51478-8b"
        Port "tapa0b085ab-ce"
            Interface "tapa0b085ab-ce"
        Port "tapeab380de-8f"
            Interface "tapeab380de-8f"
        Port "tape404538c-0a"
            Interface "tape404538c-0a"
        Port "tap2940658d-15"
            Interface "tap2940658d-15"
----
----
[stack@icehouse-controller-odl devstack]$ sudo ovs-vsctl show
    Manager "tcp:192.168.64.73:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:192.168.64.73:6633"
            is_connected: true
        fail_mode: secure
        Port "tap71790d18-65"
            Interface "tap71790d18-65"
----
NOTE: In the above scenario, more VMs were scheduled on the Compute Node.
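How the VMs were distributed across the nodes can be read off this output by counting the tap ports. A quick sketch (using a shortened inline sample rather than live ovs-vsctl output; on a real node you would pipe `sudo ovs-vsctl show` in instead):

```shell
#!/usr/bin/env bash
# Count VM tap interfaces in 'ovs-vsctl show' output.
# On a real node: sudo ovs-vsctl show | grep -c 'Port "tap'
sample='    Bridge br-int
        Port "tapa2e1ef67-79"
            Interface "tapa2e1ef67-79"
        Port "tap5f34d39d-5e"
            Interface "tap5f34d39d-5e"
        Port "tapc2858395-f9"
            Interface "tapc2858395-f9"'

# Each Neutron VM port on br-int shows up as a Port "tap...".
count=$(grep -c 'Port "tap' <<< "$sample")
echo "VM interfaces on this node: $count"
# → VM interfaces on this node: 3
```

Run on both nodes, the two counts should add up to the total number of launched instances (ten in this demo).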
* http://devstack.org/guides/multinode-lab.html
* https://wiki.opendaylight.org/view/File:Vtn_demo_hackfest_2014_march.pdf