documentation
release-process/index
javadoc
- opendaylight-with-openstack/index
release-notes/sample-release-notes
user-guide/index
developer-guide/index
+++ /dev/null
-#################################
-OpenDaylight with OpenStack Guide
-#################################
-
-********
-Overview
-********
-
-OpenStack_ is a popular open source Infrastructure-as-a-Service
-project covering compute, storage, and network management.
-OpenStack can use OpenDaylight as its network management provider through the
-Modular Layer 2 (ML2) north-bound plug-in. OpenDaylight manages the network
-flows for the OpenStack compute nodes via the OVSDB south-bound plug-in. This
-page describes how to set that up, and how to tell when everything is working.
-
-********************
-Installing OpenStack
-********************
-
-Installing OpenStack is out of scope for this document, but to get started, it
-is useful to have a minimal multi-node OpenStack deployment.
-
-The reference deployment we will use for this document is a three-node cluster:
-
-* One control node containing all of the management services for OpenStack_
- (Nova, Neutron, Glance, Swift, Cinder, Keystone)
-* Two compute nodes running nova-compute
-* Neutron using the OVS back-end and VXLAN for tunnels
-
-Once you have installed OpenStack_, verify that it is working by connecting
-to Horizon and performing a few operations. To check the Neutron
-configuration, create two instances on a private subnet bridging to your
-public network, and verify that you can connect to them, and that they can
-see each other.
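The connectivity check above can be scripted with the OpenStack CLI; this is only a sketch, and the network, image, and flavor names (``private``, ``cirros``, ``m1.tiny``) are placeholders for whatever your deployment provides.

```bash
openstack server create --network private --image cirros --flavor m1.tiny vm1
openstack server create --network private --image cirros --flavor m1.tiny vm2
openstack server list          # wait until both instances report ACTIVE
# Attach to vm1's console (e.g. via Horizon) and ping vm2's fixed IP.
```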
-
-***********************
-Installing OpenDaylight
-***********************
-
-* :doc:`NetVirt Documentation <netvirt:index>`
-
-.. toctree::
- :maxdepth: 1
-
- openstack-with-gbp
- openstack-with-gbp-vpp
- openstack-with-vtn
-
-.. _OpenStack: https://www.openstack.org/
+++ /dev/null
-Using Group Based Policy's Neutron VPP Mapper
-=============================================
-
-Overview
---------
-The Neutron VPP Mapper implements support for policy-based routing on OpenStack Neutron interfaces involving VPP devices.
-It allows policy-based schemes defined in the GBP controller to be used in a network of OpenStack-provided nodes routed by a VPP node.
-
-Architecture
-------------
-The Neutron VPP Mapper listens to Neutron data store change events and can also read the store directly.
-If the changed data match certain criteria (see `Processing Neutron Configuration`_),
-the Neutron VPP Mapper converts the Neutron data required to render a VPP node configuration for a given endpoint,
-e.g., the virtual host interface name assigned to a ``vhostuser`` socket.
-The mapped data is then stored in the VPP info data store.
-
-Administering Neutron VPP Mapper
---------------------------------
-To use the Neutron VPP Mapper in Karaf, at least the following Karaf features must be installed:
-
-* odl-groupbasedpolicy-neutron-vpp-mapper
-* odl-vbd-ui
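Both features can be installed in one step at the Karaf console (assuming they are present in your distribution's feature repositories):

```bash
feature:install odl-groupbasedpolicy-neutron-vpp-mapper odl-vbd-ui
```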
-
-Initial pre-requisites
-----------------------
-A topology must exist in the config datastore, and it must define a node with a particular ``node-id``.
-Later, this ``node-id`` is used as a physical location reference in the VPP renderer's bridge domain::
-
- GET http://localhost:8181/restconf/config/network-topology:network-topology/
-
- {
- "network-topology":{
- "topology":[
- {
- "topology-id":"datacentre",
- "node":[
- {
- "node-id":"dut2",
- "vlan-tunnel:super-interface":"GigabitEthernet0/9/0",
- "termination-point":[
- {
- "tp-id":"GigabitEthernet0/9/0",
- "neutron-provider-topology:physical-interface":{
- "interface-name":"GigabitEthernet0/9/0"
- }
- }
- ]
- }
- ]
- }
- ]
- }
- }
-
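If no such topology exists yet, one can be created by PUTting the same document; the sketch below is an assumption about how you might push it with ``curl`` (host, port, and credentials are placeholders).

```bash
# topology.json holds the "topology" element shown above.
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
     -d @topology.json \
     "http://localhost:8181/restconf/config/network-topology:network-topology/topology/datacentre"
```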
-
-Processing Neutron Configuration
---------------------------------
-``NeutronListener`` listens to changes in the Neutron datatree in the config datastore. It filters the changes, processing only ``network`` and ``port`` entities.
-
-A ``network`` entity is checked for a ``physical-network`` parameter (i.e., it is backed by a physical network)
-and a ``network-type`` of ``vlan-network`` or ``flat``. If this check passes, a related bridge domain is created
-in the VPP Renderer config datastore
-(``http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config``), referenced to the network by its ``vlan`` field.
-
-In the case of ``vlan-network``, the ``vlan`` field contains the same value as the ``neutron-provider-ext:segmentation-id`` of the network created by Neutron.
-
-In the case of ``flat``, the VLAN-specific parameters are not filled in.
-
-.. note:: In the case of a VXLAN network (i.e., ``network-type`` is ``vxlan-network``), no information is written
-   into the VPP Renderer datastore, as VXLAN is used for the tenant network (so no packets go outside). Instead, the VPP Renderer looks up the GBP flood domains corresponding to existing VPP bridge domains and tries to establish a VXLAN tunnel between them.
-
-A ``port`` entity is checked for a ``vif-type`` containing the ``"vhostuser"`` substring, and for a ``device-owner`` containing one of the specific substrings ``"compute"``, ``"router"``, or ``"dhcp"``.
-
-In the case of the ``"compute"`` substring, a ``vhost-user`` interface is written to the VPP Renderer config datastore.
-
-In the case of ``"dhcp"`` or ``"router"``, a ``tap`` interface is written to the VPP Renderer config datastore.
-
-Input/output examples
----------------------
-
-When OpenStack creates a network, the following data are put into the data store::
-
- PUT http://{{controller}}:{{port}}/restconf/config/neutron:neutron/networks
-
- {
- "networks": {
- "network": [
- {
- "uuid": "43282482-a677-4102-87d6-90708f30a115",
- "tenant-id": "94836b88-0e56-4150-aaa7-60f1c2b67faa",
- "neutron-provider-ext:segmentation-id": "2016",
- "neutron-provider-ext:network-type": "neutron-networks:network-type-vlan",
- "neutron-provider-ext:physical-network": "datacentre",
- "neutron-L3-ext:external": true,
- "name": "drexternal",
- "shared": false,
- "admin-state-up": true,
- "status": "ACTIVE"
- }
- ]
- }
- }
-
-Check the bridge domain in the VPP Renderer config data store.
-Note that ``physical-location-ref`` refers to ``"dut2"``, paired via ``neutron-provider-ext:physical-network`` -> ``topology-id``::
-
- GET http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config
-
- {
- "config": {
- "bridge-domain": [
- {
- "id": "43282482-a677-4102-87d6-90708f30a115",
- "type": "vpp-renderer:vlan-network",
- "description": "drexternal",
- "vlan": 2016,
- "physical-location-ref": [
- {
- "node-id": "dut2",
- "interface": [
- "GigabitEthernet0/9/0"
- ]
- }
- ]
- }
- ]
- }
- }
-
-Port (compute)::
-
- PUT http://{{controller}}:{{port}}/restconf/config/neutron:neutron/ports
-
- {
- "ports": {
- "port": [
- {
- "uuid": "3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
- "tenant-id": "94836b88-0e56-4150-aaa7-60f1c2b67faa",
- "device-id": "dhcp58155ae3-f2e7-51ca-9978-71c513ab02ee-a91437c0-8492-47e2-b9d0-25c44aef6cda",
- "neutron-binding:vif-details": [
- {
- "details-key": "somekey"
- }
- ],
- "neutron-binding:host-id": "devstack-control",
- "neutron-binding:vif-type": "vhostuser",
- "neutron-binding:vnic-type": "normal",
- "mac-address": "fa:16:3e:4a:9f:c0",
- "name": "",
- "network-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
- "neutron-portsecurity:port-security-enabled": false,
- "device-owner": "network:compute",
- "fixed-ips": [
- {
- "subnet-id": "0a5834ed-ed31-4425-832d-e273cac26325",
- "ip-address": "10.1.1.3"
- }
- ],
- "admin-state-up": true
- }
- ]
- }
- }
-
- GET http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config
-
- {
- "config": {
- "vpp-endpoint": [
- {
- "context-type": "l2-l3-forwarding:l2-bridge-domain",
- "context-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
- "address-type": "l2-l3-forwarding:mac-address-type",
- "address": "fa:16:3e:4a:9f:c0",
- "vpp-node-path": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='topology-netconf']/network-topology:node[network-topology:node-id='devstack-control']",
- "vpp-interface-name": "neutron_port_3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
- "socket": "/tmp/socket_3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
- "description": "neutron port"
- }
- ]
- }
- }
-
-Port (dhcp)::
-
- PUT http://{{controller}}:{{port}}/restconf/config/neutron:neutron/ports
-
- {
- "ports": {
- "port": [
- {
- "uuid": "3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
- "tenant-id": "94836b88-0e56-4150-aaa7-60f1c2b67faa",
- "device-id": "dhcp58155ae3-f2e7-51ca-9978-71c513ab02ee-a91437c0-8492-47e2-b9d0-25c44aef6cda",
- "neutron-binding:vif-details": [
- {
- "details-key": "somekey"
- }
- ],
- "neutron-binding:host-id": "devstack-control",
- "neutron-binding:vif-type": "vhostuser",
- "neutron-binding:vnic-type": "normal",
- "mac-address": "fa:16:3e:4a:9f:c0",
- "name": "",
- "network-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
- "neutron-portsecurity:port-security-enabled": false,
- "device-owner": "network:dhcp",
- "fixed-ips": [
- {
- "subnet-id": "0a5834ed-ed31-4425-832d-e273cac26325",
- "ip-address": "10.1.1.3"
- }
- ],
- "admin-state-up": true
- }
- ]
- }
- }
-
- GET http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config
-
- {
- "config": {
- "vpp-endpoint": [
- {
- "context-type": "l2-l3-forwarding:l2-bridge-domain",
- "context-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
- "address-type": "l2-l3-forwarding:mac-address-type",
- "address": "fa:16:3e:4a:9f:c0",
- "vpp-node-path": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='topology-netconf']/network-topology:node[network-topology:node-id='devstack-control']",
- "vpp-interface-name": "neutron_port_3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
- "physical-address": "fa:16:3e:4a:9f:c0",
- "name": "tap3d5dff96-25",
- "description": "neutron port"
- }
- ]
- }
- }
+++ /dev/null
-OpenStack with GroupBasedPolicy
-===============================
-
-This section is for Application Developers and Network Administrators
-who are looking to integrate Group Based Policy with OpenStack.
-
-To enable the **GBP** Neutron Mapper feature, run the following at the Karaf console:
-
-.. code-block:: bash
-
- feature:install odl-groupbasedpolicy-neutronmapper
-
-Neutron Mapper has the following dependencies that are automatically loaded:
-
-.. code-block:: bash
-
- odl-neutron-service
-
-Neutron Northbound, implementing the REST API used by OpenStack.
-
-.. code-block:: bash
-
- odl-groupbasedpolicy-base
-
-The base **GBP** feature set: policy resolution, data model, etc.
-
-.. code-block:: bash
-
- odl-groupbasedpolicy-ofoverlay
-
-For this release, **GBP** has one renderer, hence this is loaded by default.
-
-REST calls from OpenStack Neutron are handled by the Neutron Northbound project.
-
-**GBP** provides the implementation of the `Neutron V2.0 API <neutron_v2api_>`_.
-
-Features
---------
-
-List of supported Neutron entities:
-
-* Port
-* Network
-
- * Standard Internal
- * External provider L2/L3 network
-
-* Subnet
-* Security-groups
-* Routers
-
- * Distributed functionality with local routing per compute
- * External gateway access per compute node (dedicated port required)
- * Multiple routers per tenant
-
-* FloatingIP NAT
-* IPv4/IPv6 support
-
-The mapping of Neutron entities to **GBP** entities is as follows:
-
-**Neutron Port**
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-port.png
-
- Neutron Port
-
-The Neutron port is mapped to an endpoint.
-
-The current implementation supports one IP address per Neutron port.
-
-An endpoint and L3-endpoint belong to multiple EndpointGroups if the Neutron
-port is in multiple Neutron Security Groups.
-
-The key for the endpoint is the L2-bridge-domain, obtained as the parent of the
-L2-flood-domain representing the Neutron network. The MAC address comes from
-the Neutron port.
-An L3-endpoint is created based on the L3-context (the parent of the
-L2-bridge-domain) and the IP address of the Neutron port.
-
-**Neutron Network**
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-network.png
-
- Neutron Network
-
-A Neutron network has the following characteristics:
-
-* defines a broadcast domain
-* defines an L2 transmission domain
-* defines an L2 namespace.
-
-To represent this, a Neutron Network is mapped to multiple **GBP** entities.
-The first mapping is to an L2 flood-domain to reflect that the Neutron network
-is one flooding or broadcast domain.
-An L2-bridge-domain is then associated as the parent of L2 flood-domain. This
-reflects both the L2 transmission domain as well as the L2 addressing namespace.
-
-The third mapping is to L3-context, which represents the distinct L3 address space.
-The L3-context is the parent of L2-bridge-domain.
-
-**Neutron Subnet**
-
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet.png
-
- Neutron Subnet
-
-A Neutron subnet is associated with a Neutron network. The Neutron subnet is
-mapped to a **GBP** subnet whose parent is the L2-flood-domain representing
-the Neutron network.
-
-**Neutron Security Group**
-
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-securitygroup.png
-
- Neutron Security Group and Rules
-
-The **GBP** entity representing a Neutron security-group is the EndpointGroup.
-
-**Infrastructure EndpointGroups**
-
-Neutron-mapper automatically creates EndpointGroups to manage key infrastructure
-items such as:
-
-* DHCP EndpointGroup - contains endpoints representing Neutron DHCP ports
-* Router EndpointGroup - contains endpoints representing Neutron router
- interfaces
-* External EndpointGroup - holds L3-endpoints representing Neutron router
- gateway ports, also associated with FloatingIP ports.
-
-**Neutron Security Group Rules**
-
-This mapping is the most complicated of all, because Neutron
-security-group-rules are mapped to contracts with clauses,
-subjects, rules, action-refs, classifier-refs, etc.
-Contracts are used between endpoint groups representing Neutron Security Groups.
-For simplification, it is important to note that a Neutron security-group-rule
-is similar to a **GBP** rule containing:
-
-* classifier with direction
-* action of *allow*.
-
-
-**Neutron Routers**
-
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-router.png
-
- Neutron Router
-
-A Neutron router is represented as an L3-context. This treats a router as a
-Layer 3 namespace, and hence every network attached to it is part of that
-Layer 3 namespace.
-
-This allows for multiple routers per tenant with complete isolation.
-
-The mapping of the router to an endpoint represents the router's interface or
-gateway port.
-
-The mapping to an EndpointGroup represents the internal infrastructure
-EndpointGroups created by the **GBP** Neutron Mapper.
-
-When a Neutron router interface is attached to a network/subnet, that
-network/subnet and its associated endpoints or Neutron Ports are seamlessly
-added to the namespace.
-
-**Neutron FloatingIP**
-
-When associated with a Neutron Port, this leverages the *GBP* OfOverlay
-renderer's NAT capabilities.
-
-A dedicated *external* interface on each Nova compute host allows for
-distributed external access. Each Nova instance associated with a
-FloatingIP address can access the external network directly without having to
-route via the Neutron controller, or having to enable any form
-of Neutron distributed routing functionality.
-
-Assuming the gateway provisioned in the Neutron Subnet command for the external
-network is reachable, the combination of *GBP* Neutron Mapper and
-OfOverlay renderer will automatically ARP for this default gateway, requiring
-no user intervention.
-
-
-**Troubleshooting within GBP**
-
-Logging level for the mapping functionality can be set for package
-org.opendaylight.groupbasedpolicy.neutron.mapper. An example of enabling TRACE
-logging level on karaf console:
-
-.. code-block:: bash
-
- log:set TRACE org.opendaylight.groupbasedpolicy.neutron.mapper
-
-**Neutron mapping example**
-
-The creation of a Neutron network, subnet, and port can serve as a mapping
-example. When a Neutron network is created, three **GBP** entities are created:
-l2-flood-domain, l2-bridge-domain, and l3-context.
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-network-example.png
-
- Neutron network mapping
-
-After a subnet is created in the network, the mapping looks like this.
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet-example.png
-
- Neutron subnet mapping
-
-If a Neutron port is created in the subnet, an endpoint and an l3-endpoint are
-created. The endpoint's key is composed of the l2-bridge-domain and the MAC
-address from the Neutron port. The l3-endpoint's key is composed of the
-l3-context and the IP address. The network containment of the endpoint and
-l3-endpoint points to the subnet.
-
-
-.. figure:: images/groupbasedpolicy/neutronmapper-gbp-mapping-port-example.png
-
- Neutron port mapping
-
-Configuring GBP Neutron
------------------------
-
-No user intervention is required beyond the initial OpenStack setup.
-
-More information about configuration can be found in our DevStack demo
-environment on the `GBP wiki <gbp_wiki_>`_.
-
-Administering or Managing GBP Neutron
--------------------------------------
-
-For consistency's sake, all provisioning should be performed via the Neutron API (CLI or Horizon).
-
-The mapped policies can be augmented via the **GBP** UX, to:
-
-* Enable Service Function Chaining
-* Add endpoints from outside of Neutron i.e. VMs/containers not provisioned in OpenStack
-* Augment policies/contracts derived from Security Group Rules
-* Overlay additional contracts or groupings
-
-Tutorials
----------
-
-A DevStack demo environment can be found on the
-`GBP wiki <gbp_wiki_>`_.
-
-.. _gbp_wiki: https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)
-.. _neutron_v2api: http://developer.openstack.org/api-ref-networking-v2.html
+++ /dev/null
-.. _vtn-openstack-dev-guide:
-
-OpenStack with Virtual Tenant Network
-=====================================
-
-This section describes using OpenDaylight with the VTN Manager feature to provide
-network services for OpenStack. VTN Manager utilizes the OVSDB southbound service
-and Neutron for this implementation. The diagram below depicts the communication
-between OpenDaylight and two virtual networks connected by an OpenFlow switch using
-this implementation.
-
-.. figure:: images/vtn/OpenStackDeveloperGuide.png
-
- OpenStack Architecture
-
-Configure OpenStack to work with OpenDaylight (VTN Feature) using PackStack
-----------------------------------------------------------------------------
-
-Prerequisites to install OpenStack using PackStack
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-* Fresh CentOS 7.1 minimal install
-* Use the commands below to stop and disable NetworkManager in CentOS 7.1:
-
-.. code-block:: bash
-
- systemctl stop NetworkManager
- systemctl disable NetworkManager
-
-* To make SELinux permissive, open the file ``/etc/sysconfig/selinux`` and set ``SELINUX=permissive``.
-* After making SELinux permissive, restart the CentOS 7.1 machine.
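The edit can be scripted; the snippet below is a sketch that demonstrates the ``sed`` expression on a scratch copy (on a real host you would run it as root against ``/etc/sysconfig/selinux`` and then reboot).

```shell
# Work on a scratch copy so nothing on the build host is touched.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"   # sample contents
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"
```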
-
-Steps to install OpenStack PackStack in CentOS 7.1
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-* To install OpenStack Juno, use the following commands:
-
-.. code-block:: bash
-
- yum update -y
- yum -y install https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
-
-
-* To install the PackStack installer, use the command below:
-
-.. code-block:: bash
-
- yum -y install openstack-packstack
-
-* To create an all-in-one setup, use the command below:
-
-.. code-block:: bash
-
- packstack --allinone --provision-demo=n --provision-all-in-one-ovs-bridge=n
-
-* This will end with a "Horizon started successfully" message.
-
-Steps to install and deploy OpenDaylight in CentOS 7.1
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-* Download the Boron distribution from the link below:
-
-.. code-block:: bash
-
- wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.5.0-Boron/distribution-karaf-0.5.0-Boron.zip
-
-
-* Unzip the Boron distribution using the command below:
-
-.. code-block:: bash
-
- unzip distribution-karaf-0.5.0-Boron.zip
-
-* Perform the following steps in OpenDaylight to change the Jetty port:
-
- * Change the Jetty port from 8080 to something else, as the OpenStack Swift
-   proxy is using it.
- * Open the file "etc/jetty.xml" and change the Jetty port from 8080 to 8910
-   (we have used 8910 as the Jetty port; you can use any other free port).
- * Start VTN Manager and install odl-vtn-manager-neutron in it.
- * Ensure all the required ports (6633/6653, 6640 and 8910) are in listen
-   mode, using the command "netstat -tunpl" in OpenDaylight.
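The port change itself is a one-line edit; the XML fragment below is only an illustrative guess at the relevant ``etc/jetty.xml`` line (attribute names vary between releases), and the ``sed`` edit is demonstrated on a scratch copy.

```shell
# Demonstrate the edit on a scratch file; on a real installation point sed
# at etc/jetty.xml inside the unpacked distribution-karaf directory.
f=$(mktemp)
echo '<Property name="jetty.port" default="8080" />' > "$f"   # assumed fragment
sed -i 's/default="8080"/default="8910"/' "$f"
cat "$f"
```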
-
-Steps to reconfigure OpenStack in CentOS 7.1
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-* Stop the Open vSwitch agent and clean up OVS
-
-.. code-block:: bash
-
- sudo systemctl stop neutron-openvswitch-agent
- sudo systemctl disable neutron-openvswitch-agent
- sudo systemctl stop openvswitch
- sudo rm -rf /var/log/openvswitch/*
- sudo rm -rf /etc/openvswitch/conf.db
- sudo systemctl start openvswitch
- sudo ovs-vsctl show
-
-
-* Stop Neutron Server
-
-.. code-block:: bash
-
- systemctl stop neutron-server
-
-
-* Verify that OpenDaylight's ML2 interface is working:
-
-.. code-block:: bash
-
- curl -v -u admin:admin http://{CONTROL_HOST}:{PORT}/controller/nb/v2/neutron/networks
-
- {
- "networks" : [ ]
- }
-
-If this does not work or gives an error, check Neutron's log file at
-*/var/log/neutron/server.log*. Error messages there should give
-some clue as to what the problem is in the connection with OpenDaylight.
-
-* Configure Neutron to use OpenDaylight's ML2 driver:
-
-.. code-block:: bash
-
- sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers opendaylight
- sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types local
- sudo crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers local
- sudo crudini --set /etc/neutron/dhcp_agent.ini DEFAULT ovs_use_veth True
-
- cat <<EOT | sudo tee -a /etc/neutron/plugins/ml2/ml2_conf.ini > /dev/null
- [ml2_odl]
- password = admin
- username = admin
- url = http://{CONTROL_HOST}:{PORT}/controller/nb/v2/neutron
- EOT
-
-* Reset Neutron's ML2 database
-
-.. code-block:: bash
-
- sudo mysql -e "drop database if exists neutron_ml2;"
- sudo mysql -e "create database neutron_ml2 character set utf8;"
- sudo mysql -e "grant all on neutron_ml2.* to 'neutron'@'%';"
- sudo neutron-db-manage --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
-
-* Start Neutron Server
-
-.. code-block:: bash
-
- sudo systemctl start neutron-server
-
-* Restart the Neutron DHCP service
-
-.. code-block:: bash
-
- sudo systemctl restart neutron-dhcp-agent.service
-
-* At this stage, your Open vSwitch configuration should be empty:
-
-.. code-block:: bash
-
- [root@dneary-odl-compute2 ~]# ovs-vsctl show
- 686989e8-7113-4991-a066-1431e7277e1f
- ovs_version: "2.3.1"
-
-
-* Set OpenDaylight as the manager on all nodes
-
-.. code-block:: bash
-
- ovs-vsctl set-manager tcp:127.0.0.1:6640
-
-
-* You should now see a section in your Open vSwitch configuration
- showing that you are connected to the OpenDaylight server, and OpenDaylight
- will automatically create a br-int bridge:
-
-.. code-block:: bash
-
- [root@dneary-odl-compute2 ~]# ovs-vsctl show
- 686989e8-7113-4991-a066-1431e7277e1f
- Manager "tcp:127.0.0.1:6640"
- is_connected: true
- Bridge br-int
- Controller "tcp:127.0.0.1:6633"
- is_connected: true
- fail_mode: secure
- Port "ens33"
- Interface "ens33"
- ovs_version: "2.3.1"
-
-* Add a default flow to OVS to forward packets to the controller when there is a table miss:
-
-.. code-block:: bash
-
- ovs-ofctl --protocols=OpenFlow13 add-flow br-int priority=0,actions=output:CONTROLLER
-
-* Please see the `VTN OpenStack PackStack support guide <VTN_OpenStack_PackStack_>`_
- on the wiki to create VMs from the OpenStack Horizon GUI.
-
-Implementation details
-----------------------
-
-VTN Manager
-^^^^^^^^^^^
-Install the **odl-vtn-manager-neutron** feature, which provides integration with
-the Neutron interface.
-
-.. code-block:: bash
-
- feature:install odl-vtn-manager-neutron
-
-It subscribes to events from Open vSwitch and also implements the Neutron
-requests received by OpenDaylight.
-
-Functional Behavior
-^^^^^^^^^^^^^^^^^^^
-
-**StartUp**
-
-* The ML2 implementation for OpenDaylight ensures that when Open vSwitch is
- started, the configured ODL_IP_ADDRESS is set as the manager.
-* When OpenDaylight receives the update of the Open vSwitch on port 6640
- (manager port), VTN Manager handles the event and adds a bridge with required
- port mappings to the Open vSwitch at the OpenStack node.
-* When Neutron starts up, a new network create is POSTed to OpenDaylight, for
- which VTN Manager creates a Virtual Tenant Network.
-* *Network and Sub-Network Create:* Whenever a new sub network is created, VTN
- Manager will handle the same and create a vbridge under the VTN.
-* *VM Creation in OpenStack:* The interface mentioned as integration bridge in
- the configuration file will be added with more interfaces on creation of a
- new VM in OpenStack and the network is provisioned for it by the VTN Neutron
- feature. The addition of a new port is captured by the VTN Manager and it
- creates a vbridge interface with port mapping for the particular port. When
- the VM starts to communicate with other VMs, the VTN Manager will install flows
- in the Open vSwitch and other OpenFlow switches to facilitate communication
- between them.
-
-.. note::
-
- To use this feature, the VTN feature must be installed.
-
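After a VM boots, the flows and port mappings VTN Manager pushed can be inspected on the OpenStack node; this is a quick sanity check rather than a required step.

```bash
ovs-vsctl show                                        # bridge and port mappings
ovs-ofctl --protocols=OpenFlow13 dump-flows br-int    # flows installed by VTN
```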
-Reference
----------
-
-https://wiki.opendaylight.org/images/5/5c/Integration_of_vtn_and_ovsdb_for_helium.pdf
-
-
-.. _VTN_OpenStack_PackStack: https://wiki.opendaylight.org/view/Release/Lithium/VTN/User_Guide/Openstack_Packstack_Support