-== Control And Provisioning of Wireless Access Points (CAPWAP) Protocol Plugin
+== CAPWAP Developer Guide
=== Overview
-
-CAPWAP plugin project aims to provide new southbound interface for controller to
-be able to monitor and manage CAPWAP compliant WTP network devices. The CAPWAP
+The Control And Provisioning of Wireless Access Points (CAPWAP) plugin project aims to
+provide a new southbound interface that enables the controller to monitor and manage
+CAPWAP compliant wireless termination point (WTP) network devices. The CAPWAP
feature will provide REST based northbound APIs.
-=== CAPWAP Architecture
-
-TBD: Architecture Diagram
+=== CAPWAP Architecture
+The CAPWAP feature is implemented as an MD-SAL based provider module, which
+helps discover WTP devices and update their states in the MD-SAL operational datastore.
=== CAPWAP APIs and Interfaces
-
This section describes the APIs for interacting with the CAPWAP plugin.
-==== CAPWAP Topology
+==== Discovered WTPs
+The CAPWAP project maintains a YANG-modeled list of discovered CAPWAP WTPs in MD-SAL.
+This model is available via RESTCONF.
-TBD:
+* Name: Discovered-WTPs
+* URL: http://${IPADDRESS}:8181/restconf/operational/capwap-impl:capwap-ac-root/
+* Description: Displays the list of discovered WTPs and their basic attributes
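
The RESTCONF call above can be sketched in code. This is a minimal illustration only: the controller address and the default admin/admin RESTCONF credentials are assumptions, and the request is built but not sent, so you can adapt it to a live deployment.

```python
import base64
import urllib.request

CONTROLLER = "127.0.0.1"  # assumed controller address; adjust for your deployment

def discovered_wtps_request(controller: str) -> urllib.request.Request:
    # RESTCONF GET for the operational list of discovered WTPs.
    url = ("http://%s:8181/restconf/operational/"
           "capwap-impl:capwap-ac-root/" % controller)
    # Default admin/admin credentials are an assumption.
    token = base64.b64encode(b"admin:admin").decode("ascii")
    return urllib.request.Request(
        url,
        headers={"Accept": "application/json",
                 "Authorization": "Basic " + token})

req = discovered_wtps_request(CONTROLLER)
# urllib.request.urlopen(req) would return the discovered-WTP list on a live
# controller; here we only show the request that would be sent.
print(req.full_url)
```

On a live controller the same request can be issued with any HTTP client; only the URL and the basic-auth header matter.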
=== API Reference Documentation
-
-TBD: Provide links to JavaDoc, REST API documentation, etc.
+Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html, sign in, and expand the
+capwap-impl panel. From there, users can execute various API calls to test their
+CAPWAP deployment.
== IoTDM Developer Guide
=== Overview
-See https://wiki.opendaylight.org/view/IoTDM_Overview#Overview
-The onem2m resource tree according to the procedures documented in
-TS0001: OneM2M Functional Architecture, and TS0004: OneM2M Service
-Layer Core Protocol Specification. The two official methods to
-access the data tree are HTTP and CoAP. Typically, applications
-access the data tree with HTTP using the procedures outlined
-in TS0009 OneM2M HTTP Protocol Bindings. And, again, typically,
-devices or things access the data tree using CoAP using the
-procedures outlined in TS0008 OneM2M CoAP Protocol Bindings.
-These documents are available on http://onem2m.org
+The Internet of Things Data Management (IoTDM) on OpenDaylight
+project is about developing a data-centric middleware
+that will act as a oneM2M compliant IoT Data Broker and enable
+authorized applications to retrieve IoT data uploaded by any
+device. The OpenDaylight platform is used to implement the oneM2M
+data store which models a hierarchical containment tree, where each
+node in the tree represents a oneM2M resource. Typically, IoT
+devices and applications interact with the resource tree over
+standard protocols such as CoAP, MQTT, and HTTP.
+Initially, the oneM2M resource tree is used by applications to
+retrieve data. Possible applications are inventory or device
+management systems or big data analytic systems designed to
+make sense of the collected data. But, at some point,
+applications will need to configure the devices. Features and
+tools will have to be provided to enable configuration of the
+devices based on applications responding to user input, network
+conditions, or some set of programmable rules or policies possibly
+triggered by the receipt of data collected from the devices.
+The OpenDaylight platform, with its rich cross-section of SDN,
+NFV, and now IoT device and application management capabilities,
+can be bundled with a targeted set of features and deployed
+anywhere in the network to give the network service provider
+ultimate control. Depending on the use case, the OpenDaylight IoT
+platform can be configured with only IoT data collection capabilities
+where it is deployed near the IoT devices and its footprint needs to be
+small, or it can be configured to run as a highly scaled up and
+out distributed cluster with IoT, SDN and NFV functions enabled
+and deployed in a high traffic data center.
-The karaf feature odl-iotdm-onem2m is required in order to access
-the data tree before apps, and things can access it.
+=== oneM2M Architecture
+The architecture provides a framework that enables the support of
+the oneM2M resource containment tree. The onem2m-core implements
+the MDSAL RPCs defined in the onem2m-api YANG files. These RPCs
+enable oneM2M resources to be created, read, updated, and
+deleted (CRUD), and also enables the management of subscriptions.
+When resources are CRUDed, the onem2m-notifier issues oneM2M
+notification events to interested subscribers. TS0001: oneM2M
+Functional Architecture and TS0004: oneM2M Service Layer Protocol
+are great reference documents to learn details of oneM2M resource
+types, message flow, formats, and CRUD/N semantics. Both of these
+specifications can be found at
+http://onem2m.org/technical/published-documents
-=== OneM2M Architecture
- More text to follow
+The oneM2M resource tree is modeled in YANG and essentially is a
+meta-model for the tree. The oneM2M wire protocols allow the
+resource tree to be constructed via HTTP or CoAP messages that
+populate nodes in the tree with resource specific attributes.
+Each oneM2M resource type has semantic behaviour associated with
+it. For example, a container resource has attributes that
+control quotas on the number and size of the content instance
+objects that can exist below it in the tree.
+Depending on the resource type, the oneM2M core software
+implements and enforces the resource type specific rules to
+ensure a well-behaved resource tree.
+
+The resource tree can be simultaneously accessed by many
+concurrent applications wishing to manage or access the tree,
+while many devices report new data or sensor
+readings into their appropriate place in the tree.
=== Key APIs and Interfaces
- More text to follow
+The APIs to access the oneM2M datastore are well documented
+in TS0004 (referenced above), found on onem2m.org.
-RESTconf is available too but generally HTTP and CoAP are used to
+RESTCONF is available too but generally HTTP and CoAP are used to
access the oneM2M data tree.
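
As a concrete illustration of the HTTP binding, the sketch below builds a oneM2M container-create request in the style of TS0009. The CSE address, port, CSE base name, originator, and request identifier are all assumptions for illustration; the request is constructed but not sent.

```python
import json
import urllib.request

# Assumed CSE base URL; adjust host, port, and CSE name for your deployment.
CSE_BASE = "http://127.0.0.1:8282/InCSE1"

def create_container_request(parent: str, name: str) -> urllib.request.Request:
    # oneM2M JSON serialization of a container resource; "rn" is the
    # resource name requested by the originator.
    body = json.dumps({"m2m:cnt": {"rn": name}}).encode("utf-8")
    return urllib.request.Request(
        parent, data=body, method="POST",
        headers={
            # The ty parameter marks the resource type; 3 = container (TS0004).
            "Content-Type": "application/json;ty=3",
            "X-M2M-Origin": "admin",   # originator, an assumption
            "X-M2M-RI": "req-0001",    # request identifier, an assumption
        })

req = create_container_request(CSE_BASE, "sensor-data")
# urllib.request.urlopen(req) would create the container on a live CSE;
# here we only show the message that would be sent.
print(req.get_method(), req.full_url)
```

A CoAP request carries the same primitives in CoAP options instead of HTTP headers, per TS0008.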
-
-==== HTTP
- More text to follow
-
-==== CoAP
- More text to follow
-
-=== API Reference Documentation
- More text to follow
-
-include::ovsdb-northbound-developer.adoc[]
-
include::ovsdb-openstack-developer.adoc[]
include::ovsdb-southbound-developer.adoc[]
+++ /dev/null
-== OVSDB Northbound Developer Guide
-
-=== Overview
-The OVSDB Northbound feature's goal is to give low level access to
-an OVS instance. For instance, one would like to be able to directly create, read, update and delete rows into an OVS instance by using RESTCONF.
-
-The target audience for this feature is one that needs low level
-access to an OVS instance from inside OpenDaylight
-
-=== OVSDB Northbound Developer Architecture
-The northbound bundle architecture is as follow:
-
-- mdsal-northbound-aggregator
-
-- mdsal-northbound-api
-
-- mdsal-northbound-impl
-
-- mdsal-northbound-features
-
-
-=== Key APIs and Interfaces
-[width="80%",cols="10%,10%,10%,70%"]
-|=======
-|Type | Action | Input | URL
-|PUT | Insert Row |Row Data | restconf/config/network-topology:network-topology/topology/ovsdb:1/\{nodeId\}/tables/\{tableName\}/rows
-|GET | Show Row |N/A | restconf/config/network-topology:network-topology/topology/ovsdb:1/\{nodeId\}/tables/\{tableName\}/rows/\{rowUuid\}
-|GET | Show All Row |N/A | restconf/config/network-topology:network-topology/topology/ovsdb:1/\{nodeId\}/tables/\{tableName\}/rows
-|PUT | Update Row |Row Data | restconf/config/network-topology:network-topology/topology/ovsdb:1/\{nodeId\}/tables/\{tableName\}/rows/\{rowUuid\}
-|DELETE | Delete Row |N/A | restconf/config/network-topology:network-topology/topology/ovsdb:1/\{nodeId\}/tables/\{tableName\}/rows/\{rowUuid\}
-|=======
=== Overview
The Open vSwitch database (OVSDB) Plugin component for OpenDaylight implements
the OVSDB https://tools.ietf.org/html/rfc7047[RFC 7047] management protocol
-that allows the southbound configuration of switches that support OVSDB.The
+that allows the southbound configuration of switches that support OVSDB. The
component comprises a library and a plugin. The OVSDB protocol
-uses JSON-RPC calls to manipulate a physical or virtual switch that supports OVSDB.
+uses JSON-RPC calls to manipulate a physical or virtual switch that supports OVSDB.
Many vendors support OVSDB on various hardware platforms.
The OpenDaylight controller uses the library project to interact with an OVS
instance.
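
The JSON-RPC messages the library exchanges with a switch follow RFC 7047. The sketch below shows only the wire format of two common requests; a real client would write them over a TCP or Unix socket (the conventional OVSDB port is 6640), which is omitted here.

```python
import itertools
import json

# Every RFC 7047 request carries a unique "id" the switch echoes in its reply.
_ids = itertools.count()

def rpc(method, params):
    # JSON-RPC 1.0 request envelope as used by the OVSDB protocol.
    return json.dumps({"method": method, "params": params, "id": next(_ids)})

# Ask the switch which databases it serves (typically ["Open_vSwitch"]).
list_dbs = rpc("list_dbs", [])

# Read every row of the Bridge table via a "transact" with a select operation.
select_bridges = rpc("transact", [
    "Open_vSwitch",
    {"op": "select", "table": "Bridge", "where": []},
])

print(list_dbs)
print(select_bridges)
```

The empty `where` clause selects all rows; conditions such as `["name", "==", "br0"]` narrow the result.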
=== OVSDB Openstack Architecture
The OpenStack integration architecture uses the following technologies: +
-* https://tools.ietf.org/html/rfc7047[RFC 7047] and http://datatracker.ietf.org/doc/rfc7047/[The Open vSwitch Database Management Protocol]
-* https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.3.1.pdf[OpenFlow v1.3]
+* https://tools.ietf.org/html/rfc7047[RFC 7047] - The Open vSwitch Database Management Protocol
+* http://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.3.4.pdf[OpenFlow v1.3]
* https://wiki.openstack.org/wiki/Neutron/ML2[OpenStack Neutron ML2 Plugin]
image:openstack_integration.png[Openstack Integration]
+++ /dev/null
-== OVSDB Northbound Installation Guide
-TBD
-
-=== Overview
-TBD
-
-=== Pre Requisites for Installing OVSDB Northbound
-TBD
-
-=== Preparing for Installation
-TBD
-
-=== Installing OVSDB Northbound
-TBD
-
-=== Verifying your Installation
-TBD
-
-==== Troubleshooting
-TBD
-
-=== Post Installation Configuration
-TBD
-
-=== Upgrading From a Previous Release
-TBD
-
-=== Uninstalling OVSDB Northbound
-TBD
logs relating to odl-ovsdb-openstack.
==== Troubleshooting
-There are no easy way to troubleshoot an installation of odl-ovsdb-openstack. Perhaps a combination of
-log:display | grep -i ovsdb in karaf, Open vSwitch commands (ovs-vsdctl) and openstack logs will be useful but will not
+There is no easy way to troubleshoot an installation of odl-ovsdb-openstack. Perhaps a combination of
+log:display | grep -i ovsdb in karaf, Open vSwitch commands (ovs-vsctl) and openstack logs will be useful but will not
explain everything.
=== Upgrading From a Previous Release
-== Control And Provisioning of Wireless Access Points (CAPWAP) Protocol Plugin
+== CAPWAP User Guide
+This document describes how to use the Control And Provisioning of Wireless
+Access Points (CAPWAP) feature in OpenDaylight. This document contains
+configuration, administration, and management sections for the feature.
=== Overview
-CAPWAP plugin project fills the gap Opendaylight Controller has w.r.t.
-managing CAPWAP compliant wireless termination point (WTP) network devices
-present in Enterprises networks. Intelligent applications (e.g. Radio planning, etc)
-can be developed by taping into the operational states, made available via
-REST APIs, of WTP network devices.
+The CAPWAP feature fills the gap the OpenDaylight controller has with respect to
+managing CAPWAP compliant wireless termination point (WTP) network devices present
+in enterprise networks. Intelligent applications (e.g. centralized firmware
+management, radio planning) can be developed by tapping into the
+WTP network devices' operational states via REST APIs.
=== CAPWAP Architecture
-TBD: Architecture Diagram
+The CAPWAP feature is implemented as an MD-SAL based provider module, which
+helps discover WTP devices and update their states in the MD-SAL operational datastore.
=== Scope of CAPWAP Project
-In Lithium release, CAPWAP project aims to only detect the WTPs and store their
-attributes in operation data store, which will be accessible via REST and JAVA APIs.
+In the Lithium release, the CAPWAP project aims only to detect WTPs and store their
+basic attributes in the operational data store, which is accessible via REST
+and Java APIs.
+
+=== Installing CAPWAP
+To install CAPWAP, download OpenDaylight and use the Karaf console to install
+the following feature:
+
+odl-capwap-ac-rest
=== Configuring CAPWAP
-TBD: Describe how to configure CAPWAP AC (Multicast groups to listen
-for WTP Discovery messages, AC Name, AC Descriptor, etc).
+As of Lithium, there are no configuration requirements.
=== Administering or Managing CAPWAP
-TBD: Describe steps for viewing CAPWAP topology and operational data.
+After installing the odl-capwap-ac-rest feature from the Karaf console, users
+can administer and manage CAPWAP from the APIDOCS explorer.
+
+Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html, sign in, and expand
+the capwap-impl panel. From there, users can execute various API calls.
=== Tutorials
-==== CAPWAP Plugin Bringup Steps
+==== Viewing Discovered WTPs
===== Overview
This tutorial can be used as a walk through to understand the steps for
-starting the CAPWAP feature, doing initial configuration for AC,
-detecting CAPWAP WTPs, accessing the operation states of WTPs.
+starting the CAPWAP feature, detecting CAPWAP WTPs, and accessing the
+operational states of WTPs.
===== Prerequisites
-It is assumed that user has access to a hardware/software based CAPWAP compliant WTP.
-
-===== Target Environment
-TBD: Topology Diagram
+It is assumed that the user has access to at least one hardware- or software-based
+CAPWAP compliant WTP. These devices should be configured with the OpenDaylight
+controller IP address as the CAPWAP Access Controller (AC) address. It is also assumed
+that the WTPs and the OpenDaylight controller share the same Ethernet broadcast domain.
===== Instructions
-TBD: Describe Steps
-
+. Run the OpenDaylight distribution and install odl-capwap-ac-rest from the Karaf console.
+. Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html
+. Expand capwap-impl
+. Click /operational/capwap-impl:capwap-ac-root/
+. Click "Try it out"
+. The above step should display the list of WTPs discovered using the ODL CAPWAP feature.
This feature can be enabled in the Karaf console of the OpenDaylight Karaf distribution by issuing the following command:
- feature:install odl-lacp-plugin
+ feature:install odl-lacp-ui
[NOTE]
====
Ports that are associated with an aggregator will have the tag +<lacp-agg-ref>+ updated with valid aggregator information.
=== Tutorials
-To be updated later
-
-==== <Tutorial Name>
-To be updated later
-
-===== Overview
-An overview of the use case.
-
-===== Prerequisites
-Legacy (non-openflow) switches should be enabled with LACP mode and configured with a long timeout.
-
-===== Target Environment
-To be updated later
-
-===== Instructions
-To be updated
+The below tutorial demonstrates LACP LAG creation for a sample mininet topology.
+
+==== Sample LACP Topology creation on Mininet
+ sudo mn --controller=remote,ip=<Controller IP> --topo=linear,1 --switch ovsk,protocols=OpenFlow13
+
+The above command will create a virtual network consisting of a switch and a host. The switch will be connected to the controller.
+
+Once the topology is discovered, verify the presence of a flow entry with "dl_type" set to "0x8809" to handle LACP packets using the below ovs-ofctl command:
+
+ ovs-ofctl -O OpenFlow13 dump-flows s1
+ OFPST_FLOW reply (OF1.3) (xid=0x2):
+ cookie=0x300000000000001e, duration=60.067s, table=0, n_packets=0, n_bytes=0, priority=5,dl_dst=01:80:c2:00:00:02,dl_type=0x8809 actions=CONTROLLER:65535
+
+Configure an additional link between the switch (s1) and the host (h1) using the below command on the Mininet shell to aggregate two links:
+
+ mininet> py net.addLink(s1, net.get('h1'))
+ mininet> py s1.attach('s1-eth2')
+
+The LACP module will listen for LACP control packets generated from a legacy (non-OpenFlow) switch. In our example, the host (h1) acts as the LACP packet generator.
+In order to generate the LACP control packets, a bond interface has to be created on the host (h1) with the mode set to LACP and a long timeout. To configure the bond interface, create a new file bonding.conf under the /etc/modprobe.d/ directory and insert the below lines in this new file:
+
+ alias bond0 bonding
+ options bonding mode=4
+
+Here mode=4 refers to LACP, and the timeout defaults to long.
+
+Enable the bond interface and associate both physical interfaces h1-eth0 and h1-eth1 as members of the bond interface on the host (h1) using the below commands on the Mininet shell:
+
+ mininet> py net.get('h1').cmd('modprobe bonding')
+ mininet> py net.get('h1').cmd('ip link add bond0 type bond')
+ mininet> py net.get('h1').cmd('ip link set bond0 address <bond-mac-address>')
+ mininet> py net.get('h1').cmd('ip link set h1-eth0 down')
+ mininet> py net.get('h1').cmd('ip link set h1-eth0 master bond0')
+ mininet> py net.get('h1').cmd('ip link set h1-eth1 down')
+ mininet> py net.get('h1').cmd('ip link set h1-eth1 master bond0')
+ mininet> py net.get('h1').cmd('ip link set bond0 up')
+
+Once the bond0 interface is up, the host (h1) will send LACP packets to the switch (s1). The LACP Module will then create a LAG through exchange of LACP packets between the host (h1) and switch (s1). To view the bond interface output on the host (h1) side:
+
+ mininet> py net.get('h1').cmd('cat /proc/net/bonding/bond0')
+ Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
+ Bonding Mode: IEEE 802.3ad Dynamic link aggregation
+ Transmit Hash Policy: layer2 (0)
+ MII Status: up
+ MII Polling Interval (ms): 100
+ Up Delay (ms): 0
+ Down Delay (ms): 0
+ 802.3ad info
+ LACP rate: slow
+ Min links: 0
+ Aggregator selection policy (ad_select): stable
+ Active Aggregator Info:
+ Aggregator ID: 1
+ Number of ports: 2
+ Actor Key: 33
+ Partner Key: 27
+ Partner Mac Address: 00:00:00:00:01:01
+
+ Slave Interface: h1-eth0
+ MII Status: up
+ Speed: 10000 Mbps
+ Duplex: full
+ Link Failure Count: 0
+ Permanent HW addr: 00:00:00:00:00:11
+ Aggregator ID: 1
+ Slave queue ID: 0
+
+ Slave Interface: h1-eth1
+ MII Status: up
+ Speed: 10000 Mbps
+ Duplex: full
+ Link Failure Count: 0
+ Permanent HW addr: 00:00:00:00:00:12
+ Aggregator ID: 1
+ Slave queue ID: 0
+
+A corresponding group table entry would be created on the OpenFlow switch (s1) with "type" set to "select" to perform the LAG functionality. To view the group entries:
+
+ mininet> ovs-ofctl -O OpenFlow13 dump-groups s1
+ OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
+ group_id=60169,type=select,bucket=weight:0,actions=output:1,output:2
+
+To apply the LAG functionality on the switches, the flows should be configured with action set to GroupId instead of output port. A sample add-flow configuration with output action set to GroupId:
+
+ sudo ovs-ofctl -O OpenFlow13 add-flow s1 dl_type=0x0806,dl_src=SRC_MAC,dl_dst=DST_MAC,actions=group:60169
+++ /dev/null
-== OVSDB Northbound User Guide
-TBD
-
-=== Overview
-TBD
-
-=== OVSDB Northbound Architecture
-TBD
-
-=== Configuring OVSDB Northbound
-TBD
-
-=== Administering or Managing OVSDB Northbound
-TBD
-
-=== Tutorials
-TBD
-
-==== How to use OVSDN Northbound
-TBD
-
-===== Overview
-TBD
-
-===== Prerequisites
-TBD
-
-===== Target Environment
-TBD
-
-===== Instructions
-TBD
\ No newline at end of file
-include::ovsdb-northbound-user.adoc[]
-
include::ovsdb-openstack-user.adoc[]
\ No newline at end of file