4360 <http://tools.ietf.org/html/rfc4360>`__.
All the concepts are described in a single YANG model:
-`bgp-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/yang/bgp-types.yang;hb=refs/heads/stable/beryllium>`__.
+`bgp-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/yang/bgp-types.yang;hb=refs/heads/stable/boron>`__.
Outside the generated classes, there is just one class
-`NextHopUtil <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/java/org/opendaylight/bgp/concepts/NextHopUtil.java;hb=refs/heads/stable/beryllium>`__
+`NextHopUtil <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/java/org/opendaylight/bgp/concepts/NextHopUtil.java;hb=refs/heads/stable/boron>`__
that contains methods for serializing and parsing NextHop.
BGP parser
The *IMPL* module contains the actual parsers and serializers for BGP messages
and
-`Activator <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-impl/src/main/java/org/opendaylight/protocol/bgp/parser/impl/BGPActivator.java;hb=refs/heads/stable/beryllium>`__
+`Activator <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-impl/src/main/java/org/opendaylight/protocol/bgp/parser/impl/BGPActivator.java;hb=refs/heads/stable/boron>`__
class
The *SPI* module contains helper classes needed for registering parsers into
The configuration of bgp-parser-spi specifies one implementation of an
*Extension provider* that will take care of registering the mentioned
parser extensions:
-`SimpleBGPExtensionProviderContext <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi/pojo/SimpleBGPExtensionProviderContext.java;hb=refs/heads/stable/beryllium>`__.
+`SimpleBGPExtensionProviderContext <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi/pojo/SimpleBGPExtensionProviderContext.java;hb=refs/heads/stable/boron>`__.
All registries are implemented in the package
-`bgp-parser-spi <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi;hb=refs/heads/stable/beryllium>`__.
+`bgp-parser-spi <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi;hb=refs/heads/stable/boron>`__.
Serializing
^^^^^^^^^^^
specific AFI/SAFI table is set to *true*. Without graceful restart, the
messages are generated by OpenDaylight itself and sent after the second
keepalive for each AFI/SAFI. This is done in
-`BGPSynchronization <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPSynchronization.java;hb=refs/heads/stable/beryllium>`__.
+`BGPSynchronization <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPSynchronization.java;hb=refs/heads/stable/boron>`__.
**Peers**
-`BGPPeer <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPPeer.java;hb=refs/heads/stable/beryllium>`__
+`BGPPeer <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPPeer.java;hb=refs/heads/stable/boron>`__
has various meanings. If you configure a BGP listener, *BGPPeer*
represents the BGP listener itself. If you are configuring a BGP
speaker, you need to provide a list of peers that are allowed to connect
to this speaker; an unknown peer is, in this case, a peer that may be
refused. *BGPPeer* then represents a peer that is expected to connect to
your speaker. *BGPPeer* is stored in
-`BGPPeerRegistry <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/StrictBGPPeerRegistry.java;hb=refs/heads/stable/beryllium>`__.
+`BGPPeerRegistry <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/StrictBGPPeerRegistry.java;hb=refs/heads/stable/boron>`__.
This registry controls the number of sessions. Our strict implementation
limits sessions to one per peer.
-`ApplicationPeer <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/ApplicationPeer.java;hb=refs/heads/stable/beryllium>`__
+`ApplicationPeer <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/ApplicationPeer.java;hb=refs/heads/stable/boron>`__
is a special case of a peer that has its own RIB. This RIB is populated
from RESTCONF. The RIB is synchronized with the default BGP RIB.
Incoming routes to the default RIB are treated in the same way as if
they came from
*Ipv4Routes*, *Ipv6Routes*, *LinkstateRoutes* and *FlowspecRoutes*.
Each route type needs to provide a
-`RIBSupport.java <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-spi/src/main/java/org/opendaylight/protocol/bgp/rib/spi/RIBSupport.java;hb=refs/heads/stable/beryllium>`__
+`RIBSupport.java <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-spi/src/main/java/org/opendaylight/protocol/bgp/rib/spi/RIBSupport.java;hb=refs/heads/stable/boron>`__
implementation. *RIBSupport* tells the RIB how to translate
binding-aware data (a BGP Update message) into the binding-independent
(datastore) format.
RIB
-**`AdjRibInWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibInWriter.java;hb=refs/heads/stable/beryllium>`__**
+**`AdjRibInWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibInWriter.java;hb=refs/heads/stable/boron>`__**
- represents the first step in putting data into the datastore. This
writer is notified whenever a peer receives an Update message. The
message is transformed into the binding-independent format and pushed
into the datastore under *adj-rib-in*. This RIB is associated with a
peer.
-**`EffectiveRibInWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/EffectiveRibInWriter.java;hb=refs/heads/stable/beryllium>`__**
+**`EffectiveRibInWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/EffectiveRibInWriter.java;hb=refs/heads/stable/boron>`__**
- this writer is notified whenever *adj-rib-in* is updated. It applies
all configured import policies to the routes and stores them in
*effective-rib-in*. This RIB is also associated with a peer.
-**`LocRibWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/LocRibWriter.java;hb=refs/heads/stable/beryllium>`__**
+**`LocRibWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/LocRibWriter.java;hb=refs/heads/stable/boron>`__**
- this writer is notified whenever **any** *effective-rib-in* is updated
(in any peer). It performs best path selection and filtering, and stores
the routes in *loc-rib*. It also determines which routes need to be
advertised and fills in *adj-rib-out*, which is per peer as well.
-**`AdjRibOutListener <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibOutListener.java;h=a14fd54a29ea613b381a36248f67491d968963b8;hb=refs/heads/stable/beryllium>`__**
+**`AdjRibOutListener <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibOutListener.java;h=a14fd54a29ea613b381a36248f67491d968963b8;hb=refs/heads/stable/boron>`__**
- listens for changes in *adj-rib-out*, transforms the routes into BGP
Update messages and sends them to its associated peer.
--------
This module contains only one YANG model
-`bgp-inet.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/inet/src/main/yang/bgp-inet.yang;hb=refs/heads/stable/beryllium>`__
+`bgp-inet.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/inet/src/main/yang/bgp-inet.yang;hb=refs/heads/stable/boron>`__
that summarizes the IPv4 and IPv6 extensions to RIB routes and BGP
messages.
for the IPv6 AFI. The RFC defines an extension to BGP in the form of a
new subsequent address family, NLRI and extended communities. All of
those are defined in the
-`bgp-flowspec.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/flowspec/src/main/yang/bgp-flowspec.yang;hb=refs/heads/stable/beryllium>`__
+`bgp-flowspec.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/flowspec/src/main/yang/bgp-flowspec.yang;hb=refs/heads/stable/boron>`__
model. In addition to the generated sources, the module contains parsers
for the newly defined elements and RIBSupport for flowspec-routes. The
route key of flowspec routes is a string representing a human-readable
flowspec
version 04. The draft defines an extension to BGP in the form of a new
address family, subsequent address family, NLRI and path attribute. All
of those are defined in the
-`bgp-linkstate.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/yang/bgp-linkstate.yang;hb=refs/heads/stable/beryllium>`__
+`bgp-linkstate.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/yang/bgp-linkstate.yang;hb=refs/heads/stable/boron>`__
model. In addition to the generated sources, the module contains
-`LinkstateAttributeParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/attribute/LinkstateAttributeParser.java;hb=refs/heads/stable/beryllium>`__,
-`LinkstateNlriParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/nlri/LinkstateNlriParser.java;hb=refs/heads/stable/beryllium>`__,
+`LinkstateAttributeParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/attribute/LinkstateAttributeParser.java;hb=refs/heads/stable/boron>`__,
+`LinkstateNlriParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/nlri/LinkstateNlriParser.java;hb=refs/heads/stable/boron>`__,
activators for both the parser and the RIB, and a RIBSupport handler for
the linkstate address family. As each route needs a key, in the case of
linkstate the route key is defined as a binary string containing all
the NLRI. The AFI indicates, as usual, the address family of the
associated route. The fact that the NLRI contains a label is indicated
by using SAFI value 4. All of those are defined in
-`bgp-labeled-unicast.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob_plain;f=bgp/labeled-unicast/src/main/yang/bgp-labeled-unicast.yang;hb=refs/heads/stable/beryllium>`__
+`bgp-labeled-unicast.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob_plain;f=bgp/labeled-unicast/src/main/yang/bgp-labeled-unicast.yang;hb=refs/heads/stable/boron>`__
model. In addition to the generated sources, the module contains a new
NLRI codec and RIBSupport. The route key is defined as a binary string
in which the whole NLRI is encoded.
::
- cp -n ~/.m2/settings.xml{,.orig} ; \wget -q -O - https://raw.githubusercontent.com/opendaylight/odlparent/stable/lithium/settings.xml > ~/.m2/settings.xml
+ cp -n ~/.m2/settings.xml{,.orig} ; \wget -q -O - https://raw.githubusercontent.com/opendaylight/odlparent/stable/boron/settings.xml > ~/.m2/settings.xml
.. note::
Flow mod driver API
~~~~~~~~~~~~~~~~~~~
-The Beryllium release includes a flow mod driver for the HP 3800. This
+This release includes a flow mod driver for the HP 3800. This
driver adjusts the flows and pushes them to the device. This API takes
the flow to be adjusted as input and displays the adjusted flow as
output in the REST output container. Here is the REST API to adjust and
DLUX modules are the individual features such as nodes and topology.
Each module has a defined structure and you can find all existing
modules at
-https://github.com/opendaylight/dlux/tree/stable/lithium/modules.
+https://github.com/opendaylight/dlux/tree/stable/boron/modules.
Module Structure
~~~~~~~~~~~~~~~~
1. Create a Maven project to place the blueprint configuration. For
   reference, take a look at the topology bundle, present at
- https://github.com/opendaylight/dlux/tree/stable/lithium/bundles/topology.
+ https://github.com/opendaylight/dlux/tree/stable/boron/bundles/topology.
All the existing DLUX modules' configurations are available under the
bundles directory of the DLUX code.
2. In pom.xml, you have to add a Maven plugin to unpack your module code
   under generated-resources of this project. For reference, you can
   check the pom.xml of dlux/bundles/topology at
- https://github.com/opendaylight/dlux/tree/stable/lithium/bundles/topology.
+ https://github.com/opendaylight/dlux/tree/stable/boron/bundles/topology.
Your bundle will eventually get deployed in Karaf as a feature, so your
bundle should contain all your module code. If you want to combine the
module and bundle projects, that should not be an issue either.
--- /dev/null
+Fabric As A Service
+===================
+
+FaaS (Fabric As A Service) has two layers of APIs. The top level API is
+described in the user guide. This document focuses on the Fabric level
+API and describes each API's semantics and example implementation. The
+second layer defines an abstraction layer called the *Fabric* API. The
+idea is to abstract the network into a topology formed by a collection
+of fabric objects rather than a variety of physical devices. Each
+fabric object provides a collection of unified services. The top level
+API enables application developers or users to write applications that
+map a high level model such as GBP or Intent into a logical network
+model, while the lower level gives the application more control at the
+individual fabric object level. More importantly, the Fabric API is an
+SPI (Service Provider Interface): a fabric provider or vendor can
+implement it based on its own fabric technology such as TRILL or SPB.
+
+For how to use the first level API, please refer to the user guide for
+more details.
+
+FaaS Architecture
+-----------------
+
+The FaaS architecture has three layers: on the top is the FaaS
+application layer, in the middle is the Fabric Manager, and at the
+bottom are different types of fabric objects. From the bottom up, they
+are:
+
+Fabric and its controller (Fabric Controller)
+    The Fabric object provides an abstraction of a homogeneous network
+    or a portion of the network, and also has a built-in fabric
+    controller which provides the management plane and control plane
+    for the fabric. The fabric controller implements the services
+    required by the Fabric Service API, and monitors and controls the
+    fabric operation.
+
+Fabric Manager
+    Fabric Manager manages all the fabric objects. It also acts as a
+    unified fabric controller, providing inter-fabric control and
+    configuration. In addition, Fabric Manager is the FaaS API service
+    via which the FaaS user level logical network API (the top level
+    API mentioned previously) is exposed and implemented.
+
+FaaS renderer for GBP (Group Based Policy)
+    The FaaS renderer for GBP is an application of FaaS that provides
+    the rendering service between the GBP model and the logical network
+    model provided by Fabric Manager.
+
+Fabric APIs and Interfaces
+--------------------------
+
+FaaS APIs fall into four groups, as defined below:
+
+Fabric Provisioning API
+    This set of APIs is used to create and remove fabric abstractions;
+    in other words, these APIs provision the underlay networks and
+    prepare for creating the overlay network (the logical network) on
+    top of them.
+
+Fabric Service API
+    This set of APIs is used to create logical networks over the
+    fabrics.
+
+EndPoint API
+    The EndPoint API is used to bind an endpoint to a physical port,
+    which is the location where the attachment of the endpoint happens
+    or will happen.
+
+OAM API
+    These APIs are for Operations, Administration and Maintenance
+    purposes. In the current release, the OAM API is not implemented
+    yet.
+
+Fabric Provisioning API
+~~~~~~~~~~~~~~~~~~~~~~~
+
+- `http://${ipaddress}:8181/restconf/operations/fabric:compose-fabric <http://${ipaddress}:8181/restconf/operations/fabric:compose-fabric>`__
+
+- `http://${ipaddress}:8181/restconf/operations/fabric:decompose-fabric <http://${ipaddress}:8181/restconf/operations/fabric:decompose-fabric>`__
+
+- `http://${ipaddress}:8181/restconf/operations/fabric:get-all-fabrics <http://${ipaddress}:8181/restconf/operations/fabric:get-all-fabrics>`__
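The provisioning RPCs above are invoked over RESTCONF. As a minimal sketch, the following prepares and (commented out) posts a ``compose-fabric`` request; the payload field names (``name``, ``type``) and the ``admin:admin`` credentials are illustrative assumptions, not taken from the fabric YANG model, so check the model before use.

```shell
# Hypothetical compose-fabric payload -- the field names below are
# illustrative assumptions; consult the fabric YANG model for the
# authoritative input structure.
cat > compose-fabric.json <<'EOF'
{
  "input": {
    "name": "demo-fabric",
    "type": "VXLAN"
  }
}
EOF

# Replace ${ipaddress} with the controller address before running:
# curl -u admin:admin -H "Content-Type: application/json" \
#   -d @compose-fabric.json \
#   "http://${ipaddress}:8181/restconf/operations/fabric:compose-fabric"

# Show the payload that would be sent:
cat compose-fabric.json
```

``decompose-fabric`` and ``get-all-fabrics`` follow the same RPC invocation pattern, differing only in the payload.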
+
+Fabric Service API
+~~~~~~~~~~~~~~~~~~
+
+- RESTCONF operations for creating logical ports, switches, routers,
+  routing entries and links. Both switches and routers have ports, and
+  links connect ports. These five logical elements are the basic
+  building blocks of a logical network.
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-switch <http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-switch>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-switch <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-switch>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-router <http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-router>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-router <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logical-router>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:add-static-route <http://${ipaddress}:8181/restconf/operations/fabric-service:add-static-route>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-logic-port <http://${ipaddress}:8181/restconf/operations/fabric-service:create-logic-port>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logic-port <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-logic-port>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:create-gateway <http://${ipaddress}:8181/restconf/operations/fabric-service:create-gateway>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:rm-gateway <http://${ipaddress}:8181/restconf/operations/fabric-service:rm-gateway>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-fabric <http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-fabric>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-device <http://${ipaddress}:8181/restconf/operations/fabric-service:port-binding-logical-to-device>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:add-port-function <http://${ipaddress}:8181/restconf/operations/fabric-service:add-port-function>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:add-acl <http://${ipaddress}:8181/restconf/operations/fabric-service:add-acl>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/fabric-service:del-acl <http://${ipaddress}:8181/restconf/operations/fabric-service:del-acl>`__
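All of the service RPCs above share one invocation pattern. As a hedged sketch, here is a hypothetical ``create-logical-switch`` request; the field names (``fabric-id``, ``name``, ``vni``) are assumptions for illustration only and should be verified against the fabric-service YANG model.

```shell
# Hypothetical create-logical-switch payload -- field names are
# illustrative assumptions, not taken from the fabric-service model.
cat > create-logical-switch.json <<'EOF'
{
  "input": {
    "fabric-id": "fabric:1",
    "name": "ls-demo",
    "vni": 100
  }
}
EOF

# Replace ${ipaddress} with the controller address before running:
# curl -u admin:admin -H "Content-Type: application/json" \
#   -d @create-logical-switch.json \
#   "http://${ipaddress}:8181/restconf/operations/fabric-service:create-logical-switch"

# Show the payload that would be sent:
cat create-logical-switch.json
```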
+
+EndPoint API
+~~~~~~~~~~~~
+
+The following APIs are used to bind physical ports to the logical
+ports on the logical switches:
+
+- `http://${ipaddress}:8181/restconf/operations/fabric-endpoint:register-endpoint <http://${ipaddress}:8181/restconf/operations/fabric-endpoint:register-endpoint>`__
+
+- `http://${ipaddress}:8181/restconf/operations/fabric-endpoint:unregister-endpoint <http://${ipaddress}:8181/restconf/operations/fabric-endpoint:unregister-endpoint>`__
+
+- `http://${ipaddress}:8181/restconf/operations/fabric-endpoint:locate-endpoint <http://${ipaddress}:8181/restconf/operations/fabric-endpoint:locate-endpoint>`__
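The binding described above can be sketched with a hypothetical ``register-endpoint`` request. The payload fields (``endpoint-uuid``, ``logical-location`` and its children) are assumptions made for illustration; the real structure is defined by the fabric-endpoint YANG model.

```shell
# Hypothetical register-endpoint payload binding an endpoint to a
# logical port -- field names are illustrative assumptions only.
cat > register-endpoint.json <<'EOF'
{
  "input": {
    "fabric-id": "fabric:1",
    "endpoint-uuid": "75a4451e-eed0-4645-9194-64454bda2902",
    "logical-location": {
      "node-id": "ls-demo",
      "tp-id": "ls-demo-port-1"
    }
  }
}
EOF

# Replace ${ipaddress} with the controller address before running:
# curl -u admin:admin -H "Content-Type: application/json" \
#   -d @register-endpoint.json \
#   "http://${ipaddress}:8181/restconf/operations/fabric-endpoint:register-endpoint"

# Show the payload that would be sent:
cat register-endpoint.json
```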
+
+Other APIs
+~~~~~~~~~~
+
+- `http://${ipaddress}:8181/restconf/operations/fabric-resource:create-fabric-port <http://${ipaddress}:8181/restconf/operations/fabric-resource:create-fabric-port>`__
+
+API Reference Documentation
+---------------------------
+
+Go to
+`http://${ipaddress}:8181/restconf/apidoc/index.html <http://${ipaddress}:8181/restconf/apidoc/index.html>`__
+and expand the *FaaS* related panels for more APIs.
+
controller
didm-developer-guide
dlux
+ fabric-as-a-service
infrautils-developer-guide
iotdm-developer-guide
l2switch-developer-guide
project. It can be found on the GitHub mirror of OpenDaylight’s
repositories:
-- https://github.com/opendaylight/coretutorials/tree/stable/lithium/ncmount
+- https://github.com/opendaylight/coretutorials/tree/stable/boron/ncmount
or checked out from the official OpenDaylight repository:
be found in the
``onDataChanged(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject>
change)`` callback of `NcmountProvider
-class <https://github.com/opendaylight/coretutorials/blob/stable/lithium/ncmount/impl/src/main/java/ncmount/impl/NcmountProvider.java>`__.
+class <https://github.com/opendaylight/coretutorials/blob/stable/boron/ncmount/impl/src/main/java/ncmount/impl/NcmountProvider.java>`__.
Reading data from the device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
<version>${NETIDE_VERSION}</version>
</dependency>
-The current stable version for NetIDE is ``0.1.0-Beryllium``.
+The current stable version for NetIDE is ``0.2.0-Boron``.
Protocol specification
~~~~~~~~~~~~~~~~~~~~~~
- odl-nic-console: Provides a karaf CLI extension for intent CRUD
operations and mapping service operations.
-- odl-nic-renderer-of - Generic Openflow Renderer.
+- odl-nic-renderer-of - Generic OpenFlow Renderer.
- odl-nic-renderer-vtn - a feature that transforms an intent to a
network modification using the VTN project
MPLS VPN Service Diagram
where PE (Provider Edge) and P (Provider) switches are managed by
-Opendaylight. In NIC’s terminology the endpoints are the PE switches.
+OpenDaylight. In NIC’s terminology the endpoints are the PE switches.
There could be many P switches between the PEs.
In order for NIC to recognize endpoints as MPLS endpoints, the user is
3. Switch-Port: Ingress (or Egress) for source (or Destination) endpoint
of the source (or Destination) PE
-An intent:add between two MPLS endpoints renders Openflow rules for: 1.
+An intent:add between two MPLS endpoints renders OpenFlow rules for: 1.
push/pop labels to the MPLS endpoint nodes after an IPv4 Prefix match.
2. forward to port rule after MPLS label match to all the switches that
form the shortest path between the endpoints (calculated using Dijkstra
intent:add -f uva -t eur -a ALLOW
intent:add -f eur -t uva -a ALLOW
-5. Verify by running ovs command on mininet if the flows were pushed
+5. Verify by running ovs-ofctl command on mininet if the flows were pushed
correctly to the nodes that form the shortest path.
Example:
- ovsdb.library-karaf — the OVSDB library reference implementation
-- ovsdb.openstack.net-virt-sfc-karaf — openflow service function
+- ovsdb.openstack.net-virt-sfc-karaf — OpenFlow service function
chaining
- ovsdb.hwvtepsouthbound-karaf — the hw\_vtep schema southbound plugin
- Maven 3+
-Building a Karaf feature and deploying it in an Opendaylight Karaf distribution
+Building a Karaf feature and deploying it in an OpenDaylight Karaf distribution
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. From the root ovsdb/ directory, run **mvn clean install**.
bin/karaf
-1. Once karaf has started, and you see the Opendaylight ascii art in the
+1. Once Karaf has started and you see the OpenDaylight ASCII art in the
console, the last step is to start the OVSDB plugin framework with
the following command in the karaf console:
Each of the SouthBound Plugins serves a different purpose, with some
overlapping. For example, the OpenFlow plugin might serve the Data-Plane
needs of an OVS element, while the OVSDB plugin can serve the management
-plane needs of the same OVS element. As the Openflow Plugin talks
+plane needs of the same OVS element. As the OpenFlow Plugin talks
OpenFlow protocol with the OVS element, the OVSDB plugin will use the
OVSDB schema over JSON-RPC transport.
~~~~~~~~~~~~~~~~~~
| One of the primary services that most southbound plugins provide in
- Opendaylight a Connection Service. The service provides protocol
+  OpenDaylight is a Connection Service. The service provides protocol
specific connectivity to network elements, and supports the
connectivity management services as specified by the OpenDaylight
Connection Manager. The connectivity services include:
Using the port security groups of Neutron, one can add rules that
restrict the network access of the tenants. The OVSDB Neutron
integration checks the configured port security rules, and applies them by
-means of openflow rules.
+means of OpenFlow rules.
Through the ML2 interface, Neutron security rules are available in the
port object, following this scope: Neutron Port → Security Group →
The OVSDB Southbound MD-SAL operates using a YANG model which is based
on the abstract topology node model found in the `network topology
-model <https://github.com/opendaylight/yangtools/blob/stable/lithium/model/ietf/ietf-topology/src/main/yang/network-topology%402013-10-21.yang>`__.
+model <https://github.com/opendaylight/yangtools/blob/stable/boron/model/ietf/ietf-topology/src/main/yang/network-topology%402013-10-21.yang>`__.
The augmentations for the OVSDB Southbound MD-SAL are defined in the
-`ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/lithium/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
+`ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/boron/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
file.
There are three augmentations:
(defaults to )
opendaylight-user@root> wrapper:install
- Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-wrapper
- Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-service
- Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/etc/karaf-wrapper.conf
- Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/libwrapper.so
- Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/karaf-wrapper.jar
- Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/karaf-wrapper-main.jar
+ Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/bin/karaf-wrapper
+ Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/bin/karaf-service
+ Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/etc/karaf-wrapper.conf
+ Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/lib/libwrapper.so
+ Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/lib/karaf-wrapper.jar
+ Creating file: /home/user/odl/distribution-karaf-0.5.0-Boron/lib/karaf-wrapper-main.jar
Setup complete. You may wish to tweak the JVM properties in the wrapper configuration file:
- /home/user/odl/distribution-karaf-0.3.0-Lithium/etc/karaf-wrapper.conf
+ /home/user/odl/distribution-karaf-0.5.0-Boron/etc/karaf-wrapper.conf
before installing and starting the service.
Ubuntu/Debian Linux system detected:
To install the service:
- $ ln -s /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-service /etc/init.d/
+ $ ln -s /home/user/odl/distribution-karaf-0.5.0-Boron/bin/karaf-service /etc/init.d/
To start the service when the machine is rebooted:
$ update-rc.d karaf-service defaults
The configuration of PCEP parsers specifies one implementation of an
*Extension provider* that will take care of registering the mentioned
parser extensions:
-`SimplePCEPExtensionProviderContext <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo/SimplePCEPExtensionProviderContext.java;hb=refs/for/stable/beryllium>`__.
+`SimplePCEPExtensionProviderContext <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo/SimplePCEPExtensionProviderContext.java;hb=refs/for/stable/boron>`__.
All registries are implemented in the package
-`pcep-spi <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo;hb=refs/for/stable/beryllium>`__.
+`pcep-spi <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo;hb=refs/for/stable/boron>`__.
Parsing
^^^^^^^
The stateful draft declared new elements as well as additional fields or
TLVs (type, length, value) on known objects. All new elements are
defined in YANG models that contain augmentations to elements defined in
-`pcep-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/beryllium>`__.
+`pcep-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/boron>`__.
In the case of extending known elements, the *Parser* class merely
extends the base class and overrides necessary methods as shown in
following diagram:
| The yang models of subobject, SR-PCE-CAPABILITY TLV and appropriate
augmentations are defined in
- `odl-pcep-segment-routing.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/yang/odl-pcep-segment-routing.yang;hb=refs/for/stable/beryllium>`__.
+ `odl-pcep-segment-routing.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/yang/odl-pcep-segment-routing.yang;hb=refs/for/stable/boron>`__.
| The pcep-segment-routing module includes parsers/serializers for new
subobject
- (`SrEroSubobjectParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrEroSubobjectParser.java;hb=refs/for/stable/beryllium>`__)
+ (`SrEroSubobjectParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrEroSubobjectParser.java;hb=refs/for/stable/boron>`__)
and TLV
- (`SrPceCapabilityTlvParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrPceCapabilityTlvParser.java;hb=refs/for/stable/beryllium>`__).
+ (`SrPceCapabilityTlvParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrPceCapabilityTlvParser.java;hb=refs/for/stable/boron>`__).
The pcep-segment-routing module implements
`draft-ietf-pce-lsp-setup-type-01 <http://tools.ietf.org/html/draft-ietf-pce-lsp-setup-type-01>`__,
For Segment Routing, PST = 1 is defined.
The Path Setup Type TLV is modeled with yang in module
-`pcep-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/beryllium>`__.
+`pcep-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/boron>`__.
A parser/serializer is implemented in
-`PathSetupTypeTlvParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/impl/src/main/java/org/opendaylight/protocol/pcep/impl/tlv/PathSetupTypeTlvParser.java;hb=refs/for/stable/beryllium>`__
+`PathSetupTypeTlvParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/impl/src/main/java/org/opendaylight/protocol/pcep/impl/tlv/PathSetupTypeTlvParser.java;hb=refs/for/stable/boron>`__
and it is overridden in the segment-routing module to provide the additional
PST.
underlay topologies and underlay items from overlay items. The required
information for Link Computation is provided via the Link Computation
model in
-(`topology-link-computation.yang <https://git.opendaylight.org/gerrit/gitweb?p=topoprocessing.git;a=blob;f=topoprocessing-api/src/main/yang/topology-link-computation.yang;hb=refs/heads/stable/beryllium>`__).
+(`topology-link-computation.yang <https://git.opendaylight.org/gerrit/gitweb?p=topoprocessing.git;a=blob;f=topoprocessing-api/src/main/yang/topology-link-computation.yang;hb=refs/heads/stable/boron>`__).
Link Computation Functionality
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The contents of a PUT or POST should be an OpenDaylight Table Type
Pattern. An example of one is provided below. The example can also be
-found at ```parser/sample-TTP-from-tests.ttp`` in the TTP git
+found at `parser/sample-TTP-from-tests.ttp in the TTP git
repository <https://git.opendaylight.org/gerrit/gitweb?p=ttp.git;a=blob;f=parser/sample-TTP-from-tests.ttp;h=45130949b25c6f86b750959d27d04ec2208935fb;hb=HEAD>`__.
**Sample Table Type Pattern (json).**
Troubleshooting
---------------
-See `AsciiDoc Tips
-<https://wiki.opendaylight.org/view/Documentation/Tools/AsciiDoc_Tips>`_
-on the wiki for more information.
+See `AsciiDoc Tips`_ on the wiki for more information.
Common AsciiDoc mistakes
^^^^^^^^^^^^^^^^^^^^^^^^
-See also [[Documentation/Tools/AsciiDoc Tips|AsciiDoc Tips and Tricks]]
+See also `AsciiDoc Tips`_.
* Lists that get formatted incorrectly because of no blank line above
the list.
.. _Documentation Group: https://wiki.opendaylight.org/view/Documentation/
.. _RelEng/Builder: https://wiki.opendaylight.org/view/RelEng/Builder
.. _Pandoc: http://pandoc.org/
+.. _AsciiDoc Tips: https://wiki.opendaylight.org/view/Documentation/Tools/AsciiDoc_Tips
* odlparent_
* yangtools_
-.. _mdsal: https://nexus.opendaylight.org/content/sites/site/org.opendaylight.mdsal/beryllium/apidocs/
-.. _odlparent: https://nexus.opendaylight.org/content/sites/site/org.opendaylight.odlparent/beryllium/apidocs/index.html
-.. _yangtools: https://nexus.opendaylight.org/content/sites/site/org.opendaylight.yangtools/beryllium/apidocs/index.html
+.. _mdsal: https://nexus.opendaylight.org/content/sites/site/org.opendaylight.mdsal/boron/apidocs/
+.. _odlparent: https://nexus.opendaylight.org/content/sites/site/org.opendaylight.odlparent/boron/apidocs/index.html
+.. _yangtools: https://nexus.opendaylight.org/content/sites/site/org.opendaylight.yangtools/boron/apidocs/index.html
below are the ones you’re most likely to use when creating your network
environment.
- As a short example of installing a Karaf feature, OpenDaylight Beryllium
+ As a short example of installing a Karaf feature, OpenDaylight
offers Application Layer Traffic Optimization (ALTO). The Karaf feature to
install ALTO is odl-alto-all. On the Karaf console, the command to install it
is:
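::

    feature:install odl-alto-all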
For example::
- $ ls distribution-karaf-0.4.0-Beryllium.zip
- distribution-karaf-0.4.0-Beryllium.zip
- $ unzip distribution-karaf-0.4.0-Beryllium.zip
- Archive: distribution-karaf-0.4.0-Beryllium.zip
- creating: distribution-karaf-0.4.0-Beryllium/
- creating: distribution-karaf-0.4.0-Beryllium/configuration/
- creating: distribution-karaf-0.4.0-Beryllium/data/
- creating: distribution-karaf-0.4.0-Beryllium/data/tmp/
- creating: distribution-karaf-0.4.0-Beryllium/deploy/
- creating: distribution-karaf-0.4.0-Beryllium/etc/
- creating: distribution-karaf-0.4.0-Beryllium/externalapps/
+ $ ls distribution-karaf-0.5.x-Boron.zip
+ distribution-karaf-0.5.x-Boron.zip
+ $ unzip distribution-karaf-0.5.x-Boron.zip
+ Archive: distribution-karaf-0.5.x-Boron.zip
+ creating: distribution-karaf-0.5.x-Boron/
+ creating: distribution-karaf-0.5.x-Boron/configuration/
+ creating: distribution-karaf-0.5.x-Boron/data/
+ creating: distribution-karaf-0.5.x-Boron/data/tmp/
+ creating: distribution-karaf-0.5.x-Boron/deploy/
+ creating: distribution-karaf-0.5.x-Boron/etc/
+ creating: distribution-karaf-0.5.x-Boron/externalapps/
...
- inflating: distribution-karaf-0.4.0-Beryllium/bin/start.bat
- inflating: distribution-karaf-0.4.0-Beryllium/bin/status.bat
- inflating: distribution-karaf-0.4.0-Beryllium/bin/stop.bat
- $ cd distribution-karaf-0.4.0-Beryllium
+ inflating: distribution-karaf-0.5.x-Boron/bin/start.bat
+ inflating: distribution-karaf-0.5.x-Boron/bin/status.bat
+ inflating: distribution-karaf-0.5.x-Boron/bin/stop.bat
+ $ cd distribution-karaf-0.5.x-Boron
$ ./bin/karaf
________ ________ .__ .__ .__ __
http://stackoverflow.com/questions/35679852/karaf-exception-is-thrown-while-installing-org-fusesource-leveldbjni
-Beryllium features
-==================
+Karaf OpenDaylight Features
+===========================
-.. list-table:: Beryllium features
+.. list-table:: Karaf OpenDaylight features
:widths: 10 25 10 5
:header-rows: 1
- self+all
-Other Beryllium features
-========================
+Other OpenDaylight features
+===========================
-.. list-table:: Other Beryllium features
+.. list-table:: Other OpenDaylight features
:widths: 10 25 10 5
:header-rows: 1
- all
-Experimental Beryllium Features
-===============================
-The following functionality is labeled as experimental in OpenDaylight
-Beryllium and should be used accordingly. In general, it is not supposed to be
+Experimental OpenDaylight Features
+==================================
+The following functionality is labeled as experimental in this OpenDaylight
+release and should be used accordingly. In general, it is not supposed to be
used in production unless its limitations are well understood by those
deploying it.
-.. list-table:: Other Beryllium features
+.. list-table:: Other features
:widths: 10 25 10 5
:header-rows: 1
#. OpenDaylight concepts and tools
#. Explanations of OpenDaylight Apache Karaf features and other features that
extend network functionality
-#. OpenDaylight Beryllium system requirements and Release Notes
+#. OpenDaylight system requirements and Release Notes
#. OpenDaylight installation instructions
#. Feature tables with installation names and compatibility notes
* Support for persistent data stores
* Federation and SSO with OpenStack Keystone
-The Beryllium release of AAA includes experimental support for having the database of users and credentials stored in the cluster-aware MD-SAL datastore.
+This release of AAA includes experimental support for having the database of users and credentials stored in the cluster-aware MD-SAL datastore.
ALTO
====
is now available.
* Clustering - Full support for clustering and High Availability (HA) is
- available in the OpenDaylight Beryllium release. In particular, the OVSDB
+ available in this OpenDaylight release. In particular, the OVSDB
southbound plugin supports clustering that any application can use, and the
Openstack network integration with OpenDaylight (through OVSDB Net-Virt) has
full clustering support. While there is no specific limit on cluster size, a
- 3-node cluster has been tested extensively as part of the Beryllium release.
+ 3-node cluster has been tested extensively as part of the release.
* Security Groups - Security Group support is available and implemented using
OpenFlow rules that provide superior functionality and performance over
* Service Function Chaining
* Open vSwitch southbound support for quality of service and Queue configuration
- Load Balancer as service (LBaaS) with Distributed Virtual Router, as offered
- in the Lithium release
+ Load Balancer as a Service (LBaaS) with Distributed Virtual Router
* Network Virtualization User interface for DLUX
specification. It currently supports OpenFlow versions 1.0 and 1.3.2.
In addition to support for the core OpenFlow specification, OpenDaylight
-Beryllium also includes preliminary support for the Table Type Patterns and
+also includes preliminary support for the Table Type Patterns and
OF-CONFIG specifications.
Path Computation Element Protocol (PCEP)
various types of collectors
* HSQL data store - Replacement of H2 data store to remove third party
component dependency from TSDR
-* Enhancement of existing data stores including HBase to support new features
- introduced in Beryllium
* Cassandra data store - Cassandra implementation of TSDR SPIs
* NetFlow data collector - Collect NetFlow data from network elements
* NetFlowV9 - version 9 Netflow collector
for using Centinel functionality in OpenDaylight by enabling the
default Centinel feature. Centinel is a distributed reliable framework
for collection, aggregation and analysis of streaming data which is
-added in OpenDaylight Beryllium Release.
+added in this OpenDaylight release.
Overview
--------
receive events from multiple streaming sources
(e.g., Syslog, Thrift, Avro, AMQP, Log4j, HTTP/REST).
-In Beryllium, we develop a "Log Service" and plug-in for log analyzer (e.g., Graylog).
+In this release, we developed a "Log Service" and a plug-in for log analyzers (e.g., Graylog).
The Log Service processes real-time events coming from the log analyzer.
Additionally, we provide a stream collector (Flume- and Sqoop-based) that collects logs
from OpenDaylight and sinks them to the persistence service (integrated with TSDR).
Upgrading From a Previous Release
---------------------------------
-Beryllium being the first release supporting Centinel functionality, only fresh installation is possible.
+Only fresh installation is supported.
Uninstalling Centinel
---------------------
feature:install odl-tsdr-hsqldb-all
-This will install hsqldb related dependency features (and can take sometime) as well as openflow statistics collector before returning control to the console.
+This will install hsqldb-related dependency features (which can take some time) as well as the OpenFlow statistics collector before returning control to the console.
Installing HBase Data Store
#. Installing HBase server, and
#. Installing TSDR HBase Data Store features from ODL Karaf console.
-In Beryllium, we only support HBase single node running together on the same machine as OpenDaylight. Therefore, follow the steps to download and install HBase server onto the same machine as where OpenDaylight is running:
+In this release, we only support a single-node HBase server running on the same machine as OpenDaylight. Therefore, follow these steps to download and install the HBase server onto the same machine where OpenDaylight is running:
#. Create a folder in Linux operating system for the HBase server. For example, create an hbase directory under ``/usr/lib``::
#. Installing Cassandra server, and
#. Installing TSDR Cassandra Data Store features from ODL Karaf console.
-In Beryllium, we only support Cassadra single node running together on the same machine as OpenDaylight. Therefore, follow these steps to download and install Cassandra server onto the same machine as where OpenDaylight is running:
+In this release, we only support a single-node Cassandra server running on the same machine as OpenDaylight. Therefore, follow these steps to download and install the Cassandra server onto the same machine where OpenDaylight is running:
#. Install Cassandra (latest stable version) by downloading the release tarball and untarring it into a cassandra/ directory on the testing machine::
After the TSDR data store is installed, whether it is the HBase, Cassandra, or HSQLDB data store, you can verify the installation with the following steps.
-#. Verify if the following two tsdr commands are available from Karaf console::
+#. Verify if the following two TSDR commands are available from Karaf console::
tsdr:list
tsdr:purgeAll
-#. Verify if openflow statisitcs data can be received successfully:
+#. Verify if OpenFlow statistics data can be received successfully:
#. Run "feature:install odl-tsdr-openflow-statistics-collector" from Karaf.
* **Integration Group** hosted the OpenDaylight-wide tests and main release distribution
* **Release Engineering - autorelease** was used to build the Boron release artifacts, including the main release download.
-.. _AAA: https://wiki.opendaylight.org/view/AAA:Beryllium_Release_Notes
-.. _ALTO: https://wiki.opendaylight.org/view/ALTO:Beryllium:Release_Notes
-.. _BGPCEP: https://wiki.opendaylight.org/view/BGP_LS_PCEP:Beryllium_Release_Notes
-.. _CAPWAP: https://wiki.opendaylight.org/view/CAPWAP:Beryllium:Release_Notes
-.. _Controller: https://wiki.opendaylight.org/view/OpenDaylight_Controller:Beryllium:Release_Notes
-.. _DIDM: https://wiki.opendaylight.org/view/DIDM:_Beryllium_Release_Notes
-.. _DLUX: https://wiki.opendaylight.org/view/OpenDaylight_DLUX:Beryllium:Release_Notes
-.. _FaaS: https://wiki.opendaylight.org/view/FaaS:Beryllium_Release_Notes
-.. _Group_Based_Policy: https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)/Releases/Beryllium:Beryllium_Release_Notes
-.. _IoTDM: https://wiki.opendaylight.org/view/Iotdm:Beryllium_Release_Notes
-.. _L2_Switch: https://wiki.opendaylight.org/view/L2_Switch:Beryllium:Release_Notes
-.. _LACP: https://wiki.opendaylight.org/view/LACP:Beryllium:Release_Notes
-.. _LISP_Flow_Mapping: https://wiki.opendaylight.org/view/OpenDaylight_Lisp_Flow_Mapping:Beryllium_Release_Notes
-.. _MDSAL: https://wiki.opendaylight.org/view/MD-SAL:Beryllium:Release_Notes
-.. _NEMO: https://wiki.opendaylight.org/view/NEMO:Beryllium:Release_Notes
-.. _NETCONF: https://wiki.opendaylight.org/view/OpenDaylight_NETCONF:Beryllium_Release_Notes
+.. _AAA: https://wiki.opendaylight.org/view/AAA:Boron_Release_Notes
+.. _ALTO: https://wiki.opendaylight.org/view/ALTO:Boron:Release_Notes
+.. _BGPCEP: https://wiki.opendaylight.org/view/BGP_LS_PCEP:Boron_Release_Notes
+.. _CAPWAP: https://wiki.opendaylight.org/view/CAPWAP:Boron:Release_Notes
+.. _Controller: https://wiki.opendaylight.org/view/OpenDaylight_Controller:Boron:Release_Notes
+.. _DIDM: https://wiki.opendaylight.org/view/DIDM:_Boron_Release_Notes
+.. _DLUX: https://wiki.opendaylight.org/view/OpenDaylight_DLUX:Boron:Release_Notes
+.. _FaaS: https://wiki.opendaylight.org/view/FaaS:Boron_Release_Notes
+.. _Group_Based_Policy: https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)/Releases/Boron:Boron_Release_Notes
+.. _IoTDM: https://wiki.opendaylight.org/view/Iotdm:Boron_Release_Notes
+.. _L2_Switch: https://wiki.opendaylight.org/view/L2_Switch:Boron:Release_Notes
+.. _LACP: https://wiki.opendaylight.org/view/LACP:Boron:Release_Notes
+.. _LISP_Flow_Mapping: https://wiki.opendaylight.org/view/OpenDaylight_Lisp_Flow_Mapping:Boron_Release_Notes
+.. _MDSAL: https://wiki.opendaylight.org/view/MD-SAL:Boron:Release_Notes
+.. _NEMO: https://wiki.opendaylight.org/view/NEMO:Boron:Release_Notes
+.. _NETCONF: https://wiki.opendaylight.org/view/OpenDaylight_NETCONF:Boron_Release_Notes
.. _NetIDE: https://wiki.opendaylight.org/view/NetIDE:Release_Notes
-.. _NeXt: https://wiki.opendaylight.org/view/NeXt:Beryllium_Release_Notes
+.. _NeXt: https://wiki.opendaylight.org/view/NeXt:Boron_Release_Notes
.. _NIC: https://wiki.opendaylight.org/view/Network_Intent_Composition:Release_Notes
-.. _Neutron_Northbound: https://wiki.opendaylight.org/view/NeutronNorthbound:Beryllium:Release_Notes
-.. _OF-Config: https://wiki.opendaylight.org/view/OF-CONFIG:Beryllium:Release_Notes
-.. _OpFlex: https://wiki.opendaylight.org/view/OpFlex:Beryllium_Release_Notes
-.. _OpenFlow_Plugin: https://wiki.opendaylight.org/view/OpenDaylight_OpenFlow_Plugin:Beryllium_Release_Notes
-.. _OpenFlow_Protocol_Library: https://wiki.opendaylight.org/view/Openflow_Protocol_Library:Release_Notes:Beryllium_Release_Notes
-.. _OVSDB_Netvirt: https://wiki.opendaylight.org/view/OpenDaylight_OVSDB:Beryllium_Release_Notes
-.. _Packet_Cable: https://wiki.opendaylight.org/view/PacketCablePCMM:BerylliumReleaseNotes
-.. _SDN_Interface_Application: https://wiki.opendaylight.org/view/ODL-SDNi:Beryllium_Release_Notes
+.. _Neutron_Northbound: https://wiki.opendaylight.org/view/NeutronNorthbound:Boron:Release_Notes
+.. _OF-Config: https://wiki.opendaylight.org/view/OF-CONFIG:Boron:Release_Notes
+.. _OpFlex: https://wiki.opendaylight.org/view/OpFlex:Boron_Release_Notes
+.. _OpenFlow_Plugin: https://wiki.opendaylight.org/view/OpenDaylight_OpenFlow_Plugin:Boron_Release_Notes
+.. _OpenFlow_Protocol_Library: https://wiki.opendaylight.org/view/Openflow_Protocol_Library:Release_Notes:Boron_Release_Notes
+.. _OVSDB_Netvirt: https://wiki.opendaylight.org/view/OpenDaylight_OVSDB:Boron_Release_Notes
+.. _Packet_Cable: https://wiki.opendaylight.org/view/PacketCablePCMM:BoronReleaseNotes
+.. _SDN_Interface_Application: https://wiki.opendaylight.org/view/ODL-SDNi:Boron_Release_Notes
.. _SNBI: https://wiki.opendaylight.org/view/SNBI_Berrylium_Release_Notes
-.. _SNMP4SDN: https://wiki.opendaylight.org/view/SNMP4SDN:Beryllium_Release_Note
-.. _SNMP_Plugin: https://wiki.opendaylight.org/view/SNMP_Plugin:SNMP_Plugin:Beryllium_Release_Notes
-.. _SXP: https://wiki.opendaylight.org/view/SXP:Beryllium:Release_Notes
-.. _SFC: https://wiki.opendaylight.org/view/Service_Function_Chaining:Beryllium_Release_Notes
-.. _TCPMD5: https://wiki.opendaylight.org/view/TCPMD5:Beryllium_Release_Notes
-.. _TSDR: https://wiki.opendaylight.org/view/TSDR:Beryllium:Release_Notes
-.. _TTP: https://wiki.opendaylight.org/view/Table_Type_Patterns/Beryllium/Release_Notes
-.. _Topology_Processing_Framework: https://wiki.opendaylight.org/view/Topology_Processing_Framework:BERYLLIUM_Release_Notes
-.. _USC: https://wiki.opendaylight.org/view/USC:Beryllium:Release_Notes
-.. _VPN_Service: https://wiki.opendaylight.org/view/Vpnservice:Beryllium_Release_Notes
-.. _VTN: https://wiki.opendaylight.org/view/VTN:Beryllium:Release_Notes
-.. _YANG_Tools: https://wiki.opendaylight.org/view/YANG_Tools:Beryllium:Release_Notes
+.. _SNMP4SDN: https://wiki.opendaylight.org/view/SNMP4SDN:Boron_Release_Note
+.. _SNMP_Plugin: https://wiki.opendaylight.org/view/SNMP_Plugin:SNMP_Plugin:Boron_Release_Notes
+.. _SXP: https://wiki.opendaylight.org/view/SXP:Boron:Release_Notes
+.. _SFC: https://wiki.opendaylight.org/view/Service_Function_Chaining:Boron_Release_Notes
+.. _TCPMD5: https://wiki.opendaylight.org/view/TCPMD5:Boron_Release_Notes
+.. _TSDR: https://wiki.opendaylight.org/view/TSDR:Boron:Release_Notes
+.. _TTP: https://wiki.opendaylight.org/view/Table_Type_Patterns/Boron/Release_Notes
+.. _Topology_Processing_Framework: https://wiki.opendaylight.org/view/Topology_Processing_Framework:Boron_Release_Notes
+.. _USC: https://wiki.opendaylight.org/view/USC:Boron:Release_Notes
+.. _VPN_Service: https://wiki.opendaylight.org/view/Vpnservice:Boron_Release_Notes
+.. _VTN: https://wiki.opendaylight.org/view/VTN:Boron:Release_Notes
+.. _YANG_Tools: https://wiki.opendaylight.org/view/YANG_Tools:Boron:Release_Notes
odl-groupbasedpolicy-ofoverlay
-For Lithium, **GBP** has one renderer, hence this is loaded by default.
+For this release, **GBP** has one renderer, which is loaded by default.
REST calls from OpenStack Neutron are handled by the Neutron NorthBound project.
* On the control host, `Download
the latest OpenDaylight release <ODL_Downloads_>`_ (at the time of writing,
- this is Beryllium-SR2)
+ this is Boron)
* Uncompress it as root, and start OpenDaylight (you can start OpenDaylight
by running karaf directly, but exiting from the shell will shut it down):
.. code-block:: bash
- tar xvfz distribution-karaf-0.4.2-Beryllium-SR2.tar.gz
- cd distribution-karaf-0.4.2-Beryllium-SR2
+ tar xvfz distribution-karaf-0.5.0-Boron.tar.gz
+ cd distribution-karaf-0.5.0-Boron
./bin/start # Start OpenDaylight as a server process
* Connect to the Karaf shell, and install the odl-ovsdb-openstack bundle,
-Subproject commit d153e7728a07aba69e49fab176fe52b1de2cbbdf
+Subproject commit 691d006887da61e7669c32488379c7911c80807d
-Subproject commit b28444e372162528d42d86363918bfcf49d00058
+Subproject commit ea39ae2c0e86efa9e42b5244cdf0ab0e69208a01
-Subproject commit 34ef3da1a88922e439235b60b2eb7a4210ca02e2
+Subproject commit e75c7811202763b1fe4e2bb44d92cca59b2908df
--- /dev/null
+Authentication, Authorization and Accounting (AAA) Services
+===========================================================
+
+The Boron AAA services are based on the Apache Shiro Java Security
+Framework. The main configuration file for AAA is located at
+“etc/shiro.ini” relative to the ODL Karaf home directory.
+
+Terms And Definitions
+---------------------
+
+Token
+ A claim of access to a group of resources on the controller
+
+Domain
+ A group of resources, direct or indirect, physical, logical, or
+ virtual, for the purpose of access control. ODL recommends using the
+ default “sdn” domain in the Boron release.
+
+User
+ A person who either owns or has access to a resource or group of
+ resources on the controller
+
+Role
+ Opaque representation of a set of permissions, which is merely a
+ unique string such as admin or guest
+
+Credential
+ Proof of identity such as username and password, OTP, biometrics, or
+ others
+
+Client
+ A service or application that requires access to the controller
+
+Claim
+ A data set of validated assertions regarding a user, e.g. the role,
+ domain, name, etc.
+
+How to enable AAA
+-----------------
+
+AAA is enabled by installing the odl-aaa-shiro feature.
+odl-aaa-shiro is automatically installed as part of the odl-restconf
+offering.
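+
+If RESTCONF is not installed, odl-aaa-shiro can be installed explicitly
+from the Karaf console::
+
+    feature:install odl-aaa-shiro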
+
+How to disable AAA
+------------------
+
+Edit the “etc/shiro.ini” file and replace the following:
+
+::
+
+ /** = authcBasic
+
+with
+
+::
+
+ /** = anon
+
+Then restart the Karaf process.
+
+How application developers can leverage AAA to provide servlet security
+-----------------------------------------------------------------------
+
+In order to provide security to a servlet, add the following to the
+servlet’s web.xml file as the first filter definition:
+
+::
+
+ <context-param>
+ <param-name>shiroEnvironmentClass</param-name>
+ <param-value>org.opendaylight.aaa.shiro.web.env.KarafIniWebEnvironment</param-value>
+ </context-param>
+
+ <listener>
+ <listener-class>org.apache.shiro.web.env.EnvironmentLoaderListener</listener-class>
+ </listener>
+
+ <filter>
+ <filter-name>AAAShiroFilter</filter-name>
+ <filter-class>org.opendaylight.aaa.shiro.filters.AAAShiroFilter</filter-class>
+ </filter>
+
+ <filter-mapping>
+ <filter-name>AAAShiroFilter</filter-name>
+ <url-pattern>/*</url-pattern>
+ </filter-mapping>
+
+.. note::
+
+ It is very important to place this AAAShiroFilter as the first
+ javax.servlet.Filter, as Jersey applies Filters in the order they
+ appear within web.xml. Placing the AAAShiroFilter first ensures
+ incoming HTTP/HTTPS requests have proper credentials before any
+ other filtering is attempted.
+
+AAA Realms
+----------
+
+The AAA plugin utilizes realms to support pluggable authentication and
+authorization schemes. There are two parent types of realms:
+
+- AuthenticatingRealm
+
+ - Provides no Authorization capability.
+
+ - Users authenticated through this type of realm are treated
+ equally.
+
+- AuthorizingRealm
+
+ - AuthorizingRealm is a more sophisticated AuthenticatingRealm,
+ which provides the additional mechanisms to distinguish users
+ based on roles.
+
+ - Useful for applications in which roles determine allowed
+ capabilities.
+
+ODL contains four realm implementations:
+
+- TokenAuthRealm
+
+ - An AuthorizingRealm built to bridge the Shiro-based AAA service
+ with the h2-based AAA implementation.
+
+ - Exposes a RESTful web service to manipulate IdM policy on a
+ per-node basis. If identical AAA policy is desired across a
+ cluster, the backing data store must be synchronized using an
+ out-of-band method.
+
+ - A Python script located at “etc/idmtool” is included to help
+ manipulate data contained in the TokenAuthRealm.
+
+ - Enabled out of the box.
+
+- ODLJndiLdapRealm
+
+ - An AuthorizingRealm built to extract identity information from IdM
+ data contained on an LDAP server.
+
+ - Extracts group information from LDAP, which is translated into ODL
+ roles.
+
+ - Useful when federating against an existing LDAP server, in which
+ only certain types of users should have certain access privileges.
+
+ - Disabled out of the box.
+
+- ODLJndiLdapRealmAuthNOnly
+
+ - The same as ODLJndiLdapRealm, except without role extraction.
+ Thus, all LDAP users have equal authentication and authorization
+ rights.
+
+ - Disabled out of the box.
+
+- ActiveDirectoryRealm
+
+.. note::
+
+ More than one Realm implementation can be specified. Realms are
+ attempted in order until authentication succeeds or all realm
+ sources are exhausted.
+
+TokenAuthRealm Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+TokenAuthRealm stores IdM data in an h2 database on each node. Thus,
+configuration of a cluster currently requires configuring the desired
+IdM policy on each node. There are two supported methods to manipulate
+the TokenAuthRealm IdM configuration:
+
+- idmtool Configuration
+
+- RESTful Web Service Configuration
+
+idmtool Configuration
+^^^^^^^^^^^^^^^^^^^^^
+
+A utility script located at “etc/idmtool” is used to manipulate the
+TokenAuthRealm IdM policy. idmtool assumes a single domain (sdn), since
+multiple domains are not leveraged in the Boron release. General usage
+information for idmtool is derived through issuing the following
+command:
+
+::
+
+ $ python etc/idmtool -h
+ usage: idmtool [-h] [--target-host TARGET_HOST]
+ user
+ {list-users,add-user,change-password,delete-user,list-domains,list-roles,add-role,delete-role,add-grant,get-grants,delete-grant}
+ ...
+
+ positional arguments:
+ user username for BSC node
+ {list-users,add-user,change-password,delete-user,list-domains,list-roles,add-role,delete-role,add-grant,get-grants,delete-grant}
+ sub-command help
+ list-users list all users
+ add-user add a user
+ change-password change a password
+ delete-user delete a user
+ list-domains list all domains
+ list-roles list all roles
+ add-role add a role
+ delete-role delete a role
+ add-grant add a grant
+ get-grants get grants for userid on sdn
+ delete-grant delete a grant
+
+ optional arguments:
+ -h, --help show this help message and exit
+ --target-host TARGET_HOST
+ target host node
+
+Add a user
+''''''''''
+
+::
+
+ python etc/idmtool admin add-user newUser
+ Password:
+ Enter new password:
+ Re-enter password:
+ add_user(admin)
+
+ command succeeded!
+
+ json:
+ {
+ "description": "",
+ "domainid": "sdn",
+ "email": "",
+ "enabled": true,
+ "name": "newUser",
+ "password": "**********",
+ "salt": "**********",
+ "userid": "newUser@sdn"
+ }
+
+.. note::
+
+ AAA redacts the password and salt fields for security purposes.
+
+Delete a user
+'''''''''''''
+
+::
+
+ $ python etc/idmtool admin delete-user newUser@sdn
+ Password:
+ delete_user(newUser@sdn)
+
+ command succeeded!
+
+List all users
+''''''''''''''
+
+::
+
+ $ python etc/idmtool admin list-users
+ Password:
+ list_users
+
+ command succeeded!
+
+ json:
+ {
+ "users": [
+ {
+ "description": "user user",
+ "domainid": "sdn",
+ "email": "",
+ "enabled": true,
+ "name": "user",
+ "password": "**********",
+ "salt": "**********",
+ "userid": "user@sdn"
+ },
+ {
+ "description": "admin user",
+ "domainid": "sdn",
+ "email": "",
+ "enabled": true,
+ "name": "admin",
+ "password": "**********",
+ "salt": "**********",
+ "userid": "admin@sdn"
+ }
+ ]
+ }
+
+Change a user’s password
+''''''''''''''''''''''''
+
+::
+
+ $ python etc/idmtool admin change-password admin@sdn
+ Password:
+ Enter new password:
+ Re-enter password:
+ change_password(admin)
+
+ command succeeded!
+
+ json:
+ {
+ "description": "admin user",
+ "domainid": "sdn",
+ "email": "",
+ "enabled": true,
+ "name": "admin",
+ "password": "**********",
+ "salt": "**********",
+ "userid": "admin@sdn"
+ }
+
+Add a role
+''''''''''
+
+::
+
+ $ python etc/idmtool admin add-role network-admin
+ Password:
+ add_role(network-admin)
+
+ command succeeded!
+
+ json:
+ {
+ "description": "",
+ "domainid": "sdn",
+ "name": "network-admin",
+ "roleid": "network-admin@sdn"
+ }
+
+Delete a role
+'''''''''''''
+
+::
+
+ $ python etc/idmtool admin delete-role network-admin@sdn
+ Password:
+ delete_role(network-admin@sdn)
+
+ command succeeded!
+
+List all roles
+''''''''''''''
+
+::
+
+ $ python etc/idmtool admin list-roles
+ Password:
+ list_roles
+
+ command succeeded!
+
+ json:
+ {
+ "roles": [
+ {
+ "description": "a role for admins",
+ "domainid": "sdn",
+ "name": "admin",
+ "roleid": "admin@sdn"
+ },
+ {
+ "description": "a role for users",
+ "domainid": "sdn",
+ "name": "user",
+ "roleid": "user@sdn"
+ }
+ ]
+ }
+
+List all domains
+''''''''''''''''
+
+::
+
+ $ python etc/idmtool admin list-domains
+ Password:
+ list_domains
+
+ command succeeded!
+
+ json:
+ {
+ "domains": [
+ {
+ "description": "default odl sdn domain",
+ "domainid": "sdn",
+ "enabled": true,
+ "name": "sdn"
+ }
+ ]
+ }
+
+Add a grant
+'''''''''''
+
+::
+
+ $ python etc/idmtool admin add-grant user@sdn admin@sdn
+ Password:
+ add_grant(userid=user@sdn,roleid=admin@sdn)
+
+ command succeeded!
+
+ json:
+ {
+ "domainid": "sdn",
+ "grantid": "user@sdn@admin@sdn@sdn",
+ "roleid": "admin@sdn",
+ "userid": "user@sdn"
+ }
+
+Delete a grant
+''''''''''''''
+
+::
+
+ $ python etc/idmtool admin delete-grant user@sdn admin@sdn
+ Password:
+ http://localhost:8181/auth/v1/domains/sdn/users/user@sdn/roles/admin@sdn
+ delete_grant(userid=user@sdn,roleid=admin@sdn)
+
+ command succeeded!
+
+Get grants for a user
+'''''''''''''''''''''
+
+::
+
+ python etc/idmtool admin get-grants admin@sdn
+ Password:
+ get_grants(admin@sdn)
+
+ command succeeded!
+
+ json:
+ {
+ "roles": [
+ {
+ "description": "a role for users",
+ "domainid": "sdn",
+ "name": "user",
+ "roleid": "user@sdn"
+ },
+ {
+ "description": "a role for admins",
+ "domainid": "sdn",
+ "name": "admin",
+ "roleid": "admin@sdn"
+ }
+ ]
+ }
+
+RESTful Web Service
+^^^^^^^^^^^^^^^^^^^
+
+The TokenAuthRealm IdM policy is fully configurable through a RESTful
+web service. Full documentation for manipulating AAA IdM data is located
+online (https://wiki.opendaylight.org/images/0/00/AAA_Test_Plan.docx),
+and a few examples are included in this guide:
+
+Get All Users
+'''''''''''''
+
+::
+
+ curl -u admin:admin http://localhost:8181/auth/v1/users
+ OUTPUT:
+ {
+ "users": [
+ {
+ "description": "user user",
+ "domainid": "sdn",
+ "email": "",
+ "enabled": true,
+ "name": "user",
+ "password": "**********",
+ "salt": "**********",
+ "userid": "user@sdn"
+ },
+ {
+ "description": "admin user",
+ "domainid": "sdn",
+ "email": "",
+ "enabled": true,
+ "name": "admin",
+ "password": "**********",
+ "salt": "**********",
+ "userid": "admin@sdn"
+ }
+ ]
+ }
+
+Create a User
+'''''''''''''
+
+::
+
+ curl -u admin:admin -X POST -H "Content-Type: application/json" --data-binary @./user.json http://localhost:8181/auth/v1/users
+ PAYLOAD:
+ {
+ "name": "ryan",
+ "userid": "ryan@sdn",
+ "password": "ryan",
+ "domainid": "sdn",
+ "description": "Ryan's User Account",
+ "email": "ryandgoulding@gmail.com"
+ }
+
+ OUTPUT:
+ {
+ "userid":"ryan@sdn",
+ "name":"ryan",
+ "description":"Ryan's User Account",
+ "enabled":true,
+ "email":"ryandgoulding@gmail.com",
+ "password":"**********",
+ "salt":"**********",
+ "domainid":"sdn"
+ }
+
+Create an OAuth2 Token For Admin Scoped to SDN
+''''''''''''''''''''''''''''''''''''''''''''''
+
+::
+
+ curl -d 'grant_type=password&username=admin&password=a&scope=sdn' http://localhost:8181/oauth2/token
+
+ OUTPUT:
+ {
+ "expires_in":3600,
+ "token_type":"Bearer",
+ "access_token":"5a615fbc-bcad-3759-95f4-ad97e831c730"
+ }
+
+Use an OAuth2 Token
+'''''''''''''''''''
+
+::
+
+ curl -H "Authorization: Bearer 5a615fbc-bcad-3759-95f4-ad97e831c730" http://localhost:8181/auth/v1/domains
+ {
+ "domains":
+ [
+ {
+ "domainid":"sdn",
+ "name":"sdn",
+ "description":"default odl sdn domain",
+ "enabled":true
+ }
+ ]
+ }
+
+ODLJndiLdapRealm Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+LDAP integration is provided in order to externalize identity
+management. To configure LDAP parameters, modify "etc/shiro.ini"
+parameters to include the ODLJndiLdapRealm:
+
+::
+
+ # ODL provides a few LDAP implementations, which are disabled out of the box.
+ # ODLJndiLdapRealm includes authorization functionality based on LDAP elements
+ # extracted through an LDAP search. This requires a bit of knowledge about
+ # how your LDAP system is set up. An example is provided below:
+ ldapRealm = org.opendaylight.aaa.shiro.realm.ODLJndiLdapRealm
+ ldapRealm.userDnTemplate = uid={0},ou=People,dc=DOMAIN,dc=TLD
+ ldapRealm.contextFactory.url = ldap://<URL>:389
+ ldapRealm.searchBase = dc=DOMAIN,dc=TLD
+ ldapRealm.ldapAttributeForComparison = objectClass
+ ldapRealm.groupRolesMap = "Person":"admin"
+ # ...
+ # further down in the file...
+ # Stacked realm configuration; realms are round-robined until authentication succeeds or realm sources are exhausted.
+ securityManager.realms = $tokenAuthRealm, $ldapRealm
+
+This configuration allows federation with an external LDAP server, and
+the user’s ODL role parameters are mapped to corresponding LDAP
+attributes as specified by the groupRolesMap. Thus, an LDAP operator can
+provision attributes for LDAP users that support different ODL role
+structures.
+
+ODLJndiLdapRealmAuthNOnly Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Edit the "etc/shiro.ini" file and modify the following:
+
+::
+
+ ldapRealm = org.opendaylight.aaa.shiro.realm.ODLJndiLdapRealmAuthNOnly
+ ldapRealm.userDnTemplate = uid={0},ou=People,dc=DOMAIN,dc=TLD
+ ldapRealm.contextFactory.url = ldap://<URL>:389
+ # ...
+ # further down in the file...
+ # Stacked realm configuration; realms are round-robined until authentication succeeds or realm sources are exhausted.
+ securityManager.realms = $tokenAuthRealm, $ldapRealm
+
+This is useful for setups where all LDAP users are allowed equal access.
+
+Token Store Configuration Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Edit the file "etc/opendaylight/karaf/08-authn-config.xml", adjust the
+following parameter, and save the file:
+
+- **timeToLive**: the maximum time, in milliseconds, that tokens are
+  cached. The default is 360000.
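+
+For reference, the relevant setting inside that file looks
+approximately as follows; the surrounding module configuration is
+omitted and the element layout may differ between releases, so treat
+this as a sketch rather than the exact file contents:
+
+::
+
+ <!-- Maximum time, in milliseconds, that tokens are cached -->
+ <timeToLive>360000</timeToLive>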
+
+Authorization Configuration
+---------------------------
+
+Shiro-Based Authorization
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+OpenDaylight AAA has support for Role Based Access Control based on the
+Apache Shiro permissions system. Configuration of the authorization
+system is done offline; authorization currently cannot be configured
+after the controller is started. Thus, authorization in this release
+is aimed at supporting coarse-grained security policies, with the goal
+of providing more robust configuration capabilities in the
+future. Shiro-based Authorization is documented on the Apache Shiro
+website (http://shiro.apache.org/web.html#Web-%7B%7B%5Curls%5C%7D%7D).
+
+Enable “admin” Role Based Access to the IdMLight RESTful web service
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Edit the "etc/shiro.ini" configuration file and add
+``/auth/v1/** = authcBasic, roles[admin]`` above the line
+``/** = authcBasic`` within the "urls" section.
+
+::
+
+ /auth/v1/** = authcBasic, roles[admin]
+ /** = authcBasic
+
+This restricts the IdMLight REST endpoints so that a grant of the
+admin role must be present for the requesting user.
+
+.. note::
+
+ The ordering of the authorization rules above is important!
+
+AuthZ Broker Facade
+~~~~~~~~~~~~~~~~~~~
+
+ODL includes an experimental Authorization Broker Facade, which allows
+finer grained access control for REST endpoints. Since this feature was
+not well tested in the Boron release, it is recommended to use the
+Shiro-based mechanism instead, and rely on the Authorization Broker
+Facade for POC use only.
+
+AuthZ Broker Facade Feature Installation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To install the authorization broker facade, please issue the following
+command in the karaf shell:
+
+::
+
+ feature:install odl-restconf odl-aaa-authz
+
+Add an Authorization Rule
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The following shows how one might go about securing the controller so
+that only admins can access restconf.
+
+::
+
+ curl -u admin:admin -H "Content-Type: application/json" --data-binary @./rule.json http://localhost:8181/restconf/config/authorization-schema:simple-authorization/policies/RestConfService/
+ cat ./rule.json
+ {
+   "policies": {
+     "resource": "*",
+     "service": "RestConfService",
+     "role": "admin"
+   }
+ }
+
+Accounting Configuration
+------------------------
+
+All AAA logging is output to the standard karaf.log file.
+
+::
+
+ log:set TRACE org.opendaylight.aaa
+
+This command enables the most verbose level of logging for AAA
+components.
+
Scope of CAPWAP Project
-----------------------
-In the Lithium release, CAPWAP project aims to only detect the WTPs and
+In this release, the CAPWAP project aims only to detect WTPs and
store their basic attributes in the operational data store, which is
accessible via REST and JAVA APIs.
Configuring CAPWAP
------------------
-As of Lithium, there are no configuration requirements.
+As of this release, there are no configuration requirements.
Administering or Managing CAPWAP
--------------------------------
Overview
--------
-In the Beryllium Release of Centinel, this framework enables SDN
+In this release, the Centinel framework enables SDN
applications/services to receive events from multiple streaming sources
(e.g., Syslog, Thrift, Avro, AMQP, Log4j, HTTP/REST) and execute actions
like network configuration/batch processing/real-time analytics. It also
1. Check whether Graylog is up and running and plugins deployed as
mentioned in `installation
- guide <http://opendaylight.readthedocs.io/en/stable-beryllium/getting-started-guide/index.html>`__.
+ guide <https://opendaylight.readthedocs.io/en/stable-boron/getting-started-guide/project-specific-guides/centinel.html>`__.
2. Check whether HBase is up and respective tables and column families
as mentioned in `installation
- guide <http://opendaylight.readthedocs.io/en/stable-beryllium/getting-started-guide/index.html>`__
+ guide <https://opendaylight.readthedocs.io/en/stable-boron/getting-started-guide/project-specific-guides/centinel.html>`__
are created.
3. Check if apache flume is up and running.
Run REST adjust-flow command to adjust flows and push to the device
-------------------------------------------------------------------
-**Flow mod driver for HP 3800 device is added in Beryllium release.**
+**Flow mod driver for HP 3800 device**
This driver adjusts the flows and pushes them to the device. This API
takes the flow to be adjusted as input and displays the adjusted flow as
--- /dev/null
+Fabric As A Service
+===================
+
+This document describes, from a user’s or application’s perspective, how
+to use the Fabric As A Service (FaaS) feature in OpenDaylight. This
+document contains configuration, administration, and management sections
+for the FaaS feature.
+
+Overview
+--------
+
+Currently, network applications and network administrators mostly rely
+on lower-level interfaces such as CLI, SNMP, OVSDB, NETCONF or OpenFlow
+to directly configure individual devices for network service
+provisioning. In general, those interfaces are:
+
+- Technology oriented, not application oriented.
+
+- Vendor specific.
+
+- Individual device oriented, not network oriented.
+
+- Not declarative, but complicated and procedure oriented.
+
+To address the gap between application needs and network interfaces,
+a few application-centric languages have been proposed in OpenDaylight,
+including Group Based Policy (GBP), Network Intent Composition (NIC),
+and NEtwork MOdeling (NEMO), which try to replace the traditional
+southbound interface to the application. These languages are top-down
+abstractions that model application requirements in a more
+application-oriented way.
+
+After being involved with GBP development for a while, we feel the
+top-down model still leaves quite a gap between the model and the
+underlying network: the existing interfaces to network devices lack
+abstraction, which makes it very hard to map high-level abstractions
+onto the physical network. Applications built with these low-level
+interfaces are often tightly coupled with the underlying technology,
+which makes the application's architecture monolithic, error prone and
+hard to maintain.
+
+We think a bottom-up abstraction of the network can simplify and reduce
+this gap, and make it easier to implement the application-centric
+model. Moreover, in some use cases a network-service-oriented interface
+is still desired, for example from a network monitoring and
+troubleshooting perspective. That's where Fabric as a Service comes in.
+
+FaaS Architecture
+-----------------
+
+Fabric and its controller (Fabric Controller)
+ The Fabric object provides an abstraction of a homogeneous network,
+ or a portion of the network, and also has a built-in Fabric
+ controller which provides the management plane and control plane for
+ the fabric. The Fabric controller implements the services required
+ by the Fabric Service and monitors and controls the fabric operation.
+
+Fabric Manager
+ Fabric Manager manages all the fabric objects. Fabric Manager also
+ acts as a unified Fabric controller, which provides inter-fabric
+ connection control and configuration. In addition, Fabric Manager is
+ the FaaS API service via which the FaaS user-level logical network
+ API (the top-level API mentioned previously) is exposed and
+ implemented.
+
+FaaS renderer for GBP (Group Based Policy)
+ The FaaS renderer for GBP is an application of FaaS that provides
+ the rendering service between the GBP model and the logical network
+ model provided by Fabric Manager.
+
+FaaS RESTCONF API
+-----------------
+
+FaaS provides a two-layer API:
+
+- The top layer is a **user level logical network** API which defines
+ CRUD services operating on the following abstracted constructs:
+
+ - vcontainer - virtual container
+
+ - logical port - an input/output/access point of a logical device
+
+ - logical link - connects ports
+
+ - logical switch - a layer 2 forwarding device
+
+ - logical router - a layer 3 forwarding device
+
+ - Subnet
+
+ - Rule/ACL
+
+ - End Points Registration
+
+ - End Points Attachment
+
+Through these constructs, a logical network can be described without
+users knowing too many details about the physical network devices and
+technology, i.e. users' network services are decoupled from the
+underlying physical infrastructure. This decoupling brings the benefit
+that user-defined services are not locked in to any specific technology
+or physical devices. FaaS maps the logical network to the physical
+network configuration automatically, which largely eliminates manual
+provisioning work. As a result, human error is avoided and OPEX for
+network operators is massively reduced. Moreover, migration from one
+technology to another is much easier and is transparent to users'
+services.
+
+- The second layer defines an abstraction layer called the **Fabric**
+ API. The idea is to abstract the network into a topology formed by a
+ collection of fabric objects rather than a variety of physical
+ devices. Each Fabric object provides a collection of unified
+ services. The top-level API enables application developers or users
+ to write applications that map a high-level model such as GBP or
+ Intent into a logical network model, while the lower level gives the
+ application more control at the individual fabric object level. More
+ importantly, the Fabric API is more like an SPI (Service Provider
+ Interface): a fabric provider or vendor can implement the SPI based
+ on its own fabric technique such as TRILL, SPB, etc.
+
+This document is focused on the top-layer API. For how to use the
+second-layer API, please refer to the FaaS developer guide for more
+details.
+
+.. note::
+
+ For any JSON data or link not described here, please go to
+ `http://${ipaddress}:8181/apidoc/explorer/index.htm <http://${ipaddress}:8181/apidoc/explorer/index.htm>`__
+ for details or clarification.
+
+Resource Management API
+-----------------------
+
+The FaaS Resource Management API provides services to allocate and
+reclaim the network resources provided by the Fabric object. These APIs
+can be accessed via RESTCONF rendered from YANG in MD-SAL.
+
+- Name: Create virtual container
+
+ - A virtual container is a collection of resources allocated to a
+ tenant, for example a list of physical ports, bandwidth, or a
+ quantity of L2 networks or logical routers.
+
+ - `http://${ipaddress}:8181/restconf/operations/vcontainer-topology:create-vcontainer <http://${ipaddress}:8181/restconf/operations/vcontainer-topology:create-vcontainer>`__
+
+ - Description: create a given virtual container.
+
+- Name: assign or remove fabric resource to a virtual container
+
+ - `http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-vfabric-to-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-vfabric-to-ld-node>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/vc-ld-node:rm-vfabric-to-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:rm-vfabric-to-ld-node>`__
+
+- Name: assign or remove appliance to a virtual container
+
+ - `http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-appliance-to-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-appliance-to-ld-node>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/vc-ld-node:rm-appliance-to-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:rm-appliance-to-ld-node>`__
+
+- Name: create or remove a child container
+
+ - `http://${ipaddress}:8181/restconf/operations/vc-ld-node:create-child-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:create-child-ld-node>`__
+
+ - `http://${ipaddress}:8181/restconf/operations/vc-ld-node:rm-child-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:rm-child-ld-node>`__
+
+- RESTCONF path for Virtual Container Datastore query (note:
+ vcontainer-id equals tenantID for now):
+
+ - `http://${ipaddress}:8181/restconf/config/network-topology/topology/{vcontainer-id}/vc-topology-attributes/ <http://${ipaddress}:8181/restconf/config/network-topology/topology/{vcontainer-id}/vc-topology-attributes/>`__
+
+ - `http://${ipaddress}:8181/restconf/config/network-topology/topology/{vcontainer-id}/node/{net-node-id}/vc-net-node-attributes <http://${ipaddress}:8181/restconf/config/network-topology/topology/{vcontainer-id}/node/{net-node-id}/vc-net-node-attributes>`__
+
+ - `http://${ipaddress}:8181/restconf/config/network-topology/topology/{vcontainer-id}/node/{ld-node-id}/vc-ld-node-attributes <http://${ipaddress}:8181/restconf/config/network-topology/topology/{vcontainer-id}/node/{ld-node-id}/vc-ld-node-attributes>`__
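+
+Each of the RPCs above is invoked with an HTTP POST carrying a JSON
+body. As an illustration, a create-vcontainer call might be issued as
+follows; the payload file ``vcontainer.json`` is a placeholder here,
+and its exact schema comes from the vcontainer YANG model (viewable in
+the apidoc explorer):
+
+::
+
+ curl -u admin:admin -X POST -H "Content-Type: application/json" \
+ --data-binary @./vcontainer.json \
+ http://${ipaddress}:8181/restconf/operations/vcontainer-topology:create-vcontainer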
+
+Installing Fabric As A Service
+------------------------------
+
+To install FaaS, download OpenDaylight and use the Karaf console to
+install the following features: **odl-restconf**, **odl-faas-all** and
+**odl-groupbasedpolicy-faas** (the latter only if FaaS is used to
+render GBP).
+
+Configuring FaaS
+----------------
+
+This section gives details about the configuration settings for various
+components in FaaS.
+
+The FaaS configuration files for the Karaf distribution are located in
+distribution/karaf/target/assembly/etc/faas
+
+- akka.conf
+
+ - This file contains configuration related to clustering. Potential
+ configuration properties can be found on the akka website at
+ http://doc.akka.io
+
+- fabric-factory.xml
+
+- vxlan-fabric.xml
+
+- vxlan-fabric-ovs-adapter.xml
+
+ - These three files are used to initialize the fabric module and are
+ located under distribution/karaf/target/assembly/etc/opendaylight/karaf
+
+Managing FaaS
+-------------
+
+Start the OpenDaylight Karaf distribution:
+
+- *>bin/karaf*
+
+Then, from the Karaf console, install the features in the following
+order:
+
+- *>feature:install odl-restconf*
+
+- *>feature:install odl-faas-all*
+
+- *>feature:install odl-groupbasedpolicy-faas*
+
+After installing the features above, users can manage Fabric resources
+and FaaS logical networks from the APIDOC explorer via RESTCONF.
+
+Go to
+`http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__,
+sign in, and expand the FaaS panel. From there, users can execute
+various API calls to test their FaaS deployment, such as creating a
+virtual container, creating or deleting a fabric, and creating or
+editing logical network elements.
+
+Tutorials
+---------
+
+Below are tutorials for four major use cases:
+
+1. Create and provision a fabric.
+
+2. Allocate resources from the fabric to a tenant.
+
+3. Define a logical network for a tenant. Currently there are two ways
+ to create a logical network:
+
+ a. Create a GBP (Group Based Policy) profile for a tenant and then
+ convert it to a logical network via the GBP FaaS renderer, or
+
+ b. Manually create a logical network via RESTCONF APIs.
+
+4. Attach or detach an endpoint to a logical switch or logical router.
+
+Create a fabric
+~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This tutorial walks users through the process of creating a Fabric object.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+A set of virtual switches (OVS) has to be registered with or discovered
+by ODL. Mininet is recommended for creating an OVS network. After the
+OVS network is created, set the controller IP in each OVS to point to
+the ODL IP address. From ODL, the physical topology can be viewed via
+the ODL DLUX UI or retrieved via the RESTCONF API.
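+
+For example, assuming Mininet is installed and the controller runs at
+${ipaddress}, a small test topology whose switches point at ODL can be
+created as follows; the topology size and the OpenFlow port (6653, or
+6633 on older setups) are illustrative and deployment dependent:
+
+::
+
+ sudo mn --topo tree,2 --controller=remote,ip=${ipaddress},port=6653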
+
+Instructions
+^^^^^^^^^^^^
+
+- Run the OpenDaylight distribution and install odl-faas-all from the
+ Karaf console.
+
+- Go to
+ `http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__
+
+- Get the network topology after OVS switches are registered in the
+ controller
+
+- Determine the nodes and links to be included in the to-be-defined
+ Fabric object.
+
+- Execute create-fabric RESTCONF API with the corresponding JSON data
+ as required.
+
+Create virtual container for a tenant
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The purpose of this tutorial is to allocate network resources to a
+tenant.
+
+Overview
+^^^^^^^^
+
+This tutorial walks users through the process of creating a virtual
+container.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+One or more fabric objects must have been created.
+
+Instructions
+^^^^^^^^^^^^
+
+- Run the OpenDaylight Karaf distribution and install the required
+ features from the Karaf console: >feature:install odl-restconf
+ odl-faas-all odl-mdsal-apidoc
+
+- Go to
+ `http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__
+
+- Execute create-vcontainer with the following RESTCONF API and the
+ corresponding JSON data:
+ `http://${ipaddress}:8181/restconf/operations/vcontainer-topology:create-vcontainer <http://${ipaddress}:8181/restconf/operations/vcontainer-topology:create-vcontainer>`__
+
+After a virtual container is created, fabric resources and appliance
+resources can be assigned to the container object via the following
+RESTCONF APIs.
+
+- `http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-vfabric-to-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-vfabric-to-ld-node>`__
+
+- `http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-appliance-to-ld-node <http://${ipaddress}:8181/restconf/operations/vc-ld-node:add-appliance-to-ld-node>`__
+
+Create a logical network
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This tutorial walks users through the process of creating a logical
+network for a tenant.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+A virtual container has been created and assigned to the tenant.
+
+Instructions
+^^^^^^^^^^^^
+
+Currently there are two ways to create a logical network.
+
+- Option 1 is to use the logical network RESTCONF API to directly
+ create individual network elements and connect them into a network.
+
+- Option 2 is to define a GBP model, which FaaS maps automatically
+ into a logical network. Note that for option 2, if the generated
+ network requires some modification, we recommend modifying the GBP
+ model rather than changing the network directly, because there is no
+ synchronization from the network back to the GBP model in the
+ current release.
+
+Manual Provisioning
+^^^^^^^^^^^^^^^^^^^
+
+To create a logical switch:
+
+- `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-switches:logical-switches <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-switches:logical-switches>`__
+
+To create a logical router:
+
+- `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-routers:logical-routers <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-routers:logical-routers>`__
+
+To attach a logical switch to a router:
+
+- Step 1: updating/adding a port A on the logical switch
+ `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-switches:logical-switches <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-switches:logical-switches>`__
+
+- Step 2: updating/adding a port B on the logical router
+ `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-routers:logical-routers <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-routers:logical-routers>`__
+
+- Step 3: create a link between port A and port B
+ `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-edges:logical-edges <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:logical-edges:logical-edges>`__
+
+To add security policies (ACL or SFC) on a port:
+
+- `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:faas-security-rules <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:faas-security-rules>`__
+
+To query the logical network just created:
+
+- `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks>`__
+
+Provision via GBP FaaS Render
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- Run the OpenDaylight distribution and install the odl-faas-all and
+ GBP FaaS renderer features from the Karaf console.
+
+- Go to
+ `http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__
+
+- Execute "create GBP model" via the GBP REST API. The GBP model can
+ then be automatically mapped into a logical network.
+
+Attach/detach an end point to a logical device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This tutorial walks users through the process of registering an
+endpoint to a logical device, either a logical switch or a router. The
+purpose of this API is to inform FaaS where an endpoint physically
+attaches. The location information consists of the binding between a
+physical port identifier and the logical port information. The logical
+port is indicated by either the endpoint's Layer 2 attribute (MAC
+address) or its Layer 3 attribute (IP address), together with the
+logical network ID (VLAN ID). The logical network ID indirectly
+indicates the tenant ID, since it is a mutually exclusive resource
+allocated to a tenant.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+The logical switch to which the endpoints are attached has to be
+created beforehand, and the identifier of the logical switch is
+required for the following RESTCONF calls.
+
+Instructions
+^^^^^^^^^^^^
+
+- Run the OpenDaylight distribution and install odl-faas-all from the
+ Karaf console.
+
+- Go to
+ `http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__
+
+- Execute "attach end point" with the following RESTCONF API and
+ corresponding JSON data:
+ `http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:faas-endpoints-locations <http://${ipaddress}:8181/restconf/configuration/faas-logical-networks:tenant-logical-networks:faas-endpoints-locations>`__
+
It is important to show the overall philosophy of **GBP** as it sets the
project’s direction.
-In the Beryllium release of OpenDaylight, **GBP** focused on **expressed
+In this release of OpenDaylight, **GBP** focused on **expressed
intent**, **refactoring of how renderers consume and publish Subject
Feature Definitions for multi-renderer support**.
- Endpoints:
- Define concrete uniquely identifiable entities. In Beryllium,
+ Define concrete uniquely identifiable entities. In this release,
examples could be a Docker container, or a Neutron port
- EndpointGroups:
The *classifier* and *action* portions of the model can be thought of as
hooks, with their definition provided by each *renderer* about its
-domain specific capabilities. In **GBP** Beryllium, there is one
+domain specific capabilities. In **GBP** for this release, there is one
renderer, the *`OpenFlow Overlay renderer (OfOverlay). <#OfOverlay>`__*
These hooks are filled with *definitions* of the types of *features* the
looks as so:
.. figure:: ./images/groupbasedpolicy/GBP_High-levelBerylliumArchitecture.png
- :alt: GBP High Level Beryllium Architecture
+ :alt: GBP High Level Architecture
- GBP High Level Beryllium Architecture
+ GBP High Level Architecture
The major benefit of this architecture is that the mapping of the
domain-specific-language is completely separate and independent of the
can now be leveraged across NetConf devices simultaneously:
.. figure:: ./images/groupbasedpolicy/GBP_High-levelExtraRenderer.png
- :alt: GBP High Level Beryllium Architecture - adding a renderer
+ :alt: GBP High Level Architecture - adding a renderer
- GBP High Level Beryllium Architecture - adding a renderer
+ GBP High Level Architecture - adding a renderer
As other domain-specific mappings occur, they too can leverage the same
renderers, as the renderers only need to implement the **GBP** access
mapping to the access and forwarding models. For instance:
.. figure:: ./images/groupbasedpolicy/High-levelBerylliumArchitectureEvolution2.png
- :alt: GBP High Level Beryllium Architecture - adding a renderer
+ :alt: GBP High Level Architecture - adding a renderer
- GBP High Level Beryllium Architecture - adding a renderer
+ GBP High Level Architecture - adding a renderer
In summary, the **GBP** architecture:
- Consumer Endpoint Identification Constraint
- Label based criteria for matching against endpoints. In Beryllium
+ Label based criteria for matching against endpoints. In this release
this can be used to label endpoints based on IpPrefix.
The second category is the provider matchers, which match against the
- Consumer Endpoint Identification Constraint
- Label based criteria for matching against endpoints. In Beryllium
+ Label based criteria for matching against endpoints. In this release
this can be used to label endpoints based on IpPrefix.
Clauses have a list of subjects that apply when all the matchers in the
Demo/Development environment
----------------------------
-The **GBP** project for Beryllium has two demo/development environments.
+The **GBP** project for this release has two demo/development environments.
- Docker based GBP and GBP+SFC integration Vagrant environment
cardinal_-opendaylight-monitoring-as-a-service
centinel-user-guide
didm-user-guide
+ fabric-as-a-service
genius-user-guide
group-based-policy-user-guide
l2switch-user-guide
pcep-user-guide
packetcable-user-guide
service-function-chaining
+ snbi-user-guide
snmp-plugin-user-guide
snmp4sdn-user-guide
sxp-user-guide
Running the L2Switch project
----------------------------
-To run the L2 Switch inside the Lithium OpenDaylight distribution simply
+To run the L2 Switch inside the OpenDaylight distribution simply
install the ``odl-l2switch-switch-ui`` feature;
::
Prerequisites
^^^^^^^^^^^^^
-- **OpenDaylight Beryllium**
+- **OpenDaylight Boron**
- **The Postman Chrome App**: the most convenient way to follow along
this tutorial is to use the `Postman Chrome
file. You can import this file to Postman by clicking *Import* at the
top, choosing *Download from link* and then entering the following
URL:
- ``https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=blob_plain;f=resources/tutorial/Beryllium_Tutorial.json.postman_collection;hb=refs/heads/stable/beryllium``.
+ ``https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=blob_plain;f=resources/tutorial/Beryllium_Tutorial.json.postman_collection;hb=refs/heads/stable/boron``.
Alternatively, you can save the file on your machine, or if you have
the repository checked out, you can import from there. You will need
to create a new Postman Environment and define some variables within:
Mapping RPC REST API. This is so that you can see the actual request
URLs and body content on the page.
-1. Install and run OpenDaylight Beryllium release on the controller VM.
- Please follow the general OpenDaylight Beryllium Installation Guide
+1. Install and run OpenDaylight Boron release on the controller VM.
+ Please follow the general OpenDaylight Boron Installation Guide
for this step. Once the OpenDaylight controller is running install
the *odl-lispflowmapping-msmr* feature from the Karaf CLI:
.. note::
- The ``resources/tutorial`` directory in the *stable/beryllium*
+ The ``resources/tutorial`` directory in the *stable/boron*
branch of the project git repository has the files used in the
tutorial `checked
- in <https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=tree;f=resources/tutorial;hb=refs/heads/stable/beryllium>`__,
+ in <https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=tree;f=resources/tutorial;hb=refs/heads/stable/boron>`__,
so you can just copy the files to ``/root/lispd.conf`` on the
respective VMs. You will also find the JSON files referenced
below in the same directory.
- Serves as an alternative interface for MD-SAL (besides RESTCONF)
and allows users to read/write data from MD-SAL’s datastore and to
invoke its rpcs (NETCONF notifications are not available in the
- Beryllium release of OpenDaylight)
+ Boron release of OpenDaylight)
.. note::
- `RFC-6470 <https://tools.ietf.org/html/rfc6470>`__
- (partially, only the schema-change notification is available in
- Beryllium release)
+ Boron release)
- `RFC-6022 <https://tools.ietf.org/html/rfc6022>`__
- `draft-ietf-netconf-yang-library-06 <https://tools.ietf.org/html/draft-ietf-netconf-yang-library-06>`__
-Notifications over NETCONF are not supported in the Beryllium release.
+Notifications over NETCONF are not supported in the Boron release.
**Tip**
**Tip**
Download testtool from OpenDaylight Nexus at:
- https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.0.2-Beryllium-SR2/
+ https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.1.0-Boron/
**Nexus contains 3 executable tools:**
Netconf-testtool is now part of default maven build profile for
controller and can be also downloaded from nexus. The executable jar for
testtool can be found at:
-`nexus-artifacts <https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.0.2-Beryllium-SR2/>`__
+`nexus-artifacts <https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.1.0-Boron/>`__
Running testtool
^^^^^^^^^^^^^^^^
Downloading and deploy Karaf distribution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Get the Beryllium distribution.
+- Get the Boron distribution.
- Unzip the downloaded zip distribution.
The security group in openstack helps to filter packets based on
policies configured. The current implementation in openstack uses
-iptables to realize security groups. In Opendaylight instead of iptable
+iptables to realize security groups. In OpenDaylight instead of iptable
rules, OVS flows are used. This removes the many layers of
bridges/ports required in the iptables implementation.
The current rules are applied on the basis of the following attributes:
ingress/egress, protocol, port range, and prefix. In the pipeline, table
-40 is used for egress acl and table 90 for ingress acl rules.
+40 is used for egress ACL and table 90 for ingress ACL rules.
Stateful Implementation
^^^^^^^^^^^^^^^^^^^^^^^
::
- -trk - The packet was never send to netfilter framework
+ -trk - The packet was never sent to the netfilter framework
::
- +trk+est - It is already known and connection which was allowed previously,
- pass it to the next table.
+ +trk+est - It is an already known connection which was allowed previously;
+ pass it to the next table.
::
- +trk+new - This is a new connection. So if there is a specific rule in the
- table which allows this traffic with a commit action an entry will be made
- in the netfilter framework. If there is no specific rule to allow this
- traffic the packet will be dropped.
+ +trk+new - This is a new connection. If there is a specific rule in the
+ table which allows this traffic with a commit action, an entry will be made
+ in the netfilter framework. If there is no specific rule to allow this
+ traffic, the packet will be dropped.
So, by default, a packet is dropped unless there is a rule to allow
the packet.
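The three ``ct_state`` cases above amount to a simple dispatch. A minimal Python sketch of that logic (illustrative only; the table numbers and action strings mirror the flows shown below and are not actual OpenDaylight code):

```python
def egress_acl_action(ct_state, allow_rule_matches):
    """Illustrative dispatch for the egress ACL table (table 40).

    ct_state           -- set of conntrack flags, e.g. {"trk", "est"} or {"-trk"}
    allow_rule_matches -- True if a configured security-group rule matches
    """
    if "-trk" in ct_state:
        # Untracked: send the packet through the netfilter framework first.
        return "ct(table=0)"
    if "trk" in ct_state and "est" in ct_state:
        # Known, previously allowed connection: pass to the next table.
        return "goto_table:50"
    if "trk" in ct_state and "new" in ct_state:
        # New connection: commit an entry only if a rule explicitly allows it.
        return "ct(commit),goto_table:50" if allow_rule_matches else "drop"
    return "drop"  # default: drop unless a rule allows the packet
```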
::
- cookie=0x0, duration=36.848s, table=90, n_packets=2, n_bytes=717,
+ cookie=0x0, duration=36.848s, table=90, n_packets=2, n_bytes=717,
priority=61006,udp,dl_src=fa:16:3e:a1:f9:d0,
- tp_src=67,tp_dst=68 actions=goto_table:100
+ tp_src=67,tp_dst=68 actions=goto_table:100
::
- cookie=0x0, duration=36.566s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=36.566s, table=90, n_packets=0, n_bytes=0,
priority=61006,udp6,dl_src=fa:16:3e:a1:f9:d0,
- tp_src=547,tp_dst=546 actions=goto_table:100
+ tp_src=547,tp_dst=546 actions=goto_table:100
- Allow DHCP client traffic egress.
::
- cookie=0x0, duration=2165.596s, table=40, n_packets=2, n_bytes=674,
- priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50
+ cookie=0x0, duration=2165.596s, table=40, n_packets=2, n_bytes=674,
+ priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50
::
- cookie=0x0, duration=2165.513s, table=40, n_packets=0, n_bytes=0,
- priority=61012,udp6,tp_src=546,tp_dst=547 actions=goto_table:50
+ cookie=0x0, duration=2165.513s, table=40, n_packets=0, n_bytes=0,
+ priority=61012,udp6,tp_src=546,tp_dst=547 actions=goto_table:50
- Prevent DHCP server traffic from the vm port.(DHCP Spoofing)
::
- cookie=0x0, duration=34.711s, table=40, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=34.711s, table=40, n_packets=0, n_bytes=0,
priority=61011,udp,in_port=2,tp_src=67,tp_dst=68 actions=drop
::
- cookie=0x0, duration=34.519s, table=40, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=34.519s, table=40, n_packets=0, n_bytes=0,
priority=61011,udp6,in_port=2,tp_src=547,tp_dst=546 actions=drop
Arp rules
::
- cookie=0x0, duration=35.015s, table=40, n_packets=10, n_bytes=420,
+ cookie=0x0, duration=35.015s, table=40, n_packets=10, n_bytes=420,
priority=61010,arp,arp_sha=fa:16:3e:93:88:60 actions=goto_table:50
::
- cookie=0x0, duration=35.582s, table=90, n_packets=1, n_bytes=42,
- priority=61010,arp,arp_tha=fa:16:3e:93:88:60 actions=goto_table:100
+ cookie=0x0, duration=35.582s, table=90, n_packets=1, n_bytes=42,
+ priority=61010,arp,arp_tha=fa:16:3e:93:88:60 actions=goto_table:100
Conntrack rules
'''''''''''''''
::
- cookie=0x0, duration=35.015s table=40,priority=61021,in_port=3,
+ cookie=0x0, duration=35.015s table=40,priority=61021,in_port=3,
ct_state=-trk,action=ct"("table=0")"
::
- cookie=0x0, duration=35.015s table=40,priority=61020,in_port=3,
+ cookie=0x0, duration=35.015s table=40,priority=61020,in_port=3,
ct_state=+trk+est,action=goto_table:50
::
- cookie=0x0, duration=35.015s table=40,priority=36002,in_port=3,
+ cookie=0x0, duration=35.015s table=40,priority=36002,in_port=3,
ct_state=+new,actions=drop
::
- cookie=0x0, duration=35.015s table=90,priority=61022,
+ cookie=0x0, duration=35.015s table=90,priority=61022,
dl_dst=fa:16:3e:0d:8d:21,ct_state=+trk+est,action=goto_table:100
::
- cookie=0x0, duration=35.015s table=90,priority=61021,
+ cookie=0x0, duration=35.015s table=90,priority=61021,
dl_dst=fa:16:3e:0d:8d:21,ct_state=-trk,action=ct"("table=0")"
::
- cookie=0x0, duration=35.015s table=90,priority=36002,
+ cookie=0x0, duration=35.015s table=90,priority=36002,
dl_dst=fa:16:3e:0d:8d:21,ct_state=+new,actions=drop
TCP SYN Rule
::
- User can add security groups in openstack via command line or UI. When we associate this security group with a vm the flows related to each security group will be added in the related tables. A preconfigured security group called the default security group is available in neutron db.
+ Users can add security groups in OpenStack via the command line or the UI. When we associate a security group with a VM, the flows related to each of its rules are added to the relevant tables. A preconfigured security group, called the default security group, is available in the Neutron DB.
Stateful
''''''''
::
- cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
- nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
+ nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
::
- cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0,
priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
- nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100
+ nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100
::
- cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0,
priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
- nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
+ nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
Stateless
'''''''''
::
- cookie=0x0, duration=13211.171s, table=40, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=13211.171s, table=40, n_packets=0, n_bytes=0,
priority=61007,icmp,dl_src=fa:16:3e:93:88:60,nw_dst=0.0.0.0/24,
- icmp_type=2,icmp_code=4 actions=goto_table:50
+ icmp_type=2,icmp_code=4 actions=goto_table:50
::
- cookie=0x0, duration=199.674s, table=90, n_packets=0, n_bytes=0,
- priority=61007,udp,dl_dst=fa:16:3e:dc:49:ff,nw_src=10.100.5.3,tp_dst=2222
+ cookie=0x0, duration=199.674s, table=90, n_packets=0, n_bytes=0,
+ priority=61007,udp,dl_dst=fa:16:3e:dc:49:ff,nw_src=10.100.5.3,tp_dst=2222
actions=goto_table:100
::
- cookie=0x0, duration=199.780s, table=90, n_packets=0, n_bytes=0,
- priority=61007,tcp,dl_dst=fa:16:3e:93:88:60,nw_src=10.100.5.4,tp_dst=3333
+ cookie=0x0, duration=199.780s, table=90, n_packets=0, n_bytes=0,
+ priority=61007,tcp,dl_dst=fa:16:3e:93:88:60,nw_src=10.100.5.4,tp_dst=3333
actions=goto_table:100
TCP/UDP Port Range
::
- cookie=0x0, duration=56.129s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=56.129s, table=90, n_packets=0, n_bytes=0,
priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
- tp_dst=0x200/0xff00 actions=goto_table:100
+ tp_dst=0x200/0xff00 actions=goto_table:100
::
- cookie=0x0, duration=55.805s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=55.805s, table=90, n_packets=0, n_bytes=0,
priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
- tp_dst=0x160/0xffe0 actions=goto_table:100
+ tp_dst=0x160/0xffe0 actions=goto_table:100
::
- cookie=0x0, duration=55.587s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=55.587s, table=90, n_packets=0, n_bytes=0,
priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
- tp_dst=0x300/0xfff8 actions=goto_table:100
+ tp_dst=0x300/0xfff8 actions=goto_table:100
::
- cookie=0x0, duration=55.437s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=55.437s, table=90, n_packets=0, n_bytes=0,
priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
- tp_dst=0x150/0xfff0 actions=goto_table:100
+ tp_dst=0x150/0xfff0 actions=goto_table:100
::
- cookie=0x0, duration=55.282s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=55.282s, table=90, n_packets=0, n_bytes=0,
priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
- tp_dst=0x14e/0xfffe actions=goto_table:100
+ tp_dst=0x14e/0xfffe actions=goto_table:100
::
- cookie=0x0, duration=54.063s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=54.063s, table=90, n_packets=0, n_bytes=0,
priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
- tp_dst=0x308/0xfffe actions=goto_table:100
+ tp_dst=0x308/0xfffe actions=goto_table:100
::
- cookie=0x0, duration=55.130s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=55.130s, table=90, n_packets=0, n_bytes=0,
priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
- tp_dst=333 actions=goto_table:100
+ tp_dst=333 actions=goto_table:100
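The masked ``tp_dst`` matches above are the standard decomposition of a contiguous port range into value/mask pairs, since OpenFlow can only match a port against a bitmask. A hedged Python sketch of that decomposition (the range 333-777 is inferred from the flows above; the exact flow set ODL installs may differ):

```python
def port_range_to_masks(lo, hi, width=16):
    """Greedily split [lo, hi] into (value, mask) pairs for OpenFlow matching."""
    pairs = []
    while lo <= hi:
        size = lo & -lo or 1 << width      # largest block aligned at lo
        while size > hi - lo + 1:          # shrink until the block fits the range
            size >>= 1
        mask = ((1 << width) - 1) ^ (size - 1)
        pairs.append((lo, mask))
        lo += size
    return pairs
```

For the range 333-777 this produces, among others, ``0x150/0xfff0``, ``0x200/0xff00`` and ``0x308/0xfffe``, matching the flows shown above.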
CIDR/Remote Security Group
^^^^^^^^^^^^^^^^^^^^^^^^^^
::
- When adding a security group we can select the rule to applicable to a
- set of CIDR or to a set of VMs which has a particular security group
- associated with it.
+ When adding a security group we can select the rule to be applicable to a
+ set of CIDRs or to a set of VMs which have a particular security group
+ associated with them.
If CIDR is selected there will be only one flow rule added, allowing the
traffic from/to the IPs belonging to that CIDR.
::
- cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
- nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
+ nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
If a remote security group is selected a flow will be inserted for every
vm which has that security group associated.
::
- cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0,
priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
- nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100
+ nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100
::
- cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0,
+ cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0,
priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
- nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
+ nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
Rules supported in ODL
^^^^^^^^^^^^^^^^^^^^^^
The OVSDB Southbound Plugin provides a YANG model which is based on the
abstract `network topology
-model <https://github.com/opendaylight/yangtools/blob/stable/beryllium/yang/yang-parser-impl/src/test/resources/ietf/network-topology%402013-10-21.yang>`__.
+model <https://github.com/opendaylight/yangtools/blob/stable/boron/yang/yang-parser-impl/src/test/resources/ietf/network-topology%402013-10-21.yang>`__.
The details of the OVSDB YANG model are defined in the
-`ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
+`ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/boron/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
file.
The OVSDB YANG model defines three augmentations:
This section will show some examples on how to manage QoS and Queue
entries via the configuration MD-SAL. The examples will be illustrated
by using RESTCONF (see `QoS and Queue Postman
-Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons/Qos-and-Queue-Collection.json.postman_collection>`__
+Collection <https://github.com/opendaylight/ovsdb/blob/stable/boron/resources/commons/Qos-and-Queue-Collection.json.postman_collection>`__
).
A pre-requisite for managing QoS and Queue entries is that the OVS host
schema <http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf>`__
`OVSDB and Netvirt Postman
-Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons>`__
+Collection <https://github.com/opendaylight/ovsdb/blob/stable/boron/resources/commons>`__
OVSDB Hardware VTEP SouthBound Plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
{
- "network-topology:node": [
- {
- "node-id": "hwvtep://192.168.1.115:6640",
- "hwvtep:connection-info":
- {
- "hwvtep:remote-port": 6640,
- "hwvtep:remote-ip": "192.168.1.115"
- }
- }
- ]
+ "network-topology:node": [
+ {
+ "node-id": "hwvtep://192.168.1.115:6640",
+ "hwvtep:connection-info":
+ {
+ "hwvtep:remote-port": 6640,
+ "hwvtep:remote-ip": "192.168.1.115"
+ }
+ }
+ ]
}
-Please replace *odl* in the URL with the IP address of your OpendayLight
+Please replace *odl* in the URL with the IP address of your OpenDaylight
controller and change *192.168.1.115* to your hwvtep node IP.
**NOTE**: The format of node-id is fixed. It will be one of the two:
::
- hwvtep://ip:port
+ hwvtep://ip:port
Switch initiates connection:
::
- hwvtep://uuid/<uuid of switch>
+ hwvtep://uuid/<uuid of switch>
The reason for using UUID is that we can distinguish between multiple
switches if they are behind a NAT.
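A small sketch of how the two node-id forms can be told apart (a hypothetical helper for illustration, not part of the plugin API; it covers only the two base forms above, not the ``/physicalswitch/`` suffixed IDs):

```python
def parse_hwvtep_node_id(node_id):
    """Classify a hwvtep node-id as IP:port based or UUID based."""
    prefix = "hwvtep://"
    assert node_id.startswith(prefix)
    rest = node_id[len(prefix):]
    if rest.startswith("uuid/"):
        # Switch-initiated connection: a UUID stays unique behind NAT.
        return ("uuid", rest[len("uuid/"):])
    # Controller-initiated connection: identified by remote IP and port.
    ip, _, port = rest.rpartition(":")
    return ("ip", ip, int(port))
```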
::
{
- "node": [
- {
- "node-id": "hwvtep://192.168.1.115:6640",
- "hwvtep:switches": [
- {
- "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
- }
- ],
- "hwvtep:connection-info": {
- "local-ip": "192.168.92.145",
- "local-port": 47802,
- "remote-port": 6640,
- "remote-ip": "192.168.1.115"
- }
- },
- {
- "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
- "hwvtep:management-ips": [
- {
- "management-ips-key": "192.168.1.115"
- }
- ],
- "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
- "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
- "hwvtep:hwvtep-node-description": "",
- "hwvtep:tunnel-ips": [
- {
- "tunnel-ips-key": "192.168.1.115"
- }
- ],
- "hwvtep:hwvtep-node-name": "br0"
- }
- ]
+ "node": [
+ {
+ "node-id": "hwvtep://192.168.1.115:6640",
+ "hwvtep:switches": [
+ {
+ "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
+ }
+ ],
+ "hwvtep:connection-info": {
+ "local-ip": "192.168.92.145",
+ "local-port": 47802,
+ "remote-port": 6640,
+ "remote-ip": "192.168.1.115"
+ }
+ },
+ {
+ "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
+ "hwvtep:management-ips": [
+ {
+ "management-ips-key": "192.168.1.115"
+ }
+ ],
+ "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
+ "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
+ "hwvtep:hwvtep-node-description": "",
+ "hwvtep:tunnel-ips": [
+ {
+ "tunnel-ips-key": "192.168.1.115"
+ }
+ ],
+ "hwvtep:hwvtep-node-name": "br0"
+ }
+ ]
}
If there is a physical switch which has already been created by manual
::
{
- "network-topology:node": [
- {
- "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
- "hwvtep-node-name": "ps0",
- "hwvtep-node-description": "",
- "management-ips": [
- {
- "management-ips-key": "192.168.1.115"
- }
- ],
- "tunnel-ips": [
- {
- "tunnel-ips-key": "192.168.1.115"
- }
- ],
- "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
- }
- ]
+ "network-topology:node": [
+ {
+ "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
+ "hwvtep-node-name": "ps0",
+ "hwvtep-node-description": "",
+ "management-ips": [
+ {
+ "management-ips-key": "192.168.1.115"
+ }
+ ],
+ "tunnel-ips": [
+ {
+ "tunnel-ips-key": "192.168.1.115"
+ }
+ ],
+ "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
+ }
+ ]
}
Note: "managed-by" must be provided by the user. We can get its value after the
::
{
- "logical-switches": [
- {
- "hwvtep-node-name": "ls0",
- "hwvtep-node-description": "",
- "tunnel-key": "10000"
- }
- ]
+ "logical-switches": [
+ {
+ "hwvtep-node-name": "ls0",
+ "hwvtep-node-description": "",
+ "tunnel-key": "10000"
+ }
+ ]
}
Create a physical locator
::
- {
- "termination-point": [
- {
- "tp-id": "vxlan_over_ipv4:192.168.0.116",
- "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
- "dst-ip": "192.168.0.116"
- }
- ]
- }
+ {
+ "termination-point": [
+ {
+ "tp-id": "vxlan_over_ipv4:192.168.0.116",
+ "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
+ "dst-ip": "192.168.0.116"
+ }
+ ]
+ }
The "tp-id" of a locator is "{encapsulation-type}:{dst-ip}".
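Following that convention, the tp-id can be derived mechanically; a hedged sketch (the helper name is hypothetical):

```python
def locator_tp_id(encapsulation_type, dst_ip):
    """Build a physical-locator tp-id from its encapsulation type and dst-ip."""
    # The YANG identity prefix "encapsulation-type-" is dropped and the
    # remaining hyphens become underscores, e.g. "vxlan_over_ipv4".
    short = encapsulation_type.replace("encapsulation-type-", "", 1).replace("-", "_")
    return "{}:{}".format(short, dst_ip)
```

For the example above, ``locator_tp_id("encapsulation-type-vxlan-over-ipv4", "192.168.0.116")`` yields ``vxlan_over_ipv4:192.168.0.116``.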
::
{
- "remote-mcast-macs": [
- {
- "mac-entry-key": "00:00:00:00:00:00",
- "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
- "locator-set": [
- {
- "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://219.141.189.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
- }
- ]
- }
- ]
+ "remote-mcast-macs": [
+ {
+ "mac-entry-key": "00:00:00:00:00:00",
+ "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
+ "locator-set": [
+ {
+ "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://219.141.189.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
+ }
+ ]
+ }
+ ]
}
The physical locator *vxlan\_over\_ipv4:192.168.0.116* is just created
::
{
- "network-topology:termination-point": [
- {
- "tp-id": "port0",
- "hwvtep-node-name": "port0",
- "hwvtep-node-description": "",
- "vlan-bindings": [
- {
- "vlan-id-key": "100",
- "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
- }
- ]
- }
- ]
+ "network-topology:termination-point": [
+ {
+ "tp-id": "port0",
+ "hwvtep-node-name": "port0",
+ "hwvtep-node-description": "",
+ "vlan-bindings": [
+ {
+ "vlan-id-key": "100",
+ "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
+ }
+ ]
+ }
+ ]
}
At this point, we have completed the basic configuration.
::
{
- "remote-ucast-macs": [
- {
- "mac-entry-key": "11:11:11:11:11:11",
- "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
- "ipaddr": "1.1.1.1",
- "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
- }
- ]
+ "remote-ucast-macs": [
+ {
+ "mac-entry-key": "11:11:11:11:11:11",
+ "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
+ "ipaddr": "1.1.1.1",
+ "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
+ }
+ ]
}
Create a local-ucast-macs entry
--- /dev/null
+SNBI User Guide
+===============
+
+This section describes how to use the SNBI feature in OpenDaylight and
+contains configuration, administration, and management section for the
+feature.
+
+Overview
+--------
+
+Key distribution in a scaled network has always been a challenge.
+Typically, operators must perform some manual key distribution process
+before secure communication is possible between a set of network
+devices. The Secure Network Bootstrapping Infrastructure (SNBI) project
+securely and automatically brings up an integrated set of network
+devices and controllers, simplifying the process of bootstrapping
+network devices with the keys required for secure communication. SNBI
+enables connectivity to the network devices by assigning unique IPv6
+addresses and bootstrapping devices with the required keys. Admission
+control of devices into a specific domain is achieved using a whitelist
+of authorized devices.
+
+SNBI Architecture
+-----------------
+
+At a high level, SNBI architecture consists of the following components:
+
+- SNBI Registrar
+
+- SNBI Forwarding Element (FE)
+
+.. figure:: ./images/snbi/snbi_arch.png
+ :alt: SNBI Architecture Diagram
+
+ SNBI Architecture Diagram
+
+SNBI Registrar
+~~~~~~~~~~~~~~
+
+The registrar is a device in the network that validates devices against
+a whitelist and delivers the device domain certificate. The registrar
+includes the following:
+
+- RESTCONF API for Domain Whitelist Configuration
+
+- SNBI Southbound Plugin
+
+- Certificate Authority
+
+**RESTCONF API for Domain Whitelist Configuration:**
+
+Below is the YANG model to configure the whitelist of devices for a
+particular domain.
+
+::
+
+ module snbi {
+ //The yang version - today only 1 version exists. If omitted defaults to 1.
+ yang-version 1;
+
+ //a unique namespace for this SNBI module, to uniquely identify it from other modules that may have the same name.
+ namespace "http://netconfcentral.org/ns/snbi";
+
+ //a shorter prefix that represents the namespace for references used below
+ prefix snbi;
+
+ //Defines the organization which defined / owns this .yang file.
+ organization "Netconf Central";
+
+ //defines the primary contact of this yang file.
+ contact "snbi-dev";
+
+ //provides a description of this .yang file.
+ description "YANG version for SNBI.";
+
+ //defines the dates of revisions for this yang file
+ revision "2024-07-02" {
+ description "SNBI module";
+ }
+
+ typedef UDI {
+ type string;
+ description "Unique Device Identifier";
+ }
+
+ container snbi-domain {
+ leaf domain-name {
+ type string;
+ description "The SNBI domain name";
+ }
+
+ list device-list {
+ key "list-name";
+
+ leaf list-name {
+ type string;
+ description "Name of the device list";
+ }
+
+ leaf list-type {
+ type enumeration {
+ enum "white";
+ }
+ description "Indicates the type of the list";
+ }
+
+ leaf active {
+ type boolean;
+ description "Indicates whether the list is active or not";
+ }
+
+ list devices {
+ key "device-identifier";
+ leaf device-identifier {
+ type union {
+ type UDI;
+ }
+ }
+ }
+ }
+ }
+ }
+
+**Southbound Plugin:**
+
+The Southbound Plugin implements the protocol state machine necessary to
+exchange device identifiers, and deliver certificates.
+
+**Certificate Authority:**
+
+A simple certificate authority is implemented using the Bouncy Castle
+package. The Certificate Authority creates the certificates from the
+device CSR requests received from the devices. The certificates thus
+generated are delivered to the devices using the Southbound Plugin.
+
+SNBI Forwarding Element
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The forwarding element must be installed or unpacked on a Linux host
+whose network layer traffic must be secured. The FE performs the
+following functions:
+
+- Neighbour Discovery
+
+- Bootstrap
+
+- Host Configuration
+
+**Neighbour Discovery:**
+
+Neighbour Discovery (ND) is the first step in accommodating devices in a
+secure network. SNBI performs periodic neighbour discovery of SNBI
+agents by transmitting ND hello packets. The discovered devices are
+populated in an ND table. Neighbour Discovery is periodic and
+bidirectional. ND hello packets are transmitted every 10 seconds. A 40
+second refresh timer is set for each discovered neighbour. On expiry of
+the refresh timer, the Neighbour Adjacency is removed from the ND table
+as the Neighbour Adjacency is no longer valid. Since the same SNBI
+neighbour may be discovered on multiple links, the expiry of a device
+on one link does not automatically remove the device entry from the ND
+table.
+
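The timer behaviour described above can be sketched as follows (a hedged illustration in Python, not the actual FE code; times are in seconds):

```python
HELLO_INTERVAL = 10   # ND hello packets are transmitted every 10 seconds
REFRESH_TIMEOUT = 40  # a neighbour expires 40 seconds after its last hello

class NdTable:
    """Neighbour table keyed per (neighbour, link) with refresh-timer expiry."""
    def __init__(self):
        self.adjacencies = {}  # (neighbour, link) -> last_seen timestamp

    def hello_received(self, neighbour, link, now):
        # Each received hello restarts that adjacency's refresh timer.
        self.adjacencies[(neighbour, link)] = now

    def expire(self, now):
        # Expiry is per (neighbour, link): losing one link does not remove
        # the same neighbour discovered on another link.
        self.adjacencies = {k: t for k, t in self.adjacencies.items()
                            if now - t < REFRESH_TIMEOUT}
```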
+**Bootstrapping:**
+
+Bootstrapping a device involves the following sequential steps:
+
+- Authenticate a device using device identifier (UDI or SUDI)
+
+- Allocate the appropriate device ID and IPv6 address to uniquely
+ identify the device in the network
+
+- Allocate the required keys by installing a Device Domain Certificate
+
+- Accommodate the device in the domain
+
+**Host Configuration:**
+
+Host configuration involves configuring a host to create a secure
+overlay network: assigning an appropriate IPv6 address, setting up GRE
+tunnels, securing the tunnel traffic via IPsec, and enabling
+connectivity via a routing protocol.
+
+The SNBI Forwarding Element is packaged in a docker container available
+at this link: https://hub.docker.com/r/snbi/boron/. For more information
+on docker, refer to this link: https://docs.docker.com/linux/.
+
+Prerequisites for Configuring SNBI
+----------------------------------
+
+Before proceeding further, ensure that the following system requirements
+are met:
+
+- 64-bit Ubuntu 14.04 LTS
+
+- 4GB RAM
+
+- 4GB of hard disk space, sufficient to store certificates
+
+- Java Virtual Machine 1.8 or above
+
+- Apache Maven 3.3.3 or above
+
+- Make sure the time on all the devices is synced, either manually or
+  using NTP
+
+- The Docker version must be greater than 1.0 on Ubuntu 14.04
+
+Configuring SNBI
+----------------
+
+This section contains the following:
+
+- Setting up SNBI Registrar on the controller
+
+- Configuring Whitelist
+
+- Setting up SNBI FE on Linux Hosts
+
+Setting up SNBI Registrar on the controller
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section contains the following:
+
+- Configuring the Registrar Host
+
+- Installing Karaf Package
+
+- Configuring SNBI Registrar
+
+**Configuring the Registrar Host:**
+
+Before enabling the SNBI registrar service, assign an IPv6 address to an
+interface on the registrar host. This is to bind the registrar service
+to an IPv6 address (**fd08::aaaa:bbbb:1/128**).
+
+::
+
+ sudo ip link add snbi-ra type dummy
+ sudo ip addr add fd08::aaaa:bbbb:1/128 dev snbi-ra
+ sudo ifconfig snbi-ra up
+
+**Installing Karaf Package:**
+
+Download the karaf package from this link:
+http://www.opendaylight.org/software/downloads, unzip and run the
+``karaf`` executable present in the bin folder. Here is an example of
+this step:
+
+::
+
+ cd distribution-karaf-0.3.0-Boron/bin
+ ./karaf
+
+Additional information on useful Karaf commands is available at this
+link:
+https://wiki.opendaylight.org/view/CrossProject:Integration_Group:karaf.
+
+**Configuring SNBI Registrar:**
+
+Before you perform this step, ensure that you have completed the tasks
+`above <#_configuring_snbi>`__:
+
+To use RESTCONF APIs, install the RESTCONF feature available in the
+Karaf package. If required, install the mdsal-apidocs module for access
+to the documentation. Refer to
+https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Restconf_API_Explorer
+for more information on the MD-SAL API docs.
+
+Use the commands below to install the required features and verify the
+same.
+
+::
+
+ feature:install odl-restconf
+ feature:install odl-mdsal-apidocs
+ feature:install odl-snbi-all
+ feature:list -i
+
+After confirming that the features are installed, use the following
+command to start SNBI registrar:
+
+::
+
+ snbi:start <domain-name>
+
+Configuring Whitelist
+~~~~~~~~~~~~~~~~~~~~~
+
+The registrar must be configured with a whitelist of devices that are
+accommodated in a specific domain. The YANG for configuring the domain
+and the associated whitelist in the controller is available at this
+link:
+https://wiki.opendaylight.org/view/SNBI_Architecture_and_Design#Registrar_YANG_Definition.
+It is recommended to use Postman to configure the registrar using
+RESTCONF.
+
+This section contains the following:
+
+- Installing PostMan
+
+- Configuring Whitelist using REST API
+
+**Installing PostMan:**
+
+Follow the steps below to install Postman in your Google Chrome browser.
+
+- Install Postman via Google Chrome browser available at this link:
+ https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en
+
+- In the chrome browser address bar, enter: chrome://apps/
+
+- Click Postman.
+
+- Enter the URL.
+
+- Click Headers.
+
+- Enter Accept: header.
+
+- Click the Basic Auth tab to create user credentials, such as user name
+ and password.
+
+- Click Send.
+
+You can download a sample Postman configuration to get started from this
+link: https://www.getpostman.com/collections/c929a2a4007ffd0a7b51
+
+**Configuring Whitelist using REST API:**
+
+The POST method below configures a domain, "secure-domain", and
+configures a whitelist of devices to be accommodated in the domain.
+
+::
+
+ {
+ "snbi-domain": {
+ "domain-name": "secure-domain",
+ "device-list": [
+ {
+ "list-name": "demo list",
+ "list-type": "white",
+ "active": true,
+ "devices": [
+ {
+ "device-id": "UDI-FirstFE"
+ },
+ {
+ "device-id": "UDI-dev1"
+ },
+ {
+ "device-id": "UDI-dev2"
+ }
+ ]
+ }
+ ]
+ }
+ }
+
+The associated device ID must be configured on the SNBI FE (see below).
+You can also use REST APIs using the API docs interface to push the
+domain and whitelist information. The API docs can be accessed at
+http://localhost:8080/apidoc/explorer. More details on the API docs
+are available at
+https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Restconf_API_Explorer
+
+Setting up SNBI FE on Linux Hosts
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The SNBI Daemon is used to bootstrap the host device with a valid device
+domain certificate and IP address for connectivity and to create a
+reachable overlay network by interacting with multiple software modules.
+
+**Device UDI:**
+
+The Device UDI or the device Unique Identifier can be derived from a
+multitude of parameters in the host machine, but most derived parameters
+are already known or do not remain constant across reloads. Therefore,
+every SNBI FE must be configured explicitly with a UDI that is present
+in the device whitelist.
+
+**First Forwarding Element:**
+
+The registrar service IP address must be provided to the first host
+(Forwarding Element) to be bootstrapped. As mentioned in the
+"Configuring the Registrar Host" section, the registrar service IP
+address is **fd08::aaaa:bbbb:1**. The First Forwarding Element must be
+configured with this IPv6 address.
+
+**Running the SNBI docker image:**
+
+The SNBI FE in the docker image picks up the UDI of the Forwarding
+Element via an environment variable provided when executing the docker
+instance. If the Forwarding Element is the first forwarding element,
+the IP address of the registrar service must also be provided.
+
+::
+
+ sudo docker run -v /etc/timezone:/etc/timezone:ro --net=host --privileged=true
+ --rm -t -i -e SNBI_UDI=UDI-FirstFE -e SNBI_REGISTRAR=fd08::aaaa:bbbb:1 snbi/boron:latest /bin/bash
+
+After the docker image is executed, you are placed at the snbi.d command
+prompt.
+
+A new Forwarding Element is bootstrapped in the same way, except that
+the registrar IP address is not required while running the docker image.
+
+::
+
+ sudo docker run --net=host --privileged=true --rm -t -i -e SNBI_UDI=UDI-dev1 snbi/boron:latest /bin/bash
+
+Administering or Managing SNBI
+------------------------------
+
+The SNBI daemon provides various show commands to verify the current
+state of the daemon. Commands are completed automatically when you
+press Tab. Enter "?" for help strings listing the available commands.
+
+::
+
+ snbi.d > show snbi
+ device Host device
+ neighbors SNBI Neighbors
+ debugs Debugs enabled
+ certificate Certificate information
+
::
- cd <Beryllium_controller_directory>/bin
- (for example, cd distribution-karaf-x.x.x-Beryllium/bin)
+ cd <Boron_controller_directory>/bin
+ (for example, cd distribution-karaf-x.x.x-Boron/bin)
::
can be performed using OpenDaylight as a centralized point for relaying
the information.
-OpenDayLight can create filters for exporting and receiving IP-SGT
+OpenDaylight can create filters for exporting and receiving IP-SGT
bindings used on specific peer groups, thus can provide more complex
maintaining of policy groups.
--------
The version of the UNI Manager (UNIMgr) plug-in included in OpenDaylight
-Beryllium release is experimental, serving as a proof-of-concept (PoC)
+Boron release is experimental, serving as a proof-of-concept (PoC)
for using features of OpenDaylight to provision networked elements with
attributes satisfying Metro Ethernet Forum (MEF) requirements for
delivery of Carrier Ethernet service. This initial version of UNIMgr
cl-unimgr-mef.yang. A copy of this module is available in the
odl-unimgr-api bundle of the UNIMgr project.
-Limitations of the PoC version of UNI Manager in OpenDaylight Beryllium
+Limitations of the PoC version of UNI Manager in OpenDaylight Boron
include those listed below: \* Uses only OVSDB southbound interface of
OpenDaylight \* Only uses UNI ID, IP Address, and speed UNI attributes
\* Only uses a subset of EVC per UNI attributes \* Does not use MEF
-egrep --color -nir --include=*.{adoc,rst} beryllium\|lithium .
-egrep --color -nir --include=*.{adoc,rst} open\ ?flow . | grep -v OpenFlow | grep -v openflow: | grep -v \-openflow\- | grep -v openflowplugin | grep -v openflowjava | grep -v Openflow13 | grep -v \_OPENFLOW | grep -v OpenflowNode | grep --color -i OpenFlow
-egrep --color -nir --include=*.{adoc,rst} open\ ?daylight . | grep -v OpenDaylight | grep -v \.opendaylight\. | grep -v \/opendaylight | grep -v \=opendaylight | grep --color -i OpenDaylight
+egrep --color -nir --include=*.rst beryllium\|lithium docs
+egrep --color -nir --include=*.rst open\ ?flow docs | grep -v OpenFlow | grep -v openflow: | grep -v \-openflow\- | grep -v openflowplugin | grep -v openflowjava | grep -v Openflow13 | grep -v \_OPENFLOW | grep -v OpenflowNode | grep --color -i OpenFlow
+egrep --color -nir --include=*.rst open\ ?daylight docs | grep -v OpenDaylight | grep -v \.opendaylight\. | grep -v \/opendaylight | grep -v \=opendaylight | grep --color -i OpenDaylight
+egrep --color -nir --include=*.rst [^-]acl[^-_\":] docs | grep -v ACL | grep -vi maclearn | grep -vi oracle | grep --color -i acl
+#the ovs[db] search seemed to produce only false positives
+#egrep --color -nir --include=*.rst [^-/_\"\']ovs[^-:k_] docs | grep -v OVS | egrep -v ovsdb[:-] | grep --color -i ovs
include::nic/nic-dev.adoc[]
-include::nemo/odl-nemo-engine-dev.adoc[]
-
include::netide/netide-developer-guide.adoc[]
include::neutron/odl-neutron-service-dev.adoc[]
// commenting this out as it contains no content
//include::reservation/reservation-dev.adoc[]
+include::faas/odl-faas-dev.adoc[]
+
include::sfc/sfc.adoc[]
include::snbi/odl-snbi-dev.adoc[]
--- /dev/null
+== Fabric As A Service
+
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/fabric-as-a-service.html
+++ /dev/null
-== NEtwork MOdeling (NEMO)
-
-=== Overview
-TBD: Overview of the NEMO feature, what logical functionality it
-provides and why you might use it as a developer.
-
-=== NEMO Architecture
-TBD: NEMO components and how they work together.
-Also include information about how the feature integrates with
-OpenDaylight and architecture.
-
-=== Key APIs and Interfaces
-TBD: Document the key things a user would want to use.
-
-==== NEMO Intent API
-TBD: A description of NEMO Intent API.
-
-=== API Reference Documentation
-TBD: Provide links to JavaDoc, REST API documentation, etc.
include::odl-ofp-message-spy.adoc[]
+include::odl-ofp-forwardingrules-sync.adoc[]
+
+include::odl-ofp-singleton-cluster-aproach.adoc[]
+
// * OpenDaylight_OpenFlow_Plugin:Backlog:Extensibility[Extensibility Framework]
include::odl-ofp-yang-models.adoc[]
--- /dev/null
+=== Application: Forwarding Rules Synchronizer
+
+==== Basics
+
+===== Description
+
+Forwarding Rules Synchronizer (FRS) is a newer version of the Forwarding Rules Manager (FRM), created to address most of FRM's shortcomings. FRS handles errors with a retry mechanism, sends a barrier when needed, and uses a single service for flows, groups and meters. It also sends fewer change requests to the device, because it calculates the difference between configurations and uses a compression queue.
+
+It is located in the Java package:
+
+[source, java]
+----
+package org.opendaylight.openflowplugin.applications.frsync;
+----
+
+===== Listeners
+
+* 1x config - FlowCapableNode
+* 1x operational - Node
+
+===== System of work
+
+* one listener in config datastore waiting for changes
+
+** update cache
+** skip the event if no operational data is present for the node
+** send a syncup entry to the reactor for synchronization
+*** node added: the "after" part of the modification plus the whole operational snapshot
+*** node updated: the "after" and "before" parts of the modification
+*** node deleted: null and the "before" part of the modification
+
+
+* one listener in operational datastore waiting for changes
+
+** update cache
+** on device connected
+*** register for cluster services
+** on device disconnected
+*** remove from cache
+*** unregister from cluster services
+** if registered for reconciliation
+*** do reconciliation through syncup (only when config present)
+
+
+* reactor
+_(provides syncup w/decorators assembled in this order)_
+
+** Cluster decorator - skip action if not master for device
+** FutureZip decorator (FutureZip extends Future decorator)
+*** Future - run delegate syncup in future - submit task to executor service
+*** FutureZip - provides state compression - compresses a config delta that is waiting for execution with the new one
+** Guard decorator - per device level locking
+** Retry decorator - register for reconciliation if syncup failed
+** Reactor impl - calculate diff from after/before parts of syncup entry and execute
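
The diff calculation done by the reactor implementation can be sketched as follows. This is an illustrative, stdlib-only sketch with hypothetical names (the real FRS code works on FlowCapableNode data, not string maps): keys present only in the "after" part are additions, keys present only in "before" are removals, and keys present in both with different bodies are updates.

```java
import java.util.*;

// Illustrative sketch of the before/after diff step (hypothetical names).
class SyncDiff {
    static Map<String, List<String>> diff(Map<String, String> before, Map<String, String> after) {
        List<String> add = new ArrayList<>();
        List<String> update = new ArrayList<>();
        List<String> remove = new ArrayList<>();
        for (Map.Entry<String, String> e : after.entrySet()) {
            String old = before.get(e.getKey());
            if (old == null) {
                add.add(e.getKey());                 // new in config
            } else if (!old.equals(e.getValue())) {
                update.add(e.getKey());              // changed body
            }
        }
        for (String id : before.keySet()) {
            if (!after.containsKey(id)) {
                remove.add(id);                      // removed from config
            }
        }
        // sort for deterministic output
        Collections.sort(add);
        Collections.sort(update);
        Collections.sort(remove);
        Map<String, List<String>> result = new LinkedHashMap<>();
        result.put("add", add);
        result.put("update", update);
        result.put("remove", remove);
        return result;
    }
}
```

Only the resulting add/update/remove sets are then turned into change requests for the device, which is why FRS sends fewer requests than an incremental approach.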
+
+===== Strategy
+
+The _old_ FRM uses an incremental strategy, applying all changes one by one, whereas FRS uses a flat batch system with changes made in bulk. It uses one service, SalFlatBatchService, instead of three (flow, group, meter).
+
+===== Boron release
+
+In Boron, FRS is provided as a separate feature that is not loaded by any other feature; it has to be installed separately:
+
+ odl-openflowplugin-app-forwardingrules-sync
+
+==== FRS additions
+
+===== Retry mechanism
+
+* is started when a change request to the device returns as failed (the node is registered for reconciliation)
+* waits for the next consistent operational snapshot and then reconciles against the actual config (not only the diff)
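
The registration logic behind this can be sketched with a small stdlib-only example (hypothetical names; the actual FRS classes differ):

```java
import java.util.*;

// Sketch of the retry idea: a failed syncup registers the node for
// reconciliation; the next consistent operational snapshot then triggers
// a full syncup against the whole actual config.
class RetryRegistry {
    private final Set<String> registered = new HashSet<>();

    // Called with the result of every syncup attempt for a node.
    void onSyncupResult(String nodeId, boolean success) {
        if (success) {
            registered.remove(nodeId);   // normal path: nothing pending
        } else {
            registered.add(nodeId);      // register for reconciliation
        }
    }

    // Called when a consistent operational snapshot for the node arrives;
    // reconciliation runs only when registered AND the config is present.
    boolean shouldReconcile(String nodeId, boolean configPresent) {
        return configPresent && registered.contains(nodeId);
    }
}
```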
+
+===== ZipQueue
+
+* only the diff (before/after) between the last config changes is sent to the device
+* when several config changes for a device are waiting in a row to be processed, they are compressed into one entry (the "after" part is still replaced with the latest)
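
The compression rule can be sketched as follows (hypothetical names, not the actual FutureZip implementation): while an entry for a device is still waiting to run, a newer change is merged into it by keeping the original "before" part and replacing the "after" part with the latest one.

```java
// Illustrative sketch of the state-compression (zip) idea.
class SyncupEntry {
    final String before;
    final String after;
    SyncupEntry(String before, String after) {
        this.before = before;
        this.after = after;
    }
}

class ZipQueueSketch {
    private SyncupEntry pending;   // at most one compressed entry per device

    // Queue a new entry; if one is already waiting, compress them into one.
    void enqueue(SyncupEntry entry) {
        if (pending == null) {
            pending = entry;
        } else {
            pending = new SyncupEntry(pending.before, entry.after);
        }
    }

    // Take the (possibly compressed) entry for execution.
    SyncupEntry take() {
        SyncupEntry e = pending;
        pending = null;
        return e;
    }
}
```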
+
+===== Cluster-aware
+
+* FRS is cluster aware, using the ClusteringSingletonServiceProvider from MD-SAL
+* on a mastership change, reconciliation is done (the node is registered for reconciliation)
+
+===== SalFlatBatchService
+
+FRS uses a service that implements barrier-waiting logic between dependent objects.
+
+==== SalFlatBatchService for FRS
+
+SalFlatBatchService was created along with the forwardingrules-sync application as the service that applications should use by default. It takes a single input containing bags of flow/group/meter objects and their common add/update/remove action, so you effectively send only one input (of specific bags) to this service.
+
+===== Workflow
+
+* prepare a plan of actions
+** mark the actions where a barrier is needed before continuing
+* run the appropriate service calls
+** start all actions that can run simultaneously
+** when an action carries a barrier-needed mark, wait for all fired jobs and only then continue with the next action
+
+Error handling:
+
+* there is a flag to stop processing on the first error (set to false by default)
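
The barrier-aware execution described above can be sketched with a stdlib-only example (hypothetical names; the real service drives flow/group/meter RPCs rather than plain runnables): steps run asynchronously, and a step marked as needing a barrier first waits for all previously fired steps to finish.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of barrier-aware batch execution.
class BatchStep {
    final String name;
    final boolean barrierBefore;
    BatchStep(String name, boolean barrierBefore) {
        this.name = name;
        this.barrierBefore = barrierBefore;
    }
}

class FlatBatchSketch {
    static List<String> execute(List<BatchStep> plan) {
        List<String> completionOrder = Collections.synchronizedList(new ArrayList<>());
        List<CompletableFuture<Void>> inFlight = new ArrayList<>();
        for (BatchStep step : plan) {
            if (step.barrierBefore) {
                // barrier: wait for all fired jobs before continuing
                CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
                inFlight.clear();
            }
            // fire the step asynchronously; simultaneous steps may interleave
            inFlight.add(CompletableFuture.runAsync(() -> completionOrder.add(step.name)));
        }
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        return completionOrder;
    }
}
```

With a barrier before the flow step, all group additions are guaranteed to be on the device before any flow referencing them is pushed.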
--- /dev/null
+=== Cluster singleton approach in plugin
+
+==== Basics
+
+===== Description
+
+The existing OpenDaylight service deployment model assumes symmetric clusters, where all services are activated on all nodes in the cluster. However, many services require that there is a single active service instance per cluster. We call such services 'singleton services'. The Entity Ownership Service (EOS) represents the base leadership choice for one Entity instance. Every cluster singleton service *type* must have its own Entity, and every cluster singleton service *instance* must have its own Entity Candidate. Every registered Entity Candidate should be notified about its actual role. All this "work" is done by MD-SAL, so the OpenFlow plugin "only" needs to register itself as a service in the *SingletonClusteringServiceProvider* given by MD-SAL.
+
+===== Change against using EOS service listener
+
+In the new clustering singleton approach, the plugin uses an API from the MD-SAL project, SingletonClusteringService, which comes with three methods.
+
+ instantiateServiceInstance()
+ closeServiceInstance()
+ getIdentifier()
+
+This service has to be registered with a SingletonClusteringServiceProvider from MD-SAL, which takes care of mastership changes in the cluster environment.
+
+The first method of SingletonClusteringService is called when the cluster node becomes the MASTER. The second is called when the status changes to SLAVE or the device is disconnected from the cluster. In the last method the plugin returns the NodeId as the ServiceGroupIdentifier.
+
+===== Startup after device is connected
+
+On plugin startup, we first need to initialize four managers, one for each working area providing information and services:
+
+* Device manager
+* RPC manager
+* Role manager
+* Statistics manager
+
+After the device connects, the Device manager listener gets the event and starts creating the context for this connection.
+
+===== Startup after device connection
+
+Services are managed by the SingletonClusteringServiceProvider from the MD-SAL project. So on startup we simply create an instance of LifecycleService and register all contexts with it.
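
The three-method contract can be illustrated with a minimal stdlib sketch (the interface and provider names below are hypothetical stand-ins; the real ones are part of MD-SAL and are not reproduced here):

```java
// Minimal sketch of the singleton service contract.
interface SingletonClusteringServiceSketch {
    void instantiateServiceInstance();  // called on becoming MASTER
    void closeServiceInstance();        // called on SLAVE / disconnect
    String getIdentifier();             // ServiceGroupIdentifier
}

class DeviceContextService implements SingletonClusteringServiceSketch {
    private final String nodeId;
    boolean master = false;

    DeviceContextService(String nodeId) {
        this.nodeId = nodeId;
    }

    @Override
    public void instantiateServiceInstance() {
        master = true;    // start device/RPC/statistics contexts here
    }

    @Override
    public void closeServiceInstance() {
        master = false;   // tear the contexts down again
    }

    @Override
    public String getIdentifier() {
        return nodeId;    // the plugin returns the NodeId here
    }
}

// The provider invokes the callbacks as cluster mastership changes.
class ProviderSketch {
    static void mastershipChanged(SingletonClusteringServiceSketch s, boolean becameMaster) {
        if (becameMaster) {
            s.instantiateServiceInstance();
        } else {
            s.closeServiceInstance();
        }
    }
}
```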
+
+==== Role change
+
+The plugin is no longer registered as an Entity Ownership Service (EOS) listener and therefore does not need to (and cannot) respond to EOS ownership changes.
+
+===== Service start
+
+Services start asynchronously, and the startup is managed by LifecycleService. If something goes wrong, LifecycleService stops starting the services in the context, which speeds up the reconnect process. The set of services has not changed, and the plugin needs to start all of the following:
+
+* Activating transaction chain manager
+* Initial gathering of device statistics
+* Initial submit to DS
+* Sending role MASTER to device
+* RPC services registration
+* Statistics gathering start
+
+===== Service stop
+
+When closeServiceInstance is invoked, the plugin simply tries to store all unsubmitted transactions, closes the transaction chain manager, stops the RPC services, stops statistics gathering, and after all that unregisters the txEntity from EOS.
\ No newline at end of file
== Authentication, Authorization and Accounting (AAA) Services
-The Boron AAA services are based on the Apache Shiro Java Security Framework. The main configuration file for AAA is located at “etc/shiro.ini” relative to the ODL karaf home directory.
-
-=== Terms And Definitions
-Token:: A claim of access to a group of resources on the controller
-Domain:: A group of resources, direct or indirect, physical, logical, or virtual, for the purpose of access control. ODL recommends using the default “sdn" domain in the Boron release.
-User:: A person who either owns or has access to a resource or group of resources on the controller
-Role:: Opaque representation of a set of permissions, which is merely a unique string as admin or guest
-Credential:: Proof of identity such as username and password, OTP, biometrics, or others
-Client:: A service or application that requires access to the controller
-Claim:: A data set of validated assertions regarding a user, e.g. the role, domain, name, etc.
-
-=== How to enable AAA
-AAA is enabled through installing the odl-aaa-shiro feature. odl-aaa-shiro is automatically installed as part of the odl-restconf offering.
-
-=== How to disable AAA
-Edit the “etc/shiro.ini” file and replace the following:
-
-----
-/** = authcBasic
-----
-
-with
-
-----
-/** = anon
-----
-
-Then restart the karaf process.
-
-NOTE: This is a change from the Lithium release, in which “etc/org.opendaylight.aaa.authn.cfg” was edited to set “authEnabled=false”. Please use the “shiro.ini” mechanism to disable AAA going forward.
-
-
-=== How application developers can leverage AAA to provide servlet security
-In order to provide security to a servlet, add the following to the servlet’s web.xml file as the first filter definition:
-
-----
-<context-param>
- <param-name>shiroEnvironmentClass</param-name>
- <param-value>org.opendaylight.aaa.shiro.web.env.KarafIniWebEnvironment</param-value>
-</context-param>
-
-<listener>
- <listener-class>org.apache.shiro.web.env.EnvironmentLoaderListener</listener-class>
-</listener>
-
-<filter>
- <filter-name>ShiroFilter</filter-name>
- <filter-class>org.opendaylight.aaa.shiro.filters.AAAShiroFilter</filter-class>
-</filter>
-
-<filter-mapping>
- <filter-name>AAAShiroFilter</filter-name>
- <url-pattern>/*</url-pattern>
-</filter-mapping>
-----
-
-NOTE: It is very important to place this AAAShiroFilter as the first javax.servlet.Filter, as Jersey applies Filters in the order they appear within web.xml. Placing the AAAShiroFilter first ensures incoming HTTP/HTTPS requests have proper credentials before any other filtering is attempted.
-
-=== AAA Realms
-AAA plugin utilizes realms to support pluggable authentication & authorization schemes. There are two parent types of realms:
-
-* AuthenticatingRealm
-** Provides no Authorization capability.
-** Users authenticated through this type of realm are treated equally.
-* AuthorizingRealm
-** AuthorizingRealm is a more sophisticated AuthenticatingRealm, which provides the additional mechanisms to distinguish users based on roles.
-** Useful for applications in which roles determine allowed cabilities.
-
-ODL Contains Four Implementations
-
-* TokenAuthRealm
-** An AuthorizingRealm built to bridge the Shiro-based AAA service with the Lithium h2-based AAA implementation.
-** Exposes a RESTful web service to manipulate IdM policy on a per-node basis. If identical AAA policy is desired across a cluster, the backing data store must be synchronized using an out of band method.
-** A python script located at “etc/idmtool” is included to help manipulate data contained in the TokenAuthRealm.
-** Enabled out of the box.
-* ODLJndiLdapRealm
-** An AuthorizingRealm built to extract identity information from IdM data contained on an LDAP server.
-** Extracts group information from LDAP, which is translated into ODL roles.
-** Useful when federating against an existing LDAP server, in which only certain types of users should have certain access privileges.
-** Disabled out of the box.
-* ODLJndiLdapRealmAuthNOnly
-** The same as ODLJndiLdapRealm, except without role extraction. Thus, all LDAP users have equal authentication and authorization rights.
-** Disabled out of the box.
-* ActiveDirectoryRealm
-
-NOTE: More than one Realm implementation can be specified. Realms are attempted in order until authentication succeeds or all realm sources are exhausted.
-
-==== TokenAuthRealm Configuration
-TokenAuthRealm stores IdM data in an h2 database on each node. Thus, configuration of a cluster currently requires configuring the desired IdM policy on each node. There are two supported methods to manipulate the TokenAuthRealm IdM configuration:
-
-* idmtool Configuration
-* RESTful Web Service Configuration
-
-===== idmtool Configuration
-A utility script located at “etc/idmtool” is used to manipulate the TokenAuthRealm IdM policy. idmtool assumes a single domain (sdn), since multiple domains are not leveraged in the Boron release. General usage information for idmtool is derived through issuing the following command:
-
-----
-$ python etc/idmtool -h
-usage: idmtool [-h] [--target-host TARGET_HOST]
- user
- {list-users,add-user,change-password,delete-user,list-domains,list-roles,add-role,delete-role,add-grant,get-grants,delete-grant}
- ...
-
-positional arguments:
- user username for BSC node
- {list-users,add-user,change-password,delete-user,list-domains,list-roles,add-role,delete-role,add-grant,get-grants,delete-grant}
- sub-command help
- list-users list all users
- add-user add a user
- change-password change a password
- delete-user delete a user
- list-domains list all domains
- list-roles list all roles
- add-role add a role
- delete-role delete a role
- add-grant add a grant
- get-grants get grants for userid on sdn
- delete-grant delete a grant
-
-optional arguments:
- -h, --help show this help message and exit
- --target-host TARGET_HOST
- target host node
-----
-
-====== Add a user
-
-----
-python etc/idmtool admin add-user newUser
-Password:
-Enter new password:
-Re-enter password:
-add_user(admin)
-
-command succeeded!
-
-json:
-{
- "description": "",
- "domainid": "sdn",
- "email": "",
- "enabled": true,
- "name": "newUser",
- "password": "**********",
- "salt": "**********",
- "userid": "newUser@sdn"
-}
-----
-
-NOTE: AAA redacts the password and salt fields for security purposes.
-
-====== Delete a user
-
-----
-$ python etc/idmtool admin delete-user newUser@sdn
-Password:
-delete_user(newUser@sdn)
-
-command succeeded!
-----
-
-====== List all users
-
-----
-$ python etc/idmtool admin list-users
-Password:
-list_users
-
-command succeeded!
-
-json:
-{
- "users": [
- {
- "description": "user user",
- "domainid": "sdn",
- "email": "",
- "enabled": true,
- "name": "user",
- "password": "**********",
- "salt": "**********",
- "userid": "user@sdn"
- },
- {
- "description": "admin user",
- "domainid": "sdn",
- "email": "",
- "enabled": true,
- "name": "admin",
- "password": "**********",
- "salt": "**********",
- "userid": "admin@sdn"
- }
- ]
-}
-----
-
-====== Change a user’s password
-
-----
-$ python etc/idmtool admin change-password admin@sdn
-Password:
-Enter new password:
-Re-enter password:
-change_password(admin)
-
-command succeeded!
-
-json:
-{
- "description": "admin user",
- "domainid": "sdn",
- "email": "",
- "enabled": true,
- "name": "admin",
- "password": "**********",
- "salt": "**********",
- "userid": "admin@sdn"
-}
-----
-
-====== Add a role
-
-----
-$ python etc/idmtool admin add-role network-admin
-Password:
-add_role(network-admin)
-
-command succeeded!
-
-json:
-{
- "description": "",
- "domainid": "sdn",
- "name": "network-admin",
- "roleid": "network-admin@sdn"
-}
-----
-
-====== Delete a role
-
-----
-$ python etc/idmtool admin delete-role network-admin@sdn
-Password:
-delete_role(network-admin@sdn)
-
-command succeeded!
-----
-
-====== List all roles
-
-----
-$ python etc/idmtool admin list-roles
-Password:
-list_roles
-
-command succeeded!
-
-json:
-{
- "roles": [
- {
- "description": "a role for admins",
- "domainid": "sdn",
- "name": "admin",
- "roleid": "admin@sdn"
- },
- {
- "description": "a role for users",
- "domainid": "sdn",
- "name": "user",
- "roleid": "user@sdn"
- }
- ]
-}
-----
-
-====== List all domains
-
-----
-$ python etc/idmtool admin list-domains
-Password:
-list_domains
-
-command succeeded!
-
-json:
-{
- "domains": [
- {
- "description": "default odl sdn domain",
- "domainid": "sdn",
- "enabled": true,
- "name": "sdn"
- }
- ]
-}
-----
-
-====== Add a grant
-
-----
-$ python etc/idmtool admin add-grant user@sdn admin@sdn
-Password:
-add_grant(userid=user@sdn,roleid=admin@sdn)
-
-command succeeded!
-
-json:
-{
- "domainid": "sdn",
- "grantid": "user@sdn@admin@sdn@sdn",
- "roleid": "admin@sdn",
- "userid": "user@sdn"
-}
-----
-
-====== Delete a grant
-
-----
-$ python etc/idmtool admin delete-grant user@sdn admin@sdn
-Password:
-http://localhost:8181/auth/v1/domains/sdn/users/user@sdn/roles/admin@sdn
-delete_grant(userid=user@sdn,roleid=admin@sdn)
-
-command succeeded!
-----
-
-====== Get grants for a user
-
-----
-python etc/idmtool admin get-grants admin@sdn
-Password:
-get_grants(admin@sdn)
-
-command succeeded!
-
-json:
-{
- "roles": [
- {
- "description": "a role for users",
- "domainid": "sdn",
- "name": "user",
- "roleid": "user@sdn"
- },
- {
- "description": "a role for admins",
- "domainid": "sdn",
- "name": "admin",
- "roleid": "admin@sdn"
- }
- ]
-}
-----
-
-===== RESTful Web Service
-The TokenAuthRealm IdM policy is fully configurable through a RESTful web service. Full documentation for manipulating AAA IdM data is located online (https://wiki.opendaylight.org/images/0/00/AAA_Test_Plan.docx), and a few examples are included in this guide:
-
-====== Get All Users
-
-----
-curl -u admin:admin http://localhost:8181/auth/v1/users
-OUTPUT:
-{
- "users": [
- {
- "description": "user user",
- "domainid": "sdn",
- "email": "",
- "enabled": true,
- "name": "user",
- "password": "**********",
- "salt": "**********",
- "userid": "user@sdn"
- },
- {
- "description": "admin user",
- "domainid": "sdn",
- "email": "",
- "enabled": true,
- "name": "admin",
- "password": "**********",
- "salt": "**********",
- "userid": "admin@sdn"
- }
- ]
-}
-----
-
-====== Create a User
-
-----
-curl -u admin:admin -X POST -H "Content-Type: application/json" --data-binary @./user.json http://localhost:8181/auth/v1/users
-PAYLOAD:
-{
- "name": "ryan",
- "userid": "ryan@sdn",
- "password": "ryan",
- "domainid": "sdn",
- "description": "Ryan's User Account",
- "email": "ryandgoulding@gmail.com"
-}
-
-OUTPUT:
-{
- "userid":"ryan@sdn",
- "name":"ryan",
- "description":"Ryan's User Account",
- "enabled":true,
- "email":"ryandgoulding@gmail.com",
- "password":"**********",
- "salt":"**********",
- "domainid":"sdn"
-}
-----
-
-====== Create an OAuth2 Token For Admin Scoped to SDN
-
-----
-curl -d 'grant_type=password&username=admin&password=a&scope=sdn' http://localhost:8181/oauth2/token
-
-OUTPUT:
-{
- "expires_in":3600,
- "token_type":"Bearer",
- "access_token":"5a615fbc-bcad-3759-95f4-ad97e831c730"
-}
-----
-
-====== Use an OAuth2 Token
-
-----
-curl -H "Authorization: Bearer 5a615fbc-bcad-3759-95f4-ad97e831c730" http://localhost:8181/auth/v1/domains
-{
- "domains":
- [
- {
- "domainid":"sdn",
- "name":"sdn”,
- "description":"default odl sdn domain",
- "enabled":true
- }
- ]
-}
-----
-
-==== ODLJndiLdapRealm Configuration
-LDAP integration is provided in order to externalize identity management. To configure LDAP parameters, modify "etc/shiro.ini" parameters to include the ODLJndiLdapRealm:
-
-----
-# ODL provides a few LDAP implementations, which are disabled out of the box.
-# ODLJndiLdapRealm includes authorization functionality based on LDAP elements
-# extracted through and LDAP search. This requires a bit of knowledge about
-# how your LDAP system is setup. An example is provided below:
-ldapRealm = org.opendaylight.aaa.shiro.realm.ODLJndiLdapRealm
-ldapRealm.userDnTemplate = uid={0},ou=People,dc=DOMAIN,dc=TLD
-ldapRealm.contextFactory.url = ldap://<URL>:389
-ldapRealm.searchBase = dc=DOMAIN,dc=TLD
-ldapRealm.ldapAttributeForComparison = objectClass
-ldapRealm.groupRolesMap = "Person":"admin"
-# ...
-# further down in the file...
-# Stacked realm configuration; realms are round-robbined until authentication succeeds or realm sources are exhausted.
-securityManager.realms = $tokenAuthRealm, $ldapRealm
-----
-
-This configuration allows federation with an external LDAP server, and the user's ODL role parameters are mapped to corresponding LDAP attributes as specified by the groupRolesMap. Thus, an LDAP operator can provision attributes for LDAP users that support different ODL role structures.
-
-==== ODLJndiLdapRealmAuthNOnly Configuration
-Edit the "etc/shiro.ini" file and modify the following:
-
-----
-ldapRealm = org.opendaylight.aaa.shiro.realm.ODLJndiLdapRealm
-ldapRealm.userDnTemplate = uid={0},ou=People,dc=DOMAIN,dc=TLD
-ldapRealm.contextFactory.url = ldap://<URL>:389
-# ...
-# further down in the file...
-# Stacked realm configuration; realms are round-robbined until authentication succeeds or realm sources are exhausted.
-securityManager.realms = $tokenAuthRealm, $ldapRealm
-----
-
-This is useful for setups where all LDAP users are allowed equal access.
-
-==== Token Store Configuration Parameters
-Edit the file “etc/opendaylight/karaf/08-authn-config.xml” and edit the following:
-.*timeToLive*: Configure the maximum time, in milliseconds, that tokens are to be cached. Default is 360000.
-Save the file.
-
-=== Authorization Configuration
-==== Shiro-Based Authorization
-OpenDaylight AAA has support for Role Based Access Control based on the Apache Shiro permissions system. Configuration of the authorization system is done offline; authorization currently cannot be configured after the controller is started. Thus, Authorization in the Beryllium release is aimed towards supporting coarse-grained security policies, with the aim to provide more robust configuration capabilities in the future. Shiro-based Authorization is documented on the Apache Shiro website (http://shiro.apache.org/web.html#Web-%7B%7B%5Curls%5C%7D%7D).
-
-==== Enable “admin” Role Based Access to the IdMLight RESTful web service
-Edit the “etc/shiro.ini” configuration file and add “/auth/v1/** = authcBasic, roles[admin]” above the line “/** = authcBasic” within the “urls” section.
-
-----
-/auth/v1/** = authcBasic, roles[admin]
-/** = authcBasic
-----
-
-This will restrict the idmlight rest endpoints so that a grant for admin role must be present for the requesting user.
-
-NOTE: The ordering of the authorization rules above is important!
-
-==== AuthZ Broker Facade
-
-ODL includes an experimental Authorization Broker Facade, which allows finer grained access control for REST endpoints. Since this feature was not well tested in the Boron release, it is recommended to use the Shiro-based mechanism instead, and rely on the Authorization Broker Facade for POC use only.
-
-===== AuthZ Broker Facade Feature Installation
-To install the authorization broker facade, please issue the following command in the karaf shell:
-
-----
-feature:install odl-restconf odl-aaa-authz
-----
-
-===== Add an Authorization Rule
-The following shows how one might go about securing the controller so that only admins can access restconf.
-
-----
-curl -u admin:admin -H “Content-Type: application/xml” --data-binary @./rule.json http://localhost:8181/restconf/config/authorization-schema:simple-authorization/policies/RestConfService/
-cat ./rule.json
-{
- "policies": {
- "resource": "*",
- "service":"RestConfService",
- "role": "admin"
- }
-}
-----
-
-=== Accounting Configuration
-All AAA logging is output to the standard karaf.log file.
-
-----
-log:set TRACE org.opendaylight.aaa
-----
-
-This command enables the most verbose level of logging for AAA components.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/authentication-and-authorization-services.html
include::didm/didm-user.adoc[]
+include::faas/odl-faas-user.adoc[]
+
include::groupbasedpolicy/odl-groupbasedpolicy-user-guide.adoc[]
include::l2switch/l2switch-user.adoc[]
--- /dev/null
+== Fabric As A Service
+
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/fabric-as-a-service.html
== SNBI User Guide
-This section describes how to use the SNBI feature in OpenDaylight and contains
-configuration, administration, and management section for the feature.
-=== Overview
-Key distribution in a scaled network has always been a challenge. Typically, operators must perform some manual key distribution process before secure communication is possible between a set of network devices. The Secure Network Bootstrapping Infrastructure (SNBI) project securely and automatically brings up an integrated set of network devices and controllers, simplifying the process of bootstrapping network devices with the keys required for secure communication. SNBI enables connectivity to the network devices by assigning unique IPv6 addresses and bootstrapping devices with the required keys. Admission control of devices into a specific domain is achieved using whitelist of authorized devices.
-
-=== SNBI Architecture
-At a high level, SNBI architecture consists of the following components:
-
-* SNBI Registrar
-* SNBI Forwarding Element (FE)
-
-.SNBI Architecture Diagram
-image::snbi/snbi_arch.png["SNBI Architecture",width=500]
-
-==== SNBI Registrar
-Registrar is a device in a network that validates device against a whitelist and delivers device domain certificate. Registrar includes the following:
-
-* RESCONF API for Domain Whitelist Configuration
-* SNBI Southbound Plugin
-* Certificate Authority
-
-.RESTCONF API for Domain Whitelist Configuration:
-Below is the YANG model to configure the whitelist of devices for a particular domain.
-----
-module snbi {
- //The yang version - today only 1 version exists. If omitted defaults to 1.
- yang-version 1;
-
- //a unique namespace for this SNBI module, to uniquely identify it from other modules that may have the same name.
- namespace "http://netconfcentral.org/ns/snbi";
-
- //a shorter prefix that represents the namespace for references used below
- prefix snbi;
-
- //Defines the organization which defined / owns this .yang file.
- organization "Netconf Central";
-
- //defines the primary contact of this yang file.
- contact "snbi-dev";
-
- //provides a description of this .yang file.
- description "YANG version for SNBI.";
-
- //defines the dates of revisions for this yang file
- revision "2024-07-02" {
- description "SNBI module";
- }
-
- typedef UDI {
- type string;
- description "Unique Device Identifier";
- }
-
- container snbi-domain {
- leaf domain-name {
- type string;
- description "The SNBI domain name";
- }
-
- list device-list {
- key "list-name";
-
- leaf list-name {
- type string;
- description "Name of the device list";
- }
-
- leaf list-type {
- type enumeration {
- enum "white";
- }
- description "Indicates the type of the list";
- }
-
- leaf active {
- type boolean;
- description "Indicates whether the list is active or not";
- }
-
- list devices {
- key "device-identifier";
- leaf device-identifier {
- type union {
- type UDI;
- }
- }
- }
- }
- }
-}
-----
-
-.Southbound Plugin:
-The Southbound Plugin implements the protocol state machine necessary to exchange device identifiers, and deliver certificates.
-
-.Certificate Authority:
-A simple certificate authority is implemented using the Bouncy Castle package. The Certificate Authority creates the certificates from the device CSR requests received from the devices. The certificates thus generated are delivered to the devices using the Southbound Plugin.
-
-==== SNBI Forwarding Element
-The forwarding element must be installed on a Linux host whose network layer traffic is to be secured. The FE performs the following functions:
-
-* Neighbour Discovery
-* Bootstrap
-* Host Configuration
-
-.Neighbour Discovery:
-Neighbour Discovery (ND) is the first step in accommodating devices in a secure network. SNBI performs periodic, bidirectional neighbour discovery of SNBI agents by transmitting ND hello packets every 10 seconds. The discovered devices are populated in an ND table. A 40 second refresh timer is set for each discovered neighbour; on expiry of the refresh timer, the Neighbour Adjacency is removed from the ND table as it is no longer valid. Because the same SNBI neighbour may be discovered on multiple links, the expiry of a device on one link does not automatically remove the device entry from the ND table.
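The timer behaviour described above can be sketched as follows. This is an illustrative model only, not the actual SNBI implementation; the class, method, and link names are hypothetical:

```python
# Illustrative sketch of the ND table timers: hellos every 10 s, a 40 s
# refresh timer per neighbour, and per-link entries so expiry on one link
# does not remove the neighbour learned on another link.
HELLO_INTERVAL = 10   # seconds between ND hello transmissions
REFRESH_TIMEOUT = 40  # seconds before a neighbour adjacency expires

class NeighbourTable:
    def __init__(self):
        # Keyed by (udi, link) so the same neighbour can appear once per link.
        self._last_seen = {}

    def hello_received(self, udi, link, now):
        """Create or refresh the adjacency for this (neighbour, link) pair."""
        self._last_seen[(udi, link)] = now

    def expire(self, now):
        """Remove adjacencies whose refresh timer has run out."""
        stale = [key for key, seen in self._last_seen.items()
                 if now - seen > REFRESH_TIMEOUT]
        for key in stale:
            del self._last_seen[key]

    def links_for(self, udi):
        return [link for (u, link) in self._last_seen if u == udi]

table = NeighbourTable()
table.hello_received("UDI-dev1", "eth0", now=0)
table.hello_received("UDI-dev1", "eth1", now=30)
table.expire(now=45)  # the eth0 adjacency (last seen at t=0) has expired
print(table.links_for("UDI-dev1"))  # -> ['eth1']
```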
-
-.Bootstrapping:
-Bootstrapping a device involves the following sequential steps:
-
-* Authenticate a device using device identifier (UDI or SUDI)
-* Allocate the appropriate device ID and IPv6 address to uniquely identify the device in the network
-* Allocate the required keys by installing a Device Domain Certificate
-* Accommodate the device in the domain
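The sequence above can be sketched as follows. The whitelist contents, the address-allocation scheme, and the ULA prefix are all illustrative assumptions, not the actual registrar code:

```python
# Hypothetical sketch of the bootstrap sequence; not the real SNBI registrar.
import ipaddress

WHITELIST = {"UDI-FirstFE", "UDI-dev1", "UDI-dev2"}
# Example domain prefix from which per-device IPv6 addresses are drawn.
PREFIX = ipaddress.IPv6Network("fd08::aaaa:bbbb:0/112")

def bootstrap(udi, next_device_id):
    # Step 1: authenticate the device identifier against the whitelist.
    if udi not in WHITELIST:
        raise PermissionError(f"{udi} is not whitelisted")
    # Step 2: allocate a device ID and a unique IPv6 address.
    address = PREFIX[next_device_id]
    # Steps 3-4 (install a device domain certificate, accommodate the
    # device in the domain) are handled by the registrar's CA.
    return next_device_id, address

dev_id, addr = bootstrap("UDI-dev1", 2)
print(dev_id, addr)  # -> 2 fd08::aaaa:bbbb:2
```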
-
-.Host Configuration:
-Involves configuring a host to create a secure overlay network: assigning an appropriate IPv6 address, setting up GRE tunnels, securing the tunnel traffic via IPsec, and enabling connectivity via a routing protocol.
-
-The SNBI Forwarding Element is packaged in a docker container available at this link: https://hub.docker.com/r/snbi/boron/.
-For more information on docker, refer to this link: https://docs.docker.com/linux/.
-
-=== Prerequisites for Configuring SNBI
-Before proceeding further, ensure that the following system requirements are met:
-
-* 64-bit Ubuntu 14.04 LTS
-* 4GB RAM
-* 4GB of hard disk space, sufficient to store certificates
-* Java Virtual Machine 1.8 or above
-* Apache Maven 3.3.3 or above
-* Make sure the time on all the devices is synced, either manually or using NTP
-* Docker version greater than 1.0 on Ubuntu 14.04
-
-=== Configuring SNBI
-This section contains the following:
-
-* Setting up SNBI Registrar on the controller
-* Configuring Whitelist
-* Setting up SNBI FE on Linux Hosts
-
-==== Setting up SNBI Registrar on the controller
-This section contains the following:
-
-* Configuring the Registrar Host
-* Installing Karaf Package
-* Configuring SNBI Registrar
-
-.Configuring the Registrar Host:
-Before enabling the SNBI registrar service, assign an IPv6 address to an interface on the registrar host. This is to bind the registrar service to an IPv6 address (*fd08::aaaa:bbbb:1/128*).
-----
-sudo ip link add snbi-ra type dummy
-sudo ip addr add fd08::aaaa:bbbb:1/128 dev snbi-ra
-sudo ifconfig snbi-ra up
-----
-
-.Installing Karaf Package:
-Download the karaf package from this link: http://www.opendaylight.org/software/downloads, unzip and run the `karaf` executable present in the bin folder. Here is an example of this step:
-----
-cd distribution-karaf-0.5.0-Boron/bin
-./karaf
-----
-
-Additional information on useful Karaf commands is available at this link: https://wiki.opendaylight.org/view/CrossProject:Integration_Group:karaf.
-
-.Configuring SNBI Registrar:
-Before you perform this step, ensure that you have completed the tasks
-described <<_configuring_snbi, above>>.
-
-To use RESTCONF APIs, install the RESTCONF feature available in the Karaf package.
-If required, install the mdsal-apidocs module for access to documentation. Refer to https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Restconf_API_Explorer for more information on the MD-SAL API docs.
-
-Use the commands below to install the required features and verify the installation.
-----
-feature:install odl-restconf
-feature:install odl-mdsal-apidocs
-feature:install odl-snbi-all
-feature:list -i
-----
-
-After confirming that the features are installed, use the following command to start SNBI registrar:
-----
-snbi:start <domain-name>
-----
-
-==== Configuring Whitelist
-The registrar must be configured with a whitelist of devices that are accommodated in a specific domain. The YANG for configuring the domain and the associated whitelist in the controller is available at this link: https://wiki.opendaylight.org/view/SNBI_Architecture_and_Design#Registrar_YANG_Definition.
-It is recommended to use Postman to configure the registrar using RESTCONF.
-
-This section contains the following:
-
-* Installing PostMan
-* Configuring Whitelist using REST API
-
-.Installing PostMan:
-Follow the steps below to install Postman in your Google Chrome browser.
-
-* Install Postman via the Google Chrome browser, available at this link: https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en
-* In the Chrome browser address bar, enter: chrome://apps/
-* Click Postman.
-* Enter the request URL.
-* Click Headers.
-* Enter the Accept: header.
-* Click the Basic Auth tab to enter user credentials, such as user name and password.
-* Click Send.
-
-You can download a sample Postman configuration to get started from this link: https://www.getpostman.com/collections/c929a2a4007ffd0a7b51
-
-.Configuring Whitelist using REST API:
-
-The POST method below configures a domain named "secure-domain" and a whitelist of devices to be accommodated in the domain.
-----
-{
- "snbi-domain": {
- "domain-name": "secure-domain",
- "device-list": [
- {
- "list-name": "demo list",
- "list-type": "white",
- "active": true,
- "devices": [
- {
- "device-id": "UDI-FirstFE"
- },
- {
- "device-id": "UDI-dev1"
- },
- {
- "device-id": "UDI-dev2"
- }
- ]
- }
- ]
- }
-}
-----
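As an alternative to Postman, the payload above can be pushed programmatically. In this sketch the RESTCONF base URL and the config path `snbi:snbi-domain` are assumptions that may differ by controller release; verify them against the API docs before use:

```python
import json
import urllib.request

# Hypothetical endpoint and path -- verify against your controller's release.
URL = "http://localhost:8181/restconf/config/snbi:snbi-domain"

# Same whitelist payload as shown in the documentation above.
payload = {
    "snbi-domain": {
        "domain-name": "secure-domain",
        "device-list": [{
            "list-name": "demo list",
            "list-type": "white",
            "active": True,
            "devices": [
                {"device-id": "UDI-FirstFE"},
                {"device-id": "UDI-dev1"},
                {"device-id": "UDI-dev2"},
            ],
        }],
    }
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would send it; add an HTTP basic auth
# Authorization header with your controller credentials before doing so.
```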
-The associated device ID must be configured on the SNBI FE (see below).
-You can also use the API docs interface to push the domain and whitelist information via the REST APIs. The API docs can be accessed at http://localhost:8080/apidoc/explorer. More details on the API docs are available at https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Restconf_API_Explorer
-
-==== Setting up SNBI FE on Linux Hosts
-The SNBI Daemon is used to bootstrap the host device with a valid device domain certificate and IP address for connectivity and to create a reachable overlay network by interacting with multiple software modules.
-
-.Device UDI:
-The Device UDI, or device Unique Identifier, can be derived from a multitude of parameters on the host machine, but most derived parameters are either not unique or do not remain constant across reloads. Therefore, every SNBI FE must be configured explicitly with a UDI that is present in the device whitelist.
-
-.First Forwarding Element:
-The registrar service IP address must be provided to the first host (Forwarding Element) to be bootstrapped. As mentioned in the "Configuring the Registrar Host" section, the registrar service IP address is *fd08::aaaa:bbbb:1*. The First Forwarding Element must be configured with this IPv6 address.
-
-.Running the SNBI docker image:
-The SNBI FE in the docker image picks up the UDI of the Forwarding Element via an environment variable provided when executing the docker instance. If the Forwarding Element is the first forwarding element, the IP address of the registrar service must also be provided.
-
-----
-sudo docker run -v /etc/timezone:/etc/timezone:ro --net=host --privileged=true
---rm -t -i -e SNBI_UDI=UDI-FirstFE -e SNBI_REGISTRAR=fd08::aaaa:bbbb:1 snbi/boron:latest /bin/bash
-----
-
-After the docker image is executed, you are placed at the snbi.d command prompt.
-
-A new Forwarding Element is bootstrapped in the same way, except that the registrar IP address is not required while running the docker image.
-----
-sudo docker run --net=host --privileged=true --rm -t -i -e SNBI_UDI=UDI-dev1 snbi/boron:latest /bin/bash
-----
-
-
-=== Administering or Managing SNBI
-The SNBI daemon provides various show commands to verify the current state of the daemon. Commands are completed automatically when you press Tab on your keyboard, and entering "?" displays help strings listing the available commands.
-----
-snbi.d > show snbi
- device Host device
- neighbors SNBI Neighbors
- debugs Debugs enabled
- certificate Certificate information
-----
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/snbi-user-guide.html