+.. _sfc-user-guide:
+
Service Function Chaining
=========================
-OpenDaylight Service Function Chaining (SFC) Overiew
-----------------------------------------------------
+OpenDaylight Service Function Chaining (SFC) Overview
+-----------------------------------------------------
OpenDaylight Service Function Chaining (SFC) provides the ability to
define an ordered list of network services (e.g. firewalls, load
SFC User Interface (SFC-UI) is based on the Dlux project. It provides an
easy way to create, read, update and delete configuration stored in
-Datastore. Moreover, it shows the status of all SFC features (e.g
+datastore. Moreover, it shows the status of all SFC features (e.g.
installed, uninstalled) and Karaf log messages as well.
SFC-UI Architecture
1. Run the ODL distribution (run Karaf)
-2. In karaf console execute: ``feature:install odl-sfc-ui``
+2. In Karaf console execute: ``feature:install odl-sfc-ui``
3. Visit SFC-UI on: ``http://<odl_ip_address>:8181/sfc/index.html``
-SFC Southbound REST Plugin
+SFC Southbound REST Plug-in
---------------------------
Overview
~~~~~~~~
-The Southbound REST Plugin is used to send configuration from DataStore
+The Southbound REST Plug-in is used to send configuration from datastore
down to network devices supporting a REST API (i.e. they have a
configured REST URI). It supports POST/PUT/DELETE operations, which are
triggered accordingly by changes in the SFC data stores.
- Service Function Schedule Type (SFST)
-- Service Function Forwader (SFF)
+- Service Function Forwarder (SFF)
- Rendered Service Path (RSP)
-Southbound REST Plugin Architecture
+Southbound REST Plug-in Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-From the user perspective, the REST plugin is another SFC Southbound
-plugin used to communicate with network devices.
+From the user perspective, the REST plug-in is another SFC Southbound
+plug-in used to communicate with network devices.
.. figure:: ./images/sfc/sb-rest-architecture-user.png
- :alt: Soutbound REST Plugin integration into ODL
+ :alt: Southbound REST Plug-in integration into ODL
- Soutbound REST Plugin integration into ODL
+ Southbound REST Plug-in integration into ODL
Configuring Southbound REST Plug-in
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Run the ODL distribution (run Karaf)
-2. In karaf console execute: ``feature:install odl-sfc-sb-rest``
+2. In Karaf console execute: ``feature:install odl-sfc-sb-rest``
3. Configure REST URIs for SF/SFF through SFC User Interface or RESTCONF
(required configuration steps can be found in the tutorial stated
Tutorial
~~~~~~~~
-Comprehensive tutorial on how to use the Southbound REST Plugin and how
+A comprehensive tutorial on how to use the Southbound REST Plug-in and how
to control network devices with it can be found at:
https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101
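
For reference, pointing the plug-in at a device amounts to setting the
``rest-uri`` leaf on the SF or SFF configuration, as in this illustrative
fragment (the name and port here are hypothetical; complete examples appear
later in this guide):

.. code-block:: json

    {
        "name": "napt44-1",
        "type": "napt44",
        "rest-uri": "http://localhost:10001"
    }
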
Classifier, etc.) to OVS objects (like Bridge,
TerminationPoint=Port/Interface). The mapping takes care of automatic
instantiation (setup) of corresponding object whenever its counterpart
-is created. For example, when a new SFF is created, the SFC-OVS plugin
+is created. For example, when a new SFF is created, the SFC-OVS plug-in
will create a new OVS bridge and when a new OVS Bridge is created, the
-SFC-OVS plugin will create a new SFF.
+SFC-OVS plug-in will create a new SFF.
The feature is intended for SFC users willing to use Open vSwitch as the
underlying network infrastructure for deploying RSPs (Rendered Service
SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing
information from/to OVS devices. From the user perspective, SFC-OVS acts
-as a layer between SFC DataStore and OVSDB.
+as a layer between SFC datastore and OVSDB.
.. figure:: ./images/sfc/sfc-ovs-architecture-user.png
:alt: SFC-OVS integration into ODL
1. Run the ODL distribution (run Karaf)
-2. In karaf console execute: ``feature:install odl-sfc-ovs``
+2. In Karaf console execute: ``feature:install odl-sfc-ovs``
3. Configure Open vSwitch to use ODL as a manager, using the following
command: ``ovs-vsctl set-manager tcp:<odl_ip_address>:6640``
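
The manager connection can then be verified directly from the OVS side (the
IP address below is a placeholder for your controller's address):

.. code-block:: shell

    # point OVS at the ODL instance
    ovs-vsctl set-manager tcp:192.0.2.10:6640
    # the Manager entry should show "is_connected: true" once ODL accepts
    ovs-vsctl show
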
Overview
''''''''
This tutorial shows the usual workflow when OVS configuration is
transformed to corresponding SFC objects (in this case SFF).
-Prerequisities
+Prerequisites
''''''''''''''
- Open vSwitch installed (ovs-vsctl command available in shell)
This tutorial shows the usual workflow during the creation of an OVS Bridge
with the use of SFC APIs.
-Prerequisities
+Prerequisites
''''''''''''''
- Open vSwitch installed (ovs-vsctl command available in shell)
2. SFF data plane locator must be configured
-3. Classifier interface must be mannually added to SFF bridge.
+3. Classifier interface must be manually added to the SFF bridge.
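
Step 3 can be done with a plain ``ovs-vsctl`` port addition (the bridge and
interface names here are hypothetical and must match your setup):

.. code-block:: shell

    # attach the classifier-facing interface to the SFF bridge
    ovs-vsctl add-port br-sfc veth-classifier
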
Administering or Managing Classifier
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Rules are created using the appropriate iptables command. If the Access
Control Entry (ACE) rule is MAC address related both iptables and
-ip6tabeles rules re issued. If ACE rule is IPv4 address related, only
+ip6tables rules are issued. If the ACE rule is IPv4 address related, only
iptables rules are issued, same for IPv6.
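
As a purely illustrative sketch (not the exact rules the classifier emits),
a MAC-related ACE would result in paired rules of roughly this shape, while
an IPv4-related ACE would produce only the ``iptables`` variant:

.. code-block:: shell

    # MAC-related ACE: rules in both address families
    iptables  -A FORWARD -m mac --mac-source 00:11:22:33:44:55 -j MARK --set-mark 1
    ip6tables -A FORWARD -m mac --mac-source 00:11:22:33:44:55 -j MARK --set-mark 1

    # IPv4-related ACE: iptables only
    iptables  -A FORWARD -s 10.0.0.0/24 -j MARK --set-mark 1
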
.. note::
Administering or Managing Classifier
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Classfier runs alongside sfc\_agent, therefore the commad for starting
+Classifier runs alongside sfc\_agent, therefore the command for starting
it locally is:
::
SFC OpenFlow Renderer High Level Architecture
+.. _sfc-user-guide-sfc-of-pipeline:
+
SFC OpenFlow Switch Flow pipeline
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
assuming VLAN is used for the SFF-SF, and the other where the RSP
ingress tunnel is NSH GRE (UDP port 4789):
-+-------------+--------------------------------------+--------------------------+
-| Priority | Match | Action |
-+=============+======================================+==========================+
-| 256 | EtherType==0x8847 (MPLS unicast) | Goto Table 2 |
-+-------------+--------------------------------------+--------------------------+
-| 256 | EtherType==0x8100 (VLAN) | Goto Table 2 |
-+-------------+--------------------------------------+--------------------------+
-| 256 | EtherType==0x0800,udp,tp\_dst==4789 | Goto Table 2 |
-| | (IP v4) | |
-+-------------+--------------------------------------+--------------------------+
-| 5 | Match Any | Drop |
-+-------------+--------------------------------------+--------------------------+
++----------+-------------------------------------+--------------+
+| Priority | Match | Action |
++==========+=====================================+==============+
+| 256 | EtherType==0x8847 (MPLS unicast) | Goto Table 2 |
++----------+-------------------------------------+--------------+
+| 256 | EtherType==0x8100 (VLAN) | Goto Table 2 |
++----------+-------------------------------------+--------------+
+| 256 | EtherType==0x0800,udp,tp\_dst==4789 | Goto Table 2 |
+| | (IP v4) | |
++----------+-------------------------------------+--------------+
+| 5 | Match Any | Drop |
++----------+-------------------------------------+--------------+
Table: Table Transport Ingress
- The RSP Path 2 (symmetric downlink path) uses MPLS label 101 for
ingress and 100 for egress
-+--------------------------+--------------------------+--------------------------+
-| Priority | Match | Action |
-+==========================+==========================+==========================+
-| 256 | MPLS Label==100 | RSP Path=1, Pop MPLS, |
-| | | Goto Table 4 |
-+--------------------------+--------------------------+--------------------------+
-| 256 | MPLS Label==101 | RSP Path=2, Pop MPLS, |
-| | | Goto Table 4 |
-+--------------------------+--------------------------+--------------------------+
-| 256 | VLAN ID==1000, IP | RSP Path=1, Pop VLAN, |
-| | DSCP==1 | Goto Table 4 |
-+--------------------------+--------------------------+--------------------------+
-| 256 | VLAN ID==1000, IP | RSP Path=2, Pop VLAN, |
-| | DSCP==2 | Goto Table 4 |
-+--------------------------+--------------------------+--------------------------+
-| 5 | Match Any | Goto Table 3 |
-+--------------------------+--------------------------+--------------------------+
++----------+-------------------+-----------------------+
+| Priority | Match | Action |
++==========+===================+=======================+
+| 256 | MPLS Label==100 | RSP Path=1, Pop MPLS, |
+| | | Goto Table 4 |
++----------+-------------------+-----------------------+
+| 256 | MPLS Label==101 | RSP Path=2, Pop MPLS, |
+| | | Goto Table 4 |
++----------+-------------------+-----------------------+
+| 256 | VLAN ID==1000, IP | RSP Path=1, Pop VLAN, |
+| | DSCP==1 | Goto Table 4 |
++----------+-------------------+-----------------------+
+| 256 | VLAN ID==1000, IP | RSP Path=2, Pop VLAN, |
+| | DSCP==2 | Goto Table 4 |
++----------+-------------------+-----------------------+
+| 5 | Match Any | Goto Table 3 |
++----------+-------------------+-----------------------+
Table: Table Path Mapper
NSH paths. RSP Path 1 ingress packets come from external to SFC, for
which we don’t have the source MAC address (MacSrc).
-+------------+--------------------------------+--------------------------------+
-| Priority | Match | Action |
-+============+================================+================================+
-| 256 | RSP Path==1, MacSrc==SF1 | MacDst=SFF2, Goto Table 10 |
-+------------+--------------------------------+--------------------------------+
-| 256 | RSP Path==2, MacSrc==SF1 | Goto Table 10 |
-+------------+--------------------------------+--------------------------------+
-| 256 | RSP Path==2, MacSrc==SFF2 | MacDst=SF1, Goto Table 10 |
-+------------+--------------------------------+--------------------------------+
-| 246 | RSP Path==1 | MacDst=SF1, Goto Table 10 |
-+------------+--------------------------------+--------------------------------+
-| 256 | nsp=3,nsi=255 (SFF Ingress RSP | load:0xa000002→NXM\_NX\_TUN\_I |
-| | 3) | PV4\_DST[], |
-| | | Goto Table 10 |
-+------------+--------------------------------+--------------------------------+
-| 256 | nsp=3,nsi=254 (SFF Ingress | load:0xa00000a→NXM\_NX\_TUN\_I |
-| | from SF, RSP 3) | PV4\_DST[], |
-| | | Goto Table 10 |
-+------------+--------------------------------+--------------------------------+
-| 256 | nsp=4,nsi=254 (SFF1 Ingress | load:0xa00000a→NXM\_NX\_TUN\_I |
-| | from SFF2) | PV4\_DST[], |
-| | | Goto Table 10 |
-+------------+--------------------------------+--------------------------------+
-| 5 | Match Any | Drop |
-+------------+--------------------------------+--------------------------------+
++----------+--------------------------------+--------------------------------+
+| Priority | Match | Action |
++==========+================================+================================+
+| 256 | RSP Path==1, MacSrc==SF1 | MacDst=SFF2, Goto Table 10 |
++----------+--------------------------------+--------------------------------+
+| 256 | RSP Path==2, MacSrc==SF1 | Goto Table 10 |
++----------+--------------------------------+--------------------------------+
+| 256 | RSP Path==2, MacSrc==SFF2 | MacDst=SF1, Goto Table 10 |
++----------+--------------------------------+--------------------------------+
+| 246 | RSP Path==1 | MacDst=SF1, Goto Table 10 |
++----------+--------------------------------+--------------------------------+
+| 256 | nsp=3,nsi=255 (SFF Ingress RSP | load:0xa000002→NXM\_NX\_TUN\_I |
+| | 3) | PV4\_DST[], |
+| | | Goto Table 10 |
++----------+--------------------------------+--------------------------------+
+| 256 | nsp=3,nsi=254 (SFF Ingress | load:0xa00000a→NXM\_NX\_TUN\_I |
+| | from SF, RSP 3) | PV4\_DST[], |
+| | | Goto Table 10 |
++----------+--------------------------------+--------------------------------+
+| 256 | nsp=4,nsi=254 (SFF1 Ingress | load:0xa00000a→NXM\_NX\_TUN\_I |
+| | from SFF2) | PV4\_DST[], |
+| | | Goto Table 10 |
++----------+--------------------------------+--------------------------------+
+| 5 | Match Any | Drop |
++----------+--------------------------------+--------------------------------+
Table: Table Next Hop
Here are two examples on SFF1. RSP Paths 1 and 2 are symmetric MPLS
paths that use VLAN for the SFF-SF. RSP Paths 3 and 4 are symmetric NSH
paths. Since it is assumed that switches used for NSH will only have one
-VXLANport, the NSH packets are just sent back where they came from.
-
-+------------+--------------------------------+--------------------------------+
-| Priority | Match | Action |
-+============+================================+================================+
-| 256 | RSP Path==1, MacDst==SF1 | Push VLAN ID 1000, Port=SF1 |
-+------------+--------------------------------+--------------------------------+
-| 256 | RSP Path==1, MacDst==SFF2 | Push MPLS Label 101, Port=SFF2 |
-+------------+--------------------------------+--------------------------------+
-| 256 | RSP Path==2, MacDst==SF1 | Push VLAN ID 1000, Port=SF1 |
-+------------+--------------------------------+--------------------------------+
-| 246 | RSP Path==2 | Push MPLS Label 100, |
-| | | Port=Ingress |
-+------------+--------------------------------+--------------------------------+
-| 256 | nsp=3,nsi=255 (SFF Ingress RSP | IN\_PORT |
-| | 3) | |
-+------------+--------------------------------+--------------------------------+
-| 256 | nsp=3,nsi=254 (SFF Ingress | IN\_PORT |
-| | from SF, RSP 3) | |
-+------------+--------------------------------+--------------------------------+
-| 256 | nsp=4,nsi=254 (SFF1 Ingress | IN\_PORT |
-| | from SFF2) | |
-+------------+--------------------------------+--------------------------------+
-| 5 | Match Any | Drop |
-+------------+--------------------------------+--------------------------------+
+VXLAN port, the NSH packets are just sent back where they came from.
+
++----------+--------------------------------+--------------------------------+
+| Priority | Match | Action |
++==========+================================+================================+
+| 256 | RSP Path==1, MacDst==SF1 | Push VLAN ID 1000, Port=SF1 |
++----------+--------------------------------+--------------------------------+
+| 256 | RSP Path==1, MacDst==SFF2 | Push MPLS Label 101, Port=SFF2 |
++----------+--------------------------------+--------------------------------+
+| 256 | RSP Path==2, MacDst==SF1 | Push VLAN ID 1000, Port=SF1 |
++----------+--------------------------------+--------------------------------+
+| 246 | RSP Path==2 | Push MPLS Label 100, |
+| | | Port=Ingress |
++----------+--------------------------------+--------------------------------+
+| 256 | nsp=3,nsi=255 (SFF Ingress RSP | IN\_PORT |
+| | 3) | |
++----------+--------------------------------+--------------------------------+
+| 256 | nsp=3,nsi=254 (SFF Ingress | IN\_PORT |
+| | from SF, RSP 3) | |
++----------+--------------------------------+--------------------------------+
+| 256 | nsp=4,nsi=254 (SFF1 Ingress | IN\_PORT |
+| | from SFF2) | |
++----------+--------------------------------+--------------------------------+
+| 5 | Match Any | Drop |
++----------+--------------------------------+--------------------------------+
Table: Table Transport Egress
as illustrated above. Additionally, the SFs must be created and
connected.
+Note that RSP symmetry depends on the Service Function Path ``symmetric``
+field, if present. If it is not set, the RSP will be symmetric if any of
+the SFs involved in the chain has the ``bidirectional`` field set to true.
+
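For example, to force a symmetric RSP regardless of the SF settings, the
``symmetric`` field can be set on the Service Function Path (a fragment;
the remaining SFP attributes are elided):

.. code-block:: json

    {
        "name": "sfc-path1",
        "service-chain-name": "sfc-chain1",
        "symmetric": true
    }
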
Target Environment
^^^^^^^^^^^^^^^^^^
In all the following configuration sections, replace the ``${JSON}``
string with the appropriate JSON configuration. Also, change the
-``localhost`` desintation in the URL accordingly.
+``localhost`` destination in the URL accordingly.
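
Each configuration section then reduces to a RESTCONF call of the following
shape (a sketch: ``admin:admin`` are the default ODL credentials, and the
URL shown targets the service-function list as an example):

.. code-block:: shell

    curl -u admin:admin -H "Content-Type: application/json" -X PUT \
         -d "${JSON}" \
         http://localhost:8181/restconf/config/service-function:service-functions/
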
SFC OF Renderer NSH Tutorial
''''''''''''''''''''''''''''
{
"name": "sf1",
"type": "http-header-enrichment",
- "nsh-aware": true,
"ip-mgmt-address": "10.0.0.2",
"sf-data-plane-locator": [
{
{
"name": "sf2",
"type": "firewall",
- "nsh-aware": true,
"ip-mgmt-address": "10.0.0.3",
"sf-data-plane-locator": [
{
"service-function-chain": [
{
"name": "sfc-chain1",
- "symmetric": true,
"sfc-service-function": [
{
"name": "hdr-enrich-abstract1",
{
"input": {
"name": "sfc-path1",
- "parent-service-function-path": "sfc-path1",
- "symmetric": true
+ "parent-service-function-path": "sfc-path1"
}
}
{
"name": "sf1",
"type": "http-header-enrichment",
- "nsh-aware": false,
"ip-mgmt-address": "10.0.0.2",
"sf-data-plane-locator": [
{
{
"name": "sf2",
"type": "firewall",
- "nsh-aware": false,
"ip-mgmt-address": "10.0.0.3",
"sf-data-plane-locator": [
{
"service-function-chain": [
{
"name": "sfc-chain1",
- "symmetric": true,
"sfc-service-function": [
{
"name": "hdr-enrich-abstract1",
{
"input": {
"name": "sfc-path1",
- "parent-service-function-path": "sfc-path1",
- "symmetric": true
+ "parent-service-function-path": "sfc-path1"
}
}
{
"name": "Firewall",
"ip-mgmt-address": "172.25.73.23",
- "type": "service-function-type: firewall",
- "nsh-aware": true,
+ "type": "firewall",
"sf-data-plane-locator": [
{
"name": "firewall-dpl",
{
"name": "Dpi",
"ip-mgmt-address": "172.25.73.23",
- "type":"service-function-type: dpi",
- "nsh-aware": true,
+ "type":"dpi",
"sf-data-plane-locator": [
{
"name": "dpi-dpl",
{
"name": "Qos",
"ip-mgmt-address": "172.25.73.23",
- "type":"service-function-type: qos",
- "nsh-aware": true,
+ "type":"qos",
"sf-data-plane-locator": [
{
"name": "qos-dpl",
}
All these SFs are configured on the same device as the LSF. The next
-step is to prepare Service Function Chain. SFC is symmetric.
+step is to prepare the Service Function Chain.
::
"service-function-chain": [
{
"name": "CSR3XSF",
- "symmetric": "true",
"sfc-service-function": [
{
"name": "Firewall",
- "type": "service-function-type: firewall"
+ "type": "firewall"
},
{
"name": "Dpi",
- "type": "service-function-type: dpi"
+ "type": "dpi"
},
{
"name": "Qos",
- "type": "service-function-type: qos"
+ "type": "qos"
}
]
}
{
"input": {
"name": "CSR3XSF-Path-RSP",
- "parent-service-function-path": "CSR3XSF-Path",
- "symmetric": true
+ "parent-service-function-path": "CSR3XSF-Path"
}
}
Prerequisites
'''''''''''''
-- Depolyed topology (include SFFs, SFs and their links).
+- Deployed topology (including SFFs, SFs and their links).
- Dijkstra’s algorithm. More information on Dijkstra’s algorithm can be
found on the wiki here:
"service-function-dictionary": [
{
"sff-sf-data-plane-locator": {
- "port": 10001,
- "ip": "10.3.1.103"
+ "sf-dpl-name": "sf1dpl",
+ "sff-dpl-name": "sff1dpl"
},
"name": "napt44-1",
- "type": "service-function-type:napt44"
+ "type": "napt44"
},
{
"sff-sf-data-plane-locator": {
- "port": 10003,
- "ip": "10.3.1.102"
+ "sf-dpl-name": "sf2dpl",
+ "sff-dpl-name": "sff2dpl"
},
"name": "firewall-1",
- "type": "service-function-type:firewall"
+ "type": "firewall"
}
],
"connected-sff-dictionary": [
"service-function-dictionary": [
{
"sff-sf-data-plane-locator": {
- "port": 10002,
- "ip": "10.3.1.103"
+ "sf-dpl-name": "sf1dpl",
+ "sff-dpl-name": "sff1dpl"
},
"name": "napt44-2",
- "type": "service-function-type:napt44"
+ "type": "napt44"
},
{
"sff-sf-data-plane-locator": {
- "port": 10004,
- "ip": "10.3.1.101"
+ "sf-dpl-name": "sf2dpl",
+ "sff-dpl-name": "sff2dpl"
},
"name": "firewall-2",
- "type": "service-function-type:firewall"
+ "type": "firewall"
}
],
"connected-sff-dictionary": [
"service-function-dictionary": [
{
"sff-sf-data-plane-locator": {
- "port": 10005,
- "ip": "10.3.1.104"
+ "sf-dpl-name": "sf1dpl",
+ "sff-dpl-name": "sff1dpl"
},
"name": "test-server",
- "type": "service-function-type:dpi"
+ "type": "dpi"
},
{
"sff-sf-data-plane-locator": {
- "port": 10006,
- "ip": "10.3.1.102"
+ "sf-dpl-name": "sf2dpl",
+ "sff-dpl-name": "sff2dpl"
},
"name": "test-client",
- "type": "service-function-type:dpi"
+ "type": "dpi"
}
],
"connected-sff-dictionary": [
}
],
"name": "napt44-1",
- "type": "service-function-type:napt44",
- "nsh-aware": true
+ "type": "napt44"
},
{
"rest-uri": "http://localhost:10002",
}
],
"name": "napt44-2",
- "type": "service-function-type:napt44",
- "nsh-aware": true
+ "type": "napt44"
},
{
"rest-uri": "http://localhost:10003",
}
],
"name": "firewall-1",
- "type": "service-function-type:firewall",
- "nsh-aware": true
+ "type": "firewall"
},
{
"rest-uri": "http://localhost:10004",
}
],
"name": "firewall-2",
- "type": "service-function-type:firewall",
- "nsh-aware": true
+ "type": "firewall"
},
{
"rest-uri": "http://localhost:10005",
}
],
"name": "test-server",
- "type": "service-function-type:dpi",
- "nsh-aware": true
+ "type": "dpi"
},
{
"rest-uri": "http://localhost:10006",
}
],
"name": "test-client",
- "type": "service-function-type:dpi",
- "nsh-aware": true
+ "type": "dpi"
}
]
}
}
-The depolyed topology like this:
+The deployed topology looks like this:
::
"ip-mgmt-address": "10.3.1.103",
"algorithm": "alg1",
"name": "SFG1",
- "type": "service-function-type:napt44",
+ "type": "napt44",
"sfc-service-function": [
{
"name":"napt44-104"
Overview
~~~~~~~~
-Early Service Function Chaining (SFC) Proof of Transit (SFC Proof of
-Transit) implements Service Chaining Proof of Transit functionality on
-capable switches. After the creation of an Rendered Service Path (RSP),
-a user can configure to enable SFC proof of transit on the selected RSP
-to effect the proof of transit.
+Several deployments use traffic engineering, policy routing, segment
+routing or service function chaining (SFC) to steer packets through a
+specific set of nodes. In certain cases, regulatory obligations or a
+compliance policy require proof that all packets that are supposed to
+follow a specific path are indeed being forwarded across the exact set
+of nodes specified. That is, if a packet flow is supposed to go through a
+series of service functions or network nodes, it has to be proven that
+all packets of the flow actually went through the service chain or
+collection of nodes specified by the policy. In case the packets of a
+flow were not appropriately processed, a proof of transit egress device
+would be required to identify the policy violation and take the
+actions the policy prescribes (e.g. drop or redirect the packet, send
+an alert, etc.).
+
+Service Function Chaining (SFC) Proof of Transit (SFC PoT)
+implements Service Chaining Proof of Transit functionality on capable
+network devices. Proof of Transit defines mechanisms to securely
+prove that traffic transited the defined path. After the creation of a
+Rendered Service Path (RSP), a user can enable SFC proof of transit on
+the selected RSP to effect the proof of transit.
+
+To ensure that the data traffic follows a specified path or a function
+chain, meta-data is added to user traffic in the form of a header. The
+meta-data is based on a 'share of a secret' and provisioned by the SFC
+PoT configuration from ODL over a secure channel to each of the nodes
+in the SFC. This meta-data is updated at each service hop, while
+a designated node called the verifier checks whether the collected
+meta-data allows the retrieval of the secret.
+
+The following diagram shows the overview. The scheme essentially utilizes
+Shamir's secret sharing algorithm: each service is given a point on a
+curve, and as the packet travels through each service it collects these
+points (meta-data); a verifier node then tries to reconstruct the curve
+using the collected points, thus verifying that the packet traversed
+all the service functions along the chain.
+
+.. figure:: ./images/sfc/sfc-pot-intro.png
+ :alt: SFC Proof of Transit overview
+
+ SFC Proof of Transit overview
+
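The idea can be illustrated with a toy Python sketch of Shamir's scheme over
a prime field. This is only an illustration of the underlying math, not the
iOAM wire format or the actual SFC PoT implementation:

.. code-block:: python

    # Toy Shamir secret sharing: each service hop holds one share (a point
    # on a secret polynomial); the verifier recovers the secret only if
    # enough hops contributed their points.
    P = 2**31 - 1  # prime modulus for the field

    def make_shares(secret, n, coeffs):
        """Evaluate f(x) = secret + c1*x + c2*x^2 + ... at x = 1..n."""
        def f(x):
            y = 0
            for c in reversed(coeffs):
                y = (y * x + c) % P
            return (y * x + secret) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(points):
        """Lagrange interpolation at x = 0 recovers the constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    # degree-2 polynomial: any 3 of the 5 shares recover the secret
    shares = make_shares(12345, 5, [7, 11])
    assert reconstruct(shares[:3]) == 12345
    assert reconstruct(shares[:2]) != 12345  # too few hops: verification fails
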
+Transport options for different protocols include a new TLV in the SR
+header for Segment Routing, NSH Type-2 meta-data, IPv6 extension headers,
+IPv4 variants and VXLAN-GPE. More details are captured in the following
+link.
+
+In-situ OAM: https://github.com/CiscoDevNet/iOAM
Common acronyms used in the following sections:
- RSP - Rendered Service Path
-- SFCPOT - Service Function Chain Proof of Transit
+- SFC PoT - Service Function Chain Proof of Transit
+
SFC Proof of Transit Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-When SFC Proof of Transit is initialized, all required listeners are
-registered to handle incoming data. It involves ``SfcPotNodeListener``
-which stores data about all node devices including their mountpoints
-(used here as databrokers), ``SfcPotRSPDataListener``,
-``RenderedPathListener``. ``RenderedPathListener`` is used to listen for
-RSP changes. ``SfcPotRSPDataListener`` implements RPC services to enable
-or disable SFC Proof of Transit on a particular RSP. When the SFC Proof
-of Transit is invoked, RSP listeners and service implementations are
-setup to receive SFCPOT configurations. When a user configures via a
-POST RPC call to enable SFCPOT on a particular RSP, the configuration
-drives the creation of necessary augmentations to the RSP to effect the
-SFCPOT configurations.
-
-SFC Proof of Transit details
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The SFC PoT feature is implemented in two parts: a north-bound handler
+that augments the RSP, and a south-bound renderer that auto-generates the
+required parameters and passes them on to the nodes that belong to the
+SFC.
-Several deployments use traffic engineering, policy routing, segment
-routing or service function chaining (SFC) to steer packets through a
-specific set of nodes. In certain cases regulatory obligations or a
-compliance policy require to prove that all packets that are supposed to
-follow a specific path are indeed being forwarded across the exact set
-of nodes specified. I.e. if a packet flow is supposed to go through a
-series of service functions or network nodes, it has to be proven that
-all packets of the flow actually went through the service chain or
-collection of nodes specified by the policy. In case the packets of a
-flow weren’t appropriately processed, a proof of transit egress device
-would be required to identify the policy violation and take
-corresponding actions (e.g. drop or redirect the packet, send an alert
-etc.) corresponding to the policy.
+The north-bound feature is enabled via the ``odl-sfc-pot`` feature, while
+the south-bound renderer is enabled via the ``odl-sfc-pot-netconf-renderer``
+feature. For SFC PoT handling, both features must be installed.
+
+RPC handlers to augment the RSP are part of ``SfcPotRpc``, while the
+RSP augmentation to enable or disable the SFC PoT feature is done via
+``SfcPotRspProcessor``.
-The SFCPOT approach is based on meta-data which is added to every
-packet. The meta data is updated at every hop and is used to verify
-whether a packet traversed all required nodes. A particular path is
-either described by a set of secret keys, or a set of shares of a single
-secret. Nodes on the path retrieve their individual keys or shares of a
-key (using for e.g. Shamir’s Shared Sharing Secret scheme) from a
-central controller. The complete key set is only known to the verifier-
-which is typically the ultimate node on a path that requires proof of
-transit. Each node in the path uses its secret or share of the secret to
-update the meta-data of the packets as the packets pass through the
-node. When the verifier receives a packet, it can use its key(s) along
-with the meta-data to validate whether the packet traversed the service
-chain correctly.
SFC Proof of Transit entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
the controller along with necessary parameters to control some of the
aspects of the SFC Proof of Transit process.
-The RPC handler identifies the RSP and generates SFC Proof of Transit
-parameters like secret share, secret etc., and adds the generated SFCPOT
-configuration parameters to SFC main as well as the various SFC hops.
-The last node in the SFC is configured as a verifier node to allow
-SFCPOT Proof of Transit process to be completed.
-
-The SFCPOT configuration generators and related handling are done by
-``SfcPotAPI``, ``SfcPotConfigGenerator``, ``SfcPotListener``,
-``SfcPotPolyAPI``, ``SfcPotPolyClassAPI`` and ``SfcPotPolyClass``.
+The RPC handler identifies the RSP and adds PoT feature meta-data such as
+the enable/disable state, the number of PoT profiles and the profile
+refresh parameters. This meta-data directs the south-bound renderer
+appropriately when RSP changes are noticed via call-backs in the renderer
+handlers.
Administering SFC Proof of Transit
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- odl-sfc-pot
+Please note that the ``odl-sfc-pot-netconf-renderer`` (or other renderers
+in the future) must be installed for the feature to take full effect. The
+details of the renderer features are described in other parts of this
+document.
+
SFC Proof of Transit Tutorial
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^
To enable a device to handle SFC Proof of Transit, it is expected that
-the netconf server device advertise capability as under ioam-scv.yang
-present under src/main/yang folder of sfc-pot feature. It is also
-expected that netconf notifications be enabled and its support
-capability advertised as capabilities.
+the NETCONF node advertise the capability defined in ioam-sb-pot.yang,
+present under the sfc-model/src/main/yang folder. It is also expected that
+base NETCONF support be enabled and the corresponding capability advertised.
+
+NETCONF support: ``urn:ietf:params:netconf:base:1.0``
+
+PoT support: ``(urn:cisco:params:xml:ns:yang:sfc-ioam-sb-pot?revision=2017-01-12)sfc-ioam-sb-pot``
It is also expected that the devices are netconf mounted and available
in the topology-netconf store.
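
Mounting a device uses the standard ODL NETCONF connector payload, sent for
example via PUT to
``/restconf/config/network-topology:network-topology/topology/topology-netconf/node/pot-node-1``
(the node name, address and credentials below are placeholders):

.. code-block:: json

    {
        "node": [
            {
                "node-id": "pot-node-1",
                "netconf-node-topology:host": "192.0.2.20",
                "netconf-node-topology:port": 830,
                "netconf-node-topology:username": "admin",
                "netconf-node-topology:password": "admin",
                "netconf-node-topology:tcp-only": false
            }
        ]
    }
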
Instructions
^^^^^^^^^^^^
-When SFC Proof of Transit is installed, all netconf nodes in
-topology-netconf are processed and all capable nodes with accessible
-mountpoints are cached.
+When SFC Proof of Transit is installed, all netconf nodes in topology-netconf
+are processed and all capable nodes with accessible mountpoints are cached.
-First step is to create the required RSP as usually done.
+The first step is to create the required RSP, as is usually done using the
+RSP creation steps in SFC main.
-Once RSP name is avaiable it is used to send a POST RPC to the
+Once the RSP name is available, it is used to send a POST RPC to the
controller similar to below:
-::
+POST -
+http://ODL-IP:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path/
- POST ./restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path
+.. code-block:: json
    {
- "input": {
- "sfc-ioam-pot-rsp-name": "rsp1"
- }
+ "input":
+ {
+ "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff",
+ "ioam-pot-enable":true,
+ "ioam-pot-num-profiles":2,
+ "ioam-pot-bit-mask":"bits32",
+ "refresh-period-time-units":"milliseconds",
+ "refresh-period-value":5000
+ }
    }
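
With the payload saved to a file, the RPC can be issued for example as
follows (``admin:admin`` are the default ODL credentials):

.. code-block:: shell

    curl -u admin:admin -H "Content-Type: application/json" -X POST \
         -d @enable-pot.json \
         http://localhost:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path
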
The following can be used to disable the SFC Proof of Transit on an RSP
-which removes the augmentations and stores back the RSP without the
-SFCPOT enabled features and also sending down a delete configuration to
-the SFCPOT configuration sub-tree in the nodes.
+and remove the related PoT configuration from the nodes.
+
+POST -
+http://ODL-IP:8181/restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path/
+
+.. code-block:: json
+
+ {
+ "input":
+ {
+      "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff"
+ }
+ }
+
+SFC PoT NETCONF Renderer User Guide
+-----------------------------------
+
+Overview
+~~~~~~~~
+
+The SFC Proof of Transit (PoT) NETCONF renderer implements SFC Proof of
+Transit functionality on NETCONF-capable devices that have advertised
+support for in-situ OAM (iOAM).
+
+It listens for updates to an existing RSP that enable or disable proof of
+transit support, and adds the auto-generated SFC PoT configuration
+parameters to all the SFC hop nodes. The last node in the SFC is configured
+as a verifier node to allow the SFC PoT process to be completed.
+
+Common acronyms are used as below:
+
+- SF - Service Function
+
+- SFC - Service Function Chain
+
+- RSP - Rendered Service Path
+
+- SFF - Service Function Forwarder
+
+
+Mapping to SFC entities
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The renderer module listens to RSP updates in ``SfcPotNetconfRSPListener``
+and triggers configuration generation in ``SfcPotNetconfIoam`` class. Node
+arrival and leaving are managed via ``SfcPotNetconfNodeManager`` and
+``SfcPotNetconfNodeListener``. In addition there is a timer thread that
+runs to generate configuration periodically to refresh the profiles in the
+nodes that are part of the SFC.
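The periodic refresh behaviour can be sketched with a simple timer loop; class and method names here are hypothetical, not the renderer's actual classes:

```python
import threading

# Illustrative sketch of the periodic profile-refresh thread described
# above. The names are hypothetical, not the renderer's actual API.
class ProfileRefresher:
    def __init__(self, period_s, regenerate):
        self.period_s = period_s       # refresh period, in seconds
        self.regenerate = regenerate   # callback that rebuilds PoT profiles
        self._stopped = threading.Event()
        self._timer = None

    def _tick(self):
        if self._stopped.is_set():
            return
        self.regenerate()              # push fresh profiles to the SFC nodes
        self._timer = threading.Timer(self.period_s, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def start(self):
        self._tick()

    def stop(self):
        self._stopped.set()
        if self._timer is not None:
            self._timer.cancel()
```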
+
+
+Administering SFC PoT NETCONF Renderer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use SFC Proof of Transit, the following Karaf features must be
+installed:
+
+- odl-sfc-model
+
+- odl-sfc-provider
+
+- odl-sfc-netconf
+
+- odl-restconf-all
+
+- odl-netconf-topology
+
+- odl-netconf-connector-all
+
+- odl-sfc-pot
+
+- odl-sfc-pot-netconf-renderer
+
+
+SFC PoT NETCONF Renderer Tutorial
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This tutorial is a simple example of how to enable SFC PoT on
+NETCONF-capable devices.
+
+Preconditions
+^^^^^^^^^^^^^
+
+The NETCONF-capable device must support the sfc-ioam-sb-pot.yang model.
+
+It is expected that a NETCONF-capable VPP device runs the Java-based
+Honeycomb (Hc2vpp) agent, which translates between NETCONF and VPP
+internal APIs.
+
+More details on in-situ OAM are available at:
+https://github.com/CiscoDevNet/iOAM
+
+Steps
+^^^^^
+
+When the SFC PoT NETCONF renderer module is installed, all NETCONF nodes in
+topology-netconf are processed and all sfc-ioam-sb-pot yang capable nodes
+with accessible mountpoints are cached.
+
+The first step is to create an RSP for the SFC as per the SFC guidelines above.
+
+SFC PoT is enabled on the RSP via RESTCONF to ODL, as outlined above.
+
+Internally, the NETCONF renderer will act on the callback to a modified RSP
+that has PoT enabled.
+
+SFC PoT parameters are then auto-generated using in-situ OAM algorithms
+and sent to these nodes via NETCONF.
+
+Logical Service Function Forwarder
+----------------------------------
+
+Overview
+~~~~~~~~
+
+.. _sfc-user-guide-logical-sff-motivation:
+
+Rationale
+^^^^^^^^^
+When the current SFC is deployed in a cloud environment, it is assumed that
+each switch connected to a Service Function is configured as a Service
+Function Forwarder, and that each Service Function is connected to the
+Service Function Forwarder that corresponds to the Compute Node where its
+Virtual Machine is located.
+
+.. figure:: ./images/sfc/sfc-in-cloud.png
+ :alt: Deploying SFC in Cloud Environments
+
+As shown in the picture above, this solution allows the basic cloud use cases
+to be fulfilled, for example the ones required in OPNFV Brahmaputra. However,
+some advanced use cases, such as the transparent migration of VMs, cannot be
+implemented. The Logical Service Function Forwarder enables the following
+advanced use cases:
+
+1. Service Function mobility without service disruption
+2. Service Functions load balancing and failover
+
+As shown in the picture below, the Logical Service Function Forwarder concept
+extends the current SFC northbound API to provide an abstraction of the
+underlying Data Center infrastructure. The Data Center underlying network can
+be abstracted by a single SFF. This single SFF uses the logical port UUID as
+data plane locator to connect SFs globally and in a location-transparent
+manner. SFC makes use of the `Genius <./genius-user-guide.html>`__ project to
+track the location of the SF's logical ports.
+
+.. figure:: ./images/sfc/single-logical-sff-concept.png
+ :alt: Single Logical SFF concept
+
+The SFC internally distributes the necessary flow state over the relevant switches based on the
+internal Data Center topology and the deployment of SFs.
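A minimal sketch of the underlying idea: track which data plane node currently hosts each SF logical port (information that Genius provides in practice) and flag when flows must be re-rendered. All names are illustrative:

```python
# Hypothetical sketch (not OpenDaylight code) of tracking which switch
# hosts each SF logical port, as a Logical SFF must do via Genius.
class LogicalPortTracker:
    def __init__(self):
        self.port_to_node = {}  # logical port UUID -> data plane node id

    def update_location(self, port_uuid, node_id):
        """Record a port location; return True when flows must be re-rendered."""
        previous = self.port_to_node.get(port_uuid)
        self.port_to_node[port_uuid] = node_id
        return previous is not None and previous != node_id

tracker = LogicalPortTracker()
# Port UUID taken from the Service Functions example below.
tracker.update_location("eccb57ae-5a2e-467f-823e-45d7bb2a6a9a", "Node4")
# VM migration: the same logical port appears on another compute node.
moved = tracker.update_location("eccb57ae-5a2e-467f-823e-45d7bb2a6a9a", "Node2")
```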
+
+Changes in data model
+~~~~~~~~~~~~~~~~~~~~~
+The Logical Service Function Forwarder concept extends the current SFC northbound API to provide
+an abstraction of the underlying Data Center infrastructure.
+
+The Logical SFF simplifies the configuration of the current SFC data model by
+reducing the number of parameters to be configured in every SFF, since the
+controller will discover those parameters by interacting with the services
+offered by the `Genius <./genius-user-guide.html>`__ project.
+
+The following picture shows the Logical SFF data model. The model is
+simplified, as most of the configuration parameters of the current SFC data
+model are discovered at runtime. The complete YANG model can be found in the
+`logical SFF model
+<https://github.com/opendaylight/sfc/blob/master/sfc-model/src/main/yang/service-function-forwarder-logical.yang>`__.
+
+.. figure:: ./images/sfc/logical-sff-datamodel.png
+ :alt: Logical SFF data model
+
+How to configure the Logical SFF
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The following examples show how to configure the Logical SFF:
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
+
+**Service Functions JSON.**
+
+::
+
+ {
+ "service-functions": {
+ "service-function": [
+ {
+ "name": "firewall-1",
+ "type": "firewall",
+ "sf-data-plane-locator": [
+ {
+ "name": "firewall-dpl",
+ "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
+ "transport": "service-locator:eth-nsh",
+ "service-function-forwarder": "sfflogical1"
+
+ }
+ ]
+ },
+ {
+ "name": "dpi-1",
+ "type": "dpi",
+ "sf-data-plane-locator": [
+ {
+ "name": "dpi-dpl",
+ "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
+ "transport": "service-locator:eth-nsh",
+ "service-function-forwarder": "sfflogical1"
+ }
+ ]
+ }
+ ]
+ }
+ }
::
- POST ./restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path
+ curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
+
+**Service Function Forwarders JSON.**
+
+::
+
{
- "input": {
- "sfc-ioam-pot-rsp-name": "rsp1"
- }
+ "service-function-forwarders": {
+ "service-function-forwarder": [
+ {
+ "name": "sfflogical1"
+ }
+ ]
+ }
+ }
+
+::
+
+ curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
+
+**Service Function Chains JSON.**
+
+::
+
+ {
+ "service-function-chains": {
+ "service-function-chain": [
+ {
+ "name": "SFC1",
+ "sfc-service-function": [
+ {
+ "name": "dpi-abstract1",
+ "type": "dpi"
+ },
+ {
+ "name": "firewall-abstract1",
+ "type": "firewall"
+ }
+ ]
+ },
+ {
+ "name": "SFC2",
+ "sfc-service-function": [
+ {
+ "name": "dpi-abstract1",
+ "type": "dpi"
+ }
+ ]
+ }
+ ]
+ }
}
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
+
+**Service Function Paths JSON.**
+
+::
+
+ {
+ "service-function-paths": {
+ "service-function-path": [
+ {
+ "name": "SFP1",
+ "service-chain-name": "SFC1",
+ "starting-index": 255,
+ "symmetric": "true",
+ "context-metadata": "NSH1",
+ "transport-type": "service-locator:vxlan-gpe"
+
+ }
+ ]
+ }
+ }
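All four PUT requests above target the same RESTCONF config URL pattern, ``<base>/<yang-module>:<top-level-container>/``. A small sketch (assuming the paths container lives in the ``service-function-path`` module):

```python
# Sketch of the RESTCONF config URL pattern used by the curl commands above.
BASE = "http://localhost:8181/restconf/config"

def config_url(module, container):
    """Build <base>/<yang-module>:<top-level-container>/ for a PUT."""
    return "{}/{}:{}/".format(BASE, module, container)

urls = [
    config_url("service-function", "service-functions"),
    config_url("service-function-forwarder", "service-function-forwarders"),
    config_url("service-function-chain", "service-function-chains"),
    config_url("service-function-path", "service-function-paths"),
]
```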
+
+As a result of the above configuration, OpenDaylight renders the needed flows
+in all involved SFFs. Those flows implement:
+
+- Two Rendered Service Paths:
+
+ - dpi-1 (SF1), firewall-1 (SF2)
+ - firewall-1 (SF2), dpi-1 (SF1)
+
+- The communication between SFFs and SFs based on eth-nsh
+
+- The communication between SFFs based on vxlan-gpe
+
+The following picture shows a topology and traffic flow (in green) which corresponds to the above configuration.
+
+.. figure:: ./images/sfc/single-logical-sff-example.png
+ :alt: Logical SFF Example
+ :width: 800px
+ :height: 600px
+
+ Logical SFF Example
+
+
+
+The Logical SFF functionality allows OpenDaylight to find out which SFFs hold
+the SFs involved in a path. In this example the affected SFFs are Node3 and
+Node4, so the controller renders the flows containing NSH parameters only in
+those SFFs.
+
+Below are the new flows rendered in Node3 and Node4 which implement the NSH
+protocol. Every Rendered Service Path is represented by an NSP value. We
+provisioned a symmetric RSP, so we get two NSPs: 8388613 and 5. Node3 holds
+the first SF of NSP 8388613 and the last SF of NSP 5. Node4 holds the first
+SF of NSP 5 and the last SF of NSP 8388613. Both Node3 and Node4 pop the NSH
+header once a received packet has gone through the last SF of its path.
+
+
+**Rendered flows Node 3**
+
+::
+
+ cookie=0x14, duration=59.264s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
+ cookie=0x14, duration=59.194s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
+ cookie=0x14, duration=59.257s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0x14, duration=59.189s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0xba5eba1100000203, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
+ cookie=0xba5eba1100000201, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000201, duration=59.188s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000201, duration=59.182s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:6
+
+**Rendered Flows Node 4**
+
+::
+
+ cookie=0x14, duration=69.040s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
+ cookie=0x14, duration=69.008s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
+ cookie=0x14, duration=69.040s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0x14, duration=69.005s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:1
+ cookie=0xba5eba1100000201, duration=68.999s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000203, duration=68.996s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)
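The NSH fields in dumps like the ones above can be extracted with a short parser; this is an illustrative helper, not OpenDaylight code:

```python
import re

# Illustrative helper that pulls the NSH fields (nsp, nsi) out of an
# ovs-ofctl flow dump such as the ones shown above.
def parse_nsh_flows(dump):
    flows = []
    for line in dump.splitlines():
        nsp = re.search(r"nsp=(\d+)", line)
        if not nsp:
            continue  # skip flows without NSH match fields
        nsi = re.search(r"nsi=(\d+)", line)
        table = re.search(r"table=(\d+)", line)
        flows.append({
            "table": int(table.group(1)),
            "nsp": int(nsp.group(1)),
            "nsi": int(nsi.group(1)) if nsi else None,
            "pops_nsh": "pop_nsh" in line,
        })
    return flows

# Two (abridged) lines from the Node 3 dump above.
sample = """\
cookie=0x14, table=83, priority=250,nsp=5 actions=goto_table:86
cookie=0xba5eba1100000203, table=87, priority=650,nsi=253,nsp=5 actions=pop_nsh,resubmit(,17)
"""
flows = parse_nsh_flows(sample)
```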
+
+
+An interesting scenario that shows the strength of the Logical SFF is the
+migration of an SF from one compute node to another. OpenDaylight learns the
+new topology by itself and re-renders the flows on the newly affected SFFs.
+
+.. figure:: ./images/sfc/single-logical-sff-example-migration.png
+ :alt: Logical SFF - SF Migration Example
+ :width: 800px
+ :height: 600px
+
+ Logical SFF - SF Migration Example
+
+
+In our example, SF2 is moved from Node4 to Node2, so OpenDaylight removes the
+NSH-specific flows from Node4 and installs them in Node2. The flows below show
+this effect. Node3 keeps holding the first SF of NSP 8388613 and the last SF
+of NSP 5, but Node2 becomes the new holder of the first SF of NSP 5 and the
+last SF of NSP 8388613.
+
+
+**Rendered Flows Node 3 After Migration**
+
+::
+
+ cookie=0x14, duration=64.044s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
+ cookie=0x14, duration=63.947s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
+ cookie=0x14, duration=64.044s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0x14, duration=63.947s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0xba5eba1100000201, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000203, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
+ cookie=0xba5eba1100000201, duration=63.947s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000201, duration=63.942s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:2
+
+**Rendered Flows Node 2 After Migration**
+
+::
+
+ cookie=0x14, duration=56.856s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
+ cookie=0x14, duration=56.755s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
+ cookie=0x14, duration=56.847s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0x14, duration=56.755s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
+ cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:4
+ cookie=0xba5eba1100000201, duration=56.755s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
+ cookie=0xba5eba1100000203, duration=56.750s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)
+
+**Rendered Flows Node 4 After Migration**
+
+::
+
+ -- No flows for NSH processing --
+
+.. _sfc-user-guide-classifier-impacts:
+
+Classifier impacts
+~~~~~~~~~~~~~~~~~~
+
+As previously mentioned, in the :ref:`Logical SFF rationale
+<sfc-user-guide-logical-sff-motivation>`, the Logical SFF feature relies on
+Genius to get the dataplane IDs of the OpenFlow switches, in order to properly
+steer the traffic through the chain.
+
+Since one of the classifier's objectives is to steer the packets *into* the
+SFC domain, the classifier has to be aware of where the first Service
+Function is located - if it migrates somewhere else, the classifier table
+has to be updated accordingly, thus enabling the seamless migration of
+Service Functions.
+
+For this feature, mobility of the client VM is out of scope, and should be
+managed by its high-availability module, or VNF manager.
+
+Keep in mind that classification *always* occurs in the compute node where
+the client VM (i.e. traffic origin) is running.
+
+How to attach the classifier to a Logical SFF
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to leverage this functionality, the classifier has to be configured
+using a Logical SFF as an attachment-point, specifying within it the neutron
+port to classify.
+
+The following examples show how to configure an ACL, and a classifier having
+a Logical SFF as an attachment-point:
+
+**Configure an ACL**
+
+The following ACL steers traffic intended for port 80 within the subnetwork
+192.168.2.0/24 into RSP1 and RSP1-Reverse.
+
+::
+
+ {
+ "access-lists": {
+ "acl": [
+ {
+ "acl-name": "ACL1",
+ "acl-type": "ietf-access-control-list:ipv4-acl",
+ "access-list-entries": {
+ "ace": [
+ {
+ "rule-name": "ACE1",
+ "actions": {
+ "service-function-acl:rendered-service-path": "RSP1"
+ },
+ "matches": {
+ "destination-ipv4-network": "192.168.2.0/24",
+ "source-ipv4-network": "192.168.2.0/24",
+ "protocol": "6",
+ "source-port-range": {
+ "lower-port": 0
+ },
+ "destination-port-range": {
+ "lower-port": 80
+ }
+ }
+ }
+ ]
+ }
+ },
+ {
+ "acl-name": "ACL2",
+ "acl-type": "ietf-access-control-list:ipv4-acl",
+ "access-list-entries": {
+ "ace": [
+ {
+ "rule-name": "ACE2",
+ "actions": {
+ "service-function-acl:rendered-service-path": "RSP1-Reverse"
+ },
+ "matches": {
+ "destination-ipv4-network": "192.168.2.0/24",
+ "source-ipv4-network": "192.168.2.0/24",
+ "protocol": "6",
+ "source-port-range": {
+ "lower-port": 80
+ },
+ "destination-port-range": {
+ "lower-port": 0
+ }
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+
+::
+
+ curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/ietf-access-control-list:access-lists/
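To see what ACE1 above actually matches, the sketch below (illustrative only, not ODL code) reproduces its match conditions in plain Python:

```python
import ipaddress

# Illustrative reproduction of ACE1's match conditions: TCP traffic inside
# 192.168.2.0/24 destined to port 80.
ACL_NET = ipaddress.ip_network("192.168.2.0/24")

def matches_ace1(src_ip, dst_ip, ip_proto, dst_port):
    return (ip_proto == 6                                  # protocol: TCP
            and ipaddress.ip_address(src_ip) in ACL_NET    # source-ipv4-network
            and ipaddress.ip_address(dst_ip) in ACL_NET    # destination-ipv4-network
            and dst_port == 80)                            # destination-port-range
```

Packets that satisfy this predicate would be steered into RSP1; ACE2 mirrors it (source port 80) for the reverse path.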
+
+**Configure a classifier JSON**
+
+The following JSON provisions a classifier that has a Logical SFF as an
+attachment point. The 'interface' field is where you indicate the neutron
+port of the VM whose traffic you want to classify.
+
+::
+
+ {
+ "service-function-classifiers": {
+ "service-function-classifier": [
+ {
+ "name": "Classifier1",
+ "scl-service-function-forwarder": [
+ {
+ "name": "sfflogical1",
+ "interface": "09a78ba3-78ba-40f5-a3ea-1ce708367f2b"
+ }
+ ],
+ "acl": {
+ "name": "ACL1",
+ "type": "ietf-access-control-list:ipv4-acl"
+ }
+ }
+ ]
+ }
+ }
+
+::
+
+ curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-classifier:service-function-classifiers/
+
+.. _sfc-user-guide-pipeline-impacts:
+
+SFC pipeline impacts
+~~~~~~~~~~~~~~~~~~~~
+
+After binding the SFC service with a particular interface by means of Genius,
+as explained in the :ref:`Genius User Guide <genius-user-guide-binding-services>`,
+the entry point into the SFC pipeline will be table 82
+(SFC_TRANSPORT_CLASSIFIER_TABLE). From that point on, packet processing is
+similar to the :ref:`SFC OpenFlow pipeline <sfc-user-guide-sfc-of-pipeline>`,
+just with another set of specific tables for the SFC service.
+
+This picture shows the SFC pipeline after service integration with Genius:
+
+.. figure:: ./images/sfc/LSFF_pipeline.png
+ :alt: SFC Logical SFF OpenFlow pipeline
+
+ SFC Logical SFF OpenFlow pipeline