Service Function Chaining
=========================
OpenDaylight Service Function Chaining (SFC) Overview
-----------------------------------------------------
OpenDaylight Service Function Chaining (SFC) provides the ability to
define an ordered list of network services (e.g. firewalls, load
balancers). These services are then "stitched" together in the network to
create a service chain. This project provides the infrastructure
(chaining logic, APIs) needed for ODL to provision a service chain in
the network and an end-user application for defining such chains.
- ACE - Access Control Entry

- ACL - Access Control List

- SCF - Service Classifier Function

- SF - Service Function

- SFC - Service Function Chain

- SFF - Service Function Forwarder

- SFG - Service Function Group

- SFP - Service Function Path

- RSP - Rendered Service Path

- NSH - Network Service Header
The SFC user interface comes in two flavors:

- Web Interface (SFC-UI): based on the Dlux project, it provides an easy way to
  create, read, update and delete configuration stored in the datastore.
  Moreover, it shows the status of all SFC features (e.g. installed,
  uninstalled) and Karaf log messages as well.

- Command Line Interface (SFC-CLI): it provides several Karaf console commands to
  show the SFC model (SFs, SFFs, etc.) provisioned in the datastore.
SFC Web Interface (SFC-UI)
~~~~~~~~~~~~~~~~~~~~~~~~~~
The SFC-UI operates purely by using RESTCONF.

.. figure:: ./images/sfc/sfc-ui-architecture.png
   :alt: SFC-UI integration into ODL

   SFC-UI integration into ODL
1. Run the ODL distribution (run karaf)

2. In the Karaf console execute: ``feature:install odl-sfc-ui``

3. Visit the SFC-UI at: ``http://<odl_ip_address>:8181/sfc/index.html``
SFC Command Line Interface (SFC-CLI)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Karaf container offers a complete Unix-like console that allows managing
the container. This console can be extended with custom commands to manage the
features deployed on it. This feature adds some basic commands to show the
provisioned SFC entities.
The SFC-CLI implements commands to show some of the provisioned SFC entities:
Service Functions, Service Function Forwarders, Service Function
Chains, Service Function Paths, Service Function Classifiers, Service Nodes and
Service Function Types:
* List one/all provisioned Service Functions:

  .. code-block:: bash

     sfc:sf-list [--name <name>]

* List one/all provisioned Service Function Forwarders:

  .. code-block:: bash

     sfc:sff-list [--name <name>]

* List one/all provisioned Service Function Chains:

  .. code-block:: bash

     sfc:sfc-list [--name <name>]

* List one/all provisioned Service Function Paths:

  .. code-block:: bash

     sfc:sfp-list [--name <name>]

* List one/all provisioned Service Function Classifiers:

  .. code-block:: bash

     sfc:sc-list [--name <name>]

* List one/all provisioned Service Nodes:

  .. code-block:: bash

     sfc:sn-list [--name <name>]

* List one/all provisioned Service Function Types:

  .. code-block:: bash

     sfc:sft-list [--name <name>]
SFC Southbound REST Plug-in
---------------------------
The Southbound REST plug-in is used to send configuration from the datastore
down to network devices supporting a REST API (i.e. they have a
configured REST URI). It supports POST/PUT/DELETE operations, which are
triggered accordingly by changes in the SFC data stores.
- Access Control List (ACL)

- Service Classifier Function (SCF)

- Service Function (SF)

- Service Function Group (SFG)

- Service Function Schedule Type (SFST)

- Service Function Forwarder (SFF)

- Rendered Service Path (RSP)
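The trigger-to-operation mapping described above can be sketched as follows. This is an illustrative assumption, not the plug-in's actual API: the function names and the device URL layout are made up for the example.

```python
# Illustrative sketch (not the plug-in's real API): map an SFC datastore
# change to the HTTP operation sent to a device's configured REST URI.

def rest_operation(change_type):
    """Datastore change type -> HTTP method used southbound."""
    return {"created": "POST", "updated": "PUT", "deleted": "DELETE"}[change_type]

def device_url(rest_uri, entity_type, name):
    """Build the device URL; the path layout here is an assumption."""
    return "{}/config/{}/{}".format(rest_uri, entity_type, name)

print(rest_operation("updated"))
print(device_url("http://10.0.0.5:5000", "service-function-forwarder", "sff1"))
```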
Southbound REST Plug-in Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From the user perspective, the REST plug-in is another SFC southbound
plug-in used to communicate with network devices.

.. figure:: ./images/sfc/sb-rest-architecture-user.png
   :alt: Southbound REST Plug-in integration into ODL

   Southbound REST Plug-in integration into ODL
Configuring Southbound REST Plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Run the ODL distribution (run karaf)

2. In the Karaf console execute: ``feature:install odl-sfc-sb-rest``

3. Configure REST URIs for SF/SFF through the SFC User Interface or RESTCONF
   (the required configuration steps can be found in the tutorial stated
   below)
A comprehensive tutorial on how to use the Southbound REST plug-in and how
to control network devices with it can be found at:
https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101
SFC-OVS Plug-in
---------------

SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices.
Integration is realized through mapping of SFC objects (like SF, SFF,
Classifier, etc.) to OVS objects (like Bridge,
TerminationPoint=Port/Interface). The mapping takes care of automatic
instantiation (setup) of the corresponding object whenever its counterpart
is created. For example, when a new SFF is created, the SFC-OVS plug-in
will create a new OVS bridge, and when a new OVS bridge is created, the
SFC-OVS plug-in will create a new SFF.
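The bidirectional mapping can be sketched with a small registry. The class and the derived names (``br-…``, ``SFF-…``) are assumptions for illustration only; the actual plug-in derives SFF names from OVSDB node information.

```python
# Illustrative sketch of the bidirectional SFF <-> OVS bridge mapping
# described above; class and naming scheme are assumptions, not SFC code.

class MappingRegistry:
    def __init__(self):
        self.sff_to_bridge = {}
        self.bridge_to_sff = {}

    def on_sff_created(self, sff_name):
        """Creating an SFF triggers creation of a matching OVS bridge."""
        bridge = "br-" + sff_name
        self.sff_to_bridge[sff_name] = bridge
        self.bridge_to_sff[bridge] = sff_name
        return bridge

    def on_bridge_created(self, bridge_name):
        """Creating an OVS bridge triggers creation of a matching SFF."""
        if bridge_name in self.bridge_to_sff:
            return self.bridge_to_sff[bridge_name]   # already mapped
        sff = "SFF-" + bridge_name
        self.bridge_to_sff[bridge_name] = sff
        self.sff_to_bridge[sff] = bridge_name
        return sff

reg = MappingRegistry()
print(reg.on_sff_created("sff1"))      # br-sff1
print(reg.on_bridge_created("br1"))    # SFF-br1
```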
The feature is intended for SFC users willing to use Open vSwitch as the
underlying network infrastructure for deploying RSPs (Rendered Service
Paths).
SFC-OVS Architecture
~~~~~~~~~~~~~~~~~~~~

SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing
information from/to OVS devices. From the user perspective, SFC-OVS acts
as a layer between the SFC datastore and OVSDB.

.. figure:: ./images/sfc/sfc-ovs-architecture-user.png
   :alt: SFC-OVS integration into ODL

   SFC-OVS integration into ODL
Configuring SFC-OVS
~~~~~~~~~~~~~~~~~~~

1. Run the ODL distribution (run karaf)

2. In the Karaf console execute: ``feature:install odl-sfc-ovs``

3. Configure Open vSwitch to use ODL as a manager, using the following
   command: ``ovs-vsctl set-manager tcp:<odl_ip_address>:6640``
Verifying mapping from OVS to SFF
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This tutorial shows the usual workflow when OVS configuration is
transformed into corresponding SFC objects (in this case an SFF).

Prerequisites:

- Open vSwitch installed (the ``ovs-vsctl`` command is available in the shell)

- the SFC-OVS feature configured as stated above
1. ``ovs-vsctl set-manager tcp:<odl_ip_address>:6640``

2. ``ovs-vsctl add-br br1``

3. ``ovs-vsctl add-port br1 testPort``
a. visit the SFC User Interface:
   ``http://<odl_ip_address>:8181/sfc/index.html#/sfc/serviceforwarder``

b. use pure RESTCONF and send a GET request to the URL:
   ``http://<odl_ip_address>:8181/restconf/config/service-function-forwarder:service-function-forwarders``

There should be an SFF whose name ends with *br1*, and the SFF should
contain two data plane locators: *br1* and *testPort*.
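The check described above can be automated against the RESTCONF response. This is a hedged sketch: the response below is a simplified stand-in (the generated SFF name shown is hypothetical), but the nesting of ``service-function-forwarder`` and ``sff-data-plane-locator`` follows the model used in this guide.

```python
# Sketch of the verification step: given JSON returned by the RESTCONF GET
# above, check for an SFF whose name ends with "br1" carrying the two
# expected data plane locators. The response here is a simplified example.

def find_bridge_sff(sffs, bridge):
    """Return the SFF created for the given OVS bridge, if any."""
    for sff in sffs:
        if sff["name"].endswith(bridge):
            return sff
    return None

response = {
    "service-function-forwarders": {
        "service-function-forwarder": [
            {
                "name": "OVSDB/node/br1",   # hypothetical generated name
                "sff-data-plane-locator": [{"name": "br1"}, {"name": "testPort"}],
            }
        ]
    }
}

sffs = response["service-function-forwarders"]["service-function-forwarder"]
sff = find_bridge_sff(sffs, "br1")
locators = {dpl["name"] for dpl in sff["sff-data-plane-locator"]}
print(sorted(locators))   # ['br1', 'testPort']
```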
Verifying mapping from SFF to OVS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This tutorial shows the usual workflow during the creation of an OVS bridge
with the use of the SFC APIs.

Prerequisites:

- Open vSwitch installed (the ``ovs-vsctl`` command is available in the shell)

- the SFC-OVS feature configured as stated above
1. In the shell execute: ``ovs-vsctl set-manager tcp:<odl_ip_address>:6640``

2. Send a POST request to the URL:
   ``http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge``
   Use Basic auth with credentials "admin", "admin" and set
   ``Content-Type: application/json``. The content of the POST request
   should be:

   .. code-block:: json

      {
        "input":
        {
          "name": "br-test",
          "ovs-node": {
            "ip": "<Open_vSwitch_ip_address>"
          }
        }
      }

   *Open\_vSwitch\_ip\_address* is the IP address of the machine where Open
   vSwitch is installed.
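The same RPC can be assembled and sent with a small stdlib-only sketch. The nested payload layout (``input``/``name``/``ovs-node``) is reconstructed from this tutorial and may differ in detail from your SFC version; the URL and credentials are the ones used above.

```python
# Sketch: build and POST the create-ovs-bridge RPC input. The nested
# payload layout is an assumption reconstructed from the tutorial.
import base64
import json
import urllib.request

def build_rpc_input(bridge_name, ovs_ip):
    return {"input": {"name": bridge_name, "ovs-node": {"ip": ovs_ip}}}

def create_ovs_bridge_request(odl_ip, payload):
    url = ("http://{}:8181/restconf/operations/"
           "service-function-forwarder-ovs:create-ovs-bridge").format(odl_ip)
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization",
                   "Basic " + base64.b64encode(b"admin:admin").decode())
    return req   # pass to urllib.request.urlopen(req) against a running ODL

payload = build_rpc_input("br-test", "192.168.1.1")
print(json.dumps(payload))
```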
In the shell execute: ``ovs-vsctl show``. There should be a bridge named
*br-test* with one port/interface also called *br-test*.

Also, the corresponding SFF for this OVS bridge should be configured, which
can be verified through the SFC User Interface or RESTCONF as stated in
the previous tutorial.
SFC Classifier User Guide
-------------------------

A description of the classifier can be found at:
https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/

There are two types of classifier:

1. OpenFlow Classifier

2. Iptables Classifier
OpenFlow Classifier
~~~~~~~~~~~~~~~~~~~

The OpenFlow classifier implements the classification criteria based on
OpenFlow rules deployed into an OpenFlow switch. An Open vSwitch takes
the role of a classifier and performs various encapsulations such as
NSH, VLAN, MPLS, etc. In the existing implementation, the classifier
supports NSH encapsulation. Matching information is based on an ACL for MAC
addresses, ports, protocol, IPv4 and IPv6. Supported protocols are TCP,
UDP and SCTP. The action information in the OF rules is the forwarding
of the encapsulated packets with specific information related to the
chain.
Classifier Architecture
^^^^^^^^^^^^^^^^^^^^^^^

The OVSDB Southbound interface is used to create an instance of a bridge
in a specific location (via IP address). This bridge contains the
OpenFlow rules that perform the classification of the packets and react
accordingly. The OpenFlow Southbound interface is used to translate the
ACL information into OF rules within the Open vSwitch.

.. note::

   In order to create the instance of the bridge that takes the role of
   a classifier, an "empty" SFF must be created.
Configuring Classifier
^^^^^^^^^^^^^^^^^^^^^^

1. An empty SFF must be created in order to host the ACL that contains
   the classification information.

2. The SFF data plane locator must be configured.

3. The classifier interface must be manually added to the SFF bridge.
Administering or Managing Classifier
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Classification information is based on MAC addresses, protocol, ports
and IP. An ACL gathers this information and is assigned to an RSP, which
in turn is a specific path for a service chain.
Iptables Classifier
~~~~~~~~~~~~~~~~~~~

The classifier manages everything from starting the packet listener to
the creation (and removal) of appropriate ip(6)tables rules and marking
received packets accordingly. Its functionality is **available only on
Linux** as it leverages **NetfilterQueue**, which provides access to
packets matched by an **iptables** rule. The classifier requires **root
privileges** to be able to operate.

So far it is capable of processing ACLs for MAC addresses, ports, IPv4
and IPv6. Supported protocols are TCP and UDP.
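The rule generation can be sketched as follows. The ACE field names here are simplified stand-ins, not the exact ODL ACL model; the ``raw`` table and Netfilter queue number 2 come from the classifier's documented requirements below.

```python
# Rough sketch of turning an ACE into an iptables rule that sends matched
# packets to Netfilter Queue 2, as the classifier does; the ACE field
# names are simplified stand-ins, not the exact ODL ACL model.

def ace_to_iptables(ace):
    cmd = ["iptables", "-t", "raw", "-A", "PREROUTING"]
    if "protocol" in ace:                       # e.g. tcp or udp
        cmd += ["-p", ace["protocol"]]
    if "src-ip" in ace:
        cmd += ["-s", ace["src-ip"]]
    if "dst-port" in ace:
        cmd += ["--dport", str(ace["dst-port"])]
    cmd += ["-j", "NFQUEUE", "--queue-num", "2"]   # queue 2 reserved for SFC
    return " ".join(cmd)

rule = ace_to_iptables({"protocol": "tcp", "src-ip": "10.0.0.0/24",
                        "dst-port": 80})
print(rule)
```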
Classifier Architecture
^^^^^^^^^^^^^^^^^^^^^^^

The classifier is implemented in Python; the code is located in the project
repository at ``sfc-py/common/classifier.py``.
.. note::

   The classifier assumes that the Rendered Service Path (RSP) **already
   exists** in ODL when an ACL referencing it is obtained.

1. sfc\_agent receives an ACL and passes it for processing to the
   classifier.

2. The RSP (its SFF locator) referenced by the ACL is requested from ODL.

3. If the RSP exists in ODL, then ACL-based iptables rules for it are
   applied.

After this process is over, every packet successfully matched to an
iptables rule (i.e. successfully classified) will be NSH encapsulated
and forwarded to the related SFF, which knows how to traverse the RSP.
Rules are created using the appropriate iptables command. If the Access
Control Entry (ACE) rule is MAC address related, both iptables and
ip6tables rules are issued. If the ACE rule is IPv4 address related, only
iptables rules are issued; if it is IPv6 address related, only ip6tables
rules are issued.

.. note::

   The iptables **raw** table contains all created rules.
Configuring Classifier
^^^^^^^^^^^^^^^^^^^^^^

| The classifier doesn't need any configuration.
| Its only requirement is that the **second (2) Netfilter Queue** is not
  used by any other process and is **available for the classifier**.
Administering or Managing Classifier
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The classifier runs alongside sfc\_agent, therefore the command for starting
it is:

.. code-block:: bash

   sudo python3.4 sfc-py/sfc_agent.py --rest --odl-ip-port localhost:8181 --auto-sff-name --nfq-class
SFC OpenFlow Renderer User Guide
--------------------------------
The Service Function Chaining (SFC) OpenFlow Renderer (SFC OF Renderer)
implements Service Chaining on OpenFlow switches. It listens for the
creation of a Rendered Service Path (RSP), and once received it programs
Service Function Forwarders (SFF) that are hosted on OpenFlow capable
switches to steer packets through the service chain.
Common acronyms used in the following sections:

- SF - Service Function

- SFF - Service Function Forwarder

- SFC - Service Function Chain

- SFP - Service Function Path

- RSP - Rendered Service Path
SFC OpenFlow Renderer Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SFC OF Renderer is invoked after an RSP is created, using an MD-SAL
listener called ``SfcOfRspDataListener``. Upon SFC OF Renderer
initialization, the ``SfcOfRspDataListener`` registers itself to listen
for RSP changes. When invoked, the ``SfcOfRspDataListener`` processes
the RSP and calls the ``SfcOfFlowProgrammerImpl`` to create the
necessary flows in the Service Function Forwarders configured in the
RSP. Refer to the following diagram for more details.
.. figure:: ./images/sfc/sfcofrenderer_architecture.png
   :alt: SFC OpenFlow Renderer High Level Architecture

   SFC OpenFlow Renderer High Level Architecture

.. _sfc-user-guide-sfc-of-pipeline:
SFC OpenFlow Switch Flow pipeline
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The SFC OpenFlow Renderer uses the following tables for its flow
pipeline:

- Table 0, Classifier

- Table 1, Transport Ingress

- Table 2, Path Mapper

- Table 3, Path Mapper ACL

- Table 4, Next Hop

- Table 10, Transport Egress
The OpenFlow table pipeline is intended to be generic to work for all of
the different encapsulations supported by SFC.
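The pipeline can be summarized as a map from OpenFlow table id to role. This constant is a reader's aid, not renderer code; the Next Hop table number (4) is implied by the ``Goto Table 4`` actions shown in the Path Mapper examples.

```python
# Reader's-aid summary of the SFC OF Renderer flow pipeline (not SFC code).

SFC_PIPELINE = {
    0: "Classifier",
    1: "Transport Ingress",
    2: "Path Mapper",
    3: "Path Mapper ACL",
    4: "Next Hop",
    10: "Transport Egress",
}

# Typical traversal: 0 -> 1 -> 2 (-> 3 on a path-mapper miss) -> 4 -> 10
print(" -> ".join(SFC_PIPELINE[t] for t in (0, 1, 2, 4, 10)))
```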
All of the tables are explained in detail in the following sections.

The SFFs (SFF1 and SFF2), SFs (SF1), and topology used for the flow
tables in the following sections are as described in the following
diagram.
.. figure:: ./images/sfc/sfcofrenderer_nwtopo.png
   :alt: SFC OpenFlow Renderer Typical Network Topology

   SFC OpenFlow Renderer Typical Network Topology
Classifier Table detailed
^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible for the SFF to also act as a classifier. This table maps
subscriber traffic to RSPs, and is explained in detail in the classifier
section.

If the SFF is not a classifier, then this table will just have a simple
Goto Table 1 entry.
Transport Ingress Table detailed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Transport Ingress table has an entry per expected tunnel transport
type to be received in a particular SFF, as established in the SFC
configuration.

Here are two examples on SFF1: one where the RSP ingress tunnel is MPLS,
assuming VLAN is used for the SFF-SF link, and the other where the RSP
ingress tunnel is NSH over VXLAN-GPE (UDP port 4789):
+----------+-------------------------------------+--------------+
| Priority | Match                               | Action       |
+==========+=====================================+==============+
| 256      | EtherType==0x8847 (MPLS unicast)    | Goto Table 2 |
+----------+-------------------------------------+--------------+
| 256      | EtherType==0x8100 (VLAN)            | Goto Table 2 |
+----------+-------------------------------------+--------------+
| 256      | EtherType==0x0800,udp,tp\_dst==4789 | Goto Table 2 |
+----------+-------------------------------------+--------------+
| 5        | Match Any                           | Drop         |
+----------+-------------------------------------+--------------+

Table: Table Transport Ingress
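The ingress entries above can also be expressed programmatically. This is an illustrative sketch: the EtherType and UDP port constants come from the table, while the flow-dict shape is an assumption for clarity, not renderer code.

```python
# Illustrative sketch of the Transport Ingress entries above.

ETH_MPLS_UNICAST = 0x8847
ETH_VLAN = 0x8100
ETH_IPV4 = 0x0800
NSH_UDP_PORT = 4789

def transport_ingress_flows():
    goto_path_mapper = {"goto_table": 2}   # Path Mapper is table 2
    return [
        {"priority": 256, "eth_type": ETH_MPLS_UNICAST, **goto_path_mapper},
        {"priority": 256, "eth_type": ETH_VLAN, **goto_path_mapper},
        {"priority": 256, "eth_type": ETH_IPV4, "ip_proto": "udp",
         "udp_dst": NSH_UDP_PORT, **goto_path_mapper},
        {"priority": 5, "action": "drop"},   # match-any fallback
    ]

flows = transport_ingress_flows()
print(len(flows))   # 4
```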
Path Mapper Table detailed
^^^^^^^^^^^^^^^^^^^^^^^^^^

The Path Mapper table has an entry per expected tunnel transport info to
be received in a particular SFF, as established in the SFC
configuration. The tunnel transport info is used to determine the RSP
Path Id, and is stored in the OpenFlow metadata. This table is not used
for NSH, since the RSP Path Id is stored in the NSH header.

For SF nodes that do not support NSH tunneling, the IP header DSCP field
is used to store the RSP Path Id. The RSP Path Id is written to the DSCP
field in the Transport Egress table for those packets sent to an SF.
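The DSCP mechanism above implies a size constraint worth making explicit: DSCP is a 6-bit field, so an RSP Path Id carried this way must fit in 0..63. A minimal sketch (the function names are illustrative):

```python
# Sketch of the DSCP mechanism described above: the RSP Path Id is carried
# in the 6-bit DSCP field of the IP header, so it must fit in 0..63.

DSCP_BITS = 6

def path_id_to_dscp(path_id):
    """Validate that an RSP Path Id fits the 6-bit DSCP field."""
    if not 0 <= path_id < (1 << DSCP_BITS):
        raise ValueError("RSP Path Id does not fit in the DSCP field")
    return path_id

def tos_byte(dscp, ecn=0):
    """DSCP occupies the upper 6 bits of the IPv4 TOS byte."""
    return (dscp << 2) | ecn

print(tos_byte(path_id_to_dscp(1)))   # 4: DSCP 1 shifted past the ECN bits
```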
Here is an example on SFF1, assuming the following details:

- VLAN ID 1000 is used for the SFF-SF link

- The RSP Path 1 tunnel uses MPLS label 100 for ingress and 101 for
  egress

- The RSP Path 2 (symmetric downlink path) uses MPLS label 101 for
  ingress and 100 for egress
+----------+-------------------+-----------------------+
| Priority | Match             | Action                |
+==========+===================+=======================+
| 256      | MPLS Label==100   | RSP Path=1, Pop MPLS, |
|          |                   | Goto Table 4          |
+----------+-------------------+-----------------------+
| 256      | MPLS Label==101   | RSP Path=2, Pop MPLS, |
|          |                   | Goto Table 4          |
+----------+-------------------+-----------------------+
| 256      | VLAN ID==1000, IP | RSP Path=1, Pop VLAN, |
|          | DSCP==1           | Goto Table 4          |
+----------+-------------------+-----------------------+
| 256      | VLAN ID==1000, IP | RSP Path=2, Pop VLAN, |
|          | DSCP==2           | Goto Table 4          |
+----------+-------------------+-----------------------+
| 5        | Match Any         | Goto Table 3          |
+----------+-------------------+-----------------------+

Table: Table Path Mapper
Path Mapper ACL Table detailed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This table is only populated when PacketIn packets are received from the
switch for TcpProxy type SFs. These flows are created with an inactivity
timer of 60 seconds and will be automatically deleted upon expiration.
Next Hop Table detailed
^^^^^^^^^^^^^^^^^^^^^^^

The Next Hop table uses the RSP Path Id and appropriate packet fields to
determine where to send the packet next. For NSH, only the NSP (Network
Services Path, RSP ID) and NSI (Network Services Index, next hop) fields
from the NSH header are needed to determine the VXLAN tunnel destination
IP. For VLAN or MPLS, the source MAC address is used to determine
the destination MAC address.

Here are two examples on SFF1, assuming SFF1 is connected to SFF2. RSP
Paths 1 and 2 are symmetric VLAN paths. RSP Paths 3 and 4 are symmetric
NSH paths. RSP Path 1 ingress packets come from outside of SFC, for
which we don't have the source MAC address (MacSrc).
+----------+--------------------------------+--------------------------------+
| Priority | Match                          | Action                         |
+==========+================================+================================+
| 256      | RSP Path==1, MacSrc==SF1       | MacDst=SFF2, Goto Table 10     |
+----------+--------------------------------+--------------------------------+
| 256      | RSP Path==2, MacSrc==SF1       | Goto Table 10                  |
+----------+--------------------------------+--------------------------------+
| 256      | RSP Path==2, MacSrc==SFF2      | MacDst=SF1, Goto Table 10      |
+----------+--------------------------------+--------------------------------+
| 246      | RSP Path==1                    | MacDst=SF1, Goto Table 10      |
+----------+--------------------------------+--------------------------------+
| 256      | nsp=3,nsi=255 (SFF Ingress     | load:0xa000002→                |
|          | RSP 3)                         | NXM\_NX\_TUN\_IPV4\_DST[],     |
|          |                                | Goto Table 10                  |
+----------+--------------------------------+--------------------------------+
| 256      | nsp=3,nsi=254 (SFF Ingress     | load:0xa00000a→                |
|          | from SF, RSP 3)                | NXM\_NX\_TUN\_IPV4\_DST[],     |
|          |                                | Goto Table 10                  |
+----------+--------------------------------+--------------------------------+
| 256      | nsp=4,nsi=254 (SFF1 Ingress    | load:0xa00000a→                |
|          | from SFF2)                     | NXM\_NX\_TUN\_IPV4\_DST[],     |
|          |                                | Goto Table 10                  |
+----------+--------------------------------+--------------------------------+
| 5        | Match Any                      | Drop                           |
+----------+--------------------------------+--------------------------------+

Table: Table Next Hop
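The ``load:0x…→NXM_NX_TUN_IPV4_DST[]`` actions above write the VXLAN tunnel destination IP as a 32-bit integer. A small stdlib helper makes the values readable (the helper itself is a reader's aid, not renderer code):

```python
# Decode the NXM_NX_TUN_IPV4_DST load values from the Next Hop table.
import ipaddress

def tun_ipv4_dst(value):
    """Decode a 32-bit tunnel destination into dotted-quad form."""
    return str(ipaddress.IPv4Address(value))

print(tun_ipv4_dst(0xa000002))   # 10.0.0.2
print(tun_ipv4_dst(0xa00000a))   # 10.0.0.10
```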
Transport Egress Table detailed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Transport Egress table prepares the egress tunnel information and sends
the packets out.

Here are two examples on SFF1. RSP Paths 1 and 2 are symmetric MPLS
paths that use VLAN for the SFF-SF link. RSP Paths 3 and 4 are symmetric NSH
paths. Since it is assumed that switches used for NSH will only have one
VXLAN port, the NSH packets are just sent back where they came from.
+----------+--------------------------------+--------------------------------+
| Priority | Match                          | Action                         |
+==========+================================+================================+
| 256      | RSP Path==1, MacDst==SF1       | Push VLAN ID 1000, Port=SF1    |
+----------+--------------------------------+--------------------------------+
| 256      | RSP Path==1, MacDst==SFF2      | Push MPLS Label 101, Port=SFF2 |
+----------+--------------------------------+--------------------------------+
| 256      | RSP Path==2, MacDst==SF1       | Push VLAN ID 1000, Port=SF1    |
+----------+--------------------------------+--------------------------------+
| 246      | RSP Path==2                    | Push MPLS Label 100,           |
+----------+--------------------------------+--------------------------------+
| 256      | nsp=3,nsi=255 (SFF Ingress     | IN\_PORT                       |
|          | RSP 3)                         |                                |
+----------+--------------------------------+--------------------------------+
| 256      | nsp=3,nsi=254 (SFF Ingress     | IN\_PORT                       |
|          | from SF, RSP 3)                |                                |
+----------+--------------------------------+--------------------------------+
| 256      | nsp=4,nsi=254 (SFF1 Ingress    | IN\_PORT                       |
|          | from SFF2)                     |                                |
+----------+--------------------------------+--------------------------------+
| 5        | Match Any                      | Drop                           |
+----------+--------------------------------+--------------------------------+

Table: Table Transport Egress
Administering SFC OF Renderer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the SFC OpenFlow Renderer Karaf, at least the following Karaf
features must be installed:

- odl-openflowplugin-nxm-extensions

- odl-openflowplugin-flow-services

- odl-sfc-openflow-renderer

- odl-sfc-ui (optional)

The following command can be used to view all of the currently installed
Karaf features:

.. code-block:: bash

   opendaylight-user@root>feature:list -i

Or, pipe the command to a grep to see a subset of the currently
installed Karaf features:

.. code-block:: bash

   opendaylight-user@root>feature:list -i | grep sfc

To install a particular feature, use the Karaf ``feature:install``
command.
SFC OF Renderer Tutorial
~~~~~~~~~~~~~~~~~~~~~~~~

In this tutorial, two different encapsulations will be shown: MPLS and
NSH. The following network topology diagram is a logical view of the
SFFs and SFs involved in creating the Service Chains.

.. figure:: ./images/sfc/sfcofrenderer_nwtopo.png
   :alt: SFC OpenFlow Renderer Typical Network Topology

   SFC OpenFlow Renderer Typical Network Topology
To use this example, SFF OpenFlow switches must be created and connected
as illustrated above. Additionally, the SFs must be created and
connected.

Note that RSP symmetry depends on the Service Function Path symmetric field, if
present. If not, the RSP will be symmetric if any of the SFs involved in the
chain has the bidirectional field set to true.
The target environment is not important, but this use-case was created
and tested with Mininet.
The steps to use this tutorial are as follows. The referenced
configuration in the steps is listed in the following sections.

There are numerous ways to send the configuration. In the following
configuration chapters, the appropriate ``curl`` command is shown for
each configuration to be sent, including the URL.

Steps to configure the SFC OF Renderer tutorial:

1. Send the ``SF`` RESTCONF configuration

2. Send the ``SFF`` RESTCONF configuration

3. Send the ``SFC`` RESTCONF configuration

4. Send the ``SFP`` RESTCONF configuration

5. Create the ``RSP`` with a RESTCONF RPC command
Once the configuration has been successfully created, query the Rendered
Service Paths with either the SFC UI or via RESTCONF. Notice that the
RSP is symmetrical, so the following two RSPs will be created:

- sfc-path1

- sfc-path1-Reverse
At this point the Service Chains have been created, and the OpenFlow
switches are programmed to steer traffic through the Service Chain.
Traffic can now be injected from a client into the Service Chain. To
debug problems, the OpenFlow tables can be dumped with the following
commands, assuming SFF1 is called ``s1`` and SFF2 is called ``s2``.

.. code-block:: bash

   sudo ovs-ofctl -O OpenFlow13 dump-flows s1

.. code-block:: bash

   sudo ovs-ofctl -O OpenFlow13 dump-flows s2

In all the following configuration sections, replace the ``${JSON}``
string with the appropriate JSON configuration. Also, change the
``localhost`` destination in the URL accordingly.
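The ``curl`` commands in the following sections all follow the same pattern. For readers who prefer Python, here is an equivalent stdlib-only sketch; the URL layout, headers and ``admin:admin`` credentials are the ones the tutorial uses, and the helper itself is illustrative rather than part of SFC.

```python
# Stdlib-only equivalent of the tutorial's curl PUT commands (sketch).
import base64
import json
import urllib.request

def restconf_put(host, path, payload):
    """Build a JSON PUT request to ODL RESTCONF, as the curl commands do."""
    req = urllib.request.Request(
        "http://{}:8181/restconf/config/{}".format(host, path),
        data=json.dumps(payload).encode(),
        method="PUT",
    )
    req.add_header("Content-Type", "application/json")
    req.add_header("Cache-Control", "no-cache")
    req.add_header("Authorization",
                   "Basic " + base64.b64encode(b"admin:admin").decode())
    return req   # pass to urllib.request.urlopen(req) against a running ODL

req = restconf_put("localhost",
                   "service-function:service-functions/",
                   {"service-functions": {}})
print(req.get_method(), req.full_url)
```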
SFC OF Renderer NSH Tutorial
''''''''''''''''''''''''''''

The following configuration sections show how to create the different
elements using NSH encapsulation.
| **NSH Service Function configuration**

The Service Function configuration can be sent with the following
command:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
**SF configuration JSON.**

.. code-block:: json

   {
     "service-functions": {
       "service-function": [
         {
           "name": "sf1",
           "type": "http-header-enrichment",
           "ip-mgmt-address": "10.0.0.2",
           "sf-data-plane-locator": [
             {
               "name": "sf1dpl",
               "ip": "10.0.0.2",
               "port": 4789,
               "transport": "service-locator:vxlan-gpe",
               "service-function-forwarder": "sff1"
             }
           ]
         },
         {
           "name": "sf2",
           "type": "firewall",
           "ip-mgmt-address": "10.0.0.3",
           "sf-data-plane-locator": [
             {
               "name": "sf2dpl",
               "ip": "10.0.0.3",
               "port": 4789,
               "transport": "service-locator:vxlan-gpe",
               "service-function-forwarder": "sff2"
             }
           ]
         }
       ]
     }
   }
| **NSH Service Function Forwarder configuration**

The Service Function Forwarder configuration can be sent with the
following command:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
**SFF configuration JSON.**

.. code-block:: json

   {
     "service-function-forwarders": {
       "service-function-forwarder": [
         {
           "name": "sff1",
           "service-node": "openflow:2",
           "sff-data-plane-locator": [
             {
               "name": "sff1dpl",
               "data-plane-locator":
               {
                 "ip": "10.0.0.1",
                 "port": 4789,
                 "transport": "service-locator:vxlan-gpe"
               }
             }
           ],
           "service-function-dictionary": [
             {
               "name": "sf1",
               "sff-sf-data-plane-locator":
               {
                 "sf-dpl-name": "sf1dpl",
                 "sff-dpl-name": "sff1dpl"
               }
             }
           ]
         },
         {
           "name": "sff2",
           "service-node": "openflow:3",
           "sff-data-plane-locator": [
             {
               "name": "sff2dpl",
               "data-plane-locator":
               {
                 "ip": "10.0.0.10",
                 "port": 4789,
                 "transport": "service-locator:vxlan-gpe"
               }
             }
           ],
           "service-function-dictionary": [
             {
               "name": "sf2",
               "sff-sf-data-plane-locator":
               {
                 "sf-dpl-name": "sf2dpl",
                 "sff-dpl-name": "sff2dpl"
               }
             }
           ]
         }
       ]
     }
   }
| **NSH Service Function Chain configuration**

The Service Function Chain configuration can be sent with the following
command:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
**SFC configuration JSON.**

.. code-block:: json

   {
     "service-function-chains": {
       "service-function-chain": [
         {
           "name": "sfc-chain1",
           "sfc-service-function": [
             {
               "name": "hdr-enrich-abstract1",
               "type": "http-header-enrichment"
             },
             {
               "name": "firewall-abstract1",
               "type": "firewall"
             }
           ]
         }
       ]
     }
   }
| **NSH Service Function Path configuration**

The Service Function Path configuration can be sent with the following
command:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
**SFP configuration JSON.**

.. code-block:: json

   {
     "service-function-paths": {
       "service-function-path": [
         {
           "name": "sfc-path1",
           "service-chain-name": "sfc-chain1",
           "transport-type": "service-locator:vxlan-gpe",
           "symmetric": true
         }
       ]
     }
   }
| **NSH Rendered Service Path creation**

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path/
**RSP creation JSON.**

.. code-block:: json

   {
     "input": {
       "name": "sfc-path1",
       "parent-service-function-path": "sfc-path1"
     }
   }
| **NSH Rendered Service Path removal**

The following command can be used to remove a Rendered Service Path
called ``sfc-path1``:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{"input": {"name": "sfc-path1" } }' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:delete-rendered-path/
| **NSH Rendered Service Path Query**

The following command can be used to query all of the created Rendered
Service Paths:

.. code-block:: bash

   curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
SFC OF Renderer MPLS Tutorial
'''''''''''''''''''''''''''''

The following configuration sections show how to create the different
elements using MPLS encapsulation.
| **MPLS Service Function configuration**

The Service Function configuration can be sent with the following
command:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
**SF configuration JSON.**

.. code-block:: json

   {
     "service-functions": {
       "service-function": [
         {
           "name": "sf1",
           "type": "http-header-enrichment",
           "ip-mgmt-address": "10.0.0.2",
           "sf-data-plane-locator": [
             {
               "name": "sf1-sff1",
               "mac": "00:00:08:01:02:01",
               "vlan-id": 1000,
               "transport": "service-locator:mac",
               "service-function-forwarder": "sff1"
             }
           ]
         },
         {
           "name": "sf2",
           "type": "firewall",
           "ip-mgmt-address": "10.0.0.3",
           "sf-data-plane-locator": [
             {
               "name": "sf2-sff2",
               "mac": "00:00:08:01:03:01",
               "vlan-id": 1000,
               "transport": "service-locator:mac",
               "service-function-forwarder": "sff2"
             }
           ]
         }
       ]
     }
   }
| **MPLS Service Function Forwarder configuration**

The Service Function Forwarder configuration can be sent with the
following command:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
**SFF configuration JSON.**

.. code-block:: json

   {
     "service-function-forwarders": {
       "service-function-forwarder": [
         {
           "name": "sff1",
           "service-node": "openflow:2",
           "sff-data-plane-locator": [
             {
               "name": "ulSff1Ingress",
               "data-plane-locator":
               {
                 "mpls-label": 100,
                 "transport": "service-locator:mpls"
               },
               "service-function-forwarder-ofs:ofs-port":
               {
                 "mac": "11:11:11:11:11:11"
               }
             },
             {
               "name": "ulSff1ToSff2",
               "data-plane-locator":
               {
                 "mpls-label": 101,
                 "transport": "service-locator:mpls"
               },
               "service-function-forwarder-ofs:ofs-port":
               {
                 "mac": "33:33:33:33:33:33"
               }
             },
             {
               "name": "toSf1",
               "data-plane-locator":
               {
                 "mac": "22:22:22:22:22:22",
                 "vlan-id": 1000,
                 "transport": "service-locator:mac"
               },
               "service-function-forwarder-ofs:ofs-port":
               {
                 "mac": "33:33:33:33:33:33"
               }
             }
           ],
           "service-function-dictionary": [
             {
               "name": "sf1",
               "sff-sf-data-plane-locator":
               {
                 "sf-dpl-name": "sf1-sff1",
                 "sff-dpl-name": "toSf1"
               }
             }
           ]
         },
         {
           "name": "sff2",
           "service-node": "openflow:3",
           "sff-data-plane-locator": [
             {
               "name": "ulSff2Ingress",
               "data-plane-locator":
               {
                 "mpls-label": 101,
                 "transport": "service-locator:mpls"
               },
               "service-function-forwarder-ofs:ofs-port":
               {
                 "mac": "44:44:44:44:44:44"
               }
             },
             {
               "name": "ulSff2Egress",
               "data-plane-locator":
               {
                 "transport": "service-locator:mpls"
               },
               "service-function-forwarder-ofs:ofs-port":
               {
                 "mac": "66:66:66:66:66:66"
               }
             },
             {
               "name": "toSf2",
               "data-plane-locator":
               {
                 "mac": "55:55:55:55:55:55",
                 "vlan-id": 1000,
                 "transport": "service-locator:mac"
               },
               "service-function-forwarder-ofs:ofs-port":
               {
               }
             }
           ],
           "service-function-dictionary": [
             {
               "name": "sf2",
               "sff-sf-data-plane-locator":
               {
                 "sf-dpl-name": "sf2-sff2",
                 "sff-dpl-name": "toSf2"
               }
             }
           ]
         }
       ]
     }
   }
| **MPLS Service Function Chain configuration**

The Service Function Chain configuration can be sent with the following
command:

.. code-block:: bash

   curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
1226 **SFC configuration JSON.**
1228 .. code-block:: json
1231 "service-function-chains": {
1232 "service-function-chain": [
1234 "name": "sfc-chain1",
1235 "sfc-service-function": [
1237 "name": "hdr-enrich-abstract1",
1238 "type": "http-header-enrichment"
1241 "name": "firewall-abstract1",
| **MPLS Service Function Path configuration**

The Service Function Path configuration can be sent with the following
command:

.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
    --data '${JSON}' -X PUT --user admin:admin
    http://localhost:8181/restconf/config/service-function-path:service-function-paths/

**SFP configuration JSON.**

.. code-block:: json

    {
        "service-function-paths": {
            "service-function-path": [
                {
                    "name": "sfc-path1",
                    "service-chain-name": "sfc-chain1",
                    "transport-type": "service-locator:mpls"
                }
            ]
        }
    }

| **MPLS Rendered Service Path creation**

.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
    --data '${JSON}' -X POST --user admin:admin
    http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path/

**RSP creation JSON.**

.. code-block:: json

    {
        "input": {
            "name": "sfc-path1",
            "parent-service-function-path": "sfc-path1"
        }
    }

| **MPLS Rendered Service Path removal**

The following command can be used to remove a Rendered Service Path
called ``sfc-path1``:

.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
    --data '{"input": {"name": "sfc-path1" } }' -X POST --user
    admin:admin http://localhost:8181/restconf/operations/rendered-service-path:delete-rendered-path/

| **MPLS Rendered Service Path Query**

The following command can be used to query all of the created Rendered
Service Paths:

.. code-block:: bash

    curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET
    --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/

SFC IOS XE Renderer User Guide
------------------------------

The early Service Function Chaining (SFC) renderer for IOS-XE devices
(SFC IOS-XE renderer) implements Service Chaining functionality on
IOS-XE capable switches. It listens for the creation of a Rendered
Service Path (RSP) and sets up Service Function Forwarders (SFF) that
are hosted on IOS-XE switches to steer traffic through the service
chain.

Common acronyms used in the following sections:

- SF - Service Function

- SFF - Service Function Forwarder

- SFC - Service Function Chain

- SFP - Service Function Path

- RSP - Rendered Service Path

- LSF - Local Service Forwarder

- RSF - Remote Service Forwarder
SFC IOS-XE Renderer Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When the SFC IOS-XE renderer is initialized, all required listeners are
registered to handle incoming data. These include the CSR/IOS-XE
``NodeListener``, which stores data about all configurable devices
including their mountpoints (used here as databrokers),
``ServiceFunctionListener`` and ``ServiceForwarderListener`` (see
mapping), and ``RenderedPathListener``, used to listen for RSP changes.
When the SFC IOS-XE renderer is invoked, ``RenderedPathListener`` calls
the ``IosXeRspProcessor``, which processes the RSP change and creates
all necessary Service Paths and Remote Service Forwarders (if
necessary) on the device.

Service Path details
~~~~~~~~~~~~~~~~~~~~

Each Service Path is defined by an index (represented by the NSP) and
contains service path entries. Each entry has an appropriate service
index (NSI) and a definition of the next hop. The next hop can be a
Service Function, a different Service Function Forwarder, or a
definition of the end of the chain - terminate. After terminating, the
packet is sent to its destination. If an SFF is defined as the next
hop, it has to be present on the device in the form of a Remote Service
Forwarder. RSFs are also created during RSP processing.
Example of a Service Path:

.. code-block:: none

    service-chain service-path 200
       service-index 255 service-function firewall-1
       service-index 254 service-function dpi-1
       service-index 253 terminate

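The index bookkeeping above follows a simple pattern: the service index
starts at the path's starting index and decrements by one per hop, with
a final ``terminate`` entry. A minimal sketch of that pattern (the
``build_service_path`` helper is hypothetical, not part of the
renderer):

```python
def build_service_path(path_id, starting_index, service_functions):
    """Build IOS-XE-style service path entries for a list of SF hops.

    Each hop consumes one service index; the entry after the last hop
    terminates the chain, mirroring the example path 200 above.
    """
    entries = []
    index = starting_index
    for sf in service_functions:
        entries.append((index, "service-function " + sf))
        index -= 1
    entries.append((index, "terminate"))
    return path_id, entries

# Reproduce the example: path 200 with firewall-1 and dpi-1.
path_id, entries = build_service_path(200, 255, ["firewall-1", "dpi-1"])
for nsi, action in entries:
    print("service-index %d %s" % (nsi, action))
```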
Mapping to IOS-XE SFC entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The renderer contains mappers for SFs and SFFs. An IOS-XE capable
device uses its own definitions of Service Functions and Service
Function Forwarders according to the appropriate .yang file.
``ServiceFunctionListener`` serves as a listener for SF changes. If an
SF appears in the datastore, the listener extracts its management IP
address and looks into the cached IOS-XE nodes. If one of the available
nodes matches, the Service Function is mapped in
``IosXeServiceFunctionMapper`` to be understandable by the IOS-XE
device, and it is written into the device's config.
``ServiceForwarderListener`` is used in a similar way. All SFFs with a
suitable management IP address are mapped in
``IosXeServiceForwarderMapper``. Remapped SFFs are configured as Local
Service Forwarders. It is not possible to directly create a Remote
Service Forwarder using the IOS-XE renderer. An RSF is created only
during RSP processing.
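The matching step described above amounts to a lookup of the SF's
management IP against the cached node inventory. A simplified sketch
(the dictionary below is a hypothetical stand-in for the node cache;
the real logic lives in ``IosXeServiceFunctionMapper``):

```python
# Hypothetical stand-in for the cached IOS-XE nodes: mgmt IP -> node id.
cached_nodes = {"172.25.73.23": "CSR1Kv-2"}

def map_to_device(sf_name, mgmt_ip, cache):
    """Return (sf, node) when a cached IOS-XE node matches the SF's
    management IP, or None so the SF stays unmapped until one appears."""
    node = cache.get(mgmt_ip)
    if node is None:
        return None
    return (sf_name, node)

print(map_to_device("firewall", "172.25.73.23", cached_nodes))
print(map_to_device("firewall", "10.0.0.99", cached_nodes))
```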
Administering SFC IOS-XE renderer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use the SFC IOS-XE renderer, at least the following Karaf features
must be installed:

- odl-netconf-topology

- odl-sfc-ios-xe-renderer

SFC IOS-XE renderer Tutorial
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This tutorial is a simple example of how to create a Service Path on an
IOS-XE capable device using the IOS-XE renderer.

To connect to an IOS-XE device, it is necessary to use several modified
YANG models and override the device's own models. All .yang files are
in the ``Yang/netconf`` folder in the ``sfc-ios-xe-renderer`` module in
the SFC project. These files have to be copied to the ``cache/schema``
directory before Karaf is started. After that, custom capabilities have
to be sent to network-topology:
* PUT ./config/network-topology:network-topology/topology/topology-netconf/node/<device-name>

.. code-block:: xml

    <node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
      <node-id>device-name</node-id>
      <host xmlns="urn:opendaylight:netconf-node-topology">device-ip</host>
      <port xmlns="urn:opendaylight:netconf-node-topology">2022</port>
      <username xmlns="urn:opendaylight:netconf-node-topology">login</username>
      <password xmlns="urn:opendaylight:netconf-node-topology">password</password>
      <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
      <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
      <yang-module-capabilities xmlns="urn:opendaylight:netconf-node-topology">
        <override>true</override>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
          urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2013-07-15
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
          urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&amp;revision=2013-07-15
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
          urn:ios?module=ned&amp;revision=2016-03-08
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
          http://tail-f.com/yang/common?module=tailf-common&amp;revision=2015-05-22
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
          http://tail-f.com/yang/common?module=tailf-meta-extensions&amp;revision=2013-11-07
        </capability>
        <capability xmlns="urn:opendaylight:netconf-node-topology">
          http://tail-f.com/yang/common?module=tailf-cli-extensions&amp;revision=2015-03-19
        </capability>
      </yang-module-capabilities>
    </node>

The device name in the URL and in the XML must match.
When the IOS-XE renderer is installed, all NETCONF nodes in
topology-netconf are processed and all capable nodes with accessible
mountpoints are cached. The first step is to create an LSF on the node.

``Service Function Forwarder configuration``

* PUT ./config/service-function-forwarder:service-function-forwarders

.. code-block:: json

    {
        "service-function-forwarders": {
            "service-function-forwarder": [
                {
                    "name": "CSR1Kv-2",
                    "ip-mgmt-address": "172.25.73.23",
                    "sff-data-plane-locator": [
                        {
                            "name": "CSR1Kv-2-dpl",
                            "data-plane-locator": {
                                "transport": "service-locator:vxlan-gpe",
                                "ip": "10.99.150.10"
                            }
                        }
                    ]
                }
            ]
        }
    }

If an IOS-XE node with the appropriate management IP exists, this
configuration is mapped and the LSF is created on the device. The same
approach is used for Service Functions.
* PUT ./config/service-function:service-functions

.. code-block:: json

    {
        "service-functions": {
            "service-function": [
                {
                    "name": "firewall",
                    "type": "firewall",
                    "ip-mgmt-address": "172.25.73.23",
                    "sf-data-plane-locator": [
                        {
                            "name": "firewall-dpl",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                },
                {
                    "name": "dpi",
                    "type": "dpi",
                    "ip-mgmt-address": "172.25.73.23",
                    "sf-data-plane-locator": [
                        {
                            "name": "dpi-dpl",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                },
                {
                    "name": "qos",
                    "type": "qos",
                    "ip-mgmt-address": "172.25.73.23",
                    "sf-data-plane-locator": [
                        {
                            "name": "qos-dpl",
                            "transport": "service-locator:gre",
                            "service-function-forwarder": "CSR1Kv-2"
                        }
                    ]
                }
            ]
        }
    }

All these SFs are configured on the same device as the LSF. The next
step is to prepare the Service Function Chain:

* PUT ./config/service-function-chain:service-function-chains/

.. code-block:: json

    {
        "service-function-chains": {
            "service-function-chain": [
                {
                    "name": "CSR3XSF",
                    "sfc-service-function": [
                        {
                            "name": "firewall",
                            "type": "firewall"
                        },
                        {
                            "name": "dpi",
                            "type": "dpi"
                        },
                        {
                            "name": "qos",
                            "type": "qos"
                        }
                    ]
                }
            ]
        }
    }

Service Function Path:

* PUT ./config/service-function-path:service-function-paths/

.. code-block:: json

    {
        "service-function-paths": {
            "service-function-path": [
                {
                    "name": "CSR3XSF-Path",
                    "service-chain-name": "CSR3XSF",
                    "starting-index": 255,
                    "symmetric": true
                }
            ]
        }
    }

Without a classifier, it is possible to POST the RSP directly.

* POST ./operations/rendered-service-path:create-rendered-path

.. code-block:: json

    {
        "input": {
            "name": "CSR3XSF-Path-RSP",
            "parent-service-function-path": "CSR3XSF-Path"
        }
    }

The resulting configuration:

.. code-block:: none

    service-chain service-function-forwarder local
      ip address 10.99.150.10

    service-chain service-function firewall
      encapsulation gre enhanced divert

    service-chain service-function dpi
      encapsulation gre enhanced divert

    service-chain service-function qos
      encapsulation gre enhanced divert

    service-chain service-path 1
      service-index 255 service-function firewall
      service-index 254 service-function dpi
      service-index 253 service-function qos
      service-index 252 terminate

    service-chain service-path 2
      service-index 255 service-function qos
      service-index 254 service-function dpi
      service-index 253 service-function firewall
      service-index 252 terminate

Service Path 1 is direct, Service Path 2 is reversed. Path numbers may
vary.
Service Function Scheduling Algorithms
--------------------------------------

When creating a Rendered Service Path, the SFC controller originally
chose the first available service function from a list of service
function names. This may result in many issues, such as overloaded
service functions and longer service paths, because SFC has no means to
understand the status of service functions and the network topology.
The service function selection framework supports at least four
algorithms (Random, Round Robin, Load Balancing and Shortest Path) to
select the most appropriate service function when instantiating the
Rendered Service Path. In addition, it is an extensible framework that
allows third-party selection algorithms to be plugged in.
The following figure illustrates the service function selection
framework and algorithms.

.. figure:: ./images/sfc/sf-selection-arch.png
   :alt: SF Selection Architecture

   SF Selection Architecture

A user has three different ways to select one service function
selection algorithm:

1. Integrated RESTCONF calls. OpenStack and/or other administration
   systems could provide plugins to call the APIs to select one
   scheduling algorithm.

2. Command line tools. Command line tools such as curl or browser
   plugins such as POSTMAN (for Google Chrome) and RESTClient (for
   Mozilla Firefox) could select a scheduling algorithm by making
   RESTCONF calls.

3. SFC-UI. The SFC-UI provides an option for choosing a selection
   algorithm when creating a Rendered Service Path.

The RESTCONF northbound SFC API provides GUI/RESTCONF interactions for
choosing the service function selection algorithm. The MD-SAL data
store provides all supported service function selection algorithms, and
provides APIs to enable one of them. Once a service function selection
algorithm is enabled, it is used when creating a Rendered Service Path.
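The pluggable design can be sketched as a registry keyed by scheduler
type, consulted at RSP creation time (the names and registry below are
illustrative, not the actual ODL classes):

```python
import random

SCHEDULERS = {}

def register(name):
    """Register a service function selection algorithm under a type name."""
    def wrap(fn):
        SCHEDULERS[name] = fn
        return fn
    return wrap

@register("random")
def pick_random(candidates, rng=random.Random(42)):
    # Seeded here only so the example is reproducible.
    return rng.choice(candidates)

@register("round-robin")
def pick_round_robin(candidates, _state={"i": 0}):
    sf = candidates[_state["i"] % len(candidates)]
    _state["i"] += 1
    return sf

def select_sf(algorithm, candidates):
    # Fall back to "random" when no algorithm was enabled.
    return SCHEDULERS.get(algorithm, SCHEDULERS["random"])(candidates)

sfs = ["firewall-1", "firewall-2"]
print(select_sf("round-robin", sfs))  # firewall-1
print(select_sf("round-robin", sfs))  # firewall-2
```

A third-party algorithm would simply be another ``@register(...)``
entry, which matches the extensibility described above.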
Select SFs with Scheduler
~~~~~~~~~~~~~~~~~~~~~~~~~

An administrator can use either of the following ways to select one of
the selection algorithms when creating a Rendered Service Path.

- Command line tools. Command line tools include the Linux command curl
  or browser plugins such as POSTMAN (for Google Chrome) or RESTClient
  (for Mozilla Firefox). In this case, the following JSON content is
  needed at the moment: Service\_function\_schudule\_type.json
.. code-block:: json

    {
        "service-function-scheduler-types": {
            "service-function-scheduler-type": [
                {
                    "name": "random",
                    "type": "service-function-scheduler-type:random"
                },
                {
                    "name": "roundrobin",
                    "type": "service-function-scheduler-type:round-robin"
                },
                {
                    "name": "loadbalance",
                    "type": "service-function-scheduler-type:load-balance"
                },
                {
                    "name": "shortestpath",
                    "type": "service-function-scheduler-type:shortest-path"
                }
            ]
        }
    }

If using the Linux curl command, it could be:

.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache"
    --data '${Service_function_schudule_type.json}' -X PUT
    --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/

Here is also a snapshot of using the RESTClient plugin:

.. figure:: ./images/sfc/RESTClient-snapshot.png
   :alt: Mozilla Firefox RESTClient

   Mozilla Firefox RESTClient

- SFC-UI. The SFC-UI provides a drop-down menu for the service function
  selection algorithm. Here is a snapshot of the user interaction from
  the SFC-UI when creating a Rendered Service Path.

.. figure:: ./images/sfc/karaf-webui-select-a-type.png
   :alt: Select a scheduling algorithm

   Select a scheduling algorithm

Some service function selection algorithms in the drop-down list are
not implemented yet. Only the first three algorithms are implemented at
the moment.
Random Algorithm
^^^^^^^^^^^^^^^^

Select a Service Function from the name list randomly.

The Random algorithm is used to select one Service Function randomly
from the name list that it gets from the Service Function Type.

Prerequisites:

- Service Function information is stored in the datastore.

- Either no algorithm or the Random algorithm is selected.

The Random algorithm is used when either no algorithm type is selected
or the Random algorithm is selected.

Once the plugins are installed into Karaf successfully, a user can use
any preferred method to select the Random scheduling algorithm type.
There are no special instructions for using the Random algorithm.
Round Robin Algorithm
^^^^^^^^^^^^^^^^^^^^^

Select a Service Function from the name list in a Round Robin manner.

The Round Robin algorithm is used to select one Service Function from
the name list that it gets from the Service Function Type in a Round
Robin manner. This balances workloads across all Service Functions;
however, it cannot make all Service Functions carry the same workload
because it is flow-based Round Robin.
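A small illustration of why flow-based Round Robin balances the
*number* of flows rather than the traffic they carry (the flow names
and packet rates below are made up):

```python
from itertools import cycle

sfs = ["napt44-1", "napt44-2"]
assignment = {}
load = {sf: 0 for sf in sfs}

# Hypothetical flows with different packet rates: Round Robin balances
# the number of flows, not the traffic volume behind them.
flows = {"flow-a": 100, "flow-b": 10, "flow-c": 100, "flow-d": 10}
chooser = cycle(sfs)
for flow, pps in flows.items():
    sf = next(chooser)
    assignment[flow] = sf
    load[sf] += pps

print(assignment)  # flows alternate between the two SFs
print(load)        # {'napt44-1': 200, 'napt44-2': 20}
```

Even though each SF received two flows, the per-SF traffic load is far
from equal, which is exactly the caveat stated above.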
Prerequisites:

- Service Function information is stored in the datastore.

- The Round Robin algorithm is selected.

The Round Robin algorithm will work once the Round Robin algorithm is
selected.

Once the plugins are installed into Karaf successfully, a user can use
any preferred method to select the Round Robin scheduling algorithm
type. There are no special instructions for using the Round Robin
algorithm.
Load Balance Algorithm
^^^^^^^^^^^^^^^^^^^^^^

Select the appropriate Service Function by actual CPU utilization.

The Load Balance algorithm is used to select the appropriate Service
Function based on the actual CPU utilization of the service functions.
The CPU utilization of each service function is obtained from
monitoring information reported via NETCONF.
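The selection itself reduces to picking the minimum of the reported CPU
figures; a sketch under the assumption that the utilization values have
already been collected into a dictionary (the numbers below are made
up):

```python
def pick_least_loaded(candidates, cpu_utilization):
    """Select the SF with the lowest reported CPU utilization.

    cpu_utilization mimics per-SF monitoring data gathered via NETCONF
    (values are percentages); unknown SFs are treated as fully loaded.
    """
    return min(candidates, key=lambda sf: cpu_utilization.get(sf, 100))

cpu = {"firewall-1": 12, "firewall-2": 75, "napt44-1": 60, "napt44-2": 30}
print(pick_least_loaded(["firewall-1", "firewall-2"], cpu))  # firewall-1
print(pick_least_loaded(["napt44-1", "napt44-2"], cpu))      # napt44-2
```

With these sample numbers the resulting hops match the
firewall-1⇒napt44-2 outcome described in the verification step below.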
Prerequisites:

- CPU utilization for each Service Function.

- Each VM has a NETCONF server and can work with the NETCONF client.
Set up VMs as Service Functions and enable the NETCONF server in the
VMs. Ensure that you specify them separately. For example:

a. Set up 4 VMs: two SFs of type Firewall, the others of type Napt44.
   Name them firewall-1, firewall-2, napt44-1 and napt44-2 as Service
   Functions. The four VMs can run on either the same server or
   different servers.

b. Install the NETCONF server on every VM and enable it. More
   information on NETCONF can be found on the OpenDaylight wiki here:
   https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf:Manual_netopeer_installation

c. Get monitoring data from the NETCONF server. This monitoring data
   should be obtained from the NETCONF server running in the VMs. The
   following static XML data is an example:
.. code-block:: xml

    <?xml version="1.0" encoding="UTF-8"?>
    <service-function-description-monitor-report>
      <SF-description>
        <number-of-dataports>2</number-of-dataports>
        <capabilities>
          <supported-packet-rate>5</supported-packet-rate>
          <supported-bandwidth>10</supported-bandwidth>
          <supported-ACL-number>2000</supported-ACL-number>
          <RIB-size>200</RIB-size>
          <FIB-size>100</FIB-size>
          <ports-bandwidth>
            <port-bandwidth>
              <port-id>1</port-id>
              <ipaddress>10.0.0.1</ipaddress>
              <macaddress>00:1e:67:a2:5f:f4</macaddress>
              <supported-bandwidth>20</supported-bandwidth>
            </port-bandwidth>
            <port-bandwidth>
              <port-id>2</port-id>
              <ipaddress>10.0.0.2</ipaddress>
              <macaddress>01:1e:67:a2:5f:f6</macaddress>
              <supported-bandwidth>10</supported-bandwidth>
            </port-bandwidth>
          </ports-bandwidth>
        </capabilities>
      </SF-description>
      <SF-monitoring-info>
        <liveness>true</liveness>
        <resource-utilization>
          <packet-rate-utilization>10</packet-rate-utilization>
          <bandwidth-utilization>15</bandwidth-utilization>
          <CPU-utilization>12</CPU-utilization>
          <memory-utilization>17</memory-utilization>
          <available-memory>8</available-memory>
          <RIB-utilization>20</RIB-utilization>
          <FIB-utilization>25</FIB-utilization>
          <power-utilization>30</power-utilization>
          <SF-ports-bandwidth-utilization>
            <port-bandwidth-utilization>
              <port-id>1</port-id>
              <bandwidth-utilization>20</bandwidth-utilization>
            </port-bandwidth-utilization>
            <port-bandwidth-utilization>
              <port-id>2</port-id>
              <bandwidth-utilization>30</bandwidth-utilization>
            </port-bandwidth-utilization>
          </SF-ports-bandwidth-utilization>
        </resource-utilization>
      </SF-monitoring-info>
    </service-function-description-monitor-report>

a. Unzip the SFC release tarball.

b. Run SFC: ``${sfc}/bin/karaf``. More information on Service Function
   Chaining can be found on the OpenDaylight SFC wiki page:
   https://wiki.opendaylight.org/view/Service_Function_Chaining:Main

a. Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2) and click the
   button to Create Rendered Service Path in the SFC UI
   (http://localhost:8181/sfc/index.html).

b. Verify the Rendered Service Path to ensure the CPU utilization of
   the selected hop is the minimum one among all the service functions
   of the same type. The correct RSP is firewall-1⇒napt44-2.
Shortest Path Algorithm
^^^^^^^^^^^^^^^^^^^^^^^

Select the appropriate Service Function using Dijkstra's algorithm.
Dijkstra's algorithm is an algorithm for finding the shortest paths
between nodes in a graph.

The Shortest Path algorithm is used to select the appropriate Service
Function based on the actual topology.

Prerequisites:

- Deployed topology (including SFFs, SFs and their links).

- Dijkstra's algorithm. More information on Dijkstra's algorithm can be
  found on the wiki here:
  http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
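As an illustration, a hop-count Dijkstra over the sff1 - sff3 - sff2
topology used later in this tutorial reproduces the path costs the
scheduler compares. This is a sketch, not the ODL implementation:

```python
import heapq

def dijkstra(graph, start):
    """Shortest hop-count distance from start to every reachable node.

    graph maps a node to its neighbours; every link counts as cost 1.
    """
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for nbr in graph.get(node, ()):
            nd = d + 1
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return dist

# Mirrors the deployed topology shown in this tutorial.
topology = {
    "sff1": ["sff3", "firewall-1", "napt44-1"],
    "sff2": ["sff3", "firewall-2", "napt44-2"],
    "sff3": ["sff1", "sff2"],
    "firewall-1": ["sff1"], "napt44-1": ["sff1"],
    "firewall-2": ["sff2"], "napt44-2": ["sff2"],
}
dist = dijkstra(topology, "firewall-2")
print(dist["napt44-2"])  # 2 hops: firewall-2 -> sff2 -> napt44-2
print(dist["napt44-1"])  # 4 hops via sff3 and sff1
```

The 2-hop versus 4-hop comparison is exactly the Path1/Path2 decision
walked through in the verification step below.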
a. Unzip the SFC release tarball.

b. Run SFC: ``${sfc}/bin/karaf``.

c. Deploy SFFs and SFs: import the service-function-forwarders.json and
   service-functions.json in the UI
   (http://localhost:8181/sfc/index.html#/sfc/config)
service-function-forwarders.json:

.. code-block:: json

    {
        "service-function-forwarders": {
            "service-function-forwarder": [
                {
                    "name": "SFF-br1",
                    "service-node": "OVSDB-test01",
                    "rest-uri": "http://localhost:5001",
                    "sff-data-plane-locator": [
                        {
                            "service-function-forwarder-ovs:ovs-bridge": {
                                "uuid": "4c3778e4-840d-47f4-b45e-0988e514d26c",
                                "bridge-name": "br-tun"
                            },
                            "data-plane-locator": {
                                "ip": "192.168.1.1",
                                "transport": "service-locator:vxlan-gpe"
                            }
                        }
                    ],
                    "service-function-dictionary": [
                        {
                            "name": "napt44-1",
                            "sff-sf-data-plane-locator": {
                                "sf-dpl-name": "sf1dpl",
                                "sff-dpl-name": "sff1dpl"
                            }
                        },
                        {
                            "name": "firewall-1",
                            "sff-sf-data-plane-locator": {
                                "sf-dpl-name": "sf2dpl",
                                "sff-dpl-name": "sff2dpl"
                            }
                        }
                    ],
                    "connected-sff-dictionary": [
                        {
                            "name": "SFF-br3"
                        }
                    ]
                },
                {
                    "name": "SFF-br2",
                    "service-node": "OVSDB-test01",
                    "rest-uri": "http://localhost:5002",
                    "sff-data-plane-locator": [
                        {
                            "service-function-forwarder-ovs:ovs-bridge": {
                                "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a1",
                                "bridge-name": "br-tun"
                            },
                            "data-plane-locator": {
                                "ip": "192.168.1.2",
                                "transport": "service-locator:vxlan-gpe"
                            }
                        }
                    ],
                    "service-function-dictionary": [
                        {
                            "name": "napt44-2",
                            "sff-sf-data-plane-locator": {
                                "sf-dpl-name": "sf1dpl",
                                "sff-dpl-name": "sff1dpl"
                            }
                        },
                        {
                            "name": "firewall-2",
                            "sff-sf-data-plane-locator": {
                                "sf-dpl-name": "sf2dpl",
                                "sff-dpl-name": "sff2dpl"
                            }
                        }
                    ],
                    "connected-sff-dictionary": [
                        {
                            "name": "SFF-br3"
                        }
                    ]
                },
                {
                    "name": "SFF-br3",
                    "service-node": "OVSDB-test01",
                    "rest-uri": "http://localhost:5005",
                    "sff-data-plane-locator": [
                        {
                            "service-function-forwarder-ovs:ovs-bridge": {
                                "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a4",
                                "bridge-name": "br-tun"
                            },
                            "data-plane-locator": {
                                "ip": "192.168.1.2",
                                "transport": "service-locator:vxlan-gpe"
                            }
                        }
                    ],
                    "service-function-dictionary": [
                        {
                            "name": "test-server",
                            "sff-sf-data-plane-locator": {
                                "sf-dpl-name": "sf1dpl",
                                "sff-dpl-name": "sff1dpl"
                            }
                        },
                        {
                            "name": "test-client",
                            "sff-sf-data-plane-locator": {
                                "sf-dpl-name": "sf2dpl",
                                "sff-dpl-name": "sff2dpl"
                            }
                        }
                    ],
                    "connected-sff-dictionary": [
                        {
                            "name": "SFF-br1"
                        },
                        {
                            "name": "SFF-br2"
                        }
                    ]
                }
            ]
        }
    }

service-functions.json:

.. code-block:: json

    {
        "service-functions": {
            "service-function": [
                {
                    "name": "napt44-1",
                    "rest-uri": "http://localhost:10001",
                    "ip-mgmt-address": "10.3.1.103",
                    "sf-data-plane-locator": [
                        {
                            "name": "preferred",
                            "service-function-forwarder": "SFF-br1"
                        }
                    ]
                },
                {
                    "name": "napt44-2",
                    "rest-uri": "http://localhost:10002",
                    "ip-mgmt-address": "10.3.1.103",
                    "sf-data-plane-locator": [
                        {
                            "service-function-forwarder": "SFF-br2"
                        }
                    ]
                },
                {
                    "name": "firewall-1",
                    "rest-uri": "http://localhost:10003",
                    "ip-mgmt-address": "10.3.1.103",
                    "sf-data-plane-locator": [
                        {
                            "service-function-forwarder": "SFF-br1"
                        }
                    ]
                },
                {
                    "name": "firewall-2",
                    "rest-uri": "http://localhost:10004",
                    "ip-mgmt-address": "10.3.1.103",
                    "sf-data-plane-locator": [
                        {
                            "service-function-forwarder": "SFF-br2"
                        }
                    ]
                },
                {
                    "name": "test-server",
                    "rest-uri": "http://localhost:10005",
                    "ip-mgmt-address": "10.3.1.103",
                    "sf-data-plane-locator": [
                        {
                            "service-function-forwarder": "SFF-br3"
                        }
                    ]
                },
                {
                    "name": "test-client",
                    "rest-uri": "http://localhost:10006",
                    "ip-mgmt-address": "10.3.1.103",
                    "sf-data-plane-locator": [
                        {
                            "service-function-forwarder": "SFF-br3"
                        }
                    ]
                }
            ]
        }
    }

The deployed topology looks like this:

.. code-block:: none

         +----+           +----+           +----+
         |sff1|-----------|sff3|-----------|sff2|
         +----+           +----+           +----+
            |                                 |
    +--------------+                 +--------------+
    |              |                 |              |
 +----------+  +--------+      +----------+  +--------+
 |firewall-1|  |napt44-1|      |firewall-2|  |napt44-2|
 +----------+  +--------+      +----------+  +--------+

- Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2), select
  "Shortest Path" as the schedule type and click the button to Create
  Rendered Service Path in the SFC UI
  (http://localhost:8181/sfc/index.html).

.. figure:: ./images/sfc/sf-schedule-type.png
   :alt: Select schedule type

   Select schedule type

- Verify the Rendered Service Path to ensure the selected hops are
  linked to one SFF. The correct RSP is firewall-1⇒napt44-1 or
  firewall-2⇒napt44-2. The first SF type in the Service Function Chain
  is Firewall, so the algorithm selects the first hop randomly among
  all the SFs whose type is Firewall. Assume the first selected SF is
  firewall-2. All the paths from firewall-2 to an SF whose type is
  Napt44 are:

  - Path1: firewall-2 → sff2 → napt44-2

  - Path2: firewall-2 → sff2 → sff3 → sff1 → napt44-1

  The shortest path is Path1, so the selected next hop is napt44-2.

.. figure:: ./images/sfc/sf-rendered-service-path.png
   :alt: Rendered service path

   Rendered service path
Service Function Load Balancing User Guide
------------------------------------------

The SFC Load-Balancing feature implements load balancing of Service
Functions, rather than a one-to-one mapping between a Service Function
Forwarder and a Service Function.

Load Balancing Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Service Function Groups (SFG) can replace Service Functions (SF) in the
Rendered Path model. A Service Path can only be defined using SFGs or
SFs, but not a combination of both.
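That constraint can be sketched as a simple validation over a hop list
(illustrative only; the actual model enforces it through the YANG
structure):

```python
def validate_hops(hops):
    """A Service Path may reference Service Function Groups or Service
    Functions, but never a mix of both kinds."""
    kinds = {kind for kind, _name in hops}
    return kinds <= {"sf"} or kinds <= {"sfg"}

# Hypothetical hop lists: (kind, name) pairs.
print(validate_hops([("sfg", "napt44-group"), ("sfg", "fw-group")]))  # True
print(validate_hops([("sfg", "napt44-group"), ("sf", "firewall-1")]))  # False
```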
Relevant objects in the YANG model are as follows:

1. Service-Function-Group-Algorithm:

   .. code-block:: none

       Service-Function-Group-Algorithms {
           Service-Function-Group-Algorithm {
               String name
               String type
           }
       }

   Available types: ALL, SELECT, INDIRECT, FAST_FAILURE

2. Service-Function-Group:

   .. code-block:: none

       Service-Function-Groups {
           Service-Function-Group {
               String name
               String serviceFunctionGroupAlgorithmName
               Service-Function-Group-Element {
                   String service-function-name
               }
           }
       }

3. ServiceFunctionHop: holds a reference to a name of an SFG (or SF)
This tutorial will explain how to create a simple SFC configuration,
with an SFG instead of an SF. In this example, the SFG will include two
service functions.

For general SFC setup and scenarios, please see the SFC wiki page:
https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101
POST - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms

.. code-block:: json

    {
        "service-function-group-algorithm": [
            {
                "name": "alg1"
            }
        ]
    }

(Header "content-type": application/json)

Verify: get all algorithms
^^^^^^^^^^^^^^^^^^^^^^^^^^

GET - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms

In order to delete all algorithms: DELETE -
http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
POST - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups

.. code-block:: json

    {
        "service-function-group": [
            {
                "rest-uri": "http://localhost:10002",
                "ip-mgmt-address": "10.3.1.103",
                "algorithm": "alg1",
                "sfc-service-function": [
                    {
                        "name": "napt44-103-1"
                    }
                ]
            }
        ]
    }

Verify: get all SFG’s
^^^^^^^^^^^^^^^^^^^^^

GET - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups
SFC Proof of Transit User Guide
-------------------------------

Several deployments use traffic engineering, policy routing, segment
routing or service function chaining (SFC) to steer packets through a
specific set of nodes. In certain cases, regulatory obligations or a
compliance policy require proof that all packets that are supposed to
follow a specific path are indeed being forwarded across the exact set
of nodes specified. That is, if a packet flow is supposed to go through
a series of service functions or network nodes, it has to be proven
that all packets of the flow actually went through the service chain or
collection of nodes specified by the policy. In case the packets of a
flow were not appropriately processed, a proof of transit egress device
would be required to identify the policy violation and take
corresponding actions (e.g. drop or redirect the packet, send an alert,
etc.) according to the policy.
Service Function Chaining (SFC) Proof of Transit (SFC PoT) implements
Service Chaining Proof of Transit functionality on capable network
devices. Proof of Transit defines mechanisms to securely prove that
traffic transited the defined path. After the creation of a Rendered
Service Path (RSP), a user can enable SFC proof of transit on the
selected RSP to effect the proof of transit.
To ensure that the data traffic follows a specified path or a function
chain, meta-data is added to user traffic in the form of a header. The
meta-data is based on a 'share of a secret' and provisioned by the SFC
PoT configuration from ODL over a secure channel to each of the nodes
in the SFC. This meta-data is updated at each service hop, while a
designated node called the verifier checks whether the collected
meta-data allows the retrieval of the secret.

The scheme, shown in the following diagram, essentially utilizes
Shamir's secret sharing algorithm: each service is given a point on a
curve, and as the packet travels through each service it collects these
points (meta-data); a verifier node then tries to re-construct the
curve using the collected points, thus verifying that the packet
traversed all the service functions along the chain.
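The Shamir-based idea can be illustrated with a tiny polynomial over a
prime field: the secret is the constant coefficient, each hop
contributes one share, and the verifier interpolates at zero to recover
the secret. This is a toy sketch, not the actual iOAM wire format:

```python
PRIME = 257  # toy field; real deployments use much larger primes

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest coefficient first) mod PRIME."""
    return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

def reconstruct_secret(shares):
    """Lagrange interpolation at x=0 to recover coeffs[0]."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

secret = 123
coeffs = [secret, 55, 91]  # degree 2: any 3 shares reconstruct it
shares = [(x, poly_eval(coeffs, x)) for x in (1, 2, 3)]  # one per hop
print(reconstruct_secret(shares))  # 123
```

Only a packet that collected a share at every hop yields enough points
to rebuild the curve, which is what the verifier checks.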
.. figure:: ./images/sfc/sfc-pot-intro.png
   :alt: SFC Proof of Transit overview

   SFC Proof of Transit overview

Transport options for different protocols include a new TLV in the SR
header for Segment Routing, NSH Type-2 meta-data, IPv6 extension
headers, IPv4 variants, and VXLAN-GPE. More details are captured in the
following link.

In-situ OAM: https://github.com/CiscoDevNet/iOAM

Common acronyms used in the following sections:

- SF - Service Function

- SFF - Service Function Forwarder

- SFC - Service Function Chain

- SFP - Service Function Path

- RSP - Rendered Service Path

- SFC PoT - Service Function Chain Proof of Transit
SFC Proof of Transit Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The SFC PoT feature is implemented in two parts: a north-bound handler
that augments the RSP, and a south-bound renderer that auto-generates
the required parameters and passes them on to the nodes that belong to
the SFC.

The north-bound part is enabled via the odl-sfc-pot feature, while the
south-bound renderer is enabled via the odl-sfc-pot-netconf-renderer
feature. For the purposes of SFC PoT handling, both features must be
installed.

RPC handlers to augment the RSP are part of ``SfcPotRpc``, while the
RSP augmentation to enable or disable the SFC PoT feature is done via
``SfcPotRspProcessor``.

SFC Proof of Transit entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to implement SFC Proof of Transit for a service function
chain, an RSP is a pre-requisite to identify the SFC on which to enable
SFC PoT. SFC Proof of Transit for a particular RSP is enabled by an RPC
request to the controller, along with the necessary parameters to
control some aspects of the SFC Proof of Transit process.
2501 The RPC handler identifies the RSP and adds PoT feature meta-data like
2502 enable/disable, number of PoT profiles, profiles refresh parameters etc.,
2503 that directs the south-bound renderer appropriately when RSP changes
2504 are noticed via call-backs in the renderer handlers.
2506 Administering SFC Proof of Transit
2507 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use SFC Proof of Transit, at least the following Karaf
features must be installed:
2520 - odl-netconf-topology
2522 - odl-netconf-connector-all
Please note that the odl-sfc-pot-netconf-renderer (or other renderers in the
future) must be installed for the feature to take full effect. The details of
the renderer features are described in other parts of this document.
2530 SFC Proof of Transit Tutorial
2531 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This tutorial is a simple example of how to configure Service Function
Chain Proof of Transit using the SFC PoT feature.
To enable a device to handle SFC Proof of Transit, the NETCONF device is
expected to advertise the capability defined in ioam-sb-pot.yang, present
under the sfc-model/src/main/yang folder. Base NETCONF support must also be
enabled and advertised among the device's capabilities.

NETCONF support: ``urn:ietf:params:netconf:base:1.0``
2549 PoT support: ``(urn:cisco:params:xml:ns:yang:sfc-ioam-sb-pot?revision=2017-01-12)sfc-ioam-sb-pot``
It is also expected that the devices are NETCONF-mounted and available
in the topology-netconf store.
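Mounting a device into topology-netconf is typically done with a RESTCONF PUT to the netconf-topology node list. The sketch below only *builds* such a mount payload; the node name, host address, and credentials are illustrative assumptions, not values from this guide:

```python
import json

def build_netconf_mount(node_id, host, port=830, user="admin", password="admin"):
    """Build a netconf-node-topology mount payload (illustrative values)."""
    return {
        "node": [{
            "node-id": node_id,
            # Fields from the netconf-node-topology augmentation
            "netconf-node-topology:host": host,
            "netconf-node-topology:port": port,
            "netconf-node-topology:username": user,
            "netconf-node-topology:password": password,
            "netconf-node-topology:tcp-only": False,
        }]
    }

# Hypothetical device: the name and address are assumptions for this sketch.
payload = build_netconf_mount("pot-node-1", "192.0.2.10")
print(json.dumps(payload, indent=2))
```

Such a payload would typically be PUT to
``restconf/config/network-topology:network-topology/topology/topology-netconf/node/pot-node-1``.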
When SFC Proof of Transit is installed, all NETCONF nodes in topology-netconf
are processed and all capable nodes with accessible mountpoints are cached.

The first step is to create the required RSP, as is usually done using the
RSP creation steps.

Once the RSP name is available, it is used to send a POST RPC to the
controller, similar to the one below:
2567 http://ODL-IP:8181/restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path/
.. code-block:: json

    {
        "input": {
            "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff",
            "ioam-pot-enable": true,
            "ioam-pot-num-profiles": 2,
            "ioam-pot-bit-mask": "bits32",
            "refresh-period-time-units": "milliseconds",
            "refresh-period-value": 5000
        }
    }
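The same RPC can be sent programmatically. The sketch below only *builds* the HTTP request from this payload — it does not contact a controller — and assumes the default ``admin:admin`` credentials and a local controller address:

```python
import base64
import json
import urllib.request

url = ("http://127.0.0.1:8181/restconf/operations/"
       "sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path")
payload = {"input": {
    "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff",
    "ioam-pot-enable": True,
    "ioam-pot-num-profiles": 2,
    "ioam-pot-bit-mask": "bits32",
    "refresh-period-time-units": "milliseconds",
    "refresh-period-value": 5000,
}}
# Build (but do not send) the POST request; sending requires a live ODL.
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Basic "
                              + base64.b64encode(b"admin:admin").decode()},
    method="POST",
)
print(req.get_method(), req.full_url)
```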
The following RPC can be used to disable SFC Proof of Transit on an RSP:

http://ODL-IP:8181/restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path/

.. code-block:: json

    {
        "input": {
            "sfc-ioam-pot-rsp-name": "sfc-path-3sf3sff"
        }
    }
2598 SFC PoT NETCONF Renderer User Guide
2599 -----------------------------------
The SFC Proof of Transit (PoT) NETCONF renderer implements SFC Proof of
Transit functionality on NETCONF-capable devices that have advertised
support for in-situ OAM (iOAM).

It listens for updates to existing RSPs that enable or disable Proof of
Transit support and adds the auto-generated SFC PoT configuration parameters
to all the SFC hop nodes. The last node in the SFC is configured as a
verifier node so that the SFC PoT process can be completed.
2613 Common acronyms are used as below:
2615 - SF - Service Function
2617 - SFC - Service Function Chain
2619 - RSP - Rendered Service Path
2621 - SFF - Service Function Forwarder
2624 Mapping to SFC entities
2625 ~~~~~~~~~~~~~~~~~~~~~~~
The renderer module listens to RSP updates in ``SfcPotNetconfRSPListener``
and triggers configuration generation in the ``SfcPotNetconfIoam`` class. Node
arrival and departure are managed via ``SfcPotNetconfNodeManager`` and
``SfcPotNetconfNodeListener``. In addition, a timer thread periodically
regenerates configuration to refresh the profiles on the nodes that are part
of the SFC.
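The refresh behaviour can be pictured as a simple re-arming timer. This is only a conceptual sketch in Python (the renderer itself is Java); the callback and the 20 ms period are placeholders:

```python
import threading
import time

class ProfileRefresher:
    """Conceptual sketch: periodically invoke a regenerate callback,
    mimicking the renderer's profile-refresh timer."""

    def __init__(self, refresh_ms, regenerate):
        self.refresh_s = refresh_ms / 1000.0
        self.regenerate = regenerate  # would push new config via NETCONF
        self._timer = None

    def _tick(self):
        self.regenerate()
        self.start()  # re-arm for the next refresh cycle

    def start(self):
        self._timer = threading.Timer(self.refresh_s, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()

events = []
r = ProfileRefresher(20, lambda: events.append(time.monotonic()))
r.start()
time.sleep(0.1)   # let a few refresh cycles run
r.stop()
print(len(events) >= 1)  # at least one refresh fired
```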
2635 Administering SFC PoT NETCONF Renderer
2636 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use the SFC PoT NETCONF renderer, the following Karaf features must
be installed:
2649 - odl-netconf-topology
2651 - odl-netconf-connector-all
2655 - odl-sfc-pot-netconf-renderer
2658 SFC PoT NETCONF Renderer Tutorial
2659 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This tutorial is a simple example of how to enable SFC PoT on NETCONF-capable
devices.

The NETCONF-capable device has to support the sfc-ioam-sb-pot.yang model.

It is expected that a NETCONF-capable VPP device runs the Honeycomb (hc2vpp)
Java-based agent, which helps to translate between NETCONF and the VPP
internal APIs.
2676 More details are here:
2677 In-situ OAM: https://github.com/CiscoDevNet/iOAM
When the SFC PoT NETCONF renderer module is installed, all NETCONF nodes in
topology-netconf are processed, and all nodes capable per the sfc-ioam-sb-pot
YANG model that have accessible mountpoints are cached.
The first step is to create an RSP for the SFC, as per the SFC guidelines
above.

SFC PoT is then enabled on the RSP via RESTCONF to ODL, as outlined above.

Internally, the NETCONF renderer acts on the callback for a modified RSP
that has PoT enabled.

The SFC PoT parameters are auto-generated using in-situ OAM algorithms and
sent to these nodes via NETCONF.
2695 Logical Service Function Forwarder
2696 ----------------------------------
2701 .. _sfc-user-guide-logical-sff-motivation:
When the current SFC is deployed in a cloud environment, it is assumed that
each switch connected to a Service Function is configured as a Service
Function Forwarder, and that each Service Function is connected to its
Service Function Forwarder depending on the Compute Node where its Virtual
Machine is located.
2710 .. figure:: ./images/sfc/sfc-in-cloud.png
2711 :alt: Deploying SFC in Cloud Environments
As shown in the picture above, this solution fulfils the basic cloud use
cases (for example, those required in OPNFV Brahmaputra); however, some
advanced use cases, like the transparent migration of VMs, cannot be
implemented. The Logical Service Function Forwarder enables the following
use cases:
2719 1. Service Function mobility without service disruption
2720 2. Service Functions load balancing and failover
As shown in the picture below, the Logical Service Function Forwarder concept
extends the current SFC northbound API to provide an abstraction of the
underlying Data Center infrastructure. The Data Center underlying network can
be abstracted by a single SFF. This single SFF uses the logical port UUID as a
data plane locator to connect SFs globally and in a location-transparent manner.
SFC makes use of the `Genius <./genius-user-guide.html>`__ project to track the
location of the SFs' logical ports.
2730 .. figure:: ./images/sfc/single-logical-sff-concept.png
2731 :alt: Single Logical SFF concept
2733 The SFC internally distributes the necessary flow state over the relevant
2734 switches based on the internal Data Center topology and the deployment of SFs.
2736 Changes in data model
2737 ~~~~~~~~~~~~~~~~~~~~~
2738 The Logical Service Function Forwarder concept extends the current SFC
2739 northbound API to provide an abstraction of the underlying Data Center
The Logical SFF simplifies the configuration of the current SFC data model by
reducing the number of parameters to be configured in every SFF, since the
controller discovers those parameters by interacting with the services
offered by the `Genius <./genius-user-guide.html>`__ project.
The following picture shows the Logical SFF data model. The model gets
simplified, as most of the configuration parameters of the current SFC data
model are discovered at runtime. The complete YANG model can be found here
2750 `logical SFF model <https://github.com/opendaylight/sfc/blob/master/sfc-model/src/main/yang/service-function-forwarder-logical.yang>`__.
2752 .. figure:: ./images/sfc/logical-sff-datamodel.png
2753 :alt: Logical SFF data model
2755 How to configure the Logical SFF
2756 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following examples show how to configure the Logical SFF:

.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
    --data '${JSON}' -X PUT --user \
    admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
2765 **Service Functions JSON.**
.. code-block:: json

    {
        "service-functions": {
            "service-function": [
                {
                    "name": "firewall-1",
                    "type": "firewall",
                    "sf-data-plane-locator": [
                        {
                            "name": "firewall-dpl",
                            "interface-name": "eccb57ae-5a2e-467f-823e-45d7bb2a6a9a",
                            "transport": "service-locator:eth-nsh",
                            "service-function-forwarder": "sfflogical1"
                        }
                    ]
                },
                {
                    "name": "dpi-1",
                    "type": "dpi",
                    "sf-data-plane-locator": [
                        {
                            "name": "dpi-1-dpl",
                            "interface-name": "df15ac52-e8ef-4e9a-8340-ae0738aba0c0",
                            "transport": "service-locator:eth-nsh",
                            "service-function-forwarder": "sfflogical1"
                        }
                    ]
                }
            ]
        }
    }
.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
    --data '${JSON}' -X PUT --user \
    admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
2807 **Service Function Forwarders JSON.**
.. code-block:: json

    {
        "service-function-forwarders": {
            "service-function-forwarder": [
                {
                    "name": "sfflogical1"
                }
            ]
        }
    }
.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
    --data '${JSON}' -X PUT --user \
    admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
2827 **Service Function Chains JSON.**
.. code-block:: json

    {
        "service-function-chains": {
            "service-function-chain": [
                {
                    "name": "SFC1",
                    "sfc-service-function": [
                        {
                            "name": "dpi-abstract1",
                            "type": "dpi"
                        },
                        {
                            "name": "firewall-abstract1",
                            "type": "firewall"
                        }
                    ]
                },
                {
                    "name": "SFC2",
                    "sfc-service-function": [
                        {
                            "name": "dpi-abstract1",
                            "type": "dpi"
                        }
                    ]
                }
            ]
        }
    }
.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
    --data '${JSON}' -X PUT --user \
    admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
2866 **Service Function Paths JSON.**
.. code-block:: json

    {
        "service-function-paths": {
            "service-function-path": [
                {
                    "name": "SFP1",
                    "service-chain-name": "SFC1",
                    "starting-index": 255,
                    "symmetric": "true",
                    "context-metadata": "NSH1",
                    "transport-type": "service-locator:vxlan-gpe"
                }
            ]
        }
    }
As a result of the above configuration, OpenDaylight renders the needed flows in all involved SFFs. Those flows implement:
2888 - Two Rendered Service Paths:
2890 - dpi-1 (SF1), firewall-1 (SF2)
2891 - firewall-1 (SF2), dpi-1 (SF1)
2893 - The communication between SFFs and SFs based on eth-nsh
2895 - The communication between SFFs based on vxlan-gpe
2897 The following picture shows a topology and traffic flow (in green) which corresponds to the above configuration.
2899 .. figure:: ./images/sfc/single-logical-sff-example.png
2900 :alt: Logical SFF Example
The Logical SFF functionality allows OpenDaylight to find out which SFFs hold
the SFs involved in a path. In this example, the affected SFFs are Node3 and
Node4, so the controller renders the flows containing NSH parameters only in
those nodes.

Below are the new flows, rendered in Node3 and Node4, which implement the NSH
protocol. Every Rendered Service Path is represented by an NSP value. We
provisioned a symmetric RSP, so we get two NSPs: 8388613 and 5. Node3 holds the
first SF of NSP 8388613 and the last SF of NSP 5. Node4 holds the first SF of
NSP 5 and the last SF of NSP 8388613. Both Node3 and Node4 will pop the NSH
header when the received packet has gone through the last SF of its path.
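A detail worth noting: 8388613 is 0x800005, i.e. the reverse path's NSP is the forward NSP (5) with bit 23 set (2^23 = 8388608). The quick check below verifies only this arithmetic; treating bit 23 as the symmetric-path marker is an observation from this example rather than a documented guarantee:

```python
forward_nsp = 5
reverse_nsp = forward_nsp | (1 << 23)  # set bit 23 (8388608)
print(hex(reverse_nsp), reverse_nsp)   # 0x800005 8388613
```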
2919 **Rendered flows Node 3**
2923 cookie=0x14, duration=59.264s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
2924 cookie=0x14, duration=59.194s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
2925 cookie=0x14, duration=59.257s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2926 cookie=0x14, duration=59.189s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2927 cookie=0xba5eba1100000203, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
2928 cookie=0xba5eba1100000201, duration=59.213s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
2929 cookie=0xba5eba1100000201, duration=59.188s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
2930 cookie=0xba5eba1100000201, duration=59.182s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:6
2932 **Rendered Flows Node 4**
2936 cookie=0x14, duration=69.040s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
2937 cookie=0x14, duration=69.008s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
2938 cookie=0x14, duration=69.040s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2939 cookie=0x14, duration=69.005s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2940 cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
2941 cookie=0xba5eba1100000201, duration=69.029s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:1
2942 cookie=0xba5eba1100000201, duration=68.999s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
2943 cookie=0xba5eba1100000203, duration=68.996s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)
An interesting scenario that shows the strength of the Logical SFF is the
migration of an SF from one compute node to another. OpenDaylight learns the
new topology by itself and then re-renders the flows in the newly affected
SFFs.
2950 .. figure:: ./images/sfc/single-logical-sff-example-migration.png
2951 :alt: Logical SFF - SF Migration Example
2955 Logical SFF - SF Migration Example
In our example, SF2 is moved from Node4 to Node2, so OpenDaylight removes the
NSH-specific flows from Node4 and installs them in Node2. The flows below show
this effect. Node3 keeps holding the first SF of NSP 8388613 and the last SF
of NSP 5, but Node2 becomes the new holder of the first SF of NSP 5 and the
last SF of NSP 8388613.
2964 **Rendered Flows Node 3 After Migration**
2968 cookie=0x14, duration=64.044s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
2969 cookie=0x14, duration=63.947s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
2970 cookie=0x14, duration=64.044s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=5 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2971 cookie=0x14, duration=63.947s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=8388613 actions=load:0x8e0a37cc9094->NXM_NX_ENCAP_ETH_SRC[],load:0x6ee006b4c51e->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2972 cookie=0xba5eba1100000201, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
2973 cookie=0xba5eba1100000203, duration=64.034s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=5 actions=pop_nsh,set_field:6e:e0:06:b4:c5:1e->eth_src,resubmit(,17)
2974 cookie=0xba5eba1100000201, duration=63.947s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=8388613 actions=load:0x800->NXM_NX_REG6[],resubmit(,220)
2975 cookie=0xba5eba1100000201, duration=63.942s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=set_field:0->tun_id,output:2
2977 **Rendered Flows Node 2 After Migration**
2981 cookie=0x14, duration=56.856s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=5 actions=goto_table:86
2982 cookie=0x14, duration=56.755s, table=83, n_packets=0, n_bytes=0, priority=250,nsp=8388613 actions=goto_table:86
2983 cookie=0x14, duration=56.847s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=5 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2984 cookie=0x14, duration=56.755s, table=86, n_packets=0, n_bytes=0, priority=550,nsi=254,nsp=8388613 actions=load:0xbea93873f4fa->NXM_NX_ENCAP_ETH_SRC[],load:0x214845ea85d->NXM_NX_ENCAP_ETH_DST[],goto_table:87
2985 cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=5 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
2986 cookie=0xba5eba1100000201, duration=56.823s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=5 actions=set_field:0->tun_id,output:4
2987 cookie=0xba5eba1100000201, duration=56.755s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=254,nsp=8388613 actions=load:0x1100->NXM_NX_REG6[],resubmit(,220)
2988 cookie=0xba5eba1100000203, duration=56.750s, table=87, n_packets=0, n_bytes=0, priority=650,nsi=253,nsp=8388613 actions=pop_nsh,set_field:02:14:84:5e:a8:5d->eth_src,resubmit(,17)
2990 **Rendered Flows Node 4 After Migration**
2994 -- No flows for NSH processing --
2996 .. _sfc-user-guide-classifier-impacts:
3001 As previously mentioned, in the :ref:`Logical SFF rationale
3002 <sfc-user-guide-logical-sff-motivation>`, the Logical SFF feature relies on
3003 Genius to get the dataplane IDs of the OpenFlow switches, in order to properly
3004 steer the traffic through the chain.
Since one of the classifier's objectives is to steer the packets *into* the
SFC domain, the classifier has to be aware of where the first Service
Function is located: if it migrates somewhere else, the classifier table
has to be updated accordingly, thus enabling the seamless migration of
Service Functions.
3012 For this feature, mobility of the client VM is out of scope, and should be
3013 managed by its high-availability module, or VNF manager.
Keep in mind that classification *always* occurs on the compute node where
the client VM (i.e. the traffic origin) is running.
3018 How to attach the classifier to a Logical SFF
3019 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to leverage this functionality, the classifier has to be configured
using a Logical SFF as an attachment point, specifying within it the neutron
ports of the VMs to classify.
3026 a Logical SFF as an attachment-point:
3028 **Configure an ACL**
3030 The following ACL enables traffic intended for port 80 within the subnetwork
3031 192.168.2.0/24, for RSP1 and RSP1-Reverse.
.. code-block:: json

    {
        "access-lists": {
            "acl": [
                {
                    "acl-name": "ACL1",
                    "acl-type": "ietf-access-control-list:ipv4-acl",
                    "access-list-entries": {
                        "ace": [
                            {
                                "rule-name": "ACE1",
                                "actions": {
                                    "service-function-acl:rendered-service-path": "RSP1"
                                },
                                "matches": {
                                    "destination-ipv4-network": "192.168.2.0/24",
                                    "source-ipv4-network": "192.168.2.0/24",
                                    "protocol": "6",
                                    "source-port-range": {
                                        "lower-port": 0
                                    },
                                    "destination-port-range": {
                                        "lower-port": 80
                                    }
                                }
                            }
                        ]
                    }
                },
                {
                    "acl-name": "ACL2",
                    "acl-type": "ietf-access-control-list:ipv4-acl",
                    "access-list-entries": {
                        "ace": [
                            {
                                "rule-name": "ACE2",
                                "actions": {
                                    "service-function-acl:rendered-service-path": "RSP1-Reverse"
                                },
                                "matches": {
                                    "destination-ipv4-network": "192.168.2.0/24",
                                    "source-ipv4-network": "192.168.2.0/24",
                                    "protocol": "6",
                                    "source-port-range": {
                                        "lower-port": 0
                                    },
                                    "destination-port-range": {
                                        "lower-port": 80
                                    }
                                }
                            }
                        ]
                    }
                }
            ]
        }
    }
.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
    --data '${JSON}' -X PUT --user \
    admin:admin http://localhost:8181/restconf/config/ietf-access-control-list:access-lists/
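Conceptually, ACE1 above selects TCP traffic within 192.168.2.0/24 destined to port 80 and steers it into RSP1. The sketch below is a hypothetical illustration of that match logic using Python's ``ipaddress`` module; it is not how OpenDaylight actually evaluates ACLs:

```python
import ipaddress

NET = ipaddress.ip_network("192.168.2.0/24")

def classify(src_ip, dst_ip, dst_port):
    """Return the RSP an ACE1-like rule would select, or None."""
    if (ipaddress.ip_address(src_ip) in NET
            and ipaddress.ip_address(dst_ip) in NET
            and dst_port == 80):
        return "RSP1"
    return None

print(classify("192.168.2.10", "192.168.2.20", 80))   # RSP1
print(classify("192.168.2.10", "192.168.2.20", 443))  # None
```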
3098 **Configure a classifier JSON**
The following JSON provisions a classifier, having a Logical SFF as an
attachment point. The value of the ``interface`` field is where you
indicate the neutron ports of the VMs you want to classify.
.. code-block:: json

    {
        "service-function-classifiers": {
            "service-function-classifier": [
                {
                    "name": "Classifier1",
                    "scl-service-function-forwarder": [
                        {
                            "name": "sfflogical1",
                            "interface": "09a78ba3-78ba-40f5-a3ea-1ce708367f2b"
                        }
                    ],
                    "acl": {
                        "name": "ACL1",
                        "type": "ietf-access-control-list:ipv4-acl"
                    }
                }
            ]
        }
    }
.. code-block:: bash

    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
    --data '${JSON}' -X PUT --user \
    admin:admin http://localhost:8181/restconf/config/service-function-classifier:service-function-classifiers/
3132 .. _sfc-user-guide-pipeline-impacts:
3134 SFC pipeline impacts
3135 ~~~~~~~~~~~~~~~~~~~~
3137 After binding SFC service with a particular interface by means of Genius, as
3138 explained in the :ref:`Genius User Guide <genius-user-guide-binding-services>`,
3139 the entry point in the SFC pipeline will be table 82
3140 (SFC_TRANSPORT_CLASSIFIER_TABLE), and from that point, packet processing will be
3141 similar to the :ref:`SFC OpenFlow pipeline <sfc-user-guide-sfc-of-pipeline>`,
3142 just with another set of specific tables for the SFC service.
3144 This picture shows the SFC pipeline after service integration with Genius:
3146 .. figure:: ./images/sfc/LSFF_pipeline.png
3147 :alt: SFC Logical SFF OpenFlow pipeline
3149 SFC Logical SFF OpenFlow pipeline