Service Function Chaining
=========================

OpenDaylight Service Function Chaining (SFC) Overview
-----------------------------------------------------

OpenDaylight Service Function Chaining (SFC) provides the ability to
define an ordered list of network services (e.g. firewalls, load
balancers). These services are then "stitched" together in the network to
create a service chain. This project provides the infrastructure
(chaining logic, APIs) needed for ODL to provision a service chain in
the network and an end-user application for defining such chains.
- ACE - Access Control Entry
- ACL - Access Control List
- SCF - Service Classifier Function
- SF - Service Function
- SFC - Service Function Chain
- SFF - Service Function Forwarder
- SFG - Service Function Group
- SFP - Service Function Path
- RSP - Rendered Service Path
- NSH - Network Service Header
SFC Classifier Control and Data Plane Developer Guide
-----------------------------------------------------

A description of the classifier can be found in:
https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/

The classifier manages everything from starting the packet listener to
the creation (and removal) of appropriate ip(6)tables rules and marking
received packets accordingly. Its functionality is **available only on
Linux**, as it leverages **NetfilterQueue**, which provides access to
packets matched by an **iptables** rule. The classifier requires **root
privileges** to be able to operate.

So far it is capable of processing ACLs for MAC addresses, ports, IPv4
and IPv6. Supported protocols are TCP and UDP.
Classifier Architecture
~~~~~~~~~~~~~~~~~~~~~~~

The classifier is implemented in Python and located in the project
repository at ``sfc-py/common/classifier.py``.

.. note::

   The classifier assumes that the Rendered Service Path (RSP) **already
   exists** in ODL when an ACL referencing it is obtained.

1. sfc\_agent receives an ACL and passes it for processing to the
   classifier

2. the RSP (its SFF locator) referenced by the ACL is requested from ODL

3. if the RSP exists in ODL, then ACL-based iptables rules for it are
   applied
After this process is over, every packet successfully matched to an
iptables rule (i.e. successfully classified) will be NSH encapsulated
and forwarded to the related SFF, which knows how to traverse the RSP.

Rules are created using the appropriate iptables command. If the Access
Control Entry (ACE) rule is MAC address related, both iptables and
ip6tables rules are issued. If the ACE rule is IPv4 address related, only
iptables rules are issued; likewise, only ip6tables rules are issued for
IPv6.
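The ACE-to-rule mapping above can be sketched as follows (illustrative Python, not the actual sfc-py code; the helper name, chain name and rule details are placeholders):

```python
# Hypothetical sketch: derive the iptables/ip6tables command lines for one
# ACE, following the mapping described above. Not the real classifier code.

def ace_to_iptables_cmds(ace):
    """Return the shell commands needed to install rules for one ACE."""
    cmds = []
    if 'mac' in ace:
        # MAC-related ACEs get rules in both iptables and ip6tables
        for tool in ('iptables', 'ip6tables'):
            cmds.append('%s -t raw -A <chain> -m mac --mac-source %s -j NFQUEUE'
                        % (tool, ace['mac']))
    elif 'ipv4' in ace:
        # IPv4-related ACEs go to iptables only
        cmds.append('iptables -t raw -A <chain> -s %s -j NFQUEUE' % ace['ipv4'])
    elif 'ipv6' in ace:
        # ... and IPv6-related ACEs to ip6tables only
        cmds.append('ip6tables -t raw -A <chain> -s %s -j NFQUEUE' % ace['ipv6'])
    return cmds

print(ace_to_iptables_cmds({'ipv4': '10.0.0.0/24'}))
```

Note how a MAC-related ACE yields two commands while the IP-related ones yield a single command in the matching table.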
.. note::

   The iptables **raw** table contains all created rules.

Information regarding already registered RSPs is stored in an
internal data store, which is represented as a dictionary::

    {rsp_id: {'name': <rsp_name>,
              'chains': {'chain_name': (<ipv>,), ...},
              'SFF': {'ip': <ip>, ...},
              'starting-index': <starting-index>,
              'transport-type': <transport-type>},
     ...}
- ``name``: name of the RSP

- ``chains``: dictionary of iptables chains related to the RSP, with
  information about the IP version for which each chain exists

- ``SFF``: SFF forwarding parameters

  - ``ip``: SFF IP address

- ``starting-index``: index given to the packet at the first RSP hop

- ``transport-type``: encapsulation protocol
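A minimal sketch of this internal data store, assuming the dictionary layout shown above (the helper name and values are illustrative):

```python
# Illustrative sketch of the classifier's internal RSP data store,
# mirroring the dictionary layout described above.

rsp_datastore = {}

def register_rsp(rsp_id, name, sff_ip, starting_index, transport_type):
    """Store forwarding parameters for an RSP under its id."""
    rsp_datastore[rsp_id] = {
        'name': name,
        'chains': {},                       # iptables chains, keyed by name
        'SFF': {'ip': sff_ip},              # SFF forwarding parameters
        'starting-index': starting_index,   # index given at the first RSP hop
        'transport-type': transport_type,   # encapsulation protocol
    }

register_rsp(1, 'RSP1', '192.168.1.1', 255, 'vxlan-gpe')
print(rsp_datastore[1]['SFF']['ip'])
```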
Key APIs and Interfaces
~~~~~~~~~~~~~~~~~~~~~~~

This feature exposes an API to configure the classifier (corresponding to
``service-function-classifier.yang``).

API Reference Documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

See: ``sfc-model/src/main/yang/service-function-classifier.yang``
SFC-OVS Plug-in
---------------

SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices.
Integration is realized through mapping of SFC objects (like SF, SFF,
Classifier, etc.) to OVS objects (like Bridge,
TerminationPoint=Port/Interface). The mapping takes care of automatic
instantiation (setup) of the corresponding object whenever its counterpart
is created. For example, when a new SFF is created, the SFC-OVS plug-in
will create a new OVS bridge; and when a new OVS bridge is created, the
SFC-OVS plug-in will create a new SFF.
SFC-OVS Architecture
~~~~~~~~~~~~~~~~~~~~

SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing
information from/to OVS devices. The core functionality consists of two
main parts:

a. mapping from OVS to SFC

   - OVS Bridge is mapped to SFF

   - OVS TerminationPoints are mapped to SFF DataPlane locators

b. mapping from SFC to OVS

   - SFF is mapped to OVS Bridge

   - SFF DataPlane locators are mapped to OVS TerminationPoints
.. figure:: ./images/sfc/sfc-ovs-architecture.png
   :alt: SFC <-> OVS mapping flow diagram

   SFC <-> OVS mapping flow diagram

Key APIs and Interfaces
~~~~~~~~~~~~~~~~~~~~~~~

- SFF to OVS mapping API (methods to convert an SFF object to an OVS
  Bridge and OVS TerminationPoints)

- OVS to SFF mapping API (methods to convert an OVS Bridge and OVS
  TerminationPoints to an SFF object)
SFC Southbound REST Plug-in
---------------------------

The Southbound REST Plug-in is used to send configuration from the datastore
down to network devices supporting a REST API (i.e. they have a
configured REST URI). It supports POST/PUT/DELETE operations, which are
triggered accordingly by changes in the SFC data stores, for the following
objects:

- Access Control List (ACL)
- Service Classifier Function (SCF)
- Service Function (SF)
- Service Function Group (SFG)
- Service Function Schedule Type (SFST)
- Service Function Forwarder (SFF)
- Rendered Service Path (RSP)
Southbound REST Plug-in Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. **listeners** - used to listen for changes in the SFC data stores

2. **JSON exporters** - used to export JSON-encoded data from
   binding-aware data store objects

3. **tasks** - used to collect REST URIs of network devices and to send
   JSON-encoded data down to these devices

.. figure:: ./images/sfc/sb-rest-architecture.png
   :alt: Southbound REST Plug-in Architecture diagram

   Southbound REST Plug-in Architecture diagram
Key APIs and Interfaces
~~~~~~~~~~~~~~~~~~~~~~~

The plug-in provides a Southbound REST API designated for listening REST
devices. It supports POST/PUT/DELETE operations. The operation (with
corresponding JSON-encoded data) is sent to a unique REST URL belonging to
the respective object type:
- Access Control List (ACL):
  ``http://<host>:<port>/config/ietf-acl:access-lists/access-list/``

- Service Function (SF):
  ``http://<host>:<port>/config/service-function:service-functions/service-function/``

- Service Function Group (SFG):
  ``http://<host>:<port>/config/service-function:service-function-groups/service-function-group/``

- Service Function Schedule Type (SFST):
  ``http://<host>:<port>/config/service-function-scheduler-type:service-function-scheduler-types/service-function-scheduler-type/``

- Service Function Forwarder (SFF):
  ``http://<host>:<port>/config/service-function-forwarder:service-function-forwarders/service-function-forwarder/``

- Rendered Service Path (RSP):
  ``http://<host>:<port>/operational/rendered-service-path:rendered-service-paths/rendered-service-path/``
Therefore, network devices willing to receive REST messages must listen
on these REST URLs.

.. note::

   A Service Classifier Function (SCF) URL does not exist, because the SCF
   is considered one of the network devices willing to receive REST
   messages. However, there is a listener hooked on the SCF data store,
   which triggers POST/PUT/DELETE operations for the ACL object,
   because the ACL is referenced in ``service-function-classifier.yang``.
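The URL scheme above can be illustrated with a small helper that composes the device URL for a given object type (an illustrative sketch covering a subset of the objects, not part of the plug-in's actual API):

```python
# Sketch of how the Southbound REST URLs listed above are composed.
# Paths copied from the URL list; the helper itself is hypothetical.

SB_REST_PATHS = {
    'ACL': 'config/ietf-acl:access-lists/access-list/',
    'SF':  'config/service-function:service-functions/service-function/',
    'SFF': 'config/service-function-forwarder:service-function-forwarders/'
           'service-function-forwarder/',
    'RSP': 'operational/rendered-service-path:rendered-service-paths/'
           'rendered-service-path/',
}

def sb_rest_url(host, port, obj_type):
    """Build the unique REST URL a listening device receives data on."""
    return 'http://%s:%d/%s' % (host, port, SB_REST_PATHS[obj_type])

print(sb_rest_url('10.0.0.5', 5000, 'SF'))
```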
Service Function Load Balancing Developer Guide
-----------------------------------------------

The SFC Load-Balancing feature implements load balancing of Service
Functions, rather than a one-to-one mapping between a Service Function
Forwarder and a Service Function.

Load Balancing Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Service Function Groups (SFG) can replace Service Functions (SF) in the
Rendered Path model. A Service Path can only be defined using SFGs or
SFs, but not a combination of both.

Relevant objects in the YANG model are as follows:
1. Service-Function-Group-Algorithm:

   ::

       Service-Function-Group-Algorithms {
           Service-Function-Group-Algorithm {
               String name
               String type
           }
       }

   Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
2. Service-Function-Group:

   ::

       Service-Function-Groups {
           Service-Function-Group {
               String name
               String serviceFunctionGroupAlgorithmName
               Service-Function-Group-Element {
                   String service-function-name
               }
           }
       }

3. ServiceFunctionHop: holds a reference to the name of an SFG (or SF)
Key APIs and Interfaces
~~~~~~~~~~~~~~~~~~~~~~~

This feature enhances the existing SFC API.

REST API commands include:

- For Service Function Group (SFG): read an existing SFG, write a new SFG,
  delete an existing SFG, add a Service Function (SF) to an SFG, and
  delete an SF from an SFG

- For Service Function Group Algorithm (SFG-Alg): read, write, delete

- Bundle providing the REST API: sfc-sb-rest

- Service Function Groups and Algorithms are defined in: sfc-sfg and
  sfc-sfg-alg

- Relevant Java APIs: SfcProviderServiceFunctionGroupAPI,
  SfcProviderServiceFunctionGroupAlgAPI
Service Function Scheduling Algorithms
--------------------------------------

When creating the Rendered Service Path (RSP), the earlier release of
SFC chose the first available service function from a list of service
function names. Now a new API is introduced to allow developers to
develop their own scheduling algorithms when creating the RSP. Four
scheduling algorithms (Random, Round Robin, Load Balance and
Shortest Path) are provided as examples for the API definition. This
guide gives a simple introduction to developing service function
scheduling algorithms based on the current extensible framework.
The following figure illustrates the service function selection
framework and algorithms.

.. figure:: ./images/sfc-sf-selection-arch.png
   :alt: SF Scheduling Algorithm framework Architecture

   SF Scheduling Algorithm framework Architecture
The YANG model defines the Service Function Scheduling Algorithm type
identities and how they are stored in the MD-SAL data store for the
scheduling algorithms.

The MD-SAL data store stores all information for the scheduling
algorithms, including their types, names, and status.

The API provides some basic methods to manage the information stored in
the MD-SAL data store, like putting new items into it, getting all
scheduling algorithms, etc.

The RESTCONF API provides methods to manage the information stored in the
MD-SAL data store through RESTful calls.

The Service Function Chain Renderer gets the enabled scheduling
algorithm type, and schedules the service functions with the scheduling
algorithm implementation.
Key APIs and Interfaces
~~~~~~~~~~~~~~~~~~~~~~~

While developing a new Service Function Scheduling Algorithm, a new
class should be added, and it should extend the base scheduler class
SfcServiceFunctionSchedulerAPI. The new class should implement the
abstract method
``public List<String> scheduleServiceFunctions(ServiceFunctionChain chain, int serviceIndex)``.

- ``ServiceFunctionChain chain``: the chain which will be rendered

- ``int serviceIndex``: the initial service index for this rendered
  service path

- ``List<String>``: a list of service function names scheduled by the
  Service Function Scheduling Algorithm
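As a conceptual sketch only — the real API is Java, in SfcServiceFunctionSchedulerAPI — a round-robin scheduler honoring this contract could look like the following Python (the chain layout and SF names are hypothetical):

```python
# Conceptual Python sketch of a scheduling algorithm implementing the
# scheduleServiceFunctions contract: given a chain of SF types, return
# one concrete SF name per type. Illustrative only.

import itertools

class RoundRobinScheduler:
    def __init__(self):
        self._counter = itertools.count()

    def schedule_service_functions(self, chain, service_index):
        """chain: list of (sf_type, candidate_names); returns chosen names."""
        selected = []
        for sf_type, candidates in chain:
            # pick the next candidate in round-robin order
            idx = next(self._counter) % len(candidates)
            selected.append(candidates[idx])
        return selected

chain = [('firewall', ['fw-1', 'fw-2']), ('napt44', ['napt-1', 'napt-2'])]
scheduler = RoundRobinScheduler()
print(scheduler.schedule_service_functions(chain, 255))  # ['fw-1', 'napt-2']
```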
API Reference Documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Please refer to the API docs generated in the mdsal-apidocs.
SFC Proof of Transit Developer Guide
------------------------------------

SFC Proof of Transit implements the in-situ OAM (iOAM) Proof of Transit
(PoT) verification for SFCs and other paths. The implementation is broadly
divided into the North-bound (NB) and the South-bound (SB) side of the
application. The NB side is primarily charged with augmenting the RSP
with user inputs for enabling PoT on the RSP, while the SB side is
dedicated to auto-generating the SFC PoT parameters, periodically
refreshing these parameters, and delivering them to the NETCONF- and
iOAM-capable nodes (e.g. VPP instances).
The following diagram gives a high-level overview of the different parts.

.. figure:: ./images/sfc-pot-int-arch.png
   :alt: SFC Proof of Transit Internal Architecture

   SFC Proof of Transit Internal Architecture

The Proof of Transit feature is enabled by two sub-features:

1. ODL SFC PoT: ``feature:install odl-sfc-pot``

2. ODL SFC PoT NETCONF Renderer: ``feature:install odl-sfc-pot-netconf-renderer``
The following classes and handlers are involved.

1. The SfcPotRpc class sets up RPC handlers for enabling the feature.

2. There are new RPC handlers for two new RPCs
   (EnableSfcIoamPotRenderedPath and DisableSfcIoamPotRenderedPath),
   effected via the SfcPotRspProcessor class.

3. When a user configures, via a POST RPC call, the enabling of Proof of
   Transit on a particular SFC (via the Rendered Service Path), the
   configuration drives the creation of the necessary augmentations to the
   RSP (to modify the RSP) to effect the Proof of Transit configuration.

4. The augmentation meta-data added to the RSP is defined in the
   sfc-ioam-nb-pot.yang file.

   .. note::

      There are no auto-generated configuration parameters added to the RSP.
5. Adding SFC Proof of Transit meta-data to the RSP is done in the
   SfcPotRspProcessor class.

6. Once the RSP is updated, the RSP data listeners in the SB renderer module
   (odl-sfc-pot-netconf-renderer) will listen to the RSP changes and send
   out configurations to the necessary network nodes that are part of the SFC.

7. The configurations are handled mainly in the SfcPotAPI,
   SfcPotConfigGenerator, SfcPotPolyAPI, SfcPotPolyClass and
   SfcPotPolyClassAPI classes.

8. The sfc-ioam-sb-pot.yang file shows the format of the iOAM PoT
   configuration data sent to each node of the SFC.

9. A timer is started based on the "ioam-pot-refresh-period" value in the
   SB renderer module, which handles configuration refresh periodically.

10. The SB and timer handling are done in the odl-sfc-pot-netconf-renderer
    module.

    .. note::

       This is NOT done in the NB odl-sfc-pot module, to avoid periodic
       updates to the RSP itself.
11. ODL creates a new profile of a set of keys and secrets at a constant rate
    and updates an internal data store with the configuration. The controller
    labels the configurations per RSP as "even" or "odd", and the controller
    cycles between "even" and "odd" labeled profiles. The rate at which these
    profiles are communicated to the nodes is configurable and, in the future,
    could be automatic based on profile usage. Once the profile has been
    successfully communicated to all nodes (all NETCONF transactions
    completed), the controller sends an "enable pot-profile" request to the
    ingress node.

12. The nodes are to maintain two profiles (an even and an odd pot-profile).
    One profile is currently active and in use, and one profile is about to
    get used. A flag in the packet indicates whether the odd or even
    pot-profile is to be used by a node. This is to ensure that the service
    is not disrupted during a profile change. That is, if the "odd" profile
    is active, the controller can communicate the "even" profile to all
    nodes, and only when all the nodes have received it will the controller
    tell the ingress node to switch to the "even" profile. Given that the
    indicator travels within the packet, all nodes will switch to the
    "even" profile. The "even" profile becomes active on all nodes, and the
    nodes are ready to receive a new "odd" profile.
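The even/odd profile cycling described in steps 11-12 can be sketched as follows (illustrative data structures, not the actual ODL classes; NETCONF delivery to the nodes is elided):

```python
# Simplified sketch of the controller-side even/odd profile rotation:
# each refresh installs a new profile under the currently unused label,
# then switches the active label (the "enable pot-profile" step).

class PotProfileManager:
    def __init__(self):
        self.profiles = {'even': None, 'odd': None}
        self.active = None  # label of the profile currently in use

    def refresh(self, new_profile):
        """Install new_profile under the unused label and activate it."""
        if self.active == 'even':
            target = 'odd'
        else:
            target = 'even'  # also covers the initial state
        self.profiles[target] = new_profile
        # ... communicate the profile to all nodes via NETCONF here ...
        self.active = target  # "enable pot-profile" sent to the ingress node
        return target

mgr = PotProfileManager()
print(mgr.refresh({'keys': 'k1'}))  # 'even'
print(mgr.refresh({'keys': 'k2'}))  # 'odd'
```

The switch to the new label happens only after all nodes hold the new profile, which is what keeps the service undisrupted during a profile change.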
13. A HashedTimerWheel implementation is used to support the periodic
    configuration refresh. The default refresh period is 5 seconds to start
    with.

14. Depending on the last updated profile, the odd or the even profile is
    updated on each timer pop and the configurations are sent down to the
    nodes.

15. SfcPotTimerQueue, SfcPotTimerWheel, SfcPotTimerTask, SfcPotTimerData
    and SfcPotTimerThread are the classes that handle the Proof of
    Transit protocol profile refresh implementation.

16. The RSP data store is NOT changed periodically; the timer and
    configuration refresh modules are present in the SB renderer module
    handler, and hence there are no scale or RSP churn issues
    affecting the design.
The following diagram gives the overall sequence diagram of the interactions
between the different classes.

.. figure:: ./images/sfc-pot-time-seq.png
   :alt: SFC Proof of Transit Sequence Diagram

   SFC Proof of Transit Sequence Diagram
Logical Service Function Forwarder
----------------------------------

When the current SFC is deployed in a cloud environment, it is assumed that
each switch connected to a Service Function is configured as a Service
Function Forwarder, and that each Service Function is connected to its
Service Function Forwarder depending on the compute node where the Virtual
Machine is located. This solution allows the basic cloud use cases to be
fulfilled, such as the ones required in OPNFV Brahmaputra; however, some
advanced use cases, like the transparent migration of VMs, cannot be
implemented. The Logical Service Function Forwarder enables the following
advanced use cases:

1. Service Function mobility without service disruption

2. Service Function load balancing and failover

As shown in the picture below, the Logical Service Function Forwarder concept
extends the current SFC northbound API to provide an abstraction of the
underlying Data Center infrastructure. The Data Center underlay network can
be abstracted by a single SFF. This single SFF uses the logical port UUID as
the data plane locator to connect SFs globally and in a location-transparent
manner. SFC makes use of the Genius project to track the location of the
SF's logical ports.
.. figure:: ./images/sfc/single-logical-sff-concept.png
   :alt: Single Logical SFF concept

SFC internally distributes the necessary flow state over the relevant
switches based on the internal Data Center topology and the deployment of
SFs.
Changes in the data model
~~~~~~~~~~~~~~~~~~~~~~~~~

The Logical Service Function Forwarder concept extends the current SFC
northbound API to provide an abstraction of the underlying Data Center
infrastructure.

The Logical SFF simplifies the configuration of the current SFC data model
by reducing the number of parameters to be configured in every SFF, since
the controller will discover those parameters by interacting with the
services offered by the Genius project.

The following picture shows the Logical SFF data model. The model is
simplified, as most of the configuration parameters of the current SFC data
model are discovered at runtime. The complete YANG model can be found here:
`logical SFF model
<https://github.com/opendaylight/sfc/blob/master/sfc-model/src/main/yang/service-function-forwarder-logical.yang>`__.
.. figure:: ./images/sfc/logical-sff-datamodel.png
   :alt: Logical SFF data model

There are other minor changes in the data model; the SFC encapsulation type
has been added or moved in the following files:

- `RSP data model <https://github.com/opendaylight/sfc/blob/master/sfc-model/src/main/yang/rendered-service-path.yang>`__

- `SFP data model <https://github.com/opendaylight/sfc/blob/master/sfc-model/src/main/yang/service-function-path.yang>`__

- `Service Locator data model <https://github.com/opendaylight/sfc/blob/master/sfc-model/src/main/yang/service-locator.yang>`__
Interaction with Genius
~~~~~~~~~~~~~~~~~~~~~~~

The *sfc-genius* feature functionally enables SFC integration with Genius.
This allows configuring a Logical SFF and SFs attached to this Logical SFF
via logical interfaces (i.e. neutron ports) that are registered with Genius.

As shown in the following picture, SFC interacts with the Genius project's
services to provide the Logical SFF functionality.

.. figure:: ./images/sfc/sfc-genius-interaction.png

The following are the main Genius services used by SFC:

1. Interaction with the Interface Tunnel Manager (ITM)

2. Interaction with the Interface Manager

3. Interaction with the Resource Manager
SFC Service registration with Genius
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Genius handles the coexistence of different network services. As such, the
SFC service is registered with Genius by performing the following actions:

SFC Service Binding
    As soon as a Service Function associated to the Logical SFF is involved
    in a Rendered Service Path, the SFC service is bound to its logical
    interface via the Genius Interface Manager. This has the effect of
    forwarding every incoming packet from the Service Function to the SFC
    pipeline of the attached switch, as long as it is not consumed by a
    different bound service with higher priority.

SFC Service Terminating Action
    As soon as the SFC service is bound to the interface of a Service
    Function for the first time on a specific switch, a terminating service
    action is configured on that switch via the Genius Interface Tunnel
    Manager. This has the effect of forwarding every incoming packet from a
    different switch to the SFC pipeline, as long as the traffic is VXLAN
    encapsulated on VNI 0.
The following sequence diagrams depict how the overall process takes place:

.. figure:: ./images/sfc/sfc-genius-at-rsp-render.png
   :alt: sfc-genius at RSP render

   SFC genius module interaction with Genius at RSP creation.

.. figure:: ./images/sfc/sfc-genius-at-rsp-removal.png
   :alt: sfc-genius at RSP removal

   SFC genius module interaction with Genius at RSP removal.

For more information on how Genius allows different services to coexist,
see the :ref:`Genius User Guide <genius-user-guide>`.
Path Rendering
^^^^^^^^^^^^^^

During path rendering, Genius is queried to obtain needed information,
such as:

- Location of a logical interface on the data plane.

- Tunnel interface for a specific pair of source and destination switches.

- Egress OpenFlow actions to output packets to a specific interface.

See the :ref:`RSP Rendering <sfc-genius-path-rendering>` section for more
information.
VM Migration
^^^^^^^^^^^^

Upon VM migration, its logical interface is first unregistered and then
registered with Genius, possibly at a new physical location. *sfc-genius*
reacts to this by re-rendering all the RSPs in which the associated SF
participates, if any.

The following picture illustrates the process:

.. figure:: ./images/sfc/sfc-genius-at-vm-migration.png
   :alt: sfc-genius at VM migration

   SFC genius module at VM migration.
.. _sfc-genius-path-rendering:

RSP Rendering changes for paths using the Logical SFF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. **Construction of the auxiliary rendering graph**

   When starting the rendering of an RSP, the SFC renderer builds an auxiliary graph with information about the required hops for traffic traversing the path. RSP processing is achieved by iteratively evaluating each of the entries in the graph, writing the required flows in the proper switch for each hop.

   It is important to note that the graph includes both traffic ingress (i.e. traffic entering into the first SF) and traffic egress (i.e. traffic leaving the chain from the last SF) as hops. Therefore, the number of entries in the graph equals the number of SFs in the chain plus one.

   .. figure:: ./images/sfc/sfc-genius-example-auxiliary-graph.png

   The process of rendering a chain when the switches involved are part of the Logical SFF also starts with the construction of the hop graph. The difference is that when the SFs used in the chain are using a logical interface, the SFC renderer will also retrieve from Genius the DPIDs for the switches, storing them in the graph. In this context, those switches are the ones in the compute nodes each SF is hosted on at the time the chain is rendered.

   .. figure:: ./images/sfc/sfc-genius-example-auxiliary-graph-logical-sff.png
#. **New transport processor**

   Transport processors are classes which calculate and write the correct flows for a chain. Each transport processor specializes in writing the flows for a given combination of transport type and SFC encapsulation.

   A specific transport processor has been created for paths using a Logical SFF. A particularity of this transport processor is that its use is not only determined by the transport / SFC encapsulation combination, but also by the fact that the chain is using a Logical SFF. The actual condition evaluated for selecting the Logical SFF transport processor is that the SFs in the chain are using logical interface locators, and that the DPIDs for those locators can be successfully retrieved from Genius.

   .. figure:: ./images/sfc/transport_processors_class_diagram.png

   The main differences between the Logical SFF transport processor and other processors are the following:

   - Instead of the srcSff and dstSff fields in the hops graph (which are all equal in a path using a Logical SFF), the Logical SFF transport processor uses the previously stored srcDpnId and dstDpnId fields in order to know whether an actual hop between compute nodes must be performed or not (it is possible that two consecutive SFs are collocated in the same compute node).

   - When a hop between switches really has to be performed, it relies on Genius for getting the actions to perform that hop. The retrieval of those actions involves two steps:

     - First, Genius' Overlay Tunnel Manager module is used in order to retrieve the target interface for a jump between the source and the destination DPIDs.

     - Then, egress instructions for that interface are retrieved from Genius' Interface Manager.

   - There are no next-hop rules between compute nodes, only egress instructions (the transport zone tunnels have all the required routing information).

   - Next-hop information towards SFs uses MAC addresses which are also retrieved from the Genius datastore.

   - The Logical SFF transport processor performs NSH decapsulation in the last switch of the chain.
#. **Post-rendering update of the operational data model**

   When the rendering of a chain finishes successfully, the Logical SFF transport processor performs two operational datastore modifications in order to provide some relevant runtime information about the chain. The exposed information is the following:

   - Rendered Service Path state: when the chain uses a Logical SFF, the DPIDs of the switches in the compute nodes on which the SFs participating in the chain are hosted are added to the hop information.

   - SFF state: a new list of all RSPs which use each DPID has been added. It is updated on each RSP addition / deletion.
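The auxiliary graph and the DPID-based hop decision described above can be sketched as follows (all names, field layouts and DPID values are illustrative, not the actual renderer code):

```python
# Illustrative sketch: one graph entry per SF plus one for egress, each
# carrying the source/destination DPIDs retrieved from Genius, so that a
# hop between compute nodes is skipped when two consecutive SFs are
# collocated on the same switch.

def build_hop_graph(sf_locations, classifier_dpid):
    """sf_locations: list of (sf_name, dpid) in chain order."""
    graph = []
    prev_dpid = classifier_dpid
    for sf_name, dpid in sf_locations:
        graph.append({'sf': sf_name, 'srcDpnId': prev_dpid, 'dstDpnId': dpid})
        prev_dpid = dpid
    # egress entry: traffic leaving the chain from the last SF
    graph.append({'sf': None, 'srcDpnId': prev_dpid,
                  'dstDpnId': classifier_dpid})
    return graph

def hop_needed(entry):
    """No inter-switch hop when both endpoints share a compute node."""
    return entry['srcDpnId'] != entry['dstDpnId']

graph = build_hop_graph([('fw-1', 10), ('dpi-1', 10)], classifier_dpid=20)
print(len(graph))            # 3 == number of SFs + 1
print(hop_needed(graph[1]))  # False: both SFs on DPID 10
```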
Classifier impacts
~~~~~~~~~~~~~~~~~~

This section explains the changes made to the SFC classifier, enabling it
to be attached to Logical SFFs.

Refer to the following image to better understand the concept, and the
required steps to implement the feature.

.. figure:: ./images/sfc/sfc-classifier-genius-integration.png
   :alt: Classifier integration with Genius

   SFC classifier integration with Genius.

As stated in the :ref:`SFC User Guide <sfc-user-guide-classifier-impacts>`,
the classifier needs to be provisioned using logical interfaces as
attachment points.

When that happens, MD-SAL will trigger an event in the odl-sfc-scf-openflow
feature (i.e. the sfc-classifier), which is responsible for installing the
classifier flows in the classifier switches.
The first step of the process is to bind the interfaces to classify in
Genius, in order for the desired traffic (originating from the VMs having
the provisioned attachment points) to enter the SFC pipeline. This will make
traffic reach table 82 (the SFC classifier table), coming from table 0 (the
table managed by Genius, shared by all applications).

The next step is deciding which flows to install in the SFC classifier
table. A table-miss flow will be installed, having a MatchAny clause, whose
action is to jump to Genius' egress dispatcher table. This enables traffic
intended for other applications to still be processed.

The flow that allows the SFC pipeline to continue is added next, having a
higher match priority than the table-miss flow. This flow has two
responsibilities:
1. **Push the NSH header, along with its metadata (required within the SFC pipeline)**

   It uses the specified ACL matches as match criteria, and pushes NSH along
   with its metadata into the Action list.

2. **Advance the SFC pipeline**

   It forwards the traffic to the first Service Function in the RSP. This
   steers packets into the SFC domain, and how it is done depends on whether
   the classifier is co-located with the first Service Function in the
   specified RSP.

   Should the classifier be co-located (i.e. in the same compute node), a
   new instruction is appended to the flow, telling all matches to jump to
   the transport ingress table.

   If not, Genius' tunnel manager service is queried to get the tunnel
   interface connecting the classifier node with the compute node where the
   first Service Function is located, and finally, Genius' interface manager
   service is queried asking for instructions on how to reach that tunnel
   interface.

   These actions are then appended to the Action list already containing the
   push NSH and push NSH metadata actions, and written in an Apply-Actions
   instruction into the datastore.
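The co-located / remote decision described above can be sketched as follows (the function names, the table number and the action tuples are hypothetical, not actual Genius APIs):

```python
# Illustrative sketch of the classifier's branching logic: jump to the
# transport ingress table when co-located with the first SF, otherwise
# resolve the tunnel interface and its egress actions via Genius-like
# callbacks. All identifiers are placeholders.

SFC_TRANSPORT_INGRESS_TABLE = 83  # hypothetical table number

def build_classifier_actions(classifier_node, first_sf_node,
                             get_tunnel_interface, get_egress_actions):
    """Return the actions appended after push-NSH / push-NSH-metadata."""
    if classifier_node == first_sf_node:
        # co-located: just jump to the transport ingress table
        return [('goto_table', SFC_TRANSPORT_INGRESS_TABLE)]
    # not co-located: ask for the tunnel to the first SF's compute node...
    tunnel = get_tunnel_interface(classifier_node, first_sf_node)
    # ...and for the egress actions needed to reach that tunnel interface
    return get_egress_actions(tunnel)

actions = build_classifier_actions(
    'node-1', 'node-2',
    get_tunnel_interface=lambda src, dst: 'tun-%s-%s' % (src, dst),
    get_egress_actions=lambda tun: [('output', tun)])
print(actions)  # [('output', 'tun-node-1-node-2')]
```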