Merge "Migrate ALTO user docs to rst"
author: Colin Dixon <colin@colindixon.com>
	Mon, 5 Sep 2016 18:13:56 +0000 (18:13 +0000)
committer: Gerrit Code Review <gerrit@opendaylight.org>
	Mon, 5 Sep 2016 18:13:56 +0000 (18:13 +0000)
372 files changed:
docs/developer-guide/alto-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/atrium-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/bgp-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/bgp-monitoring-protocol-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/cardinal_-opendaylight-monitoring-as-a-service.rst [new file with mode: 0644]
docs/developer-guide/controller.rst [new file with mode: 0644]
docs/developer-guide/images/ConfigurationService-example1.png [new file with mode: 0644]
docs/developer-guide/images/Get.png [new file with mode: 0644]
docs/developer-guide/images/L3FwdSample.png [new file with mode: 0644]
docs/developer-guide/images/MonitorResponse.png [new file with mode: 0644]
docs/developer-guide/images/OVSDB_Eclipse.png [new file with mode: 0644]
docs/developer-guide/images/Put.png [new file with mode: 0644]
docs/developer-guide/images/Screenshot8.png [new file with mode: 0644]
docs/developer-guide/images/Transaction.jpg [new file with mode: 0644]
docs/developer-guide/images/bgpcep/PathAttributesSerialization.png [new file with mode: 0644]
docs/developer-guide/images/bgpcep/RIB.png [new file with mode: 0644]
docs/developer-guide/images/bgpcep/bgp-dependency-tree.png [new file with mode: 0644]
docs/developer-guide/images/bgpcep/pcep-dependency-tree.png [new file with mode: 0644]
docs/developer-guide/images/bgpcep/pcep-parsing.png [new file with mode: 0644]
docs/developer-guide/images/bgpcep/validation.png [new file with mode: 0644]
docs/developer-guide/images/configuration.jpg [new file with mode: 0644]
docs/developer-guide/images/netide/arch-engine.jpg [new file with mode: 0644]
docs/developer-guide/images/neutron/odl-neutron-service-developer-architecture.png [new file with mode: 0644]
docs/developer-guide/images/ocpplugin/ocp-sb-plugin.jpg [new file with mode: 0644]
docs/developer-guide/images/ocpplugin/ocpagent-state-machine.jpg [new file with mode: 0644]
docs/developer-guide/images/ocpplugin/ocpplugin-state-machine.jpg [new file with mode: 0644]
docs/developer-guide/images/ocpplugin/plugin-design.jpg [new file with mode: 0644]
docs/developer-guide/images/openflowjava/500px-UdpChannelPipeline.png [new file with mode: 0644]
docs/developer-guide/images/openflowjava/800px-Extensibility.png [new file with mode: 0644]
docs/developer-guide/images/openflowjava/800px-Extensibility2.png [new file with mode: 0644]
docs/developer-guide/images/openflowjava/Library_lifecycle.png [new file with mode: 0644]
docs/developer-guide/images/openstack_integration.png [new file with mode: 0644]
docs/developer-guide/images/ovsdb-sb-active-connection.jpg [new file with mode: 0644]
docs/developer-guide/images/ovsdb-sb-config-crud.jpg [new file with mode: 0644]
docs/developer-guide/images/ovsdb-sb-oper-crud.jpg [new file with mode: 0644]
docs/developer-guide/images/ovsdb-sb-passive-connection.jpg [new file with mode: 0644]
docs/developer-guide/images/ovsdb/ODL_SFC_Architecture.png [new file with mode: 0644]
docs/developer-guide/images/packetcable-developer-wireshark.png [new file with mode: 0644]
docs/developer-guide/images/sfc-sf-selection-arch.png [new file with mode: 0644]
docs/developer-guide/images/sfc/sb-rest-architecture.png [new file with mode: 0644]
docs/developer-guide/images/sfc/sfc-ovs-architecture.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/Inventory_Rendering_Use_case.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/Inventory_model_listener_diagram.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/LinkComputation.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/LinkComputationFlowDiagram.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/ModelAdapter.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/Network_topology_model_flow_diagram.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/TopologyRequestHandler_classesRelationship.png [new file with mode: 0644]
docs/developer-guide/images/topoprocessing/wrapper.png [new file with mode: 0644]
docs/developer-guide/images/ttp-screen1-basic-auth.png [new file with mode: 0644]
docs/developer-guide/images/ttp-screen2-applied-basic-auth.png [new file with mode: 0644]
docs/developer-guide/images/ttp-screen3-sent-put.png [new file with mode: 0644]
docs/developer-guide/images/ttp-screen4-get-json.png [new file with mode: 0644]
docs/developer-guide/images/ttp-screen5-get-xml.png [new file with mode: 0644]
docs/developer-guide/index.rst
docs/developer-guide/infrautils-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/iotdm-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/lacp-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/nemo-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/netconf-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/netide-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/neutron-northbound.rst [new file with mode: 0644]
docs/developer-guide/neutron-service-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/ocp-plugin-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/of-config-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/openflow-protocol-library-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/ovsdb-netvirt.rst [new file with mode: 0644]
docs/developer-guide/packetcable-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/pcep-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/service-function-chaining.rst [new file with mode: 0644]
docs/developer-guide/topology-processing-framework-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/ttp-cli-tools-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/ttp-model-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/uni-manager-plug-in-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/unified-secure-channel.rst [new file with mode: 0644]
docs/developer-guide/yang-push-developer-guide.rst [new file with mode: 0644]
docs/developer-guide/yang-tools.rst [new file with mode: 0644]
docs/getting-started-guide/project-specific-guides/yangide.rst
docs/opendaylight-with-openstack/index.rst
docs/opendaylight-with-openstack/openstack-with-gbp-vpp.rst [new file with mode: 0755]
docs/submodules/aaa
docs/submodules/integration/test
docs/submodules/netconf
docs/submodules/odlparent
docs/submodules/releng/builder
docs/user-guide/atrium-user-guide.rst [new file with mode: 0644]
docs/user-guide/bgp-monitoring-protocol-user-guide.rst [new file with mode: 0644]
docs/user-guide/bgp-user-guide.rst [new file with mode: 0644]
docs/user-guide/capwap-user-guide.rst [new file with mode: 0644]
docs/user-guide/cardinal_-opendaylight-monitoring-as-a-service.rst [new file with mode: 0644]
docs/user-guide/centinel-user-guide.rst [new file with mode: 0644]
docs/user-guide/didm-user-guide.rst [new file with mode: 0644]
docs/user-guide/genius-user-guide.rst [new file with mode: 0644]
docs/user-guide/group-based-policy-user-guide.rst [new file with mode: 0644]
docs/user-guide/images/ODL_lfm_Be_component.jpg [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBPTerminology1.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBPTerminology2.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBPTerminology3.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBP_AccessModel_simple.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Contract.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Forwarding.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBP_ForwardingModel_simple.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBP_High-levelBerylliumArchitecture.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/GBP_High-levelExtraRenderer.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/High-levelBerylliumArchitectureEvolution2.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/IntentSystemPolicySurfaces.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network-example.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port-example.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-router.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-securitygroup.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet-example.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ofoverlay-1-components.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ofoverlay-2-components.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ofoverlay-3-flowpipeline.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/sfc-1-topology.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/sfc-2-symmetric.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/sfc-3-asymmetric.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-1-basicview.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-2-governanceview.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-3-governanceview-expressed.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-0.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-1-subject.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-2-epg.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-renderer.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-1.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-2.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-3.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-4.png [new file with mode: 0644]
docs/user-guide/images/groupbasedpolicy/ui-6-wizard.png [new file with mode: 0644]
docs/user-guide/images/l2switch-address-observations.png [new file with mode: 0644]
docs/user-guide/images/l2switch-hosts.png [new file with mode: 0644]
docs/user-guide/images/l2switch-stp-status.png [new file with mode: 0644]
docs/user-guide/images/netide/netide-flow.jpg [new file with mode: 0644]
docs/user-guide/images/netide/netidearch.jpg [new file with mode: 0644]
docs/user-guide/images/neutron/odl-neutron-service-architecture.png [new file with mode: 0644]
docs/user-guide/images/nic/Redirect_flow.png [new file with mode: 0644]
docs/user-guide/images/nic/Service_Chaining.png [new file with mode: 0644]
docs/user-guide/images/ocpplugin/dlux-ocp-apis.jpg [new file with mode: 0644]
docs/user-guide/images/ocpplugin/dlux-ocp-nodes.jpg [new file with mode: 0644]
docs/user-guide/images/ocpplugin/message_flow.jpg [new file with mode: 0644]
docs/user-guide/images/ocpplugin/ocp-sb-plugin.jpg [new file with mode: 0644]
docs/user-guide/images/ocpplugin/plugin-config.jpg [new file with mode: 0644]
docs/user-guide/images/ocpplugin/plugin-design.jpg [new file with mode: 0644]
docs/user-guide/images/ovsdb/ovsdb-netvirt-architecture.jpg [new file with mode: 0644]
docs/user-guide/images/packetcable-postman.png [new file with mode: 0644]
docs/user-guide/images/sfc/RESTClient-snapshot.png [new file with mode: 0644]
docs/user-guide/images/sfc/karaf-webui-select-a-type.png [new file with mode: 0644]
docs/user-guide/images/sfc/sb-rest-architecture-user.png [new file with mode: 0644]
docs/user-guide/images/sfc/sf-rendered-service-path.png [new file with mode: 0644]
docs/user-guide/images/sfc/sf-schedule-type.png [new file with mode: 0644]
docs/user-guide/images/sfc/sf-selection-arch.png [new file with mode: 0644]
docs/user-guide/images/sfc/sfc-ovs-architecture-user.png [new file with mode: 0644]
docs/user-guide/images/sfc/sfc-ui-architecture.png [new file with mode: 0644]
docs/user-guide/images/sfc/sfcofrenderer_architecture.png [new file with mode: 0644]
docs/user-guide/images/sfc/sfcofrenderer_nwtopo.png [new file with mode: 0644]
docs/user-guide/images/snmp4sdn_getvlantable_postman.jpg [new file with mode: 0644]
docs/user-guide/images/snmp4sdn_in_odl_architecture.jpg [new file with mode: 0644]
docs/user-guide/images/vtn/Creare_Network_Step_1.png [new file with mode: 0644]
docs/user-guide/images/vtn/Create_Network.png [new file with mode: 0644]
docs/user-guide/images/vtn/Create_Network_Step_2.png [new file with mode: 0644]
docs/user-guide/images/vtn/Create_Network_Step_3.png [new file with mode: 0644]
docs/user-guide/images/vtn/Dlux_login.png [new file with mode: 0644]
docs/user-guide/images/vtn/Dlux_topology.png [new file with mode: 0644]
docs/user-guide/images/vtn/How_to_provision_virtual_L2_network.png [new file with mode: 0644]
docs/user-guide/images/vtn/Hypervisors.png [new file with mode: 0644]
docs/user-guide/images/vtn/Instance_Console.png [new file with mode: 0644]
docs/user-guide/images/vtn/Instance_Creation.png [new file with mode: 0644]
docs/user-guide/images/vtn/Instance_ping.png [new file with mode: 0644]
docs/user-guide/images/vtn/Launch_Instance.png [new file with mode: 0644]
docs/user-guide/images/vtn/Launch_Instance_network.png [new file with mode: 0644]
docs/user-guide/images/vtn/Load_All_Instances.png [new file with mode: 0644]
docs/user-guide/images/vtn/Mininet_Configuration.png [new file with mode: 0644]
docs/user-guide/images/vtn/MutiController_Example_diagram.png [new file with mode: 0644]
docs/user-guide/images/vtn/OpenStackGui.png [new file with mode: 0644]
docs/user-guide/images/vtn/OpenStack_Demo_Picture.png [new file with mode: 0644]
docs/user-guide/images/vtn/Pathmap.png [new file with mode: 0644]
docs/user-guide/images/vtn/Service_Chaining_With_One_Service.png [new file with mode: 0644]
docs/user-guide/images/vtn/Service_Chaining_With_One_Service_LLD.png [new file with mode: 0644]
docs/user-guide/images/vtn/Service_Chaining_With_One_Service_Verification.png [new file with mode: 0644]
docs/user-guide/images/vtn/Service_Chaining_With_Two_Services.png [new file with mode: 0644]
docs/user-guide/images/vtn/Service_Chaining_With_Two_Services_LLD.png [new file with mode: 0644]
docs/user-guide/images/vtn/Single_Controller_Mapping.png [new file with mode: 0644]
docs/user-guide/images/vtn/Tenant2.png [new file with mode: 0644]
docs/user-guide/images/vtn/VTN_API.jpg [new file with mode: 0644]
docs/user-guide/images/vtn/VTN_Construction.jpg [new file with mode: 0644]
docs/user-guide/images/vtn/VTN_Flow_Filter.jpg [new file with mode: 0644]
docs/user-guide/images/vtn/VTN_Mapping.jpg [new file with mode: 0644]
docs/user-guide/images/vtn/VTN_Overview.jpg [new file with mode: 0644]
docs/user-guide/images/vtn/flow_filter_example.png [new file with mode: 0644]
docs/user-guide/images/vtn/setup_diagram_SCVMM.png [new file with mode: 0644]
docs/user-guide/images/vtn/vlanmap_using_mininet.png [new file with mode: 0644]
docs/user-guide/images/vtn/vtn-single-controller-topology-example.png [new file with mode: 0644]
docs/user-guide/images/vtn/vtn_devstack_setup.png [new file with mode: 0644]
docs/user-guide/images/vtn/vtn_stations.png [new file with mode: 0644]
docs/user-guide/index.rst
docs/user-guide/l2switch-user-guide.rst [new file with mode: 0644]
docs/user-guide/l3vpn-service_-user-guide.rst [new file with mode: 0644]
docs/user-guide/link-aggregation-control-protocol-user-guide.rst [new file with mode: 0644]
docs/user-guide/lisp-flow-mapping-user-guide.rst [new file with mode: 0644]
docs/user-guide/nemo-user-guide.rst [new file with mode: 0644]
docs/user-guide/netconf-user-guide.rst [new file with mode: 0644]
docs/user-guide/netide-user-guide.rst [new file with mode: 0644]
docs/user-guide/network-intent-composition-(nic)-user-guide.rst [new file with mode: 0644]
docs/user-guide/neutron-service-user-guide.rst [new file with mode: 0644]
docs/user-guide/ocp-plugin-user-guide.rst [new file with mode: 0644]
docs/user-guide/of-config-user-guide.rst [new file with mode: 0644]
docs/user-guide/opflex-agent-ovs-user-guide.rst [new file with mode: 0644]
docs/user-guide/ovsdb-netvirt.rst [new file with mode: 0644]
docs/user-guide/packetcable-user-guide.rst [new file with mode: 0644]
docs/user-guide/pcep-user-guide.rst [new file with mode: 0644]
docs/user-guide/service-function-chaining.rst [new file with mode: 0644]
docs/user-guide/snmp-plugin-user-guide.rst [new file with mode: 0644]
docs/user-guide/snmp4sdn-user-guide.rst [new file with mode: 0644]
docs/user-guide/sxp-user-guide.rst [new file with mode: 0644]
docs/user-guide/tsdr-user-guide.rst [new file with mode: 0644]
docs/user-guide/ttp-cli-tools-user-guide.rst [new file with mode: 0644]
docs/user-guide/uni-manager-plug-in-project.rst [new file with mode: 0644]
docs/user-guide/unified-secure-channel.rst [new file with mode: 0644]
docs/user-guide/using-the-opendaylight-user-interface-(dlux).rst
docs/user-guide/virtual-tenant-network-(vtn).rst [new file with mode: 0644]
docs/user-guide/yang-ide-user-guide.rst [new file with mode: 0644]
docs/user-guide/yang-push.rst [new file with mode: 0644]
manuals/developer-guide/src/main/asciidoc/alto/alto-developer-guide.adoc
manuals/developer-guide/src/main/asciidoc/atrium/odl-atrium-all-dev.adoc
manuals/developer-guide/src/main/asciidoc/bgpcep/odl-bgpcep-bgp-all-dev.adoc
manuals/developer-guide/src/main/asciidoc/bgpcep/odl-bgpcep-bmp-dev.adoc
manuals/developer-guide/src/main/asciidoc/bgpcep/odl-bgpcep-pcep-all-dev.adoc
manuals/developer-guide/src/main/asciidoc/bk-developers-guide.adoc
manuals/developer-guide/src/main/asciidoc/cardinal/odl-cardinal-dev.adoc
manuals/developer-guide/src/main/asciidoc/controller/config.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/controller/controller.adoc
manuals/developer-guide/src/main/asciidoc/controller/md-sal-data-tx.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/controller/md-sal-overview.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/controller/md-sal-rpc-routing.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/controller/netconf/odl-netconf-dev.adoc
manuals/developer-guide/src/main/asciidoc/controller/restconf.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/controller/websocket-notifications.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/iotdm/iotdm-dev.adoc
manuals/developer-guide/src/main/asciidoc/lacp/lacp-dev.adoc
manuals/developer-guide/src/main/asciidoc/netide/netide-developer-guide.adoc
manuals/developer-guide/src/main/asciidoc/neutron/neutron.adoc
manuals/developer-guide/src/main/asciidoc/neutron/odl-neutron-service-dev.adoc
manuals/developer-guide/src/main/asciidoc/ocpplugin/ocp-developer-guide.adoc
manuals/developer-guide/src/main/asciidoc/of-config/of-config-dev.adoc
manuals/developer-guide/src/main/asciidoc/openflowjava/odl-openflowjava-protocol-dev.adoc
manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-developer.adoc
manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-hwvtep-developer.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-library-developer.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-openstack-developer.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-overview.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-sfc-developer.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-southbound-developer.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/packetcable/packetcable-dev.adoc
manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-classifier-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-load-balance-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-ovs-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sb-rest-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sf-monitoring-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sf-scheduler-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/sfc/sfc.adoc
manuals/developer-guide/src/main/asciidoc/sfc/sfc_overview.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/snbi/odl-snbi-dev.adoc [new file with mode: 0644]
manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-aggregation-filtration-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-architecture-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-framework-dev.adoc
manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-inventory-rendering-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-link-computation-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-wrapper-rpc-writing-dev.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/ttp/ttp-cli-tools-dev.adoc
manuals/developer-guide/src/main/asciidoc/ttp/ttp-model-dev.adoc
manuals/developer-guide/src/main/asciidoc/unimgr/odl-unimgr-channel-dev.adoc
manuals/developer-guide/src/main/asciidoc/usc/odl-usc-channel-dev.adoc
manuals/developer-guide/src/main/asciidoc/yang-push/odl-yang-push-dev.adoc
manuals/developer-guide/src/main/asciidoc/yangtools/yang-java-binding-explained.adoc [deleted file]
manuals/developer-guide/src/main/asciidoc/yangtools/yangtools.adoc
manuals/developer-guide/src/main/resources/images/snbi/docker_snbi.png [new file with mode: 0644]
manuals/developer-guide/src/main/resources/images/snbi/first_fe_bs.png [new file with mode: 0644]
manuals/developer-guide/src/main/resources/images/snbi/snbi_arch.png [new file with mode: 0644]
manuals/user-guide/src/main/asciidoc/aaa/aaa.adoc
manuals/user-guide/src/main/asciidoc/atrium/odl-atrium-all-user.adoc
manuals/user-guide/src/main/asciidoc/bgpcep/odl-bgpcep-bgp-all-user.adoc
manuals/user-guide/src/main/asciidoc/bgpcep/odl-bgpcep-bmp-user.adoc
manuals/user-guide/src/main/asciidoc/bgpcep/odl-bgpcep-pcep-all-user.adoc
manuals/user-guide/src/main/asciidoc/bk-user-guide.adoc
manuals/user-guide/src/main/asciidoc/capwap/capwap-user.adoc
manuals/user-guide/src/main/asciidoc/cardinal/odl-cardinal-user.adoc
manuals/user-guide/src/main/asciidoc/centinel/centinel-user-guide.adoc
manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-northbound-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-southbound-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-testtool-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-user.adoc
manuals/user-guide/src/main/asciidoc/didm/didm-user.adoc
manuals/user-guide/src/main/asciidoc/genius/genius-user-guide.adoc [new file with mode: 0644]
manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-faas-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-iovisor-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-neutronmapper-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-ofoverlay-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-sfc-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-ui-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-user-guide.adoc
manuals/user-guide/src/main/asciidoc/l2switch/l2switch-user.adoc
manuals/user-guide/src/main/asciidoc/lacp/lacp-user.adoc
manuals/user-guide/src/main/asciidoc/lfm/lispflowmapping-clustering-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/lfm/lispflowmapping-msmr-user.adoc
manuals/user-guide/src/main/asciidoc/nemo/odl-nemo-engine-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/netide/odl-netide-user-guide.adoc
manuals/user-guide/src/main/asciidoc/neutron/odl-neutron-service-user.adoc
manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_Log_Action.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_QoS_Attribute_Mapping.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_Redirect_Action.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_VTN_Renderer.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/nic/NIC_redirect_test_topology.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/nic/NIC_requirements.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/nic/nic-user.adoc
manuals/user-guide/src/main/asciidoc/ocpplugin/ocp-user-guide.adoc
manuals/user-guide/src/main/asciidoc/of-config/ofconfig-user.adoc
manuals/user-guide/src/main/asciidoc/openflowplugin/odl-ofp-example-flows.adoc
manuals/user-guide/src/main/asciidoc/opflex/agent-ovs-user.adoc
manuals/user-guide/src/main/asciidoc/ovsdb/odl-netvirt-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovs-dpdk-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-hwvtep-southbound-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-netvirt-user-guide.adoc
manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-plugins-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-security-groups.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-southbound-user-guide.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/packetcable/packetcable-user.adoc
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-classifier-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-iosxe-renderer-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-load-balance-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-openflow-renderer-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-ovs-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-pot-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sb-rest-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sf-monitoring-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sf-scheduler-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-ui-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/sfc/sfc.adoc
manuals/user-guide/src/main/asciidoc/sfc/sfc_overview.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/snbi/odl-snbi-user.adoc [new file with mode: 0644]
manuals/user-guide/src/main/asciidoc/snmp/snmp-user-guide.adoc
manuals/user-guide/src/main/asciidoc/snmp4sdn/snmp4sdn-user-guide.adoc
manuals/user-guide/src/main/asciidoc/sxp/odl-sxp-user.adoc
manuals/user-guide/src/main/asciidoc/tsdr/tsdr-user-guide.adoc
manuals/user-guide/src/main/asciidoc/ttp/ttp-cli-tools-user.adoc
manuals/user-guide/src/main/asciidoc/unimgr/unimgr-user.adoc
manuals/user-guide/src/main/asciidoc/usc/odl-usc-channel-user.adoc
manuals/user-guide/src/main/asciidoc/vpnservice/vpnservice-user.adoc
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Support_for_Microsoft_SCVMM.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Troubleshoot_Coordinator_Installation.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Use_VTN_to_make_packets_take_different_paths.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_L2_Network_with_Multiple_Controllers.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_L2_Network_with_Single_Controller.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_flow_filters.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_test_vlanmap_using_mininet.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_view_Dataflows.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_view_STATIONS.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Configure_Flowfilters.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Configure_Service_Function_Chaining_Support.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Create_Mac_Map_In_VTN.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Provision_Virtual_L2_Network.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Use_VTN_to_change_the_path_of_the_packet_flow.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_View_Dataflows.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_test_vlan_map_using_mininet.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_OpenStack_Support-user.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/VTN_Overview.adoc [deleted file]
manuals/user-guide/src/main/asciidoc/vtn/vtn-user.adoc
manuals/user-guide/src/main/asciidoc/yang-push/odl-yang-push-user.adoc
manuals/user-guide/src/main/asciidoc/yangide/yangide-user.adoc
manuals/user-guide/src/main/resources/images/snbi/snbi_arch.png [new file with mode: 0644]

diff --git a/docs/developer-guide/alto-developer-guide.rst b/docs/developer-guide/alto-developer-guide.rst
new file mode 100644 (file)
index 0000000..8b16f53
--- /dev/null
@@ -0,0 +1,194 @@
+ALTO Developer Guide
+====================
+
+Overview
+--------
+
+The topics of this guide are:
+
+1. How to add ALTO projects as dependencies;
+
+2. How to put/fetch data from ALTO;
+
+3. Basic APIs and data types;
+
+4. How to use customized service implementations.
+
+Adding ALTO Projects as Dependencies
+------------------------------------
+
+Most ALTO packages can be added as dependencies in Maven projects by
+putting the following code in the *pom.xml* file.
+
+::
+
+    <dependency>
+        <groupId>org.opendaylight.alto</groupId>
+        <artifactId>${THE_NAME_OF_THE_PACKAGE_YOU_NEED}</artifactId>
+        <version>${ALTO_VERSION}</version>
+    </dependency>
+
+The current stable version for ALTO is ``0.3.0-Boron``.
+
+Putting/Fetching data from ALTO
+-------------------------------
+
+Using RESTful API
+~~~~~~~~~~~~~~~~~
+
+There are two kinds of RESTful APIs for ALTO: the one provided by
+``alto-northbound`` which follows the formats defined in `RFC
+7285 <https://tools.ietf.org/html/rfc7285>`__, and the one provided by
+RESTCONF whose format is defined by the YANG model proposed in `this
+draft <https://tools.ietf.org/html/draft-shi-alto-yang-model-03>`__.
+
+One way to get the URLs for the resources from ``alto-northbound`` is to
+visit the IRD service first where there is a ``uri`` field for every
+entry. However, the IRD service is not yet implemented, so currently
+developers have to construct the URLs themselves. The base URL is
+``/alto`` and below is a list of the specific paths defined in
+``alto-core/standard-northbound-route`` using Jersey ``@Path``
+annotation:
+
+-  ``/ird/{rid}``: the path to access *IRD* services;
+
+-  ``/networkmap/{rid}[/{tag}]``: the path to access *Network Map* and
+   *Filtered Network Map* services;
+
+-  ``/costmap/{rid}[/{tag}[/{mode}/{metric}]]``: the path to access
+   *Cost Map* and *Filtered Cost Map* services;
+
+-  ``/endpointprop``: the path to access *Endpoint Property* services;
+
+-  ``/endpointcost``: the path to access *Endpoint Cost* services.
+
+.. note::
+
+    The segments in brackets are optional.
+
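+To make the optional bracketed segments concrete, here is a small
+sketch of a helper that assembles these URLs. The class and method
+names are ours for illustration only; they are not part of the ALTO
+codebase.
+
```java
// Hypothetical helper illustrating the alto-northbound path templates;
// not part of the ALTO codebase.
final class AltoPaths {
    private static final String BASE = "/alto";

    // /networkmap/{rid}[/{tag}]
    public static String networkMap(String rid, String tag) {
        String path = BASE + "/networkmap/" + rid;
        return tag == null ? path : path + "/" + tag;
    }

    // /costmap/{rid}[/{tag}[/{mode}/{metric}]]
    public static String costMap(String rid, String tag, String mode, String metric) {
        StringBuilder path = new StringBuilder(BASE + "/costmap/" + rid);
        if (tag != null) {
            path.append("/").append(tag);
            // mode and metric are only meaningful when a tag is present
            if (mode != null && metric != null) {
                path.append("/").append(mode).append("/").append(metric);
            }
        }
        return path.toString();
    }
}
```
+
+For example, ``costMap("my-cost-map", "tag1", "numerical",
+"routingcost")`` yields the filtered cost map URL with the mode and
+metric segments appended.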
+If you want to fetch the data using RESTCONF, it is highly recommended
+to take a look at the ``apidoc`` page
+(`http://{controller\_ip}:8181/apidoc/explorer/index.html <http://{controller_ip}:8181/apidoc/explorer/index.html>`__)
+after installing the ``odl-alto-release`` feature in karaf.
+
+It is also worth pointing out that ``alto-northbound`` only supports
+``GET`` and ``POST`` operations so it is impossible to manipulate the
+data through its RESTful APIs. To modify the data, use ``PUT`` and
+``DELETE`` methods with RESTCONF.
+
+.. note::
+
+    The current implementation uses the ``configuration`` data store and
+    that enables the developers to modify the data directly through
+    RESTCONF. In the future this approach might be disabled in the core
+    packages of ALTO but may still be available as an extension.
+
+Using MD-SAL
+~~~~~~~~~~~~
+
+You can also fetch data from the datastore directly.
+
+First you must get access to the datastore by registering your module
+with a data broker.
+
+Then an ``InstanceIdentifier`` must be created. Here is an example of
+how to build an ``InstanceIdentifier`` for a *network map*:
+
+::
+
+    import org.opendaylight...alto...Resources;
+    import org.opendaylight...alto...resources.NetworkMaps;
+    import org.opendaylight...alto...resources.network.maps.NetworkMap;
+    import org.opendaylight...alto...resources.network.maps.NetworkMapKey;
+    ...
+    protected InstanceIdentifier<NetworkMap> getNetworkMapIID(String resourceId) {
+      // Build the identifier path: resources -> network-maps -> network-map[key]
+      ResourceId rid = ResourceId.getDefaultInstance(resourceId);
+      NetworkMapKey key = new NetworkMapKey(rid);
+      return InstanceIdentifier.builder(Resources.class)
+                               .child(NetworkMaps.class)
+                               .child(NetworkMap.class, key)
+                               .build();
+    }
+    ...
+
+With the ``InstanceIdentifier`` you can use ``ReadOnlyTransaction``,
+``WriteTransaction`` and ``ReadWriteTransaction`` to manipulate the data
+accordingly. The ``simple-impl`` package, which provides some of the
+AD-SAL APIs mentioned above, uses this method to get data from the
+datastore and then converts it into RFC 7285-compatible objects.
+
+Basic API and DataType
+----------------------
+
+a. alto-basic-types: Defines the basic types of the ALTO protocol.
+
+b. alto-service-model-api: Includes the YANG models for the five basic
+   ALTO services defined in `RFC
+   7285 <https://tools.ietf.org/html/rfc7285>`__.
+
+c. alto-resourcepool: Manages the metadata of each ALTO service,
+   including capabilities and versions.
+
+d. alto-northbound: Provides the root of RFC7285-compatible services at
+   http://localhost:8080/alto.
+
+e. alto-northbound-route: Provides the root of the network map resources
+   at http://localhost:8080/alto/networkmap/.
+
+How to customize a service
+--------------------------
+
+Define new service API
+~~~~~~~~~~~~~~~~~~~~~~
+
+Add a new module in ``alto-core/standard-service-models``. For example,
+we name our service model module ``model-example``.
+
+Implement service RPC
+~~~~~~~~~~~~~~~~~~~~~
+
+Add a new module in ``alto-basic`` to implement a service RPC in
+``alto-core``.
+
+Currently ``alto-core/standard-service-models/model-base`` defines a
+template for the service RPC. You can define your own RPC using
+``augment`` in YANG. Here is an example from ``alto-simpleird``.
+
+.. code:: yang
+
+        grouping "alto-ird-request" {
+            container "ird-request" {
+            }
+        }
+        grouping "alto-ird-response" {
+            container "ird" {
+                container "meta" {
+                }
+                list "resource" {
+                    key "resource-id";
+                    leaf "resource-id" {
+                        type "alto-types:resource-id";
+                    }
+                }
+            }
+        }
+        augment "/base:query/base:input/base:request" {
+            case "ird-request-data" {
+                uses "alto-ird-request";
+            }
+        }
+        augment "/base:query/base:output/base:response" {
+            case "ird-response-data" {
+                uses "alto-ird-response";
+            }
+        }
+
+Register northbound route
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If necessary, you can add a northbound route module in
+``alto-core/standard-northbound-routes``.
+
diff --git a/docs/developer-guide/atrium-developer-guide.rst b/docs/developer-guide/atrium-developer-guide.rst
new file mode 100644 (file)
index 0000000..14fef81
--- /dev/null
@@ -0,0 +1,119 @@
+Atrium Developer Guide
+======================
+
+Overview
+--------
+
+Project Atrium is an open source SDN distribution - a vertically
+integrated set of open source components which together form a complete
+SDN stack. Its goals are threefold:
+
+-  Close the large integration-gap of the elements that are needed to
+   build an SDN stack - while there are multiple choices at each layer,
+   there are missing pieces with poor or no integration.
+
+-  Overcome a massive gap in interoperability - this exists both at the
+   switch level, where existing products from different vendors have
+   limited compatibility, making it difficult to connect an arbitrary
+   switch and controller, and at the API level, where it is difficult
+   to write a portable application across multiple controller platforms.
+
+-  Work closely with network operators on deployable use-cases, so that
+   they can download near-production-quality code from one location and
+   get started with functioning software defined networks on real
+   hardware.
+
+Architecture
+------------
+
+The key components of Atrium BGP Peering Router Application are as
+follows:
+
+-  Data Plane Switch - The data plane switch is the entity that uses
+   the flow table entries installed by the BGP Routing Application
+   through the SDN controller. In the simplest form, the data plane
+   switch with the installed flows acts like a BGP router.
+
+-  OpenDaylight Controller - OpenDaylight SDN controller has many
+   utility applications or plugins which are leveraged by the BGP Router
+   application to manage the control plane information.
+
+-  BGP Routing Application - An application running within the
+   OpenDaylight runtime environment to handle I-BGP updates.
+
+-  `DIDM <#_didm_developer_guide>`__ - DIDM manages the drivers specific
+   to each data plane switch connected to the controller. The drivers
+   are created primarily to hide the underlying complexity of the
+   devices and to expose a uniform API to applications.
+
+-  Flow Objectives API - The driver implementation provides a pipeline
+   abstraction and exposes Flow Objectives API. This means applications
+   need to be aware of only the Flow Objectives API without worrying
+   about the Table IDs or the pipelines.
+
+-  Control Plane Switch - This component is primarily used to connect
+   the OpenDaylight SDN controller with the Quagga Soft-Router and
+   establish a path for forwarding E-BGP packets to and from Quagga.
+
+-  Quagga soft router - An open source routing software that handles
+   E-BGP updates.
+
+Key APIs and Interfaces
+-----------------------
+
+BGP Routing Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The BGP Routing Configuration maintains information about its BGP
+Speakers and BGP Peers.
+
+-  Configuration data about BGP speakers can be accessed from the below
+   URL:
+
+   ::
+
+       GET http://<controller_ip>:8181/restconf/config/bgpconfig:bgpSpeakers/
+
+-  Configuration data about BGP peers can be accessed from the below
+   URL:
+
+   ::
+
+       GET http://<controller_ip>:8181/restconf/config/bgpconfig:bgpPeers/
+
+Host Service
+~~~~~~~~~~~~
+
+The Host Service API contains the host-specific details that can be
+used during address resolution.
+
+-  Host specific data can be accessed by using the below REST request:
+
+   ::
+
+       GET http://<controller_ip>:8181/restconf/config/hostservice-api:addresses/
+
+BGP Routing Information Base
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The BGP RIB module stores all the route information that it has learnt
+from its peers.
+
+-  Routing Information Base entries can be accessed from the URL below:
+
+   ::
+
+       GET http://<controller_ip>:8181/restconf/operational/bgp-rib:bgp-rib/
+
+Forwarding Information Base
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Forwarding Information Base is used to keep track of active FIB
+entries.
+
+-  FIB entries can be accessed from the URL below:
+
+   ::
+
+       GET http://<controller_ip>:8181/restconf/config/routingservice-api:fibEntries/
+
diff --git a/docs/developer-guide/bgp-developer-guide.rst b/docs/developer-guide/bgp-developer-guide.rst
new file mode 100644 (file)
index 0000000..df2df2a
--- /dev/null
@@ -0,0 +1,333 @@
+BGP Developer Guide
+===================
+
+Overview
+--------
+
+This section provides an overview of the ``odl-bgpcep-bgp-all`` Karaf
+feature. This feature installs everything needed for BGP (Border
+Gateway Protocol): establishing the connection, storing the data in
+RIBs (Routing Information Bases) and displaying the data in the
+network-topology overview.
+
+BGP Architecture
+----------------
+
+Each feature represents a module in the BGPCEP codebase. The following
+diagram illustrates how the features are related.
+
+.. figure:: ./images/bgpcep/bgp-dependency-tree.png
+   :alt: BGP Dependency Tree
+
+   BGP Dependency Tree
+
+Key APIs and Interfaces
+-----------------------
+
+BGP concepts
+~~~~~~~~~~~~
+
+This module contains the base BGP concepts contained in `RFC
+4271 <http://tools.ietf.org/html/rfc4271>`__, `RFC
+4760 <http://tools.ietf.org/html/rfc4760>`__, `RFC
+4456 <http://tools.ietf.org/html/rfc4456>`__, `RFC
+1997 <http://tools.ietf.org/html/rfc1997>`__ and `RFC
+4360 <http://tools.ietf.org/html/rfc4360>`__.
+
+All the concepts are described in one yang model:
+`bgp-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/yang/bgp-types.yang;hb=refs/heads/stable/beryllium>`__.
+
+Outside generated classes, there is just one class
+`NextHopUtil <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/java/org/opendaylight/bgp/concepts/NextHopUtil.java;hb=refs/heads/stable/beryllium>`__
+that contains methods for serializing and parsing NextHop.
+
+BGP parser
+~~~~~~~~~~
+
+Base BGP parser includes messages and attributes from `RFC
+4271 <http://tools.ietf.org/html/rfc4271>`__, `RFC
+4760 <http://tools.ietf.org/html/rfc4760>`__, `RFC
+1997 <http://tools.ietf.org/html/rfc1997>`__ and `RFC
+4360 <http://tools.ietf.org/html/rfc4360>`__.
+
+*API* module defines BGP messages in YANG.
+
+*IMPL* module contains the actual parsers and serializers for BGP
+messages and the
+`Activator <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-impl/src/main/java/org/opendaylight/protocol/bgp/parser/impl/BGPActivator.java;hb=refs/heads/stable/beryllium>`__
+class.
+
+*SPI* module contains the helper classes needed for registering parsers
+into activators.
+
+Registration
+^^^^^^^^^^^^
+
+All parsers and serializers need to be registered into the *Extension
+provider*. This *Extension provider* is configured in the initial
+configuration of the parser-spi module (``31-bgp.xml``).
+
+.. code:: xml
+
+     <module>
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:parser:spi">prefix:bgp-extensions-impl</type>
+      <name>global-bgp-extensions</name>
+      <extension>
+       <type xmlns:bgpspi="urn:opendaylight:params:xml:ns:yang:controller:bgp:parser:spi">bgpspi:extension</type>
+       <name>base-bgp-parser</name>
+      </extension>
+      <extension>
+       <type xmlns:bgpspi="urn:opendaylight:params:xml:ns:yang:controller:bgp:parser:spi">bgpspi:extension</type>
+       <name>bgp-linkstate</name>
+      </extension>
+     </module>
+
+-  *base-bgp-parser* - will register parsers and serializers implemented
+   in the bgp-parser-impl module
+
+-  *bgp-linkstate* - will register parsers and serializers implemented
+   in the bgp-linkstate module
+
+The bgp-linkstate module is a good example of a BGP parser extension.
+
+The configuration of bgp-parser-spi specifies one implementation of
+*Extension provider* that will take care of registering mentioned parser
+extensions:
+`SimpleBGPExtensionProviderContext <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi/pojo/SimpleBGPExtensionProviderContext.java;hb=refs/heads/stable/beryllium>`__.
+All registries are implemented in package
+`bgp-parser-spi <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi;hb=refs/heads/stable/beryllium>`__.
+
+Serializing
+^^^^^^^^^^^
+
+The serializing of BGP elements is mostly done in the same way as in
+`PCEP <#_pcep_developer_guide>`__; the only exception is the
+serialization of path attributes, which is described here. Path
+attributes are different from any other BGP element: individual path
+attributes don’t implement one common interface. Instead, the
+*PathAttributes* interface contains getters for the individual path
+attributes (this structure exists because an update message can contain
+at most one instance of each path attribute). This means that, given a
+*PathAttributes* object, you can only get to a specific path attribute
+by checking its presence. Therefore the *serialize()* method in
+*AttributeRegistry* won’t look up a registered class; instead it will
+go through the registrations and offer the object to each registered
+serializer. This way the object is also passed to serializers unknown
+to the bgp-parser module, for example to LinkstateAttributeParser. RFC
+4271 recommends ordering path attributes, hence the serializers are
+ordered in a list as they are registered in the *Activator*. In other
+words, this is the only case where registration ordering matters.
+
+.. figure:: ./images/bgpcep/PathAttributesSerialization.png
+   :alt: PathAttributesSerialization
+
+   PathAttributesSerialization
+
+The *serialize()* method in each path attribute serializer contains a
+check for the presence of its attribute in the *PathAttributes* object,
+and simply returns if the attribute is not there:
+
+.. code:: java
+
+     if (pathAttributes.getAtomicAggregate() == null) {
+         return;
+     }
+     //continue with serialization of Atomic Aggregate
+
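+The dispatch described above, offering the *PathAttributes* object to
+every registered serializer in registration order, can be sketched as
+follows. The class names here are illustrative only, not the actual
+BGPCEP classes.
+
```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the registry dispatch, not the actual BGPCEP code.
interface AttributeSerializer {
    // each serializer checks for the presence of "its" attribute and
    // simply returns when the attribute is absent
    void serialize(Map<String, Object> pathAttributes, StringBuilder out);
}

final class SimpleAttributeRegistry {
    // kept in registration order; RFC 4271 recommends ordering path attributes
    private final List<AttributeSerializer> serializers = new ArrayList<>();

    void register(AttributeSerializer serializer) {
        serializers.add(serializer);
    }

    void serialize(Map<String, Object> pathAttributes, StringBuilder out) {
        // no per-class lookup: offer the whole object to every serializer
        for (AttributeSerializer serializer : serializers) {
            serializer.serialize(pathAttributes, out);
        }
    }
}
```
+
+Because every serializer sees the whole object, a serializer registered
+by an extension module (such as LinkstateAttributeParser) is reached
+without the base module knowing about it.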
+BGP RIB
+-------
+
+The BGP RIB module can be divided into two parts:
+
+-  BGP listener and speaker session handling
+
+-  RIB handling.
+
+Session handling
+~~~~~~~~~~~~~~~~
+
+``31-bgp.xml`` defines only the bgp-dispatcher and the parser it should
+be using (global-bgp-extensions).
+
+.. code:: xml
+
+    <module>
+     <type>prefix:bgp-dispatcher-impl</type>
+     <name>global-bgp-dispatcher</name>
+     <bgp-extensions>
+      <type>bgpspi:extensions</type>
+      <name>global-bgp-extensions</name>
+     </bgp-extensions>
+     <boss-group>
+      <type>netty:netty-threadgroup</type>
+      <name>global-boss-group</name>
+     </boss-group>
+     <worker-group>
+      <type>netty:netty-threadgroup</type>
+      <name>global-worker-group</name>
+     </worker-group>
+    </module>
+
+For user configuration of BGP, check User Guide.
+
+Synchronization
+~~~~~~~~~~~~~~~
+
+Synchronization is a phase where, upon connection, a BGP speaker sends
+all available data about the topology to its new client. After the
+whole topology has been advertised, the synchronization is over. For
+the listener, the synchronization is over when the RIB receives
+End-of-RIB (EOR) messages. There is a special EOR message for each AFI
+(Address Family Identifier).
+
+-  IPv4 EOR is an empty Update message.
+
+-  IPv6 EOR is an Update message with an empty MP\_UNREACH attribute
+   where AFI and SAFI (Subsequent Address Family Identifier) are set to
+   IPv6. OpenDaylight also supports EOR for IPv4 in this format.
+
+-  Linkstate EOR is an Update message with an empty MP\_UNREACH
+   attribute where AFI and SAFI are set to Linkstate.
+
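+The IPv4 case can be recognized with simple length checks: an empty
+Update message is just the 19-byte BGP header followed by two zeroed
+16-bit length fields (23 bytes in total). The helper below is our own
+illustrative sketch, not the BGPSynchronization code.
+
```java
// Illustrative check for an IPv4 End-of-RIB marker (our sketch, not the
// BGPSynchronization implementation). An IPv4 EOR is an UPDATE message
// with zero withdrawn-routes length, zero path-attribute length and no
// NLRI: exactly 23 bytes.
final class EorCheck {
    static final int BGP_HEADER = 19;  // 16-byte marker + 2-byte length + 1-byte type
    static final int TYPE_UPDATE = 2;

    static boolean isIpv4EndOfRib(byte[] msg) {
        if (msg.length != BGP_HEADER + 4 || msg[18] != TYPE_UPDATE) {
            return false;
        }
        // withdrawn-routes length (2 bytes) and total path attribute
        // length (2 bytes) must both be zero
        return msg[19] == 0 && msg[20] == 0 && msg[21] == 0 && msg[22] == 0;
    }
}
```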
+For BGP connections where both peers support graceful restart, the EORs
+are sent by the BGP speaker and are redirected to the RIB, where the
+specific AFI/SAFI table is set to *true*. Without graceful restart, the
+messages are generated by OpenDaylight itself and sent after the second
+keepalive for each AFI/SAFI. This is done in
+`BGPSynchronization <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPSynchronization.java;hb=refs/heads/stable/beryllium>`__.
+
+**Peers**
+
+`BGPPeer <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPPeer.java;hb=refs/heads/stable/beryllium>`__
+has various meanings. If you configure a BGP listener, *BGPPeer*
+represents the BGP listener itself. If you are configuring a BGP
+speaker, you need to provide a list of peers that are allowed to
+connect to this speaker; any unknown peer is refused. In this case
+*BGPPeer* represents a peer that is expected to connect to your
+speaker. *BGPPeer* instances are stored in
+`BGPPeerRegistry <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/StrictBGPPeerRegistry.java;hb=refs/heads/stable/beryllium>`__.
+This registry controls the number of sessions. Our strict implementation
+limits sessions to one per peer.
+
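+The one-session-per-peer rule enforced by the strict registry can be
+sketched like this (an illustrative stand-in, not the actual
+StrictBGPPeerRegistry implementation):
+
```java
import java.util.HashSet;
import java.util.Set;

// Illustrative stand-in for the strict peer registry: at most one
// active session per peer.
final class StrictPeerRegistry {
    private final Set<String> activeSessions = new HashSet<>();

    // returns true when the session may be established; false means the
    // peer already has an active session and the new one must be refused
    synchronized boolean tryAddSession(String peerId) {
        return activeSessions.add(peerId);
    }

    synchronized void removeSession(String peerId) {
        activeSessions.remove(peerId);
    }
}
```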
+`ApplicationPeer <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/ApplicationPeer.java;hb=refs/heads/stable/beryllium>`__
+is a special case of a peer that has its own RIB. This RIB is populated
+from RESTCONF. The RIB is synchronized with the default BGP RIB.
+Incoming routes to the default RIB are treated in the same way as if
+they came from a BGP peer (speaker or listener) in the network.
+
+RIB handling
+~~~~~~~~~~~~
+
+RIB (Routing Information Base) is defined as a concept in `RFC
+4271 <http://tools.ietf.org/html/rfc4271#section-3.2>`__. The RFC does
+not define how it should be implemented. In our implementation, the
+routes are stored in the MD-SAL datastore. There are four supported
+route types - *Ipv4Routes*, *Ipv6Routes*, *LinkstateRoutes* and
+*FlowspecRoutes*.
+
+Each route type needs to provide a
+`RIBSupport.java <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-spi/src/main/java/org/opendaylight/protocol/bgp/rib/spi/RIBSupport.java;hb=refs/heads/stable/beryllium>`__
+implementation. *RIBSupport* tells the RIB how to translate
+binding-aware data (a BGP Update message) into binding-independent data
+(the datastore format).
+
+The following picture describes the data flow from a BGP message sent
+to *BGPPeer*, through the datastore, to the various types of RIB.
+
+.. figure:: ./images/bgpcep/RIB.png
+   :alt: RIB
+
+   RIB
+
+**`AdjRibInWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibInWriter.java;hb=refs/heads/stable/beryllium>`__**
+- represents the first step in putting data into the datastore. This
+writer is notified whenever a peer receives an Update message. The
+message is transformed into binding-independent format and pushed into
+the datastore to *adj-rib-in*. This RIB is associated with a peer.
+
+**`EffectiveRibInWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/EffectiveRibInWriter.java;hb=refs/heads/stable/beryllium>`__**
+- this writer is notified whenever *adj-rib-in* is updated. It applies
+all configured import policies to the routes and stores them in
+*effective-rib-in*. This RIB is also associated with a peer.
+
+**`LocRibWriter <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/LocRibWriter.java;hb=refs/heads/stable/beryllium>`__**
+- this writer is notified whenever **any** *effective-rib-in* is updated
+(in any peer). It performs best path selection filtering and stores the
+routes in *loc-rib*. It also determines which routes need to be
+advertised and fills in *adj-rib-out*, which is per peer as well.
+
+**`AdjRibOutListener <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibOutListener.java;h=a14fd54a29ea613b381a36248f67491d968963b8;hb=refs/heads/stable/beryllium>`__**
+- listens for changes in *adj-rib-out*, transforms the routes into
+BGPUpdate messages and sends them to its associated peer.
+
+BGP inet
+--------
+
+This module contains only one YANG model
+`bgp-inet.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/inet/src/main/yang/bgp-inet.yang;hb=refs/heads/stable/beryllium>`__
+that summarizes the ipv4 and ipv6 extensions to RIB routes and BGP
+messages.
+
+BGP flowspec
+------------
+
+BGP flowspec is a module that implements `RFC
+5575 <http://tools.ietf.org/html/rfc5575>`__ for IPv4 AFI and
+`draft-ietf-idr-flow-spec-v6-06 <https://tools.ietf.org/html/draft-ietf-idr-flow-spec-v6-06>`__
+for IPv6 AFI. The RFC defines an extension to BGP in form of a new
+subsequent address family, NLRI and extended communities. All of those
+are defined in the
+`bgp-flowspec.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/flowspec/src/main/yang/bgp-flowspec.yang;hb=refs/heads/stable/beryllium>`__
+model. In addition to generated sources, the module contains parsers for
+newly defined elements and RIBSupport for flowspec-routes. The route key
+of flowspec routes is a string representing human-readable flowspec
+request.
+
+BGP linkstate
+-------------
+
+BGP linkstate is a module that implements
+`draft-ietf-idr-ls-distribution <http://tools.ietf.org/html/draft-ietf-idr-ls-distribution-04>`__
+version 04. The draft defines an extension to BGP in form of a new
+address family, subsequent address family, NLRI and path attribute. All
+of those are defined in the
+`bgp-linkstate.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/yang/bgp-linkstate.yang;hb=refs/heads/stable/beryllium>`__
+model. In addition to generated sources, the module contains
+`LinkstateAttributeParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/attribute/LinkstateAttributeParser.java;hb=refs/heads/stable/beryllium>`__,
+`LinkstateNlriParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/nlri/LinkstateNlriParser.java;hb=refs/heads/stable/beryllium>`__,
+activators for both the parser and the RIB, and a RIBSupport handler
+for the linkstate address family. As each route needs a key, in the
+case of linkstate the route key is defined as a binary string
+containing all the NLRI serialized to byte format. The BGP linkstate
+extension also
+supports distribution of MPLS TE state as defined in
+`draft-ietf-idr-te-lsp-distribution-03 <https://tools.ietf.org/html/draft-ietf-idr-te-lsp-distribution-03>`__,
+extension for Segment Routing
+`draft-gredler-idr-bgp-ls-segment-routing-ext-00 <https://tools.ietf.org/html/draft-gredler-idr-bgp-ls-segment-routing-ext-00>`__
+and Segment Routing Egress Peer Engineering
+`draft-ietf-idr-bgpls-segment-routing-epe-02 <https://tools.ietf.org/html/draft-ietf-idr-bgpls-segment-routing-epe-02>`__.
+
+BGP labeled-unicast
+-------------------
+
+BGP labeled unicast is a module that implements `RFC
+3107 <https://tools.ietf.org/html/rfc3107>`__. The RFC defines an
+extension to the BGP MP to carry Label Mapping Information as a part of
+the NLRI. The AFI indicates, as usual, the address family of the
+associated route. The fact that the NLRI contains a label is indicated
+by using SAFI value 4. All of those are defined in
+`bgp-labeled-unicast.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob_plain;f=bgp/labeled-unicast/src/main/yang/bgp-labeled-unicast.yang;hb=refs/heads/stable/beryllium>`__
+model. In addition to the generated sources, the module contains new
+NLRI codec and RIBSupport. The route key is defined as a binary, where
+whole NLRI information is encoded.
+
+BGP topology provider
+---------------------
+
+Besides the RIB, BGP data is also stored in the network-topology view.
+The format in which the data is displayed there conforms to
+`draft-clemm-netmod-yang-network-topo <https://tools.ietf.org/html/draft-clemm-netmod-yang-network-topo-01>`__.
+
+API Reference Documentation
+---------------------------
+
+Javadocs are generated when running ``mvn site`` and they are located
+in the ``target/`` directory of each module.
+
diff --git a/docs/developer-guide/bgp-monitoring-protocol-developer-guide.rst b/docs/developer-guide/bgp-monitoring-protocol-developer-guide.rst
new file mode 100644 (file)
index 0000000..a6e84a7
--- /dev/null
@@ -0,0 +1,161 @@
+BGP Monitoring Protocol Developer Guide
+=======================================
+
+Overview
+--------
+
+This section provides an overview of the **odl-bgpcep-bmp** feature.
+This feature installs everything needed for BMP (BGP Monitoring
+Protocol): establishing the connection, processing messages, storing
+information about monitored routers, peers and their Adj-RIB-In
+(unprocessed routing information) and Post-Policy Adj-RIB-In, and
+displaying the data in the BGP RIBs overview. The OpenDaylight BMP
+plugin plays the role of a monitoring station.
+
+Key APIs and Interfaces
+-----------------------
+
+Session handling
+~~~~~~~~~~~~~~~~
+
+*32-bmp.xml* defines only the bmp-dispatcher and the parser it should
+be using (global-bmp-extensions).
+
+.. code:: xml
+
+     <module>
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">prefix:bmp-dispatcher-impl</type>
+      <name>global-bmp-dispatcher</name>
+       <bmp-extensions>
+        <type xmlns:bmp-spi="urn:opendaylight:params:xml:ns:yang:controller:bmp:spi">bmp-spi:extensions</type>
+        <name>global-bmp-extensions</name>
+       </bmp-extensions>
+       <boss-group>
+        <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
+        <name>global-boss-group</name>
+       </boss-group>
+       <worker-group>
+        <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
+        <name>global-worker-group</name>
+      </worker-group>
+     </module>
+
+For user configuration of BMP, check User Guide.
+
+Parser
+~~~~~~
+
+The base BMP parser includes messages and attributes from
+https://tools.ietf.org/html/draft-ietf-grow-bmp-15.
+
+Registration
+~~~~~~~~~~~~
+
+All parsers and serializers need to be registered into the *Extension
+provider*. This *Extension provider* is configured in the initial
+configuration of the parser (*32-bmp.xml*).
+
+.. code:: xml
+
+     <module>
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bmp:spi">prefix:bmp-extensions-impl</type>
+      <name>global-bmp-extensions</name>
+      <extension>
+       <type xmlns:bmp-spi="urn:opendaylight:params:xml:ns:yang:controller:bmp:spi">bmp-spi:extension</type>
+       <name>bmp-parser-base</name>
+      </extension>
+     </module>
+
+-  *bmp-parser-base* - will register parsers and serializers implemented
+   in bmp-impl module
+
+Parsing
+~~~~~~~
+
+Parsing of BMP elements is mostly done in the same way as in BGP. Some
+of the BMP messages include wrapped BGP messages.
+
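+For Route Monitoring messages, the wrapped BGP PDU simply follows the
+6-byte BMP common header and the 42-byte per-peer header defined in the
+draft. The helper below is our own illustrative sketch, not the
+bmp-impl code.
+
```java
import java.util.Arrays;

// Illustrative unwrapping of the BGP PDU carried inside a BMP Route
// Monitoring message; header sizes follow draft-ietf-grow-bmp-15.
final class BmpUnwrap {
    static final int COMMON_HEADER = 6;    // version (1) + length (4) + type (1)
    static final int PER_PEER_HEADER = 42;
    static final int TYPE_ROUTE_MONITORING = 0;

    static byte[] wrappedBgpMessage(byte[] bmpMsg) {
        // the message type is the last byte of the common header
        if (bmpMsg[5] != TYPE_ROUTE_MONITORING) {
            throw new IllegalArgumentException("not a Route Monitoring message");
        }
        return Arrays.copyOfRange(bmpMsg, COMMON_HEADER + PER_PEER_HEADER, bmpMsg.length);
    }
}
```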
+BMP Monitoring Station
+~~~~~~~~~~~~~~~~~~~~~~
+
+The BMP application (Monitoring Station) serves as a processor for
+messages incoming from monitored routers. Each processed message is
+transformed and the relevant information is stored. Route information
+is stored in a BGP
+RIB data structure.
+
+BMP data is displayed through a single URL, accessible from the base
+BMP URL:
+
+``http://<controllerIP>:8181/restconf/operational/bmp-monitor:bmp-monitor``
+
+Each monitoring station will be displayed, and it may contain multiple
+monitored routers and peers within:
+
+.. code:: xml
+
+    <bmp-monitor xmlns="urn:opendaylight:params:xml:ns:yang:bmp-monitor">
+     <monitor>
+     <monitor-id>example-bmp-monitor</monitor-id>
+      <router>
+      <router-id>127.0.0.11</router-id>
+       <status>up</status>
+       <peer>
+        <peer-id>20.20.20.20</peer-id>
+        <as>72</as>
+        <type>global</type>
+        <peer-session>
+         <remote-port>5000</remote-port>
+         <timestamp-sec>5</timestamp-sec>
+         <status>up</status>
+         <local-address>10.10.10.10</local-address>
+         <local-port>220</local-port>
+        </peer-session>
+        <pre-policy-rib>
+         <tables>
+          <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
+          <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:unicast-subsequent-address-family</safi>
+          <ipv4-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
+           <ipv4-route>
+            <prefix>10.10.10.0/24</prefix>
+             <attributes>
+              ...
+             </attributes>
+           </ipv4-route>
+          </ipv4-routes>
+          <attributes>
+           <uptodate>true</uptodate>
+          </attributes>
+         </tables>
+        </pre-policy-rib>
+        <address>10.10.10.10</address>
+        <post-policy-rib>
+         ...
+        </post-policy-rib>
+        <bgp-id>20.20.20.20</bgp-id>
+        <stats>
+         <timestamp-sec>5</timestamp-sec>
+         <invalidated-cluster-list-loop>53</invalidated-cluster-list-loop>
+         <duplicate-prefix-advertisements>16</duplicate-prefix-advertisements>
+         <loc-rib-routes>100</loc-rib-routes>
+         <duplicate-withdraws>11</duplicate-withdraws>
+         <invalidated-as-confed-loop>55</invalidated-as-confed-loop>
+         <adj-ribs-in-routes>10</adj-ribs-in-routes>
+         <invalidated-as-path-loop>66</invalidated-as-path-loop>
+         <invalidated-originator-id>70</invalidated-originator-id>
+         <rejected-prefixes>8</rejected-prefixes>
+        </stats>
+       </peer>
+       <name>name</name>
+       <description>description</description>
+       <info>some info;</info>
+      </router>
+     </monitor>
+    </bmp-monitor>
+
+API Reference Documentation
+---------------------------
+
+Javadocs are generated when running ``mvn site`` and they are located
+in the ``target/`` directory of each module.
+
diff --git a/docs/developer-guide/cardinal_-opendaylight-monitoring-as-a-service.rst b/docs/developer-guide/cardinal_-opendaylight-monitoring-as-a-service.rst
new file mode 100644 (file)
index 0000000..3853b78
--- /dev/null
@@ -0,0 +1,126 @@
+Cardinal: OpenDaylight Monitoring as a Service
+==============================================
+
+Overview
+--------
+
+Cardinal (OpenDaylight Monitoring as a Service) enables OpenDaylight and
+the underlying software-defined network to be remotely monitored by
+deployed Network Management Systems (NMS) or analytics suites. In the
+Boron release, Cardinal adds:
+
+1. An OpenDaylight MIB.
+
+2. The ability to expose ODL diagnostics/monitoring over SNMP (v2c, v3)
+   and REST northbound interfaces.
+
+3. Extended coverage of ODL system health, Karaf parameters and feature
+   info, ODL plugin scalability, and network parameters.
+
+4. Support for autonomous notifications (SNMP traps).
+
+Cardinal Architecture
+---------------------
+
+The Cardinal architecture is described at the following link:
+
+https://wiki.opendaylight.org/images/8/89/Cardinal-ODL_Monitoring_as_a_Service_V2.pdf
+
+Key APIs and Interfaces
+-----------------------
+
+There are two main APIs for issuing ``snmpget`` requests for Karaf info
+and system info. To expose these APIs, it is assumed that you already
+have the ``odl-cardinal`` and ``odl-restconf-all`` features installed.
+You can do that by entering the following at the Karaf console:
+
+::
+
+    feature:install odl-cardinal
+    feature:install odl-restconf-all
+
+System Info APIs
+~~~~~~~~~~~~~~~~
+
+Open the REST interface and, using basic authentication, execute the
+following REST API for system info:
+
+::
+
+    http://localhost:8181/restconf/operational/cardinal:CardinalSystemInfo/
+
+You should get a 200 OK response code with output like the following:
+
+::
+
+    {
+      "CardinalSystemInfo": {
+        "odlSystemMemUsage": " 9",
+        "odlSystemSysInfo": " OpenDaylight Node Information",
+        "odlSystemOdlUptime": " 00:29",
+        "odlSystemCpuUsage": " 271",
+        "odlSystemHostAddress": " Address of the Host should come up"
+      }
+    }
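
For scripting against this API, the request is a plain HTTP GET with basic authentication. The snippet below is a minimal sketch that builds the request URL and the ``Authorization`` header; the default ``admin``/``admin`` Karaf credentials and the ``localhost:8181`` endpoint are assumptions you may need to adjust for your deployment.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CardinalSystemInfoRequest {

    /** Builds an HTTP Basic authentication header value. */
    static String basicAuthHeader(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Host, port and credentials are assumptions; adjust for your deployment.
        String url = "http://localhost:8181/restconf/operational/cardinal:CardinalSystemInfo/";
        System.out.println("GET " + url);
        System.out.println("Authorization: " + basicAuthHeader("admin", "admin"));
    }
}
```

The same header works for the Karaf info API below; only the request URL changes.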
+
+Karaf Info APIs
+~~~~~~~~~~~~~~~
+
+Open the REST interface and, using basic authentication, execute the
+following REST API for Karaf info:
+
+::
+
+    http://localhost:8181/restconf/operational/cardinal-karaf:CardinalKarafInfo/
+
+You should get a 200 OK response code with output like the following:
+
+::
+
+    {
+      "CardinalKarafInfo": {
+        "odlKarafBundleListActive1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
+        "odlKarafBundleListActive2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
+        "odlKarafBundleListActive3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
+        "odlKarafBundleListActive4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
+        "odlKarafBundleListActive5": " org.apache.karaf.service.guard_3.0.6 [5]",
+        "odlKarafBundleListActive6": " org.apache.felix.configadmin_1.8.4 [6]",
+        "odlKarafBundleListActive7": " org.apache.felix.fileinstall_3.5.2 [7]",
+        "odlKarafBundleListActive8": " org.objectweb.asm.all_5.0.3 [8]",
+        "odlKarafBundleListActive9": " org.apache.aries.util_1.1.1 [9]",
+        "odlKarafBundleListActive10": " org.apache.aries.proxy.api_1.0.1 [10]",
+        "odlKarafBundleListInstalled1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
+        "odlKarafBundleListInstalled2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
+        "odlKarafBundleListInstalled3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
+        "odlKarafBundleListInstalled4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
+        "odlKarafBundleListInstalled5": " org.apache.karaf.service.guard_3.0.6 [5]",
+        "odlKarafFeatureListInstalled1": " config",
+        "odlKarafFeatureListInstalled2": " region",
+        "odlKarafFeatureListInstalled3": " package",
+        "odlKarafFeatureListInstalled4": " http",
+        "odlKarafFeatureListInstalled5": " war",
+        "odlKarafFeatureListInstalled6": " kar",
+        "odlKarafFeatureListInstalled7": " ssh",
+        "odlKarafFeatureListInstalled8": " management",
+        "odlKarafFeatureListInstalled9": " odl-netty",
+        "odlKarafFeatureListInstalled10": " odl-lmax",
+        "odlKarafBundleListResolved1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
+        "odlKarafBundleListResolved2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
+        "odlKarafBundleListResolved3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
+        "odlKarafBundleListResolved4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
+        "odlKarafBundleListResolved5": " org.apache.karaf.service.guard_3.0.6 [5]",
+        "odlKarafFeatureListUnInstalled1": " aries-annotation",
+        "odlKarafFeatureListUnInstalled2": " wrapper",
+        "odlKarafFeatureListUnInstalled3": " service-wrapper",
+        "odlKarafFeatureListUnInstalled4": " obr",
+        "odlKarafFeatureListUnInstalled5": " http-whiteboard",
+        "odlKarafFeatureListUnInstalled6": " jetty",
+        "odlKarafFeatureListUnInstalled7": " webconsole",
+        "odlKarafFeatureListUnInstalled8": " scheduler",
+        "odlKarafFeatureListUnInstalled9": " eventadmin",
+        "odlKarafFeatureListUnInstalled10": " jasypt-encryption"
+      }
+    }
+
diff --git a/docs/developer-guide/controller.rst b/docs/developer-guide/controller.rst
new file mode 100644 (file)
index 0000000..6d656aa
--- /dev/null
@@ -0,0 +1,1868 @@
+Controller
+==========
+
+Overview
+--------
+
+The OpenDaylight Controller is a Java-based, model-driven controller
+that uses YANG as its modeling language for various aspects of the
+system and its applications, and whose components serve as a base
+platform for other OpenDaylight applications.
+
+The OpenDaylight Controller relies on the following technologies:
+
+-  **OSGI** - This framework is the back-end of OpenDaylight, as it
+   allows dynamic loading of bundles and packaged JAR files, and binding
+   bundles together for exchanging information.
+
+-  **Karaf** - Application container built on top of OSGI, which
+   simplifies operational aspects of packaging and installing
+   applications.
+
+-  **YANG** - a data modeling language used to model configuration and
+   state data manipulated by the applications, remote procedure calls,
+   and notifications.
+
+The OpenDaylight Controller provides the following model-driven
+subsystems as a foundation for Java applications:
+
+-  **`Config Subsystem <#_config_subsystem>`__** - an activation,
+   dependency-injection and configuration framework, which allows
+   two-phase commits of configuration and dependency-injection, and
+   allows for run-time rewiring.
+
+-  **`MD-SAL <#_md_sal_overview>`__** - messaging and data storage
+   functionality for data, notifications and RPCs modeled by application
+   developers. MD-SAL uses YANG as the modeling for both interface and
+   data definitions, and provides a messaging and data-centric runtime
+   for such services based on YANG modeling.
+
+-  **MD-SAL Clustering** - enables cluster support for core MD-SAL
+   functionality and provides location-transparent access to
+   YANG-modeled data.
+
+The OpenDaylight Controller supports external access to applications and
+data using the following model-driven protocols:
+
+-  **NETCONF** - an XML-based RPC protocol, which allows a client to
+   invoke YANG-modeled RPCs, receive notifications, and read, modify and
+   manipulate YANG-modeled data.
+
+-  **RESTCONF** - an HTTP-based protocol, which provides REST-like APIs
+   for manipulating YANG-modeled data and invoking YANG-modeled RPCs,
+   using XML or JSON as the payload format.
+
+MD-SAL Overview
+---------------
+
+The Model-Driven Service Adaptation Layer (MD-SAL) is a message-bus
+inspired, extensible middleware component that provides messaging and
+data storage functionality based on data and interface models defined by
+application developers (i.e. user-defined models).
+
+The MD-SAL:
+
+-  Defines a **common-layer, concepts, data model building blocks and
+   messaging patterns** and provides infrastructure / framework for
+   applications and inter-application communication.
+
+-  Provides common support for user-defined transport and payload
+   formats, including payload serialization and adaptation (e.g. binary,
+   XML or JSON).
+
+The MD-SAL uses **YANG** as the modeling language for both interface and
+data definitions, and provides a messaging and data-centric runtime for
+such services based on YANG modeling.
+
+The MD-SAL provides two different API types (flavours):
+
+-  **MD-SAL Binding:** MD-SAL APIs which extensively use APIs and
+   classes generated from YANG models, which provides compile-time
+   safety.
+
+-  **MD-SAL DOM:** (Document Object Model) APIs which use a DOM-like
+   representation of data, which makes them more powerful but provides
+   less compile-time safety.
+
+.. note::
+
+    The model-driven nature of the MD-SAL and the **DOM**-based APIs
+    allows for behind-the-scenes API and payload type mediation and
+    transformation to facilitate seamless communication between
+    applications. This enables other components and applications to
+    provide connectors / expose different sets of APIs and derive most
+    of their functionality purely from models, which all existing code
+    can benefit from without modification. For example, the **RESTCONF
+    Connector** is an application built on top of MD-SAL that
+    transparently exposes YANG-modeled application APIs via HTTP and
+    adds support for XML and JSON payload types.
+
+Basic concepts
+~~~~~~~~~~~~~~
+
+Basic concepts are the building blocks which are used by applications,
+and from which MD-SAL defines messaging patterns and provides services
+and behavior based on developer-supplied YANG models.
+
+Data Tree
+    All state-related data are modeled and represented as a data tree,
+    with the possibility to address any element / subtree.
+
+    -  **Operational Data Tree** - Reported state of the system,
+       published by the providers using MD-SAL. Represents a feedback
+       loop for applications to observe state of the network / system.
+
+    -  **Configuration Data Tree** - Intended state of the system or
+       network, populated by consumers, which expresses their intention.
+
+Instance Identifier
+    A unique identifier of a node / subtree in the data tree, which
+    provides unambiguous information on how to reference and retrieve
+    the node / subtree from the conceptual data trees.
+
+Notification
+    An asynchronous transient event which may be consumed by
+    subscribers, who may act upon it.
+
+RPC
+    An asynchronous request-reply message pair, where the request is
+    triggered by the consumer and sent to the provider, which replies
+    with a reply message in the future.
+
+    .. note::
+
+        In MD-SAL terminology, the term *RPC* is used to define the
+        input and output for a procedure (function) that is to be
+        provided by a provider and mediated by the MD-SAL, which means
+        it may not result in a remote call.
+
+Messaging Patterns
+~~~~~~~~~~~~~~~~~~
+
+MD-SAL provides several messaging patterns, using a broker and derived
+from the basic concepts, which are intended to transfer YANG-modeled
+data between applications to provide data-centric rather than
+API-centric integration between applications.
+
+-  **Unicast communication**
+
+   -  **Remote Procedure Calls** - unicast between consumer and
+      provider, where consumer sends **request** message to provider,
+      which asynchronously responds with **reply** message
+
+-  **Publish / Subscribe**
+
+   -  **Notifications** - multicast transient message which is published
+      by provider and is delivered to subscribers
+
+   -  **Data Change Events** - multicast asynchronous event, which is
+      sent by data broker if there is change in conceptual data tree,
+      and is delivered to subscribers
+
+-  **Transactional access to Data Tree**
+
+   -  Transactional **reads** from conceptual **data tree** - read-only
+      transactions with isolation from other running transactions.
+
+   -  Transactional **modification** to conceptual **data tree** - write
+      transactions with isolation from other running transactions.
+
+   -  **Transaction chaining**
+
+MD-SAL Data Transactions
+------------------------
+
+MD-SAL **Data Broker** provides transactional access to conceptual
+**data trees** representing configuration and operational state.
+
+.. note::
+
+    A **data tree** usually represents the state of the modeled data;
+    typically this is the state of the controller, applications and also
+    external systems (network devices).
+
+**Transactions** provide a **`stable and isolated
+view <#_transaction_isolation>`__** separate from other currently
+running transactions. The state of a running transaction and its
+underlying data tree is not affected by other concurrently running
+transactions.
+
+Write-Only
+    Transaction provides only modification capabilities, but does not
+    provide read capabilities. Write-only transaction is allocated using
+    ``newWriteOnlyTransaction()``.
+
+    .. note::
+
+        This allows less state tracking for write-only transactions and
+        allows MD-SAL Clustering to optimize internal representation of
+        transaction in cluster.
+
+Read-Write
+    Transaction provides both read and write capabilities. It is
+    allocated using ``newReadWriteTransaction()``.
+
+Read-Only
+    Transaction provides stable read-only view based on current data
+    tree. Read-only view is not affected by any subsequent write
+    transactions. Read-only transaction is allocated using
+    ``newReadOnlyTransaction()``.
+
+    .. note::
+
+        If an application needs to observe changes in the data tree
+        itself, it should use **data tree listeners** instead of polling
+        the data tree with read-only transactions.
+
+Transactions may be allocated using the **data broker** itself or using
+a **transaction chain**. In the case of a **transaction chain**, the
+newly allocated transaction is not based on the current state of the
+data tree, but rather on the state introduced by the previous
+transaction from the same chain, even if the commit for the previous
+transaction has not yet occurred (but the transaction was submitted).
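
The chaining behavior can be illustrated with a small toy model (this is NOT the MD-SAL API, just a sketch of the semantics): each transaction allocated from a chain snapshots the state produced by the previously submitted transaction, not the committed data tree.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of transaction chaining (NOT the MD-SAL API): a "transaction"
// is just a map snapshot, and the chain tracks the state produced by the
// last submitted transaction, even before its commit is applied.
public class ChainDemo {
    private Map<String, String> chainTip = new HashMap<>(); // state after last submit

    /** A new chained transaction is based on the chain tip, not the data tree. */
    Map<String, String> newTransaction() {
        return new HashMap<>(chainTip);
    }

    /** Submitting seals the transaction; the actual commit may happen later. */
    void submit(Map<String, String> tx) {
        chainTip = tx;
    }

    public static void main(String[] args) {
        ChainDemo chain = new ChainDemo();

        Map<String, String> tx1 = chain.newTransaction();
        tx1.put("PATH", "A");
        chain.submit(tx1); // commit to the data tree has not yet been applied

        Map<String, String> tx2 = chain.newTransaction();
        // tx2 already observes tx1's write, even though it is not committed yet
        System.out.println(tx2.get("PATH")); // prints "A"
    }
}
```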
+
+Write-Only & Read-Write Transaction
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Write-Only and Read-Write transactions provide modification capabilities
+for the conceptual data trees.
+
+1. The application allocates a new transaction using
+   ``newWriteOnlyTransaction()`` or ``newReadWriteTransaction()``.
+
+2. The application `modifies the data tree <#_modification_of_data_tree>`__
+   using ``put``, ``merge`` and/or ``delete``.
+
+3. The application finishes the transaction using
+   ```submit()`` <#_submitting_transaction>`__, which seals the
+   transaction and submits it to be processed.
+
+4. The application observes the result of the transaction commit using
+   either blocking or asynchronous calls.
+
+The **initial state** of the write transaction is a **stable snapshot**
+of the current data tree state, captured when the transaction was
+created; its state and underlying data tree are not affected by other
+concurrently running transactions.
+
+Write transactions are **isolated** from other concurrent write
+transactions. All **`writes are local <#_transaction_local_state>`__**
+to the transaction and represent only a **proposal of state change**
+for the data tree, and **are not visible** to any other concurrently
+running transactions (including read-only transactions).
+
+The transaction **`commit may fail <#_commit_failure_scenarios>`__** due
+to failing verification of the data or a concurrent transaction
+modifying the affected data in an incompatible way.
+
+Modification of Data Tree
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Write-only and read-write transactions provide the following methods to
+modify the data tree:
+
+put
+    .. code:: java
+
+        <T> void put(LogicalDatastoreType store, InstanceIdentifier<T> path, T data);
+
+    Stores a piece of data at a specified path. This acts as an **add /
+    replace** operation, which is to say that the whole subtree will be
+    replaced by the specified data.
+
+merge
+    .. code:: java
+
+        <T> void merge(LogicalDatastoreType store, InstanceIdentifier<T> path, T data);
+
+    Merges a piece of data with the existing data at a specified path.
+    Any **pre-existing data** which are not explicitly overwritten
+    **will be preserved**. This means that if you store a container, its
+    child subtrees will be merged.
+
+delete
+    .. code:: java
+
+        void delete(LogicalDatastoreType store, InstanceIdentifier<?> path);
+
+    Removes a whole subtree from a specified path.
+
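Since ``put`` replaces the whole subtree while ``merge`` preserves pre-existing children, the difference is easy to get wrong. The following toy sketch (plain Java maps standing in for a container and its leaves; this is NOT the MD-SAL API, and the ``name``/``mtu`` leaves are invented for illustration) contrasts the two operations:

```java
import java.util.HashMap;
import java.util.Map;

public class PutVsMergeDemo {

    /** put: add / replace — the whole subtree is replaced by the given data. */
    static Map<String, String> put(Map<String, String> existing, Map<String, String> data) {
        return new HashMap<>(data);
    }

    /** merge: pre-existing children not explicitly overwritten are preserved. */
    static Map<String, String> merge(Map<String, String> existing, Map<String, String> data) {
        Map<String, String> result = new HashMap<>(existing);
        result.putAll(data);
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> container = new HashMap<>();
        container.put("name", "eth0");
        container.put("mtu", "1500");

        Map<String, String> update = new HashMap<>();
        update.put("mtu", "9000");

        // put drops the "name" leaf; merge keeps it alongside the new "mtu"
        System.out.println(put(container, update));
        System.out.println(merge(container, update));
    }
}
```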
+Submitting transaction
+^^^^^^^^^^^^^^^^^^^^^^
+
+A transaction is submitted to be processed and committed using the
+following method:
+
+.. code:: java
+
+    CheckedFuture<Void,TransactionCommitFailedException> submit();
+
+Applications publish the changes proposed in the transaction by calling
+``submit()`` on the transaction. This **seals the transaction**
+(preventing any further writes using this transaction) and submits it to
+be processed and applied to the global conceptual data tree. The
+``submit()`` method does not block, but rather returns a
+``ListenableFuture``, which will complete successfully once processing
+of the transaction is finished and the changes are applied to the data
+tree. If the **commit** of the data fails, the future will fail with a
+``TransactionCommitFailedException``.
+
+The application may listen for the commit state asynchronously using the
+``ListenableFuture``.
+
+.. code:: java
+
+    Futures.addCallback( writeTx.submit(), new FutureCallback<Void>() {
+            public void onSuccess( Void result ) {
+                LOG.debug("Transaction committed successfully.");
+            }
+
+            public void onFailure( Throwable t ) {
+                LOG.error("Commit failed.", t);
+            }
+        });
+
+-  Submits ``writeTx`` and registers application provided
+   ``FutureCallback`` on returned future.
+
+-  Invoked when future completed successfully - transaction ``writeTx``
+   was successfully committed to data tree.
+
+-  Invoked when future failed - commit of transaction ``writeTx``
+   failed. Supplied exception provides additional details and cause of
+   failure.
+
+If the application needs to block until the commit is finished, it may
+use ``checkedGet()`` to wait for the commit to finish.
+
+.. code:: java
+
+    try {
+        writeTx.submit().checkedGet(); 
+    } catch (TransactionCommitFailedException e) { 
+        LOG.error("Commit failed.",e);
+    }
+
+-  Submits ``writeTx`` and blocks until the commit of ``writeTx`` is
+   finished. If the commit fails, a ``TransactionCommitFailedException``
+   will be thrown.
+
+-  Catches ``TransactionCommitFailedException`` and logs it.
+
+Transaction local state
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Read-write transactions maintain transaction-local state, which renders
+all modifications as if they had already happened, but only locally to
+the transaction.
+
+Reads from the transaction return data as if the previous modifications
+in the transaction had already happened.
+
+Let's assume the initial state of the data tree for ``PATH`` is ``A``.
+
+.. code:: java
+
+    ReadWriteTransaction rwTx = broker.newReadWriteTransaction();
+
+    rwTx.read(OPERATIONAL,PATH).get();
+    rwTx.put(OPERATIONAL,PATH,B);
+    rwTx.read(OPERATIONAL,PATH).get();
+    rwTx.put(OPERATIONAL,PATH,C);
+    rwTx.read(OPERATIONAL,PATH).get();
+
+-  Allocates new ``ReadWriteTransaction``.
+
+-  Read from ``rwTx`` will return value ``A`` for ``PATH``.
+
+-  Writes value ``B`` to ``PATH`` using ``rwTx``.
+
+-  Read will return value ``B`` for ``PATH``, since previous write
+   occurred in same transaction.
+
+-  Writes value ``C`` to ``PATH`` using ``rwTx``.
+
+-  Read will return value ``C`` for ``PATH``, since previous write
+   occurred in same transaction.
+
+Transaction isolation
+~~~~~~~~~~~~~~~~~~~~~
+
+Running (not submitted) transactions are isolated from each other, and
+changes made in one transaction are not observable in other currently
+running transactions.
+
+Let's assume the initial state of the data tree for ``PATH`` is ``A``.
+
+.. code:: java
+
+    ReadOnlyTransaction txRead = broker.newReadOnlyTransaction(); 
+    ReadWriteTransaction txWrite = broker.newReadWriteTransaction(); 
+
+    txRead.read(OPERATIONAL,PATH).get(); 
+    txWrite.put(OPERATIONAL,PATH,B); 
+    txWrite.read(OPERATIONAL,PATH).get(); 
+    txWrite.submit().get(); 
+    txRead.read(OPERATIONAL,PATH).get(); 
+    ReadOnlyTransaction txAfterCommit = broker.newReadOnlyTransaction();
+    txAfterCommit.read(OPERATIONAL,PATH).get(); 
+
+-  Allocates a read-only transaction, which is based on a data tree that
+   contains value ``A`` for ``PATH``.
+
+-  Allocates a read-write transaction, which is based on a data tree
+   that contains value ``A`` for ``PATH``.
+
+-  A read from the read-only transaction returns value ``A`` for
+   ``PATH``.
+
+-  The data tree is updated using the read-write transaction; ``PATH``
+   contains ``B``. The change is not public and is local to the
+   transaction only.
+
+-  A read from the read-write transaction returns value ``B`` for
+   ``PATH``.
+
+-  Submits the changes in the read-write transaction to be committed to
+   the data tree. Once the commit finishes, the changes will be
+   published and ``PATH`` will be updated to value ``B``. Previously
+   allocated transactions are not affected by this change.
+
+-  A read from the previously allocated read-only transaction still
+   returns value ``A`` for ``PATH``, since it provides a stable and
+   isolated view.
+
+-  Allocates a new read-only transaction, which is based on a data tree
+   that contains value ``B`` for ``PATH``.
+
+-  A read from the new read-only transaction returns value ``B`` for
+   ``PATH``, since the read-write transaction was committed.
+
+.. note::
+
+    The examples contain blocking calls on futures only to illustrate
+    that an action happened after another asynchronous action. The use
+    of the blocking call ``ListenableFuture#get()`` is discouraged for
+    most use cases and you should use
+    ``Futures#addCallback(ListenableFuture, FutureCallback)`` to listen
+    asynchronously for the result.
+
+Commit failure scenarios
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+A transaction commit may fail for the following reasons:
+
+Optimistic Lock Failure
+    Another transaction finished earlier and **modified the same node in
+    a non-compatible way**. The commit (and the returned future) will
+    fail with an ``OptimisticLockFailedException``.
+
+    It is the responsibility of the caller to create a new transaction
+    and submit the same modification again in order to update the data
+    tree.
+
+        **Warning**
+
+        ``OptimisticLockFailedException`` usually indicates **multiple
+        writers** to the same data subtree, which may conflict on the
+        same resources.
+
+        In most cases, retrying has a good probability of success.
+
+        There are scenarios, albeit unusual, where any number of retries
+        will not succeed. Therefore it is strongly recommended to limit
+        the number of retries (e.g. to 2 or 3) to avoid an endless loop.
+
+Data Validation
+    The data change introduced by this transaction **did not pass
+    validation** by commit handlers, or the data was incorrectly
+    structured. The returned future will fail with a
+    ``DataValidationFailedException``. The user **should not retry** by
+    creating a new transaction with the same data, since it will
+    probably fail again.
+
+Example conflict of two transactions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This example illustrates two concurrent transactions which are derived
+from the same initial state of the data tree and propose conflicting
+modifications.
+
+.. code:: java
+
+    WriteTransaction txA = broker.newWriteOnlyTransaction();
+    WriteTransaction txB = broker.newWriteOnlyTransaction();
+
+    txA.put(CONFIGURATION, PATH, A);    
+    txB.put(CONFIGURATION, PATH, B);     
+
+    CheckedFuture<?,?> futureA = txA.submit(); 
+    CheckedFuture<?,?> futureB = txB.submit(); 
+
+-  Updates ``PATH`` to value ``A`` using ``txA``
+
+-  Updates ``PATH`` to value ``B`` using ``txB``
+
+-  Seals & submits ``txA``. The commit will be processed asynchronously
+   and the data tree will be updated to contain value ``A`` for
+   ``PATH``. The returned ``ListenableFuture`` will complete
+   successfully once the state is applied to the data tree.
+
+-  Seals & submits ``txB``. The commit of ``txB`` will fail, because the
+   previous transaction also modified the same path concurrently. The
+   state introduced by ``txB`` will not be applied. The returned
+   ``ListenableFuture`` will fail with an
+   ``OptimisticLockFailedException``, which indicates that a concurrent
+   transaction prevented the submitted transaction from being applied.
+
+Example asynchronous retry-loop
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: java
+
+    private void doWrite( final int tries ) {
+        WriteTransaction writeTx = dataBroker.newWriteOnlyTransaction();
+
+        MyDataObject data = ...;
+        InstanceIdentifier<MyDataObject> path = ...;
+        writeTx.put( LogicalDatastoreType.OPERATIONAL, path, data );
+
+        Futures.addCallback( writeTx.submit(), new FutureCallback<Void>() {
+            public void onSuccess( Void result ) {
+                // succeeded
+            }
+
+            public void onFailure( Throwable t ) {
+                if( t instanceof OptimisticLockFailedException && (( tries - 1 ) > 0)) {
+                    doWrite( tries - 1 );
+                }
+            }
+          });
+    }
+    ...
+    doWrite( 2 );
+
+Concurrent change compatibility
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are several sets of changes which could be considered incompatible
+between two transactions which are derived from the same initial state.
+Rules for conflict detection apply recursively at each subtree level.
+
+The following table shows state changes and failures between two
+concurrent transactions which are based on the same initial state, where
+``tx1`` is submitted before ``tx2``.
+
+.. note::
+
+    The following tables store numeric values and show data using
+    ``toString()`` to simplify the examples.
+
++--------------------+--------------------+--------------------+--------------------+
+| Initial state      | tx1                | tx2                | Observable Result  |
++====================+====================+====================+====================+
+| Empty              | ``put(A,1)``       | ``put(A,2)``       | ``tx2`` will fail, |
+|                    |                    |                    | value of ``A`` is  |
+|                    |                    |                    | ``1``              |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | ``put(A,1)``       | ``merge(A,2)``     | value of ``A`` is  |
+|                    |                    |                    | ``2``              |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | ``merge(A,1)``     | ``put(A,2)``       | ``tx2`` will fail, |
+|                    |                    |                    | value of ``A`` is  |
+|                    |                    |                    | ``1``              |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | ``merge(A,1)``     | ``merge(A,2)``     | ``A`` is ``2``     |
++--------------------+--------------------+--------------------+--------------------+
+| A=0                | ``put(A,1)``       | ``put(A,2)``       | ``tx2`` will fail, |
+|                    |                    |                    | ``A`` is ``1``     |
++--------------------+--------------------+--------------------+--------------------+
+| A=0                | ``put(A,1)``       | ``merge(A,2)``     | ``A`` is ``2``     |
++--------------------+--------------------+--------------------+--------------------+
+| A=0                | ``merge(A,1)``     | ``put(A,2)``       | ``tx2`` will fail, |
+|                    |                    |                    | value of ``A`` is  |
+|                    |                    |                    | ``1``              |
++--------------------+--------------------+--------------------+--------------------+
+| A=0                | ``merge(A,1)``     | ``merge(A,2)``     | ``A`` is ``2``     |
++--------------------+--------------------+--------------------+--------------------+
+| A=0                | ``delete(A)``      | ``put(A,2)``       | ``tx2`` will fail, |
+|                    |                    |                    | ``A`` does not     |
+|                    |                    |                    | exist              |
++--------------------+--------------------+--------------------+--------------------+
+| A=0                | ``delete(A)``      | ``merge(A,2)``     | ``A`` is ``2``     |
++--------------------+--------------------+--------------------+--------------------+
+
+Table: Concurrent change resolution for leaves and leaf-list items
+
++--------------------+--------------------+--------------------+--------------------+
+| Initial state      | ``tx1``            | ``tx2``            | Result             |
++====================+====================+====================+====================+
+| Empty              | put(TOP,[])        | put(TOP,[])        | ``tx2`` will fail, |
+|                    |                    |                    | state is TOP=[]    |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | put(TOP,[])        | merge(TOP,[])      | TOP=[]             |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | put(TOP,[FOO=1])   | put(TOP,[BAR=1])   | ``tx2`` will fail, |
+|                    |                    |                    | state is           |
+|                    |                    |                    | TOP=[FOO=1]        |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | put(TOP,[FOO=1])   | merge(TOP,[BAR=1]) | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | merge(TOP,[FOO=1]) | put(TOP,[BAR=1])   | ``tx2`` will fail, |
+|                    |                    |                    | state is           |
+|                    |                    |                    | TOP=[FOO=1]        |
++--------------------+--------------------+--------------------+--------------------+
+| Empty              | merge(TOP,[FOO=1]) | merge(TOP,[BAR=1]) | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | put(TOP,[FOO=1])   | put(TOP,[BAR=1])   | ``tx2`` will fail, |
+|                    |                    |                    | state is           |
+|                    |                    |                    | TOP=[FOO=1]        |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | put(TOP,[FOO=1])   | merge(TOP,[BAR=1]) | state is           |
+|                    |                    |                    | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | merge(TOP,[FOO=1]) | put(TOP,[BAR=1])   | ``tx2`` will fail, |
+|                    |                    |                    | state is           |
+|                    |                    |                    | TOP=[FOO=1]        |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | merge(TOP,[FOO=1]) | merge(TOP,[BAR=1]) | state is           |
+|                    |                    |                    | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | delete(TOP)        | put(TOP,[BAR=1])   | ``tx2`` will fail, |
+|                    |                    |                    | state is empty     |
+|                    |                    |                    | store              |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | delete(TOP)        | merge(TOP,[BAR=1]) | state is           |
+|                    |                    |                    | TOP=[BAR=1]        |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | put(TOP/FOO,1)     | put(TOP/BAR,1)     | state is           |
+|                    |                    |                    | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | put(TOP/FOO,1)     | merge(TOP/BAR,1)   | state is           |
+|                    |                    |                    | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | merge(TOP/FOO,1)   | put(TOP/BAR,1)     | state is           |
+|                    |                    |                    | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | merge(TOP/FOO,1)   | merge(TOP/BAR,1)   | state is           |
+|                    |                    |                    | TOP=[FOO=1,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | delete(TOP)        | put(TOP/BAR,1)     | ``tx2`` will fail, |
+|                    |                    |                    | state is empty     |
+|                    |                    |                    | store              |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[]             | delete(TOP)        | merge(TOP/BAR,1)   | ``tx2`` will fail, |
+|                    |                    |                    | state is empty     |
+|                    |                    |                    | store              |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[FOO=1]        | put(TOP/FOO,2)     | put(TOP/BAR,1)     | state is           |
+|                    |                    |                    | TOP=[FOO=2,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[FOO=1]        | put(TOP/FOO,2)     | merge(TOP/BAR,1)   | state is           |
+|                    |                    |                    | TOP=[FOO=2,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[FOO=1]        | merge(TOP/FOO,2)   | put(TOP/BAR,1)     | state is           |
+|                    |                    |                    | TOP=[FOO=2,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[FOO=1]        | merge(TOP/FOO,2)   | merge(TOP/BAR,1)   | state is           |
+|                    |                    |                    | TOP=[FOO=2,BAR=1]  |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[FOO=1]        | delete(TOP/FOO)    | put(TOP/BAR,1)     | state is           |
+|                    |                    |                    | TOP=[BAR=1]        |
++--------------------+--------------------+--------------------+--------------------+
+| TOP=[FOO=1]        | delete(TOP/FOO)    | merge(TOP/BAR,1)   | state is           |
+|                    |                    |                    | TOP=[BAR=1]        |
++--------------------+--------------------+--------------------+--------------------+
+
+Table: Concurrent change resolution for containers, lists, and list items
+
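The put vs. merge distinction that drives the table above can be sketched in a few lines. This is an illustrative model only, not ODL code; the function names and the plain-object representation of a node are hypothetical. put replaces the target node wholesale, while merge combines the new children with the existing ones:

```javascript
// Illustrative sketch (not ODL code) of the semantics in the table
// above: put replaces the whole node, merge combines children.
function put(store, value) {
    // the previous contents of the node are discarded
    return Object.assign({}, value);
}

function merge(store, value) {
    // existing children are kept; new children are added/overwritten
    return Object.assign({}, store, value);
}

// Empty store; tx1 does put(TOP,[FOO=1]); tx2 does merge(TOP,[BAR=1])
var top = put({}, { FOO: 1 });
top = merge(top, { BAR: 1 });   // TOP=[FOO=1,BAR=1]
```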
+MD-SAL RPC routing
+------------------
+
+The MD-SAL provides a way to deliver Remote Procedure Calls (RPCs) to a
+particular implementation based on content in the input as it is modeled
+in YANG. This part of the RPC input is referred to as a **context
+reference**.
+
+The MD-SAL does not dictate the name of the leaf which is used for this
+RPC routing, but provides the necessary functionality for YANG model
+authors to define a **context reference** in their model of RPCs.
+
+MD-SAL routing behavior is modeled using the following terminology and
+its application to YANG models:
+
+Context Type
+    The logical type of RPC routing. A context type is modeled as a
+    YANG ``identity`` and is referenced in models to provide scoping
+    information.
+
+Context Instance
+    A conceptual location in the data tree which represents the context
+    in which an RPC could be executed. A context instance usually
+    represents a logical point to which RPC execution is attached.
+
+Context Reference
+    A field of the RPC input payload which contains an Instance
+    Identifier referencing the **context instance** in which the RPC
+    should be executed.
+
+Modeling a routed RPC
+~~~~~~~~~~~~~~~~~~~~~
+
+In order to define routed RPCs, the YANG model author needs to declare
+(or reuse) a **context type**, a set of possible **context instances**,
+and finally the RPCs which will contain the **context reference** on
+which they will be routed.
+
+Declaring a routing context type
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code:: yang
+
+    identity node-context {
+        description "Identity used to mark node context";
+    }
+
+This declares an identity named ``node-context``, which is used as a
+marker for node-based routing and is used in other places to reference
+that routing type.
+
+Declaring possible context instances
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In order to define the possible values of **context instances** for
+routed RPCs, we need to model that set accordingly using the
+``context-instance`` extension from the ``yang-ext`` model.
+
+.. code:: yang
+
+    import yang-ext { prefix ext; }
+
+    /** Base structure **/
+    container nodes {
+        list node {
+            key "id";
+            ext:context-instance "node-context";
+            // other node-related fields would go here
+        }
+    }
+
+The statement ``ext:context-instance "node-context";`` marks any element
+of the ``list node`` as a possible valid **context instance** in
+``node-context`` based routing.
+
+.. note::
+
+    The existence of a **context instance** node in the operational or
+    config data tree is not strongly tied to the existence of an RPC
+    implementation.
+
+    For most routed RPC models, there is a relationship between the data
+    present in the operational data tree and RPC implementation
+    availability, but this is not enforced by the MD-SAL. This provides
+    some flexibility for YANG model writers to better specify their
+    routing model and requirements for implementations. The details of
+    when RPC implementations are available should be documented in the
+    YANG model.
+
+    If a user invokes an RPC with a **context instance** that has no
+    registered implementation, the RPC invocation will fail with the
+    exception ``DOMRpcImplementationNotAvailableException``.
+
+Declaring a routed RPC
+^^^^^^^^^^^^^^^^^^^^^^
+
+To declare an RPC to be routed based on ``node-context``, we need to
+add a leaf of the ``instance-identifier`` type (or a type derived from
+``instance-identifier``) to the RPC input and mark it as the **context
+reference**.
+
+This is achieved using the YANG extension ``context-reference`` from
+the ``yang-ext`` model on the leaf which will be used for RPC routing.
+
+.. code:: yang
+
+    rpc example-routed-rpc  {
+        input {
+            leaf node {
+                ext:context-reference "node-context";
+                type "instance-identifier";
+            }
+            // other input to the RPC would go here
+        }
+    }
+
+The statement ``ext:context-reference "node-context"`` marks
+``leaf node`` as a **context reference** of type ``node-context``. The
+value of this leaf will be used by the MD-SAL to select the particular
+RPC implementation that registered itself as the implementation of the
+RPC for that particular **context instance**.
+
+Using routed RPCs
+~~~~~~~~~~~~~~~~~
+
+From a user perspective (e.g. when invoking RPCs) there is no
+difference between routed and non-routed RPCs. Routing information is
+just an additional leaf in the RPC input which must be populated.
+
+Implementing a routed RPC
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Registering implementations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Implementations of a routed RPC (e.g., southbound plugins) specify an
+instance-identifier for the **context reference** (in this case a node)
+for which they want to provide an implementation during registration.
+Consumers, i.e., those calling the RPC, are required to specify that
+instance-identifier (in this case the identifier of a node) when
+invoking the RPC.
+
+Simple code which showcases this for add-flow via the Binding-Aware APIs
+(`RoutedServiceTest.java <https://git.opendaylight.org/gerrit/gitweb?p=controller.git;a=blob;f=opendaylight/md-sal/sal-binding-it/src/test/java/org/opendaylight/controller/test/sal/binding/it/RoutedServiceTest.java;h=d49d6f0e25e271e43c8550feb5eef63d96301184;hb=HEAD>`__
+):
+
+.. code:: java
+
+     61  @Override
+     62  public void onSessionInitiated(ProviderContext session) {
+     63      assertNotNull(session);
+     64      firstReg = session.addRoutedRpcImplementation(SalFlowService.class, salFlowService1);
+     65  }
+
+Line 64: We are registering salFlowService1 as the implementation of
+the SalFlowService RPC.
+
+.. code:: java
+
+    107  NodeRef nodeOne = createNodeRef("foo:node:1");
+    109  /**
+    110   * Provider 1 registers path of node 1
+    111   */
+    112  firstReg.registerPath(NodeContext.class, nodeOne);
+
+Line 107: We are creating a NodeRef (an encapsulation of an
+InstanceIdentifier) for "foo:node:1".
+
+Line 112: We register salFlowService1 as the implementation for nodeOne.
+
+salFlowService1 will be executed only for RPCs which contain the
+Instance Identifier for foo:node:1.
+
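Conceptually, routed RPC dispatch is a lookup from an (RPC type, context instance) pair to a registered implementation. The following JavaScript sketch is illustrative only, not ODL code; the registry, function names, and input shape are hypothetical. It mimics the registration and invocation flow described above, including the failure mode when no implementation is registered:

```javascript
// Illustrative sketch of routed RPC dispatch (not ODL code).
// A registration maps (RPC type, context instance path) to an
// implementation; invocation looks up the implementation using the
// context-reference leaf carried in the RPC input.
var registry = {};

function registerPath(rpcType, contextInstance, impl) {
    registry[rpcType + '|' + contextInstance] = impl;
}

function invokeRpc(rpcType, input) {
    // 'input.node' plays the role of the context-reference leaf
    var impl = registry[rpcType + '|' + input.node];
    if (impl === undefined) {
        // mirrors DOMRpcImplementationNotAvailableException
        throw new Error('DOMRpcImplementationNotAvailableException');
    }
    return impl(input);
}

// salFlowService1 provides add-flow only for foo:node:1
registerPath('SalFlowService', 'foo:node:1', function (input) {
    return 'flow added on ' + input.node;
});

var result = invokeRpc('SalFlowService', { node: 'foo:node:1' });
```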
+OpenDaylight Controller MD-SAL: RESTCONF
+----------------------------------------
+
+RESTCONF operations overview
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+| RESTCONF allows access to datastores in the controller.
+| There are two datastores:
+
+-  Config: Contains data inserted via the controller
+
+-  Operational: Contains other data
+
+.. note::
+
+    | Each request must start with the URI /restconf.
+    | RESTCONF listens on port 8080 for HTTP requests.
+
+RESTCONF supports **OPTIONS**, **GET**, **PUT**, **POST**, and
+**DELETE** operations. Request and response data can be in either the
+XML or JSON format. XML structures according to YANG are defined at:
+`XML-YANG <http://tools.ietf.org/html/rfc6020>`__. JSON structures are
+defined at:
+`JSON-YANG <http://tools.ietf.org/html/draft-lhotka-netmod-yang-json-02>`__.
+Data in the request must have a correctly set **Content-Type** field in
+the HTTP header with an allowed media type value. The media type
+of the requested data has to be set in the **Accept** field. Get the
+media types for each resource by calling the OPTIONS operation. Most of
+the paths of RESTCONF endpoints use an `Instance
+Identifier <https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Concepts#Instance_Identifier>`__.
+``<identifier>`` is used in the explanation of the operations.
+
+| **<identifier>**
+
+-  It must start with <moduleName>:<nodeName>, where <moduleName> is
+   the name of the module and <nodeName> is the name of a node in the
+   module. After the first <moduleName>:<nodeName>, it is sufficient to
+   use just <nodeName>. Each <nodeName> has to be separated by /.
+
+-  <nodeName> can represent a data node which is a list or container
+   YANG built-in type. If the data node is a list, the keys of the list
+   must be defined behind the data node name, for example,
+   <nodeName>/<valueOfKey1>/<valueOfKey2>.
+
+-  | The format <moduleName>:<nodeName> has to be used in this case as
+     well:
+   | Module A has node A1. Module B augments node A1 by adding node X.
+     Module C augments node A1 by adding node X. For clarity, it has to
+     be known which module's node X is meant (for example: C:X). For
+     more details about encoding, see: `RESTCONF 02 - Encoding YANG
+     Instance Identifiers in the Request
+     URI. <http://tools.ietf.org/html/draft-bierman-netconf-restconf-02#section-5.3.1>`__
+
+Mount point
+~~~~~~~~~~~
+
+| A node can be behind a mount point. In this case, the URI has to be
+  in the format <identifier>/**yang-ext:mount**/<identifier>. The first
+  <identifier> is the path to the mount point and the second
+  <identifier> is the path to a node behind the mount point. A URI can
+  end with a mount point itself by using <identifier>/**yang-ext:mount**.
+| More information on how to actually use mountpoints is available at:
+  `OpenDaylight
+  Controller:Config:Examples:Netconf <https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf>`__.
+
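The identifier and mount-point rules above can be captured in a small helper. The following sketch is illustrative only; the helper name and its argument structure are not part of OpenDaylight. It composes a config-datastore URI from path segments, optionally continuing behind a mount point:

```javascript
// Illustrative helper (not part of ODL): composes a RESTCONF config URI
// from path segments, optionally continuing behind a mount point.
// Segments follow the rules above: the first is <moduleName>:<nodeName>,
// and list keys are appended as extra segments after the list name.
function buildConfigUri(host, segments, mountedSegments) {
    var path = segments.join('/');
    if (mountedSegments && mountedSegments.length > 0) {
        // a node behind a mount point:
        // <identifier>/yang-ext:mount/<identifier>
        path += '/yang-ext:mount/' + mountedSegments.join('/');
    }
    return 'http://' + host + ':8080/restconf/config/' + path;
}

// Flow 111 in table 2 of node openflow:1 (used later in this guide):
var flowUri = buildConfigUri('192.168.11.1', [
    'opendaylight-inventory:nodes', 'node', 'openflow:1',
    'table', '2', 'flow', '111'
]);

// A node behind a mount point:
var mountedUri = buildConfigUri('192.168.11.1',
    ['module1:foo1', 'foo2'], ['module2:foo', 'bar']);
```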
+HTTP methods
+~~~~~~~~~~~~
+
+OPTIONS /restconf
+^^^^^^^^^^^^^^^^^
+
+-  Returns the XML description of the resources with the required
+   request and response media types in Web Application Description
+   Language (WADL)
+
+GET /restconf/config/<identifier>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Returns a data node from the Config datastore.
+
+-  <identifier> points to a data node which must be retrieved.
+
+GET /restconf/operational/<identifier>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Returns the value of the data node from the Operational datastore.
+
+-  <identifier> points to a data node which must be retrieved.
+
+PUT /restconf/config/<identifier>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Updates or creates data in the Config datastore and returns a
+   status indicating success.
+
+-  <identifier> points to a data node which must be stored.
+
+| **Example:**
+
+::
+
+    PUT http://<controllerIP>:8080/restconf/config/module1:foo/bar
+    Content-Type: application/xml
+    <bar>
+      …
+    </bar>
+
+| **Example with mount point:**
+
+::
+
+    PUT http://<controllerIP>:8080/restconf/config/module1:foo1/foo2/yang-ext:mount/module2:foo/bar
+    Content-Type: application/xml
+    <bar>
+      …
+    </bar>
+
+POST /restconf/config
+^^^^^^^^^^^^^^^^^^^^^
+
+-  Creates the data if it does not exist
+
+| For example:
+
+::
+
+    POST URL: http://localhost:8080/restconf/config/
+    content-type: application/yang.data+json
+    JSON payload:
+
+       {
+         "toaster:toaster" :
+         {
+           "toaster:toasterManufacturer" : "General Electric",
+           "toaster:toasterModelNumber" : "123",
+           "toaster:toasterStatus" : "up"
+         }
+      }
+
+POST /restconf/config/<identifier>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Creates the data if it does not exist in the Config datastore, and
+   returns a status indicating success.
+
+-  <identifier> points to a data node where data must be stored.
+
+-  The root element of the data must have the namespace (if the data
+   are in XML) or the module name (if the data are in JSON).
+
+| **Example:**
+
+::
+
+    POST http://<controllerIP>:8080/restconf/config/module1:foo
+    Content-Type: application/xml
+    <bar xmlns="module1namespace">
+      …
+    </bar>
+
+**Example with mount point:**
+
+::
+
+    http://<controllerIP>:8080/restconf/config/module1:foo1/foo2/yang-ext:mount/module2:foo
+    Content-Type: application/xml
+    <bar xmlns="module2namespace">
+      …
+    </bar>
+
+POST /restconf/operations/<moduleName>:<rpcName>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Invokes RPC.
+
+-  <moduleName>:<rpcName> - <moduleName> is the name of the module and
+   <rpcName> is the name of the RPC in this module.
+
+-  The root element of the data sent to the RPC must have the name “input”.
+
+-  The result can be the status code or the retrieved data having the
+   root element “output”.
+
+| **Example:**
+
+::
+
+    POST http://<controllerIP>:8080/restconf/operations/module1:fooRpc
+    Content-Type: application/xml
+    Accept: application/xml
+    <input>
+      …
+    </input>
+
+    The answer from the server could be:
+    <output>
+      …
+    </output>
+
+| **An example using a JSON payload:**
+
+::
+
+    POST http://localhost:8080/restconf/operations/toaster:make-toast
+    Content-Type: application/yang.data+json
+    {
+      "input" :
+      {
+         "toaster:toasterDoneness" : "10",
+         "toaster:toasterToastType":"wheat-bread"
+      }
+    }
+
+.. note::
+
+    Even though this is the default for the toasterToastType value in
+    the YANG model, you still need to define it.
+
+DELETE /restconf/config/<identifier>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Removes the data node from the Config datastore and returns a
+   status indicating success.
+
+-  <identifier> points to a data node which must be removed.
+
+More information is available in the `RESTCONF
+draft <http://tools.ietf.org/html/draft-bierman-netconf-restconf-02>`__.
+
+How RESTCONF works
+~~~~~~~~~~~~~~~~~~
+
+| RESTCONF uses these base classes:
+
+InstanceIdentifier
+    Represents the path in the data tree
+
+ConsumerSession
+    Used for invoking RPCs
+
+DataBrokerService
+    Offers manipulation with transactions and reading data from the
+    datastores
+
+SchemaContext
+    Holds information about yang modules
+
+MountService
+    Returns MountInstance based on the InstanceIdentifier pointing to a
+    mount point
+
+MountInstance
+    Contains the SchemaContext behind the mount point
+
+DataSchemaNode
+    Provides information about the schema node
+
+SimpleNode
+    Possesses the same name as the schema node, and contains the value
+    representing the data node value
+
+CompositeNode
+    Can contain CompositeNode-s and SimpleNode-s
+
+GET in action
+~~~~~~~~~~~~~
+
+Figure 1 shows the GET operation with URI restconf/config/M:N where M is
+the module name, and N is the node name.
+
+.. figure:: ./images/Get.png
+   :alt: Get
+
+   Get
+
+1. The requested URI is translated into the InstanceIdentifier which
+   points to the data node. During this translation, the DataSchemaNode
+   that conforms to the data node is obtained. If the data node is
+   behind the mount point, the MountInstance is obtained as well.
+
+2. RESTCONF asks for the value of the data node from DataBrokerService
+   based on InstanceIdentifier.
+
+3. DataBrokerService returns CompositeNode as data.
+
+4. StructuredDataToXmlProvider or StructuredDataToJsonProvider is called
+   based on the **Accept** field from the HTTP request. These two
+   providers can transform a CompositeNode, using its DataSchemaNode,
+   into an XML or JSON document.
+
+5. XML or JSON is returned as the answer to the request from the client.
+
+PUT in action
+~~~~~~~~~~~~~
+
+Figure 2 shows the PUT operation with the URI restconf/config/M:N where
+M is the module name, and N is the node name. Data is sent in the
+request either in the XML or JSON format.
+
+.. figure:: ./images/Put.png
+   :alt: Put
+
+   Put
+
+1. Input data is sent to JsonToCompositeNodeProvider or
+   XmlToCompositeNodeProvider. The correct provider is selected based on
+   the **Content-Type** field from the HTTP request. These two providers
+   transform the input data to a CompositeNode. However, this
+   CompositeNode does not contain enough information for transactions.
+
+2. The requested URI is translated into InstanceIdentifier which points
+   to the data node. DataSchemaNode conforming to the data node is
+   obtained during this translation. If the data node is behind the
+   mount point, the MountInstance is obtained as well.
+
+3. CompositeNode can be normalized by adding additional information from
+   DataSchemaNode.
+
+4. RESTCONF begins the transaction, and puts CompositeNode with
+   InstanceIdentifier into it. The response on the request from the
+   client is the status code which depends on the result from the
+   transaction.
+
+Something practical
+~~~~~~~~~~~~~~~~~~~
+
+1. Create a new flow on the switch openflow:1 in table 2.
+
+| **HTTP request**
+
+::
+
+    Operation: POST
+    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2
+    Content-Type: application/xml
+
+::
+
+    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
+    <flow
+        xmlns="urn:opendaylight:flow:inventory">
+        <strict>false</strict>
+        <instructions>
+            <instruction>
+                <order>1</order>
+                <apply-actions>
+                    <action>
+                      <order>1</order>
+                        <flood-all-action/>
+                    </action>
+                </apply-actions>
+            </instruction>
+        </instructions>
+        <table_id>2</table_id>
+        <id>111</id>
+        <cookie_mask>10</cookie_mask>
+        <out_port>10</out_port>
+        <installHw>false</installHw>
+        <out_group>2</out_group>
+        <match>
+            <ethernet-match>
+                <ethernet-type>
+                    <type>2048</type>
+                </ethernet-type>
+            </ethernet-match>
+            <ipv4-destination>10.0.0.1/24</ipv4-destination>
+        </match>
+        <hard-timeout>0</hard-timeout>
+        <cookie>10</cookie>
+        <idle-timeout>0</idle-timeout>
+        <flow-name>FooXf22</flow-name>
+        <priority>2</priority>
+        <barrier>false</barrier>
+    </flow>
+
+| **HTTP response**
+
+::
+
+    Status: 204 No Content
+
+2. Change *strict* to *true* in the previous flow.
+
+| **HTTP request**
+
+::
+
+    Operation: PUT
+    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
+    Content-Type: application/xml
+
+::
+
+    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
+    <flow
+        xmlns="urn:opendaylight:flow:inventory">
+        <strict>true</strict>
+        <instructions>
+            <instruction>
+                <order>1</order>
+                <apply-actions>
+                    <action>
+                      <order>1</order>
+                        <flood-all-action/>
+                    </action>
+                </apply-actions>
+            </instruction>
+        </instructions>
+        <table_id>2</table_id>
+        <id>111</id>
+        <cookie_mask>10</cookie_mask>
+        <out_port>10</out_port>
+        <installHw>false</installHw>
+        <out_group>2</out_group>
+        <match>
+            <ethernet-match>
+                <ethernet-type>
+                    <type>2048</type>
+                </ethernet-type>
+            </ethernet-match>
+            <ipv4-destination>10.0.0.1/24</ipv4-destination>
+        </match>
+        <hard-timeout>0</hard-timeout>
+        <cookie>10</cookie>
+        <idle-timeout>0</idle-timeout>
+        <flow-name>FooXf22</flow-name>
+        <priority>2</priority>
+        <barrier>false</barrier>
+    </flow>
+
+| **HTTP response**
+
+::
+
+    Status: 200 OK
+
+3. Show flow: check that *strict* is *true*.
+
+| **HTTP request**
+
+::
+
+    Operation: GET
+    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
+    Accept: application/xml
+
+| **HTTP response**
+
+::
+
+    Status: 200 OK
+
+::
+
+    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
+    <flow
+        xmlns="urn:opendaylight:flow:inventory">
+        <strict>true</strict>
+        <instructions>
+            <instruction>
+                <order>1</order>
+                <apply-actions>
+                    <action>
+                      <order>1</order>
+                        <flood-all-action/>
+                    </action>
+                </apply-actions>
+            </instruction>
+        </instructions>
+        <table_id>2</table_id>
+        <id>111</id>
+        <cookie_mask>10</cookie_mask>
+        <out_port>10</out_port>
+        <installHw>false</installHw>
+        <out_group>2</out_group>
+        <match>
+            <ethernet-match>
+                <ethernet-type>
+                    <type>2048</type>
+                </ethernet-type>
+            </ethernet-match>
+            <ipv4-destination>10.0.0.1/24</ipv4-destination>
+        </match>
+        <hard-timeout>0</hard-timeout>
+        <cookie>10</cookie>
+        <idle-timeout>0</idle-timeout>
+        <flow-name>FooXf22</flow-name>
+        <priority>2</priority>
+        <barrier>false</barrier>
+    </flow>
+
+4. Delete the flow created.
+
+| **HTTP request**
+
+::
+
+    Operation: DELETE
+    URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
+
+| **HTTP response**
+
+::
+
+    Status: 200 OK
+
+Websocket change event notification subscription tutorial
+---------------------------------------------------------
+
+Subscribing to data change notifications makes it possible to obtain
+notifications about data manipulation (insert, change, delete) done on
+any specified **path** of any specified **datastore** with a specific
+**scope**. In the following examples, *{odlAddress}* is the address of
+the server where OpenDaylight is running and *{odlPort}* is the port on
+which OpenDaylight is listening.
+
+Websocket notifications subscription process
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this section we will learn what steps need to be taken in order to
+successfully subscribe to data change event notifications.
+
+Create stream
+^^^^^^^^^^^^^
+
+In order to use event notifications you first need to call an RPC that
+creates a notification stream which you can later listen to. You need
+to provide three parameters to this RPC:
+
+-  **path**: the data store path that you plan to listen to. You can
+   register a listener on containers, lists, and leaves.
+
+-  **datastore**: the data store type, either *OPERATIONAL* or
+   *CONFIGURATION*.
+
+-  **scope**: represents the scope of the data change. Possible options are:
+
+   -  BASE: only changes directly to the data tree node specified in the
+      path will be reported
+
+   -  ONE: changes to the node and to direct child nodes will be
+      reported
+
+   -  SUBTREE: changes anywhere in the subtree starting at the node will
+      be reported
+
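The three scopes can be summarized by how a changed path relates to the listener path. The following JavaScript sketch is illustrative only; the function and its plain string representation of paths are hypothetical, not ODL code:

```javascript
// Illustrative sketch (not ODL code): decides whether a change at
// changedPath is reported to a listener registered at listenerPath
// with the given scope. Paths are plain slash-separated strings.
function isReported(listenerPath, changedPath, scope) {
    var isDescendant = changedPath.indexOf(listenerPath + '/') === 0;
    if (scope === 'BASE') {
        // only changes directly to the listened-to node
        return changedPath === listenerPath;
    }
    if (scope === 'ONE') {
        // the node itself or a direct child (one extra path segment)
        return changedPath === listenerPath ||
            (isDescendant &&
             changedPath.slice(listenerPath.length + 1).indexOf('/') === -1);
    }
    if (scope === 'SUBTREE') {
        // anywhere in the subtree rooted at the node
        return changedPath === listenerPath || isDescendant;
    }
    return false;
}
```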
+The RPC to create the stream can be invoked via RESTCONF like this:
+
+-  URI:
+   http://{odlAddress}:{odlPort}/restconf/operations/sal-remote:create-data-change-event-subscription
+
+-  HEADER: Content-Type=application/json
+
+-  OPERATION: POST
+
+-  DATA:
+
+   .. code:: json
+
+       {
+           "input": {
+               "path": "/toaster:toaster/toaster:toasterStatus",
+               "sal-remote-augment:datastore": "OPERATIONAL",
+               "sal-remote-augment:scope": "ONE"
+           }
+       }
+
+The response should look something like this:
+
+.. code:: json
+
+    {
+        "output": {
+            "stream-name": "toaster:toaster/toaster:toasterStatus/datastore=OPERATIONAL/scope=ONE"
+        }
+    }
+
+**stream-name** is important because you will need to use it when you
+subscribe to the stream in the next step.
+
+.. note::
+
+    Internally, this will create a new listener for *stream-name* if it
+    did not already exist.
+
+Subscribe to stream
+^^^^^^^^^^^^^^^^^^^
+
+In order to subscribe to stream and obtain WebSocket location you need
+to call *GET* on your stream path. The URI should generally be
+http://{odlAddress}:{odlPort}/restconf/streams/stream/{streamName},
+where *{streamName}* is the *stream-name* parameter contained in
+response from *create-data-change-event-subscription* RPC from the
+previous step.
+
+-  URI:
+   http://{odlAddress}:{odlPort}/restconf/streams/stream/toaster:toaster/toaster:toasterStatus/datastore=OPERATIONAL/scope=ONE
+
+-  OPERATION: GET
+
+The expected response status is 200 OK and the response body should be
+empty. You will get your WebSocket location from the **Location** header
+of the response. For example, in our toaster example the Location header
+would have this value:
+*ws://{odlAddress}:8185/toaster:toaster/toaster:toasterStatus/datastore=OPERATIONAL/scope=ONE*
+
+.. note::
+
+    During this phase there is an internal check to see whether a
+    listener for the *stream-name* from the URI exists. If not, a new
+    listener is registered with the DOM data broker.
+
+Receive notifications
+^^^^^^^^^^^^^^^^^^^^^
+
+You should now have a data change notification stream created and the
+location of a WebSocket. You can use this WebSocket to listen to data
+change notifications. To listen to notifications you can use a
+JavaScript client or, if you are using the Chrome browser, the
+`Simple WebSocket
+Client <https://chrome.google.com/webstore/detail/simple-websocket-client/pfdhoblngboilpfeibdedpjgfnlcodoo>`__.
+
+Also, for testing purposes, there is a simple Java application named
+WebSocketClient. The application is placed in the
+*-sal-rest-connector-classes.class* project. It accepts a WebSocket URI
+as an input parameter. After starting the utility (the WebSocketClient
+class directly in Eclipse/IntelliJ IDEA), received notifications should
+be displayed in the console.
+
+Notifications are always in XML format and look like this:
+
+.. code:: xml
+
+    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
+        <eventTime>2014-09-11T09:58:23+02:00</eventTime>
+        <data-changed-notification xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:remote">
+            <data-change-event>
+                <path xmlns:meae="http://netconfcentral.org/ns/toaster">/meae:toaster</path>
+                <operation>updated</operation>
+                <data>
+                   <!-- updated data -->
+                </data>
+            </data-change-event>
+        </data-changed-notification>
+    </notification>
+
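Notifications of the shape above can be processed on the client. The following sketch is illustrative only (the function name and sample payload are hypothetical); a real client should use a proper XML parser rather than regular expressions:

```javascript
// Illustrative sketch: pulls the <operation> and <path> values out of a
// data change notification. Assumes the XML shape shown above; a real
// client should use a DOM/XML parser instead of regular expressions.
function parseDataChangeEvent(xml) {
    var op = /<operation>([^<]*)<\/operation>/.exec(xml);
    var path = /<path[^>]*>([^<]*)<\/path>/.exec(xml);
    return {
        operation: op ? op[1] : null,
        path: path ? path[1] : null
    };
}

// A trimmed-down sample notification, as in the example above:
var sample =
    '<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">' +
    '  <eventTime>2014-09-11T09:58:23+02:00</eventTime>' +
    '  <data-changed-notification>' +
    '    <data-change-event>' +
    '      <path xmlns:meae="http://netconfcentral.org/ns/toaster">/meae:toaster</path>' +
    '      <operation>updated</operation>' +
    '    </data-change-event>' +
    '  </data-changed-notification>' +
    '</notification>';

var event = parseDataChangeEvent(sample);
```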
+Example use case
+~~~~~~~~~~~~~~~~
+
+The typical use case is listening to data change events to update web
+page data in real time. In this tutorial we will be using the toaster
+as the base.
+
+When you call the *make-toast* RPC, it sets *toasterStatus* to "down"
+to reflect that the toaster is busy making toast. When it finishes,
+*toasterStatus* is set to "up" again. We will listen to these toaster
+status changes in the data store and reflect them on our web page in
+real time thanks to WebSocket data change notifications.
+
+Simple javascript client implementation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We will create a simple JavaScript web application that listens for
+updates on the *toasterStatus* leaf and updates an element of our web
+page according to the new toaster status.
+
+Create stream
+^^^^^^^^^^^^^
+
+First you need to create the stream that you are planning to subscribe
+to. This can be achieved by invoking the
+"create-data-change-event-subscription" RPC on RESTCONF via an AJAX
+request. You need to provide the data store **path** that you plan to
+listen on, the **data store type**, and the **scope**. If the request
+is successful you can extract the **stream-name** from the response and
+use that to subscribe to the newly created stream. The *{username}* and
+*{password}* fields represent the credentials that you use to connect
+to OpenDaylight via RESTCONF:
+
+.. note::
+
+    The default user name and password are "admin".
+
+.. code:: javascript
+
+    function createStream() {
+        $.ajax(
+            {
+                url: 'http://{odlAddress}:{odlPort}/restconf/operations/sal-remote:create-data-change-event-subscription',
+                type: 'POST',
+                headers: {
+                  'Authorization': 'Basic ' + btoa('{username}:{password}'),
+                  'Content-Type': 'application/json'
+                },
+                data: JSON.stringify(
+                    {
+                        'input': {
+                            'path': '/toaster:toaster/toaster:toasterStatus',
+                            'sal-remote-augment:datastore': 'OPERATIONAL',
+                            'sal-remote-augment:scope': 'ONE'
+                        }
+                    }
+                )
+            }).done(function (data) {
+                // this function will be called when ajax call is executed successfully
+                subscribeToStream(data.output['stream-name']);
+            }).fail(function (data) {
+                // this function will be called when ajax call fails
+                console.log("Create stream call unsuccessful");
+            })
+    }
+
+Subscribe to stream
+^^^^^^^^^^^^^^^^^^^
+
+The next step is to subscribe to the stream. To subscribe to the
+stream you need to call *GET* on
+*http://{odlAddress}:{odlPort}/restconf/streams/stream/{stream-name}*.
+If the call is successful, you get the WebSocket address for this
+stream in the **Location** parameter of the response header. You can
+read the response header by calling *getResponseHeader('Location')* on
+the HttpRequest object inside the *done()* function call:
+
+.. code:: javascript
+
+    function subscribeToStream(streamName) {
+        $.ajax(
+            {
+                url: 'http://{odlAddress}:{odlPort}/restconf/streams/stream/' + streamName,
+                type: 'GET',
+                headers: {
+                  'Authorization': 'Basic ' + btoa('{username}:{password}'),
+                }
+            }
+        ).done(function (data, textStatus, httpReq) {
+            // we need function that has http request object parameter in order to access response headers.
+            listenToNotifications(httpReq.getResponseHeader('Location'));
+        }).fail(function (data) {
+            console.log("Subscribe to stream call unsuccessful");
+        });
+    }
+
+Receive notifications
+^^^^^^^^^^^^^^^^^^^^^
+
+Once you have the WebSocket server location, you can connect to it and
+start receiving data change events. You need to define functions that
+will handle events on the WebSocket. To process incoming events from
+OpenDaylight, provide a function that will handle *onmessage* events.
+The function must have one parameter that represents the received
+event object. The event data is stored in *event.data* in XML format,
+which you can then easily parse using jQuery.
+
+.. code:: javascript
+
+    function listenToNotifications(socketLocation) {
+        try {
+            var notificationSocket = new WebSocket(socketLocation);
+
+            notificationSocket.onmessage = function (event) {
+                // we process our received event here
+                console.log('Received toaster data change event.');
+                $($.parseXML(event.data)).find('data-change-event').each(
+                    function (index) {
+                        var operation = $(this).find('operation').text();
+                        if (operation === 'updated') {
+                            // toaster status was updated, so we call a function that gets the value of the toasterStatus leaf
+                            updateToasterStatus();
+                            return false;
+                        }
+                    }
+                );
+            };
+            notificationSocket.onerror = function (error) {
+                console.log("Socket error: " + error);
+            };
+            notificationSocket.onopen = function (event) {
+                console.log("Socket connection opened.");
+            };
+            notificationSocket.onclose = function (event) {
+                console.log("Socket connection closed.");
+            };
+            // if there is a problem with socket creation we get an exception (e.g. when the socket address is incorrect)
+        } catch (e) {
+            alert("Error when creating WebSocket: " + e);
+        }
+    }
+
+The *updateToasterStatus()* function calls *GET* on the path that was
+modified and sets the toaster status in a web page element according
+to the received data. After the WebSocket connection has been
+established, you can test events by calling the *make-toast* RPC via
+RESTCONF.
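
As a minimal sketch (assuming a hypothetical page element id *toasterStatusElement*, and that the leaf value arrives under the JSON key *toaster:toasterStatus* — both are illustration-only assumptions), *updateToasterStatus()* could look like:

```javascript
// Hedged sketch: reads the current toasterStatus leaf and shows it on the page.
// 'toasterStatusElement' is a hypothetical element id, not part of the tutorial.
function updateToasterStatus() {
    $.ajax({
        url: 'http://{odlAddress}:{odlPort}/restconf/operational/toaster:toaster/toasterStatus',
        type: 'GET',
        headers: {
            'Authorization': 'Basic ' + btoa('{username}:{password}')
        }
    }).done(function (data) {
        // the leaf value is assumed to arrive as {"toaster:toasterStatus": "up"}
        $('#toasterStatusElement').text(data['toaster:toasterStatus']);
    }).fail(function () {
        console.log('Get toaster status call unsuccessful');
    });
}
```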
+
+.. note::
+
+    For more information about WebSockets in JavaScript, see `Writing
+    WebSocket client
+    applications <https://developer.mozilla.org/en-US/docs/WebSockets/Writing_WebSocket_client_applications>`__.
+
+Config Subsystem
+----------------
+
+Overview
+~~~~~~~~
+
+The Controller configuration operation has three stages:
+
+-  First, a Proposed configuration is created. Its target is to replace
+   the old configuration.
+
+-  Second, the Proposed configuration is validated. If it passes
+   validation successfully, its state is changed to Validated.
+
+-  Finally, a Validated configuration can be Committed, and the affected
+   modules can be reconfigured.
+
+In fact, each configuration operation is wrapped in a transaction.
+Once a transaction is created, it can be configured; during this stage
+a user can also abort the transaction. After the transaction
+configuration is done, it is committed to the validation stage, where
+the validation procedures are invoked. If one or more validations
+fail, the transaction can be reconfigured. Upon success, the second
+phase commit is invoked. If this commit is successful, the transaction
+enters the last stage, Committed, and the desired modules are
+reconfigured. If the second phase commit fails, the transaction is
+unhealthy: basically, creation of a new configuration instance failed,
+and the application can be left in an inconsistent state.
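
The stages above can be modeled as a small state machine. The class and method names below are illustrative only; they do not reflect the actual config subsystem API:

```java
// Toy model of the transaction lifecycle described above (illustrative only).
enum TxState { CONFIGURING, VALIDATED, COMMITTED, ABORTED }

class ConfigTransaction {
    private TxState state = TxState.CONFIGURING;

    TxState getState() { return state; }

    void abort() { state = TxState.ABORTED; }

    // first phase: run validation procedures; on failure the transaction
    // stays in the configuring state and can be reconfigured
    boolean validate(boolean inputIsHealthy) {
        if (inputIsHealthy) {
            state = TxState.VALIDATED;
        }
        return inputIsHealthy;
    }

    // second phase: only a validated transaction may be committed, after
    // which the affected modules are reconfigured
    void commit() {
        if (state != TxState.VALIDATED) {
            throw new IllegalStateException("only a validated transaction can be committed");
        }
        state = TxState.COMMITTED;
    }
}
```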
+
+.. figure:: ./images/configuration.jpg
+   :alt: Configuration states
+
+   Configuration states
+
+.. figure:: ./images/Transaction.jpg
+   :alt: Transaction states
+
+   Transaction states
+
+Validation
+~~~~~~~~~~
+
+To ensure the consistency and safety of the new configuration and to
+avoid conflicts, a configuration validation process is necessary.
+Validation typically checks the input parameters of a new
+configuration and verifies module-specific relationships. The
+validation procedure results in a decision on whether the proposed
+configuration is healthy.
+
+Dependency resolver
+~~~~~~~~~~~~~~~~~~~
+
+Since there can be dependencies between modules, a change in one
+module's configuration can affect the state of other modules.
+Therefore, we need to verify whether dependencies on other modules can
+be resolved. The Dependency Resolver acts in a manner similar to
+dependency injectors: basically, a dependency tree is built.
+
+APIs and SPIs
+~~~~~~~~~~~~~
+
+This section describes configuration system APIs and SPIs.
+
+SPIs
+^^^^
+
+**Module** org.opendaylight.controller.config.spi. Module is the common
+interface for all modules; every module must implement it. The module
+holds configuration attributes, validates them, and creates service
+instances based on those attributes. Each instance must implement the
+AutoCloseable interface so that its resources can be cleaned up. If
+the module was created from an already running instance, it contains
+the old instance of the module. A module can implement multiple
+services. If the module depends on other modules, its setters need to
+be annotated with @RequireInterface.
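
As an illustration of this contract, a module might be sketched as follows. The Module interface here is a simplified stand-in for the real org.opendaylight.controller.config.spi.Module, and the attribute name is made up:

```java
// Simplified stand-in for org.opendaylight.controller.config.spi.Module
interface Module {
    void validate();
    AutoCloseable getInstance();
}

// Hypothetical module with one configuration attribute
class ExampleToasterModule implements Module {
    private long maxToasts;

    public void setMaxToasts(long maxToasts) { this.maxToasts = maxToasts; }

    @Override
    public void validate() {
        // validate configuration attributes before the commit phase
        if (maxToasts <= 0) {
            throw new IllegalArgumentException("maxToasts must be positive");
        }
    }

    @Override
    public AutoCloseable getInstance() {
        // the created service must be AutoCloseable so its resources
        // can be cleaned up on reconfiguration
        return () -> { /* release resources here */ };
    }
}
```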
+
+**Module creation**
+
+1. The module needs to be configured, that is, set with all required
+   attributes.
+
+2. The module is then moved to the commit stage for validation. If the
+   validation fails, the module attributes can be reconfigured.
+   Otherwise, a new instance is either created, or an old instance is
+   reconfigured. A module instance is identified by ModuleIdentifier,
+   consisting of the factory name and instance name.
+
+**ModuleFactory** org.opendaylight.controller.config.spi. The
+ModuleFactory interface must be implemented by each module factory. A
+module factory can create a new module instance in two ways:
+
+-  From an existing module instance
+
+-  As an entirely new instance
+
+ModuleFactory can also return default modules, which are useful for
+populating the registry with already existing configurations. A module
+factory implementation must have a globally unique name.
+
+APIs
+^^^^
+
++--------------------------------------+--------------------------------------+
+| ConfigRegistry                       | Represents functionality provided by |
+|                                      | a configuration transaction (create, |
+|                                      | destroy module, validate, or abort   |
+|                                      | transaction).                        |
++--------------------------------------+--------------------------------------+
+| ConfigTransactionController          | Represents functionality for         |
+|                                      | manipulating with configuration      |
+|                                      | transactions (begin, commit config). |
++--------------------------------------+--------------------------------------+
+| RuntimeBeanRegistratorAwareConfigBean| The module implementing this         |
+|                                      | interface will receive               |
+|                                      | RuntimeBeanRegistrator before        |
+|                                      | getInstance is invoked.              |
++--------------------------------------+--------------------------------------+
+
+Runtime APIs
+^^^^^^^^^^^^
+
++--------------------------------------+--------------------------------------+
+| RuntimeBean                          | Common interface for all runtime     |
+|                                      | beans                                |
++--------------------------------------+--------------------------------------+
+| RootRuntimeBeanRegistrator           | Represents functionality for root    |
+|                                      | runtime bean registration, which     |
+|                                      | subsequently allows hierarchical     |
+|                                      | registrations                        |
++--------------------------------------+--------------------------------------+
+| HierarchicalRuntimeBeanRegistration  | Represents functionality for runtime |
+|                                      | bean registration and                |
+|                                      | unregistration from hierarchy        |
++--------------------------------------+--------------------------------------+
+
+JMX APIs
+^^^^^^^^
+
+The JMX API serves as a bridge between the Client API and the JMX
+platform.
+
++--------------------------------------+--------------------------------------+
+| ConfigTransactionControllerMXBean    | Extends ConfigTransactionController, |
+|                                      | executed by Jolokia clients on       |
+|                                      | configuration transaction.           |
++--------------------------------------+--------------------------------------+
+| ConfigRegistryMXBean                 | Represents entry point of            |
+|                                      | configuration management for         |
+|                                      | MXBeans.                             |
++--------------------------------------+--------------------------------------+
+| Object names                         | Object Name is the pattern used in   |
+|                                      | JMX to locate JMX beans. It consists |
+|                                      | of domain and key properties (at     |
+|                                      | least one key-value pair). Domain is |
+|                                      | defined as                           |
+|                                      | "org.opendaylight.controller". The   |
+|                                      | only mandatory property is "type".   |
++--------------------------------------+--------------------------------------+
+
+Use case scenarios
+^^^^^^^^^^^^^^^^^^
+
+A few examples of successful and unsuccessful transaction scenarios
+follow:
+
+**Successful commit scenario**
+
+1.  The user creates a transaction by calling the createTransaction()
+    method on ConfigRegistry.
+
+2.  ConfigRegistry creates a transaction controller, and registers the
+    transaction as a new bean.
+
+3.  Runtime configurations are copied to the transaction. The user can
+    create modules and set their attributes.
+
+4.  The configuration transaction is to be committed.
+
+5.  The validation process is performed.
+
+6.  After successful validation, the second phase commit begins.
+
+7.  Modules proposed to be destroyed are destroyed, and their service
+    instances are closed.
+
+8.  Runtime beans are registered via the registrator.
+
+9.  The transaction controller invokes the method getInstance on each
+    module.
+
+10. The transaction is committed, and resources are either closed or
+    released.
+
+**Validation failure scenario**
+
+The transaction proceeds as in the previous case until the validation
+process.
+
+1. If validation fails (that is to say, due to illegal input attribute
+   values or a dependency resolver failure), a ValidationException is
+   thrown and exposed to the user.
+
+2. The user can decide to reconfigure the transaction and commit again,
+   or abort the current transaction.
+
+3. On aborted transactions, TransactionController and JMXRegistrator are
+   properly closed.
+
+4. Unregistration event is sent to ConfigRegistry.
+
+Default module instances
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The configuration subsystem provides a way for modules to create
+default instances. A default instance is an instance of a module that
+is created at module bundle start-up (that is, when the module becomes
+visible to the configuration subsystem, for example when its bundle is
+activated in the OSGi environment). By default, no default instances
+are produced.
+
+A default instance does not differ from instances created later in the
+module life-cycle. The only difference is that the configuration for
+the default instance cannot be provided by the configuration
+subsystem. The module has to acquire the configuration for these
+instances on its own; it can be acquired from, for example,
+environment variables. After its creation, a default instance acts as
+a regular instance and fully participates in the configuration
+subsystem (it can be reconfigured or deleted in subsequent
+transactions).
+
diff --git a/docs/developer-guide/images/ConfigurationService-example1.png b/docs/developer-guide/images/ConfigurationService-example1.png
new file mode 100644 (file)
index 0000000..686a42f
Binary files /dev/null and b/docs/developer-guide/images/ConfigurationService-example1.png differ
diff --git a/docs/developer-guide/images/Get.png b/docs/developer-guide/images/Get.png
new file mode 100644 (file)
index 0000000..5c1f484
Binary files /dev/null and b/docs/developer-guide/images/Get.png differ
diff --git a/docs/developer-guide/images/L3FwdSample.png b/docs/developer-guide/images/L3FwdSample.png
new file mode 100644 (file)
index 0000000..c3f15f8
Binary files /dev/null and b/docs/developer-guide/images/L3FwdSample.png differ
diff --git a/docs/developer-guide/images/MonitorResponse.png b/docs/developer-guide/images/MonitorResponse.png
new file mode 100644 (file)
index 0000000..f3c50e5
Binary files /dev/null and b/docs/developer-guide/images/MonitorResponse.png differ
diff --git a/docs/developer-guide/images/OVSDB_Eclipse.png b/docs/developer-guide/images/OVSDB_Eclipse.png
new file mode 100644 (file)
index 0000000..4b33741
Binary files /dev/null and b/docs/developer-guide/images/OVSDB_Eclipse.png differ
diff --git a/docs/developer-guide/images/Put.png b/docs/developer-guide/images/Put.png
new file mode 100644 (file)
index 0000000..bfcaa87
Binary files /dev/null and b/docs/developer-guide/images/Put.png differ
diff --git a/docs/developer-guide/images/Screenshot8.png b/docs/developer-guide/images/Screenshot8.png
new file mode 100644 (file)
index 0000000..806aaa8
Binary files /dev/null and b/docs/developer-guide/images/Screenshot8.png differ
diff --git a/docs/developer-guide/images/Transaction.jpg b/docs/developer-guide/images/Transaction.jpg
new file mode 100644 (file)
index 0000000..258710a
Binary files /dev/null and b/docs/developer-guide/images/Transaction.jpg differ
diff --git a/docs/developer-guide/images/bgpcep/PathAttributesSerialization.png b/docs/developer-guide/images/bgpcep/PathAttributesSerialization.png
new file mode 100644 (file)
index 0000000..d4cca7d
Binary files /dev/null and b/docs/developer-guide/images/bgpcep/PathAttributesSerialization.png differ
diff --git a/docs/developer-guide/images/bgpcep/RIB.png b/docs/developer-guide/images/bgpcep/RIB.png
new file mode 100644 (file)
index 0000000..3c834c9
Binary files /dev/null and b/docs/developer-guide/images/bgpcep/RIB.png differ
diff --git a/docs/developer-guide/images/bgpcep/bgp-dependency-tree.png b/docs/developer-guide/images/bgpcep/bgp-dependency-tree.png
new file mode 100644 (file)
index 0000000..987afed
Binary files /dev/null and b/docs/developer-guide/images/bgpcep/bgp-dependency-tree.png differ
diff --git a/docs/developer-guide/images/bgpcep/pcep-dependency-tree.png b/docs/developer-guide/images/bgpcep/pcep-dependency-tree.png
new file mode 100644 (file)
index 0000000..3bc6b23
Binary files /dev/null and b/docs/developer-guide/images/bgpcep/pcep-dependency-tree.png differ
diff --git a/docs/developer-guide/images/bgpcep/pcep-parsing.png b/docs/developer-guide/images/bgpcep/pcep-parsing.png
new file mode 100644 (file)
index 0000000..9c2f4f2
Binary files /dev/null and b/docs/developer-guide/images/bgpcep/pcep-parsing.png differ
diff --git a/docs/developer-guide/images/bgpcep/validation.png b/docs/developer-guide/images/bgpcep/validation.png
new file mode 100644 (file)
index 0000000..df4b65f
Binary files /dev/null and b/docs/developer-guide/images/bgpcep/validation.png differ
diff --git a/docs/developer-guide/images/configuration.jpg b/docs/developer-guide/images/configuration.jpg
new file mode 100644 (file)
index 0000000..3b07a2b
Binary files /dev/null and b/docs/developer-guide/images/configuration.jpg differ
diff --git a/docs/developer-guide/images/netide/arch-engine.jpg b/docs/developer-guide/images/netide/arch-engine.jpg
new file mode 100644 (file)
index 0000000..9a67849
Binary files /dev/null and b/docs/developer-guide/images/netide/arch-engine.jpg differ
diff --git a/docs/developer-guide/images/neutron/odl-neutron-service-developer-architecture.png b/docs/developer-guide/images/neutron/odl-neutron-service-developer-architecture.png
new file mode 100644 (file)
index 0000000..8a03d6b
Binary files /dev/null and b/docs/developer-guide/images/neutron/odl-neutron-service-developer-architecture.png differ
diff --git a/docs/developer-guide/images/ocpplugin/ocp-sb-plugin.jpg b/docs/developer-guide/images/ocpplugin/ocp-sb-plugin.jpg
new file mode 100644 (file)
index 0000000..23cf919
Binary files /dev/null and b/docs/developer-guide/images/ocpplugin/ocp-sb-plugin.jpg differ
diff --git a/docs/developer-guide/images/ocpplugin/ocpagent-state-machine.jpg b/docs/developer-guide/images/ocpplugin/ocpagent-state-machine.jpg
new file mode 100644 (file)
index 0000000..42bdeee
Binary files /dev/null and b/docs/developer-guide/images/ocpplugin/ocpagent-state-machine.jpg differ
diff --git a/docs/developer-guide/images/ocpplugin/ocpplugin-state-machine.jpg b/docs/developer-guide/images/ocpplugin/ocpplugin-state-machine.jpg
new file mode 100644 (file)
index 0000000..f87a296
Binary files /dev/null and b/docs/developer-guide/images/ocpplugin/ocpplugin-state-machine.jpg differ
diff --git a/docs/developer-guide/images/ocpplugin/plugin-design.jpg b/docs/developer-guide/images/ocpplugin/plugin-design.jpg
new file mode 100644 (file)
index 0000000..03f5fd3
Binary files /dev/null and b/docs/developer-guide/images/ocpplugin/plugin-design.jpg differ
diff --git a/docs/developer-guide/images/openflowjava/500px-UdpChannelPipeline.png b/docs/developer-guide/images/openflowjava/500px-UdpChannelPipeline.png
new file mode 100644 (file)
index 0000000..84b7589
Binary files /dev/null and b/docs/developer-guide/images/openflowjava/500px-UdpChannelPipeline.png differ
diff --git a/docs/developer-guide/images/openflowjava/800px-Extensibility.png b/docs/developer-guide/images/openflowjava/800px-Extensibility.png
new file mode 100644 (file)
index 0000000..b4fc160
Binary files /dev/null and b/docs/developer-guide/images/openflowjava/800px-Extensibility.png differ
diff --git a/docs/developer-guide/images/openflowjava/800px-Extensibility2.png b/docs/developer-guide/images/openflowjava/800px-Extensibility2.png
new file mode 100644 (file)
index 0000000..1e2c97b
Binary files /dev/null and b/docs/developer-guide/images/openflowjava/800px-Extensibility2.png differ
diff --git a/docs/developer-guide/images/openflowjava/Library_lifecycle.png b/docs/developer-guide/images/openflowjava/Library_lifecycle.png
new file mode 100644 (file)
index 0000000..ae56d8c
Binary files /dev/null and b/docs/developer-guide/images/openflowjava/Library_lifecycle.png differ
diff --git a/docs/developer-guide/images/openstack_integration.png b/docs/developer-guide/images/openstack_integration.png
new file mode 100644 (file)
index 0000000..d58e698
Binary files /dev/null and b/docs/developer-guide/images/openstack_integration.png differ
diff --git a/docs/developer-guide/images/ovsdb-sb-active-connection.jpg b/docs/developer-guide/images/ovsdb-sb-active-connection.jpg
new file mode 100644 (file)
index 0000000..65a7179
Binary files /dev/null and b/docs/developer-guide/images/ovsdb-sb-active-connection.jpg differ
diff --git a/docs/developer-guide/images/ovsdb-sb-config-crud.jpg b/docs/developer-guide/images/ovsdb-sb-config-crud.jpg
new file mode 100644 (file)
index 0000000..973df6a
Binary files /dev/null and b/docs/developer-guide/images/ovsdb-sb-config-crud.jpg differ
diff --git a/docs/developer-guide/images/ovsdb-sb-oper-crud.jpg b/docs/developer-guide/images/ovsdb-sb-oper-crud.jpg
new file mode 100644 (file)
index 0000000..8de72b3
Binary files /dev/null and b/docs/developer-guide/images/ovsdb-sb-oper-crud.jpg differ
diff --git a/docs/developer-guide/images/ovsdb-sb-passive-connection.jpg b/docs/developer-guide/images/ovsdb-sb-passive-connection.jpg
new file mode 100644 (file)
index 0000000..635547f
Binary files /dev/null and b/docs/developer-guide/images/ovsdb-sb-passive-connection.jpg differ
diff --git a/docs/developer-guide/images/ovsdb/ODL_SFC_Architecture.png b/docs/developer-guide/images/ovsdb/ODL_SFC_Architecture.png
new file mode 100644 (file)
index 0000000..7401ef5
Binary files /dev/null and b/docs/developer-guide/images/ovsdb/ODL_SFC_Architecture.png differ
diff --git a/docs/developer-guide/images/packetcable-developer-wireshark.png b/docs/developer-guide/images/packetcable-developer-wireshark.png
new file mode 100644 (file)
index 0000000..806aaa8
Binary files /dev/null and b/docs/developer-guide/images/packetcable-developer-wireshark.png differ
diff --git a/docs/developer-guide/images/sfc-sf-selection-arch.png b/docs/developer-guide/images/sfc-sf-selection-arch.png
new file mode 100644 (file)
index 0000000..357bf59
Binary files /dev/null and b/docs/developer-guide/images/sfc-sf-selection-arch.png differ
diff --git a/docs/developer-guide/images/sfc/sb-rest-architecture.png b/docs/developer-guide/images/sfc/sb-rest-architecture.png
new file mode 100644 (file)
index 0000000..410622e
Binary files /dev/null and b/docs/developer-guide/images/sfc/sb-rest-architecture.png differ
diff --git a/docs/developer-guide/images/sfc/sfc-ovs-architecture.png b/docs/developer-guide/images/sfc/sfc-ovs-architecture.png
new file mode 100644 (file)
index 0000000..578bc5b
Binary files /dev/null and b/docs/developer-guide/images/sfc/sfc-ovs-architecture.png differ
diff --git a/docs/developer-guide/images/topoprocessing/Inventory_Rendering_Use_case.png b/docs/developer-guide/images/topoprocessing/Inventory_Rendering_Use_case.png
new file mode 100644 (file)
index 0000000..00a4cf4
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/Inventory_Rendering_Use_case.png differ
diff --git a/docs/developer-guide/images/topoprocessing/Inventory_model_listener_diagram.png b/docs/developer-guide/images/topoprocessing/Inventory_model_listener_diagram.png
new file mode 100644 (file)
index 0000000..b01e369
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/Inventory_model_listener_diagram.png differ
diff --git a/docs/developer-guide/images/topoprocessing/LinkComputation.png b/docs/developer-guide/images/topoprocessing/LinkComputation.png
new file mode 100644 (file)
index 0000000..9f97411
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/LinkComputation.png differ
diff --git a/docs/developer-guide/images/topoprocessing/LinkComputationFlowDiagram.png b/docs/developer-guide/images/topoprocessing/LinkComputationFlowDiagram.png
new file mode 100644 (file)
index 0000000..a1c7901
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/LinkComputationFlowDiagram.png differ
diff --git a/docs/developer-guide/images/topoprocessing/ModelAdapter.png b/docs/developer-guide/images/topoprocessing/ModelAdapter.png
new file mode 100644 (file)
index 0000000..0f0f8d2
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/ModelAdapter.png differ
diff --git a/docs/developer-guide/images/topoprocessing/Network_topology_model_flow_diagram.png b/docs/developer-guide/images/topoprocessing/Network_topology_model_flow_diagram.png
new file mode 100644 (file)
index 0000000..573e954
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/Network_topology_model_flow_diagram.png differ
diff --git a/docs/developer-guide/images/topoprocessing/TopologyRequestHandler_classesRelationship.png b/docs/developer-guide/images/topoprocessing/TopologyRequestHandler_classesRelationship.png
new file mode 100644 (file)
index 0000000..70b8a78
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/TopologyRequestHandler_classesRelationship.png differ
diff --git a/docs/developer-guide/images/topoprocessing/wrapper.png b/docs/developer-guide/images/topoprocessing/wrapper.png
new file mode 100644 (file)
index 0000000..1a7e073
Binary files /dev/null and b/docs/developer-guide/images/topoprocessing/wrapper.png differ
diff --git a/docs/developer-guide/images/ttp-screen1-basic-auth.png b/docs/developer-guide/images/ttp-screen1-basic-auth.png
new file mode 100644 (file)
index 0000000..6714e7c
Binary files /dev/null and b/docs/developer-guide/images/ttp-screen1-basic-auth.png differ
diff --git a/docs/developer-guide/images/ttp-screen2-applied-basic-auth.png b/docs/developer-guide/images/ttp-screen2-applied-basic-auth.png
new file mode 100644 (file)
index 0000000..b97546f
Binary files /dev/null and b/docs/developer-guide/images/ttp-screen2-applied-basic-auth.png differ
diff --git a/docs/developer-guide/images/ttp-screen3-sent-put.png b/docs/developer-guide/images/ttp-screen3-sent-put.png
new file mode 100644 (file)
index 0000000..1667d64
Binary files /dev/null and b/docs/developer-guide/images/ttp-screen3-sent-put.png differ
diff --git a/docs/developer-guide/images/ttp-screen4-get-json.png b/docs/developer-guide/images/ttp-screen4-get-json.png
new file mode 100644 (file)
index 0000000..f2ceaf1
Binary files /dev/null and b/docs/developer-guide/images/ttp-screen4-get-json.png differ
diff --git a/docs/developer-guide/images/ttp-screen5-get-xml.png b/docs/developer-guide/images/ttp-screen5-get-xml.png
new file mode 100644 (file)
index 0000000..eaadc2d
Binary files /dev/null and b/docs/developer-guide/images/ttp-screen5-get-xml.png differ
index 29dca06a9a0aa4143793047a5c2092217af48a88..4c5c3e5bde5baa31b5ba37d80673590ce100e63d 100644 (file)
@@ -24,12 +24,14 @@ Project-specific Developer Guides
    controller
    didm-developer-guide
    dlux
+   infrautils-developer-guide
    iotdm-developer-guide
    l2switch-developer-guide
    lacp-developer-guide
+   ../user-guide/lisp-flow-mapping-user-guide
+   nemo-developer-guide
    netconf-developer-guide
    network-intent-composition-(nic)-developer-guide
-   network-modeling-(nemo)
    netide-developer-guide
    neutron-service-developer-guide
    neutron-northbound
diff --git a/docs/developer-guide/infrautils-developer-guide.rst b/docs/developer-guide/infrautils-developer-guide.rst
new file mode 100644 (file)
index 0000000..c499b40
--- /dev/null
@@ -0,0 +1,24 @@
+Infrautils
+==========
+
+Overview
+--------
+
+Infrautils offers various utilities and infrastructure for other projects to use:
+
+Counters Infrastructure
+-----------------------
+Creating, updating and outputting counters is a basic tool for
+debugging and generating statistics in any system. We have developed a
+counter infrastructure integrated into ODL which has already been used
+successfully with multiple products, and more recently in debugging
+and fixing the OpenFlow plugin/Java and LACP modules. See
+`Getting started with Counters <https://wiki.opendaylight.org/view/Getting_started_with_Counters>`__.
+
+Async Infrastructure
+-----------------------
+The decision to split a service into one or more threads with
+asynchronous interactions between them frequently depends on
+constraints learned late in the development or even the deployment
+cycle. To allow flexibility in making these decisions, we have
+developed a configuration-driven infrastructure that allows agnostic
+code to be written under generic constraints, which can later be
+customized according to the required constraints. See
+`Getting started with Async <https://git.opendaylight.org/gerrit/gitweb?p=infrautils.git;a=tree;f=samples/sample-async;h=dedd664da4a1bcfbe62261df73d19044d334f0b9;hb=refs/heads/master>`__.
+
diff --git a/docs/developer-guide/iotdm-developer-guide.rst b/docs/developer-guide/iotdm-developer-guide.rst
new file mode 100644 (file)
index 0000000..45b17d6
--- /dev/null
@@ -0,0 +1,72 @@
+IoTDM Developer Guide
+=====================
+
+Overview
+--------
+
+The Internet of Things Data Management (IoTDM) on OpenDaylight project
+is about developing a data-centric middleware that acts as a oneM2M
+compliant IoT Data Broker and enables authorized applications to
+retrieve IoT data uploaded by any device. The OpenDaylight platform is
+used to implement the oneM2M data store, which models a hierarchical
+containment tree, where each node in the tree represents a oneM2M
+resource.
+Typically, IoT devices and applications interact with the resource tree
+over standard protocols such as CoAP, MQTT, and HTTP. Initially, the
+oneM2M resource tree is used by applications to retrieve data. Possible
+applications are inventory or device management systems or big data
+analytic systems designed to make sense of the collected data. But, at
+some point, applications will need to configure the devices. Features
+and tools will have to be provided to enable configuration of the
+devices based on applications responding to user input, network
+conditions, or some set of programmable rules or policies possibly
+triggered by the receipt of data collected from the devices. The
+OpenDaylight platform, with its rich unique cross-section of SDN
+capabilities, NFV, and now IoT device and application management, can be
+bundled with a targeted set of features and deployed anywhere in the
+network to give the network service provider ultimate control. Depending
+on the use case, the OpenDaylight IoT platform can be configured with
+only IoT data collection capabilities where it is deployed near the IoT
+devices and its footprint needs to be small, or it can be configured to
+run as a highly scaled up and out distributed cluster with IoT, SDN and
+NFV functions enabled and deployed in a high traffic data center.
+
+oneM2M Architecture
+-------------------
+
+The architecture provides a framework that enables the support of the
+oneM2M resource containment tree. The onem2m-core implements the MDSAL
+RPCs defined in the onem2m-api YANG files. These RPCs enable oneM2M
+resources to be created, read, updated, and deleted (CRUD), and also
+enables the management of subscriptions. When resources are CRUDed, the
+onem2m-notifier issues oneM2M notification events to interested
+subscribers. TS0001: oneM2M Functional Architecture and TS0004: oneM2M
+Service Layer Protocol are great reference documents to learn details of
+oneM2M resource types, message flow, formats, and CRUD/N semantics. Both
+of these specifications can be found at
+http://onem2m.org/technical/published-documents
+
+The oneM2M resource tree is modeled in YANG and essentially is a
+meta-model for the tree. The oneM2M wire protocols allow the resource
+tree to be constructed via HTTP or CoAP messages that populate nodes in
+the tree with resource specific attributes. Each oneM2M resource type
+has semantic behaviour associated with it. For example: a container
+resource has attributes which control quotas on how many and how big the
+collection of data or content instance objects that can exist below it
+in the tree. Depending on the resource type, the oneM2M core software
+implements and enforces the resource type specific rules to ensure a
+well-behaved resource tree.
+
+The resource tree can be accessed simultaneously by many concurrent
+applications wishing to manage or access the tree, while many devices
+report new data or sensor readings into their appropriate place in the
+tree.
+
+Key APIs and Interfaces
+-----------------------
+
+The APIs to access the oneM2M datastore are well documented in TS0004
+(referenced above), available on onem2m.org.
+
+RESTCONF is available too, but generally HTTP and CoAP are used to
+access the oneM2M data tree.
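As a hedged illustration of the HTTP access pattern, the snippet below composes a oneM2M container-create request. The CSE base name, credentials, and attribute short names follow TS0004 conventions but are assumptions for this sketch, not values taken from this guide; Python is used purely for illustration.

```python
import json

def build_container_create(cse_base, name, max_instances=10):
    """Compose a oneM2M-style container-create request (illustrative only).

    The path, header names, and attribute short names ("rn", "mni")
    follow TS0004 conventions; they are not verbatim from this guide.
    """
    path = "/%s" % cse_base
    headers = {
        # ty=3 indicates the container resource type in the HTTP binding
        "Content-Type": "application/json;ty=3",
        "X-M2M-Origin": "admin:admin",  # assumed originator credentials
    }
    body = json.dumps({"m2m:cnt": {"rn": name, "mni": max_instances}})
    return path, headers, body

path, headers, body = build_container_create("InCSE1", "sensorData")
```

The resulting tuple can then be handed to any HTTP client to POST against the IoTDM broker.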
+
diff --git a/docs/developer-guide/lacp-developer-guide.rst b/docs/developer-guide/lacp-developer-guide.rst
new file mode 100644 (file)
index 0000000..52a182d
--- /dev/null
@@ -0,0 +1,116 @@
+LACP Developer Guide
+====================
+
+LACP Overview
+-------------
+
+The OpenDaylight LACP (Link Aggregation Control Protocol) project can be
+used to aggregate multiple links between OpenDaylight controlled network
+switches and LACP enabled legacy switches or hosts operating in active
+LACP mode.
+
+OpenDaylight LACP passively negotiates automatic bundling of multiple
+links to form a single LAG (Link Aggregation Group). LAGs are realised
+in the OpenDaylight controlled switches using OpenFlow 1.3+ group table
+functionality.
+
+LACP Architecture
+-----------------
+
+-  **inventory**
+
+   -  Maintains the list of OpenDaylight controlled switches and port
+      information
+
+   -  Maintains the list of LAGs created and the physical ports that are
+      part of each LAG
+
+   -  Interacts with MD-SAL to update LACP related information
+
+-  **inventorylistener**
+
+   -  This module interacts with MD-SAL for receiving
+      node/node-connector notifications
+
+-  **flow**
+
+   -  Programs the switch to punt LACP PDU (Protocol Data Unit) to
+      controller
+
+-  **packethandler**
+
+   -  Receives and transmits LACP PDUs to the LACP enabled endpoint
+
+   -  Provides infrastructure services for group table programming
+
+-  **core**
+
+   -  Performs LACP state machine processing
+
+How LAG programming is implemented
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A LAG representing the aggregated physical ports is realized in the
+OpenDaylight controlled switches by creating a group table entry (the
+group table is supported from OpenFlow 1.3 onwards). The group table
+entry has the group type **Select** and an action referring to the
+aggregated physical ports. Any data traffic to be sent out through the
+LAG can be sent through the **group entry** available for the LAG.
+
+Suppose there are ports P1-P8 in a node. When the LACP project is
+installed, a group table entry for handling broadcast traffic is
+automatically created on all the switches that have registered with the
+controller.
+
++--------------------------+--------------------------+--------------------------+
+| GroupID                  | GroupType                | EgressPorts              |
++==========================+==========================+==========================+
+| <B’castgID>              | ALL                      | P1,P2,…P8                |
++--------------------------+--------------------------+--------------------------+
+
+Now assume P1 and P2 become part of LAG1. The group table would be
+programmed as follows:
+
++--------------------------+--------------------------+--------------------------+
+| GroupID                  | GroupType                | EgressPorts              |
++==========================+==========================+==========================+
+| <B’castgID>              | ALL                      | P3,P4,…P8                |
++--------------------------+--------------------------+--------------------------+
+| <LAG1>                   | SELECT                   | P1,P2                    |
++--------------------------+--------------------------+--------------------------+
+
+When a second LAG, LAG2, is formed with ports P3 and P4, the group table
+becomes:
+
++--------------------------+--------------------------+--------------------------+
+| GroupID                  | GroupType                | EgressPorts              |
++==========================+==========================+==========================+
+| <B’castgID>              | ALL                      | P5,P6,…P8                |
++--------------------------+--------------------------+--------------------------+
+| <LAG1>                   | SELECT                   | P1,P2                    |
++--------------------------+--------------------------+--------------------------+
+| <LAG2>                   | SELECT                   | P3,P4                    |
++--------------------------+--------------------------+--------------------------+
+
+How applications can program OpenFlow flows using LACP-created LAG groups
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+OpenDaylight controller modules can get the information of LAG by
+listening/querying the LACP Aggregator datastore.
+
+When any application receives packets, it can check if the ingress port
+is part of a LAG by verifying the LAG Aggregator reference
+(lacp-agg-ref) for the source nodeConnector that the OpenFlow plugin
+provides.
+
+When applications want to add flows to egress out of the LAG, they must
+use the group entry corresponding to the LAG.
+
+From the above example, for a flow to egress out of LAG1,
+
+**add-flow eth\_type=<xxxx>,ip\_dst=<x.x.x.x>,actions=output:<LAG1>**
+
+Similarly, when applications want traffic to be broadcast, they should
+use the group table entries **<B’castgID>,<LAG1>,<LAG2>** in the output
+action.
+
+For all applications, the group table information is accessible from the
+LACP Aggregator datastore.
+
diff --git a/docs/developer-guide/nemo-developer-guide.rst b/docs/developer-guide/nemo-developer-guide.rst
new file mode 100644 (file)
index 0000000..3dd1951
--- /dev/null
@@ -0,0 +1,69 @@
+NEtwork MOdeling (NEMO)
+=======================
+
+Overview
+--------
+
+The NEMO engine provides REST APIs to express and manage intent. With
+this northbound API, users can query which intents have been handled
+successfully and which types have been predefined.
+
+NEMO Architecture
+-----------------
+
+The NEMO project provides three developer-facing features:
+
+* ``odl-nemo-engine``: the core engine that handles intent.
+
+* ``odl-nemo-openflow-renderer``: a southbound renderer that translates intent into flow
+  table entries in devices supporting the OpenFlow protocol.
+
+* ``odl-nemo-cli-render``: another southbound renderer that translates intent into forwarding
+  table entries in devices supporting traditional protocols.
+
+Key APIs and Interfaces
+-----------------------
+
+The NEMO project provides four basic REST methods:
+
+* PUT: store information expressed in the NEMO model directly, without
+  processing by the NEMO engine.
+
+* POST: information expressed in the NEMO model is handled by the NEMO engine
+  and translated into southbound configuration.
+
+* GET: obtain the data stored in the data store.
+
+* DELETE: delete the data in the data store.
+
+NEMO Intent API
+~~~~~~~~~~~~~~~
+
+NEMO provides several RPCs to handle the user's intent. All RPCs use the POST method.
+
+-  ``http://{controller-ip}:8181/restconf/operations/nemo-intent:register-user``: a REST API
+   to register a new user. Registration is the first, mandatory step before expressing intent.
+
+-  ``http://{controller-ip}:8181/restconf/operations/nemo-intent:transaction-begin``: a REST
+   API to begin a transaction. Intents within the same transaction are handled together.
+
+-  ``http://{controller-ip}:8181/restconf/operations/nemo-intent:transaction-end``: a REST API
+   to end a transaction. Intents within the same transaction are handled together.
+
+-  ``http://{controller-ip}:8181/restconf/operations/nemo-intent:structure-style-nemo-update``: a
+   REST API to create, import or update intent in a structure style, that is, the user expresses
+   the structure of the intent in the JSON body.
+
+-  ``http://{controller-ip}:8181/restconf/operations/nemo-intent:structure-style-nemo-delete``: a
+   REST API to delete intent in a structure style.
+
+-  ``http://{controller-ip}:8181/restconf/operations/nemo-intent:language-style-nemo-request``: a REST
+   API to create, import, update and delete intent in a language style, that is, the user expresses
+   intent with NEMO script. This interface can also be used to query which intents have been
+   handled successfully.
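As a hedged sketch of driving these RPCs from a script, the helper below composes the URL and JSON body for a nemo-intent RPC. The input field names are guesses at the shape, not the authoritative schema (consult the apidoc explorer for the real models); Python is used purely for illustration.

```python
import json

# Placeholder host, as in the URLs above; replace with the controller IP.
BASE = "http://{controller-ip}:8181/restconf/operations"

def nemo_rpc(rpc_name, rpc_input):
    """Compose the URL and JSON body for a nemo-intent RPC (illustrative).

    All nemo-intent RPCs are invoked with POST; the input structure here
    is an assumed shape for demonstration only.
    """
    url = "%s/nemo-intent:%s" % (BASE, rpc_name)
    body = json.dumps({"input": rpc_input})
    return url, body

# Registration is the first, mandatory step before expressing intent.
url, body = nemo_rpc("register-user",
                     {"user-name": "demo", "user-password": "demo"})
```

The returned pair can then be POSTed with any HTTP client, followed by `transaction-begin`, the intent updates, and `transaction-end`.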
+
+API Reference Documentation
+---------------------------
+
+Go to ``http://${IPADDRESS}:8181/apidoc/explorer/index.html``. There, users
+can see many useful APIs to deploy or query intent.
+
diff --git a/docs/developer-guide/netconf-developer-guide.rst b/docs/developer-guide/netconf-developer-guide.rst
new file mode 100644 (file)
index 0000000..f10b733
--- /dev/null
@@ -0,0 +1,237 @@
+NETCONF Developer Guide
+=======================
+
+.. note::
+
+    Reading the NETCONF section in the User Guide is likely useful as it
+    contains an overview of NETCONF in OpenDaylight and a how-to for
+    spawning and configuring NETCONF connectors.
+
+This chapter is recommended for application developers who want to
+interact with mounted NETCONF devices from their application code. It
+tries to demonstrate all the use cases from the user guide, but now at
+the code level instead of via RESTCONF. One important difference is the
+demonstration of NETCONF notifications and notification listeners. The
+notifications were not shown using RESTCONF because **RESTCONF does not
+support notifications from mounted NETCONF devices.**
+
+.. note::
+
+    It may also be useful to read the generic `OpenDaylight MD-SAL app
+    development
+    tutorial <https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:MD-SAL_App_Tutorial>`__
+    before diving into this chapter. This guide assumes awareness of
+    basic OpenDaylight application development.
+
+Sample app overview
+-------------------
+
+All the examples presented here are implemented by a sample OpenDaylight
+application called **ncmount** in the ``coretutorials`` OpenDaylight
+project. It can be found on the github mirror of OpenDaylight’s
+repositories:
+
+-  https://github.com/opendaylight/coretutorials/tree/stable/lithium/ncmount
+
+or checked out from the official OpenDaylight repository:
+
+-  https://git.opendaylight.org/gerrit/#/admin/projects/coretutorials
+
+**The application was built using the `project startup maven
+archetype <https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Startup_Project_Archetype>`__
+and demonstrates how to:**
+
+-  preconfigure connectors to NETCONF devices
+
+-  retrieve MountPointService (registry of available mount points)
+
+-  listen and react to changing connection state of netconf-connector
+
+-  add custom device YANG models to the app and work with them
+
+-  read data from device in binding aware format (generated java APIs
+   from provided YANG models)
+
+-  write data into device in binding aware format
+
+-  trigger and listen to NETCONF notifications in binding aware format
+
+Detailed information about the structure of the application can be found
+at:
+https://wiki.opendaylight.org/view/Controller_Core_Functionality_Tutorials:Tutorials:Netconf_Mount
+
+.. note::
+
+    The code in ncmount is fully **binding aware** (works with generated
+    java APIs from provided YANG models). However it is also possible to
+    perform the same operations in **binding independent** manner.
+
+NcmountProvider
+~~~~~~~~~~~~~~~
+
+The NcmountProvider class (found in NcmountProvider.java) is the central
+point of the ncmount application and all the application logic is
+contained there. The following sections will detail its most interesting
+pieces.
+
+Retrieve MountPointService
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The MountPointService is a central registry of all available mount
+points in OpenDaylight. It is just another MD-SAL service and is
+available from the ``session`` attribute passed by
+``onSessionInitiated`` callback:
+
+::
+
+    @Override
+    public void onSessionInitiated(ProviderContext session) {
+        LOG.info("NcmountProvider Session Initiated");
+
+        // Get references to the data broker and mount service
+        this.mountService = session.getSALService(MountPointService.class);
+
+        ...
+
+        }
+    }
+
+Listen for connection state changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+It is important to know when a mount point appears, when it is fully
+connected and when it is disconnected or removed. The exact states of a
+mount point are:
+
+-  Connected
+
+-  Connecting
+
+-  Unable to connect
+
+To receive this kind of information, an application has to register
+itself as a notification listener for the preconfigured netconf-topology
+subtree in MD-SAL’s datastore. This can be performed in the
+``onSessionInitiated`` callback as well:
+
+::
+
+    @Override
+    public void onSessionInitiated(ProviderContext session) {
+
+        ...
+
+        this.dataBroker = session.getSALService(DataBroker.class);
+
+        // Register ourselves as the REST API RPC implementation
+        this.rpcReg = session.addRpcImplementation(NcmountService.class, this);
+
+        // Register ourselves as data change listener for changes on Netconf
+        // nodes. Netconf nodes are accessed via "Netconf Topology" - a special
+        // topology that is created by the system infrastructure. It contains
+        // all Netconf nodes the Netconf connector knows about. NETCONF_TOPO_IID
+        // is equivalent to the following URL:
+        // .../restconf/operational/network-topology:network-topology/topology/topology-netconf
+        if (dataBroker != null) {
+            this.dclReg = dataBroker.registerDataChangeListener(LogicalDatastoreType.OPERATIONAL,
+                    NETCONF_TOPO_IID.child(Node.class),
+                    this,
+                    DataChangeScope.SUBTREE);
+        }
+    }
+
+The implementation of the callback invoked by MD-SAL when the data
+changes can be found in the
+``onDataChanged(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject>
+change)`` callback of the `NcmountProvider
+class <https://github.com/opendaylight/coretutorials/blob/stable/lithium/ncmount/impl/src/main/java/ncmount/impl/NcmountProvider.java>`__.
+
+Reading data from the device
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The first step when trying to interact with the device is to get the
+exact mount point instance (identified by an instance identifier) from
+the MountPointService:
+
+::
+
+    @Override
+    public Future<RpcResult<ShowNodeOutput>> showNode(ShowNodeInput input) {
+        LOG.info("showNode called, input {}", input);
+
+        // Get the mount point for the specified node
+        // Equivalent to '.../restconf/<config | operational>/opendaylight-inventory:nodes/node/<node-name>/yang-ext:mount/'
+        // Note that we can read both config and operational data from the same
+        // mount point
+        final Optional<MountPoint> xrNodeOptional = mountService.getMountPoint(NETCONF_TOPO_IID
+                .child(Node.class, new NodeKey(new NodeId(input.getNodeName()))));
+
+        Preconditions.checkArgument(xrNodeOptional.isPresent(),
+                "Unable to locate mountpoint: %s, not mounted yet or not configured",
+                input.getNodeName());
+        final MountPoint xrNode = xrNodeOptional.get();
+
+        ....
+    }
+
+.. note::
+
+    The triggering method in this case is called ``showNode``. It is a
+    YANG-defined RPC and NcmountProvider serves as an MD-SAL RPC
+    implementation among other things. This means that ``showNode`` can
+    be triggered using RESTCONF.
+
+The next step is to retrieve an instance of the ``DataBroker`` API from
+the mount point and start a read transaction:
+
+::
+
+    @Override
+    public Future<RpcResult<ShowNodeOutput>> showNode(ShowNodeInput input) {
+
+        ...
+
+        // Get the DataBroker for the mounted node
+        final DataBroker xrNodeBroker = xrNode.getService(DataBroker.class).get();
+        // Start a new read only transaction that we will use to read data
+        // from the device
+        final ReadOnlyTransaction xrNodeReadTx = xrNodeBroker.newReadOnlyTransaction();
+
+        ...
+    }
+
+Finally, it is possible to perform the read operation:
+
+::
+
+    @Override
+    public Future<RpcResult<ShowNodeOutput>> showNode(ShowNodeInput input) {
+
+        ...
+
+        InstanceIdentifier<InterfaceConfigurations> iid =
+                InstanceIdentifier.create(InterfaceConfigurations.class);
+
+        Optional<InterfaceConfigurations> ifConfig;
+        try {
+            // Read from a transaction is asynchronous, but a simple
+            // get/checkedGet makes the call synchronous
+            ifConfig = xrNodeReadTx.read(LogicalDatastoreType.CONFIGURATION, iid).checkedGet();
+        } catch (ReadFailedException e) {
+            throw new IllegalStateException("Unexpected error reading data from " + input.getNodeName(), e);
+        }
+
+        ...
+    }
+
+The instance identifier is used here again to specify a subtree to read
+from the device. At this point the application can process the data as
+it sees fit. The ncmount app transforms the data into its own format and
+returns it from ``showNode``.
+
+.. note::
+
+    More information can be found in the source code of the ncmount
+    sample app and on the wiki:
+    https://wiki.opendaylight.org/view/Controller_Core_Functionality_Tutorials:Tutorials:Netconf_Mount
+
diff --git a/docs/developer-guide/netide-developer-guide.rst b/docs/developer-guide/netide-developer-guide.rst
new file mode 100644 (file)
index 0000000..42d6054
--- /dev/null
@@ -0,0 +1,299 @@
+NetIDE Developer Guide
+======================
+
+Overview
+--------
+
+The NetIDE Network Engine enables portability and cooperation inside a
+single network by using a client/server multi-controller SDN
+architecture. Separate "Client SDN Controllers" host the various SDN
+Applications with their access to the actual physical network abstracted
+and coordinated through a single "Server SDN Controller", in this
+instance OpenDaylight. This allows applications written for
+Ryu/Floodlight/Pyretic to execute on OpenDaylight managed
+infrastructure.
+
+The "Network Engine" is modular by design:
+
+-  An OpenDaylight plugin, "shim", sends/receives messages to/from
+   subscribed SDN Client Controllers. This consumes the ODL OpenFlow
+   Plugin
+
+-  An initial suite of SDN Client Controller "Backends": Floodlight,
+   Ryu, Pyretic. Further controllers may be added over time as the
+   engine is extensible.
+
+The Network Engine provides a compatibility layer capable of translating
+calls of the network applications running on top of the client
+controllers, into calls for the server controller framework. The
+communication between the client and the server layers is achieved
+through the NetIDE intermediate protocol, which is an application-layer
+protocol on top of TCP that transmits the network control/management
+messages from the client to the server controller and vice-versa.
+Between client and server controller sits the Core Layer which also
+"speaks" the intermediate protocol. The core layer implements three main
+functions:
+
+i.   interfacing with the client backends and server shim, controlling
+     the lifecycle of controllers as well as modules in them,
+
+ii.  orchestrating the execution of individual modules (in one client
+     controller) or complete applications (possibly spread across
+     multiple client controllers),
+
+iii. interfacing with the tools.
+
+.. figure:: ./images/netide/arch-engine.jpg
+   :alt: NetIDE Network Engine Architecture
+
+   NetIDE Network Engine Architecture
+
+NetIDE Intermediate Protocol
+----------------------------
+
+The Intermediate Protocol serves several needs; it has to:
+
+i.   carry control messages between core and shim/backend, e.g., to
+     start up/take down a particular module, providing unique
+     identifiers for modules,
+
+ii.  carry event and action messages between shim, core, and backend,
+     properly demultiplexing such messages to the right module based on
+     identifiers,
+
+iii. encapsulate messages specific to a particular SBI protocol version
+     (e.g., OpenFlow 1.X, NETCONF, etc.) towards the client controllers
+     with proper information to recognize these messages as such.
+
+The NetIDE packages can be added as dependencies in Maven projects by
+putting the following code in the *pom.xml* file.
+
+::
+
+    <dependency>
+        <groupId>org.opendaylight.netide</groupId>
+        <artifactId>api</artifactId>
+        <version>${NETIDE_VERSION}</version>
+    </dependency>
+
+The current stable version for NetIDE is ``0.1.0-Beryllium``.
+
+Protocol specification
+~~~~~~~~~~~~~~~~~~~~~~
+
+Messages of the NetIDE protocol contain two basic elements: the NetIDE
+header and the data (or payload). The NetIDE header, described below, is
+placed before the payload and serves as the communication and control
+link between the different components of the Network Engine. The payload
+can contain management messages, used by the components of the Network
+Engine to exchange relevant information, or control/configuration
+messages (such as OpenFlow, NETCONF, etc.) crossing the Network Engine
+generated by either network application modules or by the network
+elements.
+
+The NetIDE header is defined as follows:
+
+::
+
+     0                   1                   2                   3
+     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    |   netide_ver  |      type     |             length            |
+    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    |                         xid                                   |
+    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    |                       module_id                               |
+    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+    |                                                               |
+    +                     datapath_id                               +
+    |                                                               |
+    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+
+where each tick mark represents one bit position. Alternatively, in a
+C-style coding format, the NetIDE header can be represented with the
+following structure:
+
+::
+
+    struct netide_header {
+        uint8_t  netide_ver;
+        uint8_t  type;
+        uint16_t length;
+        uint32_t xid;
+        uint32_t module_id;
+        uint64_t datapath_id;
+    };
+
+-  ``netide_ver`` is the version of the NetIDE protocol (the current
+   version is v1.2, which is identified with value 0x03).
+
+-  ``length`` is the total length of the payload in bytes.
+
+-  ``type`` contains a code that indicates the type of the message
+   according with the following values:
+
+   ::
+
+       enum type {
+           NETIDE_HELLO = 0x01 ,
+           NETIDE_ERROR = 0x02 ,
+           NETIDE_MGMT = 0x03 ,
+           MODULE_ANNOUNCEMENT = 0x04 ,
+           MODULE_ACKNOWLEDGE = 0x05 ,
+           NETIDE_HEARTBEAT = 0x06 ,
+           NETIDE_OPENFLOW = 0x11 ,
+           NETIDE_NETCONF = 0x12 ,
+           NETIDE_OPFLEX = 0x13
+       };
+
+-  ``datapath_id`` is a 64-bit field that uniquely identifies the
+   network elements.
+
+-  ``module_id`` is a 32-bit field that uniquely identifies Backends
+   and application modules running on top of each client controller. The
+   composition mechanism in the core layer leverages this field to
+   implement the correct execution flow of these modules.
+
+-  ``xid`` is the transaction identifier associated with each message.
+   Replies must use the same value to facilitate the pairing.
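Packing this 20-byte header is mechanical once the field order and network byte order are fixed. The sketch below is an illustration of the wire layout shown in the diagram, not part of any official NetIDE library; Python is used purely for illustration.

```python
import struct

# Big-endian layout from the diagram:
# netide_ver(1) | type(1) | length(2) | xid(4) | module_id(4) | datapath_id(8)
HEADER_FMT = "!BBHIIQ"
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 20 bytes

def pack_header(netide_ver, msg_type, length, xid, module_id, datapath_id):
    return struct.pack(HEADER_FMT, netide_ver, msg_type, length, xid,
                       module_id, datapath_id)

def unpack_header(raw):
    return struct.unpack(HEADER_FMT, raw[:HEADER_LEN])

NETIDE_HEARTBEAT = 0x06
# A heartbeat: zero-length payload, module_id identifying the Backend.
hb = pack_header(0x03, NETIDE_HEARTBEAT, 0, 0, 42, 0)
```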
+
+Module announcement
+~~~~~~~~~~~~~~~~~~~
+
+The first operation performed by a Backend is registering itself and the
+modules that it is running to the Core. This is done by using the
+``MODULE_ANNOUNCEMENT`` and ``MODULE_ACKNOWLEDGE`` message types. As a
+result of this process, each Backend and application module can be
+recognized by the Core through an identifier (the ``module_id``) placed
+in the NetIDE header. First, a Backend registers itself by using the
+following schema: ``backend-<platform_name>-<pid>``.
+
+For example, a Ryu Backend will register by using the name
+``backend-ryu-12345`` in the message, where 12345 is the process ID of
+the registering instance of the Ryu platform. The format of the message
+is the following:
+
+::
+
+    struct NetIDE_message {
+        netide_ver = 0x03
+        type = MODULE_ANNOUNCEMENT
+        length = len("backend-<platform_name>-<pid>")
+        xid = 0
+        module_id = 0
+        datapath_id = 0
+        data = "backend-<platform_name>-<pid>"
+    }
+
+The answer generated by the Core will include a ``module_id`` number and
+the Backend name in the payload (the same indicated in the
+``MODULE_ANNOUNCEMENT`` message):
+
+::
+
+    struct NetIDE_message {
+        netide_ver = 0x03
+        type = MODULE_ACKNOWLEDGE
+        length = len("backend-<platform_name>-<pid>")
+        xid = 0
+        module_id = MODULE_ID
+        datapath_id = 0
+        data = "backend-<platform_name>-<pid>"
+    }
+
+Once a Backend has successfully registered itself, it can start
+registering its modules with the same procedure described above by
+indicating the name of the module in the data (e.g. data="Firewall").
+From this point on, the Backend will insert its own ``module_id`` in the
+header of the messages it generates (e.g. heartbeat, hello messages,
+OpenFlow echo messages from the client controllers, etc.). Otherwise, it
+will encapsulate the control/configuration messages (e.g. FlowMod,
+PacketOut, FeatureRequest, NETCONF request, etc.) generated by network
+application modules with the specific ``module_id``\ s.
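Putting the header and payload together, a Backend's announcement message could be assembled as below. This is a self-contained sketch of the framing only (0x04 is MODULE_ANNOUNCEMENT from the enum above); it is not taken from any NetIDE implementation, and Python is used purely for illustration.

```python
import struct

MODULE_ANNOUNCEMENT = 0x04

def build_announcement(platform, pid):
    """Build a complete MODULE_ANNOUNCEMENT message: the 20-byte
    big-endian header followed by the backend name as payload."""
    name = ("backend-%s-%d" % (platform, pid)).encode("ascii")
    # header fields: netide_ver, type, length, xid, module_id, datapath_id
    header = struct.pack("!BBHIIQ", 0x03, MODULE_ANNOUNCEMENT,
                         len(name), 0, 0, 0)
    return header + name

msg = build_announcement("ryu", 12345)
```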
+
+Heartbeat
+~~~~~~~~~
+
+The heartbeat mechanism has been introduced after the adoption of the
+ZeroMQ messaging queuing library to transmit the NetIDE messages.
+Unfortunately, the ZeroMQ library does not offer any mechanism to find
+out about disrupted connections (and also completely unresponsive
+peers). This limitation of the ZeroMQ library can be an issue for the
+Core’s composition mechanism and for the tools connected to the Network
+Engine, as they cannot understand when an client controller disconnects
+or crashes. As a consequence, Backends must periodically send (let’s say
+every 5 seconds) a "heartbeat" message to the Core. If the Core does not
+receive at least one "heartbeat" message from the Backend within a
+certain timeframe, the Core considers it disconnected, removes all the
+related data from its memory structures and informs the relevant tools.
+The format of the message is the following:
+
+::
+
+    struct NetIDE_message {
+        netide_ver = 0x03
+        type = NETIDE_HEARTBEAT
+        length = 0
+        xid = 0
+        module_id = backend_id
+        datapath_id = 0
+        data = 0
+    }
+
+Handshake
+~~~~~~~~~
+
+Upon a successful connection with the Core, the client controller must
+immediately send a hello message with the list of the control and/or
+management protocols needed by the applications deployed on top of it.
+
+::
+
+    struct NetIDE_message {
+        struct netide_header header ;
+        uint8 data [0]
+    };
+
+The header contains the following values:
+
+-  ``netide ver=0x03``
+
+-  ``type=NETIDE_HELLO``
+
+-  ``length=2*NR_PROTOCOLS``
+
+-  ``data`` contains one 2-byte word (in big endian order) for each
+   protocol, with the first byte containing the code of the protocol
+   according to the above enum, while the second byte indicates the
+   version of the protocol (e.g. according to the ONF specification,
+   0x01 for OpenFlow v1.0, 0x02 for OpenFlow v1.1, etc.). The NETCONF
+   version is marked with 0x01, which refers to the specification in
+   RFC 6241, while the OpFlex version is marked with 0x00 since this
+   protocol is still in a work-in-progress stage.
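Encoding and decoding this protocol list is straightforward; the sketch below illustrates the 2-byte-word scheme just described (protocol codes from the enum above). It is an illustration, not NetIDE code, and Python is used purely for illustration.

```python
import struct

NETIDE_OPENFLOW = 0x11
NETIDE_NETCONF = 0x12

def encode_hello_data(protocols):
    """Encode [(protocol_code, version), ...] as the hello payload:
    one big-endian 2-byte word per protocol."""
    return b"".join(struct.pack("!BB", code, ver) for code, ver in protocols)

def decode_hello_data(data):
    return [struct.unpack("!BB", data[i:i + 2])
            for i in range(0, len(data), 2)]

# OpenFlow v1.0 and NETCONF (version 0x01, per RFC 6241) in one hello;
# the header's length field would then be 2 * NR_PROTOCOLS = 4.
payload = encode_hello_data([(NETIDE_OPENFLOW, 0x01), (NETIDE_NETCONF, 0x01)])
```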
+
+The Core relays hello messages to the server controller which responds
+with another hello message containing the following:
+
+-  ``netide ver=0x03``
+
+-  ``type=NETIDE_HELLO``
+
+-  ``length=2*NR_PROTOCOLS``
+
+This response is sent if at least one of the protocols requested by the
+client is supported. In particular, ``data`` contains the codes of the
+protocols that match the client’s request (2-byte words, big endian
+order). If the handshake fails because none of the requested protocols
+is supported by the server controller, the header of the answer is as
+follows:
+
+-  ``netide ver=0x03``
+
+-  ``type=NETIDE_ERROR``
+
+-  ``length=2*NR_PROTOCOLS``
+
+-  ``data`` contains the codes of all the protocols supported by the
+   server controller (2-byte words, big endian order). In this case,
+   the TCP session is terminated by the server controller just after the
+   answer is received by the client.
+
diff --git a/docs/developer-guide/neutron-northbound.rst b/docs/developer-guide/neutron-northbound.rst
new file mode 100644 (file)
index 0000000..9ea8714
--- /dev/null
@@ -0,0 +1,128 @@
+Neutron Northbound
+==================
+
+How to add new API support
+--------------------------
+
+OpenStack Neutron is a moving target. It is continuously adding new
+features as new REST APIs. Here are the basic steps to add new API
+support:
+
+In the Neutron Northbound project:
+
+-  Add a new YANG model for it under ``neutron/model/src/main/yang`` and
+   update ``neutron.yang``
+
+-  Add northbound API for it, and neutron-spi
+
+   -  Implement ``Neutron<New API>Request.java`` and
+      ``Neutron<New API>Northbound.java`` under
+      ``neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/``
+
+   -  Implement ``INeutron<New API>CRUD.java`` and new data structure if
+      any under
+      ``neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/``
+
+   -  update
+      ``neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/NeutronCRUDInterfaces.java``
+      to wire the new CRUD interface
+
+   -  Add unit tests, ``Neutron<New structure>JAXBTest.java`` under
+      ``neutron/neutron-spi/src/test/java/org/opendaylight/neutron/spi/``
+
+-  update
+   ``neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/NeutronNorthboundRSApplication.java``
+   to wire the new northbound API to ``RSApplication``
+
+-  Add transcriber, ``Neutron<New API>Interface.java`` under
+   ``transcriber/src/main/java/org/opendaylight/neutron/transcriber/``
+
+-  Update
+   ``transcriber/src/main/java/org/opendaylight/neutron/transcriber/NeutronTranscriberProvider.java``
+   to wire in the new transcriber
+
+-  Add integration tests, ``Neutron<New API>Tests.java``, under
+   ``integration/test/src/test/java/org/opendaylight/neutron/e2etest/``
+
+-  Update
+   ``integration/test/src/test/java/org/opendaylight/neutron/e2etest/ITNeutronE2E.java``
+   to run the newly added tests
+
+In OpenStack networking-odl:
+
+-  Add a new driver (or plugin) for the new API, with tests.
+
+In a southbound Neutron provider:
+
+-  Implement the actual backend to realize the new API by listening to
+   the related YANG models.
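The neutron-spi CRUD step above can be sketched roughly as follows. This is an illustrative, in-memory mock with a hypothetical ``Foo`` resource standing in for ``<New API>``; the real ``INeutron<New API>CRUD`` implementations back onto the MD-SAL datastore via the transcribers:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical CRUD interface for a made-up "Foo" resource; the method
// shapes mirror the create/read/update/delete pattern described above.
interface INeutronFooCRUD {
    boolean fooExists(String uuid);
    String getFoo(String uuid);
    List<String> getAllFoos();
    boolean addFoo(String uuid, String foo);
    boolean removeFoo(String uuid);
}

// In-memory stand-in used only for illustration; a real implementation
// would delegate to the transcriber/datastore.
class InMemoryFooCRUD implements INeutronFooCRUD {
    private final Map<String, String> store = new HashMap<>();

    public boolean fooExists(String uuid) { return store.containsKey(uuid); }
    public String getFoo(String uuid) { return store.get(uuid); }
    public List<String> getAllFoos() { return new ArrayList<>(store.values()); }
    public boolean addFoo(String uuid, String foo) { return store.put(uuid, foo) == null; }
    public boolean removeFoo(String uuid) { return store.remove(uuid) != null; }
}
```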
+
+How to write a transcriber
+--------------------------
+
+For each Neutron data object, there is a ``Neutron*Interface`` defined
+within the transcriber artifact that will write that object to the
+MD-SAL configuration datastore.
+
+All ``Neutron*Interface`` classes extend ``AbstractNeutronInterface``,
+in which two methods are defined:
+
+-  one takes the Neutron object as input and creates a data object
+   from it.
+
+-  one takes a UUID as input and creates a data object containing only
+   the UUID.
+
+::
+
+    protected abstract T toMd(S neutronObject);
+    protected abstract T toMd(String uuid);
+
+In addition, the ``AbstractNeutronInterface`` class provides several
+other helper methods (``addMd``, ``updateMd``, ``removeMd``) which
+handle the actual writing to the configuration datastore.
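The shape of that pattern can be sketched as follows. This is a simplified, self-contained approximation: the real ``AbstractNeutronInterface`` writes through the MD-SAL ``DataBroker``, whereas here a plain map stands in for the datastore and the ``idOf`` helper is invented for the sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch: S is the Neutron object type, T the YANG-modeled type.
abstract class SketchNeutronInterface<S, T> {
    // Stand-in for the MD-SAL configuration datastore, keyed by UUID.
    protected final Map<String, T> datastore = new HashMap<>();

    protected abstract T toMd(S neutronObject);
    protected abstract T toMd(String uuid);
    protected abstract String idOf(T mdObject); // hypothetical helper

    protected boolean addMd(S neutronObject) {
        T md = toMd(neutronObject);
        return datastore.put(idOf(md), md) == null; // fail if already present
    }

    protected boolean updateMd(S neutronObject) {
        T md = toMd(neutronObject);
        datastore.put(idOf(md), md); // overwrite existing entry
        return true;
    }

    protected boolean removeMd(String uuid) {
        return datastore.remove(idOf(toMd(uuid))) != null;
    }
}

// Illustrative concrete subclass for a made-up "Foo" pair of types.
class NeutronFoo { final String uuid; NeutronFoo(String uuid) { this.uuid = uuid; } }
class MdFoo { final String uuid; MdFoo(String uuid) { this.uuid = uuid; } }

class FooInterface extends SketchNeutronInterface<NeutronFoo, MdFoo> {
    protected MdFoo toMd(NeutronFoo foo) { return new MdFoo(foo.uuid); }
    protected MdFoo toMd(String uuid) { return new MdFoo(uuid); }
    protected String idOf(MdFoo md) { return md.uuid; }

    // Public wrappers so the protected helpers can be exercised.
    public boolean add(NeutronFoo foo) { return addMd(foo); }
    public boolean remove(String uuid) { return removeMd(uuid); }
    public boolean contains(String uuid) { return datastore.containsKey(uuid); }
}
```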
+
+The semantics of the ``toMd()`` methods
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each of the Neutron YANG models defines structures containing data.
+Further, each YANG-modeled structure has its own builder. A particular
+``toMd()`` method creates an instance of the correct builder, fills in
+the builder's properties from the corresponding values of the Neutron
+object and then creates the YANG-modeled structure via the ``build()``
+method.
+
+As an example, the ``toMd`` code for Neutron Networks is presented
+below:
+
+::
+
+    protected Network toMd(NeutronNetwork network) {
+        NetworkBuilder networkBuilder = new NetworkBuilder();
+        networkBuilder.setAdminStateUp(network.getAdminStateUp());
+        if (network.getNetworkName() != null) {
+            networkBuilder.setName(network.getNetworkName());
+        }
+        if (network.getShared() != null) {
+            networkBuilder.setShared(network.getShared());
+        }
+        if (network.getStatus() != null) {
+            networkBuilder.setStatus(network.getStatus());
+        }
+        if (network.getSubnets() != null) {
+            List<Uuid> subnets = new ArrayList<Uuid>();
+            for (String subnet : network.getSubnets()) {
+                subnets.add(toUuid(subnet));
+            }
+            networkBuilder.setSubnets(subnets);
+        }
+        if (network.getTenantID() != null) {
+            networkBuilder.setTenantId(toUuid(network.getTenantID()));
+        }
+        if (network.getNetworkUUID() != null) {
+            networkBuilder.setUuid(toUuid(network.getNetworkUUID()));
+        } else {
+            logger.warn("Attempting to write neutron network without UUID");
+        }
+        return networkBuilder.build();
+    }
+
diff --git a/docs/developer-guide/neutron-service-developer-guide.rst b/docs/developer-guide/neutron-service-developer-guide.rst
new file mode 100644 (file)
index 0000000..3ecead0
--- /dev/null
@@ -0,0 +1,161 @@
+Neutron Service Developer Guide
+===============================
+
+Overview
+--------
+
+This Karaf feature (``odl-neutron-service``) provides integration
+support for OpenStack Neutron via the OpenDaylight ML2 mechanism
+driver. The Neutron Service is only one of the components necessary
+for OpenStack integration. It defines YANG models for the OpenStack
+Neutron data models and exposes a northbound API via REST and YANG
+model-driven RESTCONF.
+
+Developers who want to add a provider for new OpenStack Neutron
+extensions/services (Neutron constantly adds new extensions/services,
+and OpenDaylight will keep up with them) need to communicate with this
+Neutron Service or add models to it. Adding new extensions/services to
+the Neutron Service itself requires new YANG data models, but that is
+out of scope for this document: this guide is for developers who will
+be *using* the feature to build something separate, not for those
+developing code for the feature itself.
+
+Neutron Service Architecture
+----------------------------
+
+.. figure:: ./images/neutron/odl-neutron-service-developer-architecture.png
+   :alt: Neutron Service Architecture
+
+   Neutron Service Architecture
+
+The Neutron Service defines YANG models for OpenStack Neutron
+integration. When OpenStack admins/users request changes
+(creation/update/deletion) of Neutron resources, e.g., Neutron network,
+Neutron subnet, Neutron port, the corresponding YANG model within
+OpenDaylight will be modified. The OpenDaylight OpenStack provider
+subscribes to changes on those models and is notified of those
+modifications through MD-SAL when changes are made. The provider then
+performs the tasks necessary to realize the OpenStack integration. How
+(or even whether) to realize it is up to each provider; the Neutron
+Service itself does not take care of that.
+
+How to Write a SB Neutron Consumer
+----------------------------------
+
+In Boron, there is only one option for SB Neutron Consumers:
+
+-  Listening for changes via the Neutron YANG model
+
+Until Beryllium there was another way, the legacy I\*Aware interface.
+As of Boron, that interface has been eliminated, so all SB Neutron
+Consumers have to use the Neutron YANG models.
+
+Neutron YANG models
+-------------------
+
+Neutron service defines YANG models for Neutron. The details can be
+found at
+
+-  https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=tree;f=model/src/main/yang;hb=refs/heads/stable/boron
+
+Basically those models are based on the OpenStack Neutron API
+definitions. For exact definitions, the OpenStack Neutron source code
+needs to be consulted, as the API documentation doesn't always cover
+the necessary details. There is nothing special about utilizing those
+Neutron YANG models. The basic procedure is:
+
+1. subscribe for changes made to the models
+
+2. respond to the data change notification for each model
+
+.. note::
+
+    Currently there is no way to refuse the requested configuration at
+    this point. That is left to future work.
+
+.. code:: java
+
+    public class NeutronNetworkChangeListener implements DataChangeListener, AutoCloseable {
+        private static final Logger LOG = LoggerFactory.getLogger(NeutronNetworkChangeListener.class);
+        private ListenerRegistration<DataChangeListener> registration;
+        private DataBroker db;
+
+        public NeutronNetworkChangeListener(DataBroker db){
+            this.db = db;
+            // create identity path to register on service startup
+            InstanceIdentifier<Network> path = InstanceIdentifier
+                    .create(Neutron.class)
+                    .child(Networks.class)
+                    .child(Network.class);
+            LOG.debug("Register listener for Neutron Network model data changes");
+            // register for Data Change Notification
+            registration =
+                    this.db.registerDataChangeListener(LogicalDatastoreType.CONFIGURATION, path, this, DataChangeScope.ONE);
+
+        }
+
+        @Override
+        public void onDataChanged(
+                AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> changes) {
+            LOG.trace("Data changes : {}",changes);
+
+            // handle data change notification
+            Object[] subscribers = NeutronIAwareUtil.getInstances(INeutronNetworkAware.class, this);
+            createNetwork(changes, subscribers);
+            updateNetwork(changes, subscribers);
+            deleteNetwork(changes, subscribers);
+        }
+    }
+
+Neutron configuration
+---------------------
+
+Starting with Boron, there are new configuration models that allow
+OpenDaylight to tell OpenStack neutron/networking-odl its
+configuration/capabilities.
+
+hostconfig
+~~~~~~~~~~
+
+This is for OpenDaylight to tell per-node configuration to Neutron. In
+particular, it is used heavily by pseudo agent port binding.
+
+The model definition can be found at
+
+-  https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=blob;f=model/src/main/yang/neutron-hostconfig.yang;hb=refs/heads/stable/boron
+
+How to populate this for pseudo agent port binding is documented at
+
+-  http://git.openstack.org/cgit/openstack/networking-odl/tree/doc/source/devref/hostconfig.rst
+
+Neutron extension config
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+In Boron this is experimental. The model definition can be found at
+
+-  https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=blob;f=model/src/main/yang/neutron-extensions.yang;hb=refs/heads/stable/boron
+
+Each Neutron Service provider has its own feature set. Some support
+the full set of OpenStack features, while others support only a
+subset; even with the same supported Neutron APIs, specific
+functionality may or may not be available. So there needs to be a way
+for OpenDaylight to report its capabilities to networking-odl, so that
+networking-odl can initialize Neutron properly based on the reported
+capabilities.
+
+Neutron Logger
+--------------
+
+There is another small Karaf feature, ``odl-neutron-logger``, which
+logs changes to the Neutron YANG models and can be used for
+debugging/auditing.
+
+Its source also helps in understanding how to listen for the changes:
+
+-  https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=blob;f=neutron-logger/src/main/java/org/opendaylight/neutron/logger/NeutronLogger.java;hb=refs/heads/stable/boron
+
+API Reference Documentation
+---------------------------
+
+The OpenStack Neutron API references:
+
+-  http://developer.openstack.org/api-ref-networking-v2.html
+
+-  http://developer.openstack.org/api-ref-networking-v2-ext.html
+
diff --git a/docs/developer-guide/ocp-plugin-developer-guide.rst b/docs/developer-guide/ocp-plugin-developer-guide.rst
new file mode 100644 (file)
index 0000000..ba40a05
--- /dev/null
@@ -0,0 +1,1158 @@
+OCP Plugin Developer Guide
+==========================
+
+This document is intended for both OCP (ORI [Open Radio Interface] C&M
+[Control and Management] Protocol) agent developers and OpenDaylight
+service/application developers. It describes essential information
+needed to implement an OCP agent that is capable of interoperating with
+the OCP plugin running in OpenDaylight, including the OCP connection
+establishment and state machines used on both ends of the connection. It
+also provides a detailed description of the northbound/southbound APIs
+that the OCP plugin exposes to allow automation and programmability.
+
+Overview
+--------
+
+OCP is an ETSI standard protocol for control and management of Remote
+Radio Head (RRH) equipment. The OCP Project addresses the need for a
+southbound plugin that allows applications and controller services to
+interact with RRHs using OCP. The OCP southbound plugin will allow
+applications acting as a Radio Equipment Control (REC) to interact with
+RRHs that support an OCP agent.
+
+.. figure:: ./images/ocpplugin/ocp-sb-plugin.jpg
+   :alt: OCP southbound plugin
+
+   OCP southbound plugin
+
+Architecture
+------------
+
+OCP is a vendor-neutral standard communications interface defined to
+enable control and management between RE and REC of an ORI architecture.
+The OCP Plugin supports the implementation of the OCP specification; it
+is based on the Model Driven Service Abstraction Layer (MD-SAL)
+architecture.
+
+The OCP Plugin project consists of three main components: OCP southbound
+plugin, OCP protocol library and OCP service. For details on each of
+them, refer to the OCP Plugin User Guide.
+
+.. figure:: ./images/ocpplugin/plugin-design.jpg
+   :alt: Overall architecture
+
+   Overall architecture
+
+Connection Establishment
+------------------------
+
+The OCP layer is transported over a TCP/IP connection established
+between the RE and the REC. OCP provides the following functions:
+
+-  Control & Management of the RE by the REC
+
+-  Transport of AISG/3GPP Iuant Layer 7 messages and alarms between REC
+   and RE
+
+Hello Message
+~~~~~~~~~~~~~
+
+The Hello message is used by the OCP agent during connection setup for
+version negotiation. When the connection is established, the OCP agent
+immediately sends a Hello message with the version field set to the
+highest OCP version it supports, along with the vendor ID and serial
+number of the radio head it is running on.
+
+The combination of the vendor ID and serial number is used by the OCP
+plugin to uniquely identify a managed radio head. When it does not
+receive a reply from the OCP plugin, the OCP agent can resend the Hello
+message, governed by a pre-defined Hello timeout (THLO) and Hello
+resend count (NHLO).
+
+According to the ORI spec, the default value of the TCP Link Monitoring
+Timer (TTLM) is 50 seconds. The RE shall trigger an OCP layer restart
+when TTLM expires in the RE or when the RE detects a TCP link failure.
+So we may define NHLO \* THLO = 50 seconds (e.g. NHLO = 10, THLO = 5
+seconds).
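The resend budget above can be sketched as follows. This is an illustrative, self-contained model of the agent-side retry loop, not actual OCP plugin code; the send/await steps are abstracted into a predicate:

```java
import java.util.function.IntPredicate;

// Illustrative model of the Hello resend policy: up to NHLO attempts,
// each with a THLO-second timeout, covering the 50-second TTLM window.
class HelloResendPolicy {
    static final int NHLO = 10;        // maximum Hello attempts
    static final int THLO_SECONDS = 5; // per-attempt timeout (seconds)

    // ackReceived.test(n) stands in for "send Hello, wait THLO seconds,
    // return true if an ACK arrived on attempt n".
    static int attemptsUntilAck(IntPredicate ackReceived) {
        for (int attempt = 1; attempt <= NHLO; attempt++) {
            if (ackReceived.test(attempt)) {
                return attempt; // ACK received, connection setup proceeds
            }
        }
        return -1; // no ACK within NHLO * THLO seconds; give up
    }
}
```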
+
+The Hello message is a new type of indication; it contains the
+supported OCP version, vendor ID and serial number, as shown below.
+
+**Hello message.**
+
+::
+
+    <?xml version="1.0" encoding="UTF-8"?>
+    <msg xmlns="http://uri.etsi.org/ori/002-2/v4.1.1">
+      <header>
+        <msgType>IND</msgType>
+        <msgUID>0</msgUID>
+      </header>
+      <body>
+        <helloInd>
+          <version>4.1.1</version>
+          <vendorId>XYZ</vendorId>
+          <serialNumber>ABC123</serialNumber>
+        </helloInd>
+      </body>
+    </msg>
+
+Ack Message
+~~~~~~~~~~~
+
+A Hello from the OCP agent always makes the OCP plugin respond with an
+ACK: ACK(OK) if everything is OK, ACK(FAIL) if something is wrong.
+
+If the OCP agent receives ACK(OK), it goes to the Established state. If
+it receives ACK(FAIL), it goes to the Maintenance state. The failure
+codes and reasons for ACK(FAIL) are defined below:
+
+-  FAIL\_OCP\_VERSION (OCP version not supported)
+
+-  FAIL\_NO\_MORE\_CAPACITY (OCP plugin cannot control any more radio
+   heads)
+
+The result inside Ack message indicates OK or FAIL with different
+reasons.
+
+**Ack message.**
+
+::
+
+    <?xml version="1.0" encoding="UTF-8"?>
+    <msg xmlns="http://uri.etsi.org/ori/002-2/v4.1.1">
+      <header>
+        <msgType>ACK</msgType>
+        <msgUID>0</msgUID>
+      </header>
+      <body>
+        <helloAck>
+          <result>FAIL_OCP_VERSION</result>
+        </helloAck>
+      </body>
+    </msg>
+
+State Machines
+~~~~~~~~~~~~~~
+
+The following figures illustrate the Finite State Machines (FSMs) of
+the OCP agent and the OCP plugin for the new-connection procedure.
+
+.. figure:: ./images/ocpplugin/ocpagent-state-machine.jpg
+   :alt: OCP agent state machine
+
+   OCP agent state machine
+
+.. figure:: ./images/ocpplugin/ocpplugin-state-machine.jpg
+   :alt: OCP plugin state machine
+
+   OCP plugin state machine
+
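The agent-side transitions described in the Ack Message section can be sketched as a minimal FSM. The states and events here are illustrative simplifications of the full state machines shown in the figures above, not the actual plugin or agent implementation:

```java
// Minimal illustrative FSM for the agent side of connection setup:
// ACK(OK) moves the agent to Established, ACK(FAIL) to Maintenance.
class OcpAgentFsm {
    enum State { CONNECTING, ESTABLISHED, MAINTENANCE }

    private State state = State.CONNECTING;

    State onHelloAck(String result) {
        // Any FAIL_* result (e.g. FAIL_OCP_VERSION) leads to Maintenance.
        state = "OK".equals(result) ? State.ESTABLISHED : State.MAINTENANCE;
        return state;
    }

    State state() { return state; }
}
```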
+Northbound APIs
+---------------
+
+There are ten exposed northbound APIs: health-check, set-time, re-reset,
+get-param, modify-param, create-obj, delete-obj, get-state, modify-state
+and get-fault.
+
+health-check
+~~~~~~~~~~~~
+
+The Health Check procedure allows the application to verify that the OCP
+layer is functioning correctly at the RE.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:health-check-nb
+
+POST Input
+^^^^^^^^^^
+
++--------------------+----------+--------------------+--------------------+----------+
+| Field Name         | Type     | Description        | Example            | Required |
+|                    |          |                    |                    | ?        |
++====================+==========+====================+====================+==========+
+| nodeId             | String   | Inventory node     | ocp:MTI-101-200    | Yes      |
+|                    |          | reference for OCP  |                    |          |
+|                    |          | radio head         |                    |          |
++--------------------+----------+--------------------+--------------------+----------+
+| tcpLinkMonTimeout  | unsigned | TCP Link           | 50                 | Yes      |
+|                    | Short    | Monitoring Timeout |                    |          |
+|                    |          | (unit: seconds)    |                    |          |
++--------------------+----------+--------------------+--------------------+----------+
+
+**Example.**
+
+::
+
+    {
+        "health-check-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "tcpLinkMonTimeout": "50"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| result             | String, enumerated | Common default result codes          |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "result": "SUCCESS"
+        }
+    }
+
+set-time
+~~~~~~~~
+
+The Set Time procedure allows the application to set/update the absolute
+time reference that shall be used by the RE.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:set-time-nb
+
+POST Input
+^^^^^^^^^^
+
++------------+------------+----------------------+----------------------+------------+
+| Field Name | Type       | Description          | Example              | Required?  |
++============+============+======================+======================+============+
+| nodeId     | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|            |            | reference for OCP    |                      |            |
+|            |            | radio head           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| newTime    | dateTime   | New datetime setting | 2016-04-26T10:23:00- | Yes        |
+|            |            | for radio head       | 05:00                |            |
++------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "set-time-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "newTime": "2016-04-26T10:23:00-05:00"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| result             | String, enumerated | Common default result codes +        |
+|                    |                    | FAIL\_INVALID\_TIMEDATA              |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "result": "SUCCESS"
+        }
+    }
+
+re-reset
+~~~~~~~~
+
+The RE Reset procedure allows the application to reset a specific RE.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:re-reset-nb
+
+POST Input
+^^^^^^^^^^
+
++------------+------------+----------------------+----------------------+------------+
+| Field Name | Type       | Description          | Example              | Required?  |
++============+============+======================+======================+============+
+| nodeId     | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|            |            | reference for OCP    |                      |            |
+|            |            | radio head           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "re-reset-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| result             | String, enumerated | Common default result codes          |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "result": "SUCCESS"
+        }
+    }
+
+get-param
+~~~~~~~~~
+
+The Object Parameter Reporting procedure allows the application to
+retrieve the following information:
+
+1. the defined object types and instances within the Resource Model of
+   the RE
+
+2. the values of the parameters of the objects
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:get-param-nb
+
+POST Input
+^^^^^^^^^^
+
++------------+------------+----------------------+----------------------+------------+
+| Field Name | Type       | Description          | Example              | Required?  |
++============+============+======================+======================+============+
+| nodeId     | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|            |            | reference for OCP    |                      |            |
+|            |            | radio head           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| objId      | String     | Object ID            | RxSigPath\_5G:1      | Yes        |
++------------+------------+----------------------+----------------------+------------+
+| paramName  | String     | Parameter name       | dataLink             | Yes        |
++------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "get-param-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "objId": "RxSigPath_5G:1",
+                "paramName": "dataLink"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| id                 | String             | Object ID                            |
++--------------------+--------------------+--------------------------------------+
+| name               | String             | Object parameter name                |
++--------------------+--------------------+--------------------------------------+
+| value              | String             | Object parameter value               |
++--------------------+--------------------+--------------------------------------+
+| result             | String, enumerated | Common default result codes +        |
+|                    |                    | "FAIL\_UNKNOWN\_OBJECT",             |
+|                    |                    | "FAIL\_UNKNOWN\_PARAM"               |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "obj": [
+                {
+                    "id": "RxSigPath_5G:1",
+                    "param": [
+                        {
+                            "name": "dataLink",
+                            "value": "dataLink:1"
+                        }
+                    ]
+                }
+            ],
+            "result": "SUCCESS"
+        }
+    }
+
+modify-param
+~~~~~~~~~~~~
+
+The Object Parameter Modification procedure allows the application to
+configure the values of the parameters of the objects identified by the
+Resource Model.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:modify-param-nb
+
+POST Input
+^^^^^^^^^^
+
++------------+------------+----------------------+----------------------+------------+
+| Field Name | Type       | Description          | Example              | Required?  |
++============+============+======================+======================+============+
+| nodeId     | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|            |            | reference for OCP    |                      |            |
+|            |            | radio head           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| objId      | String     | Object ID            | RxSigPath\_5G:1      | Yes        |
++------------+------------+----------------------+----------------------+------------+
+| name       | String     | Object parameter     | dataLink             | Yes        |
+|            |            | name                 |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| value      | String     | Object parameter     | dataLink:1           | Yes        |
+|            |            | value                |                      |            |
++------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "modify-param-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "objId": "RxSigPath_5G:1",
+                "param": [
+                    {
+                        "name": "dataLink",
+                        "value": "dataLink:1"
+                    }
+                ]
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| objId              | String             | Object ID                            |
++--------------------+--------------------+--------------------------------------+
+| globResult         | String, enumerated | Common default result codes +        |
+|                    |                    | "FAIL\_UNKNOWN\_OBJECT",             |
+|                    |                    | "FAIL\_PARAMETER\_FAIL",             |
+|                    |                    | "FAIL\_NOSUCH\_RESOURCE"             |
++--------------------+--------------------+--------------------------------------+
+| name               | String             | Object parameter name                |
++--------------------+--------------------+--------------------------------------+
+| result             | String, enumerated | "SUCCESS", "FAIL\_UNKNOWN\_PARAM",   |
+|                    |                    | "FAIL\_PARAM\_READONLY",             |
+|                    |                    | "FAIL\_PARAM\_LOCKREQUIRED",         |
+|                    |                    | "FAIL\_VALUE\_OUTOF\_RANGE",         |
+|                    |                    | "FAIL\_VALUE\_TYPE\_ERROR"           |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "objId": "RxSigPath_5G:1",
+            "globResult": "SUCCESS",
+            "param": [
+                {
+                    "name": "dataLink",
+                    "result": "SUCCESS"
+                }
+            ]
+        }
+    }
+
+create-obj
+~~~~~~~~~~
+
+The Object Creation procedure allows the application to create and
+initialize a new instance of the given object type on the RE.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:create-obj-nb
+
+POST Input
+^^^^^^^^^^
+
++------------+------------+----------------------+----------------------+------------+
+| Field Name | Type       | Description          | Example              | Required?  |
++============+============+======================+======================+============+
+| nodeId     | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|            |            | reference for OCP    |                      |            |
+|            |            | radio head           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| objType    | String     | Object type          | RxSigPath\_5G        | Yes        |
++------------+------------+----------------------+----------------------+------------+
+| name       | String     | Object parameter     | dataLink             | No         |
+|            |            | name                 |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| value      | String     | Object parameter     | dataLink:1           | No         |
+|            |            | value                |                      |            |
++------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "create-obj-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "objType": "RxSigPath_5G",
+                "param": [
+                    {
+                        "name": "dataLink",
+                        "value": "dataLink:1"
+                    }
+                ]
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| objId              | String             | Object ID                            |
++--------------------+--------------------+--------------------------------------+
+| globResult         | String, enumerated | Common default result codes +        |
+|                    |                    | "FAIL\_UNKNOWN\_OBJTYPE",            |
+|                    |                    | "FAIL\_STATIC\_OBJTYPE",             |
+|                    |                    | "FAIL\_UNKNOWN\_OBJECT",             |
+|                    |                    | "FAIL\_CHILD\_NOTALLOWED",           |
+|                    |                    | "FAIL\_OUTOF\_RESOURCES",            |
+|                    |                    | "FAIL\_PARAMETER\_FAIL",             |
+|                    |                    | "FAIL\_NOSUCH\_RESOURCE"             |
++--------------------+--------------------+--------------------------------------+
+| name               | String             | Object parameter name                |
++--------------------+--------------------+--------------------------------------+
+| result             | String, enumerated | "SUCCESS", "FAIL\_UNKNOWN\_PARAM",   |
+|                    |                    | "FAIL\_PARAM\_READONLY",             |
+|                    |                    | "FAIL\_PARAM\_LOCKREQUIRED",         |
+|                    |                    | "FAIL\_VALUE\_OUTOF\_RANGE",         |
+|                    |                    | "FAIL\_VALUE\_TYPE\_ERROR"           |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "objId": "RxSigPath_5G:0",
+            "globResult": "SUCCESS",
+            "param": [
+                {
+                    "name": "dataLink",
+                    "result": "SUCCESS"
+                }
+            ]
+        }
+    }
+
+delete-obj
+~~~~~~~~~~
+
+The Object Deletion procedure allows the application to delete a given
+object instance and, recursively, all of its child objects on the RE.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:delete-obj-nb
+
+POST Input
+^^^^^^^^^^
+
++------------+------------+----------------------+----------------------+------------+
+| Field Name | Type       | Description          | Example              | Required?  |
++============+============+======================+======================+============+
+| nodeId     | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|            |            | reference for OCP    |                      |            |
+|            |            | radio head           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| objId      | String     | Object ID            | RxSigPath\_5G:1      | Yes        |
++------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "delete-obj-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "objId": "RxSigPath_5G:0"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| result             | String, enumerated | Common default result codes +        |
+|                    |                    | "FAIL\_UNKNOWN\_OBJECT",             |
+|                    |                    | "FAIL\_STATIC\_OBJTYPE",             |
+|                    |                    | "FAIL\_LOCKREQUIRED"                 |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "result": "SUCCESS"
+        }
+    }
+
+get-state
+~~~~~~~~~
+
+The Object State Reporting procedure allows the application to acquire
+the current state (for the requested state type) of one or more objects
+of the RE resource model, and additionally configure event-triggered
+reporting of the detected state changes for all state types of the
+indicated objects.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:get-state-nb
+
+POST Input
+^^^^^^^^^^
+
++----------------------+------------+----------------------+-----------------+-----------+
+| Field Name           | Type       | Description          | Example         | Required? |
++======================+============+======================+=================+===========+
+| nodeId               | String     | Inventory node       | ocp:MTI-101-200 | Yes       |
+|                      |            | reference for OCP    |                 |           |
+|                      |            | radio head           |                 |           |
++----------------------+------------+----------------------+-----------------+-----------+
+| objId                | String     | Object ID            | RxSigPath\_5G:1 | Yes       |
++----------------------+------------+----------------------+-----------------+-----------+
+| stateType            | String,    | Valid values: "AST", | ALL             | Yes       |
+|                      | enumerated | "FST", "ALL"         |                 |           |
++----------------------+------------+----------------------+-----------------+-----------+
+| eventDrivenReporting | Boolean    | Event-triggered      | true            | Yes       |
+|                      |            | reporting of state   |                 |           |
+|                      |            | change               |                 |           |
++----------------------+------------+----------------------+-----------------+-----------+
+
+**Example.**
+
+::
+
+    {
+        "get-state-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "objId": "antPort:0",
+                "stateType": "ALL",
+                "eventDrivenReporting": "true"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| id                 | String             | Object ID                            |
++--------------------+--------------------+--------------------------------------+
+| type               | String, enumerated | State type. Valid values: "AST",     |
+|                    |                    | "FST"                                |
++--------------------+--------------------+--------------------------------------+
+| value              | String, enumerated | State value. Valid values: For state |
+|                    |                    | type = "AST": "LOCKED", "UNLOCKED".  |
+|                    |                    | For state type = "FST":              |
+|                    |                    | "PRE\_OPERATIONAL", "OPERATIONAL",   |
+|                    |                    | "DEGRADED", "FAILED",                |
+|                    |                    | "NOT\_OPERATIONAL", "DISABLED"       |
++--------------------+--------------------+--------------------------------------+
+| result             | String, enumerated | Common default result codes +        |
+|                    |                    | "FAIL\_UNKNOWN\_OBJECT",             |
+|                    |                    | "FAIL\_UNKNOWN\_STATETYPE",          |
+|                    |                    | "FAIL\_VALUE\_OUTOF\_RANGE"          |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "obj": [
+                {
+                    "id": "antPort:0",
+                    "state": [
+                        {
+                            "type": "FST",
+                            "value": "DISABLED"
+                        },
+                        {
+                            "type": "AST",
+                            "value": "LOCKED"
+                        }
+                    ]
+                }
+            ],
+            "result": "SUCCESS"
+        }
+    }
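
The reply above is plain JSON and can be post-processed directly by a client. The snippet below is a minimal, illustrative sketch (not part of the OCP service itself) that flattens a get-state-nb reply into a per-object lookup table:

```python
import json

def parse_get_state_output(reply):
    """Flatten a get-state-nb reply into {object id: {state type: value}}."""
    out = reply["output"]
    if out.get("result") != "SUCCESS":
        raise RuntimeError("get-state failed: %s" % out.get("result"))
    # One entry per reported object, mapping each state type to its value.
    return {
        obj["id"]: {s["type"]: s["value"] for s in obj.get("state", [])}
        for obj in out.get("obj", [])
    }

# The example reply shown above:
reply = json.loads("""
{
    "output": {
        "obj": [
            {
                "id": "antPort:0",
                "state": [
                    {"type": "FST", "value": "DISABLED"},
                    {"type": "AST", "value": "LOCKED"}
                ]
            }
        ],
        "result": "SUCCESS"
    }
}
""")
states = parse_get_state_output(reply)
```

An application can then test, for example, `states["antPort:0"]["AST"]` before attempting a state modification.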
+
+modify-state
+~~~~~~~~~~~~
+
+The Object State Modification procedure allows the application to
+trigger a change in the state of an object of the RE Resource Model.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:modify-state-nb
+
+POST Input
+^^^^^^^^^^
+
++------------+------------+----------------------+----------------------+------------+
+| Field Name | Type       | Description          | Example              | Required?  |
++============+============+======================+======================+============+
+| nodeId     | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|            |            | reference for OCP    |                      |            |
+|            |            | radio head           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| objId      | String     | Object ID            | RxSigPath\_5G:1      | Yes        |
++------------+------------+----------------------+----------------------+------------+
+| stateType  | String,    | Valid values: "AST", | AST                  | Yes        |
+|            | enumerated | "FST", "ALL"         |                      |            |
++------------+------------+----------------------+----------------------+------------+
+| stateValue | String,    | Valid values: For    | LOCKED               | Yes        |
+|            | enumerated | state type = "AST":  |                      |            |
+|            |            | "LOCKED",            |                      |            |
+|            |            | "UNLOCKED". For      |                      |            |
+|            |            | state type = "FST":  |                      |            |
+|            |            | "PRE\_OPERATIONAL",  |                      |            |
+|            |            | "OPERATIONAL",       |                      |            |
+|            |            | "DEGRADED",          |                      |            |
+|            |            | "FAILED",            |                      |            |
+|            |            | "NOT\_OPERATIONAL",  |                      |            |
+|            |            | "DISABLED"           |                      |            |
++------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "modify-state-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "objId": "RxSigPath_5G:1",
+                "stateType": "AST",
+                "stateValue": "LOCKED"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| objId              | String             | Object ID                            |
++--------------------+--------------------+--------------------------------------+
+| stateType          | String, enumerated | State type. Valid values: "AST",     |
+|                    |                    | "FST"                                |
++--------------------+--------------------+--------------------------------------+
+| stateValue         | String, enumerated | State value. Valid values: For state |
+|                    |                    | type = "AST": "LOCKED", "UNLOCKED".  |
+|                    |                    | For state type = "FST":              |
+|                    |                    | "PRE\_OPERATIONAL", "OPERATIONAL",   |
+|                    |                    | "DEGRADED", "FAILED",                |
+|                    |                    | "NOT\_OPERATIONAL", "DISABLED"       |
++--------------------+--------------------+--------------------------------------+
+| result             | String, enumerated | Common default result codes +        |
+|                    |                    | "FAIL\_UNKNOWN\_OBJECT",             |
+|                    |                    | "FAIL\_UNKNOWN\_STATETYPE",          |
+|                    |                    | "FAIL\_UNKNOWN\_STATEVALUE",         |
+|                    |                    | "FAIL\_STATE\_READONLY",             |
+|                    |                    | "FAIL\_RESOURCE\_UNAVAILABLE",       |
+|                    |                    | "FAIL\_RESOURCE\_INUSE",             |
+|                    |                    | "FAIL\_PARENT\_CHILD\_CONFLICT",     |
+|                    |                    | "FAIL\_PRECONDITION\_NOTMET"         |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "objId": "RxSigPath_5G:1",
+            "stateType": "AST",
+            "stateValue": "LOCKED",
+            "result": "SUCCESS"
+        }
+    }
+
+get-fault
+~~~~~~~~~
+
+The Fault Reporting procedure allows the application to acquire
+information about all currently active faults associated with a primary
+object, as well as to configure the RE to report when the fault status
+changes for any of the faults associated with the indicated primary
+object.
+
+Default URL:
+http://localhost:8181/restconf/operations/ocp-service:get-fault-nb
+
+POST Input
+^^^^^^^^^^
+
++----------------------+------------+----------------------+----------------------+------------+
+| Field Name           | Type       | Description          | Example              | Required?  |
++======================+============+======================+======================+============+
+| nodeId               | String     | Inventory node       | ocp:MTI-101-200      | Yes        |
+|                      |            | reference for OCP    |                      |            |
+|                      |            | radio head           |                      |            |
++----------------------+------------+----------------------+----------------------+------------+
+| objId                | String     | Object ID            | RE:0                 | Yes        |
++----------------------+------------+----------------------+----------------------+------------+
+| eventDrivenReporting | Boolean    | Event-triggered      | true                 | Yes        |
+|                      |            | reporting of fault   |                      |            |
++----------------------+------------+----------------------+----------------------+------------+
+
+**Example.**
+
+::
+
+    {
+        "get-fault-nb": {
+            "input": {
+                "nodeId": "ocp:MTI-101-200",
+                "objId": "RE:0",
+                "eventDrivenReporting": "true"
+            }
+        }
+    }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------+--------------------+--------------------------------------+
+| Field Name         | Type               | Description                          |
++====================+====================+======================================+
+| result             | String, enumerated | Common default result codes +        |
+|                    |                    | "FAIL\_UNKNOWN\_OBJECT",             |
+|                    |                    | "FAIL\_VALUE\_OUTOF\_RANGE"          |
++--------------------+--------------------+--------------------------------------+
+| id (obj)           | String             | Object ID                            |
++--------------------+--------------------+--------------------------------------+
+| id (fault)         | String             | Fault ID                             |
++--------------------+--------------------+--------------------------------------+
+| severity           | String             | Fault severity                       |
++--------------------+--------------------+--------------------------------------+
+| timestamp          | dateTime           | Time stamp                           |
++--------------------+--------------------+--------------------------------------+
+| descr              | String             | Text description                     |
++--------------------+--------------------+--------------------------------------+
+| affectedObj        | String             | Affected object                      |
++--------------------+--------------------+--------------------------------------+
+
+**Example.**
+
+::
+
+    {
+        "output": {
+            "result": "SUCCESS",
+            "obj": [
+                {
+                    "id": "RE:0",
+                    "fault": [
+                        {
+                            "id": "FAULT_OVERTEMP",
+                            "severity": "DEGRADED",
+                            "timestamp": "2012-02-12T16:35:00",
+                            "descr": "PA temp too high; Pout reduced",
+                            "affectedObj": [
+                                "TxSigPath_EUTRA:0",
+                                "TxSigPath_EUTRA:1"
+                            ]
+                        },
+                        {
+                            "id": "FAULT_VSWR_OUTOF_RANGE",
+                            "severity": "WARNING",
+                            "timestamp": "2012-02-12T16:01:05"
+                        }
+                    ]
+                }
+            ]
+        }
+    }
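
As an illustration only (the helper below is not part of the OCP service), a client might reduce a get-fault-nb reply to a list of (object, fault, severity) tuples for logging or alarm correlation:

```python
import json

def summarize_faults(reply):
    """Collect (object id, fault id, severity) triples from a get-fault-nb reply."""
    rows = []
    for obj in reply["output"].get("obj", []):
        for fault in obj.get("fault", []):
            # severity may be absent on some faults, so use .get()
            rows.append((obj["id"], fault["id"], fault.get("severity")))
    return rows

# A reply equivalent to the example above:
reply = json.loads("""
{
    "output": {
        "result": "SUCCESS",
        "obj": [
            {
                "id": "RE:0",
                "fault": [
                    {"id": "FAULT_OVERTEMP", "severity": "DEGRADED",
                     "timestamp": "2012-02-12T16:35:00"},
                    {"id": "FAULT_VSWR_OUTOF_RANGE", "severity": "WARNING",
                     "timestamp": "2012-02-12T16:01:05"}
                ]
            }
        ]
    }
}
""")
faults = summarize_faults(reply)
```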
+
+.. note::
+
+    The northbound APIs described above wrap the southbound APIs to make
+    them accessible to external applications via RESTCONF, and also take
+    care of synchronizing the RE resource model between radio heads and
+    the controller’s datastore. See
+    applications/ocp-service/src/main/yang/ocp-resourcemodel.yang for
+    the YANG representation of the RE resource model.
+
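As a hedged illustration of how an external application might invoke these northbound RPCs over RESTCONF, the sketch below builds a POST request for the delete-obj-nb URL documented above, using only the Python standard library. The admin/admin credentials are an assumed OpenDaylight default, not something defined by the OCP service:

```python
import base64
import json
import urllib.request

def build_ocp_rpc(rpc, body, host="localhost", port=8181,
                  user="admin", password="admin"):
    """Build (without sending) a RESTCONF POST for an ocp-service RPC.

    admin/admin are only the common ODL defaults, assumed here."""
    url = "http://%s:%d/restconf/operations/ocp-service:%s" % (host, port, rpc)
    req = urllib.request.Request(url, data=json.dumps(body).encode("utf-8"),
                                 method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    # HTTP basic auth header built by hand to avoid an opener setup.
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

# The delete-obj example from this guide:
req = build_ocp_rpc("delete-obj-nb", {
    "delete-obj-nb": {
        "input": {"nodeId": "ocp:MTI-101-200", "objId": "RxSigPath_5G:0"}
    }
})
```

Sending the request against a running controller is then just `urllib.request.urlopen(req)`.
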
+Java Interfaces (Southbound APIs)
+---------------------------------
+
+The southbound APIs provide a concrete implementation of the following
+OCP elementary functions: health-check, set-time, re-reset, get-param,
+modify-param, create-obj, delete-obj, get-state, modify-state and
+get-fault. Any OpenDaylight service or application (including, of
+course, the OCP service) that wants to speak OCP to radio heads will
+need to use them.
+
+SalDeviceMgmtService
+~~~~~~~~~~~~~~~~~~~~
+
+Interface SalDeviceMgmtService defines three methods corresponding to
+health-check, set-time and re-reset.
+
+**SalDeviceMgmtService.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.device.mgmt.rev150811;
+
+    public interface SalDeviceMgmtService
+        extends
+        RpcService
+    {
+
+        Future<RpcResult<HealthCheckOutput>> healthCheck(HealthCheckInput input);
+
+        Future<RpcResult<SetTimeOutput>> setTime(SetTimeInput input);
+
+        Future<RpcResult<ReResetOutput>> reReset(ReResetInput input);
+
+    }
+
+SalConfigMgmtService
+~~~~~~~~~~~~~~~~~~~~
+
+Interface SalConfigMgmtService defines two methods corresponding to
+get-param and modify-param.
+
+**SalConfigMgmtService.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.config.mgmt.rev150811;
+
+    public interface SalConfigMgmtService
+        extends
+        RpcService
+    {
+
+        Future<RpcResult<GetParamOutput>> getParam(GetParamInput input);
+
+        Future<RpcResult<ModifyParamOutput>> modifyParam(ModifyParamInput input);
+
+    }
+
+SalObjectLifecycleService
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Interface SalObjectLifecycleService defines two methods corresponding to
+create-obj and delete-obj.
+
+**SalObjectLifecycleService.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.lifecycle.rev150811;
+
+    public interface SalObjectLifecycleService
+        extends
+        RpcService
+    {
+
+        Future<RpcResult<CreateObjOutput>> createObj(CreateObjInput input);
+
+        Future<RpcResult<DeleteObjOutput>> deleteObj(DeleteObjInput input);
+
+    }
+
+SalObjectStateMgmtService
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Interface SalObjectStateMgmtService defines two methods corresponding to
+get-state and modify-state.
+
+**SalObjectStateMgmtService.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.state.mgmt.rev150811;
+
+    public interface SalObjectStateMgmtService
+        extends
+        RpcService
+    {
+
+        Future<RpcResult<GetStateOutput>> getState(GetStateInput input);
+
+        Future<RpcResult<ModifyStateOutput>> modifyState(ModifyStateInput input);
+
+    }
+
+SalFaultMgmtService
+~~~~~~~~~~~~~~~~~~~
+
+Interface SalFaultMgmtService defines only one method corresponding to
+get-fault.
+
+**SalFaultMgmtService.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.fault.mgmt.rev150811;
+
+    public interface SalFaultMgmtService
+        extends
+        RpcService
+    {
+
+        Future<RpcResult<GetFaultOutput>> getFault(GetFaultInput input);
+
+    }
+
+Notifications
+-------------
+
+In addition to indication messages, the OCP southbound plugin will
+translate specific events (e.g., connect, disconnect) coming up from the
+OCP protocol library into MD-SAL Notification objects and then publish
+them to the MD-SAL. The OCP service will likewise notify the completion
+of certain operations via Notifications.
+
+SalDeviceMgmtListener
+~~~~~~~~~~~~~~~~~~~~~
+
+An onDeviceConnected Notification will be published to the MD-SAL as
+soon as a radio head is connected to the controller; when that radio
+head disconnects, the OCP southbound plugin will publish an
+onDeviceDisconnected Notification in response to the disconnect event
+propagated from the OCP protocol library.
+
+**SalDeviceMgmtListener.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.device.mgmt.rev150811;
+
+    public interface SalDeviceMgmtListener
+        extends
+        NotificationListener
+    {
+
+        void onDeviceConnected(DeviceConnected notification);
+
+        void onDeviceDisconnected(DeviceDisconnected notification);
+
+    }
+
+OcpServiceListener
+~~~~~~~~~~~~~~~~~~
+
+The OCP service will publish an onAlignmentCompleted Notification to the
+MD-SAL once it has completed the OCP alignment procedure with the radio
+head.
+
+**OcpServiceListener.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.ocp.applications.ocp.service.rev150811;
+
+    public interface OcpServiceListener
+        extends
+        NotificationListener
+    {
+
+        void onAlignmentCompleted(AlignmentCompleted notification);
+
+    }
+
+SalObjectStateMgmtListener
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When receiving a state change indication message, the OCP southbound
+plugin will propagate the indication message to upper layer
+services/applications by publishing a corresponding onStateChangeInd
+Notification to the MD-SAL.
+
+**SalObjectStateMgmtListener.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.state.mgmt.rev150811;
+
+    public interface SalObjectStateMgmtListener
+        extends
+        NotificationListener
+    {
+
+        void onStateChangeInd(StateChangeInd notification);
+
+    }
+
+SalFaultMgmtListener
+~~~~~~~~~~~~~~~~~~~~
+
+When receiving a fault indication message, the OCP southbound plugin
+will propagate the indication message to upper layer
+services/applications by publishing a corresponding onFaultInd
+Notification to the MD-SAL.
+
+**SalFaultMgmtListener.java.**
+
+::
+
+    package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.fault.mgmt.rev150811;
+
+    public interface SalFaultMgmtListener
+        extends
+        NotificationListener
+    {
+
+        void onFaultInd(FaultInd notification);
+
+    }
+
diff --git a/docs/developer-guide/of-config-developer-guide.rst b/docs/developer-guide/of-config-developer-guide.rst
new file mode 100644 (file)
index 0000000..d8709ab
--- /dev/null
@@ -0,0 +1,133 @@
+OF-CONFIG Developer Guide
+=========================
+
+Overview
+--------
+
+OF-CONFIG defines an OpenFlow switch as an abstraction called an
+OpenFlow Logical Switch. The OF-CONFIG protocol enables configuration of
+essential artifacts of an OpenFlow Logical Switch so that an OpenFlow
+controller can communicate with and control the OpenFlow Logical Switch
+via the OpenFlow protocol.
+
+OF-CONFIG introduces an operating context for one or more OpenFlow data
+paths called an OpenFlow Capable Switch. An OpenFlow Capable Switch is
+intended to be equivalent to an actual physical or virtual network
+element (e.g. an Ethernet switch) which hosts one or more OpenFlow data
+paths by partitioning a set of OpenFlow-related resources, such as ports
+and queues, among the hosted OpenFlow data paths. The OF-CONFIG protocol
+enables dynamic association of the OpenFlow-related resources of an
+OpenFlow Capable Switch with specific OpenFlow Logical Switches that are
+hosted on that OpenFlow Capable Switch.
+
+OF-CONFIG does not specify or report how the partitioning of resources
+on an OpenFlow Capable Switch is achieved. It assumes that resources
+such as ports and queues are partitioned among multiple OpenFlow Logical
+Switches such that each OpenFlow Logical Switch can assume full control
+over the resources assigned to it.
+
+How to start
+------------
+
+-  Start the OF-CONFIG feature as follows:
+
+   ::
+
+       feature:install odl-of-config-all
+
+Compatible with NETCONF
+-----------------------
+
+-  Configure an OpenFlow Capable Switch via OpenFlow Configuration Points
+
+   Method: POST
+
+   URI:
+   http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules
+
+   Headers: "Content-Type" and "Accept" header attributes set to
+   application/xml
+
+   Payload:
+
+   .. code:: xml
+
+       <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+         <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
+         <name>testtool</name>
+         <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">10.74.151.67</address>
+         <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
+         <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">mininet</username>
+         <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">mininet</password>
+         <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
+         <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+           <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
+           <name>global-event-executor</name>
+         </event-executor>
+         <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+           <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
+           <name>binding-osgi-broker</name>
+         </binding-registry>
+         <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+           <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
+           <name>dom-broker</name>
+         </dom-registry>
+         <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+           <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
+           <name>global-netconf-dispatcher</name>
+         </client-dispatcher>
+         <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+           <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
+           <name>global-netconf-processing-executor</name>
+         </processing-executor>
+       </module>
+
+-  NETCONF establishes the connections with OpenFlow Capable Switches
+   using the parameters in the previous step. During the handshake,
+   NETCONF also learns whether the OpenFlow switch supports OF-CONFIG;
+   this information is stored in the NETCONF topology as a property of
+   the node.
+
+-  OF-CONFIG becomes aware of switches joining and leaving by
+   monitoring data changes in the NETCONF topology. For details, refer
+   to the
+   `implementation <https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob_plain;f=southbound/southbound-impl/src/main/java/org/opendaylight/ofconfig/southbound/impl/OdlOfconfigApiServiceImpl.java;hb=refs/heads/stable/boron>`__.
+
+The establishment of OF-CONFIG topology
+---------------------------------------
+
+Firstly, OF-CONFIG will check whether the newly accessed switch supports
+OF-CONFIG by querying the NETCONF interface.
+
+1. During the establishment of the NETCONF connection, NETCONF and the
+   switches will exchange their capabilities via the "hello" message.
+
+2. OF-CONFIG obtains the connection information between NETCONF and the
+   switches by monitoring data changes via the DataChangeListener
+   interface.
+
+3. After the NETCONF connection is established, the OF-CONFIG module
+   will check whether the OF-CONFIG capability is in the switch’s
+   capability list obtained in step 1.
+
+4. If it is, OF-CONFIG will carry out the following processing steps to
+   create the topology database.
+
+For details, refer to the
+`implementation <https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob_plain;f=southbound/southbound-impl/src/main/java/org/opendaylight/ofconfig/southbound/impl/listener/OfconfigListenerHelper.java;hb=refs/heads/stable/boron>`__.
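
The capability check in step 3 amounts to scanning the capability URIs advertised in the NETCONF "hello" message. The sketch below is illustrative only; the capability URN prefix is an assumption, and the authoritative logic lives in the OfconfigListenerHelper implementation linked above:

```python
# Sketch of step 3: decide whether a newly connected NETCONF node also
# supports OF-CONFIG by inspecting the capabilities it advertised in the
# "hello" message. The URN prefix below is an illustrative assumption,
# not the exact string used by the OF-CONFIG module.
OFCONFIG_CAPABILITY_HINT = "urn:onf:config:yang"

def supports_ofconfig(hello_capabilities):
    """Return True if any advertised capability looks like OF-CONFIG."""
    return any(OFCONFIG_CAPABILITY_HINT in cap for cap in hello_capabilities)
```

A node advertising only base NETCONF capabilities would be skipped, while one whose capability list mentions an OF-CONFIG module would trigger the topology-creation steps.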
+
+Secondly, the Capable Switch node and the Logical Switch node are added
+to the OF-CONFIG topology if the switch supports OF-CONFIG.
+
+OF-CONFIG’s topology comprises a Capable Switch topology (underlay) and
+a Logical Switch topology (overlay). Both of them augment
+
+/topo:network-topology/topo:topology/topo:node
+
+NETCONF will add the nodes to the topology via the path
+"/topo:network-topology/topo:topology/topo:node" when it obtains the
+configuration information of the switches.
+
+For details, refer to the
+`implementation <https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob;f=southbound/southbound-api/src/main/yang/odl-ofconfig-topology.yang;h=dbdaec46ee59da3791386011f571d7434dd1e416;hb=refs/heads/stable/boron>`__.
+
diff --git a/docs/developer-guide/openflow-protocol-library-developer-guide.rst b/docs/developer-guide/openflow-protocol-library-developer-guide.rst
new file mode 100644 (file)
index 0000000..62e193d
--- /dev/null
@@ -0,0 +1,1093 @@
+OpenFlow Protocol Library Developer Guide
+=========================================
+
+Introduction
+------------
+
+The OpenFlow Protocol Library is a component in OpenDaylight that
+mediates communication between the OpenDaylight controller and hardware
+devices supporting the OpenFlow protocol. Its primary goal is to provide
+the user (or upper layers of OpenDaylight) with a communication channel
+that can be used for managing network hardware devices.
+
+Features Overview
+-----------------
+
+There are three features inside openflowjava:
+
+-  **odl-openflowjava-protocol** provides all openflowjava bundles that
+   are needed for communication with OpenFlow devices. It ensures
+   message translation and handles network connections. It also provides
+   the OpenFlow-protocol-specific model.
+
+-  **odl-openflowjava-all** currently contains only
+   odl-openflowjava-protocol feature.
+
+-  **odl-openflowjava-stats** provides mechanism for message counting
+   and reporting. Can be used for performance analysis.
+
+odl-openflowjava-protocol Architecture
+--------------------------------------
+
+Basic bundles contained in this feature are openflow-protocol-api,
+openflow-protocol-impl, openflow-protocol-spi and util.
+
+-  **openflow-protocol-api** - contains openflow model, constants and
+   keys used for (de)serializer registration.
+
+-  **openflow-protocol-impl** - contains message factories that
+   translate binary messages into DataObjects and vice versa. The bundle
+   also contains network connection handlers - servers, netty pipeline
+   handlers, …
+
+-  **openflow-protocol-spi** - entry point for openflowjava
+   configuration, startup and close. Basically starts implementation.
+
+-  **util** - utility classes for binary-Java conversions and to ease
+   experimenter key creation
+
+odl-openflowjava-stats Feature
+------------------------------
+
+This feature runs on top of odl-openflowjava-protocol. It counts
+various message types / events and reports the counts over specified
+time periods. Statistics collection can be configured in
+openflowjava-config/src/main/resources/45-openflowjava-stats.xml.
+
+Key APIs and Interfaces
+-----------------------
+
+Basic API / SPI classes are ConnectionAdapter (RPCs/notifications) and
+SwitchConnectionProvider (configure, start, shutdown).
+
+Installation
+------------
+
+Pull the code and import project into your IDE.
+
+::
+
+    git clone ssh://<username>@git.opendaylight.org:29418/openflowjava.git
+
+Configuration
+-------------
+
+The current implementation allows you to configure:
+
+-  listening port (mandatory)
+
+-  transfer protocol (mandatory)
+
+-  switch idle timeout (mandatory)
+
+-  TLS configuration (optional)
+
+-  thread count (optional)
+
+You can find an example OpenFlow Protocol Library instance configuration
+below:
+
+::
+
+    <data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
+      <modules xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+        <!-- default OF-switch-connection-provider (port 6633) -->
+        <module>
+          <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
+          <name>openflow-switch-connection-provider-default-impl</name>
+          <port>6633</port>
+    <!--  Possible transport-protocol options: TCP, TLS, UDP -->
+          <transport-protocol>TCP</transport-protocol>
+          <switch-idle-timeout>15000</switch-idle-timeout>
+    <!--       Exemplary TLS configuration:
+                - uncomment the <tls> tag
+                - copy exemplary-switch-privkey.pem, exemplary-switch-cert.pem and exemplary-cacert.pem
+                  files into your virtual machine
+                - set VM encryption options to use copied keys
+                - start communication
+               Please visit OpenflowPlugin or Openflow Protocol Library#Documentation wiki pages
+               for detailed information regarding TLS -->
+    <!--       <tls>
+                 <keystore>/exemplary-ctlKeystore</keystore>
+                 <keystore-type>JKS</keystore-type>
+                 <keystore-path-type>CLASSPATH</keystore-path-type>
+                 <keystore-password>opendaylight</keystore-password>
+                 <truststore>/exemplary-ctlTrustStore</truststore>
+                 <truststore-type>JKS</truststore-type>
+                 <truststore-path-type>CLASSPATH</truststore-path-type>
+                 <truststore-password>opendaylight</truststore-password>
+                 <certificate-password>opendaylight</certificate-password>
+               </tls> -->
+    <!--       Exemplary thread model configuration. Uncomment <threads> tag below to adjust default thread model -->
+    <!--       <threads>
+                 <boss-threads>2</boss-threads>
+                 <worker-threads>8</worker-threads>
+               </threads> -->
+        </module>
+
+::
+
+        <!-- default OF-switch-connection-provider (port 6653) -->
+        <module>
+          <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
+          <name>openflow-switch-connection-provider-legacy-impl</name>
+          <port>6653</port>
+    <!--  Possible transport-protocol options: TCP, TLS, UDP -->
+          <transport-protocol>TCP</transport-protocol>
+          <switch-idle-timeout>15000</switch-idle-timeout>
+    <!--       Exemplary TLS configuration:
+                - uncomment the <tls> tag
+                - copy exemplary-switch-privkey.pem, exemplary-switch-cert.pem and exemplary-cacert.pem
+                  files into your virtual machine
+                - set VM encryption options to use copied keys
+                - start communication
+               Please visit OpenflowPlugin or Openflow Protocol Library#Documentation wiki pages
+               for detailed information regarding TLS -->
+    <!--       <tls>
+                 <keystore>/exemplary-ctlKeystore</keystore>
+                 <keystore-type>JKS</keystore-type>
+                 <keystore-path-type>CLASSPATH</keystore-path-type>
+                 <keystore-password>opendaylight</keystore-password>
+                 <truststore>/exemplary-ctlTrustStore</truststore>
+                 <truststore-type>JKS</truststore-type>
+                 <truststore-path-type>CLASSPATH</truststore-path-type>
+                 <truststore-password>opendaylight</truststore-password>
+                 <certificate-password>opendaylight</certificate-password>
+               </tls> -->
+    <!--       Exemplary thread model configuration. Uncomment <threads> tag below to adjust default thread model -->
+    <!--       <threads>
+                 <boss-threads>2</boss-threads>
+                 <worker-threads>8</worker-threads>
+               </threads> -->
+        </module>
+
+::
+
+        <module>
+          <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:common:config:impl">prefix:openflow-provider-impl</type>
+          <name>openflow-provider-impl</name>
+          <openflow-switch-connection-provider>
+            <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
+            <name>openflow-switch-connection-provider-default</name>
+          </openflow-switch-connection-provider>
+          <openflow-switch-connection-provider>
+            <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
+            <name>openflow-switch-connection-provider-legacy</name>
+          </openflow-switch-connection-provider>
+          <binding-aware-broker>
+            <type xmlns:binding="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">binding:binding-broker-osgi-registry</type>
+            <name>binding-osgi-broker</name>
+          </binding-aware-broker>
+        </module>
+      </modules>
+
+Possible transport-protocol options:
+
+-  TCP
+
+-  TLS
+
+-  UDP
+
+The switch idle timeout specifies the time needed to detect the idle
+state of a switch. When no message is received from the switch within
+this time, upper layers are notified about the switch's idleness. To use
+this example TLS configuration:
+
+-  uncomment the ``<tls>`` tag
+
+-  copy *exemplary-switch-privkey.pem*, *exemplary-switch-cert.pem* and
+   *exemplary-cacert.pem* files into your virtual machine
+
+-  set VM encryption options to use copied keys (please visit TLS
+   support wiki page for detailed information regarding TLS)
+
+-  start communication
+
+Thread model configuration specifies how many threads are desired to
+perform Netty’s I/O operations.
+
+-  boss-threads specifies the number of threads that register incoming
+   connections
+
+-  worker-threads specifies the number of threads performing read /
+   write (+ serialization / deserialization) operations.
+
+Architecture
+------------
+
+Public API (openflow-protocol-api)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set of interfaces and builders for immutable data transfer objects
+representing Openflow Protocol structures.
+
+Transfer objects and service APIs are inferred from several YANG models
+using a code generator to reduce the verbosity and repetitiveness of the
+code.
+
+The following YANG modules are defined:
+
+-  openflow-types - defines common Openflow specific types
+
+-  openflow-instruction - defines base Openflow instructions
+
+-  openflow-action - defines base Openflow actions
+
+-  openflow-augments - defines object augmentations
+
+-  openflow-extensible-match - defines Openflow OXM match
+
+-  openflow-protocol - defines Openflow Protocol messages
+
+-  system-notifications - defines system notification objects
+
+-  openflow-configuration - defines structures used in ConfigSubsystem
+
+These modules also reuse types from the following YANG modules:
+
+-  ietf-inet-types - IP addresses, IP prefixes, IP-protocol related
+   types
+
+-  ietf-yang-types - MAC addresses, etc.
+
+Predefined types are used to make the API contracts safer, better
+readable and documented (e.g. using MacAddress instead of a byte array…)
+
+TCP Channel pipeline (openflow-protocol-impl)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Creates channel processing pipeline based on configuration and support.
+
+.. figure:: ./images/openflowjava/500px-TCPChannelPipeline.png
+   :alt: TCP Channel pipeline
+   :width: 500px
+
+   TCP Channel pipeline
+
+**Switch Connection Provider.**
+
+Implementation of the connection point for other projects. The library
+exposes its functionality through this class. The library can be
+configured, started and shut down here. There are also methods for
+custom (de)serializer registration.
+
+**Tcp Connection Initializer.**
+
+In order to initialize TCP connection to a device (switch), OF Plugin
+calls method ``initiateConnection()`` in ``SwitchConnectionProvider``.
+This method in turn initializes (Bootstrap) server side channel towards
+the device.
+
+**TCP Handler.**
+
+Represents a single server that handles incoming connections over the
+TCP / TLS protocol. TCP Handler creates a single instance of TCP Channel
+Initializer that will initialize channels. After that it binds to the
+configured InetAddress and port. When a new device connects, TCP Handler
+registers its channel and passes control to TCP Channel Initializer.
+
+**TCP Channel Initializer.**
+
+This class is used for channel initialization / rejection and passing
+arguments. After a new channel has been registered it calls Switch
+Connection Handler’s (OF Plugin) accept method to decide if the library
+should keep the newly registered channel or if the channel should be
+closed. If the channel has been accepted, TCP Channel Initializer
+creates the whole pipeline with needed handlers and also with
+ConnectionAdapter instance. After the channel pipeline is ready, Switch
+Connection Handler is notified with ``onConnectionReady`` notification.
+OpenFlow Plugin can now start sending messages downstream.
+
+**Idle Handler.**
+
+This handler triggers an idle state notification when no messages are
+received for longer than the specified timeout. The switch idle timeout
+is received as a parameter from the ConnectionConfiguration settings.
+The Idle State Handler stays inactive while messages keep arriving
+within the switch idle timeout; once the timeout is exceeded, the
+handler creates a SwitchIdleEvent message and sends it upstream.
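+
+The timeout rule can be sketched in plain Java (class and method names
+are illustrative; the real handler builds on Netty's idle-state
+machinery rather than explicit timestamps):
+
+::
+
+    /** Sketch of the idle-detection rule; timestamps are in milliseconds. */
+    public class IdleSketch {
+        private final long idleTimeoutMillis;
+        private long lastMessageTime;
+
+        public IdleSketch(long idleTimeoutMillis, long now) {
+            this.idleTimeoutMillis = idleTimeoutMillis;
+            this.lastMessageTime = now;
+        }
+
+        public void messageReceived(long now) {
+            lastMessageTime = now; // any received message resets the idle timer
+        }
+
+        /** True when a SwitchIdleEvent should be created and sent upstream. */
+        public boolean isIdle(long now) {
+            return now - lastMessageTime > idleTimeoutMillis;
+        }
+    }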
+
+**TLS Handler.**
+
+It encrypts and decrypts messages over the TLS protocol. Engaging the
+TLS Handler into the pipeline is a matter of configuration (``<tls>``
+tag). TLS communication is either unsupported or required. The TLS
+Handler is represented as Netty's SslHandler.
+
+**OF Frame Decoder.**
+
+Parses the input stream into message frames of correct length for
+further processing. Framing is based on the OpenFlow header length. If
+the received message is shorter than the minimal length of an OpenFlow
+message (8 bytes), OF Frame Decoder waits for more data. After receiving
+at least 8 bytes, the decoder checks the length in the OpenFlow header.
+If some bytes are still missing, the decoder waits for them. Otherwise
+the OF Frame Decoder sends the correct length message to the next
+handler in the channel pipeline.
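+
+The framing rule above can be sketched in plain Java (class and method
+names are illustrative, not actual library classes; the sketch assumes
+only the 8-byte OpenFlow header with its 16-bit length field at offset
+2):
+
+::
+
+    import java.nio.ByteBuffer;
+    import java.util.ArrayList;
+    import java.util.List;
+
+    /** Minimal sketch of the OF Frame Decoder framing logic. */
+    public class FrameSketch {
+        static final int OF_HEADER_LENGTH = 8;
+
+        /** Extracts complete frames; an incomplete tail stays in the buffer. */
+        public static List<byte[]> decodeFrames(ByteBuffer in) {
+            List<byte[]> frames = new ArrayList<>();
+            while (in.remaining() >= OF_HEADER_LENGTH) {
+                // length is the 16-bit big-endian field at offset 2 of the header
+                int length = in.getShort(in.position() + 2) & 0xFFFF;
+                if (in.remaining() < length) {
+                    break; // wait for more data, as the OF Frame Decoder does
+                }
+                byte[] frame = new byte[length];
+                in.get(frame);
+                frames.add(frame);
+            }
+            return frames;
+        }
+    }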
+
+**OF Version Detector.**
+
+Detects the version of the OpenFlow protocol in use and discards
+messages of unsupported versions. If the detected version is supported,
+OF Version Detector creates a ``VersionMessageWrapper`` object
+containing the detected version and the byte message and sends this
+object upstream.
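+
+A companion sketch of the version check (names are illustrative; the
+supported-version set is an assumption - OpenFlow 1.0 is wire version
+0x01 and OpenFlow 1.3 is 0x04):
+
+::
+
+    import java.util.Set;
+
+    /** Sketch of version detection; the wrapper is a simplified stand-in. */
+    public class VersionDetectorSketch {
+        static final Set<Byte> SUPPORTED = Set.of((byte) 0x01, (byte) 0x04);
+
+        /** Simple stand-in for VersionMessageWrapper. */
+        public record VersionMessageWrapper(short version, byte[] message) { }
+
+        /** Returns a wrapper for supported versions, null to mimic discarding. */
+        public static VersionMessageWrapper detect(byte[] frame) {
+            byte version = frame[0]; // the version is the first header byte
+            if (!SUPPORTED.contains(version)) {
+                return null;
+            }
+            return new VersionMessageWrapper(version, frame);
+        }
+    }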
+
+**OF Decoder.**
+
+Chooses the correct deserialization factory (based on message type) and
+deserializes messages into generated DTOs (Data Transfer Objects). OF
+Decoder receives a ``VersionMessageWrapper`` object and passes it to
+``DeserializationFactory``, which returns the translated DTO.
+``DeserializationFactory`` creates a ``MessageCodeKey`` object with the
+version and type of the received message and the Class of the object the
+received message will be deserialized into. This object is used as the
+key when searching for the appropriate decoder in ``DecoderTable``,
+which is basically a map storing decoders. The found decoder translates
+the received message into a DTO; if no decoder was found, null is
+returned. After the translated DTO is returned to OF Decoder, the
+decoder checks whether it is null. When the DTO is null, the decoder
+logs this state and throws an Exception; otherwise it passes the DTO
+further upstream. Finally, the OF Decoder releases the ByteBuf
+containing the received and decoded byte message.
+
+**OF Encoder.**
+
+Chooses the correct serialization factory (based on the type of DTO) and
+serializes DTOs into byte messages. OF Encoder does the opposite of the
+OF Decoder using the same principle. OF Encoder receives a DTO, passes
+it for translation and, if the result is not null, sends the translated
+DTO downstream as a ByteBuf. The search for the appropriate encoder is
+done via MessageTypeKey, based on the version and class of the received
+DTO.
+
+**Delegating Inbound Handler.**
+
+Delegates received DTOs to the Connection Adapter. It also reacts to
+channelInactive and channelUnregistered events. When one of these events
+is triggered, DelegatingInboundHandler creates a DisconnectEvent message
+and sends it upstream, notifying upper layers about the switch
+disconnection.
+
+**Channel Outbound Queue.**
+
+Message flushing handler. It stores outgoing messages (DTOs) and flushes
+them. A flush is performed based on the time expired and on the number
+of messages enqueued.
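+
+The count-based half of this behavior can be sketched as follows (class
+names are illustrative; the time-based trigger is omitted for brevity):
+
+::
+
+    import java.util.ArrayList;
+    import java.util.List;
+    import java.util.function.Consumer;
+
+    /** Sketch of count-triggered flushing of queued outgoing DTOs. */
+    public class OutboundQueueSketch {
+        private final int flushThreshold;
+        private final Consumer<List<Object>> flusher;
+        private final List<Object> pending = new ArrayList<>();
+
+        public OutboundQueueSketch(int flushThreshold, Consumer<List<Object>> flusher) {
+            this.flushThreshold = flushThreshold;
+            this.flusher = flusher;
+        }
+
+        public void enqueue(Object dto) {
+            pending.add(dto);
+            if (pending.size() >= flushThreshold) {
+                flush();
+            }
+        }
+
+        public void flush() { // also called when the flush period expires
+            if (!pending.isEmpty()) {
+                flusher.accept(new ArrayList<>(pending));
+                pending.clear();
+            }
+        }
+    }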
+
+**Connection Adapter.**
+
+Provides a facade on top of the pipeline that hides netty.io specifics.
+It provides a set of methods to register for incoming messages and to
+send messages to a particular channel / session. ConnectionAdapterImpl
+basically implements three interfaces (unified in one superinterface,
+ConnectionFacade):
+
+-  ConnectionAdapter
+
+-  MessageConsumer
+
+-  OpenflowProtocolService
+
+The **ConnectionAdapter** interface has methods for setting up listeners
+(message, system and connection ready listeners), a method to check
+whether all listeners are set, a method to check whether the channel is
+alive, and a disconnect method. The disconnect method clears the
+responseCache and disables the consumption of new messages.
+
+The **MessageConsumer** interface holds only one method: ``consume()``,
+which is called from DelegatingInboundHandler. This method processes
+received DTOs based on their type. There are three types of received
+objects:
+
+-  System notifications - invoke system notifications in OpenFlow Plugin
+   (systemListener set). In case of ``DisconnectEvent`` message, the
+   Connection Adapter clears response cache and disables consume()
+   method processing,
+
+-  OpenFlow asynchronous messages (from switch) - invoke corresponding
+   notifications in OpenFlow Plugin,
+
+-  OpenFlow symmetric messages (replies to requests) - create
+   ``RpcResponseKey`` with XID and DTO’s class set. This
+   ``RpcResponseKey`` is then used to find corresponding future object
+   in responseCache. Future object is set with success flag, received
+   message and errors (if any occurred). In case no corresponding future
+   was found in responseCache, Connection Adapter logs warning and
+   discards the message. Connection Adapter also logs warning when an
+   unknown DTO is received.
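+
+The symmetric-message matching described above can be sketched with a
+response cache keyed by XID and expected reply class (all names here are
+simplified stand-ins for the real ``RpcResponseKey`` and cache):
+
+::
+
+    import java.util.Map;
+    import java.util.concurrent.CompletableFuture;
+    import java.util.concurrent.ConcurrentHashMap;
+
+    /** Sketch of XID-based reply matching in the Connection Adapter. */
+    public class ResponseCacheSketch {
+        /** Key pairing the transaction id (XID) with the expected reply class. */
+        public record RpcResponseKey(long xid, String replyClass) { }
+
+        private final Map<RpcResponseKey, CompletableFuture<Object>> cache =
+            new ConcurrentHashMap<>();
+
+        /** Request path: remember the future awaiting this XID. */
+        public CompletableFuture<Object> expectReply(RpcResponseKey key) {
+            CompletableFuture<Object> future = new CompletableFuture<>();
+            cache.put(key, future);
+            return future;
+        }
+
+        /** Reply path: false when no matching request was found. */
+        public boolean consumeReply(RpcResponseKey key, Object reply) {
+            CompletableFuture<Object> future = cache.remove(key);
+            if (future == null) {
+                return false; // the real adapter logs a warning and discards it
+            }
+            return future.complete(reply);
+        }
+    }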
+
+The **OpenflowProtocolService** interface contains all RPC methods for
+sending messages from upper layers (OpenFlow Plugin) downstream and
+responding. Request messages return a Future filled with the expected
+reply message; otherwise the returned Future is of type Void.
+
+**NOTE:** The MultipartRequest message is the only exception. It is
+basically a request - reply message type, but it would not be possible
+to process multiple subsequent MultipartReply messages if it were
+implemented as an RPC (with only one Future). This is why MultipartReply
+is implemented as a notification. OpenFlow Plugin takes care of correct
+message processing.
+
+UDP Channel pipeline (openflow-protocol-impl)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Creates UDP channel processing pipeline based on configuration and
+support. **Switch Connection Provider**, **Channel Outbound Queue** and
+**Connection Adapter** fulfill the same role as in case of TCP
+connection / channel pipeline (please see above).
+
+.. figure:: ./images/openflowjava/500px-UdpChannelPipeline.png
+   :alt: UDP Channel pipeline
+
+   UDP Channel pipeline
+
+**UDP Handler.**
+
+Represents a single server that handles incoming connections over the
+UDP (DTLS) protocol. UDP Handler creates a single instance of UDP
+Channel Initializer that will initialize channels. After that it binds
+to the configured InetAddress and port. When a new device connects, UDP
+Handler registers its channel and passes control to UDP Channel
+Initializer.
+
+**UDP Channel Initializer.**
+
+This class is used for channel initialization and passing arguments.
+After a new channel has been registered (for UDP there is always only
+one channel), UDP Channel Initializer creates the whole pipeline with
+the needed handlers.
+
+**DTLS Handler.**
+
+Not implemented yet; it will take care of secure DTLS connections.
+
+**OF Datagram Packet Handler.**
+
+Combines the functionality of OF Frame Decoder and OF Version Detector.
+It extracts messages from received datagram packets and checks whether
+the message version is supported. If a message is received from a yet
+unknown sender, OF Datagram Packet Handler creates a Connection Adapter
+for this sender and stores it under the sender's address in
+``UdpConnectionMap``. This map is also used for sending messages and for
+correct Connection Adapter lookup - to delegate messages from one
+channel to multiple sessions.
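+
+The per-sender lookup can be sketched as a map from the sender's address
+to its Connection Adapter (types are simplified placeholders):
+
+::
+
+    import java.net.InetSocketAddress;
+    import java.util.Map;
+    import java.util.concurrent.ConcurrentHashMap;
+    import java.util.function.Function;
+
+    /** Sketch of per-sender session lookup over a single UDP channel. */
+    public class UdpConnectionMapSketch<A> {
+        private final Map<InetSocketAddress, A> map = new ConcurrentHashMap<>();
+        private final Function<InetSocketAddress, A> adapterFactory;
+
+        public UdpConnectionMapSketch(Function<InetSocketAddress, A> adapterFactory) {
+            this.adapterFactory = adapterFactory;
+        }
+
+        /** Returns the adapter for this sender, creating one for unknown senders. */
+        public A adapterFor(InetSocketAddress sender) {
+            return map.computeIfAbsent(sender, adapterFactory);
+        }
+    }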
+
+**OF Datagram Packet Decoder.**
+
+Chooses the correct deserialization factory (based on message type) and
+deserializes messages into generated DTOs. OF Datagram Packet Decoder
+receives a ``VersionMessageUdpWrapper`` object and passes it to
+``DeserializationFactory``, which returns the translated DTO.
+``DeserializationFactory`` creates a ``MessageCodeKey`` object with the
+version and type of the received message and the Class of the object the
+received message will be deserialized into. This object is used as the
+key when searching for the appropriate decoder in ``DecoderTable``,
+which is basically a map storing decoders. The found decoder translates
+the received message into a DTO (DataTransferObject); if no decoder was
+found, null is returned. After the translated DTO is returned to OF
+Datagram Packet Decoder, the decoder checks whether it is null. When the
+DTO is null, the decoder logs this state. Otherwise it looks up the
+appropriate Connection Adapter in ``UdpConnectionMap`` and passes the
+DTO to the found Connection Adapter. Finally, the decoder releases the
+``ByteBuf`` containing the received and decoded byte message.
+
+**OF Datagram Packet Encoder.**
+
+Chooses the correct serialization factory (based on the type of DTO) and
+serializes DTOs into byte messages. OF Datagram Packet Encoder does the
+opposite of the OF Datagram Packet Decoder using the same principle. The
+encoder receives a DTO, passes it for translation and, if the result is
+not null, sends the translated DTO downstream as a datagram packet. The
+search for the appropriate encoder is done via MessageTypeKey, based on
+the version and class of the received DTO.
+
+SPI (openflow-protocol-spi)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Defines the interface for the library’s connection point for other
+projects. The library exposes its functionality through this interface.
+
+Integration test (openflow-protocol-it)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Tests communication with a simple client.
+
+Simple client (simple-client)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Lightweight switch simulator - programmable with desired scenarios.
+
+Utility (util)
+~~~~~~~~~~~~~~
+
+Contains utility classes, mainly for working with ByteBuf.
+
+Library’s lifecycle
+-------------------
+
+Steps (after the library’s bundle is started):
+
+-  [1] Library is configured by ConfigSubsystem (address, ports,
+   encryption, …)
+
+-  [2] Plugin injects its SwitchConnectionHandler into the Library
+
+-  [3] Plugin starts the Library
+
+-  [4] Library creates configured protocol handler (e.g. TCP Handler)
+
+-  [5] Protocol Handler creates Channel Initializer
+
+-  [6] Channel Initializer asks plugin whether to accept incoming
+   connection on each new switch connection
+
+-  [7] Plugin responds:
+
+   -  true - continue building pipeline
+
+   -  false - reject connection / disconnect channel
+
+-  [8] Library notifies Plugin with onSwitchConnected(ConnectionAdapter)
+   notification, passing reference to ConnectionAdapter, that will
+   handle the connection
+
+-  [9] Plugin registers its system and message listeners
+
+-  [10] FireConnectionReadyNotification() is triggered, announcing that
+   pipeline handlers needed for communication have been created and
+   Plugin can start communication
+
+-  [11] Plugin shuts down the Library when desired
+
+.. figure:: ./images/openflowjava/Library_lifecycle.png
+   :alt: Library lifecycle
+
+   Library lifecycle
+
+Statistics collection
+---------------------
+
+Introduction
+~~~~~~~~~~~~
+
+Statistics collection gathers message statistics. The currently
+collected statistics are (``DS`` - downstream, ``US`` - upstream):
+
+-  ``DS_ENTERED_OFJAVA`` - all messages that entered openflowjava
+   (picked up from openflowplugin)
+
+-  ``DS_ENCODE_SUCCESS`` - successfully encoded messages
+
+-  ``DS_ENCODE_FAIL`` - messages that failed during encoding
+   (serialization) process
+
+-  ``DS_FLOW_MODS_ENTERED`` - all flow-mod messages that entered
+   openflowjava
+
+-  ``DS_FLOW_MODS_SENT`` - all flow-mod messages that were successfully
+   sent
+
+-  ``US_RECEIVED_IN_OFJAVA`` - messages received from switch
+
+-  ``US_DECODE_SUCCESS`` - successfully decoded messages
+
+-  ``US_DECODE_FAIL`` - messages that failed during decoding
+   (deserialization) process
+
+-  ``US_MESSAGE_PASS`` - messages handed over to openflowplugin
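+
+The counting mechanism can be sketched with an enum of event types and a
+per-type counter (the enum mirrors the names above; class and method
+names are illustrative):
+
+::
+
+    import java.util.EnumMap;
+    import java.util.Map;
+    import java.util.concurrent.atomic.LongAdder;
+
+    /** Sketch of the statistics counter mechanism. */
+    public class StatsSketch {
+        public enum CounterEventType {
+            DS_ENTERED_OFJAVA, DS_ENCODE_SUCCESS, DS_ENCODE_FAIL,
+            DS_FLOW_MODS_ENTERED, DS_FLOW_MODS_SENT,
+            US_RECEIVED_IN_OFJAVA, US_DECODE_SUCCESS, US_DECODE_FAIL,
+            US_MESSAGE_PASS
+        }
+
+        private final Map<CounterEventType, LongAdder> counters =
+            new EnumMap<>(CounterEventType.class);
+
+        public StatsSketch() {
+            for (CounterEventType t : CounterEventType.values()) {
+                counters.put(t, new LongAdder());
+            }
+        }
+
+        public void incrementCounter(CounterEventType type) {
+            counters.get(type).increment();
+        }
+
+        public long count(CounterEventType type) {
+            return counters.get(type).longValue();
+        }
+    }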
+
+Karaf
+~~~~~
+
+To start statistics collection, install the feature with
+``feature:install odl-openflowjava-stats``. To see the logs, run
+``log:set DEBUG org.opendaylight.openflowjava.statistics`` and then
+``log:display`` (you can run ``log:list`` to check whether the logging
+has been set). To adjust the collection settings, it is enough to modify
+``45-openflowjava-stats.xml``.
+
+JConsole
+~~~~~~~~
+
+JConsole provides two commands for the statistics collection:
+
+-  printing current statistics
+
+-  resetting statistic counters
+
+After attaching JConsole to the correct process, one only needs to go to
+the MBeans
+``tab → org.opendaylight.controller → RuntimeBean → statistics-collection-service-impl
+→ statistics-collection-service-impl → Operations`` to be able to use
+these commands.
+
+TLS Support
+-----------
+
+    **Note**
+
+    see the OpenFlow Plugin Developer Guide
+
+Extensibility
+-------------
+
+Introduction
+~~~~~~~~~~~~
+
+The entry point for extensibility is ``SwitchConnectionProvider``, which
+contains methods for (de)serializer registration. To register a
+deserializer, use ``register*Deserializer(key, impl)``; to register a
+serializer, use ``register*Serializer(key, impl)``. Registration can
+occur either during configuration or at runtime.
+
+**NOTE**: When an experimenter message is received and no (de)serializer
+was registered, the library throws an ``IllegalArgumentException``.
+
+Basic Principle
+~~~~~~~~~~~~~~~
+
+In order to use extensions, you need to augment the existing model and
+register new (de)serializers.
+
+Augmenting the model:
+
+1. Create a new augmentation
+
+Registering (de)serializers:
+
+1. Create your (de)serializer
+
+2. Let it implement ``OFDeserializer<>`` / ``OFSerializer<>`` - in case
+   the structure you are (de)serializing needs to be used in Multipart
+   TableFeatures messages, let it implement ``HeaderDeserializer<>`` /
+   ``HeaderSerializer``
+
+3. Implement the prescribed methods
+
+4. Register your deserializer under the appropriate key (in our case
+   ``ExperimenterActionDeserializerKey``)
+
+5. Register your serializer under the appropriate key (in our case
+   ``ExperimenterActionSerializerKey``)
+
+6. Done, test your implementation
+
+**NOTE**: If you don’t know what key should be used with your
+(de)serializer implementation, please visit `Registration
+keys <#registration_keys>`__ page.
+
+Example
+~~~~~~~
+
+Let’s say we have vendor / experimenter action represented by this
+structure:
+
+::
+
+    struct foo_action {
+        uint16_t type;
+        uint16_t length;
+        uint32_t experimenter;
+        uint16_t first;
+        uint16_t second;
+        uint8_t  pad[4];
+    }
+
+First, we have to augment the existing model. We create a new module
+that imports "``openflow-types.yang``" (don’t forget to update your
+``pom.xml`` with the api dependency). Now we create the foo action
+identity:
+
+::
+
+    import openflow-types {prefix oft;}
+    identity foo {
+        description "Foo action description";
+        base oft:action-base;
+    }
+
+This will be used as the type in our structure. Now we must augment the
+existing action structure so that we have the desired fields first and
+second. In order to create a new augmentation, our module has to import
+"``openflow-action.yang``". The augment should look like this:
+
+::
+
+    import openflow-action {prefix ofaction;}
+    augment "/ofaction:actions-container/ofaction:action" {
+        ext:augment-identifier "foo-action";
+        leaf first {
+            type uint16;
+        }
+        leaf second {
+            type uint16;
+        }
+    }
+
+We are finished with the model changes. Run ``mvn clean compile`` to
+generate the sources. After generation is done, we need to implement our
+(de)serializer.
+
+Deserializer:
+
+::
+
+    public class FooActionDeserializer implements OFDeserializer<Action> {
+       @Override
+       public Action deserialize(ByteBuf input) {
+           ActionBuilder builder = new ActionBuilder();
+           input.skipBytes(SIZE_OF_SHORT_IN_BYTES); // we know the type of action
+           builder.setType(Foo.class);
+           input.skipBytes(SIZE_OF_SHORT_IN_BYTES); // we don't need length
+           // now create experimenterIdAugmentation - so that openflowplugin can
+           // differentiate correct vendor codec
+           ExperimenterIdActionBuilder expIdBuilder = new ExperimenterIdActionBuilder();
+           expIdBuilder.setExperimenter(new ExperimenterId(input.readUnsignedInt()));
+           builder.addAugmentation(ExperimenterIdAction.class, expIdBuilder.build());
+           FooActionBuilder fooBuilder = new FooActionBuilder();
+           fooBuilder.setFirst(input.readUnsignedShort());
+           fooBuilder.setSecond(input.readUnsignedShort());
+           builder.addAugmentation(FooAction.class, fooBuilder.build());
+           input.skipBytes(4); // padding
+           return builder.build();
+       }
+    }
+
+Serializer:
+
+::
+
+    public class FooActionSerializer implements OFSerializer<Action> {
+       @Override
+       public void serialize(Action action, ByteBuf outBuffer) {
+           outBuffer.writeShort(FOO_CODE);
+           outBuffer.writeShort(16);
+           // we don't have to check for ExperimenterIdAction augmentation - our
+           // serializer was called based on the vendor / experimenter ID,
+           // so we simply write it to the buffer
+           outBuffer.writeInt(VENDOR / EXPERIMENTER ID);
+           FooAction foo = action.getAugmentation(FooAction.class);
+           outBuffer.writeShort(foo.getFirst());
+           outBuffer.writeShort(foo.getSecond());
+           outBuffer.writeZero(4); // write padding
+       }
+    }
+
+Register both the deserializer and the serializer:
+
+::
+
+    SwitchConnectionProvider.registerDeserializer(
+        new ExperimenterActionDeserializerKey(0x04, VENDOR / EXPERIMENTER ID),
+        new FooActionDeserializer());
+    SwitchConnectionProvider.registerSerializer(
+        new ExperimenterActionSerializerKey(0x04, VENDOR / EXPERIMENTER ID),
+        new FooActionSerializer());
+
+We are ready to test our implementation.
+
+**NOTE:** Vendor / experimenter structures define only the vendor /
+experimenter ID as a common distinguisher (besides the action type). The
+vendor / experimenter ID is unique across all vendor messages - that’s
+why a vendor can register only one class under
+ExperimenterAction(De)SerializerKey, and that’s why the vendor has to
+switch / choose between its subclasses / subtypes on its own.
+
+Detailed walkthrough: Deserialization extensibility
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**External interface & class description.**
+
+**OFGeneralDeserializer:**
+
+-  ``OFDeserializer<E extends DataObject>``
+
+   -  *deserialize(ByteBuf)* - deserializes given ByteBuf
+
+-  ``HeaderDeserializer<E extends DataObject>``
+
+   -  *deserializeHeaders(ByteBuf)* - deserializes only E headers (used
+      in Multipart TableFeatures messages)
+
+**DeserializerRegistryInjector**
+
+-  ``injectDeserializerRegistry(DeserializerRegistry)`` - injects
+   deserializer registry into deserializer. Useful when custom
+   deserializer needs access to other deserializers.
+
+**NOTE:** DeserializerRegistryInjector is not an OFGeneralDeserializer
+descendant. It is a standalone interface.
+
+**MessageCodeKey and its descendants** These keys are used for
+deserializer lookup in DeserializerRegistry. MessageCodeKey is used in
+the general case, while its descendants are used in more special cases.
+For example, ActionDeserializerKey is used for Action deserializer
+lookup and (de)registration. Vendors are provided with special keys
+containing only the most necessary fields. These keys usually start with
+the "Experimenter" prefix (MatchEntryDeserializerKey is an exception).
+
+MessageCodeKey has these fields:
+
+-  short version - Openflow wire version number
+
+-  int value - value read from byte message
+
+-  Class<?> clazz - class of the object being created
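+
+A sketch of how such a key drives decoder lookup (types simplified; a
+record stands in for the real ``MessageCodeKey``, which is a plain class
+with equals/hashCode over these fields):
+
+::
+
+    import java.util.HashMap;
+    import java.util.Map;
+
+    /** Sketch of key-based decoder lookup in a DecoderTable-style map. */
+    public class DecoderTableSketch {
+        /** version = wire version, value = code read, clazz = target DTO class. */
+        public record MessageCodeKey(short version, int value, Class<?> clazz) { }
+
+        private final Map<MessageCodeKey, Object> decoders = new HashMap<>();
+
+        public void register(MessageCodeKey key, Object decoder) {
+            decoders.put(key, decoder);
+        }
+
+        /** Returns null when no decoder was registered, as the OF Decoder describes. */
+        public Object lookup(MessageCodeKey key) {
+            return decoders.get(key);
+        }
+    }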
+
+Scenario walkthrough
+
+-  [1] The scenario starts in a custom bundle which wants to extend the
+   library’s functionality. The custom bundle creates deserializers
+   which implement the exposed ``OFDeserializer`` / ``HeaderDeserializer``
+   interfaces (wrapped under the ``OFGeneralDeserializer`` unifying super
+   interface).
+
+-  [2] Created deserializers are paired with corresponding
+   ExperimenterKeys, which are used for deserializer lookup. If you
+   don’t know what key should be used with your (de)serializer
+   implementation, please visit `Registration
+   keys <#registration_keys>`__ page.
+
+-  [3] Paired deserializers are passed to the OF Library via
+   **SwitchConnectionProvider**.\ *registerCustomDeserializer(key,
+   impl)*. Library registers the deserializer.
+
+   -  While registering, Library checks if the deserializer is an
+      instance of **DeserializerRegistryInjector** interface. If yes,
+      the DeserializerRegistry (which stores all deserializer
+      references) is injected into the deserializer.
+
+This is particularly useful when a deserializer needs access to other
+deserializers. For example, ``InstructionsDeserializer`` needs access to
+``ActionsDeserializer`` in order to be able to process
+OFPIT\_WRITE\_ACTIONS/OFPIT\_APPLY\_ACTIONS instructions.
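+
+The injection check performed at registration time can be sketched like
+this (interfaces are simplified to the single method described above):
+
+::
+
+    /** Sketch of the registry-injection check done during registration. */
+    public class InjectionSketch {
+        public interface DeserializerRegistry { Object lookup(String name); }
+
+        public interface DeserializerRegistryInjector {
+            void injectDeserializerRegistry(DeserializerRegistry registry);
+        }
+
+        /** A deserializer that needs other deserializers implements the injector. */
+        public static class InstructionsDeserializerSketch
+                implements DeserializerRegistryInjector {
+            public DeserializerRegistry registry;
+
+            @Override
+            public void injectDeserializerRegistry(DeserializerRegistry registry) {
+                this.registry = registry; // grants access to e.g. the actions decoder
+            }
+        }
+
+        /** Mimics registration: inject the registry only where it is asked for. */
+        public static void register(Object deserializer, DeserializerRegistry registry) {
+            if (deserializer instanceof DeserializerRegistryInjector injector) {
+                injector.injectDeserializerRegistry(registry);
+            }
+        }
+    }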
+
+.. figure:: ./images/openflowjava/800px-Extensibility.png
+   :alt: Deserialization scenario walkthrough
+
+   Deserialization scenario walkthrough
+
+Detailed walkthrough: Serialization extensibility
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**External interface & class description.**
+
+**OFGeneralSerializer:**
+
+-  OFSerializer<E extends DataObject>
+
+   -  *serialize(E,ByteBuf)* - serializes E into given ByteBuf
+
+-  ``HeaderSerializer<E extends DataObject>``
+
+   -  *serializeHeaders(E,ByteBuf)* - serializes E headers (used in
+      Multipart TableFeatures messages)
+
+**SerializerRegistryInjector**
+
+-  ``injectSerializerRegistry(SerializerRegistry)`` - injects the
+   serializer registry into a serializer. Useful when a custom
+   serializer needs access to other serializers.
+
+**NOTE:** SerializerRegistryInjector is not an OFGeneralSerializer
+descendant.
+
+**MessageTypeKey and its descendants** These keys are used for
+serializer lookup in SerializerRegistry. MessageTypeKey is used in the
+general case, while its descendants are used in more special cases. For
+example, ActionSerializerKey is used for Action serializer lookup and
+(de)registration. Vendors are provided with special keys containing only
+the most necessary fields. These keys usually start with the
+"Experimenter" prefix (MatchEntrySerializerKey is an exception).
+
+MessageTypeKey has these fields:
+
+-  *short version* - OpenFlow wire version number
+
+-  *Class<E> msgType* - DTO class
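
Why do these key classes matter? They implement value equality, so a key that is structurally equal to the one used at registration time finds the stored serializer in a hash map. The ``MessageTypeKey`` below is a simplified sketch of the two fields above, not the library class:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyLookupSketch {

    // Simplified stand-in for MessageTypeKey: wire version + DTO class.
    static final class MessageTypeKey {
        final short version;
        final Class<?> msgType;

        MessageTypeKey(short version, Class<?> msgType) {
            this.version = version;
            this.msgType = msgType;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof MessageTypeKey)) return false;
            MessageTypeKey k = (MessageTypeKey) o;
            return version == k.version && msgType.equals(k.msgType);
        }

        @Override
        public int hashCode() {
            return Objects.hash(version, msgType);
        }
    }

    public static void main(String[] args) {
        Map<MessageTypeKey, String> registry = new HashMap<>();
        // Register under (wire version 4 = OpenFlow 1.3, String.class as a dummy DTO).
        registry.put(new MessageTypeKey((short) 4, String.class), "string-serializer");
        // A structurally equal key, built later, finds the same entry.
        System.out.println(registry.get(new MessageTypeKey((short) 4, String.class)));
        // prints string-serializer
    }
}
```

Because lookup keys are rebuilt from the incoming message, two keys with the same version and DTO class must compare equal for the registry to work.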
+
+**Scenario walkthrough**
+
+-  [1] Serialization extensibility principles are similar to the
+   deserialization principles. The scenario starts in a custom bundle.
+   The custom bundle creates serializers which implement the exposed
+   OFSerializer / HeaderSerializer interfaces (wrapped under the
+   OFGeneralSerializer unifying super interface).
+
+-  [2] Created serializers are paired with their ExperimenterKeys, which
+   are used for serializer lookup. If you don’t know what key should be
+   used with your serializer implementation, please visit `Registration
+   keys <#registration_keys>`__ page.
+
+-  [3] Paired serializers are passed to the OF Library via
+   **SwitchConnectionProvider**.\ *registerCustomSerializer(key, impl)*.
+   The library then registers the serializer.
+
+-  While registering, the library checks whether the serializer is an
+   instance of the **SerializerRegistryInjector** interface. If it is,
+   the SerializerRegistry (which stores all serializer references) is
+   injected into the serializer.
+
+This is particularly useful when the serializer needs access to other
+serializers. For example, InstructionsSerializer needs access to
+ActionsSerializer in order to process
+OFPIT\_WRITE\_ACTIONS/OFPIT\_APPLY\_ACTIONS instructions.
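The delegation enabled by the injected registry can be sketched as follows. The names and the StringBuilder "buffer" are stand-ins (the real interfaces serialize into a Netty ByteBuf):

```java
import java.util.HashMap;
import java.util.Map;

public class SerializerDelegationSketch {

    // Stand-in for OFGeneralSerializer; StringBuilder stands in for ByteBuf.
    interface OFGeneralSerializer {
        void serialize(String input, StringBuilder out);
    }

    // Stand-in for SerializerRegistryInjector.
    interface SerializerRegistryInjector {
        void injectSerializerRegistry(Map<String, OFGeneralSerializer> registry);
    }

    static class ActionsSerializer implements OFGeneralSerializer {
        public void serialize(String input, StringBuilder out) {
            out.append("action:").append(input).append(';');
        }
    }

    // Needs ActionsSerializer to handle OFPIT_WRITE_ACTIONS / OFPIT_APPLY_ACTIONS.
    static class InstructionsSerializer
            implements OFGeneralSerializer, SerializerRegistryInjector {
        Map<String, OFGeneralSerializer> registry;

        public void injectSerializerRegistry(Map<String, OFGeneralSerializer> registry) {
            this.registry = registry;
        }

        public void serialize(String input, StringBuilder out) {
            out.append("instruction[");
            registry.get("actions").serialize(input, out); // delegate to ActionsSerializer
            out.append(']');
        }
    }

    public static void main(String[] args) {
        Map<String, OFGeneralSerializer> registry = new HashMap<>();
        registry.put("actions", new ActionsSerializer());
        InstructionsSerializer instr = new InstructionsSerializer();
        instr.injectSerializerRegistry(registry);
        StringBuilder buf = new StringBuilder();
        instr.serialize("output:1", buf);
        System.out.println(buf); // prints instruction[action:output:1;]
    }
}
```

The instructions serializer never holds a direct reference to the actions serializer; it always goes through the injected registry, so a replaced actions serializer is picked up automatically.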
+
+.. figure:: ./images/openflowjava/800px-Extensibility2.png
+   :alt: Serialization scenario walkthrough
+
+   Serialization scenario walkthrough
+
+Internal description
+~~~~~~~~~~~~~~~~~~~~
+
+**SwitchConnectionProvider** ``SwitchConnectionProvider`` constructs and
+initializes both the deserializer and serializer registries with default
+(de)serializers. It also injects the ``DeserializerRegistry`` into the
+``DeserializationFactory`` and the ``SerializerRegistry`` into the
+``SerializationFactory``. When a call to register a custom
+(de)serializer is made, ``SwitchConnectionProvider`` calls the register
+method on the appropriate registry.
+
+**DeserializerRegistry / SerializerRegistry** Both registries contain an
+init() method to initialize the default (de)serializers. Registration
+checks that neither the key nor the (de)serializer implementation is
+``null``; if at least one of them is ``null``, a
+``NullPointerException`` is thrown. Otherwise, the (de)serializer
+implementation is checked to see whether it is a
+``(De)SerializerRegistryInjector`` instance. If it is an instance of
+this interface, the registry is injected into the (de)serializer
+implementation.
+
+``getSerializer(key)`` or ``getDeserializer(key)`` performs a registry
+lookup. Because there are two separate interfaces that might be put into
+the registry, the registry uses their unifying super interface. The
+get(De)Serializer(key) method casts the super interface to the desired
+type. There is also a null check on the (de)serializer received from the
+registry: if no (de)serializer was found, a ``NullPointerException``
+with a key description is thrown.
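
The lookup-and-cast logic just described can be sketched like this (stand-in types and a string key; the real registry is keyed by the MessageTypeKey classes above):

```java
import java.util.HashMap;
import java.util.Map;

public class RegistryLookupSketch {

    interface OFGeneralSerializer { }                        // unifying super interface (stand-in)
    interface OFSerializer<E> extends OFGeneralSerializer { } // one of the two sub-interfaces

    static final Map<Object, OFGeneralSerializer> REGISTRY = new HashMap<>();

    // Mirrors getSerializer(key): look up, null-check with the key in the
    // message, then cast the super interface to the requested type.
    @SuppressWarnings("unchecked")
    public static <S extends OFGeneralSerializer> S getSerializer(Object key) {
        OFGeneralSerializer found = REGISTRY.get(key);
        if (found == null) {
            throw new NullPointerException("Serializer for key: " + key + " was not found");
        }
        return (S) found;
    }

    public static void main(String[] args) {
        REGISTRY.put("port-status", new OFSerializer<String>() { });
        OFSerializer<String> s = getSerializer("port-status");
        System.out.println(s != null); // prints true
        try {
            getSerializer("unknown-key");
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // the message names the missing key
        }
    }
}
```

Including the key in the exception message is what makes a failed lookup debuggable: the log immediately shows which version/type combination was never registered.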
+
+Registration keys
+~~~~~~~~~~~~~~~~~
+
+**Deserialization.**
+
+**Possible openflow extensions and their keys**
+
+There are three vendor-specific extensions in OpenFlow v1.0 and eight in
+OpenFlow v1.3. These extensions are registered under the registration
+keys shown in the table below:
+
+.. list-table:: **Deserialization**
+   :header-rows: 1
+
+   * - Extension type
+     - OpenFlow
+     - Registration key
+     - Utility class
+   * - Vendor message
+     - 1.0
+     - ExperimenterIdDeserializerKey(1, experimenterId,
+       ExperimenterMessage.class)
+     - ExperimenterDeserializerKeyFactory
+   * - Action
+     - 1.0
+     - ExperimenterActionDeserializerKey(1, experimenter ID)
+     - .
+   * - Stats message
+     - 1.0
+     - ExperimenterMultipartReplyMessageDeserializerKey(1, experimenter ID)
+     - ExperimenterDeserializerKeyFactory
+   * - Experimenter message
+     - 1.3
+     - ExperimenterIdDeserializerKey(4, experimenterId,
+       ExperimenterMessage.class)
+     - ExperimenterDeserializerKeyFactory
+   * - Match entry
+     - 1.3
+     - MatchEntryDeserializerKey(4, (number) ${oxm\_class},
+       (number) ${oxm\_field}); key.setExperimenterId(experimenter ID);
+     - .
+   * - Action
+     - 1.3
+     - ExperimenterActionDeserializerKey(4, experimenter ID)
+     - .
+   * - Instruction
+     - 1.3
+     - ExperimenterInstructionDeserializerKey(4, experimenter ID)
+     - .
+   * - Multipart
+     - 1.3
+     - ExperimenterIdDeserializerKey(4, experimenterId,
+       MultipartReplyMessage.class)
+     - ExperimenterDeserializerKeyFactory
+   * - Multipart - Table features
+     - 1.3
+     - ExperimenterIdDeserializerKey(4, experimenterId,
+       TableFeatureProperties.class)
+     - ExperimenterDeserializerKeyFactory
+   * - Error
+     - 1.3
+     - ExperimenterIdDeserializerKey(4, experimenterId,
+       ErrorMessage.class)
+     - ExperimenterDeserializerKeyFactory
+   * - Queue property
+     - 1.3
+     - ExperimenterIdDeserializerKey(4, experimenterId,
+       QueueProperty.class)
+     - ExperimenterDeserializerKeyFactory
+   * - Meter band type
+     - 1.3
+     - ExperimenterIdDeserializerKey(4, experimenterId,
+       MeterBandExperimenterCase.class)
+     - ExperimenterDeserializerKeyFactory
+
+**Serialization.**
+
+**Possible openflow extensions and their keys**
+
+There are three vendor-specific extensions in OpenFlow v1.0 and seven in
+OpenFlow v1.3. These extensions are registered under the registration
+keys shown in the table below:
+
+.. list-table:: **Serialization**
+   :header-rows: 1
+
+   * - Extension type
+     - OpenFlow
+     - Registration key
+     - Utility class
+   * - Vendor message
+     - 1.0
+     - ExperimenterIdSerializerKey<>(1, experimenterId,
+       ExperimenterInput.class)
+     - ExperimenterSerializerKeyFactory
+   * - Action
+     - 1.0
+     - ExperimenterActionSerializerKey(1, experimenterId, sub-type)
+     - .
+   * - Stats message
+     - 1.0
+     - ExperimenterMultipartRequestSerializerKey(1, experimenter ID)
+     - ExperimenterSerializerKeyFactory
+   * - Experimenter message
+     - 1.3
+     - ExperimenterIdSerializerKey<>(4, experimenterId,
+       ExperimenterInput.class)
+     - ExperimenterSerializerKeyFactory
+   * - Match entry
+     - 1.3
+     - MatchEntrySerializerKey<>(4, (class) ${oxm\_class},
+       (class) ${oxm\_field}); key.setExperimenterId(experimenter ID)
+     - .
+   * - Action
+     - 1.3
+     - ExperimenterActionSerializerKey(4, experimenterId, sub-type)
+     - .
+   * - Instruction
+     - 1.3
+     - ExperimenterInstructionSerializerKey(4, experimenter ID)
+     - .
+   * - Multipart
+     - 1.3
+     - ExperimenterIdSerializerKey<>(4, experimenterId,
+       MultipartRequestExperimenterCase.class)
+     - ExperimenterSerializerKeyFactory
+   * - Multipart - Table features
+     - 1.3
+     - ExperimenterIdSerializerKey<>(4, experimenterId,
+       TableFeatureProperties.class)
+     - ExperimenterSerializerKeyFactory
+   * - Meter band type
+     - 1.3
+     - ExperimenterIdSerializerKey<>(4, experimenterId,
+       MeterBandExperimenterCase.class)
+     - ExperimenterSerializerKeyFactory
+
diff --git a/docs/developer-guide/ovsdb-netvirt.rst b/docs/developer-guide/ovsdb-netvirt.rst
new file mode 100644 (file)
index 0000000..81781f8
--- /dev/null
@@ -0,0 +1,1877 @@
+OVSDB NetVirt
+=============
+
+OVSDB Integration
+-----------------
+
+The Open vSwitch database (OVSDB) Southbound Plugin component for
+OpenDaylight implements the OVSDB `RFC
+7047 <https://tools.ietf.org/html/rfc7047>`__ management protocol that
+allows the southbound configuration of switches that support OVSDB. The
+component comprises a library and a plugin. The OVSDB protocol uses
+JSON-RPC calls to manipulate a physical or virtual switch that supports
+OVSDB. Many vendors support OVSDB on various hardware platforms. The
+OpenDaylight controller uses the library project to interact with an OVS
+instance.
+
+    **Note**
+
+    Read the OVSDB User Guide before you begin development.
+
+OpenDaylight OVSDB integration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OpenStack integration architecture uses the following technologies:
+
+-  `RFC 7047 <https://tools.ietf.org/html/rfc7047>`__ - The Open vSwitch
+   Database Management Protocol
+
+-  `OpenFlow
+   v1.3 <http://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.3.4.pdf>`__
+
+-  `OpenStack Neutron ML2
+   Plugin <https://wiki.openstack.org/wiki/Neutron/ML2>`__
+
+OpenDaylight Mechanism Driver for OpenStack Neutron ML2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This code is a part of OpenStack and is available at:
+https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py
+
+The ODL neutron driver implementation can be found at:
+https://github.com/openstack/networking-odl
+
+To make changes to this code, please read about `Neutron
+Development <https://wiki.openstack.org/wiki/NeutronDevelopment>`__.
+
+Before submitting the code, run the following tests:
+
+::
+
+    tox -e py27
+    tox -e pep8
+
+Importing the code into Eclipse or IntelliJ
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To import code, look at either of the following pages:
+
+-  `Getting started with
+   Eclipse <https://wiki.opendaylight.org/view/Eclipse_Setup>`__
+
+-  `Developing with
+   Intellij <https://wiki.opendaylight.org/view/OpenDaylight_Controller:Developing_With_Intellij>`__
+
+.. figure:: ./images/OVSDB_Eclipse.png
+   :alt: Avoid conflicting project names
+
+   Avoid conflicting project names
+
+-  To ensure that a project in Eclipse does not have a conflicting name
+   in the workspace, select Advanced > Name Template >
+   [groupId].[artifactId] when importing the project.
+
+Browsing the code
+^^^^^^^^^^^^^^^^^
+
+The code is mirrored to
+`GitHub <https://github.com/opendaylight/ovsdb>`__ to make reading code
+online easier.
+
+Source code organization
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The OVSDB project generates the following Karaf modules:
+
+-  ovsdb.karaf - all OpenStack NetVirt related artifacts
+
+-  ovsdb.library-karaf - the OVSDB library reference implementation
+
+-  ovsdb.openstack.net-virt-sfc-karaf - OpenFlow service function
+   chaining
+
+-  ovsdb.hwvtepsouthbound-karaf - the hw\_vtep schema southbound plugin
+
+-  ovsdb.southbound-karaf - the Open\_vSwitch schema plugin
+
+The following are brief descriptions of the directories you will find
+at the root ovsdb/ directory:
+
+-  *commons* contains the parent POM file for the Maven project, which
+   is used to keep settings consistent across the project.
+
+-  *features* contains all the Karaf related feature files.
+
+-  *hwvtepsouthbound* contains the hw\_vtep southbound plugin.
+
+-  *karaf* contains the ovsdb library and southbound and OpenStack
+   bundles for the OpenStack integration.
+
+-  *library* contains a schema-independent library that is a reference
+   implementation for RFC 7047.
+
+-  *openstack* contains the northbound handlers for Neutron used by
+   OVSDB, as well as their providers. The NetVirt SFC implementation is
+   also located here.
+
+-  *ovsdb-ui* contains the DLUX implementation for displaying network
+   virtualization.
+
+-  *resources* contains useful scripts, how-tos, demos and other
+   resources.
+
+-  *schemas* contains the OVSDB schemas that are implemented in
+   OpenDaylight.
+
+-  *southbound* contains the plugin for converting from the OVSDB
+   protocol to MD-SAL and vice-versa.
+
+-  *utils* contains a collection of utilities for using the OpenFlow
+   plugin, southbound, Neutron and other helper methods.
+
+Building and running OVSDB
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+| **Prerequisites**
+
+-  JDK 1.7+
+
+-  Maven 3+
+
+Building a Karaf feature and deploying it in an OpenDaylight Karaf distribution
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. From the root ovsdb/ directory, run **mvn clean install**.
+
+2. Unzip the karaf-<VERSION\_NUMBER>-SNAPSHOT.zip file created in step
+   1, located in the ovsdb/karaf/target/ directory:
+
+::
+
+    unzip karaf-<VERSION_NUMBER>-SNAPSHOT.zip
+
+Downloading OVSDB’s Karaf distribution
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Instead of building, you can download the latest OVSDB distribution from
+the Nexus server. The link for that is:
+
+::
+
+    https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/ovsdb/karaf/1.3.0-SNAPSHOT/
+
+Running Karaf feature from OVSDB’s Karaf distribution
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. Start ODL from the unzipped directory:
+
+::
+
+    bin/karaf
+
+2. Once Karaf has started, and you see the OpenDaylight ASCII art in
+   the console, the last step is to start the OVSDB plugin framework
+   with the following command in the Karaf console:
+
+::
+
+    feature:install odl-ovsdb-openstack
+
+Sample output from the Karaf console
+''''''''''''''''''''''''''''''''''''
+
+::
+
+    opendaylight-user@root>feature:list | grep -i ovsdb
+    opendaylight-user@root>feature:list -i | grep ovsdb
+    odl-ovsdb-southbound-api          | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: api
+    odl-ovsdb-southbound-impl         | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl
+    odl-ovsdb-southbound-impl-rest    | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl :: REST
+    odl-ovsdb-southbound-impl-ui      | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl :: UI
+    odl-ovsdb-library                 | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-library-1.2.1-SNAPSHOT        | OpenDaylight :: library
+    odl-ovsdb-openstack               | 1.2.1-SNAPSHOT   | x         | ovsdb-1.2.1-SNAPSHOT                    | OpenDaylight :: OVSDB :: OpenStack Network Virtual
+
+Testing patches
+^^^^^^^^^^^^^^^
+
+It is recommended that you test your patches locally before submission.
+
+Neutron integration
+^^^^^^^^^^^^^^^^^^^
+
+To test patches to the Neutron integration, you need a `Multi-Node
+Devstack Setup <http://devstack.org/guides/multinode-lab.html>`__. The
+``resources`` folder contains sample ``local.conf`` files.
+
+Open vSwitch
+^^^^^^^^^^^^
+
+To test patches to the library, you will need a working `Open
+vSwitch <http://openvswitch.org/>`__. Packages are available for most
+Linux distributions. If you would like to run multiple versions of Open
+vSwitch for testing you can use
+`docker-ovs <https://github.com/dave-tucker/docker-ovs>`__ to run Open
+vSwitch in `Docker <https://www.docker.com/>`__ containers.
+
+Mininet
+^^^^^^^
+
+`Mininet <http://mininet.org/>`__ is another useful resource for testing
+patches. Mininet creates multiple Open vSwitches connected in a
+configurable topology.
+
+Vagrant
+^^^^^^^
+
+The Vagrantfile in the root of the OVSDB source code provides an easy
+way to create VMs for tests.
+
+-  To install Vagrant on your machine, follow the steps at: `Installing
+   Vagrant <https://docs.vagrantup.com/v2/installation/>`__.
+
+**Testing with Devstack**
+
+1. Start the controller.
+
+::
+
+    vagrant up devstack-control
+    vagrant ssh devstack-control
+    cd devstack
+    ./stack.sh
+
+2. Run the following:
+
+::
+
+    vagrant up devstack-compute-1
+    vagrant ssh devstack-compute-1
+    cd devstack
+    ./stack.sh
+
+3. To start testing, create a new VM.
+
+::
+
+    nova boot --flavor m1.tiny --image $(nova image-list | grep 'cirros-0.3.1-x86_64-uec\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep private | awk '{print $2}') test
+
+To create three, use the following:
+
+::
+
+    nova boot --flavor m1.tiny --image $(nova image-list | grep 'cirros-0.3.1-x86_64-uec\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep private | awk '{print $2}') --num-instances 3 test
+
+**To get a Mininet installation for testing:**
+
+::
+
+    vagrant up mininet
+    vagrant ssh mininet
+
+Use the following to clean up when finished:
+
+::
+
+    vagrant destroy
+
+OVSDB integration design
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Resources
+^^^^^^^^^
+
+| See the following:
+
+-  `Network
+   Heresy <http://networkheresy.com/2012/09/15/remembering-the-management-plane/>`__
+
+| See the OVSDB YouTube Channel for getting started videos and other
+  tutorials:
+
+-  `ODL OVSDB Youtube
+   Channel <http://www.youtube.com/channel/UCMYntfZ255XGgYFrxCNcAzA>`__
+
+-  `Mininet OVSDB
+   Tutorial <https://wiki.opendaylight.org/view/OVSDB_Integration:Mininet_OVSDB_Tutorial>`__
+
+-  `OVSDB Getting
+   Started <https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization>`__
+
+OpenDaylight OVSDB southbound plugin architecture and design
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Open vSwitch (OVS) is generally accepted as the de facto standard for
+virtual switching in open, hypervisor-based solutions. Many other
+virtual switch implementations, proprietary or otherwise, use OVS in
+some form. For information on OVS, see `Open
+vSwitch <http://openvswitch.org/>`__.
+
+In Software Defined Networking (SDN), controllers and applications
+interact using two channels: OpenFlow and OVSDB. OpenFlow addresses the
+forwarding side of the OVS functionality. OVSDB, on the other hand,
+addresses the management plane. A simple and concise overview of the
+Open vSwitch Database (OVSDB) is available at:
+http://networkstatic.net/getting-started-ovsdb/
+
+Overview of OpenDaylight Controller architecture
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The OpenDaylight controller platform is designed as a highly modular and
+plugin based middleware that serves various network applications in a
+variety of use-cases. The modularity is achieved through the Java OSGi
+framework. The controller consists of many Java OSGi bundles that work
+together to provide the required controller functionalities.
+
+| The bundles can be placed in the following broad categories:
+
+-  Network Service Functional Modules (Examples: Topology Manager,
+   Inventory Manager, Forwarding Rules Manager, and others)
+
+-  NorthBound API Modules (Examples: Topology APIs, Bridge Domain APIs,
+   Neutron APIs, Connection Manager APIs, and others)
+
+-  Service Abstraction Layer (SAL) - (Inventory Services, DataPath
+   Services, Topology Services, Network Config, and others)
+
+-  SouthBound Plugins (OpenFlow Plugin, OVSDB Plugin, OpenDove Plugin,
+   and others)
+
+-  Application Modules (Simple Forwarding, Load Balancer)
+
+Each layer of the Controller architecture performs specified tasks, and
+hence aids in modularity. While the Northbound API layer addresses all
+the REST-Based application needs, the SAL layer takes care of
+abstracting the SouthBound plugin protocol specifics from the Network
+Service functions.
+
+Each of the SouthBound Plugins serves a different purpose, with some
+overlapping. For example, the OpenFlow plugin might serve the Data-Plane
+needs of an OVS element, while the OVSDB plugin can serve the management
+plane needs of the same OVS element. While the OpenFlow plugin talks the
+OpenFlow protocol with the OVS element, the OVSDB plugin uses the OVSDB
+schema over JSON-RPC transport.
+
+OVSDB southbound plugin
+~~~~~~~~~~~~~~~~~~~~~~~
+
+| The `Open vSwitch Database Management
+  Protocol-draft-02 <http://tools.ietf.org/html/draft-pfaff-ovsdb-proto-02>`__
+  and `Open vSwitch
+  Manual <http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf>`__ provide
+  theoretical information about OVSDB. The OVSDB protocol draft is
+  generic enough to lay the groundwork on Wire Protocol and Database
+  Operations, and the OVS Manual currently covers 13 tables leaving
+  space for future OVS expansion, and vendor expansions on proprietary
+  implementations. The OVSDB Protocol is a database records transport
+  protocol using JSON-RPC 1.0. For information on the protocol structure,
+  see `Getting Started with
+  OVSDB <http://networkstatic.net/getting-started-ovsdb/>`__. The
+  OpenDaylight OVSDB southbound plugin consists of one or more OSGi
+  bundles addressing the following services or functionalities:
+
+-  Connection Service - Based on Netty
+
+-  Network Configuration Service
+
+-  Bidirectional JSON-RPC Library
+
+-  OVSDB Schema definitions and Object mappers
+
+-  Overlay Tunnel management
+
+-  OVSDB to OpenFlow plugin mapping service
+
+-  Inventory Service
+
+Connection service
+~~~~~~~~~~~~~~~~~~
+
+| One of the primary services that most southbound plugins provide in
+  OpenDaylight is a Connection Service. The service provides
+  protocol-specific connectivity to network elements, and supports the
+  connectivity management services as specified by the OpenDaylight
+  Connection Manager. The connectivity services include:
+
+-  Connection to a specified element given IP-address, L4-port, and
+   other connectivity options (such as authentication,…)
+
+-  Disconnection from an element
+
+-  Handling Cluster Mode change notifications to support the
+   OpenDaylight Clustering/High-Availability feature
+
+Network Configuration Service
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+| The goal of the OpenDaylight Network Configuration services is to
+  provide complete management plane solutions needed to successfully
+  install, configure, and deploy the various SDN based network services.
+  These are generic services which can be implemented in part or full by
+  any south-bound protocol plugin. The south-bound plugins can be either
+  of the following:
+
+-  The new network virtualization protocol plugins such as OVSDB
+   JSON-RPC
+
+-  Traditional management protocols such as SNMP, or any others in
+   between.
+
+The above definition, and more information on Network Configuration
+Services, is available at:
+https://wiki.opendaylight.org/view/OpenDaylight_Controller:NetworkConfigurationServices
+
+Bidirectional JSON-RPC library
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The OVSDB plugin implements a Bidirectional JSON-RPC library. The
+library is designed as a module that manages the Netty connection
+towards the element.
+
+| The main responsibilities of this Library are:
+
+-  Marshalling and demarshalling JSON strings to and from JSON objects
+
+-  Sending and receiving those JSON strings to and from the network
+   element
+
+OVSDB Schema definitions and Object mappers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The OVSDB Schema definitions and Object Mapping layer sits above the
+JSON-RPC library. It maps the generic JSON objects to OVSDB schema POJOs
+(Plain Old Java Object) and vice-versa. This layer mostly provides the
+Java Object definition for the corresponding OVSDB schema (13 of them)
+and also will provide much more friendly API abstractions on top of
+these object data. This helps in hiding the JSON semantics from the
+functional modules such as Configuration Service and Tunnel management.
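
The mapping layer's job can be illustrated as turning a generic, already-parsed key/value row into a typed object. The ``Bridge`` POJO and field names below are illustrative stand-ins, not the plugin's actual schema classes:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchemaMappingSketch {

    // Illustrative POJO for one row of an OVSDB Bridge table.
    static class Bridge {
        String name;
        List<String> datapathId;

        @Override
        public String toString() {
            return "Bridge{name=" + name + ", datapathId=" + datapathId + "}";
        }
    }

    // Maps a generic (already JSON-parsed) row to the typed POJO, hiding
    // the JSON semantics from modules such as Configuration Service.
    @SuppressWarnings("unchecked")
    public static Bridge toBridge(Map<String, Object> row) {
        Bridge b = new Bridge();
        b.name = (String) row.get("name");
        b.datapathId = (List<String>) row.get("datapath_id");
        return b;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("name", "br-int");
        row.put("datapath_id", List.of("0000aabbccddeeff"));
        System.out.println(toBridge(row));
        // prints Bridge{name=br-int, datapathId=[0000aabbccddeeff]}
    }
}
```

Callers work only with the typed object; the untyped map, column names, and casts stay confined to the mapping layer.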
+
+| On the demarshalling side, the mapping logic differentiates between
+  Request and Response messages as follows:
+
+-  Request messages are mapped by their "method"
+
+-  | Response messages are mapped by their IDs, which were originally
+     populated by the Request message. The JSON semantics of these OVSDB
+     schemas are quite complex. The following figures summarize two of
+     the end-to-end scenarios:
+
+
+.. figure:: ./images/ConfigurationService-example1.png
+   :alt: End-to-end handling of a Create Bridge request
+
+   End-to-end handling of a Create Bridge request
+
+.. figure:: ./images/MonitorResponse.png
+   :alt: End-to-end handling of a monitor response
+
+   End-to-end handling of a monitor response
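
The ID-based pairing used for responses can be sketched as a pending-request map: each outgoing request records its generated ID, and a response is classified purely by looking that ID up. The method names here are illustrative, not the plugin's API:

```java
import java.util.HashMap;
import java.util.Map;

public class RpcCorrelationSketch {

    // Pending requests, keyed by the JSON-RPC "id" we generated.
    static final Map<Integer, String> PENDING = new HashMap<>();
    static int nextId = 0;

    // Sending a request records id -> method, so the response can be typed later.
    public static int sendRequest(String method) {
        int id = nextId++;
        PENDING.put(id, method);
        return id;
    }

    // Responses carry no method; they are matched purely by id.
    public static String classifyResponse(int id) {
        String method = PENDING.remove(id);
        if (method == null) {
            throw new IllegalStateException("No pending request with id " + id);
        }
        return method; // the mapper then picks the response schema for this method
    }

    public static void main(String[] args) {
        int id = sendRequest("transact");
        System.out.println(classifyResponse(id)); // prints transact
    }
}
```

Removing the entry on match keeps the map bounded and makes duplicate or stale responses detectable.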
+
+Overlay tunnel management
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Network Virtualization using OVS is achieved through Overlay Tunnels.
+The actual type of the tunnel may be GRE, VXLAN, or STT. The differences
+in encapsulation and configuration determine the tunnel type.
+Establishing a tunnel using configuration service requires just the
+sending of OVSDB messages towards the ovsdb-server. However, the scaling
+issues that would arise on the state management at the data-plane (using
+OpenFlow) can get challenging. Also, this module can assist in various
+optimizations in the presence of Gateways. It can also help in providing
+Service guarantees for the VMs using these overlays with the help of
+underlay orchestration.
+
+OVSDB to OpenFlow plugin mapping service
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+| The connect() of the ConnectionService would result in a Node that
+  represents an ovsdb-server. The CreateBridgeDomain() Configuration on
+  the above Node would result in creating an OVS bridge. This OVS Bridge
+  is an OpenFlow Agent for the OpenDaylight OpenFlow plugin with its own
+  Node represented as (example) OF\|xxxx.yyyy.zzzz. Without any help
+  from the OVSDB plugin, the Node Mapping Service of the Controller
+  platform would not be able to map the following:
+
+::
+
+    {OVSDB_NODE + BRIDGE_IDENTIFIER} <---> {OF_NODE}.
+
+Without such mapping, it would be extremely difficult for applications
+to manage and maintain such nodes. This Mapping Service provided by the
+OVSDB plugin essentially helps in providing more value-added services
+to the orchestration layers that sit atop the Northbound APIs (such as
+OpenStack).
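
As a sketch of the idea only (hypothetical Python, not the plugin's
actual implementation), such a bidirectional mapping between
{OVSDB node + bridge identifier} and the OpenFlow node can be kept as a
pair of dictionaries; the class and node-id formats here are
illustrative:

```python
class NodeMappingService:
    """Toy bidirectional map: (ovsdb node, bridge id) <---> OF node."""

    def __init__(self):
        self._to_of = {}     # (ovsdb_node, bridge_id) -> of_node
        self._to_ovsdb = {}  # of_node -> (ovsdb_node, bridge_id)

    def add(self, ovsdb_node, bridge_id, of_node):
        # Record the mapping in both directions so lookups are symmetric.
        key = (ovsdb_node, bridge_id)
        self._to_of[key] = of_node
        self._to_ovsdb[of_node] = key

    def of_node(self, ovsdb_node, bridge_id):
        return self._to_of.get((ovsdb_node, bridge_id))

    def ovsdb_bridge(self, of_node):
        return self._to_ovsdb.get(of_node)
```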
+
+OpenDaylight OVSDB Developer Getting Started Video Series
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This video series was started to help developers bootstrap into OVSDB
+development.
+
+-  `OpenDaylight OVSDB Developer Getting
+   Started <http://www.youtube.com/watch?v=ieB645oCIPs>`__
+
+-  `OpenDaylight OVSDB Developer Getting Started - Northbound API
+   Usage <http://www.youtube.com/watch?v=xgevyaQ12cg>`__
+
+-  `OpenDaylight OVSDB Developer Getting Started - Java
+   APIs <http://www.youtube.com/watch?v=xgevyaQ12cg>`__
+
+-  `OpenDaylight OVSDB Developer Getting Started - OpenStack Integration
+   OpenFlow v1.0 <http://www.youtube.com/watch?v=NayuY6J-AMA>`__
+
+Other developer tutorials
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  `OVSDB NetVirt
+   Tutorial <https://docs.google.com/presentation/d/1KIuNDuUJGGEV37Zk9yzx9OSnWExt4iD2Z7afycFLf_I/edit?usp=sharing>`__
+
+-  `Youtube of OVSDB NetVirt
+   tutorial <https://www.youtube.com/watch?v=2axNKHvt5MY&list=PL8F5jrwEpGAiJG252ShQudYeodGSsks2l&index=43>`__
+
+-  `OVSDB OpenFlow v1.3 Neutron ML2
+   Integration <https://wiki.opendaylight.org/view/OVSDB:OVSDB_OpenStack_Guide>`__
+
+-  `Open vSwitch Database Table Explanations and Simple Jackson
+   Tutorial <http://networkstatic.net/getting-started-ovsdb/>`__
+
+OVSDB integration: New features
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Schema independent library
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An OVS connection is a node which can have multiple databases, each
+represented by a schema; a single connection can therefore have
+multiple schemas. OVSDB supports multiple schemas: currently two
+schemas are available in OVSDB, but there is no restriction on the
+number of schemas. Owing to the Northbound v3 API, no code changes in
+ODL are needed to support additional schemas.
+
+| Schemas:
+
+-  openvswitch: Schema wrapper that represents
+   http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf
+
+-  hardwarevtep: Schema wrapper that represents
+   http://openvswitch.org/docs/vtep.5.pdf
+
+Port security
+^^^^^^^^^^^^^
+
+Because security rules can be obtained from a port object, OVSDB can
+apply OpenFlow rules. These rules match on the types of traffic that
+the OpenStack tenant VM is allowed to use.
+
+Support for security groups is very experimental. There are limitations
+in determining the state of flows in the Open vSwitch. See `Open
+vSwitch and the Intelligent
+Edge <https://www.youtube.com/watch?v=DSop2uLJZS8>`__ from Justin
+Pettit for a deep dive into the challenges faced when creating a
+flow-based port security implementation. The current set of rules that
+will be installed only supports filtering of the TCP protocol. This is
+because, via a Nicira TCP\_Flag read, we can match on a flow's TCP\_SYN
+flag and permit or deny the flow based on the Neutron port security
+rules. If rules are requested for ICMP and UDP, they are ignored until
+greater visibility from the Linux kernel is available, as outlined in
+the OpenStack presentation mentioned earlier.
+
+Using the port security groups of Neutron, one can add rules that
+restrict the network access of the tenants. The OVSDB Neutron
+integration checks the configured port security rules and applies them
+by means of OpenFlow rules.
+
+Through the ML2 interface, Neutron security rules are available in the
+port object, following this scope: Neutron Port → Security Group →
+Security Rules.
+
+The current rules are applied on the basis of the following attributes:
+ingress/egress, tcp protocol, port range, and prefix.
+
+OpenStack workflow
+''''''''''''''''''
+
+1. Create a stack.
+
+2. Add the network and subnet.
+
+3. Add the Security Group and Rules.
+
+    **Note**
+
+    This is no different from what users normally do in regular
+    OpenStack deployments.
+
+::
+
+    neutron security-group-create group1 --description "Group 1"
+    neutron security-group-list
+    neutron security-group-rule-create --direction ingress --protocol tcp group1
+
+4. Start the tenant, specifying the security-group.
+
+::
+
+    nova boot --flavor m1.tiny \
+    --image $(nova image-list | grep 'cirros-0.3.1-x86_64-uec\s' | awk '{print $2}') \
+    --nic net-id=$(neutron net-list | grep 'vxlan2' | awk '{print $2}') vxlan2 \
+    --security-groups group1
+
+Examples: Rules supported
+'''''''''''''''''''''''''
+
+::
+
+    neutron security-group-create group2 --description "Group 2"
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 54 group2
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 group2
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1633 group2
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 group2
+
+::
+
+    neutron security-group-create group3 --description "Group 3"
+    neutron security-group-rule-create --direction ingress --protocol tcp --remote-ip-prefix 10.200.0.0/16 group3
+
+::
+
+    neutron security-group-create group4 --description "Group 4"
+    neutron security-group-rule-create --direction ingress --remote-ip-prefix 172.24.0.0/16 group4
+
+::
+
+    neutron security-group-create group5 --description "Group 5"
+    neutron security-group-rule-create --direction ingress --protocol tcp group5
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 54 group5
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 group5
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1633 group5
+    neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 group5
+
+::
+
+    neutron security-group-create group6 --description "Group 6"
+    neutron security-group-rule-create --direction ingress --protocol tcp --remote-ip-prefix 0.0.0.0/0 group6
+
+::
+
+    neutron security-group-create group7 --description "Group 7"
+    neutron security-group-rule-create --direction egress --protocol tcp --port-range-min 443 --remote-ip-prefix 172.16.240.128/25 group7
+
+**Reference gist**: `Gist <https://gist.github.com/anonymous/1543a410d57f491352c8>`__
+
+Security group rules supported in ODL
+'''''''''''''''''''''''''''''''''''''
+
+The following rule formats are supported in the current implementation.
+The direction (ingress/egress) is always expected. Rules are implemented
+such that TCP-SYN packets that do not satisfy the rules are dropped.
+
++--------------------------+--------------------------+--------------------------+
+| Proto                    | Port                     | IP Prefix                |
++==========================+==========================+==========================+
+| TCP                      | x                        | x                        |
++--------------------------+--------------------------+--------------------------+
+| Any                      | Any                      | x                        |
++--------------------------+--------------------------+--------------------------+
+| TCP                      | x                        | Any                      |
++--------------------------+--------------------------+--------------------------+
+| TCP                      | Any                      | Any                      |
++--------------------------+--------------------------+--------------------------+
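
The matching logic implied by the table above can be sketched as
follows. This is an illustrative Python model of the decision only, not
ODL's actual OpenFlow rule programming; rule and packet shapes here are
assumptions made for the example:

```python
from ipaddress import ip_address, ip_network

def syn_allowed(packet, rules):
    """packet: dict with 'proto', 'dst_port', 'src_ip'.
    rules: list of dicts where a missing key means 'Any' for that attribute."""
    for rule in rules:
        if rule.get("proto", "Any") not in ("Any", packet["proto"]):
            continue
        if rule.get("port", "Any") not in ("Any", packet["dst_port"]):
            continue
        prefix = rule.get("prefix", "Any")
        if prefix != "Any" and ip_address(packet["src_ip"]) not in ip_network(prefix):
            continue
        return True   # matched a rule: permit the TCP SYN
    return False      # no rule matched: the TCP-SYN packet is dropped
```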
+
+Limitations
+'''''''''''
+
+-  Soon, conntrack will be supported by OVS. Until then, TCP flags are
+   used as a way of checking for connection state. Specifically, that is
+   done by matching on the TCP-SYN flag.
+
+-  The param *--port-range-max* in *security-group-rule-create* is not
+   used until the implementation uses conntrack.
+
+-  No UDP/ICMP specific match support is provided.
+
+-  No IPv6 support is provided.
+
+L3 forwarding
+^^^^^^^^^^^^^
+
+OVSDB extends support for the use of an ODL-Neutron-driver so that
+OVSDB can configure OF 1.3 rules to route IPv4 packets. The driver
+eliminates the need for the router of the L3 Agent. In order to
+accomplish that, OVS 2.1 or a newer version is required. OVSDB also
+supports inbound/outbound NAT and floating IPs.
+
+Starting OVSDB and OpenStack
+''''''''''''''''''''''''''''
+
+1. Build or download OVSDB distribution, as mentioned in `building a
+   Karaf feature section <#ovsdbBuildSteps>`__.
+
+2. `Install
+   Vagrant <http://docs.vagrantup.com/v2/installation/index.html>`__.
+
+3. Enable the L3 Forwarding feature:
+
+::
+
+    echo 'ovsdb.l3.fwd.enabled=yes' >> ./opendaylight/configuration/config.ini
+    echo 'ovsdb.l3gateway.mac=${GATEWAY_MAC}' >> ./configuration/config.ini
+
+4. Run the following commands to get the ODL Neutron drivers:
+
+::
+
+    git clone https://github.com/dave-tucker/odl-neutron-drivers.git
+    cd odl-neutron-drivers
+    vagrant up devstack-control devstack-compute-1
+
+5. Use ssh to go to the control node, and clone odl-neutron-drivers
+   again:
+
+::
+
+    vagrant ssh devstack-control
+    git clone https://github.com/dave-tucker/odl-neutron-drivers.git
+    cd odl-neutron-drivers
+    sudo python setup.py install
+
+Leave this shell open.
+
+6. Start odl, as mentioned in `running Karaf feature
+   section <#ovsdbStartingOdl>`__.
+
+7. To see the processing of Neutron events related to L3, run this from
+   the console prompt:
+
+::
+
+    log:set debug org.opendaylight.ovsdb.openstack.netvirt.impl.NeutronL3Adapter
+
+8. From a shell on the control node (either an open ssh session or
+   vagrant ssh devstack-control), run:
+
+::
+
+    cd ~/devstack && ./stack.sh
+
+9. From a new shell in the host system, run the following:
+
+::
+
+    cd odl-neutron-drivers
+    vagrant ssh devstack-compute-1
+    cd ~/devstack && ./stack.sh
+
+OpenStack workflow
+''''''''''''''''''
+
+.. figure:: ./images/L3FwdSample.png
+   :alt: Sample workflow
+
+   Sample workflow
+
+Use the following steps to set up a workflow like the one shown in
+figure above.
+
+1. Set up authentication. From a shell on the stack control node (or
+   via vagrant ssh devstack-control):
+
+::
+
+    source openrc admin admin
+
+::
+
+    rm -f id_rsa_demo* ; ssh-keygen -t rsa -b 2048 -N "" -f id_rsa_demo
+    nova keypair-add --pub-key id_rsa_demo.pub demo_key
+    # nova keypair-list
+
+2. Create two networks and two subnets.
+
+::
+
+    neutron net-create net1 --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \
+     --provider:network_type gre --provider:segmentation_id 555
+
+::
+
+    neutron subnet-create --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \
+    net1 10.0.0.0/16 --name subnet1 --dns-nameserver 8.8.8.8
+
+::
+
+    neutron net-create net2 --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \
+     --provider:network_type gre --provider:segmentation_id 556
+
+::
+
+    neutron subnet-create --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \
+     net2 20.0.0.0/16 --name subnet2 --dns-nameserver 8.8.8.8
+
+3. Create a router, and add an interface to each of the two subnets.
+
+::
+
+    neutron router-create demorouter --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}')
+    neutron router-interface-add demorouter subnet1
+    neutron router-interface-add demorouter subnet2
+    # neutron router-port-list demorouter
+
+4. Create two tenant instances.
+
+::
+
+    nova boot --poll --flavor m1.nano --image $(nova image-list | grep 'cirros-0.3.2-x86_64-uec\s' | awk '{print $2}') \
+     --nic net-id=$(neutron net-list | grep -w net1 | awk '{print $2}'),v4-fixed-ip=10.0.0.10 \
+     --availability-zone nova:devstack-control \
+     --key-name demo_key host10
+
+::
+
+    nova boot --poll --flavor m1.nano --image $(nova image-list | grep 'cirros-0.3.2-x86_64-uec\s' | awk '{print $2}') \
+     --nic net-id=$(neutron net-list | grep -w net2 | awk '{print $2}'),v4-fixed-ip=20.0.0.20 \
+     --availability-zone nova:devstack-compute-1 \
+     --key-name demo_key host20
+
+Limitations
+'''''''''''
+
+-  To use this feature, you need OVS 2.1 or a newer version.
+
+-  Owing to OF limitations, ICMP responses due to routing failures, like
+   TTL expired or host unreachable, are not generated.
+
+-  The MAC address of the default route is not automatically mapped. In
+   order to route to L3 destinations outside the networks of the tenant,
+   manual configuration of the default route is necessary. To provide
+   the MAC address of the default route, use ovsdb.l3gateway.mac in the
+   file configuration/config.ini.
+
+-  This feature is a tech preview; it depends on later versions of
+   OpenStack to be used without the provided neutron-driver.
+
+-  No IPv6 support is provided.
+
+| **More information on L3 forwarding**:
+
+-  odl-neutron-driver:
+   https://github.com/dave-tucker/odl-neutron-drivers
+
+-  OF rules example:
+   http://dtucker.co.uk/hack/building-a-router-with-openvswitch.html
+
+LBaaS
+^^^^^
+
+Load-Balancing-as-a-Service (LBaaS) creates an Open vSwitch powered
+L3-L4 stateless load-balancer in a virtualized network environment so
+that individual TCP connections destined to a designated virtual IP
+(VIP) are sent to the appropriate servers (that is to say, serving app
+VMs). The load-balancer works in a session-preserving, proactive manner
+without involving the controller during flow setup.
+
+A Neutron northbound interface is provided to create a VIP which will
+map to a pool of servers (that is to say, members) within a subnet. The
+pools consist of members identified by an IP address. The goal is to
+closely match the API to the OpenStack LBaaS v2 API:
+http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html.
+
+Creating an OpenStack workflow
+''''''''''''''''''''''''''''''
+
+1. Create a subnet.
+
+2. Create a floating VIP *A* that maps to a private VIP *B*.
+
+3. Create a Loadbalancer pool *X*.
+
+::
+
+    neutron lb-pool-create --name http-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id XYZ
+
+4. Create a Loadbalancer pool member *Y* and associate it with pool *X*.
+
+::
+
+    neutron lb-member-create --address 10.0.0.10 --protocol-port 80 http-pool
+    neutron lb-member-create --address 10.0.0.11 --protocol-port 80 http-pool
+    neutron lb-member-create --address 10.0.0.12 --protocol-port 80 http-pool
+    neutron lb-member-create --address 10.0.0.13 --protocol-port 80 http-pool
+
+5. Create a Loadbalancer instance *Z*, and associate pool *X* and VIP
+   *B* with it.
+
+::
+
+    neutron lb-vip-create --name http-vip --protocol-port 80 --protocol HTTP --subnet-id XYZ http-pool
+
+Implementation
+''''''''''''''
+
+The current implementation of the proactive stateless load-balancer
+uses the "multipath" action in the Open vSwitch. The "multipath" action
+takes a max\_link parameter value (which is the same as the number of
+pool members) as input, and performs a hash of the fields to get a
+value in the range [0, max\_link). The value of the hash is used as an
+index to select a pool member to handle that session.
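
The member-selection idea behind the "multipath" action can be modeled
as follows. This Python sketch mirrors the hash-modulo selection in
spirit only; OVS computes its own hash of the L4 fields (e.g. with the
symmetric_l4 algorithm) in the datapath:

```python
import hashlib

def select_member(flow_fields, members):
    """Pick a pool member for a session by hashing its identifying fields."""
    # Stable hash of the session-identifying fields (e.g. the 5-tuple), so
    # every packet of the same session selects the same member
    # (session-preserving behavior, with no controller involvement).
    digest = hashlib.sha256(repr(sorted(flow_fields.items())).encode()).digest()
    # Reduce the hash to an index in [0, max_link), max_link = len(members).
    index = int.from_bytes(digest[:4], "big") % len(members)
    return members[index]
```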
+
+Open vSwitch rules
+''''''''''''''''''
+
+Assuming that table=20 contains all the rules to forward the traffic
+destined for a specific destination MAC address, the following rules
+need to be programmed in the LBaaS service table=10. The programmed
+rules make the translation from the VIP to a different pool member for
+every session.
+
+-  Proactive forward rules:
+
+::
+
+    sudo ovs-ofctl -O OpenFlow13 add-flow s1 "table=10,reg0=0,ip,nw_dst=10.0.0.5,actions=load:0x1->NXM_NX_REG0[[]],multipath(symmetric_l4, 1024, modulo_n, 4, 0, NXM_NX_REG1[0..12]),resubmit(,10)"
+    sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=0,actions=mod_dl_dst:00:00:00:00:00:10,mod_nw_dst:10.0.0.10,goto_table:20
+    sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=1,actions=mod_dl_dst:00:00:00:00:00:11,mod_nw_dst:10.0.0.11,goto_table:20
+    sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=2,actions=mod_dl_dst:00:00:00:00:00:12,mod_nw_dst:10.0.0.12,goto_table:20
+    sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=3,actions=mod_dl_dst:00:00:00:00:00:13,mod_nw_dst:10.0.0.13,goto_table:20
+
+-  Proactive reverse rules:
+
+::
+
+    sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,ip,tcp,tp_src=80,actions=mod_dl_src:00:00:00:00:00:05,mod_nw_src:10.0.0.5,goto_table:20
+
+OVSDB project code
+''''''''''''''''''
+
+The current implementation handles all Neutron calls in the
+net-virt/LBaaSHandler.java code, and makes calls to the
+net-virt-providers/LoadBalancerService to program appropriate flowmods.
+The rules are updated whenever there is a change in the Neutron LBaaS
+settings. No cache of state is kept in net-virt or the providers.
+
+Limitations
+'''''''''''
+
+Owing to the inflexibility of the multipath action, the existing LBaaS
+implementation comes with some limitations:
+
+-  TCP, HTTP and HTTPS are the supported protocols for the pool.
+   (Caution: you can lose access to the members if you assign
+   {Proto:TCP, Port:22} to the LB.)
+
+-  Member weights are ignored.
+
+-  The update of an LB instance is done as a delete + add, and not an
+   actual delta.
+
+-  The update of an LB member is not supported (because weights are
+   ignored).
+
+-  Deletion of an LB member leads to the reprogramming of the LB on all
+   nodes (because of the way multipath does link hash).
+
+-  There is only a single LB instance per subnet because the pool-id is
+   not reported in the create load-balancer call.
+
+OVSDB Library Developer Guide
+-----------------------------
+
+Overview
+~~~~~~~~
+
+The OVSDB library manages the Netty connections to network nodes and
+handles bidirectional JSON-RPC messages. It not only provides OVSDB
+protocol functionality to the OpenDaylight OVSDB plugin but can also be
+used as a standalone Java library for the OVSDB protocol.
+
+The main responsibilities of the OVSDB library include:
+
+-  Managing connections to peers
+
+-  Marshaling and unmarshaling JSON strings to and from JSON objects
+
+-  Marshaling and unmarshaling JSON strings exchanged with the Network
+   Element
+
+Connection Service
+~~~~~~~~~~~~~~~~~~
+
+The OVSDB library provides connection management through the
+OvsdbConnection interface. The OvsdbConnection interface provides OVSDB
+connection management APIs which include both active and passive
+connections. From the library perspective, active OVSDB connections are
+initiated from the controller to OVS nodes while passive OVSDB
+connections are initiated from OVS nodes to the controller. In the
+active connection scenario, an application needs to provide the IP
+address and listening port of the OVS nodes to the library management
+API. In the passive connection scenario, on the other hand, the library
+management API only requires the controller's listening port.
+
+For a passive connection scenario, the library also provides a
+connection event listener through the OvsdbConnectionListener interface.
+The listener interface has connected() and disconnected() methods to
+notify an application when a new passive connection is established or an
+existing connection is terminated.
+
+SSL Connection
+~~~~~~~~~~~~~~
+
+In addition to a regular TCP connection, the OvsdbConnection interface
+also provides a connection management API for an SSL connection. To
+start an OVSDB connection with SSL, an application will need to provide
+a Java SSLContext object to the management API. There are different ways
+to create a Java SSLContext, but in most cases a Java KeyStore with
+certificate and private key provided by the application is required.
+Detailed steps for creating a Java SSLContext are out of the scope of
+this document and can be found in the Java documentation for the `Java
+class SSLContext <http://goo.gl/5svszT>`__.
+
+In the active connection scenario, the library uses the given SSLContext
+to create a Java SSLEngine and configures the SSL engine with the client
+mode for SSL handshaking. Normally clients are not required to
+authenticate themselves.
+
+In the passive connection scenario, the library uses the given
+SSLContext to create a Java SSLEngine which will operate in server mode
+for SSL handshaking. For security reasons, the SSLv3 protocol and some
+cipher suites are disabled. Currently the OVSDB server only supports the
+TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA cipher suite and the following
+protocols: SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2.
+
+The SSL engine is also configured to operate on two-way authentication
+mode for passive connection scenarios, i.e., the OVSDB server
+(controller) will authenticate clients (OVS nodes) and clients (OVS
+nodes) are also required to authenticate the server (controller). In the
+two-way authentication mode, an application should keep a trust manager
+to store the certificates of trusted clients and initialize a Java
+SSLContext with this trust manager. Thus during the SSL handshaking
+process the OVSDB server (controller) can use the trust manager to
+verify clients and only accept connection requests from trusted clients.
+On the other hand, users should also configure OVS nodes to authenticate
+the controller. Open vSwitch already supports this functionality in the
+ovsdb-server command with option ``--ca-cert=cacert.pem`` and
+``--bootstrap-ca-cert=cacert.pem``. On the OVS node, a user can use the
+option ``--ca-cert=cacert.pem`` to specify a controller certificate
+directly and the node will only allow connections to the controller with
+the specified certificate. If the OVS node runs ovsdb-server with option
+``--bootstrap-ca-cert=cacert.pem``, it will authenticate the controller
+with the specified certificate cacert.pem. If the certificate file
+doesn’t exist, it will attempt to obtain a certificate from the peer
+(controller) on its first SSL connection and save it to the named PEM
+file ``cacert.pem``. Here is an example of ovsdb-server with
+``--bootstrap-ca-cert=cacert.pem`` option:
+
+``ovsdb-server --pidfile --detach --log-file --remote punix:/var/run/openvswitch/db.sock --remote=db:hardware_vtep,Global,managers --private-key=/etc/openvswitch/ovsclient-privkey.pem --certificate=/etc/openvswitch/ovsclient-cert.pem --bootstrap-ca-cert=/etc/openvswitch/vswitchd.cacert``
+
+OVSDB protocol transactions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OVSDB protocol defines the RPC transaction methods in RFC 7047. The
+following RPC methods are supported in OVSDB protocol:
+
+-  List databases
+
+-  Get schema
+
+-  Transact
+
+-  Cancel
+
+-  Monitor
+
+-  Update notification
+
+-  Monitor cancellation
+
+-  Lock operations
+
+-  Locked notification
+
+-  Stolen notification
+
+-  Echo
+
+According to RFC 7047, an OVSDB server must implement all methods, and
+an OVSDB client is only required to implement the "Echo" method and is
+otherwise free to implement whichever methods suit its needs. However,
+the OVSDB library currently doesn’t support all RPC methods. For the
+"Echo" method, the library can handle "Echo" messages from a peer and
+send a JSON response message back, but the library doesn’t support
+actively sending an "Echo" JSON request to a peer. The other
+unsupported RPC methods are listed below:
+
+-  Cancel
+
+-  Lock operations
+
+-  Locked notification
+
+-  Stolen notification
+
+In the OVSDB library the RPC methods are defined in the Java interface
+OvsdbRPC. The library also provides a high-level interface OvsdbClient
+as the main interface to interact with peers through the OVSDB protocol.
+In the passive connection scenario, each connection will have a
+corresponding OvsdbClient object, and the application can obtain the
+OvsdbClient object through connection listener callback methods. In
+other words, if the application implements the OvsdbConnectionListener
+interface, it will get notifications of connection status changes with
+the corresponding OvsdbClient object of that connection.
+
+OVSDB database operations
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+RFC 7047 also defines database operations, such as insert, delete, and
+update, to be performed as part of a "transact" RPC request. The OVSDB
+library defines the data operations in Operations.java and provides the
+TransactionBuilder class to help build "transact" RPC requests. To build
+a JSON-RPC transact request message, the application can obtain the
+TransactionBuilder object through a transactBuilder() method in the
+OvsdbClient interface.
+
+The TransactionBuilder class provides the following methods to help
+build transactions:
+
+-  getOperations(): Get the list of operations in this transaction.
+
+-  add(): Add data operation to this transaction.
+
+-  build(): Return the list of operations in this transaction. This is
+   the same as the getOperations() method.
+
+-  execute(): Send the JSON RPC transaction to peer.
+
+-  getDatabaseSchema(): Get the database schema of this transaction.
+
+If the application wants to build and send a "transact" RPC request to
+modify OVSDB tables on a peer, it can take the following steps:
+
+1. Statically import parameter "op" in Operations.java
+
+   ``import static org.opendaylight.ovsdb.lib.operations.Operations.op;``
+
+2. Obtain a transaction builder through the transactBuilder() method in
+   OvsdbClient:
+
+   ``TransactionBuilder transactionBuilder = ovsdbClient.transactBuilder(dbSchema);``
+
+3. Add operations to transaction builder:
+
+   ``transactionBuilder.add(op.insert(schema, row));``
+
+4. Send transaction to peer and get JSON RPC response:
+
+   ``operationResults = transactionBuilder.execute().get();``
+
+       **Note**
+
+       Although the "select" operation is supported in the OVSDB
+       library, the library implementation is a little different from
+       RFC 7047. In RFC 7047, section 5.2.2 describes the "select"
+       operation as follows:
+
+   “The "rows" member of the result is an array of objects. Each object
+   corresponds to a matching row, with each column specified in
+   "columns" as a member, the column’s name as the member name, and its
+   value as the member value. If "columns" is not specified, all the
+   table’s columns are included (including the internally generated
+   "\_uuid" and "\_version" columns).”
+
+   The OVSDB library implementation always requires the column’s name in
+   the "columns" field of a JSON message. If the "columns" field is not
+   specified, none of the table’s columns are included. If the
+   application wants to get the table entry with all columns, it needs
+   to specify all the columns’ names in the "columns" field.
+
+Reference Documentation
+~~~~~~~~~~~~~~~~~~~~~~~
+
+RFC 7047, The Open vSwitch Database Management Protocol:
+https://tools.ietf.org/html/rfc7047
+
+OVSDB MD-SAL Southbound Plugin Developer Guide
+----------------------------------------------
+
+Overview
+~~~~~~~~
+
+The Open vSwitch Database (OVSDB) Model Driven Service Abstraction Layer
+(MD-SAL) Southbound Plugin provides an MD-SAL based interface to Open
+vSwitch systems. This is done by augmenting the MD-SAL topology node
+with a YANG model which replicates some (but not all) of the Open
+vSwitch schema.
+
+OVSDB MD-SAL Southbound Plugin Architecture and Operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The architecture and operation of the OVSDB MD-SAL Southbound plugin
+are illustrated in the following set of diagrams.
+
+Connecting to an OVSDB Node
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An OVSDB node is a system which is running the OVS software and is
+capable of being managed by an OVSDB manager. The OVSDB MD-SAL
+Southbound plugin in OpenDaylight is capable of operating as an OVSDB
+manager. Depending on the configuration of the OVSDB node, the
+connection of the OVSDB manager can be active or passive.
+
+Active OVSDB Node Manager Workflow
+''''''''''''''''''''''''''''''''''
+
+An active OVSDB node manager connection is made when OpenDaylight
+initiates the connection to the OVSDB node. In order for this to work,
+you must configure the OVSDB node to listen on a TCP port for the
+connection (i.e. OpenDaylight is active and the OVSDB node is passive).
+This option can be configured on the OVSDB node using the following
+command:
+
+::
+
+    ovs-vsctl set-manager ptcp:6640
+
+The following diagram illustrates the sequence of events which occur
+when OpenDaylight initiates an active OVSDB manager connection to an
+OVSDB node.
+
+.. figure:: ./images/ovsdb-sb-active-connection.jpg
+   :alt: Active OVSDB Manager Connection
+
+   Active OVSDB Manager Connection
+
+Step 1
+    Create an OVSDB node by using RESTCONF or an OpenDaylight plugin.
+    The OVSDB node is listed under the OVSDB topology node.
+
+Step 2
+    Add the OVSDB node to the OVSDB MD-SAL southbound configuration
+    datastore. The OVSDB southbound provider is registered to listen for
+    data change events on the portion of the MD-SAL topology data store
+    which contains the OVSDB southbound topology node augmentations. The
+    addition of an OVSDB node causes an event which is received by the
+    OVSDB Southbound provider.
+
+Step 3
+    The OVSDB Southbound provider initiates a connection to the OVSDB
+    node using the connection information provided in the configuration
+    OVSDB node (i.e. IP address and TCP port number).
+
+Step 4
+    The OVSDB Southbound provider adds the OVSDB node to the OVSDB
+    MD-SAL operational data store. The operational data store contains
+    OVSDB node objects which represent active connections to OVSDB
+    nodes.
+
+Step 5
+    The OVSDB Southbound provider requests the schema and databases
+    which are supported by the OVSDB node.
+
+Step 6
+    The OVSDB Southbound provider uses the database and schema
+    information to construct a monitor request which causes the OVSDB
+    node to send the controller any updates made to the OVSDB databases
+    on the OVSDB node.
+
+Passive OVSDB Node Manager Workflow
+'''''''''''''''''''''''''''''''''''
+
+A passive OVSDB node connection to OpenDaylight is made when the OVSDB
+node initiates the connection to OpenDaylight. In order for this to
+work, you must configure the OVSDB node to connect to the IP address and
+OVSDB port on which OpenDaylight is listening. This option can be
+configured on the OVSDB node using the following command:
+
+::
+
+    ovs-vsctl set-manager tcp:<IP address>:6640
+
+The following diagram illustrates the sequence of events which occur
+when an OVSDB node connects to OpenDaylight.
+
+.. figure:: ./images/ovsdb-sb-passive-connection.jpg
+   :alt: Passive OVSDB Manager Connection
+
+   Passive OVSDB Manager Connection
+
+Step 1
+    The OVSDB node initiates a connection to OpenDaylight.
+
+Step 2
+    The OVSDB Southbound provider adds the OVSDB node to the OVSDB
+    MD-SAL operational data store. The operational data store contains
+    OVSDB node objects which represent active connections to OVSDB
+    nodes.
+
+Step 3
+    The OVSDB Southbound provider requests the schema and databases
+    which are supported by the OVSDB node.
+
+Step 4
+    The OVSDB Southbound provider uses the database and schema
+    information to construct a monitor request which causes the OVSDB
+    node to send back any updates which have been made to the OVSDB
+    databases on the OVSDB node.
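+
+On the OVSDB node itself, the state of the manager connection can be
+checked with ``ovs-vsctl show``. The output below is a sketch; the
+controller IP address is an assumption:
+
+::
+
+    $ sudo ovs-vsctl show
+    ...
+        Manager "tcp:<controller-ip>:6640"
+            is_connected: true
+    ...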
+
+OVSDB Node ID in the Southbound Operational MD-SAL
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When OpenDaylight initiates an active connection to an OVSDB node, it
+writes an external-id to the Open\_vSwitch table on the OVSDB node. The
+external-id is an OpenDaylight instance identifier which identifies the
+OVSDB topology node which has just been created. Here is an example
+showing the value of the *opendaylight-iid* entry in the external-ids
+column of the Open\_vSwitch table where the node-id of the OVSDB node is
+*ovsdb:HOST1*.
+
+::
+
+    $ ovs-vsctl list open_vswitch
+    ...
+    external_ids        : {opendaylight-iid="/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"}
+    ...
+
+The *opendaylight-iid* entry in the external-ids column of the
+Open\_vSwitch table causes the OVSDB node to have the same node-id in
+the operational MD-SAL datastore as in the configuration MD-SAL
+datastore. This holds true even if the OVSDB node manager settings are
+subsequently changed so that a passive OVSDB manager connection is made.
+
+If there is no *opendaylight-iid* entry in the external-ids column and a
+passive OVSDB manager connection is made, then the node-id of the OVSDB
+node in the operational MD-SAL datastore will be constructed using the
+UUID of the Open\_vSwitch table as follows.
+
+::
+
+    "node-id": "ovsdb://uuid/b8dc0bfb-d22b-4938-a2e8-b0084d7bd8c1"
+
+The *opendaylight-iid* entry can be removed from the Open\_vSwitch table
+using the following command.
+
+::
+
+    $ sudo ovs-vsctl remove open_vswitch . external-id "opendaylight-iid"
+
+OVSDB Changes by using OVSDB Southbound Config MD-SAL
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+After the connection has been made to an OVSDB node, you can make
+changes to the OVSDB node by using the OVSDB Southbound Config MD-SAL.
+You can make CRUD operations by using the RESTCONF interface or by a
+plugin using the MD-SAL APIs. The following diagram illustrates the
+high-level flow of events.
+
+.. figure:: ./images/ovsdb-sb-config-crud.jpg
+   :alt: OVSDB Changes by using the Southbound Config MD-SAL
+
+   OVSDB Changes by using the Southbound Config MD-SAL
+
+Step 1
+    A change to the OVSDB Southbound Config MD-SAL is made. Changes
+    include adding or deleting bridges and ports, or setting attributes
+    of OVSDB nodes, bridges or ports.
+
+Step 2
+    The OVSDB Southbound provider receives notification of the changes
+    made to the OVSDB Southbound Config MD-SAL data store.
+
+Step 3
+    As appropriate, OVSDB transactions are constructed and transmitted
+    to the OVSDB node to update the OVSDB database on the OVSDB node.
+
+Step 4
+    The OVSDB node sends update messages to the OVSDB Southbound
+    provider to indicate the changes made to the OVSDB node's database.
+
+Step 5
+    The OVSDB Southbound provider maps the changes received from the
+    OVSDB node into corresponding changes made to the OVSDB Southbound
+    Operational MD-SAL data store.
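+
+For example, Step 1 could be a RESTCONF request which adds a bridge to
+the connected OVSDB node. The request below is a sketch; the node-id
+*ovsdb:HOST1* and the bridge name *br0* are assumptions:
+
+::
+
+    PUT http://<host>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr0
+    Content-Type: application/json
+    {
+      "network-topology:node": [
+        {
+          "node-id": "ovsdb:HOST1/bridge/br0",
+          "ovsdb:bridge-name": "br0",
+          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"
+        }
+      ]
+    }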
+
+Detecting changes in OVSDB coming from outside OpenDaylight
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Changes to the OVSDB node's database may also occur independently of
+OpenDaylight. OpenDaylight also receives notifications for these events
+and updates the Southbound operational MD-SAL. The following diagram
+illustrates the sequence of events.
+
+.. figure:: ./images/ovsdb-sb-oper-crud.jpg
+   :alt: OVSDB Changes made directly on the OVSDB node
+
+   OVSDB Changes made directly on the OVSDB node
+
+Step 1
+    Changes are made to the OVSDB node outside of OpenDaylight (e.g.
+    ovs-vsctl).
+
+Step 2
+    The OVSDB node constructs update messages to inform OpenDaylight of
+    the changes made to its databases.
+
+Step 3
+    The OVSDB Southbound provider maps the OVSDB database changes to
+    corresponding changes in the OVSDB Southbound operational MD-SAL
+    data store.
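+
+For instance, a bridge added directly on the OVSDB node with
+``ovs-vsctl`` will shortly afterwards appear in the Southbound
+operational MD-SAL, where it can be read back over RESTCONF. This is a
+sketch; the bridge name is arbitrary:
+
+::
+
+    $ sudo ovs-vsctl add-br br-test
+
+    GET http://<host>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/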
+
+OVSDB Model
+^^^^^^^^^^^
+
+The OVSDB Southbound MD-SAL operates using a YANG model which is based
+on the abstract topology node model found in the `network topology
+model <https://github.com/opendaylight/yangtools/blob/stable/lithium/model/ietf/ietf-topology/src/main/yang/network-topology%402013-10-21.yang>`__.
+
+The augmentations for the OVSDB Southbound MD-SAL are defined in the
+`ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/lithium/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
+file.
+
+There are three augmentations:
+
+**ovsdb-node-augmentation**
+    This augments the topology node and maps primarily to the
+    Open\_vSwitch table of the OVSDB schema. It contains the following
+    attributes.
+
+    -  **connection-info** - holds the local and remote IP address and
+       TCP port numbers for the OpenDaylight to OVSDB node connections
+
+    -  **db-version** - version of the OVSDB database
+
+    -  **ovs-version** - version of OVS
+
+    -  **list managed-node-entry** - a list of references to
+       ovsdb-bridge-augmentation nodes, which are the OVS bridges
+       managed by this OVSDB node
+
+    -  **list datapath-type-entry** - a list of the datapath types
+       supported by the OVSDB node (e.g. *system*, *netdev*) - depends
+       on newer OVS versions
+
+    -  **list interface-type-entry** - a list of the interface types
+       supported by the OVSDB node (e.g. *internal*, *vxlan*, *gre*,
+       *dpdk*, etc.) - depends on newer OVS versions
+
+    -  **list openvswitch-external-ids** - a list of the key/value pairs
+       in the Open\_vSwitch table external\_ids column
+
+    -  **list openvswitch-other-config** - a list of the key/value pairs
+       in the Open\_vSwitch table other\_config column
+
+**ovsdb-bridge-augmentation**
+    This augments the topology node and maps to a specific bridge in
+    the OVSDB bridge table of the associated OVSDB node. It contains the
+    following attributes.
+
+    -  **bridge-uuid** - UUID of the OVSDB bridge
+
+    -  **bridge-name** - name of the OVSDB bridge
+
+    -  **bridge-openflow-node-ref** - a reference (instance-identifier)
+       of the OpenFlow node associated with this bridge
+
+    -  **list protocol-entry** - the version of OpenFlow protocol to use
+       with the OpenFlow controller
+
+    -  **list controller-entry** - a list of controller-uuid and
+       is-connected status of the OpenFlow controllers associated with
+       this bridge
+
+    -  **datapath-id** - the datapath ID associated with this bridge on
+       the OVSDB node
+
+    -  **datapath-type** - the datapath type of this bridge
+
+    -  **fail-mode** - the OVSDB fail mode setting of this bridge
+
+    -  **flow-node** - a reference to the flow node corresponding to
+       this bridge
+
+    -  **managed-by** - a reference to the ovsdb-node-augmentation
+       (OVSDB node) that is managing this bridge
+
+    -  **list bridge-external-ids** - a list of the key/value pairs in
+       the bridge table external\_ids column for this bridge
+
+    -  **list bridge-other-configs** - a list of the key/value pairs in
+       the bridge table other\_config column for this bridge
+
+**ovsdb-termination-point-augmentation**
+    This augments the topology termination point model. The OVSDB
+    Southbound MD-SAL uses this model to represent both the OVSDB port
+    and OVSDB interface for a given port/interface in the OVSDB schema.
+    It contains the following attributes.
+
+    -  **port-uuid** - UUID of an OVSDB port row
+
+    -  **interface-uuid** - UUID of an OVSDB interface row
+
+    -  **name** - name of the port
+
+    -  **interface-type** - the interface type
+
+    -  **list options** - a list of port options
+
+    -  **ofport** - the OpenFlow port number of the interface
+
+    -  **ofport\_request** - the requested OpenFlow port number for the
+       interface
+
+    -  **vlan-tag** - the VLAN tag value
+
+    -  **list trunks** - list of VLAN tag values for trunk mode
+
+    -  **vlan-mode** - the VLAN mode (e.g. access, native-tagged,
+       native-untagged, trunk)
+
+    -  **list port-external-ids** - a list of the key/value pairs in the
+       port table external\_ids column for this port
+
+    -  **list interface-external-ids** - a list of the key/value pairs
+       in the interface table external\_ids column for this interface
+
+    -  **list port-other-configs** - a list of the key/value pairs in
+       the port table other\_config column for this port
+
+    -  **list interface-other-configs** - a list of the key/value pairs
+       in the interface table other\_config column for this interface
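+
+As an illustration of how the termination point augmentation is used, a
+VXLAN tunnel port could be added under an existing bridge node with a
+request along the following lines. This is a sketch; the bridge node-id,
+the port name *vxlanport* and the remote IP address are assumptions:
+
+::
+
+    PUT http://<host>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr0/termination-point/vxlanport/
+    Content-Type: application/json
+    {
+      "network-topology:termination-point": [
+        {
+          "tp-id": "vxlanport",
+          "ovsdb:name": "vxlanport",
+          "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
+          "ovsdb:options": [
+            {
+              "option": "remote_ip",
+              "value": "10.11.12.2"
+            }
+          ]
+        }
+      ]
+    }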
+
+Examples of OVSDB Southbound MD-SAL API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Connect to an OVSDB Node
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+This example RESTCONF command adds an OVSDB node object to the OVSDB
+Southbound configuration data store and attempts to connect to the OVSDB
+host located at the IP address 10.11.12.1 on TCP port 6640.
+
+::
+
+    POST http://<host>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
+    Content-Type: application/json
+    {
+      "node": [
+         {
+           "node-id": "ovsdb:HOST1",
+           "connection-info": {
+             "ovsdb:remote-ip": "10.11.12.1",
+             "ovsdb:remote-port": 6640
+           }
+         }
+      ]
+    }
+
+Query the OVSDB Southbound Configuration MD-SAL
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Following on from the previous example, if the OVSDB Southbound
+configuration MD-SAL is queried, the RESTCONF command and the resulting
+reply is similar to the following example.
+
+::
+
+    GET http://<host>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
+    Application/json data in the reply
+    {
+      "topology": [
+        {
+          "topology-id": "ovsdb:1",
+          "node": [
+            {
+              "node-id": "ovsdb:HOST1",
+              "ovsdb:connection-info": {
+                "remote-port": 6640,
+                "remote-ip": "10.11.12.1"
+              }
+            }
+          ]
+        }
+      ]
+    }
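+
+To disconnect from the OVSDB node again, the node can be deleted from
+the configuration data store; a sketch:
+
+::
+
+    DELETE http://<host>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1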
+
+Reference Documentation
+~~~~~~~~~~~~~~~~~~~~~~~
+
+`Openvswitch
+schema <http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf>`__
+
+OVSDB Openstack Developer Guide
+-------------------------------
+
+Overview
+~~~~~~~~
+
+The Open vSwitch database (OVSDB) Southbound Plugin component for
+OpenDaylight implements the OVSDB `RFC
+7047 <https://tools.ietf.org/html/rfc7047>`__ management protocol that
+allows the southbound configuration of switches that support OVSDB. The
+component comprises a library and a plugin. The OVSDB protocol uses
+JSON-RPC calls to manipulate a physical or virtual switch that supports
+OVSDB. Many vendors support OVSDB on various hardware platforms. The
+OpenDaylight controller uses the library project to interact with an OVS
+instance.
+
+`OpenStack <http://www.openstack.org>`__ is a popular open source
+Infrastructure as a Service (IaaS) project, covering compute, storage
+and network management. OpenStack can use OpenDaylight as its network
+management provider through the Neutron API, which acts as a northbound
+for OpenStack. The OVSDB NetVirt piece of the OVSDB project is a
+provider for the Neutron API in OpenDaylight. OpenDaylight manages the
+network flows for the OpenStack compute nodes via the OVSDB project's
+southbound plugin. This section describes how to set that up,
+and how to tell when everything is working.
+
+OVSDB OpenStack Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OpenStack integration architecture uses the following technologies:
+
+-  `RFC 7047 <https://tools.ietf.org/html/rfc7047>`__ - The Open vSwitch
+   Database Management Protocol
+
+-  `OpenFlow
+   v1.3 <http://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.3.4.pdf>`__
+
+-  `OpenStack Neutron ML2
+   Plugin <https://wiki.openstack.org/wiki/Neutron/ML2>`__
+
+|Openstack Integration|
+
+OVSDB Service Function Chaining Developer Guide
+-----------------------------------------------
+
+Overview
+~~~~~~~~
+
+The OVSDB NetVirt SFC component provides classification and traffic
+steering when integrated with OpenStack. Please refer to the Service
+Function Chaining project documentation for the theory and programming
+of service chains.
+
+Installing the NetVirt SFC Feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Install the odl-ovsdb-sfc feature. Installing it also pulls in the
+odl-ovsdb-openstack feature as well as the openflowplugin, neutron and
+sfc features.
+
+::
+
+    feature:install odl-ovsdb-sfc-ui
+
+Verify the required features are installed:
+
+::
+
+    opendaylight-user@root>feature:list -i | grep ovsdb
+    odl-ovsdb-southbound-api       | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: api
+    odl-ovsdb-southbound-impl      | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: impl
+    odl-ovsdb-southbound-impl-rest | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: impl :: REST
+    odl-ovsdb-southbound-impl-ui   | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: impl :: UI
+    odl-ovsdb-library              | 1.2.1-SNAPSHOT | x | odl-ovsdb-library-1.2.1-SNAPSHOT    | OpenDaylight :: library
+    odl-ovsdb-openstack            | 1.2.1-SNAPSHOT | x | ovsdb-1.2.1-SNAPSHOT                | OpenDaylight :: OVSDB :: OpenStack Network Virtual
+    odl-ovsdb-sfc-api              | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT        | OpenDaylight :: ovsdb-sfc :: api
+    odl-ovsdb-sfc                  | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT        | OpenDaylight :: ovsdb-sfc
+    odl-ovsdb-sfc-rest             | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT        | OpenDaylight :: ovsdb-sfc :: REST
+    odl-ovsdb-sfc-ui               | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT        | OpenDaylight :: ovsdb-sfc :: UI
+
+    opendaylight-user@root>feature:list -i | grep sfc
+    odl-sfc-model         | 0.2.0-SNAPSHOT | x | odl-sfc-0.2.0-SNAPSHOT           | OpenDaylight :: sfc :: Model
+    odl-sfc-provider      | 0.2.0-SNAPSHOT | x | odl-sfc-0.2.0-SNAPSHOT           | OpenDaylight :: sfc :: Provider
+    odl-sfc-provider-rest | 0.2.0-SNAPSHOT | x | odl-sfc-0.2.0-SNAPSHOT           | OpenDaylight :: sfc :: Provider
+    odl-sfc-ovs           | 0.2.0-SNAPSHOT | x | odl-sfc-0.2.0-SNAPSHOT           | OpenDaylight :: OpenvSwitch
+    odl-sfcofl2           | 0.2.0-SNAPSHOT | x | odl-sfc-0.2.0-SNAPSHOT           | OpenDaylight :: sfcofl2
+    odl-ovsdb-sfc-test    | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-test1.2.1-SNAPSHOT | OpenDaylight :: ovsdb-sfc-test
+    odl-ovsdb-sfc-api     | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT     | OpenDaylight :: ovsdb-sfc :: api
+    odl-ovsdb-sfc         | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT     | OpenDaylight :: ovsdb-sfc
+    odl-ovsdb-sfc-rest    | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT     | OpenDaylight :: ovsdb-sfc :: REST
+    odl-ovsdb-sfc-ui      | 1.2.1-SNAPSHOT | x | odl-ovsdb-sfc-1.2.1-SNAPSHOT     | OpenDaylight :: ovsdb-sfc :: UI
+
+    opendaylight-user@root>feature:list -i | grep neutron
+    odl-neutron-service        | 0.6.0-SNAPSHOT | x | odl-neutron-0.6.0-SNAPSHOT | OpenDaylight :: Neutron :: API
+    odl-neutron-northbound-api | 0.6.0-SNAPSHOT | x | odl-neutron-0.6.0-SNAPSHOT | OpenDaylight :: Neutron :: Northbound
+    odl-neutron-spi            | 0.6.0-SNAPSHOT | x | odl-neutron-0.6.0-SNAPSHOT | OpenDaylight :: Neutron :: API
+    odl-neutron-transcriber    | 0.6.0-SNAPSHOT | x | odl-neutron-0.6.0-SNAPSHOT | OpenDaylight :: Neutron :: Implementation
+
+OVSDB NetVirt Service Function Chaining Example
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The architecture within OpenDaylight can be seen in the following
+figure:
+
+.. figure:: ./images/ovsdb/ODL_SFC_Architecture.png
+   :alt: OpenDaylight OVSDB NetVirt SFC Architecture
+
+   OpenDaylight OVSDB NetVirt SFC Architecture
+
+Tacker is a Virtual Network Functions Manager that is responsible for
+orchestrating the Service Function Chaining. Tacker is responsible for
+generating templates for Virtual Network Functions for OpenStack to
+instantiate the Service Functions. Tacker also uses the RESTCONF
+interfaces of OpenDaylight to create the Service Function Chains.
+
+Classification
+~~~~~~~~~~~~~~
+
+OVSDB NetVirt SFC implements the classification for the chains. The
+classification steers traffic from the tenant overlay to the chain
+overlay and back to the tenant overlay.
+
+An Access Control List used by NetVirtSFC to create the classifier is
+shown below. This is an example of classifying HTTP traffic using
+TCP port 80. In this example the user would have created a Service
+Function Chain with the name "http-sfc" as well as all the associated
+Service Functions and Service Function Forwarders for the chain.
+
+::
+
+    http://localhost:8181/restconf/config/ietf-access-control-list:access-lists
+
+    {
+      "access-lists": {
+        "acl": [
+          {
+            "acl-name": "http-acl",
+            "access-list-entries": {
+              "ace": [
+                {
+                  "rule-name": "http-rule",
+                  "matches": {
+                    "source-port-range": {
+                      "lower-port": 0,
+                      "upper-port": 0
+                    },
+                    "protocol": 6,
+                    "destination-port-range": {
+                      "lower-port": 80,
+                      "upper-port": 80
+                    }
+                  },
+                  "actions": {
+                    "netvirt-sfc-acl:sfc-name": "http-sfc"
+                  }
+                }
+              ]
+            }
+          }
+        ]
+      }
+    }
+
+When the chain is rendered using the Rendered Service Path RPC,
+NetvirtSfc will add the classification flows. The classification flows
+are shown below. The list shown has been modified to remove the NetVirt
+tenant overlay flows. The classification flow is identified with the
+cookie: 0x1110010000040255. The 6th digit of the cookie identifies the
+flow type as the classifier. The last 8 digits identify the chain with
+the first four digits indicating the NSH NSP and the last four digits
+identifying the NSH NSI. In this case the chain is identified with an
+NSP of 4 and the NSI is 255 to indicate the beginning of the chain.
+
+::
+
+    sudo ovs-ofctl --protocol=OpenFlow13 dump-flows br-int
+    OFPST_FLOW reply (OF1.3) (xid=0x2):
+     cookie=0x0, duration=17.157s, table=0, n_packets=0, n_bytes=0, priority=6 actions=goto_table:1
+     cookie=0x14, duration=10.692s, table=0, n_packets=0, n_bytes=0, priority=400,udp,in_port=4,tp_dst=6633 actions=LOCAL
+     cookie=0x0, duration=17.134s, table=0, n_packets=0, n_bytes=0, dl_type=0x88cc actions=CONTROLLER:65535
+     cookie=0x14, duration=10.717s, table=0, n_packets=0, n_bytes=0, priority=350,nsp=4 actions=goto_table:152
+     cookie=0x14, duration=10.688s, table=0, n_packets=0, n_bytes=0, priority=400,udp,nw_dst=10.2.1.1,tp_dst=6633 actions=output:4
+     cookie=0x0, duration=17.157s, table=1, n_packets=0, n_bytes=0, priority=0 actions=goto_table:11
+     cookie=0x1110070000040254, duration=10.608s, table=1, n_packets=0, n_bytes=0, priority=40000,reg0=0x1,nsp=4,nsi=254,in_port=1 actions=goto_table:21
+     cookie=0x0, duration=17.157s, table=11, n_packets=0, n_bytes=0, priority=0 actions=goto_table:21
+     cookie=0x1110060000040254, duration=10.625s, table=11, n_packets=0, n_bytes=0, nsp=4,nsi=254,in_port=4 actions=load:0x1->NXM_NX_REG0[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],resubmit(1,1)
+     cookie=0x1110010000040255, duration=10.615s, table=11, n_packets=0, n_bytes=0, tcp,reg0=0x1,tp_dst=80 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_NSH_C2[],set_nshc1:0xc0a83246,set_nsp:0x4,set_nsi:255,load:0xa020101->NXM_NX_TUN_IPV4_DST[],load:0x4->NXM_NX_TUN_ID[0..31],resubmit(,0)
+     cookie=0x0, duration=17.157s, table=21, n_packets=0, n_bytes=0, priority=0 actions=goto_table:31
+     cookie=0x1110040000000000, duration=10.765s, table=21, n_packets=0, n_bytes=0, priority=1024,arp,in_port=LOCAL,arp_tpa=10.2.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:f6:00:00:0f:00:01->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xf600000f0001->NXM_NX_ARP_SHA[],load:0xa020101->NXM_OF_ARP_SPA[],IN_PORT
+     cookie=0x0, duration=17.157s, table=31, n_packets=0, n_bytes=0, priority=0 actions=goto_table:41
+     cookie=0x0, duration=17.157s, table=41, n_packets=0, n_bytes=0, priority=0 actions=goto_table:51
+     cookie=0x0, duration=17.157s, table=51, n_packets=0, n_bytes=0, priority=0 actions=goto_table:61
+     cookie=0x0, duration=17.142s, table=61, n_packets=0, n_bytes=0, priority=0 actions=goto_table:71
+     cookie=0x0, duration=17.140s, table=71, n_packets=0, n_bytes=0, priority=0 actions=goto_table:81
+     cookie=0x0, duration=17.116s, table=81, n_packets=0, n_bytes=0, priority=0 actions=goto_table:91
+     cookie=0x0, duration=17.116s, table=91, n_packets=0, n_bytes=0, priority=0 actions=goto_table:101
+     cookie=0x0, duration=17.107s, table=101, n_packets=0, n_bytes=0, priority=0 actions=goto_table:111
+     cookie=0x0, duration=17.083s, table=111, n_packets=0, n_bytes=0, priority=0 actions=drop
+     cookie=0x14, duration=11.042s, table=150, n_packets=0, n_bytes=0, priority=5 actions=goto_table:151
+     cookie=0x14, duration=11.027s, table=151, n_packets=0, n_bytes=0, priority=5 actions=goto_table:152
+     cookie=0x14, duration=11.010s, table=152, n_packets=0, n_bytes=0, priority=5 actions=goto_table:158
+     cookie=0x14, duration=10.668s, table=152, n_packets=0, n_bytes=0, priority=650,nsp=4,nsi=255 actions=load:0xa020101->NXM_NX_TUN_IPV4_DST[],goto_table:158
+     cookie=0x14, duration=10.995s, table=158, n_packets=0, n_bytes=0, priority=5 actions=drop
+     cookie=0xba5eba11ba5eba11, duration=10.645s, table=158, n_packets=0, n_bytes=0, priority=751,nsp=4,nsi=255,in_port=4 actions=move:NXM_NX_NSH_C1[]->NXM_NX_NSH_C1[],move:NXM_NX_NSH_C2[]->NXM_NX_NSH_C2[],move:NXM_NX_TUN_ID[0..31]->NXM_NX_TUN_ID[0..31],IN_PORT
+     cookie=0xba5eba11ba5eba11, duration=10.590s, table=158, n_packets=0, n_bytes=0, priority=751,nsp=4,nsi=254,in_port=4 actions=move:NXM_NX_NSI[]->NXM_NX_NSI[],move:NXM_NX_NSP[]->NXM_NX_NSP[],move:NXM_NX_NSH_C1[]->NXM_NX_TUN_IPV4_DST[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],IN_PORT
+     cookie=0xba5eba11ba5eba11, duration=10.640s, table=158, n_packets=0, n_bytes=0, priority=750,nsp=4,nsi=255 actions=move:NXM_NX_NSH_C1[]->NXM_NX_NSH_C1[],move:NXM_NX_NSH_C2[]->NXM_NX_NSH_C2[],move:NXM_NX_TUN_ID[0..31]->NXM_NX_TUN_ID[0..31],output:4
+     cookie=0xba5eba11ba5eba11, duration=10.571s, table=158, n_packets=0, n_bytes=0, priority=761,nsp=4,nsi=254,nshc1=3232248390,in_port=4 actions=move:NXM_NX_NSI[]->NXM_NX_NSI[],move:NXM_NX_NSP[]->NXM_NX_NSP[],move:NXM_NX_NSH_C1[]->NXM_NX_TUN_IPV4_DST[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],set_nshc1:0,resubmit(,11)
+
+Configuration
+~~~~~~~~~~~~~
+
+Some configuration is required due to application coexistence for the
+OpenFlow programming. The SFC project programs flows for the SFC overlay
+and NetVirt programs flows for the tenant overlay. Coexistence is
+achieved by each application owning a unique set of tables and providing
+a simple handoff between the tables.
+
+First configure NetVirt to use table 1 as its starting table:
+
+::
+
+    http://localhost:8181/restconf/config/netvirt-providers-config:netvirt-providers-config
+
+    {
+      "netvirt-providers-config": {
+        "table-offset": 1
+      }
+    }
+
+Next configure SFC to start at table 150 and configure the table
+handoff. The configuration starts SFC at table 150 and sets the handoff
+to table 11, which is the NetVirt SFC classification table.
+
+::
+
+    http://localhost:8181/restconf/config/sfc-of-renderer:sfc-of-renderer-config
+
+    {
+      "sfc-of-renderer-config": {
+        "sfc-of-app-egress-table-offset": 11,
+        "sfc-of-table-offset": 150
+      }
+    }
+
+OVSDB Hardware VTEP Developer Guide
+-----------------------------------
+
+Overview
+~~~~~~~~
+
+TBD
+
+OVSDB Hardware VTEP Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+TBD
+
diff --git a/docs/developer-guide/packetcable-developer-guide.rst b/docs/developer-guide/packetcable-developer-guide.rst
new file mode 100644 (file)
index 0000000..172fee1
--- /dev/null
@@ -0,0 +1,345 @@
+PacketCable Developer Guide
+===========================
+
+PCMM Specification
+------------------
+
+`PacketCable™ Multimedia
+Specification <http://www.cablelabs.com/specification/packetcable-multimedia-specification>`__
+
+System Overview
+---------------
+
+These components introduce DOCSIS QoS Service Flow management using
+the PCMM protocol. The driver component is responsible for the
+PCMM/COPS/PDP functionality required to service requests from
+PacketCable Provider and FlowManager. Requests are transposed into PCMM
+Gate Control messages and transmitted via COPS to the CCAP/CMTS. This
+plugin adheres to the PCMM/COPS/PDP functionality defined in the
+CableLabs specification. The PacketCable solution is an MD-SAL
+compliant component.
+
+PacketCable Components
+----------------------
+
+The packetcable maven project comprises several modules.
+
++--------------------------------------+--------------------------------------+
+| Bundle                               | Description                          |
++======================================+======================================+
+| packetcable-driver                   | A common module that contains the    |
+|                                      | COPS stack and manages all           |
+|                                      | connections to CCAPS/CMTSes.         |
++--------------------------------------+--------------------------------------+
+| packetcable-emulator                 | A basic CCAP emulator to facilitate  |
+|                                      | testing the plugin when no physical  |
+|                                      | CCAP is available.                   |
++--------------------------------------+--------------------------------------+
+| packetcable-policy-karaf             | Generates a Karaf distribution with  |
+|                                      | a config that loads all the          |
+|                                      | packetcable features at runtime.     |
++--------------------------------------+--------------------------------------+
+| packetcable-policy-model             | Contains the YANG information model. |
++--------------------------------------+--------------------------------------+
+| packetcable-policy-server            | Provider hosts the model processing, |
+|                                      | RESTCONF, and API implementation.    |
++--------------------------------------+--------------------------------------+
+
+Setting Logging Levels
+~~~~~~~~~~~~~~~~~~~~~~
+
+From the Karaf console
+
+::
+
+    log:set <LEVEL> (<PACKAGE>|<BUNDLE>)
+    Example
+    log:set DEBUG org.opendaylight.packetcable.packetcable-policy-server
+
+Tools for Testing
+-----------------
+
+Postman REST client for Chrome
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`Install the Chrome
+extension <https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en>`__
+
+`Download and import sample packetcable
+collection <https://git.opendaylight.org/gerrit/gitweb?p=packetcable.git;a=tree;f=packetcable-policy-server/doc/restconf-samples>`__
+
+View Rest API
+~~~~~~~~~~~~~
+
+1. Install the ``odl-mdsal-apidocs`` feature from the karaf console.
+
+2. Open http://localhost:8181/apidoc/explorer/index.html. The default
+   user/password for a dev build is admin/admin.
+
+3. Navigate to the PacketCable section.
+
+Yang-IDE
+~~~~~~~~
+
+Editing YANG can be done in any text editor, but Yang-IDE will help
+prevent mistakes.
+
+`Setup and Build Yang-IDE for
+Eclipse <https://github.com/xored/yang-ide/wiki/Setup-and-build>`__
+
+Using Wireshark to Trace PCMM
+-----------------------------
+
+1. To start Wireshark with the necessary privileges, issue the
+   following command:
+
+   ::
+
+       sudo wireshark &
+
+2. Select the interface to monitor.
+
+3. Display only COPS messages by applying “cops” in the filter field.
+
+   .. figure:: ./images/packetcable-developer-wireshark.png
+
+      Wireshark looking for COPS messages.
+
+Debugging and Verifying DQoS Gate (Flows) on the CCAP/CMTS
+----------------------------------------------------------
+
+Below are some of the most useful CCAP/CMTS commands to verify flows
+have been enabled on the CMTS.
+
+Cisco
+~~~~~
+
+`Cisco CMTS Cable Command
+Reference <http://www.cisco.com/c/en/us/td/docs/cable/cmts/cmd_ref/b_cmts_cable_cmd_ref.pdf>`__
+
+Find the Cable Modem
+~~~~~~~~~~~~~~~~~~~~
+
+::
+
+    10k2-DSG#show cable modem
+                                                                                      D
+    MAC Address    IP Address      I/F           MAC           Prim RxPwr  Timing Num I
+                                                 State         Sid  (dBmv) Offset CPE P
+    0010.188a.faf6 0.0.0.0         C8/0/0/U0     offline       1    0.00   1482   0   N
+    74ae.7600.01f3 10.32.115.150   C8/0/10/U0    online        1    -0.50  1431   0   Y
+    0010.188a.fad8 10.32.115.142   C8/0/10/UB    w-online      2    -0.50  1507   1   Y
+    000e.0900.00dd 10.32.115.143   C8/0/10/UB    w-online      3    1.00   1677   0   Y
+    e86d.5271.304f 10.32.115.168   C8/0/10/UB    w-online      6    -0.50  1419   1   Y
+
+Show PCMM Plugin Connection
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+    10k2-DSG#show packetcabl ?
+      cms     Gate Controllers connected to this PacketCable client
+      event   Event message server information
+      gate    PacketCable gate information
+      global  PacketCable global information
+
+    10k2-DSG#show packetcable cms
+    GC-Addr        GC-Port  Client-Addr    COPS-handle  Version PSID Key PDD-Cfg
+
+
+    10k2-DSG#show packetcable cms
+    GC-Addr        GC-Port  Client-Addr    COPS-handle  Version PSID Key PDD-Cfg
+    10.32.0.240    54238    10.32.15.3     0x4B9C8150/1    4.0   0    0   0
+
+Show COPS Messages
+~~~~~~~~~~~~~~~~~~
+
+::
+
+    debug cops details
+
+Use CM Mac Address to List Service Flows
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+    10k2-DSG#show cable modem
+                                                                                      D
+    MAC Address    IP Address      I/F           MAC           Prim RxPwr  Timing Num I
+                                                 State         Sid  (dBmv) Offset CPE P
+    0010.188a.faf6 ---             C8/0/0/UB     w-online      1    0.50   1480   1   N
+    74ae.7600.01f3 10.32.115.150   C8/0/10/U0    online        1    -0.50  1431   0   Y
+    0010.188a.fad8 10.32.115.142   C8/0/10/UB    w-online      2    -0.50  1507   1   Y
+    000e.0900.00dd 10.32.115.143   C8/0/10/UB    w-online      3    0.00   1677   0   Y
+    e86d.5271.304f 10.32.115.168   C8/0/10/UB    w-online      6    -0.50  1419   1   Y
+
+
+    10k2-DSG#show cable modem 000e.0900.00dd service-flow
+
+
+    SUMMARY:
+    MAC Address    IP Address      Host          MAC           Prim  Num Primary    DS
+                                   Interface     State         Sid   CPE Downstream RfId
+    000e.0900.00dd 10.32.115.143   C8/0/10/UB    w-online      3     0   Mo8/0/2:1  2353
+
+
+    Sfid  Dir Curr  Sid   Sched  Prio MaxSusRate  MaxBrst     MinRsvRate  Throughput
+              State       Type
+    23    US  act   3     BE     0    0           3044        0           39
+    30    US  act   16    BE     0    500000      3044        0           0
+    24    DS  act   N/A   N/A    0    0           3044        0           17
+
+
+
+    UPSTREAM SERVICE FLOW DETAIL:
+
+    SFID  SID   Requests   Polls      Grants     Delayed    Dropped    Packets
+                                                 Grants     Grants
+    23    3     784        0          784        0          0          784
+    30    16    0          0          0          0          0          0
+
+
+    DOWNSTREAM SERVICE FLOW DETAIL:
+
+    SFID  RP_SFID QID    Flg Policer               Scheduler             FrwdIF
+                             Xmits      Drops      Xmits      Drops
+    24    33019   131550     0          0          777        0          Wi8/0/2:2
+
+    Flags Legend:
+    $: Low Latency Queue (aggregated)
+    ~: CIR Queue
+
+Deleting a PCMM Gate Message from the CMTS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+    10k2-DSG#test cable dsd  000e.0900.00dd 30
+
+Find service flows
+~~~~~~~~~~~~~~~~~~
+
+All gate controllers currently connected to the PacketCable client are
+displayed
+
+::
+
+    show cable modem 00:11:22:33:44:55 service-flow
+    show cable modem
+
+Debug and display PCMM Gate messages
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+    debug packetcable gate control
+    debug packetcable gate events
+    show packetcable gate summary
+    show packetcable global
+    show packetcable cms
+
+Debug COPS messages
+~~~~~~~~~~~~~~~~~~~
+
+::
+
+    debug cops detail
+    debug packetcable cops
+    debug cable dynamic_qos trace
+
+Integration Verification
+------------------------
+
+Check out the integration project and perform regression tests.
+
+::
+
+    git clone ssh://${ODL_USERNAME}@git.opendaylight.org:29418/integration.git
+    git clone https://git.opendaylight.org/gerrit/integration.git
+
+1. Check and edit the
+   integration/features/src/main/resources/features.xml and follow the
+   directions there.
+
+2. Check and edit the integration/features/pom.xml and add a dependency
+   for your feature file.
+
+3. Build integration/features and debug
+
+::
+
+    mvn clean install
+
+Test your feature in the integration/distributions/extra/karaf/
+distribution
+
+::
+
+    cd integration/distributions/extra/karaf/
+    mvn clean install
+    cd target/assembly/bin
+    ./karaf
+
+service-wrapper
+~~~~~~~~~~~~~~~
+
+Install the Karaf service wrapper (see
+http://karaf.apache.org/manual/latest/users-guide/wrapper.html):
+
+::
+
+    opendaylight-user@root>feature:install service-wrapper
+    opendaylight-user@root>wrapper:install --help
+    DESCRIPTION
+            wrapper:install
+
+    Install the container as a system service in the OS.
+
+    SYNTAX
+            wrapper:install [options]
+
+    OPTIONS
+            -d, --display
+                    The display name of the service.
+                    (defaults to karaf)
+            --help
+                    Display this help message
+            -s, --start-type
+                    Mode in which the service is installed. AUTO_START or DEMAND_START (Default: AUTO_START)
+                    (defaults to AUTO_START)
+            -n, --name
+                    The service name that will be used when installing the service. (Default: karaf)
+                    (defaults to karaf)
+            -D, --description
+                    The description of the service.
+                    (defaults to )
+
+    opendaylight-user@root> wrapper:install
+    Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-wrapper
+    Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-service
+    Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/etc/karaf-wrapper.conf
+    Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/libwrapper.so
+    Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/karaf-wrapper.jar
+    Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/karaf-wrapper-main.jar
+
+    Setup complete.  You may wish to tweak the JVM properties in the wrapper configuration file:
+    /home/user/odl/distribution-karaf-0.3.0-Lithium/etc/karaf-wrapper.conf
+    before installing and starting the service.
+
+
+    Ubuntu/Debian Linux system detected:
+      To install the service:
+        $ ln -s /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-service /etc/init.d/
+
+      To start the service when the machine is rebooted:
+        $ update-rc.d karaf-service defaults
+
+      To disable starting the service when the machine is rebooted:
+        $ update-rc.d -f karaf-service remove
+
+      To start the service:
+        $ /etc/init.d/karaf-service start
+
+      To stop the service:
+        $ /etc/init.d/karaf-service stop
+
+      To uninstall the service :
+        $ rm /etc/init.d/karaf-service
+
diff --git a/docs/developer-guide/pcep-developer-guide.rst b/docs/developer-guide/pcep-developer-guide.rst
new file mode 100644 (file)
index 0000000..461803e
--- /dev/null
@@ -0,0 +1,336 @@
+PCEP Developer Guide
+====================
+
+Overview
+--------
+
+This section provides an overview of **feature odl-bgpcep-pcep-all** .
+This feature will install everything needed for PCEP (Path Computation
+Element Protocol) including establishing the connection, storing
+information about LSPs (Label Switched Paths) and displaying data in
+network-topology overview.
+
+PCEP Architecture
+-----------------
+
+Each feature represents a module in the BGPCEP codebase. The following
+diagram illustrates how the features are related.
+
+.. figure:: ./images/bgpcep/pcep-dependency-tree.png
+   :alt: PCEP Dependency Tree
+
+   PCEP Dependency Tree
+
+Key APIs and Interfaces
+-----------------------
+
+PCEP
+~~~~
+
+Session handling
+^^^^^^^^^^^^^^^^
+
+*32-pcep.xml* defines only the pcep-dispatcher, the parser extensions it
+should use (global-pcep-extensions), and a factory for creating session
+proposals (you can create different proposals for different PCCs (Path
+Computation Clients)).
+
+.. code:: xml
+
+     <module>
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:pcep:impl">prefix:pcep-dispatcher-impl</type>
+      <name>global-pcep-dispatcher</name>
+      <pcep-extensions>
+       <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extensions</type>
+       <name>global-pcep-extensions</name>
+      </pcep-extensions>
+      <pcep-session-proposal-factory>
+       <type xmlns:pcep="urn:opendaylight:params:xml:ns:yang:controller:pcep">pcep:pcep-session-proposal-factory</type>
+       <name>global-pcep-session-proposal-factory</name>
+      </pcep-session-proposal-factory>
+      <boss-group>
+       <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
+       <name>global-boss-group</name>
+      </boss-group>
+      <worker-group>
+       <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
+       <name>global-worker-group</name>
+      </worker-group>
+     </module>
+
+For user configuration of PCEP, see the User Guide.
+
+Parser
+^^^^^^
+
+The base PCEP parser includes messages and attributes from
+`RFC5440 <http://tools.ietf.org/html/rfc5440>`__,
+`RFC5541 <http://tools.ietf.org/html/rfc5541>`__,
+`RFC5455 <http://tools.ietf.org/html/rfc5455>`__,
+`RFC5557 <http://tools.ietf.org/html/rfc5557>`__ and
+`RFC5521 <http://tools.ietf.org/html/rfc5521>`__.
+
+Registration
+^^^^^^^^^^^^
+
+All parsers and serializers need to be registered into *Extension
+provider*. This *Extension provider* is configured in initial
+configuration of the parser-spi module (*32-pcep.xml*).
+
+.. code:: xml
+
+    <module>
+     <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">prefix:pcep-extensions-impl</type>
+     <name>global-pcep-extensions</name>
+     <extension>
+      <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
+      <name>pcep-parser-base</name>
+     </extension>
+     <extension>
+      <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
+      <name>pcep-parser-ietf-stateful07</name>
+     </extension>
+     <extension>
+      <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
+      <name>pcep-parser-ietf-initiated00</name>
+     </extension>
+     <extension>
+      <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
+      <name>pcep-parser-sync-optimizations</name>
+     </extension>
+    </module>
+
+-  *pcep-parser-base* - will register parsers and serializers
+   implemented in pcep-impl module
+
+-  *pcep-parser-ietf-stateful07* - will register parsers and serializers
+   of draft-ietf-pce-stateful-pce-07 implementation
+
+-  *pcep-parser-ietf-initiated00* - will register parser and serializer
+   of draft-ietf-pce-pce-initiated-lsp-00 implementation
+
+-  *pcep-parser-sync-optimizations* - will register parser and
+   serializers of draft-ietf-pce-stateful-sync-optimizations-03
+   implementation
+
+The Stateful07 module is a good example of a PCEP parser extension.
+
+Configuration of PCEP parsers specifies one implementation of *Extension
+provider* that will take care of registering mentioned parser
+extensions:
+`SimplePCEPExtensionProviderContext <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo/SimplePCEPExtensionProviderContext.java;hb=refs/for/stable/beryllium>`__.
+All registries are implemented in package
+`pcep-spi <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo;hb=refs/for/stable/beryllium>`__.
+
+Parsing
+^^^^^^^
+
+Parsing of PCEP elements is mostly done the same way as in BGP; the only
+exception is message parsing, which is described here.
+
+In BGP messages, parsing of first-level elements (path-attributes) can
+be validated in a simple way, as the attributes are expected to be
+ordered by their type code. PCEP, on the other hand, has a strict
+object-order policy, which is described in RBNF (Routing Backus-Naur
+Form) in each RFC. Therefore the algorithm here parses all objects in
+the order in which they appear in the message. The result of parsing is
+a list of *PCEPObjects*, which is put through validation. *validate()*
+methods are present in each message parser. Depending on the complexity
+of the message, validation can be either a simple condition (checking
+the presence of a mandatory object) or a full state machine.
+
+In addition, PCEP requires sending an error message for each documented
+parsing error. This is handled by creating an empty list of error
+messages, *errors*, which is then passed as an argument throughout the
+whole parsing process. If a parser encounters a
+*PCEPDocumentedException*, it is responsible for creating the
+appropriate PCEP error message and adding it to this list. When parsing
+is finished, the list is examined and all error messages are sent to the
+peer.
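The parse-and-collect-errors flow described above can be sketched in Python (a simplified illustration; the actual parsers are Java, and all names and messages here are hypothetical):

```python
# Simplified sketch of the PCEP message parsing flow described above.
# All names are hypothetical; the real parsers live in the Java
# pcep-impl module.

MANDATORY = {"RP"}  # e.g. a PCReq message requires an RP object

def parse_message(raw_objects, errors):
    """Parse objects in the order they appear, then validate the list."""
    parsed = list(raw_objects)  # objects are parsed in message order
    # validate(): here, a simple check for mandatory object presence
    for required in MANDATORY:
        if required not in parsed:
            # each documented parsing error yields a PCEP error message
            errors.append("PCErr: missing mandatory object " + required)
    return parsed

errors = []  # one list passed through the whole parsing process
parse_message(["ENDPOINTS", "BANDWIDTH"], errors)
# when parsing finishes, every collected error is sent to the peer
print(errors)  # ['PCErr: missing mandatory object RP']
```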
+
+The following sequence diagram illustrates this process:
+
+.. figure:: ./images/bgpcep/pcep-parsing.png
+   :alt: Parsing
+
+   Parsing
+
+PCEP IETF stateful
+~~~~~~~~~~~~~~~~~~
+
+This section summarizes module pcep-ietf-stateful07. The term *stateful*
+refers to
+`draft-ietf-pce-stateful-pce <http://tools.ietf.org/html/draft-ietf-pce-stateful-pce>`__
+and
+`draft-ietf-pce-pce-initiated-lsp <http://tools.ietf.org/html/draft-ietf-pce-pce-initiated-lsp>`__
+in versions draft-ietf-pce-stateful-pce-07 with
+draft-ietf-pce-pce-initiated-lsp-00.
+
+We will upgrade our implementation when the stateful draft is promoted
+to an RFC.
+
+The stateful module is implemented as an extension to pcep-base-parser.
+The stateful draft defines new elements as well as additional fields and
+TLVs (type, length, value) for known objects. All new elements are
+defined in YANG models that contain augmentations to elements defined in
+`pcep-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/beryllium>`__.
+When extending known elements, the *Parser* class merely extends the
+base class and overrides the necessary methods, as shown in the
+following diagram:
+
+.. figure:: ./images/bgpcep/validation.png
+   :alt: Extending existing parsers
+
+   Extending existing parsers
+
+All parsers (including those for newly defined PCEP elements) have to be
+registered via the *Activator* class. This class is present in both
+modules.
+
+In addition to parsers, the stateful module also introduces an
+additional session proposal, which includes the new fields defined in
+the stateful drafts for the Open object.
+
+PCEP segment routing (SR)
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+PCEP Segment Routing is an extension of base PCEP and
+pcep-ietf-stateful-07 extension. The pcep-segment-routing module
+implements
+`draft-ietf-pce-segment-routing-01 <http://tools.ietf.org/html/draft-ietf-pce-segment-routing-01>`__.
+
+The extension brings new SR-ERO (Explicit Route Object) and SR-RRO
+(Reported Route Object) subobjects composed of a SID (Segment
+Identifier) and/or an NAI (Node or Adjacency Identifier). The Segment
+Routing path is carried in the ERO and RRO objects as a list of
+SR-ERO/SR-RRO subobjects in an order specified by the user. The draft
+also defines a new TLV, the SR-PCE-CAPABILITY TLV, carried in the PCEP
+Open object and used to negotiate Segment Routing capability.
+
+| The yang models of subobject, SR-PCE-CAPABILITY TLV and appropriate
+  augmentations are defined in
+  `odl-pcep-segment-routing.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/yang/odl-pcep-segment-routing.yang;hb=refs/for/stable/beryllium>`__.
+| The pcep-segment-routing module includes parsers/serializers for new
+  subobject
+  (`SrEroSubobjectParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrEroSubobjectParser.java;hb=refs/for/stable/beryllium>`__)
+  and TLV
+  (`SrPceCapabilityTlvParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrPceCapabilityTlvParser.java;hb=refs/for/stable/beryllium>`__).
+
+The pcep-segment-routing module also implements
+`draft-ietf-pce-lsp-setup-type-01 <http://tools.ietf.org/html/draft-ietf-pce-lsp-setup-type-01>`__.
+This draft defines a new TLV, the Path Setup Type TLV, whose value
+indicates the path setup signaling technique. The TLV may be included in
+the RP (Request Parameters)/SRP (Stateful PCE Request Parameters)
+object. For the default RSVP-TE (Resource Reservation Protocol), the TLV
+is omitted. For Segment Routing, PST = 1 is defined.
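The Path Setup Type logic just described can be illustrated with a small Python sketch (the byte layout shown here is an assumption for illustration, not the ODL implementation):

```python
# Illustrative sketch of the Path Setup Type TLV logic described above.
# The real parser is PathSetupTypeTlvParser (Java); the byte layout
# below is a hypothetical example, not the actual encoding.

PST_RSVP_TE = 0          # default signaling technique; TLV is omitted
PST_SEGMENT_ROUTING = 1  # PST = 1 is defined for Segment Routing

def serialize_pst_tlv(pst):
    """Return TLV bytes, or None when the default (RSVP-TE) applies."""
    if pst == PST_RSVP_TE:
        return None  # TLV omitted for the default RSVP-TE
    # hypothetical layout: 2-byte type, 2-byte length, 3 reserved
    # bytes, then the 1-byte PST value
    return bytes([0, 28, 0, 4, 0, 0, 0, pst])

print(serialize_pst_tlv(PST_RSVP_TE))          # None
print(serialize_pst_tlv(PST_SEGMENT_ROUTING))  # last byte is 1
```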
+
+The Path Setup Type TLV is modeled with yang in module
+`pcep-types.yang <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/beryllium>`__.
+A parser/serializer is implemented in
+`PathSetupTypeTlvParser <https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/impl/src/main/java/org/opendaylight/protocol/pcep/impl/tlv/PathSetupTypeTlvParser.java;hb=refs/for/stable/beryllium>`__
+and overridden in the segment-routing module to provide the additional
+PST.
+
+PCEP Synchronization Procedures Optimization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The draft *Optimizations of Label Switched Path State Synchronization
+Procedures for a Stateful PCE*
+(draft-ietf-pce-stateful-sync-optimizations-03) specifies the following
+optimizations for state synchronization and the corresponding PCEP
+procedures and extensions:
+
+-  **State Synchronization Avoidance:** To skip state synchronization if
+   the state has survived and not changed during session restart.
+
+-  **Incremental State Synchronization:** To do incremental (delta)
+   state synchronization when possible.
+
+-  **PCE-triggered Initial Synchronization:** To let PCE control the
+   timing of the initial state synchronization. The capability can be
+   applied to both full and incremental state synchronization.
+
+-  **PCE-triggered Re-synchronization:** To let PCE re-synchronize the
+   state for sanity check.
+
+PCEP Topology
+~~~~~~~~~~~~~
+
+PCEP data is displayed only through one URL that is accessible from the
+base network-topology URL:
+
+*http://localhost:8181/restconf/operational/network-topology:network-topology/topology/pcep-topology*
+
+Each PCC will be displayed as a node:
+
+.. code:: xml
+
+    <node>
+     <path-computation-client>
+      <ip-address>42.42.42.42</ip-address>
+      <state-sync>synchronized</state-sync>
+      <stateful-tlv>
+       <stateful>
+        <initiation>true</initiation>
+        <lsp-update-capability>true</lsp-update-capability>
+       </stateful>
+      </stateful-tlv>
+     </path-computation-client>
+     <node-id>pcc://42.42.42.42</node-id>
+    </node>
+
+If some tunnels are configured on the network, they are displayed on the
+same page, within the node that initiated the tunnel:
+
+.. code:: xml
+
+    <node>
+     <path-computation-client>
+      <state-sync>synchronized</state-sync>
+      <stateful-tlv>
+       <stateful>
+        <initiation>true</initiation>
+        <lsp-update-capability>true</lsp-update-capability>
+       </stateful>
+      </stateful-tlv>
+      <reported-lsp>
+       <name>foo</name>
+       <lsp>
+        <operational>down</operational>
+        <sync>false</sync>
+        <ignore>false</ignore>
+        <plsp-id>1</plsp-id>
+        <create>false</create>
+        <administrative>true</administrative>
+        <remove>false</remove>
+        <delegate>true</delegate>
+        <processing-rule>false</processing-rule>
+        <tlvs>
+        <lsp-identifiers>
+          <ipv4>
+           <ipv4-tunnel-sender-address>43.43.43.43</ipv4-tunnel-sender-address>
+           <ipv4-tunnel-endpoint-address>0.0.0.0</ipv4-tunnel-endpoint-address>
+           <ipv4-extended-tunnel-id>0.0.0.0</ipv4-extended-tunnel-id>
+          </ipv4>
+          <tunnel-id>0</tunnel-id>
+          <lsp-id>0</lsp-id>
+         </lsp-identifiers>
+         <symbolic-path-name>
+          <path-name>Zm9v</path-name>
+         </symbolic-path-name>
+        </tlvs>
+       </lsp>
+      </reported-lsp>
+      <ip-address>43.43.43.43</ip-address>
+     </path-computation-client>
+     <node-id>pcc://43.43.43.43</node-id>
+    </node>
+
+Note that the *<path-name>* tag displays the tunnel name in Base64
+encoding.
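For example, the *<path-name>* value from the output above can be decoded with Python's standard library:

```python
# Decode the Base64-encoded <path-name> value from the example above
# to recover the human-readable tunnel name.
import base64

path_name = "Zm9v"
tunnel_name = base64.b64decode(path_name).decode("ascii")
print(tunnel_name)  # foo
```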
+
+API Reference Documentation
+---------------------------
+
+Javadocs are generated when building the Maven site (mvn site) and are
+located in the target/ directory of each module.
+
diff --git a/docs/developer-guide/service-function-chaining.rst b/docs/developer-guide/service-function-chaining.rst
new file mode 100644 (file)
index 0000000..8bddf67
--- /dev/null
@@ -0,0 +1,389 @@
+Service Function Chaining
+=========================
+
+OpenDaylight Service Function Chaining (SFC) Overview
+-----------------------------------------------------
+
+OpenDaylight Service Function Chaining (SFC) provides the ability to
+define an ordered list of network services (e.g. firewalls, load
+balancers). These services are then "stitched" together in the network
+to create a service chain. This project provides the infrastructure
+(chaining logic, APIs) needed for ODL to provision a service chain in
+the network and an end-user application for defining such chains.
+
+-  ACE - Access Control Entry
+
+-  ACL - Access Control List
+
+-  SCF - Service Classifier Function
+
+-  SF - Service Function
+
+-  SFC - Service Function Chain
+
+-  SFF - Service Function Forwarder
+
+-  SFG - Service Function Group
+
+-  SFP - Service Function Path
+
+-  RSP - Rendered Service Path
+
+-  NSH - Network Service Header
+
+SFC Classifier Control and Data Plane Developer Guide
+-----------------------------------------------------
+
+Overview
+~~~~~~~~
+
+A description of the classifier can be found in
+https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/.
+
+The classifier manages everything from starting the packet listener to
+the creation (and removal) of appropriate ip(6)tables rules and marking
+received packets accordingly. Its functionality is **available only on
+Linux** as it leverages **NetfilterQueue**, which provides access to
+packets matched by an **iptables** rule. The classifier requires **root
+privileges** to be able to operate.
+
+So far it is capable of processing ACLs for MAC addresses, ports, and
+IPv4/IPv6 addresses. Supported protocols are TCP and UDP.
+
+Classifier Architecture
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The classifier is implemented in Python and located in the project
+repository at sfc-py/common/classifier.py.
+
+.. note::
+
+    The classifier assumes that the Rendered Service Path (RSP)
+    **already exists** in ODL when an ACL referencing it is obtained
+
+1. sfc\_agent receives an ACL and passes it to the classifier for
+   processing
+
+2. the RSP (its SFF locator) referenced by the ACL is requested from ODL
+
+3. if the RSP exists in ODL, then ACL-based iptables rules for it are
+   applied
+
+After this process is over, every packet successfully matched to an
+iptables rule (i.e. successfully classified) will be NSH encapsulated
+and forwarded to a related SFF, which knows how to traverse the RSP.
+
+Rules are created using the appropriate iptables command. If the Access
+Control Entry (ACE) rule is MAC-address related, both iptables and
+ip6tables rules are issued. If the ACE rule is IPv4-address related,
+only iptables rules are issued; likewise, only ip6tables rules for IPv6.
+
+.. note::
+
+    iptables **raw** table contains all created rules
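The table-selection logic described above can be sketched as follows (a simplified illustration; the function names and command layout are hypothetical, not the actual sfc-py code):

```python
# Sketch of the rule-issuing logic described above: MAC-based ACEs get
# both iptables and ip6tables rules, IPv4 ACEs only iptables rules,
# IPv6 ACEs only ip6tables rules. Names and command layout are
# illustrative only.

def tables_for_ace(ace_match):
    """Pick which netfilter tools need a rule for this ACE match."""
    if "mac-address" in ace_match:
        return ["iptables", "ip6tables"]
    if "ipv4" in ace_match:
        return ["iptables"]
    if "ipv6" in ace_match:
        return ["ip6tables"]
    return []

def build_rules(ace_match, chain):
    # all created rules go into the raw table
    return ["%s -t raw -A %s -j NFQUEUE" % (tool, chain)
            for tool in tables_for_ace(ace_match)]

print(build_rules({"mac-address": "00:11:22:33:44:55"}, "RSP-1"))
```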
+
+Information regarding already-registered RSP(s) is stored in an internal
+data store, which is represented as a dictionary:
+
+::
+
+    {rsp_id: {'name': <rsp_name>,
+              'chains': {'chain_name': (<ipv>,),
+                         ...
+                         },
+              'sff': {'ip': <ip>,
+                      'port': <port>,
+                      'starting-index': <starting-index>,
+                      'transport-type': <transport-type>
+                      },
+              },
+    ...
+    }
+
+-  ``name``: name of the RSP
+
+-  ``chains``: dictionary of iptables chains related to the RSP with
+   information about IP version for which the chain exists
+
+-  ``SFF``: SFF forwarding parameters
+
+   -  ``ip``: SFF IP address
+
+   -  ``port``: SFF port
+
+   -  ``starting-index``: index given to packet at first RSP hop
+
+   -  ``transport-type``: encapsulation protocol
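For illustration, an entry of this shape might be created and stored as follows (the values here are hypothetical, not taken from a real deployment):

```python
# Populate the internal data-store dictionary described above with one
# hypothetical RSP entry. Values (IP, port, transport) are examples.

rsp_store = {}

def register_rsp(rsp_id, name, sff):
    """Record a newly obtained RSP and its SFF forwarding parameters."""
    rsp_store[rsp_id] = {"name": name, "chains": {}, "sff": sff}

register_rsp(1, "RSP-example", {
    "ip": "10.0.0.1",            # SFF IP address
    "port": 6633,                # SFF port
    "starting-index": 255,       # index given to packet at first hop
    "transport-type": "vxlan-gpe",  # encapsulation protocol
})
# an iptables chain created for this RSP, IPv4 only
rsp_store[1]["chains"]["RSP-example-chain"] = ("ipv4",)

print(rsp_store[1]["sff"]["ip"])  # 10.0.0.1
```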
+
+Key APIs and Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~
+
+This feature exposes an API to configure the classifier (corresponding
+to service-function-classifier.yang).
+
+API Reference Documentation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+See: sfc-model/src/main/yang/service-function-classifier.yang
+
+SFC-OVS Plugin
+--------------
+
+Overview
+~~~~~~~~
+
+SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices.
+Integration is realized through mapping of SFC objects (like SF, SFF,
+Classifier, etc.) to OVS objects (like Bridge,
+TerminationPoint=Port/Interface). The mapping takes care of automatic
+instantiation (setup) of the corresponding object whenever its
+counterpart is created. For example, when a new SFF is created, the SFC-OVS plugin
+will create a new OVS bridge and when a new OVS Bridge is created, the
+SFC-OVS plugin will create a new SFF.
+
+SFC-OVS Architecture
+~~~~~~~~~~~~~~~~~~~~
+
+SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing
+information from/to OVS devices. The core functionality consists of two
+types of mapping:
+
+a. mapping from OVS to SFC
+
+   -  OVS Bridge is mapped to SFF
+
+   -  OVS TerminationPoints are mapped to SFF DataPlane locators
+
+b. mapping from SFC to OVS
+
+   -  SFF is mapped to OVS Bridge
+
+   -  SFF DataPlane locators are mapped to OVS TerminationPoints
+
+.. figure:: ./images/sfc/sfc-ovs-architecture.png
+   :alt: SFC < — > OVS mapping flow diagram
+
+   SFC < — > OVS mapping flow diagram
+
+Key APIs and Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~
+
+-  SFF to OVS mapping API (methods to convert SFF object to OVS Bridge
+   and OVS TerminationPoints)
+
+-  OVS to SFF mapping API (methods to convert OVS Bridge and OVS
+   TerminationPoints to SFF object)
+
+SFC Southbound REST Plugin
+--------------------------
+
+Overview
+~~~~~~~~
+
+The Southbound REST Plugin is used to send configuration from the ODL
+data store down to network devices supporting a REST API (i.e. they have
+a configured REST URI). It supports POST/PUT/DELETE operations, which
+are triggered accordingly by changes in the following SFC data stores:
+
+-  Access Control List (ACL)
+
+-  Service Classifier Function (SCF)
+
+-  Service Function (SF)
+
+-  Service Function Group (SFG)
+
+-  Service Function Schedule Type (SFST)
+
+-  Service Function Forwarder (SFF)
+
+-  Rendered Service Path (RSP)
+
+Southbound REST Plugin Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. **listeners** - used to listen for changes in the SFC data stores
+
+2. **JSON exporters** - used to export JSON-encoded data from
+   binding-aware data store objects
+
+3. **tasks** - used to collect REST URIs of network devices and to send
+   JSON-encoded data down to these devices
+
+.. figure:: ./images/sfc/sb-rest-architecture.png
+   :alt: Southbound REST Plugin Architecture diagram
+
+   Southbound REST Plugin Architecture diagram
+
+Key APIs and Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The plugin provides a Southbound REST API designated for listening REST
+devices. It supports POST/PUT/DELETE operations. The operation (with the
+corresponding JSON-encoded data) is sent to a unique REST URL belonging
+to a certain data type.
+
+-  Access Control List (ACL):
+   ``http://<host>:<port>/config/ietf-acl:access-lists/access-list/``
+
+-  Service Function (SF):
+   ``http://<host>:<port>/config/service-function:service-functions/service-function/``
+
+-  Service Function Group (SFG):
+   ``http://<host>:<port>/config/service-function:service-function-groups/service-function-group/``
+
+-  Service Function Schedule Type (SFST):
+   ``http://<host>:<port>/config/service-function-scheduler-type:service-function-scheduler-types/service-function-scheduler-type/``
+
+-  Service Function Forwarder (SFF):
+   ``http://<host>:<port>/config/service-function-forwarder:service-function-forwarders/service-function-forwarder/``
+
+-  Rendered Service Path (RSP):
+   ``http://<host>:<port>/operational/rendered-service-path:rendered-service-paths/rendered-service-path/``
+
+Therefore, network devices willing to receive REST messages must listen
+on these REST URLs.
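As a sketch, a helper that composes these URLs for a few of the data types listed above (host, port, and resource names are placeholders; the path strings follow the patterns shown):

```python
# Compose the Southbound REST URL for a given SFC data type, following
# the URL patterns listed above. Host/port/resource names below are
# placeholders for illustration.

URL_PATHS = {
    "acl": "/config/ietf-acl:access-lists/access-list/",
    "sf": "/config/service-function:service-functions/service-function/",
    "rsp": ("/operational/rendered-service-path:"
            "rendered-service-paths/rendered-service-path/"),
}

def rest_url(host, port, datatype, name):
    """Build the full URL to which the JSON-encoded operation is sent."""
    return "http://%s:%d%s%s" % (host, port, URL_PATHS[datatype], name)

print(rest_url("device1", 8080, "sf", "firewall-1"))
```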
+
+.. note::
+
+    A Service Classifier Function (SCF) URL does not exist, because the
+    SCF is considered one of the network devices willing to receive REST
+    messages. However, there is a listener hooked on the SCF data store
+    which triggers POST/PUT/DELETE operations on the ACL object, because
+    the ACL is referenced in ``service-function-classifier.yang``.
+
+Service Function Load Balancing Developer Guide
+-----------------------------------------------
+
+Overview
+~~~~~~~~
+
+The SFC Load-Balancing feature implements load balancing of Service
+Functions, rather than a one-to-one mapping between a Service Function
+Forwarder and a Service Function.
+
+Load Balancing Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Service Function Groups (SFG) can replace Service Functions (SF) in the
+Rendered Path model. A Service Path can only be defined using SFGs or
+SFs, but not a combination of both.
+
+Relevant objects in the YANG model are as follows:
+
+1. Service-Function-Group-Algorithm:
+
+   ::
+
+       Service-Function-Group-Algorithms {
+           Service-Function-Group-Algorithm {
+               String name
+               String type
+           }
+       }
+
+   ::
+
+       Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
+
+2. Service-Function-Group:
+
+   ::
+
+       Service-Function-Groups {
+           Service-Function-Group {
+               String name
+               String serviceFunctionGroupAlgorithmName
+               String type
+               String groupId
+               Service-Function-Group-Element {
+                   String service-function-name
+                   int index
+               }
+           }
+       }
+
+3. ServiceFunctionHop: holds a reference to a name of SFG (or SF)
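The model above can be rendered, for illustration only, as Python data classes (the actual model is defined in YANG/Java; the class and field names here are hypothetical):

```python
# Python rendering of the Service-Function-Group objects sketched
# above, for illustration only. Names and values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SfgElement:
    service_function_name: str
    index: int

@dataclass
class ServiceFunctionGroup:
    name: str
    algorithm_name: str  # references a Service-Function-Group-Algorithm
    group_type: str
    group_id: str
    elements: List[SfgElement] = field(default_factory=list)

sfg = ServiceFunctionGroup("fw-group", "round-robin-alg", "firewall", "1")
sfg.elements.append(SfgElement("fw-1", 0))
sfg.elements.append(SfgElement("fw-2", 1))
print([e.service_function_name for e in sfg.elements])  # ['fw-1', 'fw-2']
```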
+
+Key APIs and Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~
+
+This feature enhances the existing SFC API.
+
+REST API commands include:
+
+-  For Service Function Group (SFG): read existing SFG, write new SFG,
+   delete existing SFG, add Service Function (SF) to SFG, and delete SF
+   from SFG
+
+-  For Service Function Group Algorithm (SFG-Alg): read, write, delete
+
+Bundle providing the REST API: sfc-sb-rest
+
+-  Service Function Groups and Algorithms are defined in: sfc-sfg and
+   sfc-sfg-alg
+
+-  Relevant Java API: SfcProviderServiceFunctionGroupAPI,
+   SfcProviderServiceFunctionGroupAlgAPI
+
+Service Function Scheduling Algorithms
+--------------------------------------
+
+Overview
+~~~~~~~~
+
+When creating a Rendered Service Path (RSP), earlier releases of SFC
+chose the first available service function from a list of service
+function names. A new API is now introduced to allow developers to plug
+in their own scheduling algorithms when creating the RSP. Four
+scheduling algorithms (Random, Round Robin, Load Balance and Shortest
+Path) are provided as examples for the API definition. This guide gives
+a simple introduction to developing service function scheduling
+algorithms based on the current extensible framework.
+
+Architecture
+~~~~~~~~~~~~
+
+The following figure illustrates the service function selection
+framework and algorithms.
+
+.. figure:: ./images/sfc-sf-selection-arch.png
+   :alt: SF Scheduling Algorithm framework Architecture
+
+   SF Scheduling Algorithm framework Architecture
+
+The YANG Model defines the Service Function Scheduling Algorithm type
+identities and how they are stored in the MD-SAL data store for the
+scheduling algorithms.
+
+The MD-SAL data store stores all information for the scheduling
+algorithms, including their types, names, and status.
+
+The API provides some basic methods to manage the information stored in
+the MD-SAL data store, such as putting new items into it, getting all
+scheduling algorithms, etc.
+
+The RESTCONF API provides APIs to manage the information stored in the
+MD-SAL data store through RESTful calls.
+
+The Service Function Chain Renderer gets the enabled scheduling
+algorithm type, and schedules the service functions with scheduling
+algorithm implementation.
+
+Key APIs and Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~
+
+To develop a new Service Function Scheduling Algorithm, add a new class
+that extends the base scheduler class SfcServiceFunctionSchedulerAPI and
+implements the abstract method:
+
+``public List<String> scheduleServiceFuntions(ServiceFunctionChain chain, int serviceIndex)``.
+
+-  **``ServiceFunctionChain chain``**: the chain which will be rendered
+
+-  **``int serviceIndex``**: the initial service index for this rendered
+   service path
+
+-  **``List<String>``**: a list of service function names scheduled by
+   the Service Function Scheduling Algorithm.
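The extension pattern described above can be illustrated with a Python analogue (the real base class, SfcServiceFunctionSchedulerAPI, is Java; all names here are hypothetical):

```python
# Illustrative Python analogue of the scheduler extension pattern
# described above. The actual API is Java; names are hypothetical.
import random

class SchedulerBase:
    def schedule_service_functions(self, chain, service_index):
        raise NotImplementedError

class RandomScheduler(SchedulerBase):
    """Pick one SF name at random from the candidates for each hop."""
    def schedule_service_functions(self, chain, service_index):
        # chain: list of candidate SF names per hop in the service chain
        return [random.choice(candidates) for candidates in chain]

chain = [["dpi-1", "dpi-2"], ["fw-1"]]  # candidate SFs for two hops
path = RandomScheduler().schedule_service_functions(chain, 255)
print(path)  # e.g. ['dpi-2', 'fw-1']
```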
+
+API Reference Documentation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Please refer to the API docs generated in mdsal-apidocs.
+
diff --git a/docs/developer-guide/topology-processing-framework-developer-guide.rst b/docs/developer-guide/topology-processing-framework-developer-guide.rst
new file mode 100644 (file)
index 0000000..f781e1d
--- /dev/null
@@ -0,0 +1,1280 @@
+Topology Processing Framework Developer Guide
+=============================================
+
+Overview
+--------
+
+The Topology Processing Framework allows developers to aggregate and
+filter topologies according to defined correlations. It also provides
+functionality which you can use to create your own topology model by
+automating the translation from one model to another, for example,
+translating from the opendaylight-inventory model to the
+network-topology model.
+
+Architecture
+------------
+
+Chapter Overview
+~~~~~~~~~~~~~~~~
+
+In this chapter we describe the architecture of the Topology Processing
+Framework. In the first part, we provide information about available
+features and basic class relationships. In the second part, we describe
+our model specific approach, which is used to provide support for
+different models.
+
+Basic Architecture
+~~~~~~~~~~~~~~~~~~
+
+The Topology Processing Framework consists of several Karaf features:
+
+-  odl-topoprocessing-framework
+
+-  odl-topoprocessing-inventory
+
+-  odl-topoprocessing-network-topology
+
+-  odl-topoprocessing-i2rs
+
+-  odl-topoprocessing-inventory-rendering
+
+The feature odl-topoprocessing-framework contains the
+topoprocessing-api, topoprocessing-spi and topoprocessing-impl bundles.
+This feature is the core of the Topology Processing Framework and is
+required by all other features.
+
+-  topoprocessing-api - contains correlation definitions and definitions
+   required for rendering
+
+-  topoprocessing-spi - entry point for topoprocessing service (start
+   and close)
+
+-  topoprocessing-impl - contains base implementations of handlers,
+   listeners, aggregators and filtrators
+
+TopoProcessingProvider is the entry point for Topology Processing
+Framework. It requires a DataBroker instance. The DataBroker is needed
+for listener registration. There is also the TopologyRequestListener
+which listens on aggregated topology requests (placed into the
+configuration datastore) and UnderlayTopologyListeners which listen on
+underlay topology data changes (made in operational datastore). The
+TopologyRequestHandler saves toporequest data and provides a method for
+translating a path to the specified leaf. When a change in the topology
+occurs, the registered UnderlayTopologyListener processes this
+information for further aggregation and/or filtration. Finally, after an
+overlay topology is created, it is passed to the TopologyWriter, which
+writes this topology into operational datastore.
+
+.. figure:: ./images/topoprocessing/TopologyRequestHandler_classesRelationship.png
+   :alt: Class relationship
+
+   Class relationship
+
+[1] TopologyRequestHandler instantiates TopologyWriter and
+TopologyManager. Then, according to the request, it initializes either
+TopologyAggregator, TopologyFiltrator or LinkCalculator.
+
+[2] It creates as many instances of UnderlayTopologyListener as there
+are underlay topologies.
+
+[3] PhysicalNodes are created for relevant incoming nodes (those having
+node ID).
+
+[4a] It performs aggregation and creates logical nodes.
+
+[4b] It performs filtration and creates logical nodes.
+
+[4c] It performs link computation and creates links between logical
+nodes.
+
+[5] Logical nodes are put into wrapper.
+
+[6] The wrapper is translated into the appropriate format and written
+into datastore.
+
+Model Specific Approach
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The Topology Processing Framework consists of several modules and Karaf
+features, which provide support for different input models. Currently we
+support the network-topology, opendaylight-inventory and i2rs models.
+For each of these input models, the Topology Processing Framework has
+one module and one Karaf feature.
+
+How it works
+^^^^^^^^^^^^
+
+**User point of view:**
+
+When you start the odl-topoprocessing-framework feature, the Topology
+Processing Framework starts without knowing how to work with any input
+model. In order to allow the Topology Processing Framework to process
+some kind of input model, you must install one (or more) model specific
+features. Installing these features will also start the
+odl-topoprocessing-framework feature if it is not already running. These
+features inject appropriate logic into the odl-topoprocessing-framework
+feature. From that point, the Topology Processing Framework is able to
+process different kinds of input models, specifically those that you
+install features for.
+
+**Developer point of view:**
+
+The topoprocessing-impl module contains (among other things) classes and
+interfaces, which are common for every model specific topoprocessing
+module. These classes and interfaces are implemented and extended by
+classes in particular model specific modules. Model specific modules
+also depend on the TopoProcessingProvider class in the
+topoprocessing-spi module. This dependency is injected during
+installation of model specific features in Karaf. When a model specific
+feature is started, it calls the registerAdapters(adapters) method of
+the injected TopoProcessingProvider object. After this step, the
+Topology Processing Framework is able to use registered model adapters
+to work with input models.
+
+To achieve the described functionality, we created the ModelAdapter
+interface. It represents an installed feature and provides methods for
+creating crucial structures specific to each model.
+
+.. figure:: ./images/topoprocessing/ModelAdapter.png
+   :alt: ModelAdapter interface
+
+   ModelAdapter interface
+
+Model Specific Features
+^^^^^^^^^^^^^^^^^^^^^^^
+
+-  odl-topoprocessing-network-topology - this feature contains logic to
+   work with network-topology model
+
+-  odl-topoprocessing-inventory - this feature contains logic to work
+   with opendaylight-inventory model
+
+-  odl-topoprocessing-i2rs - this feature contains logic to work with
+   i2rs model
+
+Inventory Model Support
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The opendaylight-inventory model contains only nodes, termination
+points, and information regarding these structures. This model co-operates
+with network-topology model, where other topology related information is
+stored. This means that we have to handle two input models at once. To
+support the inventory model, InventoryListener and
+NotificationInterConnector classes were introduced. Please see the flow
+diagrams below.
+
+.. figure:: ./images/topoprocessing/Network_topology_model_flow_diagram.png
+   :alt: Network topology model
+
+   Network topology model
+
+.. figure:: ./images/topoprocessing/Inventory_model_listener_diagram.png
+   :alt: Inventory model
+
+   Inventory model
+
+Here we can see the InventoryListener and NotificationInterConnector
+classes. InventoryListener listens on data changes in the inventory
+model and passes these changes wrapped as an UnderlayItem for further
+processing to NotificationInterConnector. This UnderlayItem doesn’t
+contain node information; instead, it contains a leafNode (the node on
+which aggregation is based). The node information is stored in the
+topology model, where UnderlayTopologyListener is registered as usual.
+This listener
+delivers the missing information.
+
+Then the NotificationInterConnector combines the two notifications into
+a complete UnderlayItem (no null values) and delivers this UnderlayItem
+for further processing (to next TopologyOperator).
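
The combination step can be sketched with plain maps; the class and method names below (InterConnectorSketch, onLeafNode, onNodeInfo) are illustrative stand-ins, not the framework's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the NotificationInterConnector behavior described above: hold
// the half of the item that arrived first and emit a complete item once
// both the inventory half and the topology half are present.
public class InterConnectorSketch {
    private final Map<String, String> leafNodes = new HashMap<>();  // from InventoryListener
    private final Map<String, String> nodeInfos = new HashMap<>();  // from UnderlayTopologyListener

    /** Inventory notification arrived; returns a complete item or null. */
    public String onLeafNode(String id, String leafNode) {
        leafNodes.put(id, leafNode);
        return combineIfComplete(id);
    }

    /** Topology notification arrived; returns a complete item or null. */
    public String onNodeInfo(String id, String nodeInfo) {
        nodeInfos.put(id, nodeInfo);
        return combineIfComplete(id);
    }

    private String combineIfComplete(String id) {
        if (leafNodes.containsKey(id) && nodeInfos.containsKey(id)) {
            // No null fields remain: the combined item can be passed on.
            return nodeInfos.get(id) + "+" + leafNodes.get(id);
        }
        return null;
    }
}
```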
+
+Aggregation and Filtration
+--------------------------
+
+Chapter Overview
+~~~~~~~~~~~~~~~~
+
+The Topology Processing Framework allows the creation of aggregated
+topologies and filtered views over existing topologies. Currently,
+aggregation and filtration are supported for topologies that follow
+`network-topology <https://github.com/opendaylight/yangtools/blob/master/model/ietf/ietf-topology/src/main/yang/network-topology%402013-10-21.yang>`__,
+opendaylight-inventory or i2rs model. When a request to create an
+aggregated or filtered topology is received, the framework creates one
+listener per underlay topology. Whenever any specified underlay topology
+is changed, the appropriate listener is triggered with the change and
+the change is processed. Two types of correlations (functionalities) are
+currently supported:
+
+-  Aggregation
+
+   -  Unification
+
+   -  Equality
+
+-  Filtration
+
+Terminology
+~~~~~~~~~~~
+
+We use the term underlay item (physical node) for items (nodes, links,
+termination-points) from underlay topologies and the term overlay item
+(logical node) for items from overlay topologies, regardless of whether
+those items are actually physical network elements.
+
+Aggregation
+~~~~~~~~~~~
+
+Aggregation is an operation which creates an aggregated item from two or
+more items in the underlay topology if the aggregation condition is
+fulfilled. Requests for aggregated topologies must specify a list of
+underlay topologies over which the overlay (aggregated) topology will be
+created and a target field in the underlay item that the framework will
+check for equality.
+
+Create Overlay Node
+^^^^^^^^^^^^^^^^^^^
+
+First, each new underlay item is inserted into the proper topology
+store. Once the item is stored, the framework compares it (using the
+target field value) with all stored underlay items from underlay
+topologies. If there is a target-field match, a new overlay item is
+created containing pointers to all *equal* underlay items. The newly
+created overlay item is also given new references to its supporting
+underlay items.
+
+**Equality case:**
+
+If an item doesn’t fulfill the equality condition with any other items,
+processing finishes after adding the item into topology store. It will
+stay there for future use, ready to create an aggregated item with a new
+underlay item, with which it would satisfy the equality condition.
+
+**Unification case:**
+
+An overlay item is created for all underlay items, even those which
+don’t fulfill the equality condition with any other items. This means
+that an overlay item is created for every underlay item, but for items
+which satisfy the equality condition, an aggregated item is created.
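
The difference between the two cases can be sketched as follows, using plain strings for underlay items and their target-field values (the real framework works with YANG-modeled data, so this is only an illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hand-rolled sketch of the two aggregation modes described above; the
// item and store types are simplified stand-ins, not the framework's
// TopologyAggregator classes.
public class AggregationSketch {

    /** Group underlay item IDs by their target-field value. */
    static Map<String, List<String>> groupByTargetField(Map<String, String> itemToTargetField) {
        Map<String, List<String>> groups = new TreeMap<>();
        itemToTargetField.forEach((item, value) ->
                groups.computeIfAbsent(value, v -> new ArrayList<>()).add(item));
        return groups;
    }

    /** Equality: an overlay item is created only when two or more items match. */
    public static List<List<String>> equalityAggregate(Map<String, String> items) {
        List<List<String>> overlay = new ArrayList<>();
        for (List<String> group : groupByTargetField(items).values()) {
            if (group.size() >= 2) {
                overlay.add(group);
            }
        }
        return overlay;
    }

    /** Unification: every underlay item ends up in some overlay item. */
    public static List<List<String>> unificationAggregate(Map<String, String> items) {
        return new ArrayList<>(groupByTargetField(items).values());
    }
}
```

With three nodes where two share a target-field value, equality produces one overlay item (the matching pair), while unification also produces a singleton overlay item for the odd one out.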
+
+Update Node
+^^^^^^^^^^^
+
+Processing of updated underlay items depends on whether the target field
+has been modified. If yes, then:
+
+-  if the underlay item belonged to some overlay item, it is removed
+   from that item. Next, if the aggregation condition on the target
+   field is satisfied, the item is inserted into another overlay item.
+   If the condition isn’t met then:
+
+   -  in equality case - the item will not be present in overlay
+      topology.
+
+   -  in unification case - the item will create an overlay item with a
+      single underlay item and this will be written into overlay
+      topology.
+
+-  if the item didn’t belong to some overlay item, it is checked again
+   for aggregation with other underlay items.
+
+Remove Node
+^^^^^^^^^^^
+
+The underlay item is removed from the corresponding topology store and
+from its overlay item (if it belongs to one); in this way, it is also
+removed from the overlay topology.
+
+**Equality case:**
+
+If there is only one underlay item left in the overlay item, the overlay
+item is removed.
+
+**Unification case:**
+
+The overlay item is removed once it refers to no underlay item.
+
+Filtration
+~~~~~~~~~~
+
+Filtration is an operation which results in creation of overlay topology
+containing only items fulfilling conditions set in the topoprocessing
+request.
+
+Create Underlay Item
+^^^^^^^^^^^^^^^^^^^^
+
+If a newly created underlay item passes all filtrators and their
+conditions, it is stored in the topology store and a creation
+notification is delivered to the topology manager. Otherwise, no
+operation is performed.
+
+Update Underlay Item
+^^^^^^^^^^^^^^^^^^^^
+
+First, the updated item is checked for presence in topology store:
+
+-  if it is present in topology store:
+
+   -  if it meets the filtering conditions, then processUpdatedData
+      notification is triggered
+
+   -  else processRemovedData notification is triggered
+
+-  if item isn’t present in topology store
+
+   -  if item meets filtering conditions, then processCreatedData
+      notification is triggered
+
+   -  else it is ignored
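
The decision table above can be sketched as a small helper; the returned strings mirror the notification names mentioned in the list, but the class itself is purely illustrative:

```java
// Sketch of the update-handling decision table above (stand-in types):
// given whether the updated item is present in the topology store and
// whether it passes the filtering conditions, return which notification
// the filtrator would emit.
public class FiltrationUpdateSketch {

    public static String decide(boolean inStore, boolean passesFilter) {
        if (inStore) {
            // Already known item: update it, or remove it if it no longer passes.
            return passesFilter ? "processUpdatedData" : "processRemovedData";
        }
        // Unknown item: create it if it passes, otherwise ignore it.
        return passesFilter ? "processCreatedData" : "ignored";
    }
}
```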
+
+Remove Underlay Item
+^^^^^^^^^^^^^^^^^^^^
+
+If an underlay node is supporting some overlay node, the overlay node is
+simply removed.
+
+Default Filtrator Types
+^^^^^^^^^^^^^^^^^^^^^^^
+
+There are seven types of default filtrators defined in the framework:
+
+-  IPv4-address filtrator - checks if specified field meets IPv4 address
+   + mask criteria
+
+-  IPv6-address filtrator - checks if specified field meets IPv6 address
+   + mask criteria
+
+-  Specific number filtrator - checks for specific number
+
+-  Specific string filtrator - checks for specific string
+
+-  Range number filtrator - checks if specified field is higher than
+   provided minimum (inclusive) and lower than provided maximum
+   (inclusive)
+
+-  Range string filtrator - checks if specified field is alphabetically
+   greater than provided minimum (inclusive) and alphabetically lower
+   than provided maximum (inclusive)
+
+-  Script filtrator - allows a user or application to implement their
+   own filtrator
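
As an illustration of the range number filtrator's inclusive-bounds semantics, here is a minimal stand-alone sketch (the real filtrators operate on YANG-modeled fields, not plain longs):

```java
// Minimal sketch of the "range number" filtrator semantics described
// above; illustrative only, not the framework's filtrator class.
public class RangeNumberFiltrator {
    private final long min;
    private final long max;

    public RangeNumberFiltrator(long min, long max) {
        this.min = min;
        this.max = max;
    }

    /** Both bounds are inclusive, matching the description above. */
    public boolean passes(long value) {
        return value >= min && value <= max;
    }
}
```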
+
+Register Custom Filtrator
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There might be use cases that cannot be achieved with the default
+filtrators. In these cases, the framework offers the possibility for a
+user or application to register a custom filtrator.
+
+Pre-Filtration / Filtration & Aggregation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This feature was introduced in order to lower memory and performance
+demands. It is a combination of the filtration and aggregation
+operations. First, uninteresting items are filtered out and then
+aggregation is performed only on items that passed filtration. This way
+the framework saves on compute time. The PreAggregationFiltrator and
+TopologyAggregator share the same TopoStoreProvider (and thus topology
+store) which results in lower memory demands (as underlay items are
+stored only in one topology store - they aren’t stored twice).
+
+Link Computation
+----------------
+
+Chapter Overview
+~~~~~~~~~~~~~~~~
+
+While processing the topology request, we create overlay nodes with
+lists of supporting underlay nodes. Because these overlay nodes have
+completely new identifiers, we lose link information. To regain this
+link information, we provide Link Computation functionality. Its main
+purpose is to create new overlay links based on the links from the
+underlay topologies and underlay items from overlay items. The required
+information for Link Computation is provided via the Link Computation
+model in
+(`topology-link-computation.yang <https://git.opendaylight.org/gerrit/gitweb?p=topoprocessing.git;a=blob;f=topoprocessing-api/src/main/yang/topology-link-computation.yang;hb=refs/heads/stable/beryllium>`__).
+
+Link Computation Functionality
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Let us consider two topologies with following components:
+
+Topology 1:
+
+-  Node: ``node:1:1``
+
+-  Node: ``node:1:2``
+
+-  Node: ``node:1:3``
+
+-  Link: ``link:1:1`` (from ``node:1:1`` to ``node:1:2``)
+
+-  Link: ``link:1:2`` (from ``node:1:3`` to ``node:1:2``)
+
+Topology 2:
+
+-  Node: ``node:2:1``
+
+-  Node: ``node:2:2``
+
+-  Node: ``node:2:3``
+
+-  Link: ``link:2:1`` (from ``node:2:1`` to ``node:2:3``)
+
+Now let’s say that we applied some operations over these topologies
+that result in aggregating together
+
+-  ``node:1:1`` and ``node:2:3`` (``node:1``)
+
+-  ``node:1:2`` and ``node:2:2`` (``node:2``)
+
+-  ``node:1:3`` and ``node:2:1`` (``node:3``)
+
+At this point we can no longer use the existing links in the new
+topology because of the node ID changes, so we must create new overlay
+links whose source and destination nodes are set to the new node IDs.
+This means that ``link:1:1`` from topology 1 will produce a new link,
+``link:1``. Since its original source (``node:1:1``) is aggregated under
+``node:1``, that node becomes the source of ``link:1``; by the same
+method, the destination will be ``node:2``. The final output will be
+three links:
+
+-  ``link:1``, from ``node:1`` to ``node:2``
+
+-  ``link:2``, from ``node:3`` to ``node:2``
+
+-  ``link:3``, from ``node:3`` to ``node:1``
+
+.. figure:: ./images/topoprocessing/LinkComputation.png
+   :alt: Overlay topology with computed links
+
+   Overlay topology with computed links
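
The example above can be replayed with plain maps: given the underlay-to-overlay node mapping, each underlay link's endpoints are rewritten to overlay node IDs. This is an illustrative sketch, not the framework's LinkCalculator:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the link computation described above: rewrite each underlay
// link's endpoints to their aggregated overlay node IDs.
public class LinkComputationExample {

    public static List<String> computeOverlayLinks(Map<String, String> nodeToOverlay,
                                                   List<String[]> underlayLinks) {
        List<String> overlayLinks = new ArrayList<>();
        for (String[] link : underlayLinks) {
            String src = nodeToOverlay.get(link[0]);
            String dst = nodeToOverlay.get(link[1]);
            if (src != null && dst != null) {   // both endpoints were aggregated
                overlayLinks.add(src + "->" + dst);
            }
        }
        return overlayLinks;
    }
}
```

Feeding in the aggregation mapping and the three underlay links from the example reproduces the three overlay links listed above.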
+
+In-Depth Look
+~~~~~~~~~~~~~
+
+The main logic behind Link Computation is executed in the LinkCalculator
+operator. The required information is passed to LinkCalculator through
+the LinkComputation section of the topology request. This section is
+defined in the topology-link-computation.yang file. The main logic also
+covers cases when some underlay nodes may not pass through other
+topology operators.
+
+Link Computation Model
+^^^^^^^^^^^^^^^^^^^^^^
+
+There are three essential pieces of information for link computations.
+All of them are provided within the LinkComputation section. These
+pieces are:
+
+-  output model
+
+.. code:: yang
+
+    leaf output-model {
+        type identityref {
+            base topo-corr:model;
+        }
+        description "Desired output model for computed links.";
+    }
+
+-  overlay topology with new nodes
+
+.. code:: yang
+
+    container node-info {
+        leaf node-topology {
+            type string;
+            mandatory true;
+            description "Topology that contains aggregated nodes.
+                         This topology will be used for storing computed links.";
+        }
+        uses topo-corr:input-model-grouping;
+    }
+
+-  underlay topologies with original links
+
+.. code:: yang
+
+    list link-info {
+        key "link-topology input-model";
+        leaf link-topology {
+            type string;
+            mandatory true;
+            description "Topology that contains underlay (base) links.";
+        }
+        leaf aggregated-links {
+            type boolean;
+            description "Defines if link computation should be based on supporting-links.";
+        }
+        uses topo-corr:input-model-grouping;
+    }
+
+This whole section is augmented into ``network-topology:topology``.
+Placing this section outside the correlations section allows us to send
+a link computation request separately from a topology operations
+request.
+
+Main Logic
+^^^^^^^^^^
+
+Taking into consideration that some of the underlay nodes may not
+transform into overlay nodes (e.g. they are filtered out), we created
+two possible states for links:
+
+-  matched - a link is considered as matched when both original source
+   and destination node were transformed to overlay nodes
+
+-  waiting - a link is considered as waiting if original source,
+   destination or both nodes are missing from the overlay topology
+
+All links in the waiting state are stored in the waitingLinks list,
+matched links are stored in the matchedLinks list, and overlay nodes are
+stored in the storedOverlayNodes list. All processing is based only on
+information in these lists. Processing of created, updated, and removed
+underlay items differs slightly and is described separately in the
+sections below.
+
+**Processing Created Items**
+
+Created items can be either nodes or links, depending on the type of
+listener from which they came. In the case of a link, it is immediately
+added to waitingLinks and calculation for possible overlay link
+creations (calculatePossibleLink) is started. The flow diagram for this
+process is shown in the following picture:
+
+.. figure:: ./images/topoprocessing/LinkComputationFlowDiagram.png
+   :alt: Flow diagram of processing created items
+
+   Flow diagram of processing created items
+
+The search for the source and destination nodes in the
+calculatePossibleLink method runs over each node in storedOverlayNodes,
+and the IDs of each supporting node are compared against the IDs of the
+underlay link’s source and destination nodes. If either node is
+missing, the link remains in the waiting state. If both the source and
+destination nodes are found, the corresponding overlay nodes are
+recorded as the new source and destination. The link is then removed
+from waitingLinks and a new CalculatedLink is added to the matched
+links. At the end, the new link (if it exists) is written into the
+datastore.
+
+If the created item is an overlay node, it is added to
+storedOverlayNodes and calculatePossibleLink is called for every link in
+waitingLinks.
+
+**Processing Updated Items**
+
+The difference from processing created items is that we have three
+possible types of updated items: overlay nodes, waiting underlay links,
+and matched underlay links.
+
+-  In the case of a change in a matched link, this must be recalculated
+   and based on the result it will either be matched with new source and
+   destination or will be returned to waiting links. If the link is
+   moved back to a waiting state, it must also be removed from the
+   datastore.
+
+-  In the case of change in a waiting link, it is passed to the
+   calculation process and based on the result will either remain in
+   waiting state or be promoted to the matched state.
+
+-  In the case of a change in an overlay node, storedOverlayNodes must
+   be updated properly and all links must be recalculated in case of
+   changes.
+
+**Processing Removed items**
+
+As with processing updated items, there can be three types of removed
+items:
+
+-  In case of waiting link removal, the link is just removed from
+   waitingLinks
+
+-  In case of matched link removal, the link is removed from
+   matchingLinks and datastore
+
+-  In case of overlay node removal, the node must be removed from
+   storedOverlayNodes and all matching links must be recalculated
+
+Wrapper, RPC Republishing, Writing Mechanism
+--------------------------------------------
+
+Chapter Overview
+~~~~~~~~~~~~~~~~
+
+During the process of aggregation and filtration, overlay items (so
+called logical nodes) were created from underlay items (physical nodes).
+In the topology manager, overlay items are put into a wrapper. A wrapper
+is identified by a unique ID and contains a list of logical nodes.
+Wrappers are used to deal with transitivity of underlay items - which
+permits grouping of overlay items (into wrappers).
+
+.. figure:: ./images/topoprocessing/wrapper.png
+   :alt: Wrapper
+
+   Wrapper
+
+PN1, PN2, PN3 = physical nodes
+
+LN1, LN2 = logical nodes
+
+RPC Republishing
+~~~~~~~~~~~~~~~~
+
+All RPCs registered to handle underlay items are re-registered under
+their corresponding wrapper ID: the RPCs of the underlay items belonging
+to an overlay item are gathered and registered under the ID of their
+wrapper.
+
+RPC Call
+^^^^^^^^
+
+When an RPC is called on an overlay item, the call is delegated to its
+underlay items; that is, the RPC is called on all underlay items of
+this overlay item.
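
A minimal sketch of this delegation, with underlay items represented as plain IDs and the RPC as a callback (illustrative only, not the MD-SAL RPC registry API):

```java
import java.util.List;
import java.util.function.Consumer;

// Sketch of RPC delegation described above: invoking an RPC on an
// overlay item fans the call out to every underlay item it wraps.
public class OverlayRpcDelegator {

    /** Calls the underlay RPC once per underlay item; returns the call count. */
    public static int invokeOnOverlay(List<String> underlayItemIds,
                                      Consumer<String> underlayRpc) {
        for (String id : underlayItemIds) {
            underlayRpc.accept(id);
        }
        return underlayItemIds.size();
    }
}
```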
+
+Writing Mechanism
+~~~~~~~~~~~~~~~~~
+
+When a wrapper (containing overlay items with their underlay items) is
+ready to be written into the datastore, it has to be converted into DOM
+format. After this translation is done, the result is written into the
+datastore. Physical nodes are stored as supporting-nodes. In order to
+use resources responsibly, the writing operation is divided into two
+steps. First, a set of threads registers prepared operations (deletes
+and puts), and then one thread performs the actual write operation in a
+batch.
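
The two-step mechanism can be sketched as follows, with a plain map standing in for the datastore (illustrative; the real writer works with DOM transactions):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the two-step write described above: many threads register
// prepared puts/deletes, one writer thread flushes them as a batch.
public class BatchingWriter {
    private final Map<String, String> pending = new LinkedHashMap<>();
    private final Map<String, String> datastore = new LinkedHashMap<>();

    /** Step 1: register a prepared operation (null value means delete). */
    public synchronized void register(String path, String value) {
        pending.put(path, value);
    }

    /** Step 2: one writer thread applies all registered operations in batch. */
    public synchronized int flush() {
        int applied = pending.size();
        pending.forEach((path, value) -> {
            if (value == null) {
                datastore.remove(path);
            } else {
                datastore.put(path, value);
            }
        });
        pending.clear();
        return applied;
    }

    public synchronized String read(String path) {
        return datastore.get(path);
    }
}
```

Nothing becomes visible in the stand-in datastore until flush() runs, mirroring how registered operations only take effect in the single batched write.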
+
+Topology Rendering Guide - Inventory Rendering
+----------------------------------------------
+
+Chapter Overview
+~~~~~~~~~~~~~~~~
+
+In the most recent OpenDaylight release, the opendaylight-inventory
+model is marked as deprecated. To facilitate migration from it to the
+network-topology model, there were requests to render (translate) data
+from inventory model (whether augmented or not) to another model for
+further processing. The Topology Processing Framework was extended to
+provide this functionality by implementing several rendering-specific
+classes. This chapter is a step-by-step guide on how to implement your
+own topology rendering using our inventory rendering as an example.
+
+Use case
+~~~~~~~~
+
+For the purpose of this guide we are going to render the following
+augmented fields from the OpenFlow model:
+
+-  from inventory node:
+
+   -  manufacturer
+
+   -  hardware
+
+   -  software
+
+   -  serial-number
+
+   -  description
+
+   -  ip-address
+
+-  from inventory node-connector:
+
+   -  name
+
+   -  hardware-address
+
+   -  current-speed
+
+   -  maximum-speed
+
+We also want to preserve the node ID and termination-point ID from
+opendaylight-topology-inventory model, which is network-topology part of
+the inventory model.
+
+Implementation
+~~~~~~~~~~~~~~
+
+There are two ways to implement support for your specific topology
+rendering:
+
+-  add a module to your project that depends on the Topology Processing
+   Framework
+
+-  add a module to the Topology Processing Framework itself
+
+Regardless, a successful implementation must complete all of the
+following steps.
+
+Step1 - Target Model Creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Because the network-topology node does not have fields to store all of
+the desired data, it is necessary to create a new model into which this
+extra data can be rendered. For this guide we created the
+inventory-rendering model. The picture below shows how data will be
+rendered and stored.
+
+.. figure:: ./images/topoprocessing/Inventory_Rendering_Use_case.png
+   :alt: Rendering to the inventory-rendering model
+
+   Rendering to the inventory-rendering model
+
+    **Important**
+
+    When implementing your version of the topology-rendering model in
+    the Topology Processing Framework, the source file of the model
+    (.yang) must be saved in /topoprocessing-api/src/main/yang folder so
+    corresponding structures can be generated during build and can be
+    accessed from every module through dependencies.
+
+When the target model is created, you have to add an identity through
+which you can set your new model as the output model. To do that, add
+another identity item to the topology-correlation.yang file. For our
+inventory-rendering model, the identity looks like this:
+
+.. code:: yang
+
+    identity inventory-rendering-model {
+        description "inventory-rendering.yang";
+        base model;
+    }
+
+After that you will be able to set inventory-rendering-model as output
+model in XML.
+
+Step2 - Module and Feature Creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+    **Important**
+
+    This and following steps are based on the `model specific
+    approach <#_model_specific_approach>`__ in the Topology Processing
+    Framework. We highly recommend that you familiarize yourself with
+    this approach in advance.
+
+To create a base module and add it as a feature to Karaf in the Topology
+Processing Framework, we made the changes in the following
+`commit <https://git.opendaylight.org/gerrit/#/c/26223/>`__. Changes in
+other projects will likely be similar.
+
++--------------------------------------+--------------------------------------+
+| File                                 | Changes                              |
++======================================+======================================+
+| pom.xml                              | add new module to topoprocessing     |
++--------------------------------------+--------------------------------------+
+| features.xml                         | add feature to topoprocessing        |
++--------------------------------------+--------------------------------------+
+| features/pom.xml                     | add dependencies needed by features  |
++--------------------------------------+--------------------------------------+
+| topoprocessing-artifacts/pom.xml     | add artifact                         |
++--------------------------------------+--------------------------------------+
+| topoprocessing-config/pom.xml        | add configuration file               |
++--------------------------------------+--------------------------------------+
+| 81-topoprocessing-inventory-renderin | configuration file for new module    |
+| g-config.xml                         |                                      |
++--------------------------------------+--------------------------------------+
+| topoprocessing-inventory-rendering/p | main pom for new module              |
+| om.xml                               |                                      |
++--------------------------------------+--------------------------------------+
+| TopoProcessingProviderIR.java        | contains startup method which        |
+|                                      | register new model adapter           |
++--------------------------------------+--------------------------------------+
+| TopoProcessingProviderIRModule.java  | generated class which contains       |
+|                                      | createInstance method. You should    |
+|                                      | call your startup method from here.  |
++--------------------------------------+--------------------------------------+
+| TopoProcessingProviderIRModuleFactor | generated class. You will probably   |
+| y.java                               | not need to edit this file           |
++--------------------------------------+--------------------------------------+
+| log4j.xml                            | configuration file for logger        |
+|                                      | topoprocessing-inventory-rendering-p |
+|                                      | rovider-impl.yang                    |
++--------------------------------------+--------------------------------------+
+
+Step3 - Module Adapters Creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are seven mandatory interfaces or abstract classes that need to
+be implemented in each module. They are:
+
+-  TopoProcessingProvider - provides module registration
+
+-  ModelAdapter - provides model specific instances
+
+-  TopologyRequestListener - listens on changes in the configuration
+   datastore
+
+-  TopologyRequestHandler - processes configuration datastore changes
+
+-  UnderlayTopologyListener - listens for changes in the specific model
+
+-  LinkTranslator and NodeTranslator - used by OverlayItemTranslator to
+   create NormalizedNodes from OverlayItems
+
+The naming convention we used was to add an abbreviation for the
+specific model to the beginning of the implementing class name (e.g.
+IRModelAdapter refers to the class which implements ModelAdapter in the
+Inventory Rendering module). In the case of the provider class, we put
+the abbreviation at the end.
+
+    **Important**
+
+    -  In the next sections, we use the terms TopologyRequestListener,
+       TopologyRequestHandler, etc. without a prepended or appended
+       abbreviation because the steps apply regardless of which specific
+       model you are targeting.
+
+    -  If you want to implement rendering from inventory to
+       network-topology, you can just copy-paste our module and
+       additional changes will be required only in the output part.
+
+**Provider part**
+
+This part is the starting point of the whole module. It is responsible
+for creating and registering TopologyRequestListeners. It is necessary
+to create three classes which will import:
+
+-  **TopoProcessingProviderModule** - is a generated class from
+   topoprocessing-inventory-rendering-provider-impl.yang (created in
+   previous step, file will appear after first build). Its method
+   ``createInstance()`` is called at the feature start and must be
+   modified to create an instance of TopoProcessingProvider and call its
+   ``startup(TopoProcessingProvider topoProvider)`` function.
+
+-  **TopoProcessingProvider** - in
+   ``startup(TopoProcessingProvider topoProvider)`` function provides
+   ModelAdapter registration to TopoProcessingProviderImpl.
+
+-  **ModelAdapter** - provides creation of corresponding module specific
+   classes.
+
+**Input part**
+
+This includes the creation of the classes responsible for input data
+processing. In this case, we had to create five classes implementing:
+
+-  **TopologyRequestListener** and **TopologyRequestHandler** - when
+   notified about a change in the configuration datastore, verify if the
+   change contains a topology request (has correlations in it) and
+   creates UnderlayTopologyListeners if needed. The implementation of
+   these classes will differ according to the model in which the
+   correlations are saved (network-topology or i2rs). If you are using
+   network-topology as the input model, you can use our classes
+   IRTopologyRequestListener and IRTopologyRequestHandler.
+
+-  **UnderlayTopologyListener** - registers underlay listeners according
+   to the input model. In our case (listening in the inventory model), we
+   created listeners for the network-topology and inventory models,
+   set the NotificationInterConnector as the first operator, and set
+   the IRRenderingOperator as the second operator (after the
+   NotificationInterConnector). As with the
+   TopologyRequestListener/Handler, if you are rendering from the
+   inventory model, you can use our class IRUnderlayTopologyListener.
+
+-  **InventoryListener** - a new implementation of this class is
+   required only for the inventory input model, because the
+   InventoryListener from topoprocessing-impl requires a
+   pathIdentifier, which is absent in the case of rendering.
+
+-  **TopologyOperator** - replaces the classic topoprocessing operator.
+   While the classic operator performs specific operations on the
+   topology, the rendering operator just wraps each received
+   UnderlayItem into an OverlayItem and sends it on to be written.
+
+    **Important**
+
+    For the purposes of topology rendering from inventory to
+    network-topology, the UnderlayItem fields are repurposed as
+    follows:
+
+    -  item - contains the node from the network-topology part of
+       inventory
+
+    -  leafItem - contains the node from inventory
+
+    If you implement your own UnderlayTopologyListener or
+    InventoryListener, you have to create UnderlayItems according to
+    these conventions.
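The wrapping performed by the rendering operator can be sketched as follows. This is an illustrative Python sketch only; the actual implementation is Java, and the class and method names here are simplified stand-ins for the framework's types:

```python
class UnderlayItem:
    """Simplified stand-in: carries one underlay node and its identifiers."""
    def __init__(self, item, leaf_item, item_id):
        self.item = item            # node from the network-topology part of inventory
        self.leaf_item = leaf_item  # node from inventory
        self.item_id = item_id      # node identifier

class OverlayItem:
    """Simplified stand-in: wraps a list of underlay items."""
    def __init__(self, underlay_items):
        self.underlay_items = underlay_items

class RenderingOperator:
    """Wraps each received UnderlayItem into an OverlayItem for the writer."""
    def __init__(self, writer):
        self.writer = writer

    def process_created_changes(self, underlay_item):
        # No aggregation or filtering: one UnderlayItem -> one OverlayItem.
        self.writer.append(OverlayItem([underlay_item]))

written = []
operator = RenderingOperator(written)
operator.process_created_changes(UnderlayItem("nt-node", "inv-node", "openflow:1"))
```

The point of the sketch is that, unlike aggregation or filtration operators, the rendering operator performs no topology logic of its own.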
+
+**Output part**
+
+The output part of topology rendering is responsible for translating
+received overlay items into normalized nodes. In the case of inventory
+rendering, this is where node information from inventory is combined
+with node information from network-topology. This combined information
+is stored in our inventory-rendering model normalized node and passed to
+the writer.
+
+The output part consists of two translators implementing the
+NodeTranslator and LinkTranslator interfaces.
+
+**NodeTranslator implementation** - The NodeTranslator interface has one
+``translate(OverlayItemWrapper wrapper)`` method. For our purposes, the
+important part of the wrapper is its list of OverlayItems, which share
+one or more common UnderlayItems. Although this is a list, in the case
+of rendering it always contains exactly one OverlayItem. That item in
+turn has a list of UnderlayItems, but again, in the case of rendering,
+this list holds exactly one UnderlayItem. In NodeTranslator, the
+OverlayItem and corresponding UnderlayItem represent nodes from the
+model being translated.
+
+The UnderlayItem has several attributes. How you use these attributes
+in your rendering is up to you, since you create this item in your
+topology operator. For example, as mentioned above, our inventory
+rendering stores the inventory node (a normalized node) in the
+UnderlayItem's leafNode attribute, and the node-id from the
+network-topology model in the UnderlayItem's itemId attribute. You can
+use these attributes to build a normalized node for your new model. How
+to read and create normalized nodes is out of the scope of this
+document.
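The unwrapping described above can be sketched like this (again an illustrative Python sketch, not the actual Java code; the attribute names mirror the UnderlayItem fields described earlier, and the returned dict merely stands in for building a normalized node):

```python
from types import SimpleNamespace

def translate(wrapper):
    """Sketch of NodeTranslator.translate(): in rendering, the wrapper always
    holds exactly one OverlayItem, which holds exactly one UnderlayItem."""
    underlay = wrapper.overlay_items[0].underlay_items[0]
    return {
        "node-id": underlay.item_id,           # node-id from the network-topology model
        "inventory-node": underlay.leaf_node,  # normalized node from inventory
    }

# Minimal wrapper mirroring the structure described above.
wrapper = SimpleNamespace(overlay_items=[SimpleNamespace(underlay_items=[
    SimpleNamespace(item_id="openflow:1",
                    leaf_node="<inventory normalized node>")])])
node = translate(wrapper)
```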
+
+**LinkTranslator implementation** - The LinkTranslator interface also
+has one ``translate(OverlayItemWrapper wrapper)`` method. In our
+inventory rendering this method returns ``null``, because the inventory
+model doesn’t have links. If you need links as well, this is the place
+where you should translate them into normalized nodes for your model.
+In LinkTranslator, the OverlayItem and corresponding UnderlayItem
+represent links from the model being translated. As in NodeTranslator,
+there will be only one OverlayItem and one UnderlayItem in the
+corresponding lists.
+
+Testing
+~~~~~~~
+
+If you want to test topoprocessing with some manually created underlay
+topologies (like those in this guide), then you have to tell
+Topoprocessing to listen for underlay topologies on the Configuration
+datastore instead of the Operational one.
+
+| You can do this in the config file
+| ``<topoprocessing_directory>/topoprocessing-config/src/main/resources/80-topoprocessing-config.xml``,
+| where you have to change
+| ``<datastore-type>OPERATIONAL</datastore-type>``
+| to
+| ``<datastore-type>CONFIGURATION</datastore-type>``.
+
+
+You also have to add the dependency required to test "inventory"
+topologies.
+
+| In ``<topoprocessing_directory>/features/pom.xml``,
+| add ``<openflowplugin.version>latest_snapshot</openflowplugin.version>``
+  to the properties section
+| and add this dependency to the dependencies section:
+
+.. code:: xml
+
+        <dependency>
+                <groupId>org.opendaylight.openflowplugin</groupId>
+                <artifactId>features-openflowplugin</artifactId>
+                <version>${openflowplugin.version}</version>
+                <classifier>features</classifier><type>xml</type>
+        </dependency>
+
+Replace ``latest_snapshot`` in ``<openflowplugin.version>`` with the latest snapshot version, which can be found `here <https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/openflowplugin/openflowplugin/>`__.
+
+| And in ``<topoprocessing_directory>/features/src/main/resources/features.xml``,
+| add ``<repository>mvn:org.opendaylight.openflowplugin/features-openflowplugin/${openflowplugin.version}/xml/features</repository>``
+  to the repositories section.
+
+Now, after you rebuild the project and start Karaf, you can install the necessary features.
+
+| You can install all with one command:
+| ``feature:install odl-restconf-noauth odl-topoprocessing-inventory-rendering odl-openflowplugin-southbound odl-openflowplugin-nsf-model``
+
+Now you can send messages over REST from any REST client (e.g. Postman
+in Chrome). Messages have to have the following headers:
+
++--------------------------------------+--------------------------------------+
+| Header                               | Value                                |
++======================================+======================================+
+| Content-Type:                        | application/xml                      |
++--------------------------------------+--------------------------------------+
+| Accept:                              | application/xml                      |
++--------------------------------------+--------------------------------------+
+| username:                            | admin                                |
++--------------------------------------+--------------------------------------+
+| password:                            | admin                                |
++--------------------------------------+--------------------------------------+
+
+First, send a topology request to
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/render:1
+with the PUT method. Example of a simple rendering request:
+
+.. code:: xml
+
+    <topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
+      <topology-id>render:1</topology-id>
+        <correlations xmlns="urn:opendaylight:topology:correlation" >
+          <output-model>inventory-rendering-model</output-model>
+          <correlation>
+             <correlation-id>1</correlation-id>
+              <type>rendering-only</type>
+              <correlation-item>node</correlation-item>
+              <rendering>
+                <underlay-topology>und-topo:1</underlay-topology>
+            </rendering>
+          </correlation>
+        </correlations>
+    </topology>
+
+This request says that we want to create a topology named render:1,
+that it should be stored in the inventory-rendering-model, and that it
+should be created from topology und-topo:1 by node rendering.
+
+Next, we send the network-topology part of topology und-topo:1. So to
+the URL
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/und-topo:1
+we PUT:
+
+.. code:: xml
+
+    <topology xmlns="urn:TBD:params:xml:ns:yang:network-topology"
+              xmlns:it="urn:opendaylight:model:topology:inventory"
+              xmlns:i="urn:opendaylight:inventory">
+        <topology-id>und-topo:1</topology-id>
+        <node>
+            <node-id>openflow:1</node-id>
+            <it:inventory-node-ref>
+        /i:nodes/i:node[i:id="openflow:1"]
+            </it:inventory-node-ref>
+            <termination-point>
+                <tp-id>tp:1</tp-id>
+                <it:inventory-node-connector-ref>
+                    /i:nodes/i:node[i:id="openflow:1"]/i:node-connector[i:id="openflow:1:1"]
+                </it:inventory-node-connector-ref>
+            </termination-point>
+        </node>
+    </topology>
+
+The last input is the inventory part of the topology. To the URL
+http://localhost:8181/restconf/config/opendaylight-inventory:nodes we
+PUT:
+
+.. code:: xml
+
+    <nodes
+        xmlns="urn:opendaylight:inventory">
+        <node>
+            <id>openflow:1</id>
+            <node-connector>
+                <id>openflow:1:1</id>
+                <port-number
+                    xmlns="urn:opendaylight:flow:inventory">1
+                </port-number>
+                <current-speed
+                    xmlns="urn:opendaylight:flow:inventory">10000000
+                </current-speed>
+                <name
+                    xmlns="urn:opendaylight:flow:inventory">s1-eth1
+                </name>
+                <supported
+                    xmlns="urn:opendaylight:flow:inventory">
+                </supported>
+                <current-feature
+                    xmlns="urn:opendaylight:flow:inventory">copper ten-gb-fd
+                </current-feature>
+                <configuration
+                    xmlns="urn:opendaylight:flow:inventory">
+                </configuration>
+                <peer-features
+                    xmlns="urn:opendaylight:flow:inventory">
+                </peer-features>
+                <maximum-speed
+                    xmlns="urn:opendaylight:flow:inventory">0
+                </maximum-speed>
+                <advertised-features
+                    xmlns="urn:opendaylight:flow:inventory">
+                </advertised-features>
+                <hardware-address
+                    xmlns="urn:opendaylight:flow:inventory">0E:DC:8C:63:EC:D1
+                </hardware-address>
+                <state
+                    xmlns="urn:opendaylight:flow:inventory">
+                    <link-down>false</link-down>
+                    <blocked>false</blocked>
+                    <live>false</live>
+                </state>
+                <flow-capable-node-connector-statistics
+                    xmlns="urn:opendaylight:port:statistics">
+                    <receive-errors>0</receive-errors>
+                    <receive-frame-error>0</receive-frame-error>
+                    <receive-over-run-error>0</receive-over-run-error>
+                    <receive-crc-error>0</receive-crc-error>
+                    <bytes>
+                        <transmitted>595</transmitted>
+                        <received>378</received>
+                    </bytes>
+                    <receive-drops>0</receive-drops>
+                    <duration>
+                        <second>28</second>
+                        <nanosecond>410000000</nanosecond>
+                    </duration>
+                    <transmit-errors>0</transmit-errors>
+                    <collision-count>0</collision-count>
+                    <packets>
+                        <transmitted>7</transmitted>
+                        <received>5</received>
+                    </packets>
+                    <transmit-drops>0</transmit-drops>
+                </flow-capable-node-connector-statistics>
+            </node-connector>
+            <node-connector>
+                <id>openflow:1:LOCAL</id>
+                <port-number
+                    xmlns="urn:opendaylight:flow:inventory">4294967294
+                </port-number>
+                <current-speed
+                    xmlns="urn:opendaylight:flow:inventory">0
+                </current-speed>
+                <name
+                    xmlns="urn:opendaylight:flow:inventory">s1
+                </name>
+                <supported
+                    xmlns="urn:opendaylight:flow:inventory">
+                </supported>
+                <current-feature
+                    xmlns="urn:opendaylight:flow:inventory">
+                </current-feature>
+                <configuration
+                    xmlns="urn:opendaylight:flow:inventory">
+                </configuration>
+                <peer-features
+                    xmlns="urn:opendaylight:flow:inventory">
+                </peer-features>
+                <maximum-speed
+                    xmlns="urn:opendaylight:flow:inventory">0
+                </maximum-speed>
+                <advertised-features
+                    xmlns="urn:opendaylight:flow:inventory">
+                </advertised-features>
+                <hardware-address
+                    xmlns="urn:opendaylight:flow:inventory">BA:63:87:0C:76:41
+                </hardware-address>
+                <state
+                    xmlns="urn:opendaylight:flow:inventory">
+                    <link-down>false</link-down>
+                    <blocked>false</blocked>
+                    <live>false</live>
+                </state>
+                <flow-capable-node-connector-statistics
+                    xmlns="urn:opendaylight:port:statistics">
+                    <receive-errors>0</receive-errors>
+                    <receive-frame-error>0</receive-frame-error>
+                    <receive-over-run-error>0</receive-over-run-error>
+                    <receive-crc-error>0</receive-crc-error>
+                    <bytes>
+                        <transmitted>576</transmitted>
+                        <received>468</received>
+                    </bytes>
+                    <receive-drops>0</receive-drops>
+                    <duration>
+                        <second>28</second>
+                        <nanosecond>426000000</nanosecond>
+                    </duration>
+                    <transmit-errors>0</transmit-errors>
+                    <collision-count>0</collision-count>
+                    <packets>
+                        <transmitted>6</transmitted>
+                        <received>6</received>
+                    </packets>
+                    <transmit-drops>0</transmit-drops>
+                </flow-capable-node-connector-statistics>
+            </node-connector>
+            <serial-number
+                xmlns="urn:opendaylight:flow:inventory">None
+            </serial-number>
+            <manufacturer
+                xmlns="urn:opendaylight:flow:inventory">Nicira, Inc.
+            </manufacturer>
+            <hardware
+                xmlns="urn:opendaylight:flow:inventory">Open vSwitch
+            </hardware>
+            <software
+                xmlns="urn:opendaylight:flow:inventory">2.1.3
+            </software>
+            <description
+                xmlns="urn:opendaylight:flow:inventory">None
+            </description>
+            <ip-address
+                xmlns="urn:opendaylight:flow:inventory">10.20.30.40
+          </ip-address>
+            <meter-features
+                xmlns="urn:opendaylight:meter:statistics">
+                <max_bands>0</max_bands>
+                <max_color>0</max_color>
+                <max_meter>0</max_meter>
+            </meter-features>
+            <group-features
+                xmlns="urn:opendaylight:group:statistics">
+                <group-capabilities-supported
+                    xmlns:x="urn:opendaylight:group:types">x:chaining
+                </group-capabilities-supported>
+                <group-capabilities-supported
+                    xmlns:x="urn:opendaylight:group:types">x:select-weight
+                </group-capabilities-supported>
+                <group-capabilities-supported
+                    xmlns:x="urn:opendaylight:group:types">x:select-liveness
+                </group-capabilities-supported>
+                <max-groups>4294967040</max-groups>
+                <actions>67082241</actions>
+                <actions>0</actions>
+            </group-features>
+        </node>
+    </nodes>
+
+After this, the expected result from a GET request to
+http://127.0.0.1:8181/restconf/operational/network-topology:network-topology
+is:
+
+.. code:: xml
+
+    <network-topology
+        xmlns="urn:TBD:params:xml:ns:yang:network-topology">
+        <topology>
+            <topology-id>render:1</topology-id>
+            <node>
+                <node-id>openflow:1</node-id>
+                <node-augmentation
+                    xmlns="urn:opendaylight:topology:inventory:rendering">
+                    <ip-address>10.20.30.40</ip-address>
+                    <serial-number>None</serial-number>
+                    <manufacturer>Nicira, Inc.</manufacturer>
+                    <description>None</description>
+                    <hardware>Open vSwitch</hardware>
+                    <software>2.1.3</software>
+                </node-augmentation>
+                <termination-point>
+                    <tp-id>openflow:1:1</tp-id>
+                    <tp-augmentation
+                        xmlns="urn:opendaylight:topology:inventory:rendering">
+                        <hardware-address>0E:DC:8C:63:EC:D1</hardware-address>
+                        <current-speed>10000000</current-speed>
+                        <maximum-speed>0</maximum-speed>
+                        <name>s1-eth1</name>
+                    </tp-augmentation>
+                </termination-point>
+                <termination-point>
+                    <tp-id>openflow:1:LOCAL</tp-id>
+                    <tp-augmentation
+                        xmlns="urn:opendaylight:topology:inventory:rendering">
+                        <hardware-address>BA:63:87:0C:76:41</hardware-address>
+                        <current-speed>0</current-speed>
+                        <maximum-speed>0</maximum-speed>
+                        <name>s1</name>
+                    </tp-augmentation>
+                </termination-point>
+            </node>
+        </topology>
+    </network-topology>
+
+Use Cases
+---------
+
+You can find use case examples on `this wiki page
+<https://wiki.opendaylight.org/view/Topology_Processing_Framework:Developer_Guide:Use_Case_Tutorial>`__.
+
+Key APIs and Interfaces
+-----------------------
+
+The basic provider class is TopoProcessingProvider which provides
+startup and shutdown methods. Otherwise, the framework communicates via
+requests and outputs stored in the MD-SAL datastores.
+
+API Reference Documentation
+---------------------------
+
+You can find API examples on `this wiki
+page <https://wiki.opendaylight.org/view/Topology_Processing_Framework:Developer_Guide:REST_API_Specification>`__.
+
diff --git a/docs/developer-guide/ttp-cli-tools-developer-guide.rst b/docs/developer-guide/ttp-cli-tools-developer-guide.rst
new file mode 100644 (file)
index 0000000..6a6c3d6
--- /dev/null
@@ -0,0 +1,31 @@
+TTP CLI Tools Developer Guide
+=============================
+
+Overview
+--------
+
+Table Type Patterns are a specification developed by the `Open
+Networking Foundation <https://www.opennetworking.org/>`__ to enable the
+description and negotiation of subsets of the OpenFlow protocol. This is
+particularly useful for hardware switches that support OpenFlow, as it
+enables them to describe which features they do (and thus also which
+features they do not) support. More details can be found in the full
+specification listed on the `OpenFlow specifications
+page <https://www.opennetworking.org/sdn-resources/onf-specifications/openflow>`__.
+
+The TTP CLI Tools provide a way for people interested in TTPs to read
+in, validate, output, and manipulate TTPs as a self-contained,
+executable jar file.
+
+TTP CLI Tools Architecture
+--------------------------
+
+The TTP CLI Tools use the TTP Model and the YANG Tools/RESTCONF codecs
+to translate between the Data Transfer Objects (DTOs) and JSON/XML.
+
+Command Line Options
+--------------------
+
+This section will cover the various options for the CLI Tools. For now,
+there are no options and the tools merely output fixed data using the
+codecs.
+
diff --git a/docs/developer-guide/ttp-model-developer-guide.rst b/docs/developer-guide/ttp-model-developer-guide.rst
new file mode 100644 (file)
index 0000000..390e6fc
--- /dev/null
@@ -0,0 +1,531 @@
+TTP Model Developer Guide
+=========================
+
+Overview
+--------
+
+Table Type Patterns are a specification developed by the `Open
+Networking Foundation <https://www.opennetworking.org/>`__ to enable the
+description and negotiation of subsets of the OpenFlow protocol. This is
+particularly useful for hardware switches that support OpenFlow, as it
+enables them to describe which features they do (and thus also which
+features they do not) support. More details can be found in the full
+specification listed on the `OpenFlow specifications
+page <https://www.opennetworking.org/sdn-resources/onf-specifications/openflow>`__.
+
+TTP Model Architecture
+----------------------
+
+The TTP Model provides a YANG-modeled type for a TTP and allows TTPs to
+be associated with a master list of known TTPs, as well as with nodes
+in the MD-SAL inventory model as active and supported TTPs.
+
+Key APIs and Interfaces
+-----------------------
+
+The key API provided by the TTP Model feature is the ability to store a
+set of TTPs in the MD-SAL as well as associate zero or one active TTPs
+and zero or more supported TTPs along with a given node in the MD-SAL
+inventory model.
+
+API Reference Documentation
+---------------------------
+
+RESTCONF
+~~~~~~~~
+
+See the generated RESTCONF API documentation at:
+http://localhost:8181/apidoc/explorer/index.html
+
+Look for the onf-ttp module to expand and see the various RESTCONF APIs.
+
+Java Bindings
+~~~~~~~~~~~~~
+
+As stated above, there are three locations where a Table Type Pattern
+can be placed into the MD-SAL Data Store. They correspond to three
+different REST API URIs:
+
+1. ``restconf/config/onf-ttp:opendaylight-ttps/onf-ttp:table-type-patterns/``
+
+2. ``restconf/config/opendaylight-inventory:nodes/node/{id}/ttp-inventory-node:active_ttp/``
+
+3. ``restconf/config/opendaylight-inventory:nodes/node/{id}/ttp-inventory-node:supported_ttps/``
+
+.. note::
+
+    Typically, these URIs are served on port 8181 of the machine the
+    controller runs on. If you are on the same machine, they can be
+    accessed at ``http://localhost:8181/<uri>``.
+
+Using the TTP Model RESTCONF APIs
+---------------------------------
+
+Setting REST HTTP Headers
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Authentication
+^^^^^^^^^^^^^^
+
+The REST API calls require authentication by default. The default method
+is to use basic auth with a user name and password of ‘admin’.
+
+Content-Type and Accept
+^^^^^^^^^^^^^^^^^^^^^^^
+
+RESTCONF supports both XML and JSON. This example focuses on JSON, but
+XML can be used just as easily. When doing a PUT or POST, be sure to
+specify the appropriate ``Content-Type`` header: either
+``application/json`` or ``application/xml``.
+
+When doing a GET, be sure to specify the appropriate ``Accept`` header:
+again, either ``application/json`` or ``application/xml``.
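Putting the authentication and header requirements together, a request could be assembled like this with the Python standard library. This is illustrative only: it builds the request without sending it, assumes a controller on localhost, and the ``build_request`` helper name is not part of any API:

```python
import base64
import urllib.request

# URI 1 from the list above, served locally as described in the note.
TTP_URL = ("http://localhost:8181/restconf/config/"
           "onf-ttp:opendaylight-ttps/onf-ttp:table-type-patterns/")

def build_request(url, json_body, method="PUT"):
    """Build a RESTCONF request with JSON headers and default credentials."""
    req = urllib.request.Request(url, data=json_body.encode(), method=method)
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    req.add_header("Authorization",
                   "Basic " + base64.b64encode(b"admin:admin").decode())
    return req

req = build_request(TTP_URL, '{"table-type-patterns": {}}')
```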
+
+Content
+~~~~~~~
+
+The contents of a PUT or POST should be an OpenDaylight Table Type
+Pattern. An example is provided below. The example can also be found at
+``parser/sample-TTP-from-tests.ttp`` in the `TTP git
+repository <https://git.opendaylight.org/gerrit/gitweb?p=ttp.git;a=blob;f=parser/sample-TTP-from-tests.ttp;h=45130949b25c6f86b750959d27d04ec2208935fb;hb=HEAD>`__.
+
+**Sample Table Type Pattern (json).**
+
+::
+
+    {
+        "table-type-patterns": {
+            "table-type-pattern": [
+                {
+                    "security": {
+                        "doc": [
+                            "This TTP is not published for use by ONF. It is an example and for",
+                            "illustrative purposes only.",
+                            "If this TTP were published for use it would include",
+                            "guidance as to any security considerations in this doc member."
+                        ]
+                    },
+                    "NDM_metadata": {
+                        "authority": "org.opennetworking.fawg",
+                        "OF_protocol_version": "1.3.3",
+                        "version": "1.0.0",
+                        "type": "TTPv1",
+                        "doc": [
+                            "Example of a TTP supporting L2 (unicast, multicast, flooding), L3 (unicast only),",
+                            "and an ACL table."
+                        ],
+                        "name": "L2-L3-ACLs"
+                    },
+                    "identifiers": [
+                        {
+                            "doc": [
+                                "The VLAN ID of a locally attached L2 subnet on a Router."
+                            ],
+                            "var": "<subnet_VID>"
+                        },
+                        {
+                            "doc": [
+                                "An OpenFlow group identifier (integer) identifying a group table entry",
+                                "of the type indicated by the variable name"
+                            ],
+                            "var": "<<group_entry_types/name>>"
+                        }
+                    ],
+                    "features": [
+                        {
+                            "doc": [
+                                "Flow entry notification Extension – notification of changes in flow entries"
+                            ],
+                            "feature": "ext187"
+                        },
+                        {
+                            "doc": [
+                                "Group notifications Extension – notification of changes in group or meter entries"
+                            ],
+                            "feature": "ext235"
+                        }
+                    ],
+                    "meter_table": {
+                        "meter_types": [
+                            {
+                                "name": "ControllerMeterType",
+                                "bands": [
+                                    {
+                                        "type": "DROP",
+                                        "rate": "1000..10000",
+                                        "burst": "50..200"
+                                    }
+                                ]
+                            },
+                            {
+                                "name": "TrafficMeter",
+                                "bands": [
+                                    {
+                                        "type": "DSCP_REMARK",
+                                        "rate": "10000..500000",
+                                        "burst": "50..500"
+                                    },
+                                    {
+                                        "type": "DROP",
+                                        "rate": "10000..500000",
+                                        "burst": "50..500"
+                                    }
+                                ]
+                            }
+                        ],
+                        "built_in_meters": [
+                            {
+                                "name": "ControllerMeter",
+                                "meter_id": 1,
+                                "type": "ControllerMeterType",
+                                "bands": [
+                                    {
+                                        "rate": 2000,
+                                        "burst": 75
+                                    }
+                                ]
+                            },
+                            {
+                                "name": "AllArpMeter",
+                                "meter_id": 2,
+                                "type": "ControllerMeterType",
+                                "bands": [
+                                    {
+                                        "rate": 1000,
+                                        "burst": 50
+                                    }
+                                ]
+                            }
+                        ]
+                    },
+                    "table_map": [
+                        {
+                            "name": "ControlFrame",
+                            "number": 0
+                        },
+                        {
+                            "name": "IngressVLAN",
+                            "number": 10
+                        },
+                        {
+                            "name": "MacLearning",
+                            "number": 20
+                        },
+                        {
+                            "name": "ACL",
+                            "number": 30
+                        },
+                        {
+                            "name": "L2",
+                            "number": 40
+                        },
+                        {
+                            "name": "ProtoFilter",
+                            "number": 50
+                        },
+                        {
+                            "name": "IPv4",
+                            "number": 60
+                        },
+                        {
+                            "name": "IPv6",
+                            "number": 80
+                        }
+                    ],
+                    "parameters": [
+                        {
+                            "doc": [
+                                "documentation"
+                            ],
+                            "name": "Showing-curt-how-this-works",
+                            "type": "type1"
+                        }
+                    ],
+                    "flow_tables": [
+                        {
+                            "doc": [
+                                "Filters L2 control reserved destination addresses and",
+                                "may forward control packets to the controller.",
+                                "Directs all other packets to the Ingress VLAN table."
+                            ],
+                            "name": "ControlFrame",
+                            "flow_mod_types": [
+                                {
+                                    "doc": [
+                                        "This match/action pair allows for flow_mods that match on either",
+                                        "ETH_TYPE or ETH_DST (or both) and send the packet to the",
+                                        "controller, subject to metering."
+                                    ],
+                                    "name": "Frame-To-Controller",
+                                    "match_set": [
+                                        {
+                                            "field": "ETH_TYPE",
+                                            "match_type": "all_or_exact"
+                                        },
+                                        {
+                                            "field": "ETH_DST",
+                                            "match_type": "exact"
+                                        }
+                                    ],
+                                    "instruction_set": [
+                                        {
+                                            "doc": [
+                                                "This meter may be used to limit the rate of PACKET_IN frames",
+                                                "sent to the controller"
+                                            ],
+                                            "instruction": "METER",
+                                            "meter_name": "ControllerMeter"
+                                        },
+                                        {
+                                            "instruction": "APPLY_ACTIONS",
+                                            "actions": [
+                                                {
+                                                    "action": "OUTPUT",
+                                                    "port": "CONTROLLER"
+                                                }
+                                            ]
+                                        }
+                                    ]
+                                }
+                            ],
+                            "built_in_flow_mods": [
+                                {
+                                    "doc": [
+                                        "Mandatory filtering of control frames with C-VLAN Bridge reserved DA."
+                                    ],
+                                    "name": "Control-Frame-Filter",
+                                    "priority": "1",
+                                    "match_set": [
+                                        {
+                                            "field": "ETH_DST",
+                                            "mask": "0xfffffffffff0",
+                                            "value": "0x0180C2000000"
+                                        }
+                                    ]
+                                },
+                                {
+                                    "doc": [
+                                        "Mandatory miss flow_mod, sends packets to IngressVLAN table."
+                                    ],
+                                    "name": "Non-Control-Frame",
+                                    "priority": "0",
+                                    "instruction_set": [
+                                        {
+                                            "instruction": "GOTO_TABLE",
+                                            "table": "IngressVLAN"
+                                        }
+                                    ]
+                                }
+                            ]
+                        }
+                    ],
+                    "group_entry_types": [
+                        {
+                            "doc": [
+                                "Output to a port, removing VLAN tag if needed.",
+                                "Entry per port, plus entry per untagged VID per port."
+                            ],
+                            "name": "EgressPort",
+                            "group_type": "INDIRECT",
+                            "bucket_types": [
+                                {
+                                    "name": "OutputTagged",
+                                    "action_set": [
+                                        {
+                                            "action": "OUTPUT",
+                                            "port": "<port_no>"
+                                        }
+                                    ]
+                                },
+                                {
+                                    "name": "OutputUntagged",
+                                    "action_set": [
+                                        {
+                                            "action": "POP_VLAN"
+                                        },
+                                        {
+                                            "action": "OUTPUT",
+                                            "port": "<port_no>"
+                                        }
+                                    ]
+                                },
+                                {
+                                    "opt_tag": "VID-X",
+                                    "name": "OutputVIDTranslate",
+                                    "action_set": [
+                                        {
+                                            "action": "SET_FIELD",
+                                            "field": "VLAN_VID",
+                                            "value": "<local_vid>"
+                                        },
+                                        {
+                                            "action": "OUTPUT",
+                                            "port": "<port_no>"
+                                        }
+                                    ]
+                                }
+                            ]
+                        }
+                    ],
+                    "flow_paths": [
+                        {
+                            "doc": [
+                                "This object contains just a few examples of flow paths; it is not",
+                                "a comprehensive list of the flow paths required for this TTP.  It is",
+                                "intended that the flow paths array could include either a list of",
+                                "required flow paths or a list of specific flow paths that are not",
+                                "required (whichever is more concise or more useful)."
+                            ],
+                            "name": "L2-2",
+                            "path": [
+                                "Non-Control-Frame",
+                                "IV-pass",
+                                "Known-MAC",
+                                "ACLskip",
+                                "L2-Unicast",
+                                "EgressPort"
+                            ]
+                        },
+                        {
+                            "name": "L2-3",
+                            "path": [
+                                "Non-Control-Frame",
+                                "IV-pass",
+                                "Known-MAC",
+                                "ACLskip",
+                                "L2-Multicast",
+                                "L2Mcast",
+                                "[EgressPort]"
+                            ]
+                        },
+                        {
+                            "name": "L2-4",
+                            "path": [
+                                "Non-Control-Frame",
+                                "IV-pass",
+                                "Known-MAC",
+                                "ACL-skip",
+                                "VID-flood",
+                                "VIDflood",
+                                "[EgressPort]"
+                            ]
+                        },
+                        {
+                            "name": "L2-5",
+                            "path": [
+                                "Non-Control-Frame",
+                                "IV-pass",
+                                "Known-MAC",
+                                "ACLskip",
+                                "L2-Drop"
+                            ]
+                        },
+                        {
+                            "name": "v4-1",
+                            "path": [
+                                "Non-Control-Frame",
+                                "IV-pass",
+                                "Known-MAC",
+                                "ACLskip",
+                                "L2-Router-MAC",
+                                "IPv4",
+                                "v4-Unicast",
+                                "NextHop",
+                                "EgressPort"
+                            ]
+                        },
+                        {
+                            "name": "v4-2",
+                            "path": [
+                                "Non-Control-Frame",
+                                "IV-pass",
+                                "Known-MAC",
+                                "ACLskip",
+                                "L2-Router-MAC",
+                                "IPv4",
+                                "v4-Unicast-ECMP",
+                                "L3ECMP",
+                                "NextHop",
+                                "EgressPort"
+                            ]
+                        }
+                    ]
+                }
+            ]
+        }
+    }
+
+Making a REST Call
+~~~~~~~~~~~~~~~~~~
+
+In this example, we’ll do a PUT to install the sample TTP from above into
+OpenDaylight and then retrieve it both as JSON and as XML. We’ll use the
+`Postman - REST
+Client <https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm>`__
+for Chrome in the examples, but any method of accessing REST should
+work.
+
+First, we’ll fill in the basic information:
+
+.. figure:: ./images/ttp-screen1-basic-auth.png
+   :alt: Filling in URL, content, Content-Type and basic auth
+
+   Filling in URL, content, Content-Type and basic auth
+
+1. Set the URL to
+   ``http://localhost:8181/restconf/config/onf-ttp:opendaylight-ttps/onf-ttp:table-type-patterns/``
+
+2. Set the action to ``PUT``
+
+3. Click Headers and
+
+4. Set a header for ``Content-Type`` to ``application/json``
+
+5. Make sure the content is set to raw and
+
+6. Copy the sample TTP from above into the content
+
+7. Click the Basic Auth tab and
+
+8. Set the username and password to admin
+
+9. Click Refresh headers
+
+.. figure:: ./images/ttp-screen2-applied-basic-auth.png
+   :alt: Refreshing basic auth headers
+
+   Refreshing basic auth headers
+
+After clicking Refresh headers, we can see that a new header
+(``Authorization``) has been created and this will allow us to
+authenticate to make the REST call.
+
+.. figure:: ./images/ttp-screen3-sent-put.png
+   :alt: PUTting a TTP
+
+   PUTting a TTP
+
+At this point, clicking send should result in a Status response of ``200
+OK`` indicating we’ve successfully PUT the TTP into OpenDaylight.
+
+.. figure:: ./images/ttp-screen4-get-json.png
+   :alt: Retrieving the TTP as json via a GET
+
+   Retrieving the TTP as json via a GET
+
+We can now retrieve the TTP by:
+
+1. Changing the action to ``GET``
+
+2. Setting an ``Accept`` header to ``application/json`` and
+
+3. Pressing send
+
+.. figure:: ./images/ttp-screen5-get-xml.png
+   :alt: Retrieving the TTP as xml via a GET
+
+   Retrieving the TTP as xml via a GET
+
+The same process can retrieve the content as XML by setting the
+``Accept`` header to ``application/xml``.
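
The same calls can also be scripted without Postman. Below is a minimal
sketch using only Python's standard library; the controller location
(localhost:8181) and the admin/admin credentials are the defaults assumed
throughout this example. The Authorization value it computes is exactly the
one Postman's "Refresh headers" button generates, since Basic Auth is just
the base64 encoding of "user:password":

```python
import base64

# Assumptions: controller at localhost:8181, default admin/admin credentials.
base = "http://localhost:8181/restconf/config"
url = base + "/onf-ttp:opendaylight-ttps/onf-ttp:table-type-patterns/"

# Basic Auth is base64 of "user:password"; this is the Authorization
# header value that Postman's "Refresh headers" button generates.
token = base64.b64encode(b"admin:admin").decode("ascii")

put_headers = {"Content-Type": "application/json",   # for the PUT
               "Authorization": "Basic " + token}
get_headers = {"Accept": "application/json",         # for the GET
               "Authorization": "Basic " + token}

print(put_headers["Authorization"])  # Basic YWRtaW46YWRtaW4=
```

Sending the sample TTP as the PUT body with these headers reproduces the
Postman steps above.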
+
diff --git a/docs/developer-guide/uni-manager-plug-in-developer-guide.rst b/docs/developer-guide/uni-manager-plug-in-developer-guide.rst
new file mode 100644 (file)
index 0000000..8d6718c
--- /dev/null
@@ -0,0 +1,561 @@
+UNI Manager Plug-In Developer Guide
+===================================
+
+The UNI Manager plug-in exposes capabilities of OpenDaylight to
+configure networked equipment to operate according to Metro Ethernet
+Forum (MEF) requirements for User Network Interface (UNI) and to support
+the creation of an Ethernet Virtual Connection (EVC) according to MEF
+requirements.
+
+UNI Manager adheres to a minimum set of functionality defined by the
+MEF 7.2 and 10.2 specifications.
+
+Functionality
+-------------
+
+The UNI manager plugin enables the creation of Ethernet Virtual
+Connections (EVC) as defined by the Metro Ethernet Forum (MEF). An EVC
+provides a simulated Ethernet connection among LANs existing at
+different geographical locations. This version of the plugin is limited
+to connecting two LANs.
+
+As defined by MEF, each location to be connected must have a User
+Network Interface (UNI), a device that connects the user LAN to the EVC
+provider's network.
+
+UNI and EVC are implemented via Open vSwitch, leveraging the OVSDB
+project: creating a UNI ends up creating an OVSDB node with an
+*ovsbr0* bridge, interface and port. While creating a UNI, based on the
+MEF requirement, one can specify a desired QoS; this leverages the QoS
+and Queue tables from the OVS database (see documentation below for
+full details). The same goes for the EVC, to which one can apply a
+given QoS to control the speed of the connection. Creating an EVC adds
+two additional ports to the *ovsbr0* bridge:
+
+-  *eth1*: the interface connected to a client laptop
+
+-  *gre1*: the interface used for GRE tunnelling between the two
+   clients (VXLAN).
+
+Finally, within this release, UniMgr is more a proof of concept than a
+framework to be used in production. Previous demonstrations were made
+using Raspberry Pis, which have low NIC bandwidth, so the speeds
+defined in the API are actually mapped as follows:
+
+-  ``speed-10M`` ⇒ 1 Mb
+
+-  ``speed-100M`` ⇒ 2 Mb
+
+-  ``speed-1G`` ⇒ 3 Mb
+
+-  ``speed-10G`` ⇒ 4 Mb
+
+UNI Manager REST APIs
+---------------------
+
+This API enables the creation and management of both UNIs and EVCs. In
+order to create an EVC using this interface, you would first create two
+UNIs via the following REST API (see documentation below for full
+details):
+
+::
+
+    PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>
+
+You would then create an EVC, indicating that it is a connection between
+the two UNIs that were just created, via the following REST API (see
+documentation below for full details):
+
+::
+
+    PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>
+
+You can then change attributes of the UNIs or EVCs, and delete these
+entities, using this API (see documentation below for full details).
+
+This plugin uses the OpenDaylight OVSDB plugin to provision and manage
+devices which implement the OVSDB interface, as needed to realize the
+UNI and EVC life-cycles.
+
+.. note::
+
+    Both the configuration and operational databases can be operated
+    upon by the unimgr REST API. The only difference between the two is
+    in the REST Path. The configuration datastore represents the desired
+    state, the operational datastore represents the actual state.
+
+For operating on the config database
+
+::
+
+    http://<host-ip>:8181/restconf/config/<PATH>
+
+For operating on the operational database
+
+::
+
+    http://<host-ip>:8181/restconf/operational/<PATH>
+
+The documentation below shows examples of both.
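
Since the two URLs differ only in the datastore path segment, a small
helper can target either datastore. A sketch (the host address and the
``uni-1`` node id are illustrative):

```python
def unimgr_url(host, path, datastore="config"):
    """Build a RESTCONF URL against the config or operational datastore."""
    if datastore not in ("config", "operational"):
        raise ValueError("datastore must be 'config' or 'operational'")
    return "http://{0}:8181/restconf/{1}/{2}".format(host, datastore, path)

uni_path = "network-topology:network-topology/topology/unimgr:uni/node/uni-1"
desired = unimgr_url("192.168.1.200", uni_path)                # desired state
actual = unimgr_url("192.168.1.200", uni_path, "operational")  # actual state
```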
+
+CREATE UNI
+~~~~~~~~~~
+
+::
+
+    PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>
+
+.. note::
+
+    The uni-id is determined and supplied by the caller, both in the
+    path and in the body of the REST message.
+
+Request Body
+
+::
+
+    {
+      "network-topology:node": [
+        {
+          "node-id": "uni-id",
+          "speed": {
+            "speed-1G": 1
+          },
+          "uni:mac-layer": "IEEE 802.3-2005",
+          "uni:physical-medium": "100BASE-T",
+          "uni:mode": "syncEnabled",
+          "uni:type": "UNITYPE",
+          "uni:mtu-size": 1600,
+          "uni:mac-address": "68:5b:35:bb:f8:3e",
+          "uni:ip-address": "192.168.2.11"
+        }
+      ]
+    }
+
+Response on success: 200
+
+Input Options
+
+::
+
+    "speed"
+        "speed-10M"
+        "speed-100M"
+        "speed-1G"
+        "speed-10G"
+    "uni:mac-layer"
+        "IEEE 802.3-2005"
+    "uni:physical-medium"
+        "10BASE-T"
+        "100BASE-T"
+        "1000BASE-T"
+        "10GBASE-T"
+    "uni:mode"
+        "syncEnabled"
+        "syncDisabled"
+    "uni:type"
+        "UNITYPE"
+        "uni:mtu-size"
+        1600 reccomended
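
The request body above can also be generated programmatically, which
avoids the comma and quoting mistakes that hand-written JSON is prone to.
A minimal sketch (the id, MAC, and IP values are illustrative):

```python
import json

def make_uni_body(uni_id, mac, ip, speed="speed-1G", mtu=1600):
    """Assemble the CREATE/UPDATE UNI request body."""
    return {
        "network-topology:node": [{
            "node-id": uni_id,
            "speed": {speed: 1},
            "uni:mac-layer": "IEEE 802.3-2005",
            "uni:physical-medium": "100BASE-T",
            "uni:mode": "syncEnabled",
            "uni:type": "UNITYPE",
            "uni:mtu-size": mtu,
            "uni:mac-address": mac,
            "uni:ip-address": ip,
        }]
    }

body = json.dumps(make_uni_body("uni-1", "68:5b:35:bb:f8:3e", "192.168.2.11"))
```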
+
+On OVS, the QoS and Queue tables were updated, and a bridge was added:
+
+::
+
+    mininet@mininet-vm:~$ sudo ovs-vsctl list QoS
+    _uuid               : 341c6e9d-ecb4-44ff-a21c-db644b466f4c
+    external_ids        : {opendaylight-qos-id="qos://18db2a79-5655-4a94-afac-94015245e3f6"}
+    other_config        : {dscp="0", max-rate="3000000"}
+    queues              : {}
+    type                : linux-htb
+
+    mininet@mininet-vm:~$ sudo ovs-vsctl list Queue
+    _uuid               : 8a0e1fc1-5d5f-4e7a-9c4d-ec412a5ec7de
+    dscp                : 0
+    external_ids        : {opendaylight-queue-id="queue://740a3809-5bef-4ad4-98d6-2ba81132bd06"}
+    other_config        : {dscp="0", max-rate="3000000"}
+
+    mininet@mininet-vm:~$ sudo ovs-vsctl show
+    0b8ed0aa-67ac-4405-af13-70249a7e8a96
+        Manager "tcp:192.168.1.200:6640"
+            is_connected: true
+        Bridge "ovsbr0"
+            Port "ovsbr0"
+                Interface "ovsbr0"
+                    type: internal
+        ovs_version: "2.4.0"
+
+RETRIEVE UNI
+~~~~~~~~~~~~
+
+::
+
+    GET http://<host-ip>:8181/restconf/operational/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>
+
+Response on success: 200
+
+::
+
+    {
+        "node": [
+        {
+            "node-id": "uni-id",
+            "cl-unimgr-mef:speed": {
+                "speed-1G": [null]
+            },
+            "cl-unimgr-mef:mac-layer": "IEEE 802.3-2005",
+            "cl-unimgr-mef:physical-medium": "1000BASE-T",
+            "cl-unimgr-mef:mode": "syncEnabled",
+            "cl-unimgr-mef:type": "UNITYPE",
+            "cl-unimgr-mef:mtu-size": "1600",
+            "cl-unimgr-mef:mac-address": "00:22:22:22:22:22",
+            "cl-unimgr-mef:ip-address": "10.36.0.22"
+        }
+        ]
+    }
+
+Output Options
+
+::
+
+    "cl-unimgr-mef:speed"
+        "speed-10M"
+        "speed-100M"
+        "speed-1G"
+        "speed-10G"
+    "cl-unimgr-mef::mac-layer"
+        "IEEE 802.3-2005"
+    "cl-unimgr-mef:physical-medium"
+        "10BASE-T"
+        "100BASE-T"
+        "1000BASE-T"
+        "10GBASE-T"
+    "cl-unimgr-mef::mode"
+        "syncEnabled"
+        "syncDisabled"
+    "cl-unimgr-mef::type"
+        "UNITYPE"
+
+UPDATE UNI
+~~~~~~~~~~
+
+::
+
+    PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>
+
+.. note::
+
+    The uni-id is determined and supplied by the caller, both in the
+    path and in the body of the REST message.
+
+Request Body
+
+::
+
+    {
+        "network-topology:node": [
+        {
+            "node-id": "uni-id",
+            "speed": {
+                "speed-1G": 1
+            },
+            "uni:mac-layer": "IEEE 802.3-2005",
+            "uni:physical-medium": "100BASE-T",
+            "uni:mode": "syncEnabled",
+            "uni:type": "UNITYPE",
+            "uni:mtu-size": 1600,
+            "uni:mac-address": "68:5b:35:bb:f8:3e",
+            "uni:ip-address": "192.168.2.11"
+        }
+        ]
+    }
+
+Response on success: 200
+
+Input Options
+
+::
+
+    "speed"
+        "speed-10M"
+        "speed-100M"
+        "speed-1G"
+        "speed-10G"
+    "uni:mac-layer"
+        "IEEE 802.3-2005"
+    "uni:physical-medium"
+        "10BASE-T"
+        "100BASE-T"
+        "1000BASE-T"
+        "10GBASE-T"
+    "uni:mode"
+        "syncEnabled"
+        "syncDisabled"
+    "uni:type"
+        "UNITYPE"
+    "uni:mtu-size"
+        1600 recommended
+
+DELETE UNI
+~~~~~~~~~~
+
+::
+
+    DELETE http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>
+
+Response on success: 200
+
+CREATE EVC
+~~~~~~~~~~
+
+::
+
+    PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>
+
+.. note::
+
+    The evc-id is determined and supplied by the caller, both in the
+    path and in the body of the REST message.
+
+Request Body
+
+::
+
+    {
+        "link": [
+        {
+            "link-id": "evc-1",
+            "source": {
+                "source-node": "/network-topology/topology/node/uni-1"
+            },
+            "destination": {
+                "dest-node": "/network-topology/topology/node/uni-2"
+          },
+          "cl-unimgr-mef:uni-source": [
+            {
+                "order": "0",
+                "ip-address": "192.168.2.11"
+            }
+            ],
+            "cl-unimgr-mef:uni-dest": [
+            {
+                "order": "0",
+                "ip-address": "192.168.2.10"
+            }
+            ],
+            "cl-unimgr-mef:cos-id": "gold",
+            "cl-unimgr-mef:ingress-bw": {
+                "speed-10G": {}
+            },
+            "cl-unimgr-mef:egress-bw": {
+                "speed-10G": {}
+          }
+        }
+        ]
+    }
+
+Response on success: 200
+
+Input Options
+
+::
+
+    ["source"]["source-node"]
+        Id of 1st UNI to assocate EVC with
+    ["cl-unimgr-mef:uni-source"][0]["ip-address"]
+        IP address of 1st UNI to associate EVC with
+    ["destination"]["dest-node"]
+        Id of 2nd UNI to assocate EVC with
+    ["cl-unimgr-mef:uni-dest"][0]["ip-address"]
+        IP address of 2nd UNI to associate EVC with
+    "cl-unimgr-mef:cos-id"
+        class of service id to associate with the EVC
+    "cl-unimgr-mef:ingress-bw"
+    "cl-unimgr-mef:egress-bw"
+        "speed-10M"
+        "speed-100M"
+        "speed-1G"
+        "speed-10G"
+
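As with UNIs, the EVC request body can be assembled programmatically from
its two UNI endpoints. A sketch (the ids and IP addresses are
illustrative):

```python
def make_evc_body(evc_id, src_uni, src_ip, dst_uni, dst_ip,
                  cos="gold", bw="speed-10G"):
    """Assemble the CREATE/UPDATE EVC request body linking two UNIs."""
    node = "/network-topology/topology/node/"
    return {
        "link": [{
            "link-id": evc_id,
            "source": {"source-node": node + src_uni},
            "destination": {"dest-node": node + dst_uni},
            "cl-unimgr-mef:uni-source": [{"order": "0", "ip-address": src_ip}],
            "cl-unimgr-mef:uni-dest": [{"order": "0", "ip-address": dst_ip}],
            "cl-unimgr-mef:cos-id": cos,
            "cl-unimgr-mef:ingress-bw": {bw: {}},
            "cl-unimgr-mef:egress-bw": {bw: {}},
        }]
    }

evc = make_evc_body("evc-1", "uni-1", "192.168.2.11", "uni-2", "192.168.2.10")
```
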
+On OVS, the QoS and Queue tables were updated, and two ports were added:
+
+::
+
+    mininet@mininet-vm:~$ sudo ovs-vsctl list QoS
+    _uuid               : 341c6e9d-ecb4-44ff-a21c-db644b466f4c
+    external_ids        : {opendaylight-qos-id="qos://18db2a79-5655-4a94-afac-94015245e3f6"}
+    other_config        : {dscp="0", max-rate="3000000"}
+    queues              : {}
+    type                : linux-htb
+
+    mininet@mininet-vm:~$ sudo ovs-vsctl list Queue
+    _uuid               : 8a0e1fc1-5d5f-4e7a-9c4d-ec412a5ec7de
+    dscp                : 0
+    external_ids        : {opendaylight-queue-id="queue://740a3809-5bef-4ad4-98d6-2ba81132bd06"}
+    other_config        : {dscp="0", max-rate="3000000"}
+
+    mininet@mininet-vm:~$ sudo ovs-vsctl show
+    0b8ed0aa-67ac-4405-af13-70249a7e8a96
+        Manager "tcp:192.168.1.200:6640"
+            is_connected: true
+        Bridge "ovsbr0"
+            Port "ovsbr0"
+                Interface "ovsbr0"
+                    type: internal
+            Port "eth1"
+                Interface "eth1"
+            Port "gre1"
+                Interface "gre1"
+                    type: gre
+                    options: {remote_ip="192.168.1.233"}
+    ovs_version: "2.4.0"
+
+RETRIEVE EVC
+~~~~~~~~~~~~
+
+::
+
+    GET http://<host-ip>:8181/restconf/operational/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>
+
+Response on success: 200
+
+::
+
+    {
+        "link": [
+        {
+            "link-id": "evc-5",
+            "source": {
+                "source-node": "/network-topology/topology/node/uni-9"
+            },
+            "destination": {
+                "dest-node": "/network-topology/topology/node/uni-10"
+            },
+            "cl-unimgr-mef:uni-dest": [
+            {
+                "order": 0,
+                "uni": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='unimgr:uni']/network-topology:node[network-topology:node-id='uni-10']",
+                "ip-address": "10.0.0.22"
+            }
+            ],
+            "cl-unimgr-mef:ingress-bw": {
+                "speed-1G": [null]
+            },
+            "cl-unimgr-mef:cos-id": "new1",
+            "cl-unimgr-mef:uni-source": [
+            {
+                "order": 0,
+                "uni": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='unimgr:uni']/network-topology:node[network-topology:node-id='uni-9']",
+                "ip-address": "10.0.0.21"
+            }
+            ],
+            "cl-unimgr-mef:egress-bw": {
+            "speed-1G": [null]
+          }
+        }
+        ]
+    }
+
+Output Options
+
+::
+
+    ["source"]["source-node"]
+    ["cl-unimgr-mef:uni-source"][0]["uni"]
+        Id of 1st UNI assocated with EVC
+        ["cl-unimgr-mef:uni-source"][0]["ip-address"]
+        IP address of 1st UNI assocated with EVC
+    ["destination"]["dest-node"]
+    ["cl-unimgr-mef:uni-dest"][0]["uni"]
+        Id of 2nd UNI assocated with EVC
+    ["cl-unimgr-mef:uni-dest"][0]["ip-address"]
+        IP address of 2nd UNI assocated with EVC
+    "cl-unimgr-mef:cos-id"
+        class of service id associated with the EVC
+    "cl-unimgr-mef:ingress-bw"
+    "cl-unimgr-mef:egress-bw"
+        "speed-10M"
+        "speed-100M"
+        "speed-1G"
+        "speed-10G"
+
+UPDATE EVC
+~~~~~~~~~~
+
+::
+
+    PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>
+
+.. note::
+
+    The evc-id is determined and supplied by the caller, both in the
+    path and in the body of the REST message.
+
+Request Body
+
+::
+
+    {
+        "link": [
+        {
+            "link-id": "evc-1",
+            "source": {
+                "source-node": "/network-topology/topology/node/uni-1"
+            },
+            "destination": {
+                "dest-node": "/network-topology/topology/node/uni-2"
+            },
+            "cl-unimgr-mef:uni-source": [
+            {
+                "order": "0",
+                "ip-address": "192.168.2.11"
+            }
+            ],
+            "cl-unimgr-mef:uni-dest": [
+            {
+                "order": "0",
+                "ip-address": "192.168.2.10"
+            }
+            ],
+            "cl-unimgr-mef:cos-id": "gold",
+            "cl-unimgr-mef:ingress-bw": {
+                "speed-10G": {}
+            },
+            "cl-unimgr-mef:egress-bw": {
+            "speed-10G": {}
+          }
+        }
+        ]
+    }
+
+Response on success: 200
+
+Input Options
+
+::
+
+    ["source"]["source-node"]
+        Id of 1st UNI to assocate EVC with
+    ["cl-unimgr-mef:uni-source"][0]["ip-address"]
+        IP address of 1st UNI to associate EVC with
+    ["destination"]["dest-node"]
+        Id of 2nd UNI to assocate EVC with
+    ["cl-unimgr-mef:uni-dest"][0]["ip-address"]
+        IP address of 2nd UNI to associate EVC with
+    "cl-unimgr-mef:cos-id"
+        class of service id to associate with the EVC
+    "cl-unimgr-mef:ingress-bw"
+    "cl-unimgr-mef:egress-bw"
+        "speed-10M"
+        "speed-100M"
+        "speed-1G"
+        "speed-10G"
+
+DELETE EVC
+~~~~~~~~~~
+
+::
+
+    DELETE http://host-ip:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/evc-id
+
+Response on success: 200
+
diff --git a/docs/developer-guide/unified-secure-channel.rst b/docs/developer-guide/unified-secure-channel.rst
new file mode 100644 (file)
index 0000000..6703b32
--- /dev/null
@@ -0,0 +1,68 @@
+Unified Secure Channel
+======================
+
+Overview
+--------
+
+The Unified Secure Channel (USC) feature provides a REST API, a
+manager, and a plugin for unified secure channels. The REST API
+provides a northbound API, the manager monitors, maintains, and
+provides channel-related services, and the plugin handles the
+lifecycle of channels.
+
+USC Channel Architecture
+------------------------
+
+-  USC Agent
+
+   -  The USC Agent provides proxy and agent functionality on top of all
+      standard protocols supported by the device. It initiates call-home
+      with the controller, maintains live connections with the
+      controller, acts as a demuxer/muxer for packets with the USC
+      header, and authenticates the controller.
+
+-  USC Plugin
+
+   -  The USC Plugin is responsible for communication between the
+      controller and the USC agent. It responds to call-home, maintains
+      live connections with the devices, acts as a muxer/demuxer for
+      packets with the USC header, and provides support for TLS/DTLS.
+
+-  USC Manager
+
+   -  The USC Manager handles configurations, high availability,
+      security, monitoring, and clustering support for USC.
+
+-  USC UI
+
+   -  The USC UI is responsible for displaying a graphical user
+      interface representing the state of USC in the OpenDaylight DLUX
+      UI.
+
+USC Channel APIs and Interfaces
+-------------------------------
+
+This section describes the APIs for interacting with the unified secure
+channels.
+
+USC Channel Topology API
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The USC project maintains a YANG-based topology in MD-SAL. These
+models are available via RESTCONF.
+
+-  Name: view-channel
+
+-  URL:
+   `http://${ipaddress}:8181/restconf/operations/usc-channel:view-channel <http://${ipaddress}:8181/restconf/operations/usc-channel:view-channel>`__
+
+-  Description: Views the current state of the USC environment.
+
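RESTCONF operations such as view-channel are invoked with an HTTP POST.
The sketch below only prepares such a request offline using Python's
standard library; the controller address and the admin/admin credentials
are assumptions:

```python
import base64
import urllib.request

controller = "192.168.1.200"  # assumed controller address
url = ("http://%s:8181/restconf/operations/usc-channel:view-channel"
       % controller)

req = urllib.request.Request(url, data=b"{}", method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization",
               "Basic " + base64.b64encode(b"admin:admin").decode("ascii"))

# urllib.request.urlopen(req) would send the RPC; here we only inspect it.
print(req.get_method(), req.full_url)
```
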
+API Reference Documentation
+---------------------------
+
+Go to
+`http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__,
+sign in, and expand the usc-channel panel. From there, users can execute
+various API calls to test their USC deployment.
+
diff --git a/docs/developer-guide/yang-push-developer-guide.rst b/docs/developer-guide/yang-push-developer-guide.rst
new file mode 100644 (file)
index 0000000..641c4a0
--- /dev/null
@@ -0,0 +1,121 @@
+YANG-PUSH Developer Guide
+=========================
+
+Overview
+--------
+
+The YANG PUBSUB project allows subscriptions to be placed on targeted
+subtrees of YANG datastores residing on remote devices. Changes in YANG
+objects within the remote subtree can be pushed to an OpenDaylight
+controller as specified, without requiring the controller to make a
+continuous set of fetch requests.
+
+YANG-PUSH capabilities available
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This module contains the base code which embodies the intent of the
+YANG-PUSH requirements for subscription as defined in
+`draft-ietf-i2rs-pub-sub-requirements <https://datatracker.ietf.org/doc/draft-ietf-i2rs-pub-sub-requirements/>`__.
+The mechanism for delivering on these YANG-PUSH requirements over
+NETCONF transport is defined in
+`draft-ietf-netconf-yang-push <https://tools.ietf.org/html/draft-ietf-netconf-yang-push-00>`__.
+
+Note that in the current release, not all capabilities of
+draft-ietf-netconf-yang-push are realized. Currently, only
+**create-subscription** RPC support from
+ietf-datastore-push@2015-10-15.yang is implemented, and only for
+periodic subscriptions. There is, of course, intent to provide much
+additional functionality in future OpenDaylight releases.
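
For illustration, the sketch below assembles an RFC 5277 style
create-subscription payload. The element names carrying the periodic
parameters ("period", "subscription-id") are assumptions made for this
sketch; consult ietf-datastore-push@2015-10-15.yang for the
authoritative names:

```python
import xml.etree.ElementTree as ET

NOTIF_NS = "urn:ietf:params:xml:ns:netconf:notification:1.0"

# RFC 5277 create-subscription; the periodic parameters added by the
# ietf-datastore-push augmentation are sketched with assumed element
# names -- check the YANG model for the real ones.
rpc = ET.Element("create-subscription", xmlns=NOTIF_NS)
ET.SubElement(rpc, "period").text = "30"          # assumed: push every 30 s
ET.SubElement(rpc, "subscription-id").text = "1"  # assumed identifier

payload = ET.tostring(rpc, encoding="unicode")
```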
+
+Future YANG-PUSH capabilities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Over time, the intent is to flesh out more robust capabilities which
+will allow OpenDaylight applications to subscribe to YANG-PUSH compliant
+devices. Capabilities for future releases will include:
+
+Support for subscription change/delete:
+
+-  **modify-subscription** rpc support for all mountpoint devices or a
+   particular mountpoint device
+
+-  **delete-subscription** rpc support for all mountpoint devices or a
+   particular mountpoint device
+
+Support for static subscriptions: This will enable the receipt of
+subscription updates pushed from publishing devices where no signaling
+from the controller has been used to establish the subscriptions.
+
+Support for additional transports: NETCONF is not the only transport of
+interest to OpenDaylight or the subscribed devices. Over time this code
+will support the RESTCONF and HTTP/2 transport requirements defined in
+`draft-voit-netconf-restconf-yang-push <https://tools.ietf.org/html/draft-voit-netconf-restconf-yang-push-01>`__.
+
+YANG-PUSH Architecture
+----------------------
+
+The code architecture of YANG-PUSH consists of two main elements:
+
+-  YANGPUSH Provider
+
+-  YANGPUSH Listener
+
+The YANGPUSH Provider receives create-subscription requests from
+applications and then establishes/registers the corresponding listener
+which will receive information pushed by a publisher. In addition, the
+YANGPUSH Provider also invokes an augmented OpenDaylight
+create-subscription RPC which enables applications to register for
+notifications as per RFC 5277. This augmentation adds periodic time
+period (duration) and subscription-id values to the existing RPC
+parameters. The Java package supporting this capability is
+``org.opendaylight.yangpush.impl``. The following class supports the
+YANGPUSH Provider capability:
+
+-  YangpushDomProvider - the Binding-Independent version. It uses a
+   neutral Document Object Model format for data and API calls, which
+   is independent of any Java language bindings generated from the YANG
+   model.
+
+The YANGPUSH Listener accepts update notifications from a device after
+they have been de-encapsulated from the NETCONF transport. The YANGPUSH
+Listener then passes these updates to MD-SAL. This function is
+implemented via the YangpushDOMNotificationListener class within the
+“org.opendaylight.yangpush.listner” Java package.
+
+Key APIs and Interfaces
+-----------------------
+
+YangpushDomProvider
+~~~~~~~~~~~~~~~~~~~
+
+Central to this class is ``onSessionInitiated``, which acquires the
+Document Object Model based versions of MD-SAL services, including the
+MountPoint service and RPCs. Via these acquired services,
+``registerDataChangeListener`` is invoked in
+YangpushDOMNotificationListener.
+
+YangpushDOMNotificationListener
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This API handles received push updates which are inbound to the
+listener and places them in MD-SAL. Key methods include:
+
+``onPushUpdate`` - converts and validates the encoding of the pushed
+subscription update. If the subscription exists and is active, it calls
+``updateDataStoreForPushUpdate`` so that the information can be put in
+MD-SAL. Finally, it logs the pushed subscription update as well as some
+additional context information.
+
+``updateDataStoreForPushUpdate`` - puts the published information into
+MD-SAL. This pushed information also includes elements such as the
+subscription-id, the identity of the publisher, the time of the update,
+the incoming encoding type, and the pushed YANG subtree information.
+
+``YangpushDOMNotificationListener`` - starts the listener tracking a
+new subscription ID from a particular publisher.
+
+API Reference Documentation
+---------------------------
+
+Javadocs are generated during the ``mvn site`` build and are located in
+the ``target/`` directory of each module.
+
+.. |Openstack Integration| image:: ./images/openstack_integration.png
+.. |Screenshot8| image:: ./images/Screenshot8.png
+   :width: 500px
diff --git a/docs/developer-guide/yang-tools.rst b/docs/developer-guide/yang-tools.rst
new file mode 100644 (file)
index 0000000..b30175a
--- /dev/null
@@ -0,0 +1,1273 @@
+YANG Tools
+==========
+
+Overview
+--------
+
+YANG Tools is a set of libraries and tooling providing support for
+using `YANG <https://tools.ietf.org/html/rfc6020>`__ in Java (or other
+JVM-based language) projects and applications.
+
+YANG Tools provides the following features in OpenDaylight:
+
+-  parsing of YANG sources and semantic inference of relationships
+   across YANG models as defined in
+   `RFC6020 <https://tools.ietf.org/html/rfc6020>`__
+
+-  representation of YANG-modeled data in Java
+
+   -  **Normalized Node** representation - a DOM-like tree model which
+      uses a conceptual meta-model more tailored to YANG and
+      OpenDaylight use-cases than a standard XML DOM model allows for.
+
+   -  **Java Binding** - concrete data model and classes generated from
+      YANG models, designed to provide compile-time safety when working
+      with YANG-modeled data.
+
+-  serialization / deserialization of YANG-modeled data driven by YANG
+   models
+
+   -  XML - as defined in
+      `RFC6020 <https://tools.ietf.org/html/rfc6020>`__
+
+   -  JSON - as defined in
+      `draft-lhotka-netmod-yang-json-01 <https://tools.ietf.org/html/draft-lhotka-netmod-yang-json-01>`__
+
+   -  Java Binding to Normalized Node and vice-versa
+
+-  Integration of YANG model parsing into Maven build lifecycle and
+   support for third-party generators processing YANG models.
+
+The YANG Tools project consists of the following logical subsystems:
+
+-  **Commons** - a set of general-purpose code which is not specific to
+   YANG, but is also useful outside the YANG Tools implementation.
+
+-  **YANG Model and Parser** - the YANG semantic model and a lexical
+   and semantic parser of YANG models, which creates an in-memory
+   cross-referenced representation of YANG models that is used by other
+   components to determine their behaviour based on the model.
+
+-  **YANG Data** - Definition of Normalized Node APIs and Data Tree
+   APIs, reference implementation of these APIs and implementation of
+   XML and JSON codecs for Normalized Nodes.
+
+-  **YANG Maven Plugin** - a Maven plugin which integrates the YANG
+   parser into the Maven build lifecycle and provides a code-generation
+   framework for components which want to generate code or other
+   artefacts based on YANG models.
+
+-  **YANG Java Binding** - the mapping of YANG models to generated Java
+   APIs. Java Binding also refers to the set of compile-time and
+   runtime components which implement this mapping, provide generation
+   of classes and APIs based on YANG models, and integrate these Java
+   Binding objects with **YANG Data** APIs and components.
+
+   -  **Models** - a set of **IETF** and **YANG Tools** models, with
+      generated Java Bindings, so they can be simply consumed outside
+      of **YANG Tools**.
+
+YANG Java Binding: Mapping rules
+--------------------------------
+
+This chapter covers the details of mapping YANG to Java.
+
+.. note::
+
+    The following source code examples do not show canonical generated
+    code, but rather illustrative examples. Generated classes and
+    interfaces may differ from these examples, but the APIs are
+    preserved.
+
+General conversion rules
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Package names of YANG models
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+| The package name consists of the following parts:
+
+-  **Opendaylight prefix** - Specifies the opendaylight prefix. Every
+   package name starts with the prefix ``org.opendaylight.yang.gen.v``.
+
+-  **Java Binding version** - Specifies the YANG Java Binding version.
+   The current Binding version is ``1``.
+
+-  **Namespace** - Specified by the value of ``namespace`` substatement.
+   URI is converted to package name structure.
+
+-  **Revision** - Specifies the concatenation of the word ``rev`` and
+   the value of the module's ``revision`` argument, with leading zeros
+   removed from the month and day. For example: ``rev201379``
+
+After the package name is generated, we check if it contains any Java
+keywords or starts with a digit. If so, then we add an underscore before
+the offending token.
+
+The following is a list of keywords which are prefixed with underscore:
+
+abstract, assert, boolean, break, byte, case, catch, char, class, const,
+continue, default, double, do, else, enum, extends, false, final,
+finally, float, for, goto, if, implements, import, instanceof, int,
+interface, long, native, new, null, package, private, protected, public,
+return, short, static, strictfp, super, switch, synchronized, this,
+throw, throws, transient, true, try, void, volatile, while
+
+As an example suppose following yang model:
+
+.. code:: yang
+
+    module module {
+        namespace "urn:2:case#module";
+        prefix "sbd";
+        organization "OPEN DAYLIGHT";
+        contact "http://www.example.com/";
+        revision 2013-07-09 {
+        }
+    }
+
+After applying rules (replacing digits and Java keywords) the resulting
+package name is
+``org.opendaylight.yang.gen.v1.urn._2._case.module.rev201379``
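The mangling rules above can be sketched as a small helper. This is an illustrative approximation only, not the actual generator (which lives in the yangtools binding codegen); the class and method names here are invented for the example.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PackageNameSketch {
    private static final Set<String> JAVA_KEYWORDS = new HashSet<>(Arrays.asList(
        "abstract", "assert", "boolean", "break", "byte", "case", "catch", "char",
        "class", "const", "continue", "default", "double", "do", "else", "enum",
        "extends", "false", "final", "finally", "float", "for", "goto", "if",
        "implements", "import", "instanceof", "int", "interface", "long", "native",
        "new", "null", "package", "private", "protected", "public", "return",
        "short", "static", "strictfp", "super", "switch", "synchronized", "this",
        "throw", "throws", "transient", "true", "try", "void", "volatile", "while"));

    /** Builds the binding package name from a namespace and a revision date. */
    public static String packageName(String namespace, String revision) {
        StringBuilder sb = new StringBuilder("org.opendaylight.yang.gen.v1");
        // URI separators become package separators.
        for (String token : namespace.split("[:/#]")) {
            if (token.isEmpty()) {
                continue;
            }
            // Java keywords and tokens starting with a digit get an underscore.
            if (JAVA_KEYWORDS.contains(token) || Character.isDigit(token.charAt(0))) {
                token = "_" + token;
            }
            sb.append('.').append(token);
        }
        // Revision "2013-07-09" -> "rev201379" (no leading zeros in month/day).
        String[] ymd = revision.split("-");
        sb.append(".rev").append(ymd[0])
          .append(Integer.parseInt(ymd[1]))
          .append(Integer.parseInt(ymd[2]));
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(packageName("urn:2:case#module", "2013-07-09"));
    }
}
```

Running the sketch on the example module reproduces the package name shown above.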
+
+Additional Packages
+^^^^^^^^^^^^^^^^^^^
+
+When a YANG statement contains certain other YANG statements,
+additional packages are generated to designate this containment. The
+table below lists the parent statements and the nested substatements
+which yield additional package generation:
+
++--------------------------------------+--------------------------------------+
+| Parent statement                     | Substatement                         |
++======================================+======================================+
+| ``list``                             | list, container, choice              |
++--------------------------------------+--------------------------------------+
+| ``container``                        | list, container, choice              |
++--------------------------------------+--------------------------------------+
+| ``choice``                           | leaf, list, leaf-list, container,    |
+|                                      | case                                 |
++--------------------------------------+--------------------------------------+
+| ``case``                             | list, container, choice              |
++--------------------------------------+--------------------------------------+
+| rpc ``input`` or ``output``          | list, container, (choice isn’t       |
+|                                      | supported)                           |
++--------------------------------------+--------------------------------------+
+| ``notification``                     | list, container, (choice isn’t       |
+|                                      | supported)                           |
++--------------------------------------+--------------------------------------+
+| ``augment``                          | list, container, choice, case        |
++--------------------------------------+--------------------------------------+
+
+Substatements are not only mapped to Java getter methods in the
+interface representing the parent statement, but they also generate
+packages with names consisting of the parent statement package name
+with the parent statement name appended.
+
+For example, this YANG model considers the container statement ``cont``
+as the direct substatement of the module.
+
+.. code:: yang
+
+    container cont {
+      container cont-inner {
+      }
+      list outter-list {
+        list list-in-list {
+        }
+      }
+    }
+
+Container ``cont`` is the parent statement for the substatements
+``cont-inner`` and ``outter-list``. ``list outter-list`` is the parent
+statement for substatement ``list-in-list``.
+
+| Java code is generated in the following structure:
+
+-  ``org.opendaylight.yang.gen.v1.urn.module.rev201379`` - package
+   contains direct substatements of module statement
+
+   -  ``Cont.java``
+
+-  ``org.opendaylight.yang.gen.v1.urn.module.rev201379.cont`` - package
+   contains substatements of ``cont`` container statement
+
+   -  ``ContInner.java`` - interface representing container
+      ``cont-inner``
+
+   -  ``OutterList.java`` - interface representing list ``outter-list``
+
+-  ``org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.outter.list``
+   - package contains substatements of outter-list list element
+
+   -  ``ListInList.java``
+
+Class and interface names
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Some YANG statements are mapped to Java classes and interfaces. The
+name of a YANG element may contain various characters which aren’t
+permitted in Java class names. First, whitespace is trimmed from the
+YANG name. Next, the characters space, ``-`` and ``_`` are deleted and
+the subsequent letter is capitalized. Finally, the first letter is
+capitalized.
+
+For example, ``example-name_ without_capitalization`` would map to
+``ExampleNameWithoutCapitalization``.
+
+Getter and setter names
+^^^^^^^^^^^^^^^^^^^^^^^
+
+In some cases, YANG statements are converted to getter and/or setter
+methods. The process for getter is:
+
+1. the name of YANG statement is converted to Java class name style as
+   `explained above <#_class_and_interface_names>`__.
+
+2. the word ``get`` is added as a prefix; if the resulting type is
+   ``Boolean``, the name is prefixed with ``is`` instead of ``get``.
+
+3. the return type of the getter method is set to Java type representing
+   substatement
+
+The process for setter is:
+
+1. the name of YANG statement is converted to Java class name style as
+   `explained above <#_class_and_interface_names>`__.
+
+2. the word ``set`` is added as prefix
+
+3. the input parameter name is set to element’s name converted to Java
+   parameter style
+
+4. the return parameter is set to builder type
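The class-name and getter-name conversion steps above can be sketched as follows. ``toClassName`` and ``getterName`` are illustrative helper names, not part of any real generator API.

```java
public class NamingSketch {

    /** Converts a YANG identifier to Java class-name style (rules above). */
    public static String toClassName(String yangName) {
        StringBuilder sb = new StringBuilder();
        boolean capitalize = true; // the first letter is capitalized too
        for (char c : yangName.trim().toCharArray()) {
            if (c == ' ' || c == '-' || c == '_') {
                capitalize = true; // delimiter is dropped, next letter upper-cased
            } else {
                sb.append(capitalize ? Character.toUpperCase(c) : c);
                capitalize = false;
            }
        }
        return sb.toString();
    }

    /** Derives a getter name: "is" prefix for Boolean, "get" otherwise. */
    public static String getterName(String yangName, boolean isBoolean) {
        return (isBoolean ? "is" : "get") + toClassName(yangName);
    }

    public static void main(String[] args) {
        System.out.println(toClassName("example-name_ without_capitalization"));
        System.out.println(getterName("lf", false));
        System.out.println(getterName("enabled", true));
    }
}
```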
+
+Statement specific mapping
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+module statement
+^^^^^^^^^^^^^^^^
+
+The YANG ``module`` statement is converted to Java as two Java classes,
+each in a separate Java file. The names of the Java files are composed
+as follows: ``<module name><suffix>.java``, where ``<suffix>`` is
+either ``Data`` or ``Service``.
+
+Data Interface
+''''''''''''''
+
+The Data Interface has a mapping similar to container, but contains
+only the top-level nodes defined in the module.
+
+The Data Interface serves only as a marker interface for the type-safe
+APIs of ``InstanceIdentifier``.
+
+Service Interface
+'''''''''''''''''
+
+Service Interface serves to describe RPC contract defined in the module.
+This RPC contract is defined by ``rpc`` statements.
+
+RPC implementations usually implement this interface, and users of the
+RPCs use this interface to invoke them.
+
+container statement
+^^^^^^^^^^^^^^^^^^^
+
+YANG containers are mapped to Java interfaces which extend the
+``DataObject`` and ``Augmentable<container-interface>`` interfaces,
+where container-interface is the name of the mapped interface.
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    container cont {
+
+    }
+
+is converted into this Java:
+
+**Cont.java.**
+
+.. code:: java
+
+    public interface Cont extends ChildOf<...>, Augmentable<Cont> {
+    }
+
+Leaf statement
+^^^^^^^^^^^^^^
+
+Each leaf has to contain at least one ``type`` substatement. The leaf
+is mapped to a getter method in the parent statement's interface, with
+a return type equal to the ``type`` substatement value.
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    container cont {
+      leaf lf {
+        type string;
+      }
+    }
+
+is converted into this Java:
+
+**Cont.java.**
+
+.. code:: java
+
+    public interface Cont extends DataObject, Augmentable<Cont> {
+        String getLf(); // represents leaf lf
+    }
+
+leaf-list statement
+^^^^^^^^^^^^^^^^^^^
+
+Each leaf-list has to contain one ``type`` substatement. The leaf-list
+is mapped to a getter method in the parent statement's interface, with
+a return type of ``List`` of the ``type`` substatement value.
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    container cont {
+        leaf-list lf-lst {
+            type string;
+        }
+    }
+
+is converted into this Java:
+
+**Cont.java.**
+
+.. code:: java
+
+    public interface Cont extends DataObject, Augmentable<Cont> {
+        List<String> getLfLst();
+    }
+
+list statement
+^^^^^^^^^^^^^^
+
+``list`` statements are mapped to Java interfaces, and a getter method
+is generated in the interface associated with the list's parent
+statement. The return type of the getter method is a Java ``List`` of
+objects implementing the interface generated for the ``list``
+statement.
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    container cont {
+      list outter-list {
+        key "leaf-in-list";
+        leaf number {
+          type uint64;
+        }
+      }
+    }
+
+The list statement ``outter-list`` is mapped to the Java interface
+``OutterList``, and the ``Cont`` interface (the parent of
+``OutterList``) contains a getter method with return type
+``List<OutterList>``. The presence of a ``key`` statement triggers
+generation of ``OutterListKey``, which may be used to identify an item
+in the list.
+
+The end result is this Java:
+
+**OutterList.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentable;
+    import java.util.List;
+    import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.outter.list.ListInList;
+
+    public interface OutterList extends DataObject, Augmentable<OutterList> {
+
+        List<String> getLeafListInList();
+
+        List<ListInList> getListInList();
+
+        /*
+        Returns Primary Key of Yang List Type
+        */
+        OutterListKey getOutterListKey();
+
+    }
+
+**OutterListKey.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont;
+
+    import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.OutterListKey;
+    import java.math.BigInteger;
+
+    public class OutterListKey {
+
+        private BigInteger _leafInList;
+
+        public OutterListKey(BigInteger _leafInList) {
+            super();
+            this._leafInList = _leafInList;
+        }
+
+        public BigInteger getLeafInList() {
+            return _leafInList;
+        }
+
+        @Override
+        public int hashCode() {
+            final int prime = 31;
+            int result = 1;
+            result = prime * result + ((_leafInList == null) ? 0 : _leafInList.hashCode());
+            return result;
+        }
+
+        @Override
+        public boolean equals(Object obj) {
+            if (this == obj) {
+                return true;
+            }
+            if (obj == null) {
+                return false;
+            }
+            if (getClass() != obj.getClass()) {
+                return false;
+            }
+            OutterListKey other = (OutterListKey) obj;
+            if (_leafInList == null) {
+                if (other._leafInList != null) {
+                    return false;
+                }
+            } else if(!_leafInList.equals(other._leafInList)) {
+                return false;
+            }
+            return true;
+        }
+
+        @Override
+        public String toString() {
+            StringBuilder builder = new StringBuilder();
+            builder.append("OutterListKey [_leafInList=");
+            builder.append(_leafInList);
+            builder.append("]");
+            return builder.toString();
+        }
+    }
+
+choice and case statements
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``choice`` element is mapped in mostly the same way a ``list``
+element is. The ``choice`` element is mapped to an interface (a marker
+interface), and a getter method returning this marker interface is
+added to the interface corresponding to the parent statement. Any
+``case`` substatements are mapped to Java interfaces which extend the
+marker interface.
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    container cont {
+        choice example-choice {
+            case foo-case {
+              leaf foo {
+                type string;
+              }
+            }
+            case bar-case {
+                leaf bar {
+                  type string;
+                }
+            }
+        }
+    }
+
+is converted into this Java:
+
+**Cont.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentable;
+    import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.ExampleChoice;
+
+    public interface Cont extends DataObject, Augmentable<Cont> {
+
+        ExampleChoice getExampleChoice();
+
+    }
+
+**ExampleChoice.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont;
+
+    import org.opendaylight.yangtools.yang.binding.DataContainer;
+
+    public interface ExampleChoice extends DataContainer {
+    }
+
+**FooCase.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.example.choice;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentable;
+    import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.ExampleChoice;
+
+    public interface FooCase extends ExampleChoice, DataObject, Augmentable<FooCase> {
+
+        String getFoo();
+
+    }
+
+**BarCase.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.example.choice;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentable;
+    import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.ExampleChoice;
+
+    public interface BarCase extends ExampleChoice, DataObject, Augmentable<BarCase> {
+
+        String getBar();
+
+    }
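A consumer of the generated ``getExampleChoice()`` getter typically dispatches on the concrete case with ``instanceof``. The sketch below is self-contained, so it re-declares minimal stand-ins for the generated interfaces rather than importing them; ``describe`` is an invented helper for illustration.

```java
// Minimal stand-ins for the generated marker and case interfaces above.
interface ExampleChoice {}
interface FooCase extends ExampleChoice { String getFoo(); }
interface BarCase extends ExampleChoice { String getBar(); }

public class ChoiceDispatch {

    // Dispatch on the concrete case of a choice value.
    public static String describe(ExampleChoice choice) {
        if (choice instanceof FooCase) {
            return "foo=" + ((FooCase) choice).getFoo();
        }
        if (choice instanceof BarCase) {
            return "bar=" + ((BarCase) choice).getBar();
        }
        return "unknown case";
    }

    public static void main(String[] args) {
        FooCase foo = () -> "hello"; // single-method interface, lambda works
        System.out.println(describe(foo));
    }
}
```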
+
+grouping and uses statements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``grouping`` statements are mapped to Java interfaces. A ``uses``
+statement in some element (the use of a concrete grouping) is mapped as
+an extension of that element's interface with the interface which
+represents the grouping.
+
+For example, the following YANG:
+
+**YANG Model.**
+
+.. code:: yang
+
+    grouping grp {
+      leaf foo {
+        type string;
+      }
+    }
+
+    container cont {
+        uses grp;
+    }
+
+is converted into this Java:
+
+**Grp.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+
+    public interface Grp extends DataObject {
+
+        String getFoo();
+
+    }
+
+**Cont.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentable;
+
+    public interface Cont extends DataObject, Augmentable<Cont>, Grp {
+    }
+
+rpc, input and output statements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An ``rpc`` statement is mapped to a Java method of the generated
+``ModuleService`` interface. Any substatements of an ``rpc`` are mapped
+as follows:
+
++--------------------------------------+--------------------------------------+
+| Rpc Substatement                     | Mapping                              |
++======================================+======================================+
+| input                                | presence of input statement triggers |
+|                                      | generation of interface              |
++--------------------------------------+--------------------------------------+
+| output                               | presence of output statement         |
+|                                      | triggers generation of interface     |
++--------------------------------------+--------------------------------------+
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    rpc rpc-test1 {
+        output {
+            leaf lf-output {
+                type string;
+            }
+        }
+        input {
+            leaf lf-input {
+                type string;
+            }
+        }
+    }
+
+is converted into this Java:
+
+**ModuleService.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    import java.util.concurrent.Future;
+    import org.opendaylight.yangtools.yang.common.RpcResult;
+
+    public interface ModuleService {
+
+        Future<RpcResult<RpcTest1Output>> rpcTest1(RpcTest1Input input);
+
+    }
+
+**RpcTest1Input.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    public interface RpcTest1Input {
+
+        String getLfInput();
+
+    }
+
+**RpcTest1Output.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    public interface RpcTest1Output {
+
+        String getLfOutput();
+
+    }
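Invoking a generated RPC looks roughly like the sketch below. The stand-in interfaces elide the ``RpcResult`` wrapper for brevity, and the echo implementation is purely illustrative, not part of the generated code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Stand-ins for the generated types; the RpcResult wrapper is elided here.
interface RpcTest1Input { String getLfInput(); }
interface RpcTest1Output { String getLfOutput(); }
interface ModuleService { Future<RpcTest1Output> rpcTest1(RpcTest1Input input); }

public class RpcSketch {

    // A toy implementation that echoes the input leaf into the output leaf.
    static final ModuleService SERVICE = input -> {
        RpcTest1Output out = () -> input.getLfInput();
        return CompletableFuture.completedFuture(out);
    };

    public static String echo(String value) throws Exception {
        // Callers build an input, invoke the RPC, and block on the Future.
        RpcTest1Input input = () -> value;
        return SERVICE.rpcTest1(input).get().getLfOutput();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echo("ping"));
    }
}
```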
+
+notification statement
+^^^^^^^^^^^^^^^^^^^^^^
+
+``notification`` statements are mapped to Java interfaces which extend
+the Notification interface.
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    notification notif {
+    }
+
+is converted into this Java:
+
+**Notif.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentable;
+    import org.opendaylight.yangtools.yang.binding.Notification;
+
+    public interface Notif extends DataObject, Augmentable<Notif>, Notification {
+    }
+
+augment statement
+~~~~~~~~~~~~~~~~~
+
+``augment`` statements are mapped to Java interfaces. The interface
+name starts with the name of the augmented interface and ends with a
+numeric suffix corresponding to the order number of the augmenting
+interface. The augmenting interface also extends ``Augmentation<>``
+with the actual type parameter equal to the augmented interface.
+
+For example, the following YANG:
+
+**YANG Model.**
+
+.. code:: yang
+
+    container cont {
+    }
+
+    augment "/cont" {
+      leaf additional-value {
+        type string;
+      }
+    }
+
+is converted into this Java:
+
+**Cont.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentable;
+
+    public interface Cont extends DataObject, Augmentable<Cont> {
+
+    }
+
+**Cont1.java.**
+
+.. code:: java
+
+    package org.opendaylight.yang.gen.v1.urn.module.rev201379;
+
+    import org.opendaylight.yangtools.yang.binding.DataObject;
+    import org.opendaylight.yangtools.yang.binding.Augmentation;
+
+    public interface Cont1 extends DataObject, Augmentation<Cont> {
+
+    }
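Augmentation data is retrieved from the augmented object by its generated class. The sketch below re-creates a simplified version of the binding's augmentation contract (the real interfaces live in ``org.opendaylight.yangtools.yang.binding``) so the lookup pattern can be shown self-contained; the map-backed store is an assumption for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.opendaylight.yangtools.yang.binding.Augmentation.
interface Augmentation<T> {}

// Stand-in for the generated Cont, with a toy augmentation store.
class Cont {
    private final Map<Class<?>, Augmentation<Cont>> augmentations = new HashMap<>();

    public <A extends Augmentation<Cont>> Cont addAugmentation(Class<A> type, A value) {
        augmentations.put(type, value);
        return this;
    }

    // Mirrors getAugmentation(Class) on generated Augmentable objects.
    @SuppressWarnings("unchecked")
    public <A extends Augmentation<Cont>> A getAugmentation(Class<A> type) {
        return (A) augmentations.get(type);
    }
}

// Stand-in for the generated augmentation interface Cont1.
class Cont1 implements Augmentation<Cont> {
    public String getAdditionalValue() { return "extra"; }
}

public class AugmentLookup {
    public static void main(String[] args) {
        Cont cont = new Cont().addAugmentation(Cont1.class, new Cont1());
        // The consumer asks for the augmentation by its generated class.
        System.out.println(cont.getAugmentation(Cont1.class).getAdditionalValue());
    }
}
```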
+
+YANG Type mapping
+~~~~~~~~~~~~~~~~~
+
+typedef statement
+^^^^^^^^^^^^^^^^^
+
+YANG ``typedef`` statements are mapped to Java classes. A ``typedef``
+may contain the following substatements:
+
++--------------------------------------+--------------------------------------+
+| Substatement                         | Behaviour                            |
++======================================+======================================+
+| type                                 | determines wrapped type and how      |
+|                                      | class will be generated              |
++--------------------------------------+--------------------------------------+
+| description                          | Javadoc description                  |
++--------------------------------------+--------------------------------------+
+| units                                | is not mapped                        |
++--------------------------------------+--------------------------------------+
+| default                              | is not mapped                        |
++--------------------------------------+--------------------------------------+
+
+Valid Arguments Type
+''''''''''''''''''''
+
+Simple values of type argument are mapped as follows:
+
++--------------------------------------+--------------------------------------+
+| YANG Type                            | Java type                            |
++======================================+======================================+
+| boolean                              | Boolean                              |
++--------------------------------------+--------------------------------------+
+| empty                                | Boolean                              |
++--------------------------------------+--------------------------------------+
+| int8                                 | Byte                                 |
++--------------------------------------+--------------------------------------+
+| int16                                | Short                                |
++--------------------------------------+--------------------------------------+
+| int32                                | Integer                              |
++--------------------------------------+--------------------------------------+
+| int64                                | Long                                 |
++--------------------------------------+--------------------------------------+
+| string                               | String or a wrapper class (if a      |
+|                                      | pattern substatement is specified)   |
++--------------------------------------+--------------------------------------+
+| decimal64                            | Double                               |
++--------------------------------------+--------------------------------------+
+| uint8                                | Short                                |
++--------------------------------------+--------------------------------------+
+| uint16                               | Integer                              |
++--------------------------------------+--------------------------------------+
+| uint32                               | Long                                 |
++--------------------------------------+--------------------------------------+
+| uint64                               | BigInteger                           |
++--------------------------------------+--------------------------------------+
+| binary                               | byte[]                               |
++--------------------------------------+--------------------------------------+
+
+Complex values of type argument are mapped as follows:
+
++--------------------------------------+--------------------------------------+
+| Argument Type                        | Java type                            |
++======================================+======================================+
+| enumeration                          | generated java enum                  |
++--------------------------------------+--------------------------------------+
+| bits                                 | generated class for bits             |
++--------------------------------------+--------------------------------------+
+| leafref                              | same type as referenced leaf         |
++--------------------------------------+--------------------------------------+
+| identityref                          | Class                                |
++--------------------------------------+--------------------------------------+
+| union                                | generated java class                 |
++--------------------------------------+--------------------------------------+
+| instance-identifier                  | ``org.opendaylight.yangtools.yang.bi |
+|                                      | nding.InstanceIdentifier``           |
++--------------------------------------+--------------------------------------+
+
+Enumeration Substatement Enum
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The YANG ``enumeration`` type has to contain one or more ``enum``
+substatements. An ``enumeration`` is mapped to a Java enum type (a
+standalone class), and every YANG ``enum`` substatement is mapped to
+one of the Java enum’s predefined values.
+
+An ``enum`` statement can have following substatements:
+
++--------------------------------------+--------------------------------------+
+| Enum’s Substatement                  | Java mapping                         |
++======================================+======================================+
+| description                          | is not mapped in API                 |
++--------------------------------------+--------------------------------------+
+| value                                | mapped as input parameter for every  |
+|                                      | predefined value of enum             |
++--------------------------------------+--------------------------------------+
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    typedef typedef-enumeration {
+        type enumeration {
+            enum enum1 {
+                description "enum1 description";
+                value 18;
+            }
+            enum enum2 {
+                value 16;
+            }
+            enum enum3 {
+            }
+        }
+    }
+
+is converted into this Java (``enum3`` has no explicit ``value``, so it
+is auto-assigned 19, one greater than the current highest enum value):
+
+**TypedefEnumeration.java.**
+
+.. code:: java
+
+    public enum TypedefEnumeration {
+        Enum1(18),
+        Enum2(16),
+        Enum3(19);
+
+        int value;
+
+        private TypedefEnumeration(int value) {
+            this.value = value;
+        }
+    }
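
As a sketch of how these predefined values behave at runtime, here is a condensed, self-contained variant of the generated enum (the class wrapper and `main` method are added only so the example compiles on its own):

```java
public class EnumDemo {
    // Condensed copy of the generated enum from the example above.
    enum TypedefEnumeration {
        Enum1(18),
        Enum2(16),
        Enum3(19);

        final int value;

        TypedefEnumeration(int value) {
            this.value = value;
        }
    }

    public static void main(String[] args) {
        // Each predefined value carries the YANG-assigned integer.
        for (TypedefEnumeration e : TypedefEnumeration.values()) {
            System.out.println(e + "=" + e.value);
        }
    }
}
```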
+
+Bits’s Substatement Bit
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The YANG ``bits`` type must contain one or more ``bit`` substatements.
+YANG ``bits`` is mapped to a standalone Java class and every ``bit``
+substatement is mapped to a boolean attribute of that class.
+In addition, the class provides overridden versions of the Object
+methods ``hashCode``, ``toString``, and ``equals``.
+
+For example, the following YANG:
+
+**YANG Model.**
+
+.. code:: yang
+
+    typedef typedef-bits {
+      type bits {
+        bit first-bit {
+          description "first-bit description";
+            position 15;
+          }
+        bit second-bit;
+      }
+    }
+
+is converted into this Java:
+
+**TypedefBits.java.**
+
+.. code:: java
+
+    public class TypedefBits {
+
+        private Boolean firstBit;
+        private Boolean secondBit;
+
+        public TypedefBits() {
+            super();
+        }
+
+        public Boolean getFirstBit() {
+            return firstBit;
+        }
+
+        public void setFirstBit(Boolean firstBit) {
+            this.firstBit = firstBit;
+        }
+
+        public Boolean getSecondBit() {
+            return secondBit;
+        }
+
+        public void setSecondBit(Boolean secondBit) {
+            this.secondBit = secondBit;
+        }
+
+        @Override
+        public int hashCode() {
+            final int prime = 31;
+            int result = 1;
+            result = prime * result +
+             ((firstBit == null) ? 0 : firstBit.hashCode());
+            result = prime * result +
+             ((secondBit == null) ? 0 : secondBit.hashCode());
+            return result;
+        }
+
+        @Override
+        public boolean equals(Object obj) {
+            if (this == obj) {
+                return true;
+            }
+            if (obj == null) {
+                return false;
+            }
+            if (getClass() != obj.getClass()) {
+                return false;
+            }
+            TypedefBits other = (TypedefBits) obj;
+            if (firstBit == null) {
+                if (other.firstBit != null) {
+                    return false;
+                }
+            } else if(!firstBit.equals(other.firstBit)) {
+                return false;
+            }
+            if (secondBit == null) {
+                if (other.secondBit != null) {
+                    return false;
+                }
+            } else if(!secondBit.equals(other.secondBit)) {
+                return false;
+            }
+            return true;
+        }
+
+        @Override
+        public String toString() {
+            StringBuilder builder = new StringBuilder();
+            builder.append("TypedefBits [firstBit=");
+            builder.append(firstBit);
+            builder.append(", secondBit=");
+            builder.append(secondBit);
+            builder.append("]");
+            return builder.toString();
+        }
+    }
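
The overridden ``equals`` and ``hashCode`` give the generated class value semantics: two ``TypedefBits`` instances with the same flags are equal. A condensed sketch of that behavior (using ``java.util.Objects``, which is behaviorally equivalent to the generated null checks):

```java
import java.util.Objects;

public class BitsDemo {
    // Condensed stand-in for the generated TypedefBits class; the
    // equals/hashCode pair mirrors the null-tolerant per-attribute
    // comparison in the generated code above.
    static final class TypedefBits {
        Boolean firstBit;
        Boolean secondBit;

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof TypedefBits)) return false;
            TypedefBits other = (TypedefBits) o;
            return Objects.equals(firstBit, other.firstBit)
                && Objects.equals(secondBit, other.secondBit);
        }

        @Override
        public int hashCode() {
            return Objects.hash(firstBit, secondBit);
        }
    }

    public static void main(String[] args) {
        TypedefBits a = new TypedefBits();
        a.firstBit = Boolean.TRUE;   // YANG: first-bit is set
        TypedefBits b = new TypedefBits();
        b.firstBit = Boolean.TRUE;
        // Equal flags mean equal objects and equal hash codes.
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode());
    }
}
```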
+
+Union’s Substatement Type
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If the type of a ``typedef`` is ``union``, it must contain ``type``
+substatements. The union ``typedef`` is mapped to a class and its ``type``
+substatements are mapped to private class members. Every YANG union
+subtype gets its own Java constructor with a parameter that represents
+just that one attribute.
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    typedef typedef-union {
+        type union {
+            type int32;
+            type string;
+        }
+    }
+
+is converted into this Java:
+
+**TypdefUnion.java.**
+
+.. code:: java
+
+    public class TypedefUnion {
+
+        private Integer int32;
+        private String string;
+
+        public TypedefUnion(Integer int32) {
+            super();
+            this.int32 = int32;
+        }
+
+        public TypedefUnion(String string) {
+            super();
+            this.string = string;
+        }
+
+        public Integer getInt32() {
+            return int32;
+        }
+
+        public String getString() {
+            return string;
+        }
+
+        @Override
+        public int hashCode() {
+            final int prime = 31;
+            int result = 1;
+            result = prime * result + ((int32 == null) ? 0 : int32.hashCode());
+            result = prime * result + ((string == null) ? 0 : string.hashCode());
+            return result;
+        }
+
+        @Override
+        public boolean equals(Object obj) {
+            if (this == obj) {
+                return true;
+            }
+            if (obj == null) {
+                return false;
+            }
+            if (getClass() != obj.getClass()) {
+                return false;
+            }
+            TypedefUnion other = (TypedefUnion) obj;
+            if (int32 == null) {
+                if (other.int32 != null) {
+                    return false;
+                }
+            } else if(!int32.equals(other.int32)) {
+                return false;
+            }
+            if (string == null) {
+                if (other.string != null) {
+                    return false;
+                }
+            } else if(!string.equals(other.string)) {
+                return false;
+            }
+            return true;
+        }
+
+        @Override
+        public String toString() {
+            StringBuilder builder = new StringBuilder();
+            builder.append("TypedefUnion [int32=");
+            builder.append(int32);
+            builder.append(", string=");
+            builder.append(string);
+            builder.append("]");
+            return builder.toString();
+        }
+    }
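
The per-subtype constructors mean that only one branch of the union is populated at a time. A condensed, self-contained usage sketch (the getters and constructors are copied from the generated class above; the wrapper class is added only for runnability):

```java
public class UnionDemo {
    // Condensed copy of the generated union class: one constructor per
    // YANG subtype, each setting only its own attribute.
    static final class TypedefUnion {
        private Integer int32;
        private String string;

        TypedefUnion(Integer int32) { this.int32 = int32; }
        TypedefUnion(String string) { this.string = string; }

        Integer getInt32() { return int32; }
        String getString() { return string; }
    }

    public static void main(String[] args) {
        TypedefUnion asNumber = new TypedefUnion(Integer.valueOf(42));
        TypedefUnion asText = new TypedefUnion("forty-two");

        // Only the branch chosen by the constructor is populated;
        // the other getter returns null.
        System.out.println(asNumber.getInt32() + " " + asNumber.getString());
        System.out.println(asText.getInt32() + " " + asText.getString());
    }
}
```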
+
+String Mapping
+^^^^^^^^^^^^^^
+
+The YANG ``string`` type can contain the substatements ``length`` and
+``pattern`` which are mapped as follows:
+
++--------------------------------------+--------------------------------------+
+| Type substatements                   | Mapping to Java                      |
++======================================+======================================+
+| length                               | not mapped                           |
++--------------------------------------+--------------------------------------+
+| pattern                              | | - a list of string constants (the  |
+|                                      |   patterns)                          |
+|                                      | | - a list of Pattern objects        |
+|                                      | | - a static initialization block    |
+|                                      |   where the list of Patterns is      |
+|                                      |   initialized from the list of       |
+|                                      |   string constants                   |
++--------------------------------------+--------------------------------------+
+
+For example, the following YANG:
+
+**YANG model.**
+
+.. code:: yang
+
+    typedef typedef-string {
+        type string {
+            length 44;
+            pattern "[a][.]*";
+        }
+    }
+
+is converted into this Java:
+
+**TypedefString.java.**
+
+.. code:: java
+
+    public class TypedefString {
+
+        private static final List<Pattern> patterns = new ArrayList<Pattern>();
+        public static final List<String> PATTERN_CONSTANTS = Arrays.asList("[a][.]*");
+
+        static {
+            for (String regEx : PATTERN_CONSTANTS) {
+                patterns.add(Pattern.compile(regEx));
+            }
+        }
+
+        private String typedefString;
+
+        public TypedefString(String typedefString) {
+            super();
+            // Pattern validation
+            this.typedefString = typedefString;
+        }
+
+        public String getTypedefString() {
+            return typedefString;
+        }
+
+        @Override
+        public int hashCode() {
+            final int prime = 31;
+            int result = 1;
+            result = prime * result + ((typedefString == null) ? 0 : typedefString.hashCode());
+            return result;
+        }
+
+        @Override
+        public boolean equals(Object obj) {
+            if (this == obj) {
+                return true;
+            }
+            if (obj == null) {
+                return false;
+            }
+            if (getClass() != obj.getClass()) {
+                return false;
+            }
+            TypedefString other = (TypedefString) obj;
+            if (typedefString == null) {
+                if (other.typedefString != null) {
+                    return false;
+                }
+            } else if(!typedefString.equals(other.typedefString)) {
+                return false;
+            }
+            return true;
+        }
+
+        @Override
+        public String toString() {
+            StringBuilder builder = new StringBuilder();
+            builder.append("TypedefString [typedefString=");
+            builder.append(typedefString);
+            builder.append("]");
+            return builder.toString();
+        }
+    }
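
The ``// Pattern validation`` placeholder in the constructor above is where the generated code checks the argument against the compiled patterns. A minimal sketch of such a check (the ``matchesAny`` helper is illustrative, not part of the generated API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class PatternCheckDemo {
    // Same pattern constant as in the generated TypedefString example.
    static final List<String> PATTERN_CONSTANTS = Arrays.asList("[a][.]*");

    // Illustrative helper: returns true if the value fully matches any
    // of the YANG pattern constants.
    static boolean matchesAny(String value) {
        for (String regEx : PATTERN_CONSTANTS) {
            if (Pattern.compile(regEx).matcher(value).matches()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matchesAny("a..."));  // 'a' followed by dots: matches
        System.out.println(matchesAny("b"));     // does not match "[a][.]*"
    }
}
```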
+
+identity statement
+~~~~~~~~~~~~~~~~~~
+
+The purpose of the ``identity`` statement is to define a new globally
+unique, abstract, and untyped value.
+
+The argument of the ``base`` substatement is the name of an existing
+identity from which the new identity is derived.
+
+Given that, an ``identity`` statement is mapped to an abstract Java
+class and any ``base`` substatement is expressed with the ``extends``
+Java keyword. The identity name is translated to the class name.
+
+For example, the following YANG:
+
+**YANG Model.**
+
+.. code:: yang
+
+    identity toast-type {
+
+    }
+
+    identity white-bread {
+       base toast-type;
+    }
+
+is converted into this Java:
+
+**ToastType.java.**
+
+.. code:: java
+
+    public abstract class ToastType extends BaseIdentity {
+        protected ToastType() {
+            super();
+        }
+    }
+
+**WhiteBread.java.**
+
+.. code:: java
+
+    public abstract class WhiteBread extends ToastType {
+        protected WhiteBread() {
+            super();
+        }
+    }
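
Because the generated identity classes are abstract and never instantiated, application code typically refers to them as ``Class`` tokens, and ``base`` derivation surfaces as ordinary Java subtyping. A self-contained sketch (the nested ``BaseIdentity`` here is a stand-in for the one provided by the YANG binding):

```java
public class IdentityDemo {
    // Stand-ins for the generated identity classes. In OpenDaylight,
    // BaseIdentity comes from the YANG binding; it is redefined here
    // only so the example compiles on its own.
    abstract static class BaseIdentity {}
    abstract static class ToastType extends BaseIdentity {}
    abstract static class WhiteBread extends ToastType {}

    public static void main(String[] args) {
        // Identities are passed around as Class tokens; derivation via
        // "base" maps to Java subtyping, so assignability checks work.
        Class<? extends ToastType> toast = WhiteBread.class;
        System.out.println(ToastType.class.isAssignableFrom(toast));
    }
}
```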
+
index 54b5593679acb0d799c7f75ff7400759913ec882..2860b5929256ca50b9c40d86a4d4644dc85e7cce 100644 (file)
@@ -38,7 +38,7 @@ Installing YANG IDE
 -------------------
 
 The YANG IDE plugin can be installed by using the public update site URL
-provided, which is http://abc.def .
+provided, which is https://nexus.opendaylight.org/content/sites/p2repos/org.opendaylight.yangide/release/ .
 
 While in Eclipse, select "Help" from the menu bar and select "Install
 New Software ...".  On the resulting "Install" dialog, click the
index 457bc34a2802669f75ff2294607338f6d601afc0..c1952c517682a6dc599eb68c42a26fb26a7f0c98 100644 (file)
@@ -42,6 +42,7 @@ Installing OpenDaylight
 
    openstack-with-ovsdb
    openstack-with-gbp
+   openstack-with-gbp-vpp
    openstack-with-vtn
 
 .. _OpenStack: https://www.openstack.org/
diff --git a/docs/opendaylight-with-openstack/openstack-with-gbp-vpp.rst b/docs/opendaylight-with-openstack/openstack-with-gbp-vpp.rst
new file mode 100755 (executable)
index 0000000..0c77959
--- /dev/null
@@ -0,0 +1,239 @@
+Using Groupbasedpolicy's Neutron VPP Mapper
+===========================================
+
+Overview
+--------
+Neutron VPP Mapper implements support for policy-based routing on the OpenStack Neutron interface involving VPP devices.
+It allows policy-based schemes defined in the GBP controller to be used in a network consisting of OpenStack-provided nodes routed by a VPP node.
+
+Architecture
+------------
+Neutron VPP Mapper listens to Neutron data store change events and can also access the store directly.
+If the changed data match certain criteria (see `Processing Neutron Configuration`_),
+Neutron VPP Mapper converts the Neutron data required to render a VPP node configuration with a given endpoint,
+e.g., the virtual host interface name assigned to a ``vhostuser`` socket.
+The mapped data is then stored in the VPP info data store.
+
+Administering Neutron VPP Mapper
+--------------------------------
+To use the Neutron VPP Mapper in Karaf, at least the following Karaf features must be installed:
+
+* odl-groupbasedpolicy-neutron-vpp-mapper
+* odl-vbd-ui
+
+Initial pre-requisites
+----------------------
+A topology must exist in the config datastore, and it must define a node with a particular ``node-id``.
+The ``node-id`` is later used as a physical location reference in the VPP renderer's bridge domain::
+
+   GET http://localhost:8181/restconf/config/network-topology:network-topology/
+
+   {
+       "network-topology":{
+          "topology":[
+               {
+                   "topology-id":"datacentre",
+                   "node":[
+                       {
+                          "node-id":"dut2",
+                          "vlan-tunnel:super-interface":"GigabitEthernet0/9/0",
+                          "termination-point":[
+                               {
+                                   "tp-id":"GigabitEthernet0/9/0",
+                                   "neutron-provider-topology:physical-interface":{
+                                       "interface-name":"GigabitEthernet0/9/0"
+                                   }
+                               }
+                           ]
+                       }
+                   ]
+               }
+           ]
+       }
+   }
+
+
+Processing Neutron Configuration
+--------------------------------
+``NeutronListener`` listens to the changes in Neutron datatree in config datastore. It filters the changes, processing only ``network`` and ``port`` entities.
+
+For a ``network`` entity, it is checked that the ``physical-network`` parameter is set (i.e., the network is backed by a physical network)
+and that ``network-type`` is ``"vlan-network"`` or ``"flat"``. If this check passes, a related bridge domain is created
+in the VPP Renderer config datastore
+(``http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config``), linked to the network by the ``vlan`` field.
+
+In the case of ``"vlan-network"``, the ``vlan`` field contains the same value as the ``neutron-provider-ext:segmentation-id`` of the network created by Neutron.
+
+In the case of ``"flat"``, the VLAN-specific parameters are not filled out.
+
+.. note:: In the case of a VXLAN network (i.e. ``network-type`` is ``"vxlan-network"``), no information is actually written
+   into the VPP Renderer datastore, as VXLAN is used for the tenant network (so no packets go outside). Instead, the VPP
+   Renderer looks up the GBP flood domains corresponding to existing VPP bridge domains and tries to establish a VXLAN tunnel between them.
+
+For a ``port`` entity, it is checked that ``vif-type`` contains the ``"vhostuser"`` substring and that ``device-owner`` contains one of the specific substrings ``"compute"``, ``"router"`` or ``"dhcp"``.
+
+In the case of ``"compute"``, a ``vhost-user`` entry is written to the VPP Renderer config datastore.
+
+In the case of ``"dhcp"`` or ``"router"``, a ``tap`` entry is written to the VPP Renderer config datastore.
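
The port-filtering rules above amount to a small substring dispatch. A hedged sketch of that decision logic (the ``endpointKindFor`` helper and its return values are purely illustrative, not the actual ``NeutronListener`` API):

```java
public class PortMappingSketch {
    // Illustrative only: method name and return values are hypothetical;
    // the real mapper writes vhost-user/tap entries to the VPP Renderer
    // config datastore rather than returning strings.
    static String endpointKindFor(String vifType, String deviceOwner) {
        if (vifType == null || !vifType.contains("vhostuser")) {
            return null; // port is not processed
        }
        if (deviceOwner.contains("compute")) {
            return "vhost-user";
        }
        if (deviceOwner.contains("dhcp") || deviceOwner.contains("router")) {
            return "tap";
        }
        return null; // other device owners are ignored
    }

    public static void main(String[] args) {
        System.out.println(endpointKindFor("vhostuser", "network:compute"));
        System.out.println(endpointKindFor("vhostuser", "network:dhcp"));
    }
}
```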
+
+Input/output examples
+---------------------
+
+When OpenStack creates a network, the following data are put into the data store::
+
+   PUT http://{{controller}}:{{port}}/restconf/config/neutron:neutron/networks
+
+   {
+       "networks": {
+           "network": [
+               {
+                   "uuid": "43282482-a677-4102-87d6-90708f30a115",
+                   "tenant-id": "94836b88-0e56-4150-aaa7-60f1c2b67faa",
+                   "neutron-provider-ext:segmentation-id": "2016",
+                   "neutron-provider-ext:network-type": "neutron-networks:network-type-vlan",
+                   "neutron-provider-ext:physical-network": "datacentre",
+                   "neutron-L3-ext:external": true,
+                   "name": "drexternal",
+                   "shared": false,
+                   "admin-state-up": true,
+                   "status": "ACTIVE"
+               }
+           ]
+       }
+   }
+
+Check the bridge domain in the VPP Renderer config data store.
+Note that ``physical-location-ref`` refers to ``"dut2"``, paired via ``neutron-provider-ext:physical-network`` -> ``topology-id``::
+
+   GET http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config
+
+   {
+     "config": {
+       "bridge-domain": [
+         {
+           "id": "43282482-a677-4102-87d6-90708f30a115",
+           "type": "vpp-renderer:vlan-network",
+           "description": "drexternal",
+           "vlan": 2016,
+           "physical-location-ref": [
+             {
+               "node-id": "dut2",
+               "interface": [
+                 "GigabitEthernet0/9/0"
+               ]
+             }
+           ]
+         }
+       ]
+     }
+   }
+
+Port (compute)::
+
+   PUT http://{{controller}}:{{port}}/restconf/config/neutron:neutron/ports
+
+   {
+       "ports": {
+           "port": [
+               {
+                   "uuid": "3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
+                   "tenant-id": "94836b88-0e56-4150-aaa7-60f1c2b67faa",
+                   "device-id": "dhcp58155ae3-f2e7-51ca-9978-71c513ab02ee-a91437c0-8492-47e2-b9d0-25c44aef6cda",
+                   "neutron-binding:vif-details": [
+                       {
+                           "details-key": "somekey"
+                       }
+                   ],
+                   "neutron-binding:host-id": "devstack-control",
+                   "neutron-binding:vif-type": "vhostuser",
+                   "neutron-binding:vnic-type": "normal",
+                   "mac-address": "fa:16:3e:4a:9f:c0",
+                   "name": "",
+                   "network-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
+                   "neutron-portsecurity:port-security-enabled": false,
+                   "device-owner": "network:compute",
+                   "fixed-ips": [
+                       {
+                           "subnet-id": "0a5834ed-ed31-4425-832d-e273cac26325",
+                           "ip-address": "10.1.1.3"
+                       }
+                   ],
+                   "admin-state-up": true
+               }
+           ]
+       }
+   }
+
+   GET http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config
+
+   {
+     "config": {
+       "vpp-endpoint": [
+         {
+           "context-type": "l2-l3-forwarding:l2-bridge-domain",
+           "context-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
+           "address-type": "l2-l3-forwarding:mac-address-type",
+           "address": "fa:16:3e:4a:9f:c0",
+           "vpp-node-path": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='topology-netconf']/network-topology:node[network-topology:node-id='devstack-control']",
+           "vpp-interface-name": "neutron_port_3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
+           "socket": "/tmp/socket_3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
+           "description": "neutron port"
+         }
+       ]
+     }
+   }
+
+Port (dhcp)::
+
+   PUT http://{{controller}}:{{port}}/restconf/config/neutron:neutron/ports
+
+   {
+       "ports": {
+           "port": [
+               {
+                   "uuid": "3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
+                   "tenant-id": "94836b88-0e56-4150-aaa7-60f1c2b67faa",
+                   "device-id": "dhcp58155ae3-f2e7-51ca-9978-71c513ab02ee-a91437c0-8492-47e2-b9d0-25c44aef6cda",
+                   "neutron-binding:vif-details": [
+                       {
+                           "details-key": "somekey"
+                       }
+                   ],
+                   "neutron-binding:host-id": "devstack-control",
+                   "neutron-binding:vif-type": "vhostuser",
+                   "neutron-binding:vnic-type": "normal",
+                   "mac-address": "fa:16:3e:4a:9f:c0",
+                   "name": "",
+                   "network-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
+                   "neutron-portsecurity:port-security-enabled": false,
+                   "device-owner": "network:dhcp",
+                   "fixed-ips": [
+                       {
+                           "subnet-id": "0a5834ed-ed31-4425-832d-e273cac26325",
+                           "ip-address": "10.1.1.3"
+                       }
+                   ],
+                   "admin-state-up": true
+               }
+           ]
+       }
+   }
+
+   GET http://{{controller}}:{{port}}/restconf/config/vpp-renderer:config
+
+   {
+     "config": {
+       "vpp-endpoint": [
+         {
+           "context-type": "l2-l3-forwarding:l2-bridge-domain",
+           "context-id": "a91437c0-8492-47e2-b9d0-25c44aef6cda",
+           "address-type": "l2-l3-forwarding:mac-address-type",
+           "address": "fa:16:3e:4a:9f:c0",
+           "vpp-node-path": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='topology-netconf']/network-topology:node[network-topology:node-id='devstack-control']",
+           "vpp-interface-name": "neutron_port_3d5dff96-25f5-4d4b-aa11-dc03f7f8d8e0",
+           "physical-address": "fa:16:3e:4a:9f:c0",
+           "name": "tap3d5dff96-25",
+           "description": "neutron port"
+         }
+       ]
+     }
+   }
index ed4e7da6063fbc492aaa62738343a75d98107ae3..119ee7f425f16884b8cfb6a4d1b2772d1a7f879f 160000 (submodule)
@@ -1 +1 @@
-Subproject commit ed4e7da6063fbc492aaa62738343a75d98107ae3
+Subproject commit 119ee7f425f16884b8cfb6a4d1b2772d1a7f879f
index 42e2183c83a939313b2663f5763e11db55c44697..d153e7728a07aba69e49fab176fe52b1de2cbbdf 160000 (submodule)
@@ -1 +1 @@
-Subproject commit 42e2183c83a939313b2663f5763e11db55c44697
+Subproject commit d153e7728a07aba69e49fab176fe52b1de2cbbdf
index 1a0bece17ca76ee1e887d1e66992df63774d1f9b..b28444e372162528d42d86363918bfcf49d00058 160000 (submodule)
@@ -1 +1 @@
-Subproject commit 1a0bece17ca76ee1e887d1e66992df63774d1f9b
+Subproject commit b28444e372162528d42d86363918bfcf49d00058
index 0c3d83c27a9de0177536d70dd0aeecbf2d0c701c..a33e41ba37f39608691bd22132978d7cc1eee21c 160000 (submodule)
@@ -1 +1 @@
-Subproject commit 0c3d83c27a9de0177536d70dd0aeecbf2d0c701c
+Subproject commit a33e41ba37f39608691bd22132978d7cc1eee21c
index d025b89521d1c4e3673704a3fa9d7f7847c48bd3..34ef3da1a88922e439235b60b2eb7a4210ca02e2 160000 (submodule)
@@ -1 +1 @@
-Subproject commit d025b89521d1c4e3673704a3fa9d7f7847c48bd3
+Subproject commit 34ef3da1a88922e439235b60b2eb7a4210ca02e2
diff --git a/docs/user-guide/atrium-user-guide.rst b/docs/user-guide/atrium-user-guide.rst
new file mode 100644 (file)
index 0000000..9c6d98a
--- /dev/null
@@ -0,0 +1,70 @@
+Atrium User Guide
+=================
+
+Overview
+--------
+
+Project Atrium is an open source SDN distribution - a vertically
+integrated set of open source components which together form a complete
+SDN stack. Its goals are threefold:
+
+-  Close the large integration-gap of the elements that are needed to
+   build an SDN stack - while there are multiple choices at each layer,
+   there are missing pieces with poor or no integration.
+
+-  Overcome a massive gap in interoperability - this exists both at the
+   switch level, where existing products from different vendors have
+   limited compatibility, making it difficult to connect an arbitrary
+   switch and controller, and at the API level, where it is difficult to
+   write a portable application across multiple controller platforms.
+
+-  Work closely with network operators on deployable use cases, so that
+   they can download near-production-quality code from one location,
+   and get started with functioning software-defined networks on real
+   hardware.
+
+Architecture
+------------
+
+The key components of Atrium BGP Peering Router Application are as
+follows:
+
+-  Data Plane Switch - Data plane switch is the entity that uses flow
+   table entries installed by BGP Routing Application through SDN
+   controller. In the simplest form data plane switch with the installed
+   flows act like a BGP Router.
+
+-  OpenDaylight Controller - OpenDaylight SDN controller has many
+   utility applications or plugins which are leveraged by the BGP Router
+   application to manage the control plane information.
+
+-  BGP Routing Application - An application running within the
+   OpenDaylight runtime environment to handle I-BGP updates.
+
+-  `DIDM <#_didm_user_guide>`__ - DIDM manages the drivers specific to
+   each data plane switch connected to the controller. The drivers are
+   created primarily to hide the underlying complexity of the devices
+   and to expose a uniform API to applications.
+
+-  Flow Objectives API - The driver implementation provides a pipeline
+   abstraction and exposes Flow Objectives API. This means applications
+   need to be aware of only the Flow Objectives API without worrying
+   about the Table IDs or the pipelines.
+
+-  Control Plane Switch - This component is primarily used to connect
+   the OpenDaylight SDN controller with the Quagga Soft-Router and
+   establish a path for forwarding E-BGP packets to and from Quagga.
+
+-  Quagga soft router - An open source routing software that handles
+   E-BGP updates.
+
+Running Atrium
+--------------
+
+-  To run the Atrium BGP Routing Application in OpenDaylight
+   distribution, simply install the ``odl-atrium-all`` feature.
+
+   ::
+
+       feature:install odl-atrium-all
+
diff --git a/docs/user-guide/bgp-monitoring-protocol-user-guide.rst b/docs/user-guide/bgp-monitoring-protocol-user-guide.rst
new file mode 100644 (file)
index 0000000..5ea5833
--- /dev/null
@@ -0,0 +1,153 @@
+BGP Monitoring Protocol User Guide
+==================================
+
+Overview
+--------
+
+The OpenDaylight Karaf distribution comes preconfigured with baseline
+BMP configuration.
+
+-  **32-bmp.xml** (initial configuration for BMP messages handler
+   service provider and BMP client/server dispatcher settings)
+
+-  **42-bmp-example.xml** (sample initial configuration for the BMP
+   Monitoring Station application)
+
+Configuring BMP
+---------------
+
+Server Binding
+~~~~~~~~~~~~~~
+
+The default shipped configuration starts a BMP server on
+0.0.0.0:12345. You can change this behavior in **42-bmp-example.xml**:
+
+.. code:: xml
+
+     <module>
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">prefix:bmp-monitor-impl</type>
+      <name>example-bmp-monitor</name>
+      <!--<binding-address>0.0.0.0</binding-address>-->
+      <binding-port>12345</binding-port>
+      ...
+     </module>
+
+-  **binding-address** - address on which the BMP server will be started
+   and listen; to change the value, uncomment the line first
+
+-  **binding-port** - port on which the BMP server will listen
+
+Multiple instances of the BMP monitoring station (**bmp-monitor-impl**
+module) can be created. However, each instance must have a unique pair
+of **binding-address** and **binding-port**.
+
+Active mode
+~~~~~~~~~~~
+
+OpenDaylight’s BMP can be configured to act as the active party of the
+connection (ODL BMP <=> monitored router). To enable this
+functionality, configure a monitored-router with the mandatory parameters:
+
+-  address (must be unique for each configured "monitored-router"),
+
+-  port,
+
+-  active.
+
+See the following example from **42-bmp-example.xml**:
+
+.. code:: xml
+
+     <monitored-router>
+      <address>192.0.2.2</address>
+      <port>1234</port>
+      <active>true</active>
+     </monitored-router>
+
+Configuration through RESTCONF
+------------------------------
+
+Server Binding
+~~~~~~~~~~~~~~
+
+**URL:**
+*`http://<controllerIP>:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/config:module/odl-bmp-impl-cfg:bmp-monitor-impl/example-bmp-monitor <http://<controllerIP>:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/config:module/odl-bmp-impl-cfg:bmp-monitor-impl/example-bmp-monitor>`__*
+
+**Content-Type:** application/xml
+
+**Method:** PUT
+
+**Body:**
+
+.. code:: xml
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+      <name>example-bmp-monitor</name>
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">x:bmp-monitor-impl</type>
+      <bmp-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type>bmp-dispatcher</type>
+        <name>global-bmp-dispatcher</name>
+      </bmp-dispatcher>
+      <codec-tree-factory xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-codec-tree-factory</type>
+        <name>runtime-mapping-singleton</name>
+      </codec-tree-factory>
+      <extensions xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:spi">x:extensions</type>
+        <name>global-rib-extensions</name>
+      </extensions>
+      <binding-address xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">0.0.0.0</binding-address>
+      <dom-data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
+        <name>pingpong-broker</name>
+      </dom-data-provider>
+      <binding-port xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">12345</binding-port>
+    </module>
+
+-  change values for **binding-address** and/or **binding-port**
+
+Active mode
+~~~~~~~~~~~
+
+**URL:**
+*`http://<controllerIP>:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/config:module/odl-bmp-impl-cfg:bmp-monitor-impl/example-bmp-monitor <http://<controllerIP>:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/config:module/odl-bmp-impl-cfg:bmp-monitor-impl/example-bmp-monitor>`__*
+
+**Content-Type:** application/xml
+
+**Method:** PUT
+
+**Body:**
+
+.. code:: xml
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+      <name>example-bmp-monitor</name>
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">x:bmp-monitor-impl</type>
+      <bmp-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type>bmp-dispatcher</type>
+        <name>global-bmp-dispatcher</name>
+      </bmp-dispatcher>
+      <codec-tree-factory xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-codec-tree-factory</type>
+        <name>runtime-mapping-singleton</name>
+      </codec-tree-factory>
+      <extensions xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:spi">x:extensions</type>
+        <name>global-rib-extensions</name>
+      </extensions>
+      <binding-address xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">0.0.0.0</binding-address>
+          <dom-data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
+        <name>pingpong-broker</name>
+      </dom-data-provider>
+      <binding-port xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">12345</binding-port>
+      <monitored-router xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
+        <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">127.0.0.1</address>
+        <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">1234</port>
+        <active xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">true</active>
+      </monitored-router>
+    </module>
+
+-  Change the values of **address** and **port** to match the monitored router.
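+
+If the body is prepared programmatically, the **address** and **port**
+values can be rewritten with a few lines of Python before submitting the
+PUT (a sketch using only the standard library; the XML is abbreviated to
+the block being changed and the target values are hypothetical):

```python
import xml.etree.ElementTree as ET

NS = "urn:opendaylight:params:xml:ns:yang:controller:bmp:impl"

# Abbreviated copy of the request body above, reduced to the elements
# that usually need changing.
body = """\
<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
  <monitored-router xmlns="%s">
    <address>127.0.0.1</address>
    <port>1234</port>
    <active>true</active>
  </monitored-router>
</module>
""" % NS

root = ET.fromstring(body)
router = root.find("{%s}monitored-router" % NS)
# Point the monitor at the actual router (example values).
router.find("{%s}address" % NS).text = "192.0.2.10"
router.find("{%s}port" % NS).text = "1790"

print(ET.tostring(root, encoding="unicode"))
```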
+
diff --git a/docs/user-guide/bgp-user-guide.rst b/docs/user-guide/bgp-user-guide.rst
new file mode 100644 (file)
index 0000000..e27ce41
--- /dev/null
@@ -0,0 +1,996 @@
+BGP User Guide
+==============
+
+Configuring BGP
+---------------
+
+The OpenDaylight Karaf distribution comes pre-configured with a baseline
+BGP configuration. You can find it in the ``etc/opendaylight/karaf``
+directory and it consists of two files:
+
+``31-bgp.xml``
+    defines the basic parser and RIB support
+
+``41-bgp-example.xml``
+    contains a sample configuration which needs to be customized to your
+    deployment
+
+The next sections will describe how to configure BGP manually or using
+RESTCONF.
+
+RIB
+~~~
+
+The configuration of the Routing Information Base (RIB) is specified
+using a block in the ``41-bgp-example.xml`` file.
+
+.. code:: xml
+
+    <module>
+        <type>prefix:rib-impl</type>
+        <name>example-bgp-rib</name>
+        <rib-id>example-bgp-rib</rib-id>
+        <local-as>64496</local-as>
+        <bgp-id>192.0.2.2</bgp-id>
+        <cluster-id>192.0.2.3</cluster-id>
+        ...
+    </module>
+
+-  **type** - should always be set to ``prefix:rib-impl``
+
+-  **name** and **rib-id** - the BGP RIB identifier; you can specify
+   multiple BGP RIBs by replicating the above ``module`` block. Each
+   such RIB must have a unique rib-id and name.
+
+-  **local-as** - the local AS number (where OpenDaylight is deployed),
+   used in best path selection
+
+-  **bgp-id** - the local BGP identifier (the IP of the VM where
+   OpenDaylight is deployed), used in best path selection
+
+-  **cluster-id** - cluster identifier, optional, if not specified, BGP
+   Identifier will be used
+
+Depending on your BGP router, you might need to switch from linkstate
+attribute type 99 to 29; check with your router vendor. If your router
+supports type 29, change the field iana-linkstate-attribute-type to
+true. This snippet is located in the ``31-bgp.xml`` file.
+
+.. code:: xml
+
+    <module>
+     <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:linkstate">prefix:bgp-linkstate</type>
+     <name>bgp-linkstate</name>
+     <iana-linkstate-attribute-type>true</iana-linkstate-attribute-type>
+    </module>
+
+-  **iana-linkstate-attribute-type** - IANA has issued an early
+   allocation for the BGP linkstate path attribute (=29). To preserve
+   the old value (=99) set this to false; to use the IANA-assigned type
+   set the value to true, or remove it, as it is true by default.
+
+BGP Peer
+~~~~~~~~
+
+The initial configuration is commented out so that the client does not
+start with the default configuration. Therefore the first step is to
+uncomment the module containing bgp-peer.
+
+.. code:: xml
+
+    <module>
+     <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer</type>
+     <name>example-bgp-peer</name>
+     <host>192.0.2.1</host>
+     <holdtimer>180</holdtimer>
+     <peer-role>ibgp</peer-role>
+     <rib>
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:rib-instance</type>
+      <name>example-bgp-rib</name>
+     </rib>
+     ...
+    </module>
+
+-  **name** - BGP peer name; in this configuration file you can specify
+   multiple BGP peers by replicating the above ``module`` block. Each
+   peer must have a unique name.
+
+-  **host** - IP address or hostname of the BGP peer that OpenDaylight
+   should connect to
+
+-  **holdtimer** - hold time in seconds
+
+-  **peer-role** - If peer role is not present, default value "ibgp"
+   will be used (other allowed values are "ebgp" and "rr-client"). This
+   field is case-sensitive.
+
+-  **rib** - BGP RIB identifier
+
+Configure Connection Attributes (Optional)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code:: xml
+
+    <module>
+       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:reconnectstrategy">prefix:timed-reconnect-strategy</type>
+       <name>example-reconnect-strategy</name>
+       <min-sleep>1000</min-sleep>
+       <max-sleep>180000</max-sleep>
+       <sleep-factor>2.00</sleep-factor>
+       <connect-time>5000</connect-time>
+       <executor>
+           <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-event-executor</type>
+           <name>global-event-executor</name>
+       </executor>
+    </module>
+
+-  **min-sleep** - minimum sleep time (milliseconds) between reconnect
+   tries
+
+-  **max-sleep** - maximum sleep time (milliseconds) between reconnect
+   tries
+
+-  **sleep-factor** - power factor of the sleep time between reconnect
+   tries, i.e., the previous sleep time is multiplied by this number to
+   determine the next sleep time, which will never exceed **max-sleep**
+
+-  **connect-time** - how long BGP should wait (milliseconds) for the
+   TCP connect attempt; overrides the default connection timeout
+   dictated by TCP.
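+
+The interplay of these values can be sketched with a small Python helper
+(hypothetical code, not part of OpenDaylight; the numbers are the ones
+from example-reconnect-strategy above):

```python
def reconnect_sleeps(min_sleep, max_sleep, sleep_factor, attempts):
    """Yield the sleep time (ms) before each successive reconnect try."""
    sleep = float(min_sleep)
    for _ in range(attempts):
        yield int(min(sleep, max_sleep))  # never exceed max-sleep
        sleep *= sleep_factor             # grow by the power factor

# min-sleep 1000 ms, max-sleep 180000 ms, sleep-factor 2.00:
print(list(reconnect_sleeps(1000, 180000, 2.00, 9)))
# → [1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000, 180000]
```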
+
+BGP Speaker Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The previous entries described the configuration of BGP connections
+initiated by OpenDaylight. OpenDaylight can also accept incoming BGP
+connections.
+
+The configuration of the BGP speaker is located in ``41-bgp-example.xml``:
+
+.. code:: xml
+
+    <module>
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer-acceptor</type>
+        <name>bgp-peer-server</name>
+
+        <!--Default parameters-->
+        <!--<binding-address>0.0.0.0</binding-address>-->
+        <!--<binding-port>1790</binding-port>-->
+
+        ...
+        <!--Drops or accepts incoming BGP connection, every BGP Peer that should be accepted needs to be added to this registry-->
+        <peer-registry>
+            <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer-registry</type>
+            <name>global-bgp-peer-registry</name>
+        </peer-registry>
+    </module>
+
+-  Changing the binding address: Uncomment the binding-address tag and
+   change the address to e.g. *127.0.0.1*. The default binding address
+   is *0.0.0.0*.
+
+-  Changing the binding port: Uncomment the binding-port tag and change
+   the port to e.g. *1790*. The default binding port is *179* as
+   specified in `RFC 4271 <http://tools.ietf.org/html/rfc4271>`__.
+
+Incoming BGP Connections
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**The BGP speaker drops all BGP connections from unknown BGP peers.**
+The decision is made in the bgp-peer-registry component that is
+injected into the speaker (the registry is configured in ``31-bgp.xml``).
+
+To add a BGP peer configuration into the registry, it is necessary to
+configure a regular BGP peer just like in the example in
+``41-bgp-example.xml``. Notice that the BGP peer depends on the same
+bgp-peer-registry as the bgp-speaker:
+
+.. code:: xml
+
+    <module>
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer</type>
+        <name>example-bgp-peer</name>
+        <host>192.0.2.1</host>
+        ...
+        <peer-registry>
+            <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer-registry</type>
+            <name>global-bgp-peer-registry</name>
+        </peer-registry>
+        ...
+    </module>
+
+The BGP peer registers itself into the registry, which allows incoming
+BGP connections to be handled by the bgp-speaker. (The peer-registry
+config attribute is optional for now, to preserve backwards compatibility.)
+With this configuration, the connection to 192.0.2.1 is initiated by
+OpenDaylight but will also be accepted from 192.0.2.1. In case both
+connections are being established, only one of them will be preserved
+and the other will be dropped. The connection initiated from device with
+lower BGP id will be dropped by the registry. Each BGP peer must be
+configured in its own ``module`` block. Note that the name of the
+module needs to be unique, so if you are configuring more peers, change
+the **name** along with the **host**.
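+
+The collision rule above can be illustrated with a short sketch (a
+hypothetical helper; the real decision is made inside the
+bgp-peer-registry):

```python
import ipaddress

def surviving_connection(local_bgp_id, peer_bgp_id):
    """When both sides open a session, the connection initiated by the
    device with the LOWER BGP identifier is dropped, so the connection
    initiated by the higher identifier survives."""
    if ipaddress.IPv4Address(local_bgp_id) > ipaddress.IPv4Address(peer_bgp_id):
        return "local"   # the locally initiated connection is kept
    return "remote"      # the peer-initiated connection is kept

# With the example values (ODL bgp-rib-id 192.0.2.2, peer 192.0.2.1),
# the peer has the lower identifier, so its attempt is dropped.
print(surviving_connection("192.0.2.2", "192.0.2.1"))  # → local
```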
+
+To configure a peer that only listens for incoming connections and
+instruct OpenDaylight not to initiate the connection, add the
+initiate-connection attribute to the peer’s configuration and set it to
+false:
+
+.. code:: xml
+
+    <module>
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer</type>
+        <name>example-bgp-peer</name>
+        <host>192.0.2.1</host>                         // IP address or hostname of the speaker
+        <holdtimer>180</holdtimer>
+        <initiate-connection>false</initiate-connection>  // Connection will not be initiated by ODL
+        ...
+    </module>
+
+-  **initiate-connection** - if set to false OpenDaylight will not
+   initiate connection to this peer. Default value is true.
+
+BGP Application Peer
+~~~~~~~~~~~~~~~~~~~~
+
+A BGP speaker needs to register all peers that can connect to it (if a
+BGP peer is not configured, its connection to OpenDaylight won’t be
+successful). As a first step, configure the RIB. Then, instead of
+configuring a regular peer, configure this application peer, with its
+own application RIB. Change the bgp-peer-id, which is your local BGP ID
+that will be used in the BGP best path selection algorithm.
+
+.. code:: xml
+
+    <module>
+     <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-application-peer</type>
+     <name>example-bgp-peer-app</name>
+     <bgp-peer-id>10.25.1.9</bgp-peer-id>
+     <target-rib>
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-instance</type>
+      <name>example-bgp-rib</name>
+     </target-rib>
+     <application-rib-id>example-app-rib</application-rib-id>
+     ...
+    </module>
+
+-  **bgp-peer-id** - the local BGP identifier (the IP of the VM where
+   OpenDaylight is deployed), used in best path selection
+
+-  **target-rib** - RIB ID of existing RIB where the data should be
+   transferred
+
+-  **application-rib-id** - RIB ID of local application RIB (all the
+   routes that you put to OpenDaylight will be displayed here)
+
+Configuration through RESTCONF
+------------------------------
+
+Another method to configure BGP is dynamically through RESTCONF. Instead
+of restarting Karaf, install a feature that provides access to the
+*restconf/config/* URLs:
+
+::
+
+    feature:install odl-netconf-connector-all
+
+To check what modules you currently have configured, check the
+following link:
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/
+
+This URL is also used to POST new configuration. If you want to change
+any other configuration that is listed here, make sure you include the
+correct namespaces; RESTCONF will tell you if a namespace is wrong.
+
+To update an existing configuration use **PUT** and give the full path
+to the element you wish to update.
+
+It is vital that you respect the order of steps described in this user
+guide.
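+
+The mount-point URLs used throughout this section follow a fixed
+pattern; as a sketch, they can be assembled with a small hypothetical
+helper (the host depends on your deployment):

```python
MOUNT = ("http://{host}:8181/restconf/config/"
         "network-topology:network-topology/topology/topology-netconf/"
         "node/controller-config/yang-ext:mount/config:modules")

def module_url(host, module_type=None, module_name=None):
    """URL of the modules list (POST target) or one module (PUT target)."""
    url = MOUNT.format(host=host)
    if module_type and module_name:
        url += "/module/{0}/{1}".format(module_type, module_name)
    return url

# PUT target for the example RIB used throughout this section:
print(module_url("127.0.0.1", "odl-bgp-rib-impl-cfg:rib-impl", "example-bgp-rib"))
```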
+
+RIB
+~~~
+
+First, configure the RIB. This module is already present in the
+configuration, therefore we change only the parameters we need. In this
+case, it’s **bgp-rib-id** and **local-as**.
+
+**URL:**
+*http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-bgp-rib-impl-cfg:rib-impl/example-bgp-rib*
+
+**PUT:**
+
+.. code:: xml
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+     <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-impl</type>
+     <name>example-bgp-rib</name>
+     <session-reconnect-strategy xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:protocol:framework">x:reconnect-strategy-factory</type>
+      <name>example-reconnect-strategy-factory</name>
+     </session-reconnect-strategy>
+     <rib-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">example-bgp-rib</rib-id>
+     <extensions xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:spi">x:extensions</type>
+      <name>global-rib-extensions</name>
+     </extensions>
+     <codec-tree-factory xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-codec-tree-factory</type>
+      <name>runtime-mapping-singleton</name>
+     </codec-tree-factory>
+     <tcp-reconnect-strategy xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:protocol:framework">x:reconnect-strategy-factory</type>
+      <name>example-reconnect-strategy-factory</name>
+     </tcp-reconnect-strategy>
+     <data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-async-data-broker</type>
+      <name>pingpong-binding-data-broker</name>
+     </data-provider>
+     <local-as xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">64496</local-as>
+     <bgp-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type>bgp-dispatcher</type>
+      <name>global-bgp-dispatcher</name>
+     </bgp-dispatcher>
+     <dom-data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
+      <name>pingpong-broker</name>
+     </dom-data-provider>
+     <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type>bgp-table-type</type>
+      <name>ipv4-unicast</name>
+     </local-table>
+     <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type>bgp-table-type</type>
+      <name>ipv6-unicast</name>
+     </local-table>
+     <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type>bgp-table-type</type>
+      <name>linkstate</name>
+     </local-table>
+     <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type>bgp-table-type</type>
+      <name>ipv4-flowspec</name>
+     </local-table>
+     <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type>bgp-table-type</type>
+      <name>ipv6-flowspec</name>
+     </local-table>
+     <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type>bgp-table-type</type>
+      <name>labeled-unicast</name>
+     </local-table>
+     <bgp-rib-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">192.0.2.2</bgp-rib-id>
+     <openconfig-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp-openconfig-spi">x:bgp-openconfig-provider</type>
+      <name>openconfig-bgp</name>
+     </openconfig-provider>
+    </module>
+
+Depending on your BGP router, you might need to switch from linkstate
+attribute type 99 to 29; check with your router vendor. If your router
+supports type 29, change the field iana-linkstate-attribute-type to
+true. You can do that with the following RESTCONF operation:
+
+**URL:**
+*http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-bgp-linkstate-cfg:bgp-linkstate/bgp-linkstate*
+
+**PUT:**
+
+.. code:: xml
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+     <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:linkstate">x:bgp-linkstate</type>
+     <name>bgp-linkstate</name>
+     <iana-linkstate-attribute-type xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:linkstate">true</iana-linkstate-attribute-type>
+    </module>
+
+BGP Peer
+~~~~~~~~
+
+We also need to add a new module (bgp-peer) to the configuration. In
+this case, the whole module needs to be configured. Please change the
+values of **host**, **holdtimer** and **peer-role** (if necessary).
+
+**URL:**
+*http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules*
+
+**POST:**
+
+.. code:: xml
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+     <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-peer</type>
+     <name>example-bgp-peer</name>
+     <host xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">192.0.2.1</host>
+     <holdtimer xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">180</holdtimer>
+     <peer-role xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">ibgp</peer-role>
+     <rib xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-instance</type>
+      <name>example-bgp-rib</name>
+     </rib>
+     <peer-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-peer-registry</type>
+      <name>global-bgp-peer-registry</name>
+     </peer-registry>
+     <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
+      <name>ipv4-unicast</name>
+     </advertized-table>
+     <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
+      <name>ipv6-unicast</name>
+     </advertized-table>
+     <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
+      <name>linkstate</name>
+     </advertized-table>
+     <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
+      <name>ipv4-flowspec</name>
+     </advertized-table>
+     <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
+      <name>ipv6-flowspec</name>
+     </advertized-table>
+     <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
+      <name>labeled-unicast</name>
+     </advertized-table>
+    </module>
+
+This is all the information necessary to get OpenDaylight to connect to
+your speaker.
+
+BGP Application Peer
+~~~~~~~~~~~~~~~~~~~~
+
+Change the value of **bgp-peer-id**, which is your local BGP ID that
+will be used in the BGP best path selection algorithm.
+
+**URL:**
+*http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules*
+
+**POST:**
+
+.. code:: xml
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+     <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-application-peer</type>
+     <name>example-bgp-peer-app</name>
+     <bgp-peer-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">10.25.1.9</bgp-peer-id> <!-- Your local BGP-ID that will be used in BGP Best Path Selection algorithm -->
+     <target-rib xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-instance</type>
+      <name>example-bgp-rib</name>
+     </target-rib>
+     <application-rib-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">example-app-rib</application-rib-id>
+     <data-broker xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
+      <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
+      <name>pingpong-broker</name>
+     </data-broker>
+    </module>
+
+Tutorials
+---------
+
+Viewing BGP Topology
+~~~~~~~~~~~~~~~~~~~~
+
+This section summarizes how data from BGP can be viewed through
+RESTCONF. Currently it is the only way to view the data.
+
+Network Topology View
+^^^^^^^^^^^^^^^^^^^^^
+
+The URL for network topology is:
+http://localhost:8181/restconf/operational/network-topology:network-topology/
+
+If BGP is configured properly, it should display output similar to:
+
+.. code:: xml
+
+    <network-topology>
+     <topology>
+      <topology-id>pcep-topology</topology-id>
+      <topology-types>
+       <topology-pcep/>
+      </topology-types>
+     </topology>
+     <topology>
+      <server-provided>true</server-provided>
+      <topology-id>example-ipv4-topology</topology-id>
+      <topology-types/>
+     </topology>
+     <topology>
+      <server-provided>true</server-provided>
+      <topology-id>example-linkstate-topology</topology-id>
+      <topology-types/>
+     </topology>
+    </network-topology>
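+
+A response like the one above can be checked for the expected
+topologies with a few lines of Python (a sketch; it parses an
+abbreviated sample rather than a live response, which may carry
+additional elements):

```python
import xml.etree.ElementTree as ET

# Abbreviated version of the sample response shown above.
sample = """\
<network-topology>
 <topology><topology-id>pcep-topology</topology-id></topology>
 <topology><topology-id>example-ipv4-topology</topology-id></topology>
 <topology><topology-id>example-linkstate-topology</topology-id></topology>
</network-topology>
"""

root = ET.fromstring(sample)
ids = [t.findtext("topology-id") for t in root.findall("topology")]
print(ids)
# → ['pcep-topology', 'example-ipv4-topology', 'example-linkstate-topology']
```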
+
+BGP topology information learned from BGP peers is presented in three
+topologies (if all three are configured):
+
+-  **example-linkstate-topology** - displays links and nodes advertised
+   through linkstate update messages
+
+   -  http://localhost:8181/restconf/operational/network-topology:network-topology/topology/example-linkstate-topology
+
+-  **example-ipv4-topology** - displays IPv4 addresses of nodes in the
+   topology
+
+   -  http://localhost:8181/restconf/operational/network-topology:network-topology/topology/example-ipv4-topology
+
+-  **example-ipv6-topology** - displays IPv6 addresses of nodes in the
+   topology
+
+   -  http://localhost:8181/restconf/operational/network-topology:network-topology/topology/example-ipv6-topology
+
+Route Information Base (RIB) View
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Another view of BGP data is through **BGP RIBs**, located here:
+http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/
+
+There are multiple RIBs configured:
+
+-  AdjRibsIn (per Peer) : Adjacency RIBs In, BGP routes as they come
+   from BGP Peer
+
+-  EffectiveRib (per Peer) : BGP routes after applying Import policies
+
+-  LocRib (per RIB) : Local RIB, BGP routes from all peers
+
+-  AdjRibsOut (per Peer) : BGP routes that will be advertised, after
+   applying Export policies
+
+This is what the empty output looks like, when address families for
+IPv4 Unicast, IPv6 Unicast, IPv4 Flowspec, IPv6 Flowspec, IPv4 Labeled
+Unicast and Linkstate were configured:
+
+.. code:: xml
+
+    <loc-rib xmlns="urn:opendaylight:params:xml:ns:yang:bgp-rib">
+      <tables>
+        <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv6-address-family</afi>
+        <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:unicast-subsequent-address-family</safi>
+        <attributes>
+          <uptodate>false</uptodate>
+        </attributes>
+        <ipv6-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
+        </ipv6-routes>
+      </tables>
+      <tables>
+        <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
+        <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:unicast-subsequent-address-family</safi>
+        <attributes>
+          <uptodate>false</uptodate>
+        </attributes>
+        <ipv4-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
+        </ipv4-routes>
+      </tables>
+      <tables>
+        <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
+        <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">x:flowspec-subsequent-address-family</safi>
+        <attributes>
+          <uptodate>false</uptodate>
+        </attributes>
+        <flowspec-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
+        </flowspec-routes>
+      </tables>
+      <tables>
+        <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv6-address-family</afi>
+        <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">x:flowspec-subsequent-address-family</safi>
+        <attributes>
+          <uptodate>false</uptodate>
+        </attributes>
+        <flowspec-ipv6-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
+        </flowspec-ipv6-routes>
+      </tables>
+      <tables>
+        <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
+        <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-labeled-unicast">x:labeled-unicast-subsequent-address-family</safi>
+        <attributes>
+          <uptodate>false</uptodate>
+        </attributes>
+        <labeled-unicast-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-labeled-unicast">
+        </labeled-unicast-routes>
+      </tables>
+      <tables>
+        <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-linkstate">x:linkstate-address-family</afi>
+        <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-linkstate">x:linkstate-subsequent-address-family</safi>
+        <attributes>
+          <uptodate>false</uptodate>
+        </attributes>
+        <linkstate-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-linkstate">
+        </linkstate-routes>
+      </tables>
+    </loc-rib>
+
+You can see details for each AFI by expanding the RESTCONF link:
+
+-  **IPv4 Unicast** :
+   http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv4-address-family/bgp-types:unicast-subsequent-address-family/ipv4-routes
+
+-  **IPv6 Unicast** :
+   http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv6-address-family/bgp-types:unicast-subsequent-address-family/ipv6-routes
+
+-  **IPv4 Labeled Unicast** :
+   http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv4-address-family/bgp-labeled-unicast:labeled-unicast-subsequent-address-family/bgp-labeled-unicast:labeled-unicast-routes
+
+-  **IPv4 Flowspec** :
+   http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv4-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-routes
+
+-  **IPv6 Flowspec** :
+   http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv6-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-ipv6-routes
+
+-  **Linkstate** :
+   http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-linkstate:linkstate-address-family/bgp-linkstate:linkstate-subsequent-address-family/linkstate-routes
+
+Populate RIB
+~~~~~~~~~~~~
+
+If an application peer is configured, you can populate its RIB by making
+POST calls to RESTCONF like the following.
+
+IPv4 Unicast
+^^^^^^^^^^^^
+
+**Add route:**
+
+**URL:**
+http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv4-routes/
+
+-  where example-app-rib is your application RIB id (the one you
+   specified in the configuration) and the tables segment specifies the
+   AFI and SAFI of the data that you want to add.
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+.. code:: xml
+
+     <?xml version="1.0" encoding="UTF-8" standalone="no"?>
+      <ipv4-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
+       <prefix>1.1.1.1/32</prefix>
+       <attributes>
+        <ipv4-next-hop>
+         <global>199.20.160.41</global>
+        </ipv4-next-hop><as-path/>
+        <multi-exit-disc>
+         <med>0</med>
+        </multi-exit-disc>
+        <local-pref>
+         <pref>100</pref>
+        </local-pref>
+        <originator-id>
+         <originator>41.41.41.41</originator>
+        </originator-id>
+        <origin>
+         <value>igp</value>
+        </origin>
+        <cluster-id>
+         <cluster>40.40.40.40</cluster>
+        </cluster-id>
+       </attributes>
+      </ipv4-route>
+
+The request results in **204 No content**. This is expected.
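+
+The URL pattern generalizes to the other address families below; as a
+sketch, the add and delete targets for IPv4 unicast can be built with a
+hypothetical helper (names are illustrative, the structure is taken
+from the URLs in this section):

```python
BASE = ("http://localhost:8181/restconf/config/bgp-rib:application-rib/"
        "{rib}/tables/bgp-types:ipv4-address-family/"
        "bgp-types:unicast-subsequent-address-family/bgp-inet:ipv4-routes/")

def ipv4_routes_url(app_rib_id):
    """POST target for adding an IPv4 unicast route to the app RIB."""
    return BASE.format(rib=app_rib_id)

def ipv4_route_url(app_rib_id, route_id):
    """DELETE target for removing a single route again."""
    return ipv4_routes_url(app_rib_id) + "bgp-inet:ipv4-route/" + route_id

print(ipv4_routes_url("example-app-rib"))
```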
+
+**Delete route:**
+
+**URL:**
+`http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv4-routes/bgp-inet:ipv4-route/<route-id> <http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv4-routes/bgp-inet:ipv4-route/<route-id>>`__
+
+**Method:** DELETE
+
+IPv6 Unicast
+^^^^^^^^^^^^
+
+**Add route:**
+
+**URL:**
+http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv6-routes/
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+.. code:: xml
+
+      <ipv6-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
+       <prefix>2001:db8:30::3/128</prefix>
+       <attributes>
+        <ipv6-next-hop>
+         <global>2001:db8:1::6</global>
+        </ipv6-next-hop>
+        <as-path/>
+        <origin>
+         <value>egp</value>
+        </origin>
+       </attributes>
+      </ipv6-route>
+
+The request results in **204 No content**. This is expected.
+
+**Delete route:**
+
+**URL:**
+`http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv6-routes/bgp-inet:ipv6-route/<route-id> <http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv6-routes/bgp-inet:ipv6-route/<route-id>>`__
+
+**Method:** DELETE
+
+IPv4 Labeled Unicast
+^^^^^^^^^^^^^^^^^^^^
+
+**Add route:**
+
+**URL:**
+http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-labeled-unicast:labeled-unicast-subsequent-address-family/bgp-labeled-unicast:labeled-unicast-routes
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+.. code:: xml
+
+      <labeled-unicast-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-labeled-unicast">
+       <route-key>label1</route-key>
+       <prefix>1.1.1.1/32</prefix>
+       <label-stack>
+        <label-value>123</label-value>
+       </label-stack>
+       <label-stack>
+        <label-value>456</label-value>
+       </label-stack>
+       <label-stack>
+        <label-value>342</label-value>
+       </label-stack>
+       <attributes>
+        <ipv4-next-hop>
+         <global>199.20.160.41</global>
+        </ipv4-next-hop>
+        <origin>
+         <value>igp</value>
+        </origin>
+        <as-path/>
+        <local-pref>
+         <pref>100</pref>
+        </local-pref>
+       </attributes>
+      </labeled-unicast-route>
+
+The request results in **204 No Content**. This is expected.
+
+**Delete route:**
+
+**URL:**
+`http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-labeled-unicast:labeled-unicast-subsequent-address-family/bgp-labeled-unicast:labeled-unicast-routes/bgp-labeled-unicast:labeled-unicast-route/<route-id> <http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-labeled-unicast:labeled-unicast-subsequent-address-family/bgp-labeled-unicast:labeled-unicast-routes/bgp-labeled-unicast:labeled-unicast-route/<route-id>>`__
+
+**Method:** DELETE
+
+IPv4 Flowspec
+^^^^^^^^^^^^^
+
+**Add route:**
+
+**URL:**
+http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-routes
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+.. code:: xml
+
+    <flowspec-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
+      <route-key>flow1</route-key>
+      <flowspec>
+        <destination-prefix>192.168.0.1/32</destination-prefix>
+      </flowspec>
+      <flowspec>
+        <source-prefix>10.0.0.1/32</source-prefix>
+      </flowspec>
+      <flowspec>
+        <protocol-ips>
+          <op>equals end-of-list</op>
+          <value>6</value>
+        </protocol-ips>
+      </flowspec>
+      <flowspec>
+        <ports>
+          <op>equals end-of-list</op>
+          <value>80</value>
+        </ports>
+      </flowspec>
+      <flowspec>
+        <destination-ports>
+          <op>greater-than</op>
+          <value>8080</value>
+        </destination-ports>
+        <destination-ports>
+          <op>and-bit less-than end-of-list</op>
+          <value>8088</value>
+        </destination-ports>
+      </flowspec>
+      <flowspec>
+        <source-ports>
+          <op>greater-than end-of-list</op>
+          <value>1024</value>
+        </source-ports>
+      </flowspec>
+      <flowspec>
+        <types>
+          <op>equals end-of-list</op>
+          <value>0</value>
+        </types>
+      </flowspec>
+      <flowspec>
+        <codes>
+          <op>equals end-of-list</op>
+          <value>0</value>
+        </codes>
+      </flowspec>
+      <flowspec>
+        <tcp-flags>
+          <op>match end-of-list</op>
+          <value>32</value>
+        </tcp-flags>
+      </flowspec>
+      <flowspec>
+        <packet-lengths>
+          <op>greater-than</op>
+          <value>400</value>
+        </packet-lengths>
+        <packet-lengths>
+          <op>and-bit less-than end-of-list</op>
+           <value>500</value>
+        </packet-lengths>
+      </flowspec>
+      <flowspec>
+        <dscps>
+          <op>equals end-of-list</op>
+          <value>20</value>
+        </dscps>
+      </flowspec>
+      <flowspec>
+        <fragments>
+          <op>match end-of-list</op>
+          <value>first</value>
+        </fragments>
+      </flowspec>
+      <attributes>
+        <origin>
+          <value>igp</value>
+        </origin>
+        <as-path/>
+        <local-pref>
+          <pref>100</pref>
+        </local-pref>
+        <extended-communities>
+        ....
+        </extended-communities>
+      </attributes>
+    </flowspec-route>
+
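The keywords inside each ``<op>`` element correspond to bit fields of the Flowspec numeric operator byte defined in RFC 5575: end-of-list (0x80), and-bit (0x40), less-than (0x04), greater-than (0x02), and equals (0x01); ``match`` belongs to the separate bitmask operator used by tcp-flags and fragments. A sketch of the numeric mapping follows; the helper is illustrative, not part of the OpenDaylight API.

```python
# RFC 5575 numeric operator bit values (the length bits, which depend on
# the encoded value size, are omitted from this sketch)
NUMERIC_OP_BITS = {
    "end-of-list": 0x80,
    "and-bit": 0x40,
    "less-than": 0x04,
    "greater-than": 0x02,
    "equals": 0x01,
}

def numeric_op_byte(op_text):
    """Encode a space-separated <op> string into its operator bits."""
    byte = 0
    for keyword in op_text.split():
        byte |= NUMERIC_OP_BITS[keyword]
    return byte
```

For example, the ``and-bit less-than end-of-list`` operator in the destination-ports entry above encodes to 0xC4.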
+**Flowspec Extended Communities (Actions):**
+
+.. code:: xml
+
+      <extended-communities>
+        <transitive>true</transitive>
+        <traffic-rate-extended-community>
+          <informative-as>123</informative-as>
+          <local-administrator>AAAAAA==</local-administrator>
+        </traffic-rate-extended-community>
+      </extended-communities>
+
+      <extended-communities>
+        <transitive>true</transitive>
+        <traffic-action-extended-community>
+          <sample>true</sample>
+          <terminal-action>false</terminal-action>
+        </traffic-action-extended-community>
+      </extended-communities>
+
+      <extended-communities>
+        <transitive>true</transitive>
+        <redirect-extended-community>
+          <global-administrator>123</global-administrator>
+          <local-administrator>AAAAew==</local-administrator>
+        </redirect-extended-community>
+      </extended-communities>
+
+      <extended-communities>
+        <transitive>true</transitive>
+        <redirect-ipv4>
+          <global-administrator>192.168.0.1</global-administrator>
+          <local-administrator>12345</local-administrator>
+        </redirect-ipv4>
+      </extended-communities>
+
+      <extended-communities>
+        <transitive>true</transitive>
+        <redirect-as4>
+          <global-administrator>64495</global-administrator>
+          <local-administrator>12345</local-administrator>
+        </redirect-as4>
+      </extended-communities>
+
+      <extended-communities>
+        <transitive>true</transitive>
+        <redirect-ip-nh-extended-community>
+          <copy>false</copy>
+        </redirect-ip-nh-extended-community>
+      </extended-communities>
+
+      <extended-communities>
+        <transitive>true</transitive>
+        <traffic-marking-extended-community>
+          <global-administrator>20</global-administrator>
+        </traffic-marking-extended-community>
+      </extended-communities>
+
+The request results in **204 No Content**. This is expected.
+
+**Delete route:**
+
+**URL:**
+`http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-routes/bgp-flowspec:flowspec-route/<route-id> <http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-routes/bgp-flowspec:flowspec-route/<route-id>>`__
+
+**Method:** DELETE
+
+IPv6 Flowspec
+^^^^^^^^^^^^^
+
+**Add route:**
+
+**URL:**
+http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-ipv6-routes
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+.. code:: xml
+
+    <flowspec-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
+      <route-key>flow-v6</route-key>
+      <flowspec>
+        <destination-prefix>2001:db8:30::3/128</destination-prefix>
+      </flowspec>
+      <flowspec>
+        <source-prefix>2001:db8:31::3/128</source-prefix>
+      </flowspec>
+      <flowspec>
+        <flow-label>
+          <op>equals end-of-list</op>
+          <value>1</value>
+        </flow-label>
+      </flowspec>
+      <attributes>
+        <extended-communities>
+          <redirect-ipv6>
+            <global-administrator>2001:db8:1::6</global-administrator>
+            <local-administrator>12345</local-administrator>
+          </redirect-ipv6>
+        </extended-communities>
+        <origin>
+          <value>igp</value>
+        </origin>
+        <as-path/>
+        <local-pref>
+          <pref>100</pref>
+        </local-pref>
+      </attributes>
+    </flowspec-route>
+
+The request results in **204 No Content**. This is expected.
+
+**Delete route:**
+
+**URL:**
+`http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-ipv6-routes/bgp-flowspec:flowspec-route/<route-id> <http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-ipv6-routes/bgp-flowspec:flowspec-route/<route-id>>`__
+
+**Method:** DELETE
+
diff --git a/docs/user-guide/capwap-user-guide.rst b/docs/user-guide/capwap-user-guide.rst
new file mode 100644 (file)
index 0000000..26bd3b8
--- /dev/null
@@ -0,0 +1,95 @@
+CAPWAP User Guide
+=================
+
+This document describes how to use the Control And Provisioning of
+Wireless Access Points (CAPWAP) feature in OpenDaylight. This document
+contains configuration, administration, and management sections for the
+feature.
+
+Overview
+--------
+
+The CAPWAP feature fills a gap in the OpenDaylight controller: managing
+CAPWAP-compliant wireless termination point (WTP) network devices
+present in enterprise networks. Intelligent applications (e.g.,
+centralized firmware management, radio planning) can be developed by
+tapping into the WTP network devices’ operational states via REST APIs.
+
+CAPWAP Architecture
+-------------------
+
+The CAPWAP feature is implemented as an MD-SAL based provider module,
+which helps discover WTP devices and update their states in MD-SAL
+operational datastore.
+
+Scope of CAPWAP Project
+-----------------------
+
+In the Lithium release, the CAPWAP project aims only to detect WTPs and
+store their basic attributes in the operational data store, which is
+accessible via REST and Java APIs.
+
+Installing CAPWAP
+-----------------
+
+To install CAPWAP, download OpenDaylight and use the Karaf console to
+install the following feature:
+
+::
+
+    odl-capwap-ac-rest
+
+Configuring CAPWAP
+------------------
+
+As of Lithium, there are no configuration requirements.
+
+Administering or Managing CAPWAP
+--------------------------------
+
+After installing the odl-capwap-ac-rest feature from the Karaf console,
+users can administer and manage CAPWAP from the APIDOCS explorer.
+
+Go to
+`http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__,
+sign in, and expand the capwap-impl panel. From there, users can execute
+various API calls.
+
+Tutorials
+---------
+
+Viewing Discovered WTPs
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This tutorial walks through the steps for starting the CAPWAP feature,
+detecting CAPWAP WTPs, and accessing the operational states of the WTPs.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+It is assumed that the user has access to at least one hardware- or
+software-based CAPWAP-compliant WTP. These devices should be configured
+with the OpenDaylight controller's IP address as the CAPWAP Access
+Controller (AC) address. It is also assumed that the WTPs and the
+OpenDaylight controller share the same Ethernet broadcast domain.
+
+Instructions
+^^^^^^^^^^^^
+
+1. Run the OpenDaylight distribution and install odl-capwap-ac-rest from
+   the Karaf console.
+
+2. Go to
+   `http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__
+
+3. Expand capwap-impl
+
+4. Click /operational/capwap-impl:capwap-ac-root/
+
+5. Click "Try it out"
+
+6. The above step should display the list of WTPs discovered using the
+   ODL CAPWAP feature.
+
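The same operational query can be issued outside the APIDOCS explorer. A minimal sketch using Python's standard library that builds the GET request for the discovered-WTP list; the ``admin``/``admin`` credentials are OpenDaylight's defaults and the controller address is an assumption, so adjust both for your deployment.

```python
import base64
from urllib.request import Request

CONTROLLER = "127.0.0.1"  # replace with the controller's IP address
URL = ("http://%s:8181/restconf/operational/"
       "capwap-impl:capwap-ac-root/" % CONTROLLER)

def discovered_wtps_request(user="admin", password="admin"):
    # RESTCONF endpoints expect HTTP Basic authentication
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return Request(URL, headers={"Authorization": "Basic " + token},
                   method="GET")

req = discovered_wtps_request()
```

Opening ``req`` with ``urllib.request.urlopen`` returns the JSON/XML view of the discovered WTPs when a controller is running.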
diff --git a/docs/user-guide/cardinal_-opendaylight-monitoring-as-a-service.rst b/docs/user-guide/cardinal_-opendaylight-monitoring-as-a-service.rst
new file mode 100644 (file)
index 0000000..b53e1c6
--- /dev/null
@@ -0,0 +1,130 @@
+Cardinal: OpenDaylight Monitoring as a Service
+==============================================
+
+This section describes how to use the Cardinal feature in OpenDaylight
+and contains configuration, administration, and management sections for
+the feature.
+
+Overview
+--------
+
+Cardinal (OpenDaylight Monitoring as a Service) enables OpenDaylight and
+the underlying software defined network to be remotely monitored by
+deployed Network Management Systems (NMS) or Analytics suite. In the
+Boron release, Cardinal will add:
+
+1. OpenDaylight MIB.
+
+2. Enable ODL diagnostics/monitoring to be exposed across SNMP (v2c, v3)
+   and REST north-bound.
+
+3. Extend ODL System health, Karaf parameter and feature info, ODL
+   plugin scalability and network parameters.
+
+4. Support autonomous notifications (SNMP Traps).
+
+Cardinal Architecture
+---------------------
+
+The Cardinal architecture can be found at the link below:
+
+https://wiki.opendaylight.org/images/8/89/Cardinal-ODL_Monitoring_as_a_Service_V2.pdf
+
+Configuring Cardinal feature
+----------------------------
+
+To start the Cardinal feature, start Karaf and type the following command:
+
+::
+
+    feature:install odl-cardinal
+
+After this, Cardinal should be up and working, with the SNMP daemon
+running on port 161.
+
+Tutorials
+---------
+
+Below are tutorials for Cardinal.
+
+Using Cardinal
+~~~~~~~~~~~~~~
+
+These tutorials are intended for any user who wants to monitor three
+basic components in OpenDaylight:
+
+1. System info for the machine on which the controller is running.
+
+2. Karaf Info
+
+3. Project Specific Information.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+There are no specific prerequisites; Cardinal works without installing
+any third-party software. However, if one wants to see the output of an
+snmpget/snmpwalk at the CLI prompt, one can install the SNMP daemon and
+client using the guide below:
+
+https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-an-snmp-daemon-and-client-on-ubuntu-14-04
+
+Using these command-line utilities, one can get the same result for an
+snmpget/snmpwalk request as the Cardinal APIs provide.
+
+Target Environment
+^^^^^^^^^^^^^^^^^^
+
+This tutorial was developed in the following environment:
+
+Controller: Linux (Ubuntu 14.04).
+
+Instructions
+^^^^^^^^^^^^
+
+Install Cardinal feature
+''''''''''''''''''''''''
+
+Open the Karaf console and install the Cardinal feature using the
+following command:
+
+::
+
+    feature:install odl-cardinal
+
+Verify that the SNMP daemon is up on port 161 by running the following
+command in a terminal window on the Linux machine:
+
+::
+
+    netstat -anp | grep "161"
+
+If the grep on the ``snmpd`` port succeeds, then the SNMP daemon is up
+and working.
+
+APIs Reference
+''''''''''''''
+
+Please see the Cardinal Developer Guide for usage of the Cardinal APIs.
+
+CLI commands to do snmpget/walk
+'''''''''''''''''''''''''''''''
+
+One can do an snmpget/snmpwalk on the ODL-CARDINAL-MIB. Open a Linux
+terminal and type the command below:
+
+::
+
+    snmpget -v2c -c public localhost Oid_Of_the_mib_variable
+
+Or
+
+::
+
+    snmpget -v2c -c public localhost ODL-CARDINAL-MIB::mib_variable_name
+
+For snmpwalk use the below command:
+
+::
+
+    snmpwalk -v2c -c public localhost SNMPv2-SMI::experimental
+
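When scripting around these CLI tools, the snmpget/snmpwalk output lines follow net-snmp's ``OID = TYPE: value`` shape. A small, hypothetical helper (not part of Cardinal) for turning captured output into OID/value pairs:

```python
def parse_snmp_lines(output):
    """Parse 'OID = TYPE: value' lines from snmpget/snmpwalk output."""
    pairs = []
    for line in output.strip().splitlines():
        oid, _, rest = line.partition(" = ")
        _, _, value = rest.partition(": ")
        # drop the surrounding quotes net-snmp puts around STRING values
        pairs.append((oid.strip(), value.strip().strip('"')))
    return pairs

# a captured line in net-snmp's usual output format; the value shown
# here is illustrative, not an actual Cardinal MIB variable
sample = 'SNMPv2-SMI::experimental.1.1.0 = STRING: "OpenDaylight"'
```

This makes it easy to feed snmpwalk results into monitoring scripts alongside the Cardinal REST APIs.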
diff --git a/docs/user-guide/centinel-user-guide.rst b/docs/user-guide/centinel-user-guide.rst
new file mode 100644 (file)
index 0000000..3531cb6
--- /dev/null
@@ -0,0 +1,144 @@
+Centinel User Guide
+===================
+
+The Centinel project aims at providing a distributed, reliable framework
+for efficiently collecting, aggregating, and sinking streaming data
+across a persistence DB and stream analyzers (for example: Graylog,
+Elasticsearch, Spark, Hive). This document contains configuration,
+administration, management, and usage sections for the feature.
+
+Overview
+--------
+
+In the Beryllium Release of Centinel, this framework enables SDN
+applications/services to receive events from multiple streaming sources
+(e.g., Syslog, Thrift, Avro, AMQP, Log4j, HTTP/REST) and execute actions
+like network configuration/batch processing/real-time analytics. It also
+provides a Log Service, installed via the feature ``odl-centinel-all``,
+to assist operators running an SDN ecosystem.
+
+Alongside the configurations, development of the Log Service and a
+plug-in for a log analyzer (e.g., Graylog) will take place. The Log
+Service will process real-time events coming from the log analyzer.
+Additionally, a stream collector (Flume- and Sqoop-based) will collect
+logs from OpenDaylight and sink them to the persistence service
+(integrated with TSDR). Centinel also includes a RESTCONF interface to
+inject events into northbound applications for real-time
+analytics/network configuration. The Centinel User Interface (web
+interface) will be available to operators to enable
+rules/alerts/dashboards.
+
+Centinel core features
+----------------------
+
+The core features of the Centinel framework are:
+
+Stream collector
+    Collecting, aggregating and sinking streaming data
+
+Log Service
+    Listens to log stream events coming from the log analyzer
+
+Log Service
+    Enables users to configure rules (e.g., alerts, diagnostics, health,
+    dashboards)
+
+Log Service
+    Performs event processing/analytics
+
+User Interface
+    Enables set-rule, search, visualize, alert, diagnostic, dashboard,
+    etc.
+
+Adaptor
+    Log analyzer plug-in to Graylog and a generic data-model to extend
+    to other stream analyzers (e.g., Logstash)
+
+REST Service
+    Northbound APIs for the Log Service and Stream collector framework
+
+Leverages
+    TSDR persistence service, data query, purging and elastic search
+
+Centinel Architecture
+---------------------
+
+The following wiki pages capture the Centinel Model/Architecture
+
+a. https://wiki.opendaylight.org/view/Centinel:Main
+
+b. https://wiki.opendaylight.org/view/Project_Proposals:Centinel
+
+c. https://wiki.opendaylight.org/images/0/09/Centinel-08132015.pdf
+
+Administering or Managing Centinel with default configuration
+-------------------------------------------------------------
+
+Prerequisites
+~~~~~~~~~~~~~
+
+1. Check whether Graylog is up and running with its plugins deployed, as
+   mentioned in the `installation
+   guide <http://opendaylight.readthedocs.io/en/stable-beryllium/getting-started-guide/index.html>`__.
+
+2. Check whether HBase is up and the respective tables and column
+   families mentioned in the `installation
+   guide <http://opendaylight.readthedocs.io/en/stable-beryllium/getting-started-guide/index.html>`__
+   have been created.
+
+3. Check if Apache Flume is up and running.
+
+4. Check if Apache Drill is up and running.
+
+Running Centinel
+~~~~~~~~~~~~~~~~
+
+The following steps should be followed to bring up the controller:
+
+1. Download the Centinel OpenDaylight distribution release from the
+   link below: http://www.opendaylight.org/software/downloads
+
+2. Run Karaf of the distribution from the ``bin`` folder:
+
+   ::
+
+       ./karaf
+
+3. Install the Centinel features using the command below:
+
+   ::
+
+       feature:install odl-centinel-all
+
+4. Allow some time for Centinel to come up.
+
+User Actions
+~~~~~~~~~~~~
+
+1. **Log In:** User logs into the Centinel with required credentials
+   using following URL: http://localhost:8181/index.html
+
+2. **Create Rule:**
+
+   a. Select Centinel sub-tree present in left side and go to Rule tab.
+
+   b. Create Rule with title and description.
+
+   c. Configure a flow rule on the stream to filter the logs
+      accordingly, e.g., ``bundle_name=org.opendaylight.openflow-plugin``
+
+3. **Set Alarm Condition:** Configure an alarm condition, e.g., a
+   message-count rule such that if 10 messages arrive on a stream (e.g.,
+   the OpenFlow plugin) within the last minute, an alert is generated.
+
+4. **Subscription:** Users can subscribe to the rule and alarm condition
+   by entering the HTTP details or email ID in the subscription text
+   field and clicking the subscribe button.
+
+5. **Create Dashboard:** Configure dashboard for stream and alert
+   widgets. Alarm and Stream count will be updated in corresponding
+   widget in Dashboard.
+
+6. **Event Tab:** Intercepted logs, alarms, and raw logs are displayed
+   in the Event tab by selecting the appropriate radio button. Users can
+   also filter the searched data using an SQL query in the search box.
+
diff --git a/docs/user-guide/didm-user-guide.rst b/docs/user-guide/didm-user-guide.rst
new file mode 100644 (file)
index 0000000..4c4affc
--- /dev/null
@@ -0,0 +1,194 @@
+DIDM User Guide
+===============
+
+Overview
+--------
+
+The Device Identification and Driver Management (DIDM) project addresses
+the need to provide device-specific functionality. Device-specific
+functionality is code that performs a feature, and the code is
+knowledgeable of the capability and limitations of the device. For
+example, configuring VLANs and adjusting FlowMods are features, and
+there may be different implementations for different device types.
+Device-specific functionality is implemented as Device Drivers. Device
+Drivers need to be associated with the devices they can be used with. To
+determine this association requires the ability to identify the device
+type.
+
+DIDM Architecture
+-----------------
+
+The DIDM project creates the infrastructure to support the following
+functions:
+
+-  **Discovery** - Determination that a device exists in the controller
+   management domain and connectivity to the device can be established.
+   For devices that support the OpenFlow protocol, the existing
+   discovery mechanism in OpenDaylight suffices. Devices that do not
+   support OpenFlow will be discovered through manual means such as the
+   operator entering device information via GUI or REST API.
+
+-  **Identification** – Determination of the device type.
+
+-  **Driver Registration** – Registration of Device Drivers as routed
+   RPCs.
+
+-  **Synchronization** – Collection of device information, device
+   configuration, and link (connection) information.
+
+-  **Data Models for Common Features** – Data models will be defined to
+   perform common features such as VLAN configuration. For example,
+   applications can configure a VLAN by writing the VLAN data to the
+   data store as specified by the common data model.
+
+-  **RPCs for Common Features** – Configuring VLANs and adjusting
+   FlowMods are examples of features. RPCs will be defined that specify
+   the APIs for these features. Drivers implement features for specific
+   devices and support the APIs defined by the RPCs. There may be
+   different Driver implementations for different device types.
+
+Atrium Support
+--------------
+
+Atrium implements an open source router that speaks BGP to other
+routers, and forwards packets received on one port/VLAN to another based
+on the next hop learnt via BGP peering. A BGP peering application for
+the OpenDaylight controller and a new model for flow objective drivers
+for switches, integrated with the OpenDaylight Atrium distribution, were
+developed for this project. The implementation provides feature parity
+with the Atrium 2015/A distribution on the ONOS controller. An overview
+of the architecture is available here:
+https://github.com/onfsdn/atrium-docs/wiki/ODL-Based-Atrium-Router-16A
+
+The Atrium stack is implemented in OpenDaylight using the Atrium and
+DIDM projects. The Atrium project provides the application
+implementation for BGP peering, and the DIDM project provides the
+implementation of FlowObjectives. FlowObjective provides an abstraction
+layer and presents a pipeline-agnostic API for applications to consume.
+
+FlowObjective
+~~~~~~~~~~~~~
+
+Flow Objectives describe an SDN application’s objective (or intention)
+behind a flow it is sending to a device.
+
+An application communicates its flow installation requirements using
+Flow Objectives. DIDM drivers translate the Flow Objectives into
+device-specific flows according to the device pipeline.
+
+There are three FlowObjectives (already implemented in the ONOS
+controller):
+
+-  Filtering Objective
+
+-  Next Objective
+
+-  Forwarding Objective
+
+Installing DIDM
+---------------
+
+To install DIDM, download OpenDaylight and use the Karaf console to
+install the following features:
+
+-  odl-openflowplugin-all
+
+-  odl-didm-all
+
+odl-didm-all installs the following required features:
+
+-  odl-didm-ovs-all
+
+-  odl-didm-ovs-impl
+
+-  odl-didm-util
+
+-  odl-didm-identification
+
+-  odl-didm-drivers
+
+-  odl-didm-hp-all
+
+Configuring DIDM
+----------------
+
+This section shows example configuration steps for installing a driver
+(the HP 3800 OpenFlow switch driver).
+
+Install DIDM features:
+----------------------
+
+::
+
+    feature:install odl-didm-identification-api
+    feature:install odl-didm-drivers
+
+In order to identify a device, the device driver needs to be installed
+first. The Identification Manager is notified when a new device connects
+to the controller.
+
+Install HP driver
+-----------------
+
+``feature:install odl-didm-hp-all`` installs the following features:
+
+-  odl-didm-util
+
+-  odl-didm-identification
+
+-  odl-didm-drivers
+
+-  odl-didm-hp-all
+
+-  odl-didm-hp-impl
+
+At this point, the driver has written all of the identification
+information into the MD-SAL datastore. The Identification Manager has
+that information, so it can try to identify the HP 3800 device when it
+connects to the controller.
+
+Configure the switch and connect it to the controller from the switch
+CLI.
+
+Run REST GET command to verify the device details:
+--------------------------------------------------
+
+`http://<CONTROLLER-IP:8181>/restconf/operational/opendaylight-inventory:nodes <http://<CONTROLLER-IP:8181>/restconf/operational/opendaylight-inventory:nodes>`__
+
+Run REST adjust-flow command to adjust flows and push to the device
+-------------------------------------------------------------------
+
+**Flow mod driver for HP 3800 device is added in Beryllium release.**
+
+This driver adjusts the flows and pushes them to the device. The API
+takes the flow to be adjusted as input and displays the adjusted flow as
+output in the REST output container. Here is the REST API to adjust and
+push flows to an HP 3800 device:
+
+`http://<CONTROLLER-IP:8181>/restconf/operations/openflow-feature:adjust-flow <http://<CONTROLLER-IP:8181>/restconf/operations/openflow-feature:adjust-flow>`__
+
+FlowObjectives API
+------------------
+
+FlowObjective presents a pipeline-agnostic OpenFlow API for applications
+to consume. Applications communicate the intent behind a flow
+installation to drivers using a FlowObjective. The driver translates the
+FlowObjective into device-specific flows and uses the OpenFlowPlugin to
+install the flows on the device.
+
+Filter Objective
+~~~~~~~~~~~~~~~~
+
+`http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:filter <http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:filter>`__
+
+Next Objective
+~~~~~~~~~~~~~~
+
+`http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:next <http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:next>`__
+
+Forward Objective
+~~~~~~~~~~~~~~~~~
+
+`http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:forward <http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:forward>`__
+
diff --git a/docs/user-guide/genius-user-guide.rst b/docs/user-guide/genius-user-guide.rst
new file mode 100644 (file)
index 0000000..bf07ff9
--- /dev/null
@@ -0,0 +1,568 @@
+Genius User Guide
+=================
+
+Overview
+--------
+
+The Genius project provides generic network interfaces, utilities and
+services. Any OpenDaylight application can use these to achieve
+interference-free co-existence with other applications using Genius.
+
+Modules and Interfaces
+----------------------
+
+In the first phase, delivered in the OpenDaylight Boron release, Genius
+provides the following modules:
+
+-  Modules providing a common view of network interfaces for different
+   services
+
+   -  **Interface (logical port) Manager**
+
+      -  *Allows bindings/registration of multiple services to logical
+         ports/interfaces*
+
+      -  *Ability to plug in different types of southbound protocol
+         renderers*
+
+   -  **Overlay Tunnel Manager**
+
+      -  *Creates and maintains overlay tunnels between configured
+         Tunnel Endpoints (TEPs)*
+
+-  Modules providing commonly used functions as shared services to avoid
+   duplication of code and waste of resources
+
+   -  **Liveness Monitor**
+
+      -  *Provides tunnel/nexthop liveness monitoring services*
+
+   -  **ID Manager**
+
+      -  *Generates persistent unique integer IDs*
+
+   -  **MD-SAL Utils**
+
+      -  *Provides common generic APIs for interaction with MD-SAL*
+
+Interface Manager Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Creating interfaces
+^^^^^^^^^^^^^^^^^^^
+
+The YANG file Data Model
+`odl-interface.yang <https://github.com/opendaylight/genius/blob/master/interfacemanager/interfacemanager-api/src/main/yang/odl-interface.yang>`__
+contains the interface configuration data-model.
+
+You can create interfaces at the MD-SAL Data Node Path
+**/config/if:interfaces/interface**, with the following attributes — 
+
+***Common attributes***
+
+-  **name** — unique interface name, can be any unique string (e.g.,
+   UUID string)
+
+-  **type** — interface type, currently supported *iana-if-type:l2vlan
+   and iana-if-type:tunnel*
+
+-  **enabled** — admin status, possible values *true* or *false*
+
+-  **parent-refs** — used to specify references to the parent
+   interface/port feeding this interface
+
+-  datapath-node-identifier — identifier for a fixed/physical dataplane
+   node, can be physical switch identifier
+
+-  parent-interface — can be a physical switch port (in conjunction of
+   above), virtual switch port (e.g., neutron port) or another interface
+
+-  list node-identifier — identifier of the dependent underlying
+   configuration protocol
+
+   -  *topology-id* — can be ovsdb configuration protocol
+
+   -  *node-id* — can be hwvtep node-id
+
+***Type specific attributes***
+
+-  when type = l2vlan
+
+   -  **vlan-id** — VLAN id for trunk-member l2vlan interfaces
+
+   -  **l2vlan-mode** — currently supported ones are *transparent*,
+      *trunk* or *trunk-member*
+
+-  when type = stacked\_vlan (Not supported yet)
+
+   -  **stacked-vlan-id** — VLAN-Id for additional/second VLAN tag
+
+-  when type = tunnel
+
+   -  **tunnel-interface-type** — tunnel type, currently supported ones
+      are:
+
+      -  tunnel-type-vxlan
+
+      -  tunnel-type-gre
+
+      -  tunnel-type-mpls-over-gre
+
+   -  **tunnel-source** — tunnel source IP address
+
+   -  **tunnel-destination** — tunnel destination IP address
+
+   -  **tunnel-gateway** — gateway IP address
+
+   -  **monitor-enabled** — tunnel monitoring enable control
+
+   -  **monitor-interval** — tunnel monitoring interval in milliseconds
+
+-  when type = mpls (Not supported yet)
+
+   -  **list labelStack** — list of labels
+
+   -  **num-labels** — number of labels configured
+
+Supported REST calls are **GET, PUT, DELETE, POST**
+
+Creating L2 port interfaces
+'''''''''''''''''''''''''''
+
+Interfaces on normal L2 ports (e.g. Neutron tap ports) are created with
+type *l2vlan* and *l2vlan-mode* as *transparent*. This type of interface
+classifies packets passing through a particular L2 (OpenFlow) port. In
+the dataplane, packets belonging to this interface are classified by
+matching the in-port against the of-port-id assigned to the base port as
+specified in parent-interface.
+
+**URL:** /restconf/config/ietf-interfaces:interfaces
+
+**Sample JSON data**
+
+::
+
+    "interfaces": {
+        "interface": [
+            {
+                "name": "4158408c-942b-487c-9a03-0b603c39d3dd",
+                "type": "iana-if-type:l2vlan",                       <--- interface type 'l2vlan' for normal L2 port
+                "odl-interface:l2vlan-mode": "transparent",          <--- 'transparent' VLAN port mode allows any (tagged, untagged) ethernet packet
+                "odl-interface:parent-interface": "tap4158408c-94",  <--- port-name as it appears on southbound interface
+                "enabled": true
+            }
+        ]
+    }
+
+Creating VLAN interfaces
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+A VLAN interface is created as an *l2vlan* interface in *trunk-member*
+mode by configuring a VLAN-Id and a particular L2 (VLAN trunk)
+interface. The parent VLAN trunk interface is created in the same way as
+the *transparent* interface specified above. A *trunk-member* interface
+defines a flow on a particular L2 port carrying a particular VLAN tag.
+On ingress, after classification, the VLAN tag is popped and the
+corresponding unique dataplane-id is associated with the packet before
+the packet is delivered to service processing. When a service module
+delivers the packet to this interface for egress, it pushes the
+corresponding VLAN tag and sends the packet out of the parent L2 port.
+
+**URL:** /restconf/config/ietf-interfaces:interfaces
+
+**Sample JSON data**
+
+::
+
+    "interfaces": {
+        "interface": [
+            {
+                "name": "4158408c-942b-487c-9a03-0b603c39d3dd:100",
+                "type": "iana-if-type:l2vlan",
+                "odl-interface:l2vlan-mode": "trunk-member",        <--- for 'trunk-member', flow is classified with particular vlan-id on an l2 port
+                "odl-interface:parent-interface": "4158408c-942b-487c-9a03-0b603c39d3dd",  <--- Parent 'trunk' iterface name
+                "odl-interface:vlan-id": "100",
+                "enabled": true
+            }
+        ]
+    }
+
+Creating Overlay Tunnel Interfaces
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An overlay tunnel interface is created with type *tunnel* and a
+particular *tunnel-interface-type*. Tunnel interfaces are created on a
+particular dataplane node (virtual switch) with a pair of (local,
+remote) IP addresses. Currently supported tunnel interface types are
+VxLAN, GRE and MPLSoverGRE.
+
+**URL:** /restconf/config/ietf-interfaces:interfaces
+
+**Sample JSON data**
+
+::
+
+    "interfaces": {
+        "interface": [
+            {
+                "name": "MGRE_TUNNEL:1",
+                "type": "iana-if-type:tunnel",
+                "odl-interface:tunnel-interface-type": "odl-interface:tunnel-type-mpls-over-gre",
+                "odl-interface:datapath-node-identifier": 156613701272907,
+                "odl-interface:tunnel-source": "11.0.0.43",
+                "odl-interface:tunnel-destination": "11.0.0.66",
+                "odl-interface:monitor-enabled": false,
+                "odl-interface:monitor-interval": 10000,
+                "enabled": true
+            }
+        ]
+    }
+
+Binding services on interface
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The YANG file
+`odl-interface-service-bindings.yang <https://github.com/opendaylight/genius/blob/stable/boron/interfacemanager/interfacemanager-api/src/main/yang/odl-interface-service-bindings.yang>`__
+contains the service binding configuration data model.
+
+An application can bind services to a particular interface by
+configuring the MD-SAL data node at path
+/config/interface-service-binding. Binding services on an interface
+allows a particular service to pull traffic arriving on that interface,
+depending upon its service priority. Service modules can specify
+OpenFlow rules to be applied on packets belonging to the interface.
+Usually these rules include sending the packet to a specific service
+table/pipeline. Service modules are responsible for sending the packet
+back (if not consumed) to the service dispatcher table, for the next
+service to process the packet.
+
+**URL:** /restconf/config/interface-service-bindings:service-bindings/
+
+**Sample JSON data**
+
+::
+
+    "service-bindings": {
+      "services-info": [
+        {
+          "interface-name": "4152de47-29eb-4e95-8727-2939ac03ef84",
+          "bound-services": [
+            {
+              "service-name": "ELAN",
+              "service-type": "interface-service-bindings:service-type-flow-based"
+              "service-priority": 3,
+              "flow-priority": 5,
+              "flow-cookie": 134479872,
+              "instruction": [
+                {
+                  "order": 2,
+                  "go-to-table": {
+                    "table_id": 50
+                  }
+                },
+                {
+                  "order": 1,
+                  "write-metadata": {
+                    "metadata": 83953188864,
+                    "metadata-mask": 1099494850560
+                  }
+                }
+              ]
+            },
+            {
+             "service-name": "L3VPN",
+             "service-type": "interface-service-bindings:service-type-flow-based"
+             "service-priority": 2,
+             "flow-priority": 10,
+             "flow-cookie": 134217729,
+             "instruction": [
+                {
+                  "order": 2,
+                  "go-to-table": {
+                    "table_id": 21
+                  }
+                },
+                {
+                  "order": 1,
+                  "write-metadata": {
+                    "metadata": 100,
+                    "metadata-mask": 4294967295
+                  }
+                }
+              ]
+            }
+          ]
+        }
+      ]
+    }
+
+Interface Manager RPCs
+~~~~~~~~~~~~~~~~~~~~~~
+
+In addition to the above defined configuration interfaces, Interface
+Manager also provides several RPCs to access interface operational data
+and other helpful information. Interface Manager RPCs are defined in
+`odl-interface-rpc.yang <https://github.com/opendaylight/genius/blob/stable/boron/interfacemanager/interfacemanager-api/src/main/yang/odl-interface-rpc.yang>`__.
+
+The following RPCs are available:
+
+get-dpid-from-interface
+^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to retrieve the dpid of the switch hosting the root
+port of a given interface.
+
+::
+
+    rpc get-dpid-from-interface {
+        description "used to retrieve dpid from interface name";
+        input {
+            leaf intf-name {
+                type string;
+            }
+        }
+        output {
+            leaf dpid {
+                type uint64;
+            }
+        }
+    }
+
+get-port-from-interface
+^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to retrieve the southbound port attributes for a given
+interface name.
+
+::
+
+    rpc get-port-from-interface {
+        description "used to retrieve south bound port attributes from the interface name";
+        input {
+            leaf intf-name {
+                type string;
+            }
+        }
+        output {
+            leaf dpid {
+                type uint64;
+            }
+            leaf portno {
+                type uint32;
+            }
+            leaf portname {
+                type string;
+            }
+        }
+    }
+
+get-egress-actions-for-interface
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to retrieve the group actions to use for a given
+interface name.
+
+::
+
+    rpc get-egress-actions-for-interface {
+        description "used to retrieve group actions to use from interface name";
+        input {
+            leaf intf-name {
+                type string;
+                mandatory true;
+            }
+            leaf tunnel-key {
+                description "It can be VNI for VxLAN tunnel ifaces, Gre Key for GRE tunnels, etc.";
+                type uint32;
+                mandatory false;
+            }
+        }
+        output {
+            uses action:action-list;
+        }
+    }
+
+get-egress-instructions-for-interface
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to retrieve the flow instructions to use for a given
+interface name.
+
+::
+
+    rpc get-egress-instructions-for-interface {
+        description "used to retrieve flow instructions to use from interface name";
+        input {
+            leaf intf-name {
+                type string;
+                mandatory true;
+            }
+            leaf tunnel-key {
+                description "It can be VNI for VxLAN tunnel ifaces, Gre Key for GRE tunnels, etc.";
+                type uint32;
+                mandatory false;
+            }
+        }
+        output {
+            uses offlow:instruction-list;
+        }
+    }
+
+get-endpoint-ip-for-dpn
+^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to get the local IP address of the tunnel/trunk
+interface on a particular DPN (Data Plane Node).
+
+::
+
+    rpc get-endpoint-ip-for-dpn {
+        description "to get the local ip of the tunnel/trunk interface";
+        input {
+            leaf dpid {
+                type uint64;
+            }
+        }
+        output {
+            leaf-list local-ips {
+                type inet:ip-address;
+            }
+        }
+    }
+
+get-interface-type
+^^^^^^^^^^^^^^^^^^
+
+This RPC is used to get the type of the interface (vlan/vxlan or gre).
+
+::
+
+    rpc get-interface-type {
+    description "to get the type of the interface (vlan/vxlan or gre)";
+        input {
+            leaf intf-name {
+                type string;
+            }
+        }
+        output {
+            leaf interface-type {
+                type identityref {
+                    base if:interface-type;
+                }
+            }
+        }
+    }
+
+get-tunnel-type
+^^^^^^^^^^^^^^^
+
+This RPC is used to get the type of the tunnel interface (vxlan or gre).
+
+::
+
+    rpc get-tunnel-type {
+    description "to get the type of the tunnel interface (vxlan or gre)";
+        input {
+            leaf intf-name {
+                type string;
+            }
+        }
+        output {
+            leaf tunnel-type {
+                type identityref {
+                    base odlif:tunnel-type-base;
+                }
+            }
+        }
+    }
+
+get-nodeconnector-id-from-interface
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to get node-connector-id associated with an interface.
+
+::
+
+    rpc get-nodeconnector-id-from-interface {
+    description "to get nodeconnector id associated with an interface";
+        input {
+            leaf intf-name {
+                type string;
+            }
+        }
+        output {
+            leaf nodeconnector-id {
+                type inv:node-connector-id;
+            }
+        }
+    }
+
+get-interface-from-if-index
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to get the interface associated with an if-index
+(dataplane interface id).
+
+::
+
+    rpc get-interface-from-if-index {
+        description "to get interface associated with an if-index";
+        input {
+            leaf if-index {
+                type int32;
+            }
+        }
+        output {
+            leaf interface-name {
+                type string;
+            }
+        }
+    }
+
+create-terminating-service-actions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to create the tunnel termination service table entries.
+
+::
+
+    rpc create-terminating-service-actions {
+    description "create the ingress terminating service table entries";
+        input {
+             leaf dpid {
+                 type uint64;
+             }
+             leaf tunnel-key {
+                 type uint64;
+             }
+             leaf interface-name {
+                 type string;
+             }
+             uses offlow:instruction-list;
+        }
+    }
+
+remove-terminating-service-actions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This RPC is used to remove the tunnel termination service table entries.
+
+::
+
+    rpc remove-terminating-service-actions {
+    description "remove the ingress terminating service table entries";
+        input {
+             leaf dpid {
+                 type uint64;
+             }
+             leaf interface-name {
+                 type string;
+             }
+             leaf tunnel-key {
+                 type uint64;
+             }
+        }
+    }
+
+ID Manager
+----------
+
+TBD.
diff --git a/docs/user-guide/group-based-policy-user-guide.rst b/docs/user-guide/group-based-policy-user-guide.rst
new file mode 100644 (file)
index 0000000..d62e2bc
--- /dev/null
@@ -0,0 +1,2478 @@
+Group Based Policy User Guide
+=============================
+
+Overview
+--------
+
+OpenDaylight Group Based Policy allows users to express network
+configuration in a declarative versus imperative way.
+
+This is often described as asking for **"what you want"**, rather than
+**"how to do it"**.
+
+In order to achieve this, Group Based Policy (herein referred to as
+**GBP**) is an implementation of an **Intent System**.
+
+An **Intent System**:
+
+-  is a process around an intent driven data model
+
+-  contains no domain specifics
+
+-  is capable of addressing multiple semantic definitions of intent
+
+To this end, **GBP** Policy views an **Intent System** visually as:
+
+.. figure:: ./images/groupbasedpolicy/IntentSystemPolicySurfaces.png
+   :alt: Intent System Process and Policy Surfaces
+
+   Intent System Process and Policy Surfaces
+
+-  **expressed intent** is the entry point into the system.
+
+-  **operational constraints** provide policy for the usage of the
+   system which modulates how the system is consumed. For instance *"All
+   Financial applications must use a specific encryption standard"*.
+
+-  **capabilities and state** are provided by *renderers*. *Renderers*
+   dynamically provide their capabilities to the core model, allowing
+   the core model to remain non-domain specific.
+
+-  **governance** provides feedback on the delivery of the *expressed
+   intent*. i.e. *"Did we do what you asked us?"*
+
+In summary **GBP is about the Automation of Intent**.
+
+Thinking of **Intent Systems** in this way enables:
+
+-  **automation of intent**
+
+   By focusing on **Model. Process. Automation**, a consistent policy
+   resolution process enables mapping between the **expressed intent**
+   and the renderers responsible for providing the capabilities to
+   implement that intent.
+
+-  recursive/intent level-independent behaviour.
+
+   Where *one person’s concrete is another’s abstract*, intent can be
+   fulfilled through a hierarchical implementation of non-domain
+   specific policy resolution. Domain specifics are provided by the
+   *renderers*, and exposed via the API, at each policy resolution
+   instance. For example:
+
+   -  To DNS: The name "www.foo.com" is *abstract*, and its IPv4
+      address 10.0.0.10 is *concrete*,
+
+   -  To an IP stack: 10.0.0.10 is *abstract* and the MAC
+      08:05:04:03:02:01 is *concrete*,
+
+   -  To an Ethernet switch: The MAC 08:05:04:03:02:01 is *abstract*,
+      the resolution to a port in its CAM table is *concrete*,
+
+   -  To an optical network: The port may be *abstract*, yet the optical
+      wavelength is *concrete*.
+
+    **Note**
+
+    *This is a very domain specific analogy, tied to something most
+    readers will understand. It in no way implies that **GBP** should be
+    implemented in an OSI-type fashion. The premise is that by
+    implementing a full **Intent System**, the user is freed from many
+    of the constraints of how the expressed intent is realised.*
+
+It is important to show the overall philosophy of **GBP** as it sets the
+project’s direction.
+
+In the Beryllium release of OpenDaylight, **GBP** focused on **expressed
+intent** and on **refactoring how renderers consume and publish Subject
+Feature Definitions for multi-renderer support**.
+
+GBP Base Architecture and Value Proposition
+-------------------------------------------
+
+Terminology
+~~~~~~~~~~~
+
+In order to explain the fundamental value proposition of **GBP**, an
+illustrated example is given. First, some terminology must be defined.
+
+The Access Model is the core of the **GBP** Intent System policy
+resolution process.
+
+.. figure:: ./images/groupbasedpolicy/GBPTerminology1.png
+   :alt: GBP Access Model Terminology - Endpoints, EndpointGroups,
+   Contract
+
+   GBP Access Model Terminology - Endpoints, EndpointGroups, Contract
+
+.. figure:: ./images/groupbasedpolicy/GBPTerminology2.png
+   :alt: GBP Access Model Terminology - Subject, Classifier, Action
+
+   GBP Access Model Terminology - Subject, Classifier, Action
+
+.. figure:: ./images/groupbasedpolicy/GBPTerminology3.png
+   :alt: GBP Forwarding Model Terminology - L3 Context, L2 Bridge
+   Context, L2 Flood Context/Domain, Subnet
+
+   GBP Forwarding Model Terminology - L3 Context, L2 Bridge Context, L2
+   Flood Context/Domain, Subnet
+
+-  Endpoints:
+
+   Define concrete uniquely identifiable entities. In Beryllium,
+   examples could be a Docker container, or a Neutron port
+
+-  EndpointGroups:
+
+   EndpointGroups are sets of endpoints that share a common set of
+   policies. EndpointGroups can participate in contracts that determine
+   the kinds of communication that are allowed. EndpointGroups *consume*
+   and *provide* contracts. They also expose both *requirements and
+   capabilities*, which are labels that help to determine how contracts
+   will be applied. An EndpointGroup can specify a parent EndpointGroup
+   from which it inherits.
+
+-  Contracts:
+
+   Contracts determine which endpoints can communicate and in what way.
+   Contracts between pairs of EndpointGroups are selected by the
+   contract selectors defined by the EndpointGroup. Contracts expose
+   qualities, which are labels that can help EndpointGroups to select
+   contracts. Once the contract is selected, contracts have clauses that
+   can match against requirements and capabilities exposed by
+   EndpointGroups, as well as any conditions that may be set on
+   endpoints, in order to activate subjects that can allow specific
+   kinds of communication. A contract is allowed to specify a parent
+   contract from which it inherits.
+
+-  Subject
+
+   Subjects describe some aspect of how two endpoints are allowed to
+   communicate. Subjects define an ordered list of rules that will match
+   against the traffic and perform any necessary actions on that
+   traffic. No communication is allowed unless a subject allows that
+   communication.
+
+-  Clause
+
+   Clauses are defined as part of a contract. Clauses determine how a
+   contract should be applied to particular endpoints and
+   EndpointGroups. Clauses can match against requirements and
+   capabilities exposed by EndpointGroups, as well as any conditions
+   that may be set on endpoints. Matching clauses define some set of
+   subjects which can be applied to the communication between the pairs
+   of endpoints.
+
+Architecture and Value Proposition
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**GBP** offers an intent based interface, accessed via the `UX <#UX>`__,
+via the `REST API <#REST>`__ or directly from a domain-specific-language
+such as `Neutron <#Neutron>`__ through a mapping interface.
+
+There are two models in **GBP**:
+
+-  the access (or core) model
+
+-  the forwarding model
+
+.. figure:: ./images/groupbasedpolicy/GBP_AccessModel_simple.png
+   :alt: GBP Access (or Core) Model
+
+   GBP Access (or Core) Model
+
+The *classifier* and *action* portions of the model can be thought of as
+hooks, with their definition provided by each *renderer* about its
+domain specific capabilities. In **GBP** Beryllium, there is one
+renderer, the `OpenFlow Overlay renderer (OfOverlay) <#OfOverlay>`__.
+
+These hooks are filled with *definitions* of the types of *features* the
+renderer can provide the *subject*, and are called
+**subject-feature-definitions**.
+
+This means an *expressed intent* can be fulfilled by, and across,
+multiple renderers simultaneously, without any specific provisioning
+from the consumer of **GBP**.
+
+Since **GBP** is implemented in OpenDaylight, which is an SDN
+controller, it also must address networking. This is done via the
+*forwarding model*, which is domain specific to networking, but could be
+applied to many different *types* of networking.
+
+.. figure:: ./images/groupbasedpolicy/GBP_ForwardingModel_simple.png
+   :alt: GBP Forwarding Model
+
+   GBP Forwarding Model
+
+Each endpoint is provisioned with a *network-containment*. This can be
+a:
+
+-  subnet
+
+   -  normal IP stack behaviour, where ARP is performed within the
+      subnet, and out-of-subnet traffic is sent to the default gateway.
+
+   -  a subnet can be a child of any of the below forwarding model
+      contexts, but typically would be a child of a flood-domain
+
+-  L2 flood-domain
+
+   -  allows flooding behaviour.
+
+   -  is an n:1 child of a bridge-domain
+
+   -  can have multiple children
+
+-  L2 bridge-domain
+
+   -  is a layer2 namespace
+
+   -  is the realm where traffic can be sent at layer 2
+
+   -  is an n:1 child of an L3 context
+
+   -  can have multiple children
+
+-  L3 context
+
+   -  is a layer3 namespace
+
+   -  is the realm where traffic is passed at layer 3
+
+   -  is an n:1 child of a tenant
+
+   -  can have multiple children
+
+A simple example of how the access and forwarding models work is as
+follows:
+
+.. figure:: ./images/groupbasedpolicy/GBP_Endpoint_EPG_Contract.png
+   :alt: GBP Endpoints, EndpointGroups and Contracts
+
+   GBP Endpoints, EndpointGroups and Contracts
+
+In this example, the **EPG:webservers** is *providing* the *web* and
+*ssh* contracts. The **EPG:client** is consuming those contracts.
+**EPG:client** is providing the *any* contract, which is consumed by
+**EPG:webservers**.
+
+The *direction* keyword is always from the perspective of the *provider*
+of the contract. In this case contract *web*, being *provided* by
+**EPG:webservers**, with the classifier to match TCP destination port
+80, means:
+
+-  packets with a TCP destination port of 80
+
+-  sent to (*in*) endpoints in the **EPG:webservers**
+
+-  will be *allowed*.
+
+.. figure:: ./images/groupbasedpolicy/GBP_Endpoint_EPG_Forwarding.png
+   :alt: GBP Endpoints and the Forwarding Model
+
+   GBP Endpoints and the Forwarding Model
+
+When the forwarding model is considered in the figure above, it can be
+seen that even though all endpoints are communicating using a common set
+of contracts, their forwarding is *contained* by the forwarding model
+contexts or namespaces. In the example shown, the endpoints associated
+with a *network-containment* that has an ultimate parent of
+*L3Context:Sales* can only communicate with other endpoints within this
+L3Context. In this way L3VPN services can be implemented without any
+impact to the **Intent** of the contract.
+
+High-level implementation Architecture
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The overall architecture, including *`Neutron <#Neutron>`__* domain
+specific mapping, and the `OpenFlow Overlay renderer <#OfOverlay>`__
+looks as so:
+
+.. figure:: ./images/groupbasedpolicy/GBP_High-levelBerylliumArchitecture.png
+   :alt: GBP High Level Beryllium Architecture
+
+   GBP High Level Beryllium Architecture
+
+The major benefit of this architecture is that the mapping of the
+domain-specific-language is completely separate and independent of the
+underlying renderer implementation.
+
+For instance, using the `Neutron Mapper <#Neutron>`__, which maps the
+Neutron API to the **GBP** core model, any contract automatically
+generated from this mapping can be augmented via the `UX <#UX>`__ to use
+`Service Function Chaining <#SFC>`__, a capability not currently
+available in OpenStack Neutron.
+
+When another renderer is added, for instance, NetConf, the same policy
+can now be leveraged across NetConf devices simultaneously:
+
+.. figure:: ./images/groupbasedpolicy/GBP_High-levelExtraRenderer.png
+   :alt: GBP High Level Beryllium Architecture - adding a renderer
+
+   GBP High Level Beryllium Architecture - adding a renderer
+
+As other domain-specific mappings occur, they too can leverage the same
+renderers, as the renderers only need to implement the **GBP** access
+and forwarding models, and the domain-specific mapping need only manage
+mapping to the access and forwarding models. For instance:
+
+.. figure:: ./images/groupbasedpolicy/High-levelBerylliumArchitectureEvolution2.png
+   :alt: GBP High Level Beryllium Architecture - adding a renderer
+
+   GBP High Level Beryllium Architecture - adding a renderer
+
+In summary, the **GBP** architecture:
+
+-  separates concerns: the Expressed Intent is kept completely separated
+   from the underlying renderers.
+
+-  is cohesive: each part does its part and its part only
+
+-  is scalable: code can be optimised around model
+   mapping/implementation, and functionality re-used
+
+Policy Resolution
+~~~~~~~~~~~~~~~~~
+
+Contract Selection
+^^^^^^^^^^^^^^^^^^
+
+The first step in policy resolution is to select the contracts that are
+in scope.
+
+EndpointGroups participate in contracts either as a *provider* or as a
+*consumer* of a contract. Each EndpointGroup can participate in many
+contracts at the same time, but for each contract it can be in only one
+role at a time. In addition, there are two ways for an EndpointGroup to
+select a contract, using either a:
+
+-  *named selector*
+
+   Named selectors simply select a specific contract by its contract ID.
+
+-  *target selector*
+
+   Target selectors allow for additional flexibility by matching against
+   *qualities* of the contract’s *target.*
+
+Thus, there are four kinds of contract selectors:
+
+-  provider named selector
+
+   Select a contract by contract ID, and participate as a provider.
+
+-  provider target selector
+
+   Match against a contract’s target with a quality matcher, and
+   participate as a provider.
+
+-  consumer named selector
+
+   Select a contract by contract ID, and participate as a consumer.
+
+-  consumer target selector
+
+   Match against a contract’s target with a quality matcher, and
+   participate as a consumer.
+
+To determine which contracts are in scope, contracts are found where
+either the source EndpointGroup selects a contract as either a provider
+or consumer, while the destination EndpointGroup matches against the
+same contract in the corresponding role. So if endpoint *x* in
+EndpointGroup *X* is communicating with endpoint *y* in EndpointGroup
+*Y*, a contract *C* is in scope if either *X* selects *C* as a provider
+and *Y* selects *C* as a consumer, or vice versa.
+
+The details of how quality matchers work are described further in
+`Matchers <#Matchers>`__. Quality matchers provide a flexible mechanism
+for contract selection based on labels.
+
+The end result of the contract selection phase can be thought of as a
+set of tuples representing selected contract scopes. The fields of the
+tuple are:
+
+-  Contract ID
+
+-  The provider EndpointGroup ID
+
+-  The name of the selector in the provider EndpointGroup that was used
+   to select the contract, called the *matching provider selector.*
+
+-  The consumer EndpointGroup ID
+
+-  The name of the selector in the consumer EndpointGroup that was used
+   to select the contract, called the *matching consumer selector.*
+
+The result is then stored in the datastore under **Resolved Policy**.
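
The selection rule above can be sketched as a small function. This is
purely illustrative: the data layout (plain dicts mapping contract IDs
to selector names, covering only named selectors) is an assumption for
the example, not the MD-SAL representation.

```python
def selected_contract_scopes(groups, contracts):
    """Return the contract-scope tuples described above:
    (contract ID, provider EPG ID, matching provider selector,
     consumer EPG ID, matching consumer selector).

    groups maps EndpointGroup id -> {"provides": {...}, "consumes": {...}},
    where each inner dict maps a contract ID to the selector name used."""
    scopes = set()
    for contract in contracts:
        for pid, p in groups.items():
            if contract not in p["provides"]:
                continue
            for cid, c in groups.items():
                # A contract is in scope when one EPG provides it
                # and another consumes it.
                if contract in c["consumes"]:
                    scopes.add((contract, pid, p["provides"][contract],
                                cid, c["consumes"][contract]))
    return scopes

groups = {
    "webservers": {"provides": {"web": "web-sel"}, "consumes": {}},
    "clients": {"provides": {}, "consumes": {"web": "web-sel"}},
}
scopes = selected_contract_scopes(groups, ["web"])
# -> {("web", "webservers", "web-sel", "clients", "web-sel")}
```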
+
+Subject Selection
+^^^^^^^^^^^^^^^^^
+
+The second phase in policy resolution is to determine which subjects are
+in scope. The subjects define what kinds of communication are allowed
+between endpoints in the EndpointGroups. For each of the selected
+contract scopes from the contract selection phase, the subject selection
+procedure is applied.
+
+Labels called capabilities, requirements, and conditions are matched
+against to bring a Subject *into scope*. EndpointGroups have
+capabilities and requirements, while endpoints have conditions.
+
+Requirements and Capabilities
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When acting as a *provider*, EndpointGroups expose *capabilities,* which
+are labels representing specific pieces of functionality that can be
+exposed to other EndpointGroups that may meet functional requirements of
+those EndpointGroups.
+
+When acting as a *consumer*, EndpointGroups expose *requirements*, which
+are labels that represent that the EndpointGroup requires some specific
+piece of functionality.
+
+As an example, we might create a capability called "user-database" which
+indicates that an EndpointGroup contains endpoints that implement a
+database of users.
+
+We might create a requirement also called "user-database" to indicate an
+EndpointGroup contains endpoints that will need to communicate with the
+endpoints that expose this service.
+
+Note that in this example the requirement and capability have the same
+name, but the user need not follow this convention.
+
+The matching provider selector (that was used by the provider
+EndpointGroup to select the contract) is examined to determine the
+capabilities exposed by the provider EndpointGroup for this contract
+scope.
+
+The provider selector will have a list of capabilities either directly
+included in the provider selector or inherited from a parent selector or
+parent EndpointGroup. (See `Inheritance <#Inheritance>`__).
+
+Similarly, the matching consumer selector will expose a set of
+requirements.
+
+Conditions
+^^^^^^^^^^
+
+Endpoints can have *conditions*, which are labels representing some
+relevant piece of operational state related to the endpoint.
+
+An example of a condition might be "malware-detected," or
+"authentication-succeeded." Conditions are used to affect how that
+particular endpoint can communicate.
+
+To continue with our example, the "malware-detected" condition might
+cause an endpoint’s connectivity to be cut off, while
+"authentication-succeeded" might open up communication with services
+that require an endpoint to be first authenticated and then forward its
+authentication credentials.
+
+Clauses
+^^^^^^^
+
+Clauses perform the actual selection of subjects. A clause has lists of
+matchers in two categories. In order for a clause to become active, all
+lists of matchers must match. A matching clause will select all the
+subjects referenced by the clause. Note that an empty list of matchers
+counts as a match.
+
+The first category is the consumer matchers, which match against the
+consumer EndpointGroup and endpoints. The consumer matchers are:
+
+-  Group Identification Constraint: Requirement matchers
+
+   Matches against requirements in the matching consumer selector.
+
+-  Group Identification Constraint: GroupName
+
+   Matches against the group name
+
+-  Consumer condition matchers
+
+   Matches against conditions on endpoints in the consumer EndpointGroup
+
+-  Consumer Endpoint Identification Constraint
+
+   Label based criteria for matching against endpoints. In Beryllium
+   this can be used to label endpoints based on IpPrefix.
+
+The second category is the provider matchers, which match against the
+provider EndpointGroup and endpoints. The provider matchers are:
+
+-  Group Identification Constraint: Capability matchers
+
+   Matches against capabilities in the matching provider selector.
+
+-  Group Identification Constraint: GroupName
+
+   Matches against the group name
+
+-  Provider condition matchers
+
+   Matches against conditions on endpoints in the provider EndpointGroup
+
+-  Provider Endpoint Identification Constraint
+
+   Label based criteria for matching against endpoints. In Beryllium
+   this can be used to label endpoints based on IpPrefix.
+
+Clauses have a list of subjects that apply when all the matchers in the
+clause match. The output of the subject selection phase logically is a
+set of subjects that are in scope for any particular pair of endpoints.
+
+Rule Application
+^^^^^^^^^^^^^^^^
+
+Now that subjects have been selected that apply to the traffic between a
+particular set of endpoints, policy can be applied to allow endpoints to
+communicate. The applicable subjects from the previous step will each
+contain a set of rules.
+
+Rules consist of a set of *classifiers* and a set of *actions*.
+Classifiers match against traffic between two endpoints. An example of a
+classifier would be something that matches against all TCP traffic on
+port 80, or one that matches against HTTP traffic containing a
+particular cookie. Actions are specific actions that need to be taken on
+the traffic before it reaches its destination. Actions could include
+tagging or encapsulating the traffic in some way, redirecting the
+traffic, or applying a `service function chain <#SFC>`__.
+
+Rules, subjects, and actions have an *order* parameter, where a lower
+order value means that a particular item will be applied first. All
+rules from a particular subject will be applied before the rules of any
+other subject, and all actions from a particular rule will be applied
+before the actions from another rule. If more than one item has the same
+order parameter, ties are broken with a lexicographic ordering of their
+names, with earlier names having logically lower order.
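+
+The ordering semantics above can be sketched as follows (the ``Rule``
+type here is hypothetical, not the actual GBP data model):
+
```python
# Sketch of the order/tie-break semantics described above; the Rule
# type is hypothetical, not the actual GBP data model.
from typing import NamedTuple

class Rule(NamedTuple):
    name: str
    order: int

def application_order(rules):
    # Lower order applies first; ties are broken lexicographically by name.
    return sorted(rules, key=lambda r: (r.order, r.name))

rules = [Rule("allow-http", 2), Rule("chain-b", 2), Rule("tag-first", 1)]
ordered = [r.name for r in application_order(rules)]
# "tag-first" (lowest order) applies first; "allow-http" precedes
# "chain-b" because of the lexicographic name tie-break.
```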
+
+Matchers
+''''''''
+
+Matchers specify a set of labels (which include requirements,
+capabilities, conditions, and qualities) to match against. There are
+several kinds of matchers that operate similarly:
+
+-  Quality matchers
+
+   used in target selectors during the contract selection phase. Quality
+   matchers provide a more advanced and flexible way to select contracts
+   compared to a named selector.
+
+-  Requirement and capability matchers
+
+   used in clauses during the subject selection phase to match against
+   requirements and capabilities on EndpointGroups
+
+-  Condition matchers
+
+   used in clauses during the subject selection phase to match against
+   conditions on endpoints
+
+A matcher is, at its heart, fairly simple. It will contain a list of
+label names, along with a *match type*. The match type can be either:
+
+-  "all"
+
+   which means the matcher matches when all of its labels match
+
+-  "any"
+
+   which means the matcher matches when any of its labels match,
+
+-  "none"
+
+   which means the matcher matches when none of its labels match.
+
+Note that a *match all* matcher can be made by matching against an
+empty set of labels with a match type of "all."
+
+Additionally, each label to match can optionally include a relevant name
+field. For quality matchers, this is a target name. For capability and
+requirement matchers, this is a selector name. If the name field is
+specified, then the matcher will only match against targets or selectors
+with that name, rather than any targets or selectors.
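+
+As a sketch of the "all"/"any"/"none" match-type semantics described
+above (the per-label name matching is omitted for brevity):
+
```python
def matcher_matches(match_type, label_names, present_labels):
    """Evaluate a matcher against the set of labels present on an object.

    Sketch of the "all"/"any"/"none" semantics described above; the real
    GBP evaluation also honors per-label target/selector names.
    """
    hits = set(label_names) & set(present_labels)
    if match_type == "all":
        return hits == set(label_names)   # vacuously true for no labels
    if match_type == "any":
        return bool(hits)
    if match_type == "none":
        return not hits
    raise ValueError("unknown match type: %s" % match_type)

# A "match all" matcher: an empty label set with a match type of "all".
always = matcher_matches("all", [], {"anything"})
```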
+
+Inheritance
+^^^^^^^^^^^
+
+Some objects in the system include references to parents, from which
+they will inherit definitions. The graph of parent references must be
+loop-free. When resolving names, the resolution system must detect loops
+and raise an exception. Objects that are part of these loops may be
+considered as though they are not defined at all. Generally, inheritance
+works by simply importing the objects in the parent into the child
+object. When there are objects with the same name in the child object,
+the child object will override the parent object according to rules that
+are specific to the type of object. The detailed inheritance rules for
+each type of object are described next.
+
+**EndpointGroups**
+
+EndpointGroups will inherit all their selectors from their parent
+EndpointGroups. Selectors with the same names as selectors in the parent
+EndpointGroups will inherit their behavior as defined below.
+
+**Selectors**
+
+Selectors include provider named selectors, provider target selectors,
+consumer named selectors, and consumer target selectors. Selectors
+cannot themselves have parent selectors, but when selectors have the
+same name as a selector of the same type in the parent EndpointGroup,
+then they will inherit from and override the behavior of the selector in
+the parent EndpointGroup.
+
+**Named Selectors**
+
+Named selectors will add to the set of contract IDs that are selected by
+the parent named selector.
+
+**Target Selectors**
+
+A target selector in the child EndpointGroup with the same name as a
+target selector in the parent EndpointGroup will inherit quality
+matchers from the parent. If a quality matcher in the child has the same
+name as a quality matcher in the parent, then it will inherit as
+described below under Matchers.
+
+**Contracts**
+
+Contracts will inherit all their targets, clauses and subjects from
+their parent contracts. When any of these objects have the same name as
+in the parent contract, then the behavior will be as defined below.
+
+**Targets**
+
+Targets cannot themselves have a parent target, but a target may inherit
+from a target with the same name in a parent contract. Qualities in the
+target will be inherited from the parent. If a quality with the same
+name is defined in the child, this has no semantic effect unless the
+quality has its inclusion-rule parameter set to "exclude," in which case
+the quality should be ignored for the purpose of matching against this
+target.
+
+**Subjects**
+
+Subjects cannot themselves have a parent subject, but a subject may
+inherit from a subject with the same name in a parent contract. The
+order parameter in the child subject, if present, will override the
+order parameter in the parent subject. The rules in the parent subject
+will be added to the rules in the child subject. However, the rules will
+not override rules of the same name. Instead, all rules in the parent
+subject will be considered to run with a higher order than all rules in
+the child; that is, all rules in the child will run before any rules in
+the parent. This has the effect of overriding any rules in the parent
+without the potentially-problematic semantics of merging the ordering.
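+
+A minimal sketch of this resolution (rule dicts here are hypothetical;
+the real logic lives in the GBP policy resolver):
+
```python
# Sketch: all child-subject rules run before all parent-subject rules,
# regardless of their numeric order values. Rule dicts are hypothetical.
def resolve_subject_rules(child_rules, parent_rules):
    by_order = lambda rs: sorted(rs, key=lambda r: (r["order"], r["name"]))
    return by_order(child_rules) + by_order(parent_rules)

child = [{"name": "child-rule", "order": 10}]
parent = [{"name": "parent-rule", "order": 1}]
names = [r["name"] for r in resolve_subject_rules(child, parent)]
# The child rule applies first even though its order value is higher.
```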
+
+**Clauses**
+
+Clauses cannot themselves have a parent clause, but a clause may inherit
+from a clause with the same name in a parent contract. The list of
+subject references in the parent clause will be added to the list of
+subject references in the child clause. This is just a union operation.
+A subject reference that refers to a subject name in the parent contract
+might have that name overridden in the child contract. Each of the
+matchers in the clause is also inherited by the child clause. Matchers
+in the child of the same name and type as a matcher from the parent will
+inherit from and override the parent matcher. See below under Matchers
+for more information.
+
+**Matchers**
+
+Matchers include quality matchers, condition matchers, requirement
+matchers, and capability matchers. Matchers cannot themselves have
+parent matchers, but when there is a matcher of the same name and type
+in the parent object, the matcher in the child object will inherit from
+and override the behavior of the matcher in the parent object. The match
+type, if specified in the child, overrides the value specified in the
+parent. Labels are also inherited from the parent object. If there is a
+label with the same name in the child object, this has no semantic
+effect unless the label has its inclusion-rule parameter set to
+"exclude," in which case the label should be ignored for the purpose of
+matching. Otherwise, the label with the same name will completely
+override the label from the parent.
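+
+These merge rules can be sketched as follows (dict-based matchers are an
+illustration only, not the actual GBP YANG model):
+
```python
def resolve_matcher(parent, child):
    """Merge a child matcher with the same-named parent matcher.

    Matchers here are dicts {"match_type": str | None,
    "labels": {name: inclusion_rule}} where inclusion_rule is "include"
    or "exclude" -- a sketch of the inheritance rules described above.
    """
    merged_labels = dict(parent["labels"])
    merged_labels.update(child["labels"])        # child overrides parent
    # Labels marked "exclude" in the effective set are ignored entirely.
    effective = {n: r for n, r in merged_labels.items() if r != "exclude"}
    return {
        "match_type": child["match_type"] or parent["match_type"],
        "labels": effective,
    }

parent = {"match_type": "all", "labels": {"web": "include", "db": "include"}}
child = {"match_type": None, "labels": {"db": "exclude"}}
resolved = resolve_matcher(parent, child)
# Match type "all" is inherited; label "db" is excluded by the child.
```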
+
+Using the GBP UX interface
+--------------------------
+
+Overview
+~~~~~~~~
+
+The following components make up this application and are described in
+more detail in the following sections:
+
+-  Basic view
+
+-  Governance view
+
+-  Policy Expression view
+
+-  Wizard view
+
+The **GBP** UX is accessed via:
+
+::
+
+    http://<odl controller>:8181/index.html
+
+Basic view
+~~~~~~~~~~
+
+The Basic view contains five navigation buttons which switch the user
+to the desired section of the application:
+
+-  Governance – switch to the Governance view (middle of graphic has the
+   same function)
+
+-  Renderer configuration – switch to the Policy expression view with
+   Renderers section expanded
+
+-  Policy expression – switch to the Policy expression view with Policy
+   section expanded
+
+-  Operational constraints – placeholder for development in next release
+
+.. figure:: ./images/groupbasedpolicy/ui-1-basicview.png
+   :alt: Basic view
+
+   Basic view
+
+Governance view
+~~~~~~~~~~~~~~~
+
+The Governance view consists of three columns.
+
+.. figure:: ./images/groupbasedpolicy/ui-2-governanceview.png
+   :alt: Governance view
+
+   Governance view
+
+**Governance view – Basic view – Left column**
+
+The left column contains the Health section with Exception and Conflict
+buttons, which have no functionality yet. This is a placeholder for
+development in future releases.
+
+**Governance view – Basic view – Middle column**
+
+The top half of this section contains a select box with a list of
+tenants. Once a tenant is selected, all subsections of the application
+operate on and display data for that tenant.
+
+Below the select box are buttons which display the Expressed or
+Delivered policy of the Governance section. The bottom half of this
+section contains a select box with a list of renderers. There is
+currently only the `OfOverlay <#OfOverlay>`__ renderer available.
+
+Below that select box is the Renderer configuration button, which
+switches the app into the Policy expression view with the Renderers
+section expanded for performing CRUD operations. The Renderer state
+button displays the Renderer state view.
+
+**Governance view – Basic view – Right column**
+
+At the bottom of the right section of the Governance view is the Home
+button, which switches the app to the Basic view.
+
+The top part contains a navigation menu with four main sections.
+
+The Policy expression button expands/collapses a submenu with the three
+main parts of Policy expression. By clicking the submenu buttons, the
+user is switched to the Policy expression view with the appropriate
+section expanded for performing CRUD operations.
+
+The Renderer configuration button switches the user to the Policy
+expression view.
+
+The Governance button expands/collapses a submenu with the four main
+parts of the Governance section. The submenu buttons of the Governance
+section display the appropriate section of the Governance view.
+
+Operational constraints have no functionality yet; this is a placeholder
+for development in future releases.
+
+Below the menu is an info section which displays information about the
+currently selected element in the topology (explained below).
+
+**Governance view – Expressed policy**
+
+This view displays the contracts, with their consumed and provided
+EndpointGroups, of the currently selected tenant, which can be changed
+in the select box in the upper left corner.
+
+Single-clicking on any contract or EPG shows the data of the selected
+element in the right column below the menu. A Manage button launches a
+display wizard window for managing the configuration of items such as
+`Service Function Chaining <#SFC>`__.
+
+.. figure:: ./images/groupbasedpolicy/ui-3-governanceview-expressed.png
+   :alt: Expressed policy
+
+   Expressed policy
+
+**Governance view – Delivered policy** This view displays the subjects,
+with their consumed and provided EndpointGroups, of the currently
+selected tenant, which can be changed in the select box in the upper
+left corner.
+
+Single-clicking on any subject or EPG shows the data of the selected
+element in the right column below the menu.
+
+Double-clicking on a subject displays the subject detail view with the
+rules of the selected subject, which can be changed in the select box in
+the upper left corner.
+
+Single-clicking on a rule or subject shows the data of the selected
+element in the right column below the menu.
+
+Double-clicking on an EPG in the Delivered policy view displays the EPG
+detail view with the endpoints of the selected EPG, which can be changed
+in the select box in the upper left corner.
+
+Single-clicking on an EPG or endpoint shows the data of the selected
+element in the right column below the menu.
+
+.. figure:: ./images/groupbasedpolicy/ui-4-governanceview-delivered-0.png
+   :alt: Delivered policy
+
+   Delivered policy
+
+.. figure:: ./images/groupbasedpolicy/ui-4-governanceview-delivered-1-subject.png
+   :alt: Subject detail
+
+   Subject detail
+
+.. figure:: ./images/groupbasedpolicy/ui-4-governanceview-delivered-2-epg.png
+   :alt: EPG detail
+
+   EPG detail
+
+**Governance view – Renderer state**
+
+This part displays the Subject feature definition data, with two main
+parts: Action definition and Classifier definition.
+
+Clicking on the down/right arrow in the circle expands/hides the data of
+the corresponding container or list. Next to a list node, the names of
+the list’s elements are displayed; one is always selected, and that
+element’s data is shown (blue line under the name).
+
+Clicking on the names of child nodes selects the desired node and
+displays that node’s data.
+
+.. figure:: ./images/groupbasedpolicy/ui-4-governanceview-renderer.png
+   :alt: Renderer state
+
+   Renderer state
+
+Policy expression view
+~~~~~~~~~~~~~~~~~~~~~~
+
+The left part of this view contains the topology of the currently
+selected elements, with buttons for switching between types of topology
+at the bottom.
+
+The right column of this view contains four parts. At the top of this
+column, breadcrumbs display the current position in the application.
+
+Below the breadcrumbs is a select box with a list of tenants. The middle
+part contains a navigation menu, which allows switching to the desired
+section for performing CRUD operations.
+
+At the bottom is a quick navigation menu with an Access Model Wizard
+button, which displays the Wizard view; a Home button, which switches
+the application to the Basic view; and occasionally a Back button, which
+switches the application to the upper section.
+
+**Policy expression - Navigation menu**
+
+To open Policy expression, select Policy expression from the GBP Home
+screen.
+
+At the top of the navigation box you can select a tenant from the
+tenants list to activate the features associated with the selected
+tenant.
+
+In the right menu, by default, the Policy menu section is expanded.
+Its subitems are modules for CRUD (creating, reading, updating and
+deleting) operations on tenants, EndpointGroups, contracts, and L2/L3
+objects.
+
+-  Section Renderers contains CRUD forms for Classifiers and Actions.
+
+-  Section Endpoints contains CRUD forms for Endpoint and L3 prefix
+   endpoint.
+
+.. figure:: ./images/groupbasedpolicy/ui-5-expresssion-1.png
+   :alt: Navigation menu
+
+   Navigation menu
+
+.. figure:: ./images/groupbasedpolicy/ui-5-expresssion-2.png
+   :alt: CRUD operations
+
+   CRUD operations
+
+**Policy expression - Types of topology**
+
+There are three different types of topology:
+
+-  Configured topology - displays EndpointGroups and the contracts
+   between them from the CONFIG datastore
+
+-  Operational topology - displays the same information, but based on
+   operational data
+
+-  L2/L3 - displays relationships between L3Contexts, L2 Bridge domains,
+   L2 Flood domains and Subnets.
+
+.. figure:: ./images/groupbasedpolicy/ui-5-expresssion-3.png
+   :alt: L2/L3 Topology
+
+   L2/L3 Topology
+
+.. figure:: ./images/groupbasedpolicy/ui-5-expresssion-4.png
+   :alt: Config Topology
+
+   Config Topology
+
+**Policy expression - CRUD operations**
+
+This part describes the basic flows for viewing, adding, editing and
+deleting system elements like tenants, EndpointGroups etc.
+
+Tenants
+~~~~~~~
+
+To edit tenant objects, click the Tenants button in the right menu. You
+will see the CRUD form containing the tenants list and control buttons.
+
+To add a new tenant, click the Add button. This will display the form
+for adding a new tenant. After filling in the tenant attributes Name and
+Description, click the Save button. Saving of any object can be
+performed only if all the object attributes are filled in correctly. If
+some attribute doesn’t have a correct value, an exclamation mark with a
+mouse-over tooltip will be displayed next to the label for the
+attribute. After saving the tenant, the form will be closed and the
+tenants list will be set to its default value.
+
+To view an existing tenant, select the tenant from the select box
+Tenants list. The view form is read-only and can be closed by clicking
+the cross mark in the top right of the form.
+
+To edit the selected tenant, click the Edit button, which will display
+the edit form for the selected tenant. After editing the Name and
+Description of the selected tenant, click the Save button. After saving
+the tenant, the edit form will be closed and the tenants list will be
+set to its default value.
+
+To delete a tenant, select it from the Tenants list and click the
+Delete button.
+
+To return to the Policy expression view, click the Back button at the
+bottom of the window.
+
+**EndpointGroups**
+
+For managing EndpointGroups (EPG) the tenant from the top Tenants list
+must be selected.
+
+To add a new EPG, click the Add button and, after filling in the
+required attributes, click the Save button. After adding the EPG, you
+can edit it and assign a Consumer named selector or Provider named
+selector to it.
+
+To edit EPG click the Edit button after selecting the EPG from Group
+list.
+
+To add a new Consumer named selector (CNS), click the Add button next to
+the Consumer named selectors list. While editing the CNS, you can set
+one or more contracts for it by pressing the Plus button and selecting
+the contract from the Contracts list. To remove a contract, click the
+cross mark next to it. An added CNS can be viewed, edited or deleted by
+selecting it from the Consumer named selectors list and clicking the
+Edit and Delete buttons, as with EPGs or tenants.
+
+To add a new Provider named selector (PNS), click the Add button next to
+the Provider named selectors list. While editing the PNS, you can set
+one or more contracts for it by pressing the Plus button and selecting
+the contract from the Contracts list. To remove a contract, click the
+cross mark next to it. An added PNS can be viewed, edited or deleted by
+selecting it from the Provider named selectors list and clicking the
+Edit and Delete buttons, as with EPGs or tenants.
+
+To delete an EPG, CNS or PNS, select it in the select box and click the
+Delete button next to the select box.
+
+**Contracts**
+
+For managing contracts the tenant from the top Tenants list must be
+selected.
+
+To add a new Contract, click the Add button and, after filling in the
+required fields, click the Save button.
+
+After adding the Contract, the user can edit it by selecting it in the
+Contracts list and clicking the Edit button.
+
+To add a new Clause, click the Add button next to the Clause list while
+editing the contract. While editing the Clause, after selecting it from
+the Clause list, the user can assign clause subjects by clicking the
+Plus button next to the Clause subjects label. Adding and editing
+actions must be submitted by pressing the Save button. Subjects can be
+managed with a CRUD form like the one for the Clause list.
+
+**L2/L3**
+
+For managing L2/L3 the tenant from the top Tenants list must be
+selected.
+
+To add an L3 Context, click the Add button next to the L3 Context list,
+which will display the form for adding a new L3 Context. After filling
+in the L3 Context attributes, click the Save button. After saving the L3
+Context, the form will be closed and the L3 Context list will be set to
+its default value.
+
+To view an existing L3 Context, select the L3 Context from the select
+box L3 Context list. The view form is read-only and can be closed by
+clicking cross mark in the top right of the form.
+
+To edit the selected L3 Context, click the Edit button, which will
+display the edit form for the selected L3 Context. After editing, click
+the Save button to save the selected L3 Context. After saving, the edit
+form will be closed and the L3 Context list will be set to its default
+value.
+
+To delete L3 Context, select it from the L3 Context list and click
+Delete button.
+
+To add an L2 Bridge Domain, click the Add button next to the L2 Bridge
+Domain list. This will display the form for adding a new L2 Bridge
+Domain. After filling in the L2 Bridge Domain attributes, click the Save
+button. After saving the L2 Bridge Domain, the form will be closed and
+the L2 Bridge Domain list will be set to its default value.
+
+To view an existing L2 Bridge Domain, select the L2 Bridge Domain from
+the select box L2 Bridge Domain list. The view form is read-only and can
+be closed by clicking cross mark in the top right of the form.
+
+To edit the selected L2 Bridge Domain, click the Edit button, which
+will display the edit form for the selected L2 Bridge Domain. After
+editing, click the Save button to save the selected L2 Bridge Domain.
+After saving, the edit form will be closed and the L2 Bridge Domain
+list will be set to its default value.
+
+To delete L2 Bridge Domain select it from the L2 Bridge Domain list and
+click Delete button.
+
+To add an L2 Flood Domain, click the Add button next to the L2 Flood
+Domain list. This will display the form for adding a new L2 Flood
+Domain. After filling in the L2 Flood Domain attributes, click the Save
+button. After saving the L2 Flood Domain, the form will be closed and
+the L2 Flood Domain list will be set to its default value.
+
+To view an existing L2 Flood Domain, select the L2 Flood Domain from the
+select box L2 Flood Domain list. The view form is read-only and can be
+closed by clicking the cross mark in the top right of the form.
+
+To edit the selected L2 Flood Domain, click the Edit button, which will
+display the edit form for the selected L2 Flood Domain. After editing,
+click the Save button to save the selected L2 Flood Domain. After
+saving, the edit form will be closed and the L2 Flood Domain list will
+be set to its default value.
+
+To delete an L2 Flood Domain, select it from the L2 Flood Domain list
+and click the Delete button.
+
+To add a Subnet, click the Add button next to the Subnet list. This will
+display the form for adding a new Subnet. After filling in the Subnet
+attributes, click the Save button. After saving the Subnet, the form
+will be closed and the Subnet list will be set to its default value.
+
+To view an existing Subnet, select the Subnet from the select box Subnet
+list. The view form is read-only and can be closed by clicking cross
+mark in the top right of the form.
+
+To edit the selected Subnet, click the Edit button, which will display
+the edit form for the selected Subnet. After editing, click the Save
+button to save the selected Subnet. After saving, the edit form will be
+closed and the Subnet list will be set to its default value.
+
+To delete Subnet select it from the Subnet list and click Delete button.
+
+**Classifiers**
+
+To add a Classifier, click the Add button next to the Classifier list.
+This will display the form for adding a new Classifier. After filling in
+the Classifier attributes, click the Save button. After saving the
+Classifier, the form will be closed and the Classifier list will be set
+to its default value.
+
+To view an existing Classifier, select the Classifier from the select
+box Classifier list. The view form is read-only and can be closed by
+clicking cross mark in the top right of the form.
+
+To edit the selected Classifier, click the Edit button, which will
+display the edit form for the selected Classifier. After editing, click
+the Save button to save the selected Classifier. After saving, the edit
+form will be closed and the Classifier list will be set to its default
+value.
+
+To delete Classifier select it from the Classifier list and click Delete
+button.
+
+**Actions**
+
+To add an Action, click the Add button next to the Action list. This
+will display the form for adding a new Action. After filling in the
+Action attributes, click the Save button. After saving the Action, the
+form will be closed and the Action list will be set to its default
+value.
+
+To view an existing Action, select the Action from the select box Action
+list. The view form is read-only and can be closed by clicking cross
+mark in the top right of the form.
+
+To edit the selected Action, click the Edit button, which will display
+the edit form for the selected Action. After editing, click the Save
+button to save the selected Action. After saving, the edit form will be
+closed and the Action list will be set to its default value.
+
+To delete Action select it from the Action list and click Delete button.
+
+**Endpoint**
+
+To add an Endpoint, click the Add button next to the Endpoint list. This
+will display the form for adding a new Endpoint. To add an EndpointGroup
+assignment, click the Plus button next to the EndpointGroups label. To
+add a Condition, click the Plus button next to the Condition label. To
+add an L3 Address, click the Plus button next to the L3 Addresses label.
+After filling in the Endpoint attributes, click the Save button. After
+saving the Endpoint, the form will be closed and the Endpoint list will
+be set to its default value.
+
+To view an existing Endpoint, select the Endpoint from the select box
+Endpoint list. The view form is read-only and can be closed by clicking
+the cross mark in the top right of the form.
+
+To edit the selected Endpoint, click the Edit button, which will
+display the edit form for the selected Endpoint. After editing, click
+the Save button to save the selected Endpoint. After saving, the edit
+form will be closed and the Endpoint list will be set to its default
+value.
+
+To delete Endpoint select it from the Endpoint list and click Delete
+button.
+
+**L3 prefix endpoint**
+
+To add an L3 prefix endpoint, click the Add button next to the L3 prefix
+endpoint list. This will display the form for adding a new endpoint. To
+add an EndpointGroup assignment, click the Plus button next to the
+EndpointGroups label. To add a Condition, click the Plus button next to
+the Condition label. To add an L2 gateway, click the Plus button next to
+the L2 gateways label. To add an L3 gateway, click the Plus button next
+to the L3 gateways label. After filling in the L3 prefix endpoint
+attributes, click the Save button. After saving, the form will be closed
+and the Endpoint list will be set to its default value.
+
+To view an existing L3 prefix endpoint, select the Endpoint from the
+select box L3 prefix endpoint list. The view form is read-only and can
+be closed by clicking cross mark in the top right of the form.
+
+To edit the selected L3 prefix endpoint, click the Edit button, which
+will display the edit form for the selected L3 prefix endpoint. After
+editing, click the Save button to save the selected L3 prefix endpoint.
+After saving, the edit form will be closed and the Endpoint list will be
+set to its default value.
+
+To delete an L3 prefix endpoint, select it from the L3 prefix endpoint
+list and click the Delete button.
+
+Wizard
+~~~~~~
+
+The Wizard provides a quick method for sending to the controller the
+basic data necessary for basic usage of the GBP application. It is
+useful when there is no data in the controller yet. The first tab
+contains a form for creating a tenant. The second tab is for CRUD
+operations on contracts and their sub-elements such as subjects, rules,
+clauses, action refs and classifier refs. The last tab is for CRUD
+operations on EndpointGroups and their CNS and PNS. The created data
+structure can be sent by clicking the Submit button.
+
+.. figure:: ./images/groupbasedpolicy/ui-6-wizard.png
+   :alt: Wizard
+
+   Wizard
+
+Using the GBP API
+-----------------
+
+Please see:
+
+-  `Using the GBP OpenFlow Overlay (OfOverlay) renderer <#OfOverlay>`__
+
+-  `Policy Resolution <#policyresolution>`__
+
+-  `Forwarding Model <#forwarding>`__
+
+-  `the **GBP** demo and development environments for tips <#demo>`__
+
+It is recommended to use either:
+
+-  `Neutron mapper <#Neutron>`__
+
+-  `the UX <#UX>`__
+
+If the REST API must be used, and the above resources are not
+sufficient:
+
+-  feature:install odl-dlux-yangui
+
+-  browse to:
+   `http://<odl-controller>:8181/index.html <http://<odl-controller>:8181/index.html>`__
+   and select YangUI from the left menu.
+
+From there you can explore the various **GBP** REST options.
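+
+As a starting point, a GET of the tenant configuration over RESTCONF can
+be sketched in Python. The ``policy:tenants`` path, the port, and the
+default ``admin:admin`` credentials are assumptions based on common
+OpenDaylight defaults; verify them against your controller:
+
```python
# Hedged sketch: building (not sending) a RESTCONF request for the GBP
# tenant configuration. URL path and credentials are assumed defaults.
import base64
import urllib.request

controller = "localhost"  # assumed controller host
url = "http://%s:8181/restconf/config/policy:tenants" % controller
token = base64.b64encode(b"admin:admin").decode()  # default ODL credentials
request = urllib.request.Request(url, headers={
    "Accept": "application/json",
    "Authorization": "Basic " + token,
})
# urllib.request.urlopen(request) would return the tenants as JSON.
```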
+
+Using OpenStack with GBP
+------------------------
+
+Overview
+~~~~~~~~
+
+This section is for Application Developers and Network Administrators
+who are looking to integrate Group Based Policy with OpenStack.
+
+To enable the **GBP** Neutron Mapper feature, at the Karaf console:
+
+::
+
+    feature:install odl-groupbasedpolicy-neutronmapper
+
+Neutron Mapper has the following dependencies that are automatically
+loaded:
+
+::
+
+    odl-neutron-service
+
+Neutron Northbound, which implements the REST API used by OpenStack.
+
+::
+
+    odl-groupbasedpolicy-base
+
+The base **GBP** feature set, such as policy resolution, the data model, etc.
+
+::
+
+    odl-groupbasedpolicy-ofoverlay
+
+REST calls from OpenStack Neutron are handled by the Neutron Northbound
+project.
+
+**GBP** provides the implementation of the `Neutron V2.0
+API <http://developer.openstack.org/api-ref-networking-v2.html>`__.
+
+Features
+~~~~~~~~
+
+List of supported Neutron entities:
+
+-  Port
+
+-  Network
+
+   -  Standard Internal
+
+   -  External provider L2/L3 network
+
+-  Subnet
+
+-  Security-groups
+
+-  Routers
+
+   -  Distributed functionality with local routing per compute
+
+   -  External gateway access per compute node (dedicated port required)
+
+   -  Multiple routers per tenant
+
+-  FloatingIP NAT
+
+-  IPv4/IPv6 support
+
+The mapping of Neutron entities to **GBP** entities is as follows:
+
+**Neutron Port**
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-port.png
+   :alt: Neutron Port
+
+   Neutron Port
+
+The Neutron port is mapped to an endpoint.
+
+The current implementation supports one IP address per Neutron port.
+
+An endpoint and L3-endpoint belong to multiple EndpointGroups if the
+Neutron port is in multiple Neutron Security Groups.
+
+The key for the endpoint is the L2-bridge-domain, obtained as the
+parent of the L2-flood-domain representing the Neutron network. The MAC
+address comes from the Neutron port. An L3-endpoint is created based on
+the L3-context (the parent of the L2-bridge-domain) and the IP address
+of the Neutron port.
+
+**Neutron Network**
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-network.png
+   :alt: Neutron Network
+
+   Neutron Network
+
+A Neutron network has the following characteristics:
+
+-  defines a broadcast domain
+
+-  defines an L2 transmission domain
+
+-  defines an L2 namespace.
+
+To represent this, a Neutron Network is mapped to multiple **GBP**
+entities. The first mapping is to an L2 flood-domain to reflect that the
+Neutron network is one flooding or broadcast domain. An L2-bridge-domain
+is then associated as the parent of L2 flood-domain. This reflects both
+the L2 transmission domain as well as the L2 addressing namespace.
+
+The third mapping is to L3-context, which represents the distinct L3
+address space. The L3-context is the parent of L2-bridge-domain.
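+
+The resulting parent chain can be sketched as a small data structure
+(the identifiers here are hypothetical):
+
```python
# Hypothetical sketch of the three GBP entities a single Neutron network
# maps to, linked by their parent references:
# l3-context > l2-bridge-domain > l2-flood-domain.
neutron_network_mapping = {
    "l2-flood-domain": {"id": "l2fd-net1", "parent": "l2bd-net1"},
    "l2-bridge-domain": {"id": "l2bd-net1", "parent": "l3ctx-net1"},
    "l3-context": {"id": "l3ctx-net1", "parent": None},  # root of the chain
}

def parent_chain(mapping, start):
    """Walk parent references from an entity id up to the root."""
    ids = {v["id"]: v for v in mapping.values()}
    chain, node = [start], ids[start]
    while node["parent"] is not None:
        chain.append(node["parent"])
        node = ids[node["parent"]]
    return chain
```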
+
+**Neutron Subnet**
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet.png
+   :alt: Neutron Subnet
+
+   Neutron Subnet
+
+Neutron subnet is associated with a Neutron network. The Neutron subnet
+is mapped to a **GBP** subnet where the parent of the subnet is
+L2-flood-domain representing the Neutron network.
+
+**Neutron Security Group**
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-securitygroup.png
+   :alt: Neutron Security Group and Rules
+
+   Neutron Security Group and Rules
+
+**GBP** entity representing Neutron security-group is EndpointGroup.
+
+**Infrastructure EndpointGroups**
+
+Neutron-mapper automatically creates EndpointGroups to manage key
+infrastructure items such as:
+
+-  DHCP EndpointGroup - contains endpoints representing Neutron DHCP
+   ports
+
+-  Router EndpointGroup - contains endpoints representing Neutron router
+   interfaces
+
+-  External EndpointGroup - holds L3-endpoints representing Neutron
+   router gateway ports, also associated with FloatingIP ports.
+
+**Neutron Security Group Rules**
+
+This is the most involved amongst all the mappings because Neutron
+security-group-rules are mapped to contracts with clauses, subjects,
+rules, action-refs, classifier-refs, etc. Contracts are used between
+EndpointGroups representing Neutron Security Groups. For simplicity,
+note that a Neutron security-group-rule is similar to a **GBP** rule
+containing:
+
+-  classifier with direction
+
+-  action of **allow**.
+
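+As a hedged illustration (the field names below are invented for the
+sketch, not the actual GBP YANG model or Neutron Mapper code), one
+security-group rule roughly corresponds to:
+
```python
# Illustrative sketch only: maps one Neutron security-group rule to a
# GBP-style rule with a classifier (direction/protocol/ports) plus an
# "allow" action. Field names are invented, not the actual YANG model.
def sg_rule_to_gbp_rule(sg_rule):
    return {
        "classifier-refs": [{
            "direction": sg_rule["direction"],        # "ingress"/"egress"
            "protocol": sg_rule.get("protocol"),
            "port-range": (sg_rule.get("port_range_min"),
                           sg_rule.get("port_range_max")),
        }],
        "action-refs": [{"name": "allow"}],           # always allow
    }

rule = sg_rule_to_gbp_rule({
    "direction": "ingress", "protocol": "tcp",
    "port_range_min": 80, "port_range_max": 80,
})
```
+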
+**Neutron Routers**
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-router.png
+   :alt: Neutron Router
+
+   Neutron Router
+
+A Neutron router is represented as an L3-context. This treats a router
+as a Layer 3 namespace, and hence every network attached to it is a part
+of that Layer 3 namespace.
+
+This allows for multiple routers per tenant with complete isolation.
+
+The mapping of the router to an endpoint represents the router’s
+interface or gateway port.
+
+The mapping to an EndpointGroup represents the internal infrastructure
+EndpointGroups created by the **GBP** Neutron Mapper.
+
+When a Neutron router interface is attached to a network/subnet, that
+network/subnet and its associated endpoints or Neutron Ports are
+seamlessly added to the namespace.
+
+**Neutron FloatingIP**
+
+When associated with a Neutron Port, this leverages the
+`OfOverlay <#OfOverlay>`__ renderer’s NAT capabilities.
+
+A dedicated *external* interface on each Nova compute host allows for
+distributed external access. Each Nova instance associated with a
+FloatingIP address can access the external network directly without
+having to route via the Neutron controller, or having to enable any form
+of Neutron distributed routing functionality.
+
+Assuming the gateway provisioned in the Neutron Subnet command for the
+external network is reachable, the combination of **GBP** Neutron Mapper
+and `OfOverlay renderer <#OfOverlay>`__ will automatically ARP for this
+default gateway, requiring no user intervention.
+
+**Troubleshooting within GBP**
+
+The logging level for the mapping functionality can be set for the
+package org.opendaylight.groupbasedpolicy.neutron.mapper. An example of
+enabling the TRACE logging level on the Karaf console:
+
+::
+
+    log:set TRACE org.opendaylight.groupbasedpolicy.neutron.mapper
+
+**Neutron mapping example**
+
+The creation of a Neutron network, subnet and port can serve as a
+mapping example. When a Neutron network is created, 3 **GBP** entities
+are created: l2-flood-domain, l2-bridge-domain and l3-context.
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-network-example.png
+   :alt: Neutron network mapping
+
+   Neutron network mapping
+
+After a subnet is created in the network, the mapping looks like this.
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet-example.png
+   :alt: Neutron subnet mapping
+
+   Neutron subnet mapping
+
+If a Neutron port is created in the subnet, an endpoint and an
+l3-endpoint are created. The endpoint has a key composed of the
+l2-bridge-domain and the MAC address from the Neutron port. The key of
+the l3-endpoint is composed of the l3-context and the IP address. The
+network containment of both the endpoint and the l3-endpoint points to
+the subnet.
+
+.. figure:: ./images/groupbasedpolicy/neutronmapper-gbp-mapping-port-example.png
+   :alt: Neutron port mapping
+
+   Neutron port mapping
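+
+Sketching just the keys (the values are placeholders, not literal
+output), the endpoint and l3-endpoint created for a Neutron port look
+like:
+
+::
+
+    endpoint key:    { "l2-context": "<l2-bridge-domain-id>",
+                       "mac-address": "<mac-address-from-port>" }
+    l3-endpoint key: { "l3-context": "<l3-context-id>",
+                       "ip-address": "<ip-address-from-port>" }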
+
+Configuring GBP Neutron
+~~~~~~~~~~~~~~~~~~~~~~~
+
+No intervention past initial OpenStack setup is required by the user.
+
+More information about configuration can be found in our DevStack demo
+environment on the `**GBP**
+wiki <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`__.
+
+Administering or Managing GBP Neutron
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For consistency's sake, all provisioning should be performed via the
+Neutron API (CLI or Horizon).
+
+The mapped policies can be augmented via the **GBP** `UX <#UX>`__, to:
+
+-  Enable `Service Function Chaining <#SFC>`__
+
+-  Add endpoints from outside of Neutron i.e. VMs/containers not
+   provisioned in OpenStack
+
+-  Augment policies/contracts derived from Security Group Rules
+
+-  Overlay additional contracts or groupings
+
+Tutorials
+~~~~~~~~~
+
+A DevStack demo environment can be found on the `**GBP**
+wiki <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`__.
+
+GBP Renderer manager
+--------------------
+
+Overview
+~~~~~~~~
+
+The GBP Renderer manager is an integral part of the **GBP** base module.
+It dispatches information about endpoints'
+policy configuration to specific device renderer
+by writing a renderer policy configuration into the
+registered renderer's policy store.
+
+Installing and Pre-requisites
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Renderer manager is integrated into the GBP base module,
+so no additional installation is required.
+
+Architecture
+~~~~~~~~~~~~
+
+Renderer manager gets data notifications about:
+
+- Endpoints (base-endpoint.yang)
+
+- EndpointLocations (base-endpoint.yang)
+
+- ResolvedPolicies (resolved-policy.yang)
+
+- Forwarding (forwarding.yang)
+
+Based on the data from these notifications, it creates a configuration
+task for specific renderers by writing a renderer policy configuration
+into the registered renderer's policy store.
+The configuration is stored in the CONF data store as Renderers (renderer.yang).
+
+The configuration is signed with a version number, which is incremented with
+every change. All renderers are supposed to be on the same version. The
+renderer manager waits for all renderers to respond with a version update in
+the OPER data store. After the version of every renderer in the OPER data
+store has the same value as the one in the CONF data store, the renderer
+manager moves to the next configuration with an incremented version.
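+
+For example, the handshake can be observed by comparing the renderer
+versions in both data stores (the exact RESTCONF paths are indicative
+only and depend on the renderer.yang revision in use):
+
+::
+
+    GET http://{{controllerIp}}:8181/restconf/config/renderer:renderers
+    GET http://{{controllerIp}}:8181/restconf/operational/renderer:renderers
+
+Only when every renderer's version in the operational result matches the
+version in the config result does the renderer manager dispatch the next
+configuration.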
+
+GBP Location manager
+--------------------
+
+Overview
+~~~~~~~~
+
+Location manager monitors information about Endpoint Location providers
+(see endpoint-location-provider.yang) and manages Endpoint locations in OPER data store accordingly.
+
+Installing and Pre-requisites
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Location manager is integrated into the GBP base module,
+so no additional installation is required.
+
+Architecture
+~~~~~~~~~~~~
+
+The endpoint-locations container in OPER data store (see base-endpoint.yang)
+contains two lists for two types of EP location,
+namely address-endpoint-location and containment-endpoint-location.
+LocationResolver is a class that processes Location providers in CONF data store
+and puts location information to OPER data store.
+
+When a new Location provider is created in the CONF data store, its Address EP
+locations are processed first, and their info is stored locally in accordance
+with the processed Location provider's priority. Then a location of type
+"absolute" with the highest priority is selected for an EP, and is put in the
+OPER data store. If Address EP locations contain locations of type "relative",
+those are put to the OPER data store.
+
+If the current Location provider contains Containment EP locations of type
+"relative", then those are put to the OPER data store.
+
+Similarly, when a Location provider is deleted, information of its locations
+is removed from the OPER data store.
+
+Using the GBP OpenFlow Overlay (OfOverlay) renderer
+---------------------------------------------------
+
+Overview
+~~~~~~~~
+
+The OpenFlow Overlay (OfOverlay) feature enables the OpenFlow Overlay
+renderer, which creates a network virtualization solution across nodes
+that host Open vSwitch software switches.
+
+Installing and Pre-requisites
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+From the Karaf console in OpenDaylight:
+
+::
+
+    feature:install odl-groupbasedpolicy-ofoverlay
+
+This renderer is designed to work with Open vSwitch (OVS) 2.1+ (although
+2.3 is strongly recommended) and OpenFlow 1.3.
+
+When used in conjunction with the `Neutron Mapper feature <#Neutron>`__
+no extra OfOverlay specific setup is required.
+
+When this feature is loaded "standalone", the user is required to
+configure the infrastructure, such as:
+
+-  instantiating OVS bridges,
+
+-  attaching hosts to the bridges,
+
+-  and creating the VXLAN/VXLAN-GPE tunnel ports on the bridges.
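+
+For a standalone setup, this infrastructure can be provisioned with
+standard OVS tooling. A minimal sketch follows; the bridge name, port
+names and controller port are examples only, not values mandated by
+**GBP**:
+
+::
+
+    # create a bridge and point it at the OpenDaylight controller
+    ovs-vsctl add-br sw1
+    ovs-vsctl set-controller sw1 tcp:<controller_ip>:6653
+    # attach a host-facing port
+    ovs-vsctl add-port sw1 veth-h35-2
+    # create a VXLAN tunnel port, with the remote endpoint chosen per flow
+    ovs-vsctl add-port sw1 vxlan-0 -- set Interface vxlan-0 type=vxlan \
+        options:remote_ip=flow options:key=flow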
+
+The **GBP** OfOverlay renderer also supports a table offset option, to
+offset the pipeline post-table 0. The value of table offset is stored in
+the config datastore and it may be rewritten at runtime.
+
+::
+
+    PUT http://{{controllerIp}}:8181/restconf/config/ofoverlay:of-overlay-config
+    {
+        "of-overlay-config": {
+            "gbp-ofoverlay-table-offset": 6
+        }
+    }
+
+The default value is set by changing
+``<gbp-ofoverlay-table-offset>0</gbp-ofoverlay-table-offset>`` in the file:
+
+::
+
+    distribution-karaf/target/assembly/etc/opendaylight/karaf/15-groupbasedpolicy-ofoverlay.xml
+
+To avoid overwriting runtime changes, the default value is used only
+when the OfOverlay renderer starts and no other value has been written
+before.
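+
+The value currently active in the config datastore can be read back with
+a GET on the same path:
+
+::
+
+    GET http://{{controllerIp}}:8181/restconf/config/ofoverlay:of-overlay-config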
+
+OpenFlow Overlay Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+These are the primary components of **GBP**. The OfOverlay components
+are highlighted in red.
+
+.. figure:: ./images/groupbasedpolicy/ofoverlay-1-components.png
+   :alt: OfOverlay within **GBP**
+
+   OfOverlay within **GBP**
+
+In terms of the inner components of the **GBP** OfOverlay renderer:
+
+.. figure:: ./images/groupbasedpolicy/ofoverlay-2-components.png
+   :alt: OfOverlay expanded view:
+
+   OfOverlay expanded view:
+
+**OfOverlay Renderer**
+
+Launches components below:
+
+**Policy Resolver**
+
+Policy resolution is completely domain independent, and the OfOverlay
+leverages the processed policy information internally. See the `Policy
+Resolution process <#policyresolution>`__.
+
+It listens to inputs to the *Tenants* configuration datastore, validates
+tenant input, then writes this to the Tenants operational datastore.
+
+From there an internal notification is generated to the PolicyManager.
+
+In the next release, this will be moving to a non-renderer specific
+location.
+
+**Endpoint Manager**
+
+The endpoint repository operates in **orchestrated** mode. This means
+the user is responsible for the provisioning of endpoints via:
+
+-  `UX/GUI <#UX>`__
+
+-  REST API
+
+    **Note**
+
+    When using the `Neutron mapper <#Neutron>`__ feature, everything is
+    managed transparently via Neutron.
+
+The Endpoint Manager is responsible for listening to Endpoint repository
+updates and notifying the Switch Manager when a valid Endpoint has been
+registered.
+
+It also supplies utility functions to the flow pipeline process.
+
+**Switch Manager**
+
+The Switch Manager is purely a state manager.
+
+Switches are in one of 3 states:
+
+-  DISCONNECTED
+
+-  PREPARING
+
+-  READY
+
+**Ready** is denoted by a connected switch:
+
+-  having a tunnel interface
+
+-  having at least one endpoint connected.
+
+In this way **GBP** does not write to switches it has no business writing to.
+
+**Preparing** simply means the switch has a controller connection but is
+missing one of the above *complete and necessary* conditions.
+
+**Disconnected** means a previously connected switch is no longer
+present in the Inventory operational datastore.
+
+.. figure:: ./images/groupbasedpolicy/ofoverlay-3-flowpipeline.png
+   :alt: OfOverlay Flow Pipeline
+
+   OfOverlay Flow Pipeline
+
+The OfOverlay leverages Nicira registers as follows:
+
+-  REG0 = Source EndpointGroup + Tenant ordinal
+
+-  REG1 = Source Conditions + Tenant ordinal
+
+-  REG2 = Destination EndpointGroup + Tenant ordinal
+
+-  REG3 = Destination Conditions + Tenant ordinal
+
+-  REG4 = Bridge Domain + Tenant ordinal
+
+-  REG5 = Flood Domain + Tenant ordinal
+
+-  REG6 = Layer 3 Context + Tenant ordinal
+
+**Port Security**
+
+Table 0 of the OpenFlow pipeline. It is responsible for ensuring that
+only valid connections can send packets into the pipeline:
+
+::
+
+    cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
+    cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
+    cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
+    cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
+    cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
+    cookie=0x0, <snip> , priority=112,ipv6 actions=drop
+    cookie=0x0, <snip> , priority=111, ip actions=drop
+    cookie=0x0, <snip> , priority=110,arp actions=drop
+    cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
+    cookie=0x0, <snip> , priority=1 actions=drop
+
+Ingress from tunnel interface, go to Table *Source Mapper*:
+
+::
+
+    cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
+
+Ingress from outside, go to Table *Ingress NAT Mapper*:
+
+::
+
+    cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
+
+ARP from Endpoint, go to Table *Source Mapper*:
+
+::
+
+    cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
+
+IPv4 from Endpoint, go to Table *Source Mapper*:
+
+::
+
+    cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
+
+DHCP DORA from Endpoint, go to Table *Source Mapper*:
+
+::
+
+    cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
+
+A series of DROP flows, with priorities set to capture any non-specific
+traffic that should have matched above:
+
+::
+
+    cookie=0x0, <snip> , priority=112,ipv6 actions=drop
+    cookie=0x0, <snip> , priority=111, ip actions=drop
+    cookie=0x0, <snip> , priority=110,arp actions=drop
+
+"L2" catch all traffic not identified above:
+
+::
+
+    cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
+
+Drop Flow:
+
+::
+
+    cookie=0x0, <snip> , priority=1 actions=drop
+
+**Ingress NAT Mapper**
+
+Table `*offset* <#offset>`__\ +1.
+
+ARP responder for external NAT address:
+
+::
+
+    cookie=0x0, <snip> , priority=150,arp,arp_tpa=192.168.111.51,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:58:c3:dd->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e58c3dd->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xc0a86f33->NXM_OF_ARP_SPA[],IN_PORT
+
+Translates from Outside to Inside and performs the same functions as the
+Source Mapper.
+
+::
+
+    cookie=0x0, <snip> , priority=100,ip,nw_dst=192.168.111.51 actions=set_field:10.1.1.2->ip_dst,set_field:fa:16:3e:58:c3:dd->eth_dst,load:0x2->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0x3->NXM_NX_TUN_ID[0..31],goto_table:3
+
+**Source Mapper**
+
+Table `*offset* <#offset>`__\ +2.
+
+Determines, based on characteristics from the ingress port, which:
+
+-  EndpointGroup(s) it belongs to
+
+-  Forwarding context
+
+-  Tunnel VNID ordinal
+
+Establishes tunnels at valid destination switches for ingress.
+
+Ingress Tunnel established at remote node with VNID Ordinal that maps to
+Source EPG, Forwarding Context etc:
+
+::
+
+    cookie=0x0, <snip>, priority=150,tun_id=0xd,in_port=3 actions=load:0xc->NXM_NX_REG0[],load:0xffffff->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],goto_table:3
+
+Maps endpoint to Source EPG, Forwarding Context based on ingress port,
+and MAC:
+
+::
+
+    cookie=0x0, <snip> , priority=100,in_port=5,dl_src=fa:16:3e:b4:b4:b1 actions=load:0xc->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0xd->NXM_NX_TUN_ID[0..31],goto_table:3
+
+Generic drop:
+
+::
+
+    cookie=0x0, duration=197.622s, table=2, n_packets=0, n_bytes=0, priority=1 actions=drop
+
+**Destination Mapper**
+
+Table `*offset* <#offset>`__\ +3.
+
+Determines, based on characteristics of the endpoint:
+
+-  EndpointGroup(s) it belongs to
+
+-  Forwarding context
+
+-  Tunnel Destination value
+
+Manages routing based on valid ingress nodes ARP’ing for their default
+gateway, and matches on either gateway MAC or destination endpoint MAC.
+
+ARP for default gateway for the 10.1.1.0/24 subnet:
+
+::
+
+    cookie=0x0, <snip> , priority=150,arp,reg6=0x7,arp_tpa=10.1.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:28:4c:82->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e284c82->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xa010101->NXM_OF_ARP_SPA[],IN_PORT
+
+Broadcast traffic destined for GroupTable:
+
+::
+
+    cookie=0x0, <snip> , priority=140,reg5=0x5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=load:0x5->NXM_NX_TUN_ID[0..31],group:5
+
+Layer 3 destination matching flows, where priority=100+masklength. Since
+**GBP** now supports L3Prefix endpoints, we can set default routes etc.:
+
+::
+
+    cookie=0x0, <snip>, priority=132,ip,reg6=0x7,dl_dst=fa:16:3e:b4:b4:b1,nw_dst=10.1.1.3 actions=load:0xc->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x5->NXM_NX_REG7[],set_field:fa:16:3e:b4:b4:b1->eth_dst,dec_ttl,goto_table:4
+
+Layer 2 destination matching flows, designed to be caught only after the
+last IP flow (the lowest priority IP flow is 100):
+
+::
+
+    cookie=0x0, duration=323.203s, table=3, n_packets=4, n_bytes=168, priority=50,reg4=0x4,dl_dst=fa:16:3e:58:c3:dd actions=load:0x2->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x2->NXM_NX_REG7[],goto_table:4
+
+General drop flow:
+
+::
+
+    cookie=0x0, duration=323.207s, table=3, n_packets=6, n_bytes=588, priority=1 actions=drop
+
+**Policy Enforcer**
+
+Table `*offset* <#offset>`__\ +4.
+
+Once the Source and Destination EndpointGroups are assigned, policy is
+enforced based on resolved rules.
+
+In the case of `Service Function Chaining <#SFC>`__, the encapsulation
+and destination for traffic destined to a chain, is discovered and
+enforced.
+
+Policy flow, allowing IP traffic between EndpointGroups:
+
+::
+
+    cookie=0x0, <snip> , priority=64998,ip,reg0=0x8,reg1=0x1,reg2=0xc,reg3=0x1 actions=goto_table:5
+
+**Egress NAT Mapper**
+
+Table `*offset* <#offset>`__\ +5.
+
+Performs the NAT function before egressing the OVS instance to the
+underlay network.
+
+Inside to Outside NAT translation before sending to underlay:
+
+::
+
+    cookie=0x0, <snip> , priority=100,ip,reg6=0x7,nw_src=10.1.1.2 actions=set_field:192.168.111.51->ip_src,goto_table:6
+
+**External Mapper**
+
+Table `*offset* <#offset>`__\ +6.
+
+Manages post-policy enforcement for endpoint-specific destination
+effects. Specifically, it handles `Service Function Chaining <#SFC>`__,
+which is why both symmetric and asymmetric chains and distributed
+ingress/egress classification are supported.
+
+Generic allow:
+
+::
+
+    cookie=0x0, <snip>, priority=100 actions=output:NXM_NX_REG7[]
+
+Configuring OpenFlow Overlay via REST
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    **Note**
+
+    Please see the `UX <#UX>`__ section on how to configure **GBP** via
+    the GUI.
+
+**Endpoint**
+
+::
+
+    POST http://{{controllerIp}}:8181/restconf/operations/endpoint:register-endpoint
+    {
+        "input": {
+            "endpoint-group": "<epg0>",
+            "endpoint-groups" : ["<epg1>","<epg2>"],
+            "network-containment" : "<fowarding-model-context1>",
+            "l2-context": "<bridge-domain1>",
+            "mac-address": "<mac1>",
+            "l3-address": [
+                {
+                    "ip-address": "<ipaddress1>",
+                    "l3-context": "<l3_context1>"
+                }
+            ],
+            "*ofoverlay:port-name*": "<ovs port name>",
+            "tenant": "<tenant1>"
+        }
+    }
+
+    **Note**
+
+    Note the usage of "port-name" preceded by "ofoverlay". In
+    OpenDaylight, base datastore objects can be *augmented*. In **GBP**,
+    the base endpoint model has no renderer specifics, and hence can be
+    leveraged across multiple renderers.
+
+**OVS Augmentations to Inventory**
+
+::
+
+    PUT http://{{controllerIp}}:8181/restconf/config/opendaylight-inventory:nodes/
+    {
+        "opendaylight-inventory:nodes": {
+            "node": [
+                {
+                    "id": "openflow:123456",
+                    "ofoverlay:tunnel": [
+                        {
+                            "tunnel-type": "overlay:tunnel-type-vxlan",
+                            "ip": "<ip_address_of_ovs>",
+                            "port": 4789,
+                            "node-connector-id": "openflow:123456:1"
+                        }
+                    ]
+                },
+                {
+                    "id": "openflow:654321",
+                    "ofoverlay:tunnel": [
+                        {
+                            "tunnel-type": "overlay:tunnel-type-vxlan",
+                            "ip": "<ip_address_of_ovs>",
+                            "port": 4789,
+                            "node-connector-id": "openflow:654321:1"
+                        }
+                    ]
+                }
+            ]
+        }
+    }
+
+**Tenants** (see `Policy Resolution <#policyresolution>`__ and
+`Forwarding Model <#forwarding>`__ for details):
+
+::
+
+    {
+      "policy:tenant": {
+        "contract": [
+          {
+            "clause": [
+              {
+                "name": "allow-http-clause",
+                "subject-refs": [
+                  "allow-http-subject",
+                  "allow-icmp-subject"
+                ]
+              }
+            ],
+            "id": "<id>",
+            "subject": [
+              {
+                "name": "allow-http-subject",
+                "rule": [
+                  {
+                    "classifier-ref": [
+                      {
+                        "direction": "in",
+                        "name": "http-dest"
+                      },
+                      {
+                        "direction": "out",
+                        "name": "http-src"
+                      }
+                    ],
+                    "action-ref": [
+                      {
+                        "name": "allow1",
+                        "order": 0
+                      }
+                    ],
+                    "name": "allow-http-rule"
+                  }
+                ]
+              },
+              {
+                "name": "allow-icmp-subject",
+                "rule": [
+                  {
+                    "classifier-ref": [
+                      {
+                        "name": "icmp"
+                      }
+                    ],
+                    "action-ref": [
+                      {
+                        "name": "allow1",
+                        "order": 0
+                      }
+                    ],
+                    "name": "allow-icmp-rule"
+                  }
+                ]
+              }
+            ]
+          }
+        ],
+        "endpoint-group": [
+          {
+            "consumer-named-selector": [
+              {
+                "contract": [
+                  "<id>"
+                ],
+                "name": "<name>"
+              }
+            ],
+            "id": "<id>",
+            "provider-named-selector": []
+          },
+          {
+            "consumer-named-selector": [],
+            "id": "<id>",
+            "provider-named-selector": [
+              {
+                "contract": [
+                  "<id>"
+                ],
+                "name": "<name>"
+              }
+            ]
+          }
+        ],
+        "id": "<id>",
+        "l2-bridge-domain": [
+          {
+            "id": "<id>",
+            "parent": "<id>"
+          }
+        ],
+        "l2-flood-domain": [
+          {
+            "id": "<id>",
+            "parent": "<id>"
+          },
+          {
+            "id": "<id>",
+            "parent": "<id>"
+          }
+        ],
+        "l3-context": [
+          {
+            "id": "<id>"
+          }
+        ],
+        "name": "GBPPOC",
+        "subject-feature-instances": {
+          "classifier-instance": [
+            {
+              "classifier-definition-id": "<id>",
+              "name": "http-dest",
+              "parameter-value": [
+                {
+                  "int-value": "6",
+                  "name": "proto"
+                },
+                {
+                  "int-value": "80",
+                  "name": "destport"
+                }
+              ]
+            },
+            {
+              "classifier-definition-id": "<id>",
+              "name": "http-src",
+              "parameter-value": [
+                {
+                  "int-value": "6",
+                  "name": "proto"
+                },
+                {
+                  "int-value": "80",
+                  "name": "sourceport"
+                }
+              ]
+            },
+            {
+              "classifier-definition-id": "<id>",
+              "name": "icmp",
+              "parameter-value": [
+                {
+                  "int-value": "1",
+                  "name": "proto"
+                }
+              ]
+            }
+          ],
+          "action-instance": [
+            {
+              "name": "allow1",
+              "action-definition-id": "<id>"
+            }
+          ]
+        },
+        "subnet": [
+          {
+            "id": "<id>",
+            "ip-prefix": "<ip_prefix>",
+            "parent": "<id>",
+            "virtual-router-ip": "<ip address>"
+          },
+          {
+            "id": "<id>",
+            "ip-prefix": "<ip prefix>",
+            "parent": "<id>",
+            "virtual-router-ip": "<ip address>"
+          }
+        ]
+      }
+    }
+
+Tutorials
+~~~~~~~~~
+
+Comprehensive tutorials, along with a demonstration environment
+leveraging Vagrant can be found on the `**GBP**
+wiki <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`__
+
+Using the GBP eBPF IO Visor Agent renderer
+------------------------------------------
+
+Overview
+~~~~~~~~
+
+The IO Visor renderer feature enables container endpoints (e.g. Docker,
+LXC) to leverage GBP policies.
+
+The renderer interacts with an IO Visor module from the Linux Foundation
+IO Visor project.
+
+Installing and Pre-requisites
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+From the Karaf console in OpenDaylight:
+
+::
+
+    feature:install odl-groupbasedpolicy-iovisor odl-restconf
+
+Installation details, usage, and other information for the IO Visor GBP
+module can be found here: `**IO Visor** github repo for IO
+Modules <https://github.com/iovisor/iomodules>`__
+
+Using the GBP FaaS renderer
+---------------------------
+
+Overview
+~~~~~~~~
+
+The FaaS renderer feature enables leveraging the FaaS project as a GBP
+renderer.
+
+Installing and Pre-requisites
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+From the Karaf console in OpenDaylight:
+
+::
+
+    feature:install odl-groupbasedpolicy-faas
+
+More information about FaaS can be found here:
+https://wiki.opendaylight.org/view/FaaS:GBPIntegration
+
+Using Service Function Chaining (SFC) with GBP Neutron Mapper and OfOverlay
+---------------------------------------------------------------------------
+
+Overview
+~~~~~~~~
+
+Please refer to the Service Function Chaining project for specifics on
+SFC provisioning and theory.
+
+**GBP** allows for the use of a chain, by name, in policy.
+
+This takes the form of an *action* in **GBP**.
+
+Using the `**GBP** demo and development environment <#demo>`__ as an
+example:
+
+.. figure:: ./images/groupbasedpolicy/sfc-1-topology.png
+   :alt: GBP and SFC integration environment
+
+   GBP and SFC integration environment
+
+In the topology above, a symmetrical chain between H35\_2 and H36\_3
+could take path:
+
+H35\_2 to sw1 to sff1 to sf1 to sff1 to sff2 to sf2 to sff2 to sw6 to
+H36\_3
+
+If symmetric chaining were desired, the return path is:
+
+.. figure:: ./images/groupbasedpolicy/sfc-2-symmetric.png
+   :alt: GBP and SFC symmetric chain environment
+
+   GBP and SFC symmetric chain environment
+
+If asymmetric chaining were desired, the return path could be direct, or
+an **entirely different chain**.
+
+.. figure:: ./images/groupbasedpolicy/sfc-3-asymmetric.png
+   :alt: GBP and SFC asymmetric chain environment
+
+   GBP and SFC asymmetric chain environment
+
+All these scenarios are supported by the integration.
+
+In the **Subject Feature Instance** section of the tenant config, we
+define the instances of the classifier definitions for ICMP and HTTP:
+
+::
+
+            "subject-feature-instances": {
+              "classifier-instance": [
+                {
+                  "name": "icmp",
+                  "parameter-value": [
+                    {
+                      "name": "proto",
+                      "int-value": 1
+                    }
+                  ]
+                },
+                {
+                  "name": "http-dest",
+                  "parameter-value": [
+                    {
+                      "int-value": "6",
+                      "name": "proto"
+                    },
+                    {
+                      "int-value": "80",
+                      "name": "destport"
+                    }
+                  ]
+                },
+                {
+                  "name": "http-src",
+                  "parameter-value": [
+                    {
+                      "int-value": "6",
+                      "name": "proto"
+                    },
+                    {
+                      "int-value": "80",
+                      "name": "sourceport"
+                    }
+                  ]
+                }
+              ],
+
+Then the action instances to be associated with traffic that matches the
+classifiers are defined.
+
+Note that the *SFC chain name* must exist in SFC; it is validated
+against the datastore once the tenant configuration is entered, before
+the tenant configuration is written to the operational datastore (which
+triggers policy resolution).
+
+::
+
+              "action-instance": [
+                {
+                  "name": "chain1",
+                  "parameter-value": [
+                    {
+                      "name": "sfc-chain-name",
+                      "string-value": "SFCGBP"
+                    }
+                  ]
+                },
+                {
+                  "name": "allow1"
+                }
+              ]
+            },
+
+When ICMP is matched, allow the traffic:
+
+::
+
+            "contract": [
+              {
+                "subject": [
+                  {
+                    "name": "icmp-subject",
+                    "rule": [
+                      {
+                        "name": "allow-icmp-rule",
+                        "order" : 0,
+                        "classifier-ref": [
+                          {
+                            "name": "icmp"
+                          }
+                        ],
+                        "action-ref": [
+                          {
+                            "name": "allow1",
+                            "order": 0
+                          }
+                        ]
+                      }
+
+                    ]
+                  },
+
+When HTTP is matched **in** to the provider of the contract, with a TCP
+destination port of 80 (the HTTP request), the chain action is
+triggered. Similarly, the chain is applied **out** from the provider for
+traffic with a TCP source port of 80 (the HTTP response).
+
+::
+
+                  {
+                    "name": "http-subject",
+                    "rule": [
+                      {
+                        "name": "http-chain-rule-in",
+                        "classifier-ref": [
+                          {
+                            "name": "http-dest",
+                            "direction": "in"
+                          }
+                        ],
+                        "action-ref": [
+                          {
+                            "name": "chain1",
+                            "order": 0
+                          }
+                        ]
+                      },
+                      {
+                        "name": "http-chain-rule-out",
+                        "classifier-ref": [
+                          {
+                            "name": "http-src",
+                            "direction": "out"
+                          }
+                        ],
+                        "action-ref": [
+                          {
+                            "name": "chain1",
+                            "order": 0
+                          }
+                        ]
+                      }
+                    ]
+                  }
+
+To enable asymmetric chaining, for instance when the user wants HTTP
+requests to traverse the chain but HTTP responses to bypass it, the
+action for the HTTP response is set to *allow* instead of *chain*:
+
+::
+
+                      {
+                        "name": "http-chain-rule-out",
+                        "classifier-ref": [
+                          {
+                            "name": "http-src",
+                            "direction": "out"
+                          }
+                        ],
+                        "action-ref": [
+                          {
+                            "name": "allow1",
+                            "order": 0
+                          }
+                        ]
+                      }
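+
+As a quick sanity check (an illustrative script, not part of the GBP
+tooling), one can confirm that the outbound rule now references the
+*allow* action rather than the chain action:

```python
import json

# The asymmetric "http-chain-rule-out" rule from the example above.
rule = json.loads("""
{
  "name": "http-chain-rule-out",
  "classifier-ref": [
    {"name": "http-src", "direction": "out"}
  ],
  "action-ref": [
    {"name": "allow1", "order": 0}
  ]
}
""")

# The rule's action list should reference the allow action, not the chain.
actions = [a["name"] for a in rule["action-ref"]]
assert "chain1" not in actions
print(actions)  # -> ['allow1']
```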
+
+Demo/Development environment
+----------------------------
+
+The **GBP** project for Beryllium has two demo/development environments.
+
+-  Docker based GBP and GBP+SFC integration Vagrant environment
+
+-  DevStack based GBP+Neutron integration Vagrant environment
+
+`Demo @ GBP
+wiki <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)/Consumability/Demo>`__
+
diff --git a/docs/user-guide/images/ODL_lfm_Be_component.jpg b/docs/user-guide/images/ODL_lfm_Be_component.jpg
new file mode 100644 (file)
index 0000000..1ee3094
Binary files /dev/null and b/docs/user-guide/images/ODL_lfm_Be_component.jpg differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBPTerminology1.png b/docs/user-guide/images/groupbasedpolicy/GBPTerminology1.png
new file mode 100644 (file)
index 0000000..1015146
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBPTerminology1.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBPTerminology2.png b/docs/user-guide/images/groupbasedpolicy/GBPTerminology2.png
new file mode 100644 (file)
index 0000000..9acfde5
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBPTerminology2.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBPTerminology3.png b/docs/user-guide/images/groupbasedpolicy/GBPTerminology3.png
new file mode 100644 (file)
index 0000000..2f54a06
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBPTerminology3.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBP_AccessModel_simple.png b/docs/user-guide/images/groupbasedpolicy/GBP_AccessModel_simple.png
new file mode 100644 (file)
index 0000000..38da83a
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBP_AccessModel_simple.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Contract.png b/docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Contract.png
new file mode 100644 (file)
index 0000000..5c4199b
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Contract.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Forwarding.png b/docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Forwarding.png
new file mode 100644 (file)
index 0000000..bb96f37
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBP_Endpoint_EPG_Forwarding.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBP_ForwardingModel_simple.png b/docs/user-guide/images/groupbasedpolicy/GBP_ForwardingModel_simple.png
new file mode 100644 (file)
index 0000000..863d788
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBP_ForwardingModel_simple.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBP_High-levelBerylliumArchitecture.png b/docs/user-guide/images/groupbasedpolicy/GBP_High-levelBerylliumArchitecture.png
new file mode 100644 (file)
index 0000000..3afa28c
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBP_High-levelBerylliumArchitecture.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/GBP_High-levelExtraRenderer.png b/docs/user-guide/images/groupbasedpolicy/GBP_High-levelExtraRenderer.png
new file mode 100644 (file)
index 0000000..4ce2493
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/GBP_High-levelExtraRenderer.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/High-levelBerylliumArchitectureEvolution2.png b/docs/user-guide/images/groupbasedpolicy/High-levelBerylliumArchitectureEvolution2.png
new file mode 100644 (file)
index 0000000..9ec6a64
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/High-levelBerylliumArchitectureEvolution2.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/IntentSystemPolicySurfaces.png b/docs/user-guide/images/groupbasedpolicy/IntentSystemPolicySurfaces.png
new file mode 100644 (file)
index 0000000..12bc834
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/IntentSystemPolicySurfaces.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network-example.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network-example.png
new file mode 100644 (file)
index 0000000..ba07caa
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network-example.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network.png
new file mode 100644 (file)
index 0000000..b562d28
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-network.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port-example.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port-example.png
new file mode 100644 (file)
index 0000000..556fefc
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port-example.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port.png
new file mode 100644 (file)
index 0000000..bb4f592
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-port.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-router.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-router.png
new file mode 100644 (file)
index 0000000..4c87531
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-router.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-securitygroup.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-securitygroup.png
new file mode 100644 (file)
index 0000000..0bc9aad
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-securitygroup.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet-example.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet-example.png
new file mode 100644 (file)
index 0000000..cca428d
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet-example.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet.png b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet.png
new file mode 100644 (file)
index 0000000..fe79551
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/neutronmapper-gbp-mapping-subnet.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ofoverlay-1-components.png b/docs/user-guide/images/groupbasedpolicy/ofoverlay-1-components.png
new file mode 100644 (file)
index 0000000..9f24608
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ofoverlay-1-components.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ofoverlay-2-components.png b/docs/user-guide/images/groupbasedpolicy/ofoverlay-2-components.png
new file mode 100644 (file)
index 0000000..0eadb35
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ofoverlay-2-components.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ofoverlay-3-flowpipeline.png b/docs/user-guide/images/groupbasedpolicy/ofoverlay-3-flowpipeline.png
new file mode 100644 (file)
index 0000000..d7abe1c
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ofoverlay-3-flowpipeline.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/sfc-1-topology.png b/docs/user-guide/images/groupbasedpolicy/sfc-1-topology.png
new file mode 100644 (file)
index 0000000..c978f67
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/sfc-1-topology.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/sfc-2-symmetric.png b/docs/user-guide/images/groupbasedpolicy/sfc-2-symmetric.png
new file mode 100644 (file)
index 0000000..d56bd21
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/sfc-2-symmetric.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/sfc-3-asymmetric.png b/docs/user-guide/images/groupbasedpolicy/sfc-3-asymmetric.png
new file mode 100644 (file)
index 0000000..c6ca1e7
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/sfc-3-asymmetric.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-1-basicview.png b/docs/user-guide/images/groupbasedpolicy/ui-1-basicview.png
new file mode 100644 (file)
index 0000000..289f89d
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-1-basicview.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-2-governanceview.png b/docs/user-guide/images/groupbasedpolicy/ui-2-governanceview.png
new file mode 100644 (file)
index 0000000..6957e97
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-2-governanceview.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-3-governanceview-expressed.png b/docs/user-guide/images/groupbasedpolicy/ui-3-governanceview-expressed.png
new file mode 100644 (file)
index 0000000..2c7d9e5
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-3-governanceview-expressed.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-0.png b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-0.png
new file mode 100644 (file)
index 0000000..7f5cd27
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-0.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-1-subject.png b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-1-subject.png
new file mode 100644 (file)
index 0000000..ad3a252
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-1-subject.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-2-epg.png b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-2-epg.png
new file mode 100644 (file)
index 0000000..e391b82
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-delivered-2-epg.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-renderer.png b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-renderer.png
new file mode 100644 (file)
index 0000000..8e83617
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-4-governanceview-renderer.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-1.png b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-1.png
new file mode 100644 (file)
index 0000000..0de94b4
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-1.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-2.png b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-2.png
new file mode 100644 (file)
index 0000000..37b9cce
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-2.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-3.png b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-3.png
new file mode 100644 (file)
index 0000000..614d06a
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-3.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-4.png b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-4.png
new file mode 100644 (file)
index 0000000..b6d08a8
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-5-expresssion-4.png differ
diff --git a/docs/user-guide/images/groupbasedpolicy/ui-6-wizard.png b/docs/user-guide/images/groupbasedpolicy/ui-6-wizard.png
new file mode 100644 (file)
index 0000000..4e59fcb
Binary files /dev/null and b/docs/user-guide/images/groupbasedpolicy/ui-6-wizard.png differ
diff --git a/docs/user-guide/images/l2switch-address-observations.png b/docs/user-guide/images/l2switch-address-observations.png
new file mode 100644 (file)
index 0000000..c3715fb
Binary files /dev/null and b/docs/user-guide/images/l2switch-address-observations.png differ
diff --git a/docs/user-guide/images/l2switch-hosts.png b/docs/user-guide/images/l2switch-hosts.png
new file mode 100644 (file)
index 0000000..e44b506
Binary files /dev/null and b/docs/user-guide/images/l2switch-hosts.png differ
diff --git a/docs/user-guide/images/l2switch-stp-status.png b/docs/user-guide/images/l2switch-stp-status.png
new file mode 100644 (file)
index 0000000..ab9aae5
Binary files /dev/null and b/docs/user-guide/images/l2switch-stp-status.png differ
diff --git a/docs/user-guide/images/netide/netide-flow.jpg b/docs/user-guide/images/netide/netide-flow.jpg
new file mode 100644 (file)
index 0000000..fada123
Binary files /dev/null and b/docs/user-guide/images/netide/netide-flow.jpg differ
diff --git a/docs/user-guide/images/netide/netidearch.jpg b/docs/user-guide/images/netide/netidearch.jpg
new file mode 100644 (file)
index 0000000..9a67849
Binary files /dev/null and b/docs/user-guide/images/netide/netidearch.jpg differ
diff --git a/docs/user-guide/images/neutron/odl-neutron-service-architecture.png b/docs/user-guide/images/neutron/odl-neutron-service-architecture.png
new file mode 100644 (file)
index 0000000..9be2e47
Binary files /dev/null and b/docs/user-guide/images/neutron/odl-neutron-service-architecture.png differ
diff --git a/docs/user-guide/images/nic/Redirect_flow.png b/docs/user-guide/images/nic/Redirect_flow.png
new file mode 100644 (file)
index 0000000..65be1e0
Binary files /dev/null and b/docs/user-guide/images/nic/Redirect_flow.png differ
diff --git a/docs/user-guide/images/nic/Service_Chaining.png b/docs/user-guide/images/nic/Service_Chaining.png
new file mode 100644 (file)
index 0000000..c2afd17
Binary files /dev/null and b/docs/user-guide/images/nic/Service_Chaining.png differ
diff --git a/docs/user-guide/images/ocpplugin/dlux-ocp-apis.jpg b/docs/user-guide/images/ocpplugin/dlux-ocp-apis.jpg
new file mode 100644 (file)
index 0000000..820583d
Binary files /dev/null and b/docs/user-guide/images/ocpplugin/dlux-ocp-apis.jpg differ
diff --git a/docs/user-guide/images/ocpplugin/dlux-ocp-nodes.jpg b/docs/user-guide/images/ocpplugin/dlux-ocp-nodes.jpg
new file mode 100644 (file)
index 0000000..949474a
Binary files /dev/null and b/docs/user-guide/images/ocpplugin/dlux-ocp-nodes.jpg differ
diff --git a/docs/user-guide/images/ocpplugin/message_flow.jpg b/docs/user-guide/images/ocpplugin/message_flow.jpg
new file mode 100644 (file)
index 0000000..323cefc
Binary files /dev/null and b/docs/user-guide/images/ocpplugin/message_flow.jpg differ
diff --git a/docs/user-guide/images/ocpplugin/ocp-sb-plugin.jpg b/docs/user-guide/images/ocpplugin/ocp-sb-plugin.jpg
new file mode 100644 (file)
index 0000000..23cf919
Binary files /dev/null and b/docs/user-guide/images/ocpplugin/ocp-sb-plugin.jpg differ
diff --git a/docs/user-guide/images/ocpplugin/plugin-config.jpg b/docs/user-guide/images/ocpplugin/plugin-config.jpg
new file mode 100644 (file)
index 0000000..d3ecb74
Binary files /dev/null and b/docs/user-guide/images/ocpplugin/plugin-config.jpg differ
diff --git a/docs/user-guide/images/ocpplugin/plugin-design.jpg b/docs/user-guide/images/ocpplugin/plugin-design.jpg
new file mode 100644 (file)
index 0000000..03f5fd3
Binary files /dev/null and b/docs/user-guide/images/ocpplugin/plugin-design.jpg differ
diff --git a/docs/user-guide/images/ovsdb/ovsdb-netvirt-architecture.jpg b/docs/user-guide/images/ovsdb/ovsdb-netvirt-architecture.jpg
new file mode 100644 (file)
index 0000000..70fff82
Binary files /dev/null and b/docs/user-guide/images/ovsdb/ovsdb-netvirt-architecture.jpg differ
diff --git a/docs/user-guide/images/packetcable-postman.png b/docs/user-guide/images/packetcable-postman.png
new file mode 100644 (file)
index 0000000..61b30d6
Binary files /dev/null and b/docs/user-guide/images/packetcable-postman.png differ
diff --git a/docs/user-guide/images/sfc/RESTClient-snapshot.png b/docs/user-guide/images/sfc/RESTClient-snapshot.png
new file mode 100644 (file)
index 0000000..9ffda8a
Binary files /dev/null and b/docs/user-guide/images/sfc/RESTClient-snapshot.png differ
diff --git a/docs/user-guide/images/sfc/karaf-webui-select-a-type.png b/docs/user-guide/images/sfc/karaf-webui-select-a-type.png
new file mode 100644 (file)
index 0000000..fd0c62d
Binary files /dev/null and b/docs/user-guide/images/sfc/karaf-webui-select-a-type.png differ
diff --git a/docs/user-guide/images/sfc/sb-rest-architecture-user.png b/docs/user-guide/images/sfc/sb-rest-architecture-user.png
new file mode 100644 (file)
index 0000000..279e6b2
Binary files /dev/null and b/docs/user-guide/images/sfc/sb-rest-architecture-user.png differ
diff --git a/docs/user-guide/images/sfc/sf-rendered-service-path.png b/docs/user-guide/images/sfc/sf-rendered-service-path.png
new file mode 100644 (file)
index 0000000..ddd45c0
Binary files /dev/null and b/docs/user-guide/images/sfc/sf-rendered-service-path.png differ
diff --git a/docs/user-guide/images/sfc/sf-schedule-type.png b/docs/user-guide/images/sfc/sf-schedule-type.png
new file mode 100644 (file)
index 0000000..8a14a71
Binary files /dev/null and b/docs/user-guide/images/sfc/sf-schedule-type.png differ
diff --git a/docs/user-guide/images/sfc/sf-selection-arch.png b/docs/user-guide/images/sfc/sf-selection-arch.png
new file mode 100644 (file)
index 0000000..168d979
Binary files /dev/null and b/docs/user-guide/images/sfc/sf-selection-arch.png differ
diff --git a/docs/user-guide/images/sfc/sfc-ovs-architecture-user.png b/docs/user-guide/images/sfc/sfc-ovs-architecture-user.png
new file mode 100644 (file)
index 0000000..408d81c
Binary files /dev/null and b/docs/user-guide/images/sfc/sfc-ovs-architecture-user.png differ
diff --git a/docs/user-guide/images/sfc/sfc-ui-architecture.png b/docs/user-guide/images/sfc/sfc-ui-architecture.png
new file mode 100644 (file)
index 0000000..e5f2581
Binary files /dev/null and b/docs/user-guide/images/sfc/sfc-ui-architecture.png differ
diff --git a/docs/user-guide/images/sfc/sfcofrenderer_architecture.png b/docs/user-guide/images/sfc/sfcofrenderer_architecture.png
new file mode 100644 (file)
index 0000000..a6c6c5a
Binary files /dev/null and b/docs/user-guide/images/sfc/sfcofrenderer_architecture.png differ
diff --git a/docs/user-guide/images/sfc/sfcofrenderer_nwtopo.png b/docs/user-guide/images/sfc/sfcofrenderer_nwtopo.png
new file mode 100644 (file)
index 0000000..51ccb97
Binary files /dev/null and b/docs/user-guide/images/sfc/sfcofrenderer_nwtopo.png differ
diff --git a/docs/user-guide/images/snmp4sdn_getvlantable_postman.jpg b/docs/user-guide/images/snmp4sdn_getvlantable_postman.jpg
new file mode 100644 (file)
index 0000000..e6936f7
Binary files /dev/null and b/docs/user-guide/images/snmp4sdn_getvlantable_postman.jpg differ
diff --git a/docs/user-guide/images/snmp4sdn_in_odl_architecture.jpg b/docs/user-guide/images/snmp4sdn_in_odl_architecture.jpg
new file mode 100644 (file)
index 0000000..c5195d7
Binary files /dev/null and b/docs/user-guide/images/snmp4sdn_in_odl_architecture.jpg differ
diff --git a/docs/user-guide/images/vtn/Creare_Network_Step_1.png b/docs/user-guide/images/vtn/Creare_Network_Step_1.png
new file mode 100644 (file)
index 0000000..bfba99d
Binary files /dev/null and b/docs/user-guide/images/vtn/Creare_Network_Step_1.png differ
diff --git a/docs/user-guide/images/vtn/Create_Network.png b/docs/user-guide/images/vtn/Create_Network.png
new file mode 100644 (file)
index 0000000..a2d54b4
Binary files /dev/null and b/docs/user-guide/images/vtn/Create_Network.png differ
diff --git a/docs/user-guide/images/vtn/Create_Network_Step_2.png b/docs/user-guide/images/vtn/Create_Network_Step_2.png
new file mode 100644 (file)
index 0000000..5023bd9
Binary files /dev/null and b/docs/user-guide/images/vtn/Create_Network_Step_2.png differ
diff --git a/docs/user-guide/images/vtn/Create_Network_Step_3.png b/docs/user-guide/images/vtn/Create_Network_Step_3.png
new file mode 100644 (file)
index 0000000..0c6331a
Binary files /dev/null and b/docs/user-guide/images/vtn/Create_Network_Step_3.png differ
diff --git a/docs/user-guide/images/vtn/Dlux_login.png b/docs/user-guide/images/vtn/Dlux_login.png
new file mode 100644 (file)
index 0000000..05e38e7
Binary files /dev/null and b/docs/user-guide/images/vtn/Dlux_login.png differ
diff --git a/docs/user-guide/images/vtn/Dlux_topology.png b/docs/user-guide/images/vtn/Dlux_topology.png
new file mode 100644 (file)
index 0000000..1f1e1b6
Binary files /dev/null and b/docs/user-guide/images/vtn/Dlux_topology.png differ
diff --git a/docs/user-guide/images/vtn/How_to_provision_virtual_L2_network.png b/docs/user-guide/images/vtn/How_to_provision_virtual_L2_network.png
new file mode 100644 (file)
index 0000000..fffd65c
Binary files /dev/null and b/docs/user-guide/images/vtn/How_to_provision_virtual_L2_network.png differ
diff --git a/docs/user-guide/images/vtn/Hypervisors.png b/docs/user-guide/images/vtn/Hypervisors.png
new file mode 100644 (file)
index 0000000..31d80bf
Binary files /dev/null and b/docs/user-guide/images/vtn/Hypervisors.png differ
diff --git a/docs/user-guide/images/vtn/Instance_Console.png b/docs/user-guide/images/vtn/Instance_Console.png
new file mode 100644 (file)
index 0000000..fe37e0d
Binary files /dev/null and b/docs/user-guide/images/vtn/Instance_Console.png differ
diff --git a/docs/user-guide/images/vtn/Instance_Creation.png b/docs/user-guide/images/vtn/Instance_Creation.png
new file mode 100644 (file)
index 0000000..57add35
Binary files /dev/null and b/docs/user-guide/images/vtn/Instance_Creation.png differ
diff --git a/docs/user-guide/images/vtn/Instance_ping.png b/docs/user-guide/images/vtn/Instance_ping.png
new file mode 100644 (file)
index 0000000..d7b1540
Binary files /dev/null and b/docs/user-guide/images/vtn/Instance_ping.png differ
diff --git a/docs/user-guide/images/vtn/Launch_Instance.png b/docs/user-guide/images/vtn/Launch_Instance.png
new file mode 100644 (file)
index 0000000..2075b85
Binary files /dev/null and b/docs/user-guide/images/vtn/Launch_Instance.png differ
diff --git a/docs/user-guide/images/vtn/Launch_Instance_network.png b/docs/user-guide/images/vtn/Launch_Instance_network.png
new file mode 100644 (file)
index 0000000..81bf6ff
Binary files /dev/null and b/docs/user-guide/images/vtn/Launch_Instance_network.png differ
diff --git a/docs/user-guide/images/vtn/Load_All_Instances.png b/docs/user-guide/images/vtn/Load_All_Instances.png
new file mode 100644 (file)
index 0000000..63eacd0
Binary files /dev/null and b/docs/user-guide/images/vtn/Load_All_Instances.png differ
diff --git a/docs/user-guide/images/vtn/Mininet_Configuration.png b/docs/user-guide/images/vtn/Mininet_Configuration.png
new file mode 100644 (file)
index 0000000..bd14105
Binary files /dev/null and b/docs/user-guide/images/vtn/Mininet_Configuration.png differ
diff --git a/docs/user-guide/images/vtn/MutiController_Example_diagram.png b/docs/user-guide/images/vtn/MutiController_Example_diagram.png
new file mode 100644 (file)
index 0000000..988a20f
Binary files /dev/null and b/docs/user-guide/images/vtn/MutiController_Example_diagram.png differ
diff --git a/docs/user-guide/images/vtn/OpenStackGui.png b/docs/user-guide/images/vtn/OpenStackGui.png
new file mode 100644 (file)
index 0000000..046bf08
Binary files /dev/null and b/docs/user-guide/images/vtn/OpenStackGui.png differ
diff --git a/docs/user-guide/images/vtn/OpenStack_Demo_Picture.png b/docs/user-guide/images/vtn/OpenStack_Demo_Picture.png
new file mode 100644 (file)
index 0000000..2523423
Binary files /dev/null and b/docs/user-guide/images/vtn/OpenStack_Demo_Picture.png differ
diff --git a/docs/user-guide/images/vtn/Pathmap.png b/docs/user-guide/images/vtn/Pathmap.png
new file mode 100644 (file)
index 0000000..1c1ab1f
Binary files /dev/null and b/docs/user-guide/images/vtn/Pathmap.png differ
diff --git a/docs/user-guide/images/vtn/Service_Chaining_With_One_Service.png b/docs/user-guide/images/vtn/Service_Chaining_With_One_Service.png
new file mode 100644 (file)
index 0000000..0d1f5fd
Binary files /dev/null and b/docs/user-guide/images/vtn/Service_Chaining_With_One_Service.png differ
diff --git a/docs/user-guide/images/vtn/Service_Chaining_With_One_Service_LLD.png b/docs/user-guide/images/vtn/Service_Chaining_With_One_Service_LLD.png
new file mode 100644 (file)
index 0000000..b87a51f
Binary files /dev/null and b/docs/user-guide/images/vtn/Service_Chaining_With_One_Service_LLD.png differ
diff --git a/docs/user-guide/images/vtn/Service_Chaining_With_One_Service_Verification.png b/docs/user-guide/images/vtn/Service_Chaining_With_One_Service_Verification.png
new file mode 100644 (file)
index 0000000..0d1f5fd
Binary files /dev/null and b/docs/user-guide/images/vtn/Service_Chaining_With_One_Service_Verification.png differ
diff --git a/docs/user-guide/images/vtn/Service_Chaining_With_Two_Services.png b/docs/user-guide/images/vtn/Service_Chaining_With_Two_Services.png
new file mode 100644 (file)
index 0000000..7886897
Binary files /dev/null and b/docs/user-guide/images/vtn/Service_Chaining_With_Two_Services.png differ
diff --git a/docs/user-guide/images/vtn/Service_Chaining_With_Two_Services_LLD.png b/docs/user-guide/images/vtn/Service_Chaining_With_Two_Services_LLD.png
new file mode 100644 (file)
index 0000000..3053386
Binary files /dev/null and b/docs/user-guide/images/vtn/Service_Chaining_With_Two_Services_LLD.png differ
diff --git a/docs/user-guide/images/vtn/Single_Controller_Mapping.png b/docs/user-guide/images/vtn/Single_Controller_Mapping.png
new file mode 100644 (file)
index 0000000..378b86d
Binary files /dev/null and b/docs/user-guide/images/vtn/Single_Controller_Mapping.png differ
diff --git a/docs/user-guide/images/vtn/Tenant2.png b/docs/user-guide/images/vtn/Tenant2.png
new file mode 100644 (file)
index 0000000..4d3caa5
Binary files /dev/null and b/docs/user-guide/images/vtn/Tenant2.png differ
diff --git a/docs/user-guide/images/vtn/VTN_API.jpg b/docs/user-guide/images/vtn/VTN_API.jpg
new file mode 100644 (file)
index 0000000..e9240cd
Binary files /dev/null and b/docs/user-guide/images/vtn/VTN_API.jpg differ
diff --git a/docs/user-guide/images/vtn/VTN_Construction.jpg b/docs/user-guide/images/vtn/VTN_Construction.jpg
new file mode 100644 (file)
index 0000000..69d7755
Binary files /dev/null and b/docs/user-guide/images/vtn/VTN_Construction.jpg differ
diff --git a/docs/user-guide/images/vtn/VTN_Flow_Filter.jpg b/docs/user-guide/images/vtn/VTN_Flow_Filter.jpg
new file mode 100644 (file)
index 0000000..0e3a3bc
Binary files /dev/null and b/docs/user-guide/images/vtn/VTN_Flow_Filter.jpg differ
diff --git a/docs/user-guide/images/vtn/VTN_Mapping.jpg b/docs/user-guide/images/vtn/VTN_Mapping.jpg
new file mode 100644 (file)
index 0000000..6bfc6fa
Binary files /dev/null and b/docs/user-guide/images/vtn/VTN_Mapping.jpg differ
diff --git a/docs/user-guide/images/vtn/VTN_Overview.jpg b/docs/user-guide/images/vtn/VTN_Overview.jpg
new file mode 100644 (file)
index 0000000..a3fa417
Binary files /dev/null and b/docs/user-guide/images/vtn/VTN_Overview.jpg differ
diff --git a/docs/user-guide/images/vtn/flow_filter_example.png b/docs/user-guide/images/vtn/flow_filter_example.png
new file mode 100644 (file)
index 0000000..1dd88d8
Binary files /dev/null and b/docs/user-guide/images/vtn/flow_filter_example.png differ
diff --git a/docs/user-guide/images/vtn/setup_diagram_SCVMM.png b/docs/user-guide/images/vtn/setup_diagram_SCVMM.png
new file mode 100644 (file)
index 0000000..40d93f2
Binary files /dev/null and b/docs/user-guide/images/vtn/setup_diagram_SCVMM.png differ
diff --git a/docs/user-guide/images/vtn/vlanmap_using_mininet.png b/docs/user-guide/images/vtn/vlanmap_using_mininet.png
new file mode 100644 (file)
index 0000000..b564fd9
Binary files /dev/null and b/docs/user-guide/images/vtn/vlanmap_using_mininet.png differ
diff --git a/docs/user-guide/images/vtn/vtn-single-controller-topology-example.png b/docs/user-guide/images/vtn/vtn-single-controller-topology-example.png
new file mode 100644 (file)
index 0000000..7eb0adc
Binary files /dev/null and b/docs/user-guide/images/vtn/vtn-single-controller-topology-example.png differ
diff --git a/docs/user-guide/images/vtn/vtn_devstack_setup.png b/docs/user-guide/images/vtn/vtn_devstack_setup.png
new file mode 100644 (file)
index 0000000..82e68ba
Binary files /dev/null and b/docs/user-guide/images/vtn/vtn_devstack_setup.png differ
diff --git a/docs/user-guide/images/vtn/vtn_stations.png b/docs/user-guide/images/vtn/vtn_stations.png
new file mode 100644 (file)
index 0000000..50fff10
Binary files /dev/null and b/docs/user-guide/images/vtn/vtn_stations.png differ
index 9ca70912da30c3d0b3e74625bb633cbb1dae43dd..f5122bffa891a339c827b8f683d8108e67697beb 100644 (file)
@@ -30,12 +30,13 @@ Project-specific User Guides
    cardinal_-opendaylight-monitoring-as-a-service
    centinel-user-guide
    didm-user-guide
+   genius-user-guide
    group-based-policy-user-guide
    l2switch-user-guide
    l3vpn-service_-user-guide
    link-aggregation-control-protocol-user-guide
    lisp-flow-mapping-user-guide
-   network-modeling-(nemo)
+   nemo-user-guide
    netconf-user-guide
    netide-user-guide
    neutron-service-user-guide
diff --git a/docs/user-guide/l2switch-user-guide.rst b/docs/user-guide/l2switch-user-guide.rst
new file mode 100644 (file)
index 0000000..b6ad952
--- /dev/null
@@ -0,0 +1,424 @@
+L2Switch User Guide
+===================
+
+Overview
+--------
+
+The L2Switch project provides Layer 2 switch functionality.
+
+L2Switch Architecture
+---------------------
+
+-  Packet Handler
+
+   -  Decodes the packets coming to the controller and dispatches them
+      appropriately
+
+-  Loop Remover
+
+   -  Removes loops in the network
+
+-  Arp Handler
+
+   -  Handles the decoded ARP packets
+
+-  Address Tracker
+
+   -  Learns the Addresses (MAC and IP) of entities in the network
+
+-  Host Tracker
+
+   -  Tracks the locations of hosts in the network
+
+-  L2Switch Main
+
+   -  Installs flows on each switch based on network traffic
+
+Configuring L2Switch
+--------------------
+
+The sections below give details about the configuration settings for
+each component that can be configured.
+
+Configuring Loop Remover
+------------------------
+
+-  52-loopremover.xml
+
+   -  is-install-lldp-flow
+
+      -  "true" means a flow that sends all LLDP packets to the
+         controller will be installed on each switch
+
+      -  "false" means this flow will not be installed
+
+   -  lldp-flow-table-id
+
+      -  The LLDP flow will be installed on the specified flow table of
+         each switch
+
+      -  This field is only relevant when "is-install-lldp-flow" is set
+         to "true"
+
+   -  lldp-flow-priority
+
+      -  The LLDP flow will be installed with the specified priority
+
+      -  This field is only relevant when "is-install-lldp-flow" is set
+         to "true"
+
+   -  lldp-flow-idle-timeout
+
+      -  The LLDP flow will timeout (removed from the switch) if the
+         flow doesn’t forward a packet for *x* seconds
+
+      -  This field is only relevant when "is-install-lldp-flow" is set
+         to "true"
+
+   -  lldp-flow-hard-timeout
+
+      -  The LLDP flow will timeout (removed from the switch) after *x*
+         seconds, regardless of how many packets it is forwarding
+
+      -  This field is only relevant when "is-install-lldp-flow" is set
+         to "true"
+
+   -  graph-refresh-delay
+
+      -  A graph of the network is maintained and gets updated as
+         network elements go up/down (i.e. links go up/down and switches
+         go up/down)
+
+      -  After a network element going up/down, it waits
+         *graph-refresh-delay* seconds before recomputing the graph
+
+      -  A higher value has the advantage of doing less graph updates,
+         at the potential cost of losing some packets because the graph
+         didn’t update immediately.
+
+      -  A lower value has the advantage of handling network topology
+         changes quicker, at the cost of doing more computation.
+
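+A minimal sketch of the corresponding section of *52-loopremover.xml*
+(the element names mirror the fields above; the surrounding module
+wrapper and namespace are omitted and may differ between releases, so
+treat this as illustrative only):
+
+::
+
+    <loop-remover-config>
+      <!-- Send all LLDP packets to the controller -->
+      <is-install-lldp-flow>true</is-install-lldp-flow>
+      <lldp-flow-table-id>0</lldp-flow-table-id>
+      <lldp-flow-priority>100</lldp-flow-priority>
+      <!-- 0 means the flow never times out -->
+      <lldp-flow-idle-timeout>0</lldp-flow-idle-timeout>
+      <lldp-flow-hard-timeout>0</lldp-flow-hard-timeout>
+      <graph-refresh-delay>1</graph-refresh-delay>
+    </loop-remover-config>
+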
+Configuring Arp Handler
+-----------------------
+
+-  54-arphandler.xml
+
+   -  is-proactive-flood-mode
+
+      -  "true" means that flood flows will be installed on each switch.
+         With this flood flow, each switch will flood a packet that
+         doesn’t match any other flows.
+
+         -  Advantage: Fewer packets are sent to the controller because
+            those packets are flooded to the network.
+
+         -  Disadvantage: A lot of network traffic is generated.
+
+      -  "false" means the previously mentioned flood flows will not be
+         installed. Instead an ARP flow will be installed on each switch
+         that sends all ARP packets to the controller.
+
+         -  Advantage: Less network traffic is generated.
+
+         -  Disadvantage: The controller handles more packets (ARP
+            requests & replies) and the ARP process takes longer than if
+            there were flood flows.
+
+   -  flood-flow-table-id
+
+      -  The flood flow will be installed on the specified flow table of
+         each switch
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "true"
+
+   -  flood-flow-priority
+
+      -  The flood flow will be installed with the specified priority
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "true"
+
+   -  flood-flow-idle-timeout
+
+      -  The flood flow will timeout (removed from the switch) if the
+         flow doesn’t forward a packet for *x* seconds
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "true"
+
+   -  flood-flow-hard-timeout
+
+      -  The flood flow will timeout (removed from the switch) after *x*
+         seconds, regardless of how many packets it is forwarding
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "true"
+
+   -  arp-flow-table-id
+
+      -  The ARP flow will be installed on the specified flow table of
+         each switch
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "false"
+
+   -  arp-flow-priority
+
+      -  The ARP flow will be installed with the specified priority
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "false"
+
+   -  arp-flow-idle-timeout
+
+      -  The ARP flow will timeout (removed from the switch) if the flow
+         doesn’t forward a packet for *x* seconds
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "false"
+
+   -  arp-flow-hard-timeout
+
+      -  The ARP flow will timeout (removed from the switch) after
+         *arp-flow-hard-timeout* seconds, regardless of how many packets
+         it is forwarding
+
+      -  This field is only relevant when "is-proactive-flood-mode" is
+         set to "false"
+
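+A minimal sketch of the corresponding section of *54-arphandler.xml*,
+here configured for proactive flood mode (element names mirror the
+fields above; the module wrapper and namespace are omitted and may
+differ between releases):
+
+::
+
+    <arp-handler-config>
+      <is-proactive-flood-mode>true</is-proactive-flood-mode>
+      <flood-flow-table-id>0</flood-flow-table-id>
+      <flood-flow-priority>2</flood-flow-priority>
+      <flood-flow-idle-timeout>0</flood-flow-idle-timeout>
+      <flood-flow-hard-timeout>0</flood-flow-hard-timeout>
+      <!-- The arp-flow-* fields would only take effect if
+           is-proactive-flood-mode were set to "false" -->
+    </arp-handler-config>
+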
+Configuring Address Tracker
+---------------------------
+
+-  56-addresstracker.xml
+
+   -  timestamp-update-interval
+
+      -  A last-seen timestamp is associated with each address. This
+         last-seen timestamp will only be updated after
+         *timestamp-update-interval* milliseconds.
+
+      -  A higher value has the advantage of performing less writes to
+         the database.
+
+      -  A lower value has the advantage of knowing how fresh an address
+         is.
+
+   -  observe-addresses-from
+
+      -  IP and MAC addresses can be observed/learned from ARP, IPv4,
+         and IPv6 packets. Set which packets to make these observations
+         from.
+
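+A minimal sketch of the corresponding section of *56-addresstracker.xml*
+(illustrative; the module wrapper and namespace are omitted):
+
+::
+
+    <address-tracker-config>
+      <!-- Update a last-seen timestamp at most every 600000 ms -->
+      <timestamp-update-interval>600000</timestamp-update-interval>
+      <!-- Learn addresses from ARP packets only -->
+      <observe-addresses-from>arp</observe-addresses-from>
+    </address-tracker-config>
+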
+Configuring L2Switch Main
+-------------------------
+
+-  58-l2switchmain.xml
+
+   -  is-install-dropall-flow
+
+      -  "true" means a drop-all flow will be installed on each switch,
+         so the default action will be to drop a packet instead of
+         sending it to the controller
+
+      -  "false" means this flow will not be installed
+
+   -  dropall-flow-table-id
+
+      -  The dropall flow will be installed on the specified flow table
+         of each switch
+
+      -  This field is only relevant when "is-install-dropall-flow" is
+         set to "true"
+
+   -  dropall-flow-priority
+
+      -  The dropall flow will be installed with the specified priority
+
+      -  This field is only relevant when "is-install-dropall-flow" is
+         set to "true"
+
+   -  dropall-flow-idle-timeout
+
+      -  The dropall flow will timeout (removed from the switch) if the
+         flow doesn’t forward a packet for *x* seconds
+
+      -  This field is only relevant when "is-install-dropall-flow" is
+         set to "true"
+
+   -  dropall-flow-hard-timeout
+
+      -  The dropall flow will timeout (removed from the switch) after
+         *x* seconds, regardless of how many packets it is forwarding
+
+      -  This field is only relevant when "is-install-dropall-flow" is
+         set to "true"
+
+   -  is-learning-only-mode
+
+      -  "true" means that the L2Switch will only be learning addresses.
+         No additional flows to optimize network traffic will be
+         installed.
+
+      -  "false" means that the L2Switch will react to network traffic
+         and install flows on the switches to optimize traffic.
+         Currently, MAC-to-MAC flows are installed.
+
+   -  reactive-flow-table-id
+
+      -  The reactive flow will be installed on the specified flow table
+         of each switch
+
+      -  This field is only relevant when "is-learning-only-mode" is set
+         to "false"
+
+   -  reactive-flow-priority
+
+      -  The reactive flow will be installed with the specified priority
+
+      -  This field is only relevant when "is-learning-only-mode" is set
+         to "false"
+
+   -  reactive-flow-idle-timeout
+
+      -  The reactive flow will timeout (removed from the switch) if the
+         flow doesn’t forward a packet for *x* seconds
+
+      -  This field is only relevant when "is-learning-only-mode" is set
+         to "false"
+
+   -  reactive-flow-hard-timeout
+
+      -  The reactive flow will timeout (removed from the switch) after
+         *x* seconds, regardless of how many packets it is forwarding
+
+      -  This field is only relevant when "is-learning-only-mode" is set
+         to "false"
+
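+A minimal sketch of the corresponding section of *58-l2switchmain.xml*
+(illustrative; the module wrapper and namespace are omitted, and the
+values shown are examples, not defaults):
+
+::
+
+    <l2switch-main-config>
+      <is-install-dropall-flow>true</is-install-dropall-flow>
+      <dropall-flow-table-id>0</dropall-flow-table-id>
+      <dropall-flow-priority>0</dropall-flow-priority>
+      <dropall-flow-idle-timeout>0</dropall-flow-idle-timeout>
+      <dropall-flow-hard-timeout>0</dropall-flow-hard-timeout>
+      <!-- Install reactive MAC-to-MAC flows -->
+      <is-learning-only-mode>false</is-learning-only-mode>
+      <reactive-flow-table-id>0</reactive-flow-table-id>
+      <reactive-flow-priority>10</reactive-flow-priority>
+      <reactive-flow-idle-timeout>600</reactive-flow-idle-timeout>
+      <reactive-flow-hard-timeout>300</reactive-flow-hard-timeout>
+    </l2switch-main-config>
+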
+Running the L2Switch project
+----------------------------
+
+To run the L2 Switch inside the Lithium OpenDaylight distribution,
+simply install the ``odl-l2switch-switch-ui`` feature:
+
+::
+
+    feature:install odl-l2switch-switch-ui
+
+Create a network using mininet
+------------------------------
+
+::
+
+    sudo mn --controller=remote,ip=<Controller IP> --topo=linear,3 --switch ovsk,protocols=OpenFlow13
+    sudo mn --controller=remote,ip=127.0.0.1 --topo=linear,3 --switch ovsk,protocols=OpenFlow13
+
+The above commands create a virtual network consisting of 3 switches;
+the first shows the general form and the second is a concrete example.
+Each switch connects to the controller located at the specified IP,
+i.e., 127.0.0.1.
+
+::
+
+    sudo mn --controller=remote,ip=127.0.0.1 --mac --topo=linear,3 --switch ovsk,protocols=OpenFlow13
+
+The above command has the "mac" option, which makes it easier to
+distinguish between Host MAC addresses and Switch MAC addresses.
+
+Generating network traffic using mininet
+----------------------------------------
+
+::
+
+    h1 ping h2
+
+The above command will cause host1 (h1) to ping host2 (h2).
+
+::
+
+    pingall
+
+*pingall* will cause each host to ping every other host.
+
+Checking Address Observations
+-----------------------------
+
+Address Observations are added to the Inventory data tree.
+
+The Address Observations on a Node Connector can be checked through a
+browser or a REST Client.
+
+::
+
+    http://10.194.126.91:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/node-connector/openflow:1:1
+
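+The response contains an addresses list on the node connector,
+populated by the Address Tracker. A trimmed sample is shown below; the
+exact field names and augmentation prefix are illustrative and may
+depend on the release:
+
+::
+
+    {
+      "node-connector": [{
+        "id": "openflow:1:1",
+        "address-tracker:addresses": [{
+          "id": 0,
+          "mac": "00:00:00:00:00:01",
+          "ip": "10.0.0.1",
+          "first-seen": 1416348546558,
+          "last-seen": 1416348546558
+        }]
+      }]
+    }
+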
+.. figure:: ./images/l2switch-address-observations.png
+   :alt: Address Observations
+
+   Address Observations
+
+Checking Hosts
+--------------
+
+Host information is added to the Topology data tree.
+
+-  Host address
+
+-  Attachment point (link) to a node/switch
+
+This host information and attachment point information can be checked
+through a browser or a REST Client.
+
+::
+
+    http://10.194.126.91:8080/restconf/operational/network-topology:network-topology/topology/flow:1/
+
+.. figure:: ./images/l2switch-hosts.png
+   :alt: Hosts
+
+   Hosts
+
+Checking STP status of each link
+--------------------------------
+
+STP Status information is added to the Inventory data tree.
+
+-  A status of "forwarding" means the link is active and packets are
+   flowing on it.
+
+-  A status of "discarding" means the link is inactive and packets are
+   not sent over it.
+
+The STP status of a link can be checked through a browser or a REST
+Client.
+
+::
+
+    http://10.194.126.91:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/node-connector/openflow:1:2
+
+.. figure:: ./images/l2switch-stp-status.png
+   :alt: STP status
+
+   STP status
+
+Miscellaneous mininet commands
+------------------------------
+
+::
+
+    link s1 s2 down
+
+This will bring the link between switch1 (s1) and switch2 (s2) down.
+
+::
+
+    link s1 s2 up
+
+This will bring the link between switch1 (s1) and switch2 (s2) up.
+
+::
+
+    link s1 h1 down
+
+This will bring the link between switch1 (s1) and host1 (h1) down.
+
diff --git a/docs/user-guide/l3vpn-service_-user-guide.rst b/docs/user-guide/l3vpn-service_-user-guide.rst
new file mode 100644 (file)
index 0000000..ce8f1f5
--- /dev/null
@@ -0,0 +1,463 @@
+L3VPN Service: User Guide
+=========================
+
+Overview
+--------
+
+L3VPN Service in OpenDaylight provides a framework to create L3VPN based
+on BGP-MP. It also helps to create Network Virtualization for DC Cloud
+environment.
+
+Modules & Interfaces
+--------------------
+
+The L3VPN service can be realized using the following modules:
+
+VPN Service Modules
+~~~~~~~~~~~~~~~~~~~
+
+1. **VPN Manager** : Creates and manages VPNs and VPN Interfaces
+
+2. **BGP Manager** : Configures BGP routing stack and provides interface
+   to routing services
+
+3. **FIB Manager** : Provides interface to FIB, creates and manages
+   forwarding rules in Dataplane
+
+4. **Nexthop Manager** : Creates and manages nexthop egress pointer,
+   creates egress rules in Dataplane
+
+5. **Interface Manager** : Creates and manages different types of
+   network interfaces, e.g., VLAN, l3tunnel, etc.
+
+6. **Id Manager** : Provides cluster-wide unique ID for a given key.
+   Used by different modules to get unique IDs for different entities.
+
+7. **MD-SAL Util** : Provides interface to MD-SAL. Used by service
+   modules to access MD-SAL Datastore and services.
+
+All the above modules can function independently and can be utilized by
+other services as well.
+
+Configuration Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following modules expose configuration interfaces through which user
+can configure L3VPN Service.
+
+1. BGP Manager
+
+2. VPN Manager
+
+3. Interface Manager
+
+4. FIB Manager
+
+Configuration Interface Details
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. Data Node Path : */config/bgp:bgp-router/*
+
+   a. Fields :
+
+      i.  local-as-identifier
+
+      ii. local-as-number
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+2. Data Node Path : */config/bgp:bgp-neighbors/*
+
+   a. Fields :
+
+      i. List of bgp-neighbor
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+3. Data Node Path :
+   */config/bgp:bgp-neighbors/bgp-neighbor/``{as-number}``/*
+
+   a. Fields :
+
+      i.  as-number
+
+      ii. ip-address
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+1. Data Node Path : */config/l3vpn:vpn-instances/*
+
+   a. Fields :
+
+      i. List of vpn-instance
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+2. Data Node Path : */config/l3vpn:vpn-interfaces/vpn-instance*
+
+   a. Fields :
+
+      i.   name
+
+      ii.  route-distinguisher
+
+      iii. import-route-policy
+
+      iv.  export-route-policy
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+3. Data Node Path : */config/l3vpn:vpn-interfaces/*
+
+   a. Fields :
+
+      i. List of vpn-interface
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+4. Data Node Path : */config/l3vpn:vpn-interfaces/vpn-interface*
+
+   a. Fields :
+
+      i.  name
+
+      ii. vpn-instance-name
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+5. Data Node Path :
+   */config/l3vpn:vpn-interfaces/vpn-interface/``{name}``/adjacency*
+
+   a. Fields :
+
+      i.  ip-address
+
+      ii. mac-address
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+1. Data Node Path : */config/if:interfaces/interface*
+
+   a. Fields :
+
+      i.   name
+
+      ii.  type
+
+      iii. enabled
+
+      iv.  of-port-id
+
+      v.   tenant-id
+
+      vi.  base-interface
+
+   b. type specific fields
+
+      i.   when type = *l2vlan*
+
+           A. vlan-id
+
+      ii.  when type = *stacked\_vlan*
+
+           A. stacked-vlan-id
+
+      iii. when type = *l3tunnel*
+
+           A. tunnel-type
+
+           B. local-ip
+
+           C. remote-ip
+
+           D. gateway-ip
+
+      iv.  when type = *mpls*
+
+           A. list labelStack
+
+           B. num-labels
+
+   c. REST Methods : GET, PUT, DELETE, POST
+
+1. Data Node Path : */config/odl-fib:fibEntries/vrfTables*
+
+   a. Fields :
+
+      i. List of vrfTables
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+2. Data Node Path :
+   */config/odl-fib:fibEntries/vrfTables/``{routeDistinguisher}``/*
+
+   a. Fields :
+
+      i.  route-distinguisher
+
+      ii. list vrfEntries
+
+          A. destPrefix
+
+          B. label
+
+          C. nexthopAddress
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+3. Data Node Path : */config/odl-fib:fibEntries/ipv4Table*
+
+   a. Fields :
+
+      i. list ipv4Entry
+
+         A. destPrefix
+
+         B. nexthopAddress
+
+   b. REST Methods : GET, PUT, DELETE, POST
+
+Provisioning Sequence & Sample Configurations
+---------------------------------------------
+
+Installation
+~~~~~~~~~~~~
+
+1. Edit *etc/custom.properties* and set the following property:
+   *vpnservice.bgpspeaker.host.name = <bgpserver-ip>*, where
+   *<bgpserver-ip>* refers to the IP address of the host where BGP is
+   running.
+
+2. Run ODL and install the VPN Service: *feature:install odl-vpnservice-core*
+
+Use the REST interface to configure the L3VPN service.
+
+Pre-requisites:
+~~~~~~~~~~~~~~~
+
+1. A BGP stack with VRF support needs to be installed and configured
+
+   a. *Configure BGP as specified in Step 1 below.*
+
+2. Create pairs of GRE/VxLAN Tunnels (using ovsdb/ovs-vsctl) between
+   each pair of switches and between each switch and the Gateway node
+
+   a. Create *l3tunnel* interfaces corresponding to each tunnel in the
+      interfaces DS, as specified in Step 2 below.
+
+Step 1 : Configure BGP
+~~~~~~~~~~~~~~~~~~~~~~
+
+1. Configure BGP Router
+^^^^^^^^^^^^^^^^^^^^^^^
+
+**REST API** : *PUT /config/bgp:bgp-router/*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+        "bgp-router": {
+            "local-as-identifier": "10.10.10.10",
+            "local-as-number": 108
+        }
+    }
+
+2. Configure BGP Neighbors
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**REST API** : *PUT /config/bgp:bgp-neighbors/*
+
+**Sample JSON Data**
+
+.. code:: json
+
+      {
+         "bgp-neighbor" : [
+                {
+                    "as-number": 105,
+                    "ip-address": "169.144.42.168"
+                }
+           ]
+       }
+
+Step 2 : Create Tunnel Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Create l3tunnel interfaces corresponding to all GRE/VxLAN tunnels
+created with ovsdb (see `Pre-requisites <#prer>`__), using the
+following REST interface:
+
+**REST API** : *PUT /config/if:interfaces/if:interface*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+        "interface": [
+            {
+                "name" : "GRE_192.168.57.101_192.168.57.102",
+                "type" : "odl-interface:l3tunnel",
+                "odl-interface:tunnel-type": "odl-interface:tunnel-type-gre",
+                "odl-interface:local-ip" : "192.168.57.101",
+                "odl-interface:remote-ip" : "192.168.57.102",
+                "odl-interface:portId" : "openflow:1:3",
+                "enabled" : "true"
+            }
+        ]
+    }
+
+The following is expected as a result of these configurations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. Unique If-index is generated
+
+2. *Interface-state* operational DS is updated
+
+3. Corresponding Nexthop Group Entry is created
+
+Step 3 : OS Create Neutron Ports and attach VMs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At this step, the user creates the Neutron ports and attaches the VMs.
+
+Step 4 : Create VM Interfaces
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Create l2vlan interfaces corresponding to the VMs created in step 3
+
+**REST API** : *PUT /config/if:interfaces/if:interface*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+        "interface": [
+            {
+                "name" : "dpn1-dp1.2",
+                "type" : "l2vlan",
+                "odl-interface:of-port-id" : "openflow:1:2",
+                "odl-interface:vlan-id" : "1",
+                "enabled" : "true"
+            }
+        ]
+    }
+
+Step 5: Create VPN Instance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**REST API** : *PUT /config/l3vpn:vpn-instances/l3vpn:vpn-instance/*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+      "vpn-instance": [
+        {
+            "description": "Test VPN Instance 1",
+            "vpn-instance-name": "testVpn1",
+            "ipv4-family": {
+                "route-distinguisher": "4000:1",
+                "export-route-policy": "4000:1,5000:1",
+                "import-route-policy": "4000:1,5000:1"
+            }
+        }
+      ]
+    }
+
+The following is expected as a result of these configurations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. VPN ID is allocated and updated in data-store
+
+2. Corresponding VRF is created in BGP
+
+3. If there are vpn-interface configurations for this VPN, the
+   corresponding action is taken as defined in step 6
+
+Step 6: Create VPN-Interface and Local Adjacency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+*This can also be done in two steps.*
+
+1. Create vpn-interface
+^^^^^^^^^^^^^^^^^^^^^^^
+
+**REST API** : *PUT /config/l3vpn:vpn-interfaces/l3vpn:vpn-interface/*
+
+**Sample JSON Data**
+
+.. code:: json
+
+    {
+      "vpn-interface": [
+        {
+          "vpn-instance-name": "testVpn1",
+          "name": "dpn1-dp1.2"
+        }
+      ]
+    }
+
+.. note::
+
+    ``name`` here is the name of the VM interface created in steps 3 and 4.
+
+2. Add Adjacencies on vpn-interface
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**REST API** : *PUT
+/config/l3vpn:vpn-interfaces/l3vpn:vpn-interface/dpn1-dp1.3/adjacency*
+
+**Sample JSON Data**
+
+.. code:: json
+
+      {
+         "adjacency" : [
+                {
+                    "ip-address" : "169.144.42.168",
+                    "mac-address" : "11:22:33:44:55:66"
+                }
+           ]
+       }
+
+.. note::
+
+    The adjacency is a list; the user can define more than one adjacency
+    on a vpn-interface.
+
+The above steps can be carried out in a single step, as follows:
+
+.. code:: json
+
+    {
+        "vpn-interface": [
+            {
+                "vpn-instance-name": "testVpn1",
+                "name": "dpn1-dp1.3",
+                "odl-l3vpn:adjacency": [
+                    {
+                        "odl-l3vpn:mac_address": "11:22:33:44:55:66",
+                        "odl-l3vpn:ip_address": "11.11.11.2"
+                    }
+                ]
+            }
+        ]
+    }
+
+The following is expected as a result of these configurations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. Prefix label is generated and stored in DS
+
+2. Ingress table is programmed with flow corresponding to interface
+
+3. Local Egress Group is created
+
+4. Prefix is added to BGP for advertisement
+
+5. BGP pushes route update to FIB YANG Interface
+
+6. FIB Entry flow is added to FIB Table in OF pipeline
+
diff --git a/docs/user-guide/link-aggregation-control-protocol-user-guide.rst b/docs/user-guide/link-aggregation-control-protocol-user-guide.rst
new file mode 100644 (file)
index 0000000..2cdcd15
--- /dev/null
@@ -0,0 +1,209 @@
+Link Aggregation Control Protocol User Guide
+============================================
+
+Overview
+--------
+
+This section contains information about how to use the LACP plugin
+project with OpenDaylight, including configurations.
+
+Link Aggregation Control Protocol Architecture
+----------------------------------------------
+
+The LACP Project within OpenDaylight implements Link Aggregation Control
+Protocol (LACP) as an MD-SAL service module and will be used to
+auto-discover and aggregate multiple links between an OpenDaylight
+controlled network and LACP-enabled endpoints or switches. The result is
+the creation of a logical channel, which represents the aggregation of
+the links. Link aggregation provides link resiliency and bandwidth
+aggregation. This implementation adheres to IEEE Ethernet specification
+`802.3ad <http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf>`__.
+
+Configuring Link Aggregation Control Protocol
+---------------------------------------------
+
+This feature can be enabled in the Karaf console of the OpenDaylight
+Karaf distribution by issuing the following command:
+
+::
+
+    feature:install odl-lacp-ui
+
+.. note::
+
+    1. Ensure that legacy (non-OpenFlow) switches are configured with
+       LACP mode active with a long timeout to allow for the LACP plugin
+       in OpenDaylight to respond to its messages.
+
+    2. Flows that want to take advantage of LACP-configured Link
+       Aggregation Groups (LAGs) must explicitly use an OpenFlow group
+       table entry created by the LACP plugin. The plugin only creates
+       group table entries; it does not program any flows on its own.
+
+Administering or Managing Link Aggregation Control Protocol
+-----------------------------------------------------------
+
+LACP-discovered network inventory and network statistics can be viewed
+using the following REST APIs.
+
+1. List of aggregators available for a node:
+
+   ::
+
+       http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>
+
+   Aggregator information will appear within the ``<lacp-aggregators>``
+   XML tag.
+
+2. To view only the information of an aggregator:
+
+   ::
+
+       http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>/lacp-aggregators/<agg-id>
+
+   The group ID associated with the aggregator can be found inside the
+   ``<lag-groupid>`` XML tag.
+
+   The group table entry information for the ``<lag-groupid>`` added for
+   the aggregator is also available in the ``opendaylight-inventory``
+   node database.
+
+3. To view physical port information:
+
+   ::
+
+       http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>/node-connector/<node-connector-id>
+
+   Ports that are associated with an aggregator will have the tag
+   ``<lacp-agg-ref>`` updated with valid aggregator information.
+
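+For example, the aggregator list for switch ``openflow:1`` can be
+retrieved with ``curl`` (assuming the default ``admin``/``admin``
+credentials of the Karaf distribution):
+
+::
+
+    curl -u admin:admin http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:1
+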
+Tutorials
+---------
+
+The tutorial below demonstrates LACP LAG creation for a sample mininet
+topology.
+
+Sample LACP Topology creation on Mininet
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+    sudo mn --controller=remote,ip=<Controller IP> --topo=linear,1 --switch ovsk,protocols=OpenFlow13
+
+The above command will create a virtual network consisting of a switch
+and a host. The switch will be connected to the controller.
+
+Once the topology is discovered, verify the presence of a flow entry
+with "dl\_type" set to "0x8809" to handle LACP packets using the below
+ovs-ofctl command:
+
+::
+
+    ovs-ofctl -O OpenFlow13 dump-flows s1
+     OFPST_FLOW reply (OF1.3) (xid=0x2):
+     cookie=0x300000000000001e, duration=60.067s, table=0, n_packets=0, n_bytes=0, priority=5,dl_dst=01:80:c2:00:00:02,dl_type=0x8809 actions=CONTROLLER:65535
+
+Configure an additional link between the switch (s1) and host (h1) using
+the below command on mininet shell to aggregate 2 links:
+
+::
+
+    mininet> py net.addLink(s1, net.get('h1'))
+    mininet> py s1.attach('s1-eth2')
+
+The LACP module will listen for LACP control packets that are generated
+from a legacy (non-OpenFlow enabled) switch. In our example, host (h1)
+will act as the LACP packet generator. In order to generate the LACP
+control packets, a bond interface has to be created on the host (h1)
+with the mode type set to LACP and a long timeout. To configure the
+bond interface, create a new file, bonding.conf, under the
+/etc/modprobe.d/ directory and insert the lines below into it:
+
+::
+
+    alias bond0 bonding
+    options bonding mode=4
+
+Here, mode=4 refers to LACP, and the default timeout is set to long.
+
+Enable bond interface and associate both physical interface h1-eth0 &
+h1-eth1 as members of the bond interface on host (h1) using the below
+commands on the mininet shell:
+
+::
+
+    mininet> py net.get('h1').cmd('modprobe bonding')
+    mininet> py net.get('h1').cmd('ip link add bond0 type bond')
+    mininet> py net.get('h1').cmd('ip link set bond0 address <bond-mac-address>')
+    mininet> py net.get('h1').cmd('ip link set h1-eth0 down')
+    mininet> py net.get('h1').cmd('ip link set h1-eth0 master bond0')
+    mininet> py net.get('h1').cmd('ip link set h1-eth1 down')
+    mininet> py net.get('h1').cmd('ip link set h1-eth1 master bond0')
+    mininet> py net.get('h1').cmd('ip link set bond0 up')
+
+Once the bond0 interface is up, the host (h1) will send LACP packets to
+the switch (s1). The LACP Module will then create a LAG through exchange
+of LACP packets between the host (h1) and switch (s1). To view the bond
+interface output on the host (h1) side:
+
+::
+
+    mininet> py net.get('h1').cmd('cat /proc/net/bonding/bond0')
+    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
+    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
+    Transmit Hash Policy: layer2 (0)
+    MII Status: up
+    MII Polling Interval (ms): 100
+    Up Delay (ms): 0
+    Down Delay (ms): 0
+    802.3ad info
+    LACP rate: slow
+    Min links: 0
+    Aggregator selection policy (ad_select): stable
+    Active Aggregator Info:
+            Aggregator ID: 1
+            Number of ports: 2
+            Actor Key: 33
+            Partner Key: 27
+            Partner Mac Address: 00:00:00:00:01:01
+
+::
+
+    Slave Interface: h1-eth0
+    MII Status: up
+    Speed: 10000 Mbps
+    Duplex: full
+    Link Failure Count: 0
+    Permanent HW addr: 00:00:00:00:00:11
+    Aggregator ID: 1
+    Slave queue ID: 0
+
+::
+
+    Slave Interface: h1-eth1
+    MII Status: up
+    Speed: 10000 Mbps
+    Duplex: full
+    Link Failure Count: 0
+    Permanent HW addr: 00:00:00:00:00:12
+    Aggregator ID: 1
+    Slave queue ID: 0
+
+A corresponding group table entry would be created on the OpenFlow
+switch (s1) with "type" set to "select" to perform the LAG
+functionality. To view the group entries:
+
+::
+
+    mininet> ovs-ofctl -O OpenFlow13 dump-groups s1
+    OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
+     group_id=60169,type=select,bucket=weight:0,actions=output:1,output:2
+
+To apply the LAG functionality on the switches, the flows should be
+configured with action set to GroupId instead of output port. A sample
+add-flow configuration with output action set to GroupId:
+
+::
+
+    sudo ovs-ofctl -O OpenFlow13 add-flow s1 dl_type=0x0806,dl_src=SRC_MAC,dl_dst=DST_MAC,actions=group:60169
+
diff --git a/docs/user-guide/lisp-flow-mapping-user-guide.rst b/docs/user-guide/lisp-flow-mapping-user-guide.rst
new file mode 100644 (file)
index 0000000..6d85e2e
--- /dev/null
@@ -0,0 +1,817 @@
+LISP Flow Mapping User Guide
+============================
+
+Overview
+--------
+
+Locator/ID Separation Protocol
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`Locator/ID Separation Protocol
+(LISP) <http://tools.ietf.org/html/rfc6830>`__ is a technology that
+provides a flexible map-and-encap framework that can be used for overlay
+network applications such as data center network virtualization and
+Network Function Virtualization (NFV).
+
+LISP provides the following name spaces:
+
+-  `Endpoint Identifiers
+   (EIDs) <http://tools.ietf.org/html/rfc6830#page-6>`__
+
+-  `Routing Locators
+   (RLOCs) <http://tools.ietf.org/html/rfc6830#section-3>`__
+
+In a virtualization environment EIDs can be viewed as virtual address
+space and RLOCs can be viewed as physical network address space.
+
+The LISP framework decouples network control plane from the forwarding
+plane by providing:
+
+-  A data plane that specifies how the virtualized network addresses are
+   encapsulated in addresses from the underlying physical network.
+
+-  A control plane that stores the mapping of the virtual-to-physical
+   address spaces, the associated forwarding policies and serves this
+   information to the data plane on demand.
+
+Network programmability is achieved by programming forwarding policies
+such as transparent mobility, service chaining, and traffic engineering
+in the mapping system; where the data plane elements can fetch these
+policies on demand as new flows arrive. This chapter describes the LISP
+Flow Mapping project in OpenDaylight and how it can be used to enable
+advanced SDN and NFV use cases.
+
+LISP data plane Tunnel Routers are available at
+`LISPmob.org <http://LISPmob.org/>`__ in the open source community on
+the following platforms:
+
+-  Linux
+
+-  Android
+
+-  OpenWRT
+
+For more details and support for LISP data plane software please visit
+`the LISPmob web site <http://LISPmob.org/>`__.
+
+LISP Flow Mapping Service
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The LISP Flow Mapping service provides LISP Mapping System services.
+This includes LISP Map-Server and LISP Map-Resolver services to store
+and serve mapping data to data plane nodes as well as to OpenDaylight
+applications. Mapping data can include mappings of virtual addresses to
+the physical network addresses where the virtual nodes are reachable or
+hosted. Mapping data can also include a variety of routing policies
+including traffic engineering and load balancing. To leverage this
+service, OpenDaylight applications and services can use the northbound
+REST API to define the mappings and policies in the LISP Mapping
+Service. Data plane devices capable of LISP control protocol can
+leverage this service through a southbound LISP plugin. LISP-enabled
+devices must be configured to use this OpenDaylight service as their Map
+Server and/or Map Resolver.
+
+The southbound LISP plugin supports the LISP control protocol
+(Map-Register, Map-Request, Map-Reply messages), and can also be used to
+register mappings in the OpenDaylight mapping service.
+
+LISP Flow Mapping Architecture
+------------------------------
+
+The following figure shows the various LISP Flow Mapping modules.
+
+.. figure:: ./images/ODL_lfm_Be_component.jpg
+   :alt: LISP Mapping Service Internal Architecture
+
+   LISP Mapping Service Internal Architecture
+
+A brief description of each module is as follows:
+
+-  **DAO (Data Access Object):** This layer separates the LISP logic
+   from the database, so that the Map Server and Map Resolver are
+   decoupled from the specific implementation of the mapping database.
+   Currently this layer is implemented with an in-memory HashMap, but it
+   can be switched to any other key/value store by implementing the
+   ILispDAO interface.
+
+-  **Map Server:** This module processes the adding or registration of
+   authentication tokens (keys) and mappings. For a detailed
+   specification of LISP Map Server, see
+   `LISP <http://tools.ietf.org/search/rfc6830>`__.
+
+-  **Map Resolver:** This module receives and processes mapping lookup
+   queries and provides the mappings to the requester. For a detailed
+   specification of the LISP Map Resolver, see
+   `LISP <http://tools.ietf.org/search/rfc6830>`__.
+
+-  **RPC/RESTCONF:** This is the auto-generated RESTCONF-based
+   northbound API. This module enables defining key-EID associations as
+   well as adding mapping information through the Map Server. Key-EID
+   associations and mappings can also be queried via this API.
+
+-  **GUI:** This module enables adding and querying the mapping service
+   through a GUI based on ODL DLUX.
+
+-  **Neutron:** This module implements the OpenDaylight Neutron Service
+   APIs. It provides integration between the LISP service and the
+   OpenDaylight Neutron service, and thus OpenStack.
+
+-  **Java API:** The API module exposes the Map Server and Map Resolver
+   capabilities via a Java API.
+
+-  **LISP Proto:** This module includes LISP protocol dependent data
+   types and associated processing.
+
+-  **In Memory DB:** This module includes the in-memory database
+   implementation of the mapping service.
+
+-  **LISP Southbound Plugin:** This plugin enables data plane devices
+   that support the LISP control plane protocol (see
+   `LISP <http://tools.ietf.org/search/rfc6830>`__) to register and
+   query mappings in the LISP Flow Mapping service via the LISP control
+   plane protocol.
+
+Configuring LISP Flow Mapping
+-----------------------------
+
+In order to use the LISP mapping service for registering EID to RLOC
+mappings from northbound or southbound, keys have to be defined for the
+EID prefixes first. Once a key is defined for an EID prefix, it can be
+used to add mappings for that EID prefix multiple times. If the service
+is going to be used to process Map-Register messages from the southbound
+LISP plugin, the same key must be used by the data plane device to
+create the authentication data in the Map-Register messages for the
+associated EID prefix.
+
+The ``etc/custom.properties`` file in the Karaf distribution allows
+configuration of several OpenDaylight parameters. The LISP service has
+the following properties that can be adjusted:
+
+**lisp.mappingOverwrite** (default: *true*)
+    Configures handling of mapping updates. When set to *true* (the
+    default), on a mapping update (either through the southbound plugin
+    via a Map-Register message or through a northbound API PUT REST
+    call) the existing RLOC set associated with an EID prefix is
+    overwritten. When set to *false*, the RLOCs of the update are merged
+    into the existing set.
+
+**lisp.smr** (default: *false*)
+    Enables/disables the `Solicit-Map-Request
+    (SMR) <http://tools.ietf.org/html/rfc6830#section-6.6.2>`__
+    functionality. SMR is a method to notify changes in an EID-to-RLOC
+    mapping to "subscribers". The LISP service considers all
+    Map-Request’s source RLOC as a subscriber to the requested EID
+    prefix, and will send an SMR control message to that RLOC if the
+    mapping changes.
+
+**lisp.elpPolicy** (default: *default*)
+    Configures how to build a Map-Reply southbound message from a
+    mapping containing an Explicit Locator Path (ELP) RLOC. It is used
+    for compatibility with data plane devices that don’t understand the
+    ELP LCAF format. The *default* setting doesn’t alter the mapping,
+    returning all RLOCs unmodified. The *both* setting adds a new RLOC
+    to the mapping, with a lower priority than the ELP, that is the next
+    hop in the service chain. To determine the next hop, it searches for
+    the source RLOC of the Map-Request in the ELP, and chooses the next
+    hop if it exists; otherwise it chooses the first hop. The *replace*
+    setting adds a new RLOC using the same algorithm as the *both*
+    setting, but using the original priority of the ELP RLOC, which is
+    removed from the mapping.
+
+**lisp.lookupPolicy** (default: *northboundFirst*)
+    Configures the mapping lookup algorithm. When set to
+    *northboundFirst* mappings programmed through the northbound API
+    will take precedence. If no northbound-programmed mappings exist,
+    then the mapping service will return mappings registered through the
+    southbound plugin, if any exist. When set to
+    *northboundAndSouthbound*, the mapping programmed by the northbound
+    is returned, updated with the up/down status of these mappings as
+    reported by the southbound plugin (if any).
+
+**lisp.mappingMerge** (default: *false*)
+    Configures the merge policy on the southbound registrations through
+    the LISP SB Plugin. When set to *false*, only the latest mapping
+    registered through the SB plugin is valid in the southbound mapping
+    database, independent of which device it came from. When set to
+    *true*, mappings for the same EID registered by different devices
+    are merged together and a union of the locators is maintained as the
+    valid mapping for that EID.
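+
+Taken together, the defaults above correspond to the following
+``etc/custom.properties`` fragment (shown for reference only; the values
+listed are the documented defaults, so you only need to add the lines
+you want to change):
+
+::
+
+    lisp.mappingOverwrite = true
+    lisp.smr = false
+    lisp.elpPolicy = default
+    lisp.lookupPolicy = northboundFirst
+    lisp.mappingMerge = false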
+
+Textual Conventions for LISP Address Formats
+--------------------------------------------
+
+In addition to the more common IPv4, IPv6 and MAC address data types,
+the LISP control plane supports arbitrary `Address Family
+Identifiers <http://www.iana.org/assignments/address-family-numbers>`__
+assigned by IANA, and in addition to those the `LISP Canonical Address
+Format (LCAF) <https://tools.ietf.org/html/draft-ietf-lisp-lcaf>`__.
+
+The LISP Flow Mapping project in OpenDaylight implements support for
+many of these different address formats, the full list being summarized
+in the following table. While some of the address formats have a
+well-defined and widely used textual representation, many don’t. It
+became necessary to define a convention for the text rendering of all
+implemented address types in logs, URLs, input fields, etc. The table
+below lists the supported formats, along with their AFI number and LCAF
+type, the prefix used for disambiguation of potential overlap, and
+example output.
+
++----------------------------+-------+------+----------+--------------------------------------+
+| Name                       | AFI   | LCAF | Prefix   | Text Rendering                       |
++============================+=======+======+==========+======================================+
+| **No Address**             | 0     | -    | no:      | No Address Present                   |
++----------------------------+-------+------+----------+--------------------------------------+
+| **IPv4 Prefix**            | 1     | -    | ipv4:    | 192.0.2.0/24                         |
++----------------------------+-------+------+----------+--------------------------------------+
+| **IPv6 Prefix**            | 2     | -    | ipv6:    | 2001:db8::/32                        |
++----------------------------+-------+------+----------+--------------------------------------+
+| **MAC Address**            | 16389 | -    | mac:     | 00:00:5E:00:53:00                    |
++----------------------------+-------+------+----------+--------------------------------------+
+| **Distinguished Name**     | 17    | -    | dn:      | stringAsIs                           |
++----------------------------+-------+------+----------+--------------------------------------+
+| **AS Number**              | 18    | -    | as:      | AS64500                              |
++----------------------------+-------+------+----------+--------------------------------------+
+| **AFI List**               | 16387 | 1    | list:    | {192.0.2.1,192.0.2.2,2001:db8::1}    |
++----------------------------+-------+------+----------+--------------------------------------+
+| **Instance ID**            | 16387 | 2    | -        | [223] 192.0.2.0/24                   |
++----------------------------+-------+------+----------+--------------------------------------+
+| **Application Data**       | 16387 | 4    | appdata: | 192.0.2.1!128!17!80-81!6667-7000     |
++----------------------------+-------+------+----------+--------------------------------------+
+| **Explicit Locator Path**  | 16387 | 10   | elp:     | {192.0.2.1→192.0.2.2\|lps→192.0.2.3} |
++----------------------------+-------+------+----------+--------------------------------------+
+| **Source/Destination Key** | 16387 | 12   | srcdst:  | 192.0.2.1/32\|192.0.2.2/32           |
++----------------------------+-------+------+----------+--------------------------------------+
+| **Key/Value Address Pair** | 16387 | 15   | kv:      | 192.0.2.1⇒192.0.2.2                  |
++----------------------------+-------+------+----------+--------------------------------------+
+| **Service Path**           | 16387 | N/A  | sp:      | 42(3)                                |
++----------------------------+-------+------+----------+--------------------------------------+
+
+Table: LISP Address Formats
+
+Please note that the forward slash character ``/`` typically separating
+IPv4 and IPv6 addresses from the mask length is transformed into ``%2f``
+when used in a URL.
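+
+For example, the tutorial’s client EID prefix is transformed as follows
+when it has to appear in a URL:
+
+::
+
+    1.1.1.1/32  →  1.1.1.1%2f32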
+
+Karaf commands
+--------------
+
+In this section we will discuss two types of Karaf commands: built-in
+and LISP-specific. Some built-in commands are quite useful and are
+needed for the tutorial, so they are discussed here. A reference of all
+LISP-specific commands added by the LISP Flow Mapping project is also
+included. They are useful mostly for debugging.
+
+Useful built-in commands
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+``help``
+    Lists all available commands, with a short description of each.
+
+``help <command_name>``
+    Show detailed help about a specific command.
+
+``feature:list [-i]``
+    Show all locally available features in the Karaf container. The
+    ``-i`` option lists only features that are currently installed. It
+    is possible to use ``| grep`` to filter the output (for all
+    commands, not just this one).
+
+``feature:install <feature_name>``
+    Install feature ``feature_name``.
+
+``log:set <level> <class>``
+    Set the log level for ``class`` to ``level``. The default log level
+    for all classes is INFO. For debugging, or learning about LISP
+    internals it is useful to run
+    ``log:set TRACE org.opendaylight.lispflowmapping`` right after Karaf
+    starts up.
+
+``log:display``
+    Outputs the log file to the console, and returns control to the
+    user.
+
+``log:tail``
+    Continuously shows log output, requires ``Ctrl+C`` to return to the
+    console.
+
+LISP specific commands
+~~~~~~~~~~~~~~~~~~~~~~
+
+The available LISP commands can always be obtained by
+``help mappingservice``. Currently they are:
+
+``mappingservice:addkey``
+    Add the default password ``password`` for the IPv4 EID prefix
+    0.0.0.0/0 (all addresses). This is useful when experimenting with
+    southbound devices, and using the REST interface would be cumbersome
+    for whatever reason.
+
+``mappingservice:mappings``
+    Show the list of all mappings stored in the internal non-persistent
+    data store (the DAO), listing the full data structure. The output is
+    not human friendly, but can be used for debugging.
+
+LISP Flow Mapping Karaf Features
+--------------------------------
+
+LISP Flow Mapping has the following Karaf features that can be installed
+from the Karaf console:
+
+``odl-lispflowmapping-msmr``
+    This includes the core features required to use the LISP Flow
+    Mapping Service such as mapping service and the LISP southbound
+    plugin.
+
+``odl-lispflowmapping-ui``
+    This includes the GUI module for the LISP Mapping Service.
+
+``odl-lispflowmapping-neutron``
+    This is the experimental Neutron provider module for LISP mapping
+    service.
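+
+For example, all three features can be installed from the Karaf console
+with a single ``feature:install`` invocation (this assumes the standard
+Karaf behavior of accepting multiple feature names in one command):
+
+::
+
+    feature:install odl-lispflowmapping-msmr odl-lispflowmapping-ui odl-lispflowmapping-neutron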
+
+Tutorials
+---------
+
+This section provides a tutorial demonstrating various features in this
+service.
+
+Creating a LISP overlay
+~~~~~~~~~~~~~~~~~~~~~~~
+
+This section provides instructions to set up a LISP network of three
+nodes (one "client" node and two "server" nodes) using LISPmob as data
+plane LISP nodes and the LISP Flow Mapping project from OpenDaylight as
+the LISP programmable mapping system for the LISP network.
+
+Overview
+^^^^^^^^
+
+The steps shown below will demonstrate setting up a LISP network between
+a client and two servers, then performing a failover between the two
+"server" nodes.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+-  **OpenDaylight Beryllium**
+
+-  **The Postman Chrome App**: the most convenient way to follow along
+   with this tutorial is to use the `Postman Chrome
+   App <https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop?hl=en>`__
+   to edit and send the requests. The project git repository hosts a
+   collection of the requests that are used in this tutorial in the
+   ``resources/tutorial/Beryllium_Tutorial.json.postman_collection``
+   file. You can import this file to Postman by clicking *Import* at the
+   top, choosing *Download from link* and then entering the following
+   URL:
+   ``https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=blob_plain;f=resources/tutorial/Beryllium_Tutorial.json.postman_collection;hb=refs/heads/stable/beryllium``.
+   Alternatively, you can save the file on your machine, or if you have
+   the repository checked out, you can import from there. You will need
+   to create a new Postman Environment and define some variables within:
+   ``controllerHost`` set to the hostname or IP address of the machine
+   running the ODL instance, and ``restconfPort`` to 8181, if you didn’t
+   modify the default controller settings.
+
+-  **LISPmob version 0.5.x**: the README.md lists the dependencies
+   needed to build it from source.
+
+-  **A virtualization platform**
+
+Target Environment
+^^^^^^^^^^^^^^^^^^
+
+The three LISP data plane nodes and the LISP mapping system are assumed
+to be running in Linux virtual machines, which have the ``eth0``
+interface in NAT mode to allow outside internet access and ``eth1``
+connected to a host-only network, with the following IP addresses
+(please adjust configuration files, JSON examples, etc. accordingly if
+you’re using another addressing scheme):
+
++--------------------------+--------------------------+--------------------------+
+| Node                     | Node Type                | IP Address               |
++==========================+==========================+==========================+
+| **controller**           | OpenDaylight             | 192.168.16.11            |
++--------------------------+--------------------------+--------------------------+
+| **client**               | LISPmob                  | 192.168.16.30            |
++--------------------------+--------------------------+--------------------------+
+| **server1**              | LISPmob                  | 192.168.16.31            |
++--------------------------+--------------------------+--------------------------+
+| **server2**              | LISPmob                  | 192.168.16.32            |
++--------------------------+--------------------------+--------------------------+
+| **service-node**         | LISPmob                  | 192.168.16.33            |
++--------------------------+--------------------------+--------------------------+
+
+Table: Nodes in the tutorial
+
+.. note::
+
+    While the tutorial uses LISPmob as the data plane, it could be any
+    LISP-enabled hardware or software router (commercial/open source).
+
+Instructions
+^^^^^^^^^^^^
+
+The steps below use the command-line tool cURL to talk to the LISP Flow
+Mapping RPC REST API, so that you can see the actual request URLs and
+body content on the page.
+
+1.  Install and run OpenDaylight Beryllium release on the controller VM.
+    Please follow the general OpenDaylight Beryllium Installation Guide
+    for this step. Once the OpenDaylight controller is running install
+    the *odl-lispflowmapping-msmr* feature from the Karaf CLI:
+
+    ::
+
+        feature:install odl-lispflowmapping-msmr
+
+    It takes quite a while to load and initialize all features and their
+    dependencies. It’s worth running the command ``log:tail`` in the
+    Karaf console to see when the log output is winding down, and
+    continue with the tutorial after that.
+
+2.  Install LISPmob on the **client**, **server1**, **server2**, and
+    **service-node** VMs following the installation instructions `from
+    the LISPmob README
+    file <https://github.com/LISPmob/lispmob#software-prerequisites>`__.
+
+3.  Configure the LISPmob installations from the previous step. Starting
+    from the ``lispd.conf.example`` file in the distribution, set the
+    EID in each ``lispd.conf`` file from the IP address space selected
+    for your virtual/LISP network. In this tutorial the EID of the
+    **client** is set to 1.1.1.1/32, and that of **server1** and
+    **server2** to 2.2.2.2/32.
+
+4.  Set the RLOC interface to ``eth1`` in each ``lispd.conf`` file. LISP
+    will determine the RLOC (IP address of the corresponding VM) based
+    on this interface.
+
+5.  Set the Map-Resolver address to the IP address of the
+    **controller**, and on the **client** the Map-Server too. On
+    **server1** and **server2** set the Map-Server to something else, so
+    that it doesn’t interfere with the mappings on the controller, since
+    we’re going to program them manually.
+
+6.  Modify the "key" parameter in each ``lispd.conf`` file to a
+    key/password of your choice (*password* in this tutorial).
+
+    .. note::
+
+        The ``resources/tutorial`` directory in the *stable/beryllium*
+        branch of the project git repository has the files used in the
+        tutorial `checked
+        in <https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=tree;f=resources/tutorial;hb=refs/heads/stable/beryllium>`__,
+        so you can just copy the files to ``/root/lispd.conf`` on the
+        respective VMs. You will also find the JSON files referenced
+        below in the same directory.
+
+7.  Define a key and EID prefix association in OpenDaylight using the
+    RPC REST API for the **client** EID (1.1.1.1/32) to allow
+    registration from the southbound. Since the mappings for the server
+    EID will be configured from the REST API, no such association is
+    necessary. Run the below command on the **controller** (or any
+    machine that can reach **controller**, by replacing *localhost* with
+    the IP address of **controller**).
+
+    ::
+
+        curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
+            http://localhost:8181/restconf/operations/odl-mappingservice:add-key \
+            --data @add-key.json
+
+    where the content of the *add-key.json* file is the following:
+
+    .. code:: json
+
+        {
+            "input": {
+                "eid": {
+                    "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
+                    "ipv4-prefix": "1.1.1.1/32"
+                },
+                "mapping-authkey": {
+                    "key-string": "password",
+                    "key-type": 1
+                }
+            }
+        }
+
+8.  Verify that the key is added properly by requesting the following
+    URL:
+
+    ::
+
+        curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
+            http://localhost:8181/restconf/operations/odl-mappingservice:get-key \
+            --data @get1.json
+
+    where the content of the *get1.json* file can be derived from the
+    *add-key.json* file by removing the *mapping-authkey* field. The
+    output of the above invocation should look like this:
+
+    ::
+
+        {"output":{"mapping-authkey":{"key-type":1,"key-string":"password"}}}
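+
+    For reference, the *get1.json* file derived as described above
+    contains the following:
+
+    .. code:: json
+
+        {
+            "input": {
+                "eid": {
+                    "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
+                    "ipv4-prefix": "1.1.1.1/32"
+                }
+            }
+        }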
+
+9.  Run the ``lispd`` LISPmob daemon on all VMs:
+
+    ::
+
+        lispd -f /root/lispd.conf
+
+10. The **client** LISPmob node should now register its EID-to-RLOC
+    mapping in OpenDaylight. To verify, you can look up the
+    corresponding EIDs via the REST API:
+
+    ::
+
+        curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
+            http://localhost:8181/restconf/operations/odl-mappingservice:get-mapping \
+            --data @get1.json
+
+    An alternative way to retrieve mappings from ODL through the
+    southbound interface is the
+    `lig <https://github.com/davidmeyer/lig>`__ open source tool.
+
+11. Register the EID-to-RLOC mapping of the server EID 2.2.2.2/32 in
+    the controller, pointing to **server1** and **server2**, with a
+    higher preference for **server1**:
+
+    ::
+
+        curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
+            http://localhost:8181/restconf/operations/odl-mappingservice:add-mapping \
+            --data @mapping.json
+
+    where the *mapping.json* file looks like this:
+
+    .. code:: json
+
+        {
+            "input": {
+                "mapping-record": {
+                    "recordTtl": 1440,
+                    "action": "NoAction",
+                    "authoritative": true,
+                    "eid": {
+                        "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
+                        "ipv4-prefix": "2.2.2.2/32"
+                    },
+                    "LocatorRecord": [
+                        {
+                            "locator-id": "server1",
+                            "priority": 1,
+                            "weight": 1,
+                            "multicastPriority": 255,
+                            "multicastWeight": 0,
+                            "localLocator": true,
+                            "rlocProbed": false,
+                            "routed": true,
+                            "rloc": {
+                                "address-type": "ietf-lisp-address-types:ipv4-afi",
+                                "ipv4": "192.168.16.31"
+                            }
+                        },
+                        {
+                            "locator-id": "server2",
+                            "priority": 2,
+                            "weight": 1,
+                            "multicastPriority": 255,
+                            "multicastWeight": 0,
+                            "localLocator": true,
+                            "rlocProbed": false,
+                            "routed": true,
+                            "rloc": {
+                                "address-type": "ietf-lisp-address-types:ipv4-afi",
+                                "ipv4": "192.168.16.32"
+                            }
+                        }
+                    ]
+                }
+            }
+        }
+
+    Here the priority of the second RLOC (192.168.16.32 - **server2**)
+    is 2, a higher numeric value than the priority of 192.168.16.31,
+    which is 1. This policy is saying that **server1** is preferred to
+    **server2** for reaching EID 2.2.2.2/32. Note that a lower priority
+    value means a higher preference in LISP.
+
+12. Verify the correct registration of the 2.2.2.2/32 EID:
+
+    ::
+
+        curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
+            http://localhost:8181/restconf/operations/odl-mappingservice:get-mapping \
+            --data @get2.json
+
+    where *get2.json* can be derived from *get1.json* by changing the
+    content of the *ipv4-prefix* field from *1.1.1.1/32* to *2.2.2.2/32*.
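+
+    Following that derivation, *get2.json* contains:
+
+    .. code:: json
+
+        {
+            "input": {
+                "eid": {
+                    "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
+                    "ipv4-prefix": "2.2.2.2/32"
+                }
+            }
+        }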
+
+13. Now the LISP network is up. To verify, log into the **client** VM
+    and ping the server EID:
+
+    ::
+
+        ping 2.2.2.2
+
+14. Let’s test fail-over now. Suppose you had a service on **server1**
+    which became unavailable, but **server1** itself is still reachable.
+    LISP will not automatically fail over, even if the mapping for
+    2.2.2.2/32 has two locators, since both locators are still
+    reachable, and the client keeps using the one with the higher
+    preference (lower priority value). To force a failover, we need to
+    give **server2** the higher preference. Using the *mapping.json*
+    file above, swap the priority values between the two locators (lines
+    14 and 28 in *mapping.json*) and repeat the request from step 11.
+    You can also repeat step 12 to see if the mapping is correctly
+    registered. If you leave the ping running, and monitor the traffic
+    using Wireshark, you can see that the ping traffic to 2.2.2.2 will
+    be diverted from the **server1** RLOC to the **server2** RLOC.
+
+    With the default OpenDaylight configuration the failover should be
+    near instantaneous (we observed 3 lost pings in the worst case),
+    because of the LISP `Solicit-Map-Request (SMR)
+    mechanism <http://tools.ietf.org/html/rfc6830#section-6.6.2>`__ that
+    can ask a LISP data plane element to update its mapping for a
+    certain EID (enabled by default). It is controlled by the
+    ``lisp.smr`` variable in ``etc/custom.properties``. When enabled,
+    any mapping change from the RPC interface will trigger an SMR packet
+    to all data plane elements that have requested the mapping in the
+    last 24 hours (this value was chosen because it’s the default TTL of
+    Cisco IOS xTR mapping registrations). If disabled, ITRs keep their
+    mappings until the TTL specified in the Map-Reply expires.
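+
+    For reference, after the swap the two locator records in
+    *mapping.json* carry these priorities (all other fields stay
+    unchanged):
+
+    .. code:: json
+
+        [
+            { "locator-id": "server1", "priority": 2 },
+            { "locator-id": "server2", "priority": 1 }
+        ]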
+
+15. To add a service chain into the path from the client to the server,
+    we can use an Explicit Locator Path, specifying the **service-node**
+    as the first hop and **server1** (or **server2**) as the second hop.
+    The following will achieve that:
+
+    ::
+
+        curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
+            http://localhost:8181/restconf/operations/odl-mappingservice:add-mapping \
+            --data @elp.json
+
+    where the *elp.json* file is as follows:
+
+    .. code:: json
+
+        {
+            "input": {
+                "mapping-record": {
+                    "recordTtl": 1440,
+                    "action": "NoAction",
+                    "authoritative": true,
+                    "eid": {
+                        "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
+                        "ipv4-prefix": "2.2.2.2/32"
+                    },
+                    "LocatorRecord": [
+                        {
+                            "locator-id": "ELP",
+                            "priority": 1,
+                            "weight": 1,
+                            "multicastPriority": 255,
+                            "multicastWeight": 0,
+                            "localLocator": true,
+                            "rlocProbed": false,
+                            "routed": true,
+                            "rloc": {
+                                "address-type": "ietf-lisp-address-types:explicit-locator-path-lcaf",
+                                "explicit-locator-path": {
+                                    "hop": [
+                                        {
+                                            "hop-id": "service-node",
+                                            "address": "192.168.16.33",
+                                            "lrs-bits": "strict"
+                                        },
+                                        {
+                                            "hop-id": "server1",
+                                            "address": "192.168.16.31",
+                                            "lrs-bits": "strict"
+                                        }
+                                    ]
+                                }
+                            }
+                        }
+                    ]
+                }
+            }
+        }
+
+    After the mapping for 2.2.2.2/32 is updated with the above, the ICMP
+    traffic from **client** to **server1** will flow through the
+    **service-node**. You can confirm this in the LISPmob logs, or by
+    sniffing the traffic on either the **service-node** or **server1**.
+    Note that service chains are unidirectional, so unless another ELP
+    mapping is added for the return traffic, packets will go from
+    **server1** to **client** directly.
+
+16. Suppose the **service-node** is actually a firewall, and traffic is
+    diverted there to support access control lists (ACLs). In this
+    tutorial that can be emulated by using ``iptables`` firewall rules
+    in the **service-node** VM. To deny traffic on the service chain
+    defined above, the following rule can be added:
+
+    ::
+
+        iptables -A OUTPUT --dst 192.168.16.31 -j DROP
+
+    The ping from the **client** should now have stopped.
+
+    In this case the ACL is done on the destination RLOC. There is an
+    effort underway in the LISPmob community to allow filtering on EIDs,
+    which is the more logical place to apply ACLs.
+
+17. To delete the rule and restore connectivity on the service chain,
+    delete the ACL by issuing the following command:
+
+    ::
+
+        iptables -D OUTPUT --dst 192.168.16.31 -j DROP
+
+    which should restore connectivity.
+
+LISP Flow Mapping Support
+-------------------------
+
+For support, the lispflowmapping project can be reached by emailing the
+developer mailing list (lispflowmapping-dev@lists.opendaylight.org) or
+on the #opendaylight-lispflowmapping IRC channel on irc.freenode.net.
+
+Additional information is also available on the `Lisp Flow Mapping
+wiki <https://wiki.opendaylight.org/view/OpenDaylight_Lisp_Flow_Mapping:Main>`__.
+
+Clustering in LISP Flow Mapping
+-------------------------------
+
+Documentation on setting up a 3-node OpenDaylight cluster is available
+on the following `OpenDaylight wiki
+page <https://wiki.opendaylight.org/view/Running_and_testing_an_OpenDaylight_Cluster#Three-node_cluster>`__.
+
+To turn on clustering in LISP Flow Mapping, run the **deploy.py**
+script from the
+`integration-test <https://git.opendaylight.org/gerrit/integration/test>`__
+project, located at *tools/clustering/cluster-deployer/deploy.py*. A
+complete deploy.py command looks like this:
+
+   ::
+
+       {path_to_integration_test_project}/tools/clustering/cluster-deployer/deploy.py
+           --distribution {path_to_distribution_in_zip_format}
+           --rootdir {dir_at_remote_host_where_copy_odl_distribution}
+           --hosts {ip1},{ip2},{ip3}
+           --clean
+           --template lispflowmapping
+           --rf 3
+           --user {user_name_of_remote_hosts}
+           --password {password_to_remote_hosts}
+
+Running this script deploys the specified **distribution** to the remote
+**hosts** identified by their IP addresses, using the supplied credentials
+(**user** and **password**). The distribution is copied to the specified
+**rootdir**. As part of the deployment, a **template** is applied: a set of
+controller configuration files that differ from the standard ones. In this
+case the template is located in the
+*{path\_to\_integration\_test\_project}/tools/clustering/cluster-deployer/lispflowmapping*
+directory.
+
+The lispflowmapping templates are part of the integration-test project.
+There are 5 template files:
+
+-  akka.conf.template
+
+-  jolokia.xml.template
+
+-  module-shards.conf.template
+
+-  modules.conf.template
+
+-  org.apache.karaf.features.cfg.template
+
+After the distribution is copied, it is unzipped and started on all of the
+specified **hosts** in a cluster-aware manner.
+
+Remarks
+~~~~~~~
+
+It is necessary to have:
+
+-  the **unzip** program installed on all of the hosts
+
+-  all remote hosts' /etc/sudoers files set to not **requiretty** (this
+   should only matter on Debian hosts)
+
diff --git a/docs/user-guide/nemo-user-guide.rst b/docs/user-guide/nemo-user-guide.rst
new file mode 100644 (file)
index 0000000..39be652
--- /dev/null
@@ -0,0 +1,91 @@
+NEtwork MOdeling (NEMO)
+=======================
+
+This section describes how to use the NEMO feature in OpenDaylight
+and contains configuration, administration, and management
+sections for the feature.
+
+Overview
+--------
+
+As networks become more complicated, users and applications must handle
+more complex configurations to deploy new services. The NEMO project aims
+to simplify the use of the network by providing a new intent northbound
+interface (NBI). Instead of tons of APIs, users and applications just
+describe their intent without caring about complex physical devices and
+implementation details. The NEMO engine translates the intent into
+detailed configurations on the devices. A typical scenario is that a user
+just needs to specify the nodes on which to implement a VPN, without
+considering which technique is used.
+
+NEMO Engine Architecture
+------------------------
+
+-  NEMO API
+
+   -  The NEMO API provides users with the NEMO model, which guides them in
+      constructing instances of intent and instances of the predefined types.
+
+-  NEMO REST
+
+   -  NEMO REST provides users with REST APIs to access the NEMO engine; that
+      is, users can transmit intent instances to the NEMO engine through
+      basic REST methods.
+
+-  NEMO UI
+
+   -  The NEMO UI provides users with a visual interface to deploy services
+      with the NEMO model and displays their state in the DLUX UI.
+
+Installing NEMO engine
+----------------------
+
+To install NEMO engine, download OpenDaylight and use the Karaf console
+to install the following feature:
+
+::
+
+    odl-nemo-engine-ui
+
+Administering or Managing NEMO Engine
+-------------------------------------
+
+After installing the NEMO engine features, users can express their intent
+with the NEMO UI or with the REST APIs in apidoc.
+
+Go to ``http://{controller-ip}:8181/index.html``. From this interface,
+users can go to the NEMO UI, use the tabs and input boxes to enter an
+intent, and see the state of the intent deployment in the displayed
+image.
+
+Go to ``http://{controller-ip}:8181/apidoc/explorer/index.html``. From
+this interface, users can use the REST methods "POST", "PUT", "GET" and
+"DELETE" to deploy an intent or query the state of its deployment.
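
The apidoc calls above can also be scripted. Below is a minimal sketch using
only the Python standard library; the ``restconf_request`` helper and the
placeholder path ``/restconf/config`` are illustrative and not part of NEMO
itself.

```python
import base64
import urllib.request

def restconf_request(controller_ip, method, path, body=None,
                     user="admin", password="admin"):
    """Build an authenticated RESTCONF request for the controller.

    No connection is made here; pass the result to
    urllib.request.urlopen() against a running controller.
    """
    url = "http://%s:8181%s" % (controller_ip, path)
    data = body.encode() if isinstance(body, str) else body
    req = urllib.request.Request(url, data=data, method=method)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Content-Type", "application/json")
    return req

# Build (but do not send) a GET against a placeholder path.
req = restconf_request("127.0.0.1", "GET", "/restconf/config")
print(req.get_method(), req.full_url)
```

The same helper can be reused for "POST", "PUT" and "DELETE" by changing the
``method`` argument and supplying a ``body``.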
+
+Tutorials
+---------
+
+Below are tutorials for NEMO Engine.
+
+Using NEMO Engine
+~~~~~~~~~~~~~~~~~
+
+The purpose of this tutorial is to describe how to use the NEMO UI to
+deploy an intent.
+Overview
+^^^^^^^^
+
+This tutorial describes how to use the NEMO UI to check the available
+resources, the steps to deploy a service, and the resulting state.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+To follow this tutorial, a physical or virtual network should exist, and
+OpenDaylight with the NEMO engine must be deployed on one host.
+
+Target Environment
+^^^^^^^^^^^^^^^^^^
+
+The intent expressed with the NEMO model depends on network resources,
+so the user needs to have sufficient resources available; otherwise, the
+deployment of the intent will fail.
+
+Instructions
+^^^^^^^^^^^^
+
+-  Run the OpenDaylight distribution and install odl-nemo-engine-ui from the Karaf console.
+-  Go to ``http://{controller-ip}:8181/index.html``, and sign in.
+-  Go to the NEMO UI interface and register a new user with a user name,
+   password, and tenant.
+-  Check the existing resources to see if they are consistent with yours.
+-  Deploy a service with the NEMO model using the create intent menu.
+
diff --git a/docs/user-guide/netconf-user-guide.rst b/docs/user-guide/netconf-user-guide.rst
new file mode 100644 (file)
index 0000000..e2bb698
--- /dev/null
@@ -0,0 +1,1268 @@
+NETCONF User Guide
+==================
+
+Overview
+--------
+
+NETCONF is an XML-based protocol used for configuring and monitoring
+devices in the network. The base NETCONF protocol is described in
+`RFC-6241 <http://tools.ietf.org/html/rfc6241>`__.
+
+**NETCONF in OpenDaylight:**
+
+OpenDaylight supports the NETCONF protocol as a northbound server as
+well as a southbound plugin. It also includes a set of test tools for
+simulating NETCONF devices and clients.
+
+Southbound (netconf-connector)
+------------------------------
+
+The NETCONF southbound plugin is capable of connecting to remote NETCONF
+devices and exposing their configuration/operational datastores, RPCs
+and notifications as MD-SAL mount points. These mount points allow
+applications and remote users (over RESTCONF) to interact with the
+mounted devices.
+
+In terms of RFCs, the connector supports:
+
+-  `RFC-6241 <http://tools.ietf.org/html/rfc6241>`__
+
+-  `RFC-5277 <https://tools.ietf.org/html/rfc5277>`__
+
+-  `RFC-6022 <https://tools.ietf.org/html/rfc6022>`__
+
+-  `draft-ietf-netconf-yang-library-06 <https://tools.ietf.org/html/draft-ietf-netconf-yang-library-06>`__
+
+**Netconf-connector is fully model-driven (utilizing the YANG modeling
+language) so in addition to the above RFCs, it supports any
+data/RPC/notifications described by a YANG model that is implemented by
+the device.**
+
+    **Tip**
+
+    NETCONF southbound can be activated by installing
+    ``odl-netconf-connector-all`` Karaf feature.
+
+Netconf-connector configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are two ways to configure netconf-connector: NETCONF or RESTCONF.
+This guide focuses on using RESTCONF.
+
+Default configuration
+^^^^^^^^^^^^^^^^^^^^^
+
+The default configuration contains all the necessary dependencies (file:
+01-netconf.xml) and a single instance of netconf-connector (file:
+99-netconf-connector.xml) called **controller-config** which connects
+itself to the NETCONF northbound in OpenDaylight in a loopback fashion.
+The connector mounts the NETCONF server for config-subsystem in order to
+enable RESTCONF protocol for config-subsystem. This RESTCONF still goes
+via NETCONF, but using RESTCONF is much more user friendly than using
+NETCONF.
+
+Spawning additional netconf-connectors while the controller is running
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Preconditions:
+
+1. OpenDaylight is running
+
+2. In Karaf, you must have the netconf-connector installed (at the Karaf
+   prompt, type: ``feature:install odl-netconf-connector-all``); the
+   loopback NETCONF mountpoint will be automatically configured and
+   activated
+
+3. Wait until the log displays the following entry:
+   RemoteDevice{controller-config}: NETCONF connector initialized
+   successfully
+
+To configure a new netconf-connector you need to send the following
+request to RESTCONF:
+
+POST
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules
+
+Headers:
+
+-  Accept application/xml
+
+-  Content-Type application/xml
+
+::
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
+      <name>new-netconf-device</name>
+      <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">127.0.0.1</address>
+      <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
+      <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</username>
+      <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</password>
+      <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
+      <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
+        <name>global-event-executor</name>
+      </event-executor>
+      <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
+        <name>binding-osgi-broker</name>
+      </binding-registry>
+      <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
+        <name>dom-broker</name>
+      </dom-registry>
+      <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
+        <name>global-netconf-dispatcher</name>
+      </client-dispatcher>
+      <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
+        <name>global-netconf-processing-executor</name>
+      </processing-executor>
+      <keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
+        <name>global-netconf-ssh-scheduled-executor</name>
+      </keepalive-executor>
+    </module>
+
+This spawns a new netconf-connector which tries to connect to (or mount)
+a NETCONF device at 127.0.0.1 and port 830. You can check
+config-subsystem’s configuration datastore; the new netconf-connector
+will now be present there. Just invoke:
+
+GET
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules
+
+The response will contain the module for new-netconf-device.
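
Before sending the request, the payload can be inspected programmatically as
a sanity check. The sketch below uses only the Python standard library, with
the payload trimmed to a few elements for illustration; it extracts the
connector name, address, and port:

```python
import xml.etree.ElementTree as ET

CFG_NS = "urn:opendaylight:params:xml:ns:yang:controller:config"
NC_NS = "urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf"

# Trimmed version of the payload above, for illustration only.
MODULE_XML = """\
<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
  <name>new-netconf-device</name>
  <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">127.0.0.1</address>
  <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
</module>
"""

root = ET.fromstring(MODULE_XML)
name = root.findtext("{%s}name" % CFG_NS)
address = root.findtext("{%s}address" % NC_NS)
port = int(root.findtext("{%s}port" % NC_NS))
print(name, address, port)
```

Note that child elements such as ``address`` live in the connector namespace,
not the ``config`` namespace of the root ``module`` element.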
+
+Right after the new netconf-connector is created, it writes some useful
+metadata into the datastore of MD-SAL under the network-topology
+subtree. This metadata can be found at:
+
+GET
+http://localhost:8181/restconf/operational/network-topology:network-topology/
+
+Information about connection status, device capabilities, etc. can be
+found there.
+
+Connecting to a device not supporting NETCONF monitoring
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The netconf-connector in OpenDaylight relies on ietf-netconf-monitoring
+support when connecting to a remote NETCONF device. The
+ietf-netconf-monitoring support allows netconf-connector to list and
+download all YANG schemas that are used by the device. NETCONF connector
+can only communicate with a device if it knows the set of used schemas
+(or at least a subset). However, some devices use YANG models internally
+but do not support NETCONF monitoring. Netconf-connector can also
+communicate with these devices, but you have to side-load the necessary
+YANG models into OpenDaylight’s YANG model cache for netconf-connector.
+In general there are 2 situations you might encounter:
+
+**1. NETCONF device does not support ietf-netconf-monitoring but it does
+list all its YANG models as capabilities in HELLO message**
+
+This could be a device that internally uses only ietf-inet-types YANG
+model with revision 2010-09-24. In the HELLO message that is sent from
+this device there is this capability reported:
+
+::
+
+    urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&revision=2010-09-24
+
+**For such devices you only need to put the schema into folder
+cache/schema inside your Karaf distribution.**
+
+    **Important**
+
+    The file with YANG schema for ietf-inet-types has to be called
+    ietf-inet-types@2010-09-24.yang. It is the required naming format of
+    the cache.
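
The expected file name can be derived mechanically from the capability URI.
A small illustrative sketch, using only the Python standard library (the
``cache_file_name`` helper is ours, not part of OpenDaylight):

```python
from urllib.parse import parse_qs, urlsplit

def cache_file_name(capability_uri):
    """Derive the expected cache file name, <module>@<revision>.yang,
    from a YANG-module-based capability URI."""
    params = parse_qs(urlsplit(capability_uri).query)
    return "%s@%s.yang" % (params["module"][0], params["revision"][0])

uri = ("urn:ietf:params:xml:ns:yang:ietf-inet-types"
       "?module=ietf-inet-types&revision=2010-09-24")
print(cache_file_name(uri))  # ietf-inet-types@2010-09-24.yang
```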
+
+**2. NETCONF device does not support ietf-netconf-monitoring and it does
+NOT list its YANG models as capabilities in HELLO message**
+
+Compared to a device that lists its YANG models in the HELLO message, in
+this case there would be no capability with ietf-inet-types in the HELLO
+message. This type of device basically provides no information about the
+YANG schemas it uses, so it is up to the user of OpenDaylight to properly
+configure netconf-connector for this device.
+
+Netconf-connector has an optional configuration attribute called
+yang-module-capabilities and this attribute can contain a list of "YANG
+module based" capabilities. So by setting this configuration attribute,
+it is possible to override the "yang-module-based" capabilities reported
+in HELLO message of the device. To do this, we need to modify the
+configuration of netconf-connector by adding this XML (It needs to be
+added next to the address, port, username etc. configuration elements):
+
+::
+
+    <yang-module-capabilities xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+      <capability xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2010-09-24
+      </capability>
+    </yang-module-capabilities>
+
+**Remember to also put the YANG schemas into the cache folder.**
+
+.. note::
+
+    To add multiple capabilities, simply replicate the capability XML
+    element inside the yang-module-capabilities element. The capability
+    element is modeled as a leaf-list. With this configuration, the
+    remote device is made to report usage of ietf-inet-types in the eyes
+    of netconf-connector.
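
Generating such a snippet for several modules can be scripted. An
illustrative sketch using only the Python standard library (the
``yang_module_capabilities`` helper is ours); note that ``&`` is escaped to
``&amp;`` in the output, as in the XML above:

```python
import xml.etree.ElementTree as ET

NS = "urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf"

def yang_module_capabilities(uris):
    """Render a yang-module-capabilities element with one capability
    child (a leaf-list entry) per capability URI."""
    ET.register_namespace("", NS)
    root = ET.Element("{%s}yang-module-capabilities" % NS)
    for uri in uris:
        ET.SubElement(root, "{%s}capability" % NS).text = uri
    return ET.tostring(root, encoding="unicode")

print(yang_module_capabilities([
    "urn:ietf:params:xml:ns:yang:ietf-inet-types"
    "?module=ietf-inet-types&revision=2010-09-24",
]))
```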
+
+Reconfiguring Netconf-Connector While the Controller is Running
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+It is possible to change the configuration of a running module while the
+whole controller is running. This example continues where the previous
+one left off and changes the configuration of the newly spawned
+netconf-connector. Using one RESTCONF request, we will change both the
+username and password for the netconf-connector.
+
+To update an existing netconf-connector you need to send the following
+request to RESTCONF:
+
+PUT
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device
+
+::
+
+    <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
+      <name>new-netconf-device</name>
+      <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">bob</username>
+      <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">passwd</password>
+      <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
+      <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
+        <name>global-event-executor</name>
+      </event-executor>
+      <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
+        <name>binding-osgi-broker</name>
+      </binding-registry>
+      <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
+        <name>dom-broker</name>
+      </dom-registry>
+      <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
+        <name>global-netconf-dispatcher</name>
+      </client-dispatcher>
+      <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
+        <name>global-netconf-processing-executor</name>
+      </processing-executor>
+      <keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
+        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
+        <name>global-netconf-ssh-scheduled-executor</name>
+      </keepalive-executor>
+    </module>
+
+Since a PUT is a replace operation, the whole configuration must be
+specified along with the new values for username and password. This
+should result in a 2xx response and the instance of netconf-connector
+called new-netconf-device will be reconfigured to use username bob and
+password passwd. New configuration can be verified by executing:
+
+GET
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device
+
+With new configuration, the old connection will be closed and a new one
+established.
+
+Destroying Netconf-Connector While the Controller is Running
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Using RESTCONF one can also destroy an instance of a module. In case of
+netconf-connector, the module will be destroyed, NETCONF connection
+dropped and all resources will be cleaned. To do this, simply issue a
+request to following URL:
+
+DELETE
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device
+
+The last element of the URL is the name of the instance and its
+predecessor is the type of that module (in our case the type is
+**sal-netconf-connector** and the name is **new-netconf-device**). The
+type and name are actually the keys of the module list.
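
Because type and name are the list keys, the instance URL can be assembled
mechanically. An illustrative sketch (the ``module_instance_url`` helper is
ours, not an OpenDaylight API):

```python
def module_instance_url(base, module_type, name):
    """Build the RESTCONF URL for one config-subsystem module instance.
    The last two path segments are the module list keys: type, then name."""
    return (base + "/restconf/config/network-topology:network-topology"
            "/topology/topology-netconf/node/controller-config"
            "/yang-ext:mount/config:modules/module/"
            + module_type + "/" + name)

url = module_instance_url(
    "http://localhost:8181",
    "odl-sal-netconf-connector-cfg:sal-netconf-connector",
    "new-netconf-device")
print(url)
```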
+
+Netconf-connector configuration with MD-SAL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is also possible to configure new NETCONF connectors directly through
+MD-SAL with the usage of the network-topology model. You can configure
+new NETCONF connectors both through the NETCONF server for MD-SAL (port
+2830) or RESTCONF. This guide focuses on RESTCONF.
+
+    **Tip**
+
+    To enable NETCONF connector configuration through MD-SAL install
+    either the ``odl-netconf-topology`` or
+    ``odl-netconf-clustered-topology`` feature. We will explain the
+    difference between these features later.
+
+Preconditions
+^^^^^^^^^^^^^
+
+1. OpenDaylight is running
+
+2. In Karaf, you must have the ``odl-netconf-topology`` or
+   ``odl-netconf-clustered-topology`` feature installed.
+
+3. Feature ``odl-restconf`` must be installed
+
+4. Wait until log displays following entry:
+
+   ::
+
+       Successfully pushed configuration snapshot 02-netconf-topology.xml(odl-netconf-topology,odl-netconf-topology)
+
+   or until
+
+   ::
+
+       GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/
+
+   returns a non-empty response, for example:
+
+   ::
+
+       <topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
+         <topology-id>topology-netconf</topology-id>
+       </topology>
+
+Spawning new NETCONF connectors
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To create a new NETCONF connector you need to send the following request
+to RESTCONF:
+
+::
+
+    PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
+
+Headers:
+
+-  Accept: application/xml
+
+-  Content-Type: application/xml
+
+Payload:
+
+::
+
+    <node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
+      <node-id>new-netconf-device</node-id>
+      <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
+      <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
+      <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
+      <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
+      <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
+      <!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values-->
+      <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
+      <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
+      <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
+      <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
+      <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
+      <!-- keepalive-delay set to 0 turns off keepalives-->
+      <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
+    </node>
+
+Note that the device name in the <node-id> element must match the last
+element of the RESTCONF URL.
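
This constraint can be checked before sending the request. A small
illustrative sketch using only the Python standard library (the
``node_id_matches_url`` helper is ours, not an OpenDaylight API):

```python
import xml.etree.ElementTree as ET

NT_NS = "urn:TBD:params:xml:ns:yang:network-topology"

def node_id_matches_url(payload_xml, put_url):
    """Return True when the <node-id> in the payload equals the last
    path segment of the PUT URL, as required above."""
    node_id = ET.fromstring(payload_xml).findtext("{%s}node-id" % NT_NS)
    return node_id == put_url.rstrip("/").rsplit("/", 1)[-1]

payload = ('<node xmlns="%s"><node-id>new-netconf-device</node-id></node>'
           % NT_NS)
url = ("http://localhost:8181/restconf/config/"
       "network-topology:network-topology/topology/topology-netconf"
       "/node/new-netconf-device")
print(node_id_matches_url(payload, url))  # True
```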
+
+Reconfiguring an existing connector
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The steps to reconfigure an existing connector are exactly the same as
+when spawning a new connector. The old connection will be disconnected
+and a new connector with the new configuration will be created.
+
+Deleting an existing connector
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To remove an already configured NETCONF connector you need to send the
+following:
+
+::
+
+    DELETE http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
+
+Connecting to a device supporting only NETCONF 1.0
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+OpenDaylight is a schema-based distribution and heavily depends on YANG
+models. However, some legacy NETCONF devices are not schema-based and
+implement just RFC 4741. This type of device does not utilize YANG
+models internally, and OpenDaylight does not know how to communicate
+with such devices, how to validate data, or what the semantics of data
+are.
+
+The NETCONF connector can also communicate with these devices, but the
+trade-off is reduced functionality of the NETCONF mountpoints. Using
+RESTCONF with such devices is not supported, and communicating with
+schemaless devices from application code is slightly different.
+
+To connect to a schemaless device, there is an optional configuration
+option in the netconf-node-topology model called schemaless. You have to
+set this option to true.
+
+Clustered NETCONF connector
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To spawn NETCONF connectors that are cluster-aware you need to install
+the ``odl-netconf-clustered-topology`` karaf feature.
+
+    **Warning**
+
+    The ``odl-netconf-topology`` and ``odl-netconf-clustered-topology``
+    features are considered **INCOMPATIBLE**. They both manage the same
+    space in the datastore and would issue conflicting writes if
+    installed together.
+
+Configuration of clustered NETCONF connectors works the same as the
+configuration through the topology model in the previous section.
+
+When a new clustered connector is configured the configuration gets
+distributed among the member nodes and a NETCONF connector is spawned on
+each node. From these nodes a master is chosen which handles the schema
+download from the device and all the communication with the device. You
+will be able to read/write to/from the device from all slave nodes due
+to the proxy data brokers implemented.
+
+You can use the ``odl-netconf-clustered-topology`` feature in a
+single-node scenario as well, but the akka-based code paths will still
+be used, so for a scenario where only a single node is used,
+``odl-netconf-topology`` might be preferred.
+
+Netconf-connector utilization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the connector is up and running, users can utilize the new mount
+point instance by using RESTCONF or from their application code. This
+chapter deals with using RESTCONF; more information for app developers
+can be found in the developer guide or in the official tutorial
+application **ncmount** that can be found in the coretutorials
+project:
+
+-  https://github.com/opendaylight/coretutorials/tree/stable/beryllum/ncmount
+
+Reading data from the device
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Just invoke (no body needed):
+
+GET
+http://localhost:8080/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/
+
+This will return the entire content of the operational datastore from
+the device. To view just the configuration datastore, change
+**operational** in this URL to **config**.
+
+Writing configuration data to the device
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In general, you cannot simply write any data you want to the device. The
+data have to conform to the YANG models implemented by the device. In
+this example we are adding a new interface-configuration to the mounted
+device (assuming the device supports Cisco-IOS-XR-ifmgr-cfg YANG model).
+In fact, this request comes from the **ncmount** tutorial app.
+
+POST
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/Cisco-IOS-XR-ifmgr-cfg:interface-configurations
+
+::
+
+    <interface-configuration xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg">
+        <active>act</active>
+        <interface-name>mpls</interface-name>
+        <description>Interface description</description>
+        <bandwidth>32</bandwidth>
+        <link-status></link-status>
+    </interface-configuration>
+
+This should return a 200 response code with no body.
+
+    **Tip**
+
+    This call is transformed into a couple of NETCONF RPCs. Resulting
+    NETCONF RPCs that go directly to the device can be found in the
+    OpenDaylight logs after invoking ``log:set TRACE
+    org.opendaylight.controller.sal.connect.netconf`` in the Karaf
+    shell. Seeing the NETCONF RPCs might help with debugging.
+
+This request is very similar to the one where we spawned a new netconf
+device. That’s because we used the loopback netconf-connector to write
+configuration data into config-subsystem datastore and config-subsystem
+picked it up from there.
+
+Invoking custom RPC
+^^^^^^^^^^^^^^^^^^^
+
+Devices can implement any additional RPC and, as long as they provide
+YANG models for it, it can be invoked from OpenDaylight. The following
+example shows how to invoke the get-schema RPC (get-schema is quite
+common among NETCONF devices). Invoke:
+
+POST
+http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ietf-netconf-monitoring:get-schema
+
+::
+
+    <input xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
+      <identifier>ietf-yang-types</identifier>
+      <version>2013-07-15</version>
+    </input>
+
+This call should fetch the source for ietf-yang-types YANG model from
+the mounted device.
+
+Netconf-connector + Netopeer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+`Netopeer <https://github.com/cesnet/netopeer>`__ (an open-source
+NETCONF server) can be used for testing/exploring NETCONF southbound in
+OpenDaylight.
+
+Netopeer installation
+^^^^^^^^^^^^^^^^^^^^^
+
+A `Docker <https://www.docker.com/>`__ container with netopeer will be
+used in this guide. To install Docker and start the `netopeer
+image <https://index.docker.io/u/dockeruser/netopeer/>`__ perform
+following steps:
+
+1. Install docker http://docs.docker.com/linux/step_one/
+
+2. Start the netopeer image:
+
+   ::
+
+       docker run --rm -t -p 1831:830 dockeruser/netopeer
+
+3. Verify netopeer is running by invoking (netopeer should send its
+   HELLO message right away):
+
+   ::
+
+       ssh root@localhost -p 1831 -s netconf
+       (password root)
+
+Mounting netopeer NETCONF server
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Preconditions:
+
+-  OpenDaylight is started with features ``odl-restconf-all`` and
+   ``odl-netconf-connector-all``.
+
+-  Netopeer is up and running in docker
+
+Now just follow the chapter: `Spawning
+netconf-connector <#_spawning_additional_netconf_connectors_while_the_controller_is_running>`__.
+In the payload, change the following:
+
+-  name, e.g., to netopeer
+
+-  username/password to your system credentials
+
+-  ip to localhost
+
+-  port to 1831.
+
+After netopeer is mounted successfully, its configuration can be read
+using RESTCONF by invoking:
+
+GET
+http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/netopeer/yang-ext:mount/
+
+Northbound (NETCONF servers)
+----------------------------
+
+OpenDaylight provides 2 types of NETCONF servers:
+
+-  **NETCONF server for config-subsystem (listening by default on port
+   1830)**
+
+   -  Serves as a default interface for config-subsystem and allows
+      users to spawn/reconfigure/destroy modules (or applications) in
+      OpenDaylight
+
+-  **NETCONF server for MD-SAL (listening by default on port 2830)**
+
+   -  Serves as an alternative interface for MD-SAL (besides RESTCONF)
+      and allows users to read/write data from MD-SAL’s datastore and to
+      invoke its rpcs (NETCONF notifications are not available in the
+      Beryllium release of OpenDaylight)
+
+.. note::
+
+    The reason for having 2 NETCONF servers is that config-subsystem and
+    MD-SAL are 2 different components of OpenDaylight and require
+    different approach for NETCONF message handling and data
+    translation. These 2 components will probably merge in the future.
+
+NETCONF server for config-subsystem
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This NETCONF server is the primary interface for config-subsystem. It
+allows the users to interact with config-subsystem in a standardized
+NETCONF manner.
+
+In terms of RFCs, these are supported:
+
+-  `RFC-6241 <http://tools.ietf.org/html/rfc6241>`__
+
+-  `RFC-5277 <https://tools.ietf.org/html/rfc5277>`__
+
+-  `RFC-6470 <https://tools.ietf.org/html/rfc6470>`__
+
+   -  (partially, only the schema-change notification is available in
+      Beryllium release)
+
+-  `RFC-6022 <https://tools.ietf.org/html/rfc6022>`__
+
+For regular users it is recommended to use RESTCONF + the
+controller-config loopback mountpoint instead of using pure NETCONF. How
+to do that is specific to each component/module/application in
+OpenDaylight and can be found in their dedicated user guides.
+
+NETCONF server for MD-SAL
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This NETCONF server is just a generic interface to MD-SAL in
+OpenDaylight. It uses the standard MD-SAL APIs and serves as an
+alternative to RESTCONF. It is fully model-driven and supports any data
+and rpcs that are supported by MD-SAL.
+
+In terms of RFCs, these are supported:
+
+-  `RFC-6241 <http://tools.ietf.org/html/rfc6241>`__
+
+-  `RFC-6022 <https://tools.ietf.org/html/rfc6022>`__
+
+-  `draft-ietf-netconf-yang-library-06 <https://tools.ietf.org/html/draft-ietf-netconf-yang-library-06>`__
+
+Notifications over NETCONF are not supported in the Beryllium release.
+
+.. tip::
+
+    Install NETCONF northbound for MD-SAL by installing the feature
+    ``odl-netconf-mdsal`` in Karaf. The default binding port is **2830**.
+
+Configuration
+^^^^^^^^^^^^^
+
+The default configuration can be found in file: *08-netconf-mdsal.xml*.
+The file contains the configuration for all necessary dependencies and a
+single SSH endpoint starting on port 2830. There is also a (by default
+disabled) TCP endpoint. It is possible to start multiple endpoints at
+the same time either in the initial configuration file or while
+OpenDaylight is running.
+
+The credentials for the SSH endpoint can also be configured here; the
+defaults are admin/admin. Credentials in the SSH endpoint are not yet
+managed by the centralized AAA component and have to be configured
+separately.
+
+Verifying MD-SAL’s NETCONF server
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+After the NETCONF server is available, it can be examined with a
+command-line ssh tool:
+
+::
+
+    ssh admin@localhost -p 2830 -s netconf
+
+The server will respond by sending its HELLO message and can be used as
+a regular NETCONF server from then on.
+
+Mounting the MD-SAL’s NETCONF server
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To perform this operation, just spawn a new netconf-connector as
+described in `Spawning
+netconf-connector <#_spawning_additional_netconf_connectors_while_the_controller_is_running>`__.
+Just change the ip to "127.0.0.1", the port to "2830" and the name to
+"controller-mdsal".
+
+Now the MD-SAL’s datastore can be read over RESTCONF via NETCONF by
+invoking:
+
+::
+
+    GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/controller-mdsal/yang-ext:mount
+
+.. note::
+
+    This might not seem very useful, since MD-SAL can be accessed
+    directly from RESTCONF or from Application code, but the same method
+    can be used to mount and control other OpenDaylight instances by the
+    "master OpenDaylight".
+
+NETCONF testtool
+----------------
+
+**NETCONF testtool is a set of standalone runnable jars that can:**
+
+-  Simulate NETCONF devices (suitable for scale testing)
+
+-  Stress/Performance test NETCONF devices
+
+-  Stress/Performance test RESTCONF devices
+
+These jars are part of OpenDaylight’s controller project and are built
+from the NETCONF codebase in OpenDaylight.
+
+.. tip::
+
+    Download the testtool from OpenDaylight Nexus at:
+    https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.0.2-Beryllium-SR2/
+
+**Nexus contains 3 executable tools:**
+
+-  executable.jar - device simulator
+
+-  stress.client.tar.gz - NETCONF stress/performance measuring tool
+
+-  perf-client.jar - RESTCONF stress/performance measuring tool
+
+.. tip::
+
+    Each executable tool provides help. Just invoke ``java -jar
+    <name-of-the-tool.jar> --help``
+
+NETCONF device simulator
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+NETCONF testtool (or NETCONF device simulator) is a tool that:
+
+-  Simulates 1 or more NETCONF devices
+
+-  Is suitable for scale, performance or CRUD testing
+
+-  Uses core implementation of NETCONF server from OpenDaylight
+
+-  Generates configuration files for the controller so that the
+   OpenDaylight distribution (Karaf) can easily connect to all simulated
+   devices
+
+-  Provides broad configuration options
+
+-  Can start a fully fledged MD-SAL datastore
+
+-  Supports notifications
+
+Building testtool
+^^^^^^^^^^^^^^^^^
+
+1. Check out latest NETCONF repository from
+   `git <https://git.opendaylight.org/gerrit/#/admin/projects/netconf>`__
+
+2. Move into the ``opendaylight/netconf/tools/netconf-testtool/`` folder
+
+3. Build testtool using the ``mvn clean install`` command
+
+Downloading testtool
+^^^^^^^^^^^^^^^^^^^^
+
+Netconf-testtool is now part of the default Maven build profile for the
+controller and can also be downloaded from Nexus. The executable jar for
+the testtool can be found at:
+`nexus-artifacts <https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.0.2-Beryllium-SR2/>`__
+
+Running testtool
+^^^^^^^^^^^^^^^^
+
+1. After successfully building or downloading, move into the
+   ``opendaylight/netconf/tools/netconf-testtool/target/`` folder, where
+   you will find the file
+   ``netconf-testtool-1.1.0-SNAPSHOT-executable.jar`` (or, if downloaded
+   from Nexus, just take that jar file)
+
+2. Execute this file using, e.g.:
+
+   ::
+
+       java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar
+
+   This execution runs the testtool with defaults for all parameters and
+   you should see this log output from the testtool:
+
+   ::
+
+       10:31:08.206 [main] INFO  o.o.c.n.t.t.NetconfDeviceSimulator - Starting 1, SSH simulated devices starting on port 17830
+       10:31:08.675 [main] INFO  o.o.c.n.t.t.NetconfDeviceSimulator - All simulated devices started successfully from port 17830 to 17830
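+The port arithmetic behind that "from port X to Y" log line is simple:
+each simulated device listens on its own port, starting at the configured
+starting port (17830 by default). A quick sketch (the helper name here is
+illustrative, not a testtool API):

```python
# Device N (0-based) listens on starting_port + N, so device-count devices
# occupy the contiguous range [starting_port, starting_port + count - 1].
def device_ports(device_count, starting_port=17830):
    return list(range(starting_port, starting_port + device_count))

ports = device_ports(1)
print("from port %d to %d" % (ports[0], ports[-1]))  # from port 17830 to 17830
```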
+
+Default Parameters
+''''''''''''''''''
+
+The default parameters for testtool are:
+
+-  Use SSH
+
+-  Run 1 simulated device
+
+-  Device port is 17830
+
+-  YANG modules used by device are only: ietf-netconf-monitoring,
+   ietf-yang-types, ietf-inet-types (these modules are required for
+   device in order to support NETCONF monitoring and are included in the
+   netconf-testtool)
+
+-  Connection timeout is set to 30 minutes (quite high, but when testing
+   with 10000 devices it might take some time for all of them to fully
+   establish a connection)
+
+-  Debug level is set to false
+
+-  No distribution is modified to connect automatically to the NETCONF
+   testtool
+
+Verifying testtool
+^^^^^^^^^^^^^^^^^^
+
+To verify that the simulated device is up and running, we can try to
+connect to it using a command-line ssh tool. Execute this command to
+connect to the device:
+
+::
+
+    ssh admin@localhost -p 17830 -s netconf
+
+Just accept the server with yes (if required) and provide any password
+(the testtool accepts all users with all passwords). You should see the
+hello message sent by the simulated device.
+
+Testtool help
+^^^^^^^^^^^^^
+
+::
+
+    usage: netconf testool [-h] [--device-count DEVICES-COUNT] [--devices-per-port DEVICES-PER-PORT] [--schemas-dir SCHEMAS-DIR] [--notification-file NOTIFICATION-FILE]
+                           [--initial-config-xml-file INITIAL-CONFIG-XML-FILE] [--starting-port STARTING-PORT] [--generate-config-connection-timeout GENERATE-CONFIG-CONNECTION-TIMEOUT]
+                           [--generate-config-address GENERATE-CONFIG-ADDRESS] [--generate-configs-batch-size GENERATE-CONFIGS-BATCH-SIZE] [--distribution-folder DISTRO-FOLDER] [--ssh SSH] [--exi EXI]
+                           [--debug DEBUG] [--md-sal MD-SAL]
+
+    NETCONF device simulator. Detailed info can be found at https://wiki.opendaylight.org/view/OpenDaylight_Controller:Netconf:Testtool#Building_testtool
+
+    optional arguments:
+      -h, --help             show this help message and exit
+      --device-count DEVICES-COUNT
+                             Number of simulated netconf devices to spin. This is the number of actual ports open for the devices.
+      --devices-per-port DEVICES-PER-PORT
+                             Amount of config files generated per port to spoof more devices then are actually running
+      --schemas-dir SCHEMAS-DIR
+                             Directory containing yang schemas to describe simulated devices. Some schemas e.g. netconf monitoring and inet types are included by default
+      --notification-file NOTIFICATION-FILE
+                             Xml file containing notifications that should be sent to clients after create subscription is called
+      --initial-config-xml-file INITIAL-CONFIG-XML-FILE
+                             Xml file containing initial simulatted configuration to be returned via get-config rpc
+      --starting-port STARTING-PORT
+                             First port for simulated device. Each other device will have previous+1 port number
+      --generate-config-connection-timeout GENERATE-CONFIG-CONNECTION-TIMEOUT
+                             Timeout to be generated in initial config files
+      --generate-config-address GENERATE-CONFIG-ADDRESS
+                             Address to be placed in generated configs
+      --generate-configs-batch-size GENERATE-CONFIGS-BATCH-SIZE
+                             Number of connector configs per generated file
+      --distribution-folder DISTRO-FOLDER
+                             Directory where the karaf distribution for controller is located
+      --ssh SSH              Whether to use ssh for transport or just pure tcp
+      --exi EXI              Whether to use exi to transport xml content
+      --debug DEBUG          Whether to use debug log level instead of INFO
+      --md-sal MD-SAL        Whether to use md-sal datastore instead of default simulated datastore.
+
+Supported operations
+^^^^^^^^^^^^^^^^^^^^
+
+The testtool's default simple datastore supports the following operations:
+
+get-schema
+    returns YANG schemas loaded from user specified directory,
+
+edit-config
+    always returns OK and stores the XML from the input in a local
+    variable available for get-config and get RPC. Every edit-config
+    replaces the previous data,
+
+commit
+    always returns OK, but does not actually commit the data,
+
+get-config
+    returns local XML stored by edit-config,
+
+get
+    returns local XML stored by edit-config with netconf-state subtree,
+    but also supports filtering.
+
+(un)lock
+    always returns OK, with no lock guarantee
+
+create-subscription
+    always returns OK; after the operation is triggered, the provided
+    NETCONF notifications (if any) are fed to the client. No filtering
+    or stream recognition is supported.
+
+.. note::
+
+    When operation="delete" is present in the payload for edit-config,
+    it will wipe its local store to simulate the removal of data.
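+The behavior described above can be condensed into a toy model of the
+simple datastore (illustrative Python, not testtool code): edit-config
+replaces the stored XML, a delete operation wipes it, and commit is a
+no-op.

```python
class SimpleDatastore:
    """Toy model of the testtool's default (simple) datastore semantics."""

    def __init__(self):
        self.data = None

    def edit_config(self, xml, operation=None):
        if operation == "delete":
            self.data = None   # delete wipes the local store
        else:
            self.data = xml    # every edit-config replaces the previous data
        return "OK"            # edit-config always returns OK

    def commit(self):
        return "OK"            # always OK, nothing is actually committed

    def get_config(self):
        return self.data       # returns the local XML stored by edit-config

ds = SimpleDatastore()
ds.edit_config("<cont><l>first</l></cont>")
ds.edit_config("<cont><l>second</l></cont>")
print(ds.get_config())  # <cont><l>second</l></cont> -- first edit was replaced
```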
+
+When using the MD-SAL datastore, the testtool behaves more like a normal
+NETCONF server and is suitable for CRUD testing. create-subscription is
+not supported when the testtool is running with the MD-SAL datastore.
+
+Notification support
+^^^^^^^^^^^^^^^^^^^^
+
+The testtool supports notifications via the --notification-file switch.
+To trigger the notification feed, the create-subscription operation has
+to be invoked. The XML file provided should look like this example file:
+
+::
+
+    <?xml version='1.0' encoding='UTF-8' standalone='yes'?>
+    <notifications>
+
+    <!-- Notifications are processed in the order they are defined in XML -->
+
+    <!-- Notification that is sent only once right after create-subscription is called -->
+    <notification>
+        <!-- Content of each notification entry must contain the entire notification with event time. Event time can be hardcoded, or generated by testtool if XXXX is set as eventtime in this XML -->
+        <content><![CDATA[
+            <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
+                <eventTime>2011-01-04T12:30:46</eventTime>
+                <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
+                    <random-content>single no delay</random-content>
+                </random-notification>
+            </notification>
+        ]]></content>
+    </notification>
+
+    <!-- Repeated Notification that is sent 5 times with 2 second delay inbetween -->
+    <notification>
+        <!-- Delay in seconds from previous notification -->
+        <delay>2</delay>
+        <!-- Number of times this notification should be repeated -->
+        <times>5</times>
+        <content><![CDATA[
+            <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
+                <eventTime>XXXX</eventTime>
+                <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
+                    <random-content>scheduled 5 times 10 seconds each</random-content>
+                </random-notification>
+            </notification>
+        ]]></content>
+    </notification>
+
+    <!-- Single notification that is sent only once right after the previous notification -->
+    <notification>
+        <delay>2</delay>
+        <content><![CDATA[
+            <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
+                <eventTime>XXXX</eventTime>
+                <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
+                    <random-content>single with delay</random-content>
+                </random-notification>
+            </notification>
+        ]]></content>
+    </notification>
+
+    </notifications>
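+Before handing such a file to the testtool, it can be useful to
+sanity-check it, for example by computing how many notifications would be
+emitted in total (assuming ``times`` defaults to 1 when absent, as in the
+example above). A sketch using only the standard library:

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the example file above: one single notification,
# one repeated 5 times, and one more single notification.
NOTIFICATIONS_XML = """<?xml version='1.0' encoding='UTF-8' standalone='yes'?>
<notifications>
  <notification><content><![CDATA[...]]></content></notification>
  <notification><delay>2</delay><times>5</times><content><![CDATA[...]]></content></notification>
  <notification><delay>2</delay><content><![CDATA[...]]></content></notification>
</notifications>"""

def total_notifications(xml_text):
    root = ET.fromstring(xml_text)
    total = 0
    for entry in root.findall("notification"):
        times = entry.find("times")
        total += int(times.text) if times is not None else 1
    return total

print(total_notifications(NOTIFICATIONS_XML))  # 1 + 5 + 1 = 7
```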
+
+Connecting testtool with controller Karaf distribution
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Auto connect to OpenDaylight
+''''''''''''''''''''''''''''
+
+It is possible to make OpenDaylight auto connect to the simulated
+devices spawned by the testtool (so the user does not have to post a
+configuration for every NETCONF connector via RESTCONF). The testtool is
+able to modify the OpenDaylight distribution to auto connect to the
+simulated devices after the feature ``odl-netconf-connector-all`` is
+installed. When running the testtool, issue this command (just point the
+testtool to the distribution):
+
+::
+
+    java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar --device-count 10 --distribution-folder ~/distribution-karaf-0.4.0-SNAPSHOT/ --debug true
+
+With the distribution-folder parameter, the testtool will modify the
+distribution to include configuration for netconf-connector to connect
+to all simulated devices. So there is no need to spawn
+netconf-connectors via RESTCONF.
+
+Running testtool and OpenDaylight on different machines
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+The testtool binds by default to 0.0.0.0 so it should be accessible from
+remote machines. However, you need to set the parameter
+"generate-config-address" (when using autoconnect) to the address of the
+machine where the testtool will be run so OpenDaylight can connect. The
+default value is localhost.
+
+Executing operations via RESTCONF on a mounted simulated device
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Simulated devices support basic RPCs for editing their config. This part
+shows how to edit data for a simulated device via RESTCONF.
+
+Test YANG schema
+''''''''''''''''
+
+The controller and RESTCONF assume that the data that can be manipulated
+for the mounted device is described by a YANG schema. For demonstration,
+we will define a simple YANG model:
+
+::
+
+    module test {
+        yang-version 1;
+        namespace "urn:opendaylight:test";
+        prefix "tt";
+
+        revision "2014-10-17";
+
+
+       container cont {
+
+            leaf l {
+                type string;
+            }
+       }
+    }
+
+Save this schema in a file called ``test@2014-10-17.yang`` and store it
+in a directory called test-schemas/, e.g., in your home folder.
+
+Editing data for simulated device
+'''''''''''''''''''''''''''''''''
+
+-  Start the device with following command:
+
+   ::
+
+       java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar --device-count 10 --distribution-folder ~/distribution-karaf-0.4.0-SNAPSHOT/ --debug true --schemas-dir ~/test-schemas/
+
+-  Start OpenDaylight
+
+-  Install odl-netconf-connector-all feature
+
+-  Install odl-restconf feature
+
+-  Check that you can see config data for simulated device by executing
+   GET request to
+
+   ::
+
+       http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/
+
+-  The data should be just an empty data container
+
+-  Now execute edit-config request by executing a POST request to:
+
+   ::
+
+       http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount
+
+   with headers:
+
+   ::
+
+       Accept application/xml
+       Content-Type application/xml
+
+   and payload:
+
+   ::
+
+       <cont xmlns="urn:opendaylight:test">
+         <l>Content</l>
+       </cont>
+
+-  Check that you can see modified config data for simulated device by
+   executing GET request to
+
+   ::
+
+       http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/
+
+-  Check that you can see the same modified data in operational for
+   simulated device by executing GET request to
+
+   ::
+
+       http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/
+
+.. warning::
+
+    Data will be mirrored in the operational datastore only when using
+    the default simple datastore.
+
+Known problems
+^^^^^^^^^^^^^^
+
+Slow creation of devices on virtual machines
+''''''''''''''''''''''''''''''''''''''''''''
+
+When the testtool seems to take an unusually long time to create the
+devices, use this flag when running it:
+
+::
+
+    -Dorg.apache.sshd.registerBouncyCastle=false
+
+Too many files open
+'''''''''''''''''''
+
+When the testtool or OpenDaylight starts to fail with a TooManyFilesOpen
+exception, you need to increase the limit of open files in your OS. To
+find out the limit on Linux, execute:
+
+::
+
+    ulimit -a
+
+An example of a sufficient configuration on Linux:
+
+::
+
+    core file size          (blocks, -c) 0
+    data seg size           (kbytes, -d) unlimited
+    scheduling priority             (-e) 0
+    file size               (blocks, -f) unlimited
+    pending signals                 (-i) 63338
+    max locked memory       (kbytes, -l) 64
+    max memory size         (kbytes, -m) unlimited
+    open files                      (-n) 500000
+    pipe size            (512 bytes, -p) 8
+    POSIX message queues     (bytes, -q) 819200
+    real-time priority              (-r) 0
+    stack size              (kbytes, -s) 8192
+    cpu time               (seconds, -t) unlimited
+    max user processes              (-u) 63338
+    virtual memory          (kbytes, -v) unlimited
+    file locks                      (-x) unlimited
+
+To set these limits, edit the file /etc/security/limits.conf, for example:
+
+::
+
+    *         hard    nofile      500000
+    *         soft    nofile      500000
+    root      hard    nofile      500000
+    root      soft    nofile      500000
+
+"Killed"
+''''''''
+
+The testtool might end unexpectedly with a simple message: "Killed".
+This means that the OS killed the tool because it consumed too much
+memory or spawned too many threads. To find out the reason on Linux you
+can use the following command:
+
+::
+
+    dmesg | egrep -i -B100 'killed process'
+
+Also take a look at the file /proc/sys/kernel/threads-max. It limits
+the number of threads spawned by a process. A sufficient (but probably
+much more than enough) value is, e.g., 126676.
+
+NETCONF stress/performance measuring tool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is basically a NETCONF client that puts NETCONF servers under heavy
+load of NETCONF RPCs and measures the time until a configurable amount
+of them is processed.
+
+RESTCONF stress-performance measuring tool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Very similar to the NETCONF stress tool, with the difference that it uses
+the RESTCONF protocol instead of NETCONF.
+
+YANGLIB remote repository
+-------------------------
+
+There are scenarios in NETCONF deployment that require a centralized
+repository of YANG models. The YANGLIB plugin provides such a remote
+repository.
+
+To start this plugin, you have to install the odl-yanglib feature. Then
+you have to configure YANGLIB either through RESTCONF or NETCONF. We will
+show how to configure YANGLIB through RESTCONF.
+
+YANGLIB configuration through RESTCONF
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You have to specify which local YANG modules directory you want to
+provide. Then you have to specify the address and port where you want to
+provide YANG sources. For example, we want to serve YANG sources from the
+folder /sources on the address localhost:5000. The configuration for this
+scenario will be as follows:
+
+::
+
+    PUT  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/yanglib:yanglib/example
+
+Headers:
+
+-  Accept: application/xml
+
+-  Content-Type: application/xml
+
+Payload:
+
+::
+
+   <module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
+     <name>example</name>
+     <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">prefix:yanglib</type>
+     <broker xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">
+       <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
+       <name>binding-osgi-broker</name>
+     </broker>
+     <cache-folder xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">/sources</cache-folder>
+     <binding-addr xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">localhost</binding-addr>
+     <binding-port xmlns="urn:opendaylight:params:xml:ns:yang:controller:yanglib:impl">5000</binding-port>
+   </module>
+
+This should result in a 2xx response and a new YANGLIB instance should be
+created. This YANGLIB takes all YANG sources from the /sources folder and
+for each of them generates a URL in the form:
+
+::
+
+    http://localhost:5000/schemas/{modelName}/{revision}
+
+The YANG source for the particular module will be hosted at this URL.
+
+The YANGLIB instance also writes this URL, along with the source
+identifier, to the ietf-netconf-yang-library/modules-state/module list.
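+Assuming the conventional ``name@revision.yang`` file naming (an
+assumption for illustration; the plugin's actual implementation may
+differ), the URL generation can be sketched as:

```python
# Derive the schema URL served by YANGLIB from a YANG file name such as
# "test@2014-10-17.yang" (model name "test", revision "2014-10-17").
def schema_url(filename, host="localhost", port=5000):
    base = filename[:-len(".yang")]
    name, revision = base.split("@")
    return "http://%s:%d/schemas/%s/%s" % (host, port, name, revision)

print(schema_url("test@2014-10-17.yang"))
# http://localhost:5000/schemas/test/2014-10-17
```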
+
+Netconf-connector with YANG library as fallback
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There is an optional configuration in netconf-connector called
+yang-library. You can specify a YANG library to be plugged in as an
+additional source provider into the mount's schema repository. Since the
+YANGLIB plugin advertises the provided modules through the yang-library
+model, we can use it in the mount point's configuration as the YANG
+library. To do this, we need to modify the configuration of
+netconf-connector by adding this XML:
+
+::
+
+    <yang-library xmlns="urn:opendaylight:netconf-node-topology">
+      <yang-library-url xmlns="urn:opendaylight:netconf-node-topology">http://localhost:8181/restconf/operational/ietf-yang-library:modules-state</yang-library-url>
+      <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
+      <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
+    </yang-library>
+
+This will register the YANGLIB-provided sources as fallback schemas for
+the particular mount point.
diff --git a/docs/user-guide/netide-user-guide.rst b/docs/user-guide/netide-user-guide.rst
new file mode 100644 (file)
index 0000000..4c3f353
--- /dev/null
@@ -0,0 +1,101 @@
+NetIDE User Guide
+=================
+
+Overview
+--------
+
+OpenDaylight’s NetIDE project allows users to run SDN applications
+written for different SDN controllers, e.g., Floodlight or Ryu, on top
+of OpenDaylight managed infrastructure. The NetIDE Network Engine
+integrates a client controller layer that executes the modules that
+compose a Network Application and interfaces with a server SDN
+controller layer that drives the underlying infrastructure. In addition,
+it provides a uniform interface to common tools that are intended to
+allow the inspection/debug of the control channel and the management of
+the network resources.
+
+The Network Engine provides a compatibility layer capable of translating
+calls of the network applications running on top of the client
+controllers, into calls for the server controller framework. The
+communication between the client and the server layers is achieved
+through the NetIDE intermediate protocol, which is an application-layer
+protocol on top of TCP that transmits the network control/management
+messages from the client to the server controller and vice-versa.
+Between the client and server controllers sits the Core Layer, which also
+speaks the intermediate protocol.
+
+NetIDE API
+----------
+
+Architecture and Design
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The NetIDE engine follows the ONF’s proposed Client/Server SDN
+Application architecture.
+
+.. figure:: ./images/netide/netidearch.jpg
+   :alt: NetIDE Network Engine Architecture
+
+   NetIDE Network Engine Architecture
+
+Core
+~~~~
+
+The NetIDE Core is a message-based system that allows for the exchange
+of messages between OpenDaylight and subscribed Client SDN Controllers.
+
+Handling reply messages correctly
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When an application module sends a request to the network (e.g. flow
+statistics, features, etc.), the Network Engine must be able to
+correctly drive the corresponding reply to such a module. This is not a
+trivial task, as many modules may compose the network application
+running on top of the Network Engine, and there is no way for the Core
+to pair replies and requests. The transaction IDs (xid) in the OpenFlow
+header are unusable in this case, as it may happen that different
+modules use the same values.
+
+In the proposed approach, represented in the figure below, the task of
+pairing replies with requests is performed by the Shim Layer which
+replaces the original xid of the OpenFlow requests coming from the core
+with new unique xid values. The Shim also saves the original OpenFlow
+xid value and the module id it finds in the NetIDE header. As the
+network elements must use the same xid values in the replies, the Shim
+layer can easily pair a reply with the correct request as it is using
+unique xid values.
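+The pairing scheme above can be sketched as a simple translation table:
+the Shim hands out fresh unique xids towards the network and records the
+original (xid, module id) pair so each reply can be routed back. Class
+and method names here are illustrative, not the actual Shim
+implementation:

```python
import itertools

class XidTranslator:
    """Toy model of the Shim layer's xid pairing."""

    def __init__(self):
        self._next_xid = itertools.count(1)
        self._pending = {}   # unique xid -> (original xid, module id)

    def rewrite_request(self, orig_xid, module_id):
        new_xid = next(self._next_xid)   # globally unique towards the network
        self._pending[new_xid] = (orig_xid, module_id)
        return new_xid

    def route_reply(self, reply_xid):
        # Network elements echo the xid of the request, so the unique xid
        # in the reply identifies the original request unambiguously.
        return self._pending.pop(reply_xid)

shim = XidTranslator()
a = shim.rewrite_request(orig_xid=7, module_id="X")  # two modules may reuse xid 7
b = shim.rewrite_request(orig_xid=7, module_id="Y")
print(shim.route_reply(b))  # (7, 'Y')
```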
+
+The below figure shows how the Network Engine should handle the
+controller-to-switch OpenFlow messages. The diagram shows the case of a
+request message sent by an application module to a network element where
+the Backend inserts the module id of the module in the NetIDE header (X
+in the Figure). For other messages generated by the client controller
+platform (e.g. echo requests) or by the Backend, the module id of the
+Backend is used (Y in the Figure).
+
+.. figure:: ./images/netide/netide-flow.jpg
+   :alt: NetIDE Communication Flow
+
+   NetIDE Communication Flow
+
+Configuration
+~~~~~~~~~~~~~
+
+Below are the configuration items which can be edited, including their
+default values.
+
+-  core-address: This is the ip address of the NetIDE Core, default is
+   127.0.0.1
+
+-  core-port: The port on which the NetIDE Core is listening
+
+-  address: IP address where the controller listens for switch
+   connections, default is 127.0.0.1
+
+-  port: Port where controller listens for switch connections, default:
+   6644
+
+-  transport-protocol: default is TCP
+
+-  switch-idle-timeout: default is 15000ms
+
diff --git a/docs/user-guide/network-intent-composition-(nic)-user-guide.rst b/docs/user-guide/network-intent-composition-(nic)-user-guide.rst
new file mode 100644 (file)
index 0000000..c2606fd
--- /dev/null
@@ -0,0 +1,879 @@
+Network Intent Composition (NIC) User Guide
+===========================================
+
+Overview
+--------
+
+Network Intent Composition (NIC) is an interface that allows clients to
+express a desired state in an implementation-neutral form that will be
+enforced via modification of available resources under the control of
+the OpenDaylight system.
+
+This description is purposely abstract as an intent interface might
+encompass network services, virtual devices, storage, etc.
+
+The intent interface is meant to be a controller-agnostic interface so
+that "intents" are portable across implementations, such as OpenDaylight
+and ONOS. Thus an intent specification should not contain implementation
+or technology specifics.
+
+The intent specification will be implemented by decomposing the intent
+and augmenting it with implementation specifics that are driven by local
+implementation rules, policies, and/or settings.
+
+Network Intent Composition (NIC) Architecture
+---------------------------------------------
+
+The core of the NIC architecture is the intent model, which specifies
+the details of the desired state. It is the responsibility of the NIC
+implementation to transform this desired state into the resources under
+the control of OpenDaylight. The component that transforms the intent to
+the implementation is typically referred to as a renderer.
+
+For the Boron release, multiple, simultaneous renderers will not be
+supported. Instead either the VTN or GBP renderer feature can be
+installed, but not both.
+
+For the Boron release, the only actions supported are "ALLOW" and
+"BLOCK". The "ALLOW" action indicates that traffic can flow between the
+source and destination end points, while "BLOCK" prevents that flow;
+although it is possible that a given implementation may augment the
+available actions with additional actions.
+
+Besides transforming a desired state to an actual state it is the
+responsibility of a renderer to update the operational state tree for
+the NIC data model in OpenDaylight to reflect the intent which the
+renderer implemented.
+
+Configuring Network Intent Composition (NIC)
+--------------------------------------------
+
+For the Boron release there is no default implementation of a renderer,
+thus without an additional module installed the NIC will not function.
+
+Administering or Managing Network Intent Composition (NIC)
+----------------------------------------------------------
+
+There are no additional administration or management capabilities related
+to the Network Intent Composition features.
+
+Interactions
+------------
+
+A user can interact with the Network Intent Composition (NIC) either
+through the RESTful interface using standard RESTCONF operations and
+syntax or via the Karaf console CLI.
+
+REST
+~~~~
+
+Configuration
+^^^^^^^^^^^^^
+
+The Network Intent Composition (NIC) feature supports the following REST
+operations against the configuration data store.
+
+-  POST - creates a new instance of an intent in the configuration
+   store, which will trigger the realization of that intent. An ID
+   *must* be specified as part of this request as an attribute of the
+   intent.
+
+-  GET - fetches a list of all configured intents or a specific
+   configured intent.
+
+-  DELETE - removes a configured intent from the configuration store,
+   which triggers the removal of the intent from the network.
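+As an illustration of the POST semantics, a client could assemble an
+intent document like the following. The exact JSON layout of the NIC
+intent model is an assumption here; only the client-supplied id and the
+ALLOW/BLOCK actions come from the text above:

```python
import json
import uuid

def make_intent(from_subject, to_subject, action="BLOCK", intent_id=None):
    # Boron supports only the ALLOW and BLOCK actions.
    if action not in ("ALLOW", "BLOCK"):
        raise ValueError("unsupported action: %s" % action)
    return {
        # An ID must be specified by the client as part of the POST.
        "id": intent_id or str(uuid.uuid4()),
        "actions": [action],
        "from": from_subject,
        "to": to_subject,
    }

intent = make_intent("hostA", "hostB", action="ALLOW")
print(json.dumps(intent, indent=2))
```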
+
+Operational
+^^^^^^^^^^^
+
+The Network Intent Composition (NIC) feature supports the following REST
+operations against the operational data store.
+
+-  GET - fetches a list of all operational intents or a specific
+   operational intent.
+
+Karaf Console CLI
+~~~~~~~~~~~~~~~~~
+
+This feature provides Karaf console CLI commands to manipulate the intent
+data model. The CLI essentially invokes the equivalent data operations.
+
+intent:add
+^^^^^^^^^^
+
+Creates a new intent in the configuration data tree
+
+::
+
+    DESCRIPTION
+            intent:add
+
+        Adds an intent to the controller.
+
+    Examples: --actions [ALLOW] --from <subject> --to <subject>
+              --actions [BLOCK] --from <subject>
+
+    SYNTAX
+            intent:add [options]
+
+    OPTIONS
+            -a, --actions
+                    Action to be performed.
+                    -a / --actions BLOCK/ALLOW
+                    (defaults to [BLOCK])
+            --help
+                    Display this help message
+            -t, --to
+                    Second Subject.
+                    -t / --to <subject>
+                    (defaults to any)
+            -f, --from
+                    First subject.
+                    -f / --from <subject>
+                    (defaults to any)
+
+intent:delete
+^^^^^^^^^^^^^
+
+Removes an existing intent from the system
+
+::
+
+    DESCRIPTION
+            intent:remove
+
+        Removes an intent from the controller.
+
+    SYNTAX
+            intent:remove id
+
+    ARGUMENTS
+            id  Intent Id
+
+intent:list
+^^^^^^^^^^^
+
+Lists all the intents in the system
+
+::
+
+    DESCRIPTION
+            intent:list
+
+        Lists all intents in the controller.
+
+    SYNTAX
+            intent:list [options]
+
+    OPTIONS
+            -c, --config
+                    List Configuration Data (optional).
+                    -c / --config <ENTER>
+            --help
+                    Display this help message
+
+intent:show
+^^^^^^^^^^^
+
+Displays the details of a single intent
+
+::
+
+    DESCRIPTION
+            intent:show
+
+        Shows detailed information about an intent.
+
+    SYNTAX
+            intent:show id
+
+    ARGUMENTS
+            id  Intent Id
+
+intent:map
+^^^^^^^^^^
+
+List/Add/Delete current state from/to the mapping service.
+
+::
+
+    DESCRIPTION
+            intent:map
+
+            List/Add/Delete current state from/to the mapping service.
+
+    SYNTAX
+            intent:map [options]
+
+             Examples: --list, -l [ENTER], to retrieve all keys.
+                       --add-key <key> [ENTER], to add a new key with empty contents.
+                       --del-key <key> [ENTER], to remove a key with its values.
+                       --add-key <key> --value [<value 1>, <value 2>, ...] [ENTER],
+                         to add a new key with some values (json format).
+    OPTIONS
+           --help
+               Display this help message
+           -l, --list
+               List values associated with a particular key.
+           -l / --filter <regular expression> [ENTER]
+           --add-key
+               Adds a new key to the mapping service.
+           --add-key <key name> [ENTER]
+           --value
+               Specifies which value should be added/delete from the mapping service.
+           --value "key=>value"... --value "key=>value" [ENTER]
+               (defaults to [])
+           --del-key
+               Deletes a key from the mapping service.
+           --del-key <key name> [ENTER]
+
+NIC Usage Examples
+------------------
+
+Default Requirements
+~~~~~~~~~~~~~~~~~~~~
+
+Start mininet, and create three switches (s1, s2, and s3) and four hosts
+(h1, h2, h3, and h4) in it.
+
+Replace <Controller IP> based on your environment.
+
+::
+
+    $  sudo mn --mac --topo tree,2 --controller=remote,ip=<Controller IP>
+
+::
+
+     mininet> net
+     h1 h1-eth0:s2-eth1
+     h2 h2-eth0:s2-eth2
+     h3 h3-eth0:s3-eth1
+     h4 h4-eth0:s3-eth2
+     s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
+     s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
+     s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
+
+Downloading and deploying the Karaf distribution
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-  Download the OpenDaylight distribution.
+
+-  Unzip the downloaded zip distribution.
+
+-  Run Karaf:
+
+::
+
+    ./bin/karaf
+
+-  Once the console is up, enter the following command to install the
+   NIC features:
+
+::
+
+    feature:install odl-nic-core-mdsal odl-nic-console odl-nic-listeners
+
+Simple Mininet topology
+-----------------------
+
+.. code:: python
+
+    #!/usr/bin/python
+
+    from mininet.topo import Topo
+
+    class SimpleTopology( Topo ):
+        "Simple topology example."
+
+        def __init__( self ):
+            "Create custom topo."
+
+            # Initialize topology
+            Topo.__init__( self )
+
+            # Add hosts and switches
+            Switch1 = self.addSwitch( 's1' )
+            Switch2 = self.addSwitch( 's2' )
+            Switch3 = self.addSwitch( 's3' )
+            Switch4 = self.addSwitch( 's4' )
+            Host11 = self.addHost( 'h1' )
+            Host12 = self.addHost( 'h2' )
+            Host21 = self.addHost( 'h3' )
+            Host22 = self.addHost( 'h4' )
+            Host23 = self.addHost( 'h5' )
+            # Host used to represent the service
+            Service1 = self.addHost( 'srvc1' )
+
+            # Add links
+            self.addLink( Host11, Switch1 )
+            self.addLink( Host12, Switch1 )
+            self.addLink( Host21, Switch2 )
+            self.addLink( Host22, Switch2 )
+            self.addLink( Host23, Switch2 )
+            self.addLink( Switch1, Switch2 )
+            self.addLink( Switch2, Switch4 )
+            self.addLink( Switch4, Switch3 )
+            self.addLink( Switch3, Switch1 )
+            self.addLink( Switch3, Service1 )
+            self.addLink( Switch4, Service1 )
+
+
+    topos = { 'simpletopology': ( lambda: SimpleTopology() ) }
+
+    Source: https://gist.github.com/vinothgithub15/315d0a427d5afc39f2d7
+
+How to configure VTN Renderer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section demonstrates how to allow or block traffic using the VTN
+Renderer, according to the specified flow conditions.
+
+The table below lists the actions to be applied when a packet matches
+the condition:
+
++----------------+-----------------------------------------------------------+
+| Action         | Function                                                  |
++================+===========================================================+
+| Allow          | Permits the packet to be forwarded normally.              |
++----------------+-----------------------------------------------------------+
+| Block          | Discards the packet preventing it from being forwarded.   |
++----------------+-----------------------------------------------------------+
+
+Requirement
+^^^^^^^^^^^
+
+-  Before executing the following steps, complete the default
+   requirements. See section `Default Requirements <#_default_requirements>`__.
+
+Configuration
+^^^^^^^^^^^^^
+
+Please execute the following curl commands to test network intent using
+mininet:
+
+Create Intent
+'''''''''''''
+
+To provision the network for the two hosts (h1 and h2), the following
+command demonstrates the allow action.
+
+::
+
+    curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436034 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436034", "intent:actions" : [ { "order" : 2, "allow" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"10.0.0.1"} }, { "order":2 , "end-point-group" : {"name":"10.0.0.2"}} ] } }'
+
+To provision the network for the two hosts (h2 and h3), the following
+command demonstrates the allow action.
+
+::
+
+    curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436035 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436035", "intent:actions" : [ { "order" : 2, "allow" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"10.0.0.2"} }, { "order":2 , "end-point-group" : {"name":"10.0.0.3"}} ] } }'
+
+Verification
+''''''''''''
+
+As the allow action has been applied, pings should now succeed between
+hosts h1 and h2, and between h2 and h3.
+
+::
+
+     mininet> pingall
+     Ping: testing ping reachability
+     h1 -> h2 X X
+     h2 -> h1 h3 X
+     h3 -> X h2 X
+     h4 -> X X X
+
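The reachability matrix printed by ``pingall`` can also be checked
programmatically. The helper below is an illustrative sketch (not part of NIC
or Mininet) that parses lines like ``h1 -> h2 X X`` into the set of host pairs
that answered:

```python
def reachable_pairs(pingall_output):
    """Parse `mininet> pingall` result lines like 'h1 -> h2 X X' into
    the set of (src, dst) pairs where the ping succeeded."""
    pairs = set()
    for line in pingall_output.strip().splitlines():
        if "->" not in line:
            continue  # skip the "testing ping reachability" header
        src, targets = line.split("->")
        src = src.strip()
        # 'X' marks an unreachable destination; keep only real host names
        for dst in [h for h in targets.split() if h != "X"]:
            pairs.add((src, dst))
    return pairs

# The pingall output shown above, after the first allow intents:
output = """
Ping: testing ping reachability
h1 -> h2 X X
h2 -> h1 h3 X
h3 -> X h2 X
h4 -> X X X
"""
assert ("h1", "h2") in reachable_pairs(output)
assert ("h2", "h3") in reachable_pairs(output)
```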
+Update the intent
+'''''''''''''''''
+
+To provision the block action, indicating that traffic is not allowed
+between h1 and h2.
+
+::
+
+    curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436034 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436034", "intent:actions" : [ { "order" : 2, "block" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"10.0.0.1"} }, { "order":2 , "end-point-group" : {"name":"10.0.0.2"}} ] } }'
+
+Verification
+''''''''''''
+
+As the block action has been applied, pings should now fail between
+hosts h1 and h2.
+
+::
+
+     mininet> pingall
+     Ping: testing ping reachability
+     h1 -> X X X
+     h2 -> X h3 X
+     h3 -> X h2 X
+     h4 -> X X X
+
+.. note::
+
+    Old actions and hosts are replaced by the new action and hosts.
+
+Delete the intent
+'''''''''''''''''
+
+The respective intent and its traffic flows will be deleted.
+
+::
+
+    curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X DELETE http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436035
+
+Verification
+''''''''''''
+
+After deletion of the intent and its flows, no pings succeed.
+
+::
+
+     mininet> pingall
+     Ping: testing ping reachability
+     h1 -> X X X
+     h2 -> X X X
+     h3 -> X X X
+     h4 -> X X X
+
+.. note::
+
+    Ping between two hosts can also be done using MAC addresses.
+
+To provision the network for the two hosts using their MAC addresses
+(h1 and h2).
+
+::
+
+    curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436035 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436035", "intent:actions" : [ { "order" : 2, "allow" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"6e:4f:f7:27:15:c9"} }, { "order":2 , "end-point-group" : {"name":"aa:7d:1f:4a:70:81"}} ] } }'
+
+How to configure Redirect Action
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section explains the redirect action supported in NIC. The
+redirect functionality forwards traffic to a service configured in SFC
+before forwarding it to the destination.
+
+.. figure:: ./images/nic/Service_Chaining.png
+   :alt: REDIRECT SERVICE
+
+   REDIRECT SERVICE
+
+The following steps explain how the redirect action functions:
+
+-  Configure the service in SFC using the SFC APIs.
+
+-  Configure the intent with redirect action and the service information
+   where the traffic needs to be redirected.
+
+-  The flows are computed as follows:
+
+   1. First flow entry between the node connected to the source host
+      and the ingress node of the configured service.
+
+   2. Second flow entry between the egress node of the configured
+      service and the node connected to the destination host.
+
+   3. Third flow entry between the destination host node and the source
+      host node.
+
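The three flow entries above can be sketched as a simple function. This is a
hypothetical illustration of the logic, not the actual OF-Renderer code; the
node IDs in the example assume h1 attaches to openflow:1, h5 to openflow:2,
and that the service ingress/egress are openflow:3/openflow:4, matching the
SFC configuration later in this section:

```python
def redirect_flow_pairs(src_node, service_ingress, service_egress, dst_node):
    """Return the three (from, to) node pairs installed for a redirect
    intent, following the steps described above."""
    return [
        (src_node, service_ingress),  # 1. source host node -> service ingress
        (service_egress, dst_node),   # 2. service egress -> destination host node
        (dst_node, src_node),         # 3. destination host node -> source host node
    ]

flows = redirect_flow_pairs("openflow:1", "openflow:3",
                            "openflow:4", "openflow:2")
```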
+Requirement
+^^^^^^^^^^^
+
+-  Save the `Simple Mininet
+   topology <#_simple_mininet_topology>`__ script as redirect\_test.py
+
+-  Start mininet, and create switches in it.
+
+Replace <Controller IP> based on your environment.
+
+::
+
+    sudo mn --controller=remote,ip=<Controller IP> --custom redirect_test.py --topo simpletopology
+
+::
+
+     mininet> net
+     h1 h1-eth0:s1-eth1
+     h2 h2-eth0:s1-eth2
+     h3 h3-eth0:s2-eth1
+     h4 h4-eth0:s2-eth2
+     h5 h5-eth0:s2-eth3
+     srvc1 srvc1-eth0:s3-eth3 srvc1-eth1:s4-eth3
+     s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0 s1-eth3:s2-eth4 s1-eth4:s3-eth2
+     s2 lo:  s2-eth1:h3-eth0 s2-eth2:h4-eth0 s2-eth3:h5-eth0 s2-eth4:s1-eth3 s2-eth5:s4-eth1
+     s3 lo:  s3-eth1:s4-eth2 s3-eth2:s1-eth4 s3-eth3:srvc1-eth0
+     s4 lo:  s4-eth1:s2-eth5 s4-eth2:s3-eth1 s4-eth3:srvc1-eth1
+     c0
+
+Starting Karaf
+^^^^^^^^^^^^^^
+
+-  Before executing the following steps, complete the default
+   requirements. See section `Downloading and deploying the Karaf
+   distribution <#_default_requirements>`__.
+
+Configuration
+^^^^^^^^^^^^^
+
+Mininet
+'''''''
+
+.. figure:: ./images/nic/Redirect_flow.png
+   :alt: CONFIGURING THE NETWORK IN MININET
+
+   CONFIGURING THE NETWORK IN MININET
+
+-  Configure srvc1 as a service node in the Mininet environment.
+
+Execute the following commands in the Mininet console (where the
+Mininet script is executed).
+
+::
+
+     srvc1 ip addr del 10.0.0.6/8 dev srvc1-eth0
+     srvc1 brctl addbr br0
+     srvc1 brctl addif br0 srvc1-eth0
+     srvc1 brctl addif br0 srvc1-eth1
+     srvc1 ifconfig br0 up
+     srvc1 tc qdisc add dev srvc1-eth1 root netem delay 200ms
+
+Configure service in SFC
+''''''''''''''''''''''''
+
+The service (srvc1) is configured using the SFC REST API. As part of
+the configuration, the ingress and egress nodes connected to the
+service are configured.
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{
+      "service-functions": {
+        "service-function": [
+          {
+            "name": "srvc1",
+            "sf-data-plane-locator": [
+              {
+                "name": "Egress",
+                "service-function-forwarder": "openflow:4"
+              },
+              {
+                "name": "Ingress",
+                "service-function-forwarder": "openflow:3"
+              }
+            ],
+            "nsh-aware": false,
+            "type": "delay"
+          }
+        ]
+      }
+    }' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
+
+**SFF RESTCONF Request**
+
+Configuring switch and port information for the service functions.
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{
+      "service-function-forwarders": {
+        "service-function-forwarder": [
+          {
+            "name": "openflow:3",
+            "service-node": "OVSDB2",
+            "sff-data-plane-locator": [
+              {
+                "name": "Ingress",
+                "data-plane-locator":
+                {
+                    "vlan-id": 100,
+                    "mac": "11:11:11:11:11:11",
+                    "transport": "service-locator:mac"
+                },
+                "service-function-forwarder-ofs:ofs-port":
+                {
+                    "port-id" : "3"
+                }
+              }
+            ],
+            "service-function-dictionary": [
+              {
+                "name": "srvc1",
+                "sff-sf-data-plane-locator":
+                {
+                    "sf-dpl-name" : "openflow:3",
+                    "sff-dpl-name" : "Ingress"
+                }
+              }
+            ]
+          },
+          {
+            "name": "openflow:4",
+            "service-node": "OVSDB3",
+            "sff-data-plane-locator": [
+              {
+                "name": "Egress",
+                "data-plane-locator":
+                {
+                    "vlan-id": 200,
+                    "mac": "44:44:44:44:44:44",
+                    "transport": "service-locator:mac"
+                },
+                "service-function-forwarder-ofs:ofs-port":
+                {
+                    "port-id" : "3"
+                }
+              }
+            ],
+            "service-function-dictionary": [
+              {
+                "name": "srvc1",
+                "sff-sf-data-plane-locator":
+                {
+                    "sf-dpl-name" : "openflow:4",
+                    "sff-dpl-name" : "Egress"
+                }
+              }
+            ]
+          }
+        ]
+      }
+    }' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
+
+CLI Command
+'''''''''''
+
+To provision the network for the two hosts (h1 and h5), the following
+command demonstrates the redirect action with the service name srvc1.
+
+::
+
+    intent:add -f <SOURCE_MAC> -t <DESTINATION_MAC> -a REDIRECT -s <SERVICE_NAME>
+
+Example:
+
+::
+
+    intent:add -f 32:bc:ec:65:a7:d1 -t c2:80:1f:77:41:ed -a REDIRECT -s srvc1
+
+Verification
+''''''''''''
+
+-  As the redirect action has been applied, pings should now succeed
+   between hosts h1 and h5.
+
+::
+
+     mininet> h1 ping h5
+     PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
+     64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=201 ms
+     64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=200 ms
+     64 bytes from 10.0.0.5: icmp_seq=4 ttl=64 time=200 ms
+
+The redirect functionality can be verified by the time taken by the
+ping operation (about 200 ms). The service srvc1 configured using SFC
+introduces a 200 ms delay, and since the traffic from h1 to h5 is
+redirected via srvc1, its round-trip time increases accordingly.
+
+-  Flow entries added to nodes for the redirect action.
+
+::
+
+     mininet> dpctl dump-flows
+     *** s1 ------------------------------------------------------------------------
+     NXST_FLOW reply (xid=0x4):
+     cookie=0x0, duration=9.406s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=1,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:4
+     cookie=0x0, duration=9.475s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=3,dl_src=c2:80:1f:77:41:ed, dl_dst=32:bc:ec:65:a7:d1 actions=output:1
+     cookie=0x1, duration=362.315s, table=0, n_packets=144, n_bytes=12240, idle_age=4, priority=9500,dl_type=0x88cc actions=CONTROLLER:65535
+     cookie=0x1, duration=362.324s, table=0, n_packets=4, n_bytes=168, idle_age=3, priority=10000,arp actions=CONTROLLER:65535,NORMAL
+     *** s2 ------------------------------------------------------------------------
+     NXST_FLOW reply (xid=0x4):
+     cookie=0x0, duration=9.503s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=3,dl_src=c2:80:1f:77:41:ed, dl_dst=32:bc:ec:65:a7:d1 actions=output:4
+     cookie=0x0, duration=9.437s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=5,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:3
+     cookie=0x3, duration=362.317s, table=0, n_packets=144, n_bytes=12240, idle_age=4, priority=9500,dl_type=0x88cc actions=CONTROLLER:65535
+     cookie=0x3, duration=362.32s, table=0, n_packets=4, n_bytes=168, idle_age=3, priority=10000,arp actions=CONTROLLER:65535,NORMAL
+     *** s3 ------------------------------------------------------------------------
+     NXST_FLOW reply (xid=0x4):
+     cookie=0x0, duration=9.41s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=2,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:3
+     *** s4 ------------------------------------------------------------------------
+     NXST_FLOW reply (xid=0x4):
+     cookie=0x0, duration=9.486s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=3,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:1
+
+How to configure QoS Attribute Mapping
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section explains how to provision QoS attribute mapping constraint
+using NIC OF-Renderer.
+
+The QoS attribute mapping currently supports DiffServ. It uses a 6-bit
+differentiated services code point (DSCP) in the 8-bit differentiated
+services field (DS field) in the IP header.
+
++----------------+-----------------------------------------------------------+
+| Action         | Function                                                  |
++================+===========================================================+
+| Allow          | Permits the packet to be forwarded normally, but allows   |
+|                | for packet header fields, e.g., DSCP, to be modified.     |
++----------------+-----------------------------------------------------------+
+
+The following steps explain how the QoS attribute mapping functions:
+
+-  Initially configure the QoS profile which contains profile name and
+   DSCP value.
+
+-  When a packet is transferred from a source to a destination, the
+   flow builder evaluates whether the packet matches the conditions of
+   the flow, such as the action and endpoints.
+
+-  If the packet matches the endpoints, the flow builder applies the
+   flow matching action and DSCP value.
+
+Requirement
+^^^^^^^^^^^
+
+-  Before executing the following steps, complete the default
+   requirements. See section `Default
+   Requirements <#_default_requirements>`__.
+
+Configuration
+^^^^^^^^^^^^^
+
+Please execute the following CLI commands to test network intent using
+mininet:
+
+-  To apply the QoS constraint, configure the QoS profile.
+
+::
+
+    intent:qosConfig -p <qos_profile_name> -d <valid_dscp_value>
+
+Example:
+
+::
+
+    intent:qosConfig -p High_Quality -d 46
+
+.. note::
+
+    Valid DSCP value ranges from 0-63.
+
+-  To provision the network for the two hosts (h1 and h3), add intents
+   that allow traffic in both directions by executing the following CLI
+   commands.
+
+This demonstrates the ALLOW action with the QoS constraint and a QoS
+profile name.
+
+::
+
+    intent:add -a ALLOW -t <DESTINATION_MAC> -f <SOURCE_MAC> -q QOS -p <qos_profile_name>
+
+Example:
+
+::
+
+    intent:add -a ALLOW -t 00:00:00:00:00:03 -f 00:00:00:00:00:01 -q QOS -p High_Quality
+    intent:add -a ALLOW -t 00:00:00:00:00:01 -f 00:00:00:00:00:03 -q QOS -p High_Quality
+
+Verification
+''''''''''''
+
+-  As the ALLOW action has been applied, pings should now succeed
+   between hosts h1 and h3.
+
+::
+
+     mininet> h1 ping h3
+     PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+     64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
+     64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
+     64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
+
+-  Verify the flow entries and ensure that mod\_nw\_tos is part of the
+   actions.
+
+::
+
+     mininet> dpctl dump-flows
+     *** s1 ------------------------------------------------------------------------
+     NXST_FLOW reply (xid=0x4):
+     cookie=0x0, duration=21.873s, table=0, n_packets=3, n_bytes=294, idle_age=21, priority=9000,dl_src=00:00:00:00:00:03,dl_dst=00:00:00:00:00:01 actions=NORMAL,mod_nw_tos:184
+     cookie=0x0, duration=41.252s, table=0, n_packets=3, n_bytes=294, idle_age=41, priority=9000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:03 actions=NORMAL,mod_nw_tos:184
+
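The value 184 in ``mod_nw_tos`` above follows directly from the DSCP
definition given earlier: the 6-bit DSCP occupies the upper bits of the 8-bit
DS (ToS) field, so the High_Quality profile's DSCP of 46 becomes a ToS of
46 << 2 = 184. A minimal check (an illustrative helper, not part of NIC):

```python
def dscp_to_tos(dscp):
    """Shift a 6-bit DSCP value into the upper 6 bits of the 8-bit
    DS (ToS) field, as carried by the mod_nw_tos action."""
    if not 0 <= dscp <= 63:  # valid DSCP range, per the note above
        raise ValueError("DSCP must be in the range 0-63")
    return dscp << 2

# High_Quality profile (DSCP 46) -> mod_nw_tos:184
assert dscp_to_tos(46) == 184
```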
+Requirement
+~~~~~~~~~~~
+
+-  Before executing the following steps, complete the default
+   requirements. See section `Default Requirements <#_default_requirements>`__.
+
+How to configure Log Action
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section demonstrates the log action in the OF Renderer. The
+demonstration aims at enabling communication between two hosts and
+logging the flow statistics details of the particular traffic.
+
+Configuration
+^^^^^^^^^^^^^
+
+Please execute the following CLI commands to test network intent using
+mininet:
+
+-  To provision the network for the two hosts (h1 and h3), add intents
+   that allow traffic in both directions by executing the following CLI
+   commands.
+
+::
+
+    intent:add -a ALLOW -t <DESTINATION_MAC> -f <SOURCE_MAC>
+
+Example:
+
+::
+
+    intent:add -a ALLOW -t 00:00:00:00:00:03 -f 00:00:00:00:00:01
+    intent:add -a ALLOW -t 00:00:00:00:00:01 -f 00:00:00:00:00:03
+
+-  To log the flow statistics details of the particular traffic.
+
+::
+
+    intent:add -a LOG -t <DESTINATION_MAC> -f <SOURCE_MAC>
+
+Example:
+
+::
+
+    intent:add -a LOG -t 00:00:00:00:00:03 -f 00:00:00:00:00:01
+
+Verification
+''''''''''''
+
+-  As the ALLOW action has been applied, pings should now succeed
+   between hosts h1 and h3.
+
+::
+
+     mininet> h1 ping h3
+     PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+     64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
+     64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
+     64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
+
+-  To view the flow statistics log details, such as byte count, packet
+   count, and duration, check karaf.log.
+
+::
+
+    2015-12-15 22:56:20,256 | INFO | lt-dispatcher-23 | IntentFlowManager | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Creating block intent for endpoints: source00:00:00:00:00:01 destination 00:00:00:00:00:03
+    2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Byte Count:Counter64 [_value=238]
+    2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Packet Count:Counter64 [_value=3]
+    2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Duration in Nano second:Counter32 [_value=678000000]
+    2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Duration in Second:Counter32 [_value=49]
+
diff --git a/docs/user-guide/neutron-service-user-guide.rst b/docs/user-guide/neutron-service-user-guide.rst
new file mode 100644 (file)
index 0000000..91681c0
--- /dev/null
@@ -0,0 +1,85 @@
+Neutron Service User Guide
+==========================
+
+Overview
+--------
+
+This Karaf feature (``odl-neutron-service``) provides integration
+support for OpenStack Neutron via the OpenDaylight ML2 mechanism driver.
+The Neutron Service is only one of the components necessary for
+OpenStack integration. For the related components, please refer to the
+documentation of each component:
+
+-  https://wiki.openstack.org/wiki/Neutron
+
+-  https://launchpad.net/networking-odl
+
+-  http://git.openstack.org/cgit/openstack/networking-odl/
+
+-  https://wiki.opendaylight.org/view/NeutronNorthbound:Main
+
+Use cases and who will use the feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you want OpenStack integration with OpenDaylight, you will need this
+feature together with an OpenDaylight provider feature such as
+ovsdb/netvirt, group based policy, VTN, or lisp mapper. For provider
+configuration, please refer to each individual provider’s
+documentation. The Neutron service only provides the northbound API for
+the OpenStack Neutron ML2 mechanism driver; without those provider
+features, the Neutron service itself isn’t useful.
+
+Neutron Service feature Architecture
+------------------------------------
+
+The Neutron service provides the northbound API for OpenStack Neutron
+via RESTCONF and also its own dedicated REST API. It communicates with
+providers through its YANG models.
+
+.. figure:: ./images/neutron/odl-neutron-service-architecture.png
+   :alt: Neutron Service Architecture
+
+   Neutron Service Architecture
+
+Configuring Neutron Service feature
+-----------------------------------
+
+As the Karaf feature includes everything necessary for communicating
+northbound, no special configuration is needed. Usually this feature is
+used with an OpenDaylight southbound plugin that implements the actual
+network virtualization functionality, together with OpenStack Neutron.
+You will need to set up those configurations; refer to the related
+documentation for each.
+
+Administering or Managing ``odl-neutron-service``
+-------------------------------------------------
+
+There is no specific configuration regarding the Neutron service
+itself.
+For related configuration, please refer to OpenStack Neutron
+configuration and OpenDaylight related services which are providers for
+OpenStack.
+
+Installing ``odl-neutron-service`` while the controller is running
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. While OpenDaylight is running, at the Karaf prompt, type:
+   ``feature:install odl-neutron-service``.
+
+2. Wait a while until the initialization is done and the controller
+   stabilizes.
+
+``odl-neutron-service`` provides only a unified interface for OpenStack
+Neutron. It doesn’t provide actual functionality for network
+virtualization. Refer to each OpenDaylight project documentation for
+actual configuration with OpenStack Neutron.
+
+Neutron Logger
+--------------
+
+Another service, the Neutron Logger, is provided for debugging/logging
+purposes. It logs changes on Neutron YANG models.
+
+::
+
+    feature:install odl-neutron-logger
+
diff --git a/docs/user-guide/ocp-plugin-user-guide.rst b/docs/user-guide/ocp-plugin-user-guide.rst
new file mode 100644 (file)
index 0000000..a9b4fb4
--- /dev/null
@@ -0,0 +1,298 @@
+OCP Plugin User Guide
+=====================
+
+This document describes how to use the ORI Control & Management Protocol
+(OCP) feature in OpenDaylight. This document contains overview, scope,
+architecture and design, installation, configuration and tutorial
+sections for the feature.
+
+Overview
+--------
+
+OCP is an ETSI standard protocol for control and management of Remote
+Radio Head (RRH) equipment. The OCP Project addresses the need for a
+southbound plugin that allows applications and controller services to
+interact with RRHs using OCP. The OCP southbound plugin will allow
+applications acting as a Radio Equipment Control (REC) to interact with
+RRHs that support an OCP agent.
+
+.. figure:: ./images/ocpplugin/ocp-sb-plugin.jpg
+   :alt: OCP southbound plugin
+
+   OCP southbound plugin
+
+It is foreseen that, in 5G, C-RAN will use the packet-based
+Transport-SDN (T-SDN) as the fronthaul network to transport both control
+plane and user plane data between RRHs and BBUs. As a result, the
+addition of the OCP plugin to OpenDaylight will make it possible to
+build an RRH controller on top of OpenDaylight to centrally manage
+deployed RRHs, as well as integrating the RRH controller with T-SDN on
+one single platform, achieving the joint RRH and fronthaul network
+provisioning in C-RAN.
+
+Scope
+-----
+
+The OCP Plugin project includes:
+
+-  OCP v4.1.1 support
+
+-  Integration of OCP protocol library
+
+-  Simple API invoked as an RPC
+
+-  Simple API that allows applications to perform elementary functions
+   of the following categories:
+
+   -  Device management
+
+   -  Config management
+
+   -  Object lifecycle
+
+   -  Object state management
+
+   -  Fault management
+
+   -  Software management (not implemented as of Boron)
+
+-  Indication processing
+
+-  Logging (not implemented as of Boron)
+
+-  AISG/Iuant interface message tunnelling (not implemented as of Boron)
+
+-  ALD connection management (not implemented as of Boron)
+
+Architecture and Design
+-----------------------
+
+OCP is a vendor-neutral standard communications interface defined to
+enable control and management between RE and REC of an ORI architecture.
+The OCP Plugin supports the implementation of the OCP specification; it
+is based on the Model Driven Service Abstraction Layer (MD-SAL)
+architecture.
+
+OCP Plugin will support the following functionality:
+
+-  Connection handling
+
+-  Session management
+
+-  State management
+
+-  Error handling
+
+-  Connection establishment will be handled by the OCP library using
+   the open-source netty.io library
+
+-  Message handling
+
+-  Event/indication handling and propagation to upper layers
+
+**Activities in OCP plugin module**
+
+-  Integration with OCP protocol library
+
+-  Integration with corresponding MD-SAL infrastructure
+
+The OCP protocol library is a component in OpenDaylight that mediates
+communication between the OpenDaylight controller and RRHs supporting
+the OCP protocol. Its primary goal is to provide the OCP Plugin with a
+communication channel that can be used for managing RRHs.
+
+Key objectives:
+
+-  Immutable transfer objects generation (transformation of OCP protocol
+   library’s POJO objects into OpenDaylight DTO objects)
+
+-  Scalable non-blocking implementation
+
+-  Pipeline processing
+
+-  Scatter buffer
+
+-  TLS support
+
+OCP Service addresses the need for a northbound interface that allows
+applications and other controller services to interact with RRHs using
+OCP, by providing API for abstracting OCP operations.
+
+.. figure:: ./images/ocpplugin/plugin-design.jpg
+   :alt: Overall architecture
+
+   Overall architecture
+
+Message Flow
+------------
+
+.. figure:: ./images/ocpplugin/message_flow.jpg
+   :alt: Message flow example
+
+   Message flow example
+
+Installation
+------------
+
+The OCP Plugin project has two top level Karaf features,
+odl-ocpplugin-all and odl-ocpjava-all, which contain the following
+sub-features:
+
+-  odl-ocpplugin-southbound
+
+-  odl-ocpplugin-app-ocp-service
+
+-  odl-ocpjava-protocol
+
+The OCP service (odl-ocpplugin-app-ocp-service), together with the OCP
+southbound (odl-ocpplugin-southbound) and OCP protocol library
+(odl-ocpjava-protocol), provides OpenDaylight with basic OCP v4.1.1
+functionality.
+
+There are two ways to interact with the OCP service: programmatically
+via RESTCONF or manually via the DLUX web interface. Install the
+following features to enable RESTCONF and DLUX:
+
+::
+
+    karaf#>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core odl-dlux-all
+
+Then install the odl-ocpplugin-all feature which includes the
+odl-ocpplugin-southbound and odl-ocpplugin-app-ocp-service features.
+Note that the odl-ocpjava-all feature will be installed automatically as
+the odl-ocpplugin-southbound feature is dependent on the
+odl-ocpjava-protocol feature.
+
+::
+
+    karaf#>feature:install odl-ocpplugin-all
+
+After all required features are installed, use the following command
+from the Karaf console to verify that the features are correctly
+installed and initialized:
+
+::
+
+    karaf#>feature:list | grep ocp
+
+Configuration
+-------------
+
+Configuring the OCP plugin can be done via its configuration file,
+62-ocpplugin.xml, which can be found in the
+<odl-install-dir>/etc/opendaylight/karaf/ directory.
+
+As of Boron, the following settings are configurable:
+
+1. **port** specifies the port number on which the OCP plugin listens
+   for connection requests
+
+2. **radioHead-idle-timeout** determines the duration (in milliseconds)
+   for which a radio head must be idle before the idle event is
+   triggered to perform a health check
+
+3. **ocp-version** specifies the OCP protocol version supported by the
+   OCP plugin
+
+4. **rpc-requests-quota** sets the maximum number of concurrent RPC
+   requests allowed
+
+5. **global-notification-quota** sets the maximum number of concurrent
+   notifications allowed
+
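+As a rough sketch, the settings above appear in 62-ocpplugin.xml as XML
+elements like the following (the element layout and values shown here
+are illustrative assumptions; consult the actual file for the exact
+structure):
+
+::
+
+    <port>1033</port>
+    <radioHead-idle-timeout>15000</radioHead-idle-timeout>
+    <ocp-version>4.1.1</ocp-version>
+    <rpc-requests-quota>20000</rpc-requests-quota>
+    <global-notification-quota>64000</global-notification-quota>
+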
+.. figure:: ./images/ocpplugin/plugin-config.jpg
+   :alt: OCP plugin configuration
+
+   OCP plugin configuration
+
+Test Environment
+----------------
+
+The OCP Plugin project contains a simple OCP agent for testing purposes.
+The agent is designed to act as a fake radio head device, giving you an
+idea of what the OCP handshake between an OCP agent and OpenDaylight
+(the OCP plugin) looks like.
+
+To run the simple OCP agent, first download its JAR file from the
+OpenDaylight Nexus repository:
+
+::
+
+    wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/ocpplugin/simple-agent/0.1.0-Boron/simple-agent-0.1.0-Boron.jar
+
+Then run the agent with no arguments (assuming you already have JDK 1.8
+or above installed); it displays a usage message listing the expected
+arguments:
+
+::
+
+    java -classpath simple-agent-0.1.0-Boron.jar org.opendaylight.ocpplugin.OcpAgent
+
+    Usage: java org.opendaylight.ocpplugin.OcpAgent <controller's ip address> <port number> <vendor id> <serial number>
+
+Here is an example:
+
+::
+
+    java -classpath simple-agent-0.1.0-Boron.jar org.opendaylight.ocpplugin.OcpAgent 127.0.0.1 1033 XYZ 123
+
+Web / Graphical Interface
+-------------------------
+
+Once you enable the DLUX feature, you can access the controller GUI at
+the following URL:
+
+::
+
+    http://<controller-ip>:8080/index.html
+
+Expand Nodes. You should see all the radio head devices that are
+connected to the controller running at <controller-ip>.
+
+.. figure:: ./images/ocpplugin/dlux-ocp-nodes.jpg
+   :alt: DLUX Nodes
+
+   DLUX Nodes
+
+Expand Yang UI to browse the various northbound APIs exposed by the OCP
+service.
+
+.. figure:: ./images/ocpplugin/dlux-ocp-apis.jpg
+   :alt: DLUX Yang UI
+
+   DLUX Yang UI
+
+For information on how to use these northbound APIs, please refer to the
+OCP Plugin Developer Guide.
+
+Programmatic Interface
+----------------------
+
+The OCP Plugin project has implemented a complete set of the C&M
+operations (elementary functions) defined in the OCP specification, in
+the form of both northbound and southbound APIs, including:
+
+-  health-check
+
+-  set-time
+
+-  re-reset
+
+-  get-param
+
+-  modify-param
+
+-  create-obj
+
+-  delete-obj
+
+-  get-state
+
+-  modify-state
+
+-  get-fault
+
+The APIs are documented in the OCP Plugin Developer Guide under the
+Southbound API and Northbound API sections, respectively.
+
diff --git a/docs/user-guide/of-config-user-guide.rst b/docs/user-guide/of-config-user-guide.rst
new file mode 100644 (file)
index 0000000..cd92a18
--- /dev/null
@@ -0,0 +1,89 @@
+OF-CONFIG User Guide
+====================
+
+Overview
+--------
+
+OF-CONFIG defines an OpenFlow switch as an abstraction called an
+OpenFlow Logical Switch. The OF-CONFIG protocol enables configuration of
+essential artifacts of an OpenFlow Logical Switch so that an OpenFlow
+controller can communicate with and control the OpenFlow Logical Switch
+via the OpenFlow protocol.
+
+OF-CONFIG introduces an operating context for one or more OpenFlow data
+paths called an OpenFlow Capable Switch. An OpenFlow Capable Switch is
+intended to be equivalent to an actual physical or virtual network
+element (e.g. an Ethernet switch) which hosts one or more OpenFlow data
+paths by partitioning a set of OpenFlow-related resources, such as ports
+and queues, among the hosted OpenFlow data paths. The OF-CONFIG protocol
+enables dynamic association of the OpenFlow-related resources of an
+OpenFlow Capable Switch with specific OpenFlow Logical Switches hosted
+on that Capable Switch.
+
+OF-CONFIG does not specify or report how the partitioning of resources
+on an OpenFlow Capable Switch is achieved. It assumes that resources
+such as ports and queues are partitioned amongst multiple OpenFlow
+Logical Switches such that each OpenFlow Logical Switch can assume full
+control over the resources that are assigned to it.
+
+How to start
+------------
+
+-  Start the OF-CONFIG feature:
+
+   ::
+
+       feature:install odl-of-config-all
+
+Configuration on the OVS supporting OF-CONFIG
+---------------------------------------------
+
+.. note::
+
+    OVS is currently not supported by OF-CONFIG because OpenDaylight
+    implements OF-CONFIG version 1.2, while the OVS implementation of
+    OF-CONFIG is not standard.
+
+An introduction to configuring OVS is available at
+https://github.com/openvswitch/of-config.
+
+Connection Establishment between the Capable/Logical Switch and OF-CONFIG
+-------------------------------------------------------------------------
+
+The OF-CONFIG protocol is based on NETCONF, so switches supporting
+OF-CONFIG can also access OpenDaylight using the functionality provided
+by NETCONF. This is the preparation step before connecting via
+OF-CONFIG. For how to connect a switch to OpenDaylight using NETCONF,
+refer to the `NETCONF Southbound User
+Guide <#_southbound_netconf_connector>`__ or `NETCONF Southbound
+examples on the
+wiki <https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf>`__.
+
+Once switches supporting OF-CONFIG have connected to the controller
+using NETCONF as described in the preparation phase, OF-CONFIG can check
+whether a switch supports OF-CONFIG by reading the capability list
+advertised over NETCONF.
+
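+For example, a switch's NETCONF <hello> message advertises OF-CONFIG
+support with a capability entry along the following lines (the exact
+module name and revision date here are illustrative assumptions):
+
+::
+
+    <capability>urn:onf:config:yang?module=of-config&revision=2014-06-30</capability>
+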
+OF-CONFIG then retrieves the information about the capable switch and
+logical switches via the NETCONF connection, and creates separate
+topologies for the capable and logical switches in the OpenDaylight
+Topology module.
+
+At this point, the connection between the capable/logical switches and
+OF-CONFIG is established.
+
+Configuration On Capable Switch
+-------------------------------
+
+Here is an example showing how to configure
+modify-controller-connection on the capable switch using OF-CONFIG.
+Other configurations follow the same pattern.
+
+-  Example: modify-controller-connection
+
+.. note::
+
+    This configuration can also be executed via NETCONF; refer to the
+    `NETCONF Southbound User
+    Guide <#_southbound_netconf_connector>`__ or `NETCONF Southbound
+    examples on the
+    wiki <https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf>`__.
+
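+For orientation, such a request is a NETCONF <edit-config> against the
+capable switch's configuration. The fragment below only illustrates the
+general shape; the namespace, identifiers, and addresses are
+illustrative assumptions rather than verbatim OF-CONFIG 1.2 schema:
+
+::
+
+    <edit-config>
+      <target><running/></target>
+      <config>
+        <capable-switch xmlns="urn:onf:config:yang">
+          <id>CapableSwitch0</id>
+          <logical-switches>
+            <switch>
+              <id>LogicalSwitch0</id>
+              <controllers>
+                <controller>
+                  <id>Controller0</id>
+                  <ip-address>192.168.1.1</ip-address>
+                  <port>6633</port>
+                  <protocol>tcp</protocol>
+                </controller>
+              </controllers>
+            </switch>
+          </logical-switches>
+        </capable-switch>
+      </config>
+    </edit-config>
+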
diff --git a/docs/user-guide/opflex-agent-ovs-user-guide.rst b/docs/user-guide/opflex-agent-ovs-user-guide.rst
new file mode 100644 (file)
index 0000000..c054635
--- /dev/null
@@ -0,0 +1,422 @@
+OpFlex agent-ovs User Guide
+===========================
+
+Introduction
+------------
+
+agent-ovs is a policy agent that works with OVS to enforce a group-based
+policy networking model with locally attached virtual machines or
+containers. The policy agent is designed to work well with orchestration
+tools like OpenStack.
+
+Agent Configuration
+-------------------
+
+The agent configuration is handled using its config file which is by
+default found at "/etc/opflex-agent-ovs/opflex-agent-ovs.conf"
+
+Here is an example configuration file that documents the available
+options:
+
+::
+
+    {
+        // Logging configuration
+        // "log": {
+        //    "level": "info"
+        // },
+
+        // Configuration related to the OpFlex protocol
+        "opflex": {
+            // The policy domain for this agent.
+            "domain": "openstack",
+
+            // The unique name in the policy domain for this agent.
+            "name": "example-agent",
+
+            // a list of peers to connect to, by hostname and port.  One
+            // peer, or an anycast pseudo-peer, is sufficient to bootstrap
+            // the connection without needing an exhaustive list of all
+            // peers.
+            "peers": [
+                // EXAMPLE:
+                {"hostname": "10.0.0.30", "port": 8009}
+            ],
+
+            "ssl": {
+                // SSL mode.  Possible values:
+                // disabled: communicate without encryption
+                // encrypted: encrypt but do not verify peers
+                // secure: encrypt and verify peer certificates
+                "mode": "disabled",
+
+                // The path to a directory containing trusted certificate
+                // authority public certificates, or a file containing a
+                // specific CA certificate.
+                "ca-store": "/etc/ssl/certs/"
+            },
+
+            "inspector": {
+                // Enable the MODB inspector service, which allows
+                // inspecting the state of the managed object database.
+                // Default: enabled
+                "enabled": true,
+
+                // Listen on the specified socket for the inspector
+                // Default: /var/run/opflex-agent-ovs-inspect.sock
+                "socket-name": "/var/run/opflex-agent-ovs-inspect.sock"
+            }
+        },
+
+        // Endpoint sources provide metadata about local endpoints
+        "endpoint-sources": {
+            // Filesystem path to monitor for endpoint information
+            "filesystem": ["/var/lib/opflex-agent-ovs/endpoints"]
+        },
+
+        // Renderers enforce policy obtained via OpFlex.
+        "renderers": {
+            // Stitched-mode renderer for interoperating with a
+            // hardware fabric such as ACI
+            // EXAMPLE:
+            "stitched-mode": {
+                "ovs-bridge-name": "br0",
+
+                // Set encapsulation type.  Must set either vxlan or vlan.
+                "encap": {
+                    // Encapsulate traffic with VXLAN.
+                    "vxlan" : {
+                        // The name of the tunnel interface in OVS
+                        "encap-iface": "br0_vxlan0",
+
+                        // The name of the interface whose IP should be used
+                        // as the source IP in encapsulated traffic.
+                        "uplink-iface": "eth0.4093",
+
+                        // The vlan tag, if any, used on the uplink interface.
+                        // Set to zero or omit if the uplink is untagged.
+                        "uplink-vlan": 4093,
+
+                        // The IP address used for the destination IP in
+                        // the encapsulated traffic.  This should be an
+                        // anycast IP address understood by the upstream
+                        // stitched-mode fabric.
+                        "remote-ip": "10.0.0.32",
+
+                        // UDP port number of the encapsulated traffic.
+                        "remote-port": 8472
+                    }
+
+                    // Encapsulate traffic with a locally-significant VLAN
+                    // tag
+                    // EXAMPLE:
+                    // "vlan" : {
+                    //     // The name of the uplink interface in OVS
+                    //     "encap-iface": "team0"
+                    // }
+                },
+
+                // Configure forwarding policy
+                "forwarding": {
+                    // Configure the virtual distributed router
+                    "virtual-router": {
+                        // Enable virtual distributed router.  Set to true
+                        // to enable or false to disable.  Default true.
+                        "enabled": true,
+
+                        // Override MAC address for virtual router.
+                        // Default is "00:22:bd:f8:19:ff"
+                        "mac": "00:22:bd:f8:19:ff",
+
+                        // Configure IPv6-related settings for the virtual
+                        // router
+                        "ipv6" : {
+                            // Send router advertisement messages in
+                            // response to router solicitation requests as
+                            // well as unsolicited advertisements.  This
+                            // is not required in stitched mode since the
+                            // hardware router will send them.
+                            "router-advertisement": true
+                        }
+                    },
+
+                    // Configure virtual distributed DHCP server
+                    "virtual-dhcp": {
+                        // Enable virtual distributed DHCP server.  Set to
+                        // true to enable or false to disable.  Default
+                        // true.
+                        "enabled": true,
+
+                        // Override MAC address for virtual dhcp server.
+                        // Default is "00:22:bd:f8:19:ff"
+                        "mac": "00:22:bd:f8:19:ff"
+                    },
+
+                    "endpoint-advertisements": {
+                        // Enable generation of periodic ARP/NDP
+                        // advertisements for endpoints.  Default true.
+                        "enabled": "true"
+                    }
+                },
+
+                // Location to store cached IDs for managing flow state
+                "flowid-cache-dir": "/var/lib/opflex-agent-ovs/ids"
+            }
+        }
+    }
+
+Endpoint Registration
+---------------------
+
+The agent learns about endpoints using endpoint metadata files located
+by default in "/var/lib/opflex-agent-ovs/endpoints".
+
+These are JSON-format files such as the (unusually complex) example
+below:
+
+::
+
+    {
+        "uuid": "83f18f0b-80f7-46e2-b06c-4d9487b0c754",
+        "policy-space-name": "test",
+        "endpoint-group-name": "group1",
+        "interface-name": "veth0",
+        "ip": [
+            "10.0.0.1", "fd8f:69d8:c12c:ca62::1"
+        ],
+        "dhcp4": {
+            "ip": "10.200.44.2",
+            "prefix-len": 24,
+            "routers": ["10.200.44.1"],
+            "dns-servers": ["8.8.8.8", "8.8.4.4"],
+            "domain": "example.com",
+            "static-routes": [
+                {
+                    "dest": "169.254.169.0",
+                    "dest-prefix": 24,
+                    "next-hop": "10.0.0.1"
+                }
+            ]
+        },
+        "dhcp6": {
+            "dns-servers": ["2001:4860:4860::8888", "2001:4860:4860::8844"],
+            "search-list": ["test1.example.com", "example.com"]
+        },
+        "ip-address-mapping": [
+            {
+               "uuid": "91c5b217-d244-432c-922d-533c6c036ab4",
+               "floating-ip": "5.5.5.1",
+               "mapped-ip": "10.0.0.1",
+               "policy-space-name": "common",
+               "endpoint-group-name": "nat-epg"
+            },
+            {
+               "uuid": "22bfdc01-a390-4b6f-9b10-624d4ccb957b",
+               "floating-ip": "fdf1:9f86:d1af:6cc9::1",
+               "mapped-ip": "fd8f:69d8:c12c:ca62::1",
+               "policy-space-name": "common",
+               "endpoint-group-name": "nat-epg"
+            }
+        ],
+        "mac": "00:00:00:00:00:01",
+        "promiscuous-mode": false
+    }
+
+The possible parameters for these files are:
+
+**uuid**
+    A globally unique ID for the endpoint
+
+**endpoint-group-name**
+    The name of the endpoint group for the endpoint
+
+**policy-space-name**
+    The name of the policy space for the endpoint group.
+
+**interface-name**
+    The name of the OVS interface to which the endpoint is attached
+
+**ip**
+    A list of strings containing the IPv4 or IPv6 addresses that the
+    endpoint is allowed to use
+
+**mac**
+    The MAC address for the endpoint’s interface.
+
+**promiscuous-mode**
+    Allow traffic from this VM to bypass default port security
+
+**dhcp4**
+    A distributed DHCPv4 configuration block (see below)
+
+**dhcp6**
+    A distributed DHCPv6 configuration block (see below)
+
+**ip-address-mapping**
+    A list of IP address mapping configuration blocks (see below)
+
+DHCPv4 configuration blocks can contain the following parameters:
+
+**ip**
+    The IP address to return with DHCP. Must be one of the configured
+    IPv4 addresses.
+
+**prefix-len**
+    The subnet prefix length
+
+**routers**
+    A list of default gateways for the endpoint
+
+**dns-servers**
+    A list of DNS server addresses
+
+**domain**
+    The domain name parameter to send in the DHCP reply
+
+**static-routes**
+    A list of static route configuration blocks, each of which contains
+    "dest", "dest-prefix", and "next-hop" parameters to send as static
+    routes to the end host
+
+DHCPv6 configuration blocks can contain the following parameters:
+
+**dns-servers**
+    A list of DNS servers for the endpoint
+
+**search-list**
+    The DNS search path for the endpoint
+
+IP address mapping configuration blocks can contain the following
+parameters:
+
+**uuid**
+    a globally unique ID for the virtual endpoint created by the
+    mapping.
+
+**floating-ip**
+    Map using DNAT to this floating IPv4 or IPv6 address
+
+**mapped-ip**
+    the source IPv4 or IPv6 address; must be one of the IPs assigned to
+    the endpoint.
+
+**endpoint-group-name**
+    The name of the endpoint group for the NATed IP
+
+**policy-space-name**
+    The name of the policy space for the NATed IP
+
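+For contrast with the complex example above, a minimal endpoint file
+needs only a handful of these parameters (the values below are
+illustrative, not from a real deployment):
+
+::
+
+    {
+        "uuid": "00000000-0000-0000-0000-000000000001",
+        "policy-space-name": "test",
+        "endpoint-group-name": "group1",
+        "interface-name": "veth0",
+        "ip": ["10.0.0.2"],
+        "mac": "00:00:00:00:00:02"
+    }
+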
+Inspector
+---------
+
+The OpFlex inspector is a command-line tool that lets you inspect the
+state of the agent's managed object database for debugging and
+diagnosis.
+
+The command is called "gbp\_inspect" and takes the following arguments:
+
+::
+
+    # gbp_inspect -h
+    Usage: ./gbp_inspect [options]
+    Allowed options:
+      -h [ --help ]                         Print this help message
+      --log arg                             Log to the specified file (default
+                                            standard out)
+      --level arg (=warning)                Use the specified log level (default
+                                            info)
+      --syslog                              Log to syslog instead of file or
+                                            standard out
+      --socket arg (=/usr/local/var/run/opflex-agent-ovs-inspect.sock)
+                                            Connect to the specified UNIX domain
+                                            socket (default /usr/local/var/run/opfl
+                                            ex-agent-ovs-inspect.sock)
+      -q [ --query ] arg                    Query for a specific object with
+                                            subjectname,uri or all objects of a
+                                            specific type with subjectname
+      -r [ --recursive ]                    Retrieve the whole subtree for each
+                                            returned object
+      -f [ --follow-refs ]                  Follow references in returned objects
+      --load arg                            Load managed objects from the specified
+                                            file into the MODB view
+      -o [ --output ] arg                   Output the results to the specified
+                                            file (default standard out)
+      -t [ --type ] arg (=tree)             Specify the output format: tree, list,
+                                            or dump (default tree)
+      -p [ --props ]                        Include object properties in output
+
+Here are some examples of the ways to use this tool.
+
+You can get information about the running system using one or more
+queries, which consist of an object model class name and optionally the
+URI of a specific object. The simplest query is to get a single object,
+nonrecursively:
+
+::
+
+    # gbp_inspect -q DmtreeRoot
+    --* DmtreeRoot,/
+    # gbp_inspect -q GbpEpGroup
+    --* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
+    --* GbpEpGroup,/PolicyUniverse/PolicySpace/test/GbpEpGroup/group1/
+    # gbp_inspect -q GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
+    --* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
+
+You can also display all the properties for each object:
+
+::
+
+    # gbp_inspect -p -q GbpeL24Classifier
+    --* GbpeL24Classifier,/PolicyUniverse/PolicySpace/test/GbpeL24Classifier/classifier4/
+         {
+           connectionTracking : 1 (reflexive)
+           dFromPort          : 80
+           dToPort            : 80
+           etherT             : 2048 (ipv4)
+           name               : classifier4
+           prot               : 6
+         }
+    --* GbpeL24Classifier,/PolicyUniverse/PolicySpace/test/GbpeL24Classifier/classifier3/
+         {
+           etherT : 34525 (ipv6)
+           name   : classifier3
+           order  : 100
+           prot   : 58
+         }
+    --* GbpeL24Classifier,/PolicyUniverse/PolicySpace/test/GbpeL24Classifier/classifier2/
+         {
+           etherT : 2048 (ipv4)
+           name   : classifier2
+           order  : 101
+           prot   : 1
+         }
+
+You can also retrieve all the children of an object you query for:
+
+::
+
+    # gbp_inspect -r -q GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
+    --* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
+      |-* GbpeInstContext,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpeInstContext/
+      `-* GbpEpGroupToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpEpGroupToNetworkRSrc/
+
+You can also follow references found in any object downloads:
+
+::
+
+    # gbp_inspect -fr -q GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
+    --* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
+      |-* GbpeInstContext,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpeInstContext/
+      `-* GbpEpGroupToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpEpGroupToNetworkRSrc/
+    --* GbpFloodDomain,/PolicyUniverse/PolicySpace/common/GbpFloodDomain/fd_ext/
+      `-* GbpFloodDomainToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpFloodDomain/fd_ext/GbpFloodDomainToNetworkRSrc/
+    --* GbpBridgeDomain,/PolicyUniverse/PolicySpace/common/GbpBridgeDomain/bd_ext/
+      `-* GbpBridgeDomainToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpBridgeDomain/bd_ext/GbpBridgeDomainToNetworkRSrc/
+    --* GbpRoutingDomain,/PolicyUniverse/PolicySpace/common/GbpRoutingDomain/rd_ext/
+      |-* GbpRoutingDomainToIntSubnetsRSrc,/PolicyUniverse/PolicySpace/common/GbpRoutingDomain/rd_ext/GbpRoutingDomainToIntSubnetsRSrc/122/%2fPolicyUniverse%2fPolicySpace%2fcommon%2fGbpSubnets%2fsubnets_ext%2f/
+      `-* GbpForwardingBehavioralGroupToSubnetsRSrc,/PolicyUniverse/PolicySpace/common/GbpRoutingDomain/rd_ext/GbpForwardingBehavioralGroupToSubnetsRSrc/
+    --* GbpSubnets,/PolicyUniverse/PolicySpace/common/GbpSubnets/subnets_ext/
+      |-* GbpSubnet,/PolicyUniverse/PolicySpace/common/GbpSubnets/subnets_ext/GbpSubnet/subnet_ext4/
+      `-* GbpSubnet,/PolicyUniverse/PolicySpace/common/GbpSubnets/subnets_ext/GbpSubnet/subnet_ext6/
+
diff --git a/docs/user-guide/ovsdb-netvirt.rst b/docs/user-guide/ovsdb-netvirt.rst
new file mode 100644 (file)
index 0000000..7c78875
--- /dev/null
@@ -0,0 +1,2264 @@
+OVSDB NetVirt
+=============
+
+NetVirt
+-------
+
+The OVSDB NetVirt project delivers two major pieces of functionality:
+
+1. The OVSDB Southbound Protocol, and
+
+2. NetVirt, a network virtualization solution.
+
+The following diagram shows the system-level architecture of OVSDB
+NetVirt in an OpenStack-based solution.
+
+.. figure:: ./images/ovsdb/ovsdb-netvirt-architecture.jpg
+   :alt: OVSDB NetVirt Architecture
+
+   OVSDB NetVirt Architecture
+
+NetVirt is a network virtualization solution that is a Neutron service
+provider, and therefore supports the OpenStack Neutron Networking API
+and extensions.
+
+The OVSDB component implements the OVSDB protocol (RFC 7047), as well as
+plugins to support OVSDB Schemas, such as the Open\_vSwitch database
+schema and the hardware\_vtep database schema.
+
+NetVirt has MDSAL-based interfaces with Neutron on the northbound side,
+and OVSDB and OpenFlow plugins on the southbound side.
+
+OVSDB NetVirt currently supports Open vSwitch virtual switches via
+OpenFlow and OVSDB. Work is underway to support hardware gateways.
+
+NetVirt services are enabled by installing the odl-ovsdb-openstack
+feature using the following command:
+
+::
+
+    feature:install odl-ovsdb-openstack
+
+To enable NetVirt’s distributed Layer 3 routing services, the following
+line must be uncommented in the etc/custom.properties file in the
+OpenDaylight distribution prior to starting karaf:
+
+::
+
+    ovsdb.l3.fwd.enabled=yes
+
+To start the OpenDaylight controller, run the following application in
+your distribution:
+
+::
+
+    bin/karaf
+
+More details about using NetVirt with OpenStack can be found in the
+following places:
+
+1. The "OpenDaylight and OpenStack" guide, and
+
+2. `Getting Started with OpenDaylight OVSDB Plugin Network
+   Virtualization <https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization>`__
+
+Some additional details about using OpenStack Security Groups and the
+Data Plane Development Kit (DPDK) are provided below.
+
+Security groups
+~~~~~~~~~~~~~~~
+
+Security groups in OpenStack filter packets based on the configured
+policies. The stock OpenStack implementation uses iptables to realize
+security groups; OpenDaylight instead uses OVS flows, which removes the
+many layers of bridges and ports required by the iptables
+implementation.
+
+Rules are applied on the basis of the following attributes:
+ingress/egress, protocol, port range, and prefix. In the pipeline, table
+40 is used for egress ACL rules and table 90 for ingress ACL rules.
+
+Stateful Implementation
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Security groups are implemented in two modes, stateful and stateless.
+Stateful mode can be enabled by changing the corresponding setting from
+false to true in etc/opendaylight/karaf/netvirt-impl-default-config.xml.
+
+The stateful implementation uses the conntrack capabilities of OVS to
+track existing connections. This mode requires OVS 2.5 and Linux kernel
+4.3. OVS, integrated with the netfilter framework, tracks each
+connection using the five-tuple (layer-3 protocol, source address,
+destination address, layer-4 protocol, layer-4 key). The connection
+state is independent of the upper-level state of connection-oriented
+protocols like TCP, and even connectionless protocols like UDP will have
+a pseudo state. With this implementation, OVS sends the packet to the
+netfilter framework to determine whether there is an existing entry for
+the connection; netfilter returns the packet to OVS with the appropriate
+flag set. The states of interest are:
+
+::
+
+    -trk - The packet was never sent to the netfilter framework
+
+::
+
+    +trk+est - The connection is already known and was allowed previously;
+    pass the packet to the next table.
+
+::
+
+    +trk+new - This is a new connection. If there is a specific rule in the
+    table which allows this traffic with a commit action, an entry will be
+    made in the netfilter framework. If there is no specific rule to allow
+    this traffic, the packet will be dropped.
+
+So, by default, a packet is dropped unless there is a rule to allow
+the packet.
+
+Stateless Implementation
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The stateless mode is for OVS 2.4 and below, where connection tracking
+is not supported. Here we have pseudo-connection tracking using the TCP
+SYN flag. All protocols other than TCP are allowed by default. For TCP,
+SYN packets are dropped by default unless there is a specific rule that
+allows TCP SYN packets to a particular port.
+
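+As an illustration, an ingress rule admitting TCP SYN packets to a
+particular port (SSH, in this hypothetical sketch; the priority value is
+also an assumption) would resemble:
+
+::
+
+    cookie=0x0, table=90, n_packets=0, n_bytes=0,
+    priority=61005,tcp,tp_dst=22,tcp_flags=+syn actions=goto_table:100
+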
+Fixed Rules
+^^^^^^^^^^^
+
+Security groups are associated with the VM port when the VM is spawned.
+By default a set of rules, referred to as the fixed security group
+rules, is applied to the VM port. These include the DHCP rules, the ARP
+rules, and the conntrack rules. The conntrack rules are inserted only in
+stateful mode.
+
+DHCP rules
+''''''''''
+
+The DHCP rules are added to the VM port when a VM is spawned. The fixed
+DHCP rules are:
+
+-  Allow DHCP server traffic ingress.
+
+   ::
+
+       cookie=0x0, duration=36.848s, table=90, n_packets=2, n_bytes=717,
+       priority=61006,udp,dl_src=fa:16:3e:a1:f9:d0,
+       tp_src=67,tp_dst=68 actions=goto_table:100
+
+   ::
+
+       cookie=0x0, duration=36.566s, table=90, n_packets=0, n_bytes=0, 
+       priority=61006,udp6,dl_src=fa:16:3e:a1:f9:d0,
+       tp_src=547,tp_dst=546 actions=goto_table:100
+
+-  Allow DHCP client traffic egress.
+
+   ::
+
+       cookie=0x0, duration=2165.596s, table=40, n_packets=2, n_bytes=674, 
+       priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50
+
+   ::
+
+       cookie=0x0, duration=2165.513s, table=40, n_packets=0, n_bytes=0, 
+       priority=61012,udp6,tp_src=546,tp_dst=547 actions=goto_table:50
+
+-  Prevent DHCP server traffic from the VM port (DHCP spoofing).
+
+   ::
+
+       cookie=0x0, duration=34.711s, table=40, n_packets=0, n_bytes=0, 
+       priority=61011,udp,in_port=2,tp_src=67,tp_dst=68 actions=drop
+
+   ::
+
+       cookie=0x0, duration=34.519s, table=40, n_packets=0, n_bytes=0, 
+       priority=61011,udp6,in_port=2,tp_src=547,tp_dst=546 actions=drop
+
+ARP rules
+'''''''''
+
+The default ARP rules allow ARP traffic to go in and out of the VM
+port.
+
+::
+
+    cookie=0x0, duration=35.015s, table=40, n_packets=10, n_bytes=420, 
+    priority=61010,arp,arp_sha=fa:16:3e:93:88:60 actions=goto_table:50
+
+::
+
+    cookie=0x0, duration=35.582s, table=90, n_packets=1, n_bytes=42, 
+    priority=61010,arp,arp_tha=fa:16:3e:93:88:60 actions=goto_table:100
+
+Conntrack rules
+'''''''''''''''
+
+These rules are inserted only in stateful mode. The conntrack rules
+use the netfilter framework to track packets. The rules below are
+added to leverage it.
+
+-  If a packet is not tracked (connection state -trk), it is sent to
+   netfilter for tracking.
+
+-  If the packet is already tracked (netfilter returns connection
+   state +trk,+est) and the connection is established, the packet is
+   allowed to go through the pipeline.
+
+-  The third rule is the default drop rule, which drops the packet if
+   it is tracked and new (netfilter returns connection state
+   +trk,+new). This rule has lower priority than the custom rules
+   which may be added.
+
+   ::
+
+       cookie=0x0, duration=35.015s table=40,priority=61021,in_port=3,
+       ct_state=-trk,action=ct"("table=0")"
+
+   ::
+
+       cookie=0x0, duration=35.015s table=40,priority=61020,in_port=3,
+       ct_state=+trk+est,action=goto_table:50
+
+   ::
+
+       cookie=0x0, duration=35.015s table=40,priority=36002,in_port=3,
+       ct_state=+new,actions=drop
+
+   ::
+
+       cookie=0x0, duration=35.015s table=90,priority=61022,
+       dl_dst=fa:16:3e:0d:8d:21,ct_state=+trk+est,action=goto_table:100
+
+   ::
+
+       cookie=0x0, duration=35.015s table=90,priority=61021,
+       dl_dst=fa:16:3e:0d:8d:21,ct_state=-trk,action=ct"("table=0")"
+
+   ::
+
+       cookie=0x0, duration=35.015s table=90,priority=36002,
+       dl_dst=fa:16:3e:0d:8d:21,ct_state=+new,actions=drop
+
+TCP SYN Rule
+''''''''''''
+
+This rule is inserted in stateless mode only. It drops TCP SYN packets
+by default.
+
+Custom Security Groups
+^^^^^^^^^^^^^^^^^^^^^^
+
+Users can add security groups in OpenStack via the command line or UI.
+When a security group is associated with a VM, the flows related to
+each of its rules are added in the related tables. A preconfigured
+security group, called the default security group, is available in the
+neutron DB.
+
+Stateful
+''''''''
+
+If connection tracking is enabled, the match will have the connection
+state and the action will have a commit along with the goto. The
+commit sends the packet to the netfilter framework to cache the entry.
+After a commit, for the next packet of this connection netfilter will
+return +trk+est and the packet will match the fixed conntrack rule and
+get forwarded to the next table.
+
+::
+
+    cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
+    priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
+    nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
+
+::
+
+    cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
+    nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100
+
+::
+
+    cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
+    nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
+
+Stateless
+'''''''''
+
+If the mode is stateless, the match will have only the parameters
+specified in the security rule and a goto in the action. The ct\_state
+match and the commit action will be missing.
+
+::
+
+    cookie=0x0, duration=13211.171s, table=40, n_packets=0, n_bytes=0, 
+    priority=61007,icmp,dl_src=fa:16:3e:93:88:60,nw_dst=0.0.0.0/24,
+    icmp_type=2,icmp_code=4 actions=goto_table:50
+
+::
+
+    cookie=0x0, duration=199.674s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:dc:49:ff,nw_src=10.100.5.3,tp_dst=2222 
+    actions=goto_table:100
+
+::
+
+    cookie=0x0, duration=199.780s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,tcp,dl_dst=fa:16:3e:93:88:60,nw_src=10.100.5.4,tp_dst=3333 
+    actions=goto_table:100
+
+TCP/UDP Port Range
+''''''''''''''''''
+
+TCP/UDP port ranges are supported with the help of port masks, which
+dramatically reduces the number of flows required to cover a port
+range. The masked rules below cover the port range 333 to 777.
+
+::
+
+    cookie=0x0, duration=56.129s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
+    tp_dst=0x200/0xff00 actions=goto_table:100
+
+::
+
+    cookie=0x0, duration=55.805s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
+    tp_dst=0x160/0xffe0 actions=goto_table:100
+
+::
+
+    cookie=0x0, duration=55.587s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
+    tp_dst=0x300/0xfff8 actions=goto_table:100
+
+::
+
+    cookie=0x0, duration=55.437s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
+    tp_dst=0x150/0xfff0 actions=goto_table:100
+
+::
+
+    cookie=0x0, duration=55.282s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
+    tp_dst=0x14e/0xfffe actions=goto_table:100
+
+::
+
+    cookie=0x0, duration=54.063s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
+    tp_dst=0x308/0xfffe actions=goto_table:100
+
+::
+
+    cookie=0x0, duration=55.130s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
+    tp_dst=333 actions=goto_table:100
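
The masked matches above can be derived with a standard greedy prefix
decomposition of the range. The sketch below is illustrative (the
helper is hypothetical, not part of ODL); it yields pairs such as
0x150/0xfff0 and 0x308/0xfffe for the range 333-777:

```python
def port_range_to_masks(lo, hi):
    """Greedily split [lo, hi] into (port, mask) matches on 16-bit ports."""
    matches = []
    while lo <= hi:
        # Largest power-of-two block size that is aligned at lo ...
        size = lo & -lo or 1 << 16
        # ... and still fits inside the remaining range.
        while lo + size - 1 > hi:
            size >>= 1
        matches.append((lo, 0xFFFF & ~(size - 1)))
        lo += size
    return matches

for port, mask in port_range_to_masks(333, 777):
    print("tp_dst=%#x/%#x" % (port, mask))
```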
+
+CIDR/Remote Security Group
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When adding a security group rule, it can be made applicable to a set
+of CIDRs or to the set of VMs which have a particular (remote)
+security group associated with them.
+
+If a CIDR is selected, only one flow rule is added, allowing the
+traffic from/to the IPs belonging to that CIDR.
+
+::
+
+    cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
+    priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
+    nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
+
+If a remote security group is selected, a flow is inserted for every
+VM which has that security group associated.
+
+::
+
+    cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
+    nw_src=10.100.5.3,tp_dst=2222    actions=ct(commit),goto_table:100
+
+::
+
+    cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0, 
+    priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
+    nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100
+
+Rules supported in ODL
+^^^^^^^^^^^^^^^^^^^^^^
+
+The following rules are supported in the current implementation. The
+direction (ingress/egress) is always expected.
+
++--------------------+--------------------+--------------------+--------------------+
+| Protocol           | Port Range         | IP Prefix          | Remote Security    |
+|                    |                    |                    | Group supported    |
++--------------------+--------------------+--------------------+--------------------+
+| Any                | Any                | Any                | Yes                |
++--------------------+--------------------+--------------------+--------------------+
+| TCP                | 1 - 65535          | 0.0.0.0/0          | Yes                |
++--------------------+--------------------+--------------------+--------------------+
+| UDP                | 1 - 65535          | 0.0.0.0/0          | Yes                |
++--------------------+--------------------+--------------------+--------------------+
+| ICMP               | Any                | 0.0.0.0/0          | Yes                |
++--------------------+--------------------+--------------------+--------------------+
+
+Table: Supported Rules
+
+Note: The IPv6 and port-range features are not supported as of today.
+
+Using OVS with DPDK hosts and OVSDB NetVirt
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Data Plane Development Kit (`DPDK <http://dpdk.org/>`__) is a
+userspace set of libraries and drivers designed for fast packet
+processing. The userspace datapath variant of OVS can be built with DPDK
+enabled to provide the performance features of DPDK to Open vSwitch
+(OVS). In the 2.4.0 version of OVS, the Open\_vSwitch table schema was
+enhanced to include the lists *datapath-types* and *interface-types*.
+When the OVS with DPDK variant of OVS is running, the *interface-types*
+list will include DPDK interface types such as *dpdk* and
+*dpdkvhostuser*. The OVSDB Southbound Plugin includes this information
+in the OVSDB YANG model in the MD-SAL, so when a specific OVS host is
+running OVS with DPDK, it is possible for NetVirt to detect that
+information by checking that DPDK interface types are included in the
+list of supported interface types.
+
+For example, query the operational MD-SAL for OVSDB nodes:
+
+HTTP GET:
+
+::
+
+    http://{{CONTROLLER-IP}}:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
+
+Result Body:
+
+::
+
+    {
+      "topology": [
+        {
+          "topology-id": "ovsdb:1",
+          "node": [
+            < content edited out >
+            {
+              "node-id": "ovsdb://uuid/f9b58b6d-04db-459a-b914-fff82b738aec",
+              < content edited out >
+              "ovsdb:interface-type-entry": [
+                {
+                  "interface-type": "ovsdb:interface-type-ipsec-gre"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-internal"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-system"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-patch"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-dpdkvhostuser"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-dpdk"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-dpdkr"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-vxlan"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-lisp"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-geneve"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-gre"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-tap"
+                },
+                {
+                  "interface-type": "ovsdb:interface-type-stt"
+                }
+              ],
+              < content edited out >
+              "ovsdb:datapath-type-entry": [
+                {
+                  "datapath-type": "ovsdb:datapath-type-netdev"
+                },
+                {
+                  "datapath-type": "ovsdb:datapath-type-system"
+                }
+              ],
+              < content edited out >
+            },
+            < content edited out >
+          ]
+        }
+      ]
+    }
+
+This example illustrates the output of an OVS with DPDK host because the
+list of interface types includes types supported by DPDK.
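
An application can make the same check programmatically by scanning
the *ovsdb:interface-type-entry* list in the response. A minimal
sketch (the helper name is an assumption; the node dictionary is a
trimmed version of the JSON above, after parsing the RESTCONF
response):

```python
def supports_dpdk(node):
    """Return True if the OVSDB node advertises any DPDK interface type."""
    entries = node.get("ovsdb:interface-type-entry", [])
    return any("dpdk" in e.get("interface-type", "") for e in entries)

# Trimmed version of the operational MD-SAL node shown above.
node = {
    "node-id": "ovsdb://uuid/f9b58b6d-04db-459a-b914-fff82b738aec",
    "ovsdb:interface-type-entry": [
        {"interface-type": "ovsdb:interface-type-internal"},
        {"interface-type": "ovsdb:interface-type-dpdkvhostuser"},
        {"interface-type": "ovsdb:interface-type-dpdk"},
    ],
}
print(supports_dpdk(node))  # True
```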
+
+Bridges on OVS with DPDK hosts need to be created with the *netdev*
+datapath type and DPDK specific ports need to be created with the
+appropriate interface type. The OpenDaylight OVSDB Southbound Plugin
+supports these attributes.
+
+The OpenDaylight NetVirt application checks whether the OVS host is
+using OVS with DPDK when creating the bridges that are expected to be
+present on the host, e.g. *br-int*.
+
+Following are some tips for supporting hosts using OVS with DPDK when
+using NetVirt as the Neutron service provider and *devstack* to deploy
+OpenStack.
+
+In addition to the *networking-odl* ML2 plugin, enable the
+*networking-ovs-dpdk* plugin in *local.conf*.
+
+::
+
+    # For working with OpenStack Liberty
+    enable_plugin networking-odl https://github.com/FedericoRessi/networking-odl integration/liberty
+    enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk stable/liberty
+
+::
+
+    # For working with the OpenStack Mitaka (or later) branch
+    enable_plugin networking-odl https://github.com/openstack/networking-odl
+    enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk
+
+The order of these plugin lines is important. The *networking-odl*
+plugin will install and set up *openvswitch*. The *networking-ovs-dpdk*
+plugin will install OVS with DPDK. Note that the *networking-ovs-dpdk*
+plugin is only being used here to set up OVS with DPDK. The
+*networking-odl* plugin will be used as the Neutron ML2 driver.
+
+For VXLAN tenant network support, the NetVirt application interacts with
+OVS with DPDK host in the same way as OVS hosts using the kernel
+datapath by creating VXLAN ports on *br-int* to communicate with other
+tunnel endpoints. The IP address for the local tunnel endpoint may be
+configured in the *local.conf* file. For example:
+
+::
+
+    ODL_LOCAL_IP=192.100.200.10
+
+NetVirt will use this information to configure the VXLAN port on
+*br-int*. On a host with the OVS kernel datapath, it is expected that
+there will be a networking interface configured with this IP address. On
+an OVS with DPDK host, an OVS bridge is created and a DPDK port is added
+to the bridge. The local tunnel endpoint address is then assigned to the
+bridge port of the bridge. So, for example, if the physical network
+interface is associated with *eth0* on the host, a bridge named
+*br-eth0* could be created. The DPDK port, such as *dpdk0* (per the
+naming conventions of OVS with DPDK), is added to bridge *br-eth0*. The
+local tunnel endpoint address is assigned to the network interface
+*br-eth0* which is attached to bridge *br-eth0*. NetVirt does not
+perform this setup. The *networking-ovs-dpdk* plugin can be made to
+perform it by putting configuration like the following in *local.conf*.
+
+::
+
+    ODL_LOCAL_IP=192.168.200.9
+    ODL_PROVIDER_MAPPINGS=physnet1:eth0,physnet2:eth1
+    OVS_DPDK_PORT_MAPPINGS=eth0:br-eth0,eth1:br-ex
+    OVS_BRIDGE_MAPPINGS=physnet1:br-eth0,physnet2:br-ex
+
+The above settings associate the host networking interface *eth0* with
+bridge *br-eth0*. The *networking-ovs-dpdk* plugin will determine the
+DPDK port name associated with *eth0* and add it to the bridge
+*br-eth0*. If using the NetVirt L3 support, these settings will enable
+setup of the *br-ex* bridge and attach the DPDK port associated with
+network interface *eth1* to it.
+
+The following settings are included in *local.conf* to specify specific
+attributes associated with OVS with DPDK. These are used by the
+*networking-ovs-dpdk* plugin to configure OVS with DPDK.
+
+::
+
+    OVS_DATAPATH_TYPE=netdev
+    OVS_NUM_HUGEPAGES=8192
+    OVS_DPDK_MEM_SEGMENTS=8192
+    OVS_HUGEPAGE_MOUNT_PAGESIZE=2M
+    OVS_DPDK_RTE_LIBRTE_VHOST=y
+    OVS_DPDK_MODE=compute
+
+Once the stack is up and running, virtual machines may be deployed on OVS
+with DPDK hosts. The *networking-odl* plugin handles ensuring that
+*dpdkvhostuser* interfaces are utilized by Nova instead of the default
+*tap* interface. The *dpdkvhostuser* interface provides the best
+performance for VMs on OVS with DPDK hosts.
+
+A Nova flavor is created for VMs that may be deployed on OVS with DPDK
+hosts.
+
+::
+
+    nova flavor-create largepage-flavor 1002 1024 4 1
+    nova flavor-key 1002 set "hw:mem_page_size=large"
+
+Then, just specify the flavor when creating a VM.
+
+::
+
+    nova boot --flavor largepage-flavor --image cirros-0.3.4-x86_64-uec --nic net-id=<NET ID VALUE> vm-name
+
+OVSDB Plugins
+-------------
+
+Overview and Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are currently two OVSDB Southbound plugins:
+
+-  odl-ovsdb-southbound: Implements the OVSDB Open\_vSwitch database
+   schema.
+
+-  odl-ovsdb-hwvtepsouthbound: Implements the OVSDB hardware\_vtep
+   database schema.
+
+These plugins are normally installed and used automatically by higher
+level applications such as odl-ovsdb-openstack; however, they can also
+be installed separately and used via their REST APIs as is described in
+the following sections.
+
+OVSDB Southbound Plugin
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The OVSDB Southbound Plugin provides support for managing OVS hosts via
+an OVSDB model in the MD-SAL which maps to important tables and
+attributes present in the Open\_vSwitch schema. The OVSDB Southbound
+Plugin is able to connect actively or passively to OVS hosts and operate
+as the OVSDB manager of the OVS host. Using the OVSDB protocol it is
+able to manage the OVS database (OVSDB) on the OVS host as defined by
+the Open\_vSwitch schema.
+
+OVSDB YANG Model
+^^^^^^^^^^^^^^^^
+
+The OVSDB Southbound Plugin provides a YANG model which is based on the
+abstract `network topology
+model <https://github.com/opendaylight/yangtools/blob/stable/beryllium/yang/yang-parser-impl/src/test/resources/ietf/network-topology%402013-10-21.yang>`__.
+
+The details of the OVSDB YANG model are defined in the
+`ovsdb.yang <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/southbound/southbound-api/src/main/yang/ovsdb.yang>`__
+file.
+
+The OVSDB YANG model defines three augmentations:
+
+**ovsdb-node-augmentation**
+    This augments the network-topology node and maps primarily to the
+    Open\_vSwitch table of the OVSDB schema. The ovsdb-node-augmentation
+    is a representation of the OVS host. It contains the following
+    attributes.
+
+    -  **connection-info** - holds the local and remote IP address and
+       TCP port numbers for the OpenDaylight to OVSDB node connections
+
+    -  **db-version** - version of the OVSDB database
+
+    -  **ovs-version** - version of OVS
+
+    -  **list managed-node-entry** - a list of references to
+       ovsdb-bridge-augmentation nodes, which are the OVS bridges
+       managed by this OVSDB node
+
+    -  **list datapath-type-entry** - a list of the datapath types
+       supported by the OVSDB node (e.g. *system*, *netdev*) - depends
+       on newer OVS versions
+
+    -  **list interface-type-entry** - a list of the interface types
+       supported by the OVSDB node (e.g. *internal*, *vxlan*, *gre*,
+       *dpdk*, etc.) - depends on newer OVS versions
+
+    -  **list openvswitch-external-ids** - a list of the key/value pairs
+       in the Open\_vSwitch table external\_ids column
+
+    -  **list openvswitch-other-config** - a list of the key/value pairs
+       in the Open\_vSwitch table other\_config column
+
+    -  **list manager-entry** - list of manager information entries and
+       connection status
+
+    -  **list qos-entries** - list of QoS entries present in the QoS
+       table
+
+    -  **list queues** - list of queue entries present in the queue
+       table
+
+**ovsdb-bridge-augmentation**
+    This augments the network-topology node and maps to a specific
+    bridge in the OVSDB bridge table of the associated OVSDB node. It
+    contains the following attributes.
+
+    -  **bridge-uuid** - UUID of the OVSDB bridge
+
+    -  **bridge-name** - name of the OVSDB bridge
+
+    -  **bridge-openflow-node-ref** - a reference (instance-identifier)
+       of the OpenFlow node associated with this bridge
+
+    -  **list protocol-entry** - the version of OpenFlow protocol to use
+       with the OpenFlow controller
+
+    -  **list controller-entry** - a list of controller-uuid and
+       is-connected status of the OpenFlow controllers associated with
+       this bridge
+
+    -  **datapath-id** - the datapath ID associated with this bridge on
+       the OVSDB node
+
+    -  **datapath-type** - the datapath type of this bridge
+
+    -  **fail-mode** - the OVSDB fail mode setting of this bridge
+
+    -  **flow-node** - a reference to the flow node corresponding to
+       this bridge
+
+    -  **managed-by** - a reference to the ovsdb-node-augmentation
+       (OVSDB node) that is managing this bridge
+
+    -  **list bridge-external-ids** - a list of the key/value pairs in
+       the bridge table external\_ids column for this bridge
+
+    -  **list bridge-other-configs** - a list of the key/value pairs in
+       the bridge table other\_config column for this bridge
+
+**ovsdb-termination-point-augmentation**
+    This augments the topology termination point model. The OVSDB
+    Southbound Plugin uses this model to represent both the OVSDB port
+    and OVSDB interface for a given port/interface in the OVSDB schema.
+    It contains the following attributes.
+
+    -  **port-uuid** - UUID of an OVSDB port row
+
+    -  **interface-uuid** - UUID of an OVSDB interface row
+
+    -  **name** - name of the port and interface
+
+    -  **interface-type** - the interface type
+
+    -  **list options** - a list of port options
+
+    -  **ofport** - the OpenFlow port number of the interface
+
+    -  **ofport\_request** - the requested OpenFlow port number for the
+       interface
+
+    -  **vlan-tag** - the VLAN tag value
+
+    -  **list trunks** - list of VLAN tag values for trunk mode
+
+    -  **vlan-mode** - the VLAN mode (e.g. access, native-tagged,
+       native-untagged, trunk)
+
+    -  **list port-external-ids** - a list of the key/value pairs in the
+       port table external\_ids column for this port
+
+    -  **list interface-external-ids** - a list of the key/value pairs
+       in the interface table external\_ids interface for this interface
+
+    -  **list port-other-configs** - a list of the key/value pairs in
+       the port table other\_config column for this port
+
+    -  **list interface-other-configs** - a list of the key/value pairs
+       in the interface table other\_config column for this interface
+
+    -  **list interface-lldp** - LLDP Auto Attach configuration for the
+       interface
+
+    -  **qos** - UUID of the QoS entry in the QoS table assigned to this
+       port
+
+Getting Started
+^^^^^^^^^^^^^^^
+
+To install the OVSDB Southbound Plugin, use the following command at the
+Karaf console:
+
+::
+
+    feature:install odl-ovsdb-southbound-impl-ui
+
+After installing the OVSDB Southbound Plugin, and before any OVSDB
+topology nodes have been created, the OVSDB topology will appear as
+follows in the configuration and operational MD-SAL.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
+     or
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
+
+Result Body:
+
+::
+
+    {
+      "topology": [
+        {
+          "topology-id": "ovsdb:1"
+        }
+      ]
+    }
+
+Where
+
+*<controller-ip>* is the IP address of the OpenDaylight controller
+
+OpenDaylight as the OVSDB Manager
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An OVS host is a system which is running the OVS software and is capable
+of being managed by an OVSDB manager. The OVSDB Southbound Plugin is
+capable of connecting to an OVS host and operating as an OVSDB manager.
+Depending on the configuration of the OVS host, the connection of
+OpenDaylight to the OVS host will be active or passive.
+
+Active Connection to OVS Hosts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+An active connection is when the OVSDB Southbound Plugin initiates the
+connection to an OVS host. This happens when the OVS host is configured
+to listen for the connection (i.e. the OVSDB Southbound Plugin is
+active and the OVS host is passive). The OVS host is configured with
+the following command:
+
+::
+
+    sudo ovs-vsctl set-manager ptcp:6640
+
+This configures the OVS host to listen on TCP port 6640.
+
+The OVSDB Southbound Plugin can be configured via the configuration
+MD-SAL to actively connect to an OVS host.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
+
+Body:
+
+::
+
+    {
+      "network-topology:node": [
+        {
+          "node-id": "ovsdb://HOST1",
+          "connection-info": {
+            "ovsdb:remote-port": "6640",
+            "ovsdb:remote-ip": "<ovs-host-ip>"
+          }
+        }
+      ]
+    }
+
+Where
+
+*<ovs-host-ip>* is the IP address of the OVS Host
+
+Note that the configuration assigns a *node-id* of "ovsdb://HOST1" to
+the OVSDB node. This *node-id* will be used as the identifier for this
+OVSDB node in the MD-SAL.
+
+Query the configuration MD-SAL for the OVSDB topology.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
+
+Result Body:
+
+::
+
+    {
+      "topology": [
+        {
+          "topology-id": "ovsdb:1",
+          "node": [
+            {
+              "node-id": "ovsdb://HOST1",
+              "ovsdb:connection-info": {
+                "remote-ip": "<ovs-host-ip>",
+                "remote-port": 6640
+              }
+            }
+          ]
+        }
+      ]
+    }
+
+As a result of the OVSDB node configuration being added to the
+configuration MD-SAL, the OVSDB Southbound Plugin will attempt to
+connect with the specified OVS host. If the connection is successful,
+the plugin will connect to the OVS host as an OVSDB manager, query the
+schemas and databases supported by the OVS host, and register to monitor
+changes made to the OVSDB tables on the OVS host. It will also set an
+external id key and value in the external-ids column of the
+Open\_vSwitch table of the OVS host which identifies the MD-SAL instance
+identifier of the OVSDB node. This ensures that the OVSDB node will use
+the same *node-id* in both the configuration and operational MD-SAL.
+
+::
+
+    "opendaylight-iid" = "instance identifier of OVSDB node in the MD-SAL"
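
The instance identifier string follows the pattern visible in the
operational data. A sketch of how such a string can be composed (the
helper is hypothetical, shown only to make the pattern explicit):

```python
def ovsdb_node_iid(topology_id, node_id):
    """Build the MD-SAL instance identifier string for an OVSDB node."""
    return (
        "/network-topology:network-topology"
        "/network-topology:topology[network-topology:topology-id='%s']"
        "/network-topology:node[network-topology:node-id='%s']"
        % (topology_id, node_id)
    )

print(ovsdb_node_iid("ovsdb:1", "ovsdb://HOST1"))
```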
+
+When the OVS host sends the OVSDB Southbound Plugin the first update
+message after the monitoring has been established, the plugin will
+populate the operational MD-SAL with the information it receives from
+the OVS host.
+
+Query the operational MD-SAL for the OVSDB topology.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
+
+Result Body:
+
+::
+
+    {
+      "topology": [
+        {
+          "topology-id": "ovsdb:1",
+          "node": [
+            {
+              "node-id": "ovsdb://HOST1",
+              "ovsdb:openvswitch-external-ids": [
+                {
+                  "external-id-key": "opendaylight-iid",
+                  "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
+                }
+              ],
+              "ovsdb:connection-info": {
+                "local-ip": "<controller-ip>",
+                "remote-port": 6640,
+                "remote-ip": "<ovs-host-ip>",
+                "local-port": 39042
+              },
+              "ovsdb:ovs-version": "2.3.1-git4750c96",
+              "ovsdb:manager-entry": [
+                {
+                  "target": "ptcp:6640",
+                  "connected": true,
+                  "number_of_connections": 1
+                }
+              ]
+            }
+          ]
+        }
+      ]
+    }
+
+To disconnect an active connection, just delete the configuration MD-SAL
+entry.
+
+HTTP DELETE:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
+
+Note that in the above example, the */* characters which are part of
+the *node-id* are percent-encoded as "%2F".
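
The same encoding can be produced with a standard URL-quoting helper,
for example in Python (a sketch; only the */* characters need
escaping, so *:* is left in the safe set):

```python
from urllib.parse import quote

node_id = "ovsdb://HOST1"
encoded = quote(node_id, safe=":")
url = ("http://<controller-ip>:8181/restconf/config/"
       "network-topology:network-topology/topology/ovsdb:1/node/"
       + encoded)
print(encoded)  # ovsdb:%2F%2FHOST1
```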
+
+Passive Connection to OVS Hosts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A passive connection is when the OVS host initiates the connection to
+the OVSDB Southbound Plugin. This happens when the OVS host is
+configured to connect to the OVSDB Southbound Plugin. The OVS host is
+configured with the following command:
+
+::
+
+    sudo ovs-vsctl set-manager tcp:<controller-ip>:6640
+
+The OVSDB Southbound Plugin is configured to listen for OVSDB
+connections on TCP port 6640. This value can be changed by editing the
+"./karaf/target/assembly/etc/custom.properties" file and changing the
+value of the "ovsdb.listenPort" attribute.
+
+When a passive connection is made, the OVSDB node will appear first in
+the operational MD-SAL. If the Open\_vSwitch table does not contain an
+external-ids value of *opendaylight-iid*, then the *node-id* of the new
+OVSDB node will be created in the format:
+
+::
+
+    "ovsdb://uuid/<actual UUID value>"
+
+If an *opendaylight-iid* value was already present in the
+external-ids column, then the instance identifier defined there will be
+used to create the *node-id* instead.
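
The node-id selection on a passive connection can be sketched as
follows. This is illustrative only, with a deliberately simplified
string-based extraction; the real plugin parses the instance
identifier properly:

```python
def passive_node_id(external_ids, uuid):
    """Sketch: derive the node-id for a passively connected OVS host."""
    for entry in external_ids:
        if entry.get("external-id-key") == "opendaylight-iid":
            # Reuse the node-id embedded in the stored instance identifier.
            iid = entry["external-id-value"]
            return iid.rsplit("node-id='", 1)[1].rstrip("']")
    # Otherwise fall back to a UUID-based node-id.
    return "ovsdb://uuid/" + uuid

print(passive_node_id([], "163724f4-6a70-428a-a8a0-63b2a21f12dd"))
```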
+
+Query the operational MD-SAL for an OVSDB node after a passive
+connection.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
+
+Result Body:
+
+::
+
+    {
+      "topology": [
+        {
+          "topology-id": "ovsdb:1",
+          "node": [
+            {
+              "node-id": "ovsdb://uuid/163724f4-6a70-428a-a8a0-63b2a21f12dd",
+              "ovsdb:openvswitch-external-ids": [
+                {
+                  "external-id-key": "system-id",
+                  "external-id-value": "ecf160af-e78c-4f6b-a005-83a6baa5c979"
+                }
+              ],
+              "ovsdb:connection-info": {
+                "local-ip": "<controller-ip>",
+                "remote-port": 46731,
+                "remote-ip": "<ovs-host-ip>",
+                "local-port": 6640
+              },
+              "ovsdb:ovs-version": "2.3.1-git4750c96",
+              "ovsdb:manager-entry": [
+                {
+                  "target": "tcp:10.11.21.7:6640",
+                  "connected": true,
+                  "number_of_connections": 1
+                }
+              ]
+            }
+          ]
+        }
+      ]
+    }
+
+Take note of the *node-id* that was created in this case.
+
+Manage Bridges
+^^^^^^^^^^^^^^
+
+The OVSDB Southbound Plugin can be used to manage bridges on an OVS
+host.
+
+This example shows how to add a bridge to the OVSDB node
+*ovsdb://HOST1*.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
+
+Body:
+
+::
+
+    {
+      "network-topology:node": [
+        {
+          "node-id": "ovsdb://HOST1/bridge/brtest",
+          "ovsdb:bridge-name": "brtest",
+          "ovsdb:protocol-entry": [
+            {
+              "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
+            }
+          ],
+          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
+        }
+      ]
+    }
+
+Notice that the *ovsdb:managed-by* attribute is specified in the
+command. This indicates the association of the new bridge node with its
+OVSDB node.
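+
+Note that when a node-id containing slashes is used as a RESTCONF path
+segment (as in the PUT URL above), the slashes must be percent-encoded
+as ``%2F``. A small sketch of the encoding using the Python standard
+library (the helper name is an assumption):

```python
from urllib.parse import quote

def node_path_segment(node_id):
    """Encode an OVSDB node-id for use in a RESTCONF URL: slashes become
    %2F while colons stay literal."""
    return quote(node_id, safe=":")

print(node_path_segment("ovsdb://HOST1/bridge/brtest"))
# ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
```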
+
+Bridges can be updated. In the following example, OpenDaylight is
+configured to be the OpenFlow controller for the bridge.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
+
+Body:
+
+::
+
+    {
+      "network-topology:node": [
+            {
+              "node-id": "ovsdb://HOST1/bridge/brtest",
+                 "ovsdb:bridge-name": "brtest",
+                  "ovsdb:controller-entry": [
+                    {
+                      "target": "tcp:<controller-ip>:6653"
+                    }
+                  ],
+                 "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
+            }
+        ]
+    }
+
+If the OpenDaylight OpenFlow Plugin is installed, then checking on the
+OVS host will show that OpenDaylight has successfully connected as the
+controller for the bridge.
+
+::
+
+    $ sudo ovs-vsctl show
+        Manager "ptcp:6640"
+            is_connected: true
+        Bridge brtest
+            Controller "tcp:<controller-ip>:6653"
+                is_connected: true
+            Port brtest
+                Interface brtest
+                    type: internal
+        ovs_version: "2.3.1-git4750c96"
+
+Query the operational MD-SAL to see how the bridge appears.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/
+
+Result Body:
+
+::
+
+    {
+      "node": [
+        {
+          "node-id": "ovsdb://HOST1/bridge/brtest",
+          "ovsdb:bridge-name": "brtest",
+          "ovsdb:datapath-type": "ovsdb:datapath-type-system",
+          "ovsdb:datapath-id": "00:00:da:e9:0c:08:2d:45",
+          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']",
+          "ovsdb:bridge-external-ids": [
+            {
+              "bridge-external-id-key": "opendaylight-iid",
+              "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']"
+            }
+          ],
+          "ovsdb:protocol-entry": [
+            {
+              "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
+            }
+          ],
+          "ovsdb:bridge-uuid": "080ce9da-101e-452d-94cd-ee8bef8a4b69",
+          "ovsdb:controller-entry": [
+            {
+              "target": "tcp:10.11.21.7:6653",
+              "is-connected": true,
+              "controller-uuid": "c39b1262-0876-4613-8bfd-c67eec1a991b"
+            }
+          ],
+          "termination-point": [
+            {
+              "tp-id": "brtest",
+              "ovsdb:port-uuid": "c808ae8d-7af2-4323-83c1-e397696dc9c8",
+              "ovsdb:ofport": 65534,
+              "ovsdb:interface-type": "ovsdb:interface-type-internal",
+              "ovsdb:interface-uuid": "49e9417f-4479-4ede-8faf-7c873b8c0413",
+              "ovsdb:name": "brtest"
+            }
+          ]
+        }
+      ]
+    }
+
+Notice that just like with the OVSDB node, an *opendaylight-iid* has
+been added to the external-ids column of the bridge since it was created
+via the configuration MD-SAL.
+
+A bridge node may be deleted as well.
+
+HTTP DELETE:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
+
+Manage Ports
+^^^^^^^^^^^^
+
+Similarly, ports may be managed by the OVSDB Southbound Plugin.
+
+This example illustrates how a port with various attributes may be
+created on a bridge.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
+
+Body:
+
+::
+
+    {
+      "network-topology:termination-point": [
+        {
+          "ovsdb:options": [
+            {
+              "ovsdb:option": "remote_ip",
+              "ovsdb:value" : "10.10.14.11"
+            }
+          ],
+          "ovsdb:name": "testport",
+          "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
+          "tp-id": "testport",
+          "vlan-tag": "1",
+          "trunks": [
+            {
+              "trunk": "5"
+            }
+          ],
+          "vlan-mode":"access"
+        }
+      ]
+    }
+
+Ports can be updated. In this example, another VLAN trunk is added to
+the port.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
+
+Body:
+
+::
+
+    {
+      "network-topology:termination-point": [
+        {
+          "ovsdb:name": "testport",
+          "tp-id": "testport",
+          "trunks": [
+            {
+              "trunk": "5"
+            },
+            {
+              "trunk": "500"
+            }
+          ]
+        }
+      ]
+    }
+
+Query the operational MD-SAL for the port.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
+
+Result Body:
+
+::
+
+    {
+      "termination-point": [
+        {
+          "tp-id": "testport",
+          "ovsdb:port-uuid": "b1262110-2a4f-4442-b0df-84faf145488d",
+          "ovsdb:options": [
+            {
+              "option": "remote_ip",
+              "value": "10.10.14.11"
+            }
+          ],
+          "ovsdb:port-external-ids": [
+            {
+              "external-id-key": "opendaylight-iid",
+              "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']/network-topology:termination-point[network-topology:tp-id='testport']"
+            }
+          ],
+          "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
+          "ovsdb:trunks": [
+            {
+              "trunk": 5
+            },
+            {
+              "trunk": 500
+            }
+          ],
+          "ovsdb:vlan-mode": "access",
+          "ovsdb:vlan-tag": 1,
+          "ovsdb:interface-uuid": "7cec653b-f407-45a8-baec-7eb36b6791c9",
+          "ovsdb:name": "testport",
+          "ovsdb:ofport": 1
+        }
+      ]
+    }
+
+Remember that the OVSDB YANG model includes both OVSDB port and
+interface table attributes in the termination-point augmentation. Both
+kinds of attributes can be seen in the examples above. Again, note the
+creation of an *opendaylight-iid* value in the external-ids column of
+the port table.
+
+Delete a port.
+
+HTTP DELETE:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest2/termination-point/testport/
+
+Overview of QoS and Queue
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The OVSDB Southbound Plugin provides the capability of managing the QoS
+and Queue tables on an OVS host with OpenDaylight configured as the
+OVSDB manager.
+
+QoS and Queue Tables in OVSDB
+'''''''''''''''''''''''''''''
+
+The OVSDB schema includes a QoS and a Queue table. Unlike most other
+tables in OVSDB (the Open\_vSwitch table being another exception), the
+QoS and Queue tables are "root set" tables, which means that entries, or
+rows, in these tables are not automatically deleted if they cannot be
+reached directly or indirectly from the Open\_vSwitch table. This means
+that QoS entries can exist and be managed independently of whether or
+not they are referenced in a Port entry. Similarly, Queue entries can be
+managed independently of whether or not they are referenced by a QoS entry.
+
+Modelling of QoS and Queue Tables in OpenDaylight MD-SAL
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Since the QoS and Queue tables are "root set" tables, they are modeled
+in the OpenDaylight MD-SAL as lists which are part of the attributes of
+the OVSDB node model.
+
+The MD-SAL QoS and Queue models have an additional identifier attribute
+per entry (e.g. "qos-id" or "queue-id") which is not present in the
+OVSDB schema. This identifier is used by the MD-SAL as a key for
+referencing the entry. If the entry is created originally from the
+configuration MD-SAL, then the value of the identifier is whatever is
+specified by the configuration. If the entry is created on the OVSDB
+node and received by OpenDaylight in an operational update, then the id
+will be created in the following format.
+
+::
+
+    "queue-id": "queue://<UUID>"
+    "qos-id": "qos://<UUID>"
+
+The UUID in the above identifiers is the actual UUID of the entry in the
+OVSDB database.
+
+When the QoS or Queue entry is created by the configuration MD-SAL, the
+identifier will be configured as part of the external-ids column of the
+entry. This will ensure that the corresponding entry that is created in
+the operational MD-SAL uses the same identifier.
+
+::
+
+    "queues-external-ids": [
+      {
+        "queues-external-id-key": "opendaylight-queue-id",
+        "queues-external-id-value": "QUEUE-1"
+      }
+    ]
+
+See more in the examples that follow in this section.
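+
+The identifier-selection rule described above can be sketched as
+follows. This is an illustration of the convention, not the plugin's
+actual code; the function name is an assumption:

```python
def queue_id_for(row):
    """Derive the MD-SAL queue-id for an operational Queue row: prefer the
    opendaylight-queue-id external-id written by the configuration MD-SAL,
    otherwise fall back to the queue://<UUID> format."""
    for ext in row.get("queues-external-ids", []):
        if ext.get("queues-external-id-key") == "opendaylight-queue-id":
            return ext["queues-external-id-value"]
    return "queue://" + row["queue-uuid"]

configured = {"queues-external-ids": [
                  {"queues-external-id-key": "opendaylight-queue-id",
                   "queues-external-id-value": "QUEUE-1"}],
              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"}
learned = {"queue-uuid": "83640357-3596-4877-9527-b472aa854d69"}

print(queue_id_for(configured))  # QUEUE-1
print(queue_id_for(learned))     # queue://83640357-3596-4877-9527-b472aa854d69
```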
+
+The QoS schema in OVSDB currently defines two types of QoS entries.
+
+-  linux-htb
+
+-  linux-hfsc
+
+These QoS types are defined in the QoS model. Additional types will need
+to be added to the model in order to be supported. See the examples that
+follow for how the QoS type is specified in the model.
+
+QoS entries can be configured with additional attributes such as
+"max-rate". These are configured via the *other-config* column of the
+QoS entry. Refer to the OVSDB schema (in the references section below) for
+all of the relevant attributes that can be configured. The examples in
+the rest of this section will demonstrate how the other-config column
+may be configured.
+
+Similarly, the Queue entries may be configured with additional
+attributes via the other-config column.
+
+Managing QoS and Queues via Configuration MD-SAL
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This section will show some examples on how to manage QoS and Queue
+entries via the configuration MD-SAL. The examples will be illustrated
+by using RESTCONF (see `QoS and Queue Postman
+Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons/Qos-and-Queue-Collection.json.postman_collection>`__
+).
+
+A pre-requisite for managing QoS and Queue entries is that the OVS host
+must be present in the configuration MD-SAL.
+
+For the following examples, the following OVS host is configured.
+
+HTTP POST:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
+
+Body:
+
+::
+
+    {
+      "node": [
+        {
+          "node-id": "ovsdb:HOST1",
+          "connection-info": {
+            "ovsdb:remote-ip": "<ovs-host-ip>",
+            "ovsdb:remote-port": "<ovs-host-ovsdb-port>"
+          }
+        }
+      ]
+    }
+
+Where
+
+-  *<controller-ip>* is the IP address of the OpenDaylight controller
+
+-  *<ovs-host-ip>* is the IP address of the OVS host
+
+-  *<ovs-host-ovsdb-port>* is the TCP port of the OVSDB server on the
+   OVS host (e.g. 6640)
+
+This command creates an OVSDB node with the node-id "ovsdb:HOST1". This
+OVSDB node will be used in the following examples.
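+
+For scripting, the request body above can be assembled programmatically.
+A minimal sketch in Python (the helper name and the example IP are
+assumptions for illustration):

```python
import json

def ovsdb_node_body(node_id, remote_ip, remote_port):
    """Build the JSON body for creating an OVSDB node entry, matching the
    example body above."""
    return json.dumps({
        "node": [{
            "node-id": node_id,
            "connection-info": {
                "ovsdb:remote-ip": remote_ip,
                "ovsdb:remote-port": str(remote_port),
            },
        }]
    })

# 192.0.2.10 is a documentation-only placeholder for <ovs-host-ip>.
body = ovsdb_node_body("ovsdb:HOST1", "192.0.2.10", 6640)
print(body)
```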
+
+QoS and Queue entries can be created and managed without a port, but
+ultimately a QoS entry must be associated with a port in order to be
+used. For the following examples, a test bridge and port will be created.
+
+Create the test bridge.
+
+HTTP PUT
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test
+
+Body:
+
+::
+
+    {
+      "network-topology:node": [
+        {
+          "node-id": "ovsdb:HOST1/bridge/br-test",
+          "ovsdb:bridge-name": "br-test",
+          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"
+        }
+      ]
+    }
+
+Create the test port (which is modeled as a termination point in the
+OpenDaylight MD-SAL).
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
+
+Body:
+
+::
+
+    {
+      "network-topology:termination-point": [
+        {
+          "ovsdb:name": "testport",
+          "tp-id": "testport"
+        }
+      ]
+    }
+
+If all of the previous steps were successful, a query of the operational
+MD-SAL should look something like the following results. This indicates
+that the configuration commands have been successfully instantiated on
+the OVS host.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test
+
+Result Body:
+
+::
+
+    {
+      "node": [
+        {
+          "node-id": "ovsdb:HOST1/bridge/br-test",
+          "ovsdb:bridge-name": "br-test",
+          "ovsdb:datapath-type": "ovsdb:datapath-type-system",
+          "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']",
+          "ovsdb:datapath-id": "00:00:8e:5d:22:3d:09:49",
+          "ovsdb:bridge-external-ids": [
+            {
+              "bridge-external-id-key": "opendaylight-iid",
+              "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']"
+            }
+          ],
+          "ovsdb:bridge-uuid": "3d225d8d-d060-4909-93ef-6f4db58ef7cc",
+          "termination-point": [
+            {
+              "tp-id": "br=-est",
+              "ovsdb:port-uuid": "f85f7aa7-4956-40e4-9c94-e6ca2d5cd254",
+              "ovsdb:ofport": 65534,
+              "ovsdb:interface-type": "ovsdb:interface-type-internal",
+              "ovsdb:interface-uuid": "29ff3692-6ed4-4ad7-a077-1edc277ecb1a",
+              "ovsdb:name": "br-test"
+            },
+            {
+              "tp-id": "testport",
+              "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
+              "ovsdb:port-external-ids": [
+                {
+                  "external-id-key": "opendaylight-iid",
+                  "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
+                }
+              ],
+              "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
+              "ovsdb:name": "testport"
+            }
+          ]
+        }
+      ]
+    }
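+
+A quick programmatic sanity check on such a response is possible; for
+example, the presence of an *ovsdb:datapath-id* indicates that the
+bridge was actually instantiated on the OVS host. A sketch (names are
+assumptions):

```python
def bridge_instantiated(node_response):
    """True if the operational node carries an OVSDB-assigned datapath-id,
    indicating the bridge really exists on the OVS host."""
    nodes = node_response.get("node", [])
    return bool(nodes) and "ovsdb:datapath-id" in nodes[0]

sample = {"node": [{"node-id": "ovsdb:HOST1/bridge/br-test",
                    "ovsdb:datapath-id": "00:00:8e:5d:22:3d:09:49"}]}
print(bridge_instantiated(sample))  # True
```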
+
+Create Queue
+''''''''''''
+
+Create a new Queue in the configuration MD-SAL.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
+
+Body:
+
+::
+
+    {
+      "ovsdb:queues": [
+        {
+          "queue-id": "QUEUE-1",
+          "dscp": 25,
+          "queues-other-config": [
+            {
+              "queue-other-config-key": "max-rate",
+              "queue-other-config-value": "3600000"
+            }
+          ]
+        }
+      ]
+    }
+
+Query Queue
+'''''''''''
+
+Now query the operational MD-SAL for the Queue entry.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
+
+Result Body:
+
+::
+
+    {
+      "ovsdb:queues": [
+        {
+          "queue-id": "QUEUE-1",
+          "queues-other-config": [
+            {
+              "queue-other-config-key": "max-rate",
+              "queue-other-config-value": "3600000"
+            }
+          ],
+          "queues-external-ids": [
+            {
+              "queues-external-id-key": "opendaylight-queue-id",
+              "queues-external-id-value": "QUEUE-1"
+            }
+          ],
+          "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
+          "dscp": 25
+        }
+      ]
+    }
+
+Create QoS
+''''''''''
+
+Create a QoS entry. Note that the UUID of the Queue entry, obtained by
+querying the operational MD-SAL of the Queue entry, is specified in the
+queue-list of the QoS entry. Queue entries may be added to the QoS entry
+at the creation of the QoS entry, or by a subsequent update to the QoS
+entry.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
+
+Body:
+
+::
+
+    {
+      "ovsdb:qos-entries": [
+        {
+          "qos-id": "QOS-1",
+          "qos-type": "ovsdb:qos-type-linux-htb",
+          "qos-other-config": [
+            {
+              "other-config-key": "max-rate",
+              "other-config-value": "4400000"
+            }
+          ],
+          "queue-list": [
+            {
+              "queue-number": "0",
+              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
+            }
+          ]
+        }
+      ]
+    }
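+
+The pattern above, obtaining the queue UUID from the operational store
+and then referencing it by position in the config body, can be sketched
+as body construction (the helper name and parameters are assumptions):

```python
def qos_entry_body(qos_id, queue_uuids, max_rate=None):
    """Assemble a QoS entry body, numbering the referenced queues by
    their position in queue_uuids."""
    entry = {
        "qos-id": qos_id,
        "qos-type": "ovsdb:qos-type-linux-htb",
        "queue-list": [
            {"queue-number": str(n), "queue-uuid": uuid}
            for n, uuid in enumerate(queue_uuids)
        ],
    }
    if max_rate is not None:
        entry["qos-other-config"] = [
            {"other-config-key": "max-rate",
             "other-config-value": str(max_rate)},
        ]
    return {"ovsdb:qos-entries": [entry]}

body = qos_entry_body("QOS-1", ["83640357-3596-4877-9527-b472aa854d69"],
                      max_rate=4400000)
print(body)
```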
+
+Query QoS
+'''''''''
+
+Query the operational MD-SAL for the QoS entry.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
+
+Result Body:
+
+::
+
+    {
+      "ovsdb:qos-entries": [
+        {
+          "qos-id": "QOS-1",
+          "qos-other-config": [
+            {
+              "other-config-key": "max-rate",
+              "other-config-value": "4400000"
+            }
+          ],
+          "queue-list": [
+            {
+              "queue-number": 0,
+              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
+            }
+          ],
+          "qos-type": "ovsdb:qos-type-linux-htb",
+          "qos-external-ids": [
+            {
+              "qos-external-id-key": "opendaylight-qos-id",
+              "qos-external-id-value": "QOS-1"
+            }
+          ],
+          "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
+        }
+      ]
+    }
+
+Add QoS to a Port
+'''''''''''''''''
+
+Update the termination point entry to include the UUID of the QoS entry,
+obtained by querying the operational MD-SAL, to associate a QoS entry
+with a port.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
+
+Body:
+
+::
+
+    {
+      "network-topology:termination-point": [
+        {
+          "ovsdb:name": "testport",
+          "tp-id": "testport",
+          "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
+        }
+      ]
+    }
+
+Query the Port
+''''''''''''''
+
+Query the operational MD-SAL to see how the QoS entry appears in the
+termination point model.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
+
+Result Body:
+
+::
+
+    {
+      "termination-point": [
+        {
+          "tp-id": "testport",
+          "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
+          "ovsdb:port-external-ids": [
+            {
+              "external-id-key": "opendaylight-iid",
+              "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
+            }
+          ],
+          "ovsdb:qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31",
+          "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
+          "ovsdb:name": "testport"
+        }
+      ]
+    }
+
+Query the OVSDB Node
+''''''''''''''''''''
+
+Query the operational MD-SAL for the OVS host to see how the QoS and
+Queue entries appear as lists in the OVS node model.
+
+HTTP GET:
+
+::
+
+    http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/
+
+Result Body (edited to only show information relevant to the QoS and
+Queue entries):
+
+::
+
+    {
+      "node": [
+        {
+          "node-id": "ovsdb:HOST1",
+          <content edited out>
+          "ovsdb:queues": [
+            {
+              "queue-id": "QUEUE-1",
+              "queues-other-config": [
+                {
+                  "queue-other-config-key": "max-rate",
+                  "queue-other-config-value": "3600000"
+                }
+              ],
+              "queues-external-ids": [
+                {
+                  "queues-external-id-key": "opendaylight-queue-id",
+                  "queues-external-id-value": "QUEUE-1"
+                }
+              ],
+              "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
+              "dscp": 25
+            }
+          ],
+          "ovsdb:qos-entries": [
+            {
+              "qos-id": "QOS-1",
+              "qos-other-config": [
+                {
+                  "other-config-key": "max-rate",
+                  "other-config-value": "4400000"
+                }
+              ],
+              "queue-list": [
+                {
+                  "queue-number": 0,
+                  "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
+                }
+              ],
+              "qos-type": "ovsdb:qos-type-linux-htb",
+              "qos-external-ids": [
+                {
+                  "qos-external-id-key": "opendaylight-qos-id",
+                  "qos-external-id-value": "QOS-1"
+                }
+              ],
+              "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
+            }
+          ]
+          <content edited out>
+        }
+      ]
+    }
+
+Remove QoS from a Port
+''''''''''''''''''''''
+
+This example removes a QoS entry from the termination point and
+associated port. Note that this is a PUT command on the termination
+point with the QoS attribute absent. Other attributes of the termination
+point should be included in the body of the command so that they are not
+inadvertently removed.
+
+HTTP PUT:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
+
+Body:
+
+::
+
+    {
+      "network-topology:termination-point": [
+        {
+          "ovsdb:name": "testport",
+          "tp-id": "testport"
+        }
+      ]
+    }
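+
+A safe way to build such a body is read-modify-write: GET the current
+config termination point, drop the QoS reference, and PUT the result
+back. A sketch of the body transformation (the helper name is an
+assumption):

```python
import copy

def without_qos(tp_config):
    """Return a copy of a config termination-point body with the QoS
    reference removed and all other attributes preserved."""
    result = copy.deepcopy(tp_config)
    for tp in result.get("network-topology:termination-point", []):
        tp.pop("qos", None)        # key as written to the config store
        tp.pop("ovsdb:qos", None)  # key as reported by the operational store
    return result

current = {"network-topology:termination-point": [
    {"ovsdb:name": "testport", "tp-id": "testport",
     "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31"}]}
print(without_qos(current))
```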
+
+Remove a Queue from QoS
+'''''''''''''''''''''''
+
+This example removes the specific Queue entry from the queue list in the
+QoS entry. The queue entry is specified by the queue number, which is
+"0" in this example.
+
+HTTP DELETE:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/queue-list/0/
+
+Remove Queue
+''''''''''''
+
+Once all references to a specific queue entry have been removed from QoS
+entries, the Queue itself can be removed.
+
+HTTP DELETE:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
+
+Remove QoS
+''''''''''
+
+The QoS entry may be removed when it is no longer referenced by any
+ports.
+
+HTTP DELETE:
+
+::
+
+    http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
+
+References
+^^^^^^^^^^
+
+`Open vSwitch
+schema <http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf>`__
+
+`OVSDB and Netvirt Postman
+Collection <https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons>`__
+
+OVSDB Hardware VTEP SouthBound Plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+The hwvtepsouthbound plugin is used to configure a hardware VTEP which
+implements the hardware VTEP OVSDB schema. This section shows how to use
+the RESTCONF API of hwvtepsouthbound. There are two ways to connect to
+ODL:
+
+-  the user initiates the connection from ODL, or
+
+-  the switch initiates the connection.
+
+Both are introduced in turn below.
+
+User Initiates Connection
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Prerequisite
+''''''''''''
+
+Configure the hwvtep device/node to listen for TCP connections in
+passive mode. In addition, the management IP and tunnel source IP must
+also be configured. After all this configuration is done, a physical
+switch is created automatically by the hwvtep node.
+
+Connect to a hwvtep device/node
+'''''''''''''''''''''''''''''''
+
+Send the RESTCONF request below to initiate the connection to a hwvtep
+node from the controller, providing the listening IP and port of the
+hwvtep device/node.
+
+REST API: POST
+http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/
+
+::
+
+    {
+     "network-topology:node": [
+           {
+               "node-id": "hwvtep://192.168.1.115:6640",
+               "hwvtep:connection-info":
+               {
+                   "hwvtep:remote-port": 6640,
+                   "hwvtep:remote-ip": "192.168.1.115"
+               }
+           }
+       ]
+    }
+
+Please replace *odl* in the URL with the IP address of your OpenDaylight
+controller and change *192.168.1.115* to your hwvtep node IP.
+
+**NOTE**: The format of the node-id is fixed. It will take one of the
+following two forms:
+
+User initiates connection from ODL:
+
+::
+
+     hwvtep://ip:port
+
+Switch initiates connection:
+
+::
+
+     hwvtep://uuid/<uuid of switch>
+
+The reason for using a UUID is that it allows us to distinguish between
+multiple switches if they are behind a NAT.
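+
+The two fixed node-id formats can be captured in a small helper (a
+sketch; the function name is an assumption):

```python
def hwvtep_node_id(remote_ip=None, remote_port=None, switch_uuid=None):
    """Build a hwvtep node-id: hwvtep://ip:port when the user initiates
    the connection, hwvtep://uuid/<uuid> when the switch initiates it."""
    if switch_uuid is not None:
        return "hwvtep://uuid/" + switch_uuid
    return "hwvtep://{}:{}".format(remote_ip, remote_port)

print(hwvtep_node_id("192.168.1.115", 6640))
# hwvtep://192.168.1.115:6640
```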
+
+After this request is completed successfully, we can get the physical
+switch from the operational data store.
+
+REST API: GET
+http://odl:8181/restconf/operational/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
+
+There is no body in this request.
+
+The response of the request is:
+
+::
+
+    {
+       "node": [
+             {
+               "node-id": "hwvtep://192.168.1.115:6640",
+               "hwvtep:switches": [
+                 {
+                   "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
+                 }
+               ],
+               "hwvtep:connection-info": {
+                 "local-ip": "192.168.92.145",
+                 "local-port": 47802,
+                 "remote-port": 6640,
+                 "remote-ip": "192.168.1.115"
+               }
+             },
+             {
+               "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
+               "hwvtep:management-ips": [
+                 {
+                   "management-ips-key": "192.168.1.115"
+                 }
+               ],
+               "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
+               "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
+               "hwvtep:hwvtep-node-description": "",
+               "hwvtep:tunnel-ips": [
+                 {
+                   "tunnel-ips-key": "192.168.1.115"
+                 }
+               ],
+               "hwvtep:hwvtep-node-name": "br0"
+             }
+           ]
+    }
+
+If a physical switch has already been created by manual configuration,
+we can get the node-id of the physical switch from this response, which
+is presented in *switch-ref*. If the switch does not exist, we need to
+create the physical switch. Currently, most hwvtep devices do not
+support running multiple switches.
+
+Create a physical switch
+''''''''''''''''''''''''
+
+REST API: POST
+http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/
+
+request body:
+
+::
+
+    {
+     "network-topology:node": [
+           {
+               "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
+               "hwvtep-node-name": "ps0",
+               "hwvtep-node-description": "",
+               "management-ips": [
+                 {
+                   "management-ips-key": "192.168.1.115"
+                 }
+               ],
+               "tunnel-ips": [
+                 {
+                   "tunnel-ips-key": "192.168.1.115"
+                 }
+               ],
+               "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
+           }
+       ]
+    }
+
+Note: "managed-by" must provided by user. We can get its value after the
+step *Connect to a hwvtep device/node* since the node-id of hwvtep
+device is provided by user. "managed-by" is a reference typed of
+instance identifier. Though the instance identifier is a little
+complicated for RestConf, the primary user of hwvtepsouthbound plugin
+will be provider-type code such as NetVirt and the instance identifier
+is much easier to write code for.
+
+Create a logical switch
+'''''''''''''''''''''''
+
+Creating a logical switch is effectively creating a logical network. For
+VxLAN, it is a tunnel network with the same VNI.
+
+REST API: POST
+http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
+
+request body:
+
+::
+
+    {
+     "logical-switches": [
+           {
+               "hwvtep-node-name": "ls0",
+               "hwvtep-node-description": "",
+               "tunnel-key": "10000"
+            }
+       ]
+    }
+
+Create a physical locator
+'''''''''''''''''''''''''
+
+After the VXLAN network is ready, we will add VTEPs to it. A VTEP is
+described by a physical locator.
+
+REST API: POST
+http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
+
+request body:
+
+::
+
+     {
+      "termination-point": [
+           {
+               "tp-id": "vxlan_over_ipv4:192.168.0.116",
+               "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
+               "dst-ip": "192.168.0.116"
+               }
+          ]
+     }
+
+The "tp-id" of locator is "{encapsualation-type}: {dst-ip}".
+
+Note: As far as we know, the OVSDB database does not allow the insertion
+of a new locator alone, so no locator is inserted when this request is
+sent. The creation is deferred until another entity refers to the
+locator, such as a remote-mcast-macs entry.
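+
+The tp-id convention is simple enough to generate (a sketch; the
+function name is an assumption):

```python
def locator_tp_id(encapsulation_type, dst_ip):
    """Form a physical-locator tp-id as '{encapsulation-type}:{dst-ip}'."""
    return "{}:{}".format(encapsulation_type, dst_ip)

print(locator_tp_id("vxlan_over_ipv4", "192.168.0.116"))
# vxlan_over_ipv4:192.168.0.116
```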
+
+Create a remote-mcast-macs entry
+''''''''''''''''''''''''''''''''
+
+After adding a physical locator to a logical switch, we need to create a
+remote-mcast-macs entry to handle unknown traffic.
+
+REST API: POST
+http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
+
+request body:
+
+::
+
+    {
+     "remote-mcast-macs": [
+           {
+               "mac-entry-key": "00:00:00:00:00:00",
+               "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
+               "locator-set": [
+                    {
+                          "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://219.141.189.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
+                    }
+               ]
+           }
+       ]
+    }
+
+The physical locator *vxlan\_over\_ipv4:192.168.0.116* was just created
+in "Create a physical locator". It should be noted that the list
+"locator-set" is immutable; that is, we must provide the set of
+"locator-ref" entries as a whole.
+
+Note: "00:00:00:00:00:00" stands for "unknown-dst" since the type of
+mac-entry-key is yang:mac and does not accept "unknown-dst".
+
+Create a physical port
+''''''''''''''''''''''
+
+Now we add a physical port to the physical switch
+"hwvtep://192.168.1.115:6640/physicalswitch/br0". The port is attached
+to a physical server or an L2 network, using VLAN 100.
+
+REST API: POST
+http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640%2Fphysicalswitch%2Fbr0
+
+::
+
+    {
+     "network-topology:termination-point": [
+           {
+               "tp-id": "port0",
+               "hwvtep-node-name": "port0",
+               "hwvtep-node-description": "",
+               "vlan-bindings": [
+                   {
+                     "vlan-id-key": "100",
+                     "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
+                   }
+             ]
+           }
+       ]
+    }
+
+At this point, we have completed the basic configuration.
+
+Typically, hwvtep devices learn local MAC addresses automatically. But
+they also support getting MAC address entries from ODL.
+
+Create a local-mcast-macs entry
+'''''''''''''''''''''''''''''''
+
+It is similar to *Create a remote-mcast-macs entry*.
+
+Create a remote-ucast-macs
+''''''''''''''''''''''''''
+
+REST API: POST
+http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
+
+request body:
+
+::
+
+    {
+     "remote-ucast-macs": [
+           {
+               "mac-entry-key": "11:11:11:11:11:11",
+               "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
+               "ipaddr": "1.1.1.1",
+               "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
+           }
+       ]
+    }
+
+Create a local-ucast-macs entry
+'''''''''''''''''''''''''''''''
+
+This is similar to *Create a remote-ucast-macs*.
+
+Switch Initiates Connection
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+We do not need to connect to a hwvtep device/node when the switch
+initiates the connection. After switches connect to ODL successfully, we
+get the node-ids of the switches by reading the operational data store.
+Once the node-id of a hwvtep device is received, the remaining steps are
+the same as when the user initiates the connection.
+
+References
+^^^^^^^^^^
+
+https://wiki.opendaylight.org/view/User_talk:Pzhang
+
diff --git a/docs/user-guide/packetcable-user-guide.rst b/docs/user-guide/packetcable-user-guide.rst
new file mode 100644 (file)
index 0000000..a89d424
--- /dev/null
@@ -0,0 +1,74 @@
+PacketCable User Guide
+======================
+
+Overview
+--------
+
+These components introduce DOCSIS QoS gate management using the PCMM
+protocol. The driver component is responsible for the PCMM/COPS/PDP
+functionality required to service requests from the PacketCable Provider
+and FlowManager. Requests are transposed into PCMM Gate Control messages
+and transmitted via COPS to the CMTS. This plugin adheres to the
+PCMM/COPS/PDP functionality defined in the CableLabs specification. The
+PacketCable solution is an MD-SAL compliant component.
+
+PacketCable Components
+----------------------
+
+PacketCable is comprised of two OpenDaylight bundles:
+
++--------------------------------------+--------------------------------------+
+| Bundle                               | Description                          |
++======================================+======================================+
+| odl-packetcable-policy-server        | Plugin that provides PCMM model      |
+|                                      | implementation based on CMTS         |
+|                                      | structure and COPS protocol.         |
++--------------------------------------+--------------------------------------+
+| odl-packetcable-policy-model         | Provides a direct mapping to the     |
+|                                      | underlying QoS Gates of the CMTS.    |
++--------------------------------------+--------------------------------------+
+
+See the PacketCable `YANG
+Models <https://git.opendaylight.org/gerrit/gitweb?p=packetcable.git;a=tree;f=packetcable-policy-model/src/main/yang>`__.
+
+Installing PacketCable
+----------------------
+
+To install PacketCable, run the following ``feature:install`` command
+from the Karaf CLI:
+
+::
+
+    feature:install odl-packetcable-policy-server-all odl-restconf odl-mdsal-apidocs
+
+Explore and exercise the PacketCable REST API
+---------------------------------------------
+
+To see the PacketCable APIs, browse to this URL:
+http://localhost:8181/apidoc/explorer/index.html
+
+If you are not running OpenDaylight locally on your machine, replace
+``localhost`` with the IP address or hostname of the machine where
+OpenDaylight is running.
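+For scripting, the explorer URL can be formed from the controller host;
+a trivial sketch (the remote address is only an example):

```python
def apidoc_url(host, port=8181):
    """Form the API explorer URL for a given OpenDaylight host/port."""
    return "http://%s:%d/apidoc/explorer/index.html" % (host, port)

print(apidoc_url("localhost"))   # local controller
print(apidoc_url("192.0.2.10"))  # remote controller (example address)
```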
+
+.. note::
+
+    Prior to setting any PCMM gates, a CCAP must first be added.
+
+Postman
+-------
+
+`Install the Chrome
+extension <https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en>`__
+
+`Download and import sample packetcable
+collection <https://git.opendaylight.org/gerrit/gitweb?p=packetcable.git;a=tree;f=packetcable-policy-server/doc/restconf-samples>`__
+
+Postman Operations
+^^^^^^^^^^^^^^^^^^
+
+.. figure:: ./images/packetcable-postman.png
+   :alt: Postman Operations
+
+   Postman Operations
+
diff --git a/docs/user-guide/pcep-user-guide.rst b/docs/user-guide/pcep-user-guide.rst
new file mode 100644 (file)
index 0000000..1a87adc
--- /dev/null
@@ -0,0 +1,340 @@
+PCEP User Guide
+===============
+
+Overview
+--------
+
+The OpenDaylight Karaf distribution comes preconfigured with baseline
+PCEP configuration.
+
+-  **32-pcep.xml** (basic PCEP configuration, including session
+   parameters)
+
+-  **39-pcep-provider.xml** (configuring for PCEP provider)
+
+Configuring PCEP
+----------------
+
+The default shipped configuration will start a PCE server on
+0.0.0.0:4189. You can change this behavior in **39-pcep-provider.xml**:
+
+.. code:: xml
+
+    <module>
+     <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:pcep:topology:provider">prefix:pcep-topology-provider</type>
+     <name>pcep-topology</name>
+     <listen-address>192.168.122.55</listen-address>
+     <listen-port>4189</listen-port>
+    ...
+    </module>
+
+-  **listen-address** - address on which the PCE server will listen
+
+-  **listen-port** - port on which the PCE server will listen
+
+The default PCEP configuration conforms to the following stateful PCEP
+extensions:
+
+-  `draft-ietf-pce-stateful-pce-07 <http://tools.ietf.org/html/draft-ietf-pce-stateful-pce-07>`__
+   - PCEP Extensions for Stateful PCE
+
+-  `draft-ietf-pce-pce-initiated-lsp-00 <https://tools.ietf.org/html/draft-ietf-pce-pce-initiated-lsp-00>`__
+   - PCEP Extensions for PCE-initiated LSP Setup in a Stateful PCE Model
+
+-  `draft-ietf-pce-stateful-sync-optimizations-03 <https://tools.ietf.org/html/draft-ietf-pce-stateful-sync-optimizations-03>`__
+   - Optimizations of Label Switched Path State Synchronization
+   Procedures for a Stateful PCE
+
+PCEP Segment Routing
+~~~~~~~~~~~~~~~~~~~~
+
+Conforms to
+`draft-ietf-pce-segment-routing <http://tools.ietf.org/html/draft-ietf-pce-segment-routing-01>`__
+- PCEP Extensions for Segment Routing.
+
+The default configuration file is located in etc/opendaylight/karaf.
+
+-  **33-pcep-segment-routing.xml** - You don’t need to edit this file.
+
+Tunnel Management
+-----------------
+
+Programming tunnels through PCEP is one of the key features of the PCEP
+implementation in OpenDaylight. Users can create, update and delete
+tunnels via RESTCONF calls. Tunnel (LSP - Label Switched Path) arguments
+are passed through RESTCONF and generate a PCEP message that is sent to
+the PCC (which is also specified via the RESTCONF call). The PCC sends a
+response back to OpenDaylight. The response is then interpreted and sent
+to RESTCONF where, in case of success, the new LSP is displayed.
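+All of the tunnel operations described below are RESTCONF RPC calls
+whose URLs share one shape; a small illustrative sketch (the helper name
+is ours, not part of OpenDaylight):

```python
def pcep_rpc_url(host, rpc, port=8181):
    """Build the RESTCONF URL for a network-topology-pcep RPC such as
    add-lsp, update-lsp, remove-lsp or trigger-sync."""
    return ("http://%s:%d/restconf/operations/network-topology-pcep:%s"
            % (host, port, rpc))

print(pcep_rpc_url("localhost", "add-lsp"))
# http://localhost:8181/restconf/operations/network-topology-pcep:add-lsp
```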
+
+PCE Segment Routing extends draft-ietf-pce-stateful-pce-07 and
+draft-ietf-pce-pce-initiated-lsp-00, introducing a new Segment Routing
+Explicit Route Object (SR-ERO) subobject composed of a SID (Segment
+Identifier) and/or an NAI (Node or Adjacency Identifier). The Segment
+Routing path is carried in the ERO object as a list of SR-ERO subobjects
+ordered by the user. The draft redefines the format of the messages
+(PCUpd, PCRpt, PCInitiate): along with the common header, they can hold
+SRP, LSP and ERO (containing only SR-ERO subobjects) objects.
+
+Creating LSP
+~~~~~~~~~~~~
+
+An LSP in PCEP can be created in one or two steps. Making an add-lsp
+operation will trigger a PCInitiate message to the PCC.
+
+**URL:**
+http://localhost:8181/restconf/operations/network-topology-pcep:add-lsp
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+**Body:**
+
+**PCE Active Stateful:**
+
+.. code:: xml
+
+    <input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
+     <node>pcc://43.43.43.43</node>
+     <name>update-tunnel</name>
+     <arguments>
+      <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
+       <delegate>true</delegate>
+       <administrative>true</administrative>
+      </lsp>
+      <endpoints-obj>
+       <ipv4>
+        <source-ipv4-address>43.43.43.43</source-ipv4-address>
+        <destination-ipv4-address>39.39.39.39</destination-ipv4-address>
+       </ipv4>
+      </endpoints-obj>
+      <ero>
+       <subobject>
+        <loose>false</loose>
+        <ip-prefix><ip-prefix>201.20.160.40/32</ip-prefix></ip-prefix>
+       </subobject>
+       <subobject>
+        <loose>false</loose>
+        <ip-prefix><ip-prefix>195.20.160.39/32</ip-prefix></ip-prefix>
+       </subobject>
+       <subobject>
+        <loose>false</loose>
+        <ip-prefix><ip-prefix>39.39.39.39/32</ip-prefix></ip-prefix>
+       </subobject>
+      </ero>
+     </arguments>
+     <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
+    </input>
+
+**PCE Segment Routing:**
+
+.. code:: xml
+
+    <input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
+     <node>pcc://43.43.43.43</node>
+     <name>update-tunnel</name>
+     <arguments>
+      <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
+       <delegate>true</delegate>
+       <administrative>true</administrative>
+      </lsp>
+      <endpoints-obj>
+       <ipv4>
+        <source-ipv4-address>43.43.43.43</source-ipv4-address>
+        <destination-ipv4-address>39.39.39.39</destination-ipv4-address>
+       </ipv4>
+      </endpoints-obj>
+      <path-setup-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
+       <pst>1</pst>
+      </path-setup-type>
+      <ero>
+       <subobject>
+        <loose>false</loose>
+        <sid-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">ipv4-node-id</sid-type>
+        <m-flag xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">true</m-flag>
+        <sid xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">12</sid>
+        <ip-address xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">39.39.39.39</ip-address>
+       </subobject>
+      </ero>
+     </arguments>
+     <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
+    </input>
+
+Updating LSP
+~~~~~~~~~~~~
+
+Making an update-lsp operation will trigger a PCUpd message to PCC.
+Updating can be used to change or add additional information to the LSP.
+
+You can only successfully update an LSP if you own the delegation. You
+automatically own the delegation if you created the LSP. You don’t own
+it if another PCE created the LSP; in that case the PCC is only
+reporting the LSP to you as read-only (you’ll see
+``<delegate>false</delegate>``). OpenDaylight won’t restrict you from
+trying to modify such an LSP, but the attempt will be rejected with a
+PCErr message from the PCC.
+
+To keep the delegation, don’t forget to set ``<delegate>`` to true.
+
+**URL:**
+http://localhost:8181/restconf/operations/network-topology-pcep:update-lsp
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+**Body:**
+
+**PCE Active Stateful:**
+
+.. code:: xml
+
+    <input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
+     <node>pcc://43.43.43.43</node>
+     <name>update-tunnel</name>
+     <arguments>
+      <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
+       <delegate>true</delegate>
+       <administrative>true</administrative>
+      </lsp>
+      <ero>
+       <subobject>
+        <loose>false</loose>
+        <ip-prefix><ip-prefix>200.20.160.41/32</ip-prefix></ip-prefix>
+       </subobject>
+       <subobject>
+        <loose>false</loose>
+        <ip-prefix><ip-prefix>196.20.160.39/32</ip-prefix></ip-prefix>
+       </subobject>
+       <subobject>
+        <loose>false</loose>
+        <ip-prefix><ip-prefix>39.39.39.39/32</ip-prefix></ip-prefix>
+       </subobject>
+      </ero>
+     </arguments>
+     <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
+    </input>
+
+**PCE Segment Routing:**
+
+.. code:: xml
+
+    <input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
+     <node>pcc://43.43.43.43</node>
+     <name>update-tunnel</name>
+     <arguments>
+      <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
+       <delegate>true</delegate>
+       <administrative>true</administrative>
+      </lsp>
+      <path-setup-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
+       <pst>1</pst>
+      </path-setup-type>
+      <ero>
+       <subobject>
+        <loose>false</loose>
+        <sid-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">ipv4-node-id</sid-type>
+        <m-flag xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">true</m-flag>
+        <sid xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">11</sid>
+        <ip-address xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">200.20.160.41</ip-address>
+       </subobject>
+       <subobject>
+        <loose>false</loose>
+        <sid-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">ipv4-node-id</sid-type>
+        <m-flag xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">true</m-flag>
+        <sid xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">12</sid>
+        <ip-address xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">39.39.39.39</ip-address>
+       </subobject>
+      </ero>
+     </arguments>
+     <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
+    </input>
+
+Removing LSP
+~~~~~~~~~~~~
+
+Removing an LSP from the PCC is done via the following RESTCONF
+operation. Making a remove-lsp operation will trigger a PCInitiate
+message to the PCC, with the remove-flag in the SRP object set to true.
+
+You can only successfully remove an LSP if you own the delegation. You
+automatically own the delegation if you created the LSP. You don’t own
+it if another PCE created the LSP; in that case the PCC is only
+reporting the LSP to you as read-only (you’ll see
+``<delegate>false</delegate>``). OpenDaylight won’t restrict you from
+trying to remove such an LSP, but the attempt will be rejected with a
+PCErr message from the PCC.
+
+To keep the delegation, don’t forget to set ``<delegate>`` to true.
+
+**URL:**
+http://localhost:8181/restconf/operations/network-topology-pcep:remove-lsp
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+**Body:**
+
+.. code:: xml
+
+    <input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
+     <node>pcc://43.43.43.43</node>
+     <name>update-tunnel</name>
+     <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
+    </input>
+
+PCE-triggered Initial Synchronization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Making a trigger-sync operation will trigger a PCUpd message to the PCC
+with PLSP-ID = 0 and SYNC = 1, in order to trigger the LSP-DB
+synchronization process.
+
+**URL:**
+http://localhost:8181/restconf/operations/network-topology-pcep:trigger-sync
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+**Body:**
+
+.. code:: xml
+
+    <input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
+     <node>pcc://43.43.43.43</node>
+     <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
+    </input>
+
+PCE-triggered Re-synchronization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Making a trigger-resync operation will trigger a PCUpd message to the
+PCC. The PCE can choose to re-synchronize its entire LSP database or a
+single LSP.
+
+**URL:**
+http://localhost:8181/restconf/operations/network-topology-pcep:trigger-sync
+
+**Method:** POST
+
+**Content-Type:** application/xml
+
+**Body:**
+
+.. code:: xml
+
+    <input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
+     <node>pcc://43.43.43.43</node>
+     <name>re-sync-lsp</name>
+     <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
+    </input>
+
+PCE-triggered LSP database Re-synchronization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+PCE-triggered LSP database re-synchronization works the same way as
+PCE-triggered initial synchronization.
+
diff --git a/docs/user-guide/service-function-chaining.rst b/docs/user-guide/service-function-chaining.rst
new file mode 100644 (file)
index 0000000..1ee320f
--- /dev/null
@@ -0,0 +1,2496 @@
+Service Function Chaining
+=========================
+
+OpenDaylight Service Function Chaining (SFC) Overview
+-----------------------------------------------------
+
+OpenDaylight Service Function Chaining (SFC) provides the ability to
+define an ordered list of network services (e.g. firewalls, load
+balancers). These services are then "stitched" together in the network
+to create a service chain. This project provides the infrastructure
+(chaining logic, APIs) needed for ODL to provision a service chain in
+the network and an end-user application for defining such chains.
+
+-  ACE - Access Control Entry
+
+-  ACL - Access Control List
+
+-  SCF - Service Classifier Function
+
+-  SF - Service Function
+
+-  SFC - Service Function Chain
+
+-  SFF - Service Function Forwarder
+
+-  SFG - Service Function Group
+
+-  SFP - Service Function Path
+
+-  RSP - Rendered Service Path
+
+-  NSH - Network Service Header
+
+SFC User Interface
+------------------
+
+Overview
+~~~~~~~~
+
+The SFC User Interface (SFC-UI) is based on the DLUX project. It
+provides an easy way to create, read, update and delete configuration
+stored in the datastore. Moreover, it shows the status of all SFC
+features (e.g. installed, uninstalled) and Karaf log messages as well.
+
+SFC-UI Architecture
+~~~~~~~~~~~~~~~~~~~
+
+SFC-UI operates purely by using RESTCONF.
+
+.. figure:: ./images/sfc/sfc-ui-architecture.png
+   :alt: SFC-UI integration into ODL
+
+   SFC-UI integration into ODL
+
+Configuring SFC-UI
+~~~~~~~~~~~~~~~~~~
+
+1. Run ODL distribution (run karaf)
+
+2. In karaf console execute: ``feature:install odl-sfc-ui``
+
+3. Visit SFC-UI on: ``http://<odl_ip_address>:8181/sfc/index.html``
+
+SFC Southbound REST Plugin
+--------------------------
+
+Overview
+~~~~~~~~
+
+The Southbound REST Plugin is used to send configuration from the
+datastore down to network devices supporting a REST API (i.e. they have
+a configured REST URI). It supports POST/PUT/DELETE operations, which
+are triggered accordingly by changes in the following SFC data stores:
+
+-  Access Control List (ACL)
+
+-  Service Classifier Function (SCF)
+
+-  Service Function (SF)
+
+-  Service Function Group (SFG)
+
+-  Service Function Schedule Type (SFST)
+
+-  Service Function Forwarder (SFF)
+
+-  Rendered Service Path (RSP)
+
+Southbound REST Plugin Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+From the user perspective, the REST plugin is another SFC Southbound
+plugin used to communicate with network devices.
+
+.. figure:: ./images/sfc/sb-rest-architecture-user.png
+   :alt: Southbound REST Plugin integration into ODL
+
+   Southbound REST Plugin integration into ODL
+
+Configuring Southbound REST Plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Run ODL distribution (run karaf)
+
+2. In karaf console execute: ``feature:install odl-sfc-sb-rest``
+
+3. Configure REST URIs for the SF/SFF through the SFC User Interface or
+   RESTCONF (the required configuration steps can be found in the
+   tutorial linked below)
+
+Tutorial
+~~~~~~~~
+
+A comprehensive tutorial on how to use the Southbound REST Plugin and
+how to control network devices with it can be found at:
+https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101
+
+SFC-OVS integration
+-------------------
+
+Overview
+~~~~~~~~
+
+SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices.
+Integration is realized through mapping of SFC objects (like SF, SFF,
+Classifier, etc.) to OVS objects (like Bridge,
+TerminationPoint=Port/Interface). The mapping takes care of automatic
+instantiation (setup) of the corresponding object whenever its
+counterpart is created. For example, when a new SFF is created, the
+SFC-OVS plugin will create a new OVS bridge, and when a new OVS bridge
+is created, the SFC-OVS plugin will create a new SFF.
+
+The feature is intended for SFC users who want to use Open vSwitch as
+the underlying network infrastructure for deploying RSPs (Rendered
+Service Paths).
+
+SFC-OVS Architecture
+~~~~~~~~~~~~~~~~~~~~
+
+SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing
+information from/to OVS devices. From the user perspective SFC-OVS acts
+as a layer between SFC DataStore and OVSDB.
+
+.. figure:: ./images/sfc/sfc-ovs-architecture-user.png
+   :alt: SFC-OVS integration into ODL
+
+   SFC-OVS integration into ODL
+
+Configuring SFC-OVS
+~~~~~~~~~~~~~~~~~~~
+
+1. Run ODL distribution (run karaf)
+
+2. In karaf console execute: ``feature:install odl-sfc-ovs``
+
+3. Configure Open vSwitch to use ODL as a manager, using the following
+   command: ``ovs-vsctl set-manager tcp:<odl_ip_address>:6640``
+
+Tutorials
+~~~~~~~~~
+
+Verifying mapping from OVS to SFF
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Overview
+''''''''
+
+This tutorial shows the usual workflow when OVS configuration is
+transformed to corresponding SFC objects (in this case SFF).
+
+Prerequisites
+''''''''''''''
+
+-  Open vSwitch installed (ovs-vsctl command available in shell)
+
+-  SFC-OVS feature configured as stated above
+
+Instructions
+''''''''''''
+
+1. ``ovs-vsctl set-manager tcp:<odl_ip_address>:6640``
+
+2. ``ovs-vsctl add-br br1``
+
+3. ``ovs-vsctl add-port br1 testPort``
+
+Verification
+''''''''''''
+
+a. visit SFC User Interface:
+   ``http://<odl_ip_address>:8181/sfc/index.html#/sfc/serviceforwarder``
+
+b. use pure RESTCONF and send GET request to URL:
+   ``http://<odl_ip_address>:8181/restconf/config/service-function-forwarder:service-function-forwarders``
+
+There should be an SFF whose name ends with *br1*, and the SFF should
+contain two DataPlane locators: *br1* and *testPort*.
+
+Verifying mapping from SFF to OVS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Overview
+''''''''
+
+This tutorial shows the usual workflow for creating an OVS Bridge using
+the SFC APIs.
+
+Prerequisites
+''''''''''''''
+
+-  Open vSwitch installed (ovs-vsctl command available in shell)
+
+-  SFC-OVS feature configured as stated above
+
+Instructions
+''''''''''''
+
+1. In shell execute: ``ovs-vsctl set-manager tcp:<odl_ip_address>:6640``
+
+2. Send a POST request to the URL:
+   ``http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge``
+   Use Basic auth with credentials "admin", "admin" and set
+   ``Content-Type: application/json``. The content of the POST request
+   should be the following:
+
+::
+
+    {
+        "input":
+        {
+            "name": "br-test",
+            "ovs-node": {
+                "ip": "<Open_vSwitch_ip_address>"
+            }
+        }
+    }
+
+*Open\_vSwitch\_ip\_address* is the IP address of the machine where Open
+vSwitch is installed.
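+The request body above can also be generated programmatically; a minimal
+sketch (the OVS address is an example value):

```python
import json

def create_ovs_bridge_body(bridge_name, ovs_ip):
    """Build the JSON request body for the create-ovs-bridge RPC."""
    return json.dumps(
        {"input": {"name": bridge_name, "ovs-node": {"ip": ovs_ip}}},
        indent=4)

print(create_ovs_bridge_body("br-test", "192.0.2.20"))
```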
+
+Verification
+''''''''''''
+
+In the shell execute: ``ovs-vsctl show``. There should be a bridge named
+*br-test* with one port/interface also called *br-test*.
+
+Also, a corresponding SFF for this OVS bridge should have been
+configured, which can be verified through the SFC User Interface or
+RESTCONF as stated in the previous tutorial.
+
+SFC Classifier User Guide
+-------------------------
+
+Overview
+~~~~~~~~
+
+A description of the classifier can be found in:
+https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/
+
+There are two types of classifier:
+
+1. OpenFlow Classifier
+
+2. Iptables Classifier
+
+OpenFlow Classifier
+~~~~~~~~~~~~~~~~~~~
+
+The OpenFlow Classifier implements the classification criteria based on
+OpenFlow rules deployed into an OpenFlow switch. An Open vSwitch takes
+the role of a classifier and performs various encapsulations such as
+NSH, VLAN, MPLS, etc. In the existing implementation, the classifier
+supports NSH encapsulation. Matching information is based on ACLs for
+MAC addresses, ports, protocol, IPv4 and IPv6. Supported protocols are
+TCP, UDP and SCTP. The action in the OF rules is to forward the
+encapsulated packets with specific information related to the RSP.
+
+Classifier Architecture
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The OVSDB Southbound interface is used to create an instance of a bridge
+in a specific location (via IP address). This bridge contains the
+OpenFlow rules that perform the classification of the packets and react
+accordingly. The OpenFlow Southbound interface is used to translate the
+ACL information into OF rules within the Open vSwitch.
+
+.. note::
+
+    In order to create the instance of the bridge that takes the role
+    of a classifier, an "empty" SFF must be created.
+
+Configuring Classifier
+^^^^^^^^^^^^^^^^^^^^^^
+
+1. An empty SFF must be created in order to host the ACL that contains
+   the classification information.
+
+2. The SFF data plane locator must be configured.
+
+3. The classifier interface must be manually added to the SFF bridge.
+
+Administering or Managing Classifier
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Classification information is based on MAC addresses, protocol, ports
+and IP. An ACL gathers this information and is assigned to an RSP, which
+in turn is a specific path for a service chain.
+
+Iptables Classifier
+~~~~~~~~~~~~~~~~~~~
+
+The classifier manages everything from starting the packet listener to
+the creation (and removal) of appropriate ip(6)tables rules and marking
+received packets accordingly. Its functionality is **available only on
+Linux**, as it leverages **NetfilterQueue**, which provides access to
+packets matched by an **iptables** rule. The classifier requires **root
+privileges** to be able to operate.
+
+So far it is capable of processing ACL for MAC addresses, ports, IPv4
+and IPv6. Supported protocols are TCP and UDP.
+
+Classifier Architecture
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The classifier is implemented in Python and located in the project
+repository at sfc-py/common/classifier.py.
+
+.. note::
+
+    The classifier assumes that the Rendered Service Path (RSP)
+    **already exists** in ODL when an ACL referencing it is obtained.
+
+1. sfc\_agent receives an ACL and passes it to the classifier for
+   processing
+
+2. the RSP (its SFF locator) referenced by the ACL is requested from ODL
+
+3. if the RSP exists in ODL, then ACL-based iptables rules for it are
+   applied
+
+After this process is over, every packet successfully matched to an
+iptables rule (i.e. successfully classified) will be NSH encapsulated
+and forwarded to a related SFF, which knows how to traverse the RSP.
+
+Rules are created using the appropriate iptables command. If the Access
+Control Entry (ACE) rule is MAC address related, both iptables and
+ip6tables rules are issued. If the ACE rule is IPv4 address related,
+only iptables rules are issued; likewise for IPv6.
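+The rule-family selection described above can be sketched as follows (an
+illustrative simplification, not the actual classifier code):

```python
def rule_families(ace):
    """Return which netfilter tools an ACE needs rules for:
    MAC-based ACEs need both iptables and ip6tables, IPv4 ACEs
    only iptables, IPv6 ACEs only ip6tables."""
    if "mac" in ace:
        return ["iptables", "ip6tables"]
    if "ipv4" in ace:
        return ["iptables"]
    if "ipv6" in ace:
        return ["ip6tables"]
    return []

print(rule_families({"mac": "00:11:22:33:44:55"}))  # ['iptables', 'ip6tables']
print(rule_families({"ipv4": "10.0.0.0/24"}))       # ['iptables']
```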
+
+.. note::
+
+    The iptables **raw** table contains all created rules.
+
+Configuring Classifier
+^^^^^^^^^^^^^^^^^^^^^^
+
+| The classifier doesn’t need any configuration.
+| Its only requirement is that the **second (2) Netfilter Queue** is not
+  used by any other process and is **available for the classifier**.
+
+Administering or Managing Classifier
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The classifier runs alongside sfc\_agent; therefore the command for
+starting it locally is:
+
+::
+
+    sudo python3.4 sfc-py/sfc_agent.py --rest --odl-ip-port localhost:8181 --auto-sff-name --nfq-class
+
+SFC OpenFlow Renderer User Guide
+--------------------------------
+
+Overview
+~~~~~~~~
+
+The Service Function Chaining (SFC) OpenFlow Renderer (SFC OF Renderer)
+implements Service Chaining on OpenFlow switches. It listens for the
+creation of a Rendered Service Path (RSP), and once received it programs
+Service Function Forwarders (SFF) that are hosted on OpenFlow capable
+switches to steer packets through the service chain.
+
+Common acronyms used in the following sections:
+
+-  SF - Service Function
+
+-  SFF - Service Function Forwarder
+
+-  SFC - Service Function Chain
+
+-  SFP - Service Function Path
+
+-  RSP - Rendered Service Path
+
+SFC OpenFlow Renderer Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The SFC OF Renderer is invoked after a RSP is created using an MD-SAL
+listener called ``SfcOfRspDataListener``. Upon SFC OF Renderer
+initialization, the ``SfcOfRspDataListener`` registers itself to listen
+for RSP changes. When invoked, the ``SfcOfRspDataListener`` processes
+the RSP and calls the ``SfcOfFlowProgrammerImpl`` to create the
+necessary flows in the Service Function Forwarders configured in the
+RSP. Refer to the following diagram for more details.
+
+.. figure:: ./images/sfc/sfcofrenderer_architecture.png
+   :alt: SFC OpenFlow Renderer High Level Architecture
+
+   SFC OpenFlow Renderer High Level Architecture
+
+SFC OpenFlow Switch Flow pipeline
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The SFC OpenFlow Renderer uses the following tables for its Flow
+pipeline:
+
+-  Table 0, Classifier
+
+-  Table 1, Transport Ingress
+
+-  Table 2, Path Mapper
+
+-  Table 3, Path Mapper ACL
+
+-  Table 4, Next Hop
+
+-  Table 10, Transport Egress
+
+The OpenFlow Table Pipeline is intended to be generic to work for all of
+the different encapsulations supported by SFC.
+
+All of the tables are explained in detail in the following section.
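+
+The generic pipeline can be sketched as follows (an illustration of the
+table layout, not OpenFlow code; the exact goto-table wiring per transport
+may differ, but per the Path Mapper section the RSP Path ID travels in the
+NSH header itself, so the Path Mapper tables are effectively unused for
+NSH):

```python
# Table ids and names from the list above.
TABLES = {0: "Classifier", 1: "Transport Ingress", 2: "Path Mapper",
          3: "Path Mapper ACL", 4: "Next Hop", 10: "Transport Egress"}

def traversal(encapsulation):
    """Return the table ids a successfully matched packet visits, in order."""
    tables = [0, 1]                      # Classifier, Transport Ingress
    if encapsulation in ("vlan", "mpls"):
        tables.append(2)                 # Path Mapper (table 3 on miss)
    tables += [4, 10]                    # Next Hop, Transport Egress
    return tables
```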
+
+The SFFs (SFF1 and SFF2), SFs (SF1), and topology used for the flow
+tables in the following sections are as described in the following
+diagram.
+
+.. figure:: ./images/sfc/sfcofrenderer_nwtopo.png
+   :alt: SFC OpenFlow Renderer Typical Network Topology
+
+   SFC OpenFlow Renderer Typical Network Topology
+
+Classifier Table detailed
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+It is possible for the SFF to also act as a classifier. This table maps
+subscriber traffic to RSPs, and is explained in detail in the classifier
+documentation.
+
+If the SFF is not a classifier, then this table will just have a simple
+Goto Table 1 flow.
+
+Transport Ingress Table detailed
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Transport Ingress table has an entry per expected tunnel transport
+type to be received in a particular SFF, as established in the SFC
+configuration.
+
+Here are two examples on SFF1: one where the RSP ingress tunnel is MPLS,
+assuming VLAN is used for the SFF-SF hop, and the other where the RSP
+ingress tunnel is NSH over VXLAN-GPE (UDP port 4789):
+
++-------------+--------------------------------------+--------------------------+
+| Priority    | Match                                | Action                   |
++=============+======================================+==========================+
+| 256         | EtherType==0x8847 (MPLS unicast)     | Goto Table 2             |
++-------------+--------------------------------------+--------------------------+
+| 256         | EtherType==0x8100 (VLAN)             | Goto Table 2             |
++-------------+--------------------------------------+--------------------------+
+| 256         | EtherType==0x0800,udp,tp\_dst==4789  | Goto Table 2             |
+|             | (IP v4)                              |                          |
++-------------+--------------------------------------+--------------------------+
+| 5           | Match Any                            | Drop                     |
++-------------+--------------------------------------+--------------------------+
+
+Table: Table Transport Ingress
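+
+A minimal model of the table above (illustrative Python, not switch code):
+only the tunnel transport types configured for this SFF are admitted and
+sent on to Table 2; everything else hits the priority-5 Match Any flow and
+is dropped.

```python
# EtherTypes and the UDP port from the Transport Ingress table above.
MPLS_UNICAST = 0x8847
VLAN = 0x8100
IPV4 = 0x0800
VXLAN_GPE_PORT = 4789

def transport_ingress(pkt):
    """Return the next table id (2), or None if the packet is dropped."""
    if pkt.get("ethertype") in (MPLS_UNICAST, VLAN):
        return 2
    if (pkt.get("ethertype") == IPV4 and pkt.get("ip_proto") == "udp"
            and pkt.get("udp_dst") == VXLAN_GPE_PORT):
        return 2
    return None  # priority-5 Match Any -> Drop
```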
+
+Path Mapper Table detailed
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Path Mapper table has an entry per expected tunnel transport info to
+be received in a particular SFF, as established in the SFC
+configuration. The tunnel transport info is used to determine the RSP
+Path ID, and is stored in the OpenFlow Metadata. This table is not used
+for NSH, since the RSP Path ID is stored in the NSH header.
+
+For SF nodes that do not support NSH tunneling, the IP header DSCP field
+is used to store the RSP Path Id. The RSP Path Id is written to the DSCP
+field in the Transport Egress table for those packets sent to an SF.
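+
+Since DSCP is the 6-bit upper portion of the IPv4 ToS byte, only path ids
+1..63 fit this encoding. A sketch of the bit manipulation (illustrative,
+not renderer code):

```python
# Carry an RSP Path Id in the DSCP field (upper 6 bits of the ToS byte)
# for SFs that do not support NSH tunneling.
def write_path_id(tos, path_id):
    assert 0 < path_id < 64, "DSCP holds only 6 bits"
    return ((path_id & 0x3F) << 2) | (tos & 0x03)  # preserve the ECN bits

def read_path_id(tos):
    return tos >> 2
```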
+
+Here is an example on SFF1, assuming the following details:
+
+-  VLAN ID 1000 is used for the SFF-SF
+
+-  The RSP Path 1 tunnel uses MPLS label 100 for ingress and 101 for
+   egress
+
+-  The RSP Path 2 (symmetric downlink path) uses MPLS label 101 for
+   ingress and 100 for egress
+
++--------------------------+--------------------------+--------------------------+
+| Priority                 | Match                    | Action                   |
++==========================+==========================+==========================+
+| 256                      | MPLS Label==100          | RSP Path=1, Pop MPLS,    |
+|                          |                          | Goto Table 4             |
++--------------------------+--------------------------+--------------------------+
+| 256                      | MPLS Label==101          | RSP Path=2, Pop MPLS,    |
+|                          |                          | Goto Table 4             |
++--------------------------+--------------------------+--------------------------+
+| 256                      | VLAN ID==1000, IP        | RSP Path=1, Pop VLAN,    |
+|                          | DSCP==1                  | Goto Table 4             |
++--------------------------+--------------------------+--------------------------+
+| 256                      | VLAN ID==1000, IP        | RSP Path=2, Pop VLAN,    |
+|                          | DSCP==2                  | Goto Table 4             |
++--------------------------+--------------------------+--------------------------+
+| 5                        | Match Any                | Goto Table 3             |
++--------------------------+--------------------------+--------------------------+
+
+Table: Table Path Mapper
+
+Path Mapper ACL Table detailed
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This table is only populated when PacketIn packets are received from the
+switch for TcpProxy type SFs. These flows are created with an inactivity
+timer of 60 seconds and will be automatically deleted upon expiration.
+
+Next Hop Table detailed
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The Next Hop table uses the RSP Path Id and appropriate packet fields to
+determine where to send the packet next. For NSH, only the NSP (Network
+Services Path, RSP ID) and NSI (Network Services Index, next hop) fields
+from the NSH header are needed to determine the VXLAN tunnel destination
+IP. For VLAN or MPLS, the source MAC address is used to determine the
+destination MAC address.
+
+Here are two examples on SFF1, assuming SFF1 is connected to SFF2. RSP
+Paths 1 and 2 are symmetric VLAN paths. RSP Paths 3 and 4 are symmetric
+NSH paths. RSP Path 1 ingress packets come from outside the SFC domain,
+so the source MAC address (MacSrc) is not known.
+
++------------+--------------------------------+--------------------------------+
+| Priority   | Match                          | Action                         |
++============+================================+================================+
+| 256        | RSP Path==1, MacSrc==SF1       | MacDst=SFF2, Goto Table 10     |
++------------+--------------------------------+--------------------------------+
+| 256        | RSP Path==2, MacSrc==SF1       | Goto Table 10                  |
++------------+--------------------------------+--------------------------------+
+| 256        | RSP Path==2, MacSrc==SFF2      | MacDst=SF1, Goto Table 10      |
++------------+--------------------------------+--------------------------------+
+| 246        | RSP Path==1                    | MacDst=SF1, Goto Table 10      |
++------------+--------------------------------+--------------------------------+
+| 256        | nsp=3,nsi=255 (SFF Ingress RSP | load:0xa000002→NXM\_NX\_TUN\_I |
+|            | 3)                             | PV4\_DST[],                    |
+|            |                                | Goto Table 10                  |
++------------+--------------------------------+--------------------------------+
+| 256        | nsp=3,nsi=254 (SFF Ingress     | load:0xa00000a→NXM\_NX\_TUN\_I |
+|            | from SF, RSP 3)                | PV4\_DST[],                    |
+|            |                                | Goto Table 10                  |
++------------+--------------------------------+--------------------------------+
+| 256        | nsp=4,nsi=254 (SFF1 Ingress    | load:0xa00000a→NXM\_NX\_TUN\_I |
+|            | from SFF2)                     | PV4\_DST[],                    |
+|            |                                | Goto Table 10                  |
++------------+--------------------------------+--------------------------------+
+| 5          | Match Any                      | Drop                           |
++------------+--------------------------------+--------------------------------+
+
+Table: Table Next Hop
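+
+The ``load:0xa000002->NXM_NX_TUN_IPV4_DST[]`` style actions above write
+the VXLAN tunnel destination as a 32-bit integer. This small helper
+(illustrative only) shows the dotted-quad addresses those hex constants
+encode:

```python
# Convert a 32-bit NXM register value into a dotted-quad IPv4 string.
def nxm_to_ipv4(value):
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

For example, ``nxm_to_ipv4(0xa000002)`` returns ``10.0.0.2`` and
``nxm_to_ipv4(0xa00000a)`` returns ``10.0.0.10``.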
+
+Transport Egress Table detailed
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Transport Egress table prepares egress tunnel information and sends
+the packets out.
+
+Here are two examples on SFF1. RSP Paths 1 and 2 are symmetric MPLS
+paths that use VLAN for the SFF-SF. RSP Paths 3 and 4 are symmetric NSH
+paths. Since it is assumed that switches used for NSH will only have one
+VXLAN port, the NSH packets are just sent back where they came from.
+
++------------+--------------------------------+--------------------------------+
+| Priority   | Match                          | Action                         |
++============+================================+================================+
+| 256        | RSP Path==1, MacDst==SF1       | Push VLAN ID 1000, Port=SF1    |
++------------+--------------------------------+--------------------------------+
+| 256        | RSP Path==1, MacDst==SFF2      | Push MPLS Label 101, Port=SFF2 |
++------------+--------------------------------+--------------------------------+
+| 256        | RSP Path==2, MacDst==SF1       | Push VLAN ID 1000, Port=SF1    |
++------------+--------------------------------+--------------------------------+
+| 246        | RSP Path==2                    | Push MPLS Label 100,           |
+|            |                                | Port=Ingress                   |
++------------+--------------------------------+--------------------------------+
+| 256        | nsp=3,nsi=255 (SFF Ingress RSP | IN\_PORT                       |
+|            | 3)                             |                                |
++------------+--------------------------------+--------------------------------+
+| 256        | nsp=3,nsi=254 (SFF Ingress     | IN\_PORT                       |
+|            | from SF, RSP 3)                |                                |
++------------+--------------------------------+--------------------------------+
+| 256        | nsp=4,nsi=254 (SFF1 Ingress    | IN\_PORT                       |
+|            | from SFF2)                     |                                |
++------------+--------------------------------+--------------------------------+
+| 5          | Match Any                      | Drop                           |
++------------+--------------------------------+--------------------------------+
+
+Table: Table Transport Egress
+
+Administering SFC OF Renderer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the SFC OpenFlow Renderer in Karaf, at least the following Karaf
+features must be installed:
+
+-  odl-openflowplugin-nxm-extensions
+
+-  odl-openflowplugin-flow-services
+
+-  odl-sfc-provider
+
+-  odl-sfc-model
+
+-  odl-sfc-openflow-renderer
+
+-  odl-sfc-ui (optional)
+
+The following command can be used to view all of the currently installed
+Karaf features:
+
+::
+
+    opendaylight-user@root>feature:list -i
+
+Or, pipe the command to a grep to see a subset of the currently
+installed Karaf features:
+
+::
+
+    opendaylight-user@root>feature:list -i | grep sfc
+
+To install a particular feature, use the Karaf ``feature:install``
+command.
+
+SFC OF Renderer Tutorial
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+In this tutorial, 2 different encapsulations will be shown: MPLS and
+NSH. The following Network Topology diagram is a logical view of the
+SFFs and SFs involved in creating the Service Chains.
+
+.. figure:: ./images/sfc/sfcofrenderer_nwtopo.png
+   :alt: SFC OpenFlow Renderer Typical Network Topology
+
+   SFC OpenFlow Renderer Typical Network Topology
+
+Prerequisites
+^^^^^^^^^^^^^
+
+To use this example, SFF OpenFlow switches must be created and connected
+as illustrated above. Additionally, the SFs must be created and
+connected.
+
+Target Environment
+^^^^^^^^^^^^^^^^^^
+
+The target environment is not important, but this use-case was created
+and tested on Linux.
+
+Instructions
+^^^^^^^^^^^^
+
+The steps to use this tutorial are as follows. The referenced
+configuration in the steps is listed in the following sections.
+
+There are numerous ways to send the configuration. In the following
+configuration chapters, the appropriate ``curl`` command is shown for
+each configuration to be sent, including the URL.
+
+Steps to configure the SFC OF Renderer tutorial:
+
+1. Send the ``SF`` RESTCONF configuration
+
+2. Send the ``SFF`` RESTCONF configuration
+
+3. Send the ``SFC`` RESTCONF configuration
+
+4. Send the ``SFP`` RESTCONF configuration
+
+5. Create the ``RSP`` with a RESTCONF RPC command
+
+Once the configuration has been successfully created, query the Rendered
+Service Paths with either the SFC UI or via RESTCONF. Notice that the
+RSP is symmetrical, so the following 2 RSPs will be created:
+
+-  sfc-path1
+
+-  sfc-path1-Reverse
+
+At this point the Service Chains have been created, and the OpenFlow
+Switches are programmed to steer traffic through the Service Chain.
+Traffic can now be injected from a client into the Service Chain. To
+debug problems, the OpenFlow tables can be dumped with the following
+commands, assuming SFF1 is called ``s1`` and SFF2 is called ``s2``.
+
+::
+
+    sudo ovs-ofctl -O OpenFlow13  dump-flows s1
+
+::
+
+    sudo ovs-ofctl -O OpenFlow13  dump-flows s2
+
+In all the following configuration sections, replace the ``${JSON}``
+string with the appropriate JSON configuration. Also, change the
+``localhost`` destination in the URL accordingly.
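+
+As an alternative to ``curl``, the same PUT can be issued from the Python
+standard library. This sketch only builds the request; pass the returned
+object to ``urllib.request.urlopen()`` against a running controller to
+actually send it (URL and credentials here mirror the curl examples):

```python
import base64
import json
import urllib.request

def build_config_put(url, payload, user="admin", password="admin"):
    """Build an authenticated RESTCONF PUT request for a JSON payload."""
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 method="PUT")
    req.add_header("Content-Type", "application/json")
    creds = "{}:{}".format(user, password).encode()
    req.add_header("Authorization",
                   "Basic " + base64.b64encode(creds).decode())
    return req
```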
+
+SFC OF Renderer NSH Tutorial
+''''''''''''''''''''''''''''
+
+The following configuration sections show how to create the different
+elements using NSH encapsulation.
+
+| **NSH Service Function configuration**
+
+The Service Function configuration can be sent with the following
+command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
+
+**SF configuration JSON.**
+
+::
+
+    {
+     "service-functions": {
+       "service-function": [
+         {
+           "name": "sf1",
+           "type": "http-header-enrichment",
+           "nsh-aware": true,
+           "ip-mgmt-address": "10.0.0.2",
+           "sf-data-plane-locator": [
+             {
+               "name": "sf1dpl",
+               "ip": "10.0.0.10",
+               "port": 4789,
+               "transport": "service-locator:vxlan-gpe",
+               "service-function-forwarder": "sff1"
+             }
+           ]
+         },
+         {
+           "name": "sf2",
+           "type": "firewall",
+           "nsh-aware": true,
+           "ip-mgmt-address": "10.0.0.3",
+           "sf-data-plane-locator": [
+             {
+               "name": "sf2dpl",
+                "ip": "10.0.0.20",
+                "port": 4789,
+                "transport": "service-locator:vxlan-gpe",
+               "service-function-forwarder": "sff2"
+             }
+           ]
+         }
+       ]
+     }
+    }
+
+| **NSH Service Function Forwarder configuration**
+
+The Service Function Forwarder configuration can be sent with the
+following command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
+
+**SFF configuration JSON.**
+
+::
+
+    {
+     "service-function-forwarders": {
+       "service-function-forwarder": [
+         {
+           "name": "sff1",
+           "service-node": "openflow:2",
+           "sff-data-plane-locator": [
+             {
+               "name": "sff1dpl",
+               "data-plane-locator":
+               {
+                   "ip": "10.0.0.1",
+                   "port": 4789,
+                   "transport": "service-locator:vxlan-gpe"
+               }
+             }
+           ],
+           "service-function-dictionary": [
+             {
+               "name": "sf1",
+               "sff-sf-data-plane-locator":
+               {
+                   "sf-dpl-name": "sf1dpl",
+                   "sff-dpl-name": "sff1dpl"
+               }
+             }
+           ]
+         },
+         {
+           "name": "sff2",
+           "service-node": "openflow:3",
+           "sff-data-plane-locator": [
+             {
+               "name": "sff2dpl",
+               "data-plane-locator":
+               {
+                   "ip": "10.0.0.2",
+                   "port": 4789,
+                   "transport": "service-locator:vxlan-gpe"
+               }
+             }
+           ],
+           "service-function-dictionary": [
+             {
+               "name": "sf2",
+               "sff-sf-data-plane-locator":
+               {
+                   "sf-dpl-name": "sf2dpl",
+                   "sff-dpl-name": "sff2dpl"
+               }
+             }
+           ]
+         }
+       ]
+     }
+    }
+
+| **NSH Service Function Chain configuration**
+
+The Service Function Chain configuration can be sent with the following
+command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
+
+**SFC configuration JSON.**
+
+::
+
+    {
+     "service-function-chains": {
+       "service-function-chain": [
+         {
+           "name": "sfc-chain1",
+           "symmetric": true,
+           "sfc-service-function": [
+             {
+               "name": "hdr-enrich-abstract1",
+               "type": "http-header-enrichment"
+             },
+             {
+               "name": "firewall-abstract1",
+               "type": "firewall"
+             }
+           ]
+         }
+       ]
+     }
+    }
+
+| **NSH Service Function Path configuration**
+
+The Service Function Path configuration can be sent with the following
+command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
+
+**SFP configuration JSON.**
+
+::
+
+    {
+      "service-function-paths": {
+        "service-function-path": [
+          {
+            "name": "sfc-path1",
+            "service-chain-name": "sfc-chain1",
+            "transport-type": "service-locator:vxlan-gpe",
+            "symmetric": true
+          }
+        ]
+      }
+    }
+
+| **NSH Rendered Service Path creation**
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path/
+
+**RSP creation JSON.**
+
+::
+
+    {
+     "input": {
+         "name": "sfc-path1",
+         "parent-service-function-path": "sfc-path1",
+         "symmetric": true
+     }
+    }
+
+| **NSH Rendered Service Path removal**
+
+The following command can be used to remove a Rendered Service Path
+called ``sfc-path1``:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{"input": {"name": "sfc-path1" } }' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:delete-rendered-path/
+
+| **NSH Rendered Service Path Query**
+
+The following command can be used to query all of the created Rendered
+Service Paths:
+
+::
+
+    curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
+
+SFC OF Renderer MPLS Tutorial
+'''''''''''''''''''''''''''''
+
+The following configuration sections show how to create the different
+elements using MPLS encapsulation.
+
+| **MPLS Service Function configuration**
+
+The Service Function configuration can be sent with the following
+command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
+
+**SF configuration JSON.**
+
+::
+
+    {
+     "service-functions": {
+       "service-function": [
+         {
+           "name": "sf1",
+           "type": "http-header-enrichment",
+           "nsh-aware": false,
+           "ip-mgmt-address": "10.0.0.2",
+           "sf-data-plane-locator": [
+             {
+               "name": "sf1-sff1",
+               "mac": "00:00:08:01:02:01",
+               "vlan-id": 1000,
+               "transport": "service-locator:mac",
+               "service-function-forwarder": "sff1"
+             }
+           ]
+         },
+         {
+           "name": "sf2",
+           "type": "firewall",
+           "nsh-aware": false,
+           "ip-mgmt-address": "10.0.0.3",
+           "sf-data-plane-locator": [
+             {
+               "name": "sf2-sff2",
+               "mac": "00:00:08:01:03:01",
+               "vlan-id": 2000,
+               "transport": "service-locator:mac",
+               "service-function-forwarder": "sff2"
+             }
+           ]
+         }
+       ]
+     }
+    }
+
+| **MPLS Service Function Forwarder configuration**
+
+The Service Function Forwarder configuration can be sent with the
+following command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
+
+**SFF configuration JSON.**
+
+::
+
+    {
+     "service-function-forwarders": {
+       "service-function-forwarder": [
+         {
+           "name": "sff1",
+           "service-node": "openflow:2",
+           "sff-data-plane-locator": [
+             {
+               "name": "ulSff1Ingress",
+               "data-plane-locator":
+               {
+                   "mpls-label": 100,
+                   "transport": "service-locator:mpls"
+               },
+               "service-function-forwarder-ofs:ofs-port":
+               {
+                   "mac": "11:11:11:11:11:11",
+                   "port-id" : "1"
+               }
+             },
+             {
+               "name": "ulSff1ToSff2",
+               "data-plane-locator":
+               {
+                   "mpls-label": 101,
+                   "transport": "service-locator:mpls"
+               },
+               "service-function-forwarder-ofs:ofs-port":
+               {
+                   "mac": "33:33:33:33:33:33",
+                   "port-id" : "2"
+               }
+             },
+             {
+               "name": "toSf1",
+               "data-plane-locator":
+               {
+                   "mac": "22:22:22:22:22:22",
+                   "vlan-id": 1000,
+                   "transport": "service-locator:mac"
+               },
+               "service-function-forwarder-ofs:ofs-port":
+               {
+                   "mac": "33:33:33:33:33:33",
+                   "port-id" : "3"
+               }
+             }
+           ],
+           "service-function-dictionary": [
+             {
+               "name": "sf1",
+               "sff-sf-data-plane-locator":
+               {
+                   "sf-dpl-name": "sf1-sff1",
+                   "sff-dpl-name": "toSf1"
+               }
+             }
+           ]
+         },
+         {
+           "name": "sff2",
+           "service-node": "openflow:3",
+           "sff-data-plane-locator": [
+             {
+               "name": "ulSff2Ingress",
+               "data-plane-locator":
+               {
+                   "mpls-label": 101,
+                   "transport": "service-locator:mpls"
+               },
+               "service-function-forwarder-ofs:ofs-port":
+               {
+                   "mac": "44:44:44:44:44:44",
+                   "port-id" : "1"
+               }
+             },
+             {
+               "name": "ulSff2Egress",
+               "data-plane-locator":
+               {
+                   "mpls-label": 102,
+                   "transport": "service-locator:mpls"
+               },
+               "service-function-forwarder-ofs:ofs-port":
+               {
+                   "mac": "66:66:66:66:66:66",
+                   "port-id" : "2"
+               }
+             },
+             {
+               "name": "toSf2",
+               "data-plane-locator":
+               {
+                   "mac": "55:55:55:55:55:55",
+                   "vlan-id": 2000,
+                   "transport": "service-locator:mac"
+               },
+               "service-function-forwarder-ofs:ofs-port":
+               {
+                   "port-id" : "3"
+               }
+             }
+           ],
+           "service-function-dictionary": [
+             {
+               "name": "sf2",
+               "sff-sf-data-plane-locator":
+               {
+                   "sf-dpl-name": "sf2-sff2",
+                   "sff-dpl-name": "toSf2"
+               },
+               "service-function-forwarder-ofs:ofs-port":
+               {
+                   "port-id" : "3"
+               }
+             }
+           ]
+         }
+       ]
+     }
+    }
+
+| **MPLS Service Function Chain configuration**
+
+The Service Function Chain configuration can be sent with the following
+command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
+
+**SFC configuration JSON.**
+
+::
+
+    {
+     "service-function-chains": {
+       "service-function-chain": [
+         {
+           "name": "sfc-chain1",
+           "symmetric": true,
+           "sfc-service-function": [
+             {
+               "name": "hdr-enrich-abstract1",
+               "type": "http-header-enrichment"
+             },
+             {
+               "name": "firewall-abstract1",
+               "type": "firewall"
+             }
+           ]
+         }
+       ]
+     }
+    }
+
+| **MPLS Service Function Path configuration**
+
+The Service Function Path configuration can be sent with the following
+command:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
+
+**SFP configuration JSON.**
+
+::
+
+    {
+      "service-function-paths": {
+        "service-function-path": [
+          {
+            "name": "sfc-path1",
+            "service-chain-name": "sfc-chain1",
+            "transport-type": "service-locator:mpls",
+            "symmetric": true
+          }
+        ]
+      }
+    }
+
+| **MPLS Rendered Service Path creation**
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path/
+
+**RSP creation JSON.**
+
+::
+
+    {
+     "input": {
+         "name": "sfc-path1",
+         "parent-service-function-path": "sfc-path1",
+         "symmetric": true
+     }
+    }
+
+| **MPLS Rendered Service Path removal**
+
+The following command can be used to remove a Rendered Service Path
+called ``sfc-path1``:
+
+::
+
+    curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{"input": {"name": "sfc-path1" } }' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:delete-rendered-path/
+
+| **MPLS Rendered Service Path Query**
+
+The following command can be used to query all of the created Rendered
+Service Paths:
+
+::
+
+    curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
+
+SFC IOS XE Renderer User Guide
+------------------------------
+
+Overview
+~~~~~~~~
+
+The early Service Function Chaining (SFC) renderer for IOS-XE devices
+(SFC IOS-XE renderer) implements Service Chaining functionality on
+IOS-XE capable switches. It listens for the creation of a Rendered
+Service Path (RSP) and sets up Service Function Forwarders (SFF) that
+are hosted on IOS-XE switches to steer traffic through the service
+chain.
+
+Common acronyms used in the following sections:
+
+-  SF - Service Function
+
+-  SFF - Service Function Forwarder
+
+-  SFC - Service Function Chain
+
+-  SP - Service Path
+
+-  SFP - Service Function Path
+
+-  RSP - Rendered Service Path
+
+-  LSF - Local Service Forwarder
+
+-  RSF - Remote Service Forwarder
+
+SFC IOS-XE Renderer Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When the SFC IOS-XE renderer is initialized, all required listeners are
+registered to handle incoming data. These include the CSR/IOS-XE
+``NodeListener``, which stores data about all configurable devices
+including their mountpoints (used here as data brokers),
+``ServiceFunctionListener``, ``ServiceForwarderListener`` (see mapping),
+and ``RenderedPathListener``, used to listen for RSP changes. When the
+SFC IOS-XE renderer is invoked, ``RenderedPathListener`` calls the
+``IosXeRspProcessor``, which processes the RSP change and creates all
+necessary Service Paths and Remote Service Forwarders (if needed) on the
+IOS-XE devices.
+
+Service Path details
+~~~~~~~~~~~~~~~~~~~~
+
+Each Service Path is defined by an index (represented by the NSP) and
+contains service path entries. Each entry has an appropriate service
+index (NSI) and a definition of the next hop. The next hop can be a
+Service Function, a different Service Function Forwarder, or a
+definition of the end of the chain - terminate. After terminating, the
+packet is sent to its destination. If an SFF is defined as the next hop,
+it has to be present on the device in the form of a Remote Service
+Forwarder. RSFs are also created during RSP processing.
+
+Example of Service Path:
+
+::
+
+    service-chain service-path 200
+       service-index 255 service-function firewall-1
+       service-index 254 service-function dpi-1
+       service-index 253 terminate
+
+Mapping to IOS-XE SFC entities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The renderer contains mappers for SFs and SFFs. An IOS-XE capable device
+uses its own definition of Service Functions and Service Function
+Forwarders according to the appropriate .yang file.
+``ServiceFunctionListener`` serves as a listener for SF changes. If an SF
+appears in the datastore, the listener extracts its management IP address
+and looks into the cached IOS-XE nodes. If one of the available nodes
+matches, the Service Function is mapped in ``IosXeServiceFunctionMapper``
+to be understandable by the IOS-XE device and is written into the
+device’s config. ``ServiceForwarderListener`` is used in a similar way.
+All SFFs with a suitable management IP address are mapped in
+``IosXeServiceForwarderMapper``. Remapped SFFs are configured as Local
+Service Forwarders. It is not possible to directly create a Remote
+Service Forwarder using the IOS-XE renderer. An RSF is created only
+during RSP processing.
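+
+A simplified illustration of that matching step (the dict shapes and names
+here are hypothetical, not the renderer's actual classes): an SF is mapped
+onto the cached IOS-XE node whose address equals the SF's management IP.

```python
# Return the first cached node whose address matches the SF's
# management IP, or None when no node matches.
def match_sf_to_node(sf, cached_nodes):
    return next((node for node in cached_nodes
                 if node.get("address") == sf.get("ip-mgmt-address")),
                None)
```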
+
+Administering SFC IOS-XE renderer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the SFC IOS-XE Renderer in Karaf, at least the following Karaf
+features must be installed:
+
+-  odl-aaa-shiro
+
+-  odl-sfc-model
+
+-  odl-sfc-provider
+
+-  odl-restconf
+
+-  odl-netconf-topology
+
+-  odl-sfc-ios-xe-renderer
+
+SFC IOS-XE renderer Tutorial
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This tutorial is a simple example of how to create a Service Path on an
+IOS-XE capable device using the IOS-XE renderer.
+
+Preconditions
+^^^^^^^^^^^^^
+
+To connect to an IOS-XE device, it is necessary to use several modified
+YANG models that override the device's own. All .yang files are in the
+``Yang/netconf`` folder of the ``sfc-ios-xe-renderer`` module in the SFC
+project. These files have to be copied to the ``cache/schema``
+directory before Karaf is started. After that, custom capabilities have
+to be sent to network-topology:
+
+::
+
+    PUT ./config/network-topology:network-topology/topology/topology-netconf/node/<device-name>
+
+    <node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
+      <node-id>device-name</node-id>
+      <host xmlns="urn:opendaylight:netconf-node-topology">device-ip</host>
+      <port xmlns="urn:opendaylight:netconf-node-topology">2022</port>
+      <username xmlns="urn:opendaylight:netconf-node-topology">login</username>
+      <password xmlns="urn:opendaylight:netconf-node-topology">password</password>
+      <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
+      <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
+      <yang-module-capabilities xmlns="urn:opendaylight:netconf-node-topology">
+         <override>true</override>
+         <capability xmlns="urn:opendaylight:netconf-node-topology">
+            urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2013-07-15
+         </capability>
+         <capability xmlns="urn:opendaylight:netconf-node-topology">
+            urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&amp;revision=2013-07-15
+         </capability>
+         <capability xmlns="urn:opendaylight:netconf-node-topology">
+            urn:ios?module=ned&amp;revision=2016-03-08
+         </capability>
+         <capability xmlns="urn:opendaylight:netconf-node-topology">
+            http://tail-f.com/yang/common?module=tailf-common&amp;revision=2015-05-22
+         </capability>
+         <capability xmlns="urn:opendaylight:netconf-node-topology">
+            http://tail-f.com/yang/common?module=tailf-meta-extensions&amp;revision=2013-11-07
+         </capability>
+         <capability xmlns="urn:opendaylight:netconf-node-topology">
+            http://tail-f.com/yang/common?module=tailf-cli-extensions&amp;revision=2015-03-19
+         </capability>
+      </yang-module-capabilities>
+    </node>
+
+.. note::
+
+    The device name in the URL and in the XML must match.
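
Since a mismatch here is an easy mistake, it can be caught with a quick
client-side check before sending the PUT. The helper below is
hypothetical (not part of SFC); it only compares the last URL segment
with the ``<node-id>`` in the payload:

```python
import xml.etree.ElementTree as ET

# Namespace of the <node> element in the payload above.
NS = "{urn:TBD:params:xml:ns:yang:network-topology}"

def node_id_matches(url, payload):
    """True when the device name at the end of the RESTCONF URL equals
    the <node-id> element inside the XML payload."""
    device_in_url = url.rstrip("/").rsplit("/", 1)[-1]
    node_id = ET.fromstring(payload).find(NS + "node-id").text
    return device_in_url == node_id

payload = """<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
  <node-id>device-name</node-id>
</node>"""
url = ".../topology-netconf/node/device-name"
print(node_id_matches(url, payload))  # True
```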
+
+Instructions
+^^^^^^^^^^^^
+
+When the IOS-XE renderer is installed, all NETCONF nodes in
+topology-netconf are processed and all capable nodes with accessible
+mountpoints are cached. The first step is to create a Local Service
+Forwarder (LSF) on the node.
+
+``Service Function Forwarder configuration``
+
+::
+
+    PUT ./config/service-function-forwarder:service-function-forwarders
+
+    {
+        "service-function-forwarders": {
+            "service-function-forwarder": [
+                {
+                    "name": "CSR1Kv-2",
+                    "ip-mgmt-address": "172.25.73.23",
+                    "sff-data-plane-locator": [
+                        {
+                            "name": "CSR1Kv-2-dpl",
+                            "data-plane-locator": {
+                                "transport": "service-locator:vxlan-gpe",
+                                "port": 6633,
+                                "ip": "10.99.150.10"
+                            }
+                        }
+                    ]
+                }
+            ]
+        }
+    }
+
+If an IOS-XE node with the appropriate management IP exists, this
+configuration is mapped and the LSF is created on the device. The same
+approach is used for Service Functions.
+
+::
+
+    PUT ./config/service-function:service-functions
+
+    {
+        "service-functions": {
+            "service-function": [
+                {
+                    "name": "Firewall",
+                    "ip-mgmt-address": "172.25.73.23",
+                    "type": "service-function-type: firewall",
+                    "nsh-aware": true,
+                    "sf-data-plane-locator": [
+                        {
+                            "name": "firewall-dpl",
+                            "port": 6633,
+                            "ip": "12.1.1.2",
+                            "transport": "service-locator:gre",
+                            "service-function-forwarder": "CSR1Kv-2"
+                        }
+                    ]
+                },
+                {
+                    "name": "Dpi",
+                    "ip-mgmt-address": "172.25.73.23",
+                    "type":"service-function-type: dpi",
+                    "nsh-aware": true,
+                    "sf-data-plane-locator": [
+                        {
+                            "name": "dpi-dpl",
+                            "port": 6633,
+                            "ip": "12.1.1.1",
+                            "transport": "service-locator:gre",
+                            "service-function-forwarder": "CSR1Kv-2"
+                        }
+                    ]
+                },
+                {
+                    "name": "Qos",
+                    "ip-mgmt-address": "172.25.73.23",
+                    "type":"service-function-type: qos",
+                    "nsh-aware": true,
+                    "sf-data-plane-locator": [
+                        {
+                            "name": "qos-dpl",
+                            "port": 6633,
+                            "ip": "12.1.1.4",
+                            "transport": "service-locator:gre",
+                            "service-function-forwarder": "CSR1Kv-2"
+                        }
+                    ]
+                }
+            ]
+        }
+    }
+
+All these SFs are configured on the same device as the LSF. The next
+step is to prepare the Service Function Chain. The SFC is symmetric.
+
+::
+
+    PUT ./config/service-function-chain:service-function-chains/
+
+    {
+        "service-function-chains": {
+            "service-function-chain": [
+                {
+                    "name": "CSR3XSF",
+                    "symmetric": "true",
+                    "sfc-service-function": [
+                        {
+                            "name": "Firewall",
+                            "type": "service-function-type: firewall"
+                        },
+                        {
+                            "name": "Dpi",
+                            "type": "service-function-type: dpi"
+                        },
+                        {
+                            "name": "Qos",
+                            "type": "service-function-type: qos"
+                        }
+                    ]
+                }
+            ]
+        }
+    }
+
+Service Function Path:
+
+::
+
+    PUT ./config/service-function-path:service-function-paths/
+
+    {
+        "service-function-paths": {
+            "service-function-path": [
+                {
+                    "name": "CSR3XSF-Path",
+                    "service-chain-name": "CSR3XSF",
+                    "starting-index": 255,
+                    "symmetric": "true"
+                }
+            ]
+        }
+    }
+
+Without a classifier, it is possible to POST the RSP directly.
+
+::
+
+    POST ./operations/rendered-service-path:create-rendered-path
+
+    {
+      "input": {
+          "name": "CSR3XSF-Path-RSP",
+          "parent-service-function-path": "CSR3XSF-Path",
+          "symmetric": true
+      }
+    }
+
+The resulting configuration:
+
+::
+
+    !
+    service-chain service-function-forwarder local
+      ip address 10.99.150.10
+    !
+    service-chain service-function firewall
+    ip address 12.1.1.2
+      encapsulation gre enhanced divert
+    !
+    service-chain service-function dpi
+    ip address 12.1.1.1
+      encapsulation gre enhanced divert
+    !
+    service-chain service-function qos
+    ip address 12.1.1.4
+      encapsulation gre enhanced divert
+    !
+    service-chain service-path 1
+      service-index 255 service-function firewall
+      service-index 254 service-function dpi
+      service-index 253 service-function qos
+      service-index 252 terminate
+    !
+    service-chain service-path 2
+      service-index 255 service-function qos
+      service-index 254 service-function dpi
+      service-index 253 service-function firewall
+      service-index 252 terminate
+    !
+
+Service Path 1 is direct, Service Path 2 is reversed. Path numbers may
+vary.
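
The direct and reversed paths above can be derived from a single chain.
A minimal sketch (a hypothetical helper, not renderer code) of how the
NSI counts down from the starting index and ends with ``terminate``:

```python
def build_service_path(functions, starting_index=255):
    """Return (service-index, service-function) entries for one path,
    decrementing the NSI per hop and appending a terminate entry."""
    entries = [(starting_index - i, sf) for i, sf in enumerate(functions)]
    entries.append((starting_index - len(functions), "terminate"))
    return entries

chain = ["firewall", "dpi", "qos"]
direct = build_service_path(chain)         # corresponds to service-path 1
reverse = build_service_path(chain[::-1])  # corresponds to service-path 2
print(direct)
# [(255, 'firewall'), (254, 'dpi'), (253, 'qos'), (252, 'terminate')]
```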
+
+Service Function Scheduling Algorithms
+--------------------------------------
+
+Overview
+~~~~~~~~
+
+When creating a Rendered Service Path, the SFC controller originally
+chose the first available service function from a list of service
+function names. This could lead to issues such as overloaded service
+functions and unnecessarily long service paths, because SFC had no
+means to understand the status of the service functions or the network
+topology. The service function selection framework supports at least
+four algorithms (Random, Round Robin, Load Balancing and Shortest Path)
+to select the most appropriate service function when instantiating the
+Rendered Service Path. In addition, it is an extensible framework that
+allows third-party selection algorithms to be plugged in.
+
+Architecture
+~~~~~~~~~~~~
+
+The following figure illustrates the service function selection
+framework and algorithms.
+
+.. figure:: ./images/sfc/sf-selection-arch.png
+   :alt: SF Selection Architecture
+
+   SF Selection Architecture
+
+A user has three different ways to select one service function selection
+algorithm:
+
+1. Integrated RESTCONF calls. OpenStack and/or other administration
+   systems can provide plugins that call the APIs to select a
+   scheduling algorithm.
+
+2. Command line tools. Command line tools such as curl, or browser
+   plugins such as POSTMAN (for Google Chrome) and RESTClient (for
+   Mozilla Firefox), can select a scheduling algorithm by making
+   RESTCONF calls.
+
+3. SFC-UI. The SFC-UI provides an option for choosing a selection
+   algorithm when creating a Rendered Service Path.
+
+The RESTCONF northbound SFC API provides GUI/RESTCONF interactions for
+choosing the service function selection algorithm. The MD-SAL data
+store holds all supported service function selection algorithms and
+provides APIs to enable one of them. Once a service function selection
+algorithm is enabled, it is applied when creating a Rendered Service
+Path.
+
+Select SFs with Scheduler
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An administrator can use any of the following ways to select one of the
+selection algorithms when creating a Rendered Service Path.
+
+-  Command line tools. These include the Linux curl command and browser
+   plugins such as POSTMAN (for Google Chrome) or RESTClient (for
+   Mozilla Firefox). In this case, the following JSON content is
+   needed: Service\_function\_schedule\_type.json
+
+   ::
+
+       {
+         "service-function-scheduler-types": {
+           "service-function-scheduler-type": [
+             {
+               "name": "random",
+               "type": "service-function-scheduler-type:random",
+               "enabled": false
+             },
+             {
+               "name": "roundrobin",
+               "type": "service-function-scheduler-type:round-robin",
+               "enabled": true
+             },
+             {
+               "name": "loadbalance",
+               "type": "service-function-scheduler-type:load-balance",
+               "enabled": false
+             },
+             {
+               "name": "shortestpath",
+               "type": "service-function-scheduler-type:shortest-path",
+               "enabled": false
+             }
+           ]
+         }
+       }
+
+   If using the Linux curl command, it could be:
+
+   ::
+
+       curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '$${Service_function_schedule_type.json}' \
+       -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/
+
+   Here is also a snapshot for using the RESTClient plugin:
+
+.. figure:: ./images/sfc/RESTClient-snapshot.png
+   :alt: Mozilla Firefox RESTClient
+
+   Mozilla Firefox RESTClient
+
+-  SFC-UI. The SFC-UI provides a drop-down menu for the service
+   function selection algorithm. Here is a snapshot of the user
+   interaction in the SFC-UI when creating a Rendered Service Path.
+
+.. figure:: ./images/sfc/karaf-webui-select-a-type.png
+   :alt: Karaf Web UI
+
+   Karaf Web UI
+
+.. note::
+
+    Some service function selection algorithms in the drop-down list
+    are not implemented yet. Only the first three algorithms are
+    committed at the moment.
+
+Random
+^^^^^^
+
+Select Service Function from the name list randomly.
+
+Overview
+''''''''
+
+The Random algorithm selects one Service Function at random from the
+name list that it gets from the Service Function Type.
+
+Prerequisites
+'''''''''''''
+
+-  Service Function information is stored in the datastore.
+
+-  Either no algorithm or the Random algorithm is selected.
+
+Target Environment
+''''''''''''''''''
+
+The Random algorithm is used when either no algorithm type is selected
+or the Random algorithm is selected.
+
+Instructions
+''''''''''''
+
+Once the plugins are installed into Karaf successfully, a user can use
+any of the methods above to select the Random scheduling algorithm
+type. There are no special instructions for using the Random algorithm.
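
The selection itself amounts to a uniform random pick from the SF names
of the requested type. A minimal sketch (illustrative only, not the SFC
implementation):

```python
import random

def schedule_random(sf_names):
    """Pick one Service Function name uniformly at random."""
    return random.choice(sf_names)

firewalls = ["firewall-1", "firewall-2"]
# Any of the registered firewall SFs may be chosen.
print(schedule_random(firewalls))
```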
+
+Round Robin
+^^^^^^^^^^^
+
+Select Service Function from the name list in Round Robin manner.
+
+Overview
+''''''''
+
+The Round Robin algorithm is used to select one Service Function from
+the name list which it gets from the Service Function Type in a Round
+Robin manner; this balances the workload across all Service Functions.
+However, it cannot guarantee that every Service Function carries
+exactly the same workload, because the Round Robin is flow-based.
+
+Prerequisites
+'''''''''''''
+
+-  Service Function information is stored in the datastore.
+
+-  The Round Robin algorithm is selected.
+
+Target Environment
+''''''''''''''''''
+
+The Round Robin algorithm will work once the Round Robin algorithm is
+selected.
+
+Instructions
+''''''''''''
+
+Once the plugins are installed into Karaf successfully, a user can use
+any of the methods above to select the Round Robin scheduling algorithm
+type. There are no special instructions for using the Round Robin
+algorithm.
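
The flow-based rotation described above can be sketched as cycling
through the SF names of a type, one pick per RSP creation (illustrative
only, not the SFC implementation):

```python
import itertools

class RoundRobinScheduler:
    """Hand out SF names of one type in turn, wrapping around the list."""

    def __init__(self, sf_names):
        self._cycle = itertools.cycle(sf_names)

    def next_sf(self):
        return next(self._cycle)

sched = RoundRobinScheduler(["napt44-1", "napt44-2"])
print([sched.next_sf() for _ in range(3)])  # ['napt44-1', 'napt44-2', 'napt44-1']
```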
+
+Load Balance Algorithm
+^^^^^^^^^^^^^^^^^^^^^^
+
+Select appropriate Service Function by actual CPU utilization.
+
+Overview
+''''''''
+
+The Load Balance algorithm is used to select the most appropriate
+Service Function based on the actual CPU utilization of the service
+functions. The CPU utilization of each service function is obtained
+from monitoring information reported via NETCONF.
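
The selection rule boils down to picking the SF with the lowest
monitored CPU utilization among candidates of the requested type. The
sketch below is illustrative only; the utilization figures are made up
and would come from NETCONF monitoring reports in practice:

```python
# Hypothetical monitoring snapshot: SF name -> CPU utilization (%).
cpu_utilization = {"napt44-1": 40, "napt44-2": 12}

def schedule_load_balance(candidates, utilization):
    """Pick the candidate SF with the minimum reported CPU utilization."""
    return min(candidates, key=lambda sf: utilization[sf])

print(schedule_load_balance(["napt44-1", "napt44-2"], cpu_utilization))
# napt44-2
```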
+
+Prerequisites
+'''''''''''''
+
+-  CPU utilization reporting for each Service Function.
+
+-  NETCONF server.
+
+-  NETCONF client.
+
+-  Each VM runs a NETCONF server that can communicate with the NETCONF
+   client.
+
+Instructions
+''''''''''''
+
+Set up VMs as Service Functions and enable the NETCONF server in each
+VM. Ensure that you configure each of them separately. For example:
+
+a. Set up 4 VMs: two SFs of type Firewall and two of type Napt44. Name
+   them firewall-1, firewall-2, napt44-1 and napt44-2. The four VMs can
+   run on either the same server or different servers.
+
+b. Install NETCONF server on every VM and enable it. More information on
+   NETCONF can be found on the OpenDaylight wiki here:
+   https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf:Manual_netopeer_installation
+
+c. Get monitoring data from the NETCONF server. The monitoring data
+   should be retrieved from the NETCONF server running in the VMs. The
+   following static XML data is an example:
+
+::
+
+    <?xml version="1.0" encoding="UTF-8"?>
+    <service-function-description-monitor-report>
+      <SF-description>
+        <number-of-dataports>2</number-of-dataports>
+        <capabilities>
+          <supported-packet-rate>5</supported-packet-rate>
+          <supported-bandwidth>10</supported-bandwidth>
+          <supported-ACL-number>2000</supported-ACL-number>
+          <RIB-size>200</RIB-size>
+          <FIB-size>100</FIB-size>
+          <ports-bandwidth>
+            <port-bandwidth>
+              <port-id>1</port-id>
+              <ipaddress>10.0.0.1</ipaddress>
+              <macaddress>00:1e:67:a2:5f:f4</macaddress>
+              <supported-bandwidth>20</supported-bandwidth>
+            </port-bandwidth>
+            <port-bandwidth>
+              <port-id>2</port-id>
+              <ipaddress>10.0.0.2</ipaddress>
+              <macaddress>01:1e:67:a2:5f:f6</macaddress>
+              <supported-bandwidth>10</supported-bandwidth>
+            </port-bandwidth>
+          </ports-bandwidth>
+        </capabilities>
+      </SF-description>
+      <SF-monitoring-info>
+        <liveness>true</liveness>
+        <resource-utilization>
+            <packet-rate-utilization>10</packet-rate-utilization>
+            <bandwidth-utilization>15</bandwidth-utilization>
+            <CPU-utilization>12</CPU-utilization>
+            <memory-utilization>17</memory-utilization>
+            <available-memory>8</available-memory>
+            <RIB-utilization>20</RIB-utilization>
+            <FIB-utilization>25</FIB-utilization>
+            <power-utilization>30</power-utilization>
+            <SF-ports-bandwidth-utilization>
+              <port-bandwidth-utilization>
+                <port-id>1</port-id>
+                <bandwidth-utilization>20</bandwidth-utilization>
+              </port-bandwidth-utilization>
+              <port-bandwidth-utilization>
+                <port-id>2</port-id>
+                <bandwidth-utilization>30</bandwidth-utilization>
+              </port-bandwidth-utilization>
+            </SF-ports-bandwidth-utilization>
+        </resource-utilization>
+      </SF-monitoring-info>
+    </service-function-description-monitor-report>
+
+d. Unzip the SFC release tarball.
+
+e. Run SFC: ${sfc}/bin/karaf. More information on Service Function
+   Chaining can be found on the OpenDaylight SFC wiki page:
+   https://wiki.opendaylight.org/view/Service_Function_Chaining:Main
+
+f. Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2) and click the
+   button to create a Rendered Service Path in the SFC UI
+   (http://localhost:8181/sfc/index.html).
+
+g. Verify the Rendered Service Path to ensure that the CPU utilization
+   of the selected hop is the minimum among all service functions of
+   the same type. The correct RSP is firewall-1⇒napt44-2.
+
+Shortest Path Algorithm
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Select appropriate Service Function by Dijkstra’s algorithm. Dijkstra’s
+algorithm is an algorithm for finding the shortest paths between nodes
+in a graph.
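
A minimal Dijkstra sketch on an SFF/SF graph is shown below. This is
illustrative only; the topology is a hypothetical example with SFs
attached to their SFFs and unit weight on every link:

```python
import heapq

# Hypothetical topology: sff1 -- sff3 -- sff2, SFs hang off their SFFs.
GRAPH = {
    "sff1": ["sff3", "firewall-1", "napt44-1"],
    "sff2": ["sff3", "firewall-2", "napt44-2"],
    "sff3": ["sff1", "sff2"],
    "firewall-1": ["sff1"], "napt44-1": ["sff1"],
    "firewall-2": ["sff2"], "napt44-2": ["sff2"],
}

def shortest_path(graph, src, dst):
    """Return the node list of a shortest path from src to dst
    (unit link weights), or None if dst is unreachable."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                heapq.heappush(queue, (cost + 1, neighbor, path + [neighbor]))
    return None

# From firewall-2, napt44-2 is two hops away while napt44-1 is four,
# so a shortest-path scheduler would pick napt44-2 as the next hop.
print(shortest_path(GRAPH, "firewall-2", "napt44-2"))
# ['firewall-2', 'sff2', 'napt44-2']
```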
+
+Overview
+''''''''
+
+The Shortest Path algorithm is used to select the most appropriate
+Service Function based on the actual topology.
+
+Prerequisites
+'''''''''''''
+
+-  Deployed topology (including SFFs, SFs and their links).
+
+-  Dijkstra’s algorithm. More information on Dijkstra’s algorithm can be
+   found on the wiki here:
+   http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
+
+Instructions
+''''''''''''
+
+a. Unzip the SFC release tarball.
+
+b. Run SFC: ${sfc}/bin/karaf.
+
+c. Deploy SFFs and SFs: import service-function-forwarders.json and
+   service-functions.json in the UI
+   (http://localhost:8181/sfc/index.html#/sfc/config).
+
+service-function-forwarders.json:
+
+::
+
+    {
+      "service-function-forwarders": {
+        "service-function-forwarder": [
+          {
+            "name": "SFF-br1",
+            "service-node": "OVSDB-test01",
+            "rest-uri": "http://localhost:5001",
+            "sff-data-plane-locator": [
+              {
+                "name": "eth0",
+                "service-function-forwarder-ovs:ovs-bridge": {
+                  "uuid": "4c3778e4-840d-47f4-b45e-0988e514d26c",
+                  "bridge-name": "br-tun"
+                },
+                "data-plane-locator": {
+                  "port": 5000,
+                  "ip": "192.168.1.1",
+                  "transport": "service-locator:vxlan-gpe"
+                }
+              }
+            ],
+            "service-function-dictionary": [
+              {
+                "sff-sf-data-plane-locator": {
+                  "port": 10001,
+                  "ip": "10.3.1.103"
+                },
+                "name": "napt44-1",
+                "type": "service-function-type:napt44"
+              },
+              {
+                "sff-sf-data-plane-locator": {
+                  "port": 10003,
+                  "ip": "10.3.1.102"
+                },
+                "name": "firewall-1",
+                "type": "service-function-type:firewall"
+              }
+            ],
+            "connected-sff-dictionary": [
+              {
+                "name": "SFF-br3"
+              }
+            ]
+          },
+          {
+            "name": "SFF-br2",
+            "service-node": "OVSDB-test01",
+            "rest-uri": "http://localhost:5002",
+            "sff-data-plane-locator": [
+              {
+                "name": "eth0",
+                "service-function-forwarder-ovs:ovs-bridge": {
+                  "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a1",
+                  "bridge-name": "br-tun"
+                },
+                "data-plane-locator": {
+                  "port": 5000,
+                  "ip": "192.168.1.2",
+                  "transport": "service-locator:vxlan-gpe"
+                }
+              }
+            ],
+            "service-function-dictionary": [
+              {
+                "sff-sf-data-plane-locator": {
+                  "port": 10002,
+                  "ip": "10.3.1.103"
+                },
+                "name": "napt44-2",
+                "type": "service-function-type:napt44"
+              },
+              {
+                "sff-sf-data-plane-locator": {
+                  "port": 10004,
+                  "ip": "10.3.1.101"
+                },
+                "name": "firewall-2",
+                "type": "service-function-type:firewall"
+              }
+            ],
+            "connected-sff-dictionary": [
+              {
+                "name": "SFF-br3"
+              }
+            ]
+          },
+          {
+            "name": "SFF-br3",
+            "service-node": "OVSDB-test01",
+            "rest-uri": "http://localhost:5005",
+            "sff-data-plane-locator": [
+              {
+                "name": "eth0",
+                "service-function-forwarder-ovs:ovs-bridge": {
+                  "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a4",
+                  "bridge-name": "br-tun"
+                },
+                "data-plane-locator": {
+                  "port": 5000,
+                  "ip": "192.168.1.2",
+                  "transport": "service-locator:vxlan-gpe"
+                }
+              }
+            ],
+            "service-function-dictionary": [
+              {
+                "sff-sf-data-plane-locator": {
+                  "port": 10005,
+                  "ip": "10.3.1.104"
+                },
+                "name": "test-server",
+                "type": "service-function-type:dpi"
+              },
+              {
+                "sff-sf-data-plane-locator": {
+                  "port": 10006,
+                  "ip": "10.3.1.102"
+                },
+                "name": "test-client",
+                "type": "service-function-type:dpi"
+              }
+            ],
+            "connected-sff-dictionary": [
+              {
+                "name": "SFF-br1"
+              },
+              {
+                "name": "SFF-br2"
+              }
+            ]
+          }
+        ]
+      }
+    }
+
+service-functions.json:
+
+::
+
+    {
+      "service-functions": {
+        "service-function": [
+          {
+            "rest-uri": "http://localhost:10001",
+            "ip-mgmt-address": "10.3.1.103",
+            "sf-data-plane-locator": [
+              {
+                "name": "preferred",
+                "port": 10001,
+                "ip": "10.3.1.103",
+                "service-function-forwarder": "SFF-br1"
+              }
+            ],
+            "name": "napt44-1",
+            "type": "service-function-type:napt44",
+            "nsh-aware": true
+          },
+          {
+            "rest-uri": "http://localhost:10002",
+            "ip-mgmt-address": "10.3.1.103",
+            "sf-data-plane-locator": [
+              {
+                "name": "master",
+                "port": 10002,
+                "ip": "10.3.1.103",
+                "service-function-forwarder": "SFF-br2"
+              }
+            ],
+            "name": "napt44-2",
+            "type": "service-function-type:napt44",
+            "nsh-aware": true
+          },
+          {
+            "rest-uri": "http://localhost:10003",
+            "ip-mgmt-address": "10.3.1.103",
+            "sf-data-plane-locator": [
+              {
+                "name": "1",
+                "port": 10003,
+                "ip": "10.3.1.102",
+                "service-function-forwarder": "SFF-br1"
+              }
+            ],
+            "name": "firewall-1",
+            "type": "service-function-type:firewall",
+            "nsh-aware": true
+          },
+          {
+            "rest-uri": "http://localhost:10004",
+            "ip-mgmt-address": "10.3.1.103",
+            "sf-data-plane-locator": [
+              {
+                "name": "2",
+                "port": 10004,
+                "ip": "10.3.1.101",
+                "service-function-forwarder": "SFF-br2"
+              }
+            ],
+            "name": "firewall-2",
+            "type": "service-function-type:firewall",
+            "nsh-aware": true
+          },
+          {
+            "rest-uri": "http://localhost:10005",
+            "ip-mgmt-address": "10.3.1.103",
+            "sf-data-plane-locator": [
+              {
+                "name": "3",
+                "port": 10005,
+                "ip": "10.3.1.104",
+                "service-function-forwarder": "SFF-br3"
+              }
+            ],
+            "name": "test-server",
+            "type": "service-function-type:dpi",
+            "nsh-aware": true
+          },
+          {
+            "rest-uri": "http://localhost:10006",
+            "ip-mgmt-address": "10.3.1.103",
+            "sf-data-plane-locator": [
+              {
+                "name": "4",
+                "port": 10006,
+                "ip": "10.3.1.102",
+                "service-function-forwarder": "SFF-br3"
+              }
+            ],
+            "name": "test-client",
+            "type": "service-function-type:dpi",
+            "nsh-aware": true
+          }
+        ]
+      }
+    }
+
+The deployed topology looks like this:
+
+::
+
+                  +----+           +----+          +----+
+                  |sff1|+----------|sff3|---------+|sff2|
+                  +----+           +----+          +----+
+                    |                                  |
+             +--------------+                   +--------------+
+             |              |                   |              |
+        +----------+   +--------+          +----------+   +--------+
+        |firewall-1|   |napt44-1|          |firewall-2|   |napt44-2|
+        +----------+   +--------+          +----------+   +--------+
+
+-  Deploy the SFC2 (firewall-abstract2⇒napt44-abstract2), select
+   "Shortest Path" as the schedule type and click the button to create
+   a Rendered Service Path in the SFC UI
+   (http://localhost:8181/sfc/index.html).
+
+.. figure:: ./images/sfc/sf-schedule-type.png
+   :alt: select schedule type
+
+   select schedule type
+
+-  Verify the Rendered Service Path to ensure that the selected hops
+   are attached to the same SFF. The correct RSP is
+   firewall-1⇒napt44-1 or firewall-2⇒napt44-2. The first SF type in
+   the Service Function Chain is Firewall, so the algorithm selects the
+   first hop randomly among all SFs of type Firewall. Assume the first
+   selected SF is firewall-2. All paths from firewall-2 to an SF of
+   type Napt44 are listed below:
+
+   -  Path1: firewall-2 → sff2 → napt44-2
+
+   -  Path2: firewall-2 → sff2 → sff3 → sff1 → napt44-1
+
+   The shortest path is Path1, so the selected next hop is napt44-2.
+
+.. figure:: ./images/sfc/sf-rendered-service-path.png
+   :alt: rendered service path
+
+   rendered service path
+
+Service Function Load Balancing User Guide
+------------------------------------------
+
+Overview
+~~~~~~~~
+
+The SFC Load-Balancing feature implements load balancing across Service
+Functions, rather than a one-to-one mapping between a
+Service-Function-Forwarder and a Service-Function.
+
+Load Balancing Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Service Function Groups (SFG) can replace Service Functions (SF) in the
+Rendered Path model. A Service Path can only be defined using SFGs or
+SFs, but not a combination of both.
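
The "only SFGs or only SFs, never a mix" constraint can be expressed as
a simple guard. The helper below is hypothetical (not part of the SFC
code base) and models each hop as a dict with a ``kind`` field:

```python
def path_is_valid(hops):
    """True when every hop references only SFGs or only SFs.
    Each hop is modelled as {"kind": "sf" | "sfg", "name": ...}."""
    kinds = {hop["kind"] for hop in hops}
    return kinds <= {"sf"} or kinds <= {"sfg"}

print(path_is_valid([{"kind": "sfg", "name": "SFG1"}]))          # True
print(path_is_valid([{"kind": "sfg", "name": "SFG1"},
                     {"kind": "sf", "name": "napt44-1"}]))       # False
```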
+
+Relevant objects in the YANG model are as follows:
+
+1. Service-Function-Group-Algorithm:
+
+   ::
+
+       Service-Function-Group-Algorithms {
+           Service-Function-Group-Algorithm {
+               String name
+               String type
+           }
+       }
+
+   ::
+
+       Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
+
+2. Service-Function-Group:
+
+   ::
+
+       Service-Function-Groups {
+           Service-Function-Group {
+               String name
+               String serviceFunctionGroupAlgorithmName
+               String type
+               String groupId
+               Service-Function-Group-Element {
+                   String service-function-name
+                   int index
+               }
+           }
+       }
+
+3. ServiceFunctionHop: holds a reference to a name of SFG (or SF)
+
+Tutorials
+~~~~~~~~~
+
+This tutorial explains how to create a simple SFC configuration with an
+SFG instead of an SF. In this example, the SFG will include two
+existing SFs.
+
+Setup SFC
+^^^^^^^^^
+
+For general SFC setup and scenarios, please see the SFC wiki page:
+https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101
+
+Create an algorithm
+^^^^^^^^^^^^^^^^^^^
+
+POST -
+http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
+
+::
+
+    {
+        "service-function-group-algorithm": [
+          {
+            "name": "alg1",
+            "type": "ALL"
+          }
+       ]
+    }
+
+(Header "Content-Type": application/json)
+
+Verify: get all algorithms
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+GET -
+http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
+
+In order to delete all algorithms: DELETE -
+http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
+
+Create a group
+^^^^^^^^^^^^^^
+
+POST -
+http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups
+
+::
+
+    {
+        "service-function-group": [
+        {
+            "rest-uri": "http://localhost:10002",
+            "ip-mgmt-address": "10.3.1.103",
+            "algorithm": "alg1",
+            "name": "SFG1",
+            "type": "service-function-type:napt44",
+            "sfc-service-function": [
+                {
+                    "name":"napt44-104"
+                },
+                {
+                    "name":"napt44-103-1"
+                }
+            ]
+          }
+        ]
+    }
+
+Verify: get all SFG’s
+^^^^^^^^^^^^^^^^^^^^^
+
+GET -
+http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups
+
+SFC Proof of Transit User Guide
+-------------------------------
+
+Overview
+~~~~~~~~
+
+Early Service Function Chaining (SFC) Proof of Transit implements
+Service Chaining Proof of Transit functionality on capable switches.
+After the creation of a Rendered Service Path (RSP), a user can enable
+SFC Proof of Transit on the selected RSP to put the proof of transit
+into effect.
+
+Common acronyms used in the following sections:
+
+-  SF - Service Function
+
+-  SFF - Service Function Forwarder
+
+-  SFC - Service Function Chain
+
+-  SFP - Service Function Path
+
+-  RSP - Rendered Service Path
+
+-  SFCPOT - Service Function Chain Proof of Transit
+
+SFC Proof of Transit Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When SFC Proof of Transit is initialized, all required listeners are
+registered to handle incoming data. These include ``SfcPotNodeListener``,
+which stores data about all node devices including their mountpoints
+(used here as databrokers), ``SfcPotRSPDataListener``, and
+``RenderedPathListener``. ``RenderedPathListener`` is used to listen for
+RSP changes. ``SfcPotRSPDataListener`` implements RPC services to enable
+or disable SFC Proof of Transit on a particular RSP. When SFC Proof of
+Transit is invoked, RSP listeners and service implementations are set
+up to receive SFCPOT configurations. When a user enables SFCPOT on a
+particular RSP via a POST RPC call, the configuration drives the
+creation of the necessary augmentations to the RSP to effect the SFCPOT
+configuration.
+
+SFC Proof of Transit details
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several deployments use traffic engineering, policy routing, segment
+routing or service function chaining (SFC) to steer packets through a
+specific set of nodes. In certain cases, regulatory obligations or a
+compliance policy require proof that all packets that are supposed to
+follow a specific path are indeed being forwarded across the exact set
+of nodes specified. That is, if a packet flow is supposed to go through
+a series of service functions or network nodes, it has to be proven
+that all packets of the flow actually went through the service chain or
+collection of nodes specified by the policy. In case the packets of a
+flow were not appropriately processed, a proof of transit egress device
+would be required to identify the policy violation and take actions
+corresponding to the policy (e.g. drop or redirect the packet, send an
+alert, etc.).
+
+The SFCPOT approach is based on meta-data which is added to every
+packet. The meta data is updated at every hop and is used to verify
+whether a packet traversed all required nodes. A particular path is
+either described by a set of secret keys, or a set of shares of a single
+secret. Nodes on the path retrieve their individual keys or shares of a
+key (using, for example, Shamir's Secret Sharing scheme) from a
+central controller. The complete key set is only known to the verifier,
+which is typically the ultimate node on a path that requires proof of
+transit. Each node in the path uses its secret or share of the secret to
+update the meta-data of the packets as the packets pass through the
+node. When the verifier receives a packet, it can use its key(s) along
+with the meta-data to validate whether the packet traversed the service
+chain correctly.
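
The share-based approach above can be illustrated with a small, purely
illustrative Python sketch; this is not the actual SFCPOT wire format or
implementation, just the underlying Shamir-style arithmetic. The controller
would hand one share to each hop, every hop folds its Lagrange-weighted
share into the packet meta-data, and the verifier accepts only if the
accumulated value equals the secret:

```python
import random

P = 2**31 - 1  # prime modulus (illustrative only)

def make_shares(secret, n):
    """Split `secret` into n shares of a random degree n-1 polynomial
    f with f(0) = secret; every hop's share is needed to recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(n - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def lagrange_weight(x, xs):
    """Lagrange basis coefficient for point x, evaluated at 0 (mod P)."""
    num = den = 1
    for other in xs:
        if other != x:
            num = num * (-other) % P
            den = den * (x - other) % P
    return num * pow(den, P - 2, P)  # modular inverse via Fermat

def transit(shares):
    """Each hop folds its weighted share into the packet meta-data."""
    xs = [x for x, _ in shares]
    meta = 0
    for x, y in shares:               # one update per hop
        meta = (meta + y * lagrange_weight(x, xs)) % P
    return meta

secret = 123456789                    # known only to the verifier
shares = make_shares(secret, n=4)     # one share per hop on the RSP
assert transit(shares) == secret      # verifier: every hop was visited
```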
+
+SFC Proof of Transit entities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to implement SFC Proof of Transit for a service function chain,
+an RSP is a prerequisite, since it identifies the SFC on which to
+enable SFC PoT. SFC Proof of Transit for a particular RSP is enabled by
+an RPC request to the controller, along with the necessary parameters
+to control some aspects of the SFC Proof of Transit process.
+
+The RPC handler identifies the RSP and generates SFC Proof of Transit
+parameters like secret share, secret etc., and adds the generated SFCPOT
+configuration parameters to SFC main as well as the various SFC hops.
+The last node in the SFC is configured as a verifier node to allow the
+SFC Proof of Transit process to be completed.
+
+The SFCPOT configuration generators and related handling are done by
+``SfcPotAPI``, ``SfcPotConfigGenerator``, ``SfcPotListener``,
+``SfcPotPolyAPI``, ``SfcPotPolyClassAPI`` and ``SfcPotPolyClass``.
+
+Administering SFC Proof of Transit
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use SFC Proof of Transit, at least the following Karaf features must
+be installed:
+
+-  odl-sfc-model
+
+-  odl-sfc-provider
+
+-  odl-sfc-netconf
+
+-  odl-restconf
+
+-  odl-netconf-topology
+
+-  odl-netconf-connector-all
+
+-  odl-sfc-pot
+
+SFC Proof of Transit Tutorial
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This tutorial is a simple example of how to configure Service Function
+Chain Proof of Transit using the SFC POT feature.
+
+Preconditions
+^^^^^^^^^^^^^
+
+To enable a device to handle SFC Proof of Transit, the netconf server
+device is expected to advertise the capability described in
+ioam-scv.yang, present under the src/main/yang folder of the sfc-pot
+feature. It is also expected that netconf notifications are enabled and
+that their support is advertised in the device's capabilities.
+
+It is also expected that the devices are netconf mounted and available
+in the topology-netconf store.
+
+Instructions
+^^^^^^^^^^^^
+
+When SFC Proof of Transit is installed, all netconf nodes in
+topology-netconf are processed and all capable nodes with accessible
+mountpoints are cached.
+
+The first step is to create the required RSP, as usual.
+
+Once the RSP name is available, it is used to send a POST RPC to the
+controller, similar to the following:
+
+::
+
+    POST ./restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path
+
+    {
+      "input": {
+        "sfc-ioam-pot-rsp-name": "rsp1"
+      }
+    }
+
+The following can be used to disable SFC Proof of Transit on an RSP.
+This removes the augmentations, stores back the RSP without the SFCPOT
+features enabled, and also sends a delete configuration down to the
+SFCPOT configuration sub-tree in the nodes.
+
+::
+
+    POST ./restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path
+
+    {
+      "input": {
+        "sfc-ioam-pot-rsp-name": "rsp1"
+      }
+    }
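
The enable and disable RPCs differ only in the operation name, so a small
client-side helper can build either request. The sketch below is
hypothetical (``sfcpot_rpc`` is not part of the feature) and assumes the
default RESTCONF port:

```python
import json

# Hypothetical helper: builds the URL and JSON body for the SFCPOT
# enable/disable RPCs; sending them is left to any HTTP client.
BASE = "http://127.0.0.1:8181/restconf/operations"

def sfcpot_rpc(rsp_name, enable=True):
    op = "enable" if enable else "disable"
    url = "%s/sfc-ioam-nb-pot:%s-sfc-ioam-pot-rendered-path" % (BASE, op)
    body = json.dumps({"input": {"sfc-ioam-pot-rsp-name": rsp_name}})
    return url, body

url, body = sfcpot_rpc("rsp1")
# POST `body` to `url` with Content-Type: application/json to enable SFCPOT.
```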
+
diff --git a/docs/user-guide/snmp-plugin-user-guide.rst b/docs/user-guide/snmp-plugin-user-guide.rst
new file mode 100644 (file)
index 0000000..a32cf1a
--- /dev/null
@@ -0,0 +1,172 @@
+SNMP Plugin User Guide
+======================
+
+Installing Feature
+------------------
+
+The SNMP Plugin can be installed using a single karaf feature:
+**odl-snmp-plugin**
+
+After starting Karaf:
+
+-  Install the feature: **feature:install odl-snmp-plugin**
+
+-  Expose the northbound API: **feature:install odl-restconf**
+
+Northbound APIs
+---------------
+
+There are two exposed northbound APIs: snmp-get & snmp-set
+
+SNMP GET
+~~~~~~~~
+
+Default URL: http://localhost:8181/restconf/operations/snmp:snmp-get
+
+POST Input
+^^^^^^^^^^
+
++----------------+----------------+----------------+----------------+----------------+
+| Field Name     | Type           | Description    | Example        | Required?      |
++================+================+================+================+================+
+| ip-address     | Ipv4 Address   | The IPv4       | 10.86.3.13     | Yes            |
+|                |                | Address of the |                |                |
+|                |                | desired        |                |                |
+|                |                | network node   |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| oid            | String         | The Object     | 1.3.6.1.2.1.1. | Yes            |
+|                |                | Identifier of  | 1              |                |
+|                |                | the desired    |                |                |
+|                |                | MIB            |                |                |
+|                |                | table/object   |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| get-type       | ENUM (GET,     | The type of    | GET-BULK       | Yes            |
+|                | GET-NEXT,      | get request to |                |                |
+|                | GET-BULK,      | send           |                |                |
+|                | GET-WALK)      |                |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| community      | String         | The community  | private        | No. (Default:  |
+|                |                | string to use  |                | public)        |
+|                |                | for the SNMP   |                |                |
+|                |                | request        |                |                |
++----------------+----------------+----------------+----------------+----------------+
+
+**Example.**
+
+::
+
+     {
+         "input": {
+             "ip-address": "10.86.3.13",
+             "oid" : "1.3.6.1.2.1.1.1",
+             "get-type" : "GET-BULK",
+             "community" : "private"
+         }
+     }
+
+POST Output
+^^^^^^^^^^^
+
++--------------------------+--------------------------+--------------------------+
+| Field Name               | Type                     | Description              |
++==========================+==========================+==========================+
+| results                  | List of { "value" :      | The results of the SNMP  |
+|                          | String } pairs           | query                    |
++--------------------------+--------------------------+--------------------------+
+
+**Example.**
+
+::
+
+     {
+         "snmp:results": [
+             {
+                 "value": "Ethernet0/0/0",
+                 "oid": "1.3.6.1.2.1.2.2.1.2.1"
+             },
+             {
+                 "value": "FastEthernet0/0/0",
+                 "oid": "1.3.6.1.2.1.2.2.1.2.2"
+             },
+             {
+                 "value": "GigabitEthernet0/0/0",
+                 "oid": "1.3.6.1.2.1.2.2.1.2.3"
+             }
+         ]
+     }
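
A client will typically want to index such a reply by OID. The following
minimal Python sketch (a hypothetical helper, using the sample reply above)
shows one way to do this:

```python
# Sample snmp-get reply, as returned in the example above.
reply = {
    "snmp:results": [
        {"value": "Ethernet0/0/0", "oid": "1.3.6.1.2.1.2.2.1.2.1"},
        {"value": "FastEthernet0/0/0", "oid": "1.3.6.1.2.1.2.2.1.2.2"},
        {"value": "GigabitEthernet0/0/0", "oid": "1.3.6.1.2.1.2.2.1.2.3"},
    ]
}

def results_to_dict(reply):
    """Index an snmp-get reply by OID for easy lookup."""
    return {r["oid"]: r["value"] for r in reply.get("snmp:results", [])}

table = results_to_dict(reply)
assert table["1.3.6.1.2.1.2.2.1.2.1"] == "Ethernet0/0/0"
```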
+
+SNMP SET
+~~~~~~~~
+
+Default URL: http://localhost:8181/restconf/operations/snmp:snmp-set
+
+POST Input
+^^^^^^^^^^
+
++----------------+----------------+----------------+----------------+----------------+
+| Field Name     | Type           | Description    | Example        | Required?      |
++================+================+================+================+================+
+| ip-address     | Ipv4 Address   | The Ipv4       | 10.86.3.13     | Yes            |
+|                |                | address of the |                |                |
+|                |                | desired        |                |                |
+|                |                | network node   |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| oid            | String         | The Object     | 1.3.6.2.1.1.1  | Yes            |
+|                |                | Identifier of  |                |                |
+|                |                | the desired    |                |                |
+|                |                | MIB object     |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| value          | String         | The value to   | "Hello World"  | Yes            |
+|                |                | set on the     |                |                |
+|                |                | network device |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| community      | String         | The community  | private        | No. (Default:  |
+|                |                | string to use  |                | public)        |
+|                |                | for the SNMP   |                |                |
+|                |                | request        |                |                |
++----------------+----------------+----------------+----------------+----------------+
+
+**Example.**
+
+::
+
+     {
+         "input": {
+             "ip-address": "10.86.3.13",
+             "oid" : "1.3.6.1.2.1.1.1.0",
+             "value" : "Sample description",
+             "community" : "private"
+         }
+     }
+
+POST Output
+^^^^^^^^^^^
+
+On a successful SNMP-SET, no output is presented, just an HTTP status
+of 200.
+
+Errors
+^^^^^^
+
+If any errors happen in the set request, you will be presented with an
+error message in the output.
+
+For example, on a failed set request you may see an error like:
+
+::
+
+     {
+         "errors": {
+             "error": [
+                 {
+                     "error-type": "application",
+                     "error-tag": "operation-failed",
+                     "error-message": "SnmpSET failed with error status: 17, error index: 1. StatusText: Not writable"
+                 }
+             ]
+         }
+     }
+
+which corresponds to Error status 17 in the SNMPv2 RFC:
+https://tools.ietf.org/html/rfc1905.
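
For reference, the full set of SNMPv2 error-status values from RFC 1905 can
be kept in a small lookup table; the table below is transcribed from the
RFC and is illustrative, not part of the plugin:

```python
# SNMPv2 error-status values, per RFC 1905.
SNMP_ERROR_STATUS = {
    0: "noError", 1: "tooBig", 2: "noSuchName", 3: "badValue",
    4: "readOnly", 5: "genErr", 6: "noAccess", 7: "wrongType",
    8: "wrongLength", 9: "wrongEncoding", 10: "wrongValue",
    11: "noCreation", 12: "inconsistentValue", 13: "resourceUnavailable",
    14: "commitFailed", 15: "undoFailed", 16: "authorizationError",
    17: "notWritable", 18: "inconsistentName",
}

assert SNMP_ERROR_STATUS[17] == "notWritable"  # the failure shown above
```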
+
diff --git a/docs/user-guide/snmp4sdn-user-guide.rst b/docs/user-guide/snmp4sdn-user-guide.rst
new file mode 100644 (file)
index 0000000..da5f3dd
--- /dev/null
@@ -0,0 +1,630 @@
+SNMP4SDN User Guide
+===================
+
+Overview
+--------
+
+SNMP4SDN is a southbound plugin that can control off-the-shelf
+commodity Ethernet switches for the purpose of building an SDN from
+Ethernet switches. On Ethernet switches, the forwarding table, VLAN
+table, and ACL are where flow configuration can be installed, and the
+plugin does this via SNMP and CLI. In addition, the plugin provides
+some settings required for Ethernet switches in an SDN, e.g., disabling
+STP and flooding.
+
+.. figure:: ./images/snmp4sdn_in_odl_architecture.jpg
+   :alt: SNMP4SDN as an OpenDaylight southbound plugin
+
+   SNMP4SDN as an OpenDaylight southbound plugin
+
+Troubleshooting
+---------------
+
+Installation Troubleshooting
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Feature installation failure
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When trying to install a feature, if the following failure occurs:
+
+::
+
+    Error executing command: Could not start bundle ...
+    Reason: Missing Constraint: Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.7))"
+
+A workaround: exit Karaf, edit the file
+<karaf\_directory>/etc/config.properties, and remove the line
+*${services-${karaf.framework}}* as well as the ", \\" at the end of
+the line above it.
+
+Runtime Troubleshooting
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Problem starting SNMP Trap Interface
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+It is possible to get the following exception during controller startup.
+(The error would not be printed in Karaf console, one may see it in
+<karaf\_directory>/data/log/karaf.log)
+
+::
+
+    2014-01-31 15:00:44.688 CET [fileinstall-./plugins] WARN  o.o.snmp4sdn.internal.SNMPListener - Problem starting SNMP Trap Interface: {}
+     java.net.BindException: Permission denied
+            at java.net.PlainDatagramSocketImpl.bind0(Native Method) ~[na:1.7.0_51]
+            at java.net.AbstractPlainDatagramSocketImpl.bind(AbstractPlainDatagramSocketImpl.java:95) ~[na:1.7.0_51]
+            at java.net.DatagramSocket.bind(DatagramSocket.java:376) ~[na:1.7.0_51]
+            at java.net.DatagramSocket.<init>(DatagramSocket.java:231) ~[na:1.7.0_51]
+            at java.net.DatagramSocket.<init>(DatagramSocket.java:284) ~[na:1.7.0_51]
+            at java.net.DatagramSocket.<init>(DatagramSocket.java:256) ~[na:1.7.0_51]
+            at org.snmpj.SNMPTrapReceiverInterface.<init>(SNMPTrapReceiverInterface.java:126) ~[org.snmpj-1.4.3.jar:na]
+            at org.snmpj.SNMPTrapReceiverInterface.<init>(SNMPTrapReceiverInterface.java:99) ~[org.snmpj-1.4.3.jar:na]
+            at org.opendaylight.snmp4sdn.internal.SNMPListener.<init>(SNMPListener.java:75) ~[bundlefile:na]
+            at org.opendaylight.snmp4sdn.core.internal.Controller.start(Controller.java:174) [bundlefile:na]
+    ...
+
+This indicates that the controller is being run as a user which does
+not have sufficient OS privileges to bind the SNMP trap port (162/UDP).
+
+Switch list file missing
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The SNMP4SDN Plugin needs a switch list file, which is necessary for
+topology discovery and should be provided by the administrator (so
+please prepare one before using the SNMP4SDN Plugin for the first time;
+here is the
+`sample <https://wiki.opendaylight.org/view/SNMP4SDN:switch_list_file>`__).
+The default file path is /etc/snmp4sdn\_swdb.csv. The SNMP4SDN Plugin
+automatically loads this file and starts topology discovery. If this
+file is not there, a message like the following will appear:
+
+::
+
+    2016-02-02 04:21:52,476 | INFO| Event Dispatcher | CmethUtil                        | 466 - org.opendaylight.snmp4sdn - 0.3.0.SNAPSHOT | CmethUtil.readDB() err: {}
+    java.io.FileNotFoundException: /etc/snmp4sdn_swdb.csv (No such file or directory)
+        at java.io.FileInputStream.open0(Native Method)[:1.8.0_65]
+        at java.io.FileInputStream.open(FileInputStream.java:195)[:1.8.0_65]
+        at java.io.FileInputStream.<init>(FileInputStream.java:138)[:1.8.0_65]
+        at java.io.FileInputStream.<init>(FileInputStream.java:93)[:1.8.0_65]
+        at java.io.FileReader.<init>(FileReader.java:58)[:1.8.0_65]
+        at org.opendaylight.snmp4sdn.internal.util.CmethUtil.readDB(CmethUtil.java:66)
+        at org.opendaylight.snmp4sdn.internal.util.CmethUtil.<init>(CmethUtil.java:43)
+    ...
+
+Configuration
+-------------
+
+Just follow the steps:
+
+1. Prepare the switch list database file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A sample is
+`here <https://wiki.opendaylight.org/view/SNMP4SDN:switch_list_file>`__,
+and we suggest to save it as */etc/snmp4sdn\_swdb.csv* so that SNMP4SDN
+Plugin can automatically load this file.
+
+.. note::
+
+    The first line is title and should not be removed.
+
+2. Prepare the vendor-specific configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A sample is
+`here <https://wiki.opendaylight.org/view/SNMP4SDN:snmp4sdn_VendorSpecificSwitchConfig_file>`__,
+and we suggest to save it as
+*/etc/snmp4sdn\_VendorSpecificSwitchConfig.xml* so that SNMP4SDN Plugin
+can automatically load this file.
+
+3. Install SNMP4SDN Plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If using SNMP4SDN Plugin provided in OpenDaylight release, just do the
+following:
+
+Launch Karaf in Linux console:
+
+::
+
+    cd <Beryllium_controller_directory>/bin
+    (for example, cd distribution-karaf-x.x.x-Beryllium/bin)
+
+::
+
+    ./karaf
+
+Then in Karaf console, execute:
+
+::
+
+    feature:install odl-snmp4sdn-all
+
+4. Load switch list
+~~~~~~~~~~~~~~~~~~~
+
+For initialization, we need to feed the SNMP4SDN Plugin the switch
+list. The SNMP4SDN Plugin automatically tries to load the switch list
+from /etc/snmp4sdn\_swdb.csv if it exists; if so, this step can be
+skipped.
+In Karaf console, execute:
+
+::
+
+    snmp4sdn:ReadDB <switch_list_path>
+    (For example, snmp4sdn:ReadDB /etc/snmp4sdn_swdb.csv)
+    (in Windows OS, For example, snmp4sdn:ReadDB D://snmp4sdn_swdb.csv)
+
+A sample is
+`here <https://wiki.opendaylight.org/view/SNMP4SDN:switch_list_file>`__,
+and we suggest to save it as */etc/snmp4sdn\_swdb.csv* so that SNMP4SDN
+Plugin can automatically load this file.
+
+.. note::
+
+    The first line is title and should not be removed.
+
+5. Show switch list
+~~~~~~~~~~~~~~~~~~~
+
+::
+
+    snmp4sdn:PrintDB
+
+Tutorial
+--------
+
+Topology Service
+~~~~~~~~~~~~~~~~
+
+Execute topology discovery
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The SNMP4SDN Plugin automatically executes topology discovery on
+startup. One may use the following commands to invoke topology
+discovery manually. Note that it may take a few seconds to complete.
+
+.. note::
+
+    Currently, one needs to manually execute *snmp4sdn:TopoDiscover*
+    first (just once), then later the automatic topology discovery can
+    be successful. If switches change (switch added or removed),
+    *snmp4sdn:TopoDiscover* is also required. A future version will fix
+    it to eliminate these requirements.
+
+::
+
+    snmp4sdn:TopoDiscover
+
+If one would like to discover all inventory (i.e. switches and their
+ports) but not edges, just execute "TopoDiscoverSwitches":
+
+::
+
+    snmp4sdn:TopoDiscoverSwitches
+
+If one would like to discover only the edges but not the inventory,
+just execute "TopoDiscoverEdges":
+
+::
+
+    snmp4sdn:TopoDiscoverEdges
+
+You can also trigger topology discovery via the REST API by using
+``curl`` from the Linux console (or any other REST client):
+
+::
+
+    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:rediscover
+
+You can change the periodic topology discovery interval via a REST API:
+
+::
+
+    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:set-discovery-interval -d "{"input":{"interval-second":'<interval_time>'}}"
+    For example, set the interval as 15 seconds:
+    curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:set-discovery-interval -d "{"input":{"interval-second":'15'}}"
+
+Show the topology
+^^^^^^^^^^^^^^^^^
+
+The SNMP4SDN Plugin supports showing the topology via REST APIs:
+
+-  Get topology
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-edge-list
+
+-  Get switch list
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-node-list
+
+-  Get switches' ports list
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-node-connector-list
+
+-  The three commands above only retrieve the latest topology discovery
+   result; they do not trigger the SNMP4SDN Plugin to perform topology
+   discovery.
+
+-  To trigger the SNMP4SDN Plugin to perform topology discovery, see
+   the aforementioned *Execute topology discovery* section.
+
+Flow configuration
+~~~~~~~~~~~~~~~~~~
+
+FDB configuration
+^^^^^^^^^^^^^^^^^
+
+SNMP4SDN supports adding entries to the FDB table via REST APIs:
+
+-  Get FDB table
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:get-fdb-table -d "{input:{"node-id":<switch-mac-address-in-number>}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:get-fdb-table -d "{input:{"node-id":158969157063648}}"
+
+-  Get FDB table entry
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:get-fdb-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:get-fdb-entry -d "{input:{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":158969157063648}}"
+
+-  Set FDB table entry
+
+   (Note: invalid values include (1) a non-unicast MAC and (2) a port
+   that is not in the VLAN)
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:set-fdb-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>, "port":<port-in-number>, "type":'<type>'}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:set-fdb-entry -d "{input:{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":187649984473770, "port":23, "type":'MGMT'}}"
+
+-  Delete FDB table entry
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:del-fdb-entry -d "{input:{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:del-fdb-entry -d "{input:{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":187649984473770}}"
+
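Note that the ``node-id`` in these calls is the switch's MAC address
written as a decimal number (for example, 158969157063648 is
90:94:E4:23:13:E0). The following helpers are hypothetical, not part of
SNMP4SDN, and simply convert between the two forms:

```python
def mac_to_node_id(mac):
    """Convert a colon-separated MAC address to the decimal node-id."""
    return int(mac.replace(":", ""), 16)

def node_id_to_mac(node_id):
    """Convert a decimal node-id back to colon-separated MAC form."""
    h = "%012x" % node_id
    return ":".join(h[i:i + 2] for i in range(0, 12, 2))

assert mac_to_node_id("90:94:e4:23:13:e0") == 158969157063648
```
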
+VLAN configuration
+^^^^^^^^^^^^^^^^^^
+
+SNMP4SDN supports adding entries to the VLAN table via REST APIs:
+
+-  Get VLAN table
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:get-vlan-table -d "{input:{node-id:<switch-mac-address-in-number>}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:get-vlan-table -d "{input:{node-id:158969157063648}}"
+
+-  Add VLAN
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:add-vlan -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "vlan-name":'<vlan-name>'}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:add-vlan -d "{"input":{"node-id":158969157063648, "vlan-id":123, "vlan-name":'v123'}}"
+
+-  Delete VLAN
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:delete-vlan -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:delete-vlan -d "{"input":{"node-id":158969157063648, "vlan-id":123}}"
+
+-  Add VLAN and set ports
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:add-vlan-and-set-ports -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "vlan-name":'<vlan-name>', "tagged-port-list":'<tagged-ports-separated-by-comma>', "untagged-port-list":'<untagged-ports-separated-by-comma>'}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:add-vlan-and-set-ports -d "{"input":{"node-id":158969157063648, "vlan-id":123, "vlan-name":'v123', "tagged-port-list":'1,2,3', "untagged-port-list":'4,5,6'}}"
+
+-  Set VLAN ports
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:set-vlan-ports -d "{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "tagged-port-list":'<tagged-ports-separated-by-comma>', "untagged-port-list":'<untagged-ports-separated-by-comma>'}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:set-vlan-ports -d "{"input":{"node-id":"158969157063648", "vlan-id":"123", "tagged-port-list":'4,5', "untagged-port-list":'2,3'}}"
+
+ACL configuration
+^^^^^^^^^^^^^^^^^
+
+SNMP4SDN supports adding flows to the ACL table via REST APIs. However,
+this is so far only implemented for the D-Link DGS-3120 switch.
+
+ACL configuration via CLI is vendor-specific, and SNMP4SDN will support
+configuration with vendor-specific CLI in future release.
+
+To do ACL configuration using the REST APIs, use commands like the
+following:
+
+-  Clear ACL table
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:clear-acl-table -d "{"input":{"nodeId":<switch-mac-address-in-number>}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:clear-acl-table -d "{"input":{"nodeId":158969157063648}}"
+
+-  Create ACL profile (IP layer)
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:create-acl-profile -d "{input:{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":'<profile_name>',"acl-layer":'IP',"vlan-mask":<vlan_mask_in_number>,"src-ip-mask":'<src_ip_mask>',"dst-ip-mask":"<destination_ip_mask>"}}"
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:create-acl-profile -d "{input:{"nodeId":158969157063648,"profile-id":1,"profile-name":'profile_1',"acl-layer":'IP',"vlan-mask":1,"src-ip-mask":'255.255.0.0',"dst-ip-mask":'255.255.255.255'}}"
+
+-  Create ACL profile (MAC layer)
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:create-acl-profile -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":"<profile_name>","acl-layer":"ETHERNET","vlan-mask":<vlan_mask_in_number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:create-acl-profile -d '{"input":{"nodeId":158969157063648,"profile-id":2,"profile-name":"profile_2","acl-layer":"ETHERNET","vlan-mask":4095}}'
+
+-  Delete ACL profile
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":158969157063648,"profile-id":1}}'
+
+   Alternatively, delete the profile by its name:
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-name":"<profile_name>"}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":158969157063648,"profile-name":"profile_2"}}'
+
+-  Set ACL rule
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:set-acl-rule -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":"<profile_name>","rule-id":<rule_id_in_number>,"port-list":[<port_number>,<port_number>,...],"acl-layer":"<acl_layer>","vlan-id":<vlan_id_in_number>,"src-ip":"<src_ip_address>","dst-ip":"<dst_ip_address>","acl-action":"<acl_action>"}}'
+       (<acl_layer>: IP or ETHERNET)
+       (<acl_action>: PERMIT to permit, DENY to deny)
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:set-acl-rule -d '{"input":{"nodeId":158969157063648,"profile-id":1,"profile-name":"profile_1","rule-id":1,"port-list":[1,2,3],"acl-layer":"IP","vlan-id":2,"src-ip":"1.1.1.1","dst-ip":"2.2.2.2","acl-action":"PERMIT"}}'
+
+-  Delete ACL rule
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-rule -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":"<profile_name>","rule-id":<rule_id_in_number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-rule -d '{"input":{"nodeId":158969157063648,"profile-id":1,"profile-name":"profile_1","rule-id":1}}'
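+
+When scripting many of these calls, it is easier to build the JSON body
+programmatically than to hand-quote it. A minimal Python sketch using
+only the standard library (the URL layout and field names follow the
+examples above; authentication and error handling are omitted):
+
+::
+
+    import json
+    import urllib.request
+
+    def acl_rpc_request(operation, payload, host="localhost"):
+        """Build (but do not send) a POST request for an acl:* RPC."""
+        url = "http://%s:8181/restconf/operations/acl:%s" % (host, operation)
+        body = json.dumps({"input": payload}).encode()
+        req = urllib.request.Request(url, data=body, method="POST")
+        req.add_header("Content-Type", "application/json")
+        req.add_header("Accept", "application/json")
+        return req
+
+    # Send with urllib.request.urlopen(...) once credentials are added,
+    # e.g. via an HTTPBasicAuthHandler for the admin user.
+    req = acl_rpc_request("clear-acl-table", {"nodeId": 158969157063648})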
+
+Special configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+SNMP4SDN supports setting the following special configurations via REST
+API:
+
+-  Set STP port state
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:set-stp-port-state -d '{"input":{"node-id":<switch-mac-address-in-number>, "port":<port_number>, "enable":<true_or_false>}}'
+       (true: enable, false: disable)
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:set-stp-port-state -d '{"input":{"node-id":158969157063648, "port":2, "enable":false}}'
+
+-  Get STP port state
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-stp-port-state -d '{"input":{"node-id":<switch-mac-address-in-number>, "port":<port_number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-stp-port-state -d '{"input":{"node-id":158969157063648, "port":2}}'
+
+-  Get STP port root
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-stp-port-root -d '{"input":{"node-id":<switch-mac-address-in-number>, "port":<port_number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-stp-port-root -d '{"input":{"node-id":158969157063648, "port":2}}'
+
+-  Enable STP
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:enable-stp -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:enable-stp -d '{"input":{"node-id":158969157063648}}'
+
+-  Disable STP
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:disable-stp -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:disable-stp -d '{"input":{"node-id":158969157063648}}'
+
+-  Get ARP table
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-arp-table -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-arp-table -d '{"input":{"node-id":158969157063648}}'
+
+-  Set ARP entry
+
+   (Note: give the IP address with its subnet prefix)
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:set-arp-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "ip-address":"<ip_address>", "mac-address":<mac_address_in_number>}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:set-arp-entry -d '{"input":{"node-id":158969157063648, "ip-address":"10.217.9.9", "mac-address":1}}'
+
+-  Get ARP entry
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-arp-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "ip-address":"<ip_address>"}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-arp-entry -d '{"input":{"node-id":158969157063648, "ip-address":"10.217.9.9"}}'
+
+-  Delete ARP entry
+
+   ::
+
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:delete-arp-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "ip-address":"<ip_address>"}}'
+
+       For example:
+       curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:delete-arp-entry -d '{"input":{"node-id":158969157063648, "ip-address":"10.217.9.9"}}'
+
+Using Postman to invoke REST API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Besides using the curl tool to invoke the REST APIs, as in the examples
+above, one can also use a GUI tool such as Postman for a better display
+of the data.
+
+-  Install Postman: `Install Postman in the Chrome
+   browser <https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en>`__
+
+-  In the Chrome address bar, enter
+
+   ::
+
+       chrome://apps/
+
+-  Click on Postman.
+
+Example: Get VLAN table using Postman
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+As shown in the screenshot below, fill in the required fields.
+
+::
+
+    URL:
+    http://<controller_ip_address>:8181/restconf/operations/vlan:get-vlan-table
+
+    Accept header:
+    application/json
+
+    Content-type:
+    application/json
+
+    Body:
+    {"input":{"node-id":<node_id>}}
+    For example:
+    {"input":{"node-id":158969157063648}}
+
+.. figure:: ./images/snmp4sdn_getvlantable_postman.jpg
+   :alt: Example: Get VLAN table using Postman
+
+   Example: Get VLAN table using Postman
+
+Multi-vendor support
+--------------------
+
+The vendor-specific configurations supported so far:
+
+-  Add VLAN and set ports
+
+-  (More functions are TBD)
+
+The SNMP4SDN Plugin examines whether a configuration is described in
+the vendor-specific configuration file. If it is, that description is
+adopted; otherwise the default configuration is used. For example,
+adding a VLAN and setting its ports is supported via the standard SNMP
+MIB. However, there are special cases: certain Accton switches require
+the VLAN to be added first before its ports can be set. Such behavior
+can be described in the vendor-specific configuration file.
+
+A sample vendor-specific configuration file is available
+`here <https://wiki.opendaylight.org/view/SNMP4SDN:snmp4sdn_VendorSpecificSwitchConfig_file>`__.
+We suggest saving it as
+*/etc/snmp4sdn\_VendorSpecificSwitchConfig.xml* so that the SNMP4SDN
+Plugin can load it automatically.
+
+Help
+----
+
+-  `SNMP4SDN Wiki <https://wiki.opendaylight.org/view/SNMP4SDN:Main>`__
+
+-  SNMP4SDN Mailing Lists:
+   (`user <https://lists.opendaylight.org/mailman/listinfo/snmp4sdn-users>`__,
+   `developer <https://lists.opendaylight.org/mailman/listinfo/snmp4sdn-dev>`__)
+
+-  Latest
+   `troubleshooting <https://wiki.opendaylight.org/view/SNMP4SDN:User_Guide#Troubleshooting>`__
+   in Wiki
+
diff --git a/docs/user-guide/sxp-user-guide.rst b/docs/user-guide/sxp-user-guide.rst
new file mode 100644 (file)
index 0000000..832acaf
--- /dev/null
@@ -0,0 +1,439 @@
+SXP User Guide
+==============
+
+Overview
+--------
+
+The SXP (Source-Group Tag eXchange Protocol) project is an effort to
+enhance the OpenDaylight platform with IP-SGT (IP Address to Source
+Group Tag) bindings that can be learned from connected SXP-aware
+network nodes. The current implementation supports SXP protocol version
+4 according to the Smith, Kandula SXP `IETF
+draft <https://tools.ietf.org/html/draft-smith-kandula-sxp-04>`__, as
+well as grouping of peers and creating filters based on ACL/Prefix-list
+syntax for filtering outbound and inbound IP-SGT bindings. All legacy
+protocol versions (1-3) are supported as well. Additionally, version 4
+adds a bidirectional connection type as an extension of the
+unidirectional one.
+
+SXP Architecture
+----------------
+
+The SXP Server manages all connected clients in separate threads, and a
+common SXP protocol agreement is used between connected peers. Each SXP
+network peer is modelled by its pertaining class; e.g., the SXP Server
+represents the SXP Speaker and the SXP Listener the client. The server
+program creates a ServerSocket object on a specified port and waits
+until a client starts up and requests a connection on the IP address
+and port of the server. The client program opens a Socket that is
+connected to the server running on the specified host IP address and
+port.
+
+The SXP Listener maintains the connection with its speaker peer. From
+an opened channel pipeline, all incoming SXP messages are processed by
+various handlers. Each message must be decoded, parsed, and validated.
+
+The SXP Speaker is a counterpart to the SXP Listener. It maintains a
+connection with its listener peer and sends composed messages.
+
+The SXP Binding Handler extracts the IP-SGT binding from a message and
+pulls it into the SXP-Database. If an error is detected during the
+IP-SGT extraction, an appropriate error code and sub-code is selected
+and an error message is sent back to the connected peer. All transitive
+messages are routed directly to the output queue of SXP Binding
+Dispatcher.
+
+The SXP Binding Dispatcher is a selector that decides how much data
+from the SXP database is sent and when. It is responsible for composing
+message content based on the maximum message length.
+
+The SXP Binding Filters handle filtering of outgoing and incoming
+IP-SGT bindings in the manner of BGP filtering, using ACL and Prefix
+List syntax for specifying filters, or based on Peer-sequence length.
+
+The SXP Domains feature isolates SXP peers and the bindings learned
+between them; exchange of bindings across SXP domains is still possible
+through ACL, Prefix List, or Peer-Sequence filters.
+
+Configuring SXP
+---------------
+
+The OpenDaylight Karaf distribution comes pre-configured with a
+baseline SXP configuration. Configuration of SXP nodes is also possible
+via NETCONF.
+
+-  **22-sxp-controller-one-node.xml** (defines the basic parameters)
+
+Administering or Managing SXP
+-----------------------------
+
+By RPC (the response is an XML document containing the requested data
+or the operation status):
+
+-  Get Connections POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:get-connections
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <domain-name>global</domain-name>
+     <requested-node>0.0.0.100</requested-node>
+    </input>
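+
+The RPC URLs in this section are invoked with POST, an XML body, and
+the ``application/xml`` content type. A minimal Python sketch that
+prepares such a call (the admin/admin credentials are an assumption,
+the default for an out-of-the-box controller; adjust for your
+deployment):
+
+::
+
+    import base64
+    import urllib.request
+
+    def sxp_rpc_request(operation, body_xml, host="127.0.0.1"):
+        """Build (but do not send) a POST request for an sxp-controller RPC."""
+        url = "http://%s:8181/restconf/operations/sxp-controller:%s" % (host, operation)
+        req = urllib.request.Request(url, data=body_xml.encode(), method="POST")
+        req.add_header("Content-Type", "application/xml")
+        # admin:admin is assumed here -- use your controller's credentials.
+        token = base64.b64encode(b"admin:admin").decode()
+        req.add_header("Authorization", "Basic " + token)
+        return req
+
+Pass the XML bodies shown below as the ``body_xml`` argument and send
+the request with ``urllib.request.urlopen``.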
+
+-  Add Connection POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:add-connection
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>0.0.0.100</requested-node>
+     <domain-name>global</domain-name>
+     <connections>
+      <connection>
+       <peer-address>172.20.161.50</peer-address>
+       <tcp-port>64999</tcp-port>
+       <!-- Password setup: default | none leave empty -->
+       <password>default</password>
+       <!-- Mode: speaker/listener/both -->
+       <mode>speaker</mode>
+       <version>version4</version>
+       <description>Connection to ASR1K</description>
+       <!-- Timers setup: 0 to disable specific timer usability, the default value will be used -->
+       <connection-timers>
+        <!-- Speaker -->
+        <hold-time-min-acceptable>45</hold-time-min-acceptable>
+        <keep-alive-time>30</keep-alive-time>
+       </connection-timers>
+      </connection>
+      <connection>
+       <peer-address>172.20.161.178</peer-address>
+       <tcp-port>64999</tcp-port>
+       <!-- Password setup: default | none leave empty-->
+       <password>default</password>
+       <!-- Mode: speaker/listener/both -->
+       <mode>listener</mode>
+       <version>version4</version>
+       <description>Connection to ISR</description>
+       <!-- Timers setup: 0 to disable specific timer usability, the default value will be used -->
+       <connection-timers>
+        <!-- Listener -->
+        <reconciliation-time>120</reconciliation-time>
+        <hold-time>90</hold-time>
+        <hold-time-min>90</hold-time-min>
+        <hold-time-max>180</hold-time-max>
+       </connection-timers>
+      </connection>
+     </connections>
+    </input>
+
+-  Delete Connection POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-connection
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>0.0.0.100</requested-node>
+     <domain-name>global</domain-name>
+     <peer-address>172.20.161.50</peer-address>
+    </input>
+
+-  Add Binding Entry POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:add-entry
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>0.0.0.100</requested-node>
+     <domain-name>global</domain-name>
+     <ip-prefix>192.168.2.1/32</ip-prefix>
+     <sgt>20</sgt>
+    </input>
+
+-  Update Binding Entry POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:update-entry
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>0.0.0.100</requested-node>
+     <domain-name>global</domain-name>
+     <original-binding>
+      <ip-prefix>192.168.2.1/32</ip-prefix>
+      <sgt>20</sgt>
+     </original-binding>
+     <new-binding>
+      <ip-prefix>192.168.3.1/32</ip-prefix>
+      <sgt>30</sgt>
+     </new-binding>
+    </input>
+
+-  Delete Binding Entry POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-entry
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>0.0.0.100</requested-node>
+     <domain-name>global</domain-name>
+     <ip-prefix>192.168.3.1/32</ip-prefix>
+     <sgt>30</sgt>
+    </input>
+
+-  Get Node Bindings
+
+   This RPC gets the bindings of a particular device. An SXP-aware
+   node is identified by a unique Node-ID. If a user requests bindings
+   for a Speaker 20.0.0.2, the RPC searches the locally learnt SXP data
+   in the SXP database for a path containing the 20.0.0.2 Node-ID and
+   replies with the associated bindings. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:get-node-bindings
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>20.0.0.2</requested-node>
+     <bindings-range>all</bindings-range>
+     <domain-name>global</domain-name>
+    </input>
+
+-  Get Binding SGTs POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:get-binding-sgts
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>0.0.0.100</requested-node>
+     <domain-name>global</domain-name>
+     <ip-prefix>192.168.12.2/32</ip-prefix>
+    </input>
+
+-  Add PeerGroup with or without filters to node. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:add-peer-group
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>127.0.0.1</requested-node>
+     <sxp-peer-group>
+      <name>TEST</name>
+      <sxp-peers>
+      </sxp-peers>
+      <sxp-filter>
+       <filter-type>outbound</filter-type>
+       <acl-entry>
+        <entry-type>deny</entry-type>
+        <entry-seq>1</entry-seq>
+        <sgt-start>1</sgt-start>
+        <sgt-end>100</sgt-end>
+       </acl-entry>
+       <acl-entry>
+        <entry-type>permit</entry-type>
+        <entry-seq>45</entry-seq>
+        <matches>1</matches>
+        <matches>3</matches>
+        <matches>5</matches>
+       </acl-entry>
+      </sxp-filter>
+     </sxp-peer-group>
+    </input>
+
+-  Delete PeerGroup with peer-group-name from node requested-node. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-peer-group
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>127.0.0.1</requested-node>
+     <peer-group-name>TEST</peer-group-name>
+    </input>
+
+-  Get PeerGroup with peer-group-name from node requested-node. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:get-peer-group
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>127.0.0.1</requested-node>
+     <peer-group-name>TEST</peer-group-name>
+    </input>
+
+-  Add Filter to peer group on node requested-node. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:add-filter
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>127.0.0.1</requested-node>
+     <peer-group-name>TEST</peer-group-name>
+     <sxp-filter>
+      <filter-type>outbound</filter-type>
+      <acl-entry>
+       <entry-type>deny</entry-type>
+       <entry-seq>1</entry-seq>
+       <sgt-start>1</sgt-start>
+       <sgt-end>100</sgt-end>
+      </acl-entry>
+      <acl-entry>
+       <entry-type>permit</entry-type>
+       <entry-seq>45</entry-seq>
+       <matches>1</matches>
+       <matches>3</matches>
+       <matches>5</matches>
+      </acl-entry>
+     </sxp-filter>
+    </input>
+
+-  Delete Filter from peer group on node requested-node. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-filter
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>127.0.0.1</requested-node>
+     <peer-group-name>TEST</peer-group-name>
+     <filter-type>outbound</filter-type>
+    </input>
+
+-  Update Filter of the same type in peer group on node requested-node.
+   POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:update-filter
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <requested-node>127.0.0.1</requested-node>
+     <peer-group-name>TEST</peer-group-name>
+     <sxp-filter>
+      <filter-type>outbound</filter-type>
+      <acl-entry>
+       <entry-type>deny</entry-type>
+       <entry-seq>1</entry-seq>
+       <sgt-start>1</sgt-start>
+       <sgt-end>100</sgt-end>
+      </acl-entry>
+      <acl-entry>
+       <entry-type>permit</entry-type>
+       <entry-seq>45</entry-seq>
+       <matches>1</matches>
+       <matches>3</matches>
+       <matches>5</matches>
+      </acl-entry>
+     </sxp-filter>
+    </input>
+
+-  Add new SXP aware Node POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:add-node
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+        <node-id>1.1.1.1</node-id>
+        <source-ip>0.0.0.0</source-ip>
+        <timers>
+            <retry-open-time>5</retry-open-time>
+            <hold-time-min-acceptable>120</hold-time-min-acceptable>
+            <delete-hold-down-time>120</delete-hold-down-time>
+            <hold-time-min>90</hold-time-min>
+            <reconciliation-time>120</reconciliation-time>
+            <hold-time>90</hold-time>
+            <hold-time-max>180</hold-time-max>
+            <keep-alive-time>30</keep-alive-time>
+        </timers>
+        <mapping-expanded>150</mapping-expanded>
+        <security>
+            <password>password</password>
+        </security>
+        <tcp-port>64999</tcp-port>
+        <version>version4</version>
+        <description>ODL SXP Controller</description>
+        <master-database></master-database>
+    </input>
+
+-  Delete SXP aware node POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-node
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <node-id>1.1.1.1</node-id>
+    </input>
+
+-  Add SXP Domain on the specified node. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:add-domain
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+      <node-id>1.1.1.1</node-id>
+      <domain-name>global</domain-name>
+    </input>
+
+-  Delete SXP Domain on the specified node. POST
+   http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-domain
+
+::
+
+    <input xmlns="urn:opendaylight:sxp:controller">
+     <node-id>1.1.1.1</node-id>
+     <domain-name>global</domain-name>
+    </input>
+
+Use cases for SXP
+~~~~~~~~~~~~~~~~~
+
+Cisco has a wide installed base of network devices supporting SXP. By
+including SXP in OpenDaylight, the binding of policy groups to IP
+addresses can be made available for possible further processing to a
+wide range of devices, and applications running on OpenDaylight. The
+range of applications that would be enabled is extensive. Here are just
+a few of them:
+
+OpenDaylight-based applications can take advantage of the IP-SGT
+binding information. For example, an operator can define access control
+in terms of policy groups, while OpenDaylight configures access control
+lists on network elements using IP addresses, i.e., existing
+technology.
+
+Interoperability between different vendors: vendors have different
+policy systems, and knowing the IP-SGT bindings for Cisco devices makes
+it possible to maintain policy groups across Cisco and other vendors.
+
+OpenDaylight can aggregate the binding information from many devices and
+communicate it to a network element. For example, a firewall can use the
+IP-SGT binding information to know how to handle IPs based on the
+group-based ACLs it has set. But to do this with SXP alone, the firewall
+has to maintain a large number of network connections to get the binding
+information. This incurs heavy overhead costs to maintain all of the SXP
+peering and protocol information. OpenDaylight can aggregate the
+IP-group information so that the firewall need only connect to
+OpenDaylight. By moving the information flow outside of the network
+elements to a centralized position, we reduce the overhead of the CPU
+consumption on the enforcement element. This is a huge saving: the
+enforcement point only has to make one connection rather than
+thousands, so it can concentrate on its primary job of forwarding and
+enforcing.
+
+OpenDaylight can relay the binding information from one network element
+to others. Changes in group membership can be propagated more readily
+through a centralized model. For example, in a security application a
+particular host (e.g., user or IP Address) may be found to be acting
+suspiciously or violating established security policies. The defined
+response is to put the host into a different source group for
+remediation actions such as a lower quality of service, restricted
+access to critical servers, or special routing conditions to ensure
+deeper security enforcement (e.g., redirecting the host’s traffic
+through an IPS with very restrictive policies). Updated group membership
+for this host needs to be communicated to multiple network elements as
+soon as possible; a very efficient and effective method of propagation
+can be performed using OpenDaylight as a centralized point for relaying
+the information.
+
+OpenDaylight can create filters for exporting and receiving IP-SGT
+bindings on specific peer groups, enabling more sophisticated
+maintenance of policy groups.
+
+Although the IP-SGT binding is only one specific piece of information,
+and although SXP is implemented widely in a single vendor’s equipment,
+giving OpenDaylight the ability to process and distribute the bindings
+is a specific, immediately useful application of policy groups. It
+would go a long way toward developing the usefulness of both
+OpenDaylight and policy groups.
+
diff --git a/docs/user-guide/tsdr-user-guide.rst b/docs/user-guide/tsdr-user-guide.rst
new file mode 100644 (file)
index 0000000..adf3973
--- /dev/null
@@ -0,0 +1,617 @@
+TSDR User Guide
+===============
+
+This document describes how to use HSQLDB, HBase, and Cassandra data
+stores to capture time series data using Time Series Data Repository
+(TSDR) features in OpenDaylight. This document contains configuration,
+administration, management, usage, and troubleshooting sections for the
+features.
+
+Overview
+--------
+
+The Time Series Data Repository (TSDR) project in OpenDaylight (ODL)
+creates a framework for collecting, storing, querying, and maintaining
+time series data. TSDR provides the framework for plugging in proper
+data collectors to collect various time series data and store the data
+into TSDR Data Stores. With a common data model and generic TSDR data
+persistence APIs, the user can choose various data stores to be plugged
+into the TSDR persistence framework. Currently, three types of data
+stores are supported: HSQLDB relational database, HBase NoSQL database,
+and Cassandra NoSQL database.
+
+With the capabilities of data collection, storage, query, aggregation,
+and purging provided by TSDR, network administrators can leverage
+various data-driven applications built on top of TSDR for security
+risk detection, performance analysis, operational configuration
+optimization, traffic engineering, and network analytics with automated
+intelligence.
+
+TSDR Architecture
+-----------------
+
+TSDR has the following major components:
+
+-  Data Collection Service
+
+-  Data Storage Service
+
+-  TSDR Persistence Layer with data stores as plugins
+
+-  TSDR Data Stores
+
+-  Data Query Service
+
+-  Grafana integration for time series data visualization
+
+-  Data Aggregation Service
+
+-  Data Purging Service
+
+The Data Collection Service handles the collection of time series data
+into TSDR and hands it over to the Data Storage Service. The Data
+Storage Service stores the data into TSDR through the TSDR Persistence
+Layer. The TSDR Persistence Layer provides generic Service APIs allowing
+various data stores to be plugged in. The Data Aggregation Service
+aggregates fine-grained raw time series data into coarse-grained
+roll-up data to control the size of the data. The Data Purging Service
+periodically purges both fine-grained raw data and coarse-grained
+aggregated data according to user-defined schedules.
+
+We have implemented the Data Collection Service, Data Storage Service,
+TSDR Persistence Layer, TSDR HSQLDB Data Store, TSDR HBase Data Store,
+and TSDR Cassandra Data Store. Among these services and components,
+time series data is communicated using a common TSDR data model, which
+is designed and implemented to abstract the commonalities of time
+series data. With these functions, TSDR is able to collect data from
+the data sources and store it into one of the TSDR data stores: the
+HSQLDB Data Store, the HBase Data Store, or the Cassandra Data Store.
+Besides a simple query command in the Karaf console to retrieve data
+from the TSDR data stores, we also provide a Data Query Service that
+lets the user query the data from the data stores via a REST API.
+Moreover, the user can use Grafana, a time series visualization tool,
+to view the data stored in TSDR in various charting formats.
+
+Configuring TSDR Data Stores
+----------------------------
+
+To Configure HSQLDB Data Store
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The HSQLDB-based storage files are stored automatically in the <karaf
+install folder>/tsdr/ directory. To change the default storage
+location, edit the configuration file found in the <karaf install
+folder>/etc directory. The file name is
+org.ops4j.datasource-metric.cfg. Change the last portion of
+url=jdbc:hsqldb:./tsdr/metric to point to a different directory.
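+
+For example, to keep the metric files under */var/lib/tsdr* instead
+(an illustrative path, not a recommended location), the line in
+etc/org.ops4j.datasource-metric.cfg would become:
+
+::
+
+    url=jdbc:hsqldb:/var/lib/tsdr/metric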
+
+To Configure HBase Data Store
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After installing the HBase server on the same machine as OpenDaylight,
+if the user accepts the default configuration of the HBase Data Store,
+the user can proceed directly with the installation of the HBase Data
+Store from the Karaf console.
+
+Optionally, the user can configure the TSDR HBase Data Store by
+following the HBase Data Store configuration steps below.
+
+-  HBase Data Store Configuration Steps
+
+   -  Open the file etc/tsdr-persistence-hbase.properties under the
+      Karaf distribution directory.
+
+   -  Edit the following parameters:
+
+      -  HBase server name
+
+      -  HBase server port
+
+      -  HBase client connection pool size
+
+      -  HBase client write buffer size
+
+After the configuration of HBase Data Store is complete, proceed with
+the installation of HBase Data Store from Karaf console.
+
+-  HBase Data Store Installation Steps
+
+   -  Start the Karaf console
+
+   -  Run the following command from the Karaf console:
+
+      ::
+
+          feature:install odl-tsdr-hbase
+
+To Configure Cassandra Data Store
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Currently, there’s no configuration needed for Cassandra Data Store. The
+user can use Cassandra data store directly after installing the feature
+from Karaf console.
+
+Additionally, separate commands have been implemented to install the
+various data collectors.
+
+Administering or Managing TSDR Data Stores
+------------------------------------------
+
+To Administer HSQLDB Data Store
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the TSDR default datastore feature (odl-tsdr-hsqldb-all) is
+enabled, the TSDR captured OpenFlow statistics metrics can be accessed
+from Karaf Console by executing the command
+
+::
+
+    tsdr:list <metric-category> <starttimestamp> <endtimestamp>
+
+wherein
+
+-  <metric-category> = any one of the following categories
+   FlowGroupStats, FlowMeterStats, FlowStats, FlowTableStats, PortStats,
+   QueueStats
+
+-  <starttimestamp> = filters the list to metrics collected at or after
+   this timestamp
+
+-  <endtimestamp> = filters the list to metrics collected at or before
+   this timestamp
+
+-  <starttimestamp> and <endtimestamp> are optional.
+
+-  A maximum of 1000 records will be displayed.
+
+To Administer HBase Data Store
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-  Using Karaf Command to retrieve data from HBase Data Store
+
+The user first needs to install the HBase data store from the Karaf
+console:
+
+::
+
+    feature:install odl-tsdr-hbase
+
+The user can retrieve the data from HBase data store using the following
+commands from Karaf console:
+
+::
+
+    tsdr:list
+    tsdr:list <CategoryName> <StartTime> <EndTime>
+
+Pressing Tab while typing the command in the Karaf console will show
+the context prompt for the arguments.
+
+To Administer Cassandra Data Store
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The user first needs to install Cassandra data store from Karaf console:
+
+::
+
+    feature:install odl-tsdr-cassandra
+
+Then the user can retrieve the data from Cassandra data store using the
+following commands from Karaf console:
+
+::
+
+    tsdr:list
+    tsdr:list <CategoryName> <StartTime> <EndTime>
+
+Pressing Tab while typing the command in the Karaf console will show
+the context prompt for the arguments.
+
+Installing TSDR Data Collectors
+-------------------------------
+
+When the user uses the HSQLDB data store and installs the
+"odl-tsdr-hsqldb-all" feature from the Karaf console, the OpenFlow data
+collector is installed along with the HSQLDB data store. However, if
+the user needs other collectors, such as the NetFlow Collector, Syslog
+Collector, SNMP Collector, and Controller Metrics Collector, the user
+needs to install them with separate commands. If the user uses the
+HBase or Cassandra data store, no collectors are installed when the
+data store is installed. Instead, the user needs to install each
+collector separately using the feature:install command from the Karaf
+console.
+
+The following is the list of supported TSDR data collectors with the
+associated feature install commands:
+
+-  OpenFlow Data Collector
+
+   ::
+
+       feature:install odl-tsdr-openflow-statistics-collector
+
+-  SNMP Data Collector
+
+   ::
+
+       feature:install odl-tsdr-snmp-data-collector
+
+-  NetFlow Data Collector
+
+   ::
+
+       feature:install odl-tsdr-netflow-statistics-collector
+
+-  sFlow Data Collector
+
+   ::
+
+       feature:install odl-tsdr-sflow-statistics-colletor
+
+-  Syslog Data Collector
+
+   ::
+
+       feature:install odl-tsdr-syslog-collector
+
+-  Controller Metrics Collector
+
+   ::
+
+       feature:install odl-tsdr-controller-metrics-collector
+
+In order to use controller metrics collector, the user needs to install
+Sigar library.
+
+The following are the instructions for installing the Sigar library on
+Ubuntu:
+
+-  Install back end library by "sudo apt-get install
+   libhyperic-sigar-java"
+
+-  Execute "export
+   LD\_LIBRARY\_PATH=/usr/lib/jni/:/usr/lib:/usr/local/lib" to set the
+   path of the JNI (you can add this to the ".bashrc" in your home
+   directory)
+
+-  Download the file "sigar-1.6.4.jar". It might be also in your ".m2"
+   directory under "~/.m2/resources/org/fusesource/sigar/1.6.4"
+
+-  Create the directory "org/fusesource/sigar/1.6.4" under the "system"
+   directory in your controller home directory and place the
+   "sigar-1.6.4.jar" there
+
+Configuring TSDR Data Collectors
+--------------------------------
+
+-  SNMP Data Collector Device Credential Configuration
+
+After installing the SNMP Data Collector, a configuration file is
+generated under the etc/ directory of the ODL distribution:
+etc/tsdr.snmp.cfg.
+
+The following is a sample tsdr.snmp.cfg file:
+
+::
+
+    credentials=[192.168.0.2,public],[192.168.0.3,public]
+
+The above credentials indicate that the TSDR SNMP Collector is going to
+connect to two devices. The IP address and read community string of
+these two devices are (192.168.0.2, public) and (192.168.0.3, public)
+respectively.
+
+The user can make changes to this configuration file any time during
+runtime. The configuration will be picked up by TSDR in the next cycle
+of data collection.
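For reference, the credentials line can be split into (IP, community) pairs. The sketch below is an illustrative parser based on the bracketed format shown in the sample; it is not the collector's actual loader:

```python
import re

# Illustrative parser for the credentials line of tsdr.snmp.cfg;
# the bracketed [ip,community] format is taken from the sample above.
def parse_credentials(line):
    value = line.split("=", 1)[1]
    return [tuple(item.split(",", 1))
            for item in re.findall(r"\[([^\]]+)\]", value)]

creds = parse_credentials("credentials=[192.168.0.2,public],[192.168.0.3,public]")
print(creds)
```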
+
+Polling interval configuration for SNMP Collector and OpenFlow Stats Collector
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The default polling interval of SNMP Collector and OpenFlow Stats
+Collector is 30 seconds and 15 seconds respectively. The user can change
+the polling interval through restconf APIs at any time. The new polling
+interval will be picked up by TSDR in the next collection cycle.
+
+-  Retrieve Polling Interval API for SNMP Collector
+
+   -  URL:
+      http://localhost:8181/restconf/config/tsdr-snmp-data-collector:TSDRSnmpDataCollectorConfig
+
+   -  Verb: GET
+
+-  Update Polling Interval API for SNMP Collector
+
+   -  URL:
+      http://localhost:8181/restconf/operations/tsdr-snmp-data-collector:setPollingInterval
+
+   -  Verb: POST
+
+   -  Content Type: application/json
+
+   -  Input Payload:
+
+      ::
+
+          {
+             "input": {
+                 "interval": "15000"
+             }
+          }
+
+-  Retrieve Polling Interval API for OpenFlowStats Collector
+
+   -  URL:
+      http://localhost:8181/restconf/config/tsdr-openflow-statistics-collector:TSDROSCConfig
+
+   -  Verb: GET
+
+-  Update Polling Interval API for OpenFlowStats Collector
+
+   -  URL:
+      http://localhost:8181/restconf/operations/tsdr-openflow-statistics-collector:setPollingInterval
+
+   -  Verb: POST
+
+   -  Content Type: application/json
+
+   -  Input Payload:
+
+      ::
+
+          {
+             "input": {
+                 "interval": "15000"
+             }
+          }
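The update call can also be issued programmatically. The sketch below builds the SNMP setPollingInterval request shown above using Python's standard library; localhost and admin/admin are OpenDaylight defaults and are assumptions about your deployment, and the urlopen line is left commented so the sketch runs without a controller:

```python
import base64
import json
import urllib.request

# Build the setPollingInterval request for the SNMP collector.
url = ("http://localhost:8181/restconf/operations/"
       "tsdr-snmp-data-collector:setPollingInterval")
body = json.dumps({"input": {"interval": "15000"}}).encode()
req = urllib.request.Request(url, data=body, method="POST")
req.add_header("Content-Type", "application/json")
# Default ODL credentials; replace with your own.
token = base64.b64encode(b"admin:admin").decode()
req.add_header("Authorization", "Basic " + token)
# urllib.request.urlopen(req)  # uncomment against a running controller
print(req.get_method(), req.full_url)
```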
+
+Querying TSDR from REST APIs
+----------------------------
+
+TSDR provides two REST APIs for querying data stored in TSDR data
+stores.
+
+-  Query of TSDR Metrics
+
+   -  URL: http://localhost:8181/tsdr/metrics/query
+
+   -  Verb: GET
+
+   -  Parameters:
+
+      -  tsdrkey=[NID=][DC=][MN=][RK=]
+
+         ::
+
+             The TSDRKey format indicates the NodeID(NID), DataCategory(DC), MetricName(MN), and RecordKey(RK) of the monitored objects.
+             For example, the following is a valid tsdrkey:
+             [NID=openflow:1][DC=FLOWSTATS][MN=PacketCount][RK=Node:openflow:1,Table:0,Flow:3]
+             The following is also a valid tsdrkey:
+             tsdrkey=[NID=][DC=FLOWSTATS][MN=][RK=]
+             When sections in the tsdrkey are left empty, the query will return all the records in the TSDR data store that match the filled sections. In the above example, the query will return all the data in the FLOWSTATS data category.
+             The query will return only the first 1000 records that match the query criteria.
+
+      -  from=<time\_in\_seconds>
+
+      -  until=<time\_in\_seconds>
+
+The following is an example curl command for querying metric data from
+the TSDR data store:
+
+::
+
+    curl -G -v -H "Accept: application/json" -H "Content-Type: application/json" \
+        "http://localhost:8181/tsdr/metrics/query" \
+        --data-urlencode "tsdrkey=[NID=][DC=FLOWSTATS][MN=][RK=]" \
+        --data-urlencode "from=0" --data-urlencode "until=240000000000" | more
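The same query URL can be built with Python's standard library. The parameters are taken from the curl example, and the fetch itself is left commented so the sketch runs without a controller:

```python
from urllib.parse import urlencode

# Build the metrics query URL from the curl example above.
params = {
    "tsdrkey": "[NID=][DC=FLOWSTATS][MN=][RK=]",
    "from": "0",
    "until": "240000000000",
}
query_url = "http://localhost:8181/tsdr/metrics/query?" + urlencode(params)
print(query_url)
# import urllib.request
# body = urllib.request.urlopen(query_url).read()  # against a running controller
```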
+
+-  Query of TSDR Log type of data
+
+   -  URL: http://localhost:8181/tsdr/logs/query
+
+   -  Verb: GET
+
+   -  Parameters:
+
+      -  tsdrkey=[NID=][DC=][RK=]
+
+         ::
+
+             The TSDRKey format indicates the NodeID(NID), DataCategory(DC), and RecordKey(RK) of the monitored objects.
+             For example, the following is a valid tsdrkey:
+             [NID=openflow:1][DC=NETFLOW][RK=]
+             The query will return only the first 1000 records that match the query criteria.
+
+      -  from=<time\_in\_seconds>
+
+      -  until=<time\_in\_seconds>
+
+The following is an example curl command for querying log type data
+from the TSDR data store:
+
+::
+
+    curl -G -v -H "Accept: application/json" -H "Content-Type: application/json" \
+        "http://localhost:8181/tsdr/logs/query" \
+        --data-urlencode "tsdrkey=[NID=][DC=NETFLOW][RK=]" \
+        --data-urlencode "from=0" --data-urlencode "until=240000000000" | more
+
+Grafana integration with TSDR
+-----------------------------
+
+TSDR provides northbound integration with the Grafana time series data
+visualization tool. All metric data stored in the TSDR data store can
+be visualized using Grafana.
+
+For the detailed instruction about how to install and configure Grafana
+to work with TSDR, please refer to the following link:
+
+https://wiki.opendaylight.org/view/Grafana_Integration_with_TSDR_Step-by-Step
+
+Purging Service configuration
+-----------------------------
+
+After the data stores are installed from Karaf console, the purging
+service will be installed as well. A configuration file called
+tsdr.data.purge.cfg will be generated under etc/ directory of ODL
+distribution.
+
+The following is the sample default content of the tsdr.data.purge.cfg
+file:
+
+::
+
+    host=127.0.0.1
+    data_purge_enabled=true
+    data_purge_time=23:59:59
+    data_purge_interval_in_minutes=1440
+    retention_time_in_hours=168
+
+The host indicates the IP address of the data store. When the data
+store runs on the same machine as the ODL controller, 127.0.0.1 is the
+right value for the host IP. The other attributes are self-explanatory.
+The user can change these attributes at any time; the configuration
+change will be picked up right away by the TSDR purging service at
+runtime.
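The file uses simple key=value lines, so it is easy to inspect programmatically. The sketch below is an illustrative reader (not TSDR's own loader) applied to the sample content above:

```python
# Illustrative reader for tsdr.data.purge.cfg (simple key=value lines);
# this is not TSDR's own configuration loader.
def parse_purge_cfg(text):
    cfg = {}
    for raw in text.splitlines():
        line = raw.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            cfg[key] = value
    return cfg

SAMPLE = """host=127.0.0.1
data_purge_enabled=true
data_purge_time=23:59:59
data_purge_interval_in_minutes=1440
retention_time_in_hours=168
"""

cfg = parse_purge_cfg(SAMPLE)
# 168 hours of retention is one week of data.
print(cfg["host"], int(cfg["retention_time_in_hours"]) // 24)
```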
+
+How to use TSDR to collect, store, and view OpenFlow Interface Statistics
+-------------------------------------------------------------------------
+
+Overview
+~~~~~~~~
+
+This tutorial describes an example of using TSDR to collect, store, and
+view one type of time series data in OpenDaylight environment.
+
+Prerequisites
+~~~~~~~~~~~~~
+
+You need the following prerequisites:
+
+-  One or multiple OpenFlow enabled switches. Alternatively, you can use
+   mininet to simulate such a switch.
+
+-  Successfully installed OpenDaylight Controller.
+
+-  Successfully installed HBase Data Store following TSDR HBase Data
+   Store Installation Guide.
+
+-  Connect the OpenFlow enabled switch(es) to OpenDaylight Controller.
+
+Target Environment
+~~~~~~~~~~~~~~~~~~
+
+The HBase data store is only supported on the Linux operating system.
+
+Instructions
+~~~~~~~~~~~~
+
+-  Start OpenDaylight.
+
+-  Connect OpenFlow enabled switch(es) to the controller.
+
+   -  If using mininet, run the following command from the mininet
+      command line::
+
+          mn --topo single,3 --controller remote,ip=172.17.252.210,port=6653 --switch ovsk,protocols=OpenFlow13
+
+-  Install tsdr hbase feature from Karaf:
+
+   -  feature:install odl-tsdr-hbase
+
+-  Install OpenFlow Statistics Collector from Karaf:
+
+   -  feature:install odl-tsdr-openflow-statistics-collector
+
+-  Run the following command from the Karaf console:
+
+   -  tsdr:list PORTSTATS
+
+You should be able to see the interface statistics of the switch(es)
+from the HBase Data Store. If there are too many rows, you can use
+"tsdr:list PORTSTATS\|more" to view them page by page.
+
+By tabbing after "tsdr:list", you will see all the supported data
+categories. For example, "tsdr:list FlowStats" will output the Flow
+statistics data collected from the switch(es).
+
+Troubleshooting
+---------------
+
+Karaf logs
+~~~~~~~~~~
+
+All TSDR features and components write logging information including
+information messages, warnings, errors and debug messages into
+karaf.log.
+
+HBase and Cassandra logs
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+For HBase and Cassandra data stores, the database level logs are written
+into HBase log and Cassandra logs.
+
+-  HBase log
+
+   -  HBase log is under <HBase-installation-directory>/logs/.
+
+-  Cassandra log
+
+   -  Cassandra log is under {cassandra.logdir}/system.log. The default
+      {cassandra.logdir} is /var/log/cassandra/.
+
+Security
+--------
+
+TSDR gets the data from a variety of sources, which can be secured in
+different ways.
+
+-  OpenFlow Security
+
+   -  The OpenFlow data can be configured with Transport Layer Security
+      (TLS) since the OpenFlow Plugin that TSDR depends on provides this
+      security support.
+
+-  SNMP Security
+
+   -  The SNMP version3 has security support. However, since ODL SNMP
+      Plugin that TSDR depends on does not support version 3, we (TSDR)
+      will not have security support at this moment.
+
+-  NetFlow Security
+
+   -  NetFlow cannot be configured with security, so we recommend
+      making sure it flows only over a secured management network.
+
+-  Syslog Security
+
+   -  Syslog cannot be configured with security, so we recommend
+      making sure it flows only over a secured management network.
+
+Support multiple data stores simultaneously at runtime
+------------------------------------------------------
+
+TSDR supports running multiple data stores simultaneously at runtime.
+For example, it is possible to configure TSDR to push log type data
+into the Cassandra data store, while pushing metric type data into
+HBase.
+
+When you install one TSDR data store from karaf console, such as using
+feature:install odl-tsdr-hsqldb, a properties file will be generated
+under <Karaf-distribution-directory>/etc/. For example, when you install
+hsqldb, a file called tsdr-persistence-hsqldb.properties is generated
+under that directory.
+
+By default, all the types of data are supported in the data store. For
+example, the default content of tsdr-persistence-hsqldb.properties is as
+follows:
+
+::
+
+    metric-persistency=true
+    log-persistency=true
+    binary-persistency=true
+
+When the user would like to use different data stores to support
+different types of data, he/she could enable or disable a particular
+type of data persistence in the data stores by configuring the
+properties file accordingly.
+
+For example, if the user would like to store the log and binary types
+of data in HBase, and store the metric type of data in Cassandra,
+he/she needs to install both the hbase and cassandra data stores from
+the Karaf console. Then the user needs to modify the properties files
+under <Karaf-distribution-directory>/etc as follows:
+
+-  tsdr-persistence-hbase.properties
+
+   ::
+
+       metric-persistency=false
+       log-persistency=true
+       binary-persistency=true
+
+-  tsdr-persistence-cassandra.properties
+
+   ::
+
+       metric-persistency=true
+       log-persistency=false
+       binary-persistency=false
+
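To reduce copy-paste errors when splitting data types across stores, the flag files can be generated. The helper below is purely illustrative (TSDR only reads these files; it does not ship such a generator), and it reproduces the HBase/Cassandra split shown in the property files above:

```python
# Illustrative generator for the three persistence flags; the flag names
# come from the sample property files above.
FLAGS = ("metric-persistency", "log-persistency", "binary-persistency")

def render_persistence(enabled_types):
    # enabled_types: set of data types ("metric", "log", "binary")
    # this particular store should persist.
    return "\n".join(
        "%s=%s" % (flag, str(flag.split("-")[0] in enabled_types).lower())
        for flag in FLAGS)

hbase_props = render_persistence({"log", "binary"})
cassandra_props = render_persistence({"metric"})
print(hbase_props)
print(cassandra_props)
```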
diff --git a/docs/user-guide/ttp-cli-tools-user-guide.rst b/docs/user-guide/ttp-cli-tools-user-guide.rst
new file mode 100644 (file)
index 0000000..08583a7
--- /dev/null
@@ -0,0 +1,21 @@
+TTP CLI Tools User Guide
+========================
+
+Overview
+--------
+
+Table Type Patterns are a specification developed by the `Open
+Networking Foundation <https://www.opennetworking.org/>`__ to enable the
+description and negotiation of subsets of the OpenFlow protocol. This is
+particularly useful for hardware switches that support OpenFlow as it
+enables them to describe what features they do (and thus also what
+features they do not) support. More details can be found in the full
+specification listed on the `OpenFlow specifications
+page <https://www.opennetworking.org/sdn-resources/onf-specifications/openflow>`__.
+
+TTP CLI Tools Architecture
+--------------------------
+
+The TTP CLI Tools use the TTP Model and the YANG Tools/RESTCONF codecs
+to translate between the Data Transfer Objects (DTOs) and JSON/XML.
+
diff --git a/docs/user-guide/uni-manager-plug-in-project.rst b/docs/user-guide/uni-manager-plug-in-project.rst
new file mode 100644 (file)
index 0000000..6573553
--- /dev/null
@@ -0,0 +1,92 @@
+UNI Manager Plug In Project
+===========================
+
+Overview
+--------
+
+The version of the UNI Manager (UNIMgr) plug-in included in OpenDaylight
+Beryllium release is experimental, serving as a proof-of-concept (PoC)
+for using features of OpenDaylight to provision networked elements with
+attributes satisfying Metro Ethernet Forum (MEF) requirements for
+delivery of Carrier Ethernet service. This initial version of UNIMgr
+does not enable the full set of MEF-defined functionality for Carrier
+Ethernet service. UNI Manager adheres to a minimum set of functionality
+defined by MEF 7.2 and 10.2 specifications.
+
+UNIMgr receives a request from applications to create an Ethernet
+Private Line (EPL) private Ethernet connection between two endpoints on
+the network. The request must include the IP addresses of the endpoints
+and a class of service identifier.
+
+UNI Manager plug-in translates the request for EPL service into (a)
+configuring two instances of Open vSwitch (OVS), each instance running
+in one of the UNI endpoints, with two ports and a bridge between the
+ports, and (b) creating a GRE tunnel to provide a private connection
+between the endpoints. This initial version of UNIMgr uses only OVSDB on
+its southbound interface to send configuration commands.
+
+UNIMgr also accepts a bits per second datarate parameter, which is
+translated to an OVSDB command to limit the rate at which the OVS
+instances will forward data traffic.
+
+The YANG module used to create the UNIMgr plug-in models MEF-defined UNI
+and Ethernet Virtual Connection (EVC) attributes but does not include
+the full set of UNI and EVC attributes. And of the attributes modeled in
+the YANG module only a subset of them are implemented in the UNIMgr
+listener code translating the Operational data tree to OVSDB commands.
+The YANG module used to develop the PoC UNIMgr plug-in is
+cl-unimgr-mef.yang. A copy of this module is available in the
+odl-unimgr-api bundle of the UNIMgr project.
+
+Limitations of the PoC version of UNI Manager in OpenDaylight Beryllium
+include those listed below:
+
+-  Uses only the OVSDB southbound interface of OpenDaylight
+
+-  Only uses the UNI ID, IP Address, and speed UNI attributes
+
+-  Only uses a subset of the EVC per UNI attributes
+
+-  Does not use MEF Class of Service or Bandwidth Profile attributes
+
+-  Configures only Open vSwitch network elements
+
+Opportunities for evolution of the UNI Manager plug in include using
+complete MEF Service Layer and MEF Resource Layer YANG models and
+supporting other OpenDaylight southbound protocols like NetConf and
+OpenFlow.
+
+UNI Manager Components
+----------------------
+
+UNI Manager is comprised of the following OpenDaylight Karaf features:
+
++--------------------------------------+--------------------------------------+
+| odl-unimgr-api                       | OpenDaylight :: UniMgr :: api        |
++--------------------------------------+--------------------------------------+
+| odl-unimgr                           | OpenDaylight :: UniMgr               |
++--------------------------------------+--------------------------------------+
+| odl-unimgr-console                   | OpenDaylight :: UniMgr :: CLI        |
++--------------------------------------+--------------------------------------+
+| odl-unimgr-rest                      | OpenDaylight :: UniMgr :: REST       |
++--------------------------------------+--------------------------------------+
+| odl-unimgr-ui                        | OpenDaylight :: UniMgr :: UI         |
++--------------------------------------+--------------------------------------+
+
+Installing UNI Manager Plug-in
+------------------------------
+
+After launching OpenDaylight install the feature for the UNI Manager
+plug-in. From the karaf command prompt execute the following command to
+install the UNI Manager plug-in:
+
+::
+
+    $ feature:install odl-unimgr-ui
+
+Explore and exercise the UNI Manager REST API
+---------------------------------------------
+
+To see the UNI Manager APIs, browse to this URL:
+http://localhost:8181/apidoc/explorer/index.html
+
+Replace localhost with the IP address or hostname where OpenDaylight is
+running if you are not running OpenDaylight locally on your machine.
+
+See also the UNI Manager Developer’s Guide for a full list and
+description of UNI Manager POSTMAN calls.
+
diff --git a/docs/user-guide/unified-secure-channel.rst b/docs/user-guide/unified-secure-channel.rst
new file mode 100644 (file)
index 0000000..983c33d
--- /dev/null
@@ -0,0 +1,145 @@
+Unified Secure Channel
+======================
+
+This document describes how to use the Unified Secure Channel (USC)
+feature in OpenDaylight. This document contains configuration,
+administration, and management sections for the feature.
+
+Overview
+--------
+
+In enterprise networks, more and more controller and network management
+systems are being deployed remotely, such as in the cloud. Additionally,
+enterprise networks are becoming more heterogeneous - branch, IoT,
+wireless (including cloud access control). Enterprise customers want a
+converged network controller and management system solution. This
+feature is intended for device and network administrators looking to use
+unified secure channels for their systems.
+
+USC Channel Architecture
+------------------------
+
+-  USC Agent
+
+   -  The USC Agent provides proxy and agent functionality on top of all
+      standard protocols supported by the device. It initiates call-home
+      with the controller, maintains live connections with the
+      controller, acts as a demuxer/muxer for packets with the USC
+      header, and authenticates the controller.
+
+-  USC Plugin
+
+   -  The USC Plugin is responsible for communication between the
+      controller and the USC agent. It responds to call-home with the
+      controller, maintains live connections with the devices, acts as a
+      muxer/demuxer for packets with the USC header, and provides
+      support for TLS/DTLS.
+
+-  USC Manager
+
+   -  The USC Manager handles configurations, high availability,
+      security, monitoring, and clustering support for USC.
+
+-  USC UI
+
+   -  The USC UI is responsible for displaying a graphical user
+      interface representing the state of USC in the OpenDaylight DLUX
+      UI.
+
+Installing USC Channel
+----------------------
+
+To install USC, download OpenDaylight and use the Karaf console to
+install the following feature:
+
+odl-usc-channel-ui
+
+Configuring USC Channel
+-----------------------
+
+This section gives details about the configuration settings for various
+components in USC.
+
+The USC configuration files for the Karaf distribution are located in
+distribution/karaf/target/assembly/etc/usc
+
+-  certificates
+
+   -  The certificates folder contains the client key, pem, and rootca
+      files as is necessary for security.
+
+-  akka.conf
+
+   -  This file contains configuration related to clustering. Potential
+      configuration properties can be found on the akka website at
+      http://doc.akka.io
+
+-  usc.properties
+
+   -  This file contains configuration related to USC. Use this file to
+      set the location of certificates, define the source of additional
+      akka configurations, and assign default settings to the USC
+      behavior.
+
+Administering or Managing USC Channel
+-------------------------------------
+
+After installing the odl-usc-channel-ui feature from the Karaf console,
+users can administer and manage USC channels from the UI or APIDOCS
+explorer.
+
+Go to
+`http://${ipaddress}:8181/index.html <http://${ipaddress}:8181/index.html>`__,
+sign in, and click on the USC side menu tab. From there, users can view
+the state of USC channels.
+
+Go to
+`http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__,
+sign in, and expand the usc-channel panel. From there, users can execute
+various API calls to test their USC deployment such as add-channel,
+delete-channel, and view-channel.
+
+Tutorials
+---------
+
+Below are tutorials for USC Channel.
+
+Viewing USC Channel
+~~~~~~~~~~~~~~~~~~~
+
+The purpose of this tutorial is to view USC Channel
+
+Overview
+^^^^^^^^
+
+This tutorial walks users through the process of viewing the USC Channel
+environment topology including established channels connecting the
+controllers and devices in the USC topology.
+
+Prerequisites
+^^^^^^^^^^^^^
+
+For this tutorial, we assume that a device running a USC agent is
+already installed.
+
+Instructions
+^^^^^^^^^^^^
+
+-  Run the OpenDaylight distribution and install odl-usc-channel-ui from
+   the Karaf console.
+
+-  Go to
+   `http://${ipaddress}:8181/apidoc/explorer/index.html <http://${ipaddress}:8181/apidoc/explorer/index.html>`__
+
+-  Execute add-channel with the following json data:
+
+   -  {"input":{"channel":{"hostname":"127.0.0.1","port":1068,"remote":false}}}
+
+-  Go to
+   `http://${ipaddress}:8181/index.html <http://${ipaddress}:8181/index.html>`__
+
+-  Click on the USC side menu tab.
+
+-  The UI should display a table including the added channel from step
+   3.
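The add-channel payload from the step above can be constructed and sanity-checked as follows; the hostname and port mirror the example values and carry no special meaning:

```python
import json

# Build the add-channel payload from the step above; hostname and port
# are the example values, not required settings.
def add_channel_payload(hostname, port, remote=False):
    return {"input": {"channel": {"hostname": hostname,
                                  "port": port,
                                  "remote": remote}}}

payload = add_channel_payload("127.0.0.1", 1068)
print(json.dumps(payload))
```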
+
index e7dbd2db9dc646b0dae2a56060dede340fe6502b..70b0332c9147c499e5289cf137b7b6d1efdc051f 100644 (file)
@@ -22,7 +22,7 @@ To log in to DLUX, after installing the application:
 
 2. Login to the application with your username and password credentials.
 
-    **Note**
+.. note::
 
     OpenDaylight’s default credentials are *admin* for both the username
     and password.
@@ -33,7 +33,7 @@ Working with DLUX
 After you login to DLUX, if you enable only odl-dlux-core feature, you
 will see only topology application available in the left pane.
 
-    **Note**
+.. note::
 
     To make sure topology displays all the details, enable the
     odl-l2switch-switch feature in Karaf.
@@ -47,7 +47,7 @@ odl-dlux-yangui respectively in the Karaf distribution.
 
    DLUX Modules
 
-    **Note**
+.. note::
 
     If you install your application in dlux, they will also show up on
     the left hand navigation after browser page refresh.
@@ -82,7 +82,7 @@ Viewing Network Topology
 The Topology tab displays a graphical representation of network topology
 created.
 
-    **Note**
+.. note::
 
     DLUX does not allow for editing or adding topology information. The
     topology is generated and edited in other modules, e.g., the
@@ -128,9 +128,9 @@ To use Yang UI:
 2. The top part displays a tree of APIs, subAPIs, and buttons to call
    possible functions (GET, POST, PUT, and DELETE).
 
-       **Note**
+   .. note::
 
-.. note:: every subAPI can call every function. For example, subAPIs in
+       every subAPI can call every function. For example, subAPIs in
        the *operational* store have GET functionality only.
 
    Inputs can be filled from OpenDaylight when existing data from
diff --git a/docs/user-guide/virtual-tenant-network-(vtn).rst b/docs/user-guide/virtual-tenant-network-(vtn).rst
new file mode 100644 (file)
index 0000000..39708cf
--- /dev/null
@@ -0,0 +1,4724 @@
+Virtual Tenant Network (VTN)
+============================
+
+VTN Overview
+------------
+
+OpenDaylight Virtual Tenant Network (VTN) is an application that
+provides multi-tenant virtual network on an SDN controller.
+
+Conventionally, large investments in network systems and operating
+expenses are needed because the network is configured as a silo for
+each department and system. Various network appliances must be
+installed for each tenant, and those boxes cannot be shared with
+others. Designing, implementing, and operating the entire complex
+network is heavy work.
+
+The uniqueness of VTN is a logical abstraction plane. This enables the
+complete separation of logical plane from physical plane. Users can
+design and deploy any desired network without knowing the physical
+network topology or bandwidth restrictions.
+
+VTN allows the users to define the network with a look and feel of
+conventional L2/L3 network. Once the network is designed on VTN, it will
+automatically be mapped into underlying physical network, and then
+configured on the individual switch leveraging SDN control protocol. The
+definition of logical plane makes it possible not only to hide the
+complexity of the underlying network but also to better manage network
+resources. It achieves reducing reconfiguration time of network services
+and minimizing network configuration errors.
+
+.. figure:: ./images/vtn/VTN_Overview.jpg
+   :alt: VTN Overview
+
+   VTN Overview
+
+It is implemented as two major components:
+
+-  `VTN Manager <#_vtn_manager>`__
+
+-  `VTN Coordinator <#_vtn_coordinator>`__
+
+VTN Manager
+~~~~~~~~~~~
+
+An OpenDaylight Plugin that interacts with other modules to implement
+the components of the VTN model. It also provides a REST interface to
+configure VTN components in OpenDaylight. VTN Manager is implemented as
+one plugin to the OpenDaylight. This provides a REST interface to
+create/update/delete VTN components. The user command in VTN Coordinator
+is translated as REST API to VTN Manager by the OpenDaylight Driver
+component. In addition to the above mentioned role, it also provides an
+implementation to the OpenStack L2 Network Functions API.
+
+Features Overview
+^^^^^^^^^^^^^^^^^
+
+-  **odl-vtn-manager** provides VTN Manager’s JAVA API.
+
+-  **odl-vtn-manager-rest** provides VTN Manager’s REST API.
+
+-  **odl-vtn-manager-neutron** provides the integration with Neutron
+   interface.
+
+REST API
+^^^^^^^^
+
+VTN Manager provides REST API for virtual network functions.
+
+Here is an example of how to create a virtual tenant network.
+
+::
+
+     curl --user "admin":"admin" -H "Accept: application/json" -H \
+     "Content-type: application/json" -X POST \
+     http://localhost:8181/restconf/operations/vtn:update-vtn \
+     -d '{"input":{"tenant-name":"vtn1"}}'
+
+You can check the list of all tenants by executing the following
+command.
+
+::
+
+     curl --user "admin":"admin" -H "Accept: application/json" -H \
+     "Content-type: application/json" -X GET \
+     http://localhost:8181/restconf/operational/vtn:vtns
+
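The tenant list returned by the GET call can also be checked programmatically. The sketch below parses a canned response; the {"vtns": {"vtn": [...]}} shape with a "name" leaf is an assumption based on the VTN model, not a guaranteed wire format:

```python
import json

# Extract tenant names from a GET /restconf/operational/vtn:vtns
# response body; the response shape used here is an assumption.
def tenant_names(vtns_json):
    return [t.get("name") for t in vtns_json.get("vtns", {}).get("vtn", [])]

canned = json.loads('{"vtns": {"vtn": [{"name": "vtn1"}]}}')
print(tenant_names(canned))
```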
+For the REST API documentation of VTN Manager, please refer to:
+https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/
+
+VTN Coordinator
+~~~~~~~~~~~~~~~
+
+The VTN Coordinator is an external application that provides a REST
+interface for a user to use OpenDaylight VTN Virtualization. It
+interacts with VTN Manager plugin to implement the user configuration.
+It is also capable of multiple OpenDaylight orchestration. It realizes
+Virtual Tenant Network (VTN) provisioning in OpenDaylight instances. In
+the OpenDaylight architecture VTN Coordinator is part of the network
+application, orchestration and services layer. VTN Coordinator will use
+the REST interface exposed by the VTN Manager to realize the virtual
+network using OpenDaylight. It uses OpenDaylight APIs (REST) to
+construct the virtual network in OpenDaylight instances. It provides
+REST APIs for northbound VTN applications and supports virtual networks
+spanning across multiple OpenDaylight by coordinating across
+OpenDaylight.
+
+For VTN Coordinator REST API, please refer to:
+https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_%28VTN%29:VTN_Coordinator:RestApi
+
+Network Virtualization Function
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The user first defines a VTN. Then, the user maps the VTN to a physical
+network, which enables communication to take place according to the VTN
+definition. With the VTN definition, L2 and L3 transfer functions and
+flow-based traffic control functions (filtering and redirect) are
+possible.
+
+Virtual Network Construction
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following table shows the elements which make up the VTN. In the
+VTN, a virtual network is constructed using virtual nodes (vBridge,
+vRouter) and virtual interfaces and links. It is possible to configure
+a network which has L2 and L3 transfer functions by connecting the
+virtual interfaces made on virtual nodes via virtual links.
+
++--------------------------------------+--------------------------------------+
+| vBridge                              | The logical representation of L2     |
+|                                      | switch function.                     |
++--------------------------------------+--------------------------------------+
+| vRouter                              | The logical representation of router |
+|                                      | function.                            |
++--------------------------------------+--------------------------------------+
+| vTep                                 | The logical representation of Tunnel |
+|                                      | End Point - TEP.                     |
++--------------------------------------+--------------------------------------+
+| vTunnel                              | The logical representation of        |
+|                                      | Tunnel.                              |
++--------------------------------------+--------------------------------------+
+| vBypass                              | The logical representation of        |
+|                                      | connectivity between controlled      |
+|                                      | networks.                            |
++--------------------------------------+--------------------------------------+
+| Virtual interface                    | The representation of end point on   |
+|                                      | the virtual node.                    |
++--------------------------------------+--------------------------------------+
+| Virtual Link (vLink)                 | The logical representation of L1     |
+|                                      | connectivity between virtual         |
+|                                      | interfaces.                          |
++--------------------------------------+--------------------------------------+
+
+The following figure shows an example of a constructed virtual network.
+VRT is defined as the vRouter, and BR1 and BR2 are defined as vBridges.
+Interfaces of the vRouter and vBridges are connected using vLinks.
+
+.. figure:: ./images/vtn/VTN_Construction.jpg
+   :alt: VTN Construction
+
+   VTN Construction
+
+Mapping of Physical Network Resources
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Physical network resources are mapped to the constructed virtual
+network. Mapping identifies which virtual network each packet
+transmitted or received by an OpenFlow switch belongs to, as well as
+which interface in the OpenFlow switch transmits or receives that
+packet. There are two mapping methods. When a packet is received from
+an OpenFlow switch (OFS), port mapping is first searched for a
+corresponding mapping definition, then VLAN mapping is searched, and
+the packet is mapped to the relevant vBridge according to the first
+matching mapping.
+
++--------------------------------------+--------------------------------------+
+| Port mapping                         | Maps physical network resources to   |
+|                                      | an interface of vBridge using Switch |
+|                                      | ID, Port ID and VLAN ID of the       |
+|                                      | incoming L2 frame. Untagged frame    |
+|                                      | mapping is also supported.           |
++--------------------------------------+--------------------------------------+
+| VLAN mapping                         | Maps physical network resources to a |
+|                                      | vBridge using the VLAN ID of the     |
+|                                      | incoming L2 frame. It can also map   |
+|                                      | physical resources of a particular   |
+|                                      | switch to a vBridge using both the   |
+|                                      | switch ID and the VLAN ID of the     |
+|                                      | incoming L2 frame.                   |
++--------------------------------------+--------------------------------------+
+| MAC mapping                          | Maps physical resources to an        |
+|                                      | interface of a vBridge using the MAC |
+|                                      | address of the incoming L2 frame.    |
+|                                      | (The initial contribution does not   |
+|                                      | include this method.)                |
++--------------------------------------+--------------------------------------+
+
+VTN can learn the terminal information from a terminal that is connected
+to a switch which is mapped to a VTN. Further, it is possible to refer
+to that terminal information on the VTN.
+
+-  Learning terminal information: VTN learns the information of a
+   terminal that belongs to a VTN. It stores the MAC address and VLAN
+   ID of the terminal in relation to the port of the switch.
+
+-  Aging of terminal information: Terminal information learned by the
+   VTN is maintained as long as packets from the terminal keep flowing
+   in the VTN. If the terminal gets disconnected from the VTN, the
+   aging timer starts ticking and the terminal information is
+   maintained until it times out.
+
+The following figure shows an example of mapping. An interface of BR1 is
+mapped to port GBE0/1 of OFS1 using port mapping. Packets received from
+GBE0/1 of OFS1 are regarded as those from the corresponding interface of
+BR1. BR2 is mapped to VLAN 200 using VLAN mapping. Packets with VLAN tag
+200 received from any ports of any OFSs are regarded as those from an
+interface of BR2.
+
+.. figure:: ./images/vtn/VTN_Mapping.jpg
+   :alt: VTN Mapping
+
+   VTN Mapping
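
The lookup order described above (port mapping searched first, then VLAN mapping) can be sketched in a few lines, using the BR1/BR2 example from the figure. This is a toy illustration, not VTN code; the dictionary key shapes are assumptions made for the example.

```python
# Minimal sketch of the mapping precedence: port mapping is consulted
# first, then VLAN mapping decides which vBridge a frame belongs to.
def map_frame_to_vbridge(port_maps, vlan_maps, switch_id, port_id, vlan_id):
    # Port mapping: keyed by (switch ID, port ID, VLAN ID).
    vbridge = port_maps.get((switch_id, port_id, vlan_id))
    if vbridge is not None:
        return vbridge
    # VLAN mapping: keyed by VLAN ID alone.
    return vlan_maps.get(vlan_id)

# From the figure: an interface of BR1 is port-mapped to GBE0/1 of OFS1;
# BR2 is VLAN-mapped to VLAN 200.
port_maps = {("OFS1", "GBE0/1", 0): "BR1"}
vlan_maps = {200: "BR2"}

assert map_frame_to_vbridge(port_maps, vlan_maps, "OFS1", "GBE0/1", 0) == "BR1"
assert map_frame_to_vbridge(port_maps, vlan_maps, "OFS2", "GBE0/3", 200) == "BR2"
assert map_frame_to_vbridge(port_maps, vlan_maps, "OFS2", "GBE0/3", 300) is None
```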
+
+vBridge Functions
+~~~~~~~~~~~~~~~~~
+
+The vBridge provides the bridge function that transfers a packet to the
+intended virtual port according to the destination MAC address. The
+vBridge looks up the MAC address table and transmits the packet to the
+corresponding virtual interface when the destination MAC address has
+been learned. When the destination MAC address has not been learned, it
+transmits the packet to all virtual interfaces other than the receiving
+port (flooding). MAC addresses are learned as follows.
+
+-  MAC address learning: The vBridge learns the MAC address of the
+   connected host. The source MAC address of each received frame is
+   mapped to the receiving virtual interface, and this MAC address is
+   stored in the MAC address table created on a per-vBridge basis.
+
+-  MAC address aging: The MAC address stored in the MAC address table is
+   retained as long as the host returns ARP replies. After the host is
+   disconnected, the address is retained until the aging timer times
+   out. To have the vBridge learn MAC addresses statically, you can
+   register MAC addresses manually.
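
The learn-then-forward-or-flood behavior described above can be modeled in a few lines. This is a toy sketch for illustration only, not the VTN implementation, and it omits aging.

```python
# Toy model of a vBridge: learn the source MAC per receiving interface,
# forward known destinations directly, flood unknown ones.
class VBridge:
    def __init__(self, interfaces):
        self.interfaces = set(interfaces)
        self.mac_table = {}          # MAC address -> virtual interface

    def receive(self, src_mac, dst_mac, in_if):
        self.mac_table[src_mac] = in_if          # MAC address learning
        out = self.mac_table.get(dst_mac)
        if out is not None:
            return [out]                          # destination learned
        return sorted(self.interfaces - {in_if})  # unknown: flood

br = VBridge(["if1", "if2", "if3"])
assert br.receive("aa:aa", "bb:bb", "if1") == ["if2", "if3"]  # flood
br.receive("bb:bb", "aa:aa", "if2")                           # learn bb:bb
assert br.receive("aa:aa", "bb:bb", "if1") == ["if2"]         # forward
```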
+
+vRouter Functions
+~~~~~~~~~~~~~~~~~
+
+The vRouter transfers IPv4 packets between vBridges. The vRouter
+supports routing, ARP learning, and ARP aging functions. The following
+outlines the functions.
+
+-  Routing function: When an IP address is registered with a virtual
+   interface of the vRouter, the default routing information for that
+   interface is registered. It is also possible to statically register
+   routing information for a virtual interface.
+
+-  ARP learning function: The vRouter associates a destination IP
+   address, a MAC address and a virtual interface, based on an ARP
+   request to its host or a reply packet for an ARP request, and
+   maintains this information in an ARP table prepared for each routing
+   domain. The registered ARP entry is retained until the aging timer,
+   described later, times out. The vRouter transmits an ARP request on
+   an individual aging timer basis and deletes the associated entry
+   from the ARP table if no reply is returned. For static ARP learning,
+   you can register ARP entry information manually.
+
+-  DHCP relay agent function: The vRouter also provides the DHCP relay
+   agent function.
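
The ARP aging behavior described above can be illustrated with a toy table. This is a sketch, not VTN code; the real vRouter re-sends an ARP request before deleting an entry, which is omitted here, and the field names are assumptions.

```python
# Toy ARP table with aging: entries are refreshed when learned and
# dropped once the aging timer expires with no reply.
class ArpTable:
    def __init__(self, aging_time):
        self.aging_time = aging_time
        self.entries = {}            # IP -> (MAC, interface, last_seen)

    def learn(self, ip, mac, iface, now):
        self.entries[ip] = (mac, iface, now)

    def age(self, now):
        # The vRouter would first re-send an ARP request; here we
        # simply delete entries whose timer has expired.
        expired = [ip for ip, (_, _, t) in self.entries.items()
                   if now - t > self.aging_time]
        for ip in expired:
            del self.entries[ip]

table = ArpTable(aging_time=300)
table.learn("10.0.0.1", "aa:aa", "if1", now=0)
table.age(now=100)
assert "10.0.0.1" in table.entries      # still within the aging time
table.age(now=400)
assert "10.0.0.1" not in table.entries  # timed out, entry deleted
```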
+
+Flow Filter Functions
+~~~~~~~~~~~~~~~~~~~~~
+
+The Flow Filter function is similar to an ACL. It allows or prohibits
+communication for only those packets that meet a particular condition.
+It can also perform a processing called Redirection (WayPoint routing),
+which is different from an existing ACL. A Flow Filter can be applied
+to any interface of a vNode within a VTN, and it is possible to control
+the packets that pass through that interface. The match conditions that
+can be specified in a Flow Filter are as follows. It is also possible
+to specify a combination of multiple conditions.
+
+-  Source MAC address
+
+-  Destination MAC address
+
+-  MAC ether type
+
+-  VLAN Priority
+
+-  Source IP address
+
+-  Destination IP address
+
+-  DSCP
+
+-  IP Protocol
+
+-  TCP/UDP source port
+
+-  TCP/UDP destination port
+
+-  ICMP type
+
+-  ICMP code
+
+The types of Action that can be applied to packets matching the Flow
+Filter conditions are given in the following table. By specifying
+Redirection as the Action, it is possible to pass only those packets
+which match a particular condition through a particular server. For
+example, the path of a flow can be changed for each packet sent from a
+particular terminal, depending upon the destination IP address. VLAN
+priority control and DSCP marking are also supported.
+
++--------------------------------------+--------------------------------------+
+| Action                               | Function                             |
++--------------------------------------+--------------------------------------+
+| Pass                                 | Passes packets matching the          |
+|                                      | specified conditions.                |
++--------------------------------------+--------------------------------------+
+| Drop                                 | Discards packets matching the        |
+|                                      | specified conditions.                |
++--------------------------------------+--------------------------------------+
+| Redirection                          | Redirects the packet to a desired    |
+|                                      | virtual interface. Both Transparent  |
+|                                      | Redirection (not changing MAC        |
+|                                      | address) and Router Redirection      |
+|                                      | (changing MAC address) are           |
+|                                      | supported.                           |
++--------------------------------------+--------------------------------------+
+
+The following figure shows an example of how the flow filter function
+works.
+
+When a packet being transferred within a virtual network goes through a
+virtual interface, the function checks whether the packet matches any
+condition specified by a flow filter on that interface. If the packet
+matches a condition, the function applies the action specified by that
+flow filter. In the example shown in the figure, the function evaluates
+the matching condition at BR1 and discards the packet if it matches the
+condition.
+
+.. figure:: ./images/vtn/VTN_Flow_Filter.jpg
+   :alt: VTN FlowFilter
+
+   VTN FlowFilter
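
The evaluation described above can be sketched as follows. This is a minimal illustration, not the VTN implementation; the condition keys and the interface name IF-FW1 are hypothetical.

```python
# Minimal sketch of flow filter evaluation: conditions are checked in
# order and the first matching filter's action is applied; packets that
# match no filter pass through normally.
def apply_flow_filters(filters, packet):
    for condition, action in filters:
        if all(packet.get(k) == v for k, v in condition.items()):
            return action
    return "pass"  # no filter matched: forward normally

filters = [
    ({"src_ip": "10.0.0.5", "ip_proto": 6}, "drop"),
    ({"dscp": 46}, "redirect:IF-FW1"),   # IF-FW1: hypothetical interface
]

assert apply_flow_filters(filters, {"src_ip": "10.0.0.5", "ip_proto": 6}) == "drop"
assert apply_flow_filters(filters, {"dscp": 46}) == "redirect:IF-FW1"
assert apply_flow_filters(filters, {"src_ip": "10.0.0.9"}) == "pass"
```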
+
+Multiple SDN Controller Coordination
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+With its network abstractions, VTN makes it possible to configure a
+virtual network spanning multiple SDN controllers. This provides a
+highly scalable network system.
+
+A VTN can be created on each SDN controller. If users would like to
+manage multiple such VTNs with one policy, those VTNs can be integrated
+into a single VTN.
+
+As a use case, this feature can be deployed in a multi-data-center
+environment. Even if the data centers are geographically separated and
+controlled by different controllers, a single-policy virtual network
+can be realized with VTN.
+
+Also, one can easily add a new SDN Controller to an existing VTN or
+delete a particular SDN Controller from VTN.
+
+In addition to this, one can define a VTN which covers both OpenFlow
+network and Overlay network at the same time.
+
+Flow Filter, which is set on the VTN, will be automatically applied on
+the newly added SDN Controller.
+
+Coordination between OpenFlow Network and L2/L3 Network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is possible to configure a VTN in an environment where there is a
+mix of L2/L3 switches as well. An L2/L3 switch is shown in the VTN as a
+vBypass. Flow Filters and policing cannot be configured for a vBypass,
+but it can still be treated as a virtual node inside the VTN.
+
+Virtual Tenant Network (VTN) API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+VTN provides Web APIs. They follow the REST architecture and provide
+access to resources within VTN that are identified by URIs. Users can
+perform operations such as GET/PUT/POST/DELETE against the virtual
+network resources (e.g. vBridge or vRouter) by sending a message to VTN
+through HTTPS communication in XML or JSON format.
+
+.. figure:: ./images/vtn/VTN_API.jpg
+   :alt: VTN API
+
+   VTN API
+
+Function Outline
+^^^^^^^^^^^^^^^^
+
+VTN provides the following operations for various network resources.
+
++----------------+----------------+----------------+----------------+----------------+
+| Resources      | GET            | POST           | PUT            | DELETE         |
++----------------+----------------+----------------+----------------+----------------+
+| VTN            | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| vBridge        | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| vRouter        | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| vTep           | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| vTunnel        | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| vBypass        | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| vLink          | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| Interface      | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| Port map       | Yes            | No             | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| Vlan map       | Yes            | Yes            | Yes            | Yes            |
++----------------+----------------+----------------+----------------+----------------+
+| Flowfilter     | Yes            | Yes            | Yes            | Yes            |
+| (ACL/redirect) |                |                |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| Controller     | Yes            | Yes            | Yes            | Yes            |
+| information    |                |                |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| Physical       | Yes            | No             | No             | No             |
+| topology       |                |                |                |                |
+| information    |                |                |                |                |
++----------------+----------------+----------------+----------------+----------------+
+| Alarm          | Yes            | No             | No             | No             |
+| information    |                |                |                |                |
++----------------+----------------+----------------+----------------+----------------+
+
+Example usage
+^^^^^^^^^^^^^
+
+The following is an example of how to construct a virtual network.
+
+-  Create VTN
+
+::
+
+      curl --user admin:adminpass -X POST -H 'content-type: application/json'  \
+      -d '{"vtn":{"vtn_name":"VTN1"}}' http://172.1.0.1:8083/vtn-webapi/vtns.json
+
+-  Create Controller Information
+
+::
+
+      curl --user admin:adminpass -X POST -H 'content-type: application/json'  \
+      -d '{"controller": {"controller_id":"CONTROLLER1","ipaddr":"172.1.0.1","type":"odc","username":"admin", \
+      "password":"admin","version":"1.0"}}' http://172.1.0.1:8083/vtn-webapi/controllers.json
+
+-  Create vBridge under VTN
+
+::
+
+      curl --user admin:adminpass -X POST -H 'content-type: application/json' \
+      -d '{"vbridge":{"vbr_name":"VBR1","controller_id": "CONTROLLER1","domain_id": "(DEFAULT)"}}' \
+      http://172.1.0.1:8083/vtn-webapi/vtns/VTN1/vbridges.json
+
+-  Create the interface under vBridge
+
+::
+
+      curl --user admin:adminpass -X POST -H 'content-type: application/json' \
+      -d '{"interface":{"if_name":"IF1"}}' http://172.1.0.1:8083/vtn-webapi/vtns/VTN1/vbridges/VBR1/interfaces.json
+
+VTN OpenStack Configuration
+---------------------------
+
+This guide describes how to set up OpenStack for integration with the
+OpenDaylight Controller.
+
+While the OpenDaylight Controller provides several ways to integrate
+with OpenStack, this guide focuses on the approach which uses the VTN
+features available in OpenDaylight. In this integration, VTN Manager
+works as the network service provider for OpenStack.
+
+The VTN Manager features enable OpenStack to work in a pure OpenFlow
+environment in which all switches in the data plane are OpenFlow
+switches.
+
+Requirements
+~~~~~~~~~~~~
+
+-  OpenDaylight Controller. (VTN features must be installed)
+
+-  OpenStack Control Node.
+
+-  OpenStack Compute Node.
+
+-  An OpenFlow switch, for example one emulated by Mininet (not mandatory).
+
+The VTN features support multiple OpenStack nodes. You can deploy
+multiple OpenStack Compute Nodes. In the management plane, the
+OpenDaylight Controller, the OpenStack nodes and the OpenFlow switches
+should communicate with each other. In the data plane, the Open
+vSwitches running in the OpenStack nodes should communicate with each
+other through physical or logical OpenFlow switches. The core OpenFlow
+switches are not mandatory; therefore, you can directly connect the
+Open vSwitches to each other.
+
+.. figure:: ./images/vtn/OpenStack_Demo_Picture.png
+   :alt: Openstack Overview
+
+   Openstack Overview
+
+Sample Configuration
+~~~~~~~~~~~~~~~~~~~~
+
+The below steps depict the configuration of a single OpenStack Control
+node and OpenStack Compute node setup. Our test setup is as follows:
+
+.. figure:: ./images/vtn/vtn_devstack_setup.png
+   :alt: LAB Setup
+
+   LAB Setup
+
+**Server Preparation**
+
+-  Install Ubuntu 14.04 LTS in two servers (OpenStack Control node and
+   Compute node respectively)
+
+-  While installing, Ubuntu mandates creation of a user; we created the
+   user "stack" (we will use the same user for running devstack)
+
+-  Proceed with the below mentioned User Settings and Network Settings
+   in both the Control and Compute nodes.
+
+**User Settings for devstack**
+
+-  Log in to both servers.
+
+-  Disable the Ubuntu firewall:
+
+::
+
+    sudo ufw disable
+
+-  Install the below packages (optional; provides the ifconfig and
+   route commands, handy for debugging)
+
+   ::
+
+       sudo apt-get install net-tools
+
+-  Edit /etc/sudoers (e.g., sudo vim /etc/sudoers) and add an entry as follows
+
+   ::
+
+       stack ALL=(ALL) NOPASSWD: ALL
+
+**Network Settings**
+
+-  Check the output of ifconfig -a; two interfaces were listed, eth0
+   and eth1, as indicated in the image above.
+
+-  We connected the eth0 interface to the network where OpenDaylight is
+   reachable.
+
+-  The eth1 interface in both servers was connected to a different
+   network to act as the data plane for the VMs created using
+   OpenStack.
+
+-  Manually edit the file /etc/network/interfaces (e.g., sudo vim
+   /etc/network/interfaces) and make entries as follows
+
+::
+
+     stack@ubuntu-devstack:~/devstack$ cat /etc/network/interfaces
+     # This file describes the network interfaces available on your system
+     # and how to activate them. For more information, see interfaces(5).
+     # The loop-back network interface
+     auto lo
+     iface lo inet loopback
+     # The primary network interface
+     auto eth0
+     iface eth0 inet static
+          address <IP_ADDRESS_TO_REACH_ODL>
+          netmask <NET_MASK>
+          broadcast <BROADCAST_IP_ADDRESS>
+          gateway <GATEWAY_IP_ADDRESS>
+    auto eth1
+    iface eth1 inet static
+         address <IP_ADDRESS_UNIQ>
+         netmask <NETMASK>
+
+.. note::
+
+    Please ensure that the eth0 interface is the default route and that
+    it is able to reach the ODL\_IP\_ADDRESS. The entries for eth1 are
+    not mandatory; if not set, you may have to manually run "ifup eth1"
+    after the stacking is complete to activate the interface.
+
+**Finalize the user and network settings**
+
+-  Please reboot both nodes after the user and network settings so that
+   the network settings are applied.
+
+-  Log in again and check the output of ifconfig to ensure that both
+   interfaces are listed.
+
+OpenDaylight Settings and Execution
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+VTN Configuration for OpenStack Integration:
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  VTN uses the configuration parameters from "90-vtn-neutron.xml" file
+   for the OpenStack integration.
+
+-  These values will be set for the OpenvSwitch, in all the
+   participating OpenStack nodes.
+
+-  A configuration file "90-vtn-neutron.xml" will be generated
+   automatically by following the below steps.
+
+-  Download the latest Beryllium karaf distribution from the below link,
+
+   ::
+
+       http://www.opendaylight.org/software/downloads
+
+-  cd "distribution-karaf-0.4.0-Beryllium" and run karaf by using the
+   following command "./bin/karaf".
+
+-  Install the below feature to generate "90-vtn-neutron.xml"
+
+::
+
+     feature:install odl-vtn-manager-neutron
+
+-  Logout from the karaf console and Check "90-vtn-neutron.xml" file
+   from the following path
+   "distribution-karaf-0.4.0-Beryllium/etc/opendaylight/karaf/".
+
+-  The contents of "90-vtn-neutron.xml" should be as follows:
+
+   ::
+
+       bridgename=br-int
+       portname=eth1
+       protocols=OpenFlow13
+       failmode=secure
+
+-  The values of the configuration parameters must be changed based on
+   the user environment.
+
+-  Especially, "portname" should be carefully configured, because if the
+   value is wrong, OpenDaylight fails to forward packets.
+
+-  The other parameters work fine as-is for general use cases.
+
+   -  bridgename
+
+      -  The name of the bridge in Open vSwitch, that will be created by
+         OpenDaylight Controller.
+
+      -  It must be "br-int".
+
+   -  portname
+
+      -  The name of the port that will be created in the vbridge in
+         Open vSwitch.
+
+      -  This must be the same as the name of the interface of the
+         OpenStack Nodes which is used for interconnecting the
+         OpenStack Nodes in the data plane (in our case, eth1).
+
+      -  By default, if 90-vtn-neutron.xml is not created, VTN uses
+         ens33 as portname.
+
+   -  protocols
+
+      -  OpenFlow protocol through which OpenFlow Switch and Controller
+         communicate.
+
+      -  The values can be OpenFlow13 or OpenFlow10.
+
+   -  failmode
+
+      -  The value can be "standalone" or "secure".
+
+      -  Please use "secure" for general use cases.
+
+Start ODL Controller
+^^^^^^^^^^^^^^^^^^^^
+
+-  Please refer to the Installation Pages to run ODL with VTN Feature
+   enabled.
+
+-  After running the ODL Controller, please ensure the ODL Controller
+   listens on ports 6633, 6653, 6640 and 8080.
+
+-  Please allow these ports in the firewall so that devstack is able to
+   communicate with the ODL Controller.
+
+.. note::
+
+    -  6633/6653 - OpenFlow Ports
+
+    -  6640 - OVS Manager Port
+
+    -  8080 - Port for REST API
+
+Devstack Setup
+~~~~~~~~~~~~~~
+
+Get Devstack (All nodes)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Install the git application:
+
+   ::
+
+       sudo apt-get install git
+
+-  Get devstack:
+
+   ::
+
+       git clone https://git.openstack.org/openstack-dev/devstack
+
+-  Switch to the stable/juno branch:
+
+   ::
+
+       cd devstack
+       git checkout stable/juno
+
+.. note::
+
+    If you want to use the stable/kilo branch, please execute the below
+    command in the devstack folder:
+
+    ::
+
+        git checkout stable/kilo
+
+.. note::
+
+    If you want to use the stable/liberty branch, please execute the
+    below command in the devstack folder:
+
+    ::
+
+        git checkout stable/liberty
+
+Stack Control Node
+^^^^^^^^^^^^^^^^^^
+
+-  Prepare local.conf:
+
+-  cd to the "devstack" directory in the control node
+
+-  Copy the contents of local.conf for juno (devstack control node) from
+   https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack
+   and save it as "local.conf" in the "devstack".
+
+-  Copy the contents of local.conf for kilo and liberty (devstack
+   control node) from
+   https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack_post_juno_versions
+   and save it as "local.conf" in the "devstack".
+
+-  Please modify the IP Address values as required.
+
+-  Stack the node
+
+   ::
+
+       ./stack.sh
+
+Verify Control Node stacking
+''''''''''''''''''''''''''''
+
+-  stack.sh prints out Horizon is now available at
+   `http://<CONTROL\_NODE\_IP\_ADDRESS>:8080/ <http://<CONTROL_NODE_IP_ADDRESS>:8080/>`__
+
+-  Execute the command *sudo ovs-vsctl show* in the control node
+   terminal and verify if the bridge *br-int* is created.
+
+-  Typical output of the ovs-vsctl show is indicated below:
+
+::
+
+    e232bbd5-096b-48a3-a28d-ce4a492d4b4f
+       Manager "tcp:192.168.64.73:6640"
+           is_connected: true
+       Bridge br-int
+           Controller "tcp:192.168.64.73:6633"
+               is_connected: true
+           fail_mode: secure
+           Port "eth1"
+              Interface "eth1"
+       ovs_version: "2.0.2"
+
+Stack Compute Node
+^^^^^^^^^^^^^^^^^^
+
+-  Prepare local.conf:
+
+-  cd to the "devstack" directory in the compute node
+
+-  Copy the contents of local.conf for juno (devstack compute node) from
+   https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack
+   and save it as "local.conf" in the "devstack".
+
+-  Copy the contents of local.conf file for kilo and liberty (devstack
+   compute node) from
+   https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack_post_juno_versions
+   and save it as "local.conf" in the "devstack".
+
+-  Please modify the IP Address values as required.
+
+-  Stack the node
+
+   ::
+
+       ./stack.sh
+
+Verify Compute Node Stacking
+''''''''''''''''''''''''''''
+
+-  stack.sh prints out This is your host ip:
+   <COMPUTE\_NODE\_IP\_ADDRESS>
+
+-  Execute the command *sudo ovs-vsctl show* in the compute node
+   terminal and verify that the bridge *br-int* is created.
+
+-  The output of the ovs-vsctl show will be similar to the one seen in
+   control node.
+
+Additional Verifications
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Please visit the OpenDaylight DLUX GUI after stacking all the nodes,
+   `http://<ODL\_IP\_ADDRESS>:8181/index.html <http://<ODL_IP_ADDRESS>:8181/index.html>`__.
+   The switches, topology and the ports that are currently read can be
+   validated.
+
+::
+
+    http://<controller-ip>:8181/index.html
+
+.. note::
+
+    If the interconnection between the Open vSwitches is not seen,
+    please bring up the interface for the data plane manually using the
+    below command:
+
+::
+
+    ifup <interface_name>
+
+-  Please set Promiscuous mode to Accept on the networks involving the
+   interconnect.
+
+Create VM from Devstack Horizon GUI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Login to
+   `http://<CONTROL\_NODE\_IP>:8080/ <http://<CONTROL_NODE_IP>:8080/>`__
+   to check the horizon GUI.
+
+.. figure:: ./images/vtn/OpenStackGui.png
+   :alt: Horizon GUI
+
+   Horizon GUI
+
+Enter admin for the User Name and labstack for the Password.
+
+-  We should first ensure both the hypervisors (control node and
+   compute node) are mapped under hypervisors by clicking on the
+   Hypervisors tab.
+
+.. figure:: ./images/vtn/Hypervisors.png
+   :alt: Hypervisors
+
+   Hypervisors
+
+-  Create a new Network from Horizon GUI.
+
+-  Click on Networks Tab.
+
+-  click on the Create Network button.
+
+.. figure:: ./images/vtn/Create_Network.png
+   :alt: Create Network
+
+   Create Network
+
+-  A popup screen will appear.
+
+-  Enter network name and click Next button.
+
+.. figure:: ./images/vtn/Creare_Network_Step_1.png
+   :alt: Step 1
+
+   Step 1
+
+-  Create a sub network by giving the Network Address and click the
+   Next button.
+
+.. figure:: ./images/vtn/Create_Network_Step_2.png
+   :alt: Step 2
+
+   Step 2
+
+-  Specify the additional details for subnetwork (please refer the image
+   for your reference).
+
+.. figure:: ./images/vtn/Create_Network_Step_3.png
+   :alt: Step 3
+
+   Step 3
+
+-  Click Create button
+
+-  Create VM Instance
+
+-  Navigate to Instances tab in the GUI.
+
+.. figure:: ./images/vtn/Instance_Creation.png
+   :alt: Instance Creation
+
+   Instance Creation
+
+-  Click on Launch Instances button.
+
+.. figure:: ./images/vtn/Launch_Instance.png
+   :alt: Launch Instance
+
+   Launch Instance
+
+-  Click on the Details tab to enter the VM details. For this demo we
+   are creating ten VMs (instances).
+
+-  In the Networking tab, we must select the network. For this, we need
+   to drag and drop the Available networks to Selected Networks (i.e.,
+   drag vtn1, which we created, from Available networks to Selected
+   Networks) and click Launch to create the instances.
+
+.. figure:: ./images/vtn/Launch_Instance_network.png
+   :alt: Launch Network
+
+   Launch Network
+
+-  Ten VMs will be created.
+
+.. figure:: ./images/vtn/Load_All_Instances.png
+   :alt: Load All Instances
+
+   Load All Instances
+
+-  Click on any VM displayed in the Instances tab and click the Console
+   tab.
+
+.. figure:: ./images/vtn/Instance_Console.png
+   :alt: Instance Console
+
+   Instance Console
+
+-  Login to the VM console and verify with a ping command.
+
+.. figure:: ./images/vtn/Instance_ping.png
+   :alt: Ping
+
+   Ping
+
+Verification of Control and Compute Node after VM creation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Every time a new VM is created, more interfaces are added to the
+   br-int bridge in Open vSwitch.
+
+-  Use *sudo ovs-vsctl show* to list the number of interfaces added.
+
+-  Please visit the DLUX GUI to list the new nodes in every switch.
+
+Getting started with DLUX
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Ensure that you have created a topology and enabled MD-SAL feature in
+the Karaf distribution before you use DLUX for network management.
+
+Logging In
+^^^^^^^^^^
+
+To log in to DLUX after installing the application:
+
+-  Open a browser and enter the login URL. If you have installed DLUX
+   as a stand-alone application, the login URL is
+   http://localhost:9000/DLUX/index.html. If you have deployed DLUX
+   with Karaf, the login URL is
+   ``http://<your IP>:8181/dlux/index.html``.
+
+-  Log in to the application with the user ID and password both set to
+   admin.
+
+.. note::
+
+    admin is the only user type available for DLUX in this release.
+
+Working with DLUX
+^^^^^^^^^^^^^^^^^
+
+To get the complete DLUX feature set, install the restconf and L2
+switch features when you start the DLUX distribution.
+
+.. figure:: ./images/vtn/Dlux_login.png
+   :alt: DLUX\_GUI
+
+   DLUX\_GUI
+
+.. note::
+
+    DLUX enables only those modules whose APIs are responding. If you
+    enable just MD-SAL in the beginning and then start DLUX, only
+    MD-SAL related tabs will be visible. If you enable AD-SAL Karaf
+    features while using the GUI, those tabs will appear automatically.
+
+Viewing Network Statistics
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Nodes module on the left pane enables you to view the network
+statistics and port information for the switches in the network.
+
+To use the Nodes module:
+
+-  Select Nodes on the left pane. The right pane displays a table that
+   lists all the nodes, node connectors, and their statistics.
+
+-  Enter a node ID in the Search Nodes tab to search by node connectors.
+
+-  Click on the Node Connector number to view details such as port ID,
+   port name, number of ports per switch, MAC Address, and so on.
+
+-  Click Flows in the Statistics column to view Flow Table Statistics
+   for the particular node like table ID, packet match, active flows and
+   so on.
+
+-  Click Node Connectors to view Node Connector Statistics for the
+   particular node ID.
+
+Viewing Network Topology
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+To view the network topology:
+
+-  Select Topology on the left pane. The right pane displays a
+   graphical representation of the topology: blue boxes represent
+   switches, black ones represent the available hosts, and lines
+   represent how the switches are connected.
+
+.. note::
+
+    The DLUX UI does not provide the ability to add topology
+    information. The topology should be created using the OpenFlow
+    plugin. The controller stores this information in its database and
+    displays it on the DLUX page when you connect to the controller
+    using OpenFlow.
+
+.. figure:: ./images/vtn/Dlux_topology.png
+   :alt: Topology
+
+   Topology
+
+OpenStack PackStack Installation Steps
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-  See the following wiki page for the OpenStack PackStack
+   installation steps.
+
+   -  https://wiki.opendaylight.org/view/Release/Lithium/VTN/User_Guide/Openstack_Packstack_Support
+
+References
+~~~~~~~~~~
+
+-  http://devstack.org/guides/multinode-lab.html
+
+-  https://wiki.opendaylight.org/view/File:Vtn_demo_hackfest_2014_march.pdf
+
+VTN Manager Usage Examples
+--------------------------
+
+How to provision virtual L2 Network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This page explains how to provision a virtual L2 network using VTN
+Manager. It targets the Beryllium release, so the procedure described
+here does not work in other releases.
+
+.. figure:: ./images/vtn/How_to_provision_virtual_L2_network.png
+   :alt: Virtual L2 network for host1 and host3
+
+   Virtual L2 network for host1 and host3
+
+Requirements
+^^^^^^^^^^^^
+
+Mininet
+'''''''
+
+-  To provision OpenFlow switches, this page uses Mininet. Mininet
+   installation and setup details are available at the following page:
+   https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet
+
+-  Start Mininet, and create three switches (s1, s2, and s3) and four
+   hosts (h1, h2, h3, and h4) in it.
+
+::
+
+     mininet@mininet-vm:~$ sudo mn --controller=remote,ip=192.168.0.100 --topo tree,2
+
+.. note::
+
+    Replace "192.168.0.100" with the IP address of OpenDaylight
+    controller based on your environment.
+
+-  You can check the topology that you have created by executing the
+   "net" command in the Mininet console.
+
+::
+
+     mininet> net
+     h1 h1-eth0:s2-eth1
+     h2 h2-eth0:s2-eth2
+     h3 h3-eth0:s3-eth1
+     h4 h4-eth0:s3-eth2
+     s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
+     s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
+     s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
+
+-  In this guide, you will provision the virtual L2 network to establish
+   communication between h1 and h3.
+
+Configuration
+^^^^^^^^^^^^^
+
+To provision the virtual L2 network for the two hosts (h1 and h3),
+execute the REST APIs provided by VTN Manager as follows. The examples
+use the curl command to call the REST APIs.
+
+-  Create a virtual tenant named vtn1 by executing `the update-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+-  Create a virtual bridge named vbr1 in the tenant vtn1 by executing
+   `the update-vbridge
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1"}}'
+
+-  Create two interfaces into the virtual bridge by executing `the
+   update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1"}}'
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2"}}'
+
+-  Configure two mappings on the created interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface if1 of the virtual bridge will be mapped to the port
+      "s2-eth1" of the switch "openflow:2" of the Mininet.
+
+      -  The h1 is connected to the port "s2-eth1".
+
+   -  The interface if2 of the virtual bridge will be mapped to the port
+      "s3-eth1" of the switch "openflow:3" of the Mininet.
+
+      -  The h3 is connected to the port "s3-eth1".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1", "node":"openflow:2", "port-name":"s2-eth1"}}'
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2", "node":"openflow:3", "port-name":"s3-eth1"}}'
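The six RPC calls above can be wrapped in a small helper. The sketch below is a dry run: it prints each curl command instead of executing it (drop the leading ``echo`` to run them for real). The controller URL and admin/admin credentials are assumptions carried over from the examples.

```shell
#!/bin/sh
# Dry-run sketch of the provisioning sequence above. Each call prints
# the curl command it would execute; remove "echo" to actually run it.
ODL="http://localhost:8181"   # assumed controller URL, as in the examples

vtn_rpc() {
    # $1 = RPC path under restconf/operations, $2 = JSON input payload
    echo curl --user admin:admin -H "Content-type: application/json" \
        -X POST "$ODL/restconf/operations/$1" -d "$2"
}

vtn_rpc vtn:update-vtn '{"input":{"tenant-name":"vtn1"}}'
vtn_rpc vtn-vbridge:update-vbridge '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1"}}'
vtn_rpc vtn-vinterface:update-vinterface '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1"}}'
vtn_rpc vtn-vinterface:update-vinterface '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2"}}'
vtn_rpc vtn-port-map:set-port-map '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1","node":"openflow:2","port-name":"s2-eth1"}}'
vtn_rpc vtn-port-map:set-port-map '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2","node":"openflow:3","port-name":"s3-eth1"}}'
```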
+
+Verification
+^^^^^^^^^^^^
+
+-  Execute ping from h1 to h3 to verify that the virtual L2 network
+   for h1 and h3 has been provisioned successfully.
+
+::
+
+     mininet> h1 ping h3
+     PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+     64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=243 ms
+     64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.341 ms
+     64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.078 ms
+     64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.079 ms
+
+-  You can also verify the configuration by executing the following
+   REST API. It shows all configurations in VTN Manager.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns/
+
+-  The result of the command should look like this.
+
+::
+
+    {
+      "vtns": {
+        "vtn": [
+        {
+          "name": "vtn1",
+            "vtenant-config": {
+              "idle-timeout": 300,
+              "hard-timeout": 0
+            },
+            "vbridge": [
+            {
+              "name": "vbr1",
+              "bridge-status": {
+                "state": "UP",
+                "path-faults": 0
+              },
+              "vbridge-config": {
+                "age-interval": 600
+              },
+              "vinterface": [
+              {
+                "name": "if2",
+                "vinterface-status": {
+                  "entity-state": "UP",
+                  "state": "UP",
+                  "mapped-port": "openflow:3:3"
+                },
+                "vinterface-config": {
+                  "enabled": true
+                },
+                "port-map-config": {
+                  "vlan-id": 0,
+                  "port-name": "s3-eth1",
+                  "node": "openflow:3"
+                }
+              },
+              {
+                "name": "if1",
+                "vinterface-status": {
+                  "entity-state": "UP",
+                  "state": "UP",
+                  "mapped-port": "openflow:2:1"
+                },
+                "vinterface-config": {
+                  "enabled": true
+                },
+                "port-map-config": {
+                  "vlan-id": 0,
+                  "port-name": "s2-eth1",
+                  "node": "openflow:2"
+                }
+              }
+              ]
+            }
+          ]
+        }
+        ]
+      }
+    }
+
+Cleaning Up
+^^^^^^^^^^^
+
+-  You can delete the virtual tenant vtn1 by executing `the remove-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+How To Test Vlan-Map In Mininet Environment
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This page explains how to test vlan-map in a multi-host scenario using
+Mininet. It targets the Beryllium release, so the procedure described
+here does not work in other releases.
+
+.. figure:: ./images/vtn/vlanmap_using_mininet.png
+   :alt: Example that demonstrates vlanmap testing in Mininet Environment
+
+   Example that demonstrates vlanmap testing in Mininet Environment
+
+Requirements
+^^^^^^^^^^^^
+
+Save the Mininet script given below as vlan\_vtn\_test.py and run it
+in the environment where Mininet is installed.
+
+Mininet Script
+^^^^^^^^^^^^^^
+
+https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet#Network_with_hosts_in_different_vlan
+
+-  Run the mininet script
+
+::
+
+    sudo mn --controller=remote,ip=192.168.64.13 --custom vlan_vtn_test.py --topo mytopo
+
+.. note::
+
+    Replace "192.168.64.13" with the IP address of OpenDaylight
+    controller based on your environment.
+
+-  You can check the topology that you have created by executing the
+   "net" command in the Mininet console.
+
+::
+
+     mininet> net
+     h1 h1-eth0.200:s1-eth1
+     h2 h2-eth0.300:s2-eth2
+     h3 h3-eth0.200:s2-eth3
+     h4 h4-eth0.300:s2-eth4
+     h5 h5-eth0.200:s3-eth2
+     h6 h6-eth0.300:s3-eth3
+     s1 lo:  s1-eth1:h1-eth0.200 s1-eth2:s2-eth1 s1-eth3:s3-eth1
+     s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0.300 s2-eth3:h3-eth0.200 s2-eth4:h4-eth0.300
+     s3 lo:  s3-eth1:s1-eth3 s3-eth2:h5-eth0.200 s3-eth3:h6-eth0.300
+     c0
+
+Configuration
+^^^^^^^^^^^^^
+
+To test vlan-map, execute the REST APIs provided by VTN Manager as
+follows.
+
+-  Create a virtual tenant named vtn1 by executing `the update-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+-  Create a virtual bridge named vbr1 in the tenant vtn1 by executing
+   `the update-vbridge
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1"}}'
+
+-  Configure a vlan map with vlanid 200 for vBridge vbr1 by executing
+   `the add-vlan-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vlan-map.html#add-vlan-map>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vlan-map:add-vlan-map -d '{"input":{"vlan-id":200,"tenant-name":"vtn1","bridge-name":"vbr1"}}'
+
+-  Create a virtual bridge named vbr2 in the tenant vtn1 by executing
+   `the update-vbridge
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr2"}}'
+
+-  Configure a vlan map with vlanid 300 for vBridge vbr2 by executing
+   `the add-vlan-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vlan-map.html#add-vlan-map>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vlan-map:add-vlan-map -d '{"input":{"vlan-id":300,"tenant-name":"vtn1","bridge-name":"vbr2"}}'
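Since vbr1/VLAN 200 and vbr2/VLAN 300 are configured identically apart from their names and VLAN IDs, the four calls above can be driven from a loop. This is a dry-run sketch (it prints the curl commands rather than executing them); the controller URL and credentials are assumed as in the examples.

```shell
#!/bin/sh
# Dry-run sketch: one vBridge plus one VLAN mapping per "bridge:vlan"
# pair. Prints the curl commands instead of executing them.
ODL="http://localhost:8181"

provision_vlan_bridges() {
    for pair in vbr1:200 vbr2:300; do
        bridge=${pair%:*}   # part before the colon
        vlan=${pair#*:}     # part after the colon
        echo curl --user admin:admin -H "Content-type: application/json" \
            -X POST "$ODL/restconf/operations/vtn-vbridge:update-vbridge" \
            -d "{\"input\":{\"tenant-name\":\"vtn1\",\"bridge-name\":\"$bridge\"}}"
        echo curl --user admin:admin -H "Content-type: application/json" \
            -X POST "$ODL/restconf/operations/vtn-vlan-map:add-vlan-map" \
            -d "{\"input\":{\"vlan-id\":$vlan,\"tenant-name\":\"vtn1\",\"bridge-name\":\"$bridge\"}}"
    done
}

provision_vlan_bridges
```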
+
+Verification
+^^^^^^^^^^^^
+
+-  Execute pingall in the Mininet environment to check host
+   reachability. Hosts on the same VLAN (h1, h3, and h5 on VLAN 200;
+   h2, h4, and h6 on VLAN 300) can reach each other, while hosts on
+   different VLANs cannot.
+
+::
+
+     mininet> pingall
+     Ping: testing ping reachability
+     h1 -> X h3 X h5 X
+     h2 -> X X h4 X h6
+     h3 -> h1 X X h5 X
+     h4 -> X h2 X X h6
+     h5 -> h1 X h3 X X
+     h6 -> X h2 X h4 X
+
+-  You can also verify the configuration by executing the following REST
+   API. It shows all configurations in VTN Manager.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns
+
+-  The result of the command should look like this.
+
+::
+
+    {
+      "vtns": {
+        "vtn": [
+        {
+          "name": "vtn1",
+            "vtenant-config": {
+              "hard-timeout": 0,
+              "idle-timeout": 300,
+              "description": "creating vtn"
+            },
+            "vbridge": [
+            {
+              "name": "vbr2",
+              "vbridge-config": {
+                "age-interval": 600,
+                "description": "creating vbr2"
+              },
+              "bridge-status": {
+                "state": "UP",
+                "path-faults": 0
+              },
+              "vlan-map": [
+              {
+                "map-id": "ANY.300",
+                "vlan-map-config": {
+                  "vlan-id": 300
+                },
+                "vlan-map-status": {
+                  "active": true
+                }
+              }
+              ]
+            },
+            {
+              "name": "vbr1",
+              "vbridge-config": {
+                "age-interval": 600,
+                "description": "creating vbr1"
+              },
+              "bridge-status": {
+                "state": "UP",
+                "path-faults": 0
+              },
+              "vlan-map": [
+              {
+                "map-id": "ANY.200",
+                "vlan-map-config": {
+                  "vlan-id": 200
+                },
+                "vlan-map-status": {
+                  "active": true
+                }
+              }
+              ]
+            }
+          ]
+        }
+        ]
+      }
+    }
+
+Cleaning Up
+^^^^^^^^^^^
+
+-  You can delete the virtual tenant vtn1 by executing `the remove-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+How To Configure Service Function Chaining using VTN Manager
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This page explains how to configure VTN Manager for service function
+chaining. It targets the Beryllium release, so the procedure described
+here does not work in other releases.
+
+.. figure:: ./images/vtn/Service_Chaining_With_One_Service.png
+   :alt: Service Chaining With One Service
+
+   Service Chaining With One Service
+
+Requirements
+^^^^^^^^^^^^
+
+-  Please refer to the `Installation
+   Pages <https://wiki.opendaylight.org/view/VTN:Beryllium:Installation_Guide>`__
+   to run ODL with VTN Feature enabled.
+
+-  Ensure the bridge-utils package is installed in the Mininet
+   environment before running the Mininet script.
+
+-  To install the bridge-utils package, run *sudo apt-get install
+   bridge-utils* (assuming Ubuntu is used to run Mininet; otherwise
+   this is not required).
+
+-  Save the Mininet script given below as topo\_handson.py and run it
+   in the environment where Mininet is installed.
+
+Mininet Script
+^^^^^^^^^^^^^^
+
+-  `Script for emulating network with multiple
+   hosts <https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet>`__.
+
+-  Before executing the Mininet script, confirm that the controller is
+   up and running.
+
+-  Run the mininet script.
+
+-  Replace <path> and <Controller IP> based on your environment.
+
+::
+
+    sudo mn --controller=remote,ip=<Controller IP> --custom <path>/topo_handson.py --topo mytopo2
+
+::
+
+     mininet> net
+     h11 h11-eth0:s1-eth1
+     h12 h12-eth0:s1-eth2
+     h21 h21-eth0:s2-eth1
+     h22 h22-eth0:s2-eth2
+     h23 h23-eth0:s2-eth3
+     srvc1 srvc1-eth0:s3-eth3 srvc1-eth1:s4-eth3
+     srvc2 srvc2-eth0:s3-eth4 srvc2-eth1:s4-eth4
+     s1 lo:  s1-eth1:h11-eth0 s1-eth2:h12-eth0 s1-eth3:s2-eth4 s1-eth4:s3-eth2
+     s2 lo:  s2-eth1:h21-eth0 s2-eth2:h22-eth0 s2-eth3:h23-eth0 s2-eth4:s1-eth3 s2-eth5:s4-eth1
+     s3 lo:  s3-eth1:s4-eth2 s3-eth2:s1-eth4 s3-eth3:srvc1-eth0 s3-eth4:srvc2-eth0
+     s4 lo:  s4-eth1:s2-eth5 s4-eth2:s3-eth1 s4-eth3:srvc1-eth1 s4-eth4:srvc2-eth1
+
+Configurations
+^^^^^^^^^^^^^^
+
+Mininet
+'''''''
+
+-  Follow the steps below to configure the network in Mininet as shown
+   in the image below:
+
+.. figure:: ./images/vtn/Mininet_Configuration.png
+   :alt: Mininet Configuration
+
+   Mininet Configuration
+
+Configure service nodes
+'''''''''''''''''''''''
+
+-  Execute the following commands in the Mininet console where the
+   Mininet script was executed.
+
+::
+
+     mininet> srvc1 ip addr del 10.0.0.6/8 dev srvc1-eth0
+     mininet> srvc1 brctl addbr br0
+     mininet> srvc1 brctl addif br0 srvc1-eth0
+     mininet> srvc1 brctl addif br0 srvc1-eth1
+     mininet> srvc1 ifconfig br0 up
+     mininet> srvc1 tc qdisc add dev srvc1-eth1 root netem delay 200ms
+     mininet> srvc2 ip addr del 10.0.0.7/8 dev srvc2-eth0
+     mininet> srvc2 brctl addbr br0
+     mininet> srvc2 brctl addif br0 srvc2-eth0
+     mininet> srvc2 brctl addif br0 srvc2-eth1
+     mininet> srvc2 ifconfig br0 up
+     mininet> srvc2 tc qdisc add dev srvc2-eth1 root netem delay 300ms
+
+Controller
+^^^^^^^^^^
+
+Multi-Tenancy
+'''''''''''''
+
+-  Execute the commands below to configure the network topology in the
+   controller as shown in the image below:
+
+.. figure:: ./images/vtn/Tenant2.png
+   :alt: Tenant2
+
+   Tenant2
+
+Execute the following commands in the controller
+''''''''''''''''''''''''''''''''''''''''''''''''
+
+.. note::
+
+    The following commands work around a difference in the behavior of
+    VTN Manager with the Beryllium topology. The link below has the
+    details for this bug:
+    https://bugs.opendaylight.org/show_bug.cgi?id=3818.
+
+::
+
+    curl --user admin:admin -H 'content-type: application/json' -H 'ipaddr:127.0.0.1' -X PUT http://localhost:8181/restconf/config/vtn-static-topology:vtn-static-topology/static-edge-ports -d '{"static-edge-ports": {"static-edge-port": [ {"port": "openflow:3:3"}, {"port": "openflow:3:4"}, {"port": "openflow:4:3"}, {"port": "openflow:4:4"}]}}'
+
+-  Create a virtual tenant named vtn1 by executing `the update-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1","update-mode":"CREATE","operation":"SET","description":"creating vtn","idle-timeout":300,"hard-timeout":0}}'
+
+-  Create a virtual bridge named vbr1 in the tenant vtn1 by executing
+   `the update-vbridge
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"creating vbr","tenant-name":"vtn1","bridge-name":"vbr1"}}'
+
+-  Create interface if1 into the virtual bridge vbr1 by executing `the
+   update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vbrif1 interface","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1"}}'
+
+-  Configure port mapping on the interface by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface if1 of the virtual bridge will be mapped to the port
+      "s1-eth2" of the switch "openflow:1" of the Mininet.
+
+      -  The h12 is connected to the port "s1-eth2".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"vlan-id":0,"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1","node":"openflow:1","port-name":"s1-eth2"}}'
+
+-  Create interface if2 into the virtual bridge vbr1 by executing `the
+   update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vbrif2 interface","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2"}}'
+
+-  Configure port mapping on the interface by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface if2 of the virtual bridge will be mapped to the port
+      "s2-eth2" of the switch "openflow:2" of the Mininet.
+
+      -  The h22 is connected to the port "s2-eth2".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"vlan-id":0,"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2","node":"openflow:2","port-name":"s2-eth2"}}'
+
+-  Create interface if3 into the virtual bridge vbr1 by executing `the
+   update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vbrif3 interface","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if3"}}'
+
+-  Configure port mapping on the interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface if3 of the virtual bridge will be mapped to the port
+      "s2-eth3" of the switch "openflow:2" of the Mininet.
+
+      -  The h23 is connected to the port "s2-eth3".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"vlan-id":0,"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if3","node":"openflow:2","port-name":"s2-eth3"}}'
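The three interface/port-map pairs above differ only in interface name, switch, and port, so they can be generated from a small table. Below is a dry-run sketch that prints the curl commands instead of executing them; the controller URL and credentials are assumed as in the examples.

```shell
#!/bin/sh
# Dry-run sketch of the vbr1 interface + port-map sequence above.
# Each input line is "interface node port"; curl commands are printed,
# not executed.
ODL="http://localhost:8181"

provision_vbr1_interfaces() {
    while read -r ifname node port; do
        echo curl --user admin:admin -H "Content-type: application/json" \
            -X POST "$ODL/restconf/operations/vtn-vinterface:update-vinterface" \
            -d "{\"input\":{\"tenant-name\":\"vtn1\",\"bridge-name\":\"vbr1\",\"interface-name\":\"$ifname\"}}"
        echo curl --user admin:admin -H "Content-type: application/json" \
            -X POST "$ODL/restconf/operations/vtn-port-map:set-port-map" \
            -d "{\"input\":{\"vlan-id\":0,\"tenant-name\":\"vtn1\",\"bridge-name\":\"vbr1\",\"interface-name\":\"$ifname\",\"node\":\"$node\",\"port-name\":\"$port\"}}"
    done <<EOF
if1 openflow:1 s1-eth2
if2 openflow:2 s2-eth2
if3 openflow:2 s2-eth3
EOF
}

provision_vbr1_interfaces
```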
+
+Traffic filtering
+^^^^^^^^^^^^^^^^^
+
+-  Create flowcondition named cond\_1 by executing `the
+   set-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition>`__.
+
+   -  For the source-network and destination-network options, get the
+      inet addresses of hosts h12 (source) and h22 (destination) from
+      Mininet.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1","vtn-flow-match":[{"index":1,"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.2/32","destination-network":"10.0.0.4/32"}}]}}'
+
+-  Flow filter demonstration with the DROP action type. Create a flow
+   filter on the vBridge interface if1 by executing `the set-flow-filter
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","index":10,"vtn-drop-filter":{}}]}}'
+
+Service Chaining
+^^^^^^^^^^^^^^^^
+
+With One Service
+''''''''''''''''
+
+-  Execute the commands below to configure a network topology that
+   sends specific traffic via a single service (external device) in
+   the controller, as in the image below:
+
+.. figure:: ./images/vtn/Service_Chaining_With_One_Service_LLD.png
+   :alt: Service Chaining With One Service LLD
+
+   Service Chaining With One Service LLD
+
+-  Create a virtual terminal named vt\_srvc1\_1 in the tenant vtn1 by
+   executing `the update-vterminal
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc1_1","description":"Creating vterminal"}}'
+
+-  Create interface IF into the virtual terminal vt\_srvc1\_1 by
+   executing `the update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc1_1","interface-name":"IF"}}'
+
+-  Configure port mapping on the interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface IF of the virtual terminal will be mapped to the
+      port "s3-eth3" of the switch "openflow:3" of the Mininet.
+
+      -  The service node srvc1 (srvc1-eth0) is connected to the port
+         "s3-eth3".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc1_1","interface-name":"IF","node":"openflow:3","port-name":"s3-eth3"}}'
+
+-  Create a virtual terminal named vt\_srvc1\_2 in the tenant vtn1 by
+   executing `the update-vterminal
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","description":"Creating vterminal"}}'
+
+-  Create interface IF into the virtual terminal vt\_srvc1\_2 by
+   executing `the update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF"}}'
+
+-  Configure port mapping on the interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface IF of the virtual terminal will be mapped to the
+      port "s4-eth3" of the switch "openflow:4" in the Mininet topology.
+
+      -  The host h22 is connected to the port "s4-eth3".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF","node":"openflow:4","port-name":"s4-eth3"}}'
+
+-  Create a flow condition named cond\_1 by executing `the
+   set-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition>`__.
+
+   -  For the source-network and destination-network options, use the
+      inet addresses of host h12 (source) and host h22 (destination),
+      obtained from Mininet.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1","vtn-flow-match":[{"index":1,"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.2/32","destination-network":"10.0.0.4/32"}}]}}'
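The RPC payload above can also be assembled programmatically. A minimal Python sketch, purely illustrative (the `flow_condition_payload` helper is not a VTN API; the addresses are those of h12 and h22 in this topology):

```python
import json

def flow_condition_payload(name, src_cidr, dst_cidr):
    """Build the input body for the vtn-flow-condition:set-flow-condition RPC.

    String booleans ("false") mirror the payload style used in this guide.
    """
    return {"input": {
        "operation": "SET",
        "present": "false",
        "name": name,
        "vtn-flow-match": [{
            "index": 1,
            "vtn-ether-match": {},
            "vtn-inet-match": {
                "source-network": src_cidr,       # inet address of h12 (source)
                "destination-network": dst_cidr,  # inet address of h22 (destination)
            },
        }],
    }}

# Host addresses taken from the Mininet topology used in this demonstration.
body = flow_condition_payload("cond_1", "10.0.0.2/32", "10.0.0.4/32")
print(json.dumps(body))
```

The resulting JSON string can be passed to curl via `-d` exactly as in the command above.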
+
+-  Create a flow condition named cond\_any by executing `the
+   set-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_any","vtn-flow-match":[{"index":1}]}}'
+
+-  Flow filter demonstration with the redirect action type. Create a
+   flow filter on interface IF of the virtual terminal vt\_srvc1\_2 by
+   executing `the set-flow-filter
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter>`__.
+
+   -  The flow filter redirects traffic from vt\_srvc1\_2 to the vbr1
+      interface if2.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF","vtn-flow-filter":[{"condition":"cond_any","index":10,"vtn-redirect-filter":{"redirect-destination":{"bridge-name":"vbr1","interface-name":"if2"},"output":"true"}}]}}'
+
+-  Flow filter demonstration with the redirect action type. Create a
+   flow filter on the vBridge vbr1 interface if1 by executing `the
+   set-flow-filter
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter>`__.
+
+   -  The flow filter redirects traffic from vbr1 interface if1 to
+      vt\_srvc1\_1.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","index":10,"vtn-redirect-filter":{"redirect-destination":{"terminal-name":"vt_srvc1_1","interface-name":"IF"},"output":"true"}}]}}'
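The two redirect filters above differ only in their redirect-destination: one targets a vBridge interface, the other a vTerminal interface. A hedged sketch of that payload shape (the `redirect_filter` helper is illustrative, not a VTN API):

```python
def redirect_filter(condition, index, destination):
    """Build one vtn-flow-filter entry with a redirect action.

    `destination` names either a vBridge interface
    ({"bridge-name": ..., "interface-name": ...}) or a vTerminal interface
    ({"terminal-name": ..., "interface-name": ...}).
    """
    return {
        "condition": condition,
        "index": index,
        "vtn-redirect-filter": {
            "redirect-destination": destination,
            "output": "true",  # string boolean, as in this guide's payloads
        },
    }

# Redirect traffic arriving on vbr1/if1 to the service terminal vt_srvc1_1.
to_terminal = redirect_filter(
    "cond_1", 10, {"terminal-name": "vt_srvc1_1", "interface-name": "IF"})
# Redirect traffic arriving on vt_srvc1_2/IF back to vbr1/if2.
to_bridge = redirect_filter(
    "cond_any", 10, {"bridge-name": "vbr1", "interface-name": "if2"})
print(to_terminal["vtn-redirect-filter"]["redirect-destination"])
```

In the RPC input these entries go under "vtn-flow-filter", together with the tenant name and the virtual node that owns the filtered interface.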
+
+Verification
+^^^^^^^^^^^^
+
+.. figure:: ./images/vtn/Service_Chaining_With_One_Service_Verification.png
+   :alt: Service Chaining With One Service
+
+   Service Chaining With One Service
+
+-  Ping host h22 from host h12 to verify host reachability; a delay of
+   about 200 ms is observed in reaching host h22, as shown below.
+
+::
+
+     mininet> h12 ping h22
+     PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
+     64 bytes from 10.0.0.4: icmp_seq=35 ttl=64 time=209 ms
+     64 bytes from 10.0.0.4: icmp_seq=36 ttl=64 time=201 ms
+     64 bytes from 10.0.0.4: icmp_seq=37 ttl=64 time=200 ms
+     64 bytes from 10.0.0.4: icmp_seq=38 ttl=64 time=200 ms
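The ~200 ms round-trip times above can also be checked mechanically. An illustrative Python sketch, with the sample ping output embedded (the parsing helper is hypothetical, not part of VTN or Mininet):

```python
import re

# Sample ping output copied from the verification run above.
sample = """\
64 bytes from 10.0.0.4: icmp_seq=35 ttl=64 time=209 ms
64 bytes from 10.0.0.4: icmp_seq=36 ttl=64 time=201 ms
64 bytes from 10.0.0.4: icmp_seq=37 ttl=64 time=200 ms
64 bytes from 10.0.0.4: icmp_seq=38 ttl=64 time=200 ms
"""

# Extract the round-trip times and confirm they cluster around the 200 ms
# delay introduced by redirecting traffic through the service.
rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", sample)]
assert all(190 <= t <= 220 for t in rtts)
print(sum(rtts) / len(rtts))  # average RTT in ms
```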
+
+With two services
+'''''''''''''''''
+
+-  Execute the commands below to configure a network topology that
+   sends specific traffic via two services (external devices), as shown
+   in the image below.
+
+.. figure:: ./images/vtn/Service_Chaining_With_Two_Services_LLD.png
+   :alt: Service Chaining With Two Services LLD
+
+   Service Chaining With Two Services LLD
+
+-  Create a virtual terminal named vt\_srvc2\_1 in the tenant vtn1 by
+   executing `the update-vterminal
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc2_1","description":"Creating vterminal"}}'
+
+-  Create an interface IF in the virtual terminal vt\_srvc2\_1 by
+   executing `the update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc2_1","interface-name":"IF"}}'
+
+-  Configure port mapping on the interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface IF of the virtual terminal will be mapped to the
+      port "s3-eth4" of the switch "openflow:3" in the Mininet topology.
+
+      -  The host h12 is connected to the port "s3-eth4".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc2_1","interface-name":"IF","node":"openflow:3","port-name":"s3-eth4"}}'
+
+-  Create a virtual terminal named vt\_srvc2\_2 in the tenant vtn1 by
+   executing `the update-vterminal
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc2_2","description":"Creating vterminal"}}'
+
+-  Create an interface IF in the virtual terminal vt\_srvc2\_2 by
+   executing `the update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc2_2","interface-name":"IF"}}'
+
+-  Configure port mapping on the interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface IF of the virtual terminal will be mapped to the
+      port "s4-eth4" of the switch "openflow:4" in the Mininet topology.
+
+      -  The host h22 is connected to the port "s4-eth4".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc2_2","interface-name":"IF","node":"openflow:4","port-name":"s4-eth4"}}'
+
+-  Flow filter demonstration with the redirect action type. Create a
+   flow filter on interface IF of the virtual terminal vt\_srvc2\_2 by
+   executing `the set-flow-filter
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter>`__.
+
+   -  The flow filter redirects traffic from vt\_srvc2\_2 to the vbr1
+      interface if2.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","terminal-name":"vt_srvc2_2","interface-name":"IF","vtn-flow-filter":[{"condition":"cond_any","index":10,"vtn-redirect-filter":{"redirect-destination":{"bridge-name":"vbr1","interface-name":"if2"},"output":"true"}}]}}'
+
+-  Flow filter demonstration with the redirect action type. Create a
+   flow filter on interface IF of the virtual terminal vt\_srvc1\_2 by
+   executing `the set-flow-filter
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter>`__.
+
+   -  The flow filter redirects traffic from vt\_srvc1\_2 to
+      vt\_srvc2\_1.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF","vtn-flow-filter":[{"condition":"cond_any","index":10,"vtn-redirect-filter":{"redirect-destination":{"terminal-name":"vt_srvc2_1","interface-name":"IF"},"output":"true"}}]}}'
+
+Verification
+^^^^^^^^^^^^
+
+.. figure:: ./images/vtn/Service_Chaining_With_Two_Services.png
+   :alt: Service Chaining With Two Services
+
+   Service Chaining With Two Services
+
+-  Ping host h22 from host h12 to verify host reachability; a delay of
+   about 500 ms is observed in reaching host h22, as shown below.
+
+::
+
+     mininet> h12 ping h22
+     PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
+     64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=512 ms
+     64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=501 ms
+     64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=500 ms
+     64 bytes from 10.0.0.4: icmp_seq=4 ttl=64 time=500 ms
+
+-  You can verify the configuration by executing the following REST
+   API, which shows all of the configuration in VTN Manager.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns
+
+::
+
+    {
+      "vtn": [
+      {
+        "name": "vtn1",
+          "vtenant-config": {
+            "hard-timeout": 0,
+            "idle-timeout": 300,
+            "description": "creating vtn"
+          },
+          "vbridge": [
+          {
+            "name": "vbr1",
+            "vbridge-config": {
+              "age-interval": 600,
+              "description": "creating vbr"
+            },
+            "bridge-status": {
+              "state": "UP",
+              "path-faults": 0
+            },
+            "vinterface": [
+            {
+              "name": "if1",
+              "vinterface-status": {
+                "mapped-port": "openflow:1:2",
+                "state": "UP",
+                "entity-state": "UP"
+              },
+              "port-map-config": {
+                "vlan-id": 0,
+                "node": "openflow:1",
+                "port-name": "s1-eth2"
+              },
+              "vinterface-config": {
+                "description": "Creating vbrif1 interface",
+                "enabled": true
+              },
+              "vinterface-input-filter": {
+                "vtn-flow-filter": [
+                {
+                  "index": 10,
+                  "condition": "cond_1",
+                  "vtn-redirect-filter": {
+                    "output": true,
+                    "redirect-destination": {
+                      "terminal-name": "vt_srvc1_1",
+                      "interface-name": "IF"
+                    }
+                  }
+                }
+                ]
+              }
+            },
+            {
+              "name": "if2",
+              "vinterface-status": {
+                "mapped-port": "openflow:2:2",
+                "state": "UP",
+                "entity-state": "UP"
+              },
+              "port-map-config": {
+                "vlan-id": 0,
+                "node": "openflow:2",
+                "port-name": "s2-eth2"
+              },
+              "vinterface-config": {
+                "description": "Creating vbrif2 interface",
+                "enabled": true
+              }
+            },
+            {
+              "name": "if3",
+              "vinterface-status": {
+                "mapped-port": "openflow:2:3",
+                "state": "UP",
+                "entity-state": "UP"
+              },
+              "port-map-config": {
+                "vlan-id": 0,
+                "node": "openflow:2",
+                "port-name": "s2-eth3"
+              },
+              "vinterface-config": {
+                "description": "Creating vbrif3 interface",
+                "enabled": true
+              }
+            }
+            ]
+          }
+        ],
+          "vterminal": [
+          {
+            "name": "vt_srvc2_2",
+            "bridge-status": {
+              "state": "UP",
+              "path-faults": 0
+            },
+            "vinterface": [
+            {
+              "name": "IF",
+              "vinterface-status": {
+                "mapped-port": "openflow:4:4",
+                "state": "UP",
+                "entity-state": "UP"
+              },
+              "port-map-config": {
+                "vlan-id": 0,
+                "node": "openflow:4",
+                "port-name": "s4-eth4"
+              },
+              "vinterface-config": {
+                "description": "Creating vterminal IF",
+                "enabled": true
+              },
+              "vinterface-input-filter": {
+                "vtn-flow-filter": [
+                {
+                  "index": 10,
+                  "condition": "cond_any",
+                  "vtn-redirect-filter": {
+                    "output": true,
+                    "redirect-destination": {
+                      "bridge-name": "vbr1",
+                      "interface-name": "if2"
+                    }
+                  }
+                }
+                ]
+              }
+            }
+            ],
+              "vterminal-config": {
+                "description": "Creating vterminal"
+              }
+          },
+          {
+            "name": "vt_srvc1_1",
+            "bridge-status": {
+              "state": "UP",
+              "path-faults": 0
+            },
+            "vinterface": [
+            {
+              "name": "IF",
+              "vinterface-status": {
+                "mapped-port": "openflow:3:3",
+                "state": "UP",
+                "entity-state": "UP"
+              },
+              "port-map-config": {
+                "vlan-id": 0,
+                "node": "openflow:3",
+                "port-name": "s3-eth3"
+              },
+              "vinterface-config": {
+                "description": "Creating vterminal IF",
+                "enabled": true
+              }
+            }
+            ],
+              "vterminal-config": {
+                "description": "Creating vterminal"
+              }
+          },
+          {
+            "name": "vt_srvc1_2",
+            "bridge-status": {
+              "state": "UP",
+              "path-faults": 0
+            },
+            "vinterface": [
+            {
+              "name": "IF",
+              "vinterface-status": {
+                "mapped-port": "openflow:4:3",
+                "state": "UP",
+                "entity-state": "UP"
+              },
+              "port-map-config": {
+                "vlan-id": 0,
+                "node": "openflow:4",
+                "port-name": "s4-eth3"
+              },
+              "vinterface-config": {
+                "description": "Creating vterminal IF",
+                "enabled": true
+              },
+              "vinterface-input-filter": {
+                "vtn-flow-filter": [
+                {
+                  "index": 10,
+                  "condition": "cond_any",
+                  "vtn-redirect-filter": {
+                    "output": true,
+                    "redirect-destination": {
+                      "terminal-name": "vt_srvc2_1",
+                      "interface-name": "IF"
+                    }
+                  }
+                }
+                ]
+              }
+            }
+            ],
+              "vterminal-config": {
+                "description": "Creating vterminal"
+              }
+          },
+          {
+            "name": "vt_srvc2_1",
+            "bridge-status": {
+              "state": "UP",
+              "path-faults": 0
+            },
+            "vinterface": [
+            {
+              "name": "IF",
+              "vinterface-status": {
+                "mapped-port": "openflow:3:4",
+                "state": "UP",
+                "entity-state": "UP"
+              },
+              "port-map-config": {
+                "vlan-id": 0,
+                "node": "openflow:3",
+                "port-name": "s3-eth4"
+              },
+              "vinterface-config": {
+                "description": "Creating vterminal IF",
+                "enabled": true
+              }
+            }
+            ],
+              "vterminal-config": {
+                "description": "Creating vterminal"
+              }
+          }
+        ]
+      }
+      ]
+    }
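The operational datastore response above can be post-processed to summarize the virtual topology. An illustrative Python sketch (a trimmed copy of the response is embedded; the traversal code is not a VTN API):

```python
import json

# A trimmed copy of the GET /restconf/operational/vtn:vtns response above,
# keeping only two vTerminals for brevity.
response = json.loads("""
{"vtn": [{"name": "vtn1",
          "vterminal": [
            {"name": "vt_srvc1_1",
             "vinterface": [{"name": "IF",
                             "vinterface-status": {"mapped-port": "openflow:3:3",
                                                   "state": "UP"}}]},
            {"name": "vt_srvc1_2",
             "vinterface": [{"name": "IF",
                             "vinterface-status": {"mapped-port": "openflow:4:3",
                                                   "state": "UP"}}]}]}]}
""")

# List each vTerminal interface with the physical port it is mapped to.
for vtn in response["vtn"]:
    for vt in vtn.get("vterminal", []):
        for vif in vt.get("vinterface", []):
            status = vif["vinterface-status"]
            print(vt["name"], vif["name"], status["mapped-port"], status["state"])
```

The same loop applied to the full response also covers the vBridge interfaces under the "vbridge" list.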
+
+Cleaning Up
+^^^^^^^^^^^
+
+-  The following commands clean up both the VTN and the flow
+   conditions.
+
+-  You can delete the virtual tenant vtn1 by executing `the remove-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+-  You can delete the flow conditions cond\_1 and cond\_any by
+   executing `the remove-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#remove-flow-condition>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_1"}}'
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_any"}}'
+
+How To View Dataflows
+~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This page explains how to view data flows using VTN Manager. It targets
+the Beryllium release, so the procedure described here does not work in
+other releases.
+
+The dataflow feature enables retrieval and display of data flows in the
+OpenFlow network. Data flows can be retrieved based on an OpenFlow
+switch, a switch port, or an L2 source host.
+
+The flow information provided by this feature includes:
+
+-  The location of the virtual nodes that map the incoming and outgoing
+   packets.
+
+-  The location of the physical switch ports where incoming and
+   outgoing packets are received and sent.
+
+-  A sequence of physical route entries representing the packet route
+   in the physical network.
+
+Configuration
+^^^^^^^^^^^^^
+
+-  To view dataflow information, first configure VLAN mapping as
+   described at
+   https://wiki.opendaylight.org/view/VTN:Mananger:How_to_test_Vlan-map_using_mininet.
+
+Verification
+^^^^^^^^^^^^
+
+After creating the VLAN mapping configuration from the above page,
+execute the following in Mininet to get the switch details.
+
+::
+
+     mininet> net
+     h1 h1-eth0.200:s1-eth1
+     h2 h2-eth0.300:s2-eth2
+     h3 h3-eth0.200:s2-eth3
+     h4 h4-eth0.300:s2-eth4
+     h5 h5-eth0.200:s3-eth2
+     h6 h6-eth0.300:s3-eth3
+     s1 lo:  s1-eth1:h1-eth0.200 s1-eth2:s2-eth1 s1-eth3:s3-eth1
+     s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0.300 s2-eth3:h3-eth0.200 s2-eth4:h4-eth0.300
+     s3 lo:  s3-eth1:s1-eth3 s3-eth2:h5-eth0.200 s3-eth3:h6-eth0.300
+     c0
+     mininet>
+
+Execute a ping from h1 to h3 to check host reachability.
+
+::
+
+     mininet> h1 ping h3
+     PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+     64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=11.4 ms
+     64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.654 ms
+     64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.093 ms
+
+In parallel, execute the RESTCONF command below to get the data flow
+information of node "openflow:1" and its port "s1-eth1".
+
+-  Get the Dataflows information by executing `the get-data-flow
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow.html#get-data-flow>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow:get-data-flow -d '{"input":{"tenant-name":"vtn1","mode":"DETAIL","node":"openflow:1","data-flow-port":{"port-id":"1","port-name":"s1-eth1"}}}'
+
+::
+
+    {
+      "output": {
+        "data-flow-info": [
+        {
+          "averaged-data-flow-stats": {
+            "packet-count": 1.1998800119988002,
+              "start-time": 1455241209151,
+              "end-time": 1455241219152,
+              "byte-count": 117.58824117588242
+          },
+            "physical-route": [
+            {
+              "physical-ingress-port": {
+                "port-name": "s2-eth3",
+                "port-id": "3"
+              },
+              "physical-egress-port": {
+                "port-name": "s2-eth1",
+                "port-id": "1"
+              },
+              "node": "openflow:2",
+              "order": 0
+            },
+            {
+              "physical-ingress-port": {
+                "port-name": "s1-eth2",
+                "port-id": "2"
+              },
+              "physical-egress-port": {
+                "port-name": "s1-eth1",
+                "port-id": "1"
+              },
+              "node": "openflow:1",
+              "order": 1
+            }
+          ],
+            "data-egress-node": {
+              "bridge-name": "vbr1",
+              "tenant-name": "vtn1"
+            },
+            "hard-timeout": 0,
+            "idle-timeout": 300,
+            "data-flow-stats": {
+              "duration": {
+                "nanosecond": 640000000,
+                "second": 362
+              },
+              "packet-count": 134,
+              "byte-count": 12932
+            },
+            "data-egress-port": {
+              "node": "openflow:1",
+              "port-name": "s1-eth1",
+              "port-id": "1"
+            },
+            "data-ingress-node": {
+              "bridge-name": "vbr1",
+              "tenant-name": "vtn1"
+            },
+            "data-ingress-port": {
+              "node": "openflow:2",
+              "port-name": "s2-eth3",
+              "port-id": "3"
+            },
+            "creation-time": 1455240855753,
+            "data-flow-match": {
+              "vtn-ether-match": {
+                "vlan-id": 200,
+                "source-address": "6a:ff:e2:81:86:bb",
+                "destination-address": "26:9f:82:70:ec:66"
+              }
+            },
+            "virtual-route": [
+            {
+              "reason": "VLANMAPPED",
+              "virtual-node-path": {
+                "bridge-name": "vbr1",
+                "tenant-name": "vtn1"
+              },
+              "order": 0
+            },
+            {
+              "reason": "FORWARDED",
+              "virtual-node-path": {
+                "bridge-name": "vbr1",
+                "tenant-name": "vtn1"
+              },
+              "order": 1
+            }
+          ],
+            "flow-id": 16
+        },
+        {
+          "averaged-data-flow-stats": {
+            "packet-count": 1.1998800119988002,
+            "start-time": 1455241209151,
+            "end-time": 1455241219152,
+            "byte-count": 117.58824117588242
+          },
+          "physical-route": [
+          {
+            "physical-ingress-port": {
+              "port-name": "s1-eth1",
+              "port-id": "1"
+            },
+            "physical-egress-port": {
+              "port-name": "s1-eth2",
+              "port-id": "2"
+            },
+            "node": "openflow:1",
+            "order": 0
+          },
+          {
+            "physical-ingress-port": {
+              "port-name": "s2-eth1",
+              "port-id": "1"
+            },
+            "physical-egress-port": {
+              "port-name": "s2-eth3",
+              "port-id": "3"
+            },
+            "node": "openflow:2",
+            "order": 1
+          }
+          ],
+            "data-egress-node": {
+              "bridge-name": "vbr1",
+              "tenant-name": "vtn1"
+            },
+            "hard-timeout": 0,
+            "idle-timeout": 300,
+            "data-flow-stats": {
+              "duration": {
+                "nanosecond": 587000000,
+                "second": 362
+              },
+              "packet-count": 134,
+              "byte-count": 12932
+            },
+            "data-egress-port": {
+              "node": "openflow:2",
+              "port-name": "s2-eth3",
+              "port-id": "3"
+            },
+            "data-ingress-node": {
+              "bridge-name": "vbr1",
+              "tenant-name": "vtn1"
+            },
+            "data-ingress-port": {
+              "node": "openflow:1",
+              "port-name": "s1-eth1",
+              "port-id": "1"
+            },
+            "creation-time": 1455240855747,
+            "data-flow-match": {
+              "vtn-ether-match": {
+                "vlan-id": 200,
+                "source-address": "26:9f:82:70:ec:66",
+                "destination-address": "6a:ff:e2:81:86:bb"
+              }
+            },
+            "virtual-route": [
+            {
+              "reason": "VLANMAPPED",
+              "virtual-node-path": {
+                "bridge-name": "vbr1",
+                "tenant-name": "vtn1"
+              },
+              "order": 0
+            },
+            {
+              "reason": "FORWARDED",
+              "virtual-node-path": {
+                "bridge-name": "vbr1",
+                "tenant-name": "vtn1"
+              },
+              "order": 1
+            }
+          ],
+            "flow-id": 15
+        }
+        ]
+      }
+    }
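Each data-flow-info entry above carries its hops in the physical-route list, ordered by the "order" field. A hedged sketch that reconstructs the path from a trimmed copy of one entry in the response:

```python
# Trimmed from the get-data-flow response above: flow 16's physical route.
flow = {
    "flow-id": 16,
    "physical-route": [
        {"order": 0, "node": "openflow:2",
         "physical-ingress-port": {"port-name": "s2-eth3"},
         "physical-egress-port": {"port-name": "s2-eth1"}},
        {"order": 1, "node": "openflow:1",
         "physical-ingress-port": {"port-name": "s1-eth2"},
         "physical-egress-port": {"port-name": "s1-eth1"}},
    ],
}

# Sort the hops by "order" and render the path as node[ingress->egress] steps.
hops = sorted(flow["physical-route"], key=lambda h: h["order"])
path = " -> ".join(
    "{}[{}->{}]".format(h["node"],
                        h["physical-ingress-port"]["port-name"],
                        h["physical-egress-port"]["port-name"])
    for h in hops)
print(path)
```

For flow 16 this yields the route from switch openflow:2 (where h3's traffic enters) out through port s1-eth1 on openflow:1, matching the VLAN-mapped topology above.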
+
+How To Create Mac Map In VTN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+-  This page demonstrates MAC mapping. The demonstration enables
+   communication between two hosts, and denies communication from a
+   particular host, by associating a vBridge with the hosts and
+   configuring MAC mapping (MAC addresses) on the vBridge.
+
+-  This page targets the Beryllium release, so the procedure described here
+   does not work in other releases.
+
+.. figure:: ./images/vtn/Single_Controller_Mapping.png
+   :alt: Single Controller Mapping
+
+   Single Controller Mapping
+
+Requirement
+^^^^^^^^^^^
+
+Configure Mininet and create a topology
+'''''''''''''''''''''''''''''''''''''''
+
+-  `Script for emulating network with multiple
+   hosts <https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet#Network_with_Multiple_Hosts_for_Service_Function_Chain>`__.
+
+-  Before executing the Mininet script, confirm that the controller is
+   up and running.
+
+-  Run the Mininet script.
+
+-  Replace <path> and <Controller IP> based on your environment.
+
+::
+
+    sudo mn --controller=remote,ip=<Controller IP> --custom <path>/topo_handson.py --topo mytopo2
+
+::
+
+    mininet> net
+    h11 h11-eth0:s1-eth1
+    h12 h12-eth0:s1-eth2
+    h21 h21-eth0:s2-eth1
+    h22 h22-eth0:s2-eth2
+    h23 h23-eth0:s2-eth3
+    srvc1 srvc1-eth0:s3-eth3 srvc1-eth1:s4-eth3
+    srvc2 srvc2-eth0:s3-eth4 srvc2-eth1:s4-eth4
+    s1 lo:  s1-eth1:h11-eth0 s1-eth2:h12-eth0 s1-eth3:s2-eth4 s1-eth4:s3-eth2
+    s2 lo:  s2-eth1:h21-eth0 s2-eth2:h22-eth0 s2-eth3:h23-eth0 s2-eth4:s1-eth3 s2-eth5:s4-eth1
+    s3 lo:  s3-eth1:s4-eth2 s3-eth2:s1-eth4 s3-eth3:srvc1-eth0 s3-eth4:srvc2-eth0
+    s4 lo:  s4-eth1:s2-eth5 s4-eth2:s3-eth1 s4-eth3:srvc1-eth1 s4-eth4:srvc2-eth1
+
+Configuration
+^^^^^^^^^^^^^
+
+To create a MAC mapping in VTN, execute the REST APIs provided by VTN
+Manager as follows, using the curl command.
+
+-  Create a virtual tenant named Tenant1 by executing `the update-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"Tenant1"}}'
+
+-  Create a virtual bridge named vBridge1 in the tenant Tenant1 by
+   executing `the update-vbridge
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"Tenant1","bridge-name":"vBridge1"}}'
+
+-  Configure MAC mapping on vBridge1 by specifying the MAC addresses of
+   host h12 and host h22 as the allowed hosts, executing `the
+   set-mac-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-mac-map.html#set-mac-map>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-mac-map:set-mac-map -d '{"input":{"operation":"SET","allowed-hosts":["de:05:40:c4:96:76@0","62:c5:33:bc:d7:4e@0"],"tenant-name":"Tenant1","bridge-name":"vBridge1"}}'
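The allowed-hosts entries in the payload above use a `MAC@VLAN` notation. A small illustrative helper for building them (the `vlan_host` function is hypothetical, not a VTN API):

```python
def vlan_host(mac, vlan=0):
    """Format a host for the allowed-hosts list as "<mac>@<vlan-id>".

    VLAN id 0 denotes untagged traffic, as used throughout this example.
    """
    return "{}@{}".format(mac.lower(), vlan)

# MAC addresses of h22 and h12 as reported by `ifconfig` in Mininet.
allowed = [vlan_host("de:05:40:c4:96:76"), vlan_host("62:c5:33:bc:d7:4e")]
print(allowed)
```

Note that Mininet assigns MAC addresses randomly per run, so the values in your payload must come from your own `ifconfig` output, as shown in the note below.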
+
+.. note::
+
+    The MAC addresses of hosts h12 and h22 can be obtained with the
+    following commands in Mininet.
+
+::
+
+     mininet> h12 ifconfig
+     h12-eth0  Link encap:Ethernet  HWaddr 62:c5:33:bc:d7:4e
+     inet addr:10.0.0.2  Bcast:10.255.255.255  Mask:255.0.0.0
+     inet6 addr: fe80::60c5:33ff:febc:d74e/64 Scope:Link
+
+::
+
+     mininet> h22 ifconfig
+     h22-eth0  Link encap:Ethernet  HWaddr de:05:40:c4:96:76
+     inet addr:10.0.0.4  Bcast:10.255.255.255  Mask:255.0.0.0
+     inet6 addr: fe80::dc05:40ff:fec4:9676/64 Scope:Link
+
+-  MAC mapping is not activated just by configuring it; two-way
+   communication must be established to activate it.
+
+-  Ping host h22 from host h12 in Mininet; the ping does not succeed
+   because only one-way activation is enabled.
+
+::
+
+     mininet> h12 ping h22
+     PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
+     From 10.0.0.2 icmp_seq=1 Destination Host Unreachable
+     From 10.0.0.2 icmp_seq=2 Destination Host Unreachable
+
+-  Ping host h12 from host h22 in Mininet. The ping now succeeds because
+   communication has been established from both ends.
+
+::
+
+     mininet> h22 ping h12
+     PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
+     64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=91.8 ms
+     64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.510 ms
+
+-  After communication is established from both ends, host h12 can ping
+   host h22.
+
+::
+
+     mininet> h12 ping h22
+     PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
+     64 bytes from 10.0.0.4: icmp_req=1 ttl=64 time=0.780 ms
+     64 bytes from 10.0.0.4: icmp_req=2 ttl=64 time=0.079 ms
+
+Verification
+^^^^^^^^^^^^
+
+-  To view the configured MAC mapping and its allowed hosts, execute the
+   following command.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns/vtn/Tenant1/vbridge/vBridge1/mac-map
+
+::
+
+    {
+      "mac-map": {
+        "mac-map-status": {
+          "mapped-host": [
+            {
+              "mac-address": "c6:44:22:ba:3e:72",
+              "vlan-id": 0,
+              "port-id": "openflow:1:2"
+            },
+            {
+              "mac-address": "f6:e0:43:b6:3a:b7",
+              "vlan-id": 0,
+              "port-id": "openflow:2:2"
+            }
+          ]
+        },
+        "mac-map-config": {
+          "allowed-hosts": {
+            "vlan-host-desc-list": [
+              {
+                "host": "c6:44:22:ba:3e:72@0"
+              },
+              {
+                "host": "f6:e0:43:b6:3a:b7@0"
+              }
+            ]
+          }
+        }
+      }
+    }
+
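The mapped-host list in this response reports which allowed hosts have actually been activated, and on which physical port. A short sketch that extracts that information from the response shown above:

```python
import json

# The operational mac-map data returned by the GET above.
response = json.loads("""
{"mac-map": {"mac-map-status": {"mapped-host": [
   {"mac-address": "c6:44:22:ba:3e:72", "vlan-id": 0, "port-id": "openflow:1:2"},
   {"mac-address": "f6:e0:43:b6:3a:b7", "vlan-id": 0, "port-id": "openflow:2:2"}]},
 "mac-map-config": {"allowed-hosts": {"vlan-host-desc-list": [
   {"host": "c6:44:22:ba:3e:72@0"}, {"host": "f6:e0:43:b6:3a:b7@0"}]}}}}
""")

mac_map = response["mac-map"]
# MAC address -> physical port on which the mapping was activated.
mapped = {h["mac-address"]: h["port-id"]
          for h in mac_map["mac-map-status"]["mapped-host"]}
# Configured MAC@VLAN descriptors, activated or not.
allowed = [e["host"] for e in
           mac_map["mac-map-config"]["allowed-hosts"]["vlan-host-desc-list"]]
```

Comparing ``allowed`` against ``mapped`` shows which configured hosts have not yet sent traffic and therefore remain inactive.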
+.. note::
+
+    When Deny is configured, a broadcast message is sent to all the hosts
+    connected to the vBridge, so communication does not need to be
+    established from both ends as with Allow; the hosts can communicate
+    directly.
+
+1. To deny host h23 communication with the hosts connected to vBridge1,
+   apply the following configuration.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-mac-map:set-mac-map -d '{"input":{"operation": "SET", "denied-hosts": ["0a:d3:ea:3d:8f:a5@0"],"tenant-name": "Tenant1","bridge-name": "vBridge1"}}'
+
+Cleaning Up
+^^^^^^^^^^^
+
+-  You can delete the virtual tenant Tenant1 by executing `the
+   remove-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"Tenant1"}}'
+
+How To Configure Flowfilters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+-  This page explains how to provision a flowfilter using VTN Manager.
+   It targets the Beryllium release, so the procedure described here
+   does not work in other releases.
+
+-  The flow-filter function discards, permits, or redirects packets of
+   the traffic within a VTN, according to specified flow conditions. The
+   table below lists the actions to be applied when a packet matches the
+   condition:
+
++-----------------------+----------------------------------------------------+
+| Action                | Function                                           |
++=======================+====================================================+
+| Pass                  | | Permits the packet to pass along the determined  |
+|                       |   path.                                            |
+|                       | | As options, packet transfer priority (set        |
+|                       |   priority) and DSCP change (set ip-dscp) can be   |
+|                       |   specified.                                       |
++-----------------------+----------------------------------------------------+
+| Drop                  | Discards the packet.                               |
++-----------------------+----------------------------------------------------+
+| Redirect              | | Redirects the packet to a desired virtual        |
+|                       |   interface.                                       |
+|                       | | As an option, it is possible to change the MAC   |
+|                       |   address when the packet is transferred.          |
++-----------------------+----------------------------------------------------+
+
+.. figure:: ./images/vtn/flow_filter_example.png
+   :alt: Flow Filter Example
+
+   Flow Filter Example
+
+-  The following steps explain the flow-filter function:
+
+   -  When a packet is transferred to an interface within a virtual
+      network, the flow-filter function evaluates whether the
+      transferred packet matches the condition specified in the
+      flow-list.
+
+   -  If the packet matches the condition, the flow-filter applies the
+      flow-list matching action specified in the flow-filter.
+
+Requirements
+^^^^^^^^^^^^
+
+To apply the packet filter, configure the following:
+
+-  Create a flow condition.
+
+-  Specify where to apply the flow-filter, for example a VTN, a vBridge,
+   or a vBridge interface.
+
+This page uses Mininet to provision OpenFlow switches. Details on
+setting up Mininet can be found at the following page:
+https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet
+
+Start Mininet, and create three switches (s1, s2, and s3) and four hosts
+(h1, h2, h3 and h4) in it.
+
+::
+
+    sudo mn --controller=remote,ip=192.168.0.100 --topo tree,2
+
+.. note::
+
+    Replace "192.168.0.100" with the IP address of OpenDaylight
+    controller based on your environment.
+
+You can check the topology that you have created by executing the "net"
+command in the Mininet console.
+
+::
+
+     mininet> net
+     h1 h1-eth0:s2-eth1
+     h2 h2-eth0:s2-eth2
+     h3 h3-eth0:s3-eth1
+     h4 h4-eth0:s3-eth2
+     s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
+     s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
+     s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
+
+In this guide, you will provision flowfilters to establish communication
+between h1 and h3.
+
+Configuration
+^^^^^^^^^^^^^
+
+To provision the virtual L2 network for the two hosts (h1 and h3),
+execute the REST APIs provided by VTN Manager as follows, using the curl
+command.
+
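The curl invocations below all follow the same pattern: an authenticated POST of a JSON input body to a RESTCONF operations URL. As a rough sketch, assuming the same localhost:8181 endpoint and admin credentials as the curl examples, the same call can be prepared with Python's standard library (the request is only constructed here, not sent):

```python
import base64
import json
import urllib.request

def build_rpc_request(rpc_path, payload):
    """Prepare an authenticated RESTCONF RPC request (not sent here)."""
    url = "http://localhost:8181/restconf/operations/" + rpc_path
    req = urllib.request.Request(url,
                                 data=json.dumps(payload).encode("utf-8"),
                                 method="POST")
    req.add_header("Content-Type", "application/json")
    token = base64.b64encode(b"admin:admin").decode("ascii")
    req.add_header("Authorization", "Basic " + token)
    return req

# Example: the update-vtn call from this section.
req = build_rpc_request("vtn:update-vtn", {"input": {"tenant-name": "vtn1"}})
# urllib.request.urlopen(req) would send it to the controller.
```

Calling ``urllib.request.urlopen(req)`` would perform the same operation as the corresponding curl command, assuming a controller is listening on localhost:8181.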
+-  Create a virtual tenant named vtn1 by executing `the update-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+-  Create a virtual bridge named vbr1 in the tenant vtn1 by executing
+   `the update-vbridge
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1"}}'
+
+-  Create two interfaces into the virtual bridge by executing `the
+   update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1"}}'
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2"}}'
+
+-  Configure two mappings on the interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface if1 of the virtual bridge will be mapped to the port
+      "s2-eth1" of the switch "openflow:2" in Mininet.
+
+      -  Host h1 is connected to the port "s2-eth1".
+
+   -  The interface if2 of the virtual bridge will be mapped to the port
+      "s3-eth1" of the switch "openflow:3" in Mininet.
+
+      -  Host h3 is connected to the port "s3-eth1".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1", "node":"openflow:2", "port-name":"s2-eth1"}}'
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2", "node":"openflow:3", "port-name":"s3-eth1"}}'
+
+-  Create a flowcondition named cond\_1 by executing `the
+   set-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition>`__.
+
+   -  For the source-network and destination-network options, use the
+      inet addresses of host h1 and host h3 obtained from Mininet.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"name":"cond_1", "vtn-flow-match":[{"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.1/32","protocol":1,"destination-network":"10.0.0.3/32"},"index":"1"}]}}'
+
+-  A flowfilter can be applied to a VTN, a vBridge, or a vBridge
+   interface. This page provisions the flowfilter on a vBridge interface
+   and demonstrates the drop action type first, followed by pass.
+
+-  Flowfilter demonstration with the DROP action type: create a
+   flowfilter on the vBridge interface if1 by executing `the set-flow-filter
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input": {"tenant-name": "vtn1", "bridge-name": "vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","vtn-drop-filter":{},"vtn-flow-action":[{"order": "1","vtn-set-inet-src-action":{"ipv4-address":"10.0.0.1/32"}},{"order": "2","vtn-set-inet-dst-action":{"ipv4-address":"10.0.0.3/32"}}],"index": "1"}]}}'
+
+Verification of the drop filter
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Execute ping from h1 to h3. Because the "drop" action type is
+   applied, the ping fails and no packets flow between hosts h1 and h3,
+   as shown below.
+
+::
+
+     mininet> h1 ping h3
+
+Configuration for pass filter
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Update the flow filter to pass the packets by executing `the
+   set-flow-filter
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input": {"tenant-name": "vtn1", "bridge-name": "vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","vtn-pass-filter":{},"vtn-flow-action":[{"order": "1","vtn-set-inet-src-action":{"ipv4-address":"10.0.0.1/32"}},{"order": "2","vtn-set-inet-dst-action":{"ipv4-address":"10.0.0.3/32"}}],"index": "1"}]}}'
+
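The pass filter above differs from the drop filter applied earlier only in the filter-type member of the flow-filter entry: ``vtn-drop-filter`` becomes ``vtn-pass-filter``, while the condition, index, and flow actions stay the same. A sketch of the two input bodies (the helper name is illustrative, not part of the VTN API):

```python
def flow_filter_input(filter_type, tenant="vtn1", bridge="vbr1",
                      interface="if1"):
    """Input for vtn-flow-filter:set-flow-filter; filter_type selects
    the drop or pass behavior."""
    assert filter_type in ("vtn-drop-filter", "vtn-pass-filter")
    entry = {
        "condition": "cond_1",
        "index": 1,
        filter_type: {},  # the only member that differs between the two calls
        "vtn-flow-action": [
            {"order": 1, "vtn-set-inet-src-action": {"ipv4-address": "10.0.0.1/32"}},
            {"order": 2, "vtn-set-inet-dst-action": {"ipv4-address": "10.0.0.3/32"}},
        ],
    }
    return {"input": {"tenant-name": tenant, "bridge-name": bridge,
                      "interface-name": interface, "vtn-flow-filter": [entry]}}

drop_body = flow_filter_input("vtn-drop-filter")
pass_body = flow_filter_input("vtn-pass-filter")
```

Because set-flow-filter replaces the entry at the same index, POSTing ``pass_body`` after ``drop_body`` switches the behavior from drop to pass, which is exactly what the two curl commands above do.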
+Verification For Packets Success
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  Because the PASS action type is now applied, the ping between hosts
+   h1 and h3 succeeds.
+
+::
+
+     mininet> h1 ping h3
+     PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+     64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
+     64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
+     64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
+
+-  You can also verify the configuration by executing the following
+   REST API. It shows all the configuration in VTN Manager.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns/vtn/vtn1
+
+::
+
+    {
+      "vtn": [
+        {
+          "name": "vtn1",
+          "vtenant-config": {
+            "hard-timeout": 0,
+            "idle-timeout": 300,
+            "description": "creating vtn"
+          },
+          "vbridge": [
+            {
+              "name": "vbr1",
+              "vbridge-config": {
+                "age-interval": 600,
+                "description": "creating vBridge1"
+              },
+              "bridge-status": {
+                "state": "UP",
+                "path-faults": 0
+              },
+              "vinterface": [
+                {
+                  "name": "if1",
+                  "vinterface-status": {
+                    "mapped-port": "openflow:2:1",
+                    "state": "UP",
+                    "entity-state": "UP"
+                  },
+                  "port-map-config": {
+                    "vlan-id": 0,
+                    "node": "openflow:2",
+                    "port-name": "s2-eth1"
+                  },
+                  "vinterface-config": {
+                    "description": "Creating if1 interface",
+                    "enabled": true
+                  },
+                  "vinterface-input-filter": {
+                    "vtn-flow-filter": [
+                      {
+                        "index": 1,
+                        "condition": "cond_1",
+                        "vtn-flow-action": [
+                          {
+                            "order": 1,
+                            "vtn-set-inet-src-action": {
+                              "ipv4-address": "10.0.0.1/32"
+                            }
+                          },
+                          {
+                            "order": 2,
+                            "vtn-set-inet-dst-action": {
+                              "ipv4-address": "10.0.0.3/32"
+                            }
+                          }
+                        ],
+                        "vtn-pass-filter": {}
+                      },
+                      {
+                        "index": 10,
+                        "condition": "cond_1",
+                        "vtn-drop-filter": {}
+                      }
+                    ]
+                  }
+                },
+                {
+                  "name": "if2",
+                  "vinterface-status": {
+                    "mapped-port": "openflow:3:1",
+                    "state": "UP",
+                    "entity-state": "UP"
+                  },
+                  "port-map-config": {
+                    "vlan-id": 0,
+                    "node": "openflow:3",
+                    "port-name": "s3-eth1"
+                  },
+                  "vinterface-config": {
+                    "description": "Creating if2 interface",
+                    "enabled": true
+                  }
+                }
+              ]
+            }
+          ]
+        }
+      ]
+    }
+
+Cleaning Up
+^^^^^^^^^^^
+
+-  Clean up both the VTN and the flowcondition as follows.
+
+-  You can delete the virtual tenant vtn1 by executing `the remove-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+-  You can delete the flowcondition cond\_1 by executing `the
+   remove-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#remove-flow-condition>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_1"}}'
+
+How to use VTN to change the path of the packet flow
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+-  This page explains how to create a specific VTN path map using VTN
+   Manager. It targets the Beryllium release, so the procedure described
+   here does not work in other releases.
+
+.. figure:: ./images/vtn/Pathmap.png
+   :alt: Pathmap
+
+   Pathmap
+
+Requirements
+^^^^^^^^^^^^
+
+-  Save the Mininet script given below as pathmap\_test.py and run it in
+   the environment where Mininet is installed.
+
+-  Create the topology using the Mininet script below:
+
+::
+
+     from mininet.topo import Topo
+     class MyTopo( Topo ):
+        "Simple topology example."
+        def __init__( self ):
+            "Create custom topo."
+            # Initialize topology
+            Topo.__init__( self )
+            # Add hosts and switches
+            leftHost = self.addHost( 'h1' )
+            rightHost = self.addHost( 'h2' )
+            leftSwitch = self.addSwitch( 's1' )
+            middleSwitch = self.addSwitch( 's2' )
+            middleSwitch2 = self.addSwitch( 's4' )
+            rightSwitch = self.addSwitch( 's3' )
+            # Add links
+            self.addLink( leftHost, leftSwitch )
+            self.addLink( leftSwitch, middleSwitch )
+            self.addLink( leftSwitch, middleSwitch2 )
+            self.addLink( middleSwitch, rightSwitch )
+            self.addLink( middleSwitch2, rightSwitch )
+            self.addLink( rightSwitch, rightHost )
+     topos = { 'mytopo': ( lambda: MyTopo() ) }
+
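The script above builds a diamond: s1 and s3 are connected both through s2 and through s4, so there are two candidate routes between h1 and h2. A small pure-Python sketch (no Mininet required) that enumerates the switch-level paths in this topology:

```python
# Switch-level links of the diamond topology defined above.
links = [("s1", "s2"), ("s1", "s4"), ("s2", "s3"), ("s4", "s3")]

adj = {}
for a, b in links:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def simple_paths(src, dst, seen=()):
    """Enumerate loop-free paths from src to dst."""
    if src == dst:
        return [[dst]]
    paths = []
    for nxt in sorted(adj[src]):
        if nxt not in seen:
            for tail in simple_paths(nxt, dst, seen + (src,)):
                paths.append([src] + tail)
    return paths

paths = simple_paths("s1", "s3")
# Two disjoint routes exist (via s2 and via s4); the path map and path
# policy configured below choose between them.
```

Without a path policy, the controller picks one of these routes on its own; the rest of this page steers matching traffic onto the other one.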
+-  After creating the file with the above script, start Mininet as
+   follows.
+
+::
+
+    sudo mn --controller=remote,ip=10.106.138.124 --custom pathmap_test.py --topo mytopo
+
+.. note::
+
+    Replace "10.106.138.124" with the IP address of OpenDaylight
+    controller based on your environment.
+
+::
+
+     mininet> net
+     h1 h1-eth0:s1-eth1
+     h2 h2-eth0:s3-eth3
+     s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1 s1-eth3:s4-eth1
+     s2 lo:  s2-eth1:s1-eth2 s2-eth2:s3-eth1
+     s3 lo:  s3-eth1:s2-eth2 s3-eth2:s4-eth2 s3-eth3:h2-eth0
+     s4 lo:  s4-eth1:s1-eth3 s4-eth2:s3-eth2
+     c0
+
+-  Generate traffic by pinging from host h1 to host h2 before creating
+   the port maps. The ping fails because no mapping exists yet.
+
+::
+
+     mininet> h1 ping h2
+     PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
+     From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
+     From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
+     From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
+     From 10.0.0.1 icmp_seq=4 Destination Host Unreachable
+
+Configuration
+^^^^^^^^^^^^^
+
+-  To change the path of the packet flow, execute the REST APIs provided
+   by VTN Manager as follows, using the curl command.
+
+-  Create a virtual tenant named vtn1 by executing `the update-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+-  Create a virtual bridge named vbr1 in the tenant vtn1 by executing
+   `the update-vbridge
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1"}}'
+
+-  Create two interfaces into the virtual bridge by executing `the
+   update-vinterface
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1"}}'
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2"}}'
+
+-  Configure two mappings on the interfaces by executing `the
+   set-port-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map>`__.
+
+   -  The interface if1 of the virtual bridge will be mapped to the port
+      "s1-eth1" of the switch "openflow:1" in Mininet.
+
+      -  Host h1 is connected to the port "s1-eth1".
+
+   -  The interface if2 of the virtual bridge will be mapped to the port
+      "s3-eth3" of the switch "openflow:3" in Mininet.
+
+      -  Host h2 is connected to the port "s3-eth3".
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1", "node":"openflow:1", "port-name":"s1-eth1"}}'
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2", "node":"openflow:3", "port-name":"s3-eth3"}}'
+
+-  Generate traffic by pinging from host h1 to host h2 after creating
+   the port maps. The ping now succeeds.
+
+::
+
+     mininet> h1 ping h2
+     PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
+     64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.861 ms
+     64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.101 ms
+     64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.101 ms
+
+-  Get the data flow information by executing `the get-data-flow
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow.html#get-data-flow>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow:get-data-flow -d '{"input":{"tenant-name":"vtn1","mode":"DETAIL","node":"openflow:1","data-flow-port":{"port-id":1,"port-name":"s1-eth1"}}}'
+
+-  Create a flowcondition named cond\_1 by executing `the
+   set-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition>`__.
+
+   -  For the source-network and destination-network options, use the
+      inet addresses of host h1 and host h2 obtained from Mininet.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1", "vtn-flow-match":[{"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.1/32","protocol":1,"destination-network":"10.0.0.2/32"},"index":"1"}]}}'
+
+-  Create a path map with the flowcondition cond\_1 by executing `the
+   set-path-map
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-path-map.html#set-path-map>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-map:set-path-map -d '{"input":{"tenant-name":"vtn1","path-map-list":[{"condition":"cond_1","policy":"1","index": "1","idle-timeout":"300","hard-timeout":"0"}]}}'
+
+-  Create a path policy by executing `the set-path-policy
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-path-policy.html#set-path-policy>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-policy:set-path-policy -d '{"input":{"operation":"SET","id": "1","default-cost": "10000","vtn-path-cost": [{"port-desc":"openflow:1,3,s1-eth3","cost":"1000"},{"port-desc":"openflow:4,2,s4-eth2","cost":"1000"},{"port-desc":"openflow:3,3,s3-eth3","cost":"100000"}]}}'
+
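Under this policy, every port costs the default 10000 unless overridden: s1-eth3 and s4-eth2 (cost 1000 each) make the route through s4 cheap, while s3-eth3 (cost 100000) sits on both routes and does not affect the comparison. A back-of-the-envelope check of the two candidate routes from s1 to h2 (a sketch; it assumes the total route cost is the sum of the configured costs of the egress ports along the route):

```python
DEFAULT_COST = 10000
# Per-port overrides from the set-path-policy call above.
COST = {"s1-eth3": 1000, "s4-eth2": 1000, "s3-eth3": 100000}

def route_cost(egress_ports):
    """Sum the configured cost of each egress port along a route."""
    return sum(COST.get(p, DEFAULT_COST) for p in egress_ports)

via_s2 = route_cost(["s1-eth2", "s2-eth2", "s3-eth3"])  # s1 -> s2 -> s3 -> h2
via_s4 = route_cost(["s1-eth3", "s4-eth2", "s3-eth3"])  # s1 -> s4 -> s3 -> h2
# via_s4 (102000) is cheaper than via_s2 (120000), so traffic matching
# cond_1 is steered through s4.
```

This is why the verification below shows the physical route shifting from s2 to s4 once the policy is applied.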
+Verification
+^^^^^^^^^^^^
+
+-  Before applying the path policy, get the flow information by
+   executing the get-data-flow RPC; the route passes through switch s2.
+
+::
+
+    "data-flow-info": [
+      {
+        "physical-route": [
+          {
+            "physical-ingress-port": {
+              "port-name": "s3-eth3",
+              "port-id": "3"
+            },
+            "physical-egress-port": {
+              "port-name": "s3-eth1",
+              "port-id": "1"
+            },
+            "node": "openflow:3",
+            "order": 0
+          },
+          {
+            "physical-ingress-port": {
+              "port-name": "s2-eth2",
+              "port-id": "2"
+            },
+            "physical-egress-port": {
+              "port-name": "s2-eth1",
+              "port-id": "1"
+            },
+            "node": "openflow:2",
+            "order": 1
+          },
+          {
+            "physical-ingress-port": {
+              "port-name": "s1-eth2",
+              "port-id": "2"
+            },
+            "physical-egress-port": {
+              "port-name": "s1-eth1",
+              "port-id": "1"
+            },
+            "node": "openflow:1",
+            "order": 2
+          }
+        ],
+        "data-egress-node": {
+          "interface-name": "if1",
+          "bridge-name": "vbr1",
+          "tenant-name": "vtn1"
+        },
+        "data-egress-port": {
+          "node": "openflow:1",
+          "port-name": "s1-eth1",
+          "port-id": "1"
+        },
+        "data-ingress-node": {
+          "interface-name": "if2",
+          "bridge-name": "vbr1",
+          "tenant-name": "vtn1"
+        },
+        "data-ingress-port": {
+          "node": "openflow:3",
+          "port-name": "s3-eth3",
+          "port-id": "3"
+        },
+        "flow-id": 32
+      }
+    ]
+
+-  After applying the path policy, get the flow information by executing
+   the get-data-flow RPC again; the route now passes through switch s4.
+
+::
+
+    "data-flow-info": [
+      {
+        "physical-route": [
+          {
+            "physical-ingress-port": {
+              "port-name": "s1-eth1",
+              "port-id": "1"
+            },
+            "physical-egress-port": {
+              "port-name": "s1-eth3",
+              "port-id": "3"
+            },
+            "node": "openflow:1",
+            "order": 0
+          },
+          {
+            "physical-ingress-port": {
+              "port-name": "s4-eth1",
+              "port-id": "1"
+            },
+            "physical-egress-port": {
+              "port-name": "s4-eth2",
+              "port-id": "2"
+            },
+            "node": "openflow:4",
+            "order": 1
+          },
+          {
+            "physical-ingress-port": {
+              "port-name": "s3-eth2",
+              "port-id": "2"
+            },
+            "physical-egress-port": {
+              "port-name": "s3-eth3",
+              "port-id": "3"
+            },
+            "node": "openflow:3",
+            "order": 2
+          }
+        ],
+        "data-egress-node": {
+          "interface-name": "if2",
+          "bridge-name": "vbr1",
+          "tenant-name": "vtn1"
+        },
+        "data-egress-port": {
+          "node": "openflow:3",
+          "port-name": "s3-eth3",
+          "port-id": "3"
+        },
+        "data-ingress-node": {
+          "interface-name": "if1",
+          "bridge-name": "vbr1",
+          "tenant-name": "vtn1"
+        },
+        "data-ingress-port": {
+          "node": "openflow:1",
+          "port-name": "s1-eth1",
+          "port-id": "1"
+        }
+      }
+    ]
+
+Cleaning Up
+^^^^^^^^^^^
+
+-  Clean up both the VTN and the flowcondition as follows.
+
+-  You can delete the virtual tenant vtn1 by executing `the remove-vtn
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
+
+-  You can delete the flowcondition cond\_1 by executing `the
+   remove-flow-condition
+   RPC <https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#remove-flow-condition>`__.
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_1"}}'
+
+VTN Coordinator Usage Examples
+------------------------------
+
+How to configure L2 Network with Single Controller
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This example provides the procedure to configure VTN Coordinator with an
+L2 network using VTN virtualization (single controller). It demonstrates
+vBridge interface mapping with a single controller using Mininet.
+Details on setting up Mininet can be found at the following URL:
+https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet
+
+.. figure:: ./images/vtn/vtn-single-controller-topology-example.png
+   :alt: EXAMPLE DEMONSTRATING SINGLE CONTROLLER
+
+   EXAMPLE DEMONSTRATING SINGLE CONTROLLER
+
+Requirements
+^^^^^^^^^^^^
+
+-  Configure mininet and create a topology:
+
+::
+
+    mininet@mininet-vm:~$ sudo mn --controller=remote,ip=<controller-ip> --topo tree,2
+
+-  Check the topology by executing the "net" command:
+
+::
+
+     s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1
+     s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0
+     h1 h1-eth0:s1-eth1
+     h2 h2-eth0:s2-eth2
+
+Configuration
+^^^^^^^^^^^^^
+
+-  Create a controller named controllerone, specifying its IP address in
+   the create-controller command below.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"controller": {"controller_id": "controllerone", "ipaddr":"10.0.0.2", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
+
+-  Create a VTN named vtn1 by executing the create-vtn command.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
+
+-  Create a vBridge named vBridge1 in vtn1 by executing the create-vbr
+   command.
+
+::
+
+     curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
+
+-  Create two interfaces named if1 and if2 in vBridge1.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
+
+-  Get the list of logical ports configured
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X GET http://127.0.0.1:8083/vtn-webapi/controllers/controllerone/domains/\(DEFAULT\)/logical_ports.json
+
+-  Configure two mappings on each of the interfaces by executing the
+   below command.
+
+The interface if1 of the virtual bridge will be mapped to the port
+"s3-eth1" of the switch "openflow:3" of the Mininet. The h3 is connected
+to the port "s3-eth1".
+
+The interface if2 of the virtual bridge will be mapped to the port
+"s2-eth1" of the switch "openflow:2" of the Mininet. The h1 is connected
+to the port "s2-eth1".
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:03-s3-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if1/portmap.json
+    curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if2/portmap.json
+
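+The ``logical_port_id`` strings used in the port-map calls follow a
+regular pattern: ``PP-OF:`` plus the switch's OpenFlow datapath id as
+eight colon-separated byte pairs, then ``-`` and the port name. The
+helper below is inferred from the examples in this guide (not a
+published VTN API) and builds such a string:

```python
# Build a VTN Coordinator "PP-OF:<dpid>-<port>" logical_port_id from an
# integer OpenFlow datapath id and a port name. Format inferred from the
# curl examples in this guide; not a published VTN API.
def logical_port_id(dpid, port_name):
    hexed = "%016x" % dpid                                   # 16 hex digits
    dotted = ":".join(hexed[i:i + 2] for i in range(0, 16, 2))
    return "PP-OF:%s-%s" % (dotted, port_name)

print(logical_port_id(3, "s3-eth1"))  # PP-OF:00:00:00:00:00:00:00:03-s3-eth1
```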
+Verification
+^^^^^^^^^^^^
+
+Verify that host h1 can ping host h3.
+
+-  Send packets from Host1 to Host3
+
+::
+
+     mininet> h1 ping h3
+     PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+     64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.780 ms
+     64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.079 ms
+
+How to configure L2 Network with Multiple Controllers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-  This example provides the procedure to configure an L2 network with
+   the VTN Coordinator using VTN virtualization. It demonstrates vBridge
+   interface mapping with multiple controllers using mininet.
+
+.. figure:: ./images/vtn/MutiController_Example_diagram.png
+   :alt: EXAMPLE DEMONSTRATING MULTIPLE CONTROLLERS
+
+   EXAMPLE DEMONSTRATING MULTIPLE CONTROLLERS
+
+Requirements
+^^^^^^^^^^^^
+
+-  Configure multiple controllers using the mininet script given below:
+   https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_%28VTN%29:Scripts:Mininet#Network_with_multiple_switches_and_OpenFlow_controllers
+
+Configuration
+^^^^^^^^^^^^^
+
+-  Create a VTN named vtn3 by executing the create-vtn command
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vtn" : {"vtn_name":"vtn3"}}' http://127.0.0.1:8083/vtn-webapi/vtns.json
+
+-  Create two Controllers named odc1 and odc2, specifying their
+   ip-addresses in the below create-controller commands.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "odc1", "ipaddr":"10.100.9.52", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "odc2", "ipaddr":"10.100.9.61", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
+
+-  Create two vBridges in the VTN: vbr1 on controller odc1 and vbr2 on
+   controller odc2
+
+::
+
+     curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vbr1","controller_id":"odc1","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vbridge" : {"vbr_name":"vbr2","controller_id":"odc2","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges.json
+
+-  Create two Interfaces, if1 and if2, on each of the two vBridges vbr1
+   and vbr2.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr1/interfaces.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr1/interfaces.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr2/interfaces.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr2/interfaces.json
+
+-  Get the list of logical ports configured
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X GET http://127.0.0.1:8083/vtn-webapi/controllers/odc1/domains/\(DEFAULT\)/logical_ports/detail.json
+
+-  Create boundary and vLink for the two controllers
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'   -X POST -d '{"boundary": {"boundary_id": "b1", "link": {"controller1_id": "odc1", "domain1_id": "(DEFAULT)", "logical_port1_id": "PP-OF:00:00:00:00:00:00:00:01-s1-eth3", "controller2_id": "odc2", "domain2_id": "(DEFAULT)", "logical_port2_id": "PP-OF:00:00:00:00:00:00:00:04-s4-eth3"}}}' http://127.0.0.1:8083/vtn-webapi/boundaries.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vlink": {"vlk_name": "vlink1" , "vnode1_name": "vbr1", "if1_name":"if2", "vnode2_name": "vbr2", "if2_name": "if2", "boundary_map": {"boundary_id":"b1","vlan_id": "50"}}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vlinks.json
+
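+Note that the vlink's ``boundary_map`` must name the boundary created in
+the previous step. The Python sketch below (payload shapes copied from
+the curl examples above; the helper functions themselves are our
+illustration) makes that dependency explicit:

```python
import json

# Build the boundary and vlink request bodies used above. Illustrative
# helpers; payload shapes copied from the curl examples in this guide.
def boundary_body(bid, c1, p1, c2, p2, domain="(DEFAULT)"):
    return {"boundary": {"boundary_id": bid, "link": {
        "controller1_id": c1, "domain1_id": domain, "logical_port1_id": p1,
        "controller2_id": c2, "domain2_id": domain, "logical_port2_id": p2}}}

def vlink_body(name, vnode1, if1, vnode2, if2, bid, vlan):
    return {"vlink": {"vlk_name": name, "vnode1_name": vnode1,
                      "if1_name": if1, "vnode2_name": vnode2, "if2_name": if2,
                      "boundary_map": {"boundary_id": bid, "vlan_id": vlan}}}

b = boundary_body("b1", "odc1", "PP-OF:00:00:00:00:00:00:00:01-s1-eth3",
                  "odc2", "PP-OF:00:00:00:00:00:00:00:04-s4-eth3")
v = vlink_body("vlink1", "vbr1", "if2", "vbr2", "if2", "b1", "50")

# The vlink must reference the boundary it crosses:
assert v["vlink"]["boundary_map"]["boundary_id"] == b["boundary"]["boundary_id"]
print(json.dumps(v["vlink"]["boundary_map"]))
```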
+-  Configure two mappings on each of the interfaces by executing the
+   below command.
+
+The interface if1 of vbr1 will be mapped to the port "s2-eth2" of
+the switch "openflow:2" of the Mininet. The h2 is connected to the port
+"s2-eth2".
+
+The interface if1 of vbr2 will be mapped to the port "s5-eth2" of
+the switch "openflow:5" of the Mininet. The h6 is connected to the port
+"s5-eth2".
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr1/interfaces/if1/portmap.json
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:05-s5-eth2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr2/interfaces/if1/portmap.json
+
+Verification
+^^^^^^^^^^^^
+
+Verify that host h2 can ping host h6.
+
+-  Send packets from h2 to h6
+
+::
+
+    mininet> h2 ping h6
+
+::
+
+     PING 10.0.0.6 (10.0.0.6) 56(84) bytes of data.
+     64 bytes from 10.0.0.6: icmp_req=1 ttl=64 time=0.780 ms
+     64 bytes from 10.0.0.6: icmp_req=2 ttl=64 time=0.079 ms
+
+How To Test Vlan-Map In Mininet Environment
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This example explains how to test vlan-map in a multi-host scenario.
+
+.. figure:: ./images/vtn/vlanmap_using_mininet.png
+   :alt: Example that demonstrates vlanmap testing in Mininet Environment
+
+   Example that demonstrates vlanmap testing in Mininet Environment
+
+Requirements
+^^^^^^^^^^^^
+
+-  Save the mininet script given below as vlan\_vtn\_test.py and run it
+   in the environment where Mininet is installed.
+
+Mininet Script
+^^^^^^^^^^^^^^
+
+https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet#Network_with_hosts_in_different_vlan
+
+-  Run the mininet script
+
+::
+
+    sudo mn --controller=remote,ip=192.168.64.13 --custom vlan_vtn_test.py --topo mytopo
+
+Configuration
+^^^^^^^^^^^^^
+
+Please follow the below steps to test a vlan map using mininet:
+
+-  Create a Controller named controllerone and mention its ip-address in
+   the below create-controller command.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "controllerone", "ipaddr":"10.0.0.2", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers
+
+-  Create a VTN named vtn1 by executing the create-vtn command
+
+::
+
+    curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
+
+-  Create a vBridge named vBridge1 in the vtn1 by executing the
+   create-vbr command.
+
+::
+
+    curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
+
+-  Create a vlan map with vlanid 200 for vBridge vBridge1
+
+::
+
+    curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vlanmap" : {"vlan_id": 200 }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/vlanmaps.json
+
+-  Create a vBridge named vBridge2 in the vtn1 by executing the
+   create-vbr command.
+
+::
+
+    curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vbridge" : {"vbr_name":"vBridge2","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
+
+-  Create a vlan map with vlanid 300 for vBridge vBridge2
+
+::
+
+    curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vlanmap" : {"vlan_id": 300 }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge2/vlanmaps.json
+
+Verification
+^^^^^^^^^^^^
+
+Run pingall in the mininet environment to view host reachability.
+
+::
+
+    mininet> pingall
+    Ping: testing ping reachability
+    h1 -> X h3 X h5 X
+    h2 -> X X h4 X h6
+    h3 -> h1 X X h5 X
+    h4 -> X h2 X X h6
+    h5 -> h1 X h3 X X
+    h6 -> X h2 X h4 X
+
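+The grid reads as two reachability groups, one per vlan-map: the VLAN
+200 hosts reach each other, the VLAN 300 hosts reach each other, and the
+two groups are isolated. A small sketch (illustrative parsing only, not
+part of Mininet or VTN) that extracts the groups from the output above:

```python
# Interpret a "pingall" grid: each row lists the other hosts in order,
# with "X" marking an unreachable peer. Illustrative parsing only.
GRID = """\
h1 -> X h3 X h5 X
h2 -> X X h4 X h6
h3 -> h1 X X h5 X
h4 -> X h2 X X h6
h5 -> h1 X h3 X X
h6 -> X h2 X h4 X
"""

def reachable(grid_text):
    reach = {}
    for line in grid_text.splitlines():
        src, _, rest = line.partition(" -> ")
        reach[src] = {peer for peer in rest.split() if peer != "X"}
    return reach

groups = reachable(GRID)
print(sorted(groups["h1"]))  # ['h3', 'h5'] -- the other vlan-200 hosts
```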
+How To View Specific VTN Station Information.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This example demonstrates how to view specific VTN station
+information.
+
+.. figure:: ./images/vtn/vtn_stations.png
+   :alt: EXAMPLE DEMONSTRATING VTN STATIONS
+
+   EXAMPLE DEMONSTRATING VTN STATIONS
+
+Requirement
+^^^^^^^^^^^
+
+-  Configure mininet and create a topology:
+
+::
+
+     $ sudo mn --custom /home/mininet/mininet/custom/topo-2sw-2host.py --controller=remote,ip=10.100.9.61 --topo mytopo
+    mininet> net
+
+     s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1
+     s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0
+     h1 h1-eth0:s1-eth1
+     h2 h2-eth0:s2-eth2
+
+-  Generate traffic by pinging between hosts h1 and h2 after configuring
+   the portmaps.
+
+::
+
+     mininet> h1 ping h2
+     PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
+     64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=16.7 ms
+     64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=13.2 ms
+
+Configuration
+^^^^^^^^^^^^^
+
+-  Create a Controller named controllerone and mention its ip-address in
+   the below create-controller command
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"controller": {"controller_id": "controllerone", "ipaddr":"10.100.9.61", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
+
+-  Create a VTN named vtn1 by executing the create-vtn command
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
+
+-  Create a vBridge named vBridge1 in the vtn1 by executing the
+   create-vbr command.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
+
+-  Create two Interfaces named if1 and if2 in vBridge1
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
+    curl -v --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
+
+-  Configure two mappings on each of the interfaces by executing the
+   below command.
+
+The interface if1 of the virtual bridge will be mapped to the port
+"s1-eth1" of the switch "openflow:1" of the Mininet. The h1 is connected
+to the port "s1-eth1".
+
+The interface if2 of the virtual bridge will be mapped to the port
+"s2-eth2" of the switch "openflow:2" of the Mininet. The h2 is connected
+to the port "s2-eth2".
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:01-s1-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if1/portmap.json
+    curl -v --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if2/portmap.json
+
+-  Get the VTN stations information
+
+::
+
+    curl -X GET -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' "http://127.0.0.1:8083/vtn-webapi/vtnstations?controller_id=controllerone&vtn_name=vtn1"
+
+Verification
+^^^^^^^^^^^^
+
+::
+
+    curl -X GET -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' "http://127.0.0.1:8083/vtn-webapi/vtnstations?controller_id=controllerone&vtn_name=vtn1"
+    {
+       "vtnstations": [
+           {
+               "domain_id": "(DEFAULT)",
+               "interface": {},
+               "ipaddrs": [
+                   "10.0.0.2"
+               ],
+               "macaddr": "b2c3.06b8.2dac",
+               "no_vlan_id": "true",
+               "port_name": "s2-eth2",
+               "station_id": "178195618445172",
+               "switch_id": "00:00:00:00:00:00:00:02",
+               "vnode_name": "vBridge1",
+               "vnode_type": "vbridge",
+               "vtn_name": "vtn1"
+           },
+           {
+               "domain_id": "(DEFAULT)",
+               "interface": {},
+               "ipaddrs": [
+                   "10.0.0.1"
+               ],
+               "macaddr": "ce82.1b08.90cf",
+               "no_vlan_id": "true",
+               "port_name": "s1-eth1",
+               "station_id": "206130278144207",
+               "switch_id": "00:00:00:00:00:00:00:01",
+               "vnode_name": "vBridge1",
+               "vnode_type": "vbridge",
+               "vtn_name": "vtn1"
+           }
+       ]
+    }
+
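+A response like the one above can be post-processed to answer "where was
+this MAC address learned?". The sketch below (plain Python over a
+trimmed copy of the response shown; not a VTN API) builds a MAC-to-port
+map:

```python
import json

# Map each learned MAC to the switch/port it was seen on, using a trimmed
# copy of the vtnstations response shown above. Illustrative only.
RESPONSE = json.loads("""
{"vtnstations": [
  {"macaddr": "b2c3.06b8.2dac", "switch_id": "00:00:00:00:00:00:00:02",
   "port_name": "s2-eth2", "ipaddrs": ["10.0.0.2"]},
  {"macaddr": "ce82.1b08.90cf", "switch_id": "00:00:00:00:00:00:00:01",
   "port_name": "s1-eth1", "ipaddrs": ["10.0.0.1"]}
]}
""")

locations = {s["macaddr"]: (s["switch_id"], s["port_name"])
             for s in RESPONSE["vtnstations"]}
print(locations["ce82.1b08.90cf"])  # ('00:00:00:00:00:00:00:01', 's1-eth1')
```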
+How To View Dataflows in VTN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This example demonstrates how to view specific VTN dataflow
+information.
+
+Configuration
+^^^^^^^^^^^^^
+
+Use the same configuration as in the vlan mapping example
+(https://wiki.opendaylight.org/view/VTN:Coordinator:Beryllium:HowTos:How_To_test_vlanmap_using_mininet).
+
+Verification
+^^^^^^^^^^^^
+
+Get the VTN Dataflows information
+
+::
+
+    curl -X GET -H 'content-type: application/json' --user 'admin:adminpass' "http://127.0.0.1:8083/vtn-webapi/dataflows?controller_id=controllerone&srcmacaddr=924c.e4a3.a743&vlan_id=300&switch_id=openflow:2&port_name=s2-eth1"
+
+::
+
+    {
+       "dataflows": [
+           {
+               "controller_dataflows": [
+                   {
+                       "controller_id": "controllerone",
+                       "controller_type": "odc",
+                       "egress_domain_id": "(DEFAULT)",
+                       "egress_port_name": "s3-eth3",
+                       "egress_station_id": "3",
+                       "egress_switch_id": "00:00:00:00:00:00:00:03",
+                       "flow_id": "29",
+                       "ingress_domain_id": "(DEFAULT)",
+                       "ingress_port_name": "s2-eth2",
+                       "ingress_station_id": "2",
+                       "ingress_switch_id": "00:00:00:00:00:00:00:02",
+                       "match": {
+                           "macdstaddr": [
+                               "4298.0959.0e0b"
+                           ],
+                           "macsrcaddr": [
+                               "924c.e4a3.a743"
+                           ],
+                           "vlan_id": [
+                               "300"
+                           ]
+                       },
+                       "pathinfos": [
+                           {
+                               "in_port_name": "s2-eth2",
+                               "out_port_name": "s2-eth1",
+                               "switch_id": "00:00:00:00:00:00:00:02"
+                           },
+                           {
+                               "in_port_name": "s1-eth2",
+                               "out_port_name": "s1-eth3",
+                               "switch_id": "00:00:00:00:00:00:00:01"
+                           },
+                           {
+                               "in_port_name": "s3-eth1",
+                               "out_port_name": "s3-eth3",
+                               "switch_id": "00:00:00:00:00:00:00:03"
+                           }
+                       ]
+                   }
+               ],
+               "reason": "success"
+           }
+       ]
+    }
+
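+The ``pathinfos`` array lists the hops of the flow in order, which makes
+it easy to print the path a flow actually takes. A short sketch
+(illustrative, over a copy of the ``pathinfos`` shown above):

```python
import json

# Flatten the "pathinfos" hops of the dataflow reply above into a
# readable switch path. Data copied from the example; helper is ours.
PATHINFOS = json.loads("""
[{"in_port_name": "s2-eth2", "out_port_name": "s2-eth1",
  "switch_id": "00:00:00:00:00:00:00:02"},
 {"in_port_name": "s1-eth2", "out_port_name": "s1-eth3",
  "switch_id": "00:00:00:00:00:00:00:01"},
 {"in_port_name": "s3-eth1", "out_port_name": "s3-eth3",
  "switch_id": "00:00:00:00:00:00:00:03"}]
""")

path = " -> ".join(hop["switch_id"][-2:] for hop in PATHINFOS)
print(path)  # 02 -> 01 -> 03
```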
+How To Configure Flow Filters Using VTN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+The flow-filter function discards, permits, or redirects packets of the
+traffic within a VTN, according to specified flow conditions. The table
+below lists the actions applied when a packet matches the condition:
+
++--------------------------------------+--------------------------------------+
+| Action                               | Function                             |
++--------------------------------------+--------------------------------------+
+| Pass                                 | Permits the packet to pass. As       |
+|                                      | options, packet transfer priority    |
+|                                      | (set priority) and DSCP change (set  |
+|                                      | ip-dscp) can be specified.           |
++--------------------------------------+--------------------------------------+
+| Drop                                 | Discards the packet.                 |
++--------------------------------------+--------------------------------------+
+| Redirect                             | Redirects the packet to a desired    |
+|                                      | virtual interface. As an option, it  |
+|                                      | is possible to change the MAC        |
+|                                      | address when the packet is           |
+|                                      | transferred.                         |
++--------------------------------------+--------------------------------------+
+
+.. figure:: ./images/vtn/flow_filter_example.png
+   :alt: Flow Filter
+
+   Flow Filter
+
+The following steps explain the flow-filter function:
+
+-  When a packet is transferred to an interface within a virtual
+   network, the flow-filter function evaluates whether the transferred
+   packet matches the condition specified in the flow-list.
+
+-  If the packet matches the condition, the flow-filter applies the
+   flow-list matching action specified in the flow-filter.
+
+Requirements
+^^^^^^^^^^^^
+
+To apply the packet filter, configure the following:
+
+-  Create a flow-list and flow-listentry.
+
+-  Specify where to apply the flow-filter, for example VTN, vBridge, or
+   interface of vBridge.
+
+Configure mininet and create the following topology:
+
+::
+
+    mininet@mininet-vm:~$ sudo mn --controller=remote,ip=<controller-ip> --topo tree,2
+    mininet> net
+    c0
+    s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
+    s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
+    s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
+    h1 h1-eth0:s2-eth1
+    h2 h2-eth0:s2-eth2
+    h3 h3-eth0:s3-eth1
+    h4 h4-eth0:s3-eth2
+
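+The ``net`` listing above tells you which switch port each host hangs
+off, which is exactly what the later port-map steps need. The short
+Python sketch below is illustrative only (not part of VTN Coordinator or
+Mininet; all names are ours) and parses that listing into a host-to-port
+map:

```python
# Parse a "mininet> net" listing into {host: (switch, switch_port)}.
# Illustrative helper only; not part of VTN Coordinator or Mininet.
NET_OUTPUT = """\
c0
s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
h1 h1-eth0:s2-eth1
h2 h2-eth0:s2-eth2
h3 h3-eth0:s3-eth1
h4 h4-eth0:s3-eth2
"""

def host_attachments(net_text):
    attach = {}
    for line in net_text.splitlines():
        parts = line.split()
        # Host lines look like "h1 h1-eth0:s2-eth1"; skip controller/switch rows.
        if len(parts) != 2 or not parts[0].startswith("h"):
            continue
        _local, peer = parts[1].split(":")
        attach[parts[0]] = (peer.rsplit("-", 1)[0], peer)
    return attach

print(host_attachments(NET_OUTPUT)["h1"])  # ('s2', 's2-eth1')
```

+So h1 sits on port s2-eth1 of s2 and h3 on s3-eth1 of s3, which matches
+the port-map commands used below.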
+Configuration
+^^^^^^^^^^^^^
+
+-  Create a Controller named controller1 and mention its ip-address in
+   the below create-controller command.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"controller": {"controller_id": "controller1", "ipaddr":"10.100.9.61", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers
+
+-  Create a VTN named vtn\_one by executing the create-vtn command
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vtn" : {"vtn_name":"vtn_one","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
+
+-  Create two vBridges named vbr\_one and vbr\_two in vtn\_one by
+   executing the create-vbr command twice.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vbr_one","controller_id":"controller1","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges.json
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vbr_two","controller_id":"controller1","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges.json
+
+-  Create two Interfaces named if1 and if2 in vbr\_two
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces.json
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces.json
+
+-  Get the list of logical ports configured
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X GET  http://127.0.0.1:8083/vtn-webapi/controllers/controller1/domains/\(DEFAULT\)/logical_ports.json
+
+-  Configure two mappings on each of the interfaces by executing the
+   below command.
+
+The interface if1 of the virtual bridge will be mapped to the port
+"s3-eth1" of the switch "openflow:3" of the Mininet. The h3 is connected
+to the port "s3-eth1".
+
+The interface if2 of the virtual bridge will be mapped to the port
+"s2-eth1" of the switch "openflow:2" of the Mininet. The h1 is connected
+to the port "s2-eth1".
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:03-s3-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/portmap.json
+    curl -v --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if2/portmap.json
+
+-  Create Flowlist
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"flowlist": {"fl_name": "flowlist1", "ip_version":"IP"}}' http://127.0.0.1:8083/vtn-webapi/flowlists.json
+
+-  Create Flowlistentry
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"flowlistentry": {"seqnum": "233","macethertype": "0x800","ipdstaddr": "10.0.0.3","ipdstaddrprefix": "2","ipsrcaddr": "10.0.0.2","ipsrcaddrprefix": "2","ipproto": "17","ipdscp": "55","icmptypenum":"232","icmpcodenum": "232"}}' http://127.0.0.1:8083/vtn-webapi/flowlists/flowlist1/flowlistentries.json
+
+-  Create vBridge Interface Flowfilter
+
+::
+
+    curl --user admin:adminpass -X POST -H 'content-type: application/json' -d '{"flowfilter" : {"ff_type": "in"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/flowfilters.json
+
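+The four calls above must happen in order: the flowlist exists before
+its flowlistentry, and the flowfilter container exists before its
+flowfilterentry (added in the next step). The sketch below only models
+that resource nesting (URLs copied from the curl examples in this guide;
+no requests are sent):

```python
# Model the flow-filter configuration order; URLs copied from the curl
# examples in this guide. Nothing is sent over the network.
BASE = "http://127.0.0.1:8083/vtn-webapi"
IF_PATH = BASE + "/vtns/vtn_one/vbridges/vbr_two/interfaces/if1"

steps = [
    ("flowlist", BASE + "/flowlists.json"),
    ("flowlistentry", BASE + "/flowlists/flowlist1/flowlistentries.json"),
    ("flowfilter", IF_PATH + "/flowfilters.json"),
    ("flowfilterentry", IF_PATH + "/flowfilters/in/flowfilterentries.json"),
]

# Each entry resource nests under the container created before it:
assert steps[1][1].startswith(BASE + "/flowlists/")
assert steps[3][1].startswith(IF_PATH + "/flowfilters/")
for name, url in steps:
    print(name, "->", url)
```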
+Flow filter demonstration with DROP action-type
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+    curl --user admin:adminpass -X POST -H 'content-type: application/json' -d '{"flowfilterentry": {"seqnum": "233", "fl_name": "flowlist1", "action_type":"drop", "priority":"3", "dscp":"55" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/flowfilters/in/flowfilterentries.json
+
+Verification
+^^^^^^^^^^^^
+
+Since the action type "drop" is applied, the ping should fail.
+
+::
+
+    mininet> h1 ping h3
+    PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+    From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
+    From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
+
+Flow filter demonstration with PASS action-type
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+    curl --user admin:adminpass -X PUT -H 'content-type: application/json' -d '{"flowfilterentry": {"seqnum": "233", "fl_name": "flowlist1", "action_type":"pass", "priority":"3", "dscp":"55" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/flowfilters/in/flowfilterentries/233.json
+
+Verification
+^^^^^^^^^^^^
+
+::
+
+    mininet> h1 ping h3
+    PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
+    64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
+    64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
+    64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
+
+How To Use VTN To Make Packets Take Different Paths
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This example demonstrates how to create specific VTN path map
+information.
+
+.. figure:: ./images/vtn/Pathmap.png
+   :alt: PathMap
+
+   PathMap
+
+Requirement
+^^^^^^^^^^^
+
+-  Save the mininet script given below as pathmap\_test.py and run it
+   in the environment where Mininet is installed.
+
+-  Create topology using the below mininet script:
+
+::
+
+     from mininet.topo import Topo
+     class MyTopo( Topo ):
+        "Simple topology example."
+        def __init__( self ):
+            "Create custom topo."
+            # Initialize topology
+            Topo.__init__( self )
+            # Add hosts and switches
+            leftHost = self.addHost( 'h1' )
+            rightHost = self.addHost( 'h2' )
+            leftSwitch = self.addSwitch( 's1' )
+            middleSwitch = self.addSwitch( 's2' )
+            middleSwitch2 = self.addSwitch( 's4' )
+            rightSwitch = self.addSwitch( 's3' )
+            # Add links
+            self.addLink( leftHost, leftSwitch )
+            self.addLink( leftSwitch, middleSwitch )
+            self.addLink( leftSwitch, middleSwitch2 )
+            self.addLink( middleSwitch, rightSwitch )
+            self.addLink( middleSwitch2, rightSwitch )
+            self.addLink( rightSwitch, rightHost )
+     topos = { 'mytopo': ( lambda: MyTopo() ) }
+
+::
+
+     mininet> net
+     c0
+     s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1 s1-eth3:s4-eth1
+     s2 lo:  s2-eth1:s1-eth2 s2-eth2:s3-eth1
+     s3 lo:  s3-eth1:s2-eth2 s3-eth2:s4-eth2 s3-eth3:h2-eth0
+     s4 lo:  s4-eth1:s1-eth3 s4-eth2:s3-eth2
+     h1 h1-eth0:s1-eth1
+     h2 h2-eth0:s3-eth3
+
+-  Generate traffic by pinging between hosts h1 and h2 before creating
+   the portmaps.
+
+::
+
+      mininet> h1 ping h2
+      PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
+      From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
+      From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
+      From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
+      From 10.0.0.1 icmp_seq=4 Destination Host Unreachable
+
+Configuration
+^^^^^^^^^^^^^
+
+-  Create a Controller named odc and mention its ip-address in the
+   below create-controller command.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "odc", "ipaddr":"10.100.9.42", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
+
+-  Create a VTN named vtn1 by executing the create-vtn command
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
+
+-  Create a vBridge named vBridge1 in the vtn1 by executing the
+   create-vbr command.
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"odc","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
+
+-  Create two Interfaces named if1 and if2 in vBridge1
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
+    curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
+
+-  Configure two mappings on each of the interfaces by executing the
+   below command.
+
+The interface if1 of the virtual bridge will be mapped to the port
+"s1-eth1" of the switch "openflow:1" of the Mininet. The h1 is connected
+to the port "s1-eth1".
+
+The interface if2 of the virtual bridge will be mapped to the port
+"s3-eth3" of the switch "openflow:3" of the Mininet. The h2 is connected
+to the port "s3-eth3".
+
+::
+
+    curl --user admin:adminpass -H 'content-type: application/json'  -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:01-s1-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if1/portmap.json
+    curl --user admin:adminpass -H 'content-type: application/json'  -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:03-s3-eth3"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if2/portmap.json
+
+-  Generate traffic by pinging between hosts h1 and h2 after creating
+   the portmaps.
+
+::
+
+      mininet> h1 ping h2
+      PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
+      64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=36.4 ms
+      64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.880 ms
+      64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.073 ms
+      64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.081 ms
+
+-  Get the VTN Dataflows information
+
+::
+
+    curl -X GET -H 'content-type: application/json' --user 'admin:adminpass' "http://127.0.0.1:8083/vtn-webapi/dataflows?&switch_id=00:00:00:00:00:00:00:01&port_name=s1-eth1&controller_id=odc&srcmacaddr=de3d.7dec.e4d2&no_vlan_id=true"
+
+-  Create a Flowcondition in the VTN
+
+**(The flow condition, path map, and path policy commands must be
+executed on the controller.)**
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1", "vtn-flow-match":[{"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.1/32","protocol":1,"destination-network":"10.0.0.2/32"},"index":"1"}]}}'
+
+-  Create a Pathmap in the VTN
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-map:set-path-map -d '{"input":{"tenant-name":"vtn1","path-map-list":[{"condition":"cond_1","policy":"1","index": "1","idle-timeout":"300","hard-timeout":"0"}]}}'
+
+-  Create a Pathpolicy in the VTN
+
+::
+
+    curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-policy:set-path-policy -d '{"input":{"operation":"SET","id": "1","default-cost": "10000","vtn-path-cost": [{"port-desc":"openflow:1,3,s1-eth3","cost":"1000"},{"port-desc":"openflow:4,2,s4-eth2","cost":"100000"},{"port-desc":"openflow:3,3,s3-eth3","cost":"10000"}]}}'
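
The verification below shows the chosen route changing once this policy is applied. The reason can be sketched with simple arithmetic: each output port listed in ``vtn-path-cost`` uses its configured cost, and every other port falls back to ``default-cost`` (10000). This sketch assumes costs accumulate per output port along the path:

```shell
# Cost of the original route (s1-eth3 -> s4-eth2 -> s3-eth3) under the
# policy above, vs. the alternate route through s2 at default cost.
via_s4=$((1000 + 100000 + 10000))    # s1-eth3 + s4-eth2 + s3-eth3
via_s2=$((10000 + 10000 + 10000))    # s1-eth2 + s2-eth2 + s3-eth3 (defaults apply)
echo "via s4: $via_s4"   # -> via s4: 111000
echo "via s2: $via_s2"   # -> via s2: 30000
```

Since 30000 is less than 111000, the route through s2 is preferred once the policy is in place, matching the pathinfos output in the verification section.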
+
+Verification
+^^^^^^^^^^^^
+
+-  Before applying Path policy information in the VTN
+
+::
+
+    {
+            "pathinfos": [
+                {
+                  "in_port_name": "s1-eth1",
+                  "out_port_name": "s1-eth3",
+                  "switch_id": "openflow:1"
+                },
+                {
+                  "in_port_name": "s4-eth1",
+                  "out_port_name": "s4-eth2",
+                  "switch_id": "openflow:4"
+                },
+                {
+                   "in_port_name": "s3-eth2",
+                   "out_port_name": "s3-eth3",
+                   "switch_id": "openflow:3"
+                }
+                         ]
+    }
+
+-  After applying Path policy information in the VTN
+
+::
+
+    {
+        "pathinfos": [
+                {
+                  "in_port_name": "s1-eth1",
+                  "out_port_name": "s1-eth2",
+                  "switch_id": "openflow:1"
+                },
+                {
+                  "in_port_name": "s2-eth1",
+                  "out_port_name": "s2-eth2",
+                  "switch_id": "openflow:2"
+                },
+                {
+                   "in_port_name": "s3-eth1",
+                   "out_port_name": "s3-eth3",
+                   "switch_id": "openflow:3"
+                }
+                         ]
+    }
+
+VTN Coordinator (Troubleshooting HowTo)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overview
+^^^^^^^^
+
+This page describes installation troubleshooting steps for the VTN
+Coordinator. OpenDaylight VTN provides multi-tenant virtual network
+functions on OpenDaylight controllers. OpenDaylight VTN consists of two
+parts:
+
+-  VTN Coordinator.
+
+-  VTN Manager.
+
+The VTN Coordinator orchestrates multiple VTN Managers running in
+OpenDaylight controllers and provides VTN applications with the VTN API.
+The VTN Manager is a set of OSGi bundles running in the OpenDaylight
+controller. The current VTN Manager supports only OpenFlow switches. It
+handles PACKET\_IN messages, sends PACKET\_OUT messages, manages host
+information, and installs flow entries into OpenFlow switches to provide
+the VTN Coordinator with virtual network functions. The requirements for
+installing these two are different; therefore, we recommend that you
+install VTN Manager and VTN Coordinator on different machines.
+
+List of installation Troubleshooting How to’s
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Installation:VTN_Coordinator
+
+**After executing db\_setup, have you encountered the error "Failed to
+setup database"?**
+
+The error could be due to one of the following reasons:
+
+-  Access Restriction
+
+Only the user who owns the /usr/local/vtn/ directory and installed VTN
+Coordinator can start db\_setup. Example:
+
+::
+
+      The directory should appear as below (assuming the user is "vtn"):
+      # ls -l /usr/local/
+        drwxr-xr-x. 12 vtn  vtn  4096 Mar 14 21:53 vtn
+      If the user does not own /usr/local/vtn/, please run the below command (assuming the username is vtn):
+                  chown -R vtn:vtn /usr/local/vtn
+
+-  Postgres not Present
+
+::
+
+    1. On Fedora/CentOS/RHEL, check that the /usr/pgsql/<version> directory is present, and ensure the commands initdb, createdb, pg_ctl and psql work. If not, please re-install the postgres packages.
+    2. On Ubuntu, check that the /usr/lib/postgres/<version> directory is present, and check the same commands as in the previous step.
+
+-  Not enough space to create tables
+
+::
+
+    Please check df -k and ensure enough free space is available.
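
For example, a quick scripted check (the 1 GB threshold and the /usr/local location are assumptions, not documented requirements; size the threshold for your expected table data):

```shell
# Print available space on the filesystem holding the VTN installation
# and warn when it drops below a chosen threshold (1 GB here).
dir=/usr/local
threshold_kb=1048576
avail_kb=$(df -kP "$dir" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$threshold_kb" ]; then
    echo "only ${avail_kb} KB free under ${dir} - not enough space to create tables"
else
    echo "${avail_kb} KB free under ${dir}"
fi
```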
+
+-  If the above steps do not solve the problem, please refer to the
+   following log file for the exact error
+
+::
+
+    /usr/local/vtn/var/dbm/unc_setup_db.log
+
+-  List of VTN Coordinator processes
+
+-  Run the below command to ensure the Coordinator daemons are running.
+
+::
+
+        Command: /usr/local/vtn/bin/unc_dmctl status
+
+           Name            Type         IPC Channel      PID
+        -----------     -----------    --------------   ------
+          drvodcd         DRIVER         drvodcd         15972
+          lgcnwd          LOGICAL        lgcnwd          16010
+          phynwd          PHYSICAL       phynwd          15996
+
+-  Issue the curl command to fetch version and ensure the process is
+   able to respond.
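
For example (host, port, and credentials here are this guide's defaults; adjust them for your deployment). The JSON shape matches the ``api_version`` response shown later in this guide:

```shell
# Fetch the API version and pull the version string out of the JSON reply.
# The commented curl line is the live call; the sample reply below lets the
# extraction be shown standalone.
# response=$(curl -s -X GET -H 'content-type: application/json' \
#     --user admin:adminpass http://127.0.0.1:8083/vtn-webapi/api_version.json)
response='{"api_version":{"version":"V1.4"}}'
version=$(printf '%s' "$response" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
echo "VTN Coordinator API version: $version"   # -> VTN Coordinator API version: V1.4
```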
+
+**How to debug a startup failure?**
+
+The following activities take place in order during startup:
+
+-  The database server is started after setting virtual memory to the
+   required value. Any database startup errors will be reflected in one
+   of the below logs.
+
+::
+
+             /usr/local/vtn/var/dbm/unc_db_script.log.
+             /usr/local/vtn/var/db/pg_log/postgresql-*.log (the pattern will have the date)
+
+-  The uncd daemon is kicked off, and it in turn kicks off the rest of
+   the daemons.
+
+::
+
+      Any  uncd startup failures will be reflected in /usr/local/vtn/var/uncd/uncd_start.err.
+
+After setting up the Apache Tomcat server, what are the aspects that should be checked?
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+**Please check if Catalina is running.**
+
+::
+
+        The command ps -ef | grep catalina | grep -v grep should list a catalina process
+
+**If you encounter a situation where the REST API always fails.**
+
+::
+
+      Please check the firewall settings for port 8181 (Beryllium release) or port 8083 (post-Beryllium releases) and enable the port.
+
+**How to debug a REST API returning a failure message?**
+
+Please check the /usr/share/java/apache-tomcat-7.0.39/logs/core/core.log
+for failure details.
+
+**REST API for VTN configuration fails. How to debug?**
+
+The default log level for all daemons is "INFO"; to debug the situation,
+TRACE or DEBUG logs may be needed. To increase the log level for
+individual daemons, please use the commands suggested below
+
+::
+
+      /usr/local/vtn/bin/lgcnw_control loglevel trace   -- upll daemon log
+      /usr/local/vtn/bin/phynw_control loglevel trace   -- uppl daemon log
+      /usr/local/vtn/bin/unc_control loglevel trace     -- uncd daemon log
+      /usr/local/vtn/bin/drvodc_control loglevel trace  -- Driver daemon log
+
+After setting the log levels, the operation can be repeated and the log
+files can be referred for debugging.
+
+**Problems while Installing PostgreSQL due to openssl.**
+
+Errors may occur when trying to install PostgreSQL rpms. PostgreSQL has
+upgraded all their binaries to use the latest openssl versions with the
+fix for http://en.wikipedia.org/wiki/Heartbleed. Please upgrade the
+openssl package to the latest version and re-install. For RHEL 6.1/6.4:
+if you have a subscription, please use it to update the rpms. The
+details are available in the following link:
+https://access.redhat.com/site/solutions/781793
+
+::
+
+      rpm -Uvh http://mirrors.kernel.org/centos/6/os/x86_64/Packages/openssl-1.0.1e-15.el6.x86_64.rpm
+      rpm -ivh http://mirrors.kernel.org/centos/6/os/x86_64/Packages/openssl-devel-1.0.1e-15.el6.x86_64.rpm
+
+For other Linux platforms, please do yum update; the public repositories
+will have the latest openssl, so please install it from there.
+
+Support for Microsoft SCVMM 2012 R2 with ODL VTN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Introduction
+^^^^^^^^^^^^
+
+System Center Virtual Machine Manager (SCVMM) is Microsoft’s management
+solution for the virtualized data center. You can use it to configure
+and manage your virtualization hosts, networking, and storage resources
+in order to create and deploy virtual machines and services to private
+clouds that you have created.
+
+The VSEM Provider is a plug-in to bridge between SCVMM and OpenDaylight.
+
+Microsoft Hyper-V is a server virtualization product developed by
+Microsoft, which provides virtualization services through
+hypervisor-based emulation.
+
+.. figure:: ./images/vtn/setup_diagram_SCVMM.png
+   :alt: Set-Up Diagram
+
+   Set-Up Diagram
+
+**The topology used in this set-up is:**
+
+-  A SCVMM with VSEM Provider installed and a running VTN Coordinator
+   and OpenDaylight with VTN Feature installed.
+
+-  PF1000 virtual switch extension has been installed in the two Hyper-V
+   servers as it implements the OpenFlow capability in Hyper-V.
+
+-  Three OpenFlow switches simulated using mininet and connected to
+   Hyper-V.
+
+-  Four VM’s hosted using SCVMM.
+
+**It is implemented with the following major components:**
+
+-  SCVMM
+
+-  OpenDaylight (VTN Feature)
+
+-  VTN Coordinator
+
+VTN Coordinator
+^^^^^^^^^^^^^^^
+
+OpenDaylight VTN acts as the network service provider for SCVMM: the
+VSEM Provider is added as a Network Service, handles all requests from
+SCVMM, and communicates with the VTN Coordinator. It is used to manage
+the network virtualization provided by OpenDaylight.
+
+Installing HTTPS in VTN Coordinator
+'''''''''''''''''''''''''''''''''''
+
+-  System Center Virtual Machine Manager (SCVMM) supports only the
+   HTTPS protocol.
+
+**Apache Portable Runtime (APR) Installation Steps**
+
+-  Enter the command "yum install **apr**" on the machine where VTN
+   Coordinator is installed.
+
+-  In /usr/bin, create a soft link with "ln -s /usr/bin/apr-1-config
+   /usr/bin/apr-config".
+
+-  Extract tomcat under "/usr/share/java" by using the below command
+   "tar -xvf apache-tomcat-8.0.27.tar.gz -C /usr/share/java".
+
+.. note::
+
+    Please go through the below link to download the
+    apache-tomcat-8.0.27.tar.gz file:
+    https://archive.apache.org/dist/tomcat/tomcat-8/v8.0.27/bin/
+
+-  Go to the directory "cd
+   /usr/share/java/apache-tomcat-8.0.27/bin" and unzip tomcat-native.gz
+   using the command "tar -xvf tomcat-native.gz".
+
+-  Go to the directory "cd
+   /usr/share/java/apache-tomcat-8.0.27/bin/tomcat-native-1.1.33-src/jni/native".
+
+-  Enter the command "./configure --with-os-type=bin
+   --with-apr=/usr/bin/apr-config".
+
+-  Enter the command "make" and "make install".
+
+-  The APR libraries are now installed in "/usr/local/apr/lib".
+
+**Enable HTTP/HTTPS in VTN Coordinator**
+
+Enter the command "firewall-cmd --zone=public --add-port=8083/tcp
+--permanent" and "firewall-cmd --reload" to enable firewall settings in
+server.
+
+**Create a CA’s private key and a self-signed certificate in server**
+
+-  Execute the following command "openssl req -x509 -days 365
+   -extensions v3\_ca -newkey rsa:2048 -out /etc/pki/CA/cacert.pem
+   -keyout /etc/pki/CA/private/cakey.pem" in a single line.
+
++-----------------------+----------------------------------------------------+
+| Argument              | Description                                        |
++=======================+====================================================+
+| Country Name          | | Specify the country code.                        |
+|                       | | For example, JP                                  |
++-----------------------+----------------------------------------------------+
+| State or Province     | | Specify the state or province.                   |
+| Name                  | | For example, Tokyo                               |
++-----------------------+----------------------------------------------------+
+| Locality Name         | | Locality Name                                    |
+|                       | | For example, Chuo-Ku                             |
++-----------------------+----------------------------------------------------+
+| Organization Name     | Specify the company.                               |
++-----------------------+----------------------------------------------------+
+| Organizational Unit   | Specify the department, division, or the like.     |
+| Name                  |                                                    |
++-----------------------+----------------------------------------------------+
+| Common Name           | Specify the host name.                             |
++-----------------------+----------------------------------------------------+
+| Email Address         | Specify the e-mail address.                        |
++-----------------------+----------------------------------------------------+
+
+-  Execute the following commands: "touch /etc/pki/CA/index.txt" and
+   "echo 00 > /etc/pki/CA/serial" in server after setting your CA’s
+   private key.
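
The CA steps above can be scripted as a sketch. The ``-nodes`` and ``-subj`` flags are additions so the command runs unattended (the documented flow answers the prompts from the table interactively), and the scratch directory and subject values here are illustrative, not the /etc/pki/CA paths used in the text:

```shell
# Create a self-signed CA certificate plus the index/serial bookkeeping
# files, non-interactively, under a scratch directory.
CADIR=/tmp/ca-demo
mkdir -p "$CADIR/private"
openssl req -x509 -days 365 -extensions v3_ca -newkey rsa:2048 -nodes \
    -subj '/C=JP/ST=Tokyo/L=Chuo-Ku/O=Example/OU=Lab/CN=vtn-host' \
    -out "$CADIR/cacert.pem" -keyout "$CADIR/private/cakey.pem" 2>/dev/null
touch "$CADIR/index.txt"
echo 00 > "$CADIR/serial"
ls "$CADIR"
```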
+
+**Create a private key and a CSR for web server**
+
+-  Execute the following command "openssl req -new -newkey rsa:2048 -out
+   csr.pem -keyout /usr/local/vtn/tomcat/conf/key.pem" in a single line.
+
+-  Enter the PEM pass phrase: the same password you gave for the CA’s
+   private key PEM pass phrase.
+
++-----------------------+----------------------------------------------------+
+| Argument              | Description                                        |
++=======================+====================================================+
+| Country Name          | | Specify the country code.                        |
+|                       | | For example, JP                                  |
++-----------------------+----------------------------------------------------+
+| State or Province     | | Specify the state or province.                   |
+| Name                  | | For example, Tokyo                               |
++-----------------------+----------------------------------------------------+
+| Locality Name         | | Locality Name                                    |
+|                       | | For example, Chuo-Ku                             |
++-----------------------+----------------------------------------------------+
+| Organization Name     | Specify the company.                               |
++-----------------------+----------------------------------------------------+
+| Organizational Unit   | Specify the department, division, or the like.     |
+| Name                  |                                                    |
++-----------------------+----------------------------------------------------+
+| Common Name           | Specify the host name.                             |
++-----------------------+----------------------------------------------------+
+| Email Address         | Specify the e-mail address.                        |
++-----------------------+----------------------------------------------------+
+| A challenge password  | Specify the challenge password.                    |
++-----------------------+----------------------------------------------------+
+| An optional company   | Specify an optional company name.                  |
+| name                  |                                                    |
++-----------------------+----------------------------------------------------+
+
+**Create a certificate for web server**
+
+-  Execute the following command "openssl ca -in csr.pem -out
+   /usr/local/vtn/tomcat/conf/cert.pem -days 365 -batch" in a single
+   line.
+
+-  Enter the pass phrase for /etc/pki/CA/private/cakey.pem: the same
+   password you gave for the CA’s private key PEM pass phrase.
+
+-  Open the tomcat file using "vim /usr/local/vtn/tomcat/bin/tomcat".
+
+-  Include the line " TOMCAT\_PROPS="$TOMCAT\_PROPS
+   -Djava.library.path=\\"/usr/local/apr/lib\\"" " at line 131 and
+   save the file.
+
+**Edit server.xml file and restart the server**
+
+-  Open the server.xml file using "vim
+   /usr/local/vtn/tomcat/conf/server.xml" and add the below lines.
+
+   ::
+
+       <Connector port="${vtn.port}" protocol="HTTP/1.1" SSLEnabled="true"
+       maxThreads="150" scheme="https" secure="true"
+       SSLCertificateFile="/usr/local/vtn/tomcat/conf/cert.pem"
+       SSLCertificateKeyFile="/usr/local/vtn/tomcat/conf/key.pem"
+       SSLPassword="<same password you gave for the CA's private key PEM pass phrase>"
+       connectionTimeout="20000" />
+
+-  Save the file and restart the server.
+
+-  To stop vtn use the following command.
+
+   ::
+
+       /usr/local/vtn/bin/vtn_stop
+
+-  To start vtn use the following command.
+
+   ::
+
+       /usr/local/vtn/bin/vtn_start
+
+-  Copy the created CA certificate from cacert.pem to cacert.crt by
+   using the following command,
+
+   ::
+
+       openssl x509 -in /etc/pki/CA/cacert.pem -out cacert.crt
+
+   **Checking the HTTP and HTTPS connection from client**
+
+-  You can check the HTTP connection by using the following command:
+
+   ::
+
+       curl -X GET -H 'content-type:application/json' -H 'username:admin' -H 'password:adminpass' http://<server IP address>:8083/vtn-webapi/api_version.json
+
+-  You can check the HTTPS connection by using the following command:
+
+   ::
+
+       curl -X GET -H 'content-type:application/json' -H 'username:admin' -H 'password:adminpass' https://<server IP address>:8083/vtn-webapi/api_version.json --cacert /etc/pki/CA/cacert.pem
+
+-  The response should be like this for both HTTP and HTTPS:
+
+   ::
+
+       {"api_version":{"version":"V1.4"}}
+
+Prerequisites for creating a Network Service in the SCVMM machine
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1.  Please go through the below link to download VSEM Provider zip file,
+    https://nexus.opendaylight.org/content/groups/public/org/opendaylight/vtn/application/vtnmanager-vsemprovider/2.0.0-Beryllium/vtnmanager-vsemprovider-2.0.0-Beryllium-bin.zip
+
+2.  Unzip the vtnmanager-vsemprovider-2.0.0-Beryllium-bin.zip file
+    anywhere in your SCVMM machine.
+
+3.  Stop the SCVMM service from **"service manager→tools→servers→select
+    system center virtual machine manager"** and click stop.
+
+4.  Go to **"C:/Program Files"** in your SCVMM machine. Inside
+    **"C:/Program Files"**, create a folder named **"ODLProvider"**.
+
+5.  Inside **"C:/Program Files/ODLProvider"**, create a folder named
+    "Module" in your SCVMM machine.
+
+6.  Inside "C:/Program Files/ODLProvider/Module", create two folders
+    named **"Odl.VSEMProvider"** and **"VSEMOdlUI"** in your SCVMM
+    machine.
+
+7.  Copy the **"VSEMOdl.dll"** file from
+    **"ODL\_SCVMM\_PROVIDER/ODL\_VSEM\_PROVIDER"** to **"C:/Program
+    Files/ODLProvider/Module/Odl.VSEMProvider"** in your SCVMM machine.
+
+8.  Copy the **"VSEMOdlProvider.psd1"** file from
+    **"application/vsemprovider/VSEMOdlProvider/VSEMOdlProvider.psd1"**
+    to **"C:/Program Files/ODLProvider/Module/Odl.VSEMProvider"** in
+    your SCVMM machine.
+
+9.  Copy the **"VSEMOdlUI.dll"** file from
+    **"ODL\_SCVMM\_PROVIDER/ODL\_VSEM\_PROVIDER\_UI"** to **"C:/Program
+    Files/ODLProvider/Module/VSEMOdlUI"** in your SCVMM machine.
+
+10. Copy the **"VSEMOdlUI.psd1"** file from
+    **"application/vsemprovider/VSEMOdlUI"** to **"C:/Program
+    Files/ODLProvider/Module/VSEMOdlUI"** in your SCVMM machine.
+
+11. Copy the **"reg\_entry.reg"** file from
+    **"ODL\_SCVMM\_PROVIDER/Register\_settings"** to your SCVMM desktop
+    and double click the **"reg\_entry.reg"** file to install registry
+    entry in your SCVMM machine.
+
+12. Download **"PF1000.msi"** from this link,
+    https://www.pf-info.com/License/en/index.php?url=index/index_non_buyer
+    and place into **"C:/Program Files/Switch Extension Drivers"** in
+    your SCVMM machine.
+
+13. Start SCVMM service from **"service manager→tools→servers→select
+    system center virtual machine manager"** and click start.
+
+System Center Virtual Machine Manager (SCVMM)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+It supports two major features:
+
+-  Failover Clustering
+
+-  Live Migration
+
+Failover Clustering
+'''''''''''''''''''
+
+A single Hyper-V host can run a number of virtual machines. If the host
+were to fail, then all of the virtual machines running on it would also
+fail, resulting in a major outage. Failover clustering treats individual
+virtual machines as clustered resources. If a host fails, clustered
+virtual machines are able to fail over to a different Hyper-V server
+where they can continue to run.
+
+Live Migration
+''''''''''''''
+
+Live Migration is used to migrate the running virtual machines from one
+Hyper-V server to another Hyper-V server without any interruptions.
+Please go through the below video for more details:
+
+-  https://youtu.be/34YMOTzbNJM
+
+SCVMM User Guide
+^^^^^^^^^^^^^^^^
+
+-  Please go through the below link for SCVMM user guide:
+   https://wiki.opendaylight.org/images/c/ca/ODL_SCVMM_USER_GUIDE_final.pdf
+
+-  Please go through the below links for more details
+
+   -  OpenDaylight SCVMM VTN Integration: https://youtu.be/iRt4dxtiz94
+
+   -  OpenDaylight Congestion Control with SCVMM VTN:
+      https://youtu.be/34YMOTzbNJM
+
diff --git a/docs/user-guide/yang-ide-user-guide.rst b/docs/user-guide/yang-ide-user-guide.rst
new file mode 100644 (file)
index 0000000..5b03b63
--- /dev/null
@@ -0,0 +1,388 @@
+YANG IDE User Guide
+===================
+
+Overview
+--------
+
+The YANG IDE project provides an Eclipse plugin that is used to create,
+view, and edit Yang model files. It currently supports version 1.0 of
+the Yang specification.
+
+The YANG IDE project uses components from the OpenDaylight project for
+parsing and verifying Yang model files. The "yangtools" parser in
+OpenDaylight is generally used for generating Java code associated with
+Yang models. If you are just using the YANG IDE to view and edit Yang
+models, you do not need to know any more about this.
+
+Although the YANG IDE plugin is used in Eclipse, it is not necessary to
+be familiar with the Java programming language to use it effectively.
+
+The YANG IDE also uses the Maven build tool, but you do not have to be a
+Maven expert to use it, or even know that much about it. Very little
+configuration of Maven files will have to be done by you. In fact, about
+the only thing you will likely ever need to change can be done entirely
+in the Eclipse GUI forms, without even seeing the internal structure of
+the Maven POM file (Project Object Model).
+
+The YANG IDE plugin provides features that are similar to other
+programming language plugins in the Eclipse ecosystem.
+
+For instance, you will find support for the following:
+
+-  Immediate "as-you-type" display of syntactic and semantic errors
+
+-  Intelligent completion of language tokens, limited to only choices
+   valid in the current scope and namespace
+
+-  Consistent (and customizable) color-coding of syntactic and semantic
+   symbols
+
+-  Access to remote Yang models by specifying a dependency on the Maven
+   artifact containing the models (or by manual inclusion in the project)
+
+-  One-click navigation to referenced symbols in external files
+
+-  Mouse hovers display descriptions of referenced components
+
+-  Tools for refactoring or renaming components respect namespaces
+
+-  Code templates can be entered for common conventions
+
+Forthcoming sections of this manual will step through how to utilize
+these features.
+
+Creating a Yang Project
+-----------------------
+
+After the plugin is installed, the next thing you have to do is create a
+Yang Project. This is done from the "File" menu: select "New", navigate
+to the "Yang" section, select "YANG Project", and then click "Next" for
+more items to configure.
+
+Some shortcuts for these steps are the following:
+
+-  Typically, the key sequence "Ctrl+n" (press "n" while holding down
+   one of the "ctrl" keys) is bound to the "new" function
+
+-  In the "New" wizard dialog, the initial focus is in the filter field,
+   where you can enter "yang" to limit the choices to only the functions
+   provided by the YANG IDE plugin
+
+-  On the "New" wizard dialog, instead of clicking the "Next" button
+   with your mouse, you can press "Alt+n" (you will see a hint for this
+   with the "N" being underlined)
+
+First Yang Project Wizard Page
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After the "Next" button is pressed, it goes to the first wizard page
+that is specific to creating Yang projects. You will see a subtitle on
+this page of "YANG Tools Configuration". In almost all cases, you should
+be able to click "Next" again on this page to go to the next wizard
+page.
+
+However, some information about the fields on this page would be
+helpful.
+
+You will see the following labeled fields and sections:
+
+Yang Files Root Directory
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This defaults to "src/main/yang". Except when creating your first Yang
+file, you do not even have to know this, as Eclipse presents the same
+interface to view your Yang files no matter what you set this to.
+
+Source Code Generators
+^^^^^^^^^^^^^^^^^^^^^^
+
+If you do not know what this is, you do not need to know about it. The
+"yangtools" Yang parser from OpenDaylight uses a "code generator"
+component to generate specific kinds of Java classes from the Yang
+models. Again, if you do not need to work with the generated Java code,
+you do not need to change this.
+
+Create Example YANG File
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+This is likely the only field you will ever have any reason to change.
+If this checkbox is set, when the YANG IDE creates the Yang project, it
+will create a sample "acme-system.yang" file which you can view and edit
+to demonstrate the features of the tool to yourself. If you do not need
+this file, then either delete it from the project or uncheck the
+checkbox to prevent its creation.
+
+When done with the fields on this page, click the "Next" button to go to
+the next wizard page.
+
+Second Yang Project Wizard Page
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This page has a subtitle of "New Maven project". There are several
+fields on this page, but you will only ever have to see and change the
+setting of the first field, the "Create a simple project" checkbox. You
+should always set this ON to avoid the selection of a Maven archetype,
+which is something you do not need to do for creating a Yang project.
+
+Click "Next" at the bottom of the page to move to the next wizard page.
+
+Third Yang Project Wizard Page
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This also has a subtitle of "New Maven project", but with different
+fields to set. You will likely only ever set the first two fields, and
+completely ignore everything else.
+
+The first field is labeled "Group id" in the "Artifact" section. It
+really does not matter what you set this to, but it does have to be set
+to something. For consistency, you might set this to the name or
+nickname of your organization. Otherwise, there are no constraints on
+the value of this field.
+
+The second field is labeled "Artifact id". The value of this field will
+be used as the name of the project you create, so you will have to think
+about what you want the project to be called. Also note that this name
+has to be unique in the Eclipse workspace. You cannot have two projects
+with the same name.
+
+After you have set this field, you will notice that the "Next" button is
+insensitive, but now the "Finish" button is sensitive. You can click
+"Finish" now (or use the keyboard shortcut of "Alt+f"), and the Yang IDE
+will finally create your project.
+
+Creating a Yang File
+--------------------
+
+Now that you have created your project, it is time to create your first
+Yang file.
+
+When you created the Yang project, you might have noticed the other
+option next to "YANG Project", which was "YANG File". That is what you
+will select now. Click "Next" to go to the first wizard page.
+
+First Yang File Wizard Page
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This wizard page lets you specify where the new file will be located,
+and its name.
+
+You have to select the particular project you want the file to go into,
+and it needs to go into the "src/main/yang" folder (or a different
+location if you changed that field when creating the project).
+
+You then enter the desired name of the file in the "File name" field.
+The file name should have no spaces or "special characters" in it. You
+can specify a ".yang" extension if you want. If you do not specify an
+extension, the YANG IDE will create the file with the ".yang" extension.
+
+Click "Next" to go to the next wizard page.
+
+Second Yang File Wizard Page
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On this wizard page, you set some metadata about the module that is used
+to initialize the contents of the Yang file.
+
+It has the following fields:
+
+Module Name
+^^^^^^^^^^^
+
+This will default to the "base name" of the file name you created. For
+instance, if the file name you created was "network-setup.yang", this
+field will default to "network-setup". You should leave this value as
+is. There is no good reason to define a model with a name different from
+the file name.
+
+Namespace
+^^^^^^^^^
+
+This defaults to "urn:opendaylight:xxx", where "xxx" is the "base name"
+of the file name you created. You should put a lot of thought into
+designing a namespace naming scheme that is used throughout your
+organization. It is quite common for this namespace value to look like a
+"http" URL, but note that that is just a convention, and will not
+necessarily imply that there is a web page residing at that HTTP
+address.
+
+Prefix
+^^^^^^
+
+This defaults to the "base name" of the file name you created. It mostly
+does not technically matter what you set this to, as long as it is not
+empty. Conventionally, it should be a "nickname" that is used to refer
+to the given namespace in an abbreviated form, when referenced in an
+"import" statement in another Yang model file.
+
+Revision
+^^^^^^^^
+
+This has to be a date value in the form of "yyyy-mm-dd", representing
+the last modified date of this Yang model. The value will default to the
+current date.
+
+Revision Description
+^^^^^^^^^^^^^^^^^^^^
+
+This is just human-readable text, which will go into the "description"
+field underneath the Yang "revision" field, which will describe what
+went into this revision.
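
Taken together, the fields above initialize the module header. For a file named "network-setup.yang" created with the defaults, the generated file would start roughly like this (a sketch only: the exact skeleton the wizard produces may differ, and the revision date and description here are placeholders):

```yang
module network-setup {
    namespace "urn:opendaylight:network-setup";
    prefix network-setup;

    revision 2016-09-05 {
        description
            "Initial revision.";
    }
}
```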
+
+When all the fields have the content you want, click the "Finish" button
+to have the YANG IDE create the file in the specified location. It will
+then present the new file in the editor view for additional
+modifications.
+
+Accessing Artifacts for Yang Model Imports
+------------------------------------------
+
+You might be working on Yang models that are "abstract" or are intended
+to be imported by other Yang models. You might also, and more likely, be
+working on Yang models that import other "abstract" Yang models.
+
+Assuming you are in that latter more common group, you need to consider
+for yourself, and for your organization, how you are going to get access
+to those models that you import.
+
+You could use a very simple and primitive approach of somehow obtaining
+those models from some source as plain files and just copying them into
+the "src/main/yang" folder of your project. For a simple demo or a
+"one-off" very short project, that might be sufficient.
+
+A more robust and maintainable approach would be to reference
+"coordinates" of the artifacts containing Yang models to import. When
+you specify unique coordinates associated with that artifact, the Yang
+IDE can retrieve the artifact in the background and make it available
+for your "import" statements.
+
+Those "coordinates" refer to the Maven concepts of "group id",
+"artifact id", and "version". You may remember "group id" and
+"artifact id" from the wizard page for creating a Yang project. It is
+the same idea. If you ever produce Yang model artifacts that other
+people are going to import, you will want to think more about what you
+set those values to when you created the project.
+
+For example, the OpenDaylight project produces several importable
+artifacts that you can specify to get access to common Yang models.
+
+Turning on Indexing for Maven Repositories
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before we talk about how to add dependencies to Maven artifacts with
+Yang models for import, I need to explain how to make it easier to find
+those artifacts.
+
+In the Yang project that you have created, the "pom.xml" file (also
+called a "POM file") is the file that Maven uses to specify
+dependencies. We will talk about that in a minute, but first we need to
+talk about "repositories". These are where artifacts are stored.
+
+We are going to have Eclipse show us the "Maven Repositories" view. In
+the main menu, select "Window" and then "Show View", and then "Other".
+Like in the "New" dialog, you can enter "maven" in the filter field to
+limit the list to views with "maven" in the name. Click on the "Maven
+Repositories" entry and click OK.
+
+This will usually create the view in the bottom panel of the window.
+
+The view presents an outline view of four principal elements:
+
+-  Local Repositories
+
+-  Global Repositories
+
+-  Project Repositories
+
+-  Custom Repositories
+
+For this purpose, the only section you care about is "Project
+Repositories", which lists the repositories that are specified only in
+the POM for the project. There should be a "right-pointing arrow" icon
+on the line. Click that to expand the entry.
+
+You should see two entries there:
+
+-  opendaylight-release
+
+-  opendaylight-snapshot
+
+You will also see internet URLs associated with each of those
+repositories.
+
+For this purpose, you only care about the first one. Right-click on that
+entry and select "Full Index Enabled". The first time you do this on the
+first project you create, it will spend several minutes walking the
+entire tree of artifacts available at that repository and "indexing" all
+of those components. When this is done, searching for available
+artifacts in that repository will go very quickly.
+
+Adding Dependencies Containing Yang Models
+------------------------------------------
+
+Double-click the "pom.xml" file in your project. Instead of just
+bringing up the view of an XML file (although you can see that if you
+like), it presents a GUI form editor with a handful of tabs.
+
+The first tab, "Overview", shows things like the "Group Id", "Artifact
+Id", and "Version", which together form the "Maven coordinates" of your
+project, mentioned earlier.
+
+Now click on the "Dependencies" tab. You will now see two list
+components, labeled "Dependencies" and "Dependency Management". You only
+care about the "Dependencies" section.
+
+In the "Dependencies" section, you should see one dependency for an
+artifact called "yang-binding". This artifact is part of OpenDaylight,
+but you do not need to know anything about it.
+
+Now click the "Add" button.
+
+This brings up a dialog titled "Select Dependency". It has three fields
+at the top labeled "Group Id", "Artifact Id", and "Version", with a
+"Scope" dropdown. You will never need to change the "Scope" dropdown,
+so ignore it. Although these fields do need values, in general usage
+you will never have to enter them manually; the steps described next
+will fill them in for you.
+
+Below those fields is a field labeled "Enter groupId, artifactId …".
+This is effectively a "filter field", like on the "New" dialog, but
+instead of limiting the list from a short list of choices, the value you
+enter there will be matched against all of the artifacts that were
+indexed in the "opendaylight-release" repository (and others). It will
+match the string you enter as a substring of any groupId or artifactId.
+
+For all of the entries that match that substring, it will list an entry
+showing the groupId and artifactId, with an expansion arrow. If you open
+it by clicking on the arrow, you will see individual entries
+corresponding to each available version of that artifact, along with
+some metadata about the artifacts between square brackets, mostly
+indicating what "type" of artifact it is.
+
+For your purposes, you only ever want to use "bundle" or "jar"
+artifacts.
+
+Let us consider an example that many people will probably be using.
+
+In the filter field, enter "ietf-yang-types". Depending on what versions
+are available, you should see a small handful of "groupId, artifactId"
+entries there. One of them should be groupId
+"org.opendaylight.mdsal.model" and artifactId "ietf-yang-types". Click
+on the expansion arrow to open that.
+
+What you will see at this point depends on what versions are available.
+You will likely want to select the newest one (most likely top of the
+list) that is also either a "bundle" or "jar" type artifact.
+
+If you click on that resulting version entry, you should notice at this
+point that the "Group Id", "Artifact Id", and "Version" fields at the
+top of the dialog are now filled in with the values corresponding to
+this artifact and version.
+
+If this is the version that you want, click OK and this artifact will be
+added to the dependencies in the POM.
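+
+The result is an ordinary Maven dependency element in the POM. As a
+sketch (the version shown is illustrative; use whichever version you
+selected in the dialog)::
+
+    <dependency>
+        <groupId>org.opendaylight.mdsal.model</groupId>
+        <artifactId>ietf-yang-types</artifactId>
+        <!-- illustrative version; use the one selected in the dialog -->
+        <version>2013.07.15.8-Beryllium</version>
+    </dependency>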
+
+This will now make the Yang models found in that artifact available to
+"import" statements in your Yang models, as well as to the completion
+choices for those "import" statements.
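+
+Once the dependency is in place, you can import the models it contains.
+For example (the enclosing module here is hypothetical), using a type
+from ietf-yang-types::
+
+    module my-model {
+        namespace "urn:example:yang:my-model";
+        prefix "my";
+
+        import ietf-yang-types {
+            prefix "yang";
+        }
+
+        leaf last-changed {
+            // yang:date-and-time is defined in ietf-yang-types
+            type yang:date-and-time;
+        }
+    }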
+
diff --git a/docs/user-guide/yang-push.rst b/docs/user-guide/yang-push.rst
new file mode 100644 (file)
index 0000000..795c41f
--- /dev/null
@@ -0,0 +1,82 @@
+YANG-PUSH
+=========
+
+This section describes how to use the YANG-PUSH feature in OpenDaylight
+and contains configuration, administration, and management sections for
+the feature.
+
+Overview
+--------
+
+The YANG PUBSUB project allows applications to place subscriptions upon
+targeted subtrees of YANG datastores residing on remote devices. Changes
+in YANG objects within the remote subtree can be pushed to the
+OpenDaylight MD-SAL and to the application as specified, without
+requiring the controller to make a continuous set of fetch requests.
+
+YANG-PUSH capabilities available
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This module contains the base code which embodies the intent of
+YANG-PUSH requirements for subscription as defined in
+{i2rs-pub-sub-requirements}
+[https://datatracker.ietf.org/doc/draft-ietf-i2rs-pub-sub-requirements/].
+The mechanism for delivering on these YANG-PUSH requirements over
+Netconf transport is defined in {netconf-yang-push} [netconf-yang-push:
+https://tools.ietf.org/html/draft-ietf-netconf-yang-push-00].
+
+Note that in the current release, not all capabilities of
+draft-ietf-netconf-yang-push are realized. Currently, only the
+**create-subscription** RPC from ietf-datastore-push@2015-10-15.yang is
+implemented, and only for periodic subscriptions. Much additional
+functionality is planned for future OpenDaylight releases.
+
+Future YANG-PUSH capabilities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Over time, the intent is to flesh out more robust capabilities which
+will allow OpenDaylight applications to subscribe to YANG-PUSH compliant
+devices. Capabilities for future releases will include:
+
+Support for subscription change/delete:
+
+-  **modify-subscription** RPC support for all mountpoint devices or a
+   particular mountpoint device
+
+-  **delete-subscription** RPC support for all mountpoint devices or a
+   particular mountpoint device
+
+Support for static subscriptions: This will enable the receipt of
+subscription updates pushed from publishing devices where no signaling
+from the controller has been used to establish the subscriptions.
+
+Support for additional transports: NETCONF is not the only transport of
+interest to OpenDaylight or the subscribed devices. Over time this code
+will support RESTCONF and HTTP/2 transport requirements defined in
+{netconf-restconf-yang-push}
+[https://tools.ietf.org/html/draft-voit-netconf-restconf-yang-push-01].
+
+YANG-PUSH Architecture
+----------------------
+
+The code architecture of YANG-PUSH consists of two main elements:
+
+-  YANGPUSH Provider
+
+-  YANGPUSH Listener
+
+The YANGPUSH Provider receives create-subscription requests from
+applications and then establishes/registers the corresponding listener
+which will receive information pushed by a publisher. In addition, the
+YANGPUSH Provider also invokes an augmented OpenDaylight
+create-subscription RPC which enables applications to register for
+notifications as per RFC 5277. This augmentation adds periodic time
+period (duration) and subscription-id values to the existing RPC
+parameters. The Java package supporting this capability is
+“org.opendaylight.yangpush.impl”. YangpushDomProvider is the class which
+supports this YANGPUSH Provider capability.
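+
+As a very rough sketch of that augmented RPC (the element names and
+values below are illustrative, not the exact schema), a periodic
+subscription request resembles an RFC 5277 create-subscription with the
+extra parameters added::
+
+    <rpc message-id="101"
+         xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
+      <create-subscription
+           xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
+        <!-- augmentation (names illustrative): push updates on a
+             periodic timer, tagged with a subscription id -->
+        <period>500</period>
+        <subscription-id>example-sub-1</subscription-id>
+      </create-subscription>
+    </rpc>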
+
+The YANGPUSH Listener accepts update notifications from a device after
+they have been de-encapsulated from the NETCONF transport. The YANGPUSH
+Listener then passes these updates to MD-SAL. This function is
+implemented via the YangpushDOMNotificationListener class within the
+“org.opendaylight.yangpush.listner” Java package. Applications should
+monitor MD-SAL for the availability of newly pushed subscription
+updates.
+
index 17457bc35118073d5339da1884e00806ccb68783..e03b5cc4f4237718772ee607ba5181c776908545 100644 (file)
@@ -1,127 +1,3 @@
 == ALTO Developer Guide ==
 
-=== Overview ===
-The topics of this guide are:
-
-. How to add alto projects as dependencies;
-. How to put/fetch data from ALTO;
-. Basic API and DataType;
-. How to use customized service implementations.
-
-=== Adding ALTO Projects as Dependencies ===
-
-Most ALTO packages can be added as dependencies in Maven projects by putting the
-following code in the _pom.xml_ file.
-
-    <dependency>
-        <groupId>org.opendaylight.alto</groupId>
-        <artifactId>${THE_NAME_OF_THE_PACKAGE_YOU_NEED}</artifactId>
-        <version>${ALTO_VERSION}</version>
-    </dependency>
-
-The current stable version for ALTO is `0.3.0-Boron`.
-
-=== Putting/Fetching data from ALTO ===
-
-==== Using RESTful API ====
-
-There are two kinds of RESTful APIs for ALTO: the one provided by
-`alto-northbound` which follows the formats defined in
-link:https://tools.ietf.org/html/rfc7285[RFC 7285], and the one provided by
-RESTCONF whose format is defined by the YANG model proposed in
-link:https://tools.ietf.org/html/draft-shi-alto-yang-model-03[this draft].
-
-One way to get the URLs for the resources from `alto-northbound` is to visit
-the IRD service first where there is a `uri` field for every entry. However, the
-IRD service is not yet implemented so currently the developers have to construct
-the URLs themselves. The base URL is `/alto` and below is a list
-of the specific paths defined in `alto-core/standard-northbound-route`
-using Jersey `@Path` annotation:
-
-* `/ird/{rid}`: the path to access __IRD__ services;
-* `/networkmap/{rid}[/{tag}]`: the path to access __Network Map__ and __Filtered Network Map__ services;
-* `/costmap/{rid}[/{tag}[/{mode}/{metric}]]`: the path to access __Cost Map__ and __Filtered Cost Map__ services;
-* `/endpointprop`: the path to access __Endpoint Property__ services;
-* `/endpointcost`: the path to access __Endpoint Cost__ services.
-
-NOTE: The segments in brackets are optional.
-
-If you want to fetch the data using RESTCONF, it is highly recommended to take a
-look at the `apidoc` page (http://{CONTROLLER_IP}:8181/apidoc/explorer/index.html)
-after installing the `odl-alto-release` feature in karaf.
-
-It is also worth pointing out that `alto-northbound` only supports `GET` and
-`POST` operations so it is impossible to manipulate the data through its RESTful
-APIs. To modify the data, use `PUT` and `DELETE` methods with RESTCONF.
-
-NOTE: The current implementation uses the `configuration` data store and that
-enables the developers to modify the data directly through RESTCONF. In the future this
-approach might be disabled in the core packages of ALTO but may still be
-available as an extension.
-
-==== Using MD-SAL ====
-
-You can also fetch data from the datastore directly.
-
-First you must get the access to the datastore by registering your module with
-a data broker.
-
-Then an `InstanceIdentifier` must be created. Here is an example of how to build
-an `InstanceIdentifier` for a _network map_:
-
-  import org.opendaylight...alto...Resources;
-  import org.opendaylight...alto...resources.NetworkMaps;
-  import org.opendaylight...alto...resources.network.maps.NetworkMap;
-  import org.opendaylight...alto...resources.network.maps.NetworkMapKey;
-  ...
-  protected
-  InstanceIdentifier<NetworkMap> getNetworkMapIID(String resource_id) {
-    ResourceId rid = ResourceId.getDefaultInstance(resource_id);
-    NetworkMapKey key = new NetworkMapKey(rid);
-    InstanceIdentifier<NetworkMap> iid = null;
-    iid = InstanceIdentifier.builder(Resources.class)
-                            .child(NetworkMaps.class)
-                            .child(NetworkMap.class, key)
-                            .build();
-    return iid;
-  }
-  ...
-
-With the `InstanceIdentifier` you can use `ReadOnlyTransaction`,
-`WriteTransaction` and `ReadWriteTransaction` to manipulate the data
-accordingly. The `simple-impl` package, which provides some of the AD-SAL APIs
-mentioned above, is using this method to get data from the datastore and then
-convert them into RFC7285-compatible objects.
-
-=== Basic API and DataType
-
-.. alto-basic-types: Defines basic types of ALTO protocol.
-
-.. alto-service-model-api: Includes the YANG models for the five basic ALTO services defined in link:https://tools.ietf.org/html/rfc7285[RFC 7285].
-
-.. alto-resourcepool: Manages the meta data of each ALTO service, including capabilities and versions.
-
-.. alto-northbound: Provides the root of RFC7285-compatible services at http://localhost:8080/alto.
-
-.. alto-northbound-route: Provides the root of the network map resources at http://localhost:8080/alto/networkmap/.
-
-=== How to customize service
-
-==== Define new service API
-
-Add a new module in `alto-core/standard-service-models`. For example, we named our
-service model module as `model-example`.
-
-==== Implement service RPC
-
-Add a new module in `alto-basic` to implement a service RPC in `alto-core`.
-
-Currently `alto-core/standard-service-models/model-base` has defined a template of the service RPC.
-You can define your own RPC using `augment` in YANG. Here is an example in `alto-simpleird`.
-
-[source,yang]
-include::augment.yang[]
-
-==== Register northbound route
-
-If necessary, you can add a northbound route module in `alto-core/standard-northbound-routes`.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/alto-developer-guide.html
index c10ae50fa9a25581623e4d0cf526bf2b3284580c..82f8246a61acb44f08e5a247623783eb49b78c52 100644 (file)
@@ -1,70 +1,3 @@
 == Atrium Developer Guide
 
-=== Overview
-Project Atrium is an open source SDN distribution - a vertically integrated
-set of open source components which together form a complete SDN stack.
-It’s goals are threefold:
-
-* Close the large integration-gap of the elements that are needed to build an SDN stack -
-  while there are multiple choices at each layer, there are missing pieces with poor or no integration.
-* Overcome a massive gap in interoperability - This exists both at the switch level,
-  where existing products from different vendors have limited compatibility,
-  making it difficult to connect an arbitrary switch and controller and at an API level,
-  where its difficult to write a portable application across multiple controller platforms.
-* Work closely with network operators on deployable use-cases, so that they could download
-  near production quality code from one location, and get started with functioning
-  software defined networks on real hardware.
-
-=== Architecture
-The key components of Atrium BGP Peering Router Application are as follows:
-
-* Data Plane Switch - Data plane switch is the entity that uses flow table entries installed by
-  BGP Routing Application through SDN controller. In the simplest form data plane switch with
-  the installed flows act like a BGP Router.
-* OpenDaylight Controller - OpenDaylight SDN controller has many utility applications or plugins
-  which are leveraged by the BGP Router application to manage the control plane information.
-* BGP Routing Application - An application running within the OpenDaylight runtime environment
-  to handle I-BGP updates.
-* <<_didm_developer_guide,DIDM>> - DIDM manages the drivers specific to each data plane switch connected to the controller.
-  The drivers are created primarily to hide the underlying complexity of the devices
-  and to expose a uniform API to applications.
-* Flow Objectives API - The driver implementation provides a pipeline abstraction and
-  exposes Flow Objectives API. This means applications need to be aware of only the
-  Flow Objectives API without worrying about the Table IDs or the pipelines.
-* Control Plane Switch - This component is primarily used to connect the OpenDaylight SDN controller
-  with the Quagga Soft-Router and establish a path for forwarding E-BGP packets to and from Quagga.
-* Quagga soft router - An open source routing software that handles E-BGP updates.
-
-=== Key APIs and Interfaces
-
-==== BGP Routing Configuration
-The BGP Routing Configuration maintains information about its BGP Speakers & BGP Peers.
-
-* Configuration data about BGP speakers can be accessed from the below URL:
-+
-     GET http://<controller_ip>:8181/restconf/config/bgpconfig:bgpSpeakers/
-+
-* Configuration data about BGP peers can be accessed from the below URL:
-+
-     GET http://<controller_ip>:8181/restconf/config/bgpconfig:bgpPeers/
-
-==== Host Service
-Host Service API contains the host specific details that can be used during address resolution
-
-* Host specific data can be accessed by using the below REST request:
-+
-    GET http://<controller_ip>:8181/restconf/config/hostservice-api:addresses/
-
-==== BGP Routing Information Base
-The BGP RIB module stores all the route information that it has learnt from its peers.
-
-* Routing Information Base entries can be accessed from the URL below:
-+
-     GET http://<controller_ip>:8181/restconf/operational/bgp-rib:bgp-rib/
-
-==== Forwarding Information Base
-The Forwarding Information Base is used to keep track of active FIB entries.
-
-* FIB entries can be accessed from the URL below:
-+
-     GET http://<controller_ip>:8181/restconf/config/routingservice-api:fibEntries/
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/atrium-developer-guide.html
index 382df4856e9cd67de1c44c782d60d01fab36b23f..5eb312bdb7c4776928fe4a16092b44279f1a7f86 100644 (file)
@@ -1,291 +1,3 @@
 == BGP Developer Guide
 
-=== Overview
-This section provides an overview of the `odl-bgpcep-bgp-all` Karaf feature. This
-feature will install everything needed for BGP (Border Gateway Protocol)
-from establishing the connection, storing the data in RIBs (Route Information
-Base) and displaying data in network-topology overview.
-
-=== BGP Architecture
-
-Each feature represents a module in the BGPCEP codebase. The following diagram
-illustrates how the features are related.
-
-image::bgpcep/bgp-dependency-tree.png[width="500px",title="BGP Dependency Tree"]
-
-=== Key APIs and Interfaces
-
-==== BGP concepts
-
-This module contains the base BGP concepts contained in
-http://tools.ietf.org/html/rfc4271[RFC 4271],
-http://tools.ietf.org/html/rfc4760[RFC 4760],
-http://tools.ietf.org/html/rfc4456[RFC 4456],
-http://tools.ietf.org/html/rfc1997[RFC 1997] and
-http://tools.ietf.org/html/rfc4360[RFC 4360].
-
-All the concepts are described in one yang model:
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/yang/bgp-types.yang;hb=refs/heads/stable/beryllium[bgp-types.yang].
-
-Outside generated classes, there is just one class
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/concepts/src/main/java/org/opendaylight/bgp/concepts/NextHopUtil.java;hb=refs/heads/stable/beryllium[NextHopUtil]
-that contains methods for serializing and parsing NextHop.
-
-==== BGP parser
-
-Base BGP parser includes messages and attributes from
-http://tools.ietf.org/html/rfc4271[RFC 4271],
-http://tools.ietf.org/html/rfc4760[RFC 4760],
-http://tools.ietf.org/html/rfc1997[RFC 1997] and
-http://tools.ietf.org/html/rfc4360[RFC 4360].
-
-_API_ module defines BGP messages in YANG.
-
-_IMPL_ module contains actual parsers and serializers for BGP messages
-and
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-impl/src/main/java/org/opendaylight/protocol/bgp/parser/impl/BGPActivator.java;hb=refs/heads/stable/beryllium[Activator]
-class
-
-_SPI_ module contains helper classes needed for registering parsers into
-activators
-
-===== Registration
-
-All parsers and serializers need to be registered
-into the _Extension provider_. This _Extension provider_ is configured in
-initial configuration of the parser-spi module (`31-bgp.xml`).
-
-[source,xml]
-----
- <module>
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:parser:spi">prefix:bgp-extensions-impl</type>
-  <name>global-bgp-extensions</name>
-  <extension>
-   <type xmlns:bgpspi="urn:opendaylight:params:xml:ns:yang:controller:bgp:parser:spi">bgpspi:extension</type>
-   <name>base-bgp-parser</name>
-  </extension>
-  <extension>
-   <type xmlns:bgpspi="urn:opendaylight:params:xml:ns:yang:controller:bgp:parser:spi">bgpspi:extension</type>
-   <name>bgp-linkstate</name>
-  </extension>
- </module>
-----
-
-* _base-bgp-parser_ - will register parsers and serializers
-implemented in the bgp-parser-impl module
-
-* _bgp-linkstate_ - will register parsers and serializers
-implemented in the bgp-linkstate module
-
-The bgp-linkstate module is a good example of a BGP parser extension.
-
-The configuration of bgp-parser-spi specifies one implementation of
-_Extension provider_ that will take care of registering mentioned parser
-extensions:
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi/pojo/SimpleBGPExtensionProviderContext.java;hb=refs/heads/stable/beryllium[SimpleBGPExtensionProviderContext].
-All registries are implemented in package
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=bgp/parser-spi/src/main/java/org/opendaylight/protocol/bgp/parser/spi;hb=refs/heads/stable/beryllium[bgp-parser-spi].
-
-===== Serializing
-
-The serializing of BGP elements is mostly done in the same way as in <<_pcep_developer_guide,PCEP>>, the only
-exception is the serialization of path attributes, which is described
-here. Path attributes are different from any other BGP element, as
-path attributes don't implement one common interface, but this
-interface contains getters for individual path attributes (this
-structure is because update message can contain exactly one instance of
-each path attribute). This means, that a given _PathAttributes_ object,
-you can only get to the specific type of the path attribute through
-checking its presence. Therefore method _serialize()_ in
-_AttributeRegistry_, won't look up the registered class, instead it will
-go through the registrations and offer this object to the each
-registered parser. This way the object will be passed also to
-serializers unknown to module bgp-parser, for example to
-LinkstateAttributeParser. RFC 4271 recommends ordering path attributes,
-hence the serializers are ordered in a list as they are registered in
-the _Activator_. In other words, this is the only case, where
-registration ordering matters.
-
-image::bgpcep/PathAttributesSerialization.png[width="500px",title="PathAttributesSerialization"]
-
-_serialize()_ method in each Path Attribute parser contains check for
-presence of its attribute in the PathAttributes object, which simply
-returns, if the attribute is not there:
-
-[source,java]
-----
- if (pathAttributes.getAtomicAggregate() == null) {
-     return;
- }
- //continue with serialization of Atomic Aggregate
-----
-
-=== BGP RIB
-
-The BGP RIB module can be divided into two parts:
-
-* BGP listener and speaker session handling
-* RIB handling.
-
-==== Session handling
-
-`31-bgp.xml` defines only bgp-dispatcher and the parser it should be
-using (global-bgp-extensions).
-
-[source,xml]
-----
-<module>
- <type>prefix:bgp-dispatcher-impl</type>
- <name>global-bgp-dispatcher</name>
- <bgp-extensions>
-  <type>bgpspi:extensions</type>
-  <name>global-bgp-extensions</name>
- </bgp-extensions>
- <boss-group>
-  <type>netty:netty-threadgroup</type>
-  <name>global-boss-group</name>
- </boss-group>
- <worker-group>
-  <type>netty:netty-threadgroup</type>
-  <name>global-worker-group</name>
- </worker-group>
-</module>
-----
-
-For user configuration of BGP, check User Guide.
-
-==== Synchronization
-
-Synchronization is a phase, where upon connection, a BGP speaker sends all
-available data about topology to its new client. After the whole
-topology has been advertised, the synchronization is over. For the
-listener, the synchronization is over when the RIB receives End-of-RIB
-(EOR) messages. There is a special EOR message for each AFI (Address Family
-Identifier).
-
-* IPv4 EOR is an empty Update message.
-* Ipv6 EOR is an Update message with empty MP_UNREACH attribute where
-AFI and SAFI (Subsequent Address Family Identifier) are set to Ipv6.
-OpenDaylight also supports EOR for IPv4 in this format.
-* Linkstate EOR is an Update message with empty MP_UNREACH attribute
-where AFI and SAFI are set to Linkstate.
-
-For BGP connections, where both peers support graceful restart, the EORs
-are sent by the BGP speaker and are redirected to RIB, where the specific
-AFI/SAFI table is set to _true_. Without graceful restart, the
-messages are generated by OpenDaylight itself and sent after second keepalive for
-each AFI/SAFI. This is done in
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPSynchronization.java;hb=refs/heads/stable/beryllium[BGPSynchronization].
-
-*Peers*
-
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/BGPPeer.java;hb=refs/heads/stable/beryllium[BGPPeer]
-has various meanings. If you configure BGP listener, _BGPPeer_
-represents the BGP listener itself. If you are configuring BGP speaker,
-you need to provide a list of peers, that are allowed to connect to this
-speaker. Unknown peer represents, in this case, a peer that is allowed
-to be refused. _BGPPeer_ represents in this case peer, that is supposed
-to connect to your speaker. _BGPPeer_ is stored in https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/StrictBGPPeerRegistry.java;hb=refs/heads/stable/beryllium[BGPPeerRegistry].
-This registry controls the number of sessions. Our strict implementation
-limits sessions to one per peer.
-
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/ApplicationPeer.java;hb=refs/heads/stable/beryllium[ApplicationPeer]
-is a special case of peer, that has it's own RIB. This RIB is populated
-from RESTCONF. The RIB is synchronized with default BGP RIB. Incoming
-routes to the default RIB are treated in the same way as they were from a
-BGP peer (speaker or listener) in the network.
-
-==== RIB handling
-
-RIB (Route Information Base) is defined as a concept in
-http://tools.ietf.org/html/rfc4271#section-3.2[RFC 4271]. RFC does not
-define how it should be implemented. In our implementation,
-the routes are stored in the MD-SAL datastore. There are four supported
-routes - _Ipv4Routes_, _Ipv6Routes_, _LinkstateRoutes_ and
-_FlowspecRoutes_.
-
-Each route type needs to provide a
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-spi/src/main/java/org/opendaylight/protocol/bgp/rib/spi/RIBSupport.java;hb=refs/heads/stable/beryllium[RIBSupport.java]
-implementation. _RIBSupport_ tells RIB how to parse binding-aware data
-(BGP Update message) to binding-independent (datastore format).
-
-Following picture describes the data flow from BGP message that is sent
-to _BGPPeer_ to datastore and various types of RIB.
-
-image::bgpcep/RIB.png[height="450px", width="550px",title="RIB"]
-
-*https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibInWriter.java;hb=refs/heads/stable/beryllium[AdjRibInWriter]*
-- represents the first step in putting data to datastore. This writer is
-notified whenever a peer receives an Update message. The message is
-transformed into binding-independent format and pushed into datastore to
-_adj-rib-in_. This RIB is associated with a peer.
-
-*https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/EffectiveRibInWriter.java;hb=refs/heads/stable/beryllium[EffectiveRibInWriter]*
-- this writer is notified whenever _adj-rib-in_ is updated. It applies
-all configured import policies to the routes and stores them in
-_effective-rib-in_. This RIB is also associated with a peer.
-
-*https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/LocRibWriter.java;hb=refs/heads/stable/beryllium[LocRibWriter]*
-- this writer is notified whenever *any* _effective-rib-in_ is updated
-(in any peer). Performs best path selection filtering and stores the
-routes in _loc-rib_. It also determines which routes need to be
-advertised and fills in _adj-rib-out_ that is per peer as well.
-
-*https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/rib-impl/src/main/java/org/opendaylight/protocol/bgp/rib/impl/AdjRibOutListener.java;h=a14fd54a29ea613b381a36248f67491d968963b8;hb=refs/heads/stable/beryllium[AdjRibOutListener]*
-- listens for changes in _adj-rib-out_, transforms the routes into
-BGPUpdate messages and sends them to its associated peer.
-
-=== BGP inet
-
-This module contains only one YANG model
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/inet/src/main/yang/bgp-inet.yang;hb=refs/heads/stable/beryllium[bgp-inet.yang]
-that summarizes the ipv4 and ipv6 extensions to RIB routes and BGP
-messages.
-
-=== BGP flowspec
-
-BGP flowspec is a module that implements
-http://tools.ietf.org/html/rfc5575[RFC 5575] for IPv4 AFI and https://tools.ietf.org/html/draft-ietf-idr-flow-spec-v6-06[draft-ietf-idr-flow-spec-v6-06] for IPv6 AFI.
-The RFC defines an extension to BGP in form of a new subsequent address family, NLRI and
-extended communities. All of those are defined in the
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/flowspec/src/main/yang/bgp-flowspec.yang;hb=refs/heads/stable/beryllium[bgp-flowspec.yang]
-model. In addition to generated sources, the module contains parsers for
-newly defined elements and RIBSupport for flowspec-routes. The route key of
-flowspec routes is a string representing human-readable flowspec
-request.
-
-=== BGP linkstate
-
-BGP linkstate is a module that implements
-http://tools.ietf.org/html/draft-ietf-idr-ls-distribution-04[draft-ietf-idr-ls-distribution]
-version 04. The draft defines an extension to BGP in form of a new
-address family, subsequent address family, NLRI and path attribute. All
-of those are defined in the
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/yang/bgp-linkstate.yang;hb=refs/heads/stable/beryllium[bgp-linkstate.yang]
-model. In addition to generated sources, the module contains
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/attribute/LinkstateAttributeParser.java;hb=refs/heads/stable/beryllium[LinkstateAttributeParser],
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=bgp/linkstate/src/main/java/org/opendaylight/protocol/bgp/linkstate/nlri/LinkstateNlriParser.java;hb=refs/heads/stable/beryllium[LinkstateNlriParser],
-activators for both the parser and the RIB, and a RIBSupport handler for
-the linkstate address family. As each route needs a key, in the case of
-linkstate the route key is defined as a binary string containing the
-whole NLRI serialized to byte format.
-The BGP linkstate extension also supports distribution of MPLS TE state as defined in https://tools.ietf.org/html/draft-ietf-idr-te-lsp-distribution-03[draft-ietf-idr-te-lsp-distribution-03],
-extension for Segment Routing https://tools.ietf.org/html/draft-gredler-idr-bgp-ls-segment-routing-ext-00[draft-gredler-idr-bgp-ls-segment-routing-ext-00] and
-Segment Routing Egress Peer Engineering https://tools.ietf.org/html/draft-ietf-idr-bgpls-segment-routing-epe-02[draft-ietf-idr-bgpls-segment-routing-epe-02].
-
-=== BGP labeled-unicast
-
-BGP labeled unicast is a module that implements https://tools.ietf.org/html/rfc3107[RFC 3107]. The RFC defines an extension to the BGP MP to carry Label Mapping Information
-as a part of the NLRI. The AFI indicates, as usual, the address family of the associated route. The fact that the NLRI contains a label
-is indicated by using SAFI value 4. All of those are defined in https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob_plain;f=bgp/labeled-unicast/src/main/yang/bgp-labeled-unicast.yang;hb=refs/heads/stable/beryllium[bgp-labeled-unicast.yang] model. In addition to the generated sources,
-the module contains a new NLRI codec and RIBSupport. The route key is defined as a binary string in which the whole NLRI is encoded.
-
-=== BGP topology provider
-
-Besides the RIB, BGP data is also stored in the network-topology view. The
-format in which the data is displayed there conforms to
-https://tools.ietf.org/html/draft-clemm-netmod-yang-network-topo-01[draft-clemm-netmod-yang-network-topo].
-
-=== API Reference Documentation
-Javadocs are generated while building _mvn site_
-and they are located in the target/ directory of each module.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/bgp-developer-guide.html
index 88d20e13a1715c4d163d3536aded5e8afb2527d5..ef13fc70ed951e8d30cddee0032b8c680bb133e0 100644 (file)
@@ -1,149 +1,3 @@
 == BGP Monitoring Protocol Developer Guide
 
-=== Overview
-This section provides an overview of *feature odl-bgpcep-bmp*. This
-feature will install everything needed for BMP (BGP Monitoring Protocol)
-including establishing the connection, processing messages, storing
-information about monitored routers, peers and their Adj-RIB-In
-(unprocessed routing information) and Post-Policy Adj-RIB-In
-and displaying the data in the BGP RIBs overview.
-The OpenDaylight BMP plugin plays the role of a monitoring station.
-
-=== Key APIs and Interfaces
-
-==== Session handling
-
-_32-bmp.xml_ defines only the bmp-dispatcher and the parser extensions it
-should be using (_global-bmp-extensions_).
-
-[source,xml]
-----
- <module>
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">prefix:bmp-dispatcher-impl</type>
-  <name>global-bmp-dispatcher</name>
-   <bmp-extensions>
-    <type xmlns:bmp-spi="urn:opendaylight:params:xml:ns:yang:controller:bmp:spi">bmp-spi:extensions</type>
-    <name>global-bmp-extensions</name>
-   </bmp-extensions>
-   <boss-group>
-    <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
-    <name>global-boss-group</name>
-   </boss-group>
-   <worker-group>
-    <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
-    <name>global-worker-group</name>
-  </worker-group>
- </module>
-----
-
-For user configuration of BMP, check User Guide.
-
-==== Parser
-
-The base BMP parser includes messages and attributes from
-https://tools.ietf.org/html/draft-ietf-grow-bmp-15
-
-==== Registration
-
-All parsers and serializers need to be registered
-into the _Extension provider_. This _Extension provider_ is configured in
-the initial configuration of the parser (_32-bmp.xml_).
-
-[source,xml]
-----
- <module>
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bmp:spi">prefix:bmp-extensions-impl</type>
-  <name>global-bmp-extensions</name>
-  <extension>
-   <type xmlns:bmp-spi="urn:opendaylight:params:xml:ns:yang:controller:bmp:spi">bmp-spi:extension</type>
-   <name>bmp-parser-base</name>
-  </extension>
- </module>
-----
-
-* _bmp-parser-base_ - will register parsers and serializers
-implemented in bmp-impl module
-
-==== Parsing
-
-Parsing of BMP elements is mostly done the same way as in BGP. Some of the BMP messages include wrapped
-BGP messages.
-
-==== BMP Monitoring Station
-
-The BMP application (Monitoring Station) serves as a processor of messages incoming from monitored routers.
-Each processed message is transformed, and the relevant information is stored. Route information is stored in a BGP
-RIB data structure.
-
-BMP data is displayed through a single URL that is accessible from the base BMP URL:
-
-_http://<controllerIP>:8181/restconf/operational/bmp-monitor:bmp-monitor_
-
-Each monitoring station will be displayed, and it may contain multiple monitored routers and peers within:
-
-[source,xml]
-----
-<bmp-monitor xmlns="urn:opendaylight:params:xml:ns:yang:bmp-monitor">
- <monitor>
- <monitor-id>example-bmp-monitor</monitor-id>
-  <router>
-  <router-id>127.0.0.11</router-id>
-   <status>up</status>
-   <peer>
-    <peer-id>20.20.20.20</peer-id>
-    <as>72</as>
-    <type>global</type>
-    <peer-session>
-     <remote-port>5000</remote-port>
-     <timestamp-sec>5</timestamp-sec>
-     <status>up</status>
-     <local-address>10.10.10.10</local-address>
-     <local-port>220</local-port>
-    </peer-session>
-    <pre-policy-rib>
-     <tables>
-      <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
-      <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:unicast-subsequent-address-family</safi>
-      <ipv4-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
-       <ipv4-route>
-        <prefix>10.10.10.0/24</prefix>
-         <attributes>
-          ...
-         </attributes>
-       </ipv4-route>
-      </ipv4-routes>
-      <attributes>
-       <uptodate>true</uptodate>
-      </attributes>
-     </tables>
-    </pre-policy-rib>
-    <address>10.10.10.10</address>
-    <post-policy-rib>
-     ...
-    </post-policy-rib>
-    <bgp-id>20.20.20.20</bgp-id>
-    <stats>
-     <timestamp-sec>5</timestamp-sec>
-     <invalidated-cluster-list-loop>53</invalidated-cluster-list-loop>
-     <duplicate-prefix-advertisements>16</duplicate-prefix-advertisements>
-     <loc-rib-routes>100</loc-rib-routes>
-     <duplicate-withdraws>11</duplicate-withdraws>
-     <invalidated-as-confed-loop>55</invalidated-as-confed-loop>
-     <adj-ribs-in-routes>10</adj-ribs-in-routes>
-     <invalidated-as-path-loop>66</invalidated-as-path-loop>
-     <invalidated-originator-id>70</invalidated-originator-id>
-     <rejected-prefixes>8</rejected-prefixes>
-    </stats>
-   </peer>
-   <name>name</name>
-   <description>description</description>
-   <info>some info;</info>
-  </router>
- </monitor>
-</bmp-monitor>
-----
-
-=== API Reference Documentation
-Javadocs are generated while building _mvn site_
-and they are located in the target/ directory of each module.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/bgp-monitoring-protocol-developer-guide.html
index 8f0bdcd9e604742a39001357e2c06905dda82b38..6e84749d83cd18e786a70139a7350f8d336b8ad0 100644 (file)
@@ -1,298 +1,3 @@
 == PCEP Developer Guide
 
-=== Overview
-This section provides an overview of *feature odl-bgpcep-pcep-all* . This
-feature will install everything needed for PCEP (Path Computation Element
-Protocol) including establishing the connection, storing information about LSPs
-(Label Switched Paths) and displaying the data in the network-topology overview.
-
-=== PCEP Architecture
-Each feature represents a module in the BGPCEP codebase. The following diagram
-illustrates how the features are related.
-
-image::bgpcep/pcep-dependency-tree.png[height="450px",width="550px",title="PCEP Dependency Tree"]
-
-=== Key APIs and Interfaces
-
-==== PCEP
-
-===== Session handling
-
-_32-pcep.xml_ defines only the pcep-dispatcher, the parser extensions it should be
-using (_global-pcep-extensions_), and a factory for creating session proposals
-(you can create different proposals for different PCCs (Path Computation Clients)).
-
-[source,xml]
-----
- <module>
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:pcep:impl">prefix:pcep-dispatcher-impl</type>
-  <name>global-pcep-dispatcher</name>
-  <pcep-extensions>
-   <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extensions</type>
-   <name>global-pcep-extensions</name>
-  </pcep-extensions>
-  <pcep-session-proposal-factory>
-   <type xmlns:pcep="urn:opendaylight:params:xml:ns:yang:controller:pcep">pcep:pcep-session-proposal-factory</type>
-   <name>global-pcep-session-proposal-factory</name>
-  </pcep-session-proposal-factory>
-  <boss-group>
-   <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
-   <name>global-boss-group</name>
-  </boss-group>
-  <worker-group>
-   <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-threadgroup</type>
-   <name>global-worker-group</name>
-  </worker-group>
- </module>
-----
-
-For user configuration of PCEP, check User Guide.
-
-===== Parser
-
-The base PCEP parser includes messages and attributes from
-http://tools.ietf.org/html/rfc5441[RFC5441],
-http://tools.ietf.org/html/rfc5541[RFC5541],
-http://tools.ietf.org/html/rfc5455[RFC5455],
-http://tools.ietf.org/html/rfc5557[RFC5557] and
-http://tools.ietf.org/html/rfc5521[RFC5521].
-
-===== Registration
-
-All parsers and serializers need to be registered
-into the _Extension provider_. This _Extension provider_ is configured in
-the initial configuration of the parser-spi module (_32-pcep.xml_).
-
-[source,xml]
-----
-<module>
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">prefix:pcep-extensions-impl</type>
- <name>global-pcep-extensions</name>
- <extension>
-  <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
-  <name>pcep-parser-base</name>
- </extension>
- <extension>
-  <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
-  <name>pcep-parser-ietf-stateful07</name>
- </extension>
- <extension>
-  <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
-  <name>pcep-parser-ietf-initiated00</name>
- </extension>
- <extension>
-  <type xmlns:pcepspi="urn:opendaylight:params:xml:ns:yang:controller:pcep:spi">pcepspi:extension</type>
-  <name>pcep-parser-sync-optimizations</name>
- </extension>
-</module>
-----
-
-* _pcep-parser-base_ - will register parsers and serializers
-implemented in pcep-impl module
-
-* _pcep-parser-ietf-stateful07_ - will register parsers and
-serializers of draft-ietf-pce-stateful-pce-07 implementation
-
-* _pcep-parser-ietf-initiated00_ - will register parser and
-serializer of draft-ietf-pce-pce-initiated-lsp-00 implementation
-
-* _pcep-parser-sync-optimizations_ - will register parser and
-serializers of draft-ietf-pce-stateful-sync-optimizations-03 implementation
-
-Stateful07 module is a good example of a PCEP parser extension.
-
-Configuration of PCEP parsers specifies one implementation of _Extension
-provider_ that will take care of registering mentioned parser
-extensions:
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo/SimplePCEPExtensionProviderContext.java;hb=refs/for/stable/beryllium[SimplePCEPExtensionProviderContext].
-All registries are implemented in package
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=tree;f=pcep/spi/src/main/java/org/opendaylight/protocol/pcep/spi/pojo;hb=refs/for/stable/beryllium[pcep-spi].
-
-===== Parsing
-
-Parsing of PCEP elements is mostly done the same way as in BGP;
-the only exception is message parsing, which is described here.
-
-In BGP messages, parsing of first-level elements (path-attributes)
-can be validated in a simple way, as the attributes should be ordered
-chronologically. PCEP, on the other hand, has a strict object order
-policy that is described in RBNF (Routing Backus-Naur Form) in each RFC.
-Therefore the algorithm for parsing here is to parse all objects in the order
-in which they appear in the message. The result of parsing is a list of _PCEPObjects_
-that is then put through validation. _validate()_ methods are present in each
-message parser. Depending on the complexity of the message, it can
-contain either a simple condition (checking the presence of a mandatory
-object) or a full state machine.
-
-In addition, PCEP requires sending an error message for each
-documented parsing error. This is handled by creating an empty list of
-error messages, _errors_, which is then passed as an argument throughout the whole
-parsing process. If a parser encounters a _PCEPDocumentedException_,
-it has the duty to create an appropriate PCEP error message and add it to
-this list. When the parsing is finished, this list is
-examined and all messages are sent to the peer.
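The error-collection pattern described above can be sketched like this. The class shape and error codes below are invented for illustration and do not match the actual ODL sources:

```python
# Sketch of the error-accumulation pattern: parsers append PCEP error
# messages to a shared list instead of aborting the whole parse.

class PCEPDocumentedException(Exception):
    def __init__(self, message, error_type, error_value):
        super().__init__(message)
        self.error_type = error_type
        self.error_value = error_value

def parse_one(obj):
    # Hypothetical per-object parser: fails when a mandatory object is absent.
    if obj.get("mandatory_missing"):
        raise PCEPDocumentedException("missing mandatory object", 6, 8)
    return obj["name"]

def parse_objects(raw_objects, errors):
    parsed = []
    for obj in raw_objects:
        try:
            parsed.append(parse_one(obj))
        except PCEPDocumentedException as e:
            # Duty of the parser: turn the documented error into a PCEP
            # error message and collect it; parsing of later objects continues.
            errors.append({"type": e.error_type, "value": e.error_value})
    return parsed

errors = []
result = parse_objects(
    [{"name": "RP"}, {"mandatory_missing": True}, {"name": "ENDPOINTS"}],
    errors,
)
print(result)  # ['RP', 'ENDPOINTS']
print(errors)  # [{'type': 6, 'value': 8}]
```

Passing the list through the whole call chain lets one parse produce several error messages, all sent to the peer at the end.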
-
-The following sequence diagram illustrates the process:
-
-image::bgpcep/pcep-parsing.png[height="450px",width="550px",title="Parsing"]
-
-==== PCEP IETF stateful
-
-This section summarizes module pcep-ietf-stateful07. The term
-_stateful_ refers to
-http://tools.ietf.org/html/draft-ietf-pce-stateful-pce[draft-ietf-pce-stateful-pce]
-and
-http://tools.ietf.org/html/draft-ietf-pce-pce-initiated-lsp[draft-ietf-pce-pce-initiated-lsp]
-in versions draft-ietf-pce-stateful-pce-07 with draft-ietf-pce-pce-initiated-lsp-00.
-
-We will upgrade our implementation when the stateful draft is
-promoted to an RFC.
-
-The stateful module is implemented as extensions to pcep-base-parser.
-The stateful draft declares new elements as well as additional fields or
-TLVs (type, length, value) for known objects. All new elements are defined in YANG models that
-contain augmentations to elements defined in
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/beryllium[pcep-types.yang].
-In the case of extending known elements, the _Parser_ class merely extends
-the base class and overrides the necessary methods, as shown in the following
-diagram:
-
-image::bgpcep/validation.png[height="450px",width="550px",title="Extending existing parsers"]
-
-All parsers (including those for newly defined PCEP elements) have to be
-registered via the _Activator_ class. This class is present in both modules.
-
-In addition to parsers, the stateful module also introduces an additional session
-proposal. This proposal includes the new fields defined in the stateful drafts
-for Open object.
-
-==== PCEP segment routing (SR)
-
-PCEP Segment Routing is an extension of base PCEP and
-pcep-ietf-stateful-07 extension. The pcep-segment-routing module
-implements
-http://tools.ietf.org/html/draft-ietf-pce-segment-routing-01[draft-ietf-pce-segment-routing-01].
-
-The extension brings new SR-ERO (Explicit Route Object) and SR-RRO (Reported Route Object)
-subobjects composed of a SID (Segment Identifier) and/or NAI (Node or Adjacency Identifier).
-The Segment Routing path is carried in the ERO and RRO objects, as a list of
-SR-ERO/SR-RRO subobjects in an order specified by the user. The draft defines a new TLV,
-the SR-PCE-CAPABILITY TLV, carried in the PCEP Open object and used to negotiate Segment
-Routing capability.
-
-The yang models of subobject, SR-PCE-CAPABILITY TLV and appropriate
-augmentations are defined in
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/yang/odl-pcep-segment-routing.yang;hb=refs/for/stable/beryllium[odl-pcep-segment-routing.yang]. +
-The pcep-segment-routing module includes parsers/serializers for new
-subobject
-(https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrEroSubobjectParser.java;hb=refs/for/stable/beryllium[SrEroSubobjectParser])
-and TLV
-(https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/segment-routing/src/main/java/org/opendaylight/protocol/pcep/segment/routing/SrPceCapabilityTlvParser.java;hb=refs/for/stable/beryllium[SrPceCapabilityTlvParser]).
-
-The pcep-segment-routing module implements
-http://tools.ietf.org/html/draft-ietf-pce-lsp-setup-type-01[draft-ietf-pce-lsp-setup-type-01],
-too. The draft defines a new TLV, the Path Setup Type TLV, whose value
-indicates the path setup signaling technique. The TLV may be included in the
-RP (Request Parameters)/SRP (Stateful PCE Request Parameters) object.
-For the default RSVP-TE (Resource Reservation Protocol), the TLV is omitted.
-For Segment Routing, PST = 1 is defined.
-
-The Path Setup Type TLV is modeled with yang in module
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/api/src/main/yang/pcep-types.yang;hb=refs/for/stable/beryllium[pcep-types.yang].
-A parser/serializer is implemented in
-https://git.opendaylight.org/gerrit/gitweb?p=bgpcep.git;a=blob;f=pcep/impl/src/main/java/org/opendaylight/protocol/pcep/impl/tlv/PathSetupTypeTlvParser.java;hb=refs/for/stable/beryllium[PathSetupTypeTlvParser]
-and it is overridden in the segment-routing module to provide the additional
-PST.
-
-==== PCEP Synchronization Procedures Optimization
-
-Optimizations of Label Switched Path State Synchronization Procedures for a Stateful PCE (draft-ietf-pce-stateful-sync-optimizations-03) specifies the following optimizations for state synchronization and the corresponding PCEP procedures and extensions:
-
-* *State Synchronization Avoidance:* To skip state synchronization if the state has survived and not changed during session restart.
-
-* *Incremental State Synchronization:* To do incremental (delta) state synchronization when possible.
-
-* *PCE-triggered Initial Synchronization:* To let PCE control the timing of the initial state synchronization.
-The capability can be applied to both full and incremental state synchronization.
-
-* *PCE-triggered Re-synchronization:* To let PCE re-synchronize the state for sanity check.
-
-
-==== PCEP Topology
-
-PCEP data is displayed through a single URL that is accessible from the base network-topology URL:
-
-_http://localhost:8181/restconf/operational/network-topology:network-topology/topology/pcep-topology_
-
-Each PCC will be displayed as a node:
-
-[source,xml]
-----
-<node>
- <path-computation-client>
-  <ip-address>42.42.42.42</ip-address>
-  <state-sync>synchronized</state-sync>
-  <stateful-tlv>
-   <stateful>
-    <initiation>true</initiation>
-    <lsp-update-capability>true</lsp-update-capability>
-   </stateful>
-  </stateful-tlv>
- </path-computation-client>
- <node-id>pcc://42.42.42.42</node-id>
-</node>
-----
-
-If some tunnels are configured on the network, they would be displayed on the same page, within the node that initiated the tunnel:
-
-[source,xml]
-----
-<node>
- <path-computation-client>
-  <state-sync>synchronized</state-sync>
-  <stateful-tlv>
-   <stateful>
-    <initiation>true</initiation>
-    <lsp-update-capability>true</lsp-update-capability>
-   </stateful>
-  </stateful-tlv>
-  <reported-lsp>
-   <name>foo</name>
-   <lsp>
-    <operational>down</operational>
-    <sync>false</sync>
-    <ignore>false</ignore>
-    <plsp-id>1</plsp-id>
-    <create>false</create>
-    <administrative>true</administrative>
-    <remove>false</remove>
-    <delegate>true</delegate>
-    <processing-rule>false</processing-rule>
-    <tlvs>
-    <lsp-identifiers>
-      <ipv4>
-       <ipv4-tunnel-sender-address>43.43.43.43</ipv4-tunnel-sender-address>
-       <ipv4-tunnel-endpoint-address>0.0.0.0</ipv4-tunnel-endpoint-address>
-       <ipv4-extended-tunnel-id>0.0.0.0</ipv4-extended-tunnel-id>
-      </ipv4>
-      <tunnel-id>0</tunnel-id>
-      <lsp-id>0</lsp-id>
-     </lsp-identifiers>
-     <symbolic-path-name>
-      <path-name>Zm9v</path-name>
-     </symbolic-path-name>
-    </tlvs>
-   </lsp>
-  </reported-lsp>
-  <ip-address>43.43.43.43</ip-address>
- </path-computation-client>
- <node-id>pcc://43.43.43.43</node-id>
-</node>
-----
-
-Note that the _<path-name>_ tag displays the tunnel name in Base64 encoding.
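For example, the _<path-name>_ value shown above can be decoded with any Base64 tool:

```python
import base64

# The symbolic path name is transported Base64-encoded in RESTCONF output;
# "Zm9v" decodes to the tunnel name "foo" used in the sample above.
decoded = base64.b64decode("Zm9v").decode("ascii")
print(decoded)  # foo
```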
-
-=== API Reference Documentation
-Javadocs are generated while building _mvn site_
-and they are located in the target/ directory of each module.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/pcep-developer-guide.html
index 7c03e34b4b8fcc987a78f18822dd01822f1388d0..17f42a7651c5cfb24c87c05e37faea3b0cbfd50e 100644 (file)
@@ -106,6 +106,8 @@ include::packetcable/packetcable-dev.adoc[Packet Cable PCMM Southbound Plugin]
 
 include::sfc/sfc.adoc[]
 
+include::snbi/odl-snbi-dev.adoc[]
+
 include::snmp4sdn/snmp4sdn-developer.adoc[SNMP4SDN]
 
 include::sxp/odl-sxp-dev.adoc[]
index e0c94c53b59af3a0f000353795b32efde06af61d..7025ef68ed769eb54f4cffc773ddcda853532530 100644 (file)
@@ -1,93 +1,3 @@
 == Cardinal: OpenDaylight Monitoring as a Service 
 
-=== Overview
-Cardinal (OpenDaylight Monitoring as a Service) enables OpenDaylight and the underlying software defined network to be remotely monitored by deployed Network Management Systems (NMS) or Analytics suite. In the Boron release, Cardinal adds:
-
-. An OpenDaylight MIB.
-. Exposure of ODL diagnostics/monitoring across SNMP (v2c, v3) and REST northbound interfaces.
-. Extended ODL system health, Karaf parameter and feature info, ODL plugin scalability, and network parameters.
-. Support for autonomous notifications (SNMP traps).
-
-=== Cardinal Architecture
-The Cardinal architecture can be found at the below link:
-
-https://wiki.opendaylight.org/images/8/89/Cardinal-ODL_Monitoring_as_a_Service_V2.pdf
-
-=== Key APIs and Interfaces
-There are two main APIs for issuing snmpget requests for Karaf info and system info.
-To expose these APIs, it is assumed that you already have the `odl-cardinal` and `odl-restconf` features installed. You can do that by entering the following at the Karaf console:
-
-       feature:install odl-cardinal
-       feature:install odl-restconf-all
-
-==== System Info APIs
-
-Open the REST interface and, using basic authentication, execute the REST API for system info:
-
-       http://localhost:8181/restconf/operational/cardinal:CardinalSystemInfo/
-
-You should get a 200 OK response with output similar to the following:
-
- {
-   "CardinalSystemInfo": {
-     "odlSystemMemUsage": " 9",
-     "odlSystemSysInfo": " OpenDaylight Node Information",
-     "odlSystemOdlUptime": " 00:29",
-     "odlSystemCpuUsage": " 271",
-     "odlSystemHostAddress": " Address of the Host should come up"
-   }
- }
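A client consuming this response only needs standard JSON handling. The snippet below parses the sample payload shown above; the field values are the sample's, not live data:

```python
import json

# Sample CardinalSystemInfo payload as returned by the RESTCONF endpoint.
sample = """
{
  "CardinalSystemInfo": {
    "odlSystemMemUsage": " 9",
    "odlSystemSysInfo": " OpenDaylight Node Information",
    "odlSystemOdlUptime": " 00:29",
    "odlSystemCpuUsage": " 271",
    "odlSystemHostAddress": " Address of the Host should come up"
  }
}
"""

info = json.loads(sample)["CardinalSystemInfo"]
# Values carry a leading space in the sample output, hence strip().
print(info["odlSystemOdlUptime"].strip())  # 00:29
```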
-
-==== Karaf Info APIs
-
-Open the REST interface and using the basic authentication, execute REST APIs for system info as:
-
-       http://localhost:8181/restconf/operational/cardinal-karaf:CardinalKarafInfo/
-
-You should get the response code of the same as 200 OK with the following output as:
-
-   {
-   "CardinalKarafInfo": {
-     "odlKarafBundleListActive1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
-     "odlKarafBundleListActive2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
-     "odlKarafBundleListActive3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
-     "odlKarafBundleListActive4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
-     "odlKarafBundleListActive5": " org.apache.karaf.service.guard_3.0.6 [5]",
-     "odlKarafBundleListActive6": " org.apache.felix.configadmin_1.8.4 [6]",
-     "odlKarafBundleListActive7": " org.apache.felix.fileinstall_3.5.2 [7]",
-     "odlKarafBundleListActive8": " org.objectweb.asm.all_5.0.3 [8]",
-     "odlKarafBundleListActive9": " org.apache.aries.util_1.1.1 [9]",
-     "odlKarafBundleListActive10": " org.apache.aries.proxy.api_1.0.1 [10]",
-     "odlKarafBundleListInstalled1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
-     "odlKarafBundleListInstalled2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
-     "odlKarafBundleListInstalled3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
-     "odlKarafBundleListInstalled4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
-     "odlKarafBundleListInstalled5": " org.apache.karaf.service.guard_3.0.6 [5]",
-     "odlKarafFeatureListInstalled1": " config",
-     "odlKarafFeatureListInstalled2": " region",
-     "odlKarafFeatureListInstalled3": " package",
-     "odlKarafFeatureListInstalled4": " http",
-     "odlKarafFeatureListInstalled5": " war",
-     "odlKarafFeatureListInstalled6": " kar",
-     "odlKarafFeatureListInstalled7": " ssh",
-     "odlKarafFeatureListInstalled8": " management",
-     "odlKarafFeatureListInstalled9": " odl-netty",
-     "odlKarafFeatureListInstalled10": " odl-lmax",
-     "odlKarafBundleListResolved1": " org.ops4j.pax.url.mvn_2.4.5 [1]",
-     "odlKarafBundleListResolved2": " org.ops4j.pax.url.wrap_2.4.5 [2]",
-     "odlKarafBundleListResolved3": " org.ops4j.pax.logging.pax-logging-api_1.8.4 [3]",
-     "odlKarafBundleListResolved4": " org.ops4j.pax.logging.pax-logging-service_1.8.4 [4]",
-     "odlKarafBundleListResolved5": " org.apache.karaf.service.guard_3.0.6 [5]",
-     "odlKarafFeatureListUnInstalled1": " aries-annotation",
-     "odlKarafFeatureListUnInstalled2": " wrapper",
-     "odlKarafFeatureListUnInstalled3": " service-wrapper",
-     "odlKarafFeatureListUnInstalled4": " obr", 
-     "odlKarafFeatureListUnInstalled5": " http-whiteboard",
-     "odlKarafFeatureListUnInstalled6": " jetty",
-     "odlKarafFeatureListUnInstalled7": " webconsole",
-     "odlKarafFeatureListUnInstalled8": " scheduler",
-     "odlKarafFeatureListUnInstalled9": " eventadmin",
-     "odlKarafFeatureListUnInstalled10": " jasypt-encryption"
-   }
- }
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/cardinal_-opendaylight-monitoring-as-a-service.rst
diff --git a/manuals/developer-guide/src/main/asciidoc/controller/config.adoc b/manuals/developer-guide/src/main/asciidoc/controller/config.adoc
deleted file mode 100644 (file)
index 9ffd233..0000000
+++ /dev/null
@@ -1,108 +0,0 @@
-=== Config Subsystem
-
-==== Overview
-The Controller configuration operation has three stages:
-
-* First, a Proposed configuration is created. Its target is to replace the old configuration.
-* Second, the Proposed configuration is validated, and then committed. If it passes validation successfully, the Proposed configuration state will be changed to Validated.
-* Finally, a Validated configuration can be Committed, and the affected modules can be reconfigured.
-
-In fact, each configuration operation is wrapped in a transaction. Once a transaction is created, it can be configured, that is to say, a user can abort the transaction during this stage. After the transaction configuration is done, it is committed to the validation stage. In this stage, the validation procedures are invoked.
- If one or more validations fail, the transaction can be reconfigured. Upon success, the second phase commit is invoked.
- If this commit is successful, the transaction enters the last stage, Committed. After that, the desired modules are reconfigured. If the second-phase commit fails, the transaction is unhealthy: basically, the creation of a new configuration instance failed, and the application can be left in an inconsistent state.
-
-.Configuration states
-image::configuration.jpg[width=500]
-
-.Transaction states
-image::Transaction.jpg[width=500]
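The Proposed/Validated/Committed life-cycle described above can be sketched as a small state machine. The state and method names below are this sketch's own, not the ODL API:

```python
# Minimal sketch of the transaction life-cycle:
# proposed -> validated -> committed, with abort available along the way
# and reconfiguration (staying in "proposed") after a failed validation.

class ConfigTransaction:
    def __init__(self, validators):
        self.state = "proposed"
        self.validators = validators

    def validate(self):
        assert self.state == "proposed"
        if all(check() for check in self.validators):
            self.state = "validated"
        return self.state == "validated"

    def commit(self):
        # Second-phase commit is only legal on a validated transaction.
        assert self.state == "validated"
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

tx = ConfigTransaction(validators=[lambda: True])
if tx.validate():
    tx.commit()
print(tx.state)  # committed
```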
-
-==== Validation
-To secure the consistency and safety of the new configuration and to avoid conflicts, the configuration validation process is necessary.
-Usually, validation checks the input parameters of a new configuration, and mostly verifies module-specific relationships.
-The validation procedure results in a decision on whether the proposed configuration is healthy.
-
-==== Dependency resolver
-Since there can be dependencies between modules, a change in a module configuration can affect the state of other modules. Therefore, we need to verify whether dependencies on other modules can be resolved.
-The Dependency Resolver acts in a manner similar to dependency injectors. Basically, a dependency tree is built.
-
-==== APIs and SPIs
-This section describes configuration system APIs and SPIs.
-
-
-===== SPIs
-*Module* org.opendaylight.controller.config.spi. Module is the common interface for all modules: every module must implement it. The module is designated to hold configuration attributes, validate them, and create service instances based on the attributes.
-This instance must implement the AutoCloseable interface to allow resource cleanup. If the module was created from an already running instance, it contains an old instance of the module. A module can implement multiple services. If the module depends on other modules, setters need to be annotated with @RequireInterface.
-
-*Module creation*
-
-. The module needs to be configured, set with all required attributes.
-. The module is then moved to the commit stage for validation. If the validation fails, the module attributes can be reconfigured. Otherwise, a new instance is either created, or an old instance is reconfigured.
-A module instance is identified by ModuleIdentifier, consisting of the factory name and instance name.
-
-*ModuleFactory* org.opendaylight.controller.config.spi. The ModuleFactory interface must be implemented by each module factory. +
-A module factory can create a new module instance in two ways: +
-
-* From an existing module instance
-* An entirely new instance +
-ModuleFactory can also return default modules, useful for populating the registry with already existing configurations.
-A module factory implementation must have a globally unique name.
-
-===== APIs
-
-|===
-| ConfigRegistry | Represents functionality provided by a configuration transaction (create, destroy module, validate, or abort transaction).
-| ConfigTransactionController | Represents functionality for manipulating configuration transactions (begin, commit config).
-| RuntimeBeanRegistratorAwareConfigBean | The module implementing this interface will receive RuntimeBeanRegistrator before getInstance is invoked.
-|===
-
-===== Runtime APIs
-
-|===
-| RuntimeBean | Common interface for all runtime beans
-| RootRuntimeBeanRegistrator | Represents functionality for root runtime bean registration, which subsequently allows hierarchical registrations
-| HierarchicalRuntimeBeanRegistration | Represents functionality for runtime bean registration and unregistration from the hierarchy
-|===
-
-===== JMX APIs
-
-The JMX API serves as a bridge between the Client API and the JMX platform. +
-
-|===
-| ConfigTransactionControllerMXBean | Extends ConfigTransactionController, executed by Jolokia clients on configuration transaction.
-| ConfigRegistryMXBean | Represents entry point of configuration management for MXBeans.
-| Object names | Object Name is the pattern used in JMX to locate JMX beans. It consists of domain and key properties (at least one key-value pair). Domain is defined as "org.opendaylight.controller". The only mandatory property is "type".
-|===
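-
-The Object Name convention above can be sketched with the plain JDK JMX API;
-the factory and instance names used here are hypothetical:
-
-[source, java]
-----
-import javax.management.ObjectName;
-
-public class ObjectNameExample {
-    public static void main(String[] args) throws Exception {
-        // The domain is "org.opendaylight.controller"; "type" is the only
-        // mandatory key property. Other key-value pairs are free-form.
-        ObjectName name = new ObjectName(
-            "org.opendaylight.controller:type=Module,moduleFactoryName=example-factory,instanceName=example-instance");
-
-        System.out.println(name.getDomain());            // org.opendaylight.controller
-        System.out.println(name.getKeyProperty("type")); // Module
-    }
-}
-----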
-
-===== Use case scenarios
-
-A few samples of successful and unsuccessful transaction scenarios follow: +
-
-*Successful commit scenario*
-
-. The user creates a transaction by calling the createTransaction() method on ConfigRegistry.
-. ConfigRegistry creates a transaction controller, and registers the transaction as a new bean.
-. Runtime configurations are copied to the transaction. The user can create modules and set their attributes.
-. The configuration transaction is committed.
-. The validation process is performed.
-. After successful validation, the second phase commit begins.
-. Modules proposed to be destroyed are destroyed, and their service instances are closed.
-. Runtime beans are set to registrator.
-. The transaction controller invokes the method getInstance on each module.
-. The transaction is committed, and resources are either closed or released.
-
-*Validation failure scenario* +
-The transaction is the same as the previous case until the validation process. +
-
-. If validation fails (that is to say, illegal input attribute values or a dependency resolver failure), a ValidationException is thrown and exposed to the user.
-. The user can decide to reconfigure the transaction and commit again, or abort the current transaction.
-. On aborted transactions, TransactionController and JMXRegistrator are properly closed.
-. Unregistration event is sent to ConfigRegistry.
-
-===== Default module instances
-The configuration subsystem provides a way for modules to create default instances. A default instance is an instance of a module that is created at module bundle start-up (that is, when the module becomes visible to the
-configuration subsystem, for example, when its bundle is activated in the OSGi environment). By default, no default instances are produced.
-
-A default instance does not differ from instances created later in the module life-cycle. The only difference is that the configuration for a default instance cannot be provided by the configuration subsystem.
-The module has to acquire the configuration for these instances on its own; it can be acquired from, for example, environment variables.
-After the creation of a default instance, it acts as a regular instance and fully participates in the configuration subsystem (it can be reconfigured or deleted in subsequent transactions).
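-
-Because the configuration subsystem cannot supply the configuration for a
-default instance, a module typically falls back to a built-in default when the
-external source is absent. A minimal sketch of reading such configuration from
-an environment variable (the variable name and port are hypothetical, not part
-of any OpenDaylight API):
-
-[source, java]
-----
-public class DefaultInstanceConfig {
-    // Hypothetical variable name used only for illustration.
-    static final String ENV_VAR = "EXAMPLE_MODULE_PORT";
-    static final int DEFAULT_PORT = 8181;
-
-    static int resolvePort() {
-        String raw = System.getenv(ENV_VAR);
-        if (raw == null || raw.isEmpty()) {
-            return DEFAULT_PORT; // no external configuration present
-        }
-        return Integer.parseInt(raw);
-    }
-
-    public static void main(String[] args) {
-        System.out.println("Default instance port: " + resolvePort());
-    }
-}
-----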
index 903c4c5a7340471476feb41f226f3bc1af5c20d6..8b6ed5d1a948e9decbaa86394492d21ff4590563 100644 (file)
@@ -1,54 +1,3 @@
 == Controller
 
-=== Overview ===
-
-The OpenDaylight Controller is a Java-based, model-driven controller that uses YANG
-as its modeling language for various aspects of the system and applications,
-and with its components serves as a base platform for other OpenDaylight
-applications.
-
-The OpenDaylight Controller relies on the following technologies:
-
-* *OSGi* - This framework is the back-end of OpenDaylight, as it allows
-dynamic loading of bundles and packaged JAR files, and binds bundles
-together for exchanging information.
-* *Karaf* - Application container built on top of OSGi, which simplifies
-    operational aspects of packaging and installing applications.
-* *YANG* - a data modeling language used to model configuration and
-   state data manipulated by the applications, remote procedure calls, and
-   notifications.
-
-The OpenDaylight Controller provides the following model-driven subsystems as a
-foundation for Java applications:
-
-* *<<_config_subsystem, Config Subsystem>>* - an activation, dependency-injection
-   and configuration framework, which allows two-phase commits of configuration
-   and dependency-injection, and allows for run-time rewiring.
-* *<<_md_sal_overview,MD-SAL>>* - messaging and data storage functionality for data,
-   notifications and RPCs modeled by application developers. MD-SAL uses YANG
-   as the modeling for both interface and data definitions, and provides
-   a messaging and data-centric runtime for such services based on YANG modeling.
-* *MD-SAL Clustering* - enables cluster support for core MD-SAL functionality
-   and provides location-transparent access to YANG-modeled data.
-
-The OpenDaylight Controller supports external access to applications and data using the
-following model-driven protocols:
-
-* *NETCONF* - XML-based RPC protocol, which allows a client to
-   invoke YANG-modeled RPCs, receive notifications, and read, modify and
-   manipulate YANG-modeled data.
-* *RESTCONF* - HTTP-based protocol, which provides REST-like APIs to manipulate
-   YANG modeled data and invoke YANG modeled RPCs, using XML or JSON as payload
-   format.
-
-include::md-sal-overview.adoc[MD-SAL]
-
-include::md-sal-data-tx.adoc[]
-
-include::md-sal-rpc-routing.adoc[MD-SAL Rpc Routing]
-
-include::restconf.adoc[RESTCONF]
-
-include::websocket-notifications.adoc[Websocket Notifications]
-
-include::config.adoc[Config Subsystem]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/controller.html
diff --git a/manuals/developer-guide/src/main/asciidoc/controller/md-sal-data-tx.adoc b/manuals/developer-guide/src/main/asciidoc/controller/md-sal-data-tx.adoc
deleted file mode 100644 (file)
index 2f0f553..0000000
+++ /dev/null
@@ -1,422 +0,0 @@
-=== MD-SAL Data Transactions
-
-MD-SAL *Data Broker* provides transactional access to conceptual *data trees*
-representing configuration and operational state.
-
-NOTE: A *data tree* usually represents the state of the modeled data; typically this
-      is the state of the controller, applications and also external systems (network
-      devices).
-
-*Transactions* provide a *<<_transaction_isolation, stable and isolated view>>*
-separate from other currently running transactions. The state of a running transaction and its
-underlying data tree is not affected by other concurrently running transactions.
-
-.Transaction Types
-Write-Only::
-    Transaction provides only modification capabilities, but does not provide
-    read capabilities. Write-only transaction is allocated using
-    `newWriteOnlyTransaction()`.
-+
-NOTE: This allows less state tracking for
-      write-only transactions and allows MD-SAL Clustering to optimize
-      internal representation of transaction in cluster.
-Read-Write::
-    Transaction provides both read and write capabilities. It is allocated using
-    `newReadWriteTransaction()`.
-Read-Only::
-    Transaction provides stable read-only view based on current data tree.
-    Read-only view is not affected by any subsequent write transactions.
-    Read-only transaction is allocated using `newReadOnlyTransaction()`.
-+
-NOTE: If an application needs to observe changes itself in data tree, it should use
-*data tree listeners* instead of read-only transactions and polling data tree.
-
-Transactions may be allocated using the *data broker* itself or using a
-*transaction chain*. In the case of a *transaction chain*, a newly allocated transaction
-is based not on the current state of the data tree, but on the state introduced by
-the previous transaction from the same chain, even if the commit for that previous
-transaction has not yet occurred (as long as the transaction was submitted).
-
-
-==== Write-Only & Read-Write Transaction
-
-Write-Only and Read-Write transactions provide modification capabilities for
-the conceptual data trees.
-
-.Usual workflow for data tree modifications
-1. application allocates new transactions using `newWriteOnlyTransaction()`
-   or `newReadWriteTransaction()`.
-2. application <<_modification_of_data_tree,modifies data tree>> using `put`,
-   `merge` and/or `delete`.
-3. application finishes transaction using <<_submitting_transaction,`submit()`>>,
-   which seals transaction and submits it to be processed.
-4. application observes the result of the transaction commit using either blocking
-   or asynchronous calls.
-
-The *initial state* of a write transaction is a *stable snapshot* of the current
-data tree state, captured when the transaction was created; its state and
-underlying data tree are not affected by other concurrently running transactions.
-
-Write transactions are *isolated* from other concurrent write transactions. All
-*<<_transaction_local_state,writes are local>>* to the transaction and
-represent only a *proposal of state change* for the data tree; they *are not visible*
-to any other concurrently running transactions (including read-only transactions).
-
-The transaction *<<_commit_failure_scenarios,commit may fail>>* due to failing
-verification of data, or due to a concurrent transaction modifying the affected data
-in an incompatible way.
-
-===== Modification of Data Tree
-
-Write-only and read-write transactions provide the following methods to modify
-the data tree:
-
-put::
-+
-[source, java]
-----
-<T> void put(LogicalDatastoreType store, InstanceIdentifier<T> path, T data);
-----
-+
-Stores a piece of data at the specified path. This acts as an *add / replace*
-operation, which is to say that the whole subtree will be replaced by the
-specified data.
-
-
-merge::
-+
-[source, java]
-----
-<T> void merge(LogicalDatastoreType store, InstanceIdentifier<T> path, T data);
-----
-+
-Merges a piece of data with the existing data at a specified path.
-Any *pre-existing data* which are not explicitly overwritten *will be preserved*.
-This means that if you store a container, its child subtrees will be merged.
-
-delete::
-+
-[source, java]
-----
-void delete(LogicalDatastoreType store, InstanceIdentifier<?> path);
-----
-+
-Removes a whole subtree from a specified path.
-
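-The replace-versus-merge semantics of `put` and `merge` can be sketched with
-plain Java maps standing in for subtrees. This simulation is not the MD-SAL
-API, only an illustration of the behavior described above:
-
-[source, java]
-----
-import java.util.HashMap;
-import java.util.Map;
-
-public class PutVsMerge {
-    public static void main(String[] args) {
-        // Existing subtree at some path: {foo=1}
-        Map<String, Integer> existing = new HashMap<>();
-        existing.put("foo", 1);
-
-        // put: the whole subtree is replaced by the new data
-        Map<String, Integer> afterPut = new HashMap<>(Map.of("bar", 2));
-
-        // merge: pre-existing children not explicitly overwritten survive
-        Map<String, Integer> afterMerge = new HashMap<>(existing);
-        afterMerge.putAll(Map.of("bar", 2));
-
-        System.out.println("put   -> " + afterPut);
-        System.out.println("merge -> " + afterMerge);
-    }
-}
-----
-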
-===== Submitting transaction
-
-A transaction is submitted to be processed and committed using the following method:
-
-[source, java]
-----
-CheckedFuture<Void,TransactionCommitFailedException> submit();
-----
-
-Applications publish the changes proposed in the transaction by calling `submit()`
-on the transaction.
-This *seals the transaction* (preventing any further writes using this transaction)
-and submits it to be processed and applied to global conceptual data tree.
-The `submit()` method does not block, but rather returns a `ListenableFuture`, which
-will complete successfully once processing of the transaction is finished and the changes
-are applied to the data tree. If the *commit* of the data fails, the future will fail with
-a `TransactionCommitFailedException`.
-
-An application may listen on the commit state asynchronously using the `ListenableFuture`.
-
-[source, java]
-----
-Futures.addCallback( writeTx.submit(), new FutureCallback<Void>() { // <1>
-        public void onSuccess( Void result ) { // <2>
-            LOG.debug("Transaction committed successfully.");
-        }
-
-        public void onFailure( Throwable t ) { // <3>
-            LOG.error("Commit failed.", t);
-        }
-    });
-----
-
-<1> Submits `writeTx` and registers application provided `FutureCallback`
-    on returned future.
-<2> Invoked when future completed successfully - transaction `writeTx` was
-    successfully committed to data tree.
-<3> Invoked when future failed - commit of transaction `writeTx` failed.
-    Supplied exception provides additional details and cause of failure.
-
-If the application needs to block until the commit is finished, it may use `checkedGet()`
-to wait for the commit to finish.
-
-[source, java]
-----
-try {
-    writeTx.submit().checkedGet(); // <1>
-} catch (TransactionCommitFailedException e) { // <2>
-    LOG.error("Commit failed.",e);
-}
-----
-
-<1> Submits `writeTx` and blocks till commit of `writeTx` is finished. If
-    commit fails `TransactionCommitFailedException` will be thrown.
-<2> Catches `TransactionCommitFailedException` and logs it.
-
-===== Transaction local state
-
-Read-write transactions maintain transaction-local state, which renders all
-modifications as if they had already happened, but only locally to the transaction.
-
-Reads from the transaction return data as if the previous modifications in the
-transaction had already happened.
-
-Let's assume the initial state of the data tree for `PATH` is `A`.
-[source, java]
-----
-ReadWriteTransaction rwTx = broker.newReadWriteTransaction(); // <1>
-
-rwTx.read(OPERATIONAL,PATH).get(); // <2>
-rwTx.put(OPERATIONAL,PATH,B); // <3>
-rwTx.read(OPERATIONAL,PATH).get(); // <4>
-rwTx.put(OPERATIONAL,PATH,C); // <5>
-rwTx.read(OPERATIONAL,PATH).get(); // <6>
-----
-
-<1> Allocates new `ReadWriteTransaction`.
-<2> Read from `rwTx` will return value `A` for `PATH`.
-<3> Writes value `B` to `PATH` using `rwTx`.
-<4> Read will return value `B` for `PATH`, since previous write occurred in same
-    transaction.
-<5> Writes value `C` to `PATH` using `rwTx`.
-<6> Read will return value `C` for `PATH`, since previous write occurred in same
-    transaction.
-
-==== Transaction isolation
-
-Running (not yet submitted) transactions are isolated from each other, and changes
-made in one transaction are not observable in other currently running
-transactions.
-
-Let's assume the initial state of the data tree for `PATH` is `A`.
-
-[source, java]
-----
-ReadOnlyTransaction txRead = broker.newReadOnlyTransaction(); // <1>
-ReadWriteTransaction txWrite = broker.newReadWriteTransaction(); // <2>
-
-txRead.read(OPERATIONAL,PATH).get(); // <3>
-txWrite.put(OPERATIONAL,PATH,B); // <4>
-txWrite.read(OPERATIONAL,PATH).get(); // <5>
-txWrite.submit().get(); // <6>
-txRead.read(OPERATIONAL,PATH).get(); // <7>
-txAfterCommit = broker.newReadOnlyTransaction(); // <8>
-txAfterCommit.read(OPERATIONAL,PATH).get(); // <9>
-----
-
-<1> Allocates a read-only transaction, which is based on a data tree that
-    contains value `A` for `PATH`.
-<2> Allocates a read-write transaction, which is based on a data tree that
-    contains value `A` for `PATH`.
-<3> Read from read-only transaction returns value `A` for `PATH`.
-<4> Data tree is updated using read-write transaction, `PATH` contains `B`.
-    Change is not public and only local to transaction.
-<5> Read from read-write transaction returns value `B` for `PATH`.
-<6> Submits the changes in the read-write transaction to be committed to the data tree.
-    Once the commit finishes, the changes will be published and `PATH` will be
-    updated to value `B`. Previously allocated transactions are not affected by
-    this change.
-<7> Read from previously allocated read-only transaction still returns value `A`
-    for `PATH`, since it provides stable and isolated view.
-<8> Allocates new read-only transaction, which is based on data tree,
-    which contains value `B` for `PATH`.
-<9> Read from the new read-only transaction returns value `B` for `PATH`, since the
-    read-write transaction was committed.
-
-NOTE: The examples contain blocking calls on futures only to illustrate
-that one action happened after another asynchronous action. The use of the blocking call
-`ListenableFuture#get()` is discouraged for most use-cases and you should use
-`Futures#addCallback(ListenableFuture, FutureCallback)` to listen asynchronously
-for result.
-
-
-==== Commit failure scenarios
-
-A transaction commit may fail for the following reasons:
-
-Optimistic Lock Failure::
-Another transaction finished earlier and *modified the same node in a
-non-compatible way*. The commit (and the returned future) will fail
-with an `OptimisticLockFailedException`.
-+
-It is the responsibility of the
-caller to create a new transaction and submit the same modification again in
-order to update data tree.
-+
-[WARNING]
-====
-`OptimisticLockFailedException` usually indicates *multiple writers* to
-the same data subtree, which may conflict on the same resources.
-
-In most cases, retrying the transaction is likely to succeed.
-
-There are scenarios, albeit unusual, where no number of retries will
-succeed. Therefore it is strongly recommended to limit the number of
-retries (to 2 or 3) to avoid an endless loop.
-====
-
-Data Validation::
-The data change introduced by this transaction *did not pass validation* by
-commit handlers, or the data was incorrectly structured. The returned future will
-fail with a `DataValidationFailedException`. The user *should not retry* by
-creating a new transaction with the same data, since it will probably fail again.
-
-===== Example conflict of two transactions
-
-This example illustrates two concurrent transactions, which are derived from the
-same initial state of the data tree and propose conflicting modifications.
-
-[source, java]
-----
-WriteTransaction txA = broker.newWriteTransaction();
-WriteTransaction txB = broker.newWriteTransaction();
-
-txA.put(CONFIGURATION, PATH, A);    // <1>
-txB.put(CONFIGURATION, PATH, B);     // <2>
-
-CheckedFuture<?,?> futureA = txA.submit(); // <3>
-CheckedFuture<?,?> futureB = txB.submit(); // <4>
-----
-
-<1> Updates `PATH` to value `A` using `txA`
-<2> Updates `PATH` to value `B` using `txB`
-<3> Seals & submits `txA`. The commit will be processed asynchronously and
-    the data tree will be updated to contain value `A` for `PATH`.
-    The returned `ListenableFuture` will complete successfully once the
-    state is applied to the data tree.
-<4> Seals & submits `txB`. The commit of `txB` will fail, because the previous
-    transaction modified the same path concurrently. The state introduced by `txB` will
-    not be applied. The returned `ListenableFuture` will fail
-    with an `OptimisticLockFailedException`, which indicates
-    that a concurrent transaction prevented the submitted transaction from being
-    applied.
-
-===== Example asynchronous retry-loop
-
-[source, java]
-----
-private void doWrite( final int tries ) {
-    WriteTransaction writeTx = dataBroker.newWriteOnlyTransaction();
-
-    MyDataObject data = ...;
-    InstanceIdentifier<MyDataObject> path = ...;
-    writeTx.put( LogicalDatastoreType.OPERATIONAL, path, data );
-
-    Futures.addCallback( writeTx.submit(), new FutureCallback<Void>() {
-        public void onSuccess( Void result ) {
-            // succeeded
-        }
-
-        public void onFailure( Throwable t ) {
-            if( t instanceof OptimisticLockFailedException && (( tries - 1 ) > 0)) {
-                doWrite( tries - 1 );
-            }
-        }
-      });
-}
-...
-doWrite( 2 );
-----
-
-==== Concurrent change compatibility
-
-There are several sets of changes which could be considered incompatible
-between two transactions that are derived from the same initial state.
-The rules for conflict detection apply recursively at each subtree
-level.
-
-The following table shows state changes and failures between two concurrent
-transactions which are based on the same initial state; `tx1` is submitted before
-`tx2`.
-
-// FIXME: Providing model and concrete data structures will be probably better.
-
-INFO: The following tables store numeric values and show data using `toString()`
-to simplify the examples.
-
-.Concurrent change resolution for leaves and leaf-list items
-[options="header"]
-|===========================================================
-|Initial state | tx1  | tx2 | Observable Result
-|Empty |`put(A,1)` |`put(A,2)` |`tx2` will fail, value of `A` is `1`
-|Empty |`put(A,1)` |`merge(A,2)` |value of `A` is `2`
-|Empty |`merge(A,1)` |`put(A,2)` |`tx2` will fail, value of `A` is `1`
-|Empty |`merge(A,1)` |`merge(A,2)` |`A` is `2`
-|A=0 |`put(A,1)` |`put(A,2)` |`tx2` will fail, `A` is `1`
-|A=0 |`put(A,1)` |`merge(A,2)` |`A` is `2`
-|A=0 |`merge(A,1)` |`put(A,2)` |`tx2` will fail, value of `A` is `1`
-|A=0 |`merge(A,1)` |`merge(A,2)` |`A` is `2`
-|A=0 |`delete(A)` |`put(A,2)` |`tx2` will fail, `A` does not exist
-|A=0 |`delete(A)` |`merge(A,2)` |`A` is `2`
-|===========================================================
-
-.Concurrent change resolution for containers, lists, list items
-[options="header"]
-|=======================================================================
-|Initial state |`tx1` |`tx2` |Result
-|Empty |put(TOP,[]) |put(TOP,[]) |`tx2` will fail, state is TOP=[]
-
-|Empty |put(TOP,[]) |merge(TOP,[]) |TOP=[]
-
-|Empty |put(TOP,[FOO=1]) |put(TOP,[BAR=1]) |`tx2` will fail, state is
-TOP=[FOO=1]
-
-|Empty |put(TOP,[FOO=1]) |merge(TOP,[BAR=1]) |TOP=[FOO=1,BAR=1]
-
-|Empty |merge(TOP,[FOO=1]) |put(TOP,[BAR=1]) |`tx2` will fail, state is
-TOP=[FOO=1]
-
-|Empty |merge(TOP,[FOO=1]) |merge(TOP,[BAR=1]) |TOP=[FOO=1,BAR=1]
-
-|TOP=[] |put(TOP,[FOO=1]) |put(TOP,[BAR=1]) |`tx2` will fail, state is
-TOP=[FOO=1]
-
-|TOP=[] |put(TOP,[FOO=1]) |merge(TOP,[BAR=1]) |state is
-TOP=[FOO=1,BAR=1]
-
-|TOP=[] |merge(TOP,[FOO=1]) |put(TOP,[BAR=1]) |`tx2` will fail, state is
-TOP=[FOO=1]
-
-|TOP=[] |merge(TOP,[FOO=1]) |merge(TOP,[BAR=1]) |state is
-TOP=[FOO=1,BAR=1]
-
-|TOP=[] |delete(TOP) |put(TOP,[BAR=1]) |`tx2` will fail, state is empty
-store
-
-|TOP=[] |delete(TOP) |merge(TOP,[BAR=1]) |state is TOP=[BAR=1]
-
-|TOP=[] |put(TOP/FOO,1) |put(TOP/BAR,1) |state is TOP=[FOO=1,BAR=1]
-
-|TOP=[] |put(TOP/FOO,1) |merge(TOP/BAR,1) |state is TOP=[FOO=1,BAR=1]
-
-|TOP=[] |merge(TOP/FOO,1) |put(TOP/BAR,1) |state is TOP=[FOO=1,BAR=1]
-
-|TOP=[] |merge(TOP/FOO,1) |merge(TOP/BAR,1) |state is TOP=[FOO=1,BAR=1]
-
-|TOP=[] |delete(TOP) |put(TOP/BAR,1) |`tx2` will fail, state is empty
-store
-
-|TOP=[] |delete(TOP) |merge(TOP/BAR,1) |`tx2` will fail, state is empty
-store
-
-|TOP=[FOO=1] |put(TOP/FOO,2) |put(TOP/BAR,1) |state is TOP=[FOO=2,BAR=1]
-
-|TOP=[FOO=1] |put(TOP/FOO,2) |merge(TOP/BAR,1) |state is
-TOP=[FOO=2,BAR=1]
-
-|TOP=[FOO=1] |merge(TOP/FOO,2) |put(TOP/BAR,1) |state is
-TOP=[FOO=2,BAR=1]
-
-|TOP=[FOO=1] |merge(TOP/FOO,2) |merge(TOP/BAR,1) |state is
-TOP=[FOO=2,BAR=1]
-
-|TOP=[FOO=1] |delete(TOP/FOO) |put(TOP/BAR,1) |state is TOP=[BAR=1]
-
-|TOP=[FOO=1] |delete(TOP/FOO) |merge(TOP/BAR,1) |state is TOP=[BAR=1]
-|=======================================================================
diff --git a/manuals/developer-guide/src/main/asciidoc/controller/md-sal-overview.adoc b/manuals/developer-guide/src/main/asciidoc/controller/md-sal-overview.adoc
deleted file mode 100644 (file)
index 9bf057c..0000000
+++ /dev/null
@@ -1,96 +0,0 @@
-=== MD-SAL Overview
-
-The Model-Driven Service Adaptation Layer (MD-SAL) is a message-bus-inspired,
-extensible middleware component that provides messaging and data storage
-functionality based on data and interface models defined by application developers
-(i.e. user-defined models).
-
-The MD-SAL:
-
- * Defines *common-layer concepts, data model building blocks and messaging
-   patterns*, and provides an infrastructure / framework for applications and
-   inter-application communication.
-
-// FIXME: Common integration point / reword this better
- * Provides common support for user-defined transport and payload formats, including
-   payload serialization and adaptation (e.g. binary, XML or JSON).
-
-The MD-SAL uses *YANG* as the modeling language for both interface and data
-definitions, and provides a messaging and data-centric runtime for such services
-based on YANG modeling.
-
-The MD-SAL provides two different API types (flavours): +
-
-* *MD-SAL Binding:* MD-SAL APIs which extensively use APIs and classes generated
-  from YANG models, providing compile-time safety.
-* *MD-SAL DOM:* (Document Object Model) APIs which use a DOM-like
-  representation of data, which makes them more powerful, but provides less
-  compile-time safety.
-
-NOTE: The model-driven nature of the MD-SAL and the *DOM*-based APIs allows for
-behind-the-scenes API and payload type mediation and transformation
-to facilitate seamless communication between applications. This enables
-other components and applications to provide connectors / expose different
-sets of APIs and derive most of their functionality purely from models, which
-all existing code can benefit from without modification.
-For example *RESTCONF Connector* is an application built on top of MD-SAL
-and exposes YANG-modeled application APIs transparently via HTTP and adds support
-for XML and JSON payload type.
-
-==== Basic concepts
-
-Basic concepts are the building blocks used by applications, and from
-which the MD-SAL defines messaging patterns and provides services and
-behavior based on developer-supplied YANG models.
-
-Data Tree::
-All state-related data are modeled and represented as a data tree,
-with the possibility to address any element / subtree
-+
-  * *Operational Data Tree* - Reported state of the system, published by the
-     providers using MD-SAL. Represents a feedback loop for applications
-     to observe state of the network / system.
-  * *Configuration Data Tree* - Intended state of the system or network,
-     populated by consumers, which expresses their intention.
-
-Instance Identifier::
-A unique identifier of a node / subtree in the data tree, which provides
-unambiguous information on how to reference and retrieve the node / subtree
-from the conceptual data trees.
-
-Notification::
-An asynchronous transient event which may be consumed by subscribers, who may
-act upon it.
-
-RPC::
-An asynchronous request-reply message pair, where the request is triggered by
-the consumer and sent to the provider, which later responds with a reply message.
-+
-NOTE: In MD-SAL terminology, the term 'RPC' is used to define the input and
-output for a procedure (function) that is to be provided by a provider,
-and mediated by the MD-SAL; this means it may not result in a remote call.
-
-==== Messaging Patterns
-
-MD-SAL provides several messaging patterns, using a broker derived from the
-basic concepts, which are intended to transfer YANG-modeled data between
-applications to provide data-centric integration between applications instead
-of API-centric integration.
-
-* *Unicast communication*
-** *Remote Procedure Calls* - unicast between consumer and provider, where
-consumer sends *request* message to provider, which asynchronously responds
-with *reply* message
-
-* *Publish / Subscribe*
-** *Notifications* - multicast transient message which is published by provider
-   and is delivered to subscribers
-** *Data Change Events* - multicast asynchronous event, which is sent by data
-broker if there is change in conceptual data tree, and is delivered to subscribers
-
-* *Transactional access to Data Tree*
-** Transactional *reads* from conceptual *data tree* - read-only transactions with
-   isolation from other running transactions.
-** Transactional *modification* to conceptual *data tree* - write transactions with
-   isolation from other running transactions.
-** *Transaction chaining*
diff --git a/manuals/developer-guide/src/main/asciidoc/controller/md-sal-rpc-routing.adoc b/manuals/developer-guide/src/main/asciidoc/controller/md-sal-rpc-routing.adoc
deleted file mode 100644 (file)
index 0b09dba..0000000
+++ /dev/null
@@ -1,168 +0,0 @@
-// Source: https://ask.opendaylight.org/question/99/how-does-request-routing-works/
-=== MD-SAL RPC routing
-
-The MD-SAL provides a way to deliver Remote Procedure Calls (RPCs) to a
-particular implementation based on content in the input as it is modeled in
-YANG. This part of the RPC input is referred to as a *context reference*.
-
-The MD-SAL does not dictate the name of the leaf which is used for this RPC
-routing, but provides the necessary functionality for YANG model authors to define
-a *context reference* in their models of RPCs.
-
-MD-SAL routing behavior is modeled using the following terminology and its
-application to YANG models:
-
-Context Type::
-  Logical type of RPC routing. Context type is modeled as YANG `identity`
-  and is referenced in model to provide scoping information.
-Context Instance::
-  A conceptual location in the data tree, which represents the context in which an RPC
-  could be executed. A context instance usually represents the logical point
-  to which RPC execution is attached.
-Context Reference::
-  Field of RPC input payload which contains Instance Identifier referencing
-  *context instance*  in which the RPC should be executed.
-
-==== Modeling a routed RPC
-
-In order to define routed RPCs, the YANG model author needs to declare (or
-reuse) a *context type*, set of possible *context instances* and finally RPCs
-which will contain *context reference* on which they will be routed.
-
-===== Declaring a routing context type
-
-[source,yang]
-----
-identity node-context {
-    description "Identity used to mark node context";
-}
-----
-
-This declares an identity named `node-context`, which is used as marker
-for node-based routing and is used in other places to reference that routing
-type.
-
-===== Declaring possible context instances
-
-In order to define possible values of *context instances* for routed RPCs, we
-need to model that set accordingly using `context-instance` extension from the
-`yang-ext` model.
-
-[source,yang]
-----
-import yang-ext { prefix ext; }
-
-/** Base structure **/
-container nodes {
-    list node {
-        key "id";
-        ext:context-instance "node-context";
-        // other node-related fields would go here
-    }
-}
-----
-
-The statement `ext:context-instance "node-context";` marks any element of the
-`list node` as a possible valid *context instance* in `node-context` based
-routing.
-
-[NOTE]
-====
-The existence of a *context instance* node in operational or config data tree
-is not strongly tied to existence of RPC implementation.
-
-For most routed RPC models, there is a relationship between the data present in the
-operational data tree and RPC implementation availability, but this is
-not enforced by the MD-SAL. This provides some flexibility for YANG model writers
-to better specify their routing model and requirements for implementations.
-Details of when RPC implementations are available should be documented in the YANG model.
-
-If a user invokes an RPC with a *context instance* that has no registered
-implementation, the RPC invocation will fail with the exception
-`DOMRpcImplementationNotAvailableException`.
-====
-
-===== Declaring a routed RPC
-
-To declare an RPC to be routed based on `node-context`, we need to add a leaf
-of type `instance-identifier` (or a type derived from `instance-identifier`)
-to the RPC input and mark it as a *context reference*.
-
-This is achieved using the YANG extension `context-reference` from the `yang-ext`
-model on the leaf which will be used for RPC routing.
-
-[source,yang]
-----
-rpc example-routed-rpc  {
-    input {
-        leaf node {
-            ext:context-reference "node-context";
-            type "instance-identifier";
-        }
-        // other input to the RPC would go here
-    }
-}
-----
-
-The statement `ext:context-reference "node-context"` marks `leaf node` as a
-*context reference* of type `node-context`. The value of this leaf will be used
-by the MD-SAL to select the particular RPC implementation that registered itself
-as the implementation of the RPC for the particular *context instance*.
-
-==== Using routed RPCs
-
-From a user perspective (e.g. invoking RPCs) there is no difference between
-routed and non-routed RPCs. Routing information is just an additional leaf in
-RPC which must be populated.
-
-// TODO: Add simple snippet of invoking such RPC even if it does not differ
-// from normal one.
-
-==== Implementing a routed RPC
-
-// TODO: Update this section to show some other example model
-// along with binding and DOM implementations
-
-
-===== Registering implementations
-
-// FIXME: Clean up bit wording in following section, use different example
-
-Implementations of a routed RPC (e.g., southbound plugins) will specify an
-instance-identifier for the *context reference* (in this case a node) for which
-they want to provide an implementation during registration. Consumers, e.g.,
-those calling the RPC are required to specify that instance-identifier (in this
-case the identifier of a node) when invoking RPC.
-
-Simple code which showcases that for add-flow via Binding-Aware APIs
-(https://git.opendaylight.org/gerrit/gitweb?p=controller.git;a=blob;f=opendaylight/md-sal/sal-binding-it/src/test/java/org/opendaylight/controller/test/sal/binding/it/RoutedServiceTest.java;h=d49d6f0e25e271e43c8550feb5eef63d96301184;hb=HEAD[RoutedServiceTest.java]
-):
-
-[source, java]
-----
- 61  @Override
- 62  public void onSessionInitiated(ProviderContext session) {
- 63      assertNotNull(session);
- 64      firstReg = session.addRoutedRpcImplementation(SalFlowService.class, salFlowService1);
- 65  }
-----
-Line 64: We register salFlowService1 as an implementation of the
-SalFlowService RPC.
-
-[source, java]
-----
-107  NodeRef nodeOne = createNodeRef("foo:node:1");
-109  /**
-110   * Provider 1 registers path of node 1
-111   */
-112  firstReg.registerPath(NodeContext.class, nodeOne);
-----
-
-Line 107: We create a NodeRef (an encapsulation of an InstanceIdentifier)
-for "foo:node:1".
-
-Line 112: We register salFlowService1 as the implementation for nodeOne.
-
-salFlowService1 will be invoked only for RPCs whose input contains the
-instance identifier for foo:node:1.
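The registration-plus-routing behaviour described above can be modelled with a toy registry. This sketch is not the real MD-SAL API (instance identifiers are plain strings and SalFlowService is reduced to one method); it only shows how an implementation registered for foo:node:1 is selected when, and only when, the input carries that identifier:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of routed RPC dispatch; all names and types are illustrative.
public class RoutedRpcRegistry {
    interface SalFlowService { String addFlow(String nodePath); }

    private final Map<String, SalFlowService> routes = new HashMap<>();

    // registerPath: bind an implementation to one context instance.
    void registerPath(String nodePath, SalFlowService impl) {
        routes.put(nodePath, impl);
    }

    // invoke: the router inspects the context-reference leaf in the input
    // and dispatches to the implementation registered for that instance.
    String invoke(String nodePath) {
        SalFlowService impl = routes.get(nodePath);
        if (impl == null) {
            throw new IllegalStateException("No implementation registered for " + nodePath);
        }
        return impl.addFlow(nodePath);
    }

    public static void main(String[] args) {
        RoutedRpcRegistry registry = new RoutedRpcRegistry();
        registry.registerPath("foo:node:1", node -> "handled by salFlowService1 for " + node);
        System.out.println(registry.invoke("foo:node:1"));
    }
}
```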
index 28a89f2d42b15fd414990349014c71ab9a4b8e07..4da211e48b97cb3d4e3e308ffedf31adeddbcf34 100644 (file)
@@ -1,209 +1,3 @@
 == NETCONF Developer Guide
 
-NOTE: Reading the NETCONF section in the User Guide is likely useful as
-      it contains an overview of NETCONF in OpenDaylight and a how-to
-      for spawning and configuring NETCONF connectors.
-
-This chapter is recommended for application developers who want to
-interact with mounted NETCONF devices from their application code. It
-demonstrates the use cases from the user guide, previously shown with
-RESTCONF, but now at the code level. One important difference is
-the demonstration of NETCONF notifications and notification
-listeners. The notifications were not shown using RESTCONF because
-*RESTCONF does not support notifications from mounted NETCONF
-devices.*
-
-NOTE: It may also be useful to read the generic 
-      https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:MD-SAL_App_Tutorial[OpenDaylight
-      MD-SAL app
-      development tutorial] before diving into this chapter.
-      This guide assumes awareness of basic OpenDaylight application
-      development.
-
-=== Sample app overview
-All the examples presented here are implemented by a sample OpenDaylight
-application called *ncmount* in the `coretutorials` OpenDaylight project.
-It can be found on the github mirror of OpenDaylight's repositories:
-
-* https://github.com/opendaylight/coretutorials/tree/stable/lithium/ncmount
-
-or checked out from the official OpenDaylight repository:
-
-* https://git.opendaylight.org/gerrit/#/admin/projects/coretutorials
-
-*The application was built using the
-https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Startup_Project_Archetype[project
-startup maven archetype] and demonstrates how to:*
-
-* preconfigure connectors to NETCONF devices
-* retrieve MountPointService (registry of available mount points)
-* listen and react to changing connection state of netconf-connector
-* add custom device YANG models to the app and work with them
-* read data from device in binding aware format (generated java APIs
-  from provided YANG models)
-* write data into device in binding aware format
-* trigger and listen to NETCONF notifications in binding aware format
-
-Detailed information about the structure of the application can be
-found at:
-https://wiki.opendaylight.org/view/Controller_Core_Functionality_Tutorials:Tutorials:Netconf_Mount
-// TODO Migrate the information from wiki here
-
-NOTE: The code in ncmount is fully *binding aware* (works with generated
-java APIs from provided YANG models). However it is also possible to
-perform the same operations in *binding independent* manner.
-// TODO Add BI NcmountProvider version into the sample app and link it from here
-
-==== NcmountProvider
-The NcmountProvider class (found in NcmountProvider.java) is the central
-point of the ncmount application and contains all of the application
-logic. The following sections detail its most interesting pieces.
-
-===== Retrieve MountPointService
-The MountPointService is a central registry of all available mount points
-in OpenDaylight. It is just another MD-SAL service and is available from the
-+session+ attribute passed to the +onSessionInitiated+ callback:
-
-----
-@Override
-public void onSessionInitiated(ProviderContext session) {
-    LOG.info("NcmountProvider Session Initiated");
-
-    // Get references to the data broker and mount service
-    this.mountService = session.getSALService(MountPointService.class);
-
-    ...
-
-    }
-}
-----
-
-===== Listen for connection state changes
-It is important to know when a mount point appears, when it is fully
-connected and when it is disconnected or removed. The exact states of a
-mount point are:
-
-* Connected
-* Connecting
-* Unable to connect
-
-To receive this kind of information, an application has to register
-itself as a data change listener for the preconfigured
-netconf-topology subtree in the MD-SAL datastore. This can be performed
-in the +onSessionInitiated+ callback as well:
-
-----
-@Override
-public void onSessionInitiated(ProviderContext session) {
-
-    ...
-
-    this.dataBroker = session.getSALService(DataBroker.class);
-
-    // Register ourselves as the REST API RPC implementation
-    this.rpcReg = session.addRpcImplementation(NcmountService.class, this);
-
-    // Register ourselves as data change listener for changes on Netconf
-    // nodes. Netconf nodes are accessed via "Netconf Topology" - a special
-    // topology that is created by the system infrastructure. It contains
-    // all Netconf nodes the Netconf connector knows about. NETCONF_TOPO_IID
-    // is equivalent to the following URL:
-    // .../restconf/operational/network-topology:network-topology/topology/topology-netconf
-    if (dataBroker != null) {
-        this.dclReg = dataBroker.registerDataChangeListener(LogicalDatastoreType.OPERATIONAL,
-                NETCONF_TOPO_IID.child(Node.class),
-                this,
-                DataChangeScope.SUBTREE);
-    }
-}
-----
-
-The implementation of the callback that the MD-SAL invokes when the data
-changes can be found in the
-+onDataChanged(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject>
-change)+ callback of the
-https://github.com/opendaylight/coretutorials/blob/stable/lithium/ncmount/impl/src/main/java/ncmount/impl/NcmountProvider.java[NcmountProvider
-class].
-
-===== Reading data from the device
-The first step when trying to interact with the device is to get the exact
-mount point instance (identified by an instance identifier) from the MountPointService:
-
-----
-@Override
-public Future<RpcResult<ShowNodeOutput>> showNode(ShowNodeInput input) {
-    LOG.info("showNode called, input {}", input);
-
-    // Get the mount point for the specified node
-    // Equivalent to '.../restconf/<config | operational>/opendaylight-inventory:nodes/node/<node-name>/yang-ext:mount/'
-    // Note that we can read both config and operational data from the same
-    // mount point
-    final Optional<MountPoint> xrNodeOptional = mountService.getMountPoint(NETCONF_TOPO_IID
-            .child(Node.class, new NodeKey(new NodeId(input.getNodeName()))));
-
-    Preconditions.checkArgument(xrNodeOptional.isPresent(),
-            "Unable to locate mountpoint: %s, not mounted yet or not configured",
-            input.getNodeName());
-    final MountPoint xrNode = xrNodeOptional.get();
-
-    ....
-}
-----
-
-NOTE: The triggering method in this case is called +showNode+. It is a
-YANG-defined RPC and NcmountProvider serves as an MD-SAL RPC
-implementation among other things. This means that +showNode+ can be
-triggered using RESTCONF.
-
-The next step is to retrieve an instance of the +DataBroker+ API from the
-mount point and start a read transaction:
-
-----
-@Override
-public Future<RpcResult<ShowNodeOutput>> showNode(ShowNodeInput input) {
-
-    ...
-
-    // Get the DataBroker for the mounted node
-    final DataBroker xrNodeBroker = xrNode.getService(DataBroker.class).get();
-    // Start a new read only transaction that we will use to read data
-    // from the device
-    final ReadOnlyTransaction xrNodeReadTx = xrNodeBroker.newReadOnlyTransaction();
-
-    ...
-}
-----
-
-Finally, it is possible to perform the read operation:
-
-----
-@Override
-public Future<RpcResult<ShowNodeOutput>> showNode(ShowNodeInput input) {
-
-    ...
-
-    InstanceIdentifier<InterfaceConfigurations> iid =
-            InstanceIdentifier.create(InterfaceConfigurations.class);
-
-    Optional<InterfaceConfigurations> ifConfig;
-    try {
-        // Read from a transaction is asynchronous, but a simple
-        // get/checkedGet makes the call synchronous
-        ifConfig = xrNodeReadTx.read(LogicalDatastoreType.CONFIGURATION, iid).checkedGet();
-    } catch (ReadFailedException e) {
-        throw new IllegalStateException("Unexpected error reading data from " + input.getNodeName(), e);
-    }
-
-    ...
-}
-----
-
-The instance identifier is used here again to specify a subtree to read
-from the device. At this point the application can process the data as it
-sees fit. The ncmount app transforms the data into its own format and
-returns it from +showNode+.
-
-NOTE: More information can be found in the source code of the ncmount
-sample app and on the wiki:
-https://wiki.opendaylight.org/view/Controller_Core_Functionality_Tutorials:Tutorials:Netconf_Mount
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/netconf-developer-guide.html
diff --git a/manuals/developer-guide/src/main/asciidoc/controller/restconf.adoc b/manuals/developer-guide/src/main/asciidoc/controller/restconf.adoc
deleted file mode 100644 (file)
index ec07e1d..0000000
+++ /dev/null
@@ -1,356 +0,0 @@
-=== OpenDaylight Controller MD-SAL: RESTCONF
-
-==== RESTCONF operations overview
-
-RESTCONF allows access to datastores in the controller. +
-There are two datastores: +
-
-* Config: Contains data inserted via the controller
-* Operational: Contains operational state data reported by the system
-
-NOTE: Each request must start with the URI /restconf. +
-RESTCONF listens on port 8080 for HTTP requests.
-
-RESTCONF supports *OPTIONS*, *GET*, *PUT*, *POST*, and *DELETE* operations. Request and response data can be in either XML or JSON format. XML structures according to YANG are defined at: http://tools.ietf.org/html/rfc6020[XML-YANG]. JSON structures are defined at: http://tools.ietf.org/html/draft-lhotka-netmod-yang-json-02[JSON-YANG]. Data in the request must have a correctly set *Content-Type* field in the HTTP header with the allowed value of the media type. The media type of the requested data has to be set in the *Accept* field. Get the media types for each resource by calling the OPTIONS operation.
-Most of the paths of the RESTCONF endpoints use an https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Concepts#Instance_Identifier[Instance Identifier]. +<identifier>+ is used in the explanation of the operations.
-
-*<identifier>* +
-
-* It must start with <moduleName>:<nodeName>, where <moduleName> is the name of the module and <nodeName> is the name of a node in the module. After the first <moduleName>:<nodeName>, it is sufficient to use just <nodeName>. Each <nodeName> has to be separated by /.
-* <nodeName> can represent a data node which is a list or container yang built-in type. If the data node is a list, the keys of the list must follow the data node name, for example, <nodeName>/<valueOfKey1>/<valueOfKey2>.
-* The format <moduleName>:<nodeName> has to be used in the following case as well: +
-Module A has node A1. Module B augments node A1 by adding node X. Module C also augments node A1 by adding node X. For clarity, it has to be stated which node X is meant (for example: C:X).
-For more details about encoding, see: http://tools.ietf.org/html/draft-bierman-netconf-restconf-02#section-5.3.1[RESTCONF 02 - Encoding YANG Instance Identifiers in the Request URI.]
-
-==== Mount point
-A Node can be behind a mount point. In this case, the URI has to be in format <identifier>/*yang-ext:mount*/<identifier>. The first <identifier> is the path to a mount point and the second <identifier> is the path to a node behind the mount point. A URI can end in a mount point itself by using <identifier>/*yang-ext:mount*. +
-More information on how to actually use mountpoints is available at: https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf[OpenDaylight Controller:Config:Examples:Netconf].
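As a sketch, the identifier and mount-point rules above can be captured by simple URI-building helpers. The host, port, and method names here are illustrative assumptions, not ODL API:

```java
// Illustrative helpers for building RESTCONF URIs per the identifier rules.
public class RestconfUris {
    // Builds http://<host>:8080/restconf/<datastore>/<identifier>.
    static String dataUrl(String host, String datastore, String identifier) {
        return "http://" + host + ":8080/restconf/" + datastore + "/" + identifier;
    }

    // A node behind a mount point: <outerIdentifier>/yang-ext:mount/<innerIdentifier>.
    static String mounted(String outerIdentifier, String innerIdentifier) {
        return outerIdentifier + "/yang-ext:mount/" + innerIdentifier;
    }

    public static void main(String[] args) {
        // List entries append key values: <nodeName>/<valueOfKey1>/<valueOfKey2>
        System.out.println(dataUrl("localhost", "config", "module1:foo/bar/key1"));
        // Reading a node behind a mount point
        System.out.println(dataUrl("localhost", "config",
                mounted("module1:foo1/foo2", "module2:foo/bar")));
    }
}
```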
-
-==== HTTP methods
-
-===== OPTIONS /restconf
-
-* Returns the XML description of the resources with the required request and response media types in Web Application Description Language (WADL)
-
-===== GET /restconf/config/<identifier>
-
-* Returns a data node from the Config datastore.
-* <identifier> points to a data node which must be retrieved.
-
-===== GET /restconf/operational/<identifier>
-
-* Returns the value of the data node from the Operational datastore.
-* <identifier> points to a data node which must be retrieved.
-
-===== PUT /restconf/config/<identifier>
-
-* Updates or creates data in the Config datastore and returns the state about success.
-* <identifier> points to a data node which must be stored.
-
-*Example:* +
-----
-PUT http://<controllerIP>:8080/restconf/config/module1:foo/bar
-Content-Type: application/xml
-<bar>
-  …
-</bar>
-----
-*Example with mount point:* +
-----
-PUT http://<controllerIP>:8080/restconf/config/module1:foo1/foo2/yang-ext:mount/module2:foo/bar
-Content-Type: application/xml
-<bar>
-  …
-</bar>
-----
-===== POST /restconf/config
-* Creates the data if it does not exist
-
-For example: +
-----
-POST URL: http://localhost:8080/restconf/config/
-content-type: application/yang.data+json
-JSON payload:
-
-   {
-     "toaster:toaster" :
-     {
-       "toaster:toasterManufacturer" : "General Electric",
-       "toaster:toasterModelNumber" : "123",
-       "toaster:toasterStatus" : "up"
-     }
-  }
-----
-===== POST /restconf/config/<identifier>
-
-* Creates the data if it does not exist in the Config datastore, and returns the state about success.
-* <identifier> points to a data node where data must be stored.
-* The root element of the data must have the namespace (if the data is XML) or the module name (if the data is JSON).
-
-*Example:* +
-----
-POST http://<controllerIP>:8080/restconf/config/module1:foo
-Content-Type: application/xml
-<bar xmlns="module1namespace">
-  …
-</bar>
-----
-*Example with mount point:*
-----
-POST http://<controllerIP>:8080/restconf/config/module1:foo1/foo2/yang-ext:mount/module2:foo
-Content-Type: application/xml
-<bar xmlns="module2namespace">
-  …
-</bar>
-----
-===== POST /restconf/operations/<moduleName>:<rpcName>
-
-* Invokes RPC.
-* <moduleName>:<rpcName> - <moduleName> is the name of the module and <rpcName> is the name of the RPC in this module.
-* The root element of the data sent to the RPC must have the name "input".
-* The result can be the status code or retrieved data with the root element "output".
-
-*Example:* +
-----
-POST http://<controllerIP>:8080/restconf/operations/module1:fooRpc
-Content-Type: application/xml
-Accept: application/xml
-<input>
-  …
-</input>
-
-The answer from the server could be:
-<output>
-  …
-</output>
-----
-*An example using a JSON payload:* +
-----
-POST http://localhost:8080/restconf/operations/toaster:make-toast
-Content-Type: application/yang.data+json
-{
-  "input" :
-  {
-     "toaster:toasterDoneness" : "10",
-     "toaster:toasterToastType":"wheat-bread"
-  }
-}
-----
-
-NOTE: Even though toasterToastType has a default value in the YANG model, you still need to specify it.
-
-===== DELETE /restconf/config/<identifier>
-
-* Removes the data node in the Config datastore and returns the state about success.
-* <identifier> points to a data node which must be removed.
-
-More information is available in the http://tools.ietf.org/html/draft-bierman-netconf-restconf-02[RESTCONF RFC].
-
-==== How RESTCONF works
-RESTCONF uses these base classes: +
-
-InstanceIdentifier:: Represents the path in the data tree
-ConsumerSession:: Used for invoking RPCs
-DataBrokerService:: Provides transactions and reading data from the datastores
-SchemaContext:: Holds information about YANG modules
-MountService:: Returns a MountInstance based on the InstanceIdentifier pointing to a mount point
-MountInstance:: Contains the SchemaContext behind the mount point
-DataSchemaNode:: Provides information about the schema node
-SimpleNode:: Has the same name as the schema node, and contains the value representing the data node value
-CompositeNode:: Can contain CompositeNode-s and SimpleNode-s
-
-==== GET in action
-Figure 1 shows the GET operation with URI restconf/config/M:N where M is the module name, and N is the node name.
-
-
-.Get
-image::Get.png[width=500]
-
-. The requested URI is translated into the InstanceIdentifier which points to the data node. During this translation, the DataSchemaNode that conforms to the data node is obtained. If the data node is behind the mount point, the MountInstance is obtained as well.
-. RESTCONF asks for the value of the data node from DataBrokerService based on InstanceIdentifier.
-. DataBrokerService returns CompositeNode as data.
-. StructuredDataToXmlProvider or StructuredDataToJsonProvider is called based on the *Accept* field from the http request. These two providers can transform CompositeNode regarding DataSchemaNode to an XML or JSON document.
-. XML or JSON is returned as the answer on the request from the client.
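The five steps above can be condensed into a toy sketch. None of the names below are the real RESTCONF implementation classes; the point is only the pipeline: URI to identifier, datastore read, then a serializer chosen by the Accept header:

```java
import java.util.Map;

// Toy sketch of the RESTCONF GET pipeline; all names are illustrative.
public class RestconfGetSketch {
    // Stand-in for the Config datastore: identifier -> value.
    static final Map<String, String> DATASTORE =
            Map.of("toaster:toaster/toasterStatus", "up");

    static String get(String uri, String accept) {
        // Step 1: translate the URI into an identifier (here: strip the prefix).
        String identifier = uri.replaceFirst("^/restconf/config/", "");
        // Steps 2-3: read the value from the datastore.
        String value = DATASTORE.get(identifier);
        // Step 4: pick a serializer based on the Accept header.
        if ("application/json".equals(accept)) {
            return "{\"toasterStatus\": \"" + value + "\"}";
        }
        return "<toasterStatus>" + value + "</toasterStatus>";
    }

    public static void main(String[] args) {
        // Step 5: the serialized document is returned to the client.
        System.out.println(get("/restconf/config/toaster:toaster/toasterStatus",
                "application/xml"));
    }
}
```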
-
-==== PUT in action
-
-Figure 2 shows the PUT operation with the URI restconf/config/M:N where M is the module name, and N is the node name. Data is sent in the request either in the XML or JSON format.
-
-.Put
-
-image::Put.png[width=500]
-
-. Input data is sent to JsonToCompositeNodeProvider or XmlToCompositeNodeProvider. The correct provider is selected based on the Content-Type field from the http request. These two providers can transform input data to CompositeNode. However, this CompositeNode does not contain enough information for transactions.
-. The requested URI is translated into InstanceIdentifier which points to the data node. DataSchemaNode conforming to the data node is obtained during this translation. If the data node is behind the mount point, the MountInstance is obtained as well.
-. CompositeNode can be normalized by adding additional information from DataSchemaNode.
-. RESTCONF begins the transaction, and puts the CompositeNode with the InstanceIdentifier into it. The response to the client's request is a status code which depends on the result of the transaction.
-
-
-// FIXME: Replace with coretutorials tutorial or point to openflow location
-==== Something practical
-
-. Create a new flow on the switch openflow:1 in table 2.
-
-*HTTP request* +
-----
-Operation: POST
-URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2
-Content-Type: application/xml
-----
-----
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<flow
-    xmlns="urn:opendaylight:flow:inventory">
-    <strict>false</strict>
-    <instructions>
-        <instruction>
-               <order>1</order>
-            <apply-actions>
-                <action>
-                  <order>1</order>
-                    <flood-all-action/>
-                </action>
-            </apply-actions>
-        </instruction>
-    </instructions>
-    <table_id>2</table_id>
-    <id>111</id>
-    <cookie_mask>10</cookie_mask>
-    <out_port>10</out_port>
-    <installHw>false</installHw>
-    <out_group>2</out_group>
-    <match>
-        <ethernet-match>
-            <ethernet-type>
-                <type>2048</type>
-            </ethernet-type>
-        </ethernet-match>
-        <ipv4-destination>10.0.0.1/24</ipv4-destination>
-    </match>
-    <hard-timeout>0</hard-timeout>
-    <cookie>10</cookie>
-    <idle-timeout>0</idle-timeout>
-    <flow-name>FooXf22</flow-name>
-    <priority>2</priority>
-    <barrier>false</barrier>
-</flow>
-----
-*HTTP response* +
-----
-Status: 204 No Content
-----
-[start=2]
-. Change _strict_ to _true_ in the previous flow.
-
-*HTTP request* +
-----
-Operation: PUT
-URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
-Content-Type: application/xml
-----
-----
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<flow
-    xmlns="urn:opendaylight:flow:inventory">
-    <strict>true</strict>
-    <instructions>
-        <instruction>
-               <order>1</order>
-            <apply-actions>
-                <action>
-                  <order>1</order>
-                    <flood-all-action/>
-                </action>
-            </apply-actions>
-        </instruction>
-    </instructions>
-    <table_id>2</table_id>
-    <id>111</id>
-    <cookie_mask>10</cookie_mask>
-    <out_port>10</out_port>
-    <installHw>false</installHw>
-    <out_group>2</out_group>
-    <match>
-        <ethernet-match>
-            <ethernet-type>
-                <type>2048</type>
-            </ethernet-type>
-        </ethernet-match>
-        <ipv4-destination>10.0.0.1/24</ipv4-destination>
-    </match>
-    <hard-timeout>0</hard-timeout>
-    <cookie>10</cookie>
-    <idle-timeout>0</idle-timeout>
-    <flow-name>FooXf22</flow-name>
-    <priority>2</priority>
-    <barrier>false</barrier>
-</flow>
-----
-*HTTP response* +
-----
-Status: 200 OK
-----
-[start=3]
-. Show flow: check that _strict_ is _true_.
-
-*HTTP request* +
-----
-Operation: GET
-URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
-Accept: application/xml
-----
-*HTTP response* +
-----
-Status: 200 OK
-----
-
-----
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<flow
-    xmlns="urn:opendaylight:flow:inventory">
-    <strict>true</strict>
-    <instructions>
-        <instruction>
-               <order>1</order>
-            <apply-actions>
-                <action>
-                  <order>1</order>
-                    <flood-all-action/>
-                </action>
-            </apply-actions>
-        </instruction>
-    </instructions>
-    <table_id>2</table_id>
-    <id>111</id>
-    <cookie_mask>10</cookie_mask>
-    <out_port>10</out_port>
-    <installHw>false</installHw>
-    <out_group>2</out_group>
-    <match>
-        <ethernet-match>
-            <ethernet-type>
-                <type>2048</type>
-            </ethernet-type>
-        </ethernet-match>
-        <ipv4-destination>10.0.0.1/24</ipv4-destination>
-    </match>
-    <hard-timeout>0</hard-timeout>
-    <cookie>10</cookie>
-    <idle-timeout>0</idle-timeout>
-    <flow-name>FooXf22</flow-name>
-    <priority>2</priority>
-    <barrier>false</barrier>
-</flow>
-----
-[start=4]
-. Delete the flow created.
-
-*HTTP request* +
-----
-Operation: DELETE
-URI: http://192.168.11.1:8080/restconf/config/opendaylight-inventory:nodes/node/openflow:1/table/2/flow/111
-----
-*HTTP response* +
-----
-Status: 200 OK
-----
diff --git a/manuals/developer-guide/src/main/asciidoc/controller/websocket-notifications.adoc b/manuals/developer-guide/src/main/asciidoc/controller/websocket-notifications.adoc
deleted file mode 100644 (file)
index 083dd96..0000000
+++ /dev/null
@@ -1,271 +0,0 @@
-=== Websocket change event notification subscription tutorial
-
-Subscribing to data change notifications makes it possible to obtain
-notifications about data manipulation (insert, change, delete) which are
-done on any specified *path* of any specified *datastore* with specific
-*scope*. In the following examples, _\{odlAddress}_ is the address of the
-server where OpenDaylight is running and _\{odlPort}_ is the port on which
-it is running.
-
-==== Websocket notifications subscription process
-
-In this section we will learn what steps need to be taken in order to
-successfully subscribe to data change event notifications.
-
-===== Create stream
-
-In order to use event notifications you first need to call the RPC that
-creates the notification stream that you can later listen to. You need to
-provide three parameters to this RPC:
-
-* *path*: data store path that you plan to listen to. You can register
-  listener on containers, lists and leaves.
-* *datastore*: data store type. _OPERATIONAL_ or _CONFIGURATION_.
-* *scope*: Represents scope of data change. Possible options are:
-** BASE: only changes directly to the data tree node specified in the
-   path will be reported
-** ONE: changes to the node and to direct child nodes will be reported
-** SUBTREE: changes anywhere in the subtree starting at the node will
-   be reported
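The three scope values can be illustrated with a small, hypothetical matcher. The real implementation works on instance identifiers rather than strings; this sketch uses "/"-separated paths for brevity:

```java
// Illustrative model of BASE / ONE / SUBTREE data change scopes.
public class ScopeMatch {
    enum Scope { BASE, ONE, SUBTREE }

    // Decide whether a change at changePath is reported for a subscription
    // rooted at subscribedPath with the given scope.
    static boolean reported(String subscribedPath, Scope scope, String changePath) {
        if (changePath.equals(subscribedPath)) {
            return true; // the subscribed node itself is always reported
        }
        if (!changePath.startsWith(subscribedPath + "/")) {
            return false; // not within the subscribed subtree at all
        }
        String rest = changePath.substring(subscribedPath.length() + 1);
        switch (scope) {
            case BASE: return false;              // only the node itself
            case ONE:  return !rest.contains("/"); // direct children only
            default:   return true;               // SUBTREE: anywhere below
        }
    }

    public static void main(String[] args) {
        System.out.println(reported("/toaster", Scope.ONE, "/toaster/toasterStatus"));
        System.out.println(reported("/toaster", Scope.BASE, "/toaster/toasterStatus"));
        System.out.println(reported("/toaster", Scope.SUBTREE, "/toaster/a/b"));
    }
}
```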
-
-The RPC to create the stream can be invoked via RESTCONF like this:
-
-* URI:
-\http://\{odlAddress}:\{odlPort}/restconf/operations/sal-remote:create-data-change-event-subscription
-* HEADER: Content-Type=application/json
-* OPERATION: POST
-* DATA:
-+
-[source,json]
-----
-{
-    "input": {
-        "path": "/toaster:toaster/toaster:toasterStatus",
-        "sal-remote-augment:datastore": "OPERATIONAL",
-        "sal-remote-augment:scope": "ONE"
-    }
-}
-----
-
-The response should look something like this:
-
-[source,json]
-----
-{
-    "output": {
-        "stream-name": "toaster:toaster/toaster:toasterStatus/datastore=OPERATIONAL/scope=ONE"
-    }
-}
-----
-
-*stream-name* is important because you will need to use it when you
-subscribe to the stream in the next step.
-
-NOTE: Internally, this will create a new listener for _stream-name_
-      if one did not already exist.
-
-===== Subscribe to stream
-
-In order to subscribe to stream and obtain WebSocket location you need
-to call _GET_ on your stream path. The URI should generally be
-\http://\{odlAddress}:\{odlPort}/restconf/streams/stream/\{streamName},
-where _\{streamName}_ is the _stream-name_ parameter contained in
-response from _create-data-change-event-subscription_ RPC from the
-previous step.
-
-* URI:
-\http://\{odlAddress}:\{odlPort}/restconf/streams/stream/toaster:toaster/toaster:toasterStatus/datastore=OPERATIONAL/scope=ONE
-* OPERATION: GET
-
-The expected response status is 200 OK and response body should be empty.
-You will get your WebSocket location from the *Location* header of the response.
-For example, in our particular toaster example the location header would have
-this value:
-_ws://\{odlAddress}:8185/toaster:toaster/toaster:toasterStatus/datastore=OPERATIONAL/scope=ONE_
-
-NOTE: During this phase there is an internal check to see if a
-      listener for the _stream-name_ from the URI exists. If not, a
-      new listener is registered with the DOM data broker.
-
-===== Receive notifications
-
-You should now have a data change notification stream created and have
-location of a WebSocket. You can use this WebSocket to listen to data
-change notifications. To listen to notifications you can use a
-JavaScript client or if you are using chrome browser you can use the
-https://chrome.google.com/webstore/detail/simple-websocket-client/pfdhoblngboilpfeibdedpjgfnlcodoo[Simple
-WebSocket Client].
-
-Also, for testing purposes, there is a simple Java application named
-WebSocketClient. The application is placed in the
-_-sal-rest-connector-classes.class_ project. It accepts a WebSocket URI
-as an input parameter. After starting the utility (run the WebSocketClient
-class directly in Eclipse/IntelliJ IDEA), received notifications are
-displayed in the console.
-
-Notifications are always in XML format and look like this:
-
-[source,xml]
-----
-<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
-    <eventTime>2014-09-11T09:58:23+02:00</eventTime>
-    <data-changed-notification xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:remote">
-        <data-change-event>
-            <path xmlns:meae="http://netconfcentral.org/ns/toaster">/meae:toaster</path>
-            <operation>updated</operation>
-            <data>
-               <!-- updated data -->
-            </data>
-        </data-change-event>
-    </data-changed-notification>
-</notification>
-----
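Such a notification can be consumed in Java as well as JavaScript. Below is a minimal sketch using only the JDK's DOM parser; the helper name `operationOf` is illustrative, not part of any ODL API:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Minimal sketch: extract the operation from a data-changed-notification.
public class NotificationParser {
    static String operationOf(String notificationXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            notificationXml.getBytes(StandardCharsets.UTF_8)));
            // getElementsByTagName ignores namespaces, which is fine for a sketch
            return doc.getElementsByTagName("operation").item(0).getTextContent();
        } catch (Exception e) {
            throw new IllegalStateException("Failed to parse notification", e);
        }
    }

    public static void main(String[] args) {
        String sample = "<notification xmlns=\"urn:ietf:params:xml:ns:netconf:notification:1.0\">"
                + "<data-changed-notification>"
                + "<data-change-event><operation>updated</operation></data-change-event>"
                + "</data-changed-notification></notification>";
        System.out.println(operationOf(sample));
    }
}
```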
-
-==== Example use case
-
-The typical use case is listening to data change events to update web
-page data in real-time. In this tutorial we will be using toaster as
-the base.
-// TODO: link to toaster tutorial?
-
-When you call the _make-toast_ RPC, it sets _toasterStatus_ to "down" to
-reflect that the toaster is busy making toast. When it finishes,
-_toasterStatus_ is set to "up" again. We will listen for these toaster
-status changes in the datastore and reflect them on our web page in
-real time thanks to WebSocket data change notifications.
-
-==== Simple javascript client implementation
-
-We will create a simple JavaScript web application that listens for
-updates on the _toasterStatus_ leaf and updates an element of our web
-page according to the new toaster status.
-
-===== Create stream
-
-First you need to create the stream that you are planning to subscribe to.
-This can be achieved by invoking the "create-data-change-event-subscription"
-RPC on RESTCONF via AJAX request. You need to provide data store *path*
-that you plan to listen on, *data store type* and *scope*. If the
-request is successful you can extract the *stream-name* from the
-response and use that to subscribe to the newly created stream. The
-_\{username}_ and _\{password}_ fields represent your credentials that
-you use to connect to OpenDaylight via RESTCONF:
-
-NOTE: The default user name and password are "admin".
-
-[source,javascript]
-----
-function createStream() {
-    $.ajax(
-        {
-            url: 'http://{odlAddress}:{odlPort}/restconf/operations/sal-remote:create-data-change-event-subscription',
-            type: 'POST',
-            headers: {
-              'Authorization': 'Basic ' + btoa('{username}:{password}'),
-              'Content-Type': 'application/json'
-            },
-            data: JSON.stringify(
-                {
-                    'input': {
-                        'path': '/toaster:toaster/toaster:toasterStatus',
-                        'sal-remote-augment:datastore': 'OPERATIONAL',
-                        'sal-remote-augment:scope': 'ONE'
-                    }
-                }
-            )
-        }).done(function (data) {
-            // this function will be called when ajax call is executed successfully
-            subscribeToStream(data.output['stream-name']);
-        }).fail(function (data) {
-            // this function will be called when ajax call fails
-            console.log("Create stream call unsuccessful");
-        })
-}
-----
-
-===== Subscribe to stream
-
-The Next step is to subscribe to the stream. To subscribe to the stream
-you need to call _GET_ on
-_\http://\{odlAddress}:\{odlPort}/restconf/streams/stream/\{stream-name}_.
-If the call is successful, you get WebSocket address for this stream in
-*Location* parameter inside response header. You can get response header
-by calling _getResponseHeader('Location')_ on HttpRequest object inside
-_done()_ function call:
-
-[source,javascript]
-----
-function subscribeToStream(streamName) {
-    $.ajax(
-        {
-            url: 'http://{odlAddress}:{odlPort}/restconf/streams/stream/' + streamName,
-            type: 'GET',
-            headers: {
-              'Authorization': 'Basic ' + btoa('{username}:{password}')
-            }
-        }
-    ).done(function (data, textStatus, httpReq) {
-        // we need function that has http request object parameter in order to access response headers.
-        listenToNotifications(httpReq.getResponseHeader('Location'));
-    }).fail(function (data) {
-        console.log("Subscribe to stream call unsuccessful");
-    });
-}
-----
-
-===== Receive notifications
-
-Once you have the WebSocket server location you can connect to it and
-start receiving data change events. You need to define functions that
-will handle events on WebSocket. In order to process incoming events
-from OpenDaylight you need to provide a function that will handle
-_onmessage_ events. The function must have one parameter that represents
-the received event object. The event data will be stored in _event.data_.
-The data will be in an XML format that you can then easily parse using
-jQuery.
-
-[source,javascript]
-----
-function listenToNotifications(socketLocation) {
-    try {
-        var notificationSocket = new WebSocket(socketLocation);
-
-        notificationSocket.onmessage = function (event) {
-            // we process our received event here
-            console.log('Received toaster data change event.');
-            $($.parseXML(event.data)).find('data-change-event').each(
-                function (index) {
-                    var operation = $(this).find('operation').text();
-                    if (operation == 'updated') {
-                        // toaster status was updated, so we call a function that gets the value of the toasterStatus leaf
-                        updateToasterStatus();
-                        return false;
-                    }
-                }
-            );
-        }
-        notificationSocket.onerror = function (error) {
-            console.log("Socket error: " + error);
-        }
-        notificationSocket.onopen = function (event) {
-            console.log("Socket connection opened.");
-        }
-        notificationSocket.onclose = function (event) {
-            console.log("Socket connection closed.");
-        }
-        // if there is a problem with socket creation we get an exception (i.e. when the socket address is incorrect)
-    } catch(e) {
-        alert("Error when creating WebSocket: " + e);
-    }
-}
-----
-
-The _updateToasterStatus()_ function represents a function that calls
-_GET_ on the path that was modified and sets the toaster status in some
-web page element according to the received data. After the WebSocket
-connection has been established you can test events by calling the
-make-toast RPC via RESTCONF.
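A minimal sketch of _updateToasterStatus()_ is shown below. The page element id (`#toasterStatus`) and the operational datastore path are illustrative assumptions, not part of the tutorial; adjust them to match your page and deployment.

```javascript
// Hypothetical sketch -- element id and RESTCONF path are assumptions.
// Pure helper: extract the toasterStatus leaf from a RESTCONF JSON reply.
function parseToasterStatus(reply) {
    return reply.toaster.toasterStatus;
}

function updateToasterStatus() {
    $.ajax({
        url: 'http://{odlAddress}:{odlPort}/restconf/operational/toaster:toaster',
        type: 'GET',
        headers: {
            'Authorization': 'Basic ' + btoa('{username}:{password}')
        }
    }).done(function (data) {
        // show the current toaster status in the page
        $('#toasterStatus').text(parseToasterStatus(data));
    }).fail(function () {
        console.log("Get toaster status call unsuccessful");
    });
}
```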
-
-NOTE: for more information about WebSockets in JavaScript visit
-https://developer.mozilla.org/en-US/docs/WebSockets/Writing_WebSocket_client_applications[Writing
-WebSocket client applications]
index 03db7d2b402579d07382e476d27d357885e687a3..c64e7546ef803c8b6d4c6c7af2bf9541748fb893 100644 (file)
@@ -1,69 +1,3 @@
 == IoTDM Developer Guide
 
-=== Overview
-The Internet of Things Data Management (IoTDM) on OpenDaylight
-project is about developing a data-centric middleware
-that will act as a oneM2M compliant IoT Data Broker and enable
-authorized applications to retrieve IoT data uploaded by any
-device. The OpenDaylight platform is used to implement the oneM2M
-data store which models a hierarchical containment tree, where each
-node in the tree represents an oneM2M resource. Typically, IoT
-devices and applications interact with the resource tree over
-standard protocols such as CoAP, MQTT, and HTTP.
-Initially, the oneM2M resource tree is used by applications to
-retrieve data. Possible applications are inventory or device
-management systems or big data analytic systems designed to
-make sense of the collected data. But, at some point,
-applications will need to configure the devices. Features and
-tools will have to be provided to enable configuration of the
-devices based on applications responding to user input, network
-conditions, or some set of programmable rules or policies possibly
-triggered by the receipt of data collected from the devices.
-The OpenDaylight platform, with its rich unique cross-section of SDN
-capabilities, NFV, and now IoT device and application management,
-can be bundled with a targeted set of features and deployed
-anywhere in the network to give the network service provider
-ultimate control. Depending on the use case, the OpenDaylight IoT
-platform can be configured with only IoT data collection capabilities
-where it is deployed near the IoT devices and its footprint needs to be
-small, or it can be configured to run as a highly scaled up and
-out distributed cluster with IoT, SDN and NFV functions enabled
-and deployed in a high traffic data center.
-
-=== oneM2M Architecture
-The architecture provides a framework that enables the support of
-the oneM2M resource containment tree. The onem2m-core implements
-the MDSAL RPCs defined in the onem2m-api YANG files. These RPCs
-enable oneM2M resources to be created, read, updated, and
-deleted (CRUD), and also enables the management of subscriptions.
-When resources are CRUDed, the onem2m-notifier issues oneM2M
-notification events to interested subscribers. TS0001: oneM2M
-Functional Architecture and TS0004: oneM2M Service Layer Protocol
-are great reference documents to learn details of oneM2M resource
-types, message flow, formats, and CRUD/N semantics.  Both of these
-specifications can be found at
-http://onem2m.org/technical/published-documents
-
-The oneM2M resource tree is modeled in YANG and essentially is a
-meta-model for the tree.  The oneM2M wire protocols allow the
-resource tree to be constructed via HTTP or CoAP messages that
-populate nodes in the tree with resource specific attributes.
-Each oneM2M resource type has semantic behaviour associated with
-it.  For example: a container resource has attributes which
-control quotas on how many data or content instance objects can
-exist below it in the tree and how large that collection can be.
-Depending on the resource type, the oneM2M core software
-implements and enforces the resource type specific rules to
-ensure a well-behaved resource tree.
-
-The resource tree can be simultaneously accessed by many
-concurrent applications wishing to manage or access the tree,
-and also many devices can be reporting in new data or sensor
-readings into their appropriate place in the tree.
-
-=== Key APIs and Interfaces
-The APIs to access the oneM2M datastore are well documented
-in TS0004 (referenced above), found on onem2m.org.
-
-RESTCONF is available too, but generally HTTP and CoAP are used to
-access the oneM2M data tree.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/iotdm-developer-guide.html
index 81a5c1de8a02b29af974289d9c3b195ef47d2885..726c9837fc9cf103ea1459c93bc16f9c54429f12 100644 (file)
@@ -1,88 +1,3 @@
 == LACP Developer Guide
-=== LACP Overview
-The OpenDaylight LACP (Link Aggregation Control Protocol) project can be used to
-aggregate multiple links between OpenDaylight controlled network switches and 
-LACP enabled legacy switches or hosts operating in active LACP mode.
 
-OpenDaylight LACP passively negotiates automatic bundling of multiple links to form
-a single LAG (Link Aggregation Group). LAGs  are realised in the OpenDaylight controlled
-switches using OpenFlow 1.3+ group table functionality.
-
-
-=== LACP Architecture
-
-* *inventory*
-   ** Maintains list of OpenDaylight controlled switches and port information
-   ** List of LAGs created and physical ports that are part
-      of the LAG 
-   ** Interacts with MD-SAL to update LACP related information
-      
-* *inventorylistener*
-   ** This module interacts with MD-SAL for receiving node/node-connector notifications
-   
-* *flow*
-  ** Programs the switch to punt LACP PDU (Protocol Data Unit) to controller
-
-* *packethandler*
-   ** Receives and transmits LACP PDUs to the LACP enabled endpoint
-   ** Provides infrastructure services for group table programming
-   
-* *core*
-   ** Performs LACP state machine processing
-
-
-==== How LAG programming is implemented
-
-A LAG, representing multiple aggregated physical ports,
-is realized in the OpenDaylight controlled switches by creating a
-group table entry (group tables are supported from OpenFlow 1.3 onwards).
-The group table entry has a group type *Select* and action referring to
-the aggregated physical ports.
-Any data traffic to be sent out through the LAG can be sent
-through the *group entry* available for the LAG.
-
-Suppose there are ports P1-P8 in a node.
-When the LACP project is installed, a group table entry for handling broadcast traffic is automatically
-created on all the switches that have registered with the controller.
-
-[options="header"]
-|=================================
-|GroupID    |GroupType|EgressPorts
-|<B'castgID>|ALL      |P1,P2,...P8
-|=================================
-
-Now, assume P1 & P2 become part of LAG1. The group table would be programmed as follows:
-
-[options="header"]
-|========================================
-|GroupID    |GroupType|EgressPorts
-|<B'castgID>|ALL      |P3,P4,...P8
-|<LAG1>     |SELECT   |P1,P2
-|========================================
-
-When a second LAG, LAG2, is formed with ports P3 and P4,
-
-[options="header"]
-|===============================================
-|GroupID    |GroupType|EgressPorts
-|<B'castgID>|ALL      |P5,P6,...P8
-|<LAG1>     |SELECT   |P1,P2
-|<LAG2>     |SELECT   |P3,P4
-|===============================================
-
-==== How applications can program OpenFlow flows using LACP-created LAG groups
-
-OpenDaylight controller modules can get the information of LAG by listening/querying the LACP Aggregator datastore.
-
-When any application receives packets, it can check if the ingress port is part of a LAG by verifying the
-LAG Aggregator reference (lacp-agg-ref) for the source nodeConnector that the OpenFlow plugin provides.
-
-When applications want to add flows to egress out of the LAG, they must use the group entry corresponding to the LAG.
-
-From the above example, for a flow to egress out of LAG1,
-
-*add-flow  eth_type=<xxxx>,ip_dst=<x.x.x.x>,actions=output:<LAG1>*
-
-Similarly, when applications want traffic to be broadcasted, they should use the group table entries *<B'castgID>,<LAG1>,<LAG2>* in output action.
-
-For all applications, the group table information is accessible from LACP Aggregator datastore.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/lacp-developer-guide.html
index 1b011e478e8c17cab61a517596ac4f66334a1c76..e83e7e9e672eb8151beaa87739dd1eef7063c53d 100644 (file)
@@ -1,236 +1,3 @@
 == NetIDE Developer Guide ==
 
-=== Overview ===
-The NetIDE Network Engine enables portability and cooperation inside a single 
-network by using a client/server multi-controller SDN architecture. Separate 
-"Client SDN Controllers" host the various SDN Applications with their access 
-to the actual physical network abstracted and coordinated through a single 
-"Server SDN Controller", in this instance OpenDaylight. This allows 
-applications written for Ryu/Floodlight/Pyretic to execute on OpenDaylight 
-managed infrastructure.
-
-The "Network Engine" is modular by design:
-
-* An OpenDaylight plugin, "shim", sends/receives messages to/from subscribed SDN
-Client Controllers. This consumes the ODL OpenFlow Plugin.
-* An initial suite of SDN Client Controller "Backends": Floodlight, Ryu, Pyretic. 
-Further controllers may be added over time as the engine is extensible.
-
-The Network Engine provides a compatibility layer capable of translating calls of 
-the network applications running on top of the client controllers, into calls for 
-the server controller framework. The communication between the client and the 
-server layers is achieved through the NetIDE intermediate protocol, 
-which is an application-layer protocol on top of TCP that transmits the network 
-control/management messages from the client to the server controller and vice-versa.
-Between client and server controller sits the Core Layer which also "speaks" the 
-intermediate protocol. The core layer implements three main functions: 
-
-... interfacing with the client backends and server shim, controlling the lifecycle 
-of controllers as well as modules in them, 
-... orchestrating the execution of individual modules (in one client controller) 
-or complete applications (possibly spread across multiple client controllers), 
-... interfacing with the tools.
-
-.NetIDE Network Engine Architecture
-image::netide/arch-engine.jpg[width=500]
-
-=== NetIDE Intermediate Protocol ===
-
-The Intermediate Protocol serves several needs, it has to: 
-
-... carry control messages between core and shim/backend, e.g., to start up/take 
-down a particular module, providing unique identifiers for modules, 
-... carry event and action messages between shim, core, and backend, properly
-demultiplexing such messages to the right module based on identifiers, 
-... encapsulate messages specific to a particular SBI protocol version (e.g., 
-OpenFlow 1.X, NETCONF, etc.) towards the client controllers with proper information 
-to recognize these messages as such.
-
-The NetIDE packages can be added as dependencies in Maven projects by putting the
-following code in the _pom.xml_ file.
-
-    <dependency>
-        <groupId>org.opendaylight.netide</groupId>
-        <artifactId>api</artifactId>
-        <version>${NETIDE_VERSION}</version>
-    </dependency>
-
-The current stable version for NetIDE is `0.1.0-Beryllium`.
-
-
-
-==== Protocol specification 
-
-Messages of the NetIDE protocol contain two basic elements: the NetIDE header and 
-the data (or payload). The NetIDE header, described below, is placed 
-before the payload and serves as the communication and control link between the 
-different components of the Network Engine. The payload can contain management 
-messages, used by the components of the Network Engine to exchange relevant 
-information, or control/configuration messages (such as OpenFlow, NETCONF, etc.) 
-crossing the Network Engine generated by either network application modules or by 
-the network elements.
-
-The NetIDE header is defined as follows:
-
-  0                   1                   2                   3
-  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- |   netide_ver  |      type     |             length            |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- |                         xid                                   |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- |                       module_id                               |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- |                                                               |
- +                     datapath_id                               +
- |                                                               |
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-
-where each tick mark represents one bit position. Alternatively, in a C-style coding 
-format, the NetIDE header can be represented with the following structure:
-
- struct netide_header {
-     uint8_t  netide_ver;
-     uint8_t  type;
-     uint16_t length;
-     uint32_t xid;
-     uint32_t module_id;
-     uint64_t datapath_id;
- };
-
-* +netide_ver+ is the version of the NetIDE protocol (the current version is v1.2, which 
-is identified with value 0x03).
-* +length+ is the total length of the payload in bytes.
-* +type+ contains a code that indicates the type of the message according to the
-following values:
-+
- enum type {
-     NETIDE_HELLO = 0x01 ,
-     NETIDE_ERROR = 0x02 ,
-     NETIDE_MGMT = 0x03 ,
-     MODULE_ANNOUNCEMENT = 0x04 ,
-     MODULE_ACKNOWLEDGE = 0x05 ,
-     NETIDE_HEARTBEAT = 0x06 ,
-     NETIDE_OPENFLOW = 0x11 ,
-     NETIDE_NETCONF = 0x12 ,
-     NETIDE_OPFLEX = 0x13
- };
-+
-* +datapath_id+ is a 64-bit field that uniquely identifies the network element.
-* +module_id+ is a 32-bit field that uniquely identifies Backends and application modules running
-on top of each client controller. The composition mechanism in the core layer leverages
-this field to implement the correct execution flow of these modules.
-* +xid+ is the transaction identifier associated with each message. Replies must use the same
-value to facilitate the pairing.
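As an illustration only (not part of the NetIDE code base), the 20-byte header described above could be serialized as follows. Network (big-endian) byte order is assumed for multi-byte fields, and the 64-bit +datapath_id+ is passed as two 32-bit halves purely to avoid 64-bit integers in JavaScript.

```javascript
// Illustrative encoder for the NetIDE header described above.
// Assumes big-endian (network) byte order for multi-byte fields.
function packNetideHeader(hdr) {
    var buf = new ArrayBuffer(20);              // 1+1+2+4+4+8 bytes
    var view = new DataView(buf);
    view.setUint8(0, hdr.netide_ver);           // 0x03 for protocol v1.2
    view.setUint8(1, hdr.type);                 // e.g. NETIDE_HELLO = 0x01
    view.setUint16(2, hdr.length, false);       // payload length in bytes
    view.setUint32(4, hdr.xid, false);          // transaction identifier
    view.setUint32(8, hdr.module_id, false);    // backend/module identifier
    view.setUint32(12, hdr.datapath_hi, false); // datapath_id, high word
    view.setUint32(16, hdr.datapath_lo, false); // datapath_id, low word
    return new Uint8Array(buf);
}
```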
-
-
-==== Module announcement
-
-The first operation performed by a Backend is registering itself and the modules that  
-it is running to the Core. This is done by using the +MODULE_ANNOUNCEMENT+ and 
-+MODULE_ACKNOWLEDGE+ message types. As a result of this process, each Backend and 
-application module can be recognized by the Core through an identifier (the +module_id+) 
-placed in the NetIDE header. First, a Backend registers itself by using the following 
-schema: backend-<platform name>-<pid>.
-
-For example, a Ryu Backend will register by using the name
-backend-ryu-12345 in the message, where 12345 is the process ID of the registering instance of the
-Ryu platform. The format of the message is the following:
-
- struct NetIDE_message {
-     netide_ver = 0x03
-     type = MODULE_ANNOUNCEMENT
-     length = len("backend-<platform_name>-<pid>")
-     xid = 0
-     module_id = 0
-     datapath_id = 0
-     data = "backend-<platform_name>-<pid>"
- }
-
-The answer generated by the Core will include a +module_id+ number and the Backend name in
-the payload (the same indicated in the +MODULE_ANNOUNCEMENT+ message):
-
- struct NetIDE_message {
-     netide_ver = 0x03
-     type = MODULE_ACKNOWLEDGE
-     length = len("backend-<platform_name>-<pid>")
-     xid = 0
-     module_id = MODULE_ID
-     datapath_id = 0
-     data = "backend-<platform_name>-<pid>"
- }
-    
-Once a Backend has successfully registered itself, it can start registering its modules with the same
-procedure described above by indicating the name of the module in the data (e.g. data="Firewall").
-From this point on, the Backend will insert its own +module_id+ in the header of the messages it generates
- (e.g. heartbeat, hello messages, OpenFlow echo messages from the client controllers, etc.).
-Otherwise, it will encapsulate the control/configuration messages (e.g. FlowMod, PacketOut, 
-FeatureRequest, NETCONF request, etc.) generated by network application modules with the specific
-+module_id+s.
-
-
-==== Heartbeat
-
-The heartbeat mechanism has been introduced after the adoption of the ZeroMQ messaging queuing
-library to transmit the NetIDE messages. Unfortunately, the ZeroMQ library does not offer any
-mechanism to find out about disrupted connections (and also completely unresponsive peers).
-This limitation of the ZeroMQ library can be an issue for the Core's composition mechanism and for
-the tools connected to the Network Engine, as they cannot detect when a client controller
-disconnects or crashes. As a consequence, Backends must periodically send (say, every 5
-seconds) a "heartbeat" message to the Core. If the Core does not receive at least one "heartbeat"
-message from the Backend within a certain timeframe, the Core considers it disconnected, removes
-all the related data from its memory structures and informs the relevant tools. The format of the
-message is the following:
-
- struct NetIDE_message {
-     netide_ver = 0x03
-     type = NETIDE_HEARTBEAT
-     length = 0
-     xid = 0
-     module_id = backend-id
-     datapath_id = 0
-     data = 0
- }
-
-==== Handshake
-
-Upon a successful connection with the Core, the client controller must immediately send a hello
-message with the list of the control and/or management protocols needed by the applications
-deployed on top of it.
-
- struct NetIDE_message {
-     struct netide_header header;
-     uint8_t data[0];
- };
-
-The header contains the following values:
-
-* +netide_ver=0x03+
-* +type=NETIDE_HELLO+
-* +length=2*NR_PROTOCOLS+
-* +data+ contains one 2-byte word (in big-endian order) for each protocol, with the first
-byte containing the code of the protocol according to the above enum, while the second byte
-indicates the version of the protocol (e.g. according to the ONF specification, 0x01 for OpenFlow
-v1.0, 0x02 for OpenFlow v1.1, etc.). The NETCONF version is marked with 0x01, which refers to the
-specification in RFC 6241, while the OpFlex version is marked with 0x00 since this protocol is
-still in a work-in-progress stage.
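For example, the hello +data+ payload described above could be assembled as follows. This is an illustrative sketch, not library code; the protocol codes come from the type enum in the protocol specification.

```javascript
// Build the hello-message data payload: one 2-byte word per protocol,
// first byte the protocol code, second byte the protocol version.
function buildHelloData(protocols) {
    var data = new Uint8Array(2 * protocols.length);
    protocols.forEach(function (p, i) {
        data[2 * i] = p.code;        // e.g. NETIDE_OPENFLOW = 0x11
        data[2 * i + 1] = p.version; // e.g. 0x01 for OpenFlow v1.0
    });
    return data;
}
```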
-
-The Core relays hello messages to the server controller which responds with another hello message
-containing the following:
-
-* +netide_ver=0x03+
-* +type=NETIDE_HELLO+
-* +length=2*NR_PROTOCOLS+
-
-This response is sent if at least one of the protocols requested by the client is supported. In
-particular, +data+ contains the codes of the protocols that match the client's request (2-byte
-words, big-endian order). If the handshake fails because none of the requested protocols is
-supported by the server controller, the header of the answer is as follows:
-
-* +netide_ver=0x03+
-* +type=NETIDE_ERROR+
-* +length=2*NR_PROTOCOLS+
-* +data+ contains the codes of all the protocols supported by the server
-controller (2-byte words, big-endian order). In this case, the TCP session is terminated by the
-server controller just after the answer is received by the client.
\ No newline at end of file
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/netide-developer-guide.html
index b366d60757772ac786676354886c5260abde2a2f..03962f88e998b103a45b6695557321545b0f73f4 100644 (file)
@@ -1,107 +1,3 @@
 == Neutron Northbound
 
-=== How to add new API support
-OpenStack Neutron is a moving target. It is continuously adding new features
-as new rest APIs. Here is a basic step to add new API support:
-
-In the Neutron Northbound project:
-
-* Add new YANG model for it under `neutron/model/src/main/yang` and
-  `update neutron.yang`
-* Add northbound API for it, and neutron-spi
-** Implement `Neutron<New API>Request.java` and `Neutron<New API>Northbound.java`
-   under
-   `neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/`
-** Implement `INeutron<New API>CRUD.java` and new data structure if any under
-   `neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/`
-** update
-   `neutron/neutron-spi/src/main/java/org/opendaylight/neutron/spi/NeutronCRUDInterfaces.java`
-   to wire new CRUD interface
-** Add unit tests, `Neutron<New structure>JAXBTest.java` under
-   `neutron/neutron-spi/src/test/java/org/opendaylight/neutron/spi/`
-* update
-  `neutron/northbound-api/src/main/java/org/opendaylight/neutron/northbound/api/NeutronNorthboundRSApplication.java`
-  to wire the new northbound API to `RSApplication`
-* Add transcriber, `Neutron<New API>Interface.java` under
-  `transcriber/src/main/java/org/opendaylight/neutron/transcriber/`
-* update
-  `transcriber/src/main/java/org/opendaylight/neutron/transcriber/NeutronTranscriberProvider.java`
-  to wire a new transcriber
-** Add integration tests `Neutron<New API>Tests.java`
-   under `integration/test/src/test/java/org/opendaylight/neutron/e2etest/`
-** update `integration/test/src/test/java/org/opendaylight/neutron/e2etest/ITNeutronE2E.java`
-   to run the newly added tests.
-
-In OpenStack networking-odl:
-
-* Add new driver (or plugin) for new API with tests.
-
-In a southbound Neutron Provider:
-
-* Implement the actual backend to realize the new API by listening to the
-  related YANG models.
-
-
-=== How to write transcriber
-
-For each Neutron data object, there is a `Neutron*Interface` defined within
-the transcriber artifact that will write that object to the MD-SAL
-configuration datastore.
-
-All `Neutron*Interface` extend `AbstractNeutronInterface`, in which two methods
-are defined:
-
-* one takes the Neutron object as input, and will create a data object from it.
-* one takes a UUID as input, and will create a data object containing the UUID.
-
-----
-protected abstract T toMd(S neutronObject);
-protected abstract T toMd(String uuid);
-----
-
-In addition the `AbstractNeutronInterface` class provides several other
-helper methods (`addMd`, `updateMd`, `removeMd`), which handle the actual
-writing to the configuration datastore.
-
-==== The semantics of the `toMD()` methods
-Each of the Neutron YANG models defines structures containing data.
-Further, each YANG-modeled structure has its own builder.
-A particular `toMD()` method instantiates an instance of the correct
-builder, fills in the properties of the builder from the corresponding
-values of the Neutron object and then creates the YANG-modeled structures
-via the `build()` method.
-
-As an example, the `toMd` code for Neutron Networks is
-presented below:
-
-----
-protected Network toMd(NeutronNetwork network) {
-    NetworkBuilder networkBuilder = new NetworkBuilder();
-    networkBuilder.setAdminStateUp(network.getAdminStateUp());
-    if (network.getNetworkName() != null) {
-        networkBuilder.setName(network.getNetworkName());
-    }
-    if (network.getShared() != null) {
-        networkBuilder.setShared(network.getShared());
-    }
-    if (network.getStatus() != null) {
-        networkBuilder.setStatus(network.getStatus());
-    }
-    if (network.getSubnets() != null) {
-        List<Uuid> subnets = new ArrayList<Uuid>();
-        for( String subnet : network.getSubnets()) {
-            subnets.add(toUuid(subnet));
-        }
-        networkBuilder.setSubnets(subnets);
-    }
-    if (network.getTenantID() != null) {
-        networkBuilder.setTenantId(toUuid(network.getTenantID()));
-    }
-    if (network.getNetworkUUID() != null) {
-        networkBuilder.setUuid(toUuid(network.getNetworkUUID()));
-    } else {
-        logger.warn("Attempting to write neutron network without UUID");
-    }
-    return networkBuilder.build();
-}
-----
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/neutron-northbound.html
index 2a9f922e38cd439db8c73074dc5a47067c27815d..a01d4a451053acb7bc4e371b4c470e1db28cdb2f 100644 (file)
@@ -1,139 +1,3 @@
 == Neutron Service Developer Guide
 
-=== Overview
-This Karaf feature (`odl-neutron-service`) provides integration support for OpenStack Neutron
-via the OpenDaylight ML2 mechanism driver. The Neutron Service is only one of the
-components necessary for OpenStack integration.
-It defines YANG models for the OpenStack Neutron data models and exposes a
-northbound API via REST and YANG-model RESTCONF.
-
-Developers who want to add a new provider for new OpenStack Neutron
-extensions/services (Neutron constantly adds new extensions/services, and OpenDaylight
-will keep up with those new things) need to communicate with this Neutron
-Service or add models to the Neutron Service.
-If you want to add new extensions/services themselves to the Neutron Service,
-new YANG data models need to be added, but that is out of scope of this document
-because this guide is for a developer who will be _using_ the feature
-to build something separate, but _not_ somebody who will be developing
-code for this feature itself.
-
-=== Neutron Service Architecture
-image::neutron/odl-neutron-service-developer-architecture.png[height="450px", width="550px", title="Neutron Service Architecture"]
-// image original: https://docs.google.com/drawings/d/15xtroJahSFt93K10Zp8AVln_WZgowmhv7MC_2VdZQzg/edit?usp=sharing
-
-The Neutron Service defines YANG models for OpenStack Neutron integration.
-When OpenStack admins/users request changes (creation/update/deletion)
-of Neutron resources, e.g., Neutron network, Neutron subnet, Neutron port, the corresponding YANG model within OpenDaylight will be modified.
-The OpenDaylight OpenStack provider subscribes to changes on those models and
-is notified of those modifications through MD-SAL when changes are made.
-Then the provider will do the necessary tasks to realize OpenStack integration.
-How to realize it (or even not realize it) is up to each provider.
-The Neutron Service itself does not take care of it.
-
-=== How to Write a SB Neutron Consumer
-In Boron, there is only one option for SB Neutron Consumers:
-
-* Listening for changes via the Neutron YANG model
-
-Until Beryllium there was another way with the legacy I*Aware interface.
-As of Boron, that interface has been eliminated, so all SB Neutron Consumers
-have to use the Neutron YANG model.
-
-
-=== Neutron YANG models
-Neutron service defines YANG models for Neutron. The details can be found
-at
-
-* https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=tree;f=model/src/main/yang;hb=refs/heads/stable/boron
-
-Basically those models are based on OpenStack Neutron API definitions.
-For exact definitions, the OpenStack Neutron source code needs to be consulted,
-as the above documentation doesn't always cover the necessary details.
-There is nothing special needed to utilize those Neutron YANG models.
-The basic procedure will be:
-
-. subscribe for changes made to the model
-. respond to the data change notification for each model
-
-[NOTE]
-Currently there is no way to refuse the requested configuration at this
-point. That is left to future work.
-
-[source,java]
-----
-public class NeutronNetworkChangeListener implements DataChangeListener, AutoCloseable {
-    private ListenerRegistration<DataChangeListener> registration;
-    private DataBroker db;
-
-    public NeutronNetworkChangeListener(DataBroker db){
-        this.db = db;
-        // create identity path to register on service startup
-        InstanceIdentifier<Network> path = InstanceIdentifier
-                .create(Neutron.class)
-                .child(Networks.class)
-                .child(Network.class);
-        LOG.debug("Register listener for Neutron Network model data changes");
-        // register for Data Change Notification
-        registration =
-                this.db.registerDataChangeListener(LogicalDatastoreType.CONFIGURATION, path, this, DataChangeScope.ONE);
-
-    }
-
-    @Override
-    public void onDataChanged(
-            AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> changes) {
-        LOG.trace("Data changes : {}",changes);
-
-        // handle data change notification
-        Object[] subscribers = NeutronIAwareUtil.getInstances(INeutronNetworkAware.class, this);
-        createNetwork(changes, subscribers);
-        updateNetwork(changes, subscribers);
-        deleteNetwork(changes, subscribers);
-    }
-}
-----
-
-=== Neutron configuration
-Boron introduces new configuration models that allow OpenDaylight to tell
-OpenStack neutron/networking-odl its configuration and capabilities.
-
-==== hostconfig
-This is for OpenDaylight to tell per-node configuration to Neutron.
-In particular, it is used heavily by pseudo agent port binding.
-
-The model definition can be found at
-
-* https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=blob;f=model/src/main/yang/neutron-hostconfig.yang;hb=refs/heads/stable/boron
-
-How to populate this for pseudo agent port binding is documented at
-
-* http://git.openstack.org/cgit/openstack/networking-odl/tree/doc/source/devref/hostconfig.rst
-
-==== Neutron extension config
-In Boron this is experimental.
-The model definition can be found at
-
-* https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=blob;f=model/src/main/yang/neutron-extensions.yang;hb=refs/heads/stable/boron
-
-Each Neutron Service provider has its own feature set. Some support
-the full features of OpenStack, but others support only a subset.
-Even with the same supported Neutron API, some functionality may or may not be
-supported. So there needs to be a way for OpenDaylight to report its
-capability to networking-odl, so that networking-odl can initialize Neutron
-properly based on the reported capability.
-
-
-=== Neutron Logger
-There is another small Karaf feature, `odl-neutron-logger`, which logs changes of the Neutron
-YANG models and can be used for debugging/auditing.
-
-It also helps to understand how to listen for the changes.
-
-* https://git.opendaylight.org/gerrit/gitweb?p=neutron.git;a=blob;f=neutron-logger/src/main/java/org/opendaylight/neutron/logger/NeutronLogger.java;hb=refs/heads/stable/boron
-
-
-=== API Reference Documentation
-The OpenStack Neutron API references
-
-* http://developer.openstack.org/api-ref-networking-v2.html
-* http://developer.openstack.org/api-ref-networking-v2-ext.html
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/neutron-service-developer-guide.html
index bebdb7e08b5fc56c9bf1f3a6b65d375e8f1943d4..5d1538fa50e534f45afa78d6272b8424a0796de5 100644 (file)
@@ -1,870 +1,3 @@
 == OCP Plugin Developer Guide
-This document is intended for both OCP (ORI [Open Radio Interface] C&M [Control and Management]
-Protocol) agent developers and OpenDaylight service/application developers.
-It describes essential information needed to implement an OCP agent that is capable of interoperating
-with the OCP plugin running in OpenDaylight, including the OCP connection establishment and
-state machines used on both ends of the connection. It also provides a detailed description of the
-northbound/southbound APIs that the OCP plugin exposes to allow automation and programmability.
 
-=== Overview
-OCP is an ETSI standard protocol for control and management of Remote Radio Head (RRH)
-equipment. The OCP Project addresses the need for a southbound plugin that allows
-applications and controller services to interact with RRHs using OCP. The OCP southbound
-plugin will allow applications acting as a Radio Equipment Control (REC) to interact
-with RRHs that support an OCP agent.
-
-.OCP southbound plugin
-image::ocpplugin/ocp-sb-plugin.jpg[OCP southbound plugin,width=500]
-
-=== Architecture
-OCP is a vendor-neutral standard communications interface defined to enable control and management
-between RE and REC of an ORI architecture. The OCP Plugin supports the implementation of the OCP
-specification; it is based on the Model Driven Service Abstraction Layer (MD-SAL) architecture.
-
-The OCP Plugin project consists of three main components: OCP southbound plugin, OCP protocol library
-and OCP service. For details on each of them, refer to the OCP Plugin User Guide.
-
-.Overall architecture
-image::ocpplugin/plugin-design.jpg[Overall architecture,width=500]
-
-=== Connection Establishment
-The OCP layer is transported over a TCP/IP connection established between the RE and the REC.
-OCP provides the following functions:
-
-* Control & Management of the RE by the REC
-* Transport of AISG/3GPP Iuant Layer 7 messages and alarms between REC and RE
-
-==== Hello Message
-The Hello message is used by the OCP agent during connection setup for version
-negotiation. When the connection is established, the OCP agent immediately sends a
-Hello message with the version field set to the highest OCP version it supports,
-along with the vendor ID and serial number of the radio head it is running on.
-
-The combination of the vendor ID and serial number is used by the OCP plugin to
-uniquely identify a managed radio head. If it does not receive a reply from the OCP
-plugin, the OCP agent can resend the Hello message, governed by a pre-defined Hello
-timeout (THLO) and Hello resend count (NHLO).
-
-According to the ORI specification, the default value of the TCP Link Monitoring
-Timer (TTLM) is 50 seconds. The RE shall trigger an OCP layer restart when TTLM
-expires in the RE or when the RE detects a TCP link failure, so we may define
-NHLO * THLO = 50 seconds (e.g. NHLO = 10, THLO = 5 seconds).
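The resend schedule implied by these constants can be sketched in a few lines. This is a minimal illustration using the example values from the text (NHLO = 10, THLO = 5); the constants are examples, not normative values:

```python
# Illustrative sketch of the Hello resend schedule described above.
TTLM = 50  # TCP Link Monitoring Timer, seconds (ORI default)
NHLO = 10  # Hello resend attempts (example value)
THLO = 5   # Hello timeout per attempt, seconds (example value)

def hello_send_times(nhlo: int, thlo: int) -> list:
    """Return the offsets (in seconds) at which the agent sends Hello."""
    return [attempt * thlo for attempt in range(nhlo)]

# All resend attempts must fit inside the TTLM window.
assert NHLO * THLO == TTLM
print(hello_send_times(NHLO, THLO))
```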
-
-By nature the Hello message is a new type of indication; it contains the supported
-OCP version, vendor ID and serial number, as shown below.
-
-.Hello message
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<msg xmlns="http://uri.etsi.org/ori/002-2/v4.1.1">
-  <header>
-    <msgType>IND</msgType>
-    <msgUID>0</msgUID>
-  </header>
-  <body>
-    <helloInd>
-      <version>4.1.1</version>
-      <vendorId>XYZ</vendorId>
-      <serialNumber>ABC123</serialNumber>
-    </helloInd>
-  </body>
-</msg>
-----
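A Hello indication like the one above can be generated programmatically. The following Python sketch (an illustration using only the standard library, not plugin code) builds the message and parses a field back out:

```python
import xml.etree.ElementTree as ET

# Namespace taken from the example message above.
NS = "http://uri.etsi.org/ori/002-2/v4.1.1"

def build_hello(version: str, vendor_id: str, serial: str) -> str:
    """Build an OCP Hello indication like the example above."""
    ET.register_namespace("", NS)  # serialize with a default namespace
    msg = ET.Element(f"{{{NS}}}msg")
    header = ET.SubElement(msg, f"{{{NS}}}header")
    ET.SubElement(header, f"{{{NS}}}msgType").text = "IND"
    ET.SubElement(header, f"{{{NS}}}msgUID").text = "0"
    body = ET.SubElement(msg, f"{{{NS}}}body")
    hello = ET.SubElement(body, f"{{{NS}}}helloInd")
    ET.SubElement(hello, f"{{{NS}}}version").text = version
    ET.SubElement(hello, f"{{{NS}}}vendorId").text = vendor_id
    ET.SubElement(hello, f"{{{NS}}}serialNumber").text = serial
    return ET.tostring(msg, encoding="unicode")

xml_text = build_hello("4.1.1", "XYZ", "ABC123")
root = ET.fromstring(xml_text)
# The vendor ID + serial number pair uniquely identifies the radio head.
assert root.findtext(f"{{{NS}}}body/{{{NS}}}helloInd/{{{NS}}}vendorId") == "XYZ"
```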
-
-==== Ack Message
-The OCP plugin always responds to a Hello from the OCP agent with an ACK:
-ACK(OK) if everything is in order, ACK(FAIL) if something is wrong.
-
-If the OCP agent receives ACK(OK), it goes to Established state. If the OCP agent receives ACK(FAIL),
-it goes to Maintenance state. The failure code and reason of ACK(FAIL) are defined as below:
-
-* FAIL_OCP_VERSION (OCP version not supported)
-* FAIL_NO_MORE_CAPACITY (OCP plugin cannot control any more radio heads)
-
-The result inside Ack message indicates OK or FAIL with different reasons.
-
-.Ack message
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<msg xmlns="http://uri.etsi.org/ori/002-2/v4.1.1">
-  <header>
-    <msgType>ACK</msgType>
-    <msgUID>0</msgUID>
-  </header>
-  <body>
-    <helloAck>
-      <result>FAIL_OCP_VERSION</result>
-    </helloAck>
-  </body>
-</msg>
-----
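The agent-side reaction to the ACK result described above can be summarized as a small state-transition function. This is a sketch of the described behaviour, not plugin code:

```python
def next_agent_state(ack_result: str) -> str:
    """Map the helloAck result to the OCP agent's next state,
    per the behaviour described in the text."""
    if ack_result == "OK":
        return "ESTABLISHED"
    if ack_result in ("FAIL_OCP_VERSION", "FAIL_NO_MORE_CAPACITY"):
        return "MAINTENANCE"
    raise ValueError("unknown ACK result: " + ack_result)

assert next_agent_state("OK") == "ESTABLISHED"
assert next_agent_state("FAIL_OCP_VERSION") == "MAINTENANCE"
```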
-
-==== State Machines
-The following figures illustrate the Finite State Machine (FSM) of the OCP agent
-and the OCP plugin for the new-connection procedure.
-
-.OCP agent state machine
-image::ocpplugin/ocpagent-state-machine.jpg[OCP agent state machine,width=500]
-
-.OCP plugin state machine
-image::ocpplugin/ocpplugin-state-machine.jpg[OCP plugin state machine,width=500]
-
-=== Northbound APIs
-There are ten exposed northbound APIs: health-check, set-time, re-reset, get-param,
-modify-param, create-obj, delete-obj, get-state, modify-state and get-fault.
-
-==== health-check
-The Health Check procedure allows the application to verify that the OCP layer is functioning
-correctly at the RE.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:health-check-nb
-
-===== POST Input
-
-[options="header",cols="2,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| tcpLinkMonTimeout | unsignedShort | TCP Link Monitoring Timeout (unit: seconds) | 50 | Yes
-|=======
-
-.Example
-----
-{
-    "health-check-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "tcpLinkMonTimeout": "50"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| result | String, enumerated | Common default result codes
-|=======
-
-.Example
-----
-{
-    "output": {
-        "result": "SUCCESS"
-    }
-}
-----
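All ten northbound RPCs share the same `{"<rpc-name>": {"input": {...}}}` envelope, so request bodies can be generated generically. The sketch below is illustrative; the base URL is the default from the text, and the helper name is hypothetical:

```python
import json

# Default RESTCONF base URL from the text (deployment-specific).
BASE = "http://localhost:8181/restconf/operations/ocp-service"

def rpc_body(rpc_name: str, **fields) -> str:
    """Build the JSON request body for an ocp-service northbound RPC
    (hypothetical helper, not part of the plugin)."""
    return json.dumps({rpc_name: {"input": fields}})

body = rpc_body("health-check-nb",
                nodeId="ocp:MTI-101-200", tcpLinkMonTimeout="50")
assert json.loads(body)["health-check-nb"]["input"]["nodeId"] == "ocp:MTI-101-200"
# POST `body` to BASE + ":health-check-nb" with Content-Type
# application/json (e.g. via an HTTP client) to invoke the RPC.
```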
-
-==== set-time
-The Set Time procedure allows the application to set/update the absolute time reference that
-shall be used by the RE.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:set-time-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| newTime | dateTime | New datetime setting for radio head | 2016-04-26T10:23:00-05:00 | Yes
-|=======
-
-.Example
-----
-{
-    "set-time-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "newTime": "2016-04-26T10:23:00-05:00"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| result | String, enumerated | Common default result codes + FAIL_INVALID_TIMEDATA
-|=======
-
-.Example
-----
-{
-    "output": {
-        "result": "SUCCESS"
-    }
-}
-----
-
-==== re-reset
-The RE Reset procedure allows the application to reset a specific RE.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:re-reset-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-|=======
-
-.Example
-----
-{
-    "re-reset-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| result | String, enumerated | Common default result codes
-|=======
-
-.Example
-----
-{
-    "output": {
-        "result": "SUCCESS"
-    }
-}
-----
-
-==== get-param
-The Object Parameter Reporting procedure allows the application to retrieve the following information:
-
-. the defined object types and instances within the Resource Model of the RE
-. the values of the parameters of the objects
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:get-param-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| objId | String | Object ID | RxSigPath_5G:1 | Yes
-| paramName | String | Parameter name | dataLink | Yes
-|=======
-
-.Example
-----
-{
-    "get-param-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "objId": "RxSigPath_5G:1",
-            "paramName": "dataLink"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| id | String | Object ID
-| name | String | Object parameter name
-| value | String | Object parameter value
-| result | String, enumerated | Common default result codes + "FAIL_UNKNOWN_OBJECT", "FAIL_UNKNOWN_PARAM"
-|=======
-
-.Example
-----
-{
-    "output": {
-        "obj": [
-            {
-                "id": "RxSigPath_5G:1",
-                "param": [
-                    {
-                        "name": "dataLink",
-                        "value": "dataLink:1"
-                    }
-                ]
-            }
-        ],
-        "result": "SUCCESS"
-    }
-}
-----
-
-==== modify-param
-The Object Parameter Modification procedure allows the application to configure the values of the
-parameters of the objects identified by the Resource Model.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:modify-param-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| objId | String | Object ID | RxSigPath_5G:1 | Yes
-| name | String | Object parameter name | dataLink | Yes
-| value | String | Object parameter value | dataLink:1 | Yes
-|=======
-
-.Example
-----
-{
-    "modify-param-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "objId": "RxSigPath_5G:1",
-            "param": [
-                {
-                    "name": "dataLink",
-                    "value": "dataLink:1"
-                }
-            ]
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| objId | String | Object ID
-| globResult | String, enumerated | Common default result codes + "FAIL_UNKNOWN_OBJECT", "FAIL_PARAMETER_FAIL",
-  "FAIL_NOSUCH_RESOURCE"
-| name | String | Object parameter name
-| result | String, enumerated | "SUCCESS", "FAIL_UNKNOWN_PARAM", "FAIL_PARAM_READONLY", "FAIL_PARAM_LOCKREQUIRED",
-  "FAIL_VALUE_OUTOF_RANGE", "FAIL_VALUE_TYPE_ERROR"
-|=======
-
-.Example
-----
-{
-    "output": {
-        "objId": "RxSigPath_5G:1",
-        "globResult": "SUCCESS",
-        "param": [
-            {
-                "name": "dataLink",
-                "result": "SUCCESS"
-            }
-        ]
-    }
-}
-----
-
-==== create-obj
-The Object Creation procedure allows the application to create and initialize a new instance
-of the given object type on the RE.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:create-obj-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| objType | String | Object type | RxSigPath_5G | Yes
-| name | String | Object parameter name | dataLink | No
-| value | String | Object parameter value | dataLink:1 | No
-|=======
-
-.Example
-----
-{
-    "create-obj-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "objType": "RxSigPath_5G",
-            "param": [
-                {
-                    "name": "dataLink",
-                    "value": "dataLink:1"
-                }
-            ]
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| objId | String | Object ID
-| globResult | String, enumerated | Common default result codes + "FAIL_UNKNOWN_OBJTYPE", "FAIL_STATIC_OBJTYPE",
-  "FAIL_UNKNOWN_OBJECT", "FAIL_CHILD_NOTALLOWED", "FAIL_OUTOF_RESOURCES", "FAIL_PARAMETER_FAIL", "FAIL_NOSUCH_RESOURCE"
-| name | String | Object parameter name
-| result | String, enumerated | "SUCCESS", "FAIL_UNKNOWN_PARAM", "FAIL_PARAM_READONLY", "FAIL_PARAM_LOCKREQUIRED",
-  "FAIL_VALUE_OUTOF_RANGE", "FAIL_VALUE_TYPE_ERROR"
-|=======
-
-.Example
-----
-{
-    "output": {
-        "objId": "RxSigPath_5G:0",
-        "globResult": "SUCCESS",
-        "param": [
-            {
-                "name": "dataLink",
-                "result": "SUCCESS"
-            }
-        ]
-    }
-}
-----
-
-==== delete-obj
-The Object Deletion procedure allows the application to delete a given object
-instance and, recursively, all of its child objects on the RE.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:delete-obj-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| objId | String | Object ID | RxSigPath_5G:1 | Yes
-|=======
-
-.Example
-----
-{
-    "delete-obj-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "objId": "RxSigPath_5G:0"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| result | String, enumerated | Common default result codes + "FAIL_UNKNOWN_OBJECT",
-  "FAIL_STATIC_OBJTYPE", "FAIL_LOCKREQUIRED"
-|=======
-
-.Example
-----
-{
-    "output": {
-        "result": "SUCCESS"
-    }
-}
-----
-
-==== get-state
-The Object State Reporting procedure allows the application to acquire the current state
-(for the requested state type) of one or more objects of the RE resource model, and
-additionally configure event-triggered reporting of the detected state changes for all
-state types of the indicated objects.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:get-state-nb
-
-===== POST Input
-
-[options="header",cols="2,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| objId | String | Object ID | RxSigPath_5G:1 | Yes
-| stateType | String, enumerated | Valid values: "AST", "FST", "ALL" | ALL | Yes
-| eventDrivenReporting | Boolean | Event-triggered reporting of state change | true | Yes
-|=======
-
-.Example
-----
-{
-    "get-state-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "objId": "antPort:0",
-            "stateType": "ALL",
-            "eventDrivenReporting": "true"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| id | String | Object ID
-| type | String, enumerated | State type. Valid values: "AST", "FST"
-| value | String, enumerated | State value. Valid values: For state type = "AST": "LOCKED", "UNLOCKED".
-  For state type = "FST": "PRE_OPERATIONAL", "OPERATIONAL", "DEGRADED", "FAILED", "NOT_OPERATIONAL", "DISABLED"
-| result | String, enumerated | Common default result codes + "FAIL_UNKNOWN_OBJECT", "FAIL_UNKNOWN_STATETYPE",
-  "FAIL_VALUE_OUTOF_RANGE"
-|=======
-
-.Example
-----
-{
-    "output": {
-        "obj": [
-            {
-                "id": "antPort:0",
-                "state": [
-                    {
-                        "type": "FST",
-                        "value": "DISABLED"
-                    },
-                    {
-                        "type": "AST",
-                        "value": "LOCKED"
-                    }
-                ]
-            }
-        ],
-        "result": "SUCCESS"
-    }
-}
-----
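Client code typically flattens the nested `obj`/`state` arrays of a get-state reply into a lookup table. A small sketch (a hypothetical helper operating on the example reply above, not part of the plugin):

```python
def states_by_object(output: dict) -> dict:
    """Flatten a get-state-nb reply into {objId: {stateType: value}}."""
    return {
        obj["id"]: {s["type"]: s["value"] for s in obj.get("state", [])}
        for obj in output.get("obj", [])
    }

# The example reply from the text above.
reply = {
    "obj": [{"id": "antPort:0",
             "state": [{"type": "FST", "value": "DISABLED"},
                       {"type": "AST", "value": "LOCKED"}]}],
    "result": "SUCCESS",
}
assert states_by_object(reply)["antPort:0"]["AST"] == "LOCKED"
```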
-
-==== modify-state
-The Object State Modification procedure allows the application to trigger a change in the
-state of an object of the RE Resource Model.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:modify-state-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| objId | String | Object ID | RxSigPath_5G:1 | Yes
-| stateType | String, enumerated | Valid values: "AST", "FST", "ALL" | AST | Yes
-| stateValue | String, enumerated | Valid values: For state type = "AST": "LOCKED", "UNLOCKED".
-  For state type = "FST": "PRE_OPERATIONAL", "OPERATIONAL", "DEGRADED", "FAILED", "NOT_OPERATIONAL", "DISABLED" | LOCKED | Yes
-|=======
-
-.Example
-----
-{
-    "modify-state-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "objId": "RxSigPath_5G:1",
-            "stateType": "AST",
-            "stateValue": "LOCKED"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| objId | String | Object ID
-| stateType | String, enumerated | State type. Valid values: "AST", "FST"
-| stateValue | String, enumerated | State value. Valid values: For state type = "AST": "LOCKED", "UNLOCKED".
-  For state type = "FST": "PRE_OPERATIONAL", "OPERATIONAL", "DEGRADED", "FAILED", "NOT_OPERATIONAL", "DISABLED"
-| result | String, enumerated | Common default result codes + "FAIL_UNKNOWN_OBJECT", "FAIL_UNKNOWN_STATETYPE",
-"FAIL_UNKNOWN_STATEVALUE", "FAIL_STATE_READONLY", "FAIL_RESOURCE_UNAVAILABLE", "FAIL_RESOURCE_INUSE",
-"FAIL_PARENT_CHILD_CONFLICT", "FAIL_PRECONDITION_NOTMET"
-|=======
-
-.Example
-----
-{
-    "output": {
-        "objId": "RxSigPath_5G:1",
-        "stateType": "AST",
-        "stateValue": "LOCKED",
-        "result": "SUCCESS"
-    }
-}
-----
-
-==== get-fault
-The Fault Reporting procedure allows the application to acquire information about all current
-active faults associated with a primary object, as well as configure the RE to report when the
-fault status changes for any of the faults associated with the indicated primary object.
-
-Default URL: http://localhost:8181/restconf/operations/ocp-service:get-fault-nb
-
-===== POST Input
-
-[options="header",cols="1,1,2,2,1"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| nodeId | String | Inventory node reference for OCP radio head | ocp:MTI-101-200 | Yes
-| objId | String | Object ID | RE:0 | Yes
-| eventDrivenReporting | Boolean | Event-triggered reporting of fault | true | Yes
-|=======
-
-.Example
-----
-{
-    "get-fault-nb": {
-        "input": {
-            "nodeId": "ocp:MTI-101-200",
-            "objId": "RE:0",
-            "eventDrivenReporting": "true"
-        }
-    }
-}
-----
-
-===== POST Output
-
-[options="header",cols="1,1,2"]
-|=======
-|Field Name | Type | Description
-| result | String, enumerated | Common default result codes + "FAIL_UNKNOWN_OBJECT", "FAIL_VALUE_OUTOF_RANGE"
-| id (obj) | String | Object ID
-| id (fault) | String | Fault ID
-| severity | String | Fault severity
-| timestamp | dateTime | Time stamp
-| descr | String | Text description
-| affectedObj | String | Affected object
-|=======
-
-.Example
-----
-{
-    "output": {
-        "result": "SUCCESS",
-        "obj": [
-            {
-                "id": "RE:0",
-                "fault": [
-                    {
-                        "id": "FAULT_OVERTEMP",
-                        "severity": "DEGRADED",
-                        "timestamp": "2012-02-12T16:35:00",
-                        "descr": "PA temp too high; Pout reduced",
-                        "affectedObj": [
-                            "TxSigPath_EUTRA:0",
-                            "TxSigPath_EUTRA:1"
-                        ]
-                    },
-                    {
-                        "id": "FAULT_VSWR_OUTOF_RANGE",
-                        "severity": "WARNING",
-                        "timestamp": "2012-02-12T16:01:05"
-                    }
-                ]
-            }
-        ]
-    }
-}
-----
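Applications often need to pick out faults of a given severity from a get-fault reply. The sketch below (a hypothetical helper operating on the example reply above) shows one way:

```python
def faults_with_severity(output: dict, severity: str) -> list:
    """Return the fault IDs of the given severity from a get-fault-nb reply."""
    return [fault["id"]
            for obj in output.get("obj", [])
            for fault in obj.get("fault", [])
            if fault.get("severity") == severity]

# Condensed version of the example reply from the text above.
reply = {"result": "SUCCESS",
         "obj": [{"id": "RE:0",
                  "fault": [
                      {"id": "FAULT_OVERTEMP", "severity": "DEGRADED"},
                      {"id": "FAULT_VSWR_OUTOF_RANGE", "severity": "WARNING"}]}]}
assert faults_with_severity(reply, "WARNING") == ["FAULT_VSWR_OUTOF_RANGE"]
```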
-
-NOTE: The northbound APIs described above wrap the southbound APIs to make them
-accessible to external applications via RESTCONF, and also take care of
-synchronizing the RE resource model between the radio heads and the controller's
-datastore. See applications/ocp-service/src/main/yang/ocp-resourcemodel.yang for
-the YANG representation of the RE resource model.
-
-=== Java Interfaces (Southbound APIs)
-The southbound APIs provide concrete implementations of the following OCP elementary
-functions: health-check, set-time, re-reset, get-param, modify-param, create-obj,
-delete-obj, get-state, modify-state and get-fault. Any OpenDaylight service or
-application (including the OCP service itself) that wants to speak OCP to radio
-heads will need to use them.
-
-==== SalDeviceMgmtService
-Interface SalDeviceMgmtService defines three methods corresponding to health-check, set-time and re-reset.
-
-.SalDeviceMgmtService.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.device.mgmt.rev150811;
-
-public interface SalDeviceMgmtService
-    extends
-    RpcService
-{
-    Future<RpcResult<HealthCheckOutput>> healthCheck(HealthCheckInput input);
-
-    Future<RpcResult<SetTimeOutput>> setTime(SetTimeInput input);
-
-    Future<RpcResult<ReResetOutput>> reReset(ReResetInput input);
-
-}
-----
-
-==== SalConfigMgmtService
-Interface SalConfigMgmtService defines two methods corresponding to get-param and modify-param.
-
-.SalConfigMgmtService.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.config.mgmt.rev150811;
-
-public interface SalConfigMgmtService
-    extends
-    RpcService
-{
-
-    Future<RpcResult<GetParamOutput>> getParam(GetParamInput input);
-
-    Future<RpcResult<ModifyParamOutput>> modifyParam(ModifyParamInput input);
-
-}
-----
-
-==== SalObjectLifecycleService
-Interface SalObjectLifecycleService defines two methods corresponding to create-obj and delete-obj.
-
-.SalObjectLifecycleService.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.lifecycle.rev150811;
-
-public interface SalObjectLifecycleService
-    extends
-    RpcService
-{
-
-    Future<RpcResult<CreateObjOutput>> createObj(CreateObjInput input);
-
-    Future<RpcResult<DeleteObjOutput>> deleteObj(DeleteObjInput input);
-
-}
-----
-
-==== SalObjectStateMgmtService
-Interface SalObjectStateMgmtService defines two methods corresponding to get-state and modify-state.
-
-.SalObjectStateMgmtService.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.state.mgmt.rev150811;
-
-public interface SalObjectStateMgmtService
-    extends
-    RpcService
-{
-
-    Future<RpcResult<GetStateOutput>> getState(GetStateInput input);
-
-    Future<RpcResult<ModifyStateOutput>> modifyState(ModifyStateInput input);
-
-}
-----
-
-==== SalFaultMgmtService
-Interface SalFaultMgmtService defines only one method corresponding to get-fault.
-
-.SalFaultMgmtService.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.fault.mgmt.rev150811;
-
-public interface SalFaultMgmtService
-    extends
-    RpcService
-{
-
-    Future<RpcResult<GetFaultOutput>> getFault(GetFaultInput input);
-
-}
-----
-
-=== Notifications
-In addition to indication messages, the OCP southbound plugin translates specific
-events (e.g., connect, disconnect) coming up from the OCP protocol library into
-MD-SAL Notification objects and publishes them to the MD-SAL. The OCP service also
-signals the completion of certain operations via Notifications.
-
-==== SalDeviceMgmtListener
-An onDeviceConnected Notification will be published to the MD-SAL as soon as a
-radio head is connected to the controller, and when that radio head is disconnected
-the OCP southbound plugin will publish an onDeviceDisconnected Notification in response
-to the disconnect event propagated from the OCP protocol library.
-
-.SalDeviceMgmtListener.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.device.mgmt.rev150811;
-
-public interface SalDeviceMgmtListener
-    extends
-    NotificationListener
-{
-
-    void onDeviceConnected(DeviceConnected notification);
-
-    void onDeviceDisconnected(DeviceDisconnected notification);
-
-}
-----
-
-==== OcpServiceListener
-The OCP service will publish an onAlignmentCompleted Notification to the MD-SAL once
-it has completed the OCP alignment procedure with the radio head.
-
-.OcpServiceListener.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.params.xml.ns.yang.ocp.applications.ocp.service.rev150811;
-
-public interface OcpServiceListener
-    extends
-    NotificationListener
-{
-
-    void onAlignmentCompleted(AlignmentCompleted notification);
-
-}
-----
-
-==== SalObjectStateMgmtListener
-When receiving a state change indication message, the OCP southbound plugin will propagate
-the indication message to upper layer services/applications by publishing a corresponding
-onStateChangeInd Notification to the MD-SAL.
-
-.SalObjectStateMgmtListener.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.object.state.mgmt.rev150811;
-
-public interface SalObjectStateMgmtListener
-    extends
-    NotificationListener
-{
-
-    void onStateChangeInd(StateChangeInd notification);
-
-}
-----
-
-==== SalFaultMgmtListener
-When receiving a fault indication message, the OCP southbound plugin will propagate
-the indication message to upper layer services/applications by publishing a corresponding
-onFaultInd Notification to the MD-SAL.
-
-.SalFaultMgmtListener.java
-----
-package org.opendaylight.yang.gen.v1.urn.opendaylight.ocp.fault.mgmt.rev150811;
-
-public interface SalFaultMgmtListener
-    extends
-    NotificationListener
-{
-
-    void onFaultInd(FaultInd notification);
-
-}
-----
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/ocp-plugin-developer-guide.html
index 32ca692209869b3389b341833fd57d54c1f9d7bb..f959067920552e38a62c929ed6798593f2905746 100755 (executable)
@@ -1,119 +1,3 @@
 == OF-CONFIG Developer Guide ==
 
-=== Overview ===
-OF-CONFIG defines an OpenFlow switch as an abstraction called an
-OpenFlow Logical Switch. The OF-CONFIG protocol enables configuration of
-essential artifacts of an OpenFlow Logical Switch so that an OpenFlow
-controller can communicate and control the OpenFlow Logical switch via
-the OpenFlow protocol. OF-CONFIG introduces an operating context for one
-or more OpenFlow data paths called an OpenFlow Capable Switch for one or
-more switches. An OpenFlow Capable Switch is intended to be equivalent
-to an actual physical or virtual network element (e.g. an Ethernet
-switch) which is hosting one or more OpenFlow data paths by partitioning
-a set of OpenFlow related resources such as ports and queues among the
-hosted OpenFlow data paths. The OF-CONFIG protocol enables dynamic
-association of the OpenFlow related resources of an OpenFlow Capable
-Switch with specific OpenFlow Logical Switches which are being hosted on
-the OpenFlow Capable Switch. OF-CONFIG does not specify or report how
-the partitioning of resources on an OpenFlow Capable Switch is achieved.
-OF-CONFIG assumes that resources such as ports and queues are
-partitioned amongst multiple OpenFlow Logical Switches such that each
-OpenFlow Logical Switch can assume full control over the resources that
-are assigned to it.
-
-=== How to start ===
-
-- Start the OF-CONFIG feature as below:
-+
- feature:install odl-of-config-all
-
-=== Compatible with NETCONF ===
-
-- Configure the OpenFlow Capable Switch via OpenFlow Configuration Points
-+
-Method: POST
-+
-URI: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules
-+
-Headers: "Content-Type" and "Accept" header attributes set to application/xml
-+
-Payload:
-+
-[source, xml]
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
-  <name>testtool</name>
-  <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">10.74.151.67</address>
-  <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
-  <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">mininet</username>
-  <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">mininet</password>
-  <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
-  <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
-    <name>global-event-executor</name>
-  </event-executor>
-  <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
-    <name>binding-osgi-broker</name>
-  </binding-registry>
-  <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
-    <name>dom-broker</name>
-  </dom-registry>
-  <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
-    <name>global-netconf-dispatcher</name>
-  </client-dispatcher>
-  <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
-    <name>global-netconf-processing-executor</name>
-  </processing-executor>
-</module>
-----
-
-- NETCONF establishes the connections with OpenFlow Capable Switches using
-the parameters in the previous step. During the handshake, NETCONF also learns
-whether the OpenFlow switch supports NETCONF; this information is stored in the
-NETCONF topology as a property of the node.
-
-- OF-CONFIG detects switches joining and leaving by monitoring data changes in
-the NETCONF topology. For details, refer to the link:https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob_plain;f=southbound/southbound-impl/src/main/java/org/opendaylight/ofconfig/southbound/impl/OdlOfconfigApiServiceImpl.java;hb=refs/heads/stable/boron[implementation].
-
-=== The establishment of OF-CONFIG topology ===
-
-First, OF-CONFIG checks whether a newly connected switch supports
-OF-CONFIG by querying the NETCONF interface.
-
-. During the establishment of the NETCONF connection, NETCONF and the
-switches exchange their capabilities via the "hello" message.
-
-. OF-CONFIG obtains the connection information between NETCONF and the
-switches by monitoring data changes via the DataChangeListener interface.
-
-. After the NETCONF connection is established, the OF-CONFIG module
-checks whether the OF-CONFIG capability is in the switch's capabilities
-list obtained in step 1.
-
-. If it is, OF-CONFIG runs the following processing steps to create
-the topology database.
-
-
-For details, refer to the link:https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob_plain;f=southbound/southbound-impl/src/main/java/org/opendaylight/ofconfig/southbound/impl/listener/OfconfigListenerHelper.java;hb=refs/heads/stable/boron[implementation].
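The capability check in step 3 above amounts to scanning the capability list exchanged in the NETCONF `<hello>`. A minimal sketch; the capability URN shown is illustrative, not the exact string the implementation matches:

```python
def supports_ofconfig(capabilities: list) -> bool:
    """Check whether an OF-CONFIG capability appears in the capability
    list exchanged in the NETCONF <hello> (step 1 above).
    The substring matched here is an illustrative assumption."""
    return any("of-config" in cap for cap in capabilities)

# Hypothetical capability URNs for illustration only.
assert supports_ofconfig(
    ["urn:ietf:params:netconf:base:1.1",
     "urn:onf:of-config:yang:1.1.1"])
```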
-
-Second, the capable switch node and logical switch node are added to
-the OF-CONFIG topology if the switch supports OF-CONFIG.
-
-OF-CONFIG's topology comprises a Capable Switch topology (underlay) and a
-Logical Switch topology (overlay). Both of them augment
-
-/topo:network-topology/topo:topology/topo:node
-
-NETCONF adds nodes to the topology under the path
-"/topo:network-topology/topo:topology/topo:node" once it obtains the
-configuration information of the switches.
-
-For details, refer to the link:https://git.opendaylight.org/gerrit/gitweb?p=of-config.git;a=blob;f=southbound/southbound-api/src/main/yang/odl-ofconfig-topology.yang;h=dbdaec46ee59da3791386011f571d7434dd1e416;hb=refs/heads/stable/boron[implementation].
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/of-config-developer-guide.html
index ff4ed2463629c75973e6a33bff142ffbb3bacf45..86ab46928e42586968ba496b1ce270d3da54fd35 100644 (file)
@@ -1,810 +1,3 @@
 == OpenFlow Protocol Library Developer Guide
 
-=== Introduction
-OpenFlow Protocol Library is a component of OpenDaylight that mediates communication
-between the OpenDaylight controller and hardware devices supporting the OpenFlow protocol.
-Its primary goal is to provide the user (or upper layers of OpenDaylight) with a
-communication channel that can be used for managing network hardware devices.
-
-=== Features Overview
-There are three features inside openflowjava:
-
-* *odl-openflowjava-protocol* provides all openflowjava bundles that are needed
-for communication with openflow devices. It ensures message translation and
-handles network connections. It also provides an openflow protocol specific
-model.
-* *odl-openflowjava-all* currently contains only the odl-openflowjava-protocol feature.
-* *odl-openflowjava-stats* provides a mechanism for message counting and reporting.
-It can be used for performance analysis.
-
-=== odl-openflowjava-protocol Architecture
-Basic bundles contained in this feature are openflow-protocol-api,
-openflow-protocol-impl, openflow-protocol-spi and util.
-
-* *openflow-protocol-api* - contains openflow model, constants and keys used for
-(de)serializer registration.
-* *openflow-protocol-impl* - contains message factories that translate binary
-messages into DataObjects and vice versa. The bundle also contains network connection
-handlers - servers, netty pipeline handlers, etc.
-* *openflow-protocol-spi* - entry point for openflowjava configuration,
-startup and close; basically starts the implementation.
-* *util* - utility classes for binary-Java conversions and to ease experimenter
-key creation
-
-=== odl-openflowjava-stats Feature
-Runs over odl-openflowjava-protocol. It counts various message types / events
-and reports counts in specified time periods. Statistics collection can be
-configured in openflowjava-config/src/main/resources/45-openflowjava-stats.xml
-
-=== Key APIs and Interfaces
-Basic API / SPI classes are ConnectionAdapter (RPC/notifications) and
-SwitchConnectionProvider (configure, start, shutdown)
-
-//=== API Reference Documentation
-//Provide links to JavaDoc, REST API documentation, etc.  [TBD]
-
-=== Installation
-Pull the code and import project into your IDE.
-----
-git clone ssh://<username>@git.opendaylight.org:29418/openflowjava.git
-----
-=== Configuration
-The current implementation allows you to configure:
-
-* listening port (mandatory)
-* transfer protocol (mandatory)
-* switch idle timeout (mandatory)
-* TLS configuration (optional)
-* thread count (optional)
-
-You can find an exemplary Openflow Protocol Library instance configuration below:
-----
-<data xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
-  <modules xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
-    <!-- default OF-switch-connection-provider (port 6633) -->
-    <module>
-      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
-      <name>openflow-switch-connection-provider-default-impl</name>
-      <port>6633</port>
-<!--  Possible transport-protocol options: TCP, TLS, UDP -->
-      <transport-protocol>TCP</transport-protocol>
-      <switch-idle-timeout>15000</switch-idle-timeout>
-<!--       Exemplary TLS configuration:
-            - uncomment the <tls> tag
-            - copy exemplary-switch-privkey.pem, exemplary-switch-cert.pem and exemplary-cacert.pem
-              files into your virtual machine
-            - set VM encryption options to use copied keys
-            - start communication
-           Please visit OpenflowPlugin or Openflow Protocol Library#Documentation wiki pages
-           for detailed information regarding TLS -->
-<!--       <tls>
-             <keystore>/exemplary-ctlKeystore</keystore>
-             <keystore-type>JKS</keystore-type>
-             <keystore-path-type>CLASSPATH</keystore-path-type>
-             <keystore-password>opendaylight</keystore-password>
-             <truststore>/exemplary-ctlTrustStore</truststore>
-             <truststore-type>JKS</truststore-type>
-             <truststore-path-type>CLASSPATH</truststore-path-type>
-             <truststore-password>opendaylight</truststore-password>
-             <certificate-password>opendaylight</certificate-password>
-           </tls> -->
-<!--       Exemplary thread model configuration. Uncomment <threads> tag below to adjust default thread model -->
-<!--       <threads>
-             <boss-threads>2</boss-threads>
-             <worker-threads>8</worker-threads>
-           </threads> -->
-    </module>
-----
-----
-    <!-- default OF-switch-connection-provider (port 6653) -->
-    <module>
-      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider:impl">prefix:openflow-switch-connection-provider-impl</type>
-      <name>openflow-switch-connection-provider-legacy-impl</name>
-      <port>6653</port>
-<!--  Possible transport-protocol options: TCP, TLS, UDP -->
-      <transport-protocol>TCP</transport-protocol>
-      <switch-idle-timeout>15000</switch-idle-timeout>
-<!--       Exemplary TLS configuration:
-            - uncomment the <tls> tag
-            - copy exemplary-switch-privkey.pem, exemplary-switch-cert.pem and exemplary-cacert.pem
-              files into your virtual machine
-            - set VM encryption options to use copied keys
-            - start communication
-           Please visit OpenflowPlugin or Openflow Protocol Library#Documentation wiki pages
-           for detailed information regarding TLS -->
-<!--       <tls>
-             <keystore>/exemplary-ctlKeystore</keystore>
-             <keystore-type>JKS</keystore-type>
-             <keystore-path-type>CLASSPATH</keystore-path-type>
-             <keystore-password>opendaylight</keystore-password>
-             <truststore>/exemplary-ctlTrustStore</truststore>
-             <truststore-type>JKS</truststore-type>
-             <truststore-path-type>CLASSPATH</truststore-path-type>
-             <truststore-password>opendaylight</truststore-password>
-             <certificate-password>opendaylight</certificate-password>
-           </tls> -->
-<!--       Exemplary thread model configuration. Uncomment <threads> tag below to adjust default thread model -->
-<!--       <threads>
-             <boss-threads>2</boss-threads>
-             <worker-threads>8</worker-threads>
-           </threads> -->
-    </module>
-----
-----
-    <module>
-      <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:openflow:common:config:impl">prefix:openflow-provider-impl</type>
-      <name>openflow-provider-impl</name>
-      <openflow-switch-connection-provider>
-        <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
-        <name>openflow-switch-connection-provider-default</name>
-      </openflow-switch-connection-provider>
-      <openflow-switch-connection-provider>
-        <type xmlns:ofSwitch="urn:opendaylight:params:xml:ns:yang:openflow:switch:connection:provider">ofSwitch:openflow-switch-connection-provider</type>
-        <name>openflow-switch-connection-provider-legacy</name>
-      </openflow-switch-connection-provider>
-      <binding-aware-broker>
-        <type xmlns:binding="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">binding:binding-broker-osgi-registry</type>
-        <name>binding-osgi-broker</name>
-      </binding-aware-broker>
-    </module>
-  </modules>
-----
-Possible transport-protocol options:
-
-* TCP
-* TLS
-* UDP
-
-The switch idle timeout specifies the time needed to detect the idle state of a switch. When
-no message is received from a switch within this time, upper layers are notified
-of switch idleness.
-To be able to use this exemplary TLS configuration:
-
-* uncomment the +<tls>+ tag
-* copy _exemplary-switch-privkey.pem_, _exemplary-switch-cert.pem_ and
-_exemplary-cacert.pem_ files into your virtual machine
-* set VM encryption options to use copied keys (please visit TLS support wiki page
-for detailed information regarding TLS)
-* start communication
-
-Thread model configuration specifies how many threads are desired to perform
-Netty's I/O operations.
-
-* boss-threads specifies the number of threads that register incoming connections
-* worker-threads specifies the number of threads performing read / write
-(+ serialization / deserialization) operations.
-
-
-=== Architecture
-
-==== Public API +(openflow-protocol-api)+
-Set of interfaces and builders for immutable data transfer objects representing
-Openflow Protocol structures.
-
-Transfer objects and service APIs are inferred from several YANG models
-using a code generator to reduce the verbosity and repetitiveness of the code.
-
-The following YANG modules are defined:
-
-* openflow-types - defines common Openflow specific types
-* openflow-instruction - defines base Openflow instructions
-* openflow-action - defines base Openflow actions
-* openflow-augments - defines object augmentations
-* openflow-extensible-match - defines Openflow OXM match
-* openflow-protocol - defines Openflow Protocol messages
-* system-notifications - defines system notification objects
-* openflow-configuration - defines structures used in ConfigSubsystem
-
-These modules also reuse types from the following YANG modules:
-
-* ietf-inet-types - IP addresses, IP prefixes, IP-protocol related types
-* ietf-yang-types - Mac Address, etc.
-
-Predefined types are used to make API contracts safer, more readable
-and better documented (e.g. using MacAddress instead of a byte array).
-
-==== TCP Channel pipeline +(openflow-protocol-impl)+
-
-Creates channel processing pipeline based on configuration and support.
-
-.TCP Channel pipeline
-image::openflowjava/500px-TCPChannelPipeline.png[width=500]
-
-.Switch Connection Provider
-Implementation of the connection point for other projects. The library exposes its
-functionality through this class.
-The library can be configured, started and shut down here. There are also methods
-for custom (de)serializer registration.
-
-.Tcp Connection Initializer
-In order to initialize TCP connection to a device (switch), OF Plugin calls method
-+initiateConnection()+ in +SwitchConnectionProvider+. This method in turn initializes
-(Bootstrap) server side channel towards the device.
-
-.TCP Handler
-Represents single server that is handling incoming connections over TCP / TLS protocol.
-TCP Handler creates a single instance of TCP Channel Initializer that will initialize
-channels. After that it binds to configured InetAddress and port. When a new
-device connects, TCP Handler registers its channel and passes control to
-TCP Channel Initializer.
-
-.TCP Channel Initializer
-This class is used for channel initialization / rejection and passing arguments.
-After a new channel has been registered it calls Switch Connection Handler's
-(OF Plugin) accept method to decide if the library should keep the newly registered
-channel or if the channel should be closed. If the channel has been accepted,
-TCP Channel Initializer creates the whole pipeline with needed handlers and also
-with ConnectionAdapter instance. After the channel pipeline is ready, Switch
-Connection Handler is notified with +onConnectionReady+ notification.
-OpenFlow Plugin can now start sending messages downstream.
-
-.Idle Handler
-If no messages are received for more than the specified time, this handler
-triggers an idle state notification.
-The switch idle timeout is received as a parameter from ConnectionConfiguration
-settings. Idle State Handler is inactive while there are messages received within
-the switch idle timeout. If there are no messages received for more than timeout
-specified, handler creates SwitchIdleEvent message and sends it upstream.
-
-.TLS Handler
-It encrypts and decrypts messages over TLS protocol.
-Engaging the TLS Handler in the pipeline is a matter of configuration (+<tls>+ tag).
-TLS communication is either unsupported or required. TLS Handler is represented
-as a Netty's SslHandler.
-
-.OF Frame Decoder
-Parses input stream into correct length message frames for further processing.
-Framing is based on the OpenFlow header length. If a received message is shorter than
-the minimal length of an OpenFlow message (8 bytes), the OF Frame Decoder waits for more data.
-After receiving at least 8 bytes the decoder checks length in OpenFlow header.
-If there are still some bytes missing, the decoder waits for them. Else the OF
-Frame Decoder sends correct length message to next handler in the channel pipeline.
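The framing rules above can be sketched in plain Java. This is a minimal illustration using `java.nio.ByteBuffer` to stay self-contained; the real decoder operates on Netty's `ByteBuf` inside the channel pipeline.

```java
import java.nio.ByteBuffer;

// Sketch of the OF Frame Decoder logic: wait for the 8-byte OpenFlow header,
// read the length field (bytes 2-3 of the header), and only cut a frame once
// the full message is buffered.
public class FrameDecoderSketch {
    static final int OF_HEADER_LENGTH = 8;

    /** Returns a complete frame, or null to signal "wait for more data". */
    public static byte[] tryDecodeFrame(ByteBuffer in) {
        if (in.remaining() < OF_HEADER_LENGTH) {
            return null;                       // shorter than the header: wait
        }
        // length field sits at offset 2 in the OpenFlow header (unsigned short)
        int length = in.getShort(in.position() + 2) & 0xFFFF;
        if (in.remaining() < length) {
            return null;                       // body still incomplete: wait
        }
        byte[] frame = new byte[length];       // full message available: cut it
        in.get(frame);
        return frame;
    }
}
```

A correct-length frame returned here corresponds to the message the decoder would pass to the next handler in the pipeline.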
-
-.OF Version Detector
-Detects version of used OpenFlow Protocol and discards unsupported version messages.
-If the detected version is supported, OF Version Detector creates
-+VersionMessageWrapper+ object containing the detected version and byte message
-and sends this object upstream.
-
-.OF Decoder
-Chooses the correct deserialization factory (based on message type) and deserializes
-messages into generated DTOs (Data Transfer Objects).
-OF Decoder receives +VersionMessageWrapper+ object and passes it to
-+DeserializationFactory+ which will return translated DTO. +DeserializationFactory+
-creates +MessageCodeKey+ object with version and type of received message and
-Class of object that will be the received message deserialized into. This object
-is used as key when searching for appropriate decoder in +DecoderTable+.
-+DecoderTable+ is basically a map storing decoders. Found decoder translates
-received message into DTO. If there was no decoder found, null is returned. After
-returning translated DTO back to OF Decoder, the decoder checks if it is null or not.
-When the DTO is null, the decoder logs this state and throws an Exception. Else it
-passes the DTO further upstream. Finally, the OF Decoder releases ByteBuf containing
-received and decoded byte message.
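The lookup described above can be sketched as a map keyed by (version, type code, target class). This is a simplified stand-in for the real `DecoderTable` and `MessageCodeKey`; the names mirror the text, but the bodies are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of decoder lookup: DeserializationFactory builds a
// MessageCodeKey from the wire version, message type code and target class,
// and uses it to find a decoder in a map-backed DecoderTable.
public class DecoderTableSketch {
    /** Key fields follow the MessageCodeKey description in the text. */
    public record MessageCodeKey(short version, int value, Class<?> clazz) { }

    public interface Decoder { Object deserialize(byte[] body); }

    private final Map<MessageCodeKey, Decoder> table = new HashMap<>();

    public void register(MessageCodeKey key, Decoder decoder) {
        table.put(key, decoder);
    }

    public Object decode(short version, int typeCode, Class<?> target, byte[] body) {
        Decoder decoder = table.get(new MessageCodeKey(version, typeCode, target));
        if (decoder == null) {
            return null; // mirrors the text: no decoder found, null is returned
        }
        return decoder.deserialize(body);
    }
}
```

The record's generated `equals`/`hashCode` over (version, value, clazz) is what makes the map lookup work.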
-
-.OF Encoder
-Chooses the correct serialization factory (based on the type of DTO) and serializes DTOs
-into byte messages.
-OF Encoder does the opposite of the OF Decoder, using the same principle.
-OF Encoder receives DTO, passes it for translation and if the result is not null,
-it sends translated DTO downstream as a ByteBuf. Searching for appropriate encoder
-is done via MessageTypeKey, based on version and class of received DTO.
-
-.Delegating Inbound Handler
-Delegates received DTOs to Connection Adapter.
-It also reacts to channelInactive and channelUnregistered events. When one of
-these events is triggered, DelegatingInboundHandler creates a DisconnectEvent message
-and sends it upstream, notifying upper layers about switch disconnection.
-
-.Channel Outbound Queue
-Message flushing handler.
-Stores outgoing messages (DTOs) and flushes them. A flush is performed based on time
-elapsed and on the number of messages enqueued.
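The flushing behaviour can be sketched as follows. This is a toy model: the batch-size and interval parameters, and the return-value shape, are made up for illustration and do not reflect the real handler's API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy sketch of the Channel Outbound Queue: outgoing DTOs are enqueued and
// flushed either when a batch size is reached or when a time budget expires.
public class OutboundQueueSketch {
    private final Queue<Object> queue = new ArrayDeque<>();
    private final int batchSize;
    private final long flushIntervalNanos;
    private long lastFlushNanos = System.nanoTime();

    public OutboundQueueSketch(int batchSize, long flushIntervalNanos) {
        this.batchSize = batchSize;
        this.flushIntervalNanos = flushIntervalNanos;
    }

    /** Enqueue a message; returns the flushed batch, or an empty list if no flush fired. */
    public List<Object> enqueue(Object message) {
        queue.add(message);
        long now = System.nanoTime();
        if (queue.size() >= batchSize || now - lastFlushNanos >= flushIntervalNanos) {
            lastFlushNanos = now;
            List<Object> batch = new ArrayList<>(queue);
            queue.clear();
            return batch; // in the real pipeline these would be written to the channel
        }
        return List.of();
    }
}
```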
-
-.Connection Adapter
-Provides a facade on top of pipeline, which hides netty.io specifics. Provides a
-set of methods to register for incoming messages and to send messages to particular
-channel / session.
-ConnectionAdapterImpl basically implements three interfaces (unified in one
-superinterface ConnectionFacade):
-
-* ConnectionAdapter
-* MessageConsumer
-* OpenflowProtocolService
-
-
-*ConnectionAdapter* interface has methods for setting up listeners (message,
-system and connection ready listener), method to check if all listeners are set,
-checking if the channel is alive and disconnect method. Disconnect method clears
-responseCache and disables consuming of new messages.
-
-*MessageConsumer* interface holds only one method: +consume()+. +Consume()+ method
-is called from DelegatingInboundHandler. This method processes received DTO's based
-on their type. There are three types of received objects:
-
-* System notifications - invoke system notifications in OpenFlow Plugin
-(systemListener set). In case of +DisconnectEvent+ message, the Connection Adapter
-clears response cache and disables consume() method processing,
-* OpenFlow asynchronous messages (from switch) - invoke corresponding notifications
-in OpenFlow Plugin,
-* OpenFlow symmetric messages (replies to requests) - create +RpcResponseKey+
-with XID and DTO's class set. This +RpcResponseKey+ is then used to find
-corresponding future object in responseCache. Future object is set with success
-flag, received message and errors (if any occurred). In case no corresponding
-future was found in responseCache, Connection Adapter logs warning and discards
-the message. Connection Adapter also logs warning when an unknown DTO is received.
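The request/response pairing for symmetric messages can be sketched like this. The names (`RpcResponseKey`, `responseCache`) mirror the text, but the types and method signatures are simplified assumptions, not the library's actual API.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: each outgoing request registers a future keyed by (XID, reply class);
// when a symmetric reply arrives, the matching future is completed, and replies
// with no matching entry are discarded (with a warning in the real code).
public class ResponseCacheSketch {
    public record RpcResponseKey(long xid, String replyClass) { }

    private final Map<RpcResponseKey, CompletableFuture<Object>> responseCache =
            new ConcurrentHashMap<>();

    public CompletableFuture<Object> sendRequest(long xid, String expectedReplyClass) {
        CompletableFuture<Object> future = new CompletableFuture<>();
        responseCache.put(new RpcResponseKey(xid, expectedReplyClass), future);
        return future;
    }

    /** Called when a symmetric (reply) message is consumed. */
    public boolean onReply(long xid, String replyClass, Object reply) {
        CompletableFuture<Object> future =
                responseCache.remove(new RpcResponseKey(xid, replyClass));
        if (future == null) {
            return false; // no matching future: discard the message
        }
        return future.complete(reply);
    }
}
```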
-
-*OpenflowProtocolService* interface contains all rpc-methods for sending messages
-from upper layers (OpenFlow Plugin) downstream and responding. Request messages
-return Future filled with expected reply message, otherwise the expected Future
-is of type Void.
-
-*NOTE:*
-The MultipartRequest message is the only exception. It is basically a request - reply
-message type, but it would not be possible to process multiple subsequent MultipartReply
-messages if it were implemented as an rpc (with only one Future). This is why MultipartReply
-is implemented as a notification. The OpenFlow Plugin takes care of correct message
-processing.
-
-
-==== UDP Channel pipeline +(openflow-protocol-impl)+
-Creates UDP channel processing pipeline based on configuration and support.
-*Switch Connection Provider*, *Channel Outbound Queue* and *Connection Adapter*
-fulfill the same roles as in the TCP connection / channel pipeline (please
-see above).
-
-.UDP Channel pipeline
-image::openflowjava/500px-UdpChannelPipeline.png[width=500]
-
-.UDP Handler
-
-Represents single server that is handling incoming connections over UDP (DTLS)
-protocol.
-UDP Handler creates a single instance of UDP Channel Initializer that will
-initialize channels. After that it binds to configured InetAddress and port.
-When a new device connects, UDP Handler registers its channel and passes control
-to UDP Channel Initializer.
-
-.UDP Channel Initializer
-This class is used for channel initialization and passing arguments.
-After a new channel has been registered (for UDP there is always only one channel)
-UDP Channel Initializer creates whole pipeline with needed handlers.
-
-.DTLS Handler
-It has not been implemented yet. It will take care of secure DTLS connections.
-
-.OF Datagram Packet Handler
-Combines functionality of OF Frame Decoder and OF Version Detector. Extracts
-messages from received datagram packets and checks if message version is supported.
-If there is a message received from a yet unknown sender, OF Datagram Packet Handler
-creates Connection Adapter for this sender and stores it under sender's address in
-+UdpConnectionMap+. This map is also used for sending the messages and for correct
-Connection Adapter lookup - to delegate messages from one channel to multiple sessions.
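The sender-to-adapter demultiplexing can be sketched as a map from sender address to a per-device adapter, created on first contact. `ConnectionAdapter` is reduced to a placeholder here to keep the sketch self-contained.

```java
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the UdpConnectionMap idea: all UDP traffic arrives on one channel,
// so the handler keeps a per-sender Connection Adapter to delegate messages
// from that single channel to multiple sessions.
public class UdpConnectionMapSketch {
    /** Placeholder for the real Connection Adapter. */
    public static class ConnectionAdapter {
        final InetSocketAddress remote;
        ConnectionAdapter(InetSocketAddress remote) { this.remote = remote; }
    }

    private final Map<InetSocketAddress, ConnectionAdapter> udpConnectionMap =
            new ConcurrentHashMap<>();

    /** Returns the adapter for this sender, creating it if the sender is new. */
    public ConnectionAdapter adapterFor(InetSocketAddress sender) {
        return udpConnectionMap.computeIfAbsent(sender, ConnectionAdapter::new);
    }
}
```

Repeated datagrams from the same address resolve to the same adapter instance, which is what makes session delegation over a single channel work.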
-
-.OF Datagram Packet Decoder
-Chooses the correct deserialization factory (based on message type) and deserializes
-messages into generated DTOs.
-OF Decoder receives +VersionMessageUdpWrapper+ object and passes it to
-+DeserializationFactory+ which will return translated DTO. +DeserializationFactory+
-creates +MessageCodeKey+ object with version and type of received message and
-Class of object that will be the received message deserialized into. This object
-is used as key when searching for appropriate decoder in +DecoderTable+.
-+DecoderTable+ is basically a map storing decoders. Found decoder translates
-received message into DTO (DataTransferObject). If there was no decoder found,
-null is returned. After returning translated DTO back to OF Datagram Packet Decoder,
-the decoder checks if it is null or not. When the DTO is null, the decoder logs
-this state. Else it looks up appropriate Connection Adapter in +UdpConnectionMap+
-and passes the DTO to found Connection Adapter. Finally, the OF Decoder releases
-+ByteBuf+ containing received and decoded byte message.
-
-.OF Datagram Packet Encoder
-Chooses correct serialization factory (based on type of DTO) and serializes DTOs
-into byte messages.
-OF Datagram Packet Encoder does the opposite of the OF Datagram Packet Decoder,
-using the same principle. OF Encoder receives a DTO, passes it for translation and
-if the result is not null, it sends translated DTO downstream as a datagram packet.
-Searching for appropriate encoder is done via MessageTypeKey, based on version
-and class of received DTO.
-
-==== SPI (openflow-protocol-spi)
-Defines the interface for the library's connection point for other projects. The library
-exposes its functionality through this interface.
-
-==== Integration test (openflow-protocol-it)
-Tests communication with a simple client.
-
-==== Simple client (simple-client)
-Lightweight switch simulator - programmable with desired scenarios.
-
-==== Utility (util)
-Contains utility classes, mainly for working with ByteBuf.
-
-
-=== Library's lifecycle
-
-Steps (after the library's bundle is started):
-
-* [1] Library is configured by ConfigSubsystem (address, ports, encryption, ...)
-* [2] Plugin injects its SwitchConnectionHandler into the Library
-* [3] Plugin starts the Library
-* [4] Library creates configured protocol handler (e.g. TCP Handler)
-* [5] Protocol Handler creates Channel Initializer
-* [6] Channel Initializer asks plugin whether to accept incoming connection on
-each new switch connection
-* [7] Plugin responds:
-    - true - continue building pipeline
-    - false - reject connection / disconnect channel
-* [8] Library notifies Plugin with onSwitchConnected(ConnectionAdapter)
-notification, passing reference to ConnectionAdapter, that will handle the connection
-* [9] Plugin registers its system and message listeners
-* [10] FireConnectionReadyNotification() is triggered, announcing that pipeline
-handlers needed for communication have been created and Plugin can start
-communication
-* [11] Plugin shuts down the Library when desired
-
-.Library lifecycle
-image::openflowjava/Library_lifecycle.png[width=500]
-
-
-=== Statistics collection
-
-==== Introduction
-Statistics collection collects message statistics.
-Current collected statistics (+DS+ - downstream, +US+ - upstream):
-
-* +DS_ENTERED_OFJAVA+ - all messages that entered openflowjava (picked up from
-openflowplugin)
-* +DS_ENCODE_SUCCESS+ - successfully encoded messages
-* +DS_ENCODE_FAIL+ - messages that failed during encoding (serialization) process
-* +DS_FLOW_MODS_ENTERED+ - all flow-mod messages that entered openflowjava
-* +DS_FLOW_MODS_SENT+ - all flow-mod messages that were successfully sent
-* +US_RECEIVED_IN_OFJAVA+ - messages received from switch
-* +US_DECODE_SUCCESS+ - successfully decoded messages
-* +US_DECODE_FAIL+ - messages that failed during decoding (deserialization) process
-* +US_MESSAGE_PASS+ - messages handed over to openflowplugin
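The counter set above can be sketched as an enum-keyed map of counters. The enum constants copy the names from the list; the storage is illustrative only (the real service wires its counters into the pipeline handlers and exposes them over JMX, as the JConsole section below describes).

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Toy sketch of the statistics collection: one monotonically increasing
// counter per event type, incremented from the pipeline handlers.
public class StatsSketch {
    public enum Counter {
        DS_ENTERED_OFJAVA, DS_ENCODE_SUCCESS, DS_ENCODE_FAIL,
        DS_FLOW_MODS_ENTERED, DS_FLOW_MODS_SENT,
        US_RECEIVED_IN_OFJAVA, US_DECODE_SUCCESS, US_DECODE_FAIL, US_MESSAGE_PASS
    }

    private final Map<Counter, LongAdder> counters = new EnumMap<>(Counter.class);

    public StatsSketch() {
        for (Counter c : Counter.values()) {
            counters.put(c, new LongAdder());
        }
    }

    public void increment(Counter c) { counters.get(c).increment(); }

    public long value(Counter c) { return counters.get(c).sum(); }
}
```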
-
-==== Karaf
-In order to start statistics collection, run feature:install odl-openflowjava-stats.
-To see the logs, use log:set DEBUG org.opendaylight.openflowjava.statistics
-and then log:display (you can use log:list to check that the logging level has been set).
-To adjust collection settings, modify 45-openflowjava-stats.xml.
-
-==== JConsole
-JConsole provides two commands for the statistics collection:
-
-* printing current statistics
-* resetting statistic counters
-
-After attaching JConsole to the correct process, go to MBeans
-+tab -> org.opendaylight.controller -> RuntimeBean -> statistics-collection-service-impl
--> statistics-collection-service-impl -> Operations+ to use these commands.
-
-=== TLS Support
-NOTE: see OpenFlow Plugin Developer Guide
-
-=== Extensibility
-
-==== Introduction
-
-Entry point for the extensibility is +SwitchConnectionProvider+.
-+SwitchConnectionProvider+ contains methods for (de)serializer registration.
-To register a deserializer, use .register*Deserializer(key, impl).
-To register a serializer, use .register*Serializer(key, impl). Registration
-can occur either during configuration or at runtime.
-
-*NOTE*: If an experimenter message is received and no (de)serializer was
-registered, the library will throw an +IllegalArgumentException+.
-
-==== Basic Principle
-To use extensions, you need to augment the existing model and register new (de)serializers.
-
-Augmenting the model:
-
-. Create a new augmentation
-
-Registering (de)serializers:
-
-. Create your (de)serializer
-. Let it implement +OFDeserializer<>+ / +OFSerializer<>+
-- in case the structure you are (de)serializing needs to be used in Multipart
-TableFeatures messages, let it implement +HeaderDeserializer<>+ / +HeaderSerializer+
-. Implement the prescribed methods
-. Register your deserializer under the appropriate key (in our case
-+ExperimenterActionDeserializerKey+)
-. Register your serializer under the appropriate key (in our case
-+ExperimenterActionSerializerKey+)
-. Done, test your implementation
-
-*NOTE*: If you don't know what key should be used with your (de)serializer
-implementation, please visit <<registration_keys, Registration keys>> page.
-
-==== Example
-Let's say we have vendor / experimenter action represented by this structure:
-----
-struct foo_action {
-    uint16_t type;
-    uint16_t length;
-    uint32_t experimenter;
-    uint16_t first;
-    uint16_t second;
-    uint8_t  pad[4];
-}
-----
-First, we have to augment existing model. We create new module, which imports
-"+openflow-types.yang+" (don't forget to update your +pom.xml+ with api dependency).
-Now we create foo action identity:
-----
-import openflow-types {prefix oft;}
-identity foo {
-    description "Foo action description";
-    base oft:action-base;
-}
-----
-
-This will be used as type in our structure. Now we must augment existing action
-structure, so that we will have the desired fields first and second. In order to
-create new augmentation, our module has to import "+openflow-action.yang+". The
-augment should look like this:
-----
-import openflow-action {prefix ofaction;}
-augment "/ofaction:actions-container/ofaction:action" {
-    ext:augment-identifier "foo-action";
-        leaf first {
-            type uint16;
-        }
-        leaf second {
-            type uint16;
-        }
-    }
-----
-We are finished with model changes. Run mvn clean compile to generate sources.
-After generation is done, we need to implement our (de)serializer.
-
-Deserializer:
-----
-public class FooActionDeserializer implements OFDeserializer<Action> {
-   @Override
-   public Action deserialize(ByteBuf input) {
-       ActionBuilder builder = new ActionBuilder();
-       input.skipBytes(SIZE_OF_SHORT_IN_BYTES); // we know the type of action
-       builder.setType(Foo.class);
-       input.skipBytes(SIZE_OF_SHORT_IN_BYTES); // we don't need length
-       // now create experimenterIdAugmentation - so that openflowplugin can
-       // differentiate the correct vendor codec
-       ExperimenterIdActionBuilder expIdBuilder = new ExperimenterIdActionBuilder();
-       expIdBuilder.setExperimenter(new ExperimenterId(input.readUnsignedInt()));
-       builder.addAugmentation(ExperimenterIdAction.class, expIdBuilder.build());
-       FooActionBuilder fooBuilder = new FooActionBuilder();
-       fooBuilder.setFirst(input.readUnsignedShort());
-       fooBuilder.setSecond(input.readUnsignedShort());
-       builder.addAugmentation(FooAction.class, fooBuilder.build());
-       input.skipBytes(4); // padding
-       return builder.build();
-   }
-}
-----
-Serializer:
-----
-public class FooActionSerializer implements OFSerializer<Action> {
-   @Override
-   public void serialize(Action action, ByteBuf outBuffer) {
-       outBuffer.writeShort(FOO_CODE);
-       outBuffer.writeShort(16);
-       // we don't have to check for the ExperimenterIdAction augmentation - our
-       // serializer was called based on the vendor / experimenter ID,
-       // so we simply write it to the buffer
-       outBuffer.writeInt(VENDOR / EXPERIMENTER ID);
-       FooAction foo = action.getAugmentation(FooAction.class);
-       outBuffer.writeShort(foo.getFirst());
-       outBuffer.writeShort(foo.getSecond());
-       outBuffer.writeZero(4); // write padding
-   }
-}
-----
-Register both deserializer and serializer:
-+SwitchConnectionProvider.registerDeserializer(new
-ExperimenterActionDeserializerKey(0x04, VENDOR / EXPERIMENTER ID),
-new FooActionDeserializer());+
-+SwitchConnectionProvider.registerSerializer(new
-ExperimenterActionSerializerKey(0x04, VENDOR / EXPERIMENTER ID),
-new FooActionSerializer());+
-
-We are ready to test our implementation.
-
-*NOTE:* Vendor / Experimenter structures define only the vendor / experimenter ID as
-a common distinguisher (besides the action type). The Vendor / Experimenter ID is unique
-across all vendor messages - that is why a vendor is able to register only one class
-under ExperimenterAction(De)SerializerKey, and why the vendor has to choose
-between its subclasses / subtypes on its own.
-
-==== Detailed walkthrough: Deserialization extensibility
-
-.External interface & class description
-*OFGeneralDeserializer:*
-
-* +OFDeserializer<E extends DataObject>+
-** _deserialize(ByteBuf)_ - deserializes given ByteBuf
-* +HeaderDeserializer<E extends DataObject>+
-** _deserializeHeaders(ByteBuf)_ - deserializes only E headers (used in Multipart
-TableFeatures messages)
-
-*DeserializerRegistryInjector*
-
-* +injectDeserializerRegistry(DeserializerRegistry)+ - injects deserializer
-registry into deserializer. Useful when custom deserializer needs access to
-other deserializers.
-
-*NOTE:* DeserializerRegistryInjector is not an OFGeneralDeserializer descendant.
-It is a standalone interface.
-
-*MessageCodeKey and its descendants*
-These keys are used for deserializer lookup in the DeserializerRegistry.
-MessageCodeKey is used in general, while its descendants are used in more
-specific cases. For example, ActionDeserializerKey is used for Action deserializer
-lookup and (de)registration. Vendors are provided with special keys, which contain
-only the most necessary fields. These keys usually start with the "Experimenter"
-prefix (MatchEntryDeserializerKey is an exception).
-
-MessageCodeKey has these fields:
-
-* short version - Openflow wire version number
-* int value - value read from the byte message
-* Class<?> clazz - class of the object being created
-
-.Scenario walkthrough
-* [1] The scenario starts in a custom bundle which wants to extend library's
-functionality. The custom bundle creates deserializers which implement exposed
-+OFDeserializer+ / +HeaderDeserializer+ interfaces (wrapped under
-+OFGeneralDeserializer+ unifying super interface).
-* [2] Created deserializers are paired with corresponding ExperimenterKeys,
-which are used for deserializer lookup.
-If you don't know what key should be used with your (de)serializer implementation,
-please visit <<registration_keys, Registration keys>> page.
-* [3] Paired deserializers are passed to the OF Library
-via *SwitchConnectionProvider*._registerCustomDeserializer(key, impl)_.
-Library registers the deserializer.
-** While registering, Library checks if the deserializer is an instance of
-*DeserializerRegistryInjector* interface. If yes, the DeserializerRegistry
-(which stores all deserializer references) is injected into the deserializer.
-
-This is particularly useful when the deserializer needs access to other
-deserializers. For example, +InstructionsDeserializer+ needs access to
-+ActionsDeserializer+ in order to be able to process
-OFPIT_WRITE_ACTIONS/OFPIT_APPLY_ACTIONS instructions.
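The registration-and-injection flow in steps [1]-[3] can be sketched with toy types (simplified stand-ins for illustration only, not the real openflowjava classes or signatures):

```java
// Toy model of the injector pattern described above: a registry that, on
// registration, injects itself into any deserializer that opts in via the
// injector interface. Class names mirror the text but are stand-ins.
import java.util.HashMap;
import java.util.Map;

interface OFGeneralDeserializer {}

interface DeserializerRegistryInjector {
    void injectDeserializerRegistry(DeserializerRegistry registry);
}

class DeserializerRegistry {
    private final Map<Object, OFGeneralDeserializer> registry = new HashMap<>();

    void registerDeserializer(Object key, OFGeneralDeserializer deserializer) {
        // Step [3]: while registering, check for the injector interface and,
        // if present, hand over the registry reference.
        if (deserializer instanceof DeserializerRegistryInjector) {
            ((DeserializerRegistryInjector) deserializer).injectDeserializerRegistry(this);
        }
        registry.put(key, deserializer);
    }

    OFGeneralDeserializer getDeserializer(Object key) {
        OFGeneralDeserializer result = registry.get(key);
        if (result == null) {
            throw new NullPointerException("No deserializer registered for key: " + key);
        }
        return result;
    }
}

public class InjectorDemo {
    // A custom deserializer that needs access to other deserializers,
    // like the InstructionsDeserializer example in the text.
    static class CustomDeserializer implements OFGeneralDeserializer, DeserializerRegistryInjector {
        DeserializerRegistry injected;
        @Override
        public void injectDeserializerRegistry(DeserializerRegistry registry) {
            this.injected = registry;
        }
    }

    public static void main(String[] args) {
        DeserializerRegistry registry = new DeserializerRegistry();
        CustomDeserializer custom = new CustomDeserializer();
        registry.registerDeserializer("experimenter-key", custom);
        System.out.println("registry injected: " + (custom.injected == registry));
    }
}
```

The key design point is that injection happens at registration time, so a custom deserializer can resolve its collaborators lazily via the registry rather than wiring them at construction.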
-
-.Deserialization scenario walkthrough
-image::openflowjava/800px-Extensibility.png[width=500]
-
-==== Detailed walkthrough: Serialization extensibility
-.External interface & class description
-
-*OFGeneralSerializer:*
-
-* OFSerializer<E extends DataObject>
-** _serialize(E,ByteBuf)_ - serializes E into given ByteBuf
-* +HeaderSerializer<E extends DataObject>+
-** _serializeHeaders(E,ByteBuf)_ - serializes E headers (used in Multipart
-TableFeatures messages)
-
-*SerializerRegistryInjector*
-* +injectSerializerRegistry(SerializerRegistry)+ - injects the serializer registry
-into a serializer. Useful when a custom serializer needs access to other serializers.
-
-*NOTE:* SerializerRegistryInjector is not an OFGeneralSerializer descendant.
-
-*MessageTypeKey and its descendants*
-These keys are used for serializer lookup in SerializerRegistry.
-MessageTypeKey is used in general, while its descendants are used in more
-special cases. For example, ActionSerializerKey is used for Action serializer
-lookup and (de)registration. Vendors are provided with special keys, which contain
-only the most necessary fields. These keys usually start with an "Experimenter"
-prefix (MatchEntrySerializerKey is an exception).
-
-MessageTypeKey has these fields:
-
-* _short version_ - OpenFlow wire version number
-* _Class<E> msgType_ - DTO class
-
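To illustrate how a version-plus-type key drives serializer lookup, here is a toy version of such a key (an assumed simplification for illustration; the real MessageTypeKey and its descendants carry additional fields):

```java
// Toy lookup key combining OpenFlow wire version and DTO class. Because the
// registry is map-backed, equals/hashCode must cover both fields for lookup
// by a structurally equal key to succeed.
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyLookupDemo {
    static final class MessageTypeKey<E> {
        final short version;     // OpenFlow wire version (1 = v1.0, 4 = v1.3)
        final Class<E> msgType;  // DTO class being serialized

        MessageTypeKey(short version, Class<E> msgType) {
            this.version = version;
            this.msgType = msgType;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof MessageTypeKey)) return false;
            MessageTypeKey<?> k = (MessageTypeKey<?>) o;
            return version == k.version && msgType.equals(k.msgType);
        }

        @Override public int hashCode() {
            return Objects.hash(version, msgType);
        }
    }

    public static void main(String[] args) {
        Map<MessageTypeKey<?>, String> registry = new HashMap<>();
        registry.put(new MessageTypeKey<>((short) 4, String.class), "v1.3 String serializer");

        // A structurally equal key finds the registered serializer ...
        System.out.println(registry.get(new MessageTypeKey<>((short) 4, String.class)));
        // ... while the same DTO class under a different wire version misses.
        System.out.println(registry.get(new MessageTypeKey<>((short) 1, String.class)));
    }
}
```

This is why registering and looking up must use keys built from exactly the same version and type information.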
-.Scenario walkthrough
-
-* [1] Serialization extensibility principles are similar to the deserialization
-principles. The scenario starts in a custom bundle. The custom bundle creates
-serializers which implement exposed OFSerializer / HeaderSerializer interfaces
-(wrapped under OFGeneralSerializer unifying super interface).
-* [2] Created serializers are paired with their ExperimenterKeys, which are used
-for serializer lookup.
-If you don't know what key should be used with your serializer implementation,
-please visit <<registration_keys, Registration keys>> page.
-* [3] Paired serializers are passed to the OF Library via
-*SwitchConnectionProvider*._registerCustomSerializer(key, impl)_. Library
-registers the serializer.
-* While registering, Library checks if the serializer is an instance of
-*SerializerRegistryInjector* interface. If yes, the SerializerRegistry (which
-stores all serializer references) is injected into the serializer.
-
-This is particularly useful when the serializer needs access to other serializers.
-For example, InstructionsSerializer needs access to ActionsSerializer in order to
-be able to process OFPIT_WRITE_ACTIONS/OFPIT_APPLY_ACTIONS instructions.
-
-.Serialization scenario walkthrough
-image::openflowjava/800px-Extensibility2.png[width=500]
-
-==== Internal description
-
-*SwitchConnectionProvider*
-+SwitchConnectionProvider+ constructs and initializes both deserializer and
-serializer registries with default (de)serializers. It also injects the
-+DeserializerRegistry+ into the +DeserializationFactory+, the +SerializerRegistry+
-into the +SerializationFactory+.
-When call to register custom (de)serializer is made, +SwitchConnectionProvider+
-calls register method on appropriate registry.
-
-*DeserializerRegistry / SerializerRegistry*
-Both registries contain an init() method to initialize default (de)serializers.
-Registration checks that neither the key nor the (de)serializer implementation is
-+null+. If at least one of them is +null+, a +NullPointerException+ is thrown.
-Otherwise the (de)serializer implementation is checked to see whether it is a
-+(De)SerializerRegistryInjector+ instance. If it is an instance of this interface,
-the registry is injected into this (de)serializer implementation.
-
-+GetSerializer(key)+ or +GetDeserializer(key)+ performs a registry lookup. Because
-there are two separate interfaces that might be put into the registry, the
-registry uses their unifying super interface. The Get(De)Serializer(key) method
-casts the super interface to the desired type. There is also a null check for the
-(de)serializer received from the registry. If the (de)serializer wasn't found,
-a +NullPointerException+ with a key description is thrown.
-
-
-[[registration_keys]]
-==== Registration keys
-
-.Deserialization
-
-*Possible openflow extensions and their keys*
-
-There are three vendor-specific extensions in OpenFlow v1.0 and eight in
-OpenFlow v1.3. These extensions are registered under the registration keys
-shown in the table below:
-
-.*Deserialization*
-[options="header",cols="20%,10%,40%,30%"]
-|========================================================================================================================================================
-|Extension type            |OpenFlow|Registration key                                                                 |Utility class
-|Vendor message            |1.0     |ExperimenterIdDeserializerKey(1, experimenterId, ExperimenterMessage.class)      |ExperimenterDeserializerKeyFactory
-|Action                    |1.0     |ExperimenterActionDeserializerKey(1, experimenter ID)                            |.
-|Stats message             |1.0     |ExperimenterMultipartReplyMessageDeserializerKey(1, experimenter ID)             |ExperimenterDeserializerKeyFactory
-|Experimenter message      |1.3     |ExperimenterIdDeserializerKey(4, experimenterId, ExperimenterMessage.class)      |ExperimenterDeserializerKeyFactory
-|Match entry               |1.3     |MatchEntryDeserializerKey(4, (number) ${oxm_Class}, (number) ${oxm_Field});      |.
-|                          |        |key.setExperimenterId(experimenter ID);                                          |.
-|Action                    |1.3     |ExperimenterActionDeserializerKey(4, experimenter ID)                            |.
-|Instruction               |1.3     |ExperimenterInstructionDeserializerKey(4, experimenter ID)                       |.
-|Multipart                 |1.3     |ExperimenterIdDeserializerKey(4, experimenterId, MultipartReplyMessage.class)    |ExperimenterDeserializerKeyFactory
-|Multipart - Table features|1.3     |ExperimenterIdDeserializerKey(4, experimenterId, TableFeatureProperties.class)   |ExperimenterDeserializerKeyFactory
-|Error                     |1.3     |ExperimenterIdDeserializerKey(4, experimenterId, ErrorMessage.class)             |ExperimenterDeserializerKeyFactory
-|Queue property            |1.3     |ExperimenterIdDeserializerKey(4, experimenterId, QueueProperty.class)            |ExperimenterDeserializerKeyFactory
-|Meter band type           |1.3     |ExperimenterIdDeserializerKey(4, experimenterId, MeterBandExperimenterCase.class)|ExperimenterDeserializerKeyFactory
-|========================================================================================================================================================
-
-.Serialization
-
-*Possible openflow extensions and their keys*
-
-There are three vendor-specific extensions in OpenFlow v1.0 and seven in
-OpenFlow v1.3. These extensions are registered under the registration keys
-shown in the table below:
-
-
-.*Serialization*
-[options="header",cols="20%,10%,40%,30%"]
-|=============================================================================================================================================================
-|Extension type            |OpenFlow|Registration key                                                                        |Utility class
-|Vendor message            |1.0     |ExperimenterIdSerializerKey<>(1, experimenterId, ExperimenterInput.class)               |ExperimenterSerializerKeyFactory
-|Action                    |1.0     |ExperimenterActionSerializerKey(1, experimenterId, sub-type)                            |.
-|Stats message             |1.0     |ExperimenterMultipartRequestSerializerKey(1, experimenter ID)                           |ExperimenterSerializerKeyFactory
-|Experimenter message      |1.3     |ExperimenterIdSerializerKey<>(4, experimenterId, ExperimenterInput.class)               |ExperimenterSerializerKeyFactory
-|Match entry               |1.3     |MatchEntrySerializerKey<>(4, (class) ${oxm_Class}, (class) ${oxm_Field});               |.
-|                          |        |key.setExperimenterId(experimenter ID)                                                  |.
-|Action                    |1.3     |ExperimenterActionSerializerKey(4, experimenterId, sub-type)                            |.
-|Instruction               |1.3     |ExperimenterInstructionSerializerKey(4, experimenter ID)                                |.
-|Multipart                 |1.3     |ExperimenterIdSerializerKey<>(4, experimenterId, MultipartRequestExperimenterCase.class)|ExperimenterSerializerKeyFactory
-|Multipart - Table features|1.3     |ExperimenterIdSerializerKey<>(4, experimenterId, TableFeatureProperties.class)          |ExperimenterSerializerKeyFactory
-|Meter band type           |1.3     |ExperimenterIdSerializerKey<>(4, experimenterId, MeterBandExperimenterCase.class)       |ExperimenterSerializerKeyFactory 
-|=============================================================================================================================================================
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/openflow-protocol-library-developer-guide.html
index 7def03d1532094bca77f37398f455e0ddc0b1fbd..2d8ffe6ae66dd17e905bac9f8154fc0b3dd3692a 100644 (file)
@@ -1,14 +1,4 @@
 == OVSDB NetVirt
 
-include::ovsdb-overview.adoc[]
-
-include::ovsdb-library-developer.adoc[]
-
-include::ovsdb-southbound-developer.adoc[]
-
-include::ovsdb-openstack-developer.adoc[]
-
-include::ovsdb-sfc-developer.adoc[]
-
-include::ovsdb-hwvtep-developer.adoc[]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/ovsdb-netvirt.html
 
diff --git a/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-hwvtep-developer.adoc b/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-hwvtep-developer.adoc
deleted file mode 100644 (file)
index 24b19c0..0000000
+++ /dev/null
@@ -1,10 +0,0 @@
-=== OVSDB Hardware VTEP Developer Guide
-
-==== Overview
-
-TBD
-
-==== OVSDB Hardware VTEP Architecture
-
-TBD
-
diff --git a/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-library-developer.adoc b/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-library-developer.adoc
deleted file mode 100644 (file)
index a470d83..0000000
+++ /dev/null
@@ -1,193 +0,0 @@
-[[ovsdb-library-developer-guide]]
-=== OVSDB Library Developer Guide
-
-[[overview]]
-==== Overview
-
-The OVSDB library manages the Netty connections to network nodes and
-handles bidirectional JSON-RPC messages. It not only provides OVSDB
-protocol functionality to OpenDaylight OVSDB plugin but also can be used
-as standalone JAVA library for OVSDB protocol.
-
-The main responsibilities of OVSDB library include:
-
-* Manage connections to peers
-* Marshal and unmarshal JSON Strings to JSON objects.
-* Marshal and unmarshal JSON Strings from and to the Network Element.
-
-[[connection-service]]
-==== Connection Service
-
-The OVSDB library provides connection management through the OvsdbConnection
-interface. The OvsdbConnection interface provides OVSDB connection
-management APIs which include both active and passive connections. From
-the library perspective, active OVSDB connections are initiated from the
-controller to OVS nodes while passive OVSDB connections are initiated
-from OVS nodes to the controller. In the active connection scenario
-an application needs to provide the IP address and listening port of OVS nodes
-to the library management API. On the other hand, the library management API
-only requires the info of the controller listening port in the passive
-connection scenario.
-
-For a passive connection scenario, the library also provides a connection
-event listener through the OvsdbConnectionListener interface. The listener
-interface has connected() and disconnected() methods to notify an
-application when a new passive connection is established or an existing
-connection is terminated.
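The listener contract described above can be sketched as follows (toy stand-in interfaces for illustration; the real OvsdbConnectionListener and OvsdbClient types live in the OVSDB library):

```java
// Toy model of the passive-connection listener: the library invokes
// connected()/disconnected() as OVS nodes establish and drop connections,
// handing the application the client handle for that connection.
public class ListenerDemo {
    interface OvsdbClient {}  // stand-in for the library's per-connection handle

    interface OvsdbConnectionListener {
        void connected(OvsdbClient client);
        void disconnected(OvsdbClient client);
    }

    public static void main(String[] args) {
        OvsdbConnectionListener listener = new OvsdbConnectionListener() {
            @Override public void connected(OvsdbClient client) {
                System.out.println("new passive connection established");
            }
            @Override public void disconnected(OvsdbClient client) {
                System.out.println("existing connection terminated");
            }
        };

        // In the real library these calls come from the connection manager;
        // here we invoke them directly to show the flow.
        OvsdbClient client = new OvsdbClient() {};
        listener.connected(client);
        listener.disconnected(client);
    }
}
```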
-
-[[ssl-connection]]
-==== SSL Connection
-
-In addition to a regular TCP connection, the OvsdbConnection interface
-also provides a connection management API for an SSL connection. To start
-an OVSDB connection with SSL, an application will need to provide a Java
-SSLContext object to the management API. There are different ways
-to create a Java SSLContext, but in most cases a Java KeyStore with
-certificate and private key provided by the application is required.
-Detailed steps on how to create a Java SSLContext are out of the scope of
-this document and can be found in the Java documentation for the
-http://goo.gl/5svszT[Java class SSLContext].
-
-In the active connection scenario, the library uses the given SSLContext to
-create a Java SSLEngine and configures the SSL engine with the client mode for
-SSL handshaking. Normally clients are not required to authenticate
-themselves.
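A minimal sketch of the engine setup described above, using only the standard javax.net.ssl API (a production deployment would initialize the context from an application-provided KeyStore-backed key manager and trust manager rather than the defaults used here):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class SslEngineDemo {
    public static void main(String[] args) throws Exception {
        // Passing nulls falls back to the JVM's default key/trust material;
        // a real application supplies KeyManagers/TrustManagers built from
        // its own KeyStore.
        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, null, null);

        // Active-connection scenario: the library creates an engine in
        // client mode for the SSL handshake.
        SSLEngine engine = sslContext.createSSLEngine();
        engine.setUseClientMode(true);

        // The passive-connection scenario would instead use server mode
        // with client authentication required (two-way authentication):
        // engine.setUseClientMode(false);
        // engine.setNeedClientAuth(true);

        System.out.println("client mode: " + engine.getUseClientMode());
    }
}
```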
-
-In the passive connection scenario, the library uses the given SSLContext to
-create a Java SSLEngine which will operate in server mode for SSL
-handshaking. For security reasons, the SSLv3 protocol and some cipher suites
-are disabled. Currently the OVSDB server only supports the
-TLS_RSA_WITH_AES_128_CBC_SHA cipher suite and the following protocols:
-SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2.
-
-The SSL engine is also configured to operate in two-way authentication
-mode for passive connection scenarios, i.e., the OVSDB server (controller)
-will authenticate clients (OVS nodes) and clients (OVS nodes) are also
-required to authenticate the server (controller). In the two-way
-authentication mode, an application should keep a trust manager to store
-the certificates of trusted clients and initialize a Java SSLContext with this
-trust manager. Thus during the SSL handshaking process the OVSDB server
-(controller) can use the trust manager to verify clients and only accept
-connection requests from trusted clients. On the other hand, users should
-also configure OVS nodes to authenticate the controller. Open vSwitch
-already supports this functionality in the ovsdb-server command with option
-`--ca-cert=cacert.pem` and `--bootstrap-ca-cert=cacert.pem`. On the OVS
-node, a user can use the option `--ca-cert=cacert.pem` to specify a controller
-certificate directly and the node will only allow connections to the
-controller with the specified certificate. If the OVS node runs ovsdb-server
-with option `--bootstrap-ca-cert=cacert.pem`, it will authenticate the
-controller with the specified certificate cacert.pem. If the certificate
-file doesn’t exist, it will attempt to obtain a certificate from the
-peer (controller) on its first SSL connection and save it to the named
-PEM file `cacert.pem`. Here is an example of ovsdb-server with
-`--bootstrap-ca-cert=cacert.pem` option:
-
-`ovsdb-server --pidfile --detach --log-file --remote=punix:/var/run/openvswitch/db.sock --remote=db:hardware_vtep,Global,managers --private-key=/etc/openvswitch/ovsclient-privkey.pem --certificate=/etc/openvswitch/ovsclient-cert.pem --bootstrap-ca-cert=/etc/openvswitch/vswitchd.cacert`
-
-[[ovsdb-protocol-transactions]]
-==== OVSDB protocol transactions
-
-The OVSDB protocol defines the RPC transaction methods in RFC 7047.
-The following RPC methods are supported in OVSDB protocol:
-
-* List databases
-* Get schema
-* Transact
-* Cancel
-* Monitor
-* Update notification
-* Monitor cancellation
-* Lock operations
-* Locked notification
-* Stolen notification
-* Echo
-
-According to RFC 7047, an OVSDB server must implement all methods, and
-an OVSDB client is only required to implement the "Echo" method and is
-otherwise free to implement whichever methods suit its needs. However,
-the OVSDB library currently doesn’t support all RPC methods. For the "Echo"
-method, the library can handle "Echo" messages from a peer and send a JSON
-response message back, but the library doesn’t support actively sending an
-"Echo" JSON request to a peer. Other unsupported RPC methods are listed
-below:
-
-* Cancel
-* Lock operations
-* Locked notification
-* Stolen notification
-
-In the OVSDB library the RPC methods are defined in the Java interface OvsdbRPC.
-The library also provides a high-level interface OvsdbClient as the main
-interface to interact with peers through the OVSDB protocol. In the passive
-connection scenario, each connection will have a corresponding
-OvsdbClient object, and the application can obtain the OvsdbClient
-object through connection listener callback methods. In other words, if
-the application implements the OvsdbConnectionListener interface, it will
-get notifications of connection status changes with the corresponding
-OvsdbClient object of that connection.
-
-[[ovsdb-database-operations]]
-==== OVSDB database operations
-
-RFC 7047 also defines database operations, such as insert, delete, and
-update, to be performed as part of a "transact" RPC request. The OVSDB
-library defines the data operations in Operations.java and provides
-the TransactionBuilder class to help build "transact" RPC requests. To build
-a JSON-RPC transact request message, the application can obtain
-the TransactionBuilder object through a transactBuilder() method in
-the OvsdbClient interface.
-
-The TransactionBuilder class provides the following methods to help build
-transactions:
-
-* getOperations(): Get the list of operations in this transaction.
-* add(): Add data operation to this transaction.
-* build(): Return the list of operations in this transaction. This is the
-same as the getOperations() method.
-* execute(): Send the JSON RPC transaction to peer.
-* getDatabaseSchema(): Get the database schema of this transaction.
-
-If the application wants to build and send a "transact" RPC request to
-modify OVSDB tables on a peer, it can take the following steps:
-
-. Statically import parameter "op" in Operations.java
-+
-`import static org.opendaylight.ovsdb.lib.operations.Operations.op;`
-+
-. Obtain a transaction builder through the transactBuilder() method in
-OvsdbClient:
-+
-`TransactionBuilder transactionBuilder = ovsdbClient.transactionBuilder(dbSchema);`
-+
-. Add operations to transaction builder:
-+
-`transactionBuilder.add(op.insert(schema, row));`
-+
-. Send transaction to peer and get JSON RPC response:
-+
-`operationResults = transactionBuilder.execute().get();`
-+
-NOTE:
-Although the "select" operation is supported in the OVSDB library, the
-library implementation is a little different from RFC 7047. In RFC 7047,
-section 5.2.2 describes the "select" operation as follows:
-+
-“The "rows" member of the result is an array of objects. Each object
-corresponds to a matching row, with each column specified in "columns"
-as a member, the column's name as the member name, and its value as the
-member value. If "columns" is not specified, all the table's columns are
-included (including the internally generated "_uuid" and "_version"
-columns).”
-+
-The OVSDB library implementation always requires the column’s name in the
-"columns" field of a JSON message. If the "columns" field is not
-specified, none of the table’s columns are included. If the application
-wants to get the table entry with all columns, it needs to specify all
-the columns’ names in the "columns" field.
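Assembled by the steps above, the resulting "transact" request is a JSON-RPC message shaped roughly like the following (per RFC 7047 section 4.1.3; the database name, table, and row contents here are purely illustrative):

```json
{
  "method": "transact",
  "params": [
    "Open_vSwitch",
    {
      "op": "insert",
      "table": "Bridge",
      "row": { "name": "br-test" }
    }
  ],
  "id": 0
}
```

The first element of "params" is the database name, followed by one object per operation added via transactionBuilder.add(); the peer replies with one result object per operation.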
-
-[[reference-documentation]]
-==== Reference Documentation
-
-RFC 7047, The Open vSwitch Database Management Protocol:
-https://tools.ietf.org/html/rfc7047
-
diff --git a/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-openstack-developer.adoc b/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-openstack-developer.adoc
deleted file mode 100644 (file)
index 9248017..0000000
+++ /dev/null
@@ -1,29 +0,0 @@
-=== OVSDB Openstack Developer Guide
-
-==== Overview
-The Open vSwitch database (OVSDB) Southbound Plugin component for OpenDaylight implements
-the OVSDB  https://tools.ietf.org/html/rfc7047[RFC 7047] management protocol
-that allows the southbound configuration of switches that support OVSDB. The
-component comprises a library and a plugin. The OVSDB protocol
-uses JSON-RPC calls to manipulate a physical or virtual switch that supports OVSDB.
-Many vendors support OVSDB on various hardware platforms.
-The OpenDaylight controller uses the library project to interact with an OVS
-instance.
-
-http://www.openstack.org[OpenStack] is a popular open source Infrastructure
-as a Service (IaaS) project, covering compute, storage and network management.
-OpenStack can use OpenDaylight as its network management provider through the
-Neutron API, which acts as a northbound interface for OpenStack. The OVSDB
-NetVirt piece of the OVSDB project is a provider for the Neutron API in OpenDaylight.
-OpenDaylight manages the network flows for the OpenStack compute nodes via
-the OVSDB project, with the south-bound plugin. This section describes how to
-set that up, and how to tell when everything is working.
-
-==== OVSDB Openstack Architecture
-The OpenStack integration architecture uses the following technologies:
-
-* https://tools.ietf.org/html/rfc7047[RFC 7047] - The Open vSwitch Database Management Protocol
-* http://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.3.4.pdf[OpenFlow v1.3]
-* https://wiki.openstack.org/wiki/Neutron/ML2[OpenStack Neutron ML2 Plugin]
-
-image:openstack_integration.png[Openstack Integration]
diff --git a/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-overview.adoc b/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-overview.adoc
deleted file mode 100644 (file)
index 727b6c3..0000000
+++ /dev/null
@@ -1,584 +0,0 @@
-=== OVSDB Integration\r
-The Open vSwitch database (OVSDB) Southbound Plugin component for OpenDaylight implements\r
-the OVSDB  https://tools.ietf.org/html/rfc7047[RFC 7047] management protocol\r
-that allows the southbound configuration of switches that support OVSDB. The\r
-component comprises a library and a plugin. The OVSDB protocol\r
-uses JSON-RPC calls to manipulate a physical or virtual switch that supports OVSDB.\r
-Many vendors support OVSDB on various hardware platforms.\r
-The OpenDaylight controller uses the library project to interact with an OVS\r
-instance.\r
-\r
-NOTE:\r
-Read the OVSDB User Guide before you begin development.\r
-\r
-==== OpenDaylight OVSDB integration\r
-The OpenStack integration architecture uses the following technologies:\r
-\r
-* https://tools.ietf.org/html/rfc7047[RFC 7047] - The Open vSwitch Database Management Protocol\r
-* http://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-switch-v1.3.4.pdf[OpenFlow v1.3]\r
-* https://wiki.openstack.org/wiki/Neutron/ML2[OpenStack Neutron ML2 Plugin]\r
-\r
-===== OpenDaylight Mechanism Driver for Openstack Neutron ML2\r
-This code is a part of OpenStack and is available at: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py\r
-\r
-The ODL neutron driver implementation can be found at: https://github.com/openstack/networking-odl\r
-\r
-To make changes to this code, please read about https://wiki.openstack.org/wiki/NeutronDevelopment[Neutron Development].\r
-\r
-Before submitting the code, run the following tests:\r
-\r
-----\r
-tox -e py27\r
-tox -e pep8\r
-----\r
-\r
-===== Importing the code in to Eclipse or IntelliJ\r
-To import code, look at either of the following pages:\r
-\r
-* https://wiki.opendaylight.org/view/Eclipse_Setup[Getting started with Eclipse]\r
-* https://wiki.opendaylight.org/view/OpenDaylight_Controller:Developing_With_Intellij[Developing with Intellij]\r
-\r
-.Avoid conflicting project names\r
-image::OVSDB_Eclipse.png[]\r
-\r
-* To ensure that a project in Eclipse does not have a conflicting name in the workspace, select Advanced > Name Template > [groupId].[artifactId] when importing the project.\r
-\r
-===== Browsing the code\r
-The code is mirrored to https://github.com/opendaylight/ovsdb[GitHub] to make reading code online easier. \r
-\r
-===== Source code organization\r
-\r
-The OVSDB project generates the following Karaf modules:\r
-\r
-* ovsdb.karaf  -- all openstack netvirt related artifacts\r
-* ovsdb.library-karaf -- the OVSDB library reference implementation\r
-* ovsdb.openstack.net-virt-sfc-karaf  -- openflow service function chaining\r
-* ovsdb.hwvtepsouthbound-karaf -- the hw_vtep schema southbound plugin\r
-* ovsdb.southbound-karaf - the Open_vSwitch schema plugin\r
-\r
-Following are brief descriptions of the directories you will find at the root ovsdb/ directory:\r
-\r
-* _commons_ contains the parent POM file for Maven project which is used to get consistency of settings across the project.\r
-\r
-* _features_ contains all the Karaf related feature files.\r
-\r
-* _hwvtepsouthbound_ contains the hw_vtep southbound plugin.\r
-\r
-* _karaf_ contains the ovsdb library and southbound and OpenStack bundles for the OpenStack integration.\r
-\r
-* _library_ contains a schema-independent library that is a reference implementation for RFC 7047.\r
-\r
-* _openstack_ contains the northbound handlers for Neutron used by OVSDB, as well as their providers. The NetVirt SFC implementation is also located here.\r
-\r
-* _ovsdb-ui_ contains the DLUX implementation for displaying network virtualization.\r
-\r
-* _resources_ contains useful scripts, how-tos, demos and other resources.\r
-\r
-* _schemas_ contains the OVSDB schemas that are implemented in OpenDaylight.\r
-\r
-* _southbound_ contains the plugin for converting from the OVSDB protocol to MD-SAL and vice-versa.\r
-\r
-* _utils_ contains a collection of utilities for using the OpenFlow plugin, southbound, Neutron and other helper methods.\r
-\r
-==== Building and running OVSDB\r
-*Prerequisites* +\r
-\r
-* JDK 1.7+\r
-* Maven 3+\r
-\r
-[[ovsdbBuildSteps]]\r
-===== Building a Karaf feature and deploying it in an OpenDaylight Karaf distribution +\r
-. From the root ovsdb/ directory, run *mvn clean install*.\r
-. Unzip the karaf-<VERSION_NUMBER>-SNAPSHOT.zip file created from step 1 in the directory ovsdb/karaf/target/:\r
-----\r
-unzip karaf-<VERSION_NUMBER>-SNAPSHOT.zip\r
-----\r
-===== Downloading OVSDB's Karaf distribution +\r
-Instead of building, you can download the latest OVSDB distribution from the Nexus server. The link for that is:\r
-----\r
-https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/ovsdb/karaf/1.3.0-SNAPSHOT/\r
-----\r
-\r
-===== Running Karaf feature from OVSDB's Karaf distribution +\r
-\r
-[[ovsdbStartingOdl]]\r
-. Start ODL, from the unzipped directory\r
-----\r
-bin/karaf\r
-----\r
-. Once karaf has started, and you see the Opendaylight ascii art in the console, the last step is to start the OVSDB plugin framework with the following command in the karaf console: \r
-----\r
-feature:install odl-ovsdb-openstack\r
-----\r
-\r
-====== Sample output from the Karaf console\r
-----\r
-opendaylight-user@root>feature:list | grep -i ovsdb \r
-opendaylight-user@root>feature:list -i | grep ovsdb\r
-odl-ovsdb-southbound-api          | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: api\r
-odl-ovsdb-southbound-impl         | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl\r
-odl-ovsdb-southbound-impl-rest    | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl :: REST\r
-odl-ovsdb-southbound-impl-ui      | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl :: UI\r
-odl-ovsdb-library                 | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-library-1.2.1-SNAPSHOT        | OpenDaylight :: library\r
-odl-ovsdb-openstack               | 1.2.1-SNAPSHOT   | x         | ovsdb-1.2.1-SNAPSHOT                    | OpenDaylight :: OVSDB :: OpenStack Network Virtual\r
-----\r
-\r
-===== Testing patches\r
-It is recommended that you test your patches locally before submission.\r
-\r
-===== Neutron integration\r
-To test patches to the Neutron integration, you need a http://devstack.org/guides/multinode-lab.html[Multi-Node Devstack Setup]. The ``resources`` folder contains sample ``local.conf`` files.\r
-\r
-===== Open vSwitch\r
-To test patches to the library, you will need a working http://openvswitch.org/[Open vSwitch]. Packages are available for most Linux distributions. If you would like to run multiple versions of Open vSwitch for testing you can use https://github.com/dave-tucker/docker-ovs[docker-ovs] to run Open vSwitch in https://www.docker.com/[Docker] containers. \r
-\r
-===== Mininet\r
-http://mininet.org/[Mininet] is another useful resource for testing patches. Mininet creates multiple Open vSwitches connected in a configurable topology. \r
-\r
-===== Vagrant\r
-The Vagrant file in the root of the OVSDB source code provides an easy way to create VMs for tests.\r
-\r
-* To install Vagrant on your machine, follow the steps at: https://docs.vagrantup.com/v2/installation/[Installing Vagrant].\r
-\r
-*Testing with Devstack*\r
-\r
-. Start the controller.\r
-----\r
-vagrant up devstack-control\r
-vagrant ssh devstack-control\r
-cd devstack\r
-./stack.sh\r
-----\r
-[start=2]\r
-. Run the following:\r
-----\r
-vagrant up devstack-compute-1\r
-vagrant ssh devstack-compute-1\r
-cd devstack\r
-./stack.sh\r
-----\r
-[start=3]\r
-. To start testing, create a new VM.\r
-----\r
-nova boot --flavor m1.tiny --image $(nova image-list | grep 'cirros-0.3.1-x86_64-uec\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep private | awk '{print $2}') test\r
-----\r
-To create three, use the following:\r
-----\r
-nova boot --flavor m1.tiny --image $(nova image-list | grep 'cirros-0.3.1-x86_64-uec\s' | awk '{print $2}') --nic net-id=$(neutron net-list | grep private | awk '{print $2}') --num-instances 3 test\r
-----\r
-[start=4]\r
-. To get a Mininet installation for testing:\r
-----\r
-vagrant up mininet\r
-vagrant ssh mininet\r
-----\r
-[start=5]\r
-. Use the following to clean up when finished:\r
-----\r
-vagrant destroy\r
-----\r
-\r
-==== OVSDB integration design\r
-===== Resources\r
-See the following: +\r
-\r
-* http://networkheresy.com/2012/09/15/remembering-the-management-plane/[Network Heresy]\r
-\r
-See the OVSDB YouTube Channel for getting started videos and other tutorials: +\r
-\r
-* http://www.youtube.com/channel/UCMYntfZ255XGgYFrxCNcAzA[ODL OVSDB Youtube Channel]\r
-* https://wiki.opendaylight.org/view/OVSDB_Integration:Mininet_OVSDB_Tutorial[Mininet OVSDB Tutorial]\r
-* https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization[OVSDB Getting Started]\r
-\r
-==== OpenDaylight OVSDB southbound plugin architecture and design\r
-Open vSwitch (OVS) is generally accepted as the de facto standard for virtual switching in open hypervisor-based solutions. Many other virtual switch implementations, proprietary or otherwise, use OVS in some form.\r
-For information on OVS, see http://openvswitch.org/[Open vSwitch].\r
-\r
-In Software Defined Networking (SDN), controllers and applications interact using two channels: OpenFlow and OVSDB. OpenFlow addresses the forwarding-side of the OVS functionality. OVSDB, on the other hand, addresses the management-plane. \r
-A simple and concise overview of the Open vSwitch Database (OVSDB) is available at: http://networkstatic.net/getting-started-ovsdb/\r
-\r
-===== Overview of OpenDaylight Controller architecture\r
-The OpenDaylight controller platform is designed as a highly modular and plugin based middleware that serves various network applications in a variety of use-cases. The modularity is achieved through the Java OSGi framework. The controller consists of many Java OSGi bundles that work together to provide the required\r
- controller functionalities. \r
\r
-The bundles can be placed in the following broad categories: +\r
-\r
-* Network Service Functional Modules (Examples: Topology Manager, Inventory Manager, Forwarding Rules Manager, and others)\r
-* NorthBound API Modules (Examples: Topology APIs, Bridge Domain APIs, Neutron APIs, Connection Manager APIs, and others)\r
-* Service Abstraction Layer (SAL) (Inventory Services, DataPath Services, Topology Services, Network Config, and others)\r
-* SouthBound Plugins (OpenFlow Plugin, OVSDB Plugin, OpenDove Plugin, and others)\r
-* Application Modules (Simple Forwarding, Load Balancer)\r
-\r
-Each layer of the Controller architecture performs specified tasks, and hence aids in modularity. \r
-While the Northbound API layer addresses all the REST-Based application needs, the SAL layer takes care of abstracting the SouthBound plugin protocol specifics from the Network Service functions. \r
\r
-Each of the SouthBound plugins serves a different purpose, with some overlap.\r
-For example, the OpenFlow plugin might serve the data-plane needs of an OVS element, while the OVSDB plugin can serve the management-plane needs of the same OVS element.\r
-While the OpenFlow plugin speaks the OpenFlow protocol with the OVS element, the OVSDB plugin uses the OVSDB schema over JSON-RPC transport.\r
-\r
-==== OVSDB southbound plugin\r
-The http://tools.ietf.org/html/draft-pfaff-ovsdb-proto-02[Open vSwitch Database Management Protocol-draft-02] and http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf[Open vSwitch Manual] provide theoretical information about OVSDB.\r
-The OVSDB protocol draft is generic enough to lay the groundwork on Wire Protocol and Database Operations, and the OVS Manual currently covers 13 tables leaving space for future OVS expansion, and vendor expansions on proprietary implementations.\r
-The OVSDB protocol is a database-records transport protocol using JSON-RPC 1.0. For information on the protocol structure, see http://networkstatic.net/getting-started-ovsdb/[Getting Started with OVSDB].\r
-The OpenDaylight OVSDB southbound plugin consists of one or more OSGi bundles addressing the following services or functionalities: +\r
-\r
-* Connection Service - Based on Netty \r
-* Network Configuration Service \r
-* Bidirectional JSON-RPC Library \r
-* OVSDB Schema definitions and Object mappers \r
-* Overlay Tunnel management \r
-* OVSDB to OpenFlow plugin mapping service \r
-* Inventory Service \r
-\r
-==== Connection service\r
-One of the primary services that most southbound plugins provide in OpenDaylight is a Connection Service. The service provides protocol-specific connectivity to network elements, and supports the connectivity management services specified by the OpenDaylight Connection Manager.\r
-The connectivity services include: +\r
-\r
-* Connection to a specified element given IP-address, L4-port, and other connectivity options (such as authentication,...) \r
-* Disconnection from an element \r
-* Handling Cluster Mode change notifications to support the OpenDaylight Clustering/High-Availability feature \r
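The connectivity services listed above can be illustrated with a minimal Python sketch. This is not the plugin's actual Java/Netty code; the class, method names, and option handling are hypothetical, chosen only to mirror the three services described.

```python
class ConnectionService:
    """Toy model of the connection service: connect/disconnect plus a
    cluster-role hook. Names and structure are illustrative only."""

    def __init__(self):
        self.active = True
        self._connections = {}          # (ip, l4_port) -> connectivity options

    def connect(self, ip, port, **options):
        # options stand in for authentication and other connectivity settings
        key = (ip, port)
        if key in self._connections:
            raise ValueError("already connected to %s:%d" % key)
        self._connections[key] = options
        return key

    def disconnect(self, ip, port):
        self._connections.pop((ip, port), None)

    def on_cluster_role_change(self, is_master):
        # In a clustered deployment, only the owning instance keeps the
        # southbound connections active; a real service would hand over here.
        self.active = is_master

svc = ConnectionService()
node = svc.connect("172.16.1.10", 6640, auth=None)
svc.disconnect("172.16.1.10", 6640)
```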
-\r
-==== Network Configuration Service\r
-The goal of the OpenDaylight Network Configuration services is to provide complete management plane solutions needed to successfully install, configure, and deploy the various SDN based network services. These are generic services which can be implemented in part or full by any south-bound protocol plugin.\r
-The south-bound plugins can be either of the following: +\r
-\r
-* The new network virtualization protocol plugins such as OVSDB JSON-RPC\r
-* The traditional management protocols such as SNMP, or anything in between.\r
-\r
-The above definition, and more information on Network Configuration Services, is available at: https://wiki.opendaylight.org/view/OpenDaylight_Controller:NetworkConfigurationServices\r
-\r
-===== Bidirectional JSON-RPC library\r
-The OVSDB plugin implements a bidirectional JSON-RPC library, designed as a module that manages the Netty connection towards the element.\r
-\r
-The main responsibilities of this Library are: +\r
-\r
-* Marshal and demarshal JSON strings to and from JSON objects\r
-* Send and receive those JSON strings to and from the network element\r
-\r
-===== OVSDB Schema definitions and Object mappers\r
-The OVSDB Schema definitions and Object Mapping layer sits above the JSON-RPC library. It maps generic JSON objects to OVSDB schema POJOs (Plain Old Java Objects) and vice versa. This layer provides the Java object definitions for the corresponding OVSDB schema tables (13 of them) and also provides friendlier API abstractions on top of this object data. This helps hide the JSON semantics from functional modules such as the Configuration Service and Tunnel management.\r
-\r
-On the demarshaling side, the mapping logic differentiates Request and Response messages as follows: +\r
-\r
-* Request messages are mapped by their "method"\r
-* Response messages are mapped by their IDs, which were originally populated by the Request message\r
-\r
-The JSON semantics of these OVSDB schemas are quite complex.\r
-The following figures summarize two of the end-to-end scenarios: +\r
-\r
-.End-to-end handling of a Create Bridge request\r
-image::ConfigurationService-example1.png[width=500]\r
-\r
-.End-to-end handling of a monitor response\r
-image::MonitorResponse.png[width=500]\r
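The request/response mapping described above can be sketched in a few lines of Python. This is a minimal model, not the plugin's actual Java implementation: requests are dispatched by their `"method"` field, while responses carry no method and are correlated back through the `"id"` the request originally populated.

```python
import json

class JsonRpcEndpoint:
    """Toy model of the bidirectional JSON-RPC mapping logic."""

    def __init__(self):
        self._pending = {}    # id -> method of the outstanding request
        self._next_id = 0

    def send_request(self, method, params):
        self._next_id += 1
        self._pending[self._next_id] = method
        return json.dumps({"method": method, "params": params,
                           "id": self._next_id})

    def handle_incoming(self, raw):
        msg = json.loads(raw)
        if msg.get("method") is not None:
            return ("request", msg["method"])     # requests: map by "method"
        method = self._pending.pop(msg["id"])     # responses: map by "id"
        return ("response", method)

ep = JsonRpcEndpoint()
wire = ep.send_request("transact", ["Open_vSwitch"])
reply = json.dumps({"result": [], "error": None,
                    "id": json.loads(wire)["id"]})
kind, method = ep.handle_incoming(reply)
```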
-\r
-===== Overlay tunnel management\r
-\r
-Network virtualization using OVS is achieved through overlay tunnels. The actual tunnel type may be GRE, VXLAN, or STT; the differences in encapsulation and configuration determine the tunnel type. Establishing a tunnel using the configuration service requires just sending OVSDB messages towards the ovsdb-server. However, state management at the data plane (using OpenFlow) raises challenging scaling issues. This module can also assist in various optimizations in the presence of gateways, and can help provide service guarantees for the VMs using these overlays with the help of underlay orchestration.\r
-\r
-===== OVSDB to OpenFlow plugin mapping service\r
-The connect() of the ConnectionService results in a Node that represents an ovsdb-server. The CreateBridgeDomain() configuration on that Node results in the creation of an OVS bridge. This OVS bridge is an OpenFlow agent for the OpenDaylight OpenFlow plugin, with its own Node represented as (for example) OF|xxxx.yyyy.zzzz.\r
-Without any help from the OVSDB plugin, the Node Mapping Service of the Controller platform would not be able to map the following: +\r
-----\r
-{OVSDB_NODE + BRIDGE_IDENTIFIER} <---> {OF_NODE}\r
-----\r
-Without such mapping, it would be extremely difficult for the applications to manage and maintain such nodes. This Mapping Service provided by the OVSDB plugin would essentially help in providing more value added services to the orchestration layers that sit atop the Northbound APIs (such as OpenStack). \r
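The mapping service can be pictured as a simple bidirectional lookup. The sketch below is illustrative Python (the identifiers are made up, and the real service is part of the controller's Java codebase), showing only the core idea: one table from {OVSDB node, bridge} to the OpenFlow node and one back.

```python
class NodeMappingService:
    """Toy bidirectional map between an (ovsdb-node, bridge) pair and
    the corresponding OpenFlow node. Node IDs here are illustrative."""

    def __init__(self):
        self._to_of = {}       # (ovsdb_node, bridge_id) -> of_node
        self._to_ovsdb = {}    # of_node -> (ovsdb_node, bridge_id)

    def add_mapping(self, ovsdb_node, bridge_id, of_node):
        self._to_of[(ovsdb_node, bridge_id)] = of_node
        self._to_ovsdb[of_node] = (ovsdb_node, bridge_id)

    def openflow_node(self, ovsdb_node, bridge_id):
        return self._to_of[(ovsdb_node, bridge_id)]

    def ovsdb_bridge(self, of_node):
        return self._to_ovsdb[of_node]

m = NodeMappingService()
m.add_mapping("OVS|192.168.1.5:6640", "br-int", "OF|0000.1111.2222")
```

With such a table in place, an application holding only the OpenFlow node ID can find the ovsdb-server and bridge that back it, and vice versa.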
-\r
-==== OpenDaylight OVSDB Developer Getting Started Video Series\r
-This video series was started to help developers bootstrap into OVSDB development.\r
-\r
-* http://www.youtube.com/watch?v=ieB645oCIPs[OpenDaylight OVSDB Developer Getting Started]\r
-* http://www.youtube.com/watch?v=xgevyaQ12cg[OpenDaylight OVSDB Developer Getting Started - Northbound API Usage]\r
-* http://www.youtube.com/watch?v=xgevyaQ12cg[OpenDaylight OVSDB Developer Getting Started - Java APIs]\r
-* http://www.youtube.com/watch?v=NayuY6J-AMA[OpenDaylight OVSDB Developer Getting Started - OpenStack Integration OpenFlow v1.0]\r
-\r
-===== Other developer tutorials\r
-\r
-* https://docs.google.com/presentation/d/1KIuNDuUJGGEV37Zk9yzx9OSnWExt4iD2Z7afycFLf_I/edit?usp=sharing[OVSDB NetVirt Tutorial]\r
-* https://www.youtube.com/watch?v=2axNKHvt5MY&list=PL8F5jrwEpGAiJG252ShQudYeodGSsks2l&index=43[Youtube of OVSDB NetVirt tutorial]\r
-* https://wiki.opendaylight.org/view/OVSDB:OVSDB_OpenStack_Guide[OVSDB OpenFlow v1.3 Neutron ML2 Integration]\r
-* http://networkstatic.net/getting-started-ovsdb/[Open vSwitch Database Table Explanations and Simple Jackson Tutorial]\r
-\r
-==== OVSDB integration: New features\r
-===== Schema independent library\r
-The OVS connection is a node which can have multiple databases, each represented by a schema, so a single connection can carry multiple schemas.\r
-Currently, two schemas are available in\r
-OVSDB, but there is no restriction on the number of schemas. Owing to the Northbound v3 API, no code changes in ODL are needed to support additional schemas.\r
-\r
-Schemas: +\r
-\r
-*  openvswitch : Schema wrapper that represents http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf\r
-*  hardwarevtep: Schema wrapper that represents http://openvswitch.org/docs/vtep.5.pdf\r
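The "one connection, many schemas" relationship above can be sketched as follows. This is illustrative Python only; the class and registration API are invented for the sketch and do not correspond to the plugin's real types.

```python
class OvsdbConnection:
    """Toy schema-independent connection: one node, multiple databases,
    each described by a schema registered at connect time."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.schemas = {}                 # database name -> schema wrapper

    def register_schema(self, name, tables):
        # 'tables' stands in for the table/column definitions a real
        # client would fetch from the ovsdb-server's get_schema reply
        self.schemas[name] = {"name": name, "tables": tables}

    def get_table(self, schema_name, table):
        return self.schemas[schema_name]["tables"][table]

conn = OvsdbConnection("ovsdb://192.168.1.5:6640")
conn.register_schema("Open_vSwitch",
                     {"Bridge": {"columns": ["name", "ports"]}})
conn.register_schema("hardware_vtep",
                     {"Physical_Switch": {"columns": ["name"]}})
```

Adding a third schema would only require another `register_schema` call, mirroring the claim above that no ODL code changes are needed per schema.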
-\r
-===== Port security\r
-Because security rules can be obtained from a port object, OVSDB can apply OpenFlow rules. These rules match on the types of traffic the OpenStack tenant VM is allowed to use.\r
\r
-Support for security groups is very experimental. There are limitations in determining the state of flows in the Open vSwitch. See https://www.youtube.com/watch?v=DSop2uLJZS8[Open vSwitch and the Intelligent Edge] from Justin Pettit for a deep dive into the challenges faced when creating a flow-based port security implementation. The current set of installed rules only supports filtering of the TCP protocol. This is because, via a Nicira TCP_Flag read, we can match on a flow's TCP_SYN flag and permit or deny the flow based on the Neutron port security rules. If rules are requested for ICMP and UDP, they are ignored until greater visibility from the Linux kernel is available, as outlined in the OpenStack presentation mentioned earlier.\r
-\r
-Using the port security groups of Neutron, one can add rules that restrict the network access of tenants. The OVSDB Neutron integration checks the configured port security rules and applies them by means of OpenFlow rules.\r
-\r
-Through the ML2 interface, Neutron security rules are available in the port object, following this scope: Neutron Port -> Security Group -> Security Rules. \r
-\r
-The current rules are applied on the basis of the following attributes: ingress/egress, tcp protocol, port range, and prefix.\r
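As an illustration of how those attributes could drive the translation, here is a minimal Python sketch. It is not the plugin's Java code: the function name and the dict-based "match" are invented, though the field names follow ovs-ofctl syntax and the TCP-SYN restriction follows the text above.

```python
def rule_to_flow_match(direction, protocol, port_min, remote_prefix):
    """Render one Neutron security rule as an OpenFlow-style match dict.
    Only TCP is handled, matching on connection-initiating (SYN) packets,
    as described above; other protocols are ignored."""
    if protocol != "tcp":
        return None                        # ICMP/UDP rules are ignored
    match = {"tcp_flags": "+syn"}          # gate new connections only
    if port_min is not None:
        # ingress rules constrain the destination port, egress the source
        match["tp_dst" if direction == "ingress" else "tp_src"] = port_min
    if remote_prefix is not None:
        match["nw_src" if direction == "ingress" else "nw_dst"] = remote_prefix
    return match

m = rule_to_flow_match("ingress", "tcp", 80, "10.200.0.0/16")
```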
\r
-====== OpenStack workflow\r
-. Create a stack.\r
-. Add the network and subnet. \r
-. Add the Security Group and Rules.\r
-\r
-NOTE: This is no different from what users normally do in regular OpenStack deployments.\r
-----\r
-neutron security-group-create group1 --description "Group 1"\r
-neutron security-group-list\r
-neutron security-group-rule-create --direction ingress --protocol tcp group1\r
-----\r
-[start=4]\r
-. Start the tenant, specifying the security-group.\r
-----\r
-nova boot --flavor m1.tiny \\r
---image $(nova image-list | grep 'cirros-0.3.1-x86_64-uec\s' | awk '{print $2}') \\r
---nic net-id=$(neutron net-list | grep 'vxlan2' | awk '{print $2}') vxlan2 \\r
---security-groups group1\r
-----\r
-====== Examples: Rules supported\r
-----\r
-neutron security-group-create group2 --description "Group 2"\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 54 group2\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 group2\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1633 group2\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 group2\r
-----\r
-----\r
-neutron security-group-create group3 --description "Group 3"\r
-neutron security-group-rule-create --direction ingress --protocol tcp --remote-ip-prefix 10.200.0.0/16 group3\r
-----\r
-----\r
-neutron security-group-create group4 --description "Group 4"\r
-neutron security-group-rule-create --direction ingress --remote-ip-prefix 172.24.0.0/16 group4\r
-----\r
-----\r
-neutron security-group-create group5 --description "Group 5"\r
-neutron security-group-rule-create --direction ingress --protocol tcp group5\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 54 group5\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 group5\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 1633 group5\r
-neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 group5\r
-----\r
-----\r
-neutron security-group-create group6 --description "Group 6"\r
-neutron security-group-rule-create --direction ingress --protocol tcp --remote-ip-prefix 0.0.0.0/0 group6\r
-----\r
-----\r
-neutron security-group-create group7 --description "Group 7"\r
-neutron security-group-rule-create --direction egress --protocol tcp --port-range-min 443 --remote-ip-prefix 172.16.240.128/25 group7\r
-----\r
-*Reference gist*: https://gist.github.com/anonymous/1543a410d57f491352c8[Gist]\r
-\r
-====== Security group rules supported in ODL \r
-The following rule formats are supported in the current implementation. The direction (ingress/egress) is always expected. Rules are implemented such that TCP-SYN packets that do not satisfy the rules are dropped.\r
-[cols="3", width="60%"]\r
-|===\r
-| Proto | Port | IP Prefix\r
-\r
-|TCP |x |x\r
-|Any | Any |x\r
-|TCP |x |Any\r
-|TCP |Any |Any\r
-|===\r
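The support matrix above can be encoded directly as a lookup, which makes the supported/unsupported combinations easy to check. This is an illustrative sketch only (the function is not part of the plugin); each field is either a concrete value ("x" in the table) or "Any".

```python
def is_supported(proto, port, prefix):
    """Return True if (proto, port, prefix) matches a supported row of
    the table above. 'port'/'prefix' are concrete values or None (Any);
    'proto' is "TCP" or "Any"."""
    supported_rows = {
        ("TCP", True,  True),    # TCP  | x   | x
        ("Any", False, True),    # Any  | Any | x
        ("TCP", True,  False),   # TCP  | x   | Any
        ("TCP", False, False),   # TCP  | Any | Any
    }
    return (proto, port is not None, prefix is not None) in supported_rows
```

For example, a TCP rule on port 80 with no prefix is supported, while a UDP rule is not, consistent with the limitations listed below.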
-\r
-====== Limitations\r
-* Conntrack support is coming to OVS. Until then, TCP flags are used as a way of checking connection state; specifically, this is done by matching on the TCP-SYN flag.\r
-* The param '--port-range-max' in 'security-group-rule-create' is not used until the implementation uses conntrack.\r
-* No UDP/ICMP specific match support is provided.\r
-* No IPv6 support is provided.\r
-\r
-===== L3 forwarding\r
-OVSDB supports the use of an ODL Neutron driver so that OVSDB can configure OpenFlow 1.3 rules to route IPv4 packets. The driver eliminates the need for the L3 Agent's router. To accomplish this, OVS 2.1 or newer is required.\r
-OVSDB also supports inbound/outbound NAT and floating IPs.\r
-\r
-====== Starting OVSDB and OpenStack\r
-. Build or download OVSDB distribution, as mentioned in <<ovsdbBuildSteps,building a Karaf feature section>>.\r
-. http://docs.vagrantup.com/v2/installation/index.html[Install Vagrant].\r
-\r
-[start=3]\r
-. Enable the L3 Forwarding feature:\r
-----\r
-echo 'ovsdb.l3.fwd.enabled=yes' >> ./opendaylight/configuration/config.ini\r
-echo 'ovsdb.l3gateway.mac=${GATEWAY_MAC}' >> ./configuration/config.ini\r
-----\r
-[start=4]\r
-. Run the following commands to get the odl neutron drivers:\r
-[start=5]\r
-----\r
-git clone https://github.com/dave-tucker/odl-neutron-drivers.git\r
-cd odl-neutron-drivers\r
-vagrant up devstack-control devstack-compute-1\r
-----\r
-[start=6]\r
-. Use ssh to go to the control node, and clone odl-neutron-drivers again:\r
-----\r
-vagrant ssh devstack-control\r
-git clone https://github.com/dave-tucker/odl-neutron-drivers.git\r
-cd odl-neutron-drivers\r
-sudo python setup.py install\r
-*leave this shell open*\r
-----\r
-[start=7]\r
-. Start odl, as mentioned in <<ovsdbStartingOdl,running Karaf feature section>>.\r
-[start=8]\r
-. To see the processing of Neutron events related to L3, run this from the Karaf prompt:\r
-----\r
-log:set debug org.opendaylight.ovsdb.openstack.netvirt.impl.NeutronL3Adapter\r
-----\r
-[start=9]\r
-. From a shell on the control node (your open ssh session, or vagrant ssh devstack-control), run:\r
-----\r
-cd ~/devstack && ./stack.sh\r
-----\r
-[start=10]\r
-. From a new shell in the host system, run the following:\r
-----\r
-cd odl-neutron-drivers\r
-vagrant ssh devstack-compute-1\r
-cd ~/devstack && ./stack.sh\r
-----\r
-\r
-====== OpenStack workflow\r
-.Sample workflow\r
-image::L3FwdSample.png[height=250]\r
-\r
-Use the following steps to set up a workflow like the one shown in the figure above.\r
-\r
-. Set up authentication. From shell on stack control or vagrant ssh devstack-control:\r
-----\r
-source openrc admin admin\r
-----\r
-\r
-----\r
-rm -f id_rsa_demo* ; ssh-keygen -t rsa -b 2048 -N "" -f id_rsa_demo\r
- nova keypair-add --pub-key  id_rsa_demo.pub  demo_key\r
- # nova keypair-list\r
-----\r
-[start=2]\r
-. Create two networks and two subnets.\r
-----\r
-neutron net-create net1 --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \\r
- --provider:network_type gre --provider:segmentation_id 555\r
-----\r
-----\r
-neutron subnet-create --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \\r
-net1 10.0.0.0/16 --name subnet1 --dns-nameserver 8.8.8.8\r
-----\r
-----\r
-neutron net-create net2 --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \\r
- --provider:network_type gre --provider:segmentation_id 556\r
-----\r
-----\r
-neutron subnet-create --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}') \\r
- net2 20.0.0.0/16 --name subnet2 --dns-nameserver 8.8.8.8\r
-----\r
-[start=3]\r
-. Create a router, and add an interface to each of the two subnets.\r
-----\r
-neutron router-create demorouter --tenant-id $(keystone tenant-list | grep '\s'admin | awk '{print $2}')\r
- neutron router-interface-add demorouter subnet1\r
- neutron router-interface-add demorouter subnet2\r
- # neutron router-port-list demorouter\r
-----\r
-[start=4]\r
-. Create two tenant instances.\r
-----\r
-nova boot --poll --flavor m1.nano --image $(nova image-list | grep 'cirros-0.3.2-x86_64-uec\s' | awk '{print $2}') \\r
- --nic net-id=$(neutron net-list | grep -w net1 | awk '{print $2}'),v4-fixed-ip=10.0.0.10 \\r
- --availability-zone nova:devstack-control \\r
- --key-name demo_key host10\r
-----\r
-----\r
-nova boot --poll --flavor m1.nano --image $(nova image-list | grep 'cirros-0.3.2-x86_64-uec\s' | awk '{print $2}') \\r
- --nic net-id=$(neutron net-list | grep -w net2 | awk '{print $2}'),v4-fixed-ip=20.0.0.20 \\r
- --availability-zone nova:devstack-compute-1 \\r
- --key-name demo_key host20\r
-----\r
-\r
-====== Limitations\r
-* To use this feature, you need OVS 2.1 or newer.\r
-* Owing to OpenFlow limitations, ICMP responses due to routing failures, such as TTL expired or host unreachable, are not generated.\r
-* The MAC address of the default route is not automatically mapped. In order to route to L3 destinations outside the tenant's networks, manual configuration of the default route is necessary. To provide the MAC address of the default route, use ovsdb.l3gateway.mac in the file configuration/config.ini.\r
-* This feature is a Tech Preview; its use without the provided neutron-driver depends on later versions of OpenStack.\r
-* No IPv6 support is provided.\r
\r
-*More information on L3 forwarding*: +\r
-\r
-* odl-neutron-driver: https://github.com/dave-tucker/odl-neutron-drivers\r
-* OF rules example: http://dtucker.co.uk/hack/building-a-router-with-openvswitch.html\r
-\r
-===== LBaaS\r
-Load-Balancing-as-a-Service (LBaaS) creates an Open vSwitch powered L3-L4 stateless load-balancer in a virtualized network environment so that individual TCP connections destined to a designated virtual IP (VIP) are sent to the appropriate servers (that is to say, serving app VMs). The load-balancer works in a session-preserving, proactive manner without involving the controller during flow setup.\r
-\r
-A Neutron northbound interface is provided to create a VIP which will map to a pool of servers (that is to say, members) within a subnet. The pools consist of members identified by an IP address. The goal is to closely match the API to the OpenStack LBaaS v2 API: http://docs.openstack.org/api/openstack-network/2.0/content/lbaas_ext.html.\r
-\r
-====== Creating an OpenStack workflow\r
-. Create a subnet. \r
-. Create a floating VIP 'A' that maps to a private VIP 'B'. \r
-. Create a Loadbalancer pool 'X'. \r
-----\r
-neutron lb-pool-create --name http-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id XYZ\r
-----\r
-[start=4]\r
-. Create a Loadbalancer pool member 'Y' and associate with pool 'X'. \r
-----\r
-neutron lb-member-create --address 10.0.0.10 --protocol-port 80 http-pool\r
-neutron lb-member-create --address 10.0.0.11 --protocol-port 80 http-pool\r
-neutron lb-member-create --address 10.0.0.12 --protocol-port 80 http-pool\r
-neutron lb-member-create --address 10.0.0.13 --protocol-port 80 http-pool\r
-----\r
-[start=5]\r
-. Create a Loadbalancer instance 'Z', and associate pool 'X' and VIP 'B' with it.\r
-----\r
-neutron lb-vip-create --name http-vip --protocol-port 80 --protocol HTTP --subnet-id XYZ http-pool\r
-----\r
-\r
-====== Implementation\r
-\r
-The current implementation of the proactive stateless load-balancer uses the "multipath" action in Open vSwitch. The "multipath" action takes a max_link parameter (equal to the number of pool members) as input, and hashes the flow's fields to a value in the range [0, max_link). The hash value is used as an index to select a pool member to handle that session.\r
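The selection mechanism can be sketched in Python as follows. This is a conceptual model only, not the in-switch multipath action: the hash function and tuple layout are illustrative, but the key properties match the text above — the hash is symmetric (both directions of a session pick the same member) and is taken modulo the pool size.

```python
import hashlib

def pick_member(five_tuple, members):
    """Pick a pool member for a flow, symmetric_l4-style: sort the two
    endpoints so forward and reverse traffic hash identically, then take
    the hash modulo the number of members."""
    src_ip, dst_ip, proto, src_port, dst_port = five_tuple
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    digest = hashlib.md5(repr((a, b, proto)).encode()).hexdigest()
    return members[int(digest, 16) % len(members)]

members = ["10.0.0.10", "10.0.0.11", "10.0.0.12", "10.0.0.13"]
fwd = pick_member(("10.0.0.2", "10.0.0.5", "tcp", 40000, 80), members)
rev = pick_member(("10.0.0.5", "10.0.0.2", "tcp", 80, 40000), members)
```

Because selection is a pure function of the flow's fields and the member count, no per-session state is kept; this is also why changing the member count reshuffles existing sessions, as noted in the limitations below.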
-\r
-===== Open vSwitch rules\r
-Assuming that table=20 contains all the rules to forward traffic destined for a specific destination MAC address, the following rules need to be programmed in the LBaaS service table=10. The programmed rules translate the VIP to a different pool member for every session.\r
-\r
-* Proactive forward rules:\r
-----\r
-sudo ovs-ofctl -O OpenFlow13 add-flow s1 "table=10,reg0=0,ip,nw_dst=10.0.0.5,actions=load:0x1->NXM_NX_REG0[[]],multipath(symmetric_l4, 1024, modulo_n, 4, 0, NXM_NX_REG1[0..12]),resubmit(,10)"\r
-sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=0,actions=mod_dl_dst:00:00:00:00:00:10,mod_nw_dst:10.0.0.10,goto_table:20\r
-sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=1,actions=mod_dl_dst:00:00:00:00:00:11,mod_nw_dst:10.0.0.11,goto_table:20\r
-sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=2,actions=mod_dl_dst:00:00:00:00:00:12,mod_nw_dst:10.0.0.12,goto_table:20\r
-sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,reg0=1,nw_dst=10.0.0.5,ip,reg1=3,actions=mod_dl_dst:00:00:00:00:00:13,mod_nw_dst:10.0.0.13,goto_table:20\r
-----\r
-* Proactive reverse rules: \r
-----\r
-sudo ovs-ofctl -O OpenFlow13 add-flow s1 table=10,ip,tcp,tp_src=80,actions=mod_dl_src:00:00:00:00:00:05,mod_nw_src:10.0.0.5,goto_table:20\r
----- \r
-\r
-====== OVSDB project code\r
-The current implementation handles all neutron calls in the net-virt/LBaaSHandler.java code, and makes calls to the net-virt-providers/LoadBalancerService to program appropriate flowmods. The rules are updated whenever there is a change in the Neutron LBaaS settings. There is no cache of state kept in the net-virt or providers. \r
-\r
-====== Limitations\r
-Owing to the inflexibility of the multipath action, the existing LBaaS implementation comes with some limitations: \r
-\r
-* TCP, HTTP or HTTPS are supported protocols for the pool. (Caution: You can lose access to the members if you assign {Proto:TCP, Port:22} to LB) \r
-\r
-* Member weights are ignored. \r
-* The update of an LB instance is done as a delete + add, and not an actual delta. \r
-* The update of an LB member is not supported (because weights are ignored). \r
-* Deletion of an LB member leads to the reprogramming of the LB on all nodes (because of the way multipath does link hash).\r
-* There is only a single LB instance per subnet because the pool-id is not reported in the create load-balancer call. \r
-\r
-\r
-\r
-\r
-\r
-\r
-\r
-\r
-\r
-                       \r
-\r
-\r
diff --git a/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-sfc-developer.adoc b/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-sfc-developer.adoc
deleted file mode 100644 (file)
index df4dc0e..0000000
+++ /dev/null
@@ -1,159 +0,0 @@
-=== OVSDB Service Function Chaining Developer Guide
-
-==== Overview
-The OVSDB NetVirtSfc provides a classification and traffic steering component when integrated with OpenStack. Please refer to the Service Function Chaining project for the theory and programming of service chains.
-
-==== Installing the NetVirt SFC Feature
-Install the odl-ovsdb-sfc feature. The feature will also ensure that the odl-ovsdb-openstack feature as well as the openflowplugin, neutron and sfc features are installed.
-
----
-feature:install odl-ovsdb-sfc-ui
----
-
-Verify the required features are installed:
-
----
-opendaylight-user@root>feature:list -i | grep ovsdb
-
-odl-ovsdb-southbound-api             | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: api
-odl-ovsdb-southbound-impl            | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl
-odl-ovsdb-southbound-impl-rest       | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl :: REST
-odl-ovsdb-southbound-impl-ui         | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-southbound-1.2.1-SNAPSHOT     | OpenDaylight :: southbound :: impl :: UI
-odl-ovsdb-library                    | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-library-1.2.1-SNAPSHOT        | OpenDaylight :: library
-odl-ovsdb-openstack                  | 1.2.1-SNAPSHOT   | x         | ovsdb-1.2.1-SNAPSHOT                    | OpenDaylight :: OVSDB :: OpenStack Network Virtual
-odl-ovsdb-sfc-api                    | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: api
-odl-ovsdb-sfc                        | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc
-odl-ovsdb-sfc-rest                   | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: REST
-odl-ovsdb-sfc-ui                     | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: UI
-
-opendaylight-user@root>feature:list -i | grep sfc
-odl-sfc-model                        | 0.2.0-SNAPSHOT   | x         | odl-sfc-0.2.0-SNAPSHOT                  | OpenDaylight :: sfc :: Model
-odl-sfc-provider                     | 0.2.0-SNAPSHOT   | x         | odl-sfc-0.2.0-SNAPSHOT                  | OpenDaylight :: sfc :: Provider
-odl-sfc-provider-rest                | 0.2.0-SNAPSHOT   | x         | odl-sfc-0.2.0-SNAPSHOT                  | OpenDaylight :: sfc :: Provider
-odl-sfc-ovs                          | 0.2.0-SNAPSHOT   | x         | odl-sfc-0.2.0-SNAPSHOT                  | OpenDaylight :: OpenvSwitch
-odl-sfcofl2                          | 0.2.0-SNAPSHOT   | x         | odl-sfc-0.2.0-SNAPSHOT                  | OpenDaylight :: sfcofl2
-odl-ovsdb-sfc-test                   | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-test1.2.1-SNAPSHOT        | OpenDaylight :: ovsdb-sfc-test
-odl-ovsdb-sfc-api                    | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: api
-odl-ovsdb-sfc                        | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc
-odl-ovsdb-sfc-rest                   | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: REST
-odl-ovsdb-sfc-ui                     | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: UI
-
-opendaylight-user@root>feature:list -i | grep neutron
-odl-neutron-service                  | 0.6.0-SNAPSHOT   | x         | odl-neutron-0.6.0-SNAPSHOT              | OpenDaylight :: Neutron :: API
-odl-neutron-northbound-api           | 0.6.0-SNAPSHOT   | x         | odl-neutron-0.6.0-SNAPSHOT              | OpenDaylight :: Neutron :: Northbound
-odl-neutron-spi                      | 0.6.0-SNAPSHOT   | x         | odl-neutron-0.6.0-SNAPSHOT              | OpenDaylight :: Neutron :: API
-odl-neutron-transcriber              | 0.6.0-SNAPSHOT   | x         | odl-neutron-0.6.0-SNAPSHOT              | OpenDaylight :: Neutron :: Implementation
----
-
-==== OVSDB NetVirt Service Function Chaining Example
-The architecture within OpenDaylight can be seen in the following figure:
-
-.OpenDaylight OVSDB NetVirt SFC Architecture
-image::ovsdb/ODL_SFC_Architecture.png[]
-
-Tacker is a Virtual Network Functions Manager that is responsible for orchestrating the Service Function Chaining. Tacker is responsible for generating templates for Virtual Network Functions for OpenStack to instantiate the Service Functions. Tacker also uses the RESTCONF interfaces of OpenDaylight to create the Service Function Chains.
-
-==== Classification
-OVSDB NetVirt SFC implements the classification for the chains. The classification steers traffic from the tenant overlay to the chain overlay and back to the tenant overlay.
-
-An Access Control List used by NetVirtSfc to create the classifier is shown below. This example classifies HTTP traffic using TCP port 80. Here the user would have created a Service Function Chain named "http-sfc", as well as all the associated Service Functions and Service Function Forwarders for the chain.
-
----
-http://localhost:8181/restconf/config/ietf-access-control-list:access-lists
-
-{
-    "access-lists": {
-        "acl": [
-            {
-                "acl-name": "http-acl",
-                "access-list-entries": {
-                    "ace": [
-                        {
-                            "rule-name": "http-rule",
-                            "matches": {
-                                "source-port-range": {
-                                    "lower-port": 0,
-                                    "upper-port": 0
-                                },
-                                "protocol": 6,
-                                "destination-port-range": {
-                                    "lower-port": 80,
-                                    "upper-port": 80
-                                }
-                            },
-                            "actions": {
-                                "netvirt-sfc-acl:sfc-name": "http-sfc"
-                            }
-                        }
-                    ]
-                }
-            }
-        ]
-    }
-}
----
-
-When the chain is rendered using the Rendered Service Path RPC, NetVirtSfc adds the classification flows, shown below (the list has been trimmed to remove the NetVirt tenant overlay flows). The classification flow is identified by the cookie 0x1110010000040255. The 6th digit of the cookie identifies the flow type as the classifier, and the last 8 digits identify the chain: the first four of those digits hold the NSH NSP and the last four the NSH NSI. In this case the chain has an NSP of 4, and the NSI is 255 to indicate the beginning of the chain.
-
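Reading the cookie layout described above as positional fields, a small decoder can be sketched like this. This is an illustrative interpretation, not project code: it assumes the 16 hex digits are split as (6th digit = flow type, digits 9-12 = NSP, digits 13-16 = NSI), and that the NSP/NSI fields in these cookies happen to use only decimal digits (e.g. "0255" reads as 255).

```python
def decode_sfc_cookie(cookie):
    """Split a 64-bit flow cookie into (flow_type, nsp, nsi) per the
    layout described in the text; field placement is an assumption."""
    text = "%016x" % cookie
    flow_type = int(text[5])      # 6th digit: flow type (1 = classifier)
    nsp = int(text[8:12])         # first 4 of the last 8 digits: NSH NSP
    nsi = int(text[12:16])        # last 4 digits: NSH NSI
    return flow_type, nsp, nsi

flow_type, nsp, nsi = decode_sfc_cookie(0x1110010000040255)
```

Applied to the classifier cookie above this yields flow type 1, NSP 4, NSI 255, matching the nsp/nsi values visible in the flow dump below.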
----
-sudo ovs-ofctl --protocol=OpenFlow13 dump-flows br-int
-OFPST_FLOW reply (OF1.3) (xid=0x2):
- cookie=0x0, duration=17.157s, table=0, n_packets=0, n_bytes=0, priority=6 actions=goto_table:1
- cookie=0x14, duration=10.692s, table=0, n_packets=0, n_bytes=0, priority=400,udp,in_port=4,tp_dst=6633 actions=LOCAL
- cookie=0x0, duration=17.134s, table=0, n_packets=0, n_bytes=0, dl_type=0x88cc actions=CONTROLLER:65535
- cookie=0x14, duration=10.717s, table=0, n_packets=0, n_bytes=0, priority=350,nsp=4 actions=goto_table:152
- cookie=0x14, duration=10.688s, table=0, n_packets=0, n_bytes=0, priority=400,udp,nw_dst=10.2.1.1,tp_dst=6633 actions=output:4
- cookie=0x0, duration=17.157s, table=1, n_packets=0, n_bytes=0, priority=0 actions=goto_table:11
- cookie=0x1110070000040254, duration=10.608s, table=1, n_packets=0, n_bytes=0, priority=40000,reg0=0x1,nsp=4,nsi=254,in_port=1 actions=goto_table:21
- cookie=0x0, duration=17.157s, table=11, n_packets=0, n_bytes=0, priority=0 actions=goto_table:21
- cookie=0x1110060000040254, duration=10.625s, table=11, n_packets=0, n_bytes=0, nsp=4,nsi=254,in_port=4 actions=load:0x1->NXM_NX_REG0[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],resubmit(1,1)
- cookie=0x1110010000040255, duration=10.615s, table=11, n_packets=0, n_bytes=0, tcp,reg0=0x1,tp_dst=80 actions=move:NXM_NX_TUN_ID[0..31]->NXM_NX_NSH_C2[],set_nshc1:0xc0a83246,set_nsp:0x4,set_nsi:255,load:0xa020101->NXM_NX_TUN_IPV4_DST[],load:0x4->NXM_NX_TUN_ID[0..31],resubmit(,0)
- cookie=0x0, duration=17.157s, table=21, n_packets=0, n_bytes=0, priority=0 actions=goto_table:31
- cookie=0x1110040000000000, duration=10.765s, table=21, n_packets=0, n_bytes=0, priority=1024,arp,in_port=LOCAL,arp_tpa=10.2.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:f6:00:00:0f:00:01->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xf600000f0001->NXM_NX_ARP_SHA[],load:0xa020101->NXM_OF_ARP_SPA[],IN_PORT
- cookie=0x0, duration=17.157s, table=31, n_packets=0, n_bytes=0, priority=0 actions=goto_table:41
- cookie=0x0, duration=17.157s, table=41, n_packets=0, n_bytes=0, priority=0 actions=goto_table:51
- cookie=0x0, duration=17.157s, table=51, n_packets=0, n_bytes=0, priority=0 actions=goto_table:61
- cookie=0x0, duration=17.142s, table=61, n_packets=0, n_bytes=0, priority=0 actions=goto_table:71
- cookie=0x0, duration=17.140s, table=71, n_packets=0, n_bytes=0, priority=0 actions=goto_table:81
- cookie=0x0, duration=17.116s, table=81, n_packets=0, n_bytes=0, priority=0 actions=goto_table:91
- cookie=0x0, duration=17.116s, table=91, n_packets=0, n_bytes=0, priority=0 actions=goto_table:101
- cookie=0x0, duration=17.107s, table=101, n_packets=0, n_bytes=0, priority=0 actions=goto_table:111
- cookie=0x0, duration=17.083s, table=111, n_packets=0, n_bytes=0, priority=0 actions=drop
- cookie=0x14, duration=11.042s, table=150, n_packets=0, n_bytes=0, priority=5 actions=goto_table:151
- cookie=0x14, duration=11.027s, table=151, n_packets=0, n_bytes=0, priority=5 actions=goto_table:152
- cookie=0x14, duration=11.010s, table=152, n_packets=0, n_bytes=0, priority=5 actions=goto_table:158
- cookie=0x14, duration=10.668s, table=152, n_packets=0, n_bytes=0, priority=650,nsp=4,nsi=255 actions=load:0xa020101->NXM_NX_TUN_IPV4_DST[],goto_table:158
- cookie=0x14, duration=10.995s, table=158, n_packets=0, n_bytes=0, priority=5 actions=drop
- cookie=0xba5eba11ba5eba11, duration=10.645s, table=158, n_packets=0, n_bytes=0, priority=751,nsp=4,nsi=255,in_port=4 actions=move:NXM_NX_NSH_C1[]->NXM_NX_NSH_C1[],move:NXM_NX_NSH_C2[]->NXM_NX_NSH_C2[],move:NXM_NX_TUN_ID[0..31]->NXM_NX_TUN_ID[0..31],IN_PORT
- cookie=0xba5eba11ba5eba11, duration=10.590s, table=158, n_packets=0, n_bytes=0, priority=751,nsp=4,nsi=254,in_port=4 actions=move:NXM_NX_NSI[]->NXM_NX_NSI[],move:NXM_NX_NSP[]->NXM_NX_NSP[],move:NXM_NX_NSH_C1[]->NXM_NX_TUN_IPV4_DST[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],IN_PORT
- cookie=0xba5eba11ba5eba11, duration=10.640s, table=158, n_packets=0, n_bytes=0, priority=750,nsp=4,nsi=255 actions=move:NXM_NX_NSH_C1[]->NXM_NX_NSH_C1[],move:NXM_NX_NSH_C2[]->NXM_NX_NSH_C2[],move:NXM_NX_TUN_ID[0..31]->NXM_NX_TUN_ID[0..31],output:4
- cookie=0xba5eba11ba5eba11, duration=10.571s, table=158, n_packets=0, n_bytes=0, priority=761,nsp=4,nsi=254,nshc1=3232248390,in_port=4 actions=move:NXM_NX_NSI[]->NXM_NX_NSI[],move:NXM_NX_NSP[]->NXM_NX_NSP[],move:NXM_NX_NSH_C1[]->NXM_NX_TUN_IPV4_DST[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],set_nshc1:0,resubmit(,11)
----
-
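The cookie layout described above can be checked with a short decoder. This is an informal sketch: the slice positions follow the text (6th digit = flow type, last eight digits = NSP then NSI), and it assumes the NSP/NSI digits read as decimal numbers (NSI 255 appears as `0255`, not `00ff`); the field names are labels of convenience, not NetvirtSfc identifiers.

```python
def decode_netvirt_sfc_cookie(cookie):
    # Render the 64-bit cookie as 16 hex digits, e.g. "1110010000040255".
    digits = "%016x" % cookie
    return {
        "flow_type": digits[5],    # 6th digit; "1" marks the classifier flow
        "nsp": int(digits[8:12]),  # first four of the last eight digits
        "nsi": int(digits[12:]),   # last four digits; 255 = start of chain
    }
```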
-==== Configuration
-Some configuration is required due to application coexistence for the OpenFlow programming. The SFC project programs flows for the SFC overlay and NetVirt programs flows for the tenant overlay. Coexistence is achieved by each application owning a unique set of tables and providing a simple handoff between the tables.
-
-First configure NetVirt to use table 1 as its starting table:
-
----
-http://localhost:8181/restconf/config/netvirt-providers-config:netvirt-providers-config
-
-{
-    "netvirt-providers-config": {
-        "table-offset": 1
-    }
-}
----
-
-Next, configure SFC to start at table 150 and set the handoff to table 11, which is the NetVirt SFC classification table.
-
----
-http://localhost:8181/restconf/config/sfc-of-renderer:sfc-of-renderer-config
-
-{
-    "sfc-of-renderer-config": {
-        "sfc-of-app-egress-table-offset": 11,
-        "sfc-of-table-offset": 150
-    }
-}
----
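The two configuration PUTs above can be scripted together. The sketch below only builds the (URL, body) pairs from the examples; sending them (curl, urllib, a RESTCONF client) is left to the caller, and the base URL is the default local RESTCONF endpoint assumed throughout this guide.

```python
def coexistence_config(base="http://localhost:8181/restconf/config"):
    # (url, body) pairs mirroring the two example PUTs: NetVirt starts at
    # table 1, SFC at table 150, handing off to table 11.
    netvirt = (
        base + "/netvirt-providers-config:netvirt-providers-config",
        {"netvirt-providers-config": {"table-offset": 1}},
    )
    sfc = (
        base + "/sfc-of-renderer:sfc-of-renderer-config",
        {"sfc-of-renderer-config": {
            "sfc-of-app-egress-table-offset": 11,
            "sfc-of-table-offset": 150,
        }},
    )
    return [netvirt, sfc]
```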
\ No newline at end of file
diff --git a/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-southbound-developer.adoc b/manuals/developer-guide/src/main/asciidoc/ovsdb/ovsdb-southbound-developer.adoc
deleted file mode 100644 (file)
index 8759dcd..0000000
+++ /dev/null
@@ -1,299 +0,0 @@
-=== OVSDB MD-SAL Southbound Plugin Developer Guide
-
-==== Overview
-The Open vSwitch Database (OVSDB) Model Driven Service Abstraction Layer
-(MD-SAL) Southbound Plugin provides an MD-SAL based interface to
-Open vSwitch systems.  This is done by augmenting the MD-SAL topology node with
-a YANG model which replicates some (but not all) of the Open vSwitch schema.
-
-==== OVSDB MD-SAL Southbound Plugin Architecture and Operation
-The architecture and operation of the OVSDB MD-SAL Southbound plugin is
-illustrated in the following set of diagrams.
-
-===== Connecting to an OVSDB Node
-An OVSDB node is a system which is running the OVS software and is capable of
-being managed by an OVSDB manager.  The OVSDB MD-SAL Southbound plugin in
-OpenDaylight is capable of operating as an OVSDB manager.  Depending on the
-configuration of the OVSDB node, the connection of the OVSDB manager can
-be active or passive.
-
-====== Active OVSDB Node Manager Workflow
-An active OVSDB node manager connection is made when OpenDaylight initiates the
-connection to the OVSDB node.  In order for this to work, you must configure the
-OVSDB node to listen on a TCP port for the connection (i.e.
-OpenDaylight is active and the OVSDB node is passive).  This option can be
-configured on the OVSDB node using the following command:
-
- ovs-vsctl set-manager ptcp:6640
-
-The following diagram illustrates the sequence of events which occur when
-OpenDaylight initiates an active OVSDB manager connection to an OVSDB node.
-
-.Active OVSDB Manager Connection
-image::ovsdb-sb-active-connection.jpg[width=500]
-
-Step 1::
-Create an OVSDB node by using RESTCONF or an OpenDaylight plugin. The OVSDB node
-is listed under the OVSDB topology node.
-Step 2::
-Add the OVSDB node to the OVSDB MD-SAL southbound configuration datastore. The
-OVSDB southbound provider is registered to listen for data change events on the
-portion of the MD-SAL topology data store which contains the OVSDB southbound
-topology node augmentations. The addition of an OVSDB node causes an event which
-is received by the OVSDB Southbound provider.
-Step 3::
-The OVSDB Southbound provider initiates a connection to the OVSDB node using
-the connection information provided in the OVSDB node configuration (i.e. IP
-address and TCP port number).
-Step 4::
-The OVSDB Southbound provider adds the OVSDB node to the OVSDB MD-SAL
-operational data store.  The operational data store contains OVSDB node
-objects which represent active connections to OVSDB nodes.
-Step 5::
-The OVSDB Southbound provider requests the schema and databases which are
-supported by the OVSDB node.
-Step 6::
-The OVSDB Southbound provider uses the database and schema information to
-construct a monitor request which causes the OVSDB node to send the controller
-any updates made to the OVSDB databases on the OVSDB node.
-
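Step 6 refers to the OVSDB `monitor` method defined in RFC 7047. The sketch below builds a generic request of that shape; it is not the plugin's actual code, and the table list and monitor id used here are illustrative.

```python
import json

def monitor_request(db="Open_vSwitch",
                    tables=("Open_vSwitch", "Bridge", "Port", "Interface")):
    # One monitor-request per table; selecting all four change kinds makes
    # the node report initial rows plus every subsequent change (RFC 7047).
    reqs = {t: {"select": {"initial": True, "insert": True,
                           "delete": True, "modify": True}}
            for t in tables}
    return json.dumps({"method": "monitor",
                       "params": [db, "odl-monitor", reqs],
                       "id": 1})
```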
-
-====== Passive OVSDB Node Manager Workflow
-A passive OVSDB node connection to OpenDaylight is made when the OVSDB node
-initiates the connection to OpenDaylight.  In order for this to work, you must
-configure the OVSDB node to connect to the IP address and OVSDB port on which
-OpenDaylight is listening.  This option can be configured on the OVSDB node
-using the following command:
-
- ovs-vsctl set-manager tcp:<IP address>:6640
-
-The following diagram illustrates the sequence of events which occur when an
-OVSDB node connects to OpenDaylight.
-
-.Passive OVSDB Manager Connection
-image::ovsdb-sb-passive-connection.jpg[width=500]
-
-Step 1::
-The OVSDB node initiates a connection to OpenDaylight.
-Step 2::
-The OVSDB Southbound provider adds the OVSDB node to the OVSDB MD-SAL
-operational data store.  The operational data store contains OVSDB node
-objects which represent active connections to OVSDB nodes.
-Step 3::
-The OVSDB Southbound provider requests the schema and databases which are
-supported by the OVSDB node.
-Step 4::
-The OVSDB Southbound provider uses the database and schema information to
-construct a monitor request which causes the OVSDB node to send back
-any updates which have been made to the OVSDB databases on the OVSDB node.
-
-===== OVSDB Node ID in the Southbound Operational MD-SAL
-When OpenDaylight initiates an active connection to an OVSDB node, it
-writes an external-id to the Open_vSwitch table on the OVSDB node.  The
-external-id is an OpenDaylight instance identifier which identifies the
-OVSDB topology node which has just been created.
-Here is an example showing the value of the 'opendaylight-iid' entry
-in the external-ids column of the Open_vSwitch table where the
-node-id of the OVSDB node is 'ovsdb:HOST1'.
-
- $ ovs-vsctl list open_vswitch
- ...
- external_ids        : {opendaylight-iid="/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"}
- ...
-
-The 'opendaylight-iid' entry in the external-ids column of the Open_vSwitch
-table causes the OVSDB node to have the same node-id in the operational
-MD-SAL datastore as in the configuration MD-SAL datastore.  This holds true
-if the OVSDB node manager settings are subsequently changed so that a
-passive OVSDB manager connection is made.
-
-If there is no 'opendaylight-iid' entry in the external-ids column and
-a passive OVSDB manager connection is made, then the node-id of the OVSDB
-node in the operational MD-SAL datastore will be constructed using the UUID
-of the Open_vSwitch table as follows.
-
- "node-id": "ovsdb://uuid/b8dc0bfb-d22b-4938-a2e8-b0084d7bd8c1"
-
-The 'opendaylight-iid' entry can be removed from the Open_vSwitch table using
-the following command.
-
- $ sudo ovs-vsctl remove open_vswitch . external-id "opendaylight-iid"
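The instance-identifier string written to external_ids can be reproduced with a small helper. This is a sketch matching the example value shown above, not a plugin API; the function name is hypothetical.

```python
def opendaylight_iid(node_id, topology_id="ovsdb:1"):
    # Render the RESTCONF-style instance identifier for an OVSDB topology
    # node, as stored under the 'opendaylight-iid' external-id.
    return ("/network-topology:network-topology"
            "/network-topology:topology[network-topology:topology-id='{0}']"
            "/network-topology:node[network-topology:node-id='{1}']"
            .format(topology_id, node_id))
```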
-
-===== OVSDB Changes by using OVSDB Southbound Config MD-SAL
-After the connection has been made to an OVSDB node, you can make changes to the
-OVSDB node by using the OVSDB Southbound Config MD-SAL.  You can
-make CRUD operations by using the RESTCONF interface or by a plugin
-using the MD-SAL APIs.  The following diagram illustrates the high-level flow of
-events.
-
-.OVSDB Changes by using the Southbound Config MD-SAL
-image::ovsdb-sb-config-crud.jpg[width=500]
-
-Step 1::
-A change to the OVSDB Southbound Config MD-SAL is made.  Changes include adding
-or deleting bridges and ports, or setting attributes of OVSDB nodes, bridges or
-ports.
-Step 2::
-The OVSDB Southbound provider receives notification of the changes made to the
-OVSDB Southbound Config MD-SAL data store.
-Step 3::
-As appropriate, OVSDB transactions are constructed and transmitted to the OVSDB
-node to update the OVSDB database on the OVSDB node.
-Step 4::
-The OVSDB node sends update messages to the OVSDB Southbound provider to
-indicate the changes made to the OVSDB node's database.
-Step 5::
-The OVSDB Southbound provider maps the changes received from the OVSDB node
-into corresponding changes made to the OVSDB Southbound Operational
-MD-SAL data store.
-
-===== Detecting changes in OVSDB coming from outside OpenDaylight
-Changes to the OVSDB node's database may also occur independently of OpenDaylight.
-OpenDaylight also receives notifications for these events and updates the
-Southbound operational MD-SAL.  The following diagram illustrates the sequence
-of events.
-
-.OVSDB Changes made directly on the OVSDB node
-image::ovsdb-sb-oper-crud.jpg[width=500]
-
-Step 1::
-Changes are made to the OVSDB node outside of OpenDaylight (e.g. ovs-vsctl).
-Step 2::
-The OVSDB node constructs update messages to inform OpenDaylight of the changes
-made to its databases.
-Step 3::
-The OVSDB Southbound provider maps the OVSDB database changes to corresponding
-changes in the OVSDB Southbound operational MD-SAL data store.
-
-// ==== OpenFlow controller
-// Discussion of how the OpenFlow controller node is associated with the OVSDB
-// southbound model
-
-===== OVSDB Model
-The OVSDB Southbound MD-SAL operates using a YANG model which is based on the
-abstract topology node model found in the 
-https://github.com/opendaylight/yangtools/blob/stable/lithium/model/ietf/ietf-topology/src/main/yang/network-topology%402013-10-21.yang[network topology model].
-
-The augmentations for the OVSDB Southbound MD-SAL are defined in the
-https://github.com/opendaylight/ovsdb/blob/stable/lithium/southbound/southbound-api/src/main/yang/ovsdb.yang[ovsdb.yang] file.
-
-There are three augmentations:
-
-*ovsdb-node-augmentation*::
-This augments the topology node and maps primarily to the Open_vSwitch table of
-the OVSDB schema.  It contains the following attributes.
-  * *connection-info* - holds the local and remote IP address and TCP port numbers for the OpenDaylight to OVSDB node connections
-  * *db-version* - version of the OVSDB database
-  * *ovs-version* - version of OVS
-  * *list managed-node-entry* - a list of references to ovsdb-bridge-augmentation nodes, which are the OVS bridges managed by this OVSDB node
-  * *list datapath-type-entry* - a list of the datapath types supported by the OVSDB node (e.g. 'system', 'netdev') - depends on newer OVS versions
-  * *list interface-type-entry* - a list of the interface types supported by the OVSDB node (e.g. 'internal', 'vxlan', 'gre', 'dpdk', etc.) - depends on newer OVS versions
-  * *list openvswitch-external-ids* - a list of the key/value pairs in the Open_vSwitch table external_ids column
-  * *list openvswitch-other-config* - a list of the key/value pairs in the Open_vSwitch table other_config column
-*ovsdb-bridge-augmentation*::
-This augments the topology node and maps to a specific bridge in the OVSDB
-bridge table of the associated OVSDB node. It contains the following attributes.
-  * *bridge-uuid* - UUID of the OVSDB bridge
-  * *bridge-name* - name of the OVSDB bridge
-  * *bridge-openflow-node-ref* - a reference (instance-identifier) of the OpenFlow node associated with this bridge
-  * *list protocol-entry* - the version of OpenFlow protocol to use with the OpenFlow controller
-  * *list controller-entry* - a list of controller-uuid and is-connected status of the OpenFlow controllers associated with this bridge
-  * *datapath-id* - the datapath ID associated with this bridge on the OVSDB node
-  * *datapath-type* - the datapath type of this bridge
-  * *fail-mode* - the OVSDB fail mode setting of this bridge
-  * *flow-node* - a reference to the flow node corresponding to this bridge
-  * *managed-by* - a reference to the ovsdb-node-augmentation (OVSDB node) that is managing this bridge
-  * *list bridge-external-ids* - a list of the key/value pairs in the bridge table external_ids column for this bridge
-  * *list bridge-other-configs* - a list of the key/value pairs in the bridge table other_config column for this bridge
-*ovsdb-termination-point-augmentation*::
-This augments the topology termination point model.  The OVSDB Southbound
-MD-SAL uses this model to represent both the OVSDB port and OVSDB interface for
-a given port/interface in the OVSDB schema.  It contains the following
-attributes.
-  * *port-uuid* - UUID of an OVSDB port row
-  * *interface-uuid* - UUID of an OVSDB interface row
-  * *name* - name of the port
-  * *interface-type* - the interface type
-  * *list options* - a list of port options
-  * *ofport* - the OpenFlow port number of the interface
-  * *ofport_request* - the requested OpenFlow port number for the interface
-  * *vlan-tag* - the VLAN tag value
-  * *list trunks* - list of VLAN tag values for trunk mode
-  * *vlan-mode* - the VLAN mode (e.g. access, native-tagged, native-untagged, trunk)
-  * *list port-external-ids* - a list of the key/value pairs in the port table external_ids column for this port
-  * *list interface-external-ids* - a list of the key/value pairs in the interface table external_ids column for this interface
-  * *list port-other-configs* - a list of the key/value pairs in the port table other_config column for this port
-  * *list interface-other-configs* - a list of the key/value pairs in the interface table other_config column for this interface
-
-==== Examples of OVSDB Southbound MD-SAL API
-
-===== Connect to an OVSDB Node
-This example RESTCONF command adds an OVSDB node object to the OVSDB
-Southbound configuration data store and attempts to connect to the OVSDB host
-located at the IP address 10.11.12.1 on TCP port 6640.
-
- POST http://<host>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
- Content-Type: application/json
- {
-   "node": [
-      {
-        "node-id": "ovsdb:HOST1",
-        "connection-info": {
-          "ovsdb:remote-ip": "10.11.12.1",
-          "ovsdb:remote-port": 6640
-        }
-      }
-   ]
- }
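For scripting, the same request body can be generated programmatically. This sketch only builds the JSON payload shown above; the HTTP transport (curl, urllib, a RESTCONF client) is left to the caller, and the function name is hypothetical.

```python
import json

def ovsdb_node_body(node_id, remote_ip, remote_port=6640):
    # JSON body for POSTing a new OVSDB node to the Southbound
    # configuration data store, matching the RESTCONF example above.
    return json.dumps({"node": [{
        "node-id": node_id,
        "connection-info": {
            "ovsdb:remote-ip": remote_ip,
            "ovsdb:remote-port": remote_port,
        },
    }]})
```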
-
-===== Query the OVSDB Southbound Configuration MD-SAL
-Following on from the previous example, if the OVSDB Southbound configuration
-MD-SAL is queried, the RESTCONF command and the resulting reply is similar
-to the following example.
-
- GET http://<host>:8080/restconf/config/network-topology:network-topology/topology/ovsdb:1/
- Application/json data in the reply
- {
-   "topology": [
-     {
-       "topology-id": "ovsdb:1",
-       "node": [
-         {
-           "node-id": "ovsdb:HOST1",
-           "ovsdb:connection-info": {
-             "remote-port": 6640,
-             "remote-ip": "10.11.12.1"
-           }
-         }
-       ]
-     }
-   ]
- }
-
-// ==== Query the OVSDB Southbound Operational MD-SAL
-// If the previous example POST command is successful in connecting to the OVSDB
-// node, then eventually the OVSDB Southbound operational MD-SAL is populated
-// with information received in an OVSDB update message from the OVSDB node.  The
-// RESTCONF query and resulting reply is similar to the following example.
-// 
-//  http://<host>:8080/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
-// 
-//  Application/json data in the reply
-//  TBD - things not working well at time of writing
-// 
-// 
-// 
-// ==== Add a bridge
-// TBD
-// 
-// ==== Add a port
-// TBD
-// 
-// ==== Set attributes
-// TBD
-// 
-// ==== Delete examples
-// TBD
-
-==== Reference Documentation
-http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf[Openvswitch schema]
index a9d9eb3bc70179cc5ffcdc83b14398042a3846f1..19b761c0a5ebe73c923e3b29c3a0fa2632e18f28 100644 (file)
@@ -1,327 +1,3 @@
 == PacketCable Developer Guide
 
-[[pcmm-specification]]
-=== PCMM Specification
-
-http://www.cablelabs.com/specification/packetcable-multimedia-specification[PacketCable™
- Multimedia Specification]
-
-[[system-overview]]
-=== System Overview
-
-These components introduce a DOCSIS QoS Service Flow management using
-the PCMM protocol. The driver component is responsible for the
-PCMM/COPS/PDP functionality required to service requests from
-PacketCable Provider and FlowManager. Requests are transposed into PCMM
-Gate Control messages and transmitted via COPS to the CCAP/CMTS. This plugin
-adheres to the PCMM/COPS/PDP functionality defined in the CableLabs
-specification. The PacketCable solution is an MD-SAL compliant component.
-
-[[packetcable-components]]
-=== PacketCable Components
-
-The packetcable maven project comprises several modules.
-
-
-[options="header"]
-|=======================
-|Bundle                    |Description
-|packetcable-driver        |A common module that contains the COPS stack and
-                            manages all connections to CCAPs/CMTSes.
-|packetcable-emulator      |A basic CCAP emulator to facilitate testing the
-                            plugin when no physical CCAP is available.
-|packetcable-policy-karaf  |Generates a Karaf distribution with a config that
-                            loads all the packetcable features at runtime.
-|packetcable-policy-model  |Contains the YANG information model.
-|packetcable-policy-server |Provider hosts the model processing, RESTCONF,
-                            and API implementation.
-|=======================
-
-
-[[logging-levels]]
-==== Setting Logging Levels
-From the Karaf console
-
-    log:set <LEVEL> (<PACKAGE>|<BUNDLE>)
-    Example
-    log:set DEBUG org.opendaylight.packetcable.packetcable-policy-server
-
-[[tools-for-testing]]
-=== Tools for Testing
-
-[[postman]]
-==== Postman REST client for Chrome
-
-https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en[Install
-the Chrome extension]
-
-https://git.opendaylight.org/gerrit/gitweb?p=packetcable.git;a=tree;f=packetcable-policy-server/doc/restconf-samples[Download
-and import sample packetcable collection]
-
-[[view-rest-api]]
-==== View Rest API
-1. Install the `odl-mdsal-apidocs` feature from the karaf console.
-2. Open http://localhost:8181/apidoc/explorer/index.html (the default dev build user/pass is admin/admin).
-3. Navigate to the PacketCable section.
-
-[[yang-ide]]
-==== Yang-IDE
-Editing YANG can be done in any text editor, but Yang-IDE will help prevent mistakes.
-
-https://github.com/xored/yang-ide/wiki/Setup-and-build[Setup and Build
-Yang-IDE for Eclipse]
-
-[[using-wireshark-to-trace-pcmm]]
-=== Using Wireshark to Trace PCMM
-
-1.  To start Wireshark with privileges, issue the following command:
-+
-----------------
-sudo wireshark &
-----------------
-2.  Select the interface to monitor.
-3.  Use the filter to display only COPS messages by entering “cops” in
-the filter field.
-
-image:Screenshot8.png[width=500]
-
-[[debugging-and-verifying-dqos-gate-flows-on-the-cmts]]
-=== Debugging and Verifying DQoS Gate (Flows) on the CCAP/CMTS
-
-Below are some of the most useful CCAP/CMTS commands to verify flows have been
-enabled on the CMTS.
-
-[[cisco]]
-==== Cisco
-
-http://www.cisco.com/c/en/us/td/docs/cable/cmts/cmd_ref/b_cmts_cable_cmd_ref.pdf[Cisco
-CMTS Cable Command Reference]
-
-[[find-the-cable-modem]]
-==== Find the Cable Modem
-
------------------------------------------------------------------------------------
-10k2-DSG#show cable modem
-                                                                                  D
-MAC Address    IP Address      I/F           MAC           Prim RxPwr  Timing Num I
-                                             State         Sid  (dBmv) Offset CPE P
-0010.188a.faf6 0.0.0.0         C8/0/0/U0     offline       1    0.00   1482   0   N
-74ae.7600.01f3 10.32.115.150   C8/0/10/U0    online        1    -0.50  1431   0   Y
-0010.188a.fad8 10.32.115.142   C8/0/10/UB    w-online      2    -0.50  1507   1   Y
-000e.0900.00dd 10.32.115.143   C8/0/10/UB    w-online      3    1.00   1677   0   Y
-e86d.5271.304f 10.32.115.168   C8/0/10/UB    w-online      6    -0.50  1419   1   Y
------------------------------------------------------------------------------------
-
-[[show-pcmm-plugin-connection]]
-==== Show PCMM Plugin Connection
-
-----------------------------------------------------------------------------
-10k2-DSG#show packetcabl ?
-  cms     Gate Controllers connected to this PacketCable client
-  event   Event message server information
-  gate    PacketCable gate information
-  global  PacketCable global information
-
-10k2-DSG#show packetcable cms
-GC-Addr        GC-Port  Client-Addr    COPS-handle  Version PSID Key PDD-Cfg
-
-
-10k2-DSG#show packetcable cms
-GC-Addr        GC-Port  Client-Addr    COPS-handle  Version PSID Key PDD-Cfg
-10.32.0.240    54238    10.32.15.3     0x4B9C8150/1    4.0   0    0   0   
-----------------------------------------------------------------------------
-
-[[show-cops-messages]]
-==== Show COPS Messages
-
-------------------
-debug cops details
-------------------
-
-[[use-cm-mac-address-to-list-service-flows]]
-==== Use CM Mac Address to List Service Flows
-
-------------------------------------------------------------------------------------
-10k2-DSG#show cable modem    
-                                                                                  D
-MAC Address    IP Address      I/F           MAC           Prim RxPwr  Timing Num I
-                                             State         Sid  (dBmv) Offset CPE P
-0010.188a.faf6 ---             C8/0/0/UB     w-online      1    0.50   1480   1   N
-74ae.7600.01f3 10.32.115.150   C8/0/10/U0    online        1    -0.50  1431   0   Y
-0010.188a.fad8 10.32.115.142   C8/0/10/UB    w-online      2    -0.50  1507   1   Y
-000e.0900.00dd 10.32.115.143   C8/0/10/UB    w-online      3    0.00   1677   0   Y
-e86d.5271.304f 10.32.115.168   C8/0/10/UB    w-online      6    -0.50  1419   1   Y
-
-
-10k2-DSG#show cable modem 000e.0900.00dd service-flow
-                                                 
-
-SUMMARY:
-MAC Address    IP Address      Host          MAC           Prim  Num Primary    DS
-                               Interface     State         Sid   CPE Downstream RfId
-000e.0900.00dd 10.32.115.143   C8/0/10/UB    w-online      3     0   Mo8/0/2:1  2353
-
-
-Sfid  Dir Curr  Sid   Sched  Prio MaxSusRate  MaxBrst     MinRsvRate  Throughput 
-          State       Type
-23    US  act   3     BE     0    0           3044        0           39         
-30    US  act   16    BE     0    500000      3044        0           0          
-24    DS  act   N/A   N/A    0    0           3044        0           17         
-
-
-
-UPSTREAM SERVICE FLOW DETAIL:
-
-SFID  SID   Requests   Polls      Grants     Delayed    Dropped    Packets   
-                                             Grants     Grants
-23    3     784        0          784        0          0          784       
-30    16    0          0          0          0          0          0         
-
-
-DOWNSTREAM SERVICE FLOW DETAIL:
-
-SFID  RP_SFID QID    Flg Policer               Scheduler             FrwdIF    
-                         Xmits      Drops      Xmits      Drops
-24    33019   131550     0          0          777        0          Wi8/0/2:2
-
-Flags Legend:
-$: Low Latency Queue (aggregated)
-~: CIR Queue
-------------------------------------------------------------------------------------
-
-[[deleting-a-pcmm-gate-message-from-the-cmts]]
-==== Deleting a PCMM Gate Message from the CMTS
-
-------------------------------------------
-10k2-DSG#test cable dsd  000e.0900.00dd 30
-------------------------------------------
-
-[[find-service-flows]]
-==== Find service flows
-
-All gate controllers currently connected to the PacketCable client are
-displayed
-
-------------------------------------------------------
-show cable modem 00:11:22:33:44:55 service flow   ????
-show cable modem
-------------------------------------------------------
-
-[[debug-and-display-pcmm-gate-messages]]
-==== Debug and display PCMM Gate messages
-
-------------------------------
-debug packetcable gate control
-debug packetcable gate events
-show packetcable gate summary
-show packetcable global
-show packetcable cms
-------------------------------
-
-[[debug-cops-messages]]
-==== Debug COPS messages
-
------------------------------
-debug cops detail
-debug packetcable cops
-debug cable dynamic_qos trace
------------------------------
-
-// [[arris]]
-// ==== Arris
-//
-// Pending
-
-[[integration-verification]]
-=== Integration Verification
-
-Check out the integration project and perform regression tests.
-
---------------------------------------------------------------------------
-git clone ssh://${ODL_USERNAME}@git.opendaylight.org:29418/integration.git
-git clone https://git.opendaylight.org/gerrit/integration.git
---------------------------------------------------------------------------
-
-1.  Check and edit the
-integration/features/src/main/resources/features.xml and follow the
-directions there.
-2.  Check and edit the integration/features/pom.xml and add a dependency
-for your feature file
-3.  Build integration/features and debug
-
-`  mvn clean install`
-
-Test your feature in the integration/distributions/extra/karaf/
-distribution
-
------------------------------------------
-cd integration/distributions/extra/karaf/
-mvn clean install
-cd target/assembly/bin
-./karaf
------------------------------------------
-
-[[service-wrapper]]
-==== service-wrapper
-
-Install http://karaf.apache.org/manual/latest/users-guide/wrapper.html
-
---------------------------------------------------------------------------------------------------------
-opendaylight-user@root>feature:install service-wrapper
-opendaylight-user@root>wrapper:install --help
-DESCRIPTION
-        wrapper:install
-
-Install the container as a system service in the OS.
-
-SYNTAX
-        wrapper:install [options]
-
-OPTIONS
-        -d, --display
-                The display name of the service.
-                (defaults to karaf)
-        --help
-                Display this help message
-        -s, --start-type
-                Mode in which the service is installed. AUTO_START or DEMAND_START (Default: AUTO_START)
-                (defaults to AUTO_START)
-        -n, --name
-                The service name that will be used when installing the service. (Default: karaf)
-                (defaults to karaf)
-        -D, --description
-                The description of the service.
-                (defaults to )
-
-opendaylight-user@root> wrapper:install
-Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-wrapper
-Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-service
-Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/etc/karaf-wrapper.conf
-Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/libwrapper.so
-Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/karaf-wrapper.jar
-Creating file: /home/user/odl/distribution-karaf-0.3.0-Lithium/lib/karaf-wrapper-main.jar
-
-Setup complete.  You may wish to tweak the JVM properties in the wrapper configuration file:
-/home/user/odl/distribution-karaf-0.3.0-Lithium/etc/karaf-wrapper.conf
-before installing and starting the service.
-
-
-Ubuntu/Debian Linux system detected:
-  To install the service:
-    $ ln -s /home/user/odl/distribution-karaf-0.3.0-Lithium/bin/karaf-service /etc/init.d/
-
-  To start the service when the machine is rebooted:
-    $ update-rc.d karaf-service defaults
-
-  To disable starting the service when the machine is rebooted:
-    $ update-rc.d -f karaf-service remove
-
-  To start the service:
-    $ /etc/init.d/karaf-service start
-
-  To stop the service:
-    $ /etc/init.d/karaf-service stop
-
-  To uninstall the service :
-    $ rm /etc/init.d/karaf-service
---------------------------------------------------------------------------------------------------------
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/packetcable-developer-guide.html
diff --git a/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-classifier-dev.adoc b/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-classifier-dev.adoc
deleted file mode 100644 (file)
index 4eefb10..0000000
+++ /dev/null
@@ -1,54 +0,0 @@
-=== SFC Classifier Control and Data Plane Developer Guide
-
-==== Overview
-A description of the classifier can be found in: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/
-
-Classifier manages everything from starting the packet listener to creation (and removal) of appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is *available only on Linux* as it leverages *NetfilterQueue*, which provides access to packets matched by an *iptables* rule. The classifier requires *root privileges* to operate.
-
-So far it is capable of processing ACL for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.
-
-==== Classifier Architecture
-The Python code is located in the project repository at sfc-py/common/classifier.py.
-
-NOTE: classifier assumes that Rendered Service Path (RSP) *already exists* in ODL when an ACL referencing it is obtained
-
-.How it works:
-. sfc_agent receives an ACL and passes it to the classifier for processing
-. the RSP (its SFF locator) referenced by the ACL is requested from ODL
-. if the RSP exists in ODL, then ACL-based iptables rules for it are applied
-
-After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.
-
-Rules are created using the appropriate iptables command. If the Access Control Entry (ACE) rule is MAC-address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4-address related, only iptables rules are issued; likewise, IPv6-related rules use only ip6tables.
-
-NOTE: iptables *raw* table contains all created rules
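As a hedged sketch (the function and command layout are illustrative, not the actual sfc-py implementation), the ACE-to-rule translation described above could look like:

```python
def rule_commands(ace):
    """Return iptables/ip6tables argument vectors for one Access Control
    Entry (ACE).  A MAC-based ACE yields both an iptables and an ip6tables
    rule; IPv4/IPv6 ACEs yield only the matching tool.  All rules go to the
    *raw* table and hand matched packets to NFQUEUE for classification.
    (Illustrative sketch only; not the exact sfc-py command layout.)
    """
    def cmd(tool, match):
        return ([tool, '-t', 'raw', '-A', ace['chain']] + match +
                ['-j', 'NFQUEUE', '--queue-num', '2'])

    if 'mac' in ace:
        match = ['-m', 'mac', '--mac-source', ace['mac']]
        return [cmd('iptables', match), cmd('ip6tables', match)]
    if 'ipv4' in ace:
        return [cmd('iptables', ['-s', ace['ipv4']])]
    if 'ipv6' in ace:
        return [cmd('ip6tables', ['-s', ace['ipv6']])]
    return []
```

The returned argument vectors would then be executed (with root privileges) via something like `subprocess.call`.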
-
-Information regarding already registered RSP(s) is stored in an internal data-store, which is represented as a dictionary:
-
-       {rsp_id: {'name': <rsp_name>,
-                     'chains': {'chain_name': (<ipv>,),
-                                ...
-                                },
-                     'sff': {'ip': <ip>,
-                             'port': <port>,
-                             'starting-index': <starting-index>,
-                             'transport-type': <transport-type>
-                             },
-                     },
-       ...
-       }
-
-.Where
-    * `name`: name of the RSP
-    * `chains`: dictionary of iptables chains related to the RSP with information about IP version for which the chain exists
-    * `SFF`: SFF forwarding parameters
-        - `ip`: SFF IP address
-        - `port`: SFF port
-        - `starting-index`: index given to packet at first RSP hop
-        - `transport-type`: encapsulation protocol
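The internal data-store described above can be sketched in Python as follows (the helper functions are illustrative; only the dictionary layout is taken from the text):

```python
# Illustrative in-memory RSP data-store mirroring the dictionary layout above.
rsp_store = {}

def register_rsp(rsp_id, name, sff):
    """Record an RSP together with its SFF forwarding parameters."""
    rsp_store[rsp_id] = {'name': name, 'chains': {}, 'sff': sff}

def add_chain(rsp_id, chain_name, ip_versions):
    """Remember which iptables chains (and for which IP versions) exist
    for an RSP."""
    rsp_store[rsp_id]['chains'][chain_name] = tuple(ip_versions)

# Example entry; all values are made up for illustration.
register_rsp(1, 'RSP1', {'ip': '10.0.0.1', 'port': 6633,
                         'starting-index': 255, 'transport-type': 'vxlan-gpe'})
add_chain(1, 'sfc-rsp1', ('IPv4',))
```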
-
-==== Key APIs and Interfaces
-This feature exposes an API to configure the classifier (corresponds to service-function-classifier.yang)
-
-==== API Reference Documentation
-See: sfc-model/src/main/yang/service-function-classifier.yang
diff --git a/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-load-balance-dev.adoc b/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-load-balance-dev.adoc
deleted file mode 100644 (file)
index 650de8e..0000000
+++ /dev/null
@@ -1,49 +0,0 @@
-=== Service Function Load Balancing Developer Guide
-
-==== Overview
-The SFC Load-Balancing feature implements load balancing of Service Functions, rather than a one-to-one mapping between a Service Function Forwarder and a Service Function.
-
-==== Load Balancing Architecture
-Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. 
-A Service Path can only be defined using SFGs or SFs, but not a combination of both.
-
-Relevant objects in the YANG model are as follows:
-
-1. Service-Function-Group-Algorithm:
-
-       Service-Function-Group-Algorithms {
-               Service-Function-Group-Algorithm {
-                       String name
-                       String type
-               }
-       }
-
-       Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
-       
-2. Service-Function-Group:
-
-       Service-Function-Groups {
-               Service-Function-Group {
-                       String name
-                       String serviceFunctionGroupAlgorithmName
-                       String type
-                       String groupId
-                       Service-Function-Group-Element {
-                               String service-function-name
-                               int index
-                       }
-               }
-       }
-
-3. ServiceFunctionHop: holds a reference to the name of an SFG (or SF)
-
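A minimal Python sketch of these model objects and a SELECT-style member choice (class and method names here are illustrative, not the actual SFC Java API):

```python
from dataclasses import dataclass, field

@dataclass
class SfgElement:
    service_function_name: str
    index: int

@dataclass
class ServiceFunctionGroup:
    name: str
    algorithm: str               # ALL, SELECT, INDIRECT or FAST_FAILURE
    elements: list = field(default_factory=list)

    def pick(self, counter):
        """SELECT-style round-robin over the group's members (illustrative
        interpretation of the SELECT algorithm type)."""
        ordered = sorted(self.elements, key=lambda e: e.index)
        return ordered[counter % len(ordered)].service_function_name

sfg = ServiceFunctionGroup('fw-group', 'SELECT',
                           [SfgElement('fw-1', 0), SfgElement('fw-2', 1)])
```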
-==== Key APIs and Interfaces
-This feature enhances the existing SFC API.
-
-REST API commands include:
-
-* For Service Function Group (SFG): read existing SFG, write new SFG, delete existing SFG, add Service Function (SF) to SFG, and delete SF from SFG
-* For Service Function Group Algorithm (SFG-Alg): read, write, delete
-
-Bundle providing the REST API: sfc-sb-rest
-
-* Service Function Groups and Algorithms are defined in: sfc-sfg and sfc-sfg-alg
-* Relevant JAVA API: SfcProviderServiceFunctionGroupAPI, SfcProviderServiceFunctionGroupAlgAPI
\ No newline at end of file
diff --git a/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-ovs-dev.adoc b/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-ovs-dev.adoc
deleted file mode 100644 (file)
index be1f1d8..0000000
+++ /dev/null
@@ -1,31 +0,0 @@
-=== SFC-OVS Plugin
-
-==== Overview
-SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices.
-Integration is realized through mapping of SFC objects (like SF, SFF,
-Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface).
-The mapping takes care of automatic instantiation (setup) of corresponding object
-whenever its counterpart is created. For example, when a new SFF is created,
-the SFC-OVS plugin will create a new OVS bridge and when a new OVS Bridge is
-created, the SFC-OVS plugin will create a new SFF.
-
-==== SFC-OVS Architecture
-SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information
-from/to OVS devices. The core functionality consists of two types of mapping:
-
-a. mapping from OVS to SFC
-** OVS Bridge is mapped to SFF
-** OVS TerminationPoints are mapped to SFF DataPlane locators
-
-b. mapping from SFC to OVS
-** SFF is mapped to OVS Bridge
-** SFF DataPlane locators are mapped to OVS TerminationPoints
-
-.SFC <--> OVS mapping flow diagram
-image::sfc/sfc-ovs-architecture.png[width=500]
-
-==== Key APIs and Interfaces
-* SFF to OVS mapping API (methods to convert SFF object to OVS Bridge
-and OVS TerminationPoints)
-* OVS to SFF mapping API (methods to convert OVS Bridge and OVS TerminationPoints
-to SFF object)
diff --git a/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sb-rest-dev.adoc b/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sb-rest-dev.adoc
deleted file mode 100644 (file)
index 58745e7..0000000
+++ /dev/null
@@ -1,55 +0,0 @@
-=== SFC Southbound REST Plugin
-
-==== Overview
-The Southbound REST Plugin is used to send configuration from DataStore down to
-network devices supporting a REST API (i.e. they have a configured REST URI).
-It supports POST/PUT/DELETE operations, which are triggered accordingly by
-changes in the SFC data stores.
-
-.In its current state it listens to changes in these SFC data stores:
-* Access Control List (ACL)
-* Service Classifier Function (SCF)
-* Service Function (SF)
-* Service Function Group (SFG)
-* Service Function Schedule Type (SFST)
-* Service Function Forwarder (SFF)
-* Rendered Service Path (RSP)
-
-==== Southbound REST Plugin Architecture
-.The Southbound REST Plugin is built from three main components:
-. *listeners* - used to listen on changes in the SFC data stores
-. *JSON exporters* - used to export JSON-encoded data from binding-aware data
-store objects
-. *tasks* - used to collect REST URIs of network devices and to send JSON-encoded
-data down to these devices
-
-.Southbound REST Plugin Architecture diagram
-image::sfc/sb-rest-architecture.png[width=500]
-
-==== Key APIs and Interfaces
-The plugin provides a Southbound REST API for listening REST devices. It supports
-POST/PUT/DELETE operations. Each operation (with corresponding JSON-encoded data) is sent
-to a unique REST URL belonging to a certain datatype.
-
-.The URLs are as follows:
-* Access Control List (ACL):
-+http://<host>:<port>/config/ietf-acl:access-lists/access-list/+
-* Service Function (SF):
-+http://<host>:<port>/config/service-function:service-functions/service-function/+
-* Service Function Group (SFG):
-+http://<host>:<port>/config/service-function:service-function-groups/service-function-group/+
-* Service Function Schedule Type (SFST):
-+http://<host>:<port>/config/service-function-scheduler-type:service-function-scheduler-types/service-function-scheduler-type/+
-* Service Function Forwarder (SFF):
-+http://<host>:<port>/config/service-function-forwarder:service-function-forwarders/service-function-forwarder/+
-* Rendered Service Path (RSP):
-+http://<host>:<port>/operational/rendered-service-path:rendered-service-paths/rendered-service-path/+
-
-Therefore, network devices willing to receive REST messages must listen on
-these REST URLs.
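A small illustrative helper for building these URLs (the paths for SF, SFF and RSP are copied from the list above; the function itself is not part of the plugin):

```python
def sb_rest_url(host, port, datatype, datastore='config'):
    """Build the Southbound REST URL for a given SFC datatype.

    Paths follow the URL list above; RSP lives under the operational
    datastore, the others under config.
    """
    paths = {
        'sf': 'service-function:service-functions/service-function/',
        'sff': 'service-function-forwarder:service-function-forwarders/'
               'service-function-forwarder/',
        'rsp': 'rendered-service-path:rendered-service-paths/'
               'rendered-service-path/',
    }
    if datatype == 'rsp':
        datastore = 'operational'
    return 'http://%s:%s/%s/%s' % (host, port, datastore, paths[datatype])
```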
-
-[NOTE]
-A Service Classifier Function (SCF) URL does not exist, because the SCF is considered
-one of the network devices willing to receive REST messages. However, there
-is a listener hooked on the SCF data store which triggers POST/PUT/DELETE
-operations on the ACL object, because the ACL is referenced in +service-function-classifier.yang+
diff --git a/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sf-monitoring-dev.adoc b/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sf-monitoring-dev.adoc
deleted file mode 100644 (file)
index 0b74d41..0000000
+++ /dev/null
@@ -1,19 +0,0 @@
-=== Service Function Monitoring
-
-==== SF Monitoring Overview
-TBD
-
-==== SF Monitoring Architecture
-TBD
-
-==== SF Monitoring YANG model
-TBD
-
-==== Key APIs and Interfaces
-TBD
-
-===== API Group 1
-TBD
-
-==== API Reference Documentation
-TBD
diff --git a/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sf-scheduler-dev.adoc b/manuals/developer-guide/src/main/asciidoc/sfc/odl-sfc-sf-scheduler-dev.adoc
deleted file mode 100644 (file)
index d13ad5b..0000000
+++ /dev/null
@@ -1,53 +0,0 @@
-=== Service Function Scheduling Algorithms
-
-==== Overview
-When creating the Rendered Service Path (RSP), earlier releases of SFC
-chose the first available service function from a list of service function
-names. A new API now allows developers to implement their own scheduling
-algorithms when creating the RSP. Four scheduling algorithms (Random, Round
-Robin, Load Balance and Shortest Path) are provided as examples for the API
-definition. This guide gives a brief introduction to developing service
-function scheduling algorithms based on the current extensible framework.
-
-==== Architecture
-The following figure illustrates the service function selection framework and
-algorithms.
-
-.SF Scheduling Algorithm framework Architecture
-image::sfc-sf-selection-arch.png["SF Selection Architecture",width=500]
-
-The YANG Model defines the Service Function Scheduling Algorithm type
-identities and how they are stored in the MD-SAL data store for the scheduling
-algorithms.
-
-The MD-SAL data store stores all information for the scheduling algorithms,
-including their types, names, and status.
-
-The API provides basic methods to manage the information stored in the
-MD-SAL data store, like putting new items into it, getting all scheduling
-algorithms, etc.
-
-The RESTCONF API provides methods to manage the information stored in the MD-SAL
-data store through RESTful calls.
-
-The Service Function Chain Renderer gets the enabled scheduling algorithm type,
-and schedules the service functions with the corresponding scheduling algorithm implementation.
-
-==== Key APIs and Interfaces
-To develop a new Service Function Scheduling Algorithm, add a new class
-that extends the base scheduler class SfcServiceFunctionSchedulerAPI and
-implements the abstract function:
-
-+public List<String> scheduleServiceFuntions(ServiceFunctionChain chain, int serviceIndex)+.
-
-.input
-* *+ServiceFunctionChain chain+*: the chain which will be rendered
-* *+int serviceIndex+*: the initial service index for this rendered service path
-
-.output
-* *+List<String>+*: a list of service function names scheduled by the
-Service Function Scheduling Algorithm.
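As a language-neutral sketch of the contract above (the real extension point is the Java class SfcServiceFunctionSchedulerAPI; everything below is illustrative), a round-robin scheduler could look like:

```python
class RoundRobinScheduler:
    """Sketch of a scheduler: pick one SF name per SF type in the chain.

    Mirrors the schedule-service-functions contract in spirit only; the
    actual extension point is the Java base class
    SfcServiceFunctionSchedulerAPI, and the candidate pool here is a
    made-up input for demonstration.
    """
    def __init__(self):
        self._counters = {}   # SF type -> next candidate offset

    def schedule(self, chain, sf_names_by_type):
        """Return one SF name per SF type in the chain, rotating through
        the candidates for each type on successive calls."""
        result = []
        for sf_type in chain:
            candidates = sf_names_by_type[sf_type]
            i = self._counters.get(sf_type, 0)
            result.append(candidates[i % len(candidates)])
            self._counters[sf_type] = i + 1
        return result
```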
-
-==== API Reference Documentation
-Please refer to the API docs generated in mdsal-apidocs.
index d5d3b2aee635606fdfd84aaf601b194f68f0d55d..62a4ca87bd45313cd5e1f8c4fb22fc9b1acf47f3 100644 (file)
@@ -1,17 +1,3 @@
 == Service Function Chaining
 
-include::sfc_overview.adoc[SFC Overview]
-
-include::odl-sfc-classifier-dev.adoc[SFC Classifier]
-
-include::odl-sfc-ovs-dev.adoc[SFC OVS]
-
-include::odl-sfc-sb-rest-dev.adoc[SFC SouthBound REST plugin]
-
-include::odl-sfc-load-balance-dev.adoc[Service Function Grouping and Load Balancing developer guide]
-
-include::odl-sfc-sf-scheduler-dev.adoc[Service Function selection scheduler]
-
-// Commented out because it has no content
-//include::odl-sfc-sf-monitoring-dev.adoc[Service Function Monitoring]
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/alto-developer-guide.html
diff --git a/manuals/developer-guide/src/main/asciidoc/sfc/sfc_overview.adoc b/manuals/developer-guide/src/main/asciidoc/sfc/sfc_overview.adoc
deleted file mode 100644 (file)
index fc1f130..0000000
+++ /dev/null
@@ -1,15 +0,0 @@
-=== OpenDaylight Service Function Chaining (SFC) Overview
-
-OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then "stitched" together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network and an end-user application for defining such chains.
-
-.List of acronyms:
-* ACE - Access Control Entry
-* ACL - Access Control List
-* SCF - Service Classifier Function
-* SF - Service Function
-* SFC - Service Function Chain
-* SFF - Service Function Forwarder
-* SFG - Service Function Group
-* SFP - Service Function Path
-* RSP - Rendered Service Path
-* NSH - Network Service Header
diff --git a/manuals/developer-guide/src/main/asciidoc/snbi/odl-snbi-dev.adoc b/manuals/developer-guide/src/main/asciidoc/snbi/odl-snbi-dev.adoc
new file mode 100644 (file)
index 0000000..794f91e
--- /dev/null
@@ -0,0 +1,124 @@
+== SNBI Developer Guide
+
+=== Overview
+Key distribution in a scaled network has always been a challenge. Typically, operators must perform some manual key distribution process before secure communication is possible between a set of network devices. The Secure Network Bootstrapping Infrastructure (SNBI) project securely and automatically brings up an integrated set of network devices and controllers, simplifying the process of bootstrapping network devices with the keys required for secure communication. SNBI enables connectivity to the network devices by assigning unique IPv6 addresses and bootstrapping devices with the required keys. Admission control of devices into a specific domain is achieved using a whitelist of authorized devices.
+
+=== SNBI Architecture
+At a high level, SNBI architecture consists of the following components:
+
+* SNBI Registrar
+* SNBI Forwarding Element (FE)
+
+.SNBI Architecture Diagram
+image::snbi/snbi_arch.png["SNBI Architecture",width=500]
+
+==== SNBI Registrar
+The registrar is a device in the network that validates devices against a whitelist and delivers device domain certificates. The registrar includes the following:
+
+* RESTCONF API for Domain Whitelist Configuration
+* Certificate Authority
+* SNBI Southbound Plugin
+
+.RESTCONF API for Domain Whitelist Configuration:
+RESTCONF APIs are used to configure the whitelist set of devices in the registrar on the controller. The registrar interacts with the MD-SAL to obtain the whitelist set of devices and to validate a device trying to join a domain. Furthermore, it is possible to run multiple registrar instances, one pertaining to each domain.
+
+.SNBI Southbound Plugin:
+The Southbound Plugin implements the protocol state machine necessary to exchange device identifiers, and deliver certificates. The southbound plugin interacts with MD-SAL and the certificate authority to validate and create device domain certificates. The device domain certificate thus generated could be used to prove the validity of the devices within the domain.
+
+.Certificate Authority:
+A simple certificate authority is implemented using the Bouncy Castle package. The Certificate Authority creates the certificates from the device CSR requests received from the devices. The certificates thus generated are delivered to the devices using the Southbound Plugin, as discussed earlier.
+
+==== SNBI Forwarding Element (FE)
+The SNBI Forwarding Element runs on Linux machines that are to join the domain. The Device UDI (Universal Device Identifier) could be derived from a multitude of parameters in the host machine, but most of the parameters derived from the host are either known ahead of time or don't remain constant across reloads. Therefore, each SNBI FE should be configured explicitly with a UDI that is already present in the device whitelist. The registrar service IP address must be provided to the first host (Forwarding Element) to be bootstrapped. As mentioned in the <<_host_configuration>> section, the registrar service IP address is *fd08::aaaa:bbbb:1*. The first Forwarding Element must be configured with this IPv6 address.
+
+The forwarding element must be installed or unpacked on a Linux host whose network layer traffic must be secured. The FE performs the following functions:
+
+* Neighbour Discovery
+* Bootstrapping with device domain certificates
+* Host Configuration
+
+===== Neighbour Discovery
+Neighbour Discovery (ND) is the first step in accommodating devices in a secure network. SNBI performs periodic neighbour discovery of SNBI agents by transmitting ND hello packets. The discovered devices are populated in an ND table. Neighbour Discovery is periodic and bidirectional. ND hello packets are transmitted every 10 seconds, and a 40-second refresh timer is set for each discovered neighbour. On expiry of the refresh timer, the Neighbour Adjacency is removed from the ND table, as it is no longer valid. The same SNBI neighbour may be discovered on multiple links; the expiry of a device on one link does not automatically remove the device entry from the ND table. The device UDI is exchanged in the ND keepalives.
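The timer behaviour described above can be sketched as follows (class and method names are illustrative, not the actual SNBI FE code):

```python
HELLO_INTERVAL = 10   # seconds between ND hello packets
REFRESH_TIMEOUT = 40  # seconds of silence before a neighbour expires

class NeighbourTable:
    """Sketch of the per-link ND table described above."""
    def __init__(self):
        self._table = {}   # (udi, link) -> last-seen timestamp

    def hello_received(self, udi, link, now):
        """A hello refreshes the adjacency for that UDI on that link."""
        self._table[(udi, link)] = now

    def expire(self, now):
        """Drop adjacencies whose refresh timer has elapsed; expiry on one
        link does not remove the same UDI learned on another link."""
        dead = [k for k, seen in self._table.items()
                if now - seen > REFRESH_TIMEOUT]
        for k in dead:
            del self._table[k]

    def neighbours(self):
        return sorted({udi for udi, _ in self._table})
```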
+
+===== Bootstrapping with Device Domain Certificates
+Bootstrapping a device involves the following sequential steps:
+
+* Authenticate a device using device identifier (UDI-Universal Device Identifier or SUDI-Secure Universal Device Identifier) - The device
+identifier is exchanged in the hello messages.
+* Allocate the appropriate device ID and IPv6 address to uniquely identify the device in the network
+* Allocate the required keys by installing a Device Domain Certificate
+* Accommodate the device in the domain
+
+A device which is already bootstrapped acts as a proxy to bootstrap the new device which is trying to join the domain.
+
+* Neighbour Invite phase - When a proxy device detects a new neighbour, a bootstrap connect message is initiated on behalf of the new device -- *NEIGHBOUR CONNECT* Msg. The message is sent to the registrar to authenticate the device UDI against the whitelist of devices. The source IPv6 address is the proxy IPv6 address and the destination IPv6 address is the registrar IPv6 address. The SNBI Registrar provides the appropriate device ID and IPv6 address to uniquely identify the device in the network and then invites the device to join the domain -- *NEIGHBOUR INVITE* Msg.
+
+* Neighbour Reject - If the Device UDI is not in the white list of devices, then the device is rejected and is not accepted into the domain. The proxy device just updates its DB with the reject information but still maintains the Neighbour relationship.
+
+* Neighbour Bootstrap phase - Once the new device gets a neighbour invite message, it tries to bootstrap itself by generating a key pair. The device generates a Certificate Signing Request (CSR) PKCS10 request and gets it signed by the CA running at the SNBI Registrar -- *BS REQ* Msg. Once the certificate is enrolled and signed by the CA, the generated X.509 certificate is returned to the new device to complete the bootstrap process -- *BS RESP* Msg.
+
+==== Host Configuration
+Host configuration involves configuring a host to create a secure overlay network: assigning an appropriate IPv6 address, setting up GRE tunnels, securing the tunnel traffic via IPsec, and enabling connectivity via a routing protocol. Docker is used to package all the required dependent software modules.
+
+.SNBI Bootstrap Process
+image::snbi/first_fe_bs.png["SNBI Bootstrap Process", width=500]
+
+* Interface configuration: The iproute2 package, which comes packaged by default in Linux distributions, is used to configure the required interface (snbi-fe) and assign the appropriate IPv6 address.
+* GRE Tunnel Creation: LinkLocal GRE tunnels are created to each of the discovered devices that are part of the domain. The GRE tunnels are used to create the overlay network for the domain.
+* Routing over the Overlay: To enable reachability of devices within the overlay network, a lightweight routing protocol is used. The routing protocol of choice is RPL (Routing Protocol for Low-Power and Lossy Networks). The routing protocol advertises the device domain IPv6 address over the overlay network. *Unstrung* is the open-source implementation of RPL and is packaged within the docker image. More details on Unstrung are available at http://unstrung.sandelman.ca/
+* IPsec: IPsec is used to secure any traffic routed over the tunnels. StrongSwan is used to encrypt traffic using IPsec. More details on StrongSwan are available at https://www.strongswan.org/
+
+==== Docker Image
+
+The SNBI Forwarding Element is packaged in a docker container available at this
+link: https://hub.docker.com/r/snbi/boron/.
+For more information on docker, refer to this link:
+https://docs.docker.com/linux/.
+
+To update an SNBI FE Daemon, build the image and copy the image to /home/snbi
+directory. When the docker image is run, it automatically generates a startup
+configuration file for the SNBI FE daemon. The startup configuration script is
+also available at /home/snbi.
+
+.SNBI Docker Image
+image::snbi/docker_snbi.png["SNBI Docker Image",width=500]
+
+
+=== Key APIs and Interfaces
+The only API that SNBI exposes is to configure the whitelist of devices for a domain.
+
+The POST method below configures a domain ("secure-domain") and a whitelist set of devices to be admitted to the domain.
+----
+{
+  "snbi-domain": {
+    "domain-name": "secure-domain",
+    "device-list": [
+      {
+        "list-name": "demo list",
+        "list-type": "white",
+        "active": true,
+        "devices": [
+          {
+            "device-id": "UDI-FirstFE"
+          },
+          {
+            "device-id": "UDI-dev1"
+          },
+          {
+            "device-id": "UDI-dev2"
+          }
+        ]
+      }
+     ]
+  }
+}
+----
+The associated device ID must be configured on the SNBI FE (see above).
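For illustration, the whitelist body above can be built programmatically (the helper below is an assumption for demonstration, not part of SNBI; only the JSON structure is taken from the sample):

```python
import json

def whitelist_payload(domain, device_ids, list_name='demo list'):
    """Build the snbi-domain whitelist body shown above.

    The dictionary structure mirrors the sample payload; the function
    itself and its parameter names are illustrative.
    """
    return {
        'snbi-domain': {
            'domain-name': domain,
            'device-list': [{
                'list-name': list_name,
                'list-type': 'white',
                'active': True,
                'devices': [{'device-id': d} for d in device_ids],
            }],
        }
    }

body = json.dumps(whitelist_payload('secure-domain',
                                    ['UDI-FirstFE', 'UDI-dev1', 'UDI-dev2']))
```

The resulting `body` would be POSTed to the registrar's RESTCONF endpoint.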
+
+
+=== API Reference Documentation
+
+See the generated RESTCONF API documentation at:
+http://localhost:8181/apidoc/explorer/index.html
+
+Look for the SNBI module to expand and see the various RESTCONF APIs.
diff --git a/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-aggregation-filtration-dev.adoc b/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-aggregation-filtration-dev.adoc
deleted file mode 100644 (file)
index 9df40d7..0000000
+++ /dev/null
@@ -1,76 +0,0 @@
-==== Chapter Overview
-The Topology Processing Framework allows the creation of aggregated topologies and filtered views over existing topologies. Currently, aggregation and filtration is supported for topologies that follow https://github.com/opendaylight/yangtools/blob/master/model/ietf/ietf-topology/src/main/yang/network-topology%402013-10-21.yang[network-topology], opendaylight-inventory or i2rs model. When a request to create an aggregated or filtered topology is received, the framework creates one listener per underlay topology. Whenever any specified underlay topology is changed, the appropriate listener is triggered with the change and the change is processed. Two types of correlations (functionalities) are currently supported:
-
-* Aggregation
-** Unification
-** Equality
-* Filtration
-
-==== Terminology
-We use the term underlay item (physical node) for items (nodes, links, termination-points) from underlay topologies and overlay item (logical node) for items from overlay topologies, regardless of whether those are actually physical network elements.
-
-==== Aggregation
-Aggregation is an operation which creates an aggregated item from two or more items in the underlay topology if the aggregation condition is fulfilled. Requests for aggregated topologies must specify a list of underlay topologies over which the overlay (aggregated) topology will be created and a target field in the underlay item that the framework will check for equality.
-
-===== Create Overlay Node
-First, each new underlay item is inserted into the proper topology store. Once the item is stored, the framework compares it (using the target field value) with all stored underlay items from underlay topologies. If there is a target-field match, a new overlay item is created containing pointers to all 'equal' underlay items. The newly created overlay item is also given new references to its supporting underlay items.
-
-.Equality case:
-If an item doesn't fulfill the equality condition with any other items, processing finishes after adding the item into topology store. It will stay there for future use, ready to create an aggregated item with a new underlay item, with which it would satisfy the equality condition.
-
-.Unification case:
-An overlay item is created for all underlay items, even those which don't fulfill the equality condition with any other items. This means that an overlay item is created for every underlay item, but for items which satisfy the equality condition, an aggregated item is created.
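The difference between the two aggregation modes can be sketched as follows (illustrative only; the framework's real listeners operate incrementally rather than in batch):

```python
def aggregate(items, target_field, mode='equality'):
    """Group underlay items by target-field value (sketch of the two modes).

    equality: only groups with 2+ matching items become overlay items.
    unification: every underlay item yields an overlay item; matching
    items share one aggregated overlay item.
    """
    groups = {}
    for item in items:
        groups.setdefault(item[target_field], []).append(item)
    if mode == 'equality':
        return [g for g in groups.values() if len(g) > 1]
    return list(groups.values())

# Made-up underlay nodes aggregated on their 'ip' field.
nodes = [{'id': 'n1', 'ip': '10.0.0.1'},
         {'id': 'n2', 'ip': '10.0.0.1'},
         {'id': 'n3', 'ip': '10.0.0.3'}]
```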
-
-===== Update Node
-Processing of updated underlay items depends on whether the target field has been modified. If yes, then:
-
-* if the underlay item belonged to some overlay item, it is removed from that item. Next, if the aggregation condition on the target field is satisfied, the item is inserted into another overlay item. If the condition isn't met then:
-** in equality case - the item will not be present in overlay topology.
-** in unification case - the item will create an overlay item with a single underlay item and this will be written into overlay topology.
-* if the item didn't belong to some overlay item, it is checked again for aggregation with other underlay items.
-
-===== Remove Node
-The underlay item is removed from the corresponding topology store and from its overlay item (if it belongs to one), and in this way it is also removed from the overlay topology.
-
-.Equality case:
-If there is only one underlay item left in the overlay item, the overlay item is removed.
-
-.Unification case:
-The overlay item is removed once it refers to no underlay item.
-
-==== Filtration
-Filtration is an operation which results in creation of overlay topology containing only items fulfilling conditions set in the topoprocessing request.
-
-===== Create Underlay Item
-If a newly created underlay item passes all filtrators and their conditions, then it is stored in topology store and a creation notification is delivered into topology manager. No operation otherwise.
-
-===== Update Underlay Item
-First, the updated item is checked for presence in topology store:
-
-// TODO: what do processUpdatedData and processCreatedData notifications actually cause to happen?
-* if it is present in topology store:
-** if it meets the filtering conditions, then processUpdatedData notification is triggered
-** else processRemovedData notification is triggered
-* if item isn't present in topology store
-** if item meets filtering conditions, then processCreatedData notification is triggered
-** else it is ignored
-
-===== Remove Underlay Item
-If an underlay node is supporting some overlay node, the overlay node is simply removed.
-
-===== Default Filtrator Types
-There are seven types of default filtrators defined in the framework:
-
-* IPv4-address filtrator - checks if specified field meets IPv4 address + mask criteria
-* IPv6-address filtrator - checks if specified field meets IPv6 address + mask criteria
-* Specific number filtrator - checks for specific number
-* Specific string filtrator - checks for specific string
-* Range number filtrator - checks if specified field is higher than provided minimum (inclusive) and lower than provided maximum (inclusive)
-* Range string filtrator - checks if specified field is alphabetically greater than provided minimum (inclusive) and alphabetically lower than provided maximum (inclusive)
-* Script filtrator - allows a user or application to implement their own filtrator
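As an illustrative sketch of one of these (names are assumptions, not the framework's Java API), a range-number filtrator with inclusive bounds:

```python
class RangeNumberFiltrator:
    """Sketch of the range-number filtrator: passes items whose field is
    between minimum and maximum, both inclusive, as described above."""
    def __init__(self, minimum, maximum):
        self.minimum = minimum
        self.maximum = maximum

    def passes(self, item, field_name):
        value = item.get(field_name)
        return value is not None and self.minimum <= value <= self.maximum

f = RangeNumberFiltrator(10, 20)
```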
-
-===== Register Custom Filtrator
-There might be some use case that cannot be achieved with the default filtrators. In these cases, the framework offers the possibility for a user or application to register a custom filtrator.
-
-==== Pre-Filtration / Filtration & Aggregation
-This feature was introduced in order to lower memory and performance demands. It is a combination of the filtration and aggregation operations. First, uninteresting items are filtered out and then aggregation is performed only on items that passed filtration. This way the framework saves on compute time. The PreAggregationFiltrator and TopologyAggregator share the same TopoStoreProvider (and thus topology store) which results in lower memory demands (as underlay items are stored only in one topology store - they aren't stored twice).
diff --git a/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-architecture-dev.adoc b/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-architecture-dev.adoc
deleted file mode 100644 (file)
index 0d2975b..0000000
+++ /dev/null
@@ -1,75 +0,0 @@
-==== Chapter Overview
-In this chapter we describe the architecture of the Topology Processing Framework. In the first part, we provide information about available features and basic class relationships. In the second part, we describe our model specific approach, which is used to provide support for different models.
-
-==== Basic Architecture
-The Topology Processing Framework consists of several Karaf features:
-
-* odl-topoprocessing-framework
-* odl-topoprocessing-inventory
-* odl-topoprocessing-network-topology
-* odl-topoprocessing-i2rs
-* odl-topoprocessing-inventory-rendering
-
-The feature odl-topoprocessing-framework contains the topoprocessing-api, topoprocessing-spi and topoprocessing-impl
-bundles. This feature is the core of the Topology Processing Framework and is required by all other features.
-
-* topoprocessing-api - contains correlation definitions and definitions required for rendering
-* topoprocessing-spi - entry point for topoprocessing service (start and close)
-* topoprocessing-impl - contains base implementations of handlers, listeners, aggregators and filtrators
-
-TopoProcessingProvider is the entry point for Topology Processing Framework. It requires a DataBroker instance. The DataBroker is needed for listener registration. There is also the TopologyRequestListener which listens on aggregated topology requests (placed into the configuration datastore) and UnderlayTopologyListeners which listen on underlay topology data changes (made in operational datastore). The TopologyRequestHandler saves toporequest data and provides a method for translating a path to the specified leaf. When a change in the topology occurs, the registered UnderlayTopologyListener processes this information for further aggregation and/or filtration. Finally, after an overlay topology is created, it is passed to the TopologyWriter, which writes this topology into operational datastore.
-
-.Class relationship
-image::topoprocessing/TopologyRequestHandler_classesRelationship.png[width=500]
-
-[1] TopologyRequestHandler instantiates TopologyWriter and TopologyManager. Then, according to the request, initializes either TopologyAggregator, TopologyFiltrator or LinkCalculator.
-
-[2] It creates as many instances of UnderlayTopologyListener as there are underlay topologies.
-
-[3] PhysicalNodes are created for relevant incoming nodes (those having node ID).
-
-[4a] It performs aggregation and creates logical nodes.
-
-[4b] It performs filtration and creates logical nodes.
-
-[4c] It performs link computation and creates links between logical nodes.
-
-[5] Logical nodes are put into a wrapper.
-
-[6] The wrapper is translated into the appropriate format and written into the datastore.
-
-==== Model Specific Approach
-The Topology Processing Framework consists of several modules and Karaf features, which provide support for different input models. Currently we support the network-topology, opendaylight-inventory and i2rs models. For each of these input models, the Topology Processing Framework has one module and one Karaf feature.
-
-===== How it works
-.User point of view:
-When you start the odl-topoprocessing-framework feature, the Topology Processing Framework starts without knowledge of how to work with any input model. To allow the Topology Processing Framework to process a given input model, you must install one (or more) model-specific features. Installing these features also starts the odl-topoprocessing-framework feature if it is not already running. These features inject the appropriate logic into the odl-topoprocessing-framework feature. From that point, the Topology Processing Framework is able to process the input models for which you installed features.
-
-.Developer point of view:
-The topoprocessing-impl module contains (among other things) classes and interfaces that are common to every model-specific topoprocessing module. These classes and interfaces are implemented and extended by classes in the particular model-specific modules.
-Model-specific modules also depend on the TopoProcessingProvider class in the topoprocessing-spi module. This dependency is injected during installation of the model-specific features in Karaf. When a model-specific feature is started, it calls the registerAdapters(adapters) method of the injected TopoProcessingProvider object. After this step, the Topology Processing Framework is able to use the registered model adapters to work with the input models.
-
-To achieve the described functionality, we created the ModelAdapter interface. It represents an installed feature and provides methods for creating the crucial structures specific to each model.
-
-.ModelAdapter interface
-image::topoprocessing/ModelAdapter.png[width=300]
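As a rough illustration, the adapter registration described above can be sketched as follows. This is hypothetical, simplified Java, not the actual ODL API: the real ModelAdapter factory methods return listener and translator instances rather than strings, and the real registry lives inside TopoProcessingProviderImpl.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified sketch of the ModelAdapter idea: each
// model-specific feature hands the framework a factory object for its
// crucial structures. Real ODL signatures differ.
public class AdapterRegistrySketch {

    /** Stand-in for the real ModelAdapter interface. */
    public interface ModelAdapter {
        String createUnderlayTopologyListener(); // factory methods return plain
        String createTopologyRequestListener();  // descriptions here for brevity
    }

    /** Stand-in for the registry filled by TopoProcessingProvider.registerAdapters(adapters). */
    private static final Map<String, ModelAdapter> ADAPTERS = new HashMap<>();

    public static void registerAdapters(Map<String, ModelAdapter> adapters) {
        ADAPTERS.putAll(adapters);
    }

    public static boolean supports(String model) {
        return ADAPTERS.containsKey(model);
    }

    public static void main(String[] args) {
        Map<String, ModelAdapter> adapters = new HashMap<>();
        adapters.put("network-topology", new ModelAdapter() {
            public String createUnderlayTopologyListener() { return "NT underlay listener"; }
            public String createTopologyRequestListener()  { return "NT request listener"; }
        });
        registerAdapters(adapters); // what a model-specific feature does on start
        System.out.println(supports("network-topology"));
    }
}
```

After registration, the framework can ask the adapter for model-specific listeners whenever a topology request names that input model.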
-
-===== Model Specific Features
-
-* odl-topoprocessing-network-topology - this feature contains the logic for working with the network-topology model
-* odl-topoprocessing-inventory - this feature contains the logic for working with the opendaylight-inventory model
-* odl-topoprocessing-i2rs - this feature contains the logic for working with the i2rs model
-
-==== Inventory Model Support
-The opendaylight-inventory model contains only nodes, termination points, and information regarding these structures. This model co-operates with the network-topology model, where other topology-related information is stored. This means that we have to handle two input models at once. To support the inventory model, the InventoryListener and NotificationInterConnector classes were introduced. Please see the flow diagrams below.
-
-.Network topology model
-image::topoprocessing/Network_topology_model_flow_diagram.png[width=500]
-
-.Inventory model
-image::topoprocessing/Inventory_model_listener_diagram.png[width=500]
-
-Here we can see the InventoryListener and NotificationInterConnector classes. InventoryListener listens for data changes in the inventory model and passes these changes, wrapped as an UnderlayItem, to NotificationInterConnector for further processing. This UnderlayItem does not contain node information - instead, it contains a leafNode (the node on which aggregation is based).
-The node information is stored in the topology model, where the UnderlayTopologyListener is registered as usual. This listener delivers the missing information.
-
-Then the NotificationInterConnector combines the two notifications into a complete UnderlayItem (no null values) and delivers this UnderlayItem for further processing (to the next TopologyOperator).
index 599ad3ee79455288cc0d8a76063c7ea9c0fb40c8..0316cf4fe87b31934f522a343b8dd85f4d75cc35 100644 (file)
@@ -1,27 +1,3 @@
 == Topology Processing Framework Developer Guide
 
-=== Overview
-The Topology Processing Framework allows developers to aggregate and filter topologies according to defined correlations. It also provides functionality you can use to build your own topology model by automating the translation from one model to another, for example from the opendaylight-inventory model to the network-topology model.
-
-=== Architecture
-include::odl-topoprocessing-architecture-dev.adoc[]
-
-=== Aggregation and Filtration
-include::odl-topoprocessing-aggregation-filtration-dev.adoc[]
-
-=== Link Computation
-include::odl-topoprocessing-link-computation-dev.adoc[]
-
-=== Wrapper, RPC Republishing, Writing Mechanism
-include::odl-topoprocessing-wrapper-rpc-writing-dev.adoc[]
-
-=== Topology Rendering Guide - Inventory Rendering
-include::odl-topoprocessing-inventory-rendering-dev.adoc[]
-
-=== Key APIs and Interfaces
-The basic provider class is TopoProcessingProvider which provides startup and shutdown
-methods. Otherwise, the framework communicates via requests and outputs stored
-in the MD-SAL datastores.
-
-=== API Reference Documentation
-You can find API examples on https://wiki.opendaylight.org/view/Topology_Processing_Framework:Developer_Guide:End_to_End_Example[this wiki page].
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/topology-processing-framework-developer-guide.html
diff --git a/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-inventory-rendering-dev.adoc b/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-inventory-rendering-dev.adoc
deleted file mode 100644 (file)
index 88a2e82..0000000
+++ /dev/null
@@ -1,401 +0,0 @@
-==== Chapter Overview
-In the most recent OpenDaylight release, the opendaylight-inventory model is marked as deprecated. To facilitate migration from it to the network-topology model, there were requests to render (translate) data from the inventory model (whether augmented or not) to another model for further processing. The Topology Processing Framework was extended to provide this functionality by implementing several rendering-specific classes. This chapter is a step-by-step guide on how to implement your own topology rendering, using our inventory rendering as an example.
-
-==== Use case
-For the purpose of this guide we are going to render the following augmented fields from the OpenFlow model:
-
-* from inventory node:
-** manufacturer
-** hardware
-** software
-** serial-number
-** description
-** ip-address
-* from inventory node-connector:
-** name
-** hardware-address
-** current-speed
-** maximum-speed
-
-We also want to preserve the node ID and termination-point ID from the opendaylight-topology-inventory model, which is the network-topology part of the inventory model.
-
-==== Implementation
-There are two ways to implement support for your specific topology rendering:
-
-* add a module to your project that depends on the Topology Processing Framework
-* add a module to the Topology Processing Framework itself
-
-Regardless, a successful implementation must complete all of the following steps.
-
-===== Step 1 - Target Model Creation
-Because the network-topology node does not have fields to store all of the desired data, it is necessary to create a new model to render this extra data into. For this guide, we created the inventory-rendering model. The picture below shows how the data will be rendered and stored.
-
-.Rendering to the inventory-rendering model
-image::topoprocessing/Inventory_Rendering_Use_case.png[width=500]
-
-IMPORTANT: When implementing your version of the topology-rendering model in the Topology Processing Framework, the source file of the model (.yang) must be saved in the /topoprocessing-api/src/main/yang folder so that the corresponding structures can be generated during the build and accessed from every module through dependencies.
-
-When the target model is created, you have to add an identifier through which you can set your new model as the output model. To do that, add another identity item to the topology-correlation.yang file. For our inventory-rendering model, the identity looks like this:
-
-[source,yang]
-----
-identity inventory-rendering-model {
-       description "inventory-rendering.yang";
-       base model;
-}
-----
-
-After that, you will be able to set inventory-rendering-model as the output model in XML.
-
-===== Step 2 - Module and Feature Creation
-IMPORTANT: This and the following steps are based on the <<_model_specific_approach,model specific approach>> in the Topology Processing Framework. We highly recommend that you familiarize yourself with this approach in advance.
-
-To create a base module and add it as a feature to Karaf in the Topology Processing Framework, we made the changes in the following https://git.opendaylight.org/gerrit/#/c/26223/[commit]. Changes in other projects will likely be similar.
-
-[options="header"]
-|======
-|File                                                                                           |Changes
-|pom.xml                                                                                        |add new module to topoprocessing
-|features.xml                                                                           |add feature to topoprocessing
-|features/pom.xml                                                                       |add dependencies needed by features
-|topoprocessing-artifacts/pom.xml                                       |add artifact
-|topoprocessing-config/pom.xml                                          |add configuration file
-|81-topoprocessing-inventory-rendering-config.xml       |configuration file for new module
-|topoprocessing-inventory-rendering/pom.xml                     |main pom for new module
-|TopoProcessingProviderIR.java                                          |contains the startup method, which registers the new model adapter
-|TopoProcessingProviderIRModule.java                            |generated class which contains createInstance method. You should call your startup method from here.
-|TopoProcessingProviderIRModuleFactory.java                     |generated class. You will probably not need to edit this file
-|log4j.xml                                                                                      |configuration file for logger
-|topoprocessing-inventory-rendering-provider-impl.yang |main yang module. Generated classes are generated according to this yang file
-|======
-
-===== Step 3 - Module Adapters Creation
-There are seven mandatory interfaces or abstract classes that need to be implemented in each module. They are:
-
-* TopoProcessingProvider - provides module registration
-* ModelAdapter - provides model specific instances
-* TopologyRequestListener - listens for changes in the configuration datastore
-* TopologyRequestHandler - processes configuration datastore changes
-* UnderlayTopologyListener - listens for changes in the specific model
-* LinkTranslator and NodeTranslator - used by OverlayItemTranslator to create NormalizedNodes from OverlayItems
-
-The naming convention we used was to prepend an abbreviation for the specific model to the name of each implementing class (e.g. IRModelAdapter refers to the class that implements ModelAdapter in the Inventory Rendering module). In the case of the provider class, we put the abbreviation at the end.
-
-[IMPORTANT]
-======
-* In the next sections, we use the terms TopologyRequestListener, TopologyRequestHandler, etc. without a prepended or appended abbreviation because the steps apply regardless of which specific model you are targeting.
-* If you want to implement rendering from inventory to network-topology, you can just copy-paste our module and additional changes will be required only in the output part.
-======
-
-*Provider part*
-
-This part is the starting point of the whole module. It is responsible for creating and registering TopologyRequestListeners. It is necessary to create three classes:
-
-* *TopoProcessingProviderModule* - a class generated from topoprocessing-inventory-rendering-provider-impl.yang (created in the previous step; the file will appear after the first build). Its method `createInstance()` is called at feature start and must be modified to create an instance of TopoProcessingProvider and call its `startup(TopoProcessingProvider topoProvider)` function.
-* *TopoProcessingProvider* - its `startup(TopoProcessingProvider topoProvider)` function registers the ModelAdapter with TopoProcessingProviderImpl.
-* *ModelAdapter* - provides creation of corresponding module specific classes.
-
-*Input part*
-
-This includes the creation of the classes responsible for input data processing. In this case, we had to create five classes implementing:
-
-* *TopologyRequestListener* and *TopologyRequestHandler* - when notified about a change in the configuration datastore, they verify whether the change contains a topology request (has correlations in it) and create UnderlayTopologyListeners if needed. The implementation of these classes will differ according to the model in which the correlations are saved (network-topology or i2rs). If you are using network-topology as the input model, you can use our classes IRTopologyRequestListener and IRTopologyRequestHandler.
-* *UnderlayTopologyListener* - registers underlay listeners according to the input model. In our case (listening in the inventory model), we created listeners for the network-topology model and the inventory model, set the NotificationInterConnector as the first operator, and set the IRRenderingOperator as the second operator (after the NotificationInterConnector). As with TopologyRequestListener/Handler, if you are rendering from the inventory model, you can use our class IRUnderlayTopologyListener.
-* *InventoryListener* - a new implementation of this class is required only for the inventory input model. This is because the InventoryListener from topoprocessing-impl requires a pathIdentifier, which is absent in the case of rendering.
-* *TopologyOperator* - replaces the classic topoprocessing operator. While the classic operator performs specific operations on the topology, the rendering operator just wraps each received UnderlayItem into an OverlayItem and sends it on to be written.
-
-[IMPORTANT]
-======
-For the purposes of topology rendering from inventory to network-topology, the following UnderlayItem fields are repurposed:
-
-* item - contains node from network-topology part of inventory
-* leafItem - contains node from inventory
-
-When implementing UnderlayTopologyListener or InventoryListener, you have to carefully adjust UnderlayItem creation to these semantics.
-======
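The behaviour of such a rendering operator can be sketched as follows. The types below are simplified stand-ins, not the real topoprocessing-api classes, which carry more fields and use NormalizedNode payloads.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the rendering operator performs no aggregation or
// filtration -- it only wraps each incoming UnderlayItem into a one-element
// OverlayItem and passes it on for writing. Types are simplified stand-ins.
public class RenderingOperatorSketch {

    /** Stand-in for UnderlayItem; note the repurposed fields described above. */
    public static class UnderlayItem {
        final String item;     // node from the network-topology part of inventory
        final String leafItem; // node from inventory
        public UnderlayItem(String item, String leafItem) {
            this.item = item;
            this.leafItem = leafItem;
        }
    }

    /** Stand-in for OverlayItem: in rendering it wraps exactly one UnderlayItem. */
    public static class OverlayItem {
        final List<UnderlayItem> underlayItems = new ArrayList<>();
        public OverlayItem(UnderlayItem single) { underlayItems.add(single); }
        public int size() { return underlayItems.size(); }
    }

    /** What the rendering TopologyOperator does for each received item. */
    public static OverlayItem processCreated(UnderlayItem received) {
        return new OverlayItem(received); // wrap and send onward to the writer
    }

    public static void main(String[] args) {
        OverlayItem out = processCreated(
                new UnderlayItem("openflow:1 (topology)", "openflow:1 (inventory)"));
        System.out.println(out.size());
    }
}
```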
-
-*Output part*
-
-The output part of topology rendering is responsible for translating received overlay items to normalized nodes. In the case of inventory rendering, this is where node information from inventory is combined with node information from network-topology. This combined information is stored in our inventory-rendering-model normalized node and passed to the writer.
-
-The output part consists of two translators implementing the NodeTranslator and LinkTranslator interfaces.
-
-*NodeTranslator implementation* - The NodeTranslator interface has one `translate(OverlayItemWrapper wrapper)` method. For our purposes, the important thing in the wrapper is the list of OverlayItems that have one or more common UnderlayItems. In the case of rendering, this list will always contain only one OverlayItem. This item has a list of UnderlayItems, but again, in the case of rendering there will be only one UnderlayItem in this list. In NodeTranslator, the OverlayItem and the corresponding UnderlayItem represent nodes from the translating model.
-
-The UnderlayItem has several attributes. How you use these attributes in your rendering is up to you, as you create this item in your topology operator. For example, as mentioned above, in our inventory rendering example the inventory node's normalized node is stored in the UnderlayItem's leafNode attribute, and we also store the node-id from the network-topology model in the UnderlayItem's itemId attribute. You can then use these attributes to build a normalized node for your new model. How to read and create normalized nodes is outside the scope of this document.
-
-*LinkTranslator implementation* - The LinkTranslator interface also has one `translate(OverlayItemWrapper wrapper)` method. In our inventory rendering, this method returns `null` because the inventory model doesn't have links. But if you also need links, this is the place where you should translate them into a normalized node for your model. In LinkTranslator, the OverlayItem and the corresponding UnderlayItem represent links from the translating model. As in NodeTranslator, there will be only one OverlayItem and one UnderlayItem in the corresponding lists.
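The node translation step can be sketched like this, again with stand-in types: a real NodeTranslator receives an OverlayItemWrapper and builds a YANG NormalizedNode, which is out of scope here, so the sketch merges the two attributes into a plain string instead.

```java
// Hypothetical sketch of a NodeTranslator for rendering: exactly one
// OverlayItem with exactly one UnderlayItem is expected, and the output
// node combines the network-topology id (itemId) with data taken from
// the inventory node stored in leafNode. Real code emits NormalizedNodes.
public class NodeTranslatorSketch {

    /** Simplified stand-in for the single UnderlayItem inside the wrapper. */
    public static class UnderlayItem {
        final String itemId;   // node-id from the network-topology model
        final String leafNode; // inventory node payload (simplified to a string)
        public UnderlayItem(String itemId, String leafNode) {
            this.itemId = itemId;
            this.leafNode = leafNode;
        }
    }

    /** Stand-in for translate(OverlayItemWrapper wrapper). */
    public static String translate(UnderlayItem[] wrapper) {
        if (wrapper.length != 1) {
            throw new IllegalStateException("rendering expects exactly one item");
        }
        UnderlayItem item = wrapper[0];
        // Build the rendered node: topology id plus inventory augmentation.
        return "node-id=" + item.itemId + ", augmentation=" + item.leafNode;
    }

    public static void main(String[] args) {
        System.out.println(translate(new UnderlayItem[] {
            new UnderlayItem("openflow:1", "hardware=Open vSwitch")
        }));
    }
}
```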
-
-==== Testing
-If you want to test our implementation, you must apply https://git.opendaylight.org/gerrit/#/c/26612[this patch]. It adds an OpenFlow Plugin dependency so that we can use it in the Karaf distribution as a feature. After applying the patch and building the whole framework, you can start Karaf. Next, you have to install the necessary features. In our case these are:
-
-`feature:install odl-restconf-noauth odl-topoprocessing-inventory-rendering odl-openflowplugin-southbound odl-openflowplugin-nsf-model` 
-
-Now you can send messages over REST from any REST client (e.g. Postman in Chrome). The messages have to have the following headers:
-
-[options="header"]
-|=====
-|Header                  |Value
-|Content-Type:|application/xml
-|Accept:         |application/xml
-|username:       |admin
-|password:       |admin 
-|=====
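For example, with Java 11's built-in java.net.http client, a request carrying these headers could be assembled as below. The XML body is abbreviated here, and actually sending the request of course requires a running Karaf instance with the features above installed; any REST client works equally well.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch only: builds (but does not send) a PUT request carrying the
// headers from the table above.
public class TopologyRequestSketch {

    public static HttpRequest buildPut(String url, String xmlBody) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/xml")
                .header("Accept", "application/xml")
                .header("username", "admin")
                .header("password", "admin")
                .PUT(HttpRequest.BodyPublishers.ofString(xmlBody))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildPut(
            "http://localhost:8181/restconf/config/network-topology:network-topology/topology/render:1",
            "<topology xmlns=\"urn:TBD:params:xml:ns:yang:network-topology\">...</topology>");
        System.out.println(req.method());
    }
}
```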
-
-First, send a topology request to http://localhost:8181/restconf/config/network-topology:network-topology/topology/render:1 with the PUT method. Example of a simple rendering request:
-
-[source, xml]
-----
-<topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
-  <topology-id>render:1</topology-id>  
-    <correlations xmlns="urn:opendaylight:topology:correlation" >
-      <output-model>inventory-rendering-model</output-model>
-      <correlation>
-         <correlation-id>1</correlation-id>
-          <type>rendering-only</type>
-          <correlation-item>node</correlation-item>
-          <rendering>
-            <underlay-topology>und-topo:1</underlay-topology>
-        </rendering>
-      </correlation>
-    </correlations>
-</topology>
-----
-This request says that we want to create a topology named render:1, that this topology should be stored in the inventory-rendering-model, and that it should be created from topology und-topo:1 by node rendering.
-
-Next, we send the network-topology part of topology und-topo:1. To the URL http://localhost:8181/restconf/config/network-topology:network-topology/topology/und-topo:1 we PUT:
-[source,xml]
-----
-<topology xmlns="urn:TBD:params:xml:ns:yang:network-topology" 
-          xmlns:it="urn:opendaylight:model:topology:inventory"
-          xmlns:i="urn:opendaylight:inventory">
-    <topology-id>und-topo:1</topology-id>
-    <node>
-        <node-id>openflow:1</node-id>
-        <it:inventory-node-ref>
-       /i:nodes/i:node[i:id="openflow:1"]
-        </it:inventory-node-ref>
-        <termination-point>
-            <tp-id>tp:1</tp-id>
-            <it:inventory-node-connector-ref> 
-                /i:nodes/i:node[i:id="openflow:1"]/i:node-connector[i:id="openflow:1:1"]
-            </it:inventory-node-connector-ref>
-        </termination-point>
-    </node>
-</topology>
-----
-The last input is the inventory part of the topology. To the URL http://localhost:8181/restconf/config/opendaylight-inventory:nodes we PUT:
-[source,xml]
-----
-<nodes 
-    xmlns="urn:opendaylight:inventory">
-    <node>
-        <id>openflow:1</id>
-        <node-connector>
-            <id>openflow:1:1</id>
-            <port-number 
-                xmlns="urn:opendaylight:flow:inventory">1
-            </port-number>
-            <current-speed 
-                xmlns="urn:opendaylight:flow:inventory">10000000
-            </current-speed>
-            <name 
-                xmlns="urn:opendaylight:flow:inventory">s1-eth1
-            </name>
-            <supported 
-                xmlns="urn:opendaylight:flow:inventory">
-            </supported>
-            <current-feature 
-                xmlns="urn:opendaylight:flow:inventory">copper ten-gb-fd
-            </current-feature>
-            <configuration 
-                xmlns="urn:opendaylight:flow:inventory">
-            </configuration>
-            <peer-features 
-                xmlns="urn:opendaylight:flow:inventory">
-            </peer-features>
-            <maximum-speed 
-                xmlns="urn:opendaylight:flow:inventory">0
-            </maximum-speed>
-            <advertised-features 
-                xmlns="urn:opendaylight:flow:inventory">
-            </advertised-features>
-            <hardware-address 
-                xmlns="urn:opendaylight:flow:inventory">0E:DC:8C:63:EC:D1
-            </hardware-address>
-            <state 
-                xmlns="urn:opendaylight:flow:inventory">
-                <link-down>false</link-down>
-                <blocked>false</blocked>
-                <live>false</live>
-            </state>
-            <flow-capable-node-connector-statistics 
-                xmlns="urn:opendaylight:port:statistics">
-                <receive-errors>0</receive-errors>
-                <receive-frame-error>0</receive-frame-error>
-                <receive-over-run-error>0</receive-over-run-error>
-                <receive-crc-error>0</receive-crc-error>
-                <bytes>
-                    <transmitted>595</transmitted>
-                    <received>378</received>
-                </bytes>
-                <receive-drops>0</receive-drops>
-                <duration>
-                    <second>28</second>
-                    <nanosecond>410000000</nanosecond>
-                </duration>
-                <transmit-errors>0</transmit-errors>
-                <collision-count>0</collision-count>
-                <packets>
-                    <transmitted>7</transmitted>
-                    <received>5</received>
-                </packets>
-                <transmit-drops>0</transmit-drops>
-            </flow-capable-node-connector-statistics>
-        </node-connector>
-        <node-connector>
-            <id>openflow:1:LOCAL</id>
-            <port-number 
-                xmlns="urn:opendaylight:flow:inventory">4294967294
-            </port-number>
-            <current-speed 
-                xmlns="urn:opendaylight:flow:inventory">0
-            </current-speed>
-            <name 
-                xmlns="urn:opendaylight:flow:inventory">s1
-            </name>
-            <supported 
-                xmlns="urn:opendaylight:flow:inventory">
-            </supported>
-            <current-feature 
-                xmlns="urn:opendaylight:flow:inventory">
-            </current-feature>
-            <configuration 
-                xmlns="urn:opendaylight:flow:inventory">
-            </configuration>
-            <peer-features 
-                xmlns="urn:opendaylight:flow:inventory">
-            </peer-features>
-            <maximum-speed 
-                xmlns="urn:opendaylight:flow:inventory">0
-            </maximum-speed>
-            <advertised-features 
-                xmlns="urn:opendaylight:flow:inventory">
-            </advertised-features>
-            <hardware-address 
-                xmlns="urn:opendaylight:flow:inventory">BA:63:87:0C:76:41
-            </hardware-address>
-            <state 
-                xmlns="urn:opendaylight:flow:inventory">
-                <link-down>false</link-down>
-                <blocked>false</blocked>
-                <live>false</live>
-            </state>
-            <flow-capable-node-connector-statistics 
-                xmlns="urn:opendaylight:port:statistics">
-                <receive-errors>0</receive-errors>
-                <receive-frame-error>0</receive-frame-error>
-                <receive-over-run-error>0</receive-over-run-error>
-                <receive-crc-error>0</receive-crc-error>
-                <bytes>
-                    <transmitted>576</transmitted>
-                    <received>468</received>
-                </bytes>
-                <receive-drops>0</receive-drops>
-                <duration>
-                    <second>28</second>
-                    <nanosecond>426000000</nanosecond>
-                </duration>
-                <transmit-errors>0</transmit-errors>
-                <collision-count>0</collision-count>
-                <packets>
-                    <transmitted>6</transmitted>
-                    <received>6</received>
-                </packets>
-                <transmit-drops>0</transmit-drops>
-            </flow-capable-node-connector-statistics>
-        </node-connector>
-        <serial-number 
-            xmlns="urn:opendaylight:flow:inventory">None
-        </serial-number>
-        <manufacturer 
-            xmlns="urn:opendaylight:flow:inventory">Nicira, Inc.
-        </manufacturer>
-        <hardware 
-            xmlns="urn:opendaylight:flow:inventory">Open vSwitch
-        </hardware>
-        <software 
-            xmlns="urn:opendaylight:flow:inventory">2.1.3
-        </software>
-        <description 
-            xmlns="urn:opendaylight:flow:inventory">None
-        </description>
-               <ip-address
-                       xmlns="urn:opendaylight:flow:inventory">10.20.30.40
-      </ip-address>
-        <meter-features 
-            xmlns="urn:opendaylight:meter:statistics">
-            <max_bands>0</max_bands>
-            <max_color>0</max_color>
-            <max_meter>0</max_meter>
-        </meter-features>
-        <group-features 
-            xmlns="urn:opendaylight:group:statistics">
-            <group-capabilities-supported 
-                xmlns:x="urn:opendaylight:group:types">x:chaining
-            </group-capabilities-supported>
-            <group-capabilities-supported 
-                xmlns:x="urn:opendaylight:group:types">x:select-weight
-            </group-capabilities-supported>
-            <group-capabilities-supported 
-                xmlns:x="urn:opendaylight:group:types">x:select-liveness
-            </group-capabilities-supported>
-            <max-groups>4294967040</max-groups>
-            <actions>67082241</actions>
-            <actions>0</actions>
-        </group-features>
-    </node>
-</nodes>
-----
-After this, the expected result from a GET request to http://127.0.0.1:8181/restconf/operational/network-topology:network-topology is:
-[source,xml]
-----
-<network-topology 
-    xmlns="urn:TBD:params:xml:ns:yang:network-topology">
-    <topology>
-        <topology-id>render:1</topology-id>
-        <node>
-            <node-id>openflow:1</node-id>
-            <node-augmentation 
-                xmlns="urn:opendaylight:topology:inventory:rendering">
-                <ip-address>10.20.30.40</ip-address>
-                <serial-number>None</serial-number>
-                <manufacturer>Nicira, Inc.</manufacturer>
-                <description>None</description>
-                <hardware>Open vSwitch</hardware>
-                <software>2.1.3</software>
-            </node-augmentation>
-            <termination-point>
-                <tp-id>openflow:1:1</tp-id>
-                <tp-augmentation 
-                    xmlns="urn:opendaylight:topology:inventory:rendering">
-                    <hardware-address>0E:DC:8C:63:EC:D1</hardware-address>
-                    <current-speed>10000000</current-speed>
-                    <maximum-speed>0</maximum-speed>
-                    <name>s1-eth1</name>
-                </tp-augmentation>
-            </termination-point>
-            <termination-point>
-                <tp-id>openflow:1:LOCAL</tp-id>
-                <tp-augmentation 
-                    xmlns="urn:opendaylight:topology:inventory:rendering">
-                    <hardware-address>BA:63:87:0C:76:41</hardware-address>
-                    <current-speed>0</current-speed>
-                    <maximum-speed>0</maximum-speed>
-                    <name>s1</name>
-                </tp-augmentation>
-            </termination-point>
-        </node>
-    </topology>
-</network-topology>
-----
diff --git a/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-link-computation-dev.adoc b/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-link-computation-dev.adoc
deleted file mode 100644 (file)
index 576a2dd..0000000
+++ /dev/null
@@ -1,126 +0,0 @@
-==== Chapter Overview
-While processing a topology request, we create overlay nodes with lists of supporting underlay nodes. Because these overlay nodes have completely new identifiers, we lose the link information. To regain this link information, we provide the Link Computation functionality. Its main purpose is to create new overlay links based on the links from the underlay topologies and the underlay items contained in the overlay items. The information required for Link Computation is provided via the Link Computation model (https://git.opendaylight.org/gerrit/gitweb?p=topoprocessing.git;a=blob;f=topoprocessing-api/src/main/yang/topology-link-computation.yang;hb=refs/heads/stable/beryllium[topology-link-computation.yang]).
-
-==== Link Computation Functionality
-Let us consider two topologies with the following components:
-
-Topology 1:
-
-* Node: `node:1:1`
-* Node: `node:1:2`
-* Node: `node:1:3`
-* Link: `link:1:1` (from `node:1:1` to `node:1:2`)
-* Link: `link:1:2` (from `node:1:3` to `node:1:2`)
-
-Topology 2:
-
-* Node: `node:2:1`
-* Node: `node:2:2`
-* Node: `node:2:3`
-* Link: `link:2:1` (from `node:2:1` to `node:2:3`)
-
-Now let's say that we applied some operations over these topologies that resulted in aggregating together
-
-* `node:1:1` and `node:2:3` (`node:1`)
-* `node:1:2` and `node:2:2` (`node:2`)
-* `node:1:3` and `node:2:1` (`node:3`)
-
-At this point, we can no longer use the available links in the new topology because of the node ID changes, so we must create new overlay links with the source and destination nodes set to the new node IDs. This means that `link:1:1` from topology 1 will produce a new link `link:1`. Since the original source (`node:1:1`) is aggregated under `node:1`, that node becomes the source of `link:1`. By the same method, the destination will be `node:2`. The final output will be three links:
-
-* `link:1`, from `node:1` to `node:2`
-* `link:2`, from `node:3` to `node:2`
-* `link:3`, from `node:3` to `node:1`
-
-.Overlay topology with computed links
-image::topoprocessing/LinkComputation.png[width=461]
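The endpoint rewriting in the example above can be sketched as follows. This is hypothetical helper code, not framework code: it takes the aggregation result as a ready-made map and rewrites each underlay link endpoint to the overlay node that aggregates it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Hypothetical helper illustrating the link rewriting above: each underlay
// endpoint is replaced by the overlay node that aggregates it.
public class LinkRemapSketch {

    /** Rewrites each [source, destination] underlay link to overlay node IDs. */
    public static List<String> remap(List<String[]> underlayLinks,
                                     Map<String, String> aggregation) {
        List<String> overlayLinks = new ArrayList<>();
        for (String[] link : underlayLinks) {
            overlayLinks.add(aggregation.get(link[0]) + "->" + aggregation.get(link[1]));
        }
        return overlayLinks;
    }

    /** Reproduces the two-topology example from this section. */
    public static List<String> demo() {
        Map<String, String> aggregation = Map.of(
                "node:1:1", "node:1", "node:2:3", "node:1",
                "node:1:2", "node:2", "node:2:2", "node:2",
                "node:1:3", "node:3", "node:2:1", "node:3");
        List<String[]> underlayLinks = Arrays.asList(
                new String[] {"node:1:1", "node:1:2"},  // link:1:1
                new String[] {"node:1:3", "node:1:2"},  // link:1:2
                new String[] {"node:2:1", "node:2:3"}); // link:2:1
        return remap(underlayLinks, aggregation);
    }

    public static void main(String[] args) {
        // Expected: [node:1->node:2, node:3->node:2, node:3->node:1]
        System.out.println(demo());
    }
}
```

The three printed links correspond to `link:1`, `link:2` and `link:3` from the list above.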
-
-==== In-Depth Look
-The main logic behind Link Computation is executed in the LinkCalculator operator. The required information is passed to the LinkCalculator through the LinkComputation section of the topology request. This section is defined in the topology-link-computation.yang file. The main logic also covers cases where some underlay nodes may not pass through the other topology operators.
-
-===== Link Computation Model
-There are three essential pieces of information for link computations. All of them are provided within the LinkComputation section. These pieces are:
-
-* output model
-
-[source, yang]
-----
-leaf output-model {
-    type identityref {
-        base topo-corr:model;
-    }
-    description "Desired output model for computed links.";
-}
-----
-
-* overlay topology with new nodes
-
-[source, yang]
-----
-container node-info {
-    leaf node-topology {
-        type string;
-        mandatory true;
-        description "Topology that contains aggregated nodes.
-                     This topology will be used for storing computed links.";
-    }
-    uses topo-corr:input-model-grouping;
-}
-----
-
-* underlay topologies with original links
-
-[source, yang]
-----
-list link-info {
-    key "link-topology input-model";
-    leaf link-topology {
-        type string;
-        mandatory true;
-        description "Topology that contains underlay (base) links.";
-    }
-    leaf aggregated-links {
-        type boolean;
-        description "Defines if link computation should be based on supporting-links.";
-    }
-    uses topo-corr:input-model-grouping;
-}
-----
-
-This whole section is augmented into `network-topology:topology`. Placing it outside the correlations section allows a link computation request to be sent separately from a topology operations request.
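Put together, a link computation request carries all three pieces of information. The sketch below mirrors the YANG leaves above; the topology IDs, identity values, and exact RESTCONF JSON rendering (module prefixes, augmentation path) are placeholder assumptions:

```python
import json

# Illustrative link-computation payload; values are placeholders.
link_computation = {
    "output-model": "example:output-model",          # desired model for computed links
    "node-info": {
        "node-topology": "overlay:1",                # topology with aggregated nodes
        "input-model": "example:input-model",
    },
    "link-info": [
        {
            "link-topology": "underlay:1",           # topology with base links
            "aggregated-links": True,                # compute from supporting-links
            "input-model": "example:input-model",
        }
    ],
}
print(json.dumps(link_computation, indent=2))
```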
-
-===== Main Logic
-Taking into consideration that some underlay nodes may not be transformed into overlay nodes (e.g. because they are filtered out), we created two possible states for links:
-
-* matched - a link is considered matched when both its original source and destination nodes were transformed into overlay nodes
-* waiting - a link is considered waiting if its original source node, destination node, or both are missing from the overlay topology
-
-All links in the waiting state are stored in the waitingLinks list, already matched links are stored in the matchedLinks list, and overlay nodes are stored in the storedOverlayNodes list. All processing is based only on the information in these lists.
-The processing of created, updated and removed underlay items differs slightly and is described separately in the following sections.
-
-*Processing Created Items*
-
-Created items can be either nodes or links, depending on the type of listener from which they came. In the case of a link, it is immediately added to waitingLinks and calculation for possible overlay link creations (calculatePossibleLink) is started. The flow diagram for this process is shown in the following picture:
-
-.Flow diagram of processing created items
-image::topoprocessing/LinkComputationFlowDiagram.png[width=500]
-
-The calculatePossibleLink method searches for the source and destination nodes by iterating over each node in storedOverlayNodes and comparing the IDs of its supporting nodes against the IDs of the underlay link's source and destination nodes. If either node is missing, the link remains in the waiting state. If both the source and destination nodes are found, the corresponding overlay nodes are recorded as the new source and destination, the link is removed from waitingLinks, and a new CalculatedLink is added to matchedLinks. Finally, the new link (if one was created) is written into the datastore.
-
-If the created item is an overlay node, it is added to storedOverlayNodes and calculatePossibleLink is called for every link in waitingLinks.
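The bookkeeping described above can be sketched as follows; the names mirror the text, but the code is an illustrative sketch, not the actual LinkCalculator implementation:

```python
stored_overlay_nodes = {}  # overlay node id -> set of supporting underlay ids
waiting_links = {}         # link id -> (src underlay id, dst underlay id)
matched_links = {}         # link id -> (src overlay id, dst overlay id)

def find_overlay(underlay_id):
    """Return the overlay node that has underlay_id as a supporting node."""
    for overlay_id, supporting in stored_overlay_nodes.items():
        if underlay_id in supporting:
            return overlay_id
    return None

def calculate_possible_link(link_id):
    """Promote a waiting link to matched if both endpoints are now known."""
    src, dst = waiting_links[link_id]
    o_src, o_dst = find_overlay(src), find_overlay(dst)
    if o_src is not None and o_dst is not None:
        matched_links[link_id] = (o_src, o_dst)  # new overlay endpoints
        del waiting_links[link_id]               # ...then write to datastore

def on_created_link(link_id, src, dst):
    waiting_links[link_id] = (src, dst)
    calculate_possible_link(link_id)

def on_created_overlay_node(overlay_id, supporting):
    stored_overlay_nodes[overlay_id] = set(supporting)
    for link_id in list(waiting_links):
        calculate_possible_link(link_id)
```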
-
-*Processing Updated Items*
-
-The difference from processing created items is that we have three possible types of updated items: overlay nodes, waiting underlay links, and matched underlay links.
-
-* In the case of a change in a matched link, the link must be recalculated; based on the result it is either matched with a new source and destination or returned to the waiting links. If the link is moved back to the waiting state, it must also be removed from the datastore.
-* In the case of a change in a waiting link, the link is passed to the calculation process and, based on the result, either remains in the waiting state or is promoted to the matched state.
-* In the case of a change in an overlay node, storedOverlayNodes must be updated accordingly and all links must be recalculated.
-
-*Processing Removed items*
-
-As with updated items, there are three types of removed items:
-
-* In case of waiting link removal, the link is simply removed from waitingLinks
-* In case of matched link removal, the link is removed from matchedLinks and from the datastore
-* In case of overlay node removal, the node must be removed from storedOverlayNodes and all matched links must be recalculated
-
diff --git a/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-wrapper-rpc-writing-dev.adoc b/manuals/developer-guide/src/main/asciidoc/topoprocessing/odl-topoprocessing-wrapper-rpc-writing-dev.adoc
deleted file mode 100644 (file)
index 26b7d90..0000000
+++ /dev/null
@@ -1,19 +0,0 @@
-==== Chapter Overview
-During the process of aggregation and filtration, overlay items (so-called logical nodes) were created from underlay items (physical nodes). In the topology manager, overlay items are put into a wrapper. A wrapper is identified by a unique ID and contains a list of logical nodes. Wrappers are used to deal with the transitivity of underlay items, which permits grouping of overlay items (into wrappers).
-
-.Wrapper
-image::topoprocessing/wrapper.png[width=500]
-
-PN1, PN2, PN3 = physical nodes
-
-LN1, LN2 = logical nodes
-
-==== RPC Republishing
-All RPCs registered to handle underlay items are re-registered under their corresponding wrapper ID. The RPCs of the underlay items belonging to an overlay item are gathered and registered under the ID of their wrapper.
-
-===== RPC Call
-When an RPC is called on an overlay item, the call is delegated to its underlay items; that is, the RPC is called on all underlay items of that overlay item.
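The delegation can be sketched as below; the registry, wrapper IDs, and handler signatures are illustrative assumptions, not the actual MD-SAL RPC registration API:

```python
# Per-underlay-item RPC handlers (physical nodes PN1, PN2 from the figure).
rpc_registry = {
    "PN1": lambda payload: "PN1 handled %s" % payload,
    "PN2": lambda payload: "PN2 handled %s" % payload,
}

# The wrapper groups the underlay items of its overlay item(s).
wrapper_members = {"wrapper:1": ["PN1", "PN2"]}

def invoke_on_wrapper(wrapper_id, payload):
    # Calling the RPC on the overlay item calls it on every underlay item.
    return [rpc_registry[m](payload) for m in wrapper_members[wrapper_id]]
```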
-
-==== Writing Mechanism
-When a wrapper (containing overlay items with their underlay items) is ready to be written into the data store, it has to be converted into DOM format. After this translation, the result is written into the datastore. Physical nodes are stored as supporting-nodes.
-To use resources responsibly, the writing operation is divided into two steps: first, a set of threads registers prepared operations (deletes and puts); then a single thread performs the actual write operation in a batch.
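The two-step write can be sketched with a simple queue of prepared operations; in the real implementation the flush runs on a dedicated writer thread against the MD-SAL datastore, so everything here is an illustrative assumption:

```python
import queue

operations = queue.Queue()  # prepared operations from many threads
datastore = {}              # stand-in for the DOM datastore

def register_put(path, value):
    operations.put(("put", path, value))

def register_delete(path):
    operations.put(("delete", path, None))

def flush_batch():
    # Single writer: drain all queued operations and apply them in one batch.
    while not operations.empty():
        op, path, value = operations.get()
        if op == "put":
            datastore[path] = value
        else:
            datastore.pop(path, None)

register_put("/topology/overlay:1/node/1", {"supporting-node": ["PN1", "PN2"]})
flush_batch()
```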
index fee9b28107682563a768f8116a59b129d2abf8ef..243c133a7194a10eefe3fc91b17e9be82bc7f448 100644 (file)
@@ -1,46 +1,3 @@
 == TTP CLI Tools Developer Guide
 
-=== Overview
-Table Type Patterns are a specification developed by the
-https://www.opennetworking.org/[Open Networking Foundation] to enable
-the description and negotiation of subsets of the OpenFlow protocol.
-This is particularly useful for hardware switches that support OpenFlow
-as it enables them to describe which features they do (and thus also which
-features they do not) support. More details can be found in the full
-specification listed on the
-https://www.opennetworking.org/sdn-resources/onf-specifications/openflow[OpenFlow
-specifications page].
-
-The TTP CLI Tools provide a way for people interested in TTPs to read
-in, validate, output, and manipulate TTPs as a self-contained,
-executable jar file.
-
-=== TTP CLI Tools Architecture
-The TTP CLI Tools use the TTP Model and the YANG Tools/RESTCONF codecs
-to translate between the Data Transfer Objects (DTOs) and JSON/XML.
-
-=== Command Line Options
-This will cover the various options for the CLI Tools. For now, there
-are no options and it merely outputs fixed data using the codecs.
-
-// The CLI tools don't have an APIs in the common sense.
-//
-// === Key APIs and Interfaces
-// Document the key things a user would want to use. For some features,
-// there will only be one logical grouping of APIs. For others there may be
-// more than one grouping.
-//
-// Assuming the API is MD-SAL- and YANG-based, the APIs will be available
-// both via RESTCONF and via Java APIs. Giving a few examples using each is
-// likely a good idea.
-//
-// ==== API Group 1
-// Provide a description of what the API does and some examples of how to
-// use it.
-//
-// ==== API Group 2
-// Provide a description of what the API does and some examples of how to
-// use it.
-//
-// === API Reference Documentation
-// Provide links to JavaDoc, REST API documentation, etc.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/ttp-cli-tools-developer-guide.html
index 084215be1404dd66502ab15d226f4a7f46812080..1b9528b60b3075d254f97bde675d5e88664ee648 100644 (file)
@@ -1,492 +1,3 @@
 == TTP Model Developer Guide
 
-=== Overview
-Table Type Patterns are a specification developed by the
-https://www.opennetworking.org/[Open Networking Foundation] to enable
-the description and negotiation of subsets of the OpenFlow protocol.
-This is particularly useful for hardware switches that support OpenFlow
-as it enables them to describe which features they do (and thus also which
-features they do not) support. More details can be found in the full
-specification listed on the
-https://www.opennetworking.org/sdn-resources/onf-specifications/openflow[OpenFlow
-specifications page].
-
-=== TTP Model Architecture
-The TTP Model provides a YANG-modeled type for a TTP and allows TTPs
-to be associated with a master list of known TTPs, as well as with
-active and supported TTPs on nodes in the MD-SAL inventory model.
-
-=== Key APIs and Interfaces
-The key API provided by the TTP Model feature is the ability to store
-a set of TTPs in the MD-SAL as well as associate zero or one active
-TTPs and zero or more supported TTPs along with a given node in the
-MD-SAL inventory model.
-
-=== API Reference Documentation
-
-==== RESTCONF
-See the generated RESTCONF API documentation at:
-http://localhost:8181/apidoc/explorer/index.html
-
-Look for the onf-ttp module to expand and see the various RESTCONF
-APIs.
-
-==== Java Bindings
-
-//TODO: Provide a link to JavaDoc.
-
-As stated above, there are three locations where a Table Type Pattern
-can be placed into the MD-SAL Data Store. They correspond to three
-different REST API URIs:
-
-. +restconf/config/onf-ttp:opendaylight-ttps/onf-ttp:table-type-patterns/+
-. +restconf/config/opendaylight-inventory:nodes/node/{id}/ttp-inventory-node:active_ttp/+
-. +restconf/config/opendaylight-inventory:nodes/node/{id}/ttp-inventory-node:supported_ttps/+
-
-[NOTE]
-===============================
-Typically, these URIs are running on the machine the controller is on
-at port 8181. If you are on the same machine they can thus be accessed
-at +http://localhost:8181/<uri>+
-===============================
-
-=== Using the TTP Model RESTCONF APIs
-
-==== Setting REST HTTP Headers
-
-===== Authentication
-
-The REST API calls require authentication by default. The default
-method is basic auth with the user name and password both set to +admin+.
-
-===== Content-Type and Accept
-
-RESTCONF supports both XML and JSON. This example focuses on JSON, but
-XML can be used just as easily. When doing a PUT or POST, be sure to
-specify the appropriate +Content-Type+ header: either
-+application/json+ or +application/xml+.
-
-When doing a GET be sure to specify the appropriate +Accept+ header:
-again, either +application/json+ or +application/xml+.
-
-==== Content
-
-The contents of a PUT or POST should be an OpenDaylight Table Type
-Pattern. An example of one is provided below. The example can also be
-found at https://git.opendaylight.org/gerrit/gitweb?p=ttp.git;a=blob;f=parser/sample-TTP-from-tests.ttp;h=45130949b25c6f86b750959d27d04ec2208935fb;hb=HEAD[+parser/sample-TTP-from-tests.ttp+ in the TTP git repository].
-
-.Sample Table Type Pattern (json)
------------------------------------------------------
-{
-    "table-type-patterns": {
-        "table-type-pattern": [
-            {
-                "security": {
-                    "doc": [
-                        "This TTP is not published for use by ONF. It is an example and for",
-                        "illustrative purposes only.",
-                        "If this TTP were published for use it would include",
-                        "guidance as to any security considerations in this doc member."
-                    ]
-                },
-                "NDM_metadata": {
-                    "authority": "org.opennetworking.fawg",
-                    "OF_protocol_version": "1.3.3",
-                    "version": "1.0.0",
-                    "type": "TTPv1",
-                    "doc": [
-                        "Example of a TTP supporting L2 (unicast, multicast, flooding), L3 (unicast only),",
-                        "and an ACL table."
-                    ],
-                    "name": "L2-L3-ACLs"
-                },
-                "identifiers": [
-                    {
-                        "doc": [
-                            "The VLAN ID of a locally attached L2 subnet on a Router."
-                        ],
-                        "var": "<subnet_VID>"
-                    },
-                    {
-                        "doc": [
-                            "An OpenFlow group identifier (integer) identifying a group table entry",
-                            "of the type indicated by the variable name"
-                        ],
-                        "var": "<<group_entry_types/name>>"
-                    }
-                ],
-                "features": [
-                    {
-                        "doc": [
-                            "Flow entry notification Extension – notification of changes in flow entries"
-                        ],
-                        "feature": "ext187"
-                    },
-                    {
-                        "doc": [
-                            "Group notifications Extension – notification of changes in group or meter entries"
-                        ],
-                        "feature": "ext235"
-                    }
-                ],
-                "meter_table": {
-                    "meter_types": [
-                        {
-                            "name": "ControllerMeterType",
-                            "bands": [
-                                {
-                                    "type": "DROP",
-                                    "rate": "1000..10000",
-                                    "burst": "50..200"
-                                }
-                            ]
-                        },
-                        {
-                            "name": "TrafficMeter",
-                            "bands": [
-                                {
-                                    "type": "DSCP_REMARK",
-                                    "rate": "10000..500000",
-                                    "burst": "50..500"
-                                },
-                                {
-                                    "type": "DROP",
-                                    "rate": "10000..500000",
-                                    "burst": "50..500"
-                                }
-                            ]
-                        }
-                    ],
-                    "built_in_meters": [
-                        {
-                            "name": "ControllerMeter",
-                            "meter_id": 1,
-                            "type": "ControllerMeterType",
-                            "bands": [
-                                {
-                                    "rate": 2000,
-                                    "burst": 75
-                                }
-                            ]
-                        },
-                        {
-                            "name": "AllArpMeter",
-                            "meter_id": 2,
-                            "type": "ControllerMeterType",
-                            "bands": [
-                                {
-                                    "rate": 1000,
-                                    "burst": 50
-                                }
-                            ]
-                        }
-                    ]
-                },
-                "table_map": [
-                    {
-                        "name": "ControlFrame",
-                        "number": 0
-                    },
-                    {
-                        "name": "IngressVLAN",
-                        "number": 10
-                    },
-                    {
-                        "name": "MacLearning",
-                        "number": 20
-                    },
-                    {
-                        "name": "ACL",
-                        "number": 30
-                    },
-                    {
-                        "name": "L2",
-                        "number": 40
-                    },
-                    {
-                        "name": "ProtoFilter",
-                        "number": 50
-                    },
-                    {
-                        "name": "IPv4",
-                        "number": 60
-                    },
-                    {
-                        "name": "IPv6",
-                        "number": 80
-                    }
-                ],
-                "parameters": [
-                    {
-                        "doc": [
-                            "documentation"
-                        ],
-                        "name": "Showing-curt-how-this-works",
-                        "type": "type1"
-                    }
-                ],
-                "flow_tables": [
-                    {
-                        "doc": [
-                            "Filters L2 control reserved destination addresses and",
-                            "may forward control packets to the controller.",
-                            "Directs all other packets to the Ingress VLAN table."
-                        ],
-                        "name": "ControlFrame",
-                        "flow_mod_types": [
-                            {
-                                "doc": [
-                                    "This match/action pair allows for flow_mods that match on either",
-                                    "ETH_TYPE or ETH_DST (or both) and send the packet to the",
-                                    "controller, subject to metering."
-                                ],
-                                "name": "Frame-To-Controller",
-                                "match_set": [
-                                    {
-                                        "field": "ETH_TYPE",
-                                        "match_type": "all_or_exact"
-                                    },
-                                    {
-                                        "field": "ETH_DST",
-                                        "match_type": "exact"
-                                    }
-                                ],
-                                "instruction_set": [
-                                    {
-                                        "doc": [
-                                            "This meter may be used to limit the rate of PACKET_IN frames",
-                                            "sent to the controller"
-                                        ],
-                                        "instruction": "METER",
-                                        "meter_name": "ControllerMeter"
-                                    },
-                                    {
-                                        "instruction": "APPLY_ACTIONS",
-                                        "actions": [
-                                            {
-                                                "action": "OUTPUT",
-                                                "port": "CONTROLLER"
-                                            }
-                                        ]
-                                    }
-                                ]
-                            }
-                        ],
-                        "built_in_flow_mods": [
-                            {
-                                "doc": [
-                                    "Mandatory filtering of control frames with C-VLAN Bridge reserved DA."
-                                ],
-                                "name": "Control-Frame-Filter",
-                                "priority": "1",
-                                "match_set": [
-                                    {
-                                        "field": "ETH_DST",
-                                        "mask": "0xfffffffffff0",
-                                        "value": "0x0180C2000000"
-                                    }
-                                ]
-                            },
-                            {
-                                "doc": [
-                                    "Mandatory miss flow_mod, sends packets to IngressVLAN table."
-                                ],
-                                "name": "Non-Control-Frame",
-                                "priority": "0",
-                                "instruction_set": [
-                                    {
-                                        "instruction": "GOTO_TABLE",
-                                        "table": "IngressVLAN"
-                                    }
-                                ]
-                            }
-                        ]
-                    }
-                ],
-                "group_entry_types": [
-                    {
-                        "doc": [
-                            "Output to a port, removing VLAN tag if needed.",
-                            "Entry per port, plus entry per untagged VID per port."
-                        ],
-                        "name": "EgressPort",
-                        "group_type": "INDIRECT",
-                        "bucket_types": [
-                            {
-                                "name": "OutputTagged",
-                                "action_set": [
-                                    {
-                                        "action": "OUTPUT",
-                                        "port": "<port_no>"
-                                    }
-                                ]
-                            },
-                            {
-                                "name": "OutputUntagged",
-                                "action_set": [
-                                    {
-                                        "action": "POP_VLAN"
-                                    },
-                                    {
-                                        "action": "OUTPUT",
-                                        "port": "<port_no>"
-                                    }
-                                ]
-                            },
-                            {
-                                "opt_tag": "VID-X",
-                                "name": "OutputVIDTranslate",
-                                "action_set": [
-                                    {
-                                        "action": "SET_FIELD",
-                                        "field": "VLAN_VID",
-                                        "value": "<local_vid>"
-                                    },
-                                    {
-                                        "action": "OUTPUT",
-                                        "port": "<port_no>"
-                                    }
-                                ]
-                            }
-                        ]
-                    }
-                ],
-                "flow_paths": [
-                    {
-                        "doc": [
-                            "This object contains just a few examples of flow paths, it is not",
-                            "a comprehensive list of the flow paths required for this TTP.  It is",
-                            "intended that the flow paths array could include either a list of",
-                            "required flow paths or a list of specific flow paths that are not",
-                            "required (whichever is more concise or more useful."
-                        ],
-                        "name": "L2-2",
-                        "path": [
-                            "Non-Control-Frame",
-                            "IV-pass",
-                            "Known-MAC",
-                            "ACLskip",
-                            "L2-Unicast",
-                            "EgressPort"
-                        ]
-                    },
-                    {
-                        "name": "L2-3",
-                        "path": [
-                            "Non-Control-Frame",
-                            "IV-pass",
-                            "Known-MAC",
-                            "ACLskip",
-                            "L2-Multicast",
-                            "L2Mcast",
-                            "[EgressPort]"
-                        ]
-                    },
-                    {
-                        "name": "L2-4",
-                        "path": [
-                            "Non-Control-Frame",
-                            "IV-pass",
-                            "Known-MAC",
-                            "ACL-skip",
-                            "VID-flood",
-                            "VIDflood",
-                            "[EgressPort]"
-                        ]
-                    },
-                    {
-                        "name": "L2-5",
-                        "path": [
-                            "Non-Control-Frame",
-                            "IV-pass",
-                            "Known-MAC",
-                            "ACLskip",
-                            "L2-Drop"
-                        ]
-                    },
-                    {
-                        "name": "v4-1",
-                        "path": [
-                            "Non-Control-Frame",
-                            "IV-pass",
-                            "Known-MAC",
-                            "ACLskip",
-                            "L2-Router-MAC",
-                            "IPv4",
-                            "v4-Unicast",
-                            "NextHop",
-                            "EgressPort"
-                        ]
-                    },
-                    {
-                        "name": "v4-2",
-                        "path": [
-                            "Non-Control-Frame",
-                            "IV-pass",
-                            "Known-MAC",
-                            "ACLskip",
-                            "L2-Router-MAC",
-                            "IPv4",
-                            "v4-Unicast-ECMP",
-                            "L3ECMP",
-                            "NextHop",
-                            "EgressPort"
-                        ]
-                    }
-                ]
-            }
-        ]
-    }
-}
------------------------------------------------------
-
-==== Making a REST Call
-
-In this example we'll do a PUT to install the sample TTP from above
-into OpenDaylight and then retrieve it both as JSON and as XML. We'll
-use the https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm[
-Postman - REST Client] for Chrome in the examples, but any method of
-accessing REST should work.
-
-First, we'll fill in the basic information:
-
-.Filling in URL, content, Content-Type and basic auth
-image::ttp-screen1-basic-auth.png[width=500]
-
-. Set the URL to +http://localhost:8181/restconf/config/onf-ttp:opendaylight-ttps/onf-ttp:table-type-patterns/+
-. Set the action to +PUT+
-. Click Headers and
-. Set a header for +Content-Type+ to +application/json+
-. Make sure the content is set to raw and
-. Copy the sample TTP from above into the content
-. Click the Basic Auth tab and
-. Set the username and password to admin
-. Click Refresh headers
-
-.Refreshing basic auth headers
-image::ttp-screen2-applied-basic-auth.png[width=500]
-
-After clicking Refresh headers, we can see that a new header
-(+Authorization+) has been created and this will allow us to
-authenticate to make the REST call.
-
-.PUTting a TTP
-image::ttp-screen3-sent-put.png[width=500]
-
-At this point, clicking send should result in a Status response of +200
-OK+ indicating we've successfully PUT the TTP into OpenDaylight.
-
-.Retrieving the TTP as json via a GET
-image::ttp-screen4-get-json.png[width=500]
-
-We can now retrieve the TTP by:
-
-. Changing the action to +GET+
-. Setting an +Accept+ header to +application/json+ and
-. Pressing send
-
-.Retrieving the TTP as xml via a GET
-image::ttp-screen5-get-xml.png[width=500]
-
-The same process can retrieve the content as XML by setting the
-+Accept+ header to +application/xml+.
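For readers not using Postman, the same flow can be sketched with Python's urllib. The requests are only constructed here, not sent, since they assume a controller listening on localhost:8181 with the default admin/admin credentials; the sample TTP body is abbreviated:

```python
import base64
import json
import urllib.request

url = ("http://localhost:8181/restconf/config/"
       "onf-ttp:opendaylight-ttps/onf-ttp:table-type-patterns/")
# Basic auth header equivalent to Postman's "Refresh headers" step.
auth = "Basic " + base64.b64encode(b"admin:admin").decode()

ttp = {"table-type-patterns": {"table-type-pattern": []}}  # sample TTP goes here

put = urllib.request.Request(
    url, data=json.dumps(ttp).encode(), method="PUT",
    headers={"Content-Type": "application/json", "Authorization": auth})

get = urllib.request.Request(
    url, method="GET",
    headers={"Accept": "application/json", "Authorization": auth})

# urllib.request.urlopen(put) / urllib.request.urlopen(get) would
# perform the actual calls against a running controller.
```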
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/ttp-model-developer-guide.html
index 8bb47f2f3fb421cb34282fec037aa6063418de6d..1d3d8753eb1b92e2a691f6f5391ec8c10b43c200 100644 (file)
@@ -1,457 +1,3 @@
 == UNI Manager Plug-In Developer Guide\r
 \r
-The UNI Manager plug-in exposes capabilities of OpenDaylight to configure\r
-networked equipment to operate according to Metro Ethernet Forum (MEF)\r
-requirements for User Network Interface (UNI) and to support the creation of\r
-an Ethernet Virtual Connection (EVC) according to MEF requirements.\r
-\r
-UNI Manager adheres to a minimum set of functionality defined by MEF 7.2 and\r
-10.2 specifications.\r
-\r
-=== Functionality\r
-The UNI manager plugin enables the creation of Ethernet Virtual Connections (EVC) as defined by the Metro Ethernet Forum (MEF). An EVC provides a simulated Ethernet connection among LANs existing at different geographical locations. This version of the plugin is limited to connecting two LANs.\r
-\r
-As defined by MEF, each location to be connected must have a User Network Interface, (UNI) which is a device that connects the user LAN to the EVC providers network.\r
-\r
-UNI and EVC are implemented via Open vSwitch, leveraging the OVSDB project: creating a UNI ends up creating an OVSDB node with an _ovsbr0_ bridge, interface and port. While creating a UNI, based on the MEF requirements, one can specify a desired QoS; this leverages the QoS and Queue tables of the OVS database (see the documentation below for full details).\r
-The same applies to the EVC, to which one can apply a given QoS to control the speed of the connection.\r
-Creating an EVC will add two additional ports to the _ovsbr0_ bridge:\r
-\r
-- _eth0_: the interface connected to a client laptop\r
-- _gre1_: the interface used for GRE tunnelling between the two clients (VXLAN).\r
-\r
-Finally, within this release, UniMgr is more a proof of concept than a framework to be used in production. Previous demonstrations used Raspberry Pis, which have low NIC bandwidth, so the speeds defined in the API are actually mapped as follows:\r
-\r
-\r
-- `speed-10M`  => 1 Mb\r
-- `speed-100M` => 2 Mb\r
-- `speed-1G`   => 3 Mb\r
-- `speed-10G`  => 4 Mb\r
-\r
-=== UNI Manager REST APIs\r
-\r
-This API enables the creation and management of both UNIs and EVCs. To create an EVC using this interface, you would first create two UNIs via the following REST API (see the documentation below for full details):\r
-\r
-----\r
-PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>\r
-----\r
-\r
-You would then create an EVC, indicating that it is a connection between the two UNIs that were just created, via the following REST API (see the documentation below for full details):\r
-----\r
-PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>\r
-----\r
-You can then change attributes of the UNIs or EVCs, and delete these entities, using this API (see the documentation below for full details).\r
-\r
-This plugin uses the OpenDaylight OVSDB plugin to provision and manage devices which implement the OVSDB REST interface, as needed to realize the UNI and EVC life cycles.
-\r
-NOTE: Both the configuration and operational datastores can be operated upon by the unimgr REST API.  The only difference between the two is in the REST path: the configuration datastore represents the desired state, while the operational datastore represents the actual state.
-\r
-For operating on the config datastore:
-----\r
-http://<host-ip>:8181/restconf/config/<PATH>\r
-----\r
-For operating on the operational datastore:
-----\r
-http://<host-ip>:8181/restconf/operational/<PATH>\r
-----\r
-The documentation below shows examples of both.
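The only difference between the two endpoints is one path segment, so a small helper can build either URL. This is an illustrative sketch, not part of unimgr itself; the host and path values are placeholders based on the examples above:

```python
def unimgr_path(host, datastore, path):
    """Build a unimgr RESTCONF URL against either datastore.

    'config' addresses the desired state, 'operational' the actual state.
    """
    if datastore not in ("config", "operational"):
        raise ValueError("datastore must be 'config' or 'operational'")
    return "http://%s:8181/restconf/%s/%s" % (host, datastore, path)

# Example: the UNI node path used throughout this section.
uni_path = ("network-topology:network-topology/topology/unimgr:uni/"
            "node/uni-1")
config_url = unimgr_path("192.168.1.200", "config", uni_path)
```

Any HTTP client (curl, a Python HTTP library, etc.) can then issue the PUT/GET/DELETE calls shown below against these URLs.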
-\r
-==== CREATE UNI\r
-\r
-----\r
-PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>\r
-----\r
-NOTE: The uni-id is determined and supplied by the caller, both in the path and in the body of the REST message.
-\r
-Request Body\r
-----\r
-{\r
-  "network-topology:node": [\r
-    {\r
-      "node-id": "uni-id",\r
-      "speed": {\r
-        "speed-1G": 1\r
-      },\r
-      "uni:mac-layer": "IEEE 802.3-2005",\r
-      "uni:physical-medium": "100BASE-T",\r
-      "uni:mode": "syncEnabled",
-      "uni:type": "UNITYPE",
-      "uni:mtu-size": 1600,
-      "uni:mac-address": "68:5b:35:bb:f8:3e",
-      "uni:ip-address": "192.168.2.11"
-    }\r
-  ]\r
-}\r
-----\r
-Response on success: 200\r
-\r
-Input Options\r
-----\r
-"speed"\r
-    "speed-10M"\r
-    "speed-100M"\r
-    "speed-1G"\r
-    "speed-10G"\r
-"uni:mac-layer"\r
-    "IEEE 802.3-2005"\r
-"uni:physical-medium"
-    "10BASE-T"\r
-    "100BASE-T"\r
-    "1000BASE-T"\r
-    "10GBASE-T"\r
-"uni:mode"\r
-    "syncEnabled"\r
-    "syncDisabled"\r
-"uni:type"
-    "UNITYPE"
-"uni:mtu-size"
-    1600 recommended
-----\r
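To script this call, one can assemble the URL and request body programmatically and send them with any HTTP client. A hedged Python sketch (not part of unimgr; the host and field values are placeholders taken from the example request above):

```python
def build_create_uni(host, uni_id, mac, ip,
                     speed="speed-1G", medium="100BASE-T"):
    """Return (url, body) for a CREATE/UPDATE UNI PUT request."""
    url = ("http://%s:8181/restconf/config/"
           "network-topology:network-topology/topology/unimgr:uni/"
           "node/%s" % (host, uni_id))
    body = {
        "network-topology:node": [{
            "node-id": uni_id,            # must match <uni-id> in the URL
            "speed": {speed: 1},
            "uni:mac-layer": "IEEE 802.3-2005",
            "uni:physical-medium": medium,
            "uni:mode": "syncEnabled",
            "uni:type": "UNITYPE",
            "uni:mtu-size": 1600,         # recommended value
            "uni:mac-address": mac,
            "uni:ip-address": ip,
        }]
    }
    return url, body

url, body = build_create_uni("192.168.1.200", "uni-1",
                             "68:5b:35:bb:f8:3e", "192.168.2.11")
```

The returned body would be serialized to JSON and PUT to the returned URL with `Content-Type: application/json` and controller credentials.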
-On the OVS instance, the QoS and Queue tables were updated, and a bridge was added:
-----\r
-mininet@mininet-vm:~$ sudo ovs-vsctl list QoS\r
-_uuid               : 341c6e9d-ecb4-44ff-a21c-db644b466f4c\r
-external_ids        : {opendaylight-qos-id="qos://18db2a79-5655-4a94-afac-94015245e3f6"}\r
-other_config        : {dscp="0", max-rate="3000000"}\r
-queues              : {}\r
-type                : linux-htb\r
-\r
-mininet@mininet-vm:~$ sudo ovs-vsctl list Queue\r
-_uuid               : 8a0e1fc1-5d5f-4e7a-9c4d-ec412a5ec7de\r
-dscp                : 0\r
-external_ids        : {opendaylight-queue-id="queue://740a3809-5bef-4ad4-98d6-2ba81132bd06"}\r
-other_config        : {dscp="0", max-rate="3000000"}\r
-\r
-mininet@mininet-vm:~$ sudo ovs-vsctl show\r
-0b8ed0aa-67ac-4405-af13-70249a7e8a96\r
-    Manager "tcp:192.168.1.200:6640"\r
-        is_connected: true\r
-    Bridge "ovsbr0"\r
-        Port "ovsbr0"\r
-            Interface "ovsbr0"\r
-                type: internal\r
-    ovs_version: "2.4.0"\r
-----\r
-==== RETRIEVE UNI\r
-\r
-----
-GET http://<host-ip>:8181/restconf/operational/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>
-----
-\r
-Response on success: 200
-----\r
-{\r
-    "node": [\r
-    {\r
-        "node-id": "uni-id",\r
-        "cl-unimgr-mef:speed": {\r
-            "speed-1G": [null]\r
-        },\r
-        "cl-unimgr-mef:mac-layer": "IEEE 802.3-2005",\r
-        "cl-unimgr-mef:physical-medium": "1000BASE-T",\r
-        "cl-unimgr-mef:mode": "syncEnabled",\r
-        "cl-unimgr-mef:type": "UNITYPE",\r
-        "cl-unimgr-mef:mtu-size": "1600",\r
-        "cl-unimgr-mef:mac-address": "00:22:22:22:22:22",\r
-        "cl-unimgr-mef:ip-address": "10.36.0.22"\r
-    }\r
-    ]\r
-}\r
-----\r
-Output Options\r
-----\r
-"cl-unimgr-mef:speed"\r
-    "speed-10M"\r
-    "speed-100M"\r
-    "speed-1G"\r
-    "speed-10G"\r
-"cl-unimgr-mef:mac-layer"
-    "IEEE 802.3-2005"
-"cl-unimgr-mef:physical-medium"
-    "10BASE-T"
-    "100BASE-T"
-    "1000BASE-T"
-    "10GBASE-T"
-"cl-unimgr-mef:mode"
-    "syncEnabled"
-    "syncDisabled"
-"cl-unimgr-mef:type"
-    "UNITYPE"
-----\r
-\r
-==== UPDATE UNI\r
-----\r
-PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>\r
-----\r
-NOTE: The uni-id is determined and supplied by the caller, both in the path and in the body of the REST message.
-\r
-Request Body\r
-----\r
-{\r
-    "network-topology:node": [\r
-    {\r
-        "node-id": "uni-id",\r
-        "speed": {\r
-            "speed-1G": 1\r
-        },\r
-        "uni:mac-layer": "IEEE 802.3-2005",\r
-        "uni:physical-medium": "100BASE-T",\r
-        "uni:mode": "syncEnabled",
-        "uni:type": "UNITYPE",
-        "uni:mtu-size": 1600,
-        "uni:mac-address": "68:5b:35:bb:f8:3e",
-        "uni:ip-address": "192.168.2.11"
-    }\r
-    ]\r
-}\r
-----\r
-Response on success: 200\r
-\r
-Input Options\r
-----\r
-"speed"\r
-    "speed-10M"\r
-    "speed-100M"\r
-    "speed-1G"\r
-    "speed-10G"\r
-"uni:mac-layer"\r
-    "IEEE 802.3-2005"\r
-"uni:physical-medium"
-    "10BASE-T"\r
-    "100BASE-T"\r
-    "1000BASE-T"\r
-    "10GBASE-T"\r
-"uni:mode"\r
-    "syncEnabled"\r
-    "syncDisabled"\r
-"uni:type"\r
-    "UNITYPE"\r
-"uni:mtu-size"\r
-    1600 recommended
-----\r
-==== DELETE UNI\r
-----\r
-DELETE http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:uni/node/<uni-id>\r
-----\r
-Response on success: 200\r
-\r
-==== CREATE EVC\r
-----\r
-PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>\r
-----\r
-NOTE: The evc-id is determined and supplied by the caller, both in the path and in the body of the REST message.
-\r
-Request Body\r
-----\r
-{\r
-    "link": [\r
-    {\r
-        "link-id": "evc-1",\r
-        "source": {\r
-            "source-node": "/network-topology/topology/node/uni-1"\r
-        },\r
-        "destination": {\r
-            "dest-node": "/network-topology/topology/node/uni-2"\r
-      },\r
-      "cl-unimgr-mef:uni-source": [\r
-        {\r
-            "order": "0",\r
-            "ip-address": "192.168.2.11"\r
-        }\r
-        ],\r
-        "cl-unimgr-mef:uni-dest": [\r
-        {\r
-            "order": "0",\r
-            "ip-address": "192.168.2.10"\r
-        }\r
-        ],\r
-        "cl-unimgr-mef:cos-id": "gold",\r
-        "cl-unimgr-mef:ingress-bw": {\r
-            "speed-10G": {}\r
-        },\r
-        "cl-unimgr-mef:egress-bw": {\r
-            "speed-10G": {}\r
-      }\r
-    }\r
-    ]\r
-}\r
-----\r
-Response on success: 200\r
-\r
-Input Options
-----\r
-["source"]["source-node"]
-    Id of 1st UNI to associate the EVC with
-["cl-unimgr-mef:uni-source"][0]["ip-address"]
-    IP address of 1st UNI to associate the EVC with
-["destination"]["dest-node"]
-    Id of 2nd UNI to associate the EVC with
-["cl-unimgr-mef:uni-dest"][0]["ip-address"]
-    IP address of 2nd UNI to associate the EVC with
-"cl-unimgr-mef:cos-id"
-    class of service id to associate with the EVC
-"cl-unimgr-mef:ingress-bw"\r
-"cl-unimgr-mef:egress-bw"\r
-    "speed-10M"\r
-    "speed-100M"\r
-    "speed-1G"\r
-    "speed-10G"\r
-----\r
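As with the UNI, the EVC request can be assembled programmatically. A hedged sketch (not part of unimgr; identifiers and addresses are placeholders from the example above):

```python
def build_create_evc(host, evc_id, uni1_id, uni1_ip, uni2_id, uni2_ip,
                     cos_id="gold", bw="speed-10G"):
    """Return (url, body) for a CREATE/UPDATE EVC PUT request."""
    url = ("http://%s:8181/restconf/config/"
           "network-topology:network-topology/topology/unimgr:evc/"
           "link/%s" % (host, evc_id))
    node = "/network-topology/topology/node/%s"
    body = {"link": [{
        "link-id": evc_id,                 # must match <evc-id> in the URL
        "source": {"source-node": node % uni1_id},
        "destination": {"dest-node": node % uni2_id},
        "cl-unimgr-mef:uni-source": [
            {"order": "0", "ip-address": uni1_ip}],
        "cl-unimgr-mef:uni-dest": [
            {"order": "0", "ip-address": uni2_ip}],
        "cl-unimgr-mef:cos-id": cos_id,
        "cl-unimgr-mef:ingress-bw": {bw: {}},
        "cl-unimgr-mef:egress-bw": {bw: {}},
    }]}
    return url, body
```

The two UNIs referenced in `source`/`destination` must already exist (created via the CREATE UNI call above) before the EVC PUT is issued.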
-On the OVS instance, the QoS and Queue tables were updated, and two ports were added:
-----\r
-mininet@mininet-vm:~$ sudo ovs-vsctl list QoS\r
-_uuid               : 341c6e9d-ecb4-44ff-a21c-db644b466f4c\r
-external_ids        : {opendaylight-qos-id="qos://18db2a79-5655-4a94-afac-94015245e3f6"}\r
-other_config        : {dscp="0", max-rate="3000000"}\r
-queues              : {}\r
-type                : linux-htb\r
-\r
-mininet@mininet-vm:~$ sudo ovs-vsctl list Queue\r
-_uuid               : 8a0e1fc1-5d5f-4e7a-9c4d-ec412a5ec7de\r
-dscp                : 0\r
-external_ids        : {opendaylight-queue-id="queue://740a3809-5bef-4ad4-98d6-2ba81132bd06"}\r
-other_config        : {dscp="0", max-rate="3000000"}\r
-\r
-mininet@mininet-vm:~$ sudo ovs-vsctl show\r
-0b8ed0aa-67ac-4405-af13-70249a7e8a96\r
-    Manager "tcp:192.168.1.200:6640"\r
-        is_connected: true\r
-    Bridge "ovsbr0"\r
-        Port "ovsbr0"\r
-            Interface "ovsbr0"\r
-                type: internal\r
-        Port "eth1"\r
-            Interface "eth1"\r
-        Port "gre1"\r
-            Interface "gre1"\r
-                type: gre\r
-                options: {remote_ip="192.168.1.233"}\r
-ovs_version: "2.4.0"\r
-----\r
-==== RETRIEVE EVC\r
-----\r
-GET http://<host-ip>:8181/restconf/operational/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>\r
-----\r
-Response on success: 200\r
-----\r
-{\r
-    "link": [\r
-    {\r
-        "link-id": "evc-5",\r
-        "source": {\r
-            "source-node": "/network-topology/topology/node/uni-9"\r
-        },\r
-        "destination": {\r
-            "dest-node": "/network-topology/topology/node/uni-10"\r
-        },\r
-        "cl-unimgr-mef:uni-dest": [\r
-        {\r
-            "order": 0,\r
-            "uni": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='unimgr:uni']/network-topology:node[network-topology:node-id='uni-10']",\r
-            "ip-address": "10.0.0.22"\r
-        }\r
-        ],\r
-        "cl-unimgr-mef:ingress-bw": {\r
-            "speed-1G": [null]\r
-        },\r
-        "cl-unimgr-mef:cos-id": "new1",\r
-        "cl-unimgr-mef:uni-source": [\r
-        {\r
-            "order": 0,\r
-            "uni": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='unimgr:uni']/network-topology:node[network-topology:node-id='uni-9']",\r
-            "ip-address": "10.0.0.21"\r
-        }\r
-        ],\r
-        "cl-unimgr-mef:egress-bw": {\r
-        "speed-1G": [null]\r
-      }\r
-    }\r
-    ]\r
-}\r
-----\r
-Output Options\r
-----\r
-["source"]["source-node"]
-["cl-unimgr-mef:uni-source"][0]["uni"]
-    Id of 1st UNI associated with the EVC
-["cl-unimgr-mef:uni-source"][0]["ip-address"]
-    IP address of 1st UNI associated with the EVC
-["destination"]["dest-node"]
-["cl-unimgr-mef:uni-dest"][0]["uni"]
-    Id of 2nd UNI associated with the EVC
-["cl-unimgr-mef:uni-dest"][0]["ip-address"]
-    IP address of 2nd UNI associated with the EVC
-"cl-unimgr-mef:cos-id"
-    class of service id associated with the EVC
-"cl-unimgr-mef:ingress-bw"\r
-"cl-unimgr-mef:egress-bw"\r
-    "speed-10M"\r
-    "speed-100M"\r
-    "speed-1G"\r
-    "speed-10G"\r
-----\r
-==== UPDATE EVC\r
-----\r
-PUT http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>\r
-----\r
-NOTE: The evc-id is determined and supplied by the caller, both in the path and in the body of the REST message.
-\r
-Request Body\r
-----\r
-{\r
-    "link": [\r
-    {\r
-        "link-id": "evc-1",\r
-        "source": {\r
-            "source-node": "/network-topology/topology/node/uni-1"\r
-        },\r
-        "destination": {\r
-            "dest-node": "/network-topology/topology/node/uni-2"\r
-        },\r
-        "cl-unimgr-mef:uni-source": [\r
-        {\r
-            "order": "0",\r
-            "ip-address": "192.168.2.11"\r
-        }\r
-        ],\r
-        "cl-unimgr-mef:uni-dest": [\r
-        {\r
-            "order": "0",\r
-            "ip-address": "192.168.2.10"\r
-        }\r
-        ],\r
-        "cl-unimgr-mef:cos-id": "gold",\r
-        "cl-unimgr-mef:ingress-bw": {\r
-            "speed-10G": {}\r
-        },\r
-        "cl-unimgr-mef:egress-bw": {\r
-        "speed-10G": {}\r
-      }\r
-    }\r
-    ]\r
-}\r
-----\r
-Response on success: 200\r
-\r
-Input Options
-----\r
-["source"]["source-node"]
-    Id of 1st UNI to associate the EVC with
-["cl-unimgr-mef:uni-source"][0]["ip-address"]
-    IP address of 1st UNI to associate the EVC with
-["destination"]["dest-node"]
-    Id of 2nd UNI to associate the EVC with
-["cl-unimgr-mef:uni-dest"][0]["ip-address"]
-    IP address of 2nd UNI to associate the EVC with
-"cl-unimgr-mef:cos-id"
-    class of service id to associate with the EVC
-"cl-unimgr-mef:ingress-bw"\r
-"cl-unimgr-mef:egress-bw"\r
-    "speed-10M"\r
-    "speed-100M"\r
-    "speed-1G"\r
-    "speed-10G"\r
-----\r
-==== DELETE EVC\r
-----\r
-DELETE http://<host-ip>:8181/restconf/config/network-topology:network-topology/topology/unimgr:evc/link/<evc-id>
-----\r
-Response on success: 200
\ No newline at end of file
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/uni-manager-plug-in-developer-guide.html\r
index fa8614d44cfa6b15e5864943f55628e1c95ce434..31a440aef621c02964920cf8fda27ee08f4cb0c6 100644 (file)
@@ -1,32 +1,3 @@
 == Unified Secure Channel
 
-=== Overview
-The Unified Secure Channel (USC) feature provides REST API, manager, and plugin for unified
-secure channels.  The REST API provides a northbound api.  The manager
-monitors, maintains, and provides channel related services.  The plugin
-handles the lifecycle of channels.
-
-=== USC Channel Architecture
-* USC Agent
-  ** The USC Agent provides proxy and agent functionality on top of all standard protocols supported by the device.  It initiates call-home with the controller, maintains live connections with the controller, acts as a demuxer/muxer for packets with the USC header, and authenticates the controller.
-* USC Plugin
-  ** The USC Plugin is responsible for communication between the controller and the USC agent.  It responds to call-home with the controller, maintains live connections with the devices, acts as a muxer/demuxer for packets with the USC header, and provides support for TLS/DTLS.
-* USC Manager
-  ** The USC Manager handles configurations, high availability, security, monitoring, and clustering support for USC.
-* USC UI
-  ** The USC UI is responsible for displaying a graphical user interface representing the state of USC in the OpenDaylight DLUX UI.
-
-=== USC Channel APIs and Interfaces
-This section describes the APIs for interacting with the unified secure
-channels.
-
-==== USC Channel Topology API
-The USC project maintains a YANG-based topology in MD-SAL.  These models are available via RESTCONF.
-
-* Name: view-channel
-* URL: http://${IPADDRESS}:8181/restconf/operations/usc-channel:view-channel
-* Description: Views the current state of the USC environment.
-
-=== API Reference Documentation
-Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html, sign in, and expand the usc-channel panel.  From there, users can execute various API calls to test their USC deployment.
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/unified-secure-channel.html
index be5256c73d3ad4033606d081d5b2fb66a247b04e..47876516a8e9e18a45c08f9f1df75a097ec9d674 100644 (file)
@@ -1,69 +1,3 @@
 == YANG-PUSH Developer Guide
 
-=== Overview
-The YANG PUBSUB project allows subscriptions to be placed on
-targeted subtrees of YANG datastores residing on remote devices.
-Changes in YANG objects within the remote subtree can be pushed
-to an OpenDaylight controller as specified, without requiring
-the controller to make a continuous set of fetch requests.
-
-==== YANG-PUSH capabilities available
-
-This module contains the base code which embodies the intent of YANG-PUSH requirements for subscription as defined in {i2rs-pub-sub-requirements} [https://datatracker.ietf.org/doc/draft-ietf-i2rs-pub-sub-requirements/].   The mechanism for delivering on these YANG-PUSH requirements over Netconf transport is defined in {netconf-yang-push} [netconf-yang-push: https://tools.ietf.org/html/draft-ietf-netconf-yang-push-00].  
-
-Note that in the current release, not all capabilities of draft-ietf-netconf-yang-push are realized.  Currently only *create-subscription* RPC support from ietf-datastore-push@2015-10-15.yang is implemented, and only for periodic subscriptions.  There is of course intent to provide much additional functionality in future OpenDaylight releases.
-
-==== Future YANG-PUSH capabilities
-
-Over time, the intent is to flesh out more robust capabilities which will allow OpenDaylight applications to subscribe to YANG-PUSH compliant devices.  Capabilities for future releases will include:
-
-Support for subscription change/delete:
-
-* *modify-subscription* RPC support for all mountpoint devices or a particular mountpoint device
-* *delete-subscription* RPC support for all mountpoint devices or a particular mountpoint device
-
-Support for static subscriptions:
-This will enable the receipt of subscription updates pushed from publishing devices where no signaling from the controller has been used to establish the subscriptions.
-
-Support for additional transports:
-NETCONF is not the only transport of interest to OpenDaylight or the subscribed devices.  Over time this code will support the RESTCONF and HTTP/2 transport requirements defined in {netconf-restconf-yang-push} [https://tools.ietf.org/html/draft-voit-netconf-restconf-yang-push-01].
-
-
-=== YANG-PUSH Architecture
-
-The code architecture of YANG-PUSH consists of two main elements:
-
-* YANGPUSH Provider
-* YANGPUSH Listener
-
-The YANGPUSH Provider receives create-subscription requests from applications and then establishes/registers the corresponding listener which will receive information pushed by a publisher.  In addition, the YANGPUSH Provider invokes an augmented OpenDaylight create-subscription RPC which enables applications to register for notification as per RFC 5277. This augmentation adds periodic time period (duration) and subscription-id values to the existing RPC parameters. The Java package supporting this capability is `org.opendaylight.yangpush.impl`. The following class supports the YANGPUSH Provider capability:
-
-(1) YangpushDomProvider
-The Binding Independent version. It uses a neutral Document Object Model format for data and API calls, which is independent of any generated Java language bindings from the YANG model.
-
-
-The YANGPUSH Listener accepts update notifications from a device after they have been de-encapsulated from the NETCONF transport.  The YANGPUSH Listener then passes these updates to MD-SAL.  This function is implemented via the YangpushDOMNotificationListener class within the `org.opendaylight.yangpush.listner` Java package.
-
-=== Key APIs and Interfaces
-
-==== YangpushDomProvider
-
-Central to this is onSessionInitiated, which acquires the Document Object Model based versions of MD-SAL services, including the MountPoint service and RPCs.  Via these acquired services, registerDataChangeListener is invoked on YangpushDOMNotificationListener.
-
-==== YangpushDOMNotificationListener
-This API handles received push updates which are inbound to the listener and places them in MD-SAL.  Key methods include:
-
-onPushUpdate
-Converts and validates the encoding of the pushed subscription update. If the subscription exists and is active, calls updateDataStoreForPushUpdate so that the information can be put in MD-SAL. Finally logs the pushed subscription update as well as some additional context information.
-
-updateDataStoreForPushUpdate
-Used to put the published information into MD-SAL.  This pushed information will also include elements such as the subscription-id, the identity of the publisher, the time of the update, the incoming encoding type, and the pushed YANG subtree information.
-
-YangpushDOMNotificationListener
-Starts the listener tracking a new Subscription ID from a particular publisher.
-
-
-=== API Reference Documentation
-Javadocs are generated when building mvn:site
-and are located in the target/ directory of each module.
-
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/yang-push-developer-guide.html
diff --git a/manuals/developer-guide/src/main/asciidoc/yangtools/yang-java-binding-explained.adoc b/manuals/developer-guide/src/main/asciidoc/yangtools/yang-java-binding-explained.adoc
deleted file mode 100644 (file)
index f61ba50..0000000
+++ /dev/null
@@ -1,1101 +0,0 @@
-=== YANG Java Binding: Mapping rules\r
-This chapter covers the details of mapping YANG to Java.\r
-\r
-NOTE: The following source code examples do not show canonical generated
-code, but rather illustrative examples. Generated classes and interfaces may
-differ from these examples, but the APIs are preserved.
-\r
-==== General conversion rules\r
-\r
-===== Package names of YANG models\r
-\r
-The package name consists of the following parts: +\r
-\r
-* *Opendaylight prefix* - Specifies the opendaylight prefix. Every package name\r
-starts with the prefix `org.opendaylight.yang.gen.v`.\r
-* *Java Binding version* - Specifies the YANG Java Binding version.
-  The current Binding version is `1`.
-* *Namespace* - Specified by the value of `namespace` substatement.\r
-   URI is converted to package name structure.\r
-* *Revision* - Specifies the concatenation of the word `rev` and the value of the `module`
-  substatement's `revision` argument, without leading zeros before month and day.
-  For example: `rev201379`
-\r
-After the package name is generated, we check if it contains any Java keywords\r
-or starts with a digit. If so, then we add an underscore before the offending\r
-token.\r
-\r
-The following is a list of keywords which are prefixed with underscore:\r
-\r
-abstract, assert, boolean, break, byte, case, catch, char, class, const,\r
-continue, default, double, do, else, enum, extends, false, final, finally,\r
-float, for, goto, if, implements, import, instanceof, int, interface, long,\r
-native, new, null, package, private, protected, public, return, short, static,\r
-strictfp, super, switch, synchronized, this, throw, throws, transient, true, try,\r
-void, volatile, while\r
-\r
-As an example, suppose the following YANG model:
-\r
-[source, yang]\r
-----\r
-module module {\r
-    namespace "urn:2:case#module";\r
-    prefix "sbd";\r
-    organization "OPEN DAYLIGHT";\r
-    contact "http://www.example.com/";\r
-    revision 2013-07-09 {\r
-    }\r
-}\r
-----\r
-\r
-After applying the rules (prefixing digits and Java keywords with an underscore), the resulting
-package name is `org.opendaylight.yang.gen.v1.urn._2._case.module.rev201379`
-\r
-===== Additional Packages\r
-\r
-When a YANG statement contains certain specific YANG
-substatements, additional packages are generated to designate this containment.
-The table below provides details of parent statements and nested substatements
-which yield additional package generation:
-\r
-[options="header"]\r
-|===\r
-|Parent statement  | Substatement\r
-|`list`  |list, container, choice\r
-|`container` | list, container, choice\r
-|`choice` | leaf, list, leaf-list, container, case\r
-|`case`  | list, container, choice\r
-|rpc `input` or `output` |  list, container, (choice isn't supported)\r
-|`notification` |  list, container, (choice isn't supported)\r
-|`augment`  | list, container, choice, case
-|===\r
-\r
-Substatements are not only mapped to methods of the interface
-representing the parent statement, but they also generate packages with
-names consisting of the parent statement package name with the parent statement
-name appended.
-\r
-For example, this YANG model considers the container statement `cont` as the\r
-direct substatement of the module.\r
-\r
-[source, yang]\r
-----\r
-container cont {\r
-  container cont-inner {\r
-  }\r
-  list outter-list {\r
-    list list-in-list {\r
-    }\r
-  }\r
-}\r
-----\r
-\r
-Container `cont` is the parent statement for the substatements\r
-`cont-inner` and `outter-list`. `list outter-list` is the parent\r
-statement for substatement `list-in-list`.\r
-\r
-Java code is generated in the following structure: +\r
-\r
-* `org.opendaylight.yang.gen.v1.urn.module.rev201379` - package contains direct\r
-   substatements of module statement\r
-** `Cont.java`\r
-* `org.opendaylight.yang.gen.v1.urn.module.rev201379.cont` - package contains\r
-  substatements of `cont` container statement\r
-** `ContInner.java` - interface representing container `cont-inner`\r
-** `OutterList.java` - interface representing list `outter-list`
-* `org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.outter.list` - package\r
-  contains substatements of outter-list list element\r
-  ** `ListInList.java`\r
-\r
-===== Class and interface names\r
-Some YANG statements are mapped to Java classes and interfaces. The name of a YANG
-element may contain various characters which aren't permitted in Java class names.
-First, whitespace is trimmed from the YANG name. Next, the characters space, -, `
-are deleted and the subsequent letter is capitalized. Finally, the first letter is
-capitalized.
-\r
-For example, \r
-`example-name_ without_capitalization` would map to\r
-`ExampleNameWithoutCapitalization`.\r
-\r
-===== Getter and setter names\r
-In some cases, YANG statements are converted to getter and/or setter methods.\r
-The process for getter is:\r
-\r
-. the name of YANG statement is converted to Java class name style as \r
-  <<_class_and_interface_names,explained above>>.\r
-. the word `get` is added as a prefix; if the resulting type is `Boolean`, the name
-  is prefixed with `is` instead of `get`.
-. the return type of the getter method is set to the Java type representing the substatement
-\r
-The process for setter is:\r
-\r
-. the name of YANG statement is converted to Java class name style as\r
-  <<_class_and_interface_names,explained above>>.\r
-. the word `set` is added as prefix\r
-. the input parameter name is set to element's name converted to Java parameter style\r
-. the return parameter is set to builder type\r
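The naming steps above can be sketched as a small Python helper. Again an illustrative approximation of the rules, not the actual generator:

```python
import re

def to_java_name(yang_name):
    """YANG identifier -> Java class-name style: drop separators, capitalize."""
    parts = [p for p in re.split(r"[\s\-_`]+", yang_name.strip()) if p]
    return "".join(p[0].upper() + p[1:] for p in parts)

def getter_name(yang_name, java_type):
    # Boolean-typed leaves get an 'is' prefix instead of 'get'.
    prefix = "is" if java_type == "Boolean" else "get"
    return prefix + to_java_name(yang_name)

def setter_name(yang_name):
    return "set" + to_java_name(yang_name)

print(getter_name("sync-enabled", "Boolean"))  # isSyncEnabled
print(setter_name("mtu-size"))                 # setMtuSize
```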
-\r
-==== Statement specific mapping\r
-\r
-===== module statement\r
-\r
-A YANG `module` statement is converted to two Java classes.
-Each class is in a separate Java file. The names of the Java files are
-composed as follows:
-`<module name><suffix>.java` where `<suffix>` is either data or service.
-\r
-====== Data Interface\r
-\r
-Data Interface has a mapping similar to container, but contains only top level\r
-nodes defined in module.\r
-\r
-Data interface serves only as marker interface for type-safe APIs of\r
-`InstanceIdentifier`.\r
-\r
-====== Service Interface\r
-\r
-Service Interface serves to describe RPC contract defined in the module.\r
-This RPC contract is defined by `rpc` statements.\r
-\r
-RPC implementation usually implement this interface and users of the RPCs\r
-use this interface to invoke RPCs.\r
-\r
-===== container statement\r
-YANG containers are mapped to Java interfaces which extend the Java DataObject and\r
-Augmentable<container-interface>, where container-interface is the name of the mapped\r
-interface.\r
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-container cont {\r
-\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.Cont.java\r
-[source, java]\r
-----\r
-public interface Cont extends ChildOf<...>, Augmentable<Cont> {\r
-}\r
-----\r
-\r
-===== Leaf statement\r
-Each leaf has to contain a type substatement. The leaf is mapped to a
-getter method of the parent statement with return type equal to the type
-substatement value.
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-container cont {\r
-  leaf lf {\r
-    type string;\r
-  }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.Cont.java\r
-[source, java]\r
-----\r
-public interface Cont extends DataObject, Augmentable<Cont> {\r
-    String getLf(); // <1>\r
-}\r
-----\r
-\r
-<1> Represents `leaf lf`\r
-\r
-===== leaf-list statement\r
-Each leaf-list has to contain one type substatement. The leaf-list is mapped\r
-to getter method of parent statement with return type equal to List of type\r
-substatement value.\r
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-container cont {\r
-    leaf-list lf-lst {\r
-        type string;\r
-    }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.Cont.java\r
-[source, java]\r
-----\r
-public interface Cont extends DataObject, Augmentable<Cont> {\r
-    List<String> getLfLst();\r
-}\r
-----\r
-\r
-===== list statement\r
-\r
-`list` statements are mapped to Java interfaces, and a getter method is
-generated in the interface associated with its parent statement.
-The return type of the getter method is a Java List of objects implementing
-the interface generated for the `list` statement.
-Mapping of `list` substatement to Java:\r
-\r
-//[options="header"]\r
-//|===\r
-//|Substatement|Mapping to Java\r
-//|Key|Class\r
-//|===\r
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-container cont {\r
-  list outter-list {\r
-    key "leaf-in-list";\r
-    leaf number {\r
-      type uint64;\r
-    }\r
-  }\r
-}\r
-----\r
-\r
-The list statement `outter-list` is mapped to the Java interface `OutterList` and
-the `Cont` interface (parent of `OutterList`) contains a getter method with return
-type `List<OutterList>`. The presence of a `key` statement triggers generation
-of `OutterListKey`, which may be used to identify an item in the list.
-\r
-The end result is this Java:\r
-\r
-.OutterList.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentable;\r
-import java.util.List;
-import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.outter.list.ListInList;\r
-\r
-public interface OutterList extends DataObject, Augmentable<OutterList> {\r
-\r
-    List<String> getLeafListInList();\r
-\r
-    List<ListInList> getListInList();\r
-\r
-    /*\r
-    Returns Primary Key of Yang List Type\r
-    */\r
-    OutterListKey getOutterListKey();\r
-\r
-}\r
-----\r
-\r
-.OutterListKey.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont;\r
-\r
-import java.math.BigInteger;
-\r
-public class OutterListKey {\r
-\r
-    private BigInteger _leafInList;\r
-\r
-    public OutterListKey(BigInteger _leafInList) {\r
-        super();\r
-        this._leafInList = _leafInList;
-    }\r
-\r
-    public BigInteger getLeafInList() {\r
-        return _leafInList;\r
-    }\r
-\r
-    @Override\r
-    public int hashCode() {\r
-        final int prime = 31;\r
-        int result = 1;\r
-        result = prime * result + ((_leafInList == null) ? 0 : _leafInList.hashCode());\r
-        return result;\r
-    }\r
-\r
-    @Override\r
-    public boolean equals(Object obj) {\r
-        if (this == obj) {\r
-            return true;\r
-        }\r
-        if (obj == null) {\r
-            return false;\r
-        }\r
-        if (getClass() != obj.getClass()) {\r
-            return false;\r
-        }\r
-        OutterListKey other = (OutterListKey) obj;\r
-        if (_leafInList == null) {\r
-            if (other._leafInList != null) {
-                return false;\r
-            }\r
-        } else if(!_leafInList.equals(other._leafInList)) {\r
-            return false;\r
-        }\r
-        return true;\r
-    }\r
-\r
-    @Override\r
-    public String toString() {\r
-        StringBuilder builder = new StringBuilder();\r
-        builder.append("OutterListKey [_leafInList=");\r
-        builder.append(_leafInList);\r
-        builder.append("]");\r
-        return builder.toString();\r
-    }\r
-}\r
-----\r
-\r
-===== choice and case statements\r
-A `choice` element is mapped in mostly the same way a `list` element is. The
-`choice` element is mapped to an interface (a marker interface), and a getter
-method returning this marker interface is added to the interface
-corresponding to the parent statement. Any `case`
-substatements are mapped to Java interfaces which extend the marker interface.
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-container cont {\r
-    choice example-choice {\r
-        case foo-case {\r
-          leaf foo {\r
-            type string;\r
-          }\r
-        }\r
-        case bar-case {\r
-            leaf bar {\r
-              type string;\r
-            }\r
-        }\r
-    }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.Cont.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentable;\r
-import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.ExampleChoice;
-\r
-public interface Cont extends DataObject, Augmentable<Cont> {\r
-\r
-    ExampleChoice getExampleChoice();\r
-\r
-}\r
-----\r
-\r
-.ExampleChoice.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataContainer;
-\r
-public interface ExampleChoice extends DataContainer {\r
-}\r
-----\r
-\r
-.FooCase.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.example.choice;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentable;\r
-import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.ExampleChoice;
-\r
-public interface FooCase extends ExampleChoice, DataObject, Augmentable<FooCase> {\r
-\r
-    String getFoo();\r
-\r
-}\r
-----\r
-\r
-.BarCase.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.example.choice;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentable;\r
-import org.opendaylight.yang.gen.v1.urn.module.rev201379.cont.ExampleChoice;
-\r
-public interface BarCase extends ExampleChoice, DataObject, Augmentable<BarCase> {\r
-\r
-    String getBar();\r
-\r
-}\r
-----\r
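Application code typically dispatches on the concrete case with `instanceof`. The following self-contained sketch uses simplified stand-in interfaces (not the actual generated ones, which live in the packages shown above) to illustrate the pattern:

```java
// Simplified stand-ins for the generated choice marker and case interfaces.
interface ExampleChoice {}

final class FooCase implements ExampleChoice {
    String getFoo() { return "foo-value"; }
}

final class BarCase implements ExampleChoice {
    String getBar() { return "bar-value"; }
}

public class ChoiceDispatch {
    // Dispatch on the concrete case carried by the choice marker interface.
    public static String describe(ExampleChoice choice) {
        if (choice instanceof FooCase) {
            return "foo=" + ((FooCase) choice).getFoo();
        } else if (choice instanceof BarCase) {
            return "bar=" + ((BarCase) choice).getBar();
        }
        return "unknown case";
    }

    public static void main(String[] args) {
        System.out.println(describe(new FooCase())); // foo=foo-value
        System.out.println(describe(new BarCase())); // bar=bar-value
    }
}
```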
-\r
-===== grouping and uses statements\r
-A `grouping` is mapped to a Java interface. A `uses` statement in some element
-(i.e. a use of a concrete grouping) is mapped by having the interface for that
-element extend the interface which represents the grouping.
-\r
-For example, the following YANG:\r
-\r
-.YANG Model\r
-[source, yang]\r
-----\r
-grouping grp {\r
-  leaf foo {\r
-    type string;\r
-  }\r
-}\r
-\r
-container cont {\r
-    uses grp;\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.Grp.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-\r
-public interface Grp extends DataObject {\r
-\r
-    String getFoo();\r
-\r
-}\r
-----\r
-\r
-.Cont.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentable;\r
-\r
-public interface Cont extends DataObject, Augmentable<Cont>, Grp {\r
-}\r
-----\r
-\r
-\r
-===== rpc, input and output statements\r
-An `rpc` statement is mapped to a method of the generated `ModuleService` interface.
-Any substatements of an `rpc` are mapped as follows:\r
-\r
-[options="header"]\r
-|===\r
-|Rpc Substatement|Mapping\r
-|input|presence of input statement triggers generation of interface\r
-|output|presence of output statement triggers generation of interface\r
-|===\r
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-rpc rpc-test1 {\r
-    output {\r
-        leaf lf-output {\r
-            type string;\r
-        }\r
-    }\r
-    input {\r
-        leaf lf-input {\r
-            type string;\r
-        }\r
-    }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.ModuleService.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-import java.util.concurrent.Future;
-import org.opendaylight.yangtools.yang.common.RpcResult;\r
-\r
-public interface ModuleService {\r
-\r
-    Future<RpcResult<RpcTest1Output>> rpcTest1(RpcTest1Input input);\r
-\r
-}\r
-----\r
-\r
-.RpcTest1Input.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-public interface RpcTest1Input {\r
-\r
-    String getLfInput();\r
-\r
-}\r
-----\r
-\r
-.RpcTest1Output.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-public interface RpcTest1Output {\r
-\r
-    String getLfOutput();\r
-\r
-}\r
-----\r
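A consumer invokes the RPC through the service interface and retrieves the result from the returned `Future`. The sketch below is self-contained, using simplified stand-ins for the generated interfaces (the real ones also wrap the output in `RpcResult`, omitted here for brevity):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Simplified stand-ins for the generated input/output interfaces.
interface RpcTest1Input { String getLfInput(); }
interface RpcTest1Output { String getLfOutput(); }

interface ModuleService {
    Future<RpcTest1Output> rpcTest1(RpcTest1Input input);
}

public class RpcUsage {
    // A toy implementation that echoes the input leaf into the output leaf.
    static final ModuleService SERVICE = input ->
        CompletableFuture.completedFuture(
            (RpcTest1Output) () -> "echo:" + input.getLfInput());

    public static String callRpc(String lfInput) {
        try {
            Future<RpcTest1Output> future = SERVICE.rpcTest1(() -> lfInput);
            return future.get().getLfOutput();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(callRpc("hello")); // echo:hello
    }
}
```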
-\r
-\r
-===== notification statement\r
-\r
-`notification` statements are mapped to Java interfaces which extend\r
-the Notification interface.\r
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-notification notif {
-}
-----\r
-\r
-is converted into this Java:\r
-\r
-.Notif.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentable;\r
-import org.opendaylight.yangtools.yang.binding.Notification;\r
-\r
-public interface Notif extends DataObject, Augmentable<Notif>, Notification {\r
-}\r
-----\r
-\r
-==== augment statement\r
-`augment` statements are mapped to Java interfaces. The interface name is the
-name of the augmented interface with a suffix corresponding to the order number
-of the augmenting interface. The augmenting interface also extends
-`Augmentation<>` with the actual type parameter equal to the augmented interface.
-\r
-For example, the following YANG:\r
-\r
-.YANG Model\r
-[source, yang]\r
-----\r
-container cont {\r
-}\r
-\r
-augment "/cont" {\r
-  leaf additional-value {\r
-    type string;\r
-  }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.Cont.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentable;\r
-\r
-public interface Cont extends DataObject, Augmentable<Cont> {\r
-\r
-}\r
-----\r
-\r
-.Cont1.java\r
-[source, java]\r
-----\r
-package org.opendaylight.yang.gen.v1.urn.module.rev201379;\r
-\r
-import org.opendaylight.yangtools.yang.binding.DataObject;\r
-import org.opendaylight.yangtools.yang.binding.Augmentation;\r
-\r
-public interface Cont1 extends DataObject, Augmentation<Cont> {\r
-\r
-}\r
-----\r
-\r
-==== YANG Type mapping\r
-\r
-===== typedef statement\r
-YANG `typedef` statements are mapped to Java classes. A `typedef` may contain the
-following substatements:
-
-[options="header"]
-|===
-|Substatement | Behaviour
-|type| determines the wrapped type and how the class will be generated
-|description| mapped to the Javadoc description
-|units| not mapped
-|default| not mapped
-|===
-\r
-====== Valid Arguments Type\r
-\r
-Simple values of type argument are mapped as follows:\r
-\r
-[options="header"]\r
-|===\r
-|YANG Type |  Java type\r
-|boolean| Boolean\r
-|empty| Boolean\r
-|int8| Byte\r
-|int16|Short\r
-|int32|Integer\r
-|int64|Long\r
-|string|String, or a wrapper class if a pattern substatement is specified
-|decimal64|Double\r
-|uint8|Short\r
-|uint16|Integer\r
-|uint32|Long\r
-|uint64|BigInteger\r
-|binary|byte[]\r
-|===\r
-\r
-Complex values of type argument are mapped as follows:\r
-\r
-[options="header"]\r
-|===\r
-|Argument Type| Java type\r
-|enumeration| generated java enum\r
-|bits| generated class for bits\r
-|leafref| same type as referenced leaf\r
-|identityref| Class\r
-|union| generated java class\r
-|instance-identifier| `org.opendaylight.yangtools.yang.binding.InstanceIdentifier`\r
-|===\r
-\r
-===== Enumeration Substatement Enum\r
-The YANG `enumeration` type has to contain some `enum` substatements. An
-`enumeration` is mapped to a Java enum type (standalone class) and every YANG
-`enum` substatement is mapped to one of the Java enum's predefined values.
-\r
-An `enum` statement can have the following substatements:
-\r
-[options="header"]\r
-|===\r
-|Enum's Substatement | Java mapping\r
-|description|is not mapped in API\r
-|value| mapped as input parameter for every predefined value of enum\r
-|===\r
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-typedef typedef-enumeration {\r
-    type enumeration {\r
-        enum enum1 {\r
-            description "enum1 description";\r
-            value 18;\r
-        }\r
-        enum enum2 {\r
-            value 16;\r
-        }\r
-        enum enum3 {\r
-        }\r
-    }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.TypedefEnumeration.java\r
-[source, java]\r
-----\r
-public enum TypedefEnumeration {\r
-    Enum1(18),\r
-    Enum2(16),\r
-    Enum3(19);\r
-\r
-    int value;\r
-\r
-    private TypedefEnumeration(int value) {\r
-        this.value = value;\r
-    }\r
-}\r
-----\r
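The generated enum can be made runnable on its own. The sketch below mirrors the generated class and adds a value getter and a reverse lookup (both hypothetical additions, shown only to illustrate how the `value` substatement surfaces in Java; note that `enum3`, which has no `value` substatement, is assigned the next unused value, 19):

```java
public enum TypedefEnumeration {
    Enum1(18),
    Enum2(16),
    Enum3(19); // no value substatement in YANG: next unused value is assigned

    private final int value;

    TypedefEnumeration(int value) {
        this.value = value;
    }

    public int getIntValue() {
        return value;
    }

    // Hypothetical reverse lookup from the YANG value to the enum constant.
    public static TypedefEnumeration forValue(int value) {
        for (TypedefEnumeration e : values()) {
            if (e.value == value) {
                return e;
            }
        }
        throw new IllegalArgumentException("unknown value: " + value);
    }

    public static void main(String[] args) {
        System.out.println(forValue(18)); // Enum1
    }
}
```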
-\r
-===== Bits's Substatement Bit\r
-The YANG `bits` type has to contain some `bit` substatements. A YANG `bits` type
-is mapped to a Java class (standalone class) and every `bit` substatement is
-mapped to a boolean attribute of that class. In addition, the class provides
-overridden versions of the Object methods `hashCode`, `toString`, and `equals`.
-\r
-For example, the following YANG:\r
-\r
-.YANG Model\r
-[source, yang]\r
-----\r
-typedef typedef-bits {\r
-  type bits {\r
-    bit first-bit {\r
-      description "first-bit description";\r
-        position 15;\r
-      }\r
-    bit second-bit;\r
-  }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.TypedefBits.java\r
-[source, java]\r
-----\r
-public class TypedefBits {\r
-\r
-    private Boolean firstBit;\r
-    private Boolean secondBit;\r
-\r
-    public TypedefBits() {\r
-        super();\r
-    }\r
-\r
-    public Boolean getFirstBit() {\r
-        return firstBit;\r
-    }\r
-\r
-    public void setFirstBit(Boolean firstBit) {\r
-        this.firstBit = firstBit;\r
-    }\r
-\r
-    public Boolean getSecondBit() {\r
-        return secondBit;\r
-    }\r
-\r
-    public void setSecondBit(Boolean secondBit) {\r
-        this.secondBit = secondBit;\r
-    }\r
-\r
-    @Override\r
-    public int hashCode() {\r
-        final int prime = 31;\r
-        int result = 1;\r
-        result = prime * result +\r
-         ((firstBit == null) ? 0 : firstBit.hashCode());\r
-        result = prime * result +\r
-         ((secondBit == null) ? 0 : secondBit.hashCode());\r
-        return result;\r
-    }\r
-\r
-    @Override\r
-    public boolean equals(Object obj) {\r
-        if (this == obj) {\r
-            return true;\r
-        }\r
-        if (obj == null) {\r
-            return false;\r
-        }\r
-        if (getClass() != obj.getClass()) {\r
-            return false;\r
-        }\r
-        TypedefBits other = (TypedefBits) obj;\r
-        if (firstBit == null) {\r
-            if (other.firstBit != null) {\r
-                return false;\r
-            }\r
-        } else if(!firstBit.equals(other.firstBit)) {\r
-            return false;\r
-        }\r
-        if (secondBit == null) {\r
-            if (other.secondBit != null) {\r
-                return false;\r
-            }\r
-        } else if(!secondBit.equals(other.secondBit)) {\r
-            return false;\r
-        }\r
-        return true;\r
-    }\r
-\r
-    @Override\r
-    public String toString() {\r
-        StringBuilder builder = new StringBuilder();\r
-        builder.append("TypedefBits [firstBit=");\r
-        builder.append(firstBit);\r
-        builder.append(", secondBit=");\r
-        builder.append(secondBit);\r
-        builder.append("]");\r
-        return builder.toString();\r
-    }\r
-}\r
-----\r
-\r
-===== Union's Substatement Type\r
-If the type of a `typedef` is `union`, it has to contain `type` substatements.
-The union `typedef` is mapped to a class and its `type` substatements are mapped
-to private class members. Every YANG union subtype gets its own Java constructor
-with a parameter which represents just that one attribute.
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-typedef typedef-union {\r
-    type union {\r
-        type int32;\r
-        type string;\r
-    }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.TypedefUnion.java
-[source, java]\r
-----\r
-public class TypedefUnion {\r
-\r
-    private Integer int32;\r
-    private String string;\r
-\r
-    public TypedefUnion(Integer int32) {\r
-        super();\r
-        this.int32 = int32;\r
-    }\r
-\r
-    public TypedefUnion(String string) {\r
-        super();\r
-        this.string = string;\r
-    }\r
-\r
-    public Integer getInt32() {\r
-        return int32;\r
-    }\r
-\r
-    public String getString() {\r
-        return string;\r
-    }\r
-\r
-    @Override\r
-    public int hashCode() {\r
-        final int prime = 31;\r
-        int result = 1;\r
-        result = prime * result + ((int32 == null) ? 0 : int32.hashCode());\r
-        result = prime * result + ((string == null) ? 0 : string.hashCode());\r
-        return result;\r
-    }\r
-\r
-    @Override\r
-    public boolean equals(Object obj) {\r
-        if (this == obj) {\r
-            return true;\r
-        }\r
-        if (obj == null) {\r
-            return false;\r
-        }\r
-        if (getClass() != obj.getClass()) {\r
-            return false;\r
-        }\r
-        TypedefUnion other = (TypedefUnion) obj;\r
-        if (int32 == null) {\r
-            if (other.int32 != null) {\r
-                return false;\r
-            }\r
-        } else if(!int32.equals(other.int32)) {\r
-            return false;\r
-        }\r
-        if (string == null) {\r
-            if (other.string != null) {\r
-                return false;\r
-            }\r
-        } else if(!string.equals(other.string)) {\r
-            return false;\r
-        }\r
-        return true;\r
-    }\r
-\r
-    @Override\r
-    public String toString() {\r
-        StringBuilder builder = new StringBuilder();\r
-        builder.append("TypedefUnion [int32=");\r
-        builder.append(int32);\r
-        builder.append(", string=");\r
-        builder.append(string);\r
-        builder.append("]");\r
-        return builder.toString();\r
-    }\r
-}\r
-----\r
-\r
-===== String Mapping\r
-The YANG `string` type can contain the substatements `length`\r
-and `pattern` which are mapped as follows:\r
-\r
-[options="header"]\r
-|===\r
-|Type substatements  |  Mapping to Java\r
-| length | not mapped\r
-| pattern |\r
-\r
-. a list of string constants (the patterns) +
-. a list of Pattern objects +
-. a static initialization block where the list of Patterns is initialized from the list of string constants
-|===\r
-\r
-For example, the following YANG:\r
-\r
-.YANG model\r
-[source, yang]\r
-----\r
-typedef typedef-string {\r
-    type string {\r
-        length 44;\r
-        pattern "[a][.]*";
-    }\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.TypedefString.java\r
-[source, java]\r
-----\r
-public class TypedefString {\r
-\r
-    private static final List<Pattern> patterns = new ArrayList<Pattern>();\r
-    public static final List<String> PATTERN_CONSTANTS = Arrays.asList("[a][.]*");
-\r
-    static {\r
-        for (String regEx : PATTERN_CONSTANTS) {
-            patterns.add(Pattern.compile(regEx));\r
-        }\r
-    }\r
-\r
-    private String typedefString;\r
-\r
-    public TypedefString(String typedefString) {\r
-        super();\r
-        // Pattern validation\r
-        this.typedefString = typedefString;\r
-    }\r
-\r
-    public String getTypedefString() {\r
-        return typedefString;\r
-    }\r
-\r
-    @Override\r
-    public int hashCode() {\r
-        final int prime = 31;\r
-        int result = 1;\r
-        result = prime * result + ((typedefString == null) ? 0 : typedefString.hashCode());\r
-        return result;\r
-    }\r
-\r
-    @Override\r
-    public boolean equals(Object obj) {\r
-        if (this == obj) {\r
-            return true;\r
-        }\r
-        if (obj == null) {\r
-            return false;\r
-        }\r
-        if (getClass() != obj.getClass()) {\r
-            return false;\r
-        }\r
-        TypedefString other = (TypedefString) obj;\r
-        if (typedefString == null) {\r
-            if (other.typedefString != null) {\r
-                return false;\r
-            }\r
-        } else if(!typedefString.equals(other.typedefString)) {\r
-            return false;\r
-        }\r
-        return true;\r
-    }\r
-\r
-    @Override\r
-    public String toString() {\r
-        StringBuilder builder = new StringBuilder();\r
-        builder.append("TypedefString [typedefString=");\r
-        builder.append(typedefString);\r
-        builder.append("]");\r
-        return builder.toString();\r
-    }\r
-}\r
-----\r
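The `// Pattern validation` comment above marks where generated code checks the constructor argument against the compiled patterns. A minimal self-contained sketch of such a check (the helper name is hypothetical) could look like:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class PatternCheck {
    public static final List<String> PATTERN_CONSTANTS = Arrays.asList("[a][.]*");

    // Returns true only if the value fully matches every configured pattern.
    public static boolean matchesPatterns(String value) {
        for (String regEx : PATTERN_CONSTANTS) {
            if (!Pattern.matches(regEx, value)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(matchesPatterns("a..."));  // true
        System.out.println(matchesPatterns("b"));     // false
    }
}
```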
-\r
-==== identity statement\r
-The purpose of the `identity` statement is to define a new globally unique,\r
-abstract, and untyped value.\r
-\r
-The `base` substatement argument is the name of existing identity from which\r
-the new identity is derived.\r
-\r
-Given that, an `identity` statement is mapped to a Java abstract class and
-any `base` substatement is mapped to the `extends` Java keyword.
-The identity name is translated to the class name.
-\r
-For example, the following YANG:\r
-\r
-.YANG Model\r
-[source, yang]\r
-----\r
-identity toast-type {\r
-\r
-}\r
-\r
-identity white-bread {\r
-   base toast-type;\r
-}\r
-----\r
-\r
-is converted into this Java:\r
-\r
-.ToastType.java\r
-[source, java]\r
-----\r
-public abstract class ToastType extends BaseIdentity {\r
-    protected ToastType() {\r
-        super();\r
-    }\r
-}\r
-----\r
-\r
-.WhiteBread.java\r
-[source, java]\r
-----\r
-public abstract class WhiteBread extends ToastType {\r
-    protected WhiteBread() {\r
-        super();\r
-    }\r
-}\r
-----\r
index 13bdddb607124b2233dfeceb5bbeee67ce9724f0..43aa90b62cfa52741f22a2645bd9306b88d7e88e 100644 (file)
@@ -1,52 +1,3 @@
 == YANG Tools
-:rfc6020: https://tools.ietf.org/html/rfc6020
-:lhotka-yang-json: https://tools.ietf.org/html/draft-lhotka-netmod-yang-json-01
 
-=== Overview
-YANG Tools is a set of libraries and tooling that supports the use of
-{rfc6020}[YANG] in Java (or other JVM-based language) projects and
-applications.
-
-YANG Tools provides the following features in OpenDaylight:
-
-- parsing of YANG sources and
-semantic inference of relationship across YANG models as defined in
-{rfc6020}[RFC6020]
-- representation of YANG-modeled data in Java
-** *Normalized Node* representation - a DOM-like tree model whose conceptual
-  meta-model is more tailored to YANG and OpenDaylight use-cases than a standard
-  XML DOM model allows for.
-** *Java Binding* - concrete data model and classes generated from YANG models,
-  designed to provide compile-time safety when working with YANG-modeled data.
-- serialization / deserialization of YANG-modeled data driven by YANG
-models
-** XML - as defined in {rfc6020}[RFC6020]
-** JSON - as defined in {lhotka-yang-json}[draft-lhotka-netmod-yang-json-01]
-** Java Binding to Normalized Node and vice-versa
-- Integration of YANG model parsing into Maven build lifecycle and
-support for third-party generators processing YANG models.
-
-The YANG Tools project consists of the following logical subsystems:
-
-- *Commons* - Set of general purpose code, which is not specific to YANG, but
-  is also useful outside YANG Tools implementation.
-- *YANG Model and Parser* - YANG semantic model and a lexical and semantic parser
-  of YANG models, which creates an in-memory cross-referenced representation of
-  YANG models, used by other components to determine their behaviour
-  based on the model.
-- *YANG Data* - Definition of Normalized Node APIs and Data Tree APIs, reference
-  implementation of these APIs and implementation of XML and JSON codecs for
-  Normalized Nodes.
-- *YANG Maven Plugin* - Maven plugin which integrates the YANG parser into the
-  Maven build lifecycle and provides a code-generation framework for components
-  which want to generate code or other artifacts based on YANG models.
-- *YANG Java Binding* - Mapping of YANG models to generated Java APIs.
-  Java Binding also refers to the set of compile-time and runtime components
-  which implement this mapping, provide generation of classes and APIs based on
-  YANG models, and integrate these Java Binding objects with **YANG Data** APIs
-  and components.
-
-- *Models* - Set of *IETF* and *YANG Tools* models, with generated Java Bindings
-  so they can easily be consumed outside of *YANG Tools*.
-
-include::yang-java-binding-explained.adoc[]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/developer-guide/yang-tools.html
diff --git a/manuals/developer-guide/src/main/resources/images/snbi/docker_snbi.png b/manuals/developer-guide/src/main/resources/images/snbi/docker_snbi.png
new file mode 100644 (file)
index 0000000..90b8069
Binary files /dev/null and b/manuals/developer-guide/src/main/resources/images/snbi/docker_snbi.png differ
diff --git a/manuals/developer-guide/src/main/resources/images/snbi/first_fe_bs.png b/manuals/developer-guide/src/main/resources/images/snbi/first_fe_bs.png
new file mode 100644 (file)
index 0000000..df5b45c
Binary files /dev/null and b/manuals/developer-guide/src/main/resources/images/snbi/first_fe_bs.png differ
diff --git a/manuals/developer-guide/src/main/resources/images/snbi/snbi_arch.png b/manuals/developer-guide/src/main/resources/images/snbi/snbi_arch.png
new file mode 100644 (file)
index 0000000..d6aaa59
Binary files /dev/null and b/manuals/developer-guide/src/main/resources/images/snbi/snbi_arch.png differ
index 1402422335efb6a8ed550212659536c9a4905096..1ada0e262459b40737c02f6e2d41ec826752a748 100644 (file)
-== Authentication and Authorization Services\r
-\r
-=== Authentication Service\r
-Authentication uses the credentials presented by a user to identify the user.\r
-\r
-NOTE: The Authentication user store provided in the Lithium release does not fully support a clustered node deployment. Specifically, the AAA user store provided by the H2 database needs to be synchronized using out of band means. The AAA Token cache is however cluster-capable.\r
-\r
-==== Authentication data model\r
-A user requests authentication within a domain in which the user has defined roles.\r
-The user chooses either of the following ways to request authentication:\r
-\r
-* Provides credentials\r
-* Creates a token scoped to a domain. In OpenDaylight, a domain is a grouping of resources (direct or indirect, physical, logical, or virtual) for the purpose of access control.\r
-\r
-===== Terms and definitions in the model\r
-Token:: A claim of access to a group of resources on the controller\r
-Domain:: A group of resources, direct or indirect, physical, logical, or virtual, for the purpose of access control\r
-User:: A person who either owns or has access to a resource or group of resources on the controller
-Role:: An opaque representation of a set of permissions; merely a unique string such as admin or guest
-Credential:: Proof of identity such as username and password, OTP, biometrics, or others\r
-Client:: A service or application that requires access to the controller\r
-Claim:: A data set of validated assertions regarding a user, e.g. the role, domain, name, etc.\r
-\r
-===== Authentication methods\r
-There are three ways a user may authenticate in OpenDaylight: +\r
-\r
-* Basic HTTP Authentication\r
-** Regular, non-token based, authentication with username/password.\r
-* Token-based Authentication\r
-** Direct authentication:  A user presents username/password and a domain the user wishes to access to the controller and obtains a timed (default is 1 hour) scoped access token.  The user then uses this token to access RESTCONF (for example).\r
-** Federated authentication:  A user presents credentials to a third-party Identity Provider (for example, SSSD) trusted by the controller.  Upon successful authentication, the controller returns a refresh (unscoped) token with a list of domains that the user has access to.  The user then presents this refresh token scoped to a domain that the user has access to obtain a scoped access token.  The user then uses this access token to access RESTCONF (for example).\r
-\r
-\r
-====== Example with token authentication using curl:\r
-\r
-(username/password = admin/admin, domain = sdn)\r
-\r
-[source,bash] \r
-----\r
-# Create a token\r
-curl -ik -d 'grant_type=password&username=admin&password=admin&scope=sdn' http://localhost:8181/oauth2/token\r
-\r
-# Use the token (e.g.,  ed3e5e05-b5e7-3865-9f63-eb8ed5c87fb9) obtained from above (default token validity is 1 hour):\r
-curl -ik -H 'Authorization:Bearer ed3e5e05-b5e7-3865-9f63-eb8ed5c87fb9' http://localhost:8181/restconf/config/toaster:toaster\r
-----\r
-\r
-====== Example with basic HTTP auth using curl: +\r
-\r
-[source,bash] \r
----- \r
-curl -ik -u 'admin:admin' http://localhost:8181/restconf/config/toaster:toaster\r
-----\r
-\r
-==== How the OpenDaylight Authentication Service works\r
-In direct authentication, a service relationship exists between the user and the OpenDaylight controller. The user and the controller establish trust that allows them to use and validate credentials.
-The user establishes user identity through credentials.\r
-\r
-In direct authentication, a user request progresses through the following steps:\r
-\r
-. The user requests a user account from the controller administrator.
-+\r
-:: Associated with the user account are user credentials, initially created by the administrator. OpenDaylight supports only username/password credentials. By default, an administrator account is present in OpenDaylight out-of-the-box, with the default username and password being admin/admin.
-In addition to creating the user account, the controller administrator also assigns roles to that account on one or more domains. By default, there are two user roles: admin and user, and there is only one domain: sdn.
-+\r
-. The user presents credentials in a token request to the token service within a domain.  \r
-. The request is then passed on to the controller token endpoint.\r
-. The controller token endpoint uses the credential authentication entity which returns a claim for the client. \r
-. The controller token entity transforms the claim (user, domain, and roles) into a token which it then provides to the user.\r
-\r
-In federated authentication, with the absence of a direct trust relationship between the user and the service, a third-party Identity Provider (IdP) is used for authentication. Federated authentication relies on third-party identity providers (IdP) to authenticate the user.\r
-\r
-The user is authenticated by the trusted IdP and a claim is returned to the OpenDaylight authentication service.  The claim is transformed into an OpenDaylight claim and successively into a token that is passed on to the user. \r
-\r
-In a federated authentication set-up, the OpenDaylight controller AAA module provides SSSD claim support. SSSD can be used to map users in an external LDAP server to users defined on the OpenDaylight controller.\r
-\r
-==== Configuring Authentication service\r
-Changes to AAA configurations can be made as follows:\r
-\r
-For Authentication functionality via one of:\r
-\r
-* Webconsole\r
-* CLI (config command in the Karaf shell)\r
-* Editing the etc/org.opendaylight.aaa.*.cfg files directly\r
-\r
-For Token Cache Store settings via one of:\r
-\r
-* Editing the 08-authn-config.xml configuration file in etc/opendaylight/karaf\r
-* Using RESTCONF\r
-\r
-NOTE: Configurations for AAA are all dynamic and require no restart.\r
-\r
-===== Configuring Authentication\r
-\r
-To configure features from the Web console: +\r
-\r
-. Install the Web console:\r
-+\r
-----\r
-feature:install webconsole\r
-----\r
-+\r
-. On the console (http://localhost:8181/system/console) (default Karaf username/password:  karaf/karaf), go to *OSGi* > *Configuration* > *OpenDaylight AAA Authentication Configuration*.\r
-.. *Authorized Clients*:  List of software clients that are authorized to access OpenDaylight northbound APIs.\r
-.. *Enable Authentication*:  Enable or disable authentication. (The default is enable.)\r
-\r
-===== Configuring the token store\r
-. Open etc/opendaylight/karaf/08-authn-config.xml in a text editor.
-:: The fields you can configure are as follows:
-.. *timeToLive*: Configure the maximum time, in milliseconds, that tokens are to be cached. The default is 360000.
-. Save the file.\r
-\r
-NOTE: When tokens expire, they are lazily removed from the cache.
-\r
-===== Configuring AAA federation\r
-\r
-. On the console, click *OpenDaylight AAA Federation Configuration*.\r
-. Use the *Custom HTTP Headers* or *Custom HTTP Attributes* fields to specify the HTTP headers or attributes for federated authentication. Normally, additional specification beyond the default is not \r
-required.\r
-\r
-NOTE: As the changes you make to the configurations are automatically committed when they are saved, no restart of the Authentication service is required.\r
-\r
-====== Configuring federated authentication\r
-Use the following steps to set up federated authentication: +\r
-\r
-. Set up an Apache front-end and Apache mods for the OpenDaylight controller.\r
-. Set up mapping rules (from LDAP users to OpenDaylight users).\r
-. Use the ClaimAuthFilter in federation to allow claim transformation.\r
-\r
-====== Mapping users to roles and domains\r
-The OpenDaylight authentication service transforms assertions from an external federated IdP into Authentication Service data: +\r
-\r
-. The Apache web server which fronts OpenDaylight AAA sends data to SssdAuthFilter.\r
-. SssdAuthFilter constructs a JSON document from the data.\r
-. OpenDaylight Authentication Service uses a general purpose transformation mapper to transform the JSON document.\r
-\r
-====== Operational model\r
-The mapping model works as follows: +\r
-\r
-. Assertions from an IdP are stored in an associative array.\r
-. A sequence of rules is applied, and the first rule which returns success is considered a match.\r
-. Upon success, an associative array of mapped values is returned.\r
-\r
-** The mapped values are taken from the local variables set during the rule execution.\r
-** The definition of the rules and mapped results are expressed in JSON notation.\r
-\r
-====== Operational Model: Sample code\r
-[source,java]\r
-----\r
-mapped = null
-foreach rule in rules {
-    result = null
-    initialize rule.variables with pre-defined values
-
-    foreach block in rule.statement_blocks {
-        for statement in block.statements {
-            if statement.verb is exit {
-                result = exit.status
-                break
-            }
-            elif statement.verb is continue {
-                break
-            }
-        }
-        if result {
-            break
-        }
-    }
-    if result == null {
-        result = success
-    }
-    if result == success {
-        mapped = rule.mapping(rule.variables)
-        break
-    }
-}
-return mapped
-----\r
-\r
-====== Mapping Users\r
-A JSON Object acts as a mapping template to produce the final associative array of name/value pairs. The value in a name/value pair can be a constant or a variable.\r
-An example of a mapping template and rule variables in JSON: +\r
-Template: +\r
-[source,json]\r
-----\r
-{\r
-    "organization": "BigCorp.com",\r
-    "user": "$subject",
-    "roles": "$roles"\r
-}\r
-----\r
-Local variables: +\r
-[source,json]\r
-----\r
-{\r
-    "subject": "Sally",\r
-    "roles": ["user", "admin"]\r
-}\r
-----\r
-The final mapped result will be: +\r
-[source,json]\r
-----\r
-{\r
-    "organization": "BigCorp.com",\r
-    "user": "Sally",
-    "roles": ["user", "admin"]\r
-}\r
-----\r
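The substitution step itself is simple: every `$name` value in the template is replaced by the matching local variable, while constants pass through unchanged. A minimal sketch (the helper name is hypothetical, operating on plain maps rather than real JSON documents):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TemplateMapper {
    // Replace "$name" string values in the template with values from the
    // variables map; constant values are passed through unchanged.
    public static Map<String, Object> apply(Map<String, Object> template,
                                            Map<String, Object> variables) {
        Map<String, Object> mapped = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : template.entrySet()) {
            Object v = e.getValue();
            if (v instanceof String && ((String) v).startsWith("$")) {
                mapped.put(e.getKey(), variables.get(((String) v).substring(1)));
            } else {
                mapped.put(e.getKey(), v);
            }
        }
        return mapped;
    }

    public static void main(String[] args) {
        Map<String, Object> template = new LinkedHashMap<>();
        template.put("organization", "BigCorp.com");
        template.put("user", "$subject");
        Map<String, Object> vars = new LinkedHashMap<>();
        vars.put("subject", "Sally");
        System.out.println(apply(template, vars)); // {organization=BigCorp.com, user=Sally}
    }
}
```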
-\r
-====== Example: Splitting a fully qualified username into user and realm components\r
-Some IdPs return a fully qualified username (for example, principal or subject). The fully qualified username is the concatenation of the user name, separator, and realm name.\r
-The following example shows a mapping that returns the user and realm as independent values when the fully qualified username is bob@example.com.
-\r
-The mapping in JSON: +\r
-[source,json]\r
-----\r
-{\r
-    "user": "$username",\r
-    "realm": "$domain"\r
-}\r
-----\r
-The assertion in JSON: +\r
-[source,json]\r
-----\r
-{\r
-    "Principal": "bob@example.com"\r
-}\r
-----\r
-The rule applied: +\r
-[source,json]\r
-----\r
-[\r
-    [\r
-        ["in", "Principal", "assertion"],\r
-        ["exit", "rule_fails", "if_not_success"],\r
-        ["regexp", "$assertion[Principal]", "(?P<username>\\w+)@(?P<domain>.+)"],
-        ["set", "$username", "$regexp_map[username]"],\r
-        ["set", "$domain", "$regexp_map[domain]"],\r
-        ["exit", "rule_succeeds", "always"]
-    ]\r
-]\r
-----\r
-The mapped result in JSON: +\r
-[source,json]\r
-----\r
-{\r
-    "user": "bob",\r
-    "realm": "example.com"\r
-}\r
-----\r
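The regexp rule above uses named capture groups; the split it performs can be reproduced with a short Python snippet (illustrative only):

```python
import re

# The same named-group pattern used by the "regexp" rule above.
pattern = re.compile(r"(?P<username>\w+)@(?P<domain>.+)")
match = pattern.match("bob@example.com")
regexp_map = match.groupdict()
# regexp_map == {"username": "bob", "domain": "example.com"}
```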
-Also, users may be granted roles based on their membership in certain groups.\r
-\r
-The Authentication Service supports white lists for users with specific roles. White-listed users are unconditionally accepted and authorized with the specified roles. Users who must be unconditionally denied access can be placed in a black list.
-\r
-=== Administering OpenDaylight Authentication Services\r
-\r
-==== Actors in the System\r
-*OpenDaylight Controller administrator* +\r
-The OpenDaylight Controller administrator has the following responsibilities:\r
-\r
-* Authors authentication policies using the IdmLight Service API
-* Provides credentials (usernames and passwords) to users who request them
-\r
-*OpenDaylight resource owners* +\r
-Resource owners authenticate (either by means of federation or directly providing their own credentials to the controller) to obtain an access token.  This access token can then be used to access resources on the controller.\r
-An OpenDaylight resource owner enjoys the following privileges:\r
-\r
-* Creates, refreshes, or deletes access tokens\r
-* Gets access tokens from the Secure Token Service\r
-* Passes secure tokens to resource users\r
-\r
-*OpenDaylight resource users* +\r
-Resource users do not need to authenticate: they can access resources if they are given an access token by the resource owner.  The default timeout for access tokens is 1 hour (this duration is configurable).
-An OpenDaylight resource user does the following:\r
-\r
-* Gets access tokens either from a resource owner or the controller administrator
-* Uses tokens to access applications through the northbound APIs
-\r
-==== System Components\r
-IdmLight Identity manager:: Stores local user authentication and authorization data, provides an Admin REST API for CRUD operations.\r
-Pluggable authenticators:: Provides domain-specific authentication mechanisms\r
-Authenticator:: Authenticates users and establishes claims
-Authentication Cache:: Caches all authentication states and tokens\r
-Authentication Filter:: Verifies tokens and extracts claims\r
-Authentication Manager:: Contains the session token and authentication claim store\r
-\r
-\r
-===== IdmLight Identity manager\r
-The Light-weight Identity Manager (IdmLight) stores local user authentication, authorization, and role data, and provides an Admin REST API for CRUD operations on the users/roles/domains database.
-The IdmLight REST API is accessed by default via the {controller baseURI:8181}/auth/v1/ endpoint.
-Access to the API is restricted to authenticated clients, or those possessing a token:
-\r
-Example: To retrieve the users list.\r
-\r
-[source,bash] \r
----- \r
-curl http://admin:admin@localhost:8181/auth/v1/users\r
-----\r
-\r
-\r
-The following document contains a detailed list of CRUD operations supported by the API:
-\r
- https://wiki.opendaylight.org/images/a/ad/AAA_Idmlight_REST_APIs.xlsx\r
-\r
-\r
-=== OpenDaylight Authorization Service\r
-The authorization service currently included in OpenDaylight is experimental and only briefly documented here.
-Authorization follows successful authentication and is modeled on the Role Based Access Control (RBAC) approach for defining permissions and deciding access levels to API resources on the controller.
-\r
+== Authentication, Authorization and Accounting (AAA) Services
+
+The Boron AAA services are based on the Apache Shiro Java Security Framework.  The main configuration file for AAA is located at “etc/shiro.ini” relative to the ODL karaf home directory.
+
+=== Terms And Definitions
+Token:: A claim of access to a group of resources on the controller
+Domain:: A group of resources, direct or indirect, physical, logical, or virtual, for the purpose of access control.  ODL recommends using the default “sdn" domain in the Boron release.
+User:: A person who either owns or has access to a resource or group of resources on the controller
+Role:: Opaque representation of a set of permissions, which is merely a unique string such as admin or guest
+Credential:: Proof of identity such as username and password, OTP, biometrics, or others
+Client:: A service or application that requires access to the controller
+Claim:: A data set of validated assertions regarding a user, e.g. the role, domain, name, etc.
+
+=== How to enable AAA
+AAA is enabled by installing the odl-aaa-shiro feature.  odl-aaa-shiro is installed automatically as part of the odl-restconf feature.
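For example, from the karaf shell (using the same `feature:install` syntax shown later in this guide):

----
feature:install odl-aaa-shiro
----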
+
+=== How to disable AAA
+Edit the “etc/shiro.ini” file and replace the following:
+
+----
+/** = authcBasic
+----
+
+with
+
+----
+/** = anon
+----
+
+Then restart the karaf process.
+
+NOTE:  This is a change from the Lithium release, in which “etc/org.opendaylight.aaa.authn.cfg” was edited to set “authEnabled=false”.  Please use the “shiro.ini” mechanism to disable AAA going forward.
+
+
+=== How application developers can leverage AAA to provide servlet security
+In order to provide security to a servlet, add the following to the servlet’s web.xml file as the first filter definition:
+
+----
+<context-param>
+  <param-name>shiroEnvironmentClass</param-name>
+  <param-value>org.opendaylight.aaa.shiro.web.env.KarafIniWebEnvironment</param-value>
+</context-param>
+
+<listener>
+    <listener-class>org.apache.shiro.web.env.EnvironmentLoaderListener</listener-class>
+</listener>
+
+<filter>
+    <filter-name>AAAShiroFilter</filter-name>
+    <filter-class>org.opendaylight.aaa.shiro.filters.AAAShiroFilter</filter-class>
+</filter>
+
+<filter-mapping>
+    <filter-name>AAAShiroFilter</filter-name>
+    <url-pattern>/*</url-pattern>
+</filter-mapping>
+----
+
+NOTE:  It is very important to place this AAAShiroFilter as the first javax.servlet.Filter, as Jersey applies Filters in the order they appear within web.xml.  Placing the AAAShiroFilter first ensures incoming HTTP/HTTPS requests have proper credentials before any other filtering is attempted.
+
+=== AAA Realms
+The AAA plugin utilizes realms to support pluggable authentication and authorization schemes.  There are two parent types of realms:
+
+* AuthenticatingRealm
+** Provides no Authorization capability.
+** Users authenticated through this type of realm are treated equally.
+* AuthorizingRealm
+** AuthorizingRealm is a more sophisticated AuthenticatingRealm, which provides the additional mechanisms to distinguish users based on roles.
+** Useful for applications in which roles determine allowed capabilities.
+
+ODL contains four implementations:
+
+* TokenAuthRealm
+** An AuthorizingRealm built to bridge the Shiro-based AAA service with the Lithium h2-based AAA implementation.
+** Exposes a RESTful web service to manipulate IdM policy on a per-node basis.  If identical AAA policy is desired across a cluster, the backing data store must be synchronized using an out of band method.
+** A python script located at “etc/idmtool” is included to help manipulate data contained in the TokenAuthRealm.
+** Enabled out of the box.
+* ODLJndiLdapRealm
+** An AuthorizingRealm built to extract identity information from IdM data contained on an LDAP server.
+** Extracts group information from LDAP, which is translated into ODL roles.
+** Useful when federating against an existing LDAP server, in which only certain types of users should have certain access privileges.
+** Disabled out of the box.
+* ODLJndiLdapRealmAuthNOnly
+** The same as ODLJndiLdapRealm, except without role extraction.  Thus, all LDAP users have equal authentication and authorization rights.
+** Disabled out of the box.
+* ActiveDirectoryRealm
+
+NOTE:  More than one Realm implementation can be specified.  Realms are attempted in order until authentication succeeds or all realm sources are exhausted.
+
+==== TokenAuthRealm Configuration
+TokenAuthRealm stores IdM data in an h2 database on each node.  Thus, configuration of a cluster currently requires configuring the desired IdM policy on each node.  There are two supported methods to manipulate the TokenAuthRealm IdM configuration:
+
+* idmtool Configuration
+* RESTful Web Service Configuration
+
+===== idmtool Configuration
+A utility script located at “etc/idmtool” is used to manipulate the TokenAuthRealm IdM policy.  idmtool assumes a single domain (sdn), since multiple domains are not leveraged in the Boron release.  General usage information for idmtool can be obtained by issuing the following command:
+
+----
+$ python etc/idmtool -h
+usage: idmtool [-h] [--target-host TARGET_HOST]
+               user
+               {list-users,add-user,change-password,delete-user,list-domains,list-roles,add-role,delete-role,add-grant,get-grants,delete-grant}
+               ...
+
+positional arguments:
+  user                  username for BSC node
+  {list-users,add-user,change-password,delete-user,list-domains,list-roles,add-role,delete-role,add-grant,get-grants,delete-grant}
+                        sub-command help
+    list-users          list all users
+    add-user            add a user
+    change-password     change a password
+    delete-user         delete a user
+    list-domains        list all domains
+    list-roles          list all roles
+    add-role            add a role
+    delete-role         delete a role
+    add-grant           add a grant
+    get-grants          get grants for userid on sdn
+    delete-grant        delete a grant
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --target-host TARGET_HOST
+                        target host node
+----
+
+====== Add a user
+
+----
+python etc/idmtool admin add-user newUser
+Password:
+Enter new password:
+Re-enter password:
+add_user(admin)
+
+command succeeded!
+
+json:
+{
+    "description": "",
+    "domainid": "sdn",
+    "email": "",
+    "enabled": true,
+    "name": "newUser",
+    "password": "**********",
+    "salt": "**********",
+    "userid": "newUser@sdn"
+}
+----
+
+NOTE:  AAA redacts the password and salt fields for security purposes.
+
+====== Delete a user
+
+----
+$ python etc/idmtool admin delete-user newUser@sdn
+Password:
+delete_user(newUser@sdn)
+
+command succeeded!
+----
+
+====== List all users
+
+----
+$ python etc/idmtool admin list-users
+Password:
+list_users
+
+command succeeded!
+
+json:
+{
+    "users": [
+        {
+            "description": "user user",
+            "domainid": "sdn",
+            "email": "",
+            "enabled": true,
+            "name": "user",
+            "password": "**********",
+            "salt": "**********",
+            "userid": "user@sdn"
+        },
+        {
+            "description": "admin user",
+            "domainid": "sdn",
+            "email": "",
+            "enabled": true,
+            "name": "admin",
+            "password": "**********",
+            "salt": "**********",
+            "userid": "admin@sdn"
+        }
+    ]
+}
+----
+
+====== Change a user’s password
+
+----
+$ python etc/idmtool admin change-password admin@sdn
+Password:
+Enter new password:
+Re-enter password:
+change_password(admin)
+
+command succeeded!
+
+json:
+{
+    "description": "admin user",
+    "domainid": "sdn",
+    "email": "",
+    "enabled": true,
+    "name": "admin",
+    "password": "**********",
+    "salt": "**********",
+    "userid": "admin@sdn"
+}
+----
+
+====== Add a role
+
+----
+$ python etc/idmtool admin add-role network-admin
+Password:
+add_role(network-admin)
+
+command succeeded!
+
+json:
+{
+    "description": "",
+    "domainid": "sdn",
+    "name": "network-admin",
+    "roleid": "network-admin@sdn"
+}
+----
+
+====== Delete a role
+
+----
+$ python etc/idmtool admin delete-role network-admin@sdn
+Password:
+delete_role(network-admin@sdn)
+
+command succeeded!
+----
+
+====== List all roles
+
+----
+$ python etc/idmtool admin list-roles
+Password:
+list_roles
+
+command succeeded!
+
+json:
+{
+    "roles": [
+        {
+            "description": "a role for admins",
+            "domainid": "sdn",
+            "name": "admin",
+            "roleid": "admin@sdn"
+        },
+        {
+            "description": "a role for users",
+            "domainid": "sdn",
+            "name": "user",
+            "roleid": "user@sdn"
+        }
+    ]
+}
+----
+
+====== List all domains
+
+----
+$ python etc/idmtool admin list-domains
+Password:
+list_domains
+
+command succeeded!
+
+json:
+{
+    "domains": [
+        {
+            "description": "default odl sdn domain",
+            "domainid": "sdn",
+            "enabled": true,
+            "name": "sdn"
+        }
+    ]
+}
+----
+
+====== Add a grant
+
+----
+$ python etc/idmtool admin add-grant user@sdn admin@sdn
+Password:
+add_grant(userid=user@sdn,roleid=admin@sdn)
+
+command succeeded!
+
+json:
+{
+    "domainid": "sdn",
+    "grantid": "user@sdn@admin@sdn@sdn",
+    "roleid": "admin@sdn",
+    "userid": "user@sdn"
+}
+----
+
+====== Delete a grant
+
+----
+$ python etc/idmtool admin delete-grant user@sdn admin@sdn
+Password:
+http://localhost:8181/auth/v1/domains/sdn/users/user@sdn/roles/admin@sdn
+delete_grant(userid=user@sdn,roleid=admin@sdn)
+
+command succeeded!
+----
+
+====== Get grants for a user
+
+----
+python etc/idmtool admin get-grants admin@sdn
+Password:
+get_grants(admin@sdn)
+
+command succeeded!
+
+json:
+{
+    "roles": [
+        {
+            "description": "a role for users",
+            "domainid": "sdn",
+            "name": "user",
+            "roleid": "user@sdn"
+        },
+        {
+            "description": "a role for admins",
+            "domainid": "sdn",
+            "name": "admin",
+            "roleid": "admin@sdn"
+        }
+    ]
+}
+----
+
+===== RESTful Web Service
+The TokenAuthRealm IdM policy is fully configurable through a RESTful web service.  Full documentation for manipulating AAA IdM data is located online (https://wiki.opendaylight.org/images/0/00/AAA_Test_Plan.docx), and a few examples are included in this guide:
+
+====== Get All Users
+
+----
+curl -u admin:admin http://localhost:8181/auth/v1/users
+OUTPUT:
+{
+    "users": [
+        {
+            "description": "user user",
+            "domainid": "sdn",
+            "email": "",
+            "enabled": true,
+            "name": "user",
+            "password": "**********",
+            "salt": "**********",
+            "userid": "user@sdn"
+        },
+        {
+            "description": "admin user",
+            "domainid": "sdn",
+            "email": "",
+            "enabled": true,
+            "name": "admin",
+            "password": "**********",
+            "salt": "**********",
+            "userid": "admin@sdn"
+        }
+    ]
+}
+----
+
+====== Create a User
+
+----
+curl -u admin:admin -X POST -H "Content-Type: application/json" --data-binary @./user.json http://localhost:8181/auth/v1/users
+PAYLOAD:
+{
+    "name": "ryan",
+    "userid": "ryan@sdn",
+    "password": "ryan",
+    "domainid": "sdn",
+    "description": "Ryan's User Account",
+    "email": "ryandgoulding@gmail.com"
+}
+
+OUTPUT:
+{
+    "userid":"ryan@sdn",
+    "name":"ryan",
+    "description":"Ryan's User Account",
+    "enabled":true,
+    "email":"ryandgoulding@gmail.com",
+    "password":"**********",
+    "salt":"**********",
+    "domainid":"sdn"
+}
+----
+
+====== Create an OAuth2 Token For Admin Scoped to SDN
+
+----
+curl -d 'grant_type=password&username=admin&password=a&scope=sdn' http://localhost:8181/oauth2/token
+
+OUTPUT:
+{
+    "expires_in":3600,
+    "token_type":"Bearer",
+    "access_token":"5a615fbc-bcad-3759-95f4-ad97e831c730"
+}
+----
+
+====== Use an OAuth2 Token
+
+----
+curl -H "Authorization: Bearer 5a615fbc-bcad-3759-95f4-ad97e831c730" http://localhost:8181/auth/v1/domains
+{
+    "domains":
+    [
+        {
+            "domainid":"sdn",
+            "name":"sdn",
+            "description":"default odl sdn domain",
+            "enabled":true
+        }
+    ]
+}
+----
+
+==== ODLJndiLdapRealm Configuration
+LDAP integration is provided in order to externalize identity management.  To configure LDAP parameters, modify "etc/shiro.ini" parameters to include the ODLJndiLdapRealm:
+
+----
+# ODL provides a few LDAP implementations, which are disabled out of the box.
+# ODLJndiLdapRealm includes authorization functionality based on LDAP elements
+# extracted through an LDAP search.  This requires a bit of knowledge about
+# how your LDAP system is setup.  An example is provided below:
+ldapRealm = org.opendaylight.aaa.shiro.realm.ODLJndiLdapRealm
+ldapRealm.userDnTemplate = uid={0},ou=People,dc=DOMAIN,dc=TLD
+ldapRealm.contextFactory.url = ldap://<URL>:389
+ldapRealm.searchBase = dc=DOMAIN,dc=TLD
+ldapRealm.ldapAttributeForComparison = objectClass
+ldapRealm.groupRolesMap = "Person":"admin"
+# ...
+# further down in the file...
+# Stacked realm configuration;  realms are tried in order until authentication succeeds or realm sources are exhausted.
+securityManager.realms = $tokenAuthRealm, $ldapRealm
+----
+
+This configuration allows federation with an external LDAP server, and LDAP group attributes are mapped to corresponding ODL roles as specified by the groupRolesMap.  Thus, an LDAP operator can provision attributes for LDAP users that support different ODL role structures.
+
+==== ODLJndiLdapRealmAuthNOnly Configuration
+Edit the "etc/shiro.ini" file and modify the following:
+
+----
+ldapRealm = org.opendaylight.aaa.shiro.realm.ODLJndiLdapRealm
+ldapRealm.userDnTemplate = uid={0},ou=People,dc=DOMAIN,dc=TLD
+ldapRealm.contextFactory.url = ldap://<URL>:389
+# ...
+# further down in the file...
+# Stacked realm configuration;  realms are tried in order until authentication succeeds or realm sources are exhausted.
+securityManager.realms = $tokenAuthRealm, $ldapRealm
+----
+
+This is useful for setups where all LDAP users are allowed equal access.
+
+==== Token Store Configuration Parameters
+Edit the file “etc/opendaylight/karaf/08-authn-config.xml” and set the following:
+
+*timeToLive*:: Configures the maximum time, in milliseconds, that tokens are cached. The default is 360000.
+
+Save the file.
+
+=== Authorization Configuration
+==== Shiro-Based Authorization
+OpenDaylight AAA has support for Role Based Access Control based on the Apache Shiro permissions system.  Configuration of the authorization system is done offline;  authorization currently cannot be configured after the controller is started.  Thus, authorization in the Boron release is aimed towards supporting coarse-grained security policies, with the goal of providing more robust configuration capabilities in the future.  Shiro-based Authorization is documented on the Apache Shiro website (http://shiro.apache.org/web.html#Web-%7B%7B%5Curls%5C%7D%7D).
+
+==== Enable “admin” Role Based Access to the IdMLight RESTful web service
+Edit the “etc/shiro.ini” configuration file and add “/auth/v1/** = authcBasic, roles[admin]” above the line “/** = authcBasic” within the “urls” section.
+
+----
+/auth/v1/** = authcBasic, roles[admin]
+/** = authcBasic
+----
+
+This restricts the IdmLight REST endpoints so that the requesting user must hold a grant for the admin role.
+
+NOTE:  The ordering of the authorization rules above is important!
+
+==== AuthZ Broker Facade
+
+ODL includes an experimental Authorization Broker Facade, which allows finer grained access control for REST endpoints.  Since this feature was not well tested in the Boron release, it is recommended to use the Shiro-based mechanism instead, and rely on the Authorization Broker Facade for POC use only.
+
+===== AuthZ Broker Facade Feature Installation
+To install the authorization broker facade, please issue the following command in the karaf shell:
+
+----
+feature:install odl-restconf odl-aaa-authz
+----
+
+===== Add an Authorization Rule
+The following shows how one might go about securing the controller so that only admins can access restconf.
+
+----
+curl -u admin:admin -H "Content-Type: application/json" --data-binary @./rule.json http://localhost:8181/restconf/config/authorization-schema:simple-authorization/policies/RestConfService/
+cat ./rule.json
+{
+    "policies": {
+        "resource": "*",
+        "service":"RestConfService",
+        "role": "admin"
+    }
+}
+----
+
+=== Accounting Configuration
+All AAA logging is output to the standard karaf.log file.
+
+----
+log:set TRACE org.opendaylight.aaa
+----
+
+This command enables the most verbose level of logging for AAA components.
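To drop back to the default level afterwards, karaf's DEFAULT pseudo-level can be used to remove the per-logger override (assuming no custom level was configured for these components):

----
log:set DEFAULT org.opendaylight.aaa
----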
index 3fa88db2a07723f7d41e1623d874029cabbf8457..0f4d0e6ff9a3a3c7838b6260e8f292a8fcd24ae1 100644 (file)
@@ -1,42 +1,3 @@
 == Atrium User Guide
 
-=== Overview
-Project Atrium is an open source SDN distribution - a vertically integrated
-set of open source components which together form a complete SDN stack.
-Its goals are threefold:
-
-* Close the large integration-gap of the elements that are needed to build an SDN stack -
-  while there are multiple choices at each layer, there are missing pieces with poor or no integration.
-* Overcome a massive gap in interoperability - This exists both at the switch level,
-  where existing products from different vendors have limited compatibility,
-  making it difficult to connect an arbitrary switch and controller, and at the API level,
-  where it's difficult to write a portable application across multiple controller platforms.
-* Work closely with network operators on deployable use-cases, so that they could download
-  near production quality code from one location, and get started with functioning
-  software defined networks on real hardware.
-
-=== Architecture
-The key components of Atrium BGP Peering Router Application are as follows:
-
-* Data Plane Switch - Data plane switch is the entity that uses flow table entries installed by
-  BGP Routing Application through SDN controller. In the simplest form data plane switch with
-  the installed flows act like a BGP Router.
-* OpenDaylight Controller - OpenDaylight SDN controller has many utility applications or plugins
-  which are leveraged by the BGP Router application to manage the control plane information.
-* BGP Routing Application - An application running within the OpenDaylight runtime environment
-  to handle I-BGP updates.
-* <<_didm_user_guide,DIDM>> - DIDM manages the drivers specific to each data plane switch connected to the controller.
-  The drivers are created primarily to hide the underlying complexity of the devices
-  and to expose a uniform API to applications.
-* Flow Objectives API - The driver implementation provides a pipeline abstraction and
-  exposes Flow Objectives API. This means applications need to be aware of only the
-  Flow Objectives API without worrying about the Table IDs or the pipelines.
-* Control Plane Switch - This component is primarily used to connect the OpenDaylight SDN controller
-  with the Quagga Soft-Router and establish a path for forwarding E-BGP packets to and from Quagga.
-* Quagga soft router - An open source routing software that handles E-BGP updates.
-
-=== Running Atrium
-* To run the Atrium BGP Routing Application in OpenDaylight distribution,
-  simply install the `odl-atrium-all` feature.
-+
-     feature:install odl-atrium-all
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/atrium-user-guide.html
index e1e20848961f32269713bd3bf389e732baf16ad1..4640c2071054795969c4c6fcbcb9c2436e8bec17 100644 (file)
@@ -1,911 +1,3 @@
 == BGP User Guide ==
 
-=== Configuring BGP ===
-
-The OpenDaylight Karaf distribution comes pre-configured with a baseline BGP
-configuration. You can find it in the `etc/opendaylight/karaf` directory and it
-consists of two files:
-
-`31-bgp.xml`:: defines the basic parser and RIB support
-`41-bgp-example.xml`:: contains a sample configuration which needs to be
-  customized to your deployment
-
-The next sections will describe how to configure BGP manually or using RESTCONF.
-
-==== RIB ====
-
-The configuration of the Routing Information Base (RIB) is specified using a block in the `41-bgp-example.xml` file.
-
-[source,xml]
-----
-<module>
-    <type>prefix:rib-impl</type>
-    <name>example-bgp-rib</name>
-    <rib-id>example-bgp-rib</rib-id>
-    <local-as>64496</local-as>
-    <bgp-id>192.0.2.2</bgp-id>
-    <cluster-id>192.0.2.3</cluster-id>
-    ...
-</module>
-----
-
-- *type* - should always be set to `prefix:rib-impl`
-- *name* and *rib-id* - BGP RIB Identifier, you can specify multiple BGP RIBs by
-replicating the above `module` block. Each such RIB must have a unique rib-id and name.
-- *local-as* - the local AS number (where OpenDaylight is deployed), we use this in best path selection
-- *bgp-id* - the local BGP identifier (the IP of the VM where OpenDaylight is deployed),
-we use this in best path selection.
-- *cluster-id* - cluster identifier, optional, if not specified, BGP Identifier will be used
-
-Depending on your BGP router, you might need to switch from
-linkstate attribute type 99 to 29. Check with your router vendor. Change the
-field iana-linkstate-attribute-type to true if your router supports type 29.
-This snippet is located in `31-bgp.xml` file.
-
-[source,xml]
-----
-<module>
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:linkstate">prefix:bgp-linkstate</type>
- <name>bgp-linkstate</name>
- <iana-linkstate-attribute-type>true</iana-linkstate-attribute-type>
-</module>
-----
-
-- *iana-linkstate-attribute-type* - IANA has issued an early allocation for the
-BGP linkstate path attribute (=29). To preserve the old value (=99), set this to
-false; to use the IANA-assigned type, set the value to true or remove it, as it is true by default.
-
-==== BGP Peer ====
-
-The initial configuration is written so that it will be ignored to prevent the
-client from starting with default configuration. Therefore the first step is to
-uncomment the module containing bgp-peer.
-
-[source,xml]
-----
-<module>
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer</type>
- <name>example-bgp-peer</name>
- <host>192.0.2.1</host>
- <holdtimer>180</holdtimer>
- <peer-role>ibgp</peer-role>
- <rib>
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:rib-instance</type>
-  <name>example-bgp-rib</name>
- </rib>
- ...
-</module>
-----
-
-- *name* - BGP Peer name, in this configuration file you can specify multiple BGP Peers by replicating the above `module` block. Each peer must have a unique name.
-- *host* - IP address or hostname of BGP speaker where OpenDaylight should connect to the peer
-- *holdtimer* - hold time in seconds
-- *peer-role* - If peer role is not present, default value "ibgp" will be used (other allowed values are "ebgp" and "rr-client"). This field is case-sensitive.
-- *rib* - BGP RIB identifier
-
-==== Configure Connection Attributes (Optional) ====
-
-[source,xml]
-----
-<module>
-   <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:reconnectstrategy">prefix:timed-reconnect-strategy</type>
-   <name>example-reconnect-strategy</name>
-   <min-sleep>1000</min-sleep>
-   <max-sleep>180000</max-sleep>
-   <sleep-factor>2.00</sleep-factor>
-   <connect-time>5000</connect-time>
-   <executor>
-       <type xmlns:netty="urn:opendaylight:params:xml:ns:yang:controller:netty">netty:netty-event-executor</type>
-       <name>global-event-executor</name>
-   </executor>
-</module>
-----
-
-- *min-sleep* - minimum sleep time (milliseconds) between reconnect attempts
-- *max-sleep* - maximum sleep time (milliseconds) between reconnect attempts
-- *sleep-factor* - power factor of the sleep time between reconnect attempts, i.e., the previous sleep time is multiplied by this number to determine the next sleep time, but it never exceeds *max-sleep*
-- *connect-time* - how long BGP should wait (milliseconds) for the TCP connect
-attempt, overriding the default connection timeout dictated by TCP.
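The backoff produced by min-sleep, max-sleep, and sleep-factor can be worked through in a short sketch (the helper name `backoff_times` is hypothetical; the values come from the example configuration above):

```python
# Sketch of the timed-reconnect backoff described above (illustrative only).
def backoff_times(min_sleep, max_sleep, sleep_factor, attempts):
    """Sleep times (in milliseconds) before each successive reconnect attempt."""
    times, current = [], min_sleep
    for _ in range(attempts):
        times.append(current)
        # Multiply by the sleep factor, but never exceed max-sleep.
        current = min(int(current * sleep_factor), max_sleep)
    return times

# With min-sleep=1000, max-sleep=180000, sleep-factor=2.00:
backoff_times(1000, 180000, 2.0, 9)
# -> [1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000, 180000]
```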
-
-
-==== BGP Speaker Configuration ====
-
-The previous entries described configuration of a BGP connections initiated by
-OpenDaylight. OpenDaylight can also accept incoming BGP connections.
-
-The configuration of BGP speaker is located in: `41-bgp-example.xml`:
-
-[source,xml]
-----
-<module>
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer-acceptor</type>
-    <name>bgp-peer-server</name>
-
-    <!--Default parameters-->
-    <!--<binding-address>0.0.0.0</binding-address>-->
-    <!--<binding-port>1790</binding-port>-->
-
-    ...
-    <!--Drops or accepts incoming BGP connection, every BGP Peer that should be accepted needs to be added to this registry-->
-    <peer-registry>
-        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer-registry</type>
-        <name>global-bgp-peer-registry</name>
-    </peer-registry>
-</module>
-----
-
-- Changing binding address: Uncomment tag binding-address and change the address to e.g. _127.0.0.1_. The default binding address is _0.0.0.0_.
-- Changing binding port: Uncomment tag binding-port and change the port to e.g.
-  _1790_. The default binding port is _179_ as specified in link:http://tools.ietf.org/html/rfc4271[RFC 4271].
-
-==== Incoming BGP Connections ====
-
-*The BGP speaker drops all BGP connections from unknown BGP peers.* The decision is
-made in component bgp-peer-registry that is injected into the speaker (The
-registry is configured in `31-bgp.xml`).
-
-To add a BGP Peer configuration into the registry, it is necessary to configure a
-regular BGP peer, just as in the example in `41-bgp-example.xml`. Notice that the
-BGP peer depends on the same bgp-peer-registry as the bgp-speaker:
-
-[source,xml]
-----
-<module>
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer</type>
-    <name>example-bgp-peer</name>
-    <host>192.0.2.1</host>
-    ...
-    <peer-registry>
-        <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer-registry</type>
-        <name>global-bgp-peer-registry</name>
-    </peer-registry>
-    ...
-</module>
-----
-
-The BGP peer registers itself into the registry, which allows incoming BGP
-connections to be handled by the bgp-speaker. (The peer-registry config attribute is
-currently optional to preserve backwards compatibility.) With this configuration,
-the connection to 192.0.2.1 is initiated by OpenDaylight, but will also be accepted from
-192.0.2.1. In case both connections are established, only one of them
-is preserved and the other is dropped: the connection initiated from the
-device with the lower BGP id is dropped by the registry. Each BGP peer must
-be configured in its own `module` block. Note that the name of the module needs to be
-unique, so if you are configuring more peers, when changing the *host*, also change
-the *name*.
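The registry's tie-break rule for simultaneous connections can be sketched as follows. This is a simplified illustration of the behavior described above, not the actual OpenDaylight implementation:

```python
def resolve_duplicate(local_bgp_id: str, remote_bgp_id: str) -> str:
    """When both sides initiate a session to each other, only one connection
    survives: the one initiated by the device with the HIGHER BGP identifier.
    The connection initiated from the lower BGP id is dropped by the registry."""
    def as_int(bgp_id: str) -> int:
        # BGP identifiers are compared as 32-bit unsigned integers
        # derived from their dotted-quad form.
        a, b, c, d = (int(p) for p in bgp_id.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    if as_int(local_bgp_id) > as_int(remote_bgp_id):
        return "keep-locally-initiated"   # the remote's attempt is dropped
    return "keep-remotely-initiated"      # our own attempt is dropped
```

For example, with a local BGP id of 192.0.2.2 and a peer at 192.0.2.1, the locally initiated connection wins.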
-
-To configure a peer that only listens for incoming connections, and to instruct
-OpenDaylight not to initiate the connection, add the initiate-connection attribute
-to the peer's configuration and set it to false:
-
-[source,xml]
-----
-<module>
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">prefix:bgp-peer</type>
-    <name>example-bgp-peer</name>
-    <host>192.0.2.1</host>                         // IP address or hostname of the speaker
-    <holdtimer>180</holdtimer>
-    <initiate-connection>false</initiate-connection>  // Connection will not be initiated by ODL
-    ...
-</module>
-----
-
-- *initiate-connection* - if set to false, OpenDaylight will not initiate a connection to this peer. The default value is true.
-
-==== BGP Application Peer  ====
-
-A BGP speaker needs to register all peers that may connect to it (meaning that if
-a BGP peer is not configured, its connection to OpenDaylight won't be
-successful). As a first step, configure the RIB. Then, instead of configuring a
-regular peer, configure this application peer, with its own application RIB.
-Change the bgp-peer-id, which is your local BGP ID to be
-used in the BGP best path selection algorithm.
-
-[source,xml]
-----
-<module>
- <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-application-peer</type>
- <name>example-bgp-peer-app</name>
- <bgp-peer-id>10.25.1.9</bgp-peer-id>
- <target-rib>
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-instance</type>
-  <name>example-bgp-rib</name>
- </target-rib>
- <application-rib-id>example-app-rib</application-rib-id>
- ...
-</module>
-----
-
-- *bgp-peer-id* - the local BGP identifier (the IP of the VM where OpenDaylight is deployed), used in best path selection
-- *target-rib* - RIB ID of existing RIB where the data should be transferred
-- *application-rib-id* - RIB ID of local application RIB (all the routes that you put to OpenDaylight will be displayed here)
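For scripted deployments, the application-peer module element above can be assembled programmatically. The sketch below uses only the Python standard library and the example values from this section; it merely illustrates the shape of the payload:

```python
import xml.etree.ElementTree as ET

NS = "urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl"

def app_peer_module(name, peer_id, target_rib, app_rib_id):
    """Build the <module> element for a bgp-application-peer instance."""
    module = ET.Element("module")
    ET.SubElement(module, "type", {"xmlns:x": NS}).text = "x:bgp-application-peer"
    ET.SubElement(module, "name").text = name
    ET.SubElement(module, "bgp-peer-id").text = peer_id
    rib = ET.SubElement(module, "target-rib")
    ET.SubElement(rib, "type", {"xmlns:x": NS}).text = "x:rib-instance"
    ET.SubElement(rib, "name").text = target_rib
    ET.SubElement(module, "application-rib-id").text = app_rib_id
    return ET.tostring(module, encoding="unicode")

xml = app_peer_module("example-bgp-peer-app", "10.25.1.9",
                      "example-bgp-rib", "example-app-rib")
```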
-
-//TODO: internal link to Populate RIB
-//To populate RIB use 
-
-//TODO: internal jump to section?
-//In order to get routes advertised to other peers, you have to also configure the peers, as described in section BGP Peer 
-
-=== Configuration through RESTCONF ===
-
-Another method to configure BGP is dynamically through RESTCONF. Instead of
-restarting Karaf, install the feature that provides access to
-'restconf/config/' URLs:
-
-  feature:install odl-netconf-connector-all
-
-To check which modules you currently have configured, visit the following link:
-http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/
-
-This URL is also used to POST new configuration. If you want to change any
-other configuration that is listed here, make sure you include the correct
-namespaces; RESTCONF will tell you if a namespace is wrong.
-
-To update an existing configuration, use *PUT* and give the full path to the element you wish to update.
-
-It is vital that you follow the order of steps described in this user guide.
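All of the configuration URLs used in this section share the same controller-config mount-point prefix. A small helper like the following (hypothetical, not part of OpenDaylight) keeps them consistent in scripts:

```python
# Common RESTCONF prefix for the controller-config mount point.
BASE = ("http://127.0.0.1:8181/restconf/config/"
        "network-topology:network-topology/topology/topology-netconf/"
        "node/controller-config/yang-ext:mount/config:modules")

def module_url(module_type: str = "", instance: str = "") -> str:
    """Return the config URL for all modules, or for one module
    instance identified by its YANG type and instance name."""
    if not module_type:
        return BASE
    return f"{BASE}/module/{module_type}/{instance}"

rib_url = module_url("odl-bgp-rib-impl-cfg:rib-impl", "example-bgp-rib")
```

GETting `module_url()` lists the current configuration; PUT and POST target the same paths, as shown in the subsections that follow.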
-
-==== RIB ====
-
-First, configure the RIB. This module is already present in the configuration,
-so we change only the parameters we need. In this case, these are
-*bgp-rib-id* and *local-as*.
-
-*URL:*
-_http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-bgp-rib-impl-cfg:rib-impl/example-bgp-rib_
-
-*PUT:*
-[source,xml]
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
- <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-impl</type>
- <name>example-bgp-rib</name>
- <session-reconnect-strategy xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:protocol:framework">x:reconnect-strategy-factory</type>
-  <name>example-reconnect-strategy-factory</name>
- </session-reconnect-strategy>
- <rib-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">example-bgp-rib</rib-id>
- <extensions xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:spi">x:extensions</type>
-  <name>global-rib-extensions</name>
- </extensions>
- <codec-tree-factory xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-codec-tree-factory</type>
-  <name>runtime-mapping-singleton</name>
- </codec-tree-factory>
- <tcp-reconnect-strategy xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:protocol:framework">x:reconnect-strategy-factory</type>
-  <name>example-reconnect-strategy-factory</name>
- </tcp-reconnect-strategy>
- <data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-async-data-broker</type>
-  <name>pingpong-binding-data-broker</name>
- </data-provider>
- <local-as xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">64496</local-as>
- <bgp-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type>bgp-dispatcher</type>
-  <name>global-bgp-dispatcher</name>
- </bgp-dispatcher>
- <dom-data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
-  <name>pingpong-broker</name>
- </dom-data-provider>
- <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type>bgp-table-type</type>
-  <name>ipv4-unicast</name>
- </local-table>
- <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type>bgp-table-type</type>
-  <name>ipv6-unicast</name>
- </local-table>
- <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type>bgp-table-type</type>
-  <name>linkstate</name>
- </local-table>
- <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type>bgp-table-type</type>
-  <name>ipv4-flowspec</name>
- </local-table>
- <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type>bgp-table-type</type>
-  <name>ipv6-flowspec</name>
- </local-table>
- <local-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type>bgp-table-type</type>
-  <name>labeled-unicast</name>
- </local-table>
- <bgp-rib-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">192.0.2.2</bgp-rib-id>
- <openconfig-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp-openconfig-spi">x:bgp-openconfig-provider</type>
-  <name>openconfig-bgp</name>
- </openconfig-provider>
-</module>
-----
-
-Depending on your BGP router, you might need to switch from
-linkstate attribute type 99 to 29; check with your router vendor. Change the
-iana-linkstate-attribute-type field to true if your router supports type 29.
-You can do that with the following RESTCONF operation:
-
-*URL:* _http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-bgp-linkstate-cfg:bgp-linkstate/bgp-linkstate_
-
-*PUT:*
-[source,xml]
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
- <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:linkstate">x:bgp-linkstate</type>
- <name>bgp-linkstate</name>
- <iana-linkstate-attribute-type xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:linkstate">true</iana-linkstate-attribute-type>
-</module>
-----
-
-==== BGP Peer ====
-
-We also need to add a new module (bgp-peer) to the configuration. In this case, the
-whole module needs to be configured. Please change the values *host*, *holdtimer*
-and *peer-role* (if necessary).
-
-*URL:*  _http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules_
-
-*POST:*
-
-[source,xml]
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
- <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-peer</type>
- <name>example-bgp-peer</name>
- <host xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">192.0.2.1</host>
- <holdtimer xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">180</holdtimer>
- <peer-role xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">ibgp</peer-role>
- <rib xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-instance</type>
-  <name>example-bgp-rib</name>
- </rib>
- <peer-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-peer-registry</type>
-  <name>global-bgp-peer-registry</name>
- </peer-registry>
- <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
-  <name>ipv4-unicast</name>
- </advertized-table>
- <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
-  <name>ipv6-unicast</name>
- </advertized-table>
- <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
-  <name>linkstate</name>
- </advertized-table>
- <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
-  <name>ipv4-flowspec</name>
- </advertized-table>
- <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
-  <name>ipv6-flowspec</name>
- </advertized-table>
- <advertized-table xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-table-type</type>
-  <name>labeled-unicast</name>
- </advertized-table>
-</module>
-----
-
-This is all the information necessary to get OpenDaylight to connect to your speaker.
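The POST above can be issued from a script. This urllib sketch only builds the request without sending it; the admin/admin credentials are an assumption (the Karaf defaults) and the payload is a placeholder for the module body shown above:

```python
import base64
import urllib.request

MODULES_URL = ("http://127.0.0.1:8181/restconf/config/"
               "network-topology:network-topology/topology/topology-netconf/"
               "node/controller-config/yang-ext:mount/config:modules")

def build_post(url, xml_body, user="admin", password="admin"):
    """Prepare a RESTCONF POST carrying an XML module configuration.
    Credentials default to the assumed Karaf admin/admin account."""
    req = urllib.request.Request(url, data=xml_body.encode("utf-8"),
                                 method="POST")
    req.add_header("Content-Type", "application/xml")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req  # send with urllib.request.urlopen(req) when ready

req = build_post(MODULES_URL, "<module>...</module>")
```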
-
-==== BGP Application Peer ====
-
-Change the value *bgp-peer-id* which is your local BGP ID that will be used in
-BGP Best Path Selection algorithm.
-
-*URL:* _http://127.0.0.1:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules_
-
-*POST:*
-[source,xml]
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
- <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:bgp-application-peer</type>
- <name>example-bgp-peer-app</name>
- <bgp-peer-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">10.25.1.9</bgp-peer-id> <!-- Your local BGP-ID that will be used in BGP Best Path Selection algorithm -->
- <target-rib xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">x:rib-instance</type>
-  <name>example-bgp-rib</name>
-  </target-rib>
- <application-rib-id xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">example-app-rib</application-rib-id>
- <data-broker xmlns="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:impl">
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
-  <name>pingpong-broker</name>
- </data-broker>
-</module>
-----
-
-=== Tutorials ===
-
-==== Viewing BGP Topology ====
-
-This section summarizes how data learned from BGP can be viewed through RESTCONF. Currently, this is the only way to view the data.
-
-===== Network Topology View =====
-
-The URL for network topology is: http://localhost:8181/restconf/operational/network-topology:network-topology/
-
-If BGP is configured properly, it should display output similar to:
-
-[source,xml]
-----
-<network-topology>
- <topology>
-  <topology-id>pcep-topology</topology-id>
-  <topology-types>
-   <topology-pcep/>
-  </topology-types>
- </topology>
- <topology>
-  <server-provided>true</server-provided>
-  <topology-id>example-ipv4-topology</topology-id>
-  <topology-types/>
- </topology>
- <topology>
-  <server-provided>true</server-provided>
-  <topology-id>example-linkstate-topology</topology-id>
-  <topology-types/>
- </topology>
-</network-topology>
-----
-
-BGP topology information learned from BGP peers is presented in three topologies (if all three are configured):
-
-* *example-linkstate-topology* - displays links and nodes advertised through linkstate update messages
-
-** http://localhost:8181/restconf/operational/network-topology:network-topology/topology/example-linkstate-topology
-
-* *example-ipv4-topology* - displays IPv4 addresses of nodes in the topology
-
-** http://localhost:8181/restconf/operational/network-topology:network-topology/topology/example-ipv4-topology
-
-* *example-ipv6-topology* - displays IPv6 addresses of nodes in the topology
-
-** http://localhost:8181/restconf/operational/network-topology:network-topology/topology/example-ipv6-topology
-
-===== Route Information Base (RIB) View =====
-
-Another view of BGP data is through *BGP RIBs*, located here: http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/
-
-There are multiple RIBs configured:
-
-- AdjRibsIn (per peer): Adjacency RIBs In, BGP routes exactly as they come from a BGP peer
-- EffectiveRib (per peer): BGP routes after applying import policies
-- LocRib (per RIB): Local RIB, BGP routes from all peers
-- AdjRibsOut (per peer): BGP routes that will be advertised, after applying export policies
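The four RIB stages above form a pipeline from received routes to advertised routes. The sketch below is purely illustrative of that relationship:

```python
# Conceptual route pipeline through the BGP RIBs (illustrative only):
#   AdjRibsIn -> import policy -> EffectiveRib -> best path -> LocRib
#   LocRib -> export policy -> AdjRibsOut (per peer)
RIB_SCOPE = {
    "adj-ribs-in":   "per-peer",  # routes exactly as received from the peer
    "effective-rib": "per-peer",  # after applying import policies
    "loc-rib":       "per-rib",   # best routes selected from all peers
    "adj-ribs-out":  "per-peer",  # routes to advertise, after export policies
}

def is_per_peer(stage: str) -> bool:
    """True for RIB stages maintained per BGP peer rather than per RIB."""
    return RIB_SCOPE[stage] == "per-peer"
```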
-
-This is how the empty output looks when address families for IPv4 Unicast, IPv6 Unicast, IPv4 Flowspec, IPv6 Flowspec, IPv4 Labeled Unicast and Linkstate are configured:
-
-[source,xml]
-----
-<loc-rib xmlns="urn:opendaylight:params:xml:ns:yang:bgp-rib">
-  <tables>
-    <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv6-address-family</afi>
-    <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:unicast-subsequent-address-family</safi>
-    <attributes>
-      <uptodate>false</uptodate>
-    </attributes>
-    <ipv6-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
-    </ipv6-routes>
-  </tables>
-  <tables>
-    <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
-    <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:unicast-subsequent-address-family</safi>
-    <attributes>
-      <uptodate>false</uptodate>
-    </attributes>
-    <ipv4-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
-    </ipv4-routes>
-  </tables>
-  <tables>
-    <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
-    <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">x:flowspec-subsequent-address-family</safi>
-    <attributes>
-      <uptodate>false</uptodate>
-    </attributes>
-    <flowspec-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
-    </flowspec-routes>
-  </tables>
-  <tables>
-    <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv6-address-family</afi>
-    <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">x:flowspec-subsequent-address-family</safi>
-    <attributes>
-      <uptodate>false</uptodate>
-    </attributes>
-    <flowspec-ipv6-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
-    </flowspec-ipv6-routes>
-  </tables>
-  <tables>
-    <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-types">x:ipv4-address-family</afi>
-    <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-labeled-unicast">x:labeled-unicast-subsequent-address-family</safi>
-    <attributes>
-      <uptodate>false</uptodate>
-    </attributes>
-    <labeled-unicast-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-labeled-unicast">
-    </labeled-unicast-routes>
-  </tables>
-  <tables>
-    <afi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-linkstate">x:linkstate-address-family</afi>
-    <safi xmlns:x="urn:opendaylight:params:xml:ns:yang:bgp-linkstate">x:linkstate-subsequent-address-family</safi>
-    <attributes>
-      <uptodate>false</uptodate>
-    </attributes>
-    <linkstate-routes xmlns="urn:opendaylight:params:xml:ns:yang:bgp-linkstate">
-    </linkstate-routes>
-  </tables>
-</loc-rib>
-----
-
-You can see the details for each AFI by following the RESTCONF links:
-
-* *IPv4 Unicast* : http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv4-address-family/bgp-types:unicast-subsequent-address-family/ipv4-routes
-
-* *IPv6 Unicast* : http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv6-address-family/bgp-types:unicast-subsequent-address-family/ipv6-routes
-
-* *IPv4 Labeled Unicast* : http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv4-address-family/bgp-labeled-unicast:labeled-unicast-subsequent-address-family/bgp-labeled-unicast:labeled-unicast-routes
-
-* *IPv4 Flowspec* : http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv4-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-routes
-
-* *IPv6 Flowspec* : http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-types:ipv6-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-ipv6-routes
-
-* *Linkstate* : http://localhost:8181/restconf/operational/bgp-rib:bgp-rib/rib/example-bgp-rib/loc-rib/tables/bgp-linkstate:linkstate-address-family/bgp-linkstate:linkstate-subsequent-address-family/linkstate-routes
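The six URLs above differ only in the AFI, SAFI and route-container segments; a helper like this (hypothetical, for scripting convenience) can generate them:

```python
def loc_rib_table_url(afi: str, safi: str, routes: str,
                      rib: str = "example-bgp-rib") -> str:
    """Build the operational loc-rib URL for one AFI/SAFI table."""
    return ("http://localhost:8181/restconf/operational/bgp-rib:bgp-rib"
            f"/rib/{rib}/loc-rib/tables/{afi}/{safi}/{routes}")

ipv4_url = loc_rib_table_url("bgp-types:ipv4-address-family",
                             "bgp-types:unicast-subsequent-address-family",
                             "ipv4-routes")
```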
-
-==== Populate RIB ====
-
-If an application peer is configured, you can populate its RIB by making POST calls to RESTCONF like the following.
-
-===== IPv4 Unicast =====
-
-*Add route:*
-
-*URL:*  http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv4-routes/
-
-- where example-app-rib is your application RIB id (as specified in the configuration) and the tables path specifies the AFI and SAFI of the data that you want to add.
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-[source,xml]
-----
- <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-  <ipv4-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
-   <prefix>1.1.1.1/32</prefix>
-   <attributes>
-    <ipv4-next-hop>
-     <global>199.20.160.41</global>
-    </ipv4-next-hop><as-path/>
-    <multi-exit-disc>
-     <med>0</med>
-    </multi-exit-disc>
-    <local-pref>
-     <pref>100</pref>
-    </local-pref>
-    <originator-id>
-     <originator>41.41.41.41</originator>
-    </originator-id>
-    <origin>
-     <value>igp</value>
-    </origin>
-    <cluster-id>
-     <cluster>40.40.40.40</cluster>
-    </cluster-id>
-   </attributes>
-  </ipv4-route>
-----
-
-The request results in *204 No content*. This is expected.
-
-*Delete route:*
-
-*URL:*  http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv4-routes/bgp-inet:ipv4-route/<route-id>
-
-*Method:* DELETE
-
-===== IPv6 Unicast =====
-
-*Add route:*
-
-*URL:* http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv6-routes/
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-[source,xml]
-----
-  <ipv6-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-inet">
-   <prefix>2001:db8:30::3/128</prefix>
-   <attributes>
-    <ipv6-next-hop>
-     <global>2001:db8:1::6</global>
-    </ipv6-next-hop>
-    <as-path/>
-    <origin>
-     <value>egp</value>
-    </origin>
-   </attributes>
-  </ipv6-route>
-----
-
-The request results in *204 No content*. This is expected.
-
-*Delete route:*
-
-*URL:*  http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-types:unicast-subsequent-address-family/bgp-inet:ipv6-routes/bgp-inet:ipv6-route/<route-id>
-
-*Method:* DELETE
-
-===== IPv4 Labeled Unicast =====
-
-*Add route:*
-
-*URL:* http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-labeled-unicast:labeled-unicast-subsequent-address-family/bgp-labeled-unicast:labeled-unicast-routes
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-[source,xml]
-----
-  <labeled-unicast-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-labeled-unicast">
-   <route-key>label1</route-key>
-   <prefix>1.1.1.1/32</prefix>
-   <label-stack>
-    <label-value>123</label-value>
-   </label-stack>
-   <label-stack>
-    <label-value>456</label-value>
-   </label-stack>
-   <label-stack>
-    <label-value>342</label-value>
-   </label-stack>
-   <attributes>
-    <ipv4-next-hop>
-     <global>199.20.160.41</global>
-    </ipv4-next-hop>
-    <origin>
-     <value>igp</value>
-    </origin>
-    <as-path/>
-    <local-pref>
-     <pref>100</pref>
-    </local-pref>
-   </attributes>
-  </labeled-unicast-route>
-----
-
-The request results in *204 No content*. This is expected.
-
-*Delete route:*
-
-*URL:*  http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-labeled-unicast:labeled-unicast-subsequent-address-family/bgp-labeled-unicast:labeled-unicast-routes/bgp-labeled-unicast:labeled-unicast-route/<route-id>
-
-*Method:* DELETE
-
-===== IPv4 Flowspec =====
-
-*Add route:*
-
-*URL:* http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-routes
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-[source,xml]
-----
-<flowspec-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
-  <route-key>flow1</route-key>
-  <flowspec>
-    <destination-prefix>192.168.0.1/32</destination-prefix>
-  </flowspec>
-  <flowspec>
-    <source-prefix>10.0.0.1/32</source-prefix>
-  </flowspec>
-  <flowspec>
-    <protocol-ips>
-      <op>equals end-of-list</op>
-      <value>6</value>
-    </protocol-ips>
-  </flowspec>
-  <flowspec>
-    <ports>
-      <op>equals end-of-list</op>
-      <value>80</value>
-    </ports>
-  </flowspec>
-  <flowspec>
-    <destination-ports>
-      <op>greater-than</op>
-      <value>8080</value>
-    </destination-ports>
-    <destination-ports>
-      <op>and-bit less-than end-of-list</op>
-      <value>8088</value>
-    </destination-ports>
-  </flowspec>
-  <flowspec>
-    <source-ports>
-      <op>greater-than end-of-list</op>
-      <value>1024</value>
-    </source-ports>
-  </flowspec>
-  <flowspec>
-    <types>
-      <op>equals end-of-list</op>
-      <value>0</value>
-    </types>
-  </flowspec>
-  <flowspec>
-    <codes>
-      <op>equals end-of-list</op>
-      <value>0</value>
-    </codes>
-  </flowspec>
-  <flowspec>
-    <tcp-flags>
-      <op>match end-of-list</op>
-      <value>32</value>
-    </tcp-flags>
-  </flowspec>
-  <flowspec>
-    <packet-lengths>
-      <op>greater-than</op>
-      <value>400</value>
-    </packet-lengths>
-    <packet-lengths>
-      <op>and-bit less-than end-of-list</op>
-       <value>500</value>
-    </packet-lengths>
-  </flowspec>
-  <flowspec>
-    <dscps>
-      <op>equals end-of-list</op>
-      <value>20</value>
-    </dscps>
-  </flowspec>
-  <flowspec>
-    <fragments>
-      <op>match end-of-list</op>
-      <value>first</value>
-    </fragments>
-  </flowspec>
-  <attributes>
-    <origin>
-      <value>igp</value>
-    </origin>
-    <as-path/>
-    <local-pref>
-      <pref>100</pref>
-    </local-pref>
-    <extended-communities>
-    ....
-    </extended-communities>
-  </attributes>
-</flowspec-route>
-----
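Each `<op>` token string in the flowspec components above corresponds to flag bits of the RFC 5575 numeric operator. The sketch below is simplified: it omits the value-length bits, and the bitmask operators (the `match` token used by tcp-flags and fragments) use a different bit layout that is not covered here:

```python
# RFC 5575 numeric-operator flag bits (value-length bits omitted for brevity):
OP_BITS = {
    "end-of-list": 0x80,  # last {op, value} pair in the component
    "and-bit":     0x40,  # AND with the previous pair (default is OR)
    "less-than":   0x04,
    "greater-than": 0x02,
    "equals":      0x01,
}

def encode_op(op_string: str) -> int:
    """Encode an <op> token string, e.g. 'and-bit less-than end-of-list'."""
    byte = 0
    for token in op_string.split():
        byte |= OP_BITS[token]
    return byte
```

For instance, `equals end-of-list` encodes to 0x81, and the destination-ports pair `and-bit less-than end-of-list` to 0xC4.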
-
-*Flowspec Extended Communities (Actions):*
-
-[source,xml]
-----
-  <extended-communities>
-    <transitive>true</transitive>
-    <traffic-rate-extended-community>
-      <informative-as>123</informative-as>
-      <local-administrator>AAAAAA==</local-administrator>
-    </traffic-rate-extended-community>
-  </extended-communities>
-
-  <extended-communities>
-    <transitive>true</transitive>
-    <traffic-action-extended-community>
-      <sample>true</sample>
-      <terminal-action>false</terminal-action>
-    </traffic-action-extended-community>
-  </extended-communities>
-
-  <extended-communities>
-    <transitive>true</transitive>
-    <redirect-extended-community>
-      <global-administrator>123</global-administrator>
-      <local-administrator>AAAAew==</local-administrator>
-    </redirect-extended-community>
-  </extended-communities>
-
-  <extended-communities>
-    <transitive>true</transitive>
-    <redirect-ipv4>
-      <global-administrator>192.168.0.1</global-administrator>
-      <local-administrator>12345</local-administrator>
-    </redirect-ipv4>
-  </extended-communities>
-
-  <extended-communities>
-    <transitive>true</transitive>
-    <redirect-as4>
-      <global-administrator>64495</global-administrator>
-      <local-administrator>12345</local-administrator>
-    </redirect-as4>
-  </extended-communities>
-
-  <extended-communities>
-    <transitive>true</transitive>
-    <redirect-ip-nh-extended-community>
-      <copy>false</copy>
-    </redirect-ip-nh-extended-community>
-  </extended-communities>
-
-  <extended-communities>
-    <transitive>true</transitive>
-    <traffic-marking-extended-community>
-      <global-administrator>20</global-administrator>
-    </traffic-marking-extended-community>
-  </extended-communities>
-----
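The `local-administrator` values in the traffic-rate and redirect-extended-community examples above are base64-encoded byte strings; decoding them recovers the numeric value:

```python
import base64

def decode_local_administrator(b64: str) -> int:
    """Decode a base64 local-administrator field into its integer value."""
    return int.from_bytes(base64.b64decode(b64), "big")
```

For example, `AAAAew==` decodes to 123 and `AAAAAA==` to 0.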
-
-The request results in *204 No content*. This is expected.
-
-*Delete route:*
-
-*URL:* http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv4-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-routes/bgp-flowspec:flowspec-route/<route-id>
-
-*Method:* DELETE
-
-===== IPv6 Flowspec =====
-
-*Add route:*
-
-*URL:* http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-ipv6-routes
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-[source,xml]
-----
-<flowspec-route xmlns="urn:opendaylight:params:xml:ns:yang:bgp-flowspec">
-  <route-key>flow-v6</route-key>
-  <flowspec>
-    <destination-prefix>2001:db8:30::3/128</destination-prefix>
-  </flowspec>
-  <flowspec>
-    <source-prefix>2001:db8:31::3/128</source-prefix>
-  </flowspec>
-  <flowspec>
-    <flow-label>
-      <op>equals end-of-list</op>
-      <value>1</value>
-    </flow-label>
-  </flowspec>
-  <attributes>
-    <extended-communities>
-      <redirect-ipv6>
-        <global-administrator>2001:db8:1::6</global-administrator>
-        <local-administrator>12345</local-administrator>
-      </redirect-ipv6>
-    </extended-communities>
-    <origin>
-      <value>igp</value>
-    </origin>
-    <as-path/>
-    <local-pref>
-      <pref>100</pref>
-    </local-pref>
-  </attributes>
-</flowspec-route>
-----
-
-The request results in *204 No content*. This is expected.
-
-*Delete route:*
-
-*URL:* http://localhost:8181/restconf/config/bgp-rib:application-rib/example-app-rib/tables/bgp-types:ipv6-address-family/bgp-flowspec:flowspec-subsequent-address-family/bgp-flowspec:flowspec-ipv6-routes/bgp-flowspec:flowspec-route/<route-id>
-
-*Method:* DELETE
\ No newline at end of file
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/bgp-user-guide.html
index 6290bf782e2d426249c97145f92109b1dab47b97..44eb3f30590d6129603816a735aef3702c413d03 100644 (file)
@@ -1,135 +1,3 @@
 == BGP Monitoring Protocol User Guide ==
 
-=== Overview ===
-
-The OpenDaylight Karaf distribution comes preconfigured with a baseline BMP configuration:
-
-- *32-bmp.xml* (initial configuration for BMP messages handler service provider and BMP client/server dispatcher settings)
-- *42-bmp-example.xml* (sample initial configuration for the BMP Monitoring Station application)
-
-=== Configuring BMP ===
-
-==== Server Binding ====
-The default shipped configuration starts a BMP server on 0.0.0.0:12345. You can change this behavior in *42-bmp-example.xml*:
-
-[source,xml]
-----
- <module>
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">prefix:bmp-monitor-impl</type>
-  <name>example-bmp-monitor</name>
-  <!--<binding-address>0.0.0.0</binding-address>-->
-  <binding-port>12345</binding-port>
-  ...
- </module>
-----
-
-- *binding-address* - the address on which the BMP server will listen; to change the value, uncomment the line first
-- *binding-port* - the port on which the BMP server will listen
-
-Multiple instances of the BMP monitoring station (*bmp-monitor-impl* module) can be created. However, each instance must have a unique pair of *binding-address* and *binding-port*.
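Because each monitoring-station instance must bind a distinct address/port pair, a quick sanity check before deployment can catch clashes. A minimal sketch:

```python
def check_unique_bindings(instances):
    """instances: iterable of (name, binding_address, binding_port) tuples.
    Raises ValueError if two instances share the same address/port pair."""
    seen = {}
    for name, addr, port in instances:
        key = (addr, port)
        if key in seen:
            raise ValueError(f"{name} and {seen[key]} both bind {addr}:{port}")
        seen[key] = name

# The single default instance from 42-bmp-example.xml passes trivially:
check_unique_bindings([("example-bmp-monitor", "0.0.0.0", 12345)])
```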
-
-==== Active mode ====
-OpenDaylight's BMP can be configured to act as the active party of the connection (ODL BMP < = > monitored router). To enable this functionality,
-configure a monitored-router with the mandatory parameters:
-
-* address (must be unique for each configured "monitored-router"),
-* port,
-* active.
-
-See the following example from 42-bmp-example.xml:
-
-[source,xml]
-----
- <monitored-router>
-  <address>192.0.2.2</address>
-  <port>1234</port>
-  <active>true</active>
- </monitored-router>
-----
-
-=== Configuration through RESTCONF ===
-
-==== Server Binding ====
-
-*URL:*
-_http://<controllerIP>:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/config:module/odl-bmp-impl-cfg:bmp-monitor-impl/example-bmp-monitor_
-
-*Content-Type:*
-application/xml
-
-*Method:*
-PUT
-
-*Body:*
-[source,xml]
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
-  <name>example-bmp-monitor</name>
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">x:bmp-monitor-impl</type>
-  <bmp-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type>bmp-dispatcher</type>
-    <name>global-bmp-dispatcher</name>
-  </bmp-dispatcher>
-  <codec-tree-factory xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-codec-tree-factory</type>
-    <name>runtime-mapping-singleton</name>
-  </codec-tree-factory>
-  <extensions xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:spi">x:extensions</type>
-    <name>global-rib-extensions</name>
-  </extensions>
-  <binding-address xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">0.0.0.0</binding-address>
-  <dom-data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
-    <name>pingpong-broker</name>
-  </dom-data-provider>
-  <binding-port xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">12345</binding-port>
-</module>
-----
-
-* Change the values of *binding-address* and/or *binding-port* as needed.
-
-==== Active mode ====
-
-*URL:*
-_http://<controllerIP>:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/config:module/odl-bmp-impl-cfg:bmp-monitor-impl/example-bmp-monitor_
-
-*Content-Type:*
-application/xml
-
-*Method:*
-PUT
-
-*Body:*
-[source,xml]
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
-  <name>example-bmp-monitor</name>
-  <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">x:bmp-monitor-impl</type>
-  <bmp-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type>bmp-dispatcher</type>
-    <name>global-bmp-dispatcher</name>
-  </bmp-dispatcher>
-  <codec-tree-factory xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">x:binding-codec-tree-factory</type>
-    <name>runtime-mapping-singleton</name>
-  </codec-tree-factory>
-  <extensions xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:bgp:rib:spi">x:extensions</type>
-    <name>global-rib-extensions</name>
-  </extensions>
-  <binding-address xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">0.0.0.0</binding-address>
-  <dom-data-provider xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <type xmlns:x="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">x:dom-async-data-broker</type>
-    <name>pingpong-broker</name>
-  </dom-data-provider>
-  <binding-port xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">12345</binding-port>
-  <monitored-router xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">
-    <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">127.0.0.1</address>
-    <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">1234</port>
-    <active xmlns="urn:opendaylight:params:xml:ns:yang:controller:bmp:impl">true</active>
-  </monitored-router>
-</module>
-----
-
-* Change the values of *address* and *port* as needed.
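The PUT above can be scripted. The sketch below only builds the request object without sending it; the controller address and admin/admin credentials are the usual ODL defaults, assumed here rather than taken from this guide.

```python
import base64
import urllib.request

# Assumed values: default RESTCONF port 8181 and admin/admin credentials.
URL = ("http://localhost:8181/restconf/config/network-topology:network-topology/"
       "topology/topology-netconf/node/controller-config/yang-ext:mount/"
       "config:modules/config:module/odl-bmp-impl-cfg:bmp-monitor-impl/"
       "example-bmp-monitor")

def build_put(body: bytes) -> urllib.request.Request:
    # Build (but do not send) the RESTCONF PUT shown above.
    auth = base64.b64encode(b"admin:admin").decode("ascii")
    return urllib.request.Request(
        URL, data=body, method="PUT",
        headers={"Content-Type": "application/xml",
                 "Authorization": "Basic " + auth})

req = build_put(b"<module>...</module>")
print(req.get_method())
```

Once a controller is actually reachable, `urllib.request.urlopen(req)` would submit it.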
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/bgp-monitoring-protocol-user-guide.html
index c1b36089ef0a96af4ceeeaa578e0f6dee722583e..503cb7de3ed61e163b70779eb17b8b9cb7c9dbf0 100644 (file)
@@ -1,281 +1,3 @@
 == PCEP User Guide ==
 
-=== Overview ===
-
-The OpenDaylight Karaf distribution comes preconfigured with a baseline PCEP configuration:
-
-- *32-pcep.xml* (basic PCEP configuration, including session parameters)
-- *39-pcep-provider.xml* (configuration of the PCEP provider)
-
-=== Configuring PCEP ===
-
-The default shipped configuration will start a PCE server on 0.0.0.0:4189. You can change this behavior in *39-pcep-provider.xml*:
-
-[source,xml]
-----
-<module>
- <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:pcep:topology:provider">prefix:pcep-topology-provider</type>
- <name>pcep-topology</name>
- <listen-address>192.168.122.55</listen-address>
- <listen-port>4189</listen-port>
-...
-</module>
-----
-
-- *listen-address* - address on which the PCE server will be started and listen
-- *listen-port* - port on which the PCE server will listen
-
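As a minimal sketch of reading those two values back out of the module element (the snippet is abbreviated from the example above):

```python
import xml.etree.ElementTree as ET

SNIPPET = """<module>
 <name>pcep-topology</name>
 <listen-address>192.168.122.55</listen-address>
 <listen-port>4189</listen-port>
</module>"""

def pce_endpoint(xml_text):
    # Read the PCE listen endpoint out of a 39-pcep-provider.xml module.
    module = ET.fromstring(xml_text)
    return (module.findtext("listen-address"),
            int(module.findtext("listen-port")))

print(pce_endpoint(SNIPPET))
```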
-The default PCEP configuration conforms to the stateful PCEP extensions:
-
-- http://tools.ietf.org/html/draft-ietf-pce-stateful-pce-07[draft-ietf-pce-stateful-pce-07] - PCEP Extensions for Stateful PCE
-- https://tools.ietf.org/html/draft-ietf-pce-pce-initiated-lsp-00[draft-ietf-pce-pce-initiated-lsp-00] - PCEP Extensions for PCE-initiated LSP Setup in a Stateful PCE Model
-- https://tools.ietf.org/html/draft-ietf-pce-stateful-sync-optimizations-03[draft-ietf-pce-stateful-sync-optimizations-03] - Optimizations of Label Switched Path State
-Synchronization Procedures for a Stateful PCE
-
-==== PCEP Segment Routing ====
-
-Conforms to link:http://tools.ietf.org/html/draft-ietf-pce-segment-routing-01[draft-ietf-pce-segment-routing] - PCEP extension for Segment Routing
-
-The default configuration file is located in etc/opendaylight/karaf.
-
-- *33-pcep-segment-routing.xml* - You don't need to edit this file.
-
-=== Tunnel Management ===
-
-Programming tunnels through PCEP is one of the key features of the PCEP implementation in OpenDaylight.
-Users can create, update and delete tunnels via RESTCONF calls.
-Tunnel (LSP - Label Switched Path) arguments are passed through RESTCONF and generate a PCEP message that is sent to the PCC (which is also specified in the RESTCONF call).
-The PCC sends a response back to OpenDaylight. The response is then interpreted and returned via RESTCONF, where, in case of success, the new LSP is displayed.
-
-PCEP Segment Routing extends draft-ietf-pce-stateful-pce-07 and draft-ietf-pce-pce-initiated-lsp-00 by introducing a new Segment Routing Explicit Route Object (SR-ERO) subobject composed of a SID (Segment Identifier)
-and/or an NAI (Node or Adjacency Identifier). The Segment Routing path is carried in the ERO object as a list of SR-ERO subobjects ordered by the user.
-The draft redefines the format of the messages (PCUpd, PCRpt, PCInitiate) - along with the common header, they can hold SRP, LSP and ERO (containing only SR-ERO subobjects) objects.
-
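The SR-ERO structure above can be modelled as a small sketch; this is an illustrative data model, not the wire encoding defined by the draft, and the class names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SrEroSubobject:
    # An SR-ERO subobject carries a SID and/or an NAI; at least one is needed.
    sid: Optional[int] = None    # Segment Identifier
    nai: Optional[str] = None    # Node or Adjacency Identifier
    loose: bool = False

    def __post_init__(self):
        if self.sid is None and self.nai is None:
            raise ValueError("SR-ERO subobject needs a SID and/or an NAI")

def sr_path(subobjects: List[SrEroSubobject]) -> List[SrEroSubobject]:
    # A Segment Routing path is the user-ordered list carried in the ERO.
    return list(subobjects)

path = sr_path([SrEroSubobject(sid=12, nai="39.39.39.39"),
                SrEroSubobject(sid=11)])
print(len(path))
```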
-==== Creating LSP ====
-An LSP in PCEP can be created in one or two steps. Making an add-lsp operation will trigger a PCInitiate message to the PCC.
-
-*URL:* http://localhost:8181/restconf/operations/network-topology-pcep:add-lsp
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-*Body:*
-
-*PCE Active Stateful:*
-[source,xml]
-----
-<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
- <node>pcc://43.43.43.43</node>
- <name>update-tunnel</name>
- <arguments>
-  <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
-   <delegate>true</delegate>
-   <administrative>true</administrative>
-  </lsp>
-  <endpoints-obj>
-   <ipv4>
-    <source-ipv4-address>43.43.43.43</source-ipv4-address>
-    <destination-ipv4-address>39.39.39.39</destination-ipv4-address>
-   </ipv4>
-  </endpoints-obj>
-  <ero>
-   <subobject>
-    <loose>false</loose>
-    <ip-prefix><ip-prefix>201.20.160.40/32</ip-prefix></ip-prefix>
-   </subobject>
-   <subobject>
-    <loose>false</loose>
-    <ip-prefix><ip-prefix>195.20.160.39/32</ip-prefix></ip-prefix>
-   </subobject>
-   <subobject>
-    <loose>false</loose>
-    <ip-prefix><ip-prefix>39.39.39.39/32</ip-prefix></ip-prefix>
-   </subobject>
-  </ero>
- </arguments>
- <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
-</input>
-----
-
-*PCE Segment Routing:*
-[source,xml]
-----
-<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
- <node>pcc://43.43.43.43</node>
- <name>update-tunnel</name>
- <arguments>
-  <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
-   <delegate>true</delegate>
-   <administrative>true</administrative>
-  </lsp>
-  <endpoints-obj>
-   <ipv4>
-    <source-ipv4-address>43.43.43.43</source-ipv4-address>
-    <destination-ipv4-address>39.39.39.39</destination-ipv4-address>
-   </ipv4>
-  </endpoints-obj>
-  <path-setup-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
-   <pst>1</pst>
-  </path-setup-type>
-  <ero>
-   <subobject>
-    <loose>false</loose>
-    <sid-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">ipv4-node-id</sid-type>
-    <m-flag xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">true</m-flag>
-    <sid xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">12</sid>
-    <ip-address xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">39.39.39.39</ip-address>
-   </subobject>
-  </ero>
- </arguments>
- <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
-</input>
-----
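Before POSTing such a body, it can be useful to sanity-check it locally. The sketch below (abbreviated from the example above, with the ODL namespace it uses) pulls the target PCC and LSP name out of the request body; the helper name is illustrative.

```python
import xml.etree.ElementTree as ET

TOPO_NS = "urn:opendaylight:params:xml:ns:yang:topology:pcep"

BODY = """<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
 <node>pcc://43.43.43.43</node>
 <name>update-tunnel</name>
</input>"""

def request_summary(xml_body: str):
    # Extract the target PCC node and LSP name from an add-lsp request body.
    root = ET.fromstring(xml_body)
    node = root.findtext("{%s}node" % TOPO_NS)
    name = root.findtext("{%s}name" % TOPO_NS)
    return node, name

print(request_summary(BODY))
```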
-
-==== Updating LSP ====
-Making an update-lsp operation will trigger a PCUpd message to the PCC. An update can be used to change or add information to the LSP.
-
-You can only successfully update an LSP if you own its delegation. You automatically own the delegation if you created the LSP.
-You don't own it if another PCE created the LSP; in that case the PCC reports the LSP to you as read-only (you'll see +<delegate>false</delegate>+).
-OpenDaylight won't restrict you from trying to modify such an LSP, but the attempt will be rejected with a PCErr message from the PCC.
-
-To keep the delegation, don't forget to set +<delegate>+ to true.
-
-*URL:* http://localhost:8181/restconf/operations/network-topology-pcep:update-lsp
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-*Body:*
-
-*PCE Active Stateful:*
-[source,xml]
-----
-<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
- <node>pcc://43.43.43.43</node>
- <name>update-tunnel</name>
- <arguments>
-  <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
-   <delegate>true</delegate>
-   <administrative>true</administrative>
-  </lsp>
-  <ero>
-   <subobject>
-    <loose>false</loose>
-    <ip-prefix><ip-prefix>200.20.160.41/32</ip-prefix></ip-prefix>
-   </subobject>
-   <subobject>
-    <loose>false</loose>
-    <ip-prefix><ip-prefix>196.20.160.39/32</ip-prefix></ip-prefix>
-   </subobject>
-   <subobject>
-    <loose>false</loose>
-    <ip-prefix><ip-prefix>39.39.39.39/32</ip-prefix></ip-prefix>
-   </subobject>
-  </ero>
- </arguments>
- <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
-</input>
-----
-
-*PCE Segment Routing:*
-[source,xml]
-----
-<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
- <node>pcc://43.43.43.43</node>
- <name>update-tunnel</name>
- <arguments>
-  <lsp xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
-   <delegate>true</delegate>
-   <administrative>true</administrative>
-  </lsp>
-  <path-setup-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:ietf:stateful">
-   <pst>1</pst>
-  </path-setup-type>
-  <ero>
-   <subobject>
-    <loose>false</loose>
-    <sid-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">ipv4-node-id</sid-type>
-    <m-flag xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">true</m-flag>
-    <sid xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">11</sid>
-    <ip-address xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">200.20.160.41</ip-address>
-   </subobject>
-   <subobject>
-    <loose>false</loose>
-    <sid-type xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">ipv4-node-id</sid-type>
-    <m-flag xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">true</m-flag>
-    <sid xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">12</sid>
-    <ip-address xmlns="urn:opendaylight:params:xml:ns:yang:pcep:segment:routing">39.39.39.39</ip-address>
-   </subobject>
-  </ero>
- </arguments>
- <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
-</input>
-----
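The delegation rule described above can be sketched as a tiny simulation; the message names match PCEP (PCErr on rejection, PCRpt on a successful state report), but the function itself is illustrative, not ODL code.

```python
def try_update(lsp: dict, new_args: dict) -> str:
    # Sketch of the delegation rule: updates succeed only when this PCE
    # holds the delegation; otherwise the PCC answers with a PCErr.
    if not lsp.get("delegate", False):
        return "PCErr"   # PCC rejects: we do not own the delegation
    lsp.update(new_args)
    return "PCRpt"       # PCC reports the updated LSP state

owned = {"delegate": True, "administrative": True}
foreign = {"delegate": False}
print(try_update(owned, {"ero": ["200.20.160.41/32"]}),
      try_update(foreign, {"ero": []}))
```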
-
-==== Removing LSP ====
-Removing an LSP from the PCC is done via the following RESTCONF URL. Making a remove-lsp operation will trigger a PCInitiate message to the PCC, with the remove flag in the SRP object set to true.
-
-You can only successfully remove an LSP if you own its delegation. You automatically own the delegation if you created the LSP.
-You don't own it if another PCE created the LSP; in that case the PCC reports the LSP to you as read-only (you'll see +<delegate>false</delegate>+).
-OpenDaylight won't restrict you from trying to remove such an LSP, but the attempt will be rejected with a PCErr message from the PCC.
-
-To keep the delegation, don't forget to set +<delegate>+ to true.
-
-*URL:* http://localhost:8181/restconf/operations/network-topology-pcep:remove-lsp
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-*Body:*
-[source,xml]
-----
-<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
- <node>pcc://43.43.43.43</node>
- <name>update-tunnel</name>
- <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
-</input>
-----
-
-==== PCE-triggered Initial Synchronization ====
-Making a trigger-sync operation will trigger a PCUpd message to the PCC with PLSP-ID = 0 and SYNC = 1 in order to trigger the LSP-DB synchronization process.
-
-*URL:* http://localhost:8181/restconf/operations/network-topology-pcep:trigger-sync
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-*Body:*
-[source,xml]
-----
-<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
- <node>pcc://43.43.43.43</node>
- <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
-</input>
-----
-
-==== PCE-triggered Re-synchronization ====
-Making a trigger-resync operation will trigger a PCUpd message to the PCC. The PCE can choose to re-synchronize its entire LSP database or a single LSP.
-
-*URL:* http://localhost:8181/restconf/operations/network-topology-pcep:trigger-sync
-
-*Method:* POST
-
-*Content-Type:* application/xml
-
-*Body:*
-[source,xml]
-----
-<input xmlns="urn:opendaylight:params:xml:ns:yang:topology:pcep">
- <node>pcc://43.43.43.43</node>
- <name>re-sync-lsp</name>
- <network-topology-ref xmlns:topo="urn:TBD:params:xml:ns:yang:network-topology">/topo:network-topology/topo:topology[topo:topology-id="pcep-topology"]</network-topology-ref>
-</input>
-----
-
-==== PCE-triggered LSP database Re-synchronization ====
-
-PCE-triggered LSP database Re-synchronization works the same way as PCE-triggered Initial Synchronization.
\ No newline at end of file
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/pcep-user-guide.html
index 446cc31457d5337a359886d9591eeb9a347e37ef..f4a16e4026dbfc72f608a7753649e2542966d6f0 100644 (file)
@@ -41,8 +41,6 @@ include::lacp/lacp-user.adoc[LACP]
 
 include::lfm/lispflowmapping-msmr-user.adoc[LISP flow mapping]
 
-include::nemo/odl-nemo-engine-user.adoc[NEMO]
-
 include::controller/netconf/odl-netconf-user.adoc[]
 
 include::netide/odl-netide-user-guide.adoc[NetIDE]
@@ -69,6 +67,8 @@ include::packetcable/packetcable-user.adoc[PacketCable PCMM - CMTS Management]
 
 include::sfc/sfc.adoc[Service Function Chain]
 
+include::snbi/odl-snbi-user.adoc[]
+
 include::snmp/snmp-user-guide.adoc[SNMP]
 
 include::snmp4sdn/snmp4sdn-user-guide.adoc[SNMP4SDN]
@@ -88,3 +88,5 @@ include::vtn/vtn-user.adoc[]
 include::yangide/yangide-user.adoc[]
 
 include::yang-push/odl-yang-push-user.adoc[YANG-PUSH]
+
+include::genius/genius-user-guide.adoc[Genius]
index 2cefb2c54c533b1c75575bc7fea73b3f9c317a4a..f8ea92658704db54fe597c9e4b023353a8a7bb76 100644 (file)
@@ -1,61 +1,3 @@
 == CAPWAP User Guide
-This document describes how to use the Control And Provisioning of Wireless 
-Access Points (CAPWAP) feature in OpenDaylight.  This document contains 
-configuration, administration, and management sections for the feature.
 
-=== Overview
-The CAPWAP feature fills the gap the OpenDaylight controller has with respect to managing
-CAPWAP-compliant wireless termination point (WTP) network devices present
-in enterprise networks. Intelligent applications (e.g. centralized firmware
-management, radio planning) can be developed by tapping into the
-WTP network devices' operational states via REST APIs.
-
-=== CAPWAP Architecture
-The CAPWAP feature is implemented as an MD-SAL based provider module, which 
-helps discover WTP devices and update their states in MD-SAL operational datastore.
-
-=== Scope of CAPWAP Project
-In the Lithium release, the CAPWAP project aims only to detect WTPs and store their
-basic attributes in the operational data store, which is accessible via REST
-and Java APIs.
-
-=== Installing CAPWAP
-To install CAPWAP, download OpenDaylight and use the Karaf console to install 
-the following feature:
-
-odl-capwap-ac-rest
-
-=== Configuring CAPWAP
-
-As of Lithium, there are no configuration requirements.
-
-=== Administering or Managing CAPWAP
-
-After installing the odl-capwap-ac-rest feature from the Karaf console, users 
-can administer and manage CAPWAP from the APIDOCS explorer.
-
-Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html, sign in, and expand 
-the capwap-impl panel.  From there, users can execute various API calls.
-
-=== Tutorials
-
-==== Viewing Discovered WTPs
-
-===== Overview
-This tutorial can be used as a walk-through to understand the steps for
-starting the CAPWAP feature, detecting CAPWAP WTPs, and accessing the
-operational states of WTPs.
-
-===== Prerequisites
-It is assumed that the user has access to at least one hardware- or software-based
-CAPWAP-compliant WTP. These devices should be configured with the OpenDaylight controller
-IP address as the CAPWAP Access Controller (AC) address. It is also assumed that the
-WTPs and the OpenDaylight controller share the same Ethernet broadcast domain.
-
-===== Instructions
-. Run the OpenDaylight distribution and install odl-capwap-ac-rest from the Karaf console.
-. Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html 
-. Expand capwap-impl
-. Click /operational/capwap-impl:capwap-ac-root/
-. Click "Try it out"
-. The above step should display the list of WTPs discovered using the ODL CAPWAP feature.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/capwap-user-guide.html
index 3c82984c6c44dfdcc2451adc5fc77b14f7b0095f..d1e4b9649d67878cc08f949470d66cafbc8b6c2d 100644 (file)
@@ -1,86 +1,3 @@
 == Cardinal: OpenDaylight Monitoring as a Service
-This section describes how to use the Cardinal feature in OpenDaylight
-and contains configuration, administration, and management
-sections for the feature.
 
-=== Overview
-Cardinal (OpenDaylight Monitoring as a Service) enables OpenDaylight and the underlying software defined network to be remotely monitored by deployed Network Management Systems (NMS) or Analytics suite. In the Boron release, Cardinal will add:
-
-. OpenDaylight MIB.
-. Enable ODL diagnostics/monitoring to be exposed across SNMP (v2c, v3) and REST north-bound.
-. Extend ODL System health, Karaf parameter and feature info, ODL plugin scalability and network parameters.
-. Support autonomous notifications (SNMP Traps).
-
-
-=== Cardinal Architecture
-
-The Cardinal architecture can be found at the below link:
-
-https://wiki.opendaylight.org/images/8/89/Cardinal-ODL_Monitoring_as_a_Service_V2.pdf
-
-=== Configuring Cardinal feature
-To start the Cardinal feature, start Karaf and type the following command:
-
-       feature:install odl-cardinal
-
-After this, Cardinal should be up and working, with the SNMP daemon running on port 161.
-
-=== Tutorials
-Below are tutorials for Cardinal.
-
-==== Using Cardinal
-These tutorials are intended for any user who wants to monitor three basic components in OpenDaylight:
-
-. System info for the machine on which the controller is running.
-. Karaf info.
-. Project-specific information.
-
-
-===== Prerequisites
-There are no specific prerequisites; Cardinal works without installing any third-party software. However, if one
-wants to see the output of snmpget/snmpwalk at the CLI prompt, one can install the SNMP tools using the guide at the below link:
-
-https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-an-snmp-daemon-and-client-on-ubuntu-14-04
-
-Using the above command-line utilities, one gets the same results that the Cardinal APIs return for snmpget/snmpwalk
-requests.
-
-===== Target Environment
-This tutorial was developed in the following environment:
-
-controller - Linux (Ubuntu 14.04).
-
-
-===== Instructions
-
-====== Install Cardinal feature
-Open Karaf and install the Cardinal feature using the following command:
-       
-----
-feature:install odl-cardinal
-----
-
-Please verify that the SNMP daemon is up on port 161 using the following command in a terminal window on the Linux machine:
-
-----
-netstat -anp | grep "161"
-----
-
-If the grep on the ``snmpd`` port is successful, then the SNMP daemon is up and working.
-
-======  APIs Reference
-Please see Developer guide for usage of Cardinal APIs.
-
-======  CLI commands to do snmpget/walk
-
-One can do an snmpget/snmpwalk on the ODL-CARDINAL-MIB. Open a Linux terminal and type the below command:
-
-       snmpget -v2c -c public localhost Oid_Of_the_mib_variable
-
-Or
-
-       snmpget -v2c -c public localhost ODL-CARDINAL-MIB::mib_variable_name
-
-For snmpwalk use the below command:
-
-       snmpwalk -v2c -c public localhost SNMPv2-SMI::experimental
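Output from snmpget/snmpwalk can be post-processed easily. The sketch below parses the usual net-snmp `OID = TYPE: value` line layout; the sample MIB variable name and value are hypothetical, not taken from a real Cardinal response.

```python
def parse_snmp_line(line: str):
    # Split one line of snmpget/snmpwalk output into (oid, type, value),
    # assuming the standard net-snmp "OID = TYPE: value" layout.
    oid, _, rest = line.partition(" = ")
    vtype, _, value = rest.partition(": ")
    return oid.strip(), vtype.strip(), value.strip()

# Hypothetical sample line for illustration only.
sample = 'ODL-CARDINAL-MIB::odlSystemName.0 = STRING: "OpenDaylight"'
print(parse_snmp_line(sample))
```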
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/cardinal_-opendaylight-monitoring-as-a-service.html
index f0854dc7440b38c50f16c82de216fcbaff40ea1d..36002add02e82c702e5cd07363823b0de46329bb 100644 (file)
@@ -1,81 +1,3 @@
 == Centinel User Guide
-The Centinel project aims at providing a distributed, reliable framework for
-efficiently collecting, aggregating and sinking streaming data across a persistence
-DB and stream analyzers (for example: Graylog, Elasticsearch, Spark, Hive).
-This document contains configuration, administration, management and usage
-sections for the feature.
 
-=== Overview
-In the Beryllium release of Centinel, this framework enables SDN applications/services to receive events from multiple streaming sources (e.g., Syslog, Thrift, Avro, AMQP, Log4j, HTTP/REST) and execute actions like network configuration, batch processing and real-time analytics. It also provides a Log Service, installed via the feature odl-centinel-all, to assist operators running an SDN ecosystem.
-
-Alongside this configuration work, development covers the Log Service and a plug-in for a log analyzer (e.g., Graylog). The Log Service processes real-time events coming from the log analyzer. Additionally, a stream collector (Flume- and Sqoop-based) collects logs from OpenDaylight and sinks them into the persistence service (integrated with TSDR). Centinel also includes a RESTCONF interface to inject events into northbound applications for real-time analytics/network configuration. A Centinel user interface (web interface) is available to operators to enable rules/alerts/dashboards.
-
-=== Centinel core features
-The core features of the Centinel framework are:
-
-Stream collector:: Collects, aggregates and sinks streaming data
-Log Service:: Listens for log stream events coming from the log analyzer
-Log Service:: Enables users to configure rules (e.g., alerts, diagnostics, health, dashboard)
-Log Service:: Performs event processing/analytics
-User Interface:: Enables set-rule, search, visualize, alert, diagnostic and dashboard functions
-Adaptor:: Log analyzer plug-in to Graylog and a generic data model to extend to other stream analyzers (e.g., Logstash)
-REST Service:: Northbound APIs for the Log Service and stream collector framework
-Leverages:: TSDR persistence service, data query, purging and Elasticsearch
-
-=== Centinel Architecture
-The following wiki pages capture the Centinel Model/Architecture
-
-a. https://wiki.opendaylight.org/view/Centinel:Main
-b. https://wiki.opendaylight.org/view/Project_Proposals:Centinel
-c. https://wiki.opendaylight.org/images/0/09/Centinel-08132015.pdf
-
-
-
-=== Administering or Managing Centinel with default configuration
-
-==== Prerequisites
-
-. Check whether Graylog is up and running, with the plugins deployed as mentioned in the http://opendaylight.readthedocs.io/en/stable-beryllium/getting-started-guide/index.html[installation guide].
-
-. Check whether HBase is up and the respective tables and column families mentioned in the http://opendaylight.readthedocs.io/en/stable-beryllium/getting-started-guide/index.html[installation guide] have been created.
-
-. Check if Apache Flume is up and running.
-
-. Check if Apache Drill is up and running.
-
-==== Running Centinel
-
-The following steps should be followed to bring up the controller:
-
-. Download the Centinel OpenDaylight distribution release from the following link: http://www.opendaylight.org/software/downloads
-
-. Run Karaf from the distribution's bin folder
-+
-  ./karaf
-+
-. Install the Centinel features using the below command:
-+
-  feature:install odl-centinel-all
-+
-. Allow some time for Centinel to come up.
-
-==== User Actions
-
-. *Log In:* The user logs into Centinel with the required credentials using the following URL: http://localhost:8181/index.html
-
-. *Create Rule:*
-
-.. Select the Centinel sub-tree on the left side and go to the Rule tab.
-
-.. Create a rule with a title and description.
-
-.. Configure a flow rule on the stream to filter the logs accordingly, e.g., `bundle_name=org.opendaylight.openflow-plugin`
-
-. *Set Alarm Condition:* Configure an alarm condition, e.g., a message-count rule such that if 10 messages arrive on a stream (e.g., the OpenFlow plugin stream) in the last 1 minute, an alert is generated.
-
-. *Subscription:* Users can subscribe to the rule and alarm condition by entering the HTTP details or email-id in the subscription text field and clicking on
-the subscribe button.
-
-. *Create Dashboard:* Configure dashboard for stream and alert widgets. Alarm and Stream count will be updated in corresponding widget in Dashboard.
-
-. *Event Tab:* Intercepted logs, alarms and raw logs are displayed in the Event tab by selecting the appropriate radio button. Users can also filter the searched data using an SQL query in the search box.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/centinel-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-northbound-user.adoc b/manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-northbound-user.adoc
deleted file mode 100644 (file)
index f17e826..0000000
+++ /dev/null
@@ -1,91 +0,0 @@
-=== Northbound (NETCONF servers)
-OpenDaylight provides 2 types of NETCONF servers:
-
-* *NETCONF server for config-subsystem (listening by default on port
-  1830)*
-  ** Serves as a default interface for config-subsystem and allows
-  users to spawn/reconfigure/destroy modules (or applications) in OpenDaylight
-* *NETCONF server for MD-SAL (listening by default on port 2830)*
-** Serves as an alternative interface for MD-SAL (besides RESTCONF)
-  and allows users to read/write data from MD-SAL's datastore and to
-  invoke its RPCs (NETCONF notifications are not available in the
-  Beryllium release of OpenDaylight)
-
-NOTE: The reason for having 2 NETCONF servers is that config-subsystem and
-MD-SAL are 2 different components of OpenDaylight and require different
-approaches to NETCONF message handling and data translation. These 2
-components will probably merge in the future.
-
-==== NETCONF server for config-subsystem
-This NETCONF server is the primary interface for config-subsystem. It
-allows the users to interact with config-subsystem in a standardized
-NETCONF manner.
-
-In terms of RFCs, these are supported:
-
-* http://tools.ietf.org/html/rfc6241[RFC-6241]
-* https://tools.ietf.org/html/rfc5277[RFC-5277]
-* https://tools.ietf.org/html/rfc6470[RFC-6470]
-** (partially, only the
-  schema-change notification is available in Beryllium release)
-* https://tools.ietf.org/html/rfc6022[RFC-6022]
-
-For regular users it is recommended to use RESTCONF + the
-controller-config loopback mountpoint instead of using pure NETCONF.
-How to do that is specific to each component/module/application
-in OpenDaylight and can be found in their dedicated user guides.
-
-==== NETCONF server for MD-SAL
-This NETCONF server is just a generic interface to MD-SAL in OpenDaylight.
-It uses the standard MD-SAL APIs and serves as an alternative to
-RESTCONF. It is fully model-driven and supports any data and RPCs
-that are supported by MD-SAL.
-
-In terms of RFCs, these are supported:
-
-* http://tools.ietf.org/html/rfc6241[RFC-6241]
-* https://tools.ietf.org/html/rfc6022[RFC-6022]
-
-Notifications over NETCONF are not supported in the Beryllium release.
-
-TIP: Install NETCONF northbound for MD-SAL by installing feature:
-+odl-netconf-mdsal+ in karaf. Default binding port is *2830*.
-
-===== Configuration
-The default configuration can be found in file:
-_08-netconf-mdsal.xml_. The file contains the configuration for all
-necessary dependencies and a single SSH endpoint starting on port 2830.
-There is also a (by default disabled) TCP endpoint. It is possible
-to start multiple endpoints at the same time either in the initial
-configuration file or while OpenDaylight is running.
-
-The credentials for the SSH endpoint can also be configured here; the
-defaults are admin/admin. Credentials in the SSH endpoint are not yet
-managed by the centralized AAA component and have to be configured
-separately.
-
-===== Verifying MD-SAL's NETCONF server
-After the NETCONF server is available, it can be examined with a
-command-line SSH tool:
-
-----
-ssh admin@localhost -p 2830 -s netconf
-----
-
-The server will respond by sending its HELLO message and can be used
-as a regular NETCONF server from then on.
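After the server's HELLO, the client answers with its own `<hello>` listing its capabilities (RFC 6241). A sketch (not ODL code) of building that reply, including the base-1.0 end-of-message framing marker:

```python
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def client_hello(capabilities):
    # Build the client <hello> that answers the server's HELLO (RFC 6241);
    # the NETCONF 1.0 end-of-message marker ]]>]]> terminates the message.
    hello = ET.Element("hello", xmlns=NC_NS)
    caps = ET.SubElement(hello, "capabilities")
    for uri in capabilities:
        ET.SubElement(caps, "capability").text = uri
    return ET.tostring(hello, encoding="unicode") + "]]>]]>"

msg = client_hello(["urn:ietf:params:netconf:base:1.0"])
print(msg)
```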
-
-===== Mounting the MD-SAL's NETCONF server
-To perform this operation, just spawn a new netconf-connector as described in
-<<_spawning_additional_netconf_connectors_while_the_controller_is_running,
-Spawning netconf-connector>>.
-Just change the IP to "127.0.0.1", the port to "2830" and the name to "controller-mdsal".
-
-Now the MD-SAL's datastore can be read over RESTCONF via NETCONF by invoking:
-
-GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/controller-mdsal/yang-ext:mount
-
-NOTE: This might not seem very useful, since MD-SAL can be accessed
-directly from RESTCONF or from application code, but the same method can be used to
-mount and control other OpenDaylight instances from the "master OpenDaylight".
diff --git a/manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-southbound-user.adoc b/manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-southbound-user.adoc
deleted file mode 100644 (file)
index 0399c95..0000000
+++ /dev/null
@@ -1,459 +0,0 @@
-=== Southbound (netconf-connector)
-The NETCONF southbound plugin is capable of connecting to remote NETCONF
-devices and exposing their configuration/operational datastores, RPCs and
-notifications as MD-SAL mount points. These mount points allow
-applications and remote users (over RESTCONF) to interact with the
-mounted devices.
-
-In terms of RFCs, the connector supports:
-
-* http://tools.ietf.org/html/rfc6241[RFC-6241]
-* https://tools.ietf.org/html/rfc5277[RFC-5277]
-* https://tools.ietf.org/html/rfc6022[RFC-6022]
-
-*Netconf-connector is fully model-driven (utilizing the YANG modeling language), so in addition to
-the above RFCs, it supports any data/RPC/notifications described by a
-YANG model that is implemented by the device.*
-
-TIP: NETCONF southbound can be activated by installing
-+odl-netconf-connector-all+ Karaf feature.
-
-==== Netconf-connector configuration
-There are 2 ways to configure netconf-connector:
-NETCONF or RESTCONF. This guide focuses on using RESTCONF.
-
-===== Default configuration
-The default configuration contains all the necessary dependencies
-(file: 01-netconf.xml) and a single instance of netconf-connector
-(file: 99-netconf-connector.xml) called *controller-config* which
-connects itself to the NETCONF northbound in OpenDaylight in a loopback
-fashion. The connector mounts the NETCONF server for config-subsystem
-in order to enable RESTCONF protocol for config-subsystem. This
-RESTCONF still goes via NETCONF, but using RESTCONF is much more user
-friendly than using NETCONF.
-
-===== Spawning additional netconf-connectors while the controller is running
-Preconditions:
-
-. OpenDaylight is running
-. In Karaf, you must have the netconf-connector installed (at the
-  Karaf prompt, type: `feature:install odl-netconf-connector-all`); the
-  loopback NETCONF mountpoint will be automatically configured and
-  activated
-. Wait until the log displays the following entry:
-  RemoteDevice{controller-config}: NETCONF connector initialized
-  successfully
-
-To configure a new netconf-connector you need to send the following
-request to RESTCONF:
-
-POST http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules
-
-Headers:
-
-* Accept application/xml
-* Content-Type application/xml
-
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
-  <name>new-netconf-device</name>
-  <address xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">127.0.0.1</address>
-  <port xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">830</port>
-  <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</username>
-  <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">admin</password>
-  <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
-  <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
-    <name>global-event-executor</name>
-  </event-executor>
-  <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
-    <name>binding-osgi-broker</name>
-  </binding-registry>
-  <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
-    <name>dom-broker</name>
-  </dom-registry>
-  <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
-    <name>global-netconf-dispatcher</name>
-  </client-dispatcher>
-  <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
-    <name>global-netconf-processing-executor</name>
-  </processing-executor>
-  <keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
-    <name>global-netconf-ssh-scheduled-executor</name>
-  </keepalive-executor>
-</module>
-----
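-
-The same request can be issued from the command line with curl. This is
-just a sketch: it assumes the default +admin:admin+ credentials and that
-the XML payload above has been saved in a file called
-new-netconf-device.xml:
-
-----
-curl -u admin:admin -X POST \
-  -H "Content-Type: application/xml" -H "Accept: application/xml" \
-  -d @new-netconf-device.xml \
-  "http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules"
-----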
-
-This spawns a new netconf-connector which tries to
-connect to (or mount) a NETCONF device at 127.0.0.1 and port 830. You
-can verify this in config-subsystem's configuration datastore, where
-the new netconf-connector will now be present. Just invoke:
-
-GET http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules
-
-The response will contain the module for new-netconf-device.
-
-Right after the new netconf-connector is created, it writes some
-useful metadata into the datastore of MD-SAL under the network-topology
-subtree. This metadata can be found at:
-
-GET http://localhost:8181/restconf/operational/network-topology:network-topology/
-
-Information about connection status, device capabilities, etc. can be
-found there.
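-
-For example, to check just the connector spawned above, the request can
-be narrowed to a single node (a sketch, assuming the default
-+admin:admin+ credentials):
-
-----
-curl -u admin:admin "http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device"
-----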
-
-===== Connecting to a device not supporting NETCONF monitoring
-The netconf-connector in OpenDaylight relies on ietf-netconf-monitoring support
-when connecting to a remote NETCONF device. The ietf-netconf-monitoring
-support allows netconf-connector to list and download all YANG schemas
-that are used by the device. NETCONF connector can only communicate
-with a device if it knows the set of used schemas (or at least a
-subset). However, some devices use YANG models internally but do not
-support NETCONF monitoring. Netconf-connector can also communicate
-with these devices, but you have to side-load the necessary YANG
-models into OpenDaylight's YANG model cache for netconf-connector. In general
-there are two situations you might encounter:
-
-*1. NETCONF device does not support ietf-netconf-monitoring but it
-   does list all its YANG models as capabilities in HELLO message*
-
-This could be a device that internally uses only the ietf-inet-types
-YANG model with revision 2010-09-24. In the HELLO message sent from
-this device, the following capability is reported:
-
-----
-urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&revision=2010-09-24
-----
-
-*For such devices you only need to put the schema into the folder
-cache/schema inside your Karaf distribution.*
-
-IMPORTANT: The file with YANG schema for ietf-inet-types has to be
-called ietf-inet-types@2010-09-24.yang. It is the required naming format
-of the cache.
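-
-As a sketch, placing such a schema into the cache from the root of the
-Karaf distribution might look like this (the source path of the schema
-file is an assumption):
-
-----
-mkdir -p cache/schema
-cp ~/schemas/ietf-inet-types@2010-09-24.yang cache/schema/
-----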
-
-*2. NETCONF device does not support ietf-netconf-monitoring and it
-   does NOT list its YANG models as capabilities in HELLO message*
-
-Compared to a device that lists its YANG models in the HELLO message, in
-this case there would be no capability with ietf-inet-types in the
-HELLO message. This type of device basically provides no information
-about the YANG schemas it uses, so it's up to the user of OpenDaylight to
-properly configure netconf-connector for this device.
-
-Netconf-connector has an optional configuration attribute called
-yang-module-capabilities and this attribute can contain a list of
-"YANG module based" capabilities. So by setting this configuration
-attribute, it is possible to override the "yang-module-based"
-capabilities reported in HELLO message of the device. To do this, we
-need to modify the configuration of netconf-connector by adding this
-XML (it needs to be added next to the address, port, username and other
-configuration elements):
-
-----
-<yang-module-capabilities xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-  <capability xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2010-09-24
-  </capability>
-</yang-module-capabilities>
-----
-
-*Remember to also put the YANG schemas into the cache folder.*
-
-NOTE: To add multiple capabilities, just replicate the capability
-XML element inside the yang-module-capabilities element. The
-capability element is modeled as a leaf-list.
-With this configuration, the remote device appears to report usage
-of ietf-inet-types in the eyes of netconf-connector.
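-
-For illustration, a configuration with two capability entries might look
-as follows (the second module here is purely hypothetical):
-
-----
-<yang-module-capabilities xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-  <capability>
-    urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2010-09-24
-  </capability>
-  <capability>
-    urn:example:params:xml:ns:yang:example-module?module=example-module&amp;revision=2016-01-01
-  </capability>
-</yang-module-capabilities>
-----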
-
-===== Reconfiguring Netconf-Connector While the Controller is Running
-It is possible to change the configuration of a running module while
-the whole controller is running. This example continues where the previous one left off and
-will change the configuration for the brand new netconf-connector
-after it was spawned. Using one RESTCONF request, we will change both
-username and password for the netconf-connector.
-
-To update an existing netconf-connector you need to send the following
-request to RESTCONF:
-
-PUT
-http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device
-
-----
-<module xmlns="urn:opendaylight:params:xml:ns:yang:controller:config">
-  <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">prefix:sal-netconf-connector</type>
-  <name>new-netconf-device</name>
-  <username xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">bob</username>
-  <password xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">passwd</password>
-  <tcp-only xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">false</tcp-only>
-  <event-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:netty">prefix:netty-event-executor</type>
-    <name>global-event-executor</name>
-  </event-executor>
-  <binding-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:binding">prefix:binding-broker-osgi-registry</type>
-    <name>binding-osgi-broker</name>
-  </binding-registry>
-  <dom-registry xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom">prefix:dom-broker-osgi-registry</type>
-    <name>dom-broker</name>
-  </dom-registry>
-  <client-dispatcher xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:config:netconf">prefix:netconf-client-dispatcher</type>
-    <name>global-netconf-dispatcher</name>
-  </client-dispatcher>
-  <processing-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:threadpool</type>
-    <name>global-netconf-processing-executor</name>
-  </processing-executor>
-  <keepalive-executor xmlns="urn:opendaylight:params:xml:ns:yang:controller:md:sal:connector:netconf">
-    <type xmlns:prefix="urn:opendaylight:params:xml:ns:yang:controller:threadpool">prefix:scheduled-threadpool</type>
-    <name>global-netconf-ssh-scheduled-executor</name>
-  </keepalive-executor>
-</module>
-----
-
-Since a PUT is a replace operation, the whole configuration must be
-specified along with the new values for username and password. This
-should result in a 2xx response and the instance of netconf-connector
-called new-netconf-device will be reconfigured to use username bob and
-password passwd. New configuration can be verified by executing:
-
-GET http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device
-
-With the new configuration, the old connection will be closed and a new
-one established.
-
-===== Destroying Netconf-Connector While the Controller is Running
-Using RESTCONF one can also destroy an instance of a module. In case
-of netconf-connector, the module will be destroyed, the NETCONF connection
-dropped and all resources cleaned up. To do this, simply issue a
-request to the following URL:
-
-DELETE http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/controller-config/yang-ext:mount/config:modules/module/odl-sal-netconf-connector-cfg:sal-netconf-connector/new-netconf-device
-
-The last element of the URL is the name of the instance and its
-predecessor is the type of that module (in our case the type is
-*sal-netconf-connector* and the name *new-netconf-device*). The type and name
-are actually the keys of the module list.
-
-==== Netconf-connector configuration with MD-SAL
-It is also possible to configure new NETCONF connectors directly through MD-SAL
-with the usage of the network-topology model. You can configure new NETCONF
-connectors both through the NETCONF server for MD-SAL (port 2830) or RESTCONF.
-This guide focuses on RESTCONF.
-
-TIP: To enable NETCONF connector configuration through MD-SAL install either
-the +odl-netconf-topology+ or +odl-netconf-clustered-topology+ feature.
-We will explain the difference between these features later.
-
-===== Preconditions
-
-. OpenDaylight is running
-. In Karaf, you must have the +odl-netconf-topology+ or +odl-netconf-clustered-topology+
-feature installed.
-. Feature +odl-restconf+ must be installed
-. Wait until the log displays the following entry:
-+
-----
-Successfully pushed configuration snapshot 02-netconf-topology.xml(odl-netconf-topology,odl-netconf-topology)
-----
-+
-or until
-+
-----
-GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/
-----
-+
-returns a non-empty response, for example:
-+
-----
-<topology xmlns="urn:TBD:params:xml:ns:yang:network-topology">
-  <topology-id>topology-netconf</topology-id>
-</topology>
-----
-
-===== Spawning new NETCONF connectors
-To create a new NETCONF connector you need to send the following request to RESTCONF:
-
-  PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
-
-Headers:
-
-* Accept: application/xml
-* Content-Type: application/xml
-
-Payload:
-----
-<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
-  <node-id>new-netconf-device</node-id>
-  <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
-  <port xmlns="urn:opendaylight:netconf-node-topology">17830</port>
-  <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
-  <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
-  <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
-  <!-- non-mandatory fields with default values, you can safely remove these if you do not wish to override any of these values-->
-  <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
-  <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
-  <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
-  <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
-  <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
-  <!-- keepalive-delay set to 0 turns off keepalives-->
-  <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
-</node>
-----
-
-Note that the device name in <node-id> element must match the last element of the restconf URL.
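-
-With curl, the same PUT might look like this (a sketch; it assumes the
-default +admin:admin+ credentials and that the payload above is saved as
-new-netconf-device.xml):
-
-----
-curl -u admin:admin -X PUT \
-  -H "Content-Type: application/xml" -H "Accept: application/xml" \
-  -d @new-netconf-device.xml \
-  "http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device"
-----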
-
-===== Reconfiguring an existing connector
-The steps to reconfigure an existing connector are exactly the same as when spawning
-a new connector. The old connection will be disconnected and a new connector with
-the new configuration will be created.
-
-===== Deleting an existing connector
-To remove an already configured NETCONF connector you need to send the following:
-
-  DELETE http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
-
-==== Clustered NETCONF connector
-To spawn NETCONF connectors that are cluster-aware you need to install the
-+odl-netconf-clustered-topology+ karaf feature.
-
-WARNING: The +odl-netconf-topology+ and +odl-netconf-clustered-topology+ features
-are considered *INCOMPATIBLE*. They both manage the same space in the datastore and
-would issue conflicting writes if installed together.
-
-Configuration of clustered NETCONF connectors works the same as the configuration
-through the topology model in the previous section.
-
-When a new clustered connector is configured the configuration gets distributed among
-the member nodes and a NETCONF connector is spawned on each node. From these nodes
-a master is chosen which handles the schema download from the device and all the
-communication with the device. You will be able to read/write to/from the device
-from all slave nodes due to the proxy data brokers implemented.
-
-You can use the +odl-netconf-clustered-topology+ feature in a single-node
-scenario as well, but the Akka-based code path will still be used, so in
-a scenario where only a single node is used, +odl-netconf-topology+
-might be preferred.
-
-==== Netconf-connector utilization
-Once the connector is up and running, users can utilize the new mount
-point instance via RESTCONF or from their application code. This
-chapter deals with using RESTCONF; more information for app
-developers can be found in the developer guide or in the official
-tutorial application *ncmount* in the coretutorials project:
-
-* https://github.com/opendaylight/coretutorials/tree/stable/beryllium/ncmount
-
-===== Reading data from the device
-Just invoke (no body needed):
-
-GET http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/
-
-This will return the entire content of the operational datastore from the
-device. To view just the configuration datastore, change *operational* in
-this URL to *config*.
-
-===== Writing configuration data to the device
-In general, you cannot simply write any data you want to the device.
-The data have to conform to the YANG models implemented by the device.
-In this example we are adding a new interface-configuration to the
-mounted device (assuming the device supports Cisco-IOS-XR-ifmgr-cfg
-YANG model). This request is in fact taken from the dedicated
-*ncmount* tutorial app.
-
-POST
-http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/Cisco-IOS-XR-ifmgr-cfg:interface-configurations
-
-----
-<interface-configuration xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg">
-    <active>act</active>
-    <interface-name>mpls</interface-name>
-    <description>Interface description</description>
-    <bandwidth>32</bandwidth>
-    <link-status></link-status>
-</interface-configuration>
-----
-
-This should return a 200 response code with no body.
-
-TIP: This call is transformed into a couple of NETCONF RPCs. Resulting
-NETCONF RPCs that go directly to the device can be found in the OpenDaylight
-logs after invoking +log:set TRACE
-org.opendaylight.controller.sal.connect.netconf+ in the Karaf shell.
-Seeing the NETCONF RPCs might help with debugging.
-
-This request is very similar to the one where we spawned a new netconf
-device. That's because we used the loopback netconf-connector to write
-configuration data into config-subsystem datastore and config-subsystem
-picked it up from there.
-
-===== Invoking custom RPC
-Devices can implement any additional RPCs; as long as a device provides
-YANG models for them, they can be invoked from OpenDaylight. The following
-example shows how to invoke the get-schema RPC (get-schema is quite
-common among NETCONF devices). Invoke:
-
-POST
-http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ietf-netconf-monitoring:get-schema
-
-----
-<input xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring">
-  <identifier>ietf-yang-types</identifier>
-  <version>2013-07-15</version>
-</input>
-----
-
-This call should fetch the source for ietf-yang-types YANG model from
-the mounted device.
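-
-As a command-line sketch (the default +admin:admin+ credentials are
-assumed), the same RPC can be invoked with curl:
-
-----
-curl -u admin:admin -X POST -H "Content-Type: application/xml" \
-  -d '<input xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"><identifier>ietf-yang-types</identifier><version>2013-07-15</version></input>' \
-  "http://localhost:8181/restconf/operations/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ietf-netconf-monitoring:get-schema"
-----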
-
-==== Netconf-connector + Netopeer
-https://github.com/cesnet/netopeer[Netopeer] (an open-source NETCONF server) can be used for
-testing/exploring NETCONF southbound in OpenDaylight.
-
-===== Netopeer installation
-A https://www.docker.com/[Docker] container with netopeer will be used
-in this guide. To install Docker and start the
-https://index.docker.io/u/dockeruser/netopeer/[netopeer image] perform
-the following steps:
-
-. Install docker http://docs.docker.com/linux/step_one/
-. Start the netopeer image:
-+
-----
-docker run --rm -t -p 1831:830 dockeruser/netopeer
-----
-. Verify netopeer is running by invoking (netopeer should send its
-  HELLO message right away):
-+
-----
-ssh root@localhost -p 1831 -s netconf
-(password root)
-----
-
-===== Mounting netopeer NETCONF server
-Preconditions:
-
-* OpenDaylight is started with features +odl-restconf-all+ and
-  +odl-netconf-connector-all+.
-* Netopeer is up and running in docker
-
-Now just follow the chapter:
-<<_spawning_additional_netconf_connectors_while_the_controller_is_running, Spawning netconf-connector>>. In the payload change the:
-
-* name, e.g., to netopeer
-* username/password to your system credentials
-* ip to localhost
-* port to 1831.
-
-After netopeer is mounted successfully, its configuration can be read
-using RESTCONF by invoking:
-
-GET
-http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/netopeer/yang-ext:mount/
diff --git a/manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-testtool-user.adoc b/manuals/user-guide/src/main/asciidoc/controller/netconf/odl-netconf-testtool-user.adoc
deleted file mode 100644 (file)
index b83e2a2..0000000
+++ /dev/null
@@ -1,359 +0,0 @@
-=== NETCONF testtool
-*NETCONF testtool is a set of standalone runnable jars that can:*
-
-* Simulate NETCONF devices (suitable for scale testing)
-* Stress/Performance test NETCONF devices
-* Stress/Performance test RESTCONF devices
-
-These jars are part of OpenDaylight's controller project and are built from the
-NETCONF codebase in OpenDaylight.
-
-TIP: Download testtool from OpenDaylight Nexus at: https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.0.2-Beryllium-SR2/
-
-*Nexus contains 3 executable tools:*
-
-* executable.jar - device simulator
-* stress.client.tar.gz - NETCONF stress/performance measuring tool
-* perf-client.jar - RESTCONF stress/performance measuring tool
-
-TIP: Each executable tool provides help. Just invoke +java -jar
-<name-of-the-tool.jar> --help+
-
-==== NETCONF device simulator
-
-NETCONF testtool (or NETCONF device simulator) is a tool that
-
-* Simulates 1 or more NETCONF devices
-* Is suitable for scale, performance or crud testing
-* Uses core implementation of NETCONF server from OpenDaylight
-* Generates configuration files for controller so that the OpenDaylight distribution (Karaf) can easily connect to all simulated devices
-* Provides broad configuration options
-* Can start a fully fledged MD-SAL datastore
-* Supports notifications
-
-===== Building testtool
-
-. Check out latest NETCONF repository from https://git.opendaylight.org/gerrit/#/admin/projects/netconf[git]
-. Move into the `opendaylight/netconf/tools/netconf-testtool/` folder
-. Build testtool using the `mvn clean install` command
-
-===== Downloading testtool
-
-Netconf-testtool is now part of the default maven build profile for controller and
-can also be downloaded from nexus. The executable jar for testtool can be found at:
-https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/netconf/netconf-testtool/1.0.2-Beryllium-SR2/[nexus-artifacts]
-
-===== Running testtool
-
-. After successfully building or downloading, move into the `opendaylight/netconf/tools/netconf-testtool/target/` folder, where you will find the file `netconf-testtool-1.1.0-SNAPSHOT-executable.jar` (or if downloaded from nexus just take that jar file)
-. Execute this file using, e.g.:
-+
-  java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar
-+
-This execution runs the testtool with defaults for all parameters, and you should see this log output from the testtool:
-+
-  10:31:08.206 [main] INFO  o.o.c.n.t.t.NetconfDeviceSimulator - Starting 1, SSH simulated devices starting on port 17830
-  10:31:08.675 [main] INFO  o.o.c.n.t.t.NetconfDeviceSimulator - All simulated devices started successfully from port 17830 to 17830
-
-====== Default Parameters
-
-The default parameters for testtool are:
-
-* Use SSH
-* Run 1 simulated device
-* Device port is 17830
-* YANG modules used by device are only: ietf-netconf-monitoring, ietf-yang-types, ietf-inet-types (these modules are required for device in order to support NETCONF monitoring and are included in the netconf-testtool)
-* Connection timeout is set to 30 minutes (quite high, but when testing with 10000 devices it might take some time for all of them to fully establish a connection)
-* Debug level is set to false
-* No distribution is modified to connect automatically to the NETCONF testtool
-
-===== Verifying testtool
-
-To verify that the simulated device is up and running, we can try to connect to
-it using the command line ssh tool. Execute this command to connect to the device:
-
-  ssh admin@localhost -p 17830 -s netconf
-
-Just accept the server with yes (if required) and provide any password (testtool
-accepts all users with all passwords). You should see the hello message sent by the simulated device.
-
-===== Testtool help
-
-----
-usage: netconf testool [-h] [--device-count DEVICES-COUNT] [--devices-per-port DEVICES-PER-PORT] [--schemas-dir SCHEMAS-DIR] [--notification-file NOTIFICATION-FILE]
-                       [--initial-config-xml-file INITIAL-CONFIG-XML-FILE] [--starting-port STARTING-PORT] [--generate-config-connection-timeout GENERATE-CONFIG-CONNECTION-TIMEOUT]
-                       [--generate-config-address GENERATE-CONFIG-ADDRESS] [--generate-configs-batch-size GENERATE-CONFIGS-BATCH-SIZE] [--distribution-folder DISTRO-FOLDER] [--ssh SSH] [--exi EXI]
-                       [--debug DEBUG] [--md-sal MD-SAL]
-
-NETCONF device simulator. Detailed info can be found at https://wiki.opendaylight.org/view/OpenDaylight_Controller:Netconf:Testtool#Building_testtool
-
-optional arguments:
-  -h, --help             show this help message and exit
-  --device-count DEVICES-COUNT
-                         Number of simulated netconf devices to spin. This is the number of actual ports open for the devices.
-  --devices-per-port DEVICES-PER-PORT
-                         Amount of config files generated per port to spoof more devices then are actually running
-  --schemas-dir SCHEMAS-DIR
-                         Directory containing yang schemas to describe simulated devices. Some schemas e.g. netconf monitoring and inet types are included by default
-  --notification-file NOTIFICATION-FILE
-                         Xml file containing notifications that should be sent to clients after create subscription is called
-  --initial-config-xml-file INITIAL-CONFIG-XML-FILE
-                         Xml file containing initial simulatted configuration to be returned via get-config rpc
-  --starting-port STARTING-PORT
-                         First port for simulated device. Each other device will have previous+1 port number
-  --generate-config-connection-timeout GENERATE-CONFIG-CONNECTION-TIMEOUT
-                         Timeout to be generated in initial config files
-  --generate-config-address GENERATE-CONFIG-ADDRESS
-                         Address to be placed in generated configs
-  --generate-configs-batch-size GENERATE-CONFIGS-BATCH-SIZE
-                         Number of connector configs per generated file
-  --distribution-folder DISTRO-FOLDER
-                         Directory where the karaf distribution for controller is located
-  --ssh SSH              Whether to use ssh for transport or just pure tcp
-  --exi EXI              Whether to use exi to transport xml content
-  --debug DEBUG          Whether to use debug log level instead of INFO
-  --md-sal MD-SAL        Whether to use md-sal datastore instead of default simulated datastore.
-----
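-
-For example, to simulate 50 TCP-only devices described by your own
-schemas (the schema directory path here is a placeholder):
-
-----
-java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar --device-count 50 --schemas-dir ~/my-schemas/ --ssh false
-----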
-
-===== Supported operations
-
-Testtool default simple datastore supported operations:
-
-get-schema:: returns YANG schemas loaded from user specified directory,
-edit-config:: always returns OK and stores the XML from the input in a local variable available for get-config and get RPC. Every edit-config replaces the previous data,
-commit:: always returns OK, but does not actually commit the data,
-get-config:: returns local XML stored by edit-config,
-get:: returns local XML stored by edit-config with netconf-state subtree, but also supports filtering.
-(un)lock:: always returns OK, with no lock guarantee
-create-subscription:: always returns OK; after the operation is triggered, the provided NETCONF notifications (if any) are fed to the client. No filtering or stream recognition is supported.
-
-Note: when operation="delete" is present in the payload for edit-config, it will wipe its local store to simulate the removal of data.
-
-When using the MD-SAL datastore, testtool behaves more like a normal NETCONF server
-and is suitable for crud testing. create-subscription is not supported when
-testtool is running with the MD-SAL datastore.
-
-===== Notification support
-
-Testtool supports notifications via the --notification-file switch. To trigger the notification feed, the create-subscription operation has to be invoked.
-The XML file provided should look like this example file:
-
-----
-<?xml version='1.0' encoding='UTF-8' standalone='yes'?>
-<notifications>
-
-<!-- Notifications are processed in the order they are defined in XML -->
-
-<!-- Notification that is sent only once right after create-subscription is called -->
-<notification>
-    <!-- Content of each notification entry must contain the entire notification with event time. Event time can be hardcoded, or generated by testtool if XXXX is set as eventtime in this XML -->
-    <content><![CDATA[
-        <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
-            <eventTime>2011-01-04T12:30:46</eventTime>
-            <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
-                <random-content>single no delay</random-content>
-            </random-notification>
-        </notification>
-    ]]></content>
-</notification>
-
-<!-- Repeated Notification that is sent 5 times with 2 second delay inbetween -->
-<notification>
-    <!-- Delay in seconds from previous notification -->
-    <delay>2</delay>
-    <!-- Number of times this notification should be repeated -->
-    <times>5</times>
-    <content><![CDATA[
-        <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
-            <eventTime>XXXX</eventTime>
-            <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
-                <random-content>scheduled 5 times 10 seconds each</random-content>
-            </random-notification>
-        </notification>
-    ]]></content>
-</notification>
-
-<!-- Single notification that is sent only once right after the previous notification -->
-<notification>
-    <delay>2</delay>
-    <content><![CDATA[
-        <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
-            <eventTime>XXXX</eventTime>
-            <random-notification xmlns="http://www.opendaylight.org/netconf/event:1.0">
-                <random-content>single with delay</random-content>
-            </random-notification>
-        </notification>
-    ]]></content>
-</notification>
-
-</notifications>
-----
-
-===== Connecting testtool with controller Karaf distribution
-
-====== Auto connect to OpenDaylight
-
-It is possible to make OpenDaylight auto connect to the simulated
-devices spawned by testtool (so the user does not have to post a configuration for
-every NETCONF connector via RESTCONF). The testtool is able to modify the OpenDaylight
-distribution to auto connect to the simulated devices after feature
-+odl-netconf-connector-all+ is installed.
-When running testtool, issue this command (just point the testool to the distribution:
-
-  java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar --device-count 10 --distribution-folder ~/distribution-karaf-0.4.0-SNAPSHOT/ --debug true
-
-With the distribution-folder parameter, the testtool will modify the distribution
-to include configuration for the netconf-connector to connect to all simulated devices,
-so there is no need to spawn netconf-connectors via RESTCONF.
-
-====== Running testtool and OpenDaylight on different machines
-
-The testtool binds to 0.0.0.0 by default, so it should be accessible from remote
-machines. However, you need to set the parameter "generate-config-address"
-(when using autoconnect) to the address of the machine where the testtool will run,
-so that OpenDaylight can connect to it. The default value is localhost.
-
-===== Executing operations via RESTCONF on a mounted simulated device
-
-Simulated devices support basic RPCs for editing their config. This part shows how to edit data for a simulated device via RESTCONF.
-
-====== Test YANG schema
-
-The controller and RESTCONF assume that the data that can be manipulated for a
-mounted device is described by a YANG schema. For demonstration, we will define
-a simple YANG model:
-
-----
-module test {
-    yang-version 1;
-    namespace "urn:opendaylight:test";
-    prefix "tt";
-
-    revision "2014-10-17";
-
-
-   container cont {
-
-        leaf l {
-            type string;
-        }
-   }
-}
-----
-
-Save this schema in a file called test@2014-10-17.yang and store it in a directory called test-schemas/, e.g., in your home folder.
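
The `module@revision.yang` file name can be derived mechanically from the schema text. A hedged sketch (plain Python regexes, not part of any OpenDaylight tooling) that produces the expected file name:

```python
import re

# The demo schema from above, abbreviated.
SCHEMA = """
module test {
    yang-version 1;
    namespace "urn:opendaylight:test";
    prefix "tt";
    revision "2014-10-17";
}
"""

def schema_filename(schema_text):
    # YANG files are conventionally named <module>@<revision>.yang
    module = re.search(r"module\s+([\w-]+)", schema_text).group(1)
    revision = re.search(r'revision\s+"([\d-]+)"', schema_text).group(1)
    return "%s@%s.yang" % (module, revision)

print(schema_filename(SCHEMA))
```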
-
-====== Editing data for simulated device
-
-* Start the device with the following command:
-
-  java -jar netconf-testtool-1.1.0-SNAPSHOT-executable.jar --device-count 10 --distribution-folder ~/distribution-karaf-0.4.0-SNAPSHOT/ --debug true --schemas-dir ~/test-schemas/
-
-* Start OpenDaylight
-* Install odl-netconf-connector-all feature
-* Install odl-restconf feature
-* Check that you can see config data for simulated device by executing GET request to
-+
-  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/
-+
-* The data should be just an empty data container
-* Now execute edit-config request by executing a POST request to:
-+
-  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount
-+
-with headers:
-+
-  Accept application/xml
-  Content-Type application/xml
-+
-and payload:
-+
-----
-<cont xmlns="urn:opendaylight:test">
-  <l>Content</l>
-</cont>
-----
-
-* Check that you can see modified config data for simulated device by executing GET request to
-
-  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/
-
-* Check that you can see the same modified data in operational for simulated device by executing GET request to
-
-  http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/17830-sim-device/yang-ext:mount/
-
-WARNING: Data will be mirrored in operational datastore only when using the default
-simple datastore.
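
The edit-config POST above can also be sketched as client code. This builds (but does not send) the request with Python's stdlib `urllib`; the URL and the node id `17830-sim-device` are taken from the examples above, and actually sending it requires a running controller:

```python
import urllib.request

# RESTCONF mount point of the simulated device, as used in the GET/POST
# examples above.
BASE = ("http://localhost:8181/restconf/config/network-topology:network-topology"
        "/topology/topology-netconf/node/17830-sim-device/yang-ext:mount")

PAYLOAD = b'<cont xmlns="urn:opendaylight:test">\n  <l>Content</l>\n</cont>'

# Construct the edit-config POST; only the request shape is checked here,
# since sending it needs a live OpenDaylight instance.
req = urllib.request.Request(
    BASE,
    data=PAYLOAD,
    method="POST",
    headers={"Accept": "application/xml", "Content-Type": "application/xml"},
)

print(req.get_method(), req.full_url)
```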
-
-===== Known problems
-
-====== Slow creation of devices on virtual machines
-
-When the testtool seems to take an unusually long time to create the devices, use this flag when running it:
-
-  -Dorg.apache.sshd.registerBouncyCastle=false
-
-====== Too many files open
-
-When the testtool or OpenDaylight starts to fail with a TooManyFilesOpen exception, you need to increase the limit of open files in your OS. To find out the limit on Linux, execute:
-
-  ulimit -a
-
-An example of a sufficient configuration on Linux:
-
-----
-core file size          (blocks, -c) 0
-data seg size           (kbytes, -d) unlimited
-scheduling priority             (-e) 0
-file size               (blocks, -f) unlimited
-pending signals                 (-i) 63338
-max locked memory       (kbytes, -l) 64
-max memory size         (kbytes, -m) unlimited
-open files                      (-n) 500000
-pipe size            (512 bytes, -p) 8
-POSIX message queues     (bytes, -q) 819200
-real-time priority              (-r) 0
-stack size              (kbytes, -s) 8192
-cpu time               (seconds, -t) unlimited
-max user processes              (-u) 63338
-virtual memory          (kbytes, -v) unlimited
-file locks                      (-x) unlimited
-----
-
-To set these limits, edit the file /etc/security/limits.conf, for example:
-
-----
-*         hard    nofile      500000
-*         soft    nofile      500000
-root      hard    nofile      500000
-root      soft    nofile      500000
-----
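
The effective open-files limit can also be checked from within a process; a sketch using Python's stdlib `resource` module (the 500000 threshold is just the value from the example configuration above, not a requirement):

```python
import resource

# The soft limit is what a "too many open files" failure actually hits;
# the hard limit is the ceiling the soft limit can be raised to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))

def is_sufficient(soft_limit, needed=500000):
    # Hypothetical threshold taken from the example limits.conf above.
    return soft_limit >= needed
```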
-
-====== "Killed"
-
-The testtool might end unexpectedly with the simple message: "Killed". This means
-that the OS killed the tool because it consumed too much memory or spawned too many
-threads. To find out the reason on Linux, you can use the following command:
-
-  dmesg | egrep -i -B100 'killed process'
-
-Also take a look at the file /proc/sys/kernel/threads-max. It limits the
-number of threads spawned by a process. A sufficient (though probably much
-larger than needed) value is, e.g., 126676.
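
A small sketch for checking the kernel thread limit programmatically (Python, stdlib only; the 126676 threshold comes from the text above, and the reader returns None on systems without /proc):

```python
import os

def threads_max(path="/proc/sys/kernel/threads-max"):
    # Read the kernel-wide thread limit; None on non-Linux systems.
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read().strip())

def enough_threads(value, needed=126676):
    # Hypothetical sufficiency check based on the value suggested above.
    return value is not None and value >= needed

print(threads_max())
```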
-
-==== NETCONF stress/performance measuring tool
-This is basically a NETCONF client that puts NETCONF servers under
-a heavy load of NETCONF RPCs and measures the time until a configurable
-number of them have been processed.
-
-////
-TODO add a guide on how to do this with OpenDaylight
-////
-
-==== RESTCONF stress-performance measuring tool
-Very similar to the NETCONF stress tool, with the difference that it uses
-the RESTCONF protocol instead of NETCONF.
-
-////
-TODO add a guide on how to do this with OpenDaylight
-////
index c3ec74440924383723c9854575edfe2ba230e4b2..bca73227cac0f9a1affad8471f5f9a98641c5184 100644 (file)
@@ -1,17 +1,4 @@
+[[_southbound_netconf_connector]]
 == NETCONF User Guide
 
-=== Overview
-NETCONF is an XML-based protocol used for configuration and monitoring
-devices in the network. The base NETCONF protocol is described in
-http://tools.ietf.org/html/rfc6241[RFC-6241].
-
-.NETCONF in OpenDaylight:
-OpenDaylight supports the NETCONF protocol as a northbound server
-as well as a southbound plugin. It also includes a set of test tools
-for simulating NETCONF devices and clients.
-
-include::odl-netconf-southbound-user.adoc[]
-
-include::odl-netconf-northbound-user.adoc[]
-
-include::odl-netconf-testtool-user.adoc[]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/netconf-user-guide.html
index d34adc3ca192ec3614cbeb302c336284fa5313e4..7277b661949bedd98d8b6aeb1d8712ce2bb51b0c 100644 (file)
@@ -1,142 +1,3 @@
 == DIDM User Guide
 
-=== Overview
-The Device Identification and Driver Management (DIDM) project addresses the
-need to provide device-specific functionality. Device-specific functionality is
-code that performs a feature, and the code is knowledgeable of the capability
-and limitations of the device. For example, configuring VLANs and adjusting
-FlowMods are features, and there may be different implementations for different
-device types. Device-specific functionality is implemented as Device Drivers.
-Device Drivers need to be associated with the devices they can be used with. To
-determine this association requires the ability to identify the device type.
-
-=== DIDM Architecture
-The DIDM project creates the infrastructure to support the following functions:
-
- * *Discovery* - Determination that a device exists in the controller
-   management domain and connectivity to the device can be established. For
-   devices that support the OpenFlow protocol, the existing discovery
-   mechanism in OpenDaylight suffices. Devices that do not support OpenFlow
-   will be discovered through manual means such as the operator entering
-   device information via GUI or REST API.
- * *Identification* – Determination of the device type.
- * *Driver Registration* – Registration of Device Drivers as routed RPCs.
- * *Synchronization* – Collection of device information, device configuration,
-   and link (connection) information.
- * *Data Models for Common Features* – Data models will be defined to
-   perform common features such as VLAN configuration. For example,
-   applications can configure a VLAN by writing the VLAN data to the data store
-   as specified by the common data model.
- * *RPCs for Common Features* – Configuring VLANs and adjusting
-   FlowMods are example of features. RPCs will be defined that specify the
-   APIs for these features. Drivers implement features for specific devices and
-   support the APIs defined by the RPCs. There may be different Driver
-   implementations for different device types.
-
-=== Atrium Support
-
-Atrium implements an open source router that speaks BGP
-to other routers, and forwards packets received on one port/vlan to another,
-based on the next-hop learnt via BGP peering. A BGP peering application for the
-OpenDaylight controller and a new model for flow objective drivers for switches
-integrated with the OpenDaylight Atrium distribution were developed for this
-project. The implementation has partial feature parity with what was
-introduced by the Atrium 2015/A distribution on the ONOS controller. An overview
-of the architecture is available here: https://github.com/onfsdn/atrium-docs/wiki/ODL-Based-Atrium-Router-16A.
-
-The Atrium stack is implemented in OpenDaylight using the Atrium and DIDM projects.
-The Atrium project provides the application implementation for BGP
-peering, and the DIDM project provides the implementation for FlowObjectives.
-FlowObjective provides an abstraction layer and presents a pipeline-agnostic
-API for applications to consume.
-
-==== FlowObjective
-Flow Objectives describe an SDN application’s objective (or intention) behind a
-flow it is sending to a device.
-
-Applications communicate their flow installation requirements using Flow
-Objectives. DIDM drivers translate the Flow Objectives to device-specific flows
-according to the device pipeline.
-
-There are three FlowObjectives (already implemented in the ONOS controller):
-
-* Filtering Objective
-* Next Objective
-* Forwarding Objective
-
-=== Installing DIDM
-
-To install DIDM, download OpenDaylight and use the Karaf console to install the following features:
-
-* odl-openflowplugin-all
-* odl-didm-all
-
-odl-didm-all installs the following required features:
-
-* odl-didm-ovs-all
-* odl-didm-ovs-impl
-* odl-didm-util
-* odl-didm-identification
-* odl-didm-drivers
-* odl-didm-hp-all
-
-=== Configuring DIDM
-
-This section shows example configuration steps for installing a driver (the HP 3800 OpenFlow switch driver).
-
-=== Install DIDM features:
-
-----
-feature:install odl-didm-identification-api
-feature:install odl-didm-drivers
-----
-
-In order to identify a device, the device driver needs to be installed first.
-The Identification Manager will be notified when a new device connects to the controller.
-
-=== Install HP driver
-
-feature:install odl-didm-hp-all installs the following features:
-
-* odl-didm-util
-* odl-didm-identification
-* odl-didm-drivers
-* odl-didm-hp-all
-* odl-didm-hp-impl
-
-At this point, the driver has written all of the identification information into the MD-SAL datastore.
-The Identification Manager has that information so that it can try to identify the HP 3800 device when it connects to the controller.
-
-Configure the switch and connect it to the controller from the switch CLI.
-
-=== Run REST GET command to verify the device details:
-
-http://<CONTROLLER-IP:8181>/restconf/operational/opendaylight-inventory:nodes
-
-=== Run REST adjust-flow command to adjust flows and push to the device
-
-.Flow mod driver for HP 3800 device is added in Beryllium release
-This driver adjusts the flows and pushes them to the device.
-This API takes the flow to be adjusted as input and displays the adjusted flow as output in the REST output container.
-Here is the REST API to adjust and push flows to the HP 3800 device:
-
-http://<CONTROLLER-IP:8181>/restconf/operations/openflow-feature:adjust-flow
-
-=== FlowObjectives API
-
-FlowObjective presents an OpenFlow pipeline-agnostic API for applications to
-consume. Applications communicate their intent behind the installation of a flow to
-drivers using the FlowObjective. The driver translates the FlowObjective into
-device-specific flows and uses the OpenFlowPlugin to install the flows on the device.
-
-==== Filter Objective
-
-http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:filter
-
-==== Next Objective
-
-http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:next
-
-==== Forward Objective
-
-http://<CONTROLLER-IP>:8181/restconf/operations/atrium-flow-objective:forward
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/didm-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/genius/genius-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/genius/genius-user-guide.adoc
new file mode 100644 (file)
index 0000000..82deb72
--- /dev/null
@@ -0,0 +1,3 @@
+== Genius User Guide ==
+
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/genius-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-faas-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-faas-user-guide.adoc
deleted file mode 100644 (file)
index d3adb59..0000000
+++ /dev/null
@@ -1,11 +0,0 @@
-==== Overview
-
-The FaaS renderer feature enables leveraging the FaaS project as a GBP renderer.
-
-===== Installing and Pre-requisites
-
-From the Karaf console in OpenDaylight:
-
- feature:install odl-groupbasedpolicy-faas
-
-More information about FaaS can be found here: https://wiki.opendaylight.org/view/FaaS:GBPIntegration
diff --git a/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-iovisor-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-iovisor-user-guide.adoc
deleted file mode 100644 (file)
index 8be718e..0000000
+++ /dev/null
@@ -1,13 +0,0 @@
-==== Overview
-
-The IO Visor renderer feature enables container endpoints (e.g. Docker, LXC) to leverage GBP policies.
-
-The renderer interacts with a IO Visor module from the Linux Foundation IO Visor project.
-
-===== Installing and Pre-requisites
-
-From the Karaf console in OpenDaylight:
-
- feature:install odl-groupbasedpolicy-iovisor odl-restconf
-
-Installation details, usage, and other information for the IO Visor GBP module can be found here: https://github.com/iovisor/iomodules[*IO Visor* github repo for IO Modules]
diff --git a/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-neutronmapper-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-neutronmapper-user-guide.adoc
deleted file mode 100644 (file)
index 072a8bb..0000000
+++ /dev/null
@@ -1,182 +0,0 @@
-==== Overview
-This section is for Application Developers and Network Administrators
-who are looking to integrate Group Based Policy with OpenStack. 
-
-To enable the *GBP* Neutron Mapper feature, at the Karaf console:
-
- feature:install odl-groupbasedpolicy-neutronmapper
-
-Neutron Mapper has the following dependencies that are automatically loaded:
-
- odl-neutron-service
-
-Neutron Northbound, implementing the REST API used by OpenStack
-
- odl-groupbasedpolicy-base
-
-Base *GBP* feature set, such as policy resolution, data model etc.
-
- odl-groupbasedpolicy-ofoverlay
-
-// For Lithium, *GBP* has one renderer, hence this is loaded by default.
-
-REST calls from OpenStack Neutron are handled by the Neutron Northbound project.
-
-*GBP* provides the implementation of the http://developer.openstack.org/api-ref-networking-v2.html[Neutron V2.0 API].
-
-==== Features
-
-List of supported Neutron entities:
-
-* Port
-* Network
-** Standard Internal
-** External provider L2/L3 network
-* Subnet
-* Security-groups
-* Routers
-** Distributed functionality with local routing per compute
-** External gateway access per compute node (dedicated port required) 
-** Multiple routers per tenant
-* FloatingIP NAT
-* IPv4/IPv6 support
-
-The mapping of Neutron entities to *GBP* entities is as follows:
-
-*Neutron Port*
-
-.Neutron Port
-image::groupbasedpolicy/neutronmapper-gbp-mapping-port.png[width=300]
-
-The Neutron port is mapped to an endpoint. 
-
-The current implementation supports one IP address per Neutron port.
-
-An endpoint and L3-endpoint belong to multiple EndpointGroups if the Neutron port is in multiple Neutron Security Groups. 
-
-The key for the endpoint is the L2-bridge-domain, obtained as the parent of the L2-flood-domain representing the Neutron network. The MAC address comes from the Neutron port.
-An L3-endpoint is created based on the L3-context (the parent of the L2-bridge-domain) and the IP address of the Neutron port.
-
-*Neutron Network*
-
-.Neutron Network
-image::groupbasedpolicy/neutronmapper-gbp-mapping-network.png[width=300]
-
-A Neutron network has the following characteristics:
-
-* defines a broadcast domain
-* defines a L2 transmission domain
-* defines a L2 name space.
-
-To represent this, a Neutron Network is mapped to multiple *GBP* entities. 
-The first mapping is to an L2 flood-domain to reflect that the Neutron network is one flooding or broadcast domain.
-An L2-bridge-domain is then associated as the parent of L2 flood-domain. This reflects both the L2 transmission domain as well as the L2 addressing namespace.
-
-The third mapping is to L3-context, which represents the distinct L3 address space. 
-The L3-context is the parent of L2-bridge-domain. 
-
-*Neutron Subnet*
-
-.Neutron Subnet
-image::groupbasedpolicy/neutronmapper-gbp-mapping-subnet.png[width=300]
-
-Neutron subnet is associated with a Neutron network. The Neutron subnet is mapped to a *GBP* subnet where the parent of the subnet is L2-flood-domain representing the Neutron network. 
-
-*Neutron Security Group*
-
-.Neutron Security Group and Rules
-image::groupbasedpolicy/neutronmapper-gbp-mapping-securitygroup.png[width=300]
-
-*GBP* entity representing Neutron security-group is EndpointGroup. 
-
-*Infrastructure EndpointGroups*
-
-Neutron-mapper automatically creates EndpointGroups to manage key infrastructure items such as:
-
-* DHCP EndpointGroup - contains endpoints representing Neutron DHCP ports
-* Router EndpointGroup - contains endpoints representing Neutron router interfaces
-* External EndpointGroup - holds L3-endpoints representing Neutron router gateway ports, also associated with FloatingIP ports.
-
-*Neutron Security Group Rules*
-
-This is the most involved amongst all the mappings because Neutron security-group-rules are mapped to contracts with clauses, 
-subjects, rules, action-refs, classifier-refs, etc. 
-Contracts are used between EndpointGroups representing Neutron Security Groups. 
-For simplification it is important to note that Neutron security-group-rules are similar to a *GBP* rule containing:
-
-* classifier with direction
-* action of *allow*.
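
The rule shape described above can be sketched as a toy mapping. This is illustrative only, with hypothetical field names, and does not reflect the actual neutron-mapper code:

```python
def sg_rule_to_gbp(sg_rule):
    # Illustrative only: mirrors the shape of the mapping described above
    # (a classifier with a direction, plus an allow action); not the real
    # neutron-mapper implementation.
    return {
        "classifier": {
            "direction": sg_rule["direction"],
            "protocol": sg_rule.get("protocol"),
        },
        "action": "allow",
    }

rule = sg_rule_to_gbp({"direction": "ingress", "protocol": "tcp"})
print(rule)
```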
-
-
-*Neutron Routers*
-
-.Neutron Router
-image::groupbasedpolicy/neutronmapper-gbp-mapping-router.png[width=300]
-
-A Neutron router is represented as an L3-context. This treats a router as a Layer3 namespace, and hence every network attached to it is part
-of that Layer3 namespace.
-
-This allows for multiple routers per tenant with complete isolation.
-
-The mapping of the router to an endpoint represents the router's interface or gateway port.
-
-The mapping to an EndpointGroup represents the internal infrastructure EndpointGroups created by the *GBP* Neutron Mapper
-
-When a Neutron router interface is attached to a network/subnet, that network/subnet and its associated endpoints or Neutron Ports are seamlessly added to the namespace.
-
-*Neutron FloatingIP*
-
-When associated with a Neutron Port, this leverages the <<OfOverlay,OfOverlay>> renderer's NAT capabilities.
-
-A dedicated _external_ interface on each Nova compute host allows for distributed external access. Each Nova instance associated with a
-FloatingIP address can access the external network directly without having to route via the Neutron controller, or having to enable any form
-of Neutron distributed routing functionality.
-
-Assuming the gateway provisioned in the Neutron Subnet command for the external network is reachable, the combination of *GBP* Neutron Mapper and 
-<<OfOverlay,OfOverlay renderer>> will automatically ARP for this default gateway, requiring no user intervention.
-
-
-*Troubleshooting within GBP*
-
-Logging level for the mapping functionality can be set for package org.opendaylight.groupbasedpolicy.neutron.mapper. An example of enabling TRACE logging level on Karaf console:
-
- log:set TRACE org.opendaylight.groupbasedpolicy.neutron.mapper
-
-*Neutron mapping example*
-As an example of the mapping, consider the creation of a Neutron network, subnet, and port.
-When a Neutron network is created, 3 *GBP* entities are created: l2-flood-domain, l2-bridge-domain, and l3-context.
-
-.Neutron network mapping
-image::groupbasedpolicy/neutronmapper-gbp-mapping-network-example.png[width=500]
-
-After a subnet is created in the network, the mapping looks like this.
-
-.Neutron subnet mapping
-image::groupbasedpolicy/neutronmapper-gbp-mapping-subnet-example.png[width=500]
-
-If a Neutron port is created in the subnet, an endpoint and an l3-endpoint are created. The endpoint has a key composed of the l2-bridge-domain and the MAC address from the Neutron port. The key of the l3-endpoint is composed of the l3-context and the IP address. The network containment of the endpoint and l3-endpoint points to the subnet.
-
-.Neutron port mapping
-image::groupbasedpolicy/neutronmapper-gbp-mapping-port-example.png[width=500]
-
-==== Configuring GBP Neutron
-
-No intervention past initial OpenStack setup is required by the user.
-
-More information about configuration can be found in our DevStack demo environment on the https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)[*GBP* wiki].
-
-==== Administering or Managing GBP Neutron
-
-For consistency's sake, all provisioning should be performed via the Neutron API (CLI or Horizon).
-
-The mapped policies can be augmented via the *GBP* <<UX,UX>>, to:
-
-* Enable <<SFC,Service Function Chaining>>
-* Add endpoints from outside of Neutron i.e. VMs/containers not provisioned in OpenStack
-* Augment policies/contracts derived from Security Group Rules
-* Overlay additional contracts or groupings
-
-==== Tutorials
-
-A DevStack demo environment can be found on the https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)[*GBP* wiki].
diff --git a/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-ofoverlay-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-ofoverlay-user-guide.adoc
deleted file mode 100644 (file)
index f8396f5..0000000
+++ /dev/null
@@ -1,516 +0,0 @@
-==== Overview
-
-The OpenFlow Overlay (OfOverlay) feature enables the OpenFlow Overlay
-renderer, which creates a network virtualization solution across nodes
-that host Open vSwitch software switches.
-
-===== Installing and Pre-requisites
-
-From the Karaf console in OpenDaylight:
-
- feature:install odl-groupbasedpolicy-ofoverlay
-
-This renderer is designed to work with Open vSwitch (OVS) 2.1+ (although 2.3 is strongly recommended) and OpenFlow 1.3.
-
-When used in conjunction with the <<Neutron,Neutron Mapper feature>> no extra OfOverlay specific setup is required.
-
-When this feature is loaded "standalone", the user is required to configure infrastructure, such as
-
-* instantiating OVS bridges,
-* attaching hosts to the bridges,
-* and creating the VXLAN/VXLAN-GPE tunnel ports on the bridges.
-
-[[offset]]
-The *GBP* OfOverlay renderer also supports a table offset option, to offset the pipeline post-table 0.
-The value of table offset is stored in the config datastore and it may be rewritten at runtime.
-
-----
-PUT http://{{controllerIp}}:8181/restconf/config/ofoverlay:of-overlay-config
-{
-    "of-overlay-config": {
-        "gbp-ofoverlay-table-offset": 6
-    }
-}
-----
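
The table-offset PUT above can be sketched as client code with Python's stdlib `urllib` and `json`. The controller address is a placeholder, and the request is constructed but not sent, since sending it requires a running OpenDaylight instance:

```python
import json
import urllib.request

# Placeholder controller address, standing in for {{controllerIp}} above.
CONTROLLER_IP = "localhost"
url = "http://%s:8181/restconf/config/ofoverlay:of-overlay-config" % CONTROLLER_IP

# Payload taken from the example above: offset the pipeline by 6 tables.
body = json.dumps({"of-overlay-config": {"gbp-ofoverlay-table-offset": 6}}).encode()

req = urllib.request.Request(
    url,
    data=body,
    method="PUT",
    headers={"Content-Type": "application/json"},
)
print(req.get_method(), req.full_url)
```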
-
-The default value is set by changing:
-
- <gbp-ofoverlay-table-offset>0</gbp-ofoverlay-table-offset>
-
-in file:
-distribution-karaf/target/assembly/etc/opendaylight/karaf/15-groupbasedpolicy-ofoverlay.xml
-
-To avoid overwriting runtime changes, the default value is used only when the OfOverlay renderer starts and no other
-value has been written before.
-
-==== OpenFlow Overlay Architecture
-
-These are the primary components of *GBP*. The OfOverlay components are highlighted in red.
-
-.OfOverlay within *GBP*
-image::groupbasedpolicy/ofoverlay-1-components.png[align="center",width=500]
-
-In terms of the inner components of the *GBP* OfOverlay renderer:
-
-.OfOverlay expanded view:
-image::groupbasedpolicy/ofoverlay-2-components.png[align="center",width=500]
-
-*OfOverlay Renderer*
-
-Launches components below:
-
-*Policy Resolver*
-
-Policy resolution is completely domain independent, and the OfOverlay leverages process policy information internally. See <<policyresolution,Policy Resolution process>>.
-
-It listens to inputs to the _Tenants_ configuration datastore, validates tenant input, then writes this to the Tenants operational datastore.
-
-From there an internal notification is generated to the PolicyManager.
-
-In the next release, this will be moving to a non-renderer specific location.
-
-*Endpoint Manager*
-
-The endpoint repository operates in *orchestrated* mode. This means the user is responsible for the provisioning of endpoints via:
-
-* <<UX,UX/GUI>>
-* REST API
-
-NOTE: When using the <<Neutron,Neutron mapper>> feature, everything is managed transparently via Neutron.
-
-The Endpoint Manager is responsible for listening to Endpoint repository updates and notifying the Switch Manager when a valid Endpoint has been registered.
-
-It also supplies utility functions to the flow pipeline process.
-
-*Switch Manager*
-
-The Switch Manager is purely a state manager.
-
-Switches are in one of 3 states:
-
-* DISCONNECTED
-* PREPARING
-* READY
-
-*Ready* is denoted by a connected switch:
-
-* having a tunnel interface
-* having at least one endpoint connected.
-
-In this way *GBP* does not write to switches it has no business writing to.
-
-*Preparing* simply means the switch has a controller connection but is missing one of the above _complete and necessary_ conditions.
-
-*Disconnected* means a previously connected switch is no longer present in the Inventory operational datastore.
-
-.OfOverlay Flow Pipeline
-image::groupbasedpolicy/ofoverlay-3-flowpipeline.png[align="center",width=500]
-
-The OfOverlay leverages Nicira registers as follows:
-
-* REG0 = Source EndpointGroup + Tenant ordinal
-* REG1 = Source Conditions + Tenant ordinal
-* REG2 = Destination EndpointGroup + Tenant ordinal
-* REG3 = Destination Conditions + Tenant ordinal
-* REG4 = Bridge Domain + Tenant ordinal
-* REG5 = Flood Domain + Tenant ordinal
-* REG6 = Layer 3 Context + Tenant ordinal
-
-*Port Security*
-
-Table 0 of the OpenFlow pipeline. Responsible for ensuring that only valid connections can send packets into the pipeline:
-
- cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
- cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
- cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
- cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
- cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
- cookie=0x0, <snip> , priority=112,ipv6 actions=drop
- cookie=0x0, <snip> , priority=111, ip actions=drop
- cookie=0x0, <snip> , priority=110,arp actions=drop
- cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
- cookie=0x0, <snip> , priority=1 actions=drop
-
-Ingress from tunnel interface, go to Table _Source Mapper_:
-
- cookie=0x0, <snip> , priority=200,in_port=3 actions=goto_table:2
-
-Ingress from outside, goto Table _Ingress NAT Mapper_:
-
- cookie=0x0, <snip> , priority=200,in_port=1 actions=goto_table:1
-
-ARP from Endpoint, go to Table _Source Mapper_:
-
- cookie=0x0, <snip> , priority=121,arp,in_port=5,dl_src=fa:16:3e:d5:b9:8d,arp_spa=10.1.1.3 actions=goto_table:2
-
-IPv4 from Endpoint, go to Table _Source Mapper_:
-
- cookie=0x0, <snip> , priority=120,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_src=10.1.1.3 actions=goto_table:2
-
-DHCP DORA from Endpoint, go to Table _Source Mapper_:
-
- cookie=0x0, <snip> , priority=115,ip,in_port=5,dl_src=fa:16:3e:d5:b9:8d,nw_dst=255.255.255.255 actions=goto_table:2
-
-A series of DROP flows, with priority set to capture any non-specific traffic that should have matched above:
-
- cookie=0x0, <snip> , priority=112,ipv6 actions=drop
- cookie=0x0, <snip> , priority=111, ip actions=drop
- cookie=0x0, <snip> , priority=110,arp actions=drop
-
-"L2" catch all traffic not identified above:
-
- cookie=0x0, <snip> ,in_port=5,dl_src=fa:16:3e:d5:b9:8d actions=goto_table:2
-
-Drop Flow:
-
- cookie=0x0, <snip> , priority=1 actions=drop
-
-
-*Ingress NAT Mapper*
-
-Table <<offset,_offset_>>+1.
-
-ARP responder for external NAT address:
-
- cookie=0x0, <snip> , priority=150,arp,arp_tpa=192.168.111.51,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:58:c3:dd->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e58c3dd->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xc0a86f33->NXM_OF_ARP_SPA[],IN_PORT
-
-Translates from Outside to Inside and performs the same functions as the SourceMapper:
-
- cookie=0x0, <snip> , priority=100,ip,nw_dst=192.168.111.51 actions=set_field:10.1.1.2->ip_dst,set_field:fa:16:3e:58:c3:dd->eth_dst,load:0x2->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0x3->NXM_NX_TUN_ID[0..31],goto_table:3
-
-*Source Mapper*
-
-Table <<offset,_offset_>>+2.
-
-Determines, based on characteristics from the ingress port, which:
-
-* EndpointGroup(s) it belongs to
-* Forwarding context
-* Tunnel VNID ordinal
-
-Establishes tunnels at valid destination switches for ingress.
-
-Ingress Tunnel established at remote node with VNID Ordinal that maps to Source EPG, Forwarding Context etc:
-
- cookie=0x0, <snip>, priority=150,tun_id=0xd,in_port=3 actions=load:0xc->NXM_NX_REG0[],load:0xffffff->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],goto_table:3
-
-Maps endpoint to Source EPG, Forwarding Context based on ingress port, and MAC:
-
- cookie=0x0, <snip> , priority=100,in_port=5,dl_src=fa:16:3e:b4:b4:b1 actions=load:0xc->NXM_NX_REG0[],load:0x1->NXM_NX_REG1[],load:0x4->NXM_NX_REG4[],load:0x5->NXM_NX_REG5[],load:0x7->NXM_NX_REG6[],load:0xd->NXM_NX_TUN_ID[0..31],goto_table:3
-
-Generic drop:
-
- cookie=0x0, duration=197.622s, table=2, n_packets=0, n_bytes=0, priority=1 actions=drop
-
-*Destination Mapper*
-
-Table <<offset,_offset_>>+3.
-
-Determines based on characteristics of the endpoint:
-
-* EndpointGroup(s) it belongs to
-* Forwarding context
-* Tunnel Destination value
-
-Manages routing based on valid ingress nodes ARP'ing for their default gateway, and matches on either gateway MAC or destination endpoint MAC.
-
-ARP for default gateway for the 10.1.1.0/24 subnet:
-
- cookie=0x0, <snip> , priority=150,arp,reg6=0x7,arp_tpa=10.1.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:28:4c:82->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xfa163e284c82->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xa010101->NXM_OF_ARP_SPA[],IN_PORT
-
-Broadcast traffic is destined for the GroupTable:
-
- cookie=0x0, <snip> , priority=140,reg5=0x5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=load:0x5->NXM_NX_TUN_ID[0..31],group:5
-
-Layer 3 destination matching flows, where priority = 100 + mask length. Since *GBP* now supports L3Prefix endpoints, default routes etc. can be set:
-
- cookie=0x0, <snip>, priority=132,ip,reg6=0x7,dl_dst=fa:16:3e:b4:b4:b1,nw_dst=10.1.1.3 actions=load:0xc->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x5->NXM_NX_REG7[],set_field:fa:16:3e:b4:b4:b1->eth_dst,dec_ttl,goto_table:4
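The priority arithmetic can be sketched as follows (a hypothetical helper, not part of the renderer, shown only to illustrate the 100 + mask-length rule):

```python
def l3_flow_priority(prefix_len: int) -> int:
    """Priority of an L3 destination-match flow: 100 + mask length.

    Longer (more specific) prefixes therefore beat shorter ones, and a
    /0 default route matches last, at the floor of the IP range (100).
    """
    return 100 + prefix_len

# The host route above (nw_dst=10.1.1.3, i.e. a /32) lands at priority 132.
assert l3_flow_priority(32) == 132
# A default route (0.0.0.0/0) would sit at priority 100.
assert l3_flow_priority(0) == 100
```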
-
-Layer 2 destination matching flows, designed to be matched only after the last IP flow (the lowest-priority IP flow is 100):
-
- cookie=0x0, duration=323.203s, table=3, n_packets=4, n_bytes=168, priority=50,reg4=0x4,dl_dst=fa:16:3e:58:c3:dd actions=load:0x2->NXM_NX_REG2[],load:0x1->NXM_NX_REG3[],load:0x2->NXM_NX_REG7[],goto_table:4
-
-General drop flow:
-
- cookie=0x0, duration=323.207s, table=3, n_packets=6, n_bytes=588, priority=1 actions=drop
-
-*Policy Enforcer*
-
-Table <<offset,_offset_>>+4.
-
-Once the Source and Destination EndpointGroups are assigned, policy is enforced based on resolved rules.
-
-In the case of <<SFC,Service Function Chaining>>, the encapsulation and destination for traffic destined to a chain are discovered and enforced.
-
-Policy flow, allowing IP traffic between EndpointGroups:
-
- cookie=0x0, <snip> , priority=64998,ip,reg0=0x8,reg1=0x1,reg2=0xc,reg3=0x1 actions=goto_table:5
-
-*Egress NAT Mapper*
-
-Table <<offset,_offset_>>+5.
-
-Performs the NAT function before traffic egresses the OVS instance to the underlay network.
-
-Inside to Outside NAT translation before sending to underlay:
-
- cookie=0x0, <snip> , priority=100,ip,reg6=0x7,nw_src=10.1.1.2 actions=set_field:192.168.111.51->ip_src,goto_table:6
-
-*External Mapper*
-
-Table <<offset,_offset_>>+6.
-
-Manages post-policy-enforcement, endpoint-specific destination effects; in particular <<SFC,Service Function Chaining>>, which is why both symmetric and asymmetric chains
-and distributed ingress/egress classification are supported.
-
-Generic allow:
-
- cookie=0x0, <snip>, priority=100 actions=output:NXM_NX_REG7[]
-
-==== Configuring OpenFlow Overlay via REST
-
-NOTE: Please see the <<UX,UX>> section on how to configure *GBP* via the GUI.
-
-*Endpoint*
-
-----
-POST http://{{controllerIp}}:8181/restconf/operations/endpoint:register-endpoint
-{
-    "input": {
-        "endpoint-group": "<epg0>",
-        "endpoint-groups" : ["<epg1>","<epg2>"],
-        "network-containment" : "<forwarding-model-context1>",
-        "l2-context": "<bridge-domain1>",
-        "mac-address": "<mac1>",
-        "l3-address": [
-            {
-                "ip-address": "<ipaddress1>",
-                "l3-context": "<l3_context1>"
-            }
-        ],
-        "ofoverlay:port-name": "<ovs port name>",
-        "tenant": "<tenant1>"
-    }
-}
-----
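As a sketch of driving this registration programmatically (tenant, EPG, address and port values below are hypothetical; in practice the body is POSTed to the RESTCONF URL above with the controller's credentials):

```python
import json

def register_endpoint_body(tenant, epg, l2_context, mac, l3_addrs, port_name):
    """Build the JSON body for the endpoint:register-endpoint RPC (sketch)."""
    return {
        "input": {
            "tenant": tenant,
            "endpoint-group": epg,
            "l2-context": l2_context,
            "mac-address": mac,
            "l3-address": l3_addrs,
            # renderer-specific augmentation, hence the "ofoverlay:" prefix
            "ofoverlay:port-name": port_name,
        }
    }

# Hypothetical example values; one would POST json.dumps(body) to
# /restconf/operations/endpoint:register-endpoint.
body = register_endpoint_body(
    "tenant-red", "clients", "bridge-domain1", "fa:16:3e:00:00:01",
    [{"ip-address": "10.1.1.2", "l3-context": "l3-context1"}], "tap1234")
payload = json.dumps(body)
assert "ofoverlay:port-name" in json.loads(payload)["input"]
```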
-
-NOTE: Observe the usage of "port-name" preceded by "ofoverlay". In OpenDaylight, base datastore objects can be _augmented_. In *GBP*, the base endpoint model has no renderer
-specifics and hence can be leveraged across multiple renderers.
-
-*OVS Augmentations to Inventory*
-
-----
-PUT http://{{controllerIp}}:8181/restconf/config/opendaylight-inventory:nodes/
-{
-    "opendaylight-inventory:nodes": {
-        "node": [
-            {
-                "id": "openflow:123456",
-                "ofoverlay:tunnel": [
-                    {
-                        "tunnel-type": "overlay:tunnel-type-vxlan",
-                        "ip": "<ip_address_of_ovs>",
-                        "port": 4789,
-                        "node-connector-id": "openflow:123456:1"
-                    }
-                ]
-            },
-            {
-                "id": "openflow:654321",
-                "ofoverlay:tunnel": [
-                    {
-                        "tunnel-type": "overlay:tunnel-type-vxlan",
-                        "ip": "<ip_address_of_ovs>",
-                        "port": 4789,
-                        "node-connector-id": "openflow:654321:1"
-                    }
-                ]
-            }
-        ]
-    }
-}
-----
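The per-node tunnel augmentation above is uniform across switches, so it can be generated; a minimal sketch (node ids and IPs are hypothetical, 4789 is the standard VXLAN UDP port):

```python
def node_tunnel_augmentation(node_id, tunnel_ip, connector_index=1, port=4789):
    """Augment an inventory node with a VXLAN tunnel endpoint (sketch)."""
    return {
        "id": node_id,
        "ofoverlay:tunnel": [
            {
                "tunnel-type": "overlay:tunnel-type-vxlan",
                "ip": tunnel_ip,
                "port": port,  # standard VXLAN UDP port
                "node-connector-id": "%s:%d" % (node_id, connector_index),
            }
        ],
    }

# Body to PUT to /restconf/config/opendaylight-inventory:nodes/
nodes = {"opendaylight-inventory:nodes": {"node": [
    node_tunnel_augmentation("openflow:123456", "192.0.2.10"),
    node_tunnel_augmentation("openflow:654321", "192.0.2.11"),
]}}
assert nodes["opendaylight-inventory:nodes"]["node"][0]["ofoverlay:tunnel"][0]["port"] == 4789
```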
-
-*Tenants* (see <<policyresolution,Policy Resolution>> and <<forwarding,Forwarding Model>> for details):
-
-----
-{
-  "policy:tenant": {
-    "contract": [
-      {
-        "clause": [
-          {
-            "name": "allow-http-clause",
-            "subject-refs": [
-              "allow-http-subject",
-              "allow-icmp-subject"
-            ]
-          }
-        ],
-        "id": "<id>",
-        "subject": [
-          {
-            "name": "allow-http-subject",
-            "rule": [
-              {
-                "classifier-ref": [
-                  {
-                    "direction": "in",
-                    "name": "http-dest"
-                  },
-                  {
-                    "direction": "out",
-                    "name": "http-src"
-                  }
-                ],
-                "action-ref": [
-                  {
-                    "name": "allow1",
-                    "order": 0
-                  }
-                ],
-                "name": "allow-http-rule"
-              }
-            ]
-          },
-          {
-            "name": "allow-icmp-subject",
-            "rule": [
-              {
-                "classifier-ref": [
-                  {
-                    "name": "icmp"
-                  }
-                ],
-                "action-ref": [
-                  {
-                    "name": "allow1",
-                    "order": 0
-                  }
-                ],
-                "name": "allow-icmp-rule"
-              }
-            ]
-          }
-        ]
-      }
-    ],
-    "endpoint-group": [
-      {
-        "consumer-named-selector": [
-          {
-            "contract": [
-              "<id>"
-            ],
-            "name": "<name>"
-          }
-        ],
-        "id": "<id>",
-        "provider-named-selector": []
-      },
-      {
-        "consumer-named-selector": [],
-        "id": "<id>",
-        "provider-named-selector": [
-          {
-            "contract": [
-              "<id>"
-            ],
-            "name": "<name>"
-          }
-        ]
-      }
-    ],
-    "id": "<id>",
-    "l2-bridge-domain": [
-      {
-        "id": "<id>",
-        "parent": "<id>"
-      }
-    ],
-    "l2-flood-domain": [
-      {
-        "id": "<id>",
-        "parent": "<id>"
-      },
-      {
-        "id": "<id>",
-        "parent": "<id>"
-      }
-    ],
-    "l3-context": [
-      {
-        "id": "<id>"
-      }
-    ],
-    "name": "GBPPOC",
-    "subject-feature-instances": {
-      "classifier-instance": [
-        {
-          "classifier-definition-id": "<id>",
-          "name": "http-dest",
-          "parameter-value": [
-            {
-              "int-value": "6",
-              "name": "proto"
-            },
-            {
-              "int-value": "80",
-              "name": "destport"
-            }
-          ]
-        },
-        {
-          "classifier-definition-id": "<id>",
-          "name": "http-src",
-          "parameter-value": [
-            {
-              "int-value": "6",
-              "name": "proto"
-            },
-            {
-              "int-value": "80",
-              "name": "sourceport"
-            }
-          ]
-        },
-        {
-          "classifier-definition-id": "<id>",
-          "name": "icmp",
-          "parameter-value": [
-            {
-              "int-value": "1",
-              "name": "proto"
-            }
-          ]
-        }
-      ],
-      "action-instance": [
-        {
-          "name": "allow1",
-          "action-definition-id": "<id>"
-        }
-      ]
-    },
-    "subnet": [
-      {
-        "id": "<id>",
-        "ip-prefix": "<ip_prefix>",
-        "parent": "<id>",
-        "virtual-router-ip": "<ip address>"
-      },
-      {
-        "id": "<id>",
-        "ip-prefix": "<ip prefix>",
-        "parent": "<id>",
-        "virtual-router-ip": "<ip address>"
-      }
-    ]
-  }
-}
-----
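The wiring in the tenant JSON above hinges on the named selectors: a contract is in force between two EndpointGroups when one EPG's consumer-named-selector and another's provider-named-selector reference the same contract id. A minimal sketch (ids and names are hypothetical):

```python
CONTRACT_ID = "allow-http-contract"  # hypothetical contract id

def consumer_epg(epg_id, contracts):
    """EPG that consumes the given contracts via a named selector (sketch)."""
    return {"id": epg_id,
            "consumer-named-selector": [{"name": epg_id + "-cns",
                                         "contract": contracts}],
            "provider-named-selector": []}

def provider_epg(epg_id, contracts):
    """EPG that provides the given contracts via a named selector (sketch)."""
    return {"id": epg_id,
            "consumer-named-selector": [],
            "provider-named-selector": [{"name": epg_id + "-pns",
                                         "contract": contracts}]}

epgs = [consumer_epg("clients", [CONTRACT_ID]),
        provider_epg("webservers", [CONTRACT_ID])]
# Both selectors reference the same contract id, which wires the two
# EndpointGroups together exactly as in the tenant JSON above.
assert (epgs[0]["consumer-named-selector"][0]["contract"]
        == epgs[1]["provider-named-selector"][0]["contract"])
```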
-
-
-[[Demo]]
-==== Tutorials
-
-Comprehensive tutorials, along with a demonstration environment leveraging Vagrant
-can be found on the https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)[*GBP* wiki].
-
diff --git a/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-sfc-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-sfc-user-guide.adoc
deleted file mode 100644 (file)
index 701fac2..0000000
+++ /dev/null
@@ -1,188 +0,0 @@
-==== Overview
-
-Please refer to the Service Function Chaining project for specifics on SFC provisioning and theory.
-
-*GBP* allows for the use of a chain, by name, in policy.
-
-This takes the form of an _action_ in *GBP*.
-
-Using the <<demo,*GBP* demo and development environment>> as an example:
-
-.GBP and SFC integration environment
-image::groupbasedpolicy/sfc-1-topology.png[align="center",width=500]
-
-In the topology above, a symmetric chain between H35_2 and H36_3 could take the path:
-
-H35_2 to sw1 to sff1 to sf1 to sff1 to sff2 to sf2 to sff2 to sw6 to H36_3
-
-If symmetric chaining were desired, the return path is:
-
-.GBP and SFC symmetric chain environment
-image::groupbasedpolicy/sfc-2-symmetric.png[align="center",width=500]
-
-
-If asymmetric chaining were desired, the return path could be direct, or an *entirely different chain*.
-
-.GBP and SFC asymmetric chain environment
-image::groupbasedpolicy/sfc-3-asymmetric.png[align="center",width=500]
-
-
-All these scenarios are supported by the integration.
-
-In the *Subject Feature Instance* section of the tenant config, we define the instances of the classifier definitions for ICMP and HTTP:
-----
-        "subject-feature-instances": {
-          "classifier-instance": [
-            {
-              "name": "icmp",
-              "parameter-value": [
-                {
-                  "name": "proto",
-                  "int-value": 1
-                }
-              ]
-            },
-            {
-              "name": "http-dest",
-              "parameter-value": [
-                {
-                  "int-value": "6",
-                  "name": "proto"
-                },
-                {
-                  "int-value": "80",
-                  "name": "destport"
-                }
-              ]
-            },
-            {
-              "name": "http-src",
-              "parameter-value": [
-                {
-                  "int-value": "6",
-                  "name": "proto"
-                },
-                {
-                  "int-value": "80",
-                  "name": "sourceport"
-                }
-              ]
-            }
-          ],
-----
-
-Then the action instances to associate with traffic that matches the classifiers are defined.
-
-Note that the _SFC chain name_ must already exist in SFC. It is validated against
-the datastore when the tenant configuration is entered, before the valid tenant configuration is written to the operational datastore (which triggers policy resolution).
-
-----
-          "action-instance": [
-            {
-              "name": "chain1",
-              "parameter-value": [
-                {
-                  "name": "sfc-chain-name",
-                  "string-value": "SFCGBP"
-                }
-              ]
-            },
-            {
-              "name": "allow1"
-            }
-          ]
-        },
-----
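A chain action instance can be built and sanity-checked before submission; a hypothetical sketch mirroring the note above (the actual validation against SFC happens inside the controller, not in client code):

```python
def chain_action_instance(instance_name, chain_name):
    """Build a GBP action instance referencing a named SFC chain (sketch)."""
    return {
        "name": instance_name,
        "parameter-value": [
            {"name": "sfc-chain-name", "string-value": chain_name},
        ],
    }

action = chain_action_instance("chain1", "SFCGBP")
# The referenced chain must already exist in SFC; GBP validates this
# before moving the tenant config to the operational datastore.
names = [p["name"] for p in action["parameter-value"]]
assert "sfc-chain-name" in names
```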
-
-When ICMP is matched, allow the traffic:
-
-----
-
-        "contract": [
-          {
-            "subject": [
-              {
-                "name": "icmp-subject",
-                "rule": [
-                  {
-                    "name": "allow-icmp-rule",
-                    "order" : 0,
-                    "classifier-ref": [
-                      {
-                        "name": "icmp"
-                      }
-                    ],
-                    "action-ref": [
-                      {
-                        "name": "allow1",
-                        "order": 0
-                      }
-                    ]
-                  }
-                  
-                ]
-              },
-----
-
-The chain action is triggered when HTTP is matched *in* to the provider of the contract, i.e. traffic with a TCP destination port of 80 (the HTTP request), and similarly
-*out* from the provider, i.e. traffic with a TCP source port of 80 (the HTTP response).
-
-----
-              {
-                "name": "http-subject",
-                "rule": [
-                  {
-                    "name": "http-chain-rule-in",
-                    "classifier-ref": [
-                      {
-                        "name": "http-dest",
-                        "direction": "in"
-                      }
-                    ],
-                    "action-ref": [
-                      {
-                        "name": "chain1",
-                        "order": 0
-                      }
-                    ]
-                  },
-                  {
-                    "name": "http-chain-rule-out",
-                    "classifier-ref": [
-                      {
-                        "name": "http-src",
-                        "direction": "out"
-                      }
-                    ],
-                    "action-ref": [
-                      {
-                        "name": "chain1",
-                        "order": 0
-                      }
-                    ]
-                  }
-                ]
-              }
-----
-
-To enable asymmetric chaining, for instance when the user desires that HTTP requests traverse the chain but HTTP responses do not, the action for the HTTP response is set to _allow_ instead of the chain:
-
-----
-
-                  {
-                    "name": "http-chain-rule-out",
-                    "classifier-ref": [
-                      {
-                        "name": "http-src",
-                        "direction": "out"
-                      }
-                    ],
-                    "action-ref": [
-                      {
-                        "name": "allow1",
-                        "order": 0
-                      }
-                    ]
-                  }
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-ui-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/groupbasedpolicy/odl-groupbasedpolicy-ui-user-guide.adoc
deleted file mode 100644 (file)
index fb09b47..0000000
+++ /dev/null
@@ -1,282 +0,0 @@
-==== Overview
-
-The following components make up this application and are described in more detail in the following sections:
-
-* Basic view
-* Governance view
-* Policy Expression view
-* Wizard view
-
-The *GBP* UX is accessed via:
-
- http://<odl controller>:8181/index.html
-
-==== Basic view
-
-The Basic view contains 5 navigation buttons which switch the user to the desired section of the application:
-
-* Governance – switches to the Governance view (the middle of the graphic has the same function)
-* Renderer configuration – switches to the Policy expression view with the Renderers section expanded
-* Policy expression – switches to the Policy expression view with the Policy section expanded
-* Operational constraints – placeholder for development in a future release
-
-.Basic view
-image::groupbasedpolicy/ui-1-basicview.png[align="center",width=500]
-
-
-==== Governance view
-
-The Governance view consists of three columns.
-
-.Governance view
-image::groupbasedpolicy/ui-2-governanceview.png[align="center",width=500]
-
-*Governance view – Basic view – Left column*
-
-The left column contains the Health section, with Exception and Conflict buttons that have no functionality yet. This is a placeholder for development in future releases.
-
-*Governance view – Basic view – Middle column*
-
-The top half of this section contains a select box listing the tenants. Once a tenant is selected, all subsections of the application operate on and display data for that tenant.
-
-Below the select box are buttons which display the Expressed or Delivered policy of the Governance section. The bottom half of this section contains a select box listing the renderers. Currently only the <<OfOverlay,OfOverlay>> renderer is available.
-
-Below that select box is the Renderer configuration button, which switches the app to the Policy expression view with the Renderers section expanded for performing CRUD operations. The Renderer state button displays the Renderer state view.
-
-*Governance view – Basic view – Right column*
-
-At the bottom of the right section of the Governance view is the Home button, which switches the app to the Basic view.
-
-The top part contains a navigation menu with four main sections.
-
-The Policy expression button expands/collapses a submenu with the three main parts of Policy expression. By clicking the submenu buttons, the user is switched to the Policy expression view with the appropriate section expanded for performing CRUD operations.
-
-The Renderer configuration button switches the user to the Policy expression view.
-
-The Governance button expands/collapses a submenu with the four main parts of the Governance section. The submenu buttons of the Governance section display the appropriate section of the Governance view.
-
-Operational constraints has no functionality yet and is a placeholder for development in future releases.
-
-Below the menu is the view-info section, which displays information about the currently selected element of the topology (explained below).
-
-
-*Governance view – Expressed policy*
-
-This view displays the contracts, with their consumed and provided EndpointGroups, of the currently selected tenant, which can be changed in the select box in the upper left corner.
-
-Single-clicking any contract or EPG shows the data of the selected element in the right column below the menu. The Manage button launches a wizard window for managing the configuration of items such as <<SFC,Service Function Chaining>>.
-
-
-.Expressed policy
-image::groupbasedpolicy/ui-3-governanceview-expressed.png[align="center",width=500]
-
-
-*Governance view – Delivered policy*
-
-This view displays the subjects, with their consumed and provided EndpointGroups, of the currently selected tenant, which can be changed in the select box in the upper left corner.
-
-Single-clicking any subject or EPG shows the data of the selected element in the right column below the menu.
-
-Double-clicking a subject displays the subject detail view with the rules of the selected subject, which can be changed in the select box in the upper left corner.
-
-Single-clicking a rule or subject shows the data of the selected element in the right column below the menu.
-
-Double-clicking an EPG in the Delivered policy view displays the EPG detail view with the endpoints of the selected EPG, which can be changed in the select box in the upper left corner.
-
-Single-clicking an EPG or endpoint shows the data of the selected element in the right column below the menu.
-
-
-.Delivered policy
-image::groupbasedpolicy/ui-4-governanceview-delivered-0.png[align="center",width=500]
-
-
-
-.Subject detail
-image::groupbasedpolicy/ui-4-governanceview-delivered-1-subject.png[align="center",width=500]
-
-
-.EPG detail
-image::groupbasedpolicy/ui-4-governanceview-delivered-2-epg.png[align="center",width=500]
-
-*Governance view – Renderer state*
-
-This part displays the Subject feature definition data, with two main parts: Action definition and Classifier definition.
-
-Clicking the down/right arrow in the circle expands/hides the data of the corresponding container or list. Next to a list node, the names of the list's elements are displayed; one is always selected and that element's data are shown (blue line under the name).
-
-Clicking the names of child nodes selects the desired node and displays its data.
-
-
-.Renderer state
-image::groupbasedpolicy/ui-4-governanceview-renderer.png[align="center",width=500]
-
-==== Policy expression view
-
-The left part of this view shows the topology of the currently selected elements, with buttons for switching between the types of topology at the bottom.
-
-The right column of this view contains four parts. At the top of the column, breadcrumbs show the current position in the application.
-
-Below the breadcrumbs is a select box listing the tenants. The middle part contains the navigation menu, which allows switching to the desired section for performing CRUD operations.
-
-At the bottom is a quick navigation menu with the Access Model Wizard button, which displays the Wizard view; the Home button, which switches the application to the Basic view; and occasionally a Back button, which switches the application to the upper section.
-
-*Policy expression  - Navigation menu*
-
-To open Policy expression, select Policy expression from the GBP Home screen.
-
-At the top of the navigation box you can select a tenant from the tenants list to activate the features associated with the selected tenant.
-
-In the right menu, by default, the Policy menu section is expanded. The subitems of this section are modules for CRUD (creating, reading, updating and deleting) of tenants, EndpointGroups, contracts, and L2/L3 objects.
-
-* Section Renderers contains CRUD forms for Classifiers and Actions.
-* Section Endpoints contains CRUD forms for Endpoint and L3 prefix endpoint.
-
-.Navigation menu
-image::groupbasedpolicy/ui-5-expresssion-1.png[height=400]
-
-.CRUD operations
-image::groupbasedpolicy/ui-5-expresssion-2.png[height=400]
-
-
-*Policy expression - Types of topology*
-
-There are three different types of topology:
-
-* Configured topology – displays EndpointGroups and the contracts between them from the CONFIG datastore
-* Operational topology – displays the same information, but based on operational data
-* L2/L3 – displays the relationships between L3Contexts, L2 Bridge domains, L2 Flood domains and Subnets
-
-
-.L2/L3 Topology
-image::groupbasedpolicy/ui-5-expresssion-3.png[align="center",width=500]
-
-
-.Config Topology
-image::groupbasedpolicy/ui-5-expresssion-4.png[align="center",width=500]
-
-
-*Policy expression - CRUD operations*
-
-This part describes the basic flows for viewing, adding, editing and deleting system elements such as tenants, EndpointGroups, etc.
-
-*Tenants*
-
-To edit tenant objects, click the Tenants button in the right menu. A CRUD form containing the tenants list and control buttons is displayed.
-
-To add a new tenant, click the Add button. This will display the form for adding a new tenant. After filling in the tenant attributes Name and Description, click the Save button. Saving of any object can be performed only if all of the object's attributes are filled in correctly. If some attribute does not have a correct value, an exclamation mark with a mouse-over tooltip is displayed next to the label for that attribute. After the tenant is saved, the form is closed and the tenants list is reset to its default value.
-
-To view an existing tenant, select the tenant from the Tenants select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected tenant, click the Edit button, which displays the edit form for the selected tenant. After editing the Name and Description, click the Save button. After the tenant is saved, the edit form is closed and the tenants list is reset to its default value.
-
-To delete a tenant, select it from the Tenants list and click the Delete button.
-
-To return to Policy expression, click the Back button at the bottom of the window.
-
-*EndpointGroups*
-
-To manage EndpointGroups (EPGs), a tenant must be selected from the top Tenants list.
-
-To add a new EPG, click the Add button and, after filling in the required attributes, click the Save button. After adding the EPG you can edit it and assign a Consumer named selector or Provider named selector to it.
-
-To edit an EPG, select it from the Group list and click the Edit button.
-
-To add a new Consumer named selector (CNS), click the Add button next to the Consumer named selectors list. While editing the CNS you can set one or more contracts for it by pressing the Plus button and selecting a contract from the Contracts list. To remove a contract, click the cross mark next to it. An added CNS can be viewed, edited or deleted by selecting it from the Consumer named selectors list and clicking the Edit and Delete buttons, as with EPGs or tenants.
-
-To add a new Provider named selector (PNS), click the Add button next to the Provider named selectors list. While editing the PNS you can set one or more contracts for it by pressing the Plus button and selecting a contract from the Contracts list. To remove a contract, click the cross mark next to it. An added PNS can be viewed, edited or deleted by selecting it from the Provider named selectors list and clicking the Edit and Delete buttons, as with EPGs or tenants.
-
-To delete an EPG, CNS or PNS, select it in the select box and click the Delete button next to the select box.
-
-*Contracts*
-
-To manage contracts, a tenant must be selected from the top Tenants list.
-
-To add a new Contract, click the Add button and, after filling in the required fields, click the Save button.
-
-After adding the Contract, the user can edit it by selecting it in the Contracts list and clicking the Edit button.
-
-To add a new Clause, click the Add button next to the Clause list while editing the contract. While editing the Clause, after selecting it from the Clause list, the user can assign clause subjects by clicking the Plus button next to the Clause subjects label. Adding and editing actions must be submitted by pressing the Save button. Subjects are managed with a CRUD form like that of the Clause list.
-
-*L2/L3*
-
-To manage L2/L3, a tenant must be selected from the top Tenants list.
-
-To add an L3 Context, click the Add button next to the L3 Context list, which displays the form for adding a new L3 Context. After filling in the L3 Context attributes, click the Save button. After saving, the form is closed and the L3 Context list is reset to its default value.
-
-To view an existing L3 Context, select it from the L3 Context select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected L3 Context, click the Edit button, which displays the edit form for the selected L3 Context. After editing, click the Save button. After saving, the edit form is closed and the L3 Context list is reset to its default value.
-
-To delete an L3 Context, select it from the L3 Context list and click the Delete button.
-
-To add an L2 Bridge Domain, click the Add button next to the L2 Bridge Domain list. This displays the form for adding a new L2 Bridge Domain. After filling in the L2 Bridge Domain attributes, click the Save button. After saving, the form is closed and the L2 Bridge Domain list is reset to its default value.
-
-To view an existing L2 Bridge Domain, select it from the L2 Bridge Domain select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected L2 Bridge Domain, click the Edit button, which displays the edit form for the selected L2 Bridge Domain. After editing, click the Save button. After saving, the edit form is closed and the L2 Bridge Domain list is reset to its default value.
-
-To delete an L2 Bridge Domain, select it from the L2 Bridge Domain list and click the Delete button.
-
-To add an L2 Flood Domain, click the Add button next to the L2 Flood Domain list. This displays the form for adding a new L2 Flood Domain. After filling in the L2 Flood Domain attributes, click the Save button. After saving, the form is closed and the L2 Flood Domain list is reset to its default value.
-
-To view an existing L2 Flood Domain, select it from the L2 Flood Domain select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected L2 Flood Domain, click the Edit button, which displays the edit form for the selected L2 Flood Domain. After editing, click the Save button. After saving, the edit form is closed and the L2 Flood Domain list is reset to its default value.
-
-To delete an L2 Flood Domain, select it from the L2 Flood Domain list and click the Delete button.
-
-To add a Subnet, click the Add button next to the Subnet list. This displays the form for adding a new Subnet. After filling in the Subnet attributes, click the Save button. After saving, the form is closed and the Subnet list is reset to its default value.
-
-To view an existing Subnet, select it from the Subnet select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected Subnet, click the Edit button, which displays the edit form for the selected Subnet. After editing, click the Save button. After saving, the edit form is closed and the Subnet list is reset to its default value.
-
-To delete a Subnet, select it from the Subnet list and click the Delete button.
-
-*Classifiers*
-
-To add a Classifier, click the Add button next to the Classifier list. This displays the form for adding a new Classifier. After filling in the Classifier attributes, click the Save button. After saving, the form is closed and the Classifier list is reset to its default value.
-
-To view an existing Classifier, select it from the Classifier select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected Classifier, click the Edit button, which displays the edit form for the selected Classifier. After editing, click the Save button. After saving, the edit form is closed and the Classifier list is reset to its default value.
-
-To delete a Classifier, select it from the Classifier list and click the Delete button.
-
-*Actions*
-
-To add an Action, click the Add button next to the Action list. This displays the form for adding a new Action. After filling in the Action attributes, click the Save button. After saving, the form is closed and the Action list is reset to its default value.
-
-To view an existing Action, select it from the Action select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected Action, click the Edit button, which displays the edit form for the selected Action. After editing, click the Save button. After saving, the edit form is closed and the Action list is reset to its default value.
-
-To delete an Action, select it from the Action list and click the Delete button.
-
-*Endpoint*
-
-To add an Endpoint, click the Add button next to the Endpoint list. This displays the form for adding a new Endpoint. To add an EndpointGroup assignment, click the Plus button next to the EndpointGroups label. To add a Condition, click the Plus button next to the Condition label. To add an L3 Address, click the Plus button next to the L3 Addresses label. After filling in the Endpoint attributes, click the Save button. After saving, the form is closed and the Endpoint list is reset to its default value.
-
-To view an existing Endpoint, just select it from the Endpoint select box. The view form is read-only and can be closed by clicking the cross mark in the top right of the form.
-
-To edit the selected Endpoint, click the Edit button, which displays the edit form for the selected Endpoint. After editing, click the Save button. After saving, the edit form is closed and the Endpoint list is reset to its default value.
-
-To delete an Endpoint, select it from the Endpoint list and click the Delete button.
-
-*L3 prefix endpoint*
-
-To add L3 prefix endpoint, click the Add button next to the L3 prefix endpoint list. This will display the form for adding a new Endpoint. To add EndpointGroup assignment, click the Plus button next to the label EndpointGroups. To add Condition, click Plus button next to the label Condition. To add L2 gateway click the Plus button next to the L2 gateways label.  To add L3 gateway, click the Plus button next to the L3 gateways label. After filling L3 prefix endpoint attributes click Save button. After saving of L3 prefix endpoint, form will be closed and the Endpoint list will be set to default value.
-
-To view an existing L3 prefix endpoint, select the Endpoint from the select box L3 prefix endpoint list. The view form is read-only and can be closed by clicking cross mark in the top right of the form.
-
-If you want to edit selected L3 prefix endpoint, click the Edit button, which will display the edit form for selected L3 prefix endpoint. After editing click the Save button to save selected L3 prefix endpoint. After saving of Endpoint the edit form will be closed and the Endpoint list will be set to default value.
-
-To delete Endpoint select it from the L3 prefix endpoint list and click Delete button.
-
-==== Wizard
-
-The wizard provides a quick way to send to the controller the basic data necessary for basic usage of the GBP application. It is useful when there is not yet any data in the controller. The first tab contains a form for creating a tenant. The second tab is for CRUD operations on contracts and their sub-elements, such as subjects, rules, clauses, action refs and classifier refs. The last tab is for CRUD operations on EndpointGroups and their CNS and PNS. The created data structure can be sent by clicking the Submit button.
-
-
-.Wizard
-image::groupbasedpolicy/ui-6-wizard.png[align="center",width=500]
-
index 9d79722a638fc458e93384db93ea8bc4cedf1601..384a26d263527fb230d881e15ed209fa5487f17f 100644 (file)
@@ -1,487 +1,3 @@
 == Group Based Policy User Guide
 
-=== Overview
-OpenDaylight Group Based Policy allows users to express network configuration in a declarative versus imperative way.
-
-This is often described as asking for *"what you want"*, rather than *"how to do it"*.
-
-In order to achieve this Group Based Policy (herein referred to as *GBP*) is an implementation of an *Intent System*.
-
-An *Intent System*:
-
-* is a process around an intent driven data model
-* contains no domain specifics
-* is capable of addressing multiple semantic definitions of intent
-
-To this end, *GBP* Policy views an *Intent System* visually as:
-
-.Intent System Process and Policy Surfaces
-image::groupbasedpolicy/IntentSystemPolicySurfaces.png[align="center",width=500]
-
-* *expressed intent* is the entry point into the system.
-* *operational constraints* provide policy for the usage of the system which modulates how the system is consumed. For instance _"All Financial applications must use a specific encryption standard"_.
-* *capabilities and state* are provided by _renderers_. _Renderers_ dynamically provide their capabilities to the core model, allowing the core model to remain non-domain specific.
-* *governance* provides feedback on the delivery of the _expressed intent_. i.e. _"Did we do what you asked us?"_
-
-In summary *GBP is about the Automation of Intent*.
-
-By thinking of *Intent Systems* in this way, it enables:
-
-* *automation of intent*
-+
-By focusing on *Model. Process. Automation*, a consistent policy resolution process enables for mapping between the *expressed intent* and renderers responsible for providing the capabilities of implementing that intent.
-
-* recursive/intent level-independent behaviour.
-+
-Where _one person's concrete is another's abstract_, intent can be fulfilled through a hierarchical implementation of non-domain specific policy resolution. Domain specifics are provided by the _renderers_, and exposed via the API, at each policy resolution instance.
-For example:
-
-** To DNS: The name "www.foo.com" is _abstract_, and its IPv4 address 10.0.0.10 is _concrete_,
-** To an IP stack: 10.0.0.10 is _abstract_ and the MAC 08:05:04:03:02:01 is _concrete_,
-** To an Ethernet switch: The MAC 08:05:04:03:02:01 is _abstract_, the resolution to a port in its CAM table is _concrete_,
-** To an optical network: The port may be _abstract_, yet the optical wavelength is _concrete_.
-
-NOTE: _This is a very domain specific analogy, tied to something most readers will understand. It in no way implies the *GBP* should be implemented in an OSI type fashion.
-The premise is that by implementing a full *Intent System*, the user is freed from a lot of the constraints of how the expressed intent is realised._
-
-It is important to show the overall philosophy of *GBP* as it sets the project's direction.
-
-In the Beryllium release of OpenDaylight, *GBP* focused on *expressed intent* and on *refactoring how renderers consume and publish Subject Feature Definitions for multi-renderer support*.
-
-=== GBP Base Architecture and Value Proposition
-==== Terminology
-In order to explain the fundamental value proposition of *GBP*, an illustrated example is given. In order to do that some terminology must be defined.
-
-The Access Model is the core of the *GBP* Intent System policy resolution process.
-
-.GBP Access Model Terminology - Endpoints, EndpointGroups, Contract
-image::groupbasedpolicy/GBPTerminology1.png[align="center",width=500]
-
-.GBP Access Model Terminology - Subject, Classifier, Action
-image::groupbasedpolicy/GBPTerminology2.png[align="center",width=500]
-
-.GBP Forwarding Model Terminology - L3 Context, L2 Bridge Context, L2 Flood Context/Domain, Subnet
-image::groupbasedpolicy/GBPTerminology3.png[align="center",width=500]
-
-* Endpoints:
-+
-Define concrete uniquely identifiable entities. In Beryllium, examples could be a Docker container or a Neutron port
-* EndpointGroups:
-+
-EndpointGroups are sets of endpoints that share a common set of policies. EndpointGroups can participate in contracts that determine the kinds of communication that are allowed. EndpointGroups _consume_ and _provide_ contracts.
-They also expose both _requirements and capabilities_, which are labels that help to determine how contracts will be applied. An EndpointGroup can specify a parent EndpointGroup from which it inherits.
-
-* Contracts:
-+
-Contracts determine which endpoints can communicate and in what way. Contracts between pairs of EndpointGroups are selected by the contract selectors defined by the EndpointGroup.
-Contracts expose qualities, which are labels that can help EndpointGroups to select contracts. Once the contract is selected,
-contracts have clauses that can match against requirements and capabilities exposed by EndpointGroups, as well as any conditions
-that may be set on endpoints, in order to activate subjects that can allow specific kinds of communication. A contract is allowed to specify a parent contract from which it inherits.
-
-* Subjects:
-+
-Subjects describe some aspect of how two endpoints are allowed to communicate. Subjects define an ordered list of rules that will match against the traffic and perform any necessary actions on that traffic.
-No communication is allowed unless a subject allows that communication.
-
-* Clauses:
-+
-Clauses are defined as part of a contract. Clauses determine how a contract should be applied to particular endpoints and EndpointGroups. Clauses can match against requirements and capabilities exposed by EndpointGroups,
-as well as any conditions that may be set on endpoints. Matching clauses define some set of subjects which can be applied to the communication between the pairs of endpoints.
-
-==== Architecture and Value Proposition
-
-*GBP* offers an intent based interface, accessed via the <<UX,UX>>, via the <<REST,REST API>> or directly from a domain-specific-language such as <<Neutron,Neutron>> through a mapping interface.
-
-There are two models in *GBP*:
-
-* the access (or core) model
-* the forwarding model
-
-.GBP Access (or Core) Model
-image::groupbasedpolicy/GBP_AccessModel_simple.png[align="center",width=500]
-
-The _classifier_ and _action_ portions of the model can be thought of as hooks, with their definition provided by each _renderer_ about its domain specific capabilities. In *GBP* Beryllium, there is one renderer,
-the _<<OfOverlay,OpenFlow Overlay renderer (OfOverlay).>>_
-
-These hooks are filled with _definitions_ of the types of _features_ the renderer can provide the _subject_, and are called *subject-feature-definitions*.
-
-This means an _expressed intent_ can be fulfilled by, and across, multiple renderers simultaneously, without any specific provisioning from the consumer of *GBP*.
-
-[[forwarding]]
-Since *GBP* is implemented in OpenDaylight, which is an SDN controller, it also must address networking. This is done via the _forwarding model_, which is domain specific to networking, but could be applied to many different _types_ of networking.
-
-.GBP Forwarding Model
-image::groupbasedpolicy/GBP_ForwardingModel_simple.png[align="center",width=500]
-
-Each endpoint is provisioned with a _network-containment_. This can be a:
-
-* subnet
-+
-** normal IP stack behaviour, where ARP is performed within the subnet, and out-of-subnet traffic is sent to the default gateway.
-** a subnet can be a child of any of the below forwarding model contexts, but typically would be a child of a flood-domain
-* L2 flood-domain
-** allows flooding behaviour.
-** is an n:1 child of a bridge-domain
-** can have multiple children
-* L2 bridge-domain
-** is a layer2 namespace
-** is the realm where traffic can be sent at layer 2
-** is an n:1 child of an L3 context
-** can have multiple children
-* L3 context
-** is a layer3 namespace
-** is the realm where traffic is passed at layer 3
-** is an n:1 child of a tenant
-** can have multiple children
-
-A simple example of how the access and forwarding models work is as follows:
-
-.GBP Endpoints, EndpointGroups and Contracts
-image::groupbasedpolicy/GBP_Endpoint_EPG_Contract.png[align="center",width=300]
-
-In this example, the *EPG:webservers* is _providing_ the _web_ and _ssh_ contracts. The *EPG:client* is consuming those contracts. *EPG:client* is providing the _any_ contract, which is consumed by *EPG:webservers*.
-
-The _direction_ keyword is always from the perspective of the _provider_ of the contract. In this case contract _web_, being _provided_ by *EPG:webservers*, with the classifier to match TCP destination port 80, means:
-
-* packets with a TCP destination port of 80
-* sent to (_in_) endpoints in the *EPG:webservers*
-* will be _allowed_.
-
-.GBP Endpoints and the Forwarding Model
-image::groupbasedpolicy/GBP_Endpoint_EPG_Forwarding.png[align="center",width=300]
-
-When the forwarding model is considered in the figure above, it can be seen that even though all endpoints are communicating using a common set of contracts,
-their forwarding is _contained_ by the forwarding model contexts or namespaces.
-In the example shown, the endpoints associated with a _network-containment_ that has an ultimate parent of _L3Context:Sales_ can only communicate with other endpoints within this L3Context.
-In this way L3VPN services can be implemented without any impact to the *Intent* of the contract.
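The containment hierarchy can be illustrated with a small sketch. This is a hypothetical model only, not GBP code; the context names ("sales", "hr") are invented for the example:

```python
# Illustrative model of network-containment: walk an endpoint's containment
# chain (subnet -> flood-domain -> bridge-domain -> L3 context) up to its
# ultimate L3 context. Endpoints whose chains end in different L3 contexts
# are in different namespaces and cannot reach each other.

PARENT = {
    "subnet-sales": "flood-sales",
    "flood-sales": "bridge-sales",
    "bridge-sales": "l3-sales",
    "subnet-hr": "flood-hr",
    "flood-hr": "bridge-hr",
    "bridge-hr": "l3-hr",
}

def l3_context(containment):
    # Follow parent links until we reach a node with no parent (the L3 context).
    while containment in PARENT:
        containment = PARENT[containment]
    return containment

def same_l3(containment_a, containment_b):
    return l3_context(containment_a) == l3_context(containment_b)

print(same_l3("subnet-sales", "flood-sales"))  # True: both resolve to l3-sales
print(same_l3("subnet-sales", "subnet-hr"))    # False: different L3 contexts
```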
-
-===== High-level implementation Architecture
-
-The overall architecture, including _<<Neutron,Neutron>>_ domain specific mapping, and the <<OfOverlay,OpenFlow Overlay renderer>> looks as so:
-
-.GBP High Level Beryllium Architecture
-image::groupbasedpolicy/GBP_High-levelBerylliumArchitecture.png[align="center",width=300]
-
-The major benefit of this architecture is that the mapping of the domain-specific-language is completely separate and independent of the underlying renderer implementation.
-
-For instance, using the <<Neutron,Neutron Mapper>>, which maps the Neutron API to the *GBP* core model, any contract automatically generated from this mapping can be augmented via the <<UX,UX>>
-to use <<SFC,Service Function Chaining>>, a capability not currently available in OpenStack Neutron.
-
-When another renderer is added, for instance, NetConf, the same policy can now be leveraged across NetConf devices simultaneously:
-
-.GBP High Level Beryllium Architecture - adding a renderer
-image::groupbasedpolicy/GBP_High-levelExtraRenderer.png[align="center",width=300]
-
-As other domain-specific mappings occur, they too can leverage the same renderers, as the renderers only need to implement the *GBP* access and forwarding models, and the domain-specific mapping need only manage mapping to the access and forwarding models. For instance:
-
-.GBP High Level Beryllium Architecture - adding a renderer
-image::groupbasedpolicy/High-levelBerylliumArchitectureEvolution2.png[align="center",width=300]
-
-In summary, the *GBP* architecture:
-
-* separates concerns: the Expressed Intent is kept completely separated from the underlying renderers.
-* is cohesive: each part does its part and its part only
-* is scalable: code can be optimised around model mapping/implementation, and functionality re-used
-
-==== Policy Resolution [[policyresolution]]
-
-===== Contract Selection
-
-The first step in policy resolution is to select the contracts that are in scope.
-
-EndpointGroups participate in contracts either as a _provider_ or as a _consumer_ of a contract. Each EndpointGroup can participate in many contracts at the same time, but for each contract it can be in only one role at a time.
-In addition, there are two ways for an EndpointGroup to select a contract, either with a:
-
-* _named selector_
-+
-Named selectors simply select a specific contract by its contract ID.
-* _target selector_
-+
-Target selectors allow for additional flexibility by matching against _qualities_ of the contract's _target._
-
-Thus, there are a total of 4 kinds of contract selector:
-
-* provider named selector
-+
-Select a contract by contract ID, and participate as a provider.
-
-* provider target selector
-+
-Match against a contract's target with a quality matcher, and participate as a provider.
-
-* consumer named selector
-+
-Select a contract by contract ID, and participate as a consumer.
-
-* consumer target selector
-+
-Match against a contract's target with a quality matcher, and participate as a consumer.
-
-To determine which contracts are in scope, contracts are found where the source EndpointGroup selects a contract as either a provider or a consumer,
-while the destination EndpointGroup matches against the same contract in the corresponding role.  So if endpoint _x_ in EndpointGroup _X_ is communicating with endpoint _y_
-in EndpointGroup _Y_, a contract _C_ is in scope if either _X_ selects _C_ as a provider and _Y_ selects _C_ as a consumer, or vice versa.
-
-The details of how quality matchers work are described further in <<Matchers,Matchers>>.
-Quality matchers provide a flexible mechanism for contract selection based on labels.
-
-The end result of the contract selection phase can be thought of as a set of tuples representing selected contract scopes.  The fields of the tuple are:
-
-* Contract ID
-* The provider EndpointGroup ID
-* The name of the selector in the provider EndpointGroup that was used to select the contract, called the _matching provider selector._
-* The consumer EndpointGroup ID
-* The name of the selector in the consumer EndpointGroup that was used to select the contract, called the _matching consumer selector._
-
-The result is then stored in the datastore under *Resolved Policy*.
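The scope tuples above can be sketched in Python. This is an illustrative model under stated assumptions, not the OpenDaylight implementation: the dictionary keys (`provider-named-selectors`, `consumer-named-selectors`) and group names are invented here, and only named selectors are modeled (target selectors would additionally match qualities):

```python
# Hypothetical model of the contract selection phase: a contract C is in
# scope when one EndpointGroup selects C as a provider and another selects
# C as a consumer. The result is a set of scope tuples:
# (contract ID, provider EPG, matching provider selector,
#  consumer EPG, matching consumer selector).

def select_contract_scopes(groups):
    scopes = set()
    for pid, provider in groups.items():
        for cid, consumer in groups.items():
            if pid == cid:
                continue
            # Named selectors map a selector name to a contract ID.
            for psel, pc in provider.get("provider-named-selectors", {}).items():
                for csel, cc in consumer.get("consumer-named-selectors", {}).items():
                    if pc == cc:
                        scopes.add((pc, pid, psel, cid, csel))
    return scopes

groups = {
    "webservers": {"provider-named-selectors": {"web-provider": "web"}},
    "clients": {"consumer-named-selectors": {"web-consumer": "web"}},
}
print(select_contract_scopes(groups))
# {('web', 'webservers', 'web-provider', 'clients', 'web-consumer')}
```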
-
-===== Subject Selection
-
-The second phase in policy resolution is to determine which subjects are in scope.
-The subjects define what kinds of communication are allowed between endpoints in the EndpointGroups.
-For each of the selected contract scopes from the contract selection phase, the subject selection procedure is applied.
-
-Labels called _capabilities_, _requirements_ and _conditions_ are matched against to bring a Subject _into scope_.
-EndpointGroups have capabilities and requirements, while endpoints have conditions.
-
-===== Requirements and Capabilities
-
-When acting as a _provider_, EndpointGroups expose _capabilities,_ which are labels representing specific pieces of functionality that can be exposed to other
-EndpointGroups that may meet functional requirements of those EndpointGroups.
-
-When acting as a _consumer_, EndpointGroups expose _requirements_, which are labels that represent that the EndpointGroup requires some specific piece of functionality.
-
-As an example, we might create a capability called "user-database" which indicates that an EndpointGroup contains endpoints that implement a database of users.
-
-We might create a requirement also called "user-database" to indicate an EndpointGroup contains endpoints that will need to communicate with the endpoints that expose this service.
-
-Note that in this example the requirement and capability have the same name, but the user need not follow this convention.
-
-The matching provider selector (that was used by the provider EndpointGroup to select the contract) is examined to determine the capabilities exposed by the provider EndpointGroup for this contract scope.
-
-The provider selector will have a list of capabilities either directly included in the provider selector or inherited from a parent selector or parent EndpointGroup. (See <<Inheritance,Inheritance>>).
-
-Similarly, the matching consumer selector will expose a set of requirements.
-
-===== Conditions
-
-Endpoints can have _conditions_, which are labels representing some relevant piece of operational state related to the endpoint.
-
-An example of a condition might be "malware-detected," or "authentication-succeeded."  Conditions are used to affect how that particular endpoint can communicate.
-
-To continue with our example, the "malware-detected" condition might cause an endpoint's connectivity to be cut off, while "authentication-succeeded" might open up communication with services
-that require an endpoint to be first authenticated and then forward its authentication credentials.
-
-===== Clauses
-
-Clauses perform the actual selection of subjects.
-A clause has lists of matchers in two categories. In order for a clause to become active, all lists of matchers must match.
-A matching clause will select all the subjects referenced by the clause.
-Note that an empty list of matchers counts as a match.
-
-The first category is the consumer matchers, which match against the consumer EndpointGroup and endpoints.  The consumer matchers are:
-
-* Group Identification Constraint: Requirement matchers
-+
-Matches against requirements in the matching consumer selector.
-
-* Group Identification Constraint: GroupName
-+
-Matches against the group name
-
-* Consumer condition matchers
-+
-Matches against conditions on endpoints in the consumer EndpointGroup
-
-* Consumer Endpoint Identification Constraint
-+
-Label based criteria for matching against endpoints. In Beryllium this can be used to label endpoints based on IpPrefix.
-
-The second category is the provider matchers, which match against the provider EndpointGroup and endpoints.  The provider matchers are:
-
-* Group Identification Constraint: Capability matchers
-+
-Matches against capabilities in the matching provider selector.
-
-* Group Identification Constraint: GroupName
-+
-Matches against the group name
-
-* Provider condition matchers
-+
-Matches against conditions on endpoints in the provider EndpointGroup
-
-* Provider Endpoint Identification Constraint
-+
-Label based criteria for matching against endpoints. In Beryllium this can be used to label endpoints based on IpPrefix.
-
-Clauses have a list of subjects that apply when all the matchers in the clause match.  The output of the subject selection phase logically is a set of subjects that are in scope for any particular pair of endpoints.
-
-===== Rule Application
-
-Now that subjects have been selected that apply to the traffic between a particular set of endpoints, policy can be applied to allow those endpoints to communicate.
-The applicable subjects from the previous step will each contain a set of rules.
-
-Rules consist of a set of _classifiers_ and a set of _actions_.  Classifiers match against traffic between two endpoints.
-An example of a classifier would be something that matches against all TCP traffic on port 80, or one that matches against HTTP traffic containing a particular cookie.
-Actions are specific actions that need to be taken on the traffic before it reaches its destination.
-Actions could include tagging or encapsulating the traffic in some way, redirecting the traffic, or applying a <<SFC,service function chain>>.
-
-Rules, subjects, and actions have an _order_ parameter, where a lower order value means that a particular item will be applied first.
-All rules from a particular subject will be applied before the rules of any other subject, and all actions from a particular rule will be applied before the actions from another rule.
-If more than one item has the same order parameter, ties are broken with a lexicographic ordering of their names, with earlier names having logically lower order.
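The ordering rule can be sketched as follows (an illustrative model only; the rule names here are invented, and real rules are YANG-modeled objects rather than Python dictionaries):

```python
# Sketch of GBP ordering semantics: a lower `order` value applies first,
# and ties are broken lexicographically by name. Python's sorted() with a
# (order, name) key expresses exactly this.

def apply_order(items):
    return sorted(items, key=lambda item: (item["order"], item["name"]))

rules = [
    {"name": "allow-ssh", "order": 10},
    {"name": "allow-http", "order": 10},  # ties with allow-ssh on order
    {"name": "chain-first", "order": 1},
]
print([r["name"] for r in apply_order(rules)])
# ['chain-first', 'allow-http', 'allow-ssh']
```

`chain-first` runs first because of its lower order; `allow-http` precedes `allow-ssh` because "allow-http" sorts lexicographically earlier.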
-
-====== Matchers [[Matchers]]
-
-Matchers specify a set of labels (which include requirements, capabilities, conditions, and qualities) to match against.
-There are several kinds of matchers that operate similarly:
-
-* Quality matchers
-+
-used in target selectors during the contract selection phase.  Quality matchers provide a more advanced and flexible way to select contracts compared to a named selector.
-
-* Requirement and capability matchers
-+
-used in clauses during the subject selection phase to match against requirements and capabilities on EndpointGroups
-
-* Condition matchers
-+
-used in clauses during the subject selection phase to match against conditions on endpoints
-
-A matcher is, at its heart, fairly simple.  It will contain a list of label names, along with a _match type_.
-The match type can be either:
-
-* "all"
-+
-which means the matcher matches when all of its labels match
-
-* "any"
-+
-which means the matcher matches when any of its labels match
-* "none"
-+
-which means the matcher matches when none of its labels match.
-
-Note a _match all_ matcher can be made by matching against an empty set of labels with a match type of "all."
-
-Additionally each label to match can optionally include a relevant name field.  For quality matchers, this is a target name.
-For capability and requirement matchers, this is a selector name.  If the name field is specified, then the matcher will only match against targets or selectors with that name, rather than any targets or selectors.
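The three match types can be sketched as follows. This is illustrative only; real matchers operate on YANG-modeled label objects (with optional name fields) rather than plain Python strings:

```python
# Sketch of matcher match-type semantics: "all", "any" and "none" over a
# set of exposed labels (requirements, capabilities, conditions, qualities).

def matches(matcher_labels, match_type, exposed_labels):
    hits = [label in exposed_labels for label in matcher_labels]
    if match_type == "all":
        return all(hits)       # all([]) is True: an empty "all" matcher matches everything
    if match_type == "any":
        return any(hits)
    if match_type == "none":
        return not any(hits)
    raise ValueError("unknown match type: %s" % match_type)

exposed = {"user-database", "authentication-succeeded"}
print(matches(["user-database"], "all", exposed))      # True
print(matches([], "all", exposed))                     # True: the "match all" matcher
print(matches(["malware-detected"], "none", exposed))  # True
print(matches(["malware-detected"], "any", exposed))   # False
```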
-
-===== Inheritance [[Inheritance]]
-
-Some objects in the system include references to parents, from which they will inherit definitions.
-The graph of parent references must be loop free. When resolving names, the resolution system must detect loops and raise an exception.
-Objects that are part of these loops may be considered as though they are not defined at all.
-Generally, inheritance works by simply importing the objects in the parent into the child object. When there are objects with the same name in the child object,
-then the child object will override the parent object according to rules which are specific to the type of object. We'll next explore the detailed rules for inheritance for each type of object.
-
-*EndpointGroups*
-
-EndpointGroups will inherit all their selectors from their parent EndpointGroups. Selectors with the same names as selectors in the parent EndpointGroups will inherit their behavior as defined below.
-
-*Selectors*
-
-Selectors include provider named selectors, provider target selectors, consumer named selectors, and consumer target selectors. Selectors cannot themselves have parent selectors, but when selectors have the same name as a selector of the same type in the parent EndpointGroup, then they will inherit from and override the behavior of the selector in the parent EndpointGroup.
-
-*Named Selectors*
-
-Named selectors will add to the set of contract IDs that are selected by the parent named selector.
-
-*Target Selectors*
-
-A target selector in the child EndpointGroup with the same name as a target selector in the parent EndpointGroup will inherit quality matchers from the parent. If a quality matcher in the child has the same name as a quality matcher in the parent, then it will inherit as described below under Matchers.
-
-*Contracts*
-
-Contracts will inherit all their targets, clauses and subjects from their parent contracts. When any of these objects have the same name as in the parent contract, then the behavior will be as defined below.
-
-*Targets*
-
-Targets cannot themselves have a parent target, but they may inherit from targets with the same name in a parent contract. Qualities in the target will be inherited from the parent. If a quality with the same name is defined in the child, then this does not have any semantic effect except if the quality has its inclusion-rule parameter set to "exclude." In this case, the quality should be ignored for the purpose of matching against this target.
-
-*Subjects*
-
-Subjects cannot themselves have a parent subject, but they may inherit from a subject with the same name as the subject in a parent contract.
-The order parameter in the child subject, if present, will override the order parameter in the parent subject.
-The rules in the parent subject will be added to the rules in the child subject. However, the rules will not override rules of the same name. Instead, all rules in the parent subject will be considered to run with a higher order than all rules in the child; that is all rules in the child will run before any rules in the parent. This has the effect of overriding any rules in the parent without the potentially-problematic semantics of merging the ordering.
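A sketch of this merge behaviour (illustrative only; the rule names are invented, and this mirrors the ordering convention described earlier, with lexicographic tie-breaks on name):

```python
# Sketch of subject rule inheritance: the child's rules all run before the
# parent's rules, as if every parent rule had a higher order than every
# child rule. This overrides the parent without merging the two order spaces.

def inherit_rules(parent_rules, child_rules):
    key = lambda r: (r["order"], r["name"])          # lower order first, then name
    ordered = sorted(child_rules, key=key) + sorted(parent_rules, key=key)
    return [r["name"] for r in ordered]

parent = [{"name": "parent-allow", "order": 0}]
child = [{"name": "child-deny", "order": 5}]
print(inherit_rules(parent, child))
# ['child-deny', 'parent-allow'] -- child runs first despite its higher order
```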
-
-*Clauses*
-
-Clauses cannot themselves have a parent clause, but they may inherit from a clause with the same name as the clause in a parent contract.
-The list of subject references in the parent clause will be added to the list of subject references in the child clause. This is just a union operation.
-A subject reference that refers to a subject name in the parent contract might have that name overridden in the child contract.
-Each of the matchers in the clause are also inherited by the child clause.
-Matchers in the child of the same name and type as a matcher from the parent will inherit from and override the parent matcher. See below under Matchers for more information.
-
-*Matchers*
-
-Matchers include quality matchers, condition matchers, requirement matchers, and capability matchers.
-Matchers cannot themselves have parent matchers, but when there is a matcher of the same name and type in the parent object,
-then the matcher in the child object will inherit and override the behavior of the matcher in the parent object.
-The match type, if specified in the child, overrides the value specified in the parent.
-Labels are also inherited from the parent object. If there is a label with the same name in the child object, this does not have any semantic effect except if the label has its inclusion-rule parameter set to "exclude."
-In this case, the label should be ignored for the purpose of matching. Otherwise, the label with the same name will completely override the label from the parent.
-
-=== Using the GBP UX interface [[UX]]
-
-include::odl-groupbasedpolicy-ui-user-guide.adoc[]
-
-=== Using the GBP API [[REST]]
-
-Please see:
-
-* <<OfOverlay,Using the GBP OpenFlow Overlay (OfOverlay) renderer>>
-* <<policyresolution, Policy Resolution>>
-* <<forwarding, Forwarding Model>>
-* <<demo, the *GBP* demo and development environments for tips>>
-
-It is recommended to use either:
-
-* <<Neutron, Neutron mapper>>
-* <<UX, the UX>>
-
-If the REST API must be used, and the above resources are not sufficient:
-
-* feature:install odl-dlux-yangui
-* browse to: http://<odl-controller>:8181/index.html and select YangUI from the left menu.
-
-to explore the various *GBP* REST options
-
-
-=== Using OpenStack with GBP [[Neutron]]
-
-
-include::odl-groupbasedpolicy-neutronmapper-user-guide.adoc[]
-
-
-=== Using the GBP OpenFlow Overlay (OfOverlay) renderer [[OfOverlay]]
-
-
-include::odl-groupbasedpolicy-ofoverlay-user-guide.adoc[]
-
-
-=== Using the GBP eBPF IO Visor Agent renderer [[IoVisor]]
-
-
-include::odl-groupbasedpolicy-iovisor-user-guide.adoc[]
-
-
-=== Using the GBP FaaS renderer [[FaaS]]
-
-
-include::odl-groupbasedpolicy-faas-user-guide.adoc[]
-
-
-=== Using Service Function Chaining (SFC) with GBP Neutron Mapper and OfOverlay [[SFC]]
-
-
-include::odl-groupbasedpolicy-sfc-user-guide.adoc[]
-
-
-=== Demo/Development environment [[demo]]
-
-The *GBP* project for Beryllium has two demo/development environments.
-
-* Docker based GBP and GBP+SFC integration Vagrant environment
-* DevStack based GBP+Neutron integration Vagrant environment
-
-https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)/Consumability/Demo[Demo @ GBP wiki]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/group-based-policy-user-guide.html
index 284e3422f33dbb427232115fce63c4652bf383de..22016cd7143e576fea3eeed8778c7c5c1049dab1 100644 (file)
@@ -1,218 +1,3 @@
 == L2Switch User Guide
 
-=== Overview
-The L2Switch project provides Layer2 switch functionality.
-
-=== L2Switch Architecture
-* Packet Handler
-  ** Decodes the packets coming to the controller and dispatches them appropriately
-* Loop Remover
-  ** Removes loops in the network
-* Arp Handler
-  ** Handles the decoded ARP packets
-* Address Tracker
-  ** Learns the Addresses (MAC and IP) of entities in the network
-* Host Tracker
-  ** Tracks the locations of hosts in the network
-* L2Switch Main
-  ** Installs flows on each switch based on network traffic
-
-=== Configuring L2Switch
-The sections below give details about the configuration settings for the components that can be configured.
-
-//The base distribution configuration files are located in distribution/base/target/distributions-l2switch-base-0.1.0-SNAPSHOT-osgipackage/opendaylight/configuration/initial
-
-//The karaf distribution configuration files are located in distribution/karaf/target/assembly/etc/opendaylight/karaf
-
-=== Configuring Loop Remover
-* 52-loopremover.xml
-  ** is-install-lldp-flow
-    *** "true" means a flow that sends all LLDP packets to the controller will be installed on each switch
-    *** "false" means this flow will not be installed
-  ** lldp-flow-table-id
-    *** The LLDP flow will be installed on the specified flow table of each switch
-    *** This field is only relevant when "is-install-lldp-flow" is set to "true"
-  ** lldp-flow-priority
-    *** The LLDP flow will be installed with the specified priority
-    *** This field is only relevant when "is-install-lldp-flow" is set to "true"
-  ** lldp-flow-idle-timeout
-    *** The LLDP flow will timeout (removed from the switch) if the flow doesn't forward a packet for _x_ seconds
-    *** This field is only relevant when "is-install-lldp-flow" is set to "true"
-  ** lldp-flow-hard-timeout
-    *** The LLDP flow will timeout (removed from the switch) after _x_ seconds, regardless of how many packets it is forwarding
-    *** This field is only relevant when "is-install-lldp-flow" is set to "true"
-  ** graph-refresh-delay
-    *** A graph of the network is maintained and gets updated as network elements go up/down (i.e. links go up/down and switches go up/down)
-    *** After a network element going up/down, it waits _graph-refresh-delay_ seconds before recomputing the graph
-    *** A higher value has the advantage of doing less graph updates, at the potential cost of losing some packets because the graph didn't update immediately.
-    *** A lower value has the advantage of handling network topology changes quicker, at the cost of doing more computation.
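As an illustration, the Loop Remover options described above could appear in 52-loopremover.xml as in the following hypothetical fragment. The wrapping config-subsystem module elements are elided, and the values shown are examples rather than shipped defaults:

```xml
<!-- Hypothetical fragment of 52-loopremover.xml; wrapper elements elided. -->
<is-install-lldp-flow>true</is-install-lldp-flow>
<lldp-flow-table-id>0</lldp-flow-table-id>
<lldp-flow-priority>100</lldp-flow-priority>
<lldp-flow-idle-timeout>0</lldp-flow-idle-timeout>  <!-- 0 = no idle expiry (OpenFlow convention) -->
<lldp-flow-hard-timeout>0</lldp-flow-hard-timeout>  <!-- 0 = no hard expiry -->
<graph-refresh-delay>10</graph-refresh-delay>       <!-- seconds, per the description above -->
```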
-
-=== Configuring Arp Handler
-* 54-arphandler.xml
-  ** is-proactive-flood-mode
-    *** "true" means that flood flows will be installed on each switch.  With this flood flow, each switch will flood a packet that doesn't match any other flows.
-      **** Advantage: Fewer packets are sent to the controller because those packets are flooded to the network.
-      **** Disadvantage: A lot of network traffic is generated.
-    *** "false" means the previously mentioned flood flows will not be installed.  Instead an ARP flow will be installed on each switch that sends all ARP packets to the controller.
-      **** Advantage: Less network traffic is generated.
-      **** Disadvantage: The controller handles more packets (ARP requests & replies) and the ARP process takes longer than if there were flood flows.
-  ** flood-flow-table-id
-    *** The flood flow will be installed on the specified flow table of each switch
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "true"
-  ** flood-flow-priority
-    *** The flood flow will be installed with the specified priority
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "true"
-  ** flood-flow-idle-timeout
-    *** The flood flow will time out (be removed from the switch) if the flow doesn't forward a packet for _x_ seconds
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "true"
-  ** flood-flow-hard-timeout
-    *** The flood flow will time out (be removed from the switch) after _x_ seconds, regardless of how many packets it is forwarding
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "true"
-  ** arp-flow-table-id
-    *** The ARP flow will be installed on the specified flow table of each switch
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "false"
-  ** arp-flow-priority
-    *** The ARP flow will be installed with the specified priority
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "false"
-  ** arp-flow-idle-timeout
-    *** The ARP flow will time out (be removed from the switch) if the flow doesn't forward a packet for _x_ seconds
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "false"
-  ** arp-flow-hard-timeout
-    *** The ARP flow will time out (be removed from the switch) after _arp-flow-hard-timeout_ seconds, regardless of how many packets it is forwarding
-    *** This field is only relevant when "is-proactive-flood-mode" is set to "false"
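For comparison, a hypothetical 54-arphandler.xml fragment with proactive flood mode disabled, so that only the ARP punt flow is installed (wrapper elements elided; values are illustrative):

```xml
<!-- Hypothetical fragment of 54-arphandler.xml; wrapper elements elided. -->
<is-proactive-flood-mode>false</is-proactive-flood-mode>
<!-- The flood-flow-* options are ignored in this mode. -->
<arp-flow-table-id>0</arp-flow-table-id>
<arp-flow-priority>1</arp-flow-priority>
<arp-flow-idle-timeout>600</arp-flow-idle-timeout>
<arp-flow-hard-timeout>0</arp-flow-hard-timeout>
```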
-
-=== Configuring Address Tracker
-* 56-addresstracker.xml
-  ** timestamp-update-interval
-    *** A last-seen timestamp is associated with each address.  This last-seen timestamp will only be updated after _timestamp-update-interval_ milliseconds.
-    *** A higher value has the advantage of performing fewer writes to the database.
-    *** A lower value has the advantage of knowing how fresh an address is.
-  ** observe-addresses-from
-    *** IP and MAC addresses can be observed/learned from ARP, IPv4, and IPv6 packets.  Set which packets to make these observations from.
-
-=== Configuring L2Switch Main
-* 58-l2switchmain.xml
-  ** is-install-dropall-flow
-    *** "true" means a drop-all flow will be installed on each switch, so the default action will be to drop a packet instead of sending it to the controller
-    *** "false" means this flow will not be installed
-  ** dropall-flow-table-id
-    *** The dropall flow will be installed on the specified flow table of each switch
-    *** This field is only relevant when "is-install-dropall-flow" is set to "true"
-  ** dropall-flow-priority
-    *** The dropall flow will be installed with the specified priority
-    *** This field is only relevant when "is-install-dropall-flow" is set to "true"
-  ** dropall-flow-idle-timeout
-    *** The dropall flow will time out (be removed from the switch) if the flow doesn't forward a packet for _x_ seconds
-    *** This field is only relevant when "is-install-dropall-flow" is set to "true"
-  ** dropall-flow-hard-timeout
-    *** The dropall flow will time out (be removed from the switch) after _x_ seconds, regardless of how many packets it is forwarding
-    *** This field is only relevant when "is-install-dropall-flow" is set to "true"
-  ** is-learning-only-mode
-    *** "true" means that the L2Switch will only be learning addresses.  No additional flows to optimize network traffic will be installed.
-    *** "false" means that the L2Switch will react to network traffic and install flows on the switches to optimize traffic.  Currently, MAC-to-MAC flows are installed.
-  ** reactive-flow-table-id
-    *** The reactive flow will be installed on the specified flow table of each switch
-    *** This field is only relevant when "is-learning-only-mode" is set to "false"
-  ** reactive-flow-priority
-    *** The reactive flow will be installed with the specified priority
-    *** This field is only relevant when "is-learning-only-mode" is set to "false"
-  ** reactive-flow-idle-timeout
-    *** The reactive flow will time out (be removed from the switch) if the flow doesn't forward a packet for _x_ seconds
-    *** This field is only relevant when "is-learning-only-mode" is set to "false"
-  ** reactive-flow-hard-timeout
-    *** The reactive flow will time out (be removed from the switch) after _x_ seconds, regardless of how many packets it is forwarding
-    *** This field is only relevant when "is-learning-only-mode" is set to "false"
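Likewise, a hypothetical 58-l2switchmain.xml fragment that installs the drop-all flow and enables reactive MAC-to-MAC flows (wrapper elements elided; values are illustrative, not shipped defaults):

```xml
<!-- Hypothetical fragment of 58-l2switchmain.xml; wrapper elements elided. -->
<is-install-dropall-flow>true</is-install-dropall-flow>
<dropall-flow-table-id>0</dropall-flow-table-id>
<dropall-flow-priority>0</dropall-flow-priority>
<is-learning-only-mode>false</is-learning-only-mode>
<reactive-flow-table-id>0</reactive-flow-table-id>
<reactive-flow-priority>10</reactive-flow-priority>
<reactive-flow-idle-timeout>600</reactive-flow-idle-timeout>
<reactive-flow-hard-timeout>0</reactive-flow-hard-timeout>
```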
-
-=== Running the L2Switch project
-
-To run the L2 Switch inside the Lithium OpenDaylight distribution, simply install the `odl-l2switch-switch-ui` feature:
-
- feature:install odl-l2switch-switch-ui
-
-//==== Check out the project using git
-// git clone https://git.opendaylight.org/gerrit/p/l2switch.git
-//
-//The above command will create a directory called "l2switch" with the project.
-//
-//==== Run the distribution
-//To run the base distribution, you can use the following command
-//
-// ./distribution/base/target/distributions-l2switch-base-0.1.0-SNAPSHOT-osgipackage/opendaylight/run.sh
-//
-//If you need additional resources, you can use these command line arguments:
-//
-// -Xms1024m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=1024m'
-//
-//To run the karaf distribution, you can use the following command:
-//
-// ./distribution/karaf/target/assembly/bin/karaf
-
-=== Create a network using mininet
- sudo mn --controller=remote,ip=<Controller IP> --topo=linear,3 --switch ovsk,protocols=OpenFlow13
- sudo mn --controller=remote,ip=127.0.0.1 --topo=linear,3 --switch ovsk,protocols=OpenFlow13
-
-The above command will create a virtual network consisting of 3 switches.
-Each switch will connect to the controller located at the specified IP, i.e. 127.0.0.1
-
- sudo mn --controller=remote,ip=127.0.0.1 --mac --topo=linear,3 --switch ovsk,protocols=OpenFlow13
-
-The above command has the "mac" option, which makes it easier to distinguish between Host MAC addresses and Switch MAC addresses.
-
-=== Generating network traffic using mininet
- h1 ping h2
-
-The above command will cause host1 (h1) to ping host2 (h2).
-
- pingall
-
-'pingall' will cause each host to ping every other host.
-
-=== Checking Address Observations
-Address Observations are added to the Inventory data tree.
-
-The Address Observations on a Node Connector can be checked through a browser or a REST Client.
-
- http://10.194.126.91:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/node-connector/openflow:1:1
-
-.Address Observations
-image::l2switch-address-observations.png["AddressObservations image",width=500]
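If the observations need to be consumed programmatically, a response like the one above can be post-processed. The sketch below parses a hypothetical JSON payload; the field names (e.g. `address-tracker:addresses`) are illustrative assumptions, not verified against a live controller:

```python
import json

# Hypothetical RESTCONF payload for the node-connector query above; the exact
# field names are illustrative assumptions.
sample = """
{
  "node-connector": [{
    "id": "openflow:1:1",
    "address-tracker:addresses": [
      {"id": 0, "mac": "00:00:00:00:00:01", "ip": "10.0.0.1",
       "first-seen": 1462119700000, "last-seen": 1462119705000}
    ]
  }]
}
"""

def learned_addresses(payload):
    """Return (mac, ip) pairs observed on each node connector."""
    doc = json.loads(payload)
    return [(a["mac"], a["ip"])
            for nc in doc.get("node-connector", [])
            for a in nc.get("address-tracker:addresses", [])]

print(learned_addresses(sample))  # -> [('00:00:00:00:00:01', '10.0.0.1')]
```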
-
-=== Checking Hosts
-Host information is added to the Topology data tree.
-
-* Host address
-* Attachment point (link) to a node/switch
-
-This host information and attachment point information can be checked through a browser or a REST Client.
-
- http://10.194.126.91:8080/restconf/operational/network-topology:network-topology/topology/flow:1/
-
-.Hosts
-image::l2switch-hosts.png["Hosts image",width=500]
-
-=== Checking STP status of each link
-STP Status information is added to the Inventory data tree.
-
-* A status of "forwarding" means the link is active and packets are flowing on it.
-* A status of "discarding" means the link is inactive and packets are not sent over it.
-
-The STP status of a link can be checked through a browser or a REST Client.
-
- http://10.194.126.91:8080/restconf/operational/opendaylight-inventory:nodes/node/openflow:1/node-connector/openflow:1:2
-
-.STP status
-image::l2switch-stp-status.png["STPStatus image",width=500]
-
-=== Miscellaneous mininet commands
- link s1 s2 down
-
-This will bring the link between switch1 (s1) and switch2 (s2) down
-
- link s1 s2 up
-
-This will bring the link between switch1 (s1) and switch2 (s2) up
-
- link s1 h1 down
-
-This will bring the link between switch1 (s1) and host1 (h1) down
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/l2switch-user-guide.html
index 215d5a0fee59a2a5905e69449351035220334f3d..5c6ed8f9ce3be5e7cbb654a84cee283914e39672 100755 (executable)
@@ -1,129 +1,3 @@
 == Link Aggregation Control Protocol User Guide
 
-=== Overview
-This section contains information about how to use the LACP plugin project with OpenDaylight, including configurations.
-
-=== Link Aggregation Control Protocol Architecture
-The LACP Project within OpenDaylight implements Link Aggregation Control Protocol (LACP) as an MD-SAL service module and will be used to auto-discover and aggregate multiple links between an OpenDaylight controlled network and LACP-enabled endpoints or switches. The result is the creation of a logical channel, which represents the aggregation of the links. Link aggregation provides link resiliency and bandwidth aggregation. This implementation adheres to IEEE Ethernet specification link:http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf[802.3ad].
-
-=== Configuring Link Aggregation Control Protocol
-
-This feature can be enabled in the Karaf console of the OpenDaylight Karaf distribution by issuing the following command:
-
- feature:install odl-lacp-ui
-
-[NOTE]
-====
-1. Ensure that legacy (non-OpenFlow) switches are configured with LACP mode active and a long timeout, to allow the LACP plugin in OpenDaylight to respond to their messages.
-2. Flows that want to take advantage of LACP-configured Link Aggregation Groups (LAGs) must explicitly use an OpenFlow group table entry created by the LACP plugin. The plugin only creates group table entries; it does not program any flows on its own.
-====
-
-=== Administering or Managing Link Aggregation Control Protocol
-LACP-discovered network inventory and network statistics can be viewed using the following REST APIs.
-
-1. List of aggregators available for a node:
-+
- http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>
-+
-Aggregator information will appear within the +<lacp-aggregators>+ XML tag.
-
-2. To view only the information of an aggregator:
-+
- http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>/lacp-aggregators/<agg-id>
-+
-The group ID associated with the aggregator can be found inside the +<lag-groupid>+ XML tag.
-+
-The group table entry information for the +<lag-groupid>+ added for the aggregator is also available in the +opendaylight-inventory+ node database.
-
-3. To view physical port information:
-+
- http://<ControllerIP>:8181/restconf/operational/opendaylight-inventory:nodes/node/<node-id>/node-connector/<node-connector-id>
-+
-Ports that are associated with an aggregator will have the tag +<lacp-agg-ref>+ updated with valid aggregator information.
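A response fragment can also be post-processed programmatically. The sketch below extracts the group ID from a hypothetical aggregator fragment; the element nesting is an illustrative assumption, not the verified response schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical aggregator fragment from the REST response above.
sample = """
<lacp-aggregators>
  <agg-id>1</agg-id>
  <lag-groupid>60169</lag-groupid>
</lacp-aggregators>
"""

def lag_group_id(xml_text):
    """Extract the OpenFlow group ID associated with an aggregator."""
    root = ET.fromstring(xml_text)
    return int(root.findtext("lag-groupid"))

print(lag_group_id(sample))  # -> 60169
```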
-
-=== Tutorials
-The tutorial below demonstrates LACP LAG creation for a sample mininet topology.
-
-==== Sample LACP Topology creation on Mininet
- sudo mn --controller=remote,ip=<Controller IP> --topo=linear,1 --switch ovsk,protocols=OpenFlow13
-
-The above command will create a virtual network consisting of a switch and a host. The switch will be connected to the controller.
-
-Once the topology is discovered, verify the presence of a flow entry with "dl_type" set to "0x8809" to handle LACP packets using the below ovs-ofctl command:
-
- ovs-ofctl -O OpenFlow13 dump-flows s1
-  OFPST_FLOW reply (OF1.3) (xid=0x2):
-  cookie=0x300000000000001e, duration=60.067s, table=0, n_packets=0, n_bytes=0, priority=5,dl_dst=01:80:c2:00:00:02,dl_type=0x8809 actions=CONTROLLER:65535
-
-Configure an additional link between the switch (s1) and host (h1) using the below command on mininet shell to aggregate 2 links:
-
- mininet> py net.addLink(s1, net.get('h1'))
- mininet> py s1.attach('s1-eth2')
-
-The LACP module will listen for LACP control packets that are generated by a legacy (non-OpenFlow) switch. In our example, the host (h1) will act as a LACP packet generator.
-In order to generate the LACP control packets, a bond interface has to be created on the host (h1) with the mode set to LACP and a long timeout. To configure the bond interface, create a new file bonding.conf under the /etc/modprobe.d/ directory and insert the below lines in this new file:
-
-        alias bond0 bonding
-        options bonding mode=4
-
-Here, mode=4 refers to LACP, and the timeout defaults to long.
-
-Enable the bond interface and associate both physical interfaces h1-eth0 and h1-eth1 as members of the bond interface on host (h1) using the below commands on the mininet shell:
-
- mininet> py net.get('h1').cmd('modprobe bonding')
- mininet> py net.get('h1').cmd('ip link add bond0 type bond')
- mininet> py net.get('h1').cmd('ip link set bond0 address <bond-mac-address>')
- mininet> py net.get('h1').cmd('ip link set h1-eth0 down')
- mininet> py net.get('h1').cmd('ip link set h1-eth0 master bond0')
- mininet> py net.get('h1').cmd('ip link set h1-eth1 down')
- mininet> py net.get('h1').cmd('ip link set h1-eth1 master bond0')
- mininet> py net.get('h1').cmd('ip link set bond0 up')
-
-Once the bond0 interface is up, the host (h1) will send LACP packets to the switch (s1). The LACP Module will then create a LAG through exchange of LACP packets between the host (h1) and switch (s1). To view the bond interface output on the host (h1) side:
-
- mininet> py net.get('h1').cmd('cat /proc/net/bonding/bond0')
- Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
- Bonding Mode: IEEE 802.3ad Dynamic link aggregation
- Transmit Hash Policy: layer2 (0)
- MII Status: up
- MII Polling Interval (ms): 100
- Up Delay (ms): 0
- Down Delay (ms): 0
- 802.3ad info
- LACP rate: slow
- Min links: 0
- Aggregator selection policy (ad_select): stable
- Active Aggregator Info:
-         Aggregator ID: 1
-         Number of ports: 2
-         Actor Key: 33
-         Partner Key: 27
-         Partner Mac Address: 00:00:00:00:01:01
- Slave Interface: h1-eth0
- MII Status: up
- Speed: 10000 Mbps
- Duplex: full
- Link Failure Count: 0
- Permanent HW addr: 00:00:00:00:00:11
- Aggregator ID: 1
- Slave queue ID: 0
- Slave Interface: h1-eth1
- MII Status: up
- Speed: 10000 Mbps
- Duplex: full
- Link Failure Count: 0
- Permanent HW addr: 00:00:00:00:00:12
- Aggregator ID: 1
- Slave queue ID: 0
-
-A corresponding group table entry would be created on the OpenFlow switch (s1) with "type" set to "select" to perform the LAG functionality. To view the group entries:
-
- mininet> ovs-ofctl -O OpenFlow13 dump-groups s1
- OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
-  group_id=60169,type=select,bucket=weight:0,actions=output:1,output:2
-
-To apply the LAG functionality on the switches, the flows should be configured with action set to GroupId instead of output port. A sample add-flow configuration with output action set to GroupId:
-
- sudo ovs-ofctl -O OpenFlow13 add-flow s1 dl_type=0x0806,dl_src=SRC_MAC,dl_dst=DST_MAC,actions=group:60169
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/link-aggregation-control-protocol-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/lfm/lispflowmapping-clustering-user.adoc b/manuals/user-guide/src/main/asciidoc/lfm/lispflowmapping-clustering-user.adoc
deleted file mode 100644 (file)
index 0b531bd..0000000
+++ /dev/null
@@ -1,34 +0,0 @@
-=== Clustering in LISP Flow Mapping
-Documentation on setting up a 3-node OpenDaylight cluster is available at the following https://wiki.opendaylight.org/view/Running_and_testing_an_OpenDaylight_Cluster#Three-node_cluster[ODL wiki page].
-
-To turn on clustering in LISP Flow Mapping, it is necessary to:
-
-* run the *deploy.py* script. This script is in the https://git.opendaylight.org/gerrit/integration/test[integration-test] project at _tools/clustering/cluster-deployer/deploy.py_. A complete deploy.py command looks like:
-=======
-{path_to_integration_test_project}/tools/clustering/cluster-deployer/*deploy.py* +
---*distribution* {path_to_distribution_in_zip_format} +
---*rootdir* {dir_at_remote_host_where_copy_odl_distribution}  +
---*hosts* {IP1},{IP2},{IP3} +
---*clean* +
---*template* lispflowmapping +
---*rf* 3 +
---*user* {user_name_of_remote_hosts} +
---*password* {password_to_remote_hosts}
-=======
-Running this script will cause the specified *distribution* to be deployed to the remote *hosts*, identified by their IP addresses, using the given credentials (*user* and *password*). The distribution will be copied to the specified *rootdir*. As part of the deployment, a *template* is applied; it contains a set of controller files which differ from the standard ones. In this case it is located in the +
-_{path_to_integration_test_project}/tools/clustering/cluster-deployer/lispflowmapping_ directory. +
-Lispflowmapping templates are part of the integration-test project. There are 5 template files: +
-
-* akka.conf.template
-* jolokia.xml.template
-* module-shards.conf.template
-* modules.conf.template
-* org.apache.karaf.features.cfg.template
-
-After copying the distribution, it is unzipped and started on all of the specified *hosts* in a cluster-aware manner.
-
-==== Remarks
-It is necessary to have:
-
-* the *unzip* program installed on all of the hosts
-* the /etc/sudoers file on all remote hosts configured to not *requiretty* (should only matter on Debian hosts)
index 0caa9b02c4d12b6c284ce1ff023e2dae30b0ee65..e60cd814d2ce6378fa2a25edb87a72066566153f 100644 (file)
@@ -1,639 +1,3 @@
 == LISP Flow Mapping User Guide
 
-=== Overview
-
-==== Locator/ID Separation Protocol
-
-http://tools.ietf.org/html/rfc6830[Locator/ID Separation Protocol (LISP)] is a
-technology that provides a flexible map-and-encap framework that can be used
-for overlay network applications such as data center network virtualization and
-Network Function Virtualization (NFV).
-
-LISP provides the following name spaces:
-
-* http://tools.ietf.org/html/rfc6830#page-6[Endpoint Identifiers (EIDs)]
-* http://tools.ietf.org/html/rfc6830#section-3[Routing Locators (RLOCs)]
-
-In a virtualization environment EIDs can be viewed as virtual address space and
-RLOCs can be viewed as physical network address space.
-
-The LISP framework decouples network control plane from the forwarding plane by
-providing:
-
-* A data plane that specifies how the virtualized network addresses are
-  encapsulated in addresses from the underlying physical network.
-* A control plane that stores the mapping of the virtual-to-physical address
-  spaces, the associated forwarding policies and serves this information to
-  the data plane on demand.
-
-Network programmability is achieved by programming forwarding policies such as
-transparent mobility, service chaining, and traffic engineering in the mapping
-system; where the data plane elements can fetch these policies on demand as new
-flows arrive. This chapter describes the LISP Flow Mapping project in
-OpenDaylight and how it can be used to enable advanced SDN and NFV use cases.
-
-LISP data plane Tunnel Routers are available at
-http://LISPmob.org/[LISPmob.org] in the open source community on the following
-platforms:
-
-* Linux
-* Android
-* OpenWRT
-
-For more details and support for LISP data plane software please visit
-http://LISPmob.org/[the LISPmob web site].
-
-==== LISP Flow Mapping Service
-
-The LISP Flow Mapping service provides LISP Mapping System services. This
-includes LISP  Map-Server and LISP Map-Resolver services to store and serve
-mapping data to data plane nodes as well as to OpenDaylight applications.
-Mapping data can include mapping of virtual addresses to physical network
-address where the virtual nodes are reachable or hosted at. Mapping data can
-also include a variety of routing policies including traffic engineering and
-load balancing. To leverage this service, OpenDaylight applications and
-services can use the northbound REST API to define the mappings and policies in
-the LISP Mapping Service. Data plane devices capable of LISP control protocol
-can leverage this service through a southbound LISP plugin. LISP-enabled
-devices must be configured to use this OpenDaylight service as their Map Server
-and/or Map Resolver.
-
-The southbound LISP plugin supports the LISP control protocol (Map-Register,
-Map-Request, Map-Reply messages), and can also be used to register mappings in
-the OpenDaylight mapping service.
-
-=== LISP Flow Mapping Architecture
-
-The following figure shows the various LISP Flow Mapping modules.
-
-.LISP Mapping Service Internal Architecture
-
-image::ODL_lfm_Be_component.jpg["LISP Mapping Service Internal Architecture", width=460]
-
-A brief description of each module is as follows:
-
-* *DAO (Data Access Object):* This layer separates the LISP logic from the
-  database, so that we can separate the map server and map resolver from the
-  specific implementation of the mapping database. Currently we have an
-  implementation of this layer with an in-memory HashMap, but it can be switched
-  to any other key/value store simply by implementing the ILispDAO
-  interface.
-
-* *Map Server:* This module processes the adding or registration of
-  authentication tokens (keys) and mappings. For a detailed specification of
-  LISP Map Server, see http://tools.ietf.org/search/rfc6830[LISP].
-* *Map Resolver:* This module receives and processes the mapping lookup queries
-  and provides the mappings to the requester. For a detailed specification of
-  the LISP Map Resolver, see http://tools.ietf.org/search/rfc6830[LISP].
-* *RPC/RESTCONF:* This is the auto-generated RESTCONF-based northbound API. This
-  module enables defining key-EID associations as well as adding mapping
-  information through the Map Server. Key-EID associations and mappings can also
-  be queried via this API.
-* *GUI:* This module enables adding and querying the mapping service through a
-  GUI based on ODL DLUX. 
-* *Neutron:* This module implements the OpenDaylight Neutron Service APIs. It
-  provides integration between the LISP service and the OpenDaylight Neutron
-  service, and thus OpenStack.
-* *Java API:* The API module exposes the Map Server and Map Resolver
-  capabilities via a Java API.
-* *LISP Proto:* This module includes LISP protocol dependent data types and
-  associated processing.
-* *In Memory DB:* This module includes the in memory database implementation of
-  the mapping service.
-* *LISP Southbound Plugin:* This plugin enables data plane devices that support
-  LISP control plane protocol (see http://tools.ietf.org/search/rfc6830[LISP])
-  to register and query mappings against the
-  LISP Flow Mapping service via the LISP control plane protocol.
-
-
-=== Configuring LISP Flow Mapping
-
-In order to use the LISP mapping service for registering EID to RLOC mappings
-from northbound or southbound, keys have to be defined for the EID prefixes first. Once a key
-is defined for an EID prefix, it can be used to add mappings for that EID
-prefix multiple times. If the service is going to be used to process Map-Register
-messages from the southbound LISP plugin, the same key must be used by
-the data plane device to create the authentication data in the Map-Register
-messages for the associated EID prefix.
-
-The +etc/custom.properties+ file in the Karaf distribution allows configuration
-of several OpenDaylight parameters.  The LISP service has the following properties
-that can be adjusted:
-
-*lisp.mappingOverwrite* (default: 'true')::
-    Configures handling of mapping updates.  When set to 'true' (default), a
-    mapping update (either through the southbound plugin via a Map-Register
-    message or through a northbound API PUT REST call) overwrites the existing
-    RLOC set associated with an EID prefix.  When set to 'false', the RLOCs
-    of the update are merged into the existing set.
-
-*lisp.smr* (default: 'false')::
-    Enables/disables the
-    http://tools.ietf.org/html/rfc6830#section-6.6.2[Solicit-Map-Request (SMR)]
-    functionality.  SMR is a method to notify changes in an EID-to-RLOC mapping
-    to "subscribers".  The LISP service considers all Map-Request's source RLOC
-    as a subscriber to the requested EID prefix, and will send an SMR control
-    message to that RLOC if the mapping changes.
-
-*lisp.elpPolicy* (default: 'default')::
-    Configures how to build a Map-Reply southbound message from a mapping
-    containing an Explicit Locator Path (ELP) RLOC.  It is used for
-    compatibility with dataplane devices that don't understand the ELP LCAF
-    format.  The 'default' setting doesn't alter the mapping, returning all
-    RLOCs unmodified.  The 'both' setting adds a new RLOC to the mapping, with
-    a lower priority than the ELP, that is the next hop in the service chain.
-    To determine the next hop, it searches the source RLOC of the Map-Request
-    in the ELP, and chooses the next hop, if it exists, otherwise it chooses
-    the first hop.  The 'replace' setting adds a new RLOC using the same
-    algorithm as the 'both' setting, but using the original priority of the ELP
-    RLOC, which is removed from the mapping.
-
-*lisp.lookupPolicy* (default: 'northboundFirst')::
-    Configures the mapping lookup algorithm. When set to 'northboundFirst',
-    mappings programmed through the northbound API take precedence. If no
-    northbound-programmed mappings exist, the mapping service returns mappings
-    registered through the southbound plugin, if any exist.
-    When set to 'northboundAndSouthbound', the mapping programmed through the
-    northbound is returned, updated with the up/down status of these mappings
-    as reported by the southbound (if any).
-
-*lisp.mappingMerge* (default: 'false')::
-    Configures the merge policy on the southbound registrations through the
-    LISP SB Plugin. When set to 'false', only the latest mapping registered
-    through the SB plugin is valid in the southbound mapping database,
-    independent of which device it came from. When set to 'true', mappings
-    for the same EID registered by different devices are merged together and
-    a union of the locators is maintained as the valid mapping for that EID.
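Collected together, a hypothetical excerpt of +etc/custom.properties+ with each LISP option set to its documented default (standard Java properties syntax assumed) would look like:

```properties
# LISP Flow Mapping service options (documented defaults)
lisp.mappingOverwrite = true
lisp.smr = false
lisp.elpPolicy = default
lisp.lookupPolicy = northboundFirst
lisp.mappingMerge = false
```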
-
-=== Textual Conventions for LISP Address Formats
-
-In addition to the more common IPv4, IPv6 and MAC address data types, the LISP
-control plane supports arbitrary
-http://www.iana.org/assignments/address-family-numbers[Address Family
-Identifiers] assigned by IANA, and in addition to those the
-https://tools.ietf.org/html/draft-ietf-lisp-lcaf[LISP Canonical Address Format
-(LCAF)].
-
-The LISP Flow Mapping project in OpenDaylight implements support for many of
-these different address formats, the full list being summarized in the
-following table.  While some of the address formats have well-defined and
-widely used textual representations, many don't.  It became necessary to define
-a convention to use for text rendering of all implemented address types in
-logs, URLs, input fields, etc.  The table below lists the supported formats,
-along with their AFI number and LCAF type, including the prefix used for
-disambiguation of potential overlap, and example outputs.
-
-.LISP Address Formats
-[align="right",options="header",cols="<2s,>,>,<,<4l"]
-|=====
-|         Name           |  AFI  | LCAF |  Prefix  |  Text Rendering
-| No Address             |     0 |    - | no:      | No Address Present
-| IPv4 Prefix            |     1 |    - | ipv4:    | 192.0.2.0/24
-| IPv6 Prefix            |     2 |    - | ipv6:    | 2001:db8::/32
-| MAC Address            | 16389 |    - | mac:     | 00:00:5E:00:53:00
-| Distinguished Name     |    17 |    - | dn:      | stringAsIs
-| AS Number              |    18 |    - | as:      | AS64500
-| AFI List               | 16387 |    1 | list:    | {192.0.2.1,192.0.2.2,2001:db8::1}
-| Instance ID            | 16387 |    2 | -        | [223] 192.0.2.0/24
-| Application Data       | 16387 |    4 | appdata: | 192.0.2.1!128!17!80-81!6667-7000
-| Explicit Locator Path  | 16387 |   10 | elp:     | {192.0.2.1->192.0.2.2\|lps->192.0.2.3}
-| Source/Destination Key | 16387 |   12 | srcdst:  | 192.0.2.1/32\|192.0.2.2/32
-| Key/Value Address Pair | 16387 |   15 | kv:      | 192.0.2.1=>192.0.2.2
-| Service Path           | 16387 |  N/A | sp:      | 42(3)
-|=====
-
-Please note that the forward slash character `/` typically separating IPv4 and
-IPv6 addresses from the mask length is transformed into `%2f` when used in a
-URL.
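This encoding can be produced with standard library helpers; a minimal Python sketch (note that `quote` emits the uppercase form `%2F`, which is equivalent, since percent-encoding is case-insensitive):

```python
from urllib.parse import quote

# Percent-encode an IPv4 EID prefix for use in a RESTCONF URL.
eid = "192.0.2.0/24"
encoded = quote(eid, safe="")  # safe="" so the slash is encoded too
print(encoded)  # -> 192.0.2.0%2F24
```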
-
-=== Karaf commands
-
-In this section we will discuss two types of Karaf commands: built-in, and
-LISP specific. Some built-in commands are quite useful, and are needed for the
-tutorial, so they will be discussed here. A reference of all LISP specific
-commands, added by the LISP Flow Mapping project is also included. They are
-useful mostly for debugging.
-
-==== Useful built-in commands
-
-+help+::
-    Lists all available commands, with a short description of each.
-
-+help <command_name>+::
-    Show detailed help about a specific command.
-
-+feature:list [-i]+::
-    Show all locally available features in the Karaf container. The `-i`
-    option lists only features that are currently installed. It is possible to
-    use `| grep` to filter the output (for all commands, not just this one).
-
-+feature:install <feature_name>+::
-    Install feature `feature_name`.
-
-+log:set <level> <class>+::
-    Set the log level for `class` to `level`. The default log level for all
-    classes is INFO. For debugging, or learning about LISP internals it is
-    useful to run `log:set TRACE org.opendaylight.lispflowmapping` right after
-    Karaf starts up.
-
-+log:display+::
-    Outputs the log file to the console, and returns control to the user.
-
-+log:tail+::
-    Continuously shows log output, requires `Ctrl+C` to return to the console.
-
-==== LISP specific commands
-
-The available LISP commands can always be obtained with `help mappingservice`.
-Currently they are:
-
-+mappingservice:addkey+::
-    Add the default password `password` for the IPv4 EID prefix 0.0.0.0/0 (all
-    addresses). This is useful when experimenting with southbound devices,
-    and using the REST interface would be cumbersome for whatever reason.
-
-+mappingservice:mappings+::
-    Show the list of all mappings stored in the internal non-persistent data
-    store (the DAO), listing the full data structure. The output is not human
-    friendly, but can be used for debugging.
-
-
-=== LISP Flow Mapping Karaf Features
-
-LISP Flow Mapping has the following Karaf features that can be installed from
-the Karaf console:
-
-+odl-lispflowmapping-msmr+::
-    This includes the core features required to use the LISP Flow Mapping Service
-    such as mapping service and the LISP southbound plugin.
-
-+odl-lispflowmapping-ui+::
-    This includes the GUI module for the LISP Mapping Service.
-
-+odl-lispflowmapping-neutron+::
-    This is the experimental Neutron provider module for the LISP Mapping Service.
-
-
-=== Tutorials
-
-This section provides a tutorial demonstrating various features in this service.
-
-==== Creating a LISP overlay
-
-This section provides instructions to set up a LISP network of three nodes (one
-"client" node and two "server" nodes) using LISPmob as data plane LISP nodes
-and the LISP Flow Mapping project from OpenDaylight as the LISP programmable
-mapping system for the LISP network.
-
-===== Overview
-
-The steps shown below will demonstrate setting up a LISP network between a
-client and two servers, then performing a failover between the two "server"
-nodes.
-
-===== Prerequisites
-
-* *OpenDaylight Beryllium*
-* *The Postman Chrome App*: the most convenient way to follow along with
-  this tutorial is to use the
-  https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop?hl=en[Postman
-  Chrome App] to edit and send the requests. The project git repository hosts
-  a collection of the requests that are used in this tutorial in the
-  +resources/tutorial/Beryllium_Tutorial.json.postman_collection+ file. You can
-  import this file to Postman by clicking 'Import' at the top, choosing
-  'Download from link' and then entering the following URL:
-  +https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=blob_plain;f=resources/tutorial/Beryllium_Tutorial.json.postman_collection;hb=refs/heads/stable/beryllium+.
-  Alternatively, you can save the file on your machine, or if you have the
-  repository checked out, you can import from there. You will need to create a
-  new Postman Environment and define some variables within: +controllerHost+
-  set to the hostname or IP address of the machine running the ODL instance,
-  and +restconfPort+ to 8181, if you didn't modify the default controller
-  settings.
-* *LISPmob version 0.5.x*: the README.md lists the dependencies needed to
-  build it from source.
-* *A virtualization platform*
-
-===== Target Environment
-
-The three LISP data plane nodes and the LISP mapping system are assumed to be
-running in Linux virtual machines, which have the +eth0+ interface in NAT mode
-to allow outside internet access and +eth1+ connected to a host-only network,
-with the following IP addresses (please adjust configuration files, JSON
-examples, etc. accordingly if you're using another addressing scheme):
-
-.Nodes in the tutorial
-[align="right",options="header"]
-|===
-| Node            |  Node Type     | IP Address
-| *controller*    |  OpenDaylight  | 192.168.16.11
-| *client*        |  LISPmob       | 192.168.16.30
-| *server1*       |  LISPmob       | 192.168.16.31
-| *server2*       |  LISPmob       | 192.168.16.32
-| *service-node*  |  LISPmob       | 192.168.16.33
-|===
-
-NOTE: While the tutorial uses LISPmob as the data plane, it could be any
-      LISP-enabled hardware or software router (commercial/open source).
-
-===== Instructions
-
-The below steps use the command line tool cURL to talk to the LISP Flow
-Mapping RPC REST API. This is so that you can see the actual request URLs and
-body content on the page.
-
- . Install and run OpenDaylight Beryllium release on the controller VM. Please
-   follow the general OpenDaylight Beryllium Installation Guide for this step.
-   Once the OpenDaylight controller is running, install the
-   'odl-lispflowmapping-msmr' feature from the Karaf CLI:
-
- feature:install odl-lispflowmapping-msmr
-+
-It takes quite a while to load and initialize all features and their
-dependencies. It's worth running the command +log:tail+ in the Karaf console
-to see when the log output is winding down, and continue with the tutorial
-after that.
-
- . Install LISPmob on the *client*, *server1*, *server2*, and *service-node*
-   VMs following the installation instructions
-   https://github.com/LISPmob/lispmob#software-prerequisites[from the LISPmob
-   README file].
-
- . Configure the LISPmob installations from the previous step. Starting from
-   the +lispd.conf.example+ file in the distribution, set the EID in each
-   +lispd.conf+ file from the IP address space selected for your virtual/LISP
-   network. In this tutorial the EID of the *client* is set to 1.1.1.1/32, and
-   that of *server1* and *server2* to 2.2.2.2/32.
-
- . Set the RLOC interface to +eth1+ in each +lispd.conf+ file. LISP will
-   determine the RLOC (IP address of the corresponding VM) based on this
-   interface.
-
- . Set the Map-Resolver address to the IP address of the *controller*, and on
-   the *client* the Map-Server too. On *server1* and *server2* set the
-   Map-Server to something else, so that it doesn't interfere with the
-   mappings on the controller, since we're going to program them manually.
-
- . Modify the "key" parameter in each +lispd.conf+ file to a key/password of
-   your choice ('password' in this tutorial).
-+
-NOTE: The +resources/tutorial+ directory in the 'stable/beryllium' branch of the
-      project git repository has the files used in the tutorial
-      https://git.opendaylight.org/gerrit/gitweb?p=lispflowmapping.git;a=tree;f=resources/tutorial;hb=refs/heads/stable/beryllium[checked
-      in], so you can just copy the files to +/root/lispd.conf+ on the
-      respective VMs. You will also find the JSON files referenced below in
-      the same directory.
-+
- . Define a key and EID prefix association in OpenDaylight using the RPC REST
-   API for the *client* EID (1.1.1.1/32) to allow registration from the
-   southbound. Since the mappings for the server EID will be configured from
-   the REST API, no such association is necessary. Run the below command on
-   the *controller* (or any machine that can reach *controller*, by replacing
-   'localhost' with the IP address of *controller*).
-
- curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
-     http://localhost:8181/restconf/operations/odl-mappingservice:add-key \
-     --data @add-key.json
-
-+
-where the content of the 'add-key.json' file is the following:
-+
-[source,json]
-----
-{
-    "input": {
-        "eid": {
-            "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
-            "ipv4-prefix": "1.1.1.1/32"
-        },
-        "mapping-authkey": {
-            "key-string": "password",
-            "key-type": 1
-        }
-    }
-}
-----
-
- . Verify that the key was added properly by issuing the following request:
-
- curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
-     http://localhost:8181/restconf/operations/odl-mappingservice:get-key \
-     --data @get1.json
-
-+
-where the content of the 'get1.json' file can be derived from the
-'add-key.json' file by removing the 'mapping-authkey' field. The output of
-the above invocation should look like this:
-
- {"output":{"mapping-authkey":{"key-type":1,"key-string":"password"}}}
-
- . Run the +lispd+ LISPmob daemon on all VMs:
-
- lispd -f /root/lispd.conf
-
- . The *client* LISPmob node should now register its EID-to-RLOC mapping in
-   OpenDaylight. To verify, you can look up the corresponding EID via the
-   REST API:
-
- curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
-     http://localhost:8181/restconf/operations/odl-mappingservice:get-mapping \
-     --data @get1.json
-
-+
-An alternative way to retrieve mappings from ODL through the southbound
-interface is the https://github.com/davidmeyer/lig[+lig+] open source tool.
-
- . Register the EID-to-RLOC mapping of the server EID 2.2.2.2/32 with the
-   controller, pointing to *server1* and *server2*, with a higher preference
-   for *server1*:
-
- curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
-     http://localhost:8181/restconf/operations/odl-mappingservice:add-mapping \
-     --data @mapping.json
-+
-where the 'mapping.json' file looks like this:
-+
-[source,json]
-----
-{
-    "input": {
-        "mapping-record": {
-            "recordTtl": 1440,
-            "action": "NoAction",
-            "authoritative": true,
-            "eid": {
-                "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
-                "ipv4-prefix": "2.2.2.2/32"
-            },
-            "LocatorRecord": [
-                {
-                    "locator-id": "server1",
-                    "priority": 1,
-                    "weight": 1,
-                    "multicastPriority": 255,
-                    "multicastWeight": 0,
-                    "localLocator": true,
-                    "rlocProbed": false,
-                    "routed": true,
-                    "rloc": {
-                        "address-type": "ietf-lisp-address-types:ipv4-afi",
-                        "ipv4": "192.168.16.31"
-                    }
-                },
-                {
-                    "locator-id": "server2",
-                    "priority": 2,
-                    "weight": 1,
-                    "multicastPriority": 255,
-                    "multicastWeight": 0,
-                    "localLocator": true,
-                    "rlocProbed": false,
-                    "routed": true,
-                    "rloc": {
-                        "address-type": "ietf-lisp-address-types:ipv4-afi",
-                        "ipv4": "192.168.16.32"
-                    }
-                }
-            ]
-        }
-    }
-}
-----
-+
-Here the priority of the second RLOC (192.168.16.32 - *server2*) is 2, a
-higher numeric value than the priority of 192.168.16.31, which is 1. This
-policy says that *server1* is preferred to *server2* for reaching EID
-2.2.2.2/32. Note that a lower priority value means higher preference in LISP.
-
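The preference rule above can be sketched in a few lines. This is an illustrative Python snippet, not project code; the locator list mirrors the 'mapping.json' example:

```python
def select_rloc(locator_records):
    # In LISP, a lower priority value means higher preference; weight
    # only balances load among locators with equal priority.
    return min(locator_records, key=lambda l: l["priority"])

locators = [
    {"locator-id": "server1", "priority": 1, "weight": 1, "rloc": "192.168.16.31"},
    {"locator-id": "server2", "priority": 2, "weight": 1, "rloc": "192.168.16.32"},
]
print(select_rloc(locators)["rloc"])  # 192.168.16.31 -- server1 is preferred
```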
- . Verify the correct registration of the 2.2.2.2/32 EID:
-
- curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
-     http://localhost:8181/restconf/operations/odl-mappingservice:get-mapping \
-     --data @get2.json
-
-+
-where 'get2.json' can be derived from 'get1.json' by changing the content of
-the 'Ipv4Address' field from '1.1.1.1' to '2.2.2.2'.
-
- . Now the LISP network is up. To verify, log into the *client* VM and ping the server EID:
-
- ping 2.2.2.2
-
- . Let's test fail-over now. Suppose a service on *server1* became
-   unavailable, but *server1* itself is still reachable. LISP will not fail
-   over automatically, even though the mapping for 2.2.2.2/32 has two
-   locators: both locators are still reachable, so traffic keeps using the
-   one with the highest preference (lowest priority value). To force a
-   failover, we need to give *server2* the lower priority value. Using the
-   file 'mapping.json' above, swap the priority values between the two
-   locators (lines 14 and 28 in 'mapping.json') and repeat the request from
-   step 11. You can also repeat step 12 to see if the mapping is correctly
-   registered. If you leave the ping running and monitor the traffic using
-   Wireshark, you can see the ping traffic to 2.2.2.2 being diverted from
-   the *server1* RLOC to the *server2* RLOC.
-+
-With the default OpenDaylight configuration the failover should be near
-instantaneous (we observed 3 lost pings in the worst case), because of the
-LISP http://tools.ietf.org/html/rfc6830#section-6.6.2[Solicit-Map-Request
-(SMR) mechanism] that can ask a LISP data plane element to update its mapping
-for a certain EID (enabled by default). It is controlled by the +lisp.smr+
-variable in +etc/custom.properties+. When enabled, any mapping change from the
-RPC interface will trigger an SMR packet to all data plane elements that have
-requested the mapping in the last 24 hours (this value was chosen because it's
-the default TTL of Cisco IOS xTR mapping registrations). If disabled, ITRs
-keep their mappings until the TTL specified in the Map-Reply expires.
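For reference, a minimal sketch of the relevant line in +etc/custom.properties+ (the variable name is taken from the text above; the surrounding comments are illustrative):

```
# Solicit-Map-Request on RPC mapping changes; enabled by default.
# Set to false to let ITRs keep mappings until the Map-Reply TTL expires.
lisp.smr = true
```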
-
- . To add a service chain into the path from the client to the server, we can
-   use an Explicit Locator Path, specifying the *service-node* as the first
-   hop and *server1* (or *server2*) as the second hop. The following will
-   achieve that:
-
- curl -u "admin":"admin" -H "Content-type: application/json" -X POST \
-     http://localhost:8181/restconf/operations/odl-mappingservice:add-mapping \
-     --data @elp.json
-+
-where the 'elp.json' file is as follows:
-+
-[source,json]
-----
-{
-    "input": {
-        "mapping-record": {
-            "recordTtl": 1440,
-            "action": "NoAction",
-            "authoritative": true,
-            "eid": {
-                "address-type": "ietf-lisp-address-types:ipv4-prefix-afi",
-                "ipv4-prefix": "2.2.2.2/32"
-            },
-            "LocatorRecord": [
-                {
-                    "locator-id": "ELP",
-                    "priority": 1,
-                    "weight": 1,
-                    "multicastPriority": 255,
-                    "multicastWeight": 0,
-                    "localLocator": true,
-                    "rlocProbed": false,
-                    "routed": true,
-                    "rloc": {
-                        "address-type": "ietf-lisp-address-types:explicit-locator-path-lcaf",
-                        "explicit-locator-path": {
-                            "hop": [
-                                {
-                                    "hop-id": "service-node",
-                                    "address": "192.168.16.33",
-                                    "lrs-bits": "strict"
-                                },
-                                {
-                                    "hop-id": "server1",
-                                    "address": "192.168.16.31",
-                                    "lrs-bits": "strict"
-                                }
-                            ]
-                        }
-                    }
-                }
-            ]
-        }
-    }
-}
-----
-+
-After the mapping for 2.2.2.2/32 is updated with the above, the ICMP traffic
-from *client* to *server1* will flow through the *service-node*. You can
-confirm this in the LISPmob logs, or by sniffing the traffic on either the
-*service-node* or *server1*. Note that service chains are unidirectional, so
-unless another ELP mapping is added for the return traffic, packets will go
-from *server1* to *client* directly.
-
- . Suppose the *service-node* is actually a firewall, and traffic is diverted
-   there to support access control lists (ACLs). In this tutorial that can be
-   emulated by using +iptables+ firewall rules in the *service-node* VM. To
-   deny traffic on the service chain defined above, the following rule can be
-   added:
-
- iptables -A OUTPUT --dst 192.168.16.31 -j DROP
-
-+
-The ping from the *client* should now have stopped.
-+
-In this case the ACL is applied to the destination RLOC. There is an effort
-underway in the LISPmob community to allow filtering on EIDs, which is the
-more logical place to apply ACLs.
-
- . To delete the rule and restore connectivity on the service chain, delete
-   the ACL by issuing the following command:
-
- iptables -D OUTPUT --dst 192.168.16.31 -j DROP
-
-+
-which should restore connectivity.
-
-=== LISP Flow Mapping Support
-
-For support, the lispflowmapping project can be reached by emailing the
-developer mailing list (lispflowmapping-dev@lists.opendaylight.org) or on the
-#opendaylight-lispflowmapping IRC channel on irc.freenode.net.
-
-Additional information is also available on the https://wiki.opendaylight.org/view/OpenDaylight_Lisp_Flow_Mapping:Main[Lisp Flow Mapping wiki].
-
-include::lispflowmapping-clustering-user.adoc[Clustering in lispflowmapping]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/lisp-flow-mapping-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/nemo/odl-nemo-engine-user.adoc b/manuals/user-guide/src/main/asciidoc/nemo/odl-nemo-engine-user.adoc
deleted file mode 100644 (file)
index 5ae979a..0000000
+++ /dev/null
@@ -1,39 +0,0 @@
-== NEtwork MOdeling (NEMO)
-This section describes how to use the NEMO feature in OpenDaylight
-and contains configuration, administration, and management
-sections for the feature.
-
-=== Overview
-TBD: An overview of the NEMO feature and the use case and the
-audience who will use the feature.
-
-=== NEMO Engine Architecture
-TBD: Information about NEMO Engine components and how they work together.
-Also include information about how the feature integrates with
-OpenDaylight.
-
-=== Configuring NEMO Engine
-TBD: Describe how to configure the NEMO Engine after installation.
-
-=== Administering or Managing NEMO Engine
-TBD: Include related command reference or operations
-for using the NEMO Engine.
-
-=== Tutorials
-Below are tutorials for NEMO Engine.
-
-==== Using NEMO Engine
-TBD: State the purpose of tutorial
-
-===== Overview
-TBD: An overview of the NEMO tutorial
-
-===== Prerequisites
-TBD: Provide any prerequisite information, assumed knowledge, or environment
-required to execute the use case.
-
-===== Target Environment
-TBD: Include any topology requirement for the use case.
-
-===== Instructions
-TBD: Step by step procedure for using NEMO Engine.
index acea6515b146bedcca6b1b1c73c0709abfc73301..fda687f27ddf88600bff7deafc48590c20736c74 100644 (file)
@@ -1,68 +1,3 @@
 == NetIDE User Guide
 
-=== Overview
-OpenDaylight's NetIDE project allows users to run SDN applications written for different 
-SDN controllers, e.g., Floodlight or Ryu, on top of OpenDaylight managed infrastructure. The NetIDE 
-Network Engine integrates a client controller layer that executes the modules that 
-compose a Network Application and interfaces with a server SDN controller layer that drives 
-the underlying infrastructure. In addition, it provides a uniform interface to common tools
-that are intended to allow the inspection/debug of the control channel and the management of the
-network resources.
-
-The Network Engine provides a compatibility layer capable of translating calls of the network 
-applications running on top of the client controllers, into calls for the server controller framework. The
-communication between the client and the server layers is achieved through the NetIDE
-intermediate protocol, which is an application-layer protocol on top of TCP that transmits the
-network control/management messages from the client to the server controller and vice-versa.
-Between client and server controller sits the Core Layer which also speaks the intermediate protocol.
-
-=== NetIDE API
-==== Architecture and Design
-The NetIDE engine follows the ONF's proposed Client/Server SDN Application architecture.
-
-.NetIDE Network Engine Architecture
-image::netide/netidearch.jpg[width=500]
-
-==== Core
-The NetIDE Core is a message-based system that allows for the exchange of messages between
-OpenDaylight and subscribed client SDN controllers.
-
-==== Handling reply messages correctly
-
-When an application module sends a request to the network (e.g. flow statistics, features, etc.), 
-the Network Engine must be able to correctly drive the corresponding reply to such a module. This is
-not a trivial task, as many modules may compose the network application running on top of the
-Network Engine, and there is no way for the Core to pair replies and requests. The transaction
-IDs (xid) in the OpenFlow header are unusable in this case, as it may happen that different modules
-use the same values.
-
-In the proposed approach, represented in the figure below, the task of pairing replies with requests is
-performed by the Shim Layer which replaces the original xid of the OpenFlow requests coming
-from the core with new unique xid values. The Shim also saves the original OpenFlow xid value
-and the module id it finds in the NetIDE header. As the network elements must use the same xid
-values in the replies, the Shim layer can easily pair a reply with the correct request as it is using
-unique xid values.
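As an illustration, the pairing logic described above can be sketched as follows. This is a hypothetical Python sketch, not the actual NetIDE Shim implementation; the class and method names are invented:

```python
import itertools

class ShimXidMapper:
    """Sketch of how the Shim layer pairs replies with requests
    by rewriting OpenFlow xids to unique values."""

    def __init__(self):
        self._fresh = itertools.count(1)
        self._pending = {}  # rewritten xid -> (original xid, module id)

    def rewrite_request(self, original_xid, module_id):
        # Replace the module's xid with a globally unique one and
        # remember the original xid and the owning module.
        new_xid = next(self._fresh)
        self._pending[new_xid] = (original_xid, module_id)
        return new_xid

    def route_reply(self, reply_xid):
        # The switch echoes the rewritten xid, which is unique, so the
        # original xid and owning module are recovered unambiguously.
        return self._pending.pop(reply_xid)

shim = ShimXidMapper()
a = shim.rewrite_request(7, module_id="X")  # two modules may reuse xid 7
b = shim.rewrite_request(7, module_id="Y")
```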
-
-The below figure shows how the Network Engine should handle the controller-to-switch OpenFlow messages. 
-The diagram shows the case of a request message sent by an application module to a network
-element where the Backend inserts the module id of the module in the NetIDE header (X in the
-Figure). For other messages generated by the client controller platform (e.g. echo requests) or by
-the Backend, the module id of the Backend is used (Y in the Figure).
-
-.NetIDE Communication Flow
-image::netide/netide-flow.jpg[width=500]
-
-
-==== Configuration
-Below are the configuration items which can be edited, including their default values.
-
-* core-address: the IP address of the NetIDE Core; default is 127.0.0.1
-* core-port: the port on which the NetIDE Core is listening
-* address: the IP address on which the controller listens for switch connections; default is 127.0.0.1
-* port: the port on which the controller listens for switch connections; default is 6644
-* transport-protocol: default is TCP
-* switch-idle-timeout: default is 15000ms
-
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/netide-user-guide.html
index 52b98f9639d5bc7294202a20a0f5a44767068556..3b6888cb156999ba75a1a8eb6b436e421879e15d 100644 (file)
@@ -1,59 +1,3 @@
 == Neutron Service User Guide
 
-=== Overview
-This Karaf feature (`odl-neutron-service`) provides integration support for OpenStack Neutron
-via the OpenDaylight ML2 mechanism driver. The Neutron Service is only one of the
-components necessary for OpenStack integration. For those related components
-please refer to documentations of each component:
-
-* https://wiki.openstack.org/wiki/Neutron
-* https://launchpad.net/networking-odl
-* http://git.openstack.org/cgit/openstack/networking-odl/
-* https://wiki.opendaylight.org/view/NeutronNorthbound:Main
-
-==== Use cases and who will use the feature
-If you want OpenStack integration with OpenDaylight, you will need this
-feature together with an OpenDaylight provider feature such as ovsdb/netvirt,
-group-based policy, VTN, or the LISP mapper. For provider configuration,
-please refer to each individual provider's documentation. The Neutron service
-only provides the northbound API for the OpenStack Neutron ML2 mechanism
-driver; without those provider features, it isn't useful by itself.
-
-=== Neutron Service feature Architecture
-The Neutron service provides northbound API for OpenStack Neutron via
-RESTCONF and also its dedicated REST API.
-It communicates through its YANG model with providers.
-
-image::neutron/odl-neutron-service-architecture.png[height="450px", width="550px", title="Neutron Service Architecture"]
-// image original https://docs.google.com/drawings/d/14CWAo1WQrCMHzNGDeg57P9CiqpkiAE4_njr_0OgAUsw/edit?usp=sharing
-
-
-=== Configuring Neutron Service feature
-As the Karaf feature includes everything necessary for communicating
-northbound, no special configuration is needed.
-Usually this feature is used together with an OpenDaylight southbound plugin
-that implements the actual network virtualization functionality, and with
-OpenStack Neutron. You will need to set up those configurations; please refer
-to the related documentation for each.
-
-=== Administering or Managing `odl-neutron-service`
-There is no specific configuration for the Neutron service itself.
-For related configuration, please refer to the OpenStack Neutron
-configuration and to the OpenDaylight services that act as providers for
-OpenStack.
-
-==== Installing `odl-neutron-service` while the controller is running
-
-. While OpenDaylight is running, in Karaf prompt, type:
-  `feature:install odl-neutron-service`.
-. Wait a while until the initialization is done and the controller stabilizes.
-
-`odl-neutron-service` provides only a unified interface for OpenStack Neutron.
-It doesn't provide actual functionality for network virtualization.
-Refer to each OpenDaylight project documentation for actual configuration with
-OpenStack Neutron.
-
-=== Neutron Logger
-Another service, the Neutron Logger, is provided for debugging/logging purposes.
-It logs changes on Neutron YANG models.
-
-  feature:install odl-neutron-logger
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/neutron-service-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_Log_Action.adoc b/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_Log_Action.adoc
deleted file mode 100644 (file)
index 5b8c8d1..0000000
+++ /dev/null
@@ -1,52 +0,0 @@
-==== Requirement
-
-* Before executing the following steps, please apply the default requirements. See section <<_default_requirements,Default Requirements>>.
-
-==== How to configure Log Action
-
-This section demonstrates the log action in the OF Renderer. The demonstration aims at enabling communication between two hosts and logging the flow statistics details of that particular traffic.
-
-===== Configuration
-
-Please execute the following CLI commands to test network intent using mininet:
-
-* To provision the network for the two hosts (h1 and h3), add intents that allow traffic in both directions by executing the following CLI commands.
-----
-intent:add -a ALLOW -t <DESTINATION_MAC> -f <SOURCE_MAC>
-----
-
-Example:
-----
-intent:add -a ALLOW -t 00:00:00:00:00:03 -f 00:00:00:00:00:01
-intent:add -a ALLOW -t 00:00:00:00:00:01 -f 00:00:00:00:00:03
-----
-
-* To log the flow statistics details of the particular traffic.
-----
-intent:add -a LOG -t <DESTINATION_MAC> -f <SOURCE_MAC>
-----
-
-Example:
-----
-intent:add -a LOG -t 00:00:00:00:00:03 -f 00:00:00:00:00:01
-----
-
-====== Verification
-
-* As we have applied the ALLOW action, ping should now succeed between hosts h1 and h3.
-----
- mininet> h1 ping h3
- PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
- 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
- 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
- 64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
-----
-
-* To view the flow statistics log details, such as byte count, packet count and duration, check karaf.log.
-----
-2015-12-15 22:56:20,256 | INFO | lt-dispatcher-23 | IntentFlowManager | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Creating block intent for endpoints: source00:00:00:00:00:01 destination 00:00:00:00:00:03
-2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Byte Count:Counter64 [_value=238]
-2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Packet Count:Counter64 [_value=3]
-2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Duration in Nano second:Counter32 [_value=678000000]
-2015-12-15 22:56:20,252 | INFO | lt-dispatcher-29 | FlowStatisticsListener | 264 - org.opendaylight.nic.of-renderer - 1.1.0.SNAPSHOT | Flow Statistics gathering for Duration in Second:Counter32 [_value=49]
-----
diff --git a/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_QoS_Attribute_Mapping.adoc b/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_QoS_Attribute_Mapping.adoc
deleted file mode 100644 (file)
index 987a881..0000000
+++ /dev/null
@@ -1,69 +0,0 @@
-==== How to configure QoS Attribute Mapping
-
-This section explains how to provision QoS attribute mapping constraint using NIC OF-Renderer.
-
-The QoS attribute mapping currently supports DiffServ. It uses a 6-bit differentiated services code point (DSCP) in the 8-bit differentiated services field (DS field) in the IP header.
-
-[options="header",cols="20%,80%"]
-|===
-| Action | Function
-|Allow | Permits the packet to be forwarded normally, but allows for packet header fields, e.g., DSCP, to be modified.
-|===
-
-The following steps explain the QoS attribute mapping function:
-
-* Initially configure the QoS profile which contains profile name and DSCP value.
-* When a packet is transferred from a source to a destination, the flow builder evaluates whether the packet matches the conditions in the flow, such as the action and the endpoints.
-* If the packet matches the endpoints, the flow builder applies the flow matching action and DSCP value.
-
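The DSCP-to-DS-field relationship can be checked with a short sketch (illustrative Python, not project code). The DSCP value 46 used later in this section corresponds to the ToS byte 184 seen in the flow dump:

```python
def dscp_to_tos(dscp):
    # The 6-bit DSCP occupies the upper bits of the 8-bit DS field,
    # so the byte written by mod_nw_tos is the DSCP shifted left by 2.
    if not 0 <= dscp <= 63:
        raise ValueError("valid DSCP values range from 0 to 63")
    return dscp << 2

print(dscp_to_tos(46))  # 184
```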
-===== Requirement
-
-* Before executing the following steps, please apply the default requirements. See section <<_default_requirements,Default Requirements>>.
-
-===== Configuration
-
-Please execute the following CLI commands to test network intent using mininet:
-
-* To apply the QoS constraint, configure the QoS profile.
-----
-intent:qosConfig -p <qos_profile_name> -d <valid_dscp_value>
-----
-
-Example:
-----
-intent:qosConfig -p High_Quality -d 46
-----
-NOTE: Valid DSCP value ranges from 0-63.
-
-* To provision the network for the two hosts (h1 and h3), add intents that allow traffic in both directions by executing the following CLI commands.
-
-The following demonstrates the ALLOW action with the QoS constraint and the QoS profile name.
-----
-intent:add -a ALLOW -t <DESTINATION_MAC> -f <SOURCE_MAC> -q QOS -p <qos_profile_name>
-----
-
-Example:
-----
-intent:add -a ALLOW -t 00:00:00:00:00:03 -f 00:00:00:00:00:01 -q QOS -p High_Quality
-intent:add -a ALLOW -t 00:00:00:00:00:01 -f 00:00:00:00:00:03 -q QOS -p High_Quality
-----
-
-====== Verification
-
-* As we have applied the ALLOW action, ping should now succeed between hosts h1 and h3.
-----
- mininet> h1 ping h3
- PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
- 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
- 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
- 64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
-----
-
-* Verify the flow entries and ensure that `mod_nw_tos` is part of the actions.
-----
- mininet> dpctl dump-flows
- *** s1 ------------------------------------------------------------------------
- NXST_FLOW reply (xid=0x4):
- cookie=0x0, duration=21.873s, table=0, n_packets=3, n_bytes=294, idle_age=21, priority=9000,dl_src=00:00:00:00:00:03,dl_dst=00:00:00:00:00:01 actions=NORMAL,mod_nw_tos:184
- cookie=0x0, duration=41.252s, table=0, n_packets=3, n_bytes=294, idle_age=41, priority=9000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:03 actions=NORMAL,mod_nw_tos:184
-----
diff --git a/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_Redirect_Action.adoc b/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_Redirect_Action.adoc
deleted file mode 100644 (file)
index 74ab717..0000000
+++ /dev/null
@@ -1,211 +0,0 @@
-==== How to configure Redirect Action
-
-This section explains the redirect action supported in NIC. The redirect functionality supports forwarding (redirecting) the traffic to a service configured in SFC before forwarding it to the destination.
-
-.REDIRECT SERVICE
-image::nic/Service_Chaining.png[REDIRECT SERVICE,width=400]
-
-The following steps explain the Redirect action:
-
-* Configure the service in SFC using the SFC APIs.
-* Configure the intent with the redirect action and the information of the service to which the traffic needs to be redirected.
-* The flows are computed as follows:
-. First flow entry between the node connected to the source host and the ingress node of the configured service.
-. Second flow entry between the egress node of the configured service and the node connected to the destination host.
-. Third flow entry between the destination host node and the source host node.
-
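The three flow entries can be summarized in a small sketch (illustrative Python; the helper name is invented, and the node IDs follow the `openflow:N` naming used in the SFC configuration later in this section):

```python
def redirect_flow_pairs(src_node, dst_node, svc_ingress, svc_egress):
    # Forward traffic detours through the service; return traffic is direct.
    return [
        (src_node, svc_ingress),  # 1: source host's node -> service ingress
        (svc_egress, dst_node),   # 2: service egress -> destination host's node
        (dst_node, src_node),     # 3: destination host's node -> source host's node
    ]

pairs = redirect_flow_pairs("openflow:1", "openflow:2", "openflow:3", "openflow:4")
```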
-
-===== Requirement
-* Save the mininet <<_simple_mininet_topology,Simple Mininet topology>> script as redirect_test.py
-
-* Start mininet, and create switches in it.
-
-Replace <Controller IP> based on your environment.
-
-----
-sudo mn --controller=remote,ip=<Controller IP> --custom redirect_test.py --topo mytopo2
-----
-
-----
- mininet> net
- h1 h1-eth0:s1-eth1
- h2 h2-eth0:s1-eth2
- h3 h3-eth0:s2-eth1
- h4 h4-eth0:s2-eth2
- h5 h5-eth0:s2-eth3
- srvc1 srvc1-eth0:s3-eth3 srvc1-eth1:s4-eth3
- s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0 s1-eth3:s2-eth4 s1-eth4:s3-eth2
- s2 lo:  s2-eth1:h3-eth0 s2-eth2:h4-eth0 s2-eth3:h5-eth0 s2-eth4:s1-eth3 s2-eth5:s4-eth1
- s3 lo:  s3-eth1:s4-eth2 s3-eth2:s1-eth4 s3-eth3:srvc1-eth0
- s4 lo:  s4-eth1:s2-eth5 s4-eth2:s3-eth1 s4-eth3:srvc1-eth1
- c0
-----
-
-===== Starting Karaf
-
-* Before executing the following steps, complete the default requirements. See section <<_default_requirements,Downloading and deploying the Karaf distribution>>.
-
-===== Configuration
-
-====== Mininet
-
-.CONFIGURATION OF THE NETWORK IN MININET
-image::nic/Redirect_flow.png[CONFIGURATION OF THE NETWORK]
-
-* Configure srvc1 as a service node in the mininet environment.
-
-Execute the following commands in the mininet console (where the mininet script is running).
-----
- srvc1 ip addr del 10.0.0.6/8 dev srvc1-eth0
- srvc1 brctl addbr br0
- srvc1 brctl addif br0 srvc1-eth0
- srvc1 brctl addif br0 srvc1-eth1
- srvc1 ifconfig br0 up
- srvc1 tc qdisc add dev srvc1-eth1 root netem delay 200ms
-----
-
-====== Configure service in SFC
-The service (srvc1) is configured using the SFC REST API. As part of this configuration, the ingress and egress nodes connected to the service are specified.
-
-----
-curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{
-  "service-functions": {
-    "service-function": [
-      {
-        "name": "srvc1",
-        "sf-data-plane-locator": [
-          {
-            "name": "Egress",
-            "service-function-forwarder": "openflow:4"
-          },
-          {
-            "name": "Ingress",
-            "service-function-forwarder": "openflow:3"
-          }
-        ],
-        "nsh-aware": false,
-        "type": "delay"
-      }
-    ]
-  }
-}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
-----
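If you prefer to build the payload programmatically, here is a minimal Python sketch of the same service-function body. The helper name is illustrative; the structure mirrors the curl body above, and the actual PUT request is omitted.

```python
import json

# Sketch: build the service-function payload used in the curl call above.
# The helper name is illustrative; only the JSON structure is taken from
# the documented request body.

def service_function(name, ingress_sff, egress_sff, sf_type):
    return {
        "service-functions": {
            "service-function": [{
                "name": name,
                "sf-data-plane-locator": [
                    {"name": "Egress", "service-function-forwarder": egress_sff},
                    {"name": "Ingress", "service-function-forwarder": ingress_sff},
                ],
                "nsh-aware": False,
                "type": sf_type,
            }]
        }
    }

payload = json.dumps(service_function("srvc1", "openflow:3", "openflow:4", "delay"))
```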
-
-*SFF RESTCONF Request*
-
-Configuring switch and port information for the service functions.
-----
-curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{
-  "service-function-forwarders": {
-    "service-function-forwarder": [
-      {
-        "name": "openflow:3",
-        "service-node": "OVSDB2",
-        "sff-data-plane-locator": [
-          {
-            "name": "Ingress",
-            "data-plane-locator":
-            {
-                "vlan-id": 100,
-                "mac": "11:11:11:11:11:11",
-                "transport": "service-locator:mac"
-            },
-            "service-function-forwarder-ofs:ofs-port":
-            {
-                "port-id" : "3"
-            }
-          }
-        ],
-        "service-function-dictionary": [
-          {
-            "name": "srvc1",
-            "sff-sf-data-plane-locator":
-            {
-                "sf-dpl-name" : "openflow:3",
-                "sff-dpl-name" : "Ingress"
-            }
-          }
-        ]
-      },
-      {
-        "name": "openflow:4",
-        "service-node": "OVSDB3",
-        "sff-data-plane-locator": [
-          {
-            "name": "Egress",
-            "data-plane-locator":
-            {
-                "vlan-id": 200,
-                "mac": "44:44:44:44:44:44",
-                "transport": "service-locator:mac"
-            },
-            "service-function-forwarder-ofs:ofs-port":
-            {
-                "port-id" : "3"
-            }
-          }
-        ],
-        "service-function-dictionary": [
-          {
-            "name": "srvc1",
-            "sff-sf-data-plane-locator":
-            {
-                "sf-dpl-name" : "openflow:4",
-                "sff-dpl-name" : "Egress"
-            }
-          }
-        ]
-      }
-    ]
-  }
-}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
-----
-
-====== CLI Command
-To provision the network for the two hosts (h1 and h5), add an intent with the
-redirect action and the service name srvc1.
-
-----
-intent:add -f <SOURCE_MAC> -t <DESTINATION_MAC> -a REDIRECT -s <SERVICE_NAME>
-----
-
-Example:
-----
-intent:add -f 32:bc:ec:65:a7:d1 -t c2:80:1f:77:41:ed -a REDIRECT -s srvc1
-----
-
-====== Verification
-
-* As the redirect action has been applied, ping should now succeed between hosts h1 and h5.
-----
- mininet> h1 ping h5
- PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
- 64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=201 ms
- 64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=200 ms
- 64 bytes from 10.0.0.5: icmp_seq=4 ttl=64 time=200 ms
-----
-The redirect functionality can be verified from the time taken by the ping operation (about 200ms). The service srvc1 configured using SFC introduces a 200ms delay, and since traffic from h1 to h5 is redirected via srvc1, each ping takes about 200ms.
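As a quick sanity check on those numbers (a sketch using the values from this example): netem adds its delay each time a packet leaves srvc1-eth1, and only the redirected h1-to-h5 direction traverses the service, while the reply takes the direct return path installed by the third flow entry.

```python
# Sanity check for the observed ping time, using the values from this
# example: the netem delay on srvc1-eth1 is 200 ms and only the forward
# (h1 -> h5) direction of the redirected traffic crosses that interface.

NETEM_DELAY_MS = 200
traversals_per_round_trip = 1  # the ICMP reply takes the direct return path
expected_rtt_ms = NETEM_DELAY_MS * traversals_per_round_trip
```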
-
-* Flow entries added to nodes for the redirect action.
-----
- mininet> dpctl dump-flows
- *** s1 ------------------------------------------------------------------------
- NXST_FLOW reply (xid=0x4):
- cookie=0x0, duration=9.406s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=1,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:4
- cookie=0x0, duration=9.475s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=3,dl_src=c2:80:1f:77:41:ed, dl_dst=32:bc:ec:65:a7:d1 actions=output:1
- cookie=0x1, duration=362.315s, table=0, n_packets=144, n_bytes=12240, idle_age=4, priority=9500,dl_type=0x88cc actions=CONTROLLER:65535
- cookie=0x1, duration=362.324s, table=0, n_packets=4, n_bytes=168, idle_age=3, priority=10000,arp actions=CONTROLLER:65535,NORMAL
- *** s2 ------------------------------------------------------------------------
- NXST_FLOW reply (xid=0x4):
- cookie=0x0, duration=9.503s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=3,dl_src=c2:80:1f:77:41:ed, dl_dst=32:bc:ec:65:a7:d1 actions=output:4
- cookie=0x0, duration=9.437s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=5,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:3
- cookie=0x3, duration=362.317s, table=0, n_packets=144, n_bytes=12240, idle_age=4, priority=9500,dl_type=0x88cc actions=CONTROLLER:65535
- cookie=0x3, duration=362.32s, table=0, n_packets=4, n_bytes=168, idle_age=3, priority=10000,arp actions=CONTROLLER:65535,NORMAL
- *** s3 ------------------------------------------------------------------------
- NXST_FLOW reply (xid=0x4):
- cookie=0x0, duration=9.41s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=2,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:3
- *** s4 ------------------------------------------------------------------------
- NXST_FLOW reply (xid=0x4):
- cookie=0x0, duration=9.486s, table=0, n_packets=6, n_bytes=588, idle_age=3, priority=9000,in_port=3,dl_src=32:bc:ec:65:a7:d1, dl_dst=c2:80:1f:77:41:ed actions=output:1
-----
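The redirect entries in the dump above can be checked mechanically by pulling the match and action fields out of each priority-9000 line. A sketch (the field layout follows the OVS output shown above):

```python
import re

# Sketch: extract the priority-9000 redirect entries from a
# "dpctl dump-flows" line, using the field layout in the output above.

def parse_flow(line):
    m = re.search(
        r"in_port=(\d+),dl_src=([0-9a-f:]+),\s*dl_dst=([0-9a-f:]+)\s+actions=output:(\d+)",
        line)
    if not m:
        return None
    in_port, dl_src, dl_dst, out_port = m.groups()
    return {"in_port": int(in_port), "dl_src": dl_src,
            "dl_dst": dl_dst, "out_port": int(out_port)}

sample = (" cookie=0x0, duration=9.406s, table=0, n_packets=6, n_bytes=588, "
          "idle_age=3, priority=9000,in_port=1,dl_src=32:bc:ec:65:a7:d1, "
          "dl_dst=c2:80:1f:77:41:ed actions=output:4")
flow = parse_flow(sample)
```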
diff --git a/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_VTN_Renderer.adoc b/manuals/user-guide/src/main/asciidoc/nic/NIC_How_To_configure_VTN_Renderer.adoc
deleted file mode 100644 (file)
index 54a347e..0000000
+++ /dev/null
@@ -1,90 +0,0 @@
-==== How to configure VTN Renderer
-
-This section demonstrates how to allow or block traffic within the VTN Renderer, according to the specified flow conditions.
-
-The table below lists the actions to be applied when a packet matches the condition:
-[options="header",cols="20%,80%"]
-|===
-| Action | Function
-|Allow | Permits the packet to be forwarded normally.
-|Block | Discards the packet preventing it from being forwarded.
-|===
-
-===== Requirement
-
-* Before executing the following steps, complete the default requirements. See section <<_default_requirements,Default Requirements>>.
-
-===== Configuration
-
-Please execute the following curl commands to test network intent using mininet:
-
-====== Create Intent
-
-To provision the network for the two hosts (h1 and h2) with the allow action:
-----
-curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436034 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436034", "intent:actions" : [ { "order" : 2, "allow" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"10.0.0.1"} }, { "order":2 , "end-point-group" : {"name":"10.0.0.2"}} ] } }'
-----
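The intent body in the curl call above can also be built programmatically. A minimal sketch (the helper name is illustrative; the JSON keys mirror the documented request):

```python
import json
import uuid

# Sketch: build the intent body used in the curl call above. The helper
# name is illustrative; the keys mirror the documented request body.

def intent_body(intent_id, action, src, dst):
    return {"intent:intent": {
        "intent:id": intent_id,
        "intent:actions": [{"order": 2, action: {}}],
        "intent:subjects": [
            {"order": 1, "end-point-group": {"name": src}},
            {"order": 2, "end-point-group": {"name": dst}},
        ],
    }}

iid = str(uuid.uuid4())  # any UUID works; the curl example uses a fixed one
body = json.dumps(intent_body(iid, "allow", "10.0.0.1", "10.0.0.2"))
```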
-
-To provision the network for the two hosts (h2 and h3) with the allow action:
-----
-curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436035 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436035", "intent:actions" : [ { "order" : 2, "allow" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"10.0.0.2"} }, { "order":2 , "end-point-group" : {"name":"10.0.0.3"}} ] } }'
-----
-
-====== Verification
-
-As the allow action has been applied, ping should now succeed between hosts h1 and h2, and between h2 and h3.
-----
- mininet> pingall
- Ping: testing ping reachability
- h1 -> h2 X X
- h2 -> h1 h3 X
- h3 -> X h2 X
- h4 -> X X X
-----
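The `pingall` output above can be turned into a reachability map with a few lines of Python (a sketch; it simply splits the `host -> targets` lines shown in the mininet output):

```python
# Sketch: turn mininet "pingall" output into a reachability mapping
# (host -> set of reachable hosts), following the format shown above.

def parse_pingall(text):
    reach = {}
    for line in text.splitlines():
        line = line.strip()
        if "->" not in line:
            continue  # skip the "Ping: testing ping reachability" header
        src, targets = line.split("->")
        reach[src.strip()] = {t for t in targets.split() if t != "X"}
    return reach

output = """\
Ping: testing ping reachability
h1 -> h2 X X
h2 -> h1 h3 X
h3 -> X h2 X
h4 -> X X X"""
reach = parse_pingall(output)
```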
-
-====== Update the intent
-
-To provision the block action, which disallows traffic between h1 and h2:
-----
-curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436034 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436034", "intent:actions" : [ { "order" : 2, "block" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"10.0.0.1"} }, { "order":2 , "end-point-group" : {"name":"10.0.0.2"}} ] } }'
-----
-
-====== Verification
-
-As the block action has been applied, ping should no longer succeed between hosts h1 and h2.
-----
- mininet> pingall
- Ping: testing ping reachability
- h1 -> X X X
- h2 -> X h3 X
- h3 -> X h2 X
- h4 -> X X X
-----
-
-NOTE: Old actions and hosts are replaced by the new action and hosts.
-
-====== Delete the intent
-
-Deleting the intent removes both the intent and the corresponding traffic flows.
-----
-curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X DELETE http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436035
-----
-
-====== Verification
-
-After the intent is deleted, the flows are removed and connectivity is lost:
-----
- mininet> pingall
- Ping: testing ping reachability
- h1 -> X X X
- h2 -> X X X
- h3 -> X X X
- h4 -> X X X
-----
-
-NOTE: Ping between two hosts can also be enabled using MAC addresses.
-
-To provision the network for the two hosts using their MAC addresses (h1 and h2):
-----
-curl -v --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X PUT http://localhost:8181/restconf/config/intent:intents/intent/b9a13232-525e-4d8c-be21-cd65e3436035 -d '{ "intent:intent" : { "intent:id": "b9a13232-525e-4d8c-be21-cd65e3436035", "intent:actions" : [ { "order" : 2, "allow" : {} } ], "intent:subjects" : [ { "order":1 , "end-point-group" : {"name":"6e:4f:f7:27:15:c9"} }, { "order":2 , "end-point-group" : {"name":"aa:7d:1f:4a:70:81"}} ] } }'
-----
diff --git a/manuals/user-guide/src/main/asciidoc/nic/NIC_redirect_test_topology.adoc b/manuals/user-guide/src/main/asciidoc/nic/NIC_redirect_test_topology.adoc
deleted file mode 100644 (file)
index 4a623cf..0000000
+++ /dev/null
@@ -1,52 +0,0 @@
-=== Simple Mininet topology
-
-[source,python]
-----
-#!/usr/bin/python
-
-from mininet.topo import Topo
-
-class SimpleTopology( Topo ):
-    "Simple topology example."
-
-    def __init__( self ):
-        "Create custom topo."
-
-        # <1>
-        Topo.__init__( self )
-
-        # <2>
-        Switch1 = self.addSwitch( 's1' )
-        Switch2 = self.addSwitch( 's2' )
-        Switch3 = self.addSwitch( 's3' )
-        Switch4 = self.addSwitch( 's4' )
-        Host11 = self.addHost( 'h1' )
-        Host12 = self.addHost( 'h2' )
-        Host21 = self.addHost( 'h3' )
-        Host22 = self.addHost( 'h4' )
-        Host23 = self.addHost( 'h5' )
-        Service1 = self.addHost( 'srvc1' ) # <3>
-
-        # <4>
-        self.addLink( Host11, Switch1 )
-        self.addLink( Host12, Switch1 )
-        self.addLink( Host21, Switch2 )
-        self.addLink( Host22, Switch2 )
-        self.addLink( Host23, Switch2 )
-        self.addLink( Switch1, Switch2 )
-        self.addLink( Switch2, Switch4 )
-        self.addLink( Switch4, Switch3 )
-        self.addLink( Switch3, Switch1 )
-        self.addLink( Switch3, Service1 )
-        self.addLink( Switch4, Service1 )
-
-
-topos = { 'simpletopology': ( lambda: SimpleTopology() ) }
-----
-<1> Initialize topology
-<2> Add hosts and switches
-<3> Host used to represent the service
-<4> Add links
-
-[quote]
-Source: https://gist.github.com/vinothgithub15/315d0a427d5afc39f2d7
diff --git a/manuals/user-guide/src/main/asciidoc/nic/NIC_requirements.adoc b/manuals/user-guide/src/main/asciidoc/nic/NIC_requirements.adoc
deleted file mode 100644 (file)
index e360949..0000000
+++ /dev/null
@@ -1,35 +0,0 @@
-==== Default Requirements
-
-Start mininet, and create three switches (s1, s2, and s3) and four hosts (h1, h2, h3, and h4) in it.
-
-Replace <Controller IP> based on your environment.
-
-----
-$ sudo mn --mac --topo tree,2 --controller=remote,ip=<Controller IP>
-----
-
-----
- mininet> net
- h1 h1-eth0:s2-eth1
- h2 h2-eth0:s2-eth2
- h3 h3-eth0:s3-eth1
- h4 h4-eth0:s3-eth2
- s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
- s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
- s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
-----
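The `net` listing can be turned into a link map with a few lines of Python (a sketch; it just splits the `interface:peer` tokens shown above):

```python
# Sketch: parse a mininet "net" listing into a link map
# (interface -> peer interface). Lines follow "name if:peer if:peer ...".

def parse_net(text):
    links = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        for token in parts[1:]:
            if ":" in token:
                local, peer = token.split(":", 1)
                if peer:  # skip bare "lo:" entries with no peer
                    links[local] = peer
    return links

net_output = """\
h1 h1-eth0:s2-eth1
h2 h2-eth0:s2-eth2
s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3"""
links = parse_net(net_output)
```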
-
-==== Downloading and deploying the Karaf distribution
-* Get the Beryllium distribution.
-
-* Unzip the downloaded zip distribution.
-
-* Run Karaf:
-----
-./bin/karaf
-----
-
-* Once the console is up, type the following to install the features:
-----
-feature:install odl-nic-core-mdsal odl-nic-console odl-nic-listeners
-----
index c4b0399270646004da887202972b61bb86d75957..32fbe5fdf62da2f93fb7b9014df3b4c308544a51 100644 (file)
@@ -1,217 +1,3 @@
 == Network Intent Composition (NIC) User Guide
 
-=== Overview
-Network Intent Composition (NIC) is an interface that allows clients to
-express a desired state in an implementation-neutral form that will be
-enforced via modification of available resources under the control of
-the OpenDaylight system.
-
-This description is purposely abstract as an intent interface might
-encompass network services, virtual devices, storage, etc.
-
-The intent interface is meant to be a controller-agnostic interface
-so that "intents" are portable across implementations, such as OpenDaylight
-and ONOS. Thus an intent specification should not contain implementation
-or technology specifics.
-
-The intent specification will be implemented by decomposing the intent
-and augmenting it with implementation specifics that are driven by
-local implementation rules, policies, and/or settings.
-
-=== Network Intent Composition (NIC) Architecture
-The core of the NIC architecture is the intent model, which specifies
-the details of the desired state. It is the responsibility of the NIC
-implementation to map this desired state onto the resources under
-the control of OpenDaylight. The component that transforms the
-intent to the implementation is typically referred to as a renderer.
-
-For the Boron release, multiple, simultaneous renderers will not be supported.
-Instead either the VTN or GBP renderer feature can be installed, but
-not both.
-
-For the Boron release, the only actions supported are "ALLOW" and
-"BLOCK". The "ALLOW" action indicates that traffic can flow between
-the source and destination end points, while "BLOCK" prevents that
-flow; although it is possible that a given implementation may augment
-the available actions with additional actions.
-
-Besides transforming a desired state to an actual state it is the
-responsibility of a renderer to update the operational state tree for
-the NIC data model in OpenDaylight to reflect the intent which the
-renderer implemented.
-
-=== Configuring Network Intent Composition (NIC)
-For the Boron release there is no default implementation of a renderer,
-thus without an additional module installed the NIC will not function.
-
-=== Administering or Managing Network Intent Composition (NIC)
-There is no additional administration of management capabilities
-related to the Network Intent Composition features.
-
-=== Interactions
-A user can interact with the Network Intent Composition (NIC) either
-through the RESTful interface using standard RESTCONF operations and
-syntax or via the Karaf console CLI.
-
-==== REST
-
-===== Configuration
-The Network Intent Composition (NIC) feature supports the following REST
-operations against the configuration data store.
-
-* POST - creates a new instance of an intent in the configuration store,
-which will trigger the realization of that intent. An ID _must_ be specified
-as part of this request as an attribute of the intent.
-
-* GET - fetches a list of all configured intents or a specific configured
-intent.
-
-* DELETE - removes a configured intent from the configuration store, which
-triggers the removal of the intent from the network.
-
-===== Operational
-The Network Intent Composition (NIC) feature supports the following REST
-operations against the operational data store.
-
-* GET - fetches a list of all operational intents or a specific operational
-intent.
-
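The operations above map onto plain HTTP requests against the intent model. A sketch using Python's standard library follows; the URLs mirror the curl examples in this guide, and the requests are built but not sent.

```python
import json
import urllib.request

# Sketch of the RESTCONF requests corresponding to the operations above.
# URLs follow the curl examples in this guide; requests are constructed
# but not sent (sending them requires a running controller).

BASE = "http://localhost:8181/restconf"
CONFIG_INTENTS = BASE + "/config/intent:intents"

def build_request(method, url, body=None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

create = build_request("POST", CONFIG_INTENTS,
                       {"intent": {"id": "b9a13232-525e-4d8c-be21-cd65e3436034"}})
fetch = build_request("GET", CONFIG_INTENTS)
delete = build_request("DELETE",
                       CONFIG_INTENTS + "/intent/b9a13232-525e-4d8c-be21-cd65e3436034")
```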
-==== Karaf Console CLI
-This feature provides Karaf console CLI commands to manipulate the intent
-data model. The CLI essentially invokes the equivalent data operations.
-
-===== intent:add
-
-Creates a new intent in the configuration data tree
-
-----
-DESCRIPTION
-        intent:add
-
-    Adds an intent to the controller.
-
-Examples: --actions [ALLOW] --from <subject> --to <subject>
-          --actions [BLOCK] --from <subject>
-
-SYNTAX
-        intent:add [options]
-
-OPTIONS
-        -a, --actions
-                Action to be performed.
-                -a / --actions BLOCK/ALLOW
-                (defaults to [BLOCK])
-        --help
-                Display this help message
-        -t, --to
-                Second Subject.
-                -t / --to <subject>
-                (defaults to any)
-        -f, --from
-                First subject.
-                -f / --from <subject>
-                (defaults to any)
-----
-
-===== intent:delete
-Removes an existing intent from the system
-
-----
-DESCRIPTION
-        intent:remove
-
-    Removes an intent from the controller.
-
-SYNTAX
-        intent:remove id
-
-ARGUMENTS
-        id  Intent Id
-----
-
-===== intent:list
-Lists all the intents in the system
-
-----
-DESCRIPTION
-        intent:list
-
-    Lists all intents in the controller.
-
-SYNTAX
-        intent:list [options]
-
-OPTIONS
-        -c, --config
-                List Configuration Data (optional).
-                -c / --config <ENTER>
-        --help
-                Display this help message
-----
-
-===== intent:show
-Displays the details of a single intent
-
-----
-DESCRIPTION
-        intent:show
-
-    Shows detailed information about an intent.
-
-SYNTAX
-        intent:show id
-
-ARGUMENTS
-        id  Intent Id
-----
-
-
-===== intent:map
-
-List/Add/Delete current state from/to the mapping service.
-
-----
-DESCRIPTION
-        intent:map
-
-        List/Add/Delete current state from/to the mapping service.
-
-SYNTAX
-        intent:map [options]
-
-         Examples: --list, -l [ENTER], to retrieve all keys.
-                   --add-key <key> [ENTER], to add a new key with empty contents.
-                   --del-key <key> [ENTER], to remove a key with its values.
-                   --add-key <key> --value [<value 1>, <value 2>, ...] [ENTER],
-                     to add a new key with some values (json format).
-OPTIONS
-       --help
-           Display this help message
-       -l, --list
-           List values associated with a particular key.
-       -l / --filter <regular expression> [ENTER]
-       --add-key
-           Adds a new key to the mapping service.
-       --add-key <key name> [ENTER]
-       --value
-           Specifies which value should be added/delete from the mapping service.
-       --value "key=>value"... --value "key=>value" [ENTER]
-           (defaults to [])
-       --del-key
-           Deletes a key from the mapping service.
-       --del-key <key name> [ENTER]
-----
-
-=== NIC Usage Examples
-
-include::NIC_requirements.adoc[]
-
-include::NIC_redirect_test_topology.adoc[]
-
-include::NIC_How_To_configure_VTN_Renderer.adoc[How to configure VTN Renderer]
-
-include::NIC_How_To_configure_Redirect_Action.adoc[How to configure Redirect Action]
-
-include::NIC_How_To_configure_QoS_Attribute_Mapping.adoc[How to configure QoS Attribute Mapping]
-
-include::NIC_How_To_configure_Log_Action.adoc[How to configure Log Action]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/network-intent-composition-(nic)-user-guide.html
index dbced3ebe8b456c68457100d9662453875d26b4e..eb4873d432cef86d86c8924cac9140d4d3ae980a 100644 (file)
@@ -1,170 +1,3 @@
 == OCP Plugin User Guide
-This document describes how to use the ORI Control & Management Protocol (OCP)
-feature in OpenDaylight. This document contains overview, scope, architecture and
-design, installation, configuration and tutorial sections for the feature.
 
-=== Overview
-OCP is an ETSI standard protocol for control and management of Remote Radio Head (RRH)
-equipment. The OCP Project addresses the need for a southbound plugin that allows
-applications and controller services to interact with RRHs using OCP. The OCP southbound
-plugin will allow applications acting as a Radio Equipment Control (REC) to interact
-with RRHs that support an OCP agent.
-
-.OCP southbound plugin
-image::ocpplugin/ocp-sb-plugin.jpg[OCP southbound plugin, 550, 350]
-
-It is foreseen that, in 5G, C-RAN will use the packet-based Transport-SDN (T-SDN) as the
-fronthaul network to transport both control plane and user plane data between RRHs and
-BBUs. As a result, the addition of the OCP plugin to OpenDaylight will make it
-possible to build an RRH controller on top of OpenDaylight to centrally manage deployed RRHs,
-as well as integrating the RRH controller with T-SDN on one single platform, achieving
-the joint RRH and fronthaul network provisioning in C-RAN.
-
-=== Scope
-The OCP Plugin project includes:
-
-* OCP v4.1.1 support
-* Integration of OCP protocol library
-* Simple API invoked as an RPC
-* Simple API that allows applications to perform elementary functions of the following categories:
-  - Device management
-  - Config management
-  - Object lifecycle
-  - Object state management
-  - Fault management
-  - Software management (not implemented as of Boron)
-* Indication processing
-* Logging (not implemented as of Boron)
-* AISG/Iuant interface message tunnelling (not implemented as of Boron)
-* ALD connection management (not implemented as of Boron)
-
-=== Architecture and Design
-OCP is a vendor-neutral standard communications interface defined to enable control and management
-between RE and REC of an ORI architecture. The OCP Plugin supports the implementation of the OCP
-specification; it is based on the Model Driven Service Abstraction Layer (MD-SAL) architecture.
-
-OCP Plugin will support the following functionality:
-
-* Connection handling
-* Session management
-* State management
-* Error handling
-* Connection establishment will be handled by the OCP library using the open-source netty.io library
-* Message handling
-* Event/indication handling and propagation to upper layers
-
-*Activities in OCP plugin module*
-
-* Integration with OCP protocol library
-* Integration with corresponding MD-SAL infrastructure
-
-OCP protocol library is a component in OpenDaylight that mediates communication between
-OpenDaylight controller and RRHs supporting OCP protocol. Its primary goal is to provide
-the OCP Plugin with a communication channel that can be used for managing RRHs.
-
-Key objectives:
-
-* Immutable transfer objects generation (transformation of OCP protocol library's POJO
-objects into OpenDaylight DTO objects)
-* Scalable non-blocking implementation
-* Pipeline processing
-* Scatter buffer
-* TLS support
-
-OCP Service addresses the need for a northbound interface that allows applications and other
-controller services to interact with RRHs using OCP, by providing API for abstracting OCP operations.
-
-.Overall architecture
-image::ocpplugin/plugin-design.jpg[Overall architecture, 550, 284]
-
-=== Message Flow
-.Message flow example
-image::ocpplugin/message_flow.jpg[Message flow example, 550, 335]
-
-=== Installation
-The OCP Plugin project has two top level Karaf features, odl-ocpplugin-all and odl-ocpjava-all, which contain the following sub-features:
-
-* odl-ocpplugin-southbound
-* odl-ocpplugin-app-ocp-service
-* odl-ocpjava-protocol
-
-The OCP service (odl-ocpplugin-app-ocp-service), together with the OCP southbound (odl-ocpplugin-southbound) and OCP protocol library (odl-ocpjava-protocol), provides OpenDaylight with basic OCP v4.1.1 functionality.
-
-There are two ways to interact with the OCP service: via RESTCONF (programmatic) or via the DLUX web interface (manual). Install the following features to enable RESTCONF and DLUX:
-----
-karaf#>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core odl-dlux-all
-----
-Then install the odl-ocpplugin-all feature which includes the odl-ocpplugin-southbound and odl-ocpplugin-app-ocp-service features. Note that the odl-ocpjava-all feature will be installed automatically as the odl-ocpplugin-southbound feature is dependent on the odl-ocpjava-protocol feature.
-----
-karaf#>feature:install odl-ocpplugin-all
-----
-After all required features are installed, use the following command from the Karaf console to check that the features are correctly installed and initialized.
-----
-karaf#>feature:list | grep ocp
-----
-
-=== Configuration
-Configuring the OCP plugin can be done via its configuration file, 62-ocpplugin.xml, which can be found in the <odl-install-dir>/etc/opendaylight/karaf/ directory.
-
-As of Boron, the following settings are configurable:
-
-. **port** specifies the port number on which the OCP plugin listens for connection requests
-. **radioHead-idle-timeout** determines the time duration (unit: milliseconds) for which a radio head has been idle before the idle event is triggered to perform health check
-. **ocp-version** specifies the OCP protocol version supported by the OCP plugin
-. **rpc-requests-quota** sets the maximum number of concurrent rpc requests allowed
-. **global-notification-quota** sets the maximum number of concurrent notifications allowed
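A sketch of what such a configuration fragment might look like, parsed with Python. The element names mirror the setting names listed above and the values are illustrative; consult the shipped 62-ocpplugin.xml for the authoritative schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: element names mirror the setting names above, and
# the values are made up; they are not the exact schema or defaults of
# the shipped 62-ocpplugin.xml.

fragment = """
<ocp-plugin-config>
  <port>1033</port>
  <radioHead-idle-timeout>15000</radioHead-idle-timeout>
  <ocp-version>4.1.1</ocp-version>
  <rpc-requests-quota>20000</rpc-requests-quota>
  <global-notification-quota>64000</global-notification-quota>
</ocp-plugin-config>
"""

root = ET.fromstring(fragment)
settings = {child.tag: child.text for child in root}
```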
-
-.OCP plugin configuration
-image::ocpplugin/plugin-config.jpg[OCP plugin configuration, 550, 449]
-
-=== Test Environment
-The OCP Plugin project contains a simple OCP agent for testing purposes; the agent is designed to act as a fake radio head device, giving you an idea of what the OCP handshake between the OCP agent and OpenDaylight (OCP plugin) looks like.
-
-To run the simple OCP agent, first download its JAR file from the OpenDaylight Nexus repository.
-----
-wget https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/ocpplugin/simple-agent/0.1.0-Boron/simple-agent-0.1.0-Boron.jar
-----
-Then run the agent with no arguments (assuming you already have JDK 1.8 or above installed); it should display a usage message listing the expected arguments.
-----
-java -classpath simple-agent-0.1.0-Boron.jar org.opendaylight.ocpplugin.OcpAgent
-
-Usage: java org.opendaylight.ocpplugin.OcpAgent <controller's ip address> <port number> <vendor id> <serial number>
-----
-Here is an example:
-----
-java -classpath simple-agent-0.1.0-Boron.jar org.opendaylight.ocpplugin.OcpAgent 127.0.0.1 1033 XYZ 123
-----
-
-=== Web / Graphical Interface
-Once you enable the DLUX feature, you can access the controller GUI using the following URL.
-----
-http://<controller-ip>:8080/index.html
-----
-Expand Nodes. You should see all the radio head devices that are connected to the controller running at <controller-ip>.
-
-.DLUX Nodes
-image::ocpplugin/dlux-ocp-nodes.jpg[DLUX Nodes, 550, 312]
-
-Expand Yang UI if you want to browse the various northbound APIs exposed by the OCP service.
-
-.DLUX Yang UI
-image::ocpplugin/dlux-ocp-apis.jpg[DLUX Yang UI, 550, 468]
-
-For information on how to use these northbound APIs, please refer to the OCP Plugin Developer Guide.
-
-=== Programmatic Interface
-The OCP Plugin project has implemented a complete set of the C&M operations (elementary functions) defined
-in the OCP specification, in the form of both northbound and southbound APIs, including:
-
-* health-check
-* set-time
-* re-reset
-* get-param
-* modify-param
-* create-obj
-* delete-obj
-* get-state
-* modify-state
-* get-fault
-
-These APIs are documented in the OCP Plugin Developer Guide, under the Southbound API and Northbound API sections respectively.
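As a sketch of how one of these operations is invoked programmatically, the usual RESTCONF pattern is a POST to /restconf/operations/<module>:<rpc>. The module prefix `ocp-service` and the `node-id` input field below are assumptions made for illustration; the authoritative RPC and field names are in the OCP Plugin Developer Guide.

```python
import json
import urllib.request

# Sketch: invoking the health-check operation as a RESTCONF RPC.
# The module prefix "ocp-service" and the "node-id" input field are
# assumed for illustration; check the Developer Guide for exact names.

url = "http://localhost:8181/restconf/operations/ocp-service:health-check"
body = {"input": {"node-id": "ocp:XYZ-123"}}
req = urllib.request.Request(url, data=json.dumps(body).encode(), method="POST")
req.add_header("Content-Type", "application/json")
# urllib.request.urlopen(req)  # needs a running controller; not executed here
```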
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/ocp-plugin-user-guide.html
index c92e8e4093207b91ec547fe18c7ad72773aa289f..bd15f8d0784436bc2d495f99f399cf7cd664bd89 100755 (executable)
@@ -1,69 +1,3 @@
 == OF-CONFIG User Guide ==
 
-=== Overview ===
-OF-CONFIG defines an OpenFlow switch as an abstraction called an
-OpenFlow Logical Switch. The OF-CONFIG protocol enables configuration of
-essential artifacts of an OpenFlow Logical Switch so that an OpenFlow
-controller can communicate and control the OpenFlow Logical switch via
-the OpenFlow protocol. OF-CONFIG introduces an operating context for one
-or more OpenFlow data paths called an OpenFlow Capable Switch for one or
-more switches. An OpenFlow Capable Switch is intended to be equivalent
-to an actual physical or virtual network element (e.g. an Ethernet
-switch) which is hosting one or more OpenFlow data paths by partitioning
-a set of OpenFlow related resources such as ports and queues among the
-hosted OpenFlow data paths. The OF-CONFIG protocol enables dynamic
-association of the OpenFlow related resources of an OpenFlow Capable
-Switch with specific OpenFlow Logical Switches which are being hosted on
-the OpenFlow Capable Switch. OF-CONFIG does not specify or report how
-the partitioning of resources on an OpenFlow Capable Switch is achieved.
-OF-CONFIG assumes that resources such as ports and queues are
-partitioned amongst multiple OpenFlow Logical Switches such that each
-OpenFlow Logical Switch can assume full control over the resources that
-are assigned to it.
-
-=== How to start ===
-- Start the OF-CONFIG feature as below:
-+
- feature:install odl-of-config-all
-
-=== Configuration on the OVS supporting OF-CONFIG ===
-
-NOTE: OVS is temporarily not supported by OF-CONFIG because
-the OpenDaylight version of OF-CONFIG is 1.2 while the OVS version of OF-CONFIG is not standard.
-
-An introduction to configuring OVS is available at:
-
-https://github.com/openvswitch/of-config
-
-=== Connection Establishment between the Capable/Logical Switch and OF-CONFIG ===
-
-The OF-CONFIG protocol is based on NETCONF, so switches
-supporting OF-CONFIG can also connect to OpenDaylight
-using the functions provided by NETCONF. This is the
-preparation step before connecting via OF-CONFIG. For how to connect a
-switch to OpenDaylight using NETCONF, refer
-to the <<_southbound_netconf_connector,NETCONF Southbound User Guide>> or
-https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf[NETCONF Southbound examples on the wiki].
-
-Once the switches supporting OF-CONFIG have connected to the controller
-using NETCONF as described in the preparation phase, OF-CONFIG can
-check whether a switch supports OF-CONFIG by reading the capability
-list advertised over NETCONF.
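As a sketch, the capability check amounts to scanning the capability URIs from the NETCONF hello exchange. The exact capability URN an OF-CONFIG switch advertises is device-specific and is an assumption here, so it is passed in as a parameter; the example capability string is labeled as assumed.

```python
# Hypothetical helper (not ODL code): decide whether a NETCONF peer
# advertises OF-CONFIG support, given the capability URIs from its
# <hello> message. The URN to look for varies by device and OF-CONFIG
# version, so it is supplied by the caller.

def supports_of_config(capabilities, of_config_urn):
    return any(of_config_urn in cap for cap in capabilities)

hello_caps = [
    "urn:ietf:params:netconf:base:1.0",
    "urn:onf:config:yang?module=of-config",  # assumed example URN
]
```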
-
-OF-CONFIG then retrieves the information about the capable switch and
-logical switch via the NETCONF connection, and creates separate
-topologies for the capable and logical switches in the OpenDaylight
-Topology module.
-
-At this point, the connection between the capable/logical switches and
-OF-CONFIG is established.
-
-=== Configuration On Capable Switch ===
-Here is an example showing how to configure modify-controller-connection
-on the capable switch using OF-CONFIG. Other configurations can be made
-in the same way.
-
-- Example: modify-controller-connection
-
-NOTE: this configuration is executed via NETCONF; refer to the
-<<_southbound_netconf_connector,NETCONF Southbound User Guide>> or
-https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf[NETCONF Southbound examples on the wiki].
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/of-config-user-guide.html
index 44601efdb001916afc0b36a6dc46821923e66db4..1e4bbb58a6f5bde6da784b1ef7bd62b312f16919 100644 (file)
@@ -1607,7 +1607,7 @@ https://jenkins.opendaylight.org/controller/job/controller-merge/lastSuccessfulB
 [source,xml]
 ------------------------------------------------------------------------
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<flow 
+<flow
     xmlns="urn:opendaylight:flow:inventory">
     <flow-name>push-mpls-action</flow-name>
     <instructions>
@@ -1666,7 +1666,7 @@ https://jenkins.opendaylight.org/controller/job/controller-merge/lastSuccessfulB
 [source,xml]
 ------------------------------------------------------------------------
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<flow 
+<flow
     xmlns="urn:opendaylight:flow:inventory">
     <flow-name>push-mpls-action</flow-name>
     <instructions>
@@ -1724,7 +1724,7 @@ fix]
 [source,xml]
 ------------------------------------------------------------------------
 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<flow 
+<flow
     xmlns="urn:opendaylight:flow:inventory">
     <flow-name>FooXf10</flow-name>
     <instructions>
@@ -1769,3 +1769,109 @@ fix]
     <table_id>0</table_id>
 </flow>
 ------------------------------------------------------------------------
+
+[[learn]]
+====== Learn
+* Nicira extension defined in https://github.com/osrg/openvswitch/blob/master/include/openflow/nicira-ext.h 
+* Example section is - https://github.com/osrg/openvswitch/blob/master/include/openflow/nicira-ext.h#L788
+
+[source,xml]
+------------------------------------------------------------------------
+<flow>
+  <id>ICMP_Ingress258a5a5ad-08a8-4ff7-98f5-ef0b96ca3bb8</id>
+  <hard-timeout>0</hard-timeout>
+  <idle-timeout>0</idle-timeout>
+  <match>
+    <ethernet-match>
+      <ethernet-type>
+        <type>2048</type>
+      </ethernet-type>
+    </ethernet-match>
+    <metadata>
+      <metadata>2199023255552</metadata>
+      <metadata-mask>2305841909702066176</metadata-mask>
+    </metadata>
+    <ip-match>
+      <ip-protocol>1</ip-protocol>
+    </ip-match>
+  </match>
+  <cookie>110100480</cookie>
+  <instructions>
+    <instruction>
+      <order>0</order>
+      <apply-actions>
+        <action>
+          <order>1</order>
+          <nx-resubmit
+            xmlns="urn:opendaylight:openflowplugin:extension:nicira:action">
+            <table>220</table>
+          </nx-resubmit>
+        </action>
+        <action>
+          <order>0</order>
+          <nx-learn
+            xmlns="urn:opendaylight:openflowplugin:extension:nicira:action">
+            <idle-timeout>60</idle-timeout>
+            <fin-idle-timeout>0</fin-idle-timeout>
+            <hard-timeout>60</hard-timeout>
+            <flags>0</flags>
+            <table-id>41</table-id>
+            <priority>61010</priority>
+            <fin-hard-timeout>0</fin-hard-timeout>
+            <flow-mods>
+              <flow-mod-add-match-from-value>
+                <src-ofs>0</src-ofs>
+                <value>2048</value>
+                <src-field>1538</src-field>
+                <flow-mod-num-bits>16</flow-mod-num-bits>
+              </flow-mod-add-match-from-value>
+            </flow-mods>
+            <flow-mods>
+              <flow-mod-add-match-from-field>
+                <src-ofs>0</src-ofs>
+                <dst-ofs>0</dst-ofs>
+                <dst-field>4100</dst-field>
+                <src-field>3588</src-field>
+                <flow-mod-num-bits>32</flow-mod-num-bits>
+              </flow-mod-add-match-from-field>
+            </flow-mods>
+            <flow-mods>
+              <flow-mod-add-match-from-field>
+                <src-ofs>0</src-ofs>
+                <dst-ofs>0</dst-ofs>
+                <dst-field>518</dst-field>
+                <src-field>1030</src-field>
+                <flow-mod-num-bits>48</flow-mod-num-bits>
+              </flow-mod-add-match-from-field>
+            </flow-mods>
+            <flow-mods>
+              <flow-mod-add-match-from-field>
+                <src-ofs>0</src-ofs>
+                <dst-ofs>0</dst-ofs>
+                <dst-field>3073</dst-field>
+                <src-field>3073</src-field>
+                <flow-mod-num-bits>8</flow-mod-num-bits>
+              </flow-mod-add-match-from-field>
+            </flow-mods>
+            <flow-mods>
+              <flow-mod-copy-value-into-field>
+                <dst-ofs>0</dst-ofs>
+                <value>1</value>
+                <dst-field>65540</dst-field>
+                <flow-mod-num-bits>8</flow-mod-num-bits>
+              </flow-mod-copy-value-into-field>
+            </flow-mods>
+            <cookie>110100480</cookie>
+          </nx-learn>
+        </action>
+      </apply-actions>
+    </instruction>
+  </instructions>
+  <installHw>true</installHw>
+  <barrier>false</barrier>
+  <strict>false</strict>
+  <priority>61010</priority>
+  <table_id>253</table_id>
+  <flow-name>ACL</flow-name>
+</flow>
+------------------------------------------------------------------------
index 0aa91f7ca2827db57f35553f8a017518870846a9..7ac975297ded82c95720041ecec350ffaa3a851c 100644 (file)
@@ -1,366 +1,3 @@
 == OpFlex agent-ovs User Guide
 
-=== Introduction
-agent-ovs is a policy agent that works with OVS to enforce a
-group-based policy networking model with locally attached virtual
-machines or containers. The policy agent is designed to work well with
-orchestration tools like OpenStack.
-
-=== Agent Configuration
-The agent configuration is handled using its config file, which by
-default is found at "/etc/opflex-agent-ovs/opflex-agent-ovs.conf".
-
-Here is an example configuration file that documents the available
-options:
-
-----
-{
-    // Logging configuration
-    // "log": {
-    //    "level": "info"
-    // },
-
-    // Configuration related to the OpFlex protocol
-    "opflex": {
-        // The policy domain for this agent.
-        "domain": "openstack",
-
-        // The unique name in the policy domain for this agent.
-        "name": "example-agent",
-
-        // a list of peers to connect to, by hostname and port.  One
-        // peer, or an anycast pseudo-peer, is sufficient to bootstrap
-        // the connection without needing an exhaustive list of all
-        // peers.
-        "peers": [
-            // EXAMPLE:
-            {"hostname": "10.0.0.30", "port": 8009}
-        ],
-
-        "ssl": {
-            // SSL mode.  Possible values:
-            // disabled: communicate without encryption
-            // encrypted: encrypt but do not verify peers
-            // secure: encrypt and verify peer certificates
-            "mode": "disabled",
-
-            // The path to a directory containing trusted certificate
-            // authority public certificates, or a file containing a
-            // specific CA certificate.
-            "ca-store": "/etc/ssl/certs/"
-        },
-
-        "inspector": {
-            // Enable the MODB inspector service, which allows
-            // inspecting the state of the managed object database.
-           // Default: enabled
-            "enabled": true,
-
-            // Listen on the specified socket for the inspector
-           // Default /var/run/opflex-agent-ovs-inspect.sock
-            "socket-name": "/var/run/opflex-agent-ovs-inspect.sock"
-        }
-    },
-
-    // Endpoint sources provide metadata about local endpoints
-    "endpoint-sources": {
-        // Filesystem path to monitor for endpoint information
-        "filesystem": ["/var/lib/opflex-agent-ovs/endpoints"]
-    },
-
-    // Renderers enforce policy obtained via OpFlex.
-    "renderers": {
-        // Stitched-mode renderer for interoperating with a
-        // hardware fabric such as ACI
-        // EXAMPLE:
-        "stitched-mode": {
-            "ovs-bridge-name": "br0",
-        
-            // Set encapsulation type.  Must set either vxlan or vlan.
-            "encap": {
-                // Encapsulate traffic with VXLAN.
-                "vxlan" : {
-                    // The name of the tunnel interface in OVS
-                    "encap-iface": "br0_vxlan0",
-        
-                    // The name of the interface whose IP should be used
-                    // as the source IP in encapsulated traffic.
-                    "uplink-iface": "eth0.4093",
-        
-                    // The vlan tag, if any, used on the uplink interface.
-                    // Set to zero or omit if the uplink is untagged.
-                    "uplink-vlan": 4093,
-        
-                    // The IP address used for the destination IP in
-                    // the encapsulated traffic.  This should be an
-                    // anycast IP address understood by the upstream
-                    // stitched-mode fabric.
-                    "remote-ip": "10.0.0.32",
-        
-                    // UDP port number of the encapsulated traffic.
-                    "remote-port": 8472
-                }
-        
-                // Encapsulate traffic with a locally-significant VLAN
-                // tag
-                // EXAMPLE:
-                // "vlan" : {
-                //     // The name of the uplink interface in OVS
-                //     "encap-iface": "team0"
-                // }
-            },
-        
-            // Configure forwarding policy
-            "forwarding": {
-                // Configure the virtual distributed router
-                "virtual-router": {
-                    // Enable virtual distributed router.  Set to true
-                    // to enable or false to disable.  Default true.
-                    "enabled": true,
-        
-                    // Override MAC address for virtual router.
-                    // Default is "00:22:bd:f8:19:ff"
-                    "mac": "00:22:bd:f8:19:ff",
-        
-                    // Configure IPv6-related settings for the virtual
-                    // router
-                    "ipv6" : {
-                        // Send router advertisement messages in
-                        // response to router solicitation requests as
-                        // well as unsolicited advertisements.  This
-                        // is not required in stitched mode since the
-                        // hardware router will send them.
-                        "router-advertisement": true
-                    }
-                },
-        
-                // Configure virtual distributed DHCP server
-                "virtual-dhcp": {
-                    // Enable virtual distributed DHCP server.  Set to
-                    // true to enable or false to disable.  Default
-                    // true.
-                    "enabled": true,
-        
-                    // Override MAC address for virtual dhcp server.
-                    // Default is "00:22:bd:f8:19:ff"
-                    "mac": "00:22:bd:f8:19:ff"
-                },
-        
-                "endpoint-advertisements": {
-                    // Enable generation of periodic ARP/NDP
-                    // advertisements for endpoints.  Default true.
-                    "enabled": "true"
-                }
-            },
-        
-            // Location to store cached IDs for managing flow state
-            "flowid-cache-dir": "/var/lib/opflex-agent-ovs/ids"
-        }
-    }
-}
-----
-
-=== Endpoint Registration
-The agent learns about endpoints using endpoint metadata files located
-by default in "/var/lib/opflex-agent-ovs/endpoints".
-
-These are JSON-format files such as the (unusually complex) example
-below:
-----
-{
-    "uuid": "83f18f0b-80f7-46e2-b06c-4d9487b0c754",
-    "policy-space-name": "test",
-    "endpoint-group-name": "group1",
-    "interface-name": "veth0",
-    "ip": [
-        "10.0.0.1", "fd8f:69d8:c12c:ca62::1"
-    ],
-    "dhcp4": {
-        "ip": "10.200.44.2",
-        "prefix-len": 24,
-        "routers": ["10.200.44.1"],
-        "dns-servers": ["8.8.8.8", "8.8.4.4"],
-        "domain": "example.com",
-        "static-routes": [
-            {
-                "dest": "169.254.169.0",
-                "dest-prefix": 24,
-                "next-hop": "10.0.0.1"
-            }
-        ]
-    },
-    "dhcp6": {
-        "dns-servers": ["2001:4860:4860::8888", "2001:4860:4860::8844"],
-        "search-list": ["test1.example.com", "example.com"]
-    },
-    "ip-address-mapping": [
-        {
-           "uuid": "91c5b217-d244-432c-922d-533c6c036ab4",
-           "floating-ip": "5.5.5.1",
-           "mapped-ip": "10.0.0.1",
-           "policy-space-name": "common",
-           "endpoint-group-name": "nat-epg"
-        },
-        {
-           "uuid": "22bfdc01-a390-4b6f-9b10-624d4ccb957b",
-           "floating-ip": "fdf1:9f86:d1af:6cc9::1",
-           "mapped-ip": "fd8f:69d8:c12c:ca62::1",
-           "policy-space-name": "common",
-           "endpoint-group-name": "nat-epg"
-        }
-    ],
-    "mac": "00:00:00:00:00:01",
-    "promiscuous-mode": false
-}
-----
-
-The possible parameters for these files are:
-
-*uuid*:: A globally unique ID for the endpoint
-*endpoint-group-name*:: The name of the endpoint group for the endpoint
-*policy-space-name*:: The name of the policy space for the endpoint group.
-*interface-name*:: The name of the OVS interface to which the endpoint
-is attached
-*ip*:: A list of strings containing either IPv4 or IPv6 addresses that
-the endpoint is allowed to use
-*mac*:: The MAC address for the endpoint's interface.
-*promiscuous-mode*:: Allow traffic from this VM to bypass default port
-security
-*dhcp4*:: A distributed DHCPv4 configuration block (see below)
-*dhcp6*:: A distributed DHCPv6 configuration block (see below)
-*ip-address-mapping*:: A list of IP address mapping configuration blocks (see below)
-
-DHCPv4 configuration blocks can contain the following parameters:
-
-*ip*:: the IP address to return with DHCP.  Must be one of the
-configured IPv4 addresses.
-*prefix*:: the subnet prefix length
-*routers*:: a list of default gateways for the endpoint
-*dns*:: a list of DNS server addresses
-*domain*:: The domain name parameter to send in the DHCP reply
-*static-routes*:: A list of static route configuration blocks, which
-contains a "dest", "dest-prefix", and "next-hop" parameters to send as
-static routes to the end host
-
-DHCPv6 configuration blocks can contain the following parameters:
-
-*dns*:: A list of DNS servers for the endpoint
-*search-list*:: The DNS search list for the endpoint
-
-IP address mapping configuration blocks can contain the following
-parameters:
-
-*uuid*:: a globally unique ID for the virtual endpoint created by the
-mapping.
-*floating-ip*:: Map using DNAT to this floating IPv4 or IPv6 address
-*mapped-ip*:: the source IPv4 or IPv6 address; must be one of the IPs
-assigned to the endpoint.
-*endpoint-group-name*:: The name of the endpoint group for the NATed IP
-*policy-space-name*:: The name of the policy space for the NATed IP
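The mapped-ip constraint above (each mapped-ip must be one of the IPs assigned to the endpoint) can be checked mechanically. This is an illustrative sketch, not part of agent-ovs; the data reuses the example endpoint file shown earlier in this section.

```python
import ipaddress

# Illustrative check (not agent-ovs code): every "mapped-ip" in
# "ip-address-mapping" must be one of the IPs assigned to the endpoint.

def mappings_valid(endpoint):
    assigned = {ipaddress.ip_address(a) for a in endpoint.get("ip", [])}
    return all(
        ipaddress.ip_address(m["mapped-ip"]) in assigned
        for m in endpoint.get("ip-address-mapping", [])
    )

endpoint = {
    "ip": ["10.0.0.1", "fd8f:69d8:c12c:ca62::1"],
    "ip-address-mapping": [
        {"floating-ip": "5.5.5.1", "mapped-ip": "10.0.0.1"},
        {"floating-ip": "fdf1:9f86:d1af:6cc9::1",
         "mapped-ip": "fd8f:69d8:c12c:ca62::1"},
    ],
}
```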
-
-=== Inspector
-The OpFlex inspector is a command-line tool that allows you to inspect
-the state of the agent's managed object database for debugging and
-diagnosis purposes.
-
-The command is called "gbp_inspect" and takes the following arguments:
-----
-# gbp_inspect -h
-Usage: ./gbp_inspect [options]
-Allowed options:
-  -h [ --help ]                         Print this help message
-  --log arg                             Log to the specified file (default 
-                                        standard out)
-  --level arg (=warning)                Use the specified log level (default 
-                                        info)
-  --syslog                              Log to syslog instead of file or 
-                                        standard out
-  --socket arg (=/usr/local/var/run/opflex-agent-ovs-inspect.sock)
-                                        Connect to the specified UNIX domain 
-                                        socket (default /usr/local/var/run/opfl
-                                        ex-agent-ovs-inspect.sock)
-  -q [ --query ] arg                    Query for a specific object with 
-                                        subjectname,uri or all objects of a 
-                                        specific type with subjectname
-  -r [ --recursive ]                    Retrieve the whole subtree for each 
-                                        returned object
-  -f [ --follow-refs ]                  Follow references in returned objects
-  --load arg                            Load managed objects from the specified
-                                        file into the MODB view
-  -o [ --output ] arg                   Output the results to the specified 
-                                        file (default standard out)
-  -t [ --type ] arg (=tree)             Specify the output format: tree, list, 
-                                        or dump (default tree)
-  -p [ --props ]                        Include object properties in output
-----
-
-Here are some examples of the ways to use this tool.
-
-You can get information about the running system using one or more
-queries, which consist of an object model class name and optionally
-the URI of a specific object.  The simplest query is to get a single
-object, nonrecursively:
-
-----
-# gbp_inspect -q DmtreeRoot
---* DmtreeRoot,/
-# gbp_inspect -q GbpEpGroup
---* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/ 
---* GbpEpGroup,/PolicyUniverse/PolicySpace/test/GbpEpGroup/group1/
-# gbp_inspect -q GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
---* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/ 
-----
-
-You can also display all the properties for each object:
-----
-# gbp_inspect -p -q GbpeL24Classifier
---* GbpeL24Classifier,/PolicyUniverse/PolicySpace/test/GbpeL24Classifier/classifier4/ 
-     {
-       connectionTracking : 1 (reflexive)
-       dFromPort          : 80
-       dToPort            : 80
-       etherT             : 2048 (ipv4)
-       name               : classifier4
-       prot               : 6
-     }
---* GbpeL24Classifier,/PolicyUniverse/PolicySpace/test/GbpeL24Classifier/classifier3/ 
-     {
-       etherT : 34525 (ipv6)
-       name   : classifier3
-       order  : 100
-       prot   : 58
-     }
---* GbpeL24Classifier,/PolicyUniverse/PolicySpace/test/GbpeL24Classifier/classifier2/ 
-     {
-       etherT : 2048 (ipv4)
-       name   : classifier2
-       order  : 101
-       prot   : 1
-     }
-----
-
-You can also request all the children of an object you query for:
-----
-# gbp_inspect -r -q GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
---* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/ 
-  |-* GbpeInstContext,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpeInstContext/ 
-  `-* GbpEpGroupToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpEpGroupToNetworkRSrc/ 
-----
-
-You can also follow references found in any objects you download:
-----
-# gbp_inspect -fr -q GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/
---* GbpEpGroup,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/ 
-  |-* GbpeInstContext,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpeInstContext/ 
-  `-* GbpEpGroupToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpEpGroup/nat-epg/GbpEpGroupToNetworkRSrc/ 
---* GbpFloodDomain,/PolicyUniverse/PolicySpace/common/GbpFloodDomain/fd_ext/ 
-  `-* GbpFloodDomainToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpFloodDomain/fd_ext/GbpFloodDomainToNetworkRSrc/ 
---* GbpBridgeDomain,/PolicyUniverse/PolicySpace/common/GbpBridgeDomain/bd_ext/ 
-  `-* GbpBridgeDomainToNetworkRSrc,/PolicyUniverse/PolicySpace/common/GbpBridgeDomain/bd_ext/GbpBridgeDomainToNetworkRSrc/ 
---* GbpRoutingDomain,/PolicyUniverse/PolicySpace/common/GbpRoutingDomain/rd_ext/ 
-  |-* GbpRoutingDomainToIntSubnetsRSrc,/PolicyUniverse/PolicySpace/common/GbpRoutingDomain/rd_ext/GbpRoutingDomainToIntSubnetsRSrc/122/%2fPolicyUniverse%2fPolicySpace%2fcommon%2fGbpSubnets%2fsubnets_ext%2f/ 
-  `-* GbpForwardingBehavioralGroupToSubnetsRSrc,/PolicyUniverse/PolicySpace/common/GbpRoutingDomain/rd_ext/GbpForwardingBehavioralGroupToSubnetsRSrc/ 
---* GbpSubnets,/PolicyUniverse/PolicySpace/common/GbpSubnets/subnets_ext/ 
-  |-* GbpSubnet,/PolicyUniverse/PolicySpace/common/GbpSubnets/subnets_ext/GbpSubnet/subnet_ext4/ 
-  `-* GbpSubnet,/PolicyUniverse/PolicySpace/common/GbpSubnets/subnets_ext/GbpSubnet/subnet_ext6/
-----
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/opflex-agend-ovs-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/ovsdb/odl-netvirt-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/ovsdb/odl-netvirt-user-guide.adoc
deleted file mode 100644 (file)
index 3ebe8b9..0000000
+++ /dev/null
@@ -1,49 +0,0 @@
-=== NetVirt
-
-The OVSDB NetVirt project delivers two major pieces of functionality:
-
-. The OVSDB Southbound Protocol, and
-. NetVirt, a network virtualization solution.
-
-The following diagram shows the system-level architecture of OVSDB NetVirt in
-an OpenStack-based solution.
-
-.OVSDB NetVirt Architecture
-image::ovsdb/ovsdb-netvirt-architecture.jpg[align="center",width=250]
-
-NetVirt is a network virtualization solution that is a Neutron service provider, and therefore supports
-the OpenStack Neutron Networking API and extensions.
-
-The OVSDB component implements the OVSDB protocol (RFC 7047), as well as
-plugins to support OVSDB Schemas, such as the Open_vSwitch database schema and
-the hardware_vtep database schema.
-
-NetVirt has MDSAL-based interfaces with Neutron on the northbound side, and
-OVSDB and OpenFlow plugins on the southbound side.
-
-OVSDB NetVirt currently supports Open vSwitch virtual switches
-via OpenFlow and OVSDB.  Work is underway to support hardware gateways.
-
-NetVirt services are enabled by installing the odl-ovsdb-openstack feature using the following command:
-
- feature:install odl-ovsdb-openstack
-
-To enable NetVirt's distributed Layer 3 routing services, the following line must be uncommented in the etc/custom.properties
-file in the OpenDaylight distribution prior to starting karaf:
-
- ovsdb.l3.fwd.enabled=yes
-
-To start the OpenDaylight controller, run the following application in your distribution:
-
- bin/karaf
-
-More details about using NetVirt with OpenStack can be found in the following places:
-
-. The "OpenDaylight and OpenStack" guide, and
-. https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization[Getting Started with OpenDaylight OVSDB Plugin Network Virtualization]
-
-Some additional details about using OpenStack Security Groups and the Data Plane Development Kit (DPDK) are provided below.
-
-include::odl-ovsdb-security-groups.adoc[]
-
-include::odl-ovs-dpdk-user-guide.adoc[]
diff --git a/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovs-dpdk-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovs-dpdk-user-guide.adoc
deleted file mode 100644 (file)
index 424510b..0000000
+++ /dev/null
@@ -1,162 +0,0 @@
-==== Using OVS with DPDK hosts and OVSDB NetVirt
-
-The Data Plane Development Kit (http://dpdk.org/[DPDK]) is a userspace set
-of libraries and drivers designed for fast packet processing.  The userspace
-datapath variant of OVS can be built with DPDK enabled to provide the
-performance features of DPDK to Open vSwitch (OVS).  In the 2.4.0 version of OVS, the
-Open_vSwitch table schema was enhanced to include the lists 'datapath-types' and
-'interface-types'.  When the OVS with DPDK variant of OVS is running, the
-'interface-types' list will include DPDK interface types such as 'dpdk' and 'dpdkvhostuser'.
-The OVSDB Southbound Plugin includes this information in the OVSDB YANG model
-in the MD-SAL, so when a specific OVS host is running OVS with DPDK, it is possible
-for NetVirt to detect that information by checking that DPDK interface types are
-included in the list of supported interface types.
-
-For example, query the operational MD-SAL for OVSDB nodes:
-
-HTTP GET:
-
- http://{{CONTROLLER-IP}}:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
-
-Result Body:
-
- {
-   "topology": [
-     {
-       "topology-id": "ovsdb:1",
-       "node": [
-         < content edited out >
-         {
-           "node-id": "ovsdb://uuid/f9b58b6d-04db-459a-b914-fff82b738aec",
-           < content edited out >
-           "ovsdb:interface-type-entry": [
-             {
-               "interface-type": "ovsdb:interface-type-ipsec-gre"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-internal"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-system"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-patch"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-dpdkvhostuser"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-dpdk"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-dpdkr"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-vxlan"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-lisp"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-geneve"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-gre"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-tap"
-             },
-             {
-               "interface-type": "ovsdb:interface-type-stt"
-             }
-           ],
-           < content edited out >
-           "ovsdb:datapath-type-entry": [
-             {
-               "datapath-type": "ovsdb:datapath-type-netdev"
-             },
-             {
-               "datapath-type": "ovsdb:datapath-type-system"
-             }
-           ],
-           < content edited out >
-         },
-         < content edited out >
-       ]
-     }
-   ]
- }
-
-This example illustrates the output of an OVS with DPDK host because
-the list of interface types includes types supported by DPDK.
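The detection step described above, checking whether DPDK interface types appear in the advertised list, can be sketched in Python against the JSON shown. This illustrates the check only; it is not NetVirt's actual implementation.

```python
# Illustrative sketch (not NetVirt code): inspect an OVSDB node's
# interface-type entries, as returned by the RESTCONF GET above, and
# report whether any DPDK interface types are advertised.

def node_supports_dpdk(node):
    entries = node.get("ovsdb:interface-type-entry", [])
    return any("dpdk" in e.get("interface-type", "") for e in entries)

node = {
    "node-id": "ovsdb://uuid/f9b58b6d-04db-459a-b914-fff82b738aec",
    "ovsdb:interface-type-entry": [
        {"interface-type": "ovsdb:interface-type-system"},
        {"interface-type": "ovsdb:interface-type-dpdkvhostuser"},
        {"interface-type": "ovsdb:interface-type-dpdk"},
    ],
}
```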
-
-Bridges on OVS with DPDK hosts need to be created with the 'netdev' datapath type
-and DPDK specific ports need to be created with the appropriate interface type.
-The OpenDaylight OVSDB Southbound Plugin supports these attributes.
-
-The OpenDaylight NetVirt application checks whether the OVS host is using OVS with DPDK
-when creating the bridges that are expected to be present on the host, e.g. 'br-int'.
-
-The following are some tips for supporting hosts using OVS with DPDK when using NetVirt as the Neutron service
-provider and 'devstack' to deploy OpenStack.
-
-In addition to the 'networking-odl' ML2 plugin, enable the 'networking-ovs-dpdk' plugin in 'local.conf'.
-
- For working with Openstack Liberty
- enable_plugin networking-odl https://github.com/FedericoRessi/networking-odl integration/liberty
- enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk stable/liberty
-
- For working with Openstack Mitaka (or later) branch
- enable_plugin networking-odl https://github.com/openstack/networking-odl
- enable_plugin networking-ovs-dpdk https://github.com/openstack/networking-ovs-dpdk
-
-The order of these plugin lines is important.  The 'networking-odl' plugin will install and
-setup 'openvswitch'.  The 'networking-ovs-dpdk' plugin will install OVS with DPDK.  Note, the 'networking-ovs-dpdk'
-plugin is only being used here to setup OVS with DPDK.  The 'networking-odl' plugin will be used as the Neutron ML2 driver.
-
-For VXLAN tenant network support, the NetVirt application interacts with OVS with DPDK host in the same way as OVS hosts
-using the kernel datapath by creating VXLAN ports on 'br-int' to communicate with other tunnel endpoints.  The IP address
-for the local tunnel endpoint may be configured in the 'local.conf' file.  For example:
-
- ODL_LOCAL_IP=192.100.200.10
-
-NetVirt will use this information to configure the VXLAN port on 'br-int'.  On a host with the OVS kernel datapath, it
-is expected that there will be a networking interface configured with this IP address.  On an OVS with DPDK host, an OVS
-bridge is created and a DPDK port is added to the bridge.  The local tunnel endpoint address is then assigned to the
-bridge port of the bridge.  So, for example, if the physical network interface is associated with 'eth0' on the host,
-a bridge named 'br-eth0' could be created.  The DPDK port, such as 'dpdk0' (per the naming conventions of OVS with DPDK), is
-added to bridge 'br-eth0'.  The local tunnel endpoint address is assigned to the network interface 'br-eth0' which is
-attached to bridge 'br-eth0'.  None of this setup is done by NetVirt.  The 'networking-ovs-dpdk' plugin can be made to
-perform this setup by putting configuration like the following in 'local.conf'.
-
- ODL_LOCAL_IP=192.168.200.9
- ODL_PROVIDER_MAPPINGS=physnet1:eth0,physnet2:eth1
- OVS_DPDK_PORT_MAPPINGS=eth0:br-eth0,eth1:br-ex
- OVS_BRIDGE_MAPPINGS=physnet1:br-eth0,physnet2:br-ex
-
-The above settings associate the host networking interface 'eth0' with bridge 'br-eth0'.  The 'networking-ovs-dpdk' plugin
-will determine the DPDK port name associated with 'eth0' and add it to the bridge 'br-eth0'.  If using the NetVirt L3 support,
-these settings will enable setup of the 'br-ex' bridge and attach the DPDK port associated with network interface 'eth1' to it.
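The 'name:value,name:value' syntax used by these mapping variables can be parsed as below. The parser itself is a hypothetical sketch; only the mapping format is taken from the settings shown, and the function name is invented for the example.

```python
# Hypothetical parser for the 'key:value,key:value' mapping syntax used
# by ODL_PROVIDER_MAPPINGS, OVS_DPDK_PORT_MAPPINGS and
# OVS_BRIDGE_MAPPINGS in local.conf.

def parse_mappings(raw):
    result = {}
    for pair in raw.split(","):
        key, _, value = pair.partition(":")
        result[key.strip()] = value.strip()
    return result

port_mappings = parse_mappings("eth0:br-eth0,eth1:br-ex")
bridge_mappings = parse_mappings("physnet1:br-eth0,physnet2:br-ex")
```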
-
-The following settings are included in 'local.conf' to specify attributes of the OVS with DPDK setup.  These are
-used by the 'networking-ovs-dpdk' plugin to configure OVS with DPDK.
-
- OVS_DATAPATH_TYPE=netdev
- OVS_NUM_HUGEPAGES=8192
- OVS_DPDK_MEM_SEGMENTS=8192
- OVS_HUGEPAGE_MOUNT_PAGESIZE=2M
- OVS_DPDK_RTE_LIBRTE_VHOST=y
- OVS_DPDK_MODE=compute
-
-Once the stack is up and running virtual machines may be deployed on OVS with DPDK hosts.  The 'networking-odl' plugin handles
-ensuring that 'dpdkvhostuser' interfaces are utilized by Nova instead of the default 'tap' interface.  The 'dpdkvhostuser' interface
-provides the best performance for VMs on OVS with DPDK hosts.
-
-A Nova flavor is created for VMs that may be deployed on OVS with DPDK hosts.
-
- nova flavor-create largepage-flavor 1002 1024 4 1
- nova flavor-key 1002 set "hw:mem_page_size=large"
-
-Then, just specify the flavor when creating a VM.
-
- nova boot --flavor largepage-flavor --image cirros-0.3.4-x86_64-uec --nic net-id=<NET ID VALUE> vm-name
diff --git a/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-hwvtep-southbound-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-hwvtep-southbound-user-guide.adoc
deleted file mode 100644 (file)
index d275484..0000000
+++ /dev/null
@@ -1,294 +0,0 @@
-==== OVSDB Hardware VTEP SouthBound Plugin
-
-===== Overview
-
-The hwvtepsouthbound plugin is used to configure a hardware VTEP which
-implements the OVSDB hardware_vtep schema. This section shows how to use
-the RESTCONF API of hwvtepsouthbound. There are two ways to connect to ODL:
-
-. the user initiates the connection, and
-. the switch initiates the connection.
-
-Each is described below.
-
-===== User Initiates Connection
-
-====== Prerequisite
-
-Configure the hwvtep device/node to listen for the TCP connection in
-passive mode. In addition, the management IP and tunnel source IP are also
-configured. After all this configuration is done, a physical switch is
-created automatically by the hwvtep node.
-
-====== Connect to a hwvtep device/node
-
-Send the RESTCONF request below to initiate the connection to a
-hwvtep node from the controller, providing the listening IP and port
-of the hwvtep device/node.
-
-REST API: POST
-http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/
-
-  {
-   "network-topology:node": [
-         {
-             "node-id": "hwvtep://192.168.1.115:6640",
-             "hwvtep:connection-info":
-             {
-                 "hwvtep:remote-port": 6640,
-                 "hwvtep:remote-ip": "192.168.1.115"
-             }
-         }
-     ]
-  }
-
-Please replace 'odl' in the URL with the IP address of your OpenDaylight
-controller and change '192.168.1.115' to your hwvtep node IP.
-
-**NOTE**: The format of node-id is fixed. It will be one of the two:
-
-User initiates connection from ODL:
-
- hwvtep://ip:port
-
-Switch initiates connection:
-
- hwvtep://uuid/<uuid of switch>
-
-The reason for using UUID is that we can distinguish between multiple
-switches if they are behind a NAT.
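Because the node-id contains slashes, it must be percent-encoded when embedded in a RESTCONF URL (the GET below uses `%2F%2F`). A minimal Python sketch of building both node-id forms and the encoded path segment; the helper names are illustrative, not part of any ODL API:

```python
from urllib.parse import quote

def hwvtep_node_id(ip=None, port=None, uuid=None):
    """Build a hwvtep node-id for either connection mode."""
    if uuid is not None:
        # switch-initiated connection: identified by the switch UUID
        return "hwvtep://uuid/%s" % uuid
    # user-initiated connection: identified by the listening ip:port
    return "hwvtep://%s:%d" % (ip, port)

def encode_for_url(node_id):
    """Percent-encode a node-id for use as a RESTCONF path segment.
    Only '/' needs escaping; ':' is left intact, matching the URLs above."""
    return quote(node_id, safe=":")

print(encode_for_url(hwvtep_node_id(ip="192.168.1.115", port=6640)))
# hwvtep:%2F%2F192.168.1.115:6640
```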
-
-After this request is completed successfully, we can get the physical
-switch from the operational data store.
-
-REST API: GET
-http://odl:8181/restconf/operational/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
-
-There is no body in this request.
-
-The response of the request is:
-
-  {
-     "node": [
-           {
-             "node-id": "hwvtep://192.168.1.115:6640",
-             "hwvtep:switches": [
-               {
-                 "switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640/physicalswitch/br0']"
-               }
-             ],
-             "hwvtep:connection-info": {
-               "local-ip": "192.168.92.145",
-               "local-port": 47802,
-               "remote-port": 6640,
-               "remote-ip": "192.168.1.115"
-             }
-           },
-           {
-             "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
-             "hwvtep:management-ips": [
-               {
-                 "management-ips-key": "192.168.1.115"
-               }
-             ],
-             "hwvtep:physical-switch-uuid": "37eb5abd-a6a3-4aba-9952-a4d301bdf371",
-             "hwvtep:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']",
-             "hwvtep:hwvtep-node-description": "",
-             "hwvtep:tunnel-ips": [
-               {
-                 "tunnel-ips-key": "192.168.1.115"
-               }
-             ],
-             "hwvtep:hwvtep-node-name": "br0"
-           }
-         ]
-  }
-
-If a physical switch has already been created by manual
-configuration, we can get the node-id of the physical switch from this
-response, in the "switch-ref" field. If the switch does not
-exist, we need to create the physical switch. Currently, most hwvtep
-devices do not support running multiple switches.
-
-====== Create a physical switch
-
-REST API: POST
-http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/
-
-request body:
-
-  {
-   "network-topology:node": [
-         {
-             "node-id": "hwvtep://192.168.1.115:6640/physicalswitch/br0",
-             "hwvtep-node-name": "ps0",
-             "hwvtep-node-description": "",
-             "management-ips": [
-               {
-                 "management-ips-key": "192.168.1.115"
-               }
-             ],
-             "tunnel-ips": [
-               {
-                 "tunnel-ips-key": "192.168.1.115"
-               }
-             ],
-             "managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']"
-         }
-     ]
-  }
-
-Note: "managed-by" must be provided by the user. Its value is known after
-the step 'Connect to a hwvtep device/node', since the node-id of the
-hwvtep device is provided by the user. "managed-by" is a reference of
-type instance-identifier. Though the instance identifier is a little
-cumbersome to write by hand in RESTCONF, the primary user of the
-hwvtepsouthbound plugin will be provider-type code such as NetVirt, for
-which the instance identifier is much easier to generate.
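The "managed-by" instance identifier can be generated mechanically from the hwvtep node-id. A minimal Python sketch (standard library only; the topology-id 'hwvtep:1' is taken from the examples in this section, and the helper name is illustrative):

```python
def managed_by_ref(node_id, topology_id="hwvtep:1"):
    """Instance-identifier string referencing the managing hwvtep node."""
    return (
        "/network-topology:network-topology"
        "/network-topology:topology[network-topology:topology-id='%s']"
        "/network-topology:node[network-topology:node-id='%s']"
        % (topology_id, node_id)
    )

print(managed_by_ref("hwvtep://192.168.1.115:6640"))
```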
-
-====== Create a logical switch
-
-Creating a logical switch is effectively creating a logical network. For
-VXLAN, it is a tunnel network with the same VNI.
-
-REST API: POST
-http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
-
-request body:
-
-  {
-   "logical-switches": [
-         {
-             "hwvtep-node-name": "ls0",
-             "hwvtep-node-description": "",
-             "tunnel-key": "10000"
-          }
-     ]
-  }
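For provider-type code, the request body above can be built programmatically. A minimal Python sketch (the helper name is illustrative, not part of any ODL API; the tunnel-key is carried as a string, matching the example above):

```python
import json

def logical_switch_body(name, tunnel_key, description=""):
    """Request body for creating a logical switch (the VNI is the tunnel-key)."""
    return {
        "logical-switches": [{
            "hwvtep-node-name": name,
            "hwvtep-node-description": description,
            # tunnel-key is a string in the RESTCONF example above
            "tunnel-key": str(tunnel_key),
        }]
    }

print(json.dumps(logical_switch_body("ls0", 10000), indent=1))
```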
-
-====== Create a physical locator
-
-After the VXLAN network is ready, we will add VTEPs to it. A VTEP is
-described by a physical locator.
-
-REST API: POST
-http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
-
-request body:
-
-   {
-    "termination-point": [
-         {
-             "tp-id": "vxlan_over_ipv4:192.168.0.116",
-             "encapsulation-type": "encapsulation-type-vxlan-over-ipv4",
-             "dst-ip": "192.168.0.116"
-             }
-        ]
-   }
-
-The "tp-id" of a locator is "\{encapsulation-type}:\{dst-ip}".
-
-Note: As far as we know, the OVSDB database does not allow the insertion
-of a new locator alone. So, no locator is inserted when this request is
-sent; the creation is deferred until another entity refers to it,
-such as remote-mcast-macs.
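A small, illustrative Python helper that builds the locator body following the tp-id convention above (the function name is an assumption, not ODL API):

```python
import json

def physical_locator_body(dst_ip, encap="vxlan_over_ipv4"):
    """Termination-point body for a physical locator.
    The tp-id is '{encapsulation-type}:{dst-ip}'."""
    return {
        "termination-point": [{
            "tp-id": "%s:%s" % (encap, dst_ip),
            # the encapsulation-type leaf uses the hyphenated identity name
            "encapsulation-type": "encapsulation-type-" + encap.replace("_", "-"),
            "dst-ip": dst_ip,
        }]
    }

print(json.dumps(physical_locator_body("192.168.0.116"), indent=1))
```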
-
-====== Create a remote-mcast-macs entry
-
-After adding a physical locator to a logical switch, we need to create a
-remote-mcast-macs entry to handle unknown traffic.
-
-REST API: POST
-http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
-
-request body:
-
-  {
-   "remote-mcast-macs": [
-         {
-             "mac-entry-key": "00:00:00:00:00:00",
-             "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
-             "locator-set": [
-                  {
-                        "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
-                  }
-             ]
-         }
-     ]
-  }
-
-The physical locator 'vxlan_over_ipv4:192.168.0.116' was just created in
-"Create a physical locator". Note that the list "locator-set" is
-immutable; that is, we must provide the set of "locator-ref" entries as
-a whole.
-
-Note: "00:00:00:00:00:00" stands for "unknown-dst" since the type of
-mac-entry-key is yang:mac and does not accept "unknown-dst".
-
-====== Create a physical port
-
-Now we add a physical port to the physical switch
-"hwvtep://192.168.1.115:6640/physicalswitch/br0". The port is attached
-to a physical server or an L2 network and carries VLAN 100.
-
-REST API: POST
-http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640%2Fphysicalswitch%2Fbr0
-
-  {
-   "network-topology:termination-point": [
-         {
-             "tp-id": "port0",
-             "hwvtep-node-name": "port0",
-             "hwvtep-node-description": "",
-             "vlan-bindings": [
-                 {
-                   "vlan-id-key": "100",
-                   "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']"
-                 }
-           ]
-         }
-     ]
-  }
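The "logical-switch-ref" used in the vlan-bindings above can likewise be generated from the node-id and the logical switch name. A hedged Python sketch (helper name is illustrative):

```python
def logical_switch_ref(node_id, switch_name, topology_id="hwvtep:1"):
    """Instance-identifier of a logical switch under a hwvtep node."""
    return (
        "/network-topology:network-topology"
        "/network-topology:topology[network-topology:topology-id='%s']"
        "/network-topology:node[network-topology:node-id='%s']"
        "/hwvtep:logical-switches[hwvtep:hwvtep-node-name='%s']"
        % (topology_id, node_id, switch_name)
    )

print(logical_switch_ref("hwvtep://192.168.1.115:6640", "ls0"))
```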
-
-At this point, we have completed the basic configuration.
-
-Typically, hwvtep devices learn local MAC addresses automatically. But
-they also support getting MAC address entries from ODL.
-
-====== Create a local-mcast-macs entry
-
-It is similar to 'Create a remote-mcast-macs entry'.
-
-====== Create a remote-ucast-macs
-
-REST API: POST
-http://odl:8181/restconf/config/network-topology:network-topology/topology/hwvtep:1/node/hwvtep:%2F%2F192.168.1.115:6640
-
-request body:
-
-  {
-   "remote-ucast-macs": [
-         {
-             "mac-entry-key": "11:11:11:11:11:11",
-             "logical-switch-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/hwvtep:logical-switches[hwvtep:hwvtep-node-name='ls0']",
-             "ipaddr": "1.1.1.1",
-             "locator-ref": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='hwvtep:1']/network-topology:node[network-topology:node-id='hwvtep://192.168.1.115:6640']/network-topology:termination-point[network-topology:tp-id='vxlan_over_ipv4:192.168.0.116']"
-         }
-     ]
-  }
-
-====== Create a local-ucast-macs entry
-
-This is similar to 'Create a remote-ucast-macs'.
-
-===== Switch Initiates Connection
-
-We do not need to connect to a hwvtep device/node when the switch
-initiates the connection. After switches connect to ODL successfully, we
-get the node-ids of the switches by reading the operational data store.
-Once the node-id of a hwvtep device is received, the remaining steps are
-the same as when the user initiates the connection.
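As a sketch, given a topology read from the operational data store, the hwvtep device node-ids can be picked out as follows (Python, standard library only; the sample JSON and the '/physicalswitch/' filtering convention are illustrative assumptions based on the node-ids shown earlier):

```python
import json

def hwvtep_node_ids(topology_json):
    """Return node-ids of hwvtep device nodes, skipping the per-switch
    child nodes whose node-id contains '/physicalswitch/'."""
    nodes = topology_json.get("topology", [{}])[0].get("node", [])
    return [n["node-id"] for n in nodes
            if "/physicalswitch/" not in n["node-id"]]

sample = json.loads("""
{"topology": [{"topology-id": "hwvtep:1",
  "node": [
    {"node-id": "hwvtep://uuid/37eb5abd-a6a3-4aba-9952-a4d301bdf371"},
    {"node-id": "hwvtep://uuid/37eb5abd-a6a3-4aba-9952-a4d301bdf371/physicalswitch/br0"}
  ]}]}
""")
print(hwvtep_node_ids(sample))
# ['hwvtep://uuid/37eb5abd-a6a3-4aba-9952-a4d301bdf371']
```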
-
-===== References
-
-https://wiki.opendaylight.org/view/User_talk:Pzhang
index b44b53a4ee5cecea2869f6dbb1bd1152369302f1..7c875a32083feb26505509fd4c007b14ed360157 100644 (file)
@@ -1,5 +1,3 @@
 == OVSDB NetVirt
 
-include::odl-netvirt-user-guide.adoc[]
-
-include::odl-ovsdb-plugins-user-guide.adoc[]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/ovsdb-netvirt.html
diff --git a/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-plugins-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-plugins-user-guide.adoc
deleted file mode 100644 (file)
index 9d0c899..0000000
+++ /dev/null
@@ -1,17 +0,0 @@
-=== OVSDB Plugins
-
-==== Overview and Architecture
-
-There are currently two OVSDB Southbound plugins:
-
-* odl-ovsdb-southbound: Implements the OVSDB Open_vSwitch database schema.
-
-* odl-ovsdb-hwvtepsouthbound: Implements the OVSDB hardware_vtep database schema.
-
-These plugins are normally installed and used automatically by higher level applications such as
-odl-ovsdb-openstack; however, they can also be installed separately and used via their REST APIs
-as is described in the following sections.
-
-include::odl-ovsdb-southbound-user-guide.adoc[]
-
-include::odl-ovsdb-hwvtep-southbound-user-guide.adoc[]
diff --git a/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-security-groups.adoc b/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-security-groups.adoc
deleted file mode 100644 (file)
index 839fc67..0000000
+++ /dev/null
@@ -1,271 +0,0 @@
-[[security-groups]]
-==== Security groups
-
-Security groups in OpenStack filter packets based on the configured
-policies. The reference implementation in OpenStack uses iptables to
-realize security groups. In OpenDaylight, OVS flows are used instead of
-iptables rules, which removes the many layers of bridges/ports required
-by the iptables implementation.
-
-Rules are applied on the basis of the following attributes:
-ingress/egress, protocol, port range, and prefix. In the pipeline, table
-40 is used for egress ACL rules and table 90 for ingress ACL rules.
-
-[[stateful-implementation]]
-===== Stateful Implementation
-
-Security groups are implemented in two modes, stateful and stateless.
-Stateful mode can be enabled by changing the corresponding setting from
-false to true in etc/opendaylight/karaf/netvirt-impl-default-config.xml.
-
-The stateful implementation uses the conntrack capabilities of OVS to
-track existing connections. This mode requires OVS 2.5 and Linux
-kernel 4.3. OVS, integrated with the netfilter framework, tracks each
-connection using the five-tuple (layer-3 protocol, source address,
-destination address, layer-4 protocol, layer-4 key). The connection
-state is independent of the upper-level state of connection-oriented
-protocols like TCP, and even connectionless protocols like UDP will have
-a pseudo state. With this implementation OVS sends the packet to the
-netfilter framework to learn whether there is an entry for the
-connection; netfilter returns the packet to OVS with the appropriate
-flag set. Below are the states we are interested in:
-
-  -trk - The packet was never sent to the netfilter framework
-
-  +trk+est - The connection is already known and was allowed previously;
-  pass the packet to the next table.
-
-  +trk+new - This is a new connection. If there is a specific rule in the
-  table which allows this traffic with a commit action, an entry will be
-  made in the netfilter framework. If there is no specific rule to allow
-  this traffic, the packet will be dropped.
-
-So, by default, a packet is dropped unless there is a rule that allows
-it.
-
-[[stateless-implementation]]
-===== Stateless Implementation
-
-The stateless mode is for OVS 2.4 and below, where connection tracking
-is not supported. Here we have pseudo-connection tracking using the TCP
-SYN flag. Other than TCP, packets of all protocols are allowed by
-default. For TCP, SYN packets are dropped by default unless there is a
-specific rule which allows TCP SYN packets to a particular port.
-
-[[fixed-rules]]
-===== Fixed Rules
-
-Security groups are associated with the VM port when the VM is
-spawned. By default a set of rules, referred to as fixed security group
-rules, is applied to the VM port. This includes the DHCP rules, the ARP
-rules, and the conntrack rules. The conntrack rules are inserted only
-in stateful mode.
-
-[[dhcp-rules]]
-====== DHCP rules
-
-The DHCP rules are added to the VM port when a VM is spawned. The fixed
-DHCP rules are:
-
-* Allow DHCP server traffic ingress.
-
-  cookie=0x0, duration=36.848s, table=90, n_packets=2, n_bytes=717,
-  priority=61006,udp,dl_src=fa:16:3e:a1:f9:d0,
-  tp_src=67,tp_dst=68 actions=goto_table:100
-
-  cookie=0x0, duration=36.566s, table=90, n_packets=0, n_bytes=0, 
-  priority=61006,udp6,dl_src=fa:16:3e:a1:f9:d0,
-  tp_src=547,tp_dst=546 actions=goto_table:100  
-
-* Allow DHCP client traffic egress.
-
-  cookie=0x0, duration=2165.596s, table=40, n_packets=2, n_bytes=674, 
-  priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50
-
-  cookie=0x0, duration=2165.513s, table=40, n_packets=0, n_bytes=0, 
-  priority=61012,udp6,tp_src=546,tp_dst=547 actions=goto_table:50
-
-* Prevent DHCP server traffic from the VM port (DHCP spoofing).
-
-  cookie=0x0, duration=34.711s, table=40, n_packets=0, n_bytes=0, 
-  priority=61011,udp,in_port=2,tp_src=67,tp_dst=68 actions=drop
-
-  cookie=0x0, duration=34.519s, table=40, n_packets=0, n_bytes=0, 
-  priority=61011,udp6,in_port=2,tp_src=547,tp_dst=546 actions=drop
-
-[[arp-rules]]
-====== ARP rules
-
-The default ARP rules allow ARP traffic in and out of the VM
-port.
-
-  cookie=0x0, duration=35.015s, table=40, n_packets=10, n_bytes=420, 
-  priority=61010,arp,arp_sha=fa:16:3e:93:88:60 actions=goto_table:50
-
-  cookie=0x0, duration=35.582s, table=90, n_packets=1, n_bytes=42, 
-  priority=61010,arp,arp_tha=fa:16:3e:93:88:60 actions=goto_table:100
-
-[[conntrack-rules]]
-====== Conntrack rules
-
-These rules are inserted only in stateful mode. The conntrack rules use
-the netfilter framework to track packets. The rules below are added to
-leverage it.
-
-* If a packet is not tracked (connection state -trk), it is sent to
-netfilter for tracking.
-* If the packet is already tracked (netfilter returns connection state
-+trk+est) and the connection is established, the packet is allowed
-through the pipeline.
-* The third rule is the default drop rule, which drops the packet if
-the packet is tracked and new (netfilter returns connection state
-+trk+new). This rule has lower priority than any custom rules that
-are added.
-
-  cookie=0x0, duration=35.015s table=40,priority=61021,in_port=3,
-  ct_state=-trk,action=ct"("table=0")"
-
-  cookie=0x0, duration=35.015s table=40,priority=61020,in_port=3,
-  ct_state=+trk+est,action=goto_table:50
-
-  cookie=0x0, duration=35.015s table=40,priority=36002,in_port=3,
-  ct_state=+new,actions=drop
-
-  cookie=0x0, duration=35.015s table=90,priority=61022,
-  dl_dst=fa:16:3e:0d:8d:21,ct_state=+trk+est,action=goto_table:100
-
-  cookie=0x0, duration=35.015s table=90,priority=61021,
-  dl_dst=fa:16:3e:0d:8d:21,ct_state=-trk,action=ct"("table=0")"
-
-  cookie=0x0, duration=35.015s table=90,priority=36002,
-  dl_dst=fa:16:3e:0d:8d:21,ct_state=+new,actions=drop
-
-[[tcp-syn-rule]]
-====== TCP SYN Rule
-
-This rule is inserted in stateless mode only. It drops TCP SYN
-packets by default.
-
-[[custom-security-groups]]
-===== Custom Security Groups
-
-Users can add security groups in OpenStack via the command line or UI.
-When such a security group is associated with a VM, the flows related to
-each of its rules are added to the relevant tables. A preconfigured
-security group, called the default security group, is available in the
-neutron DB.
-
-[[stateful]]
-====== Stateful
-
-If connection tracking is enabled, the match will have the connection
-state and the action will have a commit along with a goto. The commit
-sends the packet to the netfilter framework to cache the entry. After a
-commit, for the next packet of this connection netfilter will return
-+trk+est; the packet will then match the fixed conntrack rule and be
-forwarded to the next table.
-
-  cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
-  priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
-  nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50
-
-  cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
-  nw_src=10.100.5.3,tp_dst=2222 actions=ct(commit),goto_table:100
-
-  cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
-  nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100  
-
-[[stateless]]
-====== Stateless
-
-If the mode is stateless, the match will have only the parameters
-specified in the security rule and a goto in the action. The ct_state
-match and commit action are absent.
-
-  cookie=0x0, duration=13211.171s, table=40, n_packets=0, n_bytes=0, 
-  priority=61007,icmp,dl_src=fa:16:3e:93:88:60,nw_dst=0.0.0.0/24,
-  icmp_type=2,icmp_code=4 actions=goto_table:50
-
-  cookie=0x0, duration=199.674s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:dc:49:ff,nw_src=10.100.5.3,tp_dst=2222 
-  actions=goto_table:100
-
-  cookie=0x0, duration=199.780s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,tcp,dl_dst=fa:16:3e:93:88:60,nw_src=10.100.5.4,tp_dst=3333 
-  actions=goto_table:100  
-
-[[tcpudp-port-range]]
-====== TCP/UDP Port Range
-
-TCP/UDP port ranges are supported with the help of port masks. This
-dramatically reduces the number of flows required to cover a port
-range. The rules below cover a port range from 333 to 777.
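The value/mask pairs in the flows below are a prefix decomposition of the port range. A sketch of the standard greedy algorithm (take the largest aligned power-of-two block at each step); note that for 333-777 it also yields the pair 0x180/0xff80 covering ports 384-511:

```python
def port_masks(lo, hi, width=16):
    """Greedy prefix decomposition of [lo, hi] into (value, mask) pairs,
    each matching a power-of-two aligned block of ports."""
    full = (1 << width) - 1
    out = []
    while lo <= hi:
        # largest 2^k block aligned at lo that still fits inside [lo, hi]
        size = lo & -lo if lo else 1 << width  # alignment limit
        while size > 1 and lo + size - 1 > hi:
            size >>= 1
        out.append((lo, full & ~(size - 1)))
        lo += size
    return out

for value, mask in port_masks(333, 777):
    print("tp_dst=%#x/%#x" % (value, mask))
```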
-
-  cookie=0x0, duration=56.129s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
-  tp_dst=0x200/0xff00 actions=goto_table:100
-
-  cookie=0x0, duration=55.805s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
-  tp_dst=0x160/0xffe0 actions=goto_table:100
-
-  cookie=0x0, duration=55.587s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
-  tp_dst=0x300/0xfff8 actions=goto_table:100
-
-  cookie=0x0, duration=55.437s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
-  tp_dst=0x150/0xfff0 actions=goto_table:100
-
-  cookie=0x0, duration=55.282s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
-  tp_dst=0x14e/0xfffe actions=goto_table:100
-
-  cookie=0x0, duration=54.063s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
-  tp_dst=0x308/0xfffe actions=goto_table:100
-
-  cookie=0x0, duration=55.130s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,udp,dl_dst=fa:16:3e:f9:2c:42,nw_src=0.0.0.0/24,
-  tp_dst=333 actions=goto_table:100  
-
-[[cidrremote-security-group]]
-===== CIDR/Remote Security Group
-
-When adding a security group, a rule can be made applicable to a set of
-CIDRs or to a set of VMs which have a particular security group
-associated with them.
-
-If a CIDR is selected, only one flow rule is added, allowing the
-traffic from/to the IPs belonging to that CIDR.
-
-  cookie=0x0, duration=202.516s, table=40, n_packets=0, n_bytes=0,
-  priority=61007,ct_state=+new+trk,icmp,dl_src=fa:16:3e:ee:a5:ec,
-  nw_dst=0.0.0.0/24,icmp_type=2,icmp_code=4 actions=ct(commit),goto_table:50  
-
-If a remote security group is selected, a flow is inserted for every
-VM which has that security group associated.
-
-  cookie=0x0, duration=60.701s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,ct_state=+new+trk,udp,dl_dst=fa:16:3e:22:59:2f,
-  nw_src=10.100.5.3,tp_dst=2222    actions=ct(commit),goto_table:100
-
-  cookie=0x0, duration=58.988s, table=90, n_packets=0, n_bytes=0, 
-  priority=61007,ct_state=+new+trk,tcp,dl_dst=fa:16:3e:22:59:2f,
-  nw_src=10.100.5.3,tp_dst=1111 actions=ct(commit),goto_table:100  
-
-[[rules-supported-in-odl]]
-===== Rules supported in ODL
-
-The following rules are supported in the current implementation. The
-direction (ingress/egress) is always expected.
-
-.Table Supported Rules
-|====
-|Protocol |Port Range |IP Prefix |Remote Security Group supported
-|Any |Any |Any |Yes
-|TCP |1 - 65535 |0.0.0.0/0 |Yes
-|UDP |1 - 65535 |0.0.0.0/0 |Yes
-|ICMP |Any |0.0.0.0/0 |Yes
-|====
-
-Note: IPv6 and the port-range feature are not supported as of today.
diff --git a/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-southbound-user-guide.adoc b/manuals/user-guide/src/main/asciidoc/ovsdb/odl-ovsdb-southbound-user-guide.adoc
deleted file mode 100644 (file)
index 451122a..0000000
+++ /dev/null
@@ -1,1021 +0,0 @@
-==== OVSDB Southbound Plugin
-
-The OVSDB Southbound Plugin provides support for managing OVS hosts
-via an OVSDB model in the MD-SAL which maps to important tables and
-attributes present in the Open_vSwitch schema.  The OVSDB Southbound Plugin
-is able to connect actively or passively to OVS hosts and operate
-as the OVSDB manager of the OVS host.  Using the OVSDB protocol it is able
-to manage the OVS database (OVSDB) on the OVS host as defined by the Open_vSwitch schema.
-
-===== OVSDB YANG Model
-
-The OVSDB Southbound Plugin provides a YANG model which is based on the
-abstract 
-https://github.com/opendaylight/yangtools/blob/stable/beryllium/yang/yang-parser-impl/src/test/resources/ietf/network-topology%402013-10-21.yang[network topology model].
-
-The details of the OVSDB YANG model are defined in the
-https://github.com/opendaylight/ovsdb/blob/stable/beryllium/southbound/southbound-api/src/main/yang/ovsdb.yang[ovsdb.yang] file.
-
-The OVSDB YANG model defines three augmentations:
-
-*ovsdb-node-augmentation*::
-This augments the network-topology node and maps primarily to the Open_vSwitch table of
-the OVSDB schema.  The ovsdb-node-augmentation is a representation of the OVS host.  It contains the following attributes.
-  * *connection-info* - holds the local and remote IP address and TCP port numbers for the OpenDaylight to OVSDB node connections
-  * *db-version* - version of the OVSDB database
-  * *ovs-version* - version of OVS
-  * *list managed-node-entry* - a list of references to ovsdb-bridge-augmentation nodes, which are the OVS bridges managed by this OVSDB node
-  * *list datapath-type-entry* - a list of the datapath types supported by the OVSDB node (e.g. 'system', 'netdev') - depends on newer OVS versions
-  * *list interface-type-entry* - a list of the interface types supported by the OVSDB node (e.g. 'internal', 'vxlan', 'gre', 'dpdk', etc.) - depends on newer OVS versions
-  * *list openvswitch-external-ids* - a list of the key/value pairs in the Open_vSwitch table external_ids column
-  * *list openvswitch-other-config* - a list of the key/value pairs in the Open_vSwitch table other_config column
-  * *list manager-entry* - list of manager information entries and connection status
-  * *list qos-entries* - list of QoS entries present in the QoS table
-  * *list queues* - list of queue entries present in the queue table
-*ovsdb-bridge-augmentation*::
-This augments the network-topology node and maps to a specific bridge in the OVSDB
-bridge table of the associated OVSDB node. It contains the following attributes.
-  * *bridge-uuid* - UUID of the OVSDB bridge
-  * *bridge-name* - name of the OVSDB bridge
-  * *bridge-openflow-node-ref* - a reference (instance-identifier) of the OpenFlow node associated with this bridge
-  * *list protocol-entry* - the version of OpenFlow protocol to use with the OpenFlow controller
-  * *list controller-entry* - a list of controller-uuid and is-connected status of the OpenFlow controllers associated with this bridge
-  * *datapath-id* - the datapath ID associated with this bridge on the OVSDB node
-  * *datapath-type* - the datapath type of this bridge
-  * *fail-mode* - the OVSDB fail mode setting of this bridge
-  * *flow-node* - a reference to the flow node corresponding to this bridge
-  * *managed-by* - a reference to the ovsdb-node-augmentation (OVSDB node) that is managing this bridge
-  * *list bridge-external-ids* - a list of the key/value pairs in the bridge table external_ids column for this bridge
-  * *list bridge-other-configs* - a list of the key/value pairs in the bridge table other_config column for this bridge
-*ovsdb-termination-point-augmentation*::
-This augments the topology termination point model.  The OVSDB Southbound
-Plugin uses this model to represent both the OVSDB port and OVSDB interface for
-a given port/interface in the OVSDB schema.  It contains the following
-attributes.
-  * *port-uuid* - UUID of an OVSDB port row
-  * *interface-uuid* - UUID of an OVSDB interface row
-  * *name* - name of the port and interface
-  * *interface-type* - the interface type
-  * *list options* - a list of port options
-  * *ofport* - the OpenFlow port number of the interface
-  * *ofport_request* - the requested OpenFlow port number for the interface
-  * *vlan-tag* - the VLAN tag value
-  * *list trunks* - list of VLAN tag values for trunk mode
-  * *vlan-mode* - the VLAN mode (e.g. access, native-tagged, native-untagged, trunk)
-  * *list port-external-ids* - a list of the key/value pairs in the port table external_ids column for this port
-  * *list interface-external-ids* - a list of the key/value pairs in the interface table external_ids column for this interface
-  * *list port-other-configs* - a list of the key/value pairs in the port table other_config column for this port
-  * *list interface-other-configs* - a list of the key/value pairs in the interface table other_config column for this interface
-  * *list interface-lldp* - LLDP Auto Attach configuration for the interface
-  * *qos* - UUID of the QoS entry in the QoS table assigned to this port
-
-===== Getting Started
-
-To install the OVSDB Southbound Plugin, use the following command at the Karaf console:
-
- feature:install odl-ovsdb-southbound-impl-ui
-
-After installing the OVSDB Southbound Plugin, and before any OVSDB topology nodes have been created,
-the OVSDB topology will appear as follows in the configuration and operational MD-SAL.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
-  or
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
-
-Result Body:
-
- {
-   "topology": [
-     {
-       "topology-id": "ovsdb:1"
-     }
-   ]
- }
-
-Where
-
-'<controller-ip>' is the IP address of the OpenDaylight controller
-
-===== OpenDaylight as the OVSDB Manager
-
-An OVS host is a system which is running the OVS software and is capable of being managed
-by an OVSDB manager.  The OVSDB Southbound Plugin is capable of connecting to
-an OVS host and operating as an OVSDB manager.  Depending on the configuration of the
-OVS host, the connection of OpenDaylight to the OVS host will be active or passive.
-
-===== Active Connection to OVS Hosts
-
-An active connection is when the OVSDB Southbound Plugin initiates the connection to
-an OVS host.  This happens when the OVS host is configured to listen for the
-connection (i.e. the OVSDB Southbound Plugin is active and the OVS host is passive).
-The OVS host is configured with the following command:
-
- sudo ovs-vsctl set-manager ptcp:6640
-
-This configures the OVS host to listen on TCP port 6640.
-
-The OVSDB Southbound Plugin can be configured via the configuration MD-SAL to
-actively connect to an OVS host.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
-
-Body:
-
- {
-   "network-topology:node": [
-     {
-       "node-id": "ovsdb://HOST1",
-       "connection-info": {
-         "ovsdb:remote-port": "6640",
-         "ovsdb:remote-ip": "<ovs-host-ip>"
-       }
-     }
-   ]
- }
-
-Where
-
-'<ovs-host-ip>' is the IP address of the OVS Host
-
-
-Note that the configuration assigns a 'node-id' of "ovsdb://HOST1" to the OVSDB node.
-This 'node-id' will be used as the identifier for this OVSDB node in the MD-SAL.
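As a sketch, the PUT above can be assembled with Python's standard library (the request is built but not sent; the addresses are placeholders, and your deployment's RESTCONF credentials would still need to be attached before sending):

```python
import json
from urllib.request import Request

def ovsdb_connect_request(controller_ip, node_id, ovs_host_ip, port=6640):
    """Build (but do not send) the RESTCONF PUT that tells the OVSDB
    Southbound Plugin to actively connect to an OVS host."""
    url = (
        "http://%s:8181/restconf/config/network-topology:network-topology"
        "/topology/ovsdb:1/node/%s"
        % (controller_ip, node_id.replace("/", "%2F"))  # percent-encode slashes
    )
    body = {
        "network-topology:node": [{
            "node-id": node_id,
            "connection-info": {
                "ovsdb:remote-port": str(port),
                "ovsdb:remote-ip": ovs_host_ip,
            },
        }]
    }
    return Request(url, data=json.dumps(body).encode(),
                   headers={"Content-Type": "application/json"}, method="PUT")

req = ovsdb_connect_request("127.0.0.1", "ovsdb://HOST1", "192.168.1.10")
print(req.get_method(), req.full_url)
```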
-
-Query the configuration MD-SAL for the OVSDB topology.
-
-HTTP GET:
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
-
-Result Body:
-
- {
-   "topology": [
-     {
-       "topology-id": "ovsdb:1",
-       "node": [
-         {
-           "node-id": "ovsdb://HOST1",
-           "ovsdb:connection-info": {
-             "remote-ip": "<ovs-host-ip>",
-             "remote-port": 6640
-           }
-         }
-       ]
-     }
-   ]
- }
-
-As a result of the OVSDB node configuration being added to the configuration MD-SAL, the OVSDB
-Southbound Plugin will attempt to connect with the specified OVS host.  If the connection is
-successful, the plugin will connect to the OVS host as an OVSDB manager, query the schemas and
-databases supported by the OVS host, and register to monitor changes made to the OVSDB tables
-on the OVS host.  It will also set an external id key and value in the external-ids column
-of the Open_vSwitch table of the OVS host, which identifies the MD-SAL instance identifier
-of the OVSDB node.  This ensures that the OVSDB node will use the same 'node-id' in both the
-configuration and operational MD-SAL.
-
- "opendaylight-iid" = "instance identifier of OVSDB node in the MD-SAL"
-
-When the OVS host sends the OVSDB Southbound Plugin the first update message after the monitoring has
-been established, the plugin will populate the operational MD-SAL with the information it
-receives from the OVS host.
-
-Query the operational MD-SAL for the OVSDB topology.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
-
-Result Body:
-
- {
-   "topology": [
-     {
-       "topology-id": "ovsdb:1",
-       "node": [
-         {
-           "node-id": "ovsdb://HOST1",
-           "ovsdb:openvswitch-external-ids": [
-             {
-               "external-id-key": "opendaylight-iid",
-               "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
-             }
-           ],
-           "ovsdb:connection-info": {
-             "local-ip": "<controller-ip>",
-             "remote-port": 6640,
-             "remote-ip": "<ovs-host-ip>",
-             "local-port": 39042
-           },
-           "ovsdb:ovs-version": "2.3.1-git4750c96",
-           "ovsdb:manager-entry": [
-             {
-               "target": "ptcp:6640",
-               "connected": true,
-               "number_of_connections": 1
-             }
-           ]
-         }
-       ]
-     }
-   ]
- }
-
-
-To disconnect an active connection, just delete the configuration MD-SAL entry.
-
-HTTP DELETE:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1
-
-Note that in the above example, the '/' characters which are part of the 'node-id' are percent-encoded as "%2F".
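The escaping rule can be expressed with Python's standard library; keeping ':' in the safe set encodes only the '/' characters, matching the URLs used throughout this section:

```python
from urllib.parse import quote

def encode_node_id(node_id):
    """Percent-encode a node-id for use as a single RESTCONF path segment.

    ':' is kept literal while '/' becomes %2F, as in the example URLs above.
    """
    return quote(node_id, safe=":")
```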
-
-===== Passive Connection to OVS Hosts
-
-A passive connection is when the OVS host initiates the connection to
-the OVSDB Southbound Plugin.  This happens when the OVS host is configured to connect
-to the OVSDB Southbound Plugin.
-The OVS host is configured with the following command:
-
- sudo ovs-vsctl set-manager tcp:<controller-ip>:6640
-
-The OVSDB Southbound Plugin is configured to listen for OVSDB connections
-on TCP port 6640.  This value can be changed by editing the "./karaf/target/assembly/etc/custom.properties"
-file and changing the value of the "ovsdb.listenPort" attribute.
-
-When a passive connection is made, the OVSDB node will appear first in the operational MD-SAL.
-If the Open_vSwitch table does not contain an external-ids value of 'opendaylight-iid', then
-the 'node-id' of the new OVSDB node will be created in the format:
-
- "ovsdb://uuid/<actual UUID value>"
-
-If an 'opendaylight-iid' value was already present in the external-ids column, then the
-instance identifier defined there will be used to create the 'node-id' instead.
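The node-id selection just described can be sketched as follows. This is an illustrative approximation of the behavior, not the plugin's actual code:

```python
def choose_node_id(external_ids, row_uuid):
    """Pick the node-id for a passively connected OVS host.

    Reuse an existing 'opendaylight-iid' if one is present in the
    external-ids column; otherwise derive a uuid-based node-id.
    """
    iid = external_ids.get("opendaylight-iid")
    if iid is not None:
        # The instance identifier ends with ...node-id='<node-id>']
        return iid.rsplit("node-id='", 1)[1].rstrip("']")
    return "ovsdb://uuid/" + row_uuid
```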
-
-Query the operational MD-SAL for an OVSDB node after a passive connection.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/
-
-Result Body:
-
- {
-   "topology": [
-     {
-       "topology-id": "ovsdb:1",
-       "node": [
-         {
-           "node-id": "ovsdb://uuid/163724f4-6a70-428a-a8a0-63b2a21f12dd",
-           "ovsdb:openvswitch-external-ids": [
-             {
-               "external-id-key": "system-id",
-               "external-id-value": "ecf160af-e78c-4f6b-a005-83a6baa5c979"
-             }
-           ],
-           "ovsdb:connection-info": {
-             "local-ip": "<controller-ip>",
-             "remote-port": 46731,
-             "remote-ip": "<ovs-host-ip>",
-             "local-port": 6640
-           },
-           "ovsdb:ovs-version": "2.3.1-git4750c96",
-           "ovsdb:manager-entry": [
-             {
-               "target": "tcp:10.11.21.7:6640",
-               "connected": true,
-               "number_of_connections": 1
-             }
-           ]
-         }
-       ]
-     }
-   ]
- }
-
-Take note of the 'node-id' that was created in this case.
-
-===== Manage Bridges
-
-The OVSDB Southbound Plugin can be used to manage bridges on an OVS host.
-
-This example shows how to add a bridge to the OVSDB node 'ovsdb://HOST1'.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
-
-Body:
-
- {
-   "network-topology:node": [
-     {
-       "node-id": "ovsdb://HOST1/bridge/brtest",
-       "ovsdb:bridge-name": "brtest",
-       "ovsdb:protocol-entry": [
-         {
-           "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
-         }
-       ],
-       "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
-     }
-   ]
- }
-
-Notice that the 'ovsdb:managed-by' attribute is specified in the command.  This indicates the association of the new bridge node with its OVSDB node.
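The bridge body, including the 'ovsdb:managed-by' instance identifier, can be generated with a small helper. The helper is hypothetical and shown only to make the identifier construction explicit:

```python
def build_bridge_body(ovsdb_node_id, bridge_name):
    """Build the JSON body for adding a bridge under an existing OVSDB node."""
    # The managed-by value is the instance identifier of the parent OVSDB node.
    managed_by = (
        "/network-topology:network-topology"
        "/network-topology:topology[network-topology:topology-id='ovsdb:1']"
        "/network-topology:node[network-topology:node-id='%s']" % ovsdb_node_id
    )
    return {
        "network-topology:node": [
            {
                "node-id": "%s/bridge/%s" % (ovsdb_node_id, bridge_name),
                "ovsdb:bridge-name": bridge_name,
                "ovsdb:protocol-entry": [
                    {"protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"}
                ],
                "ovsdb:managed-by": managed_by,
            }
        ]
    }
```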
-
-Bridges can be updated.  In the following example, OpenDaylight is configured to be the OpenFlow controller for the bridge.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
-
-Body:
-
- {
-   "network-topology:node": [
-         {
-           "node-id": "ovsdb://HOST1/bridge/brtest",
-              "ovsdb:bridge-name": "brtest",
-               "ovsdb:controller-entry": [
-                 {
-                   "target": "tcp:<controller-ip>:6653"
-                 }
-               ],
-              "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']"
-         }
-     ]
- }
-
-If the OpenDaylight OpenFlow Plugin is installed, then checking on the OVS host will show that OpenDaylight has successfully connected as the controller for the bridge.
-
- $ sudo ovs-vsctl show
-     Manager "ptcp:6640"
-         is_connected: true
-     Bridge brtest
-         Controller "tcp:<controller-ip>:6653"
-             is_connected: true
-         Port brtest
-             Interface brtest
-                 type: internal
-     ovs_version: "2.3.1-git4750c96"
-
-Query the operational MD-SAL to see how the bridge appears.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/
-
-Result Body:
-
- {
-   "node": [
-     {
-       "node-id": "ovsdb://HOST1/bridge/brtest",
-       "ovsdb:bridge-name": "brtest",
-       "ovsdb:datapath-type": "ovsdb:datapath-type-system",
-       "ovsdb:datapath-id": "00:00:da:e9:0c:08:2d:45",
-       "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1']",
-       "ovsdb:bridge-external-ids": [
-         {
-           "bridge-external-id-key": "opendaylight-iid",
-           "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']"
-         }
-       ],
-       "ovsdb:protocol-entry": [
-         {
-           "protocol": "ovsdb:ovsdb-bridge-protocol-openflow-13"
-         }
-       ],
-       "ovsdb:bridge-uuid": "080ce9da-101e-452d-94cd-ee8bef8a4b69",
-       "ovsdb:controller-entry": [
-         {
-           "target": "tcp:10.11.21.7:6653",
-           "is-connected": true,
-           "controller-uuid": "c39b1262-0876-4613-8bfd-c67eec1a991b"
-         }
-       ],
-       "termination-point": [
-         {
-           "tp-id": "brtest",
-           "ovsdb:port-uuid": "c808ae8d-7af2-4323-83c1-e397696dc9c8",
-           "ovsdb:ofport": 65534,
-           "ovsdb:interface-type": "ovsdb:interface-type-internal",
-           "ovsdb:interface-uuid": "49e9417f-4479-4ede-8faf-7c873b8c0413",
-           "ovsdb:name": "brtest"
-         }
-       ]
-     }
-   ]
- }
-
-Notice that just like with the OVSDB node, an 'opendaylight-iid' has been added to the external-ids column of the bridge since it was created via the configuration MD-SAL.
-
-
-A bridge node may be deleted as well.
-
-HTTP DELETE:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest
-
-===== Manage Ports
-
-Similarly, ports may be managed by the OVSDB Southbound Plugin.
-
-This example illustrates how a port and various attributes may be created on a bridge.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
-
-Body:
-
- {
-   "network-topology:termination-point": [
-     {
-       "ovsdb:options": [
-         {
-           "ovsdb:option": "remote_ip",
-           "ovsdb:value" : "10.10.14.11"
-         }
-       ],
-       "ovsdb:name": "testport",
-       "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
-       "tp-id": "testport",
-       "vlan-tag": "1",
-       "trunks": [
-         {
-           "trunk": "5"
-         }
-       ],
-       "vlan-mode":"access"
-     }
-   ]
- }
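A helper for assembling termination-point bodies like the one above can be sketched as follows. The function name is hypothetical; the option keys mirror the OVSDB interface columns shown in the example:

```python
def build_vxlan_port_body(name, remote_ip, vlan_tag=None, trunks=()):
    """Assemble the termination-point body for a VXLAN port with VLAN attributes."""
    tp = {
        "ovsdb:name": name,
        "tp-id": name,
        "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
        "ovsdb:options": [
            {"ovsdb:option": "remote_ip", "ovsdb:value": remote_ip}
        ],
    }
    if vlan_tag is not None:
        # RESTCONF accepts numeric leaves as strings, as in the example above.
        tp["vlan-tag"] = str(vlan_tag)
        tp["vlan-mode"] = "access"
    if trunks:
        tp["trunks"] = [{"trunk": str(t)} for t in trunks]
    return {"network-topology:termination-point": [tp]}
```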
-
-
-Ports can be updated.  The following example adds another VLAN trunk.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
-
-Body:
-
- {
-   "network-topology:termination-point": [
-     {
-       "ovsdb:name": "testport",
-       "tp-id": "testport",
-       "trunks": [
-         {
-           "trunk": "5"
-         },
-         {
-           "trunk": "500"
-         }
-       ]
-     }
-   ]
- }
-
-Query the operational MD-SAL for the port.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest/termination-point/testport/
-
-Result Body:
-
- {
-   "termination-point": [
-     {
-       "tp-id": "testport",
-       "ovsdb:port-uuid": "b1262110-2a4f-4442-b0df-84faf145488d",
-       "ovsdb:options": [
-         {
-           "option": "remote_ip",
-           "value": "10.10.14.11"
-         }
-       ],
-       "ovsdb:port-external-ids": [
-         {
-           "external-id-key": "opendaylight-iid",
-           "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb://HOST1/bridge/brtest']/network-topology:termination-point[network-topology:tp-id='testport']"
-         }
-       ],
-       "ovsdb:interface-type": "ovsdb:interface-type-vxlan",
-       "ovsdb:trunks": [
-         {
-           "trunk": 5
-         },
-         {
-           "trunk": 500
-         }
-       ],
-       "ovsdb:vlan-mode": "access",
-       "ovsdb:vlan-tag": 1,
-       "ovsdb:interface-uuid": "7cec653b-f407-45a8-baec-7eb36b6791c9",
-       "ovsdb:name": "testport",
-       "ovsdb:ofport": 1
-     }
-   ]
- }
-
-Remember that the OVSDB YANG model includes both OVSDB port and interface table attributes in the termination-point augmentation.
-Both kinds of attributes can be seen in the examples above.  Again, note the creation of an 'opendaylight-iid' value in the external-ids column of the port table.
-
-Delete a port.
-
-HTTP DELETE:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:%2F%2FHOST1%2Fbridge%2Fbrtest2/termination-point/testport/
-
-
-
-
-===== Overview of QoS and Queue
-The OVSDB Southbound Plugin provides the capability of managing the QoS
-and Queue tables on an OVS host with OpenDaylight configured
-as the OVSDB manager.
-
-====== QoS and Queue Tables in OVSDB
-The OVSDB includes a QoS and a Queue table.  Unlike most other tables in
-the OVSDB (the Open_vSwitch table being another exception), the QoS and Queue tables are
-"root set" tables, which means that entries, or rows, in these tables are
-not automatically deleted if they cannot be reached directly or indirectly
-from the Open_vSwitch table.  This means that QoS entries can exist and be
-managed independently of whether or not they are referenced in a Port entry.
-Similarly, Queue entries can be managed independently of whether or not they are
-referenced by a QoS entry.
-
-
-====== Modelling of QoS and Queue Tables in OpenDaylight MD-SAL
-
-Since the QoS and Queue tables are "root set" tables, they are modeled
-in the OpenDaylight MD-SAL as lists which are part of the attributes
-of the OVSDB node model.
-
-The MD-SAL QoS and Queue models have an additional identifier attribute per
-entry (e.g. "qos-id" or "queue-id") which is not present
-in the OVSDB schema. This identifier is used by the MD-SAL as a key for referencing
-the entry.  If the entry is created originally from the
-configuration MD-SAL, then the value of the identifier is whatever is specified
-by the configuration.  If the entry is created on the OVSDB node and received
-by OpenDaylight in an operational update, then the id will be created in
-the following format.
-
- "queue-id": "queue://<UUID>"
- "qos-id": "qos://<UUID>"
-
-The UUID in the above identifiers is the actual UUID of the entry in the
-OVSDB database.
-
-When the QoS or Queue entry is created by the configuration MD-SAL, the
-identifier will be configured as part of the external-ids column of the
-entry.  This will ensure that the corresponding entry that is created
-in the operational MD-SAL uses the same identifier.
-
- "queues-external-ids": [
-   {
-     "queues-external-id-key": "opendaylight-queue-id",
-     "queues-external-id-value": "QUEUE-1"
-   }
- ]
-
-See more in the examples that follow in this section.
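The identifier rules can be sketched for the Queue case. This is an illustrative approximation of the behavior described above, not the plugin's actual code:

```python
def derive_queue_id(external_ids, row_uuid):
    """Derive the MD-SAL queue-id for an operational Queue entry.

    Entries created via the config MD-SAL carry their configured id in
    external-ids; entries learned from the switch get a uuid-based id.
    """
    for entry in external_ids:
        if entry.get("queues-external-id-key") == "opendaylight-queue-id":
            return entry["queues-external-id-value"]
    return "queue://" + row_uuid
```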
-
-The QoS schema in OVSDB currently defines two types of QoS entries.
-
-* linux-htb
-* linux-hfsc
-
-These QoS types are defined in the QoS model.  Additional types will
-need to be added to the model in order to be supported.  See the examples
-that follow for how the QoS type is specified in the model.
-
-QoS entries can be configured with additional attributes such as "max-rate".
-These are configured via the 'other-config' column of the QoS entry.  Refer
-to OVSDB schema (in the reference section below) for all of the relevant
-attributes that can be configured.  The examples in the rest of this section
-will demonstrate how the other-config column may be configured.
-
-Similarly, the Queue entries may be configured with additional attributes
-via the other-config column.
-
-===== Managing QoS and Queues via Configuration MD-SAL
-This section will show some examples on how to manage QoS and
-Queue entries via the configuration MD-SAL.  The examples will
-be illustrated by using RESTCONF (see
-https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons/Qos-and-Queue-Collection.json.postman_collection[QoS and Queue Postman Collection] ).
-
-A pre-requisite for managing QoS and Queue entries is that the
-OVS host must be present in the configuration MD-SAL.
-
-For the following examples, the following OVS host is configured.
-
-HTTP POST:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/
-
-Body:
-
- {
-   "node": [
-     {
-       "node-id": "ovsdb:HOST1",
-       "connection-info": {
-         "ovsdb:remote-ip": "<ovs-host-ip>",
-         "ovsdb:remote-port": "<ovs-host-ovsdb-port>"
-       }
-     }
-   ]
- }
-
-Where
-
-* '<controller-ip>' is the IP address of the OpenDaylight controller
-* '<ovs-host-ip>' is the IP address of the OVS host
-* '<ovs-host-ovsdb-port>' is the TCP port of the OVSDB server on the OVS host (e.g. 6640)
-
-This command creates an OVSDB node with the node-id "ovsdb:HOST1".  This OVSDB node will be used in the following
-examples.
-
-QoS and Queue entries can be created and managed without a port, but ultimately, QoS entries are
-associated with a port in order to use them.  For the following examples a test bridge and port will
-be created.
-
-Create the test bridge.
-
-HTTP PUT
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test
-
-Body:
-
- {
-   "network-topology:node": [
-     {
-       "node-id": "ovsdb:HOST1/bridge/br-test",
-       "ovsdb:bridge-name": "br-test",
-       "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']"
-     }
-   ]
- }
-
-Create the test port (which is modeled as a termination point in the OpenDaylight MD-SAL).
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
-
-Body:
-
- {
-   "network-topology:termination-point": [
-     {
-       "ovsdb:name": "testport",
-       "tp-id": "testport"
-     }
-   ]
- }
-
-If all of the previous steps were successful, a query of the operational MD-SAL should look something like the following results.  This indicates that the configuration commands have been successfully instantiated on the OVS host.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test
-
-Result Body:
-
- {
-   "node": [
-     {
-       "node-id": "ovsdb:HOST1/bridge/br-test",
-       "ovsdb:bridge-name": "br-test",
-       "ovsdb:datapath-type": "ovsdb:datapath-type-system",
-       "ovsdb:managed-by": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1']",
-       "ovsdb:datapath-id": "00:00:8e:5d:22:3d:09:49",
-       "ovsdb:bridge-external-ids": [
-         {
-           "bridge-external-id-key": "opendaylight-iid",
-           "bridge-external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']"
-         }
-       ],
-       "ovsdb:bridge-uuid": "3d225d8d-d060-4909-93ef-6f4db58ef7cc",
-       "termination-point": [
-         {
-           "tp-id": "br-test",
-           "ovsdb:port-uuid": "f85f7aa7-4956-40e4-9c94-e6ca2d5cd254",
-           "ovsdb:ofport": 65534,
-           "ovsdb:interface-type": "ovsdb:interface-type-internal",
-           "ovsdb:interface-uuid": "29ff3692-6ed4-4ad7-a077-1edc277ecb1a",
-           "ovsdb:name": "br-test"
-         },
-         {
-           "tp-id": "testport",
-           "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
-           "ovsdb:port-external-ids": [
-             {
-               "external-id-key": "opendaylight-iid",
-               "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
-             }
-           ],
-           "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
-           "ovsdb:name": "testport"
-         }
-       ]
-     }
-   ]
- }
-
-====== Create Queue
-Create a new Queue in the configuration MD-SAL.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
-
-Body:
-
- {
-   "ovsdb:queues": [
-     {
-       "queue-id": "QUEUE-1",
-       "dscp": 25,
-       "queues-other-config": [
-         {
-           "queue-other-config-key": "max-rate",
-           "queue-other-config-value": "3600000"
-         }
-       ]
-     }
-   ]
- }
-
-
-====== Query Queue
-Now query the operational MD-SAL for the Queue entry.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
-
-Result Body:
-
- {
-   "ovsdb:queues": [
-     {
-       "queue-id": "QUEUE-1",
-       "queues-other-config": [
-         {
-           "queue-other-config-key": "max-rate",
-           "queue-other-config-value": "3600000"
-         }
-       ],
-       "queues-external-ids": [
-         {
-           "queues-external-id-key": "opendaylight-queue-id",
-           "queues-external-id-value": "QUEUE-1"
-         }
-       ],
-       "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
-       "dscp": 25
-     }
-   ]
- }
-
-====== Create QoS
-
-Create a QoS entry.  Note that the UUID of the Queue entry, obtained by querying the operational MD-SAL of the Queue entry, is
-specified in the queue-list of the QoS entry.  Queue entries may be added to the QoS entry at the creation of the QoS entry, or
-by a subsequent update to the QoS entry.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
-
-Body:
-
- {
-   "ovsdb:qos-entries": [
-     {
-       "qos-id": "QOS-1",
-       "qos-type": "ovsdb:qos-type-linux-htb",
-       "qos-other-config": [
-         {
-           "other-config-key": "max-rate",
-           "other-config-value": "4400000"
-         }
-       ],
-       "queue-list": [
-         {
-           "queue-number": "0",
-           "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
-         }
-       ]
-     }
-   ]
- }
-
-====== Query QoS
-
-Query the operational MD-SAL for the QoS entry.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
-
-Result Body:
-
- {
-   "ovsdb:qos-entries": [
-     {
-       "qos-id": "QOS-1",
-       "qos-other-config": [
-         {
-           "other-config-key": "max-rate",
-           "other-config-value": "4400000"
-         }
-       ],
-       "queue-list": [
-         {
-           "queue-number": 0,
-           "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
-         }
-       ],
-       "qos-type": "ovsdb:qos-type-linux-htb",
-       "qos-external-ids": [
-         {
-           "qos-external-id-key": "opendaylight-qos-id",
-           "qos-external-id-value": "QOS-1"
-         }
-       ],
-       "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
-     }
-   ]
- }
-
-====== Add QoS to a Port
-Update the termination point entry to include the UUID of the QoS entry, obtained by querying the operational MD-SAL, to associate a QoS entry with a port.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
-
-Body:
-
- {
-   "network-topology:termination-point": [
-     {
-       "ovsdb:name": "testport",
-       "tp-id": "testport",
-       "qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
-     }
-   ]
- }
-
-====== Query the Port
-Query the operational MD-SAL to see how the QoS entry appears in the termination point model.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
-
-Result Body:
-
- {
-   "termination-point": [
-     {
-       "tp-id": "testport",
-       "ovsdb:port-uuid": "aa79a8e2-147f-403a-9fa9-6ee5ec276f08",
-       "ovsdb:port-external-ids": [
-         {
-           "external-id-key": "opendaylight-iid",
-           "external-id-value": "/network-topology:network-topology/network-topology:topology[network-topology:topology-id='ovsdb:1']/network-topology:node[network-topology:node-id='ovsdb:HOST1/bridge/br-test']/network-topology:termination-point[network-topology:tp-id='testport']"
-         }
-       ],
-       "ovsdb:qos": "90ba9c60-3aac-499d-9be7-555f19a6bb31",
-       "ovsdb:interface-uuid": "e96f282e-882c-41dd-a870-80e6b29136ac",
-       "ovsdb:name": "testport"
-     }
-   ]
- }
-
-
-====== Query the OVSDB Node
-Query the operational MD-SAL for the OVS host to see how the QoS and Queue entries appear as lists in the OVS node model.
-
-HTTP GET:
-
- http://<controller-ip>:8181/restconf/operational/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/
-
-Result Body (edited to only show information relevant to the QoS and Queue entries):
-
- {
-   "node": [
-     {
-       "node-id": "ovsdb:HOST1",
-       <content edited out>
-       "ovsdb:queues": [
-         {
-           "queue-id": "QUEUE-1",
-           "queues-other-config": [
-             {
-               "queue-other-config-key": "max-rate",
-               "queue-other-config-value": "3600000"
-             }
-           ],
-           "queues-external-ids": [
-             {
-               "queues-external-id-key": "opendaylight-queue-id",
-               "queues-external-id-value": "QUEUE-1"
-             }
-           ],
-           "queue-uuid": "83640357-3596-4877-9527-b472aa854d69",
-           "dscp": 25
-         }
-       ],
-       "ovsdb:qos-entries": [
-         {
-           "qos-id": "QOS-1",
-           "qos-other-config": [
-             {
-               "other-config-key": "max-rate",
-               "other-config-value": "4400000"
-             }
-           ],
-           "queue-list": [
-             {
-               "queue-number": 0,
-               "queue-uuid": "83640357-3596-4877-9527-b472aa854d69"
-             }
-           ],
-           "qos-type": "ovsdb:qos-type-linux-htb",
-           "qos-external-ids": [
-             {
-               "qos-external-id-key": "opendaylight-qos-id",
-               "qos-external-id-value": "QOS-1"
-             }
-           ],
-           "qos-uuid": "90ba9c60-3aac-499d-9be7-555f19a6bb31"
-         }
-       ]
-       <content edited out>
-     }
-   ]
- }
-
-
-====== Remove QoS from a Port
-This example removes a QoS entry from the termination point and associated port.  Note that this is a PUT command on the termination point with the
-QoS attribute absent.  Other attributes of the termination point should be included in the body of the command so that they are not inadvertently removed.
-
-HTTP PUT:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1%2Fbridge%2Fbr-test/termination-point/testport/
-
-Body:
-
- {
-   "network-topology:termination-point": [
-     {
-       "ovsdb:name": "testport",
-       "tp-id": "testport"
-     }
-   ]
- }
-
-====== Remove a Queue from QoS
-
-This example removes the specific Queue entry from the queue list in the QoS entry.  The queue entry is specified by the queue number, which is "0" in this example.
-
-HTTP DELETE:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/queue-list/0/
-
-====== Remove Queue
-Once all references to a specific queue entry have been removed from QoS entries, the Queue itself can be removed.
-
-HTTP DELETE:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:queues/QUEUE-1/
-
-====== Remove QoS
-The QoS entry may be removed when it is no longer referenced by any ports.
-
-HTTP DELETE:
-
- http://<controller-ip>:8181/restconf/config/network-topology:network-topology/topology/ovsdb:1/node/ovsdb:HOST1/ovsdb:qos-entries/QOS-1/
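The removal steps above must respect reference order: take the QoS entry off the port first, then remove queue references from the QoS entry, then delete the Queue and finally the QoS entry. A sketch that assembles (but does not send) the DELETE URLs in that order, with hypothetical parameter names:

```python
def teardown_urls(base, node, qos_id, queue_id, queue_number):
    """Build the ordered DELETE URLs for tearing down a QoS/Queue pair.

    Assumes the QoS entry has already been removed from any port.
    """
    prefix = ("%s/restconf/config/network-topology:network-topology"
              "/topology/ovsdb:1/node/%s" % (base, node))
    return [
        # 1. Remove the queue reference from the QoS entry's queue-list.
        "%s/ovsdb:qos-entries/%s/queue-list/%s/" % (prefix, qos_id, queue_number),
        # 2. Remove the now-unreferenced Queue entry.
        "%s/ovsdb:queues/%s/" % (prefix, queue_id),
        # 3. Remove the QoS entry itself.
        "%s/ovsdb:qos-entries/%s/" % (prefix, qos_id),
    ]
```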
-
-
-===== References
-http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf[Openvswitch schema]
-
-https://github.com/opendaylight/ovsdb/blob/stable/beryllium/resources/commons[OVSDB and Netvirt Postman Collection]
-
index fbab4369697b8279c2bd9b67c73a66d2845c8429..0e70ace6bb45cea6732c44853e88b59b10537a48 100644 (file)
@@ -1,52 +1,3 @@
 == PacketCable User Guide
 
-=== Overview
-
-These components introduce DOCSIS QoS Gate management using
-the PCMM protocol. The driver component is responsible for the
-PCMM/COPS/PDP functionality required to service requests from
-the PacketCable Provider and FlowManager. Requests are transposed into PCMM
-Gate Control messages and transmitted via COPS to the CMTS. This plugin
-adheres to the PCMM/COPS/PDP functionality defined in the CableLabs
-specification. The PacketCable solution is an MD-SAL compliant component.
-
-=== PacketCable Components
-
-PacketCable is comprised of two OpenDaylight bundles:
-
-[options="header"]
-|======
-|Bundle |Description
-|odl-packetcable-policy-server | Plugin that provides PCMM model implementation based on CMTS structure and COPS protocol.
-|odl-packetcable-policy-model  | The model provides a direct mapping to the underlying QoS Gates of the CMTS.
-|======
-
-See the PacketCable 
-https://git.opendaylight.org/gerrit/gitweb?p=packetcable.git;a=tree;f=packetcable-policy-model/src/main/yang[YANG
-Models].
-
-=== Installing PacketCable
-
-To install PacketCable, run the following `feature:install` command from the Karaf CLI
-
- feature:install odl-packetcable-policy-server-all odl-restconf odl-mdsal-apidocs
-
-=== Explore and exercise the PacketCable REST API
-
-To see the PacketCable APIs, browse to this URL:
-http://localhost:8181/apidoc/explorer/index.html
-
-Replace localhost with the IP address or hostname where OpenDaylight is running if you are not running OpenDaylight locally on your machine.
-
-NOTE: Prior to setting any PCMM gates, a CCAP must first be added. 
-
-=== Postman
-
-https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en[Install
-the Chrome extension]
-
-https://git.opendaylight.org/gerrit/gitweb?p=packetcable.git;a=tree;f=packetcable-policy-server/doc/restconf-samples[Download
-and import sample packetcable collection]
-
-.Postman Operations
-image::Screenshot5.png[width=500]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/packetcable-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-classifier-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-classifier-user.adoc
deleted file mode 100644 (file)
index ada39c5..0000000
+++ /dev/null
@@ -1,59 +0,0 @@
-=== SFC Classifier User Guide
-
-==== Overview
-A description of the classifier can be found in: https://datatracker.ietf.org/doc/draft-ietf-sfc-architecture/
-
-There are two types of classifier:
-
-. OpenFlow Classifier
-
-. Iptables Classifier
-
-==== OpenFlow Classifier
-
-OpenFlow Classifier implements the classification criteria based on OpenFlow rules deployed into an OpenFlow switch. An Open vSwitch takes the role of a classifier and performs various encapsulations such as NSH, VLAN, and MPLS. In the existing implementation, the classifier supports NSH encapsulation. Matching information is based on ACL for MAC addresses, ports, protocol, IPv4 and IPv6. Supported protocols are TCP, UDP and SCTP. Actions information in the OF rules shall be forwarding of the encapsulated packets with specific information related to the RSP.
-
-===== Classifier Architecture
-
-The OVSDB Southbound interface is used to create an instance of a bridge in a specific location (via IP address). This bridge contains the OpenFlow rules that perform the classification of the packets and react accordingly. The OpenFlow Southbound interface is used to translate the ACL information into OF rules within the Open vSwitch.
-
-NOTE: In order to create the instance of the bridge that takes the role of a classifier, an "empty" SFF must be created.
-
-===== Configuring Classifier
-. An empty SFF must be created in order to host the ACL that contains the classification information.
-. The SFF data plane locator must be configured.
-. The classifier interface must be manually added to the SFF bridge.
-
-===== Administering or Managing Classifier
-Classification information is based on MAC addresses, protocol, ports and IP. An ACL gathers this information and is assigned to an RSP, which represents a specific path for a Service Chain.
-
-==== Iptables Classifier
-
-The classifier manages everything from starting the packet listener to the creation (and removal) of appropriate ip(6)tables rules and marking received packets accordingly. Its functionality is *available only on Linux* as it leverages *NetfilterQueue*, which provides access to packets matched by an *iptables* rule. The classifier requires *root privileges* to be able to operate.
-
-So far it is capable of processing ACL for MAC addresses, ports, IPv4 and IPv6. Supported protocols are TCP and UDP.
-
-===== Classifier Architecture
-The classifier is implemented in Python and located in the project repository at sfc-py/common/classifier.py.
-
-NOTE: The classifier assumes that the Rendered Service Path (RSP) *already exists* in ODL when an ACL referencing it is obtained
-
-.How it works:
-. sfc_agent receives an ACL and passes it for processing to the classifier
-. the RSP (its SFF locator) referenced by ACL is requested from ODL
-. if the RSP exists in the ODL then ACL based iptables rules for it are applied
-
-After this process is over, every packet successfully matched to an iptables rule (i.e. successfully classified) will be NSH encapsulated and forwarded to a related SFF, which knows how to traverse the RSP.
-
-Rules are created using the appropriate iptables command. If the Access Control Entry (ACE) rule is MAC address related, both iptables and ip6tables rules are issued. If the ACE rule is IPv4 address related, only iptables rules are issued; likewise, only ip6tables rules are issued for IPv6.
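The branching described above can be sketched roughly as follows. This is a hypothetical helper for illustration, not the actual sfc-py code; the *raw* table chain and the NFQUEUE target with queue number 2 follow the notes in this section.

```python
# Illustrative sketch only -- the real logic lives in sfc-py/common/classifier.py.
def build_rule_commands(ace_matches, nfq_number=2):
    """Return the ip(6)tables commands to issue for one ACE."""
    base = "-t raw -A PREROUTING {match} -j NFQUEUE --queue-num {q}"
    cmds = []
    if "mac" in ace_matches:
        # MAC-related ACE: rules are issued for both IPv4 and IPv6
        rule = base.format(match="-m mac --mac-source " + ace_matches["mac"],
                           q=nfq_number)
        cmds += ["iptables " + rule, "ip6tables " + rule]
    elif "ipv4" in ace_matches:
        # IPv4-related ACE: iptables only
        cmds.append("iptables " + base.format(match="-s " + ace_matches["ipv4"],
                                              q=nfq_number))
    elif "ipv6" in ace_matches:
        # IPv6-related ACE: ip6tables only
        cmds.append("ip6tables " + base.format(match="-s " + ace_matches["ipv6"],
                                               q=nfq_number))
    return cmds
```

Each returned command string would then be executed with root privileges; matched packets land in Netfilter Queue 2, where the classifier NSH-encapsulates them.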
-
-NOTE: The iptables *raw* table contains all created rules
-
-===== Configuring Classifier
-The classifier doesn't need any configuration. +
-Its only requirement is that the *second (2) Netfilter Queue* is not used by any other process and is *available to the classifier*.
-
-===== Administering or Managing Classifier
-The classifier runs alongside sfc_agent; therefore, the command for starting it locally is:
-
-       sudo python3.4 sfc-py/sfc_agent.py --rest --odl-ip-port localhost:8181 --auto-sff-name --nfq-class
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-iosxe-renderer-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-iosxe-renderer-user.adoc
deleted file mode 100644 (file)
index 5f6ef9e..0000000
+++ /dev/null
@@ -1,315 +0,0 @@
-=== SFC IOS XE Renderer User Guide
-
-:SFCIOSXERNDR: SFC IOS-XE renderer
-
-==== Overview
-The early Service Function Chaining (SFC) renderer for IOS-XE devices
-({SFCIOSXERNDR}) implements Service Chaining functionality on IOS-XE
-capable switches. It listens for the creation of a Rendered Service
-Path (RSP) and sets up Service Function Forwarders (SFF) that are hosted
-on IOS-XE switches to steer traffic through the service chain.
-
-Common acronyms used in the following sections:
-
-* SF - Service Function
-* SFF - Service Function Forwarder
-* SFC - Service Function Chain
-* SP - Service Path
-* SFP - Service Function Path
-* RSP - Rendered Service Path
-* LSF - Local Service Forwarder
-* RSF - Remote Service Forwarder
-
-==== SFC IOS-XE Renderer Architecture
-When the {SFCIOSXERNDR} is initialized, all required listeners are registered
-to handle incoming data. These include the CSR/IOS-XE +NodeListener+, which stores
-data about all configurable devices including their mountpoints (used here
-as data brokers), +ServiceFunctionListener+, +ServiceForwarderListener+
-(see mapping) and +RenderedPathListener+, used to listen for
-RSP changes. When the {SFCIOSXERNDR} is invoked, +RenderedPathListener+ calls
-+IosXeRspProcessor+, which processes the RSP change and creates all necessary
-Service Paths and Remote Service Forwarders (if necessary) on the IOS-XE devices.
-
-==== Service Path details
-Each Service Path is defined by an index (represented by the NSP) and contains
-service path entries. Each entry has an appropriate service index
-(NSI) and a definition of the next hop. The next hop can be a Service Function, a different
-Service Function Forwarder, or the end of the chain - terminate. After
-terminating, the packet is sent to its destination. If an SFF is defined as the next hop,
-it has to be present on the device in the form of a Remote Service Forwarder.
-RSFs are also created during RSP processing.
-
-Example of Service Path:
-
- service-chain service-path 200
-    service-index 255 service-function firewall-1
-    service-index 254 service-function dpi-1
-    service-index 253 terminate
-
-==== Mapping to IOS-XE SFC entities
-The renderer contains mappers for SFs and SFFs. An IOS-XE capable device uses its
-own definition of Service Functions and Service Function Forwarders according to
-the appropriate .yang file.
-+ServiceFunctionListener+ serves as a listener for SF changes. If an SF appears in
-the datastore, the listener extracts its management IP address and looks into the cached
-IOS-XE nodes. If one of the available nodes matches, the Service Function is mapped
-in +IosXeServiceFunctionMapper+ into a form understandable by the IOS-XE device and
-written into the device's config.
-+ServiceForwarderListener+ is used in a similar way. Every SFF with a suitable
-management IP address is mapped in +IosXeServiceForwarderMapper+. Remapped SFFs
-are configured as Local Service Forwarders. It is not possible to directly create a
-Remote Service Forwarder using the IOS-XE renderer; an RSF is created only during RSP processing.
-
-==== Administering {SFCIOSXERNDR}
-To use the SFC IOS-XE renderer, at least the following Karaf
-features must be installed:
-
-* odl-aaa-shiro
-* odl-sfc-model
-* odl-sfc-provider
-* odl-restconf
-* odl-netconf-topology
-* odl-sfc-ios-xe-renderer
-
-==== {SFCIOSXERNDR} Tutorial
-
-===== Overview
-This tutorial is a simple example of how to create a Service Path on an IOS-XE capable
-device using the IOS-XE renderer.
-
-===== Preconditions
-To connect to an IOS-XE device, it is necessary to use several modified YANG models that
-override the device's own. All .yang files are in the +Yang/netconf+ folder of the
-+sfc-ios-xe-renderer+ module in the SFC project. These files have to be copied to the
-+cache/schema+ directory before Karaf is started. After that, custom capabilities have
-to be sent to network-topology:
-
-----
-PUT ./config/network-topology:network-topology/topology/topology-netconf/node/<device-name>
-
-<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
-  <node-id>device-name</node-id>
-  <host xmlns="urn:opendaylight:netconf-node-topology">device-ip</host>
-  <port xmlns="urn:opendaylight:netconf-node-topology">2022</port>
-  <username xmlns="urn:opendaylight:netconf-node-topology">login</username>
-  <password xmlns="urn:opendaylight:netconf-node-topology">password</password>
-  <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
-  <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">0</keepalive-delay>
-  <yang-module-capabilities xmlns="urn:opendaylight:netconf-node-topology">
-     <override>true</override>
-     <capability xmlns="urn:opendaylight:netconf-node-topology">
-        urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&amp;revision=2013-07-15
-     </capability>
-     <capability xmlns="urn:opendaylight:netconf-node-topology">
-        urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&amp;revision=2013-07-15
-     </capability>
-     <capability xmlns="urn:opendaylight:netconf-node-topology">
-        urn:ios?module=ned&amp;revision=2016-03-08
-     </capability>
-     <capability xmlns="urn:opendaylight:netconf-node-topology">
-        http://tail-f.com/yang/common?module=tailf-common&amp;revision=2015-05-22
-     </capability>
-     <capability xmlns="urn:opendaylight:netconf-node-topology">
-        http://tail-f.com/yang/common?module=tailf-meta-extensions&amp;revision=2013-11-07
-     </capability>
-     <capability xmlns="urn:opendaylight:netconf-node-topology">
-        http://tail-f.com/yang/common?module=tailf-cli-extensions&amp;revision=2015-03-19
-     </capability>
-  </yang-module-capabilities>
-</node>
-----
-
-NOTE: The device name in the URL and in the XML must match.
-
-===== Instructions
-When the IOS-XE renderer is installed, all NETCONF nodes in topology-netconf are
-processed and all capable nodes with accessible mountpoints are cached.
-The first step is to create an LSF on the node.
-
-+Service Function Forwarder configuration+
-
-----
-PUT ./config/service-function-forwarder:service-function-forwarders
-
-{
-    "service-function-forwarders": {
-        "service-function-forwarder": [
-            {
-                "name": "CSR1Kv-2",
-                "ip-mgmt-address": "172.25.73.23",
-                "sff-data-plane-locator": [
-                    {
-                        "name": "CSR1Kv-2-dpl",
-                        "data-plane-locator": {
-                            "transport": "service-locator:vxlan-gpe",
-                            "port": 6633,
-                            "ip": "10.99.150.10"
-                        }
-                    }
-                ]
-            }
-        ]
-    }
-}
-----
-
-If an IOS-XE node with the appropriate management IP exists, this configuration
-is mapped and an LSF is created on the device. The same approach is used for
-Service Functions.
-
-----
-PUT ./config/service-function:service-functions
-
-{
-    "service-functions": {
-        "service-function": [
-            {
-                "name": "Firewall",
-                "ip-mgmt-address": "172.25.73.23",
-                "type": "service-function-type: firewall",
-                "nsh-aware": true,
-                "sf-data-plane-locator": [
-                    {
-                        "name": "firewall-dpl",
-                        "port": 6633,
-                        "ip": "12.1.1.2",
-                        "transport": "service-locator:gre",
-                        "service-function-forwarder": "CSR1Kv-2"
-                    }
-                ]
-            },
-            {
-                "name": "Dpi",
-                "ip-mgmt-address": "172.25.73.23",
-                "type":"service-function-type: dpi",
-                "nsh-aware": true,
-                "sf-data-plane-locator": [
-                    {
-                        "name": "dpi-dpl",
-                        "port": 6633,
-                        "ip": "12.1.1.1",
-                        "transport": "service-locator:gre",
-                        "service-function-forwarder": "CSR1Kv-2"
-                    }
-                ]
-            },
-            {
-                "name": "Qos",
-                "ip-mgmt-address": "172.25.73.23",
-                "type":"service-function-type: qos",
-                "nsh-aware": true,
-                "sf-data-plane-locator": [
-                    {
-                        "name": "qos-dpl",
-                        "port": 6633,
-                        "ip": "12.1.1.4",
-                        "transport": "service-locator:gre",
-                        "service-function-forwarder": "CSR1Kv-2"
-                    }
-                ]
-            }
-        ]
-    }
-}
-----
-
-All these SFs are configured on the same device as the LSF. The next step is to
-prepare the Service Function Chain. This SFC is symmetric.
-
-----
-PUT ./config/service-function-chain:service-function-chains/
-
-{
-    "service-function-chains": {
-        "service-function-chain": [
-            {
-                "name": "CSR3XSF",
-                "symmetric": "true",
-                "sfc-service-function": [
-                    {
-                        "name": "Firewall",
-                        "type": "service-function-type: firewall"
-                    },
-                    {
-                        "name": "Dpi",
-                        "type": "service-function-type: dpi"
-                    },
-                    {
-                        "name": "Qos",
-                        "type": "service-function-type: qos"
-                    }
-                ]
-            }
-        ]
-    }
-}
-----
-
-Service Function Path:
-
-----
-PUT ./config/service-function-path:service-function-paths/
-
-{
-    "service-function-paths": {
-        "service-function-path": [
-            {
-                "name": "CSR3XSF-Path",
-                "service-chain-name": "CSR3XSF",
-                "starting-index": 255,
-                "symmetric": "true"
-            }
-        ]
-    }
-}
-----
-
-Without a classifier, it is possible to POST the RSP directly.
-
-----
-POST ./operations/rendered-service-path:create-rendered-path
-
-{
-  "input": {
-      "name": "CSR3XSF-Path-RSP",
-      "parent-service-function-path": "CSR3XSF-Path",
-      "symmetric": true
-  }
-}
-----
-
-The resulting configuration:
-
-----
-!
-service-chain service-function-forwarder local
-  ip address 10.99.150.10
-!
-service-chain service-function firewall
-ip address 12.1.1.2
-  encapsulation gre enhanced divert
-!
-service-chain service-function dpi
-ip address 12.1.1.1
-  encapsulation gre enhanced divert
-!
-service-chain service-function qos
-ip address 12.1.1.4
-  encapsulation gre enhanced divert
-!
-service-chain service-path 1
-  service-index 255 service-function firewall
-  service-index 254 service-function dpi
-  service-index 253 service-function qos
-  service-index 252 terminate
-!
-service-chain service-path 2
-  service-index 255 service-function qos
-  service-index 254 service-function dpi
-  service-index 253 service-function firewall
-  service-index 252 terminate
-!
-----
-
-Service Path 1 is direct, Service Path 2 is reversed. Path numbers may vary.
-
-:SFCIOSXERNDR!:
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-load-balance-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-load-balance-user.adoc
deleted file mode 100644 (file)
index 9727c2e..0000000
+++ /dev/null
@@ -1,92 +0,0 @@
-=== Service Function Load Balancing User Guide
-
-==== Overview
-The SFC Load-Balancing feature implements load balancing across Service Functions, rather than a one-to-one mapping between a Service Function Forwarder and a Service Function.
-
-==== Load Balancing Architecture
-Service Function Groups (SFG) can replace Service Functions (SF) in the Rendered Path model. 
-A Service Path can only be defined using SFGs or SFs, but not a combination of both.
-
-Relevant objects in the YANG model are as follows:
-
-1. Service-Function-Group-Algorithm:
-
-       Service-Function-Group-Algorithms {
-               Service-Function-Group-Algorithm {
-                       String name
-                       String type
-               }
-       }
-       
-       Available types: ALL, SELECT, INDIRECT, FAST_FAILURE
-
-2. Service-Function-Group:
-
-       Service-Function-Groups {
-               Service-Function-Group {
-                       String name
-                       String serviceFunctionGroupAlgorithmName
-                       String type
-                       String groupId
-                       Service-Function-Group-Element {
-                               String service-function-name
-                               int index
-                       }
-               }
-       }
-
-3. ServiceFunctionHop: holds a reference to a name of SFG (or SF)
-
-==== Tutorials
-This tutorial explains how to create a simple SFC configuration with an SFG instead of an SF. In this example, the SFG will include two existing SFs.
-
-===== Setup SFC
-For general SFC setup and scenarios, please see the SFC wiki page: https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101
-
-===== Create an algorithm
-POST - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
-----
-{
-    "service-function-group-algorithm": [
-      {
-        "name": "alg1",
-        "type": "ALL"
-      }
-   ]
-}
-----
-
-(Header "content-type": application/json)
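As a sketch, the same request can be issued from Python using only the standard library. The URL, credentials (assumed to be the default admin:admin), header, and payload mirror the example above; the send call is left commented since it needs a live controller.

```python
import base64
import json
import urllib.request

# Endpoint and body from the algorithm-creation example in this section.
url = ("http://127.0.0.1:8181/restconf/config/"
       "service-function-group-algorithm:service-function-group-algorithms")
payload = {
    "service-function-group-algorithm": [
        {"name": "alg1", "type": "ALL"}  # type: ALL, SELECT, INDIRECT or FAST_FAILURE
    ]
}

# Default ODL credentials assumed; adjust for your deployment.
auth = base64.b64encode(b"admin:admin").decode()
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Basic " + auth},
    method="POST",
)
# urllib.request.urlopen(req)  # would send the request to the controller
```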
-
-===== Verify: get all algorithms
-GET - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
-
-In order to delete all algorithms:
-DELETE - http://127.0.0.1:8181/restconf/config/service-function-group-algorithm:service-function-group-algorithms
-
-===== Create a group
-POST - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups 
-----
-{
-    "service-function-group": [
-    {
-        "rest-uri": "http://localhost:10002",
-        "ip-mgmt-address": "10.3.1.103",
-        "algorithm": "alg1",
-        "name": "SFG1",
-        "type": "service-function-type:napt44",
-               "sfc-service-function": [
-                       {
-                               "name":"napt44-104"
-                       }, 
-                       {
-                               "name":"napt44-103-1"
-                       }
-               ]
-      }
-       ]
-}
-----
-
-===== Verify: get all SFGs
-GET - http://127.0.0.1:8181/restconf/config/service-function-group:service-function-groups 
\ No newline at end of file
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-openflow-renderer-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-openflow-renderer-user.adoc
deleted file mode 100644 (file)
index 4e2e661..0000000
+++ /dev/null
@@ -1,787 +0,0 @@
-=== SFC OpenFlow Renderer User Guide
-
-:SFCOFRNDR: SFC OF Renderer
-
-==== Overview
-The Service Function Chaining (SFC) OpenFlow Renderer ({SFCOFRNDR})
-implements Service Chaining on OpenFlow switches. It listens for the
-creation of a Rendered Service Path (RSP), and once received it programs
-Service Function Forwarders (SFF) that are hosted on OpenFlow capable
-switches to steer packets through the service chain.
-
-Common acronyms used in the following sections:
-
-* SF - Service Function
-* SFF - Service Function Forwarder
-* SFC - Service Function Chain
-* SFP - Service Function Path
-* RSP - Rendered Service Path
-
-==== SFC OpenFlow Renderer Architecture
-The {SFCOFRNDR} is invoked after an RSP is created using an MD-SAL listener
-called +SfcOfRspDataListener+. Upon {SFCOFRNDR} initialization, the
-+SfcOfRspDataListener+ registers itself to listen for RSP changes.
-When invoked, the +SfcOfRspDataListener+ processes the RSP and calls
-the +SfcOfFlowProgrammerImpl+ to create the necessary flows in the
-Service Function Forwarders configured in the RSP. Refer to the
-following diagram for more details.
-
-.SFC OpenFlow Renderer High Level Architecture
-image::sfc/sfcofrenderer_architecture.png["SFC OpenFlow Renderer High Level Architecture",width=500]
-
-==== SFC OpenFlow Switch Flow pipeline
-The SFC OpenFlow Renderer uses the following tables for its Flow pipeline:
-
-* Table 0, Classifier
-* Table 1, Transport Ingress
-* Table 2, Path Mapper
-* Table 3, Path Mapper ACL
-* Table 4, Next Hop
-* Table 10, Transport Egress
-
-The OpenFlow Table Pipeline is intended to be generic to work for
-all of the different encapsulations supported by SFC.
-
-All of the tables are explained in detail in the following section.
-
-The SFFs (SFF1 and SFF2), SFs (SF1), and topology used for the flow
-tables in the following sections are as described in the following
-diagram.
-
-.SFC OpenFlow Renderer Typical Network Topology
-image::sfc/sfcofrenderer_nwtopo.png["SFC OpenFlow Renderer Typical Network Topology",width=500]
-
-===== Classifier Table detailed
-
-It is possible for the SFF to also act as a classifier. This table maps subscriber
-traffic to RSPs, and is explained in detail in the classifier documentation.
-
-If the SFF is not a classifier, then this table will just have a simple Goto
-Table 1 flow.
-
-===== Transport Ingress Table detailed
-
-The Transport Ingress table has an entry per expected tunnel transport
-type to be received in a particular SFF, as established in the SFC
-configuration.
-
-Here are two examples on SFF1: one where the RSP ingress tunnel is MPLS (assuming
-VLAN is used for the SFF-SF link), and the other where the RSP ingress tunnel is
-NSH over VXLAN-GPE (UDP port 4789):
-
-.Table Transport Ingress
-[cols="1,3,2"]
-|===
-|Priority |Match | Action
-
-|256
-|EtherType==0x8847 (MPLS unicast)
-|Goto Table 2
-
-|256
-|EtherType==0x8100 (VLAN)
-|Goto Table 2
-
-|256
-|EtherType==0x0800,udp,tp_dst==4789 (IPv4)
-|Goto Table 2
-
-|5
-|Match Any
-|Drop
-|===
-
-===== Path Mapper Table detailed
-The Path Mapper table has an entry per expected tunnel transport info
-to be received in a particular SFF, as established in the SFC
-configuration. The tunnel transport info is used to determine the
-RSP Path ID, and is stored in the OpenFlow Metadata. This table is not
-used for NSH, since the RSP Path ID is stored in the NSH header. 
-
-For SF nodes that do not support NSH tunneling, the IP header DSCP field is
-used to store the RSP Path Id. The RSP Path Id is written to the DSCP
-field in the Transport Egress table for those packets sent to an SF.
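As a rough illustration of that DSCP encoding: in the standard IPv4 header layout the DSCP field is the upper 6 bits of the ToS byte, so the RSP Path Id must fit in 0..63. The helpers below are hypothetical, for illustration only, and are not the renderer's code.

```python
# Hypothetical sketch of carrying the RSP Path Id in the IPv4 DSCP field
# (DSCP = upper 6 bits of the ToS byte, per the DiffServ field layout).

def path_id_to_tos(path_id):
    """Encode an RSP Path Id into a ToS byte value."""
    if not 0 <= path_id <= 0x3F:
        raise ValueError("DSCP can only carry 6 bits (path ids 0..63)")
    return path_id << 2  # DSCP occupies bits 7..2 of the ToS byte

def tos_to_path_id(tos):
    """Recover the RSP Path Id from a ToS byte value."""
    return (tos >> 2) & 0x3F
```

This also shows why the DSCP approach limits how many concurrent non-NSH paths a deployment can distinguish.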
-
-Here is an example on SFF1, assuming the following details:
-
-* VLAN ID 1000 is used for the SFF-SF
-* The RSP Path 1 tunnel uses MPLS label 100 for ingress and 101 for egress
-* The RSP Path 2 (symmetric downlink path) uses MPLS label 101 for ingress and 100 for egress
-
-.Table Path Mapper
-[width=60%]
-|===
-|Priority |Match | Action
-
-|256
-|MPLS Label==100
-|RSP Path=1, Pop MPLS, Goto Table 4
-
-|256
-|MPLS Label==101
-|RSP Path=2, Pop MPLS, Goto Table 4
-
-|256
-|VLAN ID==1000, IP DSCP==1
-|RSP Path=1, Pop VLAN, Goto Table 4
-
-|256
-|VLAN ID==1000, IP DSCP==2
-|RSP Path=2, Pop VLAN, Goto Table 4
-
-|5
-|Match Any
-|Goto Table 3
-|===
-
-===== Path Mapper ACL Table detailed
-This table is only populated when PacketIn packets are received from the switch
-for TcpProxy type SFs. These flows are created with an inactivity timer of 60
-seconds and will be automatically deleted upon expiration.
-
-===== Next Hop Table detailed
-The Next Hop table uses the RSP Path Id and appropriate packet fields to
-determine where to send the packet next. For NSH, only the NSP (Network
-Services Path, RSP ID) and NSI (Network Services Index, next hop) fields
-from the NSH header are needed to determine the VXLAN tunnel destination
-IP. For VLAN or MPLS, the source MAC address is used to determine
-the destination MAC address.
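The keying logic just described can be sketched as follows. This is a hypothetical illustration, not the renderer's implementation; the packet field names are assumptions.

```python
# Illustrative sketch: the Next Hop lookup keys differ by transport.
# NSH packets are keyed on (nsp, nsi); VLAN/MPLS packets on (path_id, src_mac).

def next_hop_key(pkt):
    if pkt.get("nsh"):
        # NSP identifies the RSP, NSI identifies the hop within it
        return ("nsh", pkt["nsp"], pkt["nsi"])
    # For VLAN/MPLS, the RSP Path Id was stored in metadata by the Path Mapper
    return ("l2", pkt["path_id"], pkt.get("src_mac"))
```

The returned tuple would then index a table whose values are the actions shown above (set the VXLAN tunnel destination IP for NSH, or rewrite the destination MAC for VLAN/MPLS).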
-
-Here are two examples on SFF1, assuming SFF1 is connected to SFF2. RSP Paths 1
-and 2 are symmetric VLAN paths. RSP Paths 3 and 4 are symmetric NSH paths.
-RSP Path 1 ingress packets come from external to SFC, for which we don’t have
-the source MAC address (MacSrc).
-
-.Table Next Hop
-[cols="1,3,3"]
-|===
-|Priority |Match | Action
-
-|256
-|RSP Path==1, MacSrc==SF1
-|MacDst=SFF2, Goto Table 10
-
-|256
-|RSP Path==2, MacSrc==SF1
-|Goto Table 10
-
-|256
-|RSP Path==2, MacSrc==SFF2
-|MacDst=SF1, Goto Table 10
-
-|246
-|RSP Path==1
-|MacDst=SF1, Goto Table 10
-
-|256
-|nsp=3,nsi=255  (SFF Ingress RSP 3)
-|load:0xa000002->NXM_NX_TUN_IPV4_DST[], Goto Table 10
-
-|256
-|nsp=3,nsi=254  (SFF Ingress from SF, RSP 3)
-|load:0xa00000a->NXM_NX_TUN_IPV4_DST[], Goto Table 10
-
-|256
-|nsp=4,nsi=254  (SFF1 Ingress from SFF2)
-|load:0xa00000a->NXM_NX_TUN_IPV4_DST[], Goto Table 10
-
-|5
-|Match Any
-|Drop
-|===
-
-===== Transport Egress Table detailed
-The Transport Egress table prepares egress tunnel information and
-sends the packets out.
-
-Here are two examples on SFF1. RSP Paths 1 and 2 are symmetric MPLS paths that
-use VLAN for the SFF-SF. RSP Paths 3 and 4 are symmetric NSH paths. Since it is
-assumed that switches used for NSH will only have one VXLAN port, the NSH
-packets are just sent back where they came from.
-
-.Table Transport Egress
-[cols="1,3,3"]
-|===
-|Priority |Match | Action
-
-|256
-|RSP Path==1, MacDst==SF1
-|Push VLAN ID 1000, Port=SF1
-
-|256
-|RSP Path==1, MacDst==SFF2
-|Push MPLS Label 101, Port=SFF2
-
-|256
-|RSP Path==2, MacDst==SF1
-|Push VLAN ID 1000, Port=SF1
-
-|246
-|RSP Path==2
-|Push MPLS Label 100, Port=Ingress
-
-|256
-|nsp=3,nsi=255  (SFF Ingress RSP 3)
-|IN_PORT
-
-|256
-|nsp=3,nsi=254  (SFF Ingress from SF, RSP 3)
-|IN_PORT
-
-|256
-|nsp=4,nsi=254  (SFF1 Ingress from SFF2)
-|IN_PORT
-
-|5
-|Match Any
-|Drop
-|===
-
-==== Administering {SFCOFRNDR}
-To use the SFC OpenFlow Renderer, at least the following Karaf
-features must be installed:
-
-* odl-openflowplugin-nxm-extensions
-* odl-openflowplugin-flow-services
-* odl-sfc-provider
-* odl-sfc-model
-* odl-sfc-openflow-renderer
-* odl-sfc-ui (optional)
-
-The following command can be used to view all of the currently installed Karaf features:
-
- opendaylight-user@root>feature:list -i
-
-Or, pipe the command to a grep to see a subset of the currently installed Karaf features:
-
- opendaylight-user@root>feature:list -i | grep sfc
-
-To install a particular feature, use the Karaf `feature:install` command.
-
-==== {SFCOFRNDR} Tutorial
-
-===== Overview
-In this tutorial, 2 different encapsulations will be shown: MPLS and NSH. The
-following Network Topology diagram is a logical view of the SFFs and SFs involved
-in creating the Service Chains.
-
-.SFC OpenFlow Renderer Typical Network Topology
-image::sfc/sfcofrenderer_nwtopo.png["SFC OpenFlow Renderer Typical Network Topology",width=500]
-
-===== Prerequisites
-To use this example, SFF OpenFlow switches must be created and
-connected as illustrated above. Additionally, the SFs must be
-created and connected.
-
-===== Target Environment
-The target environment is not important, but this use-case was created
-and tested on Linux.
-
-===== Instructions
-The steps to use this tutorial are as follows. The referenced
-configuration in the steps is listed in the following sections.
-
-There are numerous ways to send the configuration. In the following
-configuration chapters, the appropriate `curl` command is shown for
-each configuration to be sent, including the URL.
-
-Steps to configure the {SFCOFRNDR} tutorial:
-
-. Send the `SF` RESTCONF configuration
-. Send the `SFF` RESTCONF configuration
-. Send the `SFC` RESTCONF configuration
-. Send the `SFP` RESTCONF configuration
-. Create the `RSP` with a RESTCONF RPC command
-
-Once the configuration has been successfully created, query the
-Rendered Service Paths with either the SFC UI or via RESTCONF.
-Notice that the RSP is symmetrical, so the following 2 RSPs will
-be created:
-
-* sfc-path1
-* sfc-path1-Reverse
-
-At this point the Service Chains have been created, and the OpenFlow
-Switches are programmed to steer traffic through the Service Chain.
-Traffic can now be injected from a client into the Service Chain.
-To debug problems, the OpenFlow tables can be dumped with the following
-commands, assuming SFF1 is called `s1` and SFF2 is called `s2`.
-
- sudo ovs-ofctl -O OpenFlow13  dump-flows s1
-
- sudo ovs-ofctl -O OpenFlow13  dump-flows s2
-
-In all the following configuration sections, replace the `${JSON}`
-string with the appropriate JSON configuration. Also, change the
-`localhost` destination in the URL accordingly.
-
-====== {SFCOFRNDR} NSH Tutorial
-
-The following configuration sections show how to create the different elements
-using NSH encapsulation.
-
-*NSH Service Function configuration* +
-
-The Service Function configuration can be sent with the following command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
-
-.SF configuration JSON
-----
-{
- "service-functions": {
-   "service-function": [
-     {
-       "name": "sf1",
-       "type": "http-header-enrichment",
-       "nsh-aware": true,
-       "ip-mgmt-address": "10.0.0.2",
-       "sf-data-plane-locator": [
-         {
-           "name": "sf1dpl",
-           "ip": "10.0.0.10",
-           "port": 4789,
-           "transport": "service-locator:vxlan-gpe",
-           "service-function-forwarder": "sff1"
-         }
-       ]
-     },
-     {
-       "name": "sf2",
-       "type": "firewall",
-       "nsh-aware": true,
-       "ip-mgmt-address": "10.0.0.3",
-       "sf-data-plane-locator": [
-         {
-           "name": "sf2dpl",
-            "ip": "10.0.0.20",
-            "port": 4789,
-            "transport": "service-locator:vxlan-gpe",
-           "service-function-forwarder": "sff2"
-         }
-       ]
-     }
-   ]
- }
-}
-----
-
-*NSH Service Function Forwarder configuration* +
-
-The Service Function Forwarder configuration can be sent with the
-following command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
-
-.SFF configuration JSON
-----
-{
- "service-function-forwarders": {
-   "service-function-forwarder": [
-     {
-       "name": "sff1",
-       "service-node": "openflow:2",
-       "sff-data-plane-locator": [
-         {
-           "name": "sff1dpl",
-           "data-plane-locator":
-           {
-               "ip": "10.0.0.1",
-               "port": 4789,
-               "transport": "service-locator:vxlan-gpe"
-           }
-         }
-       ],
-       "service-function-dictionary": [
-         {
-           "name": "sf1",
-           "sff-sf-data-plane-locator":
-           {
-               "sf-dpl-name": "sf1dpl",
-               "sff-dpl-name": "sff1dpl"
-           }
-         }
-       ]
-     },
-     {
-       "name": "sff2",
-       "service-node": "openflow:3",
-       "sff-data-plane-locator": [
-         {
-           "name": "sff2dpl",
-           "data-plane-locator":
-           {
-               "ip": "10.0.0.2",
-               "port": 4789,
-               "transport": "service-locator:vxlan-gpe"
-           }
-         }
-       ],
-       "service-function-dictionary": [
-         {
-           "name": "sf2",
-           "sff-sf-data-plane-locator":
-           {
-               "sf-dpl-name": "sf2dpl",
-               "sff-dpl-name": "sff2dpl"
-           }
-         }
-       ]
-     }
-   ]
- }
-}
-----
-
-*NSH Service Function Chain configuration* +
-
-The Service Function Chain configuration can be sent with the following command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
-
-.SFC configuration JSON
-----
-{
- "service-function-chains": {
-   "service-function-chain": [
-     {
-       "name": "sfc-chain1",
-       "symmetric": true,
-       "sfc-service-function": [
-         {
-           "name": "hdr-enrich-abstract1",
-           "type": "http-header-enrichment"
-         },
-         {
-           "name": "firewall-abstract1",
-           "type": "firewall"
-         }
-       ]
-     }
-   ]
- }
-}
-----
-
-*NSH Service Function Path configuration* +
-
-The Service Function Path configuration can be sent with the following command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
-
-.SFP configuration JSON
-----
-{
-  "service-function-paths": {
-    "service-function-path": [
-      {
-        "name": "sfc-path1",
-        "service-chain-name": "sfc-chain1",
-        "transport-type": "service-locator:vxlan-gpe",
-        "symmetric": true
-      }
-    ]
-  }
-}
-----
-
-*NSH Rendered Service Path creation* +
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path/
-
-.RSP creation JSON
-----
-{
- "input": {
-     "name": "sfc-path1",
-     "parent-service-function-path": "sfc-path1",
-     "symmetric": true
- }
-}
-----
-
-*NSH Rendered Service Path removal* +
-
-The following command can be used to remove a Rendered Service Path
-called `sfc-path1`:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{"input": {"name": "sfc-path1" } }' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:delete-rendered-path/
-
-*NSH Rendered Service Path Query* +
-
-The following command can be used to query all of the created Rendered Service Paths:
-
- curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
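The query above returns a `rendered-service-paths` document. The following sketch shows one way to walk such a document and list each path's hops in order. The sample payload is hypothetical, modeled on the structures used in this tutorial, not captured from a live controller.

```python
# Hypothetical sample of a rendered-service-paths RESTCONF response;
# field names follow the rendered-service-path model used above.
sample = {
    "rendered-service-paths": {
        "rendered-service-path": [
            {
                "name": "sfc-path1-Path-1",
                "rendered-service-path-hop": [
                    {"hop-number": 0, "service-function-name": "sf1",
                     "service-function-forwarder": "sff1"},
                    {"hop-number": 1, "service-function-name": "sf2",
                     "service-function-forwarder": "sff2"},
                ],
            }
        ]
    }
}

def list_hops(doc):
    """Map each RSP name to its ordered list of service function names."""
    paths = doc["rendered-service-paths"]["rendered-service-path"]
    return {
        p["name"]: [h["service-function-name"]
                    for h in sorted(p["rendered-service-path-hop"],
                                    key=lambda h: h["hop-number"])]
        for p in paths
    }

assert list_hops(sample) == {"sfc-path1-Path-1": ["sf1", "sf2"]}
```

In practice the JSON would come from the GET request shown above rather than an inline literal.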
-
-
-====== {SFCOFRNDR} MPLS Tutorial
-
-The following configuration sections show how to create the different elements
-using MPLS encapsulation.
-
-*MPLS Service Function configuration* +
-
-The Service Function configuration can be sent with the following command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function:service-functions/
-
-.SF configuration JSON
-----
-{
- "service-functions": {
-   "service-function": [
-     {
-       "name": "sf1",
-       "type": "http-header-enrichment",
-       "nsh-aware": false,
-       "ip-mgmt-address": "10.0.0.2",
-       "sf-data-plane-locator": [
-         {
-           "name": "sf1-sff1",
-           "mac": "00:00:08:01:02:01",
-           "vlan-id": 1000,
-           "transport": "service-locator:mac",
-           "service-function-forwarder": "sff1"
-         }
-       ]
-     },
-     {
-       "name": "sf2",
-       "type": "firewall",
-       "nsh-aware": false,
-       "ip-mgmt-address": "10.0.0.3",
-       "sf-data-plane-locator": [
-         {
-           "name": "sf2-sff2",
-           "mac": "00:00:08:01:03:01",
-           "vlan-id": 2000,
-           "transport": "service-locator:mac",
-           "service-function-forwarder": "sff2"
-         }
-       ]
-     }
-   ]
- }
-}
-----
-
-*MPLS Service Function Forwarder configuration* +
-
-The Service Function Forwarder configuration can be sent with the
-following command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-forwarder:service-function-forwarders/
-
-.SFF configuration JSON
-----
-{
- "service-function-forwarders": {
-   "service-function-forwarder": [
-     {
-       "name": "sff1",
-       "service-node": "openflow:2",
-       "sff-data-plane-locator": [
-         {
-           "name": "ulSff1Ingress",
-           "data-plane-locator":
-           {
-               "mpls-label": 100,
-               "transport": "service-locator:mpls"
-           },
-           "service-function-forwarder-ofs:ofs-port":
-           {
-               "mac": "11:11:11:11:11:11",
-               "port-id" : "1"
-           }
-         },
-         {
-           "name": "ulSff1ToSff2",
-           "data-plane-locator":
-           {
-               "mpls-label": 101,
-               "transport": "service-locator:mpls"
-           },
-           "service-function-forwarder-ofs:ofs-port":
-           {
-               "mac": "33:33:33:33:33:33",
-               "port-id" : "2"
-           }
-         },
-         {
-           "name": "toSf1",
-           "data-plane-locator":
-           {
-               "mac": "22:22:22:22:22:22",
-               "vlan-id": 1000,
-               "transport": "service-locator:mac"
-           },
-           "service-function-forwarder-ofs:ofs-port":
-           {
-               "mac": "33:33:33:33:33:33",
-               "port-id" : "3"
-           }
-         }
-       ],
-       "service-function-dictionary": [
-         {
-           "name": "sf1",
-           "sff-sf-data-plane-locator":
-           {
-               "sf-dpl-name": "sf1-sff1",
-               "sff-dpl-name": "toSf1"
-           }
-         }
-       ]
-     },
-     {
-       "name": "sff2",
-       "service-node": "openflow:3",
-       "sff-data-plane-locator": [
-         {
-           "name": "ulSff2Ingress",
-           "data-plane-locator":
-           {
-               "mpls-label": 101,
-               "transport": "service-locator:mpls"
-           },
-           "service-function-forwarder-ofs:ofs-port":
-           {
-               "mac": "44:44:44:44:44:44",
-               "port-id" : "1"
-           }
-         },
-         {
-           "name": "ulSff2Egress",
-           "data-plane-locator":
-           {
-               "mpls-label": 102,
-               "transport": "service-locator:mpls"
-           },
-           "service-function-forwarder-ofs:ofs-port":
-           {
-               "mac": "66:66:66:66:66:66",
-               "port-id" : "2"
-           }
-         },
-         {
-           "name": "toSf2",
-           "data-plane-locator":
-           {
-               "mac": "55:55:55:55:55:55",
-               "vlan-id": 2000,
-               "transport": "service-locator:mac"
-           },
-           "service-function-forwarder-ofs:ofs-port":
-           {
-               "port-id" : "3"
-           }
-         }
-       ],
-       "service-function-dictionary": [
-         {
-           "name": "sf2",
-           "sff-sf-data-plane-locator":
-           {
-               "sf-dpl-name": "sf2-sff2",
-               "sff-dpl-name": "toSf2"
-           }
-         }
-       ]
-     }
-   ]
- }
-}
-----
-
-*MPLS Service Function Chain configuration* +
-
-The Service Function Chain configuration can be sent with the
-following command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-chain:service-function-chains/
-
-.SFC configuration JSON
-----
-{
- "service-function-chains": {
-   "service-function-chain": [
-     {
-       "name": "sfc-chain1",
-       "symmetric": true,
-       "sfc-service-function": [
-         {
-           "name": "hdr-enrich-abstract1",
-           "type": "http-header-enrichment"
-         },
-         {
-           "name": "firewall-abstract1",
-           "type": "firewall"
-         }
-       ]
-     }
-   ]
- }
-}
-----
-
-*MPLS Service Function Path configuration* +
-
-The Service Function Path configuration can be sent with the following
-command:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-path:service-function-paths/
-
-.SFP configuration JSON
-----
-{
-  "service-function-paths": {
-    "service-function-path": [
-      {
-        "name": "sfc-path1",
-        "service-chain-name": "sfc-chain1",
-        "transport-type": "service-locator:mpls",
-        "symmetric": true
-      }
-    ]
-  }
-}
-----
-
-*MPLS Rendered Service Path creation* +
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${JSON}' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:create-rendered-path/
-
-.RSP creation JSON
-----
-{
- "input": {
-     "name": "sfc-path1",
-     "parent-service-function-path": "sfc-path1",
-     "symmetric": true
- }
-}
-----
-
-*MPLS Rendered Service Path removal* +
-
-The following command can be used to remove a Rendered Service Path
-called `sfc-path1`:
-
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '{"input": {"name": "sfc-path1" } }' -X POST --user admin:admin http://localhost:8181/restconf/operations/rendered-service-path:delete-rendered-path/
-
-*MPLS Rendered Service Path Query* +
-
-The following command can be used to query all of the created Rendered Service Paths:
-
- curl -H "Content-Type: application/json" -H "Cache-Control: no-cache" -X GET --user admin:admin http://localhost:8181/restconf/operational/rendered-service-path:rendered-service-paths/
-
-
-
-
-:SFCOFRNDR!:
-
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-ovs-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-ovs-user.adoc
deleted file mode 100644 (file)
index 5bc6f45..0000000
+++ /dev/null
@@ -1,93 +0,0 @@
-=== SFC-OVS integration
-
-==== Overview
-SFC-OVS provides integration of SFC with Open vSwitch (OVS) devices.
-Integration is realized through mapping of SFC objects (like SF, SFF,
-Classifier, etc.) to OVS objects (like Bridge, TerminationPoint=Port/Interface).
-The mapping takes care of automatically instantiating (setting up) the
-corresponding object whenever its counterpart is created. For example, when a new SFF is created,
-the SFC-OVS plugin will create a new OVS bridge and when a new OVS Bridge is
-created, the SFC-OVS plugin will create a new SFF.
-
-The feature is intended for SFC users willing to use Open vSwitch as the
-underlying network infrastructure for deploying RSPs (Rendered Service Paths).
-
-==== SFC-OVS Architecture
-SFC-OVS uses the OVSDB MD-SAL Southbound API for getting/writing information
-from/to OVS devices. From the user's perspective, SFC-OVS acts as a layer
-between the SFC DataStore and OVSDB.
-
-.SFC-OVS integration into ODL
-image::sfc/sfc-ovs-architecture-user.png[width=250]
-
-==== Configuring SFC-OVS
-.Configuration steps:
-. Run ODL distribution (run karaf)
-. In karaf console execute: +feature:install odl-sfc-ovs+
-. Configure Open vSwitch to use ODL as a manager, using the following command:
-+ovs-vsctl set-manager tcp:<odl_ip_address>:6640+
-
-==== Tutorials
-
-===== Verifying mapping from OVS to SFF
-
-====== Overview
-This tutorial shows the usual workflow when OVS configuration is transformed
-into the corresponding SFC objects (in this case an SFF).
-
-====== Prerequisites
-* Open vSwitch installed (ovs-vsctl command available in shell)
-* SFC-OVS feature configured as stated above
-
-====== Instructions
-.In shell execute:
-. +ovs-vsctl set-manager tcp:<odl_ip_address>:6640+
-. +ovs-vsctl add-br br1+
-. +ovs-vsctl add-port br1 testPort+
-
-====== Verification
-.There are two possible ways to verify whether the SFF was created:
-a. visit SFC User Interface:
-+http://<odl_ip_address>:8181/sfc/index.html#/sfc/serviceforwarder+
-b. use pure RESTCONF and send GET request to URL:
-+http://<odl_ip_address>:8181/restconf/config/service-function-forwarder:service-function-forwarders+
-
-There should be an SFF whose name ends with 'br1', and the SFF should
-contain two DataPlane locators: 'br1' and 'testPort'.
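The RESTCONF check above can also be scripted. The following sketch finds the SFF whose name ends with the bridge name and confirms its two expected data plane locators; the sample payload is hypothetical, not a real controller response.

```python
# Hypothetical RESTCONF response; the SFF name format is illustrative.
sample_response = {
    "service-function-forwarders": {
        "service-function-forwarder": [
            {
                "name": "ovsdb://some-node/bridge/br1",
                "sff-data-plane-locator": [
                    {"name": "br1"},
                    {"name": "testPort"},
                ],
            }
        ]
    }
}

def find_sff_for_bridge(response, bridge_name):
    """Return the first SFF whose name ends with the given bridge name."""
    sffs = response["service-function-forwarders"]["service-function-forwarder"]
    for sff in sffs:
        if sff["name"].endswith(bridge_name):
            return sff
    return None

sff = find_sff_for_bridge(sample_response, "br1")
locators = {dpl["name"] for dpl in sff["sff-data-plane-locator"]}
assert locators == {"br1", "testPort"}
```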
-
-===== Verifying mapping from SFF to OVS
-
-====== Overview
-This tutorial shows the usual workflow during the creation of an OVS Bridge
-using the SFC APIs.
-
-====== Prerequisites
-* Open vSwitch installed (ovs-vsctl command available in shell)
-* SFC-OVS feature configured as stated above
-
-====== Instructions
-.Steps:
-. In shell execute: +ovs-vsctl set-manager tcp:<odl_ip_address>:6640+
-. Send POST request to URL:
-+http://<odl_ip_address>:8181/restconf/operations/service-function-forwarder-ovs:create-ovs-bridge+
-Use Basic auth with credentials: "admin", "admin" and set +Content-Type: application/json+.
-The content of the POST request should be the following:
-----
-{
-    "input":
-    {
-        "name": "br-test",
-        "ovs-node": {
-            "ip": "<Open_vSwitch_ip_address>"
-        }
-    }
-}
-----
-Open_vSwitch_ip_address is the IP address of the machine where Open vSwitch is installed.
-
-====== Verification
-In shell execute: +ovs-vsctl show+. There should be a Bridge named 'br-test'
-and one port/interface called 'br-test'.
-
-Also, the corresponding SFF for this OVS Bridge should be configured, which
-can be verified through the SFC User Interface or RESTCONF as stated in the
-previous tutorial.
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-pot-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-pot-user.adoc
deleted file mode 100644 (file)
index d1fafc8..0000000
+++ /dev/null
@@ -1,145 +0,0 @@
-=== SFC Proof of Transit User Guide
-
-:SFCPOT: SFC Proof of Transit
-
-==== Overview
-Early Service Function Chaining (SFC) Proof of Transit ({SFCPOT})
-implements Service Chaining Proof of Transit functionality on
-capable switches.  After the creation of a Rendered Service
-Path (RSP), a user can enable SFC Proof of Transit
-on the selected RSP to effect the proof of transit.
-
-Common acronyms used in the following sections:
-
-* SF - Service Function
-* SFF - Service Function Forwarder
-* SFC - Service Function Chain
-* SFP - Service Function Path
-* RSP - Rendered Service Path
-* SFCPOT - Service Function Chain Proof of Transit
-
-==== SFC Proof of Transit Architecture
-When {SFCPOT} is initialized, all required listeners are registered
-to handle incoming data. It involves +SfcPotNodeListener+ which stores
-data about all node devices including their mountpoints (used here
-as data brokers), +SfcPotRSPDataListener+, and +RenderedPathListener+.
-+RenderedPathListener+ is used to listen for RSP changes.
-+SfcPotRSPDataListener+ implements RPC services to enable or disable
-SFC Proof of Transit on a particular RSP.  When the {SFCPOT} is invoked,
-RSP listeners and service implementations are setup to receive SFCPOT
-configurations.  When a user configures via a POST RPC call to enable
-SFCPOT on a particular RSP, the configuration drives the creation of
-necessary augmentations to the RSP to effect the SFCPOT configurations.
-
-==== SFC Proof of Transit details
-Several deployments use traffic engineering, policy routing,
-segment routing or service function chaining (SFC) to steer packets
-through a specific set of nodes. In certain cases regulatory obligations
-or a compliance policy require proof that all packets that are
-supposed to follow a specific path are indeed being forwarded across
-the exact set of nodes specified. That is, if a packet flow is supposed to
-go through a series of service functions or network nodes, it has to
-be proven that all packets of the flow actually went through the
-service chain or collection of nodes specified by the policy.
-In case the packets of a flow weren't appropriately processed, a
-proof of transit egress device would be required to identify the policy
-violation and take the corresponding actions (e.g. drop or redirect the
-packet, send an alert, etc.).
-
-The SFCPOT approach is based on meta-data which is added to every packet.
-The meta-data is updated at every hop and is used to verify whether
-a packet traversed all required nodes. A particular path is either
-described by a set of secret keys, or a set of shares of a single
-secret. Nodes on the path retrieve their individual keys or shares
-of a key (using, e.g., Shamir's Secret Sharing scheme) from
-a central controller. The complete key set is only known to the
-verifier, which is typically the ultimate node on a path that
-requires proof of transit. Each node in the path uses its secret or share
-of the secret to update the meta-data of the packets as the packets
-pass through the node. When the verifier receives a packet, it can use
-its key(s) along with the meta-data to validate whether the packet
-traversed the service chain correctly.
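The metadata-update idea can be illustrated with a toy sketch. Note this uses a simplified additive secret-sharing instead of the full Shamir polynomial scheme described above, and all values are invented; it is not the actual SFCPOT implementation.

```python
import random

random.seed(42)        # deterministic for the example

P = 2**31 - 1          # a public prime modulus (illustrative)
SECRET = 123456789     # known only to the controller and the verifier

def make_shares(secret, n, prime):
    """Split the secret into n additive shares mod prime."""
    shares = [random.randrange(prime) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % prime)
    return shares

def hop_update(metadata, share, prime):
    """Each service hop folds its share into the packet metadata."""
    return (metadata + share) % prime

shares = make_shares(SECRET, 3, P)

# Packet traverses all three hops: the verifier accepts.
meta = 0
for s in shares:
    meta = hop_update(meta, s, P)
assert meta == SECRET

# Packet skips the middle hop: verification fails.
meta_bad = hop_update(hop_update(0, shares[0], P), shares[2], P)
assert meta_bad != SECRET
```

The real scheme additionally rotates keys and binds the metadata to the packet, but the accept/reject logic at the verifier follows the same principle.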
-
-==== SFC Proof of Transit entities
-In order to implement SFC Proof of Transit for a service function chain,
-an existing RSP is a prerequisite, since it identifies the chain on which
-SFC PoT is enabled.  SFC Proof of Transit for a particular RSP is enabled by an RPC request
-to the controller along with necessary parameters to control some of the
-aspects of the SFC Proof of Transit process.
-
-The RPC handler identifies the RSP and generates SFC Proof of Transit
-parameters like secret share, secret etc., and adds the generated SFCPOT
-configuration parameters to SFC main as well as the various SFC hops.
-The last node in the SFC is configured as a verifier node to allow the
-SFCPOT Proof of Transit process to be completed.
-
-The SFCPOT configuration generators and related handling are done by
-+SfcPotAPI+, +SfcPotConfigGenerator+,
-+SfcPotListener+, +SfcPotPolyAPI+,
-+SfcPotPolyClassAPI+ and +SfcPotPolyClass+.
-
-==== Administering {SFCPOT}
-To use SFC Proof of Transit, at least the following Karaf
-features must be installed:
-
-* odl-sfc-model
-* odl-sfc-provider
-* odl-sfc-netconf
-* odl-restconf
-* odl-netconf-topology
-* odl-netconf-connector-all
-* odl-sfc-pot
-
-==== {SFCPOT} Tutorial
-
-===== Overview
-This tutorial is a simple example of how to configure Service Function
-Chain Proof of Transit using the SFC POT feature.
-
-===== Preconditions
-To enable a device to handle SFC Proof of Transit, the NETCONF server on
-the device is expected to advertise the capability defined in the
-ioam-scv.yang file present under the src/main/yang folder of the
-sfc-pot feature.  It is also expected that NETCONF notifications are
-enabled and advertised among the device's capabilities.
-
-It is also expected that the devices are netconf mounted and available in the
-topology-netconf store.
-
-===== Instructions
-When SFC Proof of Transit is installed, all netconf nodes in topology-netconf are
-processed and all capable nodes with accessible mountpoints are cached.
-
-The first step is to create the required RSP in the usual way.
-
-Once the RSP name is available, it is used to send a POST RPC to the controller similar to
-the one below:
-
-----
-
-POST ./restconf/operations/sfc-ioam-nb-pot:enable-sfc-ioam-pot-rendered-path
-
-{
-  "input": {
-    "sfc-ioam-pot-rsp-name": "rsp1"
-  }
-}
-
-----
-
-The following can be used to disable SFC Proof of Transit on an RSP. It
-removes the augmentations, stores back the RSP without the SFCPOT-enabled
-features, and also sends down a delete configuration to the SFCPOT
-configuration sub-tree in the nodes.
-
-----
-
-POST ./restconf/operations/sfc-ioam-nb-pot:disable-sfc-ioam-pot-rendered-path
-
-{
-  "input": {
-    "sfc-ioam-pot-rsp-name": "rsp1"
-  }
-}
-
-----
-
-:SFCPOT!:
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sb-rest-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sb-rest-user.adoc
deleted file mode 100644 (file)
index 31b23e0..0000000
+++ /dev/null
@@ -1,35 +0,0 @@
-=== SFC Southbound REST Plugin
-
-==== Overview
-The Southbound REST Plugin is used to send configuration from the DataStore
-down to network devices supporting a REST API (i.e. they have a configured REST URI).
-It supports POST/PUT/DELETE operations, which are triggered accordingly by
-changes in the SFC data stores.
-
-.In its current state it listens to changes in these SFC data stores:
-* Access Control List (ACL)
-* Service Classifier Function (SCF)
-* Service Function (SF)
-* Service Function Group (SFG)
-* Service Function Schedule Type (SFST)
-* Service Function Forwarder (SFF)
-* Rendered Service Path (RSP)
-
-==== Southbound REST Plugin Architecture
-From the user perspective, the REST plugin is another SFC Southbound plugin
-used to communicate with network devices.
-
-.Southbound REST Plugin integration into ODL
-image::sfc/sb-rest-architecture-user.png[width=250]
-
-==== Configuring Southbound REST Plugin
-.Configuration steps:
-. Run ODL distribution (run karaf)
-. In karaf console execute: +feature:install odl-sfc-sb-rest+
-. Configure REST URIs for SF/SFF through SFC User Interface or RESTCONF
-(required configuration steps can be found in the tutorial referenced below)
-
-==== Tutorial
-A comprehensive tutorial on how to use the Southbound REST Plugin and how to
-control network devices with it can be found at:
-https://wiki.opendaylight.org/view/Service_Function_Chaining:Main#SFC_101
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sf-monitoring-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sf-monitoring-user.adoc
deleted file mode 100644 (file)
index 5284bbb..0000000
+++ /dev/null
@@ -1,18 +0,0 @@
-=== Service Function Monitoring
-TBD
-
-==== SF Monitoring Overview
-TBD
-
-==== SF Monitoring Architecture
-TBD
-
-==== The way of getting SF monitor information
-TBD
-
-===== SF NETCONF server configuration
-TBD
-
-===== ODL configuration
-TBD
-
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sf-scheduler-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-sf-scheduler-user.adoc
deleted file mode 100644 (file)
index 293bdb6..0000000
+++ /dev/null
@@ -1,532 +0,0 @@
-=== Service Function Scheduling Algorithms
-
-==== Overview
-When creating a Rendered Service Path, the SFC controller originally chose
-the first available service function from a list of service function names.
-This may result in issues such as overloaded service functions
-and a longer service path as SFC has no means to understand the status of
-service functions and network topology. The service function selection
-framework supports at least four algorithms (Random, Round Robin,
-Load Balancing and Shortest Path) to select the most appropriate service
-function when instantiating the Rendered Service Path. In addition, it is an
-extensible framework that allows third-party selection algorithms to be plugged in.
-
-==== Architecture
-The following figure illustrates the service function selection framework
-and algorithms.
-
-.SF Selection Architecture
-image::sfc/sf-selection-arch.png["SF Selection Architecture",width=500]
-
-A user has three different ways to select one service function selection
-algorithm:
-
-. Integrated RESTCONF Calls. OpenStack and/or other administration systems
-  could provide plugins to call the APIs to select one scheduling algorithm.
-. Command line tools. Command line tools such as curl or browser plugins
-  such as POSTMAN (for Google Chrome) and RESTClient (for Mozilla Firefox)
-  could select a scheduling algorithm by making RESTCONF calls.
-. SFC-UI. Now the SFC-UI provides an option for choosing a selection algorithm
-  when creating a Rendered Service Path.
-
-The RESTCONF northbound SFC API provides GUI/RESTCONF interactions for choosing
-the service function selection algorithm.
-The MD-SAL data store holds all supported service function selection
-algorithms and provides APIs to enable one of them.
-Once a service function selection algorithm is enabled, the service function
-selection algorithm will work when creating a Rendered Service Path. 
-
-==== Select SFs with Scheduler
-An administrator can use either of the following ways to select one of the
-selection algorithms when creating a Rendered Service Path.
-
-* Command line tools. Command line tools include the Linux curl command or
-   browser plugins such as POSTMAN (for Google Chrome) or RESTClient (for
-   Mozilla Firefox). In this case, the following JSON content is needed:
-   Service_function_schedule_type.json
-+
- {
-   "service-function-scheduler-types": {
-     "service-function-scheduler-type": [
-       {
-         "name": "random",
-         "type": "service-function-scheduler-type:random",
-         "enabled": false
-       },
-       {
-         "name": "roundrobin",
-         "type": "service-function-scheduler-type:round-robin",
-         "enabled": true
-       },
-       {
-         "name": "loadbalance",
-         "type": "service-function-scheduler-type:load-balance",
-         "enabled": false
-       },
-       {
-         "name": "shortestpath",
-         "type": "service-function-scheduler-type:shortest-path",
-         "enabled": false
-       }
-     ]
-   }
- }
-+
-If using the Linux curl command, it could be:
-+
- curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" --data '${Service_function_schedule_type.json}'
- -X PUT --user admin:admin http://localhost:8181/restconf/config/service-function-scheduler-type:service-function-scheduler-types/
-+
-Here is also a snapshot for using the RESTClient plugin:
-
-.Mozilla Firefox RESTClient
-image::sfc/RESTClient-snapshot.png["Mozilla Firefox RESTClient",width=500]
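A small helper can build the JSON document shown above so that exactly one scheduler type is enabled before the PUT. The structure and names follow the payload in this section; the helper itself is an illustrative sketch, not part of the SFC API.

```python
import json

# Scheduler names and their YANG identity suffixes, as in the payload above.
TYPE_SUFFIX = {
    "random": "random",
    "roundrobin": "round-robin",
    "loadbalance": "load-balance",
    "shortestpath": "shortest-path",
}

def scheduler_types_payload(enabled_name):
    """Build the scheduler-types document with one algorithm enabled."""
    entries = [
        {
            "name": name,
            "type": "service-function-scheduler-type:" + suffix,
            "enabled": name == enabled_name,
        }
        for name, suffix in TYPE_SUFFIX.items()
    ]
    return {"service-function-scheduler-types":
            {"service-function-scheduler-type": entries}}

payload = scheduler_types_payload("roundrobin")
entries = payload["service-function-scheduler-types"][
    "service-function-scheduler-type"]
enabled = [e["name"] for e in entries if e["enabled"]]
assert enabled == ["roundrobin"]

# The serialized document is what would be sent as the curl --data body.
body = json.dumps(payload)
```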
-
-* SFC-UI. The SFC-UI provides a drop-down menu for the service function
-  selection algorithm. Here is a snapshot of the user interaction from
-  SFC-UI when creating a Rendered Service Path.
-
-.Karaf Web UI
-image::sfc/karaf-webui-select-a-type.png["Karaf Web UI",width=500]
-NOTE: Some service function selection algorithms in the drop-down list are not
-      implemented yet. Only the first three algorithms are committed at the
-      moment.
-
-===== Random
-Select Service Function from the name list randomly.
-
-====== Overview
-The Random algorithm selects one Service Function at random from the name
-list which it gets from the Service Function Type.
-
-====== Prerequisites
-* Service Function information is stored in the datastore.
-* Either no algorithm or the Random algorithm is selected.
-
-====== Target Environment
-The Random algorithm will work when either no algorithm type is selected
-or the Random algorithm is selected.
-
-====== Instructions
-Once the plugins are installed into Karaf successfully, a user can use
-their preferred method to select the Random scheduling algorithm type.
-There are no special instructions for using the Random algorithm.
-
-===== Round Robin
-Select Service Function from the name list in Round Robin manner.
-
-====== Overview
-The Round Robin algorithm selects one Service Function from the name
-list which it gets from the Service Function Type in a round-robin manner;
-this balances workloads across all Service Functions. However, this method
-cannot guarantee that all Service Functions carry the same workload
-because the Round Robin is flow-based.
-
-====== Prerequisites
-* Service Function information is stored in the datastore.
-* The Round Robin algorithm is selected.
-
-====== Target Environment
-The Round Robin algorithm will work once the Round Robin algorithm is selected.
-
-====== Instructions
-Once the plugins are installed into Karaf successfully, a user can use
-their preferred method to select the Round Robin scheduling algorithm type.
-There are no special instructions for using the Round Robin algorithm.
-
-===== Load Balance Algorithm
-Select appropriate Service Function by actual CPU utilization.
-
-====== Overview
-The Load Balance Algorithm selects the appropriate Service Function by the
-actual CPU utilization of the service functions. The CPU utilization of a
-service function is obtained from monitoring information reported via NETCONF.
-
-====== Prerequisites
-* CPU utilization reported for each Service Function.
-* NETCONF server.
-* NETCONF client.
-* Each VM has a NETCONF server and it could work with NETCONF client well.
-
-====== Instructions
-Set up VMs as Service Functions and enable the NETCONF server in each VM.
-Ensure that you specify them separately. For example:
-
-.1 *Setting up the VM*
-.. Set up 4 VMs as Service Functions: two of type Firewall and two of type
-   Napt44. Name them firewall-1, firewall-2, napt44-1 and napt44-2.
-   The four VMs can run either on the same server or on different servers.
-.. Install NETCONF server on every VM and enable it.
-   More information on NETCONF can be found on the OpenDaylight wiki here:
-   https://wiki.opendaylight.org/view/OpenDaylight_Controller:Config:Examples:Netconf:Manual_netopeer_installation
-.. Get monitoring data from the NETCONF server.
-   The monitoring data should be retrieved from the NETCONF server running
-   in the VMs. The following static XML data is an example:
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<service-function-description-monitor-report>
-  <SF-description>
-    <number-of-dataports>2</number-of-dataports>
-    <capabilities>
-      <supported-packet-rate>5</supported-packet-rate>
-      <supported-bandwidth>10</supported-bandwidth>
-      <supported-ACL-number>2000</supported-ACL-number>
-      <RIB-size>200</RIB-size>
-      <FIB-size>100</FIB-size>
-      <ports-bandwidth>
-        <port-bandwidth>
-          <port-id>1</port-id>
-          <ipaddress>10.0.0.1</ipaddress>
-          <macaddress>00:1e:67:a2:5f:f4</macaddress>
-          <supported-bandwidth>20</supported-bandwidth>
-        </port-bandwidth>
-        <port-bandwidth>
-          <port-id>2</port-id>
-          <ipaddress>10.0.0.2</ipaddress>
-          <macaddress>01:1e:67:a2:5f:f6</macaddress>
-          <supported-bandwidth>10</supported-bandwidth>
-        </port-bandwidth>
-      </ports-bandwidth>
-    </capabilities>
-  </SF-description>
-  <SF-monitoring-info>
-    <liveness>true</liveness>
-    <resource-utilization>
-        <packet-rate-utilization>10</packet-rate-utilization>
-        <bandwidth-utilization>15</bandwidth-utilization>
-        <CPU-utilization>12</CPU-utilization>
-        <memory-utilization>17</memory-utilization>
-        <available-memory>8</available-memory>
-        <RIB-utilization>20</RIB-utilization>
-        <FIB-utilization>25</FIB-utilization>
-        <power-utilization>30</power-utilization>
-        <SF-ports-bandwidth-utilization>
-          <port-bandwidth-utilization>
-            <port-id>1</port-id>
-            <bandwidth-utilization>20</bandwidth-utilization>
-          </port-bandwidth-utilization>
-          <port-bandwidth-utilization>
-            <port-id>2</port-id>
-            <bandwidth-utilization>30</bandwidth-utilization>
-          </port-bandwidth-utilization>
-        </SF-ports-bandwidth-utilization>
-    </resource-utilization>
-  </SF-monitoring-info>
-</service-function-description-monitor-report>
-----
-
-.2 *Start SFC*
-.. Unzip SFC release tarball.
-.. Run SFC: ${SFC}/bin/karaf.
-More information on Service Function Chaining can be found on the OpenDaylight
-SFC's wiki page:
-https://wiki.opendaylight.org/view/Service_Function_Chaining:Main
-
-.3 *Verify the Load Balance Algorithm*
-.. Deploy the SFC2 (firewall-abstract2=>napt44-abstract2) and click the
-   button to Create Rendered Service Path in the SFC UI (http://localhost:8181/sfc/index.html).
-.. Verify the Rendered Service Path to ensure the CPU utilization of the
-   selected hop is the minimum one among all the service functions of the
-   same type.
-The correct RSP is firewall-1=>napt44-2.
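The selection step can be sketched as picking, for each abstract hop, the service function of the matching type with the lowest reported CPU utilization. The utilization numbers below are hypothetical, chosen so the result matches the expected RSP above.

```python
# Hypothetical CPU-utilization percentages as reported via NETCONF monitoring.
monitoring = {
    "firewall-1": 12,
    "firewall-2": 45,
    "napt44-1": 60,
    "napt44-2": 20,
}
sf_types = {
    "firewall-1": "firewall", "firewall-2": "firewall",
    "napt44-1": "napt44", "napt44-2": "napt44",
}

def pick_least_loaded(sf_type):
    """Return the SF of the given type with minimal CPU utilization."""
    candidates = [name for name, t in sf_types.items() if t == sf_type]
    return min(candidates, key=lambda name: monitoring[name])

# Rendering the chain firewall => napt44 yields firewall-1 => napt44-2,
# matching the expected RSP (given these hypothetical loads).
rsp = [pick_least_loaded(t) for t in ("firewall", "napt44")]
assert rsp == ["firewall-1", "napt44-2"]
```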
-
-===== Shortest Path Algorithm
-Select the appropriate Service Function using Dijkstra's algorithm, which
-finds the shortest paths between nodes in a graph.
-
-====== Overview
-The Shortest Path Algorithm selects the appropriate Service Function
-based on the actual topology.
-
-====== Prerequisites
-* Deployed topology (including SFFs, SFs and their links).
-* Dijkstra's algorithm. More information on Dijkstra's algorithm can be found
-on the wiki here:
-http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
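A minimal Dijkstra sketch over a small SFF topology illustrates how shortest-path scheduling weighs links between forwarders. The node names echo the SFF names used in this tutorial; the edge weights are invented for the example.

```python
import heapq

# Hypothetical SFF adjacency with invented link costs.
graph = {
    "SFF-br1": {"SFF-br3": 1},
    "SFF-br2": {"SFF-br3": 2},
    "SFF-br3": {"SFF-br1": 1, "SFF-br2": 2},
}

def dijkstra(source):
    """Return shortest distances from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

dist = dijkstra("SFF-br1")
assert dist == {"SFF-br1": 0, "SFF-br3": 1, "SFF-br2": 3}
```

The scheduler then prefers the SF attached to the SFF with the smallest distance from the previous hop.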
-
-====== Instructions
-.1 *Start SFC*
-.. Unzip SFC release tarball.
-.. Run SFC: ${SFC}/bin/karaf.
-.. Deploy SFFs and SFs: import the service-function-forwarders.json and
-   service-functions.json files in the UI (http://localhost:8181/sfc/index.html#/sfc/config)
-
-service-function-forwarders.json:
-----
-{
-  "service-function-forwarders": {
-    "service-function-forwarder": [
-      {
-        "name": "SFF-br1",
-        "service-node": "OVSDB-test01",
-        "rest-uri": "http://localhost:5001",
-        "sff-data-plane-locator": [
-          {
-            "name": "eth0",
-            "service-function-forwarder-ovs:ovs-bridge": {
-              "uuid": "4c3778e4-840d-47f4-b45e-0988e514d26c",
-              "bridge-name": "br-tun"
-            },
-            "data-plane-locator": {
-              "port": 5000,
-              "ip": "192.168.1.1",
-              "transport": "service-locator:vxlan-gpe"
-            }
-          }
-        ],
-        "service-function-dictionary": [
-          {
-            "sff-sf-data-plane-locator": {
-              "port": 10001,
-              "ip": "10.3.1.103"
-            },
-            "name": "napt44-1",
-            "type": "service-function-type:napt44"
-          },
-          {
-            "sff-sf-data-plane-locator": {
-              "port": 10003,
-              "ip": "10.3.1.102"
-            },
-            "name": "firewall-1",
-            "type": "service-function-type:firewall"
-          }
-        ],
-        "connected-sff-dictionary": [
-          {
-            "name": "SFF-br3"
-          }
-        ]
-      },
-      {
-        "name": "SFF-br2",
-        "service-node": "OVSDB-test01",
-        "rest-uri": "http://localhost:5002",
-        "sff-data-plane-locator": [
-          {
-            "name": "eth0",
-            "service-function-forwarder-ovs:ovs-bridge": {
-              "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a1",
-              "bridge-name": "br-tun"
-            },
-            "data-plane-locator": {
-              "port": 5000,
-              "ip": "192.168.1.2",
-              "transport": "service-locator:vxlan-gpe"
-            }
-          }
-        ],
-        "service-function-dictionary": [
-          {
-            "sff-sf-data-plane-locator": {
-              "port": 10002,
-              "ip": "10.3.1.103"
-            },
-            "name": "napt44-2",
-            "type": "service-function-type:napt44"
-          },
-          {
-            "sff-sf-data-plane-locator": {
-              "port": 10004,
-              "ip": "10.3.1.101"
-            },
-            "name": "firewall-2",
-            "type": "service-function-type:firewall"
-          }
-        ],
-        "connected-sff-dictionary": [
-          {
-            "name": "SFF-br3"
-          }
-        ]
-      },
-      {
-        "name": "SFF-br3",
-        "service-node": "OVSDB-test01",
-        "rest-uri": "http://localhost:5005",
-        "sff-data-plane-locator": [
-          {
-            "name": "eth0",
-            "service-function-forwarder-ovs:ovs-bridge": {
-              "uuid": "fd4d849f-5140-48cd-bc60-6ad1f5fc0a4",
-              "bridge-name": "br-tun"
-            },
-            "data-plane-locator": {
-              "port": 5000,
-              "ip": "192.168.1.2",
-              "transport": "service-locator:vxlan-gpe"
-            }
-          }
-        ],
-        "service-function-dictionary": [
-          {
-            "sff-sf-data-plane-locator": {
-              "port": 10005,
-              "ip": "10.3.1.104"
-            },
-            "name": "test-server",
-            "type": "service-function-type:dpi"
-          },
-          {
-            "sff-sf-data-plane-locator": {
-              "port": 10006,
-              "ip": "10.3.1.102"
-            },
-            "name": "test-client",
-            "type": "service-function-type:dpi"
-          }
-        ],
-        "connected-sff-dictionary": [
-          {
-            "name": "SFF-br1"
-          },
-          {
-            "name": "SFF-br2"
-          }
-        ]
-      }
-    ]
-  }
-}
-----
-
-service-functions.json:
-----
-{
-  "service-functions": {
-    "service-function": [
-      {
-        "rest-uri": "http://localhost:10001",
-        "ip-mgmt-address": "10.3.1.103",
-        "sf-data-plane-locator": [
-          {
-            "name": "preferred",
-            "port": 10001,
-            "ip": "10.3.1.103",
-            "service-function-forwarder": "SFF-br1"
-          }
-        ],
-        "name": "napt44-1",
-        "type": "service-function-type:napt44",
-        "nsh-aware": true
-      },
-      {
-        "rest-uri": "http://localhost:10002",
-        "ip-mgmt-address": "10.3.1.103",
-        "sf-data-plane-locator": [
-          {
-            "name": "master",
-            "port": 10002,
-            "ip": "10.3.1.103",
-            "service-function-forwarder": "SFF-br2"
-          }
-        ],
-        "name": "napt44-2",
-        "type": "service-function-type:napt44",
-        "nsh-aware": true
-      },
-      {
-        "rest-uri": "http://localhost:10003",
-        "ip-mgmt-address": "10.3.1.103",
-        "sf-data-plane-locator": [
-          {
-            "name": "1",
-            "port": 10003,
-            "ip": "10.3.1.102",
-            "service-function-forwarder": "SFF-br1"
-          }
-        ],
-        "name": "firewall-1",
-        "type": "service-function-type:firewall",
-        "nsh-aware": true
-      },
-      {
-        "rest-uri": "http://localhost:10004",
-        "ip-mgmt-address": "10.3.1.103",
-        "sf-data-plane-locator": [
-          {
-            "name": "2",
-            "port": 10004,
-            "ip": "10.3.1.101",
-            "service-function-forwarder": "SFF-br2"
-          }
-        ],
-        "name": "firewall-2",
-        "type": "service-function-type:firewall",
-        "nsh-aware": true
-      },
-      {
-        "rest-uri": "http://localhost:10005",
-        "ip-mgmt-address": "10.3.1.103",
-        "sf-data-plane-locator": [
-          {
-            "name": "3",
-            "port": 10005,
-            "ip": "10.3.1.104",
-            "service-function-forwarder": "SFF-br3"
-          }
-        ],
-        "name": "test-server",
-        "type": "service-function-type:dpi",
-        "nsh-aware": true
-      },
-      {
-        "rest-uri": "http://localhost:10006",
-        "ip-mgmt-address": "10.3.1.103",
-        "sf-data-plane-locator": [
-          {
-            "name": "4",
-            "port": 10006,
-            "ip": "10.3.1.102",
-            "service-function-forwarder": "SFF-br3"
-          }
-        ],
-        "name": "test-client",
-        "type": "service-function-type:dpi",
-        "nsh-aware": true
-      }
-    ]
-  }
-}
-----
-
-The deployed topology looks like this:
-----
-
-              +----+           +----+          +----+
-              |sff1|+----------|sff3|---------+|sff2|
-              +----+           +----+          +----+
-                |                                  |
-         +--------------+                   +--------------+
-         |              |                   |              |
-    +----------+   +--------+          +----------+   +--------+
-    |firewall-1|   |napt44-1|          |firewall-2|   |napt44-2|
-    +----------+   +--------+          +----------+   +--------+
-
-----
-
-.2 *Verify the Shortest Path Algorithm*
-** Deploy the SFC2 (firewall-abstract2=>napt44-abstract2), select "Shortest
-   Path" as the schedule type and click the button to Create Rendered Service
-   Path in the SFC UI (http://localhost:8181/sfc/index.html).
-
-.select schedule type
-image::sfc/sf-schedule-type.png["select schedule type",width=500]
-
-** Verify the Rendered Service Path to ensure the selected hops are linked to
-   one SFF. The correct RSP is firewall-1=>napt44-1 or firewall-2=>napt44-2.
-   The first SF type in the Service Function Chain is Firewall, so the
-   algorithm selects the first hop randomly among all the SFs of type
-   Firewall. Assume the first selected SF is firewall-2.
-   All the paths from firewall-2 to an SF of type Napt44 are listed:
-
-* Path1: firewall-2 -> sff2 -> napt44-2
-* Path2: firewall-2 -> sff2 -> sff3 -> sff1 -> napt44-1
-The shortest path is Path1, so the selected next hop is napt44-2.
-
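The hop selection above can be illustrated with a short, standalone sketch (not the scheduler's actual code): model the deployed topology as an undirected graph with unit edge weights and run Dijkstra's algorithm from firewall-2.

```python
import heapq

# Undirected links from the deployed topology: sff1 -- sff3 -- sff2,
# with each SF attached to its SFF.
edges = [
    ("sff1", "sff3"), ("sff2", "sff3"),
    ("sff1", "firewall-1"), ("sff1", "napt44-1"),
    ("sff2", "firewall-2"), ("sff2", "napt44-2"),
]
graph = {}
for a, b in edges:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

def dijkstra(graph, source):
    """Shortest hop count from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbour in graph.get(node, []):
            if d + 1 < dist.get(neighbour, float("inf")):
                dist[neighbour] = d + 1
                heapq.heappush(heap, (d + 1, neighbour))
    return dist

dist = dijkstra(graph, "firewall-2")
# Path1 (via sff2) reaches napt44-2 at distance 2; Path2 (via sff2, sff3,
# sff1) reaches napt44-1 at distance 4, so napt44-2 is selected.
```
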
-.rendered service path
-image::sfc/sf-rendered-service-path.png["rendered service path",width=500]
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-ui-user.adoc b/manuals/user-guide/src/main/asciidoc/sfc/odl-sfc-ui-user.adoc
deleted file mode 100644 (file)
index c9e8419..0000000
+++ /dev/null
@@ -1,19 +0,0 @@
-=== SFC User Interface
-
-==== Overview
-SFC User Interface (SFC-UI) is based on the DLUX project. It provides an easy
-way to create, read, update and delete configuration stored in the Datastore.
-Moreover, it shows the status of all SFC features (e.g., installed,
-uninstalled) and Karaf log messages as well.
-
-==== SFC-UI Architecture
-SFC-UI operates purely by using RESTCONF.
-
-.SFC-UI integration into ODL
-image::sfc/sfc-ui-architecture.png[width=250]
-
-==== Configuring SFC-UI
-.Configuration steps
-. Run ODL distribution (run karaf)
-. In karaf console execute: +feature:install odl-sfc-ui+
-. Visit SFC-UI on: +http://<odl_ip_address>:8181/sfc/index.html+
index 013e9c7e3b7b851a3b14f90ac7d3729ef1412e54..9f797a724dc2d958ea7a969ab7a75a934b730f15 100644 (file)
@@ -1,28 +1,3 @@
 == Service Function Chaining
 
-include::sfc_overview.adoc[SFC Overview]
-
-include::odl-sfc-ui-user.adoc[SFC UI User guide]
-
-include::odl-sfc-sb-rest-user.adoc[SFC Southbound REST plugin User guide]
-
-include::odl-sfc-ovs-user.adoc[SFC OVS User guide]
-
-include::odl-sfc-classifier-user.adoc[SFC Classifier configuration User guide]
-
-// SFC Renderers
-include::odl-sfc-openflow-renderer-user.adoc[SFC OpenFlow Renderer]
-
-include::odl-sfc-iosxe-renderer-user.adoc[SFC IOS XE Renderer]
-
-include::odl-sfc-vpp-renderer-user.adoc[SFC VPP Renderer]
-// SFC Renderers
-
-include::odl-sfc-sf-scheduler-user.adoc[Service Function selection scheduler]
-
-// Removed because there is no content
-// include::odl-sfc-sf-monitoring-user.adoc[Service Function Monitoring]
-
-include::odl-sfc-load-balance-user.adoc[Service Function Grouping and Load Balancing user guide]
-
-include::odl-sfc-pot-user.adoc[SFC Proof of Transit]
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/service-function-chaining.html
diff --git a/manuals/user-guide/src/main/asciidoc/sfc/sfc_overview.adoc b/manuals/user-guide/src/main/asciidoc/sfc/sfc_overview.adoc
deleted file mode 100644 (file)
index 9dc064f..0000000
+++ /dev/null
@@ -1,15 +0,0 @@
-=== OpenDaylight Service Function Chaining (SFC) Overview
-
-OpenDaylight Service Function Chaining (SFC) provides the ability to define an ordered list of network services (e.g. firewalls, load balancers). These services are then "stitched" together in the network to create a service chain. This project provides the infrastructure (chaining logic, APIs) needed for ODL to provision a service chain in the network and an end-user application for defining such chains.
-
-.List of acronyms:
-* ACE - Access Control Entry
-* ACL - Access Control List
-* SCF - Service Classifier Function
-* SF - Service Function
-* SFC - Service Function Chain
-* SFF - Service Function Forwarder
-* SFG - Service Function Group
-* SFP - Service Function Path
-* RSP - Rendered Service Path
-* NSH - Network Service Header
\ No newline at end of file
diff --git a/manuals/user-guide/src/main/asciidoc/snbi/odl-snbi-user.adoc b/manuals/user-guide/src/main/asciidoc/snbi/odl-snbi-user.adoc
new file mode 100644 (file)
index 0000000..3cd99f6
--- /dev/null
@@ -0,0 +1,274 @@
+== SNBI User Guide
+This section describes how to use the SNBI feature in OpenDaylight and contains
+configuration, administration, and management section for the feature.
+
+=== Overview
+Key distribution in a scaled network has always been a challenge. Typically, operators must perform some manual key distribution process before secure communication is possible between a set of network devices. The Secure Network Bootstrapping Infrastructure (SNBI) project securely and automatically brings up an integrated set of network devices and controllers, simplifying the process of bootstrapping network devices with the keys required for secure communication. SNBI enables connectivity to the network devices by assigning unique IPv6 addresses and bootstrapping devices with the required keys. Admission control of devices into a specific domain is achieved using a whitelist of authorized devices.
+
+=== SNBI Architecture
+At a high level, SNBI architecture consists of the following components:
+
+* SNBI Registrar
+* SNBI Forwarding Element (FE)
+
+.SNBI Architecture Diagram
+image::snbi/snbi_arch.png["SNBI Architecture",width=500]
+
+==== SNBI Registrar
+The registrar is a device in the network that validates devices against a whitelist and delivers device domain certificates. The registrar includes the following:
+
+* RESTCONF API for Domain Whitelist Configuration
+* SNBI Southbound Plugin
+* Certificate Authority
+
+.RESTCONF API for Domain Whitelist Configuration:
+Below is the YANG model to configure the whitelist of devices for a particular domain.
+----
+module snbi {
+    //The yang version - today only 1 version exists. If omitted defaults to 1.
+    yang-version 1;
+
+    //a unique namespace for this SNBI module, to uniquely identify it from other modules that may have the same name.
+    namespace "http://netconfcentral.org/ns/snbi";
+
+    //a shorter prefix that represents the namespace for references used below
+    prefix snbi;
+
+    //Defines the organization which defined / owns this .yang file.
+    organization "Netconf Central";
+
+    //defines the primary contact of this yang file.
+    contact "snbi-dev";
+
+    //provides a description of this .yang file.
+    description "YANG version for SNBI.";
+
+    //defines the dates of revisions for this yang file
+    revision "2024-07-02" {
+        description "SNBI module";
+    }
+
+    typedef UDI {
+        type string;
+        description "Unique Device Identifier";
+    }
+
+    container snbi-domain {
+        leaf domain-name {
+            type string;
+            description "The SNBI domain name";
+        }
+
+        list device-list {
+            key "list-name";
+
+            leaf list-name {
+                type string;
+                description "Name of the device list";
+            }
+
+            leaf list-type {
+                type enumeration {
+                    enum "white";
+                }
+                description "Indicates the type of the list";
+            }
+
+            leaf active {
+                type boolean;
+                description "Indicates whether the list is active or not";
+            }
+
+            list devices {
+                key "device-identifier";
+                leaf device-identifier {
+                    type union {
+                        type UDI;
+                    }
+                }
+             }
+         }
+    }
+}
+----
+
+.Southbound Plugin:
+The Southbound Plugin implements the protocol state machine necessary to exchange device identifiers, and deliver certificates.
+
+.Certificate Authority:
+A simple certificate authority is implemented using the Bouncy Castle package. The Certificate Authority creates the certificates from the device CSR requests received from the devices. The certificates thus generated are delivered to the devices using the Southbound Plugin.
+
+==== SNBI Forwarding Element
+The forwarding element must be installed or unpacked on a Linux host whose network layer traffic must be secured. The FE performs the following functions:
+
+* Neighbour Discovery
+* Bootstrap
+* Host Configuration
+
+.Neighbour Discovery:
+Neighbour Discovery (ND) is the first step in accommodating devices in a secure network. SNBI performs periodic neighbour discovery of SNBI agents by transmitting ND hello packets. The discovered devices are populated in an ND table. Neighbour Discovery is periodic and bidirectional. ND hello packets are transmitted every 10 seconds, and a 40 second refresh timer is set for each discovered neighbour. On expiry of the refresh timer, the Neighbour Adjacency is removed from the ND table, as it is no longer valid. Because the same SNBI neighbour may be discovered on multiple links, the expiry of a device on one link does not automatically remove the device entry from the ND table.
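
The timer behaviour described above (10 second hellos, a 40 second per-neighbour refresh timer, per-link entries) can be modelled with a small sketch. This is an illustration only, not the SNBI FE implementation:

```python
HELLO_INTERVAL = 10   # seconds between ND hello packets
REFRESH_TIMEOUT = 40  # seconds before a silent neighbour ages out

class NeighbourTable:
    """Toy ND table: entries are keyed by (udi, link), so expiry on one
    link does not remove the same neighbour discovered on another link."""
    def __init__(self):
        self.deadline = {}

    def hello_received(self, udi, link, now):
        # Each hello (re)arms the per-neighbour refresh timer.
        self.deadline[(udi, link)] = now + REFRESH_TIMEOUT

    def expire(self, now):
        expired = [k for k, t in self.deadline.items() if t <= now]
        for k in expired:
            del self.deadline[k]
        return expired

table = NeighbourTable()
table.hello_received("UDI-dev1", "eth0", now=0)
table.hello_received("UDI-dev1", "eth1", now=30)  # same neighbour, second link
gone = table.expire(now=45)  # eth0 deadline was t=40; eth1 is still valid
```
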
+
+.Bootstrapping:
+Bootstrapping a device involves the following sequential steps:
+
+* Authenticate a device using device identifier (UDI or SUDI)
+* Allocate the appropriate device ID and IPv6 address to uniquely identify the device in the network
+* Allocate the required keys by installing a Device Domain Certificate
+* Accommodate the device in the domain
+
+.Host Configuration:
+Host configuration involves configuring a host to create a secure overlay network: assigning an appropriate IPv6 address, setting up GRE tunnels, securing the tunnel traffic via IPsec, and enabling connectivity via a routing protocol.
+
+The SNBI Forwarding Element is packaged in a docker container available at this link: https://hub.docker.com/r/snbi/boron/.
+For more information on docker, refer to this link: https://docs.docker.com/linux/.
+
+=== Prerequisites for Configuring SNBI
+Before proceeding further, ensure that the following system requirements are met:
+
+* 64-bit Ubuntu 14.04 LTS
+* 4GB RAM
+* 4GB of hard disk space, sufficient to store certificates
+* Java Virtual Machine 1.8 or above
+* Apache Maven 3.3.3 or above
+* Make sure the time on all the devices is synced, either manually or using NTP
+* Docker version greater than 1.0 on Ubuntu 14.04
+
+=== Configuring SNBI
+This section contains the following:
+
+* Setting up SNBI Registrar on the controller
+* Configuring Whitelist
+* Setting up SNBI FE on Linux Hosts
+
+==== Setting up SNBI Registrar on the controller
+This section contains the following:
+
+* Configuring the Registrar Host
+* Installing Karaf Package
+* Configuring SNBI Registrar
+
+.Configuring the Registrar Host:
+Before enabling the SNBI registrar service, assign an IPv6 address to an interface on the registrar host. This is to bind the registrar service to an IPv6 address (*fd08::aaaa:bbbb:1/128*).
+----
+sudo ip link add snbi-ra type dummy
+sudo ip addr add fd08::aaaa:bbbb:1/128 dev snbi-ra
+sudo ifconfig snbi-ra up
+----
+
+.Installing Karaf Package:
+Download the karaf package from this link: http://www.opendaylight.org/software/downloads, unzip and run the `karaf` executable present in the bin folder. Here is an example of this step:
+----
+cd distribution-karaf-0.3.0-Boron/bin
+./karaf
+----
+
+Additional information on useful Karaf commands is available at this link: https://wiki.opendaylight.org/view/CrossProject:Integration_Group:karaf.
+
+.Configuring SNBI Registrar:
+Before you perform this step, ensure that you have completed the tasks
+<<_configuring_snbi, above>>.
+
+To use RESTCONF APIs, install the RESTCONF feature available in the Karaf package.
+If required, install the mdsal-apidocs module for access to documentation. Refer to https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Restconf_API_Explorer for more information on the MDSAL API docs.
+
+Use the commands below to install the required features and verify the same.
+----
+feature:install odl-restconf
+feature:install odl-mdsal-apidocs
+feature:install odl-snbi-all
+feature:list -i
+----
+
+After confirming that the features are installed, use the following command to start SNBI registrar:
+----
+snbi:start <domain-name>
+----
+
+==== Configuring Whitelist
+The registrar must be configured with a whitelist of devices that are accommodated in a specific domain. The YANG for configuring the domain and the associated whitelist in the controller is available at this link: https://wiki.opendaylight.org/view/SNBI_Architecture_and_Design#Registrar_YANG_Definition.
+It is recommended to use Postman to configure the registrar using RESTCONF.
+
+This section contains the following:
+
+* Installing PostMan
+* Configuring Whitelist using REST API
+
+.Installing Postman:
+Follow the steps below to install Postman in your Google Chrome browser.
+
+* Install Postman via the Google Chrome browser, available at this link: https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en
+* In the Chrome browser address bar, enter: chrome://apps/
+* Click Postman.
+* Enter the URL.
+* Click Headers.
+* Enter the Accept: header.
+* Click the Basic Auth tab to create user credentials, such as user name and password.
+* Click Send.
+
+You can download a sample Postman configuration to get started from this link: https://www.getpostman.com/collections/c929a2a4007ffd0a7b51
+
+.Configuring Whitelist using REST API:
+
+The POST body below configures a domain, "secure-domain", and a whitelist of devices to be accommodated in that domain.
+----
+{
+  "snbi-domain": {
+    "domain-name": "secure-domain",
+    "device-list": [
+      {
+        "list-name": "demo list",
+        "list-type": "white",
+        "active": true,
+        "devices": [
+          {
+            "device-id": "UDI-FirstFE"
+          },
+          {
+            "device-id": "UDI-dev1"
+          },
+          {
+            "device-id": "UDI-dev2"
+          }
+        ]
+      }
+     ]
+  }
+}
+----
+The associated device ID must be configured on the SNBI FE (see below).
+You can also push the domain and whitelist information through the API docs interface, accessible at link:http://localhost:8080/apidoc/explorer. More details on the API docs are available at link:https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:Restconf_API_Explorer
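
To make the whitelist semantics concrete, here is a small standalone sketch of the admission-control decision (not the registrar's code). Note that the sample payload above uses a leaf named `device-id`, while the YANG model earlier names it `device-identifier`; the sketch follows the sample payload.

```python
import json

# The whitelist configuration from the POST body above.
domain_config = json.loads("""
{
  "snbi-domain": {
    "domain-name": "secure-domain",
    "device-list": [
      {
        "list-name": "demo list",
        "list-type": "white",
        "active": true,
        "devices": [
          {"device-id": "UDI-FirstFE"},
          {"device-id": "UDI-dev1"},
          {"device-id": "UDI-dev2"}
        ]
      }
    ]
  }
}
""")

def is_admitted(config, udi):
    """A device is admitted only if its UDI is on an active whitelist."""
    for device_list in config["snbi-domain"]["device-list"]:
        if device_list["list-type"] == "white" and device_list["active"]:
            if any(dev["device-id"] == udi for dev in device_list["devices"]):
                return True
    return False
```
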
+
+==== Setting up SNBI FE on Linux Hosts
+The SNBI Daemon is used to bootstrap the host device with a valid device domain certificate and IP address for connectivity and to create a reachable overlay network by interacting with multiple software modules.
+
+.Device UDI:
+The Device UDI or the device Unique Identifier can be derived from a multitude of parameters in the host machine, but most derived parameters are already known or do not remain constant across reloads. Therefore, every SNBI FE must be configured explicitly with a UDI that is present in the device whitelist.
+
+.First Forwarding Element:
+The registrar service IP address must be provided to the first host (Forwarding Element) to be bootstrapped. As mentioned in the "Configuring the Registrar Host" section, the registrar service IP address is *fd08::aaaa:bbbb:1*. The First Forwarding Element must be configured with this IPv6 address.
+
+.Running the SNBI docker image:
+The SNBI FE in the docker image picks up the UDI of the Forwarding Element via an environment variable provided when executing the docker instance. If the Forwarding Element is the first forwarding element, the IP address of the registrar service should also be provided.
+
+----
+sudo docker run -v /etc/timezone:/etc/timezone:ro --net=host --privileged=true
+--rm -t -i -e SNBI_UDI=UDI-FirstFE  -e SNBI_REGISTRAR=fd08::aaaa:bbbb:1 snbi/boron:latest /bin/bash
+----
+
+After the docker image is executed, you are placed in the snbi.d command prompt.
+
+A new Forwarding Element is bootstrapped in the same way, except that the registrar IP address is not required while running the docker image.
+----
+sudo docker run --net=host --privileged=true --rm -t -i -e SNBI_UDI=UDI-dev1 snbi/boron:latest /bin/bash
+----
+
+
+=== Administering or Managing SNBI
+The SNBI daemon provides various show commands to verify its current state. Commands are completed automatically when you press Tab, and typing "?" lists the available commands with their help strings.
+----
+snbi.d > show snbi
+        device                Host device
+        neighbors             SNBI Neighbors
+        debugs                Debugs enabled
+        certificate           Certificate information
+----
index 2ef1f56dcf00bb44986f33a7b0c0709de44464e4..a890e6b9db196914ac12209753554dc30e2be984 100644 (file)
@@ -1,116 +1,3 @@
 == SNMP Plugin User Guide
 
-=== Installing Feature
-The SNMP Plugin can be installed using a single karaf feature: *odl-snmp-plugin*
-
-After starting Karaf:
-
-* Install the feature: *feature:install odl-snmp-plugin*
-* Expose the northbound API: *feature:install odl-restconf*
-
-=== Northbound APIs
-There are two exposed northbound APIs: snmp-get & snmp-set
-
-==== SNMP GET
-Default URL: http://localhost:8181/restconf/operations/snmp:snmp-get
-
-===== POST Input
-
-[options="header"]
-|=======
-|Field Name | Type | Description | Example | Required?
-| ip-address | Ipv4 Address | The IPv4 Address of the desired network node | 10.86.3.13 | Yes
-| oid | String | The Object Identifier of the desired MIB table/object | 1.3.6.1.2.1.1.1 | Yes
-| get-type | ENUM (GET, GET-NEXT, GET-BULK, GET-WALK) | The type of get request to send | GET-BULK | Yes
-| community | String | The community string to use for the SNMP request | private | No. (Default: public)
-|=======
-
-.Example
-----
- {
-     "input": {
-         "ip-address": "10.86.3.13",
-         "oid" : "1.3.6.1.2.1.1.1",
-         "get-type" : "GET-BULK",
-         "community" : "private"
-     }
- }
-----
-
-===== POST Output
-
-[options="header"]
-|=======
-|Field Name | Type | Description
-| results | List of { "value" : String } pairs | The results of the SNMP query
-|=======
-
-.Example
-----
- {
-     "snmp:results": [
-         {
-             "value": "Ethernet0/0/0",
-             "oid": "1.3.6.1.2.1.2.2.1.2.1"
-         },
-         {
-             "value": "FastEthernet0/0/0",
-             "oid": "1.3.6.1.2.1.2.2.1.2.2"
-         },
-         {
-             "value": "GigabitEthernet0/0/0",
-             "oid": "1.3.6.1.2.1.2.2.1.2.3"
-         }
-     ]
- }
-----
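
A client consuming this response might flatten the results list into an OID-to-value map. The sketch below shows only the JSON handling; the HTTP POST to the RESTCONF URL above is omitted:

```python
# Example snmp-get response body, as shown above.
response = {
    "snmp:results": [
        {"value": "Ethernet0/0/0", "oid": "1.3.6.1.2.1.2.2.1.2.1"},
        {"value": "FastEthernet0/0/0", "oid": "1.3.6.1.2.1.2.2.1.2.2"},
        {"value": "GigabitEthernet0/0/0", "oid": "1.3.6.1.2.1.2.2.1.2.3"},
    ]
}

def results_to_dict(body):
    """Index the flat results list by OID for easy lookup."""
    return {entry["oid"]: entry["value"] for entry in body["snmp:results"]}

if_names = results_to_dict(response)
```
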
-
-==== SNMP SET
-Default URL: http://localhost:8181/restconf/operations/snmp:snmp-set
-
-===== POST Input
-
-[options="header"]
-|=======
-|Field Name | Type | Description | Example | Required?
-|ip-address | Ipv4 Address | The Ipv4 address of the desired network node | 10.86.3.13 | Yes
-|oid | String | The Object Identifier of the desired MIB object | 1.3.6.2.1.1.1 | Yes
-|value | String | The value to set on the network device | "Hello World" | Yes
-|community | String | The community string to use for the SNMP request | private | No. (Default: public)
-|=======
-
-.Example
-----
- {
-     "input": {
-         "ip-address": "10.86.3.13",
-         "oid" : "1.3.6.1.2.1.1.1.0",
-         "value" : "Sample description",
-         "community" : "private"
-     }
- }
-----
-
-===== POST Output
-On a successful SNMP-SET, no output is presented, just a HTTP status of 200.
-
-===== Errors
-If any errors happen in the set request, you will be presented with an error message in the output.
-
-For example, on a failed set request you may see an error like:
-
-----
- {
-     "errors": {
-         "error": [
-             {
-                 "error-type": "application",
-                 "error-tag": "operation-failed",
-                 "error-message": "SnmpSET failed with error status: 17, error index: 1. StatusText: Not writable"
-             }
-         ]
-     }
- }
-----
-
-which corresponds to Error status 17 in the SNMPv2 RFC: https://tools.ietf.org/html/rfc1905.
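
For reference, the SNMPv2 error-status codes defined in RFC 1905 can be decoded with a small lookup (a partial list; see the RFC for the full set):

```python
# SNMPv2 error-status values from RFC 1905 (partial list).
ERROR_STATUS = {
    0: "noError",
    1: "tooBig",
    5: "genErr",
    6: "noAccess",
    7: "wrongType",
    10: "wrongValue",
    13: "resourceUnavailable",
    16: "authorizationError",
    17: "notWritable",
    18: "inconsistentName",
}

def describe(status):
    """Map a numeric error-status to its RFC 1905 name."""
    return ERROR_STATUS.get(status, "unknown({})".format(status))
```
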
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/snmp-plugin-user-guide.html
index 2797f1f245044f92e10ccb4f3067a6e728610b87..63f100a98574f8d5fb0cea4cd660b4845c27104c 100644 (file)
@@ -1,477 +1,3 @@
 == SNMP4SDN User Guide\r
-=== Overview\r
-We propose a southbound plugin that can control off-the-shelf commodity Ethernet switches for the purpose of building an SDN using Ethernet switches. On Ethernet switches, the forwarding table, VLAN table, and ACL are where flow configuration can be installed, and the proposed plugin does this via SNMP and CLI. In addition, some settings required for Ethernet switches in an SDN, e.g., disabling STP and flooding, are proposed.
-\r
-.SNMP4SDN as an OpenDaylight southbound plugin \r
-image::snmp4sdn_in_odl_architecture.jpg["SNMP4SDN as an OpenDaylight southbound plugin",width=400]\r
-\r
-=== Configuration\r
-Just follow the steps:\r
-\r
-==== Prepare the switch list database file\r
-A sample is https://wiki.opendaylight.org/view/SNMP4SDN:switch_list_file[here], and we suggest saving it as '/etc/snmp4sdn_swdb.csv' so that the SNMP4SDN Plugin can automatically load this file. Note that the first line is the title and should not be removed.
-\r
-==== Prepare the vendor-specific configuration file\r
-A sample is https://wiki.opendaylight.org/view/SNMP4SDN:snmp4sdn_VendorSpecificSwitchConfig_file[here], and we suggest saving it as '/etc/snmp4sdn_VendorSpecificSwitchConfig.xml' so that the SNMP4SDN Plugin can automatically load this file.
-\r
-=== Install SNMP4SDN Plugin\r
-If using SNMP4SDN Plugin provided in OpenDaylight release, just do the following from the Karaf CLI:\r
-\r
-----\r
-feature:install odl-snmp4sdn-all\r
-----\r
-\r
-=== Troubleshooting\r
-==== Installation Troubleshooting\r
-===== Feature installation failure\r
-When trying to install a feature, if the following failure occurs:\r
-----\r
-Error executing command: Could not start bundle ... \r
-Reason: Missing Constraint: Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.7))"\r
-----\r
-A workaround: exit Karaf, and edit the file <karaf_directory>/etc/config.properties, remove the line '${services-${karaf.framework}}' and the ", \" in the line above.\r
-\r
-==== Runtime Troubleshooting\r
-===== Problem starting SNMP Trap Interface\r
-It is possible to get the following exception during controller startup. (The error would not be printed in Karaf console, one may see it in <karaf_directory>/data/log/karaf.log)\r
-----\r
-2014-01-31 15:00:44.688 CET [fileinstall-./plugins] WARN  o.o.snmp4sdn.internal.SNMPListener - Problem starting SNMP Trap Interface: {}\r
- java.net.BindException: Permission denied\r
-        at java.net.PlainDatagramSocketImpl.bind0(Native Method) ~[na:1.7.0_51]\r
-        at java.net.AbstractPlainDatagramSocketImpl.bind(AbstractPlainDatagramSocketImpl.java:95) ~[na:1.7.0_51]\r
-        at java.net.DatagramSocket.bind(DatagramSocket.java:376) ~[na:1.7.0_51]\r
-        at java.net.DatagramSocket.<init>(DatagramSocket.java:231) ~[na:1.7.0_51]\r
-        at java.net.DatagramSocket.<init>(DatagramSocket.java:284) ~[na:1.7.0_51]\r
-        at java.net.DatagramSocket.<init>(DatagramSocket.java:256) ~[na:1.7.0_51]\r
-        at org.snmpj.SNMPTrapReceiverInterface.<init>(SNMPTrapReceiverInterface.java:126) ~[org.snmpj-1.4.3.jar:na]\r
-        at org.snmpj.SNMPTrapReceiverInterface.<init>(SNMPTrapReceiverInterface.java:99) ~[org.snmpj-1.4.3.jar:na]\r
-        at org.opendaylight.snmp4sdn.internal.SNMPListener.<init>(SNMPListener.java:75) ~[bundlefile:na]\r
-        at org.opendaylight.snmp4sdn.core.internal.Controller.start(Controller.java:174) [bundlefile:na]\r
-...\r
-----\r
-This indicates that the controller is being run as a user which does not have sufficient OS privileges to bind the SNMP trap port (162/UDP).
-\r
-==== Switch list file missing\r
-The SNMP4SDN Plugin needs a switch list file, which is necessary for topology discovery and should be provided by the administrator (please prepare one before using the SNMP4SDN Plugin for the first time; here is the https://wiki.opendaylight.org/view/SNMP4SDN:switch_list_file[sample]). The default file path is /etc/snmp4sdn_swdb.csv. The SNMP4SDN Plugin automatically loads this file and starts topology discovery. If this file is not there, a message like the following will appear:
-----\r
-2016-02-02 04:21:52,476 | INFO| Event Dispatcher | CmethUtil                        | 466 - org.opendaylight.snmp4sdn - 0.3.0.SNAPSHOT | CmethUtil.readDB() err: {}\r
-java.io.FileNotFoundException: /etc/snmp4sdn_swdb.csv (No such file or directory)\r
-       at java.io.FileInputStream.open0(Native Method)[:1.8.0_65]\r
-       at java.io.FileInputStream.open(FileInputStream.java:195)[:1.8.0_65]\r
-       at java.io.FileInputStream.<init>(FileInputStream.java:138)[:1.8.0_65]\r
-       at java.io.FileInputStream.<init>(FileInputStream.java:93)[:1.8.0_65]\r
-       at java.io.FileReader.<init>(FileReader.java:58)[:1.8.0_65]\r
-       at org.opendaylight.snmp4sdn.internal.util.CmethUtil.readDB(CmethUtil.java:66)\r
-       at org.opendaylight.snmp4sdn.internal.util.CmethUtil.<init>(CmethUtil.java:43)\r
-...\r
-----\r
-\r
-=== Configuration\r
-Just follow the steps:\r
-\r
-==== 1. Prepare the switch list database file\r
-A sample is https://wiki.opendaylight.org/view/SNMP4SDN:switch_list_file[here], and we suggest saving it as '/etc/snmp4sdn_swdb.csv' so that the SNMP4SDN Plugin can automatically load this file.
-\r
-NOTE:\r
-The first line is the title and should not be removed.
-\r
-==== 2. Prepare the vendor-specific configuration file\r
-A sample is https://wiki.opendaylight.org/view/SNMP4SDN:snmp4sdn_VendorSpecificSwitchConfig_file[here]; we suggest saving it as '/etc/snmp4sdn_VendorSpecificSwitchConfig.xml' so that the SNMP4SDN Plugin can load it automatically.
-\r
-==== 3. Install SNMP4SDN Plugin\r
-If using the SNMP4SDN Plugin provided in the OpenDaylight release, do the following:
-
-Launch Karaf in the Linux console:
-----\r
-cd <Beryllium_controller_directory>/bin\r
-(for example, cd distribution-karaf-x.x.x-Beryllium/bin)\r
-----\r
-----\r
-./karaf\r
-----\r
-Then in Karaf console, execute:\r
-----\r
-feature:install odl-snmp4sdn-all\r
-----\r
-\r
-==== 4. Load switch list\r
-For initialization, the SNMP4SDN Plugin needs the switch list. The plugin automatically tries to load it from /etc/snmp4sdn_swdb.csv; if the file is already there, this step can be skipped.
-In Karaf console, execute:\r
-----\r
-snmp4sdn:ReadDB <switch_list_path>\r
-(For example, snmp4sdn:ReadDB /etc/snmp4sdn_swdb.csv)\r
-(On Windows, for example, snmp4sdn:ReadDB D://snmp4sdn_swdb.csv)
-----\r
-A sample is https://wiki.opendaylight.org/view/SNMP4SDN:switch_list_file[here]; we suggest saving it as '/etc/snmp4sdn_swdb.csv' so that the SNMP4SDN Plugin can load it automatically.
-\r
-NOTE:
-The first line is the title row and should not be removed.
-\r
-==== 5. Show switch list\r
-----\r
-snmp4sdn:PrintDB\r
-----\r
-\r
-=== Tutorial\r
-==== Topology Service\r
-===== Execute topology discovery\r
-\r
-The SNMP4SDN Plugin automatically executes topology discovery on startup. You may use the following commands to invoke topology discovery manually. Note that it may take a few seconds to complete.
-\r
-NOTE:\r
-Currently, you need to manually execute 'snmp4sdn:TopoDiscover' once before the automatic topology discovery can succeed. If the switches change (a switch is added or removed), 'snmp4sdn:TopoDiscover' must be run again. A future version will eliminate these requirements.
-----\r
-snmp4sdn:TopoDiscover\r
-----\r
-\r
-To discover only the inventory (i.e. switches and their ports) but not the edges, execute "TopoDiscoverSwitches":
-----\r
-snmp4sdn:TopoDiscoverSwitches\r
-----\r
-\r
-To discover only the edges but not the inventory, execute "TopoDiscoverEdges":
-----\r
-snmp4sdn:TopoDiscoverEdges\r
-----\r
-\r
-You can also trigger topology discovery via the REST API by using +curl+ from the Linux console (or any other REST client):\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:rediscover\r
-----\r
-\r
-You can change the periodic topology discovery interval via a REST API:\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:set-discovery-interval -d '{"input":{"interval-second":<interval_time>}}'
-For example, set the interval to 15 seconds:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:set-discovery-interval -d '{"input":{"interval-second":15}}'
-----\r
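
The RPC body above can also be built programmatically. The following is a minimal illustrative Python sketch (the function name is ours, not part of the plugin; the URL and admin/admin credentials are the defaults assumed throughout this guide):

```python
import json

RPC_URL = "http://localhost:8181/restconf/operations/topology:set-discovery-interval"

def discovery_interval_body(seconds):
    """Build the JSON body for the topology:set-discovery-interval RPC."""
    return json.dumps({"input": {"interval-second": seconds}})

# POST discovery_interval_body(15) to RPC_URL with
# Content-Type: application/json and the admin/admin credentials,
# e.g. requests.post(RPC_URL, data=discovery_interval_body(15),
#                    auth=("admin", "admin"),
#                    headers={"Content-Type": "application/json"})
```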
-\r
-===== Show the topology\r
-\r
-The SNMP4SDN Plugin can show the topology via the REST API:
-\r
-* Get topology\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-edge-list\r
-----\r
-+\r
-* Get switch list\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-node-list\r
-----\r
-+\r
-* Get switches' ports list\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/topology:get-node-connector-list\r
-----\r
-+\r
-* The three commands above only retrieve the latest topology discovery result; they do not trigger the SNMP4SDN Plugin to perform topology discovery.
-* To trigger the SNMP4SDN Plugin to perform topology discovery, use the commands described in 'Execute topology discovery' above.
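
The three read-only topology RPCs above share the same POST pattern, differing only in the operation name. A small hypothetical Python helper (the names are illustrative, not part of the plugin):

```python
def rpc_url(operation, host="localhost", port=8181):
    """Build the RESTCONF operations URL for a <module>:<rpc> name."""
    return "http://%s:%d/restconf/operations/%s" % (host, port, operation)

# The three read-only topology RPCs from this section:
EDGE_LIST = rpc_url("topology:get-edge-list")
NODE_LIST = rpc_url("topology:get-node-list")
PORT_LIST = rpc_url("topology:get-node-connector-list")

# Each is invoked with an authenticated POST, e.g.:
# requests.post(EDGE_LIST, auth=("admin", "admin"),
#               headers={"Accept": "application/json"})
```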
-\r
-==== Flow configuration\r
-\r
-===== FDB configuration\r
-\r
-SNMP4SDN supports configuring the FDB table via the REST API:
-\r
-* Get FDB table\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:get-fdb-table -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:get-fdb-table -d '{"input":{"node-id":158969157063648}}'
-----\r
-+\r
-* Get FDB table entry\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:get-fdb-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:get-fdb-entry -d '{"input":{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":158969157063648}}'
-----\r
-+\r
-* Set FDB table entry\r
-+\r
-(Note: invalid values include (1) a non-unicast MAC address and (2) a port that is not in the VLAN)
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:set-fdb-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>, "port":<port-in-number>, "type":"<type>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:set-fdb-entry -d '{"input":{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":187649984473770, "port":23, "type":"MGMT"}}'
-----\r
-+\r
-* Delete FDB table entry\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/fdb:del-fdb-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "dest-mac-addr":<destination-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/fdb:del-fdb-entry -d '{"input":{"node-id":158969157063648, "vlan-id":1, "dest-mac-addr":187649984473770}}'
-----\r
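
All of the FDB RPCs identify a switch by its MAC address expressed as a decimal number; for instance, 158969157063648 is the MAC address 90:94:E4:23:13:E0. A small illustrative Python helper for the conversion (these functions are ours, not part of the plugin):

```python
def mac_to_number(mac):
    """Convert a colon- or dash-separated MAC address to the decimal
    node-id form used by the SNMP4SDN REST APIs."""
    return int(mac.replace(":", "").replace("-", ""), 16)

def number_to_mac(num):
    """Convert a decimal node-id back to a colon-separated MAC address."""
    hex12 = "%012X" % num  # zero-padded 12 hex digits
    return ":".join(hex12[i:i + 2] for i in range(0, 12, 2))
```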
-\r
-===== VLAN configuration\r
-\r
-SNMP4SDN supports configuring the VLAN table via the REST API:
-\r
-* Get VLAN table\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:get-vlan-table -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:get-vlan-table -d '{"input":{"node-id":158969157063648}}'
-----\r
-+\r
-* Add VLAN\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:add-vlan -d '{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "vlan-name":"<vlan-name>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:add-vlan -d '{"input":{"node-id":158969157063648, "vlan-id":123, "vlan-name":"v123"}}'
-----\r
-+\r
-* Delete VLAN\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:delete-vlan -d '{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:delete-vlan -d '{"input":{"node-id":158969157063648, "vlan-id":123}}'
-----\r
-+\r
-* Add VLAN and set ports\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:add-vlan-and-set-ports -d '{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "vlan-name":"<vlan-name>", "tagged-port-list":"<tagged-ports-separated-by-comma>", "untagged-port-list":"<untagged-ports-separated-by-comma>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:add-vlan-and-set-ports -d '{"input":{"node-id":158969157063648, "vlan-id":123, "vlan-name":"v123", "tagged-port-list":"1,2,3", "untagged-port-list":"4,5,6"}}'
-----\r
-+\r
-* Set VLAN ports\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/vlan:set-vlan-ports -d '{"input":{"node-id":<switch-mac-address-in-number>, "vlan-id":<vlan-id-in-number>, "tagged-port-list":"<tagged-ports-separated-by-comma>", "untagged-port-list":"<untagged-ports-separated-by-comma>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vlan:set-vlan-ports -d '{"input":{"node-id":158969157063648, "vlan-id":123, "tagged-port-list":"4,5", "untagged-port-list":"2,3"}}'
-----\r
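
Note that the port lists in the VLAN RPCs are comma-separated strings, not JSON arrays. A hedged Python sketch that builds the add-vlan-and-set-ports body (the function name is ours, not part of the plugin):

```python
import json

def add_vlan_and_set_ports_body(node_id, vlan_id, vlan_name,
                                tagged_ports, untagged_ports):
    """Build the JSON body for vlan:add-vlan-and-set-ports; the port
    lists are encoded as comma-separated strings, matching the curl
    examples in this section."""
    return json.dumps({"input": {
        "node-id": node_id,
        "vlan-id": vlan_id,
        "vlan-name": vlan_name,
        "tagged-port-list": ",".join(str(p) for p in tagged_ports),
        "untagged-port-list": ",".join(str(p) for p in untagged_ports),
    }})
```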
-\r
-===== ACL configuration\r
-\r
-SNMP4SDN supports adding flows to the ACL table via the REST API. However, this is so far only implemented for the D-Link DGS-3120 switch.
-\r
-ACL configuration via CLI is vendor-specific; SNMP4SDN will support configuration with vendor-specific CLI in a future release.
-\r
-To do ACL configuration using the REST APIs, use commands like the following:\r
-\r
-* Clear ACL table\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:clear-acl-table -d '{"input":{"nodeId":<switch-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:clear-acl-table -d '{"input":{"nodeId":158969157063648}}'
-----\r
-+\r
-* Create ACL profile (IP layer)\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:create-acl-profile -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":"<profile_name>","acl-layer":"IP","vlan-mask":<vlan_mask_in_number>,"src-ip-mask":"<src_ip_mask>","dst-ip-mask":"<destination_ip_mask>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:create-acl-profile -d '{"input":{"nodeId":158969157063648,"profile-id":1,"profile-name":"profile_1","acl-layer":"IP","vlan-mask":1,"src-ip-mask":"255.255.0.0","dst-ip-mask":"255.255.255.255"}}'
-----\r
-+\r
-* Create ACL profile (MAC layer)\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:create-acl-profile -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":"<profile_name>","acl-layer":"ETHERNET","vlan-mask":<vlan_mask_in_number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:create-acl-profile -d '{"input":{"nodeId":158969157063648,"profile-id":2,"profile-name":"profile_2","acl-layer":"ETHERNET","vlan-mask":4095}}'
-----\r
-+\r
-* Delete ACL profile\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":158969157063648,"profile-id":1}}'
-----\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-name":"<profile_name>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-profile -d '{"input":{"nodeId":158969157063648,"profile-name":"profile_2"}}'
-----\r
-+\r
-* Set ACL rule\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:set-acl-rule -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":"<profile_name>","rule-id":<rule_id_in_number>,"port-list":[<port_number>,<port_number>,...],"acl-layer":"<acl_layer>","vlan-id":<vlan_id_in_number>,"src-ip":"<src_ip_address>","dst-ip":"<dst_ip_address>","acl-action":"<acl_action>"}}'
-(<acl_layer>: IP or ETHERNET)
-(<acl_action>: PERMIT to permit, DENY to deny)
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:set-acl-rule -d '{"input":{"nodeId":158969157063648,"profile-id":1,"profile-name":"profile_1","rule-id":1,"port-list":[1,2,3],"acl-layer":"IP","vlan-id":2,"src-ip":"1.1.1.1","dst-ip":"2.2.2.2","acl-action":"PERMIT"}}'
-----\r
-+\r
-* Delete ACL rule\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/acl:del-acl-rule -d '{"input":{"nodeId":<switch-mac-address-in-number>,"profile-id":<profile_id_in_number>,"profile-name":"<profile_name>","rule-id":<rule_id_in_number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/acl:del-acl-rule -d '{"input":{"nodeId":158969157063648,"profile-id":1,"profile-name":"profile_1","rule-id":1}}'
-----\r
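
The set-acl-rule body is the most complex one in this guide; a hedged Python sketch that builds and validates it (the function name and the validation checks are ours, not part of the plugin; the allowed layer/action values are the ones listed above):

```python
import json

def set_acl_rule_body(node_id, profile_id, profile_name, rule_id,
                      port_list, acl_layer, vlan_id, src_ip, dst_ip,
                      acl_action):
    """Build the JSON body for acl:set-acl-rule.
    acl_layer is 'IP' or 'ETHERNET'; acl_action is 'PERMIT' or 'DENY'."""
    if acl_layer not in ("IP", "ETHERNET"):
        raise ValueError("acl-layer must be IP or ETHERNET")
    if acl_action not in ("PERMIT", "DENY"):
        raise ValueError("acl-action must be PERMIT or DENY")
    return json.dumps({"input": {
        "nodeId": node_id,
        "profile-id": profile_id,
        "profile-name": profile_name,
        "rule-id": rule_id,
        "port-list": list(port_list),  # a JSON array, unlike the VLAN RPCs
        "acl-layer": acl_layer,
        "vlan-id": vlan_id,
        "src-ip": src_ip,
        "dst-ip": dst_ip,
        "acl-action": acl_action,
    }})
```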
-\r
-==== Special configuration\r
-\r
-SNMP4SDN supports the following special configurations via the REST API:
-\r
-* Set STP port state\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:set-stp-port-state -d '{"input":{"node-id":<switch-mac-address-in-number>, "port":<port_number>, "enable":<true_or_false>}}'
-(true: enable, false: disable)
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:set-stp-port-state -d '{"input":{"node-id":158969157063648, "port":2, "enable":false}}'
-----\r
-+\r
-* Get STP port state\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-stp-port-state -d '{"input":{"node-id":<switch-mac-address-in-number>, "port":<port_number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-stp-port-state -d '{"input":{"node-id":158969157063648, "port":2}}'
-----\r
-+\r
-* Get STP port root\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-stp-port-root -d '{"input":{"node-id":<switch-mac-address-in-number>, "port":<port_number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-stp-port-root -d '{"input":{"node-id":158969157063648, "port":2}}'
-----\r
-+\r
-* Enable STP\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:enable-stp -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:enable-stp -d '{"input":{"node-id":158969157063648}}'
-----\r
-+\r
-* Disable STP\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:disable-stp -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:disable-stp -d '{"input":{"node-id":158969157063648}}'
-----\r
-+\r
-* Get ARP table\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-arp-table -d '{"input":{"node-id":<switch-mac-address-in-number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-arp-table -d '{"input":{"node-id":158969157063648}}'
-----\r
-+\r
-* Set ARP entry\r
-+\r
-(Note: the IP address must be given with a subnet prefix)
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:set-arp-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "ip-address":"<ip_address>", "mac-address":<mac_address_in_number>}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:set-arp-entry -d '{"input":{"node-id":158969157063648, "ip-address":"10.217.9.9", "mac-address":1}}'
-----\r
-+\r
-* Get ARP entry\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:get-arp-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "ip-address":"<ip_address>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:get-arp-entry -d '{"input":{"node-id":158969157063648, "ip-address":"10.217.9.9"}}'
-----\r
-+\r
-* Delete ARP entry\r
-+\r
-----\r
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://<controller_ip_address>:8181/restconf/operations/config:delete-arp-entry -d '{"input":{"node-id":<switch-mac-address-in-number>, "ip-address":"<ip_address>"}}'
-
-For example:
-curl --user "admin":"admin" -H "Accept: application/json" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/config:delete-arp-entry -d '{"input":{"node-id":158969157063648, "ip-address":"10.217.9.9"}}'
-----\r
-\r
-==== Using Postman to invoke REST API\r
-Besides using the curl tool to invoke the REST API, as in the examples above, you can also use a GUI tool such as Postman for better data display.
-\r
-* Install Postman: \r
-https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en[Install Postman in the Chrome browser]\r
-+\r
-* In the Chrome browser address bar, enter
-+\r
-----\r
-chrome://apps/\r
-----\r
-+\r
-* Click on Postman.\r
-\r
-===== Example: Get VLAN table using Postman\r
-\r
-As shown in the screenshot below, fill in the required fields.
-----\r
-URL:\r
-http://<controller_ip_address>:8181/restconf/operations/vlan:get-vlan-table\r
-\r
-Accept header:\r
-application/json\r
-\r
-Content-type:\r
-application/json\r
-\r
-Body:\r
-{"input":{"node-id":<node_id>}}
-for example:
-{"input":{"node-id":158969157063648}}
-----\r
-\r
-.Example: Get VLAN table using Postman\r
-image::snmp4sdn_getvlantable_postman.jpg["Example: Get VLAN table using Postman",width=500]\r
-\r
-=== Multi-vendor support\r
-\r
-The vendor-specific configurations supported so far are:
-\r
-* Add VLAN and set ports\r
-* (More functions are TBD)\r
-\r
-The SNMP4SDN Plugin checks whether a configuration is described in the vendor-specific configuration file. If it is, that description is adopted; otherwise the default configuration is used. For example, adding a VLAN and setting its ports is supported via the standard SNMP MIB. However, there are special cases: a certain Accton switch, for instance, requires the VLAN to be added first before the ports can be set. Such behavior can be described in the vendor-specific configuration file.
-\r
-A vendor-specific configuration file sample is https://wiki.opendaylight.org/view/SNMP4SDN:snmp4sdn_VendorSpecificSwitchConfig_file[here]; we suggest saving it as '/etc/snmp4sdn_VendorSpecificSwitchConfig.xml' so that the SNMP4SDN Plugin can load it automatically.
-\r
-=== Help\r
-* https://wiki.opendaylight.org/view/SNMP4SDN:Main[SNMP4SDN Wiki]\r
-* SNMP4SDN Mailing Lists: (https://lists.opendaylight.org/mailman/listinfo/snmp4sdn-users[user], https://lists.opendaylight.org/mailman/listinfo/snmp4sdn-dev[developer])\r
-* Latest https://wiki.opendaylight.org/view/SNMP4SDN:User_Guide#Troubleshooting[troubleshooting] in Wiki\r
 \r
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/snmp4sdn-user-guide.html\r
index 81ecca36f22434ae239a0eb072d0e9ac74a65e38..2f75bf9c57556204dda4797fc91976c9a13ca126 100644 (file)
@@ -1,334 +1,3 @@
 == SXP User Guide
 
-=== Overview
-The SXP (Source-Group Tag eXchange Protocol) project is an effort to enhance the OpenDaylight platform with IP-SGT (IP Address to Source Group Tag) bindings that can be learned from connected SXP-aware network nodes. The current implementation supports SXP protocol version 4 according to the Smith, Kandula SXP https://tools.ietf.org/html/draft-smith-kandula-sxp-04[IETF draft], as well as grouping of peers and creation of filters based on ACL/Prefix-list syntax for filtering outbound and inbound IP-SGT bindings. All legacy protocol versions 1-3 are supported as well. Additionally, version 4 adds a bidirectional connection type as an extension of the unidirectional one.
-
-=== SXP Architecture
-The SXP Server manages all connected clients in separate threads, and a common SXP protocol agreement is used between connected peers. Each SXP network peer is modelled with its pertaining class, e.g., SXP Server represents the SXP Speaker, SXP Listener the Client. The server program creates the ServerSocket object on a specified port and waits until a client starts up and requests a connection on the IP address and port of the server. The client program opens a Socket that is connected to the server running on the specified host IP address and port.
-
-The SXP Listener maintains a connection with its speaker peer. From an opened channel pipeline, all incoming SXP messages are processed by various handlers. Each message must be decoded, parsed and validated.
-
-The SXP Speaker is a counterpart to the SXP Listener. It maintains a connection with its listener peer and sends composed messages.
-
-The SXP Binding Handler extracts the IP-SGT binding from a message and pulls it into the SXP-Database. If an error is detected during the IP-SGT extraction, an appropriate error code and sub-code is selected and an error message is sent back to the connected peer. All transitive messages are routed directly to the output queue of SXP Binding Dispatcher.
-
-The SXP Binding Dispatcher is a selector that decides how much data from the SXP database will be sent and when. It is responsible for message content composition based on the maximum message length.
-
-The SXP Binding Filters handle filtering of outgoing and incoming IP-SGT bindings according to BGP-style filtering, using ACL and Prefix List syntax for specifying filters, or based on Peer-sequence length.
-
-The SXP Domains feature provides isolation of SXP peers and the bindings learned between them; exchange of bindings across SXP domains is also possible via ACL, Prefix List or Peer-Sequence filters.
-
-=== Configuring SXP
-The OpenDaylight Karaf distribution comes pre-configured with baseline SXP configuration.
-Configuration of SXP Nodes is also possible via NETCONF.
-
-- *22-sxp-controller-one-node.xml* (defines the basic parameters)
-
-=== Administering or Managing SXP
-Via RPC (the response is an XML document containing the requested data or the operation status):
-
-* Get Connections
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:get-connections
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <domain-name>global</domain-name>
- <requested-node>0.0.0.100</requested-node>
-</input>
-----
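
These RPCs take an XML body, so the POST must carry an XML content type. A hedged Python sketch that only builds the request for get-connections (the function name is ours; the endpoint, body, and admin/admin credentials are taken from the example above):

```python
GET_CONNECTIONS_URL = ("http://127.0.0.1:8181/restconf/operations/"
                       "sxp-controller:get-connections")

def get_connections_body(domain, node):
    """Build the XML input for the sxp-controller:get-connections RPC."""
    return (
        '<input xmlns:xsi="urn:opendaylight:sxp:controller">'
        "<domain-name>%s</domain-name>"
        "<requested-node>%s</requested-node>"
        "</input>" % (domain, node)
    )

# POST with Content-Type: application/xml, e.g.
# requests.post(GET_CONNECTIONS_URL,
#               data=get_connections_body("global", "0.0.0.100"),
#               headers={"Content-Type": "application/xml"},
#               auth=("admin", "admin"))
```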
-* Add Connection
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:add-connection
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <requested-node>0.0.0.100</requested-node>
- <domain-name>global</domain-name>
- <connections>
-  <connection>
-   <peer-address>172.20.161.50</peer-address>
-   <tcp-port>64999</tcp-port>
-   <!-- Password setup: default | none leave empty -->
-   <password>default</password>
-   <!-- Mode: speaker/listener/both -->
-   <mode>speaker</mode>
-   <version>version4</version>
-   <description>Connection to ASR1K</description>
-   <!-- Timers setup: 0 to disable specific timer usability, the default value will be used -->
-   <connection-timers>
-    <!-- Speaker -->
-    <hold-time-min-acceptable>45</hold-time-min-acceptable>
-    <keep-alive-time>30</keep-alive-time>
-   </connection-timers>
-  </connection>
-  <connection>
-   <peer-address>172.20.161.178</peer-address>
-   <tcp-port>64999</tcp-port>
-   <!-- Password setup: default | none leave empty-->
-   <password>default</password>
-   <!-- Mode: speaker/listener/both -->
-   <mode>listener</mode>
-   <version>version4</version>
-   <description>Connection to ISR</description>
-   <!-- Timers setup: 0 to disable specific timer usability, the default value will be used -->
-   <connection-timers>
-    <!-- Listener -->
-    <reconciliation-time>120</reconciliation-time>
-    <hold-time>90</hold-time>
-    <hold-time-min>90</hold-time-min>
-    <hold-time-max>180</hold-time-max>
-   </connection-timers>
-  </connection>
- </connections>
-</input>
-----
-
-* Delete Connection
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-connection
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <requested-node>0.0.0.100</requested-node>
- <domain-name>global</domain-name>
- <peer-address>172.20.161.50</peer-address>
-</input>
-----
-* Add Binding Entry
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:add-entry
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <requested-node>0.0.0.100</requested-node>
- <domain-name>global</domain-name>
- <ip-prefix>192.168.2.1/32</ip-prefix>
- <sgt>20</sgt>
-</input>
-----
-* Update Binding Entry
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:update-entry
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <requested-node>0.0.0.100</requested-node>
- <domain-name>global</domain-name>
- <original-binding>
-  <ip-prefix>192.168.2.1/32</ip-prefix>
-  <sgt>20</sgt>
- </original-binding>
- <new-binding>
-  <ip-prefix>192.168.3.1/32</ip-prefix>
-  <sgt>30</sgt>
- </new-binding>
-</input>
-----
-* Delete Binding Entry
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-entry
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <requested-node>0.0.0.100</requested-node>
- <domain-name>global</domain-name>
- <ip-prefix>192.168.3.1/32</ip-prefix>
- <sgt>30</sgt>
-</input>
-----
-* Get Node Bindings
-+
-This RPC gets the bindings of a particular device. An SXP-aware node is identified by a unique Node-ID. If a user requests bindings
-for Speaker 20.0.0.2, the RPC searches the locally learned SXP data in the SXP database for a path containing the 20.0.0.2 Node-ID
-and replies with the associated bindings.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:get-node-bindings
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <requested-node>20.0.0.2</requested-node>
- <bindings-range>all</bindings-range>
- <domain-name>global</domain-name>
-</input>
-----
-* Get Binding SGTs
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:get-binding-sgts
-[source,xml]
-----
-<input xmlns:xsi="urn:opendaylight:sxp:controller">
- <requested-node>0.0.0.100</requested-node>
- <domain-name>global</domain-name>
- <ip-prefix>192.168.12.2/32</ip-prefix>
-</input>
-----
-* Add PeerGroup with or without filters to node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:add-peer-group
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <requested-node>127.0.0.1</requested-node>
- <sxp-peer-group>
-  <name>TEST</name>
-  <sxp-peers>
-  </sxp-peers>
-  <sxp-filter>
-   <filter-type>outbound</filter-type>
-   <acl-entry>
-    <entry-type>deny</entry-type>
-    <entry-seq>1</entry-seq>
-    <sgt-start>1</sgt-start>
-    <sgt-end>100</sgt-end>
-   </acl-entry>
-   <acl-entry>
-    <entry-type>permit</entry-type>
-    <entry-seq>45</entry-seq>
-    <matches>1</matches>
-    <matches>3</matches>
-    <matches>5</matches>
-   </acl-entry>
-  </sxp-filter>
- </sxp-peer-group>
-</input>
-----
-* Delete PeerGroup with peer-group-name from node request-node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-peer-group
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <requested-node>127.0.0.1</requested-node>
- <peer-group-name>TEST</peer-group-name>
-</input>
-----
-* Get PeerGroup with peer-group-name from node request-node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:get-peer-group
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <requested-node>127.0.0.1</requested-node>
- <peer-group-name>TEST</peer-group-name>
-</input>
-----
-* Add Filter to peer group on node request-node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:add-filter
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <requested-node>127.0.0.1</requested-node>
- <peer-group-name>TEST</peer-group-name>
- <sxp-filter>
-  <filter-type>outbound</filter-type>
-  <acl-entry>
-   <entry-type>deny</entry-type>
-   <entry-seq>1</entry-seq>
-   <sgt-start>1</sgt-start>
-   <sgt-end>100</sgt-end>
-  </acl-entry>
-  <acl-entry>
-   <entry-type>permit</entry-type>
-   <entry-seq>45</entry-seq>
-   <matches>1</matches>
-   <matches>3</matches>
-   <matches>5</matches>
-  </acl-entry>
- </sxp-filter>
-</input>
-----
-* Delete Filter from peer group on node request-node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-filter
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <requested-node>127.0.0.1</requested-node>
- <peer-group-name>TEST</peer-group-name>
- <filter-type>outbound</filter-type>
-</input>
-----
-* Update Filter of the same type in peer group on node request-node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:update-filter
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <requested-node>127.0.0.1</requested-node>
- <peer-group-name>TEST</peer-group-name>
- <sxp-filter>
-  <filter-type>outbound</filter-type>
-  <acl-entry>
-   <entry-type>deny</entry-type>
-   <entry-seq>1</entry-seq>
-   <sgt-start>1</sgt-start>
-   <sgt-end>100</sgt-end>
-  </acl-entry>
-  <acl-entry>
-   <entry-type>permit</entry-type>
-   <entry-seq>45</entry-seq>
-   <matches>1</matches>
-   <matches>3</matches>
-   <matches>5</matches>
-  </acl-entry>
- </sxp-filter>
-</input>
-----
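The two `<acl-entry>` elements above form an ordered rule list. As an illustration only (the exact SXP evaluation semantics are not specified in this guide, so the entry-seq ordering, first-match-wins behavior, and default action below are assumptions), such a filter might be evaluated like this:

```python
# Hypothetical sketch of evaluating SXP ACL filter entries against an SGT.
# Assumptions (not taken from this guide): entries are evaluated in
# ascending entry-seq order, the first matching entry wins, and a
# configurable default applies when nothing matches.

def evaluate_filter(entries, sgt, default="permit"):
    """entries: list of dicts mirroring the <acl-entry> XML above."""
    for entry in sorted(entries, key=lambda e: e["entry-seq"]):
        if "sgt-start" in entry:  # range-style entry
            matched = entry["sgt-start"] <= sgt <= entry["sgt-end"]
        else:                     # matches-style entry
            matched = sgt in entry["matches"]
        if matched:
            return entry["entry-type"]
    return default

# The two entries from the update-filter payload above:
acl = [
    {"entry-type": "deny", "entry-seq": 1, "sgt-start": 1, "sgt-end": 100},
    {"entry-type": "permit", "entry-seq": 45, "matches": {1, 3, 5}},
]

print(evaluate_filter(acl, 50))   # falls in range 1-100 -> deny
print(evaluate_filter(acl, 150))  # no entry matches -> default permit
```

Note that under these assumed semantics SGT 3 is denied: the deny range at entry-seq 1 is evaluated before the permit entry at entry-seq 45.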
-* Add new SXP aware Node
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:add-node
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
-    <node-id>1.1.1.1</node-id>
-    <source-ip>0.0.0.0</source-ip>
-    <timers>
-        <retry-open-time>5</retry-open-time>
-        <hold-time-min-acceptable>120</hold-time-min-acceptable>
-        <delete-hold-down-time>120</delete-hold-down-time>
-        <hold-time-min>90</hold-time-min>
-        <reconciliation-time>120</reconciliation-time>
-        <hold-time>90</hold-time>
-        <hold-time-max>180</hold-time-max>
-        <keep-alive-time>30</keep-alive-time>
-    </timers>
-    <mapping-expanded>150</mapping-expanded>
-    <security>
-        <password>password</password>
-    </security>
-    <tcp-port>64999</tcp-port>
-    <version>version4</version>
-    <description>ODL SXP Controller</description>
-    <master-database></master-database>
-</input>
-----
-* Delete SXP aware node
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-node
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <node-id>1.1.1.1</node-id>
-</input>
-----
-* Add SXP Domain on node request-node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:add-domain
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
-  <node-id>1.1.1.1</node-id>
-  <domain-name>global</domain-name>
-</input>
-----
-* Delete SXP Domain on node request-node.
-POST http://127.0.0.1:8181/restconf/operations/sxp-controller:delete-domain
-[source,xml]
-----
-<input xmlns="urn:opendaylight:sxp:controller">
- <node-id>1.1.1.1</node-id>
- <domain-name>global</domain-name>
-</input>
-----
-
-==== Use cases for SXP
-Cisco has a wide installed base of network devices supporting SXP. By including SXP in OpenDaylight, the binding of policy groups to IP addresses can be made available for possible further processing to a wide range of devices, and applications running on OpenDaylight. The range of applications that would be enabled is extensive. Here are just a few of them:
-
-OpenDaylight-based applications can take advantage of the IP-SGT binding information. For example, access control can be defined by an operator in terms of policy groups, while OpenDaylight configures access control lists on network elements using IP addresses, i.e., existing technology.
-
-SXP also enables interoperability between vendors. Vendors have different policy systems, and knowing the IP-SGT binding for Cisco equipment makes it possible to maintain policy groups across Cisco and other vendors.
-
-OpenDaylight can aggregate the binding information from many devices and communicate it to a network element. For example, a firewall can use the IP-SGT binding information to know how to handle IPs based on the group-based ACLs it has set. But to do this with SXP alone, the firewall has to maintain a large number of network connections to get the binding information, incurring heavy overhead to maintain all of the SXP peering and protocol state. OpenDaylight can aggregate the IP-group information so that the firewall need only connect to OpenDaylight. By moving the information flow outside of the network elements to a centralized position, we reduce the CPU consumption on the enforcement element. This is a significant savings: the enforcement point only has to make one connection rather than thousands, so it can concentrate on its primary job of forwarding and enforcing.
-
-OpenDaylight can relay the binding information from one network element to others. Changes in group membership can be propagated more readily through a centralized model. For example, in a security application a particular host (e.g., user or IP Address) may be found to be acting suspiciously or violating established security policies. The defined response is to put the host into a different source group for remediation actions such as a lower quality of service, restricted access to critical servers, or special routing conditions to ensure deeper security enforcement (e.g., redirecting the host’s traffic through an IPS with very restrictive policies). Updated group membership for this host needs to be communicated to multiple network elements as soon as possible; a very efficient and effective method of propagation can be performed using OpenDaylight as a centralized point for relaying the information.
-
-OpenDaylight can create filters for exporting and receiving IP-SGT bindings on specific peer groups, enabling more sophisticated maintenance of policy groups.
-
-Although the IP-SGT binding is only one specific piece of information, and although SXP is implemented widely in a single vendor’s equipment, bringing the ability of OpenDaylight to process and distribute the bindings is an immediately useful application of policy groups. It would go a long way toward developing the usefulness of both OpenDaylight and policy groups.
-
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/sxp-user-guide.html
index 1914b0bc60883d2a932d1af0f904fa65c1c25d3e..bcb535490a112bd89b0367b0f1282027ecaec4af 100644 (file)
@@ -1,394 +1,3 @@
 == TSDR User Guide
-This document describes how to use HSQLDB, HBase, and Cassandra data stores to
-capture time series data using Time Series Data Repository (TSDR) features
-in OpenDaylight. This document contains configuration, administration, management,
-usage, and troubleshooting sections for the features.
 
-=== Overview
-The Time Series Data Repository (TSDR) project in OpenDaylight (ODL) creates a
-framework for collecting, storing, querying, and maintaining time series data.
-TSDR provides the framework for plugging in proper data collectors to collect
-various time series data and store the data into
-TSDR Data Stores. With a common data model and generic TSDR data persistence
-APIs, the user can choose various data stores to be plugged into the TSDR
-persistence framework. Currently, three types of data stores are supported:
-HSQLDB relational database, HBase NoSQL database, and Cassandra NoSQL database.
-
-With the capabilities of data collection, storage, query, aggregation, and
-purging provided by TSDR, network administrators can leverage various data
-driven applications built on top of TSDR for security risk detection,
-performance analysis, operational configuration optimization, traffic
-engineering, and network analytics with automated intelligence.
-
-
-=== TSDR Architecture
-TSDR has the following major components:
-
-* Data Collection Service
-* Data Storage Service
-* TSDR Persistence Layer with data stores as plugins
-* TSDR Data Stores
-* Data Query Service
-* Grafana integration for time series data visualization
-* Data Aggregation Service
-* Data Purging Service
-
-The Data Collection Service handles the collection of time series data into TSDR and
-hands it over to the Data Storage Service. The Data Storage Service stores the data
-into TSDR through the TSDR Persistence Layer. The TSDR Persistence Layer provides generic
-Service APIs allowing various data stores to be plugged in. The Data Aggregation
-Service aggregates fine-grained raw time series data into coarse-grained roll-up
-data to control the size of the data. The Data Purging Service periodically purges
-both fine-grained raw data and coarse-grained aggregated data according to
-user-defined schedules.
-
-We have implemented the Data Collection Service, Data Storage Service, TSDR
-Persistence Layer, TSDR HSQLDB Data Store, TSDR HBase Data Store, and TSDR Cassandra
-Datastore. Among these services and components, time series data is communicated
-using a common TSDR data model, which is designed and implemented for the
-abstraction of time series data commonalities. With these functions, TSDR is
-able to collect the data from the data sources and store them into one of
-the TSDR data stores: HSQLDB Data Store, HBase Data Store or Cassandra Data
-Store. Besides a simple query command from Karaf console to retrieve data from the
-TSDR data stores, we also provided a Data Query Service for the user to use REST API
-to query the data from the data stores. Moreover, the user can use Grafana, which is
-a time series visualization tool to view the data stored in TSDR in various charting
-formats.
-
-=== Configuring TSDR Data Stores
-==== To Configure HSQLDB Data Store
-
-The HSQLDB based storage files get stored automatically in <karaf install folder>/tsdr/
-directory. If you want to change the default storage location, the configuration
-file to change can be found in <karaf install folder>/etc directory. The filename
-is org.ops4j.datasource-metric.cfg. Change the last portion of url=jdbc:hsqldb:./tsdr/metric
-to point to a different directory.
-
-==== To Configure HBase Data Store
-
-After installing HBase Server on the same machine as OpenDaylight, if the user accepts the default configuration of the HBase Data Store, the user can directly proceed with the installation of HBase Data Store from Karaf console.
-
-Optionally, the user can configure TSDR HBase Data Store following HBase Data Store Configuration Procedure.
-
-* HBase Data Store Configuration Steps
-
-** Open the file etc/tsdr-persistence-hbase.properties under the Karaf distribution directory.
-** Edit the following parameters:
-*** HBase server name
-*** HBase server port
-*** HBase client connection pool size
-*** HBase client write buffer size
-
-After the configuration of HBase Data Store is complete, proceed with the installation of HBase Data Store from Karaf console.
-
-* HBase Data Store Installation Steps
-
-** Start Karaf Console
-** Run the following commands from Karaf Console:
-feature:install odl-tsdr-hbase
-
-==== To Configure Cassandra Data Store
-
-Currently, there's no configuration needed for Cassandra Data Store. The user can use Cassandra data store directly after installing the feature from Karaf console.
-
-Additionally, separate commands have been implemented to install various data collectors.
-
-=== Administering or Managing TSDR Data Stores
-==== To Administer HSQLDB Data Store
-
-Once the TSDR default datastore feature (odl-tsdr-hsqldb-all) is enabled, the TSDR captured OpenFlow statistics metrics can be accessed from Karaf Console by executing the command
-
- tsdr:list <metric-category> <starttimestamp> <endtimestamp>
-
-wherein
-
-* <metric-category> = any one of the following categories: FlowGroupStats, FlowMeterStats, FlowStats, FlowTableStats, PortStats, QueueStats
-* <starttimestamp> = filters the list of metrics to those starting at or after this timestamp
-* <endtimestamp>   = filters the list of metrics to those ending at or before this timestamp
-* <starttimestamp> and <endtimestamp> are optional.
-* A maximum of 1000 records will be displayed.
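The filtering and record cap described above can be sketched in plain Python (an illustration of the documented behavior, not the actual tsdr:list implementation):

```python
# Illustration of the tsdr:list filtering rules described above:
# optional start/end timestamps bound the result, and at most 1000
# records are displayed.

MAX_RECORDS = 1000

def list_metrics(records, start=None, end=None):
    """records: iterable of (timestamp, metric) pairs."""
    selected = [
        (ts, m) for ts, m in records
        if (start is None or ts >= start) and (end is None or ts <= end)
    ]
    return selected[:MAX_RECORDS]

# 2000 synthetic PortStats records with timestamps 0..1999:
sample = [(t, "PortStats") for t in range(2000)]
print(len(list_metrics(sample)))                      # capped at 1000
print(len(list_metrics(sample, start=500, end=599)))  # 100 in range
```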
-
-==== To Administer HBase Data Store
-
-* Using Karaf Command to retrieve data from HBase Data Store
-
-The user first needs to install the HBase data store from the Karaf console:
-
-feature:install odl-tsdr-hbase
-
-The user can retrieve the data from HBase data store using the following commands from Karaf console:
-
- tsdr:list
- tsdr:list <CategoryName> <StartTime> <EndTime>
-
-Pressing Tab while typing the command in the Karaf console shows the context prompt for the arguments.
-
-==== To Administer Cassandra Data Store
-
-The user first needs to install Cassandra data store from Karaf console:
-
- feature:install odl-tsdr-cassandra
-
-Then the user can retrieve the data from Cassandra data store using the following commands from Karaf console:
-
- tsdr:list
- tsdr:list <CategoryName> <StartTime> <EndTime>
-
-Pressing Tab while typing the command in the Karaf console shows the context prompt for the arguments.
-
-=== Installing TSDR Data Collectors
-
-When the user uses the HSQLDB data store and installs the "odl-tsdr-hsqldb-all" feature from the Karaf console, the OpenFlow data collector is installed along with the HSQLDB data store. However, if the user needs other collectors, such as the NetFlow Collector, Syslog Collector, SNMP Collector, or Controller Metrics Collector, they must be installed with separate commands. If the user uses the HBase or Cassandra data store, no collectors are installed when the data store is installed; instead, each collector must be installed separately using the feature:install command from the Karaf console.
-
-The following is the list of supported TSDR data collectors with the associated feature install commands:
-
-* OpenFlow Data Collector
-
-  feature:install odl-tsdr-openflow-statistics-collector
-
-* SNMP Data Collector
-
-  feature:install odl-tsdr-snmp-data-collector
-
-* NetFlow Data Collector
-
-  feature:install odl-tsdr-netflow-statistics-collector
-
-* sFlow Data Collector
-
-  feature:install odl-tsdr-sflow-statistics-colletor
-
-* Syslog Data Collector
-
-  feature:install odl-tsdr-syslog-collector
-
-* Controller Metrics Collector
-
-  feature:install odl-tsdr-controller-metrics-collector
-
-In order to use the controller metrics collector, the user needs to install the Sigar library.
-
-The following are the instructions for installing the Sigar library on Ubuntu:
-
-*** Install back end library by "sudo apt-get install libhyperic-sigar-java"
-*** Execute "export LD_LIBRARY_PATH=/usr/lib/jni/:/usr/lib:/usr/local/lib" to set the path of the JNI (you can add this to the ".bashrc" in your home directory)
-*** Download the file "sigar-1.6.4.jar". It might be also in your ".m2" directory under "~/.m2/resources/org/fusesource/sigar/1.6.4"
-*** Create the directory "org/fusesource/sigar/1.6.4" under the "system" directory in your controller home directory and place the "sigar-1.6.4.jar" there
-
-=== Configuring TSDR Data Collectors
-
-* SNMP Data Collector Device Credential Configuration
-
-After installing the SNMP Data Collector, a configuration file is generated under the etc/ directory of the ODL distribution: etc/tsdr.snmp.cfg.
-
-The following is a sample tsdr.snmp.cfg file:
-
-credentials=[192.168.0.2,public],[192.168.0.3,public]
-
-The above credentials indicate that the TSDR SNMP Collector will connect to two devices. The IP address and read community string of these two devices are (192.168.0.2, public) and (192.168.0.3, public), respectively.
-
-The user can make changes to this configuration file any time during runtime. The configuration will be picked up by TSDR in the next cycle of data collection.
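The credentials line follows a simple "[ip,community]" list format. A small helper to parse it might look like this (an illustrative sketch, not part of TSDR itself):

```python
# Parses the credentials line from etc/tsdr.snmp.cfg into
# (ip, community) pairs. The "[ip,community],[ip,community]" format is
# taken from the sample above.
import re

def parse_credentials(line):
    _, _, value = line.partition("=")
    return re.findall(r"\[([^,\]]+),([^\]]+)\]", value)

creds = parse_credentials("credentials=[192.168.0.2,public],[192.168.0.3,public]")
print(creds)  # [('192.168.0.2', 'public'), ('192.168.0.3', 'public')]
```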
-
-==== Polling interval configuration for SNMP Collector and OpenFlow Stats Collector
-
-The default polling intervals of the SNMP Collector and OpenFlow Stats Collector are 30 seconds and 15 seconds, respectively. The user can change the polling interval through RESTCONF APIs at any time. The new polling interval will be picked up by TSDR in the next collection cycle.
-
-* Retrieve Polling Interval API for SNMP Collector
-** URL: http://localhost:8181/restconf/config/tsdr-snmp-data-collector:TSDRSnmpDataCollectorConfig
-** Verb: GET
-
-* Update Polling Interval API for SNMP Collector
-** URL: http://localhost:8181/restconf/operations/tsdr-snmp-data-collector:setPollingInterval
-** Verb: POST
-** Content Type: application/json
-** Input Payload:
-
- {
-    "input": {
-        "interval": "15000"
-    }
- }
-
-* Retrieve Polling Interval API for OpenFlowStats Collector
-** URL: http://localhost:8181/restconf/config/tsdr-openflow-statistics-collector:TSDROSCConfig
-** Verb: GET
-
-* Update Polling Interval API for OpenFlowStats Collector
-** URL: http://localhost:8181/restconf/operations/tsdr-openflow-statistics-collector:setPollingInterval
-** Verb: POST
-** Content Type: application/json
-** Input Payload:
-
- {
-    "input": {
-        "interval": "15000"
-    }
- }
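The update RPCs above can be called with only the Python standard library. The URL and payload shape come from the doc; the admin/admin credentials usually required by ODL are omitted here, and the request is only constructed, not sent:

```python
# Sketch of building the setPollingInterval POST request shown above.
import json
import urllib.request

def build_request(url, interval_ms):
    payload = json.dumps({"input": {"interval": str(interval_ms)}})
    req = urllib.request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req, payload

url = ("http://localhost:8181/restconf/operations/"
       "tsdr-snmp-data-collector:setPollingInterval")
req, payload = build_request(url, 15000)
print(payload)  # {"input": {"interval": "15000"}}
# urllib.request.urlopen(req)  # send against a running controller
```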
-
-=== Querying TSDR from REST APIs
-
-TSDR provides two REST APIs for querying data stored in TSDR data stores.
-
-* Query of TSDR Metrics
-** URL: http://localhost:8181/tsdr/metrics/query
-** Verb: GET
-** Parameters:
-*** tsdrkey=[NID=][DC=][MN=][RK=]
-
- The TSDRKey format indicates the NodeID(NID), DataCategory(DC), MetricName(MN), and RecordKey(RK) of the monitored objects.
- For example, the following is a valid tsdrkey:
- [NID=openflow:1][DC=FLOWSTATS][MN=PacketCount][RK=Node:openflow:1,Table:0,Flow:3]
- The following is also a valid tsdrkey:
- tsdrkey=[NID=][DC=FLOWSTATS][MN=][RK=]
- In the case when sections of the tsdrkey are left empty, the query will return all records in the TSDR data store that match the filled-in sections. In the above example, the query will return all data in the FLOWSTATS data category.
- The query will return only the first 1000 records that match the query criteria.
-
-*** from=<time_in_seconds>
-*** until=<time_in_seconds>
-
-The following is an example curl command for querying metric data from TSDR data store:
-
-curl -G -v -H "Accept: application/json" -H "Content-Type: application/json" "http://localhost:8181/tsdr/metrics/query" --data-urlencode "tsdrkey=[NID=][DC=FLOWSTATS][MN=][RK=]" --data-urlencode "from=0" --data-urlencode "until=240000000000"|more
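The same query URL built by curl's --data-urlencode flags above can be assembled with the standard library (the tsdrkey/from/until parameter names come directly from the doc; localhost:8181 is the default RESTCONF endpoint assumed here):

```python
# Builds the metrics query URL from the curl example above.
from urllib.parse import urlencode

def metrics_query_url(base, tsdrkey, start, until):
    params = urlencode({"tsdrkey": tsdrkey, "from": start, "until": until})
    return f"{base}/tsdr/metrics/query?{params}"

url = metrics_query_url(
    "http://localhost:8181",
    "[NID=][DC=FLOWSTATS][MN=][RK=]",
    0,
    240000000000,
)
print(url)  # brackets and '=' are percent-encoded, as curl does
```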
-
-* Query of TSDR Log type of data
-** URL:http://localhost:8181/tsdr/logs/query
-** Verb: GET
-** Parameters:
-*** tsdrkey=[NID=][DC=][RK=]
-
- The TSDRKey format indicates the NodeID(NID), DataCategory(DC), and RecordKey(RK) of the monitored objects.
- For example, the following is a valid tsdrkey:
- [NID=openflow:1][DC=NETFLOW][RK=]
- The query will return only the first 1000 records that match the query criteria.
-
-*** from=<time_in_seconds>
-*** until=<time_in_seconds>
-
-The following is an example curl command for querying log type of data from TSDR data store:
-
-curl -G -v -H "Accept: application/json" -H "Content-Type: application/json" "http://localhost:8181/tsdr/logs/query" --data-urlencode "tsdrkey=[NID=][DC=NETFLOW][RK=]" --data-urlencode "from=0" --data-urlencode "until=240000000000"|more
-
-=== Grafana integration with TSDR
-
-TSDR provides northbound integration with the Grafana time series data visualization tool. All metric type data stored in the TSDR data store can be visualized using Grafana.
-
-For the detailed instruction about how to install and configure Grafana to work with TSDR, please refer to the following link:
-
-https://wiki.opendaylight.org/view/Grafana_Integration_with_TSDR_Step-by-Step
-
-=== Purging Service configuration
-
-After the data stores are installed from Karaf console, the purging service will be installed as well. A configuration file called tsdr.data.purge.cfg will be generated under etc/ directory of ODL distribution.
-
-The following is the sample default content of the tsdr.data.purge.cfg file:
-
-host=127.0.0.1
-data_purge_enabled=true
-data_purge_time=23:59:59
-data_purge_interval_in_minutes=1440
-retention_time_in_hours=168
-
-The host indicates the IP address of the data store. When the data store runs together with the ODL controller, 127.0.0.1 is the right value for the host IP. The other attributes are self-explanatory. The user can change these attributes at any time, and the configuration change will be picked up immediately by the TSDR Purging Service at runtime.
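The retention arithmetic implied by the settings above is straightforward: with retention_time_in_hours=168 and a daily purge at data_purge_time, records older than 7 days at purge time are eligible for deletion. A sketch (illustrative only; TSDR's purge service does the real work):

```python
# Computes the retention cutoff for a purge run, per the
# retention_time_in_hours setting shown above.
from datetime import datetime, timedelta

def purge_cutoff(purge_run, retention_hours):
    """Records with timestamps before this cutoff are purged."""
    return purge_run - timedelta(hours=retention_hours)

run = datetime(2016, 9, 5, 23, 59, 59)  # data_purge_time on some day
cutoff = purge_cutoff(run, 168)          # 168 hours = 7 days
print(cutoff)  # 2016-08-29 23:59:59
```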
-
-=== How to use TSDR to collect, store, and view OpenFlow Interface Statistics
-
-==== Overview
-This tutorial describes an example of using TSDR to collect, store, and view one type of time series data in an OpenDaylight environment.
-
-
-==== Prerequisites
-You need the following prerequisites:
-
-* One or multiple OpenFlow enabled switches. Alternatively, you can use mininet to simulate such a switch.
-* Successfully installed OpenDaylight Controller.
-* Successfully installed HBase Data Store following TSDR HBase Data Store Installation Guide.
-* Connect the OpenFlow enabled switch(es) to OpenDaylight Controller.
-
-==== Target Environment
-The HBase data store is only supported on Linux operating systems.
-
-==== Instructions
-
-* Start OpenDaylight.
-
-* Connect OpenFlow enabled switch(es) to the controller.
-
-** If using mininet, run the following commands from mininet command line:
-
-*** mn --topo single,3  --controller 'remote,ip=172.17.252.210,port=6653' --switch ovsk,protocols=OpenFlow13
-
-
-* Install tsdr hbase feature from Karaf:
-
-** feature:install odl-tsdr-hbase
-
-* Install OpenFlow Statistics Collector from Karaf:
-
-** feature:install odl-tsdr-openflow-statistics-collector
-
-* Run the following command from the Karaf console:
-
-** tsdr:list PORTSTATS
-
-You should be able to see the interface statistics of the switch(es) from the HBase Data Store. If there are too many rows, you can use "tsdr:list PORTSTATS | more" to view them page by page.
-
-By tabbing after "tsdr:list", you will see all the supported data categories. For example, "tsdr:list FlowStats" will output the Flow statistics data collected from the switch(es).
-
-=== Troubleshooting
-==== Karaf logs
-
-All TSDR features and components write logging information including information messages, warnings, errors and debug messages into karaf.log.
-
-==== HBase and Cassandra logs
-
-For HBase and Cassandra data stores, the database level logs are written into HBase log and Cassandra logs.
-
-** HBase log
-*** HBase log is under <HBase-installation-directory>/logs/.
-
-** Cassandra log
-*** Cassandra log is under {cassandra.logdir}/system.log. The default {cassandra.logdir} is /var/log/cassandra/.
-
-=== Security
-
-TSDR gets the data from a variety of sources, which can be secured in different ways.
-
-** OpenFlow Security
-*** The OpenFlow data can be configured with Transport Layer Security (TLS) since the OpenFlow Plugin that TSDR depends on provides this security support.
-
-** SNMP Security
-*** SNMP version 3 has security support. However, since the ODL SNMP Plugin that TSDR depends on does not support version 3, TSDR does not have security support for SNMP at this moment.
-
-** NetFlow Security
-*** NetFlow cannot be configured with security, so we recommend making sure it flows only over a secured management network.
-
-** Syslog Security
-*** Syslog cannot be configured with security, so we recommend making sure it flows only over a secured management network.
-
-=== Support multiple data stores simultaneously at runtime
-
-TSDR supports running multiple data stores simultaneously at runtime. For example, it is possible to configure TSDR to push log type data into the Cassandra data store while pushing metric type data into HBase.
-
-When you install a TSDR data store from the Karaf console, such as with feature:install odl-tsdr-hsqldb, a properties file is generated under <Karaf-distribution-directory>/etc/. For example, when you install hsqldb, a file called tsdr-persistence-hsqldb.properties is generated in that directory.
-
-By default, all the types of data are supported in the data store. For example, the default content of tsdr-persistence-hsqldb.properties is as follows:
-
- metric-persistency=true
- log-persistency=true
- binary-persistency=true
-
-When the user would like to use different data stores to support different types of data, he/she could enable or disable a particular type of data persistence in the data stores by configuring the properties file accordingly.
-
-For example, if the user would like to store the log type of data in HBase, and store the metric and binary type of data in Cassandra, he/she needs to install both hbase and cassandra data stores from Karaf console. Then the user needs to modify the properties file under <Karaf-distribution-directory>/etc as follows:
-
-* tsdr-persistence-hbase.properties
-
- metric-persistency=false
- log-persistency=true
- binary-persistency=true
-
-
-* tsdr-persistence-cassandra.properties
-
- metric-persistency=true
- log-persistency=false
- binary-persistency=false
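The routing implied by the two property files above can be sketched as follows: parse each tsdr-persistence-*.properties file and pick the store(s) whose flag for a given record type is true. This is a hypothetical helper, not TSDR code:

```python
# Parses TSDR persistence property files and routes a record type to
# the data store(s) that have its persistency flag enabled.

def parse_flags(text):
    """Turns 'key=true/false' property lines into a dict of booleans."""
    flags = {}
    for line in text.splitlines():
        line = line.strip()
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            flags[key.strip()] = value.strip().lower() == "true"
    return flags

# The example configuration from the doc: logs to HBase,
# metrics and binary data to Cassandra.
hbase = parse_flags(
    "metric-persistency=false\nlog-persistency=true\nbinary-persistency=true"
)
cassandra = parse_flags(
    "metric-persistency=true\nlog-persistency=false\nbinary-persistency=false"
)

def stores_for(record_type, stores):
    key = f"{record_type}-persistency"
    return [name for name, flags in stores.items() if flags.get(key)]

stores = {"hbase": hbase, "cassandra": cassandra}
print(stores_for("log", stores))     # ['hbase']
print(stores_for("metric", stores))  # ['cassandra']
```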
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/tsdr-user-guide.html
index bacce966053b0ccb7f449156add0a2f46ffd881f..44d4c6a9af82769fa2148e8ce57b3a721feffc87 100644 (file)
@@ -1,84 +1,3 @@
 == TTP CLI Tools User Guide
 
-=== Overview
-Table Type Patterns are a specification developed by the
-https://www.opennetworking.org/[Open Networking Foundation] to enable
-the description and negotiation of subsets of the OpenFlow protocol.
-This is particularly useful for hardware switches that support OpenFlow
-as it enables the to describe what features they do (and thus also what
-features they do not) support. More details can be found in the full
-specification listed on the
-https://www.opennetworking.org/sdn-resources/onf-specifications/openflow[OpenFlow
-specifications page].
-
-=== TTP CLI Tools Architecture
-The TTP CLI Tools use the TTP Model and the YANG Tools/RESTCONF codecs
-to translate between the Data Transfer Objects (DTOs) and JSON/XML.
-
-// === Configuring <feature>
-//
-// Describe how to configure the feature or the project after installation.
-// Configuration information could include day-one activities for a project
-// such as configuring users, configuring clients/servers and so on.
-//
-// === Administering or Managing <feature>
-// Include related command reference or  operations that you could perform
-// using the feature. For example viewing network statistics, monitoring
-// the network,  generating reports, and so on.
-//
-// NOTE:  Ensure that you create a step procedure whenever required and
-// avoid concepts.
-//
-// For example:
-//
-// .To configure L2switch components perform the following steps.
-// . Step 1:
-// . Step 2:
-// . Step 3:
-
-// === Using the CLI Tools
-// 
-// TODO: provide a few examples of using the CLI tools.
-
-// <optional>
-// If there is only one tutorial, you skip the "Tutorials" section and
-// instead just lead with the single tutorial's name.
-//
-// ==== <Tutorial Name>
-// Ensure that the title starts with a gerund. For example using,
-// monitoring, creating, and so on.
-//
-// ===== Overview
-// An overview of the use case.
-//
-// ===== Prerequisites
-// Provide any prerequisite information, assumed knowledge, or environment
-// required to execute the use case.
-//
-// ===== Target Environment
-// Include any topology requirement for the use case. Ideally, provide
-// visual (abstract) layout of network diagrams and any other useful visual
-// aides.
-//
-// ===== Instructions
-// Use case could be a set of configuration procedures. Including
-// screenshots to help demonstrate what is happening is especially useful.
-// Ensure that you specify them separately. For example:
-//
-// . *Setting up the VM*
-// To set up a VM perform the following steps.
-// .. Step 1
-// .. Step 2
-// .. Step 3
-//
-// . *Installing the feature*
-// To install the feature perform the following steps.
-// .. Step 1
-// .. Step 2
-// .. Step 3
-//
-// . *Configuring the environment*
-// To configure the system perform the following steps.
-// .. Step 1
-// .. Step 2
-// .. Step 3
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/ttp-cli-tools-user-guide.html
index e32ffa3aac31b9191bacf3f2d2741ae6b4f8ba1c..22c0ab8694f6694c3d3ac92d954ae34b73e78869 100644 (file)
@@ -1,78 +1,3 @@
 == UNI Manager Plug In Project
 
-=== Overview
-The version of the UNI Manager (UNIMgr) plug-in included in OpenDaylight
-Beryllium release is experimental, serving as a proof-of-concept (PoC) for using
-features of OpenDaylight to provision networked elements with attributes
-satisfying Metro Ethernet Forum (MEF) requirements for delivery of Carrier
-Ethernet service. This initial version of UNIMgr does not enable the full set
-of MEF-defined functionality for Carrier Ethernet service. UNI Manager adheres
-to a minimum set of functionality defined by MEF 7.2 and 10.2 specifications.
-
-UNIMgr receives a request from applications to create an Ethernet Private Line
-(EPL) private Ethernet connection between two endpoints on the network. The
-request must include the IP addresses of the endpoints and a class of service
-identifier.
-
-UNI Manager plug-in translates the request for EPL service into (a) configuring
-two instances of Open vSwitch (OVS), each instance running in one of the
-UNI endpoints, with two ports and a bridge between the ports, and (b) creating a
-GRE tunnel to provide a private connection between the endpoints. This initial
-version of UNIMgr uses only OVSDB on its southbound interface to send
-configuration commands.
-
-UNIMgr also accepts a bits per second datarate parameter, which is translated
-to an OVSDB command to limit the rate at which the OVS instances will forward
-data traffic.
-
-The YANG module used to create the UNIMgr plug-in models MEF-defined UNI and
-Ethernet Virtual Connection (EVC) attributes but does not include the full set
-of UNI and EVC attributes. And of the attributes modeled in the YANG module
-only a subset of them are implemented in the UNIMgr listener code translating
-the Operational data tree to OVSDB commands. The YANG module used to develop
-the PoC UNIMgr plug-in is cl-unimgr-mef.yang. A copy of this module is
-available in the odl-unimgr-api bundle of the UNIMgr project.
-
-Limitations of the PoC version of UNI Manager in OpenDaylight Beryllium include
-those listed below:
-
-* Uses only OVSDB southbound interface of OpenDaylight
-* Only uses UNI ID, IP Address, and speed UNI attributes
-* Only uses a subset of EVC per UNI attributes
-* Does not use MEF Class of Service or Bandwidth Profile attributes
-* Configures only Open vSwitch network elements
-
-Opportunities for evolution of the UNI Manager plug-in include using complete
-MEF Service Layer and MEF Resource Layer YANG models and supporting other
-OpenDaylight southbound protocols like NetConf and OpenFlow.
-
-=== UNI Manager Components
-
-UNI Manager is comprised of the following OpenDaylight Karaf features:
-
-[width="60%",frame="topbot"]
-|======================
-|odl-unimgr-api          | OpenDaylight :: UniMgr :: api
-|odl-unimgr              | OpenDaylight :: UniMgr
-|odl-unimgr-console      | OpenDaylight :: UniMgr :: CLI
-|odl-unimgr-rest         | OpenDaylight :: UniMgr :: REST
-|odl-unimgr-ui           | OpenDaylight :: UniMgr :: UI
-|======================
-
-=== Installing UNI Manager Plug-in
-
-After launching OpenDaylight, install the feature for the UNI Manager plug-in.
-From the Karaf command prompt, execute the following command to install
-the UNI Manager plug-in:
-
- $ feature:install odl-unimgr-ui
-
-=== Explore and exercise the UNI Manager REST API
-
-To see the UNI Manager APIs, browse to this URL:
-http://localhost:8181/apidoc/explorer/index.html
-
-Replace localhost with the IP address or hostname where OpenDaylight is
-running if you are not running OpenDaylight locally on your machine.
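
Before opening the explorer in a browser, you can check from a shell that it is reachable. This is a sketch; the admin:admin credentials are an assumption based on common OpenDaylight defaults.

```shell
# Sketch: check that the apidoc explorer answers before browsing to it.
# Host, port, and admin:admin credentials are assumptions based on
# common OpenDaylight defaults; adjust them for your deployment.
APIDOC_URL="http://localhost:8181/apidoc/explorer/index.html"
# Uncomment once OpenDaylight is running:
#   curl -u admin:admin -s -o /dev/null -w '%{http_code}\n' "$APIDOC_URL"
echo "$APIDOC_URL"
```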
-
-See also the UNI Manager Developer's Guide for a full list and description of
-UNI Manager POSTMAN calls.
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/uni-manager-plug-in-project.html
index 1e7fedb464f22e94c01c9f6a1ed42f2b4554f92f..cbd3deaf94f13b00e5db9f2fedbd9fce35b5dbe0 100644 (file)
@@ -1,74 +1,3 @@
 == Unified Secure Channel
-This document describes how to use the Unified Secure Channel (USC) 
-feature in OpenDaylight.  This document contains configuration,
-administration, and management sections for the feature.
-
-=== Overview
-In enterprise networks, more and more controller and network
-management systems are being deployed remotely, such as in the
-cloud. Additionally, enterprise networks are becoming more
-heterogeneous - branch, IoT, wireless (including cloud access
-control). Enterprise customers want a converged network controller
-and management system solution.  This feature is intended for
-device and network administrators looking to use unified secure
-channels for their systems.
-
-=== USC Channel Architecture
-* USC Agent
-  ** The USC Agent provides proxy and agent functionality on top of all standard protocols supported by the device.  It initiates call-home with the controller, maintains live connections with the controller, acts as a demuxer/muxer for packets with the USC header, and authenticates the controller.
-* USC Plugin
-  ** The USC Plugin is responsible for communication between the controller and the USC agent.  It responds to call-home from the agent, maintains live connections with the devices, acts as a muxer/demuxer for packets with the USC header, and provides support for TLS/DTLS.
-* USC Manager
-  ** The USC Manager handles configurations, high availability, security, monitoring, and clustering support for USC.
-* USC UI
-  ** The USC UI is responsible for displaying a graphical user interface representing the state of USC in the OpenDaylight DLUX UI.
-
-=== Installing USC Channel
-To install USC, download OpenDaylight and use the Karaf console
-to install the following feature:
-
-odl-usc-channel-ui
-
-=== Configuring USC Channel
-This section gives details about the configuration settings for various components in USC.
-
-The USC configuration files for the Karaf distribution are located in distribution/karaf/target/assembly/etc/usc
-
-* certificates
-  ** The certificates folder contains the client key, pem, and rootca files as is necessary for security.
-* akka.conf
-  ** This file contains configuration related to clustering.  Potential configuration properties can be found on the akka website at http://doc.akka.io
-* usc.properties
-  ** This file contains configuration related to USC.  Use this file to set the location of certificates, define the source of additional akka configurations, and assign default settings to the USC behavior.
-
-=== Administering or Managing USC Channel
-After installing the odl-usc-channel-ui feature from the Karaf console, users can administer and manage USC channels from the UI or the APIDOCS explorer.
-
-Go to http://${IPADDRESS}:8181/index.html, sign in, and click on the USC side menu tab.  From there, users can view the state of USC channels.
-
-Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html, sign in, and expand the usc-channel panel.  From there, users can execute various API calls to test their USC deployment such as add-channel, delete-channel, and view-channel.
-
-=== Tutorials
-Below are tutorials for USC Channel
-
-==== Viewing USC Channel
-The purpose of this tutorial is to view USC Channel
-
-===== Overview
-This tutorial walks users through the process of viewing the USC
-Channel environment topology including established channels connecting
-the controllers and devices in the USC topology.
-
-===== Prerequisites
-For this tutorial, we assume that a device running a USC agent
-is already installed.
-
-===== Instructions
-* Run the OpenDaylight distribution and install odl-usc-channel-ui from the Karaf console.
-* Go to http://${IPADDRESS}:8181/apidoc/explorer/index.html 
-* Execute add-channel with the following json data:
-** {"input":{"channel":{"hostname":"127.0.0.1","port":1068,"remote":false}}}
-* Go to http://${IPADDRESS}:8181/index.html
-* Click on the USC side menu tab.
-* The UI should display a table including the added channel from step 3.
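
The add-channel call from step 3 can also be issued from a shell. The sketch below is an assumption: the RESTCONF operations path and the admin:admin credentials follow common OpenDaylight conventions and may differ in your deployment.

```shell
# Sketch: the add-channel RPC as a curl call instead of the APIDOC
# explorer. The RESTCONF path and admin:admin credentials are
# assumptions; substitute your controller's address for IPADDRESS.
IPADDRESS=127.0.0.1
URL="http://${IPADDRESS}:8181/restconf/operations/usc-channel:add-channel"
PAYLOAD='{"input":{"channel":{"hostname":"127.0.0.1","port":1068,"remote":false}}}'
# Uncomment once the controller is up:
#   curl -u admin:admin -H 'Content-Type: application/json' -X POST -d "$PAYLOAD" "$URL"
echo "$URL"
```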
 
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/unified-secure-channel.html
index d759c769a77b3898767889329792c924f30a22ef..13ed14df994572a013b6985c093dd73df7539c57 100644 (file)
@@ -1,340 +1,3 @@
 == L3VPN Service: User Guide
 
-=== Overview
-L3VPN Service in OpenDaylight provides a framework to create L3VPNs based on BGP-MP.  It also helps create network virtualization for DC cloud environments.
-
-=== Modules & Interfaces
-L3VPN service can be realized using the following modules -
-
-==== VPN Service Modules
-. *VPN Manager* : Creates and manages VPNs and VPN Interfaces
-. *BGP Manager* : Configures BGP routing stack and provides interface to routing services
-. *FIB Manager* : Provides interface to FIB, creates and manages forwarding rules in Dataplane
-. *Nexthop Manager* : Creates and manages nexthop egress pointer, creates egress rules in Dataplane
-. *Interface Manager* : Creates and manages different types of network interfaces, e.g., VLAN, l3tunnel, etc.
-. *Id Manager* : Provides cluster-wide unique ID for a given key. Used by different modules to get unique IDs for different entities.
-. *MD-SAL Util* : Provides interface to MD-SAL. Used by service modules to access MD-SAL Datastore and services.
-
-All the above modules can function independently and can be utilized by other services as well. 
-
-==== Configuration Interfaces
-The following modules expose configuration interfaces through which the user can configure the L3VPN Service.
-
-. BGP Manager
-. VPN Manager
-. Interface Manager
-. FIB Manager
-
-===== Configuration Interface Details
-
-.BGP Manager Interface
-. Data Node Path : _/config/bgp:bgp-router/_
-.. Fields :
-... local-as-identifier
-... local-as-number
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/bgp:bgp-neighbors/_
-.. Fields :
-... List of bgp-neighbor
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/bgp:bgp-neighbors/bgp-neighbor/`{as-number}`/_
-.. Fields :
-... as-number
-... ip-address
-.. REST Methods : GET, PUT, DELETE, POST
-
-.VPN Manager Interface
-. Data Node Path : _/config/l3vpn:vpn-instances/_
-.. Fields :
-... List of vpn-instance
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/l3vpn:vpn-interfaces/vpn-instance_
-.. Fields :
-... name
-... route-distinguisher
-... import-route-policy
-... export-route-policy
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/l3vpn:vpn-interfaces/_
-.. Fields :
-... List of vpn-interface
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/l3vpn:vpn-interfaces/vpn-interface_
-.. Fields :
-... name
-... vpn-instance-name
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/l3vpn:vpn-interfaces/vpn-interface/`{name}`/adjacency_
-.. Fields :
-... ip-address
-... mac-address
-.. REST Methods : GET, PUT, DELETE, POST
-
-.Interface Manager Interface
-. Data Node Path : _/config/if:interfaces/interface_
-.. Fields :
-... name
-... type
-... enabled
-... of-port-id
-... tenant-id
-... base-interface
-.. type specific fields
-... when type = _l2vlan_
-.... vlan-id
-... when type = _stacked_vlan_
-.... stacked-vlan-id
-... when type = _l3tunnel_
-.... tunnel-type
-.... local-ip
-.... remote-ip
-.... gateway-ip
-... when type = _mpls_
-.... list labelStack
-.... num-labels
-.. REST Methods : GET, PUT, DELETE, POST
-
-.FIB Manager Interface
-. Data Node Path : _/config/odl-fib:fibEntries/vrfTables_
-.. Fields :
-... List of vrfTables
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/odl-fib:fibEntries/vrfTables/`{routeDistinguisher}`/_
-.. Fields :
-... route-distinguisher
-... list vrfEntries
-.... destPrefix
-.... label
-.... nexthopAddress
-.. REST Methods : GET, PUT, DELETE, POST
-. Data Node Path : _/config/odl-fib:fibEntries/ipv4Table_
-.. Fields :
-... list ipv4Entry
-.... destPrefix
-.... nexthopAddress
-.. REST Methods : GET, PUT, DELETE, POST
-
-
-=== Provisioning Sequence & Sample Configurations
-
-[[install]]
-==== Installation
-1. Edit 'etc/custom.properties' and set the following property:
-'vpnservice.bgpspeaker.host.name = <bgpserver-ip>'
-'<bgpserver-ip>' here refers to the IP address of the host where BGP is running.
-
-2. Run ODL and install VPN Service
-'feature:install odl-vpnservice-core'
-
-Use REST interface to configure L3VPN service
-
-[[prer]]
-==== Pre-requisites:
-
-1. A BGP stack with VRF support needs to be installed and configured
-a. _Configure BGP as specified in Step 1 below._
-
-2. Create pairs of GRE/VxLAN tunnels (using ovsdb/ovs-vsctl) between each pair of switches and between each switch and the Gateway node
-a. _Create 'l3tunnel' interfaces corresponding to each tunnel in the interfaces DS as specified in Step 2 below._
-
-==== Step 1 : Configure BGP
-
-===== 1. Configure BGP Router
-
-*REST API* : _PUT /config/bgp:bgp-router/_
-
-*Sample JSON Data*
-[source,json]
------------------
-{
-    "bgp-router": {
-        "local-as-identifier": "10.10.10.10",
-        "local-as-number": 108
-    }
-}
------------------
-
-
-===== 2. Configure BGP Neighbors
-
-*REST API* : _PUT /config/bgp:bgp-neighbors/_
-
-*Sample JSON Data*
-
-[source,json]
------------------
-  {
-     "bgp-neighbor" : [
-            {
-                "as-number": 105,
-                "ip-address": "169.144.42.168"
-            }
-       ]
-   }
------------------
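
The two PUTs in Step 1 can be scripted as sketched below. The RESTCONF base URL and admin:admin credentials are assumptions; the JSON bodies are the sample data above.

```shell
# Sketch: Step 1 (BGP router and neighbor configuration) as curl calls.
# The RESTCONF base path and admin:admin credentials are assumptions.
BASE="http://127.0.0.1:8181/restconf/config"
ROUTER='{"bgp-router":{"local-as-identifier":"10.10.10.10","local-as-number":108}}'
NEIGHBORS='{"bgp-neighbor":[{"as-number":105,"ip-address":"169.144.42.168"}]}'
# Uncomment once ODL and the VPN service are running:
#   curl -u admin:admin -H 'Content-Type: application/json' -X PUT -d "$ROUTER"    "$BASE/bgp:bgp-router/"
#   curl -u admin:admin -H 'Content-Type: application/json' -X PUT -d "$NEIGHBORS" "$BASE/bgp:bgp-neighbors/"
echo "$BASE"
```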
-
-==== Step 2 : Create Tunnel Interfaces
-Create l3tunnel interfaces corresponding to all GRE/VxLAN tunnels created with ovsdb (<<prer, see Prerequisites>>). Use the following REST interface -
-
-*REST API* : _PUT /config/if:interfaces/if:interface_
-
-*Sample JSON Data*
-
-[source,json]
------------------
-{
-    "interface": [
-        {
-            "name" : "GRE_192.168.57.101_192.168.57.102",
-            "type" : "odl-interface:l3tunnel",
-            "odl-interface:tunnel-type": "odl-interface:tunnel-type-gre",
-            "odl-interface:local-ip" : "192.168.57.101",
-            "odl-interface:remote-ip" : "192.168.57.102",
-            "odl-interface:portId" : "openflow:1:3",
-            "enabled" : "true"
-        }
-    ]
-}
-
------------------
-
-===== The following is expected as a result of these configurations
-
-1. Unique If-index is generated
-2. 'Interface-state' operational DS is updated
-3. Corresponding Nexthop Group Entry is created
-
-==== Step 3 : OS Create Neutron Ports and attach VMs
-
-At this step, the user creates the VMs.
-
-==== Step 4 : Create VM Interfaces
-Create l2vlan interfaces corresponding to the VMs created in step 3
-
-*REST API* : _PUT /config/if:interfaces/if:interface_
-
-*Sample JSON Data*
-
-[source,json]
------------------
-{
-    "interface": [
-        {
-            "name" : "dpn1-dp1.2",
-            "type" : "l2vlan",
-            "odl-interface:of-port-id" : "openflow:1:2",
-            "odl-interface:vlan-id" : "1",
-            "enabled" : "true"
-        }
-    ]
-}
-
------------------
-
-==== Step 5: Create VPN Instance
-
-*REST API* : _PUT /config/l3vpn:vpn-instances/l3vpn:vpn-instance/_
-
-*Sample JSON Data*
-
-[source,json]
------------------
-{
-  "vpn-instance": [
-    {
-        "description": "Test VPN Instance 1",
-        "vpn-instance-name": "testVpn1",
-        "ipv4-family": {
-            "route-distinguisher": "4000:1",
-            "export-route-policy": "4000:1,5000:1",
-            "import-route-policy": "4000:1,5000:1"
-        }
-    }
-  ]
-}
-
------------------
-
-===== The following is expected as a result of these configurations
-
-1. VPN ID is allocated and updated in data-store
-2. Corresponding VRF is created in BGP
-3. If there are vpn-interface configurations for this VPN, corresponding action is taken as defined in step 6
-
-==== Step 6 : Create VPN-Interface and Local Adjacency
-
-_This can also be done in two separate steps_
-
-===== 1. Create vpn-interface
-
-*REST API* : _PUT /config/l3vpn:vpn-interfaces/l3vpn:vpn-interface/_
-
-*Sample JSON Data*
-
-[source,json]
------------------
-{
-  "vpn-interface": [
-    {
-      "vpn-instance-name": "testVpn1",
-      "name": "dpn1-dp1.2"
-    }
-  ]
-}
------------------
-
-[NOTE]
-The name here is the name of the VM interface created in steps 3 and 4
-
-===== 2. Add Adjacencies on vpn-interface
-
-*REST API* : _PUT /config/l3vpn:vpn-interfaces/l3vpn:vpn-interface/dpn1-dp1.3/adjacency_
-
-*Sample JSON Data*
-
-[source,json]
------------------
-  {
-     "adjacency" : [
-            {
-                "ip-address" : "169.144.42.168",
-                "mac-address" : "11:22:33:44:55:66"
-            }
-       ]
-   }
------------------
-
-
-[NOTE]
-Adjacency is a list; the user can define more than one adjacency on a vpn-interface
-
-The above steps can be carried out in a single step as follows
-
-[source,json]
------------------
-{
-    "vpn-interface": [
-        {
-            "vpn-instance-name": "testVpn1",
-            "name": "dpn1-dp1.3",
-            "odl-l3vpn:adjacency": [
-                {
-                    "odl-l3vpn:mac_address": "11:22:33:44:55:66",
-                    "odl-l3vpn:ip_address": "11.11.11.2"
-                }
-            ]
-        }
-    ]
-}
-
------------------
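
As with the earlier steps, the single-step document above can be pushed with curl. The base URL and credentials below are assumptions; the resource path mirrors the REST API lines given above.

```shell
# Sketch: pushing the combined vpn-interface plus adjacency document.
# The RESTCONF base path and admin:admin credentials are assumptions;
# the resource path follows the REST API shown above.
URL="http://127.0.0.1:8181/restconf/config/l3vpn:vpn-interfaces/l3vpn:vpn-interface/dpn1-dp1.3"
# Save the sample JSON above as vpn-interface.json, then, once ODL is up:
#   curl -u admin:admin -H 'Content-Type: application/json' -X PUT -d @vpn-interface.json "$URL"
echo "$URL"
```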
-
-
-===== The following is expected as a result of these configurations
-
-1. Prefix label is generated and stored in DS
-2. Ingress table is programmed with flow corresponding to interface
-3. Local Egress Group is created
-4. Prefix is added to BGP for advertisement
-5. BGP pushes route update to FIB YANG Interface
-6. FIB Entry flow is added to FIB Table in OF pipeline
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/l3vpn-service_-user-guide.html
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Support_for_Microsoft_SCVMM.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Support_for_Microsoft_SCVMM.adoc
deleted file mode 100644 (file)
index 4ba38e3..0000000
+++ /dev/null
@@ -1,228 +0,0 @@
-==== Support for Microsoft SCVMM 2012 R2 with ODL VTN
-
-===== Introduction
-
-System Center Virtual Machine Manager (SCVMM) is Microsoft's management solution for the virtualized data center. You can use it to configure and manage your virtualization hosts, networking, and storage resources in order to create and deploy virtual machines and services to the private clouds that you have created.
-
-The VSEM Provider is a plug-in to bridge between SCVMM and OpenDaylight.
-
-Microsoft Hyper-V is a server virtualization product developed by Microsoft, which provides virtualization services through hypervisor-based emulation.
-
-.Set-Up Diagram
-image::vtn/setup_diagram_SCVMM.png["Setup" ,width= 500]
-
-*The topology used in this set-up is:*
-
-* A SCVMM with VSEM Provider installed and a running VTN Coordinator and OpenDaylight with VTN Feature installed.
-
-* PF1000 virtual switch extension has been installed in the two Hyper-V servers as it implements the OpenFlow capability in Hyper-V.
-
-* Three OpenFlow switches simulated using mininet and connected to Hyper-V.
-
-* Four VMs hosted using SCVMM.
-
-*It is implemented as three major components:*
-
-* SCVMM
-
-* OpenDaylight (VTN Feature)
-
-* VTN Coordinator
-
-===== VTN Coordinator
-
-OpenDaylight VTN acts as the network service provider for SCVMM. The VSEM Provider is added to the Network Service; it handles all requests from SCVMM and communicates with the VTN Coordinator, which manages the network virtualization provided by OpenDaylight.
-
-====== Installing HTTPS in VTN Coordinator
-
-* System Center Virtual Machine Manager (SCVMM) supports only https protocol.
-
-*Apache Portable Runtime (APR) Installation Steps*
-
-* Enter the command "yum install apr" on the machine where VTN Coordinator is installed.
-
-* In /usr/bin, create a soft link with "ln -s /usr/bin/apr-1-config /usr/bin/apr-config".
-
-* Extract Tomcat under "/usr/share/java" by using the command "tar -xvf apache-tomcat-8.0.27.tar.gz -C /usr/share/java".
-
-NOTE:
-Please go through the below link to download the apache-tomcat-8.0.27.tar.gz file:
-https://archive.apache.org/dist/tomcat/tomcat-8/v8.0.27/bin/
-
-* Go to the directory "/usr/share/java/apache-tomcat-8.0.27/bin" and unzip tomcat-native.gz using the command "tar -xvf tomcat-native.gz".
-
-* Go to the directory "cd /usr/share/java/apache-tomcat-8.0.27/bin/tomcat-native-1.1.33-src/jni/native".
-
-* Enter the command "./configure --with-os-type=bin --with-apr=/usr/bin/apr-config".
-
-* Enter the command "make" and "make install".
-
-* Apr libraries are successfully installed in "/usr/local/apr/lib".
-
-*Enable HTTP/HTTPS in VTN Coordinator*
-
-Enter the commands "firewall-cmd --zone=public --add-port=8083/tcp --permanent" and "firewall-cmd --reload" to open the port in the server's firewall.
-
-*Create a CA's private key and a self-signed certificate in server*
-
-* Execute the following command "openssl req -x509 -days 365 -extensions v3_ca -newkey rsa:2048 -out /etc/pki/CA/cacert.pem -keyout /etc/pki/CA/private/cakey.pem" in a single line.
-
-[options="header",cols="30%,70%"]
-|===
-| Argument | Description
-| Country Name | Specify the country code. +
-For example, JP
-| State or Province Name | Specify the state or province. +
-For example, Tokyo
-| Locality Name | Locality Name +
-For example, Chuo-Ku
-| Organization Name | Specify the company.
-| Organizational Unit Name | Specify the department, division, or the like.
-| Common Name | Specify the host name.
-| Email Address | Specify the e-mail address.
-|===
-
-* Execute the following commands: "touch /etc/pki/CA/index.txt" and "echo 00 > /etc/pki/CA/serial" on the server after setting your CA's private key.
-
-*Create a private key and a CSR for web server*
-
-* Execute the following command "openssl req -new -newkey rsa:2048 -out csr.pem -keyout /usr/local/vtn/tomcat/conf/key.pem" in a single line.
-
-* Enter the PEM pass phrase: Same password you have given in CA's private key PEM pass phrase.
-
-[options="header",cols="30%,70%"]
-|===
-| Argument | Description
-| Country Name | Specify the country code. +
-For example, JP
-| State or Province Name | Specify the state or province. +
-For example, Tokyo
-| Locality Name | Locality Name +
-For example, Chuo-Ku
-| Organization Name | Specify the company.
-| Organizational Unit Name | Specify the department, division, or the like.
-| Common Name | Specify the host name.
-| Email Address | Specify the e-mail address.
-| A challenge password | Specify the challenge password.
-| An optional company name | Specify an optional company name.
-|===
-
-*Create a certificate for web server*
-
-* Execute the following command "openssl ca -in csr.pem -out /usr/local/vtn/tomcat/conf/cert.pem -days 365 -batch" in a single line.
-
-* Enter pass phrase for /etc/pki/CA/private/cakey.pem: Same password you have given in CA's private key PEM pass phrase.
-
-* Open the tomcat file using "vim /usr/local/vtn/tomcat/bin/tomcat".
-
-* Include the line " TOMCAT_PROPS="$TOMCAT_PROPS -Djava.library.path=\"/usr/local/apr/lib\"" " at line 131 and save the file.
-
-*Edit server.xml file and restart the server*
-
-* Open the server.xml file using "vim /usr/local/vtn/tomcat/conf/server.xml" and add the below lines.
-+
-----
-<Connector port="${vtn.port}" protocol="HTTP/1.1" SSLEnabled="true"
-maxThreads="150" scheme="https" secure="true"
-SSLCertificateFile="/usr/local/vtn/tomcat/conf/cert.pem"
-SSLCertificateKeyFile="/usr/local/vtn/tomcat/conf/key.pem"
-SSLPassword="<same password you have given in CA's private key PEM pass phrase>"
-connectionTimeout="20000" />
-----
-+
-* Save the file and restart the server.
-
-* To stop vtn use the following command.
-+
-----
-/usr/local/vtn/bin/vtn_stop
-----
-+
-* To start vtn use the following command.
-+
-----
-/usr/local/vtn/bin/vtn_start
-----
-+
-* Convert the created CA certificate cacert.pem to cacert.crt by using the following command,
-+
-----
-openssl x509 -in /etc/pki/CA/cacert.pem -out cacert.crt
-----
-+
-*Checking the HTTP and HTTPS connection from client*
-
-* You can check the HTTP connection by using the following command:
-+
-----
-curl -X GET -H 'contenttype:application/json' -H 'username:admin' -H 'password:adminpass' http://<server IP address>:8083/vtn-webapi/api_version.json
-----
-+
-* You can check the HTTPS connection by using the following command:
-+
-----
-curl -X GET -H 'contenttype:application/json' -H 'username:admin' -H 'password:adminpass' https://<server IP address>:8083/vtn-webapi/api_version.json --cacert /etc/pki/CA/cacert.pem
-----
-+
-* The response should be like this for both HTTP and HTTPS:
-+
-----
-{"api_version":{"version":"V1.4"}}
-----
-
-===== Prerequisites for creating the Network Service in the SCVMM machine
-
-. Please go through the below link to download the VSEM Provider zip file:
- https://nexus.opendaylight.org/content/groups/public/org/opendaylight/vtn/application/vtnmanager-vsemprovider/2.0.0-Beryllium/vtnmanager-vsemprovider-2.0.0-Beryllium-bin.zip
-
-. Unzip the vtnmanager-vsemprovider-2.0.0-Beryllium-bin.zip file anywhere in your SCVMM machine.
-
-. Stop SCVMM service from *"service manager->tools->servers->select system center virtual machine manager"* and click stop.
-
-. Go to *"C:/Program Files"* in your SCVMM machine. Inside *"C:/Program Files"*, create a folder named *"ODLProvider"*.
-
-. Inside *"C:/Program Files/ODLProvider"*, create a folder named "Module" in your SCVMM machine.
-
-. Inside "C:/Program Files/ODLProvider/Module", create two folders named *"Odl.VSEMProvider"* and *"VSEMOdlUI"* in your SCVMM machine.
-
-. Copy the *"VSEMOdl.dll"* file from *"ODL_SCVMM_PROVIDER/ODL_VSEM_PROVIDER"* to *"C:/Program Files/ODLProvider/Module/Odl.VSEMProvider"* in your SCVMM machine.
-
-. Copy the *"VSEMOdlProvider.psd1"* file from *"application/vsemprovider/VSEMOdlProvider/VSEMOdlProvider.psd1"* to *"C:/Program Files/ODLProvider/Module/Odl.VSEMProvider"* in your SCVMM machine.
-
-. Copy the *"VSEMOdlUI.dll"* file from *"ODL_SCVMM_PROVIDER/ODL_VSEM_PROVIDER_UI"* to *"C:/Program Files/ODLProvider/Module/VSEMOdlUI"* in your SCVMM machine.
-
-. Copy the *"VSEMOdlUI.psd1"* file from *"application/vsemprovider/VSEMOdlUI"* to *"C:/Program Files/ODLProvider/Module/VSEMOdlUI"* in your SCVMM machine.
-
-. Copy the *"reg_entry.reg"* file from *"ODL_SCVMM_PROVIDER/Register_settings"* to your SCVMM desktop and double click the *"reg_entry.reg"* file to install registry entry in your SCVMM machine.
-
-. Download *"PF1000.msi"* from this link, https://www.pf-info.com/License/en/index.php?url=index/index_non_buyer and place into *"C:/Program Files/Switch Extension Drivers"* in your SCVMM machine.
-
-. Start SCVMM service from *"service manager->tools->servers->select system center virtual machine manager"* and click start.
-
-===== System Center Virtual Machine Manager (SCVMM)
-
-It supports two major features:
-
-* Failover Clustering
-* Live Migration
-
-====== Failover Clustering
-
-A single Hyper-V host can run a number of virtual machines. If the host fails, all of the virtual machines running on it also fail, resulting in a major outage. Failover clustering treats individual virtual machines as clustered resources: if a host fails, clustered virtual machines fail over to a different Hyper-V server where they continue to run.
-
-====== Live Migration
-
-Live Migration is used to migrate running virtual machines from one Hyper-V server to another without any interruption.
-Please go through the below video for more details:
-
-* https://youtu.be/34YMOTzbNJM
-
-===== SCVMM User Guide
-
-* Please go through the below link for SCVMM user guide: https://wiki.opendaylight.org/images/c/ca/ODL_SCVMM_USER_GUIDE_final.pdf
-
-* Please go through the below links for more details
-
-** OpenDaylight SCVMM VTN Integration: https://youtu.be/iRt4dxtiz94
-
-** OpenDaylight Congestion Control with SCVMM VTN: https://youtu.be/34YMOTzbNJM
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Troubleshoot_Coordinator_Installation.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Troubleshoot_Coordinator_Installation.adoc
deleted file mode 100644 (file)
index 5647806..0000000
+++ /dev/null
@@ -1,122 +0,0 @@
-==== VTN Coordinator(Troubleshooting HowTo)
-
-===== Overview
-
-This page describes troubleshooting steps for the VTN Coordinator installation.
-OpenDaylight VTN provides multi-tenant virtual network functions on OpenDaylight controllers. OpenDaylight VTN consists of two parts:
-
-* VTN Coordinator.
-* VTN Manager.
-
-VTN Coordinator orchestrates multiple VTN Managers running in OpenDaylight Controllers, and provides VTN Applications with the VTN API.
-VTN Manager is a set of OSGi bundles running in the OpenDaylight Controller. The current VTN Manager supports only OpenFlow switches. It handles PACKET_IN messages, sends PACKET_OUT messages, manages host information, and installs flow entries into OpenFlow switches to provide VTN Coordinator with virtual network functions.
-The requirements for installing these two are different. Therefore, we recommend that you install VTN Manager and VTN Coordinator on different machines.
-
-===== List of installation Troubleshooting How to's
-.How to install VTN Coordinator?
-
-* https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Installation:VTN_Coordinator
-
-*After executing db_setup, have you encountered the error "Failed to setup database"?*
-
-The error could be due to one of the reasons below:
-
-* Access Restriction
-
-Only the user who owns the /usr/local/vtn/ directory and installed VTN Coordinator can start db_setup.
-Example:
-
-----
-  The directory should appear as below (assuming the user as "vtn"):
-  # ls -l /usr/local/
-    drwxr-xr-x. 12 vtn  vtn  4096 Mar 14 21:53 vtn
-  If the user does not own /usr/local/vtn/ then please run the below command (assuming the username is vtn),
-              chown -R vtn:vtn /usr/local/vtn
-----
-* Postgres not Present
-
-----
-1. In case of Fedora/CentOS/RHEL, please check if the /usr/pgsql/<version> directory is present and also ensure the commands initdb, createdb, pg_ctl, and psql are working. If not, please re-install the postgres packages.
-2. In case of Ubuntu, check if the /usr/lib/postgres/<version> directory is present and check for the commands as in the previous step.
-----
-* Not enough space to create tables
-
-----
-Please check df -k and ensure enough free space is available.
-----
-* If the above steps do not solve the problem, please refer to the log file for the exact problem
-
-----
-/usr/local/vtn/var/dbm/unc_setup_db.log for the exact error.
-----
-
-.What are the things to check after vtn_start?
-
-* List the VTN Coordinator processes.
-* Run the below command to ensure the Coordinator daemons are running.
-
-----
-       Command:     /usr/local/vtn/bin/unc_dmctl status
-       Name              Type           IPC Channel       PID
-    -----------       -----------      --------------     ------
-        drvodcd         DRIVER           drvodcd           15972
-        lgcnwd         LOGICAL           lgcnwd            16010
-        phynwd         PHYSICAL          phynwd            15996
-----
-* Issue the curl command to fetch version and ensure the process is able to respond.
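
The version check can reuse the api_version call shown elsewhere in this guide; the host, port, and credentials below follow those examples and are assumptions for your deployment.

```shell
# Sketch: fetch the VTN Coordinator API version to confirm the daemons
# respond. Host, port, and credentials follow the curl examples used
# elsewhere in this guide; adjust them for your deployment.
VTN_URL="http://127.0.0.1:8083/vtn-webapi/api_version.json"
# Uncomment once the Coordinator is running:
#   curl -X GET -H 'contenttype:application/json' -H 'username:admin' -H 'password:adminpass' "$VTN_URL"
echo "$VTN_URL"
```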
-
-.How to debug a startup failure?
-
-The following activities take place in order during startup
-
-* The database server is started after setting virtual memory to the required value. Any database startup errors will be reflected in one of the logs below.
-
-----
-         /usr/local/vtn/var/dbm/unc_db_script.log.
-         /usr/local/vtn/var/db/pg_log/postgresql-*.log (the pattern will have the date)
-----
-* The uncd daemon is started next; it in turn starts the rest of the daemons.
-
-----
-  Any  uncd startup failures will be reflected in /usr/local/vtn/var/uncd/uncd_start.err.
-----
-
-====== After setting up the Apache Tomcat server, what aspects should be checked?
-.Please check if Catalina is running.
-
-----
-    The command ps -ef | grep catalina | grep -v grep should list a catalina process
-----
-
-.If you encounter a situation where the REST API always fails.
-
-----
-  Please check the firewall settings for port 8181 (Beryllium release) or port 8083 (post-Beryllium releases) and open the port.
-----
-.How to debug a REST API returning a failure message?
-Please check the /usr/share/java/apache-tomcat-7.0.39/logs/core/core.log for failure details.
-
-.REST API for VTN configuration fails, how to debug?
-
-The default log level for all daemons is "INFO"; to debug a situation, TRACE or DEBUG logs may be needed. To increase the log level for individual daemons, please use the commands suggested below.
-
-----
-  /usr/local/vtn/bin/lgcnw_control loglevel trace -- upll daemon log
-  /usr/local/vtn/bin/phynw_control loglevel trace -- uppl daemon log
-  /usr/local/vtn/bin/unc_control loglevel trace -- uncd daemon log
-  /usr/local/vtn/bin/drvodc_control loglevel trace -- Driver daemon log
-----
-After setting the log levels, the operation can be repeated and the log files can be referred for debugging.
-
-.Problems while Installing PostgreSQL due to openssl
-
-Errors may occur when trying to install PostgreSQL rpms. Recently PostgreSQL upgraded all their binaries to use the latest openssl versions with a fix for Heartbleed (http://en.wikipedia.org/wiki/Heartbleed). Please upgrade the openssl package to the latest version and re-install.
-For RHEL 6.1/6.4: if you have a subscription, please use it to update the rpms. Details are available in the following link:
-https://access.redhat.com/site/solutions/781793
-
-----
-  rpm -Uvh http://mirrors.kernel.org/centos/6/os/x86_64/Packages/openssl-1.0.1e-15.el6.x86_64.rpm
-  rpm -ivh http://mirrors.kernel.org/centos/6/os/x86_64/Packages/openssl-devel-1.0.1e-15.el6.x86_64.rpm
-----
-
-For other Linux platforms, please run yum update; the public repositories will have the latest openssl. Please install it.
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Use_VTN_to_make_packets_take_different_paths.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_Use_VTN_to_make_packets_take_different_paths.adoc
deleted file mode 100644 (file)
index 5bd95d0..0000000
+++ /dev/null
@@ -1,182 +0,0 @@
-==== How To Use VTN To Make Packets Take Different Paths
-This example demonstrates how to create specific VTN Path Map information.
-
-.PathMap
-image::vtn/Pathmap.png["Pathmap" ,width= 500]
-
-===== Requirement
-* Save the Mininet script given below as pathmap_test.py and run it in the environment where Mininet is installed.
-
-* Create topology using the below mininet script:
-
-----
- from mininet.topo import Topo
- class MyTopo( Topo ):
-    "Simple topology example."
-    def __init__( self ):
-        "Create custom topo."
-        # Initialize topology
-        Topo.__init__( self )
-        # Add hosts and switches
-        leftHost = self.addHost( 'h1' )
-        rightHost = self.addHost( 'h2' )
-        leftSwitch = self.addSwitch( 's1' )
-        middleSwitch = self.addSwitch( 's2' )
-        middleSwitch2 = self.addSwitch( 's4' )
-        rightSwitch = self.addSwitch( 's3' )
-        # Add links
-        self.addLink( leftHost, leftSwitch )
-        self.addLink( leftSwitch, middleSwitch )
-        self.addLink( leftSwitch, middleSwitch2 )
-        self.addLink( middleSwitch, rightSwitch )
-        self.addLink( middleSwitch2, rightSwitch )
-        self.addLink( rightSwitch, rightHost )
- topos = { 'mytopo': ( lambda: MyTopo() ) }
-----
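The topology built by the script above is what makes this example work: it offers two candidate routes between h1 and h2, one through s2 and one through s4. As an illustrative aside (not part of the VTN API), the following standalone sketch rebuilds the same adjacency and enumerates those routes:

```python
# Sketch: rebuild the pathmap_test.py adjacency and enumerate the
# simple paths between h1 and h2 that the path map later chooses between.
links = [("h1", "s1"), ("s1", "s2"), ("s1", "s4"),
         ("s2", "s3"), ("s4", "s3"), ("s3", "h2")]

adj = {}
for a, b in links:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

def paths(src, dst, seen=()):
    """Enumerate all simple paths from src to dst."""
    if src == dst:
        return [[dst]]
    out = []
    for nxt in adj[src]:
        if nxt not in seen:
            out.extend([[src] + p for p in paths(nxt, dst, seen + (src,))])
    return out

print(paths("h1", "h2"))
```

Exactly two paths exist, h1-s1-s2-s3-h2 and h1-s1-s4-s3-h2, which is why the path policy configured later can steer traffic between them.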
-
-----
- mininet> net
- c0
- s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1 s1-eth3:s4-eth1
- s2 lo:  s2-eth1:s1-eth2 s2-eth2:s3-eth1
- s3 lo:  s3-eth1:s2-eth2 s3-eth2:s4-eth2 s3-eth3:h2-eth0
- s4 lo:  s4-eth1:s1-eth3 s4-eth2:s3-eth2
- h1 h1-eth0:s1-eth1
- h2 h2-eth0:s3-eth3
-----
-
-* Generate traffic by pinging between hosts h1 and h2 before creating the portmaps. The ping fails because no mapping exists yet:
-
-----
-  mininet> h1 ping h2
-  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
-  From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
-  From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
-  From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
-  From 10.0.0.1 icmp_seq=4 Destination Host Unreachable
-----
-
-===== Configuration
-* Create a controller named odc, specifying its IP address in the create-controller command below.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "odc", "ipaddr":"10.100.9.42", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
-----
-
-* Create a VTN named vtn1 by executing the create-vtn command
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
-----
-
-* Create a vBridge named vBridge1 in the vtn1 by executing the create-vbr command.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"odc","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
-----
-
-* Create two interfaces named if1 and if2 in vBridge1
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
-----
-
-* Configure a port mapping on each of the interfaces by executing the commands below.
-
-The interface if1 of the virtual bridge will be mapped to the port "s1-eth1" of the switch "openflow:1" in Mininet.
-Host h1 is connected to the port "s1-eth1".
-
-The interface if2 of the virtual bridge will be mapped to the port "s3-eth3" of the switch "openflow:3" in Mininet.
-Host h2 is connected to the port "s3-eth3".
-
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:01-s1-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if1/portmap.json
-curl --user admin:adminpass -H 'content-type: application/json'  -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:03-s3-eth3"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if2/portmap.json
-----
-
-* Generate traffic by pinging between hosts h1 and h2 after creating the portmaps. The ping now succeeds:
-
-----
-  mininet> h1 ping h2
-  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
-  64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=36.4 ms
-  64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.880 ms
-  64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.073 ms
-  64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.081 ms
-----
-
-* Get the VTN Dataflows information
-
-----
-curl -X GET -H 'content-type: application/json' --user 'admin:adminpass' "http://127.0.0.1:8083/vtn-webapi/dataflows?switch_id=00:00:00:00:00:00:00:01&port_name=s1-eth1&controller_id=odc&srcmacaddr=de3d.7dec.e4d2&no_vlan_id=true"
-----
-
-* Create a Flowcondition in the VTN
-
-.(The flow condition, path map, and path policy commands below must be executed on the controller.)
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1", "vtn-flow-match":[{"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.1/32","protocol":1,"destination-network":"10.0.0.2/32"},"index":"1"}]}}'
-----
-
-* Create a Pathmap in the VTN
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-map:set-path-map -d '{"input":{"tenant-name":"vtn1","path-map-list":[{"condition":"cond_1","policy":"1","index": "1","idle-timeout":"300","hard-timeout":"0"}]}}'
-----
-
-* Create a Path Policy in the VTN
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-policy:set-path-policy -d '{"input":{"operation":"SET","id": "1","default-cost": "10000","vtn-path-cost": [{"port-desc":"openflow:1,3,s1-eth3","cost":"1000"},{"port-desc":"openflow:4,2,s4-eth2","cost":"100000"},{"port-desc":"openflow:3,3,s3-eth3","cost":"10000"}]}}'
-----
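To see why this policy moves traffic, here is a rough back-of-the-envelope sketch (our own arithmetic, not VTN's internal algorithm): summing the configured cost of each egress port along a path, with unlisted ports taking the default-cost of 10000, shows the s2 route becoming cheaper than the s4 route.

```python
# Hypothetical sketch: compare per-path cost sums under the path policy
# above (default-cost 10000). Port keys are our own naming, not the API's.
DEFAULT_COST = 10000
port_cost = {
    "openflow:1:s1-eth3": 1000,    # s1 port toward s4
    "openflow:4:s4-eth2": 100000,  # s4 port toward s3 (heavily penalized)
    "openflow:3:s3-eth3": 10000,   # s3 port toward h2
}

def path_cost(egress_ports):
    # Sum the policy cost of every egress port along the path.
    return sum(port_cost.get(p, DEFAULT_COST) for p in egress_ports)

via_s4 = path_cost(["openflow:1:s1-eth3", "openflow:4:s4-eth2", "openflow:3:s3-eth3"])
via_s2 = path_cost(["openflow:1:s1-eth2", "openflow:2:s2-eth2", "openflow:3:s3-eth3"])
print(via_s4, via_s2)  # 111000 30000
```

With these numbers the s2 route (30000) beats the s4 route (111000), consistent with the dataflow path shown in the verification below.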
-
-===== Verification
-* Before applying the path policy, the dataflow takes the route through s4:
-
-----
-{
-        "pathinfos": [
-            {
-              "in_port_name": "s1-eth1",
-              "out_port_name": "s1-eth3",
-              "switch_id": "openflow:1"
-            },
-            {
-              "in_port_name": "s4-eth1",
-              "out_port_name": "s4-eth2",
-              "switch_id": "openflow:4"
-            },
-            {
-               "in_port_name": "s3-eth2",
-               "out_port_name": "s3-eth3",
-               "switch_id": "openflow:3"
-            }
-                     ]
-}
-----
-* After applying the path policy, the dataflow shifts to the route through s2:
-
-----
-{
-    "pathinfos": [
-            {
-              "in_port_name": "s1-eth1",
-              "out_port_name": "s1-eth2",
-              "switch_id": "openflow:1"
-            },
-            {
-              "in_port_name": "s2-eth1",
-              "out_port_name": "s2-eth2",
-              "switch_id": "openflow:2"
-            },
-            {
-               "in_port_name": "s3-eth1",
-               "out_port_name": "s3-eth3",
-               "switch_id": "openflow:3"
-            }
-                     ]
-}
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_L2_Network_with_Multiple_Controllers.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_L2_Network_with_Multiple_Controllers.adoc
deleted file mode 100644 (file)
index 4631f96..0000000
+++ /dev/null
@@ -1,96 +0,0 @@
-==== How to configure L2 Network with Multiple Controllers
-* This example provides the procedure to configure an L2 network with VTN Coordinator using VTN virtualization.
-It demonstrates vBridge interface mapping with multiple controllers using Mininet.
-
-.EXAMPLE DEMONSTRATING MULTIPLE CONTROLLERS
-image::vtn/MutiController_Example_diagram.png["EXAMPLE DEMONSTRATING MULTIPLE CONTROLLERS",width=600]
-
-===== Requirements
-* Configure multiple controllers using the mininet script given below: https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_%28VTN%29:Scripts:Mininet#Network_with_multiple_switches_and_OpenFlow_controllers
-
-===== Configuration
-* Create a VTN named vtn3 by executing the create-vtn command
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vtn" : {"vtn_name":"vtn3"}}' http://127.0.0.1:8083/vtn-webapi/vtns.json
-----
-* Create two controllers named odc1 and odc2, specifying their IP addresses in the create-controller commands below.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "odc1", "ipaddr":"10.100.9.52", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
-----
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "odc2", "ipaddr":"10.100.9.61", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
-----
-* Create two vBridges in the VTN: vbr1 on controller odc1 and vbr2 on controller odc2.
-
-----
- curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vbr1","controller_id":"odc1","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges.json
-----
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vbridge" : {"vbr_name":"vbr2","controller_id":"odc2","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges.json
-----
-* Create two interfaces, if1 and if2, in each of the vBridges vbr1 and vbr2.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr1/interfaces.json
-----
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr1/interfaces.json
-----
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr2/interfaces.json
-----
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"interface": {"if_name": "if2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr2/interfaces.json
-----
-* Get the list of logical ports configured
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X GET http://127.0.0.1:8083/vtn-webapi/controllers/odc1/domains/\(DEFAULT\)/logical_ports/detail.json
-----
-* Create a boundary and a vLink between the two controllers
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'   -X POST -d '{"boundary": {"boundary_id": "b1", "link": {"controller1_id": "odc1", "domain1_id": "(DEFAULT)", "logical_port1_id": "PP-OF:00:00:00:00:00:00:00:01-s1-eth3", "controller2_id": "odc2", "domain2_id": "(DEFAULT)", "logical_port2_id": "PP-OF:00:00:00:00:00:00:00:04-s4-eth3"}}}' http://127.0.0.1:8083/vtn-webapi/boundaries.json
-----
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"vlink": {"vlk_name": "vlink1" , "vnode1_name": "vbr1", "if1_name":"if2", "vnode2_name": "vbr2", "if2_name": "if2", "boundary_map": {"boundary_id":"b1","vlan_id": "50"}}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vlinks.json
-----
-* Configure a port mapping on each of the interfaces by executing the commands below.
-
-The interface if1 of vbr1 will be mapped to the port "s2-eth2" of the switch "openflow:2" in Mininet.
-Host h2 is connected to the port "s2-eth2".
-
-The interface if1 of vbr2 will be mapped to the port "s5-eth2" of the switch "openflow:5" in Mininet.
-Host h6 is connected to the port "s5-eth2".
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr1/interfaces/if1/portmap.json
-----
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:05-s5-eth2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn3/vbridges/vbr2/interfaces/if1/portmap.json
-----
-
-===== Verification
-Verify that host h2 and host h6 can ping each other.
-
-* Send packets from h2 to h6
-
-----
-mininet> h2 ping h6
-----
-
-----
- PING 10.0.0.6 (10.0.0.6) 56(84) bytes of data.
- 64 bytes from 10.0.0.6: icmp_req=1 ttl=64 time=0.780 ms
- 64 bytes from 10.0.0.6: icmp_req=2 ttl=64 time=0.079 ms
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_L2_Network_with_Single_Controller.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_L2_Network_with_Single_Controller.adoc
deleted file mode 100644 (file)
index 45d430e..0000000
+++ /dev/null
@@ -1,89 +0,0 @@
-==== How to configure L2 Network with Single Controller
-
-===== Overview
-
-This example provides the procedure to configure an L2 network with VTN Coordinator using VTN virtualization (single controller). It demonstrates vBridge interface mapping with a single controller using Mininet. Mininet details and setup instructions are available at the following URL:
-https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet
-
-.EXAMPLE DEMONSTRATING SINGLE CONTROLLER
-image::vtn/vtn-single-controller-topology-example.png[EXAMPLE DEMONSTRATING SINGLE CONTROLLER]
-
-===== Requirements
-
-* Configure mininet and create a topology:
-
-----
-mininet@mininet-vm:~$ sudo mn --controller=remote,ip=<controller-ip> --topo tree,2
-----
-
-* Check the created topology with the net command:
-
-----
- s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1
- s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0
- h1 h1-eth0:s1-eth1
- h2 h2-eth0:s2-eth2
-----
-
-===== Configuration
-
-* Create a controller named controllerone, specifying its IP address in the create-controller command below.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"controller": {"controller_id": "controllerone", "ipaddr":"10.0.0.2", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
-----
-
-* Create a VTN named vtn1 by executing the create-vtn command
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
-----
-
-* Create a vBridge named vBridge1 in the vtn1 by executing the create-vbr command.
-
-----
- curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
-----
-
-* Create two interfaces named if1 and if2 in vBridge1
-
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
-----
-
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
-----
-
-* Get the list of logical ports configured
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X GET http://127.0.0.1:8083/vtn-webapi/controllers/controllerone/domains/\(DEFAULT\)/logical_ports.json
-----
-* Configure a port mapping on each of the interfaces by executing the commands below.
-
-The interface if1 of the virtual bridge will be mapped to the port "s3-eth1" of the switch "openflow:3" in Mininet.
-Host h3 is connected to the port "s3-eth1".
-
-The interface if2 of the virtual bridge will be mapped to the port "s2-eth1" of the switch "openflow:2" in Mininet.
-Host h1 is connected to the port "s2-eth1".
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:03-s3-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if1/portmap.json
-curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if2/portmap.json
-----
-
-===== Verification
-
-Verify that host h1 and host h3 can ping each other.
-
-* Send packets from h1 to h3
-
-----
- mininet> h1 ping h3
- PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
- 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.780 ms
- 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.079 ms
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_flow_filters.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_configure_flow_filters.adoc
deleted file mode 100644 (file)
index bc49301..0000000
+++ /dev/null
@@ -1,138 +0,0 @@
-==== How To Configure Flow Filters Using VTN
-
-===== Overview
-The flow-filter function discards, permits, or redirects packets of the traffic within a VTN, according to the specified flow conditions. The table below lists the actions to be applied when a packet matches the condition:
-
-[cols="2*"]
-|===
-| Action | Function
-|Pass | Permits the packet to pass.
-As options, packet transfer priority (set priority) and DSCP change (set ip-dscp) can be specified.
-|Drop | Discards the packet.
-|Redirect|Redirects the packet to a desired virtual interface.
-As an option, it is possible to change the MAC address when the packet is transferred.
-|===
-
-.Flow Filter
-image::vtn/flow_filter_example.png["Example demonstrating flow filters",width=600]
-
-The following steps explain the flow-filter function:
-
-* When a packet is transferred to an interface within a virtual network, the flow-filter function
-evaluates whether the transferred packet matches the condition specified in the flow-list.
-* If the packet matches the condition, the flow-filter applies the flow-list matching action
-specified in the flow-filter.
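The evaluate-then-act logic of the steps above can be sketched for a single flow-list entry as follows (an illustrative model only; the field names here are our own, not the VTN API's):

```python
# Sketch of the flow-filter logic: a packet that matches every field of
# the flow-list condition receives the configured action; any other
# packet is forwarded untouched.
def flow_filter(packet, condition, action):
    """Return the fate of a packet under one flow-filter entry."""
    matches = all(packet.get(k) == v for k, v in condition.items())
    if not matches:
        return "forward"   # condition not met: filter does not apply
    return action          # "pass", "drop", or "redirect"

cond = {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.3"}
print(flow_filter({"ip_src": "10.0.0.1", "ip_dst": "10.0.0.3"}, cond, "drop"))  # drop
print(flow_filter({"ip_src": "10.0.0.2", "ip_dst": "10.0.0.3"}, cond, "drop"))  # forward
```

This mirrors the drop/pass demonstrations later in this section: the same condition yields different packet fates only because the configured action changes.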
-
-===== Requirements
-To apply the packet filter, configure the following:
-
-* Create a flow-list and flow-list entry.
-* Specify where to apply the flow-filter, for example VTN, vBridge, or interface of vBridge.
-
-Configure Mininet and create a tree topology of depth 2:
-
-----
-mininet@mininet-vm:~$ sudo mn --controller=remote,ip=<controller-ip> --topo tree,2
-----
-
-This generates the following topology:
-
-----
-mininet> net
-c0
-s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
-s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
-s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
-h1 h1-eth0:s2-eth1
-h2 h2-eth0:s2-eth2
-h3 h3-eth0:s3-eth1
-h4 h4-eth0:s3-eth2
-----
-
-===== Configuration
-* Create a controller named controller1, specifying its IP address in the create-controller command below.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"controller": {"controller_id": "controller1", "ipaddr":"10.100.9.61", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers
-----
-* Create a VTN named vtn_one by executing the create-vtn command
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vtn" : {"vtn_name":"vtn_one","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
-----
-* Create two vBridges named vbr_one and vbr_two in vtn_one by executing the create-vbr command.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vbr_one","controller_id":"controller1","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges.json
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vbr_two","controller_id":"controller1","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges.json
-----
-* Create two interfaces named if1 and if2 in vbr_two
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces.json
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces.json
-----
-
-* Get the list of logical ports configured
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X GET  http://127.0.0.1:8083/vtn-webapi/controllers/controller1/domains/\(DEFAULT\)/logical_ports.json
-----
-* Configure a port mapping on each of the interfaces by executing the commands below.
-
-The interface if1 of the virtual bridge will be mapped to the port "s3-eth1" of the switch "openflow:3" in Mininet.
-Host h3 is connected to the port "s3-eth1".
-
-The interface if2 of the virtual bridge will be mapped to the port "s2-eth1" of the switch "openflow:2" in Mininet.
-Host h1 is connected to the port "s2-eth1".
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:03-s3-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/portmap.json
-curl -v --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if2/portmap.json
-----
-* Create Flowlist
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"flowlist": {"fl_name": "flowlist1", "ip_version":"IP"}}' http://127.0.0.1:8083/vtn-webapi/flowlists.json
-----
-* Create Flowlistentry
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"flowlistentry": {"seqnum": "233","macethertype": "0x8000","ipdstaddr": "10.0.0.3","ipdstaddrprefix": "2","ipsrcaddr": "10.0.0.2","ipsrcaddrprefix": "2","ipproto": "17","ipdscp": "55","icmptypenum":"232","icmpcodenum": "232"}}' http://127.0.0.1:8083/vtn-webapi/flowlists/flowlist1/flowlistentries.json
-----
-* Create vBridge Interface Flowfilter
-
-----
-curl --user admin:adminpass -X POST -H 'content-type: application/json' -d '{"flowfilter" : {"ff_type": "in"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/flowfilters.json
-----
-===== Flow filter demonstration with DROP action-type
-
-----
-curl --user admin:adminpass -X POST -H 'content-type: application/json' -d '{"flowfilterentry": {"seqnum": "233", "fl_name": "flowlist1", "action_type":"drop", "priority":"3", "dscp":"55" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/flowfilters/in/flowfilterentries.json
-----
-===== Verification
-As we have applied the action type "drop", the ping should fail.
-
-----
-mininet> h1 ping h3
-PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
-From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
-From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
-----
-
-===== Flow filter demonstration with PASS action-type
-
-----
-curl --user admin:adminpass -X PUT -H 'content-type: application/json' -d '{"flowfilterentry": {"seqnum": "233", "fl_name": "flowlist1", "action_type":"pass", "priority":"3", "dscp":"55" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn_one/vbridges/vbr_two/interfaces/if1/flowfilters/in/flowfilterentries/233.json
-----
-===== Verification
-
-----
-mininet> h1 ping h3
-PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
-64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
-64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
-64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_test_vlanmap_using_mininet.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_test_vlanmap_using_mininet.adoc
deleted file mode 100644 (file)
index 7374c38..0000000
+++ /dev/null
@@ -1,77 +0,0 @@
-==== How To Test Vlan-Map In Mininet Environment
-
-===== Overview
-This example explains how to test vlan-map in a multi-host scenario.
-
-.Example that demonstrates vlanmap testing in Mininet Environment
-image::vtn/vlanmap_using_mininet.png[Example that demonstrates vlanmap testing in Mininet Environment]
-
-===== Requirements
-* Save the Mininet script available at the link below as vlan_vtn_test.py and run it in the environment where Mininet is installed.
-
-
-===== Mininet Script
-https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet#Network_with_hosts_in_different_vlan
-
-* Run the mininet script
-
-----
-sudo mn --controller=remote,ip=192.168.64.13 --custom vlan_vtn_test.py --topo mytopo
-----
-
-===== Configuration
-
-Follow the steps below to test a VLAN map using Mininet:
-
-* Create a controller named controllerone, specifying its IP address in the create-controller command below.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json'  -X POST -d '{"controller": {"controller_id": "controllerone", "ipaddr":"10.0.0.2", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers
-----
-
-* Create a VTN named vtn1 by executing the create-vtn command
-
-----
-curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
-----
-
-* Create a vBridge named vBridge1 in the vtn1 by executing the create-vbr command.
-
-----
-curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
-----
-
-* Create a vlan map with vlanid 200 for vBridge vBridge1
-
-----
-curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vlanmap" : {"vlan_id": 200 }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/vlanmaps.json
-----
-
-* Create a vBridge named vBridge2 in the vtn1 by executing the create-vbr command.
-
-----
-curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vbridge" : {"vbr_name":"vBridge2","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
-----
-
-* Create a vlan map with vlanid 300 for vBridge vBridge2
-
-----
-curl -X POST -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' -d '{"vlanmap" : {"vlan_id": 300 }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge2/vlanmaps.json
-----
-
-===== Verification
-
-Ping all in mininet environment to view the host reachability.
-
-
-----
-mininet> pingall
-Ping: testing ping reachability
-h1 -> X h3 X h5 X
-h2 -> X X h4 X h6
-h3 -> h1 X X h5 X
-h4 -> X h2 X X h6
-h5 -> h1 X h3 X X
-h6 -> X h2 X h4 X
-----
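The pingall pattern above follows directly from the VLAN maps: hosts that share a VLAN land in the same vBridge and can reach each other, while hosts on different VLANs cannot. As an illustrative sketch (the host-to-VLAN assignment below is assumed from the wiki script, not queried from VTN):

```python
# Sketch: predict the pingall reachability matrix from VLAN membership.
# Hosts on the same VLAN share a vBridge and are mutually reachable.
vlan_of = {"h1": 200, "h3": 200, "h5": 200,
           "h2": 300, "h4": 300, "h6": 300}

def reachable(a, b):
    return a != b and vlan_of[a] == vlan_of[b]

hosts = sorted(vlan_of)
matrix = {h: [p for p in hosts if reachable(h, p)] for h in hosts}
print(matrix["h1"])  # ['h3', 'h5']
```

The predicted matrix reproduces the pingall output: h1 reaches only h3 and h5, h2 reaches only h4 and h6, and so on.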
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_view_Dataflows.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_view_Dataflows.adoc
deleted file mode 100644 (file)
index 39db993..0000000
+++ /dev/null
@@ -1,67 +0,0 @@
-==== How To View Dataflows in VTN
-This example demonstrates how to view specific VTN Dataflow information.
-
-===== Configuration
-The configuration is the same as in the Vlan Mapping example (https://wiki.opendaylight.org/view/VTN:Coordinator:Beryllium:HowTos:How_To_test_vlanmap_using_mininet).
-
-===== Verification
-Get the VTN Dataflows information
-
-----
-curl -X GET -H 'content-type: application/json' --user 'admin:adminpass' "http://127.0.0.1:8083/vtn-webapi/dataflows?controller_id=controllerone&srcmacaddr=924c.e4a3.a743&vlan_id=300&switch_id=openflow:2&port_name=s2-eth1"
-----
-
-
-----
-{
-   "dataflows": [
-       {
-           "controller_dataflows": [
-               {
-                   "controller_id": "controllerone",
-                   "controller_type": "odc",
-                   "egress_domain_id": "(DEFAULT)",
-                   "egress_port_name": "s3-eth3",
-                   "egress_station_id": "3",
-                   "egress_switch_id": "00:00:00:00:00:00:00:03",
-                   "flow_id": "29",
-                   "ingress_domain_id": "(DEFAULT)",
-                   "ingress_port_name": "s2-eth2",
-                   "ingress_station_id": "2",
-                   "ingress_switch_id": "00:00:00:00:00:00:00:02",
-                   "match": {
-                       "macdstaddr": [
-                           "4298.0959.0e0b"
-                       ],
-                       "macsrcaddr": [
-                           "924c.e4a3.a743"
-                       ],
-                       "vlan_id": [
-                           "300"
-                       ]
-                   },
-                   "pathinfos": [
-                       {
-                           "in_port_name": "s2-eth2",
-                           "out_port_name": "s2-eth1",
-                           "switch_id": "00:00:00:00:00:00:00:02"
-                       },
-                       {
-                           "in_port_name": "s1-eth2",
-                           "out_port_name": "s1-eth3",
-                           "switch_id": "00:00:00:00:00:00:00:01"
-                       },
-                       {
-                           "in_port_name": "s3-eth1",
-                           "out_port_name": "s3-eth3",
-                           "switch_id": "00:00:00:00:00:00:00:03"
-                       }
-                   ]
-               }
-           ],
-           "reason": "success"
-       }
-   ]
-}
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_view_STATIONS.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_How_To_view_STATIONS.adoc
deleted file mode 100644 (file)
index 821a769..0000000
+++ /dev/null
@@ -1,116 +0,0 @@
-==== How To View Specific VTN Station Information
-
-This example demonstrates how to view specific VTN Station information.
-
-.EXAMPLE DEMONSTRATING VTN STATIONS
-image::vtn/vtn_stations.png[EXAMPLE DEMONSTRATING VTN STATIONS]
-
-===== Requirement
-* Configure mininet and create a topology:
-
-----
- $ sudo mn --custom /home/mininet/mininet/custom/topo-2sw-2host.py --controller=remote,ip=10.100.9.61 --topo mytopo
-mininet> net
-
- s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1
- s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0
- h1 h1-eth0:s1-eth1
- h2 h2-eth0:s2-eth2
-----
-
-* Generate traffic by pinging between hosts h1 and h2 after configuring the portmaps
-
-
-----
- mininet> h1 ping h2
- PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
- 64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=16.7 ms
- 64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=13.2 ms
-----
-
-===== Configuration
-
-* Create a controller named controllerone, specifying its IP address in the create-controller command below
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"controller": {"controller_id": "controllerone", "ipaddr":"10.100.9.61", "type": "odc", "version": "1.0", "auditstatus":"enable"}}' http://127.0.0.1:8083/vtn-webapi/controllers.json
-----
-
-* Create a VTN named vtn1 by executing the create-vtn command
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vtn" : {"vtn_name":"vtn1","description":"test VTN" }}' http://127.0.0.1:8083/vtn-webapi/vtns.json
-----
-
-* Create a vBridge named vBridge1 in the vtn1 by executing the create-vbr command.
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"vbridge" : {"vbr_name":"vBridge1","controller_id":"controllerone","domain_id":"(DEFAULT)" }}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges.json
-----
-
-* Create two Interfaces named if1 and if2 into the vBridge1
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if1","description": "if_desc1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
-curl -v --user admin:adminpass -H 'content-type: application/json' -X POST -d '{"interface": {"if_name": "if2","description": "if_desc2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces.json
-----
-
-* Configure a port mapping on each of the interfaces by executing the commands below.
-
-The interface if1 of the virtual bridge will be mapped to the port "s1-eth1" of the Mininet switch "openflow:1".
-Host h1 is connected to the port "s1-eth1".
-
-The interface if2 of the virtual bridge will be mapped to the port "s2-eth2" of the Mininet switch "openflow:2".
-Host h2 is connected to the port "s2-eth2".
-
-
-----
-curl --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:01-s1-eth1"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if1/portmap.json
-curl -v --user admin:adminpass -H 'content-type: application/json' -X PUT -d '{"portmap":{"logical_port_id": "PP-OF:00:00:00:00:00:00:00:02-s2-eth2"}}' http://127.0.0.1:8083/vtn-webapi/vtns/vtn1/vbridges/vBridge1/interfaces/if2/portmap.json
-----
-
-* Get the VTN stations information
-
-----
-curl -X GET -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' "http://127.0.0.1:8083/vtn-webapi/vtnstations?controller_id=controllerone&vtn_name=vtn1"
-----
-
-===== Verification
-
-----
-curl -X GET -H 'content-type: application/json' -H 'username: admin' -H 'password: adminpass' "http://127.0.0.1:8083/vtn-webapi/vtnstations?controller_id=controllerone&vtn_name=vtn1"
-{
-   "vtnstations": [
-       {
-           "domain_id": "(DEFAULT)",
-           "interface": {},
-           "ipaddrs": [
-               "10.0.0.2"
-           ],
-           "macaddr": "b2c3.06b8.2dac",
-           "no_vlan_id": "true",
-           "port_name": "s2-eth2",
-           "station_id": "178195618445172",
-           "switch_id": "00:00:00:00:00:00:00:02",
-           "vnode_name": "vBridge1",
-           "vnode_type": "vbridge",
-           "vtn_name": "vtn1"
-       },
-       {
-           "domain_id": "(DEFAULT)",
-           "interface": {},
-           "ipaddrs": [
-               "10.0.0.1"
-           ],
-           "macaddr": "ce82.1b08.90cf",
-           "no_vlan_id": "true",
-           "port_name": "s1-eth1",
-           "station_id": "206130278144207",
-           "switch_id": "00:00:00:00:00:00:00:01",
-           "vnode_name": "vBridge1",
-           "vnode_type": "vbridge",
-           "vtn_name": "vtn1"
-       }
-   ]
-}
-----
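-
-For scripting, the station MAC addresses can be pulled out of a saved response with a quick sed filter — a sketch that assumes the pretty-printed JSON layout shown above (one key per line) rather than a general JSON parser:
-
```shell
# Print the "macaddr" values from a saved vtnstations response.
# Assumes one key/value pair per line, as in the output above.
extract_macs() {
  sed -n 's/.*"macaddr": "\([^"]*\)".*/\1/p' "$1"
}
```
-
-Running extract_macs on the response above prints b2c3.06b8.2dac and ce82.1b08.90cf, one per line.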
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Configure_Flowfilters.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Configure_Flowfilters.adoc
deleted file mode 100644 (file)
index 7e01ca2..0000000
+++ /dev/null
@@ -1,273 +0,0 @@
-==== How To Configure Flowfilters
-
-===== Overview
-
-* This page explains how to provision flowfilters using VTN Manager. It targets the Beryllium release, so the procedure described here does not work in other releases.
-
-* The flow-filter function discards, permits, or redirects packets of the traffic within a VTN, according to specified flow conditions. The table below lists the actions to be applied when a packet matches the condition:
-
-[options="header",cols="30%,70%"]
-|===
-| Action | Function
-| Pass | Permits the packet to pass along the determined path. +
-As options, packet transfer priority (set priority) and DSCP change (set ip-dscp) can be specified.
-| Drop | Discards the packet.
-| Redirect | Redirects the packet to a desired virtual interface. +
-As an option, it is possible to change the MAC address when the packet is transferred.
-|===
-
-.Flow Filter Example
-image::vtn/flow_filter_example.png["Flow filter example",width=500]
-
-* The following steps explain the flow-filter function:
-
-** When a packet is transferred to an interface within a virtual network, the flow-filter function evaluates whether the transferred packet matches the condition specified in the flow-list.
-
-** If the packet matches the condition, the flow-filter applies the matching action specified in the flow-filter.
-
-===== Requirements
-
-To apply the packet filter, configure the following:
-
-* Create a flow condition.
-* Specify where to apply the flow-filter, for example VTN, vBridge, or interface of vBridge.
-
-To provision OpenFlow switches, this page uses Mininet. Details on setting up Mininet are available at the page below:
-https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet
-
-Start Mininet, and create three switches (s1, s2, and s3) and four hosts (h1, h2, h3 and h4) in it.
-
-----
-sudo mn --controller=remote,ip=192.168.0.100 --topo tree,2
-----
-
-NOTE: Replace "192.168.0.100" with the IP address of OpenDaylight controller based on your environment.
-
-You can check the topology that you have created by executing the "net" command in the Mininet console.
-
-----
- mininet> net
- h1 h1-eth0:s2-eth1
- h2 h2-eth0:s2-eth2
- h3 h3-eth0:s3-eth1
- h4 h4-eth0:s3-eth2
- s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
- s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
- s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
-----
-
-In this guide, you will provision flowfilters to establish communication between h1 and h3.
-
-===== Configuration
-
-To provision the virtual L2 network for the two hosts (h1 and h3), execute the REST APIs provided by VTN Manager as follows. The curl command is used to call them.
-
-* Create a virtual tenant named vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn[the update-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
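-
-The RESTCONF calls in this guide likewise share one set of credentials, one header, and one base URL. A helper — a sketch assuming bash and the defaults used above, printing the command rather than executing it — keeps the remaining steps short:
-
```shell
# Sketch: wrapper for the RESTCONF operations used in this guide.
RC_BASE="http://localhost:8181/restconf/operations"

rc_post() {
  # $1 = RPC path (e.g. vtn:update-vtn), $2 = JSON input
  echo curl --user admin:admin -H "Content-type: application/json" \
    -X POST "$RC_BASE/$1" -d "$2"
}

rc_post vtn:update-vtn '{"input":{"tenant-name":"vtn1"}}'
```
-
-Dropping the echo executes the request instead of printing it.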
-
-* Create a virtual bridge named vbr1 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge[the update-vbridge RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1"}}'
-----
-
-* Create two interfaces into the virtual bridge by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1"}}'
-----
-
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2"}}'
-----
-
-* Configure two mappings on the interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface if1 of the virtual bridge will be mapped to the port "s2-eth1" of the switch "openflow:2" of the Mininet.
-
-*** The h1 is connected to the port "s2-eth1".
-
-** The interface if2 of the virtual bridge will be mapped to the port "s3-eth1" of the switch "openflow:3" of the Mininet.
-
-*** The h3 is connected to the port "s3-eth1".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1", "node":"openflow:2", "port-name":"s2-eth1"}}'
-----
-
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2", "node":"openflow:3", "port-name":"s3-eth1"}}'
-----
-
-* Create flowcondition named cond_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition[the set-flow-condition RPC].
-
-** For the source-network and destination-network options, get the inet addresses of hosts h1 and h3 from Mininet.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"name":"cond_1", "vtn-flow-match":[{"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.1/32","protocol":1,"destination-network":"10.0.0.3/32"},"index":"1"}]}}'
-----
-
-* A flowfilter can be applied to a VTN, a vBridge, or a vBridge interface. This page provisions a flowfilter on a vBridge interface and demonstrates the drop action type, then the pass action type.
-
-* Flow filter demonstration with the drop action type. Create a flowfilter on the vBridge interface if1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter[the set-flow-filter RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input": {"tenant-name": "vtn1", "bridge-name": "vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","vtn-drop-filter":{},"vtn-flow-action":[{"order": "1","vtn-set-inet-src-action":{"ipv4-address":"10.0.0.1/32"}},{"order": "2","vtn-set-inet-dst-action":{"ipv4-address":"10.0.0.3/32"}}],"index": "1"}]}}'
-----
-
-===== Verification of the drop filter
-
-* Execute a ping from h1 to h3. Because the "drop" action type is applied, the ping should fail and no packets should flow between hosts h1 and h3:
-
-----
- mininet> h1 ping h3
-----
-
-===== Configuration for pass filter
-
-* Update the flow filter to pass the packets by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter[the set-flow-filter RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input": {"tenant-name": "vtn1", "bridge-name": "vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","vtn-pass-filter":{},"vtn-flow-action":[{"order": "1","vtn-set-inet-src-action":{"ipv4-address":"10.0.0.1/32"}},{"order": "2","vtn-set-inet-dst-action":{"ipv4-address":"10.0.0.3/32"}}],"index": "1"}]}}'
-----
-
-===== Verification For Packets Success
-
-* Because the pass action type is now applied, the ping between hosts h1 and h3 should succeed.
-
-----
- mininet> h1 ping h3
- PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
- 64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=0.984 ms
- 64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.110 ms
- 64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.098 ms
-----
-
-* You can also verify the configuration by executing the following REST API, which shows the entire configuration of vtn1 in VTN Manager.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns/vtn/vtn1
-----
-
-----
-{
-  "vtn": [
-  {
-    "name": "vtn1",
-      "vtenant-config": {
-        "hard-timeout": 0,
-        "idle-timeout": 300,
-        "description": "creating vtn"
-      },
-      "vbridge": [
-      {
-        "name": "vbr1",
-        "vbridge-config": {
-          "age-interval": 600,
-          "description": "creating vBridge1"
-        },
-        "bridge-status": {
-          "state": "UP",
-          "path-faults": 0
-        },
-        "vinterface": [
-        {
-          "name": "if1",
-          "vinterface-status": {
-            "mapped-port": "openflow:2:1",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:2",
-            "port-name": "s2-eth1"
-          },
-          "vinterface-config": {
-            "description": "Creating if1 interface",
-            "enabled": true
-          },
-          "vinterface-input-filter": {
-            "vtn-flow-filter": [
-            {
-              "index": 1,
-              "condition": "cond_1",
-              "vtn-flow-action": [
-              {
-                "order": 1,
-                "vtn-set-inet-src-action": {
-                  "ipv4-address": "10.0.0.1/32"
-                }
-              },
-              {
-                "order": 2,
-                "vtn-set-inet-dst-action": {
-                  "ipv4-address": "10.0.0.3/32"
-                }
-              }
-              ],
-                "vtn-pass-filter": {}
-            },
-            {
-              "index": 10,
-              "condition": "cond_1",
-              "vtn-drop-filter": {}
-            }
-            ]
-          }
-        },
-        {
-          "name": "if2",
-          "vinterface-status": {
-            "mapped-port": "openflow:3:1",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:3",
-            "port-name": "s3-eth1"
-          },
-          "vinterface-config": {
-            "description": "Creating if2 interface",
-            "enabled": true
-          }
-        }
-        ]
-      }
-    ]
-  }
-  ]
-}
-----
-
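-A script can confirm from the dump above that the pass filter is in place. A minimal check — a sketch that assumes the response was saved to a file and relies only on the key names shown above:
-
```shell
# Succeed if the saved VTN configuration contains a pass filter
# bound to the flow condition cond_1; key names as in the dump above.
has_pass_filter() {
  grep -q '"vtn-pass-filter"' "$1" && grep -q '"condition": "cond_1"' "$1"
}
```
-
-The function exits non-zero if either key is missing, so it can be used directly in a shell conditional.
-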
-===== Cleaning Up
-
-* To clean up, remove both the VTN and the flowcondition.
-
-* You can delete the virtual tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn[the remove-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-* You can delete the flowcondition cond_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#remove-flow-condition[the remove-flow-condition RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_1"}}'
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Configure_Service_Function_Chaining_Support.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Configure_Service_Function_Chaining_Support.adoc
deleted file mode 100644 (file)
index 1a1e585..0000000
+++ /dev/null
@@ -1,647 +0,0 @@
-==== How To Configure Service Function Chaining using VTN Manager
-
-===== Overview
-
-This page explains how to configure VTN Manager for Service Function Chaining. It targets the Beryllium release, so the procedure described here does not work in other releases.
-
-.Service Chaining With One Service
-image::vtn/Service_Chaining_With_One_Service.png["Service Chaining With One Service",width=500]
-
-===== Requirements
-
-* Please refer to the https://wiki.opendaylight.org/view/VTN:Beryllium:Installation_Guide[Installation Pages] to run ODL with VTN Feature enabled.
-* Ensure the bridge-utils package is installed in the Mininet environment before running the Mininet script.
-* On Ubuntu, install it with "sudo apt-get install bridge-utils"; on other distributions, use the equivalent package-manager command.
-* Save the Mininet script linked below as topo_handson.py and run it in the environment where Mininet is installed.
-
-===== Mininet Script
-
-* https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet[Script for emulating network with multiple hosts].
-* Before executing the Mininet script, confirm that the controller is up and running.
-* Run the mininet script.
-* Replace <path> and <Controller IP> based on your environment.
-
-----
-sudo mn --controller=remote,ip=<Controller IP> --custom <path>/topo_handson.py --topo mytopo2
-----
-
-----
- mininet> net
- h11 h11-eth0:s1-eth1
- h12 h12-eth0:s1-eth2
- h21 h21-eth0:s2-eth1
- h22 h22-eth0:s2-eth2
- h23 h23-eth0:s2-eth3
- srvc1 srvc1-eth0:s3-eth3 srvc1-eth1:s4-eth3
- srvc2 srvc2-eth0:s3-eth4 srvc2-eth1:s4-eth4
- s1 lo:  s1-eth1:h11-eth0 s1-eth2:h12-eth0 s1-eth3:s2-eth4 s1-eth4:s3-eth2
- s2 lo:  s2-eth1:h21-eth0 s2-eth2:h22-eth0 s2-eth3:h23-eth0 s2-eth4:s1-eth3 s2-eth5:s4-eth1
- s3 lo:  s3-eth1:s4-eth2 s3-eth2:s1-eth4 s3-eth3:srvc1-eth0 s3-eth4:srvc2-eth0
- s4 lo:  s4-eth1:s2-eth5 s4-eth2:s3-eth1 s4-eth3:srvc1-eth1 s4-eth4:srvc2-eth1
-----
-
-===== Configurations
-
-====== Mininet
-
-* Follow the steps below to configure the network in Mininet as in the image below:
-
-.Mininet Configuration
-image::vtn/Mininet_Configuration.png["Mininet Configuration",width=500]
-
-====== Configure service nodes
-
-* Execute the following commands in the Mininet console where the Mininet script was started.
-
-----
- mininet> srvc1 ip addr del 10.0.0.6/8 dev srvc1-eth0
- mininet> srvc1 brctl addbr br0
- mininet> srvc1 brctl addif br0 srvc1-eth0
- mininet> srvc1 brctl addif br0 srvc1-eth1
- mininet> srvc1 ifconfig br0 up
- mininet> srvc1 tc qdisc add dev srvc1-eth1 root netem delay 200ms
- mininet> srvc2 ip addr del 10.0.0.7/8 dev srvc2-eth0
- mininet> srvc2 brctl addbr br0
- mininet> srvc2 brctl addif br0 srvc2-eth0
- mininet> srvc2 brctl addif br0 srvc2-eth1
- mininet> srvc2 ifconfig br0 up
- mininet> srvc2 tc qdisc add dev srvc2-eth1 root netem delay 300ms
-----
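-
-Both service nodes are set up with the same command sequence; only the node name, the IP address to remove, and the netem delay differ. A generator — a sketch that simply prints the Mininet-console commands shown above for pasting or review — makes the pattern explicit:
-
```shell
# Print the Mininet-console setup commands for one service node.
# $1 = node name, $2 = IP/prefix to remove, $3 = netem delay (e.g. 200ms)
svc_setup() {
  echo "$1 ip addr del $2 dev $1-eth0"
  echo "$1 brctl addbr br0"
  echo "$1 brctl addif br0 $1-eth0"
  echo "$1 brctl addif br0 $1-eth1"
  echo "$1 ifconfig br0 up"
  echo "$1 tc qdisc add dev $1-eth1 root netem delay $3"
}

svc_setup srvc1 10.0.0.6/8 200ms
svc_setup srvc2 10.0.0.7/8 300ms
```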
-
-===== Controller
-
-====== Multi-Tenancy
-
-* Execute the commands below to configure the network topology in the controller as in the image below:
-
-.Tenant2
-image::vtn/Tenant2.png["Tenant2",width=500]
-
-====== Execute the following commands in the controller
-
-NOTE:
-The command below works around a difference in the behavior of VTN Manager with the Beryllium topology. The link below has the details of this bug: https://bugs.opendaylight.org/show_bug.cgi?id=3818.
-
-----
-curl --user admin:admin -H 'content-type: application/json' -H 'ipaddr:127.0.0.1' -X PUT http://localhost:8181/restconf/config/vtn-static-topology:vtn-static-topology/static-edge-ports -d '{"static-edge-ports": {"static-edge-port": [ {"port": "openflow:3:3"}, {"port": "openflow:3:4"}, {"port": "openflow:4:3"}, {"port": "openflow:4:4"}]}}'
-----
-
-* Create a virtual tenant named vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn[the update-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1","update-mode":"CREATE","operation":"SET","description":"creating vtn","idle-timeout":300,"hard-timeout":0}}'
-----
-
-* Create a virtual bridge named vbr1 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge[the update-vbridge RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"creating vbr","tenant-name":"vtn1","bridge-name":"vbr1"}}'
-----
-
-* Create interface if1 into the virtual bridge vbr1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vbrif1 interface","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1"}}'
-----
-
-* Configure port mapping on the interface by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface if1 of the virtual bridge will be mapped to the port "s1-eth2" of the switch "openflow:1" of the Mininet.
-
-*** The h12 is connected to the port "s1-eth2".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"vlan-id":0,"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1","node":"openflow:1","port-name":"s1-eth2"}}'
-----
-
-* Create interface if2 into the virtual bridge vbr1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vbrif2 interface","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2"}}'
-----
-
-* Configure port mapping on the interface by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface if2 of the virtual bridge will be mapped to the port "s2-eth2" of the switch "openflow:2" of the Mininet.
-
-*** The h22 is connected to the port "s2-eth2".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"vlan-id":0,"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2","node":"openflow:2","port-name":"s2-eth2"}}'
-----
-
-* Create interface if3 into the virtual bridge vbr1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vbrif3 interface","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if3"}}'
-----
-
-* Configure port mapping on the interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface if3 of the virtual bridge will be mapped to the port "s2-eth3" of the switch "openflow:2" of the Mininet.
-
-*** The h23 is connected to the port "s2-eth3".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"vlan-id":0,"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if3","node":"openflow:2","port-name":"s2-eth3"}}'
-----
-
-===== Traffic filtering
-
-* Create flowcondition named cond_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition[the set-flow-condition RPC].
-
-** For the source-network and destination-network options, get the inet addresses of hosts h12 (source) and h22 (destination) from Mininet.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1","vtn-flow-match":[{"index":1,"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.2/32","destination-network":"10.0.0.4/32"}}]}}'
-----
-
-* Flow filter demonstration with the drop action type. Create a flowfilter on the vBridge interface if1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter[the set-flow-filter RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","index":10,"vtn-drop-filter":{}}]}}'
-----
-
-===== Service Chaining
-
-====== With One Service
-
-* Execute the commands below to configure, in the controller, a network topology that sends specific traffic via a single service (an external device), as in the image below:
-
-.Service Chaining With One Service LLD
-image::vtn/Service_Chaining_With_One_Service_LLD.png["Service Chaining With One Service LLD",width=500]
-
-* Create a virtual terminal named vt_srvc1_1 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal[the update-vterminal RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc1_1","description":"Creating vterminal"}}'
-----
-
-* Create interface IF into the virtual terminal vt_srvc1_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc1_1","interface-name":"IF"}}'
-----
-
-* Configure port mapping on the interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface IF of the virtual terminal will be mapped to the port "s3-eth3" of the switch "openflow:3" of the Mininet.
-
-*** The service node srvc1 is connected to the port "s3-eth3".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc1_1","interface-name":"IF","node":"openflow:3","port-name":"s3-eth3"}}'
-----
-
-* Create a virtual terminal named vt_srvc1_2 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal[the update-vterminal RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","description":"Creating vterminal"}}'
-----
-
-* Create interface IF into the virtual terminal vt_srvc1_2 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF"}}'
-----
-
-* Configure port mapping on the interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface IF of the virtual terminal will be mapped to the port "s4-eth3" of the switch "openflow:4" of the Mininet.
-
-*** The service node srvc1 is connected to the port "s4-eth3".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF","node":"openflow:4","port-name":"s4-eth3"}}'
-----
-
-* Create flowcondition named cond_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition[the set-flow-condition RPC].
-
-** For the source-network and destination-network options, get the inet addresses of hosts h12 (source) and h22 (destination) from Mininet.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1","vtn-flow-match":[{"index":1,"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.2/32","destination-network":"10.0.0.4/32"}}]}}'
-----
-
-* Create flowcondition named cond_any by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition[the set-flow-condition RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_any","vtn-flow-match":[{"index":1}]}}'
-----
-
-* Flow filter demonstration with the redirect action type. Create a flowfilter on the virtual terminal vt_srvc1_2 interface IF by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter[the set-flow-filter RPC].
-
-** The flowfilter redirects traffic arriving at vt_srvc1_2 to the vbr1 interface if2.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF","vtn-flow-filter":[{"condition":"cond_any","index":10,"vtn-redirect-filter":{"redirect-destination":{"bridge-name":"vbr1","interface-name":"if2"},"output":"true"}}]}}'
-----
-
-* Flow filter demonstration with the redirect action type. Create a flowfilter on the vBridge vbr1 interface if1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter[the set-flow-filter RPC].
-
-** The flow filter redirects traffic from interface if1 of vbr1 to vt_srvc1_1.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1","vtn-flow-filter":[{"condition":"cond_1","index":10,"vtn-redirect-filter":{"redirect-destination":{"terminal-name":"vt_srvc1_1","interface-name":"IF"},"output":"true"}}]}}'
-----
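
The two set-flow-filter payloads above differ only in the virtual node they attach to and in the redirect destination. A minimal Python sketch of that shape (the helper name is illustrative, not part of the VTN API):

```python
import json

def set_flow_filter_input(tenant, node, interface, condition, destination):
    # node identifies the virtual node whose input filter is set, e.g.
    # {"terminal-name": "vt_srvc1_2"} or {"bridge-name": "vbr1"};
    # destination is the redirect target in the same tenant.
    body = {"output": "false", "tenant-name": tenant, "interface-name": interface}
    body.update(node)
    body["vtn-flow-filter"] = [{
        "condition": condition,
        "index": 10,
        "vtn-redirect-filter": {
            "redirect-destination": destination,
            "output": "true",  # apply as an output filter at the destination
        },
    }]
    return {"input": body}

# The first filter above: redirect cond_any traffic entering vt_srvc1_2 IF
# to interface if2 of vbr1.
payload = set_flow_filter_input(
    "vtn1", {"terminal-name": "vt_srvc1_2"}, "IF", "cond_any",
    {"bridge-name": "vbr1", "interface-name": "if2"})
print(json.dumps(payload))
```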
-
-===== Verification
-
-.Service Chaining With One Service
-image::vtn/Service_Chaining_With_One_Service_Verification.png["Service Chaining With One Service Verification",width=500]
-
-* Ping from host h12 to host h22 to verify host reachability. Because the traffic now passes through the service, each reply takes about 200ms, as shown below.
-
-----
- mininet> h12 ping h22
- PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
- 64 bytes from 10.0.0.4: icmp_seq=35 ttl=64 time=209 ms
- 64 bytes from 10.0.0.4: icmp_seq=36 ttl=64 time=201 ms
- 64 bytes from 10.0.0.4: icmp_seq=37 ttl=64 time=200 ms
- 64 bytes from 10.0.0.4: icmp_seq=38 ttl=64 time=200 ms
-----
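
The round-trip times in the ping output above can be checked mechanically. A small sketch (the sample output is copied from the session above) that extracts each RTT and confirms the ~200 ms service delay:

```python
import re

ping_output = """\
64 bytes from 10.0.0.4: icmp_seq=35 ttl=64 time=209 ms
64 bytes from 10.0.0.4: icmp_seq=36 ttl=64 time=201 ms
64 bytes from 10.0.0.4: icmp_seq=37 ttl=64 time=200 ms
64 bytes from 10.0.0.4: icmp_seq=38 ttl=64 time=200 ms
"""

# Every round trip through one 200 ms service should take at least 200 ms.
rtts = [float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ms", ping_output)]
assert all(rtt >= 200 for rtt in rtts)
print(rtts)
```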
-
-====== With two services
-
-* Execute the commands below to configure a network topology that sends specific traffic through two services (external devices), as shown in the following image.
-
-.Service Chaining With Two Services LLD
-image::vtn/Service_Chaining_With_Two_Services_LLD.png["Service Chaining With Two Services LLD",width=500]
-
-* Create a virtual terminal named vt_srvc2_1 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal[the update-vterminal RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc2_1","description":"Creating vterminal"}}'
-----
-
-* Create an interface named IF in the virtual terminal vt_srvc2_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc2_1","interface-name":"IF"}}'
-----
-
-* Configure port mapping on the interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface IF of the virtual terminal will be mapped to the port "s3-eth4" of the switch "openflow:3" in Mininet.
-
-*** The host h12 is connected to the port "s3-eth4".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc2_1","interface-name":"IF","node":"openflow:3","port-name":"s3-eth4"}}'
-----
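
The set-port-map input above is a flat mapping from a virtual interface to a physical switch port. A minimal sketch of the payload (helper name illustrative; only builds the JSON):

```python
import json

def set_port_map_input(tenant, terminal, interface, node, port_name):
    # Map a vterminal interface to a physical port of an OpenFlow switch.
    return {"input": {"tenant-name": tenant, "terminal-name": terminal,
                      "interface-name": interface, "node": node,
                      "port-name": port_name}}

# The mapping above: vt_srvc2_1 IF -> port s3-eth4 of switch openflow:3.
payload = set_port_map_input("vtn1", "vt_srvc2_1", "IF", "openflow:3", "s3-eth4")
print(json.dumps(payload))
```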
-
-* Create a virtual terminal named vt_srvc2_2 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vterminal.html#update-vterminal[the update-vterminal RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vterminal:update-vterminal -d '{"input":{"update-mode":"CREATE","operation":"SET","tenant-name":"vtn1","terminal-name":"vt_srvc2_2","description":"Creating vterminal"}}'
-----
-
-* Create an interface named IF in the virtual terminal vt_srvc2_2 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"update-mode":"CREATE","operation":"SET","description":"Creating vterminal IF","enabled":"true","tenant-name":"vtn1","terminal-name":"vt_srvc2_2","interface-name":"IF"}}'
-----
-
-* Configure port mapping on the interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface IF of the virtual terminal will be mapped to the port "s4-eth4" of the switch "openflow:4" in Mininet.
-
-*** The host h22 is connected to the port "s4-eth4".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1","terminal-name":"vt_srvc2_2","interface-name":"IF","node":"openflow:4","port-name":"s4-eth4"}}'
-----
-
-* Flow filter demonstration with the redirect action type. Create a flow filter on interface IF of the virtual terminal vt_srvc2_2 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter[the set-flow-filter RPC].
-
-** The flow filter redirects traffic from vt_srvc2_2 to interface if2 of vbr1.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","terminal-name":"vt_srvc2_2","interface-name":"IF","vtn-flow-filter":[{"condition":"cond_any","index":10,"vtn-redirect-filter":{"redirect-destination":{"bridge-name":"vbr1","interface-name":"if2"},"output":"true"}}]}}'
-----
-
-* Flow filter demonstration with the redirect action type. Create a flow filter on interface IF of the virtual terminal vt_srvc1_2 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-filter.html#set-flow-filter[the set-flow-filter RPC].
-
-** The flow filter redirects traffic from vt_srvc1_2 to vt_srvc2_1.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-filter:set-flow-filter -d '{"input":{"output":"false","tenant-name":"vtn1","terminal-name":"vt_srvc1_2","interface-name":"IF","vtn-flow-filter":[{"condition":"cond_any","index":10,"vtn-redirect-filter":{"redirect-destination":{"terminal-name":"vt_srvc2_1","interface-name":"IF"},"output":"true"}}]}}'
-----
-
-===== Verification
-
-.Service Chaining With Two Services
-image::vtn/Service_Chaining_With_Two_Services.png["Service Chaining With Two Services",width=500]
-
-* Ping from host h12 to host h22 to verify host reachability. Because the traffic now passes through two services, each reply takes about 500ms, as shown below.
-
-----
- mininet> h12 ping h22
- PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
- 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=512 ms
- 64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=501 ms
- 64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=500 ms
- 64 bytes from 10.0.0.4: icmp_seq=4 ttl=64 time=500 ms
-----
-
-* You can verify the configuration by executing the following REST API, which shows the entire configuration in VTN Manager.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns
-----
-
-----
-{
-  "vtn": [
-  {
-    "name": "vtn1",
-      "vtenant-config": {
-        "hard-timeout": 0,
-        "idle-timeout": 300,
-        "description": "creating vtn"
-      },
-      "vbridge": [
-      {
-        "name": "vbr1",
-        "vbridge-config": {
-          "age-interval": 600,
-          "description": "creating vbr"
-        },
-        "bridge-status": {
-          "state": "UP",
-          "path-faults": 0
-        },
-        "vinterface": [
-        {
-          "name": "if1",
-          "vinterface-status": {
-            "mapped-port": "openflow:1:2",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:1",
-            "port-name": "s1-eth2"
-          },
-          "vinterface-config": {
-            "description": "Creating vbrif1 interface",
-            "enabled": true
-          },
-          "vinterface-input-filter": {
-            "vtn-flow-filter": [
-            {
-              "index": 10,
-              "condition": "cond_1",
-              "vtn-redirect-filter": {
-                "output": true,
-                "redirect-destination": {
-                  "terminal-name": "vt_srvc1_1",
-                  "interface-name": "IF"
-                }
-              }
-            }
-            ]
-          }
-        },
-        {
-          "name": "if2",
-          "vinterface-status": {
-            "mapped-port": "openflow:2:2",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:2",
-            "port-name": "s2-eth2"
-          },
-          "vinterface-config": {
-            "description": "Creating vbrif2 interface",
-            "enabled": true
-          }
-        },
-        {
-          "name": "if3",
-          "vinterface-status": {
-            "mapped-port": "openflow:2:3",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:2",
-            "port-name": "s2-eth3"
-          },
-          "vinterface-config": {
-            "description": "Creating vbrif3 interface",
-            "enabled": true
-          }
-        }
-        ]
-      }
-    ],
-      "vterminal": [
-      {
-        "name": "vt_srvc2_2",
-        "bridge-status": {
-          "state": "UP",
-          "path-faults": 0
-        },
-        "vinterface": [
-        {
-          "name": "IF",
-          "vinterface-status": {
-            "mapped-port": "openflow:4:4",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:4",
-            "port-name": "s4-eth4"
-          },
-          "vinterface-config": {
-            "description": "Creating vterminal IF",
-            "enabled": true
-          },
-          "vinterface-input-filter": {
-            "vtn-flow-filter": [
-            {
-              "index": 10,
-              "condition": "cond_any",
-              "vtn-redirect-filter": {
-                "output": true,
-                "redirect-destination": {
-                  "bridge-name": "vbr1",
-                  "interface-name": "if2"
-                }
-              }
-            }
-            ]
-          }
-        }
-        ],
-          "vterminal-config": {
-            "description": "Creating vterminal"
-          }
-      },
-      {
-        "name": "vt_srvc1_1",
-        "bridge-status": {
-          "state": "UP",
-          "path-faults": 0
-        },
-        "vinterface": [
-        {
-          "name": "IF",
-          "vinterface-status": {
-            "mapped-port": "openflow:3:3",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:3",
-            "port-name": "s3-eth3"
-          },
-          "vinterface-config": {
-            "description": "Creating vterminal IF",
-            "enabled": true
-          }
-        }
-        ],
-          "vterminal-config": {
-            "description": "Creating vterminal"
-          }
-      },
-      {
-        "name": "vt_srvc1_2",
-        "bridge-status": {
-          "state": "UP",
-          "path-faults": 0
-        },
-        "vinterface": [
-        {
-          "name": "IF",
-          "vinterface-status": {
-            "mapped-port": "openflow:4:3",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:4",
-            "port-name": "s4-eth3"
-          },
-          "vinterface-config": {
-            "description": "Creating vterminal IF",
-            "enabled": true
-          },
-          "vinterface-input-filter": {
-            "vtn-flow-filter": [
-            {
-              "index": 10,
-              "condition": "cond_any",
-              "vtn-redirect-filter": {
-                "output": true,
-                "redirect-destination": {
-                  "terminal-name": "vt_srvc2_1",
-                  "interface-name": "IF"
-                }
-              }
-            }
-            ]
-          }
-        }
-        ],
-          "vterminal-config": {
-            "description": "Creating vterminal"
-          }
-      },
-      {
-        "name": "vt_srvc2_1",
-        "bridge-status": {
-          "state": "UP",
-          "path-faults": 0
-        },
-        "vinterface": [
-        {
-          "name": "IF",
-          "vinterface-status": {
-            "mapped-port": "openflow:3:4",
-            "state": "UP",
-            "entity-state": "UP"
-          },
-          "port-map-config": {
-            "vlan-id": 0,
-            "node": "openflow:3",
-            "port-name": "s3-eth4"
-          },
-          "vinterface-config": {
-            "description": "Creating vterminal IF",
-            "enabled": true
-          }
-        }
-        ],
-          "vterminal-config": {
-            "description": "Creating vterminal"
-          }
-      }
-    ]
-  }
-  ]
-}
-----
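
The operational-datastore dump above encodes the whole service chain in the `vinterface-input-filter` entries. A sketch that walks such a response and lists the redirect hops (the `response` literal below is a trimmed excerpt of the dump, kept only to show the structure):

```python
# Trimmed excerpt of the GET /restconf/operational/vtn:vtns response above.
response = {"vtn": [{
    "name": "vtn1",
    "vterminal": [{
        "name": "vt_srvc1_2",
        "vinterface": [{
            "name": "IF",
            "vinterface-input-filter": {"vtn-flow-filter": [{
                "index": 10, "condition": "cond_any",
                "vtn-redirect-filter": {
                    "output": True,
                    "redirect-destination": {"terminal-name": "vt_srvc2_1",
                                             "interface-name": "IF"}}}]}}]}]}]}

def list_redirects(vtn):
    # Walk every virtual interface of every vBridge/vTerminal and collect
    # (source node name, redirect destination) pairs.
    hops = []
    for node_list in ("vbridge", "vterminal"):
        for node in vtn.get(node_list, []):
            for vif in node.get("vinterface", []):
                for f in vif.get("vinterface-input-filter", {}).get("vtn-flow-filter", []):
                    dest = f.get("vtn-redirect-filter", {}).get("redirect-destination")
                    if dest:
                        hops.append((node["name"], dest))
    return hops

print(list_redirects(response["vtn"][0]))
```

Run against the full dump, this would list all three redirect hops of the two-service chain.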
-
-===== Cleaning Up
-
-* The following commands clean up both the VTN and the flow conditions.
-
-* You can delete the virtual tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn[the remove-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-* You can delete the flowcondition cond_1 and cond_any by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#remove-flow-condition[the remove-flow-condition RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_1"}}'
-----
-
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_any"}}'
-----
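
The three cleanup RPCs above can be expressed as an ordered list of (path, body) pairs; the sketch below only builds and prints them rather than issuing HTTP requests (the endpoint paths are copied from the curl commands above):

```python
import json

base = "/restconf/operations/"
# Remove the tenant first, then each flow condition.
calls = [(base + "vtn:remove-vtn", {"input": {"tenant-name": "vtn1"}})]
for cond in ("cond_1", "cond_any"):
    calls.append((base + "vtn-flow-condition:remove-flow-condition",
                  {"input": {"name": cond}}))

for path, body in calls:
    print(path, json.dumps(body))
```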
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Create_Mac_Map_In_VTN.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Create_Mac_Map_In_VTN.adoc
deleted file mode 100644 (file)
index 5c58b5a..0000000
+++ /dev/null
@@ -1,168 +0,0 @@
-==== How To Create Mac Map In VTN
-
-===== Overview
-
-* This page demonstrates MAC mapping. The demonstration enables communication between two hosts, and denies communication for a particular host, by mapping host MAC addresses to a vBridge.
-
-* This page targets the Beryllium release, so the procedure described here does not work in other releases.
-
-.Single Controller Mapping
-image::vtn/Single_Controller_Mapping.png["Single_Controller_Mapping",width=500]
-
-===== Requirement
-
-====== Configure Mininet and create a topology
-
-* https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet#Network_with_Multiple_Hosts_for_Service_Function_Chain[Script for emulating network with multiple hosts].
-* Before executing the mininet script, please confirm Controller is up and running.
-* Run the mininet script.
-* Replace <path> and <Controller IP> based on your environment.
-
-----
-sudo mn --controller=remote,ip=<Controller IP> --custom <path>/topo_handson.py --topo mytopo2
-----
-
-----
-mininet> net
-h11 h11-eth0:s1-eth1
-h12 h12-eth0:s1-eth2
-h21 h21-eth0:s2-eth1
-h22 h22-eth0:s2-eth2
-h23 h23-eth0:s2-eth3
-srvc1 srvc1-eth0:s3-eth3 srvc1-eth1:s4-eth3
-srvc2 srvc2-eth0:s3-eth4 srvc2-eth1:s4-eth4
-s1 lo:  s1-eth1:h11-eth0 s1-eth2:h12-eth0 s1-eth3:s2-eth4 s1-eth4:s3-eth2
-s2 lo:  s2-eth1:h21-eth0 s2-eth2:h22-eth0 s2-eth3:h23-eth0 s2-eth4:s1-eth3 s2-eth5:s4-eth1
-s3 lo:  s3-eth1:s4-eth2 s3-eth2:s1-eth4 s3-eth3:srvc1-eth0 s3-eth4:srvc2-eth0
-s4 lo:  s4-eth1:s2-eth5 s4-eth2:s3-eth1 s4-eth3:srvc1-eth1 s4-eth4:srvc2-eth1
-----
-
-===== Configuration
-
-To create a MAC mapping in VTN, invoke the REST APIs provided by VTN Manager as follows. The examples use the curl command.
-
-* Create a virtual tenant named Tenant1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn[the update-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"Tenant1"}}'
-----
-
-* Create a virtual bridge named vBridge1 in the tenant Tenant1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge[the update-vbridge RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"Tenant1","bridge-name":"vBridge1"}}'
-----
-
-* Configure MAC mappings on vBridge1 with the MAC addresses of host h12 and host h22, to allow their communication, by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-mac-map.html#set-mac-map[the set-mac-map RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-mac-map:set-mac-map -d '{"input":{"operation":"SET","allowed-hosts":["de:05:40:c4:96:76@0","62:c5:33:bc:d7:4e@0"],"tenant-name":"Tenant1","bridge-name":"vBridge1"}}'
-----
-
-NOTE: The MAC addresses of hosts h12 and h22 can be obtained with the following command in Mininet.
-
-----
- mininet> h12 ifconfig
- h12-eth0  Link encap:Ethernet  HWaddr 62:c5:33:bc:d7:4e
- inet addr:10.0.0.2  Bcast:10.255.255.255  Mask:255.0.0.0
- inet6 addr: fe80::60c5:33ff:febc:d74e/64 Scope:Link
-----
-
-----
- mininet> h22 ifconfig
- h22-eth0  Link encap:Ethernet  HWaddr de:05:40:c4:96:76
- inet addr:10.0.0.4  Bcast:10.255.255.255  Mask:255.0.0.0
- inet6 addr: fe80::dc05:40ff:fec4:9676/64 Scope:Link
-----
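
The `allowed-hosts` entries in the set-mac-map call above have the form `<mac>@<vlan-id>`. A small sketch that extracts the MAC addresses from ifconfig output like the sessions above and builds those entries (VLAN 0 means untagged):

```python
import re

# Sample lines copied from the two ifconfig sessions above.
ifconfig = """\
h12-eth0  Link encap:Ethernet  HWaddr 62:c5:33:bc:d7:4e
h22-eth0  Link encap:Ethernet  HWaddr de:05:40:c4:96:76
"""

# Pull each HWaddr field and append the "@0" VLAN suffix.
macs = re.findall(r"HWaddr ([0-9a-f:]{17})", ifconfig)
allowed_hosts = [mac + "@0" for mac in macs]
print(allowed_hosts)
```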
-
-* A MAC mapping is not activated just by configuring it; two-way communication between the hosts must be established to activate it.
-
-* Ping host h22 from host h12 in Mininet. The ping fails because the mapping has been activated in only one direction.
-
-----
- mininet> h12 ping h22
- PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
- From 10.0.0.2 icmp_seq=1 Destination Host Unreachable
- From 10.0.0.2 icmp_seq=2 Destination Host Unreachable
-----
-
-* Ping host h12 from host h22 in Mininet. The ping now succeeds because both ends of the communication are activated.
-
-----
- mininet> h22 ping h12
- PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
- 64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=91.8 ms
- 64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.510 ms
-----
-
-* With both ends activated, host h12 can now ping host h22.
-
-----
- mininet> h12 ping h22
- PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
- 64 bytes from 10.0.0.4: icmp_req=1 ttl=64 time=0.780 ms
- 64 bytes from 10.0.0.4: icmp_req=2 ttl=64 time=0.079 ms
-----
-
-===== Verification
-
-* To view the configured MAC mapping and its allowed hosts, execute the following command.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns/vtn/Tenant1/vbridge/vBridge1/mac-map
-----
-
-----
-{
-  "mac-map": {
-    "mac-map-status": {
-      "mapped-host": [
-      {
-        "mac-address": "c6:44:22:ba:3e:72",
-          "vlan-id": 0,
-          "port-id": "openflow:1:2"
-      },
-      {
-        "mac-address": "f6:e0:43:b6:3a:b7",
-        "vlan-id": 0,
-        "port-id": "openflow:2:2"
-      }
-      ]
-    },
-      "mac-map-config": {
-        "allowed-hosts": {
-          "vlan-host-desc-list": [
-          {
-            "host": "c6:44:22:ba:3e:72@0"
-          },
-          {
-            "host": "f6:e0:43:b6:3a:b7@0"
-          }
-          ]
-        }
-      }
-  }
-}
-----
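
In the response above, `mac-map-config` holds what was configured and `mac-map-status` holds what has actually been activated. A sketch that checks the two agree (the `mac_map` literal mirrors the response above; an allowed host only appears in the status after two-way traffic):

```python
# Structure copied from the GET .../mac-map response above.
mac_map = {
    "mac-map-status": {"mapped-host": [
        {"mac-address": "c6:44:22:ba:3e:72", "vlan-id": 0, "port-id": "openflow:1:2"},
        {"mac-address": "f6:e0:43:b6:3a:b7", "vlan-id": 0, "port-id": "openflow:2:2"}]},
    "mac-map-config": {"allowed-hosts": {"vlan-host-desc-list": [
        {"host": "c6:44:22:ba:3e:72@0"}, {"host": "f6:e0:43:b6:3a:b7@0"}]}},
}

# Normalize both sides to "<mac>@<vlan>" strings and diff them.
allowed = {d["host"]
           for d in mac_map["mac-map-config"]["allowed-hosts"]["vlan-host-desc-list"]}
mapped = {h["mac-address"] + "@" + str(h["vlan-id"])
          for h in mac_map["mac-map-status"]["mapped-host"]}
print(sorted(allowed - mapped))  # hosts configured but not yet activated
```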
-
-NOTE: When deny is configured, a broadcast message is sent to all hosts connected to the vBridge, so two-way communication does not need to be established as it does for allow; the hosts can communicate directly.
-
-* To deny host h23 communication with the hosts connected to vBridge1, apply the following configuration.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-mac-map:set-mac-map -d '{"input":{"operation": "SET", "denied-hosts": ["0a:d3:ea:3d:8f:a5@0"],"tenant-name": "Tenant1","bridge-name": "vBridge1"}}'
-----
-
-===== Cleaning Up
-
-* You can delete the virtual tenant Tenant1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn[the remove-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"Tenant1"}}'
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Provision_Virtual_L2_Network.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Provision_Virtual_L2_Network.adoc
deleted file mode 100644 (file)
index a43fe04..0000000
+++ /dev/null
@@ -1,182 +0,0 @@
-==== How to provision virtual L2 Network
-
-===== Overview
-
-This page explains how to provision a virtual L2 network using VTN Manager. This page targets the Beryllium release, so the procedure described here does not work in other releases.
-
-.Virtual L2 network for host1 and host3
-image::vtn/How_to_provision_virtual_L2_network.png["Virtual L2 network for host1 and host3",width=500]
-
-===== Requirements
-
-====== Mininet
-
-* To provision OpenFlow switches, this page uses Mininet. For Mininet details and setup, see the following page:
-https://wiki.opendaylight.org/view/OpenDaylight_Controller:Installation#Using_Mininet
-
-* Start Mininet and create three switches (s1, s2, and s3) and four hosts (h1, h2, h3, and h4) in it.
-
-----
- mininet@mininet-vm:~$ sudo mn --controller=remote,ip=192.168.0.100 --topo tree,2
-----
-
-NOTE: Replace "192.168.0.100" with the IP address of the OpenDaylight controller based on your environment.
-
-* You can check the topology that you have created by executing the "net" command in the Mininet console.
-
-----
- mininet> net
- h1 h1-eth0:s2-eth1
- h2 h2-eth0:s2-eth2
- h3 h3-eth0:s3-eth1
- h4 h4-eth0:s3-eth2
- s1 lo:  s1-eth1:s2-eth3 s1-eth2:s3-eth3
- s2 lo:  s2-eth1:h1-eth0 s2-eth2:h2-eth0 s2-eth3:s1-eth1
- s3 lo:  s3-eth1:h3-eth0 s3-eth2:h4-eth0 s3-eth3:s1-eth2
-----
-
-* In this guide, you will provision a virtual L2 network to establish communication between h1 and h3.
-
-===== Configuration
-
-To provision the virtual L2 network for the two hosts (h1 and h3), invoke the REST APIs provided by VTN Manager as follows. The examples use the curl command.
-
-* Create a virtual tenant named vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn[the update-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-* Create a virtual bridge named vbr1 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge[the update-vbridge RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1"}}'
-----
-
-* Create two interfaces into the virtual bridge by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1"}}'
-----
-
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2"}}'
-----
-
-* Configure two mappings on the created interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface if1 of the virtual bridge will be mapped to the port "s2-eth1" of the switch "openflow:2" of the Mininet.
-*** The h1 is connected to the port "s2-eth1".
-
-** The interface if2 of the virtual bridge will be mapped to the port "s3-eth1" of the switch "openflow:3" of the Mininet.
-*** The h3 is connected to the port "s3-eth1".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1", "node":"openflow:2", "port-name":"s2-eth1"}}'
-----
-
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2", "node":"openflow:3", "port-name":"s3-eth1"}}'
-----
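
The whole configuration sequence above can be summarized as an ordered plan of RESTCONF calls: create the tenant, the bridge, then each interface and its port mapping. A sketch that only builds and prints the (path, body) pairs (the order interleaves each interface with its port map, which is equivalent here; helper structure is illustrative):

```python
import json

ops = "/restconf/operations/"
plan = [
    (ops + "vtn:update-vtn", {"tenant-name": "vtn1"}),
    (ops + "vtn-vbridge:update-vbridge",
     {"tenant-name": "vtn1", "bridge-name": "vbr1"}),
]
# if1 -> s2-eth1 on openflow:2 (h1), if2 -> s3-eth1 on openflow:3 (h3).
for ifname, node, port in (("if1", "openflow:2", "s2-eth1"),
                           ("if2", "openflow:3", "s3-eth1")):
    plan.append((ops + "vtn-vinterface:update-vinterface",
                 {"tenant-name": "vtn1", "bridge-name": "vbr1",
                  "interface-name": ifname}))
    plan.append((ops + "vtn-port-map:set-port-map",
                 {"tenant-name": "vtn1", "bridge-name": "vbr1",
                  "interface-name": ifname, "node": node, "port-name": port}))

for path, body in plan:
    print(path, json.dumps({"input": body}))
```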
-
-===== Verification
-
-* Execute ping from h1 to h3 to verify that the virtual L2 network for h1 and h3 has been provisioned successfully.
-
-----
- mininet> h1 ping h3
- PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
- 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=243 ms
- 64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.341 ms
- 64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.078 ms
- 64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.079 ms
-----
-
-* You can also verify the configuration by executing the following REST API, which shows the entire configuration in VTN Manager.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns/
-----
-
-* The result of the command should look like the following.
-
-----
-{
-  "vtns": {
-    "vtn": [
-    {
-      "name": "vtn1",
-        "vtenant-config": {
-          "idle-timeout": 300,
-          "hard-timeout": 0
-        },
-        "vbridge": [
-        {
-          "name": "vbr1",
-          "bridge-status": {
-            "state": "UP",
-            "path-faults": 0
-          },
-          "vbridge-config": {
-            "age-interval": 600
-          },
-          "vinterface": [
-          {
-            "name": "if2",
-            "vinterface-status": {
-              "entity-state": "UP",
-              "state": "UP",
-              "mapped-port": "openflow:3:3"
-            },
-            "vinterface-config": {
-              "enabled": true
-            },
-            "port-map-config": {
-              "vlan-id": 0,
-              "port-name": "s3-eth1",
-              "node": "openflow:3"
-            }
-          },
-          {
-            "name": "if1",
-            "vinterface-status": {
-              "entity-state": "UP",
-              "state": "UP",
-              "mapped-port": "openflow:2:1"
-            },
-            "vinterface-config": {
-              "enabled": true
-            },
-            "port-map-config": {
-              "vlan-id": 0,
-              "port-name": "s2-eth1",
-              "node": "openflow:2"
-            }
-          }
-          ]
-        }
-      ]
-    }
-    ]
-  }
-}
-----
-
-===== Cleaning Up
-
-* You can delete the virtual tenant vtn1 by executing
-https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn[the remove-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Use_VTN_to_change_the_path_of_the_packet_flow.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_Use_VTN_to_change_the_path_of_the_packet_flow.adoc
deleted file mode 100644 (file)
index 9ac4b23..0000000
+++ /dev/null
@@ -1,313 +0,0 @@
-==== How to use VTN to change the path of the packet flow
-
-===== Overview
-
-* This page explains how to create a specific VTN path map using VTN Manager. This page targets the Beryllium release, so the procedure described here does not work in other releases.
-
-.Pathmap
-image::vtn/Pathmap.png["Pathmap",width=500]
-
-===== Requirement
-
-* Save the Mininet script given below as pathmap_test.py and run it in the environment where Mininet is installed.
-
-* The script creates the following topology:
-
-----
- from mininet.topo import Topo
- class MyTopo( Topo ):
-    "Simple topology example."
-    def __init__( self ):
-        "Create custom topo."
-        # Initialize topology
-        Topo.__init__( self )
-        # Add hosts and switches
-        leftHost = self.addHost( 'h1' )
-        rightHost = self.addHost( 'h2' )
-        leftSwitch = self.addSwitch( 's1' )
-        middleSwitch = self.addSwitch( 's2' )
-        middleSwitch2 = self.addSwitch( 's4' )
-        rightSwitch = self.addSwitch( 's3' )
-        # Add links
-        self.addLink( leftHost, leftSwitch )
-        self.addLink( leftSwitch, middleSwitch )
-        self.addLink( leftSwitch, middleSwitch2 )
-        self.addLink( middleSwitch, rightSwitch )
-        self.addLink( middleSwitch2, rightSwitch )
-        self.addLink( rightSwitch, rightHost )
- topos = { 'mytopo': ( lambda: MyTopo() ) }
-----
-
-* After creating the file with the above script, start Mininet as follows:
-
-----
-sudo mn --controller=remote,ip=10.106.138.124 --custom pathmap_test.py --topo mytopo
-----
-
-NOTE: Replace "10.106.138.124" with the IP address of OpenDaylight controller based on your environment.
-
-----
- mininet> net
- h1 h1-eth0:s1-eth1
- h2 h2-eth0:s3-eth3
- s1 lo:  s1-eth1:h1-eth0 s1-eth2:s2-eth1 s1-eth3:s4-eth1
- s2 lo:  s2-eth1:s1-eth2 s2-eth2:s3-eth1
- s3 lo:  s3-eth1:s2-eth2 s3-eth2:s4-eth2 s3-eth3:h2-eth0
- s4 lo:  s4-eth1:s1-eth3 s4-eth2:s3-eth2
- c0
-----
-
-* Generate traffic by pinging from host h1 to host h2 before creating the port maps. The ping fails because no port maps are configured yet.
-
-----
- mininet> h1 ping h2
- PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
- From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
- From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
- From 10.0.0.1 icmp_seq=3 Destination Host Unreachable
- From 10.0.0.1 icmp_seq=4 Destination Host Unreachable
-----
-
-===== Configuration
-
-* To change the path of the packet flow, execute the REST APIs provided by VTN Manager as follows. The examples use the curl command to call the REST APIs.
-
-* Create a virtual tenant named vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn[the update-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-* Create a virtual bridge named vbr1 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge[the update-vbridge RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1"}}'
-----
-
-* Create two interfaces into the virtual bridge by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vinterface.html#update-vinterface[the update-vinterface RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if1"}}'
-----
-
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vinterface:update-vinterface -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1","interface-name":"if2"}}'
-----
-
-* Configure two mappings on the interfaces by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-port-map.html#set-port-map[the set-port-map RPC].
-
-** The interface if1 of the virtual bridge will be mapped to the port "s1-eth1" of the switch "openflow:1" of the Mininet.
-
-*** The host h1 is connected to the port "s1-eth1".
-
-** The interface if2 of the virtual bridge will be mapped to the port "s3-eth3" of the switch "openflow:3" of the Mininet.
-
-*** The host h2 is connected to the port "s3-eth3".
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if1", "node":"openflow:1", "port-name":"s1-eth1"}}'
-----
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-port-map:set-port-map -d '{"input":{"tenant-name":"vtn1", "bridge-name":"vbr1", "interface-name":"if2", "node":"openflow:3", "port-name":"s3-eth3"}}'
-----
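The six configuration calls above can also be scripted instead of typing each curl command. The sketch below uses only the Python standard library; the helper names and the hard-coded controller URL/credentials are illustrative assumptions, not part of the VTN distribution.

```python
import json
from urllib import request

BASE = "http://localhost:8181/restconf/operations"  # assumed controller URL


def build_rpc(module, rpc, body):
    """Build the URL and JSON payload for a VTN RESTCONF RPC call."""
    url = "%s/%s:%s" % (BASE, module, rpc)
    payload = json.dumps({"input": body})
    return url, payload


def call_rpc(module, rpc, body, user="admin", password="admin"):
    """POST the RPC to the controller (equivalent to the curl commands above)."""
    url, payload = build_rpc(module, rpc, body)
    req = request.Request(url, data=payload.encode(),
                          headers={"Content-type": "application/json"})
    mgr = request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)
    opener = request.build_opener(request.HTTPBasicAuthHandler(mgr))
    return opener.open(req).read()


# The sequence used in this how-to, in order:
steps = [
    ("vtn", "update-vtn", {"tenant-name": "vtn1"}),
    ("vtn-vbridge", "update-vbridge",
     {"tenant-name": "vtn1", "bridge-name": "vbr1"}),
    ("vtn-vinterface", "update-vinterface",
     {"tenant-name": "vtn1", "bridge-name": "vbr1", "interface-name": "if1"}),
    ("vtn-vinterface", "update-vinterface",
     {"tenant-name": "vtn1", "bridge-name": "vbr1", "interface-name": "if2"}),
    ("vtn-port-map", "set-port-map",
     {"tenant-name": "vtn1", "bridge-name": "vbr1", "interface-name": "if1",
      "node": "openflow:1", "port-name": "s1-eth1"}),
    ("vtn-port-map", "set-port-map",
     {"tenant-name": "vtn1", "bridge-name": "vbr1", "interface-name": "if2",
      "node": "openflow:3", "port-name": "s3-eth3"}),
]
```

Running `call_rpc(*step)` for each entry in `steps` against a live controller reproduces the curl sequence above.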
-
-* Generate traffic by pinging from host h1 to host h2 after creating the port maps. The ping now succeeds.
-
-----
- mininet> h1 ping h2
- PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
- 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.861 ms
- 64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.101 ms
- 64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.101 ms
-----
-
-* Get the Dataflows information by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow.html#get-data-flow[the get-data-flow RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow:get-data-flow -d '{"input":{"tenant-name":"vtn1","mode":"DETAIL","node":"openflow:1","data-flow-port":{"port-id":1,"port-name":"s1-eth1"}}}'
-----
-
-* Create flowcondition named cond_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#set-flow-condition[the set-flow-condition RPC].
-
-** For the source-network and destination-network options, use the inet addresses of host h1 and host h2 obtained from Mininet.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:set-flow-condition -d '{"input":{"operation":"SET","present":"false","name":"cond_1", "vtn-flow-match":[{"vtn-ether-match":{},"vtn-inet-match":{"source-network":"10.0.0.1/32","protocol":1,"destination-network":"10.0.0.2/32"},"index":"1"}]}}'
-----
-
-* Create pathmap with flowcondition cond_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-path-map.html#set-path-map[the set-path-map RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-map:set-path-map -d '{"input":{"tenant-name":"vtn1","path-map-list":[{"condition":"cond_1","policy":"1","index": "1","idle-timeout":"300","hard-timeout":"0"}]}}'
-----
-
-* Create pathpolicy by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-path-policy.html#set-path-policy[the set-path-policy RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-path-policy:set-path-policy -d '{"input":{"operation":"SET","id": "1","default-cost": "10000","vtn-path-cost": [{"port-desc":"openflow:1,3,s1-eth3","cost":"1000"},{"port-desc":"openflow:4,2,s4-eth2","cost":"1000"},{"port-desc":"openflow:3,3,s3-eth3","cost":"100000"}]}}'
-----
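The effect of this policy can be seen with a little arithmetic: VTN charges each port listed in vtn-path-cost its configured cost and every other port the default-cost, and prefers the route with the lowest total. The sketch below (a simplified illustration of that calculation, not VTN code) compares the two candidate paths from h1 to h2.

```python
# Cost table from the set-path-policy call above; any port not listed
# falls back to the default cost of 10000.
DEFAULT_COST = 10000
port_cost = {
    ("openflow:1", "s1-eth3"): 1000,
    ("openflow:4", "s4-eth2"): 1000,
    ("openflow:3", "s3-eth3"): 100000,
}


def path_cost(egress_ports):
    """Sum the cost of each transmitting port along a candidate path."""
    return sum(port_cost.get(p, DEFAULT_COST) for p in egress_ports)


# h1 -> h2 via s2: transmit on s1-eth2, s2-eth2, s3-eth3
via_s2 = path_cost([("openflow:1", "s1-eth2"),
                    ("openflow:2", "s2-eth2"),
                    ("openflow:3", "s3-eth3")])
# h1 -> h2 via s4: transmit on s1-eth3, s4-eth2, s3-eth3
via_s4 = path_cost([("openflow:1", "s1-eth3"),
                    ("openflow:4", "s4-eth2"),
                    ("openflow:3", "s3-eth3")])
print(via_s2, via_s4)  # the via-s4 path is cheaper, so the flow moves to s4
```

Under this model the s4 path totals 102000 against 120000 via s2, which matches the route change observed in the Verification section.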
-
-===== Verification
-
-* Before applying the path policy, get the flow information by executing the get-data-flow RPC.
-
-----
-"data-flow-info": [
-{
-  "physical-route": [
-  {
-    "physical-ingress-port": {
-      "port-name": "s3-eth3",
-        "port-id": "3"
-    },
-      "physical-egress-port": {
-        "port-name": "s3-eth1",
-        "port-id": "1"
-      },
-      "node": "openflow:3",
-      "order": 0
-  },
-  {
-    "physical-ingress-port": {
-      "port-name": "s2-eth2",
-      "port-id": "2"
-    },
-    "physical-egress-port": {
-      "port-name": "s2-eth1",
-      "port-id": "1"
-    },
-    "node": "openflow:2",
-    "order": 1
-  },
-  {
-    "physical-ingress-port": {
-      "port-name": "s1-eth2",
-      "port-id": "2"
-    },
-    "physical-egress-port": {
-      "port-name": "s1-eth1",
-      "port-id": "1"
-    },
-    "node": "openflow:1",
-    "order": 2
-  }
-  ],
-    "data-egress-node": {
-      "interface-name": "if1",
-      "bridge-name": "vbr1",
-      "tenant-name": "vtn1"
-    },
-    "data-egress-port": {
-      "node": "openflow:1",
-      "port-name": "s1-eth1",
-      "port-id": "1"
-    },
-    "data-ingress-node": {
-      "interface-name": "if2",
-      "bridge-name": "vbr1",
-      "tenant-name": "vtn1"
-    },
-    "data-ingress-port": {
-      "node": "openflow:3",
-      "port-name": "s3-eth3",
-      "port-id": "3"
-    },
-    "flow-id": 32
-  }
-]
-----
-
-* After applying the path policy, get the flow information again by executing the get-data-flow RPC. The flow now traverses switch s4 instead of s2.
-
-----
-"data-flow-info": [
-{
-  "physical-route": [
-  {
-    "physical-ingress-port": {
-      "port-name": "s1-eth1",
-        "port-id": "1"
-    },
-      "physical-egress-port": {
-        "port-name": "s1-eth3",
-        "port-id": "3"
-      },
-      "node": "openflow:1",
-      "order": 0
-  },
-  {
-    "physical-ingress-port": {
-      "port-name": "s4-eth1",
-      "port-id": "1"
-    },
-    "physical-egress-port": {
-      "port-name": "s4-eth2",
-      "port-id": "2"
-    },
-    "node": "openflow:4",
-    "order": 1
-  },
-  {
-    "physical-ingress-port": {
-      "port-name": "s3-eth2",
-      "port-id": "2"
-    },
-    "physical-egress-port": {
-      "port-name": "s3-eth3",
-      "port-id": "3"
-    },
-    "node": "openflow:3",
-    "order": 2
-  }
-  ],
-    "data-egress-node": {
-      "interface-name": "if2",
-      "bridge-name": "vbr1",
-      "tenant-name": "vtn1"
-    },
-    "data-egress-port": {
-      "node": "openflow:3",
-      "port-name": "s3-eth3",
-      "port-id": "3"
-    },
-    "data-ingress-node": {
-      "interface-name": "if1",
-      "bridge-name": "vbr1",
-      "tenant-name": "vtn1"
-    },
-    "data-ingress-port": {
-      "node": "openflow:1",
-      "port-name": "s1-eth1",
-      "port-id": "1"
-    }
-  }
-]
-----
-
-===== Cleaning Up
-
-* Clean up both the VTN and the flow condition as follows.
-
-* You can delete the virtual tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn[the remove-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-* You can delete the flowcondition cond_1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow-condition.html#remove-flow-condition[the remove-flow-condition RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow-condition:remove-flow-condition -d '{"input":{"name":"cond_1"}}'
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_View_Dataflows.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_View_Dataflows.adoc
deleted file mode 100644 (file)
index a6433f0..0000000
+++ /dev/null
@@ -1,246 +0,0 @@
-==== How To View Dataflows
-
-===== Overview
-
-This page explains how to view data flows using VTN Manager. It targets the Beryllium release, so the procedure described here does not work in other releases.
-
-The data flow feature enables retrieval and display of data flows in the OpenFlow network. Data flows can be retrieved based on an OpenFlow switch, a switch port, or an L2 source host.
-
-The flow information provided by this feature includes:
-
-* The location of the virtual node which maps the incoming and outgoing packets.
-
-* The locations of the physical switch ports where incoming and outgoing packets are received and sent.
-
-* A sequence of physical route entries which represents the packet route in the physical network.
-
-===== Configuration
-
-* To view data flow information, first configure a VLAN mapping as described at
-  https://wiki.opendaylight.org/view/VTN:Mananger:How_to_test_Vlan-map_using_mininet.
-
-===== Verification
-
-After creating the VLAN mapping configuration from the above page, execute the following in Mininet to get the switch details.
-
-----
- mininet> net
- h1 h1-eth0.200:s1-eth1
- h2 h2-eth0.300:s2-eth2
- h3 h3-eth0.200:s2-eth3
- h4 h4-eth0.300:s2-eth4
- h5 h5-eth0.200:s3-eth2
- h6 h6-eth0.300:s3-eth3
- s1 lo:  s1-eth1:h1-eth0.200 s1-eth2:s2-eth1 s1-eth3:s3-eth1
- s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0.300 s2-eth3:h3-eth0.200 s2-eth4:h4-eth0.300
- s3 lo:  s3-eth1:s1-eth3 s3-eth2:h5-eth0.200 s3-eth3:h6-eth0.300
- c0
- mininet>
-----
-
-Execute a ping from h1 to h3 to check host reachability.
-
-----
- mininet> h1 ping h3
- PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
- 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=11.4 ms
- 64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.654 ms
- 64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.093 ms
-----
-
-In parallel, execute the RESTCONF command below to get the data flow information for node "openflow:1" and its port "s1-eth1".
-
-* Get the Dataflows information by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-flow.html#get-data-flow[the get-data-flow RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-flow:get-data-flow -d '{"input":{"tenant-name":"vtn1","mode":"DETAIL","node":"openflow:1","data-flow-port":{"port-id":"1","port-name":"s1-eth1"}}}'
-----
-
-----
-{
-  "output": {
-    "data-flow-info": [
-    {
-      "averaged-data-flow-stats": {
-        "packet-count": 1.1998800119988002,
-          "start-time": 1455241209151,
-          "end-time": 1455241219152,
-          "byte-count": 117.58824117588242
-      },
-        "physical-route": [
-        {
-          "physical-ingress-port": {
-            "port-name": "s2-eth3",
-            "port-id": "3"
-          },
-          "physical-egress-port": {
-            "port-name": "s2-eth1",
-            "port-id": "1"
-          },
-          "node": "openflow:2",
-          "order": 0
-        },
-        {
-          "physical-ingress-port": {
-            "port-name": "s1-eth2",
-            "port-id": "2"
-          },
-          "physical-egress-port": {
-            "port-name": "s1-eth1",
-            "port-id": "1"
-          },
-          "node": "openflow:1",
-          "order": 1
-        }
-      ],
-        "data-egress-node": {
-          "bridge-name": "vbr1",
-          "tenant-name": "vtn1"
-        },
-        "hard-timeout": 0,
-        "idle-timeout": 300,
-        "data-flow-stats": {
-          "duration": {
-            "nanosecond": 640000000,
-            "second": 362
-          },
-          "packet-count": 134,
-          "byte-count": 12932
-        },
-        "data-egress-port": {
-          "node": "openflow:1",
-          "port-name": "s1-eth1",
-          "port-id": "1"
-        },
-        "data-ingress-node": {
-          "bridge-name": "vbr1",
-          "tenant-name": "vtn1"
-        },
-        "data-ingress-port": {
-          "node": "openflow:2",
-          "port-name": "s2-eth3",
-          "port-id": "3"
-        },
-        "creation-time": 1455240855753,
-        "data-flow-match": {
-          "vtn-ether-match": {
-            "vlan-id": 200,
-            "source-address": "6a:ff:e2:81:86:bb",
-            "destination-address": "26:9f:82:70:ec:66"
-          }
-        },
-        "virtual-route": [
-        {
-          "reason": "VLANMAPPED",
-          "virtual-node-path": {
-            "bridge-name": "vbr1",
-            "tenant-name": "vtn1"
-          },
-          "order": 0
-        },
-        {
-          "reason": "FORWARDED",
-          "virtual-node-path": {
-            "bridge-name": "vbr1",
-            "tenant-name": "vtn1"
-          },
-          "order": 1
-        }
-      ],
-        "flow-id": 16
-    },
-    {
-      "averaged-data-flow-stats": {
-        "packet-count": 1.1998800119988002,
-        "start-time": 1455241209151,
-        "end-time": 1455241219152,
-        "byte-count": 117.58824117588242
-      },
-      "physical-route": [
-      {
-        "physical-ingress-port": {
-          "port-name": "s1-eth1",
-          "port-id": "1"
-        },
-        "physical-egress-port": {
-          "port-name": "s1-eth2",
-          "port-id": "2"
-        },
-        "node": "openflow:1",
-        "order": 0
-      },
-      {
-        "physical-ingress-port": {
-          "port-name": "s2-eth1",
-          "port-id": "1"
-        },
-        "physical-egress-port": {
-          "port-name": "s2-eth3",
-          "port-id": "3"
-        },
-        "node": "openflow:2",
-        "order": 1
-      }
-      ],
-        "data-egress-node": {
-          "bridge-name": "vbr1",
-          "tenant-name": "vtn1"
-        },
-        "hard-timeout": 0,
-        "idle-timeout": 300,
-        "data-flow-stats": {
-          "duration": {
-            "nanosecond": 587000000,
-            "second": 362
-          },
-          "packet-count": 134,
-          "byte-count": 12932
-        },
-        "data-egress-port": {
-          "node": "openflow:2",
-          "port-name": "s2-eth3",
-          "port-id": "3"
-        },
-        "data-ingress-node": {
-          "bridge-name": "vbr1",
-          "tenant-name": "vtn1"
-        },
-        "data-ingress-port": {
-          "node": "openflow:1",
-          "port-name": "s1-eth1",
-          "port-id": "1"
-        },
-        "creation-time": 1455240855747,
-        "data-flow-match": {
-          "vtn-ether-match": {
-            "vlan-id": 200,
-            "source-address": "26:9f:82:70:ec:66",
-            "destination-address": "6a:ff:e2:81:86:bb"
-          }
-        },
-        "virtual-route": [
-        {
-          "reason": "VLANMAPPED",
-          "virtual-node-path": {
-            "bridge-name": "vbr1",
-            "tenant-name": "vtn1"
-          },
-          "order": 0
-        },
-        {
-          "reason": "FORWARDED",
-          "virtual-node-path": {
-            "bridge-name": "vbr1",
-            "tenant-name": "vtn1"
-          },
-          "order": 1
-        }
-      ],
-        "flow-id": 15
-    }
-    ]
-  }
-}
-----
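Scripts that post-process get-data-flow output can recover the hop sequence of each flow by sorting its physical-route list on the order field. A sketch using a trimmed sample of the output above (the helper name is illustrative):

```python
import json

# A trimmed fragment of the get-data-flow output shown above.
sample = json.loads("""
{"output": {"data-flow-info": [{
  "flow-id": 16,
  "physical-route": [
    {"node": "openflow:2", "order": 0,
     "physical-ingress-port": {"port-name": "s2-eth3"},
     "physical-egress-port": {"port-name": "s2-eth1"}},
    {"node": "openflow:1", "order": 1,
     "physical-ingress-port": {"port-name": "s1-eth2"},
     "physical-egress-port": {"port-name": "s1-eth1"}}
  ]
}]}}
""")


def hops(flow):
    """Return the ordered (node, ingress-port, egress-port) hops of one flow."""
    route = sorted(flow["physical-route"], key=lambda h: h["order"])
    return [(h["node"],
             h["physical-ingress-port"]["port-name"],
             h["physical-egress-port"]["port-name"]) for h in route]


flow = sample["output"]["data-flow-info"][0]
print(hops(flow))
```

For the sample flow this yields the hop sequence s2 (in s2-eth3, out s2-eth1) followed by s1 (in s1-eth2, out s1-eth1), matching the physical route printed above.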
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_test_vlan_map_using_mininet.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Manager_How_To_test_vlan_map_using_mininet.adoc
deleted file mode 100644 (file)
index 85db133..0000000
+++ /dev/null
@@ -1,173 +0,0 @@
-==== How To Test Vlan-Map In Mininet Environment
-
-===== Overview
-This page explains how to test VLAN map in a multi-host scenario using Mininet. It targets the Beryllium release, so the procedure described here does not work in other releases.
-
-.Example that demonstrates vlanmap testing in Mininet Environment
-image::vtn/vlanmap_using_mininet.png[Example that demonstrates vlanmap testing in Mininet Environment]
-
-===== Requirements
-Save the Mininet script linked below as vlan_vtn_test.py and run it in the environment where Mininet is installed.
-
-===== Mininet Script
-https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:Mininet#Network_with_hosts_in_different_vlan
-
-* Run the mininet script
-
-----
-sudo mn --controller=remote,ip=192.168.64.13 --custom vlan_vtn_test.py --topo mytopo
-----
-
-NOTE:
-Replace "192.168.64.13" with the IP address of OpenDaylight controller based on your environment.
-
-* You can check the topology that you have created by executing the "net" command in the Mininet console.
-
-----
- mininet> net
- h1 h1-eth0.200:s1-eth1
- h2 h2-eth0.300:s2-eth2
- h3 h3-eth0.200:s2-eth3
- h4 h4-eth0.300:s2-eth4
- h5 h5-eth0.200:s3-eth2
- h6 h6-eth0.300:s3-eth3
- s1 lo:  s1-eth1:h1-eth0.200 s1-eth2:s2-eth1 s1-eth3:s3-eth1
- s2 lo:  s2-eth1:s1-eth2 s2-eth2:h2-eth0.300 s2-eth3:h3-eth0.200 s2-eth4:h4-eth0.300
- s3 lo:  s3-eth1:s1-eth3 s3-eth2:h5-eth0.200 s3-eth3:h6-eth0.300
- c0
-----
-
-===== Configuration
-
-To test the VLAN map, execute the REST APIs provided by VTN Manager as follows.
-
-* Create a virtual tenant named vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#update-vtn[the update-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:update-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-* Create a virtual bridge named vbr1 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge[the update-vbridge RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr1"}}'
-----
-
-* Configure a vlan map with vlanid 200 for vBridge vbr1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vlan-map.html#add-vlan-map[the add-vlan-map RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vlan-map:add-vlan-map -d '{"input":{"vlan-id":200,"tenant-name":"vtn1","bridge-name":"vbr1"}}'
-----
-
-* Create a virtual bridge named vbr2 in the tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vbridge.html#update-vbridge[the update-vbridge RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vbridge:update-vbridge -d '{"input":{"tenant-name":"vtn1","bridge-name":"vbr2"}}'
-----
-
-* Configure a vlan map with vlanid 300 for vBridge vbr2 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn-vlan-map.html#add-vlan-map[the add-vlan-map RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn-vlan-map:add-vlan-map -d '{"input":{"vlan-id":300,"tenant-name":"vtn1","bridge-name":"vbr2"}}'
-----
-
-===== Verification
-
-* Execute pingall in the Mininet environment to check host reachability. Hosts tagged with the same VLAN ID can reach each other; hosts in different VLANs cannot.
-
-----
- mininet> pingall
- Ping: testing ping reachability
- h1 -> X h3 X h5 X
- h2 -> X X h4 X h6
- h3 -> h1 X X h5 X
- h4 -> X h2 X X h6
- h5 -> h1 X h3 X X
- h6 -> X h2 X h4 X
-----
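The pingall matrix above follows directly from the VLAN membership: vlan-id 200 maps hosts to vbr1 and vlan-id 300 maps hosts to vbr2, so only hosts sharing a VLAN ID share a vBridge. A small sketch of the expected reachability:

```python
# VLAN each host is tagged with, taken from the Mininet topology above.
host_vlan = {"h1": 200, "h2": 300, "h3": 200, "h4": 300, "h5": 200, "h6": 300}


def reachable(a, b):
    """Two distinct hosts can reach each other iff they share a VLAN ID."""
    return a != b and host_vlan[a] == host_vlan[b]


# Print the expected reachability matrix, one row per host.
for h in sorted(host_vlan):
    peers = [p for p in sorted(host_vlan) if reachable(h, p)]
    print(h, "->", " ".join(peers))
```

Each printed row corresponds to a line of the pingall output, with unreachable hosts (shown as X above) omitted.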
-
-* You can also verify the configuration by executing the following REST API. It shows all configurations in VTN Manager.
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X GET http://localhost:8181/restconf/operational/vtn:vtns
-----
-
-* The result of the command should look like this:
-
-----
-{
-  "vtns": {
-    "vtn": [
-    {
-      "name": "vtn1",
-        "vtenant-config": {
-          "hard-timeout": 0,
-          "idle-timeout": 300,
-          "description": "creating vtn"
-        },
-        "vbridge": [
-        {
-          "name": "vbr2",
-          "vbridge-config": {
-            "age-interval": 600,
-            "description": "creating vbr2"
-          },
-          "bridge-status": {
-            "state": "UP",
-            "path-faults": 0
-          },
-          "vlan-map": [
-          {
-            "map-id": "ANY.300",
-            "vlan-map-config": {
-              "vlan-id": 300
-            },
-            "vlan-map-status": {
-              "active": true
-            }
-          }
-          ]
-        },
-        {
-          "name": "vbr1",
-          "vbridge-config": {
-            "age-interval": 600,
-            "description": "creating vbr1"
-          },
-          "bridge-status": {
-            "state": "UP",
-            "path-faults": 0
-          },
-          "vlan-map": [
-          {
-            "map-id": "ANY.200",
-            "vlan-map-config": {
-              "vlan-id": 200
-            },
-            "vlan-map-status": {
-              "active": true
-            }
-          }
-          ]
-        }
-      ]
-    }
-    ]
-  }
-}
-----
-
-===== Cleaning Up
-
-* You can delete the virtual tenant vtn1 by executing
-  https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/vtn.html#remove-vtn[the remove-vtn RPC].
-
-----
-curl --user "admin":"admin" -H "Content-type: application/json" -X POST http://localhost:8181/restconf/operations/vtn:remove-vtn -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_OpenStack_Support-user.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_OpenStack_Support-user.adoc
deleted file mode 100644 (file)
index 18546a4..0000000
+++ /dev/null
@@ -1,348 +0,0 @@
-=== VTN OpenStack Configuration
-
-This guide describes how to set up OpenStack for integration with the OpenDaylight Controller.
-
-While the OpenDaylight Controller provides several ways to integrate with OpenStack, this guide focuses on the approach that uses the VTN features available in OpenDaylight. In this integration, VTN Manager works as the network service provider for OpenStack.
-
-The VTN Manager features enable OpenStack to work in a pure OpenFlow environment in which all switches in the data plane are OpenFlow switches.
-
-==== Requirements
-
-* OpenDaylight Controller (the VTN features must be installed).
-* OpenStack Control Node.
-* OpenStack Compute Node.
-* OpenFlow switch such as Mininet (not mandatory).
-
-The VTN features support multiple OpenStack nodes, so you can deploy multiple OpenStack Compute Nodes.
-In the management plane, the OpenDaylight Controller, the OpenStack nodes, and the OpenFlow switches should be able to communicate with each other.
-In the data plane, the Open vSwitches running in the OpenStack nodes should communicate with each other through physical or logical OpenFlow switches. The core OpenFlow switches are not mandatory, so you can also connect the Open vSwitches directly to each other.
-
-.Openstack Overview
-image::vtn/OpenStack_Demo_Picture.png["Openstack overview" , width= 500]
-
-==== Sample Configuration
-
-The steps below depict the configuration of a single OpenStack Control node and a single Compute node. Our test setup is as follows.
-
-.LAB Setup
-image::vtn/vtn_devstack_setup.png["LAB Setup" ,width= 500]
-
-*Server Preparation*
-[horizontal]
-- Install Ubuntu 14.04 LTS on two servers (the OpenStack Control node and Compute node respectively)
-- While installing, Ubuntu mandates the creation of a user; we created the user "stack" (we will use the same user for running devstack)
-- Proceed with the user settings and network settings mentioned below on both the Control and Compute nodes.
-
-*User Settings for devstack*
-- Login to both servers
-- Disable Ubuntu Firewall
-
-
-  sudo ufw disable
-
-- Install the package below (optional; it provides the ifconfig and route commands, which are handy for debugging)
-
-
-  sudo apt-get install net-tools
-
-- Edit /etc/sudoers (for example with sudo vim /etc/sudoers) and add an entry as follows
-
-
-  stack ALL=(ALL) NOPASSWD: ALL
-
-*Network Settings*
-- Check the output of ifconfig -a; two interfaces should be listed, eth0 and eth1, as indicated in the image above.
-- We connected the eth0 interface to the network where OpenDaylight is reachable.
-- The eth1 interface in both servers was connected to a separate network to act as the data plane for the VMs created using OpenStack.
-- Manually edit the file /etc/network/interfaces (sudo vim /etc/network/interfaces) and make entries as follows
-
-
-   stack@ubuntu-devstack:~/devstack$ cat /etc/network/interfaces
-   # This file describes the network interfaces available on your system
-   # and how to activate them. For more information, see interfaces(5).
-   # The loop-back network interface
-   auto lo
-   iface lo inet loopback
-   # The primary network interface
-   auto eth0
-   iface eth0 inet static
-        address <IP_ADDRESS_TO_REACH_ODL>
-        netmask <NET_MASK>
-        broadcast <BROADCAST_IP_ADDRESS>
-        gateway <GATEWAY_IP_ADDRESS>
-  auto eth1
-  iface eth1 inet static
-       address <IP_ADDRESS_UNIQ>
-       netmask <NETMASK>
-
-NOTE: Please ensure that the eth0 interface is the default route and that it is able to reach the ODL_IP_ADDRESS.
-
-NOTE: The entries for eth1 are not mandatory. If they are not set, you may have to run "ifup eth1" manually after the stacking is complete to activate the interface.
-
-*Finalize the user and network settings*
-- Reboot both nodes after the user and network settings so that the network settings take effect.
-- Login again and check the output of ifconfig to ensure that both interfaces are listed.
-
-====  OpenDaylight Settings and Execution
-
-=====  VTN Configuration for OpenStack Integration:
-
- * VTN uses the configuration parameters from the "90-vtn-neutron.xml" file for the OpenStack integration.
- * These values will be set for the Open vSwitch in all the participating OpenStack nodes.
- * The configuration file "90-vtn-neutron.xml" will be generated automatically by following the steps below.
- * Download the latest Beryllium karaf distribution from the link below:
-
-
-   http://www.opendaylight.org/software/downloads
-
-
- * cd into "distribution-karaf-0.4.0-Beryllium" and run karaf using the command "./bin/karaf".
- * Install the feature below to generate "90-vtn-neutron.xml":
-
-----
- feature:install odl-vtn-manager-neutron
-----
-
- * Logout from the karaf console and check the "90-vtn-neutron.xml" file in the path "distribution-karaf-0.4.0-Beryllium/etc/opendaylight/karaf/".
- * The contents of "90-vtn-neutron.xml" should be as follows:
-
-
-bridgename=br-int
-portname=eth1
-protocols=OpenFlow13
-failmode=secure
-
- * The values of the configuration parameters must be changed based on the user environment.
- * Especially, "portname" should be carefully configured, because if the value is wrong, OpenDaylight fails to forward packets.
- * The other parameters work fine as-is for general use cases.
- ** bridgename
- *** The name of the bridge in Open vSwitch, that will be created by OpenDaylight Controller.
- *** It must be "br-int".
- ** portname
- *** The name of the port that will be created in the vbridge in Open vSwitch.
- *** This must be the same as the name of the interface of the OpenStack nodes that is used for interconnecting the OpenStack nodes in the data plane (in our case, eth1).
- *** By default, if 90-vtn-neutron.xml is not created, VTN uses ens33 as portname.
- ** protocols
- *** OpenFlow protocol through which OpenFlow Switch and Controller communicate.
- *** The values can be OpenFlow13 or OpenFlow10.
- ** failmode
- *** The value can be "standalone" or "secure".
- *** Please use "secure" for general use cases.
-
-===== Start ODL Controller
-* Please refer to the Installation Pages to run ODL with the VTN feature enabled.
-* After running the ODL Controller, ensure that it listens on ports 6633, 6653, 6640 and 8080.
-* Allow these ports in the firewall so that devstack can communicate with the ODL Controller.
-
-[NOTE]
-====
-* 6633/6653 - OpenFlow Ports
-* 6640 - OVS Manager Port
-* 8080 - Port for REST API
-====
-
-====  Devstack Setup
-
-=====  Get Devstack (All nodes)
-* Install git application using
-** sudo apt-get install git
-* Get devstack
-** git clone https://git.openstack.org/openstack-dev/devstack;
-* Switch to the stable/juno branch
-** cd devstack
-
-
-   git checkout stable/juno
-
-NOTE:
-   If you want to use the stable/kilo branch, execute the command below in the devstack folder
-
-
-   git checkout stable/kilo
-
-NOTE:
-   If you want to use the stable/liberty branch, execute the command below in the devstack folder
-
-
-   git checkout stable/liberty
-
-===== Stack Control Node
-
-* local.conf:
-* cd devstack in the control node
-* For juno, copy the contents of local.conf (devstack control node) from https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack and save it as "local.conf" in the "devstack" directory.
-* For kilo and liberty, copy the contents of local.conf (devstack control node) from https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack_post_juno_versions and save it as "local.conf" in the "devstack" directory.
-* Please modify the IP Address values as required.
-* Stack the node
-
-  ./stack.sh
-
-====== Verify Control Node stacking
-* stack.sh prints out Horizon is now available at http://<CONTROL_NODE_IP_ADDRESS>:8080/
-* Execute the command 'sudo ovs-vsctl show' in the control node terminal and verify that the bridge 'br-int' is created.
-* Typical output of the ovs-vsctl show is indicated below:
-----
-e232bbd5-096b-48a3-a28d-ce4a492d4b4f
-   Manager "tcp:192.168.64.73:6640"
-       is_connected: true
-   Bridge br-int
-       Controller "tcp:192.168.64.73:6633"
-           is_connected: true
-       fail_mode: secure
-       Port "eth1"
-          Interface "eth1"
-   ovs_version: "2.0.2"
-----
-
-===== Stack Compute Node
-
-* local.conf:
-* cd devstack in the compute node
-* Copy the contents of local.conf for juno (devstack compute node) from https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack and save it as "local.conf" in the "devstack" directory.
-* Copy the contents of local.conf for kilo and liberty (devstack compute node) from https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_(VTN):Scripts:devstack_post_juno_versions and save it as "local.conf" in the "devstack" directory.
-* Please modify the IP Address values as required.
-* Stack the node
-
-
-  ./stack.sh
-
-====== Verify Compute Node Stacking
-* stack.sh prints out This is your host ip: <COMPUTE_NODE_IP_ADDRESS>
-* Execute the command 'sudo ovs-vsctl show' in the compute node terminal and verify that the bridge 'br-int' is created.
-* The output of ovs-vsctl show will be similar to the one seen in the control node.
-
-===== Additional Verifications
-* After stacking all the nodes, please visit the OpenDaylight DLUX GUI to validate the switches, topology and ports that are currently read:
-----
-http://<controller-ip>:8181/index.html
-----
-
-TIP: If the interconnection between the Open vSwitch instances is not seen, please bring up the dataplane interface manually using the command below:
-
-
-  ifup <interface_name>
-
-* Please enable promiscuous mode on the networks involved in the interconnect.
-
-===== Create VM from Devstack Horizon GUI
-* Login to http://<CONTROL_NODE_IP>:8080/ to check the horizon GUI.
-
-.Horizon GUI
-image::vtn/OpenStackGui.png["Horizon",width= 600]
-
-Enter admin as the User Name and labstack as the Password.
-
-* First ensure that both hypervisors (control node and compute node) are listed under the Hypervisors tab.
-
-.Hypervisors
-image::vtn/Hypervisors.png["Hypervisors",width=512]
-
-* Create a new Network from Horizon GUI.
-* Click on Networks Tab.
-* Click on the Create Network button.
-
-.Create Network
-image::vtn/Create_Network.png["Create Network" ,width=600]
-
-*  A popup screen will appear.
-*  Enter network name and click Next button.
-
-.Step 1
-image::vtn/Creare_Network_Step_1.png["Step 1" ,width=600]
-* Create a subnetwork by entering the Network Address and click the Next button.
-
-.Step 2
-image::vtn/Create_Network_Step_2.png[Step 2,width=600]
-
-* Specify the additional details for the subnetwork (refer to the image for reference).
-
-.Step 3
-image::vtn/Create_Network_Step_3.png[Step 3,width=600]
-
-* Click Create button
-* Create VM Instance
-* Navigate to Instances tab in the GUI.
-
-.Instance Creation
-image::vtn/Instance_Creation.png["Instance Creation",width=512]
-
-* Click on Launch Instances button.
-
-.Launch Instance
-image::vtn/Launch_Instance.png[Launch Instance,width=600]
-
-* Click on the Details tab to enter the VM details. For this demo we are creating ten VMs (instances).
-
-* In the Networking tab, select the network by dragging it from Available Networks to Selected Networks (i.e., drag the vtn1 network we created from Available Networks to Selected Networks) and click Launch to create the instances.
-
-.Launch Network
-image::vtn/Launch_Instance_network.png[Launch Network,width=600]
-
-* Ten VMs will be created.
-
-.Load All Instances
-image::vtn/Load_All_Instances.png[Load All Instances,width=600]
-
-* Click on any VM displayed in the Instances tab and click the Console tab.
-
-.Instance Console
-image::vtn/Instance_Console.png[Instance Console,width=600]
-
-* Login to the VM console and verify with a ping command.
-
-.Ping
-image::vtn/Instance_ping.png[Ping,width=600]
-
-===== Verification of Control and Compute Node after VM creation
-* Every time a new VM is created, more interfaces are added to the br-int bridge in Open vSwitch.
-* Use 'sudo ovs-vsctl show' to list the number of interfaces added.
-* Please visit the DLUX GUI to list the new nodes in every switch.
-
-===== Getting started with DLUX
-Ensure that you have created a topology and enabled MD-SAL feature in the Karaf distribution before you use DLUX for network management.
-
-===== Logging In
-To log in to DLUX, after installing the application:
-* Open a browser and enter the login URL. If you have installed DLUX as a stand-alone application, the login URL is http://localhost:9000/DLUX/index.html. However, if you have deployed DLUX with Karaf, the login URL is http://\<your IP\>:8181/dlux/index.html.
-* Log in to the application with admin as both the user ID and the password.
-NOTE: admin is the only user type available for DLUX in this release.
-
-===== Working with DLUX
-To get the complete DLUX feature set, install the restconf and odl-l2switch-switch features when you start the DLUX distribution.
-
-.DLUX_GUI
-image::vtn/Dlux_login.png[DLUX_GUI,width=600]
-
-NOTE: DLUX enables only those modules whose APIs are responding. If you enable just MD-SAL in the beginning and then start DLUX, only MD-SAL related tabs will be visible. If you enable AD-SAL Karaf features while using the GUI, those tabs will appear automatically.
-
-===== Viewing Network Statistics
-The Nodes module on the left pane enables you to view the network statistics and port information for the switches in the network.
-* To use the Nodes module:
-** Select Nodes on the left pane.
-----
-The right pane displays a table that lists all the nodes, node connectors and their statistics.
-----
-** Enter a node ID in the Search Nodes tab to search by node connectors.
-** Click on the Node Connector number to view details such as port ID, port name, number of ports per switch, MAC Address, and so on.
-** Click Flows in the Statistics column to view Flow Table Statistics for the particular node like table ID, packet match, active flows and so on.
-** Click Node Connectors to view Node Connector Statistics for the particular node ID.
-
-===== Viewing Network Topology
-To view network topology:
-* Select Topology on the left pane. You will view the graphical representation on the right pane.
-----
-In the diagram
-blue boxes represent the switches, black boxes represent the available hosts, and lines represent how the switches are connected.
-----
-NOTE: The DLUX UI does not provide the ability to add topology information. The topology should be created using the OpenFlow plugin. The controller stores this information in the database and displays it on the DLUX page when you connect to the controller using OpenFlow.
-
-.Topology
-image::vtn/Dlux_topology.png[Topology,width=600]
-
-==== OpenStack PackStack Installation Steps
-* Please go through the below wiki page for OpenStack PackStack installation steps.
-** https://wiki.opendaylight.org/view/Release/Lithium/VTN/User_Guide/Openstack_Packstack_Support
-
-==== References
-* http://devstack.org/guides/multinode-lab.html
-* https://wiki.opendaylight.org/view/File:Vtn_demo_hackfest_2014_march.pdf
-
diff --git a/manuals/user-guide/src/main/asciidoc/vtn/VTN_Overview.adoc b/manuals/user-guide/src/main/asciidoc/vtn/VTN_Overview.adoc
deleted file mode 100644 (file)
index 44d1db6..0000000
+++ /dev/null
@@ -1,348 +0,0 @@
-=== VTN Overview
-
-OpenDaylight Virtual Tenant Network (VTN) is an application that provides multi-tenant virtual network on an SDN controller.
-
-Conventionally, huge investments in network systems and operating expenses are needed because the network is configured as a silo for each department and system. Various network appliances must therefore be installed for each tenant, and those boxes cannot be shared with others. Designing, implementing and operating the entire complex network is heavy work.
-
-The uniqueness of VTN is its logical abstraction plane, which enables the complete separation of the logical plane from the physical plane. Users can design and deploy any desired network without knowing the physical network topology or bandwidth restrictions.
-
-VTN allows users to define the network with the look and feel of a conventional L2/L3 network. Once the network is designed on VTN, it is automatically mapped onto the underlying physical network and then configured on the individual switches leveraging an SDN control protocol. The definition of a logical plane makes it possible not only to hide the complexity of the underlying network but also to manage network resources better. This reduces the reconfiguration time of network services and minimizes network configuration errors.
-
-.VTN Overview
-image::vtn/VTN_Overview.jpg[VTN Overview ,width= 500]
-
-VTN is implemented as two major components:
-
-* <<_vtn_manager,VTN Manager>>
-* <<_vtn_coordinator,VTN Coordinator>>
-
-==== VTN Manager
-An OpenDaylight plugin that interacts with other modules to implement the components of the VTN model. It also provides a REST interface to configure VTN components in OpenDaylight. VTN Manager is implemented as a single plugin to OpenDaylight and provides a REST interface to create/update/delete VTN components. A user command in VTN Coordinator is translated into a REST API call to VTN Manager by the OpenDaylight Driver component. In addition to the above-mentioned role, it also provides an implementation of the OpenStack L2 Network Functions API.
-
-===== Features Overview
-
-* *odl-vtn-manager* provides VTN Manager's Java API.
-* *odl-vtn-manager-rest* provides VTN Manager's REST API.
-* *odl-vtn-manager-neutron* provides the integration with Neutron interface.
-
-===== REST API
-
-VTN Manager provides REST API for virtual network functions.
-
-Here is an example of how to create a virtual tenant network.
-
-----
- curl --user "admin":"admin" -H "Accept: application/json" -H \
- "Content-type: application/json" -X POST \
- http://localhost:8181/restconf/operations/vtn:update-vtn \
- -d '{"input":{"tenant-name":"vtn1"}}'
-----
-
-You can check the list of all tenants by executing the following command.
-
-----
- curl --user "admin":"admin" -H "Accept: application/json" -H \
- "Content-type: application/json" -X GET \
- http://localhost:8181/restconf/operational/vtn:vtns
-----
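The same RESTCONF call can be driven from Python's standard library. The sketch below only builds the request — the endpoint, credentials and payload are taken verbatim from the curl example above — so nothing is sent until you pass the result to `urlopen`:

```python
import base64
import json
import urllib.request

def build_update_vtn_request(host, tenant_name, user="admin", password="admin"):
    """Build (but do not send) the vtn:update-vtn RESTCONF request."""
    url = "http://%s:8181/restconf/operations/vtn:update-vtn" % host
    body = json.dumps({"input": {"tenant-name": tenant_name}}).encode()
    auth = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    req.add_header("Authorization", "Basic " + auth)
    return req

# With a running controller, actually create the tenant with:
#   urllib.request.urlopen(build_update_vtn_request("localhost", "vtn1"))
```

This is a convenience for scripting only; the curl commands above remain the reference usage.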
-
-For the VTN Manager REST API documentation, please refer to: https://jenkins.opendaylight.org/releng/view/vtn/job/vtn-merge-beryllium/lastSuccessfulBuild/artifact/manager/model/target/site/models/
-
-==== VTN Coordinator
-
-The VTN Coordinator is an external application that provides a REST interface for a user to use OpenDaylight VTN virtualization. It interacts with the VTN Manager plugin to implement the user configuration and is capable of orchestrating multiple OpenDaylight instances, realizing Virtual Tenant Network (VTN) provisioning across them. In the OpenDaylight architecture, VTN Coordinator is part of the network application, orchestration and services layer. It uses the REST interface exposed by VTN Manager to construct the virtual network in OpenDaylight instances, and it provides REST APIs for northbound VTN applications, supporting virtual networks that span multiple OpenDaylight instances by coordinating across them.
-
-For VTN Coordinator REST API, please refer to: https://wiki.opendaylight.org/view/OpenDaylight_Virtual_Tenant_Network_%28VTN%29:VTN_Coordinator:RestApi
-
-==== Network Virtualization Function
-
-The user first defines a VTN. Then, the user maps the VTN to a physical network, which enables communication to take place according to the VTN definition. With the VTN definition, L2 and L3 transfer functions and flow-based traffic control functions (filtering and redirect) are possible.
-
-==== Virtual Network Construction
-
-The following table shows the elements which make up the VTN.
-In the VTN, a virtual network is constructed using virtual nodes (vBridge, vRouter) and virtual interfaces and links.
-It is possible to configure a network which has L2 and L3 transfer functions by connecting the virtual interfaces made on virtual nodes via virtual links.
-
-[cols="2*"]
-|===
-|vBridge
-|The logical representation of L2 switch function.
-
-|vRouter
-|The logical representation of router function.
-
-|vTep
-|The logical representation of Tunnel End Point - TEP.
-
-|vTunnel
-|The logical representation of Tunnel.
-
-|vBypass
-|The logical representation of connectivity between controlled networks.
-
-|Virtual interface
-|The representation of end point on the virtual node.
-
-|Virtual Link (vLink)
-|The logical representation of L1 connectivity between virtual interfaces.
-|===
-
-The following figure shows an example of a constructed virtual network. VRT is defined as the vRouter, and BR1 and BR2 are defined as vBridges. Interfaces of the vRouter and vBridges are connected using vLinks.
-
-.VTN Construction
-image::vtn/VTN_Construction.jpg[VTN Construction ,width= 500]
-
-
-==== Mapping of Physical Network Resources
-
-Map physical network resources to the constructed virtual network. Mapping identifies which virtual network each packet transmitted or received by an OpenFlow switch belongs to, as well as which interface in the OpenFlow switch transmits or receives that packet.
-There are two mapping methods. When a packet is received from the OFS, port mapping is first searched for the corresponding mapping definition, then VLAN mapping is searched, and the packet is mapped to the relevant vBridge according to the first matching mapping.
-
-[cols="2*"]
-|===
-|Port mapping
-|Maps physical network resources to an interface of vBridge using Switch ID, Port ID and VLAN ID of the incoming L2 frame. Untagged frame mapping is also supported.
-
-|VLAN mapping
-|Maps physical network resources to a vBridge using the VLAN ID of the incoming L2 frame. When a switch ID is also specified, maps the physical resources of that particular switch to the vBridge.
-
-|MAC mapping
-|Maps physical resources to an interface of a vBridge using the MAC address of the incoming L2 frame (the initial contribution does not include this method).
-|===
-
-VTN can learn terminal information from a terminal that is connected to a switch mapped to the VTN. Further, it is possible to refer to that terminal information on the VTN.
-
-* Learning terminal information
-  VTN learns the information of a terminal that belongs to VTN. It will store the MAC address and VLAN ID of the terminal in relation to the port of the switch.
-* Aging of terminal information
-  Terminal information learned by the VTN is maintained as long as packets from the terminal keep flowing in the VTN. If the terminal gets disconnected from the VTN, the aging timer starts ticking and the terminal information is retained until it times out.
-
-The following figure shows an example of mapping. An interface of BR1 is mapped to port GBE0/1 of OFS1 using port mapping. Packets received from GBE0/1 of OFS1 are regarded as those from the corresponding interface of BR1.
-BR2 is mapped to VLAN 200 using VLAN mapping.
-Packets with VLAN tag 200 received from any ports of any OFSs are regarded as those from an interface of BR2.
-
-.VTN Mapping
-image::vtn/VTN_Mapping.jpg[VTN Mapping]
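The lookup order described above — port mapping searched first, then VLAN mapping — can be sketched as a small function. The data structures and names below are purely illustrative (they are not VTN's actual implementation); the example values mirror the figure: OFS1 port GBE0/1 mapped to BR1, and VLAN 200 mapped to BR2.

```python
def classify_frame(switch_id, port_id, vlan_id, port_maps, vlan_maps):
    """Return the vBridge (or vBridge interface) a frame maps to, or None.

    port_maps: {(switch_id, port_id, vlan_id): vbridge_interface}
    vlan_maps: {vlan_id: vbridge} or {(switch_id, vlan_id): vbridge}
    Port mapping is searched first; VLAN mapping is the fallback.
    """
    if (switch_id, port_id, vlan_id) in port_maps:
        return port_maps[(switch_id, port_id, vlan_id)]
    if (switch_id, vlan_id) in vlan_maps:       # switch-scoped VLAN map
        return vlan_maps[(switch_id, vlan_id)]
    if vlan_id in vlan_maps:                    # global VLAN map
        return vlan_maps[vlan_id]
    return None

# The mapping example from the figure (VLAN 0 stands for an untagged frame).
port_maps = {("OFS1", "GBE0/1", 0): "BR1-if"}
vlan_maps = {200: "BR2"}
print(classify_frame("OFS1", "GBE0/1", 0, port_maps, vlan_maps))    # BR1-if
print(classify_frame("OFS2", "GBE0/3", 200, port_maps, vlan_maps))  # BR2
```

A frame that matches neither table belongs to no vBridge and is not handled by the VTN.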
-
-==== vBridge Functions
-
-The vBridge provides the bridge function that transfers a packet to the intended virtual port according to the destination MAC address.
-The vBridge looks up the MAC address table and transmits the packet to the corresponding virtual interface when the destination MAC address has been learned. When the destination MAC address has not been learned, it transmits the packet to all virtual interfaces other than the receiving port (flooding).
-MAC addresses are learned as follows.
-
-* MAC address learning
-  The vBridge learns the MAC address of the connected host. The source MAC address of each received frame is mapped to the receiving virtual interface, and this MAC address is stored in the MAC address table created on a per-vBridge basis.
-* MAC address aging
-  The MAC address stored in the MAC address table is retained as long as the host returns the ARP reply. After the host is disconnected, the address is retained until the aging timer times out.
-To have the vBridge learn MAC addresses statically, you can register MAC addresses manually.
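The learn-then-forward behaviour above (per-vBridge MAC table, flood on an unknown destination) can be sketched as follows. This is an illustrative model only — class and interface names are invented, and aging is omitted for brevity.

```python
class VBridge:
    """Minimal sketch of vBridge forwarding: learn source MACs,
    forward known destinations to one interface, flood unknowns."""

    def __init__(self, interfaces):
        self.interfaces = set(interfaces)
        self.mac_table = {}                    # mac -> virtual interface

    def receive(self, src_mac, dst_mac, in_if):
        # MAC address learning: map the source MAC to the receiving interface.
        self.mac_table[src_mac] = in_if
        if dst_mac in self.mac_table:
            # Destination learned: forward to the corresponding interface.
            return {self.mac_table[dst_mac]}
        # Destination unknown: flood to all interfaces except the receiving one.
        return self.interfaces - {in_if}

br = VBridge(["if1", "if2", "if3"])
flooded = br.receive("aa:aa", "bb:bb", "if1")  # unknown dst: floods to if2 and if3
br.receive("bb:bb", "aa:aa", "if2")            # bb:bb is now learned on if2
learned = br.receive("aa:aa", "bb:bb", "if1")  # forwards only to if2
```

A real vBridge additionally ages entries out of the MAC table, as described above.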
-
-==== vRouter Functions
-
-The vRouter transfers IPv4 packets between vBridges. The vRouter supports routing, ARP learning, and ARP aging functions. The following outlines the functions.
-
-* Routing function
-  When an IP address is registered with a virtual interface of the vRouter, the default routing information for that interface is registered. It is also possible to statically register routing information for a virtual interface.
-* ARP learning function
-  The vRouter associates a destination IP address, MAC address and a virtual interface, based on an ARP request to its host or a reply packet for an ARP request, and maintains this information in an ARP table prepared for each routing domain. The registered ARP entry is retained until the aging timer, described later, times out. The vRouter transmits an ARP request on an individual aging timer basis and deletes the associated entry from the ARP table if no reply is returned. For static ARP learning, you can register ARP entry information manually.
-* DHCP relay agent function
-  The vRouter also provides the DHCP relay agent function.
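The ARP aging behaviour described above — entries refreshed on learning and removed once the aging timer elapses — can be sketched with a table that takes an injectable clock. Names, the timer value, and the lazy expiry-on-lookup are illustrative assumptions, not the vRouter's actual mechanism (which actively re-sends ARP requests before deleting entries).

```python
class ArpTable:
    """Sketch of an aging ARP table: learn() stamps an entry,
    lookup() returns it only while the aging timer has not elapsed."""

    def __init__(self, aging_seconds, clock):
        self.aging = aging_seconds
        self.clock = clock                     # injectable time source
        self.entries = {}                      # ip -> (mac, iface, learned_at)

    def learn(self, ip, mac, iface):
        self.entries[ip] = (mac, iface, self.clock())

    def lookup(self, ip):
        entry = self.entries.get(ip)
        if entry is None:
            return None
        mac, iface, learned_at = entry
        if self.clock() - learned_at > self.aging:
            del self.entries[ip]               # aging timer expired
            return None
        return (mac, iface)

now = [0.0]                                    # fake clock for demonstration
table = ArpTable(aging_seconds=300, clock=lambda: now[0])
table.learn("10.0.0.2", "aa:bb:cc:dd:ee:ff", "vrt-if1")
now[0] = 100.0
print(table.lookup("10.0.0.2"))                # still within the aging window
now[0] = 500.0
print(table.lookup("10.0.0.2"))                # aged out -> None
```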
-
-==== Flow Filter Functions
-
-The Flow Filter function is similar to an ACL. It is possible to allow or prohibit communication for only those packets that meet a particular condition. It can also perform a processing called Redirection (WayPoint routing), which differs from an existing ACL.
-A Flow Filter can be applied to any interface of a vNode within the VTN, making it possible to control the packets that pass through that interface.
-The match conditions that could be specified in Flow Filter are as follows. It is also possible to specify a combination of multiple conditions.
-
-* Source MAC address
-* Destination MAC address
-* MAC ether type
-* VLAN Priority
-* Source IP address
-* Destination IP address
-* DSCP
-* IP Protocol
-* TCP/UDP source port
-* TCP/UDP destination port
-* ICMP type
-* ICMP code
-
-The types of Action that can be applied on packets that match the Flow Filter conditions are given in the following table.
-By specifying Redirection as the action, it is possible to make only those packets that match a particular condition pass through a particular server. For example, the path of a flow can be changed for each packet sent from a particular terminal, depending upon the destination IP address.
-VLAN priority control and DSCP marking are also supported.
-
-
-[cols="2*"]
-|===
-| Action | Function
-| Pass
-| Passes particular packets matching the specified conditions.
-
-| Drop
-| Discards particular packets matching the specified conditions.
-
-| Redirection
-| Redirects the packet to a desired virtual interface. Both Transparent Redirection (not changing MAC address) and Router Redirection (changing MAC address) are supported.
-|===
-
-The following figure shows an example of how the flow filter function works.
-
-When a packet being transferred within a virtual network goes through a virtual interface, the function evaluates any matching condition specified by a flow filter on that interface to see whether the packet matches it.
-If the packet matches the condition, the function applies the action specified by the flow filter. In the example shown in the figure, the matching condition is evaluated at BR1 and the packet is discarded if it matches.
-
-.VTN FlowFilter
-image::vtn/VTN_Flow_Filter.jpg[width=500]
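The evaluation just described — test a packet against an ordered list of (condition, action) filters and apply the first match — can be sketched as below. The field names and the filter representation are illustrative assumptions; real VTN flow filters use the match fields listed earlier in this section.

```python
def apply_flow_filter(packet, filters):
    """Evaluate a packet (dict of header fields) against ordered filters.

    Each filter is (match, action): match is a dict of required field
    values; action is "pass", "drop", or ("redirect", target_interface).
    The first matching filter wins; an unmatched packet passes.
    """
    for match, action in filters:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "pass"

filters = [
    # Send TCP traffic to 10.0.0.9 through a waypoint (Redirection).
    ({"dst_ip": "10.0.0.9", "ip_proto": 6}, ("redirect", "firewall-if")),
    # Discard everything from one misbehaving terminal (Drop).
    ({"src_mac": "de:ad:be:ef:00:01"}, "drop"),
]
print(apply_flow_filter({"src_mac": "de:ad:be:ef:00:01"}, filters))  # drop
```

A packet matching neither filter falls through to the default "pass", mirroring normal forwarding.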
-
-==== Multiple SDN Controller Coordination
-
-With its network abstractions, VTN enables users to configure a virtual network across multiple SDN controllers. This provides a highly scalable network system.
-
-A VTN can be created on each SDN controller. If users would like to manage those multiple VTNs with one policy, they can be integrated into a single VTN.
-
-As a use case, this feature can be deployed in a multi-data-center environment. Even if the data centers are geographically separated and controlled by different controllers, a single-policy virtual network can be realized with VTN.
-
-Also, one can easily add a new SDN Controller to an existing VTN or delete a particular SDN Controller from VTN.
-
-In addition to this, one can define a VTN which covers both OpenFlow network and Overlay network at the same time.
-
-Flow Filter, which is set on the VTN, will be automatically applied on the newly added SDN Controller.
-
-==== Coordination between OpenFlow Network and L2/L3 Network
-
-It is possible to configure VTN in an environment with a mix of L2/L3 switches as well. An L2/L3 switch is shown in the VTN as a vBypass. Flow Filters and policing cannot be configured for a vBypass; however, it can be treated as a virtual node inside the VTN.
-
-==== Virtual Tenant Network (VTN) API
-
-VTN provides Web APIs. They follow the REST architecture and provide access to resources within VTN that are identified by URIs.
-Users can perform operations such as GET/PUT/POST/DELETE against virtual network resources (e.g., vBridge or vRouter) by sending a message to VTN through HTTPS communication in XML or JSON format.
-
-.VTN API
-image::vtn/VTN_API.jpg[VTN API]
-
-===== Function Outline
-
-VTN provides the following operations for various network resources.
-
-[cols="5*"]
-|===
-| Resources
-| GET
-| POST
-| PUT
-| DELETE
-
-| VTN
-| Yes
-| Yes
-| Yes
-| Yes
-
-| vBridge
-| Yes
-| Yes
-| Yes
-| Yes
-
-| vRouter
-| Yes
-| Yes
-| Yes
-| Yes
-
-| vTep
-| Yes
-| Yes
-| Yes
-| Yes
-
-| vTunnel
-| Yes
-| Yes
-| Yes
-| Yes
-
-| vBypass
-| Yes
-| Yes
-| Yes
-| Yes
-
-| vLink
-| Yes
-| Yes
-| Yes
-| Yes
-
-| Interface
-| Yes
-| Yes
-| Yes
-| Yes
-
-| Port map
-| Yes
-| No
-| Yes
-| Yes
-
-| Vlan map
-| Yes
-| Yes
-| Yes
-| Yes
-
-| Flowfilter (ACL/redirect)
-| Yes
-| Yes
-| Yes
-| Yes
-
-| Controller information
-| Yes
-| Yes
-| Yes
-| Yes
-
-| Physical topology information
-| Yes
-| No
-| No
-| No
-
-| Alarm information
-| Yes
-| No
-| No
-| No
-|===
-
-===== Example usage
-
-The following is an example of how to construct a virtual network.
-
-* Create VTN
-
-----
-   curl --user admin:adminpass -X POST -H 'content-type: application/json'  \
-  -d '{"vtn":{"vtn_name":"VTN1"}}' http://172.1.0.1:8083/vtn-webapi/vtns.json
-----
-* Create Controller Information
-
-----
-   curl --user admin:adminpass -X POST -H 'content-type: application/json'  \
-  -d '{"controller": {"controller_id":"CONTROLLER1","ipaddr":"172.1.0.1","type":"odc","username":"admin", \
-  "password":"admin","version":"1.0"}}' http://172.1.0.1:8083/vtn-webapi/controllers.json
-----
-* Create vBridge under VTN
-
-----
-  curl --user admin:adminpass -X POST -H 'content-type: application/json' \
-  -d '{"vbridge":{"vbr_name":"VBR1","controller_id": "CONTROLLER1","domain_id": "(DEFAULT)"}}' \
-  http://172.1.0.1:8083/vtn-webapi/vtns/VTN1/vbridges.json
-----
-* Create the interface under vBridge
-
-----
-  curl --user admin:adminpass -X POST -H 'content-type: application/json' \
-  -d '{"interface":{"if_name":"IF1"}}' http://172.1.0.1:8083/vtn-webapi/vtns/VTN1/vbridges/VBR1/interfaces.json
-----
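The four curl steps above can also be captured as data and driven from a script. The sketch below only enumerates the (URL, payload) pairs — every endpoint path and payload field is taken from the curl examples, and the coordinator address `172.1.0.1:8083` is likewise the one used above; actually sending the POSTs (e.g. with `urllib.request`) is left to the operator.

```python
import json

BASE = "http://172.1.0.1:8083/vtn-webapi"   # coordinator address from the examples

def coordinator_calls(vtn, controller, vbridge, iface):
    """Return the ordered (url, payload) POSTs for the four steps above."""
    return [
        (BASE + "/vtns.json", {"vtn": {"vtn_name": vtn}}),
        (BASE + "/controllers.json",
         {"controller": {"controller_id": controller, "ipaddr": "172.1.0.1",
                         "type": "odc", "username": "admin",
                         "password": "admin", "version": "1.0"}}),
        ("%s/vtns/%s/vbridges.json" % (BASE, vtn),
         {"vbridge": {"vbr_name": vbridge, "controller_id": controller,
                      "domain_id": "(DEFAULT)"}}),
        ("%s/vtns/%s/vbridges/%s/interfaces.json" % (BASE, vtn, vbridge),
         {"interface": {"if_name": iface}}),
    ]

for url, payload in coordinator_calls("VTN1", "CONTROLLER1", "VBR1", "IF1"):
    print("POST", url, json.dumps(payload))
```

Note the ordering matters: the controller and VTN must exist before the vBridge, and the vBridge before its interface.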
index fc2aa1cebb56c045167d6fe4692cc284e1b97609..16e6953dfc5be67432f2a775e43e3fb549e4be93 100644 (file)
@@ -1,42 +1,3 @@
 == Virtual Tenant Network (VTN)
 
-include::VTN_Overview.adoc[VTN Overview]
-
-include::VTN_OpenStack_Support-user.adoc[VTN OpenStack Configuration]
-
-=== VTN Manager Usage Examples
-
-include::VTN_Manager_How_To_Provision_Virtual_L2_Network.adoc[How to provision virtual L2 Network]
-
-include::VTN_Manager_How_To_test_vlan_map_using_mininet.adoc[How To Test Vlan-map using Mininet Environment]
-
-include::VTN_Manager_How_To_Configure_Service_Function_Chaining_Support.adoc[How To Configure Service Function Chaining Support using Mininet Environmen]
-
-include::VTN_Manager_How_To_View_Dataflows.adoc[How To View Dataflow in VTN Manager]
-
-include::VTN_Manager_How_To_Create_Mac_Map_In_VTN.adoc[How To Create Mac Map In VTN]
-
-include::VTN_Manager_How_To_Configure_Flowfilters.adoc[How To Configure Flow Filters using VTN]
-
-include::VTN_Manager_How_To_Use_VTN_to_change_the_path_of_the_packet_flow.adoc[How To Use VTN to change the path of the packet flow]
-
-=== VTN Coordinator Usage Examples
-
-include::VTN_How_To_configure_L2_Network_with_Single_Controller.adoc[How to configure L2 Network with Single Controller]
-
-include::VTN_How_To_configure_L2_Network_with_Multiple_Controllers.adoc[How to configure L2 Network with Multiple Controllers]
-
-include::VTN_How_To_test_vlanmap_using_mininet.adoc[How To Test Vlan-Map In Mininet Environment]
-
-include::VTN_How_To_view_STATIONS.adoc[VTN Station information]
-
-include::VTN_How_To_view_Dataflows.adoc[Dataflows Path Information]
-
-include::VTN_How_To_configure_flow_filters.adoc[How To Configure Flow Filters Using VTN]
-
-include::VTN_How_To_Use_VTN_to_make_packets_take_different_paths.adoc[Change Traffic Path]
-
-include::VTN_How_To_Troubleshoot_Coordinator_Installation.adoc[Troubleshoot VTN Coordinator]
-
-include::VTN_How_To_Support_for_Microsoft_SCVMM.adoc[Support for Microsoft SCVMM 2012 R2 with ODL VTN]
-
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/virtual-tenant-network-(vtn).html
index 50fbcfbcab86d3f4381644efba787b27a1f30e0f..7365b4d44c792d063fd98988d61b1449f1b638b2 100644 (file)
@@ -1,93 +1,3 @@
 == YANG-PUSH
-This section describes how to use the YANG-PUSH feature in OpenDaylight
-and contains configuration, administration, and management
-sections for the feature.
 
-=== Overview
-The YANG PUBSUB project allows applications to place subscriptions upon targeted subtrees of YANG datastores residing on remote devices. Changes in YANG objects within the remote subtree can be pushed to the OpenDaylight MD-SAL and to the application as specified, without requiring the controller to make a continuous set of fetch requests.
-
-==== YANG-PUSH capabilities available
-
-This module contains the base code which embodies the intent of YANG-PUSH requirements for subscription as defined in {i2rs-pub-sub-requirements} [https://datatracker.ietf.org/doc/draft-ietf-i2rs-pub-sub-requirements/].   The mechanism for delivering on these YANG-PUSH requirements over Netconf transport is defined in {netconf-yang-push} [netconf-yang-push: https://tools.ietf.org/html/draft-ietf-netconf-yang-push-00].  
-
-Note that in the current release, not all capabilities of draft-ietf-netconf-yang-push are realized. Currently, only *create-subscription* RPC support from ietf-datastore-push@2015-10-15.yang is implemented, and this is for periodic subscriptions only. There is, of course, intent to provide much additional functionality in future OpenDaylight releases.
-
-==== Future YANG-PUSH capabilities
-
-Over time, the intent is to flesh out more robust capabilities which will allow OpenDaylight applications to subscribe to YANG-PUSH compliant devices.  Capabilities for future releases will include:
-
-Support for subscription change/delete:
-
-* *modify-subscription* RPC support for all mountpoint devices or a particular mountpoint device
-* *delete-subscription* RPC support for all mountpoint devices or a particular mountpoint device
-
-Support for static subscriptions:
-This will enable the receipt of subscription updates pushed from publishing devices where no signaling from the controller has been used to establish the subscriptions.
-
-Support for additional transports:
-NETCONF is not the only transport of interest to OpenDaylight or the subscribed devices.  Over time this code will support Restconf and HTTP/2 transport requirements defined in {netconf-restconf-yang-push} [https://tools.ietf.org/html/draft-voit-netconf-restconf-yang-push-01]
-
-
-=== YANG-PUSH Architecture
-The code architecture of YANG-PUSH consists of two main elements:
-
-* YANGPUSH Provider
-* YANGPUSH Listener
-
-YANGPUSH Provider receives create-subscription requests from applications and then establishes/registers the corresponding listener which will receive information pushed by a publisher. In addition, YANGPUSH Provider also invokes an augmented OpenDaylight create-subscription RPC which enables applications to register for notifications as per RFC 5277. This augmentation adds periodic time period (duration) and subscription-id values to the existing RPC parameters. The Java package supporting this capability is "org.opendaylight.yangpush.impl". YangpushDomProvider is the class which supports this YANGPUSH Provider capability.
-
-The YANGPUSH Listener accepts update notifications from a device after they have been de-encapsulated from the NETCONF transport. The YANGPUSH Listener then passes these updates to the MD-SAL. This function is implemented via the YangpushDOMNotificationListener class within the "org.opendaylight.yangpush.listner" Java package. Applications should monitor the MD-SAL for the availability of newly pushed subscription updates.
-
-=== YANG-PUSH Catalog
-The NF Catalog contains metadata describing a NF.
-
-==== Configuring YANG-PUSH Catalog
-TBD: Describe how to configure YANG-PUSH Catalog after installation.
-
-==== Administering YANG-PUSH Catalog
-TBD: Include related command reference or operations
-for using YANG-PUSH Catalog.
-
-=== YANG-PUSH Workload Manager
-The Workload Manager defines RPCs to manage instances.
-
-=== Configuring YANG-PUSH Workload Manager
-TBD: Describe how to configure YANG-PUSH Workload Manager after installation.
-
-=== Administering YANG-PUSH Workload Manager
-TBD: Include related command reference or operations
-for using YANG-PUSH Workload Manager.
-
-=== Tutorials
-Below are tutorials for YANG-PUSH.
-
-==== Using YANG-PUSH Catalog
-TBD: State the purpose of tutorial
-
-===== Overview
-TBD: An overview of the YANG-PUSH Catalog tutorial
-
-===== Prerequisites
-TBD: Provide any prerequisite information, assumed knowledge, or environment
-required to execute the use case.
-
-===== Target Environment
-There are no topology requirements for using YANG-PUSH. A single node able to interact as per https://tools.ietf.org/html/draft-ietf-netconf-yang-push-00 is sufficient to use this capability.
-
-===== Instructions
-TBD: Step by step procedure for using YANG-PUSH Catalog.
-
-==== Using YANG-PUSH Workload Manager
-TBD: State the purpose of tutorial
-
-===== Overview
-TBD: An overview of the YANG-PUSH Workload Manager tutorial
-
-===== Prerequisites
-TBD: Provide any prerequisite information, assumed knowledge, or environment
-required to execute the use case.
-
-===== Target Environment
-TBD: Include any topology requirement for the use case.
-
-===== Instructions
-TBD: Step by step procedure for using YANG-PUSH Workload Manager.
\ No newline at end of file
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/yang-push.html
index 481c29b354ff0322dac8bb34b76d04be1eceae8f..f1fd469179a8fafe7d3ef273da4f0dd4710ab431 100644 (file)
@@ -1,355 +1,3 @@
 == YANG IDE User Guide
 
-=== Overview
-
-The YANG IDE project provides an Eclipse plugin that is used to create,
-view, and edit Yang model files.  It currently supports version 1.0 of
-the Yang specification.
-
-The YANG IDE project uses components from the OpenDaylight project for
-parsing and verifying Yang model files.  The "yangtools" parser in
-OpenDaylight is generally used for generating Java code associated
-with Yang models.  If you are just using the YANG IDE to view and edit
-Yang models, you do not need to know any more about this.
-
-Although the YANG IDE plugin is used in Eclipse, it is not necessary to
-be familiar with the Java programming language to use it effectively.
-
-The YANG IDE also uses the Maven build tool, but you do not have to be
-a Maven expert to use it, or even know that much about it.  Very
-little configuration of Maven files will have to be done by you.  In
-fact, about the only thing you will likely ever need to change can be
-done entirely in the Eclipse GUI forms, without even seeing the
-internal structure of the Maven POM file (Project Object Model).
-
-The YANG IDE plugin provides features that are similar to other
-programming language plugins in the Eclipse ecosystem.
-
-For instance, you will find support for the following:
-
-* Immediate "as-you-type" display of syntactic and semantic errors
-* Intelligent completion of language tokens, limited to only choices
-valid in the current scope and namespace
-* Consistent (and customizable) color-coding of syntactic and semantic symbols
-* Access to remote Yang models by specifying a dependency on the
-Maven artifact containing the models (or by manual inclusion in the project)
-* One-click navigation to referenced symbols in external files
-* Mouse hovers display descriptions of referenced components
-* Tools for refactoring or renaming components respect namespaces
-* Code templates can be entered for common conventions
-
-Forthcoming sections of this manual will step through how to utilize
-these features.
-
-=== Creating a Yang Project
-
-After the plugin is installed, the next thing you have to do is create
-a Yang Project.  This is done from the "File" menu, selecting "New",
-and navigating to the "Yang" section and selecting "YANG Project", and
-then clicking "Next" for more items to configure.
-
-Some shortcuts for these steps are the following:
-
-* Typically, the key sequence "Ctrl+n" (press "n" while holding down
-one of the "ctrl" keys) is bound to the "new" function
-* In the "New" wizard dialog, the initial focus is in the filter
-field, where you can enter "yang" to limit the choices to only the
-functions provided by the YANG IDE plugin
-* On the "New" wizard dialog, instead of clicking the "Next" button
-with your mouse, you can press "Alt+n" (you will see a hint for this
-with the "N" being underlined)
-
-==== First Yang Project Wizard Page
-
-After the "Next" button is pressed, the wizard goes to the first page
-that is specific to creating Yang projects.  You will see a subtitle on
-this page of "YANG Tools Configuration".  In almost all cases, you
-should be able to click "Next" again on this page to go to the next
-wizard page.
-
-However, some information about the fields on this page would be helpful.
-
-You will see the following labeled fields and sections:
-
-===== Yang Files Root Directory
-
-This defaults to "src/main/yang".  Except when creating your first
-Yang file, you do not even have to know this, as Eclipse presents
-the same interface to view your Yang files no matter what you set
-this to.
-
-===== Source Code Generators
-
-If you do not know what this is, you do not need to know about it.  The
-"yangtools" Yang parser from OpenDaylight uses a "code generator"
-component to generate specific kinds of Java classes from the Yang
-models.  Again, if you do not need to work with the generated Java
-code, you do not need to change this.
-
-===== Create Example YANG File
-
-This is likely the only field you will ever have any reason to change.
-If this checkbox is set, when the YANG IDE creates the Yang project,
-it will create a sample "acme-system.yang" file which you can view and
-edit to demonstrate the features of the tool to yourself.  If you
-do not need this file, then either delete it from the project or
-uncheck the checkbox to prevent its creation.
-
-When done with the fields on this page, click the "Next" button to go
-to the next wizard page.
-
-==== Second Yang Project Wizard Page
-
-This page has a subtitle of "New Maven project".  There are several
-fields on this page, but you will only ever have to see and change the
-setting of the first field, the "Create a simple project" checkbox.
-You should always set this ON to avoid the selection of a Maven
-archetype, which is something you do not need to do for creating a
-Yang project.
-
-Click "Next" at the bottom of the page to move to the next wizard page.
-
-==== Third Yang Project Wizard Page
-
-This also has a subtitle of "New Maven project", but with different
-fields to set.  You will likely only ever set the first two fields,
-and completely ignore everything else.
-
-The first field is labeled "Group id" in the "Artifact" section.  It
-really does not matter what you set this to, but it does have to be set
-to something.  For consistency, you might set this to the name or
-nickname of your organization.  Otherwise, there are no constraints on
-the value of this field.
-
-The second field is labeled "Artifact id".  The value of this field
-will be used as the name of the project you create, so you will have
-to think about what you want the project to be called.  Also note that
-this name has to be unique in the Eclipse workspace.  You cannot have
-two projects with the same name.
-
-After you have set this field, you will notice that the "Next" button is
-insensitive, but now the "Finish" button is sensitive.  You can click
-"Finish" now (or use the keyboard shortcut of "Alt+f"), and the Yang
-IDE will finally create your project.
-
-=== Creating a Yang File
-
-Now that you have created your project, it is time to create your first Yang file.
-
-When you created the Yang project, you might have noticed the other
-option next to "YANG Project", which was "YANG File".  That is what
-you will select now.  Click "Next" to go to the first wizard page.
-
-==== First Yang File Wizard Page
-
-This wizard page lets you specify where the new file will be located, and its name.
-
-You have to select the particular project you want the file to go
-into, and it needs to go into the "src/main/yang" folder (or a
-different location if you changed that field when creating the
-project).
-
-You then enter the desired name of the file in the "File name" field.
-The file name should have no spaces or "special characters" in it.
-You can specify a ".yang" extension if you want.  If you do not
-specify an extension, the YANG IDE will add the ".yang" extension.
-
-Click "Next" to go to the next wizard page.
-
-==== Second Yang File Wizard Page
-
-On this wizard page, you set some metadata about the module that is
-used to initialize the contents of the Yang file.
-
-It has the following fields:
-
-===== Module Name
-
-This will default to the "base name" of the file name you created.
-For instance, if the file name you created was "network-setup.yang",
-this field will default to "network-setup".  You should leave this
-value as is.  There is no good reason to define a model with a name
-different from the file name.
-
-===== Namespace
-
-This defaults to "urn:opendaylight:xxx", where "xxx" is the "base
-name" of the file name you created.  You should put a lot of thought
-into designing a namespace naming scheme that is used throughout your
-organization.  It is quite common for this namespace value to look like
-an "http" URL, but that is just a convention and does not imply
-that there is a web page residing at that HTTP address.
-
-===== Prefix
-
-This defaults to the "base name" of the file name you created.  It
-does not technically matter what you set this to, as long as
-it is not empty.  Conventionally, it should be a "nickname" that is
-used to refer to the given namespace in an abbreviated form, when
-referenced in an "import" statement in another Yang model file.
-
-===== Revision
-
-This has to be a date value in the form of "yyyy-mm-dd", representing
-the last modified date of this Yang model.  The value will default to
-the current date.
-
-===== Revision Description
-
-This is just human-readable text, which will go into the "description"
-field underneath the Yang "revision" field, which will describe what
-went into this revision.
-
-When all the fields have the content you want, click the "Finish"
-button to have the YANG IDE create the file in the specified location.
-It will then present the new file in the editor view for additional
-modifications.
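-
-As an illustration, if you created a file named "network-setup.yang"
-and accepted the wizard defaults, the initial file contents might look
-roughly like the following.  This is a hypothetical sketch (the exact
-layout can differ between YANG IDE versions, and the revision date
-defaults to the current date):
-
-[source,yang]
-----
-module network-setup {
-    namespace "urn:opendaylight:network-setup";
-    prefix network-setup;
-
-    revision 2016-09-05 {
-        description "Initial revision.";
-    }
-}
-----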
-
-=== Accessing Artifacts for Yang Model Imports
-
-You might be working on Yang models that are "abstract" or are
-intended to be imported by other Yang models.  You might also, and
-more likely, be working on Yang models that import other "abstract"
-Yang models.
-
-Assuming you are in that latter more common group, you need to consider
-for yourself, and for your organization, how you are going to get
-access to those models that you import.
-
-You could use a very simple and primitive approach of somehow
-obtaining those models from some source as plain files and just
-copying them into the "src/main/yang" folder of your project.  For a
-simple demo or a "one-off" very short project, that might be
-sufficient.
-
-A more robust and maintainable approach would be to reference
-"coordinates" of the artifacts containing Yang models to import.  When
-you specify unique coordinates associated with that artifact, the Yang
-IDE can retrieve the artifact in the background and make it available
-for your "import" statements.
-
-Those "coordinates" that I speak of refer to the Maven concepts of
-"group id", "artifact id", and "version".  You may remember "group id"
-and "artifact id" from the wizard page for creating a Yang project.
-It is the same idea.  If you ever produce Yang model artifacts that
-other people are going to import, you will want to think more about what
-you set those values to when you created the project.
-
-For example, the OpenDaylight project produces several importable
-artifacts that you can specify to get access to common Yang models.
-
-==== Turning on Indexing for Maven Repositories
-
-Before we talk about how to add dependencies to Maven artifacts with
-Yang models for import, I need to explain how to make it easier to
-find those artifacts.
-
-In the Yang project that you have created, the "pom.xml" file (also
-called a "POM file") is the file that Maven uses to specify
-dependencies.  We will talk about that in a minute, but first we need to
-talk about "repositories".  These are where artifacts are stored.
-
-We are going to have Eclipse show us the "Maven Repositories" view.
-In the main menu, select "Window" and then "Show View", and then
-"Other".  Like in the "New" dialog, you can enter "maven" in the
-filter field to limit the list to views with "maven" in the name.
-Click on the "Maven Repositories" entry and click OK.
-
-This will usually create the view in the bottom panel of the window.
-
-The view presents an outline view of four principal elements: 
-
-* Local Repositories
-* Global Repositories
-* Project Repositories
-* Custom Repositories
-
-For this purpose, the only section you care about is "Project
-Repositories", being the repositories that are only specified in the
-POM for the project.  There should be a "right-pointing arrow" icon on
-the line.  Click that to expand the entry.
-
-You should see two entries there:
-
-* opendaylight-release
-* opendaylight-snapshot
-
-You will also see internet URLs associated with each of those repositories.
-
-For this purpose, you only care about the first one.  Right-click on
-that entry and select "Full Index Enabled".  The first time you do
-this on the first project you create, it will spend several minutes
-walking the entire tree of artifacts available at that repository and
-"indexing" all of those components.  When this is done, searching for
-available artifacts in that repository will go very quickly.
-
-=== Adding Dependencies Containing Yang Models
-
-Double-click the "pom.xml" file in your project.  Instead of just
-bringing up the view of an XML file (although you can see that if you
-like), it presents a GUI form editor with a handful of tabs.
-
-The first tab, "Overview", shows things like the "Group Id", "Artifact
-Id", and "Version", which represents the "Maven coordinate" of your
-project, which I have mentioned before.
-
-Now click on the "Dependencies" tab.  You will now see two list
-components, labeled "Dependencies" and "Dependency Management".  You
-only care about the "Dependencies" section.
-
-In the "Dependencies" section, you should see one dependency for an
-artifact called "yang-binding".  This artifact is part of
-OpenDaylight, but you do not need to know anything about it.
-
-Now click the "Add" button.
-
-This brings up a dialog titled "Select Dependency".  It has three
-fields at the top labeled "Group Id", "Artifact Id", and "Version",
-with a "Scope" dropdown.  You will never have a need to change the
-"Scope" dropdown, so ignore it.  Despite the fact that you will need to
-get values into these fields, in general usage, you will never have to
-manually enter values into them, but you will see values being inserted
-into these fields by the next steps I describe.
-
-Below those fields is a field labeled "Enter groupId, artifactId ...".
-This is effectively a "filter field", like on the "New" dialog, but
-instead of limiting the list from a short list of choices, the value
-you enter there will be matched against all of the artifacts that were
-indexed in the "opendaylight-release" repository (and others).  It
-will match the string you enter as a substring of any groupId or
-artifactId.
-
-For all of the entries that match that substring, it will list an
-entry showing the groupId and artifactId, with an expansion arrow.  If
-you open it by clicking on the arrow, you will see individual entries
-corresponding to each available version of that artifact, along with
-some metadata about the artifacts between square brackets, mostly
-indicating what "type" of artifact it is.
-
-For your purposes, you only ever want to use "bundle" or "jar" artifacts.
-
-Let us consider an example that many people will probably be using.
-
-In the filter field, enter "ietf-yang-types".  Depending on what
-versions are available, you should see a small handful of "groupId,
-artifactId" entries there.  One of them should be groupId
-"org.opendaylight.mdsal.model" and artifactId "ietf-yang-types".
-Click on the expansion arrow to open that.
-
-What you will see at this point depends on what versions are
-available.  You will likely want to select the newest one (most likely
-top of the list) that is also either a "bundle" or "jar" type
-artifact.
-
-If you click on that resulting version entry, you should notice at
-this point that the "Group Id", "Artifact Id", and "Version" fields at
-the top of the dialog are now filled in with the values corresponding
-to this artifact and version.
-
-If this is the version that you want, click OK and this artifact will
-be added to the dependencies in the POM.
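-
-The resulting entry in the POM is a standard Maven dependency.  For
-the "ietf-yang-types" example it would look roughly like this (the
-version shown is a placeholder; use whichever version you selected
-in the dialog):
-
-[source,xml]
-----
-<dependency>
-  <groupId>org.opendaylight.mdsal.model</groupId>
-  <artifactId>ietf-yang-types</artifactId>
-  <version>VERSION-SELECTED-IN-DIALOG</version>
-</dependency>
-----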
-
-This will now make the Yang models found in that artifact available in
-"import" statements in Yang models, not to mention the completion
-choices for that "import" statement.
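-
-With the dependency in place, an "import" statement in one of your
-models can reference the models in that artifact.  A hypothetical
-sketch (the prefix name is your choice):
-
-[source,yang]
-----
-import ietf-yang-types {
-    prefix yang;
-}
-----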
+This content has been migrated to: http://docs.opendaylight.org/en/stable-boron/user-guide/yang-ide-user-guide.html
diff --git a/manuals/user-guide/src/main/resources/images/snbi/snbi_arch.png b/manuals/user-guide/src/main/resources/images/snbi/snbi_arch.png
new file mode 100644 (file)
index 0000000..d6aaa59
Binary files /dev/null and b/manuals/user-guide/src/main/resources/images/snbi/snbi_arch.png differ