Fold netvirtsfc-env demo into ovsdb project 43/31143/1
author Flavio Fernandes <ffernand@redhat.com>
Thu, 10 Dec 2015 13:55:20 +0000 (08:55 -0500)
committer Flavio Fernandes <ffernand@redhat.com>
Thu, 10 Dec 2015 14:05:59 +0000 (09:05 -0500)
Keeping the ovsdb netvirt-sfc demo together with the ovsdb project itself.

Original location: https://github.com/flavio-fernandes/netvirtsfc-env

Change-Id: I747694a9860433c4d4deab06aed537ecba1a96f5
Signed-off-by: Flavio Fernandes <ffernand@redhat.com>
24 files changed:
resources/demo/netvirtsfc-env/README.md [new file with mode: 0755]
resources/demo/netvirtsfc-env/Vagrantfile [new file with mode: 0644]
resources/demo/netvirtsfc-env/bootstrap.sh [new file with mode: 0644]
resources/demo/netvirtsfc-env/checkdemo.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/cleandemo.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/cleanodl.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/demo-asymmetric-chain/rest.py [new file with mode: 0755]
resources/demo/netvirtsfc-env/dpdumpflows.py [new file with mode: 0755]
resources/demo/netvirtsfc-env/dumpflows.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/env.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/flowcount.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/images/asymmetric-sfc-demo.png [new file with mode: 0644]
resources/demo/netvirtsfc-env/infrastructure_launch.py [new file with mode: 0755]
resources/demo/netvirtsfc-env/ovswork.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/pollflows.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/resetcontroller.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/rest-clean.py [new file with mode: 0755]
resources/demo/netvirtsfc-env/setsfc.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/startdemo.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/traceflow.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/utils/hosts [new file with mode: 0644]
resources/demo/netvirtsfc-env/utils/overlay-flows.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/utils/setuphosts.sh [new file with mode: 0755]
resources/demo/netvirtsfc-env/vmclean.sh [new file with mode: 0755]

diff --git a/resources/demo/netvirtsfc-env/README.md b/resources/demo/netvirtsfc-env/README.md
new file mode 100755 (executable)
index 0000000..e993674
--- /dev/null
@@ -0,0 +1,147 @@
+# SETUP
+
+This is a demonstration / development environment for showcasing OpenDaylight OVSDB NetVirt with Service Function Chaining (SFC).
+
+```
+git clone https://github.com/flavio-fernandes/netvirtsfc-env.git
+```
+
+This demo setup can also be found under the ovsdb repo of the OpenDaylight project:
+
+```
+https://github.com/opendaylight/ovsdb/tree/master/resources/demo/netvirtsfc-env
+```
+
+This demo is analogous to the one done by the OpenDaylight Group Based Policy (GBP) project. In fact, the kudos
+for initially putting it together go to our friends Keith, Thomas, and the others responsible for GBP:
+
+```
+https://github.com/alagalah/gbpsfc-env
+```
+
+The initial installation may take some time, as the Vagrant box and Docker images must be downloaded.
+
+After the first time it is very quick.
+
+1. Set up Vagrant.
+  * Edit env.sh to set NUM_NODES. (Keep all other vars the same for this version.)
+    Also set 'ODL_ROOT_DIR' to point to the directory ./openstack/net-virt-sfc/karaf/target/assembly
+
+    That directory is created when you build the ovsdb project; alternatively, point it to wherever
+    the karaf distro got unzipped.
+
+  * Each VM takes approximately 1 GB RAM and 2 GB of used HDD (out of 40 GB allocated).
+
+  * demo-asymmetric-chain: 6 VMs.
+
+2. From the 'netvirtsfc-env' directory do:
+```
+source ./env.sh
+vagrant up
+```
+  * This takes quite some time initially. 
+
+3. Start controller.
+  * Currently it is expected that the controller runs on the machine hosting the vagrant VMs.
+  * Tested using OVSDB NetVirt Beryllium.
+
+  * Set config for your setup:
+
+    Use the script 'setsfc.sh' to make the changes below. You only need to run it once after a build.
+
+    * Modify the NetvirtSfc config.xml to start in standalone mode. (i.e. set of13provider to standalone)
+    * Modify the logging levels to help with troubleshooting
+    * Start ODL with the following feature loaded:  odl-ovsdb-sfc-ui
+
+  * Start the controller by running bin/karaf and make sure the following features are installed:
+```
+    cd $ODL_ROOT_DIR ; ./bin/karaf
+```
+
+```
+    opendaylight-user@root>feature:list -i | grep ovsdb-sfc
+    odl-ovsdb-sfc-test                   | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-test1.2.1-SNAPSHOT        | OpenDaylight :: ovsdb-sfc-test
+    odl-ovsdb-sfc-api                    | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: api
+    odl-ovsdb-sfc                        | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc
+    odl-ovsdb-sfc-rest                   | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: REST
+    odl-ovsdb-sfc-ui                     | 1.2.1-SNAPSHOT   | x         | odl-ovsdb-sfc-1.2.1-SNAPSHOT            | OpenDaylight :: ovsdb-sfc :: UI
+```
+
+    Note that if you missed running 'setsfc.sh', ODL will operate in non-standalone mode, which is
+    intended for using ovsdb netvirt with openstack/tacker environments.
+
+  * Run `log:tail | grep SfcL2Renderer` and wait until the following message appears in the log:
+```
+ successfully started the SfcL2Renderer plugin
+```
+  * Now you can ^C the log:tail if you wish
+
+## demo-asymmetric-chain
+
+  * Service chain classifying HTTP traffic.
+  * Traffic in the forward direction is chained; in the reverse direction it uses the normal VXLAN tunnel.
+  * 2 docker containers in the same tenant space.
+
+![asymmetric-chain demo diag](https://raw.githubusercontent.com/flavio-fernandes/netvirtsfc-env/master/images/asymmetric-sfc-demo.png)
+
+VMs:
+* netvirtsfc1: netvirt (client initiates transactions from here)
+* netvirtsfc2: sff
+* netvirtsfc3: "sf"
+* netvirtsfc4: sff
+* netvirtsfc5: "sf"
+* netvirtsfc6: netvirt (run a server here)
+
+Containers:
+* h35_2 is on netvirtsfc1. This host serves as the client.
+* h35_4 is on netvirtsfc6. This host serves as the webserver.
+
+To run, from the host folder where the Vagrantfile is located, do:
+
+`./startdemo.sh demo-asymmetric-chain`
+
+### To test by sending traffic:
+Start a test HTTP server on h35_4 in VM 6.
+
+*(don't forget the double ENTER after `docker attach`)*
+```bash
+vagrant ssh netvirtsfc6
+docker ps
+docker attach h35_4
+python -m SimpleHTTPServer 80
+```
+
+Ctrl-P-Q to detach from docker without stopping the SimpleHTTPServer, and log off netvirtsfc6.
+
+Now start client traffic, either ping or HTTP requests to the server on h35_4.
+
+```bash
+vagrant ssh netvirtsfc1
+docker ps
+docker attach h35_2
+ping 10.0.35.4
+curl 10.0.35.4
+while true; do curl 10.0.35.4; sleep 1; done
+```
+
+Ctrl-P-Q to detach from docker, leaving the client making HTTP requests, and log off netvirtsfc1.
+
+Look around: use `vagrant ssh` to reach the various machines. To run wireshark, ssh to the VMs using the -XY flags:
+```
+vagrant ssh netvirtsfcX -- -XY   (e.g.: vagrant ssh netvirtsfc1 -- -XY)
+sudo wireshark &
+```
+
+ * Take packet captures on eth1, as that is the interface used for communication between VMs.
+ * Inspect kernel datapath flows with `sudo ovs-dpctl dump-flows`.
+
+
+### When finished, from the host folder where the Vagrantfile is located, do:
+
+`./cleandemo.sh`
+
+If you like, `vagrant destroy` will remove all VMs.
+
+## Preparing to run another demo
+1. In the vagrant directory, run cleandemo.sh.
+2. Stop the controller (log out of karaf).
+3. Remove the data, journal, and snapshots directories from the controller directory.
+4. Restart testing by restarting the controller, installing the features, and waiting, as above.
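The reset steps above can be sketched as a small helper (a hedged sketch using Python's stdlib; note that the repo's cleanodl.sh removes only the directories' contents, while this removes the directories themselves):

```python
import os
import shutil
import tempfile

def reset_controller_state(odl_root):
    """Remove persisted controller state (data, journal, snapshots) so ODL starts clean."""
    for sub in ("data", "journal", "snapshots"):
        path = os.path.join(odl_root, sub)
        if os.path.isdir(path):
            shutil.rmtree(path)

# Demo on a throwaway directory instead of a real ODL_ROOT_DIR.
root = tempfile.mkdtemp()
for sub in ("data", "journal"):
    os.makedirs(os.path.join(root, sub))
reset_controller_state(root)
print(sorted(os.listdir(root)))  # []
```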
diff --git a/resources/demo/netvirtsfc-env/Vagrantfile b/resources/demo/netvirtsfc-env/Vagrantfile
new file mode 100644 (file)
index 0000000..3811901
--- /dev/null
@@ -0,0 +1,34 @@
+
+# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
+VAGRANTFILE_API_VERSION = "2"
+
+Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
+  odl=ENV['ODL']
+  config.vm.provider "virtualbox" do |vb|
+    vb.memory = "512"
+  end
+
+  # run our bootstrapping for the system
+  config.vm.provision 'shell', path: 'bootstrap.sh', :args => odl
+
+  num_nodes = (ENV['NUM_NODES'] || 1).to_i
+
+  # ip configuration
+  ip_base = (ENV['SUBNET'] || "192.168.50.")
+  ips = num_nodes.times.collect { |n| ip_base + "#{n+70}" }
+
+  num_nodes.times do |n|
+    config.vm.define "netvirtsfc#{n+1}", autostart: true do |compute|
+      vm_ip = ips[n]
+      vm_index = n+1
+      compute.vm.box = "ubuntu/trusty64"
+      compute.vm.hostname = "netvirtsfc#{vm_index}"
+      compute.vm.network "private_network", ip: "#{vm_ip}"
+      compute.vm.provider :virtualbox do |vb|
+        vb.memory = 512
+        vb.customize ["modifyvm", :id, "--ioapic", "on"]      
+        vb.cpus = 1
+      end
+    end
+  end
+end
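The node/IP assignment logic in this Vagrantfile can be sketched in Python (a hedged translation of the Ruby above; VM n+1 gets ip_base + (n+70)):

```python
def plan_nodes(num_nodes, ip_base="192.168.50."):
    """Mirror the Vagrantfile: VM n+1 is 'netvirtsfc{n+1}' at ip_base + str(n+70)."""
    ips = [ip_base + str(n + 70) for n in range(num_nodes)]
    names = ["netvirtsfc%d" % (n + 1) for n in range(num_nodes)]
    return list(zip(names, ips))

# With the demo's env.sh settings (NUM_NODES=6):
for name, ip in plan_nodes(6):
    print(name, ip)  # netvirtsfc1 192.168.50.70 ... netvirtsfc6 192.168.50.75
```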
diff --git a/resources/demo/netvirtsfc-env/bootstrap.sh b/resources/demo/netvirtsfc-env/bootstrap.sh
new file mode 100644 (file)
index 0000000..a11d562
--- /dev/null
@@ -0,0 +1,34 @@
+#!/bin/bash
+
+# vim: sw=4 ts=4 sts=4 et tw=72 :
+
+echo "---> Updating operating system"
+apt update -qq
+
+echo "---> Installing OVSDB Netvirt requirements"
+apt install -y software-properties-common -qq
+apt install -y python-software-properties -qq
+apt install -y python-pip -qq
+apt install -y git-core git -qq
+apt install -y curl -qq
+
+echo "---> Installing wireshark"
+apt install -y xbase-clients -qq
+apt install -y wireshark -qq
+
+# docker
+curl -sSL https://get.docker.com/ | sh
+
+cat <<EOL > /etc/default/docker
+  DOCKER_NETWORK_OPTIONS='--bip=10.250.0.254/24'
+EOL
+
+docker pull alagalah/odlpoc_ovs230
+# OVS
+curl https://raw.githubusercontent.com/pritesh/ovs/nsh-v8/third-party/start-ovs-deb.sh | bash
+
+# this part is just for local spinup DON'T copy it to releng bootstrap.sh
+pip install ipaddr
+echo "export PATH=$PATH:/vagrant" >> /home/vagrant/.profile
+echo "export ODL=$1" >> /home/vagrant/.profile
+usermod -aG docker vagrant
diff --git a/resources/demo/netvirtsfc-env/checkdemo.sh b/resources/demo/netvirtsfc-env/checkdemo.sh
new file mode 100755 (executable)
index 0000000..0e08495
--- /dev/null
@@ -0,0 +1,16 @@
+#!/usr/bin/env bash
+
+set -e
+
+
+echo "Checking demo from $demo with vars:"
+echo "Number of nodes: " $NUM_NODES
+echo "Opendaylight Controller: " $ODL
+echo "Base subnet: " $SUBNET
+
+for i in `seq 1 $NUM_NODES`; do
+  hostname="netvirtsfc"$i
+  echo $hostname "flow count: "
+  vagrant ssh $hostname -c "sudo ovs-ofctl dump-flows sw$i -OOpenFlow13 | wc -l "
+done
+
diff --git a/resources/demo/netvirtsfc-env/cleandemo.sh b/resources/demo/netvirtsfc-env/cleandemo.sh
new file mode 100755 (executable)
index 0000000..3c494a3
--- /dev/null
@@ -0,0 +1,17 @@
+#!/usr/bin/env bash
+
+set -e
+
+for i in `seq 1 $NUM_NODES`; do
+  hostname="netvirtsfc"$i
+  switchname="sw"$i
+  echo $hostname
+  vagrant ssh $hostname -c "sudo ovs-vsctl del-br $switchname; sudo ovs-vsctl del-manager; sudo /vagrant/vmclean.sh"
+
+done
+./rest-clean.py
+
+if [ -f "demo.lock" ] ; then
+  rm demo.lock
+fi
diff --git a/resources/demo/netvirtsfc-env/cleanodl.sh b/resources/demo/netvirtsfc-env/cleanodl.sh
new file mode 100755 (executable)
index 0000000..95a9f0f
--- /dev/null
@@ -0,0 +1,3 @@
+#!/usr/bin/env bash
+
+rm -rf ${ODL_ROOT_DIR}/{data,journal,snapshots}/*
diff --git a/resources/demo/netvirtsfc-env/demo-asymmetric-chain/rest.py b/resources/demo/netvirtsfc-env/demo-asymmetric-chain/rest.py
new file mode 100755 (executable)
index 0000000..c3355e6
--- /dev/null
@@ -0,0 +1,346 @@
+#!/usr/bin/python
+import argparse
+import requests,json
+from requests.auth import HTTPBasicAuth
+from subprocess import call
+import time
+import sys
+import os
+
+
+DEFAULT_PORT='8181'
+
+
+USERNAME='admin'
+PASSWORD='admin'
+
+
+OPER_NODES='/restconf/operational/opendaylight-inventory:nodes/'
+CONF_TENANT='/restconf/config/policy:tenants'
+
+def get(host, port, uri):
+    url='http://'+host+":"+port+uri
+    #print url
+    r = requests.get(url, auth=HTTPBasicAuth(USERNAME, PASSWORD))
+    jsondata=json.loads(r.text)
+    return jsondata
+
+def put(host, port, uri, data, debug=False):
+    '''Perform a PUT rest operation, using the URL and data provided'''
+
+    url='http://'+host+":"+port+uri
+
+    headers = {'Content-type': 'application/yang.data+json',
+               'Accept': 'application/yang.data+json'}
+    if debug == True:
+        print "PUT %s" % url
+        print json.dumps(data, indent=4, sort_keys=True)
+    r = requests.put(url, data=json.dumps(data), headers=headers, auth=HTTPBasicAuth(USERNAME, PASSWORD))
+    if debug == True:
+        print r.text
+    r.raise_for_status()
+
+def post(host, port, uri, data, debug=False):
+    '''Perform a POST rest operation, using the URL and data provided'''
+
+    url='http://'+host+":"+port+uri
+    headers = {'Content-type': 'application/yang.data+json',
+               'Accept': 'application/yang.data+json'}
+    if debug == True:
+        print "POST %s" % url
+        print json.dumps(data, indent=4, sort_keys=True)
+    r = requests.post(url, data=json.dumps(data), headers=headers, auth=HTTPBasicAuth(USERNAME, PASSWORD))
+    if debug == True:
+        print r.text
+    r.raise_for_status()
+
+def get_service_functions_uri():
+    return "/restconf/config/service-function:service-functions"
+
+def get_service_functions_data():
+    return {
+    "service-functions": {
+        "service-function": [
+            {
+                "name": "firewall-72",
+                "ip-mgmt-address": "192.168.50.72",
+                "type": "service-function-type:firewall",
+                "nsh-aware": "true",
+                "sf-data-plane-locator": [
+                    {
+                        "name": "sf1Dpl",
+                        "port": 6633,
+                        "ip": "192.168.50.72",
+                        "transport": "service-locator:vxlan-gpe",
+                        "service-function-forwarder": "SFF1"
+                    }
+                ]
+            },
+            {
+                "name": "dpi-74",
+                "ip-mgmt-address": "192.168.50.74",
+                "type": "service-function-type:dpi",
+                "nsh-aware": "true",
+                "sf-data-plane-locator": [
+                    {
+                        "name": "sf2Dpl",
+                        "port": 6633,
+                        "ip": "192.168.50.74",
+                        "transport": "service-locator:vxlan-gpe",
+                        "service-function-forwarder": "SFF2"
+                    }
+                ]
+            }
+        ]
+    }
+}
+
+def get_service_function_forwarders_uri():
+    return "/restconf/config/service-function-forwarder:service-function-forwarders"
+
+def get_service_function_forwarders_data():
+    return {
+    "service-function-forwarders": {
+        "service-function-forwarder": [
+            {
+                "name": "SFF1",
+                "service-node": "OVSDB2",
+                "service-function-forwarder-ovs:ovs-bridge": {
+                    "bridge-name": "sw2"
+                },
+                "service-function-dictionary": [
+                    {
+                        "name": "firewall-72",
+                        "sff-sf-data-plane-locator": {
+                            "sff-dpl-name": "sfc-tun2",
+                            "sf-dpl-name": "sf1Dpl"
+                        }
+                    }
+                ],
+                "sff-data-plane-locator": [
+                    {
+                        "name": "sfc-tun2",
+                        "data-plane-locator": {
+                            "transport": "service-locator:vxlan-gpe",
+                            "port": 6633,
+                            "ip": "192.168.50.71"
+                        },
+                        "service-function-forwarder-ovs:ovs-options": {
+                            "remote-ip": "flow",
+                            "dst-port": "6633",
+                            "key": "flow",
+                            "nsp": "flow",
+                            "nsi": "flow",
+                            "nshc1": "flow",
+                            "nshc2": "flow",
+                            "nshc3": "flow",
+                            "nshc4": "flow"
+                        }
+                    }
+                ]
+            },
+            {
+                "name": "SFF2",
+                "service-node": "OVSDB2",
+                "service-function-forwarder-ovs:ovs-bridge": {
+                    "bridge-name": "sw4"
+                },
+                "service-function-dictionary": [
+                    {
+                        "name": "dpi-74",
+                        "sff-sf-data-plane-locator": {
+                            "sff-dpl-name": "sfc-tun4",
+                            "sf-dpl-name": "sf2Dpl"
+                        }
+                    }
+                ],
+                "sff-data-plane-locator": [
+                    {
+                        "name": "sfc-tun4",
+                        "data-plane-locator": {
+                            "transport": "service-locator:vxlan-gpe",
+                            "port": 6633,
+                            "ip": "192.168.50.73"
+                        },
+                        "service-function-forwarder-ovs:ovs-options": {
+                            "remote-ip": "flow",
+                            "dst-port": "6633",
+                            "key": "flow",
+                            "nsp": "flow",
+                            "nsi": "flow",
+                            "nshc1": "flow",
+                            "nshc2": "flow",
+                            "nshc3": "flow",
+                            "nshc4": "flow"
+                        }
+                    }
+                ]
+            }
+        ]
+    }
+}
+
+def get_service_function_chains_uri():
+    return "/restconf/config/service-function-chain:service-function-chains/"
+
+def get_service_function_chains_data():
+    return {
+    "service-function-chains": {
+        "service-function-chain": [
+            {
+                "name": "SFCNETVIRT",
+                "symmetric": "false",
+                "sfc-service-function": [
+                    {
+                        "name": "firewall-abstract1",
+                        "type": "service-function-type:firewall"
+                    },
+                    {
+                        "name": "dpi-abstract1",
+                        "type": "service-function-type:dpi"
+                    }
+                ]
+            }
+        ]
+    }
+}
+
+def get_service_function_paths_uri():
+    return "/restconf/config/service-function-path:service-function-paths/"
+
+def get_service_function_paths_data():
+    return {
+    "service-function-paths": {
+        "service-function-path": [
+            {
+                "name": "SFCNETVIRT-Path",
+                "service-chain-name": "SFCNETVIRT",
+                "starting-index": 255,
+                "symmetric": "false"
+
+            }
+        ]
+    }
+}
+
+def get_ietf_acl_uri():
+    return "/restconf/config/ietf-access-control-list:access-lists"
+
+def get_ietf_acl_data():
+    return {
+        "access-lists": {
+            "acl": [
+                {
+                    "acl-name": "http-acl",
+                    "access-list-entries": {
+                        "ace": [
+                            {
+                                "rule-name": "http-rule",
+                                "matches": {
+                                    "protocol": "6",
+                                    "destination-port-range": {
+                                        "lower-port": "80",
+                                        "upper-port": "80"
+                                    },
+                                },
+                                "actions": {
+                                    "netvirt-sfc-acl:sfc-name": "SFCNETVIRT"
+                                }
+                            }
+                        ]
+                    }
+                }
+            ]
+        }
+    }
+
+def get_classifier_uri():
+    return "/restconf/config/netvirt-sfc-classifier:classifiers"
+
+def get_classifier_data():
+    return {
+        "classifiers": {
+            "classifier": [
+                {
+                    "name": "http-classifier",
+                    "acl": "http-acl",
+                    "sffs": {
+                        "sff": [
+                            {
+                                "name": "SFF1"
+                            }
+                        ]
+                    },
+                    "bridges": {
+                        "bridge": [
+                            {
+                                "name": "sw1",
+                                "direction": "ingress"
+                            },
+                            {
+                                "name": "sw6",
+                                "direction": "egress"
+                            }
+                        ]
+                    }
+                }
+            ]
+        }
+    }
+
+def get_netvirt_sfc_uri():
+    return "/restconf/config/netvirt-sfc:sfc/"
+
+def get_netvirt_sfc_data():
+    return {
+        "sfc": {
+            "name": "sfc1"
+        }
+    }
+
+if __name__ == "__main__":
+    # Launch main menu
+
+
+    # Some sensible defaults
+    controller=os.environ.get('ODL')
+    if controller == None:
+        sys.exit("No controller set.")
+    else:
+       print "Contacting controller at %s" % controller
+
+    #tenants=get(controller,DEFAULT_PORT,CONF_TENANT)
+
+    print "sending service functions"
+    put(controller, DEFAULT_PORT, get_service_functions_uri(), get_service_functions_data(), True)
+    print "sending service function forwarders"
+    put(controller, DEFAULT_PORT, get_service_function_forwarders_uri(), get_service_function_forwarders_data(), True)
+
+    print "sf's and sff's created"
+    time.sleep(5)
+    print "sending service function chains"
+    put(controller, DEFAULT_PORT, get_service_function_chains_uri(), get_service_function_chains_data(), True)
+    print "sending service function paths"
+    put(controller, DEFAULT_PORT, get_service_function_paths_uri(), get_service_function_paths_data(), True)
+
+    print "sfc's and sfp's created"
+    time.sleep(5)
+    print "sending netvirt-sfc"
+    put(controller, DEFAULT_PORT, get_netvirt_sfc_uri(), get_netvirt_sfc_data(), True)
+    time.sleep(1)
+    print "sending ietf-acl"
+    put(controller, DEFAULT_PORT, get_ietf_acl_uri(), get_ietf_acl_data(), True)
+    time.sleep(1)
+    print "sending classifier"
+    put(controller, DEFAULT_PORT, get_classifier_uri(), get_classifier_data(), True)
+
+
+    # print "sending tunnel -- SKIPPED"
+    ## put(controller, DEFAULT_PORT, get_tunnel_uri(), get_tunnel_data(), True)
+    # print "sending tenant -- SKIPPED"
+    ## put(controller, DEFAULT_PORT, get_tenant_uri(), get_tenant_data(),True)
+    # print "registering endpoints -- SKIPPED"
+    ## for endpoint in get_endpoint_data():
+    ##    post(controller, DEFAULT_PORT, get_endpoint_uri(),endpoint,True)
+
+
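rest.py above pushes its configuration in dependency order: service functions and forwarders first, then chains and paths, then the netvirt-sfc, ACL, and classifier pieces. A Python 3 sketch of its URL construction and PUT ordering (the controller IP here is illustrative):

```python
def build_url(host, port, uri):
    """Build a RESTCONF URL the way rest.py's get/put/post helpers do."""
    return "http://" + host + ":" + port + uri

# The PUT order rest.py follows (dependencies first):
order = [
    "/restconf/config/service-function:service-functions",
    "/restconf/config/service-function-forwarder:service-function-forwarders",
    "/restconf/config/service-function-chain:service-function-chains/",
    "/restconf/config/service-function-path:service-function-paths/",
    "/restconf/config/netvirt-sfc:sfc/",
    "/restconf/config/ietf-access-control-list:access-lists",
    "/restconf/config/netvirt-sfc-classifier:classifiers",
]
for uri in order:
    print(build_url("192.168.50.1", "8181", uri))
```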
diff --git a/resources/demo/netvirtsfc-env/dpdumpflows.py b/resources/demo/netvirtsfc-env/dpdumpflows.py
new file mode 100755 (executable)
index 0000000..a920344
--- /dev/null
@@ -0,0 +1,17 @@
+#!/usr/bin/python
+
+from subprocess import check_output
+
+
+def call_dpctl():
+       cmd="ovs-dpctl dump-flows"
+       listcmd=cmd.split()
+       return check_output(listcmd)
+
+if __name__ == "__main__" :
+       flows=call_dpctl().split("recirc_id")
+       for flow in flows:
+               print flow
+
+
+
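dpdumpflows.py above simply splits `ovs-dpctl dump-flows` output on the `recirc_id` marker that begins each datapath flow. The same idea in Python 3 (the sample output below is illustrative, not captured from this demo):

```python
def split_flows(dump):
    """Split ovs-dpctl dump-flows output into per-flow chunks, one per 'recirc_id' entry."""
    return [chunk for chunk in dump.split("recirc_id") if chunk.strip()]

sample = ("recirc_id(0),in_port(2),eth_type(0x0800) actions:3\n"
          "recirc_id(0),in_port(3),eth_type(0x0806) actions:2\n")
for flow in split_flows(sample):
    print(flow.strip())
```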
diff --git a/resources/demo/netvirtsfc-env/dumpflows.sh b/resources/demo/netvirtsfc-env/dumpflows.sh
new file mode 100755 (executable)
index 0000000..fa27cc2
--- /dev/null
@@ -0,0 +1,17 @@
+#!/usr/bin/env bash
+
+set -e
+hostnum=${HOSTNAME#"netvirtsfc"}
+sw="sw$hostnum"
+
+TABLE=$1
+
+clear
+ovs-ofctl dump-groups $sw -OOpenFlow13
+if [ "$TABLE" ]
+then
+        ovs-ofctl dump-flows $sw -OOpenFlow13 table=$TABLE
+else
+        ovs-ofctl dump-flows $sw -OOpenFlow13
+fi
+
diff --git a/resources/demo/netvirtsfc-env/env.sh b/resources/demo/netvirtsfc-env/env.sh
new file mode 100755 (executable)
index 0000000..dac82d8
--- /dev/null
@@ -0,0 +1,9 @@
+#!/usr/bin/env bash
+export NUM_NODES=6
+export ODL="192.168.50.1"
+export SUBNET="192.168.50."
+
+#rootdir="/home/shague/git/ovsdb/openstack/net-virt-sfc/karaf/target/assembly"
+rootdir="/Users/ffernand/ODL/projects/ovsdb.git/openstack/net-virt-sfc/karaf/target/assembly"
+
+export ODL_ROOT_DIR=$rootdir
diff --git a/resources/demo/netvirtsfc-env/flowcount.sh b/resources/demo/netvirtsfc-env/flowcount.sh
new file mode 100755 (executable)
index 0000000..1b8ca93
--- /dev/null
@@ -0,0 +1,19 @@
+#!/usr/bin/env bash
+
+hostnum=${HOSTNAME#"netvirtsfc"}
+sw="sw$hostnum"
+set -e
+if [ "$1" ]
+then
+    echo;echo "FLOWS:";ovs-ofctl dump-flows $sw -OOpenFlow13 table=$1 --rsort=priority
+    echo
+    printf "Flow count: "
+    echo $(($(ovs-ofctl dump-flows $sw -OOpenFlow13 table=$1 | wc -l)-1))
+else
+    echo;echo "FLOWS:";ovs-ofctl dump-flows $sw -OOpenFlow13
+    printf "No table entered. $sw flow count: ";
+    echo $(($(ovs-ofctl dump-flows $sw -OOpenFlow13 | wc -l)-1))
+    printf "\nTable0: base:  "; echo $(($(ovs-ofctl dump-flows $sw -OOpenFlow13 table=0| wc -l)-1))
+    printf "\nTable50: sfc:   "; echo $(($(ovs-ofctl dump-flows $sw -OOpenFlow13 table=6| wc -l)-1))
+fi
+
diff --git a/resources/demo/netvirtsfc-env/images/asymmetric-sfc-demo.png b/resources/demo/netvirtsfc-env/images/asymmetric-sfc-demo.png
new file mode 100644 (file)
index 0000000..5ccf548
Binary files /dev/null and b/resources/demo/netvirtsfc-env/images/asymmetric-sfc-demo.png differ
diff --git a/resources/demo/netvirtsfc-env/infrastructure_launch.py b/resources/demo/netvirtsfc-env/infrastructure_launch.py
new file mode 100755 (executable)
index 0000000..81ab714
--- /dev/null
@@ -0,0 +1,167 @@
+#!/usr/bin/python
+
+import socket
+import os
+import re
+import time
+import sys
+import ipaddr
+import commands
+from subprocess import call
+from subprocess import check_output
+from infrastructure_config import *
+
+def addController(sw, ip):
+    call(['ovs-vsctl', 'set-controller', sw, 'tcp:%s:6653' % ip ])
+
+def addManager(ip):
+    cmd="ovs-vsctl set-manager tcp:%s:6640" % ip
+    listcmd=cmd.split()
+    print check_output(listcmd)
+
+def addSwitch(name, dpid=None):
+    call(['ovs-vsctl', 'add-br', name]) #Add bridge
+    if dpid:
+        if len(dpid) < 16: #DPID must be 16 hex digits in later versions of OVS
+            filler='0000000000000000'
+            dpid=filler[:len(filler)-len(dpid)]+dpid
+        elif len(dpid) > 16:
+            print 'DPID: %s is too long' % dpid
+            sys.exit(3)
+        call(['ovs-vsctl','set','bridge', name,'other-config:datapath-id=%s'%dpid])
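The DPID handling in addSwitch() left-pads to 16 hex digits before setting it on the bridge. A standalone Python 3 sketch of that padding logic:

```python
def normalize_dpid(dpid):
    """Left-pad a DPID with zeros to 16 hex digits, as addSwitch() does."""
    if len(dpid) > 16:
        raise ValueError("DPID %s is too long" % dpid)
    filler = "0000000000000000"
    return filler[:len(filler) - len(dpid)] + dpid

print(normalize_dpid("3"))  # 0000000000000003
```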
+
+def addHost(net, switch, name, ip, mac):
+    containerID=launchContainer()
+
+#,OpenFlow12,OpenFlow10
+def setOFVersion(sw, version='OpenFlow13'):
+    call(['ovs-vsctl', 'set', 'bridge', sw, 'protocols={}'.format(version)])
+
+def addTunnel(sw, port, sourceIp=None, remoteIp=None):
+    ifaceName = '{}-vxlan-0'.format(sw)
+    cmd = ['ovs-vsctl', 'add-port', sw, ifaceName,
+           '--', 'set', 'Interface', ifaceName,
+           'type=vxlan',
+           'options:local_ip=%s'%sourceIp,
+           'options:remote_ip=%s'%remoteIp,
+           'options:key=4096',
+           'ofport_request=%s'%port]
+#    if sourceIp is not None:
+#        cmd.append('options:source_ip={}'.format(sourceIp))
+    call(cmd)
+
+def addGpeTunnel(sw, sourceIp=None):
+    ifaceName = '{}-vxlangpe-0'.format(sw)
+    cmd = ['ovs-vsctl', 'add-port', sw, ifaceName,
+           '--', 'set', 'Interface', ifaceName,
+           'type=vxlan',
+           'options:remote_ip=flow',
+           'options:dst_port=6633',
+           'options:nshc1=flow',
+           'options:nshc2=flow',
+           'options:nshc3=flow',
+           'options:nshc4=flow',
+           'options:nsp=flow',
+           'options:nsi=flow',
+           'options:key=flow',
+           'ofport_request=7']
+#    if sourceIp is not None:
+#        cmd.append('options:source_ip={}'.format(sourceIp))
+    call(cmd)
+
+def launchContainer(host,containerImage):
+    containerID= check_output(['docker','run','-d','--net=none','--name=%s'%host['name'],'-h',host['name'],'-t', '-i','--privileged=True',containerImage,'/bin/bash']) #docker run -d --net=none --name={name} -h {name} -t -i {image} /bin/bash
+    #print "created container:", containerID[:-1]
+    return containerID[:-1] #Remove extraneous \n from output of above
+
+def connectContainerToSwitch(sw,host,containerID,of_port):
+    hostIP=host['ip']
+    mac=host['mac']
+    nw = ipaddr.IPv4Network(hostIP)
+    broadcast = "{}".format(nw.broadcast)
+    router = "{}".format(nw.network + 1)
+    cmd=['/vagrant/ovswork.sh',sw,containerID,hostIP,broadcast,router,mac,of_port,host['name']]
+    if host.has_key('vlan'):
+        cmd.append(host['vlan'])
+    call(cmd)
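connectContainerToSwitch() derives the broadcast and gateway addresses from the host's CIDR using the old `ipaddr` library. The equivalent with the modern stdlib `ipaddress` module (a sketch, not the demo's code; the gateway is network address + 1, matching `nw.network + 1` above):

```python
import ipaddress

def derive_addresses(host_ip_cidr):
    """Return (broadcast, gateway) strings for a host address like '10.0.35.2/24'."""
    iface = ipaddress.IPv4Interface(host_ip_cidr)
    broadcast = str(iface.network.broadcast_address)
    router = str(iface.network.network_address + 1)
    return broadcast, router

print(derive_addresses("10.0.35.2/24"))  # ('10.0.35.255', '10.0.35.1')
```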
+
+def doCmd(cmd):
+    listcmd=cmd.split()
+    print check_output(listcmd)
+
+def launch(switches, hosts, contIP='127.0.0.1'):
+
+    for sw in switches:
+        addManager(contIP)
+        ports=0
+        first_host=True
+        for host in hosts:
+            if host['switch'] == sw['name']:
+                if first_host:
+                    dpid=sw['dpid']
+                    addSwitch(sw['name'],sw['dpid'])
+                    setOFVersion(sw['name'])
+                    addController(sw['name'], contIP)
+                    addGpeTunnel(sw['name'])
+                    if host['switch'] == "sw1":
+                        addTunnel(sw['name'], 5, "192.168.50.70", "192.168.50.75")
+                    if host['switch'] == "sw6":
+                        addTunnel(sw['name'], 5, "192.168.50.75", "192.168.50.70")
+                first_host=False
+                containerImage=defaultContainerImage #from Config
+                if host.has_key('container_image'): #from Config
+                    containerImage=host['container_image']
+                containerID=launchContainer(host,containerImage)
+                ports+=1
+                connectContainerToSwitch(sw['name'],host,containerID,str(ports))
+                host['port-name']='vethl-'+host['name']
+                print "Created container: %s with IP: %s. Connect using 'docker attach %s', disconnect with ctrl-p-q." % (host['name'],host['ip'],host['name'])
+
+if __name__ == "__main__" :
+#    print "Cleaning environment..."
+#    doCmd('/vagrant/clean.sh')
+    sw_index=int(socket.gethostname().split("netvirtsfc",1)[1])-1
+    if sw_index in range(0,len(switches)):
+
+       controller=os.environ.get('ODL')
+       sw_type = switches[sw_index]['type']
+       sw_name = switches[sw_index]['name']
+       if sw_type == 'netvirt':
+           print "*****************************"
+           print "Configuring %s as a NETVIRT node." % sw_name
+           print "*****************************"
+           print
+           launch([switches[sw_index]],hosts,controller)
+           print "*****************************"
+           doCmd('sudo /vagrant/utils/overlay-flows.sh')
+           print "*****************************"
+           print "OVS status:"
+           print "-----------"
+           print
+           doCmd('ovs-vsctl show')
+           doCmd('ovs-ofctl -O OpenFlow13 dump-flows %s'%sw_name)
+           print
+           print "Docker containers:"
+           print "------------------"
+           doCmd('docker ps')
+           print "*****************************"
+       elif sw_type == 'sff':
+           print "*****************************"
+           print "Configuring %s as an SFF." % sw_name
+           print "*****************************"
+           doCmd('sudo ovs-vsctl set-manager tcp:%s:6640' % controller)
+           time.sleep(1)
+           dpid=switches[sw_index]['dpid']
+           addSwitch(sw_name,dpid)
+           setOFVersion(sw_name)
+           addController(sw_name, controller)
+           #addGpeTunnel(sw_name)
+           #doCmd('sudo ovs-vsctl set-manager tcp:%s:6640' % controller)
+           print
+       elif sw_type == 'sf':
+           print "*****************************"
+           print "Configuring %s as an SF." % sw_name
+           print "*****************************"
+           doCmd('sudo /vagrant/sf-config.sh')
+           #addGpeTunnel(switches[sw_index]['name'])
+
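The launcher above derives which switch a VM should configure from its hostname (`netvirtsfc1` .. `netvirtsfcN`). A minimal sketch of that mapping, with a hypothetical helper name:

```python
def sw_index_from_hostname(hostname, prefix="netvirtsfc"):
    """Map a VM hostname like 'netvirtsfc3' to a zero-based switch index."""
    suffix = hostname.split(prefix, 1)[1]
    return int(suffix) - 1

print(sw_index_from_hostname("netvirtsfc1"))  # first VM -> index 0
print(sw_index_from_hostname("netvirtsfc6"))  # sixth VM -> index 5
```

The index must stay within `range(len(switches))`, since it is used to select `switches[sw_index]`.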
diff --git a/resources/demo/netvirtsfc-env/ovswork.sh b/resources/demo/netvirtsfc-env/ovswork.sh
new file mode 100755 (executable)
index 0000000..b19835b
--- /dev/null
@@ -0,0 +1,84 @@
+#!/usr/bin/env bash
+set -e
+
+BRIDGE=$1
+GUEST_ID=$2
+IPADDR=$3
+BROADCAST=$4
+GWADDR=$5
+MAC=$6
+OF_PORT=$7
+GUESTNAME=$8
+VLANTAG=$9
+
+[ "$IPADDR" ] || {
+    echo "Syntax:"
+    echo "pipework <hostinterface> <guest> <ipaddr>/<subnet> <broadcast> <gateway> [vlan tag]"
+    exit 1
+}
+
+# Step 1: Find the guest (for now, we only support LXC containers)
+while read dev mnt fstype options dump fsck
+do
+    [ "$fstype" != "cgroup" ] && continue
+    echo $options | grep -qw devices || continue
+    CGROUPMNT=$mnt
+done < /proc/mounts
+
+[ "$CGROUPMNT" ] || {
+    echo "Could not locate cgroup mount point."
+    exit 1
+}
+
+N=$(find "$CGROUPMNT" -name "$GUEST_ID*" | wc -l)
+case "$N" in
+    0)
+       echo "Could not find any container matching $GUEST_ID"
+       exit 1
+       ;;
+    1)
+       true
+       ;;
+    *)
+       echo "Found more than one container matching $GUEST_ID"
+       exit 1
+       ;;
+esac
+
+NSPID=$(head -n 1 $(find "$CGROUPMNT" -name "$GUEST_ID*" | head -n 1)/tasks)
+[ "$NSPID" ] || {
+    echo "Could not find a process inside container $GUEST_ID"
+    exit 1
+}
+
+# Step 2: Prepare the working directory
+mkdir -p /var/run/netns
+rm -f /var/run/netns/$NSPID
+ln -s /proc/$NSPID/ns/net /var/run/netns/$NSPID
+
+# Step 3: Creating virtual interfaces
+LOCAL_IFNAME=vethl-$GUESTNAME #$NSPID
+GUEST_IFNAME=vethg-$GUESTNAME #$NSPID
+ip link add name $LOCAL_IFNAME type veth peer name $GUEST_IFNAME
+ip link set $LOCAL_IFNAME up
+
+# Step 4: Adding the virtual interface to the bridge
+ip link set $GUEST_IFNAME netns $NSPID
+if [ "$VLANTAG" ]
+then
+       ovs-vsctl add-port $BRIDGE $LOCAL_IFNAME tag=$VLANTAG 
+       echo $LOCAL_IFNAME 
+else
+       ovs-vsctl add-port $BRIDGE $LOCAL_IFNAME 
+       echo $LOCAL_IFNAME
+fi
+
+# Step 5: Configure networking within the container
+ip netns exec $NSPID ip link set $GUEST_IFNAME name eth0
+ip netns exec $NSPID ip addr add $IPADDR broadcast $BROADCAST dev eth0
+ip netns exec $NSPID ip link set dev eth0 address $MAC
+ip netns exec $NSPID ip addr add 127.0.0.1/8 dev lo
+ip netns exec $NSPID ip link set eth0 up
+ip netns exec $NSPID ip link set lo up
+ip netns exec $NSPID ip route add default via $GWADDR
+
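ovswork.sh takes nine positional arguments, and getting their order wrong silently mis-wires the container. A sketch of how a caller such as infrastructure_launch.py might assemble the argument vector (helper name and sample values hypothetical):

```python
def ovswork_cmd(bridge, guest_id, ip_cidr, bcast, gw, mac, of_port,
                guest_name, vlan=None):
    """Build the ovswork.sh argument vector in the order the script expects."""
    argv = ["/vagrant/ovswork.sh", bridge, guest_id, ip_cidr, bcast, gw, mac,
            str(of_port), guest_name]
    if vlan is not None:
        argv.append(str(vlan))  # optional trailing VLAN tag
    return argv

cmd = ovswork_cmd("sw1", "abc123", "10.0.35.2/24", "10.0.35.255",
                  "10.0.35.1", "00:00:00:00:35:02", 1, "h35_2")
print(" ".join(cmd))
```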
diff --git a/resources/demo/netvirtsfc-env/pollflows.sh b/resources/demo/netvirtsfc-env/pollflows.sh
new file mode 100755 (executable)
index 0000000..cb5569d
--- /dev/null
@@ -0,0 +1,4 @@
+#!/usr/bin/env bash
+TABLE=$1
+watch -n 1 -d "sudo /vagrant/flowcount.sh $TABLE"
+
diff --git a/resources/demo/netvirtsfc-env/resetcontroller.sh b/resources/demo/netvirtsfc-env/resetcontroller.sh
new file mode 100755 (executable)
index 0000000..07ff657
--- /dev/null
@@ -0,0 +1,11 @@
+#!/usr/bin/env bash
+
+hostnum=${HOSTNAME#"netvirtsfc"}
+sw="sw$hostnum"
+echo "Deleting controller for $sw"
+if ! ovs-vsctl del-controller $sw; then
+    exit 1
+fi
+echo "Sleeping for 6sec..."
+sleep 6
+echo "Setting controller to $ODL"
+ovs-vsctl set-controller $sw tcp:$ODL:6653
diff --git a/resources/demo/netvirtsfc-env/rest-clean.py b/resources/demo/netvirtsfc-env/rest-clean.py
new file mode 100755 (executable)
index 0000000..a70544a
--- /dev/null
@@ -0,0 +1,141 @@
+#!/usr/bin/python
+import argparse
+import requests,json
+from requests.auth import HTTPBasicAuth
+from subprocess import call
+import time
+import sys
+import os
+
+
+DEFAULT_PORT='8181'
+
+
+USERNAME='admin'
+PASSWORD='admin'
+
+
+OPER_NODES='/restconf/operational/opendaylight-inventory:nodes/'
+CONF_TENANT='/restconf/config/policy:tenants'
+
+def get(host, port, uri):
+    url='http://'+host+":"+port+uri
+    #print url
+    r = requests.get(url, auth=HTTPBasicAuth(USERNAME, PASSWORD))
+    jsondata=json.loads(r.text)
+    return jsondata
+
+def rest_delete(host, port, uri, debug=False):
+    '''Perform a DELETE rest operation, using the URL and data provided'''
+    url='http://'+host+":"+port+uri
+    headers = {'Content-type': 'application/yang.data+json',
+               'Accept': 'application/yang.data+json'}
+    if debug:
+        print "DELETE %s" % url
+    try:
+        r = requests.delete(url, headers=headers, auth=HTTPBasicAuth(USERNAME, PASSWORD))
+    except (requests.exceptions.ConnectionError, requests.exceptions.HTTPError) as e:
+        print "oops: ", e
+        return
+    if debug:
+        print r.text
+    try:
+        r.raise_for_status()
+    except requests.exceptions.HTTPError as e:
+        print "oops: ", e
+
+
+def post(host, port, uri, data, debug=False):
+    '''Perform a POST rest operation, using the URL and data provided'''
+    url='http://'+host+":"+port+uri
+    headers = {'Content-type': 'application/yang.data+json',
+               'Accept': 'application/yang.data+json'}
+    if debug:
+        print "POST %s" % url
+        print json.dumps(data, indent=4, sort_keys=True)
+    r = requests.post(url, data=json.dumps(data), headers=headers, auth=HTTPBasicAuth(USERNAME, PASSWORD))
+    if debug:
+        print r.text
+    r.raise_for_status()
+
+def get_service_functions_uri():
+    return "/restconf/config/service-function:service-functions"
+
+def get_service_function_forwarders_uri():
+    return "/restconf/config/service-function-forwarder:service-function-forwarders"
+
+def get_service_function_chains_uri():
+    return "/restconf/config/service-function-chain:service-function-chains/"
+
+def get_service_function_paths_uri():
+    return "/restconf/config/service-function-path:service-function-paths/"
+
+def get_tenant_uri():
+    return "/restconf/config/policy:tenants/policy:tenant/f5c7d344-d1c7-4208-8531-2c2693657e12"
+
+def get_tunnel_uri():
+    return "/restconf/config/opendaylight-inventory:nodes"
+
+def get_endpoint_uri():
+    return "/restconf/operations/endpoint:unregister-endpoint"
+
+def get_ietf_acl_uri():
+    return "/restconf/config/ietf-access-control-list:access-lists"
+
+def get_classifier_uri():
+    return "/restconf/config/netvirt-sfc-classifier:classifiers"
+
+def get_netvirt_sfc_uri():
+    return "/restconf/config/netvirt-sfc:sfc/"
+
+if __name__ == "__main__":
+    # Launch main menu
+
+
+    # Some sensible defaults
+    controller=os.environ.get('ODL')
+    if controller is None:
+        sys.exit("No controller set.")
+    else:
+        print "Contacting controller at %s" % controller
+
+    #resp=get(controller,DEFAULT_PORT,'/restconf/operational/endpoint:endpoints')
+    #l2_eps=resp['endpoints']['endpoint']
+    #l3_eps=resp['endpoints']['endpoint-l3']
+
+    print "deleting service function paths"
+    rest_delete(controller, DEFAULT_PORT, get_service_function_paths_uri(), True)
+
+    print "deleting service function chains"
+    rest_delete(controller, DEFAULT_PORT, get_service_function_chains_uri(), True)
+
+    print "deleting service functions"
+    rest_delete(controller, DEFAULT_PORT, get_service_functions_uri(), True)
+
+    print "deleting service function forwarders"
+    rest_delete(controller, DEFAULT_PORT, get_service_function_forwarders_uri(), True)
+
+    #print "deleting tunnel"
+    #rest_delete(controller, DEFAULT_PORT, get_tunnel_uri(), True)
+
+    #print "deleting tenant"
+    #rest_delete(controller, DEFAULT_PORT, get_tenant_uri(), True)
+
+    #print "unregistering L2 endpoints"
+    #for endpoint in l2_eps:
+    #data={ "input": { "l2": [ { "l2-context": endpoint['l2-context'] ,"mac-address": endpoint['mac-address'] } ] } }
+    #    post(controller, DEFAULT_PORT, get_endpoint_uri(),data,True)
+
+    #print "unregistering L3 endpoints"
+    #for endpoint in l3_eps:
+    #data={ "input": { "l3": [ { "l3-context": endpoint['l3-context'] ,"ip-address": endpoint['ip-address'] } ] } }
+    #    post(controller, DEFAULT_PORT, get_endpoint_uri(),data,True)
+
+    print "deleting acl"
+    rest_delete(controller, DEFAULT_PORT, get_ietf_acl_uri(), True)
+
+    print "deleting classifier"
+    rest_delete(controller, DEFAULT_PORT, get_classifier_uri(), True)
+
+    print "deleting netvirt sfc"
+    rest_delete(controller, DEFAULT_PORT, get_netvirt_sfc_uri(), True)
diff --git a/resources/demo/netvirtsfc-env/setsfc.sh b/resources/demo/netvirtsfc-env/setsfc.sh
new file mode 100755 (executable)
index 0000000..b9f7e7b
--- /dev/null
@@ -0,0 +1,27 @@
+#!/usr/bin/env bash
+
+ovsdbversion="1.2.1-SNAPSHOT"
+
+# Attempt to keep l2switch from monkeying with the flows
+#sed -i 's/<is-proactive-flood-mode>true<\/is-proactive-flood-mode>/<is-proactive-flood-mode>false<\/is-proactive-flood-mode>/' ${ODL_ROOT_DIR}/system/org/opendaylight/l2switch/arphandler/arphandler-config/$l2switchversion/arphandler-config-$l2switchversion-config.xml
+#sed -i 's/<is-install-lldp-flow>true<\/is-install-lldp-flow>/<is-install-lldp-flow>false<\/is-install-lldp-flow>/' ${ODL_ROOT_DIR}/system/org/opendaylight/l2switch/loopremover/loopremover-config/$l2switchversion/loopremover-config-$l2switchversion-config.xml
+
+# enable NetvirtSfc for standalone mode
+sed -i -e 's/<of13provider>[a-z]\{1,\}<\/of13provider>/<of13provider>standalone<\/of13provider>/g' ${ODL_ROOT_DIR}/system/org/opendaylight/ovsdb/openstack.net-virt-sfc-impl/$ovsdbversion/openstack.net-virt-sfc-impl-$ovsdbversion-config.xml
+
+# Automatically install the feature odl-ovsdb-sfc-ui upon ODL start
+ODL_NETVIRT_SFC_KARAF_FEATURE='odl-ovsdb-sfc-ui'
+ODLFEATUREMATCH=$(cat ${ODL_ROOT_DIR}/etc/org.apache.karaf.features.cfg | \
+                            grep -e "featuresBoot=" -e "featuresBoot =" | grep $ODL_NETVIRT_SFC_KARAF_FEATURE)
+if [ "$ODLFEATUREMATCH" == "" ]; then
+   sed -i -e "/^featuresBoot[ ]*=/ s/$/,$ODL_NETVIRT_SFC_KARAF_FEATURE/" \
+       ${ODL_ROOT_DIR}/etc/org.apache.karaf.features.cfg
+fi
+
+# Set the logging levels for troubleshooting
+logcfg=${ODL_ROOT_DIR}/etc/org.ops4j.pax.logging.cfg
+echo "log4j.logger.org.opendaylight.ovsdb.openstack.netvirt.sfc = TRACE" >> $logcfg
+#echo "log4j.logger.org.opendaylight.ovsdb.lib = INFO" >> $logcfg
+echo "log4j.logger.org.opendaylight.sfc = TRACE" >> $logcfg
+echo "log4j.logger.org.opendaylight.openflowplugin.applications.statistics.manager.impl.StatListenCommitFlow = ERROR" >> $logcfg
+
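The featuresBoot edit above is idempotent: odl-ovsdb-sfc-ui is appended only when it is not already listed. The same guard-then-append logic can be sketched in Python (file I/O omitted; function name hypothetical):

```python
import re

def add_boot_feature(cfg_text, feature):
    """Append feature to the featuresBoot line unless it is already present."""
    if feature in cfg_text:
        return cfg_text  # already configured; leave the file untouched
    return re.sub(r"^(featuresBoot[ ]*=.*)$",
                  r"\1,%s" % feature, cfg_text, flags=re.M)

cfg = "featuresBoot=config,standard\n"
print(add_boot_feature(cfg, "odl-ovsdb-sfc-ui"))
```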
diff --git a/resources/demo/netvirtsfc-env/startdemo.sh b/resources/demo/netvirtsfc-env/startdemo.sh
new file mode 100755 (executable)
index 0000000..c0251af
--- /dev/null
@@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+
+set -e
+
+demo=${1%/}
+
+echo $demo
+
+if [ -f "demo.lock" ]; then
+    echo "There is already a demo running:"
+    cat demo.lock
+    exit
+fi
+
+cp "$demo/infrastructure_config.py" .
+cp "$demo/sf-config.sh" .
+
+echo "Starting demo from $demo with vars:"
+echo "Number of nodes: " $NUM_NODES
+echo "Opendaylight Controller: " $ODL
+echo "Base subnet: " $SUBNET
+
+for i in `seq 1 $NUM_NODES`; do
+#for i in 1 6; do
+  hostname="netvirtsfc"$i
+  echo $hostname
+  vagrant ssh $hostname -c "sudo -E /vagrant/infrastructure_launch.py"
+done
+
+# Looks like SFC is not including l2switch anymore so this is not needed. But just in case...
+#sleep 5
+#echo "Clean l2switch flows"
+#for i in 1 2 4 6; do
+#  hostname="netvirtsfc"$i
+#  sw="sw"$i
+#  echo $hostname
+#  vagrant ssh $hostname -c "sudo ovs-ofctl -O OpenFlow13 --strict del-flows br-int priority=1,arp"
+#  vagrant ssh $hostname -c "sudo ovs-ofctl -O OpenFlow13 --strict del-flows $sw priority=1,arp"
+#done
+
+echo "Configuring controller..."
+./$demo/rest.py
+
+sleep 5
+for i in 1 6; do
+  hostname="netvirtsfc"$i
+  sw="sw"$i
+  echo $hostname
+  vagrant ssh $hostname -c "sudo ovs-vsctl show; sudo ovs-ofctl -O OpenFlow13 dump-flows $sw"
+done
+
+echo "$demo" > demo.lock
+
diff --git a/resources/demo/netvirtsfc-env/traceflow.sh b/resources/demo/netvirtsfc-env/traceflow.sh
new file mode 100755 (executable)
index 0000000..ca2426e
--- /dev/null
@@ -0,0 +1 @@
+#!/usr/bin/env bash
+ovs-appctl ofproto/trace $1
diff --git a/resources/demo/netvirtsfc-env/utils/hosts b/resources/demo/netvirtsfc-env/utils/hosts
new file mode 100644 (file)
index 0000000..1ed1fb4
--- /dev/null
@@ -0,0 +1,12 @@
+127.0.0.1      localhost
+192.168.50.70  netvirtsfc1
+192.168.50.71  netvirtsfc2
+192.168.50.72  netvirtsfc3
+192.168.50.73  netvirtsfc4
+192.168.50.74  netvirtsfc5
+192.168.50.75  netvirtsfc6
+
+# The following lines are desirable for IPv6 capable hosts
+::1     localhost ip6-localhost ip6-loopback
+ff02::1 ip6-allnodes
+ff02::2 ip6-allrouters
diff --git a/resources/demo/netvirtsfc-env/utils/overlay-flows.sh b/resources/demo/netvirtsfc-env/utils/overlay-flows.sh
new file mode 100755 (executable)
index 0000000..ed9050a
--- /dev/null
@@ -0,0 +1,34 @@
+#!/usr/bin/env bash
+# Add flows for the normal overlay that Netvirt would have added
+# sw1: h35_2, dl_src=00:00:00:00:35:02
+# sw6: h35_4, dl_src=00:00:00:00:35:04
+
+set -e
+hostnum=${HOSTNAME#"netvirtsfc"}
+sw="sw$hostnum"
+
+if [ "$hostnum" -eq "1" ]; then
+    # ARP responder for h35_4
+    sudo ovs-ofctl -O OpenFlow13 add-flow $sw "table=0,arp,arp_tpa=10.0.35.4,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:00:00:00:00:35:04->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0x000000003504->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0x0a002304->NXM_OF_ARP_SPA[],IN_PORT"
+
+    #port=$(ip -o link | grep veth | awk '{print$2}' | sed 's/://')
+    # l2 forward of local traffic to the normal vxlan
+    sudo ovs-ofctl -O OpenFlow13 add-flow $sw "table=0,priority=150,in_port=1,dl_src=00:00:00:00:35:02,dl_dst=00:00:00:00:35:04,actions=output:5"
+
+    # l2 forward of incoming vxlan traffic to the local port
+    sudo ovs-ofctl -O OpenFlow13 add-flow $sw "table=0,priority=150,in_port=5,dl_src=00:00:00:00:35:04,dl_dst=00:00:00:00:35:02,actions=output:1"
+
+elif [ "$hostnum" -eq "6" ]; then
+    # ARP responder for h35_2
+    sudo ovs-ofctl -O OpenFlow13 add-flow $sw "table=0,arp,arp_tpa=10.0.35.2,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:00:00:00:00:35:02->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0x000000003502->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0x0a002302->NXM_OF_ARP_SPA[],IN_PORT"
+
+    # l2 forward of local traffic to the normal vxlan
+    sudo ovs-ofctl -O OpenFlow13 add-flow $sw "table=0,priority=150,in_port=1,dl_src=00:00:00:00:35:04,dl_dst=00:00:00:00:35:02,actions=output:5"
+
+    # l2 forward of incoming vxlan traffic to the local port
+    sudo ovs-ofctl -O OpenFlow13 add-flow $sw "table=0,priority=150,in_port=5,dl_src=00:00:00:00:35:02,dl_dst=00:00:00:00:35:04,actions=output:1"
+
+else
+    echo "Invalid SF for this demo";
+    exit
+fi
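The `load:0x0a002304->NXM_OF_ARP_SPA[]` values in the ARP-responder flows above are just the dotted-quad IPs packed as 32-bit hex (10.0.35.4 -> 0x0a002304). A quick check, with a hypothetical helper name:

```python
def ip_to_nxm_hex(ip):
    """Pack a dotted-quad IPv4 address into the 0x... literal used by load:."""
    octets = [int(o) for o in ip.split(".")]
    value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    return "0x%08x" % value

print(ip_to_nxm_hex("10.0.35.4"))  # -> 0x0a002304
print(ip_to_nxm_hex("10.0.35.2"))  # -> 0x0a002302
```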
diff --git a/resources/demo/netvirtsfc-env/utils/setuphosts.sh b/resources/demo/netvirtsfc-env/utils/setuphosts.sh
new file mode 100755 (executable)
index 0000000..eee7ef1
--- /dev/null
@@ -0,0 +1,11 @@
+#!/usr/bin/env bash
+
+set -e
+
+for i in `seq 1 $NUM_NODES`; do
+  hostname="netvirtsfc"$i
+  echo $hostname
+  vagrant ssh $hostname -c "sudo cp /vagrant/utils/hosts /etc/hosts"
+done
+
diff --git a/resources/demo/netvirtsfc-env/vmclean.sh b/resources/demo/netvirtsfc-env/vmclean.sh
new file mode 100755 (executable)
index 0000000..3974b38
--- /dev/null
@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+
+docker stop -t 1 $(docker ps -a -q) > /dev/null 2>&1
+docker rm $(docker ps -a -q) > /dev/null 2>&1
+
+/etc/init.d/openvswitch-switch stop > /dev/null
+rm /etc/openvswitch/conf.db > /dev/null
+/etc/init.d/openvswitch-switch start > /dev/null
+
+
+ovs-vsctl show
+