- commons
+-- parent : Contains the parent pom.xml for all the ovsdb modules.
-- distribution : Builds a working controller distribution based on the controller + ovsdb modules and other
- dependant modules such as openflowplugin
- +-- opendaylight : older, OSGi-based distribution
- +-- opendaylight-karaf : karaf-based distribution
-
- features : This folder contains all the Karaf related files.
+- hwvtepsouthbound : Contains the hw_vtep southbound plugin.
+
+- karaf : Builds a working controller distribution based on the controller + ovsdb modules and other
+          dependent modules such as openflowplugin
+
- library : Contains a schema-independent library that is a reference implementation for RFC 7047.
This module doesn't depend on any of the Opendaylight components.
This library module can also be used independently in a non-OSGi environment.
for Network Virtualization.
+-- net-virt-providers : Mostly contains data-path programming functionality via OpenFlow or potentially
other protocols.
+ +-- net-virt-sfc : SFC implementation using the OVSDB project.
-- ovs-sfc : SFC implementation using the OVSDB project. Currently it is just a shell.
-
-- plugin : Contains Opendaylight Southbound Plugin APIs and provides a simpler API interface on top of library layer.
- Ideally, this module should also be schema independent. But due to legacy reasons this layer contains some
- deprecated functionality that assumes openvswitch schema.
-
-- plugin-mdsal-adapter : Adds an MD-SAL Adapter for the OVSDB Plugin. The adapter updates the MD-SAL with nodes
- as they are added and removed from the inventory. The Yang model provides a reference
- between OVSDB nodes and the OpenFlow nodes (bridges) that they manage.
-
-- plugin-shell : Contains a Karaf shell framework for OVSDB plugin and printCache command-line.
+- ovsdb-ui : Contains the DLUX implementation for displaying network virtualization
- resources : Contains some useful resources such as scripts, testing utilities and tools used for deployment
or testing the binaries generated from the OVSDB project.
+-- openvswitch : Schema wrapper that represents http://openvswitch.org/ovs-vswitchd.conf.db.5.pdf
+-- hardwarevtep: Schema wrapper that represents http://openvswitch.org/docs/vtep.5.pdf
+- southbound : Contains the plugin for converting from the OVSDB protocol to the MD-SAL and vice versa.
+
- utils : MD-SAL OpenFlow and OVSDB common utilities.
HOW TO BUILD & RUN
Prerequisites: JDK 1.7+, Maven 3+
1. Building a Karaf feature and deploying it in an OpenDaylight Karaf distribution:
- 1. This is a new method for Opendaylight distribution wherein there is no defined editions such
- as Base, Virtualization or SP editions. The end-customer can choose to deploy the required feature
- based on his/her deployment needs.
-
- 2. From the root ovsdb/ directory, execute "mvn clean install"
+ 1. From the root ovsdb/ directory, execute "mvn clean install"
- 3. Next unzip the distribution-karaf-<VERSION_NUMBER>-SNAPSHOT.zip file created from step #2 in
- the directory ovsdb/distribution/opendaylight-karaf/target like so:
- "unzip distribution-karaf-<VERSION_NUMBER>-SNAPSHOT.zip"
+ 2. Unzip the karaf-<VERSION_NUMBER>-SNAPSHOT.zip file created from step 1 in the directory ovsdb/karaf/target/:
+ "unzip karaf-<VERSION_NUMBER>-SNAPSHOT.zip"
- 4. Once karaf has started and you see the Opendaylight ascii art in the console, the last step
+ 3. Once Karaf has started and you see the OpenDaylight ASCII art in the console, the last step
     is to start the OVSDB plugin framework with the following command in the Karaf console:
     "feature:install odl-ovsdb-openstack" (without quotation marks).
Sample output from the Karaf console:
- opendaylight-user@root>feature:list | grep -i ovsdb
- odl-ovsdb-all | 1.0.0-SNAPSHOT | | ovsdb-1.0.0-SNAPSHOT | OpenDaylight :: OVSDB :: all
- odl-ovsdb-library | 1.0.0-SNAPSHOT | x | ovsdb-1.0.0-SNAPSHOT | OVSDB :: Library
- odl-ovsdb-schema-openvswitch | 1.0.0-SNAPSHOT | x | ovsdb-1.0.0-SNAPSHOT | OVSDB :: Schema :: Open_vSwitch
- odl-ovsdb-schema-hardwarevtep | 1.0.0-SNAPSHOT | x | ovsdb-1.0.0-SNAPSHOT | OVSDB :: Schema :: hardware_vtep
- odl-ovsdb-openstack | 1.0.0-SNAPSHOT | x | ovsdb-1.0.0-SNAPSHOT | OpenDaylight :: OVSDB :: OpenStack Network Virtual
- odl-ovsdb-ovssfc | 0.0.1-SNAPSHOT | | ovsdb-0.0.1-SNAPSHOT | OpenDaylight :: OVSDB :: OVS Service Function Chai
-
+ opendaylight-user@root>feature:list -i | grep ovsdb
+ odl-ovsdb-southbound-api | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: api
+ odl-ovsdb-southbound-impl | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: impl
+ odl-ovsdb-southbound-impl-rest | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: impl :: REST
+ odl-ovsdb-southbound-impl-ui | 1.2.1-SNAPSHOT | x | odl-ovsdb-southbound-1.2.1-SNAPSHOT | OpenDaylight :: southbound :: impl :: UI
+ odl-ovsdb-library | 1.2.1-SNAPSHOT | x | odl-ovsdb-library-1.2.1-SNAPSHOT | OpenDaylight :: library
+ odl-ovsdb-openstack | 1.2.1-SNAPSHOT | x | ovsdb-1.2.1-SNAPSHOT | OpenDaylight :: OVSDB :: OpenStack Network Virtual
2. Building a bundle and deploying it in an OpenDaylight Karaf distribution:
This method can be used to update and test new code in a bundle. If the bundle of interest is rebuilt as a
4. Karaf will see the changed bundle and reload it.
-
-3. Building an OVSDB based Opendaylight Virtualization edition:
- 1. This is the legacy way to build and distribute Opendaylight archives. This method was
- followed in Hydrogen. It might still work in Helium but it is best effort for support.
- The preferred method for Helium and later is to use karaf.
-
- 2. From the root folder(that hosts this README), execute "mvn clean install"
- That should build a full distribution archive and distribution directory that will contain
- Opendaylight Controller + OVSDB bundles + Openflow Plugins under
- distribution/opendaylight/target/distribution.ovsdb-X.X.X-osgipackage
-
- 3. Upon successful completion of a build, the Controller with OVSDB can be executed by :
- cd distribution/opendaylight/target/distribution.ovsdb-X.X.X-osgipackage/opendaylight/
- ./run.sh -virt ovsdb
-
-4. Building a Karaf Feature and deploying it in an Opendaylight Karaf distribution :
-*** This method is deprecated.
- 1. This is a new method for Opendaylight distribution wherein there is no defined editions such
- as Base, Virtualization or SP editions. Rather each of the projects will generate features in
- form of .kar files. The end-customer can choose to deploy the required feature based on his/her
- deployment needs.
-
- 2. From the features/ directory, execute "mvn clean install"
- This will generate a kar file such as "features/target/ovsdb-features-1.2.1-SNAPSHOT.kar"
-
- 3. Download (or build from controller project) the Karaf distribution :
- http://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/controller/distribution.opendaylight-karaf/
- Sample zip file :
- http://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/controller/distribution.opendaylight-karaf/1.4.2-SNAPSHOT/distribution.opendaylight-karaf-1.4.2-20140718.075612-407.zip
-
- 4. unzip the downloaded (or built) distribution and copy the ovsdb-features-x.x.x.kar file (from step 2) into
- the unzipped distribution.opendaylight-karaf-X.X.X/deploy/ directory.
-
- 5. run Karaf from within the distribution.opendaylight-karaf-X.X.X/ directory using "bin/karaf"
-
- Sample output from Karaf console :
-
- opendaylight-user@root>kar:list
- KAR Name
- -----------------------------
- ovsdb-features-1.2.1-SNAPSHOT
-
- opendaylight-user@root>feature:list | grep ovsdb
- odl-ovsdb-all | 1.2.1-SNAPSHOT | x | ovsdb-1.2.1-SNAPSHOT | OpenDaylight :: OVSDB :: all
- odl-ovsdb-library | 1.0.0-SNAPSHOT | x | ovsdb-1.2.1-SNAPSHOT | OVSDB :: Library
- odl-ovsdb-schema-openvswitch | 1.0.0-SNAPSHOT | x | ovsdb-1.2.1-SNAPSHOT | OVSDB :: Schema :: Open_vSwitch
- odl-ovsdb-schema-hardwarevtep | 1.0.0-SNAPSHOT | x | ovsdb-1.2.1-SNAPSHOT | OVSDB :: Schema :: hardware_vtep
- odl-ovsdb-plugin | 1.0.0-SNAPSHOT | x | ovsdb-1.2.1-SNAPSHOT | OpenDaylight :: OVSDB :: Plugin
-
- opendaylight-user@root>bundle:list | grep OVSDB
- 186 | Active | 80 | 1.0.0.SNAPSHOT | OVSDB Library
- 199 | Active | 80 | 1.0.0.SNAPSHOT | OVSDB Open_vSwitch Schema
- 200 | Active | 80 | 1.0.0.SNAPSHOT | OVSDB hardware_vtep Schema
- 201 | Active | 80 | 1.0.0.SNAPSHOT | OpenDaylight OVSDB Plugin
-
Running The Integration Tests
=============================
<repository>mvn:org.opendaylight.netconf/features-restconf/{{VERSION}}/xml/features</repository>
<repository>mvn:org.opendaylight.ovsdb/hwvtepsouthbound-features/{{VERSION}}/xml/features</repository>
- <feature name="odl-ovsdb-all" description="OpenDaylight :: OVSDB :: all"
- version='${project.version}'>
- <feature version="${project.version}">odl-ovsdb-library</feature>
- </feature>
-
<feature name="odl-ovsdb-schema-openvswitch" description="OVSDB :: Schema :: Open_vSwitch"
version='${project.version}'>
<feature version="${project.version}">odl-ovsdb-library</feature>
transactInvokers.put(dbSchema, new TransactInvokerImpl(this,dbSchema));
}
} catch (InterruptedException | ExecutionException e) {
- LOG.warn("Exception attempting to createTransactionInvokers {}: {}",connectionInfo,e);
+ LOG.warn("Exception attempting to createTransactionInvokers {}", connectionInfo, e);
}
}
}
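The recurring logging fix in this patch relies on SLF4J's trailing-throwable convention: when the last argument is a Throwable and there are more arguments than `{}` placeholders, the logger records the full stack trace; a Throwable consumed by a `{}` is merely stringified into the message. A minimal self-contained sketch of that rule (an approximation for illustration, not the real org.slf4j.helpers.MessageFormatter logic):

```java
// Approximation of SLF4J's rule for deciding whether the last argument
// is logged as a Throwable (stack trace) or formatted into a "{}".
public class LogPatternDemo {

    /** Count "{}" placeholders in an SLF4J-style message pattern. */
    public static int placeholders(String pattern) {
        int count = 0;
        int i = pattern.indexOf("{}");
        while (i >= 0) {
            count++;
            i = pattern.indexOf("{}", i + 2);
        }
        return count;
    }

    /** True when the last argument would be treated as the Throwable. */
    public static boolean logsStackTrace(String pattern, Object... args) {
        return args.length > 0
                && args[args.length - 1] instanceof Throwable
                && args.length > placeholders(pattern);
    }

    public static void main(String[] args) {
        Exception e = new IllegalStateException("boom");
        // Old form: the second "{}" consumes e, so no stack trace is printed.
        System.out.println(logsStackTrace("createTransactionInvokers {}: {}", "conn", e)); // false
        // Fixed form: e is left over and logged with its full stack trace.
        System.out.println(logsStackTrace("createTransactionInvokers {}", "conn", e));     // true
    }
}
```

This is why the old `LOG.warn("... {}: {}", connectionInfo, e)` swallowed the stack trace, while the corrected `LOG.warn("... {}", connectionInfo, e)` preserves it.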
client.getConnectionInfo().getRemotePort(),
client.getConnectionInfo().getLocalAddress(),
client.getConnectionInfo().getLocalPort());
- HwvtepConnectionInstance hwClient = connectedButCallBacksNotRegistered(client);
- registerEntityForOwnership(hwClient);
+ if (client.getSchema(HwvtepSchemaConstants.HARDWARE_VTEP) != null) {
+ HwvtepConnectionInstance hwClient = connectedButCallBacksNotRegistered(client);
+ registerEntityForOwnership(hwClient);
+ }
}
@Override
}
}
} catch (CandidateAlreadyRegisteredException e) {
- LOG.warn("OVSDB entity {} was already registered for {} ownership", candidateEntity, e);
+ LOG.warn("OVSDB entity {} was already registered for ownership", candidateEntity, e);
}
}
LOG.trace("Registering on path: {}", treeId);
registration = db.registerDataTreeChangeListener(treeId, HwvtepDataChangeListener.this);
} catch (final Exception e) {
- LOG.warn("HwvtepDataChangeListener registration failed");
+ LOG.warn("HwvtepDataChangeListener registration failed", e);
//TODO: Should we throw an exception here?
}
}
}
} catch (CandidateAlreadyRegisteredException e) {
LOG.warn("HWVTEP Southbound Provider instance entity {} was already "
- + "registered for {} ownership", instanceEntity, e);
+ + "registered for ownership", instanceEntity, e);
}
}
transaction.cancel();
}
} catch (Exception e) {
- LOG.error("Error initializing hwvtep topology {}",e);
+ LOG.error("Error initializing hwvtep topology", e);
}
}
}
}
} catch (Exception e) {
- LOG.warn("Failure to delete ovsdbNode {}", e);
+ LOG.warn("Failure to delete ovsdbNode", e);
}
}
try {
super.setup();
} catch (Exception e) {
- e.printStackTrace();
+ LOG.warn("Failed to setup test", e);
+ fail("Failed to setup test: " + e);
}
//dataBroker = getSession().getSALService(DataBroker.class);
Thread.sleep(3000);
try {
portNumber = Integer.parseInt(portStr);
} catch (NumberFormatException e) {
- fail("Invalid port number " + portStr + System.lineSeparator() + usage());
+ fail("Invalid port number " + portStr + System.lineSeparator() + usage() + e);
}
connectionType = bundleContext.getProperty(CONNECTION_TYPE);
try {
result = monitor.get();
} catch (InterruptedException | ExecutionException e) {
+ LOG.warn("Failed to monitor {}", dbSchema, e);
return null;
}
return transformingCallback(result, dbSchema);
try {
result = monitor.get();
} catch (InterruptedException | ExecutionException e) {
+ LOG.warn("Failed to monitor {}", dbSchema, e);
return null;
}
return transformingCallback(result, dbSchema);
try {
result = cancelMonitor.get();
} catch (InterruptedException | ExecutionException e) {
- LOG.error("Exception when canceling monitor handler {}", handler.getId());
+ LOG.error("Exception when canceling monitor handler {}", handler.getId(), e);
}
if (result == null) {
sfuture.set(schema);
}
} catch (Exception e) {
+ LOG.warn("Failed to populate schema {}:{}", dbNames, schema, e);
sfuture.setException(e);
}
return null;
method.setAccessible(true);
method.invoke(TyperUtils.class, schema, from, to);
} catch (NoSuchMethodException e) {
- LOG.error("Can't find TyperUtils::checkVersion(), TyperUtilsTest::callCheckVersion() may be obsolete");
+ LOG.error("Can't find TyperUtils::checkVersion(), TyperUtilsTest::callCheckVersion() may be obsolete", e);
} catch (IllegalAccessException e) {
LOG.error("Error invoking TyperUtils::checkVersion(), please check TyperUtilsTest::callCheckVersion()", e);
} catch (InvocationTargetException e) {
try {
super.setup();
} catch (Exception e) {
- e.printStackTrace();
+ LOG.warn("Failed to setup test", e);
+ fail("Failed to setup test: " + e);
}
getProperties();
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
- e.printStackTrace();
+ LOG.warn("Interrupted while waiting for provider context", e);
}
}
}
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
- e.printStackTrace();
+ LOG.warn("Interrupted while waiting for other provider", e);
}
return providerContext;
}
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
- e.printStackTrace();
+ LOG.warn("Interrupted while waiting for {}", NETVIRT_TOPOLOGY_ID, e);
}
}
}
commitFuture.checkedGet(); // TODO: Make it async (See bug 1362)
LOG.debug("Transaction success for write of Flow {}", flowBuilder.getFlowName());
} catch (Exception e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to write flow {}", flowBuilder.getFlowName(), e);
modification.cancel();
}
}
commitFuture.get(); // TODO: Make it async (See bug 1362)
LOG.debug("Transaction success for deletion of Flow {}", flowBuilder.getFlowName());
} catch (Exception e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to remove flow {}", flowBuilder.getFlowName(), e);
modification.cancel();
}
}
return data.get();
}
} catch (InterruptedException|ExecutionException e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to get flow {}", flowBuilder.getFlowName(), e);
}
LOG.debug("Cannot find data for Flow {}", flowBuilder.getFlowName());
return data.get();
}
} catch (InterruptedException|ExecutionException e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to get openflow node {}", nodeId, e);
}
LOG.debug("Cannot find data for Node {}", nodeId);
programLocalBridgeRules(node, dpid, segmentationId, attachedMac, localPort);
}
} catch (Exception e) {
- LOG.error("Exception in programming Local Rules for " + intf + " on " + node, e);
+ LOG.error("Exception in programming Local Rules for {} on {}", intf, node, e);
}
}
programLocalSecurityGroupRules(attachedMac, node, intf, dpid, localPort, segmentationId, false);
}
} catch (Exception e) {
- LOG.error("Exception in removing Local Rules for " + intf + " on " + node, e);
+ LOG.error("Exception in removing Local Rules for {} on {}", intf, node, e);
}
}
}
}
} catch (Exception e) {
- LOG.trace("", e);
+ LOG.warn("Failed to program tunnel rules, node {}, intf {}", node, intf, e);
}
}
}
}
} catch (Exception e) {
- LOG.error("", e);
+ LOG.error("Failed to remove tunnel rules, node {}, intf {}", node, intf, e);
}
}
MdsalHelper.createOvsdbInterfaceType(intf.getInterfaceType()),
src, dst);
} catch (Exception e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("handleInterfaceDelete: failed to delete tunnel port", e);
}
} else if (phyIfName.contains(intf.getName())) {
deletePhysicalPort(srcNode, intf.getName());
return data.get();
}
} catch (InterruptedException|ExecutionException e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to get group {}", groupBuilder.getGroupName(), e);
}
- LOG.debug("Cannot find data for Group " + groupBuilder.getGroupName());
+ LOG.debug("Cannot find data for Group {}", groupBuilder.getGroupName());
return null;
}
CheckedFuture<Void, TransactionCommitFailedException> commitFuture = modification.submit();
try {
commitFuture.get(); // TODO: Make it async (See bug 1362)
- LOG.debug("Transaction success for write of Group " + groupBuilder.getGroupName());
+ LOG.debug("Transaction success for write of Group {}", groupBuilder.getGroupName());
} catch (InterruptedException|ExecutionException e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to write group {}", groupBuilder.getGroupName(), e);
}
}
}
try {
commitFuture.get(); // TODO: Make it async (See bug 1362)
- LOG.debug("Transaction success for deletion of Group " + groupBuilder.getGroupName());
+ LOG.debug("Transaction success for deletion of Group {}", groupBuilder.getGroupName());
} catch (InterruptedException|ExecutionException e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to remove group {}", groupBuilder.getGroupName(), e);
}
}
}
CheckedFuture<Void, TransactionCommitFailedException> commitFuture = modification.submit();
try {
commitFuture.get(); // TODO: Make it async (See bug 1362)
- LOG.debug("Transaction success for write of Flow " + flowBuilder.getFlowName());
+ LOG.debug("Transaction success for write of Flow {}", flowBuilder.getFlowName());
} catch (InterruptedException|ExecutionException e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to write flows {}", flowBuilder.getFlowName(), e);
}
}
}
return;
}
} catch (UnknownHostException e) {
- LOG.warn("Invalid IP address {}", ipaddress);
+ LOG.warn("Invalid IP address {}", ipaddress, e);
return;
}
}
Constants.PROTO_VM_IP_MAC_MATCH_PRIORITY,write);
}
} catch(UnknownHostException e) {
- LOG.warn("Invalid IP address {}", srcAddress.getIpAddress());
+ LOG.warn("Invalid IP address {}", srcAddress.getIpAddress(), e);
}
}
}
NodeBuilder nodeBuilder = createNodeBuilder(nodeName);
String flowName = "Egress_Fixed_Conntrk_Untrk_" + segmentationId + "_" + localPort + "_";
matchBuilder = MatchUtils.createV4EtherMatchWithType(matchBuilder, attachMac, null);
- matchBuilder = MatchUtils.addCtState(matchBuilder,0x00,0X80);
+ matchBuilder = MatchUtils.addCtState(matchBuilder, MatchUtils.UNTRACKED_CT_STATE,
+ MatchUtils.UNTRACKED_CT_STATE_MASK);
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder.build());
FlowUtils.initFlowBuilder(flowBuilder, flowName, getTable()).setPriority(priority);
NodeBuilder nodeBuilder = createNodeBuilder(nodeName);
String flowName = "Egress_Fixed_Conntrk_TrkEst_" + segmentationId + "_" + localPort + "_";
matchBuilder = MatchUtils.createInPortMatch(matchBuilder, dpid, localPort);
- matchBuilder = MatchUtils.addCtState(matchBuilder,0x82, 0x82);
+ matchBuilder = MatchUtils.addCtState(matchBuilder, MatchUtils.TRACKED_EST_CT_STATE,
+ MatchUtils.TRACKED_EST_CT_STATE_MASK);
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder.build());
FlowUtils.initFlowBuilder(flowBuilder, flowName, getTable()).setPriority(priority);
NodeBuilder nodeBuilder = createNodeBuilder(nodeName);
String flowName = "Egress_Fixed_Conntrk_NewDrop_" + segmentationId + "_" + localPort + "_";
matchBuilder = MatchUtils.createInPortMatch(matchBuilder, dpid, localPort);
- matchBuilder = MatchUtils.addCtState(matchBuilder,0x01, 0x01);
+ matchBuilder = MatchUtils.addCtState(matchBuilder, MatchUtils.NEW_CT_STATE, MatchUtils.NEW_CT_STATE_MASK);
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder.build());
FlowUtils.initFlowBuilder(flowBuilder, flowName, getTable()).setPriority(priority);
boolean write, boolean drop, boolean isCtCommit) {
MatchBuilder matchBuilder1 = matchBuilder;
if (isCtCommit) {
- matchBuilder1 = MatchUtils.addCtState(matchBuilder1,0x81, 0x81);
+ matchBuilder1 = MatchUtils.addCtState(matchBuilder1, MatchUtils.TRACKED_NEW_CT_STATE,
+ MatchUtils.TRACKED_NEW_CT_STATE_MASK);
}
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder1.build());
return;
}
} catch(UnknownHostException e) {
- LOG.warn("Invalid IP address {}", ipaddress);
+ LOG.warn("Invalid IP address {}", ipaddress, e);
return;
}
}
NodeBuilder nodeBuilder = createNodeBuilder(nodeName);
String flowName = "Ingress_Fixed_Conntrk_Untrk_" + segmentationId + "_" + localPort + "_";
matchBuilder = MatchUtils.createV4EtherMatchWithType(matchBuilder,null,attachMac);
- matchBuilder = MatchUtils.addCtState(matchBuilder,0x00, 0x80);
+ matchBuilder = MatchUtils.addCtState(matchBuilder, MatchUtils.UNTRACKED_CT_STATE,
+ MatchUtils.UNTRACKED_CT_STATE_MASK);
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder.build());
FlowUtils.initFlowBuilder(flowBuilder, flowName, getTable()).setPriority(priority);
NodeBuilder nodeBuilder = createNodeBuilder(nodeName);
String flowName = "Ingress_Fixed_Conntrk_TrkEst_" + segmentationId + "_" + localPort + "_";
matchBuilder = MatchUtils.createV4EtherMatchWithType(matchBuilder,null,attachMac);
- matchBuilder = MatchUtils.addCtState(matchBuilder,0x82, 0x82);
+ matchBuilder = MatchUtils.addCtState(matchBuilder, MatchUtils.TRACKED_EST_CT_STATE,
+ MatchUtils.TRACKED_EST_CT_STATE_MASK);
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder.build());
FlowUtils.initFlowBuilder(flowBuilder, flowName, getTable()).setPriority(priority);
NodeBuilder nodeBuilder = createNodeBuilder(nodeName);
String flowName = "Ingress_Fixed_Conntrk_NewDrop_" + segmentationId + "_" + localPort + "_";
matchBuilder = MatchUtils.createV4EtherMatchWithType(matchBuilder,null,attachMac);
- matchBuilder = MatchUtils.addCtState(matchBuilder,0x01, 0x01);
+ matchBuilder = MatchUtils.addCtState(matchBuilder, MatchUtils.NEW_CT_STATE, MatchUtils.NEW_CT_STATE_MASK);
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder.build());
FlowUtils.initFlowBuilder(flowBuilder, flowName, getTable()).setPriority(priority);
boolean write, boolean drop, boolean isCtCommit) {
MatchBuilder matchBuilder1 = matchBuilder;
if (isCtCommit) {
- matchBuilder1 = MatchUtils.addCtState(matchBuilder1,0x81, 0x81);
+ matchBuilder1 = MatchUtils.addCtState(matchBuilder1, MatchUtils.TRACKED_NEW_CT_STATE,
+ MatchUtils.TRACKED_NEW_CT_STATE_MASK);
}
FlowBuilder flowBuilder = new FlowBuilder();
flowBuilder.setMatch(matchBuilder1.build());
LOG.info("Registering Data Change Listener for NetvirtSfc AccessList configuration.");
listenerRegistration = db.registerDataTreeChangeListener(treeId, this);
} catch (final Exception e) {
- LOG.warn("Netvirt AccessList DataChange listener registration fail!");
+ LOG.warn("Netvirt AccessList DataChange listener registration failed!", e);
throw new IllegalStateException("NetvirtSfcAccessListListener startup fail! System needs restart.", e);
}
}
LOG.info("Registering Data Change Listener for NetvirtSfc Classifier configuration.");
listenerRegistration = db.registerDataTreeChangeListener(treeId, this);
} catch (final Exception e) {
- LOG.warn("Netvirt Classifier DataChange listener registration fail!");
+ LOG.warn("Netvirt Classifier DataChange listener registration failed!", e);
throw new IllegalStateException("NetvirtSfcClassifierListener startup fail! System needs restart.", e);
}
}
try {
listenerRegistration.close();
} catch (final Exception e) {
- LOG.warn("Error to stop Netvirt Classifier DataChange listener: {}", e.getMessage());
+ LOG.warn("Failed to stop Netvirt Classifier DataChange listener", e);
}
listenerRegistration = null;
}
LOG.info("Registering Data Change Listener for NetvirtSfc RenderedServicePath configuration.");
listenerRegistration = db.registerDataTreeChangeListener(treeId, this);
} catch (final Exception e) {
- LOG.warn("Netvirt RenderedServicePath DataChange listener registration failed!");
+ LOG.warn("Netvirt RenderedServicePath DataChange listener registration failed!", e);
throw new IllegalStateException("RspListener startup failed! System needs restart.", e);
}
}
commitFuture.get(); // TODO: Make it async (See bug 1362)
LOG.debug("Transaction success for deletion of Flow {}", path);
} catch (Exception e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to remove flow {}", path, e);
modification.cancel();
}
}
Thread.sleep(1000);
super.setup();
} catch (Exception e) {
- e.printStackTrace();
+ LOG.warn("Failed to setup test", e);
+ fail("Failed to setup test: " + e);
}
getProperties();
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
- e.printStackTrace();
+ LOG.warn("Interrupted while waiting for provider context", e);
}
}
}
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
- e.printStackTrace();
+ LOG.warn("Interrupted while waiting for other provider", e);
}
return providerContext;
}
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
- e.printStackTrace();
+ LOG.warn("Interrupted while waiting for {}", NETVIRT_TOPOLOGY_ID, e);
}
}
}
try {
listener.close();
} catch (Exception ex) {
- LOG.warn("Failed to close registration {}, iid {}", listener, ex);
+ LOG.warn("Failed to close registration {}", listener, ex);
}
}
LOG.info("waitList size {}", waitList.size());
<dependency>
<groupId>org.opendaylight.controller</groupId>
<artifactId>sal-binding-broker-impl</artifactId>
- <version>1.3.0-SNAPSHOT</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
Node ovsdbNode = southbound.readOvsdbNode(bridgeNode);
if (ovsdbNode == null) {
//this should never happen
- LOG.error("createLocalNetwork could not find ovsdbNode from bridge node " + bridgeNode);
+ LOG.error("createLocalNetwork could not find ovsdbNode from bridge node {}", bridgeNode);
return false;
}
if (network.getProviderNetworkType().equalsIgnoreCase(NetworkHandler.NETWORK_TYPE_VLAN)) {
try {
isCreated = createBridges(bridgeNode, ovsdbNode, network);
} catch (Exception e) {
- LOG.error("Error creating internal vlan net network " + bridgeNode, e);
+ LOG.error("Error creating internal vlan net network {}--{}", bridgeNode, network, e);
}
} else {
isCreated = true;
try {
isCreated = createBridges(bridgeNode, ovsdbNode, network);
} catch (Exception e) {
- LOG.error("Error creating internal vxlan/gre net network " + bridgeNode, e);
+ LOG.error("Error creating internal vxlan/gre net network {}--{}", bridgeNode, network, e);
}
} else {
isCreated = true;
return addressString;
}
} catch (UnknownHostException e) {
- LOG.error("Host {} is invalid", addressString);
+ LOG.error("Host {} is invalid", addressString, e);
}
}
return addressString;
}
} catch (UnknownHostException e) {
- LOG.error("Host {} is invalid", addressString);
+ LOG.error("Host {} is invalid", addressString, e);
}
}
openFlowPort = Short.parseShort(portString);
} catch (NumberFormatException e) {
LOG.warn("Invalid port:{}, use default({})", portString,
- openFlowPort);
+ openFlowPort, e);
}
}
return openFlowPort;
}
if (status.isSuccess()) {
- LOG.debug("ProgramStaticArp {} for mac:{} addr:{} dpid:{} segOrOfPort:{} action:{}",
+ LOG.debug("programStaticRuleStage2 {} for mac:{} addr:{} dpid:{} segOrOfPort:{} action:{}",
arpProvider == null ? "skipped" : "programmed",
macAddress, address, dpid, segOrOfPort, action);
} else {
- LOG.error("ProgramStaticArp failed for mac:{} addr:{} dpid:{} segOrOfPort:{} action:{} status:{}",
+ LOG.error("programStaticRuleStage2 failed for mac:{} addr:{} dpid:{} segOrOfPort:{} action:{} status:{}",
macAddress, address, dpid, segOrOfPort, action, status);
}
return status;
}
if (status.isSuccess()) {
- LOG.debug("ProgramRouterInterface {} for mac:{} addr:{}/{} node:{} srcTunId:{} destTunId:{} action:{}",
+ LOG.debug("programRouterInterfaceStage2 {} for mac:{} addr:{}/{} node:{} srcTunId:{} destTunId:{} action:{}",
routingProvider == null ? "skipped" : "programmed",
macAddress, address, mask, node.getNodeId().getValue(), sourceSegmentationId, destinationSegmentationId,
actionForNode);
} else {
- LOG.error("ProgramRouterInterface failed for mac:{} addr:{}/{} node:{} srcTunId:{} destTunId:{} action:{} status:{}",
+ LOG.error("programRouterInterfaceStage2 failed for mac:{} addr:{}/{} node:{} srcTunId:{} destTunId:{} action:{} status:{}",
macAddress, address, mask, node.getNodeId().getValue(), sourceSegmentationId, destinationSegmentationId,
actionForNode, status);
}
<dependency>
<groupId>org.opendaylight.controller</groupId>
<artifactId>sal-binding-broker-impl</artifactId>
- <version>1.3.0-SNAPSHOT</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
}
}
} catch (InterruptedException | ExecutionException e) {
- LOG.warn("Exception attempting to createTransactionInvokers {}: {}",connectionInfo,e);
+ LOG.warn("Exception attempting to createTransactionInvokers {}", connectionInfo, e);
}
}
}
ovs.getExternalIdsColumn().getData());
transaction.add(mutate);
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB Node external IDs");
+ LOG.warn("Incomplete OVSDB Node external IDs", e);
}
externalClient.getConnectionInfo().getRemotePort(),
externalClient.getConnectionInfo().getLocalAddress(),
externalClient.getConnectionInfo().getLocalPort());
- OvsdbConnectionInstance client = connectedButCallBacksNotRegistered(externalClient);
-
- // Register Cluster Ownership for ConnectionInfo
- registerEntityForOwnership(client);
+ if (externalClient.getSchema(SouthboundConstants.OPEN_V_SWITCH) != null) {
+ OvsdbConnectionInstance client = connectedButCallBacksNotRegistered(externalClient);
+ // Register Cluster Ownership for ConnectionInfo
+ registerEntityForOwnership(client);
+ }
}
public OvsdbConnectionInstance connectedButCallBacksNotRegistered(final OvsdbClient externalClient) {
protocols = bridge.getProtocolsColumn().getData();
} catch (SchemaVersionMismatchException e) {
// We don't care about the exception stack trace here
- LOG.warn("protocols not supported by this version of ovsdb: {}", e.getMessage());
+ LOG.warn("protocols not supported by this version of ovsdb", e);
}
List<ProtocolEntry> protocolList = new ArrayList<>();
if (protocols != null && protocols.size() > 0) {
transaction.cancel();
}
} catch (Exception e) {
- LOG.error("Error initializing ovsdb topology {}",e);
+ LOG.error("Error initializing ovsdb topology", e);
}
}
try {
bridge.setOtherConfig(ImmutableMap.copyOf(otherConfigMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete bridge other config");
+ LOG.warn("Incomplete bridge other config", e);
}
}
}
ovs.getExternalIdsColumn().getData());
transaction.add(mutate);
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB Node external IDs");
+ LOG.warn("Incomplete OVSDB Node external IDs", e);
}
try {
qos.setExternalIds(ImmutableMap.copyOf(externalIdsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete Qos external IDs");
+ LOG.warn("Incomplete Qos external IDs", e);
}
List<QosOtherConfig> otherConfigs = qosEntry.getQosOtherConfig();
try {
queue.setExternalIds(ImmutableMap.copyOf(externalIdsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete Queue external IDs");
+ LOG.warn("Incomplete Queue external IDs", e);
}
List<QueuesOtherConfig> otherConfigs = queueEntry.getQueuesOtherConfig();
try {
ovsInterface.setOptions(ImmutableMap.copyOf(optionsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB interface options");
+ LOG.warn("Incomplete OVSDB interface options", e);
}
}
}
try {
ovsInterface.setExternalIds(ImmutableMap.copyOf(externalIdsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB interface external_ids");
+ LOG.warn("Incomplete OVSDB interface external_ids", e);
}
}
}
try {
ovsInterface.setLldp(ImmutableMap.copyOf(interfaceLldpMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB interface lldp");
+ LOG.warn("Incomplete OVSDB interface lldp", e);
}
}
} catch (SchemaVersionMismatchException e) {
- LOG.debug("lldp column for Interface Table unsupported for this version of ovsdb schema. {}", e.getMessage());
+ LOG.debug("lldp column for Interface Table unsupported for this version of ovsdb schema", e);
}
}
try {
port.setExternalIds(ImmutableMap.copyOf(externalIdsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB port external_ids");
+ LOG.warn("Incomplete OVSDB port external_ids", e);
}
}
}
try {
ovsInterface.setOptions(ImmutableMap.copyOf(optionsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB interface options");
+ LOG.warn("Incomplete OVSDB interface options", e);
}
}
}
try {
ovsInterface.setExternalIds(ImmutableMap.copyOf(externalIdsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB interface external_ids");
+ LOG.warn("Incomplete OVSDB interface external_ids", e);
}
}
}
try {
ovsInterface.setLldp(ImmutableMap.copyOf(interfaceLldpMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB interface lldp");
+ LOG.warn("Incomplete OVSDB interface lldp", e);
}
}
} catch (SchemaVersionMismatchException e) {
- LOG.debug("lldp column for Interface Table unsupported for this version of ovsdb schema. {}", e.getMessage());
+ LOG.debug("lldp column for Interface Table unsupported for this version of ovsdb schema", e);
}
}
try {
port.setExternalIds(ImmutableMap.copyOf(externalIdsMap));
} catch (NullPointerException e) {
- LOG.warn("Incomplete OVSDB port external_ids");
+ LOG.warn("Incomplete OVSDB port external_ids", e);
}
}
}
}
}
} catch (Exception e) {
- LOG.warn("Error getting local ip address {}", e);
+ LOG.warn("Error getting local ip address", e);
}
}
}
}
}
} catch (Exception e) {
- LOG.warn("Failure to delete ovsdbNode {}",e);
+ LOG.warn("Failure to delete ovsdbNode", e);
}
}
ovsdbTerminationPointBuilder.setInterfaceLldp(interfaceLldpList);
}
} catch (SchemaVersionMismatchException e) {
+ // We don't care about the exception stack trace here
LOG.debug("lldp column for Interface Table unsupported for this version of ovsdb schema. {}", e.getMessage());
}
}
queuesBuilder.setQueueUuid(new Uuid(entry.getKey().toString()));
Collection<Long> dscp = queue.getDscpColumn().getData();
if (!dscp.isEmpty()) {
- try {
- queuesBuilder.setDscp(new Short(dscp.iterator().next().toString()));
- } catch (NumberFormatException e) {
- queuesBuilder.setDscp(new Short("0"));
- }
+ queuesBuilder.setDscp(dscp.iterator().next().shortValue());
}
setOtherConfig(transaction, queuesBuilder, oldQueue, queue, nodeIId);
setExternalIds(transaction, queuesBuilder, oldQueue, queue, nodeIId);
try {
super.setup();
} catch (Exception e) {
- e.printStackTrace();
+ LOG.warn("Failed to setup test", e);
}
//dataBroker = getSession().getSALService(DataBroker.class);
Thread.sleep(3000);
return data.get();
}
} catch (InterruptedException|ExecutionException e) {
- LOG.error(e.getMessage(), e);
+ LOG.error("Failed to get flow {}", flowBuilder.getFlowName(), e);
}
LOG.info("Cannot find data for Flow {} in {}", flowBuilder.getFlowName(), store);
public static final short ALL_ICMP = -1;
public static final long ETHERTYPE_IPV4 = 0x0800;
public static final long ETHERTYPE_IPV6 = 0x86dd;
+ public static final int UNTRACKED_CT_STATE = 0x00;
+ public static final int UNTRACKED_CT_STATE_MASK = 0x20;
+ public static final int TRACKED_EST_CT_STATE = 0x22;
+ public static final int TRACKED_EST_CT_STATE_MASK = 0x22;
+ public static final int TRACKED_NEW_CT_STATE = 0x21;
+ public static final int TRACKED_NEW_CT_STATE_MASK = 0x21;
+ public static final int NEW_CT_STATE = 0x01;
+ public static final int NEW_CT_STATE_MASK = 0x01;
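The named constants above replace the magic ct_state numbers used previously (0x00/0x80, 0x82, 0x01, 0x81). Assuming the Open vSwitch ct_state bit assignments (new=0x01, est=0x02, trk=0x20), a packet matches a value/mask pair when its tracked state ANDed with the mask equals the value; a small sketch of that semantics:

```java
// Sketch of ct_state value/mask matching. The bit values here are
// assumptions taken from the Open vSwitch ct_state field layout
// (new=0x01, est=0x02, trk=0x20), mirroring the MatchUtils constants.
public class CtStateDemo {
    public static final int CT_NEW = 0x01;
    public static final int CT_EST = 0x02;
    public static final int CT_TRK = 0x20;

    /** A packet matches when the masked bits of its state equal the value. */
    public static boolean matches(int pktState, int value, int mask) {
        return (pktState & mask) == value;
    }

    public static void main(String[] args) {
        int untracked = 0x00;              // connection tracker never saw it
        int established = CT_TRK | CT_EST; // 0x22: tracked and established
        // UNTRACKED_CT_STATE / UNTRACKED_CT_STATE_MASK: only trk must be clear.
        System.out.println(matches(untracked, 0x00, CT_TRK));   // true
        System.out.println(matches(established, 0x00, CT_TRK)); // false
        // TRACKED_EST_CT_STATE / _MASK: trk and est must both be set.
        System.out.println(matches(established, CT_TRK | CT_EST, CT_TRK | CT_EST)); // true
    }
}
```

Note the mask change from 0x80 to 0x20 in the new constants: under the assumed bit layout, 0x20 is the trk bit, so the old 0x80 mask would not actually have tested the tracked state.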
/**
* Create Ingress Port Match dpidLong, inPort
try {
inetAddress = InetAddress.getByName(addressStr);
} catch (UnknownHostException e) {
- LOG.warn("Could not allocate InetAddress");
+ LOG.warn("Could not allocate InetAddress", e);
}
IpAddress address = SouthboundMapper.createIpAddress(inetAddress);