BUG-8665: fix memory leak around RangeSets 00/58800/1
Robert Varga [Tue, 13 Jun 2017 10:13:58 +0000 (12:13 +0200)]
BUG-8665: fix memory leak around RangeSets

This is a thinko on my part, where I was thinking in terms of a
discrete set (UnsignedLong) and assumed RangeSets would coalesce
individual items.

Unfortunately TreeRangeSet has no way of knowing that the domain
it operates on is discrete and hence will not merge individual
items.

This patch fixes the problem by using [N, N+1) ranges. A follow-up
patch should address this in a more efficient manner.
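
The behavior can be sketched with a small stdlib model (an illustration
only; the actual fix uses Guava's TreeRangeSet with Range.closedOpen,
and the UnitRanges class here is hypothetical): representing each
discrete value N as the half-open range [N, N+1) makes adjacent values
share an endpoint, so they coalesce, whereas singleton closed ranges
never would.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: why [N, N+1) ranges coalesce. A range set that does not know
// its domain is discrete keeps [1,1] and [2,2] separate, but the
// half-open unit ranges [1,2) and [2,3) share an endpoint and merge.
public class UnitRanges {
    // start -> end (exclusive); adjacent/overlapping ranges merge on insert
    private final TreeMap<Long, Long> ranges = new TreeMap<>();

    public void add(long value) {
        long start = value;
        long end = value + 1; // represent the single value as [N, N+1)

        Map.Entry<Long, Long> prev = ranges.floorEntry(start);
        if (prev != null && prev.getValue() >= start) {
            // Touches or overlaps the preceding range: extend it instead.
            start = prev.getKey();
            end = Math.max(end, prev.getValue());
        }
        // Absorb any following ranges that now touch or overlap [start, end).
        Map.Entry<Long, Long> next = ranges.ceilingEntry(start);
        while (next != null && next.getKey() <= end) {
            end = Math.max(end, next.getValue());
            ranges.remove(next.getKey());
            next = ranges.ceilingEntry(start);
        }
        ranges.put(start, end);
    }

    public int size() {
        return ranges.size();
    }

    public static void main(String[] args) {
        UnitRanges set = new UnitRanges();
        set.add(1);
        set.add(2);
        set.add(3);
        System.out.println(set.size()); // the three unit ranges coalesce into one [1,4)
    }
}
```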

Change-Id: Iecc313e09ae0cdd51a42f7d39281f7634f0358a7
Signed-off-by: Robert Varga <>
Bug 8606: Continue leadership transfer on pauseLeader timeout 94/58794/2
Tom Pantelis [Mon, 12 Jun 2017 13:42:38 +0000 (09:42 -0400)]
Bug 8606: Continue leadership transfer on pauseLeader timeout

Modified the logic to continue with leadership transfer if pauseLeader
times out instead of aborting. The shard may have a lot of transactions
queued up which it can't finish in time, but there may still be a
follower that is caught up (i.e. whose matchIndex equals the leader's
lastIndex) or that would catch up if leadership transfer continued. The
worst case is that no follower is available and the "catch up" phase of
leadership transfer also times out, which lengthens shutdown time, but
that should be fine.
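
The selection rule described above can be sketched as follows (the
class and method names are illustrative, not the actual
RaftActorLeadershipTransferCohort API): a follower is a viable transfer
target when its matchIndex equals the leader's lastIndex.

```java
import java.util.Map;
import java.util.Optional;

// Hedged sketch of the caught-up-follower check: prefer a follower whose
// matchIndex equals the leader's lastIndex; if none qualifies yet, the
// transfer still proceeds and relies on the catch-up phase.
public class TransferCandidate {
    public static Optional<String> pickCaughtUpFollower(Map<String, Long> matchIndexById,
                                                        long leaderLastIndex) {
        return matchIndexById.entrySet().stream()
            .filter(e -> e.getValue() == leaderLastIndex) // fully replicated follower
            .map(Map.Entry::getKey)
            .findFirst();
    }

    public static void main(String[] args) {
        Map<String, Long> match = Map.of("follower-1", 41L, "follower-2", 42L);
        System.out.println(pickCaughtUpFollower(match, 42L).orElse("none"));
    }
}
```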

Change-Id: I1ec1ef43bb556e50416bb7239ce3c267265db9b3
Signed-off-by: Tom Pantelis <>
(cherry picked from commit dac16f0d464eff3325b3800a803e81b303964e4b)

Fix intermittent PreLeaderScenarioTest failure 91/58791/1
Tom Pantelis [Tue, 13 Jun 2017 00:28:06 +0000 (20:28 -0400)]
Fix intermittent PreLeaderScenarioTest failure

java.lang.AssertionError: AppendEntries - # entries expected:<1> but was:<0>
  at org.junit.Assert.failNotEquals(
  at org.junit.Assert.assertEquals(
  at org.junit.Assert.assertEquals(
  at org.opendaylight.controller.cluster.raft.PreLeaderScenarioTest.testUnComittedEntryOnLeaderChange(

AppendEntries appendEntries = expectFirstMatching(follower1CollectorActor, AppendEntries.class);
assertEquals("AppendEntries - # entries", 1, appendEntries.getEntries().size());

After the payload is sent to the leader, the test expects an AppendEntries
sent to follower1 with a single ReplicatedLogEntry. From the test output
this did occur correctly, but the MessageCollectorActor still had the
initial empty AppendEntries sent on leader startup. The test setup waits
for the initial AppendEntriesReply's from both followers prior to clearing
messages in each MessageCollectorActor; however the AppendEntries may not
have been delivered to follower1's MessageCollectorActor yet and thus
doesn't get cleared. We need to specifically wait for the AppendEntries in
follower1's MessageCollectorActor before clearing it.

Change-Id: I638a21e75ea135c1fe24970135f564da4fc5738e
Signed-off-by: Tom Pantelis <>
BUG 8494 log possibly hanged futures in tx handlers 39/58739/2
Tomas Cere [Mon, 12 Jun 2017 13:29:46 +0000 (15:29 +0200)]
BUG 8494 log possibly hanged futures in tx handlers

Change-Id: Iccc90e575033c6770a3a499853f31e0684a712e4
Signed-off-by: Tomas Cere <>
BUG 8604 set proper tag when producer creation times out 38/58738/2
Tomas Cere [Mon, 12 Jun 2017 13:20:29 +0000 (15:20 +0200)]
BUG 8604 set proper tag when producer creation times out

Change-Id: I405f4d546a32b2d0f5b56fb03907a63334fabd6c
Signed-off-by: Tomas Cere <>
BUG 8649: remove bounded mailbox from ShardManager and notification actors 34/58734/1
Tomas Cere [Mon, 12 Jun 2017 12:47:53 +0000 (14:47 +0200)]
BUG 8649: remove bounded mailbox from ShardManager and notification actors

Change-Id: I52975d969a81cc3ccdc7b963e0f43f9958ba6a10
Signed-off-by: Tomas Cere <>
BUG 8629: log inconsistent notifications as warn 99/58499/3
Tomas Cere [Thu, 8 Jun 2017 12:00:02 +0000 (14:00 +0200)]
BUG 8629: log inconsistent notifications as warn

Change-Id: I872fced1ecc913e521ddf7c0d4acee7f48b04cb1
Signed-off-by: Tomas Cere <>
BUG 8618: Log leader status when rejecting request 16/58416/7
Tomas Cere [Wed, 7 Jun 2017 14:14:36 +0000 (16:14 +0200)]
BUG 8618: Log leader status when rejecting request

Change-Id: Iecd99a74473b68f43b7ad43a1272d679aa09b4e6
Signed-off-by: Tomas Cere <>
Catch all exceptions when submitting in tx handlers 84/58484/4
Tomas Cere [Thu, 8 Jun 2017 09:48:00 +0000 (11:48 +0200)]
Catch all exceptions when submitting in tx handlers

Change-Id: I5b9a2ec26b1b6001423f2cf5cf57285ce6c7e340
Signed-off-by: Tomas Cere <>
Do not flood logs with modifications 34/58434/4
Robert Varga [Wed, 7 Jun 2017 16:40:52 +0000 (18:40 +0200)]
Do not flood logs with modifications

Debug logs have grown quite a bit for the tell-based protocol,
mostly due to us dumping modifications as part of the request
message. Log only the number of modifications in the message,
which makes the logs quite a bit more readable.

Change-Id: I35961702b7bdd0e3f93cd03f05a0e443a14bf419
Signed-off-by: Robert Varga <>
Bug 5740: Configure control-aware mailbox 99/58399/2
Tom Pantelis [Thu, 25 May 2017 14:18:02 +0000 (10:18 -0400)]
Bug 5740: Configure control-aware mailbox

Configured unit tests and production to use a control-aware mailbox
for Shard actors. The current code allows for a "shard-dispatcher"
to be defined so I added a section in the .conf that specifies the
mailbox-type appropriately (i.e. UnboundedDequeBasedControlAwareMailbox).
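
For illustration, a section of this shape could express the mailbox
assignment (the dispatcher path and the fully-qualified class name are
assumptions; only the UnboundedDequeBasedControlAwareMailbox name comes
from the text above):

```
shard-dispatcher {
  type = Dispatcher
  mailbox-type = "<package>.UnboundedDequeBasedControlAwareMailbox"
}
```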

Change-Id: Ibdb404e1dfcc699471a8e899c491a09500ee04c0
Signed-off-by: Tom Pantelis <>
Bug 5740: Add Deque-based control-aware mailbox 98/58398/2
Tom Pantelis [Thu, 25 May 2017 13:39:40 +0000 (09:39 -0400)]
Bug 5740: Add Deque-based control-aware mailbox

Since Akka persistence uses stashing, it requires a mailbox to be
Deque-based to provide the enqueueFirst method. However, the
control-aware mailboxes provided by Akka are not Deque-based, so we
need one that is.
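
The combination can be sketched with plain collections (a stdlib
illustration, not Akka's MailboxType/MessageQueue API; the class name
is hypothetical): control messages are served first, and enqueueFirst()
puts stashed messages back at the head of their queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a deque-based, control-aware queue: control messages have
// priority, yet enqueueFirst() (which stashing needs) is still available.
public class ControlAwareDeque<T> {
    public interface ControlMessage { }  // marker, as in the raft messages patch

    private final Deque<T> control = new ArrayDeque<>();
    private final Deque<T> normal = new ArrayDeque<>();

    public void enqueue(T msg) {
        (msg instanceof ControlMessage ? control : normal).addLast(msg);
    }

    // Stashed messages must go back to the front of their queue.
    public void enqueueFirst(T msg) {
        (msg instanceof ControlMessage ? control : normal).addFirst(msg);
    }

    public T dequeue() {
        // Control messages are always served before normal traffic.
        return control.isEmpty() ? normal.pollFirst() : control.pollFirst();
    }

    public static void main(String[] args) {
        ControlAwareDeque<Object> mb = new ControlAwareDeque<>();
        mb.enqueue("payload");
        Object vote = new ControlMessage() { };
        mb.enqueue(vote);
        System.out.println(mb.dequeue() == vote); // control message wins: true
    }
}
```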

Change-Id: I74f214c725eff16aba093aad3f2f6eed80948ee4
Signed-off-by: Tom Pantelis <>
Bug 5740: Add ControlMessage interface to raft messages 97/58397/2
Tom Pantelis [Wed, 24 May 2017 14:58:57 +0000 (10:58 -0400)]
Bug 5740: Add ControlMessage interface to raft messages

Added 'implements ControlMessage' for all RaftRPCs and other messages
related to raft that should have higher priority.

Change-Id: Ie699531ef67d9cbcf7cbdec0422dd2e6faafebaa
Signed-off-by: Tom Pantelis <>
Bug 5740: Remove Serializable where not necessary 96/58396/2
Tom Pantelis [Wed, 24 May 2017 13:40:52 +0000 (09:40 -0400)]
Bug 5740: Remove Serializable where not necessary

Some raft message classes are Serializable but they don't need to
be as they're only sent locally.

Change-Id: Ibd052b9a4589dd2476b30c51e301b3dd609df750
Signed-off-by: Tom Pantelis <>
Bug 5740: Change TimeoutNow and Shutdown to externalizable proxy 95/58395/2
Tom Pantelis [Wed, 24 May 2017 12:13:34 +0000 (08:13 -0400)]
Bug 5740: Change TimeoutNow and Shutdown to externalizable proxy

Change-Id: I3b2289c258ffab288901b5cbf4e5032bc143dfc7
Signed-off-by: Tom Pantelis <>
Bug 5740: Change RequestVote(Reply) to externalizable proxy 94/58394/2
Tom Pantelis [Wed, 24 May 2017 12:02:00 +0000 (08:02 -0400)]
Bug 5740: Change RequestVote(Reply) to externalizable proxy

The other RaftRPC classes have been converted to use the
externalizable proxy pattern, so we should convert RequestVote(Reply)
as well.
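
The pattern itself looks roughly like this (a self-contained sketch
with illustrative names, not the actual RequestVote class): the message
serializes as a compact Externalizable proxy, and readResolve() rebuilds
the message on the receiving side.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch of the externalizable proxy pattern: the message itself never
// hits the wire, only its compact Proxy does.
public class RequestVoteSketch implements Serializable {
    private static final long serialVersionUID = 1L;
    private final long term;

    public RequestVoteSketch(long term) {
        this.term = term;
    }

    public long getTerm() {
        return term;
    }

    // Serialization always substitutes the proxy for this instance.
    private Object writeReplace() {
        return new Proxy(term);
    }

    private static class Proxy implements Externalizable {
        private static final long serialVersionUID = 1L;
        private long term;

        public Proxy() { }  // public no-arg constructor required by Externalizable

        Proxy(long term) {
            this.term = term;
        }

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeLong(term);
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException {
            term = in.readLong();
        }

        // Deserialization substitutes the real message for the proxy.
        private Object readResolve() {
            return new RequestVoteSketch(term);
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new RequestVoteSketch(42));
        oos.close();
        RequestVoteSketch back = (RequestVoteSketch) new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(back.getTerm()); // 42
    }
}
```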

Change-Id: I0a2054d8426f66480f37061d1a9fc51464f705da
Signed-off-by: Tom Pantelis <>
BUG-8620: handle direct commit and disconnect correctly 30/58430/3
Robert Varga [Wed, 7 Jun 2017 16:08:28 +0000 (18:08 +0200)]
BUG-8620: handle direct commit and disconnect correctly

Transactions committed directly can complete in a disconnected
fashion as we are skipping the back-and-forth communication of the
three-phase commit. This period may involve shard leadership changes
and so we may end up in a situation where we are replaying a direct
commit request to a transaction which already completed -- which
raises a RequestFailure to make sure we do not do anything untoward.

In the specific case of direct commit, though, this is perfectly fine
and so update the callback to account for this case happening.

Change-Id: Ic60e69f0f58cc7c5a3ac869386dc12f856aa1f74
Signed-off-by: Robert Varga <>
BUG 8602: Skip initial fill of idints 55/58355/3
Tomas Cere [Tue, 6 Jun 2017 16:28:10 +0000 (18:28 +0200)]
BUG 8602: Skip initial fill of idints

Change-Id: If197c9b2318a52b3608f6065bea44af860a09849
Signed-off-by: Tomas Cere <>
Fix RecoveryIntegrationSingleNodeTest failure 68/58368/1
Tom Pantelis [Mon, 5 Jun 2017 18:56:45 +0000 (14:56 -0400)]
Fix RecoveryIntegrationSingleNodeTest failure

The InMemoryJournal may not have received all the persisted messages
by the time the test checks the expected size of the journal, so
added a latch/wait for the expected messages.

Change-Id: I8f050b9375f5e3e74749c17e831add21d09d1831
Signed-off-by: Tom Pantelis <>
(cherry picked from commit 75a222ffd227bb9f2aeef8ee7998f8df6f529bf8)

BUG-8494: do not attempt to reconnect ReconnectingClientConnection 94/58194/3
Robert Varga [Sat, 3 Jun 2017 02:16:00 +0000 (04:16 +0200)]
BUG-8494: do not attempt to reconnect ReconnectingClientConnection

If we are in reconnect state, we should never attempt to initiate
reconnection, as that would leave us without the timer running --
which is a problem since we need to be timing out requests which
are queued even as we are attempting to reconnect to the backend.

Change-Id: Ic955a2e5b743617c26cc72815df94d0c4584704c
Signed-off-by: Robert Varga <>
BUG-8403: fix DONE state propagation 74/58274/1
Robert Varga [Mon, 5 Jun 2017 17:50:52 +0000 (19:50 +0200)]
BUG-8403: fix DONE state propagation

The DONE state detection logic in replayMessages() was flawed, as
we checked the current state, which is guaranteed to be SuccessorState.

We should be checking the previous state, available from the successor
state. As it turns out we can do this very cleanly by setting the flag
when the successor state gets the previous state assigned.

This also has better performance, as we do not touch the volatile
field multiple times.

Change-Id: Ica2246160bf8fee7aa134bbacb45857235405f6a
Signed-off-by: Robert Varga <>
Replace LOGGER by LOG 72/57972/3
David Suarez [Mon, 29 May 2017 10:03:08 +0000 (12:03 +0200)]
Replace LOGGER by LOG

Replace LOGGER with LOG to follow the
OpenDaylight recommendations [1].


Change-Id: I024bcd5f23a5bdcc177440b175578694c6c471a4
Signed-off-by: David Suarez <>
(cherry picked from commit 202de5f104dc8149accab4ebf04c3ec8857edff2)

BUG-8494: fix throttling during reconnect 96/57996/6
Robert Varga [Mon, 29 May 2017 21:58:07 +0000 (23:58 +0200)]
BUG-8494: fix throttling during reconnect

ReconnectForwarder is called from two differing code paths: one is
during replay, when we are dealing with late requests (those which have
been waiting while we were replaying); the other is subsequent user
requests.

The first should not wait on the queue, as those requests have already
entered it and hence have paid the cost of entry. The latter needs
to pay for entering the queue, as otherwise we do not exert backpressure.

This patch differentiates the two code paths so they behave as they
should. It also adds more debug information in timer paths.
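
The distinction can be sketched as follows (names and the delay
calculation are illustrative, not the real ReconnectForwarder or
TransmitQueue API): replayed requests already paid the backpressure
cost when they first entered the queue, so only genuinely new requests
are delayed.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.LockSupport;

// Sketch of the two enqueue paths: replay re-admits for free, user
// requests pay for entry when the queue is congested.
public class ForwardQueue<T> {
    private final Deque<T> queue = new ArrayDeque<>();
    private final int capacity;

    public ForwardQueue(int capacity) {
        this.capacity = capacity;
    }

    // Replay path: the request was already queued once; re-admit it for free.
    public void replayEntry(T request) {
        queue.addLast(request);
    }

    // User path: new requests pay for entry, exerting backpressure.
    public long enqueueEntry(T request) {
        long delayNanos = 0;
        if (queue.size() >= capacity) {
            delayNanos = TimeUnit.MILLISECONDS.toNanos(1); // stand-in for the real delay curve
            LockSupport.parkNanos(delayNanos);
        }
        queue.addLast(request);
        return delayNanos;
    }

    public static void main(String[] args) {
        ForwardQueue<String> queue = new ForwardQueue<>(1);
        queue.replayEntry("replayed-read");                      // free: cost already paid
        System.out.println(queue.enqueueEntry("new-write") > 0); // true: had to wait
    }
}
```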

Change-Id: I609be2332b13868ef1b9511399e2827d7f3d5b7d
Signed-off-by: Robert Varga <>
BUG-8403: propagate DONE state to successor 94/57994/7
Robert Varga [Mon, 29 May 2017 20:53:53 +0000 (22:53 +0200)]
BUG-8403: propagate DONE state to successor

We need correct accounting for DONE non-standalone local transactions,
as such transactions do not interact with open/closed semantics.

Propagate DONE via a simple flag, which we check in local ProxyHistory
and create a proxy without a backing modification.

Change-Id: Ie921db8c9e40f30934c119b74c31ca5418b61548
Signed-off-by: Robert Varga <>
BUG-8403: go through the DONE transition 76/58076/3
Robert Varga [Wed, 31 May 2017 17:49:52 +0000 (19:49 +0200)]
BUG-8403: go through the DONE transition

Third step of the fix: make AbstractProxyTransaction go through
the DONE state before retiring. This also ends up fixing breakage
in local chain transactions, which could end up leaking because we
never go back to just using the base data tree.

Change-Id: I97ac1687eaf3ecd8f46a68c6170891ea06703e95
Signed-off-by: Robert Varga <>
BUG-8403: add state documentation and DONE state 34/58034/6
Robert Varga [Tue, 30 May 2017 17:19:42 +0000 (19:19 +0200)]
BUG-8403: add state documentation and DONE state

Second step of the fix: clarify AbstractProxyTransaction states and
their transitions. Introduce a DONE state which we will use to close
the replay state race window.

Change-Id: I82e47103f2cd9b8ec496b72803b5d5e56d33c0f5
Signed-off-by: Robert Varga <>
BUG-8403: move successor allocation to AbstractProxyTransaction 93/57993/5
Robert Varga [Tue, 30 May 2017 17:35:17 +0000 (19:35 +0200)]
BUG-8403: move successor allocation to AbstractProxyTransaction

We still have a tiny race window where we do not correctly handle
reconnection, leading to an ISE splat -- this time in the final
stage of transaction completion.

The problem is that a reconnect attempt is happening while we are
waiting for the transaction purge to complete. At this point the
transaction has been completely resolved and the only remaining
task is to inform the backend that we have received all of the
state and hence it can throw it out (we will remove our state once
purge completes).

We are still allocating a live transaction in the local history
and the purge request replay does not logically close it, leading
to the splat.

To fix this, we really need to allocate a non-open tx successor,
which will not trip the successor local history. All the required
knowledge already resides in AbstractProxyHistory, except we do
not have a dedicated 'purging' state.

This makes the first step of moving the allocation caller to the
appropriate place.

Change-Id: If82957019b478f4d5132edda4d38e6bc026aa0ab
Signed-off-by: Robert Varga <>
BUG-8494: Cap queue sleep time 33/58033/2
Robert Varga [Tue, 30 May 2017 09:41:58 +0000 (11:41 +0200)]
BUG-8494: Cap queue sleep time

Inconsistency in transmit queue backpressure has caused an observed
delay of over 44 minutes in CSIT. While this is an artefact of a bug,
we should not delay the application thread for large periods of time,
as that can cause failures in the application (if it has other
time-bound duties like BGP keepalive messages and similar).

Restrict the sleep time to 5 seconds, emitting an INFO-level message
when we reach this level of sleeping. This should typically not
occur, as the backpressure should kick in much sooner, smoothing
out the delay curve.
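
A sketch of the cap (only the 5 second figure comes from the text;
class and method names and the log format are illustrative):

```java
import java.util.concurrent.TimeUnit;

// Sketch: never let the backpressure sleep exceed 5 seconds, and note
// loudly when the cap is hit, since backpressure should have kicked in
// much sooner.
public final class SleepCap {
    private static final long MAX_SLEEP_NANOS = TimeUnit.SECONDS.toNanos(5);

    public static long capDelay(long requestedNanos) {
        if (requestedNanos > MAX_SLEEP_NANOS) {
            // Stand-in for the INFO-level log mentioned above.
            System.out.println("INFO: capping delay of " + requestedNanos + " ns to 5 s");
            return MAX_SLEEP_NANOS;
        }
        return requestedNanos;
    }

    public static void main(String[] args) {
        // A 44-minute delay, as observed in CSIT, gets capped to 5 seconds.
        System.out.println(capDelay(TimeUnit.MINUTES.toNanos(44)));
    }
}
```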

Change-Id: Ie5f148248caa71791bdda71ddd7e33e5733aa7f8
Signed-off-by: Robert Varga <>
Bug 8446 - Increase timeout in leadership transfer 98/57698/4
Jakub Morvay [Tue, 23 May 2017 12:06:48 +0000 (14:06 +0200)]
Bug 8446 - Increase timeout in leadership transfer

Change-Id: Iffd66ef2c771b797b236f23c39b1fb87b5a27c89
Signed-off-by: Jakub Morvay <>
Cleanup time access 58/57758/4
Robert Varga [Wed, 24 May 2017 09:44:30 +0000 (11:44 +0200)]
Cleanup time access

ShardDataTree does not need to expose the ticker, just a readTime()
method. This makes its users slightly more readable.

Change-Id: I9aa72a2d3625f40a2a44b0838ff344437293e1e3
Signed-off-by: Robert Varga <>
BUG-8515: make sure we retry connection on NotLeaderException 51/57951/1
Robert Varga [Mon, 29 May 2017 08:40:06 +0000 (10:40 +0200)]
BUG-8515: make sure we retry connection on NotLeaderException

There is a race window when we are establishing a connection to the
shard leader.

When we receive the pointer to the shard leader, we send a connect
request, but during that time window the leader may move, resulting
in a NotLeaderException response to ConnectClientRequest. Since
we are in reconnection mode, this results in a hard abort of the
connection.

Fix this by wrapping NotLeaderException and Akka failures in a
TimeoutException -- hence we will retry connecting.
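
The wrapping can be sketched like this (NotLeaderException is stubbed
locally and the method name is illustrative, not the actual client
code): a retriable TimeoutException triggers a new connect attempt
instead of a hard abort.

```java
import java.util.concurrent.TimeoutException;

// Sketch: wrap a non-retriable failure into the retriable TimeoutException
// while preserving the original cause for diagnostics.
public class ConnectRetry {
    static class NotLeaderException extends Exception {
        NotLeaderException(String message) {
            super(message);
        }
    }

    public static TimeoutException wrapAsRetriable(Throwable cause) {
        TimeoutException result =
            new TimeoutException("Connect attempt failed, will retry: " + cause.getMessage());
        result.initCause(cause); // keep the original failure attached
        return result;
    }

    public static void main(String[] args) {
        TimeoutException e = wrapAsRetriable(new NotLeaderException("leader moved"));
        System.out.println(e.getCause() instanceof NotLeaderException); // true
    }
}
```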

Change-Id: Ia5d1915d59e80a70c54302c1790121d0767ff08a
Signed-off-by: Robert Varga <>
BUG-8403: do not throttle purge requests 22/57822/3
Robert Varga [Thu, 25 May 2017 16:10:57 +0000 (18:10 +0200)]
BUG-8403: do not throttle purge requests

It seems we are getting stuck after replay on purge requests,
which are dispatched internally.

Make sure we do not use sendRequest() in obvious replay places,
nor for purge requests. Also add a debug upcall if we happen to
sleep for more than 100msec.

Change-Id: Iec667f2039610f3f036e6b88c7c7e7b773cdfc19
Signed-off-by: Robert Varga <>
BUG-8538: rework transaction abort paths 70/57770/12
Robert Varga [Wed, 24 May 2017 12:42:04 +0000 (14:42 +0200)]
BUG-8538: rework transaction abort paths

Direct transaction abort path can end up touching proxy history's
maps, which it should not, as that happens only after purge. This
inconsistency has cropped up when purge was introduced.

Refactor the methods so that cohorts are removed only after purge,
and fix abort request routing such that it always enqueues a purge
request (possibly via successor). This also addresses a FIXME, as
we now have an enqueueAbort() request, which does not wait on the
queue.
Change-Id: Ie291da70ace772274f33505db376a915b38e37c0
Signed-off-by: Robert Varga <>
BUG-8538: do not invoke read callbacks during replay. 59/57759/12
Robert Varga [Wed, 24 May 2017 10:01:20 +0000 (12:01 +0200)]
BUG-8538: do not invoke read callbacks during replay.

As evidenced by a ConcurrentModificationException happening reliably
in the face of aborted read-only transactions, there are avenues by
which our state can be modified even though we hold the locks.

One such avenue is listeners hanging off read operations, which
can enqueue further requests in the context of the calling thread.
That thread must not be performing replay, hence delay request
completion into a separate actor message by using executeInActor().

Change-Id: Ibcd0ac788156011ec3a4cc573dc7fb249ebf93a2
Signed-off-by: Robert Varga <>
BUG-8371: Respond to CreateLocalHistoryRequest after replication 51/57751/4
Robert Varga [Wed, 24 May 2017 09:36:18 +0000 (11:36 +0200)]
BUG-8371: Respond to CreateLocalHistoryRequest after replication

CreateLocalHistoryRequest needs to be replicated to followers before
we respond to the frontend, as logically this request has to be
persisted before any subsequent transactions.

While the frontend could replay the request on reconnect, it would
also have to track the implied persistence (via child transactions),
which we do not want because it really is a backend detail and it
would lead to a lot of complexity in the frontend.

Change-Id: Icdfad59d3c2bab3d4125186c6a9b3c901d3934f6
Signed-off-by: Robert Varga <>
BUG-8540: suppress ConnectingClientConnection backend timeout 22/57722/5
Robert Varga [Tue, 23 May 2017 17:42:57 +0000 (19:42 +0200)]
BUG-8540: suppress ConnectingClientConnection backend timeout

While a ClientConnection is in the initial connect state we do not
want the timer to attempt to reconnect it, as we are already trying
hard to connect it. Suppress that attempt by faking backend silent
ticks to be 0.

Change-Id: Iaf554632a56fd5be1d417d6806462edf3c746526
Signed-off-by: Robert Varga <>
BUG 8525 Listeners not getting triggered from followers 63/57763/4
Tomas Cere [Wed, 24 May 2017 12:09:53 +0000 (14:09 +0200)]
BUG 8525 Listeners not getting triggered from followers

This is an oversight in the dtcl implementation of the lowlevel
model. However, we also need to change the proxy listener that's
registered from the new sharding APIs, as there is no way
for the user to specify the cluster interface since the mdsal
APIs are required.

Change-Id: I41c02a45d1db9eb9ed8c6e63dff99da567829d2f
Signed-off-by: Tomas Cere <>
Do not retain initial SchemaContext 61/56561/16
Robert Varga [Thu, 4 May 2017 21:52:57 +0000 (23:52 +0200)]
Do not retain initial SchemaContext

While looking over a memory dump I have noticed that we retain
SchemaContext inside Shard$Builder, which is being retained via
Props (which are used to restart the actor).

This reference is not updated as the SchemaContext is updated, which
means we are wasting memory and are causing Shard to come up with
an ancient SchemaContext after a failure.

Fix this by having an AtomicReference holder for SchemaContext
and giving Shard a Supplier<SchemaContext>.
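
The holder arrangement can be sketched with the SchemaContext stubbed
as a String (class names are illustrative): Props capture only the
stable holder, so a restarted actor reads the current schema through
the Supplier instead of a stale snapshot.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Sketch: one shared AtomicReference; the actor holds only a Supplier
// view, which always reflects the latest update.
public class SchemaHolder {
    private final AtomicReference<String> current = new AtomicReference<>();

    public void update(String schemaContext) {
        current.set(schemaContext);
    }

    public Supplier<String> asSupplier() {
        return current::get;
    }

    public static void main(String[] args) {
        SchemaHolder holder = new SchemaHolder();
        Supplier<String> view = holder.asSupplier(); // what the restarted actor would hold
        holder.update("schema-v1");
        holder.update("schema-v2");
        System.out.println(view.get()); // schema-v2, not an ancient context
    }
}
```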

Change-Id: I73fcae46f249d3679522eb7dbbb059e43c5af6c7
Signed-off-by: Robert Varga <>
BUG-8402: correctly propagate read-only bit 92/57692/3
Robert Varga [Tue, 23 May 2017 12:56:37 +0000 (14:56 +0200)]
BUG-8402: correctly propagate read-only bit

During replay we substitute read requests with an IncrementSequence
request, but that does not indicate whether the transaction state
should be read-only.

This leads to transaction chains allocating a full-blown transaction
instead of a snapshot, hence follow-up transactions fail to allocate,
leading to OutOfOrderRequestException.

Fix this by making IncrementTransactionSequenceRequest a subclass
of AbstractReadTransactionRequest so it carries isSnapshotOnly().

Change-Id: Ifdb6214478aa7548d3bc1f06b532e06c93b3dd0b
Signed-off-by: Robert Varga <>
BUG 8402: Close readonly tx 81/57681/4
Tomas Cere [Tue, 23 May 2017 12:09:34 +0000 (14:09 +0200)]
BUG 8402: Close readonly tx

This transaction is only used for an existence check of the
default prefix shard configuration and needs to be closed
once we are done with it.

Change-Id: I8d7c06e7e3ce58cb91713dac14744c411ec1bf5f
Signed-off-by: Tomas Cere <>
BUG 8318: Add section for remoting transport-failure-detector 57/57357/4
Tomas Cere [Thu, 18 May 2017 14:00:54 +0000 (16:00 +0200)]
BUG 8318: Add section for remoting transport-failure-detector

Similar to the separate dispatcher for cluster, we might also
trip a false positive in remoting, so add this section so we can
modify the parameter in CSIT.

Change-Id: I751fec044e2bf0f0d82badb2ea7d581b3374ac4a
Signed-off-by: Tomas Cere <>
BUG 8525: Prevent NPE in test-app listeners 21/57621/3
Tomas Cere [Mon, 22 May 2017 11:22:46 +0000 (13:22 +0200)]
BUG 8525: Prevent NPE in test-app listeners

Prevents the NPE thrown when the listeners didn't
receive any notifications.

Change-Id: I0d774913a15b4341abce779c64d6ee8f75d6a0e1
Signed-off-by: Tomas Cere <>
BUG 8403 Timeout writetransactions on initial ensure 23/57423/3
Tomas Cere [Fri, 19 May 2017 12:29:33 +0000 (14:29 +0200)]
BUG 8403 Timeout writetransactions on initial ensure

This stage can get stuck as well, and if the submit is never timed
out from the backend as a result of a bug, it will never complete.

Change-Id: Ia424d009cd201e3f03a13af88c35b1390b40cbee
Signed-off-by: Tomas Cere <>
BUG-8403: raise misordered request log message 99/57599/3
Robert Varga [Mon, 22 May 2017 08:17:15 +0000 (10:17 +0200)]
BUG-8403: raise misordered request log message

This error seems to occur intermittently; raise the message to
a warning.

Change-Id: Ia749a9ac17fa75ef26fe7a2963fa9ea3a0b35731
Signed-off-by: Robert Varga <>
BUG-8371: raise unknown history log to warn 98/57598/3
Robert Varga [Mon, 22 May 2017 08:11:51 +0000 (10:11 +0200)]
BUG-8371: raise unknown history log to warn

This error seems to be happening quite often; raise it to a warning
so we understand what request is triggering it.

Change-Id: If357325787f5c859a46af9286c86c0e9934909cb
Signed-off-by: Robert Varga <>
Don't use File(In/Out)putStream in FileBackedOutputStream 87/56987/2
Tom Pantelis [Fri, 12 May 2017 15:13:19 +0000 (11:13 -0400)]
Don't use File(In/Out)putStream in FileBackedOutputStream

As per
FileInputStream/FileOutputStream can incur unnecessary GC overhead due to
finalize. Use Files.newInputStream and Files.newOutputStream instead.
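
A minimal round trip using the replacements (the class name is
illustrative): the NIO factory methods return streams without a
finalize() method, avoiding the GC overhead mentioned above.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: write and read a temp file via Files.newOutputStream and
// Files.newInputStream instead of new FileOutputStream/FileInputStream.
public class NoFinalizeStreams {
    public static byte[] roundTrip(byte[] data) throws Exception {
        Path tmp = Files.createTempFile("fbos", ".bin");
        try {
            try (OutputStream out = Files.newOutputStream(tmp)) { // not new FileOutputStream(...)
                out.write(data);
            }
            try (InputStream in = Files.newInputStream(tmp)) {    // not new FileInputStream(...)
                return in.readAllBytes();
            }
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(new byte[] { 1, 2, 3 }).length); // 3
    }
}
```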

Change-Id: Ic4130ba650fef312e82a5039e9e11a573bd9d406
Signed-off-by: Tom Pantelis <>
(cherry picked from commit 6ead1cc9d35cdb72f469f7a451df3597c60c1a93)

Bump versions by x.y.(z+1) 37/57837/1
jenkins-releng [Thu, 25 May 2017 18:18:07 +0000 (18:18 +0000)]
Bump versions by x.y.(z+1)

Change-Id: I9f723dafe81c487a0219b29d51a715d8a121891f
Signed-off-by: jenkins-releng <>
BUG-8507: Fix replayed directCommit() on reconnect 87/57387/12
Robert Varga [Thu, 18 May 2017 21:24:15 +0000 (23:24 +0200)]
BUG-8507: Fix replayed directCommit() on reconnect

After a remote shard reconnects following a brief isolation, we have
observed an NPE when faced with a direct commit.
Assuming state engine correctness, this can happen during the time
when we have completed preCommit and before we have recorded the
request result (i.e. after commit completes).

At any rate, this flushes out the need for transaction transitions
to be idempotent, which is something ShardDataTreeTransaction and
ShardDataTreeCohort do not provide.

Encapsulate FrontendReadWriteTransaction state into distinct state
objects. This allows us to accurately track the internal transaction
state and detect when a canCommit, directCommit, preCommit and
doCommit are no-ops because the request is being already handled.

Change-Id: Ib533ec9a4882f51f7914c5b11865ac093c6d6ad0
Signed-off-by: Robert Varga <>
BUG-8511: add more explicit messages 31/57431/3
Robert Varga [Fri, 19 May 2017 13:32:13 +0000 (15:32 +0200)]
BUG-8511: add more explicit messages

This adds more defensive handling of connections and locking,
even if it should not strictly be necessary, as we are using
atomic operations and run on the actor thread. This makes the
transitions work even in the face of actor context leakage.

Change-Id: I26df0f208d63b861a0f3d3dc3c0f1959bbc79e90
Signed-off-by: Robert Varga <>
BUG-8403: guard against ConcurrentModificationException 38/57438/1
Robert Varga [Fri, 19 May 2017 15:21:42 +0000 (17:21 +0200)]
BUG-8403: guard against ConcurrentModificationException

Using TransmitQueue.asIterable() offers the slight advantage of not
dealing with a big list, but exposes us to the risk of the Iterable
being changed underneath us.

The point missed by the fix to BUG 8491 is that there is an avenue
for the old connection to be touched during replay, as we are
completing entries, for example reads when we are switching from
remote to local connection. In this case the callback will be invoked
in the actor thread, with all the locks being reentrant and held,
hence it can break through to the old connection's queue.

If that happens we will see a ConcurrentModificationException and
enter a buggy territory, where the client fails to work properly.

Document this caveat and turn asIterable() into drain(), which
removes all the entries in the queue, allowing new entries to be
enqueued. The late-comer entries are accounted for when we set the
forwarder.
Change-Id: Idf29c1e565e12aaed917ac94c21c552daf169d4d
Signed-off-by: Robert Varga <>
BUG-8452: make NoShardLeaderException retriable 74/57074/13
Robert Varga [Mon, 15 May 2017 14:56:14 +0000 (16:56 +0200)]
BUG-8452: make NoShardLeaderException retriable

We can recover from this exception by retrying the connection to
the backend. Wrap it in a TimeoutException, which will cause a new
connection attempt.

Change-Id: I1d5c771fdb89cbdd7723e0425542154a1ed85853
Signed-off-by: Robert Varga <>
BUG-8491: Remove requests as they are replayed 15/57315/9
Robert Varga [Wed, 17 May 2017 21:39:55 +0000 (23:39 +0200)]
BUG-8491: Remove requests as they are replayed

We should not be seeing any messages just after we have finished
message replay, as the queue is still locked and we should have
accounted for all messages by removing them from the queue.

Change-Id: I47396b4705e048460934538acc470468a0a6285d
Signed-off-by: Robert Varga <>
Signed-off-by: Tomas Cere <>
BUG 8462: Switch to using cds-client in usubscribe-ddtl 52/57352/2
Tomas Cere [Thu, 18 May 2017 08:34:25 +0000 (10:34 +0200)]
BUG 8462: Switch to using cds-client in usubscribe-ddtl

The initial notification seemed iffy when the leader was moving,
so switch the final data consistency check to cds-client's read,
which also makes this more consistent with unsubscribe-dtcl.

Change-Id: Ia23da11a5bda33925ee6ba911d2794f666a17a94
Signed-off-by: Tomas Cere <>
BUG-8402: fix sequencing with read/exists requests 28/57028/17
Robert Varga [Sun, 14 May 2017 18:36:09 +0000 (20:36 +0200)]
BUG-8402: fix sequencing with read/exists requests

When replaying successful requests, we do not issue read and exists
requests, as they have already been satisfied, but account for their
sequence numbers.

This does not work in the case where we have a remote connection,
the first request on a transaction is a read, and after it is
satisfied subsequent requests are replayed to a different backend.
Since the initial request is not replayed, but subsequent requests
account for it and the backend has no prior knowledge of the
transaction, it sees an initial request with sequence != 0, and
rejects all requests with an OutOfOrderRequestException.

Fix this by introducing IncrementTransactionSequenceRequest, which
the frontend enqueues as the first request instead of the initial
read/exists request -- introducing the transaction to the backend.

Change-Id: Ia0f048e33d417e1fdc8d15bf319d6b8b33c2b1b1
Signed-off-by: Robert Varga <>
BUG-8402: Record modification failures 93/57293/6
Robert Varga [Wed, 17 May 2017 14:56:57 +0000 (16:56 +0200)]
BUG-8402: Record modification failures

When a modification fails to apply, we must record the resulting
failure, as we have partially applied the state and hence must
never attempt to apply it again, even if the client retransmits
the request.

Furthermore we should stop responding to any subsequent requests
including reads, as our responses are not accurate anyway (and the
requests may have been enqueued before the client saw the failure).

Enqueue the failure and respond to all subsequent requests with it,
forcing the transaction to fail the canCommit() phase.

Change-Id: I1d25f1b3a688e02f8a69f54f22a5d6d2dd43339c
Signed-off-by: Robert Varga <>
BUG-8402: Separate out OutOfOrderRequestException 89/57289/4
Robert Varga [Wed, 17 May 2017 14:19:04 +0000 (16:19 +0200)]
BUG-8402: Separate out OutOfOrderRequestException

OutOfOrderRequestException is used for two distinct cases, which is
a mixup introduced during a refactor.

The first case is when an envelope's sequence does not match the
sequence we are expecting on a connection. This is a retriable
exception and happens due to mailbox queueing during leadership
changes:
- a FE sees us as a leader, sends requests
- we become a follower, we reject a few requests
- we become a leader, at which point we must not process requests
  until the FE reconnects, as we would not be processing them in
  the correct order.

The second case is when we receive a Request with an unexpected
sequence. This is a hard error, as it indicates that the client
has made a mistake and lost a request (like the case fixed in

Separate these two cases out by introducing
OutOfSequenceEnvelopeException and handle it by initiating a session
reconnect.
Change-Id: Ifb0bac41ff2efd6385455fd9c77b8b39054dd4a0
Signed-off-by: Robert Varga <>
BUG-8422: separate retry and request timeouts 74/56874/12
Robert Varga [Thu, 11 May 2017 14:54:22 +0000 (16:54 +0200)]
BUG-8422: separate retry and request timeouts

This patch corrects a thinko around request timeouts, where we
were reconnecting the connection based on the request timeout
rather than the 'try' timeout.

The difference between the two is that the 'try' timeout is the
period we allow the backend to respond to our request; when it
does not, we reconnect the connection.

Change-Id: I8c00a80e5c26c5b829056c43fe78a0567041bc5e
Signed-off-by: Robert Varga <>
Signed-off-by: Tomas Cere <>
BUG-8402: fix transmit accounting 52/57152/6
Robert Varga [Tue, 16 May 2017 14:49:42 +0000 (16:49 +0200)]
BUG-8402: fix transmit accounting

CSIT has shown that during burst activity and leader movement
we can lose track of messages and the requests can arrive misordered.

As it turns out TransmitQueue.complete() transmit-on-response code
path fails to properly move the request to the in-flight queue.

Furthermore, opportunistic sending in TransmitQueue.enqueue() could
cause message reordering if for some reason we have pending requests
and an available transmit slot.

Fix this by sharing the codepaths and making TransmitQueue.enqueue()
check pending queue emptiness.
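The ordering rule can be sketched as follows; this is a simplified stand-in, not the actual TransmitQueue implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the invariant: an entry may be transmitted immediately only when
// nothing is already pending, otherwise it must join the back of the pending
// queue -- anything else would let it overtake earlier requests.
public class TransmitQueueSketch {
    private final Queue<String> pending = new ArrayDeque<>();
    private final Queue<String> inflight = new ArrayDeque<>();
    private final int transmitSlots;

    public TransmitQueueSketch(int transmitSlots) {
        this.transmitSlots = transmitSlots;
    }

    public void enqueue(String request) {
        // Opportunistic transmit is only safe when the pending queue is empty.
        if (pending.isEmpty() && inflight.size() < transmitSlots) {
            inflight.add(request);
        } else {
            pending.add(request);
        }
    }

    public String complete() {
        String done = inflight.poll();
        // Transmit-on-response: the next pending request must be moved to the
        // in-flight queue, or its response could never be accounted for.
        String next = pending.poll();
        if (next != null) {
            inflight.add(next);
        }
        return done;
    }

    public Queue<String> inflight() {
        return inflight;
    }
}
```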

Change-Id: I2daf3d8b198e83c6f50f4a2f43b9e4c3cc091187
Signed-off-by: Robert Varga <>
4 years agoBUG 8422: Change tx handlers hard timeout 32/57132/3
Tomas Cere [Tue, 16 May 2017 10:40:35 +0000 (12:40 +0200)]
BUG 8422: Change tx handlers hard timeout

This makes write-transactions/produce-transactions return an
RpcError upon reaching 2 minutes of waiting after the last
transaction is submitted, in case the transactions aren't timed out
by the frontend.

Also close producer when the initial write into id-ints list fails.

Change-Id: I20abbd02ed14e16d9e9a49f935113c0044e7c6d8
Signed-off-by: Tomas Cere <>
4 years agoBUG-8159: apply object lifecycle to metadata 38/57138/3
Robert Varga [Tue, 16 May 2017 11:20:41 +0000 (13:20 +0200)]
BUG-8159: apply object lifecycle to metadata

In the leader role, ShardDataTree needs to maintain its own view of
the metadata that is present in the journal, otherwise snapshots
do not contain an accurate view, nor can the shard transition to
follower while retaining correct state.

The initial idea was that this would be maintained in the replication
callbacks, but that is not really feasible, as it would spread the
code to different codepaths with the possibility of missed updates.

This patch centralizes metadata updates in payloadReplicationComplete(),
performing them unconditionally. Callbacks registered with
replicatePayload() are then used only for hooking in further events,
like sending messages to the frontend.
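The pattern can be sketched as follows; the types are simplified stand-ins, and only the method name payloadReplicationComplete comes from this message:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch: metadata is updated unconditionally whenever a payload completes
// replication, so leader and follower roles always see the same state. The
// optional callback only hooks further events, e.g. notifying the frontend.
public class ReplicationSketch {
    final Map<String, String> metadata = new HashMap<>();

    void payloadReplicationComplete(String key, String value, Consumer<String> callback) {
        metadata.put(key, value);   // unconditional metadata update
        if (callback != null) {
            callback.accept(key);   // optional follow-up event
        }
    }
}
```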

Change-Id: I2b3de068589f03fe988f11138436a4ec225e357e
Signed-off-by: Robert Varga <>
4 years agoBUG 8447: Add shard getRole rpcs 31/57131/2
Tomas Cere [Tue, 16 May 2017 10:30:48 +0000 (12:30 +0200)]
BUG 8447: Add shard getRole rpcs

These are added to get around Jolokia, which sometimes seems to
take a very long time to produce a response, so that we have a way
to find out the current shard role by talking directly to the
ShardManager.

Change-Id: I18b98988fc9fab26513544c129e5063e87affede
Signed-off-by: Tomas Cere <>
4 years agoBug 8444 - Persistent prefix-based shard cannot load its snapshot 75/57075/5
Jakub Morvay [Mon, 15 May 2017 15:13:53 +0000 (17:13 +0200)]
Bug 8444 - Persistent prefix-based shard cannot load its snapshot

Since the name is URL-encoded, we have to make sure it does not get
double-encoded -- hence we need to make a pass of URL-decoding before
we use the result.
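The double-encoding hazard can be illustrated with the JDK's URL codec; the normalize helper below is invented for the example:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Encoding an already-encoded name yields a different string ("a%2Fb"
// becomes "a%252Fb"), so a decode pass is needed before re-encoding.
public class EncodeOnce {
    public static String normalize(String maybeEncoded) {
        // Decode first so a pre-encoded input is not encoded a second time.
        String decoded = URLDecoder.decode(maybeEncoded, StandardCharsets.UTF_8);
        return URLEncoder.encode(decoded, StandardCharsets.UTF_8);
    }
}
```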

Change-Id: I20fe8702ad7e405a8b68d8bda2f9ce4522f2dfd0
Signed-off-by: Jakub Morvay <>
4 years agoFix logging format/argument mismatch 61/57061/2
Robert Varga [Mon, 15 May 2017 13:10:26 +0000 (15:10 +0200)]
Fix logging format/argument mismatch

Two debug sites fail to pass down shardName, leading to malformed
log messages.

Change-Id: I5521539c54c2e1f7ef5ef25d9a47fbc6d6d0a27c
Signed-off-by: Robert Varga <>
4 years agoBUG-8056: place an upper bound on number of transactions processed 48/56948/2
Robert Varga [Fri, 12 May 2017 13:24:07 +0000 (15:24 +0200)]
BUG-8056: place an upper bound on number of transactions processed

When transactions complete their preCommit step immediately we end
up scheduling the next transaction immediately in the call stack,
hence if that completes immediately we end up eating away our stack
until we hit StackOverflowError.

Limit the number of transactions we process as a reaction to a single
message so that stack usage is under control. Should we hit this
limit, schedule a continuation, which will deal with the rest of
the transactions.
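A sketch of the bounding technique, using a made-up BoundedProcessor in place of the actual shard actor:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative only: after MAX_PER_MESSAGE completions we stop and schedule
// a continuation instead of processing further work in the same call stack.
public class BoundedProcessor {
    static final int MAX_PER_MESSAGE = 3;
    final Queue<Runnable> ready = new ArrayDeque<>();
    int continuationsScheduled;

    void onMessage() {
        int processed = 0;
        while (!ready.isEmpty()) {
            if (processed == MAX_PER_MESSAGE) {
                // Too much work for one message: re-schedule ourselves rather
                // than growing the call stack toward StackOverflowError.
                scheduleContinuation();
                return;
            }
            ready.poll().run();
            processed++;
        }
    }

    void scheduleContinuation() {
        continuationsScheduled++;
        // In a real actor this would be a message to self; invoked directly
        // here only to keep the sketch self-contained.
        onMessage();
    }
}
```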

Change-Id: Iad2812c823bd8e91ad45020ac50f6a8626654afb
Signed-off-by: Robert Varga <>
4 years agoBUG-8422: Propagate enqueue time 47/56747/7
Robert Varga [Tue, 9 May 2017 21:55:31 +0000 (23:55 +0200)]
BUG-8422: Propagate enqueue time

When we are replaying requests onto a connection we really want
to leave their enqueue times intact, so they time out properly.
This codepath is specific for the replay case, hence we do not
want to incur any waiting, either.

This patch introduces enqueueRequest() which does not wait for
the queue duration and audits code paths so they end up talking
to the right method -- either enqueueRequest() or sendRequest().
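The distinction can be sketched as follows; the Entry type and tick-based timestamps are invented, while the method names follow this message:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: sendRequest() stamps a fresh enqueue time, while enqueueRequest()
// preserves the time recorded on the old connection so a replayed request's
// timeout budget keeps running from its original enqueue.
public class ConnectionSketch {
    static final class Entry {
        final String request;
        final long enqueuedTicks;

        Entry(String request, long enqueuedTicks) {
            this.request = request;
            this.enqueuedTicks = enqueuedTicks;
        }
    }

    private final Queue<Entry> queue = new ArrayDeque<>();

    // Normal path: the request starts its timeout clock now.
    void sendRequest(String request, long nowTicks) {
        queue.add(new Entry(request, nowTicks));
    }

    // Replay path: keep the original enqueue time so it times out properly.
    void enqueueRequest(String request, long originalEnqueuedTicks) {
        queue.add(new Entry(request, originalEnqueuedTicks));
    }

    Queue<Entry> queue() {
        return queue;
    }
}
```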

Change-Id: Ibf97dcc11e32d9ffa911c78ccf0448d6891a9cac
Signed-off-by: Robert Varga <>
4 years agoBUG 8318: Add a separate dispatcher for cluster actors 28/56928/4
Tomas Cere [Fri, 12 May 2017 09:59:55 +0000 (11:59 +0200)]
BUG 8318: Add a separate dispatcher for cluster actors

When the system is under load it seems like there can
be a missed heartbeat leading to false positives for unreachable
nodes. Run the actors responsible for heartbeats on a separate
dispatcher to avoid this.
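For illustration, a representative Akka configuration for such a dedicated dispatcher might look like this; the names and sizing are assumptions, not the exact values from the patch:

```hocon
# Dedicated dispatcher for cluster/heartbeat actors so heavy load on the
# default dispatcher cannot starve failure-detector messages.
cluster-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 4
  }
}

# Classic Akka Cluster setting that moves cluster actors onto it.
akka.cluster.use-dispatcher = cluster-dispatcher
```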

Change-Id: Ib4f4225bf69e99d93e3c7010d6fbe1163b96a5a2
Signed-off-by: Tomas Cere <>
4 years agoFix testLeaderAndFollowerEntityOwnersReassignedAfterShutdown failure 09/56909/2
Tom Pantelis [Thu, 11 May 2017 11:32:39 +0000 (07:32 -0400)]
Fix testLeaderAndFollowerEntityOwnersReassignedAfterShutdown failure

14:19:15 Failed tests:
14:19:15   DistributedEntityOwnershipIntegrationTest.testLeaderAndFollowerEntityOwnersReassignedAfterShutdown:439->lambda$testLeaderAndFollowerEntityOwnersReassignedAfterShutdown$1:440 Raft state expected:<[Leader]> but was:<[Candidate]>

After the leader is shut down, member-2 is supposed to be elected leader
but, in this case, it didn't get the vote from member-5 b/c member-2 had
not yet received the MemberUp for member-5 and thus did not have its peer
actor address. So I made changes to ensure member data stores are ready
and members are up.

I also saw a failure where member-3 or member-5 didn't grant the vote for
member-2 b/c its last log index was greater. This can happen if member-2
was a bit behind when the leader was shut down. So I changed it to obtain
the leader's last index and verify the remaining followers' last log
index is up-to-date.

Change-Id: Ib5ad2e135bb3809e1c62a432a029a5a56109190d
Signed-off-by: Tom Pantelis <>
4 years agoBug 8424: Don't output data tree and tree candidates with debug 06/56906/1
Tom Pantelis [Thu, 11 May 2017 16:31:27 +0000 (12:31 -0400)]
Bug 8424: Don't output data tree and tree candidates with debug

Data trees and tree candidates can get quite large and fill up the
log files when debug is on and, in the worst case, cause OOM errors.
For debug logging, only print the whole tree/candidate with trace
logging. In cases where an Optional data tree is output via toString,
only indicate whether or not the data tree is present.

Change-Id: I6cb5f9a5da9e3cc3218c83bb103b673db0fb1d80
Signed-off-by: Tom Pantelis <>
(cherry picked from commit 5083d20c7f49788e64ca9080aebcded623d7aa44)

4 years agoBUG-8392 RpcRegistry has its buckets populated by unreachable nodes 83/56783/4
Tomas Cere [Wed, 10 May 2017 12:04:37 +0000 (14:04 +0200)]
BUG-8392 RpcRegistry has its buckets populated by unreachable nodes

In a situation when a member (f.ex. member-2) is isolated and the rpc
registrations are removed from the node (member-1), we can still have
our bucket store populated by buckets from the remaining node
(member-3), which might not have received the memberUnreachable
message yet, leading to stale routing of an rpc to
This patch adds bucket filtering based on the currently present peers
so that we only accept Buckets that we can see.
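The filtering rule can be sketched as follows, with simplified types standing in for the actual bucket and address types:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: only accept gossiped buckets whose owner is a currently visible
// (reachable) peer, so stale buckets from unreachable members are dropped.
public class BucketFilterSketch {
    static Map<String, String> filterBuckets(Map<String, String> received, Set<String> visiblePeers) {
        Map<String, String> accepted = new HashMap<>();
        for (Map.Entry<String, String> e : received.entrySet()) {
            if (visiblePeers.contains(e.getKey())) {
                accepted.put(e.getKey(), e.getValue());
            }
        }
        return accepted;
    }
}
```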

Change-Id: I92c1e063f4754aca829bd73df4518f859e1d8497
Signed-off-by: Tomas Cere <>
4 years agoBug 8385: Fix testMultipleRegistrationsAtOnePrefix failure 74/56774/2
Tom Pantelis [Tue, 9 May 2017 12:29:07 +0000 (08:29 -0400)]
Bug 8385: Fix testMultipleRegistrationsAtOnePrefix failure

The previous patch added a callback on the Future returned by
gracefulStop on shard removal. The timeout was set to 3 * election timeout
which is 30 s in production by default. For the tests the election
timeout is 500 ms so the timeout is 1500 ms. However, if the timing is right,
the leader may not be able to transfer leadership on shutdown if the other
member was already shut down. On shutdown there's a 2 sec wait to hear from
a new leader - this is greater than the 1500 ms shutdown timeout which
leads to test failure. To alleviate this, I made 10 s the minimum for the
shutdown timeout.

Another problem was that, after the stop future failed, the OnComplete
callback for PrefixShardCreated was repeated many times before the
OnComplete callback queued the message to remove the Future from the map.
To alleviate this, I added a CompositeOnComplete containing a list of
deferred OnComplete tasks. This allows the control to remove the entry
from the map before the deferred tasks run.

Change-Id: I899518e6d7e92533d2c4008a978ac772b02863cf
Signed-off-by: Tom Pantelis <>
(cherry picked from commit 6ef8a6b4e403d5908e7090a5bd387f81c10c91c6)

4 years agoFix testTransactionForwardedToLeaderAfterRetry failure 69/56769/1
Tom Pantelis [Wed, 10 May 2017 07:00:09 +0000 (03:00 -0400)]
Fix testTransactionForwardedToLeaderAfterRetry failure

java.util.concurrent.ExecutionException: ReadFailedException{message=Error executeRead ReadData for path /(urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom:store:test:cars?revision=2014-03-13)cars/car, errorList=[RpcError [message=Error executeRead ReadData for path /(urn:opendaylight:params:xml:ns:yang:controller:md:sal:dom:store:test:cars?revision=2014-03-13)cars/car, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, Shard member-1-shard-cars-testTransactionForwardedToLeaderAfterRetry currently has no leader. Try again later.]]}

The test submits transactions and deposes the current leader so it forwards the
pending transactions to the other member-2 that assumes leadership. However it calls


which may result in an untimely MemberExited message sent to the ShardManager that
clears the peer address, causing the FindPrimary message to fail to find the leader.
I'm not clear why this call was put in, but it's unnecessary and may cause a
failure if the timing is right.

I also saw a failure due to a timeout when forwarding a pending transaction. This is
b/c it takes some time for member-2 to switch to candidate and become leader due to
the checking of current leader availability via the akka cluster on ElectionTimeout.
If it takes too long the pending transaction forwarding may time out. To alleviate
this, I forced the switch to candidate by sending an immediate TimeoutNow message.

Change-Id: I2dd228964779e2b755b1740a518e2c400b5cb88d
Signed-off-by: Tom Pantelis <>
(cherry picked from commit e345c2a17f737d537cda45b0f737dff417e3b359)

4 years agoFix timing issue in PartitionedCandidateOnStartupElection*Test 99/56699/2
Tom Pantelis [Mon, 8 May 2017 16:49:51 +0000 (12:49 -0400)]
Fix timing issue in PartitionedCandidateOnStartupElection*Test

If the initial AppendEntries sent by the leader (member 1) to member 3
is delayed enough such that the behavior field in MemberActor is already
set by the test code, the AppendEntries message will be forwarded to the
Candidate behavior and yield incorrect results for the test. To prevent this,
we really shouldn't set and access the behavior field directly but instead
do so via messages to maintain actor encapsulation.

Change-Id: If497583ce648e62e3279e5abff19cb8702943c17
Signed-off-by: Tom Pantelis <>
4 years agoBug 8391 - Close producer in become-prefix-leader rpc implementation 98/56698/1
Jakub Morvay [Mon, 8 May 2017 11:58:57 +0000 (13:58 +0200)]
Bug 8391 - Close producer in become-prefix-leader rpc implementation

MdsalLowLevelTestProvider's become-prefix-leader rpc implementation
creates a CDSDataTreeProducer to try to move shard leadership. However,
the producer is not closed after the leadership change request. This
prevents any subsequent invocation of the become-prefix-leader rpc with
the same prefix parameter from succeeding. The subtree specified by the
prefix remains attached to the still-open producer, and creation of any
new producer for this subtree fails. Close the producer once we don't
need it.

Change-Id: I3827e425082c35a43ec18dac1ef0f2dbd19b291f
Signed-off-by: Jakub Morvay <>
4 years agoBUG-8372: fix AbstractProxyTransaction.replayMessages() 36/56636/6
Robert Varga [Fri, 5 May 2017 12:35:32 +0000 (14:35 +0200)]
BUG-8372: fix AbstractProxyTransaction.replayMessages()

This method made assumptions about what requests can be present in the
queue -- notably that local requests are never encountered. This is
not true, as local requests can be present here due to being in-flight
when a reconnect occurs.

Change-Id: Ia5b6ec442c014329046bf384a0f5ea97666a2c4a
Signed-off-by: Robert Varga <>
4 years agoBug 8385 - @Ignore testMultipleRegistrationsAtOnePrefix 60/56660/2
Jakub Morvay [Mon, 8 May 2017 08:01:29 +0000 (10:01 +0200)]
Bug 8385 - @Ignore testMultipleRegistrationsAtOnePrefix

is failing intermittently - set it to ignore for now.

Change-Id: I3e8aec2bfbe97559525051805170203574472aab
Signed-off-by: Jakub Morvay <>
4 years agoBUG-8372: fix abort message confusion 96/56596/3
Robert Varga [Fri, 5 May 2017 11:46:22 +0000 (13:46 +0200)]
BUG-8372: fix abort message confusion

Immediate transaction aborts need to use the appropriate message,
not 3PC's TransactionAbortRequest.

Change-Id: I9e25e3f20ed62fc520853685af17accef35c1bb4
Signed-off-by: Robert Varga <>
4 years agoBUG-8372: improve forward/replay naming 93/56593/3
Robert Varga [Fri, 5 May 2017 10:04:12 +0000 (12:04 +0200)]
BUG-8372: improve forward/replay naming

There is a bit of confusion between 'replay' and 'forward' methods.
They serve two distinct purposes:
- 'replay' happens during reconnect, i.e. for requests that have
           already entered the connection queue and have paid
           the delay cost, so they should not pay it again.
- 'forward' happens after reconnect for requests that have raced
            with the reconnect process, i.e. they need to hop from
            the old connection to the new one. These need to enter
            the queue and pay the delay cost.

This patch cleans the codepaths up to use consistent naming, making
it clearer that the problem we are seeing is in the 'replay' path.

Change-Id: Id854e09a0308f8d0a9144d59f41e31950cd58665
Signed-off-by: Robert Varga <>
4 years agoBug 8385: Fix testMultipleRegistrationsAtOnePrefix failures 34/56634/1
Tom Pantelis [Fri, 5 May 2017 10:41:59 +0000 (06:41 -0400)]
Bug 8385: Fix testMultipleRegistrationsAtOnePrefix failures

The test quickly creates/removes the prefix shard in iterations which
can result in an InvalidActorNameException if the shard actor from the prior
iteration hadn't been destroyed yet. To alleviate this, I modified the
removal in the ShardManager to utilize Patterns.gracefulStop, store the
Future, and block a subsequent create until the Future completes.

Change-Id: Ica98de3cc17c2d87195840bdf052d81ed3b9dd10
Signed-off-by: Tom Pantelis <>
4 years agoBUG-8372: add dataTree information 39/56539/4
Robert Varga [Thu, 4 May 2017 15:56:33 +0000 (17:56 +0200)]
BUG-8372: add dataTree information

We are attempting to send a local message to a remote actor,
which seems to be a mixup with data tree presence. Add dataTree
to toString() output so we know which connections resolve to
being local and which to being remote.

Change-Id: If1ed3cfdea24148456a4d310949fb480791c1ffa
Signed-off-by: Robert Varga <>
4 years agoMake the last submit timeout after 30 seconds 89/56589/3
Tomas Cere [Fri, 5 May 2017 10:11:17 +0000 (12:11 +0200)]
Make the last submit timeout after 30 seconds

The low-level test was waiting indefinitely for submits
to finish; change this to block and time out after one minute
in case there's an unrecoverable failure on the backend which
doesn't propagate to the frontend.

Change-Id: I3df2465b56c701c88341ab6cc7fa37a015f1c893
Signed-off-by: Tomas Cere <>
4 years agoFix DistributedShardedDOMDataTree.ProxyProducer's getShardAccess 99/56399/5
Jakub Morvay [Tue, 2 May 2017 16:54:52 +0000 (18:54 +0200)]
Fix DistributedShardedDOMDataTree.ProxyProducer's getShardAccess

DistributedShardedDOMDataTree.ProxyProducer's getShardAccess works only
for subtrees that are rooted at some registered prefix-based shard.
Moreover, the subtree has to be one of the subtrees specified in
DistributedShardedDOMDataTree's createProducer method.

This is way more strict than what is required by CDSDataTreeProducer's
API. Pass the current shard layout to ProxyProducer's implementation,
so the producer can look up the corresponding shard for a specified
subtree in the getShardAccess method. One-to-one mapping between shards
and subtrees is no longer required.

Change-Id: I765567d34c803a85b4be8a6e10fd81b6f64a1610
Signed-off-by: Jakub Morvay <>
4 years agoFix logger formatting strings 36/56536/2
Robert Varga [Thu, 4 May 2017 15:51:43 +0000 (17:51 +0200)]
Fix logger formatting strings

Fix %s/{} mixups.

Change-Id: I916996e17839a61802a83ddff31d162ac662f934
Signed-off-by: Robert Varga <>
4 years agoMove initial list creation to create-prefix-shard. 68/56468/3
Tomas Cere [Wed, 3 May 2017 13:21:00 +0000 (15:21 +0200)]
Move initial list creation to create-prefix-shard.

This moves the initial list population of produce-transactions
to the create-prefix-shard rpc with 3 hardcoded prefixes (prefix-1,
prefix-2, prefix-3) so that csit suites can populate the id-int list
just once when the shard is created, and produce-transactions can now
run in parallel on multiple entries from multiple nodes.

Change-Id: If70990c0e217cd68027ae960a7545c69acf52cdb
Signed-off-by: Tomas Cere <>
4 years agoFix Eclipse warnings in config-manager 09/56509/2
Robert Varga [Wed, 3 May 2017 13:34:13 +0000 (15:34 +0200)]
Fix Eclipse warnings in config-manager

Change-Id: I0ed9bc52d4cf4e5ee7a4da8bd53355191326cba6
Signed-off-by: Robert Varga <>
(cherry picked from commit 405c97ddf445b6057a2654545dd8072d28eedcce)

4 years agoBug 8328 - Create prefix shards with correct peers 77/56177/6
Jakub Morvay [Thu, 27 Apr 2017 15:21:42 +0000 (17:21 +0200)]
Bug 8328 - Create prefix shards with correct peers

Change-Id: I068b38bb275d23d27559aec3f336a6b9081fb732
Signed-off-by: Jakub Morvay <>
4 years agoBug 8336 - Fix NPE in DistributedShardedDOMDataTree's ProxyProducer 52/56352/3
Jakub Morvay [Tue, 2 May 2017 08:32:20 +0000 (10:32 +0200)]
Bug 8336 - Fix NPE in DistributedShardedDOMDataTree's ProxyProducer

Change-Id: If0060e6e2696674bc5418d2f2a80ad0d01327e29
Signed-off-by: Jakub Morvay <>
4 years agoBUG 8301: Convert queue to a local variable 56/56456/3
Tomas Cere [Wed, 3 May 2017 11:12:31 +0000 (13:12 +0200)]
BUG 8301: Convert queue to a local variable

There's a possibility that this might race and an
actor can have its queue overwritten by another thread, so convert
this to a local variable.

Change-Id: Ic84922c6d109d8361a48debbf971fddd9cee1d3e
Signed-off-by: Tomas Cere <>
4 years agoBug 8342: Add info logging to ConfigManagerActivator 06/56406/2
Tom Pantelis [Tue, 2 May 2017 19:16:44 +0000 (15:16 -0400)]
Bug 8342: Add info logging to ConfigManagerActivator

Change-Id: I7b01961910dd2ba7ed9a421ee52e0aec29c68ade
Signed-off-by: Tom Pantelis <>
4 years agoBUG-8342: force config-manager startup 37/56437/1
Robert Varga [Wed, 3 May 2017 08:51:32 +0000 (10:51 +0200)]
BUG-8342: force config-manager startup

config-manager needs to be pretty much the first thing that comes
up, for historical reasons. Assign it a low start level so it
activates before the blueprint extension.

Change-Id: I2d0a3706843409e8a22f9064f27e47cc0df46c95
Signed-off-by: Robert Varga <>
4 years agoDowngrade most info messages in benchmarks 18/56118/2
Vratko Polak [Wed, 26 Apr 2017 18:37:43 +0000 (20:37 +0200)]
Downgrade most info messages in benchmarks

They create spam during CSIT,
making real errors less noticeable in the log.

Change-Id: Icf00389526919751e88189ffef1be70e16e806e8
Signed-off-by: Vratko Polak <>
4 years agoBUG-8327: deprecate sal.core.api.model.SchemaService 05/56305/2
Robert Varga [Fri, 28 Apr 2017 15:03:32 +0000 (17:03 +0200)]
BUG-8327: deprecate sal.core.api.model.SchemaService

This interface is deprecated in favor of the DOMSchemaService
from the MD-SAL project.

Change-Id: Icff2cced791bc9fbf5bfadbe2f1cf2b949ff2d58
Signed-off-by: Robert Varga <>
(cherry picked from commit 84f6c81afe8c53307dc9be8c39824ca4e4191819)

4 years agoImprove test logging in DistributedEntityOwnershipIntegrationTest 04/56304/2
Tom Pantelis [Sun, 30 Apr 2017 02:44:26 +0000 (22:44 -0400)]
Improve test logging in DistributedEntityOwnershipIntegrationTest

Some of the tests in DistributedEntityOwnershipIntegrationTest set the
datastore type to "test" which isn't helpful in identifying the output
in jenkins log archives. Use the name of the test method instead as is
done with other tests.

Change-Id: I25e40df5139a4d9f8c46d03c0f2c9c8a52fd15ee
Signed-off-by: Tom Pantelis <>
(cherry picked from commit 56af1b2a45b4a567813de5314c31aaf26a2e4052)

4 years agoBUG-8327: GlobalBundleScanningSchemaServiceImpl should be a proxy 00/56300/2
Robert Varga [Fri, 28 Apr 2017 14:56:48 +0000 (16:56 +0200)]
BUG-8327: GlobalBundleScanningSchemaServiceImpl should be a proxy

We are currently running two separate services which assemble
the GlobalSchemaContext, which hurts our startup performance and
leads to wasted memory. This is an artefact of the mdsal split,
hence we should be getting the service from the MD-SAL and
just proxy to the old interfaces.

This lowers the startup time for

feature:install odl-restconf odl-bgpcep-bgp
    odl-bgpcep-data-change-counter odl-netconf-topology

from 86s down to 67s (22%). The final retained heap size is also
lowered from 217MiB to 181MiB (16%).

Change-Id: I549e9512538bd83d86cfd2164d03e34bc9130c1e
Signed-off-by: Robert Varga <>
4 years agoBug 8303: BP odl:clustered-app-config initial/*-config.xml testability 23/56123/3
Michael Vorburger [Tue, 25 Apr 2017 22:32:15 +0000 (00:32 +0200)]
Bug 8303: BP odl:clustered-app-config initial/*-config.xml testability

DataStoreAppConfigDefaultXMLReaderTest illustrates usage.

Change-Id: I342fca4583c90802238e63262871e33b4b713438
Signed-off-by: Michael Vorburger <>
(cherry picked from commit 821944277049bbb3949021626844ef7a80101f70)

4 years agoAdd more debug logging for DTCL registration/notification code paths 93/56293/1
Tom Pantelis [Thu, 27 Apr 2017 11:23:34 +0000 (07:23 -0400)]
Add more debug logging for DTCL registration/notification code paths

Added logging so the listener instance and actor can be traced
end-to-end from FE registration to the BE publisher actor.

Also added log context to some classes to identify which shard it
belongs to.

Change-Id: I1a26fb8775a57e0fc563eceec919d50395f4ceb1
Signed-off-by: Tom Pantelis <>
4 years agoBug 8337: Ignore testMultipleShardLevels 85/56285/1
Tom Pantelis [Sat, 29 Apr 2017 01:43:34 +0000 (21:43 -0400)]
Bug 8337: Ignore testMultipleShardLevels

DistributedShardedDOMDataTreeTest.testMultipleShardLevels is
failing intermittently - set it to ignore for now.

Change-Id: Ib7f86166fd85cd54e6ec8cac106c993e9407ffea
Signed-off-by: Tom Pantelis <>
4 years agoBug 8301: Fix some issues with testProducerRegistrations 58/56158/2
Tom Pantelis [Thu, 27 Apr 2017 10:45:11 +0000 (06:45 -0400)]
Bug 8301: Fix some issues with testProducerRegistrations

The LogicalDatastoreType.CONFIGURATION type was being used for both data
stores - modified the IntegrationTestKit to set the logicalStoreType

Fixed a synchronization issue in DistributedShardedDOMDataTree#lookupShardFrontend
where it accessed shards unprotected.

Change-Id: I628add86667e4a812f8e7516bac59f9b66fe4033
Signed-off-by: Tom Pantelis <>
(cherry picked from commit 64b9093c2c9ce670b92f2d0ea44c76dc9a385b5a)

4 years agoBUG-7927: stop scanning bundles on framework stop 35/56235/2
Robert Varga [Thu, 27 Apr 2017 15:36:37 +0000 (17:36 +0200)]
BUG-7927: stop scanning bundles on framework stop

Monitor framework bundle for STOPPING event and when it triggers
flag us as stopping: all bundles are about to shut down, so there
is no point in trying to update the schema context anymore.

Change-Id: I1a55169fce1705c19a139063cf632674fc256701
Signed-off-by: Robert Varga <>
(cherry picked from commit 6a79e55d2b6462cd609ab8cd5766fd4222c18c4f)

4 years agoFix read-only abort message mismatch 64/56064/5
Robert Varga [Wed, 26 Apr 2017 12:39:23 +0000 (14:39 +0200)]
Fix read-only abort message mismatch

Testing has revealed:

WARN  | FrontendReadOnlyTransaction | Rejecting unsupported request ModifyTransactionRequest{target=member-2-datastore-config-fe-0-txn-2-0, sequence=1, replyTo=Actor[akka.tcp://opendaylight-cluster-data@$a#585956314], operations=[], protocol=ABORT}

This is a thinko about which message does what:

TransactionAbortRequest is dedicated for 3PC doAbort phase, hence
it is never seen for read-only transactions.

The message corresponding to an abort is either
AbortLocalTransactionRequest or ModifyTransactionRequest with protocol
set to ABORT.

Change-Id: I3238ade7b9f7933e6538742354888d182f599412
Signed-off-by: Robert Varga <>
4 years agoPropagate shard name to FrontendClientMetadataBuilder 73/56073/5
Robert Varga [Wed, 26 Apr 2017 13:24:37 +0000 (15:24 +0200)]
Propagate shard name to FrontendClientMetadataBuilder

Prefixing log message with shard name is useful to track things
down. Pass the shard name down from FrontendMetadata, so we can
emit such messages.

Change-Id: Ie6a2cd218e1a2686f8cc14f67574f245e3de914b
Signed-off-by: Robert Varga <>