Remove DataChangeListener and friends. AsyncDataChangeEvent is kept for now, as ovsdb still uses it internally. JIRA: TSC-112 Change-Id: Ia68ac1cdf31dec3645f675442db14b7697d63b64 Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
Add more debug logging for DTCL registration/notification code paths Added logging so the listener instance and actor can be traced end-to-end from FE registration to the BE publisher actor. Also added log context to some classes to identify which shard each belongs to. Change-Id: I3e6dd92e7632139372407abf94a160096aa7750e Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
Bug 8231: Fix testChangeListenerRegistration failure As described in Bug 8231, the sharing of the ListenerTree between the ShardDataTree and the ShardDataTreeNotificationPublisherActor is problematic. Therefore the ListenerTree (wrapped by the DefaultShardDataTreeChangeListenerPublisher) is now owned by the ShardDataTreeNotificationPublisherActor. On registration, a RegisterListener message is sent to the ShardDataTreeNotificationPublisherActor to perform the on-boarding of the new listener, i.e. it atomically generates and sends the initial notification and then adds the listener to the ListenerTree. This change necessitated some refactoring of the DataChangeListenerSupport class and related classes with respect to how the ListenerRegistration is handled. Previously, the ListenerRegistration was passed on creation of the registration actor. This is now done indirectly by sending a SetRegistration message to the registration actor via a Consumer callback passed in the RegisterListener message. When the ListenerRegistration is obtained by the ShardDataChangePublisherActor, it invokes the Consumer callback. When a registration is initially delayed due to no leader, the DelayedListenerRegistration is sent to the registration actor. When the leader is elected later on, the actual ListenerRegistration is sent and replaces the DelayedListenerRegistration. The DOMDataTreeChangeListener registration classes were changed/refactored similarly. In addition, the two specific registration actor classes were replaced by a generic reusable DataTreeNotificationListenerRegistrationActor that handles both listener types. Also, the two CloseData*ListenerRegistration and CloseData*ListenerRegistrationReply messages were consolidated. Change-Id: I79ac76b8044609351e5dd8367b691b589ea35075 Signed-off-by: Tom Pantelis <tompantelis@gmail.com>
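The on-boarding hand-off described in this commit can be sketched in plain Java. This is a simplified, hypothetical illustration, not the actual CDS classes: the names (RegisterListenerMsg, ListenerReg, PublisherSketch) are made up, a synchronized method stands in for the actor's single-threaded message handling, and the Consumer callback stands in for the SetRegistration hand-off to the registration actor.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical listener and registration types, illustrative only.
interface ChangeListener { void onChange(String change); }

final class ListenerReg {
    final ChangeListener listener;
    ListenerReg(ChangeListener listener) { this.listener = listener; }
}

// Plays the role of the RegisterListener message: it carries the listener and
// a Consumer callback that stands in for the SetRegistration hand-off.
final class RegisterListenerMsg {
    final ChangeListener listener;
    final Consumer<ListenerReg> onRegistered;
    RegisterListenerMsg(ChangeListener listener, Consumer<ListenerReg> onRegistered) {
        this.listener = listener;
        this.onRegistered = onRegistered;
    }
}

// Plays the role of the publisher actor that now owns the listener tree.
final class PublisherSketch {
    private final List<ChangeListener> listenerTree = new ArrayList<>();
    private String currentState = "initial-state";

    // On-boarding is atomic: deliver the initial notification, add the
    // listener to the tree, then hand the registration back via the callback.
    synchronized void handle(RegisterListenerMsg msg) {
        msg.listener.onChange(currentState);
        listenerTree.add(msg.listener);
        msg.onRegistered.accept(new ListenerReg(msg.listener));
    }

    synchronized void publish(String change) {
        currentState = change;
        for (ChangeListener l : listenerTree) {
            l.onChange(change);
        }
    }
}

public class RegistrationFlowExample {
    public static void main(String[] args) {
        PublisherSketch publisher = new PublisherSketch();
        List<String> seen = new ArrayList<>();
        ListenerReg[] reg = new ListenerReg[1];
        publisher.handle(new RegisterListenerMsg(seen::add, r -> reg[0] = r));
        publisher.publish("update-1");
        System.out.println(seen); // initial notification followed by update-1
    }
}
```

Because the initial notification and the add to the tree happen inside one message handler, no published change can slip in between them, which is the race the shared ListenerTree made possible.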
Remove ListenerRegistration protobuf messages The ListenerRegistration protobuf messages were used to serialize the corresponding CDS messages, but we don't actually send those messages over the wire, so they don't need serialization and the protobuf messages were removed. If we do need to serialize these messages in the future, we won't use protobuf. Change-Id: I3818a965e0fd4e1876364022f2de09b1bac216d5 Signed-off-by: Tom Pantelis <tpanteli@brocade.com>
Bug 4651: Implement handling of ClusteredDOMDataTreeChangeListener in CDS Implemented handling of ClusteredDOMDataTreeChangeListener similar to what was done previously for ClusteredDOMDataChangeListener. I also refactored the listener support classes used by Shard and extracted generic base classes for the common functionality. Change-Id: I694a6a4ce41284f7ecd3bf73bc6201e9d5555998 Signed-off-by: Tom Pantelis <tpanteli@brocade.com>
Enabling Data Change Notifications for all nodes in cluster. Two new interfaces are introduced, ClusteredDataChangeListener and ClusteredDOMDataChangeListener; external applications must implement one of these interfaces if they want to receive remote data change notifications. The datastore registers listeners that implement these interfaces even on followers. Change-Id: I0e29cdf2a08a2051de5fc8ce73b9ec8ac408e45b Signed-off-by: Harman Singh <harmasin@cisco.com>
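The marker-interface pattern this commit introduces can be sketched as follows. This is a hypothetical, self-contained illustration, not the real CDS code: SimpleListener, ClusteredSimpleListener, and ShardSketch are made-up stand-ins for the actual listener interfaces and the Shard actor.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical base listener; stands in for (DOM)DataChangeListener.
interface SimpleListener { void onDataChanged(String event); }

// Marker sub-interface: opt in to notifications on every cluster node,
// analogous to ClusteredDataChangeListener / ClusteredDOMDataChangeListener.
interface ClusteredSimpleListener extends SimpleListener { }

final class ShardSketch {
    private final boolean leader;
    private final List<SimpleListener> listeners = new ArrayList<>();

    ShardSketch(boolean leader) { this.leader = leader; }

    void registerListener(SimpleListener listener) { listeners.add(listener); }

    void notifyChanged(String event) {
        for (SimpleListener l : listeners) {
            // A follower notifies only listeners that opted in via the marker.
            if (leader || l instanceof ClusteredSimpleListener) {
                l.onDataChanged(event);
            }
        }
    }
}

public class ClusteredListenerExample {
    public static void main(String[] args) {
        List<String> delivered = new ArrayList<>();
        ShardSketch follower = new ShardSketch(false);
        follower.registerListener((SimpleListener) delivered::add);          // skipped on a follower
        follower.registerListener((ClusteredSimpleListener) delivered::add); // delivered everywhere
        follower.notifyChanged("change-1");
        System.out.println(delivered.size()); // only the clustered listener fired
    }
}
```

The key point is that opting in is a type-level decision by the application: no registration-time flag is needed, and a follower shard can check the marker with a simple instanceof.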
Do not use ActorSystem.actorFor as it is deprecated Stopped using ActorSystem.actorFor and instead replaced the actor path serialization with the proper way to serialize an ActorRef. Change-Id: I282ec46d88a531eb5ce189a55e25fe6e75413c66 Signed-off-by: Robert Varga <rovarga@cisco.com> Signed-off-by: Moiz Raja <moraja@cisco.com>
Bug 2003: CDS serialization improvements In NormalizedNodeToNodeCodec#encode, significant time was spent serializing the YangInstanceIdentifier path via PathUtils even though it wasn't actually needed - the decode method didn't decode it. This might have been used by WriteModification and MergeModification originally; however, they currently serialize/deserialize their YangInstanceIdentifier path separately from the NormalizedNode via InstanceIdentifierUtils. It turns out this takes significant time as well, because it is implemented similarly to PathUtils. So I ended up using NormalizedNodeToNodeCodec to encode/decode the YangInstanceIdentifier along with the NormalizedNode, but changed InstanceIdentifierUtils to utilize the new PathArgumentSerializer and the NormalizedNodeSerializer's special QName encoding. When serializing a 5K batch of WriteModifications with flow data, the time went down from ~350 ms to ~150 ms. Deserialization was also improved. Other smaller optimizations in NormalizedNodeSerializer, NormalizedNodeType, PathArgumentSerializer and PathArgumentType chopped another 20-30 ms off the time. I also changed InstanceIdentifierUtils to serialize/deserialize via the new PathArgumentSerializer and the NormalizedNodeSerializer's special QName encoding by default, even when the ID isn't encoded as part of a NormalizedNode. This seems reasonable to me, as a standalone IID will likely have repeated namespaces and revisions, plus we get savings by not serializing each path arg class name. Removed the deprecated InstanceIdentifierUtils class in the sal-distributed-datastore bundle. Change-Id: Iaa29daeaececf4b93065f4d46d0c2796c4d8188f Signed-off-by: tpantelis <tpanteli@brocade.com>
Serialization/Deserialization and a host of other fixes
- Handle cluster MemberUp and MemberRemoved events in ShardManager
- Switch cohort messages and close listener messages to protobuf
- Switch distributed datastore messages to protobuf: CreateTransaction, CreateTransactionReply, CreateTransactionChain, CreateTransactionChainReply
- Switch ShardManager messages to protobuf
- Switch DataChanged and other messages to protobuf in the distributed datastore
- Fix a few things found during testing: 1. ShardStrategy - setting of configuration 2. NodeToNormalizedNodeBuilder - leaf node/leaf-set-entry node checks 3. DataChanged event - passing of the scope instance identifier used during deserialization
- Introduce JMX MBeans for the distributed datastore
- Fix issues which were preventing remote Shards from talking to each other
- Fix a number of issues related to deserialization
- Add the distributed datastore to the build
- Switch from InstanceIdentifier to YangInstanceIdentifier
Change-Id: I0d15dc482cb2b0fb2170b1344bad9fa3b421e8e0 Signed-off-by: Moiz Raja <moraja@cisco.com>
Changed RegisterChangeListener and RegisterChangeListenerReply in the distributed datastore to be based on the protocol buffer ListenerRegistrationMessages Change-Id: Ia1e427ca4d60336bb1cbf774c19fada4a1d0aed1 Signed-off-by: Basheeruddin Ahmed <syedbahm@cisco.com>
Make CompositeModification serializable using protocol buffers Change-Id: I3e91452b0244c6adec84c000e83d7f993b2a59b7 Signed-off-by: Moiz Raja <moraja@cisco.com>
Implement DistributedDataStore#registerDataChangeListener Change-Id: I153c76b923dff7845321699d556f30f2ecadec57 Signed-off-by: Moiz Raja <moraja@cisco.com>
Initial implementation of the Shard Actor Things to note: - Added a temporary dependency on sal-broker-impl so that I could use the InMemoryDOMDataStore. Once InMemoryDOMDataStore is moved to its own bundle, I will simply switch the dependency - Shard has only been implemented to the point where it is using an InMemoryDOMDataStore and handling the CreateTransaction and RegisterChangeListener messages This commit is intended to give a feel for what kind of coding patterns will be used to implement Shard and related actors and their tests Change-Id: I86f0d701399805185a0987bb1b97fe1358ce4cd9 Signed-off-by: Moiz Raja <moraja@cisco.com>