Waits for a Sink to connect. After it does, registers a DTCL and starts sending all changes to the Sink.
Connects to the Source and asks for changes on a particular path in the datastore (root by default).
All changes received from the Source are applied to the Sink's datastore.
Data Tree Change Listener (DTCL) is an object which is registered on a node in the datastore and notified if
said node (or any of its children) is modified.
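The DTCL contract can be sketched with a toy datastore. This is an illustrative Python model only; ``SimpleStore`` and the callback signature are invented for the example, while the real listener is the Java ``DataTreeChangeListener`` interface in MD-SAL:

```python
# Toy datastore with DTCL-style notifications (illustrative only).
class SimpleStore:
    def __init__(self):
        self.data = {}
        self.listeners = {}  # watched path -> list of callbacks

    def register_listener(self, path, callback):
        self.listeners.setdefault(path, []).append(callback)

    def write(self, path, value):
        self.data[path] = value
        # Notify listeners watching this node or any of its ancestors,
        # mirroring "said node (or any of its children) is modified".
        for watched, callbacks in self.listeners.items():
            if path == watched or path.startswith(watched + "/"):
                for cb in callbacks:
                    cb(path, value)

store = SimpleStore()
seen = []
store.register_listener("/network-topology", lambda p, v: seen.append((p, v)))
store.write("/network-topology/topology/node-1", "up")  # inside watched subtree -> notified
store.write("/interfaces/eth0", "down")                 # outside -> no notification
```

A listener registered on ``/network-topology`` fires for writes anywhere under that subtree, which is exactly what the replicator's Source relies on.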
Incremental Backup vs Daexim
----------------------------
The concept of Incremental Backup originated from Daexim's drawbacks. Importing
the whole datastore may take a while, since it triggers all the DTCLs.
Using Daexim as a backup mechanism is therefore problematic, since the
export/import process would need to run quite frequently to keep the two
sites in sync.
Incremental Backup simply mirrors the changes made on the primary site to the
secondary site one by one. All that is needed is a Source on the primary site,
which sends the changes, and a Sink on the secondary site, which applies them.
Netty is used as the transport mechanism.
Replication (works for both LAN and WAN)
----------------------------------------
Once the Sink is started, it tries to connect to the Source's address and port.
Once the connection is established, the Sink sends a request containing the path
in the datastore which needs to be replicated. The Source receives this request and
registers a DTCL on that path. Any changes the listener receives are then streamed
to the Sink, which applies them to its datastore.
In case a network partition brings the connection down, the Source unregisters
the listener and simply waits for the Sink to reconnect. When the connection comes back up
and the Sink reconnects, the Source registers the DTCL again and continues replicating.
Even if the Source's datastore changed while the connection was down, registering a new
DTCL delivers the current initial state to the Sink. At that point the two sites are
synchronized again and replication can continue without any issue.
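The reconnect behaviour above can be modelled in a few lines. This is a hypothetical Python sketch, not the Netty implementation; every class and name in it is invented for illustration:

```python
# Sketch: Source drops its listener on disconnect; on reconnect it replays
# the current state of the replicated subtree before streaming new changes.
class Source:
    def __init__(self):
        self.store = {}
        self.sink = None  # callback invoked for every replicated change

    def connect(self, sink_apply, path):
        self.sink = sink_apply
        self.path = path
        # Registering a fresh DTCL delivers the current initial state first,
        # which resynchronizes a Sink that missed changes while disconnected.
        for p, v in self.store.items():
            if p.startswith(path):  # prefix check kept simple for the sketch
                sink_apply(p, v)

    def disconnect(self):
        self.sink = None  # listener unregistered; Source just waits

    def write(self, path, value):
        self.store[path] = value
        if self.sink is not None and path.startswith(self.path):
            self.sink(path, value)

class Sink:
    def __init__(self):
        self.store = {}
    def apply(self, path, value):
        self.store[path] = value

source, sink = Source(), Sink()
source.connect(sink.apply, "/topo")
source.write("/topo/a", 1)           # replicated live
source.disconnect()
source.write("/topo/b", 2)           # missed while partitioned
source.connect(sink.apply, "/topo")  # reconnect replays current state
```

After the second ``connect``, the replay of the current state covers the write that happened during the partition, so both stores end up identical.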
* odl-mdsal-replicate-netty
<groupId>org.opendaylight.mdsal</groupId>
<artifactId>odl-mdsal-replicate-common</artifactId>
<classifier>features</classifier>

<groupId>org.opendaylight.mdsal</groupId>
<artifactId>odl-mdsal-replicate-netty</artifactId>
<classifier>features</classifier>
Configuration and Installation
------------------------------
#. **Install the features on both the primary and the secondary site**

feature:install odl-mdsal-replicate-netty odl-mdsal-replicate-common
#. **Enable the Source (on the primary site)**

config:edit org.opendaylight.mdsal.replicate.netty.source
config:property-set enabled true
config:update
All configuration options:

* enabled <true/false>
* listen-port <port> *(9999 is used if not set)*
* keepalive-interval-seconds <amount> *(10 is used if not set)*
* max-missed-keepalives <amount> *(5 is used if not set)*
#. **Enable the Sink (on the secondary site)**
*In this example the Source is at 172.16.0.2, port 9999*

config:edit org.opendaylight.mdsal.replicate.netty.sink
config:property-set enabled true
config:property-set source-host 172.16.0.2
config:update
All configuration options:

* enabled <true/false>
* source-host <address> *(127.0.0.1 is used if not set)*
* source-port <port> *(9999 is used if not set)*
* reconnect-delay-millis <reconnect-delay> *(3000 is used if not set)*
* keepalive-interval-seconds <amount> *(10 is used if not set)*
* max-missed-keepalives <amount> *(5 is used if not set)*
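Since Karaf persists edited configurations as plain ``.cfg`` files under ``etc/`` (one file per PID), the same setup can also be shipped as files. The file names below follow the PIDs used above; the values are only examples:

```
# Primary site: etc/org.opendaylight.mdsal.replicate.netty.source.cfg
enabled = true
listen-port = 9999

# Secondary site: etc/org.opendaylight.mdsal.replicate.netty.sink.cfg
enabled = true
source-host = 172.16.0.2
source-port = 9999
```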
Switching Primary and Secondary sites
-------------------------------------
The sites can be switched simply by disabling the current configuration and
enabling it in the opposite direction.
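For example, assuming the PIDs from the previous section, the switch could look like this (the address placeholder is illustrative):

```
# On the old primary: stop the Source, start a Sink
config:edit org.opendaylight.mdsal.replicate.netty.source
config:property-set enabled false
config:update
config:edit org.opendaylight.mdsal.replicate.netty.sink
config:property-set enabled true
config:property-set source-host <address-of-the-new-primary>
config:update

# On the old secondary: stop the Sink, start a Source
config:edit org.opendaylight.mdsal.replicate.netty.sink
config:property-set enabled false
config:update
config:edit org.opendaylight.mdsal.replicate.netty.source
config:property-set enabled true
config:update
```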
Running one ODL instance locally and one in Docker

karaf-distribution/bin/karaf
Karaf Terminal - Start features

- features-mdsal - core MDSAL features
- odl-mdsal-replicate-netty - the Netty replicator
- odl-restconf-nb-bierman02 - we'll use Postman to access the datastore
- odl-netconf-clustered-topology - we'll change the data of some NETCONF devices
feature:install features-mdsal odl-mdsal-replicate-netty odl-restconf-nb-bierman02 odl-netconf-clustered-topology
config:edit org.opendaylight.mdsal.replicate.netty.source
config:property-set enabled true
config:update
#. **Run the Dockerized Karaf distribution**
To get access to the Karaf terminal running in Docker, you can use:

docker exec -ti $(docker ps -a -q --filter ancestor=<NAME-OF-THE-DOCKER-IMAGE>) /karaf-distribution/bin/karaf
Start the features in the Docker's Karaf terminal

feature:install features-mdsal odl-mdsal-replicate-netty odl-restconf-nb-bierman02 odl-netconf-clustered-topology
Start the Sink - let's say the Docker container runs at 172.17.0.2, meaning it will find the local Source at 172.17.0.1
config:edit org.opendaylight.mdsal.replicate.netty.sink
config:property-set enabled true
config:property-set source-host 172.17.0.1
config:update
#. **Run Postman and try modifying the Source's datastore**
Put data into the local datastore:

PUT http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device
<node xmlns="urn:TBD:params:xml:ns:yang:network-topology">
    <node-id>new-netconf-device</node-id>
    <host xmlns="urn:opendaylight:netconf-node-topology">127.0.0.1</host>
    <port xmlns="urn:opendaylight:netconf-node-topology">16777</port>
    <username xmlns="urn:opendaylight:netconf-node-topology">admin</username>
    <password xmlns="urn:opendaylight:netconf-node-topology">admin</password>
    <tcp-only xmlns="urn:opendaylight:netconf-node-topology">false</tcp-only>
    <reconnect-on-changed-schema xmlns="urn:opendaylight:netconf-node-topology">false</reconnect-on-changed-schema>
    <connection-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">20000</connection-timeout-millis>
    <max-connection-attempts xmlns="urn:opendaylight:netconf-node-topology">0</max-connection-attempts>
    <between-attempts-timeout-millis xmlns="urn:opendaylight:netconf-node-topology">2000</between-attempts-timeout-millis>
    <sleep-factor xmlns="urn:opendaylight:netconf-node-topology">1.5</sleep-factor>
    <keepalive-delay xmlns="urn:opendaylight:netconf-node-topology">120</keepalive-delay>
</node>
GET http://localhost:8181/restconf/config/network-topology:network-topology/
Get the data from the Docker instance. The change should be present there as well.
GET http://172.17.0.2:8181/restconf/config/network-topology:network-topology/
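Postman is not required; the same requests can be sent with ``curl``. The snippet below is illustrative: ``new-netconf-device.xml`` is a hypothetical file containing the payload shown above, and ``admin:admin`` are the default ODL credentials:

```
# PUT the device node into the local (Source) datastore
curl -u admin:admin -X PUT -H "Content-Type: application/xml" \
  -d @new-netconf-device.xml \
  http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device

# Verify the change was replicated to the Docker (Sink) instance
curl -u admin:admin http://172.17.0.2:8181/restconf/config/network-topology:network-topology/
```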