Bug 2847: Recovery of a large journal runs out of memory
Changed the ShardRecoveryCoordinator to commit each batch of log entries
immediately instead of using an executor to prepare write transactions in
parallel and then commit them all when recovery completes. The parallel
approach incurred a lot of memory overhead for little to no gain. I also
changed it to cache the serialized ModificationPayload instances instead
of de-serializing them immediately - this further reduces the memory
footprint, as the serialized instances are much smaller.
As a result, all of the recovery code now lives in the
ShardRecoveryCoordinator - Shard is essentially a pass-through.
I also reduced the shardJournalRecoveryLogBatchSize to 1000 to further
reduce the memory footprint.
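The batch-at-a-time flow described above can be sketched roughly as
follows. This is an illustrative standalone example, not the actual
controller code: the class and method names (BatchRecoverySketch,
appendRecoveredLogEntry, applyCurrentBatch) are hypothetical, and the
payloads are plain byte arrays standing in for serialized
ModificationPayload instances.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: recovered entries are cached in serialized form (byte[]) and
// deserialized only when their batch is applied; each batch is committed
// immediately rather than queued until recovery completes, so at most one
// batch of entries is held in memory at a time.
public class BatchRecoverySketch {
    // Mirrors the reduced shardJournalRecoveryLogBatchSize of 1000.
    static final int BATCH_SIZE = 1000;

    private final List<byte[]> currentBatch = new ArrayList<>();
    private int committedEntries = 0;

    // Called once per replayed journal entry; the payload stays serialized.
    public void appendRecoveredLogEntry(byte[] serializedPayload) {
        currentBatch.add(serializedPayload);
        if (currentBatch.size() >= BATCH_SIZE) {
            applyCurrentBatch();
        }
    }

    // Deserialize and commit the pending entries, then drop them so the
    // memory can be reclaimed before the next batch is replayed.
    public void applyCurrentBatch() {
        for (byte[] payload : currentBatch) {
            Object modification = deserialize(payload);
            commit(modification);
        }
        currentBatch.clear();
    }

    private Object deserialize(byte[] payload) {
        // Stand-in for real payload deserialization.
        return new String(payload);
    }

    private void commit(Object modification) {
        committedEntries++;
    }

    public int getCommittedEntries() {
        return committedEntries;
    }

    public static void main(String[] args) {
        BatchRecoverySketch coordinator = new BatchRecoverySketch();
        for (int i = 0; i < 2500; i++) {
            coordinator.appendRecoveredLogEntry(("entry-" + i).getBytes());
        }
        // Flush the final partial batch when recovery completes.
        coordinator.applyCurrentBatch();
        System.out.println(coordinator.getCommittedEntries());
    }
}
```

The key property is that currentBatch is cleared after every commit, so
peak memory is bounded by one batch of small serialized payloads instead
of the entire journal's worth of prepared transactions.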
Change-Id: I3aaabe52781bc0db14975e0a292ef9fd18aa3d7c
Signed-off-by: Tom Pantelis <tpanteli@brocade.com>