Datastore Job Coordination framework
------------------------------------
The datastore job coordinator framework offers the following benefits:
#. A “Datastore Job” is a set of updates to the Config/Operational
   datastore.
#. Dependent Jobs (e.g. operations on interfaces on the same port) that
   need to be run one after the other will continue to run in sequence.
#. Independent Jobs (e.g. operations on interfaces across different
   ports) will be allowed to run in parallel.
#. Makes use of ForkJoinPool, which allows work-stealing across
   threads. A ThreadPoolExecutor flavor is also available, but it will
   be deprecated soon.
#. Jobs are enqueued to and dequeued from a two-level hash structure
   that ensures the sequencing guarantees above are satisfied, and are
   executed using the ForkJoinPool mentioned above.
#. The jobs are enqueued by the application along with an application
   job-key (type: string). The Coordinator dequeues and schedules the
   job for execution as appropriate. All jobs enqueued with the same
   job-key will be executed sequentially.
#. DataStore Job Coordination distributes jobs and executes them in
   parallel within a single node.
#. This will still work in clustered mode by handling optimistic lock
   exceptions and retrying the job.
#. The framework provides the capability to retry and roll back Jobs.
#. Applications can specify how many retries to attempt and provide
   callbacks for rollback.
#. Aids movement of Application Datastore listeners to “Follower” also
   listening mode without any change to the business logic of the
   application.
#. The Datastore Job Coordination function gets the list of
   ListenableFutures returned from each job.
#. The Job is deemed complete only when the onSuccess callback is
   invoked; only then will the next enqueued job for that job-key be
   dequeued.
#. On failure, based on application input, retries and/or rollback will
   be performed. Rollback failures are considered a double fault: the
   system bails out with an error message and moves on to the next job
   with the same job-key.
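
The per-job-key sequencing described above can be sketched as follows. This is a hypothetical illustration, not the framework's actual API: ``KeyedJobCoordinator`` and ``enqueueJob`` are invented names, and a single map stands in for the two-level hash structure. Same-key jobs are chained so they run one after the other, while different keys fan out across the work-stealing ForkJoinPool.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of key-based job coordination: jobs sharing a
// job-key run sequentially; jobs with different keys may run in
// parallel on the shared work-stealing ForkJoinPool.
class KeyedJobCoordinator {
    private final ExecutorService pool = ForkJoinPool.commonPool();

    // Tail of the job chain per job-key (the two-level hash structure
    // is collapsed into a single map for brevity).
    private final ConcurrentHashMap<String, CompletableFuture<Void>> tails =
            new ConcurrentHashMap<>();

    CompletableFuture<Void> enqueueJob(String jobKey, Runnable job) {
        // Chain the new job after the current tail for this key, so
        // all jobs enqueued with the same key execute in order.
        return tails.compute(jobKey, (key, tail) ->
                (tail == null ? CompletableFuture.<Void>completedFuture(null) : tail)
                        .thenRunAsync(job, pool));
    }
}
```

Enqueuing several jobs under one key (say, one per operation on the same port) runs them in submission order; a job enqueued under another key is free to run concurrently with them.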
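
The retry-and-rollback policy can likewise be sketched; ``runWithRetries`` is a hypothetical helper under assumed semantics (one initial attempt plus a bounded number of retries), not the framework's API:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of the retry/rollback policy: one initial
// attempt plus up to maxRetries retries; if every attempt fails, the
// application-supplied rollback runs. A rollback failure is a double
// fault: report it and move on to the next job.
class RetryingJobRunner {
    static boolean runWithRetries(Callable<Boolean> job, Runnable rollback, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                if (job.call()) {
                    return true; // job succeeded and is deemed complete
                }
            } catch (Exception e) {
                // failed attempt: fall through and retry
            }
        }
        try {
            rollback.run(); // all attempts exhausted
        } catch (Exception doubleFault) {
            // double fault: bail out with an error message and move on
            System.err.println("Rollback failed: " + doubleFault);
        }
        return false;
    }
}
```

In this sketch the application supplies both the retry budget and the rollback callback, matching the callback-driven behaviour the list above describes.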