BuiltIn.Should_Contain ${text} "status":404 javax.management.InstanceNotFoundException
Restart_Test_Templ
- [Documentation] Kill every odl node and start again.
- ClusterManagement.Kill_Members_From_List_Or_All
+ [Documentation] Stop every odl node and start them again.
+ ClusterManagement.Stop_Members_From_List_Or_All
ClusterManagement.Clean_Directories_On_List_Or_All tmp_dir=/tmp
ClusterManagement.Start_Members_From_List_Or_All
BuiltIn.Wait_Until_Keyword_Succeeds 300s 10s ShardStability.Shards_Stability_Get_Details ${DEFAULT_SHARD_LIST} verify_restconf=True
Reboot_People_Leader
[Documentation] The previous people Leader is rebooted. We should never stop the first people follower, as this is where people are registered.
- ClusterManagement.Kill_Single_Member ${people_leader_index} confirm=True
+ ClusterManagement.Stop_Single_Member ${people_leader_index} confirm=True
ClusterManagement.Start_Single_Member ${people_leader_index} wait_for_sync=True timeout=${MEMBER_START_TIMEOUT}
BuiltIn.Wait_Until_Keyword_Succeeds 30s 2s ClusterManagement.Verify_Leader_Exists_For_Each_Shard shard_name_list=${SHARD_NAME_LIST} shard_type=config
... and is available at http://www.eclipse.org/legal/epl-v10.html
...
...
-... This test kills the current leader of the "car" shard and then executes CRD
-... operations on the new leader and a new follower. The killed member is brought back.
+... This test stops the current leader of the "car" shard and then executes CRD
+... operations on the new leader and a new follower. The stopped member is brought back.
... This suite uses 3 different car sets of the same size but with different starting IDs.
...
... Other models and shards (people, car-people) are not accessed by this suite.
: FOR ${session} IN @{ClusterManagement__session_list}
\ TemplatedRequests.Get_As_Json_Templated folder=${VAR_DIR}/cars session=${session} verify=True iterations=${CAR_ITEMS} iter_start=${ORIGINAL_START_I}
-Kill_Original_Car_Leader
- [Documentation] Kill the car Leader to cause a new leader to get elected.
- ClusterManagement.Kill_Single_Member ${car_leader_index} confirm=True
+Stop_Original_Car_Leader
+ [Documentation] Stop the car Leader to cause a new leader to get elected.
+ ClusterManagement.Stop_Single_Member ${car_leader_index} confirm=True
Wait_For_New_Leader
[Documentation] Wait until a new car Leader is elected.
\ TemplatedRequests.Get_As_Json_Templated folder=${VAR_DIR}/cars session=${session} verify=True iterations=${CAR_ITEMS} iter_start=${FOLLOWER_2NODE_START_I}
Start_Old_Car_Leader
- [Documentation] Start the killed member without deleting the persisted data.
+ [Documentation] Start the stopped member without deleting the persisted data.
ClusterManagement.Start_Single_Member ${car_leader_index} wait_for_sync=True timeout=${MEMBER_START_TIMEOUT}
BuiltIn.Wait_Until_Keyword_Succeeds 30s 2s ClusterManagement.Verify_Leader_Exists_For_Each_Shard shard_name_list=${SHARD_NAME_LIST} shard_type=config
... and is available at http://www.eclipse.org/legal/epl-v10.html
...
...
-... This test kills majority of the followers and verifies car addition is not possible,
+... This test stops a majority of the followers and verifies that car addition is not possible,
... then resumes a single follower (the first from the original list) and checks that addition works.
... Then the remaining members are brought up.
... The Leader member is always up and is assumed to remain the Leader during the whole suite run.
${CLUSTER_DIR} ${CURDIR}/../../../variables/clustering
*** Test Cases ***
-Kill_Majority_Of_The_Followers
- [Documentation] Kill half plus one car Follower members and set reviving followers down (otherwsise tipping followers cannot join cluster).
- ... Mark most of killed members as explicitly down, to allow the surviving leader make progress.
- ClusterManagement.Kill_Members_From_List_Or_All member_index_list=${list_of_killing} confirm=True
+Stop_Majority_Of_The_Followers
+ [Documentation] Stop half plus one car Follower members and set reviving followers down (otherwise tipping followers cannot join the cluster).
+ ... Mark most of the stopped members as explicitly down, to allow the surviving leader to make progress.
+ ClusterManagement.Stop_Members_From_List_Or_All member_index_list=${list_of_stopping} confirm=True
: FOR ${index} IN @{list_of_reviving}
\ ${data} OperatingSystem.Get File ${CLUSTER_DIR}/member_down.json
\ ${member_ip} = Collections.Get_From_Dictionary ${ClusterManagement__index_to_ip_mapping} ${index}
BuiltIn.Set_Suite_Variable \${list_of_tipping} ${tipping_list}
${revive_list} = Collections.Get_Slice_From_List ${car_follower_indices} ${half_followers} ${number_followers}
BuiltIn.Set_Suite_Variable \${list_of_reviving} ${revive_list}
- ${kill_list} = Collections.Combine_Lists ${tipping_list} ${revive_list}
- BuiltIn.Set_Suite_Variable \${list_of_killing} ${kill_list}
+ ${stop_list} = Collections.Combine_Lists ${tipping_list} ${revive_list}
+ BuiltIn.Set_Suite_Variable \${list_of_stopping} ${stop_list}
: FOR ${session} IN @{ClusterManagement__session_list}
\ TemplatedRequests.Get_As_Json_Templated folder=${VAR_DIR}/cars session=${session} verify=True iterations=${CAR_ITEMS}
-Kill_All_Members
- [Documentation] Kill all controllers.
- ClusterManagement.Kill_Members_From_List_Or_All confirm=True
+Stop_All_Members
+ [Documentation] Stop all controllers.
+ ClusterManagement.Stop_Members_From_List_Or_All confirm=True
Start_All_Members
[Documentation] Start all controllers (should restore the persisted data).
${DATASTORE_CFG} /${WORKSPACE}/${BUNDLEFOLDER}/etc/org.opendaylight.controller.cluster.datastore.cfg
*** Test Cases ***
-Kill_All_Members
- [Documentation] Kill every odl node.
- ClusterManagement.Kill_Members_From_List_Or_All
+Stop_All_Members
+ [Documentation] Stop every odl node.
+ ClusterManagement.Stop_Members_From_List_Or_All
Unset_Tell_Based_Protocol_Usage
[Documentation] Comment out the flag usage in the config file. Also clean most data except data/log/.
${DATASTORE_CFG} /${WORKSPACE}/${BUNDLEFOLDER}/etc/org.opendaylight.controller.cluster.datastore.cfg
*** Test Cases ***
-Kill_All_Members
- [Documentation] Kill every odl node.
- ClusterManagement.Kill_Members_From_List_Or_All
+Stop_All_Members
+ [Documentation] Stop every odl node.
+ ClusterManagement.Stop_Members_From_List_Or_All
Set_Tell_Based_Protocol_Usage
[Documentation] Un-comment the flag usage in the config file. Also clean most data except data/log/.
[Documentation] Find a service owner and successors.
Get_Present_Brt_Owner_And_Successors 1 store=${True}
-Rpc_Before_Killing_On_Owner
+Rpc_Before_Stopping_On_Owner
[Documentation] Run rpc on the service owner.
Run_Rpc ${brt_owner}
-Rpc_Before_Kill_On_Successors
+Rpc_Before_Stop_On_Successors
[Documentation] Run rpc on non-owner cluster nodes.
: FOR ${idx} IN @{brt_successors}
\ Run_Rpc ${idx}
-Kill_Current_Owner_Member
- [Documentation] Kill cluster node which is the owner.
- ClusterManagement.Kill_Single_Member ${brt_owner}
+Stop_Current_Owner_Member
+ [Documentation] Stop the cluster node which is the owner.
+ ClusterManagement.Stop_Single_Member ${brt_owner}
BuiltIn.Set_Suite_Variable ${old_brt_owner} ${brt_owner}
BuiltIn.Set_Suite_Variable ${old_brt_successors} ${brt_successors}
: FOR ${idx} IN @{old_brt_successors}
\ BuiltIn.Wait_Until_Keyword_Succeeds 60s 5s Run_Rpc ${idx}
-Restart_Killed_Member
- [Documentation] Restart killed node
+Restart_Stopped_Member
+ [Documentation] Restart the stopped node.
ClusterManagement.Start_Single_Member ${old_brt_owner}
Verify_New_Owner_Remained_After_Rejoin
*** Test Cases ***
Kill_Odl
- [Documentation] The ODL instance consumes resources, kill it.
- ClusterManagement.Kill_Members_From_List_Or_All
+ [Documentation] The ODL instance consumes resources, so stop it.
+ ClusterManagement.Stop_Members_From_List_Or_All
Detect_Config_Version
[Documentation] Examine ODL installation to figure out which version of binding-parent should be used.