csit/suites/mdsal/dsbenchmark/dsbenchmark.robot
*** Settings ***
Documentation     MD-SAL Data Store benchmarking.
...
...               Copyright (c) 2015 Cisco Systems, Inc. and others. All rights reserved.
...
...               This program and the accompanying materials are made available under the
...               terms of the Eclipse Public License v1.0 which accompanies this distribution,
...               and is available at http://www.eclipse.org/legal/epl-v10.html
...
...               This test suite uses the odl-dsbenchmark-impl feature, controlled
...               via the dsbenchmark.py tool, to test MD-SAL Data Store performance.
...               (see 'https://wiki.opendaylight.org/view/Controller_Core_Functionality_Tutorials:Tutorials:Data_Store_Benchmarking_and_Data_Access_Patterns')
...
...               Based on the test suite variables, it triggers the required numbers of
...               warm-up and measured test runs: the odl-dsbenchmark-impl module generates
...               the specified structure, type and number of operations towards the MD-SAL Data Store.
...               The test suite checks for start-up and test execution timeouts
...               (Start Measurement, Wait For Results) and performs basic checks of the test run
...               results (Check Results). Finally, it provides totals per operation structure and type
...               (by default in the perf_per_struct.csv and perf_per_ops.csv files)
...               suitable for plotting in the system test environment. See also
...               'https://wiki.opendaylight.org/view/CrossProject:Integration_Group:System_Test:Step_by_Step_Guide#Optional_-_Plot_a_graph_from_your_job'
...               The included totals can be filtered using the FILTER parameter (a regular expression).
...               Because of the way graphs are drawn, it is recommended to keep
...               all test suite variables unchanged as defined for the 1st build.
...               The WARMUPS and RUNS parameters (and accordingly the TIMEOUT value) can be
...               changed for each build if needed. The UNITS parameter defines the time units
...               returned by the odl-dsbenchmark-impl module. The dsbenchmark.py tool always
...               returns values in milliseconds.
Suite Setup       Setup_Everything
Suite Teardown    Teardown_Everything
Test Setup        SetupUtils.Setup_Test_With_Logging_And_Fast_Failing
Test Teardown     FailFast.Start_Failing_Fast_If_This_Failed
Library           OperatingSystem
Library           SSHLibrary    timeout=10s
Library           RequestsLibrary
Variables         ${CURDIR}/../../../variables/Variables.py
Resource          ${CURDIR}/../../../libraries/ConfigViaRestconf.robot
Resource          ${CURDIR}/../../../libraries/FailFast.robot
Resource          ${CURDIR}/../../../libraries/KarafKeywords.robot
Resource          ${CURDIR}/../../../libraries/SetupUtils.robot
Resource          ${CURDIR}/../../../libraries/Utils.robot
Resource          ${CURDIR}/../../../libraries/WaitForFailure.robot

*** Variables ***
${ODL_LOG_LEVEL}    DEFAULT
${TX_TYPE}        {TX-CHAINING,SIMPLE-TX}
${OP_TYPE}        {PUT,MERGE,DELETE}
${TOTAL_OPS}      100000
${OPS_PER_TX}     100000
${INNER_OPS}      100000
${WARMUPS}        10
${RUNS}           10
${TIMEOUT}        30 min
${FILTER}         EXEC
${UNITS}          microseconds
${tool}           dsbenchmark.py
${tool_args}      ${EMPTY}
${tool_startup_timeout}    10s
${tool_log_name}    dsbenchmark.log
${tool_output_name}    test.csv
${tool_results1_name}    perf_per_struct.csv
${tool_results2_name}    perf_per_ops.csv

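# Illustrative only: with the defaults above, the Start_Benchmark_Tool keyword (below)
# assembles a command roughly like the one sketched here. The flag names come from that
# keyword; the host and port placeholders are resolved at runtime from ${ODL_SYSTEM_IP}
# and ${RESTCONFPORT}.
#   python dsbenchmark.py --host <controller-ip> --port <restconf-port> --warmup 10 --runs 10
#       --total 100000 --inner 100000 --txtype {TX-CHAINING,SIMPLE-TX} --ops 100000
#       --optype {PUT,MERGE,DELETE} --plot EXEC --units microseconds &> dsbenchmark.log
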
*** Test Cases ***
Set Karaf Log Levels
    [Documentation]    Set the Karaf log level.
    KarafKeywords.Execute_Controller_Karaf_Command_On_Background    log:set ${ODL_LOG_LEVEL}

Start Measurement
    [Documentation]    Start the benchmark tool. Fail if the test did not start.
    [Tags]    critical
    Start_Benchmark_Tool

Wait For Results
    [Documentation]    Wait until results are available. Fail if a timeout occurs.
    [Tags]    critical
    Wait_Until_Benchmark_Tool_Finish    ${TIMEOUT}
    SSHLibrary.File Should Exist    ${tool_results1_name}
    SSHLibrary.File Should Exist    ${tool_results2_name}
    Store_File_To_Robot    ${tool_results1_name}
    Store_File_To_Robot    ${tool_results2_name}

Stop Measurement
    [Documentation]    Stop the benchmark tool (if it is still running).
    [Setup]    FailFast.Run_Even_When_Failing_Fast
    Stop_Benchmark_Tool

Collect Logs
    [Documentation]    Collect logs and detailed results for debugging.
    [Setup]    FailFast.Run_Even_When_Failing_Fast
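    # Nothing below is asserted on; the directory listing and the tool output files are
    # fetched only so that they show up in the Robot log for debugging.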
    ${files}=    SSHLibrary.List Files In Directory    .
    ${tool_log}=    Get_Log_File    ${tool_log_name}
    ${tool_output}=    Get_Log_File    ${tool_output_name}
    ${tool_results1}=    Get_Log_File    ${tool_results1_name}
    ${tool_results2}=    Get_Log_File    ${tool_results2_name}

Check Results
    [Documentation]    Check outputs for expected content. Fail in case of unexpected content.
    [Tags]    critical
    ${tool_log}=    Get_Log_File    ${tool_log_name}
    BuiltIn.Should Contain    ${tool_log}    Total execution time:
    BuiltIn.Should Not Contain    ${tool_log}    status: NOK

*** Keywords ***
Setup_Everything
    [Documentation]    Set up imported resources, SSH login to the mininet machine,
    ...    create an HTTP session, and copy the Python tool to the mininet machine.
    SetupUtils.Setup_Utils_For_Setup_And_Teardown
    SSHLibrary.Set_Default_Configuration    prompt=${TOOLS_SYSTEM_PROMPT}
    SSHLibrary.Open_Connection    ${TOOLS_SYSTEM_IP}
    Utils.Flexible_Mininet_Login
    SSHLibrary.Put_File    ${CURDIR}/../../../../tools/mdsal_benchmark/${tool}

Teardown_Everything
    [Documentation]    Clean up: close all SSH connections.
    SSHLibrary.Close_All_Connections

Start_Benchmark_Tool
    [Documentation]    Start the benchmark tool and check that it is still running after the ${tool_startup_timeout} startup period.
    ${command}=    BuiltIn.Set_Variable    python ${tool} --host ${ODL_SYSTEM_IP} --port ${RESTCONFPORT} --warmup ${WARMUPS} --runs ${RUNS} --total ${TOTAL_OPS} --inner ${INNER_OPS} --txtype ${TX_TYPE} --ops ${OPS_PER_TX} --optype ${OP_TYPE} --plot ${FILTER} --units ${UNITS} ${tool_args} &> ${tool_log_name}
    BuiltIn.Log    ${command}
    ${output}=    SSHLibrary.Write    ${command}
    ${status}    ${message}=    BuiltIn.Run Keyword And Ignore Error    Write Until Expected Output    ${EMPTY}    ${TOOLS_SYSTEM_PROMPT}    ${tool_startup_timeout}
    ...    1s
    BuiltIn.Log    ${status}
    BuiltIn.Log    ${message}
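    # Write Until Expected Output above keeps sending empty input and waits for the shell
    # prompt; getting the prompt back within ${tool_startup_timeout} means the tool already
    # exited, so a PASS status is treated as a startup failure below.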
    BuiltIn.Run Keyword If    '${status}' == 'PASS'    BuiltIn.Fail    Benchmark tool is not running

Wait_Until_Benchmark_Tool_Finish
    [Arguments]    ${timeout}
    [Documentation]    Wait until the benchmark tool is finished. Fail in case of test timeout (${timeout}).
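    # The shell prompt only returns once the tool has terminated, so Read Until Prompt is
    # retried every 15 seconds until it succeeds or ${timeout} expires.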
    BuiltIn.Wait Until Keyword Succeeds    ${timeout}    15s    Read Until Prompt

Stop_Benchmark_Tool
    [Documentation]    Stop the benchmark tool. Fail if still running.
    Utils.Write_Bare_Ctrl_C
    SSHLibrary.Read Until Prompt

Get_Log_File
    [Arguments]    ${file_name}
    [Documentation]    Return and log content of the provided file.
    ${output_log}=    SSHLibrary.Execute_Command    cat ${file_name}
    BuiltIn.Log    ${output_log}
    [Return]    ${output_log}

Store_File_To_Robot
    [Arguments]    ${file_name}
    [Documentation]    Store the provided file from the MININET to the ROBOT machine.
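    # The remote file is read with cat and re-created locally via OperatingSystem's Create File,
    # so its content is available on the Robot machine.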
    ${output_log}=    SSHLibrary.Execute_Command    cat ${file_name}
    BuiltIn.Log    ${output_log}
    Create File    ${file_name}    ${output_log}