- Type: Task
- Resolution: Unresolved
- Priority: Trivial - P5
- Affects Version/s: None
- Component/s: None
- Storage Engines
- StorEng - Defined Pipeline
The Performance Long Test specifies the WiredTiger connection configuration in two places: the Evergreen YAML and the .wtperf files. We should move this configuration into the .wtperf files so it lives in a single place.

The run-perf-test function should also take a new argument for the ops list.

For example:
```diff
diff --git a/bench/wtperf/runners/500m-btree-populate.wtperf b/bench/wtperf/runners/500m-btree-populate.wtperf
index 83f136c1a1..54065ac7f9 100644
--- a/bench/wtperf/runners/500m-btree-populate.wtperf
+++ b/bench/wtperf/runners/500m-btree-populate.wtperf
@@ -9,7 +9,7 @@
 #
 # This generates about 80 Gb of uncompressed data. But it should compress
 # well and be small on disk.
-conn_config="cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),eviction=(threads_max=8)"
+conn_config="create,statistics=(fast),statistics_log=(json,wait=1,sources=[file:]),cache_size=16G,checkpoint=(wait=60,log_size=2GB),session_max=20000,log=(enabled),eviction=(threads_max=8)"
 compact=true
 compression="snappy"
 sess_config="isolation=snapshot"
diff --git a/test/evergreen.yml b/test/evergreen.yml
index ea7ab61187..e40c0cdc9e 100644
--- a/test/evergreen.yml
+++ b/test/evergreen.yml
@@ -1131,8 +1131,8 @@ functions:
           virtualenv -p ${python_binary|python3} venv
           source venv/bin/activate
           pip3 install psutil==5.9.4
-          ${python_binary|python3} ../../../bench/perf_run_py/perf_run.py --${test_type|wtperf} -e ${exec_path|./wtperf} -t ${perf-test-path|../../../bench/wtperf/runners}/${perf-test-name} -ho WT_TEST -m ${maxruns} -v -b -o test_stats/evergreen_out_${perf-test-name}.json ${wtarg}
-          ${python_binary|python3} ../../../bench/perf_run_py/perf_run.py --${test_type|wtperf} -e ${exec_path|./wtperf} -t ${perf-test-path|../../../bench/wtperf/runners}/${perf-test-name} -ho WT_TEST -m ${maxruns} -v -re -o test_stats/atlas_out_${perf-test-name}.json ${wtarg}
+          ${python_binary|python3} ../../../bench/perf_run_py/perf_run.py --${test_type|wtperf} -e ${exec_path|./wtperf} -t ${perf-test-path|../../../bench/wtperf/runners}/${perf-test-name} -ho WT_TEST -m ${maxruns} -v -b -o test_stats/evergreen_out_${perf-test-name}.json ${wtarg} -ops ${ops}
+          ${python_binary|python3} ../../../bench/perf_run_py/perf_run.py --${test_type|wtperf} -e ${exec_path|./wtperf} -t ${perf-test-path|../../../bench/wtperf/runners}/${perf-test-name} -ho WT_TEST -m ${maxruns} -v -re -o test_stats/atlas_out_${perf-test-name}.json ${wtarg} -ops ${ops}
     "csuite smoke test":
       command: shell.exec
@@ -4891,7 +4891,7 @@ tasks:
         vars:
           perf-test-name: 500m-btree-populate.wtperf
           maxruns: 1
-          wtarg: -args ['"-C create,statistics=(fast),statistics_log=(json,wait=1,sources=[file:])"'] -ops ['"load", "warnings", "max_latency_insert"']
+          ops: ['"load", "warnings", "max_latency_insert"']
     - func: "upload stats to atlas"
       vars:
         test-name: 500m-btree-populate.wtperf
```
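On the perf_run.py side, the new flag could be wired up with argparse. This is only a sketch of one possible shape: the option name (`-ops`), its `nargs="+"` form, and the default are assumptions for illustration, not the actual perf_run.py interface.

```python
import argparse
import json

def parse_args(argv=None):
    # Hypothetical sketch: accept an -ops flag listing which statistics to
    # report (e.g. load, warnings, max_latency_insert). The flag name and
    # default value here are assumptions, not the real perf_run.py CLI.
    parser = argparse.ArgumentParser(description="perf_run sketch")
    parser.add_argument(
        "-ops", "--operations",
        nargs="+",
        default=["load"],
        help="statistics to extract, e.g. load warnings max_latency_insert",
    )
    return parser.parse_args(argv)

# Demonstrate parsing the ops list Evergreen would pass through ${ops}.
ops = parse_args(["-ops", "load", "warnings", "max_latency_insert"]).operations
print(json.dumps(ops))
```

With this shape, Evergreen only needs to expand `${ops}` onto the command line, and the per-test `vars` block stays the single place where the ops list is declared.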