Steps To Reproduce:
- insert n documents
- query for a range of (for example) 100 documents with a batch size of 10
- retrieve the first batch
- remove a document later in the batch
- exhaust the cursor
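For illustration, here is a minimal sketch of those steps against the legacy C++ driver (the same libmongoclient the attached test links against). The namespace test.repro, the document count, and the connection string are placeholder assumptions, not taken from the attached code:

// Sketch only: names below ("test.repro", localhost:27017, n = 100)
// are illustrative assumptions.
#include <iostream>
#include <memory>
#include "mongo/client/dbclient.h"

int main() {
    mongo::client::initialize();  // required by newer legacy-driver releases
    mongo::DBClientConnection conn;
    conn.connect("localhost:27017");

    const std::string ns = "test.repro";
    conn.dropCollection(ns);

    // Insert n documents (n = 100 here).
    for (int i = 0; i < 100; ++i)
        conn.insert(ns, BSON("_id" << i));

    // Query a range of 100 documents with a batch size of 10.
    mongo::Query range(BSON("_id" << mongo::GTE << 0 << mongo::LT << 100));
    std::auto_ptr<mongo::DBClientCursor> cursor =
        conn.query(ns, range, 0 /*nToReturn*/, 0 /*nToSkip*/,
                   0 /*fieldsToReturn*/, 0 /*queryOptions*/, 10 /*batchSize*/);

    // Retrieve the first batch only (no getMore issued yet).
    int seen = 0;
    while (cursor->moreInCurrentBatch()) {
        cursor->next();
        ++seen;
    }

    // Remove a document that falls in a later, not-yet-fetched batch.
    conn.remove(ns, mongo::Query(BSON("_id" << 50)));

    // Exhaust the cursor; remaining batches are fetched via getMore.
    while (cursor->more()) {
        cursor->next();
        ++seen;
    }
    std::cout << "documents returned: " << seen << std::endl;
    return 0;
}

Draining the first batch with moreInCurrentBatch() keeps the client from issuing a getMore until after the remove, so the later batches are fetched with the document already gone.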
The attached code is more of a stress test. It can be run against v2.4 and v2.6 servers, and produces a .json file for each run; the results can be imported and aggregated as described below.
First, build with something like:
c++ -I/opt/libmongoclient/include -std=c++11 -o op_bench_repro.o -c op_bench_repro.cpp
c++ -I/opt/libmongoclient/include -std=c++11 -o scoped_probe.o -c scoped_probe.cpp
c++ op_bench_repro.o scoped_probe.o -o op_bench_repro -rdynamic -lmongoclient \
    -lboost_thread-mt -lboost_filesystem -lboost_program_options -lboost_system
Then import the results into MongoDB:
echo '[' > tmp.json
find . -name '*.json' | xargs cat >> tmp.json
echo ']' >> tmp.json
mongoimport --jsonArray --collection regression tmp.json
Then aggregate:
db.regression.aggregate({
    $group: {
        _id: {ServerVersion: '$ServerVersion', TestName: '$TestName'},
        Seconds: {'$sum': '$ClockSeconds'}
    }
});