Type: Improvement
Resolution: Done
Priority: Major - P3
Affects Version/s: 2.6.3
Component/s: Querying
db.foo.drop();
var filler = '';
for (var c = 0; c < 100; c++) { filler += 'a'; }
for (var i = 0; i < 100000; i++) {
    db.foo.insert({ "n" : Math.floor(100 * Math.random()), "fill" : filler });
}

> db.foo.find().explain()
{
    "cursor" : "BasicCursor",
    "isMultiKey" : false,
    "n" : 100000,
    "nscannedObjects" : 100000,
    "nscanned" : 100000,
    "nscannedObjectsAllPlans" : 100000,
    "nscannedAllPlans" : 100000,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 781,
    "nChunkSkips" : 0,
    "millis" : 15,
    "server" : "skye.local:27017",
    "filterSet" : false
}
> db.foo.find({n:{$gte:0}}).explain()
{
    "cursor" : "BasicCursor",
    "isMultiKey" : false,
    "n" : 100000,
    "nscannedObjects" : 100000,
    "nscanned" : 100000,
    "nscannedObjectsAllPlans" : 100000,
    "nscannedAllPlans" : 100000,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 781,
    "nChunkSkips" : 0,
    "millis" : 39,
    "server" : "skye.local:27017",
    "filterSet" : false
}
In the test above, both queries return the same number of documents and scan the same number of objects, yet the query with the range predicate takes 39 ms versus 15 ms for the unfiltered scan. The overhead of the range comparison looks unexpectedly high. Opening a ticket to see if there's some optimization we can do here.
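For a measurement less sensitive to one-off noise in a single explain()'s "millis" (yields, cache warmth), one way is to wall-clock-time each query over several runs in the mongo shell. A minimal sketch; timeQuery is a hypothetical helper, not a built-in shell function, and itcount() is used only to force full iteration of the cursor:

// Hypothetical helper: averages the wall-clock time of fully
// iterating a find() cursor over several runs.
function timeQuery(predicate, runs) {
    var total = 0;
    for (var r = 0; r < runs; r++) {
        var start = Date.now();
        db.foo.find(predicate).itcount(); // itcount() exhausts the cursor
        total += Date.now() - start;
    }
    return total / runs;
}

timeQuery({}, 10);                 // baseline: plain collection scan
timeQuery({ n: { $gte: 0 } }, 10); // same result set, plus one range comparison per document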
Related to:
SERVER-12871 Seemingly unreasonable overhead to range scans when using indexes (Closed)