Details
- Type: Bug
- Resolution: Duplicate
- Priority: Major - P3
- Fix Version/s: None
- Affects Version/s: 3.4.10
- Labels: None
- Component/s: Query
- Operating System: ALL
Description
I have a collection that has documents of the following form:

// Some comments here
{
    "_id": ObjectId(),
    "timestamp": ISODate(),
    "fieldA": Number,
    "fieldB": String,
    ...  // irrelevant fields
}
and the following indexes:
- _id index: { _id: 1 }
- timestamp_index: { timestamp: 1 }
- compound_index: { fieldA: 1, fieldB: 1 }

The collection stores around 2.5 million documents.
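For reference, a minimal sketch of how these indexes would be created in the mongo shell; "collection" is a placeholder for the real collection name:

// Hypothetical setup; "collection" stands in for the actual collection name.
db.collection.createIndex({ timestamp: 1 }, { name: "timestamp_index" })
db.collection.createIndex({ fieldA: 1, fieldB: 1 }, { name: "compound_index" })
// The { _id: 1 } index is created automatically by MongoDB.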
When I issue the query:

db.collection.find({
    "timestamp": {
        "$gte": ISODate("2017-11-29T14:55:15.682Z"),
        "$lte": ISODate("2017-12-06T10:49:36.833Z")
    },
    "fieldA": 0,
    "fieldB": "0013A20041673C35"
}).sort({ timestamp: -1 }).toArray()
The result is about 3,500 documents.
The query planner first evaluates a plan using compound_index and rejects it. It ends up using timestamp_index for the initial IXSCAN stage and applies the rest of the predicates in a FETCH stage. This plan takes about 2.5 seconds to execute.
However, if I force the compound index to be used for the IXSCAN stage with .hint({fieldA: 1, fieldB: 1}), the query executes significantly faster.
So how does the query planner evaluate the performance of each plan and decide which one to reject?
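A minimal sketch of how the planner's candidate plans can be inspected for this query (same filter and sort as above); the "allPlansExecution" verbosity also reports execution stats for the rejected plans:

// Inspect how the planner ranks the candidate plans for this query.
// "allPlansExecution" includes executionStats for rejected plans as well as the winner.
db.collection.find({
    "timestamp": {
        "$gte": ISODate("2017-11-29T14:55:15.682Z"),
        "$lte": ISODate("2017-12-06T10:49:36.833Z")
    },
    "fieldA": 0,
    "fieldB": "0013A20041673C35"
}).sort({ timestamp: -1 }).explain("allPlansExecution")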
The real problem is that this query is used in an aggregation, where I cannot use the hint functionality to enforce index usage, so this is a big step down in performance...
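For illustration only, a hypothetical pipeline with the same filter and sort; the actual aggregation is not shown in this report, so this shape is an assumption. On 3.4, aggregate() does not accept a hint, so the index choice cannot be overridden here:

// Hypothetical pipeline shape, assuming the same filter and sort as the find() above.
// In 3.4 there is no way to pass a hint to aggregate(), so plan selection is left to the planner.
db.collection.aggregate([
    { $match: {
        "timestamp": {
            "$gte": ISODate("2017-11-29T14:55:15.682Z"),
            "$lte": ISODate("2017-12-06T10:49:36.833Z")
        },
        "fieldA": 0,
        "fieldB": "0013A20041673C35"
    } },
    { $sort: { timestamp: -1 } }
])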
Attachments
Issue Links
- duplicates: SERVER-7568 Aggregation framework favors non-blocking sorts (Closed)