|
support@sesam.hk, can you please open a separate ticket? Query issues often have different root causes even when exhibiting the same symptoms.
Thanks,
Ramón.
|
|
We have the 'same' issue with mongo 3.0.5.
Randomly, a query (on a slave or the master) stops using an index and starts scanning the collection. Could you tell me what you need to debug this kind of issue?
|
|
I'm seeing something that looks like this with 2.6.9:
2015-04-20T16:26:05.147+0000 [conn61829] query *****.followings query: { $query: { user_id: ObjectId('*********') }, $orderby: { _id: 1 } } planSummary: IXSCAN { _id: 1 } ntoskip:0 nscanned:125454 nscannedObjects:125454 keyUpdates:0 numYields:0 locks(micros) r:181778 nreturned:0 reslen:20 181ms
db.followings.getIndexes()
[
{
"v" : 1,
"name" : "id",
"key" :
{
"_id" : 1
}
,
"ns" : "*****.followings"
},
{
"v" : 1,
"key" :
{
"user_id" : 1
}
,
"name" : "user_id_1",
"ns" : "***********.followings"
}
]
The query at the console on the primary always uses the 'user_id_1' index, but intermittently, the query will start running very slowly on slaves in the replica set. The behavior started after upgrading from 2.4 last week, and happens on some other simple collections too. If I drop and regenerate the index the problem resets itself, but I assume that's similar to just restarting mongod. What is normally a 1ms query jumps to 100ms or more.
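If this is indeed a cached-plan problem, clearing the per-collection plan cache should reset things the same way the index rebuild does, without touching the index at all. A minimal sketch, assuming the collection and index names above, with <user_id> as a placeholder (getPlanCache() is available from 2.6 onward):
// flush cached plans for this collection only (no index rebuild, no mongod restart)
db.followings.getPlanCache().clear()
// check which plan the optimizer picks now
db.followings.find({ user_id: ObjectId("<user_id>") }).sort({ _id: 1 }).explain()
// or pin the query to the intended index as a stopgap
db.followings.find({ user_id: ObjectId("<user_id>") }).sort({ _id: 1 }).hint("user_id_1").explain()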
|
|
hari.pulugurta, your issue looks like SERVER-14071 to me.
|
|
Bug still happens in version 2.6.4 on our production database.
tt2:PRIMARY> db.version()
2.6.4
The working set fits in RAM on our prod servers:
"workingSet" :
{ "note" : "thisIsAnEstimate", "pagesInMemory" : 10039, "computationTimeMicros" : 15146, "overSeconds" : 1722}
Collection name: tt2.surveys
Docs: 2949936 (~2.9 million)
From our logs, we are seeing the following under heavy load. It's taking down our systems and causing blockage, because the query ends up scanning thousands of records instead of tens:
query tt2.surveys query: { $query: { AppPeriodId: BinData(3, F4EB45477D11D8429C22A2FB00A6CF64), Title: "Resume", AccountId: BinData(3, 23AF59F96695EE46B06EA40D0180F0F0), Status: { $ne: -1 } }, $orderby: { CreatedOn: 1 } } planSummary: IXSCAN { Title: 1.0, CreatedOn: 1.0, _id: 1.0 } ntoreturn:0 ntoskip:0 nscanned:213830 nscannedObjects:213830 keyUpdates:0 numYields:97 locks(micros) r:144946310 nreturned:1 reslen:2427 103581ms
|
When we run this query individually, it hits the right index:
"BtreeCursor AccountId_1_AppPeriodId_1_Category_1_PrimIndex_1"
|
|
r0mant: thanks so much for the report. I'm sorry for the delay in response; the query subsystem saw a lot of development in MongoDB 2.5.x, and we really appreciate reports of quirks in the plan ranking / index selection process.
I reviewed the information you sent about your three queries:
1. Query shape {mid: 1, key: 1} uses suboptimal index {mid: 1, r1: -1} over optimal index {mid: 1, key: 1}.
2. Query shape {did: 1, fcount: {$gte: 1}, key: 1} uses suboptimal index {did: 1, fcount: 1, _tim: -1} over optimal index {did: 1, key: 1}.
3. Query shape {did: 1, fcount: {$gte: 1}} with sort {_tim: -1} uses suboptimal index {_tim: -1} over optimal index {did: 1, key: 1}.
My assessment is that issues #1 and #2 are a manifestation of SERVER-13675, and issue #3 is a manifestation of SERVER-12923.
Some background: to choose a query plan to use for a given query when multiple candidate plans exist, the query engine runs each candidate plan and then picks the plan that produced the most results during a trial period on a subset of the data to be scanned. The winning query plan is then cached, and used for subsequent queries of the same shape until the cache entry is invalidated (which happens under certain conditions, such as when the data distribution in the collection changes sufficiently or when the chosen query plan performs consistently much worse than it did during initial selection).
In short, the issues you're seeing are a result of overaggressive query plan caching. When a query returns few (or zero) results and two candidate plans perform just about equally well, one is arbitrarily chosen as the winning plan and cached. In your case, a query plan that performed acceptably once is cached and then used against a different query with the same shape (where it performs abysmally). The fix for SERVER-13675 addresses this issue; it's been pushed to development, and is scheduled for the upcoming 2.6.2 release. If you're able to, please test the 2.6.2-rc0 release candidate in a staging environment when it comes out (the release date hasn't been set in stone, but is likely within the next week) to see whether it resolves the problems you're encountering with these queries. SERVER-12923 is a bit more difficult to address, as fundamentally the fix is to tweak the tradeoff between CPU and I/O (large sort versus large index scan). The workaround for this issue is to apply a hint() or use index filters (or to drop these indexes, if they're unneeded by your application).
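As an illustration of the index filter workaround for the third query shape, it would look roughly like this (a sketch; <coll> and the ObjectId value are placeholders, and note that index filters do not persist across a mongod restart):
// restrict the planner to the { did: 1, key: 1 } index for this query shape
db.runCommand({
    planCacheSetFilter: "<coll>",
    query: { did: ObjectId("<some did>"), fcount: { $gte: 1 } },   // only the shape matters, not the values
    sort: { _tim: -1 },
    projection: {},
    indexes: [ { did: 1, key: 1 } ]
})
// list the filters currently in effect
db.runCommand({ planCacheListFilters: "<coll>" })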
(styniu: the fix for SERVER-13675 will also resolve your issue.)
I'm resolving this ticket as a dup of 13675/12923. Please feel free to re-open it if you have further questions.
|
|
I confirm, we have the same problem on 2.6.1.
So far it happened only on secondary replica-set members though. The database contains ~2M documents.
Once mongo starts using the wrong index, it keeps going for hours until it is restarted. The node is practically unusable during that time, with server load jumping to 400 (on a 12-core machine).
Here is a sample log entry:
2014-05-29T09:07:58.890+0200 [conn18225] query descriptions.hotel_descriptions query: { mh_code: "BER0001P", language: { $in: [ "de", "en" ] }, source: { $in: [ "GTA", "HILTON", "HOTELBEDS", "JACTRAVEL", "MIKI", "MONDIAL", "SIDETOURS", "SUNHOTELS", "SUNISLANDS", "TOURICO", "YOUTRAVEL" ] } } planSummary: IXSCAN { source: 1.0, language: 1.0, detected_language: 1.0 } ntoreturn:0 ntoskip:0 nscanned:476809 nscannedObjects:476794 keyUpdates:0 numYields:3582 locks(micros) r:118718924 nreturned:8 reslen:24724 225407ms
And the same query sent through the console after a mongod restart:
resfinity:SECONDARY> db.hotel_descriptions.find({ mh_code: "BER0001P", language: { $in: [ "de", "en" ] }, source: { $in: [ "GTA", "HILTON", "HOTELBEDS", "JACTRAVEL", "MIKI", "MONDIAL", "SIDETOURS", "SUNHOTELS", "SUNISLANDS", "TOURICO", "YOUTRAVEL" ] } }).explain()
{
"cursor" : "BtreeCursor mh_code_1_language_1_source_1",
"isMultiKey" : false,
"n" : 8,
"nscannedObjects" : 8,
"nscanned" : 14,
"nscannedObjectsAllPlans" : 51,
"nscannedAllPlans" : 67,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 1,
"nChunkSkips" : 0,
"millis" : 16,
"indexBounds" :
{
"mh_code" : [
[
"BER0001P",
"BER0001P"
]
],
"language" : [
[
"de",
"de"
],
[
"en",
"en"
]
],
"source" : [
[
"GTA",
"GTA"
],
[
"HILTON",
"HILTON"
],
[
"HOTELBEDS",
"HOTELBEDS"
],
[
"JACTRAVEL",
"JACTRAVEL"
],
[
"MIKI",
"MIKI"
],
[
"MONDIAL",
"MONDIAL"
],
[
"SIDETOURS",
"SIDETOURS"
],
[
"SUNHOTELS",
"SUNHOTELS"
],
[
"SUNISLANDS",
"SUNISLANDS"
],
[
"TOURICO",
"TOURICO"
],
[
"YOUTRAVEL",
"YOUTRAVEL"
]
]
}
,
"server" : "mongo:27017",
"filterSet" : false
}
BR,
Sebastian
|
|
Hi guys,
Here are some more findings.
The collection referenced here is the second one from my previous comment. I looked into "db.currentOp()" output on the server and found a number of long-running operations for this collection:
{
|
"opid" : 312743,
|
"active" : true,
|
"secs_running" : 1859,
|
"op" : "query",
|
"ns" : "<db>.<coll>",
|
"query" : {
|
"$query" : {
|
"did" : ObjectId("52f3cf28fea3984688f1907a"),
|
"fcount" : {
|
"$gte" : 12
|
}
|
},
|
"$orderby" : {
|
"_tim" : -1
|
}
|
},
|
"planSummary" : "IXSCAN { _tim: -1.0 }",
|
"client" : "10.177.23.49:46948",
|
"desc" : "conn598",
|
"threadId" : "0x7f5a553c6700",
|
"connectionId" : 598,
|
"locks" : {
|
"^" : "r",
|
"^<db>" : "R"
|
},
|
"waitingForLock" : false,
|
"numYields" : 104925,
|
"lockStats" : {
|
"timeLockedMicros" : {
|
"r" : NumberLong("2593336689"),
|
"w" : NumberLong(0)
|
},
|
"timeAcquiringMicros" : {
|
"r" : NumberLong(38269123),
|
"w" : NumberLong(0)
|
}
|
}
|
}
|
As you can see, for some reason the whole-index-scan plan was picked (the index on _tim) even though a more appropriate index is available. I guess this is the reason for the performance issues: a whole index scan on an 80MM-document collection is a big deal, and as these operations pile up on the server, it becomes less and less responsive.
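In the meantime, to keep the node responsive we can locate and kill those runaway operations from the shell. A rough sketch (the namespace and the 60-second threshold are placeholders):
// list operations on this collection that have been running for more than a minute
db.currentOp().inprog.filter(function (op) {
    return op.ns === "<db>.<coll>" && op.secs_running > 60;
}).forEach(function (op) {
    printjson({ opid: op.opid, secs_running: op.secs_running, planSummary: op.planSummary });
    // db.killOp(op.opid);   // uncomment to actually terminate the operation
});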
Here is "getPlansByQuery" output for this type of query:
aux:PRIMARY> db.<coll>.getPlanCache().getPlansByQuery({ did: ObjectId('520d5f9bfea3981872941c66'), fcount: {$gte: 12}}, {}, {_tim: -1})
|
[
|
{
|
"details" : {
|
"solution" : "(index-tagged expression tree: tree=Node\n---Leaf { did: 1.0, key: 1.0 }, pos: 0\n---Leaf \n)"
|
},
|
"reason" : {
|
"score" : 1.001813186602565,
|
"stats" : {
|
"type" : "KEEP_MUTATIONS",
|
"works" : 61751,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 101,
|
"needTime" : 60792,
|
"needFetch" : 858,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "OR",
|
"works" : 61751,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 101,
|
"needTime" : 60792,
|
"needFetch" : 858,
|
"isEOF" : 0,
|
"dupsTested" : 126,
|
"dupsDropped" : 25,
|
"locsForgotten" : 0,
|
"matchTested_0" : 0,
|
"matchTested_1" : 0,
|
"children" : [
|
{
|
"type" : "SORT",
|
"works" : 31267,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 25,
|
"needTime" : 30382,
|
"needFetch" : 858,
|
"isEOF" : 1,
|
"forcedFetches" : 0,
|
"memUsage" : 6974,
|
"memLimit" : 33554432,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 31240,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30270,
|
"needTime" : 111,
|
"needFetch" : 858,
|
"isEOF" : 1,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 30270,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 30381,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30381,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"keyPattern" : "{ did: 1.0, key: 1.0 }",
|
"boundsVerbose" : "field #0['did']: [ObjectId('4fda4f1d3181332d1f0164f4'), ObjectId('4fda4f1d3181332d1f0164f4')], field #1['key']: [MinKey, MaxKey]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 30381,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
},
|
{
|
"type" : "SORT",
|
"works" : 30484,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 101,
|
"needTime" : 30382,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"forcedFetches" : 0,
|
"memUsage" : 8109462,
|
"memLimit" : 33554432,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 30382,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30270,
|
"needTime" : 111,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 30270,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 30381,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30381,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"keyPattern" : "{ did: 1.0, key: 1.0 }",
|
"boundsVerbose" : "field #0['did']: [ObjectId('4fda4f1d3181332d1f0164f4'), ObjectId('4fda4f1d3181332d1f0164f4')], field #1['key']: [MinKey, MaxKey]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 30381,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
"nfeedback" : 0,
|
"averageScore" : 0,
|
"stdDevScore" : 0,
|
"scores" : [ ]
|
},
|
"filterSet" : false
|
},
|
{
|
"details" : {
|
"solution" : "(index-tagged expression tree: tree=Node\n---Leaf { did: 1, fcount: 1, _tim: -1 }, pos: 0\n---Leaf { did: 1, fcount: 1, _tim: -1 }, pos: 1\n)"
|
},
|
"reason" : {
|
"score" : 1.0005968316957412,
|
"stats" : {
|
"type" : "KEEP_MUTATIONS",
|
"works" : 61751,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 25,
|
"needTime" : 60478,
|
"needFetch" : 1248,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "OR",
|
"works" : 61751,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 25,
|
"needTime" : 60478,
|
"needFetch" : 1248,
|
"isEOF" : 0,
|
"dupsTested" : 25,
|
"dupsDropped" : 0,
|
"locsForgotten" : 0,
|
"matchTested_0" : 0,
|
"matchTested_1" : 0,
|
"children" : [
|
{
|
"type" : "SORT",
|
"works" : 31546,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 25,
|
"needTime" : 30271,
|
"needFetch" : 1248,
|
"isEOF" : 1,
|
"forcedFetches" : 0,
|
"memUsage" : 6974,
|
"memLimit" : 33554432,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 31519,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30270,
|
"needTime" : 0,
|
"needFetch" : 1248,
|
"isEOF" : 1,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 30270,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30270,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"keyPattern" : "{ did: 1, fcount: 1, _tim: -1 }",
|
"boundsVerbose" : "field #0['did']: [ObjectId('4fda4f1d3181332d1f0164f4'), ObjectId('4fda4f1d3181332d1f0164f4')], field #1['fcount']: [12, inf.0], field #2['_tim']: [MaxKey, MinKey]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 30270,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
},
|
{
|
"type" : "SORT",
|
"works" : 30205,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 30204,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"forcedFetches" : 0,
|
"memUsage" : 8092321,
|
"memLimit" : 33554432,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 30204,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30204,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 30204,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 30204,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"keyPattern" : "{ did: 1, fcount: 1, _tim: -1 }",
|
"boundsVerbose" : "field #0['did']: [ObjectId('4fda4f1d3181332d1f0164f4'), ObjectId('4fda4f1d3181332d1f0164f4')], field #1['fcount']: [12, inf.0], field #2['_tim']: [MaxKey, MinKey]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 30205,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
|
},
|
"filterSet" : false
|
},
|
{
|
"details" : {
|
"solution" : "(whole index scan solution: dir=1; tree=Leaf { _tim: -1.0 }, pos: 0\n)"
|
},
|
"reason" : {
|
"score" : 1.0003,
|
"stats" : {
|
"type" : "FETCH",
|
"works" : 61751,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 61751,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 61751,
|
"yields" : 3090,
|
"unyields" : 3090,
|
"invalidates" : 0,
|
"advanced" : 61751,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"keyPattern" : "{ _tim: -1.0 }",
|
"boundsVerbose" : "field #0['_tim']: [MaxKey, MinKey]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 61752,
|
"children" : [ ]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
|
},
|
"filterSet" : false
|
}
|
]
|
After I dropped the index on "_tim", those queries were interrupted and logged the following, which confirms the wrong index pick:
2014-05-12T21:35:31.739+0000 [conn215] query <db>.<coll> query: { $query: { did: ObjectId('52f3cf28fea3984688f1907a'), fcount: { $gte: 12 } }, $orderby: { _tim: -1 } } planSummary: IXSCAN { _tim: -1.0 } ntoreturn:25 ntoskip:0 nscanned:29691554 nscannedObjects:29691553 keyUpdates:0 numYields:106831 locks(micros) r:2673209846 nreturned:6 reslen:1741 1915146ms
|
Have you had a chance to figure out what the issue here is yet?
Thanks,
Roman
|
|
Hey guys,
Any luck figuring out what might be going on here? Let me know if you need more info.
Thanks,
Roman
|
|
Hi Jason,
Neither of these fields is an array.
Unfortunately, I already removed {"mid": 1, "r1": -1} and {"mid": 1, "r2": -1} indexes from the collection - I found out they were not needed for our types of queries and removed them hoping that would help Mongo pick the right one. But here's the output of those commands anyway:
aux:PRIMARY> db.<coll>.getIndexes()
|
[
|
{
|
"v" : 1,
|
"key" : {
|
"_id" : 1
|
},
|
"ns" : "<db>.<coll>",
|
"name" : "_id_"
|
},
|
{
|
"v" : 1,
|
"key" : {
|
"key" : 1
|
},
|
"ns" : "<db>.<coll>",
|
"name" : "key_1",
|
"background" : true
|
},
|
{
|
"v" : 1,
|
"key" : {
|
"mid" : 1,
|
"key" : 1
|
},
|
"ns" : "<db>.<coll>",
|
"name" : "mid_1_key_1",
|
"background" : true
|
},
|
{
|
"v" : 1,
|
"key" : {
|
"mid" : 1
|
},
|
"name" : "mid_1",
|
"ns" : "<db>.<coll>",
|
"background" : true
|
}
|
]
|
aux:PRIMARY> db.<coll>.getPlanCache().getPlansByQuery({ mid: 1, key: 1 })
|
[
|
{
|
"details" : {
|
"solution" : "(index-tagged expression tree: tree=Node\n---Leaf { mid: 1.0, key: 1.0 }, pos: 1\n---Leaf { mid: 1.0, key: 1.0 }, pos: 0\n)"
|
},
|
"reason" : {
|
"score" : 1.0003000000000002,
|
"stats" : {
|
"type" : "LIMIT",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"keyPattern" : "{ mid: 1.0, key: 1.0 }",
|
"boundsVerbose" : "field #0['mid']: [ObjectId('536a737b386774236981dd18'), ObjectId('536a737b386774236981dd18')], field #1['key']: [\"mahmmmod@hotmail.com\", \"mahmmmod@hotmail.com\"]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 0,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
"nfeedback" : 20,
|
"averageScore" : 1.0002999999999997,
|
"stdDevScore" : 2.278129578503827e-16,
|
"scores" : [
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
}
|
]
|
},
|
"filterSet" : false
|
},
|
{
|
"details" : {
|
"solution" : "(index-tagged expression tree: tree=Node\n---Leaf { key: 1.0 }, pos: 0\n---Leaf \n)"
|
},
|
"reason" : {
|
"score" : 1.0003,
|
"stats" : {
|
"type" : "LIMIT",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "KEEP_MUTATIONS",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 1,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"keyPattern" : "{ key: 1.0 }",
|
"boundsVerbose" : "field #0['key']: [\"mahmmmod@hotmail.com\", \"mahmmmod@hotmail.com\"]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 2,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
|
},
|
"filterSet" : false
|
},
|
{
|
"details" : {
|
"solution" : "(index-tagged expression tree: tree=Node\n---Leaf \n---Leaf { mid: 1.0 }, pos: 0\n)"
|
},
|
"reason" : {
|
"score" : 1.0003,
|
"stats" : {
|
"type" : "LIMIT",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "KEEP_MUTATIONS",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 1,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"keyPattern" : "{ mid: 1.0 }",
|
"boundsVerbose" : "field #0['mid']: [ObjectId('536a737b386774236981dd18'), ObjectId('536a737b386774236981dd18')]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 2,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
|
},
|
"filterSet" : false
|
}
|
]
|
aux:PRIMARY> db.<coll>.count()
|
225015608
|
aux:PRIMARY> db.<coll>.count({ mid: ObjectId('4fa0d0914711596b17039e29') })
|
2500000
|
Here's another example, for another collection.
Sometimes the right index is being picked:
2014-05-07T18:10:30.345+0000 [conn6724] query <db>.<coll> query: { did: ObjectId('52228c82fea398670f61320c'), fcount: { $gte: 12 }, key: "eatyourveggies18@hotmail.com" } planSummary: IXSCAN { did: 1.0, key: 1.0 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:533 nreturned:1 reslen:229 101ms
|
Sometimes the wrong one:
2014-05-07T17:19:59.931+0000 [conn338293] query <db>.<coll> query: { did: ObjectId('51cff3f35a76182da8ef5a45'), fcount: { $gte: 12 }, key: "lauragonzalez44@hotmail.com" } planSummary: IXSCAN { did: 1, fcount: 1, _tim: -1 } ntoskip:0 nscanned:56336 nscannedObjects:56336 keyUpdates:0 numYields:464 locks(micros) r:18262542 nreturned:0 reslen:20 94940ms
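(As a stopgap we could hint() the {did: 1, key: 1} index for this query shape, sketched below with placeholder values, but we would prefer the planner to pick it on its own.)
db.<coll>.find({ did: ObjectId("<did>"), fcount: { $gte: 12 }, key: "<key>" }).hint({ did: 1, key: 1 })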
|
Here's the output of the same commands for this collection:
aux:PRIMARY> db.<coll>.getIndexes()
|
[
|
{
|
"v" : 1,
|
"key" : {
|
"_id" : 1
|
},
|
"ns" : "<db>.<coll>",
|
"name" : "_id_"
|
},
|
{
|
"v" : 1,
|
"key" : {
|
"did" : 1,
|
"fcount" : 1,
|
"_tim" : -1
|
},
|
"ns" : "<db>.<coll>",
|
"name" : "did_1_fcount_1__tim_-1"
|
},
|
{
|
"v" : 1,
|
"key" : {
|
"_tim" : -1
|
},
|
"ns" : "<db>.<coll>",
|
"name" : "_tim_-1"
|
},
|
{
|
"v" : 1,
|
"key" : {
|
"did" : 1,
|
"key" : 1
|
},
|
"ns" : "<db>.<coll>",
|
"name" : "did_1_key_1"
|
}
|
]
|
aux:PRIMARY> db.<coll>.getPlanCache().getPlansByQuery({ did: ObjectId('529e2a224fda9e7da20b4e6b'), fcount: { $gte: 12 }, key: "maggie7707@gmail.com" })
|
[
|
{
|
"details" : {
|
"solution" : "(index-tagged expression tree: tree=Node\n---Leaf { did: 1.0, key: 1.0 }, pos: 0\n---Leaf { did: 1.0, key: 1.0 }, pos: 1\n---Leaf \n)"
|
},
|
"reason" : {
|
"score" : 1.0003000000000002,
|
"stats" : {
|
"type" : "LIMIT",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"children" : [
|
{
|
"type" : "KEEP_MUTATIONS",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 1,
|
"keyPattern" : "{ did: 1.0, key: 1.0 }",
|
"boundsVerbose" : "field #0['did']: [ObjectId('51483d48fea39833900003d9'), ObjectId('51483d48fea39833900003d9')], field #1['key']: [\"kirstyjlawton@btinternet.com\", \"kirstyjlawton@btinternet.com\"]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 0,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
"nfeedback" : 20,
|
"averageScore" : 1.0502999999999996,
|
"stdDevScore" : 0.22360679774997902,
|
"scores" : [
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 1.0003
|
},
|
{
|
"score" : 2.0003
|
}
|
]
|
},
|
"filterSet" : false
|
},
|
{
|
"details" : {
|
"solution" : "(index-tagged expression tree: tree=Node\n---Leaf { did: 1, fcount: 1, _tim: -1 }, pos: 0\n---Leaf \n---Leaf { did: 1, fcount: 1, _tim: -1 }, pos: 1\n)"
|
},
|
"reason" : {
|
"score" : 1.0003,
|
"stats" : {
|
"type" : "LIMIT",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "KEEP_MUTATIONS",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"children" : [
|
{
|
"type" : "FETCH",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 0,
|
"needTime" : 1,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"alreadyHasObj" : 0,
|
"forcedFetches" : 0,
|
"matchTested" : 0,
|
"children" : [
|
{
|
"type" : "IXSCAN",
|
"works" : 1,
|
"yields" : 0,
|
"unyields" : 0,
|
"invalidates" : 0,
|
"advanced" : 1,
|
"needTime" : 0,
|
"needFetch" : 0,
|
"isEOF" : 0,
|
"keyPattern" : "{ did: 1, fcount: 1, _tim: -1 }",
|
"boundsVerbose" : "field #0['did']: [ObjectId('51483d48fea39833900003d9'), ObjectId('51483d48fea39833900003d9')], field #1['fcount']: [12, inf.0], field #2['_tim']: [MaxKey, MinKey]",
|
"isMultiKey" : 0,
|
"yieldMovedCursor" : 0,
|
"dupsTested" : 0,
|
"dupsDropped" : 0,
|
"seenInvalidated" : 0,
|
"matchTested" : 0,
|
"keysExamined" : 2,
|
"children" : [ ]
|
}
|
]
|
}
|
]
|
}
|
]
|
}
|
},
|
"feedback" : {
|
|
},
|
"filterSet" : false
|
}
|
]
|
aux:PRIMARY> db.<coll>.count()
|
71894161
|
Hopefully this helps. Let me know if I can provide more info.
Thanks,
Roman
|
|
Hi Roman,
Thanks for the report.
I'd like to gather additional information to further diagnose this issue. Could you paste the output of running each of the following at the mongo shell? This should help create a picture as to why the {mid:1, key:1} index isn't always being picked for this query.
- db.getSiblingDB(<db>).getCollection(<coll>).getIndexes()
- db.getSiblingDB(<db>).getCollection(<coll>).getPlanCache().getPlansByQuery({ mid: 1, key: 1 })
- db.getSiblingDB(<db>).getCollection(<coll>).count()
- db.getSiblingDB(<db>).getCollection(<coll>).count({ mid: ObjectId('4fa0d0914711596b17039e29') })
Feel free to scrub the collection name from the output, noting that you did so in the ticket description.
In addition, for this collection do you have any documents where the type of value for the "mid", "key", or "r1" fields is an array?
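If it's not obvious from your schema, one quick way to check is the isMultiKey field of a hinted explain(); it reports true if any field covered by that index holds an array in some document. A sketch with placeholder values:
// isMultiKey: true here would mean some document stores an array in "mid" or "key"
db.getSiblingDB("<db>").getCollection("<coll>").find({ mid: ObjectId("<some mid>") }).hint({ mid: 1, key: 1 }).explain().isMultiKey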
~ Jason Rassi
|