Type: Task
Resolution: Unresolved
Priority: Major - P3
Affects Version/s: 3.4.0, 3.6.0
Component/s: Aggregation Framework, Diagnostics
Query Execution
(copied to CRM)
We increment the scanAndOrder metric once per operation, and getMore operations count as separate operations. This leads to the following strange behaviors:
- The number of times this metric is incremented depends on the batchSize used: a smaller batch size means more getMores, and the metric is incremented once per getMore (see the sketch after this list).
- In a sharded aggregation that performs a blocking sort, this metric is incremented at least twice: once for the 'cursor establishment' portion of the aggregate, which is sent with batchSize: 0, and once for the actual computation of the first batch, which happens in a getMore.
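A minimal pymongo sketch of the first bullet, assuming a standalone mongod on localhost:27017; the test.scan_and_order_demo namespace and the "x" field are made up for illustration. It drains a blocking-sorted query in small batches and reads the metrics.operation.scanAndOrder counter from serverStatus before and after.

```python
# Sketch only: assumes a standalone mongod on localhost:27017; the
# test.scan_and_order_demo namespace and the "x" field are hypothetical.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
coll = client.test.scan_and_order_demo
coll.drop()
coll.insert_many({"x": i} for i in range(1000))  # no index on "x", so a
                                                 # sort on it must block

def scan_and_order() -> int:
    # metrics.operation.scanAndOrder from serverStatus
    return client.admin.command("serverStatus")["metrics"]["operation"]["scanAndOrder"]

before = scan_and_order()
# batch_size(10) forces roughly 100 getMores to drain the cursor, even
# though the blocking sort itself happens only once.
list(coll.find().sort("x", 1).batch_size(10))
after = scan_and_order()

# Per the behavior described above, the delta tracks the number of
# getMores rather than the number of blocking sorts performed.
print("scanAndOrder delta:", after - before)
```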
Original Description
Currently, each operation that reports a sort stage increments the global serverStatus counter "metrics.operation.scanAndOrder". When an aggregation is issued against a sharded cluster, this counter can be incremented twice: once for the shards' half of the pipeline and once for the merging half (see also SERVER-32014).
This arguably makes sense, since from the mongod's perspective there are two operations performing a blocking sort. From the client's perspective, however, only one operation performed a blocking sort. From yet another perspective, the server did technically scan and order documents twice for that operation. If this metric is meant to track the number of times the server performed a blocking sort, then we should increment it once per blocking $sort stage, not once per operation.
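To make the sharded case concrete, here is a hedged pymongo sketch (the host, namespace, and field names are assumptions) of a single client-side aggregate with a blocking $sort issued through mongos. Reading metrics.operation.scanAndOrder directly on the shards' mongod processes before and after this one operation would show it moving more than once, since the shards half and the merging half of the pipeline each count as an operation with a sort stage.

```python
# Sketch only: assumes a sharded cluster with mongos on localhost:27017
# and a sharded collection test.sharded_demo; field names are made up.
from pymongo import MongoClient

mongos = MongoClient("localhost", 27017)
coll = mongos.test.sharded_demo

pipeline = [
    {"$match": {"category": "a"}},
    {"$sort": {"unindexed_field": 1}},  # blocking sort: no supporting index
    {"$limit": 100},
]

# One logical operation from the client's perspective. The shards run
# their half of the pipeline and the merging node runs the other half,
# so the server sees (at least) two operations with a sort stage, and
# scanAndOrder can be incremented for each of them.
results = list(coll.aggregate(pipeline))
```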