Details
Type: Question
Resolution: Unresolved
Priority: Major - P3
Component: Query Optimization
Sprints: Query Execution 2021-03-08, Query Execution 2021-03-22
Description
We currently reject {$limit: 0}:
> db.foo.explain().aggregate({$limit: 0})
uncaught exception: Error: command failed: {
	"ok" : 0,
	"errmsg" : "the limit must be positive",
	"code" : 15958,
	"codeName" : "Location15958"
} with original command request: {
	"aggregate" : "foo",
	"pipeline" : [
		{
			"$limit" : 0
		}
	],
	"explain" : true,
	"cursor" : {

	},
	"lsid" : {
		"id" : UUID("ffba48f6-ae33-4784-9a87-28962126ebdb")
	}
}
But we allow a {$limit: 0} to appear after optimization:
> db.foo.explain().aggregate({$limit: 2}, {$skip: 3})
{
	"explainVersion" : "1",
	"queryPlanner" : {
		"namespace" : "blah.foo",
		"indexFilterSet" : false,
		"parsedQuery" : {

		},
		"queryHash" : "8B3D4AB8",
		"planCacheKey" : "8B3D4AB8",
		"optimizedPipeline" : true,
		"maxIndexedOrSolutionsReached" : false,
		"maxIndexedAndSolutionsReached" : false,
		"maxScansToExplodeReached" : false,
		"winningPlan" : {
			"stage" : "LIMIT",
			"limitAmount" : 0,
			"inputStage" : {
				"stage" : "SKIP",
				"skipAmount" : 2,
				"inputStage" : {
					"stage" : "COLLSCAN",
					"direction" : "forward"
				}
			}
		},
		"rejectedPlans" : [ ]
	},
	"command" : {
		"aggregate" : "foo",
		"pipeline" : [
			{
				"$limit" : 2
			},
			{
				"$skip" : 3
			}
		],
		"explain" : true,
		"cursor" : {

		},
		"lsid" : {
			"id" : UUID("ffba48f6-ae33-4784-9a87-28962126ebdb")
		},
		"$db" : "blah"
	},
	"serverInfo" : {
		"host" : "ip-10-122-10-16",
		"port" : 27017,
		"version" : "4.9.0-alpha4-13-gbed3256",
		"gitVersion" : "bed32560b4ef8df1eb6635c6d756119ab0e685a4"
	},
	"ok" : 1
}
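To illustrate how a user-supplied positive limit can become 0, here is a hypothetical sketch of the limit/skip coalescing arithmetic (coalesceLimitSkip is an invented name, not MongoDB's actual implementation):

```javascript
// Hypothetical sketch (not MongoDB source code): folding a user-written
// {$limit: l} followed by {$skip: s} into a single SKIP/LIMIT plan.
// At most max(l - s, 0) documents can survive the pair, so a perfectly
// legal pipeline like {$limit: 2}, {$skip: 3} coalesces to a limit of 0.
function coalesceLimitSkip(l, s) {
  return {
    limitAmount: Math.max(l - s, 0), // 0 for l = 2, s = 3
    skipAmount: Math.min(s, l),      // 2 for l = 2, s = 3
  };
}

console.log(coalesceLimitSkip(2, 3)); // { limitAmount: 0, skipAmount: 2 }
```

Under this assumption the result matches the winning plan above: a LIMIT stage with limitAmount 0 over a SKIP stage with skipAmount 2, even though the user never wrote a zero limit.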
This seems inconsistent. Either we should reject all such pipelines, since they likely indicate user error, or we should allow both.
I haven't tested on a sharded system, but I could imagine the current behavior causing an issue there, since mongos would serialize a {$limit: 0} stage after optimization, which a shard would presumably then reject at parse time.