- Type: Bug
- Resolution: Works as Designed
- Priority: Major - P3
- None
- Affects Version/s: 6.0.6, 5.0.18
- Component/s: None
- Query Execution
- ALL
While running a batch insert on a Sharded Cluster environment, we noticed a shard getting flooded with the following messages:
{"t":{"$date":"2023-07-10T13:04:32.092-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254712,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.155-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254519,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.222-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254656,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.286-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254613,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.354-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254958,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.418-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254862,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.486-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254983,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.549-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":254639,"limit":51200}} {"t":{"$date":"2023-07-10T13:04:32.611-03:00"},"s":"I", "c":"-", "id":4760300, "ctx":"conn202","msg":"Gathering currentOp information, operation of size {size} exceeds the size limit of {limit} and will be truncated.","attr":{"size":255108,"limit":51200}}
However, while analyzing the problem, we saw that since 5.0, MongoDB changed how currentOp works, as documented here and here.
Alongside that, the message itself is not very clear about the limit: is it in bytes or kilobytes?
I'm assuming it's reporting in bytes, following this reference from the source code:
// When the currentOp command is run, it returns a single response object containing all current
// operations; this request will fail if the response exceeds the 16MB document limit. By
// contrast, the $currentOp aggregation stage does not have this restriction. If 'truncateOps'
// is true, limit the size of each op to 1000 bytes. Otherwise, do not truncate.
const boost::optional<size_t> maxQuerySize{truncateOps, 1000};
If that's correct:
- Why is it truncating at such a low threshold? Shouldn't it be at least 16MB? The code comment says 1000 bytes, while my tests show 51200 bytes (which is exactly 50 KB).
- I'm also wondering why those two values diverge.
- Additionally, since 5.0, currentOp() is built on top of the $currentOp aggregation stage, which removes the 16MB BSON document size limitation mentioned above (see the sketch after this list).
- Then, the operation itself should not hit that truncation condition at all.
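To illustrate the last two points, here is how I understand the two forms compare (a rough sketch; the filter and options are just examples, and my reading of the truncateOps option may be off):

// Legacy form: the currentOp command packs all matching operations into a single
// response document, which is where the 16MB limit (and the need to truncate
// individual ops) comes from.
db.currentOp({ op: "insert" });

// Aggregation form (what 5.0+ builds on): results come back through a cursor, so
// the 16MB single-document limit should not apply. Setting truncateOps: false
// explicitly here just makes that expectation visible.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: false, truncateOps: false } },
  { $match: { op: "insert" } }
]);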
I've tested this on 4.4.22, 5.0.18, and 6.0.6, but got those messages only on 5.0.18 and 6.0.6.