- Type: Bug
- Resolution: Unresolved
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
- Query Execution
- ALL
The logic mongos currently uses to determine whether adding a write to a batch would make the batch too big only takes into account the estimated size of each individual write operation accumulated so far. It does not appear to take into account the size of top-level command fields, for example let on an update command, but it likely should (see the sketch after the list below).
Relevant logic:
- Calculate the size of the individual write op
- Check whether that calculated size plus the existing ops' calculated size pushes us over MaxBSONObjSize
- Increase our size estimate each time we add a write
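To make the gap concrete, here is a minimal standalone sketch of the sizing approach described above. It is not the actual mongos code; all names (WriteOp, Batch, estimateOpSizeBytes, addToBatchIfFits, kMaxBSONObjSize) are hypothetical. The running estimate only ever grows by per-op sizes, so bytes contributed by top-level command fields are never counted.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

namespace {

// Hypothetical stand-in for the 16MB user-facing BSON document limit.
constexpr std::size_t kMaxBSONObjSize = 16 * 1024 * 1024;

// Stand-in for a single insert/update/delete statement in the incoming batch.
struct WriteOp {
    std::string payload;  // pretend this is the op's serialized BSON
};

// Step 1: estimate the size of an individual write op.
std::size_t estimateOpSizeBytes(const WriteOp& op) {
    return op.payload.size();
}

struct Batch {
    std::vector<WriteOp> ops;
    std::size_t estimatedSizeBytes = 0;  // grows only by per-op estimates
};

// Steps 2 and 3: reject the op if it would push the batch over the limit,
// otherwise add it and bump the running estimate. Nothing here accounts for
// top-level command fields such as 'let', which is the gap this ticket describes.
bool addToBatchIfFits(Batch& batch, WriteOp op) {
    const std::size_t opSize = estimateOpSizeBytes(op);
    if (batch.estimatedSizeBytes + opSize > kMaxBSONObjSize) {
        return false;  // caller starts a new batch for this op
    }
    batch.ops.push_back(std::move(op));
    batch.estimatedSizeBytes += opSize;
    return true;
}

}  // namespace

int main() {
    Batch batch;
    addToBatchIfFits(batch, WriteOp{std::string(10 * 1024 * 1024, 'x')});
    // A second 10MB op does not fit, so the batching correctly splits here...
    bool fits = addToBatchIfFits(batch, WriteOp{std::string(10 * 1024 * 1024, 'x')});
    std::cout << "second op fits: " << fits << "\n";
    // ...but a large 'let' document on the command itself would never be counted.
    return 0;
}
```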
We are going to factor in such fields for the bulkWrite command specifically in SERVER-73536, but it seems we may want to make similar changes for insert/update/delete as well.
SERVER-74806 was a recent case where $merge generated a too-large internal update command because its analogous size-estimation logic did not factor in let and the legacy runtime constants.
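One possible shape for the fix, sketched here only as an illustration and not as the SERVER-73536 implementation: seed the batch's running estimate with the size of the command's top-level fields, so that let, legacy runtime constants, and other non-op fields are counted before any write ops are added. All names below are hypothetical.

```cpp
#include <cstddef>

// Hypothetical summary of the top-level fields of an update/delete command.
struct CommandBaseSizes {
    std::size_t letBytes = 0;               // serialized size of 'let', if present
    std::size_t runtimeConstantsBytes = 0;  // serialized size of legacy runtime constants
    std::size_t otherTopLevelBytes = 0;     // namespace, ordered, writeConcern, etc.
};

// Size of the outgoing command before any write ops are appended.
std::size_t estimateBaseCommandSizeBytes(const CommandBaseSizes& cmd) {
    return cmd.letBytes + cmd.runtimeConstantsBytes + cmd.otherTopLevelBytes;
}

struct Batch {
    std::size_t estimatedSizeBytes = 0;
};

// Instead of starting the estimate at zero, start it at the base command size;
// the existing per-op check (op size + running estimate vs. the max BSON size)
// then naturally leaves room for the top-level fields.
Batch makeEmptyBatch(const CommandBaseSizes& cmd) {
    Batch batch;
    batch.estimatedSizeBytes = estimateBaseCommandSizeBytes(cmd);
    return batch;
}
```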
This is somewhat related to SERVER-53387, though that ticket is about factoring in metadata size.
- related to:
  - SERVER-53387 Large internal metadata can trigger BSONObjectTooLarge for commands under the BSON size limit (Backlog)
  - SERVER-74806 Write size estimation logic does not account for runtime/let constants (Closed)
  - SERVER-73536 Account for the size of the outgoing request in bulkWrite sub-batching logic (Closed)