[SERVER-77653] Batch write size estimation logic on mongos doesn't account for top-level command fields Created: 31/May/23  Updated: 30/Jan/24

Status: Open
Project: Core Server
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Kaitlin Mahar Assignee: Backlog - Query Execution
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to SERVER-53387 Large internal metadata can trigger B... Backlog
related to SERVER-74806 Write size estimation logic does not ... Closed
related to SERVER-73536 Account for the size of the outgoing ... Closed
Assigned Teams:
Query Execution
Operating System: ALL
Participants:

 Description   

The logic we currently use on mongos to determine whether adding a write to a batch would make the batch too big only accounts for the total estimated size of the individual write operations accumulated so far. It does not appear to account for the size of top-level command fields (for example, let on an update command), though it likely should.
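A minimal sketch of the failure mode described above (hypothetical Python, not the actual mongos code; the size function, limit, and field names are illustrative stand-ins for BSON sizing and the real command limits): a batch splitter that sums only per-write sizes can produce a batch that fits the limit on its own but exceeds it once top-level fields such as let are appended to the command.

```python
import json

MAX_MESSAGE_BYTES = 1000  # toy stand-in for the real command size limit


def doc_size(doc):
    # Stand-in for BSON size; the server would use the BSON encoder here.
    return len(json.dumps(doc))


def naive_batches(writes, max_bytes=MAX_MESSAGE_BYTES):
    """Split writes into batches, counting only per-write sizes
    (mirrors the estimation gap described in this ticket)."""
    batches, current, current_size = [], [], 0
    for w in writes:
        s = doc_size(w)
        if current and current_size + s > max_bytes:
            batches.append(current)
            current, current_size = [], 0
        current.append(w)
        current_size += s
    if current:
        batches.append(current)
    return batches


def batches_with_overhead(writes, top_level_fields, max_bytes=MAX_MESSAGE_BYTES):
    """Same split, but reserve room for top-level command fields
    (e.g. a "let" document) before packing the writes."""
    overhead = doc_size(top_level_fields)
    return naive_batches(writes, max_bytes - overhead)
```

With a large let document, a batch that naive_batches considers full can overflow the limit once the top-level fields are added, while batches_with_overhead leaves headroom for them.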

Relevant logic:

We are going to factor in such fields for the bulkWrite command specifically in SERVER-73536, but it seems we should make similar changes for insert/update/delete as well.

SERVER-74806 was a recent case where $merge generated an oversized internal update command because its similar estimation logic did not factor in let and the legacy runtime constants.

This is somewhat related to SERVER-53387, though that ticket is about factoring in metadata size.



 Comments   
Comment by Max Hirschhorn [ 31/May/23 ]

Sending this ticket along to the Query Execution team, because my understanding is that the Query Execution team is intended to own the sharded write path, and Mihai has done similar work on batch write estimation in other tickets.

Generated at Thu Feb 08 06:36:12 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.