Batch write size estimation logic on mongos doesn't account for top-level command fields


    • Type: Bug
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 8.1.0-rc0
    • Affects Version/s: None
    • Component/s: None
    • Labels: None
    • Assigned Teams: Query Execution
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL

      The logic we currently use on mongos to decide whether adding a write to a batch would make the batch too big only accounts for the total estimated size of the individual write operations accumulated so far. It does not appear to account for the size of top-level command fields, for example let on an update command, but it likely should; see the sketch below.
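
      For illustration only, here is a minimal sketch of the two behaviors. The names (WriteOp, batchFits, kBSONObjMaxUserSize as a stand-in for the 16MB BSON user size limit) are hypothetical and do not correspond to the actual mongos code paths; the point is just that top-level fields are serialized once into the final command document and therefore need to count against the same limit as the ops array.

          #include <cstddef>
          #include <numeric>
          #include <vector>

          // Stand-in for the 16MB BSON user size limit (hypothetical constant name).
          constexpr std::size_t kBSONObjMaxUserSize = 16 * 1024 * 1024;

          struct WriteOp {
              std::size_t estimatedSizeBytes;  // estimated size of one insert/update/delete entry
          };

          std::size_t sumOpSizes(const std::vector<WriteOp>& batch, const WriteOp& next) {
              return std::accumulate(batch.begin(), batch.end(), next.estimatedSizeBytes,
                                     [](std::size_t total, const WriteOp& op) {
                                         return total + op.estimatedSizeBytes;
                                     });
          }

          // Current behavior (simplified): only the per-op sizes are considered.
          bool batchFitsIgnoringTopLevelFields(const std::vector<WriteOp>& batch,
                                               const WriteOp& next) {
              return sumOpSizes(batch, next) <= kBSONObjMaxUserSize;
          }

          // Proposed behavior (simplified): also reserve room for top-level command fields
          // such as "let", which are serialized once per command, not once per op.
          bool batchFits(const std::vector<WriteOp>& batch, const WriteOp& next,
                         std::size_t topLevelFieldsSizeBytes) {
              return sumOpSizes(batch, next) + topLevelFieldsSizeBytes <= kBSONObjMaxUserSize;
          }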

      Relevant logic:

      We plan to factor in such fields for the bulkWrite command specifically in SERVER-73536, but it seems we may want to make similar changes for insert/update/delete as well.

      SERVER-74806 was a recent case where $merge generated a too-large internal update command because its similar size-estimation logic did not factor in let and the legacy runtime constants.
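
      As a purely illustrative numeric example (the byte counts below are invented, not taken from SERVER-74806): a sizing check that looks only at the updates array can pass even though the serialized command, once let and the legacy runtime constants are added, exceeds the 16MB limit.

          #include <cstddef>
          #include <iostream>

          int main() {
              constexpr std::size_t kBSONObjMaxUserSize = 16 * 1024 * 1024;  // 16,777,216 bytes

              // Size of the "updates" array alone: under the limit, so a per-op check passes.
              constexpr std::size_t updatesArrayBytes = 16'600'000;
              // Top-level fields serialized into the same command document.
              constexpr std::size_t letBytes = 150'000;
              constexpr std::size_t legacyRuntimeConstantsBytes = 60'000;

              constexpr std::size_t totalCommandBytes =
                  updatesArrayBytes + letBytes + legacyRuntimeConstantsBytes;

              std::cout << std::boolalpha
                        << "per-op check passes:     "
                        << (updatesArrayBytes <= kBSONObjMaxUserSize) << "\n"   // true
                        << "full command under 16MB: "
                        << (totalCommandBytes <= kBSONObjMaxUserSize) << "\n";  // false
              return 0;
          }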

      This is somewhat related to SERVER-53387, though that ticket is about factoring in metadata size.

              Assignee:
              Rui Liu
              Reporter:
              Kaitlin Mahar
        Votes:
        0
        Watchers:
        9
