Core Server / SERVER-82382

Batch/Bulk write size estimation logic on mongos doesn't account for sampleId fields

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None
    • Cluster Scalability
    • ALL
    • 3

      I think the size estimate is calculated here, which eventually calls BatchItemRef::getSizeForBatchWriteBytes and write_ops::getUpdateSizeEstimate. However, the writeOp we use for the size calculation is the writeOp from the user request, which should not have the sampleId field: in our implementation, we only attach the sampleId after we target an operation and before we send it to a shard/mongod. That means that when getUpdateSizeEstimate is called, the sampleId it sees is always boost::none, so the estimate never accounts for the sampleId bytes.

      This ticket should fix the size estimation for both the batchWrite and bulkWrite paths.
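      To illustrate the undercount described above, here is a minimal sketch. This is not MongoDB server code: the dict model, the toy encoder, and all names in it are invented for illustration. It only models the arithmetic of the bug: the size estimate is computed from the user's request before the sampleId (a UUID) is attached, so the bytes actually sent to the shard exceed the estimate by roughly the serialized size of the sampleId field.

      ```python
      import uuid

      def toy_serialized_size(doc):
          # Rough stand-in for a BSON size estimate: key bytes plus value
          # bytes for each field. The exact encoding does not matter here;
          # only the before/after difference does.
          return sum(len(k.encode()) + len(str(v).encode()) for k, v in doc.items())

      # The update op as received from the user. In the scenario described
      # above, the size estimate is taken at this point.
      user_op = {"q": "{_id: 1}", "u": "{$set: {x: 2}}"}
      estimated = toy_serialized_size(user_op)

      # After targeting, a sampleId is attached before the op is sent to a
      # shard/mongod, so the wire size grows past the earlier estimate.
      targeted_op = dict(user_op, sampleId=uuid.uuid4())
      actual = toy_serialized_size(targeted_op)

      # The estimate never includes the sampleId bytes, so it undercounts
      # by the serialized size of that one field.
      assert actual > estimated
      print(actual - estimated)  # → 44 (8 key bytes + 36 UUID-string bytes)
      ```

      The fix implied by the ticket is the reverse of this sketch: the estimation path must account for the sampleId bytes that will be attached later, instead of measuring only the user's original writeOp.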

            Backlog: backlog-server-cluster-scalability [DO NOT USE] Backlog - Cluster Scalability
            Lingzhi Deng (lingzhi.deng@mongodb.com)
            Votes: 0
            Watchers: 2