[SERVER-26194] mongos aborts in debug builds if additional options specified to update and delete bulk ops Created: 20/Sep/16  Updated: 19/Nov/16  Resolved: 01/Nov/16

Status: Closed
Project: Core Server
Component/s: Sharding, Write Ops
Affects Version/s: None
Fix Version/s: 3.4.0-rc3

Type: Bug Priority: Major - P3
Reporter: Max Hirschhorn Assignee: Marko Vojvodic
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Backwards Compatibility: Fully Compatible
Operating System: ALL
Steps To Reproduce:

python buildscripts/resmoke.py --executor=sharding_jscore_passthrough repro_estsize.js --storageEngine=wiredTiger

repro_estsize.js

db.mycoll.bulkWrite([{
    deleteMany: {
        filter: { str: 'FOO' },
        collation: {
            locale: "en_US",
            caseLevel: false,
            caseFirst: "off",
            strength: 3,
            numericOrdering: false,
            alternate: "non-ignorable",
            maxVariable: "punct",
            normalization: false,
            backwards: false
        }
    }
}]);

Output

[ShardedClusterFixture:job0:mongos] 2016-09-20T13:53:07.746-0400 I -        [conn5] Invariant failure estSize >= item.getDelete()->toBSON().objsize() src/mongo/s/write_ops/batch_write_op.cpp 167
[ShardedClusterFixture:job0:mongos] 2016-09-20T13:53:07.746-0400 I -        [conn5]
[ShardedClusterFixture:job0:mongos]
[ShardedClusterFixture:job0:mongos] ***aborting after invariant() failure

Sprint: Query 2016-10-31, Query 2016-11-21
Participants:

 Description   

The getWriteSizeBytes() function bases its estimate on the BSON size of the query filter (and, for updates, the update expression) plus a fixed overhead, and does not account for the BSON size of the collation specification.

static int getWriteSizeBytes(const WriteOp& writeOp) {
    const BatchItemRef& item = writeOp.getWriteItem();
    BatchedCommandRequest::BatchType batchType = item.getOpType();
 
    if (batchType == BatchedCommandRequest::BatchType_Insert) {
        return item.getDocument().objsize();
    } else if (batchType == BatchedCommandRequest::BatchType_Update) {
        // Note: Be conservative here - it's okay if we send slightly too many batches
        int estSize = item.getUpdate()->getQuery().objsize() +
            item.getUpdate()->getUpdateExpr().objsize() + kEstUpdateOverheadBytes;
        dassert(estSize >= item.getUpdate()->toBSON().objsize());
        return estSize;
    } else {
        dassert(batchType == BatchedCommandRequest::BatchType_Delete);
        // Note: Be conservative here - it's okay if we send slightly too many batches
        int estSize = item.getDelete()->getQuery().objsize() + kEstDeleteOverheadBytes;
        dassert(estSize >= item.getDelete()->toBSON().objsize());
        return estSize;
    }
}
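
Per the fix referenced in the comments below ("Account for collation specification size in batch_write_op getWriteSizeBytes"), the estimate needs to also include the collation specification's BSON size so that estSize remains an upper bound on the serialized write op. The following is only a minimal sketch of what the corrected delete branch could look like; the isCollationSet()/getCollation() accessors and the assumption that the "collation" field-name overhead is absorbed by kEstDeleteOverheadBytes are illustrative, not the exact committed change.

// Sketch: delete branch of getWriteSizeBytes() with the collation accounted for.
// The update branch would change analogously.
dassert(batchType == BatchedCommandRequest::BatchType_Delete);

// Size contributed by the collation spec, if one was supplied. The "collation"
// field name and BSON type byte are assumed to be covered by kEstDeleteOverheadBytes.
const int collationSize =
    item.getDelete()->isCollationSet() ? item.getDelete()->getCollation().objsize() : 0;

// Note: Be conservative here - it's okay if we send slightly too many batches
const int estSize =
    item.getDelete()->getQuery().objsize() + collationSize + kEstDeleteOverheadBytes;
dassert(estSize >= item.getDelete()->toBSON().objsize());
return estSize;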



 Comments   
Comment by Githook User [ 01/Nov/16 ]

Author: Marko Vojvodic (m-vojvodic) <marko.vojvodic@mongodb.com>

Message: SERVER-26194 Account for collation specification size in batch_write_op getWriteSizeBytes
Branch: master
https://github.com/mongodb/mongo/commit/8e75b8cf7aaae55a0d68e1105a69ae9f505720ad

Comment by Max Hirschhorn [ 23/Sep/16 ]

charlie.swanson, the fuzzer would not have triggered this issue in our current Evergreen configuration. This is because we only run the jstestfuzz* tasks on non-debug builds. When attempting to reproduce a build failure, I ended up running the generated tests against a debug build on my Linux box and triggered this issue.

I'm not entirely sure of the impact of getWriteSizeBytes() consistently being an underestimate when a collation is specified. It seems like we could inadvertently send a batch that is "too big", but I don't know what impact that would have when the operation is sent to a shard.

Comment by Charlie Swanson [ 23/Sep/16 ]

max.hirschhorn, did the fuzzer catch this? Would scheduling this help avoid work on the fuzzer? Otherwise, I'm not sure we will schedule it anytime soon.
