Core Server / SERVER-22634

Data size change for oplog deletes can overflow 32-bit int

    • Type: Bug
    • Resolution: Done
    • Priority: Critical - P2
    • Fix Version/s: 3.0.10
    • Affects Version/s: 3.0.9
    • Component/s: Storage
    • Labels: (none)
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL

      Issue Status as of Feb 29, 2016

      In MongoDB 3.0 nodes running the WiredTiger storage engine, an integer overflow can cause a replica set to lose write availability when writes use a write concern greater than 1.

      Under write-intensive workloads, the oplog of a replica set can grow past its configured size. When this happens, the system attempts to remove up to 20,000 documents from the oplog to shrink it. If the total size of those 20,000 documents exceeds 2 GB, the removal overflows the 32-bit signed integer that records the size change.

      As a result, the size change is recorded incorrectly and the oplog still appears to exceed its maximum configured size, so the system attempts to delete yet more data from the oplog. In extreme cases this can result in the entire contents of the oplog being deleted.

      Regular capped collections can also be affected by this bug, but this is very unlikely, because triggering it requires removing more than 2 GB of documents in a single deletion pass.

      If this bug is triggered under the conditions described above, replication will cease and the affected replica set will need to be recovered manually.

      In the unlikely case that a regular capped collection is affected, the system removes data from it faster than normal, so the collection may be emptied completely.

      No workarounds exist for this issue. MongoDB users running or wishing to run with the WiredTiger storage engine must upgrade to 3.0.10 or newer. MongoDB 3.2 is not affected by this bug, so users may also consider upgrading to MongoDB version 3.2.3 or newer.

      Only MongoDB 3.0 users running with the WiredTiger storage engine may be affected by this issue. No other configuration of MongoDB is affected.

      The fix is included in the 3.0.10 production release. MongoDB 3.2 is not affected.

      Original description

      In wiredtiger_record_store.cpp, _increaseDataSize is declared to take an int for the size change:

      void WiredTigerRecordStore::_increaseDataSize(OperationContext* txn, int amount)

      But when called from cappedDeleteAsNeeded_inlock, the amount may overflow a 32-bit int if many large records are being deleted, resulting in (very) inaccurate accounting of the size of an oplog. This can result in the oplog deleter thread deleting everything in the oplog in order to try to get it back down to the configured maximum size, causing replication to cease.

            mathias@mongodb.com Mathias Stearn
            bruce.lucas@mongodb.com Bruce Lucas (Inactive)