Core Server / SERVER-35780

`renameCollection` across databases incorrectly timestamps metadata for secondary index builds

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 4.0.3, 4.1.2
    • Affects Version/s: None
    • Component/s: Storage
    • Labels: None
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL
    • Backport Requested: v4.0
    • Sprint: Storage NYC 2018-07-30, Storage NYC 2018-08-13
    • 50

      Renaming a collection across databases is not a simple rename, but rather a process of:

      1. Creating a temp collection on the destination database, along with its `_id` index.
      2. Building the secondary indexes on the temp collection.
      3. Inserting the documents into the temp collection.
      4. Renaming the temp collection to the desired destination.

      Copying the index definitions over uses a single MultiIndexBlock. All of the indexes are created with `ready: false` writes in one WUOW (WriteUnitOfWork), with a single timestamp taken from a noop oplog entry. However, committing the `ready: true` writes has the following sequence (for demonstration, suppose two secondary indexes, A and B):

      1. Begin WT transaction.
      2. Set index A to ready.
      3. Set index B to ready.
      4. Write oplog entry creating A.
      5. Set timestamp 1.
      6. Write oplog entry creating B.
      7. Set timestamp 2.
      8. Commit WT transaction.

      In this case, both `ready: true` writes are given timestamp 2. Rolling back to a stable timestamp in between these two timestamps leaves both indexes as `ready: false`, but replication recovery will only rebuild index B.

      This is an analogous bug to SERVER-35070.

      This ticket should consider adding the following invariant right before here:

              invariant(_indexes.size() == 1 || onCreateFn);

            Maria van Keulen (maria.vankeulen@mongodb.com)
            Daniel Gottlieb (daniel.gottlieb@mongodb.com)

