Core Server / SERVER-35663

Replication recovery does not update the logical clock

    Details

    • Backwards Compatibility:
      Fully Compatible
    • Operating System:
      ALL
    • Backport Requested:
      v4.0
    • Sprint:
      Sharding 2018-08-13, Sharding 2018-09-10, Sharding 2018-09-24, Sharding 2018-10-08, Sharding 2018-10-22, Sharding 2018-11-05, Sharding 2018-12-17, Sharding 2018-12-31, Sharding 2019-01-14, Sharding 2019-01-28
    • Linked BF Score:
      64

      Description

      If a node crashes with unapplied oplog entries, then on startup it will apply entries through the end of its oplog via ReplicationRecoveryImpl::recoverFromOplog. This applies the entries by directly calling SyncTail::multiApply (through an OplogApplier), which, unlike normal secondary application, does not update the logical clock. Then, when starting up its replication coordinator, the node asynchronously schedules ReplicationCoordinatorImpl::_finishLoadLocalConfig, which updates the logical clock only after it updates the replication coordinator's lastAppliedOpTime to the opTime of the latest oplog entry.

      If a request is processed during this window in _finishLoadLocalConfig, then when the node goes to compute the logical time metadata for the response, it can hit this invariant, because the operationTime (typically the lastAppliedOpTime) will be greater than the latest time known to the logical clock.

      There are two ways to fix this: have replication recovery update the logical clock as it applies the unapplied oplog entries, or update the global timestamp before updating lastAppliedOpTime when finishing loading the local replica set config.
