Core Server / SERVER-27053

Possibility to confirm w:majority write that has been rolled back

    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL
    • Sprint: Repl 2016-11-21

      I think the following sequence of events will cause us to acknowledge a w:majority write that has been rolled back. It requires that the write come from a mongos, so that the flag that prevents the connection from being dropped on stepdown has been set.

      1. mongos sends a write to a shard with w:majority
      2. the write gets applied locally
      3. a majority of the secondaries vote for a new primary, and it wins the election without the original primary knowing
      4. the nodes that elected the new primary confirm the w:majority write to the old primary. The updatePosition command from those secondaries indicates a new term, so the old primary steps down. When stepping down we cancel all user operations and kill all non-internal connections, but the connection that issued this write came from a mongos, so it isn't closed
      5. the original primary and all the secondaries go into rollback, revert the write, and successfully replicate the new op that the new primary writes on election
      6. the original primary is re-elected
      7. the thread that issued the write on the original primary gets into awaitReplication(), sees that it is in state primary as expected and that the write it's waiting for has already been confirmed on a majority, and returns success (see the sketch after this list)
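
      To make step 7 concrete, here is a minimal self-contained sketch of why the satisfaction check can pass. The types and names are simplified stand-ins for illustration, not the real replication coordinator classes; in particular, OpTime is reduced to a single logical timestamp. The point is that the majority-committed opTime has advanced past the rolled-back write's opTime (via the new primary's op from step 5), so a plain opTime comparison reports the write as committed.

{code:cpp}
// Simplified stand-ins, not the real server types.
#include <cassert>

struct OpTime {
    long long timestamp;
};

enum class MemberState { Primary, Secondary, Rollback };

// The waiter's check in step 7: we are primary, and a majority has
// confirmed an opTime at or past our write. Both conditions hold even
// though the write itself was rolled back, because the new primary's op
// replicated to a majority with a later opTime.
bool writeConcernSatisfied(OpTime myWrite, OpTime majorityCommitted,
                           MemberState state) {
    return state == MemberState::Primary &&
           myWrite.timestamp <= majorityCommitted.timestamp;
}

int main() {
    OpTime rolledBackWrite{100};    // applied in the old term, then rolled back
    OpTime majorityCommitted{101};  // the new primary's op, majority-replicated
    // The rolled-back write is reported as majority-committed.
    assert(writeConcernSatisfied(rolledBackWrite, majorityCommitted,
                                 MemberState::Primary));
}
{code}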

      If awaitReplication_inlock() checked for interrupt before checking whether the write was already satisfied, we'd be okay, since during stepdown we cancelled all running operations. But we never check for interrupt in awaitReplication() if the writeConcern is already satisfied by the time we reach it.
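
      Below is a minimal sketch of the ordering fix this suggests, again with simplified stand-ins (the signature and types are illustrative, not the actual awaitReplication_inlock() code): checking for interrupt before taking the "already satisfied" fast path means a waiter whose operation was killed during the step-4 stepdown reports an error instead of acknowledging the rolled-back write.

{code:cpp}
// Simplified stand-ins, not the real server types.
#include <iostream>

struct OpTime {
    long long timestamp;
};

struct OperationContext {
    bool killed = false;  // set when stepdown cancels user operations
};

struct ReplState {
    OpTime majorityCommitted{0};
    bool satisfied(OpTime t) const {
        return t.timestamp <= majorityCommitted.timestamp;
    }
};

enum class Result { Ok, Interrupted };

Result awaitReplication(const OperationContext& opCtx, const ReplState& repl,
                        OpTime myWrite) {
    // Check for interrupt FIRST: if stepdown killed this operation, fail
    // the write concern even though the majority-committed opTime has
    // since moved past our (possibly rolled-back) write.
    if (opCtx.killed) {
        return Result::Interrupted;
    }
    // Only now is the "already satisfied" fast path safe.
    if (repl.satisfied(myWrite)) {
        return Result::Ok;
    }
    // ... otherwise block on a condition variable, rechecking interrupt ...
    return Result::Ok;
}

int main() {
    OperationContext opCtx;
    opCtx.killed = true;  // cancelled during the step-4 stepdown
    ReplState repl;
    repl.majorityCommitted = OpTime{101};  // new term's op is majority-committed
    Result r = awaitReplication(opCtx, repl, OpTime{100});
    std::cout << (r == Result::Interrupted ? "interrupted" : "ok") << "\n";
}
{code}

      The key property is that the interrupt check is ordered before the fast path, so an operation cancelled by stepdown can never slip through and acknowledge the write.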

            Assignee:
            spencer@mongodb.com Spencer Brody (Inactive)
            Reporter:
            mathias@mongodb.com Mathias Stearn
            Votes:
            0
            Watchers:
            15
