Core Server / SERVER-33610

If using two-phase locking, lock manager's list of resources to unlock can grow without bound

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major - P3
    • 3.7.4
    • Affects Version/s: None
    • Component/s: Concurrency
    • Labels: None
    • Backwards Compatibility: Fully Compatible
    • Sprint: Storage NYC 2018-03-26, Storage NYC 2018-04-09

      To support readConcern level "snapshot", the Locker can be configured to use two-phase locking. This means that when unlock() is called, the actual release of the resource is deferred until the WriteUnitOfWork ends. This is implemented by pushing the resource id onto a list of resources to unlock at the end of the transaction:

      https://github.com/mongodb/mongo/blob/a53005feed80e81610183b542b5aaa44b85dd3a9/src/mongo/db/concurrency/lock_state.cpp#L478

      If a lock is recursively acquired n times and unlocked n times, this list will contain the same resource id repeated n times. This is a particular problem for query execution paths that repeatedly call lock() and unlock(). Aggregation is the most important such code path; in particular, queries involving $lookup can lock and unlock the same resource repeatedly. The length of the _resourcesToUnlockAtEndOfUnitOfWork list can thus grow in proportion to the number of documents processed by the $lookup.

      I was able to verify this using an in-progress implementation of readConcern level "snapshot" support for agg (see SERVER-33541). The following repro involves just two collections and 20 documents, yet the length of the list of resources to unlock grows to a maximum of 43 (as indicated by logging I added to LockerImpl).

      (function() {
          "use strict";
      
          const dbName = "test";
          const kNumDocs = 10;
      
          let rst = new ReplSetTest({nodes: 1});
          rst.startSet();
          rst.initiate();
      
          const primaryDB = rst.getPrimary().getDB(dbName);
          const session = primaryDB.getMongo().startSession({causalConsistency: false});
          const sessionDb = session.getDatabase(dbName);
      
          // Insert documents into two collections, which we will join with $lookup.
          for (let i = 0; i < kNumDocs; i++) {
              assert.writeOK(sessionDb.c1.insert({_id: i}));
              assert.writeOK(sessionDb.c2.insert({_id: i}));
          }
      
          let pipeline = [{$lookup: {from: "c2", localField: "_id", foreignField: "_id", as: "as"}}];
          let aggCmd = {
              aggregate: "c1",
              pipeline: pipeline,
              cursor: {},
              readConcern: {level: "snapshot"},
              txnNumber: NumberLong(0)
          };
      
          assert.commandWorked(sessionDb.runCommand(aggCmd));
      
          rst.stopSet();
      }());
      

            Assignee: Maria van Keulen (maria.vankeulen@mongodb.com)
            Reporter: David Storch (david.storch@mongodb.com)
            Votes: 0
            Watchers: 5
