Node.js Driver / NODE-5106

MongoClient concurrent connect can lead to Topology leak

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 5.2.0
    • Affects Version/s: None
    • Component/s: None

      Hello,

      There is currently a scenario where multiple Topology instances can be created in parallel, leading to a state where Topologies live on indefinitely (connected to the MongoDB instance) but are detached from the MongoClient instance (and so are never cleaned up).

      This behavior has significant side effects:

      • "leaked" topologies will never be closed (as the topology clean-up is performed when closing the client) keeping connection alives.
      • due to opened connections, the node process can never end by itself.

      Reproduction

      The issue can only happen when no topology is open yet on the MongoClient (the connect method returns the current topology instance if one is already set).

      import { MongoClient } from "mongodb";
      
      (async function () {
          // init the MongoClient with the connection string
          const client = new MongoClient('connection_string', {});
      
          // use the client to run commands (IMPORTANT: connect() is NOT called and
          // awaited beforehand, so both commands trigger it concurrently)
          const [rA, rB] = await Promise.all([
              client.db().collection('mycollecA').countDocuments(),
              client.db().collection('mycollecB').countDocuments(),
          ]);
      
          // close the client (and the attached topology)
          await client.close();
      })();

      At this point, if two topologies were created simultaneously, one is properly attached to the MongoClient instance property (topology) but the second is lost: it keeps running in the background and is not closed by client.close() (heartbeats continue, etc.). This also prevents the Node.js process from exiting.
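
      For reference, the leak can be observed from the outside with the driver's SDAM monitoring events ('topologyOpening', 'serverHeartbeatStarted'). This is only a diagnostic sketch built on the reproduction above; 'connection_string' and the collection names are placeholders:

      import { MongoClient } from "mongodb";
      
      (async function () {
          const client = new MongoClient('connection_string', {});
      
          // watch monitoring events: how many topologies get opened, and whether
          // heartbeats keep firing once the client has been closed
          let topologiesOpened = 0;
          client.on('topologyOpening', () => topologiesOpened += 1);
          client.on('serverHeartbeatStarted', () => console.log('heartbeat at', new Date().toISOString()));
      
          await Promise.all([
              client.db().collection('mycollecA').countDocuments(),
              client.db().collection('mycollecB').countDocuments(),
          ]);
      
          await client.close();
          console.log('topologies opened:', topologiesOpened);
          // when the race is hit, more than one 'topologyOpening' can be observed and
          // heartbeats may keep appearing after close(), so the process never exits
      })();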

      Notes:

      • After creating the MongoClient, do not call and await connect() yourself, otherwise the concurrent calls won't occur.
      • The connection time to the MongoDB instance affects reproducibility: a local connection without TLS is much faster than a remote one with TLS, and with a fast enough connect the concurrent calls may already see the topology set on the MongoClient instance.
      • The bug is present in at least 5.1.0 and 4.14.0 (tested).

       

      Proposal

      I made a pull request on GitHub with the fix we have applied in my team's codebase in the meantime.
      The principle is to perform the connect (topology creation) in a safer way via a "lock"-like mechanism: when a connect is in progress, an instance member holds the connection Promise.
      This way, if any other connect call happens, the in-flight connection Promise is returned instead of racing to create another topology in parallel (a minimal sketch of the idea follows).
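
      A minimal sketch of the idea (simplified and hypothetical: member names such as connectionLock and _createTopology are illustrative, not the driver's actual internals):

      class Client {
          async connect() {
              // a topology already exists: nothing to do
              if (this.topology) return this;
      
              // a connect is already in flight: return the same promise instead of
              // racing to build a second topology
              if (!this.connectionLock) {
                  this.connectionLock = this._createTopology().finally(() => {
                      // release the lock once the attempt settles so that a failed
                      // connect can be retried later
                      this.connectionLock = undefined;
                  });
              }
              return this.connectionLock;
          }
      
          async _createTopology() {
              // ...create the topology, connect it, then assign it to this.topology...
              return this;
          }
      }

      With this in place, all concurrent callers of connect() await the same Promise, so only one topology can ever be created per connection attempt.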

       

      I was not able to write a test (IMO it would fit well in resource_clean_up.test) because the mocha before hooks for the test suites always await the first connect...
      I'll let someone from the team decide or guide me on how you would like this handled.
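
      For what it is worth, here is a rough sketch of what a standalone test could look like outside the existing hooks. It relies on the documented 'topologyOpening' SDAM event and chai's expect; 'connection_string' and the collection names are placeholders:

      import { MongoClient } from "mongodb";
      import { expect } from "chai";
      
      describe('MongoClient auto-connect', function () {
          it('creates a single topology when operations race to connect', async function () {
              const client = new MongoClient('connection_string', {});
      
              // count how many topologies the client opens
              let topologiesOpened = 0;
              client.on('topologyOpening', () => topologiesOpened += 1);
      
              // two operations race to auto-connect; connect() is intentionally not awaited first
              await Promise.all([
                  client.db().collection('mycollecA').countDocuments(),
                  client.db().collection('mycollecB').countDocuments(),
              ]);
      
              await client.close();
              expect(topologiesOpened).to.equal(1);
          });
      });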

      Thanks.

            Assignee:
            Bailey Pearson (bailey.pearson@mongodb.com)
            Reporter:
            Clément CLOUX (clement.cloux@yabawt.com)
            Votes:
            0
            Watchers:
            4
