Surging connections on primary mongod in sharded cluster


    • Type: Question
    • Resolution: Community Answered
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None

      Our setup:

      We have a 3-shard cluster with ~8 mongos and ~20 mongod instances. We are seeing a surge of connections on the primary mongod of one of the shards. The mongod operation log shows thousands of entries like this:

      2019-05-22T20:00:38.826-0500 I NETWORK [conn7381256] received client metadata from 10.112.1.181:51422 conn7381256: { driver: { name: "MongoDB Internal Client", version: "3.4.14" }, os: { type: "Linux", name: "CentOS release 6.10 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-754.9.1.el6.x86_64" } }

      The total connection count jumps from ~800 to ~5000 within two minutes. During those two minutes no other commands get executed, and once the surge subsides, batches of WRITE operations finish reporting long waits on the database intent-exclusive lock, e.g. timeAcquiringMicros: { w: 135344866 } (about 135 seconds spent acquiring the lock).
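      To quantify the surge, I sample serverStatus on the affected primary and watch the connection count climb. This is a minimal sketch assuming pymongo; the host name and sampling interval are placeholders, not from our setup:

        # Sketch: sample the connection count on the affected primary over time.
        # HOST/PORT are hypothetical; point them at the primary mongod
        # that shows the surge.
        import time
        from pymongo import MongoClient

        HOST, PORT = "shard1-primary.example.net", 27017  # hypothetical address

        client = MongoClient(HOST, PORT)
        while True:
            conns = client.admin.command("serverStatus")["connections"]
            print(f"current={conns['current']} available={conns['available']}")
            time.sleep(5)  # sample every 5 seconds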

      It looks like every WRITE is waiting on the database intent-exclusive lock, contending with some other resource. My question is: what does the driver name "MongoDB Internal Client" mean?

      These should not be mongos -> mongod connections: those identify themselves with driver names like "NetworkInterfaceASIO-TaskExecutorPool-3".

      They should not be replication connections from a secondary to the primary either, because per the MongoDB docs those report the driver name "NetworkInterfaceASIO-Replication", and I do see such connections separately:

      2019-05-22T19:48:46.856-0500 I NETWORK [conn7380126] received client metadata from 10.40.0.177:37562 conn7380126: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.4.14" }, os: { type: "Linux", name: "CentOS release 6.9 (Final)", architecture: "x86_64", version: "Kernel 2.6.32-696.23.1.el6.x86_64" } }

      Thank you very much for the help; any pointers are much appreciated!

       

            Assignee:
            Dmitry Agranat
            Reporter:
            Zhexuan Chen
            Votes:
            0
            Watchers:
            5
