Core Server / SERVER-25168

Foreground index build blocks all R/W on ALL databases on a sharded cluster with secondaryPreferred read preference

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: 3.0.12
    • Labels: None

      One of the users of our MongoDB 3.0.12 six-shard sharded cluster issued a foreground index build on its own database (pride_archive_ms). After the operation completed on the primary member of each replica set (shard), it was replicated to the secondaries as expected.
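
      For context, the foreground build above is the default behaviour; on 3.0 a background build avoids holding the lock for the duration. A minimal mongo-shell sketch (the collection and field names here are hypothetical):

      ```javascript
      // Hypothetical collection/field in the pride_archive_ms database.
      // With background: true the member keeps serving reads and writes
      // while the index is built, at the cost of a slower build.
      db.getSiblingDB("pride_archive_ms").spectra.createIndex(
        { accession: 1 },
        { background: true }
      )
      ```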

      I expected only the database where the index build is still in progress to be blocked for reads and writes, but in fact reads and writes to ALL databases are blocked when using the secondary or secondaryPreferred read preference.

      Basically:

      • if I try to connect directly to the secondary member (admin database) with the mongo shell, the session hangs before displaying the prompt, blocked on this call:
        mongo --username root --password YYYYYYYY admin --port 27018
        MongoDB shell version: 3.0.12
        connecting to: 127.0.0.1:27018/admin
        
        [ .... ]
        getsockname(3, {sa_family=AF_INET, sin_port=htons(50864), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
        sendto(3, "<\0\0\0\0\0\0\0\0\0\0\0\324\7\0\0\0\0\0\0admin.$cmd\0\0"..., 60, MSG_NOSIGNAL, NULL, 0) = 60
        recvfrom(3, "N\0\0\0\257Z(\0\0\0\0\0\1\0\0\0", 16, MSG_NOSIGNAL, NULL, NULL) = 16
        recvfrom(3, "\10\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0*\0\0\0\2you\0\20\0\0"..., 62, MSG_NOSIGNAL, NULL, NULL) = 62
        futex(0x22b9fc8, FUTEX_WAKE_PRIVATE, 1) = 1
        futex(0x22b9fc8, FUTEX_WAKE_PRIVATE, 1) = 1
        futex(0x22b9fc8, FUTEX_WAKE_PRIVATE, 1) = 1
        sendto(3, ">\0\0\0\1\0\0\0\0\0\0\0\324\7\0\0\0\0\0\0admin.$cmd\0\0"..., 62, MSG_NOSIGNAL, NULL, 0) = 62
        recvfrom(3, "\277\1\0\0\260Z(\0\1\0\0\0\1\0\0\0", 16, MSG_NOSIGNAL, NULL, NULL) = 16
        recvfrom(3, "\10\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\233\1\0\0\2setName"..., 431, MSG_NOSIGNAL, NULL, NULL) = 431
        futex(0x22b9fc8, FUTEX_WAKE_PRIVATE, 1) = 1
        open("/dev/urandom", O_RDONLY)          = 4
        read(4, "\177\241\0316?s\361C\310HMd\205\300BJo\331'\347\324\356\33\225\24\23\265\314A\27D\353"..., 8191) = 8191
        close(4)                                = 0
        sendto(3, "\220\0\0\0\2\0\0\0\0\0\0\0\324\7\0\0\0\0\0\0admin.$cmd\0\0"..., 144, MSG_NOSIGNAL, NULL, 0) = 144
        recvfrom(3, ^C <unfinished ...>
        
      • Connecting via the mongo router I can authenticate properly, but any query issued with the secondary or secondaryPreferred read preference blocks on ANY database (note that this database is different from the one the index is being built on):
        mongo --host mongos-hxvm-001 --username ddi_user --password XXXX --authenticationDatabase admin ddi_db
        MongoDB shell version: 3.0.12
        connecting to: mongos-hxvm-001:27017/ddi_db
        ddi_db@  - undefined> db.getMongo().setReadPref("primary")
        ddi_db@  - undefined> db.datasets.dataset.count()
        78234
        ddi_db@  - undefined> db.getMongo().setReadPref("secondaryPreferred")
        ddi_db@  - undefined> db.datasets.dataset.count()
        

      MongoDB docs claim that:

      Any operation that requires a read or write lock on all databases (e.g. listDatabases) will wait for the foreground index build to complete.

      But I don't see how that matches this case: a simple find() on a database distinct from the one the index is being built on cannot proceed either.
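
      What is actually holding things up can be checked on the affected secondary with currentOp; a sketch, assuming direct access to the member (field names as in the 3.0 currentOp output):

      ```javascript
      // Run directly on the blocked secondary: list the in-progress index
      // build plus any operations queued behind it waiting for a lock.
      db.currentOp(true).inprog.forEach(function (op) {
        if (/index/i.test(op.msg || "") || op.waitingForLock) {
          printjson({ opid: op.opid, ns: op.ns,
                      msg: op.msg, waitingForLock: op.waitingForLock });
        }
      });
      ```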

      It seems like:

      • the index build is blocking R/W access to ALL the databases on the secondaries (for the duration of the build)
      • the mongo router is unable to detect that the secondary can't answer queries on ANY database and should steer them to the primary
      • writes using { w: "majority" } are also blocked on ANY database
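
      The last point can be probed with a bounded write so the shell returns instead of hanging indefinitely; a sketch against an unrelated database (collection and field names hypothetical):

      ```javascript
      // Issued through mongos against a database unrelated to the build.
      // w: "majority" requires acknowledgement from a majority of the
      // shard's replica set, so the write waits on the busy secondary;
      // wtimeout turns the indefinite hang into a write-concern timeout.
      db.getSiblingDB("ddi_db").probe.insert(
        { ts: new Date() },
        { writeConcern: { w: "majority", wtimeout: 5000 } }
      )
      ```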

      Could you please comment on whether this is the expected behaviour? Our system is suffering availability problems because of it.

      Thanks a lot

            Assignee:
            kelsey.schubert@mongodb.com Kelsey Schubert
            Reporter:
            alessio.checcucci@gmail.com Alessio Checcucci
            Votes:
            0
            Watchers:
            5
