Core Server / SERVER-27474

Eliminate "dropped" collections from config server list of collections

    • Type: Improvement
    • Resolution: Duplicate
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.5.1
    • Component/s: Sharding
    • Labels: None

      In releases up to and including 3.6, when a sharded collection was dropped, the config server did not delete it from its on-disk list of sharded collections; instead it tagged the entry as "dropped". This was necessary because a <= 3.4 mongos, when a client tried to use the dropped collection, would get an error from the shard and would then retrieve the whole collections list from the config server to refresh its collections cache. The mongos needed to see the collections flagged as "dropped" in order to discard its cached records of those collections and their chunks. As a consequence, every other activity that reads the list of collections has had to filter out the "dropped" entries, and the records of dropped collections accumulate indefinitely.
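
      As a concrete illustration of that filtering burden, the sketch below reads the config server's collection list while skipping "dropped" tombstones. It is not server code: the connection string, the namespace, and any field name other than the "dropped" flag described above are assumptions made for this example (Python/PyMongo).

          # Illustrative only: read config.collections the way a <= 3.6 reader must,
          # filtering out entries that were tagged "dropped" instead of being deleted.
          from pymongo import MongoClient

          client = MongoClient("mongodb://configsvr.example.net:27019")  # assumed address
          collections = client["config"]["collections"]

          # Skip the "dropped" tombstones so only live sharded collections remain.
          for entry in collections.find({"dropped": {"$ne": True}}):
              print(entry["_id"])  # the collection's namespace, e.g. "test.orders" (assumed field)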

      The 3.6 mongos no longer reads the whole list of collections when it gets a "collection does not exist" error from a shard; it reads only the entry for that one collection, so it does not need dropped collections to be identified. "Dropped" entries left over from <= 3.6 should be ignored/skipped when reading the collections list, and no new ones should be written by the 3.8 config server. Collection entries in 3.8 no longer need the "dropped" flag, and the code that checks for it may be deleted.
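
      The targeted lookup the 3.6 mongos performs instead can be sketched as follows; again this is an illustration in Python/PyMongo, not the mongos implementation, and the namespace and connection details are assumptions.

          # Illustrative only: refresh a single collection entry rather than the whole list.
          from pymongo import MongoClient

          client = MongoClient("mongodb://configsvr.example.net:27019")  # assumed address
          entry = client["config"]["collections"].find_one({"_id": "test.orders"})

          # A missing entry, or a leftover "dropped" tombstone from <= 3.6, both mean the
          # collection is gone: discard the cached routing information for that namespace.
          if entry is None or entry.get("dropped"):
              print("test.orders no longer exists; remove it from the routing cache")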

      The upgrade process for 3.6 -> 3.8 config servers should scrub the remaining "dropped" entries from config.collections.
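
      A minimal sketch of that scrub, assuming direct access to the config database (Python/PyMongo; connection details are illustrative):

          # Illustrative only: one-off cleanup run during the config server upgrade.
          from pymongo import MongoClient

          client = MongoClient("mongodb://configsvr.example.net:27019")  # assumed address
          result = client["config"]["collections"].delete_many({"dropped": True})
          print(f"removed {result.deleted_count} leftover dropped-collection entries")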

            Assignee: backlog-server-sharding [DO NOT USE] Backlog - Sharding Team
            Reporter: Nathan Myers (nathan.myers)
            Votes: 0
            Watchers: 3
