Core Server / SERVER-17397

Dropping a Database or Collection in a Sharded Cluster may not fully succeed

    • Sharding EMEA
    • Fully Compatible

      Issue Status as of Sep 18, 2020

      ISSUE SUMMARY
      When dropping a database or collection in a sharded cluster, the drop may be reported as successful while the database or collection is still present on some nodes in the cluster. In MongoDB 4.2 and later, rerunning the drop command should clean up the data. In MongoDB 4.0 and earlier, we do not recommend dropping a database or collection and then attempting to reuse the namespace.

      USER IMPACT
      When the database or collection is not successfully dropped on a given node, the corresponding files continue to consume disk space on that node. Attempting to reuse the namespace may lead to undefined behavior.

      WORKAROUNDS

      To work around this issue, follow the steps below to drop a database or collection in a sharded environment.

      MongoDB 4.4:

      1. Drop the database / collection using a mongos
      2. Rerun the drop command using a mongos
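
      The two steps above can be sketched in the shell. This is a sketch only; it assumes a mongosh session connected to one of the cluster's mongos routers, with DATABASE and COLLECTION as placeholders for the real namespace:

      ```javascript
      // Sketch only: run from a shell session connected to a mongos.
      // DATABASE / COLLECTION are placeholders for the real namespace.

      // Dropping a database: issue the drop, then rerun it so any leftover
      // data on individual shards is cleaned up.
      db.getSiblingDB("DATABASE").dropDatabase();
      db.getSiblingDB("DATABASE").dropDatabase();

      // Dropping a collection: same pattern.
      db.getSiblingDB("DATABASE").getCollection("COLLECTION").drop();
      db.getSiblingDB("DATABASE").getCollection("COLLECTION").drop();
      ```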

      MongoDB 4.2:

      1. Drop the database / collection using a mongos
      2. Rerun the drop command using a mongos
      3. Connect to each mongos and run flushRouterConfig
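
      Step 3 uses the flushRouterConfig admin command. This sketch assumes a shell session connected to each mongos in turn:

      ```javascript
      // Run on every mongos: clears the router's cached routing table so it
      // is refreshed from the config servers on the next operation.
      db.adminCommand({ flushRouterConfig: 1 });
      ```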

      MongoDB 4.0 and earlier:

      1. Drop the database / collection using a mongos
      2. Connect to each shard's primary and verify the namespace has been dropped. If it has not, drop it there. Dropping a database (e.g. db.dropDatabase()) removes the data files on disk for the database being dropped.
      3. Connect to a mongos, switch to the config database, and remove any reference to the dropped namespace from the collections, databases, chunks, tags, and locks collections:
        When dropping a database:
        use config
        db.collections.remove( { _id: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
        db.databases.remove( { _id: "DATABASE" }, { writeConcern: { w: 'majority' } } )
        db.chunks.remove( { ns: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
        db.tags.remove( { ns: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
        db.locks.remove( { _id: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } )
        
        When dropping a collection:
        use config
        db.collections.remove( { _id: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
        db.chunks.remove( { ns: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
        db.tags.remove( { ns: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
        db.locks.remove( { _id: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } )
        
      4. Connect to the primary of each shard and remove any reference to the dropped namespace from the cache.databases and cache.collections collections, then drop the corresponding cache.chunks.DATABASE.COLLECTION collections:
        When dropping a database:
        db.getSiblingDB("config").cache.databases.remove( { _id: "DATABASE" }, { writeConcern: { w: 'majority' } } );
        db.getSiblingDB("config").cache.collections.remove( { _id: /^DATABASE\./ }, { writeConcern: { w: 'majority' } } );
        db.getSiblingDB("config").getCollectionNames().forEach(function(y) {
            if (y.indexOf("cache.chunks.DATABASE.") == 0)
                db.getSiblingDB("config").getCollection(y).drop()
        })
        
        When dropping a collection:
        db.getSiblingDB("config").cache.collections.remove( { _id: "DATABASE.COLLECTION" }, { writeConcern: { w: 'majority' } } );
        db.getSiblingDB("config").getCollection("cache.chunks.DATABASE.COLLECTION").drop()
        
      5. Connect to each mongos and run flushRouterConfig
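
      As an illustration of step 2, the per-shard check can be sketched as follows. This is a sketch only; it assumes a shell session connected to a shard's primary, with DATABASE as a placeholder:

      ```javascript
      // Sketch only: run against each shard's primary.
      // List the databases present on this shard and drop the namespace if it
      // survived the cluster-wide drop (this removes its data files on disk).
      var leftover = db.adminCommand({ listDatabases: 1 }).databases
                       .some(function(d) { return d.name === "DATABASE"; });
      if (leftover) {
          db.getSiblingDB("DATABASE").dropDatabase();
      }
      ```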
