Core Server / SERVER-17243

Dropped collection & database still has metadata

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.0.0-rc8
    • Component/s: Sharding
    • Labels: Sharding
    • Operating System: ALL

      After dropping a sharded collection via mongos, its chunks are still visible in the config server. The same happens if the database is dropped.

      This collection had balancing failures, which seem to have left a distributed lock in place and prevented full cleanup.
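
      As a quick check, the leftover metadata can be surfaced by querying the config database directly through mongos. A minimal sketch in the mongo shell, using the test.cap1833hashed namespace from the transcript below:

      use config
      // after a successful drop the collection entry should be marked dropped: true
      db.collections.findOne({ _id: "test.cap1833hashed" })
      // no chunk documents should remain for a dropped namespace
      db.chunks.count({ ns: "test.cap1833hashed" })
      // the distributed lock taken for the drop should be released (state: 0)
      db.locks.findOne({ _id: "test.cap1833hashed" })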

      mongos> db.cap1833hashed.drop()
      2015-02-10T20:37:33.450+0000 E QUERY    Error: drop failed: {
      	"code" : 16338,
      	"ok" : 0,
      	"errmsg" : "exception: Dropping collection failed on the following hosts: shard0/ip-10-102-151-133:30000,ip-10-203-175-233:30000,ip-10-63-43-228:30000: { ok: 0.0, errmsg: \"ns not found\", $gleStats: { lastOpTime: Timestamp 1423598324000|1, electionId: ObjectId('54d968cb8a8d61f9c59083ff') } }, shard1/ip-10-180-37-181:30000,ip-10-187-63-140:30000,ip-10-203-173-62:30000: { ok: 0.0, errmsg: \"ns not found\", $gleStats: { lastOpTime: Timestamp 1423600492000|1, electionId: ObjectId('54d4c393625d1974fec18a28') } }"
      }
          at Error (<anonymous>)
          at DBCollection.drop (src/mongo/shell/collection.js:619:15)
          at (shell):1:18 at src/mongo/shell/collection.js:619
      mongos> use config
      switched to db config
      mongos> db.collections.find()
      { "_id" : "test.cap1833hashed", "lastmod" : ISODate("2015-02-06T17:55:44.651Z"), "dropped" : false, "key" : { "idx" : "hashed" }, "unique" : false, "lastmodEpoch" : ObjectId("54d5002050c6400fb0af3055") }
      { "_id" : "t.foo", "lastmod" : ISODate("2015-02-10T20:35:07.767Z"), "dropped" : true, "lastmodEpoch" : ObjectId("000000000000000000000000") }
      mongos> db.locks.find()
      { "_id" : "configUpgrade", "state" : 0, "who" : "ip-10-123-129-75:27017:1423245217:1804289383:mongosMain:846930886", "ts" : ObjectId("54d4ffa250c6400fb0af3035"), "process" : "ip-10-123-129-75:27017:1423245217:1804289383", "when" : ISODate("2015-02-06T17:53:38.080Z"), "why" : "upgrading config database to new format v6" }
      { "_id" : "balancer", "state" : 0, "who" : "ip-10-123-129-75:27017:1423245217:1804289383:Balancer:1681692777", "ts" : ObjectId("54da507d50c6400fb0b0592c"), "process" : "ip-10-123-129-75:27017:1423245217:1804289383", "when" : ISODate("2015-02-10T18:39:57.034Z"), "why" : "doing balance round" }
      { "_id" : "test.cap1833hashed", "state" : 0, "who" : "ip-10-123-129-75:27017:1423245217:1804289383:conn23279:1957747793", "ts" : ObjectId("54da6c0e50c6400fb0b05945"), "process" : "ip-10-123-129-75:27017:1423245217:1804289383", "when" : ISODate("2015-02-10T20:37:34.650Z"), "why" : "drop" }
      { "_id" : "t.foo", "state" : 0, "who" : "ip-10-123-129-75:27017:1423245217:1804289383:conn23279:1957747793", "ts" : ObjectId("54da6b7b50c6400fb0b05942"), "process" : "ip-10-123-129-75:27017:1423245217:1804289383", "when" : ISODate("2015-02-10T20:35:07.203Z"), "why" : "drop" }
      mongos> sh.status()
      --- Sharding Status ---
        sharding version: {
      	"_id" : 1,
      	"minCompatibleVersion" : 5,
      	"currentVersion" : 6,
      	"clusterId" : ObjectId("54d4ffa250c6400fb0af3037")
      }
        shards:
      	{  "_id" : "shard0",  "host" : "shard0/ip-10-102-151-133:30000,ip-10-203-175-233:30000,ip-10-63-43-228:30000" }
      	{  "_id" : "shard1",  "host" : "shard1/ip-10-180-37-181:30000,ip-10-187-63-140:30000,ip-10-203-173-62:30000" }
        balancer:
      	Currently enabled:  no
      	Currently running:  no
      	Failed balancer rounds in last 5 attempts:  0
      	Migration Results for the last 24 hours:
      		19 : Success
      		1 : Failed with error 'data transfer error', from shard1 to shard0
      		3 : Failed with error 'migration already in progress', from shard1 to shard0
      		25 : Failed with error 'chunk too big to move', from shard1 to shard0
      		315 : Failed with error 'could not acquire collection lock for test.cap1833hashed to migrate chunk [{ : MinKey },{ : MaxKey }) :: caused by :: Lock for migrating chunk [{ : MinKey }, { : MaxKey }) in test.cap1833hashed is taken.', from shard1 to shard0
        databases:
      	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
      	{  "_id" : "test",  "partitioned" : true,  "primary" : "shard1" }
      		test.cap1833hashed
      			shard key: { "idx" : "hashed" }
      			chunks:
      				shard0	77978
      				shard1	93448
      			too many chunks to print, use verbose if you want to force print
      	{  "_id" : "t",  "partitioned" : true,  "primary" : "shard1" }
      	{  "_id" : "db",  "partitioned" : false,  "primary" : "shard1" }
      

  Attachments:
    1. configdata.tar.gz (23.69 MB)
    2. mongos-1.log.gz (57.04 MB)
    3. mongos-2.log.gz (117.79 MB)

            Assignee: Backlog - Sharding Team
            Reporter: Jonathan Abrahams
            Votes: 3
            Watchers: 19
