Type: Bug
Resolution: Done
Priority: Major - P3
Affects Version/s: None
Component/s: None
Team: Sharding NYC
Backwards Compatibility: Fully Compatible
Operating System: ALL
Initial state:
[direct: mongos] test> sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId("6407feaf46705024b23e5f69") }
---
shards
[
  {
    _id: 'config',
    host: 'csshard/localhost:27019',
    state: 1,
    topologyTime: Timestamp({ t: 1678245624, i: 1 })
  },
  {
    _id: 'shard2',
    host: 'shard2/localhost:27021',
    state: 1,
    topologyTime: Timestamp({ t: 1678245703, i: 2 })
  }
]
---
active mongoses
[ { '7.0.0-alpha-538-g7cec1b7': 1 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{ 'Currently enabled': 'yes', 'Currently running': 'no' }
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'config', nChunks: 1024 } ],
        chunks: [
          'too many chunks to print, use verbose if you want to force print'
        ],
        tags: []
      }
    }
  },
  {
    database: {
      _id: 'test',
      primary: 'shard2',
      partitioned: false,
      version: {
        uuid: UUID("5ed830ec-10da-42f9-b92a-b7e3f78df969"),
        timestamp: Timestamp({ t: 1678245720, i: 1 }),
        lastMod: 1
      }
    },
    collections: {
      'test.bar': {
        shardKey: { a: 1 },
        unique: true,
        balancing: true,
        chunkMetadata: [ { shard: 'shard2', nChunks: 1 } ],
        chunks: [
          { min: { a: MinKey() }, max: { a: MaxKey() }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      },
      'test.baz': {
        shardKey: { a: 1 },
        unique: true,
        balancing: true,
        chunkMetadata: [ { shard: 'shard2', nChunks: 1 } ],
        chunks: [
          { min: { a: MinKey() }, max: { a: MaxKey() }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      },
      'test.shards': {
        shardKey: { a: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'config', nChunks: 1 } ],
        chunks: [
          { min: { a: MinKey() }, max: { a: MaxKey() }, 'on shard': 'config', 'last modified': Timestamp({ t: 1, i: 4 }) }
        ],
        tags: []
      }
    }
  }
]
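For context, a minimal sketch of how a cluster like this could have been stood up; this is an assumption, not shown in the ticket. It presumes the CSRS is already running as replica set csshard on localhost:27019 and shard2 on localhost:27021:

// Hypothetical setup steps, run from mongosh against the mongos:
// 1. Let the config server replica set also act as a shard (catalog shard).
db.adminCommand({ transitionToCatalogShard: 1 })
// 2. Add the second shard.
sh.addShard("shard2/localhost:27021")
// 3. Shard a collection so there is data to drain later.
sh.enableSharding("test")
sh.shardCollection("test.shards", { a: 1 })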
Start-to-end removal:
[direct: mongos] test> db.adminCommand({removeShard: "config"})
{
  msg: 'draining started successfully',
  state: 'started',
  shard: 'config',
  note: 'you need to drop or movePrimary these databases',
  dbsToMove: [],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1678337082, i: 3 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1678337082, i: 3 })
}
[direct: mongos] test> db.adminCommand({removeShard: "config"})
{
  msg: 'removeshard completed successfully',
  state: 'completed',
  shard: 'config',
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1678338347, i: 4 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1678338347, i: 4 })
}
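Note that removeShard is asynchronous: the first call starts draining and repeated calls report progress, so between the two calls above one waits for the chunks to move off. A minimal polling sketch (the 5-second interval is arbitrary):

// Poll draining progress; removeShard is idempotent while draining.
let res;
do {
  res = db.adminCommand({ removeShard: "config" });
  printjson({ state: res.state, remaining: res.remaining });
  if (res.state !== "completed") sleep(5000); // mongosh built-in sleep, in ms
} while (res.state !== "completed");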
End state:
[direct: mongos] test> sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId("6407feaf46705024b23e5f69") }
---
shards
[
  {
    _id: 'shard2',
    host: 'shard2/localhost:27021',
    state: 1,
    topologyTime: Timestamp({ t: 1678338347, i: 1 })
  }
]
---
active mongoses
[ { '7.0.0-alpha-538-g7cec1b7': 1 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{ 'Currently enabled': 'yes', 'Currently running': 'no' }
---
databases
[
  {
    database: { _id: 'config', primary: 'config', partitioned: true },
    collections: {
      'config.system.sessions': {
        shardKey: { _id: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard2', nChunks: 1024 } ],
        chunks: [
          'too many chunks to print, use verbose if you want to force print'
        ],
        tags: []
      }
    }
  },
  {
    database: {
      _id: 'test',
      primary: 'shard2',
      partitioned: false,
      version: {
        uuid: UUID("5ed830ec-10da-42f9-b92a-b7e3f78df969"),
        timestamp: Timestamp({ t: 1678245720, i: 1 }),
        lastMod: 1
      }
    },
    collections: {
      'test.bar': {
        shardKey: { a: 1 },
        unique: true,
        balancing: true,
        chunkMetadata: [ { shard: 'shard2', nChunks: 1 } ],
        chunks: [
          { min: { a: MinKey() }, max: { a: MaxKey() }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      },
      'test.baz': {
        shardKey: { a: 1 },
        unique: true,
        balancing: true,
        chunkMetadata: [ { shard: 'shard2', nChunks: 1 } ],
        chunks: [
          { min: { a: MinKey() }, max: { a: MaxKey() }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 0 }) }
        ],
        tags: []
      },
      'test.shards': {
        shardKey: { a: 1 },
        unique: false,
        balancing: true,
        chunkMetadata: [ { shard: 'shard2', nChunks: 1 } ],
        chunks: [
          { min: { a: MinKey() }, max: { a: MaxKey() }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 2, i: 0 }) }
        ],
        tags: []
      }
    }
  }
]
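As a cross-check beyond sh.status(), the topology can be read straight from the sharding metadata (a sketch; the projection is just for brevity):

// Confirm the 'config' shard entry is gone from the shard registry.
db.getSiblingDB("config").shards.find({}, { _id: 1, host: 1, state: 1 })
// Expected: only { _id: 'shard2', host: 'shard2/localhost:27021', state: 1 }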
Question 1: Is it valid to do this and stop here, with the old config shard now simply functioning as a dedicated CSRS?
Trying to re-add the catalog shard gives an error:
[direct: mongos] test> db.adminCommand({ transitionToCatalogShard: 1 });
MongoServerError: can't add shard 'csshard/localhost:27019' because a local database 'test' exists in another shard2
However, dropping that database on csshard clears the error; it is empty anyway, since all the data was moved off during draining. A minimal sketch of that cleanup step, run directly against the csshard primary (localhost:27019 per the topology above; the direct connection is an assumption, it is not shown in this ticket):
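// Connected directly to the csshard primary, e.g. mongosh --port 27019:
const testDB = db.getSiblingDB("test");
testDB.getCollectionNames();  // expected: [], the data now lives on shard2
testDB.dropDatabase();

After the drop, re-running the transition succeeds: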
[direct: mongos] test> db.adminCommand({ transitionToCatalogShard: 1 });
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1678339266, i: 4 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1678339266, i: 3 })
}
Question 2: Is this sequence of events valid to get back to the original state, or is there something else I should be aware of?