[SERVER-45795] moveChunk Issue after mongorestore (continuation of SERVER-44143) Created: 27/Jan/20 Updated: 27/Oct/23 Resolved: 07/Jul/20 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | 4.0.4 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Max Isaev | Assignee: | Dmitry Agranat |
| Resolution: | Community Answered | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Operating System: | ALL |
| Steps To Reproduce: | Follow "Restore a Sharded Cluster" https://docs.mongodb.com/v4.0/tutorial/restore-sharded-cluster/#d-restore-each-shard-replica-set (with mongodump/mongorestore) |
| Participants: |
| Description |
|
Hello, this is a continuation of SERVER-44143.

We are getting the following error on our PROD sharded cluster (which was migrated from Docker to other servers, without using Docker, by following the "Restore a Sharded Cluster" procedure at https://docs.mongodb.com/v4.0/tutorial/restore-sharded-cluster/#d-restore-each-shard-replica-set, with mongodump/mongorestore).

===========================================

Here are the collection UUIDs we get if we connect to each shard:

PROD:
Here we can see that the UUID of the products collection in the config replica set is the same as the one cached on the shards in config.cache.collections.
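(For reference, this is roughly how we compare the UUIDs in the mongo shell; the namespace is productrepository.products as above, everything else is standard:)

// Run against the config server replica set: authoritative sharding metadata
db.getSiblingDB("config").collections.find(
  { _id: "productrepository.products" },
  { uuid: 1 }
)

// Run against each shard's primary: the cached copy of that metadata
db.getSiblingDB("config").getCollection("cache.collections").find(
  { _id: "productrepository.products" },
  { uuid: 1 }
)

// Run against each shard's primary: the UUID of the collection as it actually
// exists locally on that shard
db.getSiblingDB("productrepository").getCollectionInfos({ name: "products" })[0].info.uuid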
The thing is that we have two TEST environments (sharded clusters) to which we clone our PROD every week (we restore PROD's backup to the two TEST clusters, following the same procedure stated above). And I see that on those two (weekly cloned) environments the UUID of the products collection (productrepository.products) is unique every time and differs between shards, as if mongorestore, when we restore the shards sequentially, assigns a new UUID to the sharded collection on each shard.
TEST cluster 1:
TEST cluster 2:
I have tried manually moving chunks in the TEST environment using the moveChunk command and, as expected, I get the issue with different UUIDs.

Is mongorestore supposed to assign new UUIDs to restored sharded collections? As I understand it, the only way to rectify the chunk-migration issue is to drop the collection through mongos (following the procedure described in SERVER-44143).

P.S. After another clone that took place a few hours after writing the information above, I checked the UUID of the sharded collection once again, and again it is different between shards and also does not match the UUIDs from PROD.

Thank you. |
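(For reference, the manual migration I attempted and the drop-through-mongos workaround I mean look roughly like this; the shard key field "sku" and the shard name "shard02" below are placeholders, not our real ones:)

// Manual chunk migration through mongos (this is what fails with the UUID mismatch):
sh.moveChunk("productrepository.products", { sku: "example" }, "shard02")

// Workaround sketch: drop the collection through mongos so that it is
// recreated with one consistent UUID everywhere, re-shard it, then reload
// the data through mongos (e.g. with mongorestore):
db.getSiblingDB("productrepository").products.drop()
sh.shardCollection("productrepository.products", { sku: 1 })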
| Comments |
| Comment by Dmitry Agranat [ 07/Jul/20 ] |
|
We haven’t heard back from you for some time, so I’m going to mark this ticket as resolved. If this is still an issue for you, please provide additional information and we will reopen the ticket. Regards, |
| Comment by Carl Champain (Inactive) [ 09/Mar/20 ] |
|
Sorry for the late response! Thank you, |
| Comment by Max Isaev [ 08/Feb/20 ] |
|
Thank you for your response! Well, you see, in the procedure we followed, https://docs.mongodb.com/v4.0/tutorial/restore-sharded-cluster (mongodump and mongorestore), we are not exactly encountering the issue described in SERVER-44143. In my opinion, there are only two ways of preventing further confusion for anyone else who backs up and restores their sharded clusters with the mongodump and mongorestore utilities.
Since 4.2, mongodump and mongorestore are no longer the recommended backup tools for sharded clusters, so I think the first option is the best one. Please let me know your thoughts.

Best regards, Max
|
| Comment by Carl Champain (Inactive) [ 07/Feb/20 ] |
mongorestore will intentionally result in a new UUID for a collection; it indicates that a namespace has been reused. We really appreciate you writing this detailed ticket. I was able to recreate the migration error and, as you mentioned, this issue can be solved with the workaround in SERVER-44143.

Kind regards,
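(A quick way to see this, using the namespace from this ticket, is to record the UUID before the dump and compare it after the restore:)

// Before mongodump: note the collection's current UUID
db.getSiblingDB("productrepository").getCollectionInfos({ name: "products" })[0].info.uuid

// After mongorestore has recreated the collection: the UUID will generally
// be different, because the namespace was dropped and recreated rather than preserved
db.getSiblingDB("productrepository").getCollectionInfos({ name: "products" })[0].info.uuid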
| Comment by Max Isaev [ 30/Jan/20 ] |
|
That was unintentional; I meant to link the whole procedure. Yes, first of all we restore the config server replica set, and then the shards. |
| Comment by Danny Hatcher (Inactive) [ 27/Jan/20 ] |
|
You specifically linked to the section describing restoring the shards. Are you performing the procedure of restoring the config servers beforehand? |