[SERVER-44143] moveChunk Issue (Mongo version 4.0.4) Created: 22/Oct/19 Updated: 31/Oct/19 Resolved: 31/Oct/19 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Gennadiy | Assignee: | Dmitry Agranat |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Participants: |
| Description |
|
Hello MongoDB Team. We have run into an issue related to the moveChunk process.
Sorry, I've changed the IPs and the db and collection names due to security policy. Here is the sharded collection status:
The configuration is the following: 3 shards, each with 5 nodes (PRIMARY + 4 Sec + 1 Arb). Here is the config server log error related to the issue:
Please let me know if any additional information is required from my side. Thank You |
| Comments |
| Comment by Dmitry Agranat [ 31/Oct/19 ] | ||||||||||||||||||
|
Hi Gennadiy, I am going to close this ticket, but feel free to reopen it if this happens again. Thanks, |
| Comment by Dmitry Agranat [ 24/Oct/19 ] |
|
Hi Gennadiy, There might be other possibilities for this issue to manifest itself. For example, if a collection was somehow on a replica set before it was added as a shard, or it might be a manifestation of
If this happens again, or if you can reproduce it, please save the logs and we'll be happy to take a look. Thanks, |
| Comment by Gennadiy [ 23/Oct/19 ] |
|
Also, regarding the migration: we migrated it by using backup/restore on each shard, shard to shard. Thank You Gennadiy |
| Comment by Dmitry Agranat [ 23/Oct/19 ] |
|
Thanks, Gennadiy, for the additional information. It is unfortunate that we do not have any evidence of this collection's creation. Based on the information so far, I have another question: when was the cluster upgraded to 4.0.x, and from which MongoDB version was it upgraded? Thanks, |
| Comment by Gennadiy [ 23/Oct/19 ] |
|
Hello Dmitriy, Unfortunately we do not have such logs. The cluster was migrated from Docker containers, and there were no issues before or after the migration.
The cluster has been working for about a year since the migration. Thank You Gennadiy |
| Comment by Dmitry Agranat [ 23/Oct/19 ] |
|
Hi Gennadiy, Could you provide evidence of this collection creation, specifically showing when and from which source this collection was created? If logs are still available to back this up, please upload them to this secure portal. What we are looking for is to determine if this collection was created through mongoS or when directly connected to the shards. Thanks, |
| Comment by Gennadiy [ 22/Oct/19 ] |
|
Hello Dmitriy, Regarding the "usually arise if you connected directly to the shards and restored there instead of going through mongos" - this is a PROD cluster which has never been restored. So, this is not the root cause of the issue. Thank You Gennadiy |
| Comment by Dmitry Agranat [ 22/Oct/19 ] |
|
Hi genacvali91, The situation in which different shards get different UUIDs can usually arise if you connected directly to the shards and restored there instead of going through mongos. Could you confirm whether this is the case, or otherwise provide more details about the procedure that might have caused this issue? I understand that you have redacted the original collection name; could you confirm that the original collection name is not config.system.sessions? Thanks, |
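For reference, the UUID comparison described above can be checked by hand. This is a diagnostic sketch for the mongo shell, not from the ticket itself; the database `mydb` and collection `mycoll` are placeholders for the redacted names. It reads the UUID the config server has registered for the sharded collection and the UUID in each shard's local catalog; in a healthy cluster they all match:

```javascript
// Run against mongos: the UUID the config server has on record for the
// sharded collection (config.collections carries a uuid field since
// MongoDB 3.6, when collection UUIDs were introduced).
var nsUUID = db.getSiblingDB("config")
               .collections
               .findOne({ _id: "mydb.mycoll" })
               .uuid;
printjson(nsUUID);

// Run while connected directly to each shard's primary: the UUID stored
// in that shard's local catalog for the same collection.
var localUUID = db.getSiblingDB("mydb")
                  .getCollectionInfos({ name: "mycoll" })[0]
                  .info.uuid;
printjson(localUUID);

// If a shard's localUUID differs from nsUUID (for example because data
// was restored directly onto the shard instead of through mongos),
// moveChunk migrations involving that shard can fail.
```

These commands only read catalog metadata, so they are safe to run on a production cluster.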