[SERVER-27470] shardCollection and movePrimary should not be allowed to run in parallel Created: 20/Dec/16 Updated: 27/Oct/23 Resolved: 01/Mar/18 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | 3.5.1 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Randolph Tan | Assignee: | Esha Maharishi (Inactive) |
| Resolution: | Gone away | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Operating System: | ALL |
| Description |
|
shardCollection and movePrimary should not be allowed to run in parallel on the same parent database. The last phase of the movePrimary command drops the collections on the old primary shard that were unsharded when the command started. If a collection becomes sharded halfway through, movePrimary can therefore accidentally drop a sharded collection. |
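A minimal sketch of the interleaving described above (plain Python rather than the server's C++; the collection name `db.orders` and the helper functions are invented for illustration):

```python
# Hypothetical model of the race: movePrimary snapshots the unsharded
# collections up front, then drops them at the end, without noticing that
# shardCollection ran in between.

unsharded_at_start = {"db.orders"}  # snapshot taken when movePrimary begins
sharded = set()                     # collections sharded while movePrimary runs

def shard_collection(ns):
    # shardCollection interleaves after movePrimary's snapshot was taken.
    sharded.add(ns)

def move_primary_final_phase():
    # Drops everything that was unsharded at the start of the command, even
    # if it has since been sharded -- the data-loss hazard in this ticket.
    return set(unsharded_at_start)

shard_collection("db.orders")
dropped = move_primary_final_phase()
print(sorted(sharded & dropped))  # ['db.orders']: a now-sharded collection is dropped
```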
| Comments |
| Comment by Esha Maharishi (Inactive) [ 01/Mar/18 ] |
|
Closing because both shardCollection and movePrimary now take the database distlock, so they can no longer run concurrently. |
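As a rough illustration of why a shared database lock closes the race, here is a process-local `threading.Lock` standing in for the per-database distributed lock (the distlock itself is a cluster-wide primitive; this is only a sketch of the mutual exclusion it provides):

```python
import threading

# Stand-in for the per-database distributed lock ("distlock") that both
# shardCollection and movePrimary now take.
db_distlock = threading.Lock()

# movePrimary holds the database distlock for its whole lifetime...
move_primary_has_lock = db_distlock.acquire(blocking=False)

# ...so a concurrent shardCollection on the same database cannot acquire it
# and must wait (or fail) instead of interleaving with movePrimary:
shard_collection_has_lock = db_distlock.acquire(blocking=False)

print(move_primary_has_lock, shard_collection_has_lock)  # True False

db_distlock.release()  # movePrimary finishes and releases the lock
print(db_distlock.acquire(blocking=False))  # True: shardCollection may now run
db_distlock.release()
```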
| Comment by Dianna Hohensee (Inactive) [ 09/Jan/17 ] |
|
Since movePrimary and shardCollection can run simultaneously from different mongos servers, fixing this requires taking distributed locks. Moving this ticket into the epic to move metadata commands to the config server (PM-696). Rather than making a series of network calls to take the collection distributed locks for some undetermined number of collections, it is better to take them locally once the commands have moved to the config server. |