[SERVER-13340] parallel creation of hashed shard key for multiple collections may not distribute them evenly Created: 25/Mar/14 Updated: 10/Dec/14 Resolved: 09/Jul/14 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | 2.4.9, 2.6.0-rc2 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Asya Kamsky | Assignee: | Unassigned |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Operating System: | ALL | ||||||||
| Participants: | |||||||||
| Description |
|
This is captured by the linked issue. The problem is that this is a real-life scenario, so our test should include trying to do it in parallel, and we should fix the distribution of initial chunks: when a migration of an empty chunk errors, it should retry several times rather than just continuing on, since continuing leaves all initial chunks on the original shard. |
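The retry behavior suggested in the description can be sketched as follows. This is a minimal illustrative sketch, not the actual server implementation: the `move_chunk` callable, the `TransientMigrationError` class, and `move_chunk_with_retry` are all hypothetical names standing in for the server's internal migration path and its transient failure modes (e.g. a stale shard version or a distributed-lock conflict).

```python
class TransientMigrationError(Exception):
    """Stands in for a transient migration failure such as a stale
    shard version or a distributed-lock conflict (hypothetical)."""


def move_chunk_with_retry(move_chunk, chunk, target_shard, max_attempts=3):
    """Attempt a chunk migration up to max_attempts times.

    Returns True if the migration eventually succeeds, False if every
    attempt fails with a transient error, in which case the chunk is
    left on its original shard (the behavior the ticket complains about,
    but only after several tries instead of one).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            move_chunk(chunk, target_shard)
            return True
        except TransientMigrationError:
            # On the last attempt, give up and leave the chunk in place.
            if attempt == max_attempts:
                return False
    return False
```

Under this sketch, a migration that fails once or twice due to a race with a concurrent `shardCollection` would still land the chunk on the intended shard, while a persistently failing migration degrades to today's behavior.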
| Comments |
| Comment by Scott Hernandez (Inactive) [ 25/Mar/14 ] |
|
It sounds like there are any number of reasons the moveChunks could fail, no? All of them would leave the chunks unevenly distributed, yes? |
| Comment by Asya Kamsky [ 25/Mar/14 ] |
|
Adding a retry sounds like an enhancement. I was trying to describe a bug, which is that, due to races in migrations, new hashed sharded collections may end up unbalanced. |
| Comment by Scott Hernandez (Inactive) [ 25/Mar/14 ] |
|
Is the gist of this just "retry moveChunks which fail due to a stale shard version/dlock conflict" in order to get the desired distribution during the initial splits? If so, please re-write this issue to reflect that. |