[SERVER-13340] parallel creation of hashed shard key for multiple collections may not distribute them evenly Created: 25/Mar/14  Updated: 10/Dec/14  Resolved: 09/Jul/14

Status: Closed
Project: Core Server
Component/s: Sharding
Affects Version/s: 2.4.9, 2.6.0-rc2
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Asya Kamsky Assignee: Unassigned
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
duplicates SERVER-14394 Create initial chunks directly on shards Closed
Operating System: ALL
Participants:

 Description   

This is captured by SERVER-9258 and SERVER-9260, where parallel unit tests were causing hashed-key sharded collections not to be distributed evenly.

The problem is that this is a real-life scenario, so our tests should include creating hashed shard keys in parallel, and we should fix the distribution of initial chunks: when the migration of an empty chunk errors, it should retry several times rather than just continuing on, since continuing leaves all of the initial chunks on the original shard.
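The retry behavior proposed above could be sketched roughly as follows. This is a hypothetical illustration, not server code: `moveChunkWithRetry` and the `moveChunk` callback are stand-ins for the real initial-split migration the mongos performs during `shardCollection` on a hashed key.

```javascript
// Hypothetical sketch: instead of giving up after one failed migration
// of an empty initial chunk (which strands all chunks on the original
// shard), retry a bounded number of times before surfacing the error.
// Transient failures here would be things like a stale shard version or
// a distributed-lock conflict caused by parallel shardCollection calls.
function moveChunkWithRetry(moveChunk, maxAttempts) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return moveChunk(); // success: chunk landed on the target shard
    } catch (err) {
      lastError = err;    // assume transient; try again
    }
  }
  throw lastError;        // retries exhausted: report the last failure
}
```

Under this sketch, a migration that fails once or twice due to a race with a concurrent migration still ends up placing the chunk on its target shard, so the initial chunks stay evenly distributed.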



 Comments   
Comment by Scott Hernandez (Inactive) [ 25/Mar/14 ]

It sounds like there are any number of reasons the moveChunks could fail, no? All of them would leave the chunks unevenly distributed, yes?

Comment by Asya Kamsky [ 25/Mar/14 ]

Adding a retry sounds like an enhancement.

I was trying to describe a bug, which is that due to races in migrations, newly sharded hashed-key collections may end up unbalanced.

Comment by Scott Hernandez (Inactive) [ 25/Mar/14 ]

Is the gist of this just "retry moveChunks which fail due to stale shard version/dlock conflict" in order to get the desired distribution during the initial splits? If so, please re-write this issue to reflect that.

Generated at Thu Feb 08 03:31:24 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.