Details
- Type: Bug
- Resolution: Done
- Priority: Major - P3
- Affects Version/s: None
- Fix Version/s: 2.3.0
- Labels: None
- Component/s: Sharding
- Operating System: ALL
Description
If for some reason the data being inserted has a few duplicate shard keys (not so many as to cause problems otherwise), we will never split the initial chunk.
For example, if we're inserting documents of size 300k, two in a row with the same shard key each time, we'll never split, because for a collection with only one chunk the maxChunkSize is forced to 1024 KB, and 1/2 of that is ~500k. Each candidate split point therefore lands between two documents that share a shard key, and such a candidate has to be discarded.
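To illustrate, here is a minimal, self-contained simulation of that arithmetic. This is an assumption-laden sketch, not the actual mongod splitVector code: it only models the numbers above, taking a candidate split key whenever roughly half of the forced 1024 KB has accumulated and discarding any candidate equal to the previous document's shard key.

{code:cpp}
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Numbers from the report: a single-chunk collection has its max chunk
    // size forced down to 1024 KB, candidates are taken at roughly half of
    // that, and each inserted document is ~300k.
    const long long maxChunkSize = 1024 * 1024;    // forced 1024 KB
    const long long threshold = maxChunkSize / 2;  // ~512 KB (~500k)
    const long long docSize = 300 * 1024;          // ~300k per document

    // Shard keys inserted two-in-a-row with the same value: a a b b c c ...
    std::vector<std::string> keys;
    for (char k = 'a'; k <= 'f'; ++k) {
        keys.push_back(std::string(1, k));
        keys.push_back(std::string(1, k));
    }

    long long accumulated = 0;
    int validSplits = 0;
    for (std::size_t i = 0; i < keys.size(); ++i) {
        accumulated += docSize;
        if (accumulated < threshold)
            continue;
        accumulated = 0;
        // The candidate split key is the current document's shard key. A
        // chunk boundary cannot separate two documents with the same key,
        // so a candidate equal to its predecessor is discarded.
        if (i > 0 && keys[i] == keys[i - 1]) {
            std::cout << "candidate '" << keys[i] << "' discarded (duplicate key)\n";
        } else {
            std::cout << "valid split at '" << keys[i] << "'\n";
            ++validSplits;
        }
    }
    std::cout << validSplits << " valid split point(s)\n";  // 0 with these inputs
    return 0;
}
{code}

With these inputs every candidate lands on the second document of a duplicate pair, so the program reports zero valid split points, matching the behavior described above.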
It would be better to specify a different parameter as a target chunk size and split anywhere between targetChunkSize and maxChunkSize wherever possible. Mongod could also be smarter here, since it knows the actual size of the initial chunk.
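A minimal sketch of what that heuristic could look like, under the same assumptions as the simulation above. targetChunkSize is a hypothetical name for the suggested parameter, and the give-up branch past maxChunkSize is an assumption, not something this report specifies:

{code:cpp}
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const long long maxChunkSize = 1024 * 1024;    // same forced cap as above
    const long long targetChunkSize = 400 * 1024;  // hypothetical new parameter
    const long long docSize = 300 * 1024;

    std::vector<std::string> keys;  // same a a b b c c ... insert pattern
    for (char k = 'a'; k <= 'f'; ++k) {
        keys.push_back(std::string(1, k));
        keys.push_back(std::string(1, k));
    }

    long long accumulated = 0;
    for (std::size_t i = 0; i < keys.size(); ++i) {
        accumulated += docSize;
        // Past the target size, accept the first key that differs from its
        // predecessor instead of insisting on the half-way point.
        if (accumulated >= targetChunkSize && i > 0 && keys[i] != keys[i - 1]) {
            std::cout << "split at '" << keys[i] << "' ("
                      << accumulated / 1024 << " KB)\n";
            accumulated = 0;
        } else if (accumulated > maxChunkSize) {
            // Only give up once the chunk has blown past maxChunkSize with
            // no usable key (e.g. every document shares one shard key).
            std::cout << "no valid split key within maxChunkSize\n";
            accumulated = 0;
        }
    }
    return 0;
}
{code}

With the same a a b b c c ... input, this variant finds a valid split point on every pass, because it can slide past a duplicate key anywhere in the [targetChunkSize, maxChunkSize] window instead of discarding the attempt outright.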