When splitting a chunk, if the shard key values are small enough and the number of documents is high enough, the following can happen:
- The chunk splitter invokes autoSplitVector (or splitVector, in older versions) and gets back more than 8192 split points.
- The split points are passed to splitChunkAtMultiplePoints.
- The assert limiting the number of split points is triggered.
As a result, huge chunks will never be split, unless a smaller chunk is manually created.
This problem can also impact shardCollection: when sharding an existing non-empty collection that has zones defined, the SingleChunkOnPrimarySplitPolicy is applied. This leaves a single gigantic chunk on the primary shard, which the chunk splitter is then expected to split.
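For intuition, here is a rough back-of-envelope sketch of when a chunk crosses the 8192-split-point limit. The 64 MiB default chunk size and the one-split-point-per-maxChunkSize assumption are illustrative approximations, not values taken from the server source:

```python
# Illustrative estimate of when a chunk becomes "unsplittable".
# Assumptions (hypothetical, for intuition only):
#   - the splitter proposes roughly one split point per maxChunkSize bytes
#   - splitChunkAtMultiplePoints rejects more than 8192 split points

MAX_SPLIT_POINTS = 8192  # limit mentioned in the description above


def estimated_split_points(chunk_size_bytes: int, max_chunk_size_bytes: int) -> int:
    """Rough number of split points the splitter would propose for one chunk."""
    return chunk_size_bytes // max_chunk_size_bytes


def exceeds_limit(chunk_size_bytes: int, max_chunk_size_bytes: int = 64 * 1024**2) -> bool:
    """True if the chunk would yield more split points than the assert allows."""
    return estimated_split_points(chunk_size_bytes, max_chunk_size_bytes) > MAX_SPLIT_POINTS


# Under these assumptions, a single chunk past ~512 GiB (8192 * 64 MiB)
# already produces more than 8192 split points.
print(exceeds_limit(600 * 1024**3))  # 9600 points -> True
print(exceeds_limit(100 * 1024**3))  # 1600 points -> False
```

This also illustrates why the shardCollection case is problematic: with SingleChunkOnPrimarySplitPolicy, the entire collection starts as one chunk, so a large enough collection begins its life already past the limit.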