Add support for compound and shard keys:
Given the following example:
db.test.drop();
db.test.createIndex({ "a": 1, "b": 1 });
db.test.insertMany([
  {_id: 11, a: 1, b: 1},
  {_id: 12, a: 1, b: 2},
  {_id: 13, a: 1, b: 3},
  {_id: 14, a: 2, b: 1},
  {_id: 15, a: 2, b: 2},
  {_id: 16, a: 2, b: 3},
  {_id: 17, a: 2, b: 4}]);
If the partitioner were based on fields a and b and the generated partition ranges looked like:
{a: 1, b: 1}, {a: 1, b: 3}
{a: 1, b: 3}, {a: 2, b: 2}
{a: 2, b: 2}, {a: 2, b: 3}
Then to match against the range {a: 1, b: 3}, {a: 2, b: 2} you would need to project out the compound fields so a full BSON comparison can be made, rather than just key-value comparisons.
For example:
db.test.aggregate([
  {"$addFields": {"__idx": {"a": "$a", "b": "$b"}}},
  {"$match": {"__idx": {"$gte": {"a": 1, "b": 3}, "$lt": {"a": 2, "b": 2}}}}])
A compound key partitioner should be added to generate valid partitions against multiple keys.
Given the changes to chunk sizes post MongoDB 6.0, it is recommended that this be a new partitioner that samples the collection and provides the ranges and pipelines for each partition.
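A minimal mongo shell sketch of how such a sample-based partitioner could derive compound-key boundaries and the per-partition pipelines is shown below. The field names (a, b), the sample size, and the partition count are illustrative assumptions, not connector configuration:

// Hypothetical tuning values, chosen to suit the tiny example collection above.
const samplesPerPartition = 2;
const numPartitions = 3;

// Sample the collection, keep only the compound key fields, and sort on the
// embedded document so ordering uses a full BSON comparison.
const samples = db.test.aggregate([
  {"$sample": {"size": samplesPerPartition * numPartitions}},
  {"$project": {"_id": 0, "__idx": {"a": "$a", "b": "$b"}}},
  {"$sort": {"__idx": 1}}
]).toArray();

// Take every n-th sampled key as a partition boundary.
const bounds = [];
for (let i = samplesPerPartition; i < samples.length; i += samplesPerPartition) {
  bounds.push(samples[i].__idx);
}

// Build one aggregation pipeline per partition; the first and last partitions
// are left open-ended so every document falls into exactly one partition.
const pipelines = [];
for (let i = 0; i <= bounds.length; i++) {
  const range = {};
  if (i > 0) range["$gte"] = bounds[i - 1];
  if (i < bounds.length) range["$lt"] = bounds[i];
  const stages = [{"$addFields": {"__idx": {"a": "$a", "b": "$b"}}}];
  if (Object.keys(range).length > 0) stages.push({"$match": {"__idx": range}});
  pipelines.push(stages);
}
printjson(pipelines);

With the example data this yields three pipelines of the same $addFields/$match shape shown above, with the actual boundary values depending on which documents were sampled.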
Previous description:
Support hashed shard keys and compound keys
SPARK-345 disabled compound shard key support, but it appears they can be supported along with hashed shard keys.
Using similar logic to the shard partitioner, an aggregation pipeline could be constructed to look up chunk ranges.
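For reference, a rough mongo shell sketch of that chunk lookup might look like the following; the namespace test.test is assumed for illustration, and the uuid lookup is only needed on MongoDB 5.0+, where config.chunks is keyed by collection uuid rather than ns:

const configDB = db.getSiblingDB("config");
const collInfo = configDB.collections.findOne({"_id": "test.test"});
// MongoDB 5.0+ keys chunks by the collection uuid; older versions key them by ns.
const chunkQuery = (collInfo && collInfo.uuid) ? {"uuid": collInfo.uuid} : {"ns": "test.test"};
configDB.chunks.find(chunkQuery, {"_id": 0, "min": 1, "max": 1, "shard": 1})
  .sort({"min": 1})
  .forEach(function (chunk) { printjson(chunk); });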
Related to: SPARK-444 Set Auto Bucket Partitioner to be the default partitioning strategy (Closed)