Core Server / SERVER-30653

Randomize Chunk Locations with chunkLocationRandomizer

    • Type: Improvement
    • Resolution: Duplicate
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.4.7
    • Component/s: Performance, Sharding
    • Labels: None

      Under the current chunk migration strategy, a chunk stays where it is until it is split. Over time, and with an imperfect choice of shard key, this can lead to hot shards.

      The problem can be solved, to a point, by creating a job that randomly relocates chunks.

      The goal here is to eliminate any existing bias in where the chunks are located, and improve shard performance.

      This could be enabled or disabled by a setting in the balancer, changed the same way we change the setting for secondaryThrottle. This setting, chunkLocationRandomizer, could randomly pick a chunk and move it to a different shard.
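
      As a minimal sketch of what that knob might look like (the chunkLocationRandomizer field is hypothetical; only the secondaryThrottle document below exists today), it could sit alongside the other balancer settings in the config database:

          // Existing mechanism: toggle secondaryThrottle on the balancer
          // settings document in the config database (run via mongos).
          use config
          db.settings.update(
              { _id: "balancer" },
              { $set: { "_secondaryThrottle": true } },
              { upsert: true }
          )

          // Hypothetical analogue for the proposed setting (not a real option today):
          db.settings.update(
              { _id: "balancer" },
              { $set: { "chunkLocationRandomizer": true } },
              { upsert: true }
          )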

      Of course, this could be made more intelligent: detect how many hot chunks there are and, if most sit on one shard, build a plan that moves them to other specific shards in an optimized pattern. There would need to be some hysteresis so it didn't constantly move the same chunk back and forth, but that wouldn't be difficult. Even so, simply relocating existing chunks onto different shards is enough to even out traffic.
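
      A rough sketch of one randomizer pass, written against the config metadata that already exists (the job itself does not exist in the server; it is what this ticket proposes), run in a mongos shell:

          // Pick one chunk at random from the cluster metadata.
          var configDB = db.getSiblingDB("config");
          var chunk = configDB.chunks.aggregate([{ $sample: { size: 1 } }]).next();

          // Choose a random destination shard other than the chunk's current one.
          var shards = configDB.shards.find().map(function (s) { return s._id; });
          var targets = shards.filter(function (id) { return id !== chunk.shard; });
          var to = targets[Math.floor(Math.random() * targets.length)];

          // Move the chunk by its bounds, as recorded in config.chunks.
          db.adminCommand({
              moveChunk: chunk.ns,
              bounds: [chunk.min, chunk.max],
              to: to
          });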

      In the short run, however, the randomizer could significantly reduce hot-shard problems caused by historical accident.

      BTW, the reason I have a hot shard is that I can't define a shard key as a compound index of (field_one hashed, field_two hashed or unhashed). Instead, I have a plain compound index, and it leads to a hot shard since field_two is monotonically increasing.
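
      To illustrate the constraint (the namespace below is made up, and the first call is exactly what 3.4 rejects, since a hashed shard key cannot be compound on that version):

          // Desired: hash field_one while still including field_two in the key.
          // Not accepted on 3.4, where hashed shard keys must be single-field.
          sh.shardCollection("mydb.mycoll", { field_one: "hashed", field_two: 1 })

          // What is actually in use: a plain compound key, which per the
          // description above ends up hot because field_two only ever grows.
          sh.shardCollection("mydb.mycoll", { field_one: 1, field_two: 1 })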

            Assignee:
            kelsey.schubert@mongodb.com Kelsey Schubert
            Reporter:
            kevin.rice@searshc.com Kevin Rice
            Votes:
            0
            Watchers:
            6
