Core Server / SERVER-40346

Write shardCollection initial chunks with BatchWriter

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 4.1.10, 4.0.11
    • Affects Version/s: 4.0.7, 4.1.9
    • Component/s: Sharding
    • Labels: None
    • Backwards Compatibility: Fully Compatible
    • Backport Requested: v4.0
    • Sprint: Sharding 2019-04-08, Sharding 2019-04-22

      Problem Statement

      Currently, when we shard a collection, we write each chunk document to the config server sequentially, using a {w:majority} write.

      This means that for 10,000 chunks we make 10,000 separate calls from a shard to the config server. We can avoid this by writing the chunk documents in batches instead.
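
      As a rough, self-contained illustration of the difference (ConfigClient, ChunkDoc, and the two insert methods below are hypothetical stand-ins that only count round trips, not the server's ShardingCatalogClient API):

```cpp
// Hypothetical illustration only: ConfigClient and its insert methods are
// stand-ins that count round trips, not the server's ShardingCatalogClient API.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct ChunkDoc {
    std::string minKey;  // placeholder fields for a config.chunks document
    std::string maxKey;
};

class ConfigClient {
public:
    // Models one {w: majority} round trip per call to the config server.
    void insertOneMajority(const std::string&, const ChunkDoc&) {
        ++_roundTrips;
    }

    // Models one {w: majority} round trip carrying a whole batch of documents.
    void insertManyMajority(const std::string&, const std::vector<ChunkDoc>&) {
        ++_roundTrips;
    }

    std::size_t roundTrips() const {
        return _roundTrips;
    }

private:
    std::size_t _roundTrips = 0;
};

int main() {
    const std::vector<ChunkDoc> chunks(10000);

    ConfigClient sequential;
    for (const auto& chunk : chunks) {  // current behavior: one write per chunk
        sequential.insertOneMajority("config.chunks", chunk);
    }

    ConfigClient batched;
    batched.insertManyMajority("config.chunks", chunks);  // proposed: one batch

    std::cout << "sequential round trips: " << sequential.roundTrips() << "\n"
              << "batched round trips:    " << batched.roundTrips() << "\n";
    return 0;
}
```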

      Proposed Solution

      1. Make an AlternativeSessionRegion class, analogous to AlternativeClientRegion, that controls the lifetime of a generated session RAII-style. The class will take in an existing session or generate its own. On construction it will stash the session currently on the thread and replace it with the new one; on destruction it will restore the original session and destroy the newly created session (but only if the class created that session itself). For now it may be simpler to restrict the class to only self-generating sessions, and I could be persuaded to do that instead. (A minimal sketch of the RAII pattern appears after this list.)
      2. Add a new function to ShardingCatalogClient: insertConfigDocumentsAsRetryableWrite(). It will take a vector of BSONObj documents, a namespace, and a transaction number, and run a batched insert command against the config server using the idempotent retry policy.
      3. Change InitialSplitPolicy::writeFirstChunksToConfig() to run under the AlternativeSessionRegion and insert the chunk documents in bulk. (A combined sketch of steps 2 and 3 follows the step 1 sketch below.)
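
      A minimal sketch of the RAII pattern from step 1, assuming a hypothetical thread-local "current session" slot; Session, currentSession, and the class body as written here are illustrative stand-ins, not the server's session machinery:

```cpp
// Hypothetical sketch of the RAII idea in step 1; Session and the thread-local
// currentSession slot are stand-ins, not the server's session machinery.
#include <memory>
#include <utility>

struct Session {
    long long id = 0;  // placeholder for a logical session id
};

// Stand-in for "the session currently stashed on this thread".
thread_local Session* currentSession = nullptr;

class AlternativeSessionRegion {
public:
    // Generate a fresh session, stash the thread's current one, and install
    // the new one for the lifetime of this object.
    AlternativeSessionRegion()
        : _ownedSession(std::make_unique<Session>()),
          _previousSession(std::exchange(currentSession, _ownedSession.get())) {}

    // Restore the original session; the generated session dies with this object.
    ~AlternativeSessionRegion() {
        currentSession = _previousSession;
    }

    AlternativeSessionRegion(const AlternativeSessionRegion&) = delete;
    AlternativeSessionRegion& operator=(const AlternativeSessionRegion&) = delete;

    Session* session() const {
        return _ownedSession.get();
    }

private:
    std::unique_ptr<Session> _ownedSession;
    Session* _previousSession;
};

void example() {
    // currentSession is untouched out here.
    {
        AlternativeSessionRegion asr;  // installs a freshly generated session
        // ... work that runs against asr.session() / currentSession ...
    }  // original session restored, generated session destroyed
}
```

      Matching the simpler variant suggested in step 1, this version only self-generates its session; accepting an existing session would just add a constructor that skips the make_unique call.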
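
      And a combined sketch of steps 2 and 3, again with hypothetical stand-ins: the BSONObj alias, BatchInsert, SendFn, the retry loop, and the placeholder region are illustrative, and only the function names insertConfigDocumentsAsRetryableWrite() and writeFirstChunksToConfig() come from the proposal above:

```cpp
// Hypothetical sketch of steps 2 and 3 together. The BSONObj alias, BatchInsert,
// SendFn, and the retry loop are illustrative stand-ins, not server APIs.
#include <functional>
#include <iostream>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

using BSONObj = std::string;  // stand-in for a BSON chunk document
using TxnNumber = long long;

struct BatchInsert {
    std::string nss;            // e.g. "config.chunks"
    std::vector<BSONObj> docs;  // every chunk document in one command
    TxnNumber txnNumber;        // what makes the batch a retryable write
};

// Stand-in for "send this command to the config server"; transient failures
// (network error, config server stepdown) surface as exceptions.
using SendFn = std::function<void(const BatchInsert&)>;

// Step 2: one batched insert, retried under an idempotent retry policy. Because
// the command is a retryable write keyed by (session, txnNumber), resending it
// after a transient failure cannot apply the documents twice.
void insertConfigDocumentsAsRetryableWrite(std::string nss,
                                           std::vector<BSONObj> docs,
                                           TxnNumber txnNumber,
                                           const SendFn& send) {
    const BatchInsert batch{std::move(nss), std::move(docs), txnNumber};
    constexpr int kMaxAttempts = 3;
    for (int attempt = 1;; ++attempt) {
        try {
            send(batch);
            return;
        } catch (const std::runtime_error&) {
            if (attempt == kMaxAttempts)
                throw;
        }
    }
}

// Minimal placeholder for the step 1 region: here it only marks the scope in
// which the generated session would be installed on the thread.
struct AlternativeSessionRegion {
    AlternativeSessionRegion() {}   // would stash + swap the session (step 1)
    ~AlternativeSessionRegion() {}  // would restore the original session
};

// Step 3: writeFirstChunksToConfig runs under the region and inserts the chunk
// documents in bulk instead of issuing one {w: majority} write per chunk.
void writeFirstChunksToConfig(const std::vector<BSONObj>& firstChunks,
                              const SendFn& send) {
    AlternativeSessionRegion asr;  // fresh session for the duration of the write
    insertConfigDocumentsAsRetryableWrite(
        "config.chunks", firstChunks, /*txnNumber (arbitrary here)=*/0, send);
}

int main() {
    const std::vector<BSONObj> chunks(10000, BSONObj{"<chunk doc>"});
    writeFirstChunksToConfig(chunks, [](const BatchInsert& batch) {
        std::cout << "inserted " << batch.docs.size() << " chunk documents into "
                  << batch.nss << " as one retryable batch\n";
    });
    return 0;
}
```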

            Assignee:
            Blake Oler (blake.oler@mongodb.com)
            Reporter:
            Blake Oler (blake.oler@mongodb.com)
            Votes:
            0
            Watchers:
            5
