  Core Server / SERVER-58092

Mitigate the impact of paging-in the filtering table after a shard node restart


Details

    • Type: Improvement
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: 3.6.23, 4.2.14, 4.4.6, 4.0.25, 5.0.0-rc4
    • Assigned Teams: Sharding EMEA

    Description

      On shard nodes, the routing/filtering table is kept on disk in the config.system.cache.chunks.* collections, and secondary nodes request updates to these tables by contacting the primary and waiting for it to write the most-recent information.
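
      As a rough illustration (not part of the original ticket), the sketch below uses pymongo to connect directly to a shard member and gauge how large the persisted filtering table is. The connection string is a placeholder, and the collection-name prefix follows the ticket text; both may need adjusting for a particular deployment and server version.

        # Hedged sketch: measure the size of the persisted routing/filtering table
        # on a shard member. The connection string is a placeholder; the
        # collection-name prefix follows the ticket text and may differ by version.
        from pymongo import MongoClient

        # Connect directly to the shard member, not through mongos.
        client = MongoClient("mongodb://shard-node.example.net:27018/?directConnection=true")
        config_db = client["config"]

        # One cache collection exists per sharded namespace.
        cache_prefix = "system.cache.chunks."

        for name in config_db.list_collection_names():
            if name.startswith(cache_prefix):
                ns = name[len(cache_prefix):]
                # estimated_document_count() avoids scanning every document; with
                # millions of chunks, this is the data a restarted node must page in.
                count = config_db[name].estimated_document_count()
                print(f"{ns}: ~{count} chunk documents")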

      Whenever a shard node receives a request for the first time after a restart, the following steps occur (see the sketch after this list):

      1. Secondary asks the primary to refresh from the config server and write the most-recent routing info
      2. Secondary waits for the write performed by the primary to show up locally
      3. Secondary reads and materialises these collections into a std::map used by the OrphanFilter
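
      To make step (3) concrete, here is a simplified Python model of the materialisation, not the server's actual C++ code: it reads the persisted chunk documents for one namespace and builds an ordered in-memory structure of the ranges owned by this shard, analogous in spirit to the std::map consumed by the OrphanFilter. The namespace, shard id, and document field names ("_id" as the chunk's min bound, "max", "shard") are assumptions made for illustration.

        # Hedged model of step (3): materialise persisted chunk documents into an
        # in-memory, ordered collection of owned ranges. Not the server's C++ code;
        # the namespace, shard id, and field names are illustrative assumptions.
        from pymongo import MongoClient

        client = MongoClient("mongodb://shard-node.example.net:27018/?directConnection=true")
        chunks = client["config"]["system.cache.chunks.test.orders"]  # hypothetical namespace

        this_shard = "shard0001"  # placeholder shard id

        # Ordered (min, max) ranges owned by this shard; the server keeps an ordered
        # map so ownership of a shard-key value is a single lower_bound-style lookup.
        owned_ranges = []
        for doc in chunks.find().sort("_id", 1):  # assume "_id" holds the chunk's min bound
            if doc.get("shard") == this_shard:
                owned_ranges.append((doc["_id"], doc["max"]))

        print(f"materialised {len(owned_ranges)} owned ranges "
              f"out of {chunks.estimated_document_count()} chunk documents")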

      Because a freshly restarted node is cold and potentially has some replication lag, for customers with large routing tables (millions of chunks) and a high rate of writes, steps (1) and (2) above can take a very long time, which in turn leads to minutes of downtime for operations against that node.

      This is normally only a problem after a node crashes and re-joins as a secondary, and it impacts all secondary reads which hit that node.

      This ticket is a placeholder for finding a solution to, or mitigation of, this problem until we implement the changes to minimise the routing table size.



People

    • Assignee: Backlog - Sharding EMEA (backlog-server-sharding-emea [DO NOT USE])
    • Reporter: Kaloian Manassiev (kaloian.manassiev@mongodb.com)
    • Votes: 2
    • Watchers: 10
