Documentation / DOCS-15433

[SERVER] Secondary slowdown or hang due to pinned content

      Original Downstream Change Summary

      Document that maxNumberOfThreads is now capped at the number of available cores.
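
      A minimal sketch of the capping behavior described above, assuming Python; the value 16 is a hypothetical configured maximum, and the actual server-side logic is not shown in this ticket:

          import os

          # Hypothetical illustration: the effective thread-pool size is the
          # configured maxNumberOfThreads, limited by the available CPU cores.
          requested_max_threads = 16                    # assumed configured value
          available_cores = os.cpu_count() or 1
          effective_max_threads = min(requested_max_threads, available_cores)
          print(effective_max_threads)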

      Description of Linked Ticket

      We only advance the oldest timestamp at oplog batch boundaries. This means that all of the dirty content generated by applying the operations in a single batch is pinned in cache. If the batch is large enough and the operations are heavy enough, this dirty content can exceed eviction_dirty_trigger (default 20% of cache), and the rate of applying operations becomes dramatically slower because the server has to wait for the dirty data to be reduced below the threshold.
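
      A minimal monitoring sketch (not part of the ticket), assuming Python with pymongo and a mongod on localhost; it compares the dirty portion of the WiredTiger cache against the default 20% eviction_dirty_trigger. The serverStatus field names below match common WiredTiger output but may vary by server version:

          from pymongo import MongoClient

          client = MongoClient("mongodb://localhost:27017")
          cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]

          # Dirty bytes currently pinned in cache versus the configured cache size.
          dirty_bytes = cache["tracked dirty bytes in the cache"]
          cache_size = cache["maximum bytes configured"]
          dirty_ratio = dirty_bytes / cache_size

          if dirty_ratio > 0.20:
              print(f"dirty cache at {dirty_ratio:.1%}; eviction may be throttling oplog application")
          else:
              print(f"dirty cache at {dirty_ratio:.1%}")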

      A momentary slowdown on a secondary can trigger this: the node briefly lags, so the next batch it processes is unusually large, pushing dirty cache past the 20% trigger. That makes it lag even further, so the following batch is larger still, and so on. In extreme cases the node can become completely stuck because the full cache prevents a batch from completing and unpinning the data that is keeping the cache full.
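
      A toy simulation of that feedback loop (an illustration with made-up numbers, not actual server behavior): each batch covers the accumulated backlog, and a batch whose dirty footprint exceeds 20% of cache is applied at a reduced rate, so the lag compounds.

          # Toy model in arbitrary units: the apply rate halves once dirty data
          # exceeds 20% of cache, so an initial hiccup makes each batch larger
          # than the last.
          cache_size = 100.0
          incoming_rate = 10.0        # new oplog entries arriving per tick
          normal_apply_rate = 12.0    # entries applied per tick when healthy
          lag = 15.0                  # initial hiccup leaves a backlog

          for tick in range(5):
              batch = lag + incoming_rate             # next batch covers the backlog
              dirty_fraction = batch / cache_size     # dirty data pinned by the batch
              rate = normal_apply_rate if dirty_fraction <= 0.20 else normal_apply_rate / 2
              lag = max(0.0, batch - rate)            # backlog left after this tick
              print(f"tick {tick}: batch={batch:.0f} dirty={dirty_fraction:.0%} lag={lag:.0f}")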

      This can also occur if a secondary is taken offline for maintenance: when it comes back online and begins to catch up, it processes large batches that risk exceeding the eviction_dirty_trigger threshold, so it may apply operations at a much slower rate than a secondary that is keeping up and processing operations in small batches.

            Assignee:
            dave.cuthbert@mongodb.com Dave Cuthbert (Inactive)
            Reporter:
            backlog-server-pm Backlog - Core Eng Program Management Team
            Votes:
            0
            Watchers:
            4

              Resolved:
              1 year, 35 weeks ago