  Core Server / SERVER-34938

Secondary slowdown or hang due to content pinned in cache by single oplog batch

    • Minor Change
    • ALL
    • v4.9, v4.4, v4.2, v4.0, v3.6
    • Repl 2021-03-08, Repl 2021-04-05

      We only advance the oldest timestamp at oplog batch boundaries. This means that all dirty content generated by applying the operations in a single batch remains pinned in cache until the batch completes. If the batch is large enough and its operations heavy enough, this dirty content can exceed eviction_dirty_trigger (by default 20% of the cache), and the rate of applying operations becomes dramatically slower because the applier threads must wait for the dirty data to be reduced below the threshold.
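
      For reference, the dirty-cache pressure described above can be observed from the mongo shell using the standard WiredTiger cache statistics in serverStatus(). This is a minimal sketch; the comparison against 20% assumes the default eviction_dirty_trigger has not been changed:

          // Minimal sketch: estimate the dirty fraction of the WiredTiger cache
          // and compare it with the default eviction_dirty_trigger of 20%.
          var cache = db.serverStatus().wiredTiger.cache;
          var dirtyBytes = cache["tracked dirty bytes in the cache"];
          var maxBytes = cache["maximum bytes configured"];
          var dirtyPct = 100 * dirtyBytes / maxBytes;
          print("dirty cache: " + dirtyPct.toFixed(1) + "% (default trigger: 20%)");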

      This can be triggered by a momentary slowdown on a secondary: the node falls slightly behind, so the next batch it processes is unusually large and can push dirty content past 20% of the cache. That makes the node lag even further, so the following batch is larger still, and so on. In extreme cases the node can become completely stuck, because a full cache prevents the batch from completing, and only completing the batch would unpin the data that is keeping the cache full.
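
      To illustrate how one might watch for this feedback loop, the sketch below reports each secondary's apply lag from rs.status(); watching this alongside the dirty-cache percentage shows whether lag and dirty content are growing together. The script is illustrative only and assumes it is run against a replica set member:

          // Sketch: report how far each secondary's applied optime trails the
          // primary's. A gap that keeps growing together with rising dirty cache
          // is the feedback loop described above.
          var status = rs.status();
          var primary = status.members.filter(function (m) {
              return m.stateStr === "PRIMARY";
          })[0];
          if (!primary) {
              print("no primary found");
          } else {
              status.members.forEach(function (m) {
                  if (m.stateStr === "SECONDARY") {
                      var lagSecs = (primary.optimeDate - m.optimeDate) / 1000;
                      print(m.name + " apply lag: " + lagSecs + "s");
                  }
              });
          }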

      This can also occur when a secondary is taken offline for maintenance: when it comes back online and begins to catch up, it processes large batches that risk exceeding the dirty trigger threshold, so it may apply operations at a much slower rate than a secondary that is keeping up and processing operations in small batches.
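
      One possible way to reduce exposure, sketched below under assumptions: the replBatchLimitOperations server parameter caps the number of operations applied per oplog batch (default 5000), so lowering it limits how much dirty content a single batch can pin. The value 1000 is arbitrary for illustration, and whether the parameter can be changed at runtime may depend on the server version:

          // Sketch: cap operations per oplog batch so a single batch pins less
          // dirty content. 1000 is an illustrative value, not a recommendation;
          // runtime settability may vary by version (the parameter can also be
          // set at startup with --setParameter replBatchLimitOperations=1000).
          db.adminCommand({ setParameter: 1, replBatchLimitOperations: 1000 });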

        Attachments:
          1. batchsize-1.js (0.5 kB)
          2. batchsize-1.sh (2 kB)

            Assignee:
            m.maher@mongodb.com Moustafa Maher
            Reporter:
            bruce.lucas@mongodb.com Bruce Lucas (Inactive)
            Votes:
            6
            Watchers:
            78
