Core Server / SERVER-16247

Oplog declines in performance over time under WiredTiger

    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL

      This may be related to SERVER-16235, but I am filing it as a separate ticket: my understanding is that a workaround is in place to prevent SERVER-16235 from impacting the oplog, and the symptoms here are somewhat different, so this may be a distinct issue.

      Tested on a build from master this afternoon (365cca0c47566d192ca847f0b077cedef4b3430e).

      • Test on a single-node replica set repeatedly updates the document in a single-document collection (repro script below), generating a large volume of oplog entries while minimizing the work to perform each op, so that oplog performance dominates.
      • Yellow graph below shows declining performance over time (measured in number of updates done). The graph shows the same decline starting at the first oplog wraparound at about 90 k inserts as seen in SERVER-16235, but then it recovers and begins a cycle of repeated declines and recoveries. Superimposed on this is a longer-term downward trend, which possibly distinguishes this issue from SERVER-16235. It is not clear from this data whether the asymptote is >0.
      • Red graph shows that the decline goes away for the same test in a standalone instance, confirming that this is an oplog issue.
      • Blue graph shows that decline is not seen with mmapv1, confirming that this is a WT-specific issue.
      • Restarting mongod (not shown) resets the behavior back to time 0.
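
      Repro script (mongo shell):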

          db.c.drop()
          db.c.insert({_id: 0, i: 0})

          var count = 10000000  // total updates to run; value assumed, not given in the original
          var every = 10000     // print throughput every `every` updates
          var bulk = db.c.initializeOrderedBulkOp();
          var t = new Date()
          for (var i = 0; i <= count; i++) {
              if (i > 0 && i % every == 0) {
                  bulk.execute();
                  bulk = db.c.initializeOrderedBulkOp();
                  var tt = new Date()
                  print(i, Math.floor(every / (tt - t) * 1000))  // updates/sec over the last batch
                  t = tt
              }
              bulk.find({_id: 0}).updateOne({_id: 0, i: i})
          }
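
      As an aside, the wraparound point can be confirmed directly from the shell. A minimal sketch, assuming a replica set (where the oplog is the capped collection oplog.rs in the local database); once the first-entry timestamp begins advancing, the oplog has wrapped:

          var oplog = db.getSiblingDB("local").oplog.rs
          // capped-collection cap, i.e. the configured oplog size
          print("oplog max size (bytes):", oplog.stats().maxSize)
          // oldest and newest entries currently retained
          print("first entry:", oplog.find().sort({$natural: 1}).limit(1).next().ts)
          print("last entry: ", oplog.find().sort({$natural: -1}).limit(1).next().ts)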
      

        Attachments:
          1. bytes-currently-in-cache.png (242 kB)
          2. correlation.png (19 kB)
          3. oplog_insert.png (172 kB)
          4. oplog.png (19 kB)
          5. pages-evicted-because-exceed-in-memory-max.png (213 kB)
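
      The cache-related graphs above were taken from the WiredTiger section of serverStatus. A minimal sketch of sampling the two counters shown in the attachments (statistic names as they appear under db.serverStatus().wiredTiger.cache; the 1-second interval is an arbitrary choice):

          // Poll the two WiredTiger cache counters behind the attached graphs
          // once per second while the update loop runs in another shell.
          while (true) {
              var c = db.serverStatus().wiredTiger.cache
              print(new Date(),
                    c["bytes currently in the cache"],
                    c["pages evicted because they exceeded the in-memory maximum"])
              sleep(1000)
          }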

            Assignee: Alexander Gorrod (alexander.gorrod@mongodb.com)
            Reporter: Bruce Lucas (bruce.lucas@mongodb.com) (Inactive)
            Votes: 1
            Watchers: 21
