Core Server / SERVER-16919

Oplog can grow much larger than configured size under WiredTiger


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Fixed
    • Affects Version/s: 2.8.0-rc5
    • Fix Version/s: 3.0.0-rc6
    • Component/s: WiredTiger
    • Labels:
    • Backwards Compatibility:
      Fully Compatible
    • Operating System:
      ALL

      Description

      Under heavy insert or update load the oplog can grow much larger than the configured size, possibly indefinitely. I've triggered this with a few different workloads, sometimes reproducibly and sometimes not. Running 100 threads of the following against a 1-node replica set with a configured oplog size of 50 MB seems to do it fairly reliably:

      function repro(t) {
          // fresh collection, seeded with one document per worker
          db.c.drop()
          db.c.insert({_id: t, i: 0})

          // build a ~100 KB string so every insert produces a large oplog entry
          var big = ''
          for (var i = 0; i < 100000; i++)
              big += 'x'

          var count = 1000000   // documents to insert per worker
          var every = 100       // documents per bulk batch
          for (var i = 0; i < count; ) {
              var bulk = db.c.initializeOrderedBulkOp();
              for (var j = 0; j < every; j++, i++)
                  bulk.insert({x: big})
              try {
                  bulk.execute();
              } catch (e) {
                  print(t, 'OOPS')
              }
              // worker 1 periodically reports the current oplog size in MB
              if (t == 1 && i % 100 == 0)
                  print('MB', db.getSiblingDB('local').oplog.rs.stats(1024*1024).size)
          }
      }
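
      For reference, a minimal sketch of how this might be driven; the mongod flags and the way the 100 shells are launched below are assumptions, not taken from the report:

      // start mongod with a 50 MB oplog and a 5 GB WiredTiger cache, e.g.
      //   mongod --dbpath /data/rs0 --replSet rs0 --oplogSize 50 --wiredTigerCacheSizeGB 5
      // then turn it into a 1-node replica set from the mongo shell:
      rs.initiate()
      // each of the 100 workers is assumed to be a separate mongo shell process
      // running repro(t) with a distinct value of t, e.g.
      //   mongo --eval "load('repro.js'); repro(7)"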

      Oplog growth as reported by the above code for two different runs, in both cases going well past the configured 50 MB size:

      first run:
      MB 421
      MB 335
      MB 1381
      MB 2260

      second run:
      MB 263
      MB 989
      MB 1387
      MB 2417

      In both cases the cache size was 5 GB, but virtual memory grew to about 8 GB, and mongod was killed by the OOM killer.

      I think the issue is that there's no back-pressure on capped collection inserts: nothing throttles the inserts to keep them from outrunning the deletes that trim the collection back to its configured size.
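
      To observe the overshoot directly, one could compare the oplog's reported size against its configured cap; a minimal sketch, assuming collStats returns maxSize scaled the same way as size:

      // sketch: how far the oplog has grown past its configured cap, in MB
      var s = db.getSiblingDB('local').oplog.rs.stats(1024*1024)
      print('configured max MB:', s.maxSize, 'current MB:', s.size,
            'overshoot MB:', s.size - s.maxSize)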


              People

              Assignee:
              redbeard0531 Mathias Stearn
              Reporter:
              bruce.lucas Bruce Lucas
              Votes:
              0
              Watchers:
              12
