Core Server / SERVER-21553

Oplog grows to 3x configured size

    • Backwards Compatibility: Fully Compatible
    • Steps to reproduce:
      python buildscripts/resmoke.py --executor no_server repro_server21553.js
    • Build C (11/20/15)

      • 12 cores (24 logical CPUs), 64 GB memory
      • 1-member replica set
      • oplog size 5 GB, default cache
      • data on ssd, no journal
      • 5 threads inserting 1 KB random docs as fast as possible (see the attached repro_server21553.js; a sketch follows the observations below)
      • recent nightly build:
        2015-11-19T08:41:38.422-0500 I CONTROL  [initandlisten] db version v3.2.0-rc2-211-gbd58ea2
        2015-11-19T08:41:38.422-0500 I CONTROL  [initandlisten] git version: bd58ea2ba5d17b960981990bb97cab133d7e90ed
        

      • periods where the oplog size exceeded the configured size start at B and D
      • the oplog size recovered during the stall that started at C, but did not recover after D
      • the growth period starting at D appears to coincide with a cache-full condition, whereas the excess size starting at B looks like it may simply be related to the rate of inserts
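
      The insert workload itself is in the attached repro_server21553.js and benchRun.uncompressible rather than inline here. A minimal sketch of an equivalent workload, assuming benchRun from the mongo shell, a hypothetical test.c namespace, and a mongod started along the lines of mongod --replSet rs0 --oplogSize 5120 --nojournal:

        // Drive 5 parallel threads inserting documents that each carry ~1 KB
        // of random (uncompressible) string data, as fast as possible.
        var res = benchRun({
            host: "localhost:27017",
            parallel: 5,       // the 5 insert threads described above
            seconds: 600,      // long enough to cycle the 5 GB oplog
            ops: [{
                ns: "test.c",  // hypothetical namespace
                op: "insert",
                doc: {x: {"#RAND_STRING": [1024]}}  // ~1 KB random payload
            }]
        });
        printjson(res);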

        Attachments:
        1. 1.png (502 kB)
        2. 2.png (599 kB)
        3. 3.png (596 kB)
        4. 4.png (567 kB)
        5. benchRun.uncompressible (2 kB)
        6. diagnostic.data.tar (189 kB)
        7. monitorOplog (0.2 kB)
        8. oplog_overflow.js (1 kB)
        9. oplog-grows.png (156 kB)
        10. oplog-unbounded.png (174 kB)
        11. repro_server21553.js (0.8 kB)
        12. truncate_span.patch (5 kB)
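
        The attached monitorOplog script is likewise not reproduced here. A minimal sketch of how the oplog's size can be polled against its configured cap from the mongo shell (local.oplog.rs and the collStats size/maxSize fields are standard; the 1-second interval is an arbitrary choice):

            // Poll the oplog's data size once per second and report it as a
            // percentage of the configured cap (5 GB in the setup above).
            var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
            while (true) {
                var s = oplog.stats();  // s.size = data bytes, s.maxSize = configured cap
                print(new Date().toISOString() + " oplog " +
                      (s.size / 1024 / 1024).toFixed(0) + " MB = " +
                      (100 * s.size / s.maxSize).toFixed(0) + "% of configured size");
                sleep(1000);
            }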

            Assignee: Michael Cahill (michael.cahill@mongodb.com)
            Reporter: Bruce Lucas (bruce.lucas@mongodb.com)
            Votes: 0
            Watchers: 15
