Core Server / SERVER-32707

WiredTiger performance with the insert benchmark

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Minor - P4
    • Affects Version/s: 3.6.0
    • Component/s: WiredTiger
    • Labels: None
    • Product Performance

      I ran the insert benchmark for WiredTiger in MongoDB 3.6.0 and summarize the problems I see here. For more details on the insert benchmark, including a link to the source, see this link. The overview of the insert benchmark is:
      1) load collection(s) using N clients (N=16 in this case). Measure the average insert rate and response time distributions.
      2) do a full scan of each collection. Compute time for the scan.
      3) use N writer clients and N reader clients (N=16 in this case). Each writer is rate-limited to 1000 inserts/second; measure how fast the reader clients can do short range scans. Compute the average query rate and query response time distributions.
      4) same as #3, but the rate limit is 100 inserts/second per writer
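A writer client like the one in step 3 can be sketched with a simple sleep-based rate limiter. This is a hypothetical illustration, not the benchmark's actual code (which is behind the link above); `do_insert` is a stand-in for the real insert call against the collection.

```python
import time

class RateLimiter:
    """Caps an operation loop at ops_per_sec using sleep-based pacing."""
    def __init__(self, ops_per_sec, clock=time.monotonic, sleep=time.sleep):
        self.interval = 1.0 / ops_per_sec
        self.clock = clock
        self.sleep = sleep
        self.next_slot = clock()

    def wait(self):
        now = self.clock()
        if now < self.next_slot:
            self.sleep(self.next_slot - now)
        # Schedule the next op relative to the slot, not to "now", so a
        # short hiccup does not permanently lower the achieved rate.
        self.next_slot = max(self.next_slot, now) + self.interval

def writer_loop(limiter, do_insert, num_ops):
    """Run num_ops inserts, pacing each one through the limiter."""
    for _ in range(num_ops):
        limiter.wait()   # block until this op's slot
        do_insert()      # stand-in for the real insert against the collection
```

With `RateLimiter(1000)` each writer is held to roughly 1000 inserts/second, as in step 3; `RateLimiter(100)` gives the step 4 variant.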

      I have seen this in all of the major MongoDB releases (3.0, 3.2, 3.4) and now 3.6.0. This time I archived diagnostic data.

      I tried 9 mongo.conf variations for 4 test configurations:

      • inMemory-1 - cached database with 16 clients and 1 collection
      • inMemory-16 - cached database with 16 clients and 16 collections (collection per client)
      • ioBound-none - database larger than memory, 16 clients, no compression
      • ioBound-zlib - database larger than memory, 16 clients, zlib compression

      The test server has 24 cores and 48 HW threads. Hyperthreading is enabled. For the in-memory benchmarks the server has 256GB of RAM. For the IO-bound benchmarks the server has 50GB of RAM. The server also has 2 or 3 fast PCIe-based SSDs.

      The mongo.conf template for the in-memory benchmarks is below, and the comments at the end explain the 9 variations:

      processManagement:
        fork: true
      systemLog:
        destination: file
        path: /data/mysql/mmon360/log
        logAppend: true
      storage:
        syncPeriodSecs: 600
        dbPath: /data/mysql/mmon360/data
        journal:
          enabled: true
      
      operationProfiling.slowOpThresholdMs: 2000
      replication.oplogSizeMB: 4000
      
      storage.wiredTiger.collectionConfig.blockCompressor: none
      storage.wiredTiger.engineConfig.journalCompressor: none
      storage.wiredTiger.engineConfig.cacheSizeGB: 180
      
      storage.wiredTiger.engineConfig.configString: "eviction_dirty_target=60, eviction_dirty_trigger=80"
      
      # storage.wiredTiger.engineConfig.configString:
      # eviction_target=90,eviction_trigger=95,eviction_dirty_target=85,eviction=(threads_min=4,threads_max=8)
      # eviction_target=X
      # eviction_trigger=X
      # eviction_dirty_target=X
      # eviction_dirty_trigger=X
      # eviction=(threads_min=4,threads_max=4)
      # checkpoint=(log_size=1GB)
      
      # 1  - syncPeriodSecs=60, oplogSizeMB=4000
      # 2  - syncPeriodSecs=60, oplogSizeMB=16000
      # 3  - syncPeriodSecs=600, oplogSizeMB=16000
      # 4  - syncPeriodSecs=60, oplogSizeMB=16000, checkpoint=1g
      # 5  - syncPeriodSecs=600, oplogSizeMB=16000, checkpoint=1g
      # 6  - syncPeriodSecs=600, oplogSizeMB=4000
      # 7  - syncPeriodSecs=600, oplogSizeMB=4000, eviction_dirty_target=20, eviction_dirty_trigger=40
      # 8  - syncPeriodSecs=600, oplogSizeMB=4000, eviction=(threads_min=4,threads_max=8)
      # 9  - syncPeriodSecs=600, oplogSizeMB=4000, eviction_dirty_target=60, eviction_dirty_trigger=80
      
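The configString lines above use WiredTiger's key=value syntax: commas separate settings and parentheses group sub-settings, as in eviction=(threads_min=4,threads_max=8). A small parser sketch (purely illustrative, not part of the benchmark; WiredTiger parses these strings itself) shows how such a string decomposes:

```python
def split_top_level(s):
    """Split on commas that are not inside parentheses."""
    parts, depth, cur = [], 0, []
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        if ch == ',' and depth == 0:
            parts.append(''.join(cur).strip())
            cur = []
        else:
            cur.append(ch)
    if cur:
        parts.append(''.join(cur).strip())
    return parts

def parse_config_string(s):
    """Parse a WiredTiger-style configString into a (possibly nested) dict."""
    result = {}
    for item in split_top_level(s):
        key, _, value = item.partition('=')
        value = value.strip()
        if value.startswith('(') and value.endswith(')'):
            # Parenthesized value: recurse into the sub-settings.
            result[key.strip()] = parse_config_string(value[1:-1])
        else:
            result[key.strip()] = value
    return result
```

For example, "eviction_dirty_target=60, eviction_dirty_trigger=80" decomposes into two flat settings, while "eviction=(threads_min=4,threads_max=8)" yields one setting whose value is a nested group.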

      The mongo.conf template for the IO-bound tests is below. The big difference from the configuration above is that cacheSizeGB is reduced from 180 to 10. I won't paste mongo.conf for the test that used compression, but the change from the template below is obvious.

      processManagement:
        fork: true
      systemLog:
        destination: file
        path: /data/mysql/mmon360/log
        logAppend: true
      storage:
        syncPeriodSecs: 600
        dbPath: /data/mysql/mmon360/data
        journal:
          enabled: true
      
      operationProfiling.slowOpThresholdMs: 2000
      replication.oplogSizeMB: 4000
      
      storage.wiredTiger.collectionConfig.blockCompressor: none
      storage.wiredTiger.engineConfig.journalCompressor: none
      storage.wiredTiger.engineConfig.cacheSizeGB: 10
      
      storage.wiredTiger.engineConfig.configString: "eviction_dirty_target=60, eviction_dirty_trigger=80"
      
      # storage.wiredTiger.engineConfig.configString:
      # eviction_target=90,eviction_trigger=95,eviction_dirty_target=85,eviction=(threads_min=4,threads_max=8)
      # eviction_target=X
      # eviction_trigger=X
      # eviction_dirty_target=X
      # eviction_dirty_trigger=X
      # eviction=(threads_min=4,threads_max=4)
      # checkpoint=(log_size=1GB)
      
      # 1  - syncPeriodSecs=60, oplogSizeMB=4000
      # 2  - syncPeriodSecs=60, oplogSizeMB=16000
      # 3  - syncPeriodSecs=600, oplogSizeMB=16000
      # 4  - syncPeriodSecs=60, oplogSizeMB=16000, checkpoint=1g
      # 5  - syncPeriodSecs=600, oplogSizeMB=16000, checkpoint=1g
      # 6  - syncPeriodSecs=600, oplogSizeMB=4000
      # 7  - syncPeriodSecs=600, oplogSizeMB=4000, eviction_dirty_target=20, eviction_dirty_trigger=40
      # 8  - syncPeriodSecs=600, oplogSizeMB=4000, eviction=(threads_min=4,threads_max=8)
      # 9  - syncPeriodSecs=600, oplogSizeMB=4000, eviction_dirty_target=60, eviction_dirty_trigger=80
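To put the eviction_dirty_target / eviction_dirty_trigger variations in concrete terms: with values in this range they act as percentages of the configured cache size, so the absolute dirty-content thresholds can be worked out as below. This is an illustrative calculation under that percentage assumption, not benchmark code.

```python
def dirty_thresholds_gb(cache_size_gb, dirty_target_pct, dirty_trigger_pct):
    """Absolute dirty-content thresholds, assuming the eviction dirty
    target/trigger are percentages of the configured WiredTiger cache."""
    target = cache_size_gb * dirty_target_pct / 100.0   # background eviction aims below this
    trigger = cache_size_gb * dirty_trigger_pct / 100.0  # application threads throttled above this
    return target, trigger

# In-memory config (variation 9): 180GB cache with target=60, trigger=80
# gives a 108GB target and a 144GB trigger; the IO-bound config's 10GB
# cache gives 6GB and 8GB, so eviction pressure starts far earlier.
```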
      

      Attachments:
        1. metrics.2017-12-28T00-13-52Z-00000 (3.81 MB)
        2. metrics.2017-12-28T00-14-02Z-00000 (9.92 MB)
        3. metrics.2017-12-28T07-22-34Z-00000 (9.96 MB)
        4. metrics.2017-12-28T14-27-34Z-00000 (9.93 MB)
        5. metrics.2017-12-28T21-32-34Z-00000 (9.92 MB)
        6. metrics.2017-12-29T04-31-37Z-00000 (3.30 MB)
        7. metrics.2017-12-29T04-37-34Z-00000 (9.96 MB)
        8. metrics.2017-12-29T11-42-34Z-00000 (9.95 MB)
        9. metrics.2017-12-29T15-15-31Z-00000 (3.51 MB)
        10. metrics.2017-12-29T19-07-34Z-00000 (5.09 MB)
        11. metrics.interim (43 kB)
        12. metrics.interim (85 kB)
        13. metrics.interim (49 kB)
        14. metrics.interim (73 kB)

            Assignee: backlog-server-perf [DO NOT USE] Backlog - Performance Team
            Reporter: mdcallag Mark Callaghan
            Votes: 1
            Watchers: 23
