WiredTiger / WT-2511

Recent WT builds have a large perf regression as thread count increases in read-only workloads larger than cache


    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels:
      None

      Description

      Comparing the tip of develop (2.8.0) against fb8739f shows a very significant (4x) performance regression when running a read-only workload.

      The following wtperf config can be used to reproduce:

      conn_config="eviction_trigger=81,session_max=20000,cache_size=3GB,eviction=(threads_max=4),checkpoint=(wait=60,log_size=2GB),log=(enabled=true,archive=true,path=journal,compressor=snappy),statistics=(fast,clear),statistics_log=(wait=1)"
      table_config="type=file,leaf_page_max=32k,split_pct=90,memory_page_max=10M,leaf_value_max=64M"
      compression="snappy"
      icount=10000000
      report_interval=5
      table_count=2
      run_time=120
      populate_threads=1
      threads=((count=100,reads=1))
      sample_interval=5
      value_sz=10000
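
      A sketch of driving the reproduction: save the wtperf config above to a file, then point wtperf at it. The file name, home directory, and binary path below are illustrative assumptions, not from the ticket.

      ```shell
      # Write the reproduction config from the ticket to a file.
      cat > read-regression.wtperf <<'EOF'
      conn_config="eviction_trigger=81,session_max=20000,cache_size=3GB,eviction=(threads_max=4),checkpoint=(wait=60,log_size=2GB),log=(enabled=true,archive=true,path=journal,compressor=snappy),statistics=(fast,clear),statistics_log=(wait=1)"
      table_config="type=file,leaf_page_max=32k,split_pct=90,memory_page_max=10M,leaf_value_max=64M"
      compression="snappy"
      icount=10000000
      report_interval=5
      table_count=2
      run_time=120
      populate_threads=1
      threads=((count=100,reads=1))
      sample_interval=5
      value_sz=10000
      EOF
      # Run against a scratch home directory (needs a wtperf build with snappy):
      # ./wtperf -h WT_TEST -O read-regression.wtperf
      ```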
      

      Checking the stats, the biggest thing I noticed is that far more blocks (and bytes) appear to be read in during the read phase of this test. I've uploaded the wtstats.py outputs to this ticket for develop (good.html) and fb8739f (bad.html).
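      The kind of comparison described above can be sketched as a small script that totals a named statistic across log lines. The log-line format and all the numbers below are illustrative assumptions, not data from the attached files.

      ```python
      # Hypothetical sketch: total a cumulative statistic from statistics-log
      # style lines and compare a "good" run against a "bad" one. The sample
      # lines are made up for illustration.
      import re

      def total_for(lines, stat_name):
          """Sum the values of a named statistic across log lines."""
          total = 0
          for line in lines:
              # Match "... <value> <category>: <statistic name>".
              m = re.match(r".* (\d+) .*: (.+)$", line)
              if m and m.group(2) == stat_name:
                  total += int(m.group(1))
          return total

      good = [
          "Apr 01 12:00:00 1500 block-manager: blocks read",
          "Apr 01 12:00:01 1600 block-manager: blocks read",
      ]
      bad = [
          "Apr 01 12:00:00 6100 block-manager: blocks read",
          "Apr 01 12:00:01 6400 block-manager: blocks read",
      ]
      print(total_for(good, "blocks read"))  # 3100
      print(total_for(bad, "blocks read"))   # 12500
      ```

      With real runs, a much larger blocks-read total for the bad build would line up with the extra read I/O observed in the stats.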

        Attachments

        1. bad.html (1.08 MB)
        2. good.html (990 kB)

              People

               • Assignee: david.hows (David Hows)
               • Reporter: david.hows (David Hows)
               • Votes: 0
               • Watchers: 8
