WiredTiger / WT-3537

Split pages when nothing can be written

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 3.6.0-rc0, WT3.0.0
    • Affects Version/s: None
    • Component/s: None
    • Sprint: Storage 2017-09-11

      If a workload is doing random inserts across a large key range, using timestamps (any non-zero value is sufficient; the timestamps don't need to be in any particular order), and the oldest_timestamp doesn't move forward, WiredTiger has to keep all of the inserted data in cache.
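
      As a rough reproduction sketch only (the table name, cache size, key range, and hex timestamp encoding below are assumptions, not details from this ticket), a workload along these lines exercises the problem through the public C API: every insert commits at a non-zero timestamp while oldest_timestamp is never advanced, so all of the inserted data has to stay in cache.

          #include <stdint.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <wiredtiger.h>

          int
          main(void)
          {
              WT_CONNECTION *conn;
              WT_CURSOR *cursor;
              WT_SESSION *session;
              char key[32], ts_cfg[64];
              uint64_t i, ts;

              /* Error checking is omitted for brevity; the home directory is
               * assumed to exist and the sizes are illustrative. */
              wiredtiger_open("WT_HOME", NULL, "create,cache_size=1GB", &conn);
              conn->open_session(conn, NULL, NULL, &session);
              session->create(session, "table:ts_demo", "key_format=S,value_format=S");
              session->open_cursor(session, "table:ts_demo", NULL, NULL, &cursor);

              /* Random inserts over a large key range, each committed at a
               * non-zero timestamp. oldest_timestamp is never moved forward,
               * so none of the inserted data can leave the cache. */
              for (i = 0, ts = 1; i < 100000000; ++i, ++ts) {
                  session->begin_transaction(session, NULL);
                  snprintf(key, sizeof(key), "%020llu", (unsigned long long)
                      (((uint64_t)rand() << 31) ^ (uint64_t)rand()));
                  cursor->set_key(cursor, key);
                  cursor->set_value(cursor, "value");
                  cursor->insert(cursor);
                  snprintf(ts_cfg, sizeof(ts_cfg),
                      "commit_timestamp=%llx", (unsigned long long)ts);
                  session->timestamp_transaction(session, ts_cfg);
                  session->commit_transaction(session, NULL);
              }

              conn->close(conn, NULL);
              return (0);
          }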

      What currently happens is that a single page grows and may then go through a single in-memory split, which creates a second page. Once the original page grows large enough to reach the memory_page_max setting, readers will continually try to force it to be evicted. However, that goes through reconciliation, and since none of the data can be written into a page image, reconciliation does not attempt to split the page and either gives up on the eviction or does update/restore eviction without freeing any space.

      It would be better if we could split pages in this case. For example, if we were to split whenever we have more than 10% of memory_page_max in r->update_mem_saved, regardless of the size of the disk image, then this workload should be able to continue until the cache's dirty limit is reached, rather than stalling once a single page of data is loaded.
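
      A minimal standalone sketch of the proposed check (the struct and function names here are illustrative stand-ins rather than the actual reconciliation code; only the update_mem_saved and memory_page_max quantities and the 10% threshold come from the description above):

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          /* Illustrative stand-ins for reconciliation state and btree configuration. */
          struct reconcile_state {
              uint64_t update_mem_saved;    /* bytes of updates that cannot be written */
          };
          struct btree_config {
              uint64_t memory_page_max;     /* maximum in-memory page size, in bytes */
          };

          /*
           * Split a page during reconciliation even when none of its data can be
           * written into a disk image: force the split once the unwritable update
           * memory exceeds 10% of memory_page_max, regardless of how small the
           * reconciled image itself would be.
           */
          static bool
          should_force_split(const struct reconcile_state *r,
              const struct btree_config *btree)
          {
              return (r->update_mem_saved > btree->memory_page_max / 10);
          }

          int
          main(void)
          {
              struct btree_config btree = { .memory_page_max = 5 * 1024 * 1024 };
              struct reconcile_state r = { .update_mem_saved = 600 * 1024 };

              /* 600KB of unwritable updates against a 5MB page limit: split. */
              printf("force split: %s\n",
                  should_force_split(&r, &btree) ? "yes" : "no");
              return (0);
          }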

            Assignee:
            keith.bostic@mongodb.com Keith Bostic (Inactive)
            Reporter:
            michael.cahill@mongodb.com Michael Cahill (Inactive)
            Votes:
            0
            Watchers:
            6
