If a workload is doing random inserts across a large key range, using timestamps (any non-zero value is sufficient; they don't need to be in any order), and the oldest_timestamp doesn't move forward, WiredTiger has to keep all of the inserted data in cache.
What currently happens is that a single page grows and may go through one in-memory split, which creates a second page. Once the original page grows large enough to reach the memory_page_max setting, readers continually try to force it to be evicted. However, forced eviction goes through reconciliation, and since none of the data can be written into a page image, reconciliation does not attempt to split the page; it either gives up on the eviction or does update/restore eviction without freeing any space.
It would be better if we could split pages in this case. For example, if we split whenever more than 10% of memory_page_max is accounted for in r->update_mem_saved, regardless of the size of the disk image, this workload should be able to continue until the cache's dirty limit is reached, rather than stalling once a single page's worth of data is loaded.
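The proposed trigger can be sketched as a standalone predicate. This is only an illustration of the heuristic described above, not WiredTiger code: the function name should_split_in_memory and the threshold constant are hypothetical; update_mem_saved and memory_page_max correspond to r->update_mem_saved and the memory_page_max setting mentioned in this ticket.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical threshold: split once the memory held by updates that
 * reconciliation could not write out exceeds this percentage of
 * memory_page_max, regardless of the size of the disk image.
 */
#define SPLIT_UPDATE_MEM_PCT 10

/*
 * should_split_in_memory --
 *     Sketch of the proposed check: return true if the saved update
 *     memory alone justifies splitting the page.
 */
static bool
should_split_in_memory(uint64_t update_mem_saved, uint64_t memory_page_max)
{
    return (update_mem_saved > memory_page_max / 100 * SPLIT_UPDATE_MEM_PCT);
}
```

With the default memory_page_max of 5MB, this would trigger a split once roughly 512KB of update memory cannot be written out, long before the page itself hits the eviction-forcing size.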