The attached repro script does the following:
- Inserts 1M documents with 10 indexes over fields that are random strings
- Stops one secondary, then removes all documents via one of the indexes (therefore in random order).
- Restarts that secondary. While it is catching up (after the script completes), batches will be large due to SERVER-34938 and will take several minutes to complete, so multiple checkpoints run while timestamps are pinning content in cache. It appears that additional unevictable clean content is generated at each checkpoint, exacerbating the effects of SERVER-34938.
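The insert workload can be sketched as follows. This is a minimal illustration, not the attached script; the field names, string length, and document shape are assumptions. Each document carries 10 independent random-string fields, one per index, which is why deleting via any single index visits documents in random order relative to the others:

```python
import random
import string

N_FIELDS = 10   # one secondary index per field (assumed)
STR_LEN = 16    # random-string length is an assumption

def random_string(rng, length=STR_LEN):
    """Build a random lowercase string of the given length."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def make_document(rng, doc_id):
    # Each indexed field gets an independent random string, so the
    # ordering of documents under any one index is uncorrelated with
    # the ordering under the others (and with insertion order).
    doc = {"_id": doc_id}
    for i in range(N_FIELDS):
        doc["f%d" % i] = random_string(rng)
    return doc

if __name__ == "__main__":
    rng = random.Random(42)
    doc = make_document(rng, 0)
    print(sorted(doc.keys()))
```

In the real repro, 1M such documents would be inserted and an index created on each of the `f0`..`f9` fields before the secondary is stopped.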
- Batch boundaries are at A, E, and F.
- At each checkpoint (e.g. B, C, D), the amount of clean content increases; this is presumably unevictable because the oldest timestamp is only updated between batches.
- At batch boundaries, when the oldest timestamp is advanced, cache content drops, presumably because it becomes evictable.
The above was run on a machine with 24 CPUs and plenty of memory, but the script uses numactl to limit each instance to two CPUs and caps the cache at 4 GB (during setup) and 1 GB (during recovery), so it should be runnable on a more modest machine.
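The resource limits described above could be applied with an invocation along these lines. This is a sketch of the setup phase only; the exact flags used by the attached script are not shown here, and the dbpath and CPU choice are assumptions:

```shell
# Pin this mongod instance to two CPUs and cap the WiredTiger cache
# at 4 GB for setup; the recovery phase would restart with
# --wiredTigerCacheSizeGB 1 instead.
numactl --physcpubind=0,1 \
    mongod --dbpath /data/repro --wiredTigerCacheSizeGB 4
```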