Salvage can use a lot of memory when it builds the final root page; see
WT-5437 for an example where it tied down the better part of a GB. While there were exacerbating factors in that ticket (a large number of overflow keys), salvaging a file with a sufficient number of pages will have the same effect.
The problem is that bulk-load, rebalance and salvage all build a single large internal (root) page in memory, which is eventually reconciled and split into multiple internal pages when the operation completes. Reconciliation already has code to handle splitting large pages, which is why this approach was used. However, the single internal page can become unacceptably large while it is being built, tying down large amounts of cache.
I can think of two approaches to the problem: we could either write internal page key/address pairs to a backing file and then read them back during reconciliation to build the internal pages, or we could build internal pages that can be individually evicted as the operation proceeds and are coalesced up the tree into a single root page. The first approach sounds easier to me (and would likely work for both bulk-load and salvage), but I haven't done any investigation.