Because WiredTiger does page-wide key prefix compression (each key is prefix-compressed against the immediately previous key on the page), cursor-backward walks can be very slow: each key on the page may have to be instantiated by rolling forward from the first key on the page through every intermediate key.
To avoid this behavior, when walking a cursor backward through a page, we instantiate a selected number of keys distributed throughout the page, so that each roll forward to instantiate a key is limited to a small number of keys. The keys chosen for instantiation are the ones a binary search of the page would have to instantiate (presumably in case the cursor-backward walk is followed by normal tree-search behavior).
It would be simpler and faster to instantiate every Nth key while reading forward through the page. The recent prefix-compression work in
WT-7234 should also be taken into consideration: keys that can be quickly built from the page's largest group of prefix-compressed keys don't need to be instantiated at all.
Additionally, there's no reason to do this work if the page doesn't have long runs of prefix-compressed keys: when reading a page into memory, we should check whether the work is even useful.
Finally, we should never instantiate overflow keys.