Major - P3
I've been running a workload that does random inserts. The keys are 24 bytes and the values are 225 bytes.
The page size is 4k. After running for a while I end up with a ~7GB database. The distribution of key/value pairs on the disk pages is:
| Page count | Number of pairs |
|------------|-----------------|
That says to me: a page can fit up to 15 pairs. There are ~1.1 million leaf pages in total, and 290 thousand of those hold fewer than half the possible pairs.
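For reference, the 15-pairs-per-page figure is consistent with a small amount of per-page and per-entry overhead. The specific overhead sizes below (16 bytes each) are assumptions for illustration, not the engine's actual layout:

```python
PAGE_SIZE = 4096
KEY_SIZE, VALUE_SIZE = 24, 225

# hypothetical layout costs; the real engine's header/slot sizes may differ
PAGE_HEADER = 16
ENTRY_OVERHEAD = 16  # per-pair node header / slot pointer

raw = PAGE_SIZE // (KEY_SIZE + VALUE_SIZE)
with_overhead = (PAGE_SIZE - PAGE_HEADER) // (KEY_SIZE + VALUE_SIZE + ENTRY_OVERHEAD)
print(raw, with_overhead)  # 16 15
```

So with no overhead 16 pairs would fit, and any plausible per-entry bookkeeping brings that down to the observed 15.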
Ideally all of the pages would be at least half full. Pages created with only a few entries widen the span of the tree, and they are relatively unlikely to be read back in and have content added to them later (i.e. they are likely to waste space indefinitely).
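A rough lower bound on the space those under-filled pages waste, assuming a page under half full leaves at least half of its 4k unused:

```python
PAGE_SIZE = 4096
UNDERFULL_PAGES = 290_000  # leaf pages holding fewer than half the possible pairs

# each such page leaves at least half its bytes unused, so this is conservative
wasted = UNDERFULL_PAGES * (PAGE_SIZE // 2)
print(f"at least {wasted / 2**30:.2f} GiB unused")  # at least 0.55 GiB unused
```

That is over half a GiB of the ~7GB file sitting idle in under-filled leaves alone, before counting the slack in the merely-average pages.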
There are also a lot of pages that are very full, which is bad for this workload: since we are evicting aggressively, those pages keep being read back in, updated, and split into two unequal pages.
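To see why unequal splits produce this shape, here is a toy model (not the engine's actual split logic): pages hold up to 15 pairs, each insert lands on a uniformly random page as a crude stand-in for a uniformly random key, and a full page splits either evenly (7/8) or very unequally (3/12, a caricature of splitting at an insertion point near a page edge). In this model the unequal policy leaves a much larger fraction of pages under half full:

```python
import random

def simulate(n_inserts, cap=15, unequal=False, seed=0):
    """Toy B-tree leaf model: `pages` is a list of per-page fill counts."""
    rng = random.Random(seed)
    pages = [1]
    for _ in range(n_inserts):
        i = rng.randrange(len(pages))          # crude stand-in for a random key
        if pages[i] == cap:                    # page is full: split before insert
            left = 3 if unequal else cap // 2  # 3/12 vs 7/8 split
            pages.append(cap - left)
            pages[i] = left
            i = rng.choice([i, len(pages) - 1])
        pages[i] += 1
    under_half = sum(1 for p in pages if p < (cap + 1) // 2)
    return under_half / len(pages)

print(f"even splits:    {simulate(100_000):.0%} of pages under half full")
print(f"unequal splits: {simulate(100_000, unequal=True):.0%} of pages under half full")
```

The intuition: after a 3/12 split, the small page spends a long time (fills 3 through 7) below the half-full line, while after a 7/8 split a page is under half full only momentarily.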