  WiredTiger / WT-9631

Improve progress messages for compact

    • StorEng - Refinement Pipeline

      Summary

      With verbose=[compact_progress:1] enabled, compact outputs a progress message roughly every 20 seconds:

      [1658589049:839890][188874:0x7f01be853740], wt, file:collection-1--2947465233479438924.wt, WT_SESSION.compact: [WT_VERB_COMPACT_PROGRESS][DEBUG]:  compacting collection-1--2947465233479438924.wt for 15120 seconds; reviewed 10726680 pages, skipped 10726680 pages, rewritten 0pages

      (Note that there is currently a bug in managing the frequency of these messages. See WT-9607.)
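
      For context, a minimal sketch of how an application can enable this verbose category and run compact through the public API (the home directory and file URI below are placeholders, and error handling is omitted):

      #include <stddef.h>
      #include <wiredtiger.h>

      int
      main(void)
      {
          WT_CONNECTION *conn;
          WT_SESSION *session;

          /* Enable compact progress messages at wiredtiger_open time. */
          (void)wiredtiger_open("WT_HOME", NULL,
              "create,verbose=[compact_progress:1]", &conn);
          (void)conn->open_session(conn, NULL, NULL, &session);

          /* Progress messages are emitted while this call runs. */
          (void)session->compact(session, "file:collection-1.wt", NULL);

          (void)conn->close(conn, NULL);
          return (0);
      }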

      The problem with these messages is that the user doesn't know how many pages need to be reviewed or rewritten for compact to complete its work. So other than providing increasing numbers as a way to reassure the user that compact is doing something, the messages don't provide much value.

      Ideally, we would determine the total number of pages compact will have to review and move, and report these numbers as percentages, possibly including the page counts as a way of showing how much work each percentage point represents.

      There are two problems with this.

      First, I don't think we have access to the total number of pages in an on-disk BTree (maybe?). There is a straightforward workaround, however: we could report these metrics as byte counts rather than page counts. At the beginning of a pass, compact computes the amount of allocated space in the last 10% of the file, so that is a target for how much data it should rewrite. Compact can get the size of each on-disk page from the address cookie; much of this work happens in the block manager, which is already cracking the address cookies to determine whether a block should be relocated. We can do the same for the pages reviewed, comparing the cumulative size reviewed to the size of the most recent checkpoint.
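
      To make the byte-count idea concrete, here is a rough sketch of how percentage-based messages could be derived. Everything here is hypothetical: the struct and function names do not exist in the tree; the pass target would come from compact's existing last-10% computation and the per-page sizes from the address cookies the block manager already unpacks:

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical per-pass progress state; these fields do not exist today. */
      struct compact_progress {
          uint64_t ckpt_size;        /* Size of the most recent checkpoint */
          uint64_t bytes_to_rewrite; /* Allocated bytes in the last 10% of the file */
          uint64_t bytes_reviewed;   /* Cumulative size of on-disk pages reviewed */
          uint64_t bytes_rewritten;  /* Cumulative size of pages relocated */
      };

      /* Express reviewed/rewritten work as percentages of the known targets. */
      static void
      compact_progress_msg(const struct compact_progress *cp)
      {
          double reviewed_pct, rewritten_pct;

          reviewed_pct = cp->ckpt_size == 0 ?
              0.0 : 100.0 * (double)cp->bytes_reviewed / (double)cp->ckpt_size;
          rewritten_pct = cp->bytes_to_rewrite == 0 ?
              100.0 : 100.0 * (double)cp->bytes_rewritten / (double)cp->bytes_to_rewrite;

          printf("reviewed %.1f%% of the checkpoint (%" PRIu64 " bytes), "
              "rewritten %.1f%% of the pass target (%" PRIu64 " of %" PRIu64 " bytes)\n",
              reviewed_pct, cp->bytes_reviewed,
              rewritten_pct, cp->bytes_rewritten, cp->bytes_to_rewrite);
      }

      /* Example: halfway through reviewing, a quarter of the target rewritten. */
      int
      main(void)
      {
          struct compact_progress cp = {
              .ckpt_size = 1000000, .bytes_to_rewrite = 100000,
              .bytes_reviewed = 500000, .bytes_rewritten = 25000};

          compact_progress_msg(&cp);
          return (0);
      }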

      The second problem is harder. What I described above works if compact makes a single pass through a file, but in fact compact will typically make many passes. Each pass aims to move live data out of the last 10% of the target file, so a file with a lot of free space could require several passes to fully compact. In addition, whenever a compact pass discovers a checkpoint running in the same tree, the pass gives up and restarts later. So in a large tree, where a full pass takes a while, compact could require dozens of restarted passes before it finishes. The current progress reporting keeps accumulating the counts of pages reviewed and rewritten across all of these passes and restarts. This makes the numbers even less meaningful, and the user can't even use a rough guess at the number of pages in the file to help gauge progress.
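
      One option for the multi-pass case is to reset the counters at the start of every pass (including restarts) and report per-pass percentages along with a pass number, so each message refers to a bounded amount of work. A sketch, again with purely hypothetical names:

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical per-pass counters, reset whenever a pass starts or restarts. */
      struct compact_pass {
          uint32_t pass_num;         /* 1-based pass number, counting restarts */
          uint64_t bytes_rewritten;  /* Bytes rewritten in this pass only */
          uint64_t bytes_to_rewrite; /* This pass's target from the 10% computation */
      };

      /* Begin a new pass: bump the pass number and reset the per-pass counters. */
      static void
      compact_pass_start(struct compact_pass *cp, uint64_t target)
      {
          ++cp->pass_num;
          cp->bytes_rewritten = 0;
          cp->bytes_to_rewrite = target;
      }

      /* Report progress relative to the current pass, not an ever-growing total. */
      static void
      compact_pass_msg(const struct compact_pass *cp)
      {
          double pct;

          pct = cp->bytes_to_rewrite == 0 ?
              100.0 : 100.0 * (double)cp->bytes_rewritten / (double)cp->bytes_to_rewrite;
          printf("pass %" PRIu32 ": rewritten %" PRIu64 " of %" PRIu64
              " bytes (%.1f%% of this pass's target)\n",
              cp->pass_num, cp->bytes_rewritten, cp->bytes_to_rewrite, pct);
      }

      /* Example: a first pass interrupted by a checkpoint, then a second pass. */
      int
      main(void)
      {
          struct compact_pass cp = {0};

          compact_pass_start(&cp, 100000);
          cp.bytes_rewritten = 40000;
          compact_pass_msg(&cp);

          compact_pass_start(&cp, 80000);   /* Restarted after a checkpoint. */
          cp.bytes_rewritten = 80000;
          compact_pass_msg(&cp);
          return (0);
      }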

      I'm not sure of the best way to present progress in light of this second problem, but I think we can come up with something better than what we do today.

            Assignee: Peter Macko (peter.macko@mongodb.com)
            Reporter: Keith Smith (keith.smith@mongodb.com)
            Votes: 2
            Watchers: 7
