  WiredTiger / WT-7286

Avoid bucket walking when gathering handles for checkpoint

    • Type: Improvement
    • Resolution: Won't Do
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: None
    • Component/s: None
    • Labels: None
    • Story Points: 8
    • Sprint: Storage - Ra 2021-04-19, Storage - Ra 2021-05-03

      In WT-6421, part of sue.loverso's analysis indicated that checkpoint, when gathering handles, unnecessarily walks the hash buckets three times for every table, even though we should not, and do not, expect to find our dhandle in the list. These three full-bucket-list walks come from __session_get_dhandle with the URI:WiredTigerCheckpoint.### form (a simplified sketch of this path follows the list below):

      1. The checkpoint thread calls __session_find_dhandle with the URI and checkpoint name and does not find it in the session's local dhandle cache.
      2. Since it is not found, __session_find_shared_dhandle is called. That call walks the connection's hash bucket for the URI and checkpoint by calling __wt_conn_dhandle_find. Again, this is a newly generated checkpoint name, so the dhandle should not be there.
      3. __session_find_shared_dhandle then calls __wt_conn_dhandle_alloc after taking the handle-list write lock, to allocate and insert a dhandle for the URI:WiredTigerCheckpoint.### name. __wt_conn_dhandle_alloc walks the hash bucket list yet again to make sure we did not race another thread inserting the dhandle while acquiring the write lock.
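
      Below is a minimal, self-contained C sketch of that three-walk lookup path. It is a model for illustration only, not the real WiredTiger code: the struct dhandle, the bucket_find and bucket_insert helpers, and the lock comments are invented stand-ins for the session dhandle cache, the connection hash buckets, and the handle-list lock described above.

      #include <stdlib.h>
      #include <string.h>

      /* Invented stand-in for a data handle living in a hash bucket. */
      struct dhandle {
          const char *uri;        /* e.g. "file:foo.wt" */
          const char *checkpoint; /* e.g. "WiredTigerCheckpoint.33" */
          struct dhandle *next;   /* next entry in the same bucket/list */
      };

      /* Walk one linked list looking for a matching uri/checkpoint pair. */
      static struct dhandle *
      bucket_find(struct dhandle *head, const char *uri, const char *ckpt)
      {
          struct dhandle *dh;

          for (dh = head; dh != NULL; dh = dh->next)
              if (strcmp(dh->uri, uri) == 0 &&
                  strcmp(dh->checkpoint, ckpt) == 0)
                  return (dh);
          return (NULL);
      }

      /* Allocate a new dhandle and prepend it to the bucket. */
      static struct dhandle *
      bucket_insert(struct dhandle **headp, const char *uri, const char *ckpt)
      {
          struct dhandle *dh;

          if ((dh = calloc(1, sizeof(*dh))) == NULL)
              return (NULL);
          dh->uri = uri;
          dh->checkpoint = ckpt;
          dh->next = *headp;
          *headp = dh;
          return (dh);
      }

      /*
       * The three walks listed above, in order. For a freshly generated
       * internal checkpoint name, every search comes up empty.
       */
      static struct dhandle *
      get_dhandle(struct dhandle *session_cache, struct dhandle **conn_bucket,
          const char *uri, const char *ckpt)
      {
          struct dhandle *dh;

          /* Walk 1: the session's local dhandle cache -- a miss. */
          if ((dh = bucket_find(session_cache, uri, ckpt)) != NULL)
              return (dh);

          /* Walk 2: the connection's hash bucket -- also a miss. */
          if ((dh = bucket_find(*conn_bucket, uri, ckpt)) != NULL)
              return (dh);

          /* ... take the handle-list write lock here in the real code ... */

          /* Walk 3: re-check the bucket in case another thread raced us. */
          if ((dh = bucket_find(*conn_bucket, uri, ckpt)) != NULL)
              return (dh);

          /* Only now allocate and insert the new dhandle. */
          return (bucket_insert(conn_bucket, uri, ckpt));
      }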

      sue.loverso suggests that if we are the checkpoint thread (via WT_SESSION_IS_CHECKPOINT()) and the checkpoint string is WT_CHECKPOINT, then we can skip those searches and go straight to allocation and insertion. (In diagnostic mode we may want to assert, once after acquiring the lock, that the dhandle does not exist in the list.)
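
      A sketch of what that shortcut might look like, reusing the model types and helpers from the sketch above; this is not a real or proposed patch (the issue was resolved Won't Do). The is_checkpoint_session flag stands in for the WT_SESSION_IS_CHECKPOINT() test, CKPT_PREFIX stands in for the WT_CHECKPOINT string, HAVE_DIAGNOSTIC is used as the diagnostic-build gate, and whether the checkpoint-name comparison should be exact or a prefix match is left open here.

      #include <assert.h>
      #include <stdbool.h>
      #include <string.h>

      /* Stand-in for WT_CHECKPOINT ("WiredTigerCheckpoint"). */
      #define CKPT_PREFIX "WiredTigerCheckpoint"

      static struct dhandle *
      get_dhandle_fast(struct dhandle *session_cache,
          struct dhandle **conn_bucket, const char *uri, const char *ckpt,
          bool is_checkpoint_session)
      {
          /*
           * If we are the checkpoint thread creating an internal checkpoint
           * handle, the dhandle cannot already exist, so skip the searches
           * and go straight to the locked allocation/insertion step.
           */
          if (is_checkpoint_session &&
              strncmp(ckpt, CKPT_PREFIX, strlen(CKPT_PREFIX)) == 0) {
              /* ... take the handle-list write lock here in the real code ... */
      #ifdef HAVE_DIAGNOSTIC
              /* Verify the assumption once, under the lock. */
              assert(bucket_find(*conn_bucket, uri, ckpt) == NULL);
      #endif
              return (bucket_insert(conn_bucket, uri, ckpt));
          }

          /* Otherwise fall back to the full three-walk path sketched earlier. */
          return (get_dhandle(session_cache, conn_bucket, uri, ckpt));
      }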

            Assignee: Ravi Giri (ravi.giri@mongodb.com)
            Reporter: Sulabh Mahajan (sulabh.mahajan@mongodb.com)
            Votes: 0
            Watchers: 5

              Created:
              Updated:
              Resolved: