- Type: Bug
- Resolution: Won't Fix
- Priority: Major - P3
- Affects Version/s: None
- Component/s: Cache and Eviction
- Storage Engines, Storage Engines - Transactions
- SE Transactions - 2025-11-07
/*
* We depend on the atomic operation being a release barrier, that is, a barrier to ensure all
* changes to the page are written before updating the page state and/or marking the tree dirty,
* otherwise checkpoints and/or page reconciliation might be looking at a clean page/tree.
*
* Every time the page transitions from clean to dirty, update the cache and transactional
* information.
*
* The page state can only ever be incremented above dirty by the number of concurrently running
* threads, so the counter will never approach the point where it would wrap.
*
* Increase the dirty cache size before performing the compare-and-swap operation when the dirty
* cache size is low. This ensures the checkpoint does not reconcile and clean the page before
* the dirty cache size is incremented, as this could otherwise result in the dirty cache size
* going negative. Note that the checkpoint can only clean the page if it belongs to the
* metadata or the history store.
*/
size = __wt_atomic_load_size_relaxed(&page->memory_footprint);
if (WT_UNLIKELY(!WT_PAGE_IS_INTERNAL(page) &&
      __wt_atomic_load_uint32_relaxed(&page->modify->page_state) == WT_PAGE_CLEAN &&
      __wt_atomic_load_uint64_relaxed(&S2C(session)->cache->pages_dirty_leaf) < 10 &&
      (WT_IS_METADATA(session->dhandle) || WT_IS_DISAGG_META(session->dhandle) ||
        WT_IS_HS(session->dhandle)))) {
    increase_dirty_size_first = true;
    __wt_cache_dirty_incr_size(session, size, false);
} else
    increase_dirty_size_first = false;
Based on the comment, we should ensure memory_footprint is read after the atomic operation has been performed, not before it as it is now.
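
For illustration, here is a minimal C11-atomics sketch of the reordering the description suggests. This is an assumption-laden model, not WiredTiger code: the struct layout, the PAGE_CLEAN/PAGE_DIRTY_FIRST constants, and page_mark_dirty are hypothetical stand-ins for page->modify->page_state and page->memory_footprint.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the page fields involved. */
struct page {
    atomic_uint page_state;         /* 0 == clean, >= 1 == dirty */
    atomic_size_t memory_footprint; /* page's in-memory size */
};

#define PAGE_CLEAN 0u
#define PAGE_DIRTY_FIRST 1u

static bool
page_mark_dirty(struct page *page, size_t *sizep)
{
    unsigned int expected = PAGE_CLEAN;

    /*
     * Acquire-release CAS: the release half is the barrier the comment
     * depends on (all prior writes to the page are published before the
     * state changes); the acquire half keeps the load below from being
     * reordered ahead of the CAS.
     */
    if (!atomic_compare_exchange_strong_explicit(&page->page_state,
          &expected, PAGE_DIRTY_FIRST, memory_order_acq_rel,
          memory_order_relaxed))
        return (false); /* Another thread dirtied the page first. */

    /* Sample the footprint only after the transition has succeeded. */
    *sizep = atomic_load_explicit(&page->memory_footprint,
        memory_order_relaxed);
    return (true);
}

Note that a plain release CAS would not be enough to pin the read: release ordering only constrains the writes sequenced before it, so a later relaxed load could still be reordered ahead of the operation. The acquire half is what guarantees the footprint is read after the clean-to-dirty transition.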