Ingest btree reconciliation wrongly removes data from the history store



      We have seen history store pages being written to the SLS on the follower, which should never happen. After some debugging, I was surprised to find that ingest btree reconciliation writes data to the history store, and the dirtied history store pages are then written to the SLS.

      #0  0x0000ffffaeb347b4 in __pthread_kill_implementation () from /lib64/libc.so.6
      #1  0x0000ffffaeaeb3a0 [PAC] in raise () from /lib64/libc.so.6
      #2  0x0000aaaac3833cb8 [PAC] in mongo::endProcessWithSignal (signalNum=signalNum@entry=6) at ./src/mongo/util/signal_handlers_synchronous.cpp:431
      #3  0x0000aaaac38337c0 in abruptQuit (signalNum=6) at ./src/mongo/util/signal_handlers_synchronous.cpp:256
      #4  <signal handler called>
      #5  0x0000ffffaeb347b4 in __pthread_kill_implementation () from /lib64/libc.so.6
      #6  0x0000ffffaeaeb3a0 [PAC] in raise () from /lib64/libc.so.6
      #7  0x0000ffffaead7264 [PAC] in abort () from /lib64/libc.so.6
      #8  0x0000aaaac03b21d4 [PAC] in __wt_abort (session=0x10707d601da0) at ./src/third_party/wiredtiger/src/os_common/os_abort.c:32
      #9  0x0000aaaac0350b60 in __curhs_remove (cursor=<optimized out>) at ./src/third_party/wiredtiger/src/cursor/cur_hs.c:1187
      #10 0x0000aaaac03daf20 in __rec_hs_delete_reinsert_from_pos (session=<optimized out>, hs_cursor=0x107074fe7180, btree_id=<optimized out>, key=<optimized out>, ts=0,
          ts@entry=281473349637936, reinsert=false, no_ts_tombstone=true, error_on_ts_ordering=<optimized out>, counter=counter@entry=0xffff9f04d740)
          at ./src/third_party/wiredtiger/src/reconcile/rec_hs.c:229
      #11 0x0000aaaac03da0c0 in __wti_rec_hs_delete_key (session=session@entry=0x10707d601da0, hs_cursor=0x107074fe7180, btree_id=9785, key=key@entry=0x1070412ca3c0, reinsert=false,
          error_on_ts_ordering=<optimized out>) at ./src/third_party/wiredtiger/src/reconcile/rec_hs.c:1255
      #12 0x0000aaaac03d6ca0 in __wti_rec_hs_clear_on_tombstone (session=session@entry=0x10707d601da0, r=r@entry=0x10707a5c4100, recno=<optimized out>, rowkey=<optimized out>,
          reinsert=false) at ./src/third_party/wiredtiger/src/reconcile/rec_write.c:3474
      #13 0x0000aaaabd6b5588 in __wti_rec_row_leaf (session=session@entry=0x10707d601da0, r=r@entry=0x10707a5c4100, pageref=pageref@entry=0x1070491d35c0, salvage=salvage@entry=0x0)
          at ./src/third_party/wiredtiger/src/reconcile/rec_row.c:1285
      #14 0x0000aaaabd6b33d8 in __reconcile (session=0x10707d601da0, ref=0x1070491d35c0, salvage=0x0, flags=800, page_lockedp=0xffff9f04df0c)
          at ./src/third_party/wiredtiger/src/reconcile/rec_write.c:309
      #15 __wt_reconcile (session=<optimized out>, session@entry=0x10707d601da0, ref=ref@entry=0x1070491d35c0, salvage=0x0, flags=<optimized out>)
          at ./src/third_party/wiredtiger/src/reconcile/rec_write.c:127
      #16 0x0000aaaabd6af3e4 in __evict_reconcile (session=0x10707d601da0, ref=0x1070491d35c0, evict_flags=0) at ./src/third_party/wiredtiger/src/evict/evict_page.c:1268
      #17 __wt_evict (session=session@entry=0x10707d601da0, ref=ref@entry=0x1070491d35c0, previous_state=previous_state@entry=3 '\003', flags=<optimized out>, flags@entry=2667899936)
          at ./src/third_party/wiredtiger/src/evict/evict_page.c:434
      #18 0x0000aaaac03818a4 in __evict_page (session=session@entry=0x10707d601da0, is_server=<optimized out>) at ./src/third_party/wiredtiger/src/evict/evict_lru.c:3093
      #19 0x0000aaaac0382edc in __evict_lru_pages (session=session@entry=0x10707d601da0, is_server=false) at ./src/third_party/wiredtiger/src/evict/evict_lru.c:1418
      #20 0x0000aaaac03807bc in __evict_thread_run (session=0x10707d601da0, thread=0x10707f63c280) at ./src/third_party/wiredtiger/src/evict/evict_lru.c:355
      
      (gdb) p btree->dhandle->name
      $5 = 0x10705fc69840 "file:index-5f5c89aa-8de2-44bd-837b-11e24bd18094.wt_ingest"
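
      The dhandle name ending in ".wt_ingest" confirms that the page being evicted
      and reconciled belongs to an ingest btree. Below is a minimal sketch of the
      kind of guard that could keep ingest reconciliation out of the history store;
      the flag name WT_BTREE_INGEST and the helper __rec_hs_updates_allowed are
      assumptions for illustration, not the actual classification mechanism:

      #include "wt_internal.h"

      /*
       * Sketch only: decide whether reconciliation of this btree may touch the
       * history store. WT_BTREE_INGEST is a hypothetical flag; a real fix might
       * instead inspect the dhandle or the disaggregated-storage configuration.
       */
      static bool
      __rec_hs_updates_allowed(WT_SESSION_IMPL *session)
      {
          WT_BTREE *btree;

          btree = S2BT(session);

          /* A follower must never dirty the shared history store. */
          return (!F_ISSET(btree, WT_BTREE_INGEST));
      }

      An early return based on such a check at the top of
      __wti_rec_hs_clear_on_tombstone would also cover the __wti_rec_hs_delete_key
      path seen in the stack above.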
      

            Assignee: Chenhao Qu
            Reporter: Chenhao Qu
