The GDB debug script `tools/gdb/gdb_scripts/wt_debug_script_update.py` mishandles WT_UPDATE_TOMBSTONE entries in `dump_update_chain()`.
## Problem
```python
val_bytes = gdb.selected_inferior().read_memory(wt_val['data'], wt_val['size'])
can_bson = wt_val['type'] == 3
if can_bson:
    try:
        obj = bson.decode_all(val_bytes)[0]
        ...
    except:
        pass
print(' ' + '\n '.join(str(wt_val).split('\n')) + " " + str(obj) + " =>")
```
Two issues:
- `read_memory(wt_val['data'], wt_val['size'])` is called unconditionally. For WT_UPDATE_TOMBSTONE (type == 4) and WT_UPDATE_RESERVE (type == 2), `size` is 0 and `data` is NULL or uninitialized, so the read is unsafe and any bytes it returns are meaningless.
- The printed line gives no indication of the WT_UPDATE type, so a tombstone prints identically to a STANDARD value whose BSON decode failed (both show `None`). While walking an update chain in gdb, the operator cannot tell a delete apart from a value that simply failed to decode.
WT_UPDATE type values (from `src/include/btmem.h`):
| value | name |
|---|---|
| 0 | WT_UPDATE_INVALID |
| 1 | WT_UPDATE_MODIFY |
| 2 | WT_UPDATE_RESERVE |
| 3 | WT_UPDATE_STANDARD |
| 4 | WT_UPDATE_TOMBSTONE |
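The table above can be captured as a small lookup helper in the script. This is a sketch only; `update_type_name` and `WT_UPDATE_TYPE_NAMES` are illustrative names, not part of the existing script:

```python
# Names mirror the WT_UPDATE type table from src/include/btmem.h.
# WT_UPDATE_TYPE_NAMES and update_type_name are hypothetical helpers.
WT_UPDATE_TYPE_NAMES = {
    0: "INVALID",
    1: "MODIFY",
    2: "RESERVE",
    3: "STANDARD",
    4: "TOMBSTONE",
}

def update_type_name(utype):
    """Return a printable name for a WT_UPDATE type value."""
    return WT_UPDATE_TYPE_NAMES.get(int(utype), "UNKNOWN(%d)" % int(utype))
```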
## Fix
- Guard the `read_memory` call with `size > 0` so tombstone and reserve entries skip the read.
- Decode BSON only when `type == WT_UPDATE_STANDARD` and `size > 0`.
- Prefix each printed update with `[MODIFY]` / `[RESERVE]` / `[STANDARD]` / `[TOMBSTONE]`, derived from `wt_val['type']`.
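A minimal sketch of the guarded logic, assuming plain callables stand in for `gdb.selected_inferior().read_memory(...)` and `bson.decode_all(...)[0]` so it can be exercised outside a gdb session; `describe_update` is a hypothetical helper, not the script's actual structure:

```python
WT_UPDATE_STANDARD = 3

# Prefixes keyed by WT_UPDATE type value (from src/include/btmem.h).
TYPE_PREFIX = {
    1: "[MODIFY]",
    2: "[RESERVE]",
    3: "[STANDARD]",
    4: "[TOMBSTONE]",
}

def describe_update(utype, size, read_value, decode):
    """Sketch of the fixed logic: read memory only when size > 0, and
    attempt a BSON decode only for STANDARD updates with a value.

    read_value() stands in for gdb's read_memory(data, size);
    decode() stands in for bson.decode_all(...)[0].
    """
    prefix = TYPE_PREFIX.get(int(utype), "[INVALID]")
    obj = None
    val_bytes = None
    if size > 0:
        # Safe: tombstone/reserve entries have size == 0 and never get here.
        val_bytes = read_value()
    if val_bytes is not None and int(utype) == WT_UPDATE_STANDARD:
        try:
            obj = decode(val_bytes)
        except Exception:
            obj = None  # value present but not decodable as BSON
    return prefix, obj
```

With the prefix in place, a tombstone and a non-decodable STANDARD value both carry `obj is None` but are no longer printed identically.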
The `import bson` at the top of the file is intentionally left as a hard import: a silent fallback would let the operator believe the script "decoded nothing" rather than realize that pymongo is missing.
## Affected files
- `tools/gdb/gdb_scripts/wt_debug_script_update.py`