Description
When the cache is small, we can allow internal pages to grow so large that the cache becomes stuck.
The test/format configuration at the bottom of this report ended up with a cache containing 4 pages: two metadata pages (which can't be evicted) and two internal pages in a tree:
0x98f790: row-store internal
    disk 0x98f710, entries 7, dirty, disk-alloc, write generation=2
    0x7fa9880b5e70: row-store internal
        disk 0x7fa98806b000, entries 4981, dirty, disk-alloc, write generation=6
The configured cache size is 2MB and these 4 pages take up 2.5MB. The second internal page, with nearly 5000 entries, is so large that we can no longer make progress.
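For reference, a minimal sketch of the kind of setup that hits this: a connection opened with a 2MB cache and a row-store table created with small page maximums. The home directory, table name and the specific page-size values here are illustrative assumptions, not an exact translation of the test/format CONFIG below.

/*
 * Minimal sketch (not the test/format harness itself): open a connection
 * with a deliberately tiny cache and create a row-store table with small
 * page maximums.  The home directory, table name and size values are
 * illustrative assumptions, not taken from the CONFIG below.
 */
#include <stdlib.h>
#include <wiredtiger.h>

int
main(void)
{
    WT_CONNECTION *conn;
    WT_SESSION *session;

    /* A 2MB cache: a handful of large pages is enough to fill it. */
    if (wiredtiger_open("WT_HOME", NULL, "create,cache_size=2MB", &conn) != 0)
        return (EXIT_FAILURE);
    if (conn->open_session(conn, NULL, NULL, &session) != 0)
        return (EXIT_FAILURE);

    /*
     * Small page maximums: an in-memory internal page that keeps
     * accumulating children can still grow well past these limits
     * before it is reconciled, which is the problem described above.
     */
    if (session->create(session, "table:example",
        "key_format=S,value_format=S,"
        "internal_page_max=16KB,leaf_page_max=8KB") != 0)
        return (EXIT_FAILURE);

    return (conn->close(conn, NULL) == 0 ? EXIT_SUCCESS : EXIT_FAILURE);
}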
############################################
# RUN PARAMETERS
############################################
abort=0
auto_throttle=1
firstfit=0
bitcnt=8
bloom=1
bloom_bit_count=7
bloom_hash_count=14
bloom_oldest=0
cache=2
checkpoints=1
checksum=uncompressed
chunk_size=3
compaction=0
compression=bzip-raw
data_extend=0
data_source=table
delete_pct=28
dictionary=0
evict_max=2
file_type=row-store
backups=0
huffman_key=0
huffman_value=0
insert_pct=3
internal_key_truncation=1
internal_page_max=16
isolation=snapshot
key_gap=3
key_max=72
key_min=28
leak_memory=0
leaf_page_max=13
logging=0
logging_archive=0
logging_prealloc=0
lsm_worker_threads=4
merge_max=20
mmap=1
ops=100000
prefix_compression=1
prefix_compression_min=6
repeat_data_pct=3
reverse=0
rows=100000
runs=100
split_pct=67
statistics=1
statistics_server=0
threads=16
timer=20
value_max=3805
value_min=16
wiredtiger_config=
write_pct=59
############################################
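Since statistics are enabled for the run (statistics=1 above), one way to watch the cache fill up is to poll the connection statistics cursor and compare the cache byte counts against the 2MB limit. A minimal sketch, assuming statistics were turned on in the wiredtiger_open configuration (the exact statistics settings and the value type returned by the cursor vary by release):

/*
 * Sketch: dump the connection-level cache statistics so the cache byte
 * counts can be compared against the configured cache size.
 */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <wiredtiger.h>

static int
print_cache_stats(WT_SESSION *session)
{
    WT_CURSOR *cursor;
    const char *desc, *pvalue;
    int64_t value;
    int ret;

    if ((ret = session->open_cursor(
        session, "statistics:", NULL, NULL, &cursor)) != 0)
        return (ret);

    while ((ret = cursor->next(cursor)) == 0 &&
        (ret = cursor->get_value(cursor, &desc, &pvalue, &value)) == 0)
        if (strstr(desc, "cache:") != NULL)     /* cache counters only */
            printf("%s=%s\n", desc, pvalue);

    (void)cursor->close(cursor);
    return (ret == WT_NOTFOUND ? 0 : ret);
}

In the state described above, the bytes-in-cache counter would sit around 2.5MB against the 2MB limit while eviction makes no progress.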