WiredTiger / WT-371

test/format failure

    • Type: Task
    • Resolution: Done
    • Fix Version/s: WT1.3.5
    • Affects Version/s: None
    • Component/s: None
    • Labels: None

      Running the following (LSM) config with test/format causes a segfault with the current develop tree.

      ops=10000
      rows=100000
      cache=100
      threads=1
      runs=1
      data_source=lsm
      reverse=0
      bzip=0
      value_min=10
      dictionary=1
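
      For reference, a rough API-level equivalent of what this config exercises is sketched below. The URI, key/value formats, and exact configuration strings are assumptions (test/format generates its own schema); the dictionary=500 value is taken from the metadata string visible in the stack trace.

      #include <stdio.h>
      #include <wiredtiger.h>

      int
      main(void)
      {
          WT_CONNECTION *conn;
          WT_SESSION *session;
          WT_CURSOR *cursor;
          char key[32], value[64];
          int i, ret;

          /* cache=100 (MB), single run, single worker thread. */
          ret = wiredtiger_open("WT_TEST", NULL, "create,cache_size=100MB", &conn);
          ret = conn->open_session(conn, NULL, NULL, &session);

          /* data_source=lsm, dictionary compression enabled. */
          ret = session->create(session, "lsm:wt",
              "key_format=S,value_format=S,dictionary=500");

          /*
           * Bulk-load phase: rows=100000, short values of at least 10 bytes
           * (value_min=10), repeated so the dictionary has duplicates to share.
           */
          ret = session->open_cursor(session, "lsm:wt", NULL, NULL, &cursor);
          for (i = 0; i < 100000; ++i) {
              (void)snprintf(key, sizeof(key), "%010d", i);
              (void)snprintf(value, sizeof(value), "value-%04d", i % 100);
              cursor->set_key(cursor, key);
              cursor->set_value(cursor, value);
              if ((ret = cursor->insert(cursor)) != 0)
                  break;
          }
          ret = conn->close(conn, NULL);
          return (ret);
      }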

      The stack trace is:

      Program received signal EXC_BAD_ACCESS, Could not access memory.
      Reason: KERN_INVALID_ADDRESS at address: 0x0000000001d0219d
      [Switching to process 20106 thread 0x1203]
      0x0000000100040b8b in __wt_cell_unpack_safe (cell=0x1d0219d, unpack=0x1004f0508, end=0x101d02be8 "nt.1=(addr=\"01c20d81e457796bb9c20e81e458908464808080808080e3047fc0e3047bc083\",order=1,time=1350598867,size=302080)),checksum=,collator=,columns=,dictionary=500,huffman_key=,huffman_value=,internal_ite"...) at cell.i:494
      494		if (cell->__chunk[0] & WT_CELL_VALUE_SHORT) {
      (gdb) where
      #0  0x0000000100040b8b in __wt_cell_unpack_safe (cell=0x1d0219d, unpack=0x1004f0508, end=0x101d02be8 "nt.1=(addr=\"01c20d81e457796bb9c20e81e458908464808080808080e3047fc0e3047bc083\",order=1,time=1350598867,size=302080)),checksum=,collator=,columns=,dictionary=500,huffman_key=,huffman_value=,internal_ite"...) at cell.i:494
      #1  0x0000000100040f9c in __wt_cell_unpack_safe (cell=0x1d0219d, unpack=0x1004f0508, end=0x101d02be8 "nt.1=(addr=\"01c20d81e457796bb9c20e81e458908464808080808080e3047fc0e3047bc083\",order=1,time=1350598867,size=302080)),checksum=,collator=,columns=,dictionary=500,huffman_key=,huffman_value=,internal_ite"...) at cell.i:578
      #2  0x000000010003ffb6 in __verify_dsk_row (session=0x100833e30, addr=0x1001c57a8 "[write-check]", dsk=0x101d01000) at bt_vrfy_dsk.c:162
      #3  0x000000010003fd0e in __wt_verify_dsk (session=0x100833e30, addr=0x1001c57a8 "[write-check]", buf=0x101a11918) at bt_vrfy_dsk.c:114
      #4  0x000000010001564f in __wt_block_write_off (session=0x100833e30, block=0x101006e00, buf=0x101a11918, offsetp=0x1004f07f8, sizep=0x1004f07f4, cksump=0x1004f07f0, locked=0) at block_write.c:99
      #5  0x0000000100015554 in __wt_block_write (session=0x100833e30, block=0x101006e00, buf=0x101a11918, addr=0x1004f08d5 "", addr_size=0x1004f09d4) at block_write.c:50
      #6  0x0000000100011b2a in __wt_bm_write (session=0x100833e30, buf=0x101a11918, addr=0x1004f08d5 "", addr_size=0x1004f09d4) at block_mgr.c:297
      #7  0x000000010004c649 in __rec_split_write (session=0x100833e30, r=0x101a11910, bnd=0x10275fa00, buf=0x101a11918, checkpoint=0) at rec_write.c:1243
      #8  0x000000010004bfbe in __rec_split_finish (session=0x100833e30, r=0x101a11910) at rec_write.c:1112
      #9  0x0000000100052c17 in __rec_row_leaf (session=0x100833e30, r=0x101a11910, page=0x100601070, salvage=0x0) at rec_write.c:3050
      #10 0x000000010004a9c1 in __wt_rec_write (session=0x100833e30, page=0x100601070, salvage=0x0, flags=1) at rec_write.c:447
      #11 0x0000000100029a9f in __evict_file_request (session=0x100833e30, syncop=1) at bt_evict.c:439
      #12 0x0000000100029884 in __evict_file_request_walk (session=0x100833800) at bt_evict.c:400
      #13 0x000000010002925b in __evict_worker (session=0x100833800) at bt_evict.c:224
      #14 0x0000000100028f76 in __wt_cache_evict_server (arg=0x100833800) at bt_evict.c:167
      #15 0x00007fff8ebb3742 in _pthread_start ()
      #16 0x00007fff8eba0181 in thread_start ()
      (gdb) 
      

      Disabling dictionary support makes the failure go away. The failure happens during the bulk-load phase.
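
      A hedged sketch of the toggle, reusing the assumed URI from the sketch above: with the dictionary configured, repeated values on a page are shared; with it disabled (the default), every value is written in full and the bulk load completes cleanly.

      /* Dictionary enabled: the configuration that hits the failure. */
      ret = session->create(session, "lsm:wt",
          "key_format=S,value_format=S,dictionary=500");

      /* Dictionary disabled (the default): no failure observed. */
      ret = session->create(session, "lsm:wt",
          "key_format=S,value_format=S,dictionary=0");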

      I've also seen a call stack where the __cell_unpack_safe method ends up in an infinite recursion loop.
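
      That recursion would be consistent with how the dictionary stores repeated values: a later cell holds an offset back to an earlier copy of the value, and the unpack code follows that offset (the end parameter to the _safe variant in the trace looks like a bounds check for exactly this kind of walk). The following is a schematic sketch only, not WiredTiger's actual cell format or code, showing how a corrupt or self-referential copy offset yields either a wild pointer (the EXC_BAD_ACCESS above) or unbounded recursion.

      #include <stdint.h>

      #define CELL_VALUE      0x00    /* plain value cell (hypothetical tags) */
      #define CELL_VALUE_COPY 0x01    /* "copy" cell: payload is a page offset */

      struct cell {
          uint8_t type;               /* cell type tag */
          uint64_t copy_offset;       /* for CELL_VALUE_COPY: offset of the original */
      };

      struct unpack {
          const struct cell *cell;    /* cell the value actually lives in */
      };

      /*
       * unpack_cell --
       *     Follow copy cells back to the original value.  With a corrupt
       * copy_offset this either dereferences garbage or, if the bytes at the
       * bogus offset happen to decode as another copy cell, never terminates.
       */
      static int
      unpack_cell(const uint8_t *page, const struct cell *cell, struct unpack *up)
      {
          if (cell->type == CELL_VALUE_COPY)
              return (unpack_cell(page,
                  (const struct cell *)(page + cell->copy_offset), up));

          up->cell = cell;
          return (0);
      }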

            Assignee: Keith Bostic (keith.bostic@mongodb.com)
            Reporter: Alexander Gorrod (alexander.gorrod@mongodb.com)
            Votes: 0
            Watchers: 1
