WT-796: Deadlock with fill100K benchmark

    • Type: Task
    • Resolution: Done
    • Fix Version/s: WT2.2
    • Affects Version/s: None
    • Component/s: None
    • Labels: None

      I am not sure whether this is a LevelDB issue or a WiredTiger issue, but the fill100K benchmark seems to deadlock when launched with many threads (>= 16). With a lower number of threads, the deadlock does not seem to occur.

      Here is the command I use:

      time env LD_LIBRARY_PATH=../wiredtiger/.libs:../wiredtiger/ext/compressors/snappy/.libs/ ./db_bench_wiredtiger --cache_size=17179869184 --threads=64 --db=/tmpfs/leveldb --benchmarks=fill100K &> /dev/null
      
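      If it helps, the >= 16 threshold can be checked by sweeping the thread count with the same flags. This is only a sketch reusing the command above; the fresh-database step and the 600-second cutoff are assumptions I made for the sweep, not something tuned:

      # Sketch: run fill100K at increasing thread counts and flag runs that do not finish.
      # Assumes the same build layout and /tmpfs/leveldb scratch directory as the command above.
      for t in 1 2 4 8 16 32 64; do
          rm -rf /tmpfs/leveldb && mkdir -p /tmpfs/leveldb   # start from a fresh database each run
          echo "threads=$t"
          timeout 600 env LD_LIBRARY_PATH=../wiredtiger/.libs:../wiredtiger/ext/compressors/snappy/.libs/ \
              ./db_bench_wiredtiger --cache_size=17179869184 --threads=$t \
              --db=/tmpfs/leveldb --benchmarks=fill100K > /dev/null 2>&1 \
              || echo "threads=$t did not finish (timed out or failed)"
      done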

      I am using the develop branch of WiredTiger (commit 72020c57fbd6673ad45d32f6f07572e7cd819aac).

      Some debug information; I have not yet looked at the code (a sketch of how to collect such traces follows the backtraces):

      • Most threads are waiting in a select() call. Here is the backtrace:
      #1  0x00007fb6e92ccef8 in __wt_sleep (seconds=<optimized out>, micro_seconds=<optimized out>) at src/os_posix/os_sleep.c:22
      #2  0x00007fb6e92c059f in __clsm_put (position=0, value=<optimized out>, key=<optimized out>, clsm=0x7fb630000d80, session=0x657050)
          at src/lsm/lsm_cursor.c:1115
      #3  __clsm_insert (cursor=0x7fb630000d80) at src/lsm/lsm_cursor.c:1197
      #4  0x000000000040898a in leveldb::Benchmark::DoWrite(leveldb::(anonymous namespace)::ThreadState*, bool) ()
      #5  0x000000000040939e in leveldb::Benchmark::ThreadBody(void*) ()
      #6  0x00000000004337fa in leveldb::(anonymous namespace)::StartThreadWrapper(void*) ()
      #7  0x00007fb6e881fe9a in start_thread (arg=0x7fb6c77de700) at pthread_create.c:308
      
      • Three threads are waiting in pthread_cond_timedwait:
      #1  0x00007fb6e92cbc8f in __wt_cond_wait (session=0x650690, cond=0x6728a0, usecs=<optimized out>) at src/os_posix/os_mtx.c:89
      #2  0x00007fb6e92785a9 in __wt_cache_evict_server (arg=0x650690) at src/btree/bt_evict.c:172
      #3  0x00007fb6e881fe9a in start_thread (arg=0x7fb6e7e3f700) at pthread_create.c:308
      
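      In case it is useful to others, stacks like the ones above can be dumped from the hung process with gdb. A minimal sketch, assuming the benchmark binary is still named db_bench_wiredtiger:

      # Sketch: attach gdb to the hung benchmark and dump every thread's backtrace.
      # The pgrep pattern is an assumption; substitute the actual PID if it differs.
      gdb -batch -p "$(pgrep -nf db_bench_wiredtiger)" -ex "thread apply all bt" > all-backtraces.txt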

      Hope this helps.

      (Note: this deadlock is not really an issue for me; currently I am just looking for workloads that really stress the memory subsystem.)

            Assignee: Unassigned
            Reporter: Baptiste Lepers
