Type: Bug
Resolution: Won't Do
Priority: Major - P3
Affects Version/s: 2.8.0-rc3
Component/s: Storage, WiredTiger
Storage Engines
ALL
This might be the same as SERVER-16356, but that ticket doesn't have enough detail and was closed as fixed without explanation. The problem is that 12 hours after the mongod instance became idle, it is still using 3 CPU cores (each of the 3 cores is about 40% busy). The server isn't doing much IO per iostat:
# iostat -kx 10 | grep fioa
Device:  rrqm/s  wrqm/s  r/s   w/s    rkB/s  wkB/s   avgrq-sz  avgqu-sz  await  svctm  %util
fioa     0.00    0.00    0.00  12.30  0.00   231.20  37.59     0.00      0.07   0.00   0.00
fioa     0.00    0.00    0.00   8.10  0.00   128.80  31.80     0.00      0.17   0.00   0.00
fioa     0.00    0.00    0.00  24.00  0.00   474.80  39.57     0.00      0.78   0.00   0.00
fioa     0.00    0.00    0.00   4.40  0.00    86.80  39.45     0.00      0.02   0.00   0.00
fioa     0.00    0.00    0.00   4.00  0.00    57.20  28.60     0.00      0.03   0.00   0.00
fioa     0.00    0.00    0.00   5.10  0.00    60.80  23.84     0.00      0.00   0.00   0.00
fioa     0.00    0.00    0.10  12.50  0.40   107.20  17.08     0.00      0.01   0.00   0.00
fioa     0.00    0.00    0.00   5.20  0.00    62.80  24.15     0.00      0.04   0.00   0.00
The top-N sources of CPU usage per "perf record ...":
79.89%  mongod  libc-2.17.so        [.] __strcmp_sse42
13.54%  mongod  mongod              [.] __wt_conn_dhandle_close_all
 0.85%  mongod  [kernel.kallsyms]   [k] update_sd_lb_stats
 0.25%  mongod  [kernel.kallsyms]   [k] idle_cpu
 0.25%  mongod  [kernel.kallsyms]   [k] find_next_bit
 0.24%  mongod  [kernel.kallsyms]   [k] _raw_spin_lock
 0.23%  mongod  mongod              [.] __wt_session_get_btree
 0.23%  mongod  mongod              [.] __config_getraw.isra.0
 0.21%  mongod  [kernel.kallsyms]   [k] cpumask_next_and
 0.18%  mongod  libpthread-2.17.so  [.] __lll_lock_wait
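The exact perf invocation is elided above; a minimal sketch of how a profile like this can be collected against a running mongod (assuming perf from linux-tools is installed and MONGOD_PID holds the server's pid, both of which are assumptions, not details from the ticket):

# Sample on-CPU call stacks of the mongod process at 99 Hz for 30 seconds.
perf record -p "$MONGOD_PID" -g -F 99 -- sleep 30

# Summarize the recording by command, shared object, and symbol,
# which produces a listing similar to the one above.
perf report --stdio --sort comm,dso,symbol | head -20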
A thread stack sample shows the busy threads:
2 threads here --> __lll_lock_wait,_L_lock_943,pthread_mutex_lock,__wt_spin_lock,__lsm_drop_file,__wt_lsm_free_chunks,__lsm_worker_general_op,__lsm_worker,start_thread,clone
1 thread here --> __strcmp_sse42,__wt_conn_dhandle_close_all,__drop_file,__wt_schema_drop,__lsm_drop_file,__wt_lsm_free_chunks,__lsm_worker_general_op,__lsm_worker,start_thread,clone
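The ticket doesn't say how the stack sample was taken; one way to grab a comparable per-thread sample from a live mongod is with gdb in batch mode or eu-stack (assuming gdb or elfutils is installed and MONGOD_PID holds the server's pid, both assumptions for illustration):

# Attach briefly, dump a backtrace of every thread, then detach.
gdb -p "$MONGOD_PID" --batch -ex "thread apply all bt" > mongod-stacks.txt

# eu-stack (from elfutils) is a lighter-weight alternative.
eu-stack -p "$MONGOD_PID" > mongod-stacks.txt

Counting identical backtraces in the resulting file gives the "N threads here --> ..." style summary shown above.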