Details
Type: Bug
Resolution: Done
Priority: Major - P3
Affects Version/s: 2.6.8
Operating System: Linux
Description
I'm facing the following issue after MongoDB did not shut down cleanly. I also tried starting with the --repair parameter, but the repair process exits with the same error message.
mongod --shardsvr --profile 0 --nojournal --bind_ip 192.168.10.11 --port 20002 --dbpath /home/shard2 --replSet crawler2
2015-05-29T22:53:42.682-0700 [initandlisten] MongoDB starting : pid=29125 port=20002 dbpath=/home/shard2 64-bit host=Crawler1-web1
2015-05-29T22:53:42.683-0700 [initandlisten]
2015-05-29T22:53:42.683-0700 [initandlisten] ** WARNING: You are running on a NUMA machine.
2015-05-29T22:53:42.683-0700 [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
2015-05-29T22:53:42.683-0700 [initandlisten] ** numactl --interleave=all mongod [other options]
2015-05-29T22:53:42.683-0700 [initandlisten]
2015-05-29T22:53:42.683-0700 [initandlisten] db version v2.6.8
2015-05-29T22:53:42.683-0700 [initandlisten] git version: 3abc04d6d4f71de00b57378e3277def8fd7a6700
2015-05-29T22:53:42.683-0700 [initandlisten] build info: Linux build5.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2015-05-29T22:53:42.683-0700 [initandlisten] allocator: tcmalloc
2015-05-29T22:53:42.683-0700 [initandlisten] options: { net: { bindIp: "192.168.10.11", port: 20002 }, operationProfiling: { mode: "off" }, replication: { replSet: "crawler2" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/home/shard2", journal: { enabled: false } } }
2015-05-29T22:53:42.734-0700 [initandlisten] waiting for connections on port 20002
2015-05-29T22:53:42.738-0700 [rsStart] replSet I am 192.168.10.11:20002
2015-05-29T22:53:42.740-0700 [rsHealthPoll] replset info 192.168.10.13:20002 thinks that we are down
2015-05-29T22:53:42.740-0700 [rsHealthPoll] replSet member 192.168.10.13:20002 is up
2015-05-29T22:53:42.740-0700 [rsHealthPoll] replSet member 192.168.10.13:20002 is now in state SECONDARY
2015-05-29T22:53:42.740-0700 [rsHealthPoll] replset info 192.168.10.12:20002 thinks that we are down
2015-05-29T22:53:42.740-0700 [rsHealthPoll] replSet member 192.168.10.12:20002 is up
2015-05-29T22:53:42.740-0700 [rsHealthPoll] replSet member 192.168.10.12:20002 is now in state PRIMARY
2015-05-29T22:53:42.751-0700 [rsStart] local.oplog.rs Fatal Assertion 16968
2015-05-29T22:53:42.764-0700 [rsStart] local.oplog.rs 0x1205431 0x11a7229 0x1189d5d 0xf0479d 0xf048e4 0xf42b53 0xf42fa5 0xaa2a4c 0xd8237f 0xa4b65e 0xe7942f 0xe7bdb3 0xe85406 0x1249dc9 0x33188079d1 0x33184e88fd
mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x1205431]
mongod(_ZN5mongo10logContextEPKc+0x159) [0x11a7229]
mongod(_ZN5mongo13fassertFailedEi+0xcd) [0x1189d5d]
mongod() [0xf0479d]
mongod(_ZNK5mongo13ExtentManager13getPrevRecordERKNS_7DiskLocE+0x24) [0xf048e4]
mongod(_ZN5mongo14CappedIterator13getNextCappedEPKNS_16NamespaceDetailsEPKNS_13ExtentManagerERKNS_7DiskLocENS_20CollectionScanParams9DirectionE+0x63) [0xf42b53]
mongod(_ZN5mongo14CappedIterator7getNextEv+0x115) [0xf42fa5]
mongod(_ZN5mongo14CollectionScan4workEPm+0xfc) [0xaa2a4c]
mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8237f]
mongod(_ZN5mongo7Helpers7getLastEPKcRNS_7BSONObjE+0x9e) [0xa4b65e]
mongod(_ZN5mongo11ReplSetImpl21loadLastOpTimeWrittenEb+0x7f) [0xe7942f]
mongod(_ZN5mongo11ReplSetImpl3_goEv+0x543) [0xe7bdb3]
mongod(_ZN5mongo13startReplSetsEPNS_14ReplSetCmdlineE+0x56) [0xe85406]
mongod() [0x1249dc9]
/lib64/libpthread.so.0() [0x33188079d1]
/lib64/libc.so.6(clone+0x6d) [0x33184e88fd]
2015-05-29T22:53:42.764-0700 [rsStart]

***aborting after fassert() failure

2015-05-29T22:53:42.773-0700 [rsStart] SEVERE: Got signal: 6 (Aborted).
Backtrace:0x1205431 0x120480e 0x33184326a0 0x3318432625 0x3318433e05 0x1189dca 0xf0479d 0xf048e4 0xf42b53 0xf42fa5 0xaa2a4c 0xd8237f 0xa4b65e 0xe7942f 0xe7bdb3 0xe85406 0x1249dc9 0x33188079d1 0x33184e88fd
mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x1205431]
mongod() [0x120480e]
/lib64/libc.so.6() [0x33184326a0]
/lib64/libc.so.6(gsignal+0x35) [0x3318432625]
/lib64/libc.so.6(abort+0x175) [0x3318433e05]
mongod(_ZN5mongo13fassertFailedEi+0x13a) [0x1189dca]
mongod() [0xf0479d]
mongod(_ZNK5mongo13ExtentManager13getPrevRecordERKNS_7DiskLocE+0x24) [0xf048e4]
mongod(_ZN5mongo14CappedIterator13getNextCappedEPKNS_16NamespaceDetailsEPKNS_13ExtentManagerERKNS_7DiskLocENS_20CollectionScanParams9DirectionE+0x63) [0xf42b53]
mongod(_ZN5mongo14CappedIterator7getNextEv+0x115) [0xf42fa5]
mongod(_ZN5mongo14CollectionScan4workEPm+0xfc) [0xaa2a4c]
mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8237f]
mongod(_ZN5mongo7Helpers7getLastEPKcRNS_7BSONObjE+0x9e) [0xa4b65e]
mongod(_ZN5mongo11ReplSetImpl21loadLastOpTimeWrittenEb+0x7f) [0xe7942f]
mongod(_ZN5mongo11ReplSetImpl3_goEv+0x543) [0xe7bdb3]
mongod(_ZN5mongo13startReplSetsEPNS_14ReplSetCmdlineE+0x56) [0xe85406]
mongod() [0x1249dc9]
/lib64/libpthread.so.0() [0x33188079d1]
/lib64/libc.so.6(clone+0x6d) [0x33184e88fd]
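For reference, the repair attempt mentioned in the description would typically be run as a standalone pass against the same data directory with the replica-set and sharding options omitted; the exact invocation below is an assumption based on the dbpath shown above, not the reporter's literal command:

mongod --repair --dbpath /home/shard2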