Details
- Type: Bug
- Resolution: Incomplete
- Priority: Major - P3
- Fix Version/s: None
- Affects Version/s: 2.2.0
- Component/s: None
- Environment: EC2
- Operating System: Linux
Description
We recently upgraded from 2.0 to 2.2, and several of our replica sets have had one or more nodes crash. For example, in a 6-node replica set, the primary has crashed twice today. The bottom of the mongod.log output is shown below:
**********
:
:
Thu Sep 13 16:38:58 [rsHealthPoll] replSet member EC2_PUBLIC_DNS_HOSTNAME:27017 is up
Thu Sep 13 16:38:58 [rsHealthPoll] replSet member EC2_PUBLIC_DNS_HOSTNAME:27017 is now in state SECONDARY
Thu Sep 13 16:38:58 [rsMgr] replSet warning caught unexpected exception in electSelf()
Thu Sep 13 16:38:58 Invalid access at address: 0x7fc305dde6f0 from thread:
Thu Sep 13 16:38:58 Invalid access at address: 0x7fc305dde720 from thread:
Thu Sep 13 16:38:58 Got signal: 11 (Segmentation fault).
Thu Sep 13 16:38:58 Got signal: 11 (Segmentation fault).
Thu Sep 13 16:38:58 Backtrace:
0xade6e1 0x5582d9 0x558862 0x7fc98faff500 0x7fc305dde6f0
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xade6e1]
/usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x399) [0x5582d9]
/usr/bin/mongod(_ZN5mongo24abruptQuitWithAddrSignalEiP7siginfoPv+0x262) [0x558862]
/lib64/libpthread.so.0(+0xf500) [0x7fc98faff500]
[0x7fc305dde6f0]
**********
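To help narrow this down, here is a minimal mongo shell sketch (not from the original report) that checks each member of the affected set, confirming it is actually running 2.2.0 and printing each member's current state; the host list and the EC2_PUBLIC_DNS_HOSTNAME placeholder are assumptions standing in for the real member addresses:

    // Sketch only: fill in the six member addresses of the affected replica set.
    // EC2_PUBLIC_DNS_HOSTNAME is a placeholder, as in the log excerpt above.
    var hosts = ["EC2_PUBLIC_DNS_HOSTNAME:27017"];
    hosts.forEach(function (host) {
        var conn = new Mongo(host);              // direct connection to one member
        var admin = conn.getDB("admin");
        print(host + " is running " + admin.serverBuildInfo().version);
        var status = admin.runCommand({ replSetGetStatus: 1 });
        status.members.forEach(function (m) {
            print("  " + m.name + " -> " + m.stateStr);  // PRIMARY / SECONDARY / etc.
        });
    });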