- Type: Bug
- Resolution: Done
- Priority: Critical - P2
- Affects Version/s: 2.1.2
- Component/s: Replication
- Operating System: ALL
Found while testing PyMongo against 2.1.2. I'm still trying to track down the actual sequence of events that causes this. Here's the traceback, running at log level 5:
Wed Jun 27 14:28:41 [repl writer worker] New namespace: pymongo_test.test.$_id_
Wed Jun 27 14:28:41 [repl writer worker] done building bottom layer, going to commit
Wed Jun 27 14:28:41 [repl writer worker] build index done. scanned 0 total records. 0 secs
Wed Jun 27 14:28:41 [repl writer worker] New namespace: pymongo_test.test
Wed Jun 27 14:28:41 [repl writer worker] build index pymongo_test.test { i: 1 }
mem info: before index start vsize: 9493 resident: 40 mapped: 4382
Wed Jun 27 14:28:41 [repl writer worker] external sort root: /home/behackett/data/rs1/_tmp/esort.1340832521.50/
mem info: before final sort vsize: 9493 resident: 40 mapped: 4382
Wed Jun 27 14:28:41 [repl writer worker] not using file. size:35 _compares:0
mem info: after final sort vsize: 9493 resident: 40 mapped: 4382
Wed Jun 27 14:28:41 [repl writer worker] external sort used : 0 files in 0 secs
Wed Jun 27 14:28:41 [repl writer worker] reset extent was:pymongo_test.test.$keyb_1 now:pymongo_test.test.$i_1
allocExtent pymongo_test.test.$i_1 size 36864 1
Wed Jun 27 14:28:41 [repl writer worker] New namespace: pymongo_test.test.$i_1
Wed Jun 27 14:28:41 [repl writer worker] done building bottom layer, going to commit
Wed Jun 27 14:28:41 [repl writer worker] fastBuildIndex dupsToDrop:0
Wed Jun 27 14:28:41 [repl writer worker] build index done. scanned 1 total records. 0 secs
Wed Jun 27 14:28:41 [repl writer worker] User Assertion: 11000:E11000 duplicate key error index: pymongo_test.test.$i_1 dup key: { : 2 }
Wed Jun 27 14:28:41 [repl writer worker] ERROR: writer worker caught exception: E11000 duplicate key error index: pymongo_test.test.$i_1 dup key: { : 2 } on: { ts: Timestamp 1340832519000|21, h: -352243138702237989, op: "i", ns: "pymongo_test.test", o: { _id: ObjectId('4feb7b07fba5223bd4d67d72'), i: 2 } }
Wed Jun 27 14:28:41 [repl writer worker] Fatal Assertion 16360
0x8964e1 0xaa2373 0x8e4b00 0x8a6abd 0x666189 0x7fda040cfd0c 0x7fda03469bed
./mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x8964e1]
./mongod(_ZN5mongo13fassertFailedEi+0xa3) [0xaa2373]
./mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x120) [0x8e4b00]
./mongod(_ZN5mongo10threadpool6Worker4loopEv+0x26d) [0x8a6abd]
./mongod() [0x666189]
/lib64/libpthread.so.0(+0x7d0c) [0x7fda040cfd0c]
/lib64/libc.so.6(clone+0x6d) [0x7fda03469bed]
Wed Jun 27 14:28:41 [repl writer worker]
***aborting after fassert() failure
Wed Jun 27 14:28:41 Got signal: 6 (Aborted).
Wed Jun 27 14:28:41 Backtrace:
0x8964e1 0x550cd9 0x7fda033c2bc0 0x7fda033c2b45 0x7fda033c3fc6 0xaa23ae 0x8e4b00 0x8a6abd 0x666189 0x7fda040cfd0c 0x7fda03469bed
./mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x8964e1]
./mongod(_ZN5mongo10abruptQuitEi+0x399) [0x550cd9]
/lib64/libc.so.6(+0x35bc0) [0x7fda033c2bc0]
/lib64/libc.so.6(gsignal+0x35) [0x7fda033c2b45]
/lib64/libc.so.6(abort+0x186) [0x7fda033c3fc6]
./mongod(_ZN5mongo13fassertFailedEi+0xde) [0xaa23ae]
./mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x120) [0x8e4b00]
./mongod(_ZN5mongo10threadpool6Worker4loopEv+0x26d) [0x8a6abd]
./mongod() [0x666189]
/lib64/libpthread.so.0(+0x7d0c) [0x7fda040cfd0c]
/lib64/libc.so.6(clone+0x6d) [0x7fda03469bed]
I can reproduce this consistently by running PyMongo's test_collection module, although running the tests individually doesn't seem to trigger it. I'll attach full logs from each replica set member shortly.
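For context on the failure mode: the backtrace shows the repl writer worker dying in multiSyncApply, which treats any exception raised while applying an oplog entry, here the E11000 duplicate key error, as fatal (fassert 16360). Below is a toy Python model of that behavior, not MongoDB's actual code; every name in it (UniqueIndex, apply_oplog_entry, DuplicateKeyError) is hypothetical, and it assumes the index on i involved enforces uniqueness.

```python
class DuplicateKeyError(Exception):
    """Stands in for mongod's E11000 duplicate key error (code 11000)."""


class UniqueIndex:
    """Toy unique index over a single field."""

    def __init__(self, field):
        self.field = field
        self.keys = set()

    def insert(self, doc):
        key = doc[self.field]
        if key in self.keys:
            raise DuplicateKeyError(
                "E11000 duplicate key error dup key: {{ : {!r} }}".format(key))
        self.keys.add(key)


def apply_oplog_entry(index, entry):
    # Mimics multiSyncApply's contract: any exception raised while
    # applying an entry is fatal for the writer worker (fassert 16360),
    # so the error propagates instead of being swallowed.
    if entry["op"] == "i":
        index.insert(entry["o"])


index = UniqueIndex("i")
apply_oplog_entry(index, {"op": "i", "o": {"_id": 1, "i": 2}})

try:
    # Applying a second insert with the same key { : 2 }, as in the
    # log above, raises the duplicate key error that aborts the worker.
    apply_oplog_entry(index, {"op": "i", "o": {"_id": 2, "i": 2}})
    worker_aborted = False
except DuplicateKeyError:
    worker_aborted = True

print(worker_aborted)  # → True
```

The sketch only illustrates why a single duplicate-key collision during batched oplog application takes down the whole node: the worker has no per-entry recovery path.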