[SERVER-13483] segmentation fault after dropping oplog Created: 03/Apr/14  Updated: 10/Dec/14  Resolved: 04/Apr/14

Status: Closed
Project: Core Server
Component/s: Replication, Storage
Affects Version/s: 2.6.0-rc3
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Luke Lovett Assignee: Unassigned
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to SERVER-8511 Live oplog can be dropped Closed
Operating System: OS X
Steps To Reproduce:

1. Start up a replica set
2. Drop the oplog.rs collection
3. Try to insert a document anywhere
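
A minimal mongo-shell sketch of the steps above, assuming a single-member replica set (e.g. started with mongod --replSet rs0 and initiated with rs.initiate()) on localhost:27017; the database and field names below are illustrative only:

    // connect the mongo shell to the primary
    use local
    db.oplog.rs.drop()        // step 2: drop the live oplog
    use test
    db.foo.insert({x: 1})     // step 3: this insert crashes mongod with SIGSEGV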

Participants:

 Description   

mongod exits with a segmentation fault when an insert is attempted after the oplog.rs collection has been dropped.

2014-04-03T23:14:52.198+0000 [conn7] SEVERE: Invalid access at address: 0
2014-04-03T23:14:52.203+0000 [conn7] SEVERE: Got signal: 11 (Segmentation fault: 11).
Backtrace:0x1006a7d0b 0x1006a7a4f 0x1006a7b52 0x7fff8cdb45aa 0x1004fd49c 0x100467694 0x10046a27f 0x1001a2160 0x1001a2ca1 0x1001a3eee 0x1001a4cff 0x1001a7ac7 0x1001b5e75 0x1001b6971 0x1001b78ec 0x1003ce0ff 0x10029f9e0 0x100006754 0x10066d0b1 0x1006dc115
 0   mongod                              0x00000001006a7d0b _ZN5mongo15printStackTraceERSo + 43
 1   mongod                              0x00000001006a7a4f _ZN5mongo12_GLOBAL__N_110abruptQuitEi + 191
 2   mongod                              0x00000001006a7b52 _ZN5mongo12_GLOBAL__N_124abruptQuitWithAddrSignalEiP9__siginfoPv + 210
 3   libsystem_platform.dylib            0x00007fff8cdb45aa _sigtramp + 26
 4   mongod                              0x00000001004fd49c _ZN5mongo14RamLogAppender6appendERKNS_6logger21MessageEventEphemeralE + 160
 5   mongod                              0x0000000100467694 _ZN5mongoL8_logOpRSEPKcS1_S1_RKNS_7BSONObjEPS2_Pbb + 2100
 6   mongod                              0x000000010046a27f _ZN5mongo5logOpEPKcS1_RKNS_7BSONObjEPS2_PbbPS3_ + 79
 7   mongod                              0x00000001001a2160 _ZN5mongo18WriteBatchExecutor13execOneInsertEPNS0_16ExecInsertsStateEPPNS_16WriteErrorDetailE + 4204
 8   mongod                              0x00000001001a2ca1 _ZN5mongo18WriteBatchExecutor11execInsertsERKNS_21BatchedCommandRequestEPSt6vectorIPNS_16WriteErrorDetailESaIS6_EE + 1293
 9   mongod                              0x00000001001a3eee _ZN5mongo18WriteBatchExecutor11bulkExecuteERKNS_21BatchedCommandRequestEPSt6vectorIPNS_19BatchedUpsertDetailESaIS6_EEPS4_IPNS_16WriteErrorDetailESaISB_EE + 66
 10  mongod                              0x00000001001a4cff _ZN5mongo18WriteBatchExecutor12executeBatchERKNS_21BatchedCommandRequestEPNS_22BatchedCommandResponseE + 2547
 11  mongod                              0x00000001001a7ac7 _ZN5mongo8WriteCmd3runERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb + 645
 12  mongod                              0x00000001001b5e75 _ZN5mongo12_execCommandEPNS_7CommandERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb + 37
 13  mongod                              0x00000001001b6971 _ZN5mongo7Command11execCommandEPS0_RNS_6ClientEiPKcRNS_7BSONObjERNS_14BSONObjBuilderEb + 2245
 14  mongod                              0x00000001001b78ec _ZN5mongo12_runCommandsEPKcRNS_7BSONObjERNS_11_BufBuilderINS_16TrivialAllocatorEEERNS_14BSONObjBuilderEbi + 1388
 15  mongod                              0x00000001003ce0ff _ZN5mongo11newRunQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_ + 1615
 16  mongod                              0x000000010029f9e0 _ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE + 1968
 17  mongod                              0x0000000100006754 _ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE + 308
 18  mongod                              0x000000010066d0b1 _ZN5mongo17PortMessageServer17handleIncomingMsgEPv + 1681
 19  mongod                              0x00000001006dc115 thread_proxy + 229



 Comments   
Comment by Eliot Horowitz (Inactive) [ 04/Apr/14 ]

Duplicate of SERVER-8511

Comment by Luke Lovett [ 03/Apr/14 ]

Spencer, dropping the oplog may not have a huge use case, but it does come in handy when testing mongo-connector, which tails the oplog. It's faster than starting a replica set, wiping everything, and starting another.
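
For context, a rough sketch of how an oplog tailer like mongo-connector reads the oplog (a tailable, await-data cursor over local.oplog.rs, shown here in the legacy mongo shell); the option names are the standard shell ones, not taken from mongo-connector itself:

    // open a tailable, await-data cursor on the oplog and print new entries
    var oplog = db.getSiblingDB("local").oplog.rs;
    var cursor = oplog.find().addOption(DBQuery.Option.tailable)
                             .addOption(DBQuery.Option.awaitData);
    while (cursor.hasNext()) { printjson(cursor.next()); }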

Comment by Scott Hernandez (Inactive) [ 03/Apr/14 ]

This is a regression – see SERVER-8511

Comment by Spencer Brody (Inactive) [ 03/Apr/14 ]

Sounds like the real problem is that we let you drop the oplog at all?
