[SERVER-12186] TTLMonitor error Created: 21/Dec/13  Updated: 10/Dec/14  Resolved: 19/Mar/14

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: 2.4.8
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Dharshan Rangegowda Assignee: Scott Hernandez (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to SERVER-9184 Cleanup TTL logic/locking Closed
Operating System: ALL
Participants:

 Description   

I have a 2+1 node replica set. I dropped the TTL indexes from my collections and restarted the servers. On my secondary I continuously see the message below in the log. I don't even have a TTL index anymore - should I still be seeing this message?

Sat Dec 21 21:25:58.731 [TTLMonitor] Assertion: 13312:replSet error : logOp() but not primary?
0xdc7f71 0xd8963b 0xa63ca3 0xa60f69 0xa72fd4 0xc3d4c1 0xc3e725 0xd8c233 0xd8cce4 0xe10879 0x7f1a43c55851 0x7f1a42ff811d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdc7f71]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x9b) [0xd8963b]
 /usr/bin/mongod() [0xa63ca3]
 /usr/bin/mongod(_ZN5mongo5logOpEPKcS1_RKNS_7BSONObjEPS2_Pbb+0x49) [0xa60f69]
 /usr/bin/mongod(_ZN5mongo13deleteObjectsEPKcNS_7BSONObjEbbbPNS_11RemoveSaverE+0x10d4) [0xa72fd4]
 /usr/bin/mongod(_ZN5mongo10TTLMonitor10doTTLForDBERKSs+0xfe1) [0xc3d4c1]
 /usr/bin/mongod(_ZN5mongo10TTLMonitor3runEv+0x345) [0xc3e725]
 /usr/bin/mongod(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xc3) [0xd8c233]
 /usr/bin/mongod(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x74)
 [0xd8cce4]
 /usr/bin/mongod() [0xe10879]
 /lib64/libpthread.so.0(+0x7851) [0x7f1a43c55851]
 /lib64/libc.so.6(clone+0x6d) [0x7f1a42ff811d]
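
For context, one way to confirm that no TTL indexes actually remain is to scan every collection's index definitions for an expireAfterSeconds field. The following is a minimal mongo shell sketch (not part of the original report); it assumes shell access to the member in question and iterates whatever databases and collections exist there:

// Sketch: list any remaining TTL indexes across all databases on this member.
db.getMongo().getDBNames().forEach(function(dbName) {
    var d = db.getSiblingDB(dbName);
    d.getCollectionNames().forEach(function(collName) {
        d.getCollection(collName).getIndexes().forEach(function(idx) {
            // TTL indexes are identified by the expireAfterSeconds option.
            if (idx.hasOwnProperty("expireAfterSeconds")) {
                print(dbName + "." + collName + ": " + idx.name +
                      " (expireAfterSeconds: " + idx.expireAfterSeconds + ")");
            }
        });
    });
});

If this prints nothing, no TTL indexes are left on that member.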



 Comments   
Comment by Stennie Steneker (Inactive) [ 19/Mar/14 ]

Hi Dharshan,

Please be advised that I'm now closing this issue, as we do not have enough details to investigate the problem.

If you do have any further information that would help us reproduce this issue, please let us know.

Thanks,
Stephen

Comment by Daniel Pasette (Inactive) [ 21/Dec/13 ]

You should not be seeing that on the secondary, though it should be harmless. There is an explicit check for whether the node is primary before running the deletes.
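
As an illustration of that state check (a hedged shell-level sketch, not the server's internal code), you can confirm what the node believes its role to be before the TTL deletes would run:

// On the affected member: "ismaster" should be false and "secondary" true.
db.isMaster()

// Or inspect the full replica set view, including each member's stateStr:
rs.status()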

On the secondary that is showing the error messages, set logLevel to 1 for a few minutes and attach the resulting log messages:

db.adminCommand( { setParameter: 1, logLevel: 1 } )

http://docs.mongodb.org/manual/reference/parameters/#param.logLevel
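
As a follow-up note (not part of the original comment), once the verbose logs have been captured the verbosity can be returned to its default:

db.adminCommand( { setParameter: 1, logLevel: 0 } )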
