[SERVER-28545] Replication subsystem holds Global lock in MODE_X while waiting for member state to change to ROLLBACK causing server to hang Created: 29/Mar/17  Updated: 04/Mar/19  Resolved: 26/Oct/17

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: None
Fix Version/s: 3.5.10

Type: Bug Priority: Major - P3
Reporter: Max Hirschhorn Assignee: Spencer Brody (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Backports
Depends
Related
is related to SERVER-27154 replSetRequestVotes command should wa... Closed
is related to SERVER-23908 MMAPv1 DurableImpl::waitUntilDurable ... Closed
is related to SERVER-27282 Clean up and fix bugs in RS rollback ... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Backport Requested:
v3.4
Steps To Reproduce:

I have not attempted to reproduce this issue outside of running Jepsen's "set" test which was integrated into Evergreen as part of SERVER-28461.

lein run test --test set --tarball "file:///root/mongo-binaries.tgz" --ssh-private-key ~/.ssh/id_rsa_lxc --clock-skew faketime --libfaketime-path /opt/mongodb/libfaketime.so.1 --key-time-limit 15 --protocol-version 1 --read-concern linearizable --storage-engine mmapv1 --time-limit 300

I will update this section of the ticket if I'm able to reproduce the issue via simpler means.

Sprint: Repl 2017-05-08, Repl 2017-05-29, Repl 2017-07-10
Participants:
Case:
Linked BF Score: 15

 Description   

It is possible that, while the "rsBackgroundSync" thread is changing the member state to ROLLBACK, a thread running work on the ReplicationExecutor needs to acquire a lock. This design of holding a LockManager lock while waiting on a condition variable outside of the lock hierarchy seems prone to deadlock. For example, in the GDB output below, thread #39 is holding the Global lock in MODE_X and waiting for its task to set the follower mode to MemberState::RS_ROLLBACK in the ReplicationExecutor. The ReplicationExecutor is currently processing a vote response in thread #13, which is waiting for the storage engine to make the vote durable. The durability thread (#6) is waiting to acquire the MMAPv1 flush lock, which is implicitly held by thread #39 as part of acquiring the Global lock.

Thread 39 (Thread 0x7fc1e03f0700 (LWP 20506)):
#0  0x00007fc27f0f5404 in pthread_cond_wait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007fc2826fba7c in std::condition_variable::wait(std::unique_lock<std::mutex>&) ()
#2  0x00007fc28170ea8b in mongo::repl::ReplicationExecutor::Event::waitUntilSignaled() ()
#3  0x00007fc2816f0e7d in mongo::repl::ReplicationCoordinatorImpl::setFollowerMode(mongo::repl::MemberState const&) ()
#4  0x00007fc281735ef8 in mongo::repl::rollback(mongo::OperationContext*, mongo::repl::OplogInterface const&, mongo::repl::RollbackSource const&, int, mongo::repl::ReplicationCoordinator*, mongo::repl::StorageInterface*, std::function<void (int)>) ()
#5  0x00007fc2816037c2 in mongo::repl::BackgroundSync::_runRollback(mongo::OperationContext*, mongo::Status const&, mongo::HostAndPort const&, int, mongo::repl::StorageInterface*) ()
#6  0x00007fc281605b0e in mongo::repl::BackgroundSync::_produce(mongo::OperationContext*) ()
#7  0x00007fc28160661a in mongo::repl::BackgroundSync::_runProducer() ()
#8  0x00007fc28160679a in mongo::repl::BackgroundSync::_run() ()
#9  0x00007fc2826fe690 in execute_native_thread_routine ()
#10 0x00007fc27f0f1184 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#11 0x00007fc27ee1ebed in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
...
Thread 13 (Thread 0x7fc1ed615700 (LWP 20473)):
#0  0x00007fc27f0f5404 in pthread_cond_wait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007fc2826fba7c in std::condition_variable::wait(std::unique_lock<std::mutex>&) ()
#2  0x00007fc2818d2cab in mongo::CommitNotifier::awaitBeyondNow() ()
#3  0x00007fc2818d6a40 in mongo::dur::(anonymous namespace)::DurableImpl::waitUntilDurable() ()
#4  0x00007fc2816d57e0 in mongo::repl::ReplicationCoordinatorExternalStateImpl::storeLocalLastVoteDocument(mongo::OperationContext*, mongo::repl::LastVote const&) ()
#5  0x00007fc2816ff04b in mongo::repl::ReplicationCoordinatorImpl::_writeLastVoteForMyElection(mongo::repl::LastVote, mongo::executor::TaskExecutor::CallbackArgs const&) ()
#6  0x00007fc28170f840 in mongo::repl::ReplicationExecutor::_doOperation(mongo::OperationContext*, mongo::Status const&, mongo::executor::TaskExecutor::CallbackHandle const&, std::__cxx11::list<mongo::repl::ReplicationExecutor::WorkItem, std::allocator<mongo::repl::ReplicationExecutor::WorkItem> >*, std::mutex*) ()
#7  0x00007fc28170e0ed in mongo::repl::(anonymous namespace)::callNoExcept(std::function<void ()> const&) ()
#8  0x00007fc281715a30 in std::_Function_handler<mongo::repl::TaskRunner::NextAction (mongo::OperationContext*, mongo::Status const&), mongo::repl::ReplicationExecutor::scheduleDBWork(std::function<void (mongo::executor::TaskExecutor::CallbackArgs const&)> const&, mongo::NamespaceString const&, mongo::LockMode)::{lambda(mongo::OperationContext*, mongo::Status const&)#1}>::_M_invoke(std::_Any_data const&, mongo::OperationContext*&&, mongo::Status const&) ()
#9  0x00007fc28175d349 in mongo::repl::(anonymous namespace)::runSingleTask(std::function<mongo::repl::TaskRunner::NextAction (mongo::OperationContext*, mongo::Status const&)> const&, mongo::OperationContext*, mongo::Status const&) [clone .constprop.72] ()
#10 0x00007fc28175e46f in mongo::repl::TaskRunner::_runTasks() ()
#11 0x00007fc281bf38ec in mongo::ThreadPool::_doOneTask(std::unique_lock<std::mutex>*) ()
#12 0x00007fc281bf439c in mongo::ThreadPool::_consumeTasks() ()
#13 0x00007fc281bf4d56 in mongo::ThreadPool::_workerThreadBody(mongo::ThreadPool*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
#14 0x00007fc2826fe690 in execute_native_thread_routine ()
#15 0x00007fc27f0f1184 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#16 0x00007fc27ee1ebed in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
...
Thread 6 (Thread 0x7fc27cd1c700 (LWP 20466)):
#0  0x00007fc27f0f57be in pthread_cond_timedwait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007fc281225fb8 in mongo::CondVarLockGrantNotification::wait(unsigned int) ()
#2  0x00007fc28122a6be in mongo::LockerImpl<true>::lockComplete(mongo::ResourceId, mongo::LockMode, unsigned int, bool) ()
#3  0x00007fc2812261d6 in mongo::AutoAcquireFlushLockForMMAPV1Commit::AutoAcquireFlushLockForMMAPV1Commit(mongo::Locker*) ()
#4  0x00007fc2818d7f1f in mongo::dur::durThread(mongo::ClockSource*, long) ()
#5  0x00007fc2826fe690 in execute_native_thread_routine ()
#6  0x00007fc27f0f1184 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#7  0x00007fc27ee1ebed in clone () from target:/lib/x86_64-linux-gnu/libc.so.6


Thank you to benety.goh for helping me with the GDB output.



 Comments   
Comment by Githook User [ 28/Jun/17 ]

Author:

{u'username': u'stbrody', u'name': u'Spencer T Brody', u'email': u'spencer@mongodb.com'}

Message: SERVER-28545 Do not wait for election to finish while holding global lock
Branch: master
https://github.com/mongodb/mongo/commit/f577af234306bdceeb27c5ec09606143d7347f9d

Comment by Githook User [ 28/Jun/17 ]

Author:

{u'username': u'stbrody', u'name': u'Spencer T Brody', u'email': u'spencer@mongodb.com'}

Message: SERVER-28545 Change ReplicationCoordinator::setFollowerMode to return a Status instead of a bool
Branch: master
https://github.com/mongodb/mongo/commit/4926efd55328fadd997e69080b1ca544df210e7e

Comment by Spencer Brody (Inactive) [ 13/Jun/17 ]

Spent some time thinking about this this afternoon. I think the thing to do here is to change ReplicationCoordinatorImpl::setFollowerMode to return a Status instead of a bool, so it can indicate to its callers why it failed, and then remove the behavior of waiting for canceled elections to finish. If setFollowerMode discovers an ongoing election that conflicts with setting the mode as requested, it should cancel that election and return a Status indicating why. It is then up to the callers to decide if and how to retry. In the case of rollback, it should probably just abort the rollback attempt; most likely this will just result in a new rollback attempt starting a few seconds later when the node resumes trying to replicate, at which point the new rollback attempt should succeed.

Comment by Max Hirschhorn [ 05/May/17 ]

I am re-opening this ticket because this deadlock scenario is still occurring in Evergreen after Judah's changes from 0cd3a79. See this task timeout as a recent example.

The ReplicationExecutor is currently processing a vote response in thread #13, which is waiting for the storage engine to make it durable.

After discussing and looking at ReplicationCoordinatorImpl::setFollowerMode() more with siyuan.zhou, I've learned that the ReplicationExecutor has a separate worker thread (replExecDBWorker-0) for executing callbacks under the necessary locks, so setFollowerMode() wouldn't cause the ReplicationExecutor thread itself to block. The underlying issue that is still present in the current version of the code is that ReplicationCoordinatorImpl::setFollowerMode() waits for the electionFinishedEvent to get signaled. (Prior to the changes from 0cd3a79 this occurred in ReplicationCoordinatorImpl::_setFollowerModeFinish().) The electionFinishedEvent event is signaled in replication_coordinator_impl_elect_v1.cpp when the election is won or lost. However, after the dry-run election succeeds, a callback is scheduled to persist the node's vote for itself (to avoid voting multiple times in the same term), and that callback waits for the write to become durable.

For example, in the GDB output below, the "rsBackgroundSync" thread (#38) is holding the Global lock in MODE_X and waiting for the electionFinishedEvent to get signaled. The "replExecDBWorker-0" thread (#13) is handling the dry-run election's success by persisting the node's vote for itself and waiting for the storage engine to make it durable. The "durability" thread (#6) is waiting to acquire the MMAPv1 flush lock, which is implicitly held by the "rsBackgroundSync" thread as part of acquiring the Global lock.

Thread 38: "rsBackgroundSync" (Thread 0x7f350204e700 (LWP 24880))
#0  0x00007f35a0d53404 in pthread_cond_wait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007f35a436c03c in __gthread_cond_wait (__mutex=<optimized out>, __cond=__cond@entry=0x7f35a987ab28) at /data/mci/d4fd0a0771c6aae8fcb2a4bad42d3271/toolchain-builder/build-gcc-v2.sh-J5F/x86_64-mongodb-linux/libstdc++-v3/include/x86_64-mongodb-linux/bits/gthr-default.h:864
#2  std::condition_variable::wait (this=this@entry=0x7f35a987ab28, __lock=...) at ../../../../../gcc-5.4.0/libstdc++-v3/src/c++11/condition_variable.cc:53
#3  0x00007f35a3230b6b in mongo::repl::ReplicationExecutor::Event::waitUntilSignaled (this=0x7f35a987ab10) at src/mongo/db/repl/replication_executor.cpp:565
#4  0x00007f35a31fd713 in mongo::repl::ReplicationCoordinatorImpl::setFollowerMode (this=0x7f35a6f1ed00, newState=...) at src/mongo/db/repl/replication_coordinator_impl.cpp:835
#5  0x00007f35a2f01768 in mongo::repl::rollback(mongo::OperationContext*, mongo::repl::OplogInterface const&, mongo::repl::RollbackSource const&, int, mongo::repl::ReplicationCoordinator*, mongo::repl::StorageInterface*, std::function<void (int)>) (opCtx=opCtx@entry=0x7f35a8b65ce0, localOplog=..., rollbackSource=..., requiredRBID=requiredRBID@entry=3, replCoord=0x7f35a6f1ed00, storageInterface=storageInterface@entry=0x7f35a6ff1f80, sleepSecsFn=...) at src/mongo/db/repl/rs_rollback.cpp:895
#6  0x00007f35a2ee8f46 in mongo::repl::BackgroundSync::_fallBackOn3dot4Rollback (this=this@entry=0x7f35a7180c80, opCtx=opCtx@entry=0x7f35a8b65ce0, source=..., requiredRBID=requiredRBID@entry=3, localOplog=localOplog@entry=0x7f350204c470, storageInterface=storageInterface@entry=0x7f35a6ff1f80) at src/mongo/db/repl/bgsync.cpp:691
#7  0x00007f35a2ee9630 in mongo::repl::BackgroundSync::_runRollback (this=this@entry=0x7f35a7180c80, opCtx=opCtx@entry=0x7f35a8b65ce0, fetcherReturnStatus=Status(OplogStartMissing, {static npos = 18446744073709551615, _M_dataplus = {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x7f35a7149120 "Received an empty batch from sync source."}, _M_string_length = 41, {_M_local_buf = ")\000\000\000\000\000\000\000\000\000\266\250\065\177\000", _M_allocated_capacity = 41}}), source=..., requiredRBID=3, storageInterface=storageInterface@entry=0x7f35a6ff1f80) at src/mongo/db/repl/bgsync.cpp:621
#8  0x00007f35a2eebd32 in mongo::repl::BackgroundSync::_produce (this=this@entry=0x7f35a7180c80, opCtx=0x7f35a8b65ce0) at src/mongo/db/repl/bgsync.cpp:480
#9  0x00007f35a2eec542 in mongo::repl::BackgroundSync::_runProducer (this=this@entry=0x7f35a7180c80) at src/mongo/db/repl/bgsync.cpp:241
#10 0x00007f35a2eec6ca in mongo::repl::BackgroundSync::_run (this=0x7f35a7180c80) at src/mongo/db/repl/bgsync.cpp:198
#11 0x00007f35a436ec50 in std::execute_native_thread_routine (__p=<optimized out>) at ../../../../../gcc-5.4.0/libstdc++-v3/src/c++11/thread.cc:84
#12 0x00007f35a0d4f184 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#13 0x00007f35a0a7cbed in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
...
Thread 13: "replExecDBWorker-0" (Thread 0x7f350f273700 (LWP 24847))
#0  0x00007f35a0d53404 in pthread_cond_wait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007f35a436c03c in __gthread_cond_wait (__mutex=<optimized out>, __cond=__cond@entry=0x7f35a506f548 <mongo::dur::(anonymous namespace)::commitNotify+40>) at /data/mci/d4fd0a0771c6aae8fcb2a4bad42d3271/toolchain-builder/build-gcc-v2.sh-J5F/x86_64-mongodb-linux/libstdc++-v3/include/x86_64-mongodb-linux/bits/gthr-default.h:864
#2  std::condition_variable::wait (this=this@entry=0x7f35a506f548 <mongo::dur::(anonymous namespace)::commitNotify+40>, __lock=...) at ../../../../../gcc-5.4.0/libstdc++-v3/src/c++11/condition_variable.cc:53
#3  0x00007f35a301a66b in mongo::CommitNotifier::awaitBeyondNow (this=this@entry=0x7f35a506f520 <mongo::dur::(anonymous namespace)::commitNotify>) at src/mongo/db/storage/mmap_v1/commit_notifier.cpp:61
#4  0x00007f35a301fce0 in mongo::dur::(anonymous namespace)::DurableImpl::waitUntilDurable (this=<optimized out>) at src/mongo/db/storage/mmap_v1/dur.cpp:538
#5  0x00007f35a2e895a0 in mongo::repl::ReplicationCoordinatorExternalStateImpl::storeLocalLastVoteDocument (this=<optimized out>, opCtx=0x7f35a8b64e00, lastVote=...) at src/mongo/db/repl/replication_coordinator_external_state_impl.cpp:529
#6  0x00007f35a32058db in mongo::repl::ReplicationCoordinatorImpl::_writeLastVoteForMyElection (this=0x7f35a6f1ed00, lastVote=..., cbData=...) at src/mongo/db/repl/replication_coordinator_impl_elect_v1.cpp:212
#7  0x00007f35a3231911 in std::function<void (mongo::executor::TaskExecutor::CallbackArgs const&)>::operator()(mongo::executor::TaskExecutor::CallbackArgs const&) const (__args#0=..., this=0x7f35a985e7e0) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:2267
#8  mongo::repl::ReplicationExecutor::_doOperation (this=0x7f35a702bf00, opCtx=0x7f35a8b64e00, taskRunnerStatus=Status::OK(), cbHandle=..., workQueue=0x7f35a702bfd8, terribleExLockSyncMutex=<optimized out>) at src/mongo/db/repl/replication_executor.cpp:431
#9  0x00007f35a32301bd in std::function<void ()>::operator()() const (this=<optimized out>) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:2267
#10 mongo::repl::(anonymous namespace)::callNoExcept(const std::function<void()> &) (fn=...) at src/mongo/db/repl/replication_executor.cpp:629
#11 0x00007f35a3238030 in std::function<void ()>::operator()() const (this=0x7f350f272070) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:2267
#12 mongo::repl::ReplicationExecutor::<lambda(mongo::OperationContext*, const mongo::Status&)>::operator() (status=..., opCtx=<optimized out>, __closure=<optimized out>) at src/mongo/db/repl/replication_executor.cpp:394
#13 std::_Function_handler<mongo::repl::TaskRunner::NextAction(mongo::OperationContext*, const mongo::Status&), mongo::repl::ReplicationExecutor::scheduleDBWork(const CallbackFn&, const mongo::NamespaceString&, mongo::LockMode)::<lambda(mongo::OperationContext*, const mongo::Status&)> >::_M_invoke(const std::_Any_data &, <unknown type in /data/mci/97b0f6547a665d19479b7e861c111d14/src/mongod.debug, CU 0xb2d27c3, DIE 0xb359591>, const mongo::Status &) (__functor=..., __args#0=<optimized out>, __args#1=...) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:1857
#14 0x00007f35a32b4b49 in std::function<mongo::repl::TaskRunner::NextAction (mongo::OperationContext*, mongo::Status const&)>::operator()(mongo::OperationContext*, mongo::Status const&) const (__args#1=Status::OK(), __args#0=0x7f35a8b64e00, this=0x7f350f272260) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:2267
#15 mongo::repl::(anonymous namespace)::runSingleTask (task=..., opCtx=<optimized out>, status=Status::OK()) at src/mongo/db/repl/task_runner.cpp:66
#16 0x00007f35a32b5c6f in mongo::repl::TaskRunner::_runTasks (this=0x7f35a702c270) at src/mongo/db/repl/task_runner.cpp:151
#17 0x00007f35a3aa869c in std::function<void ()>::operator()() const (this=0x7f350f272350) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/functional:2267
#18 mongo::ThreadPool::_doOneTask (this=this@entry=0x7f35a702c0b0, lk=lk@entry=0x7f350f272430) at src/mongo/util/concurrency/thread_pool.cpp:329
#19 0x00007f35a3aa914c in mongo::ThreadPool::_consumeTasks (this=this@entry=0x7f35a702c0b0) at src/mongo/util/concurrency/thread_pool.cpp:281
#20 0x00007f35a3aa9b06 in mongo::ThreadPool::_workerThreadBody (pool=0x7f35a702c0b0, threadName=...) at src/mongo/util/concurrency/thread_pool.cpp:229
#21 0x00007f35a436ec50 in std::execute_native_thread_routine (__p=<optimized out>) at ../../../../../gcc-5.4.0/libstdc++-v3/src/c++11/thread.cc:84
#22 0x00007f35a0d4f184 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#23 0x00007f35a0a7cbed in clone () from target:/lib/x86_64-linux-gnu/libc.so.6
...
Thread 6: "durability" (Thread 0x7f359e97a700 (LWP 24840))
#0  0x00007f35a0d537be in pthread_cond_timedwait@@GLIBC_2.3.2 () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007f35a3c34d78 in __gthread_cond_timedwait (__abs_timeout=0x7f359e977ff0, __mutex=<optimized out>, __cond=0x7f35a981b9d8) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/x86_64-mongodb-linux/bits/gthr-default.h:871
#2  std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (__atime=..., __lock=<synthetic pointer>..., this=0x7f35a981b9d8) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/condition_variable:165
#3  std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (__atime=..., __lock=<synthetic pointer>..., this=0x7f35a981b9d8) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/condition_variable:105
#4  std::condition_variable::wait_until<std::chrono::_V2::system_clock, std::chrono::duration<long int, std::ratio<1l, 1000000000l> >, mongo::CondVarLockGrantNotification::wait(mongo::Milliseconds)::<lambda()> > (__p=..., __atime=..., __lock=<synthetic pointer>..., this=0x7f35a981b9d8) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/condition_variable:128
#5  std::condition_variable::wait_for<long int, std::ratio<1l, 1000000000l>, mongo::CondVarLockGrantNotification::wait(mongo::Milliseconds)::<lambda()> > (__p=..., __rtime=..., __lock=<synthetic pointer>..., this=0x7f35a981b9d8) at /opt/mongodbtoolchain/v2/include/c++/5.4.0/condition_variable:144
#6  mongo::CondVarLockGrantNotification::wait (this=this@entry=0x7f35a981b9a8, timeout=..., timeout@entry=...) at src/mongo/db/concurrency/lock_state.cpp:225
#7  0x00007f35a3c3963e in mongo::LockerImpl<true>::lockComplete (this=0x7f35a981b400, resId=..., mode=<optimized out>, timeout=..., checkDeadlock=true) at src/mongo/db/concurrency/lock_state.cpp:744
#8  0x00007f35a3c34fab in mongo::AutoAcquireFlushLockForMMAPV1Commit::AutoAcquireFlushLockForMMAPV1Commit (this=0x7f359e978420, locker=<optimized out>) at src/mongo/db/concurrency/lock_state.cpp:880
#9  0x00007f35a30211d7 in mongo::dur::durThread (cs=0x7f35a702a360, serverStartMs=1493846747694) at src/mongo/db/storage/mmap_v1/dur.cpp:731
#10 0x00007f35a436ec50 in std::execute_native_thread_routine (__p=<optimized out>) at ../../../../../gcc-5.4.0/libstdc++-v3/src/c++11/thread.cc:84
#11 0x00007f35a0d4f184 in start_thread () from target:/lib/x86_64-linux-gnu/libpthread.so.0
#12 0x00007f35a0a7cbed in clone () from target:/lib/x86_64-linux-gnu/libc.so.6

Comment by Githook User [ 24/Apr/17 ]

Author:

{u'username': u'judahschvimer', u'name': u'Judah Schvimer', u'email': u'judah@mongodb.com'}

Message: SERVER-28545 don't schedule setFollowerMode on ReplicationExecutor
Branch: master
https://github.com/mongodb/mongo/commit/0cd3a79bd4c275f0cd8eadc9dd94c69d465b5e11

Comment by Spencer Brody (Inactive) [ 10/Apr/17 ]

I think this can be fixed by changing setFollowerMode to just do its work inline without scheduling it to run on the ReplicationExecutor.

Generated at Thu Feb 08 04:18:26 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.