Core Server / SERVER-83453

Release TPTE iterators while holding the mutex

    • Type: Task
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: 7.3.0-rc0
    • Affects Version/s: None
    • Component/s: Internal Code
    • Backwards Compatibility: Fully Compatible
    • Sprint: Service Arch 2023-11-27, Service Arch 2023-12-11

      This is more of a temporary fix to evaluate a hypothesis: destroying iterators without synchronizing with the corresponding list container can result in a data race in DEBUG builds. This may show up as a read-after-free in ASAN builds, and a segmentation fault otherwise.
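
      For reference, here is a minimal standalone sketch (not MongoDB code) of the pattern the hypothesis describes. It assumes a checked-iterator STL, e.g. libstdc++ with _GLIBCXX_DEBUG or MSVC with _ITERATOR_DEBUG_LEVEL=2, where an iterator's constructor and destructor update bookkeeping owned by the container:

      #include <list>
      #include <mutex>

      std::mutex mu;        // guards `queue`
      std::list<int> queue;

      void racy() {
          std::unique_lock<std::mutex> lk(mu);
          auto it = queue.begin();  // checked iterator registers itself with `queue`
          (void)it;                 // ... use `it` while holding the lock ...
          lk.unlock();
      }   // ~iterator() runs here, *after* the unlock: it unregisters itself
          // from `queue`'s debug bookkeeping while another thread may be
          // mutating (or destroying) `queue` under `mu`, which is the data
          // race this ticket hypothesizes.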

      The following patch wraps the iterators in boost::optional to ensure they are always destroyed before the ThreadPoolTaskExecutor mutex is released. I couldn't find any other iterators that were created or destroyed without holding this mutex, and applying the patch seems to fix the issue.

      diff --git a/src/mongo/executor/thread_pool_task_executor.cpp b/src/mongo/executor/thread_pool_task_executor.cpp
      index 5be6b56a2cb..5fdf50f7c8b 100644
      --- a/src/mongo/executor/thread_pool_task_executor.cpp
      +++ b/src/mongo/executor/thread_pool_task_executor.cpp
      @@ -28,6 +28,7 @@
        */
       
       
      +#include "boost/optional/optional.hpp"
       #include "mongo/platform/basic.h"
       
       #include "mongo/executor/thread_pool_task_executor.h"
      @@ -584,18 +585,21 @@ void ThreadPoolTaskExecutor::scheduleIntoPool_inlock(WorkQueue* fromQueue,
       }
       
       void ThreadPoolTaskExecutor::scheduleIntoPool_inlock(WorkQueue* fromQueue,
      -                                                     const WorkQueue::iterator& iter,
      +                                                     boost::optional<WorkQueue::iterator> iter,
                                                            stdx::unique_lock<Latch> lk) {
      -    scheduleIntoPool_inlock(fromQueue, iter, std::next(iter), std::move(lk));
      +    boost::optional<WorkQueue::iterator> end{std::next(*iter)};
      +    scheduleIntoPool_inlock(fromQueue, std::move(iter), std::move(end), std::move(lk));
       }
       
       void ThreadPoolTaskExecutor::scheduleIntoPool_inlock(WorkQueue* fromQueue,
      -                                                     const WorkQueue::iterator& begin,
      -                                                     const WorkQueue::iterator& end,
      +                                                     boost::optional<WorkQueue::iterator> begin,
      +                                                     boost::optional<WorkQueue::iterator> end,
                                                            stdx::unique_lock<Latch> lk) {
           dassert(fromQueue != &_poolInProgressQueue);
      -    std::vector<std::shared_ptr<CallbackState>> todo(begin, end);
      -    _poolInProgressQueue.splice(_poolInProgressQueue.end(), *fromQueue, begin, end);
      +    std::vector<std::shared_ptr<CallbackState>> todo(*begin, *end);
      +    _poolInProgressQueue.splice(_poolInProgressQueue.end(), *fromQueue, *begin, *end);
      +    begin = boost::none;
      +    end = boost::none;
       
           lk.unlock();
       
      diff --git a/src/mongo/executor/thread_pool_task_executor.h b/src/mongo/executor/thread_pool_task_executor.h
      index 9c95467118f..b0766d75d52 100644
      --- a/src/mongo/executor/thread_pool_task_executor.h
      +++ b/src/mongo/executor/thread_pool_task_executor.h
      @@ -172,7 +172,7 @@ private:
            * _poolInProgressQueue.
            */
           void scheduleIntoPool_inlock(WorkQueue* fromQueue,
      -                                 const WorkQueue::iterator& iter,
      +                                 boost::optional<WorkQueue::iterator> iter,
                                        stdx::unique_lock<Latch> lk);
       
           /**
      @@ -180,8 +180,8 @@ private:
            * and moves them into _poolInProgressQueue.
            */
           void scheduleIntoPool_inlock(WorkQueue* fromQueue,
      -                                 const WorkQueue::iterator& begin,
      -                                 const WorkQueue::iterator& end,
      +                                 boost::optional<WorkQueue::iterator> begin,
      +                                 boost::optional<WorkQueue::iterator> end,
                                        stdx::unique_lock<Latch> lk);
       
           /**
      
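      The same fix pattern in isolation: wrapping the iterator in boost::optional makes its lifetime explicit, so destruction can be forced while the mutex is still held (a sketch only; names are illustrative, not from the patch):

      #include <boost/optional/optional.hpp>
      #include <list>
      #include <mutex>

      std::mutex mu;        // guards `queue`
      std::list<int> queue;

      void fixed() {
          std::unique_lock<std::mutex> lk(mu);
          boost::optional<std::list<int>::iterator> it{queue.begin()};
          // ... use *it while holding the lock ...
          it = boost::none;  // destroy the iterator (and its debug
                             // bookkeeping) while the mutex is still held
          lk.unlock();       // no iterator outlives the critical section
      }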

             Assignee:
             Amirsaman Memaripour (amirsaman.memaripour@mongodb.com)
             Reporter:
             Amirsaman Memaripour (amirsaman.memaripour@mongodb.com)
             Votes:
             0
             Watchers:
             5
