Core Server / SERVER-12733

Flush mmap files in parallel to achieve better flush times on Windows

    Details

    • Type: Task
    • Status: Closed
    • Priority: Critical - P2
    • Resolution: Won't Fix
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: MMAPv1, Performance, Storage
    • Labels:

      Description

      Presently, mmap'ed files are flushed sequentially when there are multiple files. This results in long flush times on the Azure/Windows platform, where the OS is not able to flush the contents of multiple files concurrently. See SERVER-12401 for more details.

      The issue is especially critical when many random updates dirty large parts of the mmap'ed region. One short-term approach, implementable solely on the mongod side, is to flush the database files in parallel. We observed that this does result in the OS flushing data in parallel and achieves higher throughput, so it could be one way to get better flush times on all platforms.

      The proposed changes are as follows:
      1. A fixed number of threads (8 is proposed), using mongo::ThreadPool, to process file flushes.
      2. MongoFile::_flushAll will schedule one flush task per file into this thread pool and will finish once all flush requests are done.
      3. Change _globalFlushMutex (a Windows-only lock) to a read-write lock, so that WRITETODATAFILES takes an exclusive lock and file flushes take a read lock. Individual files are allowed to flush in parallel with each other per SERVER-7378, but not in parallel with WRITETODATAFILES. We will also ensure the lock is held only for the duration of the FlushViewOfFile call, and not for the additional FlushFileBuffers call.

        Attachments

          Activity

            People

            Assignee:
            backlog-server-execution Backlog - Execution Team
            Reporter:
            anil.kumar Anil Kumar
            Participants:
            Votes:
            1
            Watchers:
            10

              Dates

              Created:
              Updated:
              Resolved: