Core Server / SERVER-17514

Don't close handles until a threshold of in-use handles has been reached


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 3.1.2
    • Component/s: Storage, WiredTiger
    • Labels:
      None
    • Backwards Compatibility:
      Fully Compatible
    • Operating System:
      ALL

      Description

      WiredTiger has a sweep server that automatically closes file handles once they have been idle for over 30 seconds. A handle is considered idle when no cursors are open on the file.

      This can lead to undesirable behavior: closing a handle discards that file's pages from cache, even when the cache isn't full.

      It makes sense to consider closing handles only once the number of open file handles has passed a certain threshold.

      See: https://github.com/wiredtiger/wiredtiger/issues/1856

      Original Description
      Reported on Google Group: https://groups.google.com/d/msg/mongodb-user/LZFEr5-NDR0/5gcXcdN9ICEJ

      After loading data and some short time of non-activity, all cache seems to be evicted and resident memory goes to near 0. Haven't been able to reproduce on Linux or seen it on other Windows platforms (don't have Windows 7 to test with).

      From the Google Groups thread:

      The summary workload is read only from the main collection with writes to another collection on the same DB. There are five aggregation queries that run serially, and then 17 aggregation queries that run in parallel and basically scan the entire collection. Then, I have about 3200 finds for min/max values that utilize covered queries. (These results are all written to the separate collection.) This is all done using the C# driver. On the main collection I have a series of indices including a 2dsphere index. The data is mostly homogeneous but different enough that I am utilizing sparse queries to cut down on index size.

      I'm on Windows 7 for my MongoDB instance, and using the Resource Monitor, I can watch the private memory allocated to mongod. When the job starts, I see the allocation grow to 20GB. About a minute later, the allocation shrinks to 300MB. I also turned on zlib compression since CPU time/cores are not a bottleneck.
