WiredTiger / WT-4336

With MongoDB, sweep attempts after table drops do not always close active dhandles

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 3.4.1, 3.6.1, 4.0.0
    • Component/s: None
    • Labels: None
    • Sprint: Storage Engines 2018-10-08, Storage Engines 2018-10-22, Storage Engines 2018-11-05, Storage Engines 2018-11-19, Storage Engines 2018-12-03, Storage Engines 2018-12-17, Storage Engines 2018-12-31

      I ran this test on MongoDB 3.4.15 with the following config string:
      file_manager=(close_handle_minimum=1,close_idle_time=5,close_scan_interval=10)

      Test:

      1. I inserted one document into each of two collections, creating the collections in the process
      2. I dropped both collections
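
      The two steps above, together with the file_manager override, can be reproduced roughly as follows (paths and database/collection names are illustrative; --wiredTigerEngineConfigString is the mongod option for passing such engine overrides):

      ```shell
      # Start mongod (3.4.x in the original test) with an aggressive sweep configuration.
      mongod --dbpath /data/wt-4336 \
        --wiredTigerEngineConfigString \
        "file_manager=(close_handle_minimum=1,close_idle_time=5,close_scan_interval=10)"

      # In another terminal: create two collections by inserting, then drop them.
      mongo test --eval '
        db.c1.insert({x: 1});
        db.c2.insert({x: 1});
        db.c1.drop();
        db.c2.drop();
      '

      # Watch the data handle statistics while the sweep server runs.
      mongo test --eval 'printjson(db.serverStatus().wiredTiger["data-handle"])'
      ```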

      Behaviour that needs investigation:
      Even though the sweep server is configured to scan every 10 seconds (close_scan_interval=10) and to close idle handles whenever more than one is open (close_handle_minimum=1), several minutes later the dhandle count still had not dropped back to its value from before the collections were created.
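
      As a rough model of the expected behaviour: the sweep server wakes every close_scan_interval seconds and closes handles that have been idle for at least close_idle_time, so long as more than close_handle_minimum handles remain open. A minimal sketch of that decision, based on my reading of the file_manager settings (the function and policy details are illustrative, not WiredTiger's actual code):

      ```python
      def sweep(handles, now, close_idle_time=5, close_handle_minimum=1):
          """One sweep pass over open data handles.

          handles: dict mapping handle name -> timestamp of last use (seconds).
          Returns the set of handle names this pass would close: those idle
          for at least close_idle_time, closing oldest-idle first, while
          keeping at least close_handle_minimum handles open.
          """
          closed = set()
          # Visit handles in order of last use, oldest first.
          for name, last_use in sorted(handles.items(), key=lambda kv: kv[1]):
              if len(handles) - len(closed) <= close_handle_minimum:
                  break  # keep the configured minimum number of handles open
              if now - last_use >= close_idle_time:
                  closed.add(name)
          return closed

      # Two dropped collections idle since t=0 and t=1, plus a handle still
      # in use at t=9: by t=10 the two idle handles should be closed.
      print(sweep({"coll1": 0, "coll2": 1, "catalog": 9}, now=10))
      # -> {'coll1', 'coll2'}
      ```

      Under this model the dropped collections' handles should be gone within one or two scan intervals, which is what makes the flat "active" line in the attached charts surprising.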

      In the t2 data above:

      • 2 creates at A, 2 drops at B
      • Several minutes after the drops, and after several data handle sweep attempts, the active handle count has NOT gone down.

      Attachments:

        1. testscript_403.py (3 kB)
        2. run_me.sh (4 kB)
        3. r3.6.9.png (63 kB)
        4. handles_longrun_403.png (22 kB)
        5. 403_handles_stuck.png (173 kB)
        6. 4.0.3.png (74 kB)
        7. 3.4.15_drop_behavior.png (131 kB)

            Assignee: Sulabh Mahajan (sulabh.mahajan@mongodb.com)
            Reporter: Sulabh Mahajan (sulabh.mahajan@mongodb.com)
            Votes: 3
            Watchers: 19
