WiredTiger (24) opendir: Too many open files


    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 3.2.10
    • Component/s: None
    • Environment: ALL

      Hello.
      This system has been running reliably for many years, and this issue suddenly occurred.

      The service stopped after a few query errors were logged, followed by a storage error.

      Please take a look. Why do these symptoms occur, and what are the possible solutions?

      The relevant log entries are below.
      Thank you.

      Query contents: 
      "2022-10-21T14:59:47.386+0900 E QUERY    [conn1469211] Plan executor error during find command: FAILURE, stats: { stage: "SORT", nReturned: 0, executionTimeMillisEstimate: 420, works: 3704, advanced: 0, needTime: 3703, needYield: 0, saveState: 37, restoreState: 37, isEOF: 0, invalidates: 0, sortPattern: { create_date: -1.0 }, memUsage: 33554957, memLimit: 33554432, inputStage: { stage: "SORT_KEY_GENERATOR", nReturned: 0, executionTimeMillisEstimate: 420, works: 3703, advanced: 0, needTime: 2, needYield: 0, saveState: 37, restoreState: 37, isEOF: 0, invalidates: 0, inputStage: { stage: "COLLSCAN", filter: { $and: [] }, nReturned: 3701, executionTimeMillisEstimate: 400, works: 3702, advanced: 3701, needTime: 1, needYield: 0, saveState: 37, restoreState: 37, isEOF: 0, invalidates: 0, direction: "forward", docsExamined: 3701 } } }"
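      The failing stage is an in-memory SORT fed by a COLLSCAN: the sorter buffered more bytes than the server permits for a blocking sort. As a rough check of the figures in the plan (the 32 MB cap shown matches the default blocking-sort limit, `internalQueryExecMaxBlockingSortBytes`, in this server generation; this is a sketch, not output from the reporter's system):

      ```python
      # Figures copied from the plan executor error above.
      mem_usage = 33554957  # bytes buffered by the SORT stage (memUsage)
      mem_limit = 33554432  # the server's blocking-sort cap (memLimit)

      # The cap is exactly 32 MiB, the documented default.
      assert mem_limit == 32 * 1024 * 1024

      # The sort overflowed the limit by only a few hundred bytes,
      # which is why a long-working query suddenly began to fail as
      # the collection grew.
      print(f"sort exceeded the limit by {mem_usage - mem_limit} bytes")
      ```

      The usual remedies are an index matching the sort key (e.g. `db.<collection>.createIndex({ create_date: -1 })`, using the field visible in the log; the collection name is not shown in the report) so documents can be returned in index order without a blocking sort, or narrowing the query so fewer documents reach the sorter.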

      Storage Errors: 
      "2022-10-21T23:19:15.942+0900 E STORAGE  [thread2] WiredTiger (24) [1666361955:942512][25242:0x7f45fbb27700], checkpoint-server: checkpoint server error: Too many open files
      2022-10-21T23:19:15.942+0900 E STORAGE  [thread2] WiredTiger (-31804) [1666361955:942550][25242:0x7f45fbb27700], checkpoint-server: the process must exit and restart: WT_PANIC: WiredTiger library panic"
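      The number WiredTiger prints in parentheses is the raw OS errno. A minimal check, assuming the reporter's host is Linux (where errno 24 is EMFILE), using only the Python standard library:

      ```python
      import errno
      import os

      # "WiredTiger (24) ... Too many open files": 24 is the raw errno.
      # On Linux, errno 24 is EMFILE -- the mongod process hit its
      # per-process limit on open file descriptors.
      print(errno.errorcode[24])  # EMFILE
      print(os.strerror(24))      # Too many open files
      ```

      The usual fix is to raise the open-files limit for the user running mongod (check it with `ulimit -n`, or `cat /proc/<pid>/limits` for the running process) via `/etc/security/limits.conf` or the systemd `LimitNOFILE` setting; the MongoDB production notes for this server generation recommend 64000. After a WT_PANIC the process must be restarted, as the second log line states.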

        1. mongo-1.log (8.58 MB), attached by _chingwen.wang@saltlux.com

            Assignee: Chris Kelly
            Reporter: brian wang
            Votes: 0
            Watchers: 4