Details
- Type: Question
- Status: Closed
- Priority: Major - P3
- Resolution: Duplicate
- Affects Version/s: 3.0.15
Description
Hi,
I am currently running MongoDB 3.0.15 with the WiredTiger storage engine and am hitting a "too many open files" error while performing mongodump. We have approximately 3000 databases with about 10 collections each. I have already raised the "max open files" limit for the mongod process from 64,000 to 100,000 (1 lakh), yet lsof shows around 62,000 file descriptors held by the mongod process even when no operations are running. I have read that WiredTiger tries to open 2 files for each collection and 1 for each index. If so, is there a way to limit the number of file descriptors, and how can I scale going forward, given that the kernel imposes a limit on the number of file descriptors? Please tell me if I am missing something. Thanks in advance.
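For reference, a rough back-of-the-envelope estimate (a minimal sketch, not from the ticket itself) shows why roughly 62,000 descriptors is about what this deployment would be expected to hold open. It assumes WiredTiger's one-data-file-per-collection and one-file-per-index layout, and that each collection carries only the default _id index; those assumptions are noted in the comments.

    # Back-of-the-envelope estimate of WiredTiger file handles for this deployment.
    # Assumptions (not from the ticket): one data file per collection, one file per
    # index, and only the default _id index on each collection.

    num_databases = 3000          # from the ticket
    collections_per_db = 10       # from the ticket
    indexes_per_collection = 1    # assumption: only the default _id index

    collections = num_databases * collections_per_db      # 30,000 collections
    data_files = collections                              # one .wt file per collection
    index_files = collections * indexes_per_collection    # one .wt file per index

    total_files = data_files + index_files
    print(f"Estimated WiredTiger data/index files: {total_files}")  # ~60,000

    # This is close to the ~62,000 descriptors reported by lsof; the remainder is
    # sockets, journal files, and other internal handles, so a descriptor count of
    # this magnitude is expected for this many collections rather than a leak.

Under those assumptions the descriptor count scales with the number of collections and indexes, which is what the linked SERVER-17675 (a request for a file-per-database option in WiredTiger) would have reduced.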
Attachments
Issue Links
- is duplicated by SERVER-17675 Support file per database for WiredTiger (Closed)