[SERVER-55469] Uploading ~500000 small tables into collections interrupts with error and even breaks MongoDB Created: 24/Mar/21 Updated: 27/Oct/23 Resolved: 04/Apr/21 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Platon workaccount | Assignee: | Dmitry Agranat |
| Resolution: | Works as Designed | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | elementary OS 5.1.7 |
| Attachments: | |
| Operating System: | ALL |
| Steps To Reproduce: | |
| Participants: | |
| Description |
|
Source data.
My actions. Error.
Global MongoDB breakdown.
|
| Comments |
| Comment by Platon workaccount [ 05/Apr/21 ] |
|
Thanks for the quick guide. With it I was able to get the right output.
I propose noting in MongoDB Limits and Thresholds that this case does not fall under MongoDB limits. I also think it would be useful to add the instructions from comment-3700289 to UNIX ulimit Settings. |
| Comment by Dmitry Agranat [ 05/Apr/21 ] |
|
Sure platon.work@gmail.com, first, you can grep for the mongod process:
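For example (pgrep mongod would work equally well):

  ps -ef | grep mongod    # the second column of the matching line is the process id (pid)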
Now that you know the mongod process id, you can get the list of all limits in the /proc file-system which stores the per-process limits:
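  cat /proc/<pid>/limits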
where 'pid' is the mongod’s process identifier you have retrieved with the grep command. For example, if mongod process id is 4741, the command would look like this:
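  cat /proc/4741/limits    # the "Max open files" row shows the soft and hard limits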
As the SERVER project is for bugs and feature suggestions for the MongoDB server, for general questions about MongoDB we encourage you to start by asking our community for help on the MongoDB Developer Community Forums. Regards, |
| Comment by Platon workaccount [ 04/Apr/21 ] |
|
Hello, @dmitry.agranat. Regarding "The output you've provided does not look to be related to the mongod process" and "Perhaps this output is for the root user?": I couldn't find a tutorial on how to output ulimit for a specific process. Can you give me a link to the docs or quote the necessary commands? |
| Comment by Dmitry Agranat [ 04/Apr/21 ] |
|
The output you've provided does not appear to be related to the mongod process, as we've already identified (via the earlier provided diagnostic.data) that your current open-files limit is set to 64k. Perhaps this output is for the root user? Just to reiterate the issue you've experienced: for the mongod process to be able to handle 500k tables (collections and indexes), you'll first need to adjust your default Unix settings, for example as sketched below. As this is not a bug, I will go ahead and close this ticket. Regards, |
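A minimal sketch of raising the open-files limit, assuming mongod runs as a systemd service named mongod.service (the unit name, the override file path, and the 500000 figure are all illustrative):

  # /etc/systemd/system/mongod.service.d/limits.conf
  [Service]
  LimitNOFILE=500000

Apply it with systemctl daemon-reload && systemctl restart mongod, then re-check /proc/<pid>/limits to confirm the new value.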
| Comment by Platon workaccount [ 30/Mar/21 ] |
| Comment by Dmitry Agranat [ 29/Mar/21 ] |
|
Hi platon.work@gmail.com, the issue you are reporting (too many open files) is related to your ulimit settings. If I am not mistaken, the limit is currently set to only 64k for your mongod process, while you are aiming for 500k. Please post the command and the output of ulimit -a for the mongod process. In addition, at roughly 30KB per data handle, 500k tables work out to about 15GB of memory for data handles alone (500,000 × 30KB ≈ 15GB), so your current 15GB server might not be sufficient. Dima |
| Comment by Platon workaccount [ 24/Mar/21 ] |
|
I uploaded the debug info. Archive name is |
| Comment by Dmitry Agranat [ 24/Mar/21 ] |
|
Would you please archive (tar or zip) the full mongod.log files covering the test and the $dbpath/diagnostic.data directory (the contents are described here) and upload them to this support uploader location? Files uploaded to this portal are visible only to MongoDB employees and are routinely deleted after some time. Regards, |
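For example, assuming a default dbpath of /var/lib/mongodb and logs under /var/log/mongodb (both paths are illustrative; substitute your own), something like this would produce a single archive to upload:

  tar czf server-55469-debug.tar.gz /var/log/mongodb/mongod.log* /var/lib/mongodb/diagnostic.data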