[SERVER-30120] InMemory engine: 'inMemorySizeGB' doesn't limit the process memory usage Created: 13/Jul/17 Updated: 27/Oct/23 Resolved: 27/Oct/23 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Internal Code, Performance, Stability |
| Affects Version/s: | 3.2.15 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Critical - P2 |
| Reporter: | Assaf Oren | Assignee: | Mark Agarunov |
| Resolution: | Works as Designed | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: |
|
| Issue Links: |
|
| Operating System: | ALL |
| Steps To Reproduce: | Start an in-memory mongod process (a sketch of a possible startup command is shown after the Description below), then create enough DBs to make the process memory exceed 2GB. |
| Participants: |
| Description |
|
Using MongoDB Enterprise v3.2.15 on Ubuntu 16.04. It seems that the memory limit doesn't really limit the process memory; the mongod process can take more than 3GB of memory. Note that in our implementation we work with a few hundred DBs. Also, after a while the process starts to take ~100% CPU even without any client load. Thanks, |
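As a hedged illustration only (the exact startup command is not given in the ticket), an inMemory mongod with a 2GB engine limit might be started like this; the dbpath and port below are placeholders:

```
# Hypothetical startup command for reproducing: run mongod with the inMemory
# storage engine and a 2GB engine cache limit. The dbpath and port are
# placeholder values, not taken from this ticket.
mongod --storageEngine inMemory \
       --inMemorySizeGB 2 \
       --dbpath /data/inmem \
       --port 27017
```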
| Comments |
| Comment by assaf-oren [ 09/Sep/17 ] |
|
Thanks for your response. For the regular WiredTiger and in-memory engines, isn't there any way to limit the amount of RAM taken by the mongod process? (Also for regular use, without deleting DBs/collections.) |
| Comment by Alexander Gorrod [ 07/Sep/17 ] |
|
assaf-oren Sorry for the extended delay in getting a response to you on this ticket. First: we should have done a better job explaining that what you are reporting is generally expected behavior. The --inMemorySizeGB option configures the buffer cache size for the storage engine - sorry that it is poorly named. There are many other components of MongoDB that use memory as well.
The memory used outside the buffer cache varies widely depending on the workload, so you need to measure the usage for your application to understand how much memory is required. The reason we have been delaying our answer is that there also appears to be a problem reclaiming space when a collection is dropped, so an application using the inMemory storage engine that is creating and dropping collections will observe increased memory usage over time. It is taking us time to isolate the root cause; I've opened WT-3566 to investigate that particular issue further. The memory growth issue is specific to the inMemory storage engine. If the behavior you are seeing is similar when running the WiredTiger storage engine and the inMemory storage engine, it is expected behavior, and you need to configure --inMemorySizeGB to be small enough to leave enough RAM for the other memory used by MongoDB and for other processes on the system. |
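A minimal sketch of the sizing advice above, assuming a hypothetical host with 4GB of RAM (the 4GB host size and the 1.5GB cache figure are illustrative, not from this ticket): cap the engine cache well below total RAM, then compare it against the whole process's resident memory:

```
# Illustrative only: leave headroom for memory used outside the engine cache.
# On an assumed 4GB host, cap the inMemory engine noticeably below total RAM.
mongod --storageEngine inMemory --inMemorySizeGB 1.5 \
       --dbpath /data/inmem --fork --logpath /data/inmem/mongod.log

# Compare the process-wide resident/virtual memory (reported in MB) with the
# engine-level cache statistics; field names can vary slightly by version.
mongo --quiet --eval 'printjson(db.serverStatus().mem)'
mongo --quiet --eval 'printjson(db.serverStatus().inMemory.cache)'
```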
| Comment by assaf-oren [ 05/Sep/17 ] |
|
Hi Guys, any progress with this issue? |
| Comment by Mark Agarunov [ 18/Aug/17 ] |
|
Hello assaf-oren, My apologies for the delay in response. We are still investigating this behavior and will update you once more information becomes available. Thanks, |
| Comment by assaf-oren [ 03/Aug/17 ] |
|
Hi, Is there any update on this? Thanks, |
| Comment by assaf-oren [ 26/Jul/17 ] |
|
Hi Mark, I think there were about 4700 DBs created. BTW, we tried the latest Enterprise release, v3.4.6, and we can see this issue there as well. Thanks, |
| Comment by Mark Agarunov [ 26/Jul/17 ] |
|
Hello assaf-oren, Thank you for providing the data and logs. After looking over this, it seems that the memory usage is mostly due to index builds. According to the logs, there were 199321 index builds over this period. Spacing these out or reducing the number of index rebuilds may alleviate the memory usage. Thanks, |
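Purely as an illustration of how one might confirm the volume of index builds from a mongod log (the log path and the "build index on" message text are assumptions about a typical 3.2/3.4-era log, not quotes from this ticket's attachments):

```
# Hypothetical check: count index-build entries in a mongod log file.
# Both the path and the matched message text are assumptions; adjust them
# to the actual log location and log format in use.
grep -c "build index on" /var/log/mongodb/mongod.log
```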
| Comment by assaf-oren [ 25/Jul/17 ] |
|
Hi, Is there any update on this? Thanks, |
| Comment by assaf-oren [ 19/Jul/17 ] |
|
Hi Mark, I uploaded two files (see those with '2017_07_19' in the filename); they include the diagnostics and the full log. Thanks, |
| Comment by Mark Agarunov [ 17/Jul/17 ] |
|
Hello assaf-oren, Thank you for providing this data. After looking over this, I agree that the memory is growing beyond what it should be; this may be indicative of a memory leak. So that we can see exactly where the memory may be leaking, please do the following:
Thanks, |
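As a hedged sketch of the kind of memory diagnostics that help in cases like this (not necessarily the exact steps requested), one way to see where mongod's memory sits is the process-level and allocator-level sections of serverStatus, assuming a tcmalloc build (the default for these versions):

```
# Illustrative only: dump process-level and allocator-level memory statistics.
# serverStatus().mem reports resident/virtual size for the whole process;
# serverStatus().tcmalloc (present in tcmalloc builds) breaks down heap usage
# inside the allocator.
mongo --quiet --eval 'printjson(db.serverStatus().mem)'
mongo --quiet --eval 'printjson(db.serverStatus().tcmalloc)'
```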
| Comment by assaf-oren [ 17/Jul/17 ] |
|
Thanks Mark, I just uploaded the file 'JIRA-30120--....tar.gz' with the diagnostic data for Jul 12 and 13, where the mongod memory got above 4GB while limited to 2GB. |
| Comment by Mark Agarunov [ 17/Jul/17 ] |
|
Hello assaf-oren, I've generated a secure upload portal so that you can send us the diagnostic data privately. The number of collections/indexes may cause overhead in some circumstances; however, the diagnostic data may allow us to determine whether this is the case. Thanks, |
| Comment by assaf-oren [ 16/Jul/17 ] |
|
Thank you for your reply. I suspect the cause of this is the number of DBs we are using. Thanks, |
| Comment by Mark Agarunov [ 13/Jul/17 ] |
|
Hello assaf-oren, Thank you for the report. The --inMemorySizeGB option sets the size for the storage engine, not for the entire process, so it is possible for the total memory usage to be greater than the value set for inMemorySizeGB. To better investigate this, please archive and upload the $dbpath/diagnostic.data directory so that we can get some insight into what may be causing the higher memory and high CPU usage. Thanks, |
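A minimal sketch of archiving the diagnostic data as requested, assuming $dbpath points at the mongod data directory (the archive file name is arbitrary):

```
# Archive the FTDC diagnostic data for upload. $dbpath must point at the
# mongod data directory; the output file name is just an example.
tar -czf diagnostic-data.tar.gz -C "$dbpath" diagnostic.data
```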