[SERVER-29871] Resident Memory higher than cacheSizeGB Created: 27/Jun/17 Updated: 07/Jan/18 Resolved: 01/Dec/17 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | 3.2.11 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Emanuel Freitas | Assignee: | Mark Agarunov |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: | |
| Issue Links: | |
| Operating System: | ALL |
| Participants: | |
| Case: | (copied to CRM) |
| Description |
|
I have a MongoDB server running v3.2.11 with memory consumption much higher than what I configured in cacheSizeGB. Currently it is using 3.9g, but the cacheSizeGB parameter is set to 1 (1g). I expected it to be a little over 1g due to active connections and other overhead, but right now it is almost 4x that and it seems to be increasing. The huge pages are disabled:
I'm running CentOS.
I attached the information that I was able to collect:
Is this normal behaviour? Is there another way I can limit the memory used by the mongodb process? |
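For context, the limit described above applies to the WiredTiger cache only (storage.wiredTiger.engineConfig.cacheSizeGB in the config file), not to the total resident memory of the mongod process. A minimal sketch of the equivalent command-line form, plus the usual check for transparent huge pages; the dbpath and other details are illustrative assumptions, not the reporter's actual setup:

    # Illustrative sketch only -- not the reporter's actual startup line.
    # --wiredTigerCacheSizeGB is the command-line equivalent of
    # storage.wiredTiger.engineConfig.cacheSizeGB in mongod.conf.
    mongod --dbpath /var/lib/mongo --wiredTigerCacheSizeGB 1

    # Transparent huge pages status (the description states they are disabled):
    cat /sys/kernel/mm/transparent_hugepage/enabled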
| Comments |
| Comment by Mark Agarunov [ 01/Dec/17 ] | ||
|
Hello ejsfreitas, We haven’t heard back from you for some time, so I’m going to mark this ticket as resolved. If this is still an issue for you, please provide additional information and we will reopen the ticket. Thanks, | ||
| Comment by Mark Agarunov [ 10/Nov/17 ] | ||
|
Hello ejsfreitas, We still need additional information to diagnose the problem. If this is still an issue for you, would you please provide the extended diagnostic data? Thanks, | ||
| Comment by Mark Agarunov [ 08/Sep/17 ] | ||
|
Hello ejsfreitas, Unfortunately the diagnostic data does not point to any cause for this behavior. If this is still an issue, please collect diagnostic data with these parameters over a longer period: the growth appears to happen slowly, and the capture window so far has not been long enough to show any significant indicators in the diagnostic data. I suspect that a longer collection period will allow us to correlate the cause of the issue in the diagnostic data. Thanks, | ||
| Comment by Mark Agarunov [ 30/Aug/17 ] | ||
|
Hello ejsfreitas, Thank you for providing the additional information. My apologies for the delay in response. We are still investigating this behavior but have unfortunately not yet determined the cause of it. Thanks, | ||
| Comment by Emanuel Freitas [ 29/Aug/17 ] | ||
|
Hi Mark, Did you have a chance to check this issue? Thanks! | ||
| Comment by Emanuel Freitas [ 03/Aug/17 ] | ||
|
Hello Mark, sorry for the delay. I uploaded a new diagnostic.data archive, collected with the parameters you asked for, using the upload portal. Thanks, | ||
| Comment by Mark Agarunov [ 07/Jul/17 ] | ||
|
Hello ejsfreitas, Thank you for providing this data. Unfortunately, due to the size limit on the diagnostic data, the capture was truncated to a short timeframe. If possible, please increase the diagnostic data size limit and reduce the sampling rate to one sample every 10 seconds with the following parameters:
I've generated an upload portal so that you can send us this data. Thanks, | ||
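The parameter list referenced in the comment above was not preserved in this export. As an illustration only, FTDC collection in mongod can be adjusted at runtime with setParameter values along these lines; the directory size shown is an assumption, not necessarily the value that was requested:

    # Illustrative sketch -- not the original parameter list from this comment.
    # One FTDC sample every 10 seconds instead of the 1-second default:
    mongo admin --eval 'db.adminCommand({setParameter: 1, diagnosticDataCollectionPeriodMillis: 10000})'
    # Allow a larger diagnostic.data directory so older samples are not rolled off
    # (the 1000 MB figure is an assumption):
    mongo admin --eval 'db.adminCommand({setParameter: 1, diagnosticDataCollectionDirectorySizeMB: 1000})'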
| Comment by Emanuel Freitas [ 03/Jul/17 ] | ||
|
Hello Mark, I'm sorry for the late response. I was trying to replicate the problem in our lab environment because I'm not sure about the implications of enabling that flag in production. I attached (heapProfilingEnabled.tar.gz) the information that you asked for. As you can see, it's already using 1072.0 MiB (I configured 1g). I know that the difference is very small, but I hope it can help you. Meanwhile I will leave this instance running to see if it keeps growing. | ||
| Comment by Mark Agarunov [ 27/Jun/17 ] | ||
|
Hello ejsfreitas, Thank you for providing this data. It appears that this may be due to a memory leak in mongod. To investigate what may be causing this leak, I'd like to request heap profiler data from mongod. To obtain this, please do the following:
This should provide some insight into which component is causing this leak. Thanks, | ||
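The step-by-step instructions referenced in the comment above did not survive this export. Judging by the heapProfilingEnabled.tar.gz attachment mentioned elsewhere in this ticket, heap profiling was enabled through the heapProfilingEnabled startup parameter; a minimal sketch, assuming the default config file path:

    # Illustrative sketch -- heapProfilingEnabled is a startup parameter, so the
    # mongod process has to be restarted with it; the config path is an assumption.
    mongod --config /etc/mongod.conf --setParameter heapProfilingEnabled=true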
| Comment by Emanuel Freitas [ 27/Jun/17 ] | ||
|
Hello Bruce, I attached the diagnostic.data directory. Thanks for your help. Kind regards, | ||
| Comment by Bruce Lucas (Inactive) [ 27/Jun/17 ] | ||
|
Hi Emanuel, In order for us to continue diagnosing this, can you please archive and attach to this ticket the contents of the $dbpath/diagnostic.data directory? Thanks, |
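A minimal sketch of how the requested archive can be produced, assuming a dbPath of /var/lib/mongo (substitute the instance's actual dbPath):

    # Replace /var/lib/mongo with this mongod instance's actual dbPath.
    tar czf diagnostic.data.tar.gz -C /var/lib/mongo diagnostic.data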