[SERVER-5589] Possible memory leak in Linux 64-bit server Created: 12/Apr/12 Updated: 08/Mar/13 Resolved: 16/Aug/12 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | 2.0.3 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Arun Bhalla | Assignee: | Daniel Pasette (Inactive) |
| Resolution: | Incomplete | Votes: | 1 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Linux 2.6.18-128.1.1.el5 #1 SMP Mon Jan 26 13:58:24 EST 2009 x86_64 GNU/Linux |
| Operating System: | Linux |
| Participants: |
| Description |
|
We have been running a mongod process since March 28. I've noticed entries like the following since April 8. The log has been rotated, but it's possible the warnings started appearing on April 7.
We have Mongo monitoring set up at https://mms.10gen.com/host/detail/dcc39f13a93138e874baf774e498d6f. The increase in memory usage came after our most recent application deploy (on April 6) and a job on April 7 (around 12:00 PDT) that created a processed copy of all the files in GridFS, but I can't account for the increase in memory consumption that begins around 21:00 PDT on April 7 and has continued at a lesser rate since. |
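A minimal sketch of how the non-mapped portion of that growth could be spot-checked from the mongo shell, assuming a 2.0.x mongod with journaling enabled (this is illustrative only and not taken from the reporter's setup):

```
// Hypothetical shell session against the affected mongod (2.0.x).
// serverStatus().mem reports sizes in MB; a steadily growing gap between
// virtual and mapped memory suggests heap/JavaScript-engine growth rather
// than normal memory-mapped file growth.
var mem = db.serverStatus().mem;
print("resident MB: " + mem.resident);
print("mapped MB:   " + mem.mapped);
print("virtual MB:  " + mem.virtual);
// With journaling on, virtual is roughly 2x mapped; anything well beyond
// that is non-mapped memory (heap, JS engine, connections).
print("non-mapped estimate MB: " + (mem.virtual - 2 * mem.mapped));
```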
| Comments |
| Comment by Daniel Pasette (Inactive) [ 16/Aug/12 ] |
|
Please re-open if you have further information on this issue. |
| Comment by Daniel Pasette (Inactive) [ 13/Jul/12 ] |
|
Hi Arun, I just took a peek at your MMS charts and it certainly is exhibiting leak symptoms: https://mms.10gen.com/host/detail/dcc39f13a93138e874baf774e498d6fd#chartHour |
| Comment by Daniel Crosta [ 29/Jun/12 ] |
|
Hi Arun, are you still experiencing these problems? If you could share your map-reduce jobs, perhaps we could isolate the issue. |
| Comment by Daniel Crosta [ 29/May/12 ] |
|
Would you mind sharing your map-reduce jobs here? (If they contain confidential information, we can create a ticket not visible to the public) We'll also need some sense of what the schema looks like – example documents would be helpful. There have been some memory leaks with map-reduce (or, in particular, with the javascript engine we use) in the past, so this may be another case of that, and I'd like to rule that out first. Another option might be to try using 2.0.5 instead of 2.0.3, to see if the non-mapped memory is comparably high using the latest version – note also that 2.0.6 is due out soon. It would be best to do this in your development or staging environment, rather than in production, if that's an option for you. |
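For reference, a minimal sketch of the kind of map-reduce job and example document being requested; the collection and field names here (events, userId, count) are placeholders, not the reporter's actual schema:

```
// Placeholder schema in the "events" collection: { _id: ..., userId: ..., count: ... }
var mapFn = function () {
    emit(this.userId, this.count);
};
var reduceFn = function (key, values) {
    return Array.sum(values);
};
// 2.0-style mapReduce writing to an output collection. Frequent, repeated
// runs of jobs like this exercise the server's embedded JavaScript engine,
// which is where past map-reduce-related leaks have been found.
db.events.mapReduce(mapFn, reduceFn, { out: { replace: "events_by_user" } });
```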
| Comment by Arun Bhalla [ 28/May/12 ] |
|
The URL in the description was truncated. It should be https://mms.10gen.com/host/detail/dcc39f13a93138e874baf774e498d6fd. |
| Comment by Arun Bhalla [ 28/May/12 ] |
|
Yes, we have 8 map-reduce jobs that run every 5 minutes and another 4 that run nightly. We don't use db.eval or $where. |
| Comment by Daniel Crosta [ 25/May/12 ] |
|
Do you do anything extensive with javascript? Any db.eval, $where, or frequent map-reduce jobs? |
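For context, these are the kinds of server-side JavaScript operations being asked about; the collection name and predicate below are made up for illustration:

```
// db.eval runs a JavaScript function on the server.
db.eval(function (n) { return n * 2; }, 21);

// $where evaluates a JavaScript predicate against each candidate document.
db.files.find({ $where: "this.length > 1024 * 1024" });

// Both, like map-reduce, execute inside the server's JavaScript engine.
```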
| Comment by Patrick Kaeding [ 25/May/12 ] |
|
Hi Dan, I'm not sure how to determine the name of the MMS group, but this is for plugins.atlassian.com. |
| Comment by Daniel Crosta [ 25/May/12 ] |
|
Hi Arun, that MMS link does not seem to work for me – what is your MMS group, and what is the hostname of the machine you'd like us to look at? |