[SERVER-10602] Server does not show right resident memory Created: 22/Aug/13 Updated: 10/Dec/14 Resolved: 18/Mar/14 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Diagnostics, Performance |
| Affects Version/s: | 2.0.7, 2.2.5 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor - P4 |
| Reporter: | Emre Hasegeli | Assignee: | Unassigned |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Tested on: / On: / With: |
| Issue Links: | |
| Operating System: | Linux |
| Participants: | |
| Description |
|
Our server has not shown the right value in the "resident" field of the "mem" object of the server status since the beginning. We started using MongoDB with 2.0. We use only one database and one collection; it is about 120 GB now. I think the value is wrong for several reasons.

First of all, MongoDB cannot respond to some queries at our peak times. It gets 30 - 50 queries per second and shows 200 - 500 page faults per second during peaks. We solved that problem twice by increasing the memory, from 16 GB to 32 GB and from 32 GB to 64 GB. Now it is happening again. In all of these cases the server status was showing less than half of the available memory as resident.

My second experiment is monitoring caches from the operating system. If I drop the caches and restart the MongoDB server, the cache fills up again in 30 - 60 seconds, but the server status does not show it. For example, 30 minutes after dropping the caches, the Linux free command shows 62 GB as cached but the MongoDB server status shows only 7 GB as resident.

My third experiment is with the tool named Mongomem from ContextLogic (https://github.com/ContextLogic/mongotools), which I found lately. I think it shows the right value, which is nearly all of the memory. We have tested with different servers and different configurations, but the situation did not change. I have also tried to "compact" the collections, which we had not done before; the server status started to show even less resident memory after that.

It is impossible that we are reading from only 1/10 of the whole database, so for a long time we did not want to believe that MongoDB was using so much memory. |
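The mismatch described above can be seen by putting serverStatus next to the kernel's own page cache figure. A minimal sketch in Python with pymongo (the host, port, and local access are assumptions; adjust as needed):

```python
from pymongo import MongoClient

# Connect to the local mongod (host and port are assumptions).
client = MongoClient("localhost", 27017)

# serverStatus reports memory in megabytes:
#   mem.resident - RAM currently charged to the mongod process
#   mem.mapped   - total size of the memory-mapped data files
mem = client.admin.command("serverStatus")["mem"]
print("mongod resident: %d MB, mapped: %d MB" % (mem["resident"], mem["mapped"]))

# The kernel's page cache also holds file data that is cached but not
# currently charged to the mongod process, so this number can be far larger.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("Cached:"):
            print("OS page cache: %s kB" % line.split()[1])
```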
| Comments |
| Comment by Stennie Steneker (Inactive) [ 18/Mar/14 ] |
|
Hi Emre,

Apologies for the delay in follow-up on this issue. There are a number of situations where data may be in RAM (e.g. filesystem cache) but not currently associated with the MongoDB process. There is some further discussion/investigation on

It looks like the Mongomem tool you mentioned uses the fadvise() system call to determine what pages of a file are in memory (irrespective of whether those pages are currently associated with MongoDB), so you may indeed have more data in RAM than MongoDB's serverStatus is currently reporting as resident.

It sounds like you have investigated the obvious tuning options such as readahead and compacting your database. I'm going to resolve this issue as a duplicate of

Thanks, |
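For reference, a residency check in the spirit of Mongomem can be done without asking mongod at all: map each data file and ask the kernel which of its pages are in the page cache. A rough, Linux-only sketch (the dbpath is an assumption, and it uses mincore() rather than whatever Mongomem actually calls):

```python
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# Constants from <sys/mman.h> on Linux.
PROT_READ = 0x1
MAP_SHARED = 0x01
MAP_FAILED = ctypes.c_void_p(-1).value
PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.munmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
libc.mincore.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                         ctypes.POINTER(ctypes.c_ubyte)]

def resident_pages(path):
    """Return (resident, total) page counts for one file via mincore()."""
    size = os.path.getsize(path)
    if size == 0:
        return 0, 0
    pages = (size + PAGE_SIZE - 1) // PAGE_SIZE
    fd = os.open(path, os.O_RDONLY)
    try:
        addr = libc.mmap(None, size, PROT_READ, MAP_SHARED, fd, 0)
        if addr is None or addr == MAP_FAILED:
            raise OSError(ctypes.get_errno(), "mmap failed")
        vec = (ctypes.c_ubyte * pages)()
        if libc.mincore(addr, size, vec) != 0:
            raise OSError(ctypes.get_errno(), "mincore failed")
        libc.munmap(addr, size)
        # Bit 0 of each byte is set when that page is in the page cache.
        return sum(b & 1 for b in vec), pages
    finally:
        os.close(fd)

# Sum residency over every file in the dbpath (path is an assumption).
dbpath = "/var/lib/mongodb"
resident = total = 0
for name in os.listdir(dbpath):
    full = os.path.join(dbpath, name)
    if os.path.isfile(full):
        r, t = resident_pages(full)
        resident += r
        total += t
print("data files in page cache: %.1f GB of %.1f GB"
      % (resident * PAGE_SIZE / 2.0**30, total * PAGE_SIZE / 2.0**30))
```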
| Comment by Emre Hasegeli [ 23/Oct/13 ] |
|
I monitored them under heavier user load. The same problem persists. I have tried reducing readahead to 16, 8 and finally 0, rebooting the server before each attempt. My conclusion is that it is not because of readahead; it is still because of a lack of memory. |
| Comment by Emre Hasegeli [ 26/Sep/13 ] |
|
I reduced readahead from 256 to 32 on one of the slaves. It seems to work better: it responds to more queries and gets fewer page faults. |
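To put a number on "fewer page faults", the cumulative fault counter that serverStatus exposes on Linux can be sampled and turned into a rate. A small sketch (host, port, and the sampling interval are assumptions):

```python
import time
from pymongo import MongoClient

client = MongoClient("localhost", 27017)  # assumed host and port

def page_faults():
    # extra_info.page_faults is a cumulative counter on Linux.
    return client.admin.command("serverStatus")["extra_info"]["page_faults"]

INTERVAL = 10  # seconds between samples
previous = page_faults()
while True:
    time.sleep(INTERVAL)
    current = page_faults()
    print("page faults/sec: %.1f" % ((current - previous) / float(INTERVAL)))
    previous = current
```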
| Comment by Gregor Macadam [ 10/Sep/13 ] |
|
Hi,

This could be explained by having too high a readahead; please see this blog post for details: http://www.kchodorow.com/blog/2012/05/10/thursday-5-diagnosing-high-readahead/

In that case the disk is reading a load of extra information into RAM which is not included in the resident memory calculation in MongoDB, since it is usually useless.

Can you check readahead? thx |
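The readahead setting in question is also exposed under sysfs, so it can be checked without blockdev. A small Linux sketch (device names vary):

```python
import glob

# Readahead per block device, in KB, as exposed by sysfs. blockdev --getra
# reports the same setting in 512-byte sectors, so the common default of
# 256 sectors corresponds to 128 KB here.
for path in sorted(glob.glob("/sys/block/*/queue/read_ahead_kb")):
    device = path.split("/")[3]
    with open(path) as f:
        print("%-10s read_ahead_kb=%s" % (device, f.read().strip()))
```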