[SERVER-2793] Understanding memory/disk metrics Created: 18/Mar/11  Updated: 30/Mar/12  Resolved: 01/Apr/11

Status: Closed
Project: Core Server
Component/s: Performance
Affects Version/s: 1.6.5
Fix Version/s: None

Type: Question Priority: Major - P3
Reporter: charso Assignee: Gaetan Voyer-Perrault
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Ubuntu 10.04


Attachments: PNG File memory-mongodb-shard1-primary-2011-03-18.png     PNG File memory-mongodb-shard2-primary-2011-03-18.png    
Participants:

 Description   

I'm trying to delve deeper into MongoDB memory/disk usage patterns to both optimize my configuration and to identify warning signs for impending performance problems. I have some high level questions first:

  • mongod resident memory: I generally don't see this go above half the available RAM. Any idea what causes resident memory size to grow vs pages just sticking in the OS file cache?
  • Is there any way to distinguish between memory warming up and an actual memory crunch? Any metrics/ratios to look out for in tools like iostat, sar, vmstat, etc.?
  • Are there any metrics you look at to indicate that RAM is nearly full? I'm graphing most of the metrics in "/proc/meminfo" over time; however, when it appears I've hit my RAM threshold, Active in /proc/meminfo is reported at only half of RAM, with Inactive accounting for the other half.
  • Shouldn't the amount reported as "Mapped" in /proc/meminfo be a comprehensive number that indicates how much MongoDB has mapped and therefore how much memory it's using?
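
For anyone graphing these fields, the comparisons above are easy to script. A minimal sketch that parses /proc/meminfo-style output (the sample numbers are made up for illustration, not taken from these servers):

```python
# Parse /proc/meminfo-style text (values in kB) and compare the fields
# discussed above: Mapped vs. total RAM, and Active + Inactive coverage.
def parse_meminfo(text):
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        info[key.strip()] = int(rest.split()[0])  # drop the "kB" suffix
    return info

# Hypothetical snapshot for illustration (kB):
sample = """MemTotal:       65536000 kB
Active:         32768000 kB
Inactive:       30000000 kB
Cached:         60000000 kB
Mapped:         31457280 kB"""

mi = parse_meminfo(sample)
print("mapped/total:", round(mi["Mapped"] / mi["MemTotal"], 2))
print("(active+inactive)/total:",
      round((mi["Active"] + mi["MemTotal"] - mi["MemTotal"] + mi["Inactive"]) / mi["MemTotal"], 2))
```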


 Comments   
Comment by Gaetan Voyer-Perrault [ 30/Mar/11 ]

Any additional follow-up required or are we ready to close this issue?

Comment by Gaetan Voyer-Perrault [ 23/Mar/11 ]

Here's the simplest definition I can find for the "Mapped" value of /proc/meminfo:

Mapped — The total amount of memory, in kilobytes, which has been used to map devices, files, or libraries using the mmap system call.

Looking at your charts, that's the gold line.

> the resident and mapped memory footprint is leveling off at half of available RAM?

Based on the attached images, you restarted the mongod process just before midnight. At that point the gold line plummets to 0, then shoots up past 30 GB and hovers around 30 GB.

However, if you follow the gold line prior to the restart, it was clearly above half of RAM. So it's quite possible to mmap more than half of memory.

> I'm constantly adding data, so you'd think if it still had half of available RAM left that resident/mapped would keep growing...

Well, if you're constantly adding data, then MongoDB only needs to "mmap" two big things:
1. The relevant part of the index
2. The newly added data

Notice how shutting down MongoDB did not free all of your memory (the yellow area)?
Take a look at the "cache" section of your RAM (the purple area).

By your own charts you either have something else using up the RAM or the OS is doing some caching for you.
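
One way to check the "something else using up the RAM" possibility is to rank processes by resident set size, e.g. from `ps -eo rss,comm --no-headers`. A small sketch under that assumption, with made-up sample output:

```python
# Rank processes by resident set size (RSS, in kB) from `ps` output
# to see whether something besides mongod is holding the RAM.
def top_rss(ps_output, n=3):
    rows = []
    for line in ps_output.strip().splitlines():
        rss, comm = line.split(None, 1)
        rows.append((int(rss), comm.strip()))
    return sorted(rows, reverse=True)[:n]

# Hypothetical `ps -eo rss,comm --no-headers` output (kB):
sample_ps = """31457280 mongod
204800 mysqld
51200 sshd"""

print(top_rss(sample_ps))
```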

At this point, I'm not seeing anything clearly "wrong" here.

Is there a documented performance issue here?

Comment by charso [ 22/Mar/11 ]
  • resident and mapped memory

I found this explanation about resident memory:

number of megabytes resident. It is typical over time, on a dedicated database server, for this number to approach the amount of physical ram on the box.
http://www.mongodb.org/display/DOCS/serverStatus

However, this doesn't explain why, in my configuration, where data and indexes exceed RAM, the resident and mapped memory footprint is leveling off at half of available RAM. I'm constantly adding data, so you'd think that if it still had half of available RAM left, resident/mapped would keep growing, but it isn't. I'm looking for thoughts on why it levels off at half of RAM.
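
For tracking this over time, the `mem` section of `db.serverStatus()` (from the serverStatus docs linked above) reports resident, virtual, and mapped sizes in MB. A quick sketch of the comparison being made here, with hypothetical numbers standing in for a 64 GB box:

```python
# Compare serverStatus().mem figures (reported in MB) against physical
# RAM to see where the resident/mapped footprint levels off.
ram_mb = 64 * 1024                                             # hypothetical 64 GB of RAM
mem = {"resident": 31000, "virtual": 110000, "mapped": 30500}  # sample values, MB

resident_frac = mem["resident"] / ram_mb   # fraction of RAM held resident by mongod
mapped_frac = mem["mapped"] / ram_mb       # fraction of RAM covered by mmap()ed files
print(round(resident_frac, 2), round(mapped_frac, 2))
```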

Comment by charso [ 22/Mar/11 ]

I became busy dealing with CS-415. I'm looking at this now and will follow up soon.

Comment by Gaetan Voyer-Perrault [ 22/Mar/11 ]

Is there any additional follow-up required?

Comment by Gaetan Voyer-Perrault [ 18/Mar/11 ]

Lots of questions here; however, it looks like we have docs covering much of this.

> Any idea what causes resident memory size to grow vs pages just sticking in the OS file cache?
> Is there any way to distinguish between memory warming up and an actual memory crunch?

http://www.mongodb.org/display/DOCS/Monitoring+and+Diagnostics

> Any metrics/ratios to look out for in tools like iostat, sar, vmstat, etc.?

http://www.mongodb.org/display/DOCS/iostat
http://www.mongodb.org/display/DOCS/mongostat
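
On the "warming up vs. actual crunch" question: one concrete signal in `vmstat` output is the si/so (swap-in/out) columns; sustained non-zero values suggest real memory pressure, while warm-up tends to show read traffic (bi) with no swap activity. A minimal sketch of that check, using hypothetical `vmstat 2` data rows:

```python
# Flag a memory crunch from `vmstat 2` samples: sustained swap activity
# (si/so columns) distinguishes real pressure from mere cache warm-up.
def swapping(vmstat_lines):
    total_si = total_so = 0
    for line in vmstat_lines:
        cols = line.split()
        si, so = int(cols[6]), int(cols[7])  # si/so are columns 7 and 8
        total_si += si
        total_so += so
    return total_si > 0 or total_so > 0

# Two hypothetical data rows from `vmstat 2` (header rows stripped):
warm_up = ["1 0 0 512000 8000 6000000 0 0 1200 40 300 500 5 2 90 3"]
crunch  = ["4 2 96000 8000 2000 5200000 820 640 9000 700 900 1500 10 8 40 42"]
print(swapping(warm_up))   # False: reads but no swap traffic
print(swapping(crunch))    # True: si/so non-zero under pressure
```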

Is there something specific missing from those docs?

Comment by Gaetan Voyer-Perrault [ 18/Mar/11 ]

So the first thing that jumps out at me is that these are not equal shards. It looks like shard1 has two additional collections and a few extra indexes.

Can you clarify the difference in collections?

Comment by charso [ 18/Mar/11 ]

Attached is an example of the memory usage on two different shard primaries; it illustrates some of my questions. Note that only the last few hours are relevant; prior to that, a repair was running.

Shard1 happens to be doing a lot of read IO, whereas Shard2 has very little IO. The only discrepancy I can see is the difference between inactive and mapped memory. Here are the associated db stats:

Shard1
"Fri Mar 18 2011 07:39:37 GMT+0000 (UTC)"
> prod.stats();
{
    "collections" : 7,
    "objects" : 248997611,
    "avgObjSize" : 195.65127450158548,
    "dataSize" : 48716699940,
    "storageSize" : 55574353408,
    "numExtents" : 120,
    "indexes" : 8,
    "indexSize" : 29139508896,
    "fileSize" : 105109258240,
    "ok" : 1
}

Shard2
"Fri Mar 18 2011 07:39:49 GMT+0000 (UTC)"
> prod.stats();
{
    "collections" : 5,
    "objects" : 227033080,
    "avgObjSize" : 195.37059565064263,
    "dataSize" : 44355588072,
    "storageSize" : 49278558976,
    "numExtents" : 111,
    "indexes" : 5,
    "indexSize" : 27489705632,
    "fileSize" : 92230647808,
    "ok" : 1
}
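
As a back-of-the-envelope check, a rough upper bound on each shard's working set is dataSize + indexSize; summing the db.stats() numbers above:

```python
# Sum dataSize + indexSize from the db.stats() output above (bytes)
# to bound how much each shard could want resident in RAM.
GB = 1024 ** 3
shards = {
    "shard1": {"dataSize": 48716699940, "indexSize": 29139508896},
    "shard2": {"dataSize": 44355588072, "indexSize": 27489705632},
}
for name, s in shards.items():
    total_gb = (s["dataSize"] + s["indexSize"]) / GB
    print(name, round(total_gb, 1), "GB")  # ~72.5 GB and ~66.9 GB
```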

Generated at Thu Feb 08 03:01:12 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.