[SERVER-26312] Multiple high memory usage alerts, MongoDB using 75% of memory for small data size Created: 25/Sep/16  Updated: 06/Apr/23  Resolved: 27/Sep/16

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: 3.2.6
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Ati And Assignee: Kelsey Schubert
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
duplicates SERVER-22906 MongoD uses excessive memory over and... Closed
Participants:

 Description   

We are using MongoDB 3.2.6 with the WiredTiger storage engine.

Replica set configuration:
Total members: 3
Roles: Primary, Secondary, Secondary
Memory allocated to each server: 32GB
Approximate data size: 3-4GB

Issue Summary:
Memory usage on one of the secondary members has slowly grown to 75%, which triggers continuous alerts from the monitoring system watching the replica set deployment.

The memory usage keeps growing gradually, so the alerts keep firing. We expect mongod to release memory when usage drops.

This document on WiredTiger's memory usage outlines how memory is split between the WiredTiger cache and the file system cache. Following those estimates, the deployment described above should use:
WiredTiger internal cache memory usage: 60% of RAM minus 1 GB = 18.2GB
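
For reference, both the configured cache ceiling and the cache's actual contents are exposed through serverStatus; a minimal mongo shell sketch using standard WiredTiger statistics fields (the unit conversion is illustrative):

  // Compare the configured WiredTiger cache ceiling with actual cache contents.
  var cache = db.serverStatus().wiredTiger.cache;
  var GB = 1024 * 1024 * 1024;
  print("cache max (GB):   " + cache["maximum bytes configured"] / GB);
  print("cache used (GB):  " + cache["bytes currently in the cache"] / GB);
  print("cache dirty (GB): " + cache["tracked dirty bytes in the cache"] / GB);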

Below are some concerns we would like addressed:

1. Will the mongod process run out of memory if usage keeps growing gradually?

2. Why does it use 75% of memory for a relatively small data size?

3. Why doesn't WiredTiger release memory when other operations are performed?

4. What is the best solution or workaround to mitigate the high memory usage? (A sketch follows this list.)
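
One possible workaround for question 4 is to cap the WiredTiger cache below its default; a minimal sketch, with 8G as a purely illustrative value rather than a recommendation:

  // Runtime change via the wiredTigerEngineRuntimeConfig parameter
  // (takes effect without a restart; the value is an example only):
  db.adminCommand({setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=8G"})

To make the cap persistent, the equivalent setting is storage.wiredTiger.engineConfig.cacheSizeGB in the mongod configuration file. A smaller cache trades memory pressure for more frequent eviction, so this is a mitigation rather than a fix.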

Below are the server-specific stats:

Memory usage:

  PID   USER     PR  NI  VIRT     RES     SHR   S  %CPU  %MEM  TIME+    COMMAND
24371   mongodb  20   0  25.490g  0.022t  7576  S   3.6  75.5  4520:27  mongod

mongostat:

% dirty  % used  flushes  vsize  res
    0.0    73.6        0  25.5G  22.6G
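
The gap between the WiredTiger cache estimate and the 22.6G resident size can be examined through the allocator's own statistics, since tcmalloc keeps freed memory in its page heap rather than returning it to the OS immediately. A minimal mongo shell sketch, assuming mongod is built with tcmalloc (the default on Linux):

  // Inspect how much memory tcmalloc is holding versus actually allocating.
  var t = db.serverStatus({tcmalloc: true}).tcmalloc;
  var GB = 1024 * 1024 * 1024;
  print("heap size (GB):         " + t.generic.heap_size / GB);
  print("allocated (GB):         " + t.generic.current_allocated_bytes / GB);
  print("pageheap free (GB):     " + t.tcmalloc.pageheap_free_bytes / GB);
  print("pageheap unmapped (GB): " + t.tcmalloc.pageheap_unmapped_bytes / GB);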



 Comments   
Comment by Ati And [ 06/Nov/16 ]

Hi Thomas,

We have upgraded MongoDB to 3.2.10 for one of our new replica sets.

However, we still see high memory usage on one of the secondary members, which is configured for MMS backups.

This member holds up to 4GB of data and is using 80% of memory (out of 32GB total).

The memory usage has grown to 80% and has stayed there.

We expected it to release memory after a certain period of time.

Any thoughts on this?

Thanks!

Comment by Kelsey Schubert [ 28/Sep/16 ]

Hi astro,

The release candidate, MongoDB 3.2.10-rc2, has been released if you would like to begin testing, and we expect MongoDB 3.2.10 GA to be released by Tuesday.

Best regards,
Thomas

Comment by Ati And [ 28/Sep/16 ]

Thanks, Thomas!

What is the expected release date for this upcoming release?

Comment by Kelsey Schubert [ 27/Sep/16 ]

Hi astro,

Thanks for providing some additional information. It looks like you are hitting SERVER-22906, which will be resolved in the upcoming release, MongoDB 3.2.10. I would recommend upgrading once it is released.

Kind regards,
Thomas

Comment by Kelsey Schubert [ 27/Sep/16 ]

Hi astro,

Thank you for answering my questions. The diagnostic.data directory does not contain any user data. It periodically collects the output of the following commands, which you are welcome to run yourself to inspect what is collected.

serverStatus: db.serverStatus({tcmalloc: true})
replSetGetStatus: rs.status()
collStats for local.oplog.rs: db.getSiblingDB('local').oplog.rs.stats()
getCmdLineOpts: db.adminCommand({getCmdLineOpts: true})
buildInfo: db.adminCommand({buildInfo: true})
hostInfo: db.adminCommand({hostInfo: true})
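
As a quick sketch for locating the directory itself: diagnostic.data lives under the dbPath, which can be read back from the running process (assuming dbPath appears in the parsed options):

  // Locate diagnostic.data under the configured dbPath.
  var opts = db.adminCommand({getCmdLineOpts: 1});
  print(opts.parsed.storage.dbPath + "/diagnostic.data");

The printed directory can then be archived (for example with tar) before uploading.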

Additionally, you can examine the source code to confirm. However, since you have expressed some concern, I've gone ahead and created a secure upload portal for you to use here. Files uploaded to this portal are only visible to MongoDB employees and are routinely deleted after some time.

Kind regards,
Thomas

Comment by Ati And [ 27/Sep/16 ]

Hi Thomas,

Thank you for looking into the issue. Below are some instance-specific details:

1. Are operations being performed on the affected node such as map-reduce or aggregations?

=> No map-reduce operations are performed on this node, and aggregations are performed only rarely.

2. What is the memory consumption of the other nodes?

=> The other secondary member has ~52.8% memory usage, while the primary has 49.8%.

3. Is there anything unique about the affected node's workload?

=> This node has no special load or purpose. It normally serves as an ordinary secondary.

4. Would you please attach an archive of the diagnostic.data directory for the affected secondary to this ticket?

=> Can you please elaborate on what the diagnostic.data directory contains? There is very little documentation available on MongoDB's diagnostic.data. Does it contain any operational data? That would help us decide how to share it.

Thanks,

Comment by Kelsey Schubert [ 25/Sep/16 ]

Hi astro,

Thank you for opening this ticket. Please answer the following questions so we can continue to investigate:

  1. Are operations being performed on the affected node such as map-reduce or aggregations?
  2. What is the memory consumption of the other nodes?
  3. Is there anything unique about the affected node's workload?
  4. Would you please attach an archive of the diagnostic.data directory for the affected secondary to this ticket?

Thanks again,
Thomas
