[SERVER-27516] OOM (out-of-memory) killer killed mongod process Created: 26/Dec/16  Updated: 03/Jan/17  Resolved: 03/Jan/17

Status: Closed
Project: Core Server
Component/s: Diagnostics
Affects Version/s: 3.2.3
Fix Version/s: None

Type: Question Priority: Minor - P4
Reporter: Suraj Sawant Assignee: Kelsey Schubert
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File diagnostic.data.gz.tar    
Participants:

 Description   

Hi,
While testing sharding in our test environment, the mongod process is being killed by the OOM (out-of-memory) killer.

Testing environment:
OS: CentOS release 6.7
Arch: 64-bit
RAM: 16 GB

Does anybody have an idea of how to prevent the OOM killer from killing the mongod process?
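For reference, the kill can be confirmed from the kernel log; a minimal check (log path assumed for this CentOS 6.7 box) is:

    dmesg | grep -i "killed process"
    grep -i "out of memory" /var/log/messages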

Thanks,
Suraj Sawant



 Comments   
Comment by Kelsey Schubert [ 03/Jan/17 ]

Hi sawantsuraj91@gmail.com,

I've examined the diagnostic.data you provided, and the machine appears to have 64GB of RAM. MongoDB does not appear to consume more than ~32GB. Are there any other processes running on this machine?
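If other memory-hungry processes do share the host, one common mitigation (a minimal sketch only; the 8GB value below is an illustrative assumption, not a sizing recommendation for your workload) is to cap the WiredTiger cache in mongod.conf:

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 8

The same limit can be set at startup with the --wiredTigerCacheSizeGB option.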

At this time, I do not see anything to indicate a bug in the MongoDB server. For MongoDB-related support discussion, please post on the mongodb-user group or on Stack Overflow with the mongodb tag; a question like this, which requires further discussion, would be best posted on the mongodb-user group.

Kind regards,
Thomas

Comment by Suraj Sawant [ 29/Dec/16 ]

Hi,

I have attached a tar archive of the diagnostic data from the mongod instance that was being killed.
Please have a look.

Thanks,
Suraj Sawant

Comment by Kelsey Schubert [ 29/Dec/16 ]

Hi sawantsuraj91@gmail.com,

Would you please attach an archive of the diagnostic.data directory from your test node, so we can continue investigating?
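For example (the dbPath below is an assumption based on the default CentOS package layout; substitute your own):

    tar -czf diagnostic.data.tar.gz -C /var/lib/mongo diagnostic.data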

Thank you,
Thomas
