[SERVER-29868] Swap keeps growing even though there is plenty of available RAM Created: 27/Jun/17  Updated: 02/Aug/17  Resolved: 28/Jun/17

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: John Wang [X] Assignee: Mark Agarunov
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: Microsoft Word diagnose_data.docx     File diagnostic.data.1.rar     File diagnostic.data.2.rar     PNG File swap.png    
Operating System: ALL
Participants:

 Description   

We have a MongoDB sharded cluster consisting of one router server, one config server and two shard servers. The swap usage on the shard servers keeps growing even though there is plenty of available RAM, at a rate of roughly tens of megabytes per week.

Shard server system:
operating system: Ubuntu 16,
CPU: 4 cores @ 3.4 GHz,
RAM: 16 GB,
database: MongoDB 3.4.2 Community edition

There are no errors in the MongoDB log files. The swap usage on the router and config servers is always zero.
Please let me know if you need anything else. I appreciate your support!

John Wang



 Comments   
Comment by John Wang [X] [ 02/Aug/17 ]

Hi Ramon,

Thank you so much for the update!

Best Regards,

John Wang

Comment by Ramon Fernandez Marina [ 02/Aug/17 ]

The contents of diagnostic.data are explained in this comment, which also points to the source code if you're interested in the details of the format (it is specially formatted and compressed).

Unfortunately I'm not aware of any publicly available tools to read and parse diagnostic.data, but the details are public, so a tool could be written for it.
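As a very rough illustration only (not an official or supported tool), something like the Python sketch below could iterate the BSON documents in a diagnostic.data file and pull out the uncompressed reference document from each metrics chunk. It assumes the pymongo package's bson module and the layout described in that source code (type-1 documents carry a binary field whose first 4 bytes are the uncompressed length, followed by a zlib-compressed block that begins with a full reference BSON document); the packed delta samples that follow the reference document are not decoded here.

    # Rough sketch, not an official tool: list the reference documents in an FTDC file.
    # Assumptions: the file is a stream of BSON documents; "type" 1 documents carry a
    # "data" binary field whose first 4 bytes are the uncompressed length, followed by
    # zlib-compressed bytes that begin with a full reference BSON document of metrics.
    import sys
    import zlib
    import bson  # provided by the pymongo package

    def dump_reference_docs(path):
        with open(path, "rb") as f:
            for doc in bson.decode_file_iter(f):
                if doc.get("type") != 1:
                    continue                            # type 0 documents hold metadata only
                payload = zlib.decompress(bytes(doc["data"])[4:])
                ref_len = int.from_bytes(payload[:4], "little")
                ref = bson.decode(payload[:ref_len])    # first full sample in the chunk
                print(doc["_id"], len(ref), "top-level metric groups")

    if __name__ == "__main__":
        dump_reference_docs(sys.argv[1])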

Regards,
Ramón.

Comment by John Wang [X] [ 29/Jun/17 ]

Hi Mark,

I agree with your point. Most of the swap space is used by MongoDB; I suspect the kernel is swapping out old pages from MongoDB's memory-mapped files.
What tool can read the diagnostic.data files?
Thank you so much for your support!
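For what it's worth, per-process swap usage can be estimated with a quick script along these lines (just a sketch: it sums the VmSwap field from /proc/<pid>/status, assuming a Linux kernel that exposes that field):

    # Hedged sketch: attribute swap usage per process name by summing the VmSwap
    # field from /proc/<pid>/status.
    import glob
    import re

    totals = {}
    for status_file in glob.glob("/proc/[0-9]*/status"):
        try:
            text = open(status_file).read()
        except OSError:
            continue                                    # process exited while scanning
        name = re.search(r"^Name:\s+(\S+)", text, re.M)
        swap = re.search(r"^VmSwap:\s+(\d+)\s+kB", text, re.M)
        if name and swap:
            totals[name.group(1)] = totals.get(name.group(1), 0) + int(swap.group(1))

    for proc, kb in sorted(totals.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{proc:20s} {kb:>10d} kB")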

John Wang

Comment by Mark Agarunov [ 28/Jun/17 ]

Hello John_Wang,

Thank you for providing this data. Looking over it, I don't believe the swap usage is related to mongod itself. This is likely just the Linux kernel swapping out memory that hasn't been accessed recently. The kernel will generally not fill swap to the point of causing an out-of-memory condition; however, you could try setting vm.swappiness to a lower value (Ubuntu recommends 10 for servers), which should cause the system to use swap only when needed rather than swapping out infrequently accessed pages.
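For reference, you can check the current value with sysctl vm.swappiness and lower it with sysctl -w vm.swappiness=10 (add vm.swappiness=10 to /etc/sysctl.conf to persist it across reboots). The snippet below is only an illustrative sketch that does the same thing by writing /proc/sys/vm/swappiness directly; it assumes Linux and needs root to change the value.

    # Illustrative sketch (assumes Linux; run as root to change the value):
    # read the current vm.swappiness and lower it to 10 if it is higher.
    from pathlib import Path

    swappiness = Path("/proc/sys/vm/swappiness")

    current = int(swappiness.read_text())
    print(f"current vm.swappiness: {current}")

    if current > 10:
        swappiness.write_text("10")   # immediate effect only; persist via /etc/sysctl.conf
        print("lowered vm.swappiness to 10")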

Please note that the SERVER project is for reporting bugs or suggesting features for the MongoDB server. For MongoDB-related support discussion, please post on the mongodb-user group or on Stack Overflow with the mongodb tag. A question like this one, which involves broader discussion, is best posted on the mongodb-user group.

Thanks,
Mark

Comment by John Wang [X] [ 28/Jun/17 ]

Hi Mark,

I appreciate your support so much!
The vm.swappiness value is 60, which is the default in Ubuntu.
I have uploaded the two diagnostic.data.1&2.rar files. What tools can read the diagnostic.data? Thanks for your time.
We have 1,000 clients continuously writing about 3 MB/minute of data into the MongoDB cluster. If swap keeps growing until it is full, could the shard server run out of memory because there is no more space to swap to?

Please let me know if you need anything else. Thanks so much!

Best,

John Wang

Comment by John Wang [X] [ 28/Jun/17 ]

Diagnostic data from the primary server.

Comment by Mark Agarunov [ 27/Jun/17 ]

Hello John_Wang,

Thank you for the report. From your description of the issue, I believe this may be the Linux kernel swapping out pages that haven't been accessed recently, governed by the vm.swappiness setting. However, to get a better idea of what may be causing this, please archive and upload the $dbPath/diagnostic.data directory. This should give us a bit more information about what is causing the behavior.

Thanks,
Mark
