[SERVER-56996] mongodb consumes up to 98% of the available memory Created: 17/May/21 Updated: 27/Oct/23 Resolved: 20/May/21 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | 4.2.2 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | S P | Assignee: | Dmitry Agranat |
| Resolution: | Works as Designed | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Attachments: |
|
| Issue Links: |
|
| Operating System: | ALL |
| Steps To Reproduce: | Start the mongo pod. Increase the number of records to 50K. Run a locust test with client requests as below: number_of_users = 10 |
| Participants: | |
| Description |
|
Hello,

Using mongo in a Kubernetes environment with CentOS as the base.

Limits:

When the pod starts it occupies up to 4G of memory, and usage grows gradually as the number of parallel connections and queries increases, until almost all of the available memory is consumed:

[root@node00 cloud-user]# kubectl top pod -n test | grep mngo

The WiredTiger cache is set to the default (around 4G).
WiredTiger info: |
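For reference, a minimal sketch of the kind of workload described in Steps To Reproduce (10 concurrent clients, roughly 50K records), written as a plain pymongo script rather than the reporter's actual locust test; the connection string, database and collection names are assumptions for illustration. After the run it prints the WiredTiger cache figures from serverStatus so they can be compared with the pod's memory usage.

# Hypothetical stand-in for the locust workload from Steps To Reproduce:
# 10 concurrent clients inserting ~50K documents in total, then a look at the
# WiredTiger cache figures reported by serverStatus.
from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

MONGO_URI = "mongodb://localhost:27017"  # assumed; point at the mongo pod/service
NUM_USERS = 10                           # mirrors locust number_of_users
DOCS_PER_USER = 5_000                    # 10 workers x 5,000 docs = 50K records

def worker(worker_id):
    client = MongoClient(MONGO_URI)
    coll = client["loadtest"]["records"]
    batch = [{"worker": worker_id, "seq": i, "payload": "x" * 512}
             for i in range(DOCS_PER_USER)]
    coll.insert_many(batch)                       # write load
    coll.count_documents({"worker": worker_id})   # a read per worker
    client.close()

with ThreadPoolExecutor(max_workers=NUM_USERS) as pool:
    list(pool.map(worker, range(NUM_USERS)))

status = MongoClient(MONGO_URI).admin.command("serverStatus")
cache = status["wiredTiger"]["cache"]
print("WT cache bytes in use    :", cache["bytes currently in the cache"])
print("WT cache maximum (bytes) :", cache["maximum bytes configured"])
print("process resident (MB)    :", status["mem"]["resident"])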
| Comments |
| Comment by S P [ 21/May/21 ] |
|
The image below, from my local setup, clearly shows that virtual memory increased from 6GB to 8GB during an insert operation and afterwards remained at 8GB despite there being no active connections or any CRUD operations. The memory only came down when mongo was restarted. Why does it behave like this? Isn't mongo expected to release the memory when there are no operations?
|
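A note on inspecting this: the resident memory of the mongod process is roughly the WiredTiger cache plus memory that the tcmalloc allocator has freed internally but not yet returned to the OS. A small pymongo sketch along these lines shows that breakdown; the connection string is an assumption.

# Breakdown of process memory: WiredTiger cache vs. bytes tcmalloc is holding.
# Field names come from serverStatus(); the URI is an assumption.
from pymongo import MongoClient

status = MongoClient("mongodb://localhost:27017").admin.command("serverStatus")

print("resident MB :", status["mem"]["resident"])
print("virtual MB  :", status["mem"]["virtual"])
print("WT cache MB :",
      round(status["wiredTiger"]["cache"]["bytes currently in the cache"] / 2**20))

# tcmalloc keeps freed memory on its own free lists rather than returning it to
# the OS immediately, which is one reason resident memory stays high after a
# burst of inserts even when no operations are running.
tc = status.get("tcmalloc", {}).get("tcmalloc", {})
print("pageheap_free_bytes     :", tc.get("pageheap_free_bytes"))
print("pageheap_unmapped_bytes :", tc.get("pageheap_unmapped_bytes"))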
| Comment by S P [ 21/May/21 ] |
|
Hello Dima, I am still not convinced, as I can see the number of active connections is only 2, which includes the current terminal. With just 2 active connections the system is consuming less CPU but all of the available memory! Is this expected behavior? Why would it consume all memory with minimal activity? Since you mentioned that CRUD operations are happening continuously, can you please let me know which IPs most of the traffic comes from, so that I can check locally? Is there a way to check this, somewhere that shows the active traffic per client? Also, this behavior is consistent: the metrics I shared cover just 2 days, but mongodb in this lab never releases memory (it is always at 98%). I would like to understand this better.
PRIMARY> db.serverStatus().connections
{ "current" : 151, "available" : 838709, "totalCreated" : 98058, "active" : 2 } |
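On the question of seeing where client traffic comes from: serverStatus has no per-IP counters, but active operations and the client address that issued each one can be listed with the $currentOp aggregation stage on the admin database, and the mongod log also records a "connection accepted from <ip>" line for every new connection. A minimal pymongo sketch, with the connection string as an assumption:

# List active (non-idle) operations with the client address that sent them,
# using the $currentOp aggregation stage on the admin database.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI

pipeline = [
    {"$currentOp": {"allUsers": True, "idleConnections": False}},
    {"$project": {"client": 1, "op": 1, "ns": 1, "secs_running": 1}},
]

for op in client.admin.aggregate(pipeline):
    print(op.get("client"), op.get("op"), op.get("ns"), op.get("secs_running"))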
| Comment by Dmitry Agranat [ 20/May/21 ] |
|
Thanks ece.sagar@gmail.com for the additional context. Based on what I see in your workload, CRUD operations never really stop. There are periods when CRUD operations decrease significantly for a couple of seconds, but it is not expected that the resident memory of the process would be returned to the OS during those 1-2 seconds of low activity. Regarding the comparison between CPU and memory utilization, it is likewise not expected that the moment CPU utilization drops to 50%, the same should happen to memory. I've noticed that about 3% of the total memory is fragmented, and we could try to force it to be returned to the OS more aggressively, but I am not sure that aligns with the expectation of reclaiming all the memory. As this works as designed, I will go ahead and close this ticket. The SERVER project is for bugs and feature suggestions for the MongoDB server. If you have further questions about memory, we'd like to encourage you to start by asking our community for help by posting on the MongoDB Developer Community Forums. Regards, |
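For completeness, the "return memory to the OS more aggressively" knob mentioned above usually refers to the tcmallocAggressiveMemoryDecommit server parameter. The sketch below only illustrates how such a parameter can be read and toggled at runtime via pymongo; whether it is appropriate here is a separate question, and the URI is an assumption.

# Illustrative only: toggling tcmalloc's aggressive decommit behaviour so freed
# pages are returned to the OS sooner, at some CPU cost. Not a recommendation.
from pymongo import MongoClient

admin = MongoClient("mongodb://localhost:27017").admin  # assumed URI

# Read the current value, then enable aggressive decommit at runtime.
print(admin.command({"getParameter": 1, "tcmallocAggressiveMemoryDecommit": 1}))
admin.command({"setParameter": 1, "tcmallocAggressiveMemoryDecommit": 1})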
| Comment by S P [ 19/May/21 ] |
|
Dima, regarding the version, the db server version is:

eden-csf:PRIMARY> db.version()

It says Percona Server for MongoDB shell version v4.2.2-3

------------------

Regarding the logs, the timestamps are in the UTC time zone:

bash-4.4$ date

-----------------

CRUD operations were performed for a specific time period, say 1~1.5 hours. During that period it makes sense that memory usage is high, since CPU utilization is also high. But once the operations complete, the memory utilization of the pod should also reduce, the way the CPU usage falls back to its lowest level. For example, before starting the CRUD operations on replica-0 the CPU is 50, during the CRUD operations it shoots up to 5008, and once they complete the CPU comes back to 50~100. But the memory usage of the pod does not behave the same way: it starts at 5788, increases to 9862, and after the CRUD operations it remains the same.
|
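One way to capture the before/during/after pattern described above (CPU falling back once the CRUD window ends while resident memory stays flat) is to poll serverStatus on a fixed interval and chart the result next to kubectl top. A rough sketch, with URI and polling interval as assumptions:

# Poll resident/virtual memory and WiredTiger cache size once a minute so the
# before/during/after behaviour of a CRUD window can be charted later.
import time
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed URI

while True:
    s = client.admin.command("serverStatus")
    wt_mb = round(s["wiredTiger"]["cache"]["bytes currently in the cache"] / 2**20)
    print(datetime.now(timezone.utc).isoformat(),
          "resident_mb=%s virtual_mb=%s wt_cache_mb=%s"
          % (s["mem"]["resident"], s["mem"]["virtual"], wt_mb))
    time.sleep(60)  # assumed polling interval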
| Comment by Dmitry Agranat [ 19/May/21 ] |
|
ece.sagar@gmail.com, I have a couple of clarifying questions:
|
| Comment by S P [ 19/May/21 ] |
|
metrics.tar has been uploaded to the support uploader location. |
| Comment by S P [ 19/May/21 ] |
|
Hello Dima, I will try to get the required logs and upload them as soon as possible; I need some time. Meanwhile, can you please confirm whether this is the expected behavior of mongo? I have read a couple of blogs stating that, out of the total available memory, WiredTiger consumes the amount set by its configuration and the remaining memory is used by the filesystem cache, as mentioned in https://docs.mongodb.com/manual/core/wiredtiger/#std-label-storage-wiredtiger-journal
Regards
|
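On the sizing question raised above: by default the WiredTiger internal cache is capped at the larger of 50% of (RAM minus 1 GB) and 256 MB, and memory beyond that cap is left to the filesystem cache and other server overhead. A tiny sketch of that formula (the RAM value is just an example, not the reporter's pod limit):

# Default WiredTiger internal cache cap: max(50% of (RAM - 1 GB), 256 MB).
GIB = 1024 ** 3
MIB = 1024 ** 2

def default_wt_cache_bytes(total_ram_bytes):
    return max(int(0.5 * (total_ram_bytes - GIB)), 256 * MIB)

ram = 16 * GIB  # example value only
print("RAM %.0f GiB -> default WiredTiger cache %.1f GiB"
      % (ram / GIB, default_wt_cache_bytes(ram) / GIB))

Note that in a memory-limited pod, page-cache usage can also count toward what kubectl top reports, so the pod's figure can sit well above the WiredTiger cache alone.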
| Comment by Dmitry Agranat [ 18/May/21 ] |
|
Would you please archive (tar or zip) the mongod.log files covering the reported event and the $dbpath/diagnostic.data directory (the contents are described here) and upload them to this support uploader location? Files uploaded to this portal are visible only to MongoDB employees and are routinely deleted after some time. Dima |
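A small helper for assembling that archive, assuming illustrative paths for the dbpath and log directory (adjust both to the pod's actual configuration):

# Bundle the mongod.log files and the $dbpath/diagnostic.data directory into a
# single archive for upload. Paths are assumptions; adjust to the real layout.
import tarfile
from pathlib import Path

dbpath = Path("/data/db")           # assumed dbpath
log_dir = Path("/var/log/mongodb")  # assumed log directory

with tarfile.open("mongod-diagnostics.tar.gz", "w:gz") as tar:
    tar.add(dbpath / "diagnostic.data", arcname="diagnostic.data")
    for log in sorted(log_dir.glob("mongod.log*")):
        tar.add(log, arcname=log.name)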