[JAVA-3768] Memory Leak Created: 18/Jun/20 Updated: 27/Oct/23 Resolved: 18/Jun/20
| Status: | Closed |
| Project: | Java Driver |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Sharad Keer | Assignee: | Unassigned |
| Resolution: | Works as Designed | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: | |
| Description |
We are seeing a trend of high heap memory consumption in our production environment. We made various performance corrections via code optimization; however, the pattern remains consistent, and by inspecting a heap dump we observed that the primary suspects are large numbers of objects piling up in memory and not being garbage-collected later. On looking deeper with Eclipse Memory Analyzer, we observed that all the leak suspects share a common thread pattern. For reference, I have added the stack trace below:
I have attached the dominator tree for reference, which shows the retained object sizes in the heap. The MongoDB driver version in use is 3.4.2, and we are currently operating in a microservice architecture (MSA) where the hosting microservice has an allocated heap size of 4 GB. The application is built using JDK 8; other supporting frameworks are Spring OSS.
| Comments |
| Comment by Jeffrey Yemin [ 02/Nov/22 ] |
sanveer1995@gmail.com can you run the following command in your production environment and comment back with the output: java -XX:+PrintFlagsFinal -version | grep -i HeapSize. Or, if you're setting the max heap size manually, what are you setting it to? As an example, on my Mac Mini with 32 GB RAM, the output shows a max heap size of about 8 GB.
So on this class of machine, 100 MB of retained memory is around 1% of the total available memory, given a max heap size of ~8 GB. So I wonder: is this really causing a problem in your production application? Are you encountering OutOfMemoryError exceptions?
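A minimal sketch, in case it helps to confirm the effective limit from inside the application rather than via JVM flags; this uses only the standard JDK Runtime API, and the class name is hypothetical:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reports the JVM's max heap in bytes
        // (set by -Xmx, or the platform default of roughly 1/4 of physical RAM).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap size: %.1f MB%n", maxBytes / (1024.0 * 1024.0));
    }
}
```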
| Comment by Sanveer Singh [ 02/Nov/22 ] |
Thanks for the response, @Jeffrey Yemin. Decreasing the batch size did work, but querying large data sets is slower now, I guess due to the increase in round trips to the DB. Not sure about the commas, but believe me, that value is slightly more than 100 MB.
| Comment by Jeffrey Yemin [ 31/Oct/22 ] |
The 16 MB refers to the size of the batch when encoded as BSON. When decoded into Java hash maps, the size is expected to increase considerably. I suggest you either increase the size of your heap or decrease the batch size of the operations. I'm also unclear what the number 10,56,01,944 even means. Why are there only two digits to the left of each comma?
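A minimal sketch of the decrease-the-batch-size option, assuming the legacy DBCursor API this ticket is using; the host, database, and collection names here are hypothetical:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class BatchSizeSketch {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017); // hypothetical host
        try {
            DBCollection coll = client.getDB("mydb").getCollection("jobs"); // hypothetical names
            // A smaller batch size caps how many documents the driver decodes
            // and retains per round trip, trading memory for extra round trips.
            try (DBCursor cursor = coll.find(new BasicDBObject()).batchSize(500)) {
                while (cursor.hasNext()) {
                    DBObject doc = cursor.next();
                    // process doc, then let it go out of scope so it can be collected
                }
            }
        } finally {
            client.close();
        }
    }
}
```

The trade-off reported above (slower queries after shrinking the batch) follows directly from this: a batch size of 500 means roughly one getMore round trip per 500 documents, so total query latency grows as the batch shrinks.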
| Comment by Sanveer Singh [ 31/Oct/22 ] |
@Jeffrey Yemin According to the doc https://www.mongodb.com/docs/manual/tutorial/iterate-a-cursor/#cursor-batches, the size of a batch will not exceed the maximum BSON document size (16 MB).
But in actuality, the size of the data fetched in a batch goes way beyond 16 MB. In the image of my heap dump below, you can see that the sizes go up to 100 MB.
We expect to run multiple jobs like this in parallel, so this restricts the amount of load that we can hold in memory.
| Comment by Jeffrey Yemin [ 18/Jun/20 ] |
DBCursor does retain references to objects decoded from query results (in this case instances of BasicDBObject and the ArrayList instances contained within them), but they will be eligible for garbage collection as soon as the DBCursor is. So I do not think that the issue is with the driver. The evidence that you provided only shows that the driver created the objects that are being retained, but not that the driver is at fault for retaining them. A few things to look at: how long your application keeps each DBCursor reachable, and whether your own code holds references to the decoded objects after it is done with them. A sketch of the cursor-lifetime point follows below.
I'm going to close this issue, but if you find more evidence that the driver is at fault, we can re-open it.
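A minimal sketch of that cursor-lifetime point, assuming the legacy 3.x API in use here; DBCursor implements Closeable, so try-with-resources bounds how long the decoded batch stays reachable (the collection and query are hypothetical):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;

public class CursorScopeSketch {
    // Keep the cursor's lifetime as short as the work requires: once it is
    // closed and unreachable, the BasicDBObject instances it retains become
    // eligible for garbage collection.
    static void processActiveDocs(DBCollection coll) {
        try (DBCursor cursor = coll.find(new BasicDBObject("status", "active"))) { // hypothetical query
            while (cursor.hasNext()) {
                DBObject doc = cursor.next();
                // Use doc here; avoid stashing it in long-lived caches or static
                // collections, or it will stay reachable and show up in heap dumps.
            }
        } // cursor closed here; its retained batch can now be collected
    }
}
```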