[GODRIVER-431] Memory leak in InsertMany? Created: 25/May/18  Updated: 11/Sep/19  Resolved: 31/May/18

Status: Closed
Project: Go Driver
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Task Priority: Major - P3
Reporter: Thomas Geulen Assignee: Kristofer Brandow (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to GODRIVER-432 topology.SelectServer leaks timers Closed

 Description   

Is it possible that there is a memory leak when InsertMany is used?

I created a simple program (GIST) and got the following output from pprof:

Showing nodes accounting for 22425.05kB, 100% of 22425.05kB total
Showing top 10 nodes out of 23
flat flat% sum% cum cum%
16144.01kB 71.99% 71.99% 22425.05kB 100% main.main.func2
3638.89kB 16.23% 88.22% 3638.89kB 16.23% github.com/mongodb/mongo-go-driver/core/wiremessage.Query.AppendWireMessage
1536.07kB 6.85% 95.07% 1536.07kB 6.85% github.com/mongodb/mongo-go-driver/bson.newElement
553.04kB 2.47% 97.53% 553.04kB 2.47% github.com/mongodb/mongo-go-driver/bson.NewDocument
553.04kB 2.47% 100% 6281.04kB 28.01% github.com/mongodb/mongo-go-driver/mongo.(*Collection).InsertMany
0 0% 100% 553.04kB 2.47% github.com/mongodb/mongo-go-driver/bson.ElementConstructor.ArrayFromElements
0 0% 100% 512.02kB 2.28% github.com/mongodb/mongo-go-driver/bson.ElementConstructor.ObjectID
0 0% 100% 1024.05kB 4.57% github.com/mongodb/mongo-go-driver/bson.ElementConstructor.SubDocument
0 0% 100% 553.04kB 2.47% github.com/mongodb/mongo-go-driver/bson.NewArray
0 0% 100% 1024.05kB 4.57% github.com/mongodb/mongo-go-driver/bson.ValueConstructor.Document

 

When I use the same simple program but insert each struct one by one with InsertOne, I don't see anything waiting to be garbage collected.
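
For reference, a minimal sketch of the kind of program described above (the linked Gist is not reproduced here). It uses the released go.mongodb.org/mongo-driver import path rather than the pre-1.0 github.com/mongodb/mongo-go-driver path shown in the profile, and the URI, collection name, and batch size are placeholders:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // exposes /debug/pprof on the default mux

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

type item struct {
    Name  string
    Value int
}

func main() {
    // Handler that creates a client per request and inserts a batch without
    // ever calling Disconnect, mirroring the scenario analysed in the
    // comment below.
    http.HandleFunc("/insert", func(w http.ResponseWriter, r *http.Request) {
        ctx := r.Context()
        client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        coll := client.Database("test").Collection("leaktest")

        docs := make([]interface{}, 0, 1000)
        for i := 0; i < 1000; i++ {
            docs = append(docs, item{Name: "item", Value: i})
        }
        if _, err := coll.InsertMany(ctx, docs); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusOK)
    })

    // Heap profiles can then be inspected with:
    //   go tool pprof http://localhost:8080/debug/pprof/heap
    log.Fatal(http.ListenAndServe("localhost:8080", nil))
}

Hitting /insert a few times and then capturing a heap profile should show the same kind of retained allocations as in the output above.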



 Comments   
Comment by Kristofer Brandow (Inactive) [ 27/May/18 ]

Hi thomas.geulen,

TL;DR: This isn't a memory leak; it's a side effect of an underlying optimization.

On each connection we create, we allocate a read buffer and a write buffer to reduce the number of allocations we need to do and the pressure we put on the garbage collector. These buffers are created as slices with a capacity of 256 bytes. The total size of the wire protocol message for the documents in the example is 116 bytes, so AppendWireMessage does not need to allocate a new slice when used with InsertOne. With InsertMany, the batch is much larger than 256 bytes, so AppendWireMessage allocates a new, larger slice.

Since the mongo.Client created for the HTTP request is never Disconnected after being used, the connection remains in the pool indefinitely. Even once the mongo.Client itself is garbage collected, the underlying topology.Topology still has goroutines running, which means that everything below the topology.Topology, including the connection.Pool, cannot be garbage collected. This keeps the used connection alive, along with its now-larger write buffer. That is what you're seeing. pprof reports where memory was allocated, which is why wiremessage.Query.AppendWireMessage shows up in the profile.
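
A self-contained illustration of the growth behaviour described above, using only standard Go slice semantics (the 116-byte and 256-byte figures come from the explanation; this is not the driver's actual buffer code, and the 4096-byte batch size is illustrative):

package main

import "fmt"

func main() {
    buf := make([]byte, 0, 256) // per-connection write buffer as described above
    small := make([]byte, 116)  // size of the InsertOne wire message in the example
    large := make([]byte, 4096) // an InsertMany batch well over 256 bytes

    buf = append(buf, small...)
    fmt.Println(cap(buf)) // 256: the message fits, no new allocation

    buf = buf[:0] // reuse the buffer for the next message
    buf = append(buf, large...)
    fmt.Println(cap(buf)) // >= 4096: append allocated a larger backing array,
    // which stays reachable for as long as the connection holds buf
}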

Let me know if this clears things up.
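
For completeness, a sketch of the remedy implied above: disconnect the client once it is no longer needed, so the topology's goroutines stop and the pooled connection, together with its grown write buffer, becomes eligible for garbage collection. This uses the released go.mongodb.org/mongo-driver API (the pre-1.0 API in this report differed in detail), and the URI and names are placeholders:

package main

import (
    "context"
    "log"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func insertBatch(ctx context.Context, docs []interface{}) error {
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        return err
    }
    // Without this Disconnect, the connection pool (and every grown write
    // buffer in it) stays live for the lifetime of the process.
    defer func() {
        if derr := client.Disconnect(ctx); derr != nil {
            log.Println("disconnect:", derr)
        }
    }()

    _, err = client.Database("test").Collection("leaktest").InsertMany(ctx, docs)
    return err
}

func main() {
    docs := []interface{}{map[string]int{"n": 1}, map[string]int{"n": 2}}
    if err := insertBatch(context.Background(), docs); err != nil {
        log.Fatal(err)
    }
}

In a real application you would normally reuse a single client for the lifetime of the process and disconnect it on shutdown, rather than creating one per request.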

As a bonus, while debugging this I did discover that we are leaking timers. So while the exact thing you filed a bug for wasn't a memory leak, there is a related memory leak in this code, tracked as GODRIVER-432.
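
Purely as an illustration, and not a claim about the exact code in topology.SelectServer, a common shape of that kind of timer leak in Go: time.After allocates a timer that stays pending until it fires even when the other select branch wins, so in a frequently-run loop these pending timers pile up. Creating the timer explicitly and stopping it releases it as soon as the operation completes:

package main

import (
    "fmt"
    "time"
)

// recvWithTimeoutLeaky leaves its timer pending for the full timeout even
// when ch wins the select; called in a tight loop, these timers accumulate.
func recvWithTimeoutLeaky(ch <-chan struct{}, timeout time.Duration) bool {
    select {
    case <-ch:
        return true
    case <-time.After(timeout):
        return false
    }
}

// recvWithTimeout stops the timer as soon as the select returns.
func recvWithTimeout(ch <-chan struct{}, timeout time.Duration) bool {
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    select {
    case <-ch:
        return true
    case <-timer.C:
        return false
    }
}

func main() {
    ch := make(chan struct{}, 1)
    ch <- struct{}{}
    fmt.Println(recvWithTimeoutLeaky(ch, time.Minute)) // true, but a 1-minute timer is now pending
    ch <- struct{}{}
    fmt.Println(recvWithTimeout(ch, time.Minute)) // true, and the timer is stopped immediately
}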

--Kris
