We're trying to load all documents from a collection in batches of 100 (to avoid locking the DB for long periods). Our documents are large, and this eventually causes the MongoDB server to run out of memory.
I've run a test locally to reproduce the problem, and it seems that cursors on the server are not being closed: when running the code below, the number of open cursors on the server gradually increases.
int batchSize = 100;
int i = 0;
while (true) {
    // fetch the next batch, e.g. find().skip(i).limit(batchSize),
    // iterate it, advance i by batchSize; break on an empty batch
}
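For reference, the batching arithmetic the loop is driving can be sketched driver-agnostically. This is a minimal Python analogue, not MongoDB driver code: `load_in_batches` and the `fetch_batch(skip, limit)` callback are hypothetical stand-ins for a query like `find().skip(i).limit(batchSize)`.

```python
def load_in_batches(fetch_batch, batch_size=100):
    """Pull documents in fixed-size batches until an empty batch signals the end.

    fetch_batch(skip, limit) is a hypothetical callback standing in for a
    driver query such as find().skip(skip).limit(limit).
    """
    docs = []
    skip = 0
    while True:
        batch = fetch_batch(skip, batch_size)
        if not batch:
            break  # an empty batch means the collection is exhausted
        docs.extend(batch)
        skip += batch_size
    return docs

# Usage against a fake in-memory "collection" of 250 documents:
data = list(range(250))
loaded = load_in_batches(lambda skip, limit: data[skip:skip + limit])
```

With 250 fake documents this issues three queries (skips 0, 100, 200) plus one final empty fetch; each query in the real driver opens a fresh server-side cursor, which is where the leak shows up.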
The output of db.serverStatus() after about 1 min:
"cursors" : { "totalOpen" : 51, "clientCursors_size" : 51, "timedOut" : 0 }
Using a negative limit appears to fix this, but then there seems to be a 4 MB cap on the size of the data returned. Why does the server keep the cursor open once all the results have been read?
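The behaviour the serverStatus counters show can be modelled with a toy server. Everything below is hypothetical Python, not mongod or driver code; it only reproduces the observed pattern: with a positive limit a server-side cursor is registered and stays open until the client exhausts or kills it, while a negative limit requests a single batch (capped by the message size, 4 MB here) that the server closes immediately.

```python
class ToyMongoServer:
    """Hypothetical model of server-side cursor bookkeeping (not real mongod)."""

    def __init__(self):
        self.open_cursors = set()
        self._next_id = 1

    def query(self, limit):
        # Negative limit: the server returns one batch and closes the cursor
        # right away, so no cursor id is handed back (modelled as None).
        if limit < 0:
            return None
        # Positive limit: a server-side cursor is registered; it stays open
        # until the client exhausts it or explicitly sends killCursors.
        cursor_id = self._next_id
        self._next_id += 1
        self.open_cursors.add(cursor_id)
        return cursor_id

    def kill_cursor(self, cursor_id):
        self.open_cursors.discard(cursor_id)

# Batch-loading without closing cursors leaks one cursor per batch:
server = ToyMongoServer()
for _ in range(51):
    server.query(limit=100)
leaked = len(server.open_cursors)   # mirrors "totalOpen" : 51 above

# The same loop with a negative limit leaves nothing open:
server2 = ToyMongoServer()
for _ in range(51):
    server2.query(limit=-100)
clean = len(server2.open_cursors)
```

In the real Java driver, explicitly closing each cursor when you are done with it (DBCursor.close(), which the driver does provide) releases the server-side cursor without needing a negative limit, and so avoids the 4 MB single-batch cap.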