Details
- Type: Bug
- Resolution: Incomplete
- Priority: Major - P3
- Fix Version/s: None
- Affects Version/s: 4.0.18
- Component/s: None
- Operating System: ALL
Description
Hello,
I have the following situation:
- I have a GridFS bucket with 810,959 files (.files count) stored in 2,499,160 chunks (.chunks count)
- I want to iterate over all entries in .chunks and print (just for sake of simplicity) the _id of each entry
- for this I use the following command:
db.sandboxReports.chunks.find({},{_id:1}).forEach(r=>print(r._id))
- what I expect is the screen filling with 2M+ IDs (useless, I know, but again, for the sake of simplicity)
- what happens is that after exactly 101 IDs the script freezes; I waited for hours (I am a curious guy), but it never recovered
- I found the same issue in https://jira.mongodb.org/browse/SERVER-35106 (unfortunately closed as Cannot Reproduce) and tried setting the batchSize to 100
- this time it does not freeze forever, but there are significant delays of up to minutes between batches, which makes the workaround unusable: there are 2M+ records and only 1,440 minutes in a day
- I also found that the problem comes up occasionally, but a working solution is never provided (it seems hard to reproduce); see e.g.:
https://stackoverflow.com/questions/36407641/node-js-mongo-find-each-stopping-after-first-batch
https://stackoverflow.com/questions/52064752/mongo-cursor-hangs-after-first-fetch
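For reference, the batchSize variant of the command I tried (the workaround suggested in SERVER-35106; 100 is the batch size from that ticket) is:
db.sandboxReports.chunks.find({},{_id:1}).batchSize(100).forEach(r=>print(r._id))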
While the query was executing, currentOp reported the following:
{
    "host" : "HOST_NAME_HERE:27017",
    "desc" : "conn11650",
    "connectionId" : 11650,
    "client" : "127.0.0.1:48568",
    "appName" : "MongoDB Shell",
    "clientMetadata" : {
        "application" : {
            "name" : "MongoDB Shell"
        },
        "driver" : {
            "name" : "MongoDB Internal Client",
            "version" : "4.0.18"
        },
        "os" : {
            "type" : "Linux",
            "name" : "Ubuntu",
            "architecture" : "x86_64",
            "version" : "18.04"
        }
    },
    "active" : true,
    "currentOpTime" : "2020-05-18T21:37:24.801+0000",
    "opid" : 9652856,
    "lsid" : {
        "id" : UUID("ab8e0656-ddd0-496e-8c1b-6c95dc9e35b3"),
        "uid" : BinData(0,"lrHY+QTs83BmclcR9MKKYcSvfFmsJ+jW+bP+OfRjiFM=")
    },
    "secs_running" : NumberLong(906),
    "microsecs_running" : NumberLong(906036759),
    "op" : "getmore",
    "ns" : "DB_NAME.sandboxReports.chunks",
    "command" : {
        "getMore" : NumberLong("171272634643"),
        "collection" : "sandboxReports.chunks",
        "lsid" : {
            "id" : UUID("ab8e0656-ddd0-496e-8c1b-6c95dc9e35b3")
        },
        "$clusterTime" : {
            "clusterTime" : Timestamp(1589836937, 5),
            "signature" : {
                "hash" : BinData(0,"v8OUtyPICaqX5BTz+tZ3u/rAsV8="),
                "keyId" : NumberLong("6785026434701197326")
            }
        },
        "$db" : "sandboxReports"
    },
    "originatingCommand" : {
        "find" : "sandboxReports.chunks",
        "filter" : { },
        "projection" : {
            "_id" : 1
        },
        "lsid" : {
            "id" : UUID("ab8e0656-ddd0-496e-8c1b-6c95dc9e35b3")
        },
        "$clusterTime" : {
            "clusterTime" : Timestamp(1589816462, 13),
            "signature" : {
                "hash" : BinData(0,"ZTJm56Md0AIWuKYHN/EiyyopmLY="),
                "keyId" : NumberLong("6785026434701197326")
            }
        },
        "$db" : "sandboxReports"
    },
    "planSummary" : "COLLSCAN",
    "numYields" : 42856,
    "locks" : {
        "Global" : "r",
        "Database" : "r",
        "Collection" : "r"
    },
    "waitingForLock" : false,
    "lockStats" : {
        "Global" : {
            "acquireCount" : {
                "r" : NumberLong(42857)
            }
        },
        "Database" : {
            "acquireCount" : {
                "r" : NumberLong(42857)
            }
        },
        "Collection" : {
            "acquireCount" : {
                "r" : NumberLong(42857)
            }
        }
    }
}
The server is MongoDB 4.0.18 running on Ubuntu 18.04, set up in replica set mode.
I have tried running the command against different members of the replica set.
If I can help with any other information, please let me know.
Any help is greatly appreciated.
Regards,
Puiu
Issue Links
- related to SERVER-35106: cursor iteration freeze after the 101 docs (1st batch + 1) (Closed)