|
Doc changes are only needed if we have already documented the locking behavior. If not, I think it makes sense to leave this as an undocumented internal implementation detail, since there should be no behavior change.
|
|
Author:
Mathias Stearn (RedBeard0531) <mathias@10gen.com>
Message: SERVER-6296 Batch fetching in DocumentSourceCursor
The main win here is not grabbing and releasing the read lock for each
document to be processed.
Branch: master
https://github.com/mongodb/mongo/commit/8f0c10ec3f576b9c44213114ce8540f8a6698206
|
|
The implementation is the same as described above; however, the batch size is always 4MB (MaxBytesToReturnToClientAtOnce), regardless of the number of documents or any yielding done while fetching them.
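For illustration only, a byte-bounded batch cut could look like the sketch below. The 4MB limit is the MaxBytesToReturnToClientAtOnce figure mentioned above; the Cursor/Doc interface and the objsize() accessor are assumptions, not the actual DocumentSourceCursor code.

{code:cpp}
#include <cstddef>
#include <vector>

constexpr std::size_t kMaxBytesPerBatch = 4 * 1024 * 1024;  // ~MaxBytesToReturnToClientAtOnce

// Fill one batch while the caller holds the read lock; the cut point is the
// accumulated serialized size, not a document count.
template <typename Doc, typename Cursor>
std::vector<Doc> fillBatch(Cursor& cursor) {
    std::vector<Doc> batch;
    std::size_t bytes = 0;
    while (cursor.more() && bytes < kMaxBytesPerBatch) {
        Doc d = cursor.next();
        bytes += d.objsize();  // objsize() assumed to return the doc's serialized size
        batch.push_back(std::move(d));
    }
    return batch;  // caller releases the read lock and hands the batch downstream
}
{code}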
|
|
The current plan is to hold the read lock only while inside DocumentSourceCursor, and have it fetch documents in batches of ~100 or until the first yield point (such as a document not being in memory). It can then release the lock and pass that batch of results down the pipeline. I think this is the best way to handle both multithreading the processing of the pipeline and the need to take a write lock for $out. A rough sketch of that loop follows below.
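The sketch below is only a minimal illustration of that plan, not the actual implementation; Cursor, ReadLock, and passDownPipeline are hypothetical stand-ins for the real DocumentSourceCursor machinery.

{code:cpp}
#include <cstddef>
#include <vector>

struct Document {};

// Stub cursor yielding a fixed number of documents.
class Cursor {
public:
    bool more() const { return _remaining > 0; }
    bool wouldYield() const { return false; }  // e.g. next doc not in memory
    Document next() { --_remaining; return Document{}; }
private:
    std::size_t _remaining = 1000;
};

struct ReadLock {};  // RAII placeholder; real code would lock the collection for reads

void passDownPipeline(const std::vector<Document>& /*batch*/) {
    // Downstream pipeline stages run here, outside the read lock.
}

void runPipeline(Cursor& cursor) {
    const std::size_t kBatchSize = 100;  // ~100 docs per batch, per the plan above
    while (cursor.more()) {
        std::vector<Document> batch;
        {
            ReadLock lk;  // lock held only while filling the batch
            (void)lk;
            while (cursor.more() && batch.size() < kBatchSize && !cursor.wouldYield()) {
                batch.push_back(cursor.next());
            }
        }  // lock released here, before later stages (or a $out write lock) run
        passDownPipeline(batch);
    }
}

int main() {
    Cursor c;
    runPipeline(c);
}
{code}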
|