Type: Improvement
Resolution: Duplicate
Priority: Major - P3
Affects Version/s: 3.2.11
Component/s: Aggregation Framework
Labels: Query
We're running aggregations over tens of millions of documents roughly every 30 minutes, and most of the time the aggregations fail because one of the returned "documents" exceeds the 16MB limit.
I know that MongoDB's internals, the way it copies and moves documents in and out of RAM, would be heavily affected if this limit were raised for stored documents, but aggregation results differ from that in a profound way: most of the time we don't want to store them, we just want to use them.
My use case here is simple: I want to $group documents by an ID, $push ALL unprocessed documents that point to that ID into a sub-field of the group document, fetch them and run some logic over them, and finally bulk-update them to mark them as processed. (We don't care about RAM, since we've thrown ~200GB of RAM at this.) A minimal sketch of this pipeline follows.
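Here's a rough mongo-shell sketch of what I mean; the collection and field names (events, refId, processed) are made up for illustration:
{code}
// Hypothetical collection "events", where "refId" points at the parent
// entity and "processed" marks whether a document has been handled.
// The $group stage accumulates every unprocessed source document via
// $push, so a single group document can easily exceed the 16MB limit.
var groups = db.events.aggregate([
    { $match: { processed: false } },
    { $group: { _id: "$refId", docs: { $push: "$$ROOT" } } }
], { allowDiskUse: true });

groups.forEach(function (group) {
    // ... application logic over group.docs goes here ...

    // Bulk-mark the source documents as processed.
    var ids = group.docs.map(function (doc) { return doc._id; });
    db.events.updateMany(
        { _id: { $in: ids } },
        { $set: { processed: true } }
    );
});
{code}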
It's really a headache that the 16MB limit applies to aggregation results at all! Why is it required? Is it just a code limitation, or a design decision?
(As a workaround, we're currently evaluating ElasticSearch for this workload, but we've been really happy with MongoDB apart from the aggregation limits.)
Issue Links:
- duplicates: SERVER-12305 Allow command request and response BSON objects to exceed 16MB (Backlog)
- is related to: SERVER-5923 Increase max document size to at least 64mb (Closed)