Core Server / SERVER-27013

Allow results of the aggregation framework to exceed 16MB.



    • Type: Improvement
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Duplicate
    • Affects Version/s: 3.2.11
    • Fix Version/s: None
    • Component/s: Aggregation Framework
    • Labels: None


      We're running aggregations on tens of millions of documents roughly every 30 minutes, and most of the time the aggregations fail because one of the returned "documents" exceeds the strange 16 MB limit.

      I know that MongoDB's internal structure, the way it copies and moves documents in and out of RAM, would be heavily affected if this limit were increased for "stored documents", but aggregation results differ from this in a profound way: most of the time we don't want to store them, we just want to use them.

      My use case here is simple: I want to $group documents by an ID, then $push ALL unprocessed documents that point to that ID into a sub-field of the grouped document, fetch them and run some logic on them, and finally bulk-update them to mark them as processed. (We don't care about RAM usage, since we've dedicated ~200 GB of RAM to this.)
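      A minimal sketch of the pipeline described above, written with pymongo-style syntax. The collection name "events" and the field names "groupId" and "processed" are hypothetical stand-ins, since the ticket does not name them:

```python
# Hypothetical sketch: group unprocessed documents by a shared ID and
# push each full document into a sub-array of the grouped result.
pipeline = [
    {"$match": {"processed": False}},      # keep only unprocessed documents
    {"$group": {
        "_id": "$groupId",                 # one output document per ID
        "docs": {"$push": "$$ROOT"},       # push every matching document whole
    }},
]

# With pymongo the results would be consumed from a cursor, e.g.:
#   for grp in db.events.aggregate(pipeline, allowDiskUse=True):
#       ids = [d["_id"] for d in grp["docs"]]
#       ...run the processing logic on grp["docs"]...
#       db.events.update_many({"_id": {"$in": ids}},
#                             {"$set": {"processed": True}})

# Note: even when results are returned as a cursor, each individual
# group document must still fit within the 16 MB BSON limit, which is
# exactly where this workload fails when one ID has too many documents.
print(len(pipeline))
```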

      It's really a headache that the 16 MB limit applies to aggregation results at all! Why is it required? Is it just a code limitation, or is it a design decision?

      (As an alternative, we're currently evaluating Elasticsearch for this workload, but we've been really happy with MongoDB apart from the aggregation limits.)


              Assignee: Backlog - Query Team (Inactive) (backlog-server-query)
              Reporter: Alireza (SpiXy)