[SERVER-20363] Aggregation's $push accumulator should error when it receives more than 16MB of data Created: 10/Sep/15 Updated: 06/Dec/22
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | Aggregation Framework |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Charlie Swanson | Assignee: | Backlog - Query Optimization |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Assigned Teams: | Query Optimization |
| Participants: | |
| Comments |
| Comment by Mathias Stearn [ 04/Nov/15 ] |
|
Reopening following discussion. We should limit the size in $push, since any operation that depends on pushing more than 16MB into an array is likely to fail anyway when sharded. |
| Comment by Charlie Swanson [ 04/Nov/15 ] |
|
I'm closing this ticket because it strikes me that the current behavior is correct. The only restriction we place on the size of documents in the pipeline is that each must be at most 16MB by the end of the pipeline. Documents are allowed to exceed this size while being processed, so an intermediate array larger than 16MB might end up small enough to fit within the limit by the time the pipeline finishes (e.g. after a $slice or an $arrayElemAt). cc redbeard0531 |
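As an illustration of the pattern discussed above, here is a minimal sketch (PyMongo), assuming a local mongod and a hypothetical events collection with userId and payload fields: $push may accumulate an intermediate array that exceeds 16MB, and a subsequent $slice trims it so the final document fits under the BSON document size limit.

```python
# Minimal sketch; the "events" collection and its field names are hypothetical.
from pymongo import MongoClient

client = MongoClient()  # assumes a local mongod on the default port
coll = client["test"]["events"]

pipeline = [
    # $push builds one array per user; this intermediate array may exceed 16MB.
    {"$group": {
        "_id": "$userId",
        "payloads": {"$push": "$payload"},
    }},
    # Trim to the last 100 entries so the document emitted by the pipeline
    # stays under the 16MB limit.
    {"$project": {
        "recentPayloads": {"$slice": ["$payloads", -100]},
    }},
]

for doc in coll.aggregate(pipeline, allowDiskUse=True):
    print(doc["_id"], len(doc["recentPayloads"]))
```

If the array were not trimmed before the pipeline emits its result, the oversized document would cause the aggregation to fail, which is the behavior this ticket discusses.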