[SERVER-44878] Support to read and write to/from a parallel data pipeline as part of aggregation stages Created: 28/Nov/19 Updated: 06/Dec/22
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | Aggregation Framework |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Mohammed Siraj Ahmed | Assignee: | Backlog - Query Optimization |
| Resolution: | Unresolved | Votes: | 1 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Assigned Teams: | Query Optimization |
| Participants: | |
| Description |
Ability to read and write variable values to an in-memory data pipeline from within aggregation pipeline stages. As documents stream through the pipeline stages, this functionality could open up enormous computational capabilities; the conditional constructs ($switch, $cond) and the added regex support are already very useful. Scenario: a document could record its index position before and after a $sort stage and calculate its offset (see the sketch below). Streaming behavior may vary on sharded clusters, but the onus would be on end users to decide how to leverage the feature; MongoDB need not promise any particular sequencing behavior, only provide the ability for end users to exploit stream-based sequencing wherever possible.
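To make the scenario concrete, here is a minimal sketch of what the request might look like in the shell, against a hypothetical scores collection. The $$STREAM.index variable is invented purely for illustration; no such variable exists in MongoDB.

```javascript
// Hypothetical sketch only: $$STREAM.index does not exist in MongoDB; it
// stands in for the requested stream read/write capability. A document
// records its ordinal position before and after a $sort, then computes
// how far the sort moved it.
db.scores.aggregate([
  { $set: { posBefore: "$$STREAM.index" } }, // hypothetical: position on entering this stage
  { $sort: { score: -1 } },
  { $set: { posAfter: "$$STREAM.index" } },  // hypothetical: position after the sort
  { $set: { offset: { $subtract: ["$posAfter", "$posBefore"] } } }
])
```

For comparison, a rough present-day approximation of this particular scenario is possible on MongoDB 5.0+ with the real $setWindowFields stage and its $documentNumber window operator, though positions are computed relative to explicit sort keys rather than read from the live stream:

```javascript
// Real operators (MongoDB 5.0+): $documentNumber assigns each document's
// 1-based position under the sortBy specification of $setWindowFields.
// Note that _id order is only a proxy for arrival order here, since the
// live stream position is exactly what is not observable today.
db.scores.aggregate([
  { $setWindowFields: { sortBy: { _id: 1 }, output: { posBefore: { $documentNumber: {} } } } },
  { $setWindowFields: { sortBy: { score: -1 }, output: { posAfter: { $documentNumber: {} } } } },
  { $set: { offset: { $subtract: ["$posAfter", "$posBefore"] } } }
])
```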
| Comments |
| Comment by Eric Sedor [ 04/Dec/19 ] |
Thanks for your report, sirajahmed17@gmail.com. We will take a look and get back to you if we have questions.