Core Server / SERVER-44878

Support to read and write to/from a parallel data pipeline as part of aggregation stages

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: Aggregation Framework
    • Labels: None
    • Query Optimization

      Add the ability to read and write variable values to an in-memory data pipeline as part of aggregation pipeline stages. As documents are streamed through the pipeline stages, this functionality could open up enormous computational capabilities. The conditional constructs (switch/case, cond) and the added regex support are already very useful; this would build on them.
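The conditional constructs the reporter refers to already exist as aggregation operators. For context, a minimal pipeline using $switch, $cond, and $regexMatch might look like the following (shown as plain Python dicts, as pymongo would send them; the field names `score` and `name` are illustrative, not from the issue):

```python
# Illustrative aggregation pipeline using the existing conditional operators.
# Field names ("score", "name") are hypothetical examples.
pipeline = [
    {"$addFields": {
        # $switch: multi-branch conditional (like switch/case)
        "grade": {"$switch": {
            "branches": [
                {"case": {"$gte": ["$score", 90]}, "then": "A"},
                {"case": {"$gte": ["$score", 75]}, "then": "B"},
            ],
            "default": "C",
        }},
        # $cond: ternary conditional
        "passed": {"$cond": [{"$gte": ["$score", 50]}, True, False]},
        # $regexMatch: the "added regex support" mentioned above
        "startsWithDigit": {"$regexMatch": {"input": "$name", "regex": r"^\d"}},
    }},
]
```

The feature request goes further: it asks for a way to share mutable state *between* stages while documents stream through, which these per-document operators cannot do today.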

      Scenario: a document can mark its index position before and after the sort stage and calculate its offset.

      I understand that streaming behavior may vary on sharded clusters, but the onus would then be on end users for how they leverage the feature. MongoDB need not promise any sequencing behavior, only the ability for end users to exploit this stream-based sequencing wherever possible.
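The scenario above can be sketched outside MongoDB. This is a minimal simulation of the requested behavior, assuming stages can write a per-document stream index as they go: each "stage" is a generator, a marking stage records the document's current position, and a final stage computes the offset caused by the sort. All function and field names here are hypothetical.

```python
# Simulation of the requested feature: stages tag each document with its
# stream position before and after a sort, then compute the offset.

def mark_position(docs, field):
    # Writes each document's current stream index into `field`.
    for i, doc in enumerate(docs):
        doc[field] = i
        yield doc

def sort_stage(docs, key):
    # Equivalent of a $sort stage on `key`.
    yield from sorted(docs, key=lambda d: d[key])

def add_offset(docs):
    # How far each document moved during the sort.
    for doc in docs:
        doc["offset"] = doc["posAfterSort"] - doc["posBeforeSort"]
        yield doc

docs = [{"score": 30}, {"score": 10}, {"score": 20}]
result = list(add_offset(
    mark_position(
        sort_stage(mark_position(iter(docs), "posBeforeSort"), "score"),
        "posAfterSort")))
# The document with score 30 started at index 0 and ended at index 2,
# so its offset is 2.
```

In today's aggregation framework there is no shared mutable state between stages, so this per-document bookkeeping cannot be expressed; that gap is exactly what the issue requests.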

            Assignee:
            backlog-query-optimization [DO NOT USE] Backlog - Query Optimization
            Reporter:
            sirajahmed17@gmail.com Mohammed Siraj Ahmed
            Votes:
            1
            Watchers:
            12
