Node.js Driver / NODE-4788

[v6 nice to have] Correct Stream Implementation for GridFSBucketWriteStream


      https://docs.google.com/document/d/16dCTvvkMKGwfgY4S3tgDgyOAiq4X-I1HVSdTFdsExHc/edit

      Review the ticket description for general accuracy and completeness

      • Bug - Confirm that the bug still exists
      • Task / Feature / Improvement - Ensure every section of the template is filled out and makes sense
      • Build failure - Investigate and confirm the cause of the build failure
      • Spec change - Check whether any more recent changes have been made to the spec that might affect the implementation requirements

      What is the expected behavior?

      • What do the official driver or server docs currently say about this functionality?
        • What should they say?
          • If revisions or additions are needed, mark the ticket as docs changes needed and fill out the doc changes form
      • What do our api or readme docs currently say about this functionality?
        • What should they say?
        • Capture any revisions or additions in the ticket documentation AC
      • If applicable, what does the common drivers spec say? (Note: your kickoff partner should independently review the spec)
        • Are any clarifications or revisions needed?
      • If applicable, what do other drivers do?
        • If there is no common spec, is a common spec needed?
      • What should the behavior be?
      • Update the ticket description and implementation requirements as needed

      Review and address any unknowns explicitly called out in the ticket

      What will be the impact on users?

      • Who will be impacted?
      • Why might users care about this change?
      • Capture relevant detail in the "User Impact" section of the ticket description

      What will be the impact on any downstream projects? (e.g., shell, mongoose)

      • Update follow up requirements and create subtasks for follow up or coordination actions

      What variables affect the feature in question?

      • Server versions
      • Deployment types
      • Auth settings
      • Server and client configuration options
      • Specific apis / api options
      • Runtime or bundler settings
      • Special sequences of operations
      • Any other special conditions

      How should all the identified variables be tested?

      • Identify happy path and error case combinations of variables
        • Given [variables], when [action is performed], [feature] should [behave in the expected way]
      • How will we achieve the necessary coverage for these cases?
        • Automated spec tests?
          • Are there test runner changes required?
          • How up to date are our current tests and runners?
        • New integration or prose tests?
        • Unit tests?
      • Will we need to modify any existing tests?
      • Is there technical debt that will affect the implementation of new or existing tests?
      • Do we have the necessary tooling infrastructure already in place for any new tests?
      • Update test requirements on the ticket to reflect reality
      • Create subtasks for any testing groundwork that can happen independently of the implementation

      What is the scope of the code changes?

      • List the code bases and the areas of each code base that will need changes
      • Is there technical debt in any of these areas that will affect the implementation?
      • Identify any existing adjacent functionality that could be impacted by these changes
        • Is there sufficient existing test coverage for the adjacent functionality?
          • Update ticket test AC and create subtask(s) to cover existing functionality if coverage is missing
      • If multiple libraries are affected, determine the order in which changes need to go in
      • Create subtasks for the implementation (at least one per affected codebase)

      What is the expected impact on performance?

      • Do we have existing performance coverage for the affected areas?
      • Do we need to add new coverage?
        • Update ticket test AC and create subtask(s) as needed

      Consider backport requirements

      • Should this be backported?
      • What would be the cost of a backport?

      Is the metadata of this ticket accurate and complete?

      • Double check the acceptance criteria to ensure it accurately captures the expected behavior, test, and follow-up requirements
      • Double check the documentation requirements
      • Double check the task breakdown to ensure it covers all actionable items in the ticket AC

      1. What would you like to communicate to the user about this feature?
      2. Would you like the user to see examples of the syntax and/or executable code and its output?
      3. Which versions of the driver/connector does this apply to?


      Use Case

      As a developer
      I want a correct Writable stream implementation for GridFSBucketWriteStream
      So that the stream adheres to Node.js standards.

      User Impact

      Corrects the writable stream implementation so GridFSBucketWriteStream conforms to standard Node.js stream behavior

      Dependencies

      • None

      Unknowns

      • Effect on TypeScript types

      Acceptance Criteria

      • Remove the override of the write method in favour of _write
      • Use destroy() instead of manually emitting the terminal events

      Implementation Requirements

      Node.js streams are meant to be implemented via the underscore-prefixed methods; the non-prefixed methods provide a consistent API regardless of the underlying stream (generally allowing any stream to be chained to another).
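A minimal generic sketch of that contract (illustrative only, not GridFSBucketWriteStream itself): subclass Writable and implement only the underscore-prefixed hooks, and Node core supplies write()/end(), buffering, backpressure, and the 'finish'/'close'/'drain' events.

```typescript
import { Writable } from 'stream';
import { once } from 'events';

// Sketch: implement _write/_final rather than overriding write()/end().
class ChunkCollector extends Writable {
  chunks: Buffer[] = [];

  // Called once per chunk; Node waits for callback() before delivering
  // the next chunk, which is how backpressure is enforced.
  _write(chunk: Buffer, _enc: BufferEncoding, callback: (e?: Error | null) => void): void {
    this.chunks.push(chunk);
    callback();
  }

  // Called once after end(); 'finish' is emitted after callback() runs.
  _final(callback: (e?: Error | null) => void): void {
    callback();
  }
}

const s = new ChunkCollector();
s.write(Buffer.from('ab'));
s.end(Buffer.from('cd'));
await once(s, 'finish');
console.log(Buffer.concat(s.chunks).toString()); // → abcd
```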

      We currently have an implementation of write and end, as well as manual emits of the 'finish', 'close', and 'drain' events. Remove all manual emits.

      We should change our write to _write and our end to _final, and stop emitting the 'finish', 'error', and 'close' events ourselves.

      The helpers for write (doWrite) and end (writeRemnant) will now always be given a callback that must always be invoked; making the callback required and carefully following the helpers' early-return cases will correct that.

      Our write/end methods ensured indexes were created prior to beginning their respective operations. We can use _construct to ensure the indexes exist before _write/_final are entered.
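A sketch of that approach, with a hypothetical ensureIndexes() standing in for the driver's real index-creation step: Node buffers any write() calls until the _construct callback fires, so _write can never run before setup completes.

```typescript
import { Writable } from 'stream';
import { once } from 'events';

class IndexedWriteStream extends Writable {
  indexesReady = false;
  sawReadyInWrite = false;

  // Placeholder for creating indexes on the files/chunks collections.
  private async ensureIndexes(): Promise<void> {
    this.indexesReady = true;
  }

  // Runs before any _write/_final; writes queue until callback() fires.
  _construct(callback: (e?: Error | null) => void): void {
    this.ensureIndexes().then(() => callback(), callback);
  }

  _write(_chunk: Buffer, _enc: BufferEncoding, callback: (e?: Error | null) => void): void {
    this.sawReadyInWrite = this.indexesReady; // always true by now
    callback();
  }
}

const ws = new IndexedWriteStream();
ws.end(Buffer.from('chunk'));
await once(ws, 'finish');
console.log(ws.sawReadyInWrite); // → true
```

Note that _construct requires Node.js 15+, which is satisfied by the driver v6 minimum runtime.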

      We should not invoke callbacks synchronously; consider refactoring to async/await.
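A small demonstration of why async/await helps here: promise continuations always run on the microtask queue, so an async helper can never invoke its caller's continuation synchronously, whereas a plain callback-style helper can accidentally call back in the same tick and invite reentrancy.

```typescript
// Sketch: doWrite is a stand-in for a helper that completes its work
// synchronously. Returning a promise still defers the continuation.
async function doWrite(chunk: Buffer): Promise<number> {
  return chunk.length; // pretend the chunk was flushed
}

const order: string[] = [];
const p = doWrite(Buffer.from('abc')).then(() => order.push('callback'));
order.push('caller continues first');
await p;
console.log(order); // → [ 'caller continues first', 'callback' ]
```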

      Remove boolean return values from all helpers (except isAborted).

      The 'finish' and 'drain' events are not supposed to receive any arguments; however, we currently emit the file document with them. We should instead store the file document as a class property once the upload is complete.
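A sketch of the proposed shape (the gridFSFile property name and document shape are illustrative): store the files document on the instance and let 'finish' fire with no arguments, per the stream contract.

```typescript
import { Writable } from 'stream';
import { once } from 'events';

class UploadStream extends Writable {
  gridFSFile: { _id: number } | null = null;

  _write(_c: Buffer, _e: BufferEncoding, cb: (err?: Error | null) => void): void {
    cb();
  }

  _final(cb: (err?: Error | null) => void): void {
    this.gridFSFile = { _id: 1 }; // stand-in for the inserted files doc
    cb(); // Node emits 'finish' (with no arguments) after this
  }
}

const up = new UploadStream();
let finishArgCount = -1;
up.on('finish', (...args: unknown[]) => { finishArgCount = args.length; });
up.end(Buffer.from('data'));
await once(up, 'finish');
console.log(finishArgCount, up.gridFSFile); // → 0 { _id: 1 }
```

Consumers then read the file document from the stream instance after 'finish', instead of receiving it as an event argument.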

      Testing Requirements

      • Add unit tests for the stream implementation

      Documentation Requirements

      • Release notes

      Follow Up Requirements

      • None

            Assignee: Neal Beeken (neal.beeken@mongodb.com)
            Reporter: Durran Jordan (durran.jordan@mongodb.com)
            Votes: 0
            Watchers: 1