Priority: Major - P3
Affects Version/s: 2.1.4
Fix Version/s: 2.1.7
The new 2.1 GridFSBucket streams are missing a feature: once the Readable or Writable stream has started flowing, there is no way to abort it before it ends.
Aborting is not part of the Node.js Stream API, but the use cases are legitimate nonetheless:
- a GridFSBucketReadStream piped into an HTTP response, where the client aborts before the end: we need to stop the read stream, otherwise the internal cursor will likely leak
- a readable stream piped into a GridFSBucketWriteStream, where the readable stream errors (for example, for an incoming HTTP message the client may abort or the network may fail): we need to abort/destroy the GridFSBucketWriteStream before it is committed, because otherwise a truncated GridFS file would be committed and other GridFS clients would have no way to know.
Other writable stream implementations usually provide a `destroy` method, and other modules rely on that pseudo-convention.
See http://maxogden.com/node-streams.html, https://www.npmjs.com/package/through and https://www.npmjs.com/package/through2.
Also, https://www.npmjs.com/package/pump tries `destroy`, `close`, and other known teardown methods.
- Node's fs streams have `close`
- request implements `abort`
- aws-sdk does not implement a writable stream for uploading to S3; instead it takes a readable stream and writes it to S3. That API has an `abort` method.
I know that GridFS is not atomic like S3, so in some cases aborting a write will not restore the previous state (for example, if the write is an overwrite, the old data is probably deleted before the new data starts being written; at least that was the case in 2.0), but it is still better than leaking cursors, leaking chunks, or committing a truncated file.