- Type: Bug
- Resolution: Fixed
- Priority: Critical - P2
- Affects Version/s: None
- Component/s: mongofiles
- Labels: None
- Backport Requested: v4.0, v3.6
If put_id is attempted when a file already exists with that _id, the existing file's chunks are deleted from the chunks collection, but its entry in the files collection is left untouched, leaving an orphaned files document.
This is almost certainly a bug in mgo, so it will be fixed once mongofiles is ported to the new Go driver. It's serious enough, however, that it will need a backport.
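For illustration, the observed end state is consistent with mgo issuing roughly the following sequence of operations, expressed here as equivalent shell commands. This is a reconstruction from the repro below, not confirmed against the mgo source, and the BinData payload is elided:

// the new chunk insert fails because a chunk with {files_id: 1, n: 0}
// already exists under the unique files_id_1_n_1 index
db.fs.chunks.insert({ files_id: 1, n: 0, data: BinData(0, "...") })
// mgo's abort path then appears to remove every chunk for that files_id,
// which also deletes the ORIGINAL file's chunks:
db.fs.chunks.remove({ files_id: 1 })
// the new fs.files document is never inserted, and the old one is never
// touched, so an orphaned files entry with no chunks is left behind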
repro:
$ mongofiles --db=bugtest put_id myfile.data 1
2019-02-27T13:01:07.870-0500	connected to: localhost
2019-02-27T13:01:07.871-0500	added file: myfile.data
$ mongo bugtest
> db.fs.files.find()
{ "_id" : 1, "chunkSize" : 261120, "uploadDate" : ISODate("2019-02-27T18:01:07.915Z"), "length" : 12, "md5" : "6f5902ac237024bdd0c176cb93063dc4", "filename" : "myfile.data" }
> db.fs.chunks.find()
{ "_id" : ObjectId("5c76d063bb84e01bee38eff6"), "files_id" : 1, "n" : 0, "data" : BinData(0,"aGVsbG8gd29ybGQK") }
> exit
bye
$ mongofiles --db=bugtest put_id myfile.data 1
2019-02-27T13:01:44.435-0500	connected to: localhost
2019-02-27T13:01:44.435-0500	added file: myfile.data
2019-02-27T13:01:44.438-0500	Failed: error while storing 'myfile.data' into GridFS: E11000 duplicate key error collection: bugtest.fs.chunks index: files_id_1_n_1 dup key: { : 1, : 0 }
$ mongo bugtest
> db.fs.files.find()
{ "_id" : 1, "chunkSize" : 261120, "uploadDate" : ISODate("2019-02-27T18:01:07.915Z"), "length" : 12, "md5" : "6f5902ac237024bdd0c176cb93063dc4", "filename" : "myfile.data" }
> db.fs.chunks.find()
> exit
bye
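Until the fix is released, affected databases can be checked for (and cleaned of) orphaned files entries from the mongo shell. A minimal sketch, assuming the default "fs" bucket prefix:

// print any fs.files entry whose chunks are gone
db.fs.files.find().forEach(function (f) {
    if (db.fs.chunks.count({ files_id: f._id }) === 0) {
        print("orphaned files entry: " + tojson(f._id));
    }
});

// remove the orphaned entry from the repro above, then re-run put_id
db.fs.files.remove({ _id: 1 });

Removing the existing file with mongofiles delete_id before re-uploading should also avoid triggering the failed chunk insert in the first place.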