|
Author: Uladzimir Mihura (trnl) <trnl.me@gmail.com>
Message: JAVA-851: disable check for upper bound on chunkSize in GridFS
Branch: 2.11.x
https://github.com/mongodb/mongo-java-driver/commit/150e08e177963c4791f32189c68f2a050199efee
|
|
Author: Uladzimir Mihura (trnl) <trnl.me@gmail.com>
Message: JAVA-851: disable check for upper bound on chunkSize in GridFS
Branch: master
https://github.com/mongodb/mongo-java-driver/commit/ebbf0dcfdc3691f499ffbe287ce40c192069abea
|
|
Author: Uladzimir Mihura (trnl) <trnl.me@gmail.com>
Message: fix(driver-compat) JAVA-851 disable check for upper bound on chunkSize in GridFS
Branch: 3.0.x
https://github.com/mongodb/mongo-java-driver/commit/af2f4b03e5d066367de7c61cfd5aa9155e81a32b
|
|
How about dropping this check entirely and relying on the underlying serialization engine?
For 2.x we will have:
com.mongodb.MongoInternalException: DBObject of size 16777278 is over Max BSON size 16777216
	at com.mongodb.OutMessage.putObject(OutMessage.java:291)
	at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:242)
	at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:207)
	at com.mongodb.DBCollection.insert(DBCollection.java:146)
	at com.mongodb.DBCollection.insert(DBCollection.java:89)
	at com.mongodb.DBCollection.save(DBCollection.java:819)
	at com.mongodb.DBCollection.save(DBCollection.java:795)
	at com.mongodb.gridfs.GridFSInputFile._dumpBuffer(GridFSInputFile.java:277)
	at com.mongodb.gridfs.GridFSInputFile.saveChunks(GridFSInputFile.java:227)
	at com.mongodb.gridfs.GridFSInputFile.save(GridFSInputFile.java:177)
	at com.mongodb.gridfs.GridFSTest.testBadChunkSize2(GridFSTest.java:292)
|
For 3.x:
org.bson.BSONSerializationException: Size 16777278 is larger than MaxDocumentSize 16777216.
	at org.bson.BSONBinaryWriter.backpatchSize(BSONBinaryWriter.java:367)
	at org.bson.BSONBinaryWriter.writeEndDocument(BSONBinaryWriter.java:326)
	at com.mongodb.codecs.DBObjectCodec.encode(DBObjectCodec.java:86)
	at com.mongodb.codecs.DBObjectCodec.encode(DBObjectCodec.java:47)
	at com.mongodb.codecs.CompoundDBObjectCodec.encode(CompoundDBObjectCodec.java:50)
	at com.mongodb.codecs.CompoundDBObjectCodec.encode(CompoundDBObjectCodec.java:27)
	at org.mongodb.operation.protocol.RequestMessage.addDocument(RequestMessage.java:87)
	at org.mongodb.operation.protocol.InsertMessage.encodeMessageBody(InsertMessage.java:42)
	at org.mongodb.operation.protocol.RequestMessage.encode(RequestMessage.java:76)
	at org.mongodb.operation.protocol.WriteProtocol.sendMessage(WriteProtocol.java:75)
	at org.mongodb.operation.protocol.WriteProtocol.execute(WriteProtocol.java:64)
	at org.mongodb.operation.WriteOperationBase.execute(WriteOperationBase.java:52)
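
For reference, a minimal sketch of how the failure above can be reproduced through the public 2.x GridFS API once the explicit upper-bound check is gone. The connection details and file contents here are assumptions for illustration; the exception is the one shown in the 2.x trace above.

import com.mongodb.DB;
import com.mongodb.MongoClient;
import com.mongodb.MongoInternalException;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSInputFile;

public class OversizedChunkExample {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient();      // assumes a local mongod
        DB db = client.getDB("test");
        GridFS gridFS = new GridFS(db, "fs");

        byte[] data = new byte[17 * 1024 * 1024];    // 17MB payload
        GridFSInputFile file = gridFS.createFile(data);
        file.setChunkSize(17L * 1024 * 1024);        // above the 16MB BSON document limit

        try {
            file.save();                             // the single chunk exceeds max BSON size
        } catch (MongoInternalException e) {
            // 2.x: "DBObject of size ... is over Max BSON size 16777216"
            System.out.println(e.getMessage());
        }
        client.close();
    }
}

In other words, the serialization layer already enforces the real limit, so the extra chunkSize check adds nothing.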
|
|
|
By the way, I am using "files_id" + "-" + "n" as the chunk's _id, and removing the old index {"files_id":1, "n":1}.
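
For illustration only, a hypothetical sketch of what such a chunk document could look like with a composite string _id of the form "<files_id>-<n>"; this is not the driver's implementation, just the shape described above.

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import org.bson.types.ObjectId;

// Hypothetical helper: store each chunk under a composite _id "<files_id>-<n>"
// so the separate {"files_id":1, "n":1} index becomes redundant.
public final class CompositeChunkId {
    static void insertChunk(DBCollection chunks, ObjectId filesId, int n, byte[] payload) {
        BasicDBObject chunk = new BasicDBObject("_id", filesId.toHexString() + "-" + n)
                .append("files_id", filesId)
                .append("n", n)
                .append("data", payload);
        chunks.insert(chunk);    // lookup by _id replaces the compound index
    }
}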
|
|
Thanks.
In my tests, when uploading small files (around 1KB), 256KB is a good chunk size; when uploading large files (larger than 10MB), 2MB is a good choice. Using 2MB as the chunk size reduces the overall size of the database's indexes.
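
As a sketch of how those numbers could be applied with the 2.x API (the 10MB threshold and the helper names are just this comment's example, not anything in the driver):

import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSInputFile;

import java.io.File;
import java.io.FileInputStream;

// Sketch: pick a chunk size per file based on its length, per the measurements above.
class ChunkSizeChooser {
    static long chunkSizeFor(long fileLength) {
        return fileLength > 10L * 1024 * 1024
                ? 2L * 1024 * 1024     // large files: 2MB chunks, fewer chunk documents and index entries
                : 256L * 1024;         // small files: 256KB chunks
    }

    static void upload(GridFS gridFS, File f) throws Exception {
        GridFSInputFile in = gridFS.createFile(new FileInputStream(f), f.getName());
        in.setChunkSize(chunkSizeFor(f.length()));
        in.save();
    }
}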
|
|
OK, thanks.
|
|
It should be fine to increase the chunk size.
|