[SERVER-39833] BSONObjectTooLarge when updating 250k documents with an unordered bulk write Created: 26/Feb/19  Updated: 05/Apr/19  Resolved: 05/Apr/19

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: 4.0.2
Fix Version/s: None

Type: Bug Priority: Minor - P4
Reporter: Anon Assignee: Eric Sedor
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Operating System: ALL
Steps To Reproduce:

Again, sorry: I have deadlines to meet, so I can't produce a minimal example right now.

 Description   

Sorry, this will be a mostly useless bug report, because I don't have time to put together a minimal reproducible example. I'm only reporting it because I figure it might give some clues if someone else comes across a similar issue.

I was pushing about 250,000 updates to the server at once via an unordered bulk write, and was getting this message:

Assertion: BSONObjectTooLarge: BSONObj size: 17390105 (0x1095A19) is invalid. Size must be between 0 and 16793600(16MB) First element: update: "my-collection-name" src/mongo/bson/bsonobj.cpp 101
 
D COMMAND  [conn14143] assertion while parsing command: BSONObjectTooLarge: BSONObj size: 17390105 (0x1095A19) is invalid. Size must be between 0 and 16793600(16MB) First element: update: "my-collection-name"
 
I COMMAND  [conn14143] query  numYields:0 ok:0 errMsg:"BSONObj size: 17390105 (0x1095A19) is invalid. Size must be between 0 and 16793600(16MB) First element: update: \"my-collection-name\"" errName:BSONObjectTooLarge errCode:10334 reslen:245 locks:{} 0ms

I solved it by breaking the writes into groups of 1000.

I confirmed that each of the update objects in the ops array was only a few hundred bytes at the absolute most. Each update object was similar to `{$set: {prop1: {a: 1, b: 2, c: 3}}}`.
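
For what it's worth, here's a minimal sketch of the batching workaround, assuming the Node.js driver; `collection` and the shape of `ops` are placeholders, not the actual application code:

```javascript
// Hedged sketch of the workaround: split a large unordered bulk write
// into groups of 1000 so no single update command comes close to the
// 16MB BSON document limit.
const BATCH_SIZE = 1000;

async function bulkWriteInBatches(collection, ops) {
  for (let i = 0; i < ops.length; i += BATCH_SIZE) {
    // Each element of `ops` is e.g.
    // { updateOne: { filter: { _id: id },
    //                update: { $set: { prop1: { a: 1, b: 2, c: 3 } } } } }
    await collection.bulkWrite(ops.slice(i, i + BATCH_SIZE), { ordered: false });
  }
}
```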

Here are my mongo details:

> mongo --version
MongoDB shell version v4.0.2
git version: fc1573ba18aee42f97a3bb13b67af7d837826b47
OpenSSL version: OpenSSL 1.1.0g  2 Nov 2017
allocator: tcmalloc
modules: none
build environment:
    distmod: ubuntu1804
    distarch: x86_64
    target_arch: x86_64

Here are my driver details:

I NETWORK  [conn14177] received client metadata from 3.81.81.59:59234 conn14177: { driver: { name: "nodejs", version: "3.1.13" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-1066-aws" }, platform: "Node.js v10.15.0, LE, mongodb-core: 3.1.11" }

If this is expected behaviour (e.g. because bulkWrite can't handle 250k ops at a time), then it would be neat if the error message were a bit clearer.



 Comments   
Comment by Eric Sedor [ 05/Apr/19 ]

Hi,

We haven’t heard back from you for some time, so I’m going to mark this ticket as resolved. If this is still an issue for you, please let us know specifically how the bulkWrite is being issued.

Regards,
Eric

Comment by Eric Sedor [ 20/Mar/19 ]

Hi,

We still need additional information to diagnose the problem. If this is still an issue for you, would you please clarify how the bulkWrite is being issued in code?

Thanks,
Eric

Comment by Eric Sedor [ 04/Mar/19 ]

The driver should be silently splitting a bulk write of 250k statements into smaller batches of up to 100k each, as described here, so it's possible this is similar to NODE-1778.

However, it's possible to invoke db.runCommand directly to avoid this protection. anon2313, can you clarify how the bulkWrite is being issued in code?
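
To illustrate the distinction, a hedged sketch of the two code paths (`db`, the collection name, and the shape of `ops` are placeholders, not the reporter's code; only the second path bypasses the driver's batch splitting):

```javascript
// Path 1: collection.bulkWrite — the 3.x Node.js driver splits the
// 250k statements into server-sized batches before sending them.
await db.collection('my-collection-name').bulkWrite(ops, { ordered: false });

// Path 2: issuing the update command directly — the driver does no
// splitting here, so a large enough updates array yields a single
// >16MB command document and the server rejects it with
// BSONObjectTooLarge (code 10334), as in the logs above.
await db.command({
  update: 'my-collection-name',
  updates: ops.map(op => ({ q: op.updateOne.filter, u: op.updateOne.update })),
  ordered: false
});
```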

Comment by Eric Sedor [ 27/Feb/19 ]

Thanks anon2313, we understand and will look into this.

Comment by Anon [ 26/Feb/19 ]

I thought maybe it had something to do with this: https://jira.mongodb.org/browse/SERVER-14123
