[SERVER-39833] BSONObjectTooLarge when updating 250k documents with an unordered bulk write Created: 26/Feb/19 Updated: 05/Apr/19 Resolved: 05/Apr/19 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | 4.0.2 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Minor - P4 |
| Reporter: | Anon | Assignee: | Eric Sedor |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Operating System: | ALL |
| Steps To Reproduce: | Again, sorry, I have deadlines to meet, so I can't produce a minimal example right now. |
| Participants: |
| Description |
|
Sorry, this will be a mostly useless bug report, because I don't have time to put together a minimal reproducible example. I'm only reporting it because I figure it might help give some clues if someone else comes across a similar issue. I was pushing about 250,000 updates to the server at once via an unordered bulk write, and was getting this message:
I solved it by breaking the writes into groups of 1000. I confirmed that each of the update objects in the ops array was only a few hundred bytes at the absolute most. Each update object was similar to `{$set:{prop1: {a:1,b:2,c:3}}}` Here are my mongo details:
Here are my driver details:
If this is expected behaviour (e.g. because bulkWrite can't handle 250k operations at a time), then it would be neat if the error message were a bit clearer. |
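The workaround described above (splitting 250,000 operations into groups of 1,000) can be sketched roughly as follows. This is an illustrative Python sketch, not the reporter's actual code; `collection.bulk_write` in the trailing comment stands in for whichever driver call is actually used.

```python
# Sketch of the reported workaround: split a large list of update
# operations into batches of 1,000 and issue each batch separately.

def chunked(ops, batch_size=1000):
    """Yield successive batches of at most batch_size operations."""
    for start in range(0, len(ops), batch_size):
        yield ops[start:start + batch_size]

# Example: 250,000 small $set updates, as in the report.
ops = [{"$set": {"prop1": {"a": 1, "b": 2, "c": 3}}} for _ in range(250_000)]

batches = list(chunked(ops))
print(len(batches))      # 250 batches
print(len(batches[0]))   # 1000 ops in each

# In practice, each batch would be sent as its own unordered bulk write:
# for batch in chunked(ops):
#     collection.bulk_write(batch, ordered=False)
```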
| Comments |
| Comment by Eric Sedor [ 05/Apr/19 ] |
|
Hi, we haven't heard back from you for some time, so I'm going to mark this ticket as resolved. If this is still an issue for you, please let us know specifically how the bulkWrite is being issued. Regards, |
| Comment by Eric Sedor [ 20/Mar/19 ] |
|
Hi, We still need additional information to diagnose the problem. If this is still an issue for you, would you please clarify how the bulkWrite is being issued in code? Thanks, |
| Comment by Eric Sedor [ 04/Mar/19 ] |
|
The driver should be silently splitting a bulk write of 250k statements into smaller batches of up to 100k each, as described here, so it's possible this is similar to a previously reported issue. However, it's possible to invoke db.runCommand directly and bypass this protection. anon2313, can you clarify how the bulkWrite is being issued in code? |
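A back-of-the-envelope calculation suggests why bypassing the driver's batching could trigger this error. The per-statement size below is an assumption (the reporter said each update was "a few hundred bytes at the absolute most"); the point is only that 250k statements packed into a single command document can plausibly exceed the 16 MB BSON document limit.

```python
# Rough size estimate (per-op size is an assumption, not measured):
# 250,000 statements at ~100 bytes each would exceed the 16 MB BSON
# document limit if sent as one command, which would explain a
# BSONObjectTooLarge error when the driver's automatic batching
# is bypassed (e.g. via db.runCommand).

BSON_MAX_BYTES = 16 * 1024 * 1024   # 16 MiB BSON document size limit
APPROX_OP_BYTES = 100               # assumed average per-statement size
NUM_OPS = 250_000

total = APPROX_OP_BYTES * NUM_OPS
print(total)                    # 25,000,000 bytes
print(total > BSON_MAX_BYTES)   # True: one command document would be too large
```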
| Comment by Eric Sedor [ 27/Feb/19 ] |
|
Thanks anon2313, we understand and will look into this. |
| Comment by Anon [ 26/Feb/19 ] |
|
I thought maybe it had something to do with this: https://jira.mongodb.org/browse/SERVER-14123 |