[SERVER-82450] MongoServerError: batched writes must generate a single applyOps entry Created: 26/Oct/23 Updated: 24/Jan/24 Resolved: 11/Jan/24 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | 7.0.2 |
| Fix Version/s: | 7.3.0-rc0, 7.0.6 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Sven Varkel | Assignee: | Matt Kneiser |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Assigned Teams: |
Storage Execution
|
| Backwards Compatibility: | Fully Compatible | ||||||||||||
| Operating System: | ALL | ||||||||||||
| Backport Requested: |
v7.2, v7.1, v7.0
|
| Steps To Reproduce: |
|
| Sprint: | Execution Team 2024-01-08, Execution Team 2024-01-22 | ||||||||||||
| Participants: | |||||||||||||
| Description |
|
Since MongoDB 7 (and perhaps even since 6) there is an issue that prevents moving large collections between databases using db.runCommand({renameCollection: 'database1.a', to: 'database2.a'}). It fails with the error: MongoServerError: batched writes must generate a single applyOps entry. The logs indicate that the server performs the rename as a transaction, and the well-known 16 MB limit is hit when a larger collection is moved. |
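The failing command from the description, as a mongosh fragment (the database and collection names database1.a / database2.a are the ones used in the report; this requires a live 7.0.2 deployment to reproduce):

```javascript
// Cross-database rename of a large collection, as reported:
db.runCommand({ renameCollection: "database1.a", to: "database2.a" });
// On an affected version this fails with:
// MongoServerError: batched writes must generate a single applyOps entry
```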
| Comments |
| Comment by Githook User [ 19/Jan/24 ] | |||||||||||||||||||
|
Author: Matt Kneiser <matt.kneiser@mongodb.com> (themattman)
Message: (cherry picked from commit 532cd3934c9b734420bb36d296466bb70d8ad38b) GitOrigin-RevId: 2d13dc1647052ed393384e86fe2163b5528b1420 | |||||||||||||||||||
| Comment by Githook User [ 11/Jan/24 ] | |||||||||||||||||||
|
Author: Matt Kneiser <matt.kneiser@mongodb.com> (themattman)
Message: GitOrigin-RevId: 532cd3934c9b734420bb36d296466bb70d8ad38b | |||||||||||||||||||
| Comment by Sven Varkel [ 04/Jan/24 ] | |||||||||||||||||||
|
Thanks a lot, Matt and others! That makes sense as the collection that I tried to rename over to another db contains fairly large documents indeed. I tried that server parameter and it helped for now! Thanks a lot. | |||||||||||||||||||
| Comment by Matt Kneiser [ 04/Jan/24 ] | |||||||||||||||||||
|
Hi Sven, Thanks for reporting this issue. I have three clarifying points to make about the report:
While a fix is being evaluated and will be part of a future release, in the meantime setting the server parameter maxSizeOfBatchedInsertsForRenameAcrossDatabasesBytes to a smaller value, for example 3000000 (~3 MB), will alleviate the issue and allow renameCollection to complete. A smaller value only results in slightly worse performance but is more likely to succeed. The default value of this server parameter is 16 MB - 1,000 bytes. Note that this server parameter only limits the size of batched inserts; individual documents are still confined to the server's 16 MB limit, and are properly handled as individual inserts if they exceed the batch size limit. | |||||||||||||||||||
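The workaround above can be sketched as a short mongosh session. This is an illustrative fragment, not an official procedure; the parameter name and the ~3 MB value come from the comment, and the rename targets are the ones from the report. It requires a running deployment:

```javascript
// Lower the batch-size budget for batched inserts during
// cross-database renames to ~3 MB (default is 16 MB - 1,000 bytes):
db.adminCommand({
  setParameter: 1,
  maxSizeOfBatchedInsertsForRenameAcrossDatabasesBytes: 3000000
});

// Then retry the cross-database rename from the report:
db.adminCommand({ renameCollection: "database1.a", to: "database2.a" });
```

Setting the parameter with setParameter changes it only until the next restart; it can also be set at startup via --setParameter if the smaller value should persist.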
| Comment by Edwin Zhou [ 14/Dec/23 ] | |||||||||||||||||||
|
Hi sven.varkel+mongodb@gmail.com, Thank you for your patience while I investigate this issue. I also suspect that the renameCollection operation creates a series of batched writes that are appended to a single applyOps entry, which ends up exceeding the 16 MB document limit and therefore aborts the transaction with "TransactionTooLarge". I'm sending this ticket over to our Catalog and Routing team to investigate this behavior further. Kind regards, | |||||||||||||||||||
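The suspected mechanism can be illustrated with a small, hypothetical sketch (this is not the server's actual implementation; `batchByBytes` and its greedy strategy are illustrative assumptions). Documents are grouped into insert batches under a byte budget; lowering the budget, as the server parameter above does, yields more but smaller batches, so no single applyOps entry approaches the 16 MB limit. A document larger than the budget still gets a batch of its own, matching the note that oversized documents are handled as individual inserts:

```javascript
// Hypothetical sketch: greedily group document sizes (in bytes) into
// batches whose total stays at or below maxBatchBytes. A single document
// larger than the budget is emitted as its own one-document batch.
function batchByBytes(docSizes, maxBatchBytes) {
  const batches = [];
  let current = [];
  let currentBytes = 0;
  for (const size of docSizes) {
    // Flush the current batch if adding this document would overflow it.
    if (current.length > 0 && currentBytes + size > maxBatchBytes) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }
    current.push(size);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

// Five 4 MB documents under a 16 MB budget fit into two batches:
// [[4000000, 4000000, 4000000, 4000000], [4000000]]
// Under a 3 MB budget, each document becomes its own batch (five batches).
```

With the default ~16 MB budget a run of large documents can fill one batch right up to the document limit, which is where the single applyOps entry overflows; a ~3 MB budget keeps every batch comfortably below it.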
| Comment by Edwin Zhou [ 13/Dec/23 ] | |||||||||||||||||||
|
sven.varkel+mongodb@gmail.com, thank you for providing the data. I will take a look. | |||||||||||||||||||
| Comment by Sven Varkel [ 07/Nov/23 ] | |||||||||||||||||||
|
Hi! Thanks for the upload info. I uploaded the files as required. This is the exact command I executed in the shell:
And this is the exact output from that command:
Sven | |||||||||||||||||||
| Comment by Edwin Zhou [ 02/Nov/23 ] | |||||||||||||||||||
|
Hi sven.varkel+mongodb@gmail.com, Thank you for your report. To help us investigate this issue further, could you provide us with some additional information? Most important are the mongod logs covering the failed collection rename attempt. I've created a secure upload portal for you. Files uploaded to this portal are hosted on Box, are visible only to MongoDB employees, and are routinely deleted after some time. For each node in the replica set, spanning a time period that includes the incident, would you please archive (tar or zip) and upload to that link:
Kind regards, |