[SERVER-82450] MongoServerError: batched writes must generate a single applyOps entry Created: 26/Oct/23  Updated: 24/Jan/24  Resolved: 11/Jan/24

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: 7.0.2
Fix Version/s: 7.3.0-rc0, 7.0.6

Type: Bug Priority: Major - P3
Reporter: Sven Varkel Assignee: Matt Kneiser
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Backports
Related
related to SERVER-84779 Fix batch size computation in batched... Backlog
Assigned Teams:
Storage Execution
Backwards Compatibility: Fully Compatible
Operating System: ALL
Backport Requested:
v7.2, v7.1, v7.0
Steps To Reproduce:
  1. Create a large collection "collection1" in database1, preferably with size > 100 MB (a data-generation sketch follows these steps).
  2. Issue a command in the shell to move that collection to database2:
    ```
    db.runCommand({
        renameCollection: "database1.collection1",
        to: "database2.collection1"
    });
    ```
  3. Observe the error.
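
A minimal mongosh sketch for step 1, assuming a scratch deployment; the database name, document count, and payload size are illustrative only, and large individual documents are used since document size is the key factor (see the comments below):
```
// Illustrative data generator: ~120 MB of ~1 MB documents in database1.collection1.
const coll = db.getSiblingDB("database1").getCollection("collection1");
const payload = "x".repeat(1024 * 1024); // ~1 MB string per document
for (let i = 0; i < 120; i++) {
    coll.insertOne({ _id: i, payload: payload });
}
```
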
Sprint: Execution Team 2024-01-08, Execution Team 2024-01-22
Participants:

 Description   

Since MongoDB 7 (and perhaps even since 6) there has been an issue that prevents moving large collections between databases with:

```
db.runCommand({ renameCollection: 'database1.a', to: 'database2.a' })
```

The command fails with the error:

```
MongoServerError: batched writes must generate a single applyOps entry
```

The logs suggest that the rename is performed as a transaction, and the well-known 16 MB BSON document limit is hit when a larger collection is moved.



 Comments   
Comment by Githook User [ 19/Jan/24 ]

Author:

{'name': 'Matt Kneiser', 'email': 'matt.kneiser@mongodb.com', 'username': 'themattman'}

Message: SERVER-82450 Adjust defaults for renameCollection across dbs (#17974)

(cherry picked from commit 532cd3934c9b734420bb36d296466bb70d8ad38b)

GitOrigin-RevId: 2d13dc1647052ed393384e86fe2163b5528b1420
Branch: v7.0
https://github.com/mongodb/mongo/commit/201aa1efdc9368d14412086fc8d012c95de844d5

Comment by Githook User [ 11/Jan/24 ]

Author:

{'name': 'Matt Kneiser', 'email': 'matt.kneiser@mongodb.com', 'username': 'themattman'}

Message: SERVER-82450 Adjust defaults for renameCollection across dbs (#17974)

GitOrigin-RevId: 532cd3934c9b734420bb36d296466bb70d8ad38b
Branch: master
https://github.com/mongodb/mongo/commit/13fd6da0249963df46644f795f1381db45b0c048

Comment by Sven Varkel [ 04/Jan/24 ]

Thanks a lot, Matt and others!

That makes sense, as the collection I tried to rename into another database does indeed contain fairly large documents. I tried that server parameter and it resolved the issue for now. Thanks a lot.

Comment by Matt Kneiser [ 04/Jan/24 ]

Hi Sven,

Thanks for reporting this issue.

I have three clarifying points about the report:

  • As you noted, it only impacts collection renames across different databases, not within the same database.
  • The key determinant of this issue is not the size of the collection but the size of individual documents in it.
  • It only affects 7.0+; 6.0 is not affected.

While a fix is being evaluated and will be part of a future release, in the meantime setting the server parameter maxSizeOfBatchedInsertsForRenameAcrossDatabasesBytes to a smaller value, perhaps near 3000000 bytes (~3 MB), will alleviate the issue and allow renameCollection to complete. A smaller value results in slightly worse performance but is more likely to succeed. The default value of this parameter is 16 MB minus 1,000 bytes. Note that it only limits the size of batched inserts; individual documents are still confined to the server's 16 MB limit, and documents above the batch size limit are properly handled as individual inserts.
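
A minimal sketch of applying this workaround at runtime, assuming a connection with privileges to run setParameter against the admin database; the 3000000-byte value is the illustrative figure suggested above:

```
// Lower the batched-insert size limit for cross-database renameCollection.
// 3000000 bytes (~3 MB) is the workaround value suggested above, not a tuned default.
db.adminCommand({
    setParameter: 1,
    maxSizeOfBatchedInsertsForRenameAcrossDatabasesBytes: 3000000
});
```

The parameter can typically also be supplied at startup, e.g. mongod --setParameter maxSizeOfBatchedInsertsForRenameAcrossDatabasesBytes=3000000.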

Comment by Edwin Zhou [ 14/Dec/23 ]

Hi sven.varkel+mongodb@gmail.com,

Thank you for your patience while I investigate this issue. I also suspect that the renameCollection operation is creating a series of batched writes that are appended to a single applyOps entry, which ends up exceeding the 16MB document limit and therefore aborts the transaction with "TransactionTooLarge".
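
A rough sketch of the suspected arithmetic, assuming the applyOps envelope adds some overhead on top of the raw batch size (the overhead figure below is illustrative, not measured):

```
// A single applyOps oplog entry is itself a BSON document, capped at 16 MB.
const maxBSONSize = 16 * 1024 * 1024;
// The default batch limit for cross-db renames sits just under that cap (16 MB - 1,000).
const defaultBatchLimit = maxBSONSize - 1000;
// Hypothetical envelope overhead from wrapping the batch in applyOps; if it
// exceeds the ~1,000-byte headroom, the entry no longer fits in one document
// and the operation aborts with TransactionTooLarge.
const envelopeOverhead = 4096; // illustrative only
print(defaultBatchLimit + envelopeOverhead > maxBSONSize); // true
```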

I'm sending this ticket over to our Catalog and Routing team to further investigate this behavior.

Kind regards,
Edwin

Comment by Edwin Zhou [ 13/Dec/23 ]

sven.varkel+mongodb@gmail.com, thank you for providing the data. I will take a look.

Comment by Sven Varkel [ 07/Nov/23 ]

Hi!

Thanks for the upload info. I have uploaded the files as required.

This is the exact command I executed in the shell:

```
use admin;
db.runCommand({
    renameCollection: "stablewood_staging.b_backup_underwriting_202310110503",
    to: "backup_stablewood_staging.b_backup_underwriting_202310110503"
});
```

And this is the exact output from that command:

```
{
    "ok" : 0.0,
    "errmsg" : "batched writes must generate a single applyOps entry",
    "code" : 257.0,
    "codeName" : "TransactionTooLarge",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1699363719, 462),
        "signature" : {
            "hash" : BinData(0, "+l1+g2dfZpo1/95mBGqtDwQH0+0="),
            "keyId" : 7264567569183408129
        }
    },
    "operationTime" : Timestamp(1699363719, 461)
}
```

Sven

Comment by Edwin Zhou [ 02/Nov/23 ]

Hi sven.varkel+mongodb@gmail.com,

Thank you for your report. To help us investigate further, could you provide some additional information? Most important are the mongod logs covering the attempted collection move.

I've created a secure upload portal for you. Files uploaded to this portal are hosted on Box, are visible only to MongoDB employees, and are routinely deleted after some time.

For each node in the replica set spanning a time period that includes the incident, would you please archive (tar or zip) and upload to that link:

  • the mongod logs
  • the $dbpath/diagnostic.data directory (the contents are described in the MongoDB documentation)

Kind regards,
Edwin
