Details
Type: Task
Status: Backlog
Priority: Major - P3
Resolution: Unresolved
Description
Documentation Request Summary:
A new command, replSetResizeOplog, allows changing the oplog size on replica set members that use the WiredTiger storage engine.
Engineering Ticket Description:
Proposed title: Dynamic oplog sizing of replica set
FEATURE DESCRIPTION
This new feature enables the dynamic resizing of a node's oplog, allowing users to grow and shrink the oplog to satisfy the operational needs of each node in a replica set.
VERSIONS
This feature is available in the 3.5.10 and newer development versions of MongoDB, and in the 3.6 and newer production releases.
RATIONALE
In a MongoDB replica set, the oplog is a special capped collection used to replicate data to other nodes. Users may specify the size of the oplog for each node when deploying a replica set, but altering that size afterward has required a manual maintenance procedure.
This new feature removes the need for that maintenance and allows users to dynamically change the size of the oplog on MongoDB nodes running the WiredTiger storage engine only (nodes running the MMAPv1 storage engine still need to follow the maintenance procedure).
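To confirm that the oplog is a capped collection, its metadata can be inspected from the shell. A minimal sketch, assuming a connection to any replica set member (this helper call is illustrative, not part of the new feature):
// List collection metadata for local.oplog.rs; options.capped should be true.
replset:PRIMARY> db.getSiblingDB("local").getCollectionInfos({name: "oplog.rs"})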
OPERATION
To display the size of the oplog, use the stats() method on the local.oplog.rs collection. Here is a replica set primary node with a 9.1 GB oplog:
replset:PRIMARY> use local
replset:PRIMARY> db.oplog.rs.stats()
{
    "ns" : "local.oplog.rs",
    "size" : 6781,
    "count" : 60,
    "avgObjSize" : 113,
    "storageSize" : 36864,
    "capped" : true,
    "max" : -1,
    "maxSize" : NumberLong("9790804377"),
    ...
}
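If only the configured maximum size is of interest, it can be read directly from the same stats() output. A minimal sketch, assuming a connection to the same member (the MB conversion is illustrative):
// maxSize is reported in bytes; dividing by 1024 * 1024 gives roughly 9337 MB here.
replset:PRIMARY> db.getSiblingDB("local").oplog.rs.stats().maxSize / (1024 * 1024)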
The size of the oplog is displayed at deployment time in the logs as follows:
2017-07-17T17:47:11.870-0400 I REPL [replication-0] creating replication oplog of size: 9337MB...
Users can change the size of the oplog with the replSetResizeOplog command, specifying a size in MB. For example:
replset:PRIMARY> use admin
replset:PRIMARY> db.runCommand({replSetResizeOplog: 1, size: 16384})
{ "ok" : 1, "operationTime" : Timestamp(1500329291, 1) }
The command above changes the oplog size to 16384 MB (16 GB, or 17179869184 bytes). This operation is recorded in the logs as follows:
2017-07-17T17:52:40.396-0400 I STORAGE [conn11] replSetResizeOplog success, currentSize:17179869184
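As a quick arithmetic check, the requested size in MB converts exactly to the currentSize value reported in that log line:
// 16384 MB * 1024 * 1024 bytes per MB = 17179869184 bytes
replset:PRIMARY> 16384 * 1024 * 1024
17179869184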
The output of the stats() command also reflects the change:
replset:PRIMARY> use local
replset:PRIMARY> db.oplog.rs.stats()
{
    "ns" : "local.oplog.rs",
    "size" : 6781,
    "count" : 60,
    "avgObjSize" : 113,
    "storageSize" : 36864,
    "capped" : true,
    "max" : -1,
    "maxSize" : NumberLong("17179869184"),
    ...
}
ADDITIONAL NOTES
Reducing the size of the oplog removes data from it. Shrinking the oplog on a given node may therefore cause members syncing from that node to become stale and require a resync.
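Before shrinking an oplog, it can be useful to check how much replication history the node currently holds. One way, assuming a connection to the member in question, is the shell's rs.printReplicationInfo() helper:
// Prints the configured oplog size, the space used, and the time span (oplog
// window) covered by the first and last oplog entries.
replset:PRIMARY> rs.printReplicationInfo()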
Original description
It would be very, very handy to have a method to alter the oplog size for an entire replica set at the same time. That is, issue a command on the primary, it expands its oplog size, and the changes trickle down to the replicas.
Better yet would be a dynamic oplog where it doesn't need to be a specific size. Capped collections have advantages, I understand, but perhaps there are ways around limitations of uncapped ones for oplog purposes.
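The command introduced above operates on a single node, so a replica-set-wide resize still has to be applied member by member. A minimal shell sketch of that pattern, assuming direct connections to each data-bearing member with no authentication (the 16384 MB target is carried over from the example above):
// Resize the oplog to 16384 MB on every data-bearing member of the replica set.
var targetMB = 16384;
rs.status().members.forEach(function(member) {
    // Skip arbiters (state 7), which have no oplog to resize.
    if (member.state === 7) {
        return;
    }
    var conn = new Mongo(member.name);   // direct connection to host:port
    var res = conn.getDB("admin").runCommand({replSetResizeOplog: 1, size: targetMB});
    print(member.name + ": " + tojson(res));
});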
Issue Links
- documents: SERVER-22766 Dynamic oplog sizing for WiredTiger nodes (Closed)
- is duplicated by: DOCS-10811 Docs for SERVER-30151: Size specification for oplog resizing (Closed)
- related to: DOCS-12231 3.6 and 4.0 oplog resize procedures only work for WT (Closed)
- related to: DOCS-12266 Docs for SERVER-38501: swap out new ActionType for replSetResizeOplog command on 3.4 (Closed)