Details
Type: Task
Status: Closed
Priority: Major - P3
Resolution: Fixed
Labels: None
Description
Starting in 4.0, if the replication majority commit point lags behind the point where the oplog would normally truncate, the oplog grows instead of truncating so that the majority commit point is not deleted.* This means the oplog size configuration is now a minimum size for the oplog, not a maximum. If writes are coming in but the majority commit point isn't advancing (for example, if secondaries aren't keeping up, or if a PSA set's secondary is down), the oplog can grow past its configured size.
* This growing behavior is actually tied to the 'stable checkpoint timestamp', a low-level concept we probably don't want to expose to users; it tracks the replication majority commit point closely enough that describing it as the majority commit point is accurate enough to get by.
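A minimal mongo shell sketch of how the new behavior can be observed (collStats on the oplog is an existing command, and maxSize/size are real fields of its output for capped collections; treating size > maxSize as evidence of growth is the 4.0 behavior described above):
{code:javascript}
// mongo shell sketch: compare the oplog's configured size (now a
// minimum) with its actual data size. In 4.0+, size can exceed
// maxSize when the majority commit point is held back.
var local = db.getSiblingDB("local");
var stats = local.oplog.rs.stats();
print("configured (minimum) size, bytes: " + stats.maxSize);
print("current data size, bytes:         " + stats.size);
if (stats.size > stats.maxSize) {
  print("oplog has grown past its configured size");
}
{code}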
Scope of changes:
- mongod --oplogSize and replication.oplogSizeMB
- source/administration/monitoring.txt
- source/administration/production-checklist-operations.txt
- source/core/capped-collections.txt
- source/core/replica-set-delayed-member.txt
- source/core/replica-set-oplog.txt
- source/reference/command/replSetResizeOplog.txt (see the shell sketch after this list)
- source/reference/command/serverStatus.txt
- source/reference/limits.txt
- source/reference/local-database.txt
- source/reference/method/db.getReplicationInfo.txt
- source/tutorial/change-oplog-size.txt
- source/tutorial/configure-a-delayed-replica-set-member.txt
- source/tutorial/deploy-replica-set-for-testing.txt
- source/tutorial/expand-replica-set.txt
- source/tutorial/resync-replica-set-member.txt
- source/tutorial/troubleshoot-replica-sets.txt
- source/tutorial/troubleshoot-sharded-clusters.txt
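For the replSetResizeOplog page, a one-line shell sketch of the command whose semantics change here (the command and its size-in-megabytes parameter are existing, documented behavior; the note about the value acting as a floor is the 4.0 change described above):
{code:javascript}
// Resize the oplog to 16000 MB; run against the admin database on the
// member being resized. In 4.0+, this value is a floor: the oplog can
// still grow past it if the majority commit point lags.
db.adminCommand({ replSetResizeOplog: 1, size: 16000 })
{code}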
Impact to other docs outside of this product:
MVP:
Resources:
Attachments
Issue Links
- documents: SERVER-29213 Have KVWiredTigerEngine implement StorageEngine::recoverToStableTimestamp (Closed)