[DOCS-13207] Mongod Journaling docs are incomplete and/or incorrect Created: 07/Nov/19 Updated: 30/Oct/23 Resolved: 12/Nov/19 |
|
| Status: | Closed |
| Project: | Documentation |
| Component/s: | manual, Server |
| Affects Version/s: | None |
| Fix Version/s: | Server_Docs_20231030 |
| Type: | Bug | Priority: | Critical - P2 |
| Reporter: | Paul Done | Assignee: | Kay Kim (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | docs-administration, docs-query | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Participants: | |
| Days since reply: | 4 years, 13 weeks, 1 day ago |
| Epic Link: | DOCSP-1769 |
| Description |
The journaling docs at https://docs.mongodb.com/manual/core/journaling/ say "MongoDB syncs the buffered journal data to disk every 50 milliseconds (Starting in MongoDB 3.2)". However, this is incorrect (or at least not the complete answer) and is potentially causing users to think MongoDB cannot provide a highly available data solution (i.e. write concern majority) and also yield response times far below 50 milliseconds at the same time. For example, in my own tests with write concern = majority against an Atlas-hosted replica set across 3 availability zones in one region, I am seeing an average response time of around 5 ms and a maximum response time of around 10 ms. I've just been informed that, for write concern majority at least, the journal behaviour is:
Therefore the latency of a client performing a write to a 3-node replica set using a write concern of majority is: 2 × journal-flush + 1 × network round trip, which will be on the order of 5-10 milliseconds for SSD disks and a fairly local network of 3 replicas.

Scope of changes

Impact to Other Docs

MVP (Work and Date)

Resources (Scope or Design Docs, Invision, etc.) |
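The latency arithmetic above can be sketched as a back-of-envelope model (my own illustration, not MongoDB code; the function name and the flush/round-trip figures are invented for this example):

```python
# Hypothetical model of a w:majority write to a 3-node replica set, per the
# reasoning above: the primary flushes its journal, one secondary applies the
# op and flushes its own journal, and the acknowledgement crosses the network
# once -- so the client sees 2 x journal-flush + 1 x network round trip.

def majority_write_latency_ms(journal_flush_ms: float, network_rtt_ms: float) -> float:
    """Estimated client-observed latency: 2 journal flushes + 1 round trip."""
    return 2 * journal_flush_ms + 1 * network_rtt_ms

# Illustrative figures: ~2 ms per fsync on SSD, ~2 ms RTT within one region.
latency = majority_write_latency_ms(journal_flush_ms=2.0, network_rtt_ms=2.0)
print(latency)  # 6.0 -- inside the observed 5-10 ms range, well under 50 ms
```

Plugging in plausible SSD and intra-region figures lands in the 5-10 ms range observed in the Atlas tests, which is the point: the 50 ms periodic sync is not on the acknowledgement path for majority writes.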
| Comments |
| Comment by Githook User [ 12/Nov/19 ] |
|
Author: Kay Kim (kay-kim, kay.kim@10gen.com). Message: |
| Comment by Ravind Kumar (Inactive) [ 07/Nov/19 ] |
|
We may also need to clarify the following to really close up this hole:
|
| Comment by Ravind Kumar (Inactive) [ 07/Nov/19 ] |
|
This is related to (and probably can be absorbed into) However, based on the conversation in that ticket, we only covered secondary oplog getMores resulting in an immediate flush. Based on Paul's comments, it looks like majority write concern also causes immediate flushing? This raises a few follow-ups:
cc boschg@mac.com, it's been a while, but our last conversation on this did not cover client-triggered journal flushing, only flushing due to replica set members. Taking the findings here and in that ticket together, it seems like the behavior is:
So something like:
This is all specific to WiredTiger. It is unclear how much of this behavior applies to MMAPv1, which has a default 30 ms commitIntervalMs. |
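A minimal sketch of the flush-trigger behaviour pieced together in this thread (illustrative pseudologic only, not server code; `flush_delay_ms` and its parameters are invented for this example, and the intervals are taken from the figures quoted above):

```python
# Illustrative model (NOT actual mongod logic) of when a journal flush
# happens, per this thread: writes with j:true or w:majority trigger an
# immediate flush as part of acknowledgement; otherwise the periodic
# timer applies (50 ms for WiredTiger, 30 ms for MMAPv1 per the default
# commitIntervalMs mentioned above).

def flush_delay_ms(storage_engine: str, j: bool = False, w="1") -> int:
    """Worst-case delay (ms) before this write's journal record hits disk."""
    if j or w == "majority":
        return 0  # flushed immediately; the client ack waits on the flush
    if storage_engine == "wiredTiger":
        return 50  # background sync interval from the journaling docs
    if storage_engine == "mmapv1":
        return 30  # default commitIntervalMs quoted in this comment
    raise ValueError(f"unknown storage engine: {storage_engine}")
```

For example, `flush_delay_ms("wiredTiger", w="majority")` is 0, which is why majority writes can acknowledge in single-digit milliseconds, while `flush_delay_ms("wiredTiger")` is 50, matching the sentence the docs currently state unconditionally.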