[DOCS-13307] Improve majority (M) Explanation on Write Concern Doc Page Created: 18/Dec/19 Updated: 30/Oct/23 Resolved: 08/Jan/20 |
|
| Status: | Closed |
| Project: | Documentation |
| Component/s: | Server |
| Affects Version/s: | 3.2 Required, 3.4 Required, 3.6 Required, 4.0 Required, 4.2 Required |
| Fix Version/s: | Server_Docs_20231030 |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Diego Rodriguez (Inactive) | Assignee: | Kay Kim (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | majority, writeconcern |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Environment: | Affects all operating systems, software platforms, and hardware |
| Participants: | |
| Days since reply: | 3 years, 40 weeks, 1 day ago |
| Epic Link: | DOCSP-1769 |
| Description |
The documentation on “Write Concern” is missing some edge cases when it comes to “writeMajorityCount”. The documentation says:
This can be found at the end of the following documentation pages:
The version 3.2 documentation page doesn’t use the same wording, but the same behavior seems to apply:
If we look at lines 779 to 787 of the code at https://github.com/mongodb/mongo/blob/master/src/mongo/db/repl/repl_set_config.cpp, it uses a formula that takes (voters - arbiters) as the writeMajority when that number is less than _majorityVoteCount:
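A minimal sketch of that calculation (not the server code itself; `writeMajorityCount` is a hypothetical helper name chosen for illustration, mirroring the logic in the linked repl_set_config.cpp):

```cpp
#include <algorithm>
#include <iostream>

// Hypothetical helper mirroring the formula referenced above: the
// write majority is the majority of voting members, capped at the
// number of voting members that can actually acknowledge a write
// (i.e. voters minus arbiters).
int writeMajorityCount(int votingMembers, int votingArbiters) {
    int majorityVoteCount = votingMembers / 2 + 1;   // integer division
    int writableVoters = votingMembers - votingArbiters;
    return std::min(majorityVoteCount, writableVoters);
}

int main() {
    // 5 voting members, no arbiters: majority is 3 of 5.
    std::cout << writeMajorityCount(5, 0) << "\n";  // prints 3
}
```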
The documentation on Write Concern should describe precisely how “writeMajorityCount” is calculated, with explicit mention of the formula used in the code so that the behavior is clear and unambiguous. Wherever the explanation for majority (M) appears on the previously mentioned documentation pages (or any others that I might be missing), it should say:
I have tested the following scenarios, which support and confirm that the previously mentioned formula is used (a worked check follows the scenario list): Scenario 1: 7-Node Replica Set (3 Data-Bearing Nodes, 4 Arbiters)
Scenario 2: 3-Node Replica Set (1 Data-Bearing Node, 2 Arbiters)
Scenario 3: Typical PSA
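The same sketch formula, re-checked against the three scenarios above (the helper name is illustrative, not from the server code):

```cpp
#include <algorithm>
#include <cassert>

// Same hypothetical helper as in the sketch earlier in this ticket.
int writeMajorityCount(int votingMembers, int votingArbiters) {
    int majorityVoteCount = votingMembers / 2 + 1;
    return std::min(majorityVoteCount, votingMembers - votingArbiters);
}

int main() {
    // Scenario 1: 7 voting members, 4 arbiters -> min(4, 7 - 4) = 3
    assert(writeMajorityCount(7, 4) == 3);
    // Scenario 2: 3 voting members, 2 arbiters -> min(2, 3 - 2) = 1
    assert(writeMajorityCount(3, 2) == 1);
    // Scenario 3: PSA, 3 voting members, 1 arbiter -> min(2, 3 - 1) = 2
    assert(writeMajorityCount(3, 1) == 2);
}
```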
Regards |
| Comments |
| Comment by Githook User [ 05/May/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 05/May/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 05/May/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 05/May/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 05/May/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 06/Jan/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 06/Jan/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 06/Jan/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 06/Jan/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Githook User [ 06/Jan/20 ] |
|
Author: Kay Kim <kay.kim@10gen.com> (kay-kim). Message: |
| Comment by Arnie Listhaus [ 18/Dec/19 ] |
|
My thoughts are that we should convey an accurate description of the algorithm. In addition, however, we should convey the dangers of having a topology that requires all data-bearing members to be available. For example: in environments with multiple arbiters (not recommended), the majority write concern will at most require acknowledgement from all of the data-bearing nodes. We do not recommend using write concern majority in deployments where the majority would require all of the data-bearing nodes to acknowledge writes (as in the case of a PSA deployment). In such an environment, writes may never be acknowledged, or may time out, whenever a data-bearing member is down or unreachable (network partition); that defeats the high availability that is one of the major benefits of using replica sets.
|
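A small hedged illustration of that last point, using the same sketch formula from the description (the variable names are illustrative): in a PSA set the write majority equals the number of data-bearing members, so a single data-bearing outage makes w:"majority" unsatisfiable.

```cpp
#include <algorithm>
#include <iostream>

int main() {
    int voters = 3, arbiters = 1;                        // PSA topology
    int dataBearing = voters - arbiters;                 // 2
    int needed = std::min(voters / 2 + 1, dataBearing);  // sketch formula: 2
    std::cout << "w:\"majority\" needs " << needed << " of "
              << dataBearing << " data-bearing members\n";
    // If one data-bearing member is down, at most 1 acknowledgement is
    // possible, so a w:"majority" write blocks (or times out if wtimeout
    // is set) -- the failure mode the comment above warns about.
}
```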