[DOCS-2008] Geographically distributed replica set - why majority of voting nodes in one place? Created: 27/Sep/13  Updated: 11/Jan/17  Resolved: 27/Sep/13

Status: Closed
Project: Documentation
Component/s: manual
Affects Version/s: None
Fix Version/s: 01112017-cleanup

Type: Bug Priority: Major - P3
Reporter: Jason Walton Assignee: Kay Kim (Inactive)
Resolution: Done Votes: 0
Labels: replicaset, replication
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to DOCS-4011 Need a tutorial on setting up a DR site. Closed

 Description   

In http://docs.mongodb.org/manual/tutorial/deploy-geographically-distributed-replica-set/, the documentation claims that you should "Ensure that a majority of the voting members are within a primary facility", however:

1) The documentation offers no explanation of why you should keep a majority of the voting members in one place. This seems counter-intuitive: if that entire data center goes down, it takes a majority of the voting nodes with it, leaving the replica set unable to elect a new primary. Placing two nodes and an arbiter in each of three different locations seems like a better solution (see the vote-arithmetic sketch after this list).

2) The latest version of this documentation in master states: "For more information on the need to keep the voting majority on one site, see
:doc:`/core/replica-set-elections`". The referenced page, however, discusses only network partitions, not the failure of an entire data center.

https://github.com/mongodb/docs/blob/master/source/tutorial/deploy-geographically-distributed-replica-set.txt#L39

3) At the bottom of the document is this diagram:

http://docs.mongodb.org/manual/_images/replica-set-three-data-centers.png

In that diagram, there is clearly not a majority of voting nodes in any single data center.
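
To make the concern concrete, the vote arithmetic behind these points can be sketched in a few lines of Python; the layouts and member counts below are illustrative, not taken from the manual:

# Illustrative sketch (not from the manual) of replica set vote arithmetic.
# A replica set can elect a primary only if a strict majority of ALL voting
# members can still reach one another.

def can_elect_primary(total_votes: int, reachable_votes: int) -> bool:
    """True if the reachable members form a strict majority of all voters."""
    return reachable_votes > total_votes // 2

# Layout A: 2 members + 1 arbiter in each of 3 data centers (9 voters total).
# A network partition that isolates the data centers leaves each with 3 votes.
print(can_elect_primary(9, 3))   # False: no data center can elect on its own

# Layout B: the same 9 voters, but one data center fails outright while the
# other two can still reach each other (6 of 9 votes remain connected).
print(can_elect_primary(9, 6))   # True: the survivors can elect a primary

# Layout C: a majority of voters in a single "primary" facility (3 of 5).
# If that facility goes down, only 2 of 5 votes remain.
print(can_elect_primary(5, 2))   # False: the failure mode this ticket raises

The partition case (Layout A) is the scenario the manual's guidance is aimed at; the data-center-failure case (Layout C) is the one this ticket raises.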



 Comments   
Comment by Jason Walton [ 27/Sep/13 ]

Thanks Kay, that clarifies things. We were planning three nodes total: one in east-1, one in west-1, and an arbiter somewhere else.

Comment by Kay Kim (Inactive) [ 27/Sep/13 ]

Hey Jason –
the documentation is considering just the network partition scenario. For your suggested distribution of 2 members + 1 arbiter per data center across 3 data centers, you have 9 total members and need a majority of 5 to elect a new primary. If a network partition prevents the members in different data centers from communicating, no group of members that can still see each other reaches 5 votes, so losing any one data center, or even a single member, leaves you unable to elect a primary.

As for the case where a whole data center goes down: for a set of 4 members + 1 arbiter, you could place 2 members in each of 2 data centers and put the arbiter in a separate location that is outside both data centers but can see both. Then if one center goes down, you are still left with 2 members + 1 arbiter, which is 3 of 5 votes and enough to elect a primary.

Hope this helps.

Regards,

Kay
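
As a concrete illustration of the layout Kay describes (and of Jason's planned three-node variant), here is a hedged sketch of initiating such a set with PyMongo; the hostnames, ports, and replica set name are placeholders, not from the ticket:

# Hedged sketch of the 4 data-bearing members + 1 arbiter layout Kay describes.
# Hostnames, ports, and the replica set name "rs0" are hypothetical placeholders.
from pymongo import MongoClient

config = {
    "_id": "rs0",
    "members": [
        # Data center A: two data-bearing members
        {"_id": 0, "host": "dc-a-node1.example.net:27017"},
        {"_id": 1, "host": "dc-a-node2.example.net:27017"},
        # Data center B: two data-bearing members
        {"_id": 2, "host": "dc-b-node1.example.net:27017"},
        {"_id": 3, "host": "dc-b-node2.example.net:27017"},
        # Third location that can reach both data centers: a vote-only arbiter
        {"_id": 4, "host": "dc-c-arbiter.example.net:27017", "arbiterOnly": True},
    ],
}

# Run the initiate command against one member; the configuration is then
# replicated to the others.
client = MongoClient("dc-a-node1.example.net", 27017, directConnection=True)
client.admin.command("replSetInitiate", config)

If either data center is lost, the surviving data center plus the arbiter still hold 3 of the 5 votes and can elect a primary. Jason's planned layout follows the same pattern with a single data-bearing member per site instead of two.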

Comment by Jason Walton [ 27/Sep/13 ]

For example, a week ago EC2 US-East-1 went unreachable on Friday morning for about an hour. If we'd had Mongo configured with a majority of nodes in US-East-1, then our application would have been entirely unreachable.

Comment by Jason Walton [ 27/Sep/13 ]

I still don't understand this, though: if a majority of the voting nodes are in a single data center and that data center fails, the remaining nodes cannot form a majority and so cannot elect a new primary. Why would you set up your configuration this way?

Comment by auto [ 27/Sep/13 ]

Author:

kay-kim (kay <kay.kim@10gen.com>)

Message: DOCS-2008: remove erroneous 3 data centers image and clarify benefit of multiple locations
Branch: master
https://github.com/mongodb/docs/commit/3ea78d9e005d78b0d636015dea7084371343911a
