[DOCS-2008] Geographically distributed replica set - why majority of voting nodes in one place? Created: 27/Sep/13 Updated: 11/Jan/17 Resolved: 27/Sep/13
| Status: | Closed |
| Project: | Documentation |
| Component/s: | manual |
| Affects Version/s: | None |
| Fix Version/s: | 01112017-cleanup |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Jason Walton | Assignee: | Kay Kim (Inactive) |
| Resolution: | Done | Votes: | 0 |
| Labels: | replicaset, replication |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Description |
In http://docs.mongodb.org/manual/tutorial/deploy-geographically-distributed-replica-set/, the documentation claims that you should "Ensure that a majority of the voting members are within a primary facility". However:

1) The documentation offers no explanation of why a majority of the voting members should all be in one place. This seems counter-intuitive: if that entire data center goes down, it takes a majority of the voting nodes with it, leaving the replica set unable to elect a new primary. Placing two nodes and an arbiter in three different locations seems like a better solution (see the sketch after this list).

2) The latest version of this documentation in master states: "For more information on the need to keep the voting majority on one site, see

3) At the bottom of the document is this diagram: http://docs.mongodb.org/manual/_images/replica-set-three-data-centers.png, in which a majority of the voting nodes is clearly not in a single data center.
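A minimal sketch of that alternative three-site layout in the mongo shell; the hostnames, ports, and the replica set name "rs0" are hypothetical:

```javascript
// One data-bearing member in each of two data centers, plus an arbiter
// at a third site, so no single facility holds a majority of the votes.
// Losing any one site still leaves 2 of 3 votes available.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "us-east-1.example.net:27017" },                  // data center 1
    { _id: 1, host: "us-west-1.example.net:27017" },                  // data center 2
    { _id: 2, host: "arbiter.example.net:27017", arbiterOnly: true }  // third site
  ]
})
```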
| Comments |
| Comment by Jason Walton [ 27/Sep/13 ] |
Thanks Kay, that does clarify. We were planning three nodes total; one in east-1, one in west-1, and an arbiter somewhere else. |
| Comment by Kay Kim (Inactive) [ 27/Sep/13 ] |
Hey Jason – As for the case where a whole data center goes down: say you have a set with 4 data-bearing members plus 1 arbiter. You could put 2 members in each of 2 data centers, and place the arbiter outside both data centers, somewhere separate that can see both. Then if one center goes down, you're still left with 2 members + 1 arbiter, which is a voting majority (3 of 5 votes). Hope this helps. Regards, Kay
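A minimal sketch of that 2 + 2 + 1 layout in the mongo shell, again with hypothetical hostnames and replica set name:

```javascript
// Two data-bearing members in each of two data centers, plus an arbiter
// at a third site that can reach both. Losing either data center leaves
// 2 members + 1 arbiter = 3 of 5 votes, still a majority.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-a.example.net:27017" },                      // data center 1
    { _id: 1, host: "dc1-b.example.net:27017" },                      // data center 1
    { _id: 2, host: "dc2-a.example.net:27017" },                      // data center 2
    { _id: 3, host: "dc2-b.example.net:27017" },                      // data center 2
    { _id: 4, host: "arbiter.example.net:27017", arbiterOnly: true }  // third site
  ]
})
```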
| Comment by Jason Walton [ 27/Sep/13 ] |
For example, a week ago EC2 US-East-1 was unreachable on Friday morning for about an hour. If we'd had Mongo configured with a majority of nodes in US-East-1, our application would have been entirely unavailable.
| Comment by Jason Walton [ 27/Sep/13 ] |
I still don't understand this, though: if the majority of voting nodes are in a single data center, then when that data center fails the remaining nodes will be unable to form a voting majority and elect a new primary. So why would you set up your configuration this way?