[SERVER-13408] "priority: 0" nodes still participate in node election Created: 30/Mar/14  Updated: 09/Jul/16  Resolved: 31/Mar/14

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 2.4.9
Fix Version/s: None

Type: Improvement Priority: Major - P3
Reporter: Ernestas Lukoševičius Assignee: Unassigned
Resolution: Done Votes: 0
Labels: replicaset, replication, voting
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Participants:

 Description   

I have a 2-node replica set. One node is supposed to always be primary and the other exists for backup purposes.

My rs.conf() looks like this:
{
    "_id" : "shard0001",
    "version" : 6,
    "members" : [
        { "_id" : 0, "host" : "one:27017", "priority" : 2 },
        { "_id" : 3, "host" : "backup:20002", "priority" : 0, "slaveDelay" : 10800, "hidden" : true }
    ]
}

The problem is that when backup goes down, "one" starts thinking that it is a secondary, because it can no longer see a majority of the set and is afraid that backup might become primary on the other side of a partition. Such a fencing mechanism should not apply when the other node is hidden and has priority 0. Otherwise your docs are wrong in saying that a priority-0 node can never become primary (http://docs.mongodb.org/manual/tutorial/configure-secondary-only-replica-set-member/).
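For context, this behaviour follows from the election vote count rather than from priority. A minimal sketch of the majority rule in shell JavaScript, assuming both members carry the default "votes" : 1 (the canStayPrimary helper is hypothetical, for illustration only):

// A node may act as primary only while it can see a strict majority
// of the set's total votes.
function canStayPrimary(votesVisible, votesTotal) {
    return votesVisible > votesTotal / 2;
}
canStayPrimary(2, 2);  // true  -> both members up, "one" stays primary
canStayPrimary(1, 2);  // false -> "backup" down, "one" steps down to SECONDARY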



 Comments   
Comment by Spencer Brody (Inactive) [ 31/Mar/14 ]

rs.conf() just shows the contents of local.system.replset. You should not modify local.system.replset directly; instead, use the replSetReconfig command.
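For reference, a sketch of that workflow through the shell helper rs.reconfig() (the member index and the edited field are illustrative, taken from the config in the description):

cfg = rs.conf()                   // current configuration, as stored in local.system.replset
cfg.members[1].slaveDelay = 7200  // example edit; members[1] is "backup:20002" above
rs.reconfig(cfg)                  // applies it via the replSetReconfig command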

Comment by Ernestas Lukoševičius [ 31/Mar/14 ]

Never mind, I found local.system.replset.members[n].votes. Although I'd prefer it to be exposed in rs.conf(), there's not much wrong with this approach either.
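A sketch of how that field could be set through rs.reconfig() rather than by writing to local.system.replset directly (member index assumed from the config in the description):

cfg = rs.conf()
cfg.members[1].votes = 0   // "backup" no longer counts toward the majority
rs.reconfig(cfg)           // "one" then holds 1 of 1 votes and can stay primary alone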

Comment by Ernestas Lukoševičius [ 31/Mar/14 ]

A nice feature to have would be a new flag for nodes (like "priority" or "hidden", but called "votingrights" or something like that) which would exclude some nodes from voting. This is especially useful when 50% or more of your replica set is not in production but in, say, a backup data center. Say your network goes down between the 2 DCs. What would happen? Would the primary DC fence itself while some node in the secondary DC starts acting as primary? Not having such a flag is a lack of flexibility, even if I can set up arbiters. Plus, it might eliminate the need for arbiters.
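To make the partition question concrete, the same majority arithmetic as in the canStayPrimary sketch above applies (an illustration, assuming four voting members split evenly across the two data centers):

canStayPrimary(2, 4);  // false on either side: 2 votes is not a strict majority
                       // of 4, so the primary DC steps down and the backup DC
                       // cannot elect a primary either; the set goes read-only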

Comment by Ernestas Lukoševi?ius [ 31/Mar/14 ]

Well... I didn't expect a different answer at first... That's why I marked it as an improvement, not as a bug.

Having an additional arbiter is unnecessary overhead in this case, as I have only one node that can ever become primary, so there's no point in evaluating connectivity or doing any fencing.

Comment by Daniel Pasette (Inactive) [ 31/Mar/14 ]

All members participate in elections. You need a third member in your replica set or an arbiter to keep "one" as primary when your backup node goes down.

See: http://docs.mongodb.org/manual/core/replica-set-members/#arbiter

and:
http://docs.mongodb.org/manual/tutorial/add-replica-set-arbiter/
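For completeness, a sketch of adding an arbiter from the shell ("arbiter.example.net:27017" is a placeholder for a host running mongod with --replSet shard0001):

rs.addArb("arbiter.example.net:27017")   // adds a voting member that holds no data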
