[SERVER-7201] Replica set with more than one primary node Created: 28/Sep/12  Updated: 10/Dec/14  Resolved: 07/Mar/14

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 2.0.7
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Miguel Vilá Assignee: Adinoyi Omuya
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Operating System: ALL
Participants:

 Description   

After initializing a replica set, every node that is added reports itself as PRIMARY.
This is the output of "rs.status()" from the "slaved" node (this is also the node from which I added all the others):
{
    "set" : "rs0",
    "date" : ISODate("2012-09-28T19:58:56Z"),
    "myState" : 1,
    "members" : [
        { "_id" : 0, "name" : "slaved:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 2144, "optime" : Timestamp(1348862047000, 1), "optimeDate" : ISODate("2012-09-28T19:54:07Z"), "self" : true },
        { "_id" : 1, "name" : "slavec:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 1052, "optime" : Timestamp(1348860655000, 1), "optimeDate" : ISODate("2012-09-28T19:30:55Z"), "lastHeartbeat" : ISODate("2012-09-28T19:58:55Z"), "pingMs" : 0 },
        { "_id" : 2, "name" : "slavee:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 637, "optime" : Timestamp(1348861573000, 1), "optimeDate" : ISODate("2012-09-28T19:46:13Z"), "lastHeartbeat" : ISODate("2012-09-28T19:58:55Z"), "pingMs" : 0 },
        { "_id" : 3, "name" : "master:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 289, "lastHeartbeat" : ISODate("2012-09-28T19:58:55Z"), "pingMs" : 0 }
    ],
    "ok" : 1
}
(The last one is an arbiter.) Also, if I run rs.status() from any other node, the only member listed is the node itself.
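
For contrast, here is a minimal sketch of the usual single-initiation setup, assuming the hostnames above (slaved, slavec, slavee, master) resolve the same way on every machine; rs.initiate() is meant to run on exactly one node, with the remaining mongods started un-initiated and joined from that node:

// Run on slaved only; the other mongods must be running but never initiated.
rs.initiate({ _id : "rs0", members : [ { _id : 0, host : "slaved:27017" } ] })
rs.add("slavec:27017")      // data-bearing member
rs.add("slavee:27017")      // data-bearing member
rs.addArb("master:27017")   // arbiter: votes in elections, holds no data

If rs.initiate() had also been run separately on slavec, slavee, and master, each would form its own independent one-member set and report itself as PRIMARY, which matches the output above.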



 Comments   
Comment by Adinoyi Omuya [ 12/Mar/13 ]

Are you still experiencing this problem?

Comment by Miguel Vilá [ 29/Sep/12 ]

"rs.config" from slavec:
{
"_id" : "rs0",
"version" : 1,
"members" : [

{ "_id" : 0, "host" : "BigData-15-C:27017" }

]
}
"rs.config()" from slaved (the one from which I ran the "rs.add(...)" commands):
{
"_id" : "rs0",
"version" : 4,
"members" : [

{ "_id" : 0, "host" : "BigData-15-D:27017" }

,

{ "_id" : 1, "host" : "slavec:27017" }

,

{ "_id" : 2, "host" : "slavee:27017" }

,

{ "_id" : 3, "host" : "master:27017", "arbiterOnly" : true }

]
}
"rs.config()" from slavee:
{
"_id" : "rs0",
"version" : 1,
"members" : [

{ "_id" : 0, "host" : "BigData-15-E:27017" }

]
}
"rs.config()" from master (the arbiter I tried to add):
{
"_id" : "rs0",
"version" : 1,
"members" : [

{ "_id" : 0, "host" : "BigData-15-A:27017" }

]
}
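
Each of the version-1, single-member configs above is exactly what an independently run rs.initiate() leaves behind, which would explain four simultaneous primaries: four separate one-member sets that all happen to be named rs0. A possible recovery sketch, assuming the data on slavec, slavee, and master can be discarded: stop each of those mongods, remove the replica-set state (the files of the local database) from its dbpath, restart it, and re-add it from slaved:

// On slaved, after slavec, slavee and master restart with a clean local database:
rs.add("slavec:27017")
rs.add("slavee:27017")
rs.addArb("master:27017")
rs.status()   // all four members should now be listed, with a single PRIMARY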

This is the hosts file on the master:

127.0.0.1 BigData-15-A
127.0.0.1 master
10.0.1.201 slavec
10.0.1.202 slaved
10.0.1.203 slavee

The other ones look similar, with the appropriate changes (BigData-15-C is the same host as slavec, etc.).
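
One thing worth checking from the mongo shell on each node is whether the address the node announces for itself matches one of the hosts in its replica set config; with every machine mapping its own name to 127.0.0.1 as above, the two can disagree. A small sketch using standard shell helpers (assuming a replica-set-enabled mongod):

db.isMaster().me                                            // the address this mongod announces for itself
rs.conf().members.forEach(function (m) { print(m.host) })   // the hosts this node's config knows about

If db.isMaster().me does not appear among the configured hosts, the node cannot identify itself in the set, which by itself can keep members from syncing with each other.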

Comment by Eliot Horowitz (Inactive) [ 29/Sep/12 ]

Can you send rs.conf() from each node, and what the host names resolve to?
