[SERVER-11416] First Production Deployment - Problems with initiating replication Created: 28/Oct/13  Updated: 11/Jul/16  Resolved: 04/Nov/13

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 2.4.6
Fix Version/s: None

Type: Question Priority: Critical - P2
Reporter: Joshua Cox Assignee: Unassigned
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Ubuntu 13.10 release on AWS m1.large instances in multiple regions


Attachments: File mongo(primary).conf, File mongo(secondary).conf
Participants:

 Description   

Having trouble adding nodes to a replica set after converting a standalone instance into a replica set. I get the following error:

sun01:PRIMARY> rs.add("secondary.mongodb.sungevity.com")
{
"errmsg" : "exception: need most members up to reconfigure, not ok : secondary.mongodb.sungevity.com:27017",
"code" : 13144,
"ok" : 0
}

I can connect from the primary to the node I'm attempting to add:

ubuntu@primary:~$ mongo secondary.mongodb.sungevity.com/admin
MongoDB shell version: 2.4.6
connecting to: secondary.mongodb.sungevity.com/admin
> db
admin

And I can connect from the node I wish to add back to the primary:

ubuntu@secondary:~$ mongo primary.mongodb.sungevity.com/admin
MongoDB shell version: 2.4.6
connecting to: primary.mongodb.sungevity.com/admin
> db
admin

Here is replica status output from the primary that I initiated the replica set on:

sun01:PRIMARY> db.runCommand( { replSetGetStatus: 1 } )
{
"set" : "sun01",
"date" : ISODate("2013-10-28T17:16:05Z"),
"myState" : 1,
"members" : [

{ "_id" : 0, "name" : "primary.mongodb.sungevity.com:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 331538, "optime" : Timestamp(1382575785, 1), "optimeDate" : ISODate("2013-10-24T00:49:45Z"), "self" : true }

],
"ok" : 1
}
sun01:PRIMARY> rs.conf()
{
"_id" : "sun01",
"version" : 1,
"members" : [

{ "_id" : 0, "host" : "primary.mongodb.sungevity.com:27017" }

]
}

I have attached both the primary (replica set node) and secondary (node I want to add) mongodb.conf files to this ticket.

The documentation is so simple and clear, I figure I'm missing something arcane or so obvious I'm going to be embarrassed.

Thanks for your assistance.
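For context, the conversion being attempted follows the standard 2.4-era procedure. A sketch of the happy path (hostnames taken from this ticket; mongod is assumed to have been restarted with replSet = sun01 in mongodb.conf):

```
> rs.initiate()                                                  // seed a one-member set on the standalone
sun01:PRIMARY> rs.add("secondary.mongodb.sungevity.com:27017")   // then add each member, port included
sun01:PRIMARY> rs.status()                                       // both members should report "health" : 1
```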



 Comments   
Comment by Joshua Cox [ 29/Oct/13 ]

A couple of problems precipitated this ticket, both of which are solved.

1) I wasn't looking at the logs at the correct path; I only found them while gathering information for this ticket by checking the path in my mongodb.conf file.
2) As a result, I didn't realize the problem was authentication, and so didn't realize I was incorrectly using auth instead of keyFile for inter-node authentication.

Now that both are corrected, my replica set appears to be running well. You may close this ticket.
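For anyone hitting the same issue: the keyFile fix referred to above looks roughly like this (paths and filename are hypothetical; in 2.4 the identical key file must be present on every member and referenced from each node's mongodb.conf):

```shell
# Generate a shared key for inter-node authentication (sketch; filename is arbitrary).
# Copy the identical file to every replica set member, then in each mongodb.conf set:
#   keyFile = /etc/mongodb/keyfile
# (keyFile implies auth, so a separate "auth = true" line is redundant.)
openssl rand -base64 741 > mongodb-keyfile
chmod 600 mongodb-keyfile   # mongod refuses keyfiles with open permissions
```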

Comment by Joshua Cox [ 29/Oct/13 ]

Below is the command I ran and the logs from that time slice. From what I can see, I'm not authorized to add the secondary.

I am using authentication on these systems. I previously attempted to configure an 'admin' user on the admin database with identical privileges and credentials on both hosts, but was still unsuccessful. How do I manage replication in this situation? Do I need to pass authentication credentials as part of the rs.add command?

sun01:PRIMARY> rs.add("secondary.mongodb.sungevity.com:27017")
{
"errmsg" : "exception: need most members up to reconfigure, not ok : secondary.mongodb.sungevity.com:27017",
"code" : 13144,
"ok" : 0
}

Tue Oct 29 09:43:56.869 [conn528] run command local.$cmd { count: "system.replset", query: {}, fields: {} }
Tue Oct 29 09:43:56.869 [conn528] command local.$cmd command: { count: "system.replset", query: {}, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:40 reslen:48 0ms
Tue Oct 29 09:43:56.870 [conn528] query local.system.replset ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:51 nreturned:1 reslen:130 0ms
Tue Oct 29 09:43:56.871 [conn528] run command admin.$cmd { replSetReconfig: { _id: "sun01", version: 2, members: [ { _id: 0, host: "primary.mongodb.sungevity.com:27017" }, { _id: 1.0, host: "secondary.mongodb.sungevity.com:27017" } ] } }
Tue Oct 29 09:43:56.871 [conn528] replSet replSetReconfig config object parses ok, 2 members specified
Tue Oct 29 09:43:56.973 [conn528] replSet warning secondary.mongodb.sungevity.com:27017 replied: { ok: 0.0, errmsg: "unauthorized" }
Tue Oct 29 09:43:56.973 [conn528] User Assertion: 13144:need most members up to reconfigure, not ok : secondary.mongodb.sungevity.com:27017
Tue Oct 29 09:43:56.977 [conn528] replSet replSetReconfig exception: need most members up to reconfigure, not ok : secondary.mongodb.sungevity.com:27017
Tue Oct 29 09:43:56.977 [conn528] command admin.$cmd command: { replSetReconfig: { _id: "sun01", version: 2, members: [ { _id: 0, host: "primary.mongodb.sungevity.com:27017" }, { _id: 1.0, host: "secondary.mongodb.sungevity.com:27017" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:2 reslen:154 106ms
Tue Oct 29 09:43:56.981 [conn528] run command admin.$cmd { replSetGetStatus: 1.0, forShell: 1.0 }
Tue Oct 29 09:43:56.981 [conn528] command admin.$cmd command: { replSetGetStatus: 1.0, forShell: 1.0 } ntoreturn:1 keyUpdates:0 reslen:260 0ms
Tue Oct 29 09:43:59.570 [conn1] run command admin.$cmd { ismaster: 1 }
Tue Oct 29 09:43:59.570 [conn1] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:294 0ms
Tue Oct 29 09:43:59.642 [conn3] run command admin.$cmd { ismaster: 1 }
Tue Oct 29 09:43:59.647 [conn3] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:294 0ms
Tue Oct 29 09:43:59.643 [conn2] run command admin.$cmd { ismaster: 1 }
Tue Oct 29 09:43:59.648 [conn2] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:294 0ms
Tue Oct 29 09:44:04.438 [conn527] run command admin.$cmd { ismaster: 1 }
Tue Oct 29 09:44:04.438 [conn527] command admin.$cmd command: { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:294 0ms
Tue Oct 29 09:44:04.440 [conn527] run command admin.$cmd { serverStatus: 1 }
Tue Oct 29 09:44:04.440 [conn527] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:56 reslen:3490 0ms
Tue Oct 29 09:44:04.446 [conn527] run command admin.$cmd { replSetGetStatus: 1 }
Tue Oct 29 09:44:04.446 [conn527] command admin.$cmd command: { replSetGetStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:260 0ms
Tue Oct 29 09:44:04.448 [conn527] query local.system.replset ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:48 nreturned:1 reslen:130 0ms
Tue Oct 29 09:44:04.449 [conn527] query local.oplog.rs query: { $query: {}, $orderby: { $natural: 1 } } ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:55 nreturned:1 reslen:37 0ms
Tue Oct 29 09:44:04.451 [conn527] query local.oplog.rs query: { $query: {}, $orderby: { $natural: -1 } } ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:40 nreturned:1 reslen:37 0ms
Tue Oct 29 09:44:04.452 [conn527] run command local.$cmd { collstats: "oplog.rs" }
Tue Oct 29 09:44:04.452 [conn527] command local.$cmd command: { collstats: "oplog.rs" } ntoreturn:1 keyUpdates:0 locks(micros) r:45 reslen:286 0ms
Tue Oct 29 09:44:04.454 [conn527] query config.settings ntoreturn:0 ntoskip:0 nscanned:0 keyUpdates:0 locks(micros) r:23 nreturned:0 reslen:20 0ms

Comment by Daniel Pasette (Inactive) [ 29/Oct/13 ]

This should normally work fine. When you try adding the secondary, can you attach the log from the primary?

I'm pretty sure you don't need to, but can you also try adding the node with the port explicitly included, like so: rs.add("secondary.mongodb.sungevity.com:27017")
