[SERVER-25432] Migrating using Replication - Issue with db authentication on new Replica Member Created: 03/Aug/16  Updated: 03/Aug/16  Resolved: 03/Aug/16

Status: Closed
Project: Core Server
Component/s: Admin, Replication
Affects Version/s: None
Fix Version/s: None

Type: Question Priority: Critical - P2
Reporter: Mitchell Harding Assignee: Unassigned
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Participants:

 Description   

Our team is working on migrating a MongoDB replica set running on AWS EC2 to a new replica set on Google Compute Engine.

We were able to add one of the new Google instances to the AWS replica set as a priority-0 member; however, there appears to be an authentication issue that is preventing the new member from syncing.
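
For reference, the new member was added as a priority-0 (non-electable) member along these lines; the hostname below is a placeholder rather than the real instance name:

INFRA-GENERAL-00:PRIMARY> rs.add({ _id: 3, host: "<gce-host>:27017", priority: 0 })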

Google Compute Instance:

INFRA-GENERAL-00:SECONDARY> rs.status()
{
	"set" : "INFRA-GENERAL-00",
	"date" : ISODate("2016-08-03T00:22:37.275Z"),
	"myState" : 2,
	"term" : NumberLong(2),
	"heartbeatIntervalMillis" : NumberLong(2000),
	"members" : [
		{
			"_id" : 0,
			"name" : "****:27017",
			"health" : 0,
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			},
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2016-08-03T00:22:35.205Z"),
			"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "exception: field not found, expected type 16",
			"configVersion" : -1
		},
		{
			"_id" : 1,
			"name" : "****:27017",
			"health" : 0,
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			},
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2016-08-03T00:22:34.728Z"),
			"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "exception: field not found, expected type 16",
			"configVersion" : -1
		},
		{
			"_id" : 2,
			"name" : "****:27017",
			"health" : 0,
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",
			"uptime" : 0,
			"optime" : {
				"ts" : Timestamp(0, 0),
				"t" : NumberLong(-1)
			},
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2016-08-03T00:22:35.248Z"),
			"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
			"pingMs" : NumberLong(0),
			"lastHeartbeatMessage" : "exception: field not found, expected type 16",
			"configVersion" : -1
		},
		{
			"_id" : 3,
			"name" : "****:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 338,
			"optime" : {
				"ts" : Timestamp(1470169610, 1),
				"t" : NumberLong(2)
			},
			"optimeDate" : ISODate("2016-08-02T20:26:50Z"),
			"configVersion" : 175485,
			"self" : true
		}
	],
	"ok" : 1
}

AWS Primary Instance:

INFRA-GENERAL-00:PRIMARY> rs.status()
{
	"set" : "INFRA-GENERAL-00",
	"date" : ISODate("2016-08-03T16:00:59Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 0,
			"name" : "****:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 690001,
			"optime" : Timestamp(1470240059, 4),
			"optimeDate" : ISODate("2016-08-03T16:00:59Z"),
			"electionTime" : Timestamp(1470167971, 1),
			"electionDate" : ISODate("2016-08-02T19:59:31Z"),
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "****:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 72094,
			"optime" : Timestamp(1470240058, 29),
			"optimeDate" : ISODate("2016-08-03T16:00:58Z"),
			"lastHeartbeat" : ISODate("2016-08-03T16:00:58Z"),
			"lastHeartbeatRecv" : ISODate("2016-08-03T16:00:59Z"),
			"pingMs" : 1,
			"syncingTo" : "mongorep01.restdev.com:27017"
		},
		{
			"_id" : 2,
			"name" : "****:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 72094,
			"optime" : Timestamp(1470240057, 215),
			"optimeDate" : ISODate("2016-08-03T16:00:57Z"),
			"lastHeartbeat" : ISODate("2016-08-03T16:00:57Z"),
			"lastHeartbeatRecv" : ISODate("2016-08-03T16:00:58Z"),
			"pingMs" : 0,
			"syncingTo" : "mongorep01.restdev.com:27017"
		},
		{
			"_id" : 3,
			"name" : "****:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 54470,
			"optime" : Timestamp(1470169610, 1),
			"optimeDate" : ISODate("2016-08-02T20:26:50Z"),
			"lastHeartbeat" : ISODate("2016-08-03T16:00:58Z"),
			"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
			"pingMs" : 37
		}
	],
	"ok" : 1
}

While checking the mongod log on the Google Compute Engine instance, I see the following:

2016-08-03T00:45:31.801+0000 I REPL     [ReplicationExecutor] waiting for 6 pings from other members before syncing
2016-08-03T00:45:32.251+0000 I NETWORK  [conn272] end connection ***:60176 (4 connections now open)
2016-08-03T00:45:32.288+0000 I NETWORK  [initandlisten] connection accepted from ***:60179 #275 (5 connections now open)
2016-08-03T00:45:32.325+0000 I ACCESS   [conn275]  authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2016-08-03T00:45:32.809+0000 I REPL     [ReplicationExecutor] Error in heartbeat request to ***:27017; Location13111: exception: field not found, expected type 16
2016-08-03T00:45:32.850+0000 I REPL     [ReplicationExecutor] Error in heartbeat request to ***:27017; Location13111: exception: field not found, expected type 16
2016-08-03T00:45:32.890+0000 I REPL     [ReplicationExecutor] Error in heartbeat request to ***:27017; Location13111: exception: field not found, expected type 16

The primary and the other AWS instances seem to require authentication against the 'admin' db rather than 'local', and I'm assuming this is where the problem lies.

Is there a way to have the new replica set member authenticate against admin, not local?
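
For what it's worth, a quick way to compare the keyfile/internal-auth settings on an existing AWS member and the new GCE member is getCmdLineOpts; a minimal sketch (the exact field layout depends on whether the config file is YAML or the old flat format):

INFRA-GENERAL-00:SECONDARY> db.adminCommand({ getCmdLineOpts: 1 }).parsed.security
// on members started with an old-style config file or plain command-line flags,
// the same setting shows up as .parsed.keyFile (or in the .argv array) instead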



 Comments   
Comment by Ramon Fernandez Marina [ 03/Aug/16 ]

mitchell@restorationmedia.com, I'm closing this ticket now, but please note that the configuration you're running is untested and unsupported – you need to be running 3.0 before you upgrade to 3.2. Even if the sync works you may have issues later on, so I'd strongly recommend you follow the documented upgrade procedures.
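
A quick way to confirm what each member is actually running is db.version() on every node; for example (the version strings shown are placeholders, not values taken from this ticket):

INFRA-GENERAL-00:PRIMARY> db.version()     // on the existing AWS members
2.6.x
INFRA-GENERAL-00:SECONDARY> db.version()   // on the new GCE member
3.2.x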

Please note that the SERVER project is for reporting bugs or feature suggestions for the MongoDB server. A question like this, which involves more discussion, is best posted on the mongodb-user group or on Stack Overflow with the mongodb tag, where it will reach a larger audience. See also our Technical Support page for additional support resources.

Regards,
Ramón.

Comment by Mitchell Harding [ 03/Aug/16 ]

Hi Ramón,

I actually figured out the issue: the new node (version 3.2) and the old nodes (version 2.6) had incompatible replica set schema versions.

We went ahead and installed the same version on the new node, and it seems to be syncing fine now.
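
In case it is useful to anyone else, the catch-up of the new member can be watched from the primary with something like:

INFRA-GENERAL-00:PRIMARY> rs.printSlaveReplicationInfo()   // how far each secondary is behind the primary
INFRA-GENERAL-00:PRIMARY> rs.status().members.map(function (m) { return m.name + " " + m.stateStr; })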

This ticket can be closed.
Thanks again for the help.

Comment by Ramon Fernandez Marina [ 03/Aug/16 ]

mitchell@restorationmedia.com, what versions of MongoDB are running on each node? Also, can you please upload the logs from the Google node and the current AWS primary, covering the period since their last restart?

Thanks,
Ramón.
