[SERVER-13233] Arbiter-Only node becomes Secondary Created: 17/Mar/14  Updated: 10/Dec/14  Resolved: 18/Mar/14

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Minor - P4
Reporter: Henry Resheto Assignee: Unassigned
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Operating System: ALL
Participants:

 Description   

This happened in a three member replica set:
1. Member_1: Data node
2. Member_2 : Data node
3. Member_3: Arbiter-Only node
At some point, while Member_1 was Primary, Member_2 was Secondary, and Member_3 was the Arbiter, I brought down Member_2 to do some hardware repairs and upgrades.
Member_2 was down for a couple of hours. Just before bringing Member_2 back up, I noticed that Member_1 was, as expected, Primary, but Member_3 was no longer an Arbiter: it was SECONDARY!
Somehow it had promoted itself. Its Secondary status was evident when connecting to it from the shell: the prompt carried a "SECONDARY" label. It also showed up as Secondary in the output of rs.status().
Furthermore, the data directory of Member_3 contained new files, which were collection data files replicated from the rest of the replica set. The total size of these files was 10 GB at the time.
I brought Member_2 back up and continued monitoring the situation. After several hours my replica set had one Primary and two Secondaries: Member_3 remained Secondary.
Furthermore, Member_3's data directory continued to grow, reaching 60 GB (the size of Member_1's data directory), which means that by then it had fully synced with the Primary.
Member_3 remained Secondary until I removed it from the replica set, restarted the Member_3 instance, and added it back to the replica set. After that it carried on as an Arbiter.
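The remove/restart/re-add recovery described above can be sketched in the mongo shell. This is a sketch only, not the exact commands used; the hostname is an illustrative assumption, and on 2.4 the node's data directory should also be wiped while it is down so the replicated data does not survive the re-add:

```javascript
// Run on the Primary (Member_1). "member3.example.com:27017" is a
// hypothetical hostname standing in for the arbiter node.
rs.remove("member3.example.com:27017")  // drop the misbehaving member from the config

// On Member_3's host: stop mongod, clear its data directory (it should hold
// no replicated data as an arbiter), then restart mongod with the same --replSet name.

// Back on the Primary, re-add Member_3 as an arbiter-only member:
rs.addArb("member3.example.com:27017")

// Verify: Member_3's entry in rs.status() should now show "stateStr" : "ARBITER".
rs.status()
```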



 Comments   
Comment by Eric Milkie [ 18/Mar/14 ]

Indeed, it is not intended behavior and should never happen.

Comment by Henry Resheto [ 18/Mar/14 ]

I was using v2.4.9 64-bit running on CentOS. Unfortunately, I did not keep the output of rs.status() or the log files (I have since recreated my cluster).
But can somebody please confirm that what I saw is not the intended behavior? Thank you.

Comment by J Rassi [ 17/Mar/14 ]

Could you please provide the following information, to help further diagnose this issue:

  • the version of MongoDB for each member of the replica set (available via "db.version()")
  • the output of running "rs.conf()" and "rs.status()" on the arbiter
  • a log file for each member of the replica set covering this time period
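For future reference, the first two items above can be collected from a mongo shell connected to each member; this is a minimal sketch of that session (no assumptions beyond the commands already named in the request):

```javascript
// On each replica set member, including the arbiter:
db.version()   // reports the mongod version string, e.g. "2.4.9"

// On the arbiter specifically:
rs.conf()      // the replica set configuration, where the arbiter's
               // member entry should carry "arbiterOnly" : true
rs.status()    // the current state of every member as this node sees it
```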
Generated at Thu Feb 08 03:31:04 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.