[SERVER-13545] rs0:FATAL error for Previous Primary Member in 3 Machine cluster created using VMWare Created: 10/Apr/14  Updated: 10/Dec/14  Resolved: 16/May/14

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 2.4.2
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Santosh Kumar Panigrahy Assignee: Ramon Fernandez Marina
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Unix Operating system(CentOS 6.4)


Attachments: PNG File Configuring OVF Template Procedure-1.png, PNG File Configuring OVF Template Procedure-2.png, PNG File image.png (7 files)
Participants:

 Description   

In my project, MongoDB is bundled with our software. I created 3 machines in the cloud using VMware. Regarding my testbed: ESXi is installed on a UCS blade, and on top of it we create our VMs from our software image (MongoDB 2.4.6 comes pre-installed in it).

To check cluster creation, I created 3 VMs and formed a replica set among them. I created a database and inserted some data on the primary, and it replicated successfully to the other machines.

Then, to check replication, I switched off the primary VM, and another machine was promoted from secondary to primary as expected.

But when I recreated the machine using the same IP (the IP of the machine I had deleted previously), mongodb reports rs0:FATAL. It does not become a secondary as expected.

If I run rs.status() on that machine, it always reports that it is in a syncing state.

Request you to kindly help in this regard, or if this is a known bug, please give me the bug ID.
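For reference, this is roughly how I watch each member's reported state from the mongo shell; nothing below is specific to my setup, the fields come straight from rs.status():

    // Sketch: print each member's reported state from the mongo shell.
    // stateStr is e.g. "PRIMARY", "SECONDARY", "RECOVERING" or "FATAL".
    var status = rs.status();
    status.members.forEach(function (m) {
        print(m.name + " -> " + m.stateStr + " (health: " + m.health + ")");
    });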



 Comments   
Comment by Ramon Fernandez Marina [ 16/May/14 ]

Hi Santosh,

I haven't heard back from you for some time, so I'm going to mark this ticket as resolved. If this is still an issue for you, feel free to re-open the ticket at any time and provide the information requested by Asya back in April.

Regards,
Ramón.

Comment by Ramon Fernandez Marina [ 02/May/14 ]

No problem, thanks for letting us know.

Comment by Santosh Kumar Panigrahy [ 02/May/14 ]

Ramon, sorry for the delay, but I am out of station for some time due to some work. I will get back to you once I return to my workplace.


Thanking You...
SANTOSH KUMAR PANIGRAHY (संतोष कुमार पाणिग्रही)

Comment by Ramon Fernandez Marina [ 01/May/14 ]

Hi Santosh,

As per Asya's last comment, we still need the logs showing where the server enters the FATAL state in order to diagnose this problem. If you're still having trouble with the issue reported in this ticket, can you please send us the logs Asya requested?

Thanks,
Ramón.

Comment by Asya Kamsky [ 13/Apr/14 ]

A member in the FATAL state has encountered an error; without knowing what the error is, it's not possible to tell whether this is a bug or expected behavior caused by the environment.

On the machine in the FATAL state, please find the point in the logs where it enters that state and include those lines. Without this information it won't be possible to proceed with debugging this issue.
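
If copying the log file off the machine is difficult, something along these lines from the mongo shell on the affected node will dump its recent in-memory log buffer; the FATAL/replSet filter is just a guess at which lines matter:

    // Sketch: fetch the recent in-memory log entries from this mongod and
    // print lines mentioning FATAL or replSet. getLog only keeps the most
    // recent entries, so run this soon after the state change.
    var res = db.adminCommand({ getLog: "global" });
    res.log.forEach(function (line) {
        if (/FATAL|replSet/.test(line)) {
            print(line);
        }
    });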

Comment by Santosh Kumar Panigrahy [ 11/Apr/14 ]

Hi Asya,

I am sending you the logs and also screenshots of the CLI output and the configuration procedure as attachments to this mail. Sorry, our network doesn't allow outside authorization or SSH into the VM machines, so I cannot extract and send the full log.

Testing procedure:
I configured 3 machines, PM-50, PM-51, and PM-52 (please refer to the screenshots below). I shut down PM-50 (which was the primary at the time) and deleted it from the testbed. PM-52 then became primary as expected. I then recreated PM-50 with the same IP it had before (refer to the attachment), but instead of mongod coming back as a secondary, it has gone into the FATAL state. I tested the same procedure many times, but each time I get the same result.
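
For what it's worth, here is roughly what I run locally on the recreated PM-50 to see its own view of itself; I am assuming the errmsg field would carry the error, but it may be absent:

    // Run in the mongo shell on the recreated PM-50: print this member's
    // own reported state and, if present, its error message.
    rs.status().members.forEach(function (m) {
        if (m.self) {
            print("my state: " + m.stateStr);           // shows FATAL here
            if (m.errmsg) print("errmsg: " + m.errmsg); // may be absent
        }
    });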

Request you to refer to the screenshots below, and if I can assist with anything else, please feel free to reach me here or at my official mail: sanpanig@cisco.com.

Mongo version: 2.4.6 (screenshot attached)

rs.status() output (screenshot attached)

rs.conf() output (screenshot attached)

mongod.log excerpts (two screenshots attached)

rs.status() output, second capture (two screenshots attached)


Thanking You...

SANTOSH KUMAR PANIGRAHY (संतोष कुमार पाणिग्रही)

Comment by Asya Kamsky [ 11/Apr/14 ]

You say you created a VM to replace the one you deleted: did it have the same hostname, and was it running mongod on the same port with the same --replSet option?

Which node is giving the FATAL error? Can you please provide the log files from that host and from the other members of the replica set, as well as the output of the rs.conf() and rs.status() commands?

Asya
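
Concretely, on any healthy member you can compare what the configuration expects against what the new VM is actually running; the host:port strings below are placeholders:

    // Sketch: print the replica set name and the member addresses the
    // current config expects. The recreated VM must be reachable at
    // exactly one of these host:port strings and must be started with
    // the same --replSet name.
    var conf = rs.conf();
    print("replSet name: " + conf._id);
    conf.members.forEach(function (m) {
        print("expected member: " + m.host); // e.g. "PM-50:27017" (placeholder)
    });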

Comment by A. Jesse Jiryu Davis [ 10/Apr/14 ]

Since it sounds like your question is about server behavior, not about PyMongo's behavior, I've moved this issue into the "server" project.
