[SERVER-13545] rs0:FATAL error for Previous Primary Member in 3 Machine cluster created using VMWare Created: 10/Apr/14 Updated: 10/Dec/14 Resolved: 16/May/14 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Replication |
| Affects Version/s: | 2.4.2 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Santosh Kumar Panigrahy | Assignee: | Ramon Fernandez Marina |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Unix operating system (CentOS 6.4) | ||
| Attachments: |
|
| Participants: |
| Description |
|
In my project MongoDB is installed as part of our software. I created 3 machines in the cloud using VMWare. Regarding my testbed: ESXi is installed on a UCS blade, and on top of that we create our VMs from our own software image (MongoDB 2.4.6 comes pre-installed in it). To check cluster creation I created 3 VMs and formed a replica set among them. I created a database and inserted some data on the primary, and it replicated successfully to the other machines. Then, to check replication failover, I switched off the primary VM and one of the secondaries became primary as expected. But when I recreated the machine using the same IP (the IP of the machine I had previously deleted), MongoDB shows an rs0:FATAL error for it, and it does not come up as a secondary as expected. If I run rs.status() on that machine, it always says it is in a syncing state. Request you to kindly help in this regard, or if this is a known bug, please give me the bug ID. |
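For context, a minimal sketch of the kind of shell commands involved in this test; the replica-set name rs0 comes from the ticket title, but the hostnames, ports, and the remove/re-add approach below are illustrative assumptions, not steps taken from this environment:

    # Check the replica set state from any reachable member (host/port assumed).
    mongo --host vm1.example.local --port 27017 --eval 'printjson(rs.status())'

    # One common way to bring a rebuilt member back: from the current primary,
    # remove the stale entry and re-add the host (hostname:port assumed).
    mongo --host vm2.example.local --port 27017 --eval 'rs.remove("vm1.example.local:27017")'
    mongo --host vm2.example.local --port 27017 --eval 'rs.add("vm1.example.local:27017")'

Note that rs.remove() triggers a reconfiguration, so the shell connection may drop briefly before the subsequent rs.add().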
| Comments |
| Comment by Ramon Fernandez Marina [ 16/May/14 ] |
|
Hi Santosh, I haven't heard back from you for some time, so I'm going to mark this ticket as resolved. If this is still an issue for you, feel free to re-open the ticket at any time and provide the information requested by Asya back in April. Regards, |
| Comment by Ramon Fernandez Marina [ 02/May/14 ] |
|
No problem, thanks for letting us know. |
| Comment by Santosh Kumar Panigrahy [ 02/May/14 ] |
|
Ramon, sorry for the delay, but I am out of station for some time due to some – |
| Comment by Ramon Fernandez Marina [ 01/May/14 ] |
|
Hi Santosh, as per Asya's last comment, we still need the logs that show where the server enters the FATAL state to diagnose this problem. If you're still having trouble with the issue reported in this ticket, can you please send us the logs Asya requested? Thanks, |
| Comment by Asya Kamsky [ 13/Apr/14 ] |
|
A member in the FATAL state has encountered an error; without knowing what the error is, it's not possible to tell whether this is a bug or expected behavior caused by the environment. On the machine in the FATAL state, please find the point in the logs where it enters that state and include that information. Without it, it won't be possible to proceed with debugging this issue. |
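As a rough sketch of how that point might be located, assuming the mongod log lives at the usual CentOS package location (the path is an assumption, not confirmed in this ticket):

    # Find where the affected member logs the transition into the FATAL state
    # (log path is an assumption; adjust to the actual --logpath in use).
    grep -in "fatal" /var/log/mongodb/mongod.log | tail -n 20

    # Capture surrounding context around the matches when attaching to the ticket.
    grep -in -B 5 -A 20 "fatal" /var/log/mongodb/mongod.log > fatal_context.txt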
| Comment by Santosh Kumar Panigrahy [ 11/Apr/14 ] |
|
Hi Asya, I am sending you the logs and also the screenshots of the CLI output, along with the testing procedure. I recreated the machine with the same IP, but instead of the mongod coming up as a secondary it has gone to the FATAL state. Request you to refer to the screenshots below, and let me know if anything else is needed from my side. Mongo version: 2.4.6. rs.status(): [image: Inline image 1] rs.conf(): [image: Inline image 5] mongod.log: [image: Inline image 8] [image: Inline image 9] rs.status(): [image: Inline image 1] – SANTOSH KUMAR PANIGRAHY (संतोष कुमार पाणिग्रही) |
| Comment by Asya Kamsky [ 11/Apr/14 ] |
|
You say you created a VM to replace the one you deleted: did it have the same hostname, and was it running mongod on the same port with the same --replSet options? Which node is giving the FATAL error? Can you please provide the log file from that host and from the other members of the replica set, as well as the output of the "rs.conf()" and "rs.status()" commands. Asya |
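For illustration, a sketch of how the rebuilt node could be started with the same replica-set options and how the requested outputs could be captured as text rather than screenshots; the port, dbpath, and logpath below are assumptions for the sketch, not values confirmed in this ticket:

    # Start mongod on the rebuilt VM with the same replica-set name and port
    # it used before (paths and port are assumed here).
    mongod --replSet rs0 --port 27017 --dbpath /data/db \
           --logpath /var/log/mongodb/mongod.log --fork

    # Capture the requested outputs as plain text for attaching to the ticket.
    mongo --port 27017 --eval 'printjson(rs.conf())'   > rs_conf.txt
    mongo --port 27017 --eval 'printjson(rs.status())' > rs_status.txt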
| Comment by A. Jesse Jiryu Davis [ 10/Apr/14 ] |
|
Since it sounds like your question is about server behavior, not about PyMongo's behavior, I've moved this issue into the "server" project. |