Mongoid / MONGOID-3003

Replicaset Failover not working

    • Type: Task
    • Resolution: Done
    • 3.1.5
    • Affects Version/s: None
    • Component/s: None
    • Labels:

      I was doing failover testing of MongoDB in my local environment. I have two mongo servers (hostname1, hostname2) and an arbiter.

      I have the following configuration in my mongoid.yml file:

        development: 
          hosts: 
          - - hostname1
            - 27017
          - - hostname2
            - 27017
          database: myApp_development
          read: :primary
          use_activesupport_time_zone: true
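
      For reference, a quick way to confirm which member the application is actually talking to after boot is to run the server's ismaster command through the configured connection. This is only a sketch assuming a Mongoid 2.x style setup, where Mongoid.master returns the underlying Mongo::DB; adjust for your actual versions.

        # Rails console sketch (assumption: Mongoid 2.x on the 1.x mongo Ruby driver).
        # Asks the connected server which replica set member is primary and which
        # hosts it advertises.
        db     = Mongoid.master                 # Mongo::DB built from mongoid.yml
        status = db.command('ismaster' => 1)    # standard MongoDB ismaster command

        puts "primary:   #{status['primary']}"
        puts "hosts:     #{status['hosts'].inspect}"
        puts "ismaster?: #{status['ismaster']}"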
      

      Now when I start my Rails application, everything works fine, and the data is read from the primary (hostname1). Then I kill the mongo process of the primary (hostname1), so the secondary (hostname2) becomes the primary and starts serving the data.

      After some time I start the mongo process on hostname1 again, and it becomes the secondary in the replica set. At this point the primary (hostname2) and secondary (hostname1) are both working fine.

      The real problem starts here.

      I kill the mongo process of my new primary (hostname2), but this time the secondary (hostname1) does not become the primary, and any further requests to the Rails application raise the following error:

      Cannot connect to a replica set using seeds hostname2
      Please help. Thanks in advance.

      I added some logging to the mongo driver's repl_connection class and came across the following.

      When I boot the Rails app, both hosts are in the seeds array that the mongo driver keeps track of. But during the second failover, only the host that went down is present in this array.

      Hence I would also like to know how and when one of the hosts gets removed from the seed list.
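
      To see when the list shrinks during the failover test, one crude option is to poll the array from a console instead of patching the driver. Again only a sketch, assuming Mongoid 2.x (Mongoid.master.connection returning the replica set connection) and that the list lives in an @seeds instance variable, as the logging described above suggests; none of this is a documented API.

        # Console sketch to watch the driver's internal seed list while killing and
        # restarting mongod. @seeds is assumed to be the array mentioned above;
        # instance_variable_get avoids depending on any public accessor.
        conn = Mongoid.master.connection

        10.times do
          seeds = conn.instance_variable_get(:@seeds)
          puts "#{Time.now} seeds=#{seeds.inspect}"
          sleep 5
        end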

            Assignee: Unassigned
            Reporter: rohit9889
            Votes: 0
            Watchers: 0

              Created:
              Updated:
              Resolved: