should we allow this reconfig case for replica sets?


    • Type: Question
    • Resolution: Done
    • Priority: Minor - P4
    • Affects Version/s: 2.2.0-rc0
    • Component/s: Replication

      I don't see why not; is there a nuance? I am sending the reconfig to the current primary, but under the new config that member is no longer eligible to become primary. Doesn't this mean that if you want to flip/flop priorities 0 and 1 on a 2-member set, it is impossible to do so directly?

      As a workaround, I stepped down, connected to the other member, and ran the reconfig from there (a sketch of that workaround follows the transcript below).

      x:PRIMARY> cfg = rs.conf()
      {
              "_id" : "x",
              "version" : 2,
              "members" : [
                      {
                              "_id" : 0,
                              "host" : "dm_hp:27017"
                      },
                      {
                              "_id" : 1,
                              "host" : "dm_hp:27000"
                      }
              ]
      }
      x:PRIMARY> cfg.members[1].priority=0
      0
      x:PRIMARY> cfg
      {
              "_id" : "x",
              "version" : 2,
              "members" : [
                      {
                              "_id" : 0,
                              "host" : "dm_hp:27017"
                      },
                      {
                              "_id" : 1,
                              "host" : "dm_hp:27000",
                              "priority" : 0
                      }
              ]
      }
      x:PRIMARY> rs.reconfig(cfg)
      {
              "errmsg" : "exception: initiation and reconfiguration of a replica set must be sent to a node that can become primary",
              "code" : 13420,
              "ok" : 0
      }
      x:PRIMARY>
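
      For reference, a minimal sketch of that workaround. The hostnames and member ids are taken from the transcript above; per the error message, the session above must be on dm_hp:27000 (the member being demoted). The stepDown duration and the reconnect step are assumptions, not part of the original report.

      // 1. On the current primary, step down so the other member can be elected.
      //    The server closes connections on stepDown, so the shell may need to reconnect.
      x:PRIMARY> rs.stepDown(60)
      // 2. Connect to the other member (dm_hp:27017) once it has become primary,
      //    then apply the same config change from there. It is accepted because
      //    the node receiving the reconfig remains electable under the new config.
      x:PRIMARY> cfg = rs.conf()
      x:PRIMARY> cfg.members[1].priority = 0
      x:PRIMARY> rs.reconfig(cfg)
      x:PRIMARY> rs.conf().members[1].priority   // now 0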
      

            Assignee:
            Unassigned
            Reporter:
            Dwight Merriman
            Votes:
            0
            Watchers:
            2
