After all config servers go down, if they are simply restarted (not recovered from backup), can their sharded-chunk metadata resync to an up-to-date state while the application keeps writing to the mongod shards?


    • Type: Question
    • Resolution: Done
    • Priority: Critical - P2
    • Affects Version/s: 2.0.6
    • Component/s: Sharding

      We have a MongoDB sharded cluster:

      3 mongos nodes: A/B/C, 3 config server nodes: X/Y/Z, 2 shards with replica sets: s1/s2.

      — Sharding Status —
      sharding version:

      { "_id" : 1, "version" : 3 }

      shards:

      { "_id" : "s1", "host" : "shard1/host1:27032,host2:27032" }
      { "_id" : "s2", "host" : "shard2/host3:27032,host4:27032" }

      databases:

      { "_id" : "test", "partitioned" : false, "primary" : "s1" }
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "d", "partitioned" : true, "primary" : "s1" }

      d.t chunks:
      s1 139
      s2 137
      too many chunks to print, use verbose if you want to force print

      According to the documentation, if any one of the config servers (X, Y, or Z) is down, the cluster's metadata becomes read-only; however, even in that failure state the cluster can still be read from and written to. We shut down all of the config server nodes one by one, and application requests kept writing to the mongod replica sets.
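      For reference, a minimal mongo-shell sketch of the check we ran while X/Y/Z were stopped (connected to one of the mongos nodes; the d.t namespace is from the cluster above, the document fields are made-up placeholders):

      // connect to a mongos (e.g. node A) while all config servers are down
      use d
      db.t.insert({ k : 1, ts : new Date() })   // writes to already-known chunk ranges still succeed
      db.t.find({ k : 1 }).count()              // reads still succeed
      db.getLastError()                         // null if the write actually reached the shard

      Operations that need metadata changes (chunk splits, migrations) cannot proceed in this state, but routing against the metadata cached by mongos keeps working.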

      But because the application keeps writing, a lot of new data accumulates. I am not sure whether it lands on s1 or s2. Can chunks still be split or moved to balance between s1 and s2? Can the config servers' metadata be kept up to date?
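      Once the config servers are reachable again, the chunk distribution can be inspected and, if needed, rebalanced by hand. A hedged mongo-shell sketch (run against a mongos; the d.t namespace is from this cluster, the shard-key value in moveChunk is a made-up placeholder):

      use config
      db.chunks.count({ ns : "d.t", shard : "s1" })   // chunks of d.t currently on s1
      db.chunks.count({ ns : "d.t", shard : "s2" })   // chunks of d.t currently on s2

      // manually move the chunk containing a given shard-key value to s2
      use admin
      db.runCommand({ moveChunk : "d.t", find : { someKey : 1 }, to : "s2" })

      Normally the balancer does this automatically once the config metadata is writable again.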

      After all of the config servers go down, what is the right recovery procedure: simply restart all of the config servers, or restore a config server node from an old backup? If we restart all of the config servers without restoring from backup, is the cluster available, and does it need to be reconfigured? Will the config servers' metadata resync to an up-to-date state?
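      A sketch of how one might verify the metadata after restarting all three config servers with their existing data files (no restore); exact output will differ per cluster:

      // from a mongos, after X, Y and Z are back up
      use admin
      db.runCommand({ flushRouterConfig : 1 })   // ask this mongos to reload its cached metadata
      db.printShardingStatus()                   // compare the chunk counts with the earlier output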

      Thanks

            Assignee:
            Unassigned
            Reporter:
            Jianfeng Xu
            Votes:
            0
            Watchers:
            2
