sh.addShard can cause inconsistent/wrong config servers


    • Type: Bug
    • Resolution: Done
    • Priority: Minor - P4
    • Affects Version/s: None
    • Component/s: Sharding

      MongoDB 2.6.3 Enterprise

      3 servers with micro-sharding: 16 standalone mongod processes per server (48 shards in total).

      All 3 servers simultaneously execute the following to add shards:

      # One standalone mongod per disk/instance pair (16 per host), listening on port 270${d}${m}.
      # The local mongos listens on port 27090.
      DISKSTART=0
      DISKEND=3
      MONGOSTART=0
      MONGOEND=3
      HOST=`hostname`
      for d in $(seq $DISKSTART $DISKEND); do
        for m in $(seq $MONGOSTART $MONGOEND); do
          # Register this host's mongod with the cluster through the local mongos.
          echo "sh.addShard(\"$HOST:270$d$m\")" | mongo --port 27090
        done
      done
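
      For comparison, a single-host serialized run would issue the same 48 sh.addShard calls without any concurrency. This is only a sketch: the host names server1/server2/server3 are placeholders for the three machines above, and whether serializing the calls avoids the inconsistent config writes is an assumption, not something verified here.

      # Hypothetical serialized variant, run from one node only.
      # Host names are placeholders; the port layout mirrors the loop above.
      for h in server1 server2 server3; do
        for d in $(seq 0 3); do
          for m in $(seq 0 3); do
            echo "sh.addShard(\"$h:270$d$m\")" | mongo --port 27090
          done
        done
      done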
      

      One of the sh.addShard calls returned:

      {
        "ok" : 0,
        "errmsg" : "config write was not consistent, manual intervention may be required. config responses: { 172.31.20.42:27099: { ok: 1, n: 1 }, 172.31.20.40:27099: { ok: 1, n: 0, writeErrors: [ { index: 0, code: 11000, errmsg: \"insertDocument :: caused by :: 11000 E11000 duplicate key error index: config.shards.$id dup key: { : \"shard0026\" }\" } ] }, 172.31.20.41:27099: { ok: 1, n: 0, writeErrors: [ { index: 0, code: 11000, errmsg: \"insertDocument :: caused by :: 11000 E11000 duplicate key error index: config.shards.$id dup key: { : \"shard0026\" }\" } ] } }"
      }

      I also ended up with 78 shards, even though I only called sh.addShard 48 times.
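
      One way to see the duplicates (a sketch, assuming it is run against a mongos on port 27090; the aggregation simply groups config.shards by host to list any host that was registered under more than one shard _id):

      # Count registered shard documents and list hosts that appear under multiple shard ids.
      mongo --port 27090 --eval '
        var cfg = db.getSiblingDB("config");
        print("shard documents: " + cfg.shards.count());   // expected 48, observed 78 here
        cfg.shards.aggregate([
          { $group: { _id: "$host", shardIds: { $addToSet: "$_id" }, n: { $sum: 1 } } },
          { $match: { n: { $gt: 1 } } }
        ]).forEach(printjson);
      '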

            Assignee:
            [DO NOT USE] Backlog - Sharding Team
            Reporter:
            Charlie Page (Inactive)
            Votes:
            1
            Watchers:
            6

              Created:
              Updated:
              Resolved: