Core Server / SERVER-11268

Allow configuring write concern in moveChunk

    • Type: Bug
    • Resolution: Done
    • Priority: Critical - P2
    • Affects Version/s: 2.4.6
    • Component/s: Sharding
    • Labels: None
    • Operating System: ALL

      Simply deploy a sharded cluster backed by replica sets. I used 2 replica sets, each with 2 data-bearing nodes and 1 arbiter.

      $ mkdir -p /Users/k2hyun/Database/mongodb/26101; mongod --port 26101 --dbpath /Users/k2hyun/Database/mongodb/26101 --logpath /Users/k2hyun/Database/mongodb/26101/log --fork --logappend --replSet rs1 --oplogSize 1000
      $ mkdir -p /Users/k2hyun/Database/mongodb/26102; mongod --port 26102 --dbpath /Users/k2hyun/Database/mongodb/26102 --logpath /Users/k2hyun/Database/mongodb/26102/log --fork --logappend --replSet rs1 --oplogSize 1000
      $ mkdir -p /Users/k2hyun/Database/mongodb/26103; mongod --port 26103 --dbpath /Users/k2hyun/Database/mongodb/26103 --logpath /Users/k2hyun/Database/mongodb/26103/log --fork --logappend --replSet rs1 --oplogSize 1000
      $ mongo localhost:26101
      > rs.initiate({"_id":"rs1", members:[{"_id":1, "host":"localhost:26101"}, {"_id":2, "host":"localhost:26102"}, {"_id":3, "host":"localhost:26103", "arbiterOnly":true}]})
      rs1:PRIMARY>
      
      $ mkdir -p /Users/k2hyun/Database/mongodb/26201; mongod --port 26201 --dbpath /Users/k2hyun/Database/mongodb/26201 --logpath /Users/k2hyun/Database/mongodb/26201/log --fork --logappend --replSet rs2 --oplogSize 1000
      $ mkdir -p /Users/k2hyun/Database/mongodb/26202; mongod --port 26202 --dbpath /Users/k2hyun/Database/mongodb/26202 --logpath /Users/k2hyun/Database/mongodb/26202/log --fork --logappend --replSet rs2 --oplogSize 1000
      $ mkdir -p /Users/k2hyun/Database/mongodb/26203; mongod --port 26203 --dbpath /Users/k2hyun/Database/mongodb/26203 --logpath /Users/k2hyun/Database/mongodb/26203/log --fork --logappend --replSet rs2 --oplogSize 1000
      $ mongo localhost:26201
      > rs.initiate({"_id":"rs2", members:[{"_id":1, "host":"localhost:26201"}, {"_id":2, "host":"localhost:26202"}, {"_id":3, "host":"localhost:26203", "arbiterOnly":true}]})
      rs2:PRIMARY>
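      
      Optionally, verify that each replica set has elected a primary before adding the shards (a quick sanity check from the shell):
      
      $ mongo localhost:26101 --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })'
      $ mongo localhost:26201 --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })'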
      
      $ mkdir -p /Users/k2hyun/Database/mongodb/26001; mongod --port 26001 --dbpath /Users/k2hyun/Database/mongodb/26001 --logpath /Users/k2hyun/Database/mongodb/26001/log --fork --logappend --configsvr
      $ mkdir -p /Users/k2hyun/Database/mongodb/26002; mongod --port 26002 --dbpath /Users/k2hyun/Database/mongodb/26002 --logpath /Users/k2hyun/Database/mongodb/26002/log --fork --logappend --configsvr
      $ mkdir -p /Users/k2hyun/Database/mongodb/26003; mongod --port 26003 --dbpath /Users/k2hyun/Database/mongodb/26003 --logpath /Users/k2hyun/Database/mongodb/26003/log --fork --logappend --configsvr
      
      $ mkdir -p /Users/k2hyun/Database/mongodb/26400; mongos --port 26400 --logpath /Users/k2hyun/Database/mongodb/26400/log --fork --logappend --configdb "localhost:26001,localhost:26002,localhost:26003"
      $ mongo localhost:26400
      mongos> sh.addShard("rs1/localhost:26101,localhost:26102")
      { "shardAdded" : "rs1", "ok" : 1 }
      mongos> sh.addShard("rs2/localhost:26201,localhost:26202")
      { "shardAdded" : "rs2", "ok" : 1 }
      

      Then kill one of the data-bearing nodes. (I killed the mongod listening on port 26202.)
      The final step is to create a hashed sharded collection; the operation NEVER completes. Running moveChunk by hand hangs the same way, as sketched after the commands below.

      mongos> db.runCommand({"enableSharding":"foo"})
      { "ok" : 1 }
      mongos> db.runCommand({"shardCollection":"foo.bar", "key":{"name":"hashed"}})
      

      In a sharded cluster backed by replica sets, if any data-bearing node goes down, the moveChunk command never completes; creating a sharded collection with a hashed key (which triggers the same chunk migrations) hangs in the same way.

      The primary of the degraded replica set logs the following: the migration commit is waiting for the documents to replicate to 2 members, which can never succeed while one of the set's two data-bearing nodes is down.

      Sat Oct 19 01:34:28.611 [rsHealthPoll] couldn't connect to localhost:26202: couldn't connect to server localhost:26202
      Sat Oct 19 01:34:29.292 [migrateThread] Waiting for replication to catch up before entering critical section
      Sat Oct 19 01:34:29.293 [migrateThread] warning: migrate commit waiting for 2 slaves for 'foo.bar' { name: 0 } -> { name: MaxKey } waiting for: 526162d9:2

      I tried this on 2.4.2 and it works well; the hang happens only on 2.4.6.
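
      The resolution of this ticket makes that replication wait configurable. On server versions that include the change, moveChunk accepts a writeConcern document alongside _secondaryThrottle (a sketch following the public moveChunk documentation; values are illustrative):

      mongos> db.adminCommand({"moveChunk":"foo.bar",
      ...                      "find":{"name":NumberLong(0)},
      ...                      "to":"rs2",
      ...                      "_secondaryThrottle":true,
      ...                      "writeConcern":{"w":1, "wtimeout":60000}})

      With "w":1 the migration no longer blocks on the unreachable secondary. The same knob exists for balancer-driven migrations via the _secondaryThrottle field of the balancer document in config.settings:

      mongos> use config
      mongos> db.settings.update({"_id":"balancer"}, {"$set":{"_secondaryThrottle":{"w":1}}}, true)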

            Assignee: Randolph Tan (randolph@mongodb.com)
            Reporter: Kihyun Kim (k2hyun)
            Votes: 1
            Watchers: 3
