Core Server / SERVER-44648

Sharded cluster not working when one of the shards goes down

    • Type: Question
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 4.0.13
    • Component/s: None

      I set up a testing environment to test the availability of a sharded MongoDB cluster. Below is my setup:

      1. All the MongoDB components are running on my local machine on Windows.
      2. The cluster has three shards, A, B, and C. Each shard is a replica set of three mongod servers.
      3. Three config servers are running.
      4. One mongos is running.

      After the above setup is done, sharding is enabled for a database and for one of its collections. The sharded collection is then populated with some data. Checking the shard status confirms that the shards carry data in the sharded collection, as in the sketch below.
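      A minimal sketch of how the data population and the shard-status check can be done from a mongo shell connected to the mongos, assuming the database and collection used later in this ticket (countingwell.users) with a hashed _id shard key; the document fields and the number of documents are illustrative only:

       use countingwell
       // Populate the sharded collection with some test data.
       for (var i = 0; i < 1000; i++) {
           db.users.insert({ name: "user" + i, createdAt: new Date() });
       }
       // Confirm that the documents are spread across the shards.
       db.users.getShardDistribution();
       sh.status();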

      Then the following steps are taken for the sharded cluster availability test (a sketch of the select and insert operations used is shown after the error output below):
      1. Perform a select and an insert operation on the sharded collection. Both operations are successful.
      2. Shut down a secondary node of Shard A, then perform a select and an insert on the sharded collection. Again, both operations are successful. The queries are issued through the mongos server.
      3. Shut down one more secondary node of Shard A and perform a select and an insert on the sharded collection. Again, both operations are successful. The queries are issued through the mongos server.
      4. Shut down the primary node of Shard A. The select and insert queries are again issued through the mongos server, but they now fail with the following error:

      {
         "ok" : 0,
         "errmsg" : "Could not find host matching read preference { mode: \"secondarypreferred\", tags: [ {} ] } for set s0",
         "code" : 133,
         "codeName" : "FailedToSatisfyReadPreference",
         "operationTime" : Timestamp(1573819851, 2),
         "$clusterTime" : {
            "clusterTime" : Timestamp(1573819851, 2),
            "signature" : {
               "hash" : BinData(0, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
               "keyId" : NumberLong(0)
            }
         }
      }
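      For reference, the select and insert operations used in the steps above were of roughly this shape (a sketch from a mongo shell connected to the mongos; the document fields and the explicit read preference are assumptions, the latter based on the secondarypreferred mode reported in the error):

       use countingwell
       // Read through mongos; with no query predicate this is broadcast to
       // every shard, so each shard must be able to satisfy the read preference.
       db.users.find().limit(5).readPref("secondaryPreferred").toArray();
       // Write through mongos; the hashed _id key routes it to a single shard.
       db.users.insert({ name: "availability-test", createdAt: new Date() });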
      

      These are all the setup commands I used:

      shard1

       mongod --replSet s0 --logpath "/data3/s0-r0.log" --dbpath /data3/shard0/rs0 --port 37017 --shardsvr --smallfiles
       mongod --replSet s0 --logpath "/data3/s0-r1.log" --dbpath /data3/shard0/rs1 --port 37018 --shardsvr --smallfiles
       mongod --replSet s0 --logpath "/data3/s0-r2.log" --dbpath /data3/shard0/rs2 --port 37019 --shardsvr --smallfiles
       mongo --port 37017
       config = { _id: "s0", members:[{ _id : 0, host : "localhost:37017" },
       { _id : 1, host : "localhost:37018" },
       { _id : 2, host : "localhost:37019" }]};
       rs.initiate(config);
      

      shard2

       mongod --replSet s1 --logpath "/data3/s1-r0.log" --dbpath /data3/shard1/rs0 --port 47017 --shardsvr --smallfiles
       mongod --replSet s1 --logpath "/data3/s1-r1.log" --dbpath /data3/shard1/rs1 --port 47018 --shardsvr --smallfiles
       mongod --replSet s1 --logpath "/data3/s1-r2.log" --dbpath /data3/shard1/rs2 --port 47019 --shardsvr --smallfiles
       mongo --port 47017
       config = { _id: "s1", members:[{ _id : 0, host : "localhost:47017" },
       { _id : 1, host : "localhost:47018" },
       { _id : 2, host : "localhost:47019" }]};
       rs.initiate(config);
      

      shard3

 mongod --replSet s2 --logpath "/data3/s2-r0.log" --dbpath /data3/shard2/rs0 --port 57017 --shardsvr --smallfiles
       mongod --replSet s2 --logpath "/data3/s2-r1.log" --dbpath /data3/shard2/rs1 --port 57018 --shardsvr --smallfiles
       mongod --replSet s2 --logpath "/data3/s2-r2.log" --dbpath /data3/shard2/rs2 --port 57019 --shardsvr --smallfiles
       mongo --port 57017
       config = { _id: "s2", members:[{ _id : 0, host : "localhost:57017" },
       { _id : 1, host : "localhost:57018" },
       { _id : 2, host : "localhost:57019" }]};
       rs.initiate(config);
      

      config server

       mongod --replSet cs --logpath "/data3/cfg-a.log" --dbpath /data3/config/config-a --port 57040 --configsvr --smallfiles
       mongod --replSet cs --logpath "/data3/cfg-b.log" --dbpath /data3/config/config-b --port 57041 --configsvr --smallfiles
       mongod --replSet cs --logpath "/data3/cfg-c.log" --dbpath /data3/config/config-c --port 57042 --configsvr --smallfiles
      

      mongo --port 57040

 config = { _id: "cs", members:[{ _id : 0, host : "localhost:57040" },
 { _id : 1, host : "localhost:57041" },
 { _id : 2, host : "localhost:57042" }]};
       rs.initiate(config);
      

      mongos server

       mongos --logpath "/data3/mongos-1.log" --configdb cs/localhost:57040,localhost:57041,localhost:57042
      

      mongo (shell connected to the mongos)

       db.adminCommand( { addshard : "s0/"+"localhost:37017", name : "shard0" } );
       db.adminCommand( { addshard : "s1/"+"localhost:47017", name : "shard1" } );
       db.adminCommand( { addshard : "s2/"+"localhost:57017", name : "shard2" } );
       db.adminCommand( {enableSharding: "countingwell"} );
       db.adminCommand( {shardCollection: "countingwell.users", key: {_id:"hashed"} });
      

      As per the MongoDB documentation, this is not the expected behavior. Can you point out whether there is anything wrong in my setup/steps above?

            Assignee: Carl Champain (Inactive) <carl.champain@mongodb.com>
            Reporter: Vivek Vishwakarma <vvishwakarma123@gmail.com>