Core Server / SERVER-12389

Insert to non-sharded collection sent to a different singleShard


Details

    • Type: Question
    • Resolution: Done
    • Priority: Major - P3
    • Component: Sharding

    Description

      Hi,

      I'm trying to insert documents into a database that is not sharded:

      {  "_id" : "score",  "partitioned" : false,  "primary" : "t2" }

      So I'm connecting to mongos and doing the inserts, but the document count in the collection is different from the number of documents I inserted.

      Here is the array returned by the insert commands:

      {"singleShard":"t1\/10.0.9.1:27017,10.0.9.2:27017","n":0,"lastOp":{"sec":1389944912,"inc":404},"connectionId":215537,"err":null,"ok":1}
      {"singleShard":"t1\/10.0.9.1:27017,10.0.9.2:27017","n":0,"lastOp":{"sec":1389944912,"inc":435},"connectionId":215537,"err":null,"ok":1}
      {"singleShard":"t1\/10.0.9.1:27017,10.0.9.2:27017","n":0,"lastOp":{"sec":1389944912,"inc":441},"connectionId":215537,"err":null,"ok":1}
      {"singleShard":"t2\/10.0.9.3:27017,10.0.9.4:27017","n":0,"lastOp":{"sec":1389944912,"inc":162},"connectionId":199872,"err":null,"ok":1}
      {"singleShard":"t2\/10.0.9.3:27017,10.0.9.4:27017","n":0,"lastOp":{"sec":1389944912,"inc":168},"connectionId":199872,"err":null,"ok":1}
      {"singleShard":"t2\/10.0.9.3:27017,10.0.9.4:27017","n":0,"lastOp":{"sec":1389944912,"inc":171},"connectionId":199872,"err":null,"ok":1}

      Why are some of the inserts sent to the t1 replica set?
      Will the data be synced by the balancer, or is this a bug?

      mongos> version()
      2.4.6
      t1:PRIMARY> version()
      2.4.6
      t2:PRIMARY> version()
      2.4.6
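
      One way to check where the documents actually landed is to compare the count seen through mongos with counts taken directly on each replica set primary; a sketch, again with "scores" as a placeholder collection name:

      mongos> db.getSiblingDB("score").scores.count()      // total as seen through mongos
      t1:PRIMARY> db.getSiblingDB("score").scores.count()  // documents stored on the t1 shard
      t2:PRIMARY> db.getSiblingDB("score").scores.count()  // documents stored on the t2 shard (the primary shard)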


          People

            thomas.rueckstiess@mongodb.com Thomas Rueckstiess
            djack Jack Glowacki
            Votes:
            0 Vote for this issue
            Watchers:
            4 Start watching this issue
