printShardingStatus causes high vsize on config server


    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 2.4.6
    • Component/s: Stability
    • Operating System: ALL

      1. start a sharded cluster with 3 config servers.
      2. enable sharding, shard a collection, create some chunks.
      3. connect a shell (through mongos) and call printShardingStatus() a few times.

      The "first" config server in the config server list shows a much higher vsize than the others:

      config1: 4.08gb
      config2: 2.56gb
      config3: 2.56gb


      Noticing a much higher vsize on one config server than on the others in the cluster. In addition to the higher vsize, the logs show slow queries for a group command.

      Tue Mar 4 15:04:33.011 [conn169143] command config.$cmd command: { group: { $reduce: function (doc, out) {out.nchunks++;}, ns: "chunks", cond: { ns: "database.collection" }, key: { shard: 1 }, initial: { nchunks: 0 } } } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) r:385815 reslen:2321 226ms
      

      That group command is issued by the printShardingStatus() helper in the shell.
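      As a rough illustration of what that group command computes, the sketch below simulates its reduce step locally over an in-memory array (this is not the server implementation; the sample documents and shard names are hypothetical stand-ins for config.chunks entries):

      ```javascript
      // Hypothetical sample documents standing in for config.chunks entries.
      const chunks = [
        { ns: "database.collection", shard: "shard0000" },
        { ns: "database.collection", shard: "shard0000" },
        { ns: "database.collection", shard: "shard0001" },
        { ns: "other.collection",    shard: "shard0001" },
      ];

      // Equivalent of: cond: { ns: "database.collection" }
      const matching = chunks.filter(doc => doc.ns === "database.collection");

      // Equivalent of: key: { shard: 1 }, initial: { nchunks: 0 },
      // $reduce: function (doc, out) { out.nchunks++; }
      const counts = {};
      for (const doc of matching) {
        const out = counts[doc.shard] || (counts[doc.shard] = { nchunks: 0 });
        out.nchunks++;
      }

      console.log(counts);
      // → { shard0000: { nchunks: 2 }, shard0001: { nchunks: 1 } }
      ```

      On the server, this per-shard chunk count runs as a JavaScript reduce over every matching chunk document, which is why it shows up as a slow command once the chunks collection grows.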

            Assignee:
            Jared Rosoff (Inactive)
            Reporter:
            Jared Rosoff (Inactive)
            Votes:
            0
            Watchers:
            4

              Created:
              Updated:
              Resolved: