Core Server / SERVER-7960

Chunk size different on shards of same MongoDB cluster


    Details

    • Operating System:
      Linux
    • Steps To Reproduce:

      Set up a cluster with 6 shards and start the router without explicitly specifying values for the chunkSize and oplogSize parameters.
      Load the data into the cluster using YCSB clients.
      Once the data loading is done, check the distribution of data: log in to any of the routers, switch to the database, and issue the following command.
      db.<collection>.getShardDistribution()

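The estimates reported by getShardDistribution() are simple ratios: estimated data per chunk is data/chunks, and estimated docs per chunk is docs/chunks. A minimal sketch of that arithmetic, using shard1's figures from the output below (38.8 GB, 43049426 docs, 621 chunks):

```python
# Sketch of the arithmetic behind getShardDistribution()'s per-shard
# estimates (assumes 1 GB = 1024 MB, matching the shell's Gb/Mb units).

def per_chunk_estimates(data_mb, docs, chunks):
    """Return (estimated MB of data per chunk, estimated docs per chunk)."""
    return data_mb / chunks, docs // chunks

# shard1's reported figures: 38.8 GB of data across 621 chunks.
mb_per_chunk, docs_per_chunk = per_chunk_estimates(38.8 * 1024, 43049426, 621)
print(mb_per_chunk, docs_per_chunk)  # roughly 64 MB and 69322 docs per chunk
```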

      Description

      We have set up a MongoDB cluster with 6 shards and a replication factor of 3.
      When starting the router process, the default chunk size and oplog size were used, as neither value was specified explicitly.

      Shard3 has an estimated 161 MB of data per chunk, while the rest of the shards have 60-90 MB per chunk.
      All shards run on similar instance types in the Amazon EC2 environment.
      What we have noticed using the db.<collection>.getShardDistribution() command is as follows:

      Shard shard1 at shard1/<ips of shard1>
      data : 38.8Gb docs : 43049426 chunks : 621
      estimated data per chunk : 63.99Mb
      estimated docs per chunk : 69322

      Shard shard2 at shard2/<ips of shard2>
      data : 40.24Gb docs : 44644092 chunks : 620
      estimated data per chunk : 66.47Mb
      estimated docs per chunk : 72006

      Shard shard3 at shard3/<ips of shard3>
      data : 102.65Gb docs : 113874252 chunks : 649
      estimated data per chunk : 161.97Mb
      estimated docs per chunk : 175461

      Shard shard4 at shard4/<ips of shard4>
      data : 54.51Gb docs : 60472368 chunks : 620
      estimated data per chunk : 90.04Mb
      estimated docs per chunk : 97536

      Shard shard5 at shard5/<ips of shard5>
      data : 50.48Gb docs : 56005174 chunks : 620
      estimated data per chunk : 83.38Mb
      estimated docs per chunk : 90330

      Shard shard6 at shard6/<ips of shard6>
      data : 46.32Gb docs : 51388397 chunks : 620
      estimated data per chunk : 76.51Mb
      estimated docs per chunk : 82884

      Totals
      data : 333.05Gb docs : 369433709 chunks : 3750
      Shard shard1 contains 11.65% data, 11.65% docs in cluster, avg obj size on shard : 967b
      Shard shard2 contains 12.08% data, 12.08% docs in cluster, avg obj size on shard : 967b
      Shard shard3 contains 30.82% data, 30.82% docs in cluster, avg obj size on shard : 967b
      Shard shard4 contains 16.36% data, 16.36% docs in cluster, avg obj size on shard : 967b
      Shard shard5 contains 15.15% data, 15.15% docs in cluster, avg obj size on shard : 967b
      Shard shard6 contains 13.91% data, 13.91% docs in cluster, avg obj size on shard : 967b
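The totals above can be recomputed from the per-shard figures to make the imbalance concrete: shard3 holds roughly 31% of the data in only about 17% of the chunks, so its chunks are about 2.5x the size of its peers'. A quick sketch of that arithmetic:

```python
# Recompute the "Totals" percentages from the per-shard figures above to show
# the imbalance: shard3 carries ~31% of the data but only ~17% of the chunks.
shards = {
    "shard1": (38.80, 621),   # (data in GB, chunk count)
    "shard2": (40.24, 620),
    "shard3": (102.65, 649),
    "shard4": (54.51, 620),
    "shard5": (50.48, 620),
    "shard6": (46.32, 620),
}
total_data = sum(d for d, _ in shards.values())    # ~333.0 GB
total_chunks = sum(c for _, c in shards.values())  # 3750 chunks
data_pct = 100 * shards["shard3"][0] / total_data
chunk_pct = 100 * shards["shard3"][1] / total_chunks
print(f"shard3: {data_pct:.2f}% of data in {chunk_pct:.2f}% of chunks")
```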

            People

            Assignee:
            stennie Stennie Steneker
            Reporter:
            krisant007 Santosh Kumar L
            Votes:
            0
            Watchers:
            4