Core Server / SERVER-9945

Hard coded MaxObjectPerChunk limit

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: Sharding
    • Labels: None

      Hi,

      Why is MaxObjectPerChunk hard coded at this line: https://github.com/mongodb/mongo/blob/master/src/mongo/s/chunk.cpp#L54?
      We want to move a chunk to another shard, but the moveChunk command is aborted with "chunk too big to move".

      I found that at this line: https://github.com/mongodb/mongo/blob/master/src/mongo/s/d_migrate.cpp#L408 the calculated maxRecsWhenFull is reset to the hard-coded MaxObjectPerChunk limit if it is greater.
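
      As I read it, the logic amounts to something like the sketch below. This is only an illustration of what I mean, not the actual d_migrate.cpp code; the 250000 constant is the MaxObjectPerChunk value from chunk.cpp and the other numbers are from our setup.

      // Sketch only -- not the real d_migrate.cpp code.
      #include <algorithm>
      #include <iostream>

      int main() {
          const long long MaxObjectPerChunk = 250000;   // hard coded in chunk.cpp

          long long maxChunkSize = 64LL * 1024 * 1024;  // our configured chunk size (67108864 bytes)
          long long avgRecSize   = 72;                  // average document size in bytes, as reported below

          // How many documents would fit into a "full" chunk of maxChunkSize bytes...
          long long maxRecsWhenFull = maxChunkSize / avgRecSize;              // ~932067

          // ...but the value is capped at the hard-coded object count,
          // which is why we end up with 250001 instead of ~932067.
          maxRecsWhenFull = std::min(maxRecsWhenFull, MaxObjectPerChunk) + 1;

          std::cout << "maxRecsWhenFull = " << maxRecsWhenFull << std::endl;  // prints 250001
          return 0;
      }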

      Our configured chunk size is 64 MB and the chunk has a size of ~32 MB, so it should be movable without problems.
      Why this limit?

      To verify this, I added some of the involved values to the failed response; here is the output:

      {
      	"cause" : {
      		"chunkTooBig" : true,
      		"estimatedChunkSize" : 34355880,
      		"recCount" : 477165,
      		"avgRecSize" : 72,
      		"maxRecsWhenFull" : 250001,
      		"maxChunkSize" : 67108864,
      		"totalRecs" : 47328811,
      		"Chunk::MaxObjectPerChunk" : 250000,
      		"ok" : 0,
      		"errmsg" : "chunk too big to move"
      	},
      	"ok" : 0,
      	"errmsg" : "move failed"
      }
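
      Plugging the reported values back into that sketch (assuming the migration is rejected on the document count alone, which is what the field names suggest):

      estimatedChunkSize = recCount * avgRecSize = 477165 * 72 = 34355880 bytes (~32 MB), well below maxChunkSize (67108864 bytes)
      recCount = 477165 > maxRecsWhenFull = 250001, so the migration is aborted with "chunk too big to move"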
      

      So a chunk with more than 250000 documents can't be moved at all? IMHO that is not expected behaviour, is it? We think this is the reason our cluster is not well balanced; we see many abort entries in config.changelog.

      Any thoughts?

      Thanks & Regards
      Thomas

            Assignee:
            Unassigned
            Reporter:
            Thomas Adam (tecbot)
            Votes:
            0
            Watchers:
            7
