SERVER-9365: mongod always splits at position 250000

Details

• Type: Bug
• Status: Closed
• Priority: Major - P3
• Resolution: Fixed
• Affects Version/s: 2.2.1
• Fix Version/s: 2.4.6, 2.5.2
• Component/s: Sharding
• Labels: None
• Backwards Compatibility: Fully Compatible
• Operating System: ALL

Description

We use mongod to store small documents; the average size is about 80 bytes, and the chunk size is 64MB.

When we add a new shard to the cluster, moveChunk starts but always reports that the chunk is too large and cannot be moved; the chunk is then split with the force:true option.

mongod log:
limiting split vector to 250000 (from 634775618) objects

634775618 seems too large! It is half of the collection's object count.

I found the following in the source code, in s/d_split.cpp:

const long long recCount = d->stats.nrecords;
const long long dataSize = d->stats.datasize;

// 'force'-ing a split is equivalent to having maxChunkSize be the size of the
// current chunk, i.e., the logic below will split that chunk in half
long long maxChunkSize = 0;
bool force = false;
{
    BSONElement maxSizeElem = jsobj[ "maxChunkSize" ];
    BSONElement forceElem = jsobj[ "force" ];

    if ( forceElem.trueValue() ) {
        force = true;
        maxChunkSize = dataSize;
    }
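
For context, the split-point calculation further down in the same function looks roughly like this (paraphrased from the 2.2-era s/d_split.cpp, so the exact wording may differ; maxChunkObjects appears to default to Chunk::MaxObjectPerChunk, i.e. 250000):

// Use the average object size to estimate how many keys each chunk
// should hold; the goal is to split at half of maxChunkSize.
const long long avgRecSize = dataSize / recCount;
long long keyCount = maxChunkSize / ( 2 * avgRecSize );

if ( maxChunkObjects && ( maxChunkObjects < keyCount ) ) {
    log() << "limiting split vector to " << maxChunkObjects
          << " (from " << keyCount << ") objects " << endl;
    keyCount = maxChunkObjects;
}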

When force is true, maxChunkSize is set to dataSize.

But I think dataSize is the size of the whole collection, not of the chunk, so keyCount = dataSize / (2 * avgRecSize) = recCount / 2, which here is 634775618, and that then gets clamped to 250000.

So when force is true, the split always lands at position 250000, not at 1/2 of the chunk.
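
A minimal standalone sketch of that arithmetic, plugging in the numbers from this report (recCount is assumed to be 2 * 634775618, since the logged value looks like recCount / 2; the 80-byte average is from above):

#include <algorithm>
#include <iostream>

int main() {
    const long long recCount   = 2LL * 634775618;  // assumed: logged value is recCount / 2
    const long long avgRecSize = 80;               // average document size in bytes (from above)
    const long long dataSize   = recCount * avgRecSize;

    // With force:true, maxChunkSize becomes the whole collection's data size.
    const long long maxChunkSize = dataSize;

    // splitVector aims for chunks of maxChunkSize / 2 ...
    const long long keyCount = maxChunkSize / ( 2 * avgRecSize );   // == recCount / 2

    // ... and then clamps to the per-chunk object cap seen in the log.
    const long long maxChunkObjects = 250000;

    std::cout << "keyCount before cap: " << keyCount << "\n";  // 634775618
    std::cout << "keyCount after cap:  "
              << std::min( keyCount, maxChunkObjects ) << "\n"; // 250000
    return 0;
}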

Am I wrong?

People

• Votes: 3
• Watchers: 12
