Core Server / SERVER-2132

Data loss when using a small chunkSize

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 1.6.3
    • Component/s: Stability
    • Labels: None
    • Environment:
      OS: Red Hat Enterprise Linux AS release 4. Kernel: 2.6.9_5 x86_64
      mongodb version: mongodb-linux-x86_64-static-legacy-1.6.3
    • Linux

      When using the mongoimport tool to insert sample data (100,000,000 rows in total)
      with chunkSize set to 50MB, about 20,000 rows are lost.
      With the default chunkSize (200MB), there is no data loss.

      The sharded environments tested were: 2 shards; 2 shards + 2 replicas; and 4 shards + 4 replicas.
      In every configuration, setting chunkSize to 50MB causes data loss.
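      For context, a reproduction along these lines should trigger the report's conditions. This is a minimal sketch, not the reporter's exact setup: the data paths, ports, file name sample.json, and the database/collection names sampledb.rows are illustrative assumptions, and only the 50MB chunk size and the 1.6.x tooling come from the ticket.

      ```shell
      # Start a minimal sharded cluster (paths and ports are illustrative assumptions).
      mongod --configsvr --dbpath /data/config --port 27019 --fork --logpath /data/config.log
      mongod --shardsvr --dbpath /data/shard1 --port 27018 --fork --logpath /data/shard1.log
      mongod --shardsvr --dbpath /data/shard2 --port 27017 --fork --logpath /data/shard2.log

      # Route through mongos with the small 50MB chunk size from the report.
      mongos --configdb localhost:27019 --chunkSize 50 --port 27020 --fork --logpath /data/mongos.log

      # Register the shards, shard a collection, then import the sample data.
      mongo --port 27020 admin --eval "
        db.runCommand({addshard: 'localhost:27018'});
        db.runCommand({addshard: 'localhost:27017'});
        db.runCommand({enablesharding: 'sampledb'});
        db.runCommand({shardcollection: 'sampledb.rows', key: {_id: 1}});"
      mongoimport --host localhost:27020 --db sampledb --collection rows --file sample.json
      ```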

      When chunkSize is set to 50MB, the mongos log contains many autosplit failures, such as:
      ERROR: splitIfShould failed: locking namespace failed
      or
      ERROR: saving chunks failed.

      I think the autosplit failures are expected when using a small chunkSize,
      but the data loss is strange.

      The attachment contains all logs from the run with 2 shards.

            Assignee:
            Unassigned
            Reporter:
            Tao Liu (tullyliu)
            Votes:
            0
            Watchers:
            2
