• Type: Question
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 2.4.6
    • Component/s: Performance, Sharding, Storage
    • Labels: None

      We use MongoDB 2.4.6.
      We added a new shard to our cluster. Each shard is a replica set with 3 nodes (2 powerful nodes and one hidden backup secondary).
      We use tag-aware sharding.
      The new shard was added for a collection that previously lived on 2 shards only, so data was balanced from the 2 old shards to the new one, 3 in total for this collection. The number of chunks per shard is now equal (1172 each). These shards hold only this one collection (tag-aware sharding).
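      To illustrate the setup, the tag-aware configuration looks roughly like the sketch below, run against a mongos (shard names, the tag name, and the tag range are placeholders, not our exact values):

      sh.addShardTag("shard0000", "targets")      // old shard
      sh.addShardTag("shard0001", "targets")      // old shard
      sh.addShardTag("shard0002", "targets")      // newly added shard
      sh.addTagRange("database.collection",
                     { targetUid: MinKey },
                     { targetUid: MaxKey },
                     "targets")
      // chunk distribution per shard can be checked with
      sh.status()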

      But on one of the 2 old shards we see a lot of cleanup messages in the log, and consequently high IO on that server:

      Tue Feb 18 11:15:33.860 [cleanupOldData-52f25c698cd98919080ca194] moveChunk deleted 124451 documents for database.collection from { targetUid: -5991322687277590967 } -> { targetUid: -5985241970436000175 }
      Tue Feb 18 11:15:33.860 [cleanupOldData-52f25e478cd98919080ca199] moveChunk starting delete for: database.collection from { targetUid: -5985241970436000175 } -> { targetUid: -5981234632543141839 }
      

      It seems that the cleaner goes through the whole hashed shard key range.
      1. Is there a way to skip this cleanup, e.g. via a compaction, repair, or full resync?

      We also disabled "_secondaryThrottle" in the balancer settings afterwards, but it seems this is not honored on the replica set:

      Tue Feb 18 06:07:23.792 [cleanupOldData-52f24e398cd98919080ca17b] Helpers::removeRangeUnlocked time spent waiting for replication: 1151188ms
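      For reference, this is roughly how we changed the setting on a mongos (a sketch only; the exact document in config.settings may differ):

      use config
      db.settings.update({ _id: "balancer" },
                         { $set: { _secondaryThrottle: false } },
                         true)               // upsert in case the document does not exist yet
      db.settings.find({ _id: "balancer" })  // verify the new value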
      

      2. Is there a way to disable it afterwards?

            Assignee:
            thomas.rueckstiess@mongodb.com Thomas Rueckstiess
            Reporter:
            steffen Steffen
            Votes:
            2
            Watchers:
            7

              Created:
              Updated:
              Resolved: