Priority: Major - P3
Affects Version/s: 2.6.4
Fix Version/s: None
We are running a sharded cluster of 7 shards. Each shard is a replica set of 3 members.
We do pre-splitting in order to distribute newly inserted documents evenly across all shards, in proportion to each shard's amount of RAM. Our database must fit in RAM, otherwise performance drops significantly. The balancer is switched off because it assumes all servers have the same amount of RAM, which would overload the shards that have less RAM.
From time to time, e.g. when adding a new shard, we need to rebalance manually. Our script first executes sh.moveChunk and, when it returns ok, deletes the moved documents from the source shard, because we have observed that sh.moveChunk does not always clean up 100% of them.
In pseudo code it gives:
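Roughly, a sketch of the loop in mongo shell pseudo code; the namespace (`mydb.mycoll`), shard name, shard-key field (`key`) and chunk list are illustrative, not our real names:

```
// chunks: list of { min: {key: ...}, max: {key: ...} } to migrate
for (var i = 0; i < chunks.length; i++) {
    var c = chunks[i];
    print("Move " + i);
    // move the chunk from the donor to the target shard
    var res = sh.moveChunk("mydb.mycoll", c.min, "shard0007");
    if (res.ok) {
        // clean up documents that moveChunk left behind on the donor;
        // the hang appears when writeConcern "majority" is added here
        var del = db.getSiblingDB("mydb").mycoll.remove(
            { key: { $gte: c.min.key, $lt: c.max.key } },
            { writeConcern: { w: "majority" } }
        );
        print("deleted " + del.nRemoved);
    }
}
```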
We observed that mongo may wait forever when the remove command is issued with writeConcern "majority".
In the currentOp we see something like this:
While mongo was waiting, we checked the number of documents in this chunk's range on all replica set members of the donor shard: it was 0! So all documents of this chunk had already been removed from the donor shard, yet mongo was still waiting for it. Why?
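The check was along these lines, run separately against each of the three members of the donor replica set (collection name and upper bound illustrative; the lower bound is the one from the stuck query):

```
// on a secondary, allow reads first
rs.slaveOk();
// count documents in the chunk's shard-key range
db.mycoll.find({ key: { $gte: 2577460009, $lt: 2577470000 } }).count();
// returned 0 on every member of the donor shard
```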
Even stranger, the output of the above pseudo script looked like this:
Move 4 hung forever.
As you can see, mongo was endlessly waiting on the query $gte:2577460009, even though the delete had already returned: it printed the number of deleted documents (3561). Only the following chunk move (#4) got stuck. Why?
Also, as you can see, our script always had to delete documents that sh.moveChunk should already have removed, shouldn't it?