Type: Bug
Resolution: Done
Priority: Major - P3
Affects Version/s: 2.6.3, 2.6.5
Component/s: Replication
Backwards Compatibility: Fully Compatible
Operating System: ALL
We recently reduced our replica set from 4 nodes (3 + 1 hidden) to 3. After removing the 4th node and reconfiguring the remaining members, the cluster comes back up just fine. However, after failing over, the cluster will not accept any writes with a write concern greater than 1. If you fail back to the original primary, the replica set works fine again. There is a workaround: simply restart all mongod processes after the reconfig, and everything works. We have been able to reproduce this bug consistently on versions 2.6.3 and 2.6.5. The issue does not appear to be present in 2.8.0-rc0.
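The sequence above can be sketched as a mongo shell session. This is a hedged repro outline, not an exact transcript: the collection name and writeConcern values are illustrative assumptions, and it assumes the 4th node is the hidden member.

```javascript
// Run against the primary of a 4-node replica set (3 + 1 hidden).

// 1. Remove the hidden 4th node and reconfigure the remaining members.
var cfg = rs.conf();
cfg.members = cfg.members.filter(function (m) { return !m.hidden; });
cfg.version += 1;          // reconfig requires a bumped config version
rs.reconfig(cfg);

// 2. Force a failover so a different member becomes primary.
rs.stepDown(60);

// 3. On the new primary, a write with w > 1 times out (the bug):
db.test.insert({ x: 1 }, { writeConcern: { w: 2, wtimeout: 5000 } });

// Workaround from the report: restart every mongod after the reconfig,
// after which writes with w > 1 succeed again.
```

Failing back to the original primary (e.g. via rs.stepDown() on the new one) also restores normal behavior, per the report.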