[SERVER-26216] ReplicaSet servers able to make a putsch due to bad network timeout parameters Created: 21/Sep/16 Updated: 07/Apr/17 Resolved: 23/Sep/16 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Networking, Replication |
| Affects Version/s: | 3.2.9 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Matthieu Rigal | Assignee: | Kelsey Schubert |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Operating System: | ALL |
| Steps To Reproduce: | Take a running replica set with 5 members split across two networks: 3 nodes (including an arbiter) in one datacenter and 2 nodes in the other. Cut the network between them for a while, then re-establish it while delaying network traffic on the non-arbiter machines (especially the primary). One of the other nodes will become PRIMARY even though the arbiter is still able to communicate with the PRIMARY. (See the simulation sketch after this table.) |
| Participants: |
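The reproduction steps above involve cutting the cross-datacenter link and then restoring it with added latency on the non-arbiter machines. As a rough illustration only (not part of the original report), here is a minimal sketch of how that could be scripted on Linux hosts; the subnet, interface name, and durations are placeholder assumptions.

```python
# Hedged reproduction sketch, not from the original ticket.
# Assumptions: Linux hosts with root access, iptables and tc (netem)
# available; the subnet and interface below are placeholders.
import subprocess
import time

DC_A_SUBNET = "10.0.1.0/24"   # placeholder for the other datacenter's network
IFACE = "eth0"                # placeholder for the cross-datacenter interface

def run(cmd):
    """Run a command, echoing it first, and fail loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def cut_link():
    """Run on the datacenter-B nodes: drop all traffic to/from datacenter A."""
    run(["iptables", "-A", "OUTPUT", "-d", DC_A_SUBNET, "-j", "DROP"])
    run(["iptables", "-A", "INPUT", "-s", DC_A_SUBNET, "-j", "DROP"])

def add_delay():
    """Run on the non-arbiter datacenter-A machines (especially the primary):
    add latency so that, once the link comes back, the arbiter answers the
    datacenter-B nodes some milliseconds before the data-bearing members do."""
    run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem", "delay", "500ms"])

def restore_link():
    """Run on the datacenter-B nodes: remove the drop rules again."""
    run(["iptables", "-D", "OUTPUT", "-d", DC_A_SUBNET, "-j", "DROP"])
    run(["iptables", "-D", "INPUT", "-s", DC_A_SUBNET, "-j", "DROP"])

if __name__ == "__main__":
    cut_link()        # 1. partition the two datacenters
    time.sleep(600)   # 2. leave the partition in place for a while
    # 3. run add_delay() on the datacenter-A data-bearing nodes, then:
    restore_link()    # 4. re-establish the link and observe the election
```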
| Description |
|
Hi guys,

We recently had some serious trouble after a failure caused by network handling in the mongo replication process. The configuration is the following: 3 replica sets, each made of 5 servers; two bare-metal servers plus 1 arbiter in one datacenter, and two other bare-metal servers in another datacenter.

At some point we had a connectivity problem between the datacenters for a couple of hours. We expected everything to get back to normal afterwards, with a sync of the oplog. Unfortunately, one of the three replica sets failed: the two disconnected servers managed to get a majority, and a big rollback started.

In datacenter A, all three nodes were running smoothly. In datacenter B, the two nodes were able to see each other but lost the connection to the primary. One of those nodes then tried to call an election, which it could not win because it had no majority. Unfortunately, when the connection was re-established, that node first saw the arbiter and only some milliseconds later the other members. Where it should have reset the replication timeout to 10s at the moment a majority became visible, it instead called an election within those few milliseconds, without waiting to connect to the other members. This node got 2 votes and was then able to become primary, even though it was several hours behind on data -> big rollback.

I see two possible fixes (maybe both good):
Here are the logs from this node:
|
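For context on the timeouts and member states the description refers to, here is a minimal sketch, assuming pymongo and placeholder host names, of how one might inspect a single node's view of the topology and the relevant timeout settings; it is illustrative only and not taken from the ticket.

```python
# Hedged inspection sketch; hostname and port are placeholders.
from pymongo import MongoClient

# Connect directly to one member to see the topology from its point of view
# (directConnection requires a reasonably recent pymongo).
client = MongoClient("node-b1.example.net", 27017, directConnection=True)

# replSetGetStatus shows each member's state (PRIMARY/SECONDARY/ARBITER/...)
# and whether this node can currently reach it.
status = client.admin.command("replSetGetStatus")
for m in status["members"]:
    print(m["name"], m["stateStr"], "health:", m.get("health"))

# replSetGetConfig shows the timeouts that govern failover:
#   settings.heartbeatTimeoutSecs   (protocol version 0)
#   settings.electionTimeoutMillis  (protocol version 1)
config = client.admin.command("replSetGetConfig")["config"]
print("protocolVersion:", config.get("protocolVersion"))
print("settings:", config.get("settings"))
```

On MongoDB 3.2, heartbeatTimeoutSecs applies under protocol version 0 and electionTimeoutMillis under protocol version 1; both default to 10 seconds, which matches the 10s mentioned in the description.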
| Comments |
| Comment by Spencer Brody (Inactive) [ 07/Apr/17 ] |
|
MRigal, FYI we wound up implementing the change to arbiter behavior you describe in

Cheers, |
| Comment by Kelsey Schubert [ 23/Sep/16 ] |
|
Hi MRigal,

Thank you for reporting this behavior. Unfortunately, to ensure elections readily occur in the case of failover, the fixes you suggest cannot be implemented. When writes are not being replicated to a majority of data-bearing nodes, there is a risk that rollbacks may occur.

To avoid rollbacks in this specific situation, there are a number of configuration changes that you may want to consider, including changing your write concern or protocol version. Looking forward, my recommendation to resolve this issue would be to replace your arbiter with a mongod. This would ensure that, in the case of a network partition between your data centers, writes would continue to be replicated to a majority of data-bearing nodes.

There is an open ticket, SERVER-14539, which would add an oplog to the arbiter. This new feature would provide an additional solution to this issue. I recommend reading through the description on that ticket for a more complete explanation of the feature and its impact.

For MongoDB-related support discussion, please post on the mongodb-users group or Stack Overflow with the mongodb tag. A question like this, involving more discussion, would be best posted on the mongodb-users group.

Kind regards, |
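To make the write-concern suggestion in the comment above concrete, here is a minimal sketch, assuming pymongo and placeholder host, database, and collection names (none of which come from the ticket). A write acknowledged with w="majority" has reached enough members that it cannot be rolled back after a failover, so the client at least knows which writes were safely committed.

```python
# Hedged sketch of a "majority" write concern; names are placeholders.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://node-a1.example.net:27017/?replicaSet=rs0")

# Writes through this collection handle are only acknowledged once they have
# propagated to a majority of the replica set, so acknowledged writes cannot
# be rolled back when a different node later wins an election.
coll = client["appdb"].get_collection(
    "events",
    write_concern=WriteConcern(w="majority", wtimeout=10000),
)

coll.insert_one({"msg": "acknowledged only after majority replication"})
```

The protocol-version suggestion, by contrast, is a replica set configuration change (the protocolVersion field, applied via replSetReconfig) rather than a driver-side setting.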
| Comment by Matthieu Rigal [ 21/Sep/16 ] |
|
Please note that the network timeout reset problem might be fixed at a more global level, as the same mechanism also led to the following problem: https://jira.mongodb.org/browse/SERVER-26215 |