[JAVA-4535] Driver should reconnect to replica set restarted as 'fresh' Created: 16/Mar/22  Updated: 27/Oct/23  Resolved: 13/Apr/22

Status: Closed
Project: Java Driver
Component/s: Cluster Management
Affects Version/s: 4.4.2
Fix Version/s: None

Type: Improvement Priority: Unknown
Reporter: Tymoteusz Machowski Assignee: Jeffrey Yemin
Resolution: Gone away Votes: 0
Labels: external-user
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to JAVA-4375 SDAM should give priority to election... Closed

 Description   

Summary

The driver is unable to reconnect to the primary after the replica set is restarted as 'fresh'. The driver stores maxSetVersion and maxElectionId in memory, but when the replica set is shut down and restarted from scratch (with new data directories), elections also start from scratch, so the new values are no longer comparable to the ones cached in the driver.

Driver version: 4.4.2

Mongo version: 5.0.3

Topology: replica set, 3 members, 1 primary, 2 secondaries
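
This is not the driver's actual code, just a minimal sketch of the (setVersion, electionId) comparison described in the summary above and visible in the log lines further down; class and method names are illustrative:

```java
import org.bson.types.ObjectId;

class StalePrimaryCheckSketch {
    private Integer maxSetVersion;   // highest setVersion seen so far
    private ObjectId maxElectionId;  // highest electionId seen so far

    /** Returns true if a hello response from a claimed primary should be ignored as stale. */
    boolean isStale(Integer setVersion, ObjectId electionId) {
        if (maxSetVersion == null || maxElectionId == null) {
            return false; // nothing cached yet, accept the primary
        }
        if (setVersion == null || electionId == null) {
            return true;
        }
        // Compare the (setVersion, electionId) tuple lexicographically, matching the
        // "tuple ... is less than one already seen" wording in the log below.
        int cmp = setVersion.compareTo(maxSetVersion);
        if (cmp != 0) {
            return cmp < 0;
        }
        return electionId.compareTo(maxElectionId) < 0;
    }

    /** Remembers the highest tuple seen; a freshly initiated replica set never gets past isStale(). */
    void remember(Integer setVersion, ObjectId electionId) {
        if (!isStale(setVersion, electionId)) {
            maxSetVersion = setVersion;
            maxElectionId = electionId;
        }
    }
}
```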

How to Reproduce

1. Set up a Java client connected to a mongo replica set (see the client sketch after this list).

2. Trigger a few elections so that the cached maxElectionId increases.

3. Shut down the replica set and wipe out all the data.

4. Start up the replica set again.

5. The Java app will not be able to reconnect to the primary and perform writes (or reads, if readPreference is primary).
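
A minimal sketch of step 1 (the replica set name, database names, and host names are illustrative; the hosts match the ones in the logs below). The cluster listener just prints the driver's view of the topology, which makes the invalidation message easier to correlate:

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.event.ClusterDescriptionChangedEvent;
import com.mongodb.event.ClusterListener;

public class ReproClient {
    public static void main(String[] args) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(
                        "mongodb://mongo1.host:27017,mongo2.host:27017,mongo3.host:27017/?replicaSet=rs0"))
                .applyToClusterSettings(builder -> builder.addClusterListener(new ClusterListener() {
                    @Override
                    public void clusterDescriptionChanged(ClusterDescriptionChangedEvent event) {
                        // Prints the driver's view of the topology after every change,
                        // e.g. when the restarted primary is invalidated as stale.
                        System.out.println(event.getNewDescription().getShortDescription());
                    }
                }))
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            // ... trigger elections (step 2), then wipe and restart the replica set (steps 3-4) ...
        }
    }
}
```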

Additional Background

In my setup, I have a Java application connected to a mongo replica set with 3 members (primary, secondary, secondary). I want to test a disaster-recovery scenario, so I shut down the replica set and wipe out all the data. Then I start up the replica set from scratch and restore the data from backups. After that, the still-running Java app is unable to reconnect to the primary to perform write operations.

The exceptions that are thrown look like this:

com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongo1.host:27017, type=UNKNOWN, state=CONNECTING}, {address=mongo2.host:27017, type=REPLICA_SET_SECONDARY, roundTripTime=0.9 ms, state=CONNECTED}, {address=mongo3.host:27017, type=REPLICA_SET_SECONDARY, roundTripTime=1.5 ms, state=CONNECTED}]
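
For reference, a write as simple as the following is enough to surface that timeout once no writable server can be selected (a sketch; the database and collection names are illustrative):

```java
import com.mongodb.MongoTimeoutException;
import com.mongodb.client.MongoClient;
import org.bson.Document;

class WriteProbe {
    /** Attempts a single write; after the fresh restart this fails with MongoTimeoutException. */
    static void tryWrite(MongoClient client) {
        try {
            client.getDatabase("test")
                  .getCollection("events")
                  .insertOne(new Document("ts", System.currentTimeMillis()));
        } catch (MongoTimeoutException e) {
            // Thrown after serverSelectionTimeoutMS (30000 ms by default) because the driver
            // still considers the restarted primary stale and only sees the two secondaries.
            System.err.println(e.getMessage());
        }
    }
}
```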

But the underlying issue is this:

org.mongodb.driver.cluster: Invalidating potential primary mongo1.host:27017 whose (set version, election id) tuple of (5, 7fffffff0000000000000002) is less than one already seen of (13, 7fffffff0000000000000013)

So it seems that the driver is unable to connect to the 'new' primary, because it claims that it has already seen a primary with a higher electionId, even though in the meantime the whole replica set was restarted and elections started from scratch.



 Comments   
Comment by PM Bot [ 13/Apr/22 ]

There hasn't been any recent activity on this ticket, so we're resolving it. Thanks for reaching out! Please feel free to comment on this if you're able to provide more information.

Comment by Jeffrey Yemin [ 29/Mar/22 ]

tymoteuszm@qualtrics.com

We've discussed internally and determined that this can't really be addressed solely on the client side. The setVersion and electionId checks serve an important function in preventing drivers from using stale primaries in split-brain network scenarios, so we don't want to remove those checks.

Anything else we could do to address this would appear to require additional functionality in the MongoDB server, but it's unlikely to be prioritized unless we get signal from more users that it's an important use case for them.

As a workaround you can do the same as what Atlas itself does when it restores from a backup: restore the local.system.replset collection so that the electionId/setVersion values are not seen as stale by clients.
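
This is not a supported restore procedure, just a sketch of how one might inspect (or dump) the config document that the workaround refers to, using the driver, before tearing the set down. The host name and the use of directConnection are assumptions, and the actual restore would typically be done with server-side tooling such as mongorestore:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

class ReplSetConfigDump {
    public static void main(String[] args) {
        // Connect directly to a single member so the local database is readable.
        try (MongoClient client = MongoClients.create(
                "mongodb://mongo1.host:27017/?directConnection=true")) {
            Document config = client.getDatabase("local")
                    .getCollection("system.replset")
                    .find()
                    .first();
            // The "version" field of this document is the setVersion that drivers compare;
            // preserving the document across the rebuild keeps it from resetting.
            System.out.println(config == null ? "no replica set config found" : config.toJson());
        }
    }
}
```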

Comment by Tymoteusz Machowski [ 29/Mar/22 ]

This is not (at least not only) related to Atlas. I was describing a case with self-hosted MongoDB, and a DNS/service registry that points clients at a different (newly created) cluster under the same name. But I can imagine it could happen with Atlas too, with restoring a backup in place as you mentioned.

Comment by Andrew Davidson [ 25/Mar/22 ]

tymoteuszm@qualtrics.com does this happen in Atlas?

I would imagine the closest parallel in Atlas would be when you restore a backup in place to an existing cluster? (Because when you do a restore to a distinct cluster you have to change your connection string anyway – does that sound right?) cc benjamin.cefalo

Comment by Tymoteusz Machowski [ 25/Mar/22 ]

Thanks for the comments.

The entire replacement of a replica set could happen e.g. in a disaster-recovery scenario, where the machines or disks hosting mongo fail completely and the replica set has to be set up from scratch on different machines, with data recovered from backups (but not including the replica set state or configuration, which is stored e.g. statically in a separate configuration system).

It would be nice if client applications using the driver didn't have to be restarted in such a case, but instead automatically reconnected to the "new" replica set.

Comment by Jeffrey Yemin [ 24/Mar/22 ]

tymoteuszm@qualtrics.com thanks for bringing this to our attention. As you correctly deduced, this is the expected behavior given the algorithm that the driver implements to prevent sending writes to stale primaries.

Currently, we are not aware of any production use cases/scenarios where it's necessary to completely replace a replica set as you described. I can see how this could happen in a dev environment, though I've never encountered it in my own development.

If you could describe your use case (for replacing a replica set entirely) in more detail, that would help us to decide whether it's something that we think should be addressed.

Comment by Lamont Nelson [ 24/Mar/22 ]

If I understand the scenario, we are resetting the (electionId, setVersion) tuple to an initial value. Any RSM instances monitoring this replica set would see a response from the primary as stale and ignore it, causing unavailability until the node is restarted or until the tuple increases past the cached version.

I think the question of why the election id doesn't fill in the timestamp portion is probably best for the replication team, but I'll take a stab at interpreting the code. I think the value originates from here, so it is derived from the raft term and a timestamp. However, here we have something interesting: the comments state "Set max timestamp because the drivers compare ElectionId's to determine valid new primaries, and we want ElectionId's with terms to supercede ones without terms." This is new information for me personally. So we know why the high bits are the max value, but as for why we coded it this way, I am not sure.
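
A small sketch interpreting the electionId values quoted earlier in this ticket, based on the description above: the timestamp portion (first 4 bytes) is forced to the max value 0x7fffffff, and the remaining 8 bytes carry the term, which starts over after the fresh restart:

```java
class ElectionIdDecoder {
    public static void main(String[] args) {
        String electionId = "7fffffff0000000000000013"; // the "already seen" value from the log above
        long highBits = Long.parseLong(electionId.substring(0, 8), 16);          // 0x7fffffff (max timestamp)
        long term = Long.parseUnsignedLong(electionId.substring(8), 16);          // election term
        System.out.printf("high bits = 0x%x, term = %d%n", highBits, term);       // term = 19 here
        // After the wipe-and-restart, the new primary reports 7fffffff0000000000000002 (term 2),
        // which is why its tuple compares as "less than one already seen".
    }
}
```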
