[JAVA-4535] Driver should reconnect to replica set restarted as 'fresh' Created: 16/Mar/22 Updated: 27/Oct/23 Resolved: 13/Apr/22 |
|
| Status: | Closed |
| Project: | Java Driver |
| Component/s: | Cluster Management |
| Affects Version/s: | 4.4.2 |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Unknown |
| Reporter: | Tymoteusz Machowski | Assignee: | Jeffrey Yemin |
| Resolution: | Gone away | Votes: | 0 |
| Labels: | external-user | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Description |
| Comments |
| Comment by PM Bot [ 13/Apr/22 ] |
|
There hasn't been any recent activity on this ticket, so we're resolving it. Thanks for reaching out! Please feel free to comment on this if you're able to provide more information. |
| Comment by Jeffrey Yemin [ 29/Mar/22 ] |
|
We've discussed internally and determined that this can't really be addressed solely on the client side. The setVersion and electionId checks serve an important function in preventing drivers from using stale primaries in split-brain network scenarios, so we don't want to remove those checks. Anything else we could do to address this would appear to require additional functionality in the MongoDB server, but that is unlikely to be prioritized unless we get a signal from more users that it's an important use case for them. As a workaround, you can do the same thing Atlas itself does when a cluster is restored from a backup: restore the local.system.replset collection so that the electionId/setVersion values are not seen as stale by clients. |
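As a rough illustration of the workaround above (not part of the ticket; the connection string and class name are placeholders), the Java driver can read the replica set configuration document from a member's local database so it can be saved alongside a backup and later restored (for example with mongorestore) into the rebuilt deployment:

```java
// Illustrative sketch only: reads the replica set configuration document
// (whose "version" field is the setVersion that drivers compare) so it can
// be stored with a backup. The connection string is a placeholder.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class DumpReplSetConfig {
    public static void main(String[] args) {
        try (MongoClient client =
                MongoClients.create("mongodb://localhost:27017/?directConnection=true")) {
            Document replSetConfig = client.getDatabase("local")
                    .getCollection("system.replset")
                    .find()
                    .first();
            // Restoring this document into the rebuilt deployment keeps
            // setVersion monotonic, so clients do not treat the new primary
            // as stale.
            System.out.println(replSetConfig == null
                    ? "no replica set config found"
                    : replSetConfig.toJson());
        }
    }
}
```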
| Comment by Tymoteusz Machowski [ 29/Mar/22 ] |
|
This is not (at least not only) related to Atlas. I was describing a case with a self-hosted MongoDB deployment, where a DNS/service registry directs connections to a different (newly created) cluster under the same name. But I can imagine it could happen with Atlas too, with an in-place restore of a backup as you mentioned. |
| Comment by Andrew Davidson [ 25/Mar/22 ] |
|
tymoteuszm@qualtrics.com does this happen in Atlas? I would imagine the closest parallel in Atlas would be when you restore a backup in place to an existing cluster (because when you restore to a distinct cluster you have to change your connection string anyway) – does that sound right? cc benjamin.cefalo |
| Comment by Tymoteusz Machowski [ 25/Mar/22 ] |
|
Thanks for the comments. The entire replacement of a replica set could happen e.g. in a disaster recovery scenario, where the machines or disks hosting MongoDB fail completely and the replica set has to be set up from scratch on different machines, with data recovered from backups (but not including replica set state or configuration, which is stored e.g. statically in a separate configuration system). It would be nice if client applications using the driver didn't have to be restarted in such a case, but instead automatically reconnected to the "new" replica set. |
| Comment by Jeffrey Yemin [ 24/Mar/22 ] |
|
tymoteuszm@qualtrics.com thanks for bringing this to our attention. As you correctly deduced, this is the expected behavior given the algorithm that the driver implements to prevent sending writes to stale primaries. Currently, we are not aware of any production use cases/scenarios where it's necessary to completely replace a replica set as you described. I can see how this could happen in a dev environment, though I've never encountered it in my own development. If you could describe your use case (for replacing a replica set entirely) in more detail, that would help us to decide whether it's something that we think should be addressed. |
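For context, the behavior described here comes from the driver's server monitoring, which remembers the greatest (electionId, setVersion) pair it has observed and ignores a primary that reports a smaller pair. The following is a simplified, hypothetical sketch of that kind of check, not the driver's actual implementation (names and field ordering are assumptions for illustration):

```java
// Simplified illustration of stale-primary detection: the monitor caches the
// greatest (electionId, setVersion) it has seen; a replica set rebuilt from
// scratch reports values that compare smaller and is therefore ignored.
import org.bson.types.ObjectId;

final class StalePrimaryCheck {
    private ObjectId maxElectionId;  // greatest electionId observed so far
    private Integer maxSetVersion;   // greatest setVersion observed so far

    /** Returns true if a primary reporting these values should be ignored as stale. */
    boolean isStale(ObjectId electionId, Integer setVersion) {
        if (maxElectionId != null && electionId != null
                && electionId.compareTo(maxElectionId) < 0) {
            return true;  // older election than anything seen before
        }
        if (maxElectionId != null && maxElectionId.equals(electionId)
                && maxSetVersion != null && setVersion != null
                && setVersion < maxSetVersion) {
            return true;  // same election, but an older config version
        }
        maxElectionId = electionId;
        maxSetVersion = setVersion;
        return false;
    }
}
```

In the scenario from this ticket, the rebuilt replica set starts over from small values, so every reported primary compares as stale against the cached maximum, which matches the unavailability described in the comments.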
| Comment by Lamont Nelson [ 24/Mar/22 ] |
|
If I understand the scenario, we are resetting the (electionId, setVersion) tuple to an initial value. Any RSM (replica set monitor) instances that are monitoring this replica set would see a response from the primary as stale and ignore it, causing unavailability until the node is restarted or until the tuple increases past the cached version. I think the question of why the electionId doesn't fill in the timestamp portion is probably best for the replication team, but I'll take a stab at interpreting the code. I think the value originates from here, so it is derived from the raft term and a timestamp. However, here we have something interesting. The comment states: "Set max timestamp because the drivers compare ElectionId's to determine valid new primaries, and we want ElectionId's with terms to supercede ones without terms." This is new information for me personally. So we know why the high bits are the max value, but as for why it was coded this way, I am not sure. |
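As a small, assumption-laden illustration of the quoted server comment (the hex values below are made up for the example): ObjectId comparison in the Java driver is byte-wise from the leading (timestamp) bytes down, so an electionId whose timestamp portion is forced to the maximum value always sorts after one whose timestamp portion is a wall-clock time.

```java
// Illustrative only: shows why term-based electionIds (timestamp bits forced
// to the maximum value) compare greater than older, timestamp-based
// electionIds. ObjectId comparison is byte-wise, so the leading bytes win.
import org.bson.types.ObjectId;

public class ElectionIdOrdering {
    public static void main(String[] args) {
        // Hypothetical pre-term electionId: leading bytes are a March 2022 timestamp.
        ObjectId withoutTerm = new ObjectId("622f1d2a000000000000002a");
        // Hypothetical term-based electionId: leading bytes are 0x7fffffff,
        // trailing bytes carry the raft term.
        ObjectId withTerm = new ObjectId("7fffffff0000000000000003");
        System.out.println(withTerm.compareTo(withoutTerm) > 0);  // prints: true
    }
}
```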