[SERVER-55193] Support back-to-back migration of the same tenant Created: 15/Mar/21 Updated: 29/Oct/23 Resolved: 12/Apr/21
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 4.9.0-rc1, 5.0.0-rc0 |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Lingzhi Deng | Assignee: | Jason Chan |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | pm-1791_non-cloud-blocking, pm-1791_optimizations |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Backwards Compatibility: | Fully Compatible |
| Backport Requested: | v4.9 |
| Sprint: | Repl 2021-04-05, Repl 2021-04-19 |
| Participants: | |
| Description |
It looks like there is a case where we need both a donor and a recipient TenantMigrationAccessBlocker on the same node. This happens when a migration A -> B completes successfully and then, immediately (before the garbage collection period has elapsed), a migration B -> C begins for the same tenant. Today the migration B -> C fails with a conflicting-migration error until the garbage collection period has passed. We can handle this by allowing one node to hold a donor and a recipient access blocker for the same tenant at the same time (a sketch of this idea follows the description).
But before we do this, we should confirm with Cloud whether this is something Cloud is expected to do.
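To make the direction implied above concrete, here is a minimal C++ sketch of a registry that keeps one donor slot and one recipient slot per tenant, so a node can simultaneously be the recipient of A -> B and the donor of B -> C. Every name in it (TenantAccessBlockerRegistry, AccessBlocker, BlockerType) is hypothetical and simplified; this is not the server's actual TenantMigrationAccessBlocker API.

```cpp
#include <map>
#include <optional>
#include <stdexcept>
#include <string>
#include <utility>

enum class BlockerType { kDonor, kRecipient };

struct AccessBlocker {
    BlockerType type;
    std::string migrationId;
};

class TenantAccessBlockerRegistry {
public:
    // Registering a donor blocker no longer conflicts with an existing
    // recipient blocker for the same tenant; only two blockers of the
    // same kind conflict.
    void add(const std::string& tenantId, AccessBlocker blocker) {
        auto& pair = _blockers[tenantId];
        auto& slot =
            (blocker.type == BlockerType::kDonor) ? pair.donor : pair.recipient;
        if (slot)
            throw std::runtime_error("ConflictingOperationInProgress");
        slot = std::move(blocker);
    }

    // Called once the corresponding migration state document is garbage
    // collected; drops the tenant entry when both slots are empty.
    void remove(const std::string& tenantId, BlockerType type) {
        auto it = _blockers.find(tenantId);
        if (it == _blockers.end())
            return;
        auto& slot =
            (type == BlockerType::kDonor) ? it->second.donor : it->second.recipient;
        slot.reset();
        if (!it->second.donor && !it->second.recipient)
            _blockers.erase(it);
    }

private:
    // One slot per role, so a node can be the recipient of A -> B and the
    // donor of B -> C for the same tenant simultaneously.
    struct DonorRecipientPair {
        std::optional<AccessBlocker> donor;
        std::optional<AccessBlocker> recipient;
    };
    std::map<std::string, DonorRecipientPair> _blockers;
};
```

Under this shape, node B keeps its recipient blocker from A -> B until garbage collection while also registering a donor blocker for B -> C, instead of failing with a conflicting-migration error.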
| Comments |
| Comment by Githook User [ 12/Apr/21 ] |
Author: Jason Chan <jason.chan@mongodb.com> (jasonjhchan)
Message: (cherry picked from commit 7c4fdf48f8882818e778c3c2931b0e24aa99711d)
| Comment by Githook User [ 12/Apr/21 ] |
Author: Jason Chan <jason.chan@mongodb.com> (jasonjhchan)
Message:
| Comment by Jason Chan [ 05/Apr/21 ] |
As described, this ticket is meant to handle the back-to-back migration case of A -> B -> C, where A, B, and C are all separate replica sets. We do not expect the back-to-back immediate migration case of A -> B (migration 1) followed by B -> A (migration 2), because the migration protocol guarantees that Cloud will wait out the "grace period" to allow clients to finish exhausting their cursors on replica set A (from migration 1) before cleaning up the orphaned data. Implementation-wise, this means we can guarantee the following:
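To spell out the allowed and rejected cases this comment implies, the short sketch below encodes the donor-side conflict check: a lingering recipient blocker (migration 1's state not yet garbage collected) does not stop the node from starting a new donor migration, while a second concurrent donor migration for the same tenant is still rejected. The struct and function names are illustrative only, not server APIs.

```cpp
#include <cassert>

// Hypothetical per-tenant blocker state on one replica set (illustrative only).
struct TenantBlockerState {
    bool hasDonorBlocker = false;      // node is currently donating the tenant
    bool hasRecipientBlocker = false;  // node recently received the tenant
};

// A new donor migration (e.g. B -> C) may start even though the recipient
// blocker from A -> B has not been garbage collected yet; only an active
// donor blocker for the same tenant conflicts.
bool canStartDonorMigration(const TenantBlockerState& state) {
    return !state.hasDonorBlocker;
}

int main() {
    // B just finished receiving the tenant from A and immediately starts
    // donating it to C: allowed.
    assert(canStartDonorMigration({false /*donor*/, true /*recipient*/}));

    // A second donor migration for the same tenant while one is in flight:
    // rejected with a conflicting-migration error.
    assert(!canStartDonorMigration({true /*donor*/, false /*recipient*/}));
    return 0;
}
```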