[SERVER-29792] Write multiversion tests for causal consistency Created: 22/Jun/17 Updated: 30/Oct/23 Resolved: 10/Oct/17 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | None |
| Fix Version/s: | 3.6.0-rc0 |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Jack Mulrow | Assignee: | Misha Tyulenev |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | PM-221 | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Sharding 2017-10-02, Sharding 2017-10-23 |
| Participants: |
| Description |
|
Verify the following mixed-version scenarios (from the design doc):

1. v3.6 primary and v3.4 secondary replica set node. The primary will send out the clusterTime, but the old secondaries will ignore it. This is fine; it just means the old secondaries won't be able to help propagate the clock. The drivers won't be able to use afterClusterTime on the v3.4 secondaries because it will result in a parse error.

2. v3.4 primary and v3.6 secondary replica set node. A v3.6 mongos with a v3.4 shard is not a valid configuration, so the case where a v3.6 mongos learns about a new LogicalTime from a different v3.6 shard and tries to talk to a replica set shard with a v3.4 primary can be discounted (this can end up blocking until the no-op writer writes an oplog entry). In a valid configuration where the replica set primary is v3.4, the replica set will never advance LogicalTime_MEM. This is okay, since LogicalTime_MEM advancement only matters in a replica set when it is part of a sharded cluster and the client wants to do causal reads (which it can't, since it still has yet to upgrade the mongos).

3. v3.6 mongod and v3.4 mongos. Clients will get parse errors when they try to use afterClusterTime. |
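The gossip behavior in scenario 1 can be illustrated with a minimal sketch. This is a simplified model for the test-planning discussion above, not the server implementation: the class names, the integer time, and the send/receive methods are all invented for illustration. A v3.6 node advances its in-memory logical time to the max of its own value and any gossiped time it receives, while a v3.4 node silently drops the field and gossips nothing, so the clock cannot propagate through it.

```python
# Toy model (assumption: names and shapes are illustrative only) of cluster
# time gossip in a mixed-version replica set.

class NodeV36:
    """v3.6-style node: tracks cluster time and gossips it onward."""
    def __init__(self):
        self.cluster_time = 0  # stands in for LogicalTime_MEM

    def receive(self, gossiped_time):
        # Advance to the greatest cluster time seen so far.
        if gossiped_time is not None:
            self.cluster_time = max(self.cluster_time, gossiped_time)

    def send(self):
        return self.cluster_time


class NodeV34:
    """v3.4-style node: unaware of cluster time; ignores it entirely."""
    def receive(self, gossiped_time):
        pass  # the field is simply dropped

    def send(self):
        return None  # nothing to gossip


primary = NodeV36()
primary.receive(5)  # primary learns cluster time 5 from a client

old_secondary = NodeV34()
old_secondary.receive(primary.send())  # v3.4 secondary ignores it

new_secondary = NodeV36()
new_secondary.receive(old_secondary.send())  # nothing arrives via the v3.4 node
assert new_secondary.cluster_time == 0
new_secondary.receive(primary.send())        # direct gossip still works
```

As the sketch shows, the v3.4 node is a dead end for clock propagation but otherwise harmless, which matches the "this is fine" conclusion for scenario 1.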
| Comments |
| Comment by Githook User [ 10/Oct/17 ] |
|
Author: {'email': 'misha@mongodb.com', 'name': 'Misha Tyulenev', 'username': 'mikety'}
Message: |