The Convert a Replica Set to a Sharded Cluster flow has users take ordinary replica set members out of rotation and start them up again with --shardsvr. The user's application remains directly connected to the replica set during this step. A driver would have previously received signed $clusterTimes from the ordinary replica set members and will therefore attempt to gossip them back to the members after they've been started up again with --shardsvr. As noted in SERVER-60466, the driver sending these signed $clusterTimes will result in application errors during the conversion process.
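A minimal sketch of why this fails, assuming the usual signed-$clusterTime scheme: the driver gossips back a time signed with a key the restarted `--shardsvr` member no longer recognizes, so the member cannot verify it. All names here (`sign_cluster_time`, `verify_cluster_time`, the key values) are illustrative, not actual server or driver internals.

```python
# Hypothetical model (NOT real server code) of signed $clusterTime gossip
# and why a freshly restarted --shardsvr member rejects it.
import hashlib
import hmac


def sign_cluster_time(cluster_time: int, key_id: int, key: bytes) -> dict:
    """Model of the signed $clusterTime a driver gossips back on each command."""
    sig = hmac.new(key, str(cluster_time).encode(), hashlib.sha1).digest()
    return {"clusterTime": cluster_time, "signature": {"hash": sig, "keyId": key_id}}


class CannotVerifyAndSignLogicalTime(Exception):
    pass


def verify_cluster_time(signed: dict, known_keys: dict) -> None:
    """A member can only accept a gossiped time if it holds the signing key."""
    key = known_keys.get(signed["signature"]["keyId"])
    if key is None:
        raise CannotVerifyAndSignLogicalTime(
            f"no key found for keyId {signed['signature']['keyId']}")
    expected = hmac.new(key, str(signed["clusterTime"]).encode(),
                        hashlib.sha1).digest()
    if not hmac.compare_digest(expected, signed["signature"]["hash"]):
        raise CannotVerifyAndSignLogicalTime("signature mismatch")


# The driver cached a time signed under the ordinary replica set's key ...
gossiped = sign_cluster_time(100, key_id=1, key=b"old-replset-key")

# ... but the restarted --shardsvr member no longer trusts that key, so
# every command carrying the gossiped time errors until addShard is run.
try:
    verify_cluster_time(gossiped, known_keys={2: b"config-server-key"})
except CannotVerifyAndSignLogicalTime as exc:
    print("command failed:", exc)
```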
- From the time the replica set members are restarted with --shardsvr until the operator runs the addShard command, an application that had been connected to the replica set beforehand will continuously error with CannotVerifyAndSignLogicalTime.
- Restarting the application after all of the replica set members have been restarted with --shardsvr resets the driver's notion of the signed $clusterTime and restores availability.
- It isn't possible to avoid downtime entirely: the application will either error with CannotVerifyAndSignLogicalTime or be unavailable while the application servers restart.
- We should probably reorder the steps so the Deploy Config Server Replica Set and mongos section happens before the replica set members are restarted with --shardsvr. That way the addShard command can be run more promptly.
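The restart workaround above can be sketched as follows. This is a hypothetical model of the driver-side $clusterTime cache (class and method names are illustrative, not pymongo internals): the cache only ever advances while a process is alive, so a stale signed time keeps getting gossiped until the application process is restarted with an empty cache.

```python
# Hypothetical model of the driver-side signed $clusterTime cache.
class DriverClusterTimeCache:
    """Holds the highest signed $clusterTime this process has seen."""

    def __init__(self):
        # A brand-new process (i.e. after an application restart) has
        # nothing cached, so commands carry no $clusterTime at all.
        self.signed_cluster_time = None

    def advance(self, signed: dict) -> None:
        """Keep the greater of the cached and newly received times."""
        if (self.signed_cluster_time is None
                or signed["clusterTime"] > self.signed_cluster_time["clusterTime"]):
            self.signed_cluster_time = signed

    def attach_to_command(self, command: dict) -> dict:
        """Gossip the cached signed time back on every outgoing command."""
        if self.signed_cluster_time is not None:
            command["$clusterTime"] = self.signed_cluster_time
        return command


# While connected to the ordinary replica set, the driver caches a time
# signed with the replica set's key (keyId 1 here is illustrative):
cache = DriverClusterTimeCache()
cache.advance({"clusterTime": 100, "signature": {"keyId": 1}})

# Every later command gossips that stale signed time back, which the
# restarted --shardsvr members reject:
print(cache.attach_to_command({"ping": 1}))

# A restarted application process starts with an empty cache, so its
# commands omit $clusterTime and are accepted again:
fresh = DriverClusterTimeCache()
print(fresh.attach_to_command({"ping": 1}))
```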