In the ReplicationStateTransitionLockGuard destructor, there is an invariant that checks the lock result is not LOCK_WAITING before unlocking the RSTL. But for the valid event sequence below, the destructor can run with _result still set to LOCK_WAITING when it goes to unlock the RSTL.
1) Thread A issues a stepdown command (triggered either by a heartbeat or by the user).
2) Thread B issues a conditional stepdown triggered by the user.
Thread A marks thread B as killed. (EDIT: This step cannot actually happen. One stepdown thread cannot mark another stepdown thread as killed, because during stepdown we currently kill only user operations that have taken the global lock in X, IX, or IS mode, while a stepdown thread only takes the RSTL in X mode, not the global lock.)
3) Thread A acquires the RSTL in X mode.
4) Thread B enqueues its RSTL lock request, setting _result to LOCK_WAITING.
5) Thread B calls ReplicationStateTransitionLockGuard::waitForLockUntil with a non-zero timeout.
6) Thread B's wait for the RSTL is interrupted, so the ReplicationStateTransitionLockGuard destructor runs with _result still set to LOCK_WAITING, and the invariant fails. (EDIT: Thread B can also hit this path by timing out while waiting for the lock.)
Note: There is no need to worry that the RSTL lock state won't be cleaned up, because the unlockOnErrorGuard in LockerImpl::lockComplete cleans up the state in the lock manager and in the locker on any failed lock attempt. Effectively, by the time we hit the ReplicationStateTransitionLockGuard destructor, there is nothing left to clean up in the scenario above.