- Type: Bug
- Resolution: Fixed
- Priority: Major - P3
- Affects Version/s: 4.2.0
- Component/s: None
- Labels: None
- Backwards Compatibility: Fully Compatible
- Operating System: ALL
- Backport Version: v4.2
- Sprint: Service Arch 2019-08-26, Service Arch 2019-09-09, Service Arch 2019-09-23, Service Arch 2019-10-21, Service Arch 2019-11-04, Service Arch 2019-11-18, Service Arch 2019-12-02, Service Arch 2019-12-16, Service Arch 2019-12-30, Service Arch 2020-01-13
- Participants: 17
Each ConnectionPool::ControllerInterface implementation has its own state, and that state is only brought in line with each SpecificPool during updateHost(). This can lead to the following race between two related pools, one to HostA and one to HostB (a sketch of the interleaving follows the list):
- The pool to HostA is initially expired. It has no outstanding call to updateHost(); that call has already run, so the controller knows HostA is expired.
- The pool to HostB now expires and schedules a call to updateHost(). Both hosts are now expired, so both pools may shut down.
- The pool to HostA is no longer expired and schedules a call to updateHost(). Shutdown should no longer be possible, but the controller cannot know that until updateHost() runs.
- The updateHost() for HostB runs first. It updates the controller's view of HostB, and the controller now believes both pools can shut down. That conclusion is wrong, but HostA's corrective updateHost() is still queued behind HostB's.
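Below is a minimal, self-contained model of that interleaving. The Controller struct and the std::deque task queue are hypothetical stand-ins for ConnectionPool::ControllerInterface and the serialized executor; only the names taken from the description above are real.

```cpp
// race_sketch.cpp -- hypothetical, simplified model of the race.
#include <deque>
#include <functional>
#include <iostream>
#include <map>
#include <string>

struct Controller {
    // The controller's own copy of per-host expiry. It can lag behind the
    // pools' actual state because it is only refreshed by updateHost().
    std::map<std::string, bool> expired;

    void updateHost(const std::string& host, bool isExpired) {
        expired[host] = isExpired;
        bool all = true;
        for (const auto& entry : expired)
            all = all && entry.second;
        if (all)
            std::cout << "Controller: all pools look expired; shutting down\n";
    }
};

int main() {
    Controller controller;
    std::deque<std::function<void()>> tasks;  // serialized executor queue

    // Step 1: HostA expired earlier and its updateHost() has already run;
    // HostB is still live as far as the controller knows.
    controller.expired = {{"HostA", true}, {"HostB", false}};

    // Step 2: HostB expires and schedules its updateHost().
    tasks.push_back([&] { controller.updateHost("HostB", true); });

    // Step 3: HostA picks up new work, is no longer expired, and schedules
    // its updateHost() *behind* HostB's.
    tasks.push_back([&] { controller.updateHost("HostA", false); });

    // Step 4: HostB's update runs first. The controller still holds HostA's
    // stale "expired" state, concludes everything can shut down, and fires
    // the shutdown, even though HostA is actually live and its corrective
    // update is next in the queue.
    while (!tasks.empty()) {
        tasks.front()();
        tasks.pop_front();
    }
}
```

Running this prints the shutdown message during step 4, while HostA is in fact live; HostA's corrective update applies one task too late.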
The solution is to run all in-flight updateHost() calls before we act on anything on the ConnectionPool side; a sketch of that drain step follows. This probably involves moving all controller logic into ConnectionPool instead of SpecificPool. Hopefully this shouldn't hurt performance, but in the worst case we can schedule spawnConnection() on its own and call updateHost() inline.
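A hedged sketch of that drain step, reusing the hypothetical Controller and task queue from the model above (in the real fix this logic would live in ConnectionPool itself, not in a free function, and drainPendingUpdates is an invented name):

```cpp
#include <deque>
#include <functional>

// Run every queued updateHost() to completion before the pool acts on the
// controller's aggregate view, so shutdown decisions are never made against
// stale per-host state.
void drainPendingUpdates(std::deque<std::function<void()>>& tasks) {
    while (!tasks.empty()) {
        auto task = std::move(tasks.front());
        tasks.pop_front();
        task();  // one pending updateHost() call
    }
}
```

In the model above this amounts to moving the all-expired check out of updateHost(): drain the queue first, then evaluate the controller's view, so HostA's "no longer expired" update always lands before any shutdown decision. Centralizing the controller logic in ConnectionPool would give one natural place to perform this drain.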
- is duplicated by: SERVER-42871 ConnectionPool::SpecificPool is able to shutdown with outstanding requests (Closed)