[SERVER-59636] Default timeout for receiveChunkWaitForRangeDeleterTimeoutMS is too low Created: 27/Aug/21 Updated: 29/Oct/23 Resolved: 31/Aug/21 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | None |
| Fix Version/s: | 5.1.0-rc0 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Marcos José Grillo Ramirez | Assignee: | Marcos José Grillo Ramirez |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | sharding-wfbf-day |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Sprint: | Sharding 2021-08-23 |
| Participants: | |
| Linked BF Score: | 49 |
| Description |
|
The receiveChunkWaitForRangeDeleterTimeoutMS parameter makes the recipient of a migration wait for range deletions on intersecting chunk ranges to finish before continuing with the ongoing migration. The default timeout is currently 10 seconds, which is too low for the current test environment. We've had this problem before in some concurrency tests, so we could increase the default value to a minute and a half, which is the current overall timeout for migrations. |
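For illustration only, a minimal PyMongo sketch of raising the parameter at runtime via the setParameter command, assuming the parameter is runtime-settable; the connection address is a placeholder, and 90000 ms is the minute-and-a-half value suggested above (the parameter can otherwise be set at mongod startup with --setParameter):

```python
from pymongo import MongoClient

# Connect directly to a shard member (placeholder address) and raise the
# recipient-side wait-for-range-deleter timeout. 90000 ms corresponds to the
# minute-and-a-half overall migration timeout mentioned in the description.
client = MongoClient("mongodb://localhost:27018")
client.admin.command(
    "setParameter", 1,
    receiveChunkWaitForRangeDeleterTimeoutMS=90000,
)
client.close()
```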
| Comments |
| Comment by Vivian Ge (Inactive) [ 06/Oct/21 ] |
|
Updating the fix version since branching activities occurred yesterday. This ticket will be in rc0 once it has been triggered. For more active release information, please keep an eye on #server-release. Thank you! |
| Comment by Githook User [ 31/Aug/21 ] |
|
Author: Marcos José Grillo Ramirez (m4nti5) <marcos.grillo@mongodb.com> Message: |
| Comment by Marcos José Grillo Ramirez [ 31/Aug/21 ] |
|
Given that this is a low-occurrence problem, it doesn't make sense to significantly alter the purpose of the timeout just to solve this scenario. The best option right now is to add the configuration parameter to the suite and increase the timeout there. |
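A rough sketch of the kind of per-suite override described above, using a hypothetical helper that applies the higher timeout to every shard before a migration-heavy workload runs; the actual fix sets the parameter through the test suite's configuration rather than at runtime, and the shard addresses below are placeholders:

```python
from pymongo import MongoClient

# Hypothetical helper (not the actual suite change): raise
# receiveChunkWaitForRangeDeleterTimeoutMS on each shard so migration-heavy
# tests don't hit the 10-second default while range deletions are pending.
def raise_range_deleter_wait_timeout(shard_uris, timeout_ms=90000):
    for uri in shard_uris:
        with MongoClient(uri) as client:
            client.admin.command(
                "setParameter", 1,
                receiveChunkWaitForRangeDeleterTimeoutMS=timeout_ms,
            )

# Placeholder shard addresses for a local two-shard cluster.
raise_range_deleter_wait_timeout(
    ["mongodb://localhost:27018", "mongodb://localhost:27019"]
)
```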