[SERVER-60146] Check memory usage more frequently in HashAgg stage Created: 22/Sep/21 Updated: 29/Oct/23 Resolved: 23/Sep/21 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 5.1.0-rc0 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Eric Cox (Inactive) | Assignee: | Eric Cox (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Sprint: | QE 2021-10-04 |
| Participants: |
| Description |
|
The new memory tracking algorithm in HashAgg is probabilistic, so in spill_to_disk.js we aren't throwing errors in the same cases as the classic engine when allowDiskUse = false. Setting `internalQuerySlotBasedExecutionHashAggMemoryUseSampleRate` to 1 gives us parity with the classic engine's memory-limit error reporting, but that didn't fit well into the testing framework (we would have to move spill_to_disk.js to noPassthrough) and we would lose coverage for the sharded passthroughs. This fix checks memory usage in HashAgg every 100 insertions into the hash table, in addition to the existing random coin flip. A sketch of this hybrid check is shown below. |
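The following is a minimal illustrative sketch, not the server's actual implementation: it shows how a deterministic check every 100 insertions can be combined with a sampling coin flip so that a memory-limit violation is never missed for long. The class name, the `kCheckFrequency` constant, and the RNG wiring are assumptions made for illustration only; the sample rate stands in for `internalQuerySlotBasedExecutionHashAggMemoryUseSampleRate`.

```cpp
#include <cstdint>
#include <random>

// Hypothetical helper that decides when HashAgg should perform a
// (relatively expensive) exact memory-usage check.
class HashAggMemoryCheckPolicy {
public:
    explicit HashAggMemoryCheckPolicy(double sampleRate) : _sampleRate(sampleRate) {}

    // Called once per insertion into the hash table. Returns true when the
    // caller should measure memory usage and enforce the limit.
    bool shouldCheckMemory() {
        ++_insertionsSinceLastCheck;

        // Deterministic path: force a check every 100 insertions (per the
        // ticket description) so errors are raised even if sampling misses.
        const bool periodicCheck = _insertionsSinceLastCheck >= kCheckFrequency;

        // Probabilistic path: the original sampling "coin flip".
        const bool sampledCheck = _coin(_rng) < _sampleRate;

        if (periodicCheck || sampledCheck) {
            _insertionsSinceLastCheck = 0;
            return true;
        }
        return false;
    }

private:
    static constexpr uint64_t kCheckFrequency = 100;

    double _sampleRate;
    uint64_t _insertionsSinceLastCheck = 0;
    std::mt19937_64 _rng{std::random_device{}()};
    std::uniform_real_distribution<double> _coin{0.0, 1.0};
};
```

With this kind of policy, setting the sample rate to 1 still checks on every insertion (matching classic-engine behavior), while lower sample rates remain safe because the periodic check bounds how many insertions can occur between measurements. |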
| Comments |
| Comment by Vivian Ge (Inactive) [ 06/Oct/21 ] |
|
Updating the fixVersion since branching activities occurred yesterday. This ticket will be in rc0 when it's been triggered. For more active release information, please keep an eye on #server-release. Thank you! |
| Comment by Githook User [ 23/Sep/21 ] |
|
Author: Eric Cox <eric.cox@mongodb.com> (ericox)
Message: |