[SERVER-49150] Make merge_causes_infinite_loop.js more robust Created: 26/Jun/20 Updated: 29/Oct/23 Resolved: 30/Jun/20
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 4.4.0-rc13, 4.7.0 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Mihai Andrei | Assignee: | Mihai Andrei |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Backport Requested: | v4.4 |
| Sprint: | Query 2020-07-13 |
| Participants: | |
| Linked BF Score: | 10 |
| Description |

From the linked BF: Instead of verifying that the aggregate times out as expected, this test would be both more robust and more correct if it verified that the contents of the output collection are what we expect. More precisely, the purpose of this test is to verify that when $merge outputs to the collection being aggregated over, it can trigger an infinite loop of updates. So in the control case (the aggregate that outputs to a different collection), it does not matter whether the aggregate completes in under 2500ms; what matters is that each document has a value of 'a' that is double its original value. This would confirm that, in the control case, each original document was updated exactly once, as expected.
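A minimal sketch of that content-based check for the control case, written in mongo shell jstest style. The collection names, seed documents, and pipeline below are illustrative assumptions, not taken from the actual merge_causes_infinite_loop.js:

```javascript
// Hypothetical source and target collections for the control case.
const source = db.merge_source;
const target = db.merge_target;
source.drop();
target.drop();

// Seed the source collection with known values of 'a'.
const originals = [{_id: 0, a: 1}, {_id: 1, a: 2}, {_id: 2, a: 3}];
assert.commandWorked(source.insert(originals));

// Control case: $merge writes to a *different* collection, so no update loop
// is possible and the pipeline terminates normally.
source.aggregate([
    {$addFields: {a: {$multiply: ["$a", 2]}}},
    {$merge: {into: target.getName(), whenMatched: "replace", whenNotMatched: "insert"}},
]);

// Instead of timing the aggregate, verify the output directly: each document's
// 'a' must be exactly double its original value, which confirms that every
// original document was written exactly once.
for (const doc of originals) {
    assert.eq(target.findOne({_id: doc._id}).a, doc.a * 2);
}
```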
| Comments |
| Comment by Githook User [ 10/Jul/20 ] |
Author: Mihai Andrei <mihai.andrei@10gen.com> (username: mtandrei)
Message: (cherry picked from commit 9c61abb63b5acdc5f2e99c1185bb3be3d7342ac9)
| Comment by Githook User [ 30/Jun/20 ] |
Author: Mihai Andrei <mihai.andrei@10gen.com> (username: mtandrei)
Message: