[SERVER-49507] Reduce memory consumption in startup repair when rebuilding unique indexes with a large number of duplicate records Created: 14/Jul/20 Updated: 29/Oct/23 Resolved: 05/Aug/20
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 4.7.0, 4.4.2 |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Fausto Leyva (Inactive) | Assignee: | Fausto Leyva (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | KP44 |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Execution Team 2020-07-27, Execution Team 2020-08-10 |
| Participants: | |
| Case: | (copied to CRM) |
| Linked BF Score: | 40 |
| Description |
|
Currently, dumpInsertsFromBulk populates an in-memory set of the RecordIds whose index entries contain duplicate unique keys.
The duplicate RecordIds are then iterated over; each corresponding record is moved to a local lost_and_found collection and deleted from the original collection.
| Comments |
| Comment by Githook User [ 08/Oct/20 ] |
|
Author: {'name': 'Faustoleyva54', 'email': 'fausto.leyva@mongodb.com', 'username': 'Faustoleyva54'}
Message: (cherry picked from commit 80f11e6ae0708e8c8da49208ef2cf71cdd06877c)
(cherry picked from commit e25d43ca2b5e99e6484cb0e13ca5f9e2d014ac30)
| Comment by Githook User [ 05/Aug/20 ] |
|
Author: {'name': 'Faustoleyva54', 'email': 'fausto.leyva@mongodb.com', 'username': 'Faustoleyva54'}
Message:
| Comment by Fausto Leyva (Inactive) [ 30/Jul/20 ] |
| Comment by Louis Williams [ 16/Jul/20 ] |
|
bruce.lucas, yes this is a follow-up to work that has yet to be committed in
| Comment by Bruce Lucas (Inactive) [ 16/Jul/20 ] |
|
Is this different from