[SERVER-61319] BucketCatalog should allow clearing the buckets on a range of meta and/or time values Created: 08/Nov/21 Updated: 06/Dec/22 Resolved: 19/Sep/22 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | New Feature | Priority: | Major - P3 |
| Reporter: | Arun Banala | Assignee: | Backlog - Storage Execution Team |
| Resolution: | Won't Do | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Assigned Teams: |
Storage Execution
|
| Sprint: | Execution Team 2021-12-13, Execution Team 2022-02-07, Execution Team 2022-02-21, Execution Team 2022-03-07 | ||||||||||||
| Participants: |
| Description |
|
During a chunk migration of a sharded time-series collection, we clear all the open buckets on the source shard. This has a noticeable impact on write throughput. One suggested improvement is to clear only the buckets that overlap the chunk range currently being migrated. To achieve this, we need an alternative to BucketCatalog::clear(const NamespaceString& ns) that accepts start and end values of a shard key pattern and clears only the overlapping buckets. Note that this function is called inside the critical section, so it is also important that the implementation does not iterate over all the open buckets to identify the overlapping ones. |
| Comments |
| Comment by Dan Larkin-York [ 19/Sep/22 ] |
|
Closing as Won't Do, since the latency and negative effects of premature closure for chunk migrations are addressed by other tickets. |