[SERVER-66579] Initial tests of write heads vs. insert performance for RecordId format Created: 19/May/22 Updated: 12/Jun/23 Resolved: 12/Jun/23 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Esha Maharishi (Inactive) | Assignee: | [DO NOT USE] Backlog - Server Serverless (Inactive) |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Assigned Teams: | Serverless |
| Sprint: | Server Serverless 2022-05-30, Server Serverless 2022-06-13 |
| Participants: |
| Description |
|
Ideally in Serverless we would like to shard on hash(_id) and use a RecordId format of (hash(_id), per-hash-counter), where the hash function has low cardinality. This allows split cleanup to be on ranges of RecordId's and merge to "stitch" the record heaps But, we might have to choose a very low cardinality to maintain few write heads per shard, and therefore have to refine/reshard most customers to allow them to have more shards. This ticket is to do initial tests of write heads vs. insert performance to determine an order of magnitude for a feasible default cardinality. |