[SERVER-76027] Limit memory usage for bulkWrite (mongos) Created: 12/Apr/23 Updated: 19/Jan/24 Resolved: 19/Jan/24 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 7.3.0-rc0 |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Lingzhi Deng | Assignee: | Sean Zimmerman |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | milestone-2 |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Assigned Teams: | Replication |
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Repl 2024-01-22 |
| Participants: | |
| Description |
|
Mongos can't use remoteCursors for bulkWrite the way other cluster commands do, because as a router it must first consume the write results from the shards before determining what to do next. Given this nature of a router, mongos needs to cache the response for each individual WriteOp, and there is no memory limit on that cache today. The end goal is to limit the memory usage when we build or cache responses for bulkWrite ops. Once this limit is hit, we should stop executing the remaining operations in this bulkWrite. |
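The approach described above can be sketched roughly as follows. This is an illustrative stand-alone C++ sketch, not the actual server implementation: the type `WriteOpReply`, the function `cacheRepliesUpToLimit`, and the payload-only size accounting are all hypothetical simplifications of how a router might cap cached per-op responses and stop executing remaining ops once the budget is exceeded.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical per-op reply; names are illustrative, not the server's types.
struct WriteOpReply {
    std::string payload;  // serialized shard response for one WriteOp
    // Approximate memory footprint (payload bytes only, for determinism).
    std::size_t approxSize() const { return payload.size(); }
};

// Cache replies until an approximate memory budget is exceeded. Returns the
// index of the first op that was NOT executed (== responses.size() if all ran).
std::size_t cacheRepliesUpToLimit(const std::vector<std::string>& shardResponses,
                                  std::size_t memoryLimitBytes,
                                  std::vector<WriteOpReply>& cache) {
    std::size_t used = 0;
    for (std::size_t i = 0; i < shardResponses.size(); ++i) {
        WriteOpReply reply{shardResponses[i]};
        used += reply.approxSize();
        if (used > memoryLimitBytes) {
            // Limit hit: stop executing the remaining ops in this bulkWrite.
            return i;
        }
        cache.push_back(std::move(reply));
    }
    return shardResponses.size();
}
```

The key design point matches the ticket: because the router must buffer every per-op response before it can decide what to do next, the only cheap safeguard is to track the accumulated size and cut the batch short when it crosses the limit.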
| Comments |
| Comment by Githook User [ 19/Jan/24 ] |
|
Author: {'name': 'seanzimm', 'email': '102551488+seanzimm@users.noreply.github.com', 'username': 'seanzimm'}
Message: GitOrigin-RevId: 81e883615c2f01286d16dd1aa763485245c20899 |
| Comment by Lingzhi Deng [ 14/Apr/23 ] |
|
More detailed explanation: this was an "easy but good enough" solution for now, to work around the fact that the mongos/router logic needs to cache shard responses before knowing what to do next. We expect it to be rare for users to hit this limit in practice. If this becomes an issue in the future, ideally we will want to:
|