Type: New Feature
Resolution: Won't Fix
Priority: Major - P3
Affects Version/s: None
Component/s: None
Currently we offer a set of knobs for users to tune the concurrency of their dump/restore jobs. This can produce very efficient runs, but at the expense of requiring the user to profile their use case before making backups. That is not an unreasonable task for a user who wants automatic backups, but it is too much work for someone who just wants a one-off copy of their data.
It would be useful if these tools could dynamically increase or decrease load based on how much impact doing so has on throughput. We could do this by scaling the number of inserters or collections on the fly, or by throttling network traffic outright. Users would then have a single knob for maximum throughput instead of three or more.
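As one possible shape for the "scale inserters on the fly" idea, the sketch below shows a simple hill-climbing policy: add a worker while throughput keeps improving, back off when it degrades, and hold steady on a plateau. All names here (`adjustWorkers`, the sample throughput figures, the 5% hysteresis band) are hypothetical illustrations, not part of the actual tools.

```go
package main

import "fmt"

// adjustWorkers is a hypothetical hill-climbing controller: keep adding
// workers while measured throughput improves, back off once adding load
// stops helping. The 5% band avoids reacting to small fluctuations.
func adjustWorkers(workers int, prev, cur float64, max int) int {
	switch {
	case cur > prev*1.05 && workers < max: // throughput still rising
		return workers + 1
	case cur < prev*0.95 && workers > 1: // throughput degraded
		return workers - 1
	default:
		return workers // plateau: hold steady
	}
}

func main() {
	workers := 1
	// Simulated throughput samples (MB/s) from successive intervals.
	samples := []float64{10, 20, 28, 29, 25}
	prev := 0.0
	for _, cur := range samples {
		workers = adjustWorkers(workers, prev, cur, 8)
		fmt.Printf("throughput=%.0f MB/s -> workers=%d\n", cur, workers)
		prev = cur
	}
}
```

A user-facing maximum-throughput knob would slot in as an additional cap checked alongside `max`; the feedback loop itself needs no per-use-case profiling.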
is related to: TOOLS-432 Re-benchmark default BulkWriters (Closed)