- Type: Improvement
- Resolution: Unresolved
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
- Build
- Build OnDeck
We currently build many different benchmark binaries. Because benchmark binaries are generally statically linked (to match release configurations), each one carries its own copy of the debug info for every library it links, so each benchmark binary is large (2+ GB).
Our current build process tars all of those binaries into a single tarball, which is getting close to S3's current file size limit of 50 GB. This causes occasional task failures when the tarball exceeds that limit.
We should explore options for remediating the situation. Options discussed thus far include:
- Refactor the benchmark build process to produce a single "collection" binary containing all benchmarks. This would eliminate the duplicated static libraries and should reduce the total tarball size.
- Eliminate some benchmark binaries entirely.
- Stop including debug info in the benchmark binaries by default, which would significantly reduce the tarball size, and document how engineers can re-create the debug info when needed. (This is possible today, but requires expert knowledge.)
- Shard the benchmark tarball into multiple tarballs (and update all call sites to fetch N tarballs accordingly).
Note: There's a Slack thread (https://mongodb.slack.com/archives/C0V896UV8/p1713886468742869) discussing what to do about this.
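If we go the sharding route, the upload/download mechanics are simple with standard tools. A sketch with a small stand-in file (real part sizes, counts, and names would differ):

```shell
set -e
# Stand-in for the real multi-gigabyte tarball.
dd if=/dev/zero of=benchmarks.tgz bs=1024 count=2048 2>/dev/null
split -b 1048576 benchmarks.tgz benchmarks.tgz.part-  # -> part-aa, part-ab, ...
cat benchmarks.tgz.part-* > rejoined.tgz              # call sites concatenate in suffix order
cmp benchmarks.tgz rejoined.tgz                       # byte-identical after reassembly
```

The main cost of this option is updating every call site to fetch N parts (or a manifest) instead of one object, which is why the single-collection-binary and no-debug-info options may be preferable.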
- related to:
  - SERVER-86329 improve compile_upload_benchmarks required functionality (Closed)
  - SERVER-89793 Disable compile_upload_benchmarks_debug task (Closed)