[SERVER-67044] Create build metrics CLI interface and generic output Created: 06/Jun/22 Updated: 29/Oct/23 Resolved: 17/Jun/22 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 6.1.0-rc0 |
| Type: | New Feature | Priority: | Major - P3 |
| Reporter: | Daniel Moody | Assignee: | Daniel Moody |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Dev Platform 2022-06-27 |
| Participants: | |
| Description |
|
The command line interface has two options. Implement adding the options, and generate an output JSON file with just the metadata present:
The JSON format will be documented with the jsonschema Python module, and a validation check will be run in an effort to ensure that modifications to the format keep the documentation in the schema up to date. A failed validation should fail the Evergreen task but still report the data. The metrics level can be functionally ignored for now, but the output should generate the generic metadata:
For local builds, the evg_id and variant fields should just be "UNKNOWN". This should be tested and verified via stdout in an Evergreen patch build, using the parameters UI to pass a compile option. |
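A hedged sketch of the generic metadata collection with the "UNKNOWN" fallback for local builds. The environment variable names are assumptions for illustration; the actual Evergreen task would supply these values however the implementation chooses:

```python
import os
import sys
import time


def generate_meta_data():
    """Collect the generic build metadata.

    When the build runs locally (i.e. outside Evergreen), the
    Evergreen-specific fields fall back to "UNKNOWN".
    """
    return {
        # Illustrative variable names, not confirmed by the ticket.
        "evg_id": os.environ.get("EVG_TASK_ID", "UNKNOWN"),
        "variant": os.environ.get("EVG_VARIANT", "UNKNOWN"),
        "start_time": int(time.time()),
        "scons_command": " ".join(sys.argv),
    }
```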
| Comments |
| Comment by Githook User [ 16/Jun/22 ] |
|
Author: {'name': 'Daniel Moody', 'email': 'daniel.moody@mongodb.com', 'username': 'dmoody256'}
Message: |
| Comment by Daniel Moody [ 07/Jun/22 ] |
Any extra code executing potentially affects perf, so any measuring taking place could affect perf. The levels would control the amount of measuring taking place.
More data is always nice to have, but there's a cost. It's hard to predict the value of the extra data at the moment, because we don't use it yet and have no build perf BF history. |
| Comment by Alex Neben [ 07/Jun/22 ] |
|
The only thing I can think of that would affect perf would be dependency graphs. Is there anything else that would affect perf? But also, I think it is reasonable to lose some perf when we measure, because we don't have to measure all the time. Do you think we would want to measure everything? |
| Comment by Daniel Moody [ 07/Jun/22 ] |
|
I would prefer a single level, not only because it makes the code much simpler, but also because it removes a lot of output variability. Level 3 means more perf impact, as there is a heavier CPU and memory cost to perform the analysis, but also much more data about the build. |
| Comment by Alex Neben [ 07/Jun/22 ] |
|
Do we really need levels (--metrics-level=[1,2,3])? In what case wouldn't we want level 3? I ask because if we decide on just a single level, that will remove a lot of if statements.
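The level gating under discussion could look roughly like this. This is a sketch using argparse for self-containment (the actual build would register the flag through SCons's AddOption), and the per-level collector names are illustrative, drawn from the dependency-graph example in the thread:

```python
import argparse

parser = argparse.ArgumentParser()
# The flag and levels from the comment above; higher levels enable more
# (and costlier) measurement, which is where the extra if statements come from.
parser.add_argument(
    "--metrics-level",
    type=int,
    choices=[1, 2, 3],
    default=1,
    help="amount of build metrics collection (higher = more data, more overhead)",
)


def enabled_collectors(level):
    """Each extra level adds collectors, and therefore perf overhead.

    Collector names are hypothetical examples, not a final design.
    """
    collectors = ["meta_data"]
    if level >= 2:
        collectors.append("task_timings")
    if level >= 3:
        collectors.append("dependency_graph")  # the costly analysis mentioned above
    return collectors


args = parser.parse_args(["--metrics-level", "3"])
```

With a single level, `enabled_collectors` collapses to a constant list and the branches disappear, which is the simplification argued for in the thread.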