[SERVER-21763] Track "compare" task for performance results as regular regression tasks. Created: 03/Dec/15 Updated: 16/Nov/16 Resolved: 14/Dec/15 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Performance |
| Affects Version/s: | None |
| Fix Version/s: | 3.2.1, 3.3.0 |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Chung-yen Chang | Assignee: | Chung-yen Chang |
| Resolution: | Done | Votes: | 0 |
| Labels: | test-only | | |
| Remaining Estimate: | Not Specified | | |
| Time Spent: | Not Specified | | |
| Original Estimate: | Not Specified | | |
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Performance D (12/14/15) |
| Participants: |
| Description |
|
We currently make pass/fail decisions in the compare.py script when comparing two sets of performance results. This has turned out to be inflexible, because the threshold each test needs varies. The solution is to stop making that decision inside compare.py and instead run the regression analysis script after compare.py has calculated the ratios. |
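As a rough sketch of the intended decoupling (the file layout, JSON shape, function name, and threshold values below are hypothetical illustrations, not the actual scripts in the repository): compare.py would only record the per-test ratios, and a separate analysis step would apply per-test thresholds afterwards:

```python
import json
import sys

# Hypothetical per-test thresholds; the point of the change is that these
# live with the regression analysis step, not inside compare.py.
THRESHOLDS = {
    "insert_vector": 0.90,   # fail if new/baseline throughput ratio < 0.90
    "update_scatter": 0.80,  # a noisier test tolerates a larger drop
}
DEFAULT_THRESHOLD = 0.95


def analyze_regressions(ratios_path):
    """Apply per-test thresholds to ratios already computed by compare.py.

    `ratios_path` is assumed to be a JSON file mapping test name to the
    new/baseline throughput ratio. compare.py only calculates and records
    these ratios; the pass/fail decision is made here instead.
    """
    with open(ratios_path) as f:
        ratios = json.load(f)

    failures = []
    for test, ratio in ratios.items():
        threshold = THRESHOLDS.get(test, DEFAULT_THRESHOLD)
        if ratio < threshold:
            failures.append((test, ratio, threshold))

    for test, ratio, threshold in failures:
        print("FAIL %s: ratio %.3f below threshold %.3f"
              % (test, ratio, threshold))
    return len(failures) == 0


if __name__ == "__main__":
    sys.exit(0 if analyze_regressions(sys.argv[1]) else 1)
```

This keeps the thresholds out of compare.py, so tuning the tolerance for one noisy test does not require touching the comparison logic itself. |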
| Comments |
| Comment by Githook User [ 24/Dec/15 ] |
|
Author: Chung-Yen Chang (chungyen100, chung-yen.chang@10gen.com)
Message: (cherry picked from commit 4cf67910519aee0d5af7a24b84db970551b0135b) |
| Comment by Githook User [ 11/Dec/15 ] |
|
Author: Chung-Yen Chang (chungyen100, chung-yen.chang@10gen.com)
Message: |