[DRIVERS-2779] Standardize performance benchmark reporting metrics Created: 16/Nov/23  Updated: 18/Dec/23

Status: Backlog
Project: Drivers
Component/s: Performance Benchmarking
Fix Version/s: None

Type: Task Priority: Minor - P4
Reporter: Shane Harvey Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to PYTHON-3823 Standardize performance testing infra... Closed
related to DRIVERS-2666 Standardize performance testing infra... Implementing
is related to NODE-5794 Upload test results for individual pe... Backlog
Driver Changes: Needed

Description

The Benchmarking spec says:

In addition to timing data, all micro-benchmark tasks will be measured in terms of "megabytes/second" (MB/s) of documents processed, with higher scores being better. (In this document, "megabyte" refers to the SI decimal unit, i.e. 1,000,000 bytes.) This makes cross-benchmark comparisons easier.
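
For reference, a minimal sketch of the MB/s computation the spec describes (the function name and inputs are illustrative, not taken from the spec):

{code:python}
# Hypothetical helper: convert the bytes of documents a task processed and
# its wall-clock runtime into the spec's MB/s score.
def megabytes_per_second(task_size_bytes: int, elapsed_seconds: float) -> float:
    # "megabyte" is the SI decimal unit, i.e. 1,000,000 bytes.
    return (task_size_bytes / 1_000_000) / elapsed_seconds

# Example: 565,000,000 bytes processed in 12.5 seconds -> 45.2 MB/s.
print(megabytes_per_second(565_000_000, 12.5))
{code}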

However, each driver's benchmark suite currently reports a different measurement: Python reports "bytes_per_second", Java reports "ops_per_second", C labels its metric "ops_per_second" but actually reports bytes per second, and so on.
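
One way to remove the ambiguity would be for every driver to emit the same metric name and unit in its results payload. The field names below are assumptions for illustration, not part of the spec:

{code:python}
# Hypothetical standardized result entry; the field names are illustrative.
result = {
    "test_name": "gridfs_download",         # one canonical name per benchmark
    "metric_name": "megabytes_per_second",  # always MB/s, never ops/sec or raw bytes/sec
    "value": 45.2,
}
{code}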

We should also standardize the reported "Test" names so that results can be compared across drivers more easily. For example, the GridFS download benchmark is called "GridFS Download" in Java, "TestGridFsDownload" in Python and C, and "gridFsDownload" in Node.
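
As a sketch of what name standardization could look like, here is a hypothetical mapping from today's per-driver names to a single canonical name:

{code:python}
# Hypothetical normalization table: per-driver test names -> canonical name.
CANONICAL_TEST_NAMES = {
    "GridFS Download": "gridfs_download",     # Java
    "TestGridFsDownload": "gridfs_download",  # Python and C
    "gridFsDownload": "gridfs_download",      # Node
}

def canonical_name(reported_name: str) -> str:
    # Fall back to the reported name until every driver is in the table.
    return CANONICAL_TEST_NAMES.get(reported_name, reported_name)
{code}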

Related to DRIVERS-2666.
