- Type: Improvement
- Resolution: Unresolved
- Priority: Major - P3
- Component/s: Atlas Testing
This issue has been repurposed to clarify expected output for events.json and results.json in all cases:
- results.json and events.json should always be created by the workload executor (even if the test runner propagates an error not caught in a loop)
- events.json must always report arrays for events, errors, and failures. Those arrays may be empty. If any events, errors, or failures are obtained from the entity map, they will be appended to those arrays. If the unified test runner propagates an error (e.g. not caught by a loop), the workload executor is expected to report the error/failure and append it to an array accordingly.
- numErrors and numFailures in results.json will always report the size of the respective array in events.json and never be unset (-1).
- numSuccesses and numIterations in results.json may be unset (-1) if their respective values cannot be obtained from the entity map.
These changes will be incorporated into the workload executor spec document and the validation tests will be revised accordingly.
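As a non-normative sketch, a workload executor following the clarified rules above might assemble its output as follows. All names here (`entity_map`, `write_output`, the shape of the appended error document) are hypothetical illustrations, not taken from the spec document:

```python
import json

def write_output(entity_map, uncaught=None):
    """Write events.json and results.json per the clarified rules (sketch).

    entity_map is assumed to hold whatever lists/counts were captured from
    the unified test runner's entity map; any key may be missing.
    uncaught is an error that propagated out of the loop, if any.
    """
    # events, errors, and failures must always be arrays, possibly empty.
    events = list(entity_map.get("events", []))
    errors = list(entity_map.get("errors", []))
    failures = list(entity_map.get("failures", []))

    # An error not caught by a loop is reported by the executor itself
    # and appended to the appropriate array (document shape assumed here).
    if uncaught is not None:
        errors.append({"error": str(uncaught), "time": 0})

    with open("events.json", "w") as f:
        json.dump({"events": events, "errors": errors, "failures": failures}, f)

    # numErrors/numFailures always mirror the array sizes, never -1;
    # numSuccesses/numIterations fall back to -1 when unavailable.
    results = {
        "numErrors": len(errors),
        "numFailures": len(failures),
        "numSuccesses": entity_map.get("successes", -1),
        "numIterations": entity_map.get("iterations", -1),
    }
    with open("results.json", "w") as f:
        json.dump(results, f)
```

Note that under these rules results.json can never report -1 for numErrors or numFailures, which is what allows the validation tests to assert non-negative values for those two counts.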
For additional context, this issue's original summary and description follow:
Incorrect assertions for result.json in ValidateWorkloadExecutor
Behavioral Description #4 states:
If the unified test runner raises an error while executing the workload, the error MUST be reported using the same format as errors handled by the unified test runner, as described in the unified test runner specification under the loop operation. Errors handled by the workload executor MUST be included in the calculated (and reported) error count.
If the unified test runner reports a failure while executing the workload, the failure MUST be reported using the same format as failures handled by the unified test runner, as described in the unified test runner specification under the loop operation. Failures handled by the workload executor MUST be included in the calculated (and reported) failure count. If the driver’s unified test runner is intended to handle all failures internally, failures that propagate out of the unified test runner MAY be treated as errors by the workload executor.
If the loop operation does not store errors or failures in an entity, those exceptions are expected to interrupt the loop and propagate to the test runner. My understanding of the text above is that the workload executor is expected to capture and report those errors in the same manner – and thus produce results.json.
I'm confused as to why the validator-numFailures-not-captured.yml test expects -1 to be reported for numFailures in that case (and likewise for validator-numErrors-not-captured.yml). If anything, ValidateWorkloadExecutor's test_num_failures_not_captured should expect 1 for numFailures (and possibly -1 for numErrors), since an uncaught failure will abort the loop on its first iteration and the workload executor should report that single failure on its own.
Behavioral Description #8 states:
MUST calculate the aggregate counts of errors (numErrors) and failures (numFailures) from the error and failure lists. If the errors or failures were not reported by the test runner, such as because the respective options were not specified in the test scenario, the workload executor MUST use -1 as the value for the respective counts.
This does not agree with the expectations for ValidateWorkloadExecutor's "simple test", which does not specify storeErrorsAsEntity or storeFailuresAsEntity but later asserts that results.json does not have any -1 values. Per the quoted text, a workload executor should use -1 for the error/failure counts.
This issue extends to other ValidateWorkloadExecutor tests, which specify either storeErrorsAsEntity or storeFailuresAsEntity (but not both). For example, validator-numErrors.yml and validator-numFailures-as-errors.yml only use storeErrorsAsEntity. I would expect results.json to produce an actual count for numErrors (based on the size of the errors array in events.json) and leave numFailures unset (i.e. -1); however, this conflicts with the assertions in ValidateWorkloadExecutor.run_test:
    if any(val < 0 for val in stats.values()):
        self.fail("The workload executor reported incorrect execution "
                  "statistics. Reported statistics MUST NOT be negative.")
I'm not sure what needs to change here, nor do I understand how existing implementations are currently passing the ValidateWorkloadExecutor tests. Does the documented behavior for the workload executor need to be changed, or are the assertions in ValidateWorkloadExecutor incorrect?
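For illustration, the behavior that Behavioral Description #8 describes amounts to something like the following sketch (the helper name is hypothetical; only the -1 convention comes from the quoted text):

```python
def count_or_unset(entity_map, key, option_specified):
    """Per Behavioral Description #8: report the length of the captured
    list, or -1 when the corresponding storeErrorsAsEntity /
    storeFailuresAsEntity option was not specified in the scenario."""
    if not option_specified:
        return -1
    return len(entity_map.get(key, []))
```

Under this reading, a scenario that specifies only storeErrorsAsEntity would yield a real count for numErrors but -1 for numFailures, which is exactly the value the quoted `run_test` assertion rejects as negative.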
split to:
- PHPLIB-714 Clarify events.json and result.json produced by workload executor (Closed)
- RUBY-2587 Update Atlas workload executor for revised ValidateWorkloadExecutor tests (Closed)
- CSHARP-3826 Clarify events.json and result.json produced by workload executor (Backlog)
- CXX-2363 Clarify events.json and result.json produced by workload executor (Backlog)
- GODRIVER-2144 Clarify events.json and result.json produced by workload executor (Backlog)
- JAVA-4114 Update Atlas workload executor for revised ValidateWorkloadExecutor tests (Backlog)
- CDRIVER-4144 Clarify events.json and result.json produced by workload executor (Closed)
- MOTOR-816 Clarify events.json and result.json produced by workload executor (Closed)
- NODE-3584 Clarify events.json and result.json produced by workload executor (Closed)
- PYTHON-2892 Clarify events.json and result.json produced by workload executor (Closed)
- RUST-1010 Clarify events.json and result.json produced by workload executor (Closed)