- Type: Task
- Resolution: Fixed
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
- Storage Execution
- Fully Compatible
- Storage Execution 2025-05-26, Storage Execution 2025-06-09
timeseries_min_max.js has a test case in which we insert two measurements with one schema and then insert a third measurement with a different schema. This creates a total of two buckets, because a new bucket is generated for the new schema.
The test performs a find on the buckets collection to identify the bucket that a measurement was inserted into, by querying for (control.min._id <= measurement._id <= control.max._id). However, if the first two measurements and the third measurement (the one with a different schema) are inserted within the same second, the timestamp portion of the _id fields will be the same across the two buckets. The query can then return both buckets, which fails the test's assertion.
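A minimal sketch of why the range query can match both buckets. ObjectIds begin with a 4-byte (8 hex character) timestamp at one-second resolution, so measurements inserted within the same second share that prefix, and their ordering depends only on the remaining bytes. The hex values and bucket ranges below are hypothetical, not taken from the test:

```javascript
// Hypothetical hex ObjectId strings, all sharing the same one-second
// timestamp prefix "665d1a2b". Bucket A's range spans [..0001, ..0005];
// bucket B (the new-schema bucket) holds a single measurement at ..0003.
const bucketA = {min: "665d1a2b0000000000000001", max: "665d1a2b0000000000000005"};
const bucketB = {min: "665d1a2b0000000000000003", max: "665d1a2b0000000000000003"};
const measurementId = "665d1a2b0000000000000003";

// The test's lookup: control.min._id <= measurement._id <= control.max._id.
// Hex ObjectId strings of equal length compare correctly lexicographically.
const matches = [bucketA, bucketB].filter(
    (b) => b.min <= measurementId && measurementId <= b.max);

console.log(matches.length);  // 2 -- both buckets' ranges contain the _id
```

Because the _id ranges of the two buckets can overlap within the same second, a range query on _id alone cannot uniquely identify the target bucket.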
At a higher level, this test was originally written to verify that a bucket's min and max values are correct. The logic for identifying which bucket a measurement went into exists only as a side effect of the fact that, when the test was written, measurements with different schemas could go into the same bucket; under the current behavior, a measurement with a different schema generates a new bucket. I would argue that testing schema-change detection doesn't belong here, and that we can remove these cases and simplify the test without losing coverage.