When checking for possible bucket rollover in `determineRolloverReason()`, we check for schema incompatibility, in particular for collections that have
timeseriesBucketsMayHaveMixedSchemaData: false
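For context, "mixed schema" here means the same field appears with different canonical BSON types across measurements in one bucket (e.g. {a: 1} followed by {a: "1"}). A minimal standalone illustration of that idea, using a simplified stand-in for canonical types (this is not the server's actual schema-comparison code):

    #include <iostream>
    #include <map>
    #include <string>

    // Simplified stand-in for BSON canonical type buckets.
    enum class CanonicalType { kNumber, kString };

    using Measurement = std::map<std::string, CanonicalType>;

    // True if any field shared by the two measurements changed canonical type.
    bool hasMixedSchema(const Measurement& a, const Measurement& b) {
        for (const auto& [field, type] : a) {
            auto it = b.find(field);
            if (it != b.end() && it->second != type) {
                return true;
            }
        }
        return false;
    }

    int main() {
        Measurement m1{{"a", CanonicalType::kNumber}};  // {a: 1}
        Measurement m2{{"a", CanonicalType::kString}};  // {a: "1"}
        std::cout << std::boolalpha << hasMixedSchema(m1, m2) << "\n";  // prints: true
    }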
We also have a check that can keep a bucket open for large measurements:
    if (keepBucketOpenForLargeMeasurements) {
        if (bucket.size + sizesToBeAdded.total() > absoluteMaxSize) {
            if (absoluteMaxSize != Bucket::kLargeMeasurementsMaxBucketSize) {
                return RolloverReason::kCachePressure;
            }
            return RolloverReason::kSize;
        }
        // There's enough space to add this measurement and we're still below the large
        // measurement threshold.
        if (!bucket.keptOpenDueToLargeMeasurements) {
            // Only increment this metric once per bucket.
            bucket.keptOpenDueToLargeMeasurements = true;
            stats.incNumBucketsKeptOpenDueToLargeMeasurements();
        }
        return RolloverReason::kNone;
    } else {
        // ...
This check bails out of the rollover determination with RolloverReason::kNone immediately, which skips the remaining schema incompatibility checks. This was the cause of SERVER-105854, where a document with mixed schema (a different canonical type for the same field) was missed because its measurements were very large.
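One way to see the fix at the control-flow level is the standalone sketch below; see the code after this paragraph. It uses simplified types rather than the real server code: schemaIsIncompatible() is a hypothetical stand-in for the existing schema comparison logic, the constant's value is illustrative, and the actual enum value names may differ. The point is to run the schema check before the large-measurement branch can short-circuit with kNone, so a very large mixed-schema measurement still triggers a rollover.

    // Standalone sketch with simplified types; not the real server code.
    enum class RolloverReason { kNone, kSize, kCachePressure, kSchemaChange };

    struct Bucket {
        static constexpr long long kLargeMeasurementsMaxBucketSize =
            12 * 1024 * 1024;  // illustrative value
        long long size = 0;
        bool keptOpenDueToLargeMeasurements = false;
    };

    // Hypothetical stand-in for the server's schema comparison logic.
    bool schemaIsIncompatible(const Bucket&) {
        return false;  // placeholder
    }

    RolloverReason determineRolloverReason(Bucket& bucket,
                                           bool keepBucketOpenForLargeMeasurements,
                                           long long sizesToBeAddedTotal,
                                           long long absoluteMaxSize) {
        // Run the schema check before any branch can bail out with kNone, so a
        // very large mixed-schema measurement still triggers a rollover.
        if (schemaIsIncompatible(bucket)) {
            return RolloverReason::kSchemaChange;
        }
        if (keepBucketOpenForLargeMeasurements) {
            if (bucket.size + sizesToBeAddedTotal > absoluteMaxSize) {
                return absoluteMaxSize != Bucket::kLargeMeasurementsMaxBucketSize
                    ? RolloverReason::kCachePressure
                    : RolloverReason::kSize;
            }
            // Enough space and below the large-measurement threshold: keep open.
            if (!bucket.keptOpenDueToLargeMeasurements) {
                bucket.keptOpenDueToLargeMeasurements = true;  // count once per bucket
            }
            return RolloverReason::kNone;
        }
        // ... remaining size/time checks from the real function ...
        return RolloverReason::kNone;
    }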
Issue links:
- is depended on by: SERVER-105854 "Mixed schema" error while attempting to delete docs from a time series bucket (Closed)
- is related to: SERVER-105854 "Mixed schema" error while attempting to delete docs from a time series bucket (Closed)
- related to: SERVER-108368 Add a multiversion exemption for SERVER-107361 (Closed)
- related to: SERVER-107377 Add jstests to exercise mixed schema data bucket rollover in timeseries (Closed)