[SERVER-79063] Block from lowering to SBE $sample queries over timeseries Created: 18/Jul/23 Updated: 17/Oct/23 Resolved: 07/Sep/23 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Irina Yatsenko (Inactive) | Assignee: | Irina Yatsenko (Inactive) |
| Resolution: | Won't Do | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Assigned Teams: | Query Integration |
| Participants: | |
| Description |
|
The $sample stage is pushed down into the $_internalUnpackBucket stage (see PipelineD::buildInnerQueryExecutor). |
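The pushdown mentioned above happens inside the server's C++ planner, but its effect on the pipeline shape can be sketched conceptually. The sketch below is an illustration only: the `sample` field on the unpack stage and the exact spec layout are assumptions, not the real internal representation.

```python
# Conceptual sketch (NOT the real C++ logic in PipelineD): when a $sample
# immediately follows the bucket-unpack stage, the planner can absorb the
# sample into unpacking. The "sample" field below is a hypothetical spec key.

def push_sample_into_unpack(pipeline):
    """Merge a trailing $sample into a leading $_internalUnpackBucket."""
    if (len(pipeline) >= 2
            and "$_internalUnpackBucket" in pipeline[0]
            and "$sample" in pipeline[1]):
        merged = dict(pipeline[0])
        merged["$_internalUnpackBucket"] = dict(merged["$_internalUnpackBucket"])
        merged["$_internalUnpackBucket"]["sample"] = pipeline[1]["$sample"]["size"]
        return [merged] + pipeline[2:]
    return pipeline

pipeline = [
    {"$_internalUnpackBucket": {"timeField": "t", "bucketMaxSpanSeconds": 3600}},
    {"$sample": {"size": 100}},
]
print(push_sample_into_unpack(pipeline))
```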
| Comments |
| Comment by Irina Yatsenko (Inactive) [ 07/Sep/23 ] |
|
The second case, where the $sample stage follows bucket unpacking, is similar to any other pipeline that contains stages not yet supported in SBE. There is no reason for $sample to get special treatment, and doing so would complicate the code, so let's not do it. Instead, we'll add a query knob to disable time-series lowering to SBE so that customers with critical workloads can revert to the classic behaviour. |
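The ticket does not name the proposed knob, so the snippet below only shows the generic runtime server-parameter pattern via pymongo; the parameter name `disableTimeSeriesSbeLowering` is a placeholder, not a real MongoDB parameter.

```python
# Config-fragment sketch: flipping a runtime server parameter from a client.
# "disableTimeSeriesSbeLowering" is HYPOTHETICAL -- the ticket leaves the
# knob unnamed; substitute the actual parameter name once it exists.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
client.admin.command({"setParameter": 1, "disableTimeSeriesSbeLowering": True})
```

Requires a running mongod; shown only to illustrate how such a knob would be toggled at runtime.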
| Comment by Irina Yatsenko (Inactive) [ 31/Aug/23 ] |
|
If you add a $project to the pipeline, it translates into an inclusion projection in $_internalUnpackBucket. However, I was confused about how $sample plans are constructed. Essentially, $sample over a time-series collection can generate one of the following plans, depending on the ratio between the sample size and the collection size: either a TRIAL plan, or a plain scan with bucket unpacking followed by the $sample stage. The TRIAL plans aren't supported in SBE yet and will stay fully classic, but in the second case we might end up with a hybrid mode if the unpacking stage is lowered while the $sample stage isn't (it's not supported in SBE yet). If the hybrid mode regresses performance, we'd need to block it. |
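The two-way plan choice described above can be sketched as a simple heuristic. The 5% threshold below is an assumption for illustration only; the actual selection logic lives in PipelineD::buildInnerQueryExecutor and may use different criteria.

```python
# Hedged sketch of the plan choice discussed in this comment. The threshold
# is an ASSUMED value for illustration; the real logic is in the server's
# PipelineD::buildInnerQueryExecutor.

SAMPLE_TO_COLLECTION_RATIO = 0.05  # assumed cutoff, illustrative only

def choose_sample_plan(sample_size: int, num_records: int) -> str:
    if num_records > 0 and sample_size < num_records * SAMPLE_TO_COLLECTION_RATIO:
        # Small sample relative to the collection: TRIAL plan (fully classic,
        # not lowered to SBE).
        return "TRIAL"
    # Otherwise: scan + bucket unpacking, then $sample on top. If unpacking is
    # lowered to SBE but $sample is not, execution becomes hybrid.
    return "SCAN_UNPACK_THEN_SAMPLE"

print(choose_sample_plan(100, 1_000_000))  # small sample -> TRIAL
print(choose_sample_plan(100, 1_000))      # large ratio  -> scan + $sample
```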