Type: Improvement
Resolution: Unresolved
Priority: Major - P3
Affects Version/s: None
Component/s: Querying
Labels: None
Query Execution
All updates currently perform document storage validation (depth checks, field name checks, and type checks for _id) through mutablebson. We should consider changing the validation interface to accept BSONObj instead, so that pipeline-based updates, replacement updates, and (soon) $v: 2 delta updates don't need to go through mutablebson.
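To make the kind of validation being discussed concrete, here is a minimal sketch of storage validation run once over a whole document, using toy types. Everything here is illustrative: `Element`, `validateForStorage`, and `kMaxDepth` are hypothetical stand-ins, not the real server's BSONObj or mutablebson APIs, and the real field-name and depth rules have more cases than shown.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy stand-in for a BSON document: a tree of named elements. All names
// here are hypothetical; the real server types (BSONObj,
// mutablebson::Document) differ.
struct Element {
    std::string fieldName;
    std::vector<Element> children;  // empty for scalar values
};

// Illustrative nesting limit; the server's actual limit may differ.
constexpr int kMaxDepth = 100;

// Validate once over the whole post-image, instead of node by node during
// modifier application: a depth check plus simplified field-name checks.
bool validateForStorage(const Element& elem, int depth = 1) {
    if (depth > kMaxDepth)
        return false;
    // Simplified rule: at storage time, field names may not start with '$'
    // or contain '.' (the real rules have exceptions, e.g. DBRef fields).
    if (!elem.fieldName.empty() &&
        (elem.fieldName.front() == '$' ||
         elem.fieldName.find('.') != std::string::npos))
        return false;
    for (const auto& child : elem.children)
        if (!validateForStorage(child, depth + 1))
            return false;
    return true;
}
```

A single pass like this is what an interface accepting the final BSONObj would enable for pipeline-based, replacement, and $v: 2 delta updates.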
The current modifier-style update system performs storage validation node by node (rather than once at the end), so making this change efficient would require reworking that approach.
EDIT: After more reading, there's a good chance this would cause performance regressions for modifier-style updates. Doing storage validation on BSONObj in the "update with damages" path would likely require serializing the mutablebson post-image into a BSONObj just so it could be passed to the validation code. That serialized post-image would then be discarded, since WT applies the damage vector directly. Re-serializing the post-image could be a decent amount of wasted work and could lead to performance regressions, especially when a very small in-place update is applied to a very large document.
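The asymmetry behind that concern can be sketched with toy types: applying a damage vector touches only the changed bytes, while serializing the post-image for validation touches every byte of the document and the result is thrown away. `DamageEvent`, `applyDamages`, and `serializeForValidation` are illustrative names, not the real server or WiredTiger API.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical in-place damage: replace bytes at an offset (same length).
struct DamageEvent {
    std::size_t targetOffset;  // where in the stored image the change lands
    std::string newBytes;      // replacement bytes
};

// Applying damages costs O(sum of damage sizes): only changed bytes move.
std::size_t applyDamages(std::string& image,
                         const std::vector<DamageEvent>& damages) {
    std::size_t bytesTouched = 0;
    for (const auto& d : damages) {
        image.replace(d.targetOffset, d.newBytes.size(), d.newBytes);
        bytesTouched += d.newBytes.size();
    }
    return bytesTouched;
}

// Validating on a BSONObj would force a full serialization of the
// post-image, costing O(document size); the copy is discarded afterwards.
std::size_t serializeForValidation(const std::string& image) {
    std::string discarded = image;  // full copy, thrown away after validation
    return discarded.size();
}
```

For a few-byte damage to a megabyte-scale document, the discarded serialization dominates the cost of the update itself, which is the regression risk noted above.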
We should keep this in mind when triaging this ticket.