Type: Task
Resolution: Unresolved
Priority: Major - P3
Affects Version/s: None
Component/s: None
Query Execution
$changeStreamSplitLargeEvent greedily splits change events by root-level fields so that no resulting 'fragment' exceeds 16MB.
When a document has a large _id, the resulting resume token will also be large, so we may need to account for this when "packing" the fragments. In other words, fragments should not be packed tightly up to 16MB, because the batch also needs to include the post-batch resume token, which might then not fit.
If one of the root-level fields (e.g., "updateDescription", "fullDocument", or "fullDocumentBeforeChange") is close to 16MB and the resume token is large (i.e., does not fit into the 16KB 'metadata' margin), splitting such an event with $changeStreamSplitLargeEvent will fail.
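The direction of the fix can be sketched as follows. This is a minimal Python illustration using the bson package that ships with PyMongo, with a hypothetical split_event helper; it is not the server's actual C++ implementation. The idea is to subtract the encoded resume-token size from the 16MB budget before greedily packing root-level fields:

```python
import bson  # the bson package bundled with PyMongo

BSON_MAX_USER_SIZE = 16 * 1024 * 1024  # BSONObjMaxUserSize, 16MB


def split_event(event: dict, resume_token: dict) -> list[dict]:
    """Sketch only: greedily pack root-level fields into fragments whose
    budget is reduced by the resume token size, instead of packing tightly
    up to 16MB."""
    token_size = len(bson.encode(resume_token))
    budget = BSON_MAX_USER_SIZE - token_size  # reserve room for the token

    fragments: list[dict] = []
    current: dict = {}
    current_size = len(bson.encode({}))  # empty-document overhead (5 bytes)
    for key, value in event.items():
        # Approximate the field's on-wire size via a single-field document;
        # this slightly overcounts (an extra 5-byte document overhead per
        # field), which errs on the safe side for a sketch.
        field_size = len(bson.encode({key: value}))
        if current and current_size + field_size > budget:
            fragments.append(current)
            current, current_size = {}, len(bson.encode({}))
        current[key] = value
        current_size += field_size
        # Note: if a single field alone exceeds the budget, the fragment
        # still overflows; this is exactly the failure mode described above
        # for near-16MB fields combined with a large resume token.
    if current:
        fragments.append(current)
    return fragments
```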
To-Dos:
- Modify the splitting algorithm to produce fragments of at most 16MB - size(resumeToken) (see the sketch above).
- Check for the possible origin of the "Tried to create string longer than 16MB" message.
- Test the limits of the _id field sizes we support in change events with the $changeStreamSplitLargeEvent stage (see the probe sketch after this list).
- Document these limitations in the most suitable way.
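As a starting point for the testing to-do, a probe along the following lines could exercise the failure with PyMongo. The $changeStreamSplitLargeEvent stage name and the splitEvent field on fragments are real; the connection URI, namespace, and probe sizes are illustrative assumptions, not documented limits:

```python
from pymongo import MongoClient

# Assumes a local replica set (change streams require one); adjust the URI
# and namespace to the test environment.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
coll = client.test.split_probe

pipeline = [{"$changeStreamSplitLargeEvent": {}}]
with coll.watch(pipeline) as stream:
    # A ~1MB _id yields a resume token far above the 16KB metadata margin,
    # and the ~14MB payload pushes the full change event past 16MB, so the
    # stage has to split it. Per the description above, this combination is
    # expected to fail until the budget accounts for the token size.
    coll.insert_one({
        "_id": "x" * (1 * 1024 * 1024),
        "payload": "y" * (14 * 1024 * 1024),
    })
    first = next(stream)  # blocks until the insert event (or raises on failure)
    print(first.get("splitEvent"))  # e.g. {'fragment': 1, 'of': N} on fragments
```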
Related to:
- SERVER-92904: Reply size exceeds BSONObjMaxInternalSize whilst batch is within BSONObjMaxUserSize (Closed)