The latest Kafka connector uses the MongoDB Java driver 3.11 and can therefore take advantage of SERVER-35740 (the high-water-mark resume token, i.e. `postBatchResumeToken`). Under the hood, it is the resumeToken (offset) and the corresponding event (matching the change stream filter) that are published to the topic. If the connector crashes, the offset from the last published SourceRecord is used as the resumeToken upon restart.
In certain situations, such as when the connector is listening to a dormant database/collection, the saved resumeToken may have rolled off the oplog by the time the connector restarts. Saving the postBatchResumeToken reduces the likelihood of such a failure.
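A minimal sketch of the idea above (not the connector's actual code, and the token values are placeholders): on a dormant collection no events are published, so the last event's token goes stale, but the server keeps advancing the post-batch token with every (possibly empty) batch. Committing the post-batch token as the offset keeps the connector's resume point inside the oplog window.

```python
def choose_offset(last_event_token, post_batch_token):
    """Prefer the post-batch resume token when the server provides one;
    otherwise fall back to the last published event's token (the
    pre-SERVER-35740 behaviour)."""
    return post_batch_token if post_batch_token is not None else last_event_token

# Dormant collection: nothing was published since this (now stale) event...
last_event = {"_data": "aaaa"}   # hypothetical placeholder token
# ...but the latest empty batch still carried a fresher post-batch token.
post_batch = {"_data": "bbbb"}   # hypothetical placeholder token

assert choose_offset(last_event, post_batch) == post_batch  # fresh offset committed
assert choose_offset(last_event, None) == last_event        # old fallback behaviour
```

The fresher offset is what prevents the "resume token not found in the oplog" class of restart failures tracked by the linked issues.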
- is duplicated by
  - KAFKA-96 Source Connector: The resume token UUID does not exist (Closed)
  - KAFKA-93 Connector issue with high load: Query failed with error code 136 and error message 'Error in $cursor stage :: caused by :: errmsg: "CollectionScan died due to failure to restore tailable cursor position' (Closed)
- is related to
  - KAFKA-176 Improve heartbeat usability (Closed)