[KAFKA-70] Source Connector supports startAtOperationTime

Created: 26/Sep/19 | Updated: 25/Jul/22 | Resolved: 29/Mar/22

| Status: | Closed |
| Project: | Kafka Connector |
| Component/s: | None |
| Affects Version/s: | 0.2 |
| Fix Version/s: | None |
| Type: | Improvement |
| Priority: | Major - P3 |
| Reporter: | Christian Kurze (Inactive) |
| Assignee: | Unassigned |
| Resolution: | Won't Fix |
| Votes: | 5 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Case: | (copied to CRM) |
| Description |
The Source Connector should support the option startAtOperationTime to start watching a change stream at a certain point in time (assuming that the oplog is still available). Docs: https://docs.mongodb.com/manual/changeStreams/#start-time
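As a sketch of what the requested option could look like, here is a hypothetical Kafka Connect source configuration. Only connector.class, connection.uri, database, and collection are real MongoDB source connector settings; start.at.operation.time is an illustrative name for the requested option, and the host, database, collection, and timestamp values are placeholders.

```properties
# Hypothetical sketch only: the connector never shipped this option.
connector.class=com.mongodb.kafka.connect.MongoSourceConnector
connection.uri=mongodb://replica-set-host:27017
database=inventory
collection=orders
# Illustrative property name for the requested feature: a BSON timestamp
# (seconds since epoch) at which to start the change stream. Valid only
# while the oplog entries at and after that time are still available.
start.at.operation.time=1569456000
```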
| Comments |
| Comment by Robert Walters [ 29/Mar/22 ] |
The use cases behind this request center on data replication. Those customers have either moved to a different solution or are beta testing cluster-to-cluster replication, so we are not going to fix this.
| Comment by Robert Walters [ 18/Oct/21 ] |
Scheduled for 1.8 |
| Comment by Christian Kurze (Inactive) [ 16/Jun/20 ] |
ross.lawley I agree with the limitations; we discussed the approach in combination with https://jira.mongodb.org/projects/KAFKA/issues/KAFKA-51. There has to be governance around it: the oplog slice must still exist, and the replication must be able to keep up with the changes. In the discussed case, the operation timestamp is taken from a process (custom code, a backup restore, or something else), and replication of events starts after that operation timestamp. Due to the idempotent characteristics of the oplog, there can be an overlap into the past, but whether that is acceptable depends on the use case and the characteristics of the sink.
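The overlap-into-the-past argument can be illustrated with a minimal sketch. This uses a hypothetical in-memory dict as the sink and change-event-shaped dicts as operations (names like operationType, documentKey, and fullDocument follow change stream event fields, but nothing here talks to MongoDB): because each apply is idempotent, replaying an overlapping slice leaves the sink in the same final state.

```python
# Minimal sketch: replaying an overlapping slice of operations is safe
# when applying each operation is idempotent. The "sink" is a plain dict
# keyed by document key; a real sink would use upserts keyed by _id.

def apply(sink, op):
    """Apply one change-stream-style event to the sink, idempotently."""
    if op["operationType"] in ("insert", "replace"):
        sink[op["documentKey"]] = op["fullDocument"]
    elif op["operationType"] == "delete":
        sink.pop(op["documentKey"], None)

ops = [
    {"operationType": "insert", "documentKey": 1, "fullDocument": {"x": 1}},
    {"operationType": "replace", "documentKey": 1, "fullDocument": {"x": 2}},
    {"operationType": "delete", "documentKey": 2, "fullDocument": None},
]

once = {}
for op in ops:          # slice applied exactly once
    apply(once, op)

twice = {}
for op in ops + ops:    # same slice replayed with full overlap
    apply(twice, op)

assert once == twice    # the overlap changed nothing
```

This is why an overlap is tolerable for replace/delete-style events; sinks that are not idempotent (e.g. append-only logs) would instead see duplicates, which is the "depends on the characteristics of the sink" caveat above.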