[KAFKA-247] Recreate change stream from the point of failure for event > 16 MB Created: 12/Aug/21 Updated: 01/Sep/23 Resolved: 25/Jul/23 |
|
| Status: | Closed |
| Project: | Kafka Connector |
| Component/s: | Source |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Unknown |
| Reporter: | Dhruvang Makadia | Assignee: | Unassigned |
| Resolution: | Won't Fix | Votes: | 0 |
| Labels: | external-user | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Description |
| Comments |
| Comment by Robert Walters [ 25/Jul/23 ] |
|
Handling of large messages will be implemented with KAFKA-381. |
| Comment by Ross Lawley [ 25/Jul/23 ] |
|
I think this ticket should be closed as "Won't fix" as we cannot resume a change stream from the point of failure. Recommend directing users to KAFKA-381 |
| Comment by Ross Lawley [ 17/Aug/21 ] |
|
Hi dhruvangmakadia1@gmail.com, The last seen resume token is stored as the offset, so resiliency is already in place for other events: the connector will continue from the last seen event. This particular exception, however, is non-resumable, because the last consumed event precedes the too-large event, so a change stream restarted at the last seen (processed) event would hit the same error again. The challenge is therefore to capture the "message too large" error and process it differently from other errors (essentially, skip that event). Whether that is acceptable depends on the user's configuration, since skipping the event means data loss. The only way to guarantee no data loss would be to restart and go through the copy data process again. Ross |
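
The resume-token mechanics described above can be sketched as a small simulation. This is not the connector's actual code; all names here (`read_stream`, `run_connector`, `MessageTooLargeError`) are hypothetical, and the in-memory token list stands in for a real change stream. It shows why restarting from the stored offset replays the same oversized event, and what the "skip it" trade-off looks like:

```python
# Hypothetical simulation (not MongoDB Kafka Connector code) of why a
# "message too large" change-stream error is non-resumable: the stored
# offset (resume token) points at the event *before* the oversized one,
# so restarting from that token replays the same failure.

MAX_EVENT_BYTES = 16 * 1024 * 1024  # MongoDB's 16 MB BSON document limit


class MessageTooLargeError(Exception):
    """Raised when an event exceeds the BSON size limit; carries its token."""


def read_stream(events, resume_token):
    """Yield (token, payload) pairs after `resume_token`; fail on oversized events."""
    for token, payload in events:
        if resume_token is not None and token <= resume_token:
            continue  # already consumed before the restart
        if len(payload) > MAX_EVENT_BYTES:
            raise MessageTooLargeError(token)
        yield token, payload


def run_connector(events, offset, skip_oversized=False):
    """Consume events, persisting the last seen token as the offset.

    With skip_oversized=True the oversized event's token is stored as the
    offset anyway, which skips the event (data loss) but lets the stream
    resume past it. With skip_oversized=False, every restart from the
    stored offset hits the same error again.
    """
    delivered = []
    while True:
        try:
            for token, payload in read_stream(events, offset):
                delivered.append(payload)
                offset = token  # last seen resume token stored as offset
            return delivered, offset
        except MessageTooLargeError as exc:
            if not skip_oversized:
                raise  # resuming from `offset` would see the same event
            offset = exc.args[0]  # advance past the bad event: data loss
```

With a three-event stream whose middle event is oversized, `skip_oversized=False` raises on every restart, while `skip_oversized=True` delivers the first and third events, silently dropping the second.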
| Comment by Dhruvang Makadia [ 16/Aug/21 ] |
|
Hi Ross Lawley, Although I did the investigation and filed this ticket specifically for the large-event exception, I wonder whether a similar improvement could be made for other exceptions that break the change stream as well. Ideally, we would like no data loss between Kafka and the updates to MongoDB. |
| Comment by Ross Lawley [ 16/Aug/21 ] |
|
Hi dhruvangmakadia1@gmail.com, Thanks for the ticket. This is something we can look into improving. Unfortunately, until … All the best, Ross |