[KAFKA-374] Implement an error handler to address specific scenarios Created: 12/Jun/23 Updated: 22/Jan/24 |
|
| Status: | Backlog |
| Project: | Kafka Connector |
| Component/s: | None |
| Affects Version/s: | 1.12.0 |
| Fix Version/s: | 1.12.0 |
| Type: | New Feature | Priority: | Unknown |
| Reporter: | Robert Walters | Assignee: | Unassigned |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Quarter: | FY24Q3 |
| Description |
|
There is currently a mongo.errors.tolerance flag, but it does not behave the way some customers expect. There is a need to instruct the connector to fail under certain circumstances and to proceed under others. For example, it should be possible to fail the connector when network connectivity to MongoDB is lost, while routing malformed records to the DLQ. This ability to fine-tune error-handling behavior is the feature request.
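As a point of reference, the existing configuration surface treats all tolerated errors the same way. A minimal sketch of a sink configuration using today's options (connector name, topic, connection URI, database, collection, and DLQ topic are placeholder assumptions) is shown below; the request is that connection/network failures would fail the task regardless of these tolerance settings:

```properties
# Hypothetical sink configuration; names and hosts are placeholders.
name=mongo-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
topics=orders
connection.uri=mongodb://mongodb:27017
database=test
collection=orders

# Standard Kafka Connect error handling: tolerate record-level errors
# (e.g. malformed data) and route failed records to a dead letter queue topic.
errors.tolerance=all
errors.deadletterqueue.topic.name=orders.dlq
errors.deadletterqueue.context.headers.enable=true

# Connector-specific override referenced in the description; today it applies
# to tolerated errors generally rather than distinguishing error categories.
mongo.errors.tolerance=all
mongo.errors.log.enable=true
```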
|
| Comments |
| Comment by Ross Lawley [ 25/Jul/23 ] |
|
Transient errors are already retryable by the driver thanks to retryable writes. Retrying bulk operations from within the connector itself was removed in an earlier change. Retrying from within the connector was broken because the connector does not know the internals of the bulk operation (including any batching) that may have occurred, and unlike retryable writes it does not differentiate between transient and non-transient errors. Note that all errors that occur are reported to the DLQ, including any "transient" errors that still fail on retry. Once the client-side operation timeout (JAVA-3828) is implemented, that should be the timeout used to set the limits of retryability. So I'm not sure there should be any change to the connector here and think it should be closed as "Won't fix". |
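For context, driver-level retryable writes are controlled through the connection string the connector uses; a minimal sketch (host and replica set names are placeholder assumptions):

```properties
# retryWrites=true (the default in recent drivers) has the driver retry a
# transient write failure once before the error surfaces to the connector.
# Retryable writes require a replica set or sharded cluster deployment.
connection.uri=mongodb://mongodb-0:27017,mongodb-1:27017/?replicaSet=rs0&retryWrites=true&w=majority
```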