[KAFKA-78] Publish error messages to a topic Created: 24/Dec/19  Updated: 28/Oct/23  Resolved: 11/Sep/20

Status: Closed
Project: Kafka Connector
Component/s: Source
Affects Version/s: None
Fix Version/s: 1.3.0

Type: Improvement Priority: Major - P3
Reporter: Davenson Lombard Assignee: Ross Lawley
Resolution: Fixed Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Documented
Duplicate
is duplicated by KAFKA-149 After applying the update 'immutable'... Closed
Epic Link: Error Handling
Case:
Documentation Changes: Needed
Documentation Changes Summary:

Added support for the following settings:

errors.tolerance = [none, all]
Controls the behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.

errors.log.enable=[true,false]
If true, write each error, along with details of the failed operation and the problematic record, to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.

errors.deadletterqueue.topic.name="someTopic"
The name of the topic to use as the dead letter queue. Stops poison messages when using schemas: any message that cannot be processed is written to the specified topic as extended JSON. By default, messages are not written to the dead letter queue. Also requires `errors.tolerance=all`.
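
For illustration only, the settings above might be combined in a source connector configuration like the sketch below. This is not taken from the ticket: the connector class, connection URI, database, collection, and topic names are placeholder assumptions.

    # Hypothetical MongoDB source connector properties illustrating the new error-handling settings.
    name=mongo-source-example
    connector.class=com.mongodb.kafka.connect.MongoSourceConnector
    connection.uri=mongodb://localhost:27017
    database=exampleDb
    collection=exampleCollection

    # Skip over problematic records instead of failing the task immediately.
    errors.tolerance=all
    # Write each error and the failing record to the Connect application log.
    errors.log.enable=true
    # Write records that cannot be processed to this topic as extended JSON (requires errors.tolerance=all).
    errors.deadletterqueue.topic.name=example.deadletterqueue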


 Description   

When the connector fails to process events (conversion failures, retryable errors, etc.), we should have the ability to publish a message to an error topic with sufficient detail to identify a potential series of missed events.



 Comments   
Comment by Githook User [ 11/Sep/20 ]

Author:

{'name': 'Ross Lawley', 'email': 'ross.lawley@gmail.com', 'username': 'rozza'}

Message: Added dead letter queue support for the source connector

KAFKA-78
Branch: master
https://github.com/mongodb/mongo-kafka/commit/d68d7da45c98148bed2fef00ef2cf5ba64031d3a

Comment by Ross Lawley [ 08/Sep/20 ]

PR: https://github.com/mongodb/mongo-kafka/pull/36

Comment by Rajaramesh Yaramati [ 21/Apr/20 ]

I think it is the other way around: the dead letter queue option is available for sink connectors, not source. https://kafka.apache.org/documentation/#errors.deadletterqueue.topic.name

Comment by Davenson Lombard [ 15/Jan/20 ]

Nevermind about my previous comment. The dead letter queue is available for source connectors, not sink.

Comment by Davenson Lombard [ 09/Jan/20 ]

ross.lawley, I completely agree with you.

I don't have much experience with Kafka itself, and most of my knowledge comes from the Confluent blog posts and documentation. While looking for a potential solution to this customer request, I found that, since Kafka 2.0, Kafka Connect includes an error-handling mechanism. One can take advantage of it to write the messages that can't be processed to a "dead letter queue".

More details here: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/

If I read this correctly, by setting errors.tolerance and errors.deadletterqueue.topic.name in the connector config, one can have the failed messages sent to the dead letter queue.

If this works as described, I believe it would be sufficient to have the errors and the messages that we failed to process sent directly to a topic with a name defined by the customer.
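
For reference, a minimal sketch of the framework-level settings the blog post describes (these apply to sink connectors; the topic name and replication factor below are placeholder values):

    # Hypothetical Kafka Connect sink connector error-handling properties (Kafka 2.0+).
    # Skip over problematic records instead of failing the task.
    errors.tolerance=all
    # Log error context to the Connect worker log, including the failing message.
    errors.log.enable=true
    errors.log.include.messages=true
    # Route failed records to a dead letter queue topic (sink connectors only).
    errors.deadletterqueue.topic.name=example-dlq
    errors.deadletterqueue.topic.replication.factor=1
    errors.deadletterqueue.context.headers.enable=true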

Comment by Ross Lawley [ 06/Jan/20 ]

I'm not sure exactly what should be published and/or whether this is a pattern that other Kafka connectors follow. It would be good to follow any existing conventions rather than introducing new paradigms.
