[KAFKA-177] MongoDB Sink Connector deadletterqueuetopic issue Created: 26/Nov/20 Updated: 27/Oct/23 Resolved: 27/Nov/20 |
| Status: | Closed |
| Project: | Kafka Connector |
| Component/s: | Documentation, Sink |
| Affects Version/s: | 1.3.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Adam Cowin | Assignee: | Ross Lawley |
| Resolution: | Works as Designed | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Dev |
| Issue Links: | |
| Case: | (copied to CRM) |
| Description |
Our development team has found a defect in the Kafka Connect MongoDB Sink (https://docs.mongodb.com/kafka-connector/master/kafka-sink). It appears that support for the dead letter queue has been removed, and we cannot get the feature working in the latest version as described in the MongoDB documentation (we also confirmed the code is missing from the mainline in Git). According to this document, the sink connector is supposed to support a dead letter queue topic (https://docs.mongodb.com/kafka-connector/master/kafka-sink-properties#dead-letter-queue-configuration-settings): errors.deadletterqueue.topic.name=example.deadletterqueue. However, when I publish a message that violates a MongoDB unique index, I do not see the message in the dead letter queue topic. I tried both the console consumer and a Spring Boot application, with no luck. I then went to GitHub (master) to check the source code. It seems the dead letter queue topic is not part of MongoSinkTopicConfig.java (https://github.com/mongodb/mongo-kafka/blob/master/src/main/java/com/mongodb/kafka/connect/sink/MongoSinkTopicConfig.java), i.e. errors.deadletterqueue.topic.name is not part of the configuration. On the other hand, that property is available in MongoSourceConfig.java (https://github.com/mongodb/mongo-kafka/blob/master/src/main/java/com/mongodb/kafka/connect/source/MongoSourceConfig.java). |
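For reference, the settings in question are specified in the sink connector's configuration but are read by the Kafka Connect framework rather than the connector code. A minimal sketch of such a config, with connection details and names that are purely hypothetical:

```properties
# Hypothetical MongoDB sink connector config. The errors.* settings below are
# documented by Kafka Connect itself and are handled by the framework, not by
# the connector implementation.
name=mongo-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
topics=example.topic
connection.uri=mongodb://localhost:27017
database=exampleDb
collection=exampleCollection
errors.tolerance=all
errors.log.enable=true
errors.deadletterqueue.topic.name=example.deadletterqueue
errors.deadletterqueue.context.headers.enable=true
# Use 1 on a single-broker development cluster; the default is 3.
errors.deadletterqueue.topic.replication.factor=1
```

Note that, as discussed in the comments on this ticket, the framework routes a record to the dead letter queue when conversion or transformation of the record fails; an error raised while writing to MongoDB (such as a unique index violation) is not captured this way.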
| Comments |
| Comment by Ross Lawley [ 27/Nov/20 ] |
| Hi ming.li2@td.com, The dead letter queue is really a feature of Kafka Connect and not of the connector implementations. It's an abstraction at a higher level and as such is not controlled by the connectors. Ross |
| Comment by Ming Li [ 27/Nov/20 ] |
| Hi Ross, Thank you for the explanation. From the table that you provided, does it mean that neither the source nor the sink connector can publish messages to the dead letter queue themselves, due to the way Kafka Connect is designed? Regards, Ming Li |
| Comment by Ross Lawley [ 27/Nov/20 ] |
| Thanks for the ticket. The dead letter queue functionality is provided by Kafka Connect itself, not by the individual connectors. See: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/ The documentation contains a reference to the main Kafka dead letter queue settings for users. The following table shows what Kafka Connect supports with regard to dead letter queues.
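The framework-level behaviour described here can be illustrated with a toy sketch (this is not Kafka Connect code; the function and names are invented for illustration): the framework wraps each conversion step and, on failure, either fails the task (errors.tolerance=none, the default) or skips the record and routes it to the dead letter queue (errors.tolerance=all with a DLQ topic configured).

```python
# Toy illustration of Kafka Connect's framework-level error handling.
# Not actual Connect code; names are invented for this sketch.

def process_records(records, convert, tolerance="none", dlq=None):
    """Run each record through a converter, applying errors.tolerance semantics."""
    delivered = []
    for record in records:
        try:
            delivered.append(convert(record))
        except Exception:
            if tolerance == "none":
                raise              # task fails, as with errors.tolerance=none
            if dlq is not None:
                dlq.append(record) # routed by the framework, not the connector
    return delivered

dlq = []
ok = process_records(["1", "2", "oops", "3"], int, tolerance="all", dlq=dlq)
print(ok)   # [1, 2, 3]
print(dlq)  # ['oops']
```

The key point of the sketch is that the connector's convert step never sees the dead letter queue at all; the surrounding framework decides what happens to a failed record.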
The sink connector cannot publish messages to the dead letter queue itself, but it does respect the errors.log.enable and errors.tolerance settings. Note: for the source connector, we added dead-letter-queue-like functionality that mimics the sink behaviour; it is used when change stream messages cannot be converted to the configured schema. For future reference, for questions like these, I wanted to give you some resources to get an answer more quickly:
Just in case you have already opened a support case and are not receiving sufficient help, please let me know and I can facilitate escalating your issue. Thank you! Ross |