[KAFKA-305] Duplicate Key Errors Created: 04/Apr/22  Updated: 27/Oct/23  Resolved: 20/Apr/22

Status: Closed
Project: Kafka Connector
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Question Priority: Major - P3
Reporter: Juan Soto (Inactive) Assignee: Ross Lawley
Resolution: Works as Designed Votes: 0
Labels: internal-user
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Hello team!

I am testing whether Retryable Writes work on the new version. I have a Docker Compose setup with a PSA (Primary-Secondary-Arbiter) replica set. I shut down the secondary and then try to write to two topics.

I get the following error:

com.mongodb.MongoBulkWriteException: Bulk write operation error on server mongo2:27017. Write errors: [BulkWriteError{index=0, code=11000, message='E11000 duplicate key error collection: kafka.schema index: _id_ dup key: { _id: "113" }', details={}}].
connect            | com.mongodb.kafka.connect.sink.dlq.WriteException: v=1, code=11000, message=E11000 duplicate key error collection: kafka.schema index: _id_ dup key: { _id: "113" }, details={}

Based on https://www.mongodb.com/docs/manual/core/retryable-writes/#duplicate-key-errors-on-upsert, I assumed that the driver was going to retry the write. I am using MongoDB 4.4 (MongoDB shell version v4.4.13).
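
For reference, this is the kind of retryable upsert I had in mind from the linked docs (a minimal sketch with the plain Java driver, not the connector; the connection string, database, and field names here are just illustrative, not my actual setup):

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class RetryableUpsertSketch {
    public static void main(String[] args) {
        // retryWrites is on by default in recent drivers; spelled out here for clarity.
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(
                        "mongodb://mongo1:27017,mongo2:27017/?replicaSet=rs0&retryWrites=true"))
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            MongoCollection<Document> collection =
                    client.getDatabase("kafka").getCollection("schema");

            // An update with upsert(true) is the shape of write that the retryable-writes
            // docs discuss under "duplicate key errors on upsert".
            collection.updateOne(
                    Filters.eq("_id", "113"),
                    Updates.set("payload", "example"),
                    new UpdateOptions().upsert(true));
        }
    }
}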

Could you help me?

Should I handle this via the DLQ, or will the driver retry it?

Regards,

Juan

 



 Comments   
Comment by Ross Lawley [ 20/Apr/22 ]

This isn't a Kafka issue but rather a combination of the cluster setup (PSA) and how the server handles this error scenario, including which error labels it returns to the driver.
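
To illustrate where those error labels are visible on the driver side (a minimal sketch with the plain Java driver, not the connector; the URI and document values are made up), the retry logic keys on whether the server attached the RetryableWriteError label to the error:

import com.mongodb.MongoException;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class ErrorLabelSketch {
    public static void main(String[] args) {
        try (MongoClient client =
                MongoClients.create("mongodb://mongo1:27017,mongo2:27017/?replicaSet=rs0")) {
            MongoCollection<Document> collection =
                    client.getDatabase("kafka").getCollection("schema");
            try {
                collection.updateOne(
                        Filters.eq("_id", "113"),
                        Updates.set("payload", "example"),
                        new UpdateOptions().upsert(true));
            } catch (MongoException e) {
                // Errors the server marks with the RetryableWriteError label are retried
                // internally; anything else is surfaced to the application as-is.
                System.out.println("Error labels: " + e.getErrorLabels());
                System.out.println("Has RetryableWriteError: " + e.hasErrorLabel("RetryableWriteError"));
            }
        }
    }
}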

Ross

Comment by PM Bot [ 20/Apr/22 ]

There hasn't been any recent activity on this ticket, so we're resolving it. Thanks for reaching out! Please feel free to comment on this if you're able to provide more information.

Comment by Ross Lawley [ 05/Apr/22 ]

Hi juan.soto,

In general we ask users to use the community forum for usage questions. However,
I want to determine whether there is a bug in the connector or driver, or whether an upsert can actually lead to a duplicate key exception.

To clarify, an update with upsert: true should not lead to a duplicate key exception, and whether the write is retryable shouldn't change how the server handles the operation.

So to understand why the connector / underlying Java driver is throwing a duplicate key exception, we need to confirm that an update operation was actually used.
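
For illustration only, here are the two shapes of bulk write model the sink could end up issuing (document values are made up); the first can raise E11000 on an existing _id, the second should not:

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.InsertOneModel;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.ReplaceOptions;
import com.mongodb.client.model.WriteModel;
import org.bson.Document;

import java.util.List;

public class WriteModelSketch {
    public static List<WriteModel<Document>> models() {
        Document doc = new Document("_id", "113").append("payload", "example");

        // A plain insert fails with E11000 if a document with _id "113" already exists.
        WriteModel<Document> insert = new InsertOneModel<>(doc);

        // A replace with upsert(true) overwrites the existing document instead of failing,
        // which is why an upsert-style write should not normally produce a duplicate key error.
        WriteModel<Document> upsert = new ReplaceOneModel<>(
                Filters.eq("_id", "113"), doc, new ReplaceOptions().upsert(true));

        return List.of(insert, upsert);
    }
}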

Can you replicate this scenario? If so, what are the Kafka settings / connector configuration?

Ross
