[SERVER-72028] E11000 duplicate key error collection: <col name> index: _id_ dup key: { _id: "xxxxxx 2022-12-10" }', details={}} Created: 11/Dec/22 Updated: 19/Dec/22 Resolved: 19/Dec/22 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | 5.0.14 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Witold Kupś | Assignee: | Yuan Fang |
| Resolution: | Done | Votes: | 0 |
| Labels: | Bug | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Operating System: | ALL |
| Participants: | |
| Description |
|
Hello,
(`RequestCountReport` has the following structure)
...which is translated to the following query to MongoDB
It sometimes produces an error like this:
I initially had a single-op write (one per entry) and the error occurred there as well, but then I could at least retry the failed entry. Now that it is a bulk write, I am not sure how to retry: some operations may already have been applied, and I assume the bulk does not run in a transaction. Nevertheless, this is a bug in my opinion. |
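Since an unordered bulk is not transactional, one application-side recovery strategy is to inspect the per-operation errors and retry only the operations that failed with a duplicate-key error (code 11000). The sketch below is a pure-Python simulation with hypothetical names: `BulkWriteFailure` stands in for a driver exception such as pymongo's `BulkWriteError`, whose details carry a `writeErrors` list with the failing operation's `index` and `code`.

```python
DUPLICATE_KEY = 11000  # MongoDB error code behind "E11000 duplicate key error"

class BulkWriteFailure(Exception):
    """Simplified stand-in for a driver's bulk-write exception.

    Real drivers expose per-operation failures (e.g. pymongo's
    BulkWriteError.details["writeErrors"]); here `write_errors`
    is a plain list of {"index": ..., "code": ...} dicts.
    """
    def __init__(self, write_errors):
        super().__init__("bulk write failed")
        self.write_errors = write_errors

def bulk_upsert_with_retry(execute_bulk, ops, max_retries=3):
    """Run a bulk of upserts, retrying only the operations that
    lost the insert race (duplicate key). On retry the conflicting
    document exists, so the upsert takes the update path instead.
    Any non-retryable error is re-raised to the caller.
    """
    pending = list(ops)
    for _ in range(max_retries + 1):
        try:
            execute_bulk(pending)
            return
        except BulkWriteFailure as exc:
            retryable = [e["index"] for e in exc.write_errors
                         if e["code"] == DUPLICATE_KEY]
            if len(retryable) != len(exc.write_errors):
                raise  # a non-duplicate-key error is present
            pending = [pending[i] for i in retryable]
    raise RuntimeError("bulk upsert still failing after retries")
```

With a real driver, `execute_bulk` would submit the pending operations as an unordered bulk write and translate the driver's exception into the per-operation error list shown here.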
| Comments |
| Comment by Yuan Fang [ 19/Dec/22 ] |
|
Thank you for reporting this issue. My investigation leads me to believe that this happens when two updates arrive concurrently with upsert:true and neither finds a match, so both attempt to insert a new document, and one of them violates the unique index on the query predicate.
While the server can automatically retry upserts on a DuplicateKey error in some circumstances, it cannot do so in all of them. Even though this is expected behavior, from a user perspective I understand the need to handle partial failures in bulk updates, so it is worth considering retries on the application side. For a solution that suits your use case, we encourage you to discuss it on the MongoDB Developer Community Forums. If the discussion there leads you to suspect a bug in the MongoDB server, please revisit this ticket with more information and we will reopen it and investigate. Regards, |
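The race described in this comment can be sketched as a pure-Python simulation (not driver code; `FakeCollection` and `DuplicateKeyError` are stand-ins): the first upsert misses its match, a concurrent writer inserts the same key first, the unique index rejects the loser, and an application-level retry then finds the winner's document and takes the update path.

```python
class DuplicateKeyError(Exception):
    """Stand-in for the driver's E11000 duplicate-key exception."""

class FakeCollection:
    """Minimal unique-_id store mimicking upsert semantics."""
    def __init__(self):
        self.docs = {}

    def upsert(self, _id, update, simulate_race=False):
        if _id not in self.docs:
            if simulate_race:
                # A concurrent writer inserts between our match
                # check and our insert: the reported scenario.
                self.docs[_id] = {}
                raise DuplicateKeyError(_id)
            self.docs[_id] = dict(update)
        else:
            self.docs[_id].update(update)

def upsert_with_retry(col, _id, update, retries=3):
    """Application-side retry: after a DuplicateKey error the
    document now exists, so the retried upsert updates it."""
    for attempt in range(retries + 1):
        try:
            # Lose the race on the first attempt only.
            col.upsert(_id, update, simulate_race=(attempt == 0))
            return
        except DuplicateKeyError:
            continue
    raise RuntimeError("upsert failed after retries")
```

A retry loop like this around each single-op write is what the comment above suggests doing on the application end when the server cannot retry the upsert itself.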