[SERVER-22607] Duplicate key error on replace with upsert Created: 13/Feb/16 Updated: 18/May/16 Resolved: 13/Feb/16 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Write Ops |
| Affects Version/s: | 3.0.0, 3.2.0, 3.2.1 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Critical - P2 |
| Reporter: | Andrew Cuga [X] | Assignee: | Unassigned |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Operating System: | ALL |
| Steps To Reproduce: | Create a unique index on the collection; build the filter document locating the record to replace and the replacement document; call replaceOne with upsert(true); on a small fraction of writes, an E11000 duplicate key error bubbles up from mongo. |
| Participants: |
| Description |
|
We're getting a duplicate key error exception in a situation where one should never occur. It happens infrequently, but consistently. Our collection has a unique key defined on a combination of fields. We then perform a `replaceOne()` with `upsert(true)`. Generally this works fine, but a tiny percentage of the writes result in an error being thrown by mongo stating: `com.mongodb.MongoWriteException: E11000 duplicate key error index`

Our mongo instance is a single installation with no clustering, running the latest driver and server (3.2.1). Our client uses the default connection pool and sends hundreds of requests per second to the database. That said, our requests are unique in that each one writes a different record; it is not the case that multiple threads are sending a record with the same complex key at the same time.

It's worth noting that when this error occurs and we re-upsert the exact same record, it has succeeded every time. We're using the Java driver, but research has shown other people have run into this same error on StackOverflow:
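For illustration, here is a minimal sketch of the operation described above, assuming the MongoDB Java sync driver 3.2.x; the database, collection, index, and field names are placeholders, and a single call will not reliably reproduce the error, which only shows up intermittently under concurrent load:

```java
import com.mongodb.MongoClient;
import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import com.mongodb.client.model.UpdateOptions;
import org.bson.Document;

public class UpsertRepro {
    public static void main(String[] args) {
        MongoClient client = new MongoClient();
        MongoCollection<Document> coll =
                client.getDatabase("test").getCollection("records");

        // Unique index on the combination of fields (field names are illustrative).
        coll.createIndex(Indexes.ascending("accountId", "recordKey"),
                new IndexOptions().unique(true));

        // Replacement document; the filter below matches the same compound key.
        Document replacement = new Document("accountId", 42)
                .append("recordKey", "abc")
                .append("payload", "latest value");

        try {
            // replaceOne with upsert: inserts when no document matches the filter,
            // otherwise replaces the matching document.
            coll.replaceOne(
                    Filters.and(Filters.eq("accountId", 42), Filters.eq("recordKey", "abc")),
                    replacement,
                    new UpdateOptions().upsert(true));
        } catch (MongoWriteException e) {
            // Intermittently surfaces as: E11000 duplicate key error index ...
            System.err.println("Write failed: " + e.getError().getMessage());
        }
    }
}
```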
| Comments |
| Comment by Ramon Fernandez Marina [ 18/May/16 ] |
|
Apologies for the radio silence TheAndruu. This is to let you know that I've opened a ticket in the DOCS project to have the documentation improved.

Note also that the DOCS project is open to the public, so please feel free to open tickets there when you see areas of the documentation that can be improved.

Thanks,
| Comment by Andrew Cuga [X] [ 13/Feb/16 ] |
|
Here's one example, the replaceOne upsert behavior documentation:

It mentions no warning about needing unique indices to ensure uniqueness. This error is not occurring due to a race condition where the same record is written by two threads at the same time; it's occurring when a record has already been persisted and fully propagated.

Specifically, this line from the documentation is misleading, since it is not a guarantee that the operation will succeed even when this condition is met, and there are no warnings anywhere on the page:
| Comment by Ramon Fernandez Marina [ 13/Feb/16 ] |
|
Digging through the documentation I can see that this behavior has been documented since 2013 (see this commit): "update() with upsert set to true may insert duplicate documents unless one uses unique indexes." The same behavior applies to findAndModify().

That being said, if you've found places in our documentation that you think may lead users to believe that certain operations are atomic when in fact they're not, could you please link them here? I definitely see value in reviewing and improving them for other users.

Thanks,
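To make the documented caveat concrete, here is a small sketch of the race it describes, assuming the Java sync driver; the collection has no unique index, so two concurrent upserts that each find no matching document can both insert. Collection and field names are illustrative only:

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOptions;
import org.bson.Document;

public class UpsertRace {
    public static void main(String[] args) throws InterruptedException {
        MongoCollection<Document> coll = new MongoClient()
                .getDatabase("test").getCollection("race");
        coll.drop();  // no unique index on "key"

        // Two threads upsert the same key at (nearly) the same time. Each query
        // phase can find no match before either insert lands, so both insert.
        Runnable upsert = () -> coll.replaceOne(
                Filters.eq("key", "abc"),
                new Document("key", "abc").append("value", Thread.currentThread().getName()),
                new UpdateOptions().upsert(true));

        Thread t1 = new Thread(upsert);
        Thread t2 = new Thread(upsert);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Occasionally prints 2: the duplicate that a unique index would have rejected.
        System.out.println("documents with key=abc: " + coll.count(Filters.eq("key", "abc")));
    }
}
```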
| Comment by Andrew Cuga [X] [ 13/Feb/16 ] |
|
Understandable if there's already a ticket open on this issue, though I think it's unfair to say this behavior is expected. According to the spec, this condition should never occur. Luckily we were overly cautious and used unique indices as an additional layer of protection; if not, we'd be running into serious issues as a result of mongo behaving contrary to expectations.

There are at least 12 issues recorded in JIRA on this problem, dating back to July 2013. Is there a chance this will either get fixed or at least documented better to explain that it's not an atomic operation? It's a serious flaw to let the upsert operation appear atomic when it would in fact be inserting duplicate records if not for a unique index.
| Comment by Ramon Fernandez Marina [ 13/Feb/16 ] |
|
Hi TheAndruu, the behavior you're seeing is expected, and your solution (retrying at the application level) is the correct one. Regards, |
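For completeness, a minimal sketch of the application-level retry mentioned above, assuming the Java sync driver; the helper name and the single-retry policy are illustrative, not an official recommendation:

```java
import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.UpdateOptions;
import org.bson.Document;
import org.bson.conversions.Bson;

public final class Upserts {
    private static final int DUPLICATE_KEY = 11000;

    /**
     * Replace-with-upsert that retries once when a concurrent upsert wins the
     * race and the unique index rejects our insert with E11000.
     */
    public static void replaceWithRetry(MongoCollection<Document> coll,
                                        Bson filter,
                                        Document replacement) {
        for (int attempt = 0; attempt < 2; attempt++) {
            try {
                coll.replaceOne(filter, replacement, new UpdateOptions().upsert(true));
                return;
            } catch (MongoWriteException e) {
                // Only retry duplicate key errors; the second attempt will match
                // the document the other writer inserted and replace it instead.
                if (e.getError().getCode() != DUPLICATE_KEY || attempt == 1) {
                    throw e;
                }
            }
        }
    }
}
```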