[SERVER-22988] MongoDB Concurrency Lock Failed for Multithread Insert/Update Created: 07/Mar/16 Updated: 10/Mar/16 Resolved: 10/Mar/16 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Concurrency, WiredTiger |
| Affects Version/s: | 3.2.1 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Mahbubur Rub Talha | Assignee: | Unassigned |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Operating System: | ALL |
| Participants: |
| Description |
|
I ran some tests against MongoDB from a multithreaded Java program to check how MongoDB ensures concurrent access to a document. My sample program is meant to ensure that only one copy of each document exists in the collection: it inserts a document if it does not already exist, and otherwise updates the document's frequency. After several runs I found that some documents exist with the same value. MongoDB Collection
Index:
Here the "doc" field is used for searching: if a matching document is found, its "doc_freq" is updated; if nothing is found, a new document is inserted. I inserted 100,000 entries that all have the same doc value, using 200 threads, and found more than one entry in the collection with the same doc value. I used Java MongoDB driver version 3.0.4 and MongoDB version 3.2.1 on OS X Yosemite with the WiredTiger engine. Please find my test code at https://talha13@bitbucket.org/talha13/duplicatetest.git. To reproduce this issue, please run it several times and make sure the doc collection is empty before each run. |
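The race the report describes can be sketched in plain Java, without a MongoDB server. This is a minimal illustration (class and method names are invented for the sketch, and an in-memory list stands in for the collection): the "find, then insert-or-update" steps of an upsert are two separate operations, and without a unique index two threads can both pass the "find" step before either inserts. The second half shows the analogous atomic check-and-modify, which is what a unique index (plus a retry) effectively gives you.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch only (not the reporter's actual test): simulates in plain Java
// the non-atomic "find, then insert-or-update" pattern an upsert performs
// when no unique index serializes concurrent writers.
public class UpsertRaceSketch {

    // Racy check-then-act: the gap between contains() and add() lets two
    // threads both conclude the document is absent and both insert it.
    static int racyInsertCount(int threads) throws InterruptedException {
        List<String> collection = Collections.synchronizedList(new ArrayList<>());
        runThreads(threads, () -> {
            if (!collection.contains("doc-1")) { // "find"
                collection.add("doc-1");         // "insert" (race window here)
            }
        });
        return Collections.frequency(collection, "doc-1"); // may exceed 1
    }

    // Atomic check-and-modify: merge() performs the lookup and the update
    // as one step, analogous to an upsert backed by a unique index.
    static int atomicDocFreq(int threads) throws InterruptedException {
        ConcurrentHashMap<String, Integer> indexed = new ConcurrentHashMap<>();
        runThreads(threads, () -> indexed.merge("doc-1", 1, Integer::sum));
        return indexed.get("doc-1"); // always equals `threads`
    }

    static void runThreads(int n, Runnable task) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(n);
        for (int i = 0; i < n; i++) pool.submit(task);
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("racy copies of doc-1: " + racyInsertCount(200));
        System.out.println("atomic doc_freq:      " + atomicDocFreq(200));
    }
}
```

The racy count is nondeterministic (it may well print 1 on a given run); the atomic count is always equal to the number of writer threads.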
| Comments |
| Comment by Ramon Fernandez Marina [ 10/Mar/16 ] |
|
talha13@gmail.com, modifying the current behavior is being discussed in

In cases where the server can describe a serial order over updates to the index region / index / index entry being modified, it could declare a winner. Determining when such a serialization actually exists or could exist requires further study.

Feel free to add your voice to

Regards, |
| Comment by Mahbubur Rub Talha [ 07/Mar/16 ] |
|
Yes, I bypassed this issue by creating a unique index, but I'm not happy with this solution. I think MongoDB should handle this issue. |
| Comment by Scott Hernandez (Inactive) [ 07/Mar/16 ] |
|
This looks like a duplicate of

In short, you need a unique index if you want to ensure that the update + insert parts of your update do not produce duplicates (see the docs and your use of upsert:true). |
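The unique-index workaround still raises a duplicate-key error on the thread that loses the race, so the application retries the operation as an update. A hedged sketch of that retry loop, again in plain Java with invented stand-in names (this is not the MongoDB driver API; a `ConcurrentHashMap` plays the role of a collection with a unique index on "doc"):

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the retry pattern a unique index enables. All names here are
// illustrative stand-ins, not the real driver API.
public class UpsertRetrySketch {

    // Stand-in for the server's duplicate-key error.
    static class DuplicateKeyException extends RuntimeException {}

    // In-memory stand-in for a collection with a unique index on "doc",
    // mapping doc value -> doc_freq.
    private final ConcurrentHashMap<String, Integer> store = new ConcurrentHashMap<>();

    // Insert fails if the key already exists, like a unique index would.
    void insertUnique(String doc) {
        if (store.putIfAbsent(doc, 1) != null) {
            throw new DuplicateKeyException();
        }
    }

    // Update path: atomically increment doc_freq for an existing doc.
    void incrementFreq(String doc) {
        store.computeIfPresent(doc, (k, v) -> v + 1);
    }

    int freq(String doc) {
        return store.getOrDefault(doc, 0);
    }

    // The upsert-with-retry loop: try to insert; if a concurrent writer
    // won and the unique constraint rejects ours, fall back to an update.
    void upsert(String doc) {
        try {
            insertUnique(doc);
        } catch (DuplicateKeyException e) {
            incrementFreq(doc);
        }
    }
}
```

With this shape, whichever thread inserts first wins, every other thread's insert is rejected by the uniqueness check and converted into a frequency update, and exactly one document per doc value survives.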