[SERVER-4639] update yielding with upsert can duplicate a document Created: 06/Jan/12  Updated: 11/Jul/16  Resolved: 18/Jan/12

Status: Closed
Project: Core Server
Component/s: Concurrency, Write Ops
Affects Version/s: None
Fix Version/s: 2.1.0

Type: Bug Priority: Major - P3
Reporter: Eliot Horowitz (Inactive) Assignee: Eliot Horowitz (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Depends
Related
related to SERVER-7170 Upserts using unindexed query can resu... Closed
is related to SERVER-3357 disk/yield lock - any update Closed
Operating System: ALL
Participants:

 Description   

If 2 threads are accessing the same document and one thread moves the document "back" (for example after a yield), the 2nd thread can miss it; combined with an upsert, this can cause a duplicate document to be inserted.
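A minimal reproduction sketch of the externally visible effect (not of the server internals), written against the Casbah Scala driver with hypothetical names (mydb, mytable, key1/key2, field1/field2); this is an assumption-laden illustration, not code from the ticket. Two threads issue the same upsert against a collection with no index on the queried fields, so each update scans and yields, and on affected versions both threads can decide the document is absent and insert their own copy:

import com.mongodb.casbah.Imports._

object UpsertRaceSketch extends App {
  // Hypothetical connection; the collection deliberately has NO index on (key1, key2),
  // so each upsert performs a long table scan and the server yields along the way.
  val coll = MongoClient("localhost", 27017)("mydb")("mytable")

  val query  = MongoDBObject("key1" -> "k12306", "key2" -> "k86")
  val update = $set("field1" -> "f1", "field2" -> "f2")

  // Issue the same upsert from two threads at the same time.
  val threads = (1 to 2).map { _ =>
    new Thread(new Runnable {
      def run(): Unit =
        coll.update(query, update, upsert = true, multi = false, concern = WriteConcern.Safe)
    })
  }
  threads.foreach(_.start())
  threads.foreach(_.join())

  // One matching document is expected; on affected versions this can occasionally be 2.
  println(coll.count(query))
}

A single run will usually not hit the race; the reports below needed tens of thousands of concurrent upserts to see a handful of duplicates.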



 Comments   
Comment by Ian Whalen (Inactive) [ 21/Sep/12 ]

litaopier, as this ticket is closed, can you please open a separate SERVER ticket so that we can debug with you and see if the problem is still outstanding?

Comment by taopier [ 21/Sep/12 ]

db version v2.0.5

Using Storm, the threads (also in different processes) run this code: myDB("mytable").update(DBObject("key1" -> new ObjectId(myKey), "key2" -> coKey), myUpdates, true, false, WriteConcern.SAFE)

Although the probability is quite low (say, I use Storm to upsert around 60k records and 4 duplicates appeared), most of the records behave as we expect.
I ran my code several times, and each time the duplicated records are random in both content and number (say, 2 to 10).

I checked this issue, and it seems that what we are seeing is probably the same thing.
So I did upgrade to v2.2, restarted my DB, and ran my case; I still get 1-3 duplicated records.
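As an illustration only (not from the original report), one way to spot such duplicates with the same Casbah-style driver is to group the stored documents by their (key1, key2) pair client-side and keep the pairs that occur more than once; the connection details and field names are assumptions:

import com.mongodb.casbah.Imports._

object FindDuplicateUpserts extends App {
  val coll = MongoClient("localhost", 27017)("mydb")("mytable")

  // Project only key1/key2, group the pairs in the client, and keep pairs seen more than once.
  val duplicatedPairs =
    coll.find(MongoDBObject(), MongoDBObject("key1" -> 1, "key2" -> 1, "_id" -> 0))
      .map(doc => (doc.get("key1"), doc.get("key2")))
      .toSeq
      .groupBy(identity)
      .collect { case (pair, occurrences) if occurrences.size > 1 => pair -> occurrences.size }

  duplicatedPairs.foreach { case ((k1, k2), n) => println(s"$k1 / $k2 appears $n times") }
}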

The following is the relevant DB log:
Fri Sep 21 13:18:57 [initandlisten] MongoDB starting : pid=32671 port=27017 dbpath=/home/myuser/mydb 64-bit host=mymachine
Fri Sep 21 13:18:57 [initandlisten] db version v2.2.0, pdfile version 4.5

....
Fri Sep 21 13:35:29 [conn196] update mydb.mytable query: { key1: "k12306", key2: "k86" } update: { field1: "f1", field2: "f2" } nscanned:3341 nupdated:1 upsert:1 keyUpdates:0 numYields: 26 locks(micros) w:7033 333ms
...
Fri Sep 21 13:35:29 [conn192] update mydb.mytable query: { key1: "k12306", key2: "k86" } update: { field1: "f1", field2: "f2" } nscanned:3343 nupdated:1 upsert:1 keyUpdates:0 numYields: 26 locks(micros) w:6906 335ms
....

We can see above the two update commands that produced the duplicated record; note that both scanned the whole collection (nscanned ~3340) and yielded 26 times while doing so.

Other conditions are:
a) when mytable has an index created with db.mytable.ensureIndex({"key1": 1, "key2": 1}), it works fine: no duplicated records are generated.
b) when mytable has no index at all, which makes every mongo command take much longer, bang, the duplicated records appear.
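A hedged side note that is not part of the original report: a plain compound index mainly shrinks the race window by making the scan fast, while a unique index on the queried fields makes the server reject the second racing insert with a duplicate-key error. A Casbah-style sketch, with hypothetical names, assuming the driver version exposes a createIndex(keys, options) overload (older releases call it ensureIndex); creating the index fails if duplicates already exist:

import com.mongodb.casbah.Imports._

object EnsureUniqueUpsertIndex extends App {
  val coll = MongoClient("localhost", 27017)("mydb")("mytable")

  // Unique compound index on the fields used in the upsert query; with it in place,
  // the losing side of a racing upsert gets a duplicate-key error instead of inserting a copy.
  coll.createIndex(
    MongoDBObject("key1" -> 1, "key2" -> 1),
    MongoDBObject("unique" -> true)
  )
}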

Any ideas?

I'm wondering whether this bug has been fixed thoroughly.

Comment by auto [ 06/Jan/12 ]

Author: Eliot Horowitz (erh) <eliot@10gen.com>

Message: using PageFaultException in various places in update SERVER-4639
Branch: master
https://github.com/mongodb/mongo/commit/e39d2a7372460e3cdbf7e8e4601531bc73ccb712
