We're seeing a handful of live exceptions from concurrent upserts using the same query, complaining about unique-index violations. My understanding was that an upsert is atomic with respect to the decision to update or insert, so the 'winner' of the obvious race condition would perform the insert and second place would update the new document.
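To make the suspected failure mode concrete, here is a minimal in-memory model of it (plain Java, not our production code; the key and class names are illustrative): two clients both run the upsert's match step before either performs its insert, so both decide to insert and the second collides on the unique index.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the suspected race: the "exists?" check and the insert
// are not atomic, so two clients can both take the insert path.
public class UpsertRaceModel {
    static class DuplicateKeyException extends RuntimeException {
        DuplicateKeyException(String m) { super(m); }
    }

    // Stand-in for the collection: unique-index key -> counter value
    static final Map<String, Integer> collection = new HashMap<>();

    // Insert that fails on a unique-index collision, like E11000
    static void insert(String key, int value) {
        if (collection.containsKey(key))
            throw new DuplicateKeyException("E11000 duplicate key: " + key);
        collection.put(key, value);
    }

    public static void main(String[] args) {
        String key = "guid|s1|s2|s3"; // stands in for the 4-field unique index
        // Both clients evaluate the upsert's match step first...
        boolean client1Found = collection.containsKey(key);
        boolean client2Found = collection.containsKey(key);
        // ...so both decide to insert; the second one collides.
        if (!client1Found) insert(key, 1);
        try {
            if (!client2Found) insert(key, 1);
        } catch (DuplicateKeyException e) {
            System.out.println(e.getMessage()); // prints E11000 duplicate key: guid|s1|s2|s3
        }
    }
}
```

If the server really did make the match-or-insert decision atomically, the second client would take the update path instead and no exception would surface.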
An example of the exception:
{ "serverUsed" : "localhost/127.0.0.1:27018" ,
"singleShard" : removed,
"err" : "E11000 duplicate key error index: index-name dup key:
"code" : 11000 ,
"n" : 0 ,
"connectionId" : 26451 ,
"ok" : 1.0}
The unique index in this case is a composite of four fields: one GUID, the others shorter strings, all contained within a sub-document. The query element of the upsert consists solely of the unique-index fields; the update is a document containing a single $inc and a $set for half a dozen other fields. I would post an example, but I've been unable to recreate the issue with test code. The database is not under significant load (~100 updates per second, ~4% locked, rarely more than a single item in either the write or read queue), and the concurrent queries come from separate clients, so not from the same connection pool. The upserts operate within an "ensureConnection" block (Java driver), so we can test a variety of write concerns.
It's my understanding that this shouldn't be happening. Is there any extra information I can provide to help confirm and track down this issue?
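In the meantime, the client-side mitigation we're considering is to retry on the duplicate-key error, so that the loser of the race takes the update path on its second attempt. A sketch of that pattern, again modeled against an in-memory map rather than the real driver (all names illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative retry-on-duplicate-key workaround: if the insert path loses
// the race, retry so the next attempt takes the update path instead.
public class UpsertRetryDemo {
    // Stand-in for the collection: unique-index key -> counter value
    static final Map<String, Integer> collection = new ConcurrentHashMap<>();

    static void upsert(String key, int incBy) {
        for (int attempt = 0; attempt < 5; attempt++) {
            Integer existing = collection.get(key);
            if (existing != null) {
                // Update path: $inc-style increment via compare-and-set
                if (collection.replace(key, existing, existing + incBy)) return;
            } else {
                // Insert path: fails (returns non-null) if another client won
                if (collection.putIfAbsent(key, incBy) == null) return;
                // "Duplicate key": loop again and take the update path
            }
        }
        throw new IllegalStateException("upsert failed after retries");
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> upsert("guid|s1|s2|s3", 1);
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(collection.get("guid|s1|s2|s3")); // prints 2
    }
}
```

We'd rather not ship a retry loop for an operation that's documented as atomic, though, hence the question above.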