[SERVER-19920] Duplicate Key Error on Upsert with multi processes Created: 13/Aug/15  Updated: 14/Aug/15  Resolved: 13/Aug/15

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: memorybox Assignee: Unassigned
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File mongod.conf     Text File test_mongo_update.py    
Issue Links:
Duplicate
duplicates SERVER-19600 Upsert results in E11000 duplicate ke... Closed
Operating System: ALL
Steps To Reproduce:

1. run test_mongo_update.py
2. run test_mongo_update.py as other process

It will raise a duplicate key error.

Participants:

 Description   

Hi, all

I just got a weird error from our application:

When I updated with two processes, it complained of a duplicate key error on a collection with a unique index, even though the operation in question was an upsert.

Case code (test_mongo_update.py):

import time
from bson import Binary
from pymongo import MongoClient, DESCENDING
 
bucket = MongoClient('127.0.0.1', 27017)['test']['foo']
bucket.drop()
bucket.update({'timestamp': 0}, {'$addToSet': {'_exists_caps': 'cap15'}}, upsert=True, safe=True, w=1, wtimeout=10)
bucket.create_index([('timestamp', DESCENDING)], unique=True)
while True:
    timestamp = str(int(1000000 * time.time()))
    bucket.update({'timestamp': timestamp}, {'$addToSet': {'_exists_foos': 'fooxxxxx'}}, upsert=True, safe=True, w=1, wtimeout=10)

When I run the script with two processes, pymongo raises:

Traceback (most recent call last):
  File "test_mongo_update.py", line 11, in <module>
    bucket.update({'timestamp': timestamp}, {'$addToSet': {'_exists_foos': 'fooxxxxx'}}, upsert=True, safe=True, w=1, wtimeout=10)
  File "build/bdist.linux-x86_64/egg/pymongo/collection.py", line 552, in update
  File "build/bdist.linux-x86_64/egg/pymongo/helpers.py", line 202, in _check_write_command_response
pymongo.errors.DuplicateKeyError: E11000 duplicate key error collection: test.foo index: timestamp_-1 dup key: { : "1439374020348044" }
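As the linked SERVER-19600 describes, two concurrent upserts can both miss on the find phase and both attempt the insert, so the loser fails with E11000 on the unique index. The usual client-side mitigation is to retry the upsert: on the retry it matches the document the other process just inserted and updates it instead. A minimal sketch of that pattern (the `retry_upsert` helper and the stand-in exception class are my own, not pymongo API):

```python
import time

class DuplicateKeyError(Exception):
    """Stand-in for pymongo.errors.DuplicateKeyError, defined here only
    so the sketch runs without a live MongoDB."""

def retry_upsert(do_update, retries=3, delay=0.01, dup_exc=DuplicateKeyError):
    """Run `do_update` (a zero-argument callable performing the upsert),
    retrying when it loses the find-then-insert race. The retried upsert
    matches the document the other process inserted and updates it."""
    for attempt in range(retries):
        try:
            return do_update()
        except dup_exc:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
```

With pymongo you would pass `dup_exc=pymongo.errors.DuplicateKeyError` and wrap the `bucket.update(...)` call in a lambda.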

Env:

  1. mongodb 3.0.5, WiredTiger
  2. single mongodb instance
  3. pymongo 2.8.1
  4. centos6.6

mongod.conf:

systemLog:
   destination: file
   logAppend: true
   logRotate: reopen
   path: /opt/lib/log/mongod.log
 
# Where and how to store data.
storage:
   dbPath: /opt/lib/mongo
   journal:
     enabled: true
 
   engine: "wiredTiger"
   directoryPerDB: true
 
# how the process runs
processManagement:
   fork: true  # fork and run in background
   pidFilePath: /opt/lib/mongo/mongod.pid
 
# network interfaces
net:
   port: 27017
   bindIp: 0.0.0.0  # 0.0.0.0 listens on all interfaces; use 127.0.0.1 to listen on the local interface only.
 
setParameter:
   enableLocalhostAuthBypass: false

Any thoughts on what could be going wrong here?

PS:

I retried the same case with the MMAPv1 storage engine and it works fine. Why?

I've asked the same question on StackOverflow (http://stackoverflow.com/questions/31962539/duplicate-key-error-on-upsert-with-multi-processesmongo-3-0-4-wiredtiger) and on the mongodb-user mailing list.

I found something related here:
https://jira.mongodb.org/browse/SERVER-18213

But this error still occurs after that bug fix, so it looks like the bug is not completely fixed.

Cheers



 Comments   
Comment by Scott Hernandez (Inactive) [ 13/Aug/15 ]

This is a duplicate of SERVER-19600 and its linked issues.

Please read those for more information.

Generated at Thu Feb 08 03:52:34 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.