-
Type: Bug
-
Resolution: Won't Fix
-
Priority: Major - P3
-
None
-
Affects Version/s: 2.6.2
-
Component/s: None
-
None
-
Environment: MongoDB 2.4.7, sharded environment, Amazon Linux
-
Minor Change
After updating MongoDB from 2.2 to the most recent 2.4.7, we noticed that pymongo no longer processes duplicate unique key inserts the way we expect. Previously, an insert with a duplicated key and continue_on_error=True would raise DuplicateKeyError; now it raises OperationFailure.
It seems that in a sharded environment MongoDB indicates duplicate key errors in a slightly different way than before. The OperationFailure instance we're getting has its `code` attribute set to 16460, which is not among the codes (11000, 11001, 12582) checked in __check_response_to_last_error to raise DuplicateKeyError. It looks like the 16460 error wraps the underlying 11000 error. This is what we're seeing in the mongos logs:
Tue Oct 29 00:15:37.346 [conn41741] warning: swallowing exception during insert :: caused by :: 16460 error inserting 1 documents to shard bill1shard1:bill1shard1/shard1bill1a.ninternal.com:27018,shard1bill1b.ninternal.com:27018 at version 12|3||524f2310655b64ff77504d58 :: caused by :: E11000 duplicate key error index: bigcollection.$f_1_t_1 dup key: { : "100000760454128", : "1650665635" }
Tue Oct 29 00:15:37.353 [conn41741] warning: exception during insert (continue on error set) :: caused by :: 16460 error inserting 38 documents to shard bill1shard2:bill1shard2/shard2bill1a.ninternal.com:27018,shard2bill1b.ninternal.com:27018 at version 12|3||524f2310655b64ff77504d58 :: caused by :: E11000 duplicate key error index: bigcollection.$f_1_t_1 dup key: { : "100000760454128", : "1650665635" }
So far I've worked around it with ugly code that checks for the "E11000" substring in the error message, but it's far from perfect.
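A minimal sketch of that workaround, assuming the error code and message are pulled from the caught OperationFailure; `is_duplicate_key_error` is a hypothetical helper, not part of pymongo's API:

```python
# Codes that pymongo's __check_response_to_last_error treats as
# duplicate-key errors (per the description above).
DUPLICATE_KEY_CODES = {11000, 11001, 12582}


def is_duplicate_key_error(code, message):
    """Return True if the error looks like a duplicate-key violation.

    Checks the known duplicate-key error codes first, then falls back to
    scanning the message text, since mongos may wrap the E11000 error in
    a generic code such as 16460.
    """
    if code in DUPLICATE_KEY_CODES:
        return True
    return "E11000" in (message or "")
```

In application code this would be used inside an `except OperationFailure as exc:` block, passing `exc.code` and `str(exc)`, and re-raising as a duplicate-key condition when it returns True.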
- is related to
-
SERVER-11493 Use a specific error code for duplicate key error when sharded
- Closed
- related to
-
JAVA-1103 CommandResult.getException not detecting "caused by" duplicate key exception
- Closed