[SERVER-26351] E11000 duplicate key error, why multiple formats? Created: 27/Sep/16  Updated: 06/Dec/22  Resolved: 10/Jun/19

Status: Closed
Project: Core Server
Component/s: Logging
Affects Version/s: 3.2.9
Fix Version/s: None

Type: Improvement Priority: Minor - P4
Reporter: Christopher Antonellis Assignee: Backlog - Storage Execution Team
Resolution: Done Votes: 0
Labels: neweng
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Assigned Teams:
Storage Execution
Participants:

 Description   

I am receiving the E11000 duplicate key error in different formats when running our application on different local machines.

Format 1:
u'E11000 duplicate key error collection: baseclass.users index: email dup key: { : "jianyang@mailinator.com" }'

Format 2:
u'E11000 duplicate key error index: baseclass.users.$email dup key: { : "jianyang@mailinator.com" }'

What is the reason for this, and how can I control this formatting? I am trying to reliably extract some data from this string.
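Since both message formats appear above, a workaround is to match both. This is a minimal sketch, not an official parser; the pattern names and the `parse_dup_key_error` helper are my own, and it assumes the messages look exactly like the two examples above:

```python
import re

# Format 1 (collection name present, e.g. WiredTiger):
#   E11000 duplicate key error collection: <ns> index: <name> dup key: { ... }
WT_RE = re.compile(
    r"E11000 duplicate key error collection: (?P<ns>\S+) "
    r"index: (?P<index>\S+) dup key: (?P<key>\{.*\})"
)

# Format 2 (index spec only, e.g. MMAPv1):
#   E11000 duplicate key error index: <ns>.$<name> dup key: { ... }
MMAP_RE = re.compile(
    r"E11000 duplicate key error index: (?P<ns>\S+?)\.\$(?P<index>\S+) "
    r"dup key: (?P<key>\{.*\})"
)

def parse_dup_key_error(msg):
    """Return (namespace, index_name, dup_key_text), or None if no match."""
    for pattern in (WT_RE, MMAP_RE):
        m = pattern.search(msg)
        if m:
            return m.group("ns"), m.group("index"), m.group("key")
    return None
```

Both example strings from the description parse to the same triple, `("baseclass.users", "email", '{ : "jianyang@mailinator.com" }')`, so downstream code does not have to care which format the server emitted.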



 Comments   
Comment by Eric Milkie [ 10/Jun/19 ]

We believe the multiple code paths have now been unified.

Comment by Geert Bosch [ 14/Oct/16 ]

The issue is that these errors are generated by different storage engines (MMAPv1 / WiredTiger). In order to make sure the same message is generated, we'd have to pull that logic out of the storage engines. Additionally, WiredTiger includes the collection name in the message, which is not accessible in the BtreeLogic class where MMAPv1 generates the message.
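Given that the message text is engine-specific, a more robust approach than string parsing is to key off the numeric error code, which is 11000 regardless of storage engine. A minimal sketch, assuming a PyMongo client (suggested by the `u'...'` strings above) where the error document from `DuplicateKeyError.details` or a bulk result's `writeErrors` entry carries a `code` field; the helper name is my own:

```python
# MongoDB's duplicate-key error code; stable across storage engines
# even when the human-readable message text differs.
DUPLICATE_KEY = 11000

def is_duplicate_key(error_details):
    """Check a raw error document (e.g. DuplicateKeyError.details in
    PyMongo, or a writeErrors entry) for the duplicate-key code."""
    return error_details.get("code") == DUPLICATE_KEY
```

Checking the code sidesteps the formatting difference entirely, though it does not recover the offending key value, which still lives only in the message text.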

Comment by Christopher Antonellis [ 12/Oct/16 ]

Thomas & Ian, thanks for your responses.

Ian, could you elaborate on what you mean by "different code paths" so I can better understand the issue? I would like to be able to describe the issue to my team in the best detail possible. Thank You

Comment by Ian Whalen (Inactive) [ 03/Oct/16 ]

Hi Chris, you're seeing the difference in log lines because these are two different code paths, and there is not currently a way to control the formatting. We're in freeze to prep for the 3.4 release but will look at cleaning up these log messages in the future.

Comment by Kelsey Schubert [ 27/Sep/16 ]

Hi cantonellis,

Thanks for reporting this issue – I've assigned it to our Integration Team to investigate. Please continue to watch for updates.

Kind regards,
Thomas

Generated at Thu Feb 08 04:11:51 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.