[SERVER-1068] unique indexes not enforced within array of single document Created: 30/Apr/10 Updated: 06/Dec/22 |
|
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | Index Maintenance |
| Affects Version/s: | 1.4.0 |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Wes | Assignee: | Backlog - Query Optimization |
| Resolution: | Unresolved | Votes: | 91 |
| Labels: | index-version | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | linux |
| Issue Links: | |
| Assigned Teams: | Query Optimization |
| Participants: | |
| Case: | (copied to CRM) |
| Description |
|
I am finding that a unique index is not enforced across multiple elements of an array within a single document; it is only enforced between documents in a collection. More details are available at: http://groups.google.com/group/mongodb-user/browse_thread/thread/56bf4f84f7cad835 Is this intended behavior? Is there a way around it? |
| Comments |
| Comment by Asya Kamsky [ 08/Jun/20 ] | ||||||||||||||||||||||||
|
There is a workaround to prevent such duplicates using document validation. You create a rule (i.e., a query) that accepts only documents that do not contain duplicate values within the same array, and then rely on the unique index, as before, to ensure there are no duplicates across separate documents. Here is an example validator where "a" is an array and the value of subdocument field "b" must be unique within "a". This assumes the collection is either empty or already complies with the rule:
Note that this validator assumes "a" will always be an array. If "a" is allowed to be a single document, change the validator to
meaning "a" may be a non-array type OR an array in which "b" is not duplicated. Please note that until | ||||||||||||||||||||||||
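The validator code itself is not shown above; a sketch of the kind of rule described, assuming MongoDB 3.6+ `$expr` support and the illustrative field names "a" and "b", compares the number of array elements against the number of distinct "a.b" values:

```javascript
// Sketch (not the comment's exact code): reject any document whose array "a"
// contains two elements with the same "b" value. "$a.b" resolves to the array
// of all b values; $setUnion reduces it to its distinct values.
db.createCollection("coll", {
  validator: {
    $expr: {
      $eq: [ { $size: "$a.b" }, { $size: { $setUnion: "$a.b" } } ]
    }
  }
})

// Variant for when "a" may also be a single (non-array) document:
// accept any non-array "a", otherwise require distinct "b" values.
db.createCollection("coll2", {
  validator: {
    $expr: {
      $or: [
        { $ne: [ { $type: "$a" }, "array" ] },
        { $eq: [ { $size: "$a.b" }, { $size: { $setUnion: "$a.b" } } ] }
      ]
    }
  }
})
```

These run against a live mongod, so they are presented here as configuration sketches rather than tested code.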
| Comment by Prateek Arvind [ 24/May/20 ] | ||||||||||||||||||||||||
|
@All - Could you please give an update on this issue? Or, if it is already fixed, can anyone suggest how to apply an index or compound index? Anxiously awaiting a response. | ||||||||||||||||||||||||
| Comment by Wissem Mazgou [ 09/Dec/19 ] | ||||||||||||||||||||||||
|
80 votes, still not enough? | ||||||||||||||||||||||||
| Comment by Thinh Kieu [ 11/Nov/19 ] | ||||||||||||||||||||||||
|
How many votes to have this one implemented? | ||||||||||||||||||||||||
| Comment by Blake West [ 15/May/19 ] | ||||||||||||||||||||||||
|
Yes! We'd love this too. This just caused financial bugs on our side, because this is highly unexpected behavior. | ||||||||||||||||||||||||
| Comment by sudhanshu gaur [ 28/Nov/18 ] | ||||||||||||||||||||||||
|
Are there any updates? Please fix this; I am really waiting for a fix. | ||||||||||||||||||||||||
| Comment by Asya Kamsky [ 07/May/18 ] | ||||||||||||||||||||||||
|
For arrays of scalars, or cases where an entire subdocument must be unique, $jsonSchema (available since 3.6.0) allows constraining all array elements to be unique. This does not cover the case where a single field of a subdocument within an array must be unique, so this ticket remains relevant for tracking that.
| ||||||||||||||||||||||||
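A sketch of the $jsonSchema approach described above, using JSON Schema's `uniqueItems` keyword (collection and field names are illustrative):

```javascript
// Reject documents where array "a" contains duplicate elements.
// Elements are compared as whole values, so two subdocuments must be
// entirely identical to count as duplicates.
db.createCollection("coll", {
  validator: {
    $jsonSchema: {
      properties: {
        a: { type: "array", uniqueItems: true }
      }
    }
  }
})

db.coll.insert({ a: [{ b: 1 }, { b: 1 }] })         // rejected: identical elements
db.coll.insert({ a: [{ b: 1, c: 1 }, { b: 1, c: 2 }] }) // accepted: "b" alone is not checked
```

The last line illustrates the limitation noted in the comment: uniqueness of a single subdocument field is not expressible this way.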
| Comment by tony kerz [ 03/Feb/17 ] | ||||||||||||||||||||||||
|
I just came up against this, and it is resulting in a non-trivial amount of effort to enforce this at the application level. Gotta say, the fact that there are almost 60 votes for this issue, half a dozen Stack Overflow posts, and it has been sitting here for 7 years after Eliot stated "it's mostly clear it's a bug" kind of makes me sad. | ||||||||||||||||||||||||
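Enforcing this at the application level, as described above, amounts to a check like the following plain-JavaScript sketch (the function and field name are illustrative, not part of any driver API):

```javascript
// Return true if every element of the array has a distinct value for the
// given field -- the per-document invariant the server does not enforce.
function hasUniqueField(arr, field) {
  const seen = new Set();
  for (const elem of arr) {
    const value = elem[field];
    if (seen.has(value)) {
      return false; // duplicate found within this one array
    }
    seen.add(value);
  }
  return true;
}

// A write path would then have to gate every insert/update itself:
// if (!hasUniqueField(doc.a, "b")) throw new Error("duplicate a.b in document");
```

The pain point is that this check must be repeated on every code path that touches the array, and it is still racy without server-side enforcement.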
| Comment by Cameron Stitt [ 18/Oct/16 ] | ||||||||||||||||||||||||
|
Again, any movement on this? I will have to re-architect my solution to spread across multiple collections (which raises the question of why use Mongo at all) rather than have the ability for intra-document uniqueness. | ||||||||||||||||||||||||
| Comment by Simon Ho [ 27/Mar/16 ] | ||||||||||||||||||||||||
|
Is there any update for this bug? | ||||||||||||||||||||||||
| Comment by Bret Curtis [ 03/Jul/15 ] | ||||||||||||||||||||||||
|
Any movement on this? | ||||||||||||||||||||||||
| Comment by Douglas Hubler [ 18/Mar/13 ] | ||||||||||||||||||||||||
|
I was able to use a workaround: you can "normalize" the data by moving the array elements into their own collection (compare the shapes of db.bar.find() and db.foo.find()). Then you can enforce uniqueness there with an ordinary unique index.
| ||||||||||||||||||||||||
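A sketch of the normalization workaround described above, under stated assumptions (collection names, the `fooId` parent key, and field "b" are all illustrative):

```javascript
// Instead of embedding subdocuments in an array on "foo", store them in a
// separate collection keyed by the parent document, and let a compound
// unique index enforce per-parent uniqueness of "b".
db.bar.createIndex({ fooId: 1, b: 1 }, { unique: true })

db.foo.insert({ _id: 1 })
db.bar.insert({ fooId: 1, b: 1 })  // ok
db.bar.insert({ fooId: 1, b: 1 })  // duplicate key error: same parent, same b
db.bar.insert({ fooId: 2, b: 1 })  // ok: different parent
```

The trade-off, as later comments note, is losing single-document atomicity between the parent and its former array elements.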
| Comment by Ethan Garofolo [ 09/Mar/13 ] | ||||||||||||||||||||||||
|
+1 for the issue and for either of Joe's solutions. Those both feel very natural. | ||||||||||||||||||||||||
| Comment by Joe Gornick [ 25/Oct/12 ] | ||||||||||||||||||||||||
|
I wrote a post about unique indexes @ http://joegornick.com/2012/10/25/mongodb-unique-indexes-on-single-embedded-documents/ I also have some proposed solutions for declaring single document unique indexes and would love to get some feedback! | ||||||||||||||||||||||||
| Comment by A. Jesse Jiryu Davis [ 23/Mar/12 ] | ||||||||||||||||||||||||
|
Also, it seems this should fail but doesn't: db.foo.insert({a:[ {b:1}, {b:1}]}) | ||||||||||||||||||||||||
| Comment by Tzury Bar Yochay [ 22/Feb/12 ] | ||||||||||||||||||||||||
|
Wasn't aware of this ticket, there is a small thread on this at stack-overflow. http://stackoverflow.com/questions/9390590/mongodb-indexing-embedded-fields-dot-notation | ||||||||||||||||||||||||
| Comment by Eliot Horowitz (Inactive) [ 14/Jun/11 ] | ||||||||||||||||||||||||
|
It's mostly clear it's a bug, though it's a little odd. It's a pretty big semantic change, so I'm not sure we can squeeze it into 1.9. | ||||||||||||||||||||||||
| Comment by Matt Albright [ 08/Jun/11 ] | ||||||||||||||||||||||||
|
This seems to me to be a pretty clear-cut bug... why is it labeled a "question"? If you're moving from a two-table relational 1:N model and embedding the 'N' documents inside each parent document in Mongo, you want to ensure the IDs of all those child documents don't collide, both across parent documents and within one. I'm trying to do just this, and this bug is going to require me to add a nasty hack to enforce uniqueness... and I'll probably still have to deal with a possible race condition. Any chance of getting this into 1.9? | ||||||||||||||||||||||||
| Comment by jp [ 28/Apr/11 ] | ||||||||||||||||||||||||
|
I have a need for this exact situation, as described by the db.foo.ensureIndex(...) example above. Since the index is a separate collection, I could see the server code checking for the existence of the pair in the index before attempting to append to the array. Otherwise, I have to run an initial search for every element to see if a document already exists with those subdocuments. I have found in the past, using SQL databases, that it can be more efficient to simply perform an INSERT, let it fail on the server side, and respond to or ignore the error on the client and move on. This obviously applies to write-only situations and assumes you do not want to update the previously inserted record. Regardless, you still need the index across the subdocument, so you might as well have it enforce uniqueness and save one round trip. This can be much more efficient than having to SELECT each time to check whether a subdocument exists before inserting one, which would be a huge performance pain in my situation. Cheers | ||||||||||||||||||||||||
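The insert-and-let-it-fail pattern described above, sketched for a modern mongo shell where `insertOne` throws on a duplicate-key error (collection and field names are illustrative):

```javascript
// With a unique index in place, attempt the write and react to the
// duplicate-key error instead of querying for existence first.
db.items.createIndex({ key: 1 }, { unique: true })

try {
  db.items.insertOne({ key: "abc", payload: 1 })
} catch (e) {
  // E11000 duplicate key error: the value already exists.
  // Ignore it (write-once semantics) or handle it as the app requires.
}
```

This saves the extra round trip of a find-before-insert, exactly as the comment suggests; note it only works where the unique index actually fires, i.e. across documents, not within one array.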
| Comment by Ryan Cole [ 15/Oct/10 ] | ||||||||||||||||||||||||
|
I just ran across this scenario, too. I do not know about the complexities involved in adding this functionality, but this would be very helpful to me. | ||||||||||||||||||||||||
| Comment by Eliot Horowitz (Inactive) [ 03/May/10 ] | ||||||||||||||||||||||||
|
From the original question: Should it be possible to enforce uniqueness of an object within an array within a single document using a unique index? For instance: db.foo.save( { a : [ { b : 1 } ] } ); // This succeeds. Should it fail?
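A fuller reproduction of the semantics being questioned, assuming a unique index on "a.b" (an illustrative sketch, not code from the ticket):

```javascript
// Unique index on the array subfield.
db.foo.createIndex({ "a.b": 1 }, { unique: true })

db.foo.insert({ a: [{ b: 1 }] })            // ok
db.foo.insert({ a: [{ b: 1 }] })            // fails: duplicate across documents
db.foo.insert({ a: [{ b: 2 }, { b: 2 }] })  // succeeds today: the duplicate is
                                            // within one document's array,
                                            // which is the behavior at issue
```

The third insert is the case this ticket asks to have rejected.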