[SERVER-5290] fail to insert docs with fields too long to index, and fail to create indexes where doc keys are too big Created: 12/Mar/12 Updated: 08/Nov/21 Resolved: 04/Dec/13 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Index Maintenance |
| Affects Version/s: | None |
| Fix Version/s: | 2.5.5 |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Richard Kreuter (Inactive) | Assignee: | Eliot Horowitz (Inactive) |
| Resolution: | Done | Votes: | 2 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Participants: | |
| Case: | (copied to CRM) |
| Description |
|
Now that writes default to returning errors, we should simply fail fast. Behavior when a document with an index key is found to exceed the maximum key size:
Behavior on secondary nodes:
OLD DESCRIPTION:
The fix for this issue must cover inserts and updates as well as failing ensureIndex calls when this condition is violated (similar to a unique index constraint failing). We need to think carefully about how this will work when a user upgrades a node whose indexes were built on top of invalid data: when they re-sync a new replica set member, the index creation step would fail. |
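A minimal sketch (using pymongo) of the fail-fast behavior requested here, assuming a server that enforces the index key size limit as this ticket's fix does (2.5.5+); the collection and field names are illustrative, and exact error codes and messages are version-dependent and not taken from this ticket.

```python
# Sketch of the two failure paths this ticket asks for, on a server that
# rejects index keys exceeding the maximum key size.
from pymongo import ASCENDING, MongoClient
from pymongo.errors import OperationFailure, WriteError

coll = MongoClient()["test"]["key_too_long_demo"]
coll.drop()

# Insert/update path: with an index already on "k", a document whose "k" value
# is too long to index should be rejected rather than silently left unindexed.
coll.create_index([("k", ASCENDING)])
try:
    coll.insert_one({"k": "x" * 4096})
except WriteError as exc:
    print("insert failed fast:", exc)

# ensureIndex path: if oversized values already exist (inserted before any
# index on "k"), building the index should fail, like a violated unique
# index constraint.
coll.drop()
coll.insert_one({"k": "x" * 4096})
try:
    coll.create_index([("k", ASCENDING)])
except OperationFailure as exc:
    print("index build failed:", exc)
```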
| Comments |
| Comment by Githook User [ 04/Dec/13 ] |
|
Author: Eliot Horowitz (erh) <eliot@10gen.com> Message: |
| Comment by Githook User [ 04/Dec/13 ] |
|
Author: Eliot Horowitz (erh) <eliot@10gen.com> Message: |
| Comment by Githook User [ 04/Dec/13 ] |
|
Author: Eliot Horowitz (erh) <eliot@10gen.com> Message: |
| Comment by Githook User [ 04/Dec/13 ] |
|
Author: Eliot Horowitz (erh) <eliot@10gen.com> Message: |
| Comment by Githook User [ 04/Dec/13 ] |
|
Author: Eliot Horowitz (erh) <eliot@10gen.com> Message: |
| Comment by Daniel Pasette (Inactive) [ 02/Oct/12 ] |
|
A side effect of this issue is that mongodump will skip documents whose _id field is too long to index, because it uses a snapshot query (which walks the _id index) to dump documents; running it with --forceTableScan avoids the problem. |
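One hedged way to check whether a collection is affected (i.e. contains documents that an _id-index-driven dump would skip) is to compare a forced collection scan against a forced _id index scan. The sketch below is illustrative only and is not part of mongodump; the collection name is made up.

```python
# Illustrative diagnostic (pymongo): count documents via a full collection scan
# and via the _id index; a difference suggests documents whose _id could not be
# indexed and which an _id-driven snapshot dump would miss. This walks the
# whole collection twice, so it is a diagnostic sketch, not a production tool.
from pymongo import MongoClient

coll = MongoClient()["test"]["mycoll"]

natural = sum(1 for _ in coll.find({}, {"_id": 1}).hint([("$natural", 1)]))
via_id = sum(1 for _ in coll.find({}, {"_id": 1}).hint([("_id", 1)]))

if natural != via_id:
    print(f"{natural - via_id} document(s) reachable only by collection scan")
else:
    print("no discrepancy between collection scan and _id index scan")
```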
| Comment by tony tam [ 01/May/12 ] |
|
I think this is more serious than the description above suggests. The bigger issue is that once documents with index fields that are too long have been inserted, you can end up with a large number of "unfindable" objects. This gets even worse when you spin up a new replica, which will FAIL to sync once it exceeds a fixed number of unfindable objects. In my situation, we inserted an object and could not find it again; because it was not found, it was re-inserted. This added more than 1M bad records to a single collection, which caused replication to fail completely. If a record is going to become unfindable or effectively corrupt the database, the server should reject the insertion. If the client sends the insert as fire-and-forget (FAF), the data loss should be expected; if a safe write is requested, the client can "do the right thing" knowing that the write cannot be performed. |
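A minimal sketch (pymongo) of the distinction drawn here between fire-and-forget and acknowledged (safe) writes, assuming a server that enforces the index key size limit and rejects oversized keys; the collection and field names are made up for illustration.

```python
# Fire-and-forget vs acknowledged writes when the server rejects an oversized
# index key: only the acknowledged write lets the client react.
from pymongo import MongoClient, WriteConcern
from pymongo.errors import WriteError

db = MongoClient()["test"]
db["events"].create_index("big_key")  # indexed field that will exceed the key limit

# Fire-and-forget (w=0): the driver reports nothing, so a rejected insert is
# silently lost from the application's point of view.
faf = db.get_collection("events", write_concern=WriteConcern(w=0))
faf.insert_one({"big_key": "x" * 4096})  # no exception even if the server rejects it

# Acknowledged write (default): the client sees the failure and can react,
# e.g. by truncating or hashing the offending field before retrying.
ack = db.get_collection("events")
try:
    ack.insert_one({"big_key": "x" * 4096})
except WriteError as exc:
    print("server rejected the insert:", exc)
```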