Writes should default to raising errors here: when an index key is too large, we should fail fast rather than silently skip indexing.
Behavior when a document's index key exceeds the maximum key size:
- Inserting a new document with an oversize index key fails with an error message. The document is not inserted.
- Updating a document such that it would have an oversize index key fails with an error message. The existing document remains unchanged.
- ensureIndex / reIndex on a collection containing an oversize index key fails with an error message. The index is not created.
- compact on a collection containing oversize index keys succeeds, but documents with oversize keys are not inserted into the index.
- mongorestore / mongoimport reject objects whose indexed values are too large. The result is effectively the same as inserting each object individually.
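The insert-path rule above can be sketched as follows. This is an illustrative simulation, not server code: the 1024-byte limit, the exception name, and the helper functions are all assumptions chosen for the example.

```python
# Hypothetical sketch: enforce a maximum index key size at insert time,
# rejecting the whole document instead of silently leaving it out of
# the index. The 1024-byte limit is illustrative, not the server's.
MAX_INDEX_KEY_BYTES = 1024


class IndexKeyTooLong(Exception):
    """Raised when an indexed field's value exceeds the key size limit."""


def check_index_keys(doc, indexed_fields, max_bytes=MAX_INDEX_KEY_BYTES):
    """Raise IndexKeyTooLong if any indexed field exceeds max_bytes."""
    for field in indexed_fields:
        value = doc.get(field)
        if value is None:
            continue
        size = len(str(value).encode("utf-8"))
        if size > max_bytes:
            raise IndexKeyTooLong(
                f"key for field '{field}' is {size} bytes "
                f"(limit {max_bytes}); document rejected"
            )


def insert(collection, doc, indexed_fields):
    # Fail fast: validate before storing, so no partial state is left.
    check_index_keys(doc, indexed_fields)
    collection.append(doc)
```

The key design point is that validation happens before any write, so a rejected insert leaves neither the collection nor the index modified.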
Behavior on secondary nodes:
- New replica set secondaries will insert the document and build indexes during initial sync, with a warning in the logs.
- Replica set secondaries will replicate documents inserted on a 2.4 primary, but print an error message in the log.
- Replica set secondaries will apply updates made on a 2.4 primary, but print an error message in the log.
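The role-dependent enforcement above can be sketched as follows: the primary rejects the write outright, while a secondary applying a replicated op keeps the data (so it does not diverge from the primary) and only logs. All names and the limit are illustrative assumptions.

```python
# Hypothetical sketch: strict enforcement on the primary, lenient
# (log-and-apply) on secondaries replaying replicated writes.
import logging

MAX_INDEX_KEY_BYTES = 1024  # illustrative limit
log = logging.getLogger("oversize-keys")


def apply_write(collection, doc, field, is_primary):
    size = len(str(doc.get(field, "")).encode("utf-8"))
    if size > MAX_INDEX_KEY_BYTES:
        if is_primary:
            # Primary: reject the write before storing anything.
            raise ValueError(f"index key for '{field}' too long; write rejected")
        # Secondary: must stay consistent with the primary's data,
        # so apply the write anyway and only warn in the log.
        log.warning("oversize index key for '%s'; applying replicated write", field)
    collection.append(doc)
```

This mirrors the trade-off in the list above: a secondary cannot safely refuse data the primary already accepted.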
When inserting a new document, if an indexed field is too long to store in the btree, we currently skip adding the document to the index but still store the document. This leads to peculiar behaviors (see the examples listed above). It would be good to have a mechanism that makes these insertions fail outright rather than store the document (we already do this for unique indexes, after all, so programmers using fire-and-forget writes cannot really expect documents to be present if they have not checked).
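The peculiar behavior described above can be demonstrated with a small simulation: the document is stored, but because its oversize key is skipped during index maintenance, an index-backed query misses a document that a full collection scan would find. The names and the byte limit are illustrative assumptions, not server code.

```python
# Hypothetical sketch of the current (lenient) behavior: store the
# document, silently skip it in the index when the key is too long.
MAX_INDEX_KEY_BYTES = 1024

collection = []   # the documents themselves
name_index = {}   # maps field value -> document (toy "btree")


def lenient_insert(doc):
    collection.append(doc)  # the document is always stored
    key = doc["name"]
    if len(key.encode("utf-8")) <= MAX_INDEX_KEY_BYTES:
        name_index[key] = doc  # oversize keys are silently skipped


big = "x" * 2000
lenient_insert({"_id": 1, "name": big})

# A collection scan finds the document; an index lookup does not.
found_by_scan = [d for d in collection if d["name"] == big]
found_by_index = name_index.get(big)
```

This is exactly the inconsistency that makes the lenient behavior surprising: the same query returns different results depending on whether the planner uses the index.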
The fix for this issue must cover insert and update, and must also fail ensureIndex calls when this condition is violated (similar to a unique index constraint failing).
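The ensureIndex side of the fix could look like the following sketch: scan the existing documents and abort the build if any key exceeds the limit, mirroring how a unique-index build fails on duplicates. Function names and the limit are assumptions for illustration.

```python
# Hypothetical sketch: fail an index build over existing data when any
# document's key exceeds the size limit, leaving no index behind.
MAX_INDEX_KEY_BYTES = 1024


def ensure_index(collection, field, max_bytes=MAX_INDEX_KEY_BYTES):
    """Build a toy index on `field`, or raise if any key is too long."""
    index = {}
    for doc in collection:
        key = doc.get(field)
        if key is not None and len(str(key).encode("utf-8")) > max_bytes:
            # Abort the whole build; no partially built index survives.
            raise ValueError(
                f"cannot build index on '{field}': key too long "
                f"in document _id={doc.get('_id')}"
            )
        index[key] = doc
    return index
```

As with a failed unique-index build, the index is not created at all; the caller must fix or remove the offending documents first.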
We need to think hard about how this will work when a user upgrades a node whose indexes were built on top of invalid data: when they re-sync a new replica set member, the index-create step would fail.