Type: Task
Resolution: Done
Priority: Major - P3
Affects Version/s: 2.5.5
Component/s: Index Maintenance, Text Search
Backwards Compatibility: Fully Compatible
> db.test.drop();
> db.test.ensureIndex({a: 'text'});
> var long = '';
> for(var i=0; i<1024; i++){ long = long + 'a'; }
> db.test.insert({a: long})
Shell says:
SingleWriteResult({
	"writeErrors" : [
		{
			"index" : 0,
			"code" : 17280,
			"errmsg" : "insertDocument :: caused by :: 17280 Btree::insert: key too large to index, failing test.test.$a_text 1047 { : \"<long-key>...\", : 1.1 }",
			"op" : {
				"_id" : ObjectId("52e833df70104fa5ad62f2b1"),
				"a" : "<long-key>"
			}
		}
	],
	"writeConcernErrors" : [ ],
	"nInserted" : 0,
	"nUpserted" : 0,
	"nUpdated" : 0,
	"nModified" : 0,
	"nRemoved" : 0,
	"upserted" : [ ]
})
Mongod says:
2014-01-28T17:49:03.940-0500 [conn2] test.test Btree::insert: key too large to index, failing test.test.$a_text 1047 { : "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa...", : 1.1 }
2014-01-28T17:49:03.940-0500 [conn2] test.test caught assertion addKeysToIndex test.test.$a_text_id: ObjectId('52e833df70104fa5ad62f2b1')
This is new behavior for indexing in 2.6: in 2.4 the document was allowed to be inserted even though the index entry would not be created. The concern is that this default may not be the best choice for a text index, since the data being indexed may commonly run into this error.
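As a side note, a possible server-side way to restore the 2.4 behavior globally is the failIndexKeyTooLong server parameter; the sketch below assumes that parameter is available on the build in question, and it is not specific to text indexes nor the resolution of this ticket:

// Skip oversized index keys instead of rejecting the whole write (2.4 behavior).
// Note this applies to every index on the server, not just text indexes.
db.adminCommand({ setParameter: 1, failIndexKeyTooLong: false })
// With the parameter disabled, the insert above succeeds, but the document
// is simply missing from the text index.
db.test.insert({ a: long })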
Clients could of course catch the writeError in the SingleWriteResult and handle it at the application level, for example by storing the field that would have created the oversized index key under a non-indexed field name and re-saving the document (a sketch of this follows below), but it would be nice if they didn't have to.
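A minimal sketch of that application-level fallback, reusing the long value from the reproduction above; the a_raw field name is hypothetical, and the check assumes the writeErrors array printed in the result is accessible on the returned object:

var doc = { a: long };
var res = db.test.insert(doc);
// If the insert failed with "key too large to index" (code 17280),
// move the value to a field the text index does not cover and retry.
if (res.writeErrors && res.writeErrors.length > 0 && res.writeErrors[0].code === 17280) {
    doc.a_raw = doc.a;   // hypothetical non-indexed field name
    delete doc.a;
    db.test.insert(doc);
}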