[SERVER-8125] Text search with large result set raises unhandled exception Created: 09/Jan/13  Updated: 11/Jul/16  Resolved: 23/Jan/13

Status: Closed
Project: Core Server
Component/s: Text Search
Affects Version/s: None
Fix Version/s: 2.4.0-rc0

Type: Bug Priority: Major - P3
Reporter: J Rassi Assignee: Eliot Horowitz (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to SERVER-8287 Large array result causes v8 shell to... Closed
Operating System: ALL
Participants:

 Description   

Works as designed?

> db.foo.ensureIndex({a:"text"})
> for (i=0; i<100000; i++) {db.foo.insert({a:"word",b:"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"})}
> db.foo.runCommand("text", {search:"word", limit:100000})
Wed Jan  9 14:30:43 uncaught exception: error {
	"$err" : "BSONObj size: 16872191 (0xFF720101) is invalid. Size must be between 0 and 16793600(16MB) First element: queryDebugString: \"word||||||\"",
	"code" : 10334
}
> 
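For reference, the failure mode can be reasoned about arithmetically: the text command builds a single BSON reply, and once the accumulated results exceed the server's internal cap (the 16793600 bytes quoted in the error message), construction of the reply fails. A minimal stand-alone sketch of that size budget (pure Python, no MongoDB required; the per-document size and the `overhead` value are rough estimates, not measured figures):

```python
# Stand-alone sketch of the size budget implied by the error above.
# MAX_REPLY_BYTES comes straight from the error message; APPROX_DOC_BYTES
# (~1 KB string plus field names and BSON overhead) is an estimate.
MAX_REPLY_BYTES = 16_793_600  # "Size must be between 0 and 16793600(16MB)"
APPROX_DOC_BYTES = 1_100

def results_that_fit(n_requested, doc_bytes=APPROX_DOC_BYTES,
                     cap=MAX_REPLY_BYTES, overhead=200):
    """Return how many of n_requested results fit in one command reply."""
    budget = cap - overhead          # leave room for the reply envelope
    return min(n_requested, budget // doc_bytes)

print(results_that_fit(100_000))     # far fewer than the requested 100000
```

With these estimates, only ~15,000 of the requested 100,000 documents fit in one reply, which is why the command above blows past the cap.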



 Comments   
Comment by Ben Becker [ 23/Jan/13 ]

Moved v8 issue to SERVER-8287

Comment by Ben Becker [ 23/Jan/13 ]

One workaround is to remove (or significantly increase) the resource limits we set for each isolate. In a local test, the mongo client uses ~750 MB of resident memory when resource constraints are lifted. gc() reduces consumption to 112 MB (still much higher than the 10.1 MB we start with). It seems we're allocating too much memory, and GC isn't freeing everything it can, even after multiple runs.

Comment by J Rassi [ 22/Jan/13 ]

Mmm, I linked to SERVER-7938 since its title described the issue at hand ("running out of memory on v8 but not spidermonkey"), want to open a new ticket?

The above run was with "2.3.2-pre" (the shell doesn't get more specific with the release number). Here's another run with 4c8fe4f520c872cb6bc5cefd3b0ab10cd9fc1b43 (a build from today); it's pretty much the same, but I'm posting it below in case the updated stack trace is helpful. Note that reproducing this with 2.3.2+ is simple: just run mongod --setParameter textSearchEnabled=true.

$ ./mongo
MongoDB shell version: 2.3.3-pre-
connecting to: test
Server has startup warnings: 
Tue Jan 22 17:37:06.899 [initandlisten] 
Tue Jan 22 17:37:06.899 [initandlisten] ** NOTE: This is a development version (2.3.3-pre-) of MongoDB.
Tue Jan 22 17:37:06.900 [initandlisten] **       Not recommended for production.
Tue Jan 22 17:37:06.900 [initandlisten] 
> db.foo.runCommand("text", {search:"word", limit:100000})
 
#
# Fatal error in CALL_AND_RETRY_2
# Allocation failed - process out of memory
#
 
Tue Jan 22 18:50:32.157 mongo got signal 6 (Abort trap: 6), stack trace: 
 
Tue Jan 22 18:50:32.159 0x1039080cb 0x1038211cf 0x7fff8594b92a 0 0x7fff859a2dfa 0x103b30bbb 0x1039ca046 0x103987d3b 0x1039a4e92 0x103b06079 0x103bab4ca 0x103a92518 0x103a93a96 0x103a8c206 0x369284306362 
 0   mongo                               0x00000001039080cb _ZN5mongo15printStackTraceERSo + 43
 1   mongo                               0x00000001038211cf _Z12quitAbruptlyi + 191
 2   libsystem_c.dylib                   0x00007fff8594b92a _sigtramp + 26
 3   ???                                 0x0000000000000000 0x0 + 0
 4   libsystem_c.dylib                   0x00007fff859a2dfa abort + 143
 5   mongo                               0x0000000103b30bbb _ZN2v88internal2OS5AbortEv + 9
 6   mongo                               0x00000001039ca046 _Z9API_FatalPKcS0_z + 166
 7   mongo                               0x0000000103987d3b _ZN2v8L24DefaultFatalErrorHandlerEPKcS1_ + 107
 8   mongo                               0x00000001039a4e92 _ZN2v88internal2V823FatalProcessOutOfMemoryEPKcb + 518
 9   mongo                               0x0000000103b06079 _ZN2v88internal8JSObject18UpdateMapCodeCacheENS0_6HandleIS1_EENS2_INS0_6StringEEENS2_INS0_4CodeEEE + 457
 10  mongo                               0x0000000103bab4ca _ZN2v88internal9StubCache22ComputeLoadInterceptorENS0_6HandleINS0_6StringEEENS2_INS0_8JSObjectEEES6_ + 334
 11  mongo                               0x0000000103a92518 _ZN2v88internal6LoadIC12UpdateCachesEPNS0_12LookupResultENS0_16InlineCacheStateENS0_6HandleINS0_6ObjectEEENS5_INS0_6StringEEE + 1540
 12  mongo                               0x0000000103a93a96 _ZN2v88internal6LoadIC4LoadENS0_16InlineCacheStateENS0_6HandleINS0_6ObjectEEENS3_INS0_6StringEEE + 1188
 13  mongo                               0x0000000103a8c206 _ZN2v88internal11LoadIC_MissENS0_9ArgumentsEPNS0_7IsolateE + 454
 14  ???                                 0x0000369284306362 0x0 + 60002910888802

Comment by Ben Becker [ 22/Jan/13 ]

rassi@10gen.com, thanks. I don't think this bug is related to SERVER-7938, though. That case was due to a ~27 MB .js file hitting the OOM during the compile phase.

Also, could you post the version of the shell?

Comment by J Rassi [ 22/Jan/13 ]

Looks like 15,817,862 bytes, see below:

>>> import pymongo
>>> d=pymongo.MongoClient()['test']
>>> res=d.command("text", "foo", search="word", limit=100000)
>>> len(res['results'])
14416
>>> res['stats']
{u'timeMicros': 1742842, u'nfound': 100000, u'nscannedObjects': 0, u'nscanned': 100000, u'n': 14416}

$ sudo ./mongosniff --source NET lo0
sniffing... 27017 
127.0.0.1:52711  -->> 127.0.0.1:27017 test.$cmd  85 bytes  id:4431b782  1144108930
        query: { text: "foo", search: "word", limit: 100000 }  ntoreturn: -1 ntoskip: 0
127.0.0.1:27017  <<--  127.0.0.1:52711   15817862 bytes  id:59  89 - 1144108930
        reply n:1 cursorId: 0
        { queryDebugString: "word||||||", language: "english", results: [ { score: 1.1, obj: { _id: ObjectId('50ff14ac354146f13bd4b426'), a: "word", b: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..........<snip>
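A quick back-of-the-envelope check on the figures above (just arithmetic on the numbers quoted in this comment, no server needed): the 15,817,862-byte reply holding 14,416 results works out to roughly 1.1 KB per result, and the reply sits just under the 16,793,600-byte cap from the original error message.

```python
# Arithmetic on the figures quoted above (mongosniff reply size and
# pymongo result count).
reply_bytes = 15_817_862   # size of the reply captured by mongosniff
n_results = 14_416         # len(res['results']) from the pymongo run
bson_cap = 16_793_600      # cap quoted in the original error message

per_result = reply_bytes / n_results
print(round(per_result))          # ~1097 bytes per result document
print(reply_bytes < bson_cap)     # True: reply stayed just under the cap
```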

Comment by Ben Becker [ 22/Jan/13 ]

rassi@10gen.com, do you know how large the result set should have been in the case where V8 reported OOM?

Comment by J Rassi [ 22/Jan/13 ]

Note that the example in the description now unfortunately fails with a less helpful error message (see below) in the shell, due to a V8-related issue (SERVER-7938, now linking).

eliot: I'm leaving this closed for now, but note that if this needs to be done for 2.4 then we need to either fix SERVER-7938 or implement a workaround.

> db.foo.runCommand("text", {search:"word", limit:100000})
 
#
# Fatal error in CALL_AND_RETRY_2
# Allocation failed - process out of memory
#
 
Tue Jan 22 17:57:45.855 mongo got signal 6 (Abort trap: 6), stack trace: 
 
Tue Jan 22 17:57:45.857 0x10fd835cb 0x10fc9c4ef 0x7fff8594b92a 0 0x7fff859a2dfa 0x10ffac41b 0x10fe458a6 0x10fe0359b 0x10fe206f2 0x10ff818d9 0x110026d2a 0x10ff0dd78 0x10ff0f2f6 0x10ff07a66 0x1c0849306362 
 0   mongo                               0x000000010fd835cb _ZN5mongo15printStackTraceERSo + 43
 1   mongo                               0x000000010fc9c4ef _Z12quitAbruptlyi + 191
 2   libsystem_c.dylib                   0x00007fff8594b92a _sigtramp + 26
 3   ???                                 0x0000000000000000 0x0 + 0
 4   libsystem_c.dylib                   0x00007fff859a2dfa abort + 143
 5   mongo                               0x000000010ffac41b _ZN2v88internal2OS5AbortEv + 9
 6   mongo                               0x000000010fe458a6 _Z9API_FatalPKcS0_z + 166
 7   mongo                               0x000000010fe0359b _ZN2v8L24DefaultFatalErrorHandlerEPKcS1_ + 107
 8   mongo                               0x000000010fe206f2 _ZN2v88internal2V823FatalProcessOutOfMemoryEPKcb + 518
 9   mongo                               0x000000010ff818d9 _ZN2v88internal8JSObject18UpdateMapCodeCacheENS0_6HandleIS1_EENS2_INS0_6StringEEENS2_INS0_4CodeEEE + 457
 10  mongo                               0x0000000110026d2a _ZN2v88internal9StubCache22ComputeLoadInterceptorENS0_6HandleINS0_6StringEEENS2_INS0_8JSObjectEEES6_ + 334
 11  mongo                               0x000000010ff0dd78 _ZN2v88internal6LoadIC12UpdateCachesEPNS0_12LookupResultENS0_16InlineCacheStateENS0_6HandleINS0_6ObjectEEENS5_INS0_6StringEEE + 1540
 12  mongo                               0x000000010ff0f2f6 _ZN2v88internal6LoadIC4LoadENS0_16InlineCacheStateENS0_6HandleINS0_6ObjectEEENS3_INS0_6StringEEE + 1188
 13  mongo                               0x000000010ff07a66 _ZN2v88internal11LoadIC_MissENS0_9ArgumentsEPNS0_7IsolateE + 454
 14  ???                                 0x00001c0849306362 0x0 + 30821913224034

Comment by auto [ 14/Jan/13 ]

Author: Eliot Horowitz <eliot@10gen.com> (2013-01-14T16:07:20Z)

Message: SERVER-8125: improve text command error handling of large results
Branch: master
https://github.com/mongodb/mongo/commit/0909a37d8a61700bb499d21d3597f38451ac9bff

Comment by Eliot Horowitz (Inactive) [ 09/Jan/13 ]

Should error, but need a nicer error message.

Generated at Thu Feb 08 03:16:37 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.