[SERVER-18472] Query Exception: Assertion: 13548:BufBuilder attempted to grow() to 134217728 bytes, past the 64MB limit. Created: 14/May/15  Updated: 16/Nov/21  Resolved: 29/Oct/15

Status: Closed
Project: Core Server
Component/s: Querying
Affects Version/s: 3.0.2
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Stefan Seiffarth [X] Assignee: Unassigned
Resolution: Done Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Windows 7 64bit, MongoDB 3.0.2/3.0.3


Attachments: Zip Archive MongoBulkWriteException.zip    
Operating System: ALL
Participants:

 Description   

Steps to reproduce:

  • Execute the attached C# project with a document count of 100000 documents

Actual result:

 Assertion: 13548:BufBuilder attempted to grow() to 134217728 bytes, past the 64MB limit.
 mongod.exe    ...\src\mongo\util\stacktrace_win.cpp(175)                       mongo::printStackTrace+0x43
 mongod.exe    ...\src\mongo\util\log.cpp(135)                                  mongo::logContext+0x97
 mongod.exe    ...\src\mongo\util\assert_util.cpp(214)                          mongo::msgasserted+0xd7
 mongod.exe    ...\src\mongo\bson\util\builder.h(284)                           mongo::_BufBuilder<mongo::TrivialAllocator>::grow_reallocate+0x145
 mongod.exe    ...\src\mongo\bson\bsonobjbuilder.h(226)                         mongo::BSONObjBuilder::append+0x6f
 mongod.exe    ...\src\mongo\db\query\explain.cpp(236)                          mongo::Explain::statsToBSON+0x372
 mongod.exe    ...\src\mongo\db\query\explain.cpp(465)                          mongo::Explain::statsToBSON+0x1d19
 mongod.exe    ...\src\mongo\db\query\explain.cpp(455)                          mongo::Explain::statsToBSON+0x1c3d
 mongod.exe    ...\src\mongo\db\query\find.cpp(863)                             mongo::runQuery+0x1366
 mongod.exe    ...\src\mongo\db\instance.cpp(218)                               mongo::receivedQuery+0x36b
 mongod.exe    ...\src\mongo\db\instance.cpp(400)                               mongo::assembleResponse+0x352
 mongod.exe    ...\src\mongo\db\db.cpp(207)                                     mongo::MyMessageHandler::process+0xb8
 mongod.exe    ...\src\mongo\util\net\message_server_port.cpp(231)              mongo::PortMessageServer::handleIncomingMsg+0x573
 mongod.exe    ...\src\third_party\boost\libs\thread\src\win32\thread.cpp(185)  boost::`anonymous namespace'::thread_start_function+0x21
 MSVCR120.dll                                                                   beginthreadex+0x107
 MSVCR120.dll                                                                   endthreadex+0x192
 KERNEL32.DLL                                                                   BaseThreadInitThunk+0x22

Expected result:
No exception, or at least none in the server

Background:
I'm currently evaluating MongoDB and testing several optimistic concurrency check schemes.

That's why I'm building a very long query that looks like this:

{id=id1, revision=revision1}

or ... or

{id=id1000, revision=revision1000}

to see if my current documents are actually still in the collection and have not been updated for unknown reasons.
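Sketched in mongo-shell-style JavaScript (the ids and revisions here are placeholders, not the actual values from my C# repro), the filter ends up looking like this:

```javascript
// Sketch of the filter described above; ids and revisions are placeholders.
var current = [];
for (var i = 1; i <= 1000; i++) {
  current.push({ _id: "id" + i, revision: "revision" + i });
}
// One $or clause per document to check. With enough documents, the
// serialized query grows past the server's 64MB BSON buffer limit.
var filter = { $or: current };
```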

If this is the wrong category for this bug, please move it accordingly. I imagine it might be a bug in the core project, but my reproduction is in C#, which is why I originally filed it against the C# driver.



 Comments   
Comment by Ramon Fernandez Marina [ 29/Oct/15 ]

I'm not able to reproduce this behavior in the master branch starting with version 3.1.1; here's the repro I've been using:

var docs = []
for(i = 0; i < 100000; i++) {
    docs[i] = { _id: i, revision: 0 }
}
 
db.foo.insert(docs)
var orFilter = []
for(i = 0; i < docs.length; i++) {
    orFilter[i] = docs[i];
}
var filter = { $or: orFilter }
 
// This find will trigger the 64MB limit error message
db.foo.find(filter)
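
For anyone stuck on an older server, one client-side workaround (a sketch only, not an official fix; `batchFilters` is a hypothetical helper) is to split the $or clauses into fixed-size batches so each individual query stays well under the 64MB limit:

```javascript
// Hypothetical helper: split the per-document clauses into fixed-size
// batches, producing one $or filter per batch.
function batchFilters(docs, batchSize) {
  var batches = [];
  for (var i = 0; i < docs.length; i += batchSize) {
    batches.push({ $or: docs.slice(i, i + batchSize) });
  }
  return batches;
}

// 100000 clauses split into batches of 1000 gives 100 separate filters;
// each would be issued as its own find() and the results combined client-side.
var docs = [];
for (var i = 0; i < 100000; i++) {
  docs.push({ _id: i, revision: 0 });
}
var filters = batchFilters(docs, 1000);
```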

Users affected by this issue may want to download the latest 3.2 release candidate and check that their workloads no longer trigger the 64MB limit.

Regards,
Ramón.

Comment by Ed Ivanushkin [ 29/Oct/15 ]

Seems like this issue has been there for a while. Previously, trying to do a large-scale aggregation would just bring the server down. Now the server survives, but the client doesn't get results. Are there any plans to fix this permanently? 64MB is not that much considering the amount of data an aggregation can produce.

Comment by Craig Wilson [ 14/May/15 ]

Hi Stefan,

I've moved this to the SERVER group as this appears to be an issue on that end. They might come back and ask for more information.

Craig

Generated at Thu Feb 08 03:47:47 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.