[SERVER-63329] Segmentation fault when nested documents too deep. Created: 06/Feb/22  Updated: 27/Oct/23  Resolved: 23/Feb/22

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: 5.0.6, 4.4.11
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Jeremiah Gleason Assignee: Dmitry Agranat
Resolution: Community Answered Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: HTML File admin_getParameters, HTML File admin_parameters, File replica_set_primary_1_12898.log, File replica_set_primary_instance_mongodb.log, File single_server_instance_mongod.log
Operating System: ALL
Steps To Reproduce:

1) Insert a document into a collection ('test' in my code). Fields are irrelevant.

2) From the shell, run the code below on the collection.

var my_arr = []

for (var i = 0; i <= 150000; i++) my_arr.push(1)

db.test.aggregate([
  { $project: {
      "random_field_name": {
        $reduce: {
          "input": my_arr,
          "initialValue": "this_value_does_not_matter",
          "in": { "this_name_does_not_matter_TWO": "$$value" }
        }
      }
  } }
])

 Description   

A segmentation fault occurs when nested documents reach a certain depth; on my machine it was around 13,000 levels. This isn't terribly important to me (I found it because of a bug in my own code), but I don't think a segmentation fault is ever supposed to happen, and it is fairly simple to reproduce. This crashed both a simple 3-server replica set (it will crash secondaries if 'prefer_secondary' is enabled) and a single server. The segmentation fault seems much more pronounced on the replica set (though I am probably just a noob at reading MongoDB status).

I am running Ubuntu 20.04.3 LTS and I tried this on MongoDB releases 5.0.6 and 4.4.11.



 Comments   
Comment by Dmitry Agranat [ 23/Feb/22 ]

Closing this out as the reported issue was identified.

Comment by Dmitry Agranat [ 08/Feb/22 ]

Yes, this might be possible when you hit some defined limits, for example maxBSONDepth or internalPipelineLengthLimit, but there might be others.
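For reference, both of those limits can be read back from a running server with getParameter; a minimal sketch from the shell (internalPipelineLengthLimit may not exist on older server versions):

// Query just the two parameters named above instead of the full "*" dump.
db.adminCommand({ getParameter: 1, maxBSONDepth: 1, internalPipelineLengthLimit: 1 })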

Comment by Jeremiah Gleason [ 08/Feb/22 ]

I see. I am certainly no expert at MongoDB. Is it normal to seg fault over a certain number as well?

Comment by Dmitry Agranat [ 08/Feb/22 ]

Thanks letsgoappworker101@gmail.com, this matches the results of my experiments with your command, where any iteration count above 199 immediately produced the 'cannot convert document to BSON because it exceeds the limit of 200 levels of nesting' error:

{"t":{"$date":"2022-02-07T12:25:40.743+00:00"},"s":"W", "c":"COMMAND", "id":23799, "ctx":"conn62","msg":"Aggregate command executor error","attr":{"error":{"code":15,"codeName":"Overflow","errmsg":"cannot convert document to BSON because it exceeds the limit of 200 levels of nesting"},"stats":{"stage":"PROJECTION_DEFAULT","nReturned":1,"works":2,"advanced":1,"needTime":1,"needYield":0,"saveState":0,"restoreState":0,"isEOF":0,"transformBy":{},"inputStage":{"stage":"COLLSCAN","nReturned":1,"works":2,"advanced":1,"needTime":1,"needYield":0,"saveState":0,"restoreState":0,"isEOF":0,"direction":"forward","docsExamined":1}},"cmd":{"aggregate":"test_collection","pipeline":[{"$project":{"random_field_name":{"$reduce":{"input":[1,1............1,1]}}}}]}},"truncated":{"cmd":{"pipeline":{"0":{"$project":{"random_field_name":{"$reduce":{"input":{"5072":{"type":"double","size":8}}}}}}}}},"size":{"cmd":2289292}}

This is because there is a limit on nesting depth for BSON documents, which is documented as 100 but is actually set to 200; you can see maxBSONDepth in your getParameter output:

"maxBSONDepth" : 200

Comment by Jeremiah Gleason [ 07/Feb/22 ]

admin_getParameters

replica_set_primary_1_12898.log

I got the parameters and put them in the attached file. I also reproduced the bug immediately afterwards on the replica set to make sure it was still breaking.

I also tried your experiment: with 198 it worked just fine. So I decided to expand on it a little and tried every iteration count from 1 upwards, recording the errors along the way (log attached). The results:

  • 1 -> 199; Everything works fine.
  • 200 -> 12,897; Error 'cannot convert document to BSON because it exceeds the limit of 200 levels of nesting'.
  • 12,898; Primary server Segmentation Fault.

Sending any value of 12,898 or higher seems to cause a segmentation fault. It might also be worth mentioning that I wrote the code for this test in mongocxx; however, the bug is reproducible for me in both the shell and the driver with the same code.
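A rough shell sketch of that sweep (the original test was written in mongocxx and is not attached, so the error handling here is illustrative):

// Run the repro once per iteration count and report the outcome.
for (var n = 1; n <= 13000; n++) {
    var arr = [];
    for (var i = 0; i < n; i++) arr.push(1);
    try {
        db.test_collection.aggregate([{ $project: { "random_field_name": {
            $reduce: {
                "input": arr,
                "initialValue": "this_value_does_not_matter",
                "in": { "this_name_does_not_matter_TWO": "$$value" }
            }
        } } }]).toArray();
        print(n + ": ok");
    } catch (e) {
        // 200 -> 12,897 surfaces the Overflow error; at 12,898 the server
        // itself dies, which shows up client-side as a connection error.
        print(n + ": " + e.message);
    }
}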


Comment by Dmitry Agranat [ 07/Feb/22 ]

Hi letsgoappworker101@gmail.com, can you please try the same experiment but change the for loop to push 198 elements into the array instead of 150,000?

In addition, please post the output for getParameter:

use admin
db.runCommand({ "getParameter" : "*" })

Comment by Jeremiah Gleason [ 06/Feb/22 ]

single_server_instance_mongod.log

replica_set_primary_instance_mongodb.log


Here is a log for a replica set and for the single server.

In the replica set, the terminal running the server prints 'Segmentation fault (core dumped)' at the end and the process crashes.

To generate those logs, all I did was:

1) Reinstall MongoDB.

2) Start the server or replica set (no data in them at the time).

3) Insert a blank document to the collection 'test_collection'.

4) Run the shell command above, changing 'test' to 'test_collection' (a minimal shell version of steps 3 and 4 is sketched below).
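A minimal sketch of those last two steps from the shell, assuming the collection name used in this comment:

// Step 3: insert a blank document.
db.test_collection.insertOne({})
// Step 4: run the aggregation from "Steps To Reproduce",
// substituting db.test with db.test_collection.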

Comment by Dmitry Agranat [ 06/Feb/22 ]

Hi letsgoappworker101@gmail.com, can you attach the mongod log with the full segmentation fault output?

Comment by Jeremiah Gleason [ 06/Feb/22 ]

Adding a comment because my code above didn't render very well; apparently "( i )" turns into a symbol, and I don't know how to do indentation.

var my_arr = []

for (var i = 0; i <= 150000; i++) my_arr.push(1)

db.test.aggregate([
  { $project: {
      "random_field_name": {
        $reduce: {
          "input": my_arr,
          "initialValue": "this_value_does_not_matter",
          "in": { "this_name_does_not_matter_TWO": "$$value" }
        }
      }
  } }
])
