[SERVER-10136] Passing impossible value to skip on aggregation framework causes mongo to exit with out of memory Created: 08/Jul/13  Updated: 11/Jul/16  Resolved: 22/Jul/13

Status: Closed
Project: Core Server
Component/s: Aggregation Framework, Stability
Affects Version/s: 2.4.3
Fix Version/s: 2.5.2

Type: Bug Priority: Critical - P2
Reporter: Chen Fisher Assignee: Mathias Stearn
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Ubuntu 12.04 LTS, MongoDB 2.4.3


Issue Links:
Depends
depends on SERVER-9444 Use new Sorter for Aggregation $sort ... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Steps To Reproduce:

send "aggregate" command with some $match and pass 4294967294 to $skip operator. mongo should immediately exit.

1. Did not try this with a different number, but I suspect a large enough number would cause mongo to exit as well

2. Did not try this with a "normal" query (outside the aggregation framework)
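
A minimal shell sketch of the step above (the collection name and $match predicate are illustrative, not from the original report):

db.collection.aggregate([
    {$match: {field: {$exists: true}}},   // any match stage, per the report
    {$skip: 4294967294}                   // the value that triggers the crash
])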


 Description   

When passing the following value to $skip in the aggregation framework, mongo immediately exits with the log message "out of memory"
The value passed to skip was: 4294967294

It appears mongo tries to allocate memory in proportion to this number and fails with an out-of-memory error.

In the log file:
tcmalloc: large alloc 103079223296 bytes == (nil) @
Mon Jul 8 19:41:46.915 out of memory, printing stack and exiting:
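
A rough sanity check on the log line above (the per-slot cost is an inference from these numbers, not something stated in the report):

// in the mongo shell: bytes requested divided by the skip value
103079223296 / 4294967294   // ≈ 24 bytes per reserved document slot

which is consistent with mongod sizing an in-memory array by the skip value.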



 Comments   
Comment by auto [ 22/Jul/13 ]

Author:

Mathias Stearn (RedBeard0531) <mathias@10gen.com>

Message: Add sort with large limit case to SERVER-9444 test

Crash reported in SERVER-10136, but resolved automatically by SERVER-9444.
Branch: master
https://github.com/mongodb/mongo/commit/c8fac60210477e7c716dc4c5e67556d63b703bb4

Comment by Mathias Stearn [ 22/Jul/13 ]

Ok, then it is the same issue, and it is solved in master by SERVER-9444. Note that we move skips after adjacent limits (adding the skip amount to the limit) since that enables further optimizations. In this case the sort absorbs the limit, since we can use a more optimal sorting algorithm when we know we only need some of the results; that is what led to the crash in 2.4, when we tried to allocate an array large enough to hold that many documents. In 2.5.1 and master, we only do that if the array size would be under some threshold size.
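
A rough shell-arithmetic sketch of the rewrite described in this comment (the variable names are illustrative, not MongoDB internals):

var skipAmount = 4294967294;     // value from the failing pipeline
var limitAmount = 25;
// {$skip: N} followed by {$limit: M} is rewritten as {$limit: N + M} then {$skip: N},
// so the $sort stage is asked to keep this many documents:
var effectiveLimit = skipAmount + limitAmount;   // 4294967319
// In 2.4 the sorter tried to allocate an in-memory array with that many slots,
// which matches the ~96 GB tcmalloc allocation shown in the log.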

Comment by Chen Fisher [ 22/Jul/13 ]

I've managed to reproduce it with the following pipeline (I narrowed it down to this):
sort, skip, limit

example:
db.collection.aggregate([{$sort:{"something":-1}}, {$skip:4294967294}, {$limit:25}])

Comment by Mathias Stearn [ 18/Jul/13 ]

Looking at this ticket again, I'm not sure this is the same issue. I know there was an issue with an unindexed $sort + $limit, but I'm not able to reproduce it with $skip.

Could you provide the exact pipeline that is crashing for you? It sounds like it should crash without any data, but if data is necessary, please provide it or a script that can generate crashing data.

Comment by Mathias Stearn [ 08/Jul/13 ]

This will automatically be fixed by SERVER-9444 which should be pushed out soon.
