[SERVER-30633] Large performance regression for large aggregation queries Created: 14/Aug/17 Updated: 09/Oct/17 Resolved: 31/Aug/17 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Aggregation Framework |
| Affects Version/s: | 3.3.9, 3.4.4, 3.5.11 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Daniel Grigg | Assignee: | Mark Agarunov |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | Bug |
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Just a mid-2014 MacBook Pro |
| Attachments: | data.tgz, query1.js, schema screenshot |
| Issue Links: | |
| Operating System: | ALL |
| Steps To Reproduce: | I've attached data.tgz containing a dump of a collection (with a single document) and query1.js, the query demonstrating the bug.
$ docker run -d -p23380:27017 --name mongo-3.3.8 -v $PWD/data:/data mongo:3.3.8
…
$ docker run -d -p23440:27017 --name mongo-3.4.4 -v $PWD/data:/data mongo:3.4.4
…
$ docker run -d -p23511:27017 --name mongo-3.5.11 -v $PWD/data:/data mongo:3.5.11 |
| Participants: | Daniel Grigg, Mark Agarunov, Charlie Swanson |
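To benchmark each server, a minimal timing sketch in the mongo shell could look like the following. Every name here is an assumption: the database (`test`) and collection (`counters`) actually come from the dump in data.tgz, and it is assumed that query1.js assigns the generated pipeline to a variable named `pipeline`.

```javascript
// time-query.js -- run against one container at a time, e.g.:
//   mongo --port 23440 time-query.js
// Assumes data.tgz restores into test.counters (hypothetical names)
// and that query1.js assigns the generated pipeline to `pipeline`.
load("query1.js");

var start = new Date();
var result = db.getSiblingDB("test").counters.aggregate(pipeline).toArray();
print("returned " + result.length + " document(s) in " +
      (new Date() - start) + " ms");
```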
| Description |
|
MongoDB 3.3.9 onwards introduces a very significant performance regression when running large aggregation queries. Query times now appear to grow quadratically with query size. I found the bug via the aggregation framework, but it may be more general. I've only tested up to version 3.5.11.

We recently explored upgrading our current MongoDB 3.0.11 deployment to 3.4.x. However, we found the execution time of one of our queries, which averaged a respectable ~660ms, had jumped roughly 60x to a staggering ~40,000ms. I benchmarked against a few versions to narrow it down. All queries matched a single document:

3.0.15: 633ms

It appears something changed in 3.3.9 which was then partially improved in 3.3.11. None of the JIRA issues over those versions stood out to me, so I assume it's a side effect of some other change.

Our aggregation query happens to be very long, at ~300K lines, and is auto-generated. It essentially performs a long list of $max operations in the $group stage to compute the union of many hyper-log-log (HLL) counters. A similarly long $project stage just renames the fields. The schema has ~75 counters with ~1K buckets per counter, each bucket having its own field. I've attached a sample generated query and a sample collection with a single corresponding document, as well as a screenshot to help visualise the schema.

I ran some benchmarks on 3.4.4, varying both the contents of the document and the number of counters queried in a single aggregation. The former had no effect. For the latter I took some sample times. Note that I used a different environment than for the version comparison above, so the times won't match:

counters    ms    ms/counter

Initially query time looks linear, but it seems to grow quadratically as n becomes larger. I can only assume there's a linear scan in an inner loop somewhere.

We currently rely on being able to query all counters within a few seconds at most, and this is blocking us from upgrading beyond 3.2. One temporary workaround would be issuing multiple smaller queries for those counters, but even then the query performance looks too poor for users. Perhaps you could suggest a better schema or approach for this type of problem: storing many HLLs over time and computing their unions at query time? I'd also love a feature to perform parallel (element-wise) operations on arrays in the $group stage, e.g. given two arrays [1,4,2] and [2,3,1], compute [2,4,2].

The ideal outcome would be for performance to be restored to pre-3.3.9 levels. If you need more information please let me know. Also, thanks for regularly publishing Docker images; they made comparing versions a breeze. |
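To make the shape of the generated pipeline concrete, here is a heavily truncated sketch. Every name in it (`counters`, `someId`, `counter0.b0`, `c0_b0`) is illustrative rather than taken from the attached query:

```javascript
// Illustrative shape of the auto-generated query (all names are made up;
// the real query repeats this pattern for ~75 counters x ~1K buckets,
// which is what produces the ~300K-line pipeline).
db.counters.aggregate([
  { $match: { _id: someId } },            // matches a single document
  { $group: {
      _id: null,
      // one $max per HLL bucket: the union of HLL counters is the
      // bucket-wise maximum across the matched documents
      c0_b0: { $max: "$counter0.b0" },
      c0_b1: { $max: "$counter0.b1" }
      // ... tens of thousands more of these ...
  } },
  { $project: {
      // a similarly long stage that only renames the grouped fields
      "counter0.b0": "$c0_b0",
      "counter0.b1": "$c0_b1"
      // ...
  } }
]);
```

The element-wise array operation asked for above can already be approximated as an expression on 3.4+ using `$zip` and `$map`. This is only a sketch of the idea; it is an expression usable in $project or $addFields, not a $group accumulator, and `$a`/`$b` are hypothetical field names:

```javascript
// Element-wise max of two equal-length arrays,
// e.g. [1,4,2] and [2,3,1] yields [2,4,2].
{ $map: {
    input: { $zip: { inputs: ["$a", "$b"] } },   // [[1,2],[4,3],[2,1]]
    as: "pair",
    in: { $max: "$$pair" }                       // max of each pair
} }
```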
| Comments |
| Comment by Mark Agarunov [ 31/Aug/17 ] |
|
Hello danielgrigg,

Thank you for the additional information. As this behavior looks to be due to the same issue as [linked ticket], I've closed this ticket as a duplicate.

Thanks,
Mark |
| Comment by Daniel Grigg [ 27/Aug/17 ] |
|
Hi Charlie,

I re-ran the attached query with the projection stage removed, and your hypothesis is indeed correct! Without the $project the query completed in 370ms; running the original again still took 48s. |
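For reference, one possible interim workaround suggested by this result (an assumption on the editor's part, not something confirmed in the ticket) is to drop the giant $project entirely and rename the grouped fields in application code:

```javascript
// Sketch: run only the generated $group stage and rename client-side.
// `groupStage` stands for the generated $max stage, and `fieldNameFor`
// is a hypothetical mapping from grouped names (c0_b0) back to schema
// names (counter0.b0).
var doc = db.counters.aggregate([
  { $match: { _id: someId } },
  groupStage
]).toArray()[0];

var renamed = {};
Object.keys(doc).forEach(function (k) {
  if (k !== "_id") renamed[fieldNameFor(k)] = doc[k];
});
```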
| Comment by Charlie Swanson [ 25/Aug/17 ] |
|
Hi all,

I have a suspicion that this was caused by my work in [linked ticket]; the large $project stage may be responsible for the slowdown. In fact, it might be the case that you are experiencing [linked issue]. Could you try re-running the query with the $project stage removed?
| Comment by Daniel Grigg [ 22/Aug/17 ] |
|
Hi @mark.agarunov, thanks for looking into this. |
| Comment by Mark Agarunov [ 14/Aug/17 ] |
|
Hello danielgrigg,

Thank you for the report. I've been able to reproduce this issue using the data and query you've provided, and am currently investigating possible causes for this behavior.

Thanks,
Mark |