[SERVER-16150] Large in memory collection scan very slow with WiredTiger compared to mmapv1 Created: 14/Nov/14 Updated: 23/Feb/15 Resolved: 05/Dec/14
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Performance, Storage |
| Affects Version/s: | 2.8.0-rc0 |
| Fix Version/s: | 2.8.0-rc1 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Adam Comerford | Assignee: | Mathias Stearn |
| Resolution: | Done | Votes: | 0 |
| Labels: | wiredtiger |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: |
| Issue Links: |
| Operating System: | ALL |
| Steps To Reproduce: | Add a large amount of data into a collection (my test data generation is outlined in this gist). Here are the various storage config options used:
To force a collection scan, run the following:
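A hypothetical form of such a command (the collection name, field names, and the `$natural` hint used to force the collection scan are all guesses, not the reporter's actual invocation):

```
// mongo shell, hypothetical reconstruction
db.data.find({ _id: { $gte: start, $lte: end } }).hint({ $natural: 1 }).explain()
```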
Start and end are not really required since this is an explain and contains timing info, but I was also using this to compare to (for example) the touch command, so I wanted apples to apples timing comparisons. |
| Participants: | |
| Description |
This may be "works as designed" (i.e. WiredTiger is simply expected to be slower when traversing large data structures in memory), but I would like to confirm that and quantify the expected behavior if that is the case. While attempting to profile the benefits of compression in terms of bandwidth savings, the performance of the default snappy compression (which delivered decent on-disk compression) looked slower than expected, significantly slower than mmapv1. That led to a round of testing to better understand what was going on. So, I used 4 basic storage engine configurations: mmapv1, WiredTiger with no compression, WiredTiger with snappy (the default), and WiredTiger with zlib.
The only WT config that came close to the mmapv1 performance was zlib, and that was on the read-from-disk test. So I decided to test on SSD rather than spinning media; the result was that everything got a bit faster, but the relative differences remained, with WT still significantly slower.

For my initial testing methodology, since I was trying to demonstrate the benefits of compression for IO bandwidth savings, I had been clearing the caches on the system after each run. Once IO appeared to have no effect, I switched to consecutive runs of the collection scan, which makes the second run entirely in-memory (the collection is <16GiB and the test machine has 32GiB RAM; even with indexes it would fit in memory, but indexes are not in play here).

However, the collection scan was still slow with WiredTiger even when the data was already loaded into RAM. The mmapv1 test dropped from the ~300 second range down to 13 seconds, but the WT testing showed no similar reduction: it did improve, but was still in the hundreds of seconds rather than double digits.

I have tried tweaking cache_size, lsm, directio, and readahead (the last two before I had ruled out IO issues completely), but with no significant improvement.

Basic initial graph attached; I will add detailed timing information, graphs, and perf output below to avoid bloating the description too much.
| Comments |
| Comment by Mathias Stearn [ 05/Dec/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
Further improvements possible with
| Comment by Adam Comerford [ 24/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
In case anyone was concerned about the variance in performance of WT "in-memory" here, it turns out that my data set size was exploding the default (15G) cache size in WiredTiger. Once I increased the cache_size parameter to 24G, the results looked as below, far more consistent for WT in-memory collection scans:
| Comment by Adam Comerford [ 17/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
Summary table, patched (improved CollectionScan) 2.8.0-rc1-pre, SSD based storage:
| Comment by Mathias Stearn [ 15/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
We will be changing CollectionScan::work to call dataFor on the result of curr() before getNext().
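In pseudo code, the reordering (a hedged sketch of the change described above, not the actual patch) looks like:

```
// before: advance first, then fetch; dataFor must re-search for the id
RecordId id = iter->getNext();
obj = iter->dataFor(id);

// after: fetch while the cursor is still positioned on the record
obj = iter->dataFor(iter->curr());
iter->getNext();
```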
| Comment by Alex Gorrod [ 15/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
I believe the issue here is with the code in CollectionScan::work. All code examples below are pseudo code of what is actually in the tree, in an attempt to simplify the explanation:
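A hedged reconstruction of that loop body, based on the explanation that follows (variable names are illustrative, not the actual source):

```
// CollectionScan::work, simplified
RecordId id = _iter->getNext();    // advances the underlying iterator
member->obj = _iter->dataFor(id);  // then fetches the document for id
```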
The code in the WiredTiger iterator WiredTigerRecordStore::Iterator::getNext is doing (in pseudo code):
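A hedged sketch of that getNext behavior (illustrative only):

```
// WiredTigerRecordStore::Iterator::getNext, simplified
RecordId id = _curr;   // the record the cursor is currently positioned on
cursor->next();        // step the WT cursor to the following record
_curr = currentId(cursor);
return id;
```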
Then the code in WiredTigerRecordStore::Iterator::dataFor is doing:
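A hedged sketch of dataFor (illustrative only):

```
// WiredTigerRecordStore::Iterator::dataFor, simplified
if (id != _curr)
    cursor->search(id);  // cursor has moved past id: expensive re-search
return valueAt(cursor);
```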
So the CollectionScan::work method is doing both a cursor->next and a cursor search, because the value it passes to dataFor is no longer the value of curr() in the iterator (we've already stepped past that).
| Comment by Adam Comerford [ 14/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
Here's an at-a-glance timing table for the perf runs above, all performed on Ubuntu 14.04, with 2.8.0-rc0 on SSD storage:
| Comment by Adam Comerford [ 14/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
For reference, here's the perf report for mmapv1 (note: all testing with perf recording is being done on SSD, so this easily eclipses even zlib since the IO bottleneck is gone):
| Comment by Adam Comerford [ 14/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
Here is the snappy perf report: very similar to no compression, and notably slower than zlib even after the data is in memory:
| Comment by Adam Comerford [ 14/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
Some perf data for WiredTiger testing is below. I've only done zlib and non-compressed runs so far:
| Comment by Eliot Horowitz (Inactive) [ 14/Nov/14 ] | |||||||||||||||||||||||||||||||||||||||||||||||
Mathias was looking at this yesterday.