[SERVER-17741] LZ4 compressor for mongod  Created: 25/Mar/15  Updated: 09/Jan/18  Resolved: 03/Aug/15
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Storage, WiredTiger |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | New Feature | Priority: | Minor - P4 |
| Reporter: | Quentin Conner | Assignee: | David Hows |
| Resolution: | Won't Fix | Votes: | 2 |
| Labels: | 32qa |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: | |
| Issue Links: | |
| Participants: | |
| Description |
An LZ4 compressor PR was accepted by wiredtiger/wiredtiger and should reach mongodb/mongo soon. Patches to mongod and the SCons build are needed to integrate this WiredTiger block-compression extension. A patch against 3.0.0-rc11 is attached to this ticket; it patches the vendored WiredTiger library as well as mongod and the SCons build. The src/third_party/wiredtiger portions will not be needed once a newer WiredTiger library is imported. Ping me when/if you are ready to adopt this feature and I can produce a new patch covering just the SCons build and mongod.
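If the patched build were adopted, selecting the compressor would presumably go through the existing WiredTiger block-compressor setting in the mongod config. A sketch of what that might look like, assuming the patch registers the compressor under the name `lz4` (stock MongoDB 3.0 only ships `snappy`, `zlib`, and `none`, so this value is hypothetical):

```yaml
# mongod.conf sketch -- "lz4" is only meaningful with the patched build
# described in this ticket; it is not a valid value in stock MongoDB 3.0.
storage:
  dbPath: /var/lib/mongo
  wiredTiger:
    collectionConfig:
      blockCompressor: lz4   # stock builds accept: none | snappy | zlib
```

Note that `blockCompressor` only affects collections created after the setting takes effect; existing collections keep the compressor they were created with.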
| Comments |
| Comment by Oleg Rekutin [ 09/Feb/17 ] |

CPU usage is improved at comparable compression ratios, and reduced CPU is what interests me. This tends to come into play when a node is catching up with the oplog after being down (or after a backup restore or a copy-data-style initial sync).
| Comment by Michael Cahill (Inactive) [ 03/Aug/15 ] |

The performance results weren't compelling over snappy in our testing. We can revisit this later if we see workloads where snappy is the bottleneck.
| Comment by David Hows [ 09/Jul/15 ] |

Ran a workload as described above. This shows that the LZ4 compressor underperforms considerably compared to snappy, generating only about half the throughput. Results were:

The YCSB workload file was as follows (it generates 102 GB):
| Comment by David Hows [ 06/Jul/15 ] |

Ran some testing with LZ4 r127, snappy, and zlib in MongoDB to compare the times to insert and re-read data.

david.hows, I edited this table to be legible. I'm assuming the numbers are compared to no compression? I'm very interested in the performance of LZ4, but YCSB by default uses random binary data, so I'm not positive it's the best tool for testing this. In my (limited) tests, I've seen YCSB data compress to only about 90% of its original size using snappy, and to about 54% with zlib.