Journal writes are always compressed and then padded to an 8KB boundary. When a write is small enough (less than ~8KB today), compression cannot shrink it below a single padded block, so it doesn't need to be compressed at all. Compression adds latency and CPU overhead that make some workloads slower.
Disabling compression gives me ~10% more inserts/second on a workload whose journal writes are less than 8KB before compression – http://smalldatum.blogspot.com/2014/03/redo-logs-in-mongodb-and-innodb.html
This is likely to help workloads that use journalCommitInterval:2 and j:1 on writes.
A related JIRA is SERVER-9802.
    void Journal::journal(const JSectHeader& h, const AlignedBuilder& uncompressed) {
        RACECHECK
        static AlignedBuilder b(32*1024*1024);
        /* buffer to journal will be
             JSectHeader
             compressed operations
             JSectFooter
        */
        const unsigned headTailSize = sizeof(JSectHeader) + sizeof(JSectFooter);
        const unsigned max = maxCompressedLength(uncompressed.len()) + headTailSize;
        b.reset(max);
        {
            dassert( h.sectionLen() == (unsigned) 0xffffffff ); // we will backfill later
            b.appendStruct(h);
        }
        size_t compressedLength = 0;
        rawCompress(uncompressed.buf(), uncompressed.len(), b.cur(), &compressedLength);
        verify( compressedLength < 0xffffffff );
        verify( compressedLength < max );
        b.skip(compressedLength);
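One way the change could look, sketched with stand-ins so it compiles on its own (the `journalCompressEnabled` toggle, `writeSection` helper, and `fakeRawCompress` stub are hypothetical, not the actual patch): when compression is disabled, or the section already fits in one padded block, copy the payload verbatim instead of calling the compressor.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical toggle; a real patch would plumb this through a
// server parameter or startup option.
static bool journalCompressEnabled = true;

// Stand-in for the real rawCompress() so this sketch is
// self-contained: it just copies bytes and reports the length.
static void fakeRawCompress(const char* src, std::size_t len,
                            char* dst, std::size_t* outLen) {
    std::memcpy(dst, src, len);
    *outLen = len;
}

// Sketch of the proposed branch inside Journal::journal(): skip the
// compressor when it is disabled or cannot shrink the padded write.
std::size_t writeSection(const char* payload, std::size_t len, char* out) {
    const std::size_t kAlignment = 8192;
    std::size_t written = 0;
    bool fitsOneBlock = len <= kAlignment;  // padded to 8KB regardless
    if (!journalCompressEnabled || fitsOneBlock) {
        std::memcpy(out, payload, len);     // store uncompressed
        written = len;
    } else {
        fakeRawCompress(payload, len, out, &written);
    }
    return written;
}
```

A real change would presumably also need to record in the section header whether the payload was compressed, so that recovery knows whether to run the decompressor when replaying the journal.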
Related to SERVER-9802: Single-threaded journal compression becomes a bottleneck when using "durable" writes (Closed)