[SERVER-17622] Total storage size for Journal substantially different for WT and MMAPv1 Created: 16/Mar/15  Updated: 30/Mar/15  Resolved: 17/Mar/15

Status: Closed
Project: Core Server
Component/s: MMAPv1, Storage, WiredTiger
Affects Version/s: 3.0.0
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Alvin Richards (Inactive) Assignee: Unassigned
Resolution: Done Votes: 0
Labels: 28qa
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Backwards Compatibility: Fully Compatible
Operating System: ALL
Steps To Reproduce:

/home/alvin/mongodb-linux-x86_64-3.0.1/bin/mongod --dbpath /data-nfs/db --logpath /data3/logs/db/sysbench/server.log --fork --syncdelay 14400 --storageEngine=mmapv1 --bind_ip 127.0.0.1

sh /home/alvin/alvin-sysbench-new/run.simple.bash 16
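
To compare against WiredTiger, the same load can be repeated with the other engine and the journal directory measured afterwards. A sketch, assuming a separate dbpath (/data-nfs/db-wt) and log file for the WiredTiger run (those names are not from the ticket):

/home/alvin/mongodb-linux-x86_64-3.0.1/bin/mongod --dbpath /data-nfs/db-wt --logpath /data3/logs/db/sysbench/server-wt.log --fork --syncdelay 14400 --storageEngine=wiredTiger --bind_ip 127.0.0.1

sh /home/alvin/alvin-sysbench-new/run.simple.bash 16

# Compare total and journal sizes for the two dbpaths
du -sh /data-nfs/db /data-nfs/db/journal
du -sh /data-nfs/db-wt /data-nfs/db-wt/journal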

Participants:

Description

Problem

Using sysbench to load 16M documents (into 16 collections) and then run a 10-minute workload results in very different storage usage: WiredTiger uses 5GB, MMAPv1 20GB. Almost all of the difference is in the journal directory: 1.8GB (WiredTiger) versus 16GB (MMAPv1).

WiredTiger

 
alvin@bismark:/data-nfs/db$ du -sh
5.0G	.
 
alvin@bismark:/data-nfs/db$ du -sh *
16K	collection-0--6859785001528658125.wt
186M	collection-11--6859785001528658125.wt
186M	collection-14--6859785001528658125.wt
186M	collection-17--6859785001528658125.wt
186M	collection-20--6859785001528658125.wt
186M	collection-23--6859785001528658125.wt
186M	collection-26--6859785001528658125.wt
186M	collection-2--6859785001528658125.wt
186M	collection-29--6859785001528658125.wt
186M	collection-32--6859785001528658125.wt
186M	collection-35--6859785001528658125.wt
186M	collection-38--6859785001528658125.wt
186M	collection-41--6859785001528658125.wt
186M	collection-44--6859785001528658125.wt
186M	collection-47--6859785001528658125.wt
186M	collection-5--6859785001528658125.wt
186M	collection-8--6859785001528658125.wt
9.0M	index-10--6859785001528658125.wt
9.8M	index-12--6859785001528658125.wt
8.9M	index-13--6859785001528658125.wt
9.8M	index-15--6859785001528658125.wt
8.8M	index-16--6859785001528658125.wt
16K	index-1--6859785001528658125.wt
9.8M	index-18--6859785001528658125.wt
9.0M	index-19--6859785001528658125.wt
9.8M	index-21--6859785001528658125.wt
9.0M	index-22--6859785001528658125.wt
9.8M	index-24--6859785001528658125.wt
9.0M	index-25--6859785001528658125.wt
9.8M	index-27--6859785001528658125.wt
9.1M	index-28--6859785001528658125.wt
9.8M	index-30--6859785001528658125.wt
9.1M	index-31--6859785001528658125.wt
9.8M	index-33--6859785001528658125.wt
9.1M	index-34--6859785001528658125.wt
9.8M	index-36--6859785001528658125.wt
9.8M	index-3--6859785001528658125.wt
9.1M	index-37--6859785001528658125.wt
9.8M	index-39--6859785001528658125.wt
9.0M	index-40--6859785001528658125.wt
9.8M	index-42--6859785001528658125.wt
9.1M	index-43--6859785001528658125.wt
9.8M	index-45--6859785001528658125.wt
9.1M	index-46--6859785001528658125.wt
8.9M	index-4--6859785001528658125.wt
9.8M	index-48--6859785001528658125.wt
9.1M	index-49--6859785001528658125.wt
9.8M	index-6--6859785001528658125.wt
9.0M	index-7--6859785001528658125.wt
9.8M	index-9--6859785001528658125.wt
1.8G	journal
16K	_mdb_catalog.wt
0	mongod.lock
32K	sizeStorer.wt
4.0K	storage.bson
12K	_tmp
0	WiredTiger
4.0K	WiredTiger.basecfg
0	WiredTiger.lock
4.0K	WiredTiger.turtle
164K	WiredTiger.wt

MMAPv1

alvin@bismark:/data-nfs/db$ du -sh
20G     .
 
alvin@bismark:/data-nfs/db$ du -sh *
16G     journal
52K     local.0
17M     local.ns
0       mongod.lock
63M     sbtest.0
121M    sbtest.1
254M    sbtest.2
502M    sbtest.3
1003M   sbtest.4
2.0G    sbtest.5
898M    sbtest.6
37M     sbtest.7
17M     sbtest.ns
4.0K    storage.bson
4.0K    _tmp



Comments
Comment by Michael Cahill (Inactive) [ 16/Mar/15 ]

It's true that syncdelay is ignored by WiredTiger. See SERVER-16734 where the decision was made.

I'm not clear whether you consider this a bug in the WiredTiger storage engine. What would you consider a fix for this issue?
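
For reference, the syncdelay value in effect on a running mongod can be confirmed from the shell. A minimal sketch, assuming mongod is listening on the default port on localhost (getParameter and setParameter are standard server commands; the values shown are from this ticket's repro):

# Show the current syncdelay (expected: 14400 for the run above)
mongo --eval 'printjson(db.adminCommand({ getParameter: 1, syncdelay: 1 }))'

# It can also be changed at runtime, without a restart
mongo --eval 'printjson(db.adminCommand({ setParameter: 1, syncdelay: 60 }))'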

Comment by Alvin Richards (Inactive) [ 16/Mar/15 ]

On MMAPv1, --syncdelay 14400 causes the journal directory to be pruned only after this limit has been exceeded. Reducing it to the default (60 seconds) resulted in a 2GB journal directory, similar to WiredTiger's.

It appears that WiredTiger does not use this parameter to control how the journal directory is pruned when it is the storage engine.
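
A minimal sketch of the comparison described above; since 60 seconds is the built-in default, the flag could also simply be omitted:

# Same MMAPv1 run, but with the default 60-second syncdelay
/home/alvin/mongodb-linux-x86_64-3.0.1/bin/mongod --dbpath /data-nfs/db --logpath /data3/logs/db/sysbench/server.log --fork --syncdelay 60 --storageEngine=mmapv1 --bind_ip 127.0.0.1

sh /home/alvin/alvin-sysbench-new/run.simple.bash 16

# The journal directory should now stay around 2GB instead of 16GB
du -sh /data-nfs/db/journal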

It's not clear which is the right model here, but clearly MMAPv1 and WiredTiger behave differently.
