[SERVER-17421] WiredTiger b-tree uses much more memory than wiredTigerCacheSizeGB Created: 01/Mar/15  Updated: 06/May/15  Resolved: 28/Apr/15

Status: Closed
Project: Core Server
Component/s: Storage, WiredTiger
Affects Version/s: 3.0.0-rc10
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Mark Callaghan Assignee: Michael Cahill (Inactive)
Resolution: Duplicate Votes: 0
Labels: wttt
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: Java Source File jmongoiibench.java     File mongo.conf     File o.stat.snappy     File o.stat.zlib     File run.simple.bash.q1     File run.simple.bash.q10    
Issue Links:
Duplicate
duplicates SERVER-17424 WiredTiger uses substantially more me... Closed
Related
related to SERVER-16977 Memory increase trend when running ha... Closed
related to SERVER-17495 Stand alone mongod throughput dropped... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Steps To Reproduce:

1. Run iibench-mongodb with 10 insert threads and 1 query thread to insert 400M docs (bash run.simple.bash.q1)
2. Run iibench-mongodb with 1 insert thread rate limited to 100 docs/second and 10 query threads (bash run.simple.bash.q10)

Participants:

 Description   

After running iibench-mongodb with --wiredTigerCacheSizeGB=70 the process size (vsz) was 87G with snappy and 94G with zlib. I don't know yet whether it will continue to grow. Regardless, 1.24X (snappy) or 1.34X (zlib) of the 70G cache seems like too much, as that is an extra 17G and 24G respectively.
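The overhead ratios quoted above follow directly from the reported numbers; a minimal sketch of the arithmetic:

```python
# Overhead of mongod process size (vsz) relative to the configured
# WiredTiger cache, using the figures reported in this issue.
cache_gb = 70          # --wiredTigerCacheSizeGB
observed = {"snappy": 87, "zlib": 94}  # vsz in GB

for engine, vsz_gb in observed.items():
    ratio = vsz_gb / cache_gb
    extra = vsz_gb - cache_gb
    print(f"{engine}: {ratio:.2f}X cache ({extra}G beyond the {cache_gb}G cache)")
```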



 Comments   
Comment by Michael Cahill (Inactive) [ 28/Apr/15 ]

Thanks for this report. We've had a few similar reports, and I'm consolidating them under SERVER-17424.

Comment by Nick Judson [ 16/Mar/15 ]

Possible dup: SERVER-17386, although that one was seen on Windows.

Comment by Mark Callaghan [ 02/Mar/15 ]

After another 8 hours of testing, WT+snappy has grown to 95.3 GB and WT+zlib to 102.7 GB. I am using a mix of the run.simple.bash* scripts, although too many runs of run.simple.bash.q1 lead to a full disk.

Comment by Mark Callaghan [ 01/Mar/15 ]

I use jemalloc for these tests. For previous release candidates I used tcmalloc and glibc malloc, and they were no better than jemalloc at reducing vsz.

Comment by Mark Callaghan [ 01/Mar/15 ]

Started the 10 insert/1 query test for snappy and the 1 insert/10 query test for zlib. The mongod process for snappy quickly grew from 87G to 94G, and for zlib from 94G to 96G.

Comment by Mark Callaghan [ 01/Mar/15 ]

The bash script requires one argument:
bash run.simple.bash.q1 1000000
bash run.simple.bash.q10 480

Comment by Mark Callaghan [ 01/Mar/15 ]

For snappy, vsz was ~77G after the test with 10 insert threads and 1 query thread. It then grew to 87G when I ran the 1 insert/10 query thread test for 8 hours. I repeated the 1 insert/10 query thread test and vsz stayed at 87G.

For zlib, vsz was ~77G after the 10 insert/1 query thread test. It then grew to 92G after the 1 insert/10 query thread test, and to 94G after repeating the 1 insert/10 query thread test.

One run of the 10 insert/1 query thread inserts 400M docs.
One run of the 1 insert/10 query thread test is for 480 minutes.

Comment by Mark Callaghan [ 01/Mar/15 ]

Attached output from db.serverStatus() for snappy (o.stat.snappy) and zlib (o.stat.zlib), captured when vsz is ~90G.
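For comparing the serverStatus output against the configured cache, the relevant counters live under wiredTiger.cache. A minimal sketch of reading them, using the real WiredTiger statistic names but made-up byte values (not the figures from the attached o.stat.* files):

```python
# Hypothetical excerpt of a db.serverStatus() document. The field names
# ("maximum bytes configured", "bytes currently in the cache") are the
# actual WiredTiger cache statistics; the values below are illustrative.
server_status = {
    "wiredTiger": {
        "cache": {
            "maximum bytes configured": 70 * 1024**3,
            "bytes currently in the cache": 68 * 1024**3,
        }
    }
}

cache = server_status["wiredTiger"]["cache"]
configured = cache["maximum bytes configured"]
in_cache = cache["bytes currently in the cache"]
print(f"cache fill: {in_cache / configured:.1%} of configured maximum")
```

Note that these counters cover only the WiredTiger cache itself; the gap between "maximum bytes configured" and the process vsz is exactly the overhead this issue is about.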

Generated at Thu Feb 08 03:44:22 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.