[SERVER-16977] Memory increase trend when running hammer.mongo with WT Created: 21/Jan/15  Updated: 01/Aug/15  Resolved: 30/Jul/15

Status: Closed
Project: Core Server
Component/s: Storage, WiredTiger
Affects Version/s: 2.8.0-rc5
Fix Version/s: None

Type: Bug Priority: Critical - P2
Reporter: Eitan Klein Assignee: Eitan Klein
Resolution: Incomplete Votes: 1
Labels: 28qa, wttt
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Topology: 3-node replica set
OS: Windows 2012
Traffic:
hammer.mongo

Command line below:
-initdb=false -server ip:port -profile=INSERT -totaltime=52000 -worker=90 -rps=0
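
As a rough sketch only (not the exact commands used in this investigation), resident memory and WT cache usage on the server under test can be sampled from the mongo shell; the field names come from serverStatus, and the 1-second interval is arbitrary:

mongo ip:port --eval "while (true) { var s = db.serverStatus(); print(s.mem.resident + ' MB resident / ' + s.wiredTiger.cache['bytes currently in the cache'] + ' bytes in WT cache'); sleep(1000); }"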


Attachments: PNG File 304-Insertonly.png     Text File 34844-diff.txt     PNG File Cache_overhead=30%.png     PNG File CompareBetweenFrashToLoaded.png     PNG File Defualt.png     File Report150131.vspx     JPEG File WT-HammarMongo.jpg     File WT_Insert_AfterLoad.vsps     JPEG File WT_Load1.jpg     HTML File iLnx.html     PNG File rs3a-slow100ms.png     HTML File ts1.html    
Issue Links:
Related
is related to SERVER-17456 Mongodb 3.0 wiredTiger storage engine... Closed
is related to SERVER-16902 wt cache: maximum page size at evicti... Closed
is related to SERVER-16941 Cache can grow indefinitely under Wir... Closed
is related to SERVER-17421 WiredTiger b-tree uses much more memo... Closed
is related to SERVER-17424 WiredTiger uses substantially more me... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Participants:

 Description   

Running on a pre-RC6 build from 1/20/2015.

We investigated the following:

1) We used a private build with a separate heap for WT. In this investigation VMMap indicates that the majority of the memory increase happens inside WT code.
2) We tried to use WT stats, but the tool does not seem to be working / usable for this workload.
3) We reduced the WT cache size to 1 GB and observed that memory usage can still exceed this threshold (see the sketch after this list).
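For reference on (3), a minimal sketch of how a 1 GB WiredTiger cache cap can be applied when starting mongod (the replica set name and dbpath below are placeholders, not taken from this environment):

mongod --replSet rs0 --dbpath <dbpath> --storageEngine wiredTiger --wiredTigerCacheSizeGB 1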

Next steps:

1) Reduce the amount of load and verify whether memory/resource usage stabilizes (i.e., whether the eviction process can keep up with 4 worker threads instead of 90)
2) If the above stabilizes memory usage, we will consider finding a way to tune the number of eviction threads in WT (one possible approach is sketched below)
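For step 2, one hedged possibility (an assumption, not a confirmed tuning path) is to pass an eviction thread setting through to WiredTiger via the engine config string; the thread counts here are illustrative only:

mongod --storageEngine wiredTiger --wiredTigerEngineConfigString "eviction=(threads_min=4,threads_max=4)"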



 Comments   
Comment by David Daly [ 27/Apr/15 ]

Linking possibly related memory growth tickets.

Comment by Eitan Klein [ 17/Apr/15 ]

keith.bostic Thanks for the update. I will re-run with the most recent builds and see if I can spot more data, and will update this ticket with it.

Comment by Eitan Klein [ 09/Mar/15 ]

See the Default graph (cache size configured to 1 GB; memory grew above 2.5 GB).

In the second bitmap (cache overhead = 30%), memory is capped at a level above the cache size, but it appears to be below where it used to be (at least it demonstrates being bounded by an upper limit).
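
For context, a hedged guess at how a "cache overhead = 30%" run could be configured (assuming WiredTiger's cache_overhead setting is passed through the engine config string; this is not confirmed by the ticket):

mongod --storageEngine wiredTiger --wiredTigerCacheSizeGB 1 --wiredTigerEngineConfigString "cache_overhead=30"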

Comment by Eitan Klein [ 08/Feb/15 ]

The issue has not been resolved; fresh results from RC8:

https://docs.google.com/a/10gen.com/document/d/15NP6DcQPuE3-yTvSKqcsvU77FTndCKCZ6kZmMZKClBI/edit

Comment by Eitan Klein [ 01/Feb/15 ]

Summary

Comment by Eitan Klein [ 21/Jan/15 ]

Summary of meeting between Keith and Eitan
