[SERVER-17157] Seeing pauses in YCSB performance workload with WiredTiger Created: 03/Feb/15 Updated: 11/Sep/15 Resolved: 11/Sep/15 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | WiredTiger |
| Affects Version/s: | 3.0.0-rc7 |
| Fix Version/s: | 3.0.6 |
| Type: | Bug | Priority: | Critical - P2 |
| Reporter: | Brian Towles | Assignee: | Michael Cahill (Inactive) |
| Resolution: | Done | Votes: | 1 |
| Labels: | 28qa, wttt | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Attachments: | |
| Issue Links: | |
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Steps To Reproduce: | mongod is run with the following command line and no additional config file:
This is using thumbtack-ycsb with the following config as the YCSB workload file:
It is run with a YCSB load
immediately followed by the YCSB run phase
Hardware-wise, this is an Amazon Linux c3.4xlarge EC2 instance with 16 cores bound to the mongod instance. The high operationcount is because this particular run is targeted at a longevity test, but the pauses start within the first 200 seconds and continue throughout the length of the run. (An illustrative sketch of the setup follows below.) |
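The original command line and workload file did not survive this export. As a rough illustration only (every flag, path, and parameter value below is an assumption, not a setting taken from this run), a comparable setup might look like:

```sh
# Hypothetical mongod invocation -- the original command line is not preserved here
mongod --dbpath /data/db --storageEngine wiredTiger --logpath /data/mongod.log --fork

# Hypothetical 50/50 read/update YCSB workload file (assumed values)
cat > workloads/mixed5050 <<'EOF'
workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=10000000
operationcount=1000000000
readproportion=0.5
updateproportion=0.5
requestdistribution=zipfian
EOF

# Load phase immediately followed by the run phase
./bin/ycsb load mongodb -P workloads/mixed5050 -p mongodb.url=mongodb://localhost:27017/ycsb -threads 16
./bin/ycsb run  mongodb -P workloads/mixed5050 -p mongodb.url=mongodb://localhost:27017/ycsb -threads 16
```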
| Participants: | |
| Description |
|
During performance tests of a standalone mongod on a dedicated Amazon Linux EC2 instance (c3.4xlarge), I am seeing extended pauses during a mixed 50/50 update and read run. The pauses seem to correspond to high write activity in the general server metrics, but with no corresponding write activity on disk or in the cache.
The benchmark runs a mixed 50/50 workload of updates and reads. The reader threads return in a reasonable time, giving low read-thread latency, but the update threads appear to be blocked; when the updates start up again, their latency shows as very high.
Looking at the metrics, there is a large plateau in general write activity, but no noticeable uptick in write activity in WiredTiger, on disk (xvdb is the data-only disk), or in the cache. I have tested with both a full and a not-full cache, with the same result.
This appears similar to an earlier linked issue. This was actually tested with the b70d96e build (post rc7), but has been showing up in multiple RCs. |
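One way to check whether stalls like these line up with cache pressure or checkpoint activity is to poll serverStatus during the run. A minimal sketch, assuming a mongod on the default port (the counters and interval chosen here are illustrative, not the metrics pipeline used for the report above):

```sh
# Poll a few WiredTiger counters once a second (illustrative; add or drop fields as needed)
while true; do
  mongo --quiet --eval '
    var wt = db.serverStatus().wiredTiger;
    print(new Date().toISOString(),
          "cache bytes:", wt.cache["bytes currently in the cache"],
          "dirty bytes:", wt.cache["tracked dirty bytes in the cache"],
          "checkpoint running:", wt.transaction["transaction checkpoint currently running"]);'
  sleep 1
done
```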
| Comments |
| Comment by Michael Cahill (Inactive) [ 11/Sep/15 ] | ||||||||||||||
|
Agreed, let's resolve this one and open new tickets if we see more pauses during workloads. | ||||||||||||||
| Comment by Alexander Gorrod [ 11/Sep/15 ] | ||||||||||||||
|
I reran the test as defined in the description of this issue with the 3.0.6 release on a c3.4xlarge (as described in the ticket). I don't see any pauses during the 20 minutes of running the test (the initial report stated that pauses started within the first 3 minutes); the lowest throughput observed is 1348 ops/sec. Can we call this one done? If there are other configurations that trigger throughput pauses, they deserve their own tickets. | ||||||||||||||
| Comment by Eitan Klein [ 28/Apr/15 ] | ||||||||||||||
|
I followed up on asya's concern related to zlib with this YCSB workload. I ran the test with my write_through and TCMalloc private binary with zlib compression enabled. I noticed the pauses and took a dump file when I captured 0 throughput; TCMalloc appears to be the primary suspect – see the attached thread dumps. CC: mark.benvenuto | ||||||||||||||
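For reference, zlib block compression can be enabled on a stock mongod with the collection block compressor option. A sketch of that piece of the setup (the write_through/TCMalloc private binary is not public, so only the compression flag is shown, and the paths are assumptions):

```sh
# Illustrative: start mongod with zlib block compression for collections
mongod --dbpath /data/db --storageEngine wiredTiger \
       --wiredTigerCollectionBlockCompressor zlib \
       --logpath /data/mongod.log --fork
```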
| Comment by Eitan Klein [ 26/Apr/15 ] | ||||||||||||||
|
WT-No Change - BaseLine YCSB output | ||||||||||||||
| Comment by Eitan Klein [ 26/Apr/15 ] | ||||||||||||||
|
On Windows (AWS environment, with the repro David mentioned above), I took a dump file and noticed all threads waiting on the FlushFileBuffers call (see attached). I decided to remove FlushFileBuffers in the experiment described in the table below (checkpoints still happen; the idea is to achieve durability without FlushFileBuffers).
FYI, the MSDN article on this topic is https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx#caching_behavior: "A write-through request via FILE_FLAG_WRITE_THROUGH also causes NTFS to flush any metadata changes, such as a time stamp update or a rename operation, that result from processing the request. For this reason, the FILE_FLAG_WRITE_THROUGH flag is often used with the FILE_FLAG_NO_BUFFERING flag as a replacement for calling the FlushFileBuffers function after each write, which can cause unnecessary performance penalties. Using these flags together avoids those penalties. For general information about the caching of files and metadata, see File Caching." | ||||||||||||||
| Comment by David Daly [ 22/Apr/15 ] | ||||||||||||||
|
The Linux AMI I used had kernel 3.14. We were also able to reproduce against a Windows box. | ||||||||||||||
| Comment by Asya Kamsky [ 22/Apr/15 ] | ||||||||||||||
|
What's the kernel version on the repros? I ask because I discovered that 3.13 behaves very poorly during these slow-downs (which seem to be all checkpoint-activity related in my repros), but 3.16.0 is much better behaved: it slows down in the same places but does not completely stall and become unresponsive. | ||||||||||||||
| Comment by David Daly [ 22/Apr/15 ] | ||||||||||||||
|
Copied repro steps from comment to steps to reproduce above. | ||||||||||||||
| Comment by Michael Cahill (Inactive) [ 22/Mar/15 ] | ||||||||||||||
|
brian.towles, are you able to gather statistics during a run with 3.0.1 and upload those logs? | ||||||||||||||
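One possible way to capture WiredTiger statistics during a run is to have the engine log them periodically via wiredTigerEngineConfigString; a sketch under that assumption (the interval and statistics level are illustrative, not settings requested in this ticket):

```sh
# Illustrative: dump WiredTiger statistics every 30 seconds alongside the data files
mongod --dbpath /data/db --storageEngine wiredTiger \
       --wiredTigerEngineConfigString "statistics=(fast),statistics_log=(wait=30)"
```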
| Comment by Brian Towles [ 20/Mar/15 ] | ||||||||||||||
|
alexander.gorrod The last runs I did with 3.0.1 were still seeing pauses of up to 25 to 30 seconds in the update sections. | ||||||||||||||
| Comment by Alexander Gorrod [ 20/Mar/15 ] | ||||||||||||||
|
brian.towles@mongodb.com I just re-ran the original workload using an AWS c3.4xlarge machine and MongoDB 3.0.1. I'm not seeing the performance dropouts you report. Could you retest with the latest release? | ||||||||||||||
| Comment by Asya Kamsky [ 22/Feb/15 ] | ||||||||||||||
|
rc9 seems to have improved this use case. | ||||||||||||||
| Comment by Brian Towles [ 17/Feb/15 ] | ||||||||||||||
|
darren.wood@10gen.com Yes, I am only running the default config with the storageEngine changed to WT. The command line used is as above:
With my YCSB run (which is straight thumbtack YCSB) it's showing up around 242 seconds. Running get_pauses_with_updates.js (after the data has been loaded using load_pause_data.js), I was able to replicate it within 120 to 150 seconds. | ||||||||||||||
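The attached scripts can be driven from the mongo shell; a sketch of that sequence (the port and database name are assumptions):

```sh
# Illustrative: load the data set, then run the update workload that shows the pauses
mongo localhost:27017/test load_pause_data.js
mongo localhost:27017/test get_pauses_with_updates.js
```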
| Comment by Daniel Pasette (Inactive) [ 06/Feb/15 ] | ||||||||||||||
|
Would be interested in re-running this workload with the fix for | ||||||||||||||
| Comment by Daniel Pasette (Inactive) [ 05/Feb/15 ] | ||||||||||||||
|
I'm also able to reproduce pauses and very large throughput dropoffs with a simple JavaScript test when running with an oplog present. The setup is simple: the journal is symlinked to a separate spindle, and an oplog of 5GB is created.
Run the attached script, set to use 10 worker threads and run for 1000 seconds; both can be easily configured in the script:
Throughput is monitored using mongostat and the timeseries tools. WT stats (via the timeseries tools) show an absurdly high number of pages walked for eviction and of page reconciliation calls recorded on the oplog collection. A sketch of this setup follows below. |
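A sketch of that setup, assuming the oplog is created by running mongod as a master with a 5GB oplog and the journal lives on a second spindle (the paths, port, and script file name are placeholders, not values from the ticket):

```sh
# Illustrative repro setup: journal symlinked to a separate spindle, 5GB oplog present
mkdir -p /data/db /mnt/spindle2/journal
ln -s /mnt/spindle2/journal /data/db/journal

# Start mongod with an oplog (--replSet plus rs.initiate() would also work)
mongod --dbpath /data/db --storageEngine wiredTiger --master --oplogSize 5120 \
       --logpath /data/mongod.log --fork

# Run the attached script (the file name here is a placeholder for the ticket attachment);
# worker count and duration are configured inside the script
mongo localhost:27017/test pause_repro.js

# Monitor throughput from another terminal
mongostat 1
```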