[SERVER-17183] Btree depth for oplog grows faster than expected Created: 04/Feb/15  Updated: 17/Mar/15  Resolved: 12/Feb/15

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: 3.0.0-rc7
Fix Version/s: 3.0.0-rc9, 3.1.0

Type: Bug Priority: Major - P3
Reporter: Daniel Pasette (Inactive) Assignee: Keith Bostic (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: deep_tree.js
Issue Links:
Related
Backwards Compatibility: Fully Compatible
Operating System: ALL

Description

While doing perf testing of an insert-only workload, I noticed that the oplog collection's btree was deepening faster than expected: after only ~1 minute the tree depth grows to 4, and after ~2 minutes it grows to 5.

I'm not sure this is a problem per se, but it seems abnormal.

I couldn't reproduce this with an oplog size of 1GB or 3GB, but it happens very quickly with 5GB. Tested with rc7 and the latest master (git hash: e4c60053b2967e16f765fa25d16aa6d629faa196).

Running mongod with --master and --oplogSize 5000:

./mongod --dbpath /home/dan/wt-data/ --storageEngine wiredTiger --master --oplogSize 5000

Run the workload:

mongo deep_tree.js

Tracking the btree depth here:

> use local
switched to db local
> while(1) {print(""+ Date()+" " + db.oplog.$main.stats().wiredTiger.btree['maximum tree depth']);sleep(5000)}
Wed Feb 04 2015 15:32:58 GMT-0500 (EST) 3
Wed Feb 04 2015 15:32:59 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:00 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:05 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:10 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:15 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:20 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:25 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:30 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:35 GMT-0500 (EST) 3
Wed Feb 04 2015 15:33:40 GMT-0500 (EST) 4
.
.
.
Wed Feb 04 2015 15:34:37 GMT-0500 (EST) 5
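
A slightly expanded variant of the loop above can poll other btree counters alongside the depth. This is only a sketch: of the field names used, only 'maximum tree depth' is confirmed by the session above; the page-count names are assumptions that may differ across WiredTiger versions, so the sketch dumps the full stats block first to check.

// Sketch: poll oplog btree depth plus assumed page-count stats every 5 seconds.
var oplog = db.getSiblingDB('local').oplog.$main;
printjson(oplog.stats().wiredTiger.btree);  // check which field names actually exist
while (1) {
    var bt = oplog.stats().wiredTiger.btree;
    print(Date() + " depth=" + bt['maximum tree depth'] +
          " internal=" + bt['row-store internal pages'] +  // assumed field name
          " leaf=" + bt['row-store leaf pages']);          // assumed field name
    sleep(5000);
}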

The workload is a dead-simple insertion of identical 565-byte docs.
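
The attached deep_tree.js is not reproduced here; a minimal stand-in, assuming the workload is just a tight insert loop of a fixed document padded to roughly 565 bytes of BSON (with --master, each insert is also written to local.oplog.$main), might look like:

// Hypothetical stand-in for the attached deep_tree.js, not the actual script.
var pad = new Array(530).join('x');   // pads the doc to roughly 565 bytes including the shell-generated _id
while (1) {
    db.foo.insert({payload: pad});    // fresh object each time so the shell assigns a new _id
}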



Comments
Comment by Daniel Pasette (Inactive) [ 12/Feb/15 ]

Fixed in wiredTiger drop:

commit 94cb08d1d1a14733ebe875f941f4ec1eb8a44b91
Author: Dan Pasette <dan@10gen.com>
Date:   Thu Jan 29 11:44:16 2015 -0500
 
    Import wiredtiger-wiredtiger-mongodb-2.8-rc6-73-g97b55e3.tar.gz from wiredtiger branch mongodb-2.8

Comment by Daniel Pasette (Inactive) [ 11/Feb/15 ]

This will come with the next WT drop

Comment by Keith Bostic (Inactive) [ 09/Feb/15 ]

What's happening here is that periodically the inserting thread gets far enough ahead of the removing thread that the root page fills up and we split it. As Dan noticed, this is going to be tuning-sensitive: with a different workload or a differently sized cache, the situation won't reproduce. That said, it's a workload we should handle better, and improvements here will help with other workloads.
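
A toy model of that mechanism (an illustration only, not WiredTiger code): the inserter appends leaf pages at the right edge of the tree while the remover trims them from the left, so the number of live leaves is roughly inserted minus removed, and depth grows with log_fanout(live leaves). If the remover lags, live leaves pile up under the root until it exceeds its fanout and splits.

// Toy model, assuming a fixed internal-page fanout; not WiredTiger's actual logic.
var FANOUT = 100;                      // assumed fanout per internal page
function depth(liveLeaves) {
    var d = 2;                         // root + one level of leaves
    var capacity = FANOUT;
    while (liveLeaves > capacity) {    // root overflows: add a level
        capacity *= FANOUT;
        d += 1;
    }
    return d;
}
print(depth(50));      // 2: remover keeps pace, everything fits under the root
print(depth(5000));    // 3: remover lags, the root has split once
print(depth(500000));  // 4: lagging further adds another level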

This problem is significantly improved by WiredTiger issue #1649, which changes WiredTiger's eviction algorithms to evict empty pages as soon as possible (even if they are internal pages).

I think there's still more work to be done here, so I'm not resolving the issue.
