[SERVER-21063] MongoDB with WiredTiger can build very deep trees Created: 22/Oct/15  Updated: 16/Nov/21  Resolved: 13/Nov/15

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: 3.0.6, 3.0.7
Fix Version/s: 3.0.8

Type: Bug Priority: Major - P3
Reporter: David Hows Assignee: Michael Cahill (Inactive)
Resolution: Done Votes: 1
Labels: WTplaybook
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Depends
is depended on by SERVER-21442 WiredTiger changes for MongoDB 3.0.8 Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL

 Description   

Under certain circumstances, MongoDB with WiredTiger can build very deep btrees. This has a considerable performance impact, since every additional level of tree depth means additional page traversals when searching for a leaf page.
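
That depth is visible in the WiredTiger section of collection stats. A minimal sketch (the collection name foo is illustrative), showing the statistic discussed in the comments below:

// Print the btree depth that WiredTiger reports for an example collection.
// A balanced tree reports a single-digit depth; affected collections have
// been observed reporting depths in the hundreds.
print(db.foo.stats().wiredTiger.btree["maximum tree depth"]);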



 Comments   
Comment by David Hows [ 20/Jan/16 ]

Correct. You will need to rebuild the collection. Performing a resync on the replica set member is also an option, but may be overkill.

Comment by Tim Hawkins [ 20/Jan/16 ]

Presumably, from the above, the safest course of action after determining that a collection is still impacted post-3.0.8 upgrade would be to dump the collection, drop it, and then reload it?

Comment by David Hows [ 19/Jan/16 ]

Hi All,

Just wanted to clarify a few minor details with regard to this issue.

In MongoDB with WiredTiger, both collection and index data are stored in btree structures. So far, it has been more common for collection data to experience this issue than MongoDB indexes.

When running the .stats() command as suggested by Stuart H above, the btree details seen in the output are for the collection data only. You can also see the stats for a given collection's indexes by running stats with the indexDetails option, as below:

db.foo.stats({indexDetails:true})
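
For example, a minimal sketch (again with the illustrative collection name foo) that prints the maximum tree depth for the collection itself and for each of its indexes:

// Print the collection's btree depth, then the depth of each of its indexes.
var s = db.foo.stats({indexDetails: true});
print("collection: " + s.wiredTiger.btree["maximum tree depth"]);
for (var name in s.indexDetails) {
    print(name + ": " + s.indexDetails[name].btree["maximum tree depth"]);
}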

Comment by Stuart Hall [ 19/Jan/16 ]

Hi Timothy,

Identify collections with indexes that are impacted by this issue:

db.<collection name>.stats().wiredTiger.btree["maximum tree depth"];

This should be a small number (< 10). We've seen values approaching 1000 on large collections with versions < 3.0.8.
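
If you have many databases, you can sweep them all from the shell. A minimal sketch, assuming you are connected to the affected node and treating a depth of 10 or more as suspicious (the threshold is illustrative):

// Flag every collection whose WiredTiger btree depth looks abnormal.
db.getMongo().getDBNames().forEach(function(dbName) {
    var d = db.getSiblingDB(dbName);
    d.getCollectionNames().forEach(function(collName) {
        var s = d.getCollection(collName).stats();
        if (s.wiredTiger && s.wiredTiger.btree &&
            s.wiredTiger.btree["maximum tree depth"] >= 10) {
            print(dbName + "." + collName + ": maximum tree depth " +
                s.wiredTiger.btree["maximum tree depth"]);
        }
    });
});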

After an upgrade to 3.0.8 containing the fix, force an update to the indexes to rebuild the trees:

I'm afraid that you need to do a wipe and rebuild of the associated node. I assume that a repair would do this, but if you use replica sets, it's usually easier to stop one node, wipe the data files, and then restart it, allowing it to rebuild from another node in the replica set (a sketch of this is shown below). Unfortunately, there is no other way to do this, as the bug actually affects how the data is stored within WiredTiger.
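
A minimal sketch of the shutdown step of that resync, run from a shell connected to the node being rebuilt (the filesystem steps happen outside the shell, and the dbpath is whatever the node was started with):

// Cleanly stop the node that will be rebuilt.
db.getSiblingDB("admin").shutdownServer();
// Outside the shell: remove the contents of that node's dbpath, then restart
// mongod. The node will then perform an initial sync from another member.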

Regards,

Stuart H.
(Disclaimer: I am not a MongoDB employee but was involved with the reporting and fixing of the original issue)

Comment by Tim Hawkins [ 19/Jan/16 ]

We are running 3.0.6.

What is the recommended method to:

1. Identify collections with indexes that are impacted by this issue
2. After an upgrade to 3.0.8 containing the fix, force an update to the indexes to rebuild the trees?

Comment by Michael Cahill (Inactive) [ 09/Dec/15 ]

Yes, zhifan, collections created with 3.2 do not have this problem.

Be aware that collections created by MongoDB version 3.0.7 and earlier that are unbalanced are not automatically repaired by switching to 3.2. An initial sync may be required to create balanced trees.

Comment by a zhifan [ 09/Dec/15 ]

May I know if this issue is fixed in the just-released version 3.2?

Comment by Githook User [ 22/Oct/15 ]

Author: Michael Cahill <michael.cahill@mongodb.com> (michaelcahill)

Message: SERVER-21063 Avoid creating deep trees for append-only workloads.

Merge pull request #1988 from wiredtiger/split-deepen-for-append

(cherry picked from commit a98417879da9eacefecd74242fd3924b46e31183)
Branch: mongodb-3.0
https://github.com/wiredtiger/wiredtiger/commit/cb642366f168caadd56bed3c257e4d3e4c5cc4f0
