[SERVER-21063] MongoDB with WiredTiger can build very deep trees Created: 22/Oct/15 Updated: 16/Nov/21 Resolved: 13/Nov/15 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | WiredTiger |
| Affects Version/s: | 3.0.6, 3.0.7 |
| Fix Version/s: | 3.0.8 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | David Hows | Assignee: | Michael Cahill (Inactive) |
| Resolution: | Done | Votes: | 1 |
| Labels: | WTplaybook |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Participants: | |
| Description |
|
Under certain circumstances, MongoDB with WiredTiger can build very deep trees. This has a considerable performance impact, as greater tree depth means more work on every lookup that traverses from the root to a leaf page. |
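A minimal mongo-shell sketch of how the depth can be inspected, assuming a collection named foo (a placeholder); the statistic is reported in the wiredTiger.btree section of the collection stats output:

```javascript
// Report the WiredTiger tree depth for one collection.
// Healthy collections stay shallow; affected collections have been
// observed with depths approaching 1000 (see comments below).
var s = db.foo.stats();
print("maximum tree depth: " + s.wiredTiger.btree["maximum tree depth"]);
```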
| Comments |
| Comment by David Hows [ 20/Jan/16 ] | |
|
Correct. You will need to rebuild the collection. Performing a resync on the replica set member is also an option, but may be overkill. | |
| Comment by Tim Hawkins [ 20/Jan/16 ] | |
|
Presumably, from the above, the safest course of action after determining that a collection is still impacted post-3.0.8 upgrade would be to dump the collection, drop it, and then reload it? | |
| Comment by David Hows [ 19/Jan/16 ] | |
|
Hi all, just wanted to clarify a few minor details with regard to this issue. In MongoDB with WiredTiger, both collection and index data are stored in btree structures. At this stage, it has been more common for the collection data to experience this issue than for the indexes. When running the .stats() command as suggested by Stuart H above, the btree details seen in the output are for the collection data only. You can see the per-index stats for a given collection by running stats with the indexDetails option as below:
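A minimal sketch, with foo as a placeholder collection name:

```javascript
// indexDetails: true adds per-index WiredTiger statistics to the output.
db.foo.stats({ indexDetails: true })
// Each index's depth should then appear under
// indexDetails.<indexName>.btree["maximum tree depth"].
```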
| |
| Comment by Stuart Hall [ 19/Jan/16 ] | |
|
Hi Timothy,
You can check the tree depth for a collection by running db.collection.stats() and looking at the "maximum tree depth" value in the btree section of the output. This should be a small number (< 10). We've seen values approaching 1000 on large collections with versions < 3.0.8.
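A sketch of how that check could be scripted across a whole database, assuming every collection is stored in WiredTiger; the threshold of 10 follows the rule of thumb above:

```javascript
// Scan every collection in the current database and flag deep trees.
db.getCollectionNames().forEach(function (name) {
    var stats = db.getCollection(name).stats();
    var depth = stats.wiredTiger.btree["maximum tree depth"];
    if (depth >= 10) {
        print(name + ": maximum tree depth " + depth);
    }
});
```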
I'm afraid that you need to do a wipe and rebuild of the associated node. I assume that a repair would do this, but if you use replica sets, it's usually easier to stop one node, wipe the data files, and then restart it, allowing it to rebuild from another node in the replica set. Unfortunately, there is no other way to do this, as this bug affects how the data is stored within WiredTiger. Regards, Stuart H. | |
| Comment by Tim Hawkins [ 19/Jan/16 ] | |
|
We are running 3.0.6. What is the recommended method to: 1. Identify collections with indexes that are impacted by this issue | |
| Comment by Michael Cahill (Inactive) [ 09/Dec/15 ] | |
|
Yes, zhifan, collections created with 3.2 do not have this problem. Be aware that collections created by MongoDB version 3.0.7 and earlier that are unbalanced are not automatically repaired by switching to 3.2. An initial sync may be required to create balanced trees. | |
| Comment by a zhifan [ 09/Dec/15 ] | |
|
May I know if this issue is fixed in the just-released version 3.2? | |
| Comment by Githook User [ 22/Oct/15 ] | |
|
Author: Michael Cahill <michael.cahill@mongodb.com> (michaelcahill)
Message: Merge pull request #1988 from wiredtiger/split-deepen-for-append (cherry picked from commit a98417879da9eacefecd74242fd3924b46e31183) |