[SERVER-20197] find() on collection with large documents slower on WiredTiger than MMAPv1 Created: 29/Aug/15 Updated: 21/Sep/15 Resolved: 21/Sep/15 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | WiredTiger |
| Affects Version/s: | 3.0.6 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | James Wahlin | Assignee: | Mathias Stearn |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Operating System: | ALL |
| Steps To Reproduce: | 1) Set up a two-member MongoDB 3.0.6 replica set with one WiredTiger member and one MMAPv1 member.
2) Load a data set with large documents.
3) Run the same find() explain twice against both the WiredTiger and the MMAPv1 members. (See the sketch after the Description.)
|
| Sprint: | Quint 9 09/18/15, QuInt A (10/12/15) |
| Participants: | |
| Description |
|
Retrieval of large documents under MongoDB 3.0.6 with WiredTiger is significantly slower than under 3.0.6 with MMAPv1. Running the reproduction on my MacBook, with the find() executed twice to account for cold data, I saw the following execution times:
|
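The original setup, load, and explain scripts did not survive in this export. As a stand-in, here is a minimal sketch of the kind of reproduction the steps describe; the collection name (largeDocs), document size (~1 MB), and document count (1000) are illustrative assumptions, while the mongod flags and shell calls are standard 3.0-era API.

```js
// 1) A plausible mixed-engine replica set (ports and dbpaths are made up):
//    mongod --replSet rs0 --port 27017 --dbpath /data/wt --storageEngine wiredTiger
//    mongod --replSet rs0 --port 27018 --dbpath /data/mm --storageEngine mmapv1
//    ...then rs.initiate() and rs.add("<host>:27018") from the shell.

// 2) Load a data set with large documents (run against the primary).
var padding = new Array(1024 * 1024).join("x"); // ~1 MB string
for (var i = 0; i < 1000; i++) {
    db.largeDocs.insert({ _id: i, payload: padding });
}

// 3) Time a full collection scan twice (first run = cold cache, second = warm).
//    Run against both members; on a secondary, call rs.slaveOk() first.
var res = db.largeDocs.find().explain("executionStats");
print(res.executionStats.executionTimeMillis + " ms");
```

Comparing executionTimeMillis across the two members on the first (cold) and second (warm) runs separates the two effects discussed in the comment below.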
| Comments |
| Comment by Mathias Stearn [ 21/Sep/15 ] |
|
Closing as Gone Away, as the "second find" problem has been resolved by the combination of the linked fixes. The "first read" issue is due to internal design differences that enable features like compression and checksumming: reading uncached data in WiredTiger requires processing the data to make it usable. |
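The comment attributes the cold-read cost to the processing WiredTiger does on uncached data, such as decompression and checksum verification. A hedged way to isolate the compression share of that cost is to rerun the cold read against a collection created with block compression disabled; db.createCollection's storageEngine option is standard 3.0 API, but the collection name here is illustrative.

```js
// Create a collection whose WiredTiger table uses no block compressor,
// then repeat the cold-read timing against it. If decompression dominates
// the first read, the gap to MMAPv1 should narrow (checksumming remains).
db.createCollection("largeDocsUncompressed", {
    storageEngine: { wiredTiger: { configString: "block_compressor=none" } }
});
```

The same setting can be applied server-wide with mongod's --wiredTigerCollectionBlockCompressor none option.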