[SERVER-9314] cursors that return over 60 million objects are extremely slow Created: 10/Apr/13 Updated: 10/Dec/14 Resolved: 10/Jun/14 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Performance, Querying |
| Affects Version/s: | 2.2.3 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | charity majors | Assignee: | Rui Zhang (Inactive) |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: |
Ubuntu 12.04 |
||
| Operating System: | ALL |
| Participants: |
| Description |
|
It looks like getmores on cursors that return a large number of objects run significantly slower than on cursors that return fewer objects. We noticed this while trying to run mongodump on one of our collections, which has 84M objects. collection.stats() returns: [stats output omitted] If we try to mongodump this collection it takes about 7 hours. If we instead dump the collection in parts (i.e. split the _id space into 4 ranges) and dump each range individually, the total run time is about 1.5 hours. We have another collection whose on-disk size is greater but which has fewer objects; it dumps in about 2 hours. Here is collection.stats() on that collection: [stats output omitted] Experimentally, the point at which performance falls off a cliff is about 60M objects in the result set. |
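The "dump by parts" workaround described above can be sketched as follows. This is a minimal illustration, not from the ticket: it assumes a collection whose `_id` values are standard ObjectIds (whose leading 4 bytes are a Unix timestamp), splits the timestamp range into N contiguous sub-ranges, and builds one `mongodump --query` command per range. The helper names (`objectid_for_timestamp`, `split_dump_commands`) are hypothetical, and the exact `--query` syntax accepted by mongodump varies by tool version (newer versions expect extended JSON, as shown here).

```python
def objectid_for_timestamp(ts):
    """Smallest possible ObjectId hex string whose embedded
    timestamp equals the given Unix timestamp: 4 timestamp
    bytes followed by 8 zero bytes."""
    return format(int(ts), "08x") + "0" * 16

def split_dump_commands(db, coll, start_ts, end_ts, parts=4):
    """Split [start_ts, end_ts) into `parts` contiguous _id ranges
    and return one mongodump command line per range."""
    step = (end_ts - start_ts) / parts
    cmds = []
    for i in range(parts):
        lo = objectid_for_timestamp(start_ts + i * step)
        hi = objectid_for_timestamp(start_ts + (i + 1) * step)
        query = ('{"_id": {"$gte": {"$oid": "%s"}, '
                 '"$lt": {"$oid": "%s"}}}' % (lo, hi))
        cmds.append('mongodump --db %s --collection %s '
                    "--query '%s' --out dump_part%d" % (db, coll, query, i))
    return cmds
```

Because the ranges are half-open and contiguous, the per-part dumps cover every document exactly once and can be run in parallel, which is presumably why the reporter saw the total time drop from ~7 hours to ~1.5 hours.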
| Comments |
| Comment by Ramon Fernandez Marina [ 10/Jun/14 ] |
|
We haven't heard back from you for some time, so I'm going to mark this ticket as resolved. If this is still an issue for you, feel free to re-open and provide the additional information that rui.zhang requested. Regards, |
| Comment by Rui Zhang (Inactive) [ 14/May/14 ] |
|
Just a friendly reminder. Could you please provide more details on this? Thanks, |
| Comment by Rui Zhang (Inactive) [ 25/Apr/14 ] |
|
I tried to reproduce this issue, but so far I haven't been able to with my setup. I tested with an 80M-document collection, average document size ~2k, and did not see a significant slowdown when varying the dump size between 20M and 80M documents. Are you still seeing this issue? If so, could you provide more details?
Thanks for your help! |
| Comment by Daniel Pasette (Inactive) [ 10/Apr/13 ] |
|
Thanks for the detailed report; we will attempt to reproduce. |