[SERVER-17688] Add wiredTiger support to return more than one cursor for parallelCollectionScan Created: 23/Mar/15 Updated: 06/Dec/22 Resolved: 30/Mar/18 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Storage, WiredTiger |
| Affects Version/s: | 3.0.0 |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Scott Hernandez (Inactive) | Assignee: | Backlog - Storage Execution Team |
| Resolution: | Won't Fix | Votes: | 10 |
| Labels: | 3.7BackgroundTask, parallelCollectionScan, wiredtiger | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
| Assigned Teams: |
Storage Execution
|
| Participants: | |
| Description |
|
Add support for parallelCollectionScan to return more than one cursor under the wiredTiger storage engine, as it already does under MMAPv1. |
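The command shape involved is documented for MongoDB 3.x. A minimal sketch of building it from Python (collection name "events" and the cursor count are illustrative; the reply shape in the comment reflects the 3.x documentation):

```python
def parallel_scan_command(coll_name, num_cursors):
    """Build the parallelCollectionScan command document.

    Python 3.7+ dicts preserve insertion order, so the command name
    stays first, as the server requires for command documents.
    """
    return {"parallelCollectionScan": coll_name, "numCursors": num_cursors}


cmd = parallel_scan_command("events", 4)
# Sent via db.command(cmd) in pymongo. The server reply has the shape
#   {"cursors": [{"cursor": {...}, "ok": True}, ...], "ok": 1.0}
# Under MMAPv1 the "cursors" array could contain up to numCursors entries;
# under WiredTiger it contained only one, which is what this ticket asks
# to improve.
```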
| Comments |
| Comment by Ian Whalen (Inactive) [ 30/Mar/18 ] |
|
Planning to deprecate parallelCollectionScan in |
| Comment by Kevin Rice [ 12/Dec/16 ] |
|
We're looking for this capability for many reasons, only one of which is mongodump / mongoexport. We have several batch operations that need to scan and fix up every document in a collection. Today I have to carry a field holding a random number on each record, index that field, and assign each worker process a numeric range of it to update. This is a giant hassle; it would be great if each batch process could simply get records from the collection in parallel, with every worker scanning and skipping to the next record whenever the current one was already processed by another worker. |
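The workaround described above can be sketched roughly as follows. This is an illustrative outline only: the field name `rand`, the worker count, and the update call in the comments are assumptions, not part of the ticket.

```python
import random


def assign_random_tag():
    # Stored on each document at write time, e.g. with pymongo:
    #   coll.update_one({"_id": doc_id},
    #                   {"$set": {"rand": assign_random_tag()}})
    # An index on "rand" then makes the range queries below efficient.
    return random.random()


def range_filters(num_workers, field="rand"):
    # Split [0, 1) into equal sub-ranges; each worker scans one range,
    # so together the workers cover the whole collection exactly once.
    step = 1.0 / num_workers
    return [
        {field: {"$gte": i * step, "$lt": (i + 1) * step}}
        for i in range(num_workers)
    ]
```

Each worker would then run something like `coll.find(range_filters(4)[i])` for its own index `i`. This mimics what multiple parallelCollectionScan cursors would provide natively, at the cost of an extra field and index.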
| Comment by Jamie Ivanov [X] [ 11/Dec/16 ] |
|
Obviously it wasn't added, hence this feature request that has been open for almost two years. This is one of the MANY reasons why MongoDB will never make it in the professional world. I know that I dropped MongoDB because of this and a number of other reasons, a decision only reinforced by the cavalier and unprofessional attitude toward issues like these. |
| Comment by Eurico Doirado [ 10/Dec/16 ] |
|
It looks like it was not implemented in wiredTiger. |
| Comment by Kevin Rice [ 07/Oct/16 ] |
|
With the preferred engine now being WiredTiger (we're certainly using it), this is the direction to go. We have a large collection, several hundred GB, and need to back it up. That currently takes 24 hours with mongodump because, despite plenty of available processors and disk bandwidth, the dump of a single collection is single-threaded. Backing up a database is a common use case; it could be, and should be, fast. |
| Comment by Jamie Ivanov [X] [ 29/Sep/16 ] |
|
This would have been really nice to have for a project right now. It is a fairly pointless feature if one is using wiredTiger. Why advertise a feature that doesn't work there? |