[SERVER-23017] Fast approximate count with predicate Created: 08/Mar/16 Updated: 06/Dec/22
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | Querying |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | New Feature | Priority: | Minor - P4 |
| Reporter: | Tudor Aursulesei | Assignee: | Backlog - Query Optimization |
| Resolution: | Unresolved | Votes: | 6 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Assigned Teams: | Query Optimization |
| Description |
|
I've indexed the queried fields, which makes the count work very fast when nobody is updating documents in the collection. But whenever I start more workers, or whenever there is some load, the count becomes very slow.
Is there any way to get an approximate but FAST count that ignores the updates? It doesn't matter if it reports a few hundred items more or fewer in a collection with 10 million documents. I'm on a sharded cluster, by the way (8 replica sets). |
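A minimal mongosh sketch contrasting the counting options that exist today with what this ticket requests; the collection name `events` and the `status` field are assumptions for illustration:

```javascript
// 1. Exact count with a predicate: can use an index on { status: 1 }
//    (assumed here), but still walks the matching index entries, so it
//    slows down under heavy write load.
db.events.countDocuments({ status: "done" });

// 2. Fast approximate count, but WITHOUT a predicate: reads the
//    collection's metadata count instead of scanning anything.
db.events.estimatedDocumentCount();

// 3. Equivalent metadata-based count via the aggregation pipeline.
db.events.aggregate([{ $collStats: { count: {} } }]);

// What this ticket asks for is effectively "estimatedDocumentCount(),
// but restricted by a filter", which has no built-in equivalent;
// countDocuments() with a filter is always exact.
```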
| Comments |
| Comment by Tudor Aursulesei [ 05/Mar/17 ] |
|
How difficult is this feature to implement? Would it take more than a few hours of work? |
| Comment by Tudor Aursulesei [ 02/Oct/16 ] |
|
Any update on this? Most people who use Mongo just store a large number of rows in a collection, create some indexes on the fields, and expect a reasonably fast count. When you insert, update, or delete data in those rows, the counts become very sluggish. Just google for "mongo slow count" and you'll find lots of reports. |
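A sketch of one application-level workaround that is sometimes used when an approximate figure is acceptable, as tolerated in the description above: maintain a pre-aggregated counter document and read it instead of running a count. The `events`/`counters` collection names and the `"done"` status value are assumptions for illustration:

```javascript
// On insert of an event with status "done", bump a counter document.
db.counters.updateOne(
  { _id: "events.done" },
  { $inc: { n: 1 } },
  { upsert: true }
);

// On delete (or when a document's status changes away from "done").
db.counters.updateOne({ _id: "events.done" }, { $inc: { n: -1 } });

// Cheap read path: a single-document lookup instead of an index scan.
db.counters.findOne({ _id: "events.done" });
```

The trade-off is that the counter can drift if any write path skips the update, which is why it only suits cases where being off by a small amount is acceptable.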
| Comment by Kelsey Schubert [ 10/Mar/16 ] |
|
Hi thestick613, Thank you for opening this improvement request. I am marking this ticket to be considered during the next round of planning. Please continue to watch this ticket for updates. Kind regards, |