(Optimisation) Scan index in reverse order if skip is large enough


    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None

      Would it be possible to improve the performance of index-backed pagination on large collections?

      Consider a collection of 200,000 documents, paginated with $skip and $limit and indexed on the sort field.

      The first page takes a few milliseconds, but the last page can take a few seconds, because the index is scanned from the start to count off the $skip.
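      As a rough illustration (assuming a pymongo collection named coll and a hypothetical createdAt index, neither of which is named in this ticket), the deep page at the end of such a collection looks like this:

      from pymongo import MongoClient

      # Hypothetical setup: ~200,000 documents indexed on "createdAt".
      coll = MongoClient()["test"]["events"]

      # First page: fast, the index scan stops after 100 keys.
      first_page = list(coll.aggregate([
          {"$sort": {"createdAt": 1}},
          {"$skip": 0},
          {"$limit": 100},
      ]))

      # Last page: slow, the index is walked from its start and 199,900
      # keys are examined and discarded before the 100 returned documents.
      last_page = list(coll.aggregate([
          {"$sort": {"createdAt": 1}},
          {"$skip": 199900},
          {"$limit": 100},
      ]))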

      What if, as an optimization, the query planner checked whether $skip is greater than half the number of indexed documents and, if so, walked the index in reverse, shifting results onto the front of the array instead of pushing them onto the back?

      This would result in never having to scan more than half the index.
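      A minimal sketch of the idea, applied at the application level rather than inside the query planner, assuming pymongo and a single-field sort (the function name paginated_page and the use of estimated_document_count are illustrative assumptions, not part of this proposal):

      from pymongo import ASCENDING, DESCENDING

      def paginated_page(coll, sort_field, skip, limit):
          # Approximation of the proposed optimisation: when the requested
          # page lies in the back half of the collection, walk the index
          # from the other end and flip the page in memory.
          total = coll.estimated_document_count()  # may be stale under concurrent writes
          if skip >= total:
              return []
          if skip <= total // 2:
              # Front half: ordinary forward index scan.
              cursor = coll.find().sort(sort_field, ASCENDING).skip(skip).limit(limit)
              return list(cursor)
          # Back half: the document at ascending position i sits at
          # descending position total - 1 - i, so the page starts at
          # descending offset total - (skip + limit), clamped at 0.
          rev_skip = max(total - (skip + limit), 0)
          rev_limit = min(limit, total - skip)
          cursor = coll.find().sort(sort_field, DESCENDING).skip(rev_skip).limit(rev_limit)
          page = list(cursor)
          page.reverse()  # restore ascending order for the caller
          return page

      Done inside the server, the same arithmetic would let the executor shift documents onto the front of the result buffer, so no page ever costs more than roughly half an index scan.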

            Assignee:
            Unassigned
            Reporter:
            Jean-Samuel Girard
            Votes:
            0
            Watchers:
            3

              Created:
              Updated:
              Resolved: