Core Server / SERVER-12579

blocking sort's memory accounting is wrong


Details

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 2.5.5
    • Component/s: Querying
    • Labels: None
    • Assigned Teams: Query Execution

    Description

      We keep count of the memory usage while executing a blocking sort, and the stage will (purposefully) kill itself if we use too much. The problem is that the accounting we're using isn't quite right. Most of the data going into the sort isn't actually an owned object; the BSONObj is almost always an unowned object pointing into the memory-mapped collection. As such, the only overhead incurred is that of the various query-specific wrappers around it, plus the actual pointer to the on-disk mmap'd data.
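A minimal sketch of the distinction being described, with hypothetical names (`FakeObj`, `chargedBytes`, `kWrapperOverhead` are illustrative, not the server's actual types or constants): an unowned object should only be charged for its wrapper and pointer, while an owned copy costs its full size.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-in for a BSONObj-like handle. A real unowned BSONObj
// is just a pointer into the memory-mapped collection file.
struct FakeObj {
    std::size_t objsize;  // size of the underlying BSON data on disk
    bool owned;           // true only if the sort stage holds its own copy
};

// Per-document overhead of the query-specific wrappers plus the pointer
// itself. The value here is purely illustrative.
constexpr std::size_t kWrapperOverhead = sizeof(void*) + 32;

// Charge full size only for owned copies (e.g. after an invalidation forces
// the stage to make its own copy); otherwise charge just the overhead.
std::size_t chargedBytes(const FakeObj& obj) {
    return obj.owned ? obj.objsize + kWrapperOverhead : kWrapperOverhead;
}
```

Under this accounting, a sort over unowned documents is charged only a small per-document overhead rather than the full on-disk size of each item.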

      Anyway, unless we have lots of document invalidations, I don't think we will actually be using much memory when we sort; we just think we are because we account for the full size of the on-disk item. We could change the accounting and greatly raise the limit on how many things we will sort in memory.

      If we do this, do we want the cut-off to be based on memory usage or on the number of documents? If the former, we could probably allow a *lot* more documents in a blocking sort. The latter is a departure from previous behavior but could preserve the same effective behavior (a lower limit).
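The two candidate cut-off policies can be sketched as follows. This is a hypothetical illustration (names like `SortLimits` and `wouldExceed` are invented here); it shows how a stage could check either a byte budget or a document-count budget before buffering one more document.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative limits for a blocking sort; not the real server's parameters.
struct SortLimits {
    std::size_t maxBytes;  // memory-usage-based cut-off
    std::size_t maxDocs;   // document-count-based cut-off
};

// Returns true if buffering one more document, charged at `chargedBytes`
// bytes, would exceed either limit -- i.e. the stage should kill itself.
bool wouldExceed(const SortLimits& lim, std::size_t totalBytes,
                 std::size_t numDocs, std::size_t chargedBytes) {
    return totalBytes + chargedBytes > lim.maxBytes ||
           numDocs + 1 > lim.maxDocs;
}
```

With the corrected (smaller) per-document charge, a pure byte budget admits many more documents; a document-count budget instead pins the limit directly, independent of how the bytes are accounted.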


      People

        Assignee: backlog-query-execution (Backlog - Query Execution)
        Reporter: hari.khalsa@10gen.com
        Votes: 0
        Watchers: 6
