  Core Server / SERVER-24833

Certain queries are 2000x slower with WiredTiger than with MMAPv1

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Major - P3
    • Affects Version/s: 3.2.7
    • Component/s: WiredTiger
    • Labels: None

      In certain queries where there isn't an index available, WiredTiger takes far too long when compared with MMAPv1.
      In the example below the difference is 6 min vs 0.2 sec.

      Example:

      db.forms.find({'dimension': 2,'degree':2 }).count()
      
      Note that there isn't any document with the key "degree".
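
      For reference, a sketch of how the chosen plan could be inspected on each engine
      (explain with executionStats verbosity; collection and query taken from the example above):

      // show the winning plan and per-stage timings for the count
      db.forms.explain("executionStats").count({ 'dimension': 2, 'degree': 2 })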
      
      
      wiredTiger:
      config:
        engine: 'wiredTiger'
        wiredTiger:
          collectionConfig: 
            blockCompressor: none
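
      For completeness, this is roughly how that snippet sits in a full mongod.conf
      (the dbPath value below is just a placeholder, not our real path):

      storage:
        dbPath: /var/lib/mongodb        # placeholder
        engine: wiredTiger
        wiredTiger:
          collectionConfig:
            blockCompressor: none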
      

      We have also tried with snappy and zlib.
      The filesystem is ext4, and we have also tried XFS without a significant difference (when compared to MMAPv1).

      log entry:
      2016-06-28T05:09:22.454+0000 I COMMAND  [conn4] command hmfs.forms command: count { count: "forms", query: { dimension: 2, degree: 2 } } planSummary: IXSCAN { dimension: -1, field_label: 1, level_norm: -1 } fromMultiPlanner:1 keyUpdates:0 writeConflicts:0 numYields:13831 reslen:62 locks:{ Global: { acquireCount: { r: 27664 } }, Database: { acquireCount: { r: 13832 } }, Collection: { acquireCount: { r: 13832 } } } protocol:op_query 369464ms
      
      MMAPv1
      config:
      engine: 'mmapv1'
      
      log entry:
      2016-06-28T05:10:12.412+0000 I COMMAND  [conn15] command hmfs.forms command: count { count: "forms", query: { dimension: 2, degree: 2 } } planSummary: IXSCAN { dimension: -1, field_label: 1, level_norm: -1 } keyUpdates:0 writeConflicts:0 numYields:447 reslen:62 locks:{ Global: { acquireCount: { r: 896 } }, MMAPV1Journal: { acquireCount: { r: 448 } }, Database: { acquireCount: { r: 448 } }, Collection: { acquireCount: { R: 448 } } } protocol:op_query 181ms 
      

      These two examples were run on the same machine, on two otherwise empty databases where the only difference is the storage engine. In both cases, the collection was restored from the same dump file.
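
      (For reproduction, a restore along these lines should give the same starting point; the dump path is a placeholder:)

      # restore the same dump into each instance; only the mongod storage engine differs
      mongorestore --host localhost --port 27017 --db hmfs /path/to/dump/hmfs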

      Perhaps we are doing something wrong; if so, please enlighten us, as we are planning to switch all our servers to MMAPv1.
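
      As a possible workaround (not a fix for the underlying difference), a compound index on both
      queried fields should let the count be answered from the index alone; a sketch, with the
      index options being our own choice:

      // compound index covering both fields used by the count
      db.forms.createIndex({ dimension: 1, degree: 1 }, { background: true })

      // the original query should then become a cheap index-only count
      db.forms.find({ 'dimension': 2, 'degree': 2 }).count()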

            Assignee: Kelsey Schubert (kelsey.schubert@mongodb.com)
            Reporter: Edgar Costa (edgarcosta)
            Votes: 3
            Watchers: 15

              Created:
              Updated:
              Resolved: