Core Server / SERVER-12878

Aggregations that do not return cursors need to clean up internal cursors

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Fixed
    • Affects Version/s: 2.6.0-rc0
    • Fix Version/s: 2.6.0-rc1
    • Component/s: Performance
    • Labels:
    • Environment:
      Ubuntu 13.10
    • Backwards Compatibility:
      Fully Compatible
    • Operating System:
      Linux
    • Steps To Reproduce:
      • start a MongoDB 2.4.9 or 2.6.0-rc0 server
      • on the same server,
        • git clone https://github.com/tmcallaghan/sysbench-mongodb.git
        • cd sysbench-mongodb
        • edit run.simple.bash to modify the benchmark behavior
          • modify "export NUM_DOCUMENTS_PER_COLLECTION=10000000" by changing 10000000 (10 million) to 1000000 (1 million). This keeps the loaded data under 8GB.
        • Note: you must have ant and Java 1.7 installed to run the benchmark application. If using Java 1.6, change target="1.7" to target="1.6" in build.xml. You also need the MongoDB Java driver on your CLASSPATH, e.g. "export CLASSPATH=/home/tcallaghan/java_goodies/mongo-2.11.2.jar:.".
        • ./run.simple.bash

      The benchmark runs in two stages:
      1. 16 collections are created, and each is loaded with 1 million documents. Eight loader threads run simultaneously, each with their own collection.
      2. 64 threads run the Sysbench workload

      Both stages output their respective performance.

      • I'm getting around 30K inserts per second for the loader.
      • For the mixed workload I'm getting 550 TPS on 2.4.9, but less than 10 TPS on 2.6.0-rc0.
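The edit to run.simple.bash in the steps above can be scripted; a minimal Python sketch (using a stand-in file, since the real script comes from the cloned sysbench-mongodb repository):

```python
# Stand-in for the cloned run.simple.bash; the real file comes from the
# sysbench-mongodb repository.
from pathlib import Path

script = Path("run.simple.bash")
script.write_text("export NUM_DOCUMENTS_PER_COLLECTION=10000000\n")

# Drop one zero: 10M -> 1M documents per collection keeps the data under 8GB.
script.write_text(
    script.read_text().replace(
        "NUM_DOCUMENTS_PER_COLLECTION=10000000",
        "NUM_DOCUMENTS_PER_COLLECTION=1000000",
    )
)
print(script.read_text().strip())  # -> export NUM_DOCUMENTS_PER_COLLECTION=1000000
```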

      Description

      Aggregation commands that do not return a cursor need to release the ClientCursor object that they create to do their work. As of 2.6.0-rc0, they do not. This leads to increasing workload for every update and delete operation, as they must scan those zombie cursors and inform them of certain relevant events.
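The cost pattern described above can be illustrated with a toy Python model (names are illustrative, not actual server code): every update or delete must visit each registered cursor, so internal cursors that are never released make writes progressively more expensive.

```python
# Toy model of the leak described above (illustrative only, not server code).
class ClientCursorRegistry:
    def __init__(self):
        self.cursors = []

    def register(self, cursor_id):
        self.cursors.append(cursor_id)

    def release(self, cursor_id):
        self.cursors.remove(cursor_id)

    def notify_of_write(self):
        # Each update/delete scans all live cursors to tell them about
        # relevant events; the returned count stands in for that scan cost.
        return len(self.cursors)

registry = ClientCursorRegistry()

# Buggy behavior (2.6.0-rc0): each aggregation registers an internal
# cursor and never releases it, so the scan cost grows without bound.
for agg in range(1000):
    registry.register(agg)
print(registry.notify_of_write())  # every write now scans 1000 zombie cursors

# Fixed behavior: release the internal cursor when the command finishes.
for agg in range(1000):
    registry.release(agg)
print(registry.notify_of_write())  # scan cost is back to 0
```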

      Original description / steps to reproduce:

      In running benchmarks comparing the performance of 2.4.9 and 2.6.0-rc0, I see a significant drop in my Sysbench workload.

      The two versions perform comparably when loading data (insert only), but in the mixed workload phase (reads, aggregations, updates, deletes, inserts) 2.6.0-rc0 is about 50x slower on an in-memory test. The drop is similar for larger-than-RAM tests as well.

      I have confirmed this on my desktop (single Corei7) and a server (dual Xeon 5560), both running Ubuntu 13.10, with read-ahead set to 32.

      I'm running with Java driver version 2.11.2.

        Attachments

        1. prof-0.svg
          108 kB
        2. prof-8.svg
          106 kB
        3. SERVER-12878.png
          212 kB
        4. SERVER-12878-249.png
          185 kB
        5. SERVER-12878-249-aggr.png
          58 kB
        6. SERVER-12878-249-qry.png
          40 kB
        7. SERVER-12878-249-readlock.png
          48 kB
        8. SERVER-12878-249-write.png
          152 kB
        9. SERVER-12878-249-writelock.png
          401 kB
        10. SERVER-12878-rc1-aggr.png
          125 kB
        11. SERVER-12878-rc1-qry.png
          148 kB
        12. SERVER-12878-rc1-readlock.png
          286 kB
        13. SERVER-12878-rc1-write.png
          108 kB
        14. SERVER-12878-rc1-writelock.png
          175 kB


              People

              • Votes: 0
              • Watchers: 24
