  WiredTiger / WT-6796

Deleting documents takes longer when there are many updates in the history store.

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 4.4.1
    • Component/s: None
    • Labels: None
    • Story Points: 8
    • Sprint: Storage - Ra 2021-05-31, Storage - Ra 2021-06-14

      I see slower performance when deleting documents after growing the history store. I don't know to what extent this behavior is expected or unexpected.

      This came up in the eMRCf testing described at WT-6776.

      Setup: I start with 1 million smallish (50-200 byte) documents in a single collection on a PSA replica set. After creating the documents, I shut down the secondary. Because I am using the default enableMajorityReadConcern=true, the primary stops advancing the stable timestamp.
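
      For reference, the key part of the setup can be approximated with a short pymongo sketch. The ports and the use of pymongo here are illustrative assumptions; the actual data load and operations were driven by Genny.

        # Assumed layout: primary on 27017, secondary on 27018, arbiter on 27019.
        from pymongo import MongoClient
        from pymongo.errors import ConnectionFailure

        # Shut down the secondary. With the default enableMajorityReadConcern=true,
        # the majority commit point stops advancing once the secondary is gone, so
        # the primary pins the stable timestamp and older versions must be retained.
        secondary = MongoClient("localhost", 27018, directConnection=True)
        try:
            secondary.admin.command("shutdown")
        except ConnectionFailure:
            pass  # expected: the server drops the connection while shutting down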

      I compare two different scenarios starting from this setup.

      1. Immediately delete 100,000 randomly chosen documents (1/10 of the collection).
      2. Perform 8 million updates, 1 million inserts, and 1 million deletes, keeping the collection size at 1 million documents but growing the history store to 3.6 GB (see the sketch after this list). Leave the system idle for 5 minutes so any delayed activity or checkpoints can quiesce. Then delete 100,000 documents at random, as in the first case.
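
      A minimal sketch of the scenario 2 churn phase, assuming a pymongo client and a collection indexed on a "key" field. The connection string, namespace, payload size, and single-threaded loop are illustrative placeholders; the real workload is a Genny configuration run at higher concurrency.

        import os
        import random
        from pymongo import MongoClient

        coll = MongoClient("mongodb://localhost:27017")["test"]["Collection0"]  # assumed namespace
        NUM_KEYS = 1_000_000

        # 8M updates: each one supersedes the prior version of a document, and the
        # superseded version is retained behind the pinned stable timestamp.
        for _ in range(8_000_000):
            k = random.randrange(NUM_KEYS)
            coll.update_one({"key": k}, {"$set": {"payload": os.urandom(100)}})

        # 1M delete/insert pairs: the collection stays at ~1M documents while the
        # history store keeps growing (to ~3.6 GB in this test).
        for _ in range(1_000_000):
            k = random.randrange(NUM_KEYS)
            coll.delete_one({"key": k})
            coll.insert_one({"key": k, "payload": os.urandom(100)})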

      In both cases Genny reports the latency of each delete operation. In the second case, the deletes take noticeably longer to complete. I'll add the data in a comment.

      Note that this workload is being driven through MongoDB, so each "delete" here is a deleteOne request: we look up a randomly chosen key and delete the oldest document with that key. The collection is indexed on the key field, so the "delete" time includes the lookup as well as removing both the document and the index entries that point to it. And, of course, since the stable timestamp isn't advancing, some (all?) of these deleted values get pushed into the history store.
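
      For concreteness, each timed "delete" is roughly the following (names reused from the sketch above are assumptions; Genny is what actually issues and times the operation, and it targets the oldest document for the chosen key):

        import random
        import time
        from pymongo import MongoClient

        coll = MongoClient("mongodb://localhost:27017")["test"]["Collection0"]  # assumed namespace
        NUM_KEYS = 1_000_000

        # One timed delete: pick a random key and issue a deleteOne. The measured
        # latency covers the index lookup plus removal of the document and its
        # index entries; with the stable timestamp pinned, the removed version may
        # be pushed into the history store rather than discarded (per the note above).
        k = random.randrange(NUM_KEYS)
        start = time.monotonic()
        coll.delete_one({"key": k})
        latency_ms = (time.monotonic() - start) * 1000.0
        print(f"deleteOne latency: {latency_ms:.2f} ms")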

      I ran these tests with MongoDB 4.4.1.

      Attachments:

        1. 4.4.4-rc0.tgz (9.97 MB, Keith Smith)
        2. 444rc0_delete.png (58 kB, Keith Smith)
        3. 444rc0_update.png (94 kB, Keith Smith)
        4. Benchmark timeline.png (61 kB, Keith Smith)
        5. comparison.png (334 kB, Bruce Lucas)
        6. csv.tar.gz (135.10 MB, Keith Smith)
        7. declining-delete-performance-low-io.png (370 kB, Kelsey Schubert)
        8. del.png (395 kB, Bruce Lucas)
        9. delete.jpg (39 kB, Keith Smith)
        10. HS Spikes.png (51 kB, Keith Smith)
        11. image-2021-05-24-14-10-03-024.png (149 kB, Haseeb Bokhari)
        12. image-2021-05-24-14-11-07-725.png (195 kB, Haseeb Bokhari)
        13. image-2021-05-24-14-27-05-030.Edit.png (162 kB, Keith Smith)
        14. image-2021-05-24-14-27-05-030.png (740 kB, Haseeb Bokhari)
        15. image-2021-05-25-15-33-36-855.png (392 kB, Haseeb Bokhari)
        16. image-2021-05-25-15-35-03-462.png (413 kB, Haseeb Bokhari)
        17. image-2021-05-27-14-38-19-242.png (282 kB, Haseeb Bokhari)
        18. image-2021-05-28-10-55-41-100.png (243 kB, Haseeb Bokhari)
        19. image-2021-05-31-16-27-44-375.png (593 kB, Haseeb Bokhari)
        20. metrics.2020-11-02T15-38-21Z-00000 (7.67 MB, Keith Smith)
        21. Queuing Delay.png (107 kB, Keith Smith)
        22. run_with_log.tgz (10.71 MB, Keith Smith)
        23. scatter-1.jpg (52 kB, Keith Smith)
        24. Screen Shot 2020-11-03 at 1.10.35 PM.png (39 kB, Keith Smith)

            Assignee: Haseeb Bokhari (haseeb.bokhari@mongodb.com)
            Reporter: Keith Smith (keith.smith@mongodb.com)
            Votes: 0
            Watchers: 17
