WiredTiger / WT-9339

Inserts can be committed despite going over the cache size limit

    • Storage Engines
    • StorEng - Refinement Pipeline

      Summary
      It seems the configured cache size is not enforced when inserting data larger than the cache itself: the transaction is allowed to commit rather than being rolled back. We have already run into cases where the cache size limit was exceeded with the cpp suite, and the problem resurfaced while implementing WT-9094. A python reproducer and the T2 stats it generates are attached.
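      For reference, here is a rough sketch of the shape of such a reproducer, not the attached one itself (reproducer.rtf is the authoritative version); the database path, table name, value size and configuration string below are illustrative assumptions:

      import os, shutil
      import wiredtiger

      # Fresh database directory for the test (path is an arbitrary choice).
      home = 'WT_TEST'
      shutil.rmtree(home, ignore_errors=True)
      os.mkdir(home)

      # Open with a deliberately small cache and statistics enabled so cache
      # usage can be compared against the configured limit afterwards.
      conn = wiredtiger.wiredtiger_open(home, 'create,cache_size=1MB,statistics=(all)')
      session = conn.open_session()
      session.create('table:wt9339', 'key_format=S,value_format=S')
      cursor = session.open_cursor('table:wt9339')

      # Insert a single value roughly ten times larger than the whole cache.
      session.begin_transaction()
      cursor.set_key('key1')
      cursor.set_value('a' * (10 * 1024 * 1024))
      cursor.insert()
      # One might expect the commit to fail because the update cannot fit in
      # the cache, yet (per this ticket) it is reported as succeeding.
      session.commit_transaction()

      cursor.close()
      session.close()
      conn.close()

      With statistics enabled, the resulting cache statistics can then be inspected and compared against the configured limit, as in the attached wt9339-stats.png.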

      Motivation

      • Does this affect any team outside of WT?
        It could.
      • How likely is it that this use case or problem will occur?
        Every time, unless WT is not being used correctly in the reproducer.
      • If the problem does occur, what are the consequences and how severe are they?
        WT could use more memory than expected.
      • Is this issue urgent?
        Not sure.

      Acceptance Criteria (Definition of Done)
      Determine whether this behavior is correct and, if not, fix it.

        1. reproducer.rtf
          4 kB
        2. Screen Shot 2022-05-19 at 5.25.05 pm.png
          83 kB
        3. wt9339-stats.png
          44 kB

            Assignee:
            backlog-server-storage-engines [DO NOT USE] Backlog - Storage Engines Team
            Reporter:
            etienne.petrel@mongodb.com Etienne Petrel
            Votes:
            0
            Watchers:
            9
