Core Server / SERVER-9741

Verify write intents are efficient for btree bucket operations

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Major - P3
    • Affects Version/s: 2.4.3
    • Component/s: Index Maintenance, MMAPv1
    • Storage Execution

      With indexed arrays ("multikeys"), the number of btree buckets hit by a single update can be very high.

      If a single key is added to a bucket efficiently, the amount of data to journal might be very small. But if we do something akin to a memmove(), the data to journal might be quite large. The request with this ticket is to verify that we don't do the latter; and if we do, to improve it by logging a "memmove" op instead of a significant percentage of the total data in the btree bucket. Perhaps the memmove op doesn't work given group commits, but the concept of auditing this is sound, and then we figure it out from there. Perhaps it is already efficient; not sure.

      Subitem: check the "merge" code with regard to the above. Merging likely requires significant journaling. If we are near a commitIfNeeded limit, we might simply skip a few merges and do them later.
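The deferred-merge idea in the subitem could be sketched as follows. All names here are hypothetical, and commitIfNeeded's behavior is reduced to a simple per-group-commit byte budget for illustration:

```cpp
#include <cstddef>
#include <queue>

// Hypothetical: pending journal bytes tracked against a commit threshold.
struct JournalBudget {
    std::size_t pendingBytes = 0;
    std::size_t limitBytes;  // commitIfNeeded-style threshold
    explicit JournalBudget(std::size_t limit) : limitBytes(limit) {}
    bool wouldExceed(std::size_t cost) const {
        return pendingBytes + cost > limitBytes;
    }
};

struct BucketId { int id; };

// If journaling a merge would push us past the threshold, queue the bucket
// for a later pass instead of merging (and journaling) now.
class MergeScheduler {
public:
    std::queue<BucketId> deferred;

    void maybeMerge(JournalBudget& budget, BucketId bucket,
                    std::size_t mergeJournalCost) {
        if (budget.wouldExceed(mergeJournalCost)) {
            deferred.push(bucket);  // revisit after the next commit
        } else {
            budget.pendingBytes += mergeJournalCost;
            // ... perform the merge and journal its intents ...
        }
    }
};
```

The trade-off is that deferred buckets stay under-filled until the later pass runs, so correctness is unaffected; only space utilization is temporarily worse.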

            Assignee:
            backlog-server-execution [DO NOT USE] Backlog - Storage Execution Team
            Reporter:
            dwight@mongodb.com Dwight Merriman
            Votes:
            3
            Watchers:
            8
