WiredTiger / WT-2123

Don't clear allocated memory if not required

    • Type: Improvement
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: WT2.8.0
    • Affects Version/s: None
    • Component/s: None
    • Labels: None

      While trying to repro https://jira.mongodb.org/browse/SERVER-20197 I noticed something odd. If I disable compression, the breakdown of time to iterate a table for the first time (pulling it into WT cache) is roughly:

      28.84% pread64 (This seems like the unavoidable "real work")
      21.49% __wt_cksum_hw (WT-2121 should help this)
      46.00% memset from __wt_realloc from __wt_buf_grow_worker from __wt_block_read_off from __wt_bm_read from __wt_bt_read
      
      The 46% is split into 37.38% servicing minor page faults (a cost that would move elsewhere, to the first real access of the buffer, if we didn't memset) and ~8% in memset itself (which wouldn't).
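
      The page-fault portion is first-touch cost: a large allocation is typically serviced by mmap and isn't backed by physical pages until written, so the memset faults in every page just before pread overwrites it anyway. A minimal standalone sketch of that effect (not WiredTiger code; the buffer size and the getrusage-based fault counting are illustrative assumptions):

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          #include <sys/resource.h>

          /* Return this process's minor page-fault count so far. */
          static long
          minor_faults(void)
          {
              struct rusage ru;
              (void)getrusage(RUSAGE_SELF, &ru);
              return (ru.ru_minflt);
          }

          int
          main(void)
          {
              size_t len = 256UL * 1024 * 1024;  /* illustrative 256MB buffer */
              char *buf = malloc(len);           /* large: lazily mapped, untouched */
              if (buf == NULL)
                  return (1);

              long before = minor_faults();
              memset(buf, 0, len);               /* first touch faults in every page */
              long after = minor_faults();

              printf("minor faults during memset: %ld\n", after - before);
              /*
               * Without the memset, those faults would simply move to
               * wherever the buffer is first written, e.g. the pread.
               */
              free(buf);
              return (0);
          }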
      
      The call to memset is prefaced with the following comment:
      
          /*
           * Clear the allocated memory -- an application might: allocate memory,
           * write secret stuff into it, free the memory, then we re-allocate the
           * memory and use it for a file page or log record, and then write it to
           * disk.  That would result in the secret stuff being protected by the
           * WiredTiger permission mechanisms, potentially inappropriate for the
           * secret stuff.
           */
      

      I don't think this applies to mongod, since if someone has access to the (decrypted) data files, they have access to all of the secret data anyway. Would allowing applications to opt out of this zeroing be an easy way to boost performance, or is there some reason why it can't be avoided?
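
      One shape an opt-out could take is a realloc wrapper that zeroes only the newly grown region, and only when the caller asks for it, so read paths that immediately overwrite the whole buffer can skip the memset. A hedged sketch, not the actual WiredTiger API (the function name, the "clear" flag, and the signature are hypothetical):

          #include <stdlib.h>
          #include <string.h>

          /*
           * Hypothetical realloc wrapper: grow *bufp from old_size to
           * new_size, clearing the newly allocated tail only if "clear"
           * is set. Callers that overwrite the entire buffer right away
           * (e.g., reading a file page from disk) would pass clear=0.
           */
          static int
          realloc_maybe_clear(void **bufp, size_t old_size, size_t new_size, int clear)
          {
              void *p;

              if ((p = realloc(*bufp, new_size)) == NULL)
                  return (-1);        /* allocation failed; *bufp unchanged */

              /*
               * Zero only the bytes beyond the old allocation: the worry
               * in the comment above is stale heap contents (possibly an
               * application's secrets) leaking into a file page or log
               * record that is later written to disk.
               */
              if (clear && new_size > old_size)
                  memset((char *)p + old_size, 0, new_size - old_size);

              *bufp = p;
              return (0);
          }

      The trade-off moves to the call site: paths that fill the whole buffer from disk would pass clear=0, while paths that might write a partially filled buffer (padding, unused page tail) to disk keep clearing.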

            Assignee:
            keith.bostic@mongodb.com Keith Bostic (Inactive)
            Reporter:
            mathias@mongodb.com Mathias Stearn
            Votes:
            0
            Watchers:
            5
