[SERVER-17616] Removing or inserting documents with large indexed arrays consumes excessive memory Created: 16/Mar/15  Updated: 06/Apr/23  Resolved: 31/Mar/15

Status: Closed
Project: Core Server
Component/s: MMAPv1
Affects Version/s: 3.0.0
Fix Version/s: 3.0.2, 3.1.1

Type: Bug Priority: Major - P3
Reporter: Bruce Lucas (Inactive) Assignee: Geert Bosch
Resolution: Done Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File perf.png    
Backwards Compatibility: Fully Compatible
Operating System: ALL
Sprint: Quint Iteration 3.1.1

 Description   
Issue Status as of Apr 02, 2015

ISSUE SUMMARY
A mongod using the MMAPv1 storage engine may consume excessive memory when inserting or removing documents with large indexed arrays.

USER IMPACT
MongoDB consumes an unnecessarily large amount of memory, roughly proportional to the number of affected index entries (about 30 kB per entry).

WORKAROUNDS
None.

AFFECTED VERSIONS
MongoDB 3.0.0 and 3.0.1 are affected by this issue.

FIX VERSION
The fix is included in the 3.0.2 production release.

Original description

Reproduce as follows:

    // index an array field, then insert one document whose array has 100k elements
    db.c.ensureIndex({a:1})
    doc = {a:[]}
    for (var i=0; i<100000; i++)
        doc.a.push(i)
    db.c.insert(doc)
    // compare heap size before and after removing the document
    print('before:', db.serverStatus({tcmalloc:1}).tcmalloc.generic.heap_size/1024/1024, 'MB')
    db.c.remove({})
    print('after:', db.serverStatus({tcmalloc:1}).tcmalloc.generic.heap_size/1024/1024, 'MB')

Results:

3.0, mmapv1, 100k entries:
before: 124.9453125 MB
after: 3148.9453125 MB <===
 
3.0, mmapv1, 50k entries:
before: 93.234375 MB
after: 1605.234375 MB <===
 
3.0, wiredTiger, 100k entries:
before: 72.28125 MB
after: 98.53125 MB
 
2.6.8, mmapv1, 100k entries
before: 79.4453125 MB
after: 79.4453125 MB

  • Memory consumed is proportional to the number of index entries, about 30 kB per entry.
  • The issue is specific to MMAPv1 and does not occur with WiredTiger — so maybe the issue is in the journal?
  • The problem does not occur in 2.6.


 Comments   
Comment by Githook User [ 31/Mar/15 ]

Author: Geert Bosch &lt;geert@mongodb.com&gt; (GeertBosch)

Message: SERVER-17616: Bound memory usage of MMAPv1 rollback buffers.

(backported from commit 991ccba6e29ea5b7f51a6ed4a549b8a9291a209b)
Branch: v3.0
https://github.com/mongodb/mongo/commit/b87a1c395527b0981b6613d6ecf949f2c7465ad8

Comment by Githook User [ 31/Mar/15 ]

Author: Geert Bosch &lt;geert@mongodb.com&gt; (GeertBosch)

Message: SERVER-17616: Bound memory usage of MMAPv1 rollback buffers.
Branch: master
https://github.com/mongodb/mongo/commit/991ccba6e29ea5b7f51a6ed4a549b8a9291a209b
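The commit title says the fix bounds the memory used by MMAPv1 rollback buffers; the actual change is in MongoDB's C++ source (commit 991ccba above). Purely as an illustration of the general technique — not MongoDB's implementation, and with hypothetical class and field names — a bounded rollback buffer might look like this sketch:

```javascript
// Illustrative sketch only: cap the in-memory footprint of an undo/rollback
// buffer. A real engine would spill overflow to disk rather than just count it.
class BoundedRollbackBuffer {
  constructor(maxBytes) {
    this.maxBytes = maxBytes;  // in-memory cap
    this.bytesUsed = 0;
    this.entries = [];
    this.spilled = 0;          // entries beyond the cap (would be spilled to disk)
  }

  record(entry) {
    const size = Buffer.byteLength(JSON.stringify(entry));
    if (this.bytesUsed + size > this.maxBytes) {
      // Over the cap: do not grow the heap further; account for the overflow.
      this.spilled++;
      return;
    }
    this.entries.push(entry);
    this.bytesUsed += size;
  }
}

// Recording 1000 undo entries keeps heap usage at or below the cap.
const buf = new BoundedRollbackBuffer(1024);
for (let i = 0; i < 1000; i++) buf.record({ op: 'insert', key: i });
console.log(buf.bytesUsed, 'bytes in memory,', buf.spilled, 'entries spilled');
```

The point of the pattern is that memory no longer grows linearly with the number of index entries touched by a single operation, which is the failure mode in the repro above.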

Comment by Bruce Lucas (Inactive) [ 18/Mar/15 ]

The issue also affects insert, although to a lesser degree:

    db.c.ensureIndex({a:1})
 
    // populate the index a bit
    var bulk = db.c.initializeUnorderedBulkOp();
    for (var i=0; i<10000; i++)
        bulk.insert({a:100000+i})
    bulk.execute()
 
    // now insert single doc with large indexed array
    print('before:', db.serverStatus({tcmalloc:1}).tcmalloc.generic.heap_size/1024/1024, 'MB')
    doc = {a:[]}
    for (var i=0; i<100000; i++)
        doc.a.push(i)
    db.c.insert(doc)
    print('after:', db.serverStatus({tcmalloc:1}).tcmalloc.generic.heap_size/1024/1024, 'MB')

output:

before: 65.0234375 MB
after: 801.4453125 MB <===

Comment by Geert Bosch [ 17/Mar/15 ]

Thanks, I'll look into this.

Comment by Bruce Lucas (Inactive) [ 17/Mar/15 ]

Some memory profiling using perf shows that it does indeed seem to be the journaling system that is using the memory (see the attached perf.png).
