Core Server / SERVER-29871

Resident Memory higher than cacheSizeGB

    • Type: Bug
    • Resolution: Incomplete
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.2.11
    • Component/s: None
    • Labels: None
    • Operating System: ALL

      I have a mongodb server running v3.2.11 with memory consumption much higher than what I configured in cacheSizeGB.

      Currently it is using 3.9g, but the cacheSizeGB parameter is configured with 1 (1g).

      I expected it to be a little over 1g due to active connections and other overhead, but right now it is almost 4x that and it seems to be increasing.
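
      For reference, here is how the resident size can be compared with what WiredTiger itself reports, using the counters in serverStatus. This is only a quick sketch from the shell; the field names come from the standard serverStatus output (mem.resident is reported in MB, the cache counters in bytes):

      # compare resident memory with WiredTiger's configured and used cache
      mongo --quiet --eval '
          var s = db.serverStatus();
          print("resident (MB):      " + s.mem.resident);
          print("cache used (bytes): " + s.wiredTiger.cache["bytes currently in the cache"]);
          print("cache max (bytes):  " + s.wiredTiger.cache["maximum bytes configured"]);'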

      Transparent huge pages are disabled:

      grep AnonHugePages /proc/meminfo          
      AnonHugePages:         0 kB
      
      cat /sys/kernel/mm/transparent_hugepage/enabled
      always madvise [never]
      

      I'm running CentOS:

      uname -a
      Linux XXXXXX 2.6.32-642.el6.x86_64 #1 SMP Tue May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
      
      cat /etc/redhat-release 
      CentOS release 6.8 (Final)
      

      I've attached the information I was able to collect:

      • top
      • serverStatus
      • mongostat
      • mongo_oms_1.conf --> the configuration file (the relevant stanza is sketched below)
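
      For context, the cache limit in mongo_oms_1.conf is set through the standard storage.wiredTiger.engineConfig.cacheSizeGB option. A minimal sketch of the relevant stanza, with the dbPath a placeholder and all other settings omitted:

      storage:
        dbPath: /var/lib/mongo          # placeholder path
        wiredTiger:
          engineConfig:
            cacheSizeGB: 1              # the 1g limit described above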

      Is this normal behaviour? Is there another way I can limit the memory used by the mongod process?

        1. diagnostic.tar.gz
          106.22 MB
          Emanuel Freitas
        2. heapProfilingEnabled.tar.gz
          101.03 MB
          Emanuel Freitas
        3. mongo_oms_1.conf
          0.8 kB
          Emanuel Freitas
        4. mongostat.txt
          1.0 kB
          Emanuel Freitas
        5. serverStatus.txt
          33 kB
          Emanuel Freitas
        6. top.txt
          0.7 kB
          Emanuel Freitas

            Assignee:
            mark.agarunov Mark Agarunov
            Reporter:
            ejsfreitas Emanuel Freitas
            Votes:
            0
            Watchers:
            9
