Core Server / SERVER-33742

MongoDB RAM usage in a Kubernetes cluster

    • Type: Improvement
    • Resolution: Duplicate
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: None
    • Component/s: WiredTiger
    • Labels:
    • Environment:
      Kubernetes, Google Container Engine

      I have a MongoDB database running in a Kubernetes cluster (configuration below).
      In general this works very well, but during heavy data operations, such as inserting large batches of data, the database's RAM usage grows very quickly.
      That is normal and expected behavior, and on physical machines it isn't a problem.

      In Kubernetes, however, MongoDB does not seem to be aware of how much RAM a single node has; the node runs into out-of-memory situations and the MongoDB instance gets terminated.
      I'm also currently not aware of being able to do something like cgroups in a k8s cluster (maybe I'm wrong).

      It would be important to have some kind of more reliable and predictable behavior in a k8s environment, so that single heavy-load scenarios don't lead to terminated databases.
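For background on why the node, rather than the container, matters here: mongod's default WiredTiger internal cache size is the larger of 50% of (RAM − 1 GB) and 256 MB, so if the process sees the node's total RAM instead of the container limit, the default cache can exceed the pod's memory limit. A minimal sketch of that formula in shell, using the 2.5Gi limit from the manifest below as a stand-in for the value that would normally come from the cgroup (on cgroup v1, `/sys/fs/cgroup/memory/memory.limit_in_bytes`); the number is hard-coded here for illustration:

```shell
# Default WiredTiger cache size: max(50% of (RAM - 1 GB), 256 MB).
# LIMIT_BYTES stands in for the cgroup v1 memory limit that would normally
# be read from /sys/fs/cgroup/memory/memory.limit_in_bytes inside the pod.
LIMIT_BYTES=${LIMIT_BYTES:-2684354560}   # 2.5 GiB, the limit in the manifest below
LIMIT_MB=$((LIMIT_BYTES / 1024 / 1024))  # 2560 MB
CACHE_MB=$(( (LIMIT_MB - 1024) / 2 ))    # 50% of (RAM - 1 GB)
if [ "$CACHE_MB" -lt 256 ]; then CACHE_MB=256; fi  # floor of 256 MB
echo "default WiredTiger cache: ${CACHE_MB} MB"
```

With a 2.5 GiB limit this yields a 768 MB default cache; against a node with, say, 64 GB of RAM, the same formula yields over 31 GB, far beyond the pod's limit.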

      ---
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata: 
        name: fast
      provisioner: kubernetes.io/gce-pd
      parameters: 
        type: pd-ssd
      ---
      apiVersion: apps/v1beta1
      kind: StatefulSet
      metadata: 
        name: mongo
      spec: 
        serviceName: "mongo"
        replicas: 1
        template: 
          metadata: 
            labels: 
              role: mongo
              environment: test
          spec:    
            terminationGracePeriodSeconds: 10
            containers: 
              - name: mongo
                image: mongo:3.6
                command: 
                  - mongod
                  - "--replSet"
                  - rs0
                  - "--bind_ip"
                  - "0.0.0.0"
                  - "--noprealloc"
                  - "--wiredTigerCacheSizeGB"
                  - "0.25"
                  - "--wiredTigerEngineConfigString"
                  - "hazard_max=5000"
                ports: 
                  - containerPort: 27017
                resources: 
                  limits: 
                    memory: 2.5Gi
                  requests: 
                    memory: 2Gi
                volumeMounts: 
                  - name: mongo-persistent-storage
                    mountPath: /data/db
              - name: mongo-sidecar
                image: cvallance/mongo-k8s-sidecar
                env: 
                  - name: MONGO_SIDECAR_POD_LABELS
                    value: "role=mongo,environment=test"
        volumeClaimTemplates: 
        - metadata: 
            name: mongo-persistent-storage
            annotations: 
              volume.beta.kubernetes.io/storage-class: "fast"
          spec: 
            accessModes: [ "ReadWriteOnce" ]
            resources: 
              requests: 
                storage: 32Gi
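One possible mitigation, sketched below rather than taken from official guidance: set the memory request equal to the limit so the pod lands in the Guaranteed QoS class (making it less likely to be evicted under node memory pressure), and keep `--wiredTigerCacheSizeGB` well below the container limit, since connections, aggregation stages, and other allocations live outside the cache. The numbers here are illustrative assumptions, not tuned values:

```yaml
# Illustrative fragment only. requests == limits places the pod in the
# "Guaranteed" QoS class; the cache is sized to roughly 40% of the
# container limit to leave headroom for non-cache allocations.
command:
  - mongod
  - "--replSet"
  - rs0
  - "--wiredTigerCacheSizeGB"
  - "1"
resources:
  requests:
    memory: 2.5Gi   # equal to the limit -> Guaranteed QoS
  limits:
    memory: 2.5Gi
```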
      

            Assignee: Ramon Fernandez Marina (ramon.fernandez@mongodb.com)
            Reporter: Boas Enkler (boas.enkler@busliniensuche.de)
            Votes: 0
            Watchers: 5
