[SERVER-33742] MongoDb Ram Usage in a Kubernetes cluster Created: 08/Mar/18  Updated: 02/Apr/18  Resolved: 08/Mar/18

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: None
Fix Version/s: None

Type: Improvement Priority: Major - P3
Reporter: Boas Enkler Assignee: Ramon Fernandez Marina
Resolution: Duplicate Votes: 0
Labels: Kubernetes
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Kubernetes, Google Container Engine


Issue Links:
Duplicate
duplicates SERVER-28818 Enable configuring RAM usage for Mong... Closed
Participants:

 Description   

I have a MongoDB database in a Kubernetes cluster (configuration below).
In general this works very well, but when we run operations on heavy data volumes, like inserting big batches of data, the RAM usage of the database increases very fast.
This is normal and expected behavior, and on physical machines it isn't a problem.

But in Kubernetes it seems like MongoDB is not aware of how much RAM a single node has, so the node runs into out-of-memory scenarios and the mongod instance gets terminated.
Also, I'm currently not aware of being able to do something like cgroups in a k8s cluster (maybe I'm wrong).

It would be important to have some kind of more reliable and predictable behavior in a k8s environment, so that single heavy-load scenarios don't lead to terminated databases.
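As a workaround sketch (not a fix for this ticket): the WiredTiger cache can be sized explicitly below the container's memory limit, mirroring mongod's documented default of max(50% of (RAM - 1 GB), 256 MB) but computed from the cgroup limit instead of node RAM. The `MEMORY_LIMIT_BYTES` variable and the fallback value here are assumptions for illustration; in a pod you could populate it from the cgroup v1 file `/sys/fs/cgroup/memory/memory.limit_in_bytes` or via the Downward API.

```shell
# Hypothetical entrypoint sketch: compute a WiredTiger cache size from the
# container memory limit rather than total node RAM.
# MEMORY_LIMIT_BYTES is an assumed input; default is 2.5Gi to match the
# StatefulSet's memory limit below.
LIMIT_BYTES="${MEMORY_LIMIT_BYTES:-2684354560}"

# Convert the byte limit to GiB.
LIMIT_GB=$(awk -v b="$LIMIT_BYTES" 'BEGIN { printf "%.2f", b / 1024 / 1024 / 1024 }')

# Apply mongod's default formula: 50% of (RAM - 1 GB), floored at 0.25 GB.
CACHE_GB=$(awk -v g="$LIMIT_GB" 'BEGIN { c = (g - 1) * 0.5; if (c < 0.25) c = 0.25; printf "%.2f", c }')

echo "$CACHE_GB"
# The result could then be passed to mongod, e.g.:
#   mongod --replSet rs0 --bind_ip 0.0.0.0 --wiredTigerCacheSizeGB "$CACHE_GB"
```

Note that the WiredTiger cache is only part of mongod's footprint (connections, aggregation, and index builds allocate outside it), so the limit still needs headroom above the computed cache size.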

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:    
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3.6
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - "0.0.0.0"
            - "--noprealloc"
            - "--wiredTigerCacheSizeGB"
            - "0.25"
            - "--wiredTigerEngineConfigString"
            - "hazard_max=5000"
          ports:
            - containerPort: 27017
          resources:
            limits:
              memory: 2.5Gi
            requests:
              memory: 2Gi
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi



 Comments   
Comment by Ramon Fernandez Marina [ 08/Mar/18 ]

boas.enkler@busliniensuche.de, this functionality has been requested before in SERVER-28818, so I'm going to mark this ticket as a duplicate – please watch and vote for SERVER-28818 going forward.

Regards,
Ramón.

Generated at Thu Feb 08 04:34:26 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.