On a single server we host four mongod instances belonging to two different sharded clusters:
- One primary from cluster1
- One arbiter from cluster1
- One secondary from cluster2
- One arbiter from cluster2
The server has the following specs:
We have 20 servers with pretty much the same deployment:
- One primary from one cluster
- One secondary from the other cluster
- One arbiter from each cluster
We are currently facing issues with memory and swap consumption. Usually the primary consumes most of the memory and most of the swap. We recently restarted the secondary on one server, and it appears the primary uses almost 40 GB of RAM.
A few weeks ago we tried limiting the WiredTiger cacheSizeGB to 12 GB per mongod, but the issue still stands. We then lowered it to 10 GB, but we started seeing slowness and performance issues, so we went back up to 12 GB.
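For reference, this is how the cache cap is set in each mongod's configuration file (a minimal excerpt; only the cache line reflects our actual setting):

```yaml
# Excerpt from one mongod's config: cap the WiredTiger cache at 12 GB
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 12
```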
It can be seen that clust-users-1-shard8-1 (the primary) uses 55.5% of the memory. clust-users-2-shard6-1 (the secondary) already uses 33% of the memory, despite having been restarted less than a day ago and never having been primary at any point.
We do not understand how or why a single instance can use more than half of the server's total memory despite WiredTigerCacheSizeGB being set.
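For context, here is our understanding of MongoDB's default WiredTiger cache sizing, sketched in Python (the 72 GB RAM figure is a hypothetical stand-in; substitute the host's actual RAM):

```python
def default_wt_cache_gb(total_ram_gb: float) -> float:
    """MongoDB's default WiredTiger cache size:
    the larger of 50% of (RAM - 1 GB) or 256 MB."""
    return max(0.5 * (total_ram_gb - 1), 0.25)

ram_gb = 72  # hypothetical server RAM in GB
print(default_wt_cache_gb(ram_gb))  # 35.5
```

With four mongods sharing one host, the defaults alone would overcommit the RAM, which is why we set the cache size explicitly on each instance.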
You'll find attached the configuration files for each instance, including the arbiters.