- Type: Bug
- Resolution: Community Answered
- Priority: Major - P3
- Affects Version/s: None
- Component/s: MapReduce
- Fully Compatible
- ALL
Hi
We are seeing an issue when running many map-reduce operations on a sharded cluster.
Memory usage keeps growing, and once it reaches the host's memory limit, the mongod process crashes with an out-of-memory exception.
Environment:
- 24 shards with 80 GB RAM each
- Default value for storage.wiredTiger.engineConfig.cacheSizeGB (no value set in the config)
- Continuously running parallel map-reduce operations on different input and output collections (a Java application with 10 threads; see the sketch after this list)
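For reproduction, here is a minimal sketch of the workload, assuming the MongoDB sync Java driver; the connection string, database name, collection names, and map/reduce functions are placeholders, not our actual code:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.MapReduceAction;
import org.bson.Document;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MapReduceLoad {
    // Placeholder map/reduce functions; the real ones operate on our data.
    private static final String MAP = "function() { emit(this.key, this.value); }";
    private static final String REDUCE = "function(key, values) { return Array.sum(values); }";

    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://mongos-host:27017");
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            final int n = i;
            pool.submit(() -> {
                MongoCollection<Document> input = client.getDatabase("test")
                        .getCollection("input_" + n);
                // Each thread repeatedly reads its own input collection and
                // replaces its own output collection.
                while (true) {
                    input.mapReduce(MAP, REDUCE)
                         .collectionName("output_" + n)
                         .action(MapReduceAction.REPLACE)
                         .first(); // iterating forces the command to run
                }
            });
        }
    }
}
```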
We understand that mongod may use memory up to the maximum available RAM, but the process should never crash.
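For reference, a mongod.conf excerpt that would cap the WiredTiger cache explicitly instead of relying on the default (the larger of 50% of RAM minus 1 GB, or 256 MB); the 30 GB figure is an arbitrary example, not a recommendation:

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 30  # explicit cap; example value only
```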
Many thanks for any feedback!