I've migrated my write-heavy application to MongoDB 3 and WiredTiger. This helped a lot with some of the performance issues I had, but now I am experiencing some memory problems on my DB host. My setup is fairly simple: I have an EC2 instance with 30GB system memory dedicated to running MongoDB; I only have one mongod process; I don't use sharding or replica sets.
When I first started using v3 and WiredTiger, I set the cache size to something large, like 27GB. Since I'm not running anything else on the instance, I thought it was safe to leave 3GB for the Linux kernel and non-cache MongoDB allocations. The next day, my mongod was killed by an OOM event. Then again a day later.
I then changed `cacheSizeGB` to 24, leaving 6GB for the Linux kernel and non-cache MongoDB allocations. Many days later, I'm running out of memory again.
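For reference, this is roughly how the limit is set in my mongod config file (a sketch; the rest of the `storage` section is whatever your deployment already uses):

```yaml
# mongod.conf (MongoDB 3.x YAML format)
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 24   # hard cap on the WiredTiger cache, not on total mongod memory
```

Note that this setting only bounds the WiredTiger cache itself; connection overhead, the in-memory oplog buffers, and allocator fragmentation all live outside it.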
I ran the same application on Mongo 2.x for months without ever restarting the DB process. I don't use `noTimeout` cursors. I only have 8 web worker processes and 10 delayed_job processes accessing the database, which caps the number of simultaneously open cursors at roughly 30-40; most of the time it's far fewer.
I'm not sure what else would help you debug this. `top` reports the mongod process using 26g of resident memory, with a 28.8g virtual image size. That's far above what the cache is allowed to use. Attached are the `top -abcn1` output and the serverStatus info.
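To rule out the cache itself exceeding its limit, I compare the configured cap against actual usage in the mongo shell (a diagnostic sketch run against the live mongod; the field names come from WiredTiger's `serverStatus` output):

```javascript
// Run in the mongo shell against the affected mongod.
var cache = db.serverStatus().wiredTiger.cache;
print("maximum bytes configured:     " + cache["maximum bytes configured"]);
print("bytes currently in the cache: " + cache["bytes currently in the cache"]);
```

In my case the cache stays within its configured limit, so the extra resident memory appears to come from outside the WiredTiger cache.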