[SERVER-5312] Resync fails due to out of memory Created: 14/Mar/12  Updated: 15/Aug/12  Resolved: 11/Jul/12

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 2.0.3
Fix Version/s: None

Type: Bug Priority: Critical - P2
Reporter: Kenn Ejima Assignee: siddharth.singh@10gen.com
Resolution: Duplicate Votes: 1
Labels: replicaset
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Linux 3.0.18


Issue Links:
Depends
Duplicate
duplicates SERVER-6414 use regular file io, not mmap for ext... Closed
Related
is related to SERVER-6141 can't successfully replicate our shar... Closed
Operating System: Linux
Participants:

 Description   

We recently initiated a replica set, but the would-be secondary never gets out of the "RECOVERING" state: the mongod process is killed by the oom-killer in the middle of resync (seemingly the last step of resync, when building secondary indexes, to be precise) and starts over from scratch every time.

Journaling is turned on, and vm.overcommit_memory is set to 1, as suggested before.
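For reference, roughly how these can be set (a sketch assuming the standard sysctl interface and the distro's ini-style /etc/mongodb.conf; paths may differ):

    # allow the kernel to overcommit so large mmap-backed allocations are not refused outright
    sysctl -w vm.overcommit_memory=1
    echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf   # persist across reboots

    # in /etc/mongodb.conf (2.0.x ini format), make sure journaling stays enabled
    # journal = true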

Right now I am testing "echo -17 > /proc/`cat /var/run/mongodb.pid`/oom_adj" (and "swapoff -a"), but every trial takes hours.
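The same test protection as a small script, for the record (assuming the older oom_adj interface, where -17 disables oom-killing for the process):

    PID=$(cat /var/run/mongodb.pid)
    echo -17 > /proc/$PID/oom_adj   # -17 exempts mongod from the oom-killer
    swapoff -a                      # disable swap so the trial fails fast instead of thrashing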

The data size is 10x larger than the physical memory, so it seems unlikely that simply doubling the RAM would fix the problem, and the oom-killer's heuristics are rather unpredictable.

I'd like to know what triggers this failure, and what I should keep in mind.

What should we do to get resync done?



 Comments   
Comment by Daniel Pasette (Inactive) [ 11/Jul/12 ]

This was found and fixed by SERVER-6414. The fix is going into 2.0.7 and 2.2.0.

Comment by siddharth.singh@10gen.com [ 30/May/12 ]

I had turned swap off to try to reproduce it faster as well. Didn't see the repro though. It took a while for the secondaries to sync, but it completed successfully.

Comment by Kenn Ejima [ 29/May/12 ]

"swapoff -a" is to quickly let success or fail those experiments, that's all. I didn't want to let it run days or weeks by thrashing.

What is your swap size? In our case it was 256MB (the Linode default).
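For comparison, a quick way to check it (assuming standard procps / util-linux tools):

    free -m      # the "Swap:" row shows total / used / free swap in MB
    swapon -s    # lists each swap device and its size in KB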

Comment by siddharth.singh@10gen.com [ 29/May/12 ]

No repro yet. Tried it with index sizes of 3.5, 9, and 21 GB on a 1 GB machine and didn't see anything. I will go ahead and close this one for now as "Unable to Reproduce". Please feel free to reopen if you come across a reproducible test case. Thanks for reporting this.

Comment by siddharth.singh@10gen.com [ 27/Apr/12 ]

Unable to reproduce so far. Tried with a ~3.5GB index size on a VM with 1 GB of RAM. Will update as soon as I get a repro case.

Comment by Kenn Ejima [ 02/Apr/12 ]

Here are the stats on the new 2.0 instances; they are a lot more compact than the old 1.8 ones for the same data. We discarded the old instances (upgrading MongoDB was the purpose of the replication / migration), so the information on them is no longer available.

> db.stats()
{
    "db" : "my_production",
    "collections" : 15,
    "objects" : 41682353,
    "avgObjSize" : 290.2968635191972,
    "dataSize" : 12100256340,
    "storageSize" : 14245875712,
    "numExtents" : 123,
    "indexes" : 34,
    "indexSize" : 7570043936,
    "fileSize" : 25691160576,
    "nsSizeMB" : 16,
    "ok" : 1
}

> db.files.stats()
{
    "ns" : "my_production.uploads",
    "count" : 16494054,
    "size" : 4854031388,
    "avgObjSize" : 294.28977181716516,
    "storageSize" : 5727051776,
    "numExtents" : 32,
    "nindexes" : 4,
    "lastExtentSize" : 961069056,
    "paddingFactor" : 1.6499999999847526,
    "flags" : 1,
    "totalIndexSize" : 2346225840,
    "indexSizes" : {
        "_id_" : 505930880,
        "id2_1" : 435870736,
        "game_id_1_filename_1" : 1108739184,
        "expires_at_1" : 295685040
    },
    "ok" : 1
}

db.system.index.find() returned nothing.

There are too many collections to include all the stats, so I'm only listing the collection that caused the problem. The "game_id_1_filename_1" index is about 1GB now, but it was much larger before.
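If it helps, here is a rough way to dump the index size of every collection in one go (a sketch assuming the stock 2.0 shell helpers getCollectionNames() and stats(); the database name is ours):

    mongo my_production --quiet --eval '
      db.getCollectionNames().forEach(function (name) {
        var s = db.getCollection(name).stats();
        print(name + "  totalIndexSize=" + s.totalIndexSize);
      });
    '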

Try reproducing on a server with a small amount of RAM; it should give you a repro quicker, I think.

Hope this helps.

Comment by siddharth.singh@10gen.com [ 30/Mar/12 ]

I would like to reproduce this case. Can you give us the following information:

i) The database- and collection-level stats (from db.stats() and db.collection.stats()).
ii) The kind of indexes you have and the index metadata (db.system.index.find()).

I noticed that the log says "external sort used: ". Presumably memory-mapping these external sort files might be causing the memory usage to go up. I'll try to reproduce this case and see what's going on.

Comment by Kenn Ejima [ 29/Mar/12 ]

Unfortunately, no. Luckily for us, doubling the RAM fixed the problem this time, but I know the root cause is still there, and we might hit the same problem as our data set grows.

To me it seems that the problem occurs when you have a large INDEX, probably larger than RAM.
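A quick sanity check for that theory (a sketch, assuming db.stats() and procps free are available on the box):

    mongo my_production --quiet --eval 'print("total indexSize bytes: " + db.stats().indexSize)'
    free -b | awk '/Mem:/ {print "physical RAM bytes: " $2}'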

Comment by Rakesh Sankar [ 29/Mar/12 ]

I have the same problem and it's a production instance. I am running on Amazon's m1.medium 64-bit machine.

@Kenn, were you able to fix this problem? If yes, please let me know. Thanks.

Comment by Kenn Ejima [ 17/Mar/12 ]

We use linode.com, a Xen-based VPS with an unknown type of RAID (most likely RAID10). Their disk performance is fairly fast. Steal was about 0 according to sysstat, so it's not that other users on the same hardware were occupying the resources.

Comment by Eliot Horowitz (Inactive) [ 17/Mar/12 ]

What hardware is this on?
What kind of drives?

Comment by Kenn Ejima [ 15/Mar/12 ]

Yes, the disk was saturated, as detailed further at http://groups.google.com/group/mongodb-user/browse_thread/thread/f4e4089b8439ff9f

At that time iowait was around 90%. Before that point it was more like RES=4.0g, SHR=3.8g, but when building the secondary indexes, SHR went down to 0 pretty rapidly and the machine crashed hard (even the console couldn't accept a login shell and we had to restart it physically, although it was virtualized and emulated).
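Roughly how the saturation showed up (assuming the sysstat tools mentioned elsewhere in this thread):

    iostat -xm 5    # %util pegged near 100 with high await => the device is saturated
    vmstat 5        # the "wa" column is iowait, "st" is steal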

Comment by Eliot Horowitz (Inactive) [ 15/Mar/12 ]

Is disk saturated?

Comment by Kenn Ejima [ 14/Mar/12 ]

Follow up:

Okay, this is the third trial (oom_adj adjusted and swapoff -a), and this time mongod does not get killed, but progress has become incredibly slow. Right now it's rebuilding secondary indexes:

external sort used : 10 files in 54 secs
570700/9432323 6%
575500/9432323 6%
579800/9432323 6%
...

and it's taking forever.

Looking at top, mongod eats up all the memory and only 1-2MB is left for the file cache: VIRT=70g, RES=4.0g, SHR=0.
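The same numbers can also be watched from inside mongod (a sketch, assuming serverStatus() in the 2.0 shell; values are reported in MB):

    mongo --quiet --eval 'printjson(db.serverStatus().mem)'
    # prints bits, resident, virtual and mapped for the mongod process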
