[SERVER-19917] MongoDB crashed while loading bulk data Created: 13/Aug/15 Updated: 25/Aug/15 Resolved: 25/Aug/15
| Status: | Closed |
| Project: | Core Server |
| Component/s: | WiredTiger |
| Affects Version/s: | 3.0.4 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Venkatesh Sankar | Assignee: | Ramon Fernandez Marina |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Attachments: |
| Issue Links: |
| Operating System: | ALL |
| Participants: | |
| Description |
MongoDB is crashing frequently (about once an hour). I am running a single-node MongoDB 3.0.4 with WiredTiger on 3.14.26-24.46.amzn1.x86_64. The log says out of memory, but the instance has 50% of its disk space free.
Any help would be much appreciated.
| Comments |
| Comment by Ramon Fernandez Marina [ 25/Aug/15 ] |
vengireturns@gmail.com, this machine is seriously underpowered to run MongoDB. For starters, the default WiredTiger cache size is 1GB, so it's not surprising that mongod quickly ran out of memory. You may be able to get things to work by lowering the WiredTiger cache considerably, for example by passing a command line argument that sets it to 100M.
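A minimal sketch of such an invocation, assuming the `--wiredTigerEngineConfigString` option (available in 3.0) is the mechanism used; the `--dbpath` is illustrative:

```sh
# Start mongod with the WiredTiger cache capped at 100MB instead of the 1GB default
mongod --storageEngine wiredTiger \
       --wiredTigerEngineConfigString="cache_size=100M" \
       --dbpath /data/db
```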
Needless to say, there's a performance tradeoff, so you may want to consider a machine with a lot more memory. I'm closing this ticket as I don't see any evidence of a bug, and we keep the SERVER project for reporting bugs or feature suggestions for the MongoDB server. For MongoDB-related support discussion please post on the mongodb-user group or Stack Overflow with the mongodb tag, where your question will reach a larger audience. See also our Technical Support page for additional support resources.
Regards,
| Comment by Venkatesh Sankar [ 25/Aug/15 ] |
Ramon, our development team is loading the data through a Spring Data repository using a Java program (12 million documents in chunks of 10,000 each), so I can't share the data with you. Please let me know if you need any other details.
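For reference, a minimal sketch of this kind of chunked bulk load, written against the plain MongoDB Java driver rather than Spring Data; the host, database, collection, and document shape here are illustrative assumptions, not the reporter's actual code:

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;

public class BulkLoader {
    private static final int CHUNK_SIZE = 10_000;     // chunk size reported in this ticket
    private static final int TOTAL_DOCS = 12_000_000; // total documents reported in this ticket

    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        try {
            MongoCollection<Document> coll =
                    client.getDatabase("test").getCollection("bulkload");

            List<Document> chunk = new ArrayList<>(CHUNK_SIZE);
            for (int i = 0; i < TOTAL_DOCS; i++) {
                chunk.add(new Document("_id", i).append("payload", "row-" + i));
                if (chunk.size() == CHUNK_SIZE) {
                    coll.insertMany(chunk); // one insert round trip per 10,000 documents
                    chunk.clear();
                }
            }
            if (!chunk.isEmpty()) {
                coll.insertMany(chunk);     // flush any final partial chunk
            }
        } finally {
            client.close();
        }
    }
}
```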
| Comment by Venkatesh Sankar [ 25/Aug/15 ] |
Output from ulimit -a:
core file size (blocks, -c) 0
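A core file size of 0 means the OS writes no core dump when mongod aborts, which limits post-mortem analysis of a crash like this one. A sketch of raising the limit in the shell that launches mongod (the `--dbpath` is illustrative):

```sh
# Allow unlimited-size core dumps in this shell, then start mongod from it
ulimit -c unlimited
mongod --dbpath /data/db
```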
| Comment by Ramon Fernandez Marina [ 25/Aug/15 ] |
Hi vengireturns@gmail.com, apologies for the radio silence. The mongostat output you sent shows an increase in memory usage, but it should not be enough to trigger this issue. Can you please tell us:
Also, what kind of bulk loading are you doing and how? Which driver are you using? I'm asking because if you provide enough details, or even share the data you're uploading [1], we can try to reproduce locally.
Thanks,
[1] If you can upload your data, let me know and I'll create an upload portal for you; JIRA only supports 150MB uploads.
| Comment by Venkatesh Sankar [ 24/Aug/15 ] |
Ramon, can you please update me on this?
| Comment by Venkatesh Sankar [ 14/Aug/15 ] |
I am attaching the mongostat details from the start of the load until it crashed.
| Comment by Venkatesh Sankar [ 14/Aug/15 ] |
Hi Ramon, thanks for your quick response. I tested the load again on 3.0.5, and it performed well this time. However, at the end of the load it failed again with a memory issue; below is the error for your reference.
| Comment by Ramon Fernandez Marina [ 13/Aug/15 ] |
vengireturns@gmail.com, I wanted to add a bit more information to this ticket. The stack trace indicates that an attempt to allocate memory failed, which caused mongod to terminate (this is by design). In 3.0.4 there were a number of cases where WiredTiger would consume large amounts of memory, which can trigger behavior like the one you described in this ticket. MongoDB 3.0.5 shipped with fixes for these cases, but also included a performance enhancement that may itself cause increased memory consumption. The relevant tickets are:
The MongoDB 3.0.6-rc0 release candidate includes fixes for all three tickets, and we believe it should address the out-of-memory condition you're seeing, hence the suggestion to try it out. If you're open to sharing the data you're bulk-uploading with us, we can run the experiments on our end; you can upload data privately and securely here. If the out-of-memory condition reproduces with 3.0.6-rc0 then we may be looking at a new problem, and we'll ask you to collect some data to investigate further.
Thanks,
| Comment by Ramon Fernandez Marina [ 13/Aug/15 ] |
vengireturns@gmail.com, this could be another instance of
Thanks,