[SERVER-32706] Mongodb failing with message Got signal: 6 (Aborted) Created: 15/Jan/18  Updated: 02/Apr/18  Resolved: 09/Mar/18

Status: Closed
Project: Core Server
Component/s: Admin
Affects Version/s: 3.4.9
Fix Version/s: None

Type: Question Priority: Major - P3
Reporter: Philip Assignee: Kelsey Schubert
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: Text File mongoDB_log_2018-01-10.txt     Text File mongoDB_log_2018-01-14.txt     Text File mongoDB_log_2018-01-23.txt    
Participants:

 Description   

My mongodb has been crashing repeatedly.
The last two times the log showed the same error: Got signal: 6 (Aborted)

I'm running my server on AWS using a MEAN Bitnami-3.4.9-0 instance on Ubuntu 14.04.

I previously updated my ulimit to unlimited, but I'm still receiving the same error.
I've attached the latest logs.



 Comments   
Comment by Kelsey Schubert [ 09/Mar/18 ]

Hi philip.leesha,

We haven’t heard back from you for some time, so I’m going to mark this ticket as resolved. If this is still an issue for you, please provide additional information and we will reopen the ticket.

Regards,
Kelsey

Comment by Kelsey Schubert [ 20/Feb/18 ]

Hi philip.leesha,

I just wanted to confirm that this issue has been resolved by upgrading your instance.

Thanks,
Kelsey

Comment by Philip [ 26/Jan/18 ]

Hi Mark,

I checked the WiredTiger cache size and it was set to use only about 40% of the total RAM on my instance. However, since this seems to be a persistent error, I upgraded my instance, so now I have a lot more RAM to work with. I checked the WiredTiger cache size on the upgraded instance and it's still a percentage of my total RAM. I'm hoping the increased memory leeway solves my issue.

Comment by Mark Agarunov [ 24/Jan/18 ]

Hello philip.leesha,

Thank you for the additional information. The out-of-memory error generally indicates that MongoDB was set to use more memory than is available on the server and was killed by the system once it had exhausted that memory; if you provide the complete logs from the crash, we can confirm this. To reduce memory usage, you can set the WiredTiger cache size to a lower value.
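For context, MongoDB 3.4 sizes the WiredTiger cache by default to the larger of 50% of (RAM minus 1 GB) or 256 MB. A sketch of that arithmetic in shell, plus where a lower cap would go (the 4 GB figure and the cacheSizeGB value are illustrative, not taken from this ticket):

```shell
# Default WiredTiger cache in MongoDB 3.4: max(50% of (RAM - 1 GB), 256 MB).
# Illustrative arithmetic for a 4 GB instance:
ram_mb=4096
cache_mb=$(( (ram_mb - 1024) / 2 ))
[ "$cache_mb" -lt 256 ] && cache_mb=256
echo "default WiredTiger cache: ${cache_mb} MB"

# To cap the cache lower, set this in /etc/mongod.conf and restart mongod
# (1 GB here is an example value, not a recommendation for this workload):
#   storage:
#     wiredTiger:
#       engineConfig:
#         cacheSizeGB: 1
```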

Thanks,
Mark

Comment by Philip [ 24/Jan/18 ]

Hi Mark,

My issue is still not resolved, unfortunately. I've attached a new failure log, this time with a different kind of error: "out of memory."
I also answered your questions below.

  • Is this using docker and/or containers?
    I don't think it's using either.
  • Is this running in a virtual machine or directly on hardware? If in a VM, which virtualization is it using (Xen, KVM, Hyper-V, etc.)?
    It's a VM using Xen.
  • Has this been an issue in the past with this setup, or has this just started happening?
    It's hard to pinpoint the exact moment this started happening.
  • If this is a new issue, were there any recent changes made to the system or mongodb configuration?
    It is a new issue, in the sense that it appeared long after I set up my configuration.

Comment by Mark Agarunov [ 19/Jan/18 ]

Hello philip.leesha,

Thank you for the additional information. I'll leave this ticket in "waiting for user input" for a few days and check in next week (unless you hit the error before then) so we can confirm that increasing the ulimits has fixed the problem.

Thanks,
Mark

Comment by Philip [ 17/Jan/18 ]

Hi Mark,

I updated my ulimits to the recommended values. Since MongoDB was only failing every few days, I'll have to wait to see whether the limit increase fixed my issue.

Thank you for your help,
Phil

Comment by Mark Agarunov [ 16/Jan/18 ]

Hello philip.leesha,

Thank you for providing the additional info. My initial suggestion would be to attempt to increase the following ulimits:

  • pending signals [-i]
  • open files [-n]
  • max user processes [-u]

to at least the recommended values. If this is still an issue after increasing the limits, please provide a description of how this node is set up:

  • Is this using docker and/or containers?
  • Is this running in a virtual machine or directly on hardware? If in a VM, which virtualization is it using (Xen, KVM, Hyper-V, etc.)?
  • Has this been an issue in the past with this setup, or has this just started happening?
  • If this is a new issue, were there any recent changes made to the system or mongodb configuration?
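The commonly recommended floor for these limits in MongoDB documentation of this era is 64000. A sketch of matching /etc/security/limits.conf entries, generated in shell (it assumes mongod runs as a user named mongodb; adjust the user name and values to your deployment):

```shell
# Build limits.conf entries raising the three limits above to 64000
# (nofile = open files, nproc = max user processes, sigpending = pending signals).
entries=""
for lim in nofile nproc sigpending; do
  entries="${entries}mongodb  soft  ${lim}  64000
mongodb  hard  ${lim}  64000
"
done
printf '%s' "$entries"
# Append the printed lines to /etc/security/limits.conf, restart the mongod
# service, then verify with:  cat /proc/$(pgrep -x mongod)/limits
```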

Thanks,
Mark

Comment by Philip [ 16/Jan/18 ]

Hi Mark,

Thank you so much for your quick reply. I'm not sure if I increased the thread limits. Below are the values of all ulimits set on my server.

Here are the hard limits: (ulimit -H -a)
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7858
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 32768
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Comment by Mark Agarunov [ 16/Jan/18 ]

Hello philip.leesha,

Thank you for the report. Looking over the logs, there appear to be a significant number of errors similar to "pthread_create failed: Resource temporarily unavailable" before the crash. Generally this indicates that the thread limit has been reached and the process cannot create new threads. When increasing ulimits, was the thread limit increased as well? If possible, please provide the ulimit parameters set for all limits.
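"Resource temporarily unavailable" from pthread_create is EAGAIN, which on Linux usually means the caller's thread/process limit (ulimit -u, the "Max processes" row) is exhausted. A quick way to compare a process's live thread count against that limit (shown against the current shell for safety; substitute mongod's PID via pgrep -x mongod to inspect mongod):

```shell
pid=$$   # replace with "$(pgrep -x mongod)" to inspect a running mongod
# Each thread appears as a directory under /proc/<pid>/task.
threads=$(ls "/proc/$pid/task" | wc -l)
# In /proc/<pid>/limits the "Max processes" row reads: name, soft, hard, units;
# the soft limit is the third whitespace-separated field.
soft=$(awk '/^Max processes/ {print $3}' "/proc/$pid/limits")
echo "threads=$threads soft_limit=$soft"
```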

Thanks,
Mark

Generated at Thu Feb 08 04:31:03 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.