[SERVER-1382] mongod server process exits when file size limit exceeded Created: 08/Jul/10  Updated: 19/Apr/12  Resolved: 08/Jul/10

Status: Closed
Project: Core Server
Component/s: Stability
Affects Version/s: 1.5.5
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: John Purcell Assignee: Eliot Horowitz (Inactive)
Resolution: Won't Fix Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

FC8 x86_64 on an EC2 Large instance


Attachments: Text File mongod.log    
Operating System: Linux
Participants:

 Description   

Ran into this one using mongodb-linux-x86_64-2010-07-08. Since mongod tends to be file hungry, I meant to run ulimit -n 300000. However, I dropped the '-n' by accident (i.e. ulimit 300000), which defaults to -f and therefore set the file size limit (in 512-byte blocks) to 300000.

Started up mongod and ran a mongorestore. Last thing in the mongod log is:

Thu Jul 8 20:42:08 done allocating datafile /mnt/data/lindex.2, size: 256MB, took 2.1 secs
Thu Jul 8 20:42:55 allocating new datafile /mnt/data/lindex.3, filling with zeroes...

at which point the mongod process simply exits with "File size limit exceeded." on stderr. I can understand the process not being able to write new data, but exiting outright was unexpected.



 Comments   
Comment by Eliot Horowitz (Inactive) [ 08/Jul/10 ]

We can't fix this.
That's the OS killing the process because it violated its resource limits.

No way around it that I know of.

Comment by John Purcell [ 08/Jul/10 ]

This is the server log with stdout and stderr appended to it.

Comment by Eliot Horowitz (Inactive) [ 08/Jul/10 ]

Can you send the entire log?
I'm not seeing that.

Generated at Thu Feb 08 02:56:52 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.