[SERVER-84145] MongoDB 5.0.20 process crashes due to high OS cache memory utilization. Created: 05/Dec/23  Updated: 25/Jan/24  Resolved: 25/Jan/24

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Sreedhar N Assignee: Chris Kelly
Resolution: Done Votes: 51
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Ubuntu 20.04 and MongoDB 5.0.20


Attachments: PNG File image-2024-01-25-14-46-52-089.png    
Assigned Teams:
Server Triage
Participants:

 Description   

The MongoDB process crashes after upgrading from 4.4.18 to 5.0.20. OS cache memory utilization climbs within a few hours and the mongod process crashes.

Setup details:
Each replica set has 7 members: 1 primary, 3 secondaries and 3 arbiters. Primary and secondary members are distributed across different sites (A, B, C and D). Arbiter members are also distributed across sites A, B, C and D. Each site has DB VMs and each VM runs mongodb containers. One VM hosts 1 primary, 3 secondary and 2 arbiter members.
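
For reference, that layout corresponds roughly to a replica set configuration like the sketch below (the hostnames, ports and priorities here are placeholders for illustration, not the actual deployment):

// mongosh sketch of the 7-member set described above:
// 1 primary + 3 data-bearing secondaries and 3 arbiters, spread across sites A-D.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "siteA-db1:27017", priority: 2 },         // intended primary
    { _id: 1, host: "siteB-db1:27017", priority: 1 },         // secondary
    { _id: 2, host: "siteC-db1:27017", priority: 1 },         // secondary
    { _id: 3, host: "siteD-db1:27017", priority: 1 },         // secondary
    { _id: 4, host: "siteB-arb1:27018", arbiterOnly: true },  // arbiter
    { _id: 5, host: "siteC-arb1:27018", arbiterOnly: true },  // arbiter
    { _id: 6, host: "siteD-arb1:27018", arbiterOnly: true }   // arbiter
  ]
})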

Scenario:
1. Bring down one site, say Site B, and start the traffic. Send the traffic such that it increases linearly up to a limit X. After reaching the limit X, keep sending the same traffic for up to 48 hours.

Crash Info:

{"t":\{"$date":"2023-11-28T15:30:56.755+00:00"}

,"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"Checkpointer","msg":"WiredTiger error","attr":{"error":22,"message":"[1701185456:755161][12787:0x7fd437302700], file:collection-45-1301129529321625809.wt, WT_SESSION.checkpoint: __wt_block_checkpoint_resolve, 928: collection-45-1301129529321625809.wt: the checkpoint failed, the system must restart: Invalid argument"}}

{"t":\{"$date":"2023-11-28T15:30:56.755+00:00"}

,"s":"E",  "c":"STORAGE",  "id":22435,   "ctx":"Checkpointer","msg":"WiredTiger error","attr":{"error":-31804,"message":"[1701185456:755177][12787:0x7fd437302700], file:collection-45-1301129529321625809.wt, WT_SESSION.checkpoint: __wt_block_checkpoint_resolve, 928: the process must exit and restart: WT_PANIC: WiredTiger library panic"}}

{"t":\{"$date":"2023-11-28T15:30:56.755+00:00"}

,"s":"F",  "c":"-",        "id":23089,   "ctx":"Checkpointer","msg":"Fatal assertion","attr":{"msgid":50853,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp","line":574}}

{"t":\{"$date":"2023-11-28T15:30:56.755+00:00"}

,"s":"F",  "c":"-",        "id":23090,   "ctx":"Checkpointer","msg":"\n\n***aborting after fassert() failure\n\n"}

{"t":\{"$date":"2023-11-28T15:30:56.755+00:00"}

,"s":"F",  "c":"CONTROL",  "id":6384300, "ctx":"Checkpointer","msg":"Writing fatal message","attr":{"message":"Got signal: 6 (Aborted).\n"}}

Please note that we performed the same scenario on MongoDB 4.4.18, did not see the crash, and OS cache memory utilization stayed constant.
In MongoDB 4.4.18, the flag --enableMajorityReadConcern false was set at startup.
In MongoDB 5.0.20, the --enableMajorityReadConcern false flag has been removed and the flowControl feature has been disabled.
In 5.0.20, after traffic reaches the maximum limit, the cache memory grows exponentially and mongod processes crash across the cluster. Only primary members crash.
If we bring Site B back up, mongod does not crash and keeps running fine for more than 48 hours.
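
For illustration, disabling flow control and sampling the storage-engine cache can be done from mongosh roughly as follows (a sketch only; whether these WiredTiger counters track the OS page cache growth described above is exactly what is unclear):

// One way to turn flow control off (the report above says it was disabled;
// it can be set at runtime as shown, or via setParameter in the config file):
db.adminCommand({ setParameter: 1, enableFlowControl: false })

// Sample the WiredTiger cache while traffic ramps up (values in bytes):
const c = db.serverStatus().wiredTiger.cache
printjson({
  configured: c["maximum bytes configured"],
  inCache:    c["bytes currently in the cache"],
  dirty:      c["tracked dirty bytes in the cache"]
})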



 Comments   
Comment by Chris Kelly [ 25/Jan/24 ]

Hi sreedhar.nalgonda@gmail.com,

It looks like the server is crashing due to "No space left on device" errors immediately preceding the errors you are pointing out. Specifically:

  • I do not observe an exponential increase in OS cache memory usage, and the diagnostic.data provided only appears to cover the period after the issue occurs.
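
If you want to cross-check the disk-space theory on an affected node, a quick look from mongosh (a rough sketch; field names as returned by db.stats() and getLog) would be something like:

// Filesystem usage as seen by mongod (bytes):
const s = db.getSiblingDB("admin").stats()
printjson({ fsUsedSize: s.fsUsedSize, fsTotalSize: s.fsTotalSize })

// Recent in-memory log lines mentioning "No space left" or checkpoint failures:
const res = db.adminCommand({ getLog: "global" })
res.log.filter(l => /No space left on device|checkpoint/i.test(l)).forEach(l => print(l))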

For this issue we'd like to encourage you to start by asking our community for help by posting on the MongoDB Developer Community Forums.

If the discussion there leads you to suspect a bug in the MongoDB server, then we'd want to investigate it as a possible bug here in the SERVER project.

Comment by Sreedhar N [ 03/Jan/24 ]

Hi Chris Kelly,

Thanks for providing access to upload files. The files are uploaded now. Kindly have a look and provide a solution for the crash.

Thanks,

Sreedhar

Comment by Chris Kelly [ 27/Dec/23 ]

Hi sreedhar.nalgonda@gmail.com!

Thanks for your report, and your patience here. Anecdotally, this is an error I've seen when we run out of space on a device (WT-11906), but I can't discern enough information here.

To look into this further, would you please archive (tar or zip) the mongod.log files and the $dbpath/diagnostic.data directory (the contents are described here) and upload them to this support uploader location?

Files uploaded to this portal are visible only to MongoDB employees and are routinely deleted after some time.
