[SERVER-40638] How to influence files (pre)allocation steps (max growing value) in MongoDB 3.4 with WiredTiger Created: 15/Apr/19  Updated: 06/Dec/22  Resolved: 15/Apr/19

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: 3.4.19
Fix Version/s: None

Type: Question Priority: Major - P3
Reporter: Zdeněk Mašat Assignee: Backlog - Triage Team
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
Assigned Teams:
Server Triage
Participants:

 Description   

In our environment we have deployed some really small DBs (5 GB each), using a directory per DB. Each directory sits on LVM with XFS, where we have implemented a "brake" that triggers when the DB space is about to fill up (all of this is due to some strict disk quotas - not really interesting in itself, I think...). Our solution runs on MongoDB v3.4.19 on CentOS 7.5. Everything looks OK.

Some of the DBs have more than 60% of their disk space used by a single collection. In our scenario we found that, while restoring a 2.3 GB dump (all into a 5 GB database directory/disk), the preallocation after 2 GB asks the OS for more space using (I think) a power-of-2 strategy (or something very similar).

This situation hits our triggers and eventually leads to the DB being locked (which is still better than the whole cluster going down because no space is left). We have basically the same situation with a 10 GB DB. (And in a way, a similar situation can occur with literally any capacity...)
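(Just for context, this is roughly how we watch the file growth during the restore - the paths below are only illustrative, not our real layout:)

{code:bash}
# Watch how the per-DB directory and its WiredTiger collection files grow
# while mongorestore is running (directory-per-DB layout assumed).
watch -n 5 'du -sh /data/db/mydb; ls -lh /data/db/mydb/collection-*.wt'
{code}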

My question is: can I influence the steps (i.e. set a maximum growth size lower than ~2 GB) of the (pre)allocation growth in MongoDB/WiredTiger, either for restores only or in general? (From what I've found, 2 GB should be the maximum growth increment...)

Performance impact is not the highest priority in this case.

I've asked the same question on DBA StackExchange, but no solution has been found yet... (question [here|https://dba.stackexchange.com/questions/234301/how-to-influence-files-preallocation-steps-max-growing-value-in-mongodb-3-4])

If any additional info is needed, please let me know. Any hint/tip would be really appreciated...

Best regards

Zdenek.



 Comments   
Comment by Zdeněk Mašat [ 16/Apr/19 ]

Hi @Eric Sedor, actually that was the very first thing I did (here).

And since there wasn't much suitable advice there, I tried to ask directly at the source...

 

it is expected that size on disk can be larger than data size due to the persistence of checkpoints

I understand the prerequisites. I'd only like to know whether there is a possibility to change the built-in (default) values, and if so, how to do it. Reading through the documentation, I've found that WT should have parameters that affect the allocation behaviour.

Like e.g. file_extend or checkpoint - taken from the WT documentation.

But I don't know how to pass them (if it is possible at all) to the WT that is integrated with MongoDB.
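Something along these lines is what I have in mind - just a sketch, assuming the --wiredTigerEngineConfigString pass-through is the right mechanism for this; the 16MB value is only an example, not a tested recommendation:

{code:bash}
# Illustrative only: append an extra option to the wiredtiger_open()
# configuration string when starting mongod. file_extend asks WT to extend
# data files in fixed-size allocations (16MB here is an arbitrary example).
mongod --dbpath /data/db --directoryperdb \
       --wiredTigerEngineConfigString "file_extend=(data=16MB)"
{code}

(If there is a YAML equivalent - I believe storage.wiredTiger.engineConfig.configString - that would of course be preferable for us, but I haven't verified it on 3.4.)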

Let me ask one more question anyway - is it true that data file growth is capped at 2 GB, or can it consume even more (by default) when preallocating while heavier writes are happening (e.g. a restore or an initial data load)?

Best regards

Zdenek.

Comment by Eric Sedor [ 15/Apr/19 ]

Hi dennism, it is expected that size on disk can be larger than data size due to the persistence of checkpoints.

To discuss how to influence this, you may also want to submit your question to the MongoDB community via the mongodb-user group. This SERVER project is for bugs and feature suggestions for the MongoDB server.
