- Type: Bug
- Resolution: Done
- Priority: Major - P3
- Affects Version/s: None
- Component/s: None
Hi guys
We have a MongoDB cluster with 1 primary server and 2 secondary servers.
All three servers are running MongoDB version 3.6.11.
Recently we found both secondary servers down because their disks were full. The file WiredTigerLAS.wt had grown to over 20GB, while the whole MongoDB data folder is normally below 4GB. We removed WiredTigerLAS.wt and restarted the secondary servers, but the file was recreated and again grew until it filled the disk. The primary server has been fine; no impact there.
Can someone please advise what we should do now? If you know the reason behind the unexpected file growth, please let us know.
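For context, WiredTigerLAS.wt is WiredTiger's lookaside (cache overflow) file; it tends to grow when cache eviction cannot keep up with the workload. One way to watch for that is the counters under db.serverStatus().wiredTiger.cache. Below is a minimal sketch, not an official diagnostic: the stat names come from serverStatus() output, the sample numbers are invented, and the thresholds in the comment are rough rules of thumb, not documented limits.

```python
def cache_pressure(cache_stats):
    """Return (fill_ratio, dirty_ratio) from a db.serverStatus().wiredTiger.cache dict."""
    max_bytes = cache_stats["maximum bytes configured"]
    used = cache_stats["bytes currently in the cache"]
    dirty = cache_stats["tracked dirty bytes in the cache"]
    return used / max_bytes, dirty / max_bytes

# Invented sample values standing in for real serverStatus() output.
sample = {
    "maximum bytes configured": 1024 ** 3,          # 1 GB cache
    "bytes currently in the cache": 990 * 1024 ** 2,
    "tracked dirty bytes in the cache": 300 * 1024 ** 2,
}

fill, dirty = cache_pressure(sample)
print(f"cache fill: {fill:.0%}, dirty: {dirty:.0%}")
# As a rough rule of thumb, a sustained fill near 95%+ or a large dirty fraction
# suggests eviction is falling behind, which is when WiredTiger starts spilling
# to the lookaside file (WiredTigerLAS.wt).
```

Running this periodically on the secondaries while WiredTigerLAS.wt is growing would show whether the growth coincides with cache pressure.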
rs.conf:
rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 7,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "primary:50001",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "replica1:50001",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 0,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 0
        },
        {
            "_id" : 2,
            "host" : "replica2:50001",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 0,
            "tags" : { },
            "slaveDelay" : NumberLong(0),
            "votes" : 0
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5b52ac682b4bd7ae7913b1cf")
    }
}