[SERVER-26592] Unlimited growth of WiredTigerLAS.wt when enableMajorityReadConcern: true and mixed-version replica set Created: 12/Oct/16  Updated: 30/Mar/23  Resolved: 17/Apr/20

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 3.2.9, 3.3.11
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Jean-Marc Assignee: Backlog - Replication Team
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Containerized MongoDB on Debian Jessie (Linux), from the official Docker Hub repository


Assigned Teams:
Replication
Operating System: ALL

 Description   

We have a MongoDB replica set running 3.2.9 (the latest stable release).

We are observing that the file WiredTigerLAS.wt grows without limit:

$ ll WiredTigerLAS.wt
-rw-r--r-- 1 mongodb users 134003650560 Sep 28 11:59 WiredTigerLAS.wt
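
The growth can be tracked over time with a simple check; the path below is an assumption based on the dbPath in the configuration further down, and the interval is arbitrary:

$ # hypothetical check: print the LAS file size every 60 seconds (path assumed from dbPath)
$ watch -n 60 'ls -lh /datas/mongodb/WiredTigerLAS.wt'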

It seems very similar to the case described in SERVER-21585, but that issue should have been fixed in 3.2.9.

I tried with 3.3.x, and the same problem occurs.

Removing the file and restarting temporarily solves the issue.
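
A sketch of that temporary workaround, assuming mongod is stopped cleanly first and using the dbPath from the configuration below (the exact stop/start commands depend on how the container is run):

$ # hypothetical workaround sketch: run only with mongod stopped; path assumed from dbPath
$ systemctl stop mongod      # or stop the container, depending on the deployment
$ rm /datas/mongodb/WiredTigerLAS.wt
$ systemctl start mongod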

In our replica set, some members run 3.0.x (among which the PRIMARY is elected) and the others run 3.2.x. The issue only occurs on the 3.2.x members, which are forced to be hidden so they cannot become PRIMARY and crash.
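
To confirm which members are hidden and which version each one runs, a minimal mongo shell sketch (rs.conf() can be run from any member; db.version() has to be run against each member individually):

// list each member and whether it is hidden
rs.conf().members.forEach(function (m) { print(m.host, "hidden:", !!m.hidden); })
// run on each member to confirm 3.0.x vs 3.2.x
db.version()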

Our config:

root$ more /etc/mongodb.conf
# mongod.conf
 
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/
 
# Where and how to store data.
storage:
  dbPath: /datas/mongodb
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      # Set cacheSizeGB to the size of RAM authorized in container (60% of the available RAM on instance)
      cacheSizeGB: 1
 
processManagement:
  fork: false
 
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
 
# network interfaces
net:
  port: 27017
  http:
    enabled: false
    JSONPEnabled: false
    RESTInterfaceEnabled: false
  ssl:
    mode: disabled
 
replication:
  oplogSizeMB: 990
  replSetName: kluck
  enableMajorityReadConcern: true

Removing enableMajorityReadConcern: true from the configuration seems to resolve the issue.
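
One way to verify whether a running node actually has the option enabled is to inspect its parsed configuration from the mongo shell (the field only shows up if it was set in the config file):

// show the parsed replication options of the running mongod
db.adminCommand({ getCmdLineOpts: 1 }).parsed.replication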

 



 Comments   
Comment by Judah Schvimer [ 17/Apr/20 ]

PV0 no longer exists and readConcern: majority works very differently. Closing "Gone Away".

Comment by Ramon Fernandez Marina [ 14/Oct/16 ]

jmcollin, this appears to be an issue with having a mixed-version cluster with different protocolVersion for replication. You have two options:

  1. upgrade all nodes to 3.2
  2. set the protocolVersion to 0 on all your 3.2 nodes (see the sketch below)
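
For option 2, a minimal mongo shell sketch; protocolVersion lives in the replica set configuration, so the reconfiguration is run once against the current PRIMARY:

// switch the replica set back to protocol version 0
cfg = rs.conf()
cfg.protocolVersion = 0
rs.reconfig(cfg)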

As per SERVER-21590, majority read concern operations should be rejected by replica sets running protocol pv0. Perhaps a tighter check is needed here – if that's the case we can keep this ticket to explore that option, but I recommend you try one of the above solutions so you can move forward.

Thanks,
Ramón.

Comment by Jean-Marc [ 14/Oct/16 ]

I'm sorry, but I can't do that because we are on a production server in a secured environment and no data from our MongoDB can be sent outside. I guess it should be relatively easy to reproduce. See more details here: https://groups.google.com/forum/#!topic/mongodb-user/3D65wsYinCA
michael...@10gen.com seems to know exactly what happens:
"
One further question: do you need enableMajorityReadConcern: true for your production application? It seems unlikely that it is required if you have a replica set with some 3.0 nodes.

I suspect this issue is only happening because that option is enabled on a hidden secondary.

Michael.
"

Comment by Kelsey Schubert [ 12/Oct/16 ]

Hi jmcollin,

Thank you for opening this server ticket and providing additional details. To help us continue to investigate, would you please attach an archive of the diagnostic.data for the affected node?
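
For reference, such an archive can be produced with something like the following; the path is an assumption based on the dbPath posted in the description:

$ # bundle the diagnostic.data directory kept under the dbPath (path assumed)
$ tar czf diagnostic-data.tar.gz -C /datas/mongodb diagnostic.data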

Thanks again,
Thomas

Comment by Ramon Fernandez Marina [ 12/Oct/16 ]

jmcollin, I've moved this ticket to the SERVER project, as the WT project is for stand-alone usage of the WiredTiger storage engine. We're investigating this issue and will post updates on this ticket.

Regards,
Ramón.
