Core Server / SERVER-30660

Replica set fsync blocks secondary reads significantly


Details

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.4.6
    • Component/s: Performance
    • Labels: None
    • Environment: Linux version 2.6.32-431.el6.x86_64 (gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC))
    • Operating System: ALL
    • Steps To Reproduce:

      Steps that work fine:
      1. Set up the Java client with read preference "primaryPreferred" (see the driver sketch after these steps)
      2. Check read performance (approx. 2200 rps with an average time of 0.3 ms)
      3. During the fsync operation (every minute) the response time drops to 0.5 ms

      Swapping primary and secondary via rs.stepDown() shows the same time pattern.

      Steps that behave strangely:
      1. Set up the Java client with read preference "secondaryPreferred"
      2. Check read performance (approx. 2200 rps with an average time of 0.3 ms)
      3. During the fsync operation (every minute) the response time drops to 10 ms (20 times slower)

      Swapping primary and secondary via rs.stepDown() shows the same time pattern.

      It looks like the secondary node behaves incorrectly during fsync events and blocks most read operations.
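
      For reference, here is a minimal sketch of the Java client setup used in both runs, based on the MongoDB Java driver; it is not the reporter's actual test code. The host names node1/node2 and the test database/collection names are assumptions for illustration.

      import com.mongodb.MongoClient;
      import com.mongodb.MongoClientURI;
      import com.mongodb.ReadPreference;
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.MongoDatabase;
      import org.bson.Document;

      public class ReadPreferenceCheck {
          public static void main(String[] args) {
              // Hypothetical replica set members and namespace.
              MongoClient client = new MongoClient(new MongoClientURI(
                      "mongodb://node1:27017,node2:27017/?replicaSet=arb"));
              try {
                  MongoDatabase db = client.getDatabase("test");

                  // Case 1: reads are served by the primary while it is available.
                  MongoCollection<Document> primaryPreferred = db.getCollection("docs")
                          .withReadPreference(ReadPreference.primaryPreferred());

                  // Case 2: reads are served by a secondary while one is available.
                  MongoCollection<Document> secondaryPreferred = db.getCollection("docs")
                          .withReadPreference(ReadPreference.secondaryPreferred());

                  // Time one simple query per read preference.
                  long t0 = System.nanoTime();
                  primaryPreferred.find().first();
                  System.out.printf("primaryPreferred:   %.2f ms%n", (System.nanoTime() - t0) / 1e6);

                  t0 = System.nanoTime();
                  secondaryPreferred.find().first();
                  System.out.printf("secondaryPreferred: %.2f ms%n", (System.nanoTime() - t0) / 1e6);
              } finally {
                  client.close();
              }
          }
      }

      Running the same query loop against each collection handle for a few minutes is enough to compare the steady ~0.3 ms pattern against the per-minute spikes described above.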


    Description

      I have the following MongoDB configuration (an initiation sketch follows this list):

      1. 2 data nodes in a replica set
      2. 1 arbiter for the replica set
      3. Java-based client
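
      For context, here is a minimal sketch of how a replica set of this shape (two data-bearing members plus one arbiter) could be initiated from the Java driver; the host names node1/node2/node3 are assumptions, and each mongod is assumed to have been started with the matching replSetName ("arb").

      import com.mongodb.MongoClient;
      import org.bson.Document;

      import java.util.Arrays;

      public class InitReplicaSet {
          public static void main(String[] args) {
              // Connect directly to one of the data nodes (hypothetical host name).
              MongoClient client = new MongoClient("node1", 27017);
              try {
                  Document config = new Document("_id", "arb")
                          .append("members", Arrays.asList(
                                  new Document("_id", 0).append("host", "node1:27017"),
                                  new Document("_id", 1).append("host", "node2:27017"),
                                  new Document("_id", 2).append("host", "node3:27017")
                                          .append("arbiterOnly", true)));
                  // replSetInitiate is run once against a member of the future set.
                  client.getDatabase("admin").runCommand(new Document("replSetInitiate", config));
              } finally {
                  client.close();
              }
          }
      }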

      Both data nodes are configured as follows (a checkpoint-observation sketch follows the config):
      replication:
        oplogSizeMB: 1024
        replSetName: arb

      storage:
        dbPath: /mnt/raid10/mongo
        journal:
          enabled: true
          commitIntervalMs: 500
        directoryPerDB: true
        syncPeriodSecs: 60
        engine: wiredTiger
        wiredTiger:
          engineConfig:
            directoryForIndexes: true
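
      With syncPeriodSecs: 60 the WiredTiger checkpoint runs roughly once a minute, which matches the per-minute timing of the slowdowns described in the steps above. The sketch below is one way to watch those checkpoints from the Java driver by polling serverStatus on the secondary; the host name node2 and the two-minute polling window are assumptions.

      import com.mongodb.MongoClient;
      import com.mongodb.MongoClientURI;
      import com.mongodb.client.MongoDatabase;
      import org.bson.Document;

      public class CheckpointWatch {
          public static void main(String[] args) throws InterruptedException {
              // Connect directly to the secondary (hypothetical host name).
              MongoClient client = new MongoClient(new MongoClientURI("mongodb://node2:27017"));
              try {
                  MongoDatabase admin = client.getDatabase("admin");
                  // Poll serverStatus once per second for two minutes and print the
                  // WiredTiger "transaction" section, which carries the checkpoint counters.
                  for (int i = 0; i < 120; i++) {
                      Document status = admin.runCommand(new Document("serverStatus", 1));
                      Document wiredTiger = (Document) status.get("wiredTiger");
                      if (wiredTiger != null) {
                          System.out.println(wiredTiger.get("transaction"));
                      }
                      Thread.sleep(1000);
                  }
              } finally {
                  client.close();
              }
          }
      }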

      Attachments

        Activity

          People

            Assignee: Kelsey Schubert (kelsey.schubert@mongodb.com)
            Reporter: Denis Orlov (dorlov)
            Votes: 0
            Watchers: 7

            Dates

              Created:
              Updated:
              Resolved: