Core Server / SERVER-53338

The best method for resolving BUG SERVER-45119 in MongoDB 4.2.3 on RHEL 7 x86_64


Details

    • Type: Question
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Done
    • Affects Version/s: 4.2.3
    • Fix Version/s: None
    • Component/s: Internal Code
    • Labels: None

Description

      Hi!

      Recently, one of the 16 shards in my MongoDB sharded cluster (version 4.2.3 on RHEL 7 x86_64) started throwing error messages matching BUG SERVER-45119. According to the description of SERVER-45119, we can refresh the mongod's chunk version data with the command "db.adminCommand({_flushRoutingTableCacheUpdates: ns, syncFromConfig: true})" to try to resolve the "requested shard version differs from config shard version" issue (a minimal invocation is sketched after the questions below). I have five questions about the "REMEDIATION AND WORKAROUNDS" section of SERVER-45119.

      1. Can I refresh the chunk version data of only the abnormal collection, rather than refreshing the mongod's chunk version data globally?
      2. Will the abnormal shard be locked while the command "db.adminCommand({_flushRoutingTableCacheUpdates: ns, syncFromConfig: true})" refreshes the mongod's chunk version data?
      3. Will my application's reads and writes be affected before the command "db.adminCommand({_flushRoutingTableCacheUpdates: ns, syncFromConfig: true})" completes?
      4. Is there another way to resolve the problem?
      5. How can I reproduce the scenario of BUG SERVER-45119 with a MongoDB 4.2.3 sharded cluster on RHEL 7 x86_64? I failed to reproduce it using the method described in SERVER-45119.
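
      For reference, here is a minimal sketch of the workaround invocation quoted above, run from the mongo shell against the mongod of the affected shard (as the description above suggests, not through mongos). The namespace "mydb.mycoll" is a placeholder for the abnormal collection; everything else follows the command quoted in the description:

          // Placeholder namespace for the affected collection; replace with the real "db.collection".
          var ns = "mydb.mycoll";

          // Ask this shard's mongod to refresh its cached chunk version data for that
          // namespace from the config servers (the SERVER-45119 workaround command).
          var res = db.adminCommand({ _flushRoutingTableCacheUpdates: ns, syncFromConfig: true });

          printjson(res);   // expect { ok: 1 } if the refresh succeeded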

People

    Assignee: Eric Sedor (eric.sedor@mongodb.com)
    Reporter: Jing Feng (fengjing@vastdata.com.cn)
    Votes: 0
    Watchers: 5

Dates

    Created:
    Updated:
    Resolved: