  Core Server / SERVER-53346

The best method for resolving BUG SERVER-45119 in MongoDB 4.2.3 on RHEL7 x86_64

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical - P2
    • Affects Version/s: 4.2.3
    • Component/s: None
    • Labels:
    • Environment:
      OS: RHEL7 x86_64
      MongoDB: 4.2.3
      mode: 16 shards
    • Server Triage

      Hi!

      Recently, one of the 16 shards in my MongoDB 4.2.3 cluster (RHEL7 x86_64) started throwing error messages matching BUG SERVER-45119. According to the SERVER-45119 description, we can refresh a mongod's cached chunk version data with the command "db.adminCommand({_flushRoutingTableCacheUpdates: ns, syncFromConfig: true})" to try to resolve the "requested shard version differs from config shard version" error. However, I have five questions about the "REMEDIATION AND WORKAROUNDS" section of SERVER-45119 (see also the sketch after the questions below).

      1. Can I refresh the chunk version data of only the abnormal collection, rather than refreshing the mongod's chunk version data globally?
      2. Is the abnormal shard locked while the command "db.adminCommand({_flushRoutingTableCacheUpdates: ns, syncFromConfig: true})" refreshes the mongod's chunk version data?
      3. Are my application's reads and writes affected before the command "db.adminCommand({_flushRoutingTableCacheUpdates: ns, syncFromConfig: true})" completes?
      4. Are there other ways to resolve the problem?
      5. How can I reproduce the scenario of BUG SERVER-45119 with a MongoDB 4.2.3 sharded cluster on RHEL7 x86_64? I failed to reproduce it using the method described in SERVER-45119.
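      For reference, a minimal sketch of how I understand the quoted workaround would be invoked, run in the mongo shell against the primary of the affected shard's replica set (not against mongos). The namespace "mydb.mycollection" is a hypothetical placeholder, and the inspection of the shard's persisted cache in config.cache.collections is my assumption about where the shard keeps its cached metadata, not something stated in SERVER-45119:

      // Hypothetical namespace of the collection that reports
      // "requested shard version differs from config shard version".
      var ns = "mydb.mycollection";

      // Assumption: the shard persists its cached routing metadata in the
      // config.cache.* collections, so this shows what the shard currently believes.
      db.getSiblingDB("config").cache.collections.find({ _id: ns }).pretty();

      // The workaround quoted from SERVER-45119: force this mongod to refresh its
      // cached chunk/shard version for this one namespace from the config servers.
      db.adminCommand({ _flushRoutingTableCacheUpdates: ns, syncFromConfig: true });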

       

            Assignee:
            backlog-server-triage [HELP ONLY] Backlog - Triage Team
            Reporter:
            fengjing@vastdata.com.cn Jing Feng
            Votes:
            0
            Watchers:
            1
