Core Server / SERVER-21222

minOpTime recovery should only be written if the config server is a replica set

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: 3.2.0-rc3
    • Affects Version/s: None
    • Component/s: Sharding
    • Labels: None
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL
    • Sprint: Sharding C (11/20/15)

      The minOpTime recovery record is currently written regardless of the type of the catalog manager. This causes problems later, when the shard is being restored, because the node will try to instantiate a legacy catalog manager even though one might not be in use.
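
      A minimal, self-contained sketch of the guard the title asks for is shown below. The names (ConfigServerMode, classifyConfigString, shouldWriteMinOpTimeRecovery) are illustrative stand-ins and not the server's actual symbols; the idea is simply that the recovery document is written only when the config server connection string denotes a replica set (CSRS), and never for a legacy SCCC string such as "keylime:30005" in the log below.

      // Illustrative sketch only -- models the proposed write-side guard,
      // not the actual sharding state recovery code.
      #include <iostream>
      #include <string>

      // Hypothetical stand-in for the server's connection string type.
      enum class ConfigServerMode { SCCC, CSRS };

      // Parse just enough to tell the two modes apart: a replica-set string
      // has the form "<setName>/host1:port,host2:port,...".
      ConfigServerMode classifyConfigString(const std::string& connStr) {
          return connStr.find('/') != std::string::npos ? ConfigServerMode::CSRS
                                                        : ConfigServerMode::SCCC;
      }

      // Proposed behavior: only a CSRS config server triggers the write of
      // the minOpTimeRecovery document.
      bool shouldWriteMinOpTimeRecovery(const std::string& configConnStr) {
          return classifyConfigString(configConnStr) == ConfigServerMode::CSRS;
      }

      int main() {
          std::cout << shouldWriteMinOpTimeRecovery("keylime:30005") << "\n";               // 0: skip the write
          std::cout << shouldWriteMinOpTimeRecovery("csrs/cfg1:27019,cfg2:27019") << "\n";  // 1: write the record
      }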

      Example log output:

      2015-10-30T17:03:46.552+0000 I CONTROL  [initandlisten] MongoDB starting : pid=31310 port=27503 dbpath=/data/backups/daemon/544e5cb9e4b00ae3893aaa70/cluster_test_1/head/ 64-bit host=mms-qa-daemon-1
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten] db version v3.2.0-rc1
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten] git version: beabb900fa05c3b090fc62e887d41d9c43562b3f
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1c 10 May 2012
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten] allocator: tcmalloc
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten] modules: none
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten] build environment:
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten]     distmod: ubuntu1204
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten]     distarch: x86_64
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten]     target_arch: x86_64
      2015-10-30T17:03:46.553+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1", port: 27503 }, operationProfiling: { slowOpThresholdMs: 1000 }, setParameter: { failIndexKeyTooLong: "false", ttlMonitorEnabled: "false" }, storage: { dbPath: "/data/backups/daemon/544e5cb9e4b00ae3893aaa70/cluster_test_1/head/", engine: "mmapv1", journal: { enabled: false } }, systemLog: { destination: "file", logAppend: true, path: "/data/backups/daemon/544e5cb9e4b00ae3893aaa70/cluster_test_1/mongod.log", quiet: true } }
      2015-10-30T17:03:46.586+0000 I FTDC     [initandlisten] Starting full-time diagnostic data capture with directory '/data/backups/daemon/544e5cb9e4b00ae3893aaa70/cluster_test_1/head/diagnostic.data'
      2015-10-30T17:03:46.586+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
      2015-10-30T17:03:46.586+0000 I SHARDING [initandlisten] Sharding state recovery process found document { _id: "minOpTimeRecovery", configsvrConnectionString: "keylime:30005", shardName: "cluster_test_1", minOpTime: { ts: Timestamp 0|0, t: -1 }, minOpTimeUpdaters: 0 }
      2015-10-30T17:03:46.586+0000 I SHARDING [initandlisten] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
      2015-10-30T17:03:46.586+0000 I SHARDING [initandlisten] Updating config server connection string to: keylime:30005
      2015-10-30T17:03:46.589+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:03:46.589+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 10 more times
      2015-10-30T17:03:56.590+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:03:56.590+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 9 more times
      2015-10-30T17:04:06.591+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:04:06.591+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 8 more times
      2015-10-30T17:04:16.592+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:04:16.592+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 7 more times
      2015-10-30T17:04:26.592+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:04:26.593+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 6 more times
      2015-10-30T17:04:36.593+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:04:36.593+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 5 more times
      2015-10-30T17:04:42.371+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
      2015-10-30T17:04:42.371+0000 I FTDC     [signalProcessingThread] Stopping full-time diagnostic data capture
      2015-10-30T17:04:46.595+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:04:46.596+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 4 more times
      2015-10-30T17:04:56.596+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:04:56.596+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 3 more times
      2015-10-30T17:05:06.597+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:05:06.597+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 2 more times
      2015-10-30T17:05:16.598+0000 I NETWORK  [initandlisten] getaddrinfo("keylime") failed: Name or service not known
      2015-10-30T17:05:16.598+0000 I SHARDING [initandlisten] can't resolve DNS for [keylime]  sleeping and trying 1 more times
      2015-10-30T17:05:26.599+0000 I STORAGE  [initandlisten] exception in initAndListen: 7 unable to resolve DNS for host keylime, terminating
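
      The log above shows the restored node blocking on DNS resolution for the stale SCCC host and finally aborting startup. Purely as an illustration of that failure mode (this is an assumption about one possible mitigation, not the change that was actually made, which per the title is to stop writing the record in the first place), a recovery-side guard could simply ignore a minOpTimeRecovery document whose configsvrConnectionString is not a replica-set string:

      // Illustrative, self-contained sketch; RecoveryDoc and filterRecoveryDoc
      // are hypothetical names, not the server's classes.
      #include <iostream>
      #include <optional>
      #include <string>

      struct RecoveryDoc {
          std::string configsvrConnectionString;  // e.g. "keylime:30005"
          long long minOpTimeTerm = -1;
      };

      bool isReplicaSetString(const std::string& connStr) {
          // "<setName>/host:port,..." marks a CSRS string in this toy model.
          return connStr.find('/') != std::string::npos;
      }

      // Returns the document to act on, or nullopt when recovery should be skipped.
      std::optional<RecoveryDoc> filterRecoveryDoc(const RecoveryDoc& doc) {
          if (!isReplicaSetString(doc.configsvrConnectionString)) {
              return std::nullopt;  // stale SCCC record: nothing to recover against
          }
          return doc;
      }

      int main() {
          RecoveryDoc stale{"keylime:30005", -1};
          std::cout << (filterRecoveryDoc(stale) ? "recover" : "skip") << "\n";  // prints "skip"
      }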
      

            Assignee:
            Randolph Tan (randolph@mongodb.com)
            Reporter:
            Kaloian Manassiev (kaloian.manassiev@mongodb.com)
            Votes:
            0
            Watchers:
            2
