Core Server / SERVER-55440

[rsBackgroundSync] Fatal Assertion 51121 at src/mongo/db/repl/rollback_impl.cpp 969


Details

    • Type: Question
    • Resolution: Incomplete
    • Priority: Major - P3

    Description

      MongoDB version 4.2 suddenly started throwing this error. For the complete log, please refer to the attached file.
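
      For readers triaging a similar failure, the state of each replica set member as seen by a reachable node can be summarized from the shell before digging into the log. A minimal sketch, assuming the 4.2 mongo shell; the host name and the root user are taken from the log below and are not part of the original report:

      # Print each member's name and replication state (PRIMARY/SECONDARY/ROLLBACK).
      # Point --host at any member that is still up.
      mongo --host mongo-0.mongo.hubble-system.svc.cluster.local:27017 \
            -u root -p --authenticationDatabase admin \
            --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr); })'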

       

      2021-03-23T04:42:58.899+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
      2021-03-23T04:42:58.903+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] MongoDB starting : pid=12 port=27017 dbpath=/var/lib/mongodb 64-bit host=mongo-2
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] db version v4.2.13
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] git version: 82dd40f60c55dae12426c08fd7150d79a0e28e23
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] allocator: tcmalloc
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] modules: none
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] build environment:
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] distmod: ubuntu1804
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] distarch: x86_64
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] target_arch: x86_64
      2021-03-23T04:42:58.996+0000 I CONTROL [initandlisten] options:

      ....

      ....

      , namespace: "cluster-spectro-mgmt", resourceVersion: "1752850" }, message: "Pulling image "gcr.io/spectro-images-public/release/capv-static-ip:0.0.2-20201203"", reason: "Pulling", source: { component: ... }, severity: "Normal" } }, took 198ms
      2021-03-23T04:46:02.285+0000 I SHARDING [repl-writer-worker-0] Marking collection hubbledb.registries as collection version: <unsharded>
      2021-03-23T04:46:02.376+0000 I REPL [initandlisten] Applied 120 operations in 1 batches. Last operation applied with optime: { ts: ... }
      2021-03-23T04:46:02.400+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
      2021-03-23T04:46:02.401+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
      2021-03-23T04:46:02.402+0000 I CONTROL [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
      2021-03-23T04:46:02.408+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
      2021-03-23T04:46:02.408+0000 I NETWORK [listener] Listening on 0.0.0.0
      2021-03-23T04:46:02.408+0000 I NETWORK [listener] waiting for connections on port 27017
      2021-03-23T04:46:02.913+0000 I REPL [replexec-0] New replica set config in use: { _id: "rs0", version: 437633405, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "mongo-0.mongo.hubble-system.svc.cluster.local:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongo-1.mongo.hubble-system.svc.cluster.local:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongo-2.mongo.hubble-system.svc.cluster.local:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: ... }, replicaSetId: ObjectId('60539c8d947c43ce9035f3b7') } }
      2021-03-23T04:46:02.913+0000 I REPL [replexec-0] This node is mongo-2.mongo.hubble-system.svc.cluster.local:27017 in the config
      2021-03-23T04:46:02.913+0000 I REPL [replexec-0] transition to STARTUP2 from STARTUP
      2021-03-23T04:46:02.926+0000 I REPL [replexec-0] Starting replication storage threads
      2021-03-23T04:46:02.928+0000 I CONNPOOL [Replication] Connecting to mongo-0.mongo.hubble-system.svc.cluster.local:27017
      2021-03-23T04:46:02.928+0000 I CONNPOOL [Replication] Connecting to mongo-1.mongo.hubble-system.svc.cluster.local:27017
      2021-03-23T04:46:02.931+0000 I REPL [replexec-0] transition to RECOVERING from STARTUP2
      2021-03-23T04:46:02.931+0000 I REPL [replexec-0] Starting replication fetcher thread
      2021-03-23T04:46:02.931+0000 I REPL [replexec-0] Starting replication applier thread
      2021-03-23T04:46:02.931+0000 I REPL [replexec-0] Starting replication reporter thread
      2021-03-23T04:46:02.931+0000 I REPL [rsSync-0] Starting oplog application
      2021-03-23T04:46:02.933+0000 I REPL [rsBackgroundSync] waiting for 4 pings from other members before syncing
      2021-03-23T04:46:02.934+0000 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
      2021-03-23T04:46:02.934+0000 I REPL [rsSync-0] Resetting sync source to empty, which was :27017
      2021-03-23T04:46:03.105+0000 I NETWORK [listener] connection accepted from 127.0.0.1:45896 #5 (1 connection now open)
      2021-03-23T04:46:03.105+0000 I SHARDING [conn5] Marking collection admin.system.users as collection version: <unsharded>
      2021-03-23T04:46:03.107+0000 I NETWORK [conn5] received client metadata from 127.0.0.1:45896 conn5: { driver: ... }, os: { type: ... }, platform: "Node.js v11.2.0, LE, mongodb-core: 2.1.20" }
      2021-03-23T04:46:03.133+0000 I ACCESS [conn5] Successfully authenticated as principal root on admin from client 127.0.0.1:45896
      2021-03-23T04:46:03.138+0000 I NETWORK [conn5] end connection 127.0.0.1:45896 (0 connections now open)
      2021-03-23T04:46:03.547+0000 I REPL [replexec-1] New replica set config in use: { _id: "rs0", version: 437797448, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "mongo-0.mongo.hubble-system.svc.cluster.local:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongo-1.mongo.hubble-system.svc.cluster.local:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "mongo-2.mongo.hubble-system.svc.cluster.local:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: ... }, replicaSetId: ObjectId('60539c8d947c43ce9035f3b7') } }
      2021-03-23T04:46:03.547+0000 I REPL [replexec-1] This node is mongo-2.mongo.hubble-system.svc.cluster.local:27017 in the config
      2021-03-23T04:46:03.550+0000 I REPL [replexec-3] Member mongo-0.mongo.hubble-system.svc.cluster.local:27017 is now in state PRIMARY
      2021-03-23T04:46:03.550+0000 I REPL [replexec-4] Member mongo-1.mongo.hubble-system.svc.cluster.local:27017 is now in state SECONDARY
      2021-03-23T04:46:04.934+0000 I REPL [rsBackgroundSync] sync source candidate: mongo-0.mongo.hubble-system.svc.cluster.local:27017
      2021-03-23T04:46:04.934+0000 I CONNPOOL [RS] Connecting to mongo-0.mongo.hubble-system.svc.cluster.local:27017
      2021-03-23T04:46:05.122+0000 I REPL [rsBackgroundSync] Changed sync source from empty to mongo-0.mongo.hubble-system.svc.cluster.local:27017
      2021-03-23T04:46:05.472+0000 I REPL [rsBackgroundSync] Starting rollback due to OplogStartMissing: Our last optime fetched: { ts: ... }. source's GTE: { ts: ... }
      2021-03-23T04:46:05.472+0000 I REPL [rsBackgroundSync] Replication commit point: { ts: ... }
      2021-03-23T04:46:05.472+0000 I REPL [rsBackgroundSync] Rollback using 'recoverToStableTimestamp' method.
      2021-03-23T04:46:05.472+0000 I REPL [rsBackgroundSync] Scheduling rollback (sync source: mongo-0.mongo.hubble-system.svc.cluster.local:27017)
      2021-03-23T04:46:05.472+0000 I ROLLBACK [rsBackgroundSync] transition to ROLLBACK
      2021-03-23T04:46:05.472+0000 I REPL [rsBackgroundSync] State transition ops metrics: { lastStateTransition: ... }
      2021-03-23T04:46:05.472+0000 I REPL [rsBackgroundSync] transition to ROLLBACK from SECONDARY
      2021-03-23T04:46:05.473+0000 I ROLLBACK [rsBackgroundSync] Waiting for all background operations to complete before starting rollback
      2021-03-23T04:46:05.473+0000 I ROLLBACK [rsBackgroundSync] Finished waiting for background operations to complete before rollback
      2021-03-23T04:46:05.473+0000 I ROLLBACK [rsBackgroundSync] finding common point
      2021-03-23T04:46:06.574+0000 I ROLLBACK [rsBackgroundSync] Rollback common point is { ts: ... }
      2021-03-23T04:46:06.574+0000 F ROLLBACK [rsBackgroundSync] Common point must be at least stable timestamp, common point: Timestamp(1616355150, 1), stable timestamp: Timestamp(1616356417, 96)
      2021-03-23T04:46:06.574+0000 F - [rsBackgroundSync] Fatal Assertion 51121 at src/mongo/db/repl/rollback_impl.cpp 969
      2021-03-23T04:46:06.574+0000 F - [rsBackgroundSync] \n\n***aborting after fassert() failure\n\n
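
      Fatal assertion 51121 fires when the rollback common point found on the sync source (here Timestamp(1616355150, 1)) is older than the node's own stable timestamp (Timestamp(1616356417, 96)); the 'recoverToStableTimestamp' rollback method cannot roll back past the stable timestamp, so mongod aborts rather than proceed. A commonly used way to bring such a member back is a full resync. The sketch below is an assumption based on the dbpath and replica set name in the startup log, not an official resolution of this ticket:

      # Hypothetical recovery sketch for the affected member (mongo-2).
      # Wiping dbpath forces a fresh initial sync from the primary when the
      # node rejoins rs0. In a Kubernetes StatefulSet, deleting the pod's data
      # volume and letting the pod restart achieves the same effect.
      mongod --shutdown --dbpath /var/lib/mongodb
      rm -rf /var/lib/mongodb/*
      mongod --replSet rs0 --dbpath /var/lib/mongodb   # restart with the usual options; initial sync begins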

       

      Attachments


      People

        Assignee: Dmitry Agranat (dmitry.agranat@mongodb.com)
        Reporter: Shantanu Bansal (shantanu@spectrocloud.com)
        Votes: 0
        Watchers: 5
