  Core Server / SERVER-27142

aborting after fassert() failure

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical - P2
    • Fix Version/s: None
    • Affects Version/s: None
    • Component/s: Replication
    • Labels: None
    • Operating System: ALL

      2016-11-16T03:15:17.815+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
      2016-11-16T03:15:18.931+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] MongoDB starting : pid=3657 port=27218 dbpath=/data0/mongo-config/data/ 64-bit host=nlhypheap041.navitaire.com
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] db version v3.2.5
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] git version: 34e65e5383f7ea1726332cb175b73077ec4a1b02
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] allocator: tcmalloc
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] modules: enterprise
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] build environment:
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten]     distmod: rhel70
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten]     distarch: x86_64
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten]     target_arch: x86_64
      2016-11-16T03:15:18.941+0000 I CONTROL  [initandlisten] options: { net: { port: 27218 }, processManagement: { fork: true, pidFilePath: "/data0/mongo-config/logs/mongo.pid" }, replication: { replSet: "nl-config" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data0/mongo-config/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: "/data0/mongo-config/logs/mongo.log" } }
      2016-11-16T03:15:18.967+0000 I -        [initandlisten] Detected data files in /data0/mongo-config/data/ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
      2016-11-16T03:15:18.968+0000 W -        [initandlisten] Detected unclean shutdown - /data0/mongo-config/data/mongod.lock is not empty.
      2016-11-16T03:15:18.968+0000 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
      2016-11-16T03:15:18.968+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
      2016-11-16T03:15:19.312+0000 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
      2016-11-16T03:15:19.312+0000 I STORAGE  [initandlisten] The size storer reports that the oplog contains 20609664 records totaling to 4061406103 bytes
      2016-11-16T03:15:19.312+0000 I STORAGE  [initandlisten] Sampling from the oplog between May 19 14:47:21:1 and Oct 25 02:57:50:1 to determine where to place markers for truncation
      2016-11-16T03:15:19.312+0000 I STORAGE  [initandlisten] Taking 254 samples and assuming that each section of oplog contains approximately 809888 records totaling to 159599111 bytes
      2016-11-16T03:15:19.368+0000 I STORAGE  [initandlisten] Placing a marker at optime Jun  7 11:19:24:3
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jun 13 02:15:58:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jun 18 17:01:32:3
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jun 24 11:54:17:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jun 30 07:43:00:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jul  5 23:03:52:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jul 11 11:09:09:4
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jul 17 00:15:26:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Jul 23 21:23:32:3
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Aug  3 00:31:17:2
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Aug  8 16:23:12:4
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Aug 14 08:43:22:2
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Aug 19 10:08:03:2
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Aug 24 17:35:23:2
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Aug 29 23:49:29:3
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Sep  4 08:40:28:7
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Sep  9 19:07:06:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Sep 15 02:15:51:2
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Sep 20 17:42:11:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Sep 26 03:12:45:3
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Oct  1 10:07:27:4
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Oct  6 17:05:48:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Oct 11 19:52:32:5
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Oct 17 05:09:14:1
      2016-11-16T03:15:19.369+0000 I STORAGE  [initandlisten] Placing a marker at optime Oct 22 11:59:40:1
      2016-11-16T03:15:19.410+0000 I CONTROL  [initandlisten]
      2016-11-16T03:15:19.410+0000 I CONTROL  [initandlisten] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified.
      2016-11-16T03:15:19.410+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted,
      2016-11-16T03:15:19.410+0000 I CONTROL  [initandlisten] **          and the server listens on all available network interfaces.
      2016-11-16T03:15:19.410+0000 I CONTROL  [initandlisten]
      2016-11-16T03:15:19.420+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data0/mongo-config/data/diagnostic.data'
      2016-11-16T03:15:19.420+0000 I NETWORK  [initandlisten] waiting for connections on port 27218
      2016-11-16T03:15:19.420+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
      2016-11-16T03:15:19.440+0000 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "nl-config", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "nlhypseap041.navitaire.com:27218", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 4.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "nlhypheap041.navitaire.com:27218", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 3.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "nlhypseap040.navitaire.com:27218", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 3, host: "nlhypheap040.navitaire.com:27218", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('573dd1f90c43e512f78a0eb1') } }
      2016-11-16T03:15:19.440+0000 I REPL     [ReplicationExecutor] This node is nlhypheap041.navitaire.com:27218 in the config
      2016-11-16T03:15:19.440+0000 I REPL     [ReplicationExecutor] transition to STARTUP2
      2016-11-16T03:15:19.440+0000 I REPL     [ReplicationExecutor] Starting replication applier threads
      2016-11-16T03:15:19.440+0000 I REPL     [ReplicationExecutor] transition to RECOVERING
      2016-11-16T03:15:19.445+0000 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to nlhypheap040.navitaire.com:27218
      2016-11-16T03:15:19.446+0000 I REPL     [ReplicationExecutor] Member nlhypheap040.navitaire.com:27218 is now in state SECONDARY
      2016-11-16T03:15:19.446+0000 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to nlhypseap041.navitaire.com:27218
      2016-11-16T03:15:19.448+0000 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to nlhypseap040.navitaire.com:27218
      2016-11-16T03:15:19.448+0000 I REPL     [ReplicationExecutor] Member nlhypseap041.navitaire.com:27218 is now in state PRIMARY
      2016-11-16T03:15:19.448+0000 I NETWORK  [initandlisten] connection accepted from 10.213.26.71:45776 #1 (1 connection now open)
      2016-11-16T03:15:19.450+0000 I REPL     [ReplicationExecutor] Member nlhypseap040.navitaire.com:27218 is now in state SECONDARY
      2016-11-16T03:15:20.130+0000 I NETWORK  [initandlisten] connection accepted from 10.213.26.70:53168 #2 (2 connections now open)
      2016-11-16T03:15:20.340+0000 I NETWORK  [initandlisten] connection accepted from 10.212.26.71:35348 #3 (3 connections now open)
      2016-11-16T03:15:21.110+0000 I NETWORK  [initandlisten] connection accepted from 10.213.26.70:53169 #4 (4 connections now open)
      2016-11-16T03:15:21.239+0000 I NETWORK  [initandlisten] connection accepted from 10.212.26.70:41543 #5 (5 connections now open)
      2016-11-16T03:15:22.237+0000 I NETWORK  [initandlisten] connection accepted from 10.213.26.71:45782 #6 (6 connections now open)
      2016-11-16T03:15:24.249+0000 I NETWORK  [initandlisten] connection accepted from 10.212.26.70:41545 #7 (7 connections now open)
      2016-11-16T03:15:24.249+0000 I NETWORK  [initandlisten] connection accepted from 10.212.26.70:41546 #8 (8 connections now open)
      2016-11-16T03:15:24.349+0000 I NETWORK  [initandlisten] connection accepted from 10.212.26.71:35352 #9 (9 connections now open)
      2016-11-16T03:15:24.350+0000 I NETWORK  [initandlisten] connection accepted from 10.212.26.71:35351 #10 (10 connections now open)
      2016-11-16T03:15:25.442+0000 I REPL     [ReplicationExecutor] syncing from: nlhypheap040.navitaire.com:27218
      2016-11-16T03:15:25.445+0000 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to nlhypheap040.navitaire.com:27218
      2016-11-16T03:15:25.447+0000 I ASIO     [NetworkInterfaceASIO-BGSync-0] Successfully connected to nlhypheap040.navitaire.com:27218
      2016-11-16T03:15:25.449+0000 I REPL     [rsBackgroundSync] Starting rollback due to OplogStartMissing: our last op time fetched: (term: 47, timestamp: Oct 25 02:57:50:1). source's GTE: (term: 48, timestamp: Oct 25 02:57:50:1) hashes: (502322552398015638/711015327143776815)
      2016-11-16T03:15:25.449+0000 I -        [rsBackgroundSync] Fatal assertion 18750 UnrecoverableRollbackError: need to rollback, but in inconsistent state. minvalid: (term: 48, timestamp: Oct 25 02:57:50:3) > our last optime: (term: 47, timestamp: Oct 25 02:57:50:1)
      2016-11-16T03:15:25.449+0000 I -        [rsBackgroundSync]
      
      ***aborting after fassert() failure
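
What the tail of the log shows: after the unclean shutdown, this node's last fetched oplog entry (term: 47, Oct 25 02:57:50:1) no longer matched its sync source, which had already moved to term 48, so the node had to roll back. Rollback, however, found minValid (term: 48, Oct 25 02:57:50:3) ahead of the node's last applied optime. minValid is persisted before a batch of oplog entries is applied and advanced only once the whole batch is durable, so lastApplied < minValid means an apply batch was interrupted mid-write and the data files may be transiently inconsistent. Rolling back from that state is unrecoverable, hence fatal assertion 18750 and the abort. A minimal sketch of the guard, assuming simplified stand-ins (this is not the actual rs_rollback.cpp source; OpTime here is reduced to three fields and the ordering is illustrative):

    // Illustration only: simplified stand-in for the server's rollback
    // consistency check that raises fassert 18750.
    #include <cstdio>
    #include <cstdlib>
    #include <tuple>

    struct OpTime {
        long long secs; // seconds component of the timestamp
        int inc;        // ordinal within that second
        long long term; // election term that produced the op
    };

    // Timestamp-first ordering (seconds, then increment, then term).
    bool operator<(const OpTime& a, const OpTime& b) {
        return std::tie(a.secs, a.inc, a.term) <
               std::tie(b.secs, b.inc, b.term);
    }

    int main() {
        // Values from the log above: last applied = Oct 25 02:57:50:1
        // (term 47); minValid = Oct 25 02:57:50:3 (term 48).
        OpTime lastApplied = {1477364270, 1, 47};
        OpTime minValid    = {1477364270, 3, 48};

        // lastApplied < minValid: an apply batch never completed, so the
        // data may be inconsistent and rolling back could lose or corrupt it.
        if (lastApplied < minValid) {
            std::fprintf(stderr,
                         "Fatal assertion 18750 UnrecoverableRollbackError: "
                         "need to rollback, but in inconsistent state\n");
            std::abort(); // "aborting after fassert() failure"
        }
        return 0;
    }

A member stuck in this state generally cannot repair itself; the usual remedy is to resync it from scratch (initial sync), since its local data can no longer be trusted.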
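
As a side note, the truncation-marker sampling earlier in the log is self-consistent: 4061406103 bytes / 159599111 bytes per section ≈ 25.4 sections, so the 254 samples work out to roughly 10 per section; 20609664 records / 25.4 ≈ 809888 records per section, the figure the server prints; and ~25 sections match the 25 "Placing a marker" lines.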
      

            Assignee: Kelsey Schubert (kelsey.schubert@mongodb.com)
            Reporter: Bhaskar Mittal (bhaskar.mittal@accenture.com)
            Votes: 0
            Watchers: 4

              Created:
              Updated:
              Resolved: