Core Server / SERVER-21244

Fatal Assertion 18750


Details

    • Type: Bug
    • Resolution: Done
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: 3.0.4
    • Component/s: WiredTiger
    • Labels: None
    • Operating System: ALL
    • Environment: shard with 3 member replica set, 40 cpu, 128g

    Description

      ***** SERVER RESTARTED *****
      2015-11-02T02:09:00.191+0800 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
      2015-11-02T02:09:00.207+0800 W -        [initandlisten] Detected unclean shutdown - /mongo/data/mongod.lock is not empty.
      2015-11-02T02:09:00.207+0800 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
      2015-11-02T02:09:00.207+0800 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=48G,session_max=20000,eviction=(threads_max=4),statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=5,log_size=2GB),statistics_log=(wait=0),
      2015-11-02T02:09:02.448+0800 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
      2015-11-02T02:09:03.523+0800 I CONTROL  [initandlisten] MongoDB starting : pid=190762 port=21111 dbpath=/mongo/data 64-bit host=102
      2015-11-02T02:09:03.523+0800 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
      2015-11-02T02:09:03.523+0800 I CONTROL  [initandlisten] 
      2015-11-02T02:09:03.525+0800 I CONTROL  [initandlisten] db version v3.0.4
      2015-11-02T02:09:03.525+0800 I CONTROL  [initandlisten] git version: 0481c958daeb2969800511e7475dc66986fa9ed5
      2015-11-02T02:09:03.525+0800 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 0.9.8j-fips 07 Jan 2009
      2015-11-02T02:09:03.525+0800 I CONTROL  [initandlisten] build info: Linux ip-10-156-16-176 3.0.13-0.27-ec2 #1 SMP Wed Feb 15 13:33:49 UTC 2012 (d73692b) x86_64 BOOST_LIB_VERSION=1_49
      2015-11-02T02:09:03.525+0800 I CONTROL  [initandlisten] allocator: tcmalloc
      2015-11-02T02:09:03.525+0800 I CONTROL  [initandlisten] options: { config: "/mongo/conf/mongodb_rep_1.conf", cpu: false, fastsync: false, net: { bindIp: "0.0.0.0", http: { RESTInterfaceEnabled: false }, maxIncomingConnections: 800, port: 21111, wireObjectCheck: false }, notablescan: false, operationProfiling: { mode: "off", slowOpThresholdMs: 100 }, processManagement: { fork: true, pidFilePath: "/mongo/mongo.pid" }, replication: { oplogSizeMB: 8192, replSet: "rep_1/102.site:21111,117.site:21111,118.site:21111" }, security: { authorization: "enabled", javascriptEnabled: true, keyFile: "/mongo/bin/key" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/mongo/data", directoryPerDB: true, engine: "wiredTiger", journal: { enabled: true }, mmapv1: { journal: { commitIntervalMs: 300 }, nsSize: 64, preallocDataFiles: true, quota: { enforced: false }, smallFiles: false }, syncPeriodSecs: 5.0, wiredTiger: { collectionConfig: { blockCompressor: "snappy" }, engineConfig: { cacheSizeGB: 48, journalCompressor: "snappy" }, indexConfig: { prefixCompression: true } } }, systemLog: { destination: "file", logAppend: true, path: "/mongo/logs/mongostatus.log", verbosity: 0 } }
      2015-11-02T02:09:03.932+0800 I NETWORK  [initandlisten] waiting for connections on port 21111
      2015-11-02T02:09:03.974+0800 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "rep_1", version: 1, members: [ { _id: 0, host: "102:21111", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "117.site:21111", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "118.site:21111", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
      2015-11-02T02:09:03.974+0800 I REPL     [ReplicationExecutor] This node is 102:21111 in the config
      2015-11-02T02:09:03.974+0800 I REPL     [ReplicationExecutor] transition to STARTUP2
      2015-11-02T02:09:03.974+0800 I REPL     [ReplicationExecutor] Starting replication applier threads
      2015-11-02T02:09:03.975+0800 I REPL     [ReplicationExecutor] transition to RECOVERING
      2015-11-02T02:09:04.005+0800 I REPL     [ReplicationExecutor] Member 117.site:21111 is now in state PRIMARY
      2015-11-02T02:09:04.006+0800 I REPL     [ReplicationExecutor] Member 118.site:21111 is now in state SECONDARY
      2015-11-02T02:09:04.974+0800 I NETWORK  [initandlisten] connection accepted from 1.1.1.118:49271 #1 (1 connection now open)
      2015-11-02T02:09:04.977+0800 I NETWORK  [initandlisten] connection accepted from 1.1.1.117:57571 #2 (2 connections now open)
      2015-11-02T02:09:04.999+0800 I ACCESS   [conn2] Successfully authenticated as principal __system on local
      2015-11-02T02:09:04.999+0800 I ACCESS   [conn1] Successfully authenticated as principal __system on local
      2015-11-02T02:09:06.976+0800 I REPL     [ReplicationExecutor] syncing from: 118.site:21111
      2015-11-02T02:09:06.998+0800 I REPL     [SyncSourceFeedback] replset setting syncSourceFeedback to 118.site:21111
      2015-11-02T02:09:06.999+0800 I REPL     [rsBackgroundSync] replSet our last op time fetched: Nov  2 02:05:20:d
      2015-11-02T02:09:06.999+0800 I REPL     [rsBackgroundSync] replset source's GTE: Nov  2 02:05:27:1
      2015-11-02T02:09:06.999+0800 F REPL     [rsBackgroundSync] replSet need to rollback, but in inconsistent state
      2015-11-02T02:09:06.999+0800 I REPL     [rsBackgroundSync] minvalid: 56365467:3e7 our last optime: 56365460:d
      2015-11-02T02:09:06.999+0800 I -        [rsBackgroundSync] Fatal Assertion 18750
      2015-11-02T02:09:06.999+0800 I -        [rsBackgroundSync] 
       
      ***aborting after fassert() failure
      
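For context, the failure mode behind Fatal Assertion 18750 is visible in the last log lines: replication records a "minvalid" optime before applying a batch of oplog entries, and after the unclean shutdown this node's last applied optime (56365460:d) is behind minvalid (56365467:3e7). When the sync source's oldest available entry (its GTE, 56365427:1 region) is then newer than our last optime, a rollback is required, but it cannot proceed from an inconsistent mid-batch state, so the server aborts. A minimal sketch of that decision (illustrative only; the names, structure, and `OpTime` type are assumptions, not the actual server code):

```python
from collections import namedtuple

# An oplog position: seconds since epoch plus an increment within that second.
# Tuples compare lexicographically, which matches optime ordering.
OpTime = namedtuple("OpTime", ["ts", "inc"])


def check_rollback_safe(our_last_optime, source_gte, min_valid):
    """Sketch of the decision that ends in 'Fatal Assertion 18750'.

    A rollback is needed when the sync source's oldest available entry is
    newer than our last applied entry. It is only safe if this node is in
    a consistent state, i.e. it has applied at least up to minvalid.
    """
    if source_gte <= our_last_optime:
        return "no rollback needed"
    if min_valid > our_last_optime:
        # Shutdown happened mid-batch: data is in an inconsistent state,
        # so a rollback cannot proceed -> fassert(18750).
        raise RuntimeError("need to rollback, but in inconsistent state")
    return "rollback"


# Values mirroring the log: minvalid 56365467:3e7, our last optime 56365460:d.
try:
    check_rollback_safe(
        our_last_optime=OpTime(0x56365460, 0xD),
        source_gte=OpTime(0x56365467, 0x1),
        min_valid=OpTime(0x56365467, 0x3E7),
    )
except RuntimeError as e:
    print("Fatal:", e)
```

Under this reading, the node was restarted before it finished applying a replication batch, which is why recovery from the last checkpoint still left it unable to negotiate a rollback with its sync source.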

People

    Assignee: Unassigned
    Reporter: rujun1
    Votes: 0
    Watchers: 8
