2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] MongoDB starting : pid=44144 port=29102 dbpath=/data_backup/mongodata/configdb/ 64-bit host=bxb-ppe-oas012
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] db version v3.2.5
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] git version: 34e65e5383f7ea1726332cb175b73077ec4a1b02
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] allocator: tcmalloc
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] modules: none
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] build environment:
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten]     distmod: rhel62
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten]     distarch: x86_64
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten]     target_arch: x86_64
2016-11-16T13:41:06.922-0500 I CONTROL [initandlisten] options: { net: { port: 29102 }, processManagement: { fork: true, pidFilePath: "/data_backup/mongodata/configdb/mongodb_config.pid" }, replication: { replSet: "csReplSet" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data_backup/mongodata/configdb/", wiredTiger: { engineConfig: { cacheSizeGB: 20 } } }, systemLog: { destination: "file", path: "/data_backup/mongodata/configdb/mongodb_configdb.log", quiet: true } }
2016-11-16T13:41:06.937-0500 I -        [initandlisten] Detected data files in /data_backup/mongodata/configdb/ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
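The `options: { ... }` document in the startup banner maps onto a mongod.conf along these lines — a reconstruction from the logged options, not the server's actual config file (key names follow the 3.2 YAML configuration-file format, e.g. `replication.replSetName` for the logged `replSet`):

```yaml
# Hypothetical mongod.conf reconstructed from the logged "options" document
net:
  port: 29102
processManagement:
  fork: true
  pidFilePath: /data_backup/mongodata/configdb/mongodb_config.pid
replication:
  replSetName: csReplSet
sharding:
  clusterRole: configsvr
storage:
  dbPath: /data_backup/mongodata/configdb/
  wiredTiger:
    engineConfig:
      cacheSizeGB: 20   # shows up below as cache_size=20G in wiredtiger_open
systemLog:
  destination: file
  path: /data_backup/mongodata/configdb/mongodb_configdb.log
  quiet: true
```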
2016-11-16T13:41:06.938-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=20G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-11-16T13:41:07.880-0500 I STORAGE [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2016-11-16T13:41:07.880-0500 I STORAGE [initandlisten] The size storer reports that the oplog contains 17503150 records totaling to 3121062422 bytes
2016-11-16T13:41:07.880-0500 I STORAGE [initandlisten] Sampling from the oplog between Oct 26 15:40:55:1 and Nov 16 13:39:29:1 to determine where to place markers for truncation
2016-11-16T13:41:07.880-0500 I STORAGE [initandlisten] Taking 58 samples and assuming that each section of oplog contains approximately 3010812 records totaling to 536870917 bytes
2016-11-16T13:41:08.386-0500 I STORAGE [initandlisten] Placing a marker at optime Oct 29 17:02:05:10
2016-11-16T13:41:08.386-0500 I STORAGE [initandlisten] Placing a marker at optime Nov  2 09:50:05:1
2016-11-16T13:41:08.386-0500 I STORAGE [initandlisten] Placing a marker at optime Nov  5 21:36:41:c
2016-11-16T13:41:08.386-0500 I STORAGE [initandlisten] Placing a marker at optime Nov  9 09:37:31:c
2016-11-16T13:41:08.386-0500 I STORAGE [initandlisten] Placing a marker at optime Nov 13 02:52:33:1
2016-11-16T13:41:08.410-0500 I CONTROL [initandlisten]
2016-11-16T13:41:08.410-0500 I CONTROL [initandlisten] ** WARNING: Insecure configuration, access control is not enabled and no --bind_ip has been specified.
2016-11-16T13:41:08.410-0500 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted,
2016-11-16T13:41:08.410-0500 I CONTROL [initandlisten] **          and the server listens on all available network interfaces.
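The sampling figures in the STORAGE lines are internally consistent, as a quick back-of-the-envelope check shows. The numbers below are copied from the log itself; the "roughly 10 samples per section" factor is an assumption about how the storage engine samples, not something the log states:

```python
# Sanity-check of the oplog truncation-marker sampling figures above.
records = 17_503_150         # oplog records reported by the size storer
total_bytes = 3_121_062_422  # oplog bytes reported by the size storer
section_bytes = 536_870_917  # target bytes per truncation section (from the log)

sections = total_bytes / section_bytes  # ~5.8 sections fit in the oplog
samples = int(10 * sections)            # assumption: ~10 random samples per section -> 58
records_per_section = round(records * section_bytes / total_bytes)  # ~3010812
markers = int(sections)                 # one marker per full section -> 5 markers placed

print(samples, records_per_section, markers)
```

This matches the logged "Taking 58 samples", the "approximately 3010812 records" estimate, and the five "Placing a marker" lines that follow.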
2016-11-16T13:41:08.410-0500 I CONTROL [initandlisten]
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten]
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten] ** WARNING: /proc/sys/vm/overcommit_memory is 2
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten] **          Journaling works best with it set to 0 or 1
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten]
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten]
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2016-11-16T13:41:08.411-0500 I CONTROL [initandlisten]
2016-11-16T13:41:08.415-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data_backup/mongodata/configdb/diagnostic.data'
2016-11-16T13:41:08.415-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-11-16T13:41:08.416-0500 I NETWORK [initandlisten] waiting for connections on port 29102
2016-11-16T13:41:08.547-0500 I NETWORK [ReplicationExecutor] Socket recv() errno:104 Connection reset by peer 10.174.247.47:29102
2016-11-16T13:41:08.548-0500 I NETWORK [ReplicationExecutor] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_ERROR] server [10.174.247.47:29102]
2016-11-16T13:41:08.548-0500 W NETWORK [ReplicationExecutor] couldn't check isSelf (chi-ppe-oas019:29102) network error while attempting to run command '_isSelf' on host 'chi-ppe-oas019:29102'
2016-11-16T13:41:08.548-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "csReplSet", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "bxb-ppe-oas002:29102", arbiterOnly: false,
buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "chi-ppe-oas019:29102", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "bxb-ppe-oas012:29102", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('581106c7761be5f6ab75baed') } }
2016-11-16T13:41:08.548-0500 I REPL [ReplicationExecutor] This node is bxb-ppe-oas012:29102 in the config
2016-11-16T13:41:08.548-0500 I REPL [ReplicationExecutor] transition to STARTUP2
2016-11-16T13:41:08.548-0500 I REPL [ReplicationExecutor] Starting replication applier threads
2016-11-16T13:41:08.548-0500 I REPL [ReplicationExecutor] transition to RECOVERING
2016-11-16T13:41:08.549-0500 I REPL [ReplicationExecutor] transition to SECONDARY
2016-11-16T13:41:08.549-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to bxb-ppe-oas002:29102
2016-11-16T13:41:08.550-0500 I REPL [ReplicationExecutor] Member bxb-ppe-oas002:29102 is now in state SECONDARY
2016-11-16T13:41:08.578-0500 I REPL [ReplicationExecutor] Error in heartbeat request to chi-ppe-oas019:29102; HostUnreachable: Connection refused
2016-11-16T13:41:08.607-0500 I REPL [ReplicationExecutor] Error in heartbeat request to chi-ppe-oas019:29102; HostUnreachable: Connection refused
2016-11-16T13:41:08.636-0500 I REPL [ReplicationExecutor] Error in heartbeat request to chi-ppe-oas019:29102; HostUnreachable: Connection refused
2016-11-16T13:41:13.550-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to bxb-ppe-oas002:29102
2016-11-16T13:41:13.550-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
2016-11-16T13:41:13.551-0500 I REPL [ReplicationExecutor] Error in heartbeat request to bxb-ppe-oas002:29102; HostUnreachable: Connection refused
2016-11-16T13:41:13.552-0500 I REPL [ReplicationExecutor] Error in heartbeat request to bxb-ppe-oas002:29102; HostUnreachable: Connection refused
2016-11-16T13:41:13.553-0500 I REPL [ReplicationExecutor] Error in heartbeat request to bxb-ppe-oas002:29102; HostUnreachable: Connection refused
2016-11-16T13:41:13.665-0500 I REPL [ReplicationExecutor] Error in heartbeat request to chi-ppe-oas019:29102; HostUnreachable: Connection refused
2016-11-16T13:41:13.692-0500 I REPL [ReplicationExecutor] Error in heartbeat request to chi-ppe-oas019:29102; HostUnreachable: Connection refused
2016-11-16T13:41:13.720-0500 I REPL [ReplicationExecutor] Error in heartbeat request to chi-ppe-oas019:29102; HostUnreachable: Connection refused
2016-11-16T13:41:17.140-0500 I COMMAND [conn18] terminating, shutdown command received
2016-11-16T13:41:17.140-0500 I FTDC [conn18] Shutting down full-time diagnostic data capture
2016-11-16T13:41:17.150-0500 I COMMAND [conn9] command config.$cmd command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } keyUpdates:0 writeConflicts:0 exception: interrupted at shutdown code:11600 numYields:0 reslen:68 locks:{} protocol:op_command 4198ms
2016-11-16T13:41:17.150-0500 I REPL [conn18] Stopping replication applier threads
2016-11-16T13:41:17.165-0500 I STORAGE [conn3] got request after shutdown()
2016-11-16T13:41:18.552-0500 I CONTROL [conn18] now exiting
2016-11-16T13:41:18.552-0500 I NETWORK [conn18] shutdown: going to close listening sockets...
2016-11-16T13:41:18.552-0500 I NETWORK [conn18] closing listening socket: 6
2016-11-16T13:41:18.552-0500 I NETWORK [conn18] closing listening socket: 7
2016-11-16T13:41:18.552-0500 I NETWORK [conn18] removing socket file: /tmp/mongodb-29102.sock
2016-11-16T13:41:18.552-0500 I NETWORK [conn18] shutdown: going to flush diaglog...
2016-11-16T13:41:18.552-0500 I NETWORK [conn18] shutdown: going to close sockets...
2016-11-16T13:41:18.552-0500 I STORAGE [conn18] WiredTigerKVEngine shutting down
2016-11-16T13:41:20.474-0500 I STORAGE [conn18] shutdown: removing fs lock...
2016-11-16T13:41:20.511-0500 I CONTROL [conn18] dbexit: rc: 0