2018-02-27T14:52:44.088+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] MongoDB starting : pid=11583 port=27040 dbpath=/var/lib/mongo 64-bit host=vcp1-master-1.asml.tibco.aws
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] db version v3.4.5
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] git version: 520b8f3092c48d934f0cd78ab5f40fe594f96863
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] allocator: tcmalloc
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] modules: none
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] build environment:
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten]     distmod: rhel70
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten]     distarch: x86_64
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten]     target_arch: x86_64
2018-02-27T14:52:44.095+0000 I CONTROL [initandlisten] options: { command: [ "run" ], config: "/etc/mongod.conf", net: { port: 27040 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongod.pid" }, replication: { replSetName: "rs0" }, storage: { dbPath: "/var/lib/mongo", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log", quiet: true } }
2018-02-27T14:52:44.119+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3270M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten]
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten]
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten]
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten]
2018-02-27T14:52:44.148+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-02-27T14:52:44.149+0000 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2018-02-27T14:52:44.149+0000 I CONTROL [initandlisten]
2018-02-27T14:52:44.157+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongo/diagnostic.data'
2018-02-27T14:52:44.164+0000 I REPL [initandlisten] Did not find local voted for document at startup.
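
The two transparent_hugepage warnings above are MongoDB's standard notice that THP is active; the project recommends 'never' on database hosts. A minimal sketch of the immediate fix, using the same sysfs files named in the warnings (the write does not survive a reboot, so in practice it is persisted via a systemd unit or tuned profile):

    # Disable transparent huge pages right away (root privileges required);
    # these are the sysfs files from the warnings above.
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
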
2018-02-27T14:52:44.164+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2018-02-27T14:52:44.165+0000 I NETWORK [thread1] waiting for connections on port 27040
2018-02-27T14:52:47.488+0000 I NETWORK [conn2] received client metadata from 172.31.25.21:49090 conn2: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.4.5" }, os: { type: "Linux", name: "CentOS Linux release 7.3.1611 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-514.10.2.el7.x86_64" } }
2018-02-27T14:52:47.488+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to vcp1-master-0.asml.tibco.aws:27040
2018-02-27T14:52:47.491+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to vcp1-master-0.asml.tibco.aws:27040, took 3ms (1 connections now open to vcp1-master-0.asml.tibco.aws:27040)
2018-02-27T14:52:47.500+0000 I REPL [replExecDBWorker-0] Starting replication storage threads
2018-02-27T14:52:47.515+0000 I REPL [replication-0] Starting initial sync (attempt 1 of 10)
2018-02-27T14:52:47.515+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 2, protocolVersion: 1, members: [ { _id: 0, host: "vcp1-master-0.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "vcp1-master-1.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 2000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5a9570bdaab671324d743bd3') } }
2018-02-27T14:52:47.515+0000 I REPL [ReplicationExecutor] This node is vcp1-master-1.asml.tibco.aws:27040 in the config
2018-02-27T14:52:47.515+0000 I REPL [ReplicationExecutor] transition to STARTUP2
2018-02-27T14:52:47.526+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 3, protocolVersion: 1, members: [ { _id: 0, host: "vcp1-master-0.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "vcp1-master-1.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "vcp1-master-2.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 2000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5a9570bdaab671324d743bd3') } }
2018-02-27T14:52:47.526+0000 I REPL [ReplicationExecutor] This node is vcp1-master-1.asml.tibco.aws:27040 in the config
2018-02-27T14:52:47.526+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to vcp1-master-2.asml.tibco.aws:27040
2018-02-27T14:52:47.526+0000 I REPL [ReplicationExecutor] Member vcp1-master-0.asml.tibco.aws:27040 is now in state PRIMARY
2018-02-27T14:52:47.536+0000 I NETWORK [conn5] received client metadata from 172.31.25.23:42462 conn5: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.4.5" }, os: { type: "Linux", name: "CentOS Linux release 7.3.1611 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-514.10.2.el7.x86_64" } }
2018-02-27T14:52:47.536+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to vcp1-master-2.asml.tibco.aws:27040, took 10ms (1 connections now open to vcp1-master-2.asml.tibco.aws:27040)
2018-02-27T14:52:47.537+0000 I REPL [ReplicationExecutor] Member vcp1-master-2.asml.tibco.aws:27040 is now in state STARTUP2
2018-02-27T14:52:48.522+0000 I REPL [replication-1] sync source candidate: vcp1-master-0.asml.tibco.aws:27040
2018-02-27T14:52:48.522+0000 I STORAGE [replication-1] dropAllDatabasesExceptLocal 1
2018-02-27T14:52:48.522+0000 I REPL [replication-1] ******
2018-02-27T14:52:48.522+0000 I REPL [replication-1] creating replication oplog of size: 990MB...
2018-02-27T14:52:48.525+0000 I STORAGE [replication-1] Starting WiredTigerRecordStoreThread local.oplog.rs
2018-02-27T14:52:48.525+0000 I STORAGE [replication-1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2018-02-27T14:52:48.525+0000 I STORAGE [replication-1] Scanning the oplog to determine where to place markers for truncation
2018-02-27T14:52:48.545+0000 I REPL [replication-1] ******
2018-02-27T14:52:48.546+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to vcp1-master-0.asml.tibco.aws:27040
2018-02-27T14:52:48.547+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to vcp1-master-0.asml.tibco.aws:27040, took 2ms (1 connections now open to vcp1-master-0.asml.tibco.aws:27040)
2018-02-27T14:52:48.548+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to vcp1-master-0.asml.tibco.aws:27040
2018-02-27T14:52:48.549+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to vcp1-master-0.asml.tibco.aws:27040, took 1ms (2 connections now open to vcp1-master-0.asml.tibco.aws:27040)
2018-02-27T14:52:48.550+0000 I REPL [replication-0] CollectionCloner::start called, on ns:admin.system.version
2018-02-27T14:52:48.557+0000 I INDEX [InitialSyncInserters-admin.system.version0] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2018-02-27T14:52:48.557+0000 I INDEX [InitialSyncInserters-admin.system.version0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-02-27T14:52:48.560+0000 I INDEX [InitialSyncInserters-admin.system.version0] build index on: admin.system.version properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }
2018-02-27T14:52:48.560+0000 I INDEX [InitialSyncInserters-admin.system.version0] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2018-02-27T14:52:48.561+0000 I COMMAND [InitialSyncInserters-admin.system.version0] setting featureCompatibilityVersion to 3.4
2018-02-27T14:52:48.565+0000 I REPL [replication-0] No need to apply operations. (currently at { : Timestamp 1519743167000|3 })
2018-02-27T14:52:48.566+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host vcp1-master-0.asml.tibco.aws:27040 due to bad connection status; 1 connections to that host remain open
2018-02-27T14:52:48.566+0000 I REPL [replication-0] Finished fetching oplog during initial sync: CallbackCanceled: Callback canceled. Last fetched optime and hash: { ts: Timestamp 1519743167000|3, t: 1 }[-1826841502371837297]
2018-02-27T14:52:48.566+0000 I REPL [replication-0] Initial sync attempt finishing up.
2018-02-27T14:52:48.566+0000 I REPL [replication-0] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1519743167515), initialSyncAttempts: [], fetchedMissingDocs: 0, appliedOps: 0, initialSyncOplogStart: Timestamp 1519743167000|3, initialSyncOplogEnd: Timestamp 1519743167000|3, databases: { databasesCloned: 1, admin: { collections: 1, clonedCollections: 1, start: new Date(1519743168549), end: new Date(1519743168565), elapsedMillis: 16, admin.system.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 2, fetchedBatches: 1, start: new Date(1519743168550), end: new Date(1519743168565), elapsedMillis: 15 } } } }
2018-02-27T14:52:48.569+0000 I REPL [replication-0] initial sync done; took 1s.
2018-02-27T14:52:48.569+0000 I REPL [replication-0] Starting replication fetcher thread
2018-02-27T14:52:48.569+0000 I REPL [replication-0] Starting replication applier thread
2018-02-27T14:52:48.569+0000 I REPL [replication-0] Starting replication reporter thread
2018-02-27T14:52:48.569+0000 I REPL [rsSync] transition to RECOVERING
2018-02-27T14:52:48.569+0000 I REPL [rsBackgroundSync] could not find member to sync from
2018-02-27T14:52:48.569+0000 I REPL [rsSync] transition to SECONDARY
2018-02-27T14:52:53.571+0000 I REPL [ReplicationExecutor] Member vcp1-master-2.asml.tibco.aws:27040 is now in state SECONDARY
2018-02-27T14:52:58.571+0000 I REPL [rsBackgroundSync] sync source candidate: vcp1-master-0.asml.tibco.aws:27040
2018-02-27T14:52:58.572+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to vcp1-master-0.asml.tibco.aws:27040
2018-02-27T14:52:58.574+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to vcp1-master-0.asml.tibco.aws:27040, took 2ms (2 connections now open to vcp1-master-0.asml.tibco.aws:27040)
2018-02-27T14:53:02.589+0000 I NETWORK [conn6] received client metadata from 172.31.25.23:42468 conn6: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.4.5" }, os: { type: "Linux", name: "CentOS Linux release 7.3.1611 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-514.10.2.el7.x86_64" } }
2018-02-27T14:53:02.591+0000 I NETWORK [conn7] received client metadata from 172.31.25.23:42470 conn7: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.4.5" }, os: { type: "Linux", name: "CentOS Linux release 7.3.1611 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-514.10.2.el7.x86_64" } }
2018-02-27T15:03:06.188+0000 I NETWORK [conn8] received client metadata from 127.0.0.1:41890 conn8: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.5" }, os: { type: "Linux", name: "CentOS Linux release 7.3.1611 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-514.10.2.el7.x86_64" } }
2018-02-27T15:12:44.199+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2018-02-27T15:12:44.199+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2018-02-27T15:12:44.199+0000 I NETWORK [signalProcessingThread] closing listening socket: 7
2018-02-27T15:12:44.199+0000 I NETWORK [signalProcessingThread] closing listening socket: 8
2018-02-27T15:12:44.199+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27040.sock
2018-02-27T15:12:44.199+0000 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
2018-02-27T15:12:44.199+0000 I REPL [signalProcessingThread] shutting down replication subsystems
2018-02-27T15:12:44.199+0000 I REPL [signalProcessingThread] Stopping replication reporter thread
2018-02-27T15:12:44.199+0000 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to vcp1-master-0.asml.tibco.aws:27040: CallbackCanceled: Reporter no longer valid
2018-02-27T15:12:44.200+0000 I REPL [signalProcessingThread] Stopping replication fetcher thread
2018-02-27T15:12:44.200+0000 I REPL [signalProcessingThread] Stopping replication applier thread
2018-02-27T15:12:44.200+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending connection to host vcp1-master-0.asml.tibco.aws:27040 due to bad connection status; 1 connections to that host remain open
2018-02-27T15:12:44.200+0000 I REPL [rsBackgroundSync] Replication producer stopped after oplog fetcher finished returning a batch from our sync source. Abandoning this batch of oplog entries and re-evaluating our sync source.
2018-02-27T15:12:44.201+0000 I REPL [signalProcessingThread] Stopping replication storage threads
2018-02-27T15:12:44.202+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2018-02-27T15:12:44.205+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2018-02-27T15:12:44.261+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2018-02-27T15:12:44.261+0000 I CONTROL [signalProcessingThread] now exiting
2018-02-27T15:12:44.261+0000 I CONTROL [signalProcessingThread] shutting down with code:0
2018-02-27T15:12:44.261+0000 I CONTROL [initandlisten] shutting down with code:0
2018-02-27T15:12:57.124+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] MongoDB starting : pid=18991 port=27040 dbpath=/var/lib/mongo 64-bit host=vcp1-master-1.asml.tibco.aws
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] db version v3.6.2
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] git version: 489d177dbd0f0420a8ca04d39fd78d0a2c539420
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] allocator: tcmalloc
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] modules: none
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] build environment:
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten]     distmod: rhel70
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten]     distarch: x86_64
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten]     target_arch: x86_64
2018-02-27T15:12:57.131+0000 I CONTROL [initandlisten] options: { command: [ "run" ], config: "/etc/mongod.conf", net: { port: 27040 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongod.pid" }, replication: { replSetName: "rs0" }, storage: { dbPath: "/var/lib/mongo", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log", quiet: true } }
2018-02-27T15:12:57.131+0000 I - [initandlisten] Detected data files in /var/lib/mongo created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
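
Between the clean shutdown (exit code 0) and the "SERVER RESTARTED" banner above, the mongod binary was swapped from 3.4.5 to 3.6.2 against the same dbpath and /etc/mongod.conf, i.e. one member's step of a rolling upgrade. A hypothetical version of that step, assuming a systemd-managed install from the MongoDB yum repository (package pin shown is illustrative):

    sudo systemctl stop mongod                 # clean SIGTERM, as logged above
    sudo yum install -y mongodb-org-3.6.2      # hypothetical package pin
    sudo systemctl start mongod                # same /etc/mongod.conf and dbpath
    mongo --port 27040 --eval 'db.version()'   # should now report 3.6.2

Secondaries are upgraded one at a time in this fashion; the primary is stepped down (rs.stepDown()) and upgraded last.
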
2018-02-27T15:12:57.131+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=3270M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-02-27T15:12:57.294+0000 I STORAGE [initandlisten] WiredTiger message [1519744377:294686][18991:0x7f55622bcb00], txn-recover: Main recovery loop: starting at 1/174592
2018-02-27T15:12:57.382+0000 I STORAGE [initandlisten] WiredTiger message [1519744377:382581][18991:0x7f55622bcb00], txn-recover: Recovering log 1 through 2
2018-02-27T15:12:57.436+0000 I STORAGE [initandlisten] WiredTiger message [1519744377:436784][18991:0x7f55622bcb00], txn-recover: Recovering log 2 through 2
2018-02-27T15:12:57.492+0000 I STORAGE [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2018-02-27T15:12:57.492+0000 I STORAGE [initandlisten] The size storer reports that the oplog contains 120 records totaling to 11518 bytes
2018-02-27T15:12:57.492+0000 I STORAGE [initandlisten] Scanning the oplog to determine where to place markers for truncation
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten]
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten]
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **          Remote systems will be unable to connect to this server.
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **          Start the server with --bind_ip <address> to specify which IP
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten]
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten]
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten]
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2018-02-27T15:12:57.495+0000 I CONTROL [initandlisten]
2018-02-27T15:12:57.506+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongo/diagnostic.data'
2018-02-27T15:12:57.507+0000 I REPL [initandlisten] Did not find local voted for document at startup.
2018-02-27T15:12:57.507+0000 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
2018-02-27T15:12:57.507+0000 I STORAGE [initandlisten] createCollection: local.system.rollback.id with no UUID.
2018-02-27T15:12:57.522+0000 I REPL [initandlisten] Initialized the rollback ID to 1
2018-02-27T15:12:57.522+0000 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with no UUID.
2018-02-27T15:12:57.529+0000 I REPL [initandlisten] No oplog entries to apply for recovery. appliedThrough and checkpointTimestamp are both null.
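
The "bound to localhost" warning is new behavior in 3.6: mongod now binds only 127.0.0.1 by default, so after this upgrade other hosts can no longer initiate connections to this member unless the bind address is widened. A hedged sketch of the fix, using the flag named in the warning and this node's hostname from the log (the equivalent net.bindIp setting can go in /etc/mongod.conf instead):

    # Bind loopback plus the address the other replica set members use.
    mongod --config /etc/mongod.conf \
           --bind_ip localhost,vcp1-master-1.asml.tibco.aws
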
2018-02-27T15:12:57.530+0000 I NETWORK [initandlisten] waiting for connections on port 27040
2018-02-27T15:12:57.534+0000 I REPL [replexec-0] New replica set config in use: { _id: "rs0", version: 3, protocolVersion: 1, members: [ { _id: 0, host: "vcp1-master-0.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "vcp1-master-1.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "vcp1-master-2.asml.tibco.aws:27040", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 2000, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5a9570bdaab671324d743bd3') } }
2018-02-27T15:12:57.534+0000 I REPL [replexec-0] This node is vcp1-master-1.asml.tibco.aws:27040 in the config
2018-02-27T15:12:57.534+0000 I REPL [replexec-0] transition to STARTUP2 from STARTUP
2018-02-27T15:12:57.534+0000 I REPL [replexec-0] Starting replication storage threads
2018-02-27T15:12:57.535+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to vcp1-master-0.asml.tibco.aws:27040
2018-02-27T15:12:57.535+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to vcp1-master-2.asml.tibco.aws:27040
2018-02-27T15:12:57.535+0000 I REPL [replexec-0] transition to RECOVERING from STARTUP2
2018-02-27T15:12:57.535+0000 I REPL [replexec-0] Starting replication fetcher thread
2018-02-27T15:12:57.535+0000 I REPL [replexec-0] Starting replication applier thread
2018-02-27T15:12:57.535+0000 I REPL [replexec-0] Starting replication reporter thread
2018-02-27T15:12:57.536+0000 I REPL [rsSync] transition to SECONDARY from RECOVERING
2018-02-27T15:12:57.537+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to vcp1-master-0.asml.tibco.aws:27040, took 3ms (1 connections now open to vcp1-master-0.asml.tibco.aws:27040)
2018-02-27T15:12:57.537+0000 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to vcp1-master-2.asml.tibco.aws:27040, took 2ms (1 connections now open to vcp1-master-2.asml.tibco.aws:27040)
2018-02-27T15:12:57.537+0000 I REPL [replexec-1] Member vcp1-master-0.asml.tibco.aws:27040 is now in state PRIMARY
2018-02-27T15:12:57.537+0000 I REPL [replexec-0] Member vcp1-master-2.asml.tibco.aws:27040 is now in state SECONDARY
2018-02-27T15:13:03.536+0000 I REPL [rsBackgroundSync] sync source candidate: vcp1-master-2.asml.tibco.aws:27040
2018-02-27T15:13:03.536+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to vcp1-master-2.asml.tibco.aws:27040
2018-02-27T15:13:03.538+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to vcp1-master-2.asml.tibco.aws:27040, took 2ms (1 connections now open to vcp1-master-2.asml.tibco.aws:27040)
2018-02-27T15:13:03.540+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to vcp1-master-2.asml.tibco.aws:27040
2018-02-27T15:13:03.541+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to vcp1-master-2.asml.tibco.aws:27040, took 1ms (2 connections now open to vcp1-master-2.asml.tibco.aws:27040)
2018-02-27T15:47:02.859+0000 I ASIO [NetworkInterfaceASIO-RS-0] Connecting to vcp1-master-2.asml.tibco.aws:27040
2018-02-27T15:47:02.862+0000 I ASIO [NetworkInterfaceASIO-RS-0] Successfully connected to vcp1-master-2.asml.tibco.aws:27040, took 3ms (3 connections now open to vcp1-master-2.asml.tibco.aws:27040)
2018-02-27T15:48:02.860+0000 I ASIO [NetworkInterfaceASIO-RS-0] Ending idle connection to host vcp1-master-2.asml.tibco.aws:27040 because the pool meets constraints; 2 connections to that host remain open
2018-02-27T15:48:05.525+0000 I NETWORK [conn1] received client metadata from 127.0.0.1:41916 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.2" }, os: { type: "Linux", name: "CentOS Linux release 7.3.1611 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-514.10.2.el7.x86_64" } }
2018-02-27T15:54:01.272+0000 I NETWORK [conn1] end connection 127.0.0.1:41916 (0 connections now open)
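
The log ends with this member back in the set as a 3.6.2 SECONDARY, syncing from vcp1-master-2 (chained replication is allowed by this config) and still running with featureCompatibilityVersion 3.4, as set during the initial sync earlier. Assuming the remaining members are upgraded the same way, the documented final step of a 3.4-to-3.6 upgrade is to raise the FCV on the primary once the whole set is confirmed healthy:

    # Run once, on the primary, after every member is on 3.6. This enables
    # 3.6-only features and complicates downgrade, which is why it is a
    # separate, explicit step rather than automatic on upgrade.
    mongo --host vcp1-master-0.asml.tibco.aws --port 27040 \
          --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })'
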